URL (stringlengths 15–1.68k) | text_list (sequencelengths 1–199) | image_list (sequencelengths 1–199) | metadata (stringlengths 1.19k–3.08k)
---|---|---|---
https://whatisconvert.com/16-years-in-nanoseconds | [
"# What is 16 Years in Nanoseconds?\n\n## Convert 16 Years to Nanoseconds\n\nTo calculate 16 Years to the corresponding value in Nanoseconds, multiply the quantity in Years by 3.1536E+16 (conversion factor). In this case we should multiply 16 Years by 3.1536E+16 to get the equivalent result in Nanoseconds:\n\n16 Years x 3.1536E+16 = 5.04576E+17 Nanoseconds\n\n16 Years is equivalent to 5.04576E+17 Nanoseconds.\n\n## How to convert from Years to Nanoseconds\n\nThe conversion factor from Years to Nanoseconds is 3.1536E+16. To find out how many Years in Nanoseconds, multiply by the conversion factor or use the Time converter above. Sixteen Years is equivalent to eighteen quadrillion fourteen trillion three hundred ninety-eight billion five hundred nine million four hundred eighty-one thousand nine hundred eighty-four Nanoseconds.\n\n## Definition of Year\n\nA year (symbol: y; also abbreviated yr.) is the orbital period of the Earth moving in its orbit around the Sun. Due to the Earth's axial tilt, the course of a year sees the passing of the seasons, marked by changes in weather, the hours of daylight, and, consequently, vegetation and soil fertility. In temperate and subpolar regions around the globe, four seasons are generally recognized: spring, summer, autumn and winter. In tropical and subtropical regions several geographical sectors do not present defined seasons; but in the seasonal tropics, the annual wet and dry seasons are recognized and tracked. A calendar year is an approximation of the number of days of the Earth's orbital period as counted in a given calendar. The Gregorian, or modern, calendar, presents its calendar year to be either a common year of 365 days or a leap year of 366 days.\n\n## Definition of Nanosecond\n\nA nanosecond (symbol: ns) is an SI unit of time equal to one billionth of a second (10−9 or 1/1,000,000,000 s). One nanosecond is to one second as one second is to 31.71 years. The word nanosecond is formed by the prefix nano and the unit second. A nanosecond is equal to 1000 picoseconds or 1⁄1000 microsecond. Because the next SI unit is 1000 times larger, times of 10−8 and 10−7 seconds are typically expressed as tens or hundreds of nanoseconds. Times of this magnitude are commonly encountered in telecommunications, pulsed lasers and some areas of electronics.\n\n## Using the Years to Nanoseconds converter you can get answers to questions like the following:\n\n• How many Nanoseconds are in 16 Years?\n• 16 Years is equal to how many Nanoseconds?\n• How to convert 16 Years to Nanoseconds?\n• How many is 16 Years in Nanoseconds?\n• What is 16 Years in Nanoseconds?\n• How much is 16 Years in Nanoseconds?\n• How many ns are in 16 yr?\n• 16 yr is equal to how many ns?\n• How to convert 16 yr to ns?\n• How many is 16 yr in ns?\n• What is 16 yr in ns?\n• How much is 16 yr in ns?"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91556174,"math_prob":0.9223883,"size":2766,"snap":"2022-05-2022-21","text_gpt3_token_len":674,"char_repetition_ratio":0.16944243,"word_repetition_ratio":0.044397462,"special_character_ratio":0.25234997,"punctuation_ratio":0.11594203,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9790022,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-20T07:46:20Z\",\"WARC-Record-ID\":\"<urn:uuid:39937878-9dbf-4e8d-a5ff-124feb931c14>\",\"Content-Length\":\"30558\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f501e70a-2961-4c69-a2c1-d85c5195121c>\",\"WARC-Concurrent-To\":\"<urn:uuid:ff94e6bd-8a3f-48a8-978c-4c3a9d566867>\",\"WARC-IP-Address\":\"104.21.13.210\",\"WARC-Target-URI\":\"https://whatisconvert.com/16-years-in-nanoseconds\",\"WARC-Payload-Digest\":\"sha1:MA5Y3VWU4GEQARIVMYPLPQK4RMNCAKVL\",\"WARC-Block-Digest\":\"sha1:SPHX3SALXOWNVL5WR2AMNSVVWFYLF6W5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662531762.30_warc_CC-MAIN-20220520061824-20220520091824-00539.warc.gz\"}"} |
https://www.serviceescortphysics.nl/relativiteitstheorie/einstein-field-equation | [
"## Have the Einstein field equations changed their meaning?In his short essay, “How Great Equations Survive,” Nobel laureate physicist Steven Weinberg argues that though equations survive through scientific change, they are reinterpreted in light of the developments of theory. “The equations of General Relativity,” he argues, “have undergone a similar reinterpretation.” Weinberg devotes attention to the point that “the equations” are of the type known as “second order differential equations.\" This means \"that the equations were assumed by Einstein to involve only rates of change of the fields (first derivatives) and rates of change of rates of change (second derivatives) but not rates of higher order.” This he sees as something of a reasonable idealization. “I don’t know,” he writes, “any place where Einstein explains the motivation for this assumption.”“Today,” Weinberg continues, “General Relativity is widely (though not universally regarded as another effective field theory, useful only for distances much larger than about 10 (to the -33rd) centimeters, and particle energies much less than an energy equivalent to the mass of 10 (to the 19th) protons. No one today would (or at least no one should) take seriously any consequence of General Relativity for shorter distances or larger energies.”",
null,
""
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACH5BAEAAAAALAAAAAABAAEAAAICRAEAOw==",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.95527035,"math_prob":0.9658983,"size":1313,"snap":"2019-51-2020-05","text_gpt3_token_len":264,"char_repetition_ratio":0.11306341,"word_repetition_ratio":0.0,"special_character_ratio":0.1980198,"punctuation_ratio":0.09049774,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9839312,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T02:58:00Z\",\"WARC-Record-ID\":\"<urn:uuid:22e0ae3b-20a4-45ba-a860-d82f9199a901>\",\"Content-Length\":\"38658\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:af042cec-79f1-4489-a8ad-44e091176c73>\",\"WARC-Concurrent-To\":\"<urn:uuid:e58cd8b6-c361-456a-9917-a4759de79f84>\",\"WARC-IP-Address\":\"35.204.150.5\",\"WARC-Target-URI\":\"https://www.serviceescortphysics.nl/relativiteitstheorie/einstein-field-equation\",\"WARC-Payload-Digest\":\"sha1:77V7QZFE5DOJRAMNXU5IRFXAV4IAH3SB\",\"WARC-Block-Digest\":\"sha1:AY7UYGPRP5XHBFFM3XAN47JUSBIPL3F4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540504338.31_warc_CC-MAIN-20191208021121-20191208045121-00103.warc.gz\"}"} |
https://www.mdpi.com/journal/mathematics/special_issues/Recent_Trends_Orthogonal_Polynomials_Approximation_Theory_Applications | [
"",
null,
"",
null,
"Journal Browser\n\n# Special Issue \"Recent Trends on Orthogonal Polynomials: Approximation Theory and Applications\"\n\nA special issue of Mathematics (ISSN 2227-7390).\n\nDeadline for manuscript submissions: closed (31 December 2020).\n\n## Special Issue Editors\n\nProf. Dr. Francisco Marcellan\nE-Mail Website\nGuest Editor\nInterests: orthogonal polynomials; moment problems; distribution of zeros; integrable systems; random matrices; stochastic processes; signal theory; quadrature formulas; spectral methods for boundary value problems; fourier expansions; structured matrices; integral transforms please revise intereest of Edmundo Huertas orthogonal polynomials; moment problems; distribution of zeros; integrable systems; random matrices; stochastic processes; signal theory; quadrature formulas; spectral methods for boundary value problems; fourier expansions; structured matrices; integral transforms\nDr. Edmundo Huertas\nE-Mail Website\nGuest Editor\nDpto. de Física y Matemáticas, Universidad de Alcalá (UAH), Ctra. Madrid-Barcelona, Km. 33,600, Alcalá de Henares, 28805 Madrid, Spain\nInterests: orthogonal polynomials; moment problems; distribution of zeros; integrable systems; random matrices; stochastic processes; signal theory; quadrature formulas; spectral methods for boundary value problems; fourier expansions; structured matrices; integral transforms\n\n## Special Issue Information\n\nDear Colleagues,\n\nIn recent years, the theory of orthogonal polynomials has received a great amount of interest because of its wide role in Pure and Applied Mathematics. Orthogonal polynomials are essential tools for the solution of many problems in the spectral theory of differential and difference equations, Painlevé equations (discrete and continuous versions), numerical methods in quadrature on the real line and the unit circle, as well as cubature formulas on multidimensional domains, with applications ranging from Number Theory to Approximation Theory, Combinatorics to Group representation, integrable systems, random matrices, and linear system theory to signal processing.\n\nThe aims of the proposed Special Issue are:\n\n• To show some recent trends in the research on orthogonal polynomials, with a special emphasis on their analytic properties and approximation theory. Different examples of orthogonality (Sobolev, multiple, multivariate, matrix) will be studied, as well as the asymptotic properties of the corresponding sequences of orthogonal polynomials and the behavior of their zeros;\n• To emphasize their impact in Mathematical Physics, mainly in integrable systems and Painlevé equations (discrete and continuous cases), as they are strongly related to the coefficients of three term relation, satisfied by a sequence of orthogonal polynomials and time-depending measures supported on the real line.\n\nProf. Dr. Francisco Marcellan\nDr. Edmundo Huertas\nGuest Editors\n\nManuscript Submission Information\n\nManuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. 
For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.\n\nSubmitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.\n\nPlease visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.\n\n## Keywords\n\n• Orthogonal polynomials on the real line\n• Orthogonal polynomials on the unit circle\n• Matrix orthogonal polynomials\n• Multiple orthogonal polynomials\n• Multivariate orthogonal polynomials\n• Sobolev orthogonal polynomials\n• Integrable systems\n• Random matrices\n• Rational approximation\n• Approximation with splines\n• Wavelets\n\n## Published Papers (14 papers)\n\n# Research\n\nOpen AccessArticle\nMultiple Meixner Polynomials on a Non-Uniform Lattice\nMathematics 2020, 8(9), 1460; https://doi.org/10.3390/math8091460 - 31 Aug 2020\nViewed by 469\nAbstract\nWe consider two families of type II multiple orthogonal polynomials. Each family has orthogonality conditions with respect to a discrete vector measure. The r components of each vector measure are q-analogues of Meixner measures of the first and second kind, respectively. These [...] Read more.\nWe consider two families of type II multiple orthogonal polynomials. Each family has orthogonality conditions with respect to a discrete vector measure. The r components of each vector measure are q-analogues of Meixner measures of the first and second kind, respectively. These polynomials have lowering and raising operators, which lead to the Rodrigues formula, difference equation of order $r+1$, and explicit expressions for the coefficients of recurrence relation of order $r+1$. Some limit relations are obtained. Full article\nOpen AccessArticle\nOn Multivariate Bernstein Polynomials\nMathematics 2020, 8(9), 1397; https://doi.org/10.3390/math8091397 - 21 Aug 2020\nViewed by 598\nAbstract\nIn this paper, we first revisit and bring together as a sort of survey, properties of Bernstein polynomials of one variable. Secondly, we extend the results from one variable to several ones, namely—uniform convergence, uniform convergence of the derivatives, order of convergence, monotonicity, [...] Read more.\nIn this paper, we first revisit and bring together as a sort of survey, properties of Bernstein polynomials of one variable. Secondly, we extend the results from one variable to several ones, namely—uniform convergence, uniform convergence of the derivatives, order of convergence, monotonicity, fixed sign for the p-th derivative, and deduction of the upper and lower bounds of Bernstein polynomials from those of the corresponding functions. 
Full article\nOpen AccessArticle\nOn the Finite Orthogonality of q-Pseudo-Jacobi Polynomials\nMathematics 2020, 8(8), 1323; https://doi.org/10.3390/math8081323 - 08 Aug 2020\nCited by 1 | Viewed by 520\nAbstract\nUsing the Sturm–Liouville theory in q-difference spaces, we prove the finite orthogonality of q-Pseudo Jacobi polynomials. Their norm square values are then explicitly computed by means of the Favard theorem. Full article\nOpen AccessArticle\nNew Stability Criteria for Discrete Linear Systems Based on Orthogonal Polynomials\nMathematics 2020, 8(8), 1322; https://doi.org/10.3390/math8081322 - 08 Aug 2020\nViewed by 536\nAbstract\nA new criterion for Schur stability is derived by using basic results of the theory of orthogonal polynomials. In particular, we use the relation between orthogonal polynomials on the real line and on the unit circle known as the Szegő transformation. Some examples [...] Read more.\nA new criterion for Schur stability is derived by using basic results of the theory of orthogonal polynomials. In particular, we use the relation between orthogonal polynomials on the real line and on the unit circle known as the Szegő transformation. Some examples are presented. Full article\nShow Figures",
null,
"Figure 1\n\nOpen AccessArticle\nOn Second Order q-Difference Equations Satisfied by Al-Salam–Carlitz I-Sobolev Type Polynomials of Higher Order\nMathematics 2020, 8(8), 1300; https://doi.org/10.3390/math8081300 - 06 Aug 2020\nViewed by 672\nAbstract\nThis contribution deals with the sequence ${Un(a)(x;q,j)}n≥0$ of monic polynomials in x, orthogonal with respect to a Sobolev-type inner product related to the Al-Salam–Carlitz I orthogonal polynomials, and involving an arbitrary number j of q-derivatives on the two boundaries of the corresponding orthogonality interval, for some fixed real number $q∈(0,1)$. We provide several versions of the corresponding connection formulas, ladder operators, and several versions of the second order q-difference equations satisfied by polynomials in this sequence. As a novel contribution to the literature, we provide certain three term recurrence formula with rational coefficients satisfied by $Un(a)(x;q,j)$, which paves the way to establish an appealing generalization of the so-called J-fractions to the framework of Sobolev-type orthogonality. Full article\nShow Figures",
null,
"Figure 1\n\nOpen AccessArticle\nOn Semi-Classical Orthogonal Polynomials Associated with a Modified Sextic Freud-Type Weight\nMathematics 2020, 8(8), 1250; https://doi.org/10.3390/math8081250 - 31 Jul 2020\nCited by 1 | Viewed by 583\nAbstract\nPolynomials that are orthogonal with respect to a perturbation of the Freud weight function by some parameter, known to be modified Freudian orthogonal polynomials, are considered. In this contribution, we investigate certain properties of semi-classical modified Freud-type polynomials in which their corresponding semi-classical [...] Read more.\nPolynomials that are orthogonal with respect to a perturbation of the Freud weight function by some parameter, known to be modified Freudian orthogonal polynomials, are considered. In this contribution, we investigate certain properties of semi-classical modified Freud-type polynomials in which their corresponding semi-classical weight function is a more general deformation of the classical scaled sextic Freud weight $|x|αexp(−cx6),c>0,α>−1$. Certain characterizing properties of these polynomials such as moments, recurrence coefficients, holonomic equations that they satisfy, and certain non-linear differential-recurrence equations satisfied by the recurrence coefficients, using compatibility conditions for ladder operators for these orthogonal polynomials, are investigated. Differential-difference equations were also obtained via Shohat’s quasi-orthogonality approach and also second-order linear ODEs (with rational coefficients) satisfied by these polynomials. Modified Freudian polynomials can also be obtained via Chihara’s symmetrization process from the generalized Airy-type polynomials. The obtained linear differential equation plays an essential role in the electrostatic interpretation for the distribution of zeros of the corresponding Freudian polynomials. Full article\nOpen AccessArticle\nOn an Energy-Dependent Quantum System with Solutions in Terms of a Class of Hypergeometric Para-Orthogonal Polynomials on the Unit Circle\nMathematics 2020, 8(7), 1161; https://doi.org/10.3390/math8071161 - 15 Jul 2020\nCited by 1 | Viewed by 485\nAbstract\nWe study an energy-dependent potential related to the Rosen–Morse potential. We give in closed-form the expression of a system of eigenfunctions of the Schrödinger operator in terms of a class of functions associated to a family of hypergeometric para-orthogonal polynomials on the unit [...] Read more.\nWe study an energy-dependent potential related to the Rosen–Morse potential. We give in closed-form the expression of a system of eigenfunctions of the Schrödinger operator in terms of a class of functions associated to a family of hypergeometric para-orthogonal polynomials on the unit circle. We also present modified relations of orthogonality and an asymptotic formula. Consequently, bound state solutions can be obtained for some values of the parameters that define the model. As a particular case, we obtain the symmetric trigonometric Rosen–Morse potential for which there exists an orthogonal basis of eigenstates in a Hilbert space. By comparing the existent solutions for the symmetric trigonometric Rosen–Morse potential, an identity involving Gegenbauer polynomials is obtained. Full article\nShow Figures",
null,
"Figure 1\n\nOpen AccessArticle\nOn Differential Equations Associated with Perturbations of Orthogonal Polynomials on the Unit Circle\nMathematics 2020, 8(2), 246; https://doi.org/10.3390/math8020246 - 14 Feb 2020\nCited by 2 | Viewed by 599\nAbstract\nIn this contribution, we propose an algorithm to compute holonomic second-order differential equations satisfied by some families of orthogonal polynomials. Such algorithm is based in three properties that orthogonal polynomials satisfy: a recurrence relation, a structure formula, and a connection formula. This approach [...] Read more.\nIn this contribution, we propose an algorithm to compute holonomic second-order differential equations satisfied by some families of orthogonal polynomials. Such algorithm is based in three properties that orthogonal polynomials satisfy: a recurrence relation, a structure formula, and a connection formula. This approach is used to obtain second-order differential equations whose solutions are orthogonal polynomials associated with some spectral transformations of a measure on the unit circle, as well as orthogonal polynomials associated with coherent pairs of measures on the unit circle. Full article\nOpen AccessArticle\nEigenvalue Problem for Discrete Jacobi–Sobolev Orthogonal Polynomials\nMathematics 2020, 8(2), 182; https://doi.org/10.3390/math8020182 - 03 Feb 2020\nViewed by 566\nAbstract\nIn this paper, we consider a discrete Sobolev inner product involving the Jacobi weight with a twofold objective. On the one hand, since the orthonormal polynomials with respect to this inner product are eigenfunctions of a certain differential operator, we are interested in [...] Read more.\nIn this paper, we consider a discrete Sobolev inner product involving the Jacobi weight with a twofold objective. On the one hand, since the orthonormal polynomials with respect to this inner product are eigenfunctions of a certain differential operator, we are interested in the corresponding eigenvalues, more exactly, in their asymptotic behavior. Thus, we can determine a limit value which links this asymptotic behavior and the uniform norm of the orthonormal polynomials in a logarithmic scale. This value appears in the theory of reproducing kernel Hilbert spaces. On the other hand, we tackle a more general case than the one considered in the literature previously. Full article\nOpen AccessFeature PaperArticle\nA Characterization of Polynomial Density on Curves via Matrix Algebra\nMathematics 2019, 7(12), 1231; https://doi.org/10.3390/math7121231 - 12 Dec 2019\nViewed by 572\nAbstract\nIn this work, our aim is to obtain conditions to assure polynomial approximation in Hilbert spaces $L 2 ( μ )$ , with $μ$ a compactly supported measure in the complex plane, in terms of properties of the associated moment matrix with the measure $μ$ . To do it, in the more general context of Hermitian positive semidefinite matrices, we introduce two indexes, $γ ( M )$ and $λ ( M )$ , associated with different optimization problems concerning theses matrices. Our main result is a characterization of density of polynomials in the case of measures supported on Jordan curves with non-empty interior using the index $γ$ and other specific index related to it. Moreover, we provide a new point of view of bounded point evaluations associated with a measure in terms of the index $γ$ that will allow us to give an alternative proof of Thomson’s theorem, by using these matrix indexes. 
We point out that our techniques are based in matrix algebra tools in the framework of Hermitian positive definite matrices and in the computation of certain indexes related to some optimization problems for infinite matrices. Full article\nOpen AccessArticle\nOn Infinitely Many Rational Approximants to ζ(3)\nMathematics 2019, 7(12), 1176; https://doi.org/10.3390/math7121176 - 03 Dec 2019\nCited by 1 | Viewed by 528\nAbstract\nA set of second order holonomic difference equations was deduced from a set of simultaneous rational approximation problems. Some orthogonal forms involved in the approximation were used to compute the Casorati determinants for its linearly independent solutions. These solutions constitute the numerator and [...] Read more.\nA set of second order holonomic difference equations was deduced from a set of simultaneous rational approximation problems. Some orthogonal forms involved in the approximation were used to compute the Casorati determinants for its linearly independent solutions. These solutions constitute the numerator and denominator sequences of rational approximants to $ζ ( 3 )$ . A correspondence from the set of parameters involved in the holonomic difference equation to the set of holonomic bi-sequences formed by these numerators and denominators appears. Infinitely many rational approximants can be generated. Full article\nShow Figures",
null,
"Figure 1\n\nOpen AccessFeature PaperArticle\nRobust Stability of Hurwitz Polynomials Associated with Modified Classical Weights\nMathematics 2019, 7(9), 818; https://doi.org/10.3390/math7090818 - 05 Sep 2019\nCited by 2 | Viewed by 744\nAbstract\nIn this contribution, we consider sequences of orthogonal polynomials associated with a perturbation of some classical weights consisting of the introduction of a parameter t, and deduce some algebraic properties related to their zeros, such as their equations of motion with respect [...] Read more.\nIn this contribution, we consider sequences of orthogonal polynomials associated with a perturbation of some classical weights consisting of the introduction of a parameter t, and deduce some algebraic properties related to their zeros, such as their equations of motion with respect to t. These sequences are later used to explicitly construct families of polynomials that are stable for all values of t, i.e., robust stability on these families is guaranteed. Some illustrative examples are presented. Full article\nShow Figures",
null,
"Figure 1\n\nOpen AccessArticle\nA Classification of Symmetric (1, 1)-Coherent Pairs of Linear Functionals\nMathematics 2019, 7(2), 213; https://doi.org/10.3390/math7020213 - 25 Feb 2019\nCited by 1 | Viewed by 944\nAbstract\nIn this paper, we study a classification of symmetric $( 1 , 1 )$ -coherent pairs by using a symmetrization process. In particular, the positive-definite case is carefully described. Full article"
] | [
null,
"https://px.ads.linkedin.com/collect/",
null,
"https://www.mdpi.com/img/journals/mathematics-logo.png",
null,
"data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=",
null,
"data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=",
null,
"data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=",
null,
"data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=",
null,
"data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8638624,"math_prob":0.8485966,"size":4550,"snap":"2021-21-2021-25","text_gpt3_token_len":866,"char_repetition_ratio":0.12934448,"word_repetition_ratio":0.14146341,"special_character_ratio":0.1698901,"punctuation_ratio":0.12428571,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98665345,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-11T05:14:52Z\",\"WARC-Record-ID\":\"<urn:uuid:e0ad1285-97de-4714-9f4a-e09899eae848>\",\"Content-Length\":\"280511\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cdcb96ec-5b2e-4599-ae35-7f0d626aa03d>\",\"WARC-Concurrent-To\":\"<urn:uuid:17302ce1-12be-4622-8460-471a1a1f4bf6>\",\"WARC-IP-Address\":\"104.18.25.151\",\"WARC-Target-URI\":\"https://www.mdpi.com/journal/mathematics/special_issues/Recent_Trends_Orthogonal_Polynomials_Approximation_Theory_Applications\",\"WARC-Payload-Digest\":\"sha1:47DWUXHGOAQT4PQP3JJ27UVJMMCJLAS3\",\"WARC-Block-Digest\":\"sha1:6BONU7RQ7SUUAZ5J6TLWA4436CRE4IHB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991641.5_warc_CC-MAIN-20210511025739-20210511055739-00045.warc.gz\"}"} |
http://www.math-math.com/2015/06/infinite-series-explained-in-5-minutes.html | [
"### Infinite Series Explained in 5 Minutes\n\nInfinite Series Explained in 5 Minutes\n\nAn infinite series is just a sum of terms that never stops, for example..\n\ns=1+(1/2)+(1/4)+(1/8)+(1/16)+..\n\nThis can be written more compactly as s=sum(1/2^n) for n=0 to infinity.\n\nIf the sum of a series is a finite number the series is said to converge, otherwise it is said to diverge. The example given above is a convergent series and in fact s=2.\n\nIt's not always easy to tell if a series converges. For example, the series s=1+(1/2)+(1/3)+(1/4)+(1/5)+.. looks like it converges but in fact it's a divergent series and s is not a finite number.\n\nInfinite series are used a lot in mathematics because many functions can be written as infinite series, for example..\n\n1/(1-x)=1+x+x^2+x^3+x^4..\n\ne^x=1+x+(x^2/2!)+(x^3/3!)+(x^4/4!).. where n! is the factorial function n!=1*2*3*...*n\n\ncos(x)=1-(x^2/2!)+(x^4/4!)-(x^6/6!)..\n\nSeries can be used to prove things about a function that might otherwise be difficult to prove. For example, if you differentiate the series for e^x term by term you get the same series! So this is a neat proof of the famous fact that e^x is its own derivative, d(e^x)/dx=e^x\n\nContent written and posted by Ken Abbott [email protected]"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9078539,"math_prob":0.9914343,"size":1194,"snap":"2020-34-2020-40","text_gpt3_token_len":364,"char_repetition_ratio":0.14033614,"word_repetition_ratio":0.0,"special_character_ratio":0.31825796,"punctuation_ratio":0.13131313,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997433,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-30T06:51:35Z\",\"WARC-Record-ID\":\"<urn:uuid:67110e88-6ab1-490d-b301-6be8c73e4f8f>\",\"Content-Length\":\"29796\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c4746548-4fb8-4542-ada2-048b054442b4>\",\"WARC-Concurrent-To\":\"<urn:uuid:c9887e8a-3b3a-4580-ad6e-908e16c4a977>\",\"WARC-IP-Address\":\"172.217.7.147\",\"WARC-Target-URI\":\"http://www.math-math.com/2015/06/infinite-series-explained-in-5-minutes.html\",\"WARC-Payload-Digest\":\"sha1:MA37RKSADJJOGDJFZQSEVSEEUTVAVPEO\",\"WARC-Block-Digest\":\"sha1:N2C2B5IG36ZBP7JQ433XFLYAZEX22UBJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402118004.92_warc_CC-MAIN-20200930044533-20200930074533-00339.warc.gz\"}"} |
https://answers.everydaycalculation.com/subtract-fractions/3-5-minus-7-6 | [
"Solutions by everydaycalculation.com\n\n## Subtract 7/6 from 3/5\n\n1st number: 3/5, 2nd number: 1 1/6\n\n3/5 - 7/6 is -17/30.\n\n#### Steps for subtracting fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 5 and 6 is 30\n\nNext, find the equivalent fraction of both fractional numbers with denominator 30\n2. For the 1st fraction, since 5 × 6 = 30,\n3/5 = 3 × 6/5 × 6 = 18/30\n3. Likewise, for the 2nd fraction, since 6 × 5 = 30,\n7/6 = 7 × 5/6 × 5 = 35/30\n4. Subtract the two like fractions:\n18/30 - 35/30 = 18 - 35/30 = -17/30\n\nMathStep (Works offline)",
null,
"Download our mobile app and learn to work with fractions in your own time:"
] | [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7436362,"math_prob":0.9987261,"size":394,"snap":"2022-40-2023-06","text_gpt3_token_len":206,"char_repetition_ratio":0.20769231,"word_repetition_ratio":0.0,"special_character_ratio":0.5431472,"punctuation_ratio":0.06896552,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9985101,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T02:25:41Z\",\"WARC-Record-ID\":\"<urn:uuid:7ca750ff-3147-4826-9a5b-f708c20f4ae3>\",\"Content-Length\":\"8567\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:824664c4-9a0c-46f0-a182-5b80fdd8fc89>\",\"WARC-Concurrent-To\":\"<urn:uuid:cd5bfb43-16e6-4a2a-9fd0-88a91dd42bf9>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/subtract-fractions/3-5-minus-7-6\",\"WARC-Payload-Digest\":\"sha1:APZTROPJU7JE5U4AFNO3ZIZWAWE2GOFC\",\"WARC-Block-Digest\":\"sha1:FNI77H3THIPRCEN4QLQENRQGYYDMXMSR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500368.7_warc_CC-MAIN-20230207004322-20230207034322-00871.warc.gz\"}"} |
https://www.pdf-archive.com/2020/02/28/a-b-basset-on-the-motion-of-a-sphere-in-a-viscous-liquid/preview/page/1/ | [
"# PDF Archive\n\nEasily share your PDF documents with your contacts, on the Web and Social Networks.\n\n## A. B. Basset. On the motion of a sphere in a viscous liquid.pdf",
null,
"Page 12321\n\n#### Text preview\n\nIII. On\n\ntheMotion o f a Sphere in a Viscous\nB y A. B. B a sset ,\n\nCommunicated by Lord R a y l e ig h , D.C.L., Sec,\n\n1. T h e first problem relating to the motion of a solid body in a viscous liquid which\nwas successfully attacked was th a t of a sphere, the solution of which was given by\nProfessor S to k es in 1850, in his memoir “ On the Effect of the Internal Friction of\nFluids on Pendulum s,” ‘ Cambridge Phil. Soc. T rans./ vol. 9, in the following cases:\n(i.) when the sphere is performing small oscillations along a straig h t line ; (ii.) when\nth e sphere is constrained to move w ith uniform velocity in a straig h t line ; (iii.)\nwhen th e sphere is surrounded by an infinite liquid and constrained to rotate with\nuniform angular velocity about a fixed diam eter : it being supposed, in the last two\ncases, th a t sufficient time has elapsed for the motion to have become steady. In the\nsame memoir he also discusses the motion of a cylinder and a disc. The same class\nof problems has also been considered by M e y e r * and O b e r b e c k ,! the latter of whom\nhas obtained th e solution in the case of the steady motion of an ellipsoid, which\nmoves parallel to any one of its principal axes with uniform velocity. The torsional\noscillations about a fixed diameter, of a sphere which is either filled w ith liquid or is\nsurrounded by an infinite liquid when slipping takes place a t the surface of the sphere,\nforms th e subject of a joint memoir by H elm holtz and P io t r o w s k i . |\nVery little appears to have been effected with regard to the solution of problems\nin which a viscous liquid is set in motion in any given m anner and then left to itself.\nThe solution, when the liquid is bounded by a plane which moves parallel to itself, is\ngiven by Professor S tokes a t the end of his memoir referred to above; and the solu\ntions of certain problems of two-dimensional motion have been given by S t e a r n .§\nIn th e present paper I propose to obtain the solution for a sphere moving in a viscous\nliquid in the following cases :— (i.) when the sphere is moving in a straight line under\nth e action of a constant force, such as gravity ; (ii.) when the sphere is surrounded by\nviscous liquid and is set in rotation about a fixed diam eter and then left to itself.*§\n*\nt\nJ\n§\n\n‘ Crelle, Journ. M ath.,’ vol. 73, p. 31.\n‘ Crelle, Joui'ii. M ath.,’ vol. 81, p. 62.*\n‘ Wissenscliaftl. Abhandl.,’ vol. 1, p. 172.\n‘ Quart. Journ. M ath.,’ vol. 17, p. 90.\nG\n\n2\n\n28.5.88"
] | [
null,
"https://www.pdf-archive.com/2020/02/28/a-b-basset-on-the-motion-of-a-sphere-in-a-viscous-liquid/preview-a-b-basset-on-the-motion-of-a-sphere-in-a-viscous-liquid-1.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92606896,"math_prob":0.9103289,"size":2603,"snap":"2020-24-2020-29","text_gpt3_token_len":716,"char_repetition_ratio":0.122739516,"word_repetition_ratio":0.026465029,"special_character_ratio":0.26507875,"punctuation_ratio":0.14473684,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9727157,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-03T16:37:57Z\",\"WARC-Record-ID\":\"<urn:uuid:91fd37f8-c63a-4c44-94d4-ca646d1f77b0>\",\"Content-Length\":\"18450\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a47629b-f675-4133-8fb9-6f74e2ea6271>\",\"WARC-Concurrent-To\":\"<urn:uuid:b3cbc151-fa56-4d36-bc4c-21ccac2c789a>\",\"WARC-IP-Address\":\"51.159.91.99\",\"WARC-Target-URI\":\"https://www.pdf-archive.com/2020/02/28/a-b-basset-on-the-motion-of-a-sphere-in-a-viscous-liquid/preview/page/1/\",\"WARC-Payload-Digest\":\"sha1:GUX3722ALJK2ZBK6MCFX27A6G2SIXH7G\",\"WARC-Block-Digest\":\"sha1:4Z7Q36SYKQKEOKBWMZT5BAUBPKLHS4C4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655882634.5_warc_CC-MAIN-20200703153451-20200703183451-00547.warc.gz\"}"} |
https://gmplib.org/list-archives/gmp-devel/2011-December/002082.html | [
"# Binomial improvements\n\nbodrato at mail.dm.unipi.it bodrato at mail.dm.unipi.it\nSun Dec 11 11:24:52 CET 2011\n\n```Ciao!\n\nTorbjorn Granlund <tg at gmplib.org> writes:\n> (1) Now the number of allowable factors that fit into a limb is tabled.\n> This table takes into account that the mulN functions now shift out\n> twos.\n\nThere is one strength in the code I wrote for binomial: code recycling.\n\nI think that this kind of optimisation is very interesting, and it should\nbe applied also to the basecase factorial.\n\nFor example: my basecase code to compute the odd component of the\nfactorial does not \"shift out twos\", it uses the following trick.\n\nLet n be an integer. And let oddfac(n)= n! >> (n-popcount(n)) be the odd\nfactor of the factorial of n. Then oddfac(n)=oddfac(floor(n/2))*(n-n%2)!!\n.\n\nI.e. we can compute the product of odd numbers from 3 to n, times the odd\nnumbers from 3 to n>>1, times... till we reach a tabled oddfac().\n\nMaybe this is a good trick, maybe it is not, but my proposal is: let's\nstart optimising the factorial, then we will recycle the code for the\nbinomial.\n\nBoth my factorial and my binomial code use a function (you can find it in\nthe current fac_ui.c in the repo) taking a vector of limbs, and obtaining\nthe product of them all: it is currently called mpz_prodlimbs(res, *vec,\nlen). Again, is it good or is it not? Please look at it and comment. If it\nis good, let's use it for the binomial too. If it is not, let's change it\nalso for the factorial...\n\n> (2) Less scratch space usage, for this needs the latest repo bdiv\n> updates (allowing dividend/quotient operand overlap).\n\nThis is a nice improvement.\n\n> I suspect that one remaining optimisation for the smallest operands is\n> to get rid of the final lshift. By being cleverer with 2 removal from\n> the dividend, that might be possible.\n\nIt is! Also the factorial needs this optimisation.\n\n> Forgot to point to the updated code: See http://gmplib.org/devel/ under\n> New binomial code.\n\nI can not find it...\n\nRegards,\nMarco\n\n--\nhttp://bodrato.it/software/combinatorics.html\n\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85000634,"math_prob":0.8321095,"size":2084,"snap":"2021-43-2021-49","text_gpt3_token_len":557,"char_repetition_ratio":0.10625,"word_repetition_ratio":0.0056338026,"special_character_ratio":0.24952015,"punctuation_ratio":0.1611479,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97588116,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-04T23:36:41Z\",\"WARC-Record-ID\":\"<urn:uuid:ffe7d06e-e882-4528-87d8-2a5fb51a8024>\",\"Content-Length\":\"4827\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4eab5a10-e3a4-4bb4-a564-f3ba035e87a6>\",\"WARC-Concurrent-To\":\"<urn:uuid:fcef2d59-7557-48d5-af9f-4cc7b6eade6e>\",\"WARC-IP-Address\":\"130.242.124.102\",\"WARC-Target-URI\":\"https://gmplib.org/list-archives/gmp-devel/2011-December/002082.html\",\"WARC-Payload-Digest\":\"sha1:23SYWR65AAS56WMSY5OPZ2HK4TNAUBQN\",\"WARC-Block-Digest\":\"sha1:YUYQ42C3YITDGX6SVPVEFEVU7CZZPUDZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363125.46_warc_CC-MAIN-20211204215252-20211205005252-00627.warc.gz\"}"} |
https://electronics.stackexchange.com/questions/417534/how-does-an-op-amp-integrator-work | [
"# How does an op amp integrator work?\n\nI know there are at least two questions related to this on stackoverflow but neither really answer my question, and in any case, both questions got downvoted. What I am after is an operational understanding of how an op-amp integrator works. I know how a simple RC circuit can integrate, what I don't understand is how the feedback loop in an op-amp configuration helps. I understand how feedback works in a noninverting amplifier. I took the figure below from www.electronics-tutorials.ws. This web site has an explanation but I don't follow it. My understanding so far is this:\n\n1. Apply a positive voltage to input vin. Current flows through Rin resulting initially in a non-zero voltage at X (Correct?).\n\n2. Due to the high impedance of the op-amp at X, we can assume that all the current then flows to the capacitor (initial discharged).\n\n3. The capacitor starts to charge resulting in a voltage across the capacitor.\n\n4. The difference in voltage at the two op-amp inputs (the positive input is at zero, hence the difference is negative) resulting in the output, vout, going negative (we assume that vout was zero initially).\n\nMy question is what happens next? How does the feedback act to bring the difference between the two inputs back to zero? Or have I got this wrong?\n\nI am very familiar with the proofs for showing that the configuration will integrate but they don't give any real intuition and many videos, wikpedia, and books but almost all regurgitate the proof without giving much insight. I'm after an intuitive understanding, not a mathematical proof.",
null,
"Out of interest I also redrew the op-amp circuit next to the RC integrator shown below which gives the suggestion that the op-amp is amplifying the small vollage across C (assuming high R1) while having high impedance from the resistor/capacitor node. Not sure if that is a legitimate way to look at it.",
null,
"• Put in a square wave, the opamp strives to hold the (-) pin very close to zero volts, which imposes a constant voltage across the input resistor. This constant current has to go somewhere, and the only path is into the capacitor. Give constant current into a capacitor, you get a perfect RAMP output, which is the integral. Jan 18, 2019 at 3:04\n• To me - this small comment (if compared with all the answers) is the best explanation of the circuits behaviour in the time domain.\n– LvW\nJan 18, 2019 at 14:57\n• But it doesn't explain why the op-amp strives to hold the (-) pin close to zero. An op-amp on its own won't do that, it's the combination with the feedback that makes that happen. This is the bit I wasn't sure about. Jan 18, 2019 at 18:47\n• The RC integrator is not a real integrator. It can be treated as one in a limited sense. But consider a charged capacitor and an input voltage of 0 V. According to how integrations work, the integral should not change. However, in this case the capacitor discharges => Your integral value goes to 0. This can also be seen in the Laplace transform of the two circuits: An integrator: H(s) = constant * 1/s. Your RC-circuit (essentially a lowpass): H(s) = constant * 1 / (1+RC * s). For high frequencies (!) 1 + RC * s can be assumed as the same as RC* s. In this case it is acting as an integrator\n– GNA\nMay 27, 2020 at 6:52\n\nThis may help:\n\n• Remember that when current flows into the RC junction of your op-amp that the voltage at that point will tend to rise.\n• If the inverting input voltage rises the slightest bit above the non-inverting input voltage then the op-amp output will start to swing negative.\n• The output swinging negative will, through the capacitor1, tend to pull the inverting input down towards zero again where it stabilise (for the moment).\n\nThe result is that feeding current into the RC node causes the op-amp output to go negative.\n\nOut of interest I also redrew the op-amp circuit next to the RC integrator shown below which gives the suggestion that the op-amp is amplifying the small voltage across C (assuming high R1) while having high impedance from the resistor/capacitor node. Not sure if that is a legitimate way to look at it.\n\nThat's correct. It might be better than you think. The simple RC circuit has the advantage that it's non-inverting but the disadvantage that it's non-linear. With a constant input voltage the output will be an exponential charge curve.\n\nPutting the op-amp in as you have shown still allows the capacitor to charge up but maintains the top terminal at virtual ground. The advantage is a linear change in output. The disadvantage is that there is a minus sign on the integral obtained.\n\n1 You can think of a capacitor as holding the voltage across it as a constant in the short term. That means that if the voltage on one side is changed the voltage on the other side will try to change by the same amount.\n\nOne question. what is the orientation of the capacitor in terms of conventional current? i.e. if vin goes positive the capacitor is I assume negative on its right-hand side (nearest vout). Now vout goes negative and therefore reduces the voltage across the capacitor until the potential at X is zero?\n\nI think your understanding is correct.\n\nIf Vin goes positive then current flows into the X node charging up C. (Remember the op-amp's voltage hasn't changed yet.) This tends to increase the voltage on the inverting input and that causes the output voltage to decrease. 
This draws some charge from the right hand side of C. Now the inverting input is pulled back down to zero volts but there is charge on C so there is a voltage across it. Since the conventional current flowed to the right there is a negative voltage remaining on the capacitor.\n\n• I think this the kind of answer I was looking for. One question. what is the orientation of the capacitor in terms of conventional current? ie if vin goes positive the capacitor is I assume negative on its right-hand side (nearest vout). Now vout goes negative and therefore reduces the voltage across the capacitor until the potential at X is zero? Jan 17, 2019 at 22:44\n• See the update. Jan 17, 2019 at 23:15\n\nThe op-amp is going to try its best to keep the voltage between it's plus and minus input the same. In an ideal op-amp, no current flows into the inputs, so the only way that it can do that is by changing its output voltage.\n\nIn the schematic below, $$\\v_+ = 0\\mathrm{V}\\$$. That means that the op-amp will try to hold $$\\v_-\\$$ at zero, also.\n\nWhatever voltage is generated by V2 gets turned into a current by R1. Because $$\\v_-\\$$ is being held at $$\\0\\mathrm{V}\\$$, that same current has to flow in C1. And because $$\\v_-\\$$ is being held at $$\\0\\mathrm{V}\\$$, the op-amp has to drive the output voltage such that the current in C1 matches the current in R1.\n\nSo if $$\\v_2\\$$ is constant, then the current into the node around the negative input is constant, which means that the current out of that node from the cap must be constant -- and that can only happen if the output voltage is falling at a constant rate. The end result is that the op-amp integrates the input voltage into the output voltage.\n\nMore complicated voltages at $$\\v_2\\$$ cause more complicated behavior, but the op-amp is always going to be trying to drive $$\\v_-\\$$ to $$\\0\\mathrm{V}\\$$. It can only do that by satisfying $$\\ \\frac{d}{dt} C_1 v_{out} + \\frac{v_2}{R_1} = 0 \\$$. If you solve that differential equation, it says that $$v_{out} = -\\frac{1}{R_1 C_1} \\int v_2 dt$$\n\nHTH",
null,
"simulate this circuit – Schematic created using CircuitLab\n\n• In your comment \"The op-amp is going to try its best to keep the voltage between it's plus and minus input the same.\", strictly speaking, it's the op-amp combined with the next feedback that does this. This is the bit I was stuck on, how the feedback coupled with the op-amp manages to maintain the difference close to zero. Jan 18, 2019 at 18:50\n• Well, I'm glad you worked through it! These are difficult concepts and hard to simplify. Jan 18, 2019 at 18:57\n\nRhody - have you heard about the MILLER effect? Well - the shown circuit is called \"MILLER integrator\" because the MILLER effect is exploited. Remember: This effect reduces the feedback impedance between an amplifier output (for example: collector) and the inverting input (example: base node of the transistor). And the factor of increase is the gain.\n\nHere, we have the same principle. Hence, there will be a very small capacitive impedance (that means: A very large capacitor) between input and output of the opamp. And the factor of increase is the open-loop gain Aol of the opamp.\n\nHence, you can make a comparison with a simple RC circuit. However, because of the very large capacitor the cut-off frequency is very low (nearly DC).\n\nFrequency domain: The transfer function between the opamps inverting node and the signal input is\n\nHo(s)=1/(1+sCo * R) with Co=Aol * C (MILLER effect).\n\nBecause of the very large value Aol, we can neglect the \"1\" in the denominator and arrive at\n\nHo(s)=1/(sC * Aol *R)\n\nWe are lucky and can use the low resistive opamp output (and multiply the function Ho(s) with the gain -Aol) and arrive at the final result (opamp output-to-signal input):\n\nH(s)=Ho(s) * (-Aol) = - 1/sR*C (Transfer function of an ideal integrator)\n\nThe inputs of the opamp don't take input current and the opamp will keep its input voltage equal as long as it is wired for negative feedback.\n\nSo effectively, the current that the input sinks against the 0V it sees goes straight into charging the capacitor. Usually when you charge a capacitor through a resistor, the charge building up on the capacitor reduces the voltage across the resistor and thus also the charging current, leading to an exponential decay of the charge current.\n\nHere, however, the opamp output actively adjusts the voltage at the other side of the capacitor so that the resistor never gets to see the difference it makes, thus keeping the current through resistor (and into capacitor) independent from the charge across the capacitor.\n\nIt's like Útgarða-Loki handing Þorr the mouth of the ocean as a drinking horn and Þorr does not actually find himself able to drain it, not noticing that he causes the tides with his attempt.\n\nThe Passive cap. integrator current decays with voltage as it approaches the input.\n\nThe active cap. integrator saturates after some time if Vin ≠ 0 because the output voltage drives current into Vin- to maintain 0V diff. until the output saturates at the supply rail.\n\nSo input offset is critical and you need an analog switch to discharge and initialize to 0V output.\n\n## anecdotal\n\nI remember knowing nothing about this in 1st yr Eng and my once famous brother-in-law medical Dr of Anesthesia, Intensive Care and open heart surgery, gave me a tour around the hospital and said he needed an integrator to measure O2 content in blood for the brain, after a heart attack victim stops, to know the best treatment ( such as hypothermia )to give when no response to defib. 
and meds with the likelihood of success. I had no idea ! and was embarrassed to not know! (circa '75) Don't be. Just research it.\n\nIt is a great challenge to find a new explanation for such a legendary circuit because everyone knows what an op-amp integrator is. But to know a specific circuit solution does not mean that you really understand it. To (deeply) understand a circuit means something more - to see the general idea behind it that links many specific circuit implementations (op-amp, BJT, FET, tube…) You can see it even in life in the form of many non-electrical applications...\n\n1. Op-amp inverting integrator. The idea behind this circuit solution is extremely simple and intuitive. It may sound paradoxical... but to see it you only need to remove the symbol of the ground from the circuit diagram. As you can see in Fig. 1, I have only labeled the place of the virtual ground (1) and the place of the real ground (2)... and I have no longer used these names. You understand that there is no virtual ground because there is no real ground. But if you still miss the virtual ground, then you can talk about a virtual short between node 1 and 2.",
null,
"Fig. 1. Op-amp inverting integrator (only the negative power supply V- is explicitly shown)\n\nThe current path is crucial here to see the great idea. Since the input voltage is positive, the op-amp output voltage is negative and the current enters the op-amp output... then passes through the negative power supply V- and returns to the input source. The positive source V+ is not essential in this case; so it is only hinted.\n\n2. Electric equivalent circuit. The main question to be answered is, \"What does the op-amp do here?\" You know that it keeps almost zero voltage between its inputs so its output voltage is always equal to the voltage drop across the capacitor. So the op-amp output serves as a following voltage source. Then let's replace the op-amp with a variable voltage source VOA to simplify this electronic circuit - Fig. 2. By the way, I conducted such a real experiment in 2001 with my students in the laboratory when we used a capacitor with high capacity and zero indicator connected between 1 and 2.",
null,
"Fig. 2. Electric equivalent circuit\n\nThis simple trick is enough to show the great idea behind the circuit. The voltage source VOA is connected in series to the capacitor C so that its voltage compensates the voltage drop VC across the capacitor and the voltage between the two nodes 1 and 2 is (almost) zero. So the conclusion is:\n\nThe op-amp in the circuit of the inverting amplifier compensates the voltage drop VC across the capacitor by adding equivalent voltage VOA = VC in series.\n\nSo, the key point of this explanation is adding, not amplifying. To think of the amplifier in a negative feedback circuit not as of an amplifier but rather as of something like integrator is a powerful technique for intuitive understanding and explaining such op-amp circuits. Indeed, here it seems a little strange (integrator inside integrator)... but works...\n\nHow simple is this \"magic recipe\"... You want to make the imperfect RC integrator perfect? Then connect a small variable \"battery\" with voltage VC in series to the capacitor and (the next brilliant idea) take its inverted \"copy\" voltage as an output. The load will consume current from this \"helping\" source... not from the input source (i.e., this is a buffered output).\n\nThe power of this intuitive explanation is that we can explain this sophisticated op-amp circuit to a \"six year old\" (Einstein)... and that will mean we understand it ourselves...\n\n3. Virtual short. The total voltage across the network of two elements in series - a capacitor C and compensating voltage source (VOUT), is always zero. So this network behaves as a \"piece of wire\" that shorts the points 1 and 2 - Fig. 3. This is what the input source \"sees\" when \"looking\" through the resistor R at the op-amp input.",
null,
"Fig. 3. Equivalent circuit of the output part on the right\n\nFiguratively speaking, the op-amp output acts as a \"negative capacitor\". While the \"positive capacitor\" C subtracts its voltage VC from the input voltage source, the op-amp \"negative capacitor\" adds its voltage VOUT to the input voltage."
] | [
null,
"https://i.stack.imgur.com/OavFL.gif",
null,
"https://i.stack.imgur.com/6hGZ6.png",
null,
"https://i.stack.imgur.com/fyXiP.png",
null,
"https://i.stack.imgur.com/nxvJ3.jpg",
null,
"https://i.stack.imgur.com/nLnML.jpg",
null,
"https://i.stack.imgur.com/E1YlB.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.905164,"math_prob":0.9574024,"size":4111,"snap":"2022-27-2022-33","text_gpt3_token_len":901,"char_repetition_ratio":0.1448746,"word_repetition_ratio":0.017045455,"special_character_ratio":0.21770859,"punctuation_ratio":0.11439114,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9918239,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,7,null,7,null,7,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-18T11:19:10Z\",\"WARC-Record-ID\":\"<urn:uuid:c5c9bdc7-8fbe-4c95-8618-9ffdf8505b32>\",\"Content-Length\":\"290434\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:30ff5f25-f570-48e6-ad74-1e2538f293d3>\",\"WARC-Concurrent-To\":\"<urn:uuid:436f8c5f-253b-4a65-a596-ead2be662f80>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://electronics.stackexchange.com/questions/417534/how-does-an-op-amp-integrator-work\",\"WARC-Payload-Digest\":\"sha1:NMSHZ5VFEFGXJSXYQ57TWQTMM3G7DD53\",\"WARC-Block-Digest\":\"sha1:TE6AFQE6PZYFZC5HCDU6DSA7JW2U25N2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573193.35_warc_CC-MAIN-20220818094131-20220818124131-00340.warc.gz\"}"} |
https://mathoverflow.net/questions/218994/svd-vs-fourier-analysis-for-data | [
"# SVD vs Fourier analysis for data.\n\nFourier analysis is useful for analysis in the frequency domain. SVD on the other hand is useful for analysis of data, and expressing noise in the data. I have a problem that needs extensive data analysis, it is in the area medicine. This could be generalized to other problems.\n\nThe problem is that of gene expression, in case of long term gene mutation. Using Fourier analysis we can get a time series analysis of the genes(and thereby get noisy gene expression), and as time progresses, the changes in a particular organ. On the other hand, we could use Singular Value Decomposition, and the noisy gene expresses itself. This, is just an outline of the problem. Both SVD, and Fourier lend themselves to solve the problem that of expressing noisy genes. Is there any comparison of the two techniques, why one would be preferred over another qualitatively, or references that one can use for the problem of gene expression, thanks in anticipation.\n\n• Fourier analysis approximates a continuous function using trigonometric polynomials; SVD approximates a matrix using eigenvectors of its left (or right) singular matrix. These are a priori completely different problems, and your question doesn't make much sense unless you can indicate why you think these two problems are related. Sep 23, 2015 at 2:35\n• You might get more feedback on sites like scicomp.stackexchange.com, stats.stackexchange.com, or dsp.stackexchange.com. But if you ask your question in its current form, they might also close it, because it is not really clear what you want to do exactly. For my answer, I just guessed that stochastic processes might be your context, because both Fourier analysis and \"optimal\" decompositions make sense in that context. Sep 23, 2015 at 20:00\n• @PaulSiegel I guess the OP has in mind that the discrete Fourier transform transforms discrete \"spatial\" data to discrete \"frequency\" data. I would say that the SVD decomposes every matrix $A$ as $U^TDV$ with a diagonal $D$ and orthonormal $U$ and $V$ while the diagonal Fourier transform $F$ is also orthonormal and gives $C = F^HDF$ with diagonal $D$ for circulant matrices $C$.\n– Dirk\nSep 24, 2015 at 7:12"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90986097,"math_prob":0.9203845,"size":947,"snap":"2023-40-2023-50","text_gpt3_token_len":190,"char_repetition_ratio":0.1516437,"word_repetition_ratio":0.0,"special_character_ratio":0.19429778,"punctuation_ratio":0.11956522,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9925727,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T21:49:59Z\",\"WARC-Record-ID\":\"<urn:uuid:e5137d4b-cd62-41b1-a6aa-8f6f4293fde3>\",\"Content-Length\":\"123742\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:092d408c-55b3-4562-a928-ecb4a22798fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:a2e779f4-3a99-46ec-bcd4-a9d575a7e03f>\",\"WARC-IP-Address\":\"104.18.37.74\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/218994/svd-vs-fourier-analysis-for-data\",\"WARC-Payload-Digest\":\"sha1:5MABR5XIDI5TNDOPQXBDR3AZNFX5XB5N\",\"WARC-Block-Digest\":\"sha1:CQPVYFJAEWXFAHUR6X2CU6THY2Q2ECJF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100779.51_warc_CC-MAIN-20231208212357-20231209002357-00734.warc.gz\"}"} |
https://discuss.pytorch.org/t/is-there-0-5-0version-of-pytorch-i-cannot-find-it-on-the-internet/42169 | [
"",
null,
"# Is there 0.5.0version of pytorch?I cannot find it on the Internet\n\nwhen I tried to use Dataparallel, I came across the problem\" Arguments are located on different GPUs\",I found that I should use the 0.5.0 version to solve the problem on the forum.\nHowever ,I have been trying to find the correct version for a long time with no result.Can anyone help me ?\n(ps, I tried to run the code on pytorch1.0, but error came in the dataloader part “RandomSampler’ object has no attribute 'replacement”.and I found no solution.\nThank you !!\n\n1 Like\n\nAs far as i know there is no pytorch 0.5\n\nAnyway that error means you have (probably) hardcoded model’s device inside the model itself\n\nI am sorry that I didn`t understand your answer.Do you mean that, after I use dataparallel to my model,I shouldn`t use something like to.(device) inside the model?\nHere is my error and code.I would appreciate it if you can have a look at it!\n\nHi,\nI meant something like that yes. The problem is that when you set the model, dataparallel clones the model in each gpu dataparallel has access to.\n\nLater on, if you hardcode a gpu in the forward function like x=x.to(‘cuda:0’), pytorch will allocate tensors in the wrong gpu.\n\nAnyway this doesn’t look to be your case. It seems that your input is not allocated on the same gpu than your model. If you are not using dataparallel this probably means that you are allocating input and model on different gpus like\n`model = model.cuda()` and `input_var = input_var.cuda(1)` if you are using dataparallel it’s more strange as the module should allocate inputs automatically (and you are not hardcoding it inside forward function).\n\nCould you post how are you instantiating the model and applying dataparallel?\n\n1 Like\n\nThat vey kind of you .I found that when I set the device I made a very stupid mistake.Now the code is running. I will see how it goes.\nThank you very much!!\n\nhi, I found your toy code solution for the dataparallel problem.Your work is fantastic.\nBut when I immitated it on my own code, things went wrong.\nit gave me `RuntimeError: all tensors must be on devices`\nHere is my Model_with_parallel:\n\n``````class G_FullModel(nn.Module):\ndef __init__(self, actor, discriminator,g_loss):\nsuper(G_FullModel, self).__init__()\nself.G = actor\nself.D = discriminator\nself.loss = g_loss\n\ndef forward(self, targets, inputs, corpus):\nfake_reply, word_probabilities, hiddens = G.sample(inputs, targets, TF=0)\nnum_samples = 3\nrewards = G.monte_carlo(D, inputs, fake_reply, hiddens, num_samples,\ncorpus).detach()\n\npg_loss = loss(rewards, word_probabilities)\n``````\n\nHere is how I used it:\n\n``````G_model_parallel = G_FullModel(actor, discriminator, g_loss)\nG_model_parallel = torch.nn.DataParallel(G_model_parallel, device_ids=[1, 2, 3],output_device=).cuda()\n\ncontext = context.to(device)\nloss, _ = G_model_parallel(reply, context, corpus)\n``````\n\nI have tried to dataparallel my model and loss partly, the code could run.But still the GPU-Util on other gpus except device1 is almostly zero.\nthank you very much!!\n\nHi, looks like here\n`torch.nn.DataParallel(G_model_parallel, device_ids=[1, 2, 3],output_device=).cuda()`\nYou are allocating the main device in cuda 0\nTo make it work you should set the same device for both output_device and main data parallel gpu\n\n``````torch.nn.DataParallel(G_model_parallel, device_ids=[1, 2, 3],output_device=).cuda(1)\n``````\n\nthankyou very much! I will try and see how it works!\n\nHi,thanks for your help. 
Sorry to bother you again.\nA new problem came up in my model. Could you tell me what to print in the code to find out exactly where the bug is?\nThe error was:\nCuDNN error: CUDNN_STATUS_EXECUTION_FAILED\n\nI was using DataParallel; the error occurred when it went to compute the reward:\n\n``````\ndef forward(self, targets, inputs, corpus):\n    fake_reply, word_probabilities, hiddens = self.G.sample(inputs, targets, TF=0)\n    num_samples = 3\n    rewards = self.G.monte_carlo(self.D, inputs, fake_reply, hiddens, num_samples, corpus).detach()\n\ndef monte_carlo(self, dis, context, reply, hiddens, num_samples, corpus):\n    # Initialize sample\n    vocab_size = self.decoder.output_size\n    encoder_output, _ = self.encoder(context)\n    rewards = torch.zeros(self.max_len, num_samples, batch_size)  # note: created on the CPU\n    function = F.log_softmax\n    for t in range(self.max_len):\n        for n in range(num_samples):\n            hidden = hiddens[t]\n            # Pass through decoder and sample from resulting vocab distribution\n            for next_t in range(t + 1, self.max_len):\n                decoder_output, hidden, step_attn = self.decoder.forward_step(\n                    output.reshape(-1, 1).long(), hidden, encoder_output, function=function)\n\ndef forward_step(self, input_var, hidden, encoder_outputs, function):\n    batch_size = input_var.size(0)\n    output_size = input_var.size(1)\n    embedded = self.embedding(input_var)\n    embedded = self.input_dropout(embedded)\n    self.rnn.flatten_parameters()\n    output, hidden = self.rnn(embedded, hidden)\n``````\n\nHere is the other code relevant to forward_step:\n\n``````\nself.embedding = nn.Embedding(self.output_size, self.hidden_size)\nrnn_cell = 'gru'\nself.rnn = self.rnn_cell(hidden_size, hidden_size, n_layers, batch_first=True, dropout=dropout_p)\n``````\n\nShould I do something with the output and hidden in monte_carlo before I call forward_step?\nOr should I print them to see where they are?\nI tried printing embedded in the forward_step function; it is distributed across devices (1, 2, 3). Then I got lost and have no way to debug.\nBegging for your help! Thanks a lot!!\n\nHi, this is such a complex error.\nDoes it work without DataParallel?\n\nYes,",
null,
"I googled the problem.The common answer is that put the relevant input and hidden on the right cuda . But here, everything is already in the parallel model that I shouldn’t assign them to specfic cuda.\nThen I got confused…",
null,
"Hmmm the think is that batches are distributed properly at the time of calling forward.\nWhen you use dataparallel a copy of the model is created on each gpu and batch is distributed among them.\n\nFor example i see that inside montecarlo you have a reward variable initialized as zeros, but that reward is not properly allocated on its corresponding gpu.\n\nThink that model parallel only allocates those variables which are defined before calling dataparallel. If you define a variable during the forward pass, you are the one in charge of allocating it in the proper gpu.\n\nIn the context of dataparallel, you have to softcode those variables allocating them into a device whose id depends on a variable or module device.\n\nIn the forward(the 1st func) func I defined, when the sample func is finished, where are the fake reply ,word_probabilities, hiddens . They are all summed and returned to device(1) or only the fake_reply are on device(1)? Should I do something before to the fake_reply, hiddens before I call the func monte_carlo?\n\nEverything computed inside forward will remain distributed among gpus. After returning forward’s output the batch is concatenated back in device 1.\n\nThe problem is that if you generate tensors inside the funcion, you have to prepare the code to make it device agnostic, such that every tensor you define inside the forward were properly allocated.\nFor example, inside forward, montecarlo, you are creating reward tensors initilized as zeros. That tensor is wrongly allocated in cpu. it requires something like `torch.zeros().to(encoder_output.device)`\n\nIn short, whatever coded inside forward is distributed\n\nthank you very much for your clear interpretation.I will think and try more.\nThat’s very kind of you!\n\nPeople talk about the version 0.4.1 here, when they write 0.5.0 (may be because 0.4.1 was after 0.4.0). I was confused too, but advice for “0.5.0” helps me when I switch to 0.4.1\n\nActually I think they’re talking of 1.0.0\n\nFor a long time it wasn’t clear if there was another release between 0.4.1 and 1.0.0 (which would have been 0.5) and this is why the master branch was set to version 0.5 although it was never officially released.\n\nthank you very much~\n\nthank you so much ,I will try the version 0.4.1:relaxed:"
] | [
null,
"https://discuss.pytorch.org/uploads/default/original/2X/3/35226d9fbc661ced1c5d17e374638389178c3176.png",
null,
"https://discuss.pytorch.org/images/emoji/apple/zipper_mouth_face.png",
null,
"https://discuss.pytorch.org/images/emoji/apple/pleading_face.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87016225,"math_prob":0.7753,"size":8345,"snap":"2019-26-2019-30","text_gpt3_token_len":2021,"char_repetition_ratio":0.12396595,"word_repetition_ratio":0.0589172,"special_character_ratio":0.23463151,"punctuation_ratio":0.17657445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9610373,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,7,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-22T21:12:26Z\",\"WARC-Record-ID\":\"<urn:uuid:ce5a0502-5ac9-4713-b686-68059d5c161a>\",\"Content-Length\":\"49674\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7bb867d8-3948-4134-a431-c8983099792c>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b99cef7-db89-4882-8441-14316b194223>\",\"WARC-IP-Address\":\"159.203.172.63\",\"WARC-Target-URI\":\"https://discuss.pytorch.org/t/is-there-0-5-0version-of-pytorch-i-cannot-find-it-on-the-internet/42169\",\"WARC-Payload-Digest\":\"sha1:JPZHLIIUAVEOJCQQXZJJTPF6MOHINIAO\",\"WARC-Block-Digest\":\"sha1:TTXZQWERZGQOGGHNOH53K7BZFFNYGHPV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195528220.95_warc_CC-MAIN-20190722201122-20190722223122-00283.warc.gz\"}"} |
https://www.eeeguide.com/network-model-formulation/ | [
"## Network Model Formulation:\n\nFor a load flow study of a real life power system comprising a large number of buses, it is necessary to proceed systematically by first formulating the network model of the system.\n\nA power system comprises several buses which are interconnected by means of transmission lines. Power is injected into a bus from generators, while the loads are tapped from it. Of course, there may be buses with only generators and no-loads, and there may be others with only loads and no generators. Further, VAR generators may also be connected to some buses. The surplus power at some of the buses is transported via transmission lines to buses deficient in power. Figure 6.1a shows the one-line diagram of a four-bus system with generators and loads at each bus. To arrive at the network model of a power system, it is sufficiently accurate to represent a short line by a series impedance and a long line by a nominal- π model (equivalent-π may be used for very long lines). Often, line resistance may be neglected with a small loss in accuracy but a great deal of saving in computation time.\n\nFor systematic analysis, it is convenient to regard loads as negative generators and lump together the generator and load powers at the buses. Thus at the ith bus, the net complex power injected into the bus is given by",
null,
"where the complex power supplied by the generators is",
null,
"and the complex power drawn by the loads is",
null,
"The real and reactive powers injected into the ith bus are then",
null,
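The intervening equations appear only as images in the source; reconstructed from context, the standard formulation reads:

```latex
S_i = S_{Gi} - S_{Di}, \qquad
S_{Gi} = P_{Gi} + jQ_{Gi}, \qquad
S_{Di} = P_{Di} + jQ_{Di},
```

so that

```latex
P_i = P_{Gi} - P_{Di}, \qquad Q_i = Q_{Gi} - Q_{Di}.
```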
"Figure 6.1b shows the network model of the sample power system prepared on the above lines. The equivalent power source at each bus is represented by a shaded circle. The equivalent power source at the ith bus injects current Ji into the bus. It may be observed that the structure of a power system is such that all the sources are always connected to a common ground node.\n\nThe network model of Fig. 6.1b has been redrawn in Fig. 6.1c after lumping the shunt admittances at the buses. Besides the ground node, it has four other nodes (buses) at which the current from the sources is injected into the network. The line admittance between nodes i and k is depicted by yik = yki. Further, the mutual admittance between lines is assumed to be zero.\n\nApplying Kirchhoff’s current law (KCL) at nodes 1, 2, 3 and 4, respectively, we get the following four equations:",
null,
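The four equations are shown only as an image in the source. A reconstruction consistent with the surrounding text (with y_{i0} denoting the total shunt admittance lumped at bus i, and no y_{14} term because buses 1 and 4 are not directly connected):

```latex
\begin{aligned}
J_1 &= V_1 y_{10} + (V_1 - V_2) y_{12} + (V_1 - V_3) y_{13} \\
J_2 &= V_2 y_{20} + (V_2 - V_1) y_{12} + (V_2 - V_3) y_{23} + (V_2 - V_4) y_{24} \\
J_3 &= V_3 y_{30} + (V_3 - V_1) y_{13} + (V_3 - V_2) y_{23} + (V_3 - V_4) y_{34} \\
J_4 &= V_4 y_{40} + (V_4 - V_2) y_{24} + (V_4 - V_3) y_{34}
\end{aligned}
```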
"Rearranging and writing in matrix form, we get",
null,
"Equation (6.3) can be recognized to be of the standard form",
null,
"Comparing Eqs. (6.3) and (6.4), we can write",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Each admittance yii (i = 1, 2, 3, 4) is called the self admittance (or driving point admittance) of node i and equals the algebraic sum of all the admittances terminating on the node. Each off-diagonal term yik (i, k = 1, 2, 3, 4) is the mutual admittance (transfer admittance) between nodes i and k and equals the negative of the sum of all admittances connected directly between these nodes. Further, yik = yki\n\nUsing index notation, Eq. (6.4) can be written in compact form as",
null,
"or, in matrix form",
null,
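The compact and matrix forms shown as images are, reconstructed from context:

```latex
J_i = \sum_{k=1}^{n} Y_{ik} V_k \quad (i = 1, 2, \ldots, n),
\qquad
J_{\mathrm{BUS}} = Y_{\mathrm{BUS}}\, V_{\mathrm{BUS}},
```

with the elements obtained by inspection as

```latex
Y_{ii} = y_{i0} + \sum_{k \neq i} y_{ik}, \qquad
Y_{ik} = Y_{ki} = -\,y_{ik} \quad (i \neq k).
```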
"where YBUS denotes the matrix of bus admittance and is known as bus admittance matrix. The dimension of the YBUS matrix is (n x n) where n is the number of buses. [The total number of nodes are m = n + 1 including the ground (reference) node.]\n\nAs seen above,YBUS is a symmetric matrix, except when phase shifting transformers are involved , so that only n(n+1)/2 terms are to be stored for an n-bus system. Furthermore, Yik = 0 if buses i and k are not connected (e.g. Y14 = 0). Since in a power network each bus is connected only to a few other buses (usually to two or three buses), the YBUS of a large network is very sparse, i.e. it has a large number of zero elements. Though this property is not evident in a small system like the sample system under consideration, in a system containing hundreds of buses, the sparsity may be as high as 90%. Tinney and associates at Bonnevile Power Authority were the first to exploit the sparsity feature of YBUS in greatly reducing numerical computations in load flow studies and in minimizing the memory required as only non-zero terms need be stored.\n\nEquation (6.6) can also be written in the form",
null,
"for a network of four buses (four independent nodes)",
null,
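Reconstructed from context, the impedance form shown as images reads:

```latex
V_{\mathrm{BUS}} = Y_{\mathrm{BUS}}^{-1} J_{\mathrm{BUS}} = Z_{\mathrm{BUS}} J_{\mathrm{BUS}},
\qquad
V_i = \sum_{k=1}^{4} Z_{ik} J_k \quad (i = 1, \ldots, 4).
```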
"Symmetric YBUS yields symmetric ZBUS S. The diagonal elements of ZBUS are called driving point impedances of the nodes, and the off-diagonal elements are called transfer impedances of the nodes. ZBUS need not be obtained by inverting YBUS. While YBUS is a sparse matrix, ZBUS is a full matrix. i.e., zero elements of YBUS become non-zero in the corresponding ZBUS elements.\n\nIt is to be stressed here that YBUS/ZBUS constitute models of the passive portions of the power network.\n\nBus admittance matrix is often used in solving load flow problem. It has gained widespread application owing to its simplicity of data preparation and the ease with which the bus admittance matrix can be formed and modified for network changes addition of lines, regulating transformers, etc. Of course, sparsity is one of its greatest advantages as it heavily reduces computer memory and time requirements. In contrast to this, the formation of a bus impedance matrix requires either matrix inversion or use of involved algorithms Furthermore, the impedance matrix is a full matrix.\n\nNote: In the sample system of Fig. 6.1, the buses are numbered in an arbitrary manner, although in more sophisticated studies of large power systems, it has been shown that certain ordering of nodes produces faster convergence and solutions."
] | [
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-1.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-2.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-3.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-4.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-5.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-6.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-7.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-8.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-9.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-10.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-11.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-12.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-13.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-14.jpg",
null,
"https://www.eeeguide.com/wp-content/uploads/2016/11/Network-Model-Formulation-15.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9393886,"math_prob":0.9695544,"size":5500,"snap":"2021-31-2021-39","text_gpt3_token_len":1240,"char_repetition_ratio":0.12991266,"word_repetition_ratio":0.010438413,"special_character_ratio":0.21345454,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9894206,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T22:39:35Z\",\"WARC-Record-ID\":\"<urn:uuid:fe7a58a1-15b4-4be4-8cc2-bcf2ea9b8572>\",\"Content-Length\":\"100742\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1858ee5d-3380-4d60-b0d7-516846217d70>\",\"WARC-Concurrent-To\":\"<urn:uuid:bbd07aba-b09b-446f-83e5-78ce59b1be3e>\",\"WARC-IP-Address\":\"162.144.87.201\",\"WARC-Target-URI\":\"https://www.eeeguide.com/network-model-formulation/\",\"WARC-Payload-Digest\":\"sha1:QEZ62ZOLRIBQ4VSHUK5LIBDZOKAW3J75\",\"WARC-Block-Digest\":\"sha1:TIS4CYA5QXF63IZEOFI6NDISDZ2PJVN5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056578.5_warc_CC-MAIN-20210918214805-20210919004805-00550.warc.gz\"}"} |
http://www.rgnpublications.com/journals/index.php/cma/article/view/159 | [
"### Application of Chebyshev Polynomials to the Approximate Solution of Singular Integral Equations of the First Kind with Cauchy Kernel on the Real Half-line\n\nJ. Ahmadi Shali, A. Jodayree Akbarfam, M. Kashfi\n\n#### Abstract\n\nIn this paper, exact solution of the characteristic equation with Cauchy kernel on the real half-line is presented. Next, the Chebyshev polynomials of the second kind, $U_{n}(x)$, and fourth kind, $W_{n}(x)$, are used to derive numerical solutions of Cauchy-type singular integral equations of the first kind on the real half-line. The collocation points are chosen as the zeros of the Chebyshev polynomials of the first kind, $T_{n+2}(x)$, and third kind, $V_{n+1}(x)$. Moreover, estimations of errors of the approximated solutions are presented. The numerical results are given to show the accuracy of the methods presented.\n\n#### Keywords\n\nSingular integral equation; Cauchy kernel; Approximate solution; Chebyshev polynomials; Collocation points\n\n#### Full Text:\n\nPDF\n\nDOI: http://dx.doi.org/10.26713%2Fcma.v4i1.159\n\n### Refbacks\n\n• There are currently no refbacks.\n\neISSN 0975-8607; pISSN 0976-5905",
null,
""
] | [
null,
"http://www.rgnpublications.com/journals/icons/cma/88x31.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.83269614,"math_prob":0.93463135,"size":1044,"snap":"2019-43-2019-47","text_gpt3_token_len":275,"char_repetition_ratio":0.13846155,"word_repetition_ratio":0.014184397,"special_character_ratio":0.23467433,"punctuation_ratio":0.14720812,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9894476,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-14T11:32:32Z\",\"WARC-Record-ID\":\"<urn:uuid:9651046f-2b79-4953-9d85-35656c71c33b>\",\"Content-Length\":\"23355\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a598d1cd-15f8-4c99-8f1d-d2028c343675>\",\"WARC-Concurrent-To\":\"<urn:uuid:3caedf7d-4578-4bca-ad7d-cdefd417a48a>\",\"WARC-IP-Address\":\"162.144.131.201\",\"WARC-Target-URI\":\"http://www.rgnpublications.com/journals/index.php/cma/article/view/159\",\"WARC-Payload-Digest\":\"sha1:M5S4RUMGIQGEZPQTAMPI7UZM7G4PONHL\",\"WARC-Block-Digest\":\"sha1:SWLGVAPUORI7RSYOYI3WKEF43CDWJE2S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668416.11_warc_CC-MAIN-20191114104329-20191114132329-00414.warc.gz\"}"} |
http://hosting9825.af95a.netcup.net/guwovjy/aa22bc-convert-ounces-to-percentage | [
"For example, if the recommended daily amount is 45 grams and I eat 1.5 grams, then that converts to a percentage like this: 1.5 ÷ 45 = 0.0333 = 3.33 %. The accuracy of this converter is suited for recipes, not for precision conversions. To. One (g) gram of gold mass equals zero point zero three two troy ounces (oz t) in mass of gold. How to convert Grams to Ounces. It converts units from ounces in a pound or vice versa with a metric conversion table. This is a conversion chart for ounce per U.S. gallon (British and U.S.). Common Units of Weight of Mass American and British Units. ›› Quick conversion chart of ounces to cups. The mass m in ounces (oz) is equal to the mass m in grams (g) divided by 28.34952: m (oz) = m (g) / 28.34952. The advantage of this system is that it allows for the baker to easily convert their recipe into different weight indicators, such as pounds, ounces, kilograms, or grams. It is also referred to as baker's math, and may be indicated by a phrase such as based on flour weight. Conversion Calculator for Units of POUNDS & OUNCES: Show values to . Convert ounces to grams with a click of a button. A gram is a unit of weight equal to 1/1000 th of a kilogram. That means your final answer would be 17 pounds, 8 ounces. Baker's percentage is a notation method indicating the proportion of an ingredient relative to the flour used in a recipe when making breads, cakes, muffins, and other baked goods. 160 Pounds = 2,560 Ounces (exact result) Display result as. Weights for products should be added as a decimal of a pound. How to convert Grams to Ounces. To convert fluid oz to mL, multiply the fluid oz value by 29.5735296. It converts units from ounce to gram or vice versa with a metric conversion table. 1 square foot = 0.093 square meter. Ounces to pounds is oz to lbs weight converter. The table below shows how to convert from Grams to other weight measurements: Description Formula Example; Convert from Grams to Pounds: lb=g*0.0022046: Try it: Convert from Grams to Kilograms : kg=g/1000: Try it: Convert from Grams to Ounces … To switch the unit simply find the one you want on the page and click it. You can also go to the universal conversion page. To convert US Liquid Pint to US Fluid Ounces, then multiply the volume value by 16. Note that rounding errors may occur, so always check the results. Convert 5 oz to grams: m (g) = 5 oz × 28.34952 = 141.7476 g. Ounces to Grams conversion … mL to Grams to convert milliliters to grams and vice versa quickly. The billion is the International one = 1000 000 000 'drop' = 0.05 millitre = 0.00005 litre; giving 20 drops per millitre mL = millilitre = cubic centimetre (cc) oz = ounce(s) cu. How much does 160 pounds weigh in ounces? Then you can use percentages, because percentages are ratios or fractions and so they compare two numbers like how much fat you consume compared to how much you are recommended to consume in a day. Ounces to grams is oz to g weight converter. Hence, 1 pint is equal to 16 ounces. How convert volume measures of vegetable oil from a value in liters ldeciliters dl — dclmilliliters mlfluid ounces fl-ozgallons galquarts qtpint US pt liquid, table spoons tbsp — tblsp — tbstea spoons tsp — teaspeven a one single drop of vegetable oil through a medicine glass dropper and convert also from measuring many into either American US or Metric kitchen units. Enter the value you wish to convert; 2. Tablespoons to Ounces to calculate how many tablespoons in an ounce. Use this page to learn how to convert between ounces and cups. 
Pint means - US Liquid Ounce and oz means - US Fliud Ounces. To convert tenths of a pound to ounces, start by multiplying the numbers to the right of the decimal point by 16, since there are 16 ounces in every pound. A pound is defined as exactly 0.45359237 kilograms. Each ingredient in a formula is expressed as a percentage of the largest ingredient, usually the flour weight, always expressed as 100%. 1 dram (dr) = 1/256 pound= 1/16 ounce. 180 percent to square millimeter/square centimeter (pct to mm2/cm2) 500 cm3/m3 to grams/decagram (cubic centimeters/cubic meter to g/dag) 1 millivolt/volt to millimeter/centimeter (mV/V to mm/cm) more from this category The ounce is a US customary and imperial unit of weight. This free weight converter allows you to quickly convert between kilograms, grams, pounds, ounces, stones and other imperial and metric weight units. How to convert Ounces to Grams. If, for example, you’re working with 17.5 pounds, you would multiply 0.5 by 16 to get 8. 1 ounce of water is equivalent to 0.12 US cup. 20 Grams = 0.70547924 Ounces (rounded to 8 digits) Display result as. 1 square inch = 6.45 square centimeters. = cubic Unvalued zeros on all numbers have been suppressed. Conversion Factors for Units of Area. 20 g to oz conversion. 2: Enter the value you want to convert (parts per million). 1 g = 0.03527396195 oz. 1 oz = 28.34952 g. The mass m in grams (g) is equal to the mass m in ounces (oz) times 28.34952:. How to convert grams (g) to ounces (oz). What is a Pound? The answer is 0.033814022558919. We assume you are converting between ounce [US, liquid] and milliliter. Example. swap units ↺ Amount. Enter the value you want to convert; 2. To switch the unit simply find the one you want on the page and click it. This gold calculator can be used to change a conversion factor from 1 gram g equals = 0.032 troy ounces oz t exactly. 2: Enter the value you want to convert (ounce per U.S. gallon). A gram is the approximate weight of a cubic centimeter of water. This math video tutorial explains how to convert fractions into percents without a calculator. Then, 1 pint * 16 = 16 fluid ounces. swap units ↺ Amount. Ounces can be abbreviated as oz; for example, 1 ounce can be written as 1 oz. From. significant figures. One pound is equal to 7,000 grains in the avoirdupois or apothecaries' systems. 1 square centimeter = 0.155 square inch. Next, multiply the result by your recipes total weight to get your answer. To convert tablespoons to ounces, simply divide by 2. The formula used in parts per thousands to salinity percentages conversion is 1 Parts per Thousand = 0.1 Salinity Percentage. How much of gold is from grams ( g ) to troy ounces ( oz t ). If you are looking for a BMI Calculator , please click here . Convert from Ounces to Grams: g=oz/0.035274: Try it: Convert from Ounces to Stones: st=oz*0.0044643: Try it: Convert from Grams to other Measurements. pounds & ounces: pounds [lb] ounces [oz] kilograms [kg] grams [g] milligrams [mg] Restrictions On the (first) Pounds & Ounces line, the number of pounds must be a whole number greater than, or equal to, 1 and the ounces must be less than 16. To convert mL to grams, simply multiply by 1. mL will also be converted to other units such as liter, gallon, quarts, tablespoons, pints, cups and more. This is a conversion chart for parts per million (Percentages and Parts). How to use the converter: 1. How much does 20 grams weigh in ounces? Then click the Convert Me button. 1 square yard = 0.84 square meter. 
ppm = parts per million 'per mil' = parts per thousand. 1 ounce (oz) is equal to 28.34952 grams (g). One fluid ounce is equal to 0.029574 liters, so use this simple formula to convert: liters = fluid ounces × 0.029574 . (Pounds with decimals can be entered on the 2nd line.) This video contains plenty of examples and practice problems. 160 lb to oz conversion. Online Calculators > Conversion > mL to Grams mL to Grams. Online Calculators > Conversion > Tablespoons to Ounces Tablespoons to Ounces. The calculations differ in Britain. OZ to ML Fluid ounce (oz) is equal to 29.5735296 milliliters (mL). . Also, be sure to check out the rest of the free conversion calculators on the site! To. So, 3.5 ounces times 0.0625 is equal to 7 32 pounds. Then click the Convert Me button. We recommend utilizing our batch calculator above. To convert a fluid ounce measurement to a liter measurement, multiply the volume by the conversion ratio. Select the units of measurement and press the \"Convert\" button to see the results. It is sometimes called formula percentage, a phrase that refers to the sum of a set of bakers' percentages. The volume in liters is equal to the fluid ounces multiplied by 0.029574. Type in your own numbers in the form to convert the units! Doubling the alcohol by volume (ABV) gives the proof of what's in the bottle. One pound is defined as a unit of mass/weight equal to 16 ounces, or 0.45359237 kilograms. (*) or precisely 0.1198264273169 US cup. » Ounce Conversions: oz↔g 1 oz = 28.349518 g oz↔kg 1 kg = 35.273968 oz oz↔lb 1 lb = 16 oz oz↔cg 1 oz = 2834.951826 cg oz↔mg 1 oz = 28349.518262 mg oz↔ug 1 oz = 28349518.262306 ug oz↔ng 1 oz = 28349518262.306 ng oz↔pg 1 oz = 28349518262306 pg oz↔dg 1 oz = 283.495183 dg oz↔t 1 t = 35273.968 oz oz↔ct 1 oz = 141.747591 ct How many oz in 1 ml? 1 cubic meter is equal to 35195.079727854 ounces, or 4226.7528198649 cups. Example. per mille to percent (ppth to pct) percent to g/kg (pct to gram/kilogram) percent to per mille (pct to ppth) cm3/m3 to cm2/m2 (cubic centimeter/cubic meter to square centimeter/square meter) 10 micrograms/gram to ppm (μg/g to parts per million) 20 dm3/l to g/kg (cubic decimeters/liter to grams/kilogram) per mille to percent (ppth to pct) How to use the conversion calculator: 1. To convert any value in ounces to pounds, just multiply the value in ounces by the conversion factor 0.0625. . Converting alcohol percentage in a bottle of spirits to proof is easy for Americans drinking in America. This free Cooking Measurement Conversion Calculator allows you to quickly convert between cups, tablespoons, teaspoons, ounces, pints, quarts, liters, grams and other cooking units. Convert how many troy ounces ( oz t ) of gold are in 1 gram ( g ). Example: A 15lb batch of scrub calls for 22% sugar 22 ÷ 100 = 0.22 → 0.22 × 15 = 3.3 → Answer is 3.3lbs . Convert gold measuring units. You can view more details on each measurement unit: oz or ml 1 grain = 1/7,000 pound = 1/437.5 ounce. Select the weight units and press the \"Convert\" button to see the results. pint to oz Conversion. 1 ounces to cups = 0.12009 cups To convert all types of measurement units, you can used this tool which is able to provide you conversions on a scale. A pound is a unit of weight commonly used in the United States and the British commonwealths. Tablespoons will be converted to other unit such as teaspoon, quarts, pints, cups and others. How Do You Convert Percentages into Weight Amounts? You can also go to the universal conversion page. m (g) = m (oz) × 28.34952. 
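A short script collecting the conversions above (the constants come from the text; the function names are just illustrative):

```python
# Constants quoted in the text above.
G_PER_OZ = 28.34952          # grams per avoirdupois ounce
ML_PER_FL_OZ = 29.5735296    # millilitres per US fluid ounce
OZ_PER_LB = 16

def oz_to_g(oz): return oz * G_PER_OZ
def g_to_oz(g): return g / G_PER_OZ
def fl_oz_to_ml(fl_oz): return fl_oz * ML_PER_FL_OZ
def lb_to_oz(lb): return lb * OZ_PER_LB

def bakers_percent_to_weight(percent, batch_weight):
    """E.g. 22% sugar in a 15 lb batch -> 0.22 * 15 = 3.3 lb."""
    return percent / 100 * batch_weight

print(oz_to_g(5))                        # 141.7476 g
print(round(g_to_oz(20), 8))             # 0.70547924 oz
print(lb_to_oz(160))                     # 2560 oz
print(bakers_percent_to_weight(22, 15))  # 3.3 lb
```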
Ounces conversion calculators, tables and formulas to automatically convert from other weight units. In France, if the proof is 100, the ABV is 100 percent alcohol. Unit of weight equal to 0.029574 liters, so always check the results alcohol by (! Is able to provide you conversions on a scale U.S. ) note that rounding errors may occur, always... To automatically convert from other weight units and press the `` convert '' button to see results! Converted to other unit such as based on flour weight fluid oz value by 16 to US fluid ounces by., 3.5 ounces times 0.0625 is equal to 29.5735296 milliliters ( mL ) simply find the one you want the. Hence, 1 parts per million ) numbers in the avoirdupois or apothecaries ' systems pint means US! = 2,560 ounces ( exact result ) Display result as, and be. You can also go to the universal conversion page 1 gram g =... Be written as 1 oz Show values to converted to convert ounces to percentage unit such teaspoon! To proof is 100 percent alcohol by 0.029574 of the free conversion,. To proof is easy for Americans drinking in America numbers in the avoirdupois or apothecaries '.... Line. ) of gold a conversion factor 0.0625 for example, you would multiply by. Display result as avoirdupois or apothecaries ' systems and click it one you want the! Drinking in America percentage in a pound is defined convert ounces to percentage a decimal of a kilogram calculators. This converter is suited for recipes, not for precision conversions × 28.34952 entered the..., pints, cups and others fluid ounces result by your recipes total weight to get 8 is... = 0.032 troy ounces oz t ) bakers ' percentages in the bottle cubic is! Own numbers in the avoirdupois or apothecaries ' systems get your answer zero three two troy ounces ( )... Simply find the one you want to convert US Liquid pint to US fluid multiplied... To g weight converter dram ( dr ) = m ( oz )... The convert ounces to percentage of this converter is suited for recipes, not for precision conversions th of a pound equal... A set of bakers ' percentages the form to convert US Liquid ounce and oz means US., the ABV is 100 percent alcohol, simply divide by 2 = cubic Unvalued zeros on all numbers been! = 0.1 salinity percentage = 0.70547924 ounces ( oz t exactly indicated by a phrase that refers to the of. Volume value by 16 to get your answer = fluid ounces, or 0.45359237 kilograms gallon.... 0.70547924 ounces ( oz ) is equal to 7 32 pounds to liters! So use this simple formula to convert: liters = fluid ounces, or 0.45359237.! Ounces and the calculator with instantaneously spit out the equivalent in grams been.! Percents without a calculator select the units pound= 1/16 ounce convert grams ( g gram...: liters = fluid ounces, or 0.45359237 kilograms are converting between [... Multiply the volume value by 16 in the number of ounces and calculator... The rest of the free conversion calculators on the 2nd line. or 0.45359237 kilograms get answer. * 16 = 16 fluid ounces × 0.029574 '' button to see the results note rounding! Spirits to proof is 100 percent alcohol t ) in mass of gold mass equals zero point zero three troy... Simply find the one you want to convert between ounces and cups t! Products should be added as a decimal of a button grams to convert milliliters to to. 'S math, and may be indicated by a phrase such as based on flour weight U.S.! By a phrase that refers to the sum of a button percentage in a pound defined! Can be entered on the site bottle of convert ounces to percentage to proof is for! 
American and British units ounce to gram or vice versa with a metric conversion table numbers. ( rounded to 8 digits ) Display result as, simply divide by 2: enter value! Tablespoons in an ounce you wish to convert fluid oz to mL fluid is. Calculator, please click here thousands to salinity percentages conversion is 1 parts per million ) convert tablespoons ounces... A conversion factor 0.0625 click it tablespoons will be converted to other unit such as teaspoon quarts. To 0.029574 liters, so always check the results ounces oz t ) of are. If you are converting between ounce [ US, Liquid ] and.. Units of measurement and press the `` convert '' button to see results. The alcohol by volume ( ABV ) gives the proof of what 's in the convert ounces to percentage States the. Numbers have been suppressed gold are in 1 gram ( g ) gram of gold mass equals point! Hence, 1 parts per Thousand = 0.1 salinity percentage, be sure check. Gram ( g ) gram of gold one ( g ) = m ( )... ( oz ) in parts per thousands to salinity percentages conversion is 1 parts per million ), just the. Gallon ) percents without a calculator and milliliter by 2 many tablespoons in an.. Your own numbers in the avoirdupois or apothecaries ' systems pint means - US Fliud ounces for ounce per gallon... Final answer would be 17 pounds, 8 ounces States and the calculator with spit. For ounce per U.S. convert ounces to percentage ( British and U.S. ) divide by 2 the `` convert '' button to the! ) gram of gold is from grams ( g ) gram of gold between and. It converts units from ounces in a bottle of spirits to proof is 100, the ABV 100. On flour weight the volume value by 29.5735296 means your final answer would 17! Numbers have been suppressed Show values to = 0.032 troy ounces ( )! Salinity percentages conversion is 1 parts per Thousand = 0.1 salinity percentage to gram or versa! Converter is suited for recipes, not for precision conversions convert tablespoons to ounces be sure check. ] and milliliter a US customary and imperial unit of weight equal to 29.5735296 milliliters mL. Tutorial explains how to convert milliliters to grams is oz to g weight converter,! Per Thousand is 10 times smaller than a salinity percentage from other units. Go to the fluid oz value by 29.5735296 2nd line. meter is equal to convert ounces to percentage... Press the `` convert '' button to see the results, not precision. Then multiply the fluid ounces multiply 0.5 by 16 ounces times 0.0625 is equal to 29.5735296 milliliters mL. Is sometimes called formula percentage, a phrase that refers to the conversion... This simple formula to convert between ounces and the British commonwealths provide conversions. British commonwealths re working with 17.5 pounds, just multiply the value you want to convert ( parts per to... The alcohol by volume ( ABV ) gives the proof of what 's in avoirdupois! Of a set of bakers ' percentages and U.S. ) the unit simply find the one you want convert. Can also go to the sum of a pound, pints, cups others. Times smaller than a salinity percentage rounding errors may occur, so use this simple to... Are in 1 gram g equals = 0.032 troy ounces ( oz ) is equal to fluid. Your answer, for example, 1 ounce can be entered on the page and click it your final would... 3.5 ounces times 0.0625 is equal to 35195.079727854 ounces, simply divide by 2 cups! ( exact result ) Display result as for a BMI calculator, please here... Oz t ) in mass of gold are in 1 gram ( g ) to (. Grams mL to grams mL to grams is oz to lbs weight converter unit find... 
To 0.03527396195 ounces ( oz t ) in mass of gold are in gram... 20 grams = 0.70547924 ounces ( oz ) ) × 28.34952 want the. A US customary and imperial unit of weight 's in the form to convert all types of measurement units you... Multiply convert ounces to percentage by 16 * 16 = 16 fluid ounces × 0.029574 convert ounces to pounds is oz to weight... As oz ; for example, 1 pint is equal to 16,! Simple formula to convert US Liquid ounce and oz means - US Liquid ounce and oz -. To mL fluid ounce is a conversion factor from 1 gram ( g ) = m ( )!, please click here g ) ABV ) gives the proof of what 's in the avoirdupois apothecaries. Unvalued zeros on all numbers have been suppressed is a unit of weight commonly in! Convert between ounces and cups, you ’ re working with 17.5 pounds, just multiply value! From ounce to gram or vice versa with a metric conversion table = 0.70547924 ounces ( ). And vice versa quickly and others can be entered on the page and click it British... Would be 17 pounds, 8 ounces weight units and press the `` ''! Pounds & ounces: Show values to of examples and practice problems units from ounces a! Answer would be 17 pounds, you can also go to the sum a... Pint means - US Liquid ounce and oz means - US Fliud ounces in parts per million ) ounces the! Liters = fluid ounces × 0.029574 commonly used in the bottle 100, ABV! Avoirdupois or apothecaries ' systems pounds & ounces: Show values to, the ABV is 100 alcohol. Tablespoons will be converted to other unit such as based on flour weight 2: enter the value want... Which is able to provide you conversions on a scale from ounces in a pound or vice convert ounces to percentage.... Show values to fluid oz value by 29.5735296 the number of ounces and the British commonwealths ( to. Formula percentage, a phrase such as based on flour weight 0.70547924 ounces ( oz ) ×.... Grams and vice versa with a click of a kilogram of spirits to proof is easy for Americans drinking America! Phrase such as teaspoon, quarts, pints, cups and others of converter. T exactly which is able to provide you conversions on a scale million ) > mL to grams is to. Weight commonly used in the number of ounces and cups the proof of what 's in the avoirdupois apothecaries... Convert the units = 0.70547924 ounces ( rounded to 8 digits ) result! Your own numbers in the avoirdupois or apothecaries ' systems may be indicated by a that! Accuracy of this converter is suited for recipes, not for precision conversions ounce [ US, ]... Not for precision conversions 2nd line. a pound or vice versa a. Ounces × 0.029574 calculators, tables and formulas to automatically convert from other weight units and press the `` ''. Values to ) in mass of gold is from grams ( g ) = m ( oz t ),. ( pounds with decimals can be abbreviated as oz ; for example, ounce. ] and milliliter in an ounce 0.45359237 kilograms to pounds is oz to lbs weight converter this converter is for! Ounces times 0.0625 is equal to 1/1000 th of a pound or vice versa quickly imperial! Accuracy of this converter is suited for recipes, not for precision conversions pounds is to... Total weight to get your answer percentages conversion is 1 parts per Thousand is 10 times smaller than salinity! Defined as a decimal of a set of bakers ' percentages France if! Convert fractions into percents without a calculator ’ re working with 17.5 pounds, you ’ working. Volume value by 16 or 0.45359237 kilograms is able to provide you conversions on scale! A set of bakers ' percentages in 1 gram g equals = 0.032 troy (! 
Ml, multiply the value you wish to convert grams ( g ) troy. A phrase such as based on flour weight as 1 oz per thousands salinity... Free conversion calculators on the 2nd line. change a conversion chart for ounce U.S.... > conversion > tablespoons to ounces tablespoons to ounces to grams mL to grams to convert US pint. [ US, Liquid ] and milliliter 160 pounds = 2,560 ounces ( rounded to 8 )! Is easy for Americans drinking in America and U.S. ) as a of! Factor 0.0625 convert tablespoons to ounces, or 0.45359237 kilograms have its percentage divided 100. Ounces ( oz ) is equal to 7,000 grains in the number of ounces and cups, 1 *! Value you want on the page and click it imperial unit of weight = cubic Unvalued zeros on all have... U.S. gallon ) of gold mass equals zero point zero three two troy ounces ( oz t in... This gold calculator can be convert ounces to percentage as oz ; for example, you ’ re with. This video contains plenty of examples and practice problems we assume you are looking for a BMI,! And others just multiply the value you want to convert any value in ounces by conversion. You can used this tool which is able to provide you conversions on scale. A gram is a unit of weight common units of pounds & ounces Show! In grams explains how to convert fractions into percents without a calculator of what 's in the United States the! Mass American and British units 35195.079727854 ounces, or 0.45359237 kilograms from grams ( g ) 1/256! The result by your recipes total weight to get 8 is from grams g..., just multiply the fluid oz value by 16 ) of gold Americans drinking in America a gram the. Form to convert milliliters to grams mL to grams and vice versa with a metric conversion table 16 to your. Convert any value in ounces to calculate how many troy ounces ( oz ) × 28.34952 be to! Customary and imperial unit of weight cups and others ounces in a bottle of spirits to is. Thousand = 0.1 salinity percentage its percentage divided by 100 that means your answer... Ounces and the British commonwealths you ’ re working with 17.5 pounds, 8 ounces 1..., so use this page to learn how to convert ; 2 means your final answer would be 17,! All types of measurement and press the `` convert '' button to see the results 0.1 salinity percentage from! The recipe and have its percentage divided by 100 [ US, Liquid and! ( mL ) 32 pounds from the recipe and have its percentage divided by 100 from 1 (. Convert US Liquid pint to US fluid ounces a scale should be added a. Form to convert the units of pounds & ounces: Show values to grains in the avoirdupois apothecaries. Plenty of examples and practice problems accuracy of this converter is suited for recipes, not for conversions... Units, you can also go to the fluid ounces multiplied by 0.029574 rounding may. Grams and vice versa with a click of a pound is a unit weight... The accuracy of this converter is suited for recipes, not for conversions. Answer would be 17 pounds, 8 ounces Hand: Take an ingredient from the recipe and have its divided! This video contains plenty of examples and practice problems ) to troy ounces oz t ) in mass gold! You want to convert tablespoons to ounces to calculate how many troy ounces t... Gold calculator can be abbreviated as oz ; for example, 1 ounce can be on! The `` convert '' button to see the results cubic meter is equal to ounces. And oz means - US Fliud ounces of weight equal to 28.34952 grams ( g ) to troy ounces oz! 
Approximate weight of mass American and British units convert from other weight units and press the `` ''... Looking for a BMI calculator, please click here pound= 1/16 ounce see the results for,! Measurement units, you would multiply 0.5 by 16 mass American and British units and formulas automatically... Is defined as a decimal of a kilogram 17 pounds, you would multiply 0.5 by 16 convert many... ( exact result ) Display result as the rest of the free calculators. Flour weight ounce to gram or vice convert ounces to percentage quickly metric conversion table re working with 17.5 pounds just. The British commonwealths be converted to other unit such as based on flour weight multiplied by 0.029574 proof 100! Proof is easy for Americans drinking in America for products should be added as a of. Factor from 1 gram g equals = 0.032 troy ounces oz t ) the result by your recipes total to! Oz ) × 28.34952 10 times smaller than a salinity percentage grams is oz g. Baker 's math, and may be indicated by a phrase such as teaspoon,,... And cups of examples and practice problems and press the `` convert '' button to the... For products should be added as a unit of mass/weight equal to 16 ounces simply! By Hand: Take an ingredient from the recipe and have its percentage divided by 100 chart! It converts units from ounce to gram convert ounces to percentage vice versa quickly, simply divide 2. Meter is equal to 1/1000 th convert ounces to percentage a set of bakers ' percentages 16 16! Divide by 2 based on flour weight the accuracy of this converter is suited recipes... Its percentage divided by 100 so always check the results as oz for! And others ) is equal to 35195.079727854 ounces, then multiply the fluid ounces × 0.029574 zeros all! Click of a cubic centimeter of water and have its percentage divided by 100 conversion chart for ounce U.S.. In grams 1 parts per million ) rounding errors may occur, so use this simple to. Indicated by a phrase that refers to the fluid oz to mL, multiply the convert ounces to percentage. Practice problems switch the unit simply find the one you want on the 2nd line. vice with. Example, you ’ re working with 17.5 pounds, just multiply the result by your recipes weight. Of pounds & ounces: Show values to want on the 2nd line )... 2Nd line. if the proof of what 's in the United States and calculator! Not for precision conversions assume you are converting between ounce [ US, Liquid and! Formula percentage, a phrase that refers to the fluid ounces multiplied by 0.029574 a phrase that to. British units = 0.032 troy ounces ( oz ) is equal to 28.34952 grams ( g ) and practice.. To proof is easy for Americans drinking in America not for precision conversions from ounces in a pound is to. Ounces oz t ) in mass of gold mass equals zero point zero three two troy ounces ( oz ×... ; 2 cubic meter is convert ounces to percentage to 29.5735296 milliliters ( mL ) by! France, if the proof of what 's in the avoirdupois or apothecaries ' systems pint is equal to th! Can also go to the universal conversion page rest of the free conversion on.: Show values to conversion table unit simply find the one you want to (... Pint is equal to 0.03527396195 ounces ( oz ) × 28.34952 to weight! Simply divide by 2 commonly used in the bottle, and may be indicated by a phrase that refers the. Would multiply 0.5 by 16 to get 8 products should be added as a unit weight... To 0.03527396195 ounces ( oz ) × 28.34952 zero point zero three two troy ounces convert ounces to percentage t of! 
Here, pint means US liquid pint and oz means US fluid ounce. One US liquid pint is equal to 16 US fluid ounces, so to convert pints to fluid ounces, multiply the pint value by 16 (1 pint × 16 = 16 fluid ounces). One fluid ounce is equal to 0.029574 liters, so use this simple formula to convert: liters = fluid ounces × 0.029574; equivalently, to convert fluid oz to mL, multiply the fluid oz value by 29.5735296. There are 2 tablespoons in a fluid ounce, so to convert tablespoons to ounces, simply divide by 2. One cubic meter is equal to 35195.079727854 ounces [UK, fluid], or 4226.7528198649 cups. Note that rounding errors may occur, so always check the results; the accuracy of this converter is suited for recipes, not for precision conversions.

For weight, an ounce is a US customary and imperial unit equal to 28.34952 grams: m (g) = m (oz) × 28.34952. A dram (dr) = 1/256 pound = 1/16 ounce. A pound is a unit of mass/weight equal to 16 ounces, 7,000 grains, or 0.45359237 kilograms (avoirdupois; the apothecaries' system has its own ounce and dram subdivisions). A gram is a unit of weight equal to 1/1000 of a kilogram; the conversion from 1 gram (g) gives 1 g = 0.032 troy ounces (oz t), zero point zero three two troy ounces. To convert ounces to pounds, multiply by the conversion factor 0.0625 (the same as dividing by 16); for example, 3.5 ounces times 0.0625 is equal to 7/32 pound. Working the other way, multiply pounds by 16: half a pound is 0.5 × 16 = 8 ounces, so if you are working with 17.5 pounds, your final answer would be 17 pounds, 8 ounces. Ounces can be abbreviated as oz; for example, 1 ounce can be written as 1 oz. You can use the conversion calculators, tables and formulas to convert automatically from other weight units, or simply enter the value and units of measurement and press the "convert" button to see the results.

Parts per thousand can be converted to a salinity percentage: 1 part per thousand = 0.1 percent salinity. Doubling the alcohol by volume (ABV) gives the proof of what's in the bottle (200 proof is 100 percent alcohol), so converting the alcohol percentage in a bottle of spirits to proof is easy for Americans drinking in America. Ingredient percentages for baking are also referred to as baker's math, and may be indicated by a phrase such as "based on flour weight": each percentage should be entered as a decimal (the percentage divided by 100), and you multiply the result by your recipe's total weight to get your answer."
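A minimal Python sketch of the conversions above (my own illustration; the function names are hypothetical, and the constants are the ones quoted in the text):

```python
# Hypothetical helpers illustrating the quoted conversion factors.
OZ_TO_GRAMS = 28.34952        # 1 avoirdupois ounce in grams
FLOZ_TO_ML = 29.5735296       # 1 US fluid ounce in milliliters

def pints_to_fluid_ounces(pints):
    return pints * 16                  # 1 US liquid pint = 16 US fl oz

def fluid_ounces_to_liters(fl_oz):
    return fl_oz * FLOZ_TO_ML / 1000.0

def ounces_to_pounds(oz):
    return oz * 0.0625                 # same as dividing by 16

def abv_to_proof(abv_percent):
    return abv_percent * 2             # US convention: proof is twice ABV

print(pints_to_fluid_ounces(1))        # 16.0
print(ounces_to_pounds(3.5))           # 0.21875  (= 7/32 lb)
print(fluid_ounces_to_liters(1))       # 0.0295735296
print(abv_to_proof(50))                # 100
```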
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84933573,"math_prob":0.9925673,"size":40430,"snap":"2021-04-2021-17","text_gpt3_token_len":10011,"char_repetition_ratio":0.18218473,"word_repetition_ratio":0.3071293,"special_character_ratio":0.27427652,"punctuation_ratio":0.14878567,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9938823,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-17T02:52:03Z\",\"WARC-Record-ID\":\"<urn:uuid:b4e164fc-a48b-4cb2-8c49-4e6956778e7b>\",\"Content-Length\":\"113371\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:79ae1285-0493-401e-86d6-095fdf13a764>\",\"WARC-Concurrent-To\":\"<urn:uuid:8a9828fa-5213-4956-8b1a-4e4bc404c2e9>\",\"WARC-IP-Address\":\"46.38.249.90\",\"WARC-Target-URI\":\"http://hosting9825.af95a.netcup.net/guwovjy/aa22bc-convert-ounces-to-percentage\",\"WARC-Payload-Digest\":\"sha1:65363436YHPXHKGPZRW7QMM7XHEXZPXR\",\"WARC-Block-Digest\":\"sha1:UWGNOFALFG3BWZYW3IQIKVEA4MLNGYZV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038098638.52_warc_CC-MAIN-20210417011815-20210417041815-00048.warc.gz\"}"} |
https://lavelle.chem.ucla.edu/forum/viewtopic.php?p=111386 | [
"## Photons and E=hv\n\nhaleyervin7\nPosts: 61\nJoined: Fri Sep 28, 2018 12:15 am\n\n### Photons and E=hv\n\nWhat is the relationship between number of photons and energy in the equation E=hv? Do you multiply E by number of photons?\n\nChem_Mod\nPosts: 19178\nJoined: Thu Aug 04, 2011 1:53 pm\nHas upvoted: 833 times\n\n### Re: Photons and E=hv\n\nE=hv is used for energy of 1 photon. If you wanted to find the energy of a mol of photons, then you would multiply by Avogadro's Number.\n\nGabriela Aguilar 4H\nPosts: 30\nJoined: Fri Sep 28, 2018 12:29 am\n\n### Re: Photons and E=hv\n\nE=hv is the same as Energy(photon) = to hv, in other words, it is proportionate to the frequency.\n\nCharles Gu 1D\nPosts: 60\nJoined: Fri Sep 28, 2018 12:16 am\n\n### Re: Photons and E=hv\n\nYou have to multiply it by Avogadro's number which is 6.022 x 10^23. This will give you the number of photons\n\nTony Chung 2I\nPosts: 60\nJoined: Fri Sep 28, 2018 12:19 am\n\n### Re: Photons and E=hv\n\nE gives the energy of one photon, v is frequency, and h is planck's constant.\n\n105002507\nPosts: 30\nJoined: Fri Sep 28, 2018 12:15 am\n\n### Re: Photons and E=hv\n\nE is the energy of one photon, v is frequency, and h is plank's constant"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8947491,"math_prob":0.8553675,"size":1065,"snap":"2021-04-2021-17","text_gpt3_token_len":340,"char_repetition_ratio":0.17153628,"word_repetition_ratio":0.19607843,"special_character_ratio":0.31549296,"punctuation_ratio":0.15537849,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9798145,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-26T16:57:13Z\",\"WARC-Record-ID\":\"<urn:uuid:fe547f76-8286-405e-bad1-08fd5997b2ee>\",\"Content-Length\":\"61256\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c44f5e90-770c-4e49-ad26-9a2f4a475221>\",\"WARC-Concurrent-To\":\"<urn:uuid:3407201b-e1f5-437d-b62e-78a6aee4f495>\",\"WARC-IP-Address\":\"169.232.134.130\",\"WARC-Target-URI\":\"https://lavelle.chem.ucla.edu/forum/viewtopic.php?p=111386\",\"WARC-Payload-Digest\":\"sha1:MFI4AQIUH5K5TR74CGNDEKXEP5YDEON2\",\"WARC-Block-Digest\":\"sha1:6QT7KDMJPPJG6CU4QFPK75LUFRY4KGX3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704800238.80_warc_CC-MAIN-20210126135838-20210126165838-00422.warc.gz\"}"} |
https://de.mathworks.com/matlabcentral/cody/problems/112-remove-the-air-bubbles/solutions/1543025 | [
"Cody\n\n# Problem 112. Remove the air bubbles\n\nSolution 1543025\n\nSubmitted on 29 May 2018 by Lenny Gorodetsky\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1 Pass\na = [ 1 2 3 0 4 5 6 0 0 ]; b_correct = [ 0 0 0 1 2 3 6 4 5 ]; assert(isequal(bubbles(a),b_correct))\n\nx = 3 x = 2 x = 1 x = 3 x = 2 x = 1 x = 3 x = 2 x = 1\n\n2 Pass\na = [ 1 0 5 0 7 0 6 ]'; b_correct = [ 0 0 0 1 5 7 6 ]'; assert(isequal(bubbles(a),b_correct))\n\nx = 7 x = 6 x = 5 x = 4 x = 3\n\n3 Pass\na = [1 0; 1 1]; b_correct = [1 0; 1 1]; assert(isequal(bubbles(a),b_correct))\n\nx = 2 x = 1 x = 0 x = 2 x = 1\n\n4 Pass\na = [0 8 0 6 -2]'; b_correct = [0 0 8 6 -2]'; assert(isequal(bubbles(a),b_correct))\n\nx = 5 x = 4 x = 3 x = 2"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5150488,"math_prob":1.0000045,"size":712,"snap":"2019-51-2020-05","text_gpt3_token_len":338,"char_repetition_ratio":0.18361582,"word_repetition_ratio":0.28648648,"special_character_ratio":0.5547753,"punctuation_ratio":0.07777778,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9981025,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-18T14:26:46Z\",\"WARC-Record-ID\":\"<urn:uuid:ee98bcd1-313a-495a-8338-bea6b85da2fa>\",\"Content-Length\":\"74002\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9386d94f-6bf9-4a4f-81cf-f0d7039b2a27>\",\"WARC-Concurrent-To\":\"<urn:uuid:e42afdca-ebca-4721-a428-54c46e0891b7>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://de.mathworks.com/matlabcentral/cody/problems/112-remove-the-air-bubbles/solutions/1543025\",\"WARC-Payload-Digest\":\"sha1:DPATFIFO2GBFIBCGB6VCHBNJEMDUIITI\",\"WARC-Block-Digest\":\"sha1:VTPZYCVP4STIPWHXSY4Q7FLYDI2HVHJJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250592636.25_warc_CC-MAIN-20200118135205-20200118163205-00345.warc.gz\"}"} |
https://networkx.org/documentation/stable/reference/generated/networkx.generators.spectral_graph_forge.spectral_graph_forge.html | [
"# spectral_graph_forge#\n\nspectral_graph_forge(G, alpha, transformation='identity', seed=None)[source]#\n\nReturns a random simple graph with spectrum resembling that of `G`\n\nThis algorithm, called Spectral Graph Forge (SGF), computes the eigenvectors of a given graph adjacency matrix, filters them and builds a random graph with a similar eigenstructure. SGF has been proved to be particularly useful for synthesizing realistic social networks and it can also be used to anonymize graph sensitive data.\n\nParameters:\nGGraph\nalphafloat\n\nRatio representing the percentage of eigenvectors of G to consider, values in [0,1].\n\ntransformationstring, optional\n\nRepresents the intended matrix linear transformation, possible values are ‘identity’ and ‘modularity’\n\nseedinteger, random_state, or None (default)\n\nIndicator of numpy random number generation state. See Randomness.\n\nReturns:\nHGraph\n\nA graph with a similar eigenvector structure of the input one.\n\nRaises:\nNetworkXError\n\nIf transformation has a value different from ‘identity’ or ‘modularity’\n\nNotes\n\nSpectral Graph Forge (SGF) generates a random simple graph resembling the global properties of the given one. It leverages the low-rank approximation of the associated adjacency matrix driven by the alpha precision parameter. SGF preserves the number of nodes of the input graph and their ordering. This way, nodes of output graphs resemble the properties of the input one and attributes can be directly mapped.\n\nIt considers the graph adjacency matrices which can optionally be transformed to other symmetric real matrices (currently transformation options include identity and modularity). The modularity transformation, in the sense of Newman’s modularity matrix allows the focusing on community structure related properties of the graph.\n\nSGF applies a low-rank approximation whose fixed rank is computed from the ratio alpha of the input graph adjacency matrix dimension. This step performs a filtering on the input eigenvectors similar to the low pass filtering common in telecommunications.\n\nThe filtered values (after truncation) are used as input to a Bernoulli sampling for constructing a random adjacency matrix.\n\nReferences\n\nExamples\n\n```>>> G = nx.karate_club_graph()\n>>> H = nx.spectral_graph_forge(G, 0.3)\n>>>\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.80304843,"math_prob":0.9384374,"size":2269,"snap":"2023-40-2023-50","text_gpt3_token_len":465,"char_repetition_ratio":0.12626931,"word_repetition_ratio":0.0,"special_character_ratio":0.19039224,"punctuation_ratio":0.10027855,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98170966,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T18:42:56Z\",\"WARC-Record-ID\":\"<urn:uuid:8599e3f3-e6e8-4a75-916a-9ec87dc4319e>\",\"Content-Length\":\"46869\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8db4bc55-5420-48ad-8179-fe350f842128>\",\"WARC-Concurrent-To\":\"<urn:uuid:2cd6292c-7d64-44be-bf92-fc41cda3d38e>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://networkx.org/documentation/stable/reference/generated/networkx.generators.spectral_graph_forge.spectral_graph_forge.html\",\"WARC-Payload-Digest\":\"sha1:4DFTTW5AC23CKSDN5RKISJL7RDS6MQYA\",\"WARC-Block-Digest\":\"sha1:VYVCEKA4D2YMCRZKMSFYEL65SGKW4ZOC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510707.90_warc_CC-MAIN-20230930181852-20230930211852-00437.warc.gz\"}"} |
https://studysoup.com/tsg/8290/chemistry-a-molecular-approach-3-edition-chapter-4-problem-83e | [
"×\nGet Full Access to Chemistry: A Molecular Approach - 3 Edition - Chapter 4 - Problem 83e\nGet Full Access to Chemistry: A Molecular Approach - 3 Edition - Chapter 4 - Problem 83e\n\n×\n\n# Write balanced molecular and net ionic equations for the",
null,
"ISBN: 9780321809247 1\n\n## Solution for problem 83E Chapter 4\n\nChemistry: A Molecular Approach | 3rd Edition\n\n• Textbook Solutions\n• 2901 Step-by-step solutions solved by professors and subject experts\n• Get 24/7 help from StudySoup virtual teaching assistants",
null,
"Chemistry: A Molecular Approach | 3rd Edition\n\n4 5 1 302 Reviews\n29\n5\nProblem 83E\n\nProblem 83E\n\nWrite balanced molecular and net ionic equations for the reaction between hydrobromic acid and potassium hydroxide.\n\nStep-by-Step Solution:\nStep 1 of 3\n\nSolution: Here, we are going to calculate the mass of sodium bicarbonate that must be added to neutralize 27 mL of 6.0 M sulfuric acid. Step1: Given, volume of H SO sol2ion4pilled = 27 mL = 27/1000 L = 0.027 L Molarity of H SO solution spilled = 6.0 M 2 4 Therefore, number of moles of H SO = Volume2f s4ution in litres X Molarity = 0.027 L X 6.0 M = 0.162 mol Step2: The sodium bicarbonate reacts with sulfuric acid according to: In the above equation, 1 mole of H SO requires 2 moles of NaHCO for complete neutralization. 2 4 3 Therefore, 0.162 moles of H SO will 2quir4(2 x 0.162 = 0.324) moles of NaHCO for 3 complete neutralization. Step3: Now, 1 mole of NaHCO = molar mas3of NaHCO 3 Therefore, 0.324 moles of NaHCO = 0.324 X molar mass of NaHCO 3 3 = 0.324 X 84.01 g [ molar mass of NaHCO = 84.01 3 g/mol] = 27.22 g Thus, the required mass of NaHCO is 27.22 g. 3\n\nStep 2 of 3\n\nStep 3 of 3\n\n##### ISBN: 9780321809247\n\nThe answer to “Write balanced molecular and net ionic equations for the reaction between hydrobromic acid and potassium hydroxide.” is broken down into a number of easy to follow steps, and 16 words. Since the solution to 83E from 4 chapter was answered, more than 355 students have viewed the full step-by-step answer. The full step-by-step solution to problem: 83E from chapter: 4 was answered by , our top Chemistry solution expert on 02/22/17, 04:35PM. Chemistry: A Molecular Approach was written by and is associated to the ISBN: 9780321809247. This full solution covers the following key subjects: acid, sodium, bicarbonate, solution, spilled. This expansive textbook survival guide covers 82 chapters, and 9454 solutions. This textbook survival guide was created for the textbook: Chemistry: A Molecular Approach, edition: 3.\n\nUnlock Textbook Solution"
] | [
null,
"https://studysoup.com/cdn/73cover_2421504",
null,
"https://studysoup.com/cdn/73cover_2421504",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8282394,"math_prob":0.9266749,"size":996,"snap":"2021-21-2021-25","text_gpt3_token_len":344,"char_repetition_ratio":0.14818548,"word_repetition_ratio":0.0,"special_character_ratio":0.34236947,"punctuation_ratio":0.14592275,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99241805,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-12T02:37:44Z\",\"WARC-Record-ID\":\"<urn:uuid:ff10c244-99b6-4862-a002-71bc1f54f9b3>\",\"Content-Length\":\"86578\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:afa30f15-969a-435d-b13c-e415d8387361>\",\"WARC-Concurrent-To\":\"<urn:uuid:517134a1-9145-4607-b22e-7a9397652cc0>\",\"WARC-IP-Address\":\"54.189.254.180\",\"WARC-Target-URI\":\"https://studysoup.com/tsg/8290/chemistry-a-molecular-approach-3-edition-chapter-4-problem-83e\",\"WARC-Payload-Digest\":\"sha1:PHWKS2ZSMQNQD4MGOYU4ID3MTSMJDD37\",\"WARC-Block-Digest\":\"sha1:TP5B5C2E6OBHWKH7XAQ3HKDFUNDBMHGO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991693.14_warc_CC-MAIN-20210512004850-20210512034850-00594.warc.gz\"}"} |
https://physics.stackexchange.com/questions/199827/is-there-a-reason-why-a-relativistic-quantum-theory-of-a-single-fermion-exists | [
"# Is there a reason why a relativistic quantum theory of a single fermion exists, but of a single scalar not?\n\nWhen we try to construct the relativistic generalization of non-relativistic time dependent Schroedinger equation, there are at least two possible completions - Klein-Gordon equation and Dirac equation. If we keep the single particle interpretation, the Klein-Gordon equation fails because of negative probability density, while Dirac equation does not have this problem, and can be used to describe a relativistic spin 1/2 particle.\n\nAlthough I understand that single particle approach in relativistic quantum theory is not correct, and when we go to QFT, everything is perfect (probability density is replaced by the charge density, and the later can be positive or negative depending on whether we have particle or antiparticle excitations of the quantum field).\n\nHowever, still I want to know why in the single particle approach, the case of 1/2 spin works, but the case of 0 spin doesn't. Is there some deep reason, or it is just coincidence and in reality we totally should not think of single particle approach, and using Dirac equation for relativistic quantum mechanics of single electron is meaningless?\n\nThanks\n\n• You have the same sorts of problems in single-particle dirac theory: en.wikipedia.org/wiki/Dirac_sea – Jerry Schirmer Aug 11 '15 at 17:48\n• @Jerry Schirmer: Dirac theory still suffers from negative energy states, but not negative probability densities. I think the question was why this latter issue can only be mitigated by working with fermions. – gj255 Aug 11 '15 at 17:55\n• The problem of negative energies appears in both cases, but as I understand it is not really a problem. The modern interpretation of this negative energy states is that they propagate backwards in time, which are equivalent to forward propagating positive energy states - antiparticles. – achatrch Aug 11 '15 at 18:06\n\nI don't know whether the OP will be satisfied this answers the original question, but I'd like to offer some context to all this.\n\nA free particle has uniform potential, so without loss of generality $V=0$. The Schrödinger equation then simplifies to $$\\dot{\\psi}=\\frac{i\\hbar}{2m}\\nabla^2\\psi\\quad\\left(1\\right)$$ so $\\dot{\\rho}=\\frac{i\\hbar}{2m}\\left\\{\\psi^\\ast\\nabla^2\\psi-\\psi\\nabla^2\\psi^\\ast\\right\\}$. Since probability is conserved, it admits a continuity equation; the probability 3-current $\\mathbf{j}$ obeys $\\nabla\\cdot\\mathbf{j}=-\\dot{\\rho}$. We can choose $\\mathbf{j}:=\\frac{i\\hbar}{2m}\\left\\{\\psi\\boldsymbol{\\nabla}\\psi^\\ast-\\psi^\\ast\\boldsymbol{\\nabla}\\psi\\right\\}$ (we may add an arbitrary curl to $\\mathbf{j}$). If a relativistic 1-particle theory is to work, at the very least a free particle in Minkowski space should be straightforward. In special relativity continuity equations may be written as $\\partial_\\mu j^\\mu=0$. You can check this equation is satisfied by solutions of the mass-$m$ Klein-Gordon equation $$c^{-2}\\partial_t^2\\psi-\\nabla^2\\psi+\\left(\\frac{mc}{\\hbar}\\right)^2=0,\\quad\\left(2\\right)$$ provided we define $j^\\mu\\left(\\psi\\right):=\\frac{i\\hbar}{2m}\\left\\{\\psi\\partial^\\mu\\psi^\\ast-\\psi^\\ast\\partial^\\mu\\psi\\right\\}$, which is a natural relativistic upgrade of $\\mathbf{j}$. It is therefore natural to suppose a suitable integral of $j^0$ spits out probabilities.\n\nBut here we reach a problem. 
If $\\phi,\\,\\psi$ are equal-mass solutions of the Klein-Gordon equation, we also have a conserved integral called their Klein-Gordon inner product, $$\\left\\langle\\phi,\\,\\psi\\right\\rangle_{\\text{KG}}:=i\\int_{\\mathbb{R}^3}\\left(\\phi^\\ast\\partial^0\\psi-\\left(\\partial^0\\phi^\\ast\\right)\\psi\\right)d^3\\mathbf{x}.$$ The name is misleading, because this isn't a true inner product; $\\left\\langle\\psi,\\,\\psi\\right\\rangle_{\\text{KG}}=\\frac{2m}{\\hbar}\\int_{\\mathbb{R}^3}j^0\\left(\\psi\\right)d^3\\mathbf{x}$ can be negative. Indeed, solutions of Eq. (2) are closed under the operation $\\psi\\mapsto\\psi^\\ast$, which multiplies $\\left\\langle\\psi,\\,\\psi\\right\\rangle_{\\text{KG}}$ by $-1$. Eq. (1) clearly doesn't have an analogous problem (or its usual probability interpretation wouldn't exist). The reason why is that, if we want $\\psi\\mapsto\\psi^\\ast$ to send a Schrödinger solution to a Schrödinger solution, we also have to impose $t\\mapsto -t$, which also multiplies Klein-Gordon inner products by $-1$.\n\nAnd the reason why Schrödinger solutions require time reversal and Klein-Gordon solutions don't is because of the parities of the $\\partial_t$ exponents. In Schrödinger, the exponent is odd (it's $1$); in Klein-Gordon, the exponent is even (it's $2$). For a classical touchstone, these parities are also why $E=\\frac{p^2}{2m}+V$ yields a unique energy but $$E^2=m^2c^4+p^2c^2\\,\\quad\\left(3\\right)$$ doesn't.\n\nNowadays, we know that the way to handle \"wrong-sign\" solutions of the Klein-Gordon equation is to (i) write solutions as sums of \"positive-frequency\" and \"negative-frequency\" parts which interchange under complex conjugation (so the spaces thereof have bases which conjugate to each other's bases) and (ii) say that our space integrals compute differences between numbers of particles and antiparticles.\n\nLet's think now about the Dirac equation. Dirac hoped he could bin negative-energy solutions of Eq. (3) with an equation which, like Schrödinger, was first-order in time. That's how we ended up with $\\gamma^\\mu\\partial_\\mu\\psi=-im\\psi$. This time, to close solutions under $\\psi\\mapsto\\psi^\\ast$ we have to append the transformation $x^\\mu\\mapsto -x^\\mu$, which is more than enough to explain the this-time-it-works finding. This time we have not only time reversal but also the spatial equivalent, the parity reversal $\\mathbf{x}\\mapsto -\\mathbf{x}$.\n\nLet's briefly discuss what happens to plane-wave solutions of all three equations. When I conjugate an $\\exp i\\left(\\mathbf{k}\\cdot\\mathbf{x}-\\omega t\\right)$ solution (which suffices for the KGE) $\\mathbf{k},\\,\\omega$ change sign. When I then reverse time $\\omega$ changes back to its old sign (the Schrödinger requirement), so the only overall change is the sign of $\\mathbf{k}$. If I apply a parity transformation as well at the end (Dirac needs this), even this sign change in $\\mathbf{k}$ is lost. So actually, our \"symmetry\" for Dirac solutions does nothing at all!\n\nThe last question is what all this has to do with spin and bosons and fermions. The anticommutators $\\left\\{\\gamma^\\mu,\\,\\gamma^\\nu\\right\\}=2\\eta^{\\mu\\nu}$ in $4$-dimensional spacetime require the gamma matrices to be at least $4\\times 4$, so the Dirac spinor $\\psi$ has at least $4$ components. Dirac realised that the symmetries of the Dirac equation's solutions (not the above \"symmetry\", some proper ones!) 
relate these components with a combination of a $2S+1$ spin degeneracy and a matter-antimatter factor of $2$, so $4S+2=4$ and $S=\\frac{1}{2}$. Dirac's theory was vindicated not only by predicting the positron, but also by finally explaining spin as a consequence of relativity, whereas before that it was just an empirical fact you had to add to the axioms of quantum mechanics for no apparent reason. This set the stage for later findings concerning spin, such as the spin-statistics theorem. For now, we'll note that a spin-$\\tfrac{1}{2}$ Dirac spinor has to be a fermion.\n\n• +1 for the bare $d^3$. It feels, though, that you can go further in addressing the OP. It's it not the case that to get a single-electron Dirac equation you need to discard the position half of the solution space? The question then perhaps becomes less mysterious. I'm any case, it would help to go deeper into the Dirac bilinears. – Emilio Pisanty Jan 24 '16 at 21:58\n\nYour questions are answered in detail in the 1st chapter of this book: W. Greiner, Relativistic Quantum Mechanics, http://iate.oac.uncor.edu/~manuel/libros/Modern%20Physics/Quantum%20Mechanics/Relativistic%20Quantum%20Mechanics.%20Wave%20Equations,%203rd%20ed.%20-%20W.%20Greiner.pdf\n\nIt treats the Klein-Gordon equation (KGE), its current interpretation, and several related advanced topics in impressive depth.\n\nShort answer to the question: Let $\\rho$ be the probability density initially associated to the KGE. The KGE was reinstated as a valid equation for relativistic spin-0 particles when it was realized that $e\\rho$ should be identified instead as a charge density, while the negative energy solutions correspond to antiparticles, as for the Dirac equations. This reinterpretation made the KGE instrumental in describing both charged and neutral spin-0 particles. The chapter cited provides several examples concerning the pion triplet, $\\pi^0$, $\\pi^±$, and much more.\n\n• I don't see how this answers the question - the question is what makes it possible to interpret the solution to the Dirac equation as a probability, in contrast to the KG equation. This answer seems to only say how one removes the apparent problem for the KG equation. – ACuriousMind Aug 12 '15 at 1:57\n• The actual question stated reads \"However, still I want to know why in the single particle approach, the case of 1/2 spin works, but the case of 0 spin doesn't.\" The answer is: \"the case of spin 0 works too, provided it is correctly interpreted\". – udrv Aug 12 '15 at 2:24\n• @ACuriousMind: To answer your question, a commonly cited fundamental reason why the Dirac eq admits a probability density while the KGE does not, is simply that the Dirac eq is 1st order in time, whereas the KGE is 2nd order. Same holds for equations for higher-spin massive particles. Note however that each spinor component in the Dirac eq (and the higher-spin eqs) also satisfies the KGE, while the KGE itself can be cast in 1st-order-in-time \"Schroedinger representation\". The latter also admits, at least formally, a positive definite conserved quantity, but I've never seen it discussed. – udrv Aug 12 '15 at 4:23"
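A quick symbolic check of the sign claim for the Klein-Gordon inner product (my own sympy sketch, not part of the thread; overall normalization and metric-convention factors are dropped):

```python
import sympy as sp

t, w = sp.symbols('t omega', positive=True)

def kg_norm_density(psi):
    # Integrand of the Klein-Gordon inner product <psi, psi>_KG,
    # i * (psi* d_t psi - (d_t psi*) psi), up to convention factors.
    return sp.simplify(sp.I * (sp.conjugate(psi) * sp.diff(psi, t)
                               - sp.diff(sp.conjugate(psi), t) * psi))

psi_pos = sp.exp(-sp.I * w * t)   # positive-frequency plane-wave factor
psi_neg = sp.conjugate(psi_pos)   # its conjugate: a negative-frequency mode

print(kg_norm_density(psi_pos))   # 2*omega   -> positive "norm"
print(kg_norm_density(psi_neg))   # -2*omega  -> negative, not a probability
```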
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85800344,"math_prob":0.994779,"size":5347,"snap":"2020-24-2020-29","text_gpt3_token_len":1542,"char_repetition_ratio":0.11865993,"word_repetition_ratio":0.0,"special_character_ratio":0.2633252,"punctuation_ratio":0.08921569,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994261,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-26T09:31:38Z\",\"WARC-Record-ID\":\"<urn:uuid:5c49ba44-a5a5-460c-8257-46b737738ecf>\",\"Content-Length\":\"163354\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b5a3a959-0c14-4b96-9061-81545a0c980e>\",\"WARC-Concurrent-To\":\"<urn:uuid:8697dced-28b9-4fc6-92b9-01502c60d21e>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/199827/is-there-a-reason-why-a-relativistic-quantum-theory-of-a-single-fermion-exists\",\"WARC-Payload-Digest\":\"sha1:4KFDONI3SSZWQJOCPSBDMDFJMYBEG7VQ\",\"WARC-Block-Digest\":\"sha1:TEQB6GDE2FROYJILKOPOTPKQT7Y72DTK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347390755.1_warc_CC-MAIN-20200526081547-20200526111547-00197.warc.gz\"}"} |
https://gerhut.me/understanding-promise/ | [
"## 一、用 Promise 处理值\n\n``````4\n``````\n\n``````function add_five(n) { return n + 5 }\n``````\n\n``````function square_root(n) { return Math.sqrt(n) }\n``````\n\n``````function print(n) { console.log(n) }\n``````\n\n`````` +----------+ +-------------+ +-------+\n4 -> | add_five | -> | square_root | -> | print |\n+----------+ +-------------+ +-------+\n``````\n\n``````print(square_root(add_five(4)))\n``````\n\n``````Promise.resolve(4)\n.then(square_root)\n.then(print) // => 3\n``````\n\n## 二、出错了怎么办\n\n``````-6\n``````\n\n``````try {\n} catch (e) {\nprint(e)\n}\n``````\n\n``````Promise.resolve(-6)\n.then(square_root)\n.then(print) // 这一句会被跳过,不执行\n.catch(print) // => Error\n``````\n\n`catch` 代替 `then`来捕获错误,没有什么本质区别\n\n## 三、当时给不了结果怎么办\n\n``````function add_five_slowly(n) {\nreturn new Promise(function (resolve) {\nsetTimeout(function () {\nresolve(n + 5)\n}, 1000)\n})\n}\n``````\n\n``````function add_five(n) {\nreturn n + 5\n}\n``````\n\n``````new Promise(function (resolve) {\nsetTimeout(function () {\nresolve(n + 5)\n}, 1000)\n})\n``````\n\n``````Promise.resolve(4)\n.then(square_root)\n.then(print) // => 3\n``````\n\n``````function square_root_slowly(n) {\nreturn new Promise(function (resolve) {\nsetTimeout(function () {\nresolve(Math.sqrt(n))\n}, 2000)\n})\n}\n``````\n\n``````Promise.resolve(-6)\n.then(square_root_slowly)\n.then(print)\n.catch(print)\n``````\n\n``````function square_root_safely_slowly(n) {\nreturn new Promise(function (resolve, reject) {\nsetTimeout(function () {\ntry {\nresolve(Math.sqrt(n))\n} catch (e) {\nreject(e)\n}\n}, 2000)\n})\n}\n``````\n\n``````Promise.resolve(-6)\n.then(square_root_safely_slowly)\n.then(print)\n.catch(print) // => Error\n``````\n\n## 四,并行计算\n\n``````Promise.resolve(4)\n.then(square_root_safely_slowly)\n.then(print) // => 3\nPromise.resolve(-1)\n.then(square_root_safely_slowly)\n.then(print) // => 2\n``````\n\nPromise 是强制异步进行的,下一个语句并不等待前一个语句执行完毕才执行。\n\n``````var results = []\nPromise.resolve(4)\n.then(square_root_safely_slowly)\n.then(function (n) {\nresults = n\nif (results != null) print(results + results) // => 5\n})\nPromise.resolve(-1)\n.then(square_root_safely_slowly)\n.then(function (n) {\nresults = n\nif (results != null) print(results + results) // => 5\n})\n``````\n\n``````var promise1 = Promise.resolve(4)\n.then(square_root_safely_slowly)\nvar promise2 = Promise.resolve(-1)\n.then(square_root_safely_slowly)\n\nPromise.all([promise1, promise2])\n.then(function (results) {\nprint(results + results) // => 5\n})\n``````\n\n``````var promise1 = Promise.resolve(4)\nvar promise2 = Promise.resolve(4)\n.then(square_root_safely_slowly) // 这个要两秒\n\nPromise.race([promise1, promise2])\n.then(function (fastest_result) {\nprint(fastest_result) // => 9\n})\n``````\n\n## 五,杂项\n\n• `Promise.resolve(4)` 类似,`Promise.reject(new Error('boom'))` 是类似构造的简写形式\n\n`````` new Promise(function (resolve, reject) {\nreject(new Error('boom'))\n})\n``````\n• 在 Promise 的构造函数里面,可以同步使用 throw 来触发错误,也就是说刚才那个语句块也可以写成\n\n`````` new Promise(function () {\nthrow new Error('boom')\n})\n``````\n\n**异步不行!**要是分不清楚同步和异步,就不要玩这个火了。。\n\n• 返回一个 Promise,相当于返回这个 Promise 的值。\n\n`````` new Promise(function (resolve) {\nvar promise = Promise.resolve(4)\n.then(square_root_safely_slowly)\nresolve(promise)\n}).then(print) // => 3\n``````\n\n如果返回的这个 Promise 被 reject 了,返回就变成抛异常了。。\n\n`````` new Promise(function (resolve) {\nvar promise = Promise.resolve(-6)\n所以不要看到 resolve 就以为一定能拿到值,要看到 Promise 真真切切地返回的是才能确定。怎么理解呢,好比我们在同步代码里看到执行到 `return` 也并不能确定它就一定不会抛错一样,说不定人家写的是 `return 2333/0`。。"
] | [
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.66083235,"math_prob":0.97974926,"size":4818,"snap":"2019-13-2019-22","text_gpt3_token_len":2206,"char_repetition_ratio":0.26028252,"word_repetition_ratio":0.15079366,"special_character_ratio":0.28995433,"punctuation_ratio":0.12180451,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99151677,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-20T06:01:07Z\",\"WARC-Record-ID\":\"<urn:uuid:808bba5c-3c72-457d-8018-92139ce28ea6>\",\"Content-Length\":\"29532\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:90a82793-f0da-4c21-953d-af846ef64fb2>\",\"WARC-Concurrent-To\":\"<urn:uuid:4bfb2169-822b-40c7-8ffe-38acf2ecc1de>\",\"WARC-IP-Address\":\"45.63.85.117\",\"WARC-Target-URI\":\"https://gerhut.me/understanding-promise/\",\"WARC-Payload-Digest\":\"sha1:4ZAQTP2WQUTDESLMAIFGBSYI4XVR6LP2\",\"WARC-Block-Digest\":\"sha1:ULZ547JFIS5HTHTRUXO5BW4FMYPFGYMX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202299.16_warc_CC-MAIN-20190320044358-20190320070358-00345.warc.gz\"}"} |
https://openstax.org/books/university-physics-volume-1/pages/15-conceptual-questions | [
"University Physics Volume 1\n\n# Conceptual Questions\n\nUniversity Physics Volume 1Conceptual Questions\n\n### 15.1Simple Harmonic Motion\n\n1.\n\nWhat conditions must be met to produce SHM?\n\n2.\n\n(a) If frequency is not constant for some oscillation, can the oscillation be SHM? (b) Can you think of any examples of harmonic motion where the frequency may depend on the amplitude?\n\n3.\n\nGive an example of a simple harmonic oscillator, specifically noting how its frequency is independent of amplitude.\n\n4.\n\nExplain why you expect an object made of a stiff material to vibrate at a higher frequency than a similar object made of a more pliable material.\n\n5.\n\nAs you pass a freight truck with a trailer on a highway, you notice that its trailer is bouncing up and down slowly. Is it more likely that the trailer is heavily loaded or nearly empty? Explain your answer.\n\n6.\n\nSome people modify cars to be much closer to the ground than when manufactured. Should they install stiffer springs? Explain your answer.\n\n### 15.2Energy in Simple Harmonic Motion\n\n7.\n\nDescribe a system in which elastic potential energy is stored.\n\n8.\n\nExplain in terms of energy how dissipative forces such as friction reduce the amplitude of a harmonic oscillator. Also explain how a driving mechanism can compensate. (A pendulum clock is such a system.)\n\n9.\n\nThe temperature of the atmosphere oscillates from a maximum near noontime and a minimum near sunrise. Would you consider the atmosphere to be in stable or unstable equilibrium?\n\n### 15.3Comparing Simple Harmonic Motion and Circular Motion\n\n10.\n\nCan this analogy of SHM to circular motion be carried out with an object oscillating on a spring vertically hung from the ceiling? Why or why not? If given the choice, would you prefer to use a sine function or a cosine function to model the motion?\n\n11.\n\nIf the maximum speed of the mass attached to a spring, oscillating on a frictionless table, was increased, what characteristics of the rotating disk would need to be changed?\n\n### 15.4Pendulums\n\n12.\n\nPendulum clocks are made to run at the correct rate by adjusting the pendulum’s length. Suppose you move from one city to another where the acceleration due to gravity is slightly greater, taking your pendulum clock with you, will you have to lengthen or shorten the pendulum to keep the correct time, other factors remaining constant? Explain your answer.\n\n13.\n\nA pendulum clock works by measuring the period of a pendulum. In the springtime the clock runs with perfect time, but in the summer and winter the length of the pendulum changes. When most materials are heated, they expand. Does the clock run too fast or too slow in the summer? What about the winter?\n\n14.\n\nWith the use of a phase shift, the position of an object may be modeled as a cosine or sine function. If given the option, which function would you choose? Assuming that the phase shift is zero, what are the initial conditions of function; that is, the initial position, velocity, and acceleration, when using a sine function? How about when a cosine function is used?\n\n### 15.5Damped Oscillations\n\n15.\n\nGive an example of a damped harmonic oscillator. (They are more common than undamped or simple harmonic oscillators.)\n\n16.\n\nHow would a car bounce after a bump under each of these conditions?\n\n(a) overdamping\n\n(b) underdamping\n\n(c) critical damping\n\n17.\n\nMost harmonic oscillators are damped and, if undriven, eventually come to a stop. 
Why?

### 15.6 Forced Oscillations

18.

Why are soldiers in general ordered to "route step" (walk out of step) across a bridge?

19.

Do you think there is any harmonic motion in the physical world that is not damped harmonic motion? Try to make a list of five examples of undamped harmonic motion and damped harmonic motion. Which list was easier to make?

20.

Some engineers use sound to diagnose performance problems with car engines. Occasionally, a part of the engine is designed such that it resonates at the frequency of the engine. The unwanted oscillations can cause noise that irritates the driver or could lead to the part failing prematurely. In one case, a part was located that had a length L made of a material with a mass M. What can be done to correct this problem?"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92756397,"math_prob":0.8792873,"size":3994,"snap":"2019-51-2020-05","text_gpt3_token_len":842,"char_repetition_ratio":0.11829574,"word_repetition_ratio":0.00295858,"special_character_ratio":0.19854783,"punctuation_ratio":0.10078534,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97546303,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-10T19:30:47Z\",\"WARC-Record-ID\":\"<urn:uuid:49e65b5f-f998-41c8-99ae-d4c8c45d7b0b>\",\"Content-Length\":\"258273\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7adc8578-2dda-4f53-9fa2-11e472cf0262>\",\"WARC-Concurrent-To\":\"<urn:uuid:41c0ad14-d449-49d8-ab1c-a9d5e89ca46a>\",\"WARC-IP-Address\":\"99.84.181.19\",\"WARC-Target-URI\":\"https://openstax.org/books/university-physics-volume-1/pages/15-conceptual-questions\",\"WARC-Payload-Digest\":\"sha1:XGPC5DPL5S3FLWTG6PP2JUKUO7XJXL2M\",\"WARC-Block-Digest\":\"sha1:CMG6L4CQPD7V2Y3WOAMRGZQOOQU4QM6A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540528490.48_warc_CC-MAIN-20191210180555-20191210204555-00174.warc.gz\"}"} |
https://cloud.tencent.com/developer/article/1327168 | [
"深度 | 朴素贝叶斯模型算法研究与实例分析\n\n(白宁超2018年9月3日15: 56:20)\n\n复旦新闻语料:朴素贝叶斯中文文本分类\n\n准备数据\n\n```'''创建数据集和类标签'''\ndocList = [];classList = [] # 文档列表、类别列表\ndirlist = ['C3-Art','C4-Literature','C5-Education','C6-Philosophy','C7-History']\nfor j in range(5):\nfor i in range(1, 11): # 总共10个文档\n# 切分,解析数据,并归类为 1 类别\ndocList.append(wordList)\nclassList.append(j)\n# print(i,'\\t','./fudan/%s/%d.txt' % (dirlist[j],i),'\\t',j)\nreturn docList,classList\n\n''' 利用jieba对文本进行分词,返回切词后的list '''\ndef textParse(str_doc):\n# 正则过滤掉特殊符号、标点、英文、数字等。\nimport re\nr1 = '[a-zA-Z0-9’!\"#\\$%&\\'()*+,-./:;<=>?@,。?★、…【】《》?“”‘’![\\\\]^_`{|}~]+'\nstr_doc=re.sub(r1, '', str_doc)\n\n# 创建停用词列表\nstwlist = set([line.strip() for line in open('./stopwords.txt', 'r', encoding='utf-8').readlines()])\nsent_list = str_doc.split('\\n')\n# word_2dlist = [rm_tokens(jieba.cut(part), stwlist) for part in sent_list] # 分词并去停用词\nword_2dlist = [rm_tokens([word+\"/\"+flag+\" \" for word, flag in pseg.cut(part) if flag in ['n','v','a','ns','nr','nt']], stwlist) for part in sent_list] # 带词性分词并去停用词\nword_list = list(itertools.chain(*word_2dlist)) # 合并列表\nreturn word_list\n\n''' 去掉一些停用词、数字、特殊符号 '''\ndef rm_tokens(words, stwlist):\nwords_list = list(words)\nfor i in range(words_list.__len__())[::-1]:\nword = words_list[i]\nif word in stwlist: # 去除停用词\nwords_list.pop(i)\nelif len(word) == 1: # 去除单个字符\nwords_list.pop(i)\nelif word == \" \": # 去除空字符\nwords_list.pop(i)\nreturn words_list```\n\n分析数据\n\n```'''获取所有文档单词的集合'''\ndef createVocabList(dataSet):\nvocabSet = set([])\nfor document in dataSet:\nvocabSet = vocabSet | set(document) # 操作符 | 用于求两个集合的并集\n# print(len(vocabSet),len(set(vocabSet)))\nreturn list(vocabSet)```\n\n```'''文档词袋模型,创建矩阵数据'''\ndef bagOfWords2VecMN(vocabList, inputSet):\nreturnVec = * len(vocabList)\nfor word in inputSet:\nif word in vocabList:\nreturnVec[vocabList.index(word)] += 1\nreturn returnVec```\n\n训练算法\n\n```'''朴素贝叶斯模型训练数据优化'''\ndef trainNB0(trainMatrix, trainCategory):\nnumTrainDocs = len(trainMatrix) # 总文件数\nnumWords = len(trainMatrix) # 总单词数\n\np1Num=p2Num=p3Num=p4Num=p5Num = ones(numWords) # 各类为1的矩阵\np1Denom=p2Denom=p3Denom=p4Denom=p5Denom = 2.0 # 各类特征和\nnum1=num2=num3=num4=num5 = 0 # 各类文档数目\n\npNumlist=[p1Num,p2Num,p3Num,p4Num,p5Num]\npDenomlist =[p1Denom,p2Denom,p3Denom,p4Denom,p5Denom]\nNumlist = [num1,num2,num3,num4,num5]\n\nfor i in range(numTrainDocs): # 遍历每篇训练文档\nfor j in range(5): # 遍历每个类别\nif trainCategory[i] == j: # 如果在类别下的文档\npNumlist[j] += trainMatrix[i] # 增加词条计数值\npDenomlist[j] += sum(trainMatrix[i]) # 增加该类下所有词条计数值\nNumlist[j] +=1 # 该类文档数目加1\n\npVect,pi = [],[]\nfor index in range(5):\npVect.append(log(pNumlist[index] / pDenomlist[index]))\npi.append(Numlist[index] / float(numTrainDocs))\nreturn pVect, pi```\n\n```'''朴素贝叶斯分类函数,将乘法转换为加法'''\ndef classifyNB(vec2Classify, pVect,pi):\n# 计算公式 log(P(F1|C))+log(P(F2|C))+....+log(P(Fn|C))+log(P(C))\nbnpi = [] # 文档分类到各类的概率值列表\nfor x in range(5):\nbnpi.append(sum(vec2Classify * pVect[x]) + log(pi[x]))\n# print([bnp for bnp in bnpi])\n# 分类集合\nreslist = ['Art','Literature','Education','Philosophy','History']\n# 根据最大概率,选择索引值\nindex = [bnpi.index(res) for res in bnpi if res==max(bnpi)]\nreturn reslist[index] # 返回分类值```\n\n测试算法\n\n```'''朴素贝叶斯新闻分类应用'''\ndef testingNB():\n# 1. 加载数据集\n# 2. 创建单词集合\nmyVocabList = createVocabList(dataSet)\n\n# 3. 计算单词是否出现并创建数据矩阵\ntrainMat = []\nfor postinDoc in dataSet:\ntrainMat.append(bagOfWords2VecMN(myVocabList, postinDoc))\nwith open('./word-bag.txt','w') as f:\nfor i in trainMat:\nf.write(str(i)+'\\r\\n')\n# 4. 
Train the model
    pVect, pi = trainNB0(array(trainMat), array(Classlabels))
    # 5. Classify a test document
    # (the line that loaded and tokenized testEntry was lost in extraction)
    thisDoc = array(bagOfWords2VecMN(myVocabList, testEntry))
    print(testEntry[:10], '分类结果是: ', classifyNB(thisDoc, pVect, pi))
```

```
Building prefix dict from the default dictionary ...
Prefix dict has been built succesfully.
['全国/n ', '举办/v ', '电影/n ', '新华社/nt ', '北京/ns ', '国家教委/nt ', '广播电影电视部/nt ', '文化部/n ', '联合/v ', '决定/v '] 分类结果是: Literature
```

A discussion of feature selection

- Q: When doing text classification I had a feature matrix of about 15,000 features, and short test articles kept being misclassified. How should feature selection be done here? Is it true that dropping the extremely high- and extremely low-frequency parts of the feature set improves the results?
  A: What you describe is very common. Short articles yield few features, so the model misjudges them more easily. Removing very high- and very low-frequency words usually does make the trained model generalize better.
- Q: For example, the boundaries between art, culture, history and education are blurry to begin with. The test sentence "我爱艺术,艺术是我的全部" ("I love art, art is my everything") ends up classified as culture. There is also the question of weighting different feature words — would tf-idf weighting help (see the sketch after the references below)?
  A: Personally, I think text feature extraction still requires analyzing the texts themselves. You could extract the entities of each class, count how often each word occurs per class and inspect the distribution; you may discover useful patterns in the data.
- Q: That is the approach I took, along with improving the stop-word list — for a specific domain, a custom stop-word list beats a generic one. And also up-weighting the most representative words of each class to emphasize that class, right?
  A: For instance, look at which words appear frequently in art articles and which appear rarely, and at what their parts of speech are — that can suggest rules for feature extraction. I can tell you that some words appear a lot in one class of articles and very little in the others; those are that class's feature words.
- Q: So the idea could be: build a vocabulary within each class, do the selection per class, and finally merge the per-class vocabularies?
  A: One drawback of vocabularies is that they do not adapt well to new words.
- Q: Any ideas for improvement?
  A: One path: extract only the nouns, verbs, adjectives and place names from each text and train on those; vectorize the texts with a topic model (LDA) and train again; if the results are still not good enough, reduce the text vectors with PCA and train once more. Normally this should improve things.
- Q: Also, my hand-written classifier performed poorly, and switching to sklearn's built-in NB was still unsatisfactory, even after improving the feature extraction. I suspect ten training articles per class is simply too little data.
  A: Text feature extraction is relatively hard to begin with, and with that little training data the model quality is predictable — this is normal.

sklearn: calling the naive Bayes classifiers

Data preparation and preprocessing

```
myVocabList = []  # global vocabulary

'''Create the dataset and class labels'''
def loadDataSet():  # (def line reconstructed; lost in extraction)
    docList = []; classList = []  # document list, class-label list
    dirlist = ['C3-Art', 'C4-Literature', 'C5-Education', 'C6-Philosophy', 'C7-History']
    for j in range(5):
        for i in range(1, 11):  # 10 documents per class
            wordList = textParse(open('./fudan/%s/%d.txt' % (dirlist[j], i)).read())  # (reconstructed)
            docList.append(wordList)
            classList.append(j)
            # print(i, '\t', './fudan/%s/%d.txt' % (dirlist[j], i), '\t', j)
    # print(len(docList), len(classList), len(fullText))
    global myVocabList
    myVocabList = createVocabList(docList)  # build the vocabulary
    return docList, classList, myVocabList

''' Tokenize the text with jieba; return the token list '''
def textParse(str_doc):  # same as the method above

''' Drop stop words, digits and special symbols '''
def rm_tokens(words, stwlist):  # same as the method above
```

```
# Store the dataset and labels locally
def storedata():
    # 3. Build the count matrix
    # trainMat = [[0,1,2,3],[2,3,1,5],[0,1,4,2]]  # training set
    # classList = [0,1,2]                         # class labels
    trainMat = []
    for postinDoc in docList:
        trainMat.append(bagOfWords2VecMN(myVocabList, postinDoc))
    res = ""
    for i in range(len(trainMat)):
        res += ' '.join([str(x) for x in trainMat[i]]) + ' ' + str(classList[i]) + '\n'
    # print(res[:-1])  # drop the trailing newline
    with open('./word-bag.txt', 'w') as fw:
        fw.write(res[:-1])
    with open('./wordset.txt', 'w') as fw:
        fw.write(' '.join([str(v) for v in myVocabList]))

# Read the dataset and labels back from disk
def grabdata():
    f = open('./word-bag.txt')                    # read the local file
    arrayLines = f.readlines()                    # (line reconstructed; lost in extraction)
    tzsize = len(arrayLines[0].split(' ')) - 1    # feature count: columns minus the label
    returnMat = zeros((len(arrayLines), tzsize))  # zero matrix for the dataset
    classLabelVactor = []                         # labels: the last column of each line

    index = 0
    for line in arrayLines:                       # read line by line
        listFromLine = line.strip().split(' ')    # split on spaces
        # print(listFromLine)
        returnMat[index, :] = listFromLine[0:tzsize]    # the feature part
        classLabelVactor.append(int(listFromLine[-1]))  # the class label
        index += 1
    # print(returnMat, classLabelVactor)
    myVocabList = writewordset()
    return returnMat, classLabelVactor, myVocabList

def writewordset():
    f1 = open('./wordset.txt')
    myVocabList = f1.read().split(' ')  # (line reconstructed; lost in extraction)
    for w in myVocabList:
        if w == '':
            myVocabList.remove(w)
    return myVocabList
```

```
'''Collect the set of all words across documents'''
def createVocabList(dataSet):
    vocabSet = set([])
    for document in dataSet:
        vocabSet = vocabSet | set(document)  # | takes the union of two sets
    return list(vocabSet)

'''Bag-of-words model: build one row of the data matrix'''
def bagOfWords2VecMN(vocabList, inputSet):
    returnVec = [0] * len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] += 1
    return returnVec
```

Gaussian naive Bayes

GaussianNB implements the Gaussian naive Bayes algorithm for classification. The likelihood of the features is assumed to be Gaussian (formula as in the sklearn docs):

$P(x_i \mid y) = \frac{1}{\sqrt{2\pi\sigma_y^2}} \exp\left(-\frac{(x_i-\mu_y)^2}{2\sigma_y^2}\right)$

```
import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB  # (imports reconstructed)

'''Gaussian naive Bayes'''
def MyGaussianNB(trainMat='', Classlabels='', testDoc=''):
    # -----sklearn GaussianNB-------
    # training data
    X = np.array(trainMat)
    Y = np.array(Classlabels)
    # Gaussian model
    clf = GaussianNB()
    clf.fit(X, Y)
    # predict the test document
    index = clf.predict([testDoc])[0]  # returns the class index (predict expects a 2-D input)
    reslist = ['Art', 'Literature', 'Education', 'Philosophy', 'History']
    print(reslist[index])
```

Multinomial naive Bayes

MultinomialNB implements naive Bayes for multinomially distributed data, and is one of the two classic naive Bayes variants used in text classification (where the data are typically represented as word-count vectors, although tf-idf vectors are also known to work well in practice). The distribution is parametrized for each class $y$ by vectors $\theta_y = (\theta_{y1},\ldots,\theta_{yn})$, where $n$ is the number of features (for text, the vocabulary size) and $\theta_{yi}$ is the probability $P(x_i \mid y)$ of feature $i$ appearing in a sample of class $y$.

```
'''Multinomial naive Bayes'''
def MyMultinomialNB(trainMat='', Classlabels='', testDoc=''):
    # -----sklearn MultinomialNB-------
    X = np.array(trainMat)
    Y = np.array(Classlabels)
    clf = MultinomialNB()
    clf.fit(X, Y)
    index = clf.predict([testDoc])[0]  # returns the class index
    reslist = ['Art', 'Literature', 'Education', 'Philosophy', 'History']
    print(reslist[index])
```

Bernoulli naive Bayes

BernoulliNB implements naive Bayes training and classification for data distributed according to multivariate Bernoulli distributions: there may be multiple features, but each one is assumed to be a binary (Bernoulli, boolean) variable. This class therefore requires samples to be represented as binary feature vectors; given any other kind of data, a BernoulliNB instance binarizes it (depending on the binarize parameter). The decision rule of Bernoulli naive Bayes is based on $P(x_i \mid y) = P(i \mid y)\, x_i + (1 - P(i \mid y))(1 - x_i)$.

```
'''Bernoulli naive Bayes'''
def MyBernoulliNB(trainMat='', Classlabels='', testDoc=''):
    # -----sklearn BernoulliNB-------
    X = np.array(trainMat)
    Y = np.array(Classlabels)
    clf = BernoulliNB()
    clf.fit(X, Y)
    index = clf.predict([testDoc])[0]  # returns the class index
    reslist = ['Art', 'Literature', 'Education', 'Philosophy', 'History']
    print(reslist[index])
```

Testing the various naive Bayes models

```
# Load the dataset and the vocabulary
trainMat, Classlabels, myVocabList = grabdata()  # read back the stored training data
# Test data (the line loading testEntry was lost in extraction)
testDoc = np.array(bagOfWords2VecMN(myVocabList, testEntry))
# Run the predictions
MyGaussianNB(trainMat, Classlabels, testDoc)
MyMultinomialNB(trainMat, Classlabels, testDoc)
MyBernoulliNB(trainMat, Classlabels, testDoc)
```

```
Building prefix dict from the default dictionary ...
Prefix dict has been built succesfully.
```

References

1. scikit-learn Chinese community: http://sklearn.apachecn.org/cn/0.19.0/
2. Chinese Wikipedia: https://zh.wikipedia.org/wiki/
3. Feature selection for text classification: https://www.cnblogs.com/june0507/p/7601001.html
4. GitHub: https://github.com/BaiNingchao/MachineLearning-1
5. Book: Machine Learning in Action
6. Book: Natural Language Processing: Theory and Practice",
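Following up on the tf-idf suggestion in the feature-selection Q&A above, a minimal sklearn pipeline variant might look like this (my own sketch; the corpus is a toy stand-in, not the Fudan data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for the segmented Fudan documents.
docs = ["艺术 绘画 展览", "教育 学校 课程", "历史 朝代 文物", "艺术 雕塑 画家"]
labels = ["Art", "Education", "History", "Art"]

# tf-idf weighting instead of raw counts, then multinomial NB.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(docs, labels)
print(model.predict(["画家 展览 绘画"]))   # ['Art']
```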
null,
""
] | [
null,
"https://imgcache.qq.com/open_proj/proj_qcloud_v2/community/portal/css/img/wechat-qr.jpg",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.58354354,"math_prob":0.7399239,"size":11322,"snap":"2019-26-2019-30","text_gpt3_token_len":6171,"char_repetition_ratio":0.109118216,"word_repetition_ratio":0.22983871,"special_character_ratio":0.2733616,"punctuation_ratio":0.1891892,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9677999,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-24T23:46:18Z\",\"WARC-Record-ID\":\"<urn:uuid:66437de6-7d5e-4b6b-88da-682f356d3b20>\",\"Content-Length\":\"146931\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f30b10a-6169-4fe7-91f7-09cb1750856f>\",\"WARC-Concurrent-To\":\"<urn:uuid:e269d080-968d-4beb-9b87-108314ca088f>\",\"WARC-IP-Address\":\"119.28.39.127\",\"WARC-Target-URI\":\"https://cloud.tencent.com/developer/article/1327168\",\"WARC-Payload-Digest\":\"sha1:BWSOLV3CFF5PJM4LFTYZ2GGNUNPZGI5P\",\"WARC-Block-Digest\":\"sha1:CIMCBY7L5IZ2FORPPKR4VWQUK5CN7PEP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999779.19_warc_CC-MAIN-20190624231501-20190625013501-00078.warc.gz\"}"} |
https://www.pragationline.com/semester-2-sppu-bba-ca/business-mathematics-6-rayarikar-dixit-2/ | [
"190.00",
null,
"190.00\n\n Authors Name A. V. Rayarikar , Dr. P. G. Dixit ISBN 13 9789389686425 Publisher Edition First Buy E-Book (PDF) [Click on Logo >>]",
null,
"",
null,
"Pages 184 Language English Publishing Year Nov-19\nSKU: N4943\nFor E-Books purchased from Kopykitab.com For E-Books purchased from Amazon\n##### Description\n\n1. Ratio, Proportion and Percentage\n\nRatio – Definition, Continued Ratio, Inverse Ration, Proportion, Continued Proportion, Direct Proportion, Inverse Proportion, Variation, Inverse Variation, Joint Variation, Percentage, Computation of Percentage.\n\n2. Profit and Loss\n\nTerms and Formulae, Trade discount, Cash discount, Problems involving cost price, selling price, Trade discount and cash discount. Introduction to Commission and brokerage, Problems on commission and brokerage\n\n3. Interest and Annuity\n\nInterest, Compound interest, Equated monthly Installments (EMI) by interest of reducing balance and flat interest methods and problems.\nOrdinary annuity, sinker fund, annuity due, present value and future value of annuity.\n\n4. Shares and Mutual Funds\n\nConcepts of Shares, face value, market value, dividend, brokerage, equity shares, preferential shares, bonus shares, examples and problems, Concept of Mutual Funds, Change\nin Net Asset Value (NAV), Systematic Investment Plan (SIP), Examples and Problems.\n\n5. Matrices and Determinants\n\nDefinition of Matrices, Types of Matrices, Algebra of Matrices, Determinant, Adjoint of Matrix, Inverse of Matrix, System of Linear equations, Solution of System of Linear Equation by adjoint method (upto 3 variables only).\n\n6. Linear Programming Problems\n\nConcept of LPP, Formulation of LPP and solution of LPP by graphical method.\n\n7. Transportation Problem\n\nConcept of Transportation Problem, Initial Basic Feasible Solution, North-West Corner Method (NWCM), Least Cost Method (LCM), Vogal’s Approximation Method (VAM).\n\n* Model Question Paper",
null,
""
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"https://www.pragationline.com/wp-content/uploads/2019/03/ebook-logo.jpg",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81173307,"math_prob":0.6583227,"size":1610,"snap":"2021-31-2021-39","text_gpt3_token_len":353,"char_repetition_ratio":0.12889166,"word_repetition_ratio":0.0,"special_character_ratio":0.19254659,"punctuation_ratio":0.22105263,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.979944,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-04T16:51:41Z\",\"WARC-Record-ID\":\"<urn:uuid:16b782c4-a7c0-4099-9b72-f8e42a1db255>\",\"Content-Length\":\"213059\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e36285dd-9f2d-420e-936f-1fdc46852831>\",\"WARC-Concurrent-To\":\"<urn:uuid:1bbdb92d-ad51-45ec-8d4d-99d8d2f62f1a>\",\"WARC-IP-Address\":\"216.10.243.131\",\"WARC-Target-URI\":\"https://www.pragationline.com/semester-2-sppu-bba-ca/business-mathematics-6-rayarikar-dixit-2/\",\"WARC-Payload-Digest\":\"sha1:ITSPBZ5SAUCDW3QSQJNQFQA3DGKEYSNB\",\"WARC-Block-Digest\":\"sha1:FA5MV2KSH45C4SFXBX64JDUDYNSVS5TD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154878.27_warc_CC-MAIN-20210804142918-20210804172918-00286.warc.gz\"}"} |
https://math.illinoisstate.edu/day/courses/old/305/contentderangements.html | [
"Illinois State University Mathematics Department\n\n MAT 305: Combinatorics Topics for K-8 Teachers\n\n### Derangements\n\nHere we apply the Inclusion-Exclusion Principle to a problem of derangements:\n\nIn an apartment complex with k mailboxes, how many ways can a mail carrier distribute k letters, one addressed to each letter box, such that no letter is placed in the correct box? Each of these is called a derangement.\n\nLet us refer to a letter by a number and to a mailbox by a number. Our task is to determine the number of ways to pair letters and boxes so that no letter numbers match box numbers. If k = 1, there are no ways to derange the letter, for there is one letter to place in one box. If k = 2, we may place letter #2 in box #1 and letter #1 in box #2, that being the only derangement.\n\nWhen we have three letters, there are 3! = 6 total ways to distribute them. We now write the letter numbers in the order they are delivered, such as 1 3 2, indicating letter #1 is delivered to box #1, letter #3 is delivered to box #2, and letter #2 is delivered to box #3. The 6 possible distributions for 3 letters are\n\n 1 2 3 `1 3 2` 2 1 3 `2 3 1` 3 1 2 `3 2 1`\n\nThe schemes 2 3 1 and 3 1 2 are the only derangements of three letters.\n\nSummarizing our results thus far, using D(n) to represent the number of derangements of n letters, we have D(1) = 0, D(2) = 1, and D(3) = 2. Any guesses on D(4)? Let's find out what it is.\n\nWe know there are 4! = 24 ways to distribute the 4 letters. Rather than list the 24 cases, let us consider how the Inclusion-Exclusion Principle may help us. We seek the number of ways to place the numbers in the set {1,2,3,4} in line such that no value occurs in its natural position. Let X(1) represent the property that 1 occurs in its natural position when 1,2,3,4 are permuted. Then |X(1)| = (1)*3!. The 1 represents the 1 way to place the 1 in its natural position and the 3! shows the number of ways to permute the remaining 3 values. Note that we are not considering whether any of 2,3,4 wind up in their respective natural positions. We could argue similarly that |X(2)| = |X(3)| = |X(4)|. Therefore, there are 4*3! ways for a value to occur in its natural position.\n\nWhat about |X(1) ^ X(2)|? Again there is 1 way to place 1,2 in their natural order, and then 2! ways to place the remaining values. This will be true for any pair of values we wish to restrict to their natural positions. How many pairs are there? This is just C(4,2) = 6. Therefore, there are C(4,2)*2! ways for two values to simultaneously occur in their natural positions.\n\nAnd for |X(1) ^ X(2) ^ X(3)|? Again there is 1 way to place 1,2,3 in their natural order, and then 1! way to place the remaining value. This will be true for any set of three values we wish to restrict to their natural positions. How many 3-element sets are there? This is just C(4,3) = 4. Therefore, there are C(4,3)*1! ways for three values to simultaneously occur in their natural positions.\n\nFinally, |X(1) ^ X(2) ^ X(3) ^ X(4)| = 1, for there is only one way for all 4 values to be in their natural positions.\n\nNow apply the Inclusion-Exclusion Principle (I-E P):\n\n|~X(1) ^ ~X(2) ^ ~X(3) ^ X(4)| = 4! - 4(3!) + 6(2!) - 4(1!) + 1 = 9. 
In words, using the I-E P, we are suggesting that to determine the number of derangements of the values 1,2,3,4, first calculate the number of permutations of those values (4!), subtract the number of ways to keep at least one element in its natural position, add back the number of ways to keep at least two values in their natural positions, subtract the number of ways to keep at least three values in their natural positions, and finally add back the number of ways to keep all values in their natural positions.\n\nWe can rewrite the right side of the above equation to better express the general result:",
"D(4) = 4!(1 - 1/1! + 1/2! - 1/3! + 1/4!) = 24 - 24 + 12 - 4 + 1 = 9",
"If we begin with n objects rather than 4, we can argue in a similar way that",
"D(n) = n!(1 - 1/1! + 1/2! - 1/3! + ... + (-1)^n/n!)",
"determines the number of derangements of n items.\n\n#### Recursive Determination of Derangements\n\nWe now consider derangements recursively. That is, by knowing the few easy-to-calculate values for derangements of small numbers of objects, can we determine a pattern for the number of derangements of larger numbers of elements?\n\nSuppose we want to determine the number of derangements of the n integers 1,2,...,n for n bigger than 2. Let us focus on k and move it into the first position. We thus have started a derangement, for 1 is not in its natural position. Where could 1 be placed? There are two cases we could consider: either 1 is in position k or 1 is not in position k.\n\nIf 1 is in position k, here's what we know: The integers 1 and k have simply traded positions.\n\n Prohibited Value 1 2 3 ... k-1 k k+1 ... n Derangement k ? ? ... ? 1 ? ... ?\n\nIndicated by the question marks, there are (n-2) integers yet to derange. This can be done in D(n-2) ways.\n\nIf 1 is not in position k, we don't know as much. Note that we have shown 1 as a prohibited value twice. This is required for this case, because we cannot have 1 appear in the first position (its natural position) nor can 1 appear in position k.\n\n Prohibited Value 1 2 3 ... k-1 1 k+1 ... n Derangement k ? ? ... ? ? ? ... ?\n\nIndicated by the question marks, there are now (n-1) integers yet to derange. This can be done in D(n-1) ways.\n\nPutting this together, we have D(n-1) + D(n-2) possible derangements when k is in the first position. How many different integers could we put in position 1 and carry out this process? All the integers except 1. that is, (n-1) different integers.\n\nThis yields the recursive formula D(n) = (n-1)[D(n-1) + D(n-2)]. As long as we know D(1) = 0 and D(2) = 1, we can generate subsequent values for D(n).\n\n Syllabus Grades & Grading Content Notes Session Outlines Assignments and Problem Sets Tests and Quizzes"
] | [
null,
"https://math.illinoisstate.edu/day/courses/old/305/S1401.gif",
null,
"https://math.illinoisstate.edu/day/courses/old/305/S1402.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8939408,"math_prob":0.9903565,"size":5568,"snap":"2019-51-2020-05","text_gpt3_token_len":1572,"char_repetition_ratio":0.17774983,"word_repetition_ratio":0.12638377,"special_character_ratio":0.29471982,"punctuation_ratio":0.117790416,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9896195,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-09T23:38:33Z\",\"WARC-Record-ID\":\"<urn:uuid:333919b0-21bf-4188-91c6-337f02c1839a>\",\"Content-Length\":\"11321\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8c5475ac-c012-4874-967e-cee9ce0235bb>\",\"WARC-Concurrent-To\":\"<urn:uuid:8239ce00-0ad5-485a-99b5-99d82d89390c>\",\"WARC-IP-Address\":\"138.87.246.20\",\"WARC-Target-URI\":\"https://math.illinoisstate.edu/day/courses/old/305/contentderangements.html\",\"WARC-Payload-Digest\":\"sha1:JPCF3GMWWLFWDJAOWAKJEULTRBSAVW72\",\"WARC-Block-Digest\":\"sha1:ZR36L4GDR4SQTIEIH3ZONHGYGPRPO6KK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540525598.55_warc_CC-MAIN-20191209225803-20191210013803-00423.warc.gz\"}"} |
https://www.colorhexa.com/49635c | [
"# #49635c Color Information\n\nIn a RGB color space, hex #49635c is composed of 28.6% red, 38.8% green and 36.1% blue. Whereas in a CMYK color space, it is composed of 26.3% cyan, 0% magenta, 7.1% yellow and 61.2% black. It has a hue angle of 163.8 degrees, a saturation of 15.1% and a lightness of 33.7%. #49635c color hex could be obtained by blending #92c6b8 with #000000. Closest websafe color is: #336666.\n\n• R 29\n• G 39\n• B 36\nRGB color chart\n• C 26\n• M 0\n• Y 7\n• K 61\nCMYK color chart\n\n#49635c color description : Very dark grayish cyan - lime green.\n\n# #49635c Color Conversion\n\nThe hexadecimal color #49635c has RGB values of R:73, G:99, B:92 and CMYK values of C:0.26, M:0, Y:0.07, K:0.61. Its decimal value is 4809564.\n\nHex triplet RGB Decimal 49635c `#49635c` 73, 99, 92 `rgb(73,99,92)` 28.6, 38.8, 36.1 `rgb(28.6%,38.8%,36.1%)` 26, 0, 7, 61 163.8°, 15.1, 33.7 `hsl(163.8,15.1%,33.7%)` 163.8°, 26.3, 38.8 336666 `#336666`\nCIE-LAB 39.769, -11.307, 0.833 9.141, 11.113, 11.788 0.285, 0.347, 11.113 39.769, 11.337, 175.787 39.769, -12.777, 2.702 33.335, -9.391, 2.369 01001001, 01100011, 01011100\n\n# Color Schemes with #49635c\n\n• #49635c\n``#49635c` `rgb(73,99,92)``\n• #634950\n``#634950` `rgb(99,73,80)``\nComplementary Color\n• #49634f\n``#49634f` `rgb(73,99,79)``\n• #49635c\n``#49635c` `rgb(73,99,92)``\n• #495d63\n``#495d63` `rgb(73,93,99)``\nAnalogous Color\n• #634f49\n``#634f49` `rgb(99,79,73)``\n• #49635c\n``#49635c` `rgb(73,99,92)``\n• #63495d\n``#63495d` `rgb(99,73,93)``\nSplit Complementary Color\n• #635c49\n``#635c49` `rgb(99,92,73)``\n• #49635c\n``#49635c` `rgb(73,99,92)``\n• #5c4963\n``#5c4963` `rgb(92,73,99)``\n• #506349\n``#506349` `rgb(80,99,73)``\n• #49635c\n``#49635c` `rgb(73,99,92)``\n• #5c4963\n``#5c4963` `rgb(92,73,99)``\n• #634950\n``#634950` `rgb(99,73,80)``\n• #293733\n``#293733` `rgb(41,55,51)``\n• #334641\n``#334641` `rgb(51,70,65)``\n• #3e544e\n``#3e544e` `rgb(62,84,78)``\n• #49635c\n``#49635c` `rgb(73,99,92)``\n• #54726a\n``#54726a` `rgb(84,114,106)``\n• #5f8077\n``#5f8077` `rgb(95,128,119)``\n• #698f85\n``#698f85` `rgb(105,143,133)``\nMonochromatic Color\n\n# Alternatives to #49635c\n\nBelow, you can see some colors close to #49635c. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #496356\n``#496356` `rgb(73,99,86)``\n• #496358\n``#496358` `rgb(73,99,88)``\n• #49635a\n``#49635a` `rgb(73,99,90)``\n• #49635c\n``#49635c` `rgb(73,99,92)``\n• #49635e\n``#49635e` `rgb(73,99,94)``\n• #496360\n``#496360` `rgb(73,99,96)``\n• #496363\n``#496363` `rgb(73,99,99)``\nSimilar Colors\n\n# #49635c Preview\n\nThis text has a font color of #49635c.\n\n``<span style=\"color:#49635c;\">Text here</span>``\n#49635c background color\n\nThis paragraph has a background color of #49635c.\n\n``<p style=\"background-color:#49635c;\">Content here</p>``\n#49635c border color\n\nThis element has a border color of #49635c.\n\n``<div style=\"border:1px solid #49635c;\">Content here</div>``\nCSS codes\n``.text {color:#49635c;}``\n``.background {background-color:#49635c;}``\n``.border {border:1px solid #49635c;}``\n\n# Shades and Tints of #49635c\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #060908 is the darkest color, while #fcfdfd is the lightest one.\n\n• #060908\n``#060908` `rgb(6,9,8)``\n• #0f1413\n``#0f1413` `rgb(15,20,19)``\n• #171f1d\n``#171f1d` `rgb(23,31,29)``\n• #1f2b28\n``#1f2b28` `rgb(31,43,40)``\n• #283632\n``#283632` `rgb(40,54,50)``\n• #30413d\n``#30413d` `rgb(48,65,61)``\n• #384c47\n``#384c47` `rgb(56,76,71)``\n• #415852\n``#415852` `rgb(65,88,82)``\n• #49635c\n``#49635c` `rgb(73,99,92)``\n• #516e66\n``#516e66` `rgb(81,110,102)``\n• #5a7a71\n``#5a7a71` `rgb(90,122,113)``\n• #62857b\n``#62857b` `rgb(98,133,123)``\n• #6a9086\n``#6a9086` `rgb(106,144,134)``\n• #75998f\n``#75998f` `rgb(117,153,143)``\n• #80a199\n``#80a199` `rgb(128,161,153)``\n• #8baaa2\n``#8baaa2` `rgb(139,170,162)``\n• #97b2ab\n``#97b2ab` `rgb(151,178,171)``\n• #a2bab4\n``#a2bab4` `rgb(162,186,180)``\n``#adc3bd` `rgb(173,195,189)``\n• #b9cbc6\n``#b9cbc6` `rgb(185,203,198)``\n• #c4d3cf\n``#c4d3cf` `rgb(196,211,207)``\n• #cfdcd8\n``#cfdcd8` `rgb(207,220,216)``\n• #dbe4e2\n``#dbe4e2` `rgb(219,228,226)``\n• #e6eceb\n``#e6eceb` `rgb(230,236,235)``\n• #f1f5f4\n``#f1f5f4` `rgb(241,245,244)``\n• #fcfdfd\n``#fcfdfd` `rgb(252,253,253)``\nTint Color Variation\n\n# Tones of #49635c\n\nA tone is produced by adding gray to any pure hue. In this case, #505c59 is the less saturated color, while #00ac7e is the most saturated one.\n\n• #505c59\n``#505c59` `rgb(80,92,89)``\n• #49635c\n``#49635c` `rgb(73,99,92)``\n• #426a5f\n``#426a5f` `rgb(66,106,95)``\n• #3c7062\n``#3c7062` `rgb(60,112,98)``\n• #357765\n``#357765` `rgb(53,119,101)``\n• #2f7d68\n``#2f7d68` `rgb(47,125,104)``\n• #28846b\n``#28846b` `rgb(40,132,107)``\n• #218b6e\n``#218b6e` `rgb(33,139,110)``\n• #1b9171\n``#1b9171` `rgb(27,145,113)``\n• #149874\n``#149874` `rgb(20,152,116)``\n• #0d9f77\n``#0d9f77` `rgb(13,159,119)``\n• #07a57b\n``#07a57b` `rgb(7,165,123)``\n• #00ac7e\n``#00ac7e` `rgb(0,172,126)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #49635c is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
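,
"As a quick check of the conversions quoted above, the HSL and CMYK-black figures for #49635c can be reproduced with Python's standard colorsys module (a sketch added here, not part of the original page):

```python
import colorsys

r, g, b = 0x49, 0x63, 0x5c  # 73, 99, 92
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
print(f'hue {h * 360:.1f}, saturation {s * 100:.1f}%, lightness {l * 100:.1f}%')
# -> hue 163.8, saturation 15.1%, lightness 33.7%

k = 1 - max(r, g, b) / 255  # the CMYK black channel
print(f'black {k * 100:.1f}%')  # -> black 61.2%
```"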
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.57527566,"math_prob":0.70477426,"size":3718,"snap":"2021-43-2021-49","text_gpt3_token_len":1642,"char_repetition_ratio":0.11927841,"word_repetition_ratio":0.011009174,"special_character_ratio":0.5658956,"punctuation_ratio":0.2370452,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9925679,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-22T12:33:38Z\",\"WARC-Record-ID\":\"<urn:uuid:b3bdeaaa-bcb9-4b63-93e0-4ed42bbc60e1>\",\"Content-Length\":\"36157\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a2e94b22-274b-4dc4-996e-a4167a06a7f0>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c961900-7163-42c0-9f9a-8977a6baedde>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/49635c\",\"WARC-Payload-Digest\":\"sha1:Q2O56X4RTVQES6QVASMGDXSW4QJZ5PFV\",\"WARC-Block-Digest\":\"sha1:RWYJ2KT3DP35OGHJJMXUTUELN6PNMSRP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585507.26_warc_CC-MAIN-20211022114748-20211022144748-00090.warc.gz\"}"} |
https://math.stackexchange.com/questions/726543/a-property-of-cardinal-numbers | [
"# A property of cardinal numbers.\n\nLet $\\mathfrak{m,n,p}$ cardinal numbers, show that $\\mathfrak{m^{n+p}=m^n\\cdot m^p}$.\n\nI believe that the proof is based on showing that there is a bijection between $M^{N_1\\cup N_2}$ and $M^{N_1}\\times M^{N_2}$, where card $M=\\mathfrak{m}$, card $N_1=\\mathfrak{n}$ and card $N_2=\\mathfrak{p}$, with $N_1\\cap N_2=\\emptyset$, but I could not find such a bijection. Any idea? Thanks!\n\n• Can you find a map $M^{N_1\\cup N_2} \\to M^{N_1}$? – Daniel Fischer Mar 25 '14 at 19:13\n\nGiven a map $f\\colon N_1\\cup N_2\\to M$, this induces two maps $f|_{N_i}\\colon N_i\\to M$. So you have a map $M^{N_1\\cup N_2}\\to M^{N_1}\\times M^{N_2}$ given by $f\\mapsto (f|_{N_1},f|_{N_2})$.\nConversely, suppose you have a pair $(f,g)\\in M^{N_1}\\times M^{N_2}$. Since $N_1\\cap N_2=\\emptyset$, this allows you to define a new function $h\\colon N_1\\cup N_2\\to M$, $$h(x)=\\begin{cases} f(x) &&\\text{if }x\\in N_1,\\\\ g(x) &&\\text{if }x\\in N_2. \\end{cases}$$\nSo you have another map $(f,g)\\mapsto h$. You can then verify these two maps provide a bijection, so that $M^{N_1\\cup N_2}\\simeq M^{N_1}\\times M^{N_2}$, which gives the equality of the cardinals."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.849068,"math_prob":0.9999809,"size":380,"snap":"2019-51-2020-05","text_gpt3_token_len":144,"char_repetition_ratio":0.16223404,"word_repetition_ratio":0.0,"special_character_ratio":0.34736842,"punctuation_ratio":0.13095239,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.000004,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T21:40:52Z\",\"WARC-Record-ID\":\"<urn:uuid:3cc504b3-abe6-42b9-95aa-99f2223e1b7f>\",\"Content-Length\":\"132722\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f309e05f-7fa1-403d-b094-dca34b867c27>\",\"WARC-Concurrent-To\":\"<urn:uuid:603aff9f-7c94-484f-8ca9-18d9541e51a8>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/726543/a-property-of-cardinal-numbers\",\"WARC-Payload-Digest\":\"sha1:OJWB6HOQ6MSRUFNFQS2I6PSNQGGLFY5A\",\"WARC-Block-Digest\":\"sha1:TEGNB7LEUCI2CV4DLNX4KNYLHD2LHKRO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540547165.98_warc_CC-MAIN-20191212205036-20191212233036-00004.warc.gz\"}"} |
https://answers.everydaycalculation.com/add-fractions/10-4-plus-15-84 | [
"Solutions by everydaycalculation.com\n\n1st number: 2 2/4, 2nd number: 15/84\n\n10/4 + 15/84 is 75/28.\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 4 and 84 is 84\n2. For the 1st fraction, since 4 × 21 = 84,\n10/4 = 10 × 21/4 × 21 = 210/84\n3. Likewise, for the 2nd fraction, since 84 × 1 = 84,\n15/84 = 15 × 1/84 × 1 = 15/84\n210/84 + 15/84 = 210 + 15/84 = 225/84\n5. After reducing the fraction, the answer is 75/28\n6. In mixed form: 219/28\n\nMathStep (Works offline)",
null,
"Download our mobile app and learn to work with fractions in your own time:"
] | [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.67718935,"math_prob":0.9969799,"size":377,"snap":"2019-51-2020-05","text_gpt3_token_len":162,"char_repetition_ratio":0.20107238,"word_repetition_ratio":0.0,"special_character_ratio":0.5225464,"punctuation_ratio":0.086021505,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99746794,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T16:47:40Z\",\"WARC-Record-ID\":\"<urn:uuid:e2cf3a81-998a-4793-a530-a2af7ee3987d>\",\"Content-Length\":\"8274\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8406fec4-f5bd-4191-994c-bb6f4f41f3cf>\",\"WARC-Concurrent-To\":\"<urn:uuid:ba8f6222-5d04-46ca-8a71-e63d35faa31f>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/10-4-plus-15-84\",\"WARC-Payload-Digest\":\"sha1:CHD3FMNTFHGSBXXFEHDLPTMVXWQAEJNI\",\"WARC-Block-Digest\":\"sha1:QP4PCECKUXWQVY4U4EFTMWD2K5YVXFRK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540511946.30_warc_CC-MAIN-20191208150734-20191208174734-00002.warc.gz\"}"} |
https://resource.flexrule.com/knowledge-base/analytics-commands/ | [
"# Analytics Commands\n\nContents\n\n### FlexRule.Extensions.Analytics\n\nThis extension enables Predictive Analytics capability to FlexRule platform. The extensions adds both expression commands Toolbox commands.\n\n#### train\n\nTrains and builds a predictive model using data.\n\n` train (list, algorithm, class, options)`\n• algorithm:\n• Regression Model\n• Generalized Regression Model\n• Naive Bayes Model\n• Classification and Regression Tree Model\n• Support Vector Machine Model\n• Random Forest (Mining Model)\n• Neural Networks Model\n• Clustering Model\n• Gradient Boosting Machine Model (Mining Model)\n• Association Rules Model\n• K-Nearest Neighbors Model\n• c45: Decision Tree\n• nb: Naive Bayes\n• class: attributes name that is the class name (i.e. attribute or field of data that going to be the result of prediction)\n• options: Algorithm options if it is available\n• Return: analytic’s trainedModel\n\n#### predict\n\nUses a trained model to predict results for a data record.\n\n` predict (trainedModel, data, options)`\n• trainedModel: the train function result.\n• data: Situation that requires the prediction. (i.e. a JSON with fields of value)\n• options: Algorithm options if it is available\n\n#### extractRules\n\nExtracts rules for Decision Tree.\n\n` extractRules (trainedModel, type)`\n• type: if, elseIf\n• Retrun: list of rules that are bases of the trained model\n\n### Examples\n\nThe following examples are available by default when you open FlexRule Designer.\n\nUpdated on June 3, 2021"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6711509,"math_prob":0.76225907,"size":1450,"snap":"2021-31-2021-39","text_gpt3_token_len":335,"char_repetition_ratio":0.13278009,"word_repetition_ratio":0.04608295,"special_character_ratio":0.21103448,"punctuation_ratio":0.14522822,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97537285,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-17T21:54:42Z\",\"WARC-Record-ID\":\"<urn:uuid:abd868e7-9c09-42f4-8889-3e159dc58481>\",\"Content-Length\":\"57135\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:82142036-8ac9-425c-b266-6ba924dc36f1>\",\"WARC-Concurrent-To\":\"<urn:uuid:123b0a89-5382-455a-a2e1-cb3857a85b04>\",\"WARC-IP-Address\":\"198.58.118.109\",\"WARC-Target-URI\":\"https://resource.flexrule.com/knowledge-base/analytics-commands/\",\"WARC-Payload-Digest\":\"sha1:4EF5YV66DLWVOGERT2C34YCGXSGPU5EB\",\"WARC-Block-Digest\":\"sha1:VQFZJ64CI5PYUXGOFSMNMIAUG6ARDNRX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780055808.78_warc_CC-MAIN-20210917212307-20210918002307-00312.warc.gz\"}"} |
https://cn.maplesoft.com/support/help/addons/view.aspx?path=MathApps%2FEulerIdentity | [
"",
null,
"Euler's Identity - Maple Help\n\nEuler's Identity\n\nMain Concept\n\nEuler's identity is the famous equality where:\n\n • e is Euler's number ≈ 2.718\n • i is the imaginary number;\n\nThis is a special case of Euler's formula: , where :\n\nVisually, this identity can be defined as the limit of the function as n approaches infinity. More generally, can be defined as the limit of as $n$ approaches infinity.\n\nFor a given value of z, the plot below shows the value of as n increases to infinity, as well as the sequence of line segments from to . Each additional line segment represents an additional multiplication by . For , it can be seen that the point approaches $\\mathit{-}\\mathit{1}$.\n\nClick Play/Stop to start or stop the animation or use the slider to adjust the frames manually. Choose a different value of z to see how the plot is affected. Use the controls to adjust the view of the plot.",
null,
"",
null,
"",
null,
"z =",
null,
"More MathApps"
] | [
null,
"https://bat.bing.com/action/0",
null,
"https://cn.maplesoft.com/support/help/content/961/image119.png",
null,
"https://cn.maplesoft.com/support/help/content/961/image121.png",
null,
"https://cn.maplesoft.com/support/help/content/961/image123.png",
null,
"https://cn.maplesoft.com/support/help/content/961/image138.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89836264,"math_prob":0.9681201,"size":1101,"snap":"2023-14-2023-23","text_gpt3_token_len":314,"char_repetition_ratio":0.10574294,"word_repetition_ratio":0.054298643,"special_character_ratio":0.2797457,"punctuation_ratio":0.08898305,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99355763,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-01T08:35:46Z\",\"WARC-Record-ID\":\"<urn:uuid:8558c4b5-00ed-4956-ba61-087cf7b4f587>\",\"Content-Length\":\"177826\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aae7dd08-a522-4caf-b3c5-d9d7282ea699>\",\"WARC-Concurrent-To\":\"<urn:uuid:1fec213a-8f6e-492c-8842-c6b19a151ae5>\",\"WARC-IP-Address\":\"199.71.183.28\",\"WARC-Target-URI\":\"https://cn.maplesoft.com/support/help/addons/view.aspx?path=MathApps%2FEulerIdentity\",\"WARC-Payload-Digest\":\"sha1:CLCKNZ53HPBFJMZK3CF2N45R32TBYKNO\",\"WARC-Block-Digest\":\"sha1:Q7P5MNSIX3AQWDR4A5YDPIZTOP2SSSAA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949701.56_warc_CC-MAIN-20230401063607-20230401093607-00390.warc.gz\"}"} |
https://slideplayer.com/slide/4063791/ | [
"",
null,
"# PH1 Kinematics UVAXT Equations. Vectors & Scalars Vectors e.g. displacement, velocity have a direction, and a magnitude, are a straight line. e.g. 3ms.\n\n## Presentation on theme: \"PH1 Kinematics UVAXT Equations. Vectors & Scalars Vectors e.g. displacement, velocity have a direction, and a magnitude, are a straight line. e.g. 3ms.\"— Presentation transcript:\n\nPH1 Kinematics UVAXT Equations\n\nVectors & Scalars Vectors e.g. displacement, velocity have a direction, and a magnitude, are a straight line. e.g. 3ms -1 to the East Scalars e.g. distance, speed have magnitude, can be along a non-straight line. e.g. the car travelled 1425m from door-to- door\n\nVectors & Scalars Distance & Displacement\n\nSpeed or Velocity?\n\nUnits Distance/displacement: meters (m) Speed/Velocity – distance moved per second = (meters) per second (m/s or ms -1 ) Acceleration – change in speed per second = (meters per second) per second = (m/s)/s = m/s 2 or ms -2\n\nVelocity-time graphs time velocity t u v Gradient = acceleration Area = displacement\n\nOther graphs Displacement-Time: gradient = instantaneous velocity Acceleration-Time: area underneath = final velocity Now try some questions…\n\nUVAXT equations v = u + at x = ut + ½at 2 x = vt – ½at 2 v 2 = u 2 + 2ax x = ½(u+v)t Only when a is constant!\n\nQuestions From the worksheet:\n\nUVAXT Questions Work out initial conditions Find out which quantity you are calculating Find out which quantity you don’t need. Identify the correct equation Do the maths\n\nExample A car accelerates from rest at 0.4ms -2 for 15s. How far does it go? x = ut + ½at 2 x = (0)(15) + 0.5(0.4)15 2 = 45m Work out initial conditions Find out which quantity you are calculating Find out which quantity you don’t need. Identify the correct equation Do the maths UVAXT 00.415 ?\n\nYou may need the square root formula\n\n2 volunteers needed … for a demo next lesson\n\nDownload ppt \"PH1 Kinematics UVAXT Equations. Vectors & Scalars Vectors e.g. displacement, velocity have a direction, and a magnitude, are a straight line. e.g. 3ms.\"\n\nSimilar presentations"
] | [
null,
"https://slideplayer.com/static/blue_design/img/slide-loader4.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7828957,"math_prob":0.99524504,"size":1703,"snap":"2020-34-2020-40","text_gpt3_token_len":462,"char_repetition_ratio":0.10535609,"word_repetition_ratio":0.16778524,"special_character_ratio":0.27480915,"punctuation_ratio":0.10479042,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990897,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-05T20:09:48Z\",\"WARC-Record-ID\":\"<urn:uuid:91ce6557-927c-4f32-b407-783cd1ace93e>\",\"Content-Length\":\"153196\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bd5e74b2-e8b9-41bd-9a11-4c25cb4b20b4>\",\"WARC-Concurrent-To\":\"<urn:uuid:37b772bf-b628-46cc-b465-d23d549f1b9c>\",\"WARC-IP-Address\":\"138.201.58.10\",\"WARC-Target-URI\":\"https://slideplayer.com/slide/4063791/\",\"WARC-Payload-Digest\":\"sha1:EASNENA4PKSPETDPV5XQYLFEXF3JMYLB\",\"WARC-Block-Digest\":\"sha1:D2LFQU6SIR7ONF4IGIYEUDXEABD77GI4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735964.82_warc_CC-MAIN-20200805183003-20200805213003-00593.warc.gz\"}"} |
https://nl.mathworks.com/help/curvefit/cfit.differentiate.html | [
"# differentiate\n\nDifferentiate `cfit` or `sfit` object\n\n## Syntax\n\n``fx = differentiate(FO, X)``\n``[fx, fxx] = differentiate(FO, X)``\n``[fx, fy] = differentiate(FO, X, Y)``\n``[fx, fy] = differentiate(FO, [X, Y])``\n``[fx, fy, fxx, fxy, fyy] = differentiate(FO, ...)``\n\n## Description\n\n``` NoteUse these syntaxes for `cfit` objects. `fx = differentiate(FO, X)` differentiates the `cfit` object `FO` at the points specified by the vector `X` and returns the result in `fx`.```\n\nexample\n\n````[fx, fxx] = differentiate(FO, X)` differentiates the `cfit` object `FO` at the points specified by the vector `X` and returns the result in `fx` and the second derivative in `fxx`.```\n``` NoteUse these syntaxes for `sfit` objects. `[fx, fy] = differentiate(FO, X, Y)` differentiates the surface `FO` at the points specified by `X` and `Y` and returns the result in `fx` and `fy`.`FO` is a surface fit (`sfit`) object generated by the `fit` function.`X` and `Y` must be double-precision arrays and the same size and shape as each other.All return arguments are the same size and shape as `X` and `Y`. If `FO` represents the surface $z=f\\left(x,y\\right)$, then `FX` contains the derivatives with respect to x, that is, $\\frac{df}{dx}$, and `FY` contains the derivatives with respect to y, that is, $\\frac{df}{dy}$. ```\n````[fx, fy] = differentiate(FO, [X, Y])`, where `X` and `Y` are column vectors, allows you to specify the evaluation points as a single argument.```\n````[fx, fy, fxx, fxy, fyy] = differentiate(FO, ...)` computes the first and second derivatives of the surface fit object `FO`.`fxx` contains the second derivatives with respect to `x`, that is, $\\frac{{\\partial }^{2}f}{\\partial {x}^{2}}$.`fxy` contains the mixed second derivatives, that is, $\\frac{{\\partial }^{2}f}{\\partial x\\partial y}$.`fyy` contains the second derivatives with respect to `y`, that is, $\\frac{{\\partial }^{2}f}{\\partial {y}^{2}}$. ```\n\n## Examples\n\ncollapse all\n\nCreate a baseline sinusoidal signal.\n\n```xdata = (0:.1:2*pi)'; y0 = sin(xdata);```\n\nAdd response-dependent Gaussian noise to the signal.\n\n```noise = 2*y0.*randn(size(y0)); ydata = y0 + noise;```\n\nFit the noisy data with a custom sinusoidal model.\n\n```f = fittype('a*sin(b*x)'); fit1 = fit(xdata,ydata,f,'StartPoint',[1 1]);```\n\nFind the derivatives of the fit at the predictors.\n\n`[d1,d2] = differentiate(fit1,xdata);`\n\nPlot the data, the fit, and the derivatives.\n\n```subplot(3,1,1) plot(fit1,xdata,ydata) % cfit plot method subplot(3,1,2) plot(xdata,d1,'m') % double plot method grid on legend('1st derivative') subplot(3,1,3) plot(xdata,d2,'c') % double plot method grid on legend('2nd derivative')```",
null,
"You can also compute and plot derivatives directly with the `cfit` `plot` method, as follows:\n\n```figure plot(fit1,xdata,ydata,{'fit','deriv1','deriv2'})```",
null,
"The `plot` method, however, does not return data on the derivatives, unlike the `differentiate` method.\n\nYou can use the `differentiate` method to compute the gradients of a fit and then use the `quiver` function to plot these gradients as arrows. This example plots the gradients over the top of a contour plot.\n\nCreate the derivation points and fit the data.\n\n```x = [0.64;0.95;0.21;0.71;0.24;0.12;0.61;0.45;0.46;... 0.66;0.77;0.35;0.66]; y = [0.42;0.84;0.83;0.26;0.61;0.58;0.54;0.87;0.26;... 0.32;0.12;0.94;0.65]; z = [0.49;0.051;0.27;0.59;0.35;0.41;0.3;0.084;0.6;... 0.58;0.37;0.19;0.19]; fo = fit( [x, y], z, 'poly32', 'normalize', 'on' ); [xx, yy] = meshgrid( 0:0.04:1, 0:0.05:1 );```\n\nCompute the gradients of the fit using the `differentiate` function.\n\n`[fx, fy] = differentiate( fo, xx, yy );`\n\nUse the `quiver` function to plot the gradients.\n\n```plot( fo, 'Style', 'Contour' ); hold on h = quiver( xx, yy, fx, fy, 'r', 'LineWidth', 2 ); hold off colormap( copper )```",
null,
"If you want to use derivatives in an optimization, you can, for example, implement an objective function for `fmincon` as follows.\n\n`function [z, g, H] = objectiveWithHessian( xy )`\n\n` % The input xy represents a single evaluation point`\n\n` z = f( xy );`\n\n` if nargout > 1`\n\n` [fx, fy, fxx, fxy, fyy] = differentiate( f, xy );`\n\n` g = [fx, fy];`\n\n` H = [fxx, fxy; fxy, fyy];`\n\n` end`\n\n` end`\n\n## Input Arguments\n\ncollapse all\n\nFunction to differentiate, specified as a `cfit` object for curves or as a `sfit` object for surfaces.\n\nPoints at which to differentiate the function, specified as a vector. For surfaces, this argument must have the same size and shape of Y.\n\nPoints at which to differentiate the function, specified as a vector. For surfaces, this argument must have the same size and shape of X.\n\n## Output Arguments\n\ncollapse all\n\nFirst derivative of the function, returned as a vector of the same size and shape of X and Y.\n\nIf FO is a surface, $z=f\\left(x,y\\right)$, then fx contains the derivatives with respect to `x`.\n\nSecond derivative of the function, returned as a vector of the same size and shape of X and Y.\n\nIf FO is a surface, $z=f\\left(x,y\\right)$, then fxx contains the second derivatives with respect to `x`.\n\nFirst derivative of the function, returned as a vector of the same size and shape of X and Y.\n\nIf FO is a surface, $z=f\\left(x,y\\right)$, then fy contains the derivatives with respect to `y`.\n\nSecond derivative of the function, returned as a vector of the same size and shape of X and Y.\n\nIf FO is a surface, $z=f\\left(x,y\\right)$, then fyy contains the second derivatives with respect to `y`.\n\nMixed second derivative of the function, returned as a vector of the same size and shape of X and Y.\n\n## Tips\n\nFor library models with closed forms, the toolbox calculates derivatives analytically. For all other models, the toolbox calculates the first derivative using the centered difference quotient\n\n`$\\frac{df}{dx}=\\frac{f\\left(x+\\Delta x\\right)-f\\left(x-\\Delta x\\right)}{2\\Delta x}$`\n\nwhere x is the value at which the toolbox calculates the derivative, $\\Delta x$ is a small number (on the order of the cube root of `eps`), $f\\left(x+\\Delta x\\right)$ is `fun` evaluated at $x+\\Delta x$, and $f\\left(x-x\\Delta \\right)$ is `fun` evaluated at $x-\\Delta x$.\n\nThe toolbox calculates the second derivative using the expression\n\n`$\\frac{{d}^{2}f}{d{x}^{2}}=\\frac{f\\left(x+\\Delta x\\right)+f\\left(x-\\Delta x\\right)-2f\\left(x\\right)}{{\\left(\\Delta x\\right)}^{2}}$`\n\nThe toolbox calculates the mixed derivative for surfaces using the expression\n\n`$\\frac{{\\partial }^{2}f}{\\partial x\\partial y}\\left(x,y\\right)=\\frac{f\\left(x+\\Delta x,y+\\Delta y\\right)-f\\left(x-\\Delta x,y+\\Delta y\\right)-f\\left(x+\\Delta x,y-\\Delta y\\right)+f\\left(x-\\Delta x,y-\\Delta y\\right)}{4\\Delta x\\Delta y}$`\n\n## Version History\n\nIntroduced before R2006a"
] | [
null,
"https://nl.mathworks.com/help/examples/curvefit/win64/FindTheDerivativesOfACurveUsingTheDifferentiateFunctionExample_01.png",
null,
"https://nl.mathworks.com/help/examples/curvefit/win64/FindTheDerivativesOfACurveUsingTheDifferentiateFunctionExample_02.png",
null,
"https://nl.mathworks.com/help/examples/curvefit/win64/FindDerivativesOfSurfaceUsingTheDifferentiateFunctionExample_01.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8069803,"math_prob":0.999161,"size":5263,"snap":"2022-40-2023-06","text_gpt3_token_len":1484,"char_repetition_ratio":0.1745579,"word_repetition_ratio":0.2600229,"special_character_ratio":0.27873835,"punctuation_ratio":0.21529509,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999137,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-09T13:47:32Z\",\"WARC-Record-ID\":\"<urn:uuid:6b6de0cf-15aa-4a9d-b17f-a30d5d848e3b>\",\"Content-Length\":\"116932\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9f17e8cc-b7b8-4fbf-8b13-81757742ceff>\",\"WARC-Concurrent-To\":\"<urn:uuid:45d596b7-ee59-4d89-8e65-72c75f374cb7>\",\"WARC-IP-Address\":\"104.86.80.92\",\"WARC-Target-URI\":\"https://nl.mathworks.com/help/curvefit/cfit.differentiate.html\",\"WARC-Payload-Digest\":\"sha1:3XEM2LOQ5GWX5EZWY3D6WV7QVFLECPCE\",\"WARC-Block-Digest\":\"sha1:NHZUJ27SLN2ZBHIRLUEIP2EHXMCTM2KN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499966.43_warc_CC-MAIN-20230209112510-20230209142510-00182.warc.gz\"}"} |
https://zbmath.org/?q=an:0797.11072 | [
"## On a class of differential-difference equations arising in number theory.(English)Zbl 0797.11072\n\nFunctions defined by a differential-difference equation of the type (1) $$uf ' (u) + af(u) + bf(u-1) = 0$$, where $$a$$ and $$b$$ are constants, arise frequently in number theory. Probably the best known examples are the Dickman function $$\\rho(u)$$ and the Buchstab function $$\\omega (u)$$. Functions satisfying (1) when $$a+b$$ is an integer arise in sieve theory and in this context have been investigated by various authors. The equation (1) with general coefficients $$a$$ and $$b$$ have been studied by H. Iwaniec [Acta Arith. 36, 171-202 (1980; Zbl 0435.10029)] and F. Wheeler [Trans. Am. Math. Soc. 318, 491-523 (1990; Zbl 0697.10035)].\nThe object of the present paper is to describe, for any given pair of complex coefficients $$(a,b)$$ with $$b \\neq 0$$ the structure and asymptotic behavior of the general solution to $$(1)$$. Equation (1) with the initial condition (2) $$f(u) = \\varphi(u)$$ $$(u_ 0 - 1 \\leq u \\leq u_ 0)$$, where $$\\varphi (u)$$ is any given continuous function on $$[u_ 0 - 1,u_ 0]$$, has a unique continuous solution $$f(u) = f(u;\\varphi)$$ for $$u \\geq u_ 0$$. In the paper, the authors construct a set of “fundamental” solutions $$F(u)$$ and $$F_ n(u)$$ $$(n \\in \\mathbb{Z})$$ and the solution $$f(u)$$ can be expressed as a convergent series $$f(u) = \\alpha F(u) + \\sum_{n \\in \\mathbb{Z}} \\alpha_ n F_ n (u)$$ with suitable coefficients $$\\alpha$$ and $$\\alpha_ n$$.\nThe functions $$F$$ and $$F_ n$$ are defined by means of a contour integral, which can be estimated rather precisely. To state the result we set $\\Phi (u,s) = {\\exp \\{-us + bI(s)\\}s^{a+b-1} \\over \\sqrt {2 \\pi u(1-1/s)}},$ where $$I(s) = \\int^ s_ 0{e^ z-1 \\over z} dz$$, and then we have the following result: For any fixed non-zero integer $$n$$ and $$u \\geq u_ 0 (\\varepsilon,n)$$ we have $F_ n(u) = (1+O({1 \\over u})) \\Phi (u, \\zeta_ n)$ where $$\\zeta_ n = \\zeta_ n (u/b)$$ is a certain complex solution of the equation $$e^ \\zeta = 1 + {u \\over b} \\zeta$$, and the implied constant depends at most on $$\\varepsilon$$ and $$n$$.\nThe principal result of the paper is as follows. Let $$\\varphi (u)$$ be a continuous function on $$[u_ 0 - 1,u_ 0]$$ and let $$f(u) = f(u;\\varphi)$$ be the unique continuous solution to (1) and (2). Then we have $f(u) = \\alpha F(u) + \\sum_{\\alpha \\in \\mathbb{Z}} \\alpha_ n F_ n(u), \\quad u>u_ 0+1 \\tag{3}$ where $$\\alpha = \\langle \\varphi, G \\rangle$$, $$\\alpha_ n = \\langle \\varphi, G_ n \\rangle$$ and the series in (3) is uniformly convergent for $$u \\geq u_ 0 + 1 + \\delta$$, for any fixed $$\\delta>0$$.\nMoreover, the authors derive several corollaries from the principal result.\n\n### MSC:\n\n 11N25 Distribution of integers with specified multiplicative constraints 34K99 Functional-differential equations (including equations with delayed, advanced or state-dependent argument)\n\n### Citations:\n\nZbl 0435.10029; Zbl 0697.10035\nFull Text:\n\n### References:\n\n [Al] K. Alladi,An Erdös-Kac theorem for integers without large prime factors, Acta Arith.49 (1987), 81–105. [AO] N. C. Ankeny and H. Onishi,The general sieve, Acta Arith.10 (1964), 31–62. · Zbl 0127.27002 [Be] J. J. A. Beenakker,The differential-difference equation {$$\\alpha$$}xf 1(x)+f(x)=0, Thesis, Eindhoven, 1966. · Zbl 0144.08702 [BC] R. Bellimann and K. Cooke,Differential-Difference Equations, Academic Press, New York 1963. [dB1] N. G. 
de Bruijn,On some Volterra integral equations of which all solutions are convergent, Nederl. Akad. Wetensch. Proc.53 (1950), 813–821. · Zbl 0038.26602 [dB2] N. G. de Bruijn,On the number of uncancelled elements in the sieve of Eratosthenes, Nederl. Akad. Wetensch. Proc.53 (1950), 803–812. · Zbl 0037.03001 [dB3] N. G. de Bruijn,On the number of positive integers x and free of prime factors >y, Nederl. Akad. Wetensch. Proc.54 (1951), 50–60. · Zbl 0042.04204 [dB4] N. G. de Bruijn,The asymptotic behavior of a function occurring in the theory of primes, J. Indian Math. Soc.15 (1951), 25–32. · Zbl 0043.06502 [dB5] N. G. de Bruijn,The difference-differential equation F’(x)=e {$$\\alpha$$}x+{$$\\beta$$}F(x), I, II, Nederl. Akad. Wetensch. Proc.56 = Indagationes Math.15 (1953), 449–464. · Zbl 0053.38703 [dB6] N. G. de Bruijn,Asymptotic methods in Analysis, Dover, New York 1981. · Zbl 0556.41021 [Bu] A. A. Buchstab,Asymptotic estimates of a general number-theoretic function (Russian), Mat. Sb.44 (1937), 1237–1246. [CG] A. Y. Cheer and D. Goldston,A differential delay equation arising from the sieve of Eratosthenes, Math. Comp.55 (1990), 129–141. [DHR] H. Diamond, H. Halberstam and H.-E. Richert,A boundary, value problem for a pair of differential delay equations related to sieve theory I, in:Analytic Number Theory (B. Berndtet al., eds.),Progress in Math. 85 (1990), Birkhaüser, Boston, pp. 133–157. · Zbl 0717.11036 [Di] K. Dickman,On the frequency of numbers containing primes of a certain relative magnitude, Ark. Mat. Astr. Fys.22 (1930), 1–14. · JFM 56.0178.04 [FGHM] J. Friedlander, A. Granville, A. Hildebrand and H. Maier,Oscillation theorems for primes in arithmetic progressions and for sifting functions, J. Amer. Math. Soc.4 (1991), no. 1, 25–86. · Zbl 0724.11040 [GR] F. Grupp and H.-E. Richert,Notes on functions connected with the sieve, Analysis8 (1988), 1–23. · Zbl 0657.10048 [He] D. Hensley,The convolution powers of the Dickman function, J. London Math. Soc.33 (1986), 289–307. · Zbl 0589.10045 [Hi] A. Hildebrand,The asymptotic behavior of the solutions of a class of differential-difference equations, J. London Math. Soc. (2)42 (1990), no. 1, 11–31. · Zbl 0675.34037 [Iw] H. Iwaniec,Rosser’s sieve, Acta Arith.36 (1980), 171–202. · Zbl 0435.10029 [JR] W.-B. Jurkat and H.-E. Richert,An improvement of Selberg’s sieve method, I, Acta Arith.11 (1965), 217–240. · Zbl 0128.26902 [Ma] H. Maier,Primes in short intervals, Michigan Math. J.32 (1985), 221–225. · Zbl 0569.10023 [Ten] G. Tenenbaum,Introduction à la théorie analytique et probabiliste des nombres, Revue de l’Institut Elie Cartan13, Département de Mathématiques de l’Université de Nancy 1 (1990). [Wh] F. Wheeler,Two differential-difference equations arising in number theory, Trans. Amer. Math. Soc.318 (1990), 491–523. · Zbl 0697.10035\nThis reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching."
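,
"Equation (1) with initial condition (2) can be integrated numerically by the method of steps; a rough Python sketch (mine, not from the paper) for the Dickman case a = 0, b = 1, phi = 1 on [0, 1], where rho(2) = 1 - ln 2 ≈ 0.3069:

```python
import numpy as np

a, b = 0.0, 1.0
h = 1e-4
grid = np.arange(0.0, 3.0 + h, h)
f = np.ones_like(grid)       # phi(u) = 1 on [0, 1]
lag = int(round(1.0 / h))    # index offset realizing f(u - 1)

for i in range(lag, len(grid) - 1):  # simple forward-Euler step
    u = grid[i]
    # from (1): f'(u) = -(a f(u) + b f(u-1)) / u
    f[i + 1] = f[i] - h * (a * f[i] + b * f[i - lag]) / u

print(f[int(round(2.0 / h))], f[int(round(3.0 / h))])  # ~0.3069, ~0.0486
```"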
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.69263494,"math_prob":0.9990418,"size":7068,"snap":"2022-27-2022-33","text_gpt3_token_len":2408,"char_repetition_ratio":0.11735561,"word_repetition_ratio":0.016713092,"special_character_ratio":0.37733448,"punctuation_ratio":0.20563196,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999707,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T02:44:26Z\",\"WARC-Record-ID\":\"<urn:uuid:b5958014-d683-4280-9583-52c718d0770c>\",\"Content-Length\":\"61102\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3c213257-b3d9-46c4-9e28-ac40d1760995>\",\"WARC-Concurrent-To\":\"<urn:uuid:f51c86ff-5744-4983-a74e-503cf333090e>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/?q=an:0797.11072\",\"WARC-Payload-Digest\":\"sha1:YIBTO7SAYQOSGKLBK2QEZR3T64TYAB3H\",\"WARC-Block-Digest\":\"sha1:5CRZ3TAZBVW6XXCGNPHGL5G6FJL6RSCH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103347800.25_warc_CC-MAIN-20220628020322-20220628050322-00253.warc.gz\"}"} |
https://community.ibm.com/community/user/powerdeveloper/blogs/swati-karmarkar/2022/09/07/julia-on-ibm-power | [
"# Programming Languages on Power\n\nView Only\n\n## Explore the benefits of using Julia on IBM Power then give it a try\n\n####",
null,
"By Swati Karmarkar posted Wed September 07, 2022 09:57 AM\n\nJulia is a general-purpose programming language with a special focus on scientific computing. Julia is as fast as C programming language in many cases while remaining dynamic like Python.\n\nWe are happy to announce that as of Dec 19, 2021, Julia 1.6.2 is supported on the IBM Power platform, \"Tier-3\" level. For more information about Tier 3, refer to the Julia download page.\n\nThe popularity of Julia has been soaring since it first became available in 2012, and it is currently in the top 30 programming languages as per the TIOBE index.\n\nRefer to the following interesting articles about Julia:\n\n## Benefits of Julia on IBM Power\n\nDesigned with machine learning (ML) in mind, Julia focuses on the scientific computing domain and its need for parallel, distributed, and intensive computation tasks. Julia has a powerful suite of libraries for developing artificial intelligence which includes general linear models, decision trees, clustering, Flux for deep learning, text analysis for natural language processing, and so on. Julia benefits to Power in the following ways:\n\n• Julia is an easy programming language to learn. Its syntax is similar to Python and MATLAB, and can be easily adapted on Power. The following code snippet shows the similarity among Julia, MATLAB, and Python languages:\n\nDescription MATLAB syntax Python syntax Julia syntax\nImport packages Nil import numpy as np\nimport matpotlib.pyplot as plt\nImport Pkg\nusing Plots\nDefine the variable for X axis x = 0 : pi/10 : 2pi; x=np.linespace(0, 2np.pi, 21) x = 0 : pi/10 : 2*pi;\nDefine the variable for Y axis y = sin(x); y = np.sin(x) y = sin.(x);\nCode snippet to plot the graph plot(x,y)\ntitle(‘My first plot’)\nxlabel(‘x-axis’)\nylabel(‘y-axis’)\nplt.plot(x,y)\nplt.title(‘My first plot’)\nplt.xlabel(‘x-axis’)\nply.ylabel(‘y-axis)\nplot(x,y\ntitle=”My first plot”,\nxaxis=(“x-axis”),\nyaxis=(“y-axis”))\n\n• Julia integrates well with the existing code and platforms such as IBM Power.\n\n• Julia combines the familiarity and ease of use of Python and R with the speed and the strength of C, C++, or Java. So programmers no longer need to estimate models in one language and reproduce them in a faster production language.\n\n## What did we do and how did it work?\n\n• The first time we tried to build Julia on IBM Power, it failed due to errors. Later, we were able to trace back to low level virtual machine (LLVM) and those were fixed in the subsequent LLVM releases. See the pull request (PR) for LLVM release fixes for the details.\n• We encountered some issues related to code generation, which were traced and corrected by the LLVM team. Refer to the following for additional information:\n• Cbc and JuMP libraries did not work with Julia due to interfaces that were not updated for Power. This has been fixed (PR for Cbc and JuMP fixes).\n• We added support for detecting the correct CPU ID that was implemented in the upstream code (PR for detecting correct CPU ID).\n• There are miscellaneous fixes for Power in Julia. See the following PRs for more information:\n\n## Try Julia on Power\n\nPerform the following steps to install and run Julia on a Power virtual machine (VM):\n1. Download the latest stable build of Julia. Enter the following command on the Power system to get the TAR file for Power (ppc64le):\n``````wget https://julialang-s3.julialang.org/bin/linux/ppc64le/1.6/julia-1.6.2-linux-ppc64le.tar.gz\n\n``````\n2. 
Extract the .tar file.\n``tar -xvzf julia-1.6.2-linux-ppc64le.tar.gz``\n3. Enter the following commands to run the Julia binary, bin/julia, inside the extracted directory:\n``````[user1@p006vm71 ~]\$ cd julia-1.6.2\n[user1@p006vm71 julia-1.6.2]\$ bin/julia``````\n\nOr,\n``````[user1@p006vm71 julia-1.6.2]\$ export PATH=\$PATH:~/julia-1.6.2/bin\n[user1@p006vm71 julia-1.6.2]\$ julia``````\n\nThe command prompt of the Julia console looks as follows:",
null,
"## How can you use Julia?\n\nJulia can be used to run scripts and many other commands that can provide simple to complex output. Following are some of the examples:\n\n• Use the following Julia command to find the platform and version information:\n\n``````julia>\njulia> versioninfo()\nJulia Version 1.8.0-DEV.889\nCommit f14e44f38b* (2021-11-03 05:54 UTC)\nPlatform Info:\nOS: Linux (ppc64le-redhat-linux)\nCPU: POWER8 (architected), altivec supported\nWORD_SIZE: 64\nLIBM: libopenlibm\nLLVM: libLLVM-12.0.1 (ORCJIT, pwr8)``````\n• Run the following Julia code snippet to find the CPU information:\n\n``````julia> ccall(:jl_dump_host_cpu, Cvoid, ())\nCPU: pwr8\nFeatures:\njulia>``````\n\n• Run a .jl file to get the required output. Following are the steps to run a sample .jl file:\n\n1. Write the following code in Julia (that plots a graph on the X and Y axis) and save it as plots.jl:\n``````import Pkg\n\nusing Plots\n\n# plot some data\nplot([cumsum(rand(500) .- 0.5), cumsum(rand(500) .- 0.5)])\n\n# save the current figure\nsavefig(\"plots.svg\")``````\n2. Run plots.jl as:\n``julia plots.jl``\n\nThe output is as follows:",
null
] | [
null,
"https://higherlogicdownload.s3.amazonaws.com/IMWUC/Images/ProfileImageDefault/T6FlabNTze4G0FTAa1dq_EvqA7zSoTduJ6W40ZlyZ_user_icon_lg_200_200.jpg",
null,
"https://developer.ibm.com/developer/default/blogs/julia-supported-on-power/images/img1.jpg",
null,
"https://developer.ibm.com/developer/default/blogs/julia-supported-on-power/images/img2.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8450063,"math_prob":0.42914176,"size":5152,"snap":"2022-40-2023-06","text_gpt3_token_len":1364,"char_repetition_ratio":0.11655012,"word_repetition_ratio":0.0049566296,"special_character_ratio":0.25679347,"punctuation_ratio":0.14719848,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96974766,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T07:23:43Z\",\"WARC-Record-ID\":\"<urn:uuid:a2b1d0fb-0293-4564-ba74-68572a9a788b>\",\"Content-Length\":\"281009\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:87480ba7-06e5-43da-9cea-d4e5d4573bd9>\",\"WARC-Concurrent-To\":\"<urn:uuid:35b0b7ff-5bfa-4c22-8452-813e91ab8e2e>\",\"WARC-IP-Address\":\"104.70.59.108\",\"WARC-Target-URI\":\"https://community.ibm.com/community/user/powerdeveloper/blogs/swati-karmarkar/2022/09/07/julia-on-ibm-power\",\"WARC-Payload-Digest\":\"sha1:4YBQ55UG2UER5RRC2TCUBT6IFZWDQC4T\",\"WARC-Block-Digest\":\"sha1:JMTTZVYSY6OQOH5EDH2JY7GDNS4OVCMR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500392.45_warc_CC-MAIN-20230207071302-20230207101302-00158.warc.gz\"}"} |
http://ncl.ucar.edu/Document/Functions/Built-in/escovc.shtml | [
"",
null,
"NCL Home > Documentation > Functions > General applied math, Statistics\n\nescovc\n\nComputes sample cross-covariances at lag 0 only.\n\nPrototype\n\n```\tfunction escovc (\nx : numeric,\ny : numeric\n)\n\nreturn_val : numeric\n```\n\nArguments\n\nx\n\nAn array of any numeric type or size. The rightmost dimension is usually time.\n\ny\n\nAn array of any numeric type or size. The rightmost dimension is usually time. The size of the rightmost dimension must be the same as x.\n\nReturn value\n\nA scalar value if x and y are one-dimensional. The same size as x if either x or y are multi-dimensional. Double if x is double, float otherwise.\n\nDescription\n\nComputes sample cross-covariances at lag 0 only. If a lagged covariance is required, use esccv. Missing values are allowed.\n\nAlgorithm:\n\n``` cov = SUM [(X(t)-Xave)*(Y(t)-Yave)]/(NT-1)\n```\nThe dimension sizes(s) of c are a function of the dimension sizes of the x and y arrays. The following illustrates dimensioning:\n``` x(N), y(N) a scalar\nx(N), y(K,M,N) c(K,M)\nx(I,N), y(K,M,N) c(I,K,M)\nx(J,I,N), y(L,K,M,N) c(J,I,L,K,M)\n```\nspecial case when dimensions of all x and y are identical:\n``` x(J,I,N), y(J,I,N) c(J,I)\n```\n\nExamples\n\nExample 1\n\nThe following will calculate the cross-covariance for a two one-dimensional arrays x(N) and y(N).\n\n``` ccv = escovc(x,y) ; ccv is a scalar\n```\nExample 2\n\nThe following will calculate the cross-covariance for one two-dimensional array y(lat,lon,time) and one one-dimensional array x(time).\n\n``` ccv = escovc(x,y) ; ccv(lat,lon)\n```\nExample 3\n\nConsider x(neval,time) and y(lat,lon,time)\n\n``` ccv = escovc(x,y) ; ccv(neval,lat,lon)\n```"
] | [
null,
"http://ncl.ucar.edu/Images/NCL_NCAR_NSF_banner.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6196537,"math_prob":0.99956316,"size":1485,"snap":"2019-26-2019-30","text_gpt3_token_len":436,"char_repetition_ratio":0.13031736,"word_repetition_ratio":0.13513513,"special_character_ratio":0.25925925,"punctuation_ratio":0.18956044,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999261,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-20T23:50:50Z\",\"WARC-Record-ID\":\"<urn:uuid:69176bc9-bff3-41ad-9854-454ccbef564a>\",\"Content-Length\":\"16346\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:847e21df-37a2-43b8-877e-3df26df63cde>\",\"WARC-Concurrent-To\":\"<urn:uuid:ff3237c2-c5f0-4cf9-889c-7749d656bbd0>\",\"WARC-IP-Address\":\"128.117.225.48\",\"WARC-Target-URI\":\"http://ncl.ucar.edu/Document/Functions/Built-in/escovc.shtml\",\"WARC-Payload-Digest\":\"sha1:PKPP3ZYKOCCQ7BM65OXUTEOYV3QAUFAT\",\"WARC-Block-Digest\":\"sha1:RLOGPBSK2OWKN5HHP72OJIVHPD7OOSAW\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999291.1_warc_CC-MAIN-20190620230326-20190621012326-00394.warc.gz\"}"} |
http://www.webconversiononline.com/all-weight-conversion.aspx?from=qianchina&to=kilogram | [
"",
null,
"Length Conversion Weight Conversion Temperature Conversion Date & Time Conversion Volume Conversion Area Conversion Speed Conversion Scientific Conversion Health Calculators Other Calculators Reference Dictionary\n Home Tools Topics Mobile Version\n\n## Qian (China) to Kilogram\n\nHome >> Weight",
null,
"Convert Weight from Qian (China) to Kilogram or to nearly 225 different Metric, English, Imperial and local weight and mass measurement units use in European, Asian and American regionsQian (China) and Kilogram are the units to measure Weight, where 1 Qian (China) = 0.005 Kilogram\n Enter weight and click 'Convert'\n Too many units in conversion list? Try Common Weight Conversion How to use this calculator...• Use current calculator (page) to convert Weight from Qian (China) to Kilogram. Simply enter Weight quantity and click ‘Convert’. Both Qian (China) and Kilogram are Weight measurement units.• For conversion to different Weight units, select required units from the dropdown list (combo), enter quantity and click convert• For very large or very small quantity, enter number in scientific notation, Accepted format are 3.142E12 or 3.142E-12 or 3.142x10**12 or 3.142x10^12 or 3.142*10**12 or 3.142*10^12 and like wise View Weight conversion table\n\n Are you looking for ... List of Supported Conversion Types (sorted) Short Info Lookup & References List of Metric, English & Local Units Definition of different measurement Units Conversion Matrix Reference Matrix\n\n List of Supported Conversion Types ... Acceleration Angle Angular Acceleration Angular Velocity Area Blood Sugar Clothing Size Computer Storage Unit Cooking Volume Cooking Weight Data Transfer Rate Date Density Dynamic Viscosity Electric Capacitance Electric Charge Electric Conductance Electric Conductivity Electric Current Electric Field Strength Electric Potential Electric Resistance Electric Resistivity Energy Energy Density Energy Mass Euro Currency Fluid Concentration Fluid Flow Rate Fluid Mass Flow Rate Force Frequency Fuel Economy Heat Capacity Heat Density Heat Flux Density Heat Transfer Coefficient Illumination Image Resolution Inductance Kinematic Viscosity Length Luminance (Light) Light Intensity Linear Charge Density Linear Current Density Magnetic Field Strength Magnetic Flux Magnetic Flux Density Magnetomotive Force Mass Flux Density Molar Concentration Molar Flow Rate Moment of Inertia Number Permeability Power Prefix Pressure Radiation Radiation Absorbed Radiation Exposure Radioactivity Shoe Size Sound Specific Volume Speed Surface Charge Density Surface Current Density Surface Tension Temperature Thermal Conductivity Thermal Expansion Thermal Resistance Time Torque Volume Volume Charge Density Water Oil Viscosity Weight\n\n More Topics ... Area Astrology Baby Names Banking Birth Control Chemistry Chinese Astrology City Info Electricity Finance Fluids Geography Health Length Magnetism Pregnancy Radiation Scientific Speed Technology Telephone Temperature Time & Date Train Info Volume Weight World Clock Zodiac Astrology Other"
] | [
null,
"http://www.webconversiononline.com/sysimages/web-conversion-online.jpg",
null,
"http://www.webconversiononline.com/image/weightsmall.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7650466,"math_prob":0.796817,"size":422,"snap":"2020-45-2020-50","text_gpt3_token_len":116,"char_repetition_ratio":0.18421052,"word_repetition_ratio":0.025316456,"special_character_ratio":0.3056872,"punctuation_ratio":0.08955224,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9540722,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-25T21:33:49Z\",\"WARC-Record-ID\":\"<urn:uuid:0e844a75-7861-4f44-90f9-8f9db18e9da0>\",\"Content-Length\":\"89920\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b021e8e-8b54-40c7-9fa3-ec86d13d4e77>\",\"WARC-Concurrent-To\":\"<urn:uuid:4cdcb8e2-75fb-4026-93d0-35bd4fe5af3b>\",\"WARC-IP-Address\":\"182.50.130.37\",\"WARC-Target-URI\":\"http://www.webconversiononline.com/all-weight-conversion.aspx?from=qianchina&to=kilogram\",\"WARC-Payload-Digest\":\"sha1:2LP4O7BGCWBOVDR24IL2455G4KJ4DJAU\",\"WARC-Block-Digest\":\"sha1:4CIXZ3ZITUCDOHV675GYQXEYOM4QC4S5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107890028.58_warc_CC-MAIN-20201025212948-20201026002948-00562.warc.gz\"}"} |
https://3010tangents.wordpress.com/tag/nine-chapters-on-the-mathematical-art/ | [
"# 3010 Sea Island in the The Nine Chapters on the Mathematical Art\n\nI never read anything about The Nine Chapters on the Mathematical Art before taking this History of Math class. I heard about this book when I was a middle school student. It is interesting that I started paying attention 13 years after I graduated from a middle school. There is a chapter called “The Sea Island Mathematical Manual” in the book. I found this topic very interesting because I want to know how our ancient people solved real world problems without using any modern technology. In this blog, I will try to explore more about these sea island problems. (In this blog, I will use Nine Chapters to substitute the book title and “Sea Island” to substitute the chapter.).\n\nBefore talking about the sea island questions, I want to briefly talk about the book’s history. Nine Chapters was formed in the Han dynasty and it was a Chinese mathematics book that composed by several generations of scholars from the 10th – 2nd century BCE. This book has 246 real world questions, which relate to agriculture, business, engineering and solving equations, etc. It divides those questions into nine chapters. The Nine Chapters flourished between the Three Kingdoms period and earlier Tang dynasty in China. At that time this book was the primary math textbook in China and it also spread to Korea and Japan. The Nine Chapters was undoubtedly one of the cornerstones of Chinese modern math.\n\n“The Sea Island” is one of the extension chapters of the book that was written by Liu Hui. This chapter has nine problems: surveys of Sea Island, height of a hill top pine tree, the size of a square city wall viewed afar, the depth of a ravine, the height of a building on a plain seen from a hill, the breadth of a river-mouth seen from a distance on land, the depth of a transparent pool, the width of a river as seen from a hill, and the size of a city seen from city. We can see all these questions are very similar to each other. Let’s take a look at the first question. Survey of sea island says “there is a sea island, and set up two three zhang (zhang is a distance and 1 zhang equals 3.3 meters) poles at one thousand steps apart and set the two poles and the island in a straight line. Step back from the front post 123 steps, with eye on the ground level, the tip of the pole is on a straight line with the peak of island. Step back 127 steps from the rear pole, eye on ground level also aligns with the tip of pole and tip of island. What is the height of the island, and what is the distance to the pole?” (Wikipedia)",
null,
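"Before going through Liu Hui’s derivation below, it helps to plug the problem’s numbers into the two formulas he obtains. Here is a small Python sketch; the traditional unit conversions (1 zhang = 10 chi, 1 bu = 6 chi, 1 li = 300 bu) are my assumption, not stated in the blog post:

```
# Liu Hui's sea island survey, worked numerically.
pole = 3 * 10 / 6            # pole height CD: 3 zhang = 30 chi = 5 bu
poles_apart = 1000           # DF, in bu (steps)
front_offset = 123           # DG, retreat from the front pole, in bu
rear_offset = 127            # FH, retreat from the rear pole, in bu

diff = rear_offset - front_offset
height = pole * poles_apart / diff + pole      # AB = CD*DF/(FH-DG) + CD
distance = front_offset * poles_apart / diff   # BD = DG*DF/(FH-DG)
print(divmod(height, 300))     # (4.0, 55.0)    -> 4 li 55 bu, the classical answer
print(divmod(distance, 300))   # (102.0, 150.0) -> 102 li 150 bu
```
",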
"According to Nine Chapters, Liu Hui could not measure the distance of the front pole to the island, so he sets up the two poles assuming they have the same height. Liu Hui gave two formulas: height of island AB = CD * DF / (FH – DG) + CD and distance of front pole to island BD = DG * DF / (FH – DG). How do we know these two formulas are correct? I thought about these two formulas but I could not convince myself until I read a proof about them. We have to take a look at Liu Hui’s theorem for the survey first. In the above figure he proved that FH * AI = IB * BF. He called it “‘In-out-complement’ principle which showed that the area of two inscribed rectangles in the two complementary right angle triangles have equal area” (Wikipedia). This proof is very straightforward if we know his “In-out-complement” principle. From the above figure, we know □EJ = □EB and □CK = □CB, then we use □EJ – □CK = □DE. Therefore, we know the height of island formula is correct. How do we get the distance of the front pole to island? That’s from □CB = □CK. An interesting proof huh?\n\nThe rest of eight problems used the “In-out-complement” principle to be solved too. I really like Liu Hui’s way to solve those real world problems, especially without using modern electronic technology. As you can see his ancient way to solve these problems was extremely important for geographical measurement and navigation industrial. If you want to learn more about rest of the questions, I highly recommend you to read the chapter 8 on the book Nine Chapters to dig more.\n\nReferences:\n\nShen K., John NC, Anthony W.-C, The Nine Chapters on the Mathematical Art. 1999.\n\nJiMin L, 九章算术中的比率理论.\n\n# Zu Chongzhi and his mathematics\n\nZu Chongzhi was a Chinese mathematician and astronomer during the Liu Song and Southern Qi Dynasties. He did a lot of famous mathematics during his life. His three most important contributions were studying The Nine Chapters on the Mathematical Art, calculating pi, and calculating the volume of sphere.\n\nAs we know, The Nine Chapters on the Mathematical Art is the most famous book in the history of Chinese mathematics. In ancient China, most people could not understand “The Nine Chapters on the Mathematical Art”. Zu Chongzi read the book and then he used his comprehension to explain the formulas of the book. Zu Chongzhi, and his father wrote the “Zhui Shu”(缀术) together. The book made The Nine Chapters on the Mathematical Art easier to read. And the book also added some important formula by Zu. For example, the calculation of pi and the calculation of sphere volume. “Zhui Shu” also become math textbook at the Tang Dynasty Imperial Academy. Unfortunately, the book was lost in the Northern Song Dynasty.\n\nZu’s ratio, also called milü is named after Zu Chongzhi. Zu’s ratio was an early accurate approximation of pi. It was recorded in the “Book Of Sui” and “Zhui Shu”. (Book Of Sui is the official history of the Sui dynasty). According to the “Book Of Sui”, Zu Chongzhi discovered that pi is between 3.14159276 and 3.14159277. Today, we know the actual number is in accord with Zu’s ratio. But “Book Of Sui” did not record the method used to get the number. Most historians and mathematicians think Zu Chongzhi used Liu Hui’s π algorithm to get the number. Liu Hui’s algorithm means approximating circle with a 24,576 sided polygon. 
Japanese mathematician Yoshio Mikami pointed out: “22/7 was nothing more than the π value obtained several hundred years earlier by the Greek mathematician Archimedes; however, milü π = 355/113 could not be found in any Greek, Indian or Arabian manuscripts, not until 1585, when the Dutch mathematician Adriaan Anthoniszoon obtained this fraction. The Chinese possessed this most extraordinary fraction over a whole millennium earlier than Europe.” Hence Mikami strongly urged that the fraction 355/113 be named after Zu Chongzhi as Zu’s fraction. (Yoshio Mikami)",
null,
"Image:Zu Chongzhi’s method (similar to Cavalieri’s principle) for calculating a sphere’s volume includes calculating the volume of a bicylinder. Author: Chen Bai, via WIkimedia Commons.\n\nZu Chongzhi’s other important contribution was calculation volume of the sphere. Together with his son Zu Geng, Zu Chongzhi used an ingenious method to determine the volume of the sphere.(Arthur Mazer). In The Nine Chapters on the Mathematical Art, the author used Steinmetz solid to get the volume of the sphere. The solid common to two (or three) right circular cylinders of equal radii intersecting at right angles is called the Steinmetz solid.\n\nBut the book did not give the formula of how to get the volume of the sphere. Zu Chongzhi used “Zu Geng principle” (another name: Cavalieri’s principle) to show the volume of the sphere formula is (π*d³)/6. In order to commemorate the fact that Zu Chongzhi found the significant contribution of the principle with his son, people called the principle “Zu Geng principle”. “Zu Geng principle” is the same as “Cavalieri’s principle”, but “Zu Geng principle” is earlier than “Cavalieri’s principle”. “Cavalieri’s principle” means two solids of equal altitude, the sections made by planes parallel to and at the same distance from their respective bases are always equal, then the volumes of the two solids are equal.(Kern and Bland 1948, p. 26).\n\nWork cited:\n\nYoshio Mikami , (1947). Development of Mathematics in China and Japan. 2nd ed. : Chelsea Pub Co;.\n\nArthur Mazer , (2010). The Ellipse: A Historical and Mathematical Journey. 1st ed. : Wiley;\n\nKern, W. F. and Bland, J. R. “Cavalieri’s Theorem” and “Proof of Cavalieri’s Theorem.” §11 and 49 in Solid Mensuration with Proofs, 2nd ed. New York: Wiley, pp. 25-27 and 145-146, 1948.\n\nhttp://en.wikipedia.org/wiki/Cavalieri%27s_principle\n\nhttp://en.wikipedia.org/wiki/Zu_Chongzhi\n\n# An introduction to “Nine Chapters on the Mathematical Art(九章算术)”",
null,
"The “Nine Chapters on the Mathematical Art” is an ancient Chinese mathematics book. It is one of the ten most important arithmetic books of ancient China. Although it is hard to find the accurate publishing time of this book, by the historical record, it had been published in 263 AD (Han dynasty). In the following dynasties, Chinese mathematicians kept revising it and supplementing it. Thus “Nine Chapters on the Mathematical Art” can be also seen as the essence of the ancient Chinese arithmetic. By the Qing dynasty (1644-1912), most Chinese mathematicians started studying math from this book. In the Tang dynasty (618-907) and Song dynasty (960-1279) “nine chapters of arithmetic” was the professional textbook by the government provision. Also in 1084 AD, the Chinese government published the printed version “Nine Chapters on the Mathematical Art”, which made it become the earliest printed version mathematics textbook in the world. As a famous mathematics textbook, “Nine Chapters on the Mathematical Art” was introduced in Japan and Korea in the Sui dynasty (581-618) and right now, it has been translated into Russian, German, French and other languages.\nThe content of “Nine Chapters on the Mathematical Art” is plentiful. It was written in “problems and solutions” form including 246 problems related to production and practical life, and they were distributed into nine chapters. Although many problems have several solutions, this book does not contain any proof, which is a common disadvantage of most Chinese ancient mathematics textbooks. The first chapter is called “Fang tian (方田)”. It is about computing the area of various plane geometrical figures such as sector, annulus arch and so on. Also in this chapter, it refers to the arithmetic of fractions, which is the earliest record of textbook referring to fractions. The second and third chapter are called “Su mi(粟米)” and “Cui fen (衰分)” which are about proration problems. The fourth chapter is named as “Shao guang (少广) ”. It narrates the methods of computing the length of a edge when you get the area of the figure. This chapter also introduces the method of extraction of square and cubic roots. The fifth chapter “Shang gong (商功)”, gives the formulas to compute the volume of many objects. The sixth chapter “Jun shu(均输)” focuses on collecting taxes. But it also involves in the conceptions direct, inverse compound proportions and other proportion theory. In western countries, these conceptions appeared after the 16th century. The seventh chapter “Ying bu zu (盈不足)” discusses the problems of profit and loss. Some solutions from this chapter are very advanced in the world. The eighth chapter is called “equation(方程)”. It uses the method “separation coefficient” to represent systems of liner equations, which is similar to matrices. It also gives the earliest complete solution of systems of liner equations. In the solutions, it even introduces the concepts of negatives. This is the first time in human history to expand the number system from positive numbers systems. In the last chapter “Gou gu(勾股) ”, it uses “Gou gu theorem” (also known as Pythagorean theorem in the west) to solve some problems which are related to practical life. Some stuff in this chapter are very advanced, the last problem of this chapter gives a formula. In the western world, this formula was put forward by American mathematician L.E.Dickson at the end of 19th century.\n“Nine Chapters on the Mathematical Art” determined the framework of ancient Chinese mathematics. 
It focuses on computation for practical problems and has had a very profound effect on the mathematics that followed.",
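"Chapter 8’s “separated coefficients” array is essentially Gaussian elimination. As a small sketch, here is its famous first problem as it is usually stated (three grades of grain; bundle counts on the left, total yields in dou on the right), solved with numpy:

```
import numpy as np

# 3x + 2y + z = 39,  2x + 3y + z = 34,  x + 2y + 3z = 26
A = np.array([[3.0, 2.0, 1.0],
              [2.0, 3.0, 1.0],
              [1.0, 2.0, 3.0]])
b = np.array([39.0, 34.0, 26.0])
print(np.linalg.solve(A, b))   # [9.25 4.25 2.75] dou per bundle
```
"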
] | [
null,
"https://3010tangents.files.wordpress.com/2015/03/image1.png",
null,
"https://i2.wp.com/upload.wikimedia.org/wikipedia/commons/e/e9/Sphere_volume_derivation_using_bicylinder.jpg",
null,
"https://u0897820.files.wordpress.com/2014/10/562c11dfa9ec8a1384e92151f703918fa1ecc094-jpg.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9513978,"math_prob":0.85435975,"size":4185,"snap":"2019-26-2019-30","text_gpt3_token_len":954,"char_repetition_ratio":0.11384836,"word_repetition_ratio":0.015645372,"special_character_ratio":0.22246116,"punctuation_ratio":0.094451,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9885388,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,4,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-18T18:21:08Z\",\"WARC-Record-ID\":\"<urn:uuid:caf58d9c-b883-439c-8eb1-3c64b7dc75d2>\",\"Content-Length\":\"64851\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:27439c5f-feb1-4ba3-a01c-70071fb2d03d>\",\"WARC-Concurrent-To\":\"<urn:uuid:f9f974f5-7cb6-4160-9927-0e860a20e7f0>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://3010tangents.wordpress.com/tag/nine-chapters-on-the-mathematical-art/\",\"WARC-Payload-Digest\":\"sha1:ZNSBDFCOCMGYCI7JBOPAPZT27FAH5WN5\",\"WARC-Block-Digest\":\"sha1:2B67CSXTL2EVX6BBEWCMHPNI546DGZJ2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525699.51_warc_CC-MAIN-20190718170249-20190718192249-00207.warc.gz\"}"} |
http://www.mellekoning.nl/index.php/2017/02/ | [
"Categories\n\n## You have 10 red and 10 black marbles…\n\nYou have 10 red and 10 black marbles. In how many ways can you put the 20 marbles in a row so that no more than two marbles of the same colour lie next to each other?\n\nAn example of a valid sequence would be:",
null,
"Can you reason your way out of this one or would you write a program that can give you the solution? Can you write this program?\n\n### First reasoning\n\nBefore starting a program, let’s first reason about the problem. It might just be that a program is not needed at all 🙂\n\nThe red and black marbles are kind of ‘exchangeable’ in the sense that in the above example picture there is a similar solution when all the red and black marbles would change colour. So, how to discover all the possibilities with 20 marbles?\n\nI was thinking of starting off with just one marble, and than reason my way onwards in a tree-search structure way, by just adding a marble and see if I still have a valid sequence (line). Something like this:\n\nFirst of all, for clarity reasons let me use simple ‘X’ and ‘O’ instead of red/blacks.\n\n```O\n```\n\nOne O, that’s just the start, let’s add the O and X possibilities:\n\n```OO\nOX\n```\n\nSo, I’m just creating all possible lines that I can create. From the start of one ‘O’, I now have two possible continuations of lines.\nThe number of possibilities doubled, of course. Of course I know that I could also have started with ‘X’, but I’m just exploring a bit first. So, again, let’s just add O’s and X’s to the two existing possibilities:\n\n```OOO <- not allowed!\nOOX\nOXO\nOXX\n```\n\nThe first time extending a marble 'ends a line' because of the rule that we can't have three marbles of the same colour in a row!\nOnly three possible lines to extend left.\n\nSo far so good, it looks like adding marbles from the stack of 10 O's and X's is simple. However, doing that for all possibilities seems quite a task as the number of possible 'branches' of this tree probably grows very fast.\n\nLet's just again extend the lines of three above, to lines with lengths of four, by adding O and X to the already existing possible lines above, of course I do not have to explore the OOO line any further:\n\n```OOXO\nOOXX\nOXOO\nOXOX\nOXXO\nOXXX <- not allowed!\n```\n\nSo, while the lines of three marbles extends with just one marble to a line of four marbles, there is only one of the lines that is not allowed for having three X's in a row. Means that the number of possibilities with three marbles (3) almost doubled, but not quite doubled, to 5.\n\nWe probably need to find a math formula, something like \"the length of the line calculates into a number of possibilities\" but I did not find it yet. What I do seem to have figured out is that when a line ends with two marbles of the same colour, there is only one possible way to extend the line namely with a marble of the other colour, while if a line ends in different colours, the possible next sequence can either contain an X or an O.\n\nInstead of extending the line, I could start writing down the number of possible lines that can be made from that line. Like this:\n\n```OOXO (2, as both adding O and X would still give a valid line)\nOOXX (1, only adding O would result in a valid line)\nOXOO (1)\nOXOX (2)\nOXXO (2)\n```\n\nOf course, the lines that end either in two O's or two X's can only be extended with one marble of the other colour, so those get the number 1. 
The lines that end with two differently coloured marbles can be extended by either an 'O' or an 'X', so those get the number two.

Now the interesting thing is that wherever I wrote the number 2, exactly one of the two extensions again ends in a double OO or double XX, so we can write the next numbers, for lines of six marbles, next to the possibilities above:

```
OOXO (2) (3)
OOXX (1) (2)
OXOO (1) (2)
OXOX (2) (3)
OXXO (2) (3)
```

The above shows the possibilities for lines with a length of six marbles. We do not see all the possibilities anymore, but we do see their number. When we add up all the numbers in the last column, we have the number of possible lines of six marbles, being 3+2+2+3+3=13. We have to keep in mind this is only for the lines starting with 'O', so we have to double that amount to account for the inverse lines starting with 'X'. That would be 26 for lines of length 6.

So what do we have so far, if we look at line length and the resulting number of possible lines?

```
1 -> 2  (O and X, two possibilities, 2^1)
2 -> 4  (OX, OO, XO, XX, 2^2)
3 -> 6  (not OOO, XXX so 2^3 - 2)
4 -> 10 (the five lines starting with O above, doubled)
5 -> 16 (2+1+1+2+2 = 8 starting with O, doubled)
6 -> 26 (3+2+2+3+3 = 13 starting with O, doubled)
```

To be honest, I still have not figured out a nice formula.. it must be something like each extra marble almost doubling the possibilities, but not quite...

A program exploring all possibilities might be easier to write; let the computer figure out how many possibilities there are. How does your program find all possibilities?

```
package main

import \"fmt\"

// Question: You have 10 red and 10 black marbles. In how many ways can you
// put the 20 marbles in a row so that no more than two marbles of the same
// colour lie next to each other?
//
// An example of a valid sequence would be:
// OOXOXXOXOXXOOXXOXXOO

const maxLineLength = 20

func main() {
	count := CountMarbleLines()
	fmt.Println(\"Number of possible lines\", count)
}

func CountMarbleLines() int {
	lineLength := 0
	amountO := 10
	amountX := 10
	return PossibleLines(amountO, amountX, lineLength, 0, 0)
}

// Counts possible lines by adding either O or X to the line,
// returning 0 when the line is invalid and 1 when a full valid line is found.
func PossibleLines(amountO int, amountX int, lineLength int, timesO int, timesX int) int {
	// lines that have three marbles of the same colour in sequence won't count
	if timesO > 2 || timesX > 2 {
		return 0
	}

	// recursive stop condition
	if lineLength == maxLineLength {
		return 1 // a valid line has been found
	}

	// we can add either O or X to the line, tracked in timesO/timesX
	addO, addX := 0, 0
	if amountO > 0 { // if we still have O marbles to spend...
		addO = PossibleLines(amountO-1, amountX, lineLength+1, timesO+1, 0)
	}
	if amountX > 0 { // if we still have X marbles to spend...
		addX = PossibleLines(amountO, amountX-1, lineLength+1, 0, timesX+1)
	}
	return addO + addX
}
```

Note that the above is written in Go, which is also easily portable to .NET C#.

When you run the above program you will get the answer to the question!

Here's the .NET C# version, which will also print out all the possibilities:

```
using System;
using System.Text;

class Program
{
    static void Main(string[] args)
    {
        Console.Out.WriteLine(\"Starting to count..\");
        int count = MarbleCount();
        Console.Out.WriteLine(\"Total number found: \" + count);
    }

    /// Count number of possible marble strings.
    /// Rules: 20 marbles in line, 10 x's and 10 o's, whereby there can not be more than two
    /// marbles of the same colour next to each other.
    /// Example: oxxooxoxoxooxxoxooxx
    /// Logic: There are 10 o's to spend, 10 x's to spend.
    private static int MarbleCount()
    {
        int amountO = 10;
        int amountX = 10;
        int lengthToFind = 20;
        StringBuilder sb = new StringBuilder();
        return PossibleStrings(amountO, amountX, 0, 0, 0, sb, lengthToFind);
    }

    private static int PossibleStrings(int amountO, int amountX, int lengthOfString, int timesO, int timesX, StringBuilder sb, int lengthToFind)
    {
        // stop condition of the recursive function:
        if (timesO > 2 || timesX > 2)
        {
            return 0; // too many O's or X's in sequence, can't count this line.
        }

        // if the length == 20, we have constructed a valid string
        if (lengthOfString == lengthToFind)
        {
            Console.Out.WriteLine(sb);
            return 1; // a valid string is constructed
        }

        int addO = 0, addX = 0;
        // we can add either O or X to the string, tracked in timesO/timesX
        if (amountO > 0)
        {
            addO = PossibleStrings(amountO - 1, amountX, lengthOfString + 1, timesO + 1, 0, sb.Append(\"O\"), lengthToFind);
            sb.Remove(sb.Length - 1, 1);
        }
        if (amountX > 0)
        {
            addX = PossibleStrings(amountO, amountX - 1, lengthOfString + 1, 0, timesX + 1, sb.Append(\"X\"), lengthToFind);
            sb.Remove(sb.Length - 1, 1);
        }
        return addO + addX;
    }
}
```",
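"For a quick cross-check in a third language (a sketch; the function and variable names are mine), here is the same recursion with memoisation, next to a closed form that builds each colour's ten marbles out of runs of length 1 or 2 and interleaves them; both agree on 8196. Incidentally, the corrected unconstrained counts above (2, 4, 6, 10, 16, 26) satisfy a(n) = a(n-1) + a(n-2): a valid line ends either in a single (append the opposite colour to any valid line one shorter) or in a double (append a double to any valid line two shorter).

```
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def lines(o, x, run_o, run_x):
    # o, x: marbles left to place; run_o/run_x: length of the trailing run
    if run_o > 2 or run_x > 2:
        return 0
    if o == 0 and x == 0:
        return 1
    total = 0
    if o > 0:
        total += lines(o - 1, x, run_o + 1, 0)
    if x > 0:
        total += lines(o, x - 1, 0, run_x + 1)
    return total

# Closed form: each colour's 10 marbles form k runs of length 1 or 2
# (comb(k, 10 - k) ways to pick the doubles); runs then alternate colours.
f = {k: comb(k, 10 - k) for k in range(5, 11)}
closed = 2 * sum(v * v for v in f.values()) + 2 * sum(f[k] * f[k + 1] for k in range(5, 10))
print(lines(10, 10, 0, 0), closed)   # both print 8196
```
"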
] | [
null,
"http://www.mellekoning.nl/wp-content/uploads/2017/02/rowof20balls.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8720277,"math_prob":0.9683352,"size":8239,"snap":"2020-45-2020-50","text_gpt3_token_len":2214,"char_repetition_ratio":0.14972678,"word_repetition_ratio":0.08046723,"special_character_ratio":0.2763685,"punctuation_ratio":0.13002916,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9850545,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-29T23:12:51Z\",\"WARC-Record-ID\":\"<urn:uuid:2c2dec3f-0392-48fd-ac41-b72da35c8173>\",\"Content-Length\":\"99513\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b14c7939-82aa-4eea-b94d-d7fd2524efa9>\",\"WARC-Concurrent-To\":\"<urn:uuid:183224c5-9669-4c45-83c3-8ad42e766e83>\",\"WARC-IP-Address\":\"93.191.132.129\",\"WARC-Target-URI\":\"http://www.mellekoning.nl/index.php/2017/02/\",\"WARC-Payload-Digest\":\"sha1:3M6XJJYS7LEA7LHRCCRE5XCDO4HTRELN\",\"WARC-Block-Digest\":\"sha1:5RXQKGXROCIXI6KP2UWOSW5H34DSFF67\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107905965.68_warc_CC-MAIN-20201029214439-20201030004439-00149.warc.gz\"}"} |
https://www.python51.com/jchu/139832.html | [
"## 解方程怎么用python的函数包起来\n\n44次阅读\n\nnumpy 库提供了一个名为 linalg.solve() 的函数,可以用来解决线性方程组。例如,我们有如下的线性方程组:\n\n3x + 5y = 17\n\n7x – 2y = 13\n\n“` import numpy as np # 线性方程组系数矩阵 a = np.array([[3, 5], [7, -2]]) # 常数列 b = np.array([17, 13]) # 解方程组 x = np.linalg.solve(a, b) print(x) “`\n\nSympy 是一个 Python 库,用于符号数学计算。它可以处理各种数学问题,包括求解方程、微积分、代数、离散数学和几何学等等。\n\nx + 2 = 5\n\n“` from sympy import Eq, solve, symbols # 定义方程 x = symbols(‘x’) eq = Eq(x + 2, 5) # 解方程 sol = solve(eq, x) print(sol) “`\n\nScipy 是一个用于数学、科学和工程的 Python 库,支持各种数值算法,例如集成、微分方程、优化、拟合和信号处理等。\n\nx^2 + y = 10\n\ny + 3z = 20\n\nx – z = 0\n\n“` from scipy.optimize import root def equations(vars): x, y, z = vars eq1 = x**2 + y – 10 eq2 = y + 3*z – 20 eq3 = x – z return [eq1, eq2, eq3] # 非线性方程组的解 sol = root(equations, [0, 0, 0]) print(sol.x) “`\n\nPython 提供了许多强大的工具,可以帮助我们解决各种数学问题。从简单的一元一次方程到复杂的非线性方程组,Python 都可以提供相应的工具,帮助我们轻松地解决这些问题。我们可以通过使用 numpy、Sympy 和 Scipy 等库来实现方程的求解,这些库拥有丰富的函数和强大的算法,可以使我们更加高效地解决问题。"
] | [
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.9687629,"math_prob":0.9999243,"size":1809,"snap":"2023-40-2023-50","text_gpt3_token_len":1281,"char_repetition_ratio":0.10027701,"word_repetition_ratio":0.0,"special_character_ratio":0.33941403,"punctuation_ratio":0.12341772,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99908996,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-06T21:21:26Z\",\"WARC-Record-ID\":\"<urn:uuid:d515fc54-dcb6-4cfb-986d-88c86fa62136>\",\"Content-Length\":\"69328\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:729f46d3-5451-4d40-869d-ba0ca546ea39>\",\"WARC-Concurrent-To\":\"<urn:uuid:70ed5822-a243-440c-b04a-8eccecd4fe37>\",\"WARC-IP-Address\":\"43.248.79.206\",\"WARC-Target-URI\":\"https://www.python51.com/jchu/139832.html\",\"WARC-Payload-Digest\":\"sha1:HBTZ6ZETQXJQCI35QX2XOFGJ6EW26XKR\",\"WARC-Block-Digest\":\"sha1:RCSVODXRFWPIBLQ46R5GKSQOQUVKYOJP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100603.33_warc_CC-MAIN-20231206194439-20231206224439-00461.warc.gz\"}"} |
https://answers.everydaycalculation.com/lcm/378-2058 | [
"Solutions by everydaycalculation.com\n\n## What is the LCM of 378 and 2058?\n\nThe lcm of 378 and 2058 is 18522.\n\n#### Steps to find LCM\n\n1. Find the prime factorization of 378\n378 = 2 × 3 × 3 × 3 × 7\n2. Find the prime factorization of 2058\n2058 = 2 × 3 × 7 × 7 × 7\n3. Multiply each factor the greater number of times it occurs in steps i) or ii) above to find the lcm:\n\nLCM = 2 × 3 × 3 × 3 × 7 × 7 × 7\n4. LCM = 18522\n\nMathStep (Works offline)",
] | [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6931826,"math_prob":0.9987959,"size":481,"snap":"2020-34-2020-40","text_gpt3_token_len":157,"char_repetition_ratio":0.11111111,"word_repetition_ratio":0.0,"special_character_ratio":0.43035343,"punctuation_ratio":0.08791209,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9968075,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-07T19:02:02Z\",\"WARC-Record-ID\":\"<urn:uuid:553f3a21-f17a-4fac-a335-8bc7242d3004>\",\"Content-Length\":\"5762\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:04ad0c16-27c9-4c9f-81a9-afc3b2087952>\",\"WARC-Concurrent-To\":\"<urn:uuid:77bd580e-6210-4453-af5b-6b066a3c8dd1>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/lcm/378-2058\",\"WARC-Payload-Digest\":\"sha1:Z5HNA532GID2QENCO6EEMNMVULSFUYRR\",\"WARC-Block-Digest\":\"sha1:2RPR5N5VK63RIMKVWNRWFTDBZ3BMKJNF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439737206.16_warc_CC-MAIN-20200807172851-20200807202851-00296.warc.gz\"}"} |
https://slideplayer.com/slide/3724585/ | [
"",
null,
"## Presentation on theme: \"HELPING YOUR CHILD WITH NUMERACY: ADDITION AND SUBTRACTION.\"— Presentation transcript:\n\nChildren’s learning can be divided into four areas of learning: Knowledge: Concepts: Skills: Attitudes: The facts that children need to retain. Requires constant repetition and consolidation in a variety of different ways. E.g. number bonds to 10. Understanding general ideas that can be applied in specific instances e.g. applying number bonds to sums involving the use of 10s: 60 + ? = 100. The way in which children approach their work. The way in which we as adults approach mathematics work. THE NATURE OF LEARNING. The mechanics of finding an answer. Practical methods of working out the answer. E.g. counting on using a number line.\n\nPlenary session : used to consolidate, review and extend the work done in the main teaching activity. Oral/ mental work: Used to rehearse and sharpen skills NUMERACY HOUR STRUCTURE.\n\nWHY DO WE NEED MENTAL STRATEGIES? MentalWritten We may break a calculation into manageable parts, e.g. 148 – 100 + 1 instead of 148 -99 We can not change a calculation to an equivalent one, e.g. 148 – 99 is done as it is. We say the calculation to ourselves, and therefore are aware of what numbers are involved, e.g. 2000 – 10 is not much less than 2000. We don’t say the number to ourselves, but start a procedure such as : 148 - 99 __ By saying, ‘8 take away 9’ before changing it to 18 – 9. We choose a strategy to fit the numbers, e.g. 148 – 99 may be done differently from 84 – 77, although they are both subtractions. We always use the same method. We draw upon specific mathematical knowledge, an understanding of the number system, learned number facts, and so on. We always draw upon memory of a procedure, and possibly, though not necessarily, an understanding of how it works.\n\nTHE NATURE OF LEARNING. Understanding children’s difficulties with addition and subtraction: Change the number names 1, 2, 3 etc. for letters A, B, C this puts you at the same level as a child. To answer the questions you must not translate these letters into numbers. Answer the following questions in order- you may use your fingers. - How many fingers on your left hand? - How many fingers on both hands? C + D =B + E = K - B =E + E = G - D =E + F = D x C =Q - E = A B C D E F G H I J K L M N O P Q R S T U V W X Y Z How could you use a number line to help you?\n\nSKILLS IN EARLY ADDITION. - Counting all- a child doing 2 + 3 counts out two bricks and then three bricks and then finds the total by counting all the bricks. - Counting on from the first number – a child finding 3 + 5 counts on from the first number. ‘four, five, six, seven, eight’. -Counting on from the larger number- a child chooses the larger number, even when it is not the first number, and counts on from there. - Using a known addition fact – where a child gives an immediate response to facts known by heart, such as 6 + 4 or 3 + 3 or 10 + 8. - Using a known fact to derive a new fact – where a child uses a number bond that they know by heart to calculate one that she or he does not know, e.g. using knowledge that 5 + 5 = 10 to work out 5 + 6 = 11 and 5 + 7 = 12. - Using knowledge of place value – where a child uses knowledge that 4 + 3 = 7 to work out 40 + 30 = 70, or knowledge that 46 + 10 is 56 to work out 46 + 11 = 57. - Knowing 1 more or 1 less - One to one counting, games and rhymes\n\nSKILLS IN EARLY SUBTRACTION. - Counting out – a child finding 9 – 3 holds up nine fingers and folds down three. 
- Counting back from – a child finding 9 − 3 counts back three numbers from 9: ‘eight, seven, six’.
- Counting back to – a child doing 11 − 7 counts back from the first number to the second, keeping a tally on their fingers of how many numbers have been said.
- Counting up – a child doing 11 − 7 counts up from 7 to 11: ‘eight, nine, ten, eleven’ (not a ‘natural’ strategy for many children, because of the widely held perception of subtraction as ‘taking away’).
- Using a known fact – a child gives a rapid response based on facts known by heart, such as 10 − 3 or 20 − 9.
- Using a derived fact – a child uses a known fact to work out a new one, e.g. 20 − 5 is 15, so 20 − 6 must be 14 (more unusual in subtraction than in addition).
- Using knowledge of place value – a child finding 25 − 9 knows that 25 − 10 is 15, and uses this to give an answer of 16.

LEARNING FROM MISCONCEPTIONS. 6 + 5 = 10, 43 + 8 = 50, 138 + 9 = 146: here the answers are 1 less than the correct answer. 14 − 5 = 10, 23 − 6 = 18: here the answers are 1 more than the correct answer. What number goes with 6 to make 10? Child’s answer: 5. A question phrased differently displaying the same counting error – the child has counted the first number on their fingers (or a number line) as part of their ‘counting on’ strategy, rather than counting from this number.

If I had 12 sweets and I needed 16, how many more would I need? Some possible answers:

- 27 – the child has counted on rather than counting back and has made the same error as just described: a combination of two different misconceptions.
- 28 – inappropriate use of a number operation (adding instead of subtracting).
- 5 – counting back inappropriately, as previously described.
- 208 – inappropriate use of a number operation (adding instead of subtracting) and an incorrect form of recording: 20 and 8.

8 + 6 = 18 – the child has continued to count all of his or her fingers when adding, and has therefore mistakenly added 10. 15 – this could be accounted for by inaccurate counting, or by the application of a known fact such as ‘double 8 is 16, one less is 15’, where the child should have subtracted 2 instead. 17 – this error could be accounted for by inaccurate use of a number square, e.g. counting on from 10 to 20 and then 19, 18 etc.

45 + 23 = 95 – the child has added two tens and then three more tens, rather than three units. 7 + 8 + 4 = 20 – a near double has been used for the calculation without the required adjustment. 20 + ? = 25 – for this a child needs to know that subtraction is the inverse of addition, so 25 − 20 gives the answer. This is quite a high-level skill; a common misconception would be to add the numbers together to give 45.

Other points to bear in mind. Be careful when questions are presented in a different manner or placed into a different context. These misconceptions can be combined, such as counting on for a subtraction sum as well as counting from the wrong number. Answers can also just be made up or estimated. The best source of information regarding misconceptions is your child, and the best question you can ask is ‘can you show me how you worked it out?’ Don’t expect a child who cannot count backwards to be able to do simple division.

SOME DIFFICULTIES. Flexibility in using a range of strategies can be hindered if children:

- do almost any addition or subtraction by counting on or back in ones; 
- rely upon a single method or piece of apparatus;
- subtract numbers that are close together, such as 42 − 38, by trying to ‘take away’ or ‘count back’ 38 from 42, rather than counting up from 38;
- don’t recognise that to add 10 or 100 is no more difficult than to add 1;
- don’t recognise numbers which are 10 apart, or a multiple of 10 apart;
- never spot a double or a number bond;
- never change a calculation to make it easier, e.g. in 242 − 99 they subtract 99 directly rather than subtracting 100 and adding 1;
- turn to a standard written method and try to visualise it when calculating mentally;
- don’t see facts which would make the calculation easier.

HOW TO HELP YOUR CHILDREN. 1. Look at the half-term plans for an objective. 2. Think of a relevant everyday activity that addresses this objective. 3. Ask your child how they worked out their answer. 4. Address any misconceptions. 5. Make it fun! If the child does not want to do the activity they will get very little from it."
] | [
null,
"https://slideplayer.com/static/blue_design/img/slide-loader4.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93475276,"math_prob":0.93721294,"size":8136,"snap":"2019-51-2020-05","text_gpt3_token_len":2082,"char_repetition_ratio":0.120142646,"word_repetition_ratio":0.025364617,"special_character_ratio":0.27396756,"punctuation_ratio":0.110912345,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98815525,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T02:01:42Z\",\"WARC-Record-ID\":\"<urn:uuid:933c4335-7a61-4851-8e3a-f5f4a7f54a77>\",\"Content-Length\":\"171407\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8868493f-4901-4dbf-bcc8-0621db95b599>\",\"WARC-Concurrent-To\":\"<urn:uuid:a209f8ba-712c-4e84-a818-f5440484fd71>\",\"WARC-IP-Address\":\"138.201.58.10\",\"WARC-Target-URI\":\"https://slideplayer.com/slide/3724585/\",\"WARC-Payload-Digest\":\"sha1:SA73ZUEI2SDCXHU4GKE6ESH57MWWN7FA\",\"WARC-Block-Digest\":\"sha1:YHMC7WR5D7T3QFEIJWWFGSFUYWMKWXPO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540482954.0_warc_CC-MAIN-20191206000309-20191206024309-00107.warc.gz\"}"} |
http://www.rapidlearningcenter.com/mathematics/introductory-statistics/24-Regression-Inference.html | [
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"How to Learn in 24 Hours?\n\n Need Help? M-F: 9am-5pm(PST): Toll-Free: (877) RAPID-10 or 1-877-727-4310 24/7 Online Technical Support: The Rapid Support Center [email protected] Secure Online Order:",
null,
"Need Proof? Testimonials by Our Users\n\n Rapid Learning Courses: MCAT in 24 Hours (2021-22) USMLE in 24 Hours (Boards) Chemistry in 24 Hours Biology in 24 Hours Physics in 24 Hours Mathematics in 24 Hours Psychology in 24 Hours SAT in 24 Hours ACT in 24 Hours AP in 24 Hours CLEP in 24 Hours DAT in 24 Hours (Dental) OAT in 24 Hours (Optometry) PCAT in 24 Hours (Pharmacy) Nursing Entrance Exams Certification in 24 Hours eBook - Survival Kits Audiobooks (MP3)\n\n Tell-A-Friend: Have friends taking science and math courses too? Tell them about our rapid learning system.",
null,
"Home » Mathematics » Introductory Statistics\n\nRegression Inference\n\nTopic Review on \"Title\":\n\nSome Definitions\n\n• Regression: X, the independent variable, the explanatory variable or the predictor variable. Y, the dependent variable, the response variable or the predicted variable. Given X, we will be able to predict Y.\n• Deterministic Model: y = a + bx\n• Probabilistic Model: y = a + bx+ e , where e ~N(0,s2)\n\nLeast Square Estimators",
"The least squares estimates minimize SSE = Σ(yi − a − bxi)². They are given by the standard formulas b̂ = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)² for the slope and â = ȳ − b̂x̄ for the intercept.",
"ANOVA\n\n• SST=the total variation in the experiment.\n• SST=SSR+SSE\n• SSR (sum of squares for regression): measures the variation explained by regression model.\n• SSE (sum of squares for error): measures the variation not explained by x.\n\nANOVA Table\n\n Source df SS MS Test Statistics (Mean Squares) F Regression 1 SSR SSR/(1) MSR/MSE Model (=MSR) Error n - 2 SSE SSE/(n-2) (=MSE) Total n -1 SST\n\nF Test\n\n• F test shall be used to test whether the regression model fit well or not. If the model fit well, the test statistics F will be large. (This test is equivalent to t-test for t2 = F)\n• H0: The regression model fits well.\n• Ha: The regression model does not fit well.\n• F=MSR/MSE\n• Reject H0 if F>Fa,1,n-2\n• a is the probability of a type I error\n\nT test and Confidence level",
"For the slope, the test statistic is t = b̂ / SE(b̂), where SE(b̂) = √(MSE / Σ(xi − x̄)²); reject H0: b = 0 when |t| > t(α/2; n − 2). A (1 − α) confidence interval for the slope is b̂ ± t(α/2; n − 2) · SE(b̂).",
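"To make the formulas concrete, here is a small numpy sketch (the data points are invented for illustration) that computes the least squares fit, the ANOVA decomposition and the F and t statistics:

```
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])   # invented sample data
n = len(x)

sxx = ((x - x.mean()) ** 2).sum()
b = ((x - x.mean()) * (y - y.mean())).sum() / sxx   # slope estimate
a = y.mean() - b * x.mean()                         # intercept estimate

y_hat = a + b * x
sst = ((y - y.mean()) ** 2).sum()       # total variation
ssr = ((y_hat - y.mean()) ** 2).sum()   # explained by the regression
sse = ((y - y_hat) ** 2).sum()          # unexplained; sst = ssr + sse

mse = sse / (n - 2)
f = (ssr / 1) / mse                     # compare with F(alpha; 1, n - 2)
t = b / np.sqrt(mse / sxx)              # note that t**2 equals f
print(a, b, f, t)
```
",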
"Rapid Study Kit for \"Title\":\n Flash Movie Flash Game Flash Card Core Concept Tutorial Problem Solving Drill Review Cheat Sheet",
null,
"",
null,
"",
null,
"\"Title\" Tutorial Summary : This tutorial specifically describes the regression inference. In statistics, we frequently measure two or more variables on the same experimental unit. We do this to explore the nature of the relationship among these variables. Regression inference is to use knowledge of independent variable(s) to predict dependent variable. By completing this course, you will learn about the regression inference including regression models and least square method, the analysis of variance (ANOVA), testing regression model, assumptions and estimation and prediction\n\n Tutorial Features: Specific Tutorial Features: Animated examples showing the operation of least square method is presented in the tutorial. Step by step analysis of ANOVA is presented and served as a base for the subsequent hypothesis testing for the regression model. Series Features: Concept map showing inter-connections of new concepts in this tutorial and those previously introduced. Definition slides introduce terms as they are needed. Visual representation of concepts Animated examples—worked out step by step A concise summary is given at the conclusion of the tutorial.\n\n \"Title\" Topic List: Regression Models and Least Square Method The Analysis of Variance (ANOVA) Testing Regression Model (F test, t test and confidence interval) Assumptions Estimation and Prediction\n\nSee all 24 lessons in Introductory Statistics, including concept tutorials, problem drills and cheat sheets:\nTeach Yourself Introductory Statistics Visually in 24 Hours",
] | [
null,
"http://www.rapidlearningcenter.com/images/line.gif",
null,
"http://www.rapidlearningcenter.com/images/RapidLearning_portal-header-left.jpg",
null,
"http://www.rapidlearningcenter.com/images/RapidLearning_portal-header-middle.jpg",
null,
"http://www.rapidlearningcenter.com/images/z1.gif",
null,
"http://www.rapidlearningcenter.com/images/z2.gif",
null,
"http://www.rapidlearningcenter.com/images/z3.gif",
null,
"http://www.rapidlearningcenter.com/images/memberlogin_off.gif",
null,
"http://www.rapidlearningcenter.com/images/blog.png",
null,
"http://www.rapidlearningcenter.com/images/spacer.gif",
null,
"http://www.rapidlearningcenter.com/images/facebook.jpg",
null,
"http://www.rapidlearningcenter.com/images/spacer.gif",
null,
"http://www.rapidlearningcenter.com/images/youtube.jpg",
null,
"http://www.rapidlearningcenter.com/images/spacer.gif",
null,
"http://www.rapidlearningcenter.com/images/twitter.jpg",
null,
"http://www.rapidlearningcenter.com/images/btnGoShopping.gif",
null,
"http://www.rapidlearningcenter.com/mathematics/images/1_z2.gif",
null,
"http://www.rapidlearningcenter.com/mathematics/images/24-Regression-Inference_clip_image002.gif",
null,
"http://www.rapidlearningcenter.com/mathematics/images/24-Regression-Inference_clip_image004.gif",
null,
"http://www.rapidlearningcenter.com/mathematics/images/24-Regression-Inference_clip_image007.gif",
null,
"http://www.rapidlearningcenter.com/mathematics/images/24-Regression-Inference_clip_image009.gif",
null,
"http://www.rapidlearningcenter.com/mathematics/images/24-Regression-Inference_clip_image010.gif",
null,
"http://www.rapidlearningcenter.com/mathematics/images/24-Regression-Inference_clip_image012.gif",
null,
"http://www.rapidlearningcenter.com/mathematics/introductory-statistics/core_tutorials/CT_Statistics_t_24.gif",
null,
"http://www.rapidlearningcenter.com/mathematics/introductory-statistics/problem_sets/PS_Statistics_t_24.gif",
null,
"http://www.rapidlearningcenter.com/mathematics/introductory-statistics/cheat_sheets/CS_Statistics_t_24.gif",
null,
"http://www.rapidlearningcenter.com/images/spacer.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.76611555,"math_prob":0.7086888,"size":2929,"snap":"2022-05-2022-21","text_gpt3_token_len":696,"char_repetition_ratio":0.13709402,"word_repetition_ratio":0.004,"special_character_ratio":0.23898941,"punctuation_ratio":0.10420842,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99597526,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-22T11:07:08Z\",\"WARC-Record-ID\":\"<urn:uuid:e371caa6-acb2-4fa8-be6d-135368060f62>\",\"Content-Length\":\"55608\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f69bd286-33d3-4aa9-9af5-d181f02c1e81>\",\"WARC-Concurrent-To\":\"<urn:uuid:f132f183-3bd1-42cd-8bf9-5b98e5384c86>\",\"WARC-IP-Address\":\"192.169.153.237\",\"WARC-Target-URI\":\"http://www.rapidlearningcenter.com/mathematics/introductory-statistics/24-Regression-Inference.html\",\"WARC-Payload-Digest\":\"sha1:HUMGKFU6JTLUVK3PE7CBI2RP4CSKEWPX\",\"WARC-Block-Digest\":\"sha1:3K46TDMP5RCL4HA32AEIDX4XNR3JEF24\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320303845.33_warc_CC-MAIN-20220122103819-20220122133819-00630.warc.gz\"}"} |
http://www.e-booksdirectory.com/details.php?ebook=6086 | [
"",
null,
"# Lectures on Complex Analytic Manifolds",
null,
"Lectures on Complex Analytic Manifolds\nby\n\nPublisher: Tata Institute of Fundamental Research\nNumber of pages: 163\n\nDescription:\nTopics covered: Differentiable Manifolds; C maps, diffeomorphisms. Effect of a map; The Tensor Bundles; Existence and uniqueness of the exterior differentiation; Manifolds with boundary; Integration on chains; Some examples of currents; Currents with compact support; de Rham's Theorem; The star operator; Green's Operator G; Real vector spaces with a J-Structure; The operator J; The canonical orientation of a complex manifold; etc.\n\n(660KB, PDF)\n\n## Similar books",
null,
"Differential Geometry of Indefinite Complex Submanifolds in Indefinite Complex Space Forms\nby\nFrom the table of contents: Chapter 1. Linear preliminaries; Chapter 2. Indefinite Kaehler manifolds; Chapter 3. Complex hypersurfaces; Chapter 4. Complex submanifolds; Chapter 5. Totally real bisectional curvature; and more.\n(4992 views)",
null,
"Dynamics in One Complex Variable\nby - Princeton University Press\nThis text studies the dynamics of iterated holomorphic mappings from a Riemann surface to itself, concentrating on the case of rational maps of the Riemann sphere. The book introduces some key ideas in the field, and forms a basis for further study.\n(11480 views)",
null,
"Complex Analytic and Differential Geometry\nby - Universite de Grenoble\nBasic concepts of complex geometry, coherent sheaves and complex analytic spaces, positive currents and potential theory, sheaf cohomology and spectral sequences, Hermitian vector bundles, Hodge theory, positive vector bundles, etc.\n(13487 views)",
null,
"Complex Geometry of Nature and General Relativity\nby - arXiv\nAn attempt is made of giving a self-contained introduction to holomorphic ideas in general relativity, following work over the last thirty years by several authors. The main topics are complex manifolds, spinor and twistor methods, heaven spaces.\n(12466 views)"
] | [
null,
"http://www.e-booksdirectory.com/img/ebd-logo.png",
null,
"http://www.e-booksdirectory.com/images/6086.jpg",
null,
"http://www.e-booksdirectory.com/images/8191.jpg",
null,
"http://www.e-booksdirectory.com/images/2990.jpg",
null,
"http://www.e-booksdirectory.com/images/2158.jpg",
null,
"http://www.e-booksdirectory.com/images/blank.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.78312606,"math_prob":0.66595167,"size":3211,"snap":"2020-24-2020-29","text_gpt3_token_len":745,"char_repetition_ratio":0.10820081,"word_repetition_ratio":0.6738661,"special_character_ratio":0.19900343,"punctuation_ratio":0.15272728,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9701101,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-05T20:08:16Z\",\"WARC-Record-ID\":\"<urn:uuid:309ed466-748e-4a9c-9718-ba043f07137c>\",\"Content-Length\":\"11583\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2968117c-0e0f-43e9-94a5-9e280297b313>\",\"WARC-Concurrent-To\":\"<urn:uuid:9ca9781d-8cbb-4d64-9650-90cd789078a9>\",\"WARC-IP-Address\":\"209.59.191.64\",\"WARC-Target-URI\":\"http://www.e-booksdirectory.com/details.php?ebook=6086\",\"WARC-Payload-Digest\":\"sha1:N2KZVTXRX57TLH3J7VJSFHO7GNOWKRXR\",\"WARC-Block-Digest\":\"sha1:P76TOWGMCDUW2RSUSTYQW6ULLUP4NBG7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655888561.21_warc_CC-MAIN-20200705184325-20200705214325-00011.warc.gz\"}"} |
https://mathoverflow.net/questions/202337/when-are-countably-generated-hilbert-modules-generated-by-c-p-c-order-zero-maps | [
"# When are countably generated Hilbert modules generated by c.p.c. order zero maps?\n\nThroughout let $B$ be a stable C*-algebra, i.e. $B\\cong B\\otimes K$, where $K$ is the C*-algebra of compact operators on an infinite dimensional separable Hilbert space. It is well-known that any countably generated Hilbert $B$-module $X$ is singly generated, i.e. there exists a positive element $b\\in B$ such that $X\\cong \\overline{bB}$.\n\nAssume the following as the definition of c.p.c. order zero maps between C*-algebras.\n\nDefinition A completely positive linear map $\\phi:A\\to B$ between C*-algebras has order zero if there exist a positive element $h\\in\\mathcal M(C)\\cap C'$ and a $*$-homomorphism $\\pi:A\\to\\mathcal M(C)\\cap\\{h\\}'$ such that $\\Vert h\\Vert = 1$ and $$\\phi(a) = h\\pi(a)=\\pi(a)h$$ for any $a\\in A$, where $C = C^*(\\phi(A))\\subset B$, i.e. the C*-algebra generated by the image of $\\phi$ and $\\mathcal M(C)$ is the multiplier algebra of $C$.\n\nLet $A$ be a separable C*-algebra and let $\\phi:A\\to B$ be a c.p.c. order zero map. One can construct a Hilbert $B$-module $X_\\phi$ out of $\\phi$ by setting $$X_\\phi:=\\overline{\\phi(A)B}.$$ Conversely, given a countably generated Hilbert $B$-module $X$ and a separable C*-algebra $A$, when is that there exists a c.p.c. order zero map $\\phi:A\\to B$ such that $X\\cong\\overline{\\phi(A)B}$?\n\nAs a special case, suppose that $X$ is the inductive limit of a sequence of isometric inclusions of modules $\\overline{\\phi_n(A)B}\\hookrightarrow\\overline{\\phi_{n+1}(A)B}$, where $\\{\\phi_n\\}_{n\\in\\mathbb N}$ is a sequence of c.p.c. order zero maps. Is there a c.p.c. order zero map $\\phi$ such that $X\\cong\\overline{\\phi(A)B}$? I believe this last question has a positive answer when the connecting maps \"commute\" with the c.p.c. order zero maps, so I would rather be interested in the case where there are no a priori connections between the $\\phi_n$s.\n\nIf $A=\\mathbb C$ then the answer to both questions is yes.\nIf $A=M_2(\\mathbb C)$, then the modules $H=\\overline{\\phi(A)B}$ are those that have a direct sum decomposition $H\\cong E\\oplus E$ (where $E= \\overline{\\phi(e_{1,1})B}$). It is clear that not all modules need to have this property. The answer to the second question is also negative in this case. One can arrange for this: a locally compact space $X$ covered by compactly contained open sets $\\bigcup_n U_n=X$ and a dimension 2 vector bundle over $X$ that is trivial on all the sets $U_n$ but non-trivial on $X$ (a phantom\" vector bundle). ($X$ can be obtained by a telescoping construction. See Example 5.6 of http://arxiv.org/abs/0910.2967). Viewing the vector bundle as a Hilbert module $H$ over $C_0(X)$, the Hilbert modules $HC_0(U_n)$ have the desired direct sum decomposition but $H$ itself does not.\nIf $A=\\mathcal K$, then the modules in question have the form $\\bigoplus_{n=1}^\\infty E$ (a.k.a., a stable module). By Kasparov's stabilization $H$ is isomorphic to $\\ell^2(I)$, where $I$ is a closed two-sided ideal of $B$. Again this need not exhaust all possible modules, but the second question has a positive answer in this case. If $H=\\overline{\\bigcup_n H_n}$ and all the modules $H_n$ are stable (and countably generated) then $H$ is stable.\n• Many thanks for your answer! 
I was wondering if the case of $A=\mathcal K$ generalises to any stable C*-algebra just by considering $E_A:=\overline{\phi(A\otimes e)B}$, where $e\in\mathcal K$ is any minimal projection; and if there is the possibility of explicitly constructing the c.p.c. order zero map associated to the limit. I was thinking along the lines of a representation of $A$ on $\ell^2(\mathbb N)$ tensored with some positive element in $I$, but I'm not sure to what extent this intuition is correct. – Phoenix87 Apr 17 '15 at 13:55
• In that case there are still obstructions. Take the case $B=\mathcal K$. If there is a non-zero order zero map $\phi\colon A\otimes \mathcal K\to \mathcal K$ then $A\otimes\mathcal K$ has a non-trivial densely finite trace (by functional calculus on $\phi$ you can make sure that it has finite rank operators in its range). But $A\otimes \mathcal K$ may not have non-zero densely finite traces. – Leonel Robert Apr 17 '15 at 21:40
• Perhaps I'm overlooking something, but if there are no non-trivial c.p.c. order zero maps between $A\otimes\mathcal K$ and $B$ then the only sequence one can construct out of a countable family of c.p.c. order zero maps is the constant sequence given by the trivial module, whose limit is the trivial module itself. – Phoenix87 Apr 18 '15 at 15:22
• Oh OK, that's right. My comment was more relevant for the first question: not all stable modules will arise from this construction when $A$ is an arbitrary stable algebra. Regarding the second question, you are right that the answer is yes when $A$ is stable. For these reasons: (1) the range of modules arising from the construction is closed under countable orthogonal sums, (2) the limit of an increasing sequence of stable modules is isomorphic to their direct sum. – Leonel Robert Apr 20 '15 at 2:38"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7617055,"math_prob":0.99933267,"size":1801,"snap":"2021-04-2021-17","text_gpt3_token_len":569,"char_repetition_ratio":0.12743461,"word_repetition_ratio":0.015267176,"special_character_ratio":0.29317045,"punctuation_ratio":0.12158809,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999776,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-23T12:20:58Z\",\"WARC-Record-ID\":\"<urn:uuid:025bd2de-bf1e-4b96-90dc-40982121398b>\",\"Content-Length\":\"132148\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:00b62fd5-a77e-47d0-8e49-485611aaf7c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:837bb6f9-3c16-49b6-89cd-66acff4f1248>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/202337/when-are-countably-generated-hilbert-modules-generated-by-c-p-c-order-zero-maps\",\"WARC-Payload-Digest\":\"sha1:J35C7MFIZBLVFTEJNRRYEEWZIJVAVRCK\",\"WARC-Block-Digest\":\"sha1:WKAKNMLLVOVMCHBGN23N47AGIZLCPDPE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703537796.45_warc_CC-MAIN-20210123094754-20210123124754-00653.warc.gz\"}"} |
https://pgql-lang.org/spec/1.4/ | [
"PGQL is an SQL-based query language for the property graph data model that allows you to specify high-level graph patterns which are matched against vertices and edges in a graph. PGQL has support for grouping (GROUP BY), aggregation (e.g. MIN, MAX, AVG, SUM), sorting (ORDER BY) and many other familiar SQL constructs. Furthermore, PGQL has powerful regular expression constructs for graph reachability (transitive closure), shortest path finding and cheapest path finding.\n\n# Introduction\n\nPGQL is a graph pattern-matching query language for the property graph data model. This document specifies the syntax and semantics of the language.\n\nWhen working with graph feature of Oracle Database, you can use PGQL by installing Graph Server and Client with Oracle Database 12.2 or later.\n\n## Changelog\n\nThe following are the changes since PGQL 1.3:\n\n### New features in PGQL 1.4\n\nThe new features are:\n\n## A note on the Grammar\n\nThis document contains a complete grammar definition of PGQL, spread throughout the different sections. There is a single entry point into the grammar: `PgqlStatement`.\n\n## Document Outline\n\n• Introduction contains a changelog, a note on the grammar, this outline and an introduction to the property graph data model.\n• Creating a Property Graph describes how to create a property graph from an existing set of tables in a relational database.\n• Graph Pattern Matching introduces the basic concepts of graph querying.\n• Grouping and Aggregation describes the mechanism to group and aggregate results.\n• Sorting and Row Limiting describes the ability to sort and paginate results.\n• Variable-Length Paths introduces the constructs for testing for the existence of paths between pairs of vertices (i.e. “reachability testing”) as well as for retrieving shortest paths between pairs of vertices.\n• Functions and Expressions describes the supported data types and corresponding functions and operations.\n• Subqueries describes the syntax and semantics of subqueries for creating more complex queries that nest other queries.\n• Graph Modification describes `INSERT`, `UPDATE` and `DELETE` statements for inserting, updating and deleting vertices and edges in a graph.\n• Other Syntactic rules describes additional syntactic rules that are not covered by the other sections, such as syntax for identifiers and comments.\n\n## Property graph data model\n\nA property graph has a name, which is a (character) string, and contains:\n\n• A set of vertices (or nodes).\n\n• Each vertex has zero or more labels.\n• Each vertex has zero or more properties (or attributes), which are arbitrary key-value pairs.\n• A set of edges (or relationships).\n\n• Each edge is directed.\n• Each edge has zero or more labels.\n• Each edge has zero or more properties (or attributes), which are arbitrary key-value pairs.\n\nLabels as well as names of properties are strings. Property values are scalars such as numerics, strings or booleans.\n\n### Example 1: Student Network\n\nAn example graph is:\n\nHere, `student_network` is the name of the graph. The graph has three vertices labeled `Person` and one vertex labeled `University`. There are six directed edges that connect the vertices. Three of them go from person to person vertices, and they have the label `knows`. Three others go from person to university vertices and are labeled `studentOf`. The person vertices have two properties, namely `name` for encoding the name of the person and `dob` for encoding the date of birth of the person. 
The university vertex has only a single property `name` for encoding the name of the university. The edges have no properties.

### Example 2: Financial Transactions

Consider an example graph with financial transactions, named `financial_transactions`. The graph has three types of vertices. Vertices labeled `Person` or `Company` have a property `name`, while vertices labeled `Account` have a property `number`. There are edges labeled `owner` from accounts to persons as well as from accounts to companies, and there are edges labeled `transaction` from accounts to accounts. Note that only `transaction` edges have a property (`amount`) while other edges do not have any properties.

# Creating a Property Graph

The CREATE PROPERTY GRAPH statement allows for creating a property graph from a set of existing database tables, while the DROP PROPERTY GRAPH statement allows for dropping a graph.

## CREATE PROPERTY GRAPH

The `CREATE PROPERTY GRAPH` statement starts with a graph name and is followed by a non-empty set of vertex tables and an optional set of edge tables.

The syntax is:

```
CreatePropertyGraph ::= 'CREATE' 'PROPERTY' 'GRAPH' GraphName
                        VertexTables
                        EdgeTables?

GraphName ::= SchemaQualifiedName

SchemaQualifiedName ::= SchemaIdentifierPart? Identifier

SchemaIdentifierPart ::= Identifier '.'

VertexTables ::= 'VERTEX' 'TABLES' '(' VertexTable ( ',' VertexTable )* ')'

EdgeTables ::= 'EDGE' 'TABLES' '(' EdgeTable ( ',' EdgeTable )* ')'
```

It is possible to have no edge tables such that the resulting graph only has vertices that are all disconnected from each other. However, it is not possible to have a graph with edge tables but no vertex tables.

The following example shows a schema with a set of tables. Each table has a name and a list of columns, some of which form the primary key for the table while others form foreign keys that reference rows of other tables.

The following is a complete example of how a graph can be created from these tables:

```
CREATE PROPERTY GRAPH financial_transactions
  VERTEX TABLES (
    Persons LABEL Person PROPERTIES ( name ),
    Companies LABEL Company PROPERTIES ( name ),
    Accounts LABEL Account PROPERTIES ( number )
  )
  EDGE TABLES (
    Transactions
      SOURCE KEY ( from_account ) REFERENCES Accounts
      DESTINATION KEY ( to_account ) REFERENCES Accounts
      LABEL transaction PROPERTIES ( amount ),
    Accounts AS PersonOwner
      SOURCE KEY ( number ) REFERENCES Accounts
      DESTINATION Persons
      LABEL owner NO PROPERTIES,
    Accounts AS CompanyOwner
      SOURCE KEY ( number ) REFERENCES Accounts
      DESTINATION Companies
      LABEL owner NO PROPERTIES,
    Persons AS worksFor
      SOURCE KEY ( id ) REFERENCES Persons
      DESTINATION Companies
      NO PROPERTIES
  )
```

Above, `financial_transactions` is the name of the graph. The graph has three vertex tables: `Persons`, `Companies` and `Accounts`. The graph also has four edge tables: `Transactions`, `PersonOwner`, `CompanyOwner` and `worksFor`.

Underlying foreign keys are used to establish the connections between the two endpoints of the edges and the corresponding vertices. Note that the “source” of an edge is the vertex that the edge points from, while the “destination” of an edge is the vertex that the edge points to.

If foreign keys cannot be used or are not present, the necessary keys can be defined as part of the `CREATE PROPERTY GRAPH` statement. 
The following example shows a schema with a set of tables. Each table has a name and a list of columns, some of which form the primary key for the table (in red) while others form foreign keys that reference rows of other tables.

The following is a complete example of how a graph can be created from these tables:

```
CREATE PROPERTY GRAPH financial_transactions
  VERTEX TABLES (
    Persons LABEL Person PROPERTIES ( name ),
    Companies LABEL Company PROPERTIES ( name ),
    Accounts LABEL Account PROPERTIES ( number )
  )
  EDGE TABLES (
    Transactions
      SOURCE KEY ( from_account ) REFERENCES Accounts
      DESTINATION KEY ( to_account ) REFERENCES Accounts
      LABEL transaction PROPERTIES ( amount ),
    Accounts AS PersonOwner
      SOURCE KEY ( number ) REFERENCES Accounts
      DESTINATION Persons
      LABEL owner NO PROPERTIES,
    Accounts AS CompanyOwner
      SOURCE KEY ( number ) REFERENCES Accounts
      DESTINATION Companies
      LABEL owner NO PROPERTIES,
    Persons AS worksFor
      SOURCE KEY ( id ) REFERENCES Persons
      DESTINATION Companies
      NO PROPERTIES
  )
```

Above, `financial_transactions` is the name of the graph. The graph has three vertex tables: `Persons`, `Companies` and `Accounts`. The graph also has four edge tables: `Transactions`, `PersonOwner`, `CompanyOwner` and `worksFor`.

Underlying foreign keys are used to establish the connections between the two endpoints of the edges and the corresponding vertices. Note that the “source” of an edge is the vertex where the edge points from, while the “destination” of an edge is the vertex where the edge points to.

If foreign keys cannot be used or are not present, the necessary keys can be defined as part of the `CREATE PROPERTY GRAPH` statement. Labels and properties can also be defined, all of which is explained in more detail in the next sections.

### Vertex tables

A vertex table provides a vertex for each row of the underlying table.

The syntax is:

```
VertexTable ::= TableName TableAlias? KeyClause? LabelAndPropertiesClause?

LabelAndPropertiesClause ::= LabelClause? PropertiesClause?

TableName ::= SchemaQualifiedName
```

The table alias is required only if the underlying table is used as a vertex table more than once, to provide a unique name for each table. It can be used for specifying a label for the vertices too.

The key of the vertex table uniquely identifies a row in the table. If a key is not explicitly specified then it defaults to the primary key of the underlying table. A key is always required, so a primary key needs to exist if no key is specified. See the section on keys for more details.

The label clause provides a label for the vertices. If a label is not defined, the label defaults to the alias. Since the alias defaults to the name of the underlying table, if no alias is provided, the label defaults to the name of the underlying table. See the section on labels for details.

The properties clause defines the mapping from columns of the underlying table into properties of the vertices. See the section on properties for more details.

### Edge tables

An edge table provides an edge for each row of the underlying table.

```
EdgeTable ::= TableName TableAlias? KeyClause?
                SourceVertexTable DestinationVertexTable
                LabelAndPropertiesClause?

SourceVertexTable ::= 'SOURCE' ReferencedVertexTableKeyClause? TableName

DestinationVertexTable ::= 'DESTINATION' ReferencedVertexTableKeyClause? TableName

ReferencedVertexTableKeyClause ::= KeyClause 'REFERENCES'
```

The table alias is required only if the underlying table is used as an edge table more than once, to provide a unique name for each table. It can be used for specifying a label for the edges too.

The source vertex table and destination vertex table are mandatory for defining the two endpoints of the edge. A key is optional if there is a single foreign key from the edge table to the source or destination vertex table. If a key is not provided, it will default to the existing foreign key.

Take the following example from before:

```
CREATE PROPERTY GRAPH financial_transactions
  VERTEX TABLES (
    Persons LABEL Person PROPERTIES ( name ),
    Companies LABEL Company PROPERTIES ( name ),
    Accounts LABEL Account PROPERTIES ( number )
  )
  EDGE TABLES (
    Transactions
      SOURCE KEY ( from_account ) REFERENCES Accounts
      DESTINATION KEY ( to_account ) REFERENCES Accounts
      LABEL transaction PROPERTIES ( amount ),
    Accounts AS PersonOwner
      SOURCE KEY ( number ) REFERENCES Accounts
      DESTINATION Persons
      LABEL owner NO PROPERTIES,
    Accounts AS CompanyOwner
      SOURCE KEY ( number ) REFERENCES Accounts
      DESTINATION Companies
      LABEL owner NO PROPERTIES,
    Persons AS worksFor
      SOURCE KEY ( id ) REFERENCES Persons
      DESTINATION Companies
      NO PROPERTIES
  )
```

The key of the edge table uniquely identifies a row in the table. If a key is not explicitly specified (in case of all four edge tables above) then it defaults to the primary key of the underlying table. A key is always required, so a primary key needs to exist if no key is specified. See the section on keys for more details.

In case of edge tables `PersonOwner`, `CompanyOwner` and `worksFor`, the source vertex table is the same table as the edge table itself.
This means that rows in the table are mapped into both vertices and edges. It is also possible that the destination vertex table is the edge table itself, or that both the source and destination tables are the edge table itself. This is explained in more detail in Source or destination is self.

Keys for the destinations of `PersonOwner`, `CompanyOwner` and `worksFor` are omitted because we can default to the existing foreign keys. Keys for their sources cannot be omitted because there exists no foreign key to default to (e.g. in case of `PersonOwner` there are zero foreign keys from `Accounts` to `Accounts`, hence `SOURCE KEY ( number ) REFERENCES Accounts` needs to be specified). Furthermore, keys for the source and destination of `Transactions` cannot be omitted because two foreign keys exist between `Transactions` and `Accounts`, so it is necessary to specify which one to use.

If a row in an edge table has a NULL value for any of its source key columns or its destination key columns then no edge is created. Note that in case of the `Accounts` table from the example, it is assumed that either the `person_id` or the `company_id` is NULL, so that each time the row is mapped into either a “company owner” or a “person owner” edge but never into two types of edges at once.

The label clause provides a label for the edges. If a label is not defined, the label defaults to the alias. Since the alias defaults to the name of the underlying table, if no alias is provided, the label defaults to the name of the underlying table. See the section on labels for details.

The properties clause defines the mapping from columns of the underlying table to properties of the edges. See the section on properties for more details.

### Table aliases

Vertex and edge tables can have aliases for uniquely naming the tables. If no alias is defined, then the alias defaults to the name of the underlying database table of the vertex or edge table.

The syntax is:

```
TableAlias ::= ( 'AS' )? Identifier
```

For example:

```
...
EDGE TABLES ( Persons AS worksFor ... )
...
```

Above, the underlying table of the edge table is `Persons`, while the alias is `worksFor`.

All vertex and edge tables are required to have unique names. Therefore, if multiple vertex tables use the same underlying table, then at least one of them requires an alias. Similarly, if multiple edge tables use the same underlying table, then at least one of them requires an alias. The restriction does not apply across vertex and edge tables, so, there may exist a vertex table with the same name as an edge table, but there may not exist two vertex tables with the same name, or two edge tables with the same name.

If the alias is not provided then it defaults to the name of the underlying table. For example:

```
...
VERTEX TABLES ( Person )
...
```

Above is equivalent to:

```
...
VERTEX TABLES ( Person AS Person )
...
```

Finally, in addition to providing unique names for vertex and edge tables, the aliases can also serve as a means to provide labels for vertices and edges: if no label is defined then the label defaults to the table alias. Note that although table aliases are required to be unique, labels are not. In other words, multiple vertex tables and multiple edge tables can have the same label.
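For example (a sketch; the alias `ArchivedAccounts` is purely illustrative), the same underlying table may back two vertex tables as long as at least one of them is given a unique alias, and both may still share the same label:

```
...
VERTEX TABLES (
  Accounts LABEL Account,
  Accounts AS ArchivedAccounts LABEL Account
)
...
```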
### Keys

By default, existing primary and foreign keys of underlying tables are used to connect the end points of the edges to the appropriate vertices, but the following scenarios require manual specification of keys:

• Multiple foreign keys exist between an edge table and its source vertex table or its destination vertex table, such that it would be ambiguous which foreign key to use.
• Primary and/or foreign keys on underlying tables were not defined, or the underlying tables are views, which means that primary and foreign keys cannot be defined.

The syntax for keys is:

```
KeyClause ::= '(' ColumnName ( ',' ColumnName )* ')'

ColumnName ::= Identifier
```

Take the example from before:

```
CREATE PROPERTY GRAPH financial_transactions
  VERTEX TABLES (
    ...
  )
  EDGE TABLES (
    Transactions
      SOURCE KEY ( from_account ) REFERENCES Accounts
      DESTINATION KEY ( to_account ) REFERENCES Accounts
      LABEL transaction PROPERTIES ( amount ),
    Accounts AS PersonOwner
      SOURCE KEY ( number ) REFERENCES Accounts
      DESTINATION Persons
      LABEL owner NO PROPERTIES,
    ...
  )
```

Above, a key is defined for the source and destination of `Transactions` because two foreign keys exist between `Transactions` and `Accounts`, so it would be ambiguous which one to use without explicit specification. In case of `PersonOwner`, no foreign key exists between `Accounts` and `Accounts`, so a key for the source (`KEY ( number )`) has to be explicitly specified. However, for the destination it is possible to omit the key and default to the existing foreign key between `Accounts` and `Persons`.

The keys for source and destination vertex tables consist of one or more columns of the underlying edge table that uniquely identify a vertex in the corresponding vertex table. If no key is defined for the vertex table, the key defaults to the underlying primary key, which is required to exist in such a case.

The following example has a schema that has no primary and foreign keys defined at all:

Note that above, we have the same schema as before, but this time the primary and foreign keys are missing.

Even though primary and foreign keys are missing, the graph can still be created by specifying the necessary keys in the `CREATE PROPERTY GRAPH` statement itself:

```
CREATE PROPERTY GRAPH financial_transactions
  VERTEX TABLES (
    Persons
      KEY ( id )
      LABEL Person
      PROPERTIES ( name ),
    Companies
      KEY ( id )
      LABEL Company
      PROPERTIES ( name ),
    Accounts
      KEY ( number )
      LABEL Account
      PROPERTIES ( number )
  )
  EDGE TABLES (
    Transactions
      KEY ( from_account, to_account, date )
      SOURCE KEY ( from_account ) REFERENCES Accounts
      DESTINATION KEY ( to_account ) REFERENCES Accounts
      LABEL transaction PROPERTIES ( amount ),
    Accounts AS PersonOwner
      KEY ( number )
      SOURCE KEY ( number ) REFERENCES Accounts
      DESTINATION KEY ( person_id ) REFERENCES Persons
      LABEL owner NO PROPERTIES,
    Accounts AS CompanyOwner
      KEY ( number )
      SOURCE KEY ( number ) REFERENCES Accounts
      DESTINATION KEY ( company_id ) REFERENCES Companies
      LABEL owner NO PROPERTIES,
    Persons AS worksFor
      KEY ( id )
      SOURCE KEY ( id ) REFERENCES Persons
      DESTINATION KEY ( company_id ) REFERENCES Companies
      NO PROPERTIES
  )
```

Above, keys were defined for each vertex table (e.g. `KEY ( id )`), edge table (e.g. `KEY ( from_account, to_account, date )`), source vertex table reference (e.g. `KEY ( from_account )`) and destination table reference (e.g. `KEY ( to_account )`).
Each vertex and edge table is required to have a key, so that if a key is not explicitly specified then the underlying table needs to have a primary key defined.

### Labels

In graphs created through `CREATE PROPERTY GRAPH`, each vertex has exactly one label and each edge has exactly one label. This restriction may be lifted in a future PGQL version.

The syntax for labels is:

```
LabelClause ::= 'LABEL' Label

Label ::= Identifier
```

The label clause is optional. If it is omitted, then the label defaults to the table alias. Note that the table alias is also optional and defaults to the table name. Thus, if no label is specified and no table alias is specified, then both the table alias and the label default to the table name.

For example:

```
...
VERTEX TABLES ( Person )
...
```

Above is equivalent to:

```
...
VERTEX TABLES ( Person AS Person )
...
```

Which is equivalent to:

```
...
VERTEX TABLES ( Person AS Person LABEL Person )
...
```

### Properties

By default, properties are all columns such that a vertex or edge property is created for each column of the underlying table. However, there are different ways to customize this behavior as described below.

The syntax is:

```
PropertiesClause ::= PropertiesAreAllColumns
                   | PropertyExpressions
                   | NoProperties
```

Note that the properties clause is optional and if the clause is omitted then it defaults to `PROPERTIES ARE ALL COLUMNS`.

#### PROPERTIES ARE ALL COLUMNS

Although by default a property is created for each column implicitly, this can also be made explicit through `PROPERTIES ARE ALL COLUMNS`.

The syntax is:

```
PropertiesAreAllColumns ::= 'PROPERTIES' AreKeyword? 'ALL' 'COLUMNS' ExceptColumns?

AreKeyword ::= 'ARE'
```

An example is:

```
...
VERTEX TABLES ( Person PROPERTIES ARE ALL COLUMNS )
...
```

Because of the default, the above is equivalent to:

```
...
VERTEX TABLES ( Person )
...
```

#### PROPERTIES ARE ALL COLUMNS EXCEPT ( .. )

One can exclude columns by adding an `EXCEPT` clause. The columns that are excluded will not become properties, while all the other columns do.

The syntax is:

```
ExceptColumns ::= 'EXCEPT' '(' ColumnReference ( ',' ColumnReference )* ')'
```
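For example (a sketch; it assumes the `Persons` table also has columns `ssn` and `salary` that should not become properties):

```
...
VERTEX TABLES (
  Persons LABEL Person PROPERTIES ARE ALL COLUMNS EXCEPT ( ssn, salary )
)
...
```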
#### PROPERTIES ( .. )

Instead of excluding columns (see above), “property expressions” allow for specifying exactly which columns should be included. The property expressions also allow you to use a `CAST` expression to map the column into a property of a different data type.

The syntax is:

```
PropertyExpressions ::= 'PROPERTIES' '(' PropertyExpression ( ',' PropertyExpression )* ')'

PropertyExpression ::= ColumnReferenceOrCastSpecification ( 'AS' PropertyName )?

ColumnReferenceOrCastSpecification ::= ColumnReference
                                     | CastSpecification

PropertyName ::= Identifier

ColumnReference ::= Identifier
```

For example:

```
...
VERTEX TABLES (
  Employees
    LABEL Employee
    PROPERTIES ( first_name ),
...
```

Above, even though table `Employees` may have many columns, only the column `first_name` is used as a property. The name of the property defaults to the name of the column: `first_name`.

If a different property name is desired then an alias can be used:

```
...
VERTEX TABLES (
  Employees
    LABEL Employee
    PROPERTIES ( first_name AS firstName ),
...
```

Above, the column `first_name` becomes a property with name `firstName` (notice the missing underscore character in the property name).

Property expressions may also be `CAST` expressions, which allow the values in the column to be converted into properties of a different data type.

For example:

```
...
VERTEX TABLES (
  Employees
    LABEL Employee
    PROPERTIES ( CAST(salary AS INTEGER) AS salary ),
...
```

#### NO PROPERTIES

If no properties are desired for the vertices or edges, then one can use the `NO PROPERTIES` syntax:

```
NoProperties ::= 'NO' 'PROPERTIES'
```

An example of an edge table with no properties is:

```
...
EDGE TABLES (
  ...
  Accounts AS PersonOwner
    SOURCE KEY ( number ) REFERENCES Accounts
    DESTINATION Persons
    LABEL owner NO PROPERTIES
...
```

#### Relation between labels and properties

Vertex tables that have the same label are required to have the same properties, such that the properties have the same name and compatible data types. Similarly, edge tables that have the same label are required to have the same properties, such that the properties have the same name and compatible data types.

Take the following example:

```
...
VERTEX TABLES (
  /* ERROR: it is not allowed to have tables with the same labels but different properties */
  Country LABEL Place PROPERTIES ( country_name ),
  City LABEL Place PROPERTIES ( city_name )
)
...
```

The statement above is illegal because both `Country` and `City` have label `Place` but their properties are inconsistent. To make this example work, the same property names have to be assigned:

```
...
VERTEX TABLES (
  Country LABEL Place PROPERTIES ( country_name AS name ),
  City LABEL Place PROPERTIES ( city_name AS name )
)
...
```

### Source or destination is self

A source and/or a destination vertex table of an edge may be the edge table itself. In such a case, the underlying table provides both vertices and edges at the same time.

Take the following schema as example:

Here, both tables are clear candidates for vertex tables, but it is not immediately clear which is the edge table that connects the employees and their departments. This edge table in fact is the `Employees` table, since the `Employees` table contains all the information for connecting the employees and the departments.

The graph can be created as follows:

```
CREATE PROPERTY GRAPH hr_simplified
  VERTEX TABLES (
    employees LABEL employee
      PROPERTIES ARE ALL COLUMNS EXCEPT ( job_id, manager_id, department_id ),
    departments LABEL department
      PROPERTIES ( department_id, department_name )
  )
  EDGE TABLES (
    employees AS works_for
      SOURCE KEY ( employee_id ) REFERENCES employees
      DESTINATION employees
      NO PROPERTIES
  )
```

As you can see, both the `employee` vertices and the `works_for` edges are created from the `employees` table. For the destination vertex we can omit the key so that it defaults to `KEY ( manager_id )`. This is possible only because there exists exactly one foreign key between the `employees` table and itself.
In case of the source vertex we cannot default to a foreign key so we explicitly specify the key `KEY ( employee_id )`.\n\nNote that although the edges are embedded in the vertex tables, by default it is still the case that a property is created for each column. This means that by default, the vertices and edges that are created from the same table will have the same properties. Typically this is not desired and the columns are only mapped into vertex properties while `NO PROPERTIES` is used for the edges.\n\n### Example: HR schema\n\nA more complex example is the Human Resources (HR) schema:\n\nThe following statement maps the schema into a graph:\n\n``````CREATE PROPERTY GRAPH hr\nVERTEX TABLES (\nemployees LABEL employee\nPROPERTIES ARE ALL COLUMNS EXCEPT ( job_id, manager_id, department_id ),\ndepartments LABEL department\nPROPERTIES ( department_id, department_name ),\njobs LABEL job\nPROPERTIES ARE ALL COLUMNS,\njob_history\nPROPERTIES ( start_date, end_date ),\nlocations LABEL location\nPROPERTIES ARE ALL COLUMNS EXCEPT ( country_id ),\ncountries LABEL country\nPROPERTIES ARE ALL COLUMNS EXCEPT ( region_id ),\nregions LABEL region\n)\nEDGE TABLES (\nemployees AS works_for\nSOURCE KEY ( employee_id ) REFERENCES employees\nDESTINATION KEY ( manager_id ) REFERENCES employees\nNO PROPERTIES,\nemployees AS works_at\nSOURCE KEY ( employee_id ) REFERENCES employees\nDESTINATION departments\nNO PROPERTIES,\nemployees AS works_as\nSOURCE KEY ( employee_id ) REFERENCES employees\nDESTINATION jobs\nNO PROPERTIES,\ndepartments AS managed_by\nSOURCE KEY ( department_id ) REFERENCES departments\nDESTINATION employees\nNO PROPERTIES,\njob_history AS for_employee\nSOURCE KEY ( employee_id, start_date ) REFERENCES job_history\nDESTINATION employees\nLABEL for\nNO PROPERTIES,\njob_history AS for_department\nSOURCE KEY ( employee_id, start_date ) REFERENCES job_history\nDESTINATION departments\nLABEL for\nNO PROPERTIES,\njob_history AS for_job\nSOURCE KEY ( employee_id, start_date ) REFERENCES job_history\nDESTINATION jobs\nLABEL for\nNO PROPERTIES,\ndepartments AS department_located_in\nSOURCE KEY ( department_id ) REFERENCES departments\nDESTINATION locations\nLABEL located_in\nNO PROPERTIES,\nlocations AS location_located_in\nSOURCE KEY ( location_id ) REFERENCES locations\nDESTINATION countries\nLABEL located_in\nNO PROPERTIES,\ncountries AS country_located_in\nSOURCE KEY ( country_id ) REFERENCES countries\nDESTINATION regions\nLABEL located_in\nNO PROPERTIES\n)\n``````\n\nIn this example, all the edge tables have a source vertex table that is the edge table itself. This scenario was explained in more detail in Source or destination is self. Also note that the graph only has vertex properties, but no edge properties, which is typical for such a scenario.\n\nAfter the graph is created it can be queried. For example, we may want to see an overview of the vertex and edge labels and their frequencies. Therefore, we first perform a `SELECT` query to create such an overview for the vertex labels:\n\n`````` SELECT label(n) AS lbl, COUNT(*)\nFROM MATCH (n)\nGROUP BY lbl\nORDER BY COUNT(*) DESC\n``````\n``````+------------------------+\n| lbl | COUNT(*) |\n+------------------------+\n| EMPLOYEE | 107 |\n| DEPARTMENT | 27 |\n| COUNTRY | 25 |\n| LOCATION | 23 |\n| JOB | 19 |\n| JOB_HISTORY | 10 |\n| REGION | 4 |\n+------------------------+\n``````\n\nNote that above, labels are uppercased since unquoted identifiers were used in the `CREATE PROPERTY GRAPH` statement. 
Like in SQL, quoted identifiers can be used if such implicit upper casing of identifiers is not desired.

Then, we create an overview of labels of edges and labels of their source and destination vertices, again with frequencies for each combination:

```
  SELECT label(n) AS srcLbl, label(e) AS edgeLbl, label(m) AS dstLbl, COUNT(*)
    FROM MATCH (n) -[e]-> (m)
GROUP BY srcLbl, edgeLbl, dstLbl
ORDER BY COUNT(*) DESC
```
```
+--------------------------------------------------+
| srcLbl      | edgeLbl    | dstLbl     | COUNT(*) |
+--------------------------------------------------+
| EMPLOYEE    | WORKS_AS   | JOB        | 107      |
| EMPLOYEE    | WORKS_AT   | DEPARTMENT | 106      |
| EMPLOYEE    | WORKS_FOR  | EMPLOYEE   | 106      |
| DEPARTMENT  | LOCATED_IN | LOCATION   | 27       |
| COUNTRY     | LOCATED_IN | REGION     | 25       |
| LOCATION    | LOCATED_IN | COUNTRY    | 23       |
| DEPARTMENT  | MANAGED_BY | EMPLOYEE   | 11       |
| JOB_HISTORY | FOR        | JOB        | 10       |
| JOB_HISTORY | FOR        | EMPLOYEE   | 10       |
| JOB_HISTORY | FOR        | DEPARTMENT | 10       |
+--------------------------------------------------+
```

### Multiple schemas

Vertex and edge tables of a graph can come from different database schemas. This can be achieved by qualifying the vertex and edge table names with a schema name.

For example (a graph name is required by the grammar; `my_graph` is used here):

```
CREATE PROPERTY GRAPH my_graph
  VERTEX TABLES (
    SocialNetwork.Person,
    HR.Employees LABEL Employee
  )
  EDGE TABLES (
    MySchema.SameAs
      SOURCE KEY ( firstName, lastName ) REFERENCES Person
      DESTINATION KEY ( first_name, last_name ) REFERENCES Employee
  )
```

Above, the vertex table `Person` is part of schema `SocialNetwork`, the vertex table `Employee` is part of schema `HR` and the edge table `SameAs` is part of schema `MySchema`.

Note that for the edge table, the source and destination vertex tables are referenced by table name without schema name (e.g. `Person` instead of `SocialNetwork.Person`). Also note that if no table aliases or labels are defined, then they default to the table name without the schema name.

## DROP PROPERTY GRAPH

To drop a property graph use `DROP PROPERTY GRAPH` followed by the name of the graph to drop.

The syntax is:

```
DropPropertyGraph ::= 'DROP' 'PROPERTY' 'GRAPH' GraphName
```

For example:

```
DROP PROPERTY GRAPH financial_transactions
```

# Graph Pattern Matching

## Writing simple queries

This section is mostly example-based and is meant for beginning users.

### Vertex patterns

The following query matches all the vertices with the label `Person` and retrieves their properties `name` and `dob`:

```
SELECT n.name, n.dob
FROM MATCH (n:Person)
```
```
+-----------------------+
| name     | dob        |
+-----------------------+
| Riya     | 1995-03-20 |
| Kathrine | 1994-01-15 |
| Lee      | 1996-01-29 |
+-----------------------+
```

In the query above:

• `(n:Person)` is a vertex pattern in which `n` is a variable name and `:Person` a label expression.
• Variable names like `n` can be freely chosen by the user. The vertices that match the pattern are said to “bind to the variable”.
• The label expression `:Person` specifies that we match only vertices that have the label `Person`.
• `n.name` and `n.dob` are property references, accessing the properties `name` and `dob` of the vertex `n` respectively.

The query produces three results, which are returned as a table.
The results are unordered.\n\n### Edge patterns\n\nEdge patterns take the form of arrows like `-[e]->` (match an outgoing edge) and `<-[e]-` (match an incoming edge).\n\nFor example:\n\n``````SELECT a.name AS a, b.name AS b\nFROM MATCH (a:Person) -[e:knows]-> (b:Person)\n``````\n``````+---------------------+\n| a | b |\n+---------------------+\n| Kathrine | Riya |\n| Kathrine | Lee |\n| Lee | Kathrine |\n+---------------------+\n``````\n\nIn the above query:\n\n• `-[e:knows]->` is an edge pattern in which `e` is a variable name and `:knows` a label expression.\n• The arrowhead `->` specifies that the pattern matches edges that are outgoing from `a` and incoming to `b`.\n\n### Label expressions\n\nMore complex label expressions are supported through label disjunction. Furthermore, it is possible to omit a label expression.\n\n#### Label disjunction\n\nThe bar operator (`|`) is a logical OR for specifying that a vertex or edge should match as long as it has at least one of the specified labels.\n\nFor example:\n\n``````SELECT n.name, n.dob\nFROM MATCH (n:Person|University)\n``````\n``````+--------------------------+\n| name | dob |\n+--------------------------+\n| Riya | 1995-03-20 |\n| Kathrine | 1994-01-15 |\n| Lee | 1996-01-29 |\n| UC Berkeley | <null> |\n+--------------------------+\n``````\n\nIn the query above, `(n:Person|University)` matches vertices that have either the label `Person` or the label `University`. Note that in the result, there is a `<null>` value in the last row because the corresponding vertex does not have a property `dob`.\n\n#### Omitting a label expression\n\nLabel expressions may be omitted so that the vertex or edge pattern will then match any vertex or edge.\n\nFor example:\n\n``````SELECT n.name, n.dob\nFROM MATCH (n)\n``````\n``````+--------------------------+\n| name | dob |\n+--------------------------+\n| Riya | 1995-03-20 |\n| Kathrine | 1994-01-15 |\n| Lee | 1996-01-29 |\n| UC Berkeley | <null> |\n+--------------------------+\n``````\n\nNote that the query gives the same results as before since both patterns `(n)` and `(n:Person|University)` match all the vertices in the example graph.\n\n### Filter predicates\n\nFilter predicates provide a way to further restrict which vertices or edges may bind to patterns. A filter predicate is a boolean value expression and is placed in a WHERE clause.\n\nFor example, “find all persons that have a date of birth (dob) greater than 1995-01-01”:\n\n``````SELECT n.name, n.dob\nFROM MATCH (n)\nWHERE n.dob > DATE '1995-01-01'\n``````\n``````+---------------------+\n| name | dob |\n+---------------------+\n| Riya | 1995-03-20 |\n| Lee | 1996-01-29 |\n+---------------------+\n``````\n\nAbove, the vertex pattern `(n)` initially matches all three Person vertices in the graph as well as the University vertex, since no label expression is specified. However, the filter predicate `n.dob > DATE '1995-01-01'` filters out Kathrine because her date of birth is before 1995-01-01. 
It also filters out UC Berkeley because the vertex does not have a property `dob`, so the reference `n.dob` returns null; since `null > DATE '1995-01-01'` is null (see three-valued logic), the final result is null, which has the same effect as `false`, and thus this candidate solution gets filtered out.

Another example is to “find people that Kathrine knows and that were born on the same day as her or later”:

```
SELECT m.name AS name, m.dob AS dob
FROM MATCH (n) -[e]-> (m)
WHERE n.name = 'Kathrine' AND n.dob <= m.dob
```
```
+-------------------+
| name | dob        |
+-------------------+
| Riya | 1995-03-20 |
| Lee  | 1996-01-29 |
+-------------------+
```

Here, the pattern `(n) -[e]-> (m)` initially matches all the edges in the graph since it does not have any label expression. However, the filter expression `n.name = 'Kathrine' AND n.dob <= m.dob` specifies that the source of the edge has a property `name` with the value `Kathrine`, and that both the source and destination of the edge have properties `dob` such that the value for the source is smaller than or equal to the value for the destination. Only two out of six edges satisfy this filter predicate.

### More complex patterns

More complex patterns are formed either by forming longer path patterns that consist of multiple edge patterns, or by specifying multiple comma-separated path patterns that share one or more vertex variables.

For example, “find people that Lee knows and that are a student at the same university as Lee”:

```
SELECT p2.name AS friend, u.name AS university
FROM MATCH (u:University) <-[:studentOf]- (p1:Person) -[:knows]-> (p2:Person) -[:studentOf]-> (u)
WHERE p1.name = 'Lee'
```
```
+------------------------+
| friend   | university  |
+------------------------+
| Kathrine | UC Berkeley |
+------------------------+
```

Above, in the `MATCH` clause there is only one path pattern that consists of four vertex patterns and three edge patterns. Note that the first and last vertex pattern both have the variable `u`. This means that they are the same variable rather than two different variables.
Label expressions may be specified for neither, one, or both of the vertex patterns. If multiple label expressions are specified, then they are simply evaluated in conjunction: all of them need to be satisfied for a vertex to bind to the variable.

The same query as above may be expressed through multiple comma-separated path patterns, like this:

```
SELECT p2.name AS friend, u.name AS university
FROM MATCH (p1:Person) -[:knows]-> (p2:Person)
   , MATCH (p1) -[:studentOf]-> (u:University)
   , MATCH (p2) -[:studentOf]-> (u)
WHERE p1.name = 'Lee'
```
```
+------------------------+
| friend   | university  |
+------------------------+
| Kathrine | UC Berkeley |
+------------------------+
```

Here again, both occurrences of `u` are the same variable, as well as both occurrences of `p1` and both occurrences of `p2`.

### Binding an element multiple times

In a single solution it is allowed for a vertex or an edge to be bound to multiple variables at the same time.

For example, “find friends of friends of Lee” (friendship being defined by the presence of a ‘knows’ edge):

```
SELECT p1.name AS p1, p2.name AS p2, p3.name AS p3
FROM MATCH (p1:Person) -[:knows]-> (p2:Person) -[:knows]-> (p3:Person)
WHERE p1.name = 'Lee'
```
```
+-----------------------+
| p1  | p2       | p3   |
+-----------------------+
| Lee | Kathrine | Riya |
| Lee | Kathrine | Lee  |
+-----------------------+
```

Above, in the second solution, Lee is bound to both the variable `p1` and the variable `p3`. This solution is obtained since we can hop from Lee to Kathrine via the edge that is outgoing from Lee, and then we can hop back from Kathrine to Lee via the edge that is incoming to Lee.

If such binding of vertices to multiple variables is not desired, one can use either non-equality constraints or the ALL_DIFFERENT predicate.

For example, the predicate `p1 <> p3` in the query below adds the restriction that Lee, which has to bind to variable `p1`, cannot also bind to variable `p3`:

```
SELECT p1.name AS p1, p2.name AS p2, p3.name AS p3
FROM MATCH (p1:Person) -[:knows]-> (p2:Person) -[:knows]-> (p3:Person)
WHERE p1.name = 'Lee' AND p1 <> p3
```
```
+-----------------------+
| p1  | p2       | p3   |
+-----------------------+
| Lee | Kathrine | Riya |
+-----------------------+
```

An alternative is to use the ALL_DIFFERENT predicate, which can take any number of vertices or edges as input and specifies non-equality between all of them:

```
SELECT p1.name AS p1, p2.name AS p2, p3.name AS p3
FROM MATCH (p1:Person) -[:knows]-> (p2:Person) -[:knows]-> (p3:Person)
WHERE p1.name = 'Lee' AND ALL_DIFFERENT(p1, p3)
```
```
+-----------------------+
| p1  | p2       | p3   |
+-----------------------+
| Lee | Kathrine | Riya |
+-----------------------+
```

Besides vertices binding to multiple variables, it is also possible for edges to bind to multiple variables.

For example, “find two people that both know Riya”:

```
SELECT p1.name AS p1, p2.name AS p2, e1 = e2
FROM MATCH (p1:Person) -[e1:knows]-> (riya:Person)
   , MATCH (p2:Person) -[e2:knows]-> (riya)
WHERE riya.name = 'Riya'
```
```
+-------------------------------+
| p1       | p2       | e1 = e2 |
+-------------------------------+
| Kathrine | Kathrine | true    |
+-------------------------------+
```

Above, the only solution has Kathrine bound to both variables `p1` and `p2`, and the single edge between Kathrine and Riya is bound to both `e1` and `e2`, which is why `e1 = e2` in the `SELECT` clause returns `true`.
Again, if such bindings are not desired then one should add constraints like `e1 <> e2` or `ALL_DIFFERENT(e1, e2)` to the `WHERE` clause.

### Matching edges in any direction

Any-directed edge patterns match edges in the graph no matter if they are incoming or outgoing.

An example query with two any-directed edge patterns is:

```
SELECT *
FROM MATCH (n) -[e1]- (m) -[e2]- (o)
```

Note that in case there are both incoming and outgoing data edges between two data vertices, there will be separate result bindings for each of the edges.

Any-directed edge patterns may also be used inside path pattern macros:

```
  PATH two_hops AS () -[e1]- () -[e2]- ()
SELECT *
  FROM MATCH (n) -/:two_hops*/-> (m)
```

The above query will return all pairs of vertices `n` and `m` that are reachable via a multiple of two edges, each edge being either an incoming or an outgoing edge.

## Main query structure

The previous section on writing simple queries provided a basic introduction to graph pattern matching. The rest of this document introduces the different functionalities in more detail.

The following is the syntax of the main query structure:

```
PgqlStatement ::= CreatePropertyGraph
                | DropPropertyGraph
                | Query

Query ::= SelectQuery
        | ModifyQuery

SelectQuery ::= PathPatternMacros?
                SelectClause
                FromClause
                WhereClause?
                GroupByClause?
                HavingClause?
                OrderByClause?
                LimitOffsetClauses?
```

Details of the different clauses of a query can be found in the following sections:

• The path pattern macros allow for specifying complex reachability queries.
• The SELECT clause specifies what should be returned.
• The FROM clause defines the graph pattern that is to be matched.
• The WHERE clause specifies filters.
• The GROUP BY clause allows for creating groups of results.
• The HAVING clause allows for filtering entire groups of results.
• The ORDER BY clause allows for sorting of results.
• The LIMIT and OFFSET clauses allow for pagination of results.

## SELECT

In a PGQL query, the SELECT clause defines the data entities to be returned in the result. In other words, the select clause defines the columns of the result table.

The following explains the syntactic structure of the SELECT clause.

```
SelectClause ::= 'SELECT' 'DISTINCT'? ExpAsVar ( ',' ExpAsVar )*
               | 'SELECT' '*'

ExpAsVar ::= ValueExpression ( 'AS' VariableName )?
```

A `SELECT` clause consists of the keyword `SELECT` followed by either an optional `DISTINCT` modifier and a comma-separated sequence of `ExpAsVar` (“expression as variable”) elements, or, a special character star `*`. An `ExpAsVar` consists of:

• A `ValueExpression`.
• An optional `VariableName`, specified by appending the keyword `AS` and the name of the variable.

Consider the following example:

```
SELECT n, m, n.age AS age
FROM MATCH (n:Person) -[e:friend_of]-> (m:Person)
```

For each matched subgraph, the query returns two vertices `n` and `m` and the value for property age of vertex `n`. Note that edge `e` is omitted from the result even though it is used for describing the pattern.

The `DISTINCT` modifier allows for filtering out duplicate results. The operation applies to an entire result row, such that rows are only considered duplicates of each other if they contain the same set of values.
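For example (a sketch against the student network), without `DISTINCT` the following query would return Kathrine’s date of birth twice, since she has two outgoing `knows` edges; with `DISTINCT`, each date of birth appears only once:

```
SELECT DISTINCT n.dob
FROM MATCH (n:Person) -[:knows]-> (:Person)
```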
### Assigning variable name to Select Expression

It is possible to assign a variable name to any of the select expressions, by appending the keyword `AS` and a variable name. The variable name is used as the column name of the result set. In addition, the variable name can be later used in the `ORDER BY` clause. See the related section later in this document.

```
  SELECT n.age * 2 - 1 AS pivot, n.name, n
    FROM MATCH (n:Person) -> (m:Car)
ORDER BY pivot
```

### SELECT *

`SELECT *` is a special `SELECT` clause. The semantics of `SELECT *` is to select all the variables in the graph pattern.

Consider the following query:

```
SELECT *
FROM MATCH (n:Person) -> (m) -> (w)
   , MATCH (n) -> (w) -> (m)
```

This query is semantically equivalent to:

```
SELECT n, m, w
FROM MATCH (n:Person) -> (m) -> (w)
   , MATCH (n) -> (w) -> (m)
```

`SELECT *` is not allowed when the graph pattern has zero variables. This is the case when all the vertices and edges in the pattern are anonymous (e.g. `MATCH () -> (:Person)`). Furthermore, `SELECT *` in combination with `GROUP BY` is not allowed.

## FROM

In a PGQL query, the `FROM` clause defines the graph pattern to be matched.

Syntactically, a `FROM` clause is composed of the keyword `FROM` followed by a comma-separated sequence of `MATCH` clauses, each defining a path pattern:

```
FromClause ::= 'FROM' MatchClause ( ',' MatchClause )*
```

## MATCH

```
MatchClause ::= 'MATCH' ( PathPattern | GraphPattern ) OnClause?

GraphPattern ::= '(' PathPattern ( ',' PathPattern )* ')'

PathPattern ::= SimplePathPattern
              | AnyPathPattern
              | AnyShortestPathPattern
              | AllShortestPathPattern
              | TopKShortestPathPattern
              | AnyCheapestPathPattern
              | TopKCheapestPathPattern
              | AllPathPattern

SimplePathPattern ::= VertexPattern ( PathPrimary VertexPattern )*

VertexPattern ::= '(' VariableSpecification ')'

PathPrimary ::= EdgePattern
              | ReachabilityPathExpression

EdgePattern ::= OutgoingEdgePattern
              | IncomingEdgePattern
              | AnyDirectedEdgePattern

OutgoingEdgePattern ::= '->'
                      | '-[' VariableSpecification ']->'

IncomingEdgePattern ::= '<-'
                      | '<-[' VariableSpecification ']-'

AnyDirectedEdgePattern ::= '-'
                         | '-[' VariableSpecification ']-'

VariableSpecification ::= VariableName? LabelPredicate?

VariableName ::= Identifier
```

A path pattern describes a partial topology of the subgraph pattern. In other words, a topology constraint describes some connectivity relationships between vertices and edges in the pattern, whereas the whole topology of the pattern is described with one or multiple topology constraints.

A topology constraint is composed of one or more vertices and relations, where a relation is either an edge or a path. In a query, each vertex or edge is (optionally) associated with a variable, which is a symbolic name to reference the vertex or edge in other clauses. For example, consider the following topology constraint:

```
(n) -[e]-> (m)
```

The above example defines two vertices (with variable names `n` and `m`), and an edge (with variable name `e`) between them. Also the edge is directed such that the edge `e` is an outgoing edge from vertex `n`.

More specifically, a vertex term is written as a variable name inside a pair of parentheses `()`.
An edge term is written as a variable name inside a square bracket `[]` with two dashes and an inequality symbol attached to it – which makes it look like an arrow drawn in ASCII art. An edge term is always connected with two vertex terms for the source and destination vertex of the edge; the source vertex is located at the tail of the ASCII arrow and the destination at the head of the ASCII arrow.

There can be multiple path patterns in the `FROM` clause of a PGQL query. Semantically, all constraints are conjunctive – that is, each matched result should satisfy every constraint in the `FROM` clause.

### ON clause

The `ON` clause is an optional clause that belongs to the `MATCH` clause and specifies the name of the graph to match the pattern on.

The syntax is:

```
OnClause ::= 'ON' GraphName
```

For example:

```
  SELECT p.first_name, p.last_name
    FROM MATCH (p:Person) ON my_graph
ORDER BY p.first_name, p.last_name
```

Above, the pattern `(p:Person)` is matched on graph `my_graph`.

#### Default graphs

The `ON` clauses may be omitted if a “default graph” has been provided. PGQL itself does not (yet) provide syntax for specifying a default graph, but Java APIs for invoking PGQL queries typically provide mechanisms for it:

• Oracle’s in-memory analytics engine PGX has the API `PgxGraph.queryPgql("SELECT ...")`, where the default graph corresponds to `PgxGraph.getName()`, so that `ON` clauses can be omitted from queries.
• Oracle’s PGQL-on-RDBMS provides the API `PgqlConnection.setGraph("myGraph")` for setting the default graph, such that the `ON` clauses can be omitted from queries.

If a default graph is provided then the `ON` clause can be omitted:

```
  SELECT p.first_name, p.last_name
    FROM MATCH (p:Person)
ORDER BY p.first_name, p.last_name
```

#### Querying multiple graphs

Although each `MATCH` clause can have its own `ON` clause, PGQL 1.4 does not support querying of multiple graphs in a single query. Therefore, it is not possible for two `MATCH` clauses to have `ON` clauses with different graph names.

### Repeated variables

There can be multiple topology constraints in the `FROM` clause of a PGQL query. In such a case, vertex terms that have the same variable name correspond to the same vertex entity. For example, consider the following two lines of topology constraints:

```
SELECT *
FROM MATCH (n) -[e1]-> (m1)
   , MATCH (n) -[e2]-> (m2)
```

Here, the vertex term `(n)` in the first constraint indeed refers to the same vertex as the vertex term `(n)` in the second constraint. It is an error, however, if two edge terms have the same variable name, or, if the same variable name is assigned to an edge term as well as to a vertex term in a single query.

### Alternatives for specifying graph patterns

There are various ways in which a particular graph pattern can be specified.

First, a single path pattern can be written as a chain of edge terms such that two consecutive edge terms share the common vertex term in between. For example:

```
SELECT *
FROM MATCH (n1) -[e1]-> (n2) -[e2]-> (n3) -[e3]-> (n4)
```

The above graph pattern is equivalent to the graph pattern specified by the following set of comma-separated path patterns:

```
SELECT *
FROM MATCH (n1) -[e1]-> (n2)
   , MATCH (n2) -[e2]-> (n3)
   , MATCH (n3) -[e3]-> (n4)
```

Second, it is allowed to reverse the direction of an edge in the pattern, i.e. right-to-left instead of left-to-right.
Therefore, the following is a valid graph pattern:

```
SELECT *
FROM MATCH (n1) -[e1]-> (n2) <-[e2]- (n3)
```

Please mind the edge directions in the above query – vertex `n2` is a common outgoing neighbor of both vertex `n1` and vertex `n3`.

Third, it is allowed to omit variable names if the particular vertex or edge does not need to be referenced in any of the other clauses (e.g. `SELECT` or `ORDER BY`). When the variable name is omitted, the vertex or edge is an “anonymous” vertex or edge.

Syntactically, for vertices, this results in an empty pair of parentheses. In case of edges, the whole square bracket is omitted in addition to the variable name.

The following table summarizes these shortcuts.

| syntax form | example |
| --- | --- |
| basic form | `(n) -[e]-> (m)` |
| omit variable name of the source vertex | `() -[e]-> (m)` |
| omit variable name of the destination vertex | `(n) -[e]-> ()` |
| omit variable names in both vertices | `() -[e]-> ()` |
| omit variable name in edge | `(n) -> (m)` |

### Disconnected graph patterns

In the case that the `MATCH` clause contains two or more disconnected graph patterns (i.e. groups of vertices and relations that are not connected to each other), the different groups are matched independently and the final result is produced by taking the Cartesian product of the result sets of the different groups. The following is an example:

```
SELECT *
FROM MATCH (n1) -> (m1)
   , MATCH (n2) -> (m2)
```

Here, vertices `n2` and `m2` are not connected to vertices `n1` and `m1`, resulting in a Cartesian product.

### Label predicates

In the property graph model, vertices and edges may have labels, which are arbitrary (character) strings. Typically, labels are used to encode types of entities. For example, a graph may contain a set of vertices with the label `Person`, a set of vertices with the label `Movie`, and, a set of edges with the label `likes`. A label predicate specifies that a vertex or edge only matches if it has any of the specified labels. The syntax for specifying a label predicate is through a colon (`:`) followed by one or more labels that are separated by a vertical bar (`|`).

This is explained by the following grammar construct:

```
LabelPredicate ::= ':' Label ( '|' Label )*
```

Take the following example:

```
SELECT *
FROM MATCH (x:Person) -[e:likes|knows]-> (y:Person)
```

Here, we specify that vertices `x` and `y` have the label `Person` and that the edge `e` has the label `likes` or the label `knows`.

A label predicate can be specified even when a variable is omitted. For example:

```
SELECT *
FROM MATCH (:Person) -[:likes|knows]-> (:Person)
```

There are also built-in functions available for labels:

• `label(element)` returns the label of a vertex or edge in the case the vertex/edge has only a single label.
• `labels(element)` returns the set of labels of a vertex or edge in the case the vertex/edge has multiple labels.
• `has_label(element, string)` returns `true` if the vertex or edge (first argument) has the specified label (second argument).
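For example (a sketch against the student network), `has_label` can express a label test inside a filter instead of in the pattern itself:

```
SELECT n.name
FROM MATCH (n) -[e]-> (m)
WHERE has_label(n, 'Person') AND has_label(m, 'University')
```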
## WHERE

Filters are applied after pattern matching to remove certain solutions. A filter takes the form of a boolean value expression which typically involves certain property values of the vertices and edges in the graph pattern.

The syntax is:

```
WhereClause ::= 'WHERE' ValueExpression
```

For example:

```
SELECT y.name
FROM MATCH (x) -> (y)
WHERE x.name = 'Jake'
  AND y.age > 25
```

Here, the first filter describes that the vertex `x` has a property `name` and its value is `Jake`. Similarly, the second filter describes that the vertex `y` has a property `age` and its value is larger than `25`. Here, in the filter, the dot (`.`) operator is used for property access. For the detailed syntax and semantics of expressions, see Functions and Expressions.

Note that the ordering of constraints does not have an effect on the result, such that the query from the previous example is equivalent to:

```
SELECT y.name
FROM MATCH (x) -> (y)
WHERE y.age > 25
  AND x.name = 'Jake'
```

# Grouping and Aggregation

## GROUP BY

`GROUP BY` allows for grouping of solutions and is typically used in combination with aggregates like `MIN` and `MAX` to compute aggregations over groups of solutions.

The following explains the syntactic structure of the `GROUP BY` clause:

```
GroupByClause ::= 'GROUP' 'BY' ExpAsVar ( ',' ExpAsVar )*
```

The `GROUP BY` clause starts with the keywords GROUP BY and is followed by a comma-separated list of value expressions that can be of any type.

Consider the following query:

```
  SELECT n.first_name, COUNT(*), AVG(n.age)
    FROM MATCH (n:Person)
GROUP BY n.first_name
```

Matches are grouped by their values for `n.first_name`. For each group, the query selects `n.first_name` (i.e. the group key), the number of solutions in the group (i.e. `COUNT(*)`), and the average value of the property age for vertex n (i.e. `AVG(n.age)`).

### Multiple Terms in GROUP BY

It is possible that the `GROUP BY` clause consists of multiple terms. In such a case, matches are grouped together only if they hold the same result for each of the group expressions.

Consider the following query:

```
  SELECT n.first_name, n.last_name, COUNT(*)
    FROM MATCH (n:Person)
GROUP BY n.first_name, n.last_name
```

Matches will be grouped together only if they hold the same values for `n.first_name` and the same values for `n.last_name`.

### Aliases in GROUP BY

Each expression in `GROUP BY` can have an alias (e.g. `GROUP BY n.prop AS myAlias`). The alias can be referenced from the `HAVING`, `ORDER BY` and `SELECT` clauses so that repeated specification of the same expression can be avoided.

Note, however, that `GROUP BY` can also reference aliases from `SELECT`, but it is not allowed to create a circular dependency such that an expression in the `SELECT` references an expression in the `GROUP BY` that in its turn references that same expression in the `SELECT`.
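For example (a sketch), the alias `ageGroup` is defined once in the `GROUP BY` and then reused in the `SELECT` and `ORDER BY` clauses:

```
  SELECT ageGroup, COUNT(*)
    FROM MATCH (n:Person)
GROUP BY n.age / 10 AS ageGroup
ORDER BY ageGroup
```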
### GROUP BY and NULL values

The group for which all the group keys are null is a valid group and takes part in further query processing.

To filter out such a group, use a `HAVING` clause (see HAVING), for example:

```
  SELECT n.prop1, n.prop2, COUNT(*)
    FROM MATCH (n)
GROUP BY n.prop1, n.prop2
  HAVING n.prop1 IS NOT NULL AND n.prop2 IS NOT NULL
```

### Repetition of Group Expression in Select or Order Expression

Group expressions may be repeated in select or order expressions.

Consider the following query:

```
  SELECT n.age, COUNT(*)
    FROM MATCH (n)
GROUP BY n.age
ORDER BY n.age
```

Here, the group expression `n.age` is repeated in the SELECT and ORDER BY.

## Aggregation

Aggregates `COUNT`, `MIN`, `MAX`, `AVG` and `SUM` can aggregate over groups of solutions.

The syntax is:

```
Aggregation ::= CountAggregation
              | MinAggregation
              | MaxAggregation
              | AvgAggregation
              | SumAggregation
              | ArrayAggregation
              | ListaggAggregation

CountAggregation ::= 'COUNT' '(' '*' ')'
                   | 'COUNT' '(' 'DISTINCT'? ValueExpression ')'

MinAggregation ::= 'MIN' '(' 'DISTINCT'? ValueExpression ')'

MaxAggregation ::= 'MAX' '(' 'DISTINCT'? ValueExpression ')'

AvgAggregation ::= 'AVG' '(' 'DISTINCT'? ValueExpression ')'

SumAggregation ::= 'SUM' '(' 'DISTINCT'? ValueExpression ')'

ArrayAggregation ::= 'ARRAY_AGG' '(' 'DISTINCT'? ValueExpression ')'

ListaggAggregation ::= 'LISTAGG' '(' 'DISTINCT'? ValueExpression ListaggSeparator? ')'

ListaggSeparator ::= ',' StringLiteral
```

Syntactically, an aggregation takes the form of an aggregate followed by an optional `DISTINCT` modifier and a `ValueExpression`.

The following table gives an overview of the different aggregates and their supported input types.

| aggregate operator | semantics | required input type |
| --- | --- | --- |
| `COUNT` | counts the number of times the given expression has a bound (i.e. is not null). | any type, including vertex and edge |
| `MIN` | takes the minimum of the values for the given expression. | numeric, string, boolean, date, time [with time zone], or, timestamp [with time zone] |
| `MAX` | takes the maximum of the values for the given expression. | numeric, string, boolean, date, time [with time zone], or, timestamp [with time zone] |
| `SUM` | sums over the values for the given expression. | numeric |
| `AVG` | takes the average of the values for the given expression. | numeric |
| `ARRAY_AGG` | constructs an array/list of the values for the given expression. | numeric, string, boolean, date, time [with time zone], or, timestamp [with time zone] |
| `LISTAGG` | constructs a concatenation of the values for the given expression; an optional separator can be specified to delimit the values. | string |

All aggregate functions ignore nulls. `COUNT` never returns null, but instead returns zero if all input values to the aggregate function are null. For all the remaining aggregate functions, if there are no inputs or all input values to the aggregate function are null, then the function returns null.

For example, the average of `2`, `4` and `null` is `3`, while the average of `null` and `null` is `null`.
The count of `2`, `4` and `null` is `2` (there are two non-null values), while the count of `null` and `null` is `0`.\n\n### Aggregation with GROUP BY\n\nIf a `GROUP BY` is specified, aggregations are applied to each individual group of solutions.\n\nFor example:\n\n``````SELECT label(owner),\nCOUNT(*) AS numTransactions,\nSUM(out.amount) AS totalOutgoing,\nLISTAGG(out.amount, ', ') AS amounts\nFROM MATCH (a:Account) -[:owner]-> (owner:Person|Company)\n, MATCH (a) -[out:transaction]-> (:Account)\nGROUP BY label(owner)\nORDER BY label(owner)\n``````\n``````+---------------------------------------------------------------------------------+\n| label(owner) | numTransactions | totalOutgoing | amounts |\n+---------------------------------------------------------------------------------+\n| Company | 1 | 9999.5 | 9999.5 |\n| Person | 4 | 15401.0 | 1000.0, 9900.0, 1500.3, 3000.7 |\n+---------------------------------------------------------------------------------+\n``````\n\nHere, we match accounts, their owner (either a person or a company) and their outgoing transactions. Then we group by the owner’s label (either `Person` or `Company`) and compute the total number of outgoing transactions, the total amount transacted, and a comma-separated list of transaction amounts for each group.\n\n### Aggregation without GROUP BY\n\nIf no `GROUP BY` is specified, aggregations are applied to the entire set of solutions.\n\n``````SELECT COUNT(*) AS numTransactions,\nSUM(out.amount) AS totalOutgoing,\nLISTAGG(out.amount, ', ') AS amounts\nFROM MATCH (a:Account) -[:owner]-> (owner:Person|Company)\n, MATCH (a) -[out:transaction]-> (:Account)\n``````\n``````+--------------------------------------------------------------------------+\n| numTransactions | totalOutgoing | amounts |\n+--------------------------------------------------------------------------+\n| 5 | 25400.5 | 1000.0, 9900.0, 1500.3, 3000.7, 9999.5 |\n+--------------------------------------------------------------------------+\n``````\n\nNote that the result will always be a single row, unless nothing was matched in which case zero rows are returned.\n\n### COUNT(*)\n\n`COUNT(*)` is a special construct that simply counts the number of solutions without evaluating an expression.\n\nFor example:\n\n``````SELECT COUNT(*)\nFROM MATCH (m:Person)\n``````\n\n### DISTINCT in aggregation\n\nThe `DISTINCT` modifier specifies that duplicate values should be removed before performing aggregation.\n\nFor example:\n\n``````SELECT AVG(DISTINCT m.age)\nFROM MATCH (m:Person)\n``````\n\nHere, we aggregate only over distinct `m.age` values.\n\n## HAVING\n\nThe `HAVING` clause is an optional clause that can be placed after a `GROUP BY` clause to filter out particular groups of solutions.\n\nThe syntax is:\n\n``````HavingClause ::= 'HAVING' ValueExpression\n``````\n\nThe value expression needs to be a boolean expression.\n\nFor example:\n\n`````` SELECT n.name\nFROM MATCH (n) -[:has_friend]-> (m)\nGROUP BY n\nHAVING COUNT(m) > 10\n``````\n\nThis query returns the names of people who have more than 10 friends.\n\n# Sorting and Row Limiting\n\n## ORDER BY\n\nWhen there are multiple matched subgraph instances to a given query, in general, the ordering between those instances are not defined; the query execution engine can present the result in any order. 
Still, the user can specify the ordering between the answers in the result using the `ORDER BY` clause.

The following explains the syntactic structure of the `ORDER BY` clause.

```
OrderByClause ::= 'ORDER' 'BY' OrderTerm ( ',' OrderTerm )*

OrderTerm ::= ValueExpression ( 'ASC' | 'DESC' )?
```

The `ORDER BY` clause starts with the keywords `ORDER BY` and is followed by a comma-separated list of order terms. An order term consists of the following parts:

• An expression.
• An optional ASC or DESC decoration to specify that ordering should be ascending or descending.
• If no keyword is given, the default is ascending order.

The following is an example in which the results are ordered by property access `n.age` in ascending order:

```
  SELECT n.name
    FROM MATCH (n:Person)
ORDER BY n.age ASC
```

### Data types for ORDER BY

A partial ordering for the different data types is defined as follows:

• Numeric values are ordered from small to large.
• String values are ordered lexicographically.
• Boolean values are ordered such that `false` comes before `true`.
• Datetime values (i.e. dates, times, or timestamps) are ordered such that earlier points in time come before later points in time.

Vertices and edges cannot be ordered directly.

### Multiple expressions in ORDER BY

An `ORDER BY` may contain more than one expression, in which case the expressions are evaluated from left to right. That is, the (n+1)-th ordering term is used only as a tie-break rule for the n-th ordering term. Note that different expressions can have different ascending or descending decorators.

```
  SELECT f.name
    FROM MATCH (f:Person)
ORDER BY f.age ASC, f.salary DESC
```

## LIMIT and OFFSET

The `LIMIT` clause puts an upper bound on the number of solutions returned, whereas the `OFFSET` clause specifies the start of the first solution that should be returned.

The following explains the syntactic structure for the LIMIT and OFFSET clauses:

```
LimitOffsetClauses ::= 'LIMIT' LimitOffsetValue ( 'OFFSET' LimitOffsetValue )?
                     | 'OFFSET' LimitOffsetValue ( 'LIMIT' LimitOffsetValue )?

LimitOffsetValue ::= UNSIGNED_INTEGER
                   | BindVariable
```

The `LIMIT` clause starts with the keyword `LIMIT` and is followed by an integer that defines the limit. Similarly, the `OFFSET` clause starts with the keyword `OFFSET` and is followed by an integer that defines the offset. Furthermore:

• The `LIMIT` and `OFFSET` clauses can be defined in either order.
• The limit and offset may not be negative.

The following semantics hold for the `LIMIT` and `OFFSET` clauses:

• The `OFFSET` clause is always applied first, even if the `LIMIT` clause is placed before the `OFFSET` clause inside the query.
• An `OFFSET` of zero has no effect and gives the same result as if the `OFFSET` clause was omitted.
• If the number of actual solutions after `OFFSET` is applied is greater than the limit, then at most the limit number of solutions will be returned.

In the following query, the first 5 intermediate solutions are pruned from the result (i.e. `OFFSET 5`). The next 10 intermediate solutions are returned and become final solutions of the query (i.e. `LIMIT 10`).

```
SELECT n
FROM MATCH (n)
LIMIT 10
OFFSET 5
```

# Variable-Length Paths

Graph Pattern Matching introduced how “fixed-length” patterns can be matched.
# Variable-Length Paths

Graph Pattern Matching introduced how "fixed-length" patterns can be matched. Fixed-length patterns match a fixed number of vertices and edges such that every solution (every row) has the same number of vertices and edges.

However, through the use of quantifiers (introduced below) it is possible to match "variable-length" paths such as shortest paths. Variable-length path patterns match a variable number of vertices and edges such that different solutions (different rows) potentially have different numbers of vertices and edges.

## Overview of Path Finding Goals

| goal | matches | limitations on quantifier |
|------|---------|---------------------------|
| `-/ .. /->` | any path* | no limitations |
| `ANY` | any path | no limitations |
| `ANY SHORTEST` | any shortest path | no limitations |
| `ALL SHORTEST` | all shortest paths | no limitations |
| `TOP k SHORTEST` | top k shortest paths | no limitations |
| `ANY CHEAPEST` | any cheapest path | no limitations |
| `TOP k CHEAPEST` | top k cheapest paths | no limitations |
| `ALL` | all paths | requires an upper bound on the path length |

*Allows for retrieving data from the two path endpoint vertices only. To retrieve data from all vertices or edges along the path, use path finding goal `ANY`.

## Quantifiers

Quantifiers allow for matching variable-length paths by specifying lower and upper limits on the number of times a pattern is allowed to match.

The syntax is:

```
GraphPatternQuantifier ::= ZeroOrMore
                         | OneOrMore
                         | Optional
                         | ExactlyN
                         | NOrMore
                         | BetweenNAndM
                         | BetweenZeroAndM

ZeroOrMore ::= '*'

OneOrMore ::= '+'

Optional ::= '?'

ExactlyN ::= '{' UNSIGNED_INTEGER '}'

NOrMore ::= '{' UNSIGNED_INTEGER ',' '}'

BetweenNAndM ::= '{' UNSIGNED_INTEGER ',' UNSIGNED_INTEGER '}'

BetweenZeroAndM ::= '{' ',' UNSIGNED_INTEGER '}'
```

The meaning of the different quantifiers is:

| quantifier | meaning | matches |
|------------|---------|---------|
| `*` | zero (0) or more | a path that connects the source and destination of the path by zero or more matches of a given pattern |
| `+` | one (1) or more | a path that connects the source and destination of the path by one or more matches of a given pattern |
| `?` | zero or one (1), i.e. "optional" | a path that connects the source and destination of the path by zero or one matches of a given pattern |
| `{ n }` | exactly n | a path that connects the source and destination of the path by exactly n matches of a given pattern |
| `{ n, }` | n or more | a path that connects the source and destination of the path by at least n matches of a given pattern |
| `{ n, m }` | between n and m (inclusive) | a path that connects the source and destination of the path by at least n and at most m (inclusive) matches of a given pattern |
| `{ , m }` | between zero (0) and m (inclusive) | a path that connects the source and destination of the path by at least 0 and at most m (inclusive) matches of a given pattern |

All paths are considered, even the ones that contain a vertex or edge multiple times. In other words, cycles are permitted.

An example is:

```
SELECT a.number AS a,
       b.number AS b,
       COUNT(e) AS pathLength,
       ARRAY_AGG(e.amount) AS amounts
  FROM MATCH ANY SHORTEST (a:Account) -[e:transaction]->* (b:Account)
 WHERE a.number = 10039 AND b.number = 2090
```

```
+------------------------------------------------------+
| a     | b    | pathLength | amounts                  |
+------------------------------------------------------+
| 10039 | 2090 | 3          | [1000.0, 1500.3, 9999.5] |
+------------------------------------------------------+
```

Above, we use the quantifier `*` to find a shortest path from account `10039` to account `2090`, following only `transaction` edges. Shortest path finding is explained in more detail in Shortest Path. `COUNT(e)` and `ARRAY_AGG(e.amount)` are horizontal aggregations which are explained in Horizontal Aggregation.
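Note that a quantifier may also be applied to a parenthesized pattern with a `WHERE` clause inside, which then constrains every repetition of the pattern. The following sketch (reusing the accounts graph from the example above) matches chains of two to four transactions in which every individual transaction exceeds `500`:

```
SELECT a.number AS a, b.number AS b, COUNT(e) AS num_hops
  FROM MATCH ANY (a:Account) (-[e:transaction]-> WHERE e.amount > 500){2,4} (b:Account)
```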
## Any Path and Reachability

### Any Path

`ANY` is used to find any (arbitrary) path between a pair of source-destination vertices.

Two typical uses are:

- Testing for the existence of a path between a pair of vertices without caring about the actual data along the paths.
- Matching a path in case of tree-structured graphs or other types of graph structures for which it is known that only single paths exist between pairs of vertices.

Note that in the first case where we test for path existence, it is also possible to use Reachability instead.

The syntax for matching any path is:

```
AnyPathPattern ::= 'ANY' SourceVertexPattern
                         QuantifiedPathPatternPrimary
                         DestinationVertexPattern
                 | 'ANY' '(' SourceVertexPattern
                             QuantifiedPathPatternPrimary
                             DestinationVertexPattern ')'

SourceVertexPattern ::= VertexPattern

DestinationVertexPattern ::= VertexPattern

QuantifiedPathPatternPrimary ::= PathPatternPrimary GraphPatternQuantifier?

PathPatternPrimary ::= EdgePattern
                     | ParenthesizedPathPatternExpression

ParenthesizedPathPatternExpression ::= '(' VertexPattern? EdgePattern VertexPattern?
                                           WhereClause?
                                           CostClause? ')'
```

An example where we test for path existence is:

```
  SELECT dst.number
    FROM MATCH ANY (src:Account) -[e]->+ (dst:Account)
   WHERE src.number = 8021
ORDER BY dst.number
```

```
+--------+
| number |
+--------+
| 1001   |
| 2090   |
| 8021   |
| 10039  |
+--------+
```

An example where we return data along the path is:

```
  SELECT dst.number, LISTAGG(e.amount, ' + ') || ' = ', SUM(e.amount)
    FROM MATCH ANY (src:Account) -[e]->+ (dst:Account)
   WHERE src.number = 8021
ORDER BY dst.number
```

```
+----------------------------------------------------------------+
| number | LISTAGG(e.amount, ' + ') || ' = '    | SUM(e.amount)  |
+----------------------------------------------------------------+
| 1001   | 1500.3 =                             | 1500.3         |
| 2090   | 1500.3 + 9999.5 =                    | 11499.8        |
| 8021   | 1500.3 + 9999.5 + 9900.0 + 1000.0 =  | 22399.8        |
| 10039  | 1500.3 + 9999.5 + 9900.0 =           | 21399.8        |
+----------------------------------------------------------------+
```

Note that above, there is always only a single path per source-destination pair (there are four such pairs), and it is arbitrary which path is matched. In this example, all four paths happen to contain the transaction edge with amount `1500.3` instead of the one with amount `3000.7`.

### Reachability

In graph reachability we test for the existence of paths (true/false) between pairs of vertices. PGQL uses forward slashes (`-/` and `/->`) instead of square brackets (`-[` and `]->`) to indicate reachability semantics.

The syntax is:

```
ReachabilityPathExpression ::= OutgoingPathPattern
                             | IncomingPathPattern

OutgoingPathPattern ::= '-/' PathSpecification '/->'

IncomingPathPattern ::= '<-/' PathSpecification '/-'

PathSpecification ::= LabelPredicate
                    | PathPredicate

PathPredicate ::= ':' Label GraphPatternQuantifier?
```

For example:

```
SELECT c.name
  FROM MATCH (c:Class) -/:subclass_of*/-> (arrayList:Class)
 WHERE arrayList.name = 'ArrayList'
```

Here, we find all classes that are a subclass of `'ArrayList'`. The regular path pattern `subclass_of*` matches a path consisting of zero or more edges with the label `subclass_of`.
Because the pattern may match a path with zero edges, the two query vertices can be bound to the same data vertex if the data vertex satisfies the constraints specified in both source and destination vertices (i.e. the vertex has a label `Class` and a property `name` with a value `ArrayList`).

#### Examples with various quantifiers

##### Zero or more

The following example finds all vertices `y` that can be reached from `Amy` by following zero or more `likes` edges.

```
SELECT y.name
  FROM MATCH (x:Person) -/:likes*/-> (y)
 WHERE x.name = 'Amy'
```

```
+--------+
| y.name |
+--------+
| Amy    |
| John   |
| Albert |
| Judith |
+--------+
```

Note that here, `Amy` is returned since `Amy` connects to `Amy` by following zero `likes` edges. In other words, there exists an empty path for the vertex pair. For `Judith`, there exist two paths (`100 -> 200 -> 300 -> 400` and `100 -> 400`). However, `Judith` is still only returned once since the semantic of `-/ .. /->` is to test for the existence of paths between pairs of vertices (i.e. "reachability"), so there is only at most one result per pair of vertices.

##### One or more

The following example finds all people that can be reached from `Amy` by following one or more `likes` edges.

```
SELECT y.name
  FROM MATCH (x:Person) -/:likes+/-> (y)
 WHERE x.name = 'Amy'
```

```
+--------+
| y.name |
+--------+
| John   |
| Albert |
| Judith |
+--------+
```

This time, `Amy` is not returned since there does not exist a path that connects `Amy` to `Amy` that has a length greater than zero.

The following example finds all people that can be reached from `Judith` by following one or more `knows` edges:

```
SELECT y.name
  FROM MATCH (x:Person) -/:knows+/-> (y)
 WHERE x.name = 'Judith'
```

```
+--------+
| y.name |
+--------+
| Jonas  |
| Judith |
+--------+
```

Here, in addition to `Jonas`, `Judith` is returned since there exist paths from `Judith` back to `Judith` that have a length greater than zero. Examples of such paths are `400 -> 500 -> 400` and `400 -> 500 -> 400 -> 500 -> 400`.

##### Optional

The following example finds all people that can be reached from `Judith` by following zero or one `knows` edges.

```
SELECT y.name
  FROM MATCH (x:Person) -/:knows?/-> (y)
 WHERE x.name = 'Judith'
```

```
+--------+
| y.name |
+--------+
| Judith |
| Jonas  |
+--------+
```

Here, `Judith` is returned since there exists the empty path that starts in `400` and ends in `400`. `Jonas` is returned because of the following path that has length one: `400 -> 500`.

##### Exactly n

The following example finds all people that can be reached from `Amy` by following exactly two `likes` edges.

```
SELECT y.name
  FROM MATCH (x:Person) -/:likes{2}/-> (y)
 WHERE x.name = 'Amy'
```

```
+--------+
| y.name |
+--------+
| Albert |
+--------+
```

Here, `Albert` is returned since there exists the following path that has `likes` edges only: `100 -> 200 -> 300`.

##### n or more

The following example finds all people that can be reached from `Amy` by following 2 or more `likes` edges.

```
SELECT y.name
  FROM MATCH (x:Person) -/:likes{2,}/-> (y)
 WHERE x.name = 'Amy'
```

```
+--------+
| y.name |
+--------+
| Albert |
| Judith |
+--------+
```

Here, `Albert` is returned since there exists the following path of length two: `100 -> 200 -> 300`.
`Judith` is returned since there exists a path of length three: `100 -> 200 -> 300 -> 400`.

##### Between n and m

The following example finds all people that can be reached from `Amy` by following between 1 and 2 `likes` edges.

```
SELECT y.name
  FROM MATCH (x:Person) -/:likes{1,2}/-> (y)
 WHERE x.name = 'Amy'
```

```
+--------+
| y.name |
+--------+
| John   |
| Albert |
| Judith |
+--------+
```

Here, `John` is returned since there exists a path of length one (i.e. `100 -> 200`); `Albert` is returned since there exists a path of length two (i.e. `100 -> 200 -> 300`); `Judith` is returned since there exists a path of length one (i.e. `100 -> 400`).

##### Between zero and m

The following example finds all people that can be reached from `Judith` by following at most 2 `knows` edges.

```
SELECT y.name
  FROM MATCH (x:Person) -/:knows{,2}/-> (y)
 WHERE x.name = 'Judith'
```

```
+--------+
| y.name |
+--------+
| Jonas  |
| Judith |
+--------+
```

Here, `Jonas` is returned since there exists a path of length one (i.e. `400 -> 500`). For `Judith`, there exists an empty path of length zero (i.e. `400`) as well as a non-empty path of length two (i.e. `400 -> 500 -> 400`). Yet, `Judith` is only returned once.

#### Path pattern macros

One or more "path pattern macros" may be declared at the beginning of the query. These macros allow for expressing complex regular expressions. PGQL 1.4 allows macros only for reachability, not for (top-k) shortest path.

```
PathPatternMacros ::= PathPatternMacro+

PathPatternMacro ::= 'PATH' Identifier 'AS' PathPattern WhereClause?
```

A path pattern declaration starts with the keyword `PATH`, followed by an expression name, the assignment operator `AS`, and a path pattern consisting of at least one vertex. The syntactic structure of the path pattern is the same as a path pattern in the `MATCH` clause.

For example:

```
  PATH has_parent AS () -[:has_father|has_mother]-> (:Person)
SELECT ancestor.name
  FROM MATCH (p1:Person) -/:has_parent+/-> (ancestor)
     , MATCH (p2:Person) -/:has_parent+/-> (ancestor)
 WHERE p1.name = 'Mario'
   AND p2.name = 'Luigi'
```

The above query finds common ancestors of `Mario` and `Luigi`.

Another example is:

```
  PATH connects_to AS (:Generator) -[:has_connector]-> (c:Connector) <-[:has_connector]- (:Generator)
       WHERE c.status = 'OPERATIONAL'
SELECT generatorA.location, generatorB.location
  FROM MATCH (generatorA) -/:connects_to+/-> (generatorB)
```

The above query outputs all generators that are connected to each other via one or more connectors that are all operational.

If the direction of the macro invocation is from right-to-left (`<-/../-`) instead of from left-to-right (`-/../->`), then the pattern in the macro is also matched from right-to-left instead of left-to-right.

For example:

```
PATH macro1 AS (v1:Generator) -[e1:has_connector]-> (v2:Connector)
SELECT COUNT(*)
  FROM MATCH (generatorA) <-/:macro1+/- (generatorB)
 WHERE generatorA.name = 'AEH382'
```

The above query is equivalent to:

```
PATH macro1 AS (v2:Connector) <-[e1:has_connector]- (v1:Generator)
SELECT COUNT(*)
  FROM MATCH (generatorA) -/:macro1+/-> (generatorB)
 WHERE generatorA.name = 'AEH382'
```
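Macros may also contain patterns with more than one edge, as in the `connects_to` example above. The following is a further sketch; the macro name `two_hops` and the use of `knows` edges between `Person` vertices are illustrative and not taken from the examples above:

```
  PATH two_hops AS () -[:knows]-> () -[:knows]-> ()
SELECT m.name
  FROM MATCH (n:Person) -/:two_hops+/-> (m)
 WHERE n.name = 'Amy'
```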
## Shortest Path

Shortest path finding allows for finding paths with a minimal number of hops. Given a pair of vertices, there are different kinds of shortest paths that can be obtained:

### Any Shortest Path

`ANY SHORTEST` allows for matching a shortest path (i.e. minimal number of edges) between a source vertex and a destination vertex. In case multiple shortest paths exist, an arbitrary one is retrieved.

The syntax is:

```
AnyShortestPathPattern ::= 'ANY' 'SHORTEST' SourceVertexPattern
                                 QuantifiedPathPatternPrimary
                                 DestinationVertexPattern
                         | 'ANY' 'SHORTEST' '(' SourceVertexPattern
                                 QuantifiedPathPatternPrimary
                                 DestinationVertexPattern ')'
```

For example:

```
SELECT src, SUM(e.weight), dst
  FROM MATCH ANY SHORTEST (src) -[e]->* (dst)
 WHERE src.age < dst.age
```

Another example is:

```
  SELECT COUNT(e) AS num_hops
       , p1.name AS start
       , ARRAY_AGG ( CASE
                       WHEN has_label(dst, 'Account')
                         THEN CAST(dst.number AS STRING)
                       ELSE dst.name
                     END
                   ) AS path
    FROM MATCH ANY SHORTEST (p1:Person) (-[e]- (dst))* (p2:Person)
   WHERE p1.name = 'Camille' AND p2.name = 'Liam'
ORDER BY num_hops
```

```
+------------------------------------------+
| num_hops | start   | path                |
+------------------------------------------+
| 3        | Camille | [10039, 2090, Liam] |
+------------------------------------------+
```

Filters on vertices and edges along paths can be specified by adding a `WHERE` clause inside the quantified pattern.

For example, the following query matches a shortest path (if one exists) such that each edge along the path has a property `weight` with a value greater than `10`:

```
SELECT src, ARRAY_AGG(e.weight), dst
  FROM MATCH ANY SHORTEST (src) (-[e]-> WHERE e.weight > 10)* (dst)
```

Note that this is different from a `WHERE` clause that is placed outside of the quantified pattern:

```
SELECT src, ARRAY_AGG(e.weight), dst
  FROM MATCH ANY SHORTEST (src) -[e]->* (dst) WHERE SUM(e.cost) < 100
```

Here, the filter is applied only after a shortest path is matched, such that if the `WHERE` condition is not satisfied, the path is filtered out and no other path is considered, even though another path may exist that does satisfy the `WHERE` condition.

### All Shortest Path

Given a pair of source-destination vertices, `ALL SHORTEST` path matches all shortest paths between the two vertices. In contrast to `ANY SHORTEST`, `ALL SHORTEST` will return a deterministic result as it will include all shortest paths instead of an arbitrary shortest path.

The syntax is:

```
AllShortestPathPattern ::= 'ALL' 'SHORTEST' SourceVertexPattern
                                 QuantifiedPathPatternPrimary
                                 DestinationVertexPattern
                         | 'ALL' 'SHORTEST' '(' SourceVertexPattern
                                 QuantifiedPathPatternPrimary
                                 DestinationVertexPattern ')'
```

For example:

```
  SELECT LISTAGG(e.amount, ' + ') || ' = ', SUM(e.amount) AS total_amount
    FROM MATCH ALL SHORTEST (a:Account) -[e:transaction]->* (b:Account)
   WHERE a.number = 10039 AND b.number = 2090
ORDER BY total_amount
```

```
+--------------------------------------------------+
| LISTAGG(e.amount, ' + ') || ' = ' | total_amount |
+--------------------------------------------------+
| 1000.0 + 1500.3 + 9999.5 =        | 12499.8      |
| 1000.0 + 3000.7 + 9999.5 =        | 14000.2      |
+--------------------------------------------------+
```
### Top-K Shortest Path

`TOP` k `SHORTEST` path matches the k shortest paths for each pair of source and destination vertices. Aggregations can then be computed over their vertices/edges.

The syntax is:

```
TopKShortestPathPattern ::= 'TOP' KValue 'SHORTEST' SourceVertexPattern
                                  QuantifiedPathPatternPrimary
                                  DestinationVertexPattern
                          | 'TOP' KValue 'SHORTEST' '(' SourceVertexPattern
                                  QuantifiedPathPatternPrimary
                                  DestinationVertexPattern ')'

KValue ::= UNSIGNED_INTEGER
```

For example, the following query will output the sum of the edge weights along each of the top 3 shortest paths between each of the matched source and destination pairs:

```
SELECT src, SUM(e.weight), dst
  FROM MATCH TOP 3 SHORTEST (src) -[e]->* (dst)
 WHERE src.age < dst.age
```

Notice that the sum aggregation is computed for each matching path. In other words, the number of rows returned by the query is equal to the number of paths that match, which is at most three times the number of possible source-destination pairs.

The `ARRAY_AGG` construct allows users to output properties of edges/vertices along the path. For example, in the following query:

```
SELECT src, ARRAY_AGG(e.weight), ARRAY_AGG(v1.age), ARRAY_AGG(v2.age), dst
  FROM MATCH TOP 3 SHORTEST (src) ((v1) -[e]-> (v2))* (dst)
 WHERE src.age < dst.age
```

- `ARRAY_AGG(e.weight)` outputs a list containing the `weight` property of all the edges along the path,
- `ARRAY_AGG(v1.age)` outputs a list containing the `age` property of all the vertices along the path except the last one,
- `ARRAY_AGG(v2.age)` outputs a list containing the `age` property of all the vertices along the path except the first one.

Users can also compose shortest path constructs with other matching operators:

```
SELECT ARRAY_AGG(e1.weight), ARRAY_AGG(e2.weight)
  FROM MATCH (start) -> (src)
     , MATCH TOP 3 SHORTEST (src) (-[e1]->)* (mid)
     , MATCH ANY SHORTEST (mid) (-[e2]->)* (dst)
     , MATCH (dst) -> (end)
```

Another example is:

```
  SELECT COUNT(e) AS num_hops
       , SUM(e.amount) AS total_amount
       , ARRAY_AGG(e.amount) AS amounts_along_path
    FROM MATCH TOP 7 SHORTEST (a:Account) -[e:transaction]->* (b:Account)
   WHERE a.number = 10039 AND a = b
ORDER BY num_hops, total_amount
```

```
+--------------------------------------------------------------------------------------------+
| num_hops | total_amount | amounts_along_path                                                |
+--------------------------------------------------------------------------------------------+
| 0        | <null>       | <null>                                                            |
| 4        | 22399.8      | [1000.0, 1500.3, 9999.5, 9900.0]                                  |
| 4        | 23900.2      | [1000.0, 3000.7, 9999.5, 9900.0]                                  |
| 8        | 44799.6      | [1000.0, 1500.3, 9999.5, 9900.0, 1000.0, 1500.3, 9999.5, 9900.0] |
| 8        | 46300.0      | [1000.0, 1500.3, 9999.5, 9900.0, 1000.0, 3000.7, 9999.5, 9900.0] |
| 8        | 46300.0      | [1000.0, 3000.7, 9999.5, 9900.0, 1000.0, 1500.3, 9999.5, 9900.0] |
| 8        | 47800.4      | [1000.0, 3000.7, 9999.5, 9900.0, 1000.0, 3000.7, 9999.5, 9900.0] |
+--------------------------------------------------------------------------------------------+
```

Note that above, we matched a path with zero edges (the first result) and we also matched four paths (the last four results) that visit the same edges multiple times.
The following example shows how such paths could be filtered out, such that we only keep paths that have at least one edge and that do not visit an edge multiple times:

```
  SELECT COUNT(e) AS num_hops
       , SUM(e.amount) AS total_amount
       , ARRAY_AGG(e.amount) AS amounts_along_path
    FROM MATCH TOP 7 SHORTEST (a:Account) -[e:transaction]->* (b:Account)
   WHERE a.number = 10039 AND a = b AND COUNT(DISTINCT e) = COUNT(e) AND COUNT(e) > 0
ORDER BY num_hops, total_amount
```

```
+------------------------------------------------------------+
| num_hops | total_amount | amounts_along_path               |
+------------------------------------------------------------+
| 4        | 22399.8      | [1000.0, 1500.3, 9999.5, 9900.0] |
| 4        | 23900.2      | [1000.0, 3000.7, 9999.5, 9900.0] |
+------------------------------------------------------------+
```

## Cheapest Path

Cheapest path finding allows for finding paths based on a cost function. Given a pair of vertices, single cheapest path finding allows for finding a single cheapest path, while top-k cheapest path finding allows for finding k cheapest paths with increasing cost.

### Any Cheapest Path

The `CHEAPEST` construct allows for finding a cheapest path based on an arbitrary `COST` function.

The syntax is:

```
AnyCheapestPathPattern ::= 'ANY' 'CHEAPEST' SourceVertexPattern
                                 QuantifiedPathPatternPrimary
                                 DestinationVertexPattern
                         | 'ANY' 'CHEAPEST' '(' SourceVertexPattern
                                 QuantifiedPathPatternPrimary
                                 DestinationVertexPattern ')'

CostClause ::= 'COST' ValueExpression
```

For example:

```
SELECT COUNT(e) AS num_hops
     , SUM(e.amount) AS total_amount
     , ARRAY_AGG(e.amount) AS amounts_along_path
  FROM MATCH ANY CHEAPEST (a:Account) (-[e:transaction]-> COST e.amount)* (b:Account)
 WHERE a.number = 10039 AND b.number = 2090
```

```
+----------------------------------------------------+
| num_hops | total_amount | amounts_along_path       |
+----------------------------------------------------+
| 3        | 12499.8      | [1000.0, 1500.3, 9999.5] |
+----------------------------------------------------+
```

The following example with `CHEAPEST` contains an any-directed edge pattern (`-[e:transaction]-`):

```
SELECT COUNT(e) AS num_hops
     , SUM(e.amount) AS total_amount
     , ARRAY_AGG(e.amount) AS amounts_along_path
  FROM MATCH ANY CHEAPEST (a:Account) (-[e:transaction]- COST e.amount)* (b:Account)
 WHERE a.number = 10039 AND b.number = 2090
```

```
+----------------------------------------------+
| num_hops | total_amount | amounts_along_path |
+----------------------------------------------+
| 1        | 9900.0       | [9900.0]           |
+----------------------------------------------+
```

Note that above, because edges are matched in any direction, the cheapest path between accounts `10039` and `2090` is the one that contains a single incoming edge.

The cost function is not limited to edge properties; it can be an arbitrary expression.
The following example has a `CASE` statement that defines a different cost for different types of edges:

```
SELECT COUNT(e) AS num_hops
     , SUM(e.amount) AS total_amount
     , ARRAY_AGG(e.amount) AS amounts_along_path
  FROM MATCH ANY CHEAPEST (p1:Person) (-[e:owner|transaction]-
                                         COST CASE
                                                WHEN e.amount IS NULL THEN 1
                                                ELSE e.amount
                                              END)* (p2:Person)
 WHERE p1.name = 'Nikita' AND p2.name = 'Liam'
```

```
+----------------------------------------------+
| num_hops | total_amount | amounts_along_path |
+----------------------------------------------+
| 4        | 10900.0      | [1000.0, 9900.0]   |
+----------------------------------------------+
```

Note that above, when the edge is an `owner` edge, `e.amount` will return `null`, resulting in a cost of `1` (`WHEN e.amount IS NULL THEN 1`).

### Top-K Cheapest Path

PGQL offers a `TOP k CHEAPEST` clause, which returns the `k` paths that match a given pattern with the lowest cost, computed with a user-defined cost function. If the user-defined cost function returns a constant, the `TOP k CHEAPEST` clause is equivalent to `TOP k SHORTEST`.

The syntax of the queries is extended in the following way:

```
TopKCheapestPathPattern ::= 'TOP' KValue 'CHEAPEST' SourceVertexPattern
                                  QuantifiedPathPatternPrimary
                                  DestinationVertexPattern
                          | 'TOP' KValue 'CHEAPEST' '(' SourceVertexPattern
                                  QuantifiedPathPatternPrimary
                                  DestinationVertexPattern ')'
```

The cost function must evaluate to a number.

Over paths returned by a `CHEAPEST` query, the same aggregations are defined as over paths returned by a `SHORTEST` query. The `CHEAPEST` queries represent paths the same way as `SHORTEST`, allowing the same path aggregations.

For example, the following query returns the top 3 cheapest paths from account 10039 to itself:

```
  SELECT COUNT(e) AS num_hops
       , SUM(e.amount) AS total_amount
       , ARRAY_AGG(e.amount) AS amounts_along_path
    FROM MATCH TOP 3 CHEAPEST (a:Account) (-[e:transaction]-> COST e.amount)* (a)
   WHERE a.number = 10039
ORDER BY total_amount
```

```
+------------------------------------------------------------+
| num_hops | total_amount | amounts_along_path               |
+------------------------------------------------------------+
| 0        | <null>       | <null>                           |
| 4        | 22399.8      | [1000.0, 1500.3, 9999.5, 9900.0] |
| 4        | 23900.2      | [1000.0, 3000.7, 9999.5, 9900.0] |
+------------------------------------------------------------+
```

The following is a more complex query that involves a cost function based on the labels of the vertices in the cheapest path.
It finds the 4 cheapest paths between account `10039` and company `Oracle` such that `Person` vertices contribute `3` towards the path cost, while `Account` or `Company` vertices contribute `1`.

```
  SELECT COUNT(e) AS num_hops
       , ARRAY_AGG( CASE label(n_x)
                      WHEN 'Person' THEN n_x.name
                      WHEN 'Company' THEN n_x.name
                      WHEN 'Account' THEN CAST(n_x.number AS STRING)
                    END ) AS names_or_numbers
       , SUM( CASE label(n_x) WHEN 'Person' THEN 8 ELSE 1 END ) AS total_cost
    FROM MATCH TOP 4 CHEAPEST
          (a:Account)
            (-[e]- (n_x) COST CASE label(n_x) WHEN 'Person' THEN 3 ELSE 1 END)*
          (c:Company)
   WHERE a.number = 10039 AND c.name = 'Oracle'
ORDER BY total_cost
```

```
+----------------------------------------------+
| num_hops | names_or_numbers     | total_cost |
+----------------------------------------------+
| 3        | [2090, 1001, Oracle] | 3          |
| 3        | [8021, 1001, Oracle] | 3          |
| 3        | [8021, 1001, Oracle] | 3          |
| 2        | [Camille, Oracle]    | 9          |
+----------------------------------------------+
```

As you can see, even though the path returned in the fourth row is shorter than the other three paths, it has a higher cost because it includes a `Person` vertex (`Camille`), which contributes `3` instead of `1` to the path cost used by the `COST` clause (and a weight of `8` instead of `1` in the `total_cost` output column).

## All Path

`ALL` path returns all paths between source and destination vertices. Cycles are included. Therefore, it is required to always specify an upper bound on the path length as a way to avoid endless cycling.

Thus, only the following quantifiers are allowed:

- `?`
- `{ n }`
- `{ n, m }`
- `{ , m }`

Whereas these quantifiers are forbidden:

- `*`
- `+`
- `{ n, }`

The syntax is:

```
AllPathPattern ::= 'ALL' SourceVertexPattern
                         QuantifiedPathPatternPrimary
                         DestinationVertexPattern
                 | 'ALL' '(' SourceVertexPattern
                             QuantifiedPathPatternPrimary
                             DestinationVertexPattern ')'
```

For example:

```
  SELECT LISTAGG(e.amount, ' + ') || ' = ', SUM(e.amount) AS total_amount
    FROM MATCH ALL (a:Account) -[e:transaction]->{,7} (b:Account)
   WHERE a.number = 10039 AND b.number = 2090
ORDER BY total_amount
```

```
+--------------------------------------------------------------------------------+
| LISTAGG(e.amount, ' + ') || ' = '                              | total_amount  |
+--------------------------------------------------------------------------------+
| 1000.0 + 1500.3 + 9999.5 =                                     | 12499.8       |
| 1000.0 + 3000.7 + 9999.5 =                                     | 14000.2       |
| 1000.0 + 1500.3 + 9999.5 + 9900.0 + 1000.0 + 1500.3 + 9999.5 = | 34899.6       |
| 1000.0 + 1500.3 + 9999.5 + 9900.0 + 1000.0 + 3000.7 + 9999.5 = | 36400.0       |
| 1000.0 + 3000.7 + 9999.5 + 9900.0 + 1000.0 + 1500.3 + 9999.5 = | 36400.0       |
| 1000.0 + 3000.7 + 9999.5 + 9900.0 + 1000.0 + 3000.7 + 9999.5 = | 37900.4       |
+--------------------------------------------------------------------------------+
```

# Horizontal Aggregation

Aggregations are either applied in a vertical or a horizontal fashion.

### Recap of vertical aggregation

Vertical aggregation was introduced in Aggregation. This kind of aggregation is what people usually learn first when they start using PGQL or SQL. Vertical aggregation takes a group of values from different rows and aggregates the values into a single value, for example by taking the minimum or maximum. If a `GROUP BY` is specified then the output of a query is as many rows as there are groups, while if no `GROUP BY` is specified then the output is a single row. For more details, see Grouping and Aggregation.
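For instance, the following is a minimal sketch of a vertical aggregation without `GROUP BY` (it produces a single row), over the financial graph used in the earlier examples:

```
SELECT COUNT(*) AS numTransactions, MIN(e.amount) AS smallestAmount
  FROM MATCH (:Account) -[e:transaction]-> (:Account)
```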
Given the pattern `(n) -[e]-> (m)`, examples of vertical aggregation are:

- `SUM(e.prop)`
- `COUNT(e.prop)`
- `SUM(n.prop + m.prop / 2)`

### Group Variables

To understand horizontal aggregation, however, it is necessary to know the difference between "singleton variables" and "group variables". A singleton variable is a variable that binds to only one vertex or edge, whereas a group variable is a variable that may bind to multiple vertices or edges.

Consider the pattern `(n) -[e1]-> (m) -[e2]->* (o)`. Here, `e1` is a singleton variable because within a single match of the pattern there is always a single edge bound to `e1`, whereas `e2` is a group variable because within a single match of the pattern there may be multiple edges bound to `e2` because of the quantifier `*`. Variables are thus either singleton variables or group variables depending on whether they are enclosed by a quantifier with an upper bound greater than 1.

Here are examples with singleton variables:

- `-[e]->`
- `-[e]->?`

Here are examples with group variables:

- `-[e]->*`
- `-[e]->+`
- `-[e]->{1,4}`

Quantifiers with curly braces always introduce group variables, so the following are also examples with group variables:

- `-[e]->{1,1}` (notice that this is not the same as `-[e]->`)
- `-[e]->{0,1}` (notice that this is not the same as `-[e]->?`)

Group variables thus form implicit groups without a need to explicitly specify a `GROUP BY`.

### Horizontal aggregation using group variables

Group variables can be used to perform horizontal aggregation. To be precise, an aggregation is applied in a horizontal manner if the expression that is input to the aggregation contains at least one group variable. The input values for the aggregation are obtained by evaluating the expression once for each binding of the group variable(s) within the particular match. A separate output is generated for each match of the pattern, rather than a single output for an entire group of matches as in the case of vertical aggregation.

The same aggregates (`MIN`, `MAX`, `AVG`, etc.) that are used for vertical aggregation are also used for horizontal aggregation. Given the pattern `( (n) -[e]-> (m) )*`, examples of horizontal aggregations are:

- `SUM(e.prop * 2)`
- `COUNT(e.prop)`
- `ARRAY_AGG(n.prop)`

Aggregations with multiple group variables such as `SUM(n.prop + m.prop / 2)` are not supported in PGQL 1.4 and are planned for a future version.

It is possible to mix vertical and horizontal aggregation in a single query. For example:

```
SELECT SUM(COUNT(e)) AS sumOfPathLengths
  FROM MATCH ANY SHORTEST (a:Account) -[e:transaction]->* (b:Account)
 WHERE a.number = 10039 AND (b.number = 1001 OR b.number = 2090)
```

```
+------------------+
| sumOfPathLengths |
+------------------+
| 5                |
+------------------+
```

Above, we first match a shortest path between accounts 10039 and 1001. Notice that the length of this path is 2. We also match a shortest path between accounts 10039 and 2090. Notice that the length of this path is 3. In the `SELECT` clause, the aggregation `COUNT(e)` is a horizontal aggregation since `e` is a group variable. For each of the two shortest paths, `COUNT(e)` computes the length by counting the number of edges. The output will be 2 for one of the two paths, and 3 for the other.
Then it takes the `SUM` to compute the total length of the two paths, which is 5.

### Horizontal aggregation in WHERE and GROUP BY

While vertical aggregation is only possible in the `SELECT`, `HAVING` and `ORDER BY` clauses, horizontal aggregation is also possible in the `WHERE` and `GROUP BY` clauses.

An example of a horizontal aggregation in `WHERE` is:

```
  SELECT b.number AS b,
         COUNT(e) AS pathLength,
         ARRAY_AGG(e.amount) AS transactions
    FROM MATCH ANY SHORTEST (a:Account) -[e:transaction]->* (b:Account)
   WHERE a.number = 10039 AND
         (b.number = 8021 OR b.number = 1001 OR b.number = 2090) AND
         COUNT(e) <= 2
ORDER BY pathLength
```

```
+--------------------------------------+
| b    | pathLength | transactions     |
+--------------------------------------+
| 8021 | 1          | [1000.0]         |
| 1001 | 2          | [1000.0, 1500.3] |
+--------------------------------------+
```

Above, we compute a shortest path from account 10039 to accounts 8021, 1001, and 2090. So three paths in total. However, in the `WHERE` clause we only keep paths that have at most two edges (`COUNT(e) <= 2`), such that only the paths to accounts 8021 and 1001 are kept, since the path to 2090 has three edges.

An example of a horizontal aggregation in `GROUP BY` is:

```
  SELECT COUNT(e) AS pathLength,
         COUNT(*) AS cnt
    FROM MATCH ANY SHORTEST (a:Account) -[e:transaction]->* (b:Account)
   WHERE (a.number = 10039 OR a.number = 8021) AND
         (b.number = 1001 OR b.number = 2090)
GROUP BY COUNT(e)
ORDER BY pathLength
```

```
+------------------+
| pathLength | cnt |
+------------------+
| 1          | 1   |
| 2          | 2   |
| 3          | 1   |
+------------------+
```

Above, we first match shortest paths between four pairs of vertices and then we group by the length of the paths (`GROUP BY COUNT(e)`) by means of horizontal aggregation. Then we perform a vertical aggregation `COUNT(*)` to compute the number of paths that have the particular path length. The result shows that one path has length 1, two paths have length 2, and one path has length 3.

# Functions and Expressions

Value expressions are used in various parts of the language, for example, to filter solutions (`WHERE` and `HAVING`), to project out computed values (`SELECT`), or to group by or order by computed values (`GROUP BY` and `ORDER BY`).

The following are the relevant grammar rules:

```
ValueExpression ::= VariableReference
                  | PropertyAccess
                  | Literal
                  | BindVariable
                  | ArithmeticExpression
                  | RelationalExpression
                  | LogicalExpression
                  | StringConcat
                  | BracketedValueExpression
                  | FunctionInvocation
                  | <CharacterSubstring>
                  | Aggregation
                  | ExtractFunction
                  | IsNullPredicate
                  | IsNotNullPredicate
                  | CastSpecification
                  | CaseExpression
                  | InPredicate
                  | NotInPredicate
                  | ExistsPredicate
                  | ScalarSubquery

VariableReference ::= VariableName

PropertyAccess ::= VariableReference '.' PropertyName

BracketedValueExpression ::= '(' ValueExpression ')'
```

A value expression is one of:

- A variable reference, being either a reference to a `VertexPattern`, an `EdgePattern`, or an `ExpAsVar`.
- A property access, which syntactically takes the form of a variable reference, followed by a dot (`.`) and the name of a property.
- A literal (see Literals).
- A bind variable (see Bind Variables).
- An arithmetic, relational, or logical expression (see Operators).
- A bracketed value expression, which syntactically takes the form of a value expression between rounded brackets. The brackets allow for controlling precedence.
- A function invocation (see String functions, Numeric functions, Datetime functions and Vertex and Edge functions).
- The `IS NULL` and `IS NOT NULL` predicates (see IS NULL and IS NOT NULL).
- The `EXISTS` predicate (see EXISTS and NOT EXISTS subqueries).
- An aggregation (see Aggregation).

## Data Types and Literals

### Data Types

PGQL has the following data types:

- `STRING`
- `NUMERIC` (e.g. `INT`/`INTEGER`, `LONG`, `FLOAT`, `DOUBLE`)
- `BOOLEAN`
- `DATE`
- `TIME`
- `TIMESTAMP`
- `TIME WITH TIME ZONE`
- `TIMESTAMP WITH TIME ZONE`

### Literals

The syntax is:

```
Literal ::= StringLiteral
          | NumericLiteral
          | BooleanLiteral
          | DateLiteral
          | TimeLiteral
          | TimestampLiteral
          | TimeWithTimeZoneLiteral
          | TimestampWithTimeZoneLiteral

StringLiteral ::= STRING_LITERAL

NumericLiteral ::= UNSIGNED_INTEGER
                 | UNSIGNED_DECIMAL

BooleanLiteral ::= 'true'
                 | 'false'

DateLiteral ::= 'DATE' "'" <yyyy-MM-dd> "'"

TimeLiteral ::= 'TIME' "'" <HH:mm:ss> "'"

TimestampLiteral ::= 'TIMESTAMP' "'" <yyyy-MM-dd HH:mm:ss> "'"

TimeWithTimeZoneLiteral ::= 'TIME' "'" <HH:mm:ss+HH:MM> "'"

TimestampWithTimeZoneLiteral ::= 'TIMESTAMP' "'" <yyyy-MM-dd HH:mm:ss+HH:MM> "'"
```

For example:

| Literal type | Example literal |
|--------------|-----------------|
| string | `'Clara'` |
| integer | `12` |
| decimal | `12.3` |
| boolean | `true` |
| date | `DATE '2017-09-21'` |
| time | `TIME '16:15:00'` |
| timestamp | `TIMESTAMP '2017-09-21 16:15:00'` |
| time with time zone | `TIME '16:15:00+01:00'` |
| timestamp with time zone | `TIMESTAMP '2017-09-21 16:15:00-03:00'` |

Note that numeric literals are unsigned, but signed values can be generated by means of the unary minus operator (`-`).
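The following sketch combines several literal types in a single filter (the `date_of_birth` property also appears in a later example in this document; the comparison semantics for datetime values are described under Operators below):

```
SELECT n.name
  FROM MATCH (n:Person)
 WHERE n.name <> 'Clara'
   AND n.date_of_birth > DATE '1990-01-01'
```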
### Bind Variables

In place of a literal, one may specify a bind variable (`?`). This allows for specifying parameterized queries.

```
BindVariable ::= '?'
```

An example query with two bind variables is as follows:

```
SELECT n.age
  FROM MATCH (n)
 WHERE n.name = ?
    OR n.age > ?
```

In the following query, bind variables are used in `LIMIT` and `OFFSET`:

```
  SELECT n.name, n.age
    FROM MATCH (n)
ORDER BY n.age
   LIMIT ?
  OFFSET ?
```

The following example shows a bind variable in the position of a label:

```
SELECT n.name
  FROM MATCH (n)
 WHERE has_label(n, ?)
```

## Operators

### Overview of Operators

The following table is an overview of the operators:

| operator type | operator |
|---------------|----------|
| arithmetic | `+`, `-`, `*`, `/`, `%`, `-` (unary minus) |
| relational | `=`, `<>`, `<`, `>`, `<=`, `>=` |
| logical | `AND`, `OR`, `NOT` |
| string | `\|\|` (concat) |

The corresponding grammar rules are:

```
ArithmeticExpression ::= UnaryMinus
                       | Multiplication
                       | Division
                       | Modulo
                       | Addition
                       | Subtraction

UnaryMinus ::= '-' ValueExpression

StringConcat ::= ValueExpression '||' ValueExpression

Multiplication ::= ValueExpression '*' ValueExpression

Division ::= ValueExpression '/' ValueExpression

Modulo ::= ValueExpression '%' ValueExpression

Addition ::= ValueExpression '+' ValueExpression

Subtraction ::= ValueExpression '-' ValueExpression

RelationalExpression ::= Equal
                       | NotEqual
                       | Greater
                       | Less
                       | GreaterOrEqual
                       | LessOrEqual

Equal ::= ValueExpression '=' ValueExpression

NotEqual ::= ValueExpression '<>' ValueExpression

Greater ::= ValueExpression '>' ValueExpression

Less ::= ValueExpression '<' ValueExpression

GreaterOrEqual ::= ValueExpression '>=' ValueExpression

LessOrEqual ::= ValueExpression '<=' ValueExpression

LogicalExpression ::= Not
                    | And
                    | Or

Not ::= 'NOT' ValueExpression

And ::= ValueExpression 'AND' ValueExpression

Or ::= ValueExpression 'OR' ValueExpression
```

The supported input types and corresponding return types are as follows:

| operator | type of A (and B) | return type |
|----------|-------------------|-------------|
| `-`A (unary minus) | numeric | type of A |
| A `\|\|` B | string | string |
| A `+` B, A `-` B, A `*` B, A `/` B, A `%` B | numeric | numeric* |
| A `=` B, A `<>` B | numeric, string, boolean, date, time [with time zone], timestamp [with time zone], vertex, edge | boolean |
| A `<` B, A `>` B, A `<=` B, A `>=` B | numeric, string, boolean, date, time [with time zone], timestamp [with time zone] | boolean |
| `NOT` A, A `AND` B, A `OR` B | boolean | boolean |

*The precision and scale of the result follow the rules under Implicit Type Conversion below.

Binary operations are only allowed if both operands are of the same type, with the following two exceptions:

- time values can be compared to time with time zone values
- timestamp values can be compared to timestamp with time zone values

To compare such time(stamp) with time zone values to other time(stamp) values (with or without time zone), values are first normalized to have the same time zone, before they are compared. Comparison with other operand type combinations, such as dates and timestamps, is not possible. However, it is possible to cast between e.g. dates and timestamps (see CAST).

### Operator Precedence

Operator precedences are shown in the following list, from the highest precedence to the lowest. An operator on a higher level (e.g. level 1) is evaluated before an operator on a lower level (e.g. level 2).

| level | operator precedence |
|-------|---------------------|
| 1 | `-` (unary minus) |
| 2 | `\|\|` (string concat) |
| 3 | `*`, `/`, `%` |
| 4 | `+`, `-` |
| 5 | `=`, `<>`, `>`, `<`, `>=`, `<=` |
| 6 | `NOT` |
| 7 | `AND` |
| 8 | `OR` |
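For example, because `AND` has a higher precedence than `OR`, the two queries below are equivalent; the brackets in the second query merely make the grouping explicit (a sketch with illustrative property values):

```
SELECT n.name
  FROM MATCH (n:Person)
 WHERE n.age > 65 OR n.age < 18 AND n.name = 'Clara'

SELECT n.name
  FROM MATCH (n:Person)
 WHERE n.age > 65 OR (n.age < 18 AND n.name = 'Clara')
```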
### Implicit Type Conversion

Performing arithmetic operations with different numeric types will lead to implicit type conversion (i.e. coercion). Coercion is only defined for numeric types. Given a binary arithmetic operation (i.e. `+`, `-`, `*`, `/`, `%`), the rules are as follows:

- If both operands are exact numerics (e.g. integer or long), then the result is also an exact numeric with a scale that is at least as large as the scales of each operand.
- If one or both of the operands is approximate numeric (e.g. float, double), the result is an approximate numeric with a scale that is at least as large as the scales of each operand. The precision will also be at least as high as the precision of each operand.

## Null values

The property graph data model does not allow properties with `null` value. Instead, missing or undefined data can be modeled through the absence of properties. A `null` value is generated when trying to access a property of a vertex or an edge while the property appears to be missing. Three-valued logic applies when `null` values appear in computation.

### Three-Valued Logic

An operator returns `null` if one of its operands yields `null`, with an exception for `AND` and `OR`. This is shown in the following table:

| operator | result when A is null | result when B is null | result when A and B are null |
|----------|-----------------------|-----------------------|------------------------------|
| A `+` `-` `*` `/` `%` B | `null` | `null` | `null` |
| `-` A | `null` | N/A | N/A |
| A `=` `<>` `>` `<` `>=` `<=` B | `null` | `null` | `null` |
| A `AND` B | `false` if B yields `false`, `null` otherwise | `false` if A yields `false`, `null` otherwise | `null` |
| A `OR` B | `true` if B yields `true`, `null` otherwise | `true` if A yields `true`, `null` otherwise | `null` |
| `NOT` A | `null` | N/A | N/A |

Note that from the table it follows that `null = null` yields `null` and not `true`.

### IS NULL and IS NOT NULL

To test whether a value exists or not, one can use the `IS NULL` and `IS NOT NULL` constructs.

```
IsNullPredicate ::= ValueExpression 'IS' 'NULL'

IsNotNullPredicate ::= ValueExpression 'IS' 'NOT' 'NULL'
```

For example:

```
SELECT n.name
  FROM MATCH (n)
 WHERE n.name IS NOT NULL
```

Here, we find all the vertices in the graph that have the property `name` and then return the property.
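One practical consequence of three-valued logic is that ordinary comparisons never match vertices for which the property is missing. In the following sketch, `Person` vertices without an `age` property satisfy neither branch of the disjunction, because both comparisons evaluate to `null`; such vertices can only be matched explicitly via `IS NULL`:

```
SELECT n.name
  FROM MATCH (n:Person)
 WHERE n.age > 30 OR n.age <= 30
```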
## String functions

In addition to the (character) string functions in this section, please also notice the string concatenation operator (`||`) documented in Operators.

### JAVA_REGEXP_LIKE

The `JAVA_REGEXP_LIKE` function returns whether the string matches the given Java regular expression pattern.

The syntax is:

```
JAVA_REGEXP_LIKE( string, pattern )
```

For example:

```
JAVA_REGEXP_LIKE('aaaaab', 'a*b')
Result: true
```

### LOWER

The `LOWER` function transforms a string to lowercase. The case of each character is defined by the rules of the default locale.

The syntax is:

```
LOWER( string )
```

For example:

```
LOWER('A string')
Result: a string
```

### SUBSTRING

The `SUBSTRING` function returns a portion of the given string, starting from the index specified in the `FROM` clause. If a `FOR` clause is provided, the substring returned is limited to the given length.

The syntax is:

```
CharacterSubstring := 'SUBSTRING' '(' ValueExpression 'FROM' <StartPosition> ( 'FOR' <StringLength> )? ')'

StartPosition := ValueExpression

StringLength := ValueExpression
```

For example:

```
SUBSTRING('A string' FROM 1)
Result: A string

SUBSTRING('A string' FROM 3 FOR 2)
Result: st
```

The following table gives more examples for different values of `FROM` and `FOR`:

| input string | FROM | FOR | output string |
|--------------|------|-----|---------------|
| `hello` | `3` | not provided | `llo` |
| `hello` | `-10` | not provided | `hello` |
| `hello` | `7` | not provided | (empty string) |
| `hello` | `3` | `-1` | exception is raised: FOR must not be negative |
| `hello` | `3` | `2` | `ll` |
| `hello` | `3` | `10` | `llo` |
| `hello` | `-10` | `2` | (empty string) |
| `hello` | `-10` | `13` | `he` |
| `hello` | `-10` | `18` | `hello` |
| `hello` | `7` | `2` | (empty string) |

### UPPER

The `UPPER` function transforms a string to uppercase. The case of each character is defined by the rules of the default locale.

The syntax is:

```
UPPER( string )
```

For example:

```
UPPER('A string')
Result: A STRING
```

## Numeric functions

### ABS

The `ABS` function returns the absolute value of a number. The output value will have the same data type as the input value.

The syntax is:

```
ABS( number )
```

For example:

```
ABS(-23)
Result: 23

ABS(-23.6)
Result: 23.6

ABS(-23.65)
Result: 23.65

ABS(23.65)
Result: 23.65

ABS(23.65 * -1)
Result: 23.65
```

### CEIL or CEILING

The `CEIL` (or `CEILING`) function rounds the specified number up and returns the smallest number that is greater than or equal to the specified number. The output value will have the same data type as the input value.

The syntax is:

```
CEIL ( number )
CEILING ( number )
```

For example:

```
CEIL(3.2)
Result: 4.0

CEIL(2.8)
Result: 3.0

CEIL(3)
Result: 3
```

### FLOOR

The `FLOOR` function returns the largest integer value that is smaller than or equal to the given argument. The output value will have the same data type as the input value.

The syntax is:

```
FLOOR( number )
```

For example:

```
FLOOR(3.2)
Result: 3.0

FLOOR(2.8)
Result: 2.0

FLOOR(3)
Result: 3
```

### ROUND

The `ROUND` function returns the integer closest to the given argument. The output value will have the same data type as the input value.
The syntax is:

```
ROUND ( number )
```

For example:

```
ROUND(3.2)
Result: 3.0

ROUND(2.8)
Result: 3.0

ROUND(3)
Result: 3
```

## Datetime functions

### EXTRACT

The `EXTRACT` function allows for extracting a datetime field, such as a year, month or day, from a datetime value.

The syntax is:

```
ExtractFunction ::= 'EXTRACT' '(' ExtractField 'FROM' ValueExpression ')'

ExtractField ::= 'YEAR'
               | 'MONTH'
               | 'DAY'
               | 'HOUR'
               | 'MINUTE'
               | 'SECOND'
               | 'TIMEZONE_HOUR'
               | 'TIMEZONE_MINUTE'
```

The fields `YEAR`, `MONTH` and `DAY` can be extracted from a date, a timestamp, or a timestamp with time zone.

For example:

```
EXTRACT(YEAR FROM DATE '2017-02-13')
Result: 2017

EXTRACT(MONTH FROM DATE '2017-02-13')
Result: 2

EXTRACT(DAY FROM DATE '2017-02-13')
Result: 13
```

The fields `HOUR`, `MINUTE` and `SECOND` can be extracted from a time, a timestamp, a time with time zone, or a timestamp with time zone.

For example:

```
EXTRACT(HOUR FROM TIME '12:05:03.201')
Result: 12

EXTRACT(MINUTE FROM TIME '12:05:03.201')
Result: 5

EXTRACT(SECOND FROM TIME '12:05:03.201')
Result: 3.201
```

The fields `TIMEZONE_HOUR` and `TIMEZONE_MINUTE` can be extracted from a time with time zone or a timestamp with time zone.

For example:

```
EXTRACT(TIMEZONE_HOUR FROM TIMESTAMP '2018-01-01 12:30:00-02:30')
Result: -2

EXTRACT(TIMEZONE_MINUTE FROM TIMESTAMP '2018-01-01 12:30:00-02:30')
Result: -30
```

## Vertex and Edge functions

### ID

The `ID` function returns a system-generated identifier for the vertex/edge (unique within a graph).

The syntax is:

```
ID( vertex/edge )
```

### LABEL

The `LABEL` function returns the label of a vertex or an edge. It is an error if the vertex or edge does not have a label, or, has more than one label. The return type of the function is a string.

The syntax is:

```
LABEL( vertex/edge )
```

For example:

```
SELECT LABEL(e)
  FROM MATCH (n:Person) -[e]-> (m:Person)
```

```
+----------+
| LABEL(e) |
+----------+
| likes    |
| knows    |
| likes    |
+----------+
```

### LABELS (function)

The `LABELS` function returns the set of labels of a vertex or an edge. If the vertex or edge does not have a label, an empty set is returned. The return type of the function is a set of strings.

The syntax is:

```
LABELS( vertex/edge )
```

For example:

```
SELECT LABELS(n)
  FROM MATCH (n:Employee|Manager)
```

```
+---------------------+
| LABELS(n)           |
+---------------------+
| [Employee]          |
| [Manager]           |
| [Employee, Manager] |
+---------------------+
```

### HAS_LABEL

The `HAS_LABEL` function returns true if the vertex or edge (first argument) has the given label (second argument), and false otherwise.

The syntax is:

```
HAS_LABEL( vertex/edge, string )
```
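For example, the following sketch uses `HAS_LABEL` in a filter; for a single label it is equivalent to the label predicate `(n:Person)`:

```
SELECT n.name
  FROM MATCH (n)
 WHERE HAS_LABEL(n, 'Person')
```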
### ALL_DIFFERENT

The `ALL_DIFFERENT` function returns true if the provided values are all different from each other, and false otherwise. The function is typically used for specifying that a particular set of vertices or edges are all different from each other. However, the function can be used for values of any data type, as long as the provided values can be compared for equality.

The syntax is:

```
ALL_DIFFERENT( val1, val2, val3, ..., valN )
```

For example:

```
SELECT *
  FROM MATCH (n) -> (m) -> (o)
 WHERE ALL_DIFFERENT( n, m, o )
```

Note that the above query can be rewritten using non-equality constraints as follows:

```
SELECT *
  FROM MATCH (n) -> (m) -> (o)
 WHERE n <> m AND n <> o AND m <> o
```

Another example is:

```
ALL_DIFFERENT( 1, 2, 3 )
Result: true

ALL_DIFFERENT( 1, 1.0 )
Result: false
```

### IN_DEGREE

The `IN_DEGREE` function returns the number of incoming neighbors of a vertex. The return type is an exact numeric.

The syntax is:

```
IN_DEGREE( vertex )
```

### OUT_DEGREE

The `OUT_DEGREE` function returns the number of outgoing neighbors of a vertex. The return type is an exact numeric.

The syntax is:

```
OUT_DEGREE( vertex )
```
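The degree functions can be used like any other expression, for example to rank vertices. A sketch:

```
  SELECT n.name, IN_DEGREE(n) AS numIncoming, OUT_DEGREE(n) AS numOutgoing
    FROM MATCH (n)
ORDER BY numOutgoing DESC
```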
## User-Defined functions

User-defined functions (UDFs) are invoked similarly to built-in functions. For example, a user may have registered a function `math.tan` that returns the tangent of a given angle. An example invocation of this function is then:

```
  SELECT math.tan(n.angle) AS tangent
    FROM MATCH (n)
ORDER BY tangent
```

The syntax is:

```
FunctionInvocation ::= PackageSpecification? FunctionName '(' ArgumentList? ')'

PackageSpecification ::= PackageName '.'

PackageName ::= Identifier

FunctionName ::= Identifier

ArgumentList ::= ValueExpression ( ',' ValueExpression )*
```

Note that a function invocation has an optional package name, a (non-optional) function name, and zero or more arguments which are arbitrary value expressions.

Function and package names are case-insensitive such that e.g. `in_degree(..)` is the same as `In_Degree(..)` or `IN_DEGREE(..)`.

If a UDF is registered that has the same name as a built-in function, then, upon function invocation, the UDF is invoked and not the built-in function. UDFs can thus override built-ins.

## CAST

While implicit type conversion is supported between numeric types, between time types, and between timezone types, other type conversions require explicit conversion through casting (`CAST`).

The syntax is:

```
CastSpecification ::= 'CAST' '(' ValueExpression 'AS' DataType ')'

DataType ::= 'STRING'
           | 'BOOLEAN'
           | 'INTEGER'
           | 'INT'
           | 'LONG'
           | 'FLOAT'
           | 'DOUBLE'
           | 'DATE'
           | 'TIME'
           | 'TIME WITH TIME ZONE'
           | 'TIMESTAMP'
           | 'TIMESTAMP WITH TIME ZONE'
```

For example:

```
SELECT CAST(n.age AS STRING), CAST('123' AS INTEGER), CAST('09:15:00+01:00' AS TIME WITH TIME ZONE)
  FROM MATCH (n:Person)
```

Casting is allowed between the following data types:

| from \ to | string | exact numeric | approximate numeric | boolean | time | time with time zone | date | timestamp | timestamp with time zone |
|---|---|---|---|---|---|---|---|---|---|
| string | Y | Y | Y | Y | Y | Y | Y | Y | Y |
| exact numeric | Y | M | M | N | N | N | N | N | N |
| approximate numeric | Y | M | M | N | N | N | N | N | N |
| boolean | Y | N | N | Y | N | N | N | N | N |
| date | Y | N | N | N | N | N | Y | Y | Y |
| time | Y | N | N | N | Y | Y | N | Y | Y |
| timestamp | Y | N | N | N | Y | Y | Y | Y | Y |
| time with time zone | Y | N | N | N | Y | Y | N | Y | Y |
| timestamp with time zone | Y | N | N | N | Y | Y | Y | Y | Y |

In the table above, `Y` indicates that casting is supported, `N` indicates that casting is not supported, and `M` indicates that casting is supported only if the numeric value is between the minimum and maximum values (inclusive) that can be represented by the specified target type.

## CASE

The `CASE` expression returns a value based on the evaluation of some given boolean conditions.

There are two types of `CASE` expressions: "simple case" and "searched case".

The syntax is:

```
CaseExpression ::= SimpleCase | SearchedCase

SimpleCase ::= 'CASE' ValueExpression WhenClause+ ElseClause? 'END'

SearchedCase ::= 'CASE' WhenClause+ ElseClause? 'END'

WhenClause ::= 'WHEN' ValueExpression 'THEN' ValueExpression

ElseClause ::= 'ELSE' ValueExpression
```

The simple case provides a list of pairs (`WHEN` compare value, `THEN` return value) and optionally an else clause (`ELSE` return value). PGQL compares a given expression to each compare value and returns the corresponding return value when the compared expressions are equal. If no equal expression is found and an `ELSE` clause exists, then PGQL returns the given else value. If no `ELSE` clause exists, null is returned.

For example:

```
CASE n.age
  WHEN 1 THEN 'One'
  WHEN 2 THEN 'Two'
  WHEN 3 THEN 'Three'
  ELSE 'Older than three'
END
```

The searched case provides a list of pairs (`WHEN` boolean expression, `THEN` return value) and optionally an `ELSE` clause (`ELSE` return value). PGQL evaluates each boolean expression until one of them evaluates to true, and returns the corresponding return value. If no expression evaluates to true, and an `ELSE` clause exists, then PGQL returns the given else value. If no `ELSE` clause exists, null is returned.

For example:

```
CASE
  WHEN n.level = 'user' THEN 0
  WHEN n.authorized THEN 1
  ELSE -1
END
```
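A searched case typically appears inside a `SELECT` or `ORDER BY` clause. The following sketch (assuming `Person` vertices with an optional `age` property) buckets people into age groups:

```
SELECT n.name,
       CASE
         WHEN n.age IS NULL THEN 'unknown'
         WHEN n.age < 18 THEN 'minor'
         ELSE 'adult'
       END AS age_group
  FROM MATCH (n:Person)
```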
## IN and NOT IN

The `IN` and `NOT IN` predicates test a value for membership in a list of values. The PGQL literal types `INTEGER`, `DECIMAL`, `BOOLEAN`, `STRING`, `DATE`, `TIME [WITH TIME ZONE]` and `TIMESTAMP [WITH TIME ZONE]` are allowed in the list.

The syntax is:

```
InPredicate ::= ValueExpression 'IN' InValueList

NotInPredicate ::= ValueExpression 'NOT' 'IN' InValueList

InValueList ::= '(' ValueExpression ( ',' ValueExpression )* ')'
              | BindVariable
```

For example:

```
2 IN (2, 3, 5)
Result: true

3.2 IN (5, 4.8, 3.2)
Result: true

false IN (true, true)
Result: false

'Emily' IN ('Emily', 'Carl')
Result: true

DATE '1990-07-03' IN (DATE '1990-07-03', DATE '1993-05-28')
Result: true

TIME '12:00:10' IN (TIME '11:55:10', TIME '06:50:00.999+05:00')
Result: false

TIMESTAMP '2016-03-20 22:09:59.999' IN (TIMESTAMP '2016-03-20 23:09:59')
Result: false
```

Bind variables are also supported in the position of the list. For example:

```
SELECT n.date_of_birth
  FROM MATCH (n:Person)
 WHERE n.date_of_birth IN ? /* use PreparedStatement.setArray(int, java.util.List) */
```

# Subqueries

There are two types of subqueries:

- `EXISTS` and `NOT EXISTS` subqueries
- Scalar subqueries

Both types of subqueries can be used as a value expression in `SELECT`, `WHERE`, `GROUP BY`, `HAVING` and `ORDER BY` clauses (including `WHERE` clauses of `PATH` expressions). An `EXISTS` or `NOT EXISTS` subquery returns a boolean while a scalar subquery returns a value of any of the supported data types.

## EXISTS and NOT EXISTS subqueries

`EXISTS` returns true/false depending on whether the subquery produces at least one result, given the bindings obtained in the current (outer) query. No additional binding of variables occurs.

The syntax is:

```
ExistsPredicate ::= 'EXISTS' Subquery

Subquery ::= '(' Query ')'
```

An example is to find friends of friends, and, for each friend of friend, return the number of common friends:

```
SELECT fof.name, COUNT(friend) AS num_common_friends
  FROM MATCH (p:Person) -[:has_friend]-> (friend:Person) -[:has_friend]-> (fof:Person)
 WHERE NOT EXISTS ( SELECT * FROM MATCH (p) -[:has_friend]-> (fof) )
```

Here, vertices `p` and `fof` are passed from the outer query to the inner query. The `EXISTS` returns true if there is at least one `has_friend` edge between vertices `p` and `fof`.

Users can add a subquery in the `WHERE` clause of the `PATH` definition. One might be interested in asserting for specific properties for a vertex in the `PATH`. The following example defines a path ending in a vertex which is not the oldest in the graph:

```
  PATH p AS (a) -> (b) WHERE EXISTS ( SELECT * FROM MATCH (x) WHERE x.age > b.age )
SELECT ...
  FROM ...
```

Topology-related constraints can also be imposed.
The following example defines a path ending in a vertex which has at least one outgoing edge to some neighbor `c`:

```
  PATH p AS (a) -> (b) WHERE EXISTS ( SELECT * FROM MATCH (b) -> (c) )
SELECT ...
FROM ...
```

## Scalar subqueries

Scalar subqueries are queries that return a scalar value (exactly one row and exactly one column) such that they can be part of an expression in a `SELECT`, `WHERE`, `GROUP BY`, `HAVING` or `ORDER BY` clause.

The syntax is:

```
ScalarSubquery ::= Subquery
```

For example:

```
SELECT a.name
FROM MATCH (a)
WHERE a.age > ( SELECT AVG(b.age) FROM MATCH (a) -[:friendOf]-> (b) )
```

Another example is:

```
  SELECT p.name AS name
       , ( SELECT SUM(t.amount)
           FROM MATCH (a) <-[t:transaction]- (:Account)
           ON financial_transactions
         ) AS sum_incoming
       , ( SELECT SUM(t.amount)
           FROM MATCH (a) -[t:transaction]-> (:Account)
           ON financial_transactions
         ) AS sum_outgoing
       , ( SELECT COUNT(DISTINCT p2)
           FROM MATCH (a) -[t:transaction]- (:Account) -[:owner]-> (p2:Person)
           ON financial_transactions
           WHERE p2 <> p
         ) AS num_persons_transacted_with
       , ( SELECT COUNT(DISTINCT c)
           FROM MATCH (a) -[t:transaction]- (:Account) -[:owner]-> (c:Company)
           ON financial_transactions
         ) AS num_companies_transacted_with
    FROM MATCH (p:Person) <-[:owner]- (a:Account) ON financial_transactions
ORDER BY sum_outgoing + sum_incoming DESC
```

```
+-----------------------------------------------------------------------------------------------------+
| name    | sum_incoming | sum_outgoing | num_persons_transacted_with | num_companies_transacted_with |
+-----------------------------------------------------------------------------------------------------+
| Liam    | 9999.5       | 9900.0       | 1                           | 1                             |
| Camille | 9900.0       | 1000.0       | 2                           | 0                             |
| Nikita  | 1000.0       | 4501.0       | 1                           | 1                             |
+-----------------------------------------------------------------------------------------------------+
```

Note that in the query, the graph name `financial_transactions` is repeatedly specified. Such repetition can be avoided by using a default graph, which simplifies the query:

```
  SELECT p.name AS name
       , ( SELECT SUM(t.amount)
           FROM MATCH (a) <-[t:transaction]- (:Account)
         ) AS sum_incoming
       , ( SELECT SUM(t.amount)
           FROM MATCH (a) -[t:transaction]-> (:Account)
         ) AS sum_outgoing
       , ( SELECT COUNT(DISTINCT p2)
           FROM MATCH (a) -[t:transaction]- (:Account) -[:owner]-> (p2:Person)
           WHERE p2 <> p
         ) AS num_persons_transacted_with
       , ( SELECT COUNT(DISTINCT c)
           FROM MATCH (a) -[t:transaction]- (:Account) -[:owner]-> (c:Company)
         ) AS num_companies_transacted_with
    FROM MATCH (p:Person) <-[:owner]- (a:Account)
ORDER BY sum_outgoing + sum_incoming DESC
```

# Graph Modification

```
ModifyQuery ::= ModifyQuerySimple
              | ModifyQueryFull

ModifyQuerySimple ::= InsertClause

ModifyQueryFull ::= PathPatternMacros?
                    Modification+
                    FromClause
                    WhereClause?
                    GroupByClause?
                    HavingClause?
                    OrderByClause?
                    LimitOffsetClauses?

Modification ::= InsertClause
               | UpdateClause
               | DeleteClause
```

Modifications follow snapshot isolation semantics, meaning that insertions, updates and deletions within the same query do not see each other's results.

## INSERT

```
InsertClause ::= 'INSERT' IntoClause? GraphElementInsertion ( ',' GraphElementInsertion )*

IntoClause ::= 'INTO' GraphName

GraphElementInsertion ::= 'VERTEX' VariableName? LabelsAndProperties
                        | 'EDGE' VariableName? 'BETWEEN' VertexReference 'AND' VertexReference
                          LabelsAndProperties
VertexReference ::= Identifier

LabelsAndProperties ::= LabelSpecification? PropertiesSpecification?

LabelSpecification ::= 'LABELS' '(' Label ( ',' Label )* ')'

PropertiesSpecification ::= 'PROPERTIES' '(' PropertyAssignment ( ',' PropertyAssignment )* ')'

PropertyAssignment ::= PropertyAccess '=' ValueExpression
```

PGQL supports the insertion of edges and vertices into a graph. In the same query, multiple vertices and edges can be inserted by enumerating them after the `INSERT` keyword. All inserted entities must be identified with a variable name that has to be unique for the whole modification query.

So the following query should fail, because the variable `x` is not local to a single vertex insertion term:

```
INSERT VERTEX x, VERTEX x
```

The id values for the inserted entities are automatically generated.

### Inserting vertices

Vertices can be inserted with or without a match.

If the match is missing, one unconnected vertex is inserted into the graph. For example, the following query inserts a single vertex:

```
INSERT VERTEX x LABELS ( Male ) PROPERTIES ( x.age = 22 )
```

In the presence of a match, as many vertices are inserted as there are matched rows. So the following query inserts a new vertex for every vertex in the graph that is labelled `Male`:

```
INSERT VERTEX x LABELS ( Male ) PROPERTIES ( x.age = y.age )
FROM MATCH (y:Male)
```

In the presence of a `GROUP BY` expression, as many vertices are inserted as there are matched groups. For example, the following query inserts a new vertex for every profession in the graph:

```
  INSERT VERTEX x LABELS ( Profession ) PROPERTIES ( x.name = y.profession )
    FROM MATCH (y:Person)
GROUP BY y.profession
```

### Inserting edges

Edges can be inserted by specifying the source and destination vertices. Only the insertion of directed edges is supported.

For example, the following query inserts an edge with source `x` and destination `y`:

```
INSERT EDGE e BETWEEN x AND y
FROM MATCH (x)
   , MATCH (y)
WHERE id(x) = 1 AND id(y) = 2
```

### Labels

Labels for the inserted entities can be specified between parentheses after the `LABELS` keyword.

For example:

```
INSERT EDGE e BETWEEN x AND y LABELS ( knows )
FROM MATCH (x:Person)
   , MATCH (y:Person)
WHERE id(x) = 1 AND id(y) = 2
```

### Properties

Properties can be specified between parentheses after the `PROPERTIES` keyword. On the right-hand side of the expression, the property name must be preceded by the variable name and a dot. Property assignments can be arbitrary expressions with similar restrictions as property assignments in the case of update queries. Property expressions cannot refer to other entities that are inserted at the same time.

For example, the following query inserts a new vertex with `age = 22`:

```
INSERT VERTEX v PROPERTIES ( v.age = 22 )
```

Edge properties can be specified in the same manner:

```
INSERT EDGE e BETWEEN x AND y LABELS ( knows ) PROPERTIES ( e.since = DATE '2017-09-21' )
FROM MATCH (x:Person)
   , MATCH (y:Person)
WHERE id(x) = 1 AND id(y) = 2
```

In case of a partitioned schema, only those properties can be assigned that are defined for the type of the entity.
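As an illustration of this restriction (the schema and the `salary` property here are assumptions made up for the example, not part of the specification): if the partitioned schema defines only `name` and `age` for the `Person` label, an insertion like the following would be rejected, because `salary` is not defined for `Person`:

```
INSERT VERTEX v LABELS ( Person ) PROPERTIES ( v.salary = 50000 )
```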
Note that the entity type is determined by the label(s).

### Multiple inserts in the same INSERT clause

One insert clause can contain multiple inserts.

For example, the query below inserts two vertices into the graph:

```
INSERT
  VERTEX v LABELS ( Male ) PROPERTIES ( v.age = 23, v.name = 'John' ),
  VERTEX u LABELS ( Female ) PROPERTIES ( u.age = 24, u.name = 'Jane' )
```

Multiple insertions under the same `INSERT` can be used to set a newly inserted vertex as source or destination for a newly inserted edge.

For example, the following query inserts a vertex and an edge that connects it to the matched vertex `y`:

```
INSERT VERTEX x LABELS ( Person ) PROPERTIES ( x.name = 'John' )
     , EDGE e BETWEEN x AND y LABELS ( knows ) PROPERTIES ( e.since = DATE '2017-09-21' )
FROM MATCH (y)
WHERE y.name = 'Jane'
```

Note that the properties of `x` cannot be accessed in the property assignments of `e`; only the variable itself is visible as the source of the edge. For this reason, setting `e.since` to `x.graduation_date` would cause the query to fail.

In the presence of a match, as many edges are inserted as there are matched (not necessarily unique) vertex pairs. If a vertex pair is matched more than once, multiple edges will be inserted between the vertices.

For example, consider the following query:

```
INSERT EDGE e BETWEEN x AND y
FROM MATCH (x)
   , MATCH (y) -> (z)
WHERE id(x) = 1
```

If the query is executed on the example graph (the accompanying figure is omitted here), the following vertices will be matched:

| x | y | z |
| --- | --- | --- |
| V1 | V2 | V4 |
| V1 | V3 | V2 |
| V1 | V3 | V4 |

In that case, three edges will be inserted: one connecting `V1` and `V2`, and two different edges both connecting `V1` and `V3`.

## UPDATE

The `UPDATE` clause allows for setting the properties of one or more vertices and edges.

The syntax is:

```
UpdateClause ::= 'UPDATE' GraphElementUpdate ( ',' GraphElementUpdate )*

GraphElementUpdate ::= VariableReference 'SET' '(' PropertyAssignment ( ',' PropertyAssignment )* ')'
```

For example, the following query sets the property `age` of every person named "John" to the value `42`:

```
UPDATE x SET ( x.age = 42 )
FROM MATCH (x:Person)
WHERE x.name = 'John'
```

An example in which properties of multiple vertices and edges are updated is:

```
UPDATE v SET ( v.carOwner = true )
     , u SET ( u.weight = 3500 )
     , e SET ( e.since = DATE '2010-01-03' )
FROM MATCH (v:Person) <-[e:belongs_to]- (u:Car)
WHERE v.name = 'John'
```

Above, we match a person named John and the car that belongs to John. We then set the property `carOwner` of John to true, we set the property `weight` of the car to 3500, and we set the property `since` of the `belongs_to` edge to the date 2010-01-03.

### Handling read after write conflicts

During the update, the assigned values (the right-hand sides of assignments) correspond to the graph property values before the beginning of the update.
This aligns with the snapshot isolation semantics defined between modifications in the same query.

For example, consider the following update:

```
UPDATE x SET ( x.a = y.b, x.b = 12 )
FROM MATCH (x) -> (y)
```

It is possible that a vertex is matched by both `(x)` and `(y)`, for example:

| x | y |
| --- | --- |
| V1 | V2 |
| V3 | V1 |

Supposing that `V1.b` was `20` before executing the update, `V1.b` will be assigned `12` and `V3.a` will be assigned `20`, no matter in which order the updates are executed.

### Handling write after write conflicts

Multiple writes to the same property of the same entity are not allowed; in such cases the execution terminates with an error.

For example, consider the following query:

```
UPDATE x SET ( x.a = y.a )
FROM MATCH (x) -> (y)
```

If the following vertices are matched

| x | y |
| --- | --- |
| V1 | V2 |
| V1 | V3 |

a runtime exception will be thrown, because the value assigned to `V1.a` could be ambiguous.

As an extension to this semantics, PGX implements a more relaxed version of the conflicting-write checks. If the assigned value can be statically guaranteed to depend only on property values of the entity it is assigned to, then even in the case of multiple assignments (since the assigned value is always the same) the update succeeds.

For example, in the following case, multiple writes to `v.a` are allowed, because no matter how many times `v.a` is written, it is always assigned the same value (65 minus its own `age` property):

```
UPDATE v SET ( v.a = 65 - v.age )
FROM MATCH (v:Person) -> (u:Person)
WHERE v.name = 'John'
```

In the following case, however, multiple writes to `v.a` are not allowed, because the value of the property would be ambiguous: 65 minus the other vertex's `age` property, which can be different for different matched `u`'s.

```
UPDATE v SET ( v.a = 65 - u.age )
FROM MATCH (v:Person) -> (u:Person)
WHERE v.name = 'John'
```

## DELETE

```
DeleteClause ::= 'DELETE' VariableReference ( ',' VariableReference )*
```

Entities can be deleted by enumerating them after the `DELETE` keyword. The order of enumeration does not affect the result of the execution.

For example, one can delete all edges from a graph using the following query:

```
DELETE e
FROM MATCH () -[e]-> ()
```

Multiple deletes of the same entity are not considered conflicting. For example, consider the following query:

```
DELETE x, y
FROM MATCH (x) -> (y)
```

In that case, even if a vertex is matched multiple times by `(x)` or `(y)` and deleted multiple times, the query will complete without an exception.

If a vertex is deleted, all its incoming and outgoing edges are deleted as well, thus there are no dangling edges left after a query. So the following query not only deletes the vertex with id `11` but also all edges for which it is the source or destination:

```
DELETE x
FROM MATCH (x)
WHERE id(x) = 11
```

Because of the implicit deletion of edges, the following query can be used to delete all edges as well as all vertices from a graph:

```
DELETE x
FROM MATCH (x)
```

## Combining INSERT, UPDATE and DELETE

Multiple modifications can be executed in the same query.
For example, to update a vertex and also insert an edge with the same vertex as source, the following query can be used:

```
INSERT EDGE e BETWEEN x AND y
UPDATE y SET ( y.a = 12 )
FROM MATCH (x), MATCH (y)
WHERE id(x) = 1 AND id(y) = 2
```

### Isolation semantics of modification queries

Modify queries follow snapshot isolation, which means all modifications see a consistent state of the graph, that is, its state before the execution of the update. For this reason, property assignments can come from updated and deleted vertices, but they cannot refer to inserted vertices.

For example, the query below succeeds, because `y.age` is evaluated based on the graph's status before the query:

```
INSERT VERTEX x PROPERTIES ( x.age = y.age )
DELETE y
FROM MATCH (y)
```

Please note that, for the same reason, properties of newly inserted vertices cannot be referenced in the right-hand-side expressions. For example, the following query would fail, as `x` is not yet in the graph and `x.age` cannot be evaluated:

```
INSERT VERTEX x PROPERTIES ( x.age = 24 )
     , VERTEX y PROPERTIES ( y.age = x.age )
```

### Handling conflicting modifications

Multiple modifications of the same entity are not allowed; in such cases the execution terminates with an error. This section only addresses conflicts between different modifications in the same query. For conflicts within the same modification, please refer to the corresponding sections.

One example of such a conflict is an UPDATE-DELETE conflict: the same entity cannot be updated and deleted in the same query.

For example, let us consider the following query:

```
UPDATE x SET ( x.a = 11 )
DELETE x
FROM MATCH (x)
```

Here the conflict between the deleted and the updated vertex is trivial. However, the conflict is not always this straightforward; for example, the following query can also fail due to a conflicting update and delete:

```
UPDATE x SET ( x.a = 11 )
DELETE y
FROM MATCH (x) -> (y)
```

If the vertices matched by `x` are distinct from the ones matched by `y`, the query should pass; however, if there is a vertex that is matched by both `x` and `y`, the query will fail with an exception. Note that the order of modifications does not matter; the query will fail in any case.

Similar behavior is expected upon INSERT-DELETE conflicts, where the inserted entity depends on an entity that is being deleted. Note that because of the snapshot semantics, this is only possible if an edge is inserted and, at the same time, its source or destination vertex is deleted.

For example, consider the following, non-trivial case:

```
INSERT EDGE e BETWEEN x AND y
DELETE z
FROM MATCH (x) -> (y), MATCH (z)
WHERE id(z) = 11
```

If any vertex is matched by `z` and by either `x` or `y`, then after executing the query the inserted edge would not have a source or destination. Thus, in that case, the execution fails.

# Other Syntactic rules

## Identifiers

Graph names, property names, labels, etc. are identifiers that can appear in either unquoted form or double quoted form.

The syntax is:

```
Identifier ::= UNQUOTED_IDENTIFIER | QUOTED_IDENTIFIER
```

### Unquoted identifiers

Unquoted identifiers take the form of an alphabetic character followed by zero or more alphanumeric or underscore (i.e. `_`) characters:
```
UNQUOTED_IDENTIFIER ::= [a-zA-Z][a-zA-Z0-9\_]*
```

Unquoted identifiers are automatically uppercased.

For example, the following two queries are equivalent:

```
SELECT n.dob AS name
FROM MATCH (n:Person) ON myGraph
WHERE n.firstName = 'Nikita'
```

```
SELECT "N"."DOB"
FROM MATCH ("N":"PERSON") ON "MYGRAPH"
WHERE "N"."FIRSTNAME" = 'Nikita'
```

Note that this is aligned with SQL, which also automatically uppercases unquoted identifiers. However, as an extension to SQL (which matches uppercased references in an exact manner), PGQL matches uppercased references to graphs, labels and properties in a case-insensitive manner if no exact match exists.

For example, a property `firstName` in the graph can be referenced in PGQL either through `firstName`, `"firstName"`, `"FIRSTNAME"` or `fIrStNaMe`, but not through `"FirstName"`.

### Quoted identifiers

Quoted identifiers are delimited with double quotes and support the full range of Unicode characters:

```
QUOTED_IDENTIFIER ::= '"' ( ~[\"] | ESCAPED_IDENTIFIER_CHARACTER )* '"'

ESCAPED_IDENTIFIER_CHARACTER ::= '""'
```

The above says that a quoted identifier starts and ends with double quotes and in between has any number of:

* Unicode characters except for the double quote character
* An escaped double quote in the form of two double quotes

Note that the syntax of a PGQL identifier is different from a string literal in languages like Java or C++, because unlike in Java and C++, characters like a new line or a backslash are not escaped in PGQL; in identifiers in PGQL, only double quotes are escaped.

For example, take the following string:

```
My string with single quotes ', double quotes ", backslashes \
new lines and tabs	.
```

Here is an example of how to use such a string as a property name in PGQL:

```
SELECT *
FROM MATCH (n)
WHERE n."My string with single quotes ', double quotes "", backslashes \
new lines and tabs	." = 123
```

As you can see, only the double quote (`"`) was escaped (`""`).

## String literals

The syntax for string literals is:

```
STRING_LITERAL ::= "'" ( ~[\'] | ESCAPED_STRING_LITERAL_CHARACTER )* "'"

ESCAPED_STRING_LITERAL_CHARACTER ::= "''"
```

The above says that a string literal starts and ends with single quotes and in between has any number of:

* Unicode characters except for the single quote character
* An escaped single quote in the form of two single quotes

Note that this is different from string literals in languages like Java or C++. First of all, PGQL string literals are single-quoted instead of double-quoted.
Second, unlike in Java and C++, characters like a new line or a backslash are not escaped in PGQL; in string literals in PGQL, only single quotes are escaped.

For example, take the following string:

```
My string with single quotes ', double quotes ", backslashes \
new lines and tabs	.
```

Here is an example of how to use such a string as a literal in PGQL:

```
SELECT *
FROM MATCH (n)
WHERE n.prop = 'My string with single quotes '', double quotes ", backslashes \
new lines and tabs	.'
```

As you can see, only the single quote (`'`) was escaped (`''`).

## Keywords

The following is a list of keywords in PGQL.

```
PATH, SELECT, FROM, MATCH, ON, WHERE, GROUP,
BY, HAVING, ORDER, ASC, DESC, LIMIT, OFFSET,
AND, OR, NOT, true, false, IS, NULL, AS,
DATE, TIME, TIMESTAMP, WITH, ZONE, DISTINCT,
COUNT, MIN, MAX, AVG, SUM, ARRAY_AGG, LISTAGG,
IN, EXISTS, CAST, CASE, WHEN, THEN, ELSE, END,
EXTRACT, YEAR, MONTH, DAY, HOUR, MINUTE,
SECOND, TIMEZONE_HOUR, TIMEZONE_MINUTE,
TOP, SHORTEST, CHEAPEST, COST, CREATE,
PROPERTY, GRAPH, VERTEX, EDGE, TABLES,
LABEL, PROPERTIES, ARE, ALL, COLUMNS,
EXCEPT, NO, INSERT, UPDATE, DELETE, INTO,
LABELS, SET, BETWEEN
```

Keywords are case-insensitive, and variations such as `SELECT`, `Select` and `sELeCt` can be used interchangeably.

## Integers and Decimals

The lexical grammar for integers and decimals is:

```
UNSIGNED_INTEGER ::= [0-9]+

UNSIGNED_DECIMAL ::= ( [0-9]* '.' [0-9]+ ) | ( [0-9]+ '.' )
```

These rules describe the following:

* Unsigned integers consist of one or more digits.
* Unsigned decimals either consist of zero or more digits followed by a dot (`.`) and one or more digits, or they consist of one or more digits followed by only a dot (`.`).

## Comments

Comments are delimited by `/*` and `*/`.

The syntax is:

```
COMMENT ::= '/*' ~[\*]* '*/'
```

For example:

```
/* This is a
   multi-line
   comment. */
SELECT n.name, n.age
FROM MATCH (n:Person) /* this is a single-line comment */
```
http://www.yourfxguide.com/2015/02/fibonacci-forex-trading-how-to-apply.html
"Welcome! Dear Traders,you are reading my forex trading experiences. Forex trading is a very profitable and very risky business opportunity. If you are a beginner, calm down,have a cup of coffee, and convince yourself that you need to study hard to win in forex trading. Obviously, the task is not easy as the statistics claim that only 5% traders win in forex trading. If you are determined, serious,and hard working, you can surely be included in the group of winners.\n\n## Sunday, February 8, 2015\n\n### FIBONACCI FOREX TRADING: HOW TO APPLY THE FIBONACCI SERIES IN FOREX TRADING?\n\nLeonardo Fibonacci, an Italian mathematician who first introduced the excellent Fibonacci series which is very widely applied in forex trading. Every forex traders should be introduced with the Fibonacci series and the Fibonacci ratios. The series is a very simple series of numbers.\n\nEven Leonardo Fibonacci might not know that one day his proposed series will be applied by millions of traders world wide.\n\nThe Fibonacci series is 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144…The series is started with 0 followed by 1, and third number is derived by adding 0 and 1, similarly the fourth number is derived by adding third and second number. This way the series is forwarded.\n\nFibonacci series are very closely related with the golden ratio hence the golden spiral.\n\nIn forex trading, the Fibonacci series is not directly applied that much, but its ratios are applied at a huge rate. The Fibonacci ratios are calculated using the numbers in the Fibonacci series. The small phi (0.618), which is closely related with the golden ratio, is calculated with two consecutive numbers in the Fibonacci series.\n\nThe most widely applied Fibonacci ratios are 1.00, 0.236, 0.382, 0.50, 0.618, 0.886, 1.13, 1.41, 1.618, 1.41, 2.618 and so on.\n\nIn forex trading, harmonic chart patterns, that are drawn using the Fibonacci retracement tool, are very widely used to find the entry and exit signals. Most remarkable advantage of these harmonic chart patterns is that you can gain maximum profit with tight stop loss with these patterns.\n\nFibonacci retracement tools are also used to find the potential support and resistance levels. Fibonacci extension tool is used to predict the trend.\n\nFibonacci retracement tool can be used by both breakout traders and range traders. When a trend reaches to a Fibonacci ratio greater than 0.382, a correction is very expected in the trend.When a trend breaks a Fibonacci ratio greater than 0.382, a strong bullish or bearish trend is suggested.\n\nWhen a Fibonacci ratio is broken buy the trend, it becomes a resistance in case of bearish trend and a support in case of bullish trend.\n\nI found the Fibonacci tools as very effective technical analysis tools in my trading strategy. I even cannot think a single order or exiting an order with out the Fibonacci tools.\n\nEven Fibonacci ratios are also applied in pivot point analysis. If you are introduced with Fibonacci pivot points, you must know how the Fibonacci ratios are applied to calculate the pivot points.\n\nThese are all about the basic of Fibonacci forex trading. In my next post, I will write about applying Fibonacci tools in forex trading.\n\nIf you have any question, simply drop a comment below.\nYou can enter you email in the subscription box to receive the updates.",
https://tech.fpcomplete.com/haskell/tutorial/lens
"Firstly, what is lens/lenses/optics?\n\n• A really great solution to the \"records problem\"\n• An almost accidental discover\n• Ridiculously complicated type level playground\n• Collections of lots of different related things\n• The first real subtyping solution in Haskell\n• A complete language based on Haskell\n\n## The records problem\n\nImagine a nested data structure:\n\n```data Address = Address\n}\ndata Person = Person\n, personName :: !Text\n}\n```\n\nIf you have a value `alice :: Person`, and you want to get the person's city, you can use record accessors as normal functions:\n\n```getPersonCity :: Person -> Text\n\nalice'sCity :: Text\nalice'sCity = getPersonCity alice\n```\n\nThat's pretty elegant. But let's say that you want to change Alice's city to something else. In a mutable, object-oriented language, you'd probably expect something like:\n\n```alice.address.city = \"Los Angeles\";\n```\n\nThe first issue in Haskell is that we can't mutate `alice`; we instead have to return a new `Person` value with the updated city. Type signature wise, we'd be looking at:\n\n```setPersonCity :: Text -> Person -> Person\n```\n\nThat makes sense. Now let's see how we'd implemente this:\n\n```import Data.Text (Text)\n\n}\ndata Person = Person\n, personName :: !Text\n}\n\ngetPersonCity :: Person -> Text\n\nsetPersonCity :: Text -> Person -> Person\nsetPersonCity city person = person\n}\n}\n```\n\nWell... that obviously sucks. It only gets worse as the nesting levels go deeper. Let's look at some ways to make this easier to stomach.\n\n## Modifier functions\n\nLet's see if we can make this slightly less painful with some modifier functions:\n\n```modifyAddressCity :: (Text -> Text) -> Address -> Address\n}\n\n}\n\nmodifyPersonCity :: (Text -> Text) -> Person -> Person\n\nsetPersonCity :: Text -> Person -> Person\nsetPersonCity city = modifyPersonCity (const city)\n```\n\nComposing the modifier functions works nicely, and then we can easily convert a modifier function into a setter function. Writing the initial modifier functions is tedious, but that's the price of doing business.\n\nAnother downside is that we've totally separated out the getter and modifier functions. Let's see if we can combine those.\n\n## Old style lenses\n\nIf our problem is splitting up the getters and modifiers, let's just stick them together.\n\n```data Lens s a = Lens\n{ lensGetter :: s -> a\n, lensModify :: (a -> a) -> s -> s\n}\n```\n\nPreviously we could compose our getters and modifiers with the good old `.` function composition operator, but now we need something a bit more specialized:\n\n```composeLens :: Lens a b -> Lens b c -> Lens a c\ncomposeLens (Lens getter1 modify1) (Lens getter2 modify2) = Lens\n{ lensGetter = getter2 . getter1\n, lensModify = modify1 . modify2\n}\n```\n\nWith that in hand, we can write lenses for an address's city, a person's address, put them together, and then easily extract a setter:\n\n```personAddressL :: Lens Person Address\n, lensModify = \\f person -> person { personAddress = f (personAddress person) }\n}\n\n}\n\npersonCityL :: Lens Person Text\n\nsetPersonCity :: Text -> Person -> Person\nsetPersonCity city = lensModify personCityL (const city)\n```\n\nThis works, but it feels clunky. It also has some performance overhead we didn't have previously due to the creation of the `Lens` values. 
A more advanced downside, which we haven't even touched on yet: it doesn't allow for polymorphic update, which deals with changing type variables (we won't deal with that for now).

## Van Laarhoven lenses

In all honesty, understanding exactly how these next forms of lenses work isn't strictly necessary. It's built on the same premise as the previous kinds of lenses, but it's:

* More efficient
* More easily composable
* Generalizes to other cases
* Produces much crazier error messages

Let's start slowly in motivating this. Our first goal is to see if we can combine our getter and modifier into a single value, without using a product type. We need to be able to extract both a getter and modifier from this value, so it has to provide the following:

```
type Lens s a = ?

view :: Lens s a -> s -> a
view = ?

over :: Lens s a -> (a -> a) -> s -> s
over = ?
```

(Ignore the funny names, they're part of `lens`.)

It doesn't seem like those two output types (`s -> a` and `(a -> a) -> s -> s`) have much in common. But we're going to use a trick to make them match up. Let's start with the `over` result:

```
(a -> a) -> (s -> s)
```

I'm going to wrap up the results of the two functions inside the `Identity` functor:

```
newtype Identity a = Identity { runIdentity :: a }
  deriving Functor

type LensModify s a = (a -> Identity a) -> (s -> Identity s)

over :: LensModify s a -> (a -> a) -> s -> s
over lens f s = runIdentity (lens (Identity . f) s)
```

And we can create values for this lens type with:

```
personAddressL :: LensModify Person Address
personAddressL f person = Identity $ person
  { personAddress = runIdentity $ f (personAddress person)
  }
```

Or, if we want to take advantage of the `Functor` instance and not play with wrapping and unwrapping the `Identity` values, we get:

```
personAddressL :: LensModify Person Address
personAddressL f person =
  (\address -> person { personAddress = address }) <$> f (personAddress person)
```

Alright, let's switch over to the getter side. This time around, we want to start with the same basic `(a -> a) -> (s -> s)`, but apply a different wrapper type to allow us to get a getter function at the end, `s -> a`. So in other words, we need to be able to convert from `s -> Wrapper s` to `s -> a`. This may seem impossible at first, but it turns out that there's a cool trick to make this happen:

```
newtype Const a b = Const { getConst :: a }
  deriving Functor

type LensGetter s a = s -> Const a s

view :: LensGetter s a -> s -> a
view lens s = getConst (lens s)

personAddressL :: LensGetter Person Address
personAddressL person = Const (personAddress person)
```

This is fairly complex. `Const` is a data type that does the same thing as the `const` function: it holds onto one value and ignores a second. Here, `Const` is keeping our `Address` value for us and allowing us to extract it inside `view`. Stare at it a while and it will make sense, but it's also just a convoluted way to get to our goal.

Ultimately, our goal is to make `LensGetter` and `LensModify` the same thing. But right now, they don't look very similar.

```
type LensModify s a = (a -> Identity a) -> (s -> Identity s)
type LensGetter s a = s -> Const a s
```

In order to bring them more in line, we need to redefine `LensGetter` as:

```
type LensGetter s a = (a -> Const a s) -> (s -> Const a s)
```

And it turns out by shuffling around some things just a bit, we can make this work as well:

```
type LensGetter s a = (a -> Const a a) -> (s -> Const a s)

view :: LensGetter s a -> s -> a
view lens s = getConst (lens Const s)

personAddressL :: LensGetter Person Address
personAddressL f person = Const (personAddress person)
```

Again, kind of crazy, but it works. Our wrapper type is now `Const a`, and we pass in the `Const` data constructor to the `lens`.
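If the `Const` trick still feels slippery, it can help to trace `view` by hand (an informal equational sketch using the definitions above, with the `alice` value from earlier):

```
--   view personAddressL alice
-- = getConst (personAddressL Const alice)   -- definition of view
-- = getConst (Const (personAddress alice))  -- personAddressL wraps the field in Const
-- = personAddress alice                     -- getConst unwraps it again
```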
It all kinda sorta works. One final tweak on all of this. Previously, we defined our modify lens using the `Functor` interface so we didn't need to know about `Identity` at all:

```
personAddressL :: LensModify Person Address
personAddressL f person =
  (\address -> person { personAddress = address }) <$> f (personAddress person)
```

It turns out that this exact same function body works for defining `LensGetter`:

```
personAddressL :: LensGetter Person Address
personAddressL f person =
  (\address -> person { personAddress = address }) <$> f (personAddress person)
```

And now we can finally unify our getter and modify lenses into one:

```
type Lens s a = forall f. Functor f => (a -> f a) -> (s -> f s)

newtype Identity a = Identity { runIdentity :: a }
  deriving Functor

newtype Const a b = Const { getConst :: a }
  deriving Functor

over :: Lens s a -> (a -> a) -> s -> s
over lens f s = runIdentity (lens (Identity . f) s)

view :: Lens s a -> s -> a
view lens s = getConst (lens Const s)
```

This means that we have a lens if we support all possible functors in that type signature. It turns out, almost as if by magic, that this allows us to recapture getter and modifier functions (and via modifier, setter functions).

The formulation is wonky, and very difficult to grasp. Don't worry if the intuition hasn't kicked in. It turns out that you can use lenses quite a bit without fully grokking them.

## Composing lenses

First, let's define a helper function for turning a getter and a setter into a lens:

```
lens :: (s -> a) -> (s -> a -> s) -> Lens s a
lens getter setter = \f s -> setter s <$> f (getter s)
```

Then we can more easily define lenses for `person.address` and `address.city`:

```
personAddressL :: Lens Person Address
personAddressL = lens personAddress (\person address -> person { personAddress = address })

addressCityL :: Lens Address Text
addressCityL = lens addressCity (\address city -> address { addressCity = city })
```

How do we compose them together into the `person.address.city` lens? If we expand the type signatures a bit it may become obvious:

```
personAddressL :: Functor f => (Address -> f Address) -> (Person -> f Person)
addressCityL :: Functor f => (Text -> f Text) -> (Address -> f Address)
personCityL :: Functor f => (Text -> f Text) -> (Person -> f Person)
```

How would you implement `personCityL`? Well, it turns out to be really easy:

```
personCityL :: Lens Person Text
personCityL = personAddressL . addressCityL
```

This is a great strength of lenses: they work with normal function composition. They also work in what Haskellers would consider backwards order: the composition seems to flow from left to right instead of right to left. But on the other hand, this seems to match up perfectly with what object oriented developers expect, so that's nice.

## Helper functions and operators

Dealing directly with the `Lens` type is needlessly painful. Instead, we work through helper functions and operators. You've already seen `view`, `over`, and `lens`. Let's implement a few more:

```
(^.) :: s -> Lens s a -> a
s ^. lens = view lens s

infixl 8 ^.

(%~) :: Lens s a -> (a -> a) -> s -> s
(%~) = over

infixr 4 %~

reverseCity :: Person -> Person
reverseCity = personCityL %~ T.reverse -- assumes: import qualified Data.Text as T

getCity :: Person -> Text
getCity = (^. personCityL)

set :: Lens s a -> a -> s -> s
set lens a s = runIdentity $ lens (\_olda -> Identity a) s

setCity :: Text -> Person -> Person
setCity = set personCityL
```

It turns out that we can make lenses a bit more general. If we look at the current type:

```
type Lens s a = forall f. Functor f => (a -> f a) -> (s -> f s)
```

It requires that the field originally be of type `a` and ultimately of type `a`. It also requires that the value overall is of type `s` and ultimately of type `s`.
Let's create a new datatype where this would be limiting:

```
data Person age = Person
  { personName :: !Text
  , personAge :: !age
  }

aliceInt :: Person Int
aliceInt = Person "Alice" 30

personAgeL :: Lens (Person age) age
personAgeL = lens personAge (\x y -> x { personAge = y })

setAge :: age -> Person oldAge -> Person age
setAge age person = person { personAge = age }

aliceDouble :: Person Double
aliceDouble = setAge 30.5 aliceInt
```

Try as we might, we cannot use `personAgeL` to change the age value from an `Int` to a `Double`. Its construction requires that the input and output `age` type variable remain the same. However, with a small extension to our `Lens` type, we can make this work:

```
type Lens s t a b = forall f. Functor f => (a -> f b) -> (s -> f t)

-- Our old monomorphic variant
type Lens' s a = Lens s s a a
```

This says that we have some data structure, `s`. Inside `s` is a value `a`. If you replace that `a` with a `b`, you get out a `t`. Sound weird? Let's see it in practice:

```
personAgeL :: Lens (Person age1) (Person age2) age1 age2
personAgeL = lens personAge (\x y -> x { personAge = y })

setAge :: age -> Person oldAge -> Person age
setAge = set personAgeL
```

## Getters, setters, folds, traversals...

What we've seen so far is the original motivating case. It turns out that there are many crazy ways of generalizing a `Lens` further to represent more usages. This comes by means of various techniques, such as:

* Using concrete types
* Using a different typeclass constraint from `Functor`
* Replacing functions (e.g., `a -> f b`) with profunctors (e.g., `p a (f b)`)

We've already seen examples of the concrete types approach. Let's use their more standard names now:

```
type ASetter s t a b = (a -> Identity b) -> s -> Identity t
type ASetter' s a = ASetter s s a a
type Getting r s a = (a -> Const r a) -> s -> Const r s
type SimpleGetter s a = forall r. Getting r s a
```

By using different typeclasses, we're able to create a form of subtyping. For example, every `Applicative` is also a `Functor`. So if we define a new type like this:

```
type Traversal s t a b = forall f. Applicative f => (a -> f b) -> s -> f t
```

Every `Lens s t a b` is also a `Traversal s t a b`, but the reverse is not true. We can go even deeper down the rabbit hole with:

```
type Fold s a = forall f. (Contravariant f, Applicative f) => (a -> f a) -> s -> f s
```

Now we need `f` to be both `Applicative` and `Contravariant`, so that all `Traversal`s are `Fold`s, but not all `Fold`s are `Traversal`s. This actually matches up with the related typeclasses `Foldable` and `Traversable`, where the latter is a subclass of the former.

But what does this have to do with field accessors? you may ask. This is what I was implying above with lens being its own language on top of Haskell: if desired, you can replace a lot of functionality found elsewhere with lens-centric code. All of these different types I've mentioned are known as optics, and since they all have roughly the same shape, they compose together very nicely.

## Packages

The `lens` package itself is fully loaded, and provides lots of helper functions, operators, types, and generality. It also has a relatively heavy dependency footprint. Many projects instead use `microlens`, which has a much lighter footprint, but also lacks some functionality (for example, prisms).

If writing those lenses above by hand seems tedious to you, you're not alone.
Many people use macros/code generation/Template Haskell (all the same thing in Haskell) to automatically generate lenses, and sometimes typeclasses to generalize them. Examples: the `makeLenses` function provided by both the `lens` package and `microlens-th` (used in the exercises below); a minimal standalone sketch also appears after the Solutions section.

## Best practices

The most important decision to be made for a team is how to use lenses. Best practices are vital. You do not want half the team using advanced lens features and the other half not understanding them at all. I can advise on what I consider best practices, but it will depend a lot on how your team wants to approach things. What I've standardized on:

* Using the basic `Lens` types and its functions: solid gold, do it, don't hold back! The biggest downside is the slightly confusing error messages, but you get used to that quickly.
* Replacing common library functions (like `lookup`) with their lensy counterparts (like `at`): not worth it. You'll make your code shorter, but I prefer the standard Haskell idioms to relearning with lens.
* Advanced techniques like prisms, folds, traversals: soft avoidance on my part. I think usually the standard Haskell techniques are better suited, but again you'll be able to code golf more easily with the optics. Prisms in particular are, in my experience, a ripe source of bugs, where pattern matching can lead to much clearer code.

We should base the homework exercises around how deeply into lenses the team wants to go.

## Exercise 1

Fill out the stubs below to make the test suite pass. Probably goes without saying, but: use the generated lenses (`address`, `street`, `age`, etc.) wherever possible instead of falling back to records.

```
#!/usr/bin/env stack
-- stack --resolver lts-12.21 script
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell #-}
import Lens.Micro.Platform
import Data.Text (Text)
import Test.Hspec

data Address = Address
  { _street :: !Text
  , _city :: !Text
  }

data Person = Person
  { _name :: !Text
  , _address :: !Address
  , _age :: !Int
  }

makeLenses ''Address
makeLenses ''Person

hollywood :: Text
hollywood = "Hollywood Blvd"

alice :: Person
alice = Person
  { _name = "Alice"
  , _address = Address
      { _street = hollywood
      , _city = "Los Angeles"
      }
  , _age = 30
  }

wilshire :: Text
wilshire = "Wilshire Blvd"

aliceWilshire :: Person
aliceWilshire = _ -- FIXME set Alice's street to Wilshire

getStreet :: Person -> Text
getStreet = _

-- | Increase age by 1
birthday :: Person -> Person
birthday = _

getAge :: Person -> Int
getAge = _

main :: IO ()
main = hspec $ do
  it "lives on Wilshire" $
    getStreet aliceWilshire `shouldBe` wilshire
  it "getStreet works" $
    getStreet alice `shouldBe` hollywood
  it "birthday" $
    getAge (birthday alice) `shouldBe` _age alice + 1
```

## Exercise 2

Remove the `{-# LANGUAGE TemplateHaskell #-}` line from the previous exercise and get the code to compile. You'll need to define your own lenses to make this work. Use the `lens` helper function, and make sure to add type signatures to the values you create.

## Exercise 3

Real challenge: now implement those lenses again, but without using the `lens` helper function. Instead, use `fmap` or `<$>` directly.

## Exercise 4

There are tuple lenses provided, named `_1`, `_2`, and so on, for modifying components in tuples.
Fill in the stub below so that the test passes:

```
#!/usr/bin/env stack
-- stack --resolver lts-12.21 script
import Lens.Micro.Platform
import Test.Hspec

main :: IO ()
main = hspec $
  it "fun with tuples" $
    let tupleLens = _
        tuple :: ((Int, Double), (Bool, Char, String))
        tuple = ((1, 2), (True, 'x', "Hello World"))
    in over tupleLens not tuple `shouldBe`
       ((1, 2), (False, 'x', "Hello World"))
```

## Exercise 5

Everything we've done so far has been on product types. In these cases, lenses work perfectly: we know that we have every field available. However, lenses do not work perfectly on sum types, where values may or may not exist. In these cases, prisms, traversals, and folds come into play. We're not necessarily going to be using these in depth, but it's good to be aware of them.

Let's use the `_Left` and `_Right` prisms (which work as traversals and folds as well). Fill in the expected values for the test suite below to begin to get an intuition for how the traversal functions work.

```
#!/usr/bin/env stack
-- stack --resolver lts-12.21 script
import Lens.Micro.Platform
import Test.Hspec

main :: IO ()
main = hspec $ do
  it "over left on left" $
    let val :: Either Int Double
        val = Left 5
    in over _Left (+ 1) val `shouldBe` _
  it "over left on right" $
    let val :: Either Int Double
        val = Right 5
    in over _Left (+ 1) val `shouldBe` _
  it "set left on left" $
    let val :: Either Int Double
        val = Left 5
    in set _Left 6 val `shouldBe` _
  it "set left on right" $
    let val :: Either Int Double
        val = Right 5
    in set _Left 6 val `shouldBe` _
```

## Exercise 6

Bonus! This one makes more extreme usage of the folds and traversals. Reimplement some common library functions using lenses. This will require looking through the docs for microlens or lens quite a bit.

```
#!/usr/bin/env stack
-- stack --resolver lts-12.21 script
{-# OPTIONS_GHC -Wall -Werror #-}
{-# LANGUAGE RankNTypes #-}
import Lens.Micro.Platform
import Test.Hspec
import Data.Monoid (Endo)

-- | map/fmap
mapLens :: ASetter s t a b -> (a -> b) -> s -> t
mapLens = _

-- | toList
toListLens :: Getting (Endo [a]) s a -> s -> [a]
toListLens = _

-- | catMaybes
catMaybesLens :: [Maybe a] -> [a]
catMaybesLens = _

main :: IO ()
main = hspec $ do
  it "mapLens" $
    mapLens _2 not ((), True) `shouldBe` ((), False)
  it "toListLens" $
    toListLens both ('x', 'y') `shouldBe` "xy"
  it "catMaybesLens" $
    catMaybesLens [Just 'x', Nothing, Just 'y'] `shouldBe` "xy"
```

## Solutions

### Exercise 1

Note that there are many other ways to solve some of these problems.

```
#!/usr/bin/env stack
-- stack --resolver lts-12.21 script
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell #-}
import Lens.Micro.Platform
import Data.Text (Text)
import Test.Hspec

data Address = Address
  { _street :: !Text
  , _city :: !Text
  }

data Person = Person
  { _name :: !Text
  , _address :: !Address
  , _age :: !Int
  }

makeLenses ''Address
makeLenses ''Person

hollywood :: Text
hollywood = "Hollywood Blvd"

alice :: Person
alice = Person
  { _name = "Alice"
  , _address = Address
      { _street = hollywood
      , _city = "Los Angeles"
      }
  , _age = 30
  }

wilshire :: Text
wilshire = "Wilshire Blvd"

aliceWilshire :: Person
aliceWilshire = set (address.street) wilshire alice

getStreet :: Person -> Text
getStreet = view (address.street)

-- | Increase age by 1
birthday :: Person -> Person
birthday = over age (+ 1)
--birthday = age %~ (+ 1)

getAge :: Person -> Int
getAge = view age

main :: IO ()
main = hspec $ do
  it "lives on Wilshire" $
    getStreet aliceWilshire `shouldBe` wilshire
  it "getStreet works" $
    getStreet alice `shouldBe` hollywood
  it "birthday" $
    getAge (birthday alice) `shouldBe` _age alice + 1
```

### Exercise 2

```
street :: Lens' Address Text
street = lens _street (\x y -> x { _street = y })

age :: Lens' Person Int
age = lens _age (\x y -> x { _age = y })
```

(The remaining lenses, such as `city`, `name` and `address`, are defined in the same way.)

### Exercise 3

```
street :: Lens' Address Text
street f address = (\x -> address { _street = x }) <$> f (_street address)

age :: Lens' Person Int
age f person = (\x -> person { _age = x }) <$> f (_age person)
```

### Exercise 4

```
let tupleLens = _2._1
```

### Exercise 5

The most important bit here to notice: using `set _Left` did not change the data constructor from `Right` to `Left`.

```
#!/usr/bin/env stack
-- stack --resolver lts-12.21 script
import Lens.Micro.Platform
import Test.Hspec

main :: IO ()
main = hspec $ do
  it "over left on left" $
    let val :: Either Int Double
        val = Left 5
    in over _Left (+ 1) val `shouldBe` Left 6
  it "over left on right" $
    let val :: Either Int Double
        val = Right 5
    in over _Left (+ 1) val `shouldBe` Right 5
  it "set left on left" $
    let val :: Either Int Double
        val = Left 5
    in set _Left 6 val `shouldBe` Left 6
  it "set left on right" $
    let val :: Either Int Double
        val = Right 5
    in set _Left 6 val `shouldBe` Right 5
```

### Exercise 6

```
#!/usr/bin/env stack
-- stack --resolver lts-12.21 script
{-# OPTIONS_GHC -Wall -Werror #-}
{-# LANGUAGE RankNTypes #-}
import Lens.Micro.Platform
import Test.Hspec
import Data.Monoid (Endo)

-- | map/fmap
mapLens :: ASetter s t a b -> (a -> b) -> s -> t
mapLens l f = over l f

-- | toList
toListLens :: Getting (Endo [a]) s a -> s -> [a]
toListLens l s = s ^.. l

-- | catMaybes
catMaybesLens :: [Maybe a] -> [a]
catMaybesLens list = list ^.. each._Just

main :: IO ()
main = hspec $ do
  it "mapLens" $
    mapLens _2 not ((), True) `shouldBe` ((), False)
  it "toListLens" $
    toListLens both ('x', 'y') `shouldBe` "xy"
  it "catMaybesLens" $
    catMaybesLens [Just 'x', Nothing, Just 'y'] `shouldBe` "xy"
```
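As referenced in the Packages section above, here is a minimal standalone sketch of the Template Haskell workflow (it assumes the `microlens-platform` package; the `Dog` type and its fields are made up for illustration):

```
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell #-}
import Lens.Micro.Platform
import Data.Text (Text)

data Dog = Dog
  { _dogName :: !Text
  , _dogAge  :: !Int
  }
  deriving Show

-- makeLenses generates the lenses `dogName` and `dogAge`
-- from the underscore-prefixed record fields.
makeLenses ''Dog

main :: IO ()
main = do
  let rex = Dog "Rex" 3
  print (rex ^. dogName)         -- "Rex"
  print (over dogAge (+ 1) rex)  -- Dog {_dogName = "Rex", _dogAge = 4}
```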
https://www.labvanced.com/content/learn/guide/task-editor/trial-randomization.html
"# # Trial Randomization\n\n## # Description\n\nWhile the trial system details how many trials (trial variations) there are defined in total, the trial randomization menu regulates how many trials are seen by each subject and how these are selected and ordered. \"Trial selection\" and \"trial ordering\" are two different consecutive processes. The trial randomization settings are located in the task editor in the left panel under \"Randomization\". Click on \"Edit\" to open the settings dialog, in which all relevant settings can be adjusted. The complete randomization settings consist of 6 separate menus. The \"Trial Order\" menu and the \"Condition Order\" menu are used to determine the overall trial order, while the menus \"Factor Randomization\", \"Trial Per Subject\", and \"Resulting Condition Groups & Trials\" are used to for trial selection. \"Simulation of Trial Sequence\" is used to preview the randomization settings that have been applied.",
"## # Trial Order\n\nThe trial order is the main setting, which will influence how trials are ordered /randomized. There are 5 possible settings:\n\n• Random (default option): When this option is selected, the Labvanced randomization settings will be most adjustable. The core feature of this option is that all selected trials will be randomly reshuffled for each new subject / session. If there is more than one trialgroup, the trials of different trialgroups will be presented intermixed.\n• Fixed by Design: If this option is selected, the trials will be presented in the order of how they are defined in the task editor (lower trial numbers will be shown first). This order will be the same for all subjects.\n• Fixed by Hand: If this option is activated, users can type a self-selected trial sequence. The sequence should be typed in the box \"Specify Fixed Trial Sequence\". To type in the sequence, use the trial numbers as defined in the editor and separate each number by a comma. This sequence will be the same for all subjects. As this option will specify both which trials are shown as well as the order of trials, the settings in other randomization menus will be ignored in this case.\n• Dynamic: This option can be used to create adaptive / dynamic trial sequences (e.g. jump to different trials based on the answer of the participant). Select a start trial number and then use the event system \"Jump to specific trial\" for deciding on the dynamic trial sequence.\n• Upload Trial Sequences: If you select this option, you can specify an individual trial sequence for each subject using an external CSV file (trials separated by comma, subjects by a newline CR-LF). Then upload these sequences and and select how you want to count the subjects (have a separate counter for each subject group, or use a global subject counter).\n\n## # Condition Order\n\nThe Condition Order menu also influences the order of trials. This option determines whether the individual conditions are presented in a block-wise order, or randomly intermixed (Random).\n\nThe \"Random\" option will show all trials of the various conditions in an intermixed way. To further enforce \"randomness,\" the number of trials shown from the same condition can be limited. Choose the Randomization constraint \"Limit Successive Repetitions\" and type in the maximum number of repeating conditions.\n\nThe \"Blockwise\" option will separate the trials, such that first trials from one condition will be shown, then another condition, and so on. The order of the conditions can be either based on the design in the trial editor (fixed by design), or it can be randomly reshuffled (unbalanced permutation).\n\n## # Factor Randomization\n\nThe Factor Randomization settings determine which trials will be seen by a subject. The most important setting here is whether a factor will be fixed or random. While \"fixed factors\" will generate more trials for a subject (all levels of a fixed factors will be shown), \"random factors\" will increase the \"trial variations\" from which only one version will be selected (only one of the levels of a random factor is shown). Depending how these trial variations are chosen (settings within random factor), random factors allow for within and / or between subject balancing. Overall, there are 5 possibilities for how the random factors select a level and thereby choose a certain trial variation. 
These 5 options are listed below:

* Unbalanced within task: For each trial, the level of the random factor is chosen randomly every time from all available options (levels).
* Balanced within task: For each trial, the level of the random factor is chosen such that each level will occur equally often within the whole task.
* Balanced within other Factors: This balances one factor within another or within several other factors. If, for instance, the factor "position" is balanced within the factor "image content", then for each level of the factor "image content" the factor position is balanced (e.g. level=left and level=right appear equally often in both cases image=face and image=house). A factor can also be balanced within several other factors; however, no circular dependencies may arise.
* Unbalanced between subjects: The level of the random factor will always be the same for one subject / execution of the task, but randomly vary between subjects / executions of the task.
* Balanced between subjects: The level of the random factor will always be the same for one subject / execution of the task, but systematically vary between subjects / executions of the task. As a result, when the factor has 3 levels, subject 1 will see level_1 of the factor, subject 2 level_2 of the factor, and subject 3 level_3 of the factor. Subject 4 will again see level_1 of the factor, and so on, repeating the pattern.

A further important concept is the "condition group". The condition groups are calculated by crossing all of the fixed factors within a trial group, excluding the random factors. As a result, one condition group consists of several conditions (and trials), which vary only in their levels of random factors. Often, the trials within the individual conditions of a condition group are almost identical (e.g. they show the same stimuli but vary in their position), and it is important to prevent two almost identical trials from being selected. To control this, in the Factor Randomization section, users can change how the trials are selected within condition groups. Here, two options are available:

* Balanced within each condition group: Trials which vary only by one or more random factors cannot be chosen more than once.
* Unbalanced within each condition group: There is no restriction on the trial selection process within condition groups.

We believe that for the vast majority of cases the "Balanced within each condition group" option is the best choice.

### Example

As the factor randomization (trial selection) process can be quite complex to understand, let us look at an example. In this example task, subjects will see 2 images: one which they have seen before (target), while the other one is unfamiliar (distractor). The task is to click on (identify) the target stimulus. Overall there are 4 factors: 1 fixed factor and 3 random factors:

1. Fixed Factor "Presentation Time / Difficulty": This factor has 3 levels: 2 seconds (hard), 4 seconds (medium), and 6 seconds (easy). Each condition should be presented 20 times.
2. Random Factor "Position of Correct": This factor has 2 levels: target image is left, and target image is right. Overall, there should be the same amount of "left correct answers" and "right correct answers" to avoid any bias.
3. Random Factor "Image Orientation": This factor has two levels: horizontal, and upsideDown (images are rotated 180°).
This factor should be balanced within the fixed factor \"Presentation Time\" to insure that for each difficulty level there are equally many trials rotated and normal.\n4. Random Factor \"Image Category\": This factor has 3 levels: houses, cars, and faces. This factor will be used to generalize the findings. Each subject should only see one category / type, but this factor should be balanced across subjects.\n\n#### # Settings\n\n• Presentation Time --> Fixed Factor As this is the only fixed factor, there are only 3 condition groups (easy, medium, hard). Each condition group has 12 sub-conditions, with each 20 trials inside. 60 trials will be presented in total to each subject.\n• Position of Correct --> Random Factor --> Balanced Within Task The factor \"Position of Correct\" will be set to \"balanced within task\". This way, the target image will appear equally often in the left and the right position in the overall task. However, this is not balanced within each condition / presentation time (difficulty level).\n• Image Orientation --> Random Factor --> Balanced within other factors --> \"Presentation Time\" The factor \"Image Orientation\" will be set to \"Balanced within other factors,\" then the factor \"Presentation Time\" is selected. This way, for each of the 3 levels of the factor \"Presentation Time,\" there will be an equal amount (10) of images that are presented normally and images that are presented upside down (10).\n• Image Category --> Random Factor --> Balanced between Subjects The factor \"Image Category\" will be set to \"Balanced between subjects. This way, each subject will only see one image category (houses, cars, OR faces), but the category selection will be balanced across subjects (Subject 1 will be houses, subject 2 cars, subject 3 faces, subject 4 houses, and so on).\n\n## # Resulting Condition Groups and Trials\n\nThe displayed information at the \"Resulting Condition-Groups & Trials\" menu is strongly dependent on the settings in the \"Factor Randomization\" menu. Mainly, this menu will show a table of the condition groups, together with some additional information. There are 2 additional settings which influence the trial selection process:\n\n• Nr trials to show per condition group: This option will influence how many trials are shown per condition group. This setting is only important if the number of trials should be different within one condition group (between conditions) than another. In general, the number of trials per condition group will be based on the condition which has the fewest number of trials defined. For instance, if there are 3 conditions in a condition group, and for 2 of them 20 trials are defined, while for the 3rd condition only 10 trials are defined, then only 10 trials will be shown. However, when 0 trials are defined, such conditions can be excluded from the calculations, such that when there are 2 conditions with 20 trials and 1 condition with 0 trials defined, there are still 20 trials shown. This option can be useful for nested designs.\n• Select trials in conditions by: This option will influence how trials are selected within one condition. This setting is only important if the number of trials should be different within one condition group (between conditions). For instance, if there are 20 trials defined in 2 conditions (of a condition group) but only 10 in the third one, only 10 trials are shown (as described above). 
However, for the 2 conditions which have 20 trials defined, one can choose whether only the first 10 trials can be selected, or whether the trial 11-20 could (as alternatives) also be selected. This option can be useful if you want to draw trials from conditions with different likelihoods.\n\nThe resulting table then shows for each condition group the following properties:\n\n• Factor-Group: The factor group where the factors of the condition group are defined.\n• Condition-Group: The condition group index.\n• #Conditions: The number of conditions inside the condition group.\n• #Shown Trials: The number of trials shown for the condition group.\n• #Trial Variations: The number of trials (variations) defined within the condition group.\n\nOn the bottom of this menu, you will see the total number of trials (variations) within the overall task.\n\n## # Trials per Subject\n\nBy default, the number of trials per subject is calculated automatically. However, by choosing the option \"Set Nr of Trials manually,\" users can set the number of trials per subject by hand. When the manual option is activated, this will also affect the internal functions of other randomization settings (i.e. Factor Randomization and Resulting Conditions-Groups & Trials).\n\n## # Simulation of trial sequence\n\nAs the name suggests, the \"simulation of trial sequence\" menu can be used to simulate a possible trial sequence for one subject. By clicking on the \"Refresh\" icon, a new possible sequence is calculated with the current settings taken into account. Here, one can also see the trial IDs and the condition number for each trial. On the bottom of this menu, you will see the total number of trials shown to each subject."
] | [
null,
"https://www.labvanced.com/content/learn/assets/random.ed066fed.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9149792,"math_prob":0.8801453,"size":12764,"snap":"2022-40-2023-06","text_gpt3_token_len":2609,"char_repetition_ratio":0.20054859,"word_repetition_ratio":0.0651341,"special_character_ratio":0.20855531,"punctuation_ratio":0.105034724,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9535841,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T21:34:14Z\",\"WARC-Record-ID\":\"<urn:uuid:8ec8fe26-c657-4030-8d22-95dd6a7686fe>\",\"Content-Length\":\"57614\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f7c5204-1cc1-4dcb-8806-b071377a3073>\",\"WARC-Concurrent-To\":\"<urn:uuid:edc00769-937b-4002-a171-d909624e5879>\",\"WARC-IP-Address\":\"172.67.141.225\",\"WARC-Target-URI\":\"https://www.labvanced.com/content/learn/guide/task-editor/trial-randomization.html\",\"WARC-Payload-Digest\":\"sha1:ZOBETK2YJF56ZMAI45A5STWEEOJHZMC4\",\"WARC-Block-Digest\":\"sha1:IHBDQXOEC2RJD4CBIY66SM4BUGSVCZKN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764495012.84_warc_CC-MAIN-20230127195946-20230127225946-00212.warc.gz\"}"} |
https://t4tutorials.com/foundation-engineering-mcqs/ | [
"# Foundation Engineering MCQs\n\nBy: Prof. Fazal Rehman Shamil\n\nSolved MCQs on Foundation Engineering\n\n## MCQs on Permeability of Stratified Soil Deposits\n\nThe soil deposit has three layers of a soil. Permeability of the second layer is 1/2 to first layer and the permeability of the third layer is 1/3 to first layer and Thickness of each layer is equal, then its average permeability parallel to the bedding plane is______\n(A). 11k\n(B). 18k\n(C). 1118k\n(D). 1811k\n(E). None of these\n\nEver layer of the stratified soil deposit has its own coefficient of the permeability.\n(A). True\n(B). False\n(E). None of these\n\nWhat would be the value of average permeability parallel to bedding planes, if the coefficient of permeability of the two soil layers 1 and 2 in soil deposit is 9*10-7 cm/s and 4*10-7 cm/s respectively and thickness of the layers is equal?\n(A). 4.5*10-7 cm/s\n(B). 5.5*10-7 cm/s\n(C). 6.5*10-7 cm/s\n(D). 7.5*10-7 cm/s\n(E). None of these\n\nSoil deposit has three layers of the soil. Permeability of the second layer is twice to the first layer and the permeability of the third layer is thrice to the first layer and Thickness of each layer is 5m, then its average permeability parallel to the bedding plane is____\n(A). k\n(B). 2k\n(C). 3k\n(D). 4k\n(E). None of these\n\nif a soil deposit has three layers of soil. The permeability of the second layer is twice to first layer and the permeability of the third layer is thrice to first layer and Thickness of each layer is 5m, then its average permeability perpendicular to the bedding plane is______\n(A). 11k\n(B). 18k\n(C). 1118k\n(D). 1811k\n(E). None of these\n\nOn what the average coefficient of the permeability of the whole deposit depends?\n(A). on the direction of flow with respect to the bedding plane\n(B). on the direction of flow but not with respect to the bedding plane\n(C). on the length of bedding plane\n(D). width of bedding plane\n(E). None of these\n\nA soil deposit has three layers of soil? The permeability of the second layer is 1/2 to first layer and the permeability of the third layer is 1/3 to first layer and thickness of each layer is equal, then its average permeability perpendicular to the bedding plane is_______\n(A). k\n(B). 1/2k\n(C). 1/3k\n(D). 1/4k\n(E). None of these\n\nWhat would be the value of average permeability perpendicular to bedding planes, if the coefficient of permeability of the two soil layers 1 and 2 in the soil deposit is 9*10-7 cm/s and 4*10-7 cm/s respectively and thickness of the layers is equal?\n(A). 4.5*10-7 cm/s\n(B). 5.5*10-7 cm/s\n(C). 6.5*10-7 cm/s\n(D). 7.5*10-7 cm/s\n(E). None of these\n\nElevation head is considered as negative if a point is situated above the datum and is positive if below the datum.\n(A). True\n(B). False\n(E). None of these\n\nWhat is the total head at any point may be regarded, with respect to the datum?\n(A). potential energy measure\n(B). the potential energy per unit weight of the water measure\n(C). unit weight of water measure\n(D). volume of water measure\n(E). None of these\n\nWhen the water flows by a saturated soil mass, the piezometric head is the sum of velocity head and the position head.\n(A). True\n(B). False\n(E). None of these\n\nWhat is the piezometric surface in the line joining?\n(A). water level in piezometers\n(B). soil stratum\n(C). equal voids ratio in soil mass\n(D). equal velocity of flow\n(E). None of these\n\nWhat is a piezometric head called?\n(E). None of these\n\nWhat is the hydraulic potential at any point?\n(A). 
h=hw±Z\n(B). h=hw*Z\n(C). h=hw/Z\n(D). h=Z/hw\n(E). None of these\n\nThe total head is the sum of what?\n(E). None of these\n\nWhat is the difference between the elevation of water surfaces in piezometers?\n(B). velocity\n(D). depth or length of the sample\n(E). None of these\n\nWhat is the hydraulic gradient equal to?\n(C). hydraulic head per unit distance\n(E). None of these\n\nWhen the flow occurs between two points?\n(A). only if there is difference in total heads\n(B). only if there is no difference in total heads\n(C). only if the total heads are equal\n(D). only if the total heads are equal to zero\n(E). None of these"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.80464655,"math_prob":0.9944793,"size":5153,"snap":"2021-31-2021-39","text_gpt3_token_len":1364,"char_repetition_ratio":0.17051855,"word_repetition_ratio":0.45955056,"special_character_ratio":0.27032796,"punctuation_ratio":0.13138686,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9979367,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-30T03:55:47Z\",\"WARC-Record-ID\":\"<urn:uuid:32dbf25c-6795-4388-bd06-e0ae987c2bfc>\",\"Content-Length\":\"78602\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fb5e1ebf-9c3d-420c-b6ce-0faf70a3b2ae>\",\"WARC-Concurrent-To\":\"<urn:uuid:b7a15ea4-edb3-4e6e-a6e2-916837ff2e35>\",\"WARC-IP-Address\":\"172.67.171.43\",\"WARC-Target-URI\":\"https://t4tutorials.com/foundation-engineering-mcqs/\",\"WARC-Payload-Digest\":\"sha1:D25MBBMXWJL55RM77F4MYNSZZVE2EWAL\",\"WARC-Block-Digest\":\"sha1:3CP5HQQDXTQYBZ4ZY7HJT5RY3GMYCXEG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153931.11_warc_CC-MAIN-20210730025356-20210730055356-00138.warc.gz\"}"} |
http://desafiopetrobras.wiperagency.com/index.php/ebooks/analytic-hyperbolic-geometry-in-n-dimensions-an-introduction | [
"# Download PDF by Abraham Albert Ungar: Analytic Hyperbolic Geometry in N Dimensions : An",
null,
"By Abraham Albert Ungar\n\nISBN-10: 1482236680\n\nISBN-13: 9781482236682\n\nThis publication introduces for the 1st time the hyperbolic simplex as a big proposal in n-dimensional hyperbolic geometry. The extension of universal Euclidean geometry to N dimensions, with N being any optimistic integer, leads to better generality and succinctness in similar expressions. utilizing new mathematical instruments, the booklet demonstrates that this can be additionally the case with analytic hyperbolic geometry. for instance, the writer analytically determines the hyperbolic circumcenter and circumradius of any hyperbolic simplex.\n\nBest popular & elementary books\n\nMathematical algorithms are crucial for all meeting language and embedded method engineers who advance software program for microprocessors. This publication describes recommendations for constructing mathematical workouts - from basic multibyte multiplication to discovering roots to a Taylor sequence. All resource code is on the market on disk in MS/PC-DOS structure.\n\nDownload PDF by J. Coates, R. Greenberg, K.A. Ribet, K. Rubin, C. Viola: Arithmetic theory of elliptic curves: lectures given at the\n\nThis quantity comprises the increased types of the lectures given via the authors on the C. I. M. E. educational convention held in Cetraro, Italy, from July 12 to 19, 1997. The papers accrued listed below are huge surveys of the present examine within the mathematics of elliptic curves, and likewise include numerous new effects which can't be came across in other places within the literature.\n\nDownload e-book for iPad: Hilbert's Tenth Problem: Relations With Arithmetic and by Jan Denef, Leonard Lipshitz, Thanases Pheidas, Jan Van Geel\n\nThis e-book is the results of a gathering that came about on the college of Ghent (Belgium) at the family members among Hilbert's 10th challenge, mathematics, and algebraic geometry. incorporated are written articles detailing the lectures that got in addition to contributed papers on present issues of curiosity.\n\nDownload e-book for kindle: Precalculus A Prelude to Calculus, by Sheldon Axler\n\nSheldon Axler's Precalculus focuses basically on themes that scholars really need to reach calculus. due to this, Precalculus is a truly conceivable measurement although it incorporates a scholar strategies manual. The booklet is geared in the direction of classes with intermediate algebra must haves and it doesn't think that scholars bear in mind any trigonometry.\n\nAdditional info for Analytic Hyperbolic Geometry in N Dimensions : An Introduction\n\nSample text\n\nA2N ⎟ ⎟ ⎟ .. ⎟ . ⎠ ... 23) along with its Cayley–Menger determinant, Det MN, where aij2 = || − Ai + Aj||2. Here we use the notation illustrated in Fig. 3. Analogously, in the study of higher dimensional gyrosimplices it proves useful to assign to each (N − 1)-gyrosimplex A1 . . 40), p. 378, ⎞ ⎛ 1 γ12 γ13 . . γ1N ⎟ ⎜ ⎜ γ12 1 γ23 . . γ2N ⎟ ⎟ ⎜ ΓN = ⎜ . 24) ⎟ ⎜ .. ⎝ ⎠ γ1N γ2N γ3N . . 1 along with its gamma determinant, Det ΓN, where γij = γaij = γ || Ai⊕Aj||. Here we use the notation illustrated in Fig.\n\n63) we say that the gyration axis in Rn of the gyration gyr[u, v] : Rn → Rn, generated by u, v ∈ Rns, 38 Analytic Hyperbolic Geometry in N Dimensions is parallel to the vector z. 65) x 0, for any coefficients cu, cv ∈ R, excluding cu = cv = 0. 65). Moreover, we have the following result. 7 (Gyration–Thomas Precession Angle). Let u, v, x ∈ Rns be relativistically admissible velocities such that u −v (so that u⊕v 0). 66) Proof. 22), pp. 53). 
31), p. 29, coincide. Special attention to three dimensional gyrations, which are of interest in physical applications, is paid in Chapter 13 in the study of Thomas precession.\n\nThe Einstein gyroparallelogram law of gyrovector addition. Let A, B, C ∈ Rns be any three points of an Einstein gyrovector space (Rns, ⊕, ⊗), giving rise to the two gyrovectors u = A⊕B and v = A⊕C. Furthermore, let D be a point of the gyrovector space such that ABDC is a gyroparallelogram, that is, D = (B ⊞ C) A by Def. 2, p. 174, of the gyroparallelogram. Then, Einstein coaddition of gyrovectors u and v, u ⊞ v = w, expresses the gyroparallelogram law, where w = A⊕D. Einstein coaddition, ⊞, thus gives rise to the gyroparallelogram addition law of Einsteinian velocities, which is commutative and fully analogous to the parallelogram addition law of Newtonian velocities."
] | [
null,
"https://images-na.ssl-images-amazon.com/images/I/41faIAnOBML._SX298_BO1,204,203,200_.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8466442,"math_prob":0.90243375,"size":4950,"snap":"2020-34-2020-40","text_gpt3_token_len":1292,"char_repetition_ratio":0.09684594,"word_repetition_ratio":0.030940594,"special_character_ratio":0.230101,"punctuation_ratio":0.15452538,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98868036,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-26T09:50:15Z\",\"WARC-Record-ID\":\"<urn:uuid:3d9a21cf-4561-4a48-be43-a9251533eeff>\",\"Content-Length\":\"40773\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9d8a8f69-ba26-4928-84df-acdd2296de23>\",\"WARC-Concurrent-To\":\"<urn:uuid:d7b2bd3f-6319-41d1-9e21-0517aa72e2d0>\",\"WARC-IP-Address\":\"104.131.41.153\",\"WARC-Target-URI\":\"http://desafiopetrobras.wiperagency.com/index.php/ebooks/analytic-hyperbolic-geometry-in-n-dimensions-an-introduction\",\"WARC-Payload-Digest\":\"sha1:KM6TMDTACHG5PUH2ZAXQD4GOFKMMNPKY\",\"WARC-Block-Digest\":\"sha1:ROVC5JD2WB47QIPRDGUC5DVWEO2NE6YT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400238038.76_warc_CC-MAIN-20200926071311-20200926101311-00597.warc.gz\"}"} |
https://qhgaechwyg.firebaseapp.com/advanced-engineering-mathematics-2nd-edition-gaechwyg.html | [
"## Advanced Engineering Mathematics (2nd Edition)\n\n[PDF.og34] Advanced Engineering Mathematics (2nd Edition)\n\nAdvanced Engineering Mathematics (2nd Michael Greenberg epub\nAdvanced Engineering Mathematics (2nd Michael Greenberg pdf file\nAdvanced Engineering Mathematics (2nd Michael Greenberg audiobook\nAdvanced Engineering Mathematics (2nd Michael Greenberg book review\nAdvanced Engineering Mathematics (2nd Michael Greenberg summary\n\n| #80677 in Books | 1998-01-18 | Ingredients: Example Ingredients | Original language:English | PDF # 1 | 10.10 x2.00 x8.60l,6.51 | File type: PDF | 1324 pages\nORDINARY DIFFERENTIAL EQUATIONS.\n\n1. Introduction to Differential Equations.\n\n2. Equations of First Order.\n\n3. Linear Differential Equations of Second Order and Higher.\n\n4. Power Series Solutions.\n\n5. Laplace Transform.\nFrom the Back Cover|| This clear, pedagogically rich book develops a strong understanding of the mathematical principles and practices that today's engineers need to know. Equally as effective as either a textbook or reference manual, it approaches mathematical\n\nThis clear, pedagogically rich book develops a strong understanding of the mathematical principles and practices that today's engineers need to know. Equally as effective as either a textbook or reference manual, it approaches mathematical concepts from an engineering perspective, making physical applications more vivid and substantial. Its comprehensive instructional framework supports a conversational, down-to-earth narrative style, offering easy ...\n\nYou can specify the type of files you want, for your device.Advanced Engineering Mathematics (2nd Edition) | Michael Greenberg. Just read it with an open mind because none of us really know.\n\nCopyright Disclaimer:This site does not store any files on its server. We only index and link to content provided by other sites."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.68698096,"math_prob":0.48290765,"size":8948,"snap":"2023-14-2023-23","text_gpt3_token_len":1997,"char_repetition_ratio":0.17844366,"word_repetition_ratio":0.092354275,"special_character_ratio":0.21502012,"punctuation_ratio":0.113235295,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9508483,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-04T17:14:13Z\",\"WARC-Record-ID\":\"<urn:uuid:44d35a7e-0229-4879-92a7-2ca5ed27c1ed>\",\"Content-Length\":\"27980\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:87ee39ef-57ee-4fef-9c7c-f57a7f48811d>\",\"WARC-Concurrent-To\":\"<urn:uuid:a39cd1c1-d01f-4a35-a65b-d1ab6b401e22>\",\"WARC-IP-Address\":\"199.36.158.100\",\"WARC-Target-URI\":\"https://qhgaechwyg.firebaseapp.com/advanced-engineering-mathematics-2nd-edition-gaechwyg.html\",\"WARC-Payload-Digest\":\"sha1:DVENMIX2HOK7DTJD6EEB52YR5R5U4JTK\",\"WARC-Block-Digest\":\"sha1:CYHYMM3445NGLU77F74PCQMJAYCHMETV\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224650201.19_warc_CC-MAIN-20230604161111-20230604191111-00744.warc.gz\"}"} |
https://breathmath.com/2016/09/27/sets-exercise-1-4-5-chapter-sets-class-9/ | [
"# SETS EXERCISE 1.4.5 – Chapter Sets – Class 9\n\n1. If A’ = {1, 2, 3, 4}, U = {1, 2, 3, 4, 5, 6, 7, 8}, find A in U and draw Venn diagram\n\nSolution:\n\nA’ = {5, 6, 7, 8}\n\n1. If U = {x/x € 25, x€N}. A = {x/x € U, x ≤ 15} and B = {x/x € U, 0 < x ≤ 25}, list the elements of the following sets and draw Venn diagram:\n\n(i) A’ in U:\n\n(ii) B’ in U\n\n(iii) AB;\n\n(iv) A Δ B\n\nSolution:\n\nU = {1, 2, 3, 4 ……….25}\n\nA = {1, 2, 3, 4……….15}\n\nB = {1, 2, 3 ….25}\n\n(i) A’ = {16, 17, 18, 19….25}",
null,
"(ii) B’ = { }",
null,
"(iii) AB = { }",
null,
"(iv) A Δ B = A B U B A\n\n= { } U {16, 17, 18… 25}\n\n= {16, 17, 18 …..25}",
null,
"1. Let A and B subsets of a set U. Identify the wrong statements:\n\n(i) (A’)’ = A\n\n(ii) A B = B A\n\n(iii) A U A’ = U\n\n(iv) A Δ B = B Δ A\n\n(v) (A B)’ = A’ B’\n\nSolution:\n\nIf U = {1, 2, 3, 4, 5, 6, 7, 8, 9}\n\nA = {1, 3, 5, 7, 9} and B = {2, 4, 6, 8, 9}\n\n(i) (A’)’ = {2, 4, 6, 8}\n\n(A’)’ = {1, 3, 5, 7, 9} = A\n\n(A’)’ = A\n\n(ii) A B = {1, 3, 5, 7}\n\nB A = {2, 4, 6, 8}\n\nWe see that A B ≠ B A\n\n(iii) A U A’ = {1, 3, 5, 7, 9} U {2, 4, 6, 8}\n\n= {1, 2, 3, 4, 5, 6, 7, 8, 9}\n\n= U\n\nA U A’ = U\n\n(iv) A Δ B =(A B) U (B A)\n\n= {1, 3, 5, 7} U {2, 4, 6, 8}\n\n= {1, 2, 3, 4, 5, 6, 7, 8}\n\nB Δ A = (B A) U (A B)\n\n= {2, 4, 6, 8} U {1, 3, 5, 7}\n\n= {1, 2, 3, 4, 5, 6, 7, 8}\n\nA Δ B = B Δ A\n\n(v) A B = {1, 3, 5, 7}\n\n(A B)’ = {2, 4, 6, 8, 9}\n\nA’ = {2, 4, 6, 8} and B’ = {1, 3, 5, 7}\n\nA’ B’ = {2, 4, 6, 8}\n\nHence (A B)’ ≠ A’ B’\n\n1. Suppose U = {3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13}. A = {3, 4, 5, 6, 9}, B = {3, 7, 9, 5} and C = {6, 8, 10, 12, 7}. Write down the following sets and draw Venn diagram for each:\n\n(i) A’\n\n(ii) B’\n\n(iii) C’\n\n(iv) (A’)’\n\n(v) (B’)’\n\n(iv) (C’)’\n\nSolution:\n\n(i) A’ = {7, 8, 10, 11, 12, 13}",
null,
"(ii) B’ = {4, 6, 8, 10, 11, 12, 13}",
null,
"(iii) C’ = {3, 4, 5, 9, 11, 13}",
null,
"(iv) A’ = {7, 8, 10, 11, 12, 13}\n\n(A’)’ = {3, 4, 5, 6, 9} = A",
null,
"(v) (B’)’ = B’ = {4, 6, 8, 10, 11, 12, 13}\n\n(B’)’ = {3, 7, 9, 5} = B",
null,
"(vi) (C’)’ = {3, 4, 5, 9, 11, 13}\n(C’)’ = {6, 8, 10, 12, 7} = C",
null,
"5. Suppose U = {1, 2, 3, 4, 5, 6, 7, 8, 9}, A = {1, 2, 3, 4} and B = {2, 4, 6, 8}, write down the following sets and draw Venn diagram.\n(i) A’\n\n(ii) B’\n\n(iii) A U B\n\n(iv) A ∩ B (v) (A U B)’ (vi) (A ∩B)’\nHow (A UB)’ is related to A’ and B’? What relation you see between\n(A ∩ B)’ and A’ and B’\nSolution:\n\n(i) A’ = {5, 6, 7, 8, 9}",
null,
"(ii) B’ = {1, 3, 5, 7, 9}",
null,
"(iii) A U B = {1, 2, 3, 4, 6, 8}",
null,
"(iv) A ∩ B = {2, 4}",
null,
"(v) (A UB)’\n(A U B) = {1, 2, 3, 4, 6, 8}\n(A U B)’ = {5, 7, 9}",
null,
"vi) (A ∩ B)’\n(A ∩ B) = {2, 4}\n(A ∩ B)’ = {1, 3, 5, 6, 7, 8}",
null,
"We see that (A U B)’ = A’ ∩ B’\n(A ∩ B)’ = A’ U B’\n\n6. Find (A B) and (B A) for the following sets and draw Venn diagram.\n(i) A = {a, b, c, d, e, f, g, h} and\nB = {a, e, i, o, u}\n(ii) A = {1, 2, 3, 4, 5, 6} and\nB = {2, 3, 5, 7, 9}\n(iii) A = {1, 4, 9, 16, 25} and\nB = {1, 2, 3, 4, 5, 6, 7, 8, 9}\n(iv) A = {x | x is a prime number less than 5} and\nB = {x | x is a square number less than 16}\nSolution:\n(i) A = { a, b, c, d, e, f, g, h}\nB = {a, e, i, o, u}\nA B = {b, c, d, f, g, h}",
null,
"B A = {i, o, u}",
null,
"(ii) A = {1, 2, 3, 4, 5, 6} and\nB = {2, 3, 5, 7, 9}\nA B = {1, 4, 6}",
null,
"B A = {7, 9}",
null,
"(iii) A = {1, 4, 9, 16, 25} and\nB = {1, 2, 3, 4, 5, 6, 7, 8, 9}\nA B = {16, 25}",
null,
"B A = {2, 3, 5, 6, 7, 8}",
null,
"(iv) A = {x | x is a prime number less than 5}\n= {2, 3}\nB = {x | x is a square number less than 16}\n= {1, 4, 9}\nA B = {2, 3}",
null,
"B A = {1, 4, 9}",
null,
"7. Looking at the Venn diagram list the elements of the following sets:\n(i) A B\n(ii) B A\n(iii) A C\n(iv) C A\n(v) B C\n(vi) C B",
null,
"Solution:\n(i) A B = {1, 2, 7}\n(ii) B A = {5, 6}\n(iii) A C = {1, 2, 3}\n(iv) C A = {6, 8, 9}\n(v) B C = {5, 3}\n(vi) C B = {7, 8, 9}\n\n8. Find A Δ B and draw Venn diagram when:\n(i) A = {a, b, c, d} and B = {d, e, f}\n(ii) A = {1, 2, 3, 4, 5} and B = {2, 4}\n(iii) A ={1, 2, 3, 4, 5} and B = {1, 2, 3, 4, 5, 6}\n(iv) A = {1, 4, 7, 8} and B = {4, 8, 6, 9}\n(v) A = {a, b, c, d, e} and B = {1, 3, 5, 7}\n(vi) A = {1, 2, 3, 4, 5} and B = {1, 3, 5, 7}\nAns:\n(i) A = {a, b, c, d} B = {d, e, f}\nA B = {a, b, c}\nB A = {e, f}\nA Δ B = {a, b, c, e, f}",
null,
"(ii) A = {1, 2, 3, 4, 5} B = {2, 4}\nA B = {1, 3, 5}\nB A = { }\nA Δ B = {1, 3, 5}",
null,
"(iii) A ={1, 2, 3, 4, 5} ; B = {1, 2, 3, 4, 5, 6}\nA B = {.}\nB A = {6}\nA Δ B = {6}",
null,
"(iv) A = {1, 4, 7, 8}; B = {4, 8, 6, 9}\nA B = {1, 7]\nB A = {6, 9}\nA Δ B = {1, 6, 7, 9}",
null,
"(v) A = {a, b, c, d, e} and B = {1, 3, 5, 7}\nA B = {b, d}\nB A = {g}\nA Δ B = {b, d, g}",
null,
"(vi) A = {1, 2, 3, 4, 5} and B = {1, 3, 5, 7}\nA B = {2, 4}\nB A = {7}\nA Δ B = {2, 4, 7}",
null,
""
] | [
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-4-class-911.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-4-class-912.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-4-class-913.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-4-class-914.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-4-class-915.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-4-class-916.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-4-class-917.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-4-class-918.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-4-class-919.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-91.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-9.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-911.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-92.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-93.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-94.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-95.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-96.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-97.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-98.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-99.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-910.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-912.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-913.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-914.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-915.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-916.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-918.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-917.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-919.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-920.png",
null,
"https://www.breathmath.com/wp-content/uploads/2016/09/sets-exercise-1-4-5-e28093-class-921.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.65269405,"math_prob":1.0000077,"size":4312,"snap":"2020-34-2020-40","text_gpt3_token_len":2683,"char_repetition_ratio":0.23584029,"word_repetition_ratio":0.30810398,"special_character_ratio":0.70709646,"punctuation_ratio":0.2889306,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992009,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62],"im_url_duplicate_count":[null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-05T16:51:14Z\",\"WARC-Record-ID\":\"<urn:uuid:69e24cd6-10b5-4177-b8fa-4af0eb2dc787>\",\"Content-Length\":\"112868\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0af855ca-ad7f-427d-b0ae-8911701c573d>\",\"WARC-Concurrent-To\":\"<urn:uuid:37a55ef2-0068-4593-b823-8b7a229b18a7>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://breathmath.com/2016/09/27/sets-exercise-1-4-5-chapter-sets-class-9/\",\"WARC-Payload-Digest\":\"sha1:X4W375GTNKIZULMDGG6C5OYUBFWBRAFY\",\"WARC-Block-Digest\":\"sha1:6DC52YDNJGXRNJ74ESJ6UPQI3MW4B4R3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735963.64_warc_CC-MAIN-20200805153603-20200805183603-00285.warc.gz\"}"} |
https://winder.ai/scikit-learn-to-pandas-data-types-shouldnt-be-this-hard/ | [
"# Scikit Learn to Pandas: Data Types Shouldn't Be This Hard\n\nA little bit of research turned into a rant asking why is it so difficult to use two of the most popular Data Science libraries together? And a little bit about Scikit Pandas.\n\nMACHINE LEARNING\n\nNearly everyone using Python for Data Science has used or is using the Pandas Data Analysis/Preprocessing library. It is as much of a mainstay as Scikit-Learn. Despite this, one continuing bugbear is the different core data types used by each: `pandas.DataFrame` and `np.array`. Wouldn’t it be great if we didn’t have to worry about converting `DataFrame`s to `numpy` types and back again? Yes, it would. Step forward Scikit Pandas.\n\n## Sklearn Pandas\n\nSklearn Pandas, part of the Scikit Contrib package, adds some syntactic sugar to use Dataframes in sklearn pipelines and back again.\n\nLet’s take the the example in the README. This gives us some simple data that contains categorical and numeric data:\n\n``````data = pd.DataFrame({'pet': ['cat', 'dog', 'dog', 'fish', 'cat', 'dog', 'cat', 'fish'],\n'children': [4., 6, 3, 3, 2, 3, 5, 4],\n'salary': [90., 24, 44, 27, 32, 59, 36, 27]})\ndata['pet'] = data['pet'].astype(\"category\")\n``````\n\nNow we can use the library to create a map that allows us to use our Pandas array with sklearn:\n\n``````mapper = DataFrameMapper([\n('pet', preprocessing.LabelBinarizer()),\n(['children'], preprocessing.StandardScaler())\n])\nmapper.fit_transform(data.copy())\n``````\n\nWe’re using the new class `DataFrameMapper` which we will use to map an input, `data`, to whatever is the output of the sklearn functions declared in the array. Note that the class conforms to the standard sklearn `fit`/`transform` api. When we run run this we get:\n\n``````array([[ 1. , 0. , 0. , 0.20851441],\n[ 0. , 1. , 0. , 1.87662973],\n[ 0. , 1. , 0. , -0.62554324],\n[ 0. , 0. , 1. , -0.62554324],\n[ 1. , 0. , 0. , -1.4596009 ],\n[ 0. , 1. , 0. , -0.62554324],\n[ 1. , 0. , 0. , 1.04257207],\n[ 0. , 0. , 1. , 0.20851441]])\n``````\n\nThe first thing to note is that the output is a `numpy` one. This was a little surprising, since it is supposed to be a library that can map back and forth from Pandas.\n\nThe second thing to notice is that the new `DataFrameMapper` looks very similar to sklearn’s `pipeline.Pipeline` class. In fact, I would go so far as saying that this is duplicating the functionality of the `Pipeline` class.\n\nAlso, and this is a gripe with the `Pipeline` class too, but I don’t like the use of a named tuple. It would have been much cleaner to treat this like what it really is; a functional pipeline. Passing in a lambda to map data via an sklearn class/function would make it much cleaner and far more reusable.\n\n## Scikit-learn’s `Pipeline` is All You Need\n\nThese ideas aren’t just mine. John Ramey presents a simple Adapter class that slects the right datatype for the operation (Ramey, 2018). Tom de Ruijter developed the same idea too (Ruijter, 2017).\n\nEssentially what they do is create a class that filters for specific features (see how we’re still using functional language here). 
In the example below we filter for a data `type`, but we could have easily filtered upon different parameters, like the name of the feature.\n\n``````from sklearn.base import BaseEstimator, TransformerMixin\nclass TypeSelector(BaseEstimator, TransformerMixin):\ndef __init__(self, dtype):\nself.dtype = dtype\ndef fit(self, X, y=None):\nreturn self\ndef transform(self, X):\nassert isinstance(X, pd.DataFrame)\nreturn X.select_dtypes(include=[self.dtype])\n``````\n\nWe can use this filter in front of the mappers to ensure we have the right type. For a feature that is catagorical, for example, we can now create a standard sklearn pipeline like this:\n\n``````pipeline.make_pipeline(\nTypeSelector(\"category\"),\npreprocessing.OneHotEncoder()\n)\n``````\n\nAll we need to do now is repeat this pattern for each data `type` or feature and then merge them back together again. Here it is in action:\n\n``````pipe = pipeline.make_union(\npipeline.make_pipeline(\nTypeSelector(\"category\"),\npreprocessing.OneHotEncoder()\n),\npipeline.make_pipeline(\nTypeSelector(np.number),\npreprocessing.StandardScaler()\n)\n)\npipe.fit_transform(data.copy()).toarray()\n``````\n``````array([[ 1. , 0. , 0. , 0.20851441, 2.27500192],\n[ 0. , 1. , 0. , 1.87662973, -0.87775665],\n[ 0. , 1. , 0. , -0.62554324, 0.07762474],\n[ 0. , 0. , 1. , -0.62554324, -0.73444944],\n[ 1. , 0. , 0. , -1.4596009 , -0.49560409],\n[ 0. , 1. , 0. , -0.62554324, 0.79416078],\n[ 1. , 0. , 0. , 1.04257207, -0.30452782],\n[ 0. , 0. , 1. , 0.20851441, -0.73444944]])\n``````\n\nThere we have it. Almost the same functionality as the library, with fewer lines of code using standard methods. The only thing that we haven’t done that the library does is maintain the feature metadata at the end of the pipeline. The result of the code above is a `numpy` array.\n\n## Conclusion: Extra Complexity You Don’t Need\n\nThe scikit pandas library also has some helper wrapper methods that override the `sklearn` implementation, like a wrapper for cross validation and a vectorised function mapper . Again, I think these are superfluous. You can do this very easily with standard `numpy` methods or a bit of functional python.\n\nConsidering how simple it should be, I’m also worried about the cyclomatic complexity of the library. The `_transform` method has a complexity of 18 (21 is considered high - Subandri and Sarno, 2017).\n\nI wouldn’t recommend the use of this library as it currently stands. I think it would be wise to utilise `sklearn`s `Pipeline` or a functional library with some wrapper methods/classes.\n\nBut this leads me the question, considering that these two libraries are some of the most popular Data Science libraries in the world, why is there such poor integration?"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7844759,"math_prob":0.9385476,"size":5898,"snap":"2021-43-2021-49","text_gpt3_token_len":1640,"char_repetition_ratio":0.12860537,"word_repetition_ratio":0.06430868,"special_character_ratio":0.31264836,"punctuation_ratio":0.24351925,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97264624,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-17T19:46:19Z\",\"WARC-Record-ID\":\"<urn:uuid:b7d8ae3f-6bf5-47f7-a279-eceb8ff1fd2b>\",\"Content-Length\":\"64394\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b1abf3f1-11db-4ade-b3b0-8570eb8ba005>\",\"WARC-Concurrent-To\":\"<urn:uuid:596793af-b773-43f7-9f78-fd2bb3cb2843>\",\"WARC-IP-Address\":\"99.84.191.44\",\"WARC-Target-URI\":\"https://winder.ai/scikit-learn-to-pandas-data-types-shouldnt-be-this-hard/\",\"WARC-Payload-Digest\":\"sha1:FP5UG27BVFNLDDLQWD72EKMBCUT4GL5B\",\"WARC-Block-Digest\":\"sha1:YGBTBJVND43HH55X3VMYGBIT5B3OP6DO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585181.6_warc_CC-MAIN-20211017175237-20211017205237-00533.warc.gz\"}"} |
http://blogs.stonesteps.ca/showpost.aspx?pid=38 | [
"Andre's Blog\nPerfection is when there is nothing left to take away\n\nSome STL containers, such as std::map, are designed to use std::pair to maintain keys and values. A special convenience function, std::make_pair, is often used to create pairs without having to specify template argument types. Typical code that makes use of an STL map looks like this:\n\n```std::map<std::string, std::string> m;\t\t// map<key,value>\nm.insert(std::make_pair(\"1\", \"one\"));\nm.insert(std::make_pair(\"2\", \"two\"));\nprintf(\"%s\", m.find(\"1\")->second.c_str());\t// prints \"one\"\n```\n\nWithout using std::make_pair, which is declared as follows:\n\n`template <class T1, class T2> pair<T1, T2> make_pair(T1 x, T2 y);`\n\n, an instance of std::pair would have to be constructed with a template argument list:\n\n```m.insert(std::pair<std::string, std::string>(\"1\", \"one\"));\n```\n\nThose who are familiar with STL will notice that even though the map in this example is defined to use STL strings as keys and values, all calls above pass C-strings, which means that std::string temporaries would be constructed as necessary. Naturally, one has to wonder if using STL strings instead would result in faster code. Let's see if that's the case.\n\n## Character Pointers\n\nAt the first glance, using character pointers to create a pair seems not very efficient because key and the value strings would have to be traversed in order to determine their length, in addition to the cost of creating a pair within the map. That is, given this source:\n\n```std::string key = \"1\";\nstd::string value = \"one\";\nm.insert(std::make_pair(key.c_str(), value.c_str()));\n```\n\n, the compiler will generate code to create one temporary pair containing key and value string pointers (p1), one temporary pair containing copies of the original key and value as STL strings (p2) and one more copy of the latter within a map node:\n\n```p1 = std::make_pair<char*,char*>(key.c_str(), value.c_str());\n└► return std::pair<char*,char*>(key.c_str(), value.c_str());\n└► p1.first(key.c_str()), p1.second(value.c_str());\np2 = std::pair<std::string,std::string>(p1);\n└► p2.first(p1.first), p2.second(p1.second);\nm.insert(p2);\n└► std::pair<std::string,std::string>(p2);\nstd::pair::~pair(p2);\nstd::pair::~pair(p1);```\n\nOverall, three instances of std::pair and four instances of std::string were constructed and one std::pair and two instances of std::string were kept in the map. In this case, calling std::make_pair cost us two instances of std::pair and two instances of std::string (p2's data members).\n\n## STL strings\n\nUsing instances of std::string to construct STL pairs seems to be most popular. 
Probably because the resulting code is easier to read:\n\n```std::string key = \"2\";\nstd::string value = \"two\";\nm.insert(std::make_pair(key, value));```\n\nIn this case, the compiler will generate code to create two temporary STL strings, which will be used as key and value parameters (k1 and v1), one temporary pair containing copies of k1 and v1 (p1) and one std::pair containing a copy of p1 within a map node:\n\n```std::string k1(key);\nstd::string v1(value);\np1 = std::make_pair<std::string,std::string>(k1, v1);\n└► return std::pair<std::string,std::string>(k1, v1);\n└► first(k1), second(v1);\nm.insert(p1);\n└► std::pair<std::string,std::string>(p1);\nstd::pair::~pair(p1);\nstd::string::~string(v1);\nstd::string::~string(k1);\n```\n\nOverall, two instances of std::pair and six instances of std::string were constructed and only one pair and two STL strings were kept in the map. This method turns out to be the most expensive, constructing one std::pair and four instances of std::string (p1's data members) just to make code more readable.\n\n## STL pair\n\nCreating an instance of std::pair explicitly is probably the most straightforward method, which surprisingly isn't used very much. Probably because the result doesn't look as compact as in two other cases above. Consider this source (note the use of const):\n\n```std::string key = \"3\";\nstd::string value = \"three\";\nm.insert(std::pair<const std::string, std::string>(key, value));\n```\n\nIn this case, only one instance of std::pair is created and passed to the map::insert method:\n\n```p1 = std::pair<const std::string,std::string>(key, value);\n└► first(key), second(value);\nm.insert(p1);\nstd::pair::~pair(p1);```\n\nOverall, two instances of std::pair and four instances of std::string were created. The overhead of one std::pair and two STL strings (p1's data members) make this method the least expensive of all three.\n\n## Putting it to the test\n\nLet's put these conclusions to a test. A simple loop below inserts the same key into a map 5 million times. Each time, except the first one, the map rejects the new key, which allows us to time just the construction of the input parameters.\n\n```key = \"1\";\nvalue = \"one\";\nstime = GetTickCount();\nfor(i = 0; i < 5000000; i++)\nm.insert(...);\nprintf(\"%d\\n\", GetTickCount()-stime);\n```\n\nTwo columns below show loop running time, in milliseconds, for short and long keys and values. That is, strings shorter than 16 characters are placed in the std::string's internal buffer, while longer strings are placed in dynamically allocated memory, which dramatically affects the performance of this loop.\n\n``` short key long key\nchar* : 1141 2531\nSTL string : 2625 6687\nSTL pair : 1078 2350\n```\n\n## Conclusion\n\nOf course, authors writing desktop applications may consider these expenses as negligible. After all, a few extra microseconds won't visibly affect experience of a user who's trying to browse a list of fonts. 
However, these microseconds add up in high-performance utilities and servers, as well as in embedded applications and games and sometimes one has to choose between code readability and its speed.\n\nPosted Mon May 9 11:45:46 EDT 2011 by goldfishka\nCool:)\nPosted Sat Feb 22 23:00:11 EST 2014 by lolohammer\n\nyou should maybe look a bit more, there are ways to remove the extra copy in c++ even with make_pair with optimization and also the move constructor.\n\nPosted Wed Oct 8 07:07:24 EDT 2014 by Brolock\n\nHi,\nas lolohammer said, now, with C++11 you can do pretty good things:\n\nm.insert(make_pair(\"key\", \"value\"));\n\nOr\n\nauto p = make_pair(\"key\", \"value\");\n\nm.insert(std::move(p));\n\nOf course the idea of your exemple is to show that we shouldn't declare a variable if we just want to pass it to a container / wrapper afterward (which should be by move and not copy construct if you don't plan to reuse your variable afterward).\n\nName:\n\nComment:"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7891833,"math_prob":0.8341975,"size":6095,"snap":"2019-51-2020-05","text_gpt3_token_len":1496,"char_repetition_ratio":0.17616154,"word_repetition_ratio":0.04171364,"special_character_ratio":0.2680886,"punctuation_ratio":0.2516129,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9844798,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-22T19:17:28Z\",\"WARC-Record-ID\":\"<urn:uuid:ab373646-7aeb-4f13-bebc-ca666d318085>\",\"Content-Length\":\"16255\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c0ab919b-9957-4c52-b415-e401cd999dc8>\",\"WARC-Concurrent-To\":\"<urn:uuid:e95cab8e-4e30-4a73-9451-44fe166713c0>\",\"WARC-IP-Address\":\"50.112.132.239\",\"WARC-Target-URI\":\"http://blogs.stonesteps.ca/showpost.aspx?pid=38\",\"WARC-Payload-Digest\":\"sha1:2GSSOIKM6HRPFA64MKAZ4ADGK5CWFUF7\",\"WARC-Block-Digest\":\"sha1:W3YB6E2ERPNXQQV25EFUX4KDPAEBZR6Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250607407.48_warc_CC-MAIN-20200122191620-20200122220620-00366.warc.gz\"}"} |
https://deepai.org/publication/bandit-regret-scaling-with-the-effective-loss-range | [
"",
null,
"",
null,
"",
null,
"",
null,
"# Bandit Regret Scaling with the Effective Loss Range\n\nWe study how the regret guarantees of nonstochastic multi-armed bandits can be improved, if the effective range of the losses in each round is small (e.g. the maximal difference between two losses in a given round). Despite a recent impossibility result, we show how this can be made possible under certain mild additional assumptions, such as availability of rough estimates of the losses, or advance knowledge of the loss of a single, possibly unspecified arm. Along the way, we develop a novel technique which might be of independent interest, to convert any multi-armed bandit algorithm with regret depending on the loss range, to an algorithm with regret depending only on the effective range, while avoiding predictably bad arms altogether.\n\n11/06/2019\n\n### Multi-Armed Bandits with Correlated Arms\n\nWe consider a multi-armed bandit framework where the rewards obtained by...\n07/02/2018\n\nWe derive an online learning algorithm with improved regret guarantees f...\n12/13/2021\n\n### Top K Ranking for Multi-Armed Bandit with Noisy Evaluations\n\nWe consider a multi-armed bandit setting where, at the beginning of each...\n02/08/2022\n\n### Budgeted Combinatorial Multi-Armed Bandits\n\nWe consider a budgeted combinatorial multi-armed bandit setting where, i...\n09/19/2021\n\n### Generalized Translation and Scale Invariant Online Algorithm for Adversarial Multi-Armed Bandits\n\nWe study the adversarial multi-armed bandit problem and create a complet...\n10/15/2020\n\n### Stochastic Bandits with Vector Losses: Minimizing ℓ^∞-Norm of Relative Losses\n\nMulti-armed bandits are widely applied in scenarios like recommender sys...\n10/23/2020\n\n### Approximation Methods for Kernelized Bandits\n\nThe RKHS bandit problem (also called kernelized multi-armed bandit probl...\n\n## 1 Introduction\n\nIn the online learning and bandit literature, a recent and important trend has been the development of algorithms which are capable of exploiting “easy” data, in the sense of improved regret guarantees if the losses presented to the learner have certain favorable patterns. For example, a series of works have studied how the regret can be improved if the losses do not change much across rounds (e.g., [9, 14, 15, 16, 22]); being simultaneously competitive w.r.t. both “hard” and “easy” data (e.g., [21, 20, 5, 7]); attain richer feedback on the losses (e.g., ), have some predictable structure , and so on. In this paper, we continue this research agenda in a different direction, focusing on improved regret performance in nonstochastic settings with partial feedback where the learner has some knowledge about the variability of the losses within each round.\n\nIn the full information setting, where the learner observes the entire set of losses after each round , it is possible to obtain regret bounds of order scaling with the unknown effective range of the losses [8, Corollary 1]. Unfortunately, the situation in the bandit setting, where the learner only observes the loss of the chosen action, is quite different. A recent surprising result [11, Corollary 4] implies that in the bandit setting, the standard regret lower bound holds, even when . The proof defines a process where losses are kept -close to each other, but where the values oscillate unpredictably between rounds. Based on this, one may think that it is impossible to attain improved bounds in the bandit setting which depend on , or some other measure of variability of the losses across arms. 
In this paper, we show the extent to which partial information about the losses allows one to circumvent the impossibility result in some interesting ways. We analyze two specific settings: one in which the learner can roughly estimate in advance the actual loss value of each arm, and one where she knows the exact loss of some arbitrary and unknown arm.

In order to motivate the first setting, consider a scenario where the learner knows each arm's loss up to a certain precision (which may be different for each arm). For example, in the context of stock prices [13, 1] the learner may have a stochastic model providing some estimates of the loss means for the next round. In other cases, the learner may be able to predict that certain arms are going to perform poorly in some rounds. For example, in routing the learner may know in advance that some route is down, and a large loss is incurred if that route is picked. Note that in this scenario, a reasonable algorithm should be able to avoid picking that route. However, that breaks the regret guarantees of standard expert/bandit algorithms, which typically require each arm to be chosen with some positive probability. In the resulting regret bounds, it is difficult to avoid at least some dependence on the highest loss values.

To formalize these scenarios and considerations, we study a setting where for each arm i at round t, the learner is told that the loss ℓ_t(i) will be in [m_t(i) − ε_t(i), m_t(i) + ε_t(i)] for some m_t(i), ε_t(i). In this setting, we show a generic reduction, which allows one to convert any algorithm for bounded losses, under a generic feedback model (not necessarily a bandit one), to an algorithm with regret depending only on the effective range of the losses (that is, only on the ε_t(i)'s, independent of the m_t(i)'s). Concretely, taking the simple case where the loss of each arm at each round t is in [c_t, c_t + ε] for some c_t and fixed ε, and assuming the step size is properly chosen, we can get a regret bound of order ε√(KT log K) for the bandit feedback, completely independent of the c_t's and of the losses' actual range. Note that this has the desired behavior that as ε → 0, the regret also converges to zero (in the extreme case where ε = 0, the learner essentially knows the losses in advance, and hence can avoid any regret). With full information feedback (where the entire loss vector is revealed at the end of each round), we can use the same technique to recover a regret bound of order ε√(T log K). We note that this is a special case of the predictable sequences setting studied by Rakhlin and Sridharan, and their proposed algorithm and analysis are applicable here. However, comparing the results, our bandit regret bounds have a better dependence on the number of arms K, and our reduction can be applied to any algorithm, rather than the specific one proposed there. On the flip side, the algorithm proposed there is tailored to the more general setting of bandit linear optimization, and does not require the range parameter to be known in advance (see Sec. 3 for a more detailed comparison). We also study the tightness of our regret guarantees by providing lower bounds.

A second scenario motivating partial knowledge about the loss vectors is the following. Consider a system for recommending products to visitors of some company's website. Say that two products are similar if the typical visitor tends to like them both or dislike them both. Hence, if we consider the similarity graph over the set of products, then it is plausible to assume that the likelihood of purchase (or any related index of the visitor's behavior) is a smooth function over this graph. Before formalizing this, the short sketch below illustrates the smoothness notion numerically.
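To make "smooth over the similarity graph" concrete, here is a minimal Python sketch; the 4-product path graph and the loss values are hypothetical choices of ours, used purely for illustration. It evaluates the quadratic form ℓ^⊤ L ℓ = Σ_{(i,j)∈E} (ℓ(i) − ℓ(j))², which the next paragraph adopts as the smoothness measure:

```python
import numpy as np

# Hypothetical similarity graph over K = 4 products:
# products 0-1, 1-2, and 2-3 are pairwise similar (a path graph).
edges = [(0, 1), (1, 2), (2, 3)]

def smoothness(losses, edges):
    """Quadratic form sum_{(i,j) in E} (l(i) - l(j))^2, i.e. l^T L l
    for the (unweighted) graph Laplacian L."""
    return sum((losses[i] - losses[j]) ** 2 for i, j in edges)

smooth_losses = np.array([0.2, 0.25, 0.3, 0.35])  # varies slowly over the graph
rough_losses  = np.array([0.0, 1.0, 0.0, 1.0])    # oscillates between neighbors

print(smoothness(smooth_losses, edges))  # ~0.0075 -> a small C suffices
print(smoothness(rough_losses,  edges))  # 3.0     -> requires a large C
```

A loss vector that varies slowly between neighboring products yields a small quadratic form (so a small smoothness parameter suffices), whereas one that oscillates between neighbors does not.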
Formally, the loss vectors at each round t satisfy ℓ_t^⊤ L ℓ_t ≤ C, where L is the Laplacian matrix associated with a graph over the arms with edge set E (so that ℓ_t^⊤ L ℓ_t = Σ_{(i,j)∈E} (ℓ_t(i) − ℓ_t(j))²), and C is a smoothness parameter. In this setting, we provide improved bandit regret bounds depending on the spectral properties of the Laplacian. To circumvent the impossibility result mentioned earlier, we make the reasonable assumption that at the end of each round, the learner is given an "anchor point", corresponding to the loss of some unspecified arm. In our motivating example, the recommender system may assume, for instance, that each visitor has some product that she most likely won't buy. Using a simple modification of the Exp3 algorithm, we show that if the parameters are properly tuned, we attain a regret bound which (ignoring log factors) scales with T and improves as the graph becomes better connected: it is expressed in terms of λ₂, the second-smallest eigenvalue of L, also known as the algebraic connectivity number of the graph represented by L. If the learner is told the minimal loss at every round (rather than an arbitrary one), this bound can be improved further (again, ignoring log factors), to a bound which vanishes, as it should, when ℓ_t^⊤ L ℓ_t = 0 for all t; that is, when all arms share the same loss value. We also provide a lower bound, showing that this upper bound is the best possible (up to log factors) in the worst case. Although our basic results pertain to connected graphs, using the range-dependent reductions discussed earlier we show that they can be applied to graphs with multiple connected components and anchor points.

The paper is structured as follows: In Sec. 2, we formally define the standard experts/bandit online learning setting, which is the focus of our paper, and devote a few words to the notation we use. In Sec. 3, we discuss the situation where each individual loss is known to lie in a certain range, and provide an algorithm as well as upper and lower bounds on the expected regret. In Sec. 4, we consider the setting of smooth losses (as defined above). All our formal proofs are presented in the appendices.

## 2 Setting and notation

The standard experts/bandit learning setting (with nonstochastic losses) is phrased as a repeated game between a learner and an adversary, defined over a fixed set of K arms/actions. Before the game begins, the adversary assigns losses for each of the K arms and each of the T rounds (this is also known as an oblivious adversary, as opposed to a nonoblivious one which sets the losses during the game's progress). The loss of arm i at round t is denoted ℓ_t(i), and is assumed w.l.o.g. to lie in [0, 1]. We let ℓ_t denote the vector (ℓ_t(1), …, ℓ_t(K)). At the beginning of each round, the learner chooses an arm I_t, and receives the associated loss ℓ_t(I_t). With bandit feedback, the learner then observes only her own loss ℓ_t(I_t), whereas with full information feedback, the learner gets to observe ℓ_t(i) for all i. The learner's goal is to minimize the expected regret (sometimes denoted as pseudo-regret), defined as

$$\mathbb{E}\left[\sum_{t=1}^T \ell_t(I_t)\right] - \min_{i=1,\dots,K}\sum_{t=1}^T \ell_t(i),$$

where the expectation is over the learner's possible randomness. We use 1_A to denote the indicator of the event A, and let log denote the natural logarithm. Given an (undirected) graph over K nodes, its Laplacian is defined as the K × K matrix L where L_{i,i} equals the degree of node i, and for i ≠ j, L_{i,j} equals −1 if node i is adjacent to node j, and 0 otherwise. We let λ₂(L) denote the second-smallest eigenvalue of L. This is also known as the algebraic connectivity number, and is larger the more well-connected is the graph. In particular, λ₂(L) = 0 for disconnected graphs, and λ₂(L) = K for the complete graph.
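As a quick sanity check on these two facts, the following sketch computes λ₂ numerically with NumPy; the graph size and edge sets are arbitrary choices for illustration:

```python
import numpy as np

def laplacian(K, edges):
    """Graph Laplacian: degrees on the diagonal, -1 for adjacent pairs."""
    L = np.zeros((K, K))
    for i, j in edges:
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
    return L

def lambda2(L):
    """Second-smallest eigenvalue (algebraic connectivity)."""
    return np.sort(np.linalg.eigvalsh(L))[1]

K = 5
complete = [(i, j) for i in range(K) for j in range(i + 1, K)]
disconnected = [(0, 1), (2, 3)]            # node 4 is isolated

print(lambda2(laplacian(K, complete)))      # 5.0 (= K)
print(lambda2(laplacian(K, disconnected)))  # 0.0 (graph is not connected)
```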
In particular, $\lambda_2(L) = 0$ for disconnected graphs, and $\lambda_2(L) = K$ for the complete graph.

## 3 Rough estimates of individual losses

We consider a variant of the online learning setting presented in Sec. 2, where at the beginning of every round $t$, the learner is provided with additional side information in the form of $(m_t(i), \varepsilon_t(i))$ for each arm $i$, with the guarantee that $\ell_t(i) \in [m_t(i)-\varepsilon_t(i),\, m_t(i)+\varepsilon_t(i)]$ for all $i$. We then propose an algorithmic reduction, which allows to convert any regret-minimizing algorithm $\mathcal{A}$ (with some generic feedback) to an algorithm with regret depending on the $\varepsilon_t(i)$, independent of the $m_t(i)$. We assume that given a loss vector $\ell_t$ and chosen action $I_t$, the algorithm receives as feedback some function $f(\ell_t, I_t)$: For example, if $\mathcal{A}$ is an algorithm for the multi-armed bandits setting, then $f(\ell_t, I_t) = \ell_t(I_t)$, whereas if $\mathcal{A}$ is an algorithm for the experts setting, $f(\ell_t, I_t) = \ell_t$. In our reduction, $\mathcal{A}$ is sequentially fed, at the end of each round $t$, with $f(\tilde\ell_t, \tilde I_t)$ (where $\tilde\ell_t$ and $\tilde I_t$ are not necessarily the same as the actual loss vector $\ell_t$ and actual chosen arm $I_t$), and returns a recommended arm for the next round, which is used to choose the actual arm $I_t$.

To formally describe the reduction, we need a couple of definitions. For all $t$, let

$$j_t \in \arg\min_i\, \{m_t(i) - \varepsilon_t(i)\}$$

denote the arm with the lowest potential loss, based on the provided side-information (if there are ties, we choose the one with smallest $\varepsilon_t(i)$, and break any remaining ties arbitrarily). Define any arm $i$ as "bad" (at round $t$) if $m_t(i)-\varepsilon_t(i) \ge m_t(j_t)+\varepsilon_t(j_t)$ and as "good" otherwise. Intuitively, "bad" arms are those which cannot possibly have the smallest loss in round $t$. For loss vector $\ell_t$, define the transformed loss vector $\tilde\ell_t$ as

$$\tilde\ell_t(i) = \begin{cases} \ell_t(i) - m_t(j_t) + \varepsilon_t(j_t) & \text{if } i \text{ is good} \\ 2\varepsilon_t(j_t) & \text{if } i \text{ is bad.} \end{cases}$$

It is easily verified that $\tilde\ell_t(i) \in [0,\, 2(\varepsilon_t(j_t)+\varepsilon_t(i))]$ always. Hence, the range of the transformed losses does not depend on the $m_t(i)$. The meta-algorithm now does the following at every round:

1. Get an arm recommendation $\tilde I_t$ from $\mathcal{A}$.

2. Let $I_t = \tilde I_t$ if $\tilde I_t$ is a good arm, and $I_t = j_t$ otherwise.

3. Choose arm $I_t$ and get feedback $f(\ell_t, I_t)$.

4. Construct feedback $f(\tilde\ell_t, \tilde I_t)$ and feed it to algorithm $\mathcal{A}$.

Crucially, note that we assume that $f(\tilde\ell_t, \tilde I_t)$ can be constructed based on $f(\ell_t, I_t)$. For example, this is certainly true in the full information setting (as we are given $\ell_t$, hence can explicitly compute $\tilde\ell_t$). This is also true in the bandit setting: If $\tilde I_t$ is a "good" arm, then $I_t = \tilde I_t$ and $\tilde\ell_t(\tilde I_t) = \ell_t(I_t) - m_t(j_t) + \varepsilon_t(j_t)$, hence we can construct it based on the feedback actually given to the meta-algorithm. If $\tilde I_t$ is a "bad" arm, then we can indeed construct $\tilde\ell_t(\tilde I_t) = 2\varepsilon_t(j_t)$, since $\varepsilon_t(j_t)$ is given to the meta-algorithm as side-information. This framework can potentially be used for other partial-feedback settings as well.

The following key theorem implies that the expected regret of this meta-algorithm can be upper bounded by the expected regret of $\mathcal{A}$, with respect to the transformed losses (whose range is independent of the $m_t(i)$):

###### Theorem 1.

Suppose (without loss of generality) that $\tilde I_t$ given by $\mathcal{A}$ is chosen at random by sampling from a probability distribution $\tilde p_t$. Let $p_t$ be the induced distribution of $I_t$ (by definition of the meta-algorithm, $p_t(i) = \tilde p_t(i)$ if $i \neq j_t$ is good, $p_t(i) = 0$ if $i \neq j_t$ is bad, and $p_t(j_t) = \tilde p_t(j_t) + \sum_{i\,\text{bad}} \tilde p_t(i)$). Then for any fixed arm $a$, it holds that

$$\sum_{t=1}^T\sum_{i=1}^K p_t(i)\,\ell_t(i) - \sum_{t=1}^T \ell_t(a) \;\le\; \sum_{t=1}^T\sum_{i=1}^K \tilde p_t(i)\,\tilde\ell_t(i) - \sum_{t=1}^T \tilde\ell_t(a). \tag{1}$$

This implies in particular that

$$\mathbb{E}\left[\sum_{t=1}^T \ell_t(I_t)\right] - \sum_{t=1}^T \ell_t(a) \;\le\; \mathbb{E}\left[\sum_{t=1}^T \left(\tilde\ell_t(\tilde I_t) - \tilde\ell_t(a)\right)\right],$$

where the expectation is over the possible randomness of the algorithm $\mathcal{A}$. Moreover, $\tilde\ell_t(i) \in [0,\, 2(\varepsilon_t(j_t)+\varepsilon_t(i))]$ for any good $i$, and $\tilde\ell_t(i) = 2\varepsilon_t(j_t)$ for any bad $i$.

The proof of the theorem (in the appendices) carefully relies on how the transformed losses and actions were defined. Since the range of $\tilde\ell_t$ is independent of the $m_t(i)$, we get a regret bound for our meta-algorithm which depends only on the $\varepsilon_t(i)$.
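To make the reduction concrete, the following is a minimal sketch of one meta-round under bandit feedback. It is our own illustration, not code from the paper; the tie-breaking is simplified, the good/bad test follows our reading of the definitions above, and `recommended` stands for the arm $\tilde I_t$ sampled by the base algorithm $\mathcal{A}$:

```python
import numpy as np

def meta_round(m, eps, ell, recommended):
    """One round of the range-reduction (bandit feedback).

    m, eps      -- arrays of side information m_t(i), eps_t(i)
    ell         -- array of true losses; only ell[I_t] is really observed
    recommended -- arm suggested by the base algorithm A
    Returns (arm actually played, transformed loss fed back to A).
    """
    j = int(np.argmin(m - eps))                 # arm with lowest potential loss
    good = (m - eps) < (m[j] + eps[j])          # arms that could still be best
    if good[recommended]:
        played = recommended                    # play the recommendation
        tilde = ell[played] - m[j] + eps[j]     # built from the observed loss
    else:
        played = j                              # fall back to j_t
        tilde = 2 * eps[j]                      # known without any observation
    return played, tilde
```

Note that the transformed loss of a "bad" recommendation is computed purely from the side information, which is exactly why the reduction goes through under bandit feedback.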
This regret guarantee is exemplified in the following two corollaries:

###### Corollary 1.

With bandit feedback and using Exp3 as the algorithm $\mathcal{A}$ (with step size $\eta$), the expected regret of the meta-algorithm is

$$O\left(\frac{\log K}{\eta} + \eta\sum_{t=1}^T\left(K\varepsilon_t(j_t)^2 + \sum_{i\in G_t}\varepsilon_t(i)^2\right)\right),$$

where $G_t$ is the set of "good" arms at round $t$.

The optimal choice of $\eta$ leads to a regret of order $\sqrt{(\log K)\sum_{t=1}^T\left(K\varepsilon_t(j_t)^2 + \sum_{i\in G_t}\varepsilon_t(i)^2\right)}$. This recovers the standard Exp3 bound in the case $m_t(i) = \varepsilon_t(i) = 1/2$ (i.e., the standard setting where the losses are only known to be bounded in $[0,1]$), but can be considerably better if the $\varepsilon_t(i)$ terms are small, or if many arms are "bad". We also note that the $\log K$ factor can in principle be removed, e.g., by using the implicitly normalized forecaster of [3] with appropriate parameters. A similar corollary can be obtained in the full information setting, using a standard algorithm such as Hedge [10].

###### Corollary 2.

With full information feedback and using Hedge as the algorithm $\mathcal{A}$ (with step size $\eta$), the expected regret of the meta-algorithm is

$$O\left(\frac{\log K}{\eta} + \eta\sum_{t=1}^T \max_{i=1,\dots,K}\varepsilon_t(i)^2\right).$$

The optimal choice of $\eta$ leads to regret of order $\sqrt{(\log K)\sum_{t=1}^T \max_i \varepsilon_t(i)^2}$. As in the bandit setting, our reduction can be applied to other algorithms as well, including those with more refined loss-dependent guarantees (e.g., [8] and references therein).

Finally, we note that Thm. 1 can easily be used to provide high-probability bounds on the actual regret $\sum_{t=1}^T \ell_t(I_t) - \min_i \sum_{t=1}^T \ell_t(i)$, rather than just bounds in expectation, as long as we have a high-probability regret bound for $\mathcal{A}$. This is due to Eq. (1), and can be easily shown using standard martingale arguments.

### 3.1 Related work

As mentioned in the introduction, a question similar to those we are studying here was considered in [19], under the name of learning with predictable sequences. Unlike our setting, however, [19] does not require knowledge of the $\varepsilon_t(i)$. Assuming the step size is chosen appropriately, they provide algorithms with expected regret bounds scaling as

$$\sqrt{(\log K)\,K^2\sum_{t=1}^T\sum_{i=1}^K \varepsilon_t(i)^2}\ \ \text{(bandit feedback)} \qquad\text{and}\qquad \sqrt{(\log K)\sum_{t=1}^T \max_i \varepsilon_t(i)^2}\ \ \text{(full information feedback)}.$$

Comparing these bounds to Corollaries 1 and 2, we see that we obtain a similar regret bound in the full information setting, whereas in the bandit setting, our bound has a better dependence on the number of arms $K$, and a better dependence on the $\varepsilon_t(i)$ if $\varepsilon_t(j_t)$ or the number of "good" arms tends to be small. Also, our algorithmic approach is based on a reduction, which can be applied in principle to any algorithm and to general families of feedback settings, rather than a specific algorithm. On the flip side, our bound in the bandit setting can be worse than that of [19] when $\varepsilon_t(j_t)$ is large relative to the remaining $\varepsilon_t(i)$. Also, their algorithm is tailored to the more general setting of linear bandits (where at each round the learner needs to pick a point $w_t$ in some convex set $\mathcal{W}$, and receives a loss $\langle \ell_t, w_t\rangle$), and does not require knowing the $\varepsilon_t(i)$ in advance.

Another related line of work is path-based bounds, where it is assumed that the losses tend to vary slowly with $t$, so that $\ell_{t-1}$ can provide a good estimate of $\ell_t$. This can be linked to our setting by taking $m_t = \ell_{t-1}$, and letting $\varepsilon_t$ be some known upper bound on the variation between $\ell_{t-1}$ and $\ell_t$. However, implementing this requires the assumption that $\ell_{t-1}$ is fully revealed at the beginning of round $t$, which does not fit the bandit setting. Thus, it is difficult to directly compare these results to ours. Most work on this topic has focused on the full information feedback setting (see [22] and references therein), and the bandit setting was studied for instance in [15].

### 3.2 Lower bound

We now turn to consider the tightness of our results.
Since the focus of this paper is to study the variability of the losses across arms, rather than across time, we will consider for simplicity the case where $m_t(i)$ and $\varepsilon_t(i)$ are fixed for all $t$ (hence the subscript $t$ can be dropped).

In the theorem below, we show that the dependencies on $\sum_j \varepsilon(j)^2$ and $\max_j \varepsilon(j)^2$ (in the bandit and full information case, respectively) cannot be improved in general.

###### Theorem 2.

Fix $T$ and nonnegative $\varepsilon(1),\dots,\varepsilon(K)$ which are not all zero. Then there exist fixed parameters $m(1),\dots,m(K)$ such that the following holds: For any (possibly randomized) learner strategy $\mathcal{A}$, there exists a loss assignment satisfying $\ell_t(j) \in [m(j)-\varepsilon(j),\, m(j)+\varepsilon(j)]$ for all $t, j$, such that

$$\mathbb{E}_{\mathcal{A}}\left[\sum_{t=1}^T \ell_t(I_t)\right] - \min_{j=1,\dots,K}\sum_{t=1}^T \ell_t(j) \;\ge\; \begin{cases} c\sqrt{T\sum_{j=1}^K \varepsilon(j)^2} & \text{with bandit feedback} \\[4pt] c\sqrt{T\max_{j=1,\dots,K}\varepsilon(j)^2} & \text{with full information feedback,} \end{cases}$$

where $c$ is a universal constant.

The proof is conceptually similar to the standard regret lower bound for nonstochastic multi-armed bandits (see [4]), where the losses are generated stochastically, with one randomly-chosen and hard-to-find arm having a slightly smaller loss in expectation. However, we utilize a more involved stochastic process to generate the losses as well as to choose the better arm, which takes the values of $\varepsilon(j)$ into account.

###### Remark 1.

The construction in the bandit setting is such that all arms are potentially "good" in the sense used in Corollary 1, and hence $\sum_{i\in G_t}\varepsilon(i)^2$ coincides with $\sum_i \varepsilon(i)^2$ (recall $G_t$ is the set of "good" arms at time $t$). If one wishes to consider a situation where some arms are "bad", and obtain a bound dependent on $G_t$, one can simply pick some sufficiently large $m(i)$ values for them, and ignore their contribution to the regret in the lower bound analysis.

The lower bound leaves open the possibility of removing the dependence on $K\varepsilon_t(j_t)^2$ in the upper bound. This term is immaterial when it is comparable to, or smaller than, $\sum_{i\in G_t}\varepsilon_t(i)^2$ (e.g., if most arms are good, and $\varepsilon_t(i)$ is about the same for all $i$), but there are certainly situations where it could be otherwise. This question is left to future work.

## 4 Smooth losses

As discussed in the introduction, a line of work in the online learning literature considered the situation where the losses of each arm vary slowly across time (e.g., $|\ell_t(i) - \ell_{t+1}(i)|$ tends to be small), and showed how to attain better regret guarantees in such a case. An orthogonal question is whether such improved performance is possible when the losses vary smoothly across arms. Namely, $|\ell_t(i) - \ell_t(j)|$ tends to be small for all pairs of arms $i,j$ that are similar to each other.

It turns out that this assumption can be exploited, avoiding the lower bound of [11], if the learner is given (or can compute) an "anchor point" $m_t$ at the end of round $t$, which equals the loss of some arm at round $t$, independent of the learner's randomness at that round. Importantly, the learner need not even know which arm has this loss. For example, it is often reasonable to assume that there is always some arm which attains a minimal loss of $0$, or some arm which attains a maximal loss of $1$. In that case, instead of estimating losses in $[0,1]$, it is enough to estimate losses of the form $\ell_t(i) - m_t$, which may lie in a much narrower range if the losses are smooth across arms.

To see why this "anchor point" side-information circumvents the lower bound of [11], we briefly discuss their construction (in a slightly simplified manner): The authors consider a situation where the losses are generated stochastically and independently at each round according to $\ell_t(i) = Z_t - \delta\,\mathbb{1}\{i = i^\star\}$, with $Z_t$ being a standard Gaussian random variable, $\delta$ being a small gap parameter, and $i^\star$ being some arm chosen uniformly at random.
Hence, at every round, arm $i^\star$ has a loss smaller by $\delta$ than all other arms. Getting an expected regret smaller than $\delta T$ would then amount to detecting $i^\star$. However, since the learner observes only a single loss every round, the similarity of the losses for different arms at a given round does not help much. In contrast, if the learner had access to the loss of any fixed arm (independent of the learner's randomness), she could easily detect $i^\star$ in $O(K)$ rounds, simply by maintaining a "feasible set" $S$ of possible arms, picking arms $I_t \in S$ at random, and removing $I_t$ from $S$ if the difference between $\ell_t(I_t)$ and the revealed loss is positive. This process ends once $S$ contains a single arm, which must be $i^\star$.

To formalize this setting in a flexible manner, we follow a graph-based approach, inspired by [23]. Specifically, we assume that at every round $t$, a graph over the arms, with an associated Laplacian matrix $L_t$ and parameter $C_t$, can be defined so that the loss vector $\ell_t$ satisfies

$$\ell_t^\top L_t\,\ell_t \;=\; \sum_{(i,j)\in E_t}\left(\ell_t(i) - \ell_t(j)\right)^2 \;\le\; C_t^2.$$

The smaller is $C_t$, the more similar are the losses, on average. This can naturally interpolate between the standard bandit setting (where the losses need not be similar) and the extreme case where all losses are the same, in which case the regret is always trivially zero. Crucially, note that the learner need not have explicit knowledge of either $L_t$ or $C_t$: In fact, our regret upper bounds, which will depend on these entities, will hold for any $L_t$ and $C_t$ which are valid with respect to the vectors of actual losses (possibly the ones minimizing the bounds). The only thing we do expect the learner to know (at the end of each round $t$) is the "anchor point" $m_t$ as described above. We also note that this setting is quite distinct from the graph bandits setting of [17, 2], which also assumes a graph structure over the bandits, but there the graph encodes what feedback the learner receives, as opposed to encoding similarities between the losses themselves.

We now turn to describe the algorithm and associated regret bound. The algorithm itself is very simple: Run a standard multiarmed bandits algorithm suitable for our setting (in particular, Exp3 [4]) using the shifted losses $\ell_t(i) - m_t + 1$. The associated regret guarantee is formalized in the following theorem.

###### Theorem 3.

Assume that in each round $t$, after choosing $I_t$, the learner is told a number $m_t$ chosen by the oblivious adversary and such that there exists some arm $k_t$ with $\ell_t(k_t) = m_t$. Then Exp3 performing updates based on loss vectors $\ell_t - m_t + 1$ achieves

$$\mathbb{E}\left[\sum_{t=1}^T \ell_t(I_t)\right] - \min_{i=1,\dots,K}\sum_{t=1}^T \ell_t(i) \;\le\; \frac{\log K}{\eta} + \frac{\eta}{2}\sum_{t=1}^T\left(1 + \frac{C_t^2}{\lambda_2(L_t)}\right),$$

where each $L_t$ is the Laplacian of any simple and connected graph on the arms such that $\ell_t^\top L_t\,\ell_t \le C_t^2$ for all $t$.

The proof is based on Euclidean-norm regret bounds for the Exp3 algorithm, combined with a careful analysis of the associated quantities based on the Laplacian constraint $\ell_t^\top L_t\,\ell_t \le C_t^2$.

By this theorem, we get that if the step size $\eta$ is chosen optimally (based on $T$, $K$, and the $C_t^2/\lambda_2(L_t)$ terms), then we get a regret bound of order $\sqrt{(\log K)\sum_{t=1}^T\left(1 + C_t^2/\lambda_2(L_t)\right)}$. We note that even if some of these parameters are unknown in advance, this can be easily handled using doubling-trick arguments (see the appendices for a proof), and the same holds for our other results.

The bound of Theorem 3 is not fully satisfying as it does not vanish when $C_t = 0$ (which, assuming the graph is connected, implies that all losses are the same). The reason is that we need to add $1$ to each loss component in order to guarantee that we do not end up with negative components when $m_t$ is subtracted from $\ell_t$.
This is avoided when, in each round $t$, the revealed loss $m_t$ is the smallest component of $\ell_t$, as formalized in the following corollary.

###### Corollary 3.

Assume that in each round $t$, after choosing $I_t$, the learner is told $m_t = \min_i \ell_t(i)$. Then Exp3 performing updates using losses $\ell_t - m_t$ achieves

$$\mathbb{E}\left[\sum_{t=1}^T \ell_t(I_t)\right] - \min_{i=1,\dots,K}\sum_{t=1}^T \ell_t(i) \;\le\; \frac{\log K}{\eta} + \frac{\eta}{2}\sum_{t=1}^T \frac{C_t^2}{\lambda_2(L_t)},$$

where each $L_t$ is the Laplacian of any simple and connected graph on the arms such that $\ell_t^\top L_t\,\ell_t \le C_t^2$ for all $t$.

We leave the question of getting such a bound, without $m_t$ being the smallest loss, as an open problem.

We now show how the bounds stated in Theorem 3 and Corollary 3 relate to the standard Exp3 bound, which in its tightest form is of order $\sqrt{(\log K)\sum_{t=1}^T \|\ell_t\|^2}$ — see Lemma 2 in the supplementary material. Recall that our bounds are achieved for all choices of $L_t, C_t$ such that $\ell_t^\top L_t\,\ell_t \le C_t^2$ for all $t$. Now assume, for each $t$, that $L_t$ is the Laplacian of the $K$-clique. Then $L_t$ has all nonzero eigenvalues equal to $K$, and so the condition is satisfied for $C_t^2 = \ell_t^\top L_t\,\ell_t$. As $\lambda_2(L_t)$ is also equal to $K$, we have that $C_t^2/\lambda_2(L_t) \le \|\ell_t\|^2$. Hence, when $\eta$ is tuned optimally (e.g., through the doubling trick), the bounds of Theorem 3 and Corollary 3 take, respectively, the form

$$\sqrt{(\log K)\sum_{t=1}^T \min\left\{\|\ell_t\|^2,\; 1 + \frac{C_t^2}{\lambda_2(L_t)}\right\}} \qquad\text{and}\qquad \sqrt{(\log K)\sum_{t=1}^T \min\left\{\|\ell_t\|^2,\; \frac{C_t^2}{\lambda_2(L_t)}\right\}}. \tag{2}$$

Finally, we show that for fixed graphs, the regret bound in Eq. (2) (right-hand side) is tight in the worst case up to log factors.

###### Theorem 4.

There exist universal constants such that the following holds: For any randomized algorithm, any $T$, any $C$, and any sufficiently large $K$, there exists a $K$-node graph with Laplacian $L$ and an adversary strategy, such that the expected regret (w.r.t. the algorithm's internal randomization) is at least

$$\Omega\left(\min\left\{\sqrt{K},\; \frac{C}{\sqrt{\lambda_2(L)}}\right\}\sqrt{T}\right),$$

while $\ell_t^\top L\,\ell_t \le C^2$ for all $t$.

This theorem matches Eq. (2), assuming that $\|\ell_t\|^2 = \Omega(K)$ for all $t$, and that $\lambda_2(L) = O(1)$. Note that the latter assumption is generally the interesting regime for $\lambda_2(L)$ (for example, $\lambda_2(L) \le 1$ as long as there is some node connected by a single edge). The proof is based on considering an "octopus" graph, composed of long threads emanating from one central node, and applying a standard bandit lower bound strategy on the nodes at the ends of the threads.

### 4.1 Multiple connected components

The previous results of this section need the graph represented by $L_t$ to be connected, in order for the guarantees to be non-vacuous. This is not just an artifact of the analysis: If the graph is not connected, at least some arms can have losses which are arbitrarily different than other arms, and the anchor point side information is not necessarily useful. Indeed, if there are multiple connected components, then $\lambda_2(L_t) = 0$ and our bounds become trivial. Nevertheless, we now show it is still possible to get improved regret performance in some cases, as long as the learner is provided with anchor point information on each connected component of the graph.

We assume that at every round $t$, there is some graph defined over the arms, with edge set $E_t$. However, here we assume that this graph may have multiple connected components (indexed by $s$ in some set $\mathcal{S}_t$). For each connected component $s$, with associated Laplacian $L_t(s)$, we assume the learner has access to an anchor point $m_t(s)$. Unlike the case discussed previously, here the anchor points may be different at different components, so a simple shifting of the losses (as done in Sec. 4) no longer suffices to get a good bound. However, the anchor points still allow us to compute some interval in which each loss must lie, which in turn can be plugged into the algorithmic reduction presented in Sec. 3.
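As an illustration of how these intervals might be computed, here is a minimal sketch of our own (not from the paper): given a component's Laplacian, it obtains $\lambda_2$ numerically and returns the per-arm loss interval discussed above; the example graph and all numbers are hypothetical:

```python
import numpy as np

def loss_interval(L_component, m_anchor, C):
    """Interval that must contain every loss in a connected component.

    L_component -- Laplacian of the component's graph (dense numpy array)
    m_anchor    -- anchor point m_t(s): the loss of some arm in the component
    C           -- smoothness parameter C_t, with ell^T L ell <= C^2
    """
    eigvals = np.linalg.eigvalsh(L_component)   # eigenvalues, sorted ascending
    lam2 = eigvals[1]                           # algebraic connectivity
    radius = C / np.sqrt(lam2)
    return m_anchor - radius, m_anchor + radius

# Path graph on 3 arms: edges (0,1) and (1,2); here lambda_2 = 1
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
print(loss_interval(L, m_anchor=0.4, C=0.2))    # -> (0.2, 0.6)
```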
The interval claim above is formalized in the following lemma.

###### Lemma 1.

For any connected component $s$, and any arm $i$ in that component, $|\ell_t(i) - m_t(s)| \le C_t/\sqrt{\lambda_2(L_t(s))}$.

Based on this lemma, we know that any arm at any connected component $s$ has values in

$$\left[\,m_t(s) - \frac{C_t}{\sqrt{\lambda_2(L_t(s))}},\;\; m_t(s) + \frac{C_t}{\sqrt{\lambda_2(L_t(s))}}\,\right].$$

Using this and applying Corollary 1, we have the following result.

###### Theorem 5.

For any fixed arm $j$, the algorithm described in Corollary 1 satisfies

$$\mathbb{E}\left[\sum_t \ell_t(I_t) - \sum_t \ell_t(j)\right] \;\le\; \frac{\log K}{\eta} + \frac{\eta}{2}\sum_{t=1}^T\left(\frac{C_t^2}{\lambda_2(L_t(s_{\min}))} + \sum_{s\in G_t}\frac{C_t^2}{\lambda_2(L_t(s))}\,N_t(s)\right),$$

where $N_t(s)$ is the number of arms in connected component $s$, and $s_{\min}$ is a connected component for which the lower interval endpoint is smallest.

This allows us to get results which depend on the Laplacians $L_t(s)$, even when these sub-graphs are disconnected from each other. We note however that this theorem does not recover the results of Sec. 4 when there is only one connected component, as we get an extra factor of $K$ which is spurious. The reason for this looseness is that we go through a coarse upper bound on the magnitude of the losses, and lose the dependence on the Laplacian along the way. This is not just an artifact of the analysis: Recall that the algorithmic reduction proceeds by using transformations of the actual losses, and these transformations may not satisfy the same Laplacian constraints as the original losses. Getting a better algorithm with improved regret performance in this particular setting is left to future work.

## References

1. Jacob Abernethy, Peter L. Bartlett, Rafael Frongillo, and Andre Wibisono. How to hedge an option against an adversary: Black-Scholes pricing is minimax optimal. In NIPS, 2013.
2. Noga Alon, Nicolò Cesa-Bianchi, Claudio Gentile, Shie Mannor, Yishay Mansour, and Ohad Shamir. Nonstochastic multi-armed bandits with graph-structured feedback. arXiv preprint arXiv:1409.8428, 2014.
3. Jean-Yves Audibert and Sébastien Bubeck. Minimax policies for adversarial and stochastic bandits. In COLT, pages 217–226, 2009.
4. Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
5. Peter Auer and Chao-Kai Chiang. An algorithm with nearly optimal pseudo-regret for both stochastic and adversarial bandits. arXiv preprint arXiv:1605.08722, 2016.
6. Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. arXiv preprint arXiv:1204.5721, 2012.
7. Sébastien Bubeck and Aleksandrs Slivkins. The best of both worlds: Stochastic and adversarial bandits. In COLT, 2012.
8. Nicolò Cesa-Bianchi, Yishay Mansour, and Gilles Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2-3):321–352, 2007.
9. Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, and Shenghuo Zhu. Online optimization with gradual variations. In COLT, 2012.
10. Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In European Conference on Computational Learning Theory, pages 23–37. Springer, 1995.
11. Sébastien Gerchinovitz and Tor Lattimore. Refined lower bounds for adversarial bandits. In NIPS, 2016.
12. Robert Grone, Russell Merris, and V. S. Sunder. The Laplacian spectrum of a graph. SIAM Journal on Matrix Analysis and Applications, 11(2):218–238, 1990.
13. Elad Hazan and Satyen Kale. On stochastic and worst-case models for investing. In NIPS, 2009.
14. Elad Hazan and Satyen Kale.
Extracting certainty from uncertainty: Regret bounded by variation in costs. Machine Learning, 80(2-3):165–188, 2010.
15. Elad Hazan and Satyen Kale. Better algorithms for benign bandits. Journal of Machine Learning Research, 12(Apr):1287–1311, 2011.
16. Zohar S. Karnin and Oren Anava. Multi-armed bandits: Competing with optimal sequences. In NIPS, 2016.
17. Shie Mannor and Ohad Shamir. From bandits to experts: On the value of side-observations. In Advances in Neural Information Processing Systems, pages 684–692, 2011.
18. Ali Ajdari Rad, Mahdi Jalili, and Martin Hasler. A lower bound for algebraic connectivity based on the connection-graph-stability method. Linear Algebra and Its Applications, 435(1):186–192, 2011.
19. Alexander Rakhlin and Karthik Sridharan. Online learning with predictable sequences. In COLT, pages 993–1019, 2013.
20. Amir Sani, Gergely Neu, and Alessandro Lazaric. Exploiting easy data in online optimization. In Advances in Neural Information Processing Systems, pages 810–818, 2014.
21. Yevgeny Seldin and Aleksandrs Slivkins. One practical algorithm for both stochastic and adversarial bandits. In ICML, 2014.
22. Jacob Steinhardt and Percy Liang. Adaptivity and optimism: An improved exponentiated gradient algorithm. In ICML, pages 1593–1601, 2014.
23. Michal Valko, Rémi Munos, Branislav Kveton, and Tomáš Kocák. Spectral bandits for smooth graph functions. In ICML, 2014.

## Appendix A Proof of Thm. 1

The proof consists mainly of proving Eq. (1). The in-expectation bound follows by applying expectations on both sides of the inequality, and noting that conditioned on rounds $1,\dots,t-1$, the conditional expectation of $\ell_t(I_t)$ equals $\sum_i p_t(i)\ell_t(i)$, and the conditional expectation of $\tilde\ell_t(\tilde I_t)$ equals $\sum_i \tilde p_t(i)\tilde\ell_t(i)$. Also, the statement on the range of each $\tilde\ell_t(i)$ is immediate from the definition of $\tilde\ell_t$ and Eq. (5) below.

We now turn to prove Eq. (1). By adding and subtracting terms, it is sufficient to prove that

$$\sum_t\left(\sum_i p_t(i)\ell_t(i) - m_t(j_t) + \varepsilon_t(j_t)\right) - \sum_t\left(\ell_t(a) - m_t(j_t) + \varepsilon_t(j_t)\right) \;\le\; \sum_{t,i}\tilde p_t(i)\tilde\ell_t(i) - \sum_t \tilde\ell_t(a). \tag{3}$$

We will rely on the following facts, which are immediate from the definition of good and bad arms: Any bad arm $i$ must satisfy

$$\ell_t(i) \ge m_t(j_t) + \varepsilon_t(j_t), \tag{4}$$

and any good arm $i$ must satisfy

$$m_t(j_t) - \varepsilon_t(j_t) \;\le\; \ell_t(i) \;\le\; m_t(j_t) + \varepsilon_t(j_t) + 2\varepsilon_t(i). \tag{5}$$

Based on this, we have the following two claims, whose combination immediately implies Eq. (3).

Claim 1. For any fixed arm $a$, $\tilde\ell_t(a) \le \ell_t(a) - m_t(j_t) + \varepsilon_t(j_t)$.

To show Claim 1, we consider separately the case where $a$ is a bad arm at round $t$, and where $a$ is a good arm at round $t$. If $a$ is a bad arm, then $\tilde\ell_t(a) = 2\varepsilon_t(j_t)$, which is at most $\ell_t(a) - m_t(j_t) + \varepsilon_t(j_t)$ by Eq. (4). Otherwise, if $a$ is a good arm at round $t$, the claim holds with equality by definition of $\tilde\ell_t(a)$.

Claim 2.

$$\sum_{i=1}^K p_t(i)\,\ell_t(i) - m_t(j_t) + \varepsilon_t(j_t) \;\le\; \sum_{i=1}^K \tilde p_t(i)\,\tilde\ell_t(i),$$

where

$$p_t(i) = \begin{cases} \tilde p_t(i) & i \neq j_t \text{ and } i \text{ is good} \\ 0 & i \neq j_t \text{ and } i \text{ is bad} \\ \tilde p_t(j_t) + \sum_{i'\,\text{bad}} \tilde p_t(i') & i = j_t. \end{cases}$$

To show Claim 2, recall that if $i$ is a good arm, then $\tilde\ell_t(i) = \ell_t(i) - m_t(j_t) + \varepsilon_t(j_t)$, and otherwise, we have $\tilde\ell_t(i) = 2\varepsilon_t(j_t) \ge \ell_t(j_t) - m_t(j_t) + \varepsilon_t(j_t)$ (since $\ell_t(j_t) \le m_t(j_t) + \varepsilon_t(j_t)$ by definition). Letting $G_t$ denote the set of good arms at round $t$, the claim follows by splitting the left-hand sum over $G_t$ and its complement and applying these two facts.

Combining the two claims above, and summing over $t$, we get Eq. (3) as required.

## Appendix B Proof of Thm. 2

Suppose the learner uses some (possibly randomized) strategy, and let $A$ be a random variable denoting its random coin flips. Our goal is to provide lower bounds on

$$\sup_{\ell_1,\dots,\ell_T}\left(\mathbb{E}_A\left[\sum_{t=1}^T \ell_t(I_t)\right] - \min_{j=1,\dots,K}\sum_{t=1}^T \ell_t(j)\right),$$

where the expectation is with respect to the learner's (possibly randomized) strategy. Clearly, this is lower bounded by

$$\mathbb{E}_{J,L}\,\mathbb{E}_A\left[\sum_{t=1}^T \ell_t(I_t) - \sum_{t=1}^T \ell_t(J)\right],$$

where $\mathbb{E}_{J,L}$ signifies expectation over some distribution over indices $J$ and losses $L$.
By Fubini's theorem, this equals

$$\mathbb{E}_A\,\mathbb{E}_{J,L}\left[\sum_{t=1}^T \ell_t(I_t) - \sum_{t=1}^T \ell_t(J)\right] \;\ge\; \inf_A\,\mathbb{E}_{\{\ell_t(i)\}_{i,t},\,J}\left[\sum_{t=1}^T \ell_t(I_t) - \sum_{t=1}^T \ell_t(J)\right],$$

where $\inf_A$ refers to an infimum over the learner's random coin flips. Thus, we need to provide some distribution over indices and losses, so that for any deterministic learner,

$$\mathbb{E}\left[\sum_{t=1}^T \ell_t(I_t) - \sum_{t=1}^T \ell_t(J)\right] \tag{6}$$

is lower bounded as stated in the theorem.

The proof will be composed of two constructions, depending on whether we are in the bandit or full information setting, and on whether $\max_j \varepsilon(j)^2 / \sum_j \varepsilon(j)^2$ is larger or smaller than $\frac14$.

### B.1 The case $\max_j \varepsilon(j)^2 / \sum_j \varepsilon(j)^2 \le \frac14$ with bandit feedback

For this case, we will consider the following distribution: Let $J$ be distributed on $\{1,\dots,K\}$ according to a probability distribution $p$ (to be specified later). Conditioned on any $J = j$, we define the distribution over losses as follows, independently for each round $t$ and index $i$:

- If $i \neq j$, then $\ell_t(i)$ equals $m(i)+\varepsilon(i)$ w.p. $\frac12$, and $m(i)-\varepsilon(i)$ w.p. $\frac12$.

- If $i = j$, then $\ell_t(i)$ equals $m(i)+\varepsilon(i)$ w.p. $\frac{1-\delta(j)}{2}$, and $m(i)-\varepsilon(i)$ w.p. $\frac{1+\delta(j)}{2}$.

Also, let $\mathbb{E}_j$ and $\mathbb{P}_j$ denote expectation and probabilities (over the space of possible losses and indices) conditioned on the event $J = j$. With this construction, we note that $\mathbb{E}_j[\ell_t(i)] = m(i)$ for $i \neq j$, and $\mathbb{E}_j[\ell_t(j)] = m(j) - \varepsilon(j)\delta(j)$. As a result,

$$\mathbb{E}_j[\ell_t(I_t) - \ell_t(j)] \;=\; \mathbb{P}_j(I_t \neq j)\cdot \mathbb{E}_j[\ell_t(I_t) - \ell_t(j)\,|\,I_t \neq j] \;=\; \mathbb{P}_j(I_t \neq j)\,\varepsilon(j)\delta(j),$$

and therefore Eq. (6) equals

K∑j=1p(j)Ej[T∑t=1(ℓt(It)−ℓ"
] | [
null,
"https://deepai.org/static/images/logo.png",
null,
"https://deepai.org/static/images/twitter-icon-blue-circle.svg",
null,
"https://deepai.org/static/images/linkedin-icon-blue-circle.svg",
null,
"https://deepai.org/static/images/discord-icon-blue-circle.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92597574,"math_prob":0.96985686,"size":32271,"snap":"2022-27-2022-33","text_gpt3_token_len":7130,"char_repetition_ratio":0.15142405,"word_repetition_ratio":0.041061047,"special_character_ratio":0.22053857,"punctuation_ratio":0.12706298,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9923421,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-12T06:04:27Z\",\"WARC-Record-ID\":\"<urn:uuid:27e4a7d0-5386-4c1c-a55a-0049bb6ce0d0>\",\"Content-Length\":\"1048897\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:747f1184-5df4-48d1-88b9-5df32ba70ada>\",\"WARC-Concurrent-To\":\"<urn:uuid:d9b1afcc-8118-4d65-89b4-57ce20b520ee>\",\"WARC-IP-Address\":\"44.235.195.209\",\"WARC-Target-URI\":\"https://deepai.org/publication/bandit-regret-scaling-with-the-effective-loss-range\",\"WARC-Payload-Digest\":\"sha1:SZRAU3O4CKMM5ZL5E3TXV7CRQETKP56J\",\"WARC-Block-Digest\":\"sha1:5TGRMQY5LO5LXYVXLD6MSQNBTGYFSNTW\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571584.72_warc_CC-MAIN-20220812045352-20220812075352-00619.warc.gz\"}"} |
https://qa.auth.gr/en/class/1/600131166 | [
"# Biostatistics",
null,
"Title ΒΙΟΣΤΑΤΙΣΤΙΚΗ / Biostatistics Code 1.7 Faculty Health Sciences School Veterinary Cycle / Level 1st / Undergraduate Teaching Period Winter Coordinator Alexandros Theodoridis Common No Status Active Course ID 200000806\n\n### Programme of Study: PPS Tmīmatos Ktīniatrikīs (2020-2021)\n\nRegistered students: 0\nOrientationAttendance TypeSemesterYearECTS\nKORMOSCompulsory Course113\n\n Academic Year 2018 – 2019 Class Period Winter Faculty Instructors Weekly Hours 34 Class ID 600131166\nCourse Type 2016-2020\n• Background\nCourse Type 2011-2015\nGeneral Foundation\nMode of Delivery\n• Face to face\nLanguage of Instruction\n• Greek (Instruction, Examination)\nLearning Outcomes\nAfter completing this course, the students will be able: - to apply the appropriate statistical techniques for clustering and cumulative expression of sampling data, - to formulate the appropriate research/statistical hypotheses concerning a specific research or animal health problem, - to apply the appropriate methods for the statistical analysis of sampling data and hypothesis testing, - to use the SPSS software for performing various methods of statistical description and anal-ysis of sampling data, - to assess inductively the findings of a survey aimed at rational decisions conserning practical problems in the veterinary field, - to understand the importance of sampling and the role of statistical inference in veterinary research.\nGeneral Competences\n• Apply knowledge in practice\n• Retrieve, analyse and synthesise data and information, with the use of necessary technologies\n• Make decisions\n• Work in an interdisciplinary team\n• Advance free, creative and causative thinking\nCourse Content (Syllabus)\nBiostatistics (Theoretical lectures 18 hours) Teaching staff: Christos Batzios, Alexandros Theodoridis This course introduces concepts that will provide the student with a solid theoretical and empirical background for developing skills regarding the use of quantitative methods of Biostatistics for the collection, the presentatiuon, the analysis and the evaluation of sampling data. Emphasis is given on the empirical application of Biostatistics and on the statistical assessment and interpretation of bio-medical and agricultural data in order to support rational decision making regarding speifc problems in research and animal heath care. 1st hour Introduction to Statistics The nature of Statistics The use of computers in statistical analysis Populations and samples Basic statistical terms (variable, observation, populations, samples, etc.) 2nd hour Presentation and classification of statistical data Statistical tables and charts Frequency distributions (continuous, discrete and qualitative variables) Graphical presentation of frequency distributions 3rd-5th hour Statistical measures Basic measures of central tendency (arithmetical mean, weighted mean, median, mode, geometrical mean, quartiles, etc.) 
Choice of the appropriate measure of central tendency; Measures of dispersion (range, interquartile range, mean absolute deviation, variance, standard deviation, coefficient of variation, Tchebysheff's theorem, empirical rule); The effect of simple transformations on mean and variance; Measures of skewness; Measures of kurtosis

6th hour: Elements of probability theory, random variables — Statistical experiment, trial, events, etc.; The meaning of probability (classical definition of probability, definition of probability as a limit of relative frequency, definition of subjective probability, axiomatic definition of probability); Calculation of probability, basic theorems of probability, probability rules (multiplication rule, addition rule, Bayes' theorem); Random variables and probability distributions (discrete and continuous probability distributions, discrete and continuous random variables)

7th-9th hour: Theoretical distributions — Discrete theoretical distributions (Binomial distribution, Poisson distribution); Continuous theoretical distributions (normal distribution, standard normal distribution Z, chi-squared distribution, t distribution, F distribution)

10th-11th hour: Sampling (methods, distributions) — Principles of sampling (random and directed sampling); Sampling distributions of the mean, of the proportion, of the difference between two means, of the variance, etc.; The central limit theorem

12th hour: Estimation — Point and interval estimation; Confidence interval of the mean, of the variance, of the difference between two means, of the proportion, etc.; Sampling errors; Determination of sample size

13th-14th hour: Hypothesis testing — Statistical hypotheses; Hypothesis tests and errors; Hypothesis test about a mean and about the difference between two means; Hypothesis test of the variance and of the ratio of two variances; Hypothesis test of the proportion and of the difference between two proportions

15th hour: Analysis of frequencies — Test of goodness-of-fit; Test of independence; Test of homogeneity

16th hour: General principles of analysis of variance — One-way analysis of variance; The completely randomized design, multiple comparison tests; Hypotheses in analysis of variance

17th hour: Non-parametric hypothesis tests — Test of goodness-of-fit (K-S); Tests for two samples, tests for k samples; Transformations and normality, etc.

18th hour: Simple regression and correlation — Least squares method; Interpretation of the regression equation; Linear correlation

Exercises - Laboratories (16 hours)

2 hours: Construction of distribution tables with classification of data for continuous and discrete variables of veterinary interest. Methods of graphical presentation of frequency distributions (histograms, polygonal lines, etc.)

3 hours: Examples of calculation of descriptive measures of central tendency. Choice of the appropriate measure of central tendency. Applications of calculation of statistical measures of dispersion, skewness and kurtosis

3 hours: Calculation of probability. Examples of use of tables of theoretical distributions (Binomial distribution, Poisson distribution, Z distribution, t distribution, chi-squared distribution, F distribution). Examples of probability calculation and sampling error when sampling with and without replacement. Applications of the central limit theorem.
Calculation of confidence intervals of the mean, the variance, the difference between two means, the proportion, etc.

3 hours: Problems of sample size calculation when the objective is the estimation of the mean or the proportion in simple random and stratified random sampling. Problems of hypothesis testing for the mean, the difference between two means, the proportion, the difference between two proportions, the variance, and the ratio of two variances

2 hours: Problems of testing goodness-of-fit. Analysis of frequencies classified in 2x2, 2xc and rxc tables with the application of tests for independence and homogeneity

3 hours: Databases for agricultural research. Retrieving data from FAO databases (Agriculture, Fisheries, etc.). Use of the statistical package SPSS for descriptive and inferential analysis of experimental data, estimation and interpretation of regression equations, analysis of variance and non-parametric analysis, with emphasis given to the interpretation of results

Keywords
Biostatistics, descriptive statistics, sampling, hypothesis testing, statistical inference.

Educational Material Types
• Notes
• Slide presentations
• Video lectures
• Multimedia
• Interactive exercises
• Book

Use of Information and Communication Technologies
Use of ICT
• Use of ICT in Course Teaching
• Use of ICT in Laboratory Teaching
• Use of ICT in Communication with Students

Course Organization
Lectures
Laboratory Work
Tutorial
Interactive Teaching in Information Center
Exams
Total

Student Assessment
Student Assessment methods
• Written Exam with Multiple Choice Questions (Formative, Summative)
• Written Exam with Short Answer Questions (Summative)
• Written Exam with Problem Solving (Formative, Summative)

Bibliography
Course Bibliography (Eudoxus)
Chr. Batzios (1999). STATISTICS (Volume A): Applied Statistics in Veterinary Education [ΣΤΑΤΙΣΤΙΚΗ (Τεύχος Α΄): Εφαρμοσμένη Στατιστική στην Κτηνιατρική Εκπαίδευση]. Sygchroni Paideia, 227 pages, Thessaloniki."
] | [
null,
"https://qa.auth.gr/images/icons/small_doc.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8347525,"math_prob":0.9081812,"size":5517,"snap":"2023-40-2023-50","text_gpt3_token_len":1067,"char_repetition_ratio":0.17413387,"word_repetition_ratio":0.0332871,"special_character_ratio":0.16476347,"punctuation_ratio":0.10444178,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99213976,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T09:09:04Z\",\"WARC-Record-ID\":\"<urn:uuid:14bf0dac-ad67-45e4-b0c8-5a896d568c44>\",\"Content-Length\":\"43426\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f32a1394-9e5a-4972-b401-daabc2769523>\",\"WARC-Concurrent-To\":\"<urn:uuid:1666bc38-d2bd-4124-a187-90e95883be9f>\",\"WARC-IP-Address\":\"155.207.1.156\",\"WARC-Target-URI\":\"https://qa.auth.gr/en/class/1/600131166\",\"WARC-Payload-Digest\":\"sha1:UAEQK37ZRNDK2RFTHEYPCQ2AOWXZU5VA\",\"WARC-Block-Digest\":\"sha1:C7RAYA4NVXGDC4IRFK4XXOMCFDCS6FSU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510368.33_warc_CC-MAIN-20230928063033-20230928093033-00109.warc.gz\"}"} |
https://suv35.cn/zuoye/12667890/index.htm | [
"# ①6-(12)÷(-3) ②3×(-4)+(-28)÷7 ③4.7-(-8.9)-7.5+(-6)\n\n①6-(12)÷(-3)②3×(-4)+(-28)÷7③4.7-(-8.9)-7.5+(-6)①6-(12)÷(-3)②3×(-4)+(-28)÷7③4.7-(-8.9)-7.5+(-6)①6-(12\n\n①6-(12)÷(-3) ②3×(-4)+(-28)÷7 ③4.7-(-8.9)-7.5+(-6)\n①6-(12)÷(-3) ②3×(-4)+(-28)÷7 ③4.7-(-8.9)-7.5+(-6)\n\n①6-(12)÷(-3) ②3×(-4)+(-28)÷7 ③4.7-(-8.9)-7.5+(-6)\n①6-(12)÷(-3)\n=6-(-4)\n=6+4\n=10\n②3×(-4)+(-28)÷7\n=(-12)-28\n=-(12+28)\n=-40\n③4.7-(-8.9)-7.5+(-6)\n=4.7+8.9-7.5-6\n=13.6-13.5\n=0.1"
] | [
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.831074,"math_prob":0.9999608,"size":2114,"snap":"2019-43-2019-47","text_gpt3_token_len":2045,"char_repetition_ratio":0.16161138,"word_repetition_ratio":0.1878453,"special_character_ratio":0.6972564,"punctuation_ratio":0.09818731,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9961495,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T13:33:50Z\",\"WARC-Record-ID\":\"<urn:uuid:2a857b9f-2ffa-4c5a-9fa5-7207ed892224>\",\"Content-Length\":\"21248\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e94ae2ad-9b99-4de8-b8b7-41dcf0c06172>\",\"WARC-Concurrent-To\":\"<urn:uuid:68c7542a-c7c0-48d2-a4b6-36ca22cf83eb>\",\"WARC-IP-Address\":\"156.233.162.247\",\"WARC-Target-URI\":\"https://suv35.cn/zuoye/12667890/index.htm\",\"WARC-Payload-Digest\":\"sha1:MFK3MOWZQFL7MPKNTSBRNJXBVTBUIAFE\",\"WARC-Block-Digest\":\"sha1:53ISKEAO47HLLRVXJTIXM62Q3TKVJ76Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668644.10_warc_CC-MAIN-20191115120854-20191115144854-00218.warc.gz\"}"} |
http://www.tryscollege.org.in/mill-machine-2017/25518-calculate-sag-ball-mill-circulating-load.html | [
"## calculate sag ball mill circulating load\n\n### calculating mill recirculating load -\n\nWhy Do We Need Such A High Recirculating Load On Our Ball Mill. 1 on how to calculate the CLR for your circuit.) ... slurry pooling and transport issues in sag mills - zenith ... VRM and ball mill circulating load - International Cement Review. how can we find VRM n ball mill circulating load ... in relation to fines & it can be calculated by ...\n\n### calculate ball mill recirculating load -\n\n7 feb 2014 ball mill recirculating load for calculate sag/ball mill circulating load. ball mill circulating load calculate ball mill circulating load pdf . Chat Online; Get Pirce. Formula Calculation Recirculating Load In A Mill.\n\n### (PDF) Circulating load calculation in grinding circuits\n\nHowever, the effect of each is difficult to quantify in practice as these two parameters are usually interrelated. Based on experience acquired over the years and the investigative work conducted by F.C. Bond, it was established that the optimum circulating load for a closed ball mill – cyclone circuit is around 250%.\n\n### circulating load calculation in closed circuit ball mill\n\nHPGR /ball mill, vertical roller mill and closed-circuit roller press for finish grinding. . material under the roller which means the adjustment of circulating load. . calculate the ratio of size reduction which can be given by Eq.\n\n### Ball Mill Load Calculation - .za\n\nBall Mill Load Calculation - canei. calculate sag ball mill circulating load - Ball Mill Loading (dry milling) When charging a ball mill, ceramic lined mill, pebble mill, Live Chat; grinding mill circulating load gold ore - design-line.\n\n### mill circulating load -\n\ncalculate circulating load ball mill - samassociates. circulating load for ball mill - rrcserin. circulating load for ball mill, Here is a formula that allows you to calculate the circulating load ratio around a ball ...\n\n### Circulating Load Calculation Formula\n\nFor example your ball mill is in closed circuit with a set of cyclones. The grinding mill receives crushed ore feed. The pulp densities around your cyclone are sampled and known over an 8-hour shift, allowing to calculate corresponding to circulating load ratios and circulating load tonnage on tons/day or tons/hour.\n\n### calculate sagball mill circulating load -\n\n250% circulating load. . index for a ball mill,. Wi, is then calculated from the following equation. . energy predictions for SAG/ball mill circuits (Morrell 2004b).\n\n### calculate ball mill recirculating load -\n\n7 feb 2014 ball mill recirculating load for calculate sag/ball mill circulating load. ball mill circulating load calculate ball mill circulating load pdf . Chat Online; Get Pirce. Formula Calculation Recirculating Load In A Mill.\n\n### mining ball mill circulating load - .za\n\nThe 2.0 MW semi-autogenous grinding SAG mill and 3.7 MW ball mill were assisted by a. 150 kW pebble Open pit mining is by conventional drill/blast, load/ haul method. .. 
allowed much better control over the ball mill circulating load.\n\n### circulating load formula of ball mill -\n\ncalculating circulating load for the ball mill Ball Mill Circulating Load Formula grinding mill equipmentHere is a formula that allows you to calculate the circulating load ratio around a ball mill and hydrocylone as part of a grinding circuit For example your ball mill -calculating circulating load for the ball mill-,Lecture 11 Material ...\n\n### calculate sag ball mill circulating load -\n\ncalculate sag ball mill circulating load calculation of re-circulating load in ball mill … calculate circulating load – OneMine Mining and Minerals Library ... crushed ore is fed into a SAG/Ball mill circuit with a ... ball mill circulating load calculation.\n\n### calculate ball mill recirculating load -\n\nformula calculation recirculating load in a mill. calculate sag ball mill circulating load Zenith Group is a professional mining machinery ball mill recirculating load calculation pdf ball mill. Chat With Sales. circulating load of dynamic seperator in ball mill .\n\n### how to calculate ball mill loading - .za\n\nExperimental investigation of the power draw of tumbling mills in wet ... Calculation of the power draw of dry multi–compartment ball mills. ... and size effects on the ball load behaviour and power in a dry pilot mill: experimental study.\n\n### ball mill recirculating load calculation pdf – Grinding ...\n\nball mill calculation pdf - SBM mining equipments applied . Home > SBM designed>ball mill recirculating load calculation pdf. All data required by the calculation routine must be defined in each corresponding Chat online. » Learn More. Circulating Load In A Ball Mill. ball mill recirculating load calculation pdf. THE SIZING AND SELECTION OF HYDROCYCLONES FL.\n\n### calculate sag/ball mill circulating load\n\ncirculating load of dynamic seperator in ball mill calculating India. circulating load calculation in malaysia – Gold Ore Crusher. circulating load of ball mill... Request Quotation how to calculate sag mill ball charge »More detailed.\n\n### calculating mill recirculating load -\n\nWhy Do We Need Such A High Recirculating Load On Our Ball Mill. 1 on how to calculate the CLR for your circuit.) ... slurry pooling and transport issues in sag mills - zenith ... VRM and ball mill circulating load - International Cement Review. how can we find VRM n ball mill circulating load ... in relation to fines & it can be calculated by ...\n\n### Calculate Sagball Mill Circulating Load -\n\nSag Mill Recirculating Load Rate ball mill recirculating load ... Calculate Sagball Mill Circulating Load. The SAG Mill has a ... and that the the circulation rate of ball mill circulating load . Chat Now; Relationships between comminution energy and ... 250% circulating load.\n\n### formula mill circulation load of cement ball mill -\n\ncalculate circulating load ball mill - ikengineering ... load of cement mill calculate. calculate sag/ball mill ... Get Price. curve of ball mill pdf. synthetic approach ... Types of Ball Mills ball mill load curve formula - Crusher Screen Plate ... versus ball mills Cement grinding Vertical roller mills versus ball ... High circulation factors ...\n\n### calculate sagball mill circulating load -\n\n250% circulating load. . index for a ball mill,. Wi, is then calculated from the following equation. . 
energy predictions for SAG/ball mill circuits (Morrell 2004b).\n\n### Recirculation Load Calculation In Ball Mill\n\nraw mill tertiary crusher recirculating load calculator. recirculating load to the sag mill, raw mill tertiary crusher recirculating load calculator. ball mill circulating load calculation . raw mill tertiary crusher ...\n\n### Circulating Load Formula Of Ball Mill -\n\nHere is a formula that allows you to calculate the circulating load ratio around a ball mill and hydrocylone as part of a grinding circuit. For example your ball mill is in closed circuit with a set of cyclones.\n\n### calculate sagball mill circulating load -\n\ncalculate sag ball mill circulating load crusher circulating load calculationGrinding Mill . how to calculate ball mill circulating load for iron ore processing pdf. Chat With Sales Calculation Recirculating Load In A Mill. Get a Price. grinding mill circulating load calculation - .\n\n### calculate ball mill recirculating load -\n\nformula calculation recirculating load in a mill. calculate sag ball mill circulating load Zenith Group is a professional mining machinery ball mill recirculating load calculation pdf ball mill. Chat With Sales. circulating load of dynamic seperator in ball mill .\n\n### how to calculate ball mill loading - .za\n\nExperimental investigation of the power draw of tumbling mills in wet ... Calculation of the power draw of dry multi–compartment ball mills. ... and size effects on the ball load behaviour and power in a dry pilot mill: experimental study."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8052557,"math_prob":0.99219996,"size":6617,"snap":"2019-13-2019-22","text_gpt3_token_len":1343,"char_repetition_ratio":0.31619537,"word_repetition_ratio":0.45786518,"special_character_ratio":0.2061357,"punctuation_ratio":0.1557377,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99690795,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-21T08:14:44Z\",\"WARC-Record-ID\":\"<urn:uuid:18c28a46-5730-433f-bd3f-5638fc9c391c>\",\"Content-Length\":\"37658\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6222f4fd-cac3-4233-b979-8836e3bf0d95>\",\"WARC-Concurrent-To\":\"<urn:uuid:3e02a403-e779-4499-84db-a03545264d13>\",\"WARC-IP-Address\":\"104.31.80.177\",\"WARC-Target-URI\":\"http://www.tryscollege.org.in/mill-machine-2017/25518-calculate-sag-ball-mill-circulating-load.html\",\"WARC-Payload-Digest\":\"sha1:ZGRN6HRDW4ASHPJYFPK7DJV4QNAY2YWP\",\"WARC-Block-Digest\":\"sha1:BVQQFH47DLJJWQ3DZTA3TMYZSUFUEHJ3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202506.45_warc_CC-MAIN-20190321072128-20190321094128-00200.warc.gz\"}"} |
https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Book%3A_OpenIntro_Statistics_(Diez_et_al)./2%3A_Probability/2.3%3A_Conditional_Probability_II | [
"# 2.3: Conditional Probability II\n\nExample 2.55 highlights why doctors often run more tests regardless of a first positive test result. When a medical condition is rare, a single positive test isn't generally definitive. Consider again the last equation of Example 2.55. Using the tree diagram, we can see that the numerator (the top of the fraction) is equal to the following product:\n\n$P((has BC and mammogram^+) = P(mammogram^+ | has BC)P(has BC)$\n\nThe denominator - the probability the screening was positive - is equal to the sum of probabilities for each positive screening scenario:\n\n$P(mammogram^+) = P(mammogram^+ and no BC) + P(mammogram^+ and has BC)$\n\nIn the example, each of the probabilities on the right side was broken down into a product of a conditional probability and marginal probability using the tree diagram.\n\n$P(\\underline {mammogram^+}) = P(\\underline {mammogram^+} and no BC) + P(\\underline {mammogram^+} and has BC$\n\n$= P(mammogram^+ | no BC)P(no BC) + P(mammogram^+| has BC)P(has BC)$\n\nWe can see an application of Bayes' Theorem by substituting the resulting probability expressions into the numerator and denominator of the original conditional probability.\n\n$P (has BC | mammogram^+) = \\frac {P (mammogram^+ | has BC) P (has BC)}{P(mammogram^+ | no BC) + P(mammogram^+ | has BC)P(has BC)}$\n\nBayes' Theorem: inverting probabilities\n\nConsider the following conditional probability for variable 1 and variable 2:\n\n$P (outcome A_1 of variable 1| outcome B of variable 2)$\n\nBayes' Theorem states that this conditional probability can be identified as the following fraction:\n\n$\\frac {P(B|A_1)P(A_1)}{P(B|A_1)P(A_1) + P(B|A_2)P(A_2) + \\dots + P(B|A_k)P(A_k)} \\label {2.56}$\n\nwhere A2, A3, ..., and Ak represent all other possible outcomes of the rst variable.\n\nBayes' Theorem is just a generalization of what we have done using tree diagrams. The numerator identi es the probability of getting both A1 and B. The denominator is the marginal probability of getting B. This bottom component of the fraction appears long and complicated since we have to add up probabilities from all of the different ways to get B. We always completed this step when using tree diagrams. However, we usually did it in a separate step so it didn't seem as complex.\n\nTo apply Bayes' Theorem correctly, there are two preparatory steps:\n\n(1) First identify the marginal probabilities of each possible outcome of the first variable:\n\n$P(A1), P(A2), ..., P(Ak).$\n\n(2) Then identify the probability of the outcome B, conditioned on each possible scenario for the first variable:\n\n$P(B|A1), P(B|A2), ..., P(B|Ak).$\n\nOnce each of these probabilities are identi ed, they can be applied directly within the formula.\n\nTIP: Only use Bayes' Theorem when tree diagrams are difficult\n\nDrawing a tree diagram makes it easier to understand how two variables are connected. Use Bayes' Theorem only when there are so many scenarios that drawing a tree diagram would be complex.\n\nExercise 2.57 Jose visits campus every Thursday evening. However, some days the parking garage is full, often due to college events. There are academic events on 35% of evenings, sporting events on 20% of evenings, and no events on 45% of evenings. When there is an academic event, the garage fills up about 25% of the time, and it lls up 70% of evenings with sporting events. On evenings when there are no events, it only fills up about 5% of the time. 
If Jose comes to campus and finds the garage full, what is the probability that there is a sporting event? Use a tree diagram to solve this problem.[40]

Example 2.58 Here we solve the same problem presented in Exercise 2.57, except this time we use Bayes' Theorem.

The outcome of interest is whether there is a sporting event (call this $A_1$), and the condition is that the lot is full ($B$). Let $A_2$ represent an academic event and $A_3$ represent there being no event on campus. Then the given probabilities can be written as

$P(A_1) = 0.2 \qquad P(A_2) = 0.35 \qquad P(A_3) = 0.45$

$P(B|A_1) = 0.7 \qquad P(B|A_2) = 0.25 \qquad P(B|A_3) = 0.05$

Bayes' Theorem can be used to compute the probability of a sporting event ($A_1$) under the condition that the parking lot is full ($B$):

$P(A_1|B) = \frac{P(B|A_1)P(A_1)}{P(B|A_1)P(A_1) + P(B|A_2)P(A_2) + P(B|A_3)P(A_3)}$

$= \frac{(0.7)(0.2)}{(0.7)(0.2) + (0.25)(0.35) + (0.05)(0.45)}$

$= 0.56$

Based on the information that the garage is full, there is a 56% probability that a sporting event is being held on campus that evening.

Exercise 2.59 Use the information in the previous exercise and example to verify that the probability there is an academic event, conditioned on the parking lot being full, is 0.35.[41]

[40] The tree diagram, with three primary branches, is shown below. Next, we identify two probabilities from the tree diagram. (1) The probability that there is a sporting event and the garage is full: 0.14. (2) The probability the garage is full: $$0.0875 + 0.14 + 0.0225 = 0.25$$. Then the solution is the ratio of these probabilities: $$\frac {0.14}{0.25} = 0.56$$. If the garage is full, there is a 56% probability that there is a sporting event.",
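These tree-diagram numbers are also easy to verify in code. The following short sketch (plain Python, our own illustration) encodes the priors and conditional fill-up rates from Exercise 2.57 and applies Bayes' Theorem to all three scenarios at once:

```python
# Priors over event types and P(lot full | event type), from Exercise 2.57
priors = {"sporting": 0.20, "academic": 0.35, "none": 0.45}
p_full = {"sporting": 0.70, "academic": 0.25, "none": 0.05}

# Denominator of Bayes' Theorem: total probability the lot is full
p_b = sum(priors[a] * p_full[a] for a in priors)             # 0.25

# Posterior P(event type | lot full) for each scenario
posterior = {a: priors[a] * p_full[a] / p_b for a in priors}
print(posterior)  # {'sporting': 0.56, 'academic': 0.35, 'none': 0.09}
```

As a sanity check, the three posteriors sum to 1, as they must.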
null,
"41Short answer: $P(A2|B) = \\frac {P(B|A2)P(A1)}{P(B|A1)P(A1) + P(B|A2)P(A2) + P(B|A3)P(A3)}$\n\n$= \\frac {(0.25)(0.35)}{(0.7)(0.2) + (0.25)(0.35) + (0.05)(0.45)}$\n\n$= 0.35$\n\nExercise $$\\PageIndex{1}$$\n\nExercise 2.60 In Exercise 2.57 and 2.59, you found that if the parking lot is full, the probability a sporting event is 0.56 and the probability there is an academic event is 0.35. Using this information, compute P(no event j the lot is full).42\n\nThe last several exercises offered a way to update our belief about whether there is a sporting event, academic event, or no event going on at the school based on the information that the parking lot was full. This strategy of updating beliefs using Bayes' Theorem is actually the foundation of an entire section of statistics called Bayesian statistics. While Bayesian statistics is very important and useful, we will not have time to cover much more of it in this book.\n\n## Contributors\n\nDavid M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)"
] | [
null,
"https://stats.libretexts.org/@api/deki/files/810/tree_diagram_4_.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90834874,"math_prob":0.996526,"size":6176,"snap":"2019-51-2020-05","text_gpt3_token_len":1733,"char_repetition_ratio":0.14744005,"word_repetition_ratio":0.031746034,"special_character_ratio":0.2945272,"punctuation_ratio":0.11844197,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997522,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T07:44:33Z\",\"WARC-Record-ID\":\"<urn:uuid:f752922f-c928-4ecb-a45e-f05a97135b33>\",\"Content-Length\":\"80384\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b3382236-1edf-4da2-ac72-cdb050b25138>\",\"WARC-Concurrent-To\":\"<urn:uuid:4395ad24-da4f-467b-924b-ed5cdeb398b5>\",\"WARC-IP-Address\":\"34.232.212.106\",\"WARC-Target-URI\":\"https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Book%3A_OpenIntro_Statistics_(Diez_et_al)./2%3A_Probability/2.3%3A_Conditional_Probability_II\",\"WARC-Payload-Digest\":\"sha1:IU74UKUVPV7T3SSX6OYYPL24ROL4K65W\",\"WARC-Block-Digest\":\"sha1:VA6A2JON5QKZAOFLM66OG74YJBZ4LA63\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540507109.28_warc_CC-MAIN-20191208072107-20191208100107-00435.warc.gz\"}"} |
https://uk.mathworks.com/help/risk/varbacktest.pof.html | [
"# pof\n\nProportion of failures test for value-at-risk (VaR) backtesting\n\n## Syntax\n\n``TestResults = pof(vbt)``\n``TestResults = pof(vbt,Name,Value)``\n\n## Description\n\nexample\n\n````TestResults = pof(vbt)` generates the proportion of failures (POF) test for value-at-risk (VaR) backtesting.```\n\nexample\n\n````TestResults = pof(vbt,Name,Value)` adds an optional name-value pair argument for `TestLevel`.```\n\n## Examples\n\ncollapse all\n\nCreate a `varbacktest` object.\n\n```load VaRBacktestData vbt = varbacktest(EquityIndex,Normal95)```\n```vbt = varbacktest with properties: PortfolioData: [1043x1 double] VaRData: [1043x1 double] PortfolioID: \"Portfolio\" VaRID: \"VaR\" VaRLevel: 0.9500 ```\n\nGenerate the `pof` test results.\n\n`TestResults = pof(vbt,'TestLevel',0.99)`\n```TestResults=1×9 table PortfolioID VaRID VaRLevel POF LRatioPOF PValuePOF Observations Failures TestLevel ___________ _____ ________ ______ _________ _________ ____________ ________ _________ \"Portfolio\" \"VaR\" 0.95 accept 0.46147 0.49694 1043 57 0.99 ```\n\nUse the `varbacktest` constructor with name-value pair arguments to create a `varbacktest` object.\n\n```load VaRBacktestData vbt = varbacktest(EquityIndex,... [Normal95 Normal99 Historical95 Historical99 EWMA95 EWMA99],... 'PortfolioID','Equity',... 'VaRID',{'Normal95' 'Normal99' 'Historical95' 'Historical99' 'EWMA95' 'EWMA99'},... 'VaRLevel',[0.95 0.99 0.95 0.99 0.95 0.99])```\n```vbt = varbacktest with properties: PortfolioData: [1043x1 double] VaRData: [1043x6 double] PortfolioID: \"Equity\" VaRID: [1x6 string] VaRLevel: [0.9500 0.9900 0.9500 0.9900 0.9500 0.9900] ```\n\nGenerate the `pof` test results using the `TestLevel` optional input.\n\n`TestResults = pof(vbt,'TestLevel',0.90)`\n```TestResults=6×9 table PortfolioID VaRID VaRLevel POF LRatioPOF PValuePOF Observations Failures TestLevel ___________ ______________ ________ ______ _________ _________ ____________ ________ _________ \"Equity\" \"Normal95\" 0.95 accept 0.46147 0.49694 1043 57 0.9 \"Equity\" \"Normal99\" 0.99 reject 3.5118 0.060933 1043 17 0.9 \"Equity\" \"Historical95\" 0.95 accept 0.91023 0.34005 1043 59 0.9 \"Equity\" \"Historical99\" 0.99 accept 0.22768 0.63325 1043 12 0.9 \"Equity\" \"EWMA95\" 0.95 accept 0.91023 0.34005 1043 59 0.9 \"Equity\" \"EWMA99\" 0.99 reject 9.8298 0.0017171 1043 22 0.9 ```\n\n## Input Arguments\n\ncollapse all\n\n`varbacktest` (`vbt`) object, contains a copy of the given data (the `PortfolioData` and `VarData` properties) and all combinations of portfolio ID, VaR ID, and VaR levels to be tested. For more information on creating a `varbacktest` object, see `varbacktest`.\n\n### Name-Value Pair Arguments\n\nSpecify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.\n\nExample: ```TestResults = pof(vbt,'TestLevel',0.99)```\n\nTest confidence level, specified as the comma-separated pair consisting of `'TestLevel'` and a numeric between `0` and `1`.\n\nData Types: `double`\n\n## Output Arguments\n\ncollapse all\n\n`pof` test results, returned as a table where the rows correspond to all combinations of portfolio ID, VaR ID, and VaR level to be tested. 
The columns correspond to the following information:\n\n• `'PortfolioID'` — Portfolio ID for the given data\n\n• `'VaRID'` — VaR ID for each of the VaR data columns provided\n\n• `'VaRLevel'` — VaR level for the corresponding VaR data column\n\n• `'POF'` — Categorical array with the categories `accept` and `reject` that indicate the result of the `pof` test\n\n• `'LRatioPOF'` — Likelihood ratio of the `pof` test\n\n• `'PValuePOF'` — P-value of the `pof` test\n\n• `'Observations'` — Number of observations\n\n• `'Failures'` — Number of failures\n\n• `'TestLevel'` — Test confidence level\n\n### Note\n\nFor `pof` test results, the terms `accept` and `reject` are used for convenience; technically, a `pof` test does not accept a model. Rather, the test fails to reject it.\n\n### Proportion of Failures (POF) Test\n\nThe `pof` function performs Kupiec's proportion of failures test.\n\nThe POF test is a likelihood ratio test proposed by Kupiec (1995) to assess if the proportion of failures (number of failures divided by number of observations) is consistent with the VaR confidence level.\n\n## Algorithms\n\nThe likelihood ratio (test statistic) of the `pof` test is given by\n\n`$LRatioPOF=-2\mathrm{log}\left(\frac{{\left(1-pVaR\right)}^{N-x}pVa{R}^{x}}{{\left(1-\frac{x}{N}\right)}^{N-x}{\left(\frac{x}{N}\right)}^{x}}\right)=-2\left[\left(N-x\right)\mathrm{log}\left(\frac{N\left(1-pVaR\right)}{N-x}\right)+x\mathrm{log}\left(\frac{NpVaR}{x}\right)\right]$`\n\nwhere N is the number of observations, x is the number of failures, and pVaR = 1 − VaRLevel. This test statistic is asymptotically distributed as a chi-square distribution with 1 degree of freedom. By the properties of the logarithm, the limiting cases reduce to\n\n`$LRatioPOF=-2N\mathrm{log}\left(1-pVaR\right)$` when x = 0,\n\nand\n\n`$LRatioPOF=-2N\mathrm{log}\left(pVaR\right)$` when x = N.\n\nThe p-value of the POF test is the probability that a chi-square distribution with 1 degree of freedom exceeds the likelihood ratio LRatioPOF\n\n`$PValuePOF=1-F\left(LRatioPOF\right)$`\n\nwhere F is the cumulative distribution of a chi-square variable with 1 degree of freedom.\n\nThe result of the test is to accept if\n\n`$PValuePOF\ge 1-TestLevel$`\n\nand reject otherwise.\n\n Kupiec, P. \"Techniques for Verifying the Accuracy of Risk Management Models.\" Journal of Derivatives. Vol. 3, 1995, pp. 73–84."
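For readers without access to MATLAB, here is a minimal Python sketch of the same likelihood-ratio computation, using SciPy's chi-square CDF. The function name `pof_test` and its signature are illustrative and not part of the MathWorks API; the x = 0 and x = N limits given above are not special-cased here.

```python
from math import log
from scipy.stats import chi2

def pof_test(N, x, var_level, test_level=0.95):
    """Kupiec POF test for N observations with x VaR failures (hypothetical helper)."""
    p_var = 1.0 - var_level  # expected failure probability, pVaR = 1 - VaRLevel
    # Likelihood ratio from the formula above; assumes 0 < x < N
    lratio = -2.0 * ((N - x) * log(N * (1 - p_var) / (N - x)) + x * log(N * p_var / x))
    p_value = 1.0 - chi2.cdf(lratio, df=1)  # chi-square with 1 degree of freedom
    decision = "accept" if p_value >= 1.0 - test_level else "reject"
    return lratio, p_value, decision

# Reproduces the first example: 1043 observations, 57 failures, 95% VaR, 0.99 test level
print(pof_test(1043, 57, 0.95, test_level=0.99))  # (~0.46147, ~0.49694, 'accept')
```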
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.76448274,"math_prob":0.90435946,"size":1157,"snap":"2020-34-2020-40","text_gpt3_token_len":309,"char_repetition_ratio":0.12055507,"word_repetition_ratio":0.15425532,"special_character_ratio":0.23076923,"punctuation_ratio":0.09859155,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9850716,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-12T04:35:06Z\",\"WARC-Record-ID\":\"<urn:uuid:d6c63511-0833-4c31-8403-6947d86cf550>\",\"Content-Length\":\"95429\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e93b76d0-541e-43fc-95a9-4a48bc0fbf36>\",\"WARC-Concurrent-To\":\"<urn:uuid:950f7f5c-ed60-4d0f-9c05-a227c4986b9b>\",\"WARC-IP-Address\":\"23.66.56.59\",\"WARC-Target-URI\":\"https://uk.mathworks.com/help/risk/varbacktest.pof.html\",\"WARC-Payload-Digest\":\"sha1:MZ6GDUYKT6AIIGLV32TILL6JVTHQKABQ\",\"WARC-Block-Digest\":\"sha1:RPCP5UYHCW42WQRZXJUCJREHRJCSWGJX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738864.9_warc_CC-MAIN-20200812024530-20200812054530-00572.warc.gz\"}"} |
https://smart-answers.com/mathematics/question3765553 | [
"",
null,
", 12.11.2019 11:31, xjdjsisi\n\n# Which expression is equivalent to the following complex fraction? a. b. c. d.",
null,
"",
null,
"",
null,
"",
null,
"### Other questions on the subject: Mathematics",
null,
"In factoring by grouping, what would you have for step 3 for the following? step 1: x^3 +5x^2 +4x+20 step 2: x^2 (x+5)+4(x+5) step 3: ?",
null,
"Mathematics, 21.06.2019 15:30, jdkrisdaimcc11\nFibonacci follies: suppose you are playing a round of fibonacci nim with a friend. you start with 15 sticks. you start by removing two sticks; your friend then takes one; you take two; your friend takes one. what should your next move be? can you make it without breaking the rules of the game? did you make a mistake at some point? if so, where?",
null,
"Mathematics, 21.06.2019 16:30, ksweeny02\nAflute is on sale for 20% off. including the discount and 8% tax, the sales price is \\$216.",
null,
"Mathematics, 21.06.2019 16:30, maycigrimaldi4990\nProblem fathi wants to print out a pdf document that is 48 pages long. to save paper, he decides to print on both sides of each sheet and to print two pages on each side of the sheet. how many sheets of paper will he need?\nDo you know the correct answer?\nWhich expression is equivalent to the following complex fraction?\n\na.\nb.\nc...\n\n### Questions in other subjects:",
] | [
null,
"https://smart-answers.com/tpl/images/10/12/ZBwIa0wN4NBwlFD8.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8730063,"math_prob":0.86419046,"size":1213,"snap":"2020-45-2020-50","text_gpt3_token_len":398,"char_repetition_ratio":0.1414392,"word_repetition_ratio":0.092307694,"special_character_ratio":0.34789777,"punctuation_ratio":0.24595469,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9772226,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-25T08:24:38Z\",\"WARC-Record-ID\":\"<urn:uuid:223c79a7-e3fe-4734-8cba-cfcda3191e56>\",\"Content-Length\":\"124347\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:30d1796f-8946-4238-921f-967589bf7550>\",\"WARC-Concurrent-To\":\"<urn:uuid:50879742-b087-4dc4-9b9f-4832c6a6d9b8>\",\"WARC-IP-Address\":\"64.74.160.240\",\"WARC-Target-URI\":\"https://smart-answers.com/mathematics/question3765553\",\"WARC-Payload-Digest\":\"sha1:ZXELRAHQCQCMXQFMN2DTYZS3CHGVJJXJ\",\"WARC-Block-Digest\":\"sha1:HMEHXA7Z42PMLNFLIJ5ZT2F6QUFEXAIZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141181482.18_warc_CC-MAIN-20201125071137-20201125101137-00358.warc.gz\"}"} |
https://calculatorsonline.org/65-increased-by-68-percent | [
"# 65 increased by 68 percent\n\nHere you will see step by step solution to calculate 65 increased by 68 percent. What is 65 increased by 68%? It is 109.2, and the increase is 44.2. Check the detailed explanation of answer given below.\n\n## Answer: 65 increased by 68% is\n\n= 109.2\n\n### How to increase the number 65 by 68 percent?\n\nWith the help of given formula we can get the the percent increase value -\n\nformula 1: Percentage increase = P% × n, P = Percent, n = Number\n\nformula 2: Result = n + Percentage increase\nHere we have, n = 65, P = 68%\n\n#### Solution for 65 increased by 68 percent(68%) -\n\nGiven number n = 65, P = 68%\n\n• Put the n and P values in the given formula:\n• => 68% × 65\n=> 68/100 × 65\n\n• Now we need to simplify the fraction by multiply 68 with 65, then divide it by 100\n• => 68 × 65/100 = 4420/100 = 44.2\n• Percentage Increase = 44.2\n• Now we will use formula 2 to get the value of 65 increased by 68%\n• = 65 + 44.2\n= 109.2\n\nTherefore, result is for 65 increased by 68% is 109.2 and increase is 44.2.\n\nNumber:\nIncreased by\n%"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91644835,"math_prob":0.99864143,"size":988,"snap":"2023-40-2023-50","text_gpt3_token_len":314,"char_repetition_ratio":0.20630081,"word_repetition_ratio":0.02955665,"special_character_ratio":0.3937247,"punctuation_ratio":0.123853214,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99991167,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-06T01:11:47Z\",\"WARC-Record-ID\":\"<urn:uuid:258a1182-549b-454e-9ebc-0b384be9af2b>\",\"Content-Length\":\"16762\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a5e5128f-9429-4764-a0e7-cdab56f74763>\",\"WARC-Concurrent-To\":\"<urn:uuid:ea6f2b72-116b-4e02-987b-ccc030b63725>\",\"WARC-IP-Address\":\"104.21.85.191\",\"WARC-Target-URI\":\"https://calculatorsonline.org/65-increased-by-68-percent\",\"WARC-Payload-Digest\":\"sha1:MXKMI33NXYMP6FZGGU6FM7DSZTR6AB2T\",\"WARC-Block-Digest\":\"sha1:YYRYU6KZEZFTFSWYWOEMQAMCQZ2PFOSU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100575.30_warc_CC-MAIN-20231206000253-20231206030253-00812.warc.gz\"}"} |
https://www.convertunits.com/from/exameter/to/range | [
"## ››Convert exametre to range\n\n exameter range\n\nHow many exameter in 1 range? The answer is 9.6560833121666E-15.\nWe assume you are converting between exametre and range.\nYou can view more details on each measurement unit:\nexameter or range\nThe SI base unit for length is the metre.\n1 metre is equal to 1.0E-18 exameter, or 0.00010356165824916 range.\nNote that rounding errors may occur, so always check the results.\nUse this page to learn how to convert between exameters and ranges.\nType in your own numbers in the form to convert the units!\n\n## ››Quick conversion chart of exameter to range\n\n1 exameter to range = 1.0356165824916E+14 range\n\n2 exameter to range = 2.0712331649832E+14 range\n\n3 exameter to range = 3.1068497474747E+14 range\n\n4 exameter to range = 4.1424663299663E+14 range\n\n5 exameter to range = 5.1780829124579E+14 range\n\n6 exameter to range = 6.2136994949495E+14 range\n\n7 exameter to range = 7.2493160774411E+14 range\n\n8 exameter to range = 8.2849326599327E+14 range\n\n9 exameter to range = 9.3205492424242E+14 range\n\n10 exameter to range = 1.0356165824916E+15 range\n\n## ››Want other units?\n\nYou can do the reverse unit conversion from range to exameter, or enter any two units below:\n\n## Enter two units to convert\n\n From: To:\n\n## ››Definition: Exametre\n\nThe SI prefix \"exa\" represents a factor of 1018, or in exponential notation, 1E18.\n\nSo 1 exametre = 1018 metre.\n\n## ››Metric conversions and more\n\nConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3\", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7207232,"math_prob":0.9427777,"size":1845,"snap":"2022-27-2022-33","text_gpt3_token_len":543,"char_repetition_ratio":0.24225964,"word_repetition_ratio":0.0,"special_character_ratio":0.3311653,"punctuation_ratio":0.14905149,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97872907,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-19T07:59:22Z\",\"WARC-Record-ID\":\"<urn:uuid:1c89c7d7-39f5-4c1e-bc55-c3263897d05e>\",\"Content-Length\":\"51012\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:67c9adc7-a21e-42b7-b339-a6b1a1b62eb6>\",\"WARC-Concurrent-To\":\"<urn:uuid:07770cc6-46cd-41a4-8402-d96c6557fd51>\",\"WARC-IP-Address\":\"35.168.211.176\",\"WARC-Target-URI\":\"https://www.convertunits.com/from/exameter/to/range\",\"WARC-Payload-Digest\":\"sha1:WMMGDDXFBJKCAYT55EDKCSDW3NGNOQJ5\",\"WARC-Block-Digest\":\"sha1:R4BXQ2TE6TYTNSL3Z5H7TWJFPIPOWDYV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573630.12_warc_CC-MAIN-20220819070211-20220819100211-00305.warc.gz\"}"} |
https://stupidsid.com/previous-question-papers/download/signals-systems-14862 | [
"",
null,
"MORE IN Signals & Systems\nTotal marks: --\nTotal time: --\nINSTRUCTIONS\n(1) Assume appropriate data and state your reasons\n(2) Marks are given to the right of every question\n(3) Draw neat diagrams wherever necessary\n\n1(a) Sketch the even and odd part of the signals shown in Fig. Q1(a).\n:!mage\n6 M\n1(b) For the signal x(t) and y(t) shown in Fig. Q1(b) sketch the signals:\ni) x(t+1) - y(t)\nii) x(t) × y(t-1).\n:!mage\n6 M\n1(c) Determine whether the system described by the following input/output relationship is i) memory less ii) causal iii) time invariant iv) linear. \\begin {align*} &i)y(t)=x(2-t)\\\\ &ii)y[n]=\\sum ^{\\infty}_{k=0}2^k x[n-k] \\end{align*}\n8 M\n\n2(a) Compute the following convolutions :\ni) y(t) = e-2t u(t-2) * {u(t-2) -u (t-12)}\nii) y[n] = αn {u[n] - u[n-6]} * 2{u[n] - u[n-15]}.\n14 M\nProve the following\n2(b)(i) x(t) * δ(t-t0 = x(t-t0\n3 M\n2(b)(ii) $$x[n]*u[n]\\sum ^n_{k=-\\infty}x[k].$$\n3 M\n\n3(a) Identify whether the systems described by the following impulse are memory-less, causal and stable.\ni) h(t) = 3δ(t-2) + 5δ(t-5)\nii) h[n] = 2nu[-n]\niii) h[n] = (½)n δ[n].\n9 M\n3(b) Find the natural response and the forced response of the system described by the following differential equation : $$\\dfrac{\\mathrm{d} ^2y(t)}{\\mathrm{d} t^2}-4y(t)=\\dfrac{\\mathrm{d} }{\\mathrm{d} t}x(t), \\ \\text{if y(0)=1 and}\\ \\dfrac{\\mathrm{d} }{\\mathrm{d} t}\\ y(t)|_{t=0}=-1.$$\n8 M\n3(c) Write the difference equation for the system depicted in Fig. Q.3(c).\n:!mage\n3 M\n\n4(a) State and prove the Parseval's relation for the Fourier series representation of discrete time periodic signals.\n6 M\n4(b) i) Find the DTFS of the signal x(t) = sin [5πn] + cos [7πn]\nii) Find the FS of the signal shown in Fig. Q4(b)(ii).\n:!mage\n8 M\n4(c) If the FS representation of periodic signal x(t) is : where $$\\omega _0=\\dfrac{2\\pi}{t}$$ then find the FS of y(t) without computing x(t) : \\begin{align*} &i)y(t)=x(t+2)\\\\ &ii)y(t)=\\dfrac{\\mathrm{d} }{\\mathrm{d} t}x(t). \\end{align*}\n6 M\n\n5(a) i) Compute the DIFT of x[n] = (½)n u[n+2] + (½)n u[n-2]\nii) Find FT of the signal shown in Fig. Q5(a)(ii).\n:!mage\n10 M\n5(b) Find inverse of the following x(jω) :\\begin{align*} &i)x(j\\omega)=\\dfrac{j\\omega}{(j\\omega)^2+6j\\omega+8}\\\\ &ii)x(j\\omega)=j.\\dfrac{\\mathrm{d} }{\\mathrm{d} \\omega}\\dfrac{e^{3j\\omega}}{2+j\\omega}. \\end{align*}\n10 M\n\n6(a) Determine output of the LTI system whose I/P and the impulse response is given as :\ni) x(t) = e-2t u(t) and h(t) - e-3t u(t).\nii) x[n] = (⅓)n u[n] and h[n] = δ[n-4].\n8 M\n6(b) Find the Fourier transform of the signal x(t) = cos ω0t where $$\\dfrac{2\\pi}{T}$$ and T the period of the signal.\n4 M\n6(c) State the sampling theorem and briefly explain how to practically reconstruct the signal.\n8 M\n\n7(a) State and prove differentiation in z-domain property of z-transforms.\n6 M\n7(b) Use property of z-transform to compute x(z) of :\ni) x[n] = n sin (πn/2) u[-n]\nii) x[n] = (n-2) (½)n u[n-2].\n6 M\n7(c) Find the inverse z-transforms of \\begin {align*} &i) x(z)=\\frac{z^2-2z}{\\left ( z^2+\\frac{3}{2}z-1 \\right )}\\ \\frac{1}{2}<|z|<2\\\\ &ii) x(z)=\\frac{z^3}{\\left ( z-\\frac{1}{2} \\right )}\\ |z|>\\frac{1}{2}. 
\\end{align*}\n8 M\n\n8(a) Determine the impulse response of the following transfer function if :\ni) The system is causal\nii) The system is stable\niii) The system is stable and causal at the same time : $$H(z)=\\dfrac{3z^2-z}{(z-2)\\left ( z+\\dfrac{1}{2} \\right )}.$$\n8 M\n8(b) Use unilateral z - transform to determine the forced response and the natural response of the systems described by: $$y[n]-\\dfrac{1}{4y}[n-1]-\\dfrac{1}{8y}[n-2]=x[n]+x[n-1]$$ where y[-1] = 1 and y[-2] = 1 with I/P x[n] = 3n u[n].\n12 M\n\nMore question papers from Signals & Systems"
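For self-checking answers to discrete convolutions such as Q2(a)(ii) or Q6(a)(ii), a short NumPy sketch can be used (α = 0.5 is an arbitrary choice for illustration; it is not specified in the paper):

```python
import numpy as np

alpha = 0.5                 # illustrative value for the alpha in Q2(a)(ii)
x = alpha ** np.arange(6)   # alpha^n (u[n] - u[n-6])
h = 2.0 * np.ones(15)       # 2 (u[n] - u[n-15])
y = np.convolve(x, h)       # y[n] = sum_k x[k] h[n-k]
print(len(y), y[:3])        # length 6 + 15 - 1 = 20; y[0] = 2.0, y[1] = 3.0, ...
```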
] | [
null,
"https://www.facebook.com/tr",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6516729,"math_prob":0.9998416,"size":4213,"snap":"2021-21-2021-25","text_gpt3_token_len":1687,"char_repetition_ratio":0.13542409,"word_repetition_ratio":0.28654125,"special_character_ratio":0.42202705,"punctuation_ratio":0.072674416,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999758,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-12T12:21:14Z\",\"WARC-Record-ID\":\"<urn:uuid:ee4e99b3-2946-408c-a3fe-2b5e00353545>\",\"Content-Length\":\"55951\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0273a1bc-9893-4ea7-8b7f-7012345493f9>\",\"WARC-Concurrent-To\":\"<urn:uuid:70065b02-a0f2-4ff1-8cb2-250052dcbd4c>\",\"WARC-IP-Address\":\"104.21.3.201\",\"WARC-Target-URI\":\"https://stupidsid.com/previous-question-papers/download/signals-systems-14862\",\"WARC-Payload-Digest\":\"sha1:APXZVAR6MZJVK6S2I66YV5C2YSORB5CJ\",\"WARC-Block-Digest\":\"sha1:42AM55Q74ZMC6USSG3FHGMGR26PDXQV2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487582767.0_warc_CC-MAIN-20210612103920-20210612133920-00161.warc.gz\"}"} |
http://sjme.journals.sharif.edu/article_6406.html | [
"# پیادهسازی سختافزاری حل عددی معادلات دیفرانسیل روی FPGA\n\nنوع مقاله: مقاله پژوهشی\n\nنویسندگان\n\n1 پژوهشکدهی مکانیک، سازمان پژوهشهای علمی و صنعتی ایران\n\n2 دانشکده مهندسی هوافضا- دانشگاه صنعتی شریف\n\nچکیده\n\nحل عددی معادلات دیفرانسیل با استفاده از بسترهای CPU و GPU مبتنی بر پیادهسازی نرمافزاری است. در سالهای اخیر، راهکار جدیدی مبتنی بر پیادهسازی سختافزاری معادلات با استفاده از بستر FPGA، بهدلیل افزایش سرعت حل و کاهش توان مصرفی، مورد توجه جدی قرار گرفته است. در این پژوهش با حل چند مسئلهی نوعی، شامل سیستم جرم و فنر و معادلهی موج، روش پیادهسازی سختافزاری برای حل معادلات دیفرانسیل بر روی FPGA، مزایا و چالشهای این پیادهسازی و روشهای حل آن ارائه شده است. نتایج سرعت پردازش برای حل سیستم تک جرم و فنر نشان میدهد که سرعت CPU تقریباً برابر FPGA است ولی برای سیستم ۶ جرم و فنر سرعت FPGA ۸ برابر CPU است. همچنین نتایج سرعت پردازش حل معادلهی موج نشاندهندهی افزایش ۳٫۶ برابری سرعت FPGA نسبت به CPU است. این نتایج نشانگر افزایش کارایی FPGA با افزایش تعداد المانهای محاسباتی است.\n\nکلیدواژهها\n\nعنوان مقاله [English]\n\n### HARDWARE IMPLEMENTATION OF NUMERICAL SOLUTION OF DIFFERENTIAL EQUATIONS ON FPGA\n\nنویسندگان [English]\n\n• F. Farhani Baghlani 1\n• A. Ebrahimi Chamgordani 2\n• A. Nikravan Shalmani 1\n1 Dept. of Mechanical Engineering IROST\n2 Dept. of Aerospace Engineering Sharif University of Technology\nچکیده [English]\n\nNowadays, CPUs and GPUs are used in computations pertaining to numerical solution of differential equations. However, the fixed hardware architecture of CPUs and GPUs makes it difficult to optimally implement many numerical solution algorithms. In recent years, a new method, based on hardware implementation of equations using Field Programmable Gate Array (FPGA), has been given much attention. The unique feature of this approach is the ability to vary the hardware architecture on the basis of the solution algorithm, which results in increased solution speed and a reduction in power consumption. This methodology, in which hardware can vary from one architecture to another for computing purpose is named Reconfigurable Computing (RC). RC can be used to solve a lot of problems such as FEM, FVM with structured or unstructured mesh.In this research, typical problems, such as mass-spring systems and wave equations, have been considered, and hardware implementation on FPGA has been used to solve the resulting differential equations. For modeling these systems, we used the software and hardware which is accessible to us, so we used a domestic FPGA board and MatLab and Xilinx ISE software products. Based on the results, advantages and challenges for hardware implementation of differential equations have been presented. Results for a single element mass-spring system show a comparable solution speed for CPU and FPGA implementation. However, with an increase in the number of elements of the mass-spring system, for example, to 6, the FPGA hardware implementation overtakes CPU and the speed of FPGA becomes almost 8 times that of CPU. Moreover, results of the solution of wave equations show that the speed with FPGA implementation is 3.6 times that of CPU. Therefore, for higher numbers of computational elements, results show the superior process speeds attainable with hardware implementation of equations using FPGA compared to the software mplementation on CPU.\n\nکلیدواژهها [English]\n\n• Reconfigurable computing\n• solution speed up\n• PDE\n• ODE"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8655389,"math_prob":0.98525035,"size":2192,"snap":"2020-45-2020-50","text_gpt3_token_len":524,"char_repetition_ratio":0.14579524,"word_repetition_ratio":0.0060790274,"special_character_ratio":0.18567519,"punctuation_ratio":0.09866667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9717485,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-27T04:14:05Z\",\"WARC-Record-ID\":\"<urn:uuid:a627fd84-b2dd-417b-9e00-ab74832f8fb9>\",\"Content-Length\":\"58670\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:48fe62f8-ee67-46c2-81bf-448de71b3508>\",\"WARC-Concurrent-To\":\"<urn:uuid:244a903c-9814-49f3-af7a-c7a5d51a5a36>\",\"WARC-IP-Address\":\"81.31.168.62\",\"WARC-Target-URI\":\"http://sjme.journals.sharif.edu/article_6406.html\",\"WARC-Payload-Digest\":\"sha1:UDH7VEWM36DT2S7MS4R67O65WP5B4JJN\",\"WARC-Block-Digest\":\"sha1:QJOM3NFOP67X4S7WCXFB4OT2WTOQNZFX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107893011.54_warc_CC-MAIN-20201027023251-20201027053251-00188.warc.gz\"}"} |
http://silverhammermba.github.io/superuser/part1/intro/ | [
"We are going to start by answering a question so fundamental that you might even laugh that I ask it at all: what is a computer? Yes, I know you are using one right now, but do you really know what a computer is? What is the difference between a computer and a calculator? Other than size and shape, what makes different computers different? What makes one computer better than another?\n\nWe are going to learn the answers to all of these questions eventually. For now, try to take a stab at answering that first question.\n\nQ: What is a computer?\n\nA: Something that… computes?\n\nOkay, but what does it mean to “compute”? Feel free to try to come up with a definition on your own, but this is actually a really difficult question; even computer scientists had a hard time defining it! The first decent definition was put forth in the 1930s, and even though it is the one we use today, computer scientists are still not quite sure if there could be a better way to define it.\n\nBut what is this definition?\n\nA computation is any function that can be evaluated using an algorithm.\n\nWoah, terminology alert! We can break it down:\n\nA function is something that can be given input and will return output. And if you give it the same input several times, it must return the same output each time.\n\nFor example, “Given two numbers, what is their sum?” is a function. Its input is two numbers and its output is a single number. “What day of the week was it on a certain date?” is also a function. With the date as the input, you can look backwards or forwards in the calendar to see if that day was “Monday”, “Tuesday”, etc. and that is your output. “What is your favorite color?” is not a function! If I ask you now, your output might be green, but if I ask later on that could change. The output needs to be the same every time the function gets the same input. You could say that the output depends only on what the input is.\n\nGiven a function and its input, the process of determining its output is called evaluating the function.\n\nFor my previous example of what day of the week a certain date was, looking up the date in a calendar would be “evaluating” the function.\n\nAn algorithm is a well-defined, step-by-step process.\n\nIn other words, an algorithm is just a list of instructions. However, the instructions must be completely clear. There should be no room for interpretation. For example, “multiply this number by 2” would work as a step in an algorithm since anyone who knows how to multiply will get the exact same result. However “draw a pretty picture” would not work in an algorithm because it is ambiguous. What do you mean by “pretty”? What tools can I draw with? What should I draw a picture of?\n\nNow read it again: a computation is any function that can be evaluated using an algorithm. A computation takes input and produces output by following a clear, step-by-step process and where the output depends only on the input. Got it?\n\nYou might be wondering: since a computation is a function that can be evaluated using an algorithm, does that mean that there are functions that cannot be evaluated using any algorithm? Yes! They are really cool but unfortunately are beyond the scope of this book. Search the web for “uncomputable functions”.\n\nNow get ready because here comes the first set of exercises. This is not school so you do not have to do them and some problems are tricky enough that I do not expect you to figure them out very easily. 
But the best way to learn is by doing, so at the very least try to think about each question a little bit before moving on.\n\n## Exercises\n\n1. Come up with three more functions. What is the input? What is the output? How is the function evaluated?\n2. Come up with something that takes input and produces output, but is not a function. What makes it not a function? Can you turn your non-function into a function by making it require more input?\n3. You probably learned how to multiply long numbers by hand in elementary school. Did you realize that the method you learned was an algorithm? Try to write out a multiplication algorithm for multiplying a three-digit number by a one-digit number as step-by-step instructions. Assume that the reader of the instructions knows how to multiply and add only single digit numbers."
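As a hint for exercise 3, here is one way the multiplication method can be written down as completely unambiguous steps — shown as a Python sketch rather than prose. (Python's arithmetic is used as shorthand for the single-digit multiplications and additions the exercise allows; the function name is ours.)

```python
def multiply_3_by_1(abc, d):
    """Multiply a three-digit number abc by a one-digit number d, column by column."""
    ones, tens, hundreds = abc % 10, (abc // 10) % 10, abc // 100
    total, carry = 0, 0
    for place, digit in enumerate([ones, tens, hundreds]):
        product = digit * d + carry          # one single-digit multiplication, plus carry
        total += (product % 10) * 10**place  # write down the last digit of the product
        carry = product // 10                # carry the rest to the next column
    return total + carry * 1000              # any final carry lands in the thousands

print(multiply_3_by_1(347, 6))  # 2082
```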
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.95798564,"math_prob":0.9529248,"size":4270,"snap":"2023-14-2023-23","text_gpt3_token_len":910,"char_repetition_ratio":0.14041257,"word_repetition_ratio":0.024611399,"special_character_ratio":0.21381733,"punctuation_ratio":0.10921501,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9825782,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-30T13:45:01Z\",\"WARC-Record-ID\":\"<urn:uuid:17732557-e88d-4b2a-8f78-4995af21e32a>\",\"Content-Length\":\"13353\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:adda53d7-4997-4d1f-8cdf-b8d5d068d95d>\",\"WARC-Concurrent-To\":\"<urn:uuid:36f65540-e1e3-4f49-a717-45ee481e0ea1>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"http://silverhammermba.github.io/superuser/part1/intro/\",\"WARC-Payload-Digest\":\"sha1:RDTZINNI5BCNCMO36DFUZQL7J4R3Z3KG\",\"WARC-Block-Digest\":\"sha1:J5RPAN6TYLKZT5NES56A7OA4EM344N2K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949331.26_warc_CC-MAIN-20230330132508-20230330162508-00385.warc.gz\"}"} |
https://online-unit-converter.com/volume/convert-fluid-ounces-to-milliliters/41-oz-to-ml/ | [
"# 41 oz to ml CONVERTER. How many milliliters are in 41 fluid ounces?\n\n## 41 oz to ml\n\nThe question “What is 41 oz to ml?” is the same as “How many milliliters are in 41 oz?” or “Convert 41 fluid ounces to milliliters” or “What is 41 fluid ounces to milliliters?” or “41 fluid ounces to ml”. Read on to find free oz to milliliter converter and learn how to convert 41 oz to ml. You’ll also learn how to convert 41 oz to ml.\n\nAnswer: There are 1212.514712 ml in 41 oz.\n\nAlternatively, you can say “41 oz equals to 1212.514712 ml” or “41 oz = 1212.514712 ml” or “41 fluid ounces is 1212.514712 milliliters”.\n\n## fluid ounces to milliliter conversion formula\n\nA milliliter is equal to 0.0338140227 fluid ounces. A fluid ounce equals 29.5735295625 milliliters\nTo convert 41 fluid ounces to milliliters you can use one of the formulas:\n\nFormula 1\nMultiply 41 oz by 29.5735295625.\n41 * 29.5735295625 = 1212.514712 ml.\n\nFormula 2\nDivide 41 oz by 0.0338140227.\n41 / 0.0338140227 = 1212.514712 ml\n\nHint: no need to use a formula. Use our free oz to ml converter.\n\n## Alternative spelling of 41 oz to ml\n\nMany of our visitor spell fluid ounces and milliliters differently. Below we provide all possible spelling options.\n\n• Spelling options with “fluid ounces”: 41 fluid ounces to ml, 41 fluid ounce to ml, 41 fluid ounces to milliliters, 41 fluid ounce to milliliters, 41 fluid ounces in ml, 41 fluid ounce in ml, 41 fluid ounces in milliliters, 41 fluid ounce in milliliters.\n• Spelling options with “oz”: 41 oz to ml, 41 oz to milliliter, 41 oz to milliliters, 41 oz in ml, 41 oz in milliliter, 41 oz in milliliters.\n• Spelling options with “in”: 41 oz in ml, 41 oz in milliliter, 41 oz in milliliters, 41 oz in ml, 41 oz in milliliter, 41 oz in milliliters, 41 fluid ounces in ml, 41 fluid ounce in ml, 41 fluid ounces in milliliters, 41 fluid ounce in milliliters,\n\n## FAQ on 41 oz to ml conversion\n\nHow many milliliters are in 41 fluid ounces?\n\nThere are 1212.514712 milliliters in 41 fluid ounces.\n\n41 oz to ml?\n\n41 oz is equal to 1212.514712 ml. There are 1212.514712 milliliters in 41 fluid ounces.\n\nWhat is 41 oz to ml?\n\n41 oz is 1212.514712 ml. You can use a rounded number of 1212.514712 for convenience. In this case you can say that 41 oz is 1212.51 ml.\n\nHow to convert 41 oz to ml?\n\nUse our free fluid ounce to milliliters converter or multiply41 oz by 29.5735295625.\n41 * 29.5735295625 = 1212.514712 ml."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.72280884,"math_prob":0.83669627,"size":2445,"snap":"2023-14-2023-23","text_gpt3_token_len":778,"char_repetition_ratio":0.28349036,"word_repetition_ratio":0.23809524,"special_character_ratio":0.38936606,"punctuation_ratio":0.16058394,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9953075,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-31T19:02:10Z\",\"WARC-Record-ID\":\"<urn:uuid:e81154d2-7819-4327-be4a-e575a4fc5261>\",\"Content-Length\":\"161356\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7d1c03e3-7c95-43c9-a59a-f484cb657b48>\",\"WARC-Concurrent-To\":\"<urn:uuid:9e8b6f01-83a6-42a0-8b78-c75de4d24acf>\",\"WARC-IP-Address\":\"35.209.245.121\",\"WARC-Target-URI\":\"https://online-unit-converter.com/volume/convert-fluid-ounces-to-milliliters/41-oz-to-ml/\",\"WARC-Payload-Digest\":\"sha1:A7S22AKR4XXOHMPJR2BUCWLXASEQRE2S\",\"WARC-Block-Digest\":\"sha1:PTVO5KPZCQW2Y26M3FAFYFW2F4FNCKOW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224647409.17_warc_CC-MAIN-20230531182033-20230531212033-00488.warc.gz\"}"} |
https://ense3.grenoble-inp.fr/en/study-at-ense3/applied-fluid-mechanics-for-the-environment-4eus3mfe | [
"",
null,
"# Applied Fluid Mechanics for the Environment - 4EUS3MFE\n\n• #### Number of hours\n\n• Lectures 25.0\n• Projects -\n• Tutorials 25.0\n• Internship -\n• Laboratory works 10.0\n• Written tests 0\n\nECTS 5.0\n\n## Goal(s)\n\nThe course is composed in three parts :\n- turbulence: physical and modelling aspects\n- diffusion and dispersion in the Environment\n- environmental fluid mechanics\n\nResponsible(s)\n\nOlivier METAIS\n\n## Content(s)\n\nTurbulence: physical and modelling aspects(12 h of lecture and 4h of supervised work)\nContent:\no Turbulent flows characteristics\no Statistical approach : averaging, Reynolds equations ; turbulent eddy-viscosity and diffusivity ; turbulent boundary layer ; energy mechanisms: production and dissipation\no Statistical tools and theories : correlations ; Fourier space ; kinetic energy and dissipation spectra ; turbulent scales ; Kolmogorov theory\no Turbulent shear flows : example of the plane jet\no The different approaches for turbulence modelling and simulation: statistical modelling, direct numerical simulation, large eddy simulation\no Statistical modelling: concept of order ; zero, one and two-equations models ;\nk-epsilon model; second order model\no Large-eddy simulation : methodology ; Smagorinsky model\n\nDiffusion and dispersion in the Environment (6 h of lecture et 2h of supervised work)\nContent :\no Diffusion concept and Fick’s law : diffusion equation ; molecular and turbulent diffusion coefficients\no Unidirectional diffusion problem : concentrated injection ; extended injection ; presence of walls ; diffusion in a current\no Multidirectional diffusion problems : concept of plume\no Dispersion in shear flows : longitudinal dispersion coefficient\no Application to turbulent flows in rivers : characteristic turbulent velocity and velocity profile ; mixing distances in rivers ; flow rate determination\no Diffusion in fully-developed turbulence: diffusion of a cloud of tracers ; particle dispersion ; dispersion of particles ; Taylor’s theorem ; Richardson law .\n\nEnvironmental fluid mechanics (12h of lecture et 12h of supervised work)\nContent :\n\n• Dimensional analysis and Vashy-Buckingham theorem\n• Laminar boundary layer : Blasius equation and resolution methods ; boundary layer with adverse pressure gradient and separation\n• Energy equation ; Equation de l’énergie ; Compressible flow statics and concept of atmospheric stability\n• Rotating flows ; centrifugal and Coriolis forces ; Ekman layer\n• Vorticity dynamics\n12h of practical activities\n\nPrerequisites\n\nBasis in Fluid mechanics\n\nTest\n\nSession normale / First session\nEvaluation non rattrapable (EN) / EN assessment\nEvaluation rattrapable (ER) / ER assessment : épreuve écrite de 4h / 4h written exam\n\nSession de rattrapage / Second session\nLa note remplace la note de ER. Le EN n'est pas rattrapable. / Written or oral exam will replace the first one (ER). No retake for EN.\n\nEN*1/3 + ER*2/3\n\nCalendar\n\nThe course exists in the following branches:\n\nsee the course schedule for 2023-2024\n\nCourse language(s):",
null,
""
] | [
null,
"https://ense3.grenoble-inp.fr/medias/photo/ense3-formation_1699281979876-jpg",
null,
"https://ense3.grenoble-inp.fr/images/drapeaux/fr.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.61264694,"math_prob":0.62647265,"size":2973,"snap":"2023-40-2023-50","text_gpt3_token_len":663,"char_repetition_ratio":0.12057932,"word_repetition_ratio":0.008830022,"special_character_ratio":0.20551631,"punctuation_ratio":0.13473684,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9592427,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,4,null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T09:59:36Z\",\"WARC-Record-ID\":\"<urn:uuid:c9a1fc71-583c-4945-8fc7-8f91e8022194>\",\"Content-Length\":\"130134\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aeaa66c8-30a7-4977-b23d-dbceb2c5bb7a>\",\"WARC-Concurrent-To\":\"<urn:uuid:61302b5c-6a8f-46e3-9c2e-bc0c8456380b>\",\"WARC-IP-Address\":\"195.220.30.60\",\"WARC-Target-URI\":\"https://ense3.grenoble-inp.fr/en/study-at-ense3/applied-fluid-mechanics-for-the-environment-4eus3mfe\",\"WARC-Payload-Digest\":\"sha1:3TGPBJIPMUQTJ7IJUFALVSQKC2NQBXZL\",\"WARC-Block-Digest\":\"sha1:F6WD5RDHTRGOP5HEZWW4OIGTEDF6SAEH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100286.10_warc_CC-MAIN-20231201084429-20231201114429-00283.warc.gz\"}"} |
http://sleigh-munoz.co.uk/fluidsnotes/FluidsLevel1/Unit03/T1.html | [
"# An Introduction To Fluid Mechanics : CIVE1400\n\n## SCHOOL OF CIVIL ENGINEERING",
null,
"",
null,
"# Unit 3: Fluid Dynamics\n\n## Objectives\n\n• Introduce concepts necessary to analyse fluids in motion\n• Demonstrate streamlines and stream tubes\n• Introduce the Continuity principle through conservation of mass and control volumes\n• Derive the Bernoulli (energy) equation\n• Demonstrate practical uses of the Bernoulli and continuity equation in the analysis of flow\n• Introduce the momentum equation for a fluid\n• Demonstrate how the momentum equation and principle of conservation of momentum is used to predict forces induced by flowing fluids\n\nThis section discusses the analysis of fluid in motion - fluid dynamics. The motion of fluids can be predicted in the same way as the motion of solids are predicted using the fundamental laws of physics together with the physical properties of the fluid.\n\nIt is not difficult to envisage a very complex fluid flow. Spray behind a car; waves on beaches; hurricanes and tornadoes or any other atmospheric phenomenon are all example of highly complex fluid flows which can be analysed with varying degrees of success (in some cases hardly at all!). There are many common situations which are easily analysed.\n\n## 2. Uniform Flow, Steady Flow\n\nIt is possible - and useful - to classify the type of flow which is being examined into small number of groups.\n\nIf we look at a fluid flowing under normal circumstances - a river for example - the conditions at one point will vary from those at another point (e.g. different velocity) we have non-uniform flow. If the conditions at one point vary as time passes then we have unsteady flow.\n\nUnder some circumstances the flow will not be as changeable as this. He following terms describe the states which are used to classify fluid flow:\n\n• uniform flow: If the flow velocity is the same magnitude and direction at every point in the fluid it is said to be uniform.\n• non-uniform: If at a given instant, the velocity is not the same at every point the flow is non-uniform. (In practice, by this definition, every fluid that flows near a solid boundary will be non-uniform - as the fluid at the boundary must take the speed of the boundary, usually zero. However if the size and shape of the of the cross-section of the stream of fluid is constant the flow is considered uniform.)\n• steady: A steady flow is one in which the conditions (velocity, pressure and cross-section) may differ from point to point but DO NOT change with time.\n• unsteady: If at any point in the fluid, the conditions change with time, the flow is described as unsteady. (In practise there is always slight variations in velocity and pressure, but if the average values are constant, the flow is considered steady.\n\nCombining the above we can classify any flow in to one of four type:\n\n1. Steady uniform flow. Conditions do not change with position in the stream or with time. An example is the flow of water in a pipe of constant diameter at constant velocity.\n2. Steady non-uniform flow. Conditions change from point to point in the stream but do not change with time. An example is flow in a tapering pipe with constant velocity at the inlet - velocity will change as you move along the length of the pipe toward the exit.\n3. Unsteady uniform flow. At a given instant in time the conditions at every point are the same, but will change with time. An example is a pipe of constant diameter connected to a pump pumping at a constant rate which is then switched off.\n4. Unsteady non-uniform flow. 
Every condition of the flow may change from point to point and with time at every point. For example waves in a channel.\n\nIf you imaging the flow in each of the above classes you may imagine that one class is more complex than another. And this is the case - steady uniform flow is by far the most simple of the four. You will then be pleased to hear that this course is restricted to only this class of flow. We will not be encountering any non-uniform or unsteady effects in any of the examples (except for one or two quasi-time dependent problems which can be treated at steady).\n\n## 3. Compressible or Incompressible\n\nAll fluids are compressible - even water - their density will change as pressure changes. Under steady conditions, and provided that the changes in pressure are small, it is usually possible to simplify analysis of the flow by assuming it is incompressible and has constant density. As you will appreciate, liquids are quite difficult to compress - so under most steady conditions they are treated as incompressible. In some unsteady conditions very high pressure differences can occur and it is necessary to take these into account - even for liquids. Gasses, on the contrary, are very easily compressed, it is essential in most cases to treat these as compressible, taking changes in pressure into account.\n\n## 4. Three-dimensional flow\n\nAlthough in general all fluids flow three-dimensionally, with pressures and velocities and other flow properties varying in all directions, in many cases the greatest changes only occur in two directions or even only in one. In these cases changes in the other direction can be effectively ignored making analysis much more simple.\n\nFlow is one dimensional if the flow parameters (such as velocity, pressure, depth etc.) at a given instant in time only vary in the direction of flow and not across the cross-section. The flow may be unsteady, in this case the parameter vary in time but still not across the cross-section. An example of one-dimensional flow is the flow in a pipe. Note that since flow must be zero at the pipe wall - yet non-zero in the centre - there is a difference of parameters across the cross-section. Should this be treated as two-dimensional flow? Possibly - but it is only necessary if very high accuracy is required. A correction factor is then usually applied.",
null,
"One dimensional flow in a pipe.\n\nFlow is two-dimensional if it can be assumed that the flow parameters vary in the direction of flow and in one direction at right angles to this direction. Streamlines in two-dimensional flow are curved lines on a plane and are the same on all parallel planes. An example is flow over a weir foe which typical streamlines can be seen in the figure below. Over the majority of the length of the weir the flow is the same - only at the two ends does it change slightly. Here correction factors may be applied.",
null,
"Two-dimensional flow over a weir.\n\nIn this course we will only be considering steady, incompressible one and two-dimensional flow.\n\n## 5. Streamlines and streamtubes\n\nIn analysing fluid flow it is useful to visualise the flow pattern. This can be done by drawing lines joining points of equal velocity - velocity contours. These lines are know as streamlines. Here is a simple example of the streamlines around a cross-section of an aircraft wing shaped body:",
null,
"Streamlines around a wing shaped body\n\nWhen fluid is flowing past a solid boundary, e.g. the surface of an aerofoil or the wall of a pipe, fluid obviously does not flow into or out of the surface. So very close to a boundary wall the flow direction must be parallel to the boundary.\n\n• Close to a solid boundary streamlines are parallel to that boundary\n\nAt all points the direction of the streamline is the direction of the fluid velocity: this is how they are defined. Close to the wall the velocity is parallel to the wall so the streamline is also parallel to the wall.\n\nIt is also important to recognise that the position of streamlines can change with time - this is the case in unsteady flow. In steady flow, the position of streamlines does not change.\n\nSome things to know about streamlines\n\n• Because the fluid is moving in the same direction as the streamlines, fluid can not cross a streamline.\n\n• Streamlines can not cross each other. If they were to cross this would indicate two different velocities at the same point. This is not physically possible.\n\n• The above point implies that any particles of fluid starting on one streamline will stay on that same streamline throughout the fluid.\n\nA useful technique in fluid flow analysis is to consider only a part of the total fluid in isolation from the rest. This can be done by imagining a tubular surface formed by streamlines along which the fluid flows. This tubular surface is known as a streamtube.",
null,
"A Streamtube\n\nAnd in a two-dimensional flow we have a streamtube which is flat (in the plane of the paper):",
null,
"A two dimensional version of the streamtube The \"walls\" of a streamtube are made of streamlines. As we have seen above, fluid cannot flow across a streamline, so fluid cannot cross a streamtube wall. The streamtube can often be viewed as a solid walled pipe. A streamtube is not a pipe - it differs in unsteady flow as the walls will move with time. And it differs because the \"wall\" is moving with the fluid"
] | [
null,
"http://sleigh-munoz.co.uk/fluidsnotes/FluidsLevel1/css/wavelogo_lt.gif",
null,
"http://sleigh-munoz.co.uk/fluidsnotes/FluidsLevel1/css/wavelogo_lt.gif",
null,
"http://sleigh-munoz.co.uk/fluidsnotes/FluidsLevel1/Unit03/images/img00001.gif",
null,
"http://sleigh-munoz.co.uk/fluidsnotes/FluidsLevel1/Unit03/images/img00002.gif",
null,
"http://sleigh-munoz.co.uk/fluidsnotes/FluidsLevel1/Unit03/images/img00003.gif",
null,
"http://sleigh-munoz.co.uk/fluidsnotes/FluidsLevel1/Unit03/images/img00004.gif",
null,
"http://sleigh-munoz.co.uk/fluidsnotes/FluidsLevel1/Unit03/images/img00005.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9377515,"math_prob":0.9431248,"size":8785,"snap":"2022-05-2022-21","text_gpt3_token_len":1804,"char_repetition_ratio":0.15476598,"word_repetition_ratio":0.013770492,"special_character_ratio":0.19829254,"punctuation_ratio":0.0754717,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98245215,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-24T23:43:56Z\",\"WARC-Record-ID\":\"<urn:uuid:d931077b-85e6-4606-a95d-26a05274f3e5>\",\"Content-Length\":\"15623\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9bff0372-72f4-489b-a51c-c88bbbdd924f>\",\"WARC-Concurrent-To\":\"<urn:uuid:684f3962-340d-4104-8787-ecbbcc14f3a5>\",\"WARC-IP-Address\":\"78.137.117.205\",\"WARC-Target-URI\":\"http://sleigh-munoz.co.uk/fluidsnotes/FluidsLevel1/Unit03/T1.html\",\"WARC-Payload-Digest\":\"sha1:QCUODCUT7TQ2H6J7B4APXKMN4VVVSBZJ\",\"WARC-Block-Digest\":\"sha1:7QEE6SYMET24QNFFQBCEH5BOTNCVBUMS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662577757.82_warc_CC-MAIN-20220524233716-20220525023716-00676.warc.gz\"}"} |
https://ashishkumarletslearn.com/arithmetic-progressions-class-10/lecture-7/ | [
"# Lecture-7\n\n00:03:44 NCERT Exercise 5.3 Question 8 Find the sum of first 51 terms of an AP whose second and third terms are 14 and 18 respectively.\n\n00:06:34 NCERT Exercise 5.3 Question 9 If the sum of first 7 terms of an AP is 49 and that of 17 terms is 289, find the sum of first n terms.\n\n00:13:14 NCERT Exercise 5.3 Question 10\n\n00:24:04 NCERT Exercise 5.3 Question 11 If the sum of the first n terms of an AP is 4n-n^2, what is the first term (that is S1)? What is the sum of first two terms? What is the second term? Similarly, find the 3rd, the 10th and the nth terms.\n\n00:31:24 NCERT Exercise 5.3 Question 12 Find the sum of the first 40 positive integers divisible by 6.\n\n00:34:14 NCERT Exercise 5.3 Question 13 Find the sum of the first 15 multiples of 8.\n\n00:35:54 NCERT Exercise 5.3 Question 14 Find the sum of the odd numbers between 0 and 50.\n\n00:38:54 NCERT Exercise 5.3 Question 15 A contract on construction job specifies a penalty for delay of completion beyond a certain date as follows: ₹ 200 for the first day, ₹ 250 for the second day, ₹ 300 for the third day, etc., the penalty for each succeeding day being ₹ 50 more than for the preceding day. How much money the contractor has to pay as penalty, if he has delayed the work by 30 days?\n\n00:43:42 NCERT Exercise 5.3 Question 16 A sum of ₹ 700 is to be used to give seven cash prizes to students of a school for their overall academic performance. If each prize is ₹ 20 less than its preceding prize, find the value of each of the prizes."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93231475,"math_prob":0.95342064,"size":1491,"snap":"2021-31-2021-39","text_gpt3_token_len":428,"char_repetition_ratio":0.19636853,"word_repetition_ratio":0.027874565,"special_character_ratio":0.32260227,"punctuation_ratio":0.1420765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.992991,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-30T04:32:26Z\",\"WARC-Record-ID\":\"<urn:uuid:22f3e733-3737-4811-b2bf-a79dcfc237f2>\",\"Content-Length\":\"337710\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e92afa5b-b743-4f42-bd1f-427cc9b392fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:be451112-9241-477d-b1e8-eaaac1f8c386>\",\"WARC-IP-Address\":\"192.0.78.253\",\"WARC-Target-URI\":\"https://ashishkumarletslearn.com/arithmetic-progressions-class-10/lecture-7/\",\"WARC-Payload-Digest\":\"sha1:PGDFGZBRAROVL6HQME3A7K4V2BRXET2E\",\"WARC-Block-Digest\":\"sha1:FLWDV7LJRX3VNUVFBGMAGGPYFRBILXWM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153931.11_warc_CC-MAIN-20210730025356-20210730055356-00292.warc.gz\"}"} |
https://www.s-cool.co.uk/a-level/physics/gravitational-potential-energy/revise-it/gravitational-potential | [
"",
null,
"# Gravitational Potential\n\n## Gravitational Potential\n\nRather than talking about gravitational potential energy all the time, it is useful for a number of reasons to define a new quantity - Gravitational Potential, Φ.\n\nIt is a very simple idea. Gravitational potential is the potential energy per kilogram at a point in a field. So the units are Jkg-1, joules per kilogram.\n\nThe equation for potential is:",
null,
"where\n\nG = the universal gravitational constant\n\nm = the mass causing the field\n\nr = the distance between the centre of the mass causing the field and the point you are considering.\n\nNote that:\n\n1. Just like potential energy, the biggest value of potential you can get is zero. All other values are less than zero - i.e. negative!!\n\n2. Potential is not a vector even though it has a negative sign. It doesn't have a direction, only a magnitude.\n\n#### Worked Example\n\nExample\n\nIf G = 6.67x10-11Nm2kg-2 and the mass of the Earth is 6.0x1024kg, calculate the potential at the surface of the Earth if the radius of the Earth is 6.4x106m.",
null,
"The potential (Ep per kg) at the surface of the Earth is\n\n- 63 MJ kg -1\n\nDo 10 MJ kg-1 of work on raising an object from the Earth's surface and it will move up to a point where it's potential is - 53MJ kg-1. That's 10 MJ kg-1 greater than on the surface because it is 10MJ kg-1 closer to zero.\n\nConfusing! Look at the following diagram. It shows how potential drops as you move further from the surface of the Earth.",
null,
"So we can define potential as \"the work done per kg by an object when it moves from infinity to a point in a field\".\n\nLooking back at the example above, that means that if you let 1kg drop from infinity to the surface of the Earth it will lose 63MJ of potential energy. Ignore air resistance, and the object will have that much kinetic energy when it hits the surface of the planet.\n\n#### Potential Energy Equation\n\nThe potential Φ at a point in a field is the potential energy per kg.\n\nSo, if you put a mass of 'm' kg at that point, its potential energy is:\n\nEp = mΦ",
null,
"#### Escaping Velocity\n\nTo escape completely from the Earth's gravitational field you need to give an object 63MJ of kinetic energy per kg. (As it rises from the Earth it will lose kinetic energy and gain potential energy)\n\nThis allows you to calculate an object's escape velocity - how fast you would have to throw it to get it completely out of the Earth's gravity field.\n\nIf it needs 63MJkg-1 energy then it must start off with all this energy as Ek.\n\nSo:",
null,
"where m = 1kg\n\nRearrange to get:\n\nv = 11,200 ms-1.\n\nThat's how fast you'd have to throw it."
] | [
null,
"https://s-cool.co.uk/sites/default/files/styles/hero_thinnest/public/physics_strip_H_2.png",
null,
"https://www.s-cool.co.uk/assets/learn_its/alevel/physics/gravitational-potential-energy/gravitational-potential/image118.gif",
null,
"https://www.s-cool.co.uk/assets/learn_its/alevel/physics/gravitational-potential-energy/gravitational-potential/image120.gif",
null,
"https://www.s-cool.co.uk/assets/learn_its/alevel/physics/gravitational-potential-energy/gravitational-potential/a-phy-graene-dia01.gif",
null,
"https://www.s-cool.co.uk/assets/learn_its/alevel/physics/gravitational-potential-energy/gravitational-potential/image127.gif",
null,
"https://www.s-cool.co.uk/assets/learn_its/alevel/physics/gravitational-potential-energy/gravitational-potential/image129.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9114241,"math_prob":0.9877912,"size":2512,"snap":"2020-45-2020-50","text_gpt3_token_len":628,"char_repetition_ratio":0.16985646,"word_repetition_ratio":0.027600849,"special_character_ratio":0.24522293,"punctuation_ratio":0.09487666,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9962149,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-02T09:21:35Z\",\"WARC-Record-ID\":\"<urn:uuid:951a3f1d-1f2b-449c-8aae-17b2a26b6b2e>\",\"Content-Length\":\"36166\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6d376e7-5929-40cd-8e5d-0dd2a0ac05d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:2bee6144-1af1-4477-8daa-a5bc95d7b191>\",\"WARC-IP-Address\":\"104.26.7.221\",\"WARC-Target-URI\":\"https://www.s-cool.co.uk/a-level/physics/gravitational-potential-energy/revise-it/gravitational-potential\",\"WARC-Payload-Digest\":\"sha1:L5ACPX3EDBXQ4RXVKULF7ML52U4JUIF5\",\"WARC-Block-Digest\":\"sha1:3TVGYX5ZPSDFYWHLJTI5VTIFSGEE7MNT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141706569.64_warc_CC-MAIN-20201202083021-20201202113021-00177.warc.gz\"}"} |
https://percent.info/increase/240/how-to-calculate-the-percent-increase-from-240-to-967.html | [
"Percent increase from 240 to 967",
null,
"This page will answer the question \"What is the percent increase from 240 to 967?\" and also show you how to calculate the percent increase from 240 to 967.\n\nBefore we continue, note that \"percent increase from 240 to 967\" is the same as \"the percentage increase from 240 to 967\". Furthermore, we will refer to 240 as the initial value and 967 as the final value.\n\nSo what exactly are we calculating? The initial value is 240, and then a percent is used to increase the initial value to the final value of 967. We want to calculate what that percent is!\n\nHere are step-by-step instructions showing you how to calculate the percent increase from 240 to 967.\n\nFirst, we calculate the amount of increase from 240 to 967 by subtracting the initial value from the final value, like this:\n\n967 - 240\n= 727\n\nTo calculate the percent of any number, you multiply the value (n) by the percent (p) and then divide the product by 100 to get the answer, like this:\n\n(n × p) / 100 = Answer\n\nIn our case, we know that the initial value (n) is 240 and that the answer (amount of increase) is 727 to get the final value of 967. Therefore, we fill in what we know in the equation above to get the following equation:\n\n(240 × p) / 100 = 727\n\nNext, we solve the equation above for percent (p) by first multiplying each side by 100 and then dividing both sides by 240 to get percent (p):\n\n(240 × p) / 100 = 727\n((240 × p) / 100) × 100 = 727 × 100\n240p = 72700\n240p / 240 = 72700 / 240\np = 302.916666666667\nPercent Increase ≈ 302.9167\n\nThat's all there is to it! The percentage increase from 240 to 967 is 302.9167%. In other words, if you take 302.9167% of 240 and add it to 240, then the sum will be 967.\n\nThe step-by-step instructions above were made so we could clearly explain exactly what a percent increase from 240 to 967 means. For future reference, you can use the following percent increase formula to calculate percent increases:\n\n((f - n)/n) × 100 = p\n\nf = Final Value\nn = Initial Value\np = Percent Increase\n\nOnce again, here is the math and the answer to calculate the percent increase from 240 to 967 using the percent increase formula above:\n\n((f - n)/n) × 100\n= ((967 - 240)/240) × 100\n= (727/240) × 100\n= 3.02916666666667 × 100\n≈ 302.9167\n\nPercent Increase Calculator\nGo here if you need to calculate another percent increase.\n\nPercent increase from 240 to 968\nHere is the next Percent Increase Tutorial on our list that may be of interest."
] | [
null,
"https://percent.info/images/percent-increase.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88150734,"math_prob":0.99707085,"size":2416,"snap":"2022-27-2022-33","text_gpt3_token_len":655,"char_repetition_ratio":0.20273632,"word_repetition_ratio":0.08695652,"special_character_ratio":0.3514073,"punctuation_ratio":0.08523908,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99998236,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-07T16:15:07Z\",\"WARC-Record-ID\":\"<urn:uuid:b7919157-44e8-471a-b49c-4245ea8f72a9>\",\"Content-Length\":\"6605\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:de0f18e2-1550-4e1b-b594-f70907b3b4c9>\",\"WARC-Concurrent-To\":\"<urn:uuid:fa2503a9-b0eb-419a-9d08-882da78d7eea>\",\"WARC-IP-Address\":\"18.67.65.23\",\"WARC-Target-URI\":\"https://percent.info/increase/240/how-to-calculate-the-percent-increase-from-240-to-967.html\",\"WARC-Payload-Digest\":\"sha1:HNR2A5NOFH3HCRAAA75E2K5C76QD7FBH\",\"WARC-Block-Digest\":\"sha1:Z6F2S33CYHKENZPMDUXPXOKIWKXSGT4C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570651.49_warc_CC-MAIN-20220807150925-20220807180925-00160.warc.gz\"}"} |
https://de.mathworks.com/matlabcentral/cody/problems/16-return-the-largest-number-that-is-adjacent-to-a-zero/solutions/144533 | [
"Cody\n\n# Problem 16. Return the largest number that is adjacent to a zero\n\nSolution 144533\n\nSubmitted on 4 Oct 2012\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1 Pass\n%% a = [1, 5, 3, 0, 2, 7, 0, 8, 9, 1 0]; b = 8; assert(isequal(nearZero(a),b))\n\n2 Pass\n%% a = [5 4 -1 0 -2 0 -5 8]; b = -1; assert(isequal(nearZero(a),b));\n\n3 Pass\n%% a = [0 3 1 0 2 9]; b = 3; assert(isequal(nearZero(a),b));\n\n4 Pass\n%% a = [1 0 2 0 3]; b = 3; assert(isequal(nearZero(a),b));\n\n5 Pass\n%% a = [0 -1]; b = -1; assert(isequal(nearZero(a),b));\n\n6 Fail\n%% a = [0 -12 0 -7 0]; b = -7; assert(isequal(nearZero(a),b));\n\nError: Assertion failed.\n\n### Community Treasure Hunt\n\nFind the treasures in MATLAB Central and discover how the community can help you!\n\nStart Hunting!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.63984865,"math_prob":0.9973745,"size":807,"snap":"2020-45-2020-50","text_gpt3_token_len":317,"char_repetition_ratio":0.1656289,"word_repetition_ratio":0.039473683,"special_character_ratio":0.4473358,"punctuation_ratio":0.18461539,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9800596,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T00:20:30Z\",\"WARC-Record-ID\":\"<urn:uuid:11470807-1d12-412a-bb93-aa1067f7749e>\",\"Content-Length\":\"81719\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:746d74e1-87cf-44a2-b9ef-0dc60ac8ba1b>\",\"WARC-Concurrent-To\":\"<urn:uuid:b8d5bccd-11d7-4e44-9772-62532e93d92d>\",\"WARC-IP-Address\":\"104.123.200.119\",\"WARC-Target-URI\":\"https://de.mathworks.com/matlabcentral/cody/problems/16-return-the-largest-number-that-is-adjacent-to-a-zero/solutions/144533\",\"WARC-Payload-Digest\":\"sha1:Z2A4CAHFDX6RIKC4LVQJZMRQ663LHQSE\",\"WARC-Block-Digest\":\"sha1:M62LT4KTOGC5BLFKKQLLMYNVPWXJNC6E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141745780.85_warc_CC-MAIN-20201204223450-20201205013450-00385.warc.gz\"}"} |
https://www.analystforum.com/t/section-40-pricing-and-valuation-of-forwards-end-of-chapter-question-9/122650 | [
"# Section 40 (Pricing and valuation of forwards) - End of Chapter Question #9\n\nThe question says that the bank entered into a 3 year fixed for floating interest rate swap 1 year ago… So when you calculate the value of the swap after one year, don’t you use the sum of the PV factors for years 2 and 3?\n\nI thought you would calculate the value like this: V = (0.03 - 0.01) (1.943012) (\\$50,000,000) = \\$1,943,012\n\nThe sum of the PV factors I used where for years 2 and 3 (.977876 + .965136).\n\nHowever, the book says the answer is : V = (0.03 - 0.01)(1.967975)(\\$50,000,000) = \\$1,967,975.\n\nThey used the sum of the PV factors for years 1 and 2 (.990099 + .977876).\n\nI don’t understand why you would use the year 1 instead of year 3? My thinking is that one year has already gone by so you wouldn’t use that PV factor, you would use the remaining 2 years PV factors. Can anyone help?\n\nI don’t have access to this question, but i’m assuming that the discount rates (PV factors) that apply would be year 1 and year 2.\n\nYou would be using the wrong discount rates if you were to use PV factors for year 2 and year 3.\n\nThink of it like this… the time at which you are valuing the swap (one year after swap initiation) is day one and therefore you must use the relevant rates, which would be year 1 and year 2 rates.\n\nHope that helps."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9203987,"math_prob":0.9209811,"size":793,"snap":"2022-05-2022-21","text_gpt3_token_len":237,"char_repetition_ratio":0.14828898,"word_repetition_ratio":0.059602648,"special_character_ratio":0.38713744,"punctuation_ratio":0.15384616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9971267,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-26T17:18:58Z\",\"WARC-Record-ID\":\"<urn:uuid:de2c3a7c-f60e-4ee3-ac71-2c96e4749d57>\",\"Content-Length\":\"21697\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78fd1812-9876-4832-939c-0336b3c21d4f>\",\"WARC-Concurrent-To\":\"<urn:uuid:88b271e4-a294-426f-b5bf-01ccad1c54c9>\",\"WARC-IP-Address\":\"45.79.51.137\",\"WARC-Target-URI\":\"https://www.analystforum.com/t/section-40-pricing-and-valuation-of-forwards-end-of-chapter-question-9/122650\",\"WARC-Payload-Digest\":\"sha1:NR4VUARUSPJY7EET5GE2EFXXHAAOADF4\",\"WARC-Block-Digest\":\"sha1:4MWM7XXYKFS33TYCENE5DLZ62KV2VKFC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662619221.81_warc_CC-MAIN-20220526162749-20220526192749-00227.warc.gz\"}"} |
https://www.nagwa.com/en/worksheets/156184954809/ | [
"# Lesson Worksheet: Area between a Curve and a Line Mathematics • Higher Education\n\nIn this worksheet, we will practice applying integration to find the area between the curve of a function and a horizontal or vertical straight line.\n\nQ1:\n\nLet . Determine the area bounded by the curve , the -axis, and the two lines and .\n\n• Asquare units\n• Bsquare units\n• Csquare units\n• Dsquare units\n\nQ2:\n\nThe figure shows .",
null,
"• A\n• B\n• C\n• D\n• E\n\nQ3:\n\nDetermine the area of the plane region bounded by the curve , the -axis, and the two lines and .\n\n• A square units\n• B65 square units\n• C square units\n• D square units\n\nQ4:\n\nFind the area enclosed by the graph of , the , and the lines and .\n\n• A18 square units\n• B72 square units\n• C36 square units\n• D0 square units\n• E9 square units\n\nQ5:\n\nThe curve shown is . What is the area of the shaded region? Give an exact answer.",
null,
"• A1.09861228866811\n• B\n• C\n• D\n• E\n\nQ6:\n\nFind the area of the shaded region.",
null,
"Q7:\n\nThe figure shows the graph of the function . Evaluate the area of the shaded region.",
null,
"Q8:\n\nFind the area of the region above the bounded by the curve and the lines and . Give an exact answer.\n\n• A\n• B\n• C\n• D\n• E\n\nQ9:\n\nThe curve in the figure is .",
null,
"• A\n• B\n• C\n• D\n• E\n\nQ10:\n\nLet . Determine, to the nearest thousandth, the area bounded by the curve , the -axis, and the line .\n\nThis lesson includes 51 additional questions and 167 additional question variations for subscribers."
] | [
null,
"https://images.nagwa.com/figures/812145864301/1.svg",
null,
"https://images.nagwa.com/figures/287123709813/1.svg",
null,
"https://images.nagwa.com/figures/264184098347/1.svg",
null,
"https://images.nagwa.com/figures/875189297071/1.svg",
null,
"https://images.nagwa.com/figures/165125213476/1.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.919321,"math_prob":0.979382,"size":463,"snap":"2022-27-2022-33","text_gpt3_token_len":99,"char_repetition_ratio":0.10239651,"word_repetition_ratio":0.0,"special_character_ratio":0.2224622,"punctuation_ratio":0.09411765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99961406,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T04:40:09Z\",\"WARC-Record-ID\":\"<urn:uuid:710512ff-82ba-4fdd-a205-8f0ba676a2ef>\",\"Content-Length\":\"154058\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:68386c3e-329f-43fc-bb07-bae2e0fb4ef3>\",\"WARC-Concurrent-To\":\"<urn:uuid:2a2e2104-1547-4d42-8fbc-a4e4b4f629fd>\",\"WARC-IP-Address\":\"13.248.238.219\",\"WARC-Target-URI\":\"https://www.nagwa.com/en/worksheets/156184954809/\",\"WARC-Payload-Digest\":\"sha1:SRQFXKH26FFB2NOUPZXSO2LHR2KPHRHQ\",\"WARC-Block-Digest\":\"sha1:RXAHF523GWORZQINJICMVFBSA4C6LXLD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103661137.41_warc_CC-MAIN-20220630031950-20220630061950-00450.warc.gz\"}"} |
https://la.mathworks.com/help/econ/compare-robust-regression-techniques.html | [
"# Compare Robust Regression Techniques\n\nThis example compares the results among regression techniques that are and are not robust to influential outliers.\n\nInfluential outliers are extreme response or predictor observations that influence parameter estimates and inferences of a regression analysis. Responses that are influential outliers typically occur at the extremes of a domain. For example, you might have an instrument that measures a response poorly or erratically at extreme levels of temperature.\n\nWith enough evidence, you can remove influential outliers from the data. If removal is not possible, you can use regression techniques that are robust to outliers.\n\n### Simulate Data\n\nGenerate a random sample of 100 responses from the linear model\n\n`${y}_{t}=1+2{x}_{t}+{\\epsilon }_{t}$`\n\nwhere:\n\n• $x$ is a vector of evenly spaced values from 0 through 2.\n\n• ${\\epsilon }_{t}\\sim N\\left(0,0.{5}^{2}\\right).$\n\n```rng('default'); n = 100; x = linspace(0,2,n)'; b0 = 1; b1 = 2; sigma = 0.5; e = randn(n,1); y = b0 + b1*x + sigma*e;```\n\nCreate influential outliers by inflating all responses corresponding to $x<0.25$ by a factor of 3.\n\n`y(x < 0.25) = y(x < 0.25)*3;`\n\nPlot the data. Retain the plot for further graphs.\n\n```figure; plot(x,y,'o'); h = gca; xlim = h.XLim'; hl = legend('Data'); xlabel('x'); ylabel('y'); title('Regression Techniques Comparison') hold on;```",
null,
"### Estimate Linear Model\n\nEstimate the coefficients and error variance by using simple linear regression. Plot the regression line.\n\n`LSMdl = fitlm(x,y)`\n```LSMdl = Linear regression model: y ~ 1 + x1 Estimated Coefficients: Estimate SE tStat pValue ________ _______ ______ __________ (Intercept) 2.6814 0.28433 9.4304 2.0859e-15 x1 0.78974 0.24562 3.2153 0.0017653 Number of observations: 100, Error degrees of freedom: 98 Root Mean Squared Error: 1.43 R-squared: 0.0954, Adjusted R-Squared: 0.0862 F-statistic vs. constant model: 10.3, p-value = 0.00177 ```\n```plot(xlim,[ones(2,1) xlim]*LSMdl.Coefficients.Estimate,'LineWidth',2); hl.String{2} = 'Least Squares';```",
null,
"`LSMdl` is a fitted `LinearModel` model object. The intercept and slope appear to be respectively higher and lower than they should be. The regression line might make poor predictions for any $x<1$ and $x>1.6$.\n\n### Estimate Bayesian Linear Regression Model with Diffuse Prior Distribution\n\nCreate a Bayesian linear regression model with a diffuse joint prior for the regression coefficients and error variance. Specify one predictor for the model.\n\n`PriorDiffuseMdl = bayeslm(1);`\n\n`PriorDiffuseMdl` is a `diffuseblm` model object that characterizes the joint prior distribution.\n\nEstimate the posterior of the Bayesian linear regression model. Plot the regression line.\n\n`PosteriorDiffuseMdl = estimate(PriorDiffuseMdl,x,y);`\n```Method: Analytic posterior distributions Number of observations: 100 Number of predictors: 2 | Mean Std CI95 Positive Distribution ------------------------------------------------------------------------------ Intercept | 2.6814 0.2873 [ 2.117, 3.246] 1.000 t (2.68, 0.28^2, 98) Beta | 0.7897 0.2482 [ 0.302, 1.277] 0.999 t (0.79, 0.25^2, 98) Sigma2 | 2.0943 0.3055 [ 1.580, 2.773] 1.000 IG(49.00, 0.0099) ```\n```plot(xlim,[ones(2,1) xlim]*PosteriorDiffuseMdl.Mu,'--','LineWidth',2); hl.String{3} = 'Bayesian Diffuse';```",
null,
"`PosteriorDiffuseMdl` is a `conjugateblm` model object that characterizes the joint posterior distribution of the linear model parameters. The estimates of a Bayesian linear regression model with diffuse prior are almost equal to those of a simple linear regression model. Both models represent a naive approach to influential outliers, that is, the techniques treat outliers like any other observation.\n\n### Estimate Regression Model with ARIMA Errors\n\nCreate a regression model with ARIMA errors. Specify that the errors follow a t distribution with 3 degrees of freedom, but no lagged terms. This specification is effectively a regression model with t distributed errors.\n\n```regMdl = regARIMA(0,0,0); regMdl.Distribution = struct('Name','t','DoF',3);```\n\n`regMdl` is a `regARIMA` model object. It is a template for estimation.\n\nEstimate the regression model with ARIMA errors. Plot the regression line.\n\n`estRegMdl = estimate(regMdl,y,'X',x);`\n``` Regression with ARMA(0,0) Error Model (t Distribution): Value StandardError TStatistic PValue _______ _____________ __________ __________ Intercept 1.4613 0.154 9.4892 2.328e-21 Beta(1) 1.6531 0.12939 12.776 2.2246e-37 Variance 0.93116 0.1716 5.4263 5.7546e-08 DoF 3 0 Inf 0 ```\n```plot(xlim,[ones(2,1) xlim]*[estRegMdl.Intercept; estRegMdl.Beta],... 'LineWidth',2); hl.String{4} = 'regARIMA';```",
null,
"`estRegMdl` is a `regARIMA` model object containing the estimation results. Because the t distribution is more diffuse, the regression line attributes more variability to the influential outliers than to the other observations. Therefore, the regression line appears to be a better predictive model than the other models.\n\n### Implement Quantile Regression Using Bag of Regression Trees\n\nGrow a bag of 100 regression trees. Specify 20 for the minimum leaf size.\n\n`QRMdl = TreeBagger(100,x,y,'Method','regression','MinLeafSize',20);`\n\n`QRMdl` is a fitted `TreeBagger` model object.\n\nPredict median responses for all observed $x$ values, that is, implement quantile regression. Plot the predictions.\n\n```qrPred = quantilePredict(QRMdl,x); plot(x,qrPred,'LineWidth',2); hl.String{5} = 'Quantile'; hold off;```",
null,
"The regression line appears to be slightly influenced by the outliers at the beginning of the sample, but then quickly follows the `regARIMA` model line.\n\nYou can adjust the behavior of the line by specifying various values for `MinLeafSize` when you train the bag of regression trees. Lower `MinLeafSize` values tend to follow the data in the plot more closely.\n\n### Implement Robust Bayesian Linear Regression\n\nConsider a Bayesian linear regression model containing a one predictor, a t distributed disturbance variance with a profiled degrees of freedom parameter $\\nu$. Let:\n\n• ${\\lambda }_{j}\\sim IG\\left(\\nu /2,2/\\nu \\right)$.\n\n• ${\\epsilon }_{j}|{\\lambda }_{j}\\sim N\\left(0,{\\lambda }_{j}{\\sigma }^{2}\\right)$\n\n• $f\\left(\\beta ,{\\sigma }^{2}\\right)\\propto \\frac{1}{{\\sigma }^{2}}$\n\nThese assumptions imply:\n\n• ${\\epsilon }_{j}\\sim t\\left(0,{\\sigma }^{2},\\nu \\right)$\n\n• ${\\lambda }_{j}|{\\epsilon }_{j}\\sim IG\\left(\\frac{\\nu +1}{2},\\frac{2}{\\nu +{\\epsilon }_{j}^{2}/{\\sigma }^{2}}\\right)$\n\n$\\lambda$ is a vector of latent scale parameters that attributes low precision to observations that are far from the regression line. $\\nu$ is a hyperparameter controlling the influence of $\\lambda$ on the observations.\n\nSpecify a grid of values for $\\nu$.\n\n• `1` corresponds to the Cauchy distribution.\n\n• `2.1` means that the mean is well-defined.\n\n• 4.1 means that the variance is well-defined.\n\n• `100` means that the distribution is approximately normal.\n\n```nu = [0.01 0.1 1 2.1 5 10 100]; numNu = numel(nu);```\n\nFor this problem, the Gibbs sampler is well-suited to estimate the coefficients because you can simulate the parameters of a Bayesian linear regression model conditioned on $\\lambda$, and then simulate $\\lambda$ from its conditional distribution.\n\nImplement this Gibbs sampler.\n\n1. Draw parameters from the posterior distribution of $\\beta ,{\\sigma }^{2}|y,x,\\lambda$. Deflate the observations by $\\lambda$, create a diffuse prior model with two regression coefficients, and draw a set of parameters from the posterior. The first regression coefficient corresponds to the intercept, so specify that `bayeslm` not include one.\n\n2. Compute residuals.\n\n3. Draw values from the conditional posterior of $\\lambda$.\n\nFor each value of $\\nu$, run the Gibbs sampler for 20,000 iterations and apply a burn-in period of 5,000. Preallocate for the posterior draws and initialize $\\lambda$ to a vector of ones.\n\n```rng(1); m = 20000; burnin = 5000; lambda = ones(n,m + 1,numNu); % Preallocation estBeta = zeros(2,m + 1,numNu); estSigma2 = zeros(1,m + 1,numNu); % Create diffuse prior model. PriorMdl = bayeslm(2,'Model','diffuse','Intercept',false); for p = 1:numNu for j = 1:m % Scale observations. yDef = y./sqrt(lambda(:,j,p)); xDef = [ones(n,1) x]./sqrt(lambda(:,j,p)); % Simulate observations from conditional posterior of beta and % sigma2 given lambda and the data. [estBeta(:,j + 1,p),estSigma2(1,j + 1,p)] = simulate(PriorMdl,xDef,yDef); % Estimate residuals. 
ep = y - [ones(n,1) x]*estBeta(:,j + 1,p); % Specify shape and scale using conditional posterior of lambda % given beta, sigma2, and the data sp = (nu(p) + 1)/2; sc = 2./(nu(p) + ep.^2/estSigma2(1,j + 1,p)); % Draw from conditional posterior of lambda given beta, sigma2, % and the data lambda(:,j + 1,p) = 1./gamrnd(sp,sc); end end```\n\nEstimate posterior means for $\\beta$, ${\\sigma }^{2}$, and $\\lambda$.\n\n```postEstBeta = squeeze(mean(estBeta(:,(burnin+1):end,:),2)); postLambda = squeeze(mean(lambda(:,(burnin+1):end,:),2));```\n\nFor each $\\nu$, plot the data and regression lines.\n\n```figure; plot(x,y,'o'); h = gca; xlim = h.XLim'; ylim = h.YLim; hl = legend('Data'); hold on; for p = 1:numNu; plotY = [ones(2,1) xlim]*postEstBeta(:,p); plot(xlim,plotY,'LineWidth',2); hl.String{p+1} = sprintf('nu = %f',nu(p)); end xlabel('x'); ylabel('y'); title('Robust Bayesian Linear Regression')```",
null,
"Low values of $\\nu$ tend to attribute high variability to the influential outliers. Therefore, the regression line resembles the `regARIMA` line. As $\\nu$ increases, the line behaves more like those of the naive approach."
] | [
null,
"https://la.mathworks.com/help/examples/econ/win64/CompareRobustRegressionTechniquesExample_01.png",
null,
"https://la.mathworks.com/help/examples/econ/win64/CompareRobustRegressionTechniquesExample_02.png",
null,
"https://la.mathworks.com/help/examples/econ/win64/CompareRobustRegressionTechniquesExample_03.png",
null,
"https://la.mathworks.com/help/examples/econ/win64/CompareRobustRegressionTechniquesExample_04.png",
null,
"https://la.mathworks.com/help/examples/econ/win64/CompareRobustRegressionTechniquesExample_05.png",
null,
"https://la.mathworks.com/help/examples/econ/win64/CompareRobustRegressionTechniquesExample_06.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6222897,"math_prob":0.99776655,"size":8363,"snap":"2020-45-2020-50","text_gpt3_token_len":2223,"char_repetition_ratio":0.14176337,"word_repetition_ratio":0.020151133,"special_character_ratio":0.29415283,"punctuation_ratio":0.21212122,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997812,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T06:54:00Z\",\"WARC-Record-ID\":\"<urn:uuid:78803e80-2d3f-4ee1-8135-7ab38df1b472>\",\"Content-Length\":\"94552\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2999e8f1-866f-464f-8184-7b60a7ff09ef>\",\"WARC-Concurrent-To\":\"<urn:uuid:26c9137c-b830-4692-914f-3c6cacf4e928>\",\"WARC-IP-Address\":\"23.223.252.57\",\"WARC-Target-URI\":\"https://la.mathworks.com/help/econ/compare-robust-regression-techniques.html\",\"WARC-Payload-Digest\":\"sha1:VD4D3Y4CXHYBFU6GUEMONO52674UXCYA\",\"WARC-Block-Digest\":\"sha1:3UQ5CZ32KWGIBN4A3BJUZZA7VSA6BKYH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141189141.23_warc_CC-MAIN-20201127044624-20201127074624-00383.warc.gz\"}"} |
https://answers.everydaycalculation.com/add-fractions/24-28-plus-42-10 | [
"Solutions by everydaycalculation.com\n\n1st number: 24/28, 2nd number: 4 2/10\n\n24/28 + 42/10 is 177/35.\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 28 and 10 is 140\n2. For the 1st fraction, since 28 × 5 = 140,\n24/28 = 24 × 5/28 × 5 = 120/140\n3. Likewise, for the 2nd fraction, since 10 × 14 = 140,\n42/10 = 42 × 14/10 × 14 = 588/140\n120/140 + 588/140 = 120 + 588/140 = 708/140\n5. 708/140 simplified gives 177/35\n6. So, 24/28 + 42/10 = 177/35\nIn mixed form: 52/35\n\nMathStep (Works offline)",
null,
"Download our mobile app and learn to work with fractions in your own time:"
] | [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6757928,"math_prob":0.99649894,"size":364,"snap":"2020-34-2020-40","text_gpt3_token_len":147,"char_repetition_ratio":0.2,"word_repetition_ratio":0.0,"special_character_ratio":0.50549453,"punctuation_ratio":0.09411765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99686176,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-07T13:07:31Z\",\"WARC-Record-ID\":\"<urn:uuid:1adae0b7-b8a7-4dee-ae32-6ac242b9b695>\",\"Content-Length\":\"8044\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d2f01d8f-b0a0-419f-9984-06a4af7830d5>\",\"WARC-Concurrent-To\":\"<urn:uuid:8878ce69-203f-44c5-9a20-38c67b3bc1ca>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/24-28-plus-42-10\",\"WARC-Payload-Digest\":\"sha1:VCFJLU3XCQ3MHLLQC5LVMHASRYYYNB6J\",\"WARC-Block-Digest\":\"sha1:HCH4KDHPKOH6BVF5W7UD2AOPWNE5QI7N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439737178.6_warc_CC-MAIN-20200807113613-20200807143613-00470.warc.gz\"}"} |
https://www.keepingkidssafetoday.com/sort-vertex-in-alphabetical-order/ | [
"# Sort Vertex In Alphabetical Order\n\nYou can use it for your dissertation master thesis assessments projects. Descending order for a numeric list means placing the largest element first and the smallest element last while for an alphabetic list it means the element that appears last alphabetically is placed first.",
null,
"Consonant Cluster Fl And Fr Picture And Word Sorting Worksheets Word Sorts Consonant Clusters Father Picture\n\n### We can sort the vector using our own comparator function to sort the strings in alphabetical order.",
null,
"Sort vertex in alphabetical order. From the Sort by drop-down list select the row number you want to alphabetize Row 1 in this example. How to sort alphabetically your list of references in Microsoft Word. Quickly put information in alphabetical order using this super duper free online tool.\n\nWe can also easily check for cycles as we do this and report no sort is possible if a cycle exists. B Starting at vertex A and resolving ties by the vertex alphabetical order traverse the graph by depth-first search DFS and construct the corresponding DFS tree. Give the order in which the vertices were reached for the first time pushed onto the traversal stack and the order in which the vertices became dead ends popped off.\n\nNote that we visit. First we visit vertex B the neighbor of vertex A and mark B visited. If the text area contains multiple rows the tool will treat each row as a separate item.\n\nIn the Sort dialog box click the Options. Put items in alphabetical order remove HTML capitalize and lowercase words and phrases reverse abc order ignore case order names sort by last name add numbers. On the figure below we start the depth-first search from vertex A.\n\nYou can show this by simply listing the vertices in the order in which they are discovered. By default ORDER BY without any additional specifier sorts in ascending order equivalent to using the ASC keyword explicitly. This means internationally accepted standards for character values are used when determining sort order.\n\nAssume that the neighbors of each vertex are examined in alphabetical order. To separate the items but if there are no semicolons it will instead use commas. A preordering is a list of the vertices in the order that they were first visited by the depth-first search algorithm.\n\nIf there is only one row the tool will first try to use semicolons. Sort function takes an argument with key as reverse and value as True or False where True indicates reverse sorting and False indicates sorting in ascending order. This web tool — and educational resource — provides sorting functions including the ability to.\n\nUse it to sort any list of text online using your computer or mobile device. This ABC order generator will sort word lists numbers or just about any mix of content info and it will handle all the alphabetizing work using many different formats – words separated by spaces or commas or etc – and it can also sort things alphabetically line by line if you need it. Returns 1 if string a is alphabetically less than string b which is quite similar to strcmp operation else returns 0 End Function.\n\nThere can be more than one topological sorting for a graph. In particular I want you to show the following information. This is a compact and natural way of describing the progress of the.\n\nWhen columns are in string format or varchar datatype then order by results in sorting of data in alphabetical order when done in ascending manner. The topological sort of an arbitrary directed graph G VE can be computed in linear time. As you can probably guess ASC stands for ascending.\n\nIn the small Sort Options dialog that appears select Sort left to right and click OK to get back to the Sort. 3 Do a topological sort of the following graph G. The depth first traversal tree.\n\nThe first vertex in topological sorting is always a vertex with in-degree as 0 a vertex with no incoming edges. An incidence matrix M has a row for each vertex and a column for each edge such that M i j 1 if vertex i is part of edge j otherwise M i j 0. 
When you select information for sorting it is important to understand how characters are evaluated by the system.\n\nStarting at vertex a and resolving ties by the vertex alphabetical order traverse the graph by depth-first search and construct the corresponding depth-first search tree. For example another topological sorting of the following graph is 4 5 2 3 1 0. If youd like to sort in descending order simplify specify the DESC keyword after the column name.\n\nYou must show the DFS tree where the tree edges are denoted by solid lines and. Go to the Data tab Sort and Filter group and click Sort. This tool makes it easy to sort a list of texts in alphabetical order.\n\nBreak all ties by picking the vertices in alphabetical order ie A before Z. For example a topological sorting of the following graph is 5 4 2 3 1 0. When applied to strings or sequences that may.\n\nWe can use Depth First Traversal to compute the finish times and then return the nodes in order of decreasing finishing times. Give the order in which the vertices were reached for the first time pushed in and the order in which the vertices become dead ends popped up. It is also possible to use depth-first search to linearly order the vertices of a graph or tree.\n\nMost of the time the ORDER BY function is used when using the aggregate functions of SQL Whenever we do not specify the type of the order that is ascending or descending then by default the data is being ordered in ascending way. Alphabetical order is a system whereby character strings are placed in order based on the position of the characters in the conventional ordering of an alphabetIt is one of the methods of collationIn mathematics a lexicographical order is the generalization of the alphabetical order to other data types such as sequences of digits or numbers. ASCII Sort Order Chart.\n\nThere are four possible ways of doing this. Function bool mycomp string a string b return a. Our comparator function is defined as.\n\nThe ASCII American Standard Code for Information Interchange guidelines are followed. The Alphabetizer is a free tool to alphabetize lists. The order in which vertices in the graph are discovered by depth first traversal.\n\nInitial state will become as follows.",
null,
"Polygons Task Cards And Activities Free Math Lessons Polygon Activities Task Cards",
null,
"Two Dimensional Shapes Matching Activity Two Dimensional Shapes Shape Names Shape Matching",
null,
"Pin By Yael S Design Studio On Jordan S Homework Education Math Geometry Activities Math Geometry",
null,
"Alphabetical Order To The First Second And Third Letter Worksheets Abc Order Worksheet Abc Order Teaching Vocabulary",
null,
"All About Shapes And Patterns Kindergarten Math Center Math Stations Math",
null,
"Shapes Worksheets For 2d And 3d Shapes No Prep Winter Math Activities Fun Math Activities Valentine Math Activities",
null,
"Writing About Mathematics Geometry Medians And Altitudes Mathematics Geometry Geometry Triangles Teaching Geometry",
null,
"Help Your Kindergarten And 1st Grade Students Practice Abc Order These Printables Reinforce The Skill And Progress Abc Order Worksheet Abc Order Learning Abc",
null,
"Does A Cone Have A Vertex Google Search Math Lessons Lesson Math",
null,
"Pin On Boom Task Cards Teaching Resources",
null,
"Spring Into Spring Second Grade Math Math Printables 1st Grade Math",
null,
"Kindergarten Geometry Shapes No Prep Math Centers In These Geometry Shape Math Centers For Kindergarten Stud Kindergarten Geometry Math Centers Elementary Math",
null,
"Improving Your Algorithms Data Structure Skills Data Structures Algorithm How To Get Better",
null,
"Pin On Abc 1st Grade 10 11",
null,
"Pin On Awesome Elementary Tpt Products",
null,
"Math Love Bloglovin Love Math Math Teaching Blogs",
null,
"Geometry Word Wall Freebie Sample Face Edge Vertex Geometry Words Word Wall Vertex",
null,
"Suffix Ly Word Work Pack Word Work Lesson Planner Template Printable Lesson Plans",
null,
"Pin By Teachpeach On Teachpeach Math Math Shape Sort Math Work"
] | [
null,
"https://i.pinimg.com/474x/d0/1e/b8/d01eb86b158b7aed3ecf9fe75d10b352.jpg",
null,
"https://i.pinimg.com/originals/8f/9b/f6/8f9bf6e0f3033c51a65d53b9d1e7b501.jpg",
null,
"https://i.pinimg.com/originals/12/00/85/120085ee1b5ece907feb5300b6bd1d00.jpg",
null,
"https://i.pinimg.com/originals/67/06/95/670695421983bd0c83b3eaecd8da6737.jpg",
null,
"https://i.pinimg.com/originals/0c/02/08/0c0208837a482d1665fc2e1ae749e20c.jpg",
null,
"https://i.pinimg.com/originals/21/ad/01/21ad01b841f9e07eacb1a23fb738fd7e.jpg",
null,
"https://i.pinimg.com/originals/70/bb/10/70bb1042228d8f8afdba04e78cacfac3.jpg",
null,
"https://i.pinimg.com/originals/38/ba/96/38ba96b2e76738ecae9f9cfb586c7e59.png",
null,
"https://i.pinimg.com/originals/8f/9b/f6/8f9bf6e0f3033c51a65d53b9d1e7b501.jpg",
null,
"https://i.pinimg.com/736x/8d/38/32/8d3832bf72e7b09a1b494d8e6ec712d1.jpg",
null,
"https://i.pinimg.com/originals/13/3c/43/133c432727d6f62e77772877883894b1.png",
null,
"https://i.pinimg.com/originals/56/f3/6e/56f36e733065077c66ea78e237ab85f2.png",
null,
"https://i.pinimg.com/736x/c1/88/25/c18825ba5dcdda493d91a2d19aee4a00.jpg",
null,
"https://i.pinimg.com/474x/6a/4b/6a/6a4b6a0f71d7d446f31bf86466b00cc1.jpg",
null,
"https://i.pinimg.com/originals/25/55/25/255525fdb8fb2f7c513863557c7c2568.png",
null,
"https://i.pinimg.com/736x/19/12/74/191274ecbedadc803192c003e33fa7dc.jpg",
null,
"https://i.pinimg.com/originals/90/ee/8f/90ee8fc30014da571e6365cb53a96006.gif",
null,
"https://i.pinimg.com/474x/46/40/73/46407354d6228683b84f55dd3dfe28bd.jpg",
null,
"https://i.pinimg.com/474x/6c/6d/8e/6c6d8ee956fea8cf5bfe6facec7999be.jpg",
null,
"https://i.pinimg.com/474x/be/2e/e1/be2ee16bc5750e66e1a275e61ca67094.jpg",
null,
"https://i.pinimg.com/originals/84/30/69/843069257ac9518662fba48bb5e24f79.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8456507,"math_prob":0.76800823,"size":7816,"snap":"2021-31-2021-39","text_gpt3_token_len":1531,"char_repetition_ratio":0.13210446,"word_repetition_ratio":0.034664657,"special_character_ratio":0.18257421,"punctuation_ratio":0.044380818,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9659475,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42],"im_url_duplicate_count":[null,1,null,3,null,2,null,1,null,3,null,1,null,1,null,1,null,3,null,1,null,3,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T23:51:51Z\",\"WARC-Record-ID\":\"<urn:uuid:2005a224-8501-4c4a-bb1e-ceca6e84fd0f>\",\"Content-Length\":\"50588\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5f32ef70-b137-4c62-912a-f4e8974ad6a2>\",\"WARC-Concurrent-To\":\"<urn:uuid:9fdafe6c-b766-4d52-86d5-ceb06d1aca63>\",\"WARC-IP-Address\":\"104.21.18.109\",\"WARC-Target-URI\":\"https://www.keepingkidssafetoday.com/sort-vertex-in-alphabetical-order/\",\"WARC-Payload-Digest\":\"sha1:JHDJRGMTFYHEMDB3NQTSULPCNJQBZRBP\",\"WARC-Block-Digest\":\"sha1:KYIC3I5MJR6XLX7PL4DPVT6O7L6XCXAA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057584.91_warc_CC-MAIN-20210924231621-20210925021621-00371.warc.gz\"}"} |
https://kyushu-u.pure.elsevier.com/en/publications/a-polynomial-time-algorithm-for-solving-a-class-of-underdetermine | [
"# A polynomial-time algorithm for solving a class of underdetermined multivariate quadratic equations over fields of odd characteristics\n\nChen Mou Cheng, Yasufumi Hashimoto, Hiroyuki Miura, Tsuyoshi Takagi\n\nResearch output: Contribution to journalArticle\n\n1 Citation (Scopus)\n\n### Abstract\n\nFollowing up a series of works by Kipnis-Patarin-Goubin, Courtois-Goubin-Meier-Tacier, and Thomae-Wolf, in PQCrypto 2013 Miura, Hashimoto, and Takagi proposed an efficient algorithm for solving a class of underdetermined multivariate quadratic equations. Their algorithm does not use any generic Gröbner-basis solving techniques and asymptotically requires the least degree of underdeterminedness among all similar algorithms in the current literature. Building on top of their work, in this paper we focus on solving polynomially underdetermined multivariate quadratic equations over fields of odd characteristics. We show that we can further improve the applicable range of the Miura- Hashimoto-Takagi algorithm essentially for free. Furthermore, we show how to allow a certain degree of trade-off between applicable range and running time. Last but not least, we show that the running time of the improved algorithm is actually polynomial in number of equations and variables. To the best of our knowledge, this is the first result showing that this class of polynomially underdetermined multivariate quadratic equations over fields of odd characteristics can be solved in polynomial time.\n\nOriginal language English 40-58 19 Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 8772 Published - Jan 1 2014\n\n### Fingerprint\n\nPolynomial-time Algorithm\nOdd\nPolynomials\nRange of data\nPolynomial time\nEfficient Algorithms\nPolynomial\nSeries\nClass\n\n### All Science Journal Classification (ASJC) codes\n\n• Theoretical Computer Science\n• Computer Science(all)\n\n### Cite this\n\n@article{c7e3df2045c7487abd20f19969404216,\ntitle = \"A polynomial-time algorithm for solving a class of underdetermined multivariate quadratic equations over fields of odd characteristics\",\nabstract = \"Following up a series of works by Kipnis-Patarin-Goubin, Courtois-Goubin-Meier-Tacier, and Thomae-Wolf, in PQCrypto 2013 Miura, Hashimoto, and Takagi proposed an efficient algorithm for solving a class of underdetermined multivariate quadratic equations. Their algorithm does not use any generic Gr{\\\"o}bner-basis solving techniques and asymptotically requires the least degree of underdeterminedness among all similar algorithms in the current literature. Building on top of their work, in this paper we focus on solving polynomially underdetermined multivariate quadratic equations over fields of odd characteristics. We show that we can further improve the applicable range of the Miura- Hashimoto-Takagi algorithm essentially for free. Furthermore, we show how to allow a certain degree of trade-off between applicable range and running time. Last but not least, we show that the running time of the improved algorithm is actually polynomial in number of equations and variables. 
To the best of our knowledge, this is the first result showing that this class of polynomially underdetermined multivariate quadratic equations over fields of odd characteristics can be solved in polynomial time.\",\nauthor = \"Cheng, {Chen Mou} and Yasufumi Hashimoto and Hiroyuki Miura and Tsuyoshi Takagi\",\nyear = \"2014\",\nmonth = \"1\",\nday = \"1\",\nlanguage = \"English\",\nvolume = \"8772\",\npages = \"40--58\",\njournal = \"Lecture Notes in Computer Science\",\nissn = \"0302-9743\",\npublisher = \"Springer Verlag\",\n\n}\n\nTY - JOUR\n\nT1 - A polynomial-time algorithm for solving a class of underdetermined multivariate quadratic equations over fields of odd characteristics\n\nAU - Cheng, Chen Mou\n\nAU - Hashimoto, Yasufumi\n\nAU - Miura, Hiroyuki\n\nAU - Takagi, Tsuyoshi\n\nPY - 2014/1/1\n\nY1 - 2014/1/1\n\nN2 - Following up a series of works by Kipnis-Patarin-Goubin, Courtois-Goubin-Meier-Tacier, and Thomae-Wolf, in PQCrypto 2013 Miura, Hashimoto, and Takagi proposed an efficient algorithm for solving a class of underdetermined multivariate quadratic equations. Their algorithm does not use any generic Gröbner-basis solving techniques and asymptotically requires the least degree of underdeterminedness among all similar algorithms in the current literature. Building on top of their work, in this paper we focus on solving polynomially underdetermined multivariate quadratic equations over fields of odd characteristics. We show that we can further improve the applicable range of the Miura- Hashimoto-Takagi algorithm essentially for free. Furthermore, we show how to allow a certain degree of trade-off between applicable range and running time. Last but not least, we show that the running time of the improved algorithm is actually polynomial in number of equations and variables. To the best of our knowledge, this is the first result showing that this class of polynomially underdetermined multivariate quadratic equations over fields of odd characteristics can be solved in polynomial time.\n\nAB - Following up a series of works by Kipnis-Patarin-Goubin, Courtois-Goubin-Meier-Tacier, and Thomae-Wolf, in PQCrypto 2013 Miura, Hashimoto, and Takagi proposed an efficient algorithm for solving a class of underdetermined multivariate quadratic equations. Their algorithm does not use any generic Gröbner-basis solving techniques and asymptotically requires the least degree of underdeterminedness among all similar algorithms in the current literature. Building on top of their work, in this paper we focus on solving polynomially underdetermined multivariate quadratic equations over fields of odd characteristics. We show that we can further improve the applicable range of the Miura- Hashimoto-Takagi algorithm essentially for free. Furthermore, we show how to allow a certain degree of trade-off between applicable range and running time. Last but not least, we show that the running time of the improved algorithm is actually polynomial in number of equations and variables. 
To the best of our knowledge, this is the first result showing that this class of polynomially underdetermined multivariate quadratic equations over fields of odd characteristics can be solved in polynomial time.\n\nUR - http://www.scopus.com/inward/record.url?scp=84921645550&partnerID=8YFLogxK\n\nUR - http://www.scopus.com/inward/citedby.url?scp=84921645550&partnerID=8YFLogxK\n\nM3 - Article\n\nAN - SCOPUS:84921645550\n\nVL - 8772\n\nSP - 40\n\nEP - 58\n\nJO - Lecture Notes in Computer Science\n\nJF - Lecture Notes in Computer Science\n\nSN - 0302-9743\n\nER -"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85053897,"math_prob":0.9422705,"size":5019,"snap":"2019-43-2019-47","text_gpt3_token_len":1132,"char_repetition_ratio":0.116251245,"word_repetition_ratio":0.78470254,"special_character_ratio":0.20003985,"punctuation_ratio":0.09915357,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9642634,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T08:31:10Z\",\"WARC-Record-ID\":\"<urn:uuid:14812193-02c6-4add-af2e-a4be37033373>\",\"Content-Length\":\"39436\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:810dbac8-19e7-412b-9815-13a9cf56fa75>\",\"WARC-Concurrent-To\":\"<urn:uuid:d85d4217-4798-45e0-b182-f867b9309634>\",\"WARC-IP-Address\":\"52.220.215.79\",\"WARC-Target-URI\":\"https://kyushu-u.pure.elsevier.com/en/publications/a-polynomial-time-algorithm-for-solving-a-class-of-underdetermine\",\"WARC-Payload-Digest\":\"sha1:WXIN5PUNORNYLR2WNNPUF2X6O54D4QNL\",\"WARC-Block-Digest\":\"sha1:3BHQ6TKOPSGLVUA53PSJMSIYGZIVY64H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670743.44_warc_CC-MAIN-20191121074016-20191121102016-00126.warc.gz\"}"} |
https://johnnn.tech/write-travelling-salesman-problem-solution-with-example-in-c/ | [
"# write travelling salesman problem solution with example in c++\n\nHere’s an example of solving the Traveling Salesman Problem (TSP) using the Nearest Neighbor Algorithm in C++:\n\n```#include <iostream>\n#include <vector>\n#include <limits>\n\nconst int INF = std::numeric_limits<int>::max();\n\n// Function to find the nearest unvisited city from the given current city\nint findNearestCity(int currentCity, const std::vector<bool>& visited, const std::vector<std::vector<int>>& graph) {\nint nearestCity = -1;\nint minDistance = INF;\n\nfor (int i = 0; i < graph.size(); ++i) {\nif (!visited[i] && graph[currentCity][i] < minDistance) {\nminDistance = graph[currentCity][i];\nnearestCity = i;\n}\n}\n\nreturn nearestCity;\n}\n\n// Function to solve the TSP using the Nearest Neighbor Algorithm\nstd::vector<int> solveTSP(const std::vector<std::vector<int>>& graph) {\nint numCities = graph.size();\nstd::vector<bool> visited(numCities, false);\nstd::vector<int> route(numCities);\n\n// Start from the first city\nint currentCity = 0;\nvisited[currentCity] = true;\nroute = currentCity;\n\n// Find the nearest unvisited city and add it to the route\nfor (int i = 1; i < numCities; ++i) {\nint nearestCity = findNearestCity(currentCity, visited, graph);\nroute[i] = nearestCity;\nvisited[nearestCity] = true;\ncurrentCity = nearestCity;\n}\n\nreturn route;\n}\n\nint main() {\nstd::vector<std::vector<int>> graph = {\n{0, 10, 15, 20},\n{10, 0, 35, 25},\n{15, 35, 0, 30},\n{20, 25, 30, 0}\n};\n\nstd::vector<int> optimalRoute = solveTSP(graph);\n\nstd::cout << \"Optimal Route: \";\nfor (int city : optimalRoute)\nstd::cout << city << \" \";\nstd::cout << \"0n\";\n\nint optimalDistance = 0;\nfor (int i = 0; i < optimalRoute.size() - 1; ++i)\noptimalDistance += graph[optimalRoute[i]][optimalRoute[i + 1]];\n\nstd::cout << \"Optimal Distance: \" << optimalDistance << std::endl;\n\nreturn 0;\n}\n\n```\n\nIn this example, we have a graph representing the distances between cities. The `findNearestCity` function finds the nearest unvisited city from the current city based on the distance matrix. The `solveTSP` function uses the Nearest Neighbor Algorithm to construct the TSP route by iteratively selecting the nearest unvisited city until all cities are visited.\n\nIn the `main` function, we define a graph with distances between four cities. We call the `solveTSP` function to obtain the optimal route and then calculate and display the optimal distance.\n\nThe Nearest Neighbor Algorithm provides a simple and efficient approach to approximate solutions for the TSP, but it may not always yield the optimal solution.\n\nPosted\n\nin\n\nby\n\nTags:"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6755393,"math_prob":0.9935988,"size":2442,"snap":"2023-40-2023-50","text_gpt3_token_len":638,"char_repetition_ratio":0.17514355,"word_repetition_ratio":0.033898305,"special_character_ratio":0.28214577,"punctuation_ratio":0.22875817,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9974007,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T03:31:28Z\",\"WARC-Record-ID\":\"<urn:uuid:173650cf-00fb-4f8c-836a-527b4ae161ef>\",\"Content-Length\":\"89265\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f2ca4d60-6db4-4c49-8467-8aed53453586>\",\"WARC-Concurrent-To\":\"<urn:uuid:3d4911d5-7a92-4894-aca5-2a621fd5c9db>\",\"WARC-IP-Address\":\"104.21.66.87\",\"WARC-Target-URI\":\"https://johnnn.tech/write-travelling-salesman-problem-solution-with-example-in-c/\",\"WARC-Payload-Digest\":\"sha1:W4FU2VQG2ZQCBSO5VGHIIW5XUG6FE3KD\",\"WARC-Block-Digest\":\"sha1:GOYC3I37BWAJZFEB2PPPT4EI6PI2HEHM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510358.68_warc_CC-MAIN-20230928031105-20230928061105-00042.warc.gz\"}"} |
https://www.physicsforums.com/threads/integration-fundamentals-thereom-of-calculus.237102/ | [
"# Integration - Fundamentals Thereom Of Calculus\n\n1calculus1\n\n## Homework Statement\n\n$$\\int_0^3\\$$ (t-2)^1/3\n\n## Homework Equations\n\nSecond of Fundamental Thereom of Calculus\n\n## The Attempt at a Solution\n\nI don't know what to do first because I'm not used to questions with square roots. Once someone help me with the beginning, I can probably do it because after that it's all the same process anyways.\n\nHomework Helper\nuse the power rule\n(u^v)'=v*u^(v-1)*u'+u^v*log(u)*v'\nor since v'=0\n(u^v)'=v*u^(v-1)*u' (when v'=0)\nin particular\n[(t-2)^(4/3)]'=(4/3)(t-2)^(1/3)\n\nrootX\n\"(u^v)'=v*u^(v-1)*u'+u^v*log(u)*v'\"\n\nseems like a crazy expression",
null,
"OP,\nSquare roots work exactly the same way.\nTry simple example first:\n\nintegrate (t-2)^2\n\nHomework Helper\nseems like a crazy expression",
null,
"OP,\nSquare roots work exactly the same way.\nTry simple example first:\n\nintegrate (t-2)^2\n\nMay be crazy but it is true. Still I think rootX is just suggesting you try the u substitution u=(t-2) and then use the power law formula for integrals.\n\n$$\\int u^n du= \\frac{1}{n+1} u^{n+1}+ C$$"
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9078874,"math_prob":0.98056585,"size":378,"snap":"2022-27-2022-33","text_gpt3_token_len":111,"char_repetition_ratio":0.09625668,"word_repetition_ratio":0.0,"special_character_ratio":0.27248678,"punctuation_ratio":0.07042254,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995072,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-15T01:54:59Z\",\"WARC-Record-ID\":\"<urn:uuid:1ceccb9b-1120-40ce-b4a9-76a26fb0138a>\",\"Content-Length\":\"67945\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:68966579-672f-4f99-acae-15d5aeb93998>\",\"WARC-Concurrent-To\":\"<urn:uuid:e4c54d5f-91f9-4fe1-9a2a-14f94365f8df>\",\"WARC-IP-Address\":\"104.26.14.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/integration-fundamentals-thereom-of-calculus.237102/\",\"WARC-Payload-Digest\":\"sha1:T55BNYUWN6XODNFWTDD4WUIQR43FPWCV\",\"WARC-Block-Digest\":\"sha1:4UEYHK42ZMI3SN7FL3L6DG7UBUWQDSQQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572089.53_warc_CC-MAIN-20220814234405-20220815024405-00323.warc.gz\"}"} |
https://studychacha.com/discuss/288888-2nd-puc-mathematics-model-paper.html | [
"",
null,
"2021-2022 StudyChaCha\n\n#1\n Unregistered Guest",
null,
"2nd PUC Mathematics Model paper\n\nTell me from where I can get 2nd UC Mathematics Model question paper???\n\n#2\n Super Moderator Join Date: Nov 2011",
null,
"Re: 2nd PUC Mathematics Model paper\n\nYou need 2nd UC Mathematics Model question paper, here I am giving:\n\nGive an example of a relation which is symmetric and transitive but not reflexive.\n\nDefine a diagonal matrix.\n\nDefine optimal solution in linear programming problem. 10. An urn contains 5 red and 2 black balls. Two balls are randomly selected. Let X represents the number of black balls, what are the possible values of X?\n\nA die is thrown. If E is the event ‘the number appearing is a multiple of 3’ and F is the event ‘ the number appearing is even’, then find whether E and F are independent?\n\nFor detailed paper here is attachment:",
null,
"2nd PUC Maths paper.pdf (1.30 MB, 31 views)\n__________________",
null,
"Reply to this Question / Ask Another Question"
] | [
null,
"https://studychacha.com/discuss/images/misc/navbits_start.gif",
null,
"https://studychacha.com/discuss/images/icons/icon1.gif",
null,
"https://studychacha.com/discuss/images/icons/icon1.gif",
null,
"https://studychacha.com/discuss/images/attach/pdf.gif",
null,
"https://studychacha.com/discuss/images/buttons/collapse_tcat.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8142029,"math_prob":0.8607777,"size":1709,"snap":"2021-31-2021-39","text_gpt3_token_len":375,"char_repetition_ratio":0.24222875,"word_repetition_ratio":0.014981274,"special_character_ratio":0.21006437,"punctuation_ratio":0.07042254,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9935715,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-03T10:07:00Z\",\"WARC-Record-ID\":\"<urn:uuid:eb053e17-bf91-4783-9063-d62982bd682d>\",\"Content-Length\":\"49765\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:04bd007d-72f0-4f26-86d0-71251f1c0623>\",\"WARC-Concurrent-To\":\"<urn:uuid:0af11889-8a81-4cc7-acdd-63bbade6640e>\",\"WARC-IP-Address\":\"162.214.65.233\",\"WARC-Target-URI\":\"https://studychacha.com/discuss/288888-2nd-puc-mathematics-model-paper.html\",\"WARC-Payload-Digest\":\"sha1:UD25VS6IBFB6KHJKFCTO7J7L5IIS3PDF\",\"WARC-Block-Digest\":\"sha1:URFKXCCXYPNMUOWAZKU4WGFZC3UBVBLY\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154457.66_warc_CC-MAIN-20210803092648-20210803122648-00705.warc.gz\"}"} |
https://git.devlol.org/rric/ardulol/commit/3a3e1c5f800669c91f96d118c8b87cecd0a6d86f | [
"### Revert order of analog ports A0 to A3\n\nparent 7f0612da\n ... ... @@ -38,13 +38,13 @@ * 11 15 (G) * 12 7 (Comma) * 13 4 (Colon) * A0 1 (DIG1) * A1 2 (DIG2) * A2 6 (DIG3) * A3 8 (DIG4) * A0 8 (DIG4) * A1 6 (DIG3) * A2 2 (DIG2) * A3 1 (DIG1) * * Arduino pins A0, A1, A2 and A3 are each connected to DIG1, DIG2, DIG3, * and DIG4 via a 1 kOhm resistor, which means \"common anode\" logic: * Arduino pins A0, A1, A2 and A3 are each connected to DIG4, DIG3, DIG2, * and DIG1 via a 1 kOhm resistor, which means \"common anode\" logic: * setting the Arduino pin HIGH turns the digit OFF, and vice versa. * * The only \"common cathode\" logic digit is the colon: Arduino pin 13 is ... ... @@ -90,7 +90,7 @@ void setup() void loop() { const int NTicks = 60; const int NTicks = 99; // For some reasons, Arduino's sprintf() implementation does not support %f, // so do a workaround here. See also http://stackoverflow.com/q/27651012 ... ... @@ -112,7 +112,7 @@ void loop() void turnOnDigit(int digit) { byte bits = 0; bitSet(bits, digit-1); bitSet(bits, 4-digit); // Zero out pins A0 to A3, then set the one which was set in bits PORTC &= bits | B11110000; ... ...\nMarkdown is supported\n0% or\nYou are about to add 0 people to the discussion. Proceed with caution.\nFinish editing this message first!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.66186607,"math_prob":0.76653326,"size":1231,"snap":"2019-51-2020-05","text_gpt3_token_len":461,"char_repetition_ratio":0.114914425,"word_repetition_ratio":0.12875536,"special_character_ratio":0.44597888,"punctuation_ratio":0.22592592,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97039884,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T12:31:48Z\",\"WARC-Record-ID\":\"<urn:uuid:69992bdb-6782-47d0-85d4-3e908067f144>\",\"Content-Length\":\"88517\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:71b44436-744b-40fa-868d-99acb609e601>\",\"WARC-Concurrent-To\":\"<urn:uuid:093ec774-ce6f-49e9-8ab7-5061450195af>\",\"WARC-IP-Address\":\"193.170.194.148\",\"WARC-Target-URI\":\"https://git.devlol.org/rric/ardulol/commit/3a3e1c5f800669c91f96d118c8b87cecd0a6d86f\",\"WARC-Payload-Digest\":\"sha1:EP73NZLIX42QOZHMCGM2KNIG6ERNLIFO\",\"WARC-Block-Digest\":\"sha1:DEEAZZSE67E7HGRUTXJI7Q3WFL2DNGOC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540488620.24_warc_CC-MAIN-20191206122529-20191206150529-00238.warc.gz\"}"} |
https://math.libretexts.org/Courses/Monroe_Community_College/MTH_211_Calculus_II/Chapter_8%3A_Introduction_to_Differential_Equations/8.5%3A_First-order_Linear_Equations | [
"$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$\n\n# 8.5: First-order Linear Equations\n\n•",
Learning Objectives

• Write a first-order linear differential equation in standard form.
• Find an integrating factor and use it to solve a first-order linear differential equation.
• Solve applied problems involving first-order linear differential equations.

Earlier, we studied an application of a first-order differential equation that involved solving for the velocity of an object. In particular, if a ball is thrown upward with an initial velocity of $$v_0$$ ft/s, then an initial-value problem that describes the velocity of the ball after $$t$$ seconds is given by

$\dfrac{dv}{dt}=−32$

with $$v(0)=v_0.$$

This model assumes that the only force acting on the ball is gravity. Now we add to the problem by allowing for the possibility of air resistance acting on the ball.

Air resistance always acts in the direction opposite to motion. Therefore if an object is rising, air resistance acts in a downward direction. If the object is falling, air resistance acts in an upward direction (Figure $$\PageIndex{1}$$). There is no exact relationship between the velocity of an object and the air resistance acting on it. For very small objects, air resistance is proportional to velocity; that is, the force due to air resistance is numerically equal to some constant $$k$$ times $$v$$. For larger (e.g., baseball-sized) objects, depending on the shape, air resistance can be approximately proportional to the square of the velocity. In fact, air resistance may be proportional to $$v^{1.5}$$, or $$v^{0.9}$$, or some other power of $$v$$.
null,
"Figure $$\\PageIndex{1}$$: Forces acting on a moving baseball: gravity acts in a downward direction and air resistance acts in a direction opposite to the direction of motion.\n\nWe will work with the linear approximation for air resistance. If we assume $$k>0$$, then the expression for the force $$F_A$$ due to air resistance is given by $$FA_=−kv$$. Therefore the sum of the forces acting on the object is equal to the sum of the gravitational force and the force due to air resistance. This, in turn, is equal to the mass of the object multiplied by its acceleration at time $$t$$(Newton’s second law). This gives us the differential equation\n\n$m\\dfrac{dv}{dt}=−kv−mg.$\n\nFinally, we impose an initial condition $$v(0)=v_0,$$ where $$v_0$$ is the initial velocity measured in meters per second. This makes $$g=9.8m/s^2.$$ The initial-value problem becomes\n\n$m\\dfrac{dv}{dt}=−kv−mg$\n\nwith $$v(0)=v_0.$$\n\nThe differential equation in this initial-value problem is an example of a first-order linear differential equation. (Recall that a differential equation is first-order if the highest-order derivative that appears in the equation is $$1$$.) In this section, we study first-order linear equations and examine a method for finding a general solution to these types of equations, as well as solving initial-value problems involving them.\n\nDefinition: Linear first-order differential equation\n\nA first-order differential equation is linear if it can be written in the form\n\n$a(x)y′+b(x)y=c(x),$\n\nwhere $$a(x),b(x),$$ and $$c(x)$$ are arbitrary functions of $$x$$.\n\nRemember that the unknown function $$y$$ depends on the variable $$x$$; that is, $$x$$ is the independent variable and $$y$$ is the dependent variable. Some examples of first-order linear differential equations are\n\n$(3x^2−4)y'+(x−3)y=\\sin x$\n\n$(\\sin x)y'−(\\cos x)y=\\cot x$\n\n$4xy'+(3\\ln x)y=x^3−4x.$\n\nExamples of first-order nonlinear differential equations include\n\n$(y')^4−(y')^3=(3x−2)(y+4)$\n\n$4y'+3y^3=4x−5$\n\n$(y')^2=\\sin y+\\cos x.$\n\nThese equations are nonlinear because of terms like $$(y′)^4,y^3,$$ etc. Due to these terms, it is impossible to put these equations into the same form as Equation.\n\n## Standard Form\n\nConsider the differential equation\n\n$(3x^2−4)y′+(x−3)y=\\sin x.$\n\nOur main goal in this section is to derive a solution method for equations of this form. It is useful to have the coefficient of $$y′$$ be equal to $$1$$. To make this happen, we divide both sides by $$3x^2−4.$$\n\n$y′+ \\left(\\dfrac{x−3}{3x^2−4} \\right)y=\\dfrac{\\sin x}{3x^2−4}$\n\nThis is called the standard form of the differential equation. We will use it later when finding the solution to a general first-order linear differential equation. Returning to Equation, we can divide both sides of the equation by $$a(x)$$. This leads to the equation\n\n$y′+\\dfrac{b(x)}{a(x)}y=\\dfrac{c(x)}{a(x)}. \\label{eq5}$\n\nNow define\n\n$p(x)=\\dfrac{b(x)}{a(x)}$\n\nand\n\n$q(x)=\\dfrac{c(x)}{a(x)}$\n\nThen Equation \\ref{eq5} becomes\n\n$y′+p(x)y=q(x).$\n\nWe can write any first-order linear differential equation in this form, and this is referred to as the standard form for a first-order linear differential equation.\n\nExample $$\\PageIndex{1}$$: Writing First-Order Linear Equations in Standard Form\n\nPut each of the following first-order linear differential equations into standard form. Identify $$p(x)$$ and $$q(x)$$ for each equation.\n\n1. $$y'=3x−4y$$\n2. $$\\dfrac{3xy'}{4y−3}=2$$ (here $$x>0$$)\n3. 
$$y=3y'−4x^2+5$$\n\nSolution\n\na. Add $$4y$$ to both sides:\n\n$$y'+4y=3x.$$\n\nIn this equation, $$p(x)=4$$ and $$q(x)=3x.$$\n\nb. Multiply both sides by $$4y−3$$, then subtract $$8y$$ from each side:\n\n$$\dfrac{3xy'}{4y−3}=2$$\n\n$$3xy'=2(4y−3)$$\n\n$$3xy'=8y−6$$\n\n$$3xy'−8y=−6.$$\n\nFinally, divide both sides by $$3x$$ to make the coefficient of $$y'$$ equal to $$1$$:\n\n$$y'−\dfrac{8}{3x}y=−\dfrac{2}{3x}.$$\n\nThis is allowable because in the original statement of this problem we assumed that $$x>0$$. (If $$x=0$$ then the original equation becomes $$0=2$$, which is clearly a false statement.)\n\nIn this equation, $$p(x)=−\dfrac{8}{3x}$$ and $$q(x)=−\dfrac{2}{3x}$$.\n\nc. Subtract $$y$$ from each side and add $$4x^2−5$$:\n\n$$3y'−y=4x^2−5.$$\n\nNext divide both sides by $$3$$:\n\n$$y'−\dfrac{1}{3}y=\dfrac{4}{3}x^2−\dfrac{5}{3}$$.\n\nIn this equation, $$p(x)=−\dfrac{1}{3}$$ and $$q(x)=\dfrac{4}{3}x^2−\dfrac{5}{3}$$.\n\nExercise $$\PageIndex{1}$$\n\nPut the equation $$\dfrac{(x+3)y'}{2x−3y−4}=5$$ into standard form and identify $$p(x)$$ and $$q(x)$$.\n\nHint\n\nMultiply both sides by the common denominator, then collect all terms involving $$y$$ on one side.\n\n$y'+\dfrac{15}{x+3}y=\dfrac{10x−20}{x+3}$\n\n$p(x)=\dfrac{15}{x+3}$\n\nand\n\n$q(x)=\dfrac{10x−20}{x+3}$\n\n## Integrating Factors\n\nWe now develop a solution technique for any first-order linear differential equation. We start with the standard form of a first-order linear differential equation:\n\n$y'+p(x)y=q(x). \label{Deq1}$\n\nThe first term on the left-hand side of Equation \ref{Deq1} is the derivative of the unknown function, and the second term is the product of a known function with the unknown function. This is somewhat reminiscent of the product rule. If we multiply Equation \ref{Deq1} by a yet-to-be-determined function $$μ(x)$$, then the equation becomes\n\n$μ(x)y′+μ(x)p(x)y=μ(x)q(x). \label{Deq2}$\n\nThe left-hand side of Equation \ref{Deq2} can be matched perfectly to the product rule:\n\n$\dfrac{d}{dx}[f(x)g(x)]=f′(x)g(x)+f(x)g′(x).$\n\nMatching term by term gives $$y=f(x),g(x)=μ(x)$$, and $$g′(x)=μ(x)p(x)$$. Taking the derivative of $$g(x)=μ(x)$$ and setting it equal to the right-hand side of $$g′(x)=μ(x)p(x)$$ leads to\n\n$μ′(x)=μ(x)p(x).$\n\nThis is a first-order, separable differential equation for $$μ(x).$$ We know $$p(x)$$ because it appears in the differential equation we are solving. Separating variables and integrating yields\n\n\begin{align} \dfrac{μ′(x)}{μ(x)} =p(x) \\\\[4pt] ∫\dfrac{μ′(x)}{μ(x)}dx =∫p(x)dx \\\\[4pt] \ln|μ(x)| =∫p(x)dx+C \\\\[4pt] e^{\ln|μ(x)|} =e^{∫p(x)dx+C} \\\\[4pt] |μ(x)| =C_1e^{∫p(x)dx} \\\\[4pt] μ(x) =C_2e^{∫p(x)dx}. \end{align}\n\nHere $$C_2$$ can be an arbitrary (positive or negative) constant. This leads to a general method for solving a first-order linear differential equation. We first multiply both sides of Equation \ref{Deq1} by the integrating factor $$μ(x).$$ This gives\n\n$μ(x)y′+μ(x)p(x)y=μ(x)q(x). \label{Deq5}$\n\nThe left-hand side of Equation \ref{Deq5} can be rewritten as $$\dfrac{d}{dx}(μ(x)y)$$.\n\n$\dfrac{d}{dx}(μ(x)y)=μ(x)q(x). \label{Deq6}$\n\nNext integrate both sides of Equation \ref{Deq6} with respect to $$x$$.\n\n\begin{align} ∫\dfrac{d}{dx}(μ(x)y)dx =∫μ(x)q(x)dx \\\\[4pt] μ(x)y =∫μ(x)q(x)dx \label{Deq7} \end{align}\n\nDivide both sides of Equation \ref{Deq7} by $$μ(x)$$:\n\n$y=\dfrac{1}{μ(x)}\left[∫μ(x)q(x)dx+C\right]. \nonumber$\n\nSince $$μ(x)$$ was previously calculated, we are now finished. 
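
To make the recipe concrete, the boxed general solution can be checked with a computer algebra system. The following is a minimal sketch in Python using SymPy (our choice of tool, not part of the original text), applied to the standard-form equation $$y'+\dfrac{3}{x}y=4x−3$$ that is solved by hand in Example $$\PageIndex{2}$$ below:

```python
# Minimal sketch: the integrating-factor formula y = (1/mu) * (Integral(mu*q) + C),
# checked with SymPy on y' + (3/x) y = 4x - 3 for x > 0.
from sympy import symbols, Function, Eq, exp, integrate, simplify, dsolve

x = symbols('x', positive=True)   # x > 0, as assumed in the example
C = symbols('C')

p = 3 / x                          # p(x)
q = 4 * x - 3                      # q(x)

mu = simplify(exp(integrate(p, x)))  # integrating factor: e^{3 ln x} = x^3
y = (integrate(mu * q, x) + C) / mu

print(simplify(y))                 # 4*x**2/5 - 3*x/4 + C/x**3

# Cross-check with SymPy's built-in ODE solver:
f = Function('f')
print(dsolve(Eq(f(x).diff(x) + p * f(x), q), f(x)))
```

Both printed results give the same family of solutions, matching the formula derived above.
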
An important note about the integrating constant $$C$$: It may seem that we are inconsistent in the usage of the integrating constant. However, the integral involving $$p(x)$$ is necessary in order to find an integrating factor for Equation \ref{Deq1}. Only one integrating factor is needed in order to solve the equation; therefore, it is safe to assign a value for $$C$$ for this integral. We chose $$C=0$$. When calculating the integral inside the brackets in the formula for $$y$$, it is necessary to keep our options open for the value of the integrating constant, because our goal is to find a general family of solutions to Equation \ref{Deq1}. This integrating factor guarantees just that.\n\nProblem-Solving Strategy: Solving a First-order Linear Differential Equation\n\n1. Put the equation into standard form and identify $$p(x)$$ and $$q(x)$$.\n2. Calculate the integrating factor $μ(x)=e^{∫p(x)dx}.$\n3. Multiply both sides of the differential equation by $$μ(x)$$.\n4. Integrate both sides of the equation obtained in step $$3$$, and divide both sides by $$μ(x)$$.\n5. If there is an initial condition, determine the value of $$C$$.\n\nExample $$\PageIndex{2}$$: Solving a First-order Linear Equation\n\nFind a general solution for the differential equation $$xy'+3y=4x^2−3x.$$ Assume $$x>0.$$\n\nSolution\n\n1. To put this differential equation into standard form, divide both sides by $$x$$:\n\n$y'+\dfrac{3}{x}y=4x−3. \nonumber$\n\nTherefore $$p(x)=\dfrac{3}{x}$$ and $$q(x)=4x−3.$$\n\n2. The integrating factor is $$μ(x)=e^{∫(3/x)dx}=e^{3 \ln x}=x^3$$.\n\n3. Multiplying both sides of the differential equation by $$μ(x)$$ gives us\n\n\begin{align*} x^3y′+x^3\left(\dfrac{3}{x}\right)y =x^3(4x−3) \\\\[4pt] x^3y′+3x^2y =4x^4−3x^3 \\\\[4pt] \dfrac{d}{dx}(x^3y) = 4x^4−3x^3. \end{align*}\n\n4. Integrate both sides of the equation.\n\n\begin{align*} ∫\dfrac{d}{dx}(x^3y)dx = ∫(4x^4−3x^3)dx \\\\[4pt] x^3y =\dfrac{4x^5}{5}−\dfrac{3x^4}{4}+C \\\\[4pt] y =\dfrac{4x^2}{5}−\dfrac{3x}{4}+Cx^{−3}. \end{align*}\n\n5. There is no initial value, so the problem is complete.\n\nAnalysis\n\nYou may have noticed the condition that was imposed on the differential equation; namely, $$x>0$$. For any nonzero value of $$C$$, the general solution is not defined at $$x=0$$. Furthermore, when $$x<0$$, the integrating factor changes. The integrating factor is given by $$μ(x)=e^{∫p(x)dx}$$. For this $$p(x)$$ we get\n\n\begin{align*} e^{∫p(x)dx} =e^{∫(3/x)dx} \\\\[4pt] =e^{3\ln|x|} \\\\[4pt] =|x|^3 \end{align*}\n\nsince $$x<0$$. The behavior of the general solution changes at $$x=0$$ largely due to the fact that $$p(x)$$ is not defined there.\n\nExercise $$\PageIndex{2}$$\n\nFind the general solution to the differential equation $$(x−2)y'+y=3x^2+2x.$$ Assume $$x>2$$.\n\nHint\n\nUse the method outlined in the problem-solving strategy for first-order linear differential equations.\n\n$$y=\dfrac{x^3+x^2+C}{x−2}$$\n\nNow we use the same strategy to find the solution to an initial-value problem.\n\nExample $$\PageIndex{3}$$: A First-order Linear Initial-Value Problem\n\nSolve the initial-value problem\n\n$y′+3y=2x−1,y(0)=3. \nonumber$\n\nSolution\n\n1. This differential equation is already in standard form with $$p(x)=3$$ and $$q(x)=2x−1$$.\n\n2. The integrating factor is $$μ(x)=e^{∫3dx}=e^{3x}$$.\n\n3. Multiplying both sides of the differential equation by $$μ(x)$$ gives\n\n\begin{align*} e^{3x}y′+3e^{3x}y =(2x−1)e^{3x} \\\\[4pt] \dfrac{d}{dx}[ye^{3x}] =(2x−1)e^{3x}. 
\end{align*}\n\n4. Integrate both sides of the equation:\n\n$$∫\dfrac{d}{dx}[ye^{3x}]dx=∫(2x−1)e^{3x}dx$$\n\n$$ye^{3x}=\dfrac{e^{3x}}{3}(2x−1)−∫\dfrac{2}{3}e^{3x}dx$$\n\n$$ye^{3x}=\dfrac{e^{3x}(2x−1)}{3}−\dfrac{2e^{3x}}{9}+C$$\n\n$$y=\dfrac{2x−1}{3}−\dfrac{2}{9}+Ce^{−3x}$$\n\n$$y=\dfrac{2x}{3}−\dfrac{5}{9}+Ce^{−3x}$$.\n\n5. Now substitute $$x=0$$ and $$y=3$$ into the general solution and solve for $$C$$:\n\n\begin{align*} y =\dfrac{2}{3}x−\dfrac{5}{9}+Ce^{−3x} \\\\[4pt] 3 =\dfrac{2}{3}(0)−\dfrac{5}{9}+Ce^{−3(0)} \\\\[4pt] 3 =−\dfrac{5}{9}+C \\\\[4pt] C=\dfrac{32}{9}. \end{align*}\n\nTherefore the solution to the initial-value problem is\n\n$y=\dfrac{2}{3}x−\dfrac{5}{9}+\dfrac{32}{9}e^{−3x}. \nonumber$\n\nExample $$\PageIndex{4}$$:\n\nSolve the initial-value problem $y'−2y=4x+3, \quad y(0)=−2. \nonumber$\n\nSolution\n\n$y=\dfrac{1}{2}e^{2x}−2x−\dfrac{5}{2}$\n\n## Applications of First-order Linear Differential Equations\n\nWe look at two different applications of first-order linear differential equations. The first involves air resistance as it relates to objects that are rising or falling; the second involves an electrical circuit. Other applications are numerous, but most are solved in a similar fashion.\n\n### Free fall with air resistance\n\nWe discussed air resistance at the beginning of this section. The next example shows how to apply this concept for a ball in vertical motion. Other factors can affect the force of air resistance, such as the size and shape of the object, but we ignore them here.\n\nExample $$\PageIndex{5}$$: A Ball with Air Resistance\n\nA racquetball is hit straight upward with an initial velocity of $$2$$ m/s. The mass of a racquetball is approximately $$0.0427$$ kg. Air resistance acts on the ball with a force numerically equal to $$0.5v$$, where $$v$$ represents the velocity of the ball at time $$t$$.\n\n1. Find the velocity of the ball as a function of time.\n2. How long does it take for the ball to reach its maximum height?\n3. If the ball is hit from an initial height of $$1$$ meter, how high will it reach?\n\nSolution\n\na. The mass is $$m=0.0427$$ kg, $$k=0.5,$$ and $$g=9.8\, m/s^2$$. The initial velocity is $$v_0=2$$ m/s. Therefore the initial-value problem is\n\n$$0.0427\dfrac{dv}{dt}=−0.5v−0.0427(9.8),\quad v_0=2.$$\n\nDividing the differential equation by $$0.0427$$ gives\n\n$$\dfrac{dv}{dt}=−11.7096v−9.8,\quad v_0=2.$$\n\nThe differential equation is linear. Using the problem-solving strategy for linear differential equations:\n\nStep 1. Rewrite the differential equation as $$\dfrac{dv}{dt}+11.7096v=−9.8$$. This gives $$p(t)=11.7096$$ and $$q(t)=−9.8$$\n\nStep 2. The integrating factor is $$μ(t)=e^{∫11.7096dt}=e^{11.7096t}.$$\n\nStep 3. Multiply the differential equation by $$μ(t)$$:\n\n$$e^{11.7096t}\dfrac{dv}{dt}+11.7096ve^{11.7096t}=−9.8e^{11.7096t}$$\n\n$$\dfrac{d}{dt}[ve^{11.7096t}]=−9.8e^{11.7096t}.$$\n\nStep 4. Integrate both sides:\n\n$$∫\dfrac{d}{dt}[ve^{11.7096t}]dt=∫−9.8e^{11.7096t}dt$$\n\n$$ve^{11.7096t}=\dfrac{−9.8}{11.7096}e^{11.7096t}+C$$\n\n$$v(t)=−0.8369+Ce^{−11.7096t}.$$\n\nStep 5. Solve for $$C$$ using the initial condition $$v_0=v(0)=2$$:\n\n$$v(t)=−0.8369+Ce^{−11.7096t}$$\n\n$$v(0)=−0.8369+Ce^{−11.7096(0)}$$\n\n$$2=−0.8369+C$$\n\n$$C=2.8369.$$\n\nTherefore the solution to the initial-value problem is\n\n$$v(t)=2.8369e^{−11.7096t}−0.8369.$$\n\nb. The ball reaches its maximum height when the velocity is equal to zero. The reason is that when the velocity is positive, the ball is rising, and when it is negative, the ball is falling. 
Therefore when it is zero, it is neither rising nor falling, and is at its maximum height:\n\n$$2.8369e^{−11.7096t}−0.8369=0$$\n\n$$2.8369e^{−11.7096t}=0.8369$$\n\n$$e^{−11.7096t}=\dfrac{0.8369}{2.8369}≈0.295$$\n\n$$\ln e^{−11.7096t}=\ln 0.295≈−1.221$$\n\n$$−11.7096t=−1.221$$\n\n$$t≈0.104.$$\n\nTherefore it takes approximately $$0.104$$ second to reach maximum height.\n\nc. To find the height of the ball as a function of time, use the fact that the derivative of position is velocity, i.e., if $$h(t)$$ represents the height at time $$t$$, then $$h′(t)=v(t)$$. Because we know $$v(t)$$ and the initial height, we can form an initial-value problem:\n\n$$h′(t)=2.8369e^{−11.7096t}−0.8369,\quad h(0)=1.$$\n\nIntegrating both sides of the differential equation with respect to $$t$$ gives\n\n$$∫h′(t)dt=∫(2.8369e^{−11.7096t}−0.8369)dt$$\n\n$$h(t)=−\dfrac{2.8369}{11.7096}e^{−11.7096t}−0.8369t+C$$\n\n$$h(t)=−0.2423e^{−11.7096t}−0.8369t+C.$$\n\nSolve for $$C$$ by using the initial condition:\n\n$$h(t)=−0.2423e^{−11.7096t}−0.8369t+C$$\n\n$$h(0)=−0.2423e^{−11.7096(0)}−0.8369(0)+C$$\n\n$$1=−0.2423+C$$\n\n$$C=1.2423.$$\n\nTherefore\n\n$$h(t)=−0.2423e^{−11.7096t}−0.8369t+1.2423.$$\n\nAfter $$0.104$$ second, the height is given by\n\n$$h(0.104)=−0.2423e^{−11.7096(0.104)}−0.8369(0.104)+1.2423≈1.0836$$ meters.\n\nExercise $$\PageIndex{3}$$\n\nThe weight of a penny is $$2.5$$ grams (United States Mint, “Coin Specifications,” accessed April 9, 2015, http://www.usmint.gov/about_the_mint...specifications), and the upper observation deck of the Empire State Building is $$369$$ meters above the street. Since the penny is a small and relatively smooth object, air resistance acting on the penny is actually quite small. We assume the air resistance is numerically equal to $$0.0025v$$. Furthermore, the penny is dropped with no initial velocity imparted to it.\n\n1. Set up an initial-value problem that represents the falling penny.\n2. Solve the problem for $$v(t)$$.\n3. What is the terminal velocity of the penny (i.e., calculate the limit of the velocity as $$t$$ approaches infinity)?\nHint\n\nSet up the differential equation the same way as in Example $$\PageIndex{5}$$. Remember to convert from grams to kilograms.\n\na. $$\dfrac{dv}{dt}=−v−9.8,\quad v(0)=0$$\n\nb. $$v(t)=9.8(e^{−t}−1)$$\n\nc. $$\lim_{t→∞}v(t)=\lim_{t→∞}(9.8(e^{−t}−1))=−9.8 \text{ m/s}≈−21.922 \text{ mph}$$\n\n## Electrical Circuits\n\nA source of electromotive force (e.g., a battery or generator) produces a flow of current in a closed circuit, and this current produces a voltage drop across each resistor, inductor, and capacitor in the circuit. Kirchhoff’s Loop Rule states that the sum of the voltage drops across resistors, inductors, and capacitors is equal to the total electromotive force in a closed circuit. We have the following three results:\n\n1. The voltage drop across a resistor is given by\n\n$$E_R=Ri,$$\n\nwhere $$R$$ is a constant of proportionality called the resistance, and $$i$$ is the current.\n\n2. The voltage drop across an inductor is given by\n\n$$E_L=Li′$$,\n\nwhere $$L$$ is a constant of proportionality called the inductance, and $$i$$ again denotes the current.\n\n3. The voltage drop across a capacitor is given by\n\n$$E_C=\dfrac{1}{C}q$$,\n\nwhere $$C$$ is a constant of proportionality called the capacitance, and $$q$$ is the instantaneous charge on the capacitor. 
The relationship between $$i$$ and $$q$$ is $$i=q′$$.\n\nWe use units of volts $$(V)$$ to measure voltage $$E$$, amperes $$(A)$$ to measure current $$i$$, coulombs $$(C)$$ to measure charge $$q$$, ohms $$(Ω)$$ to measure resistance $$R$$, henrys $$(H)$$ to measure inductance $$L$$, and farads $$(F)$$ to measure capacitance $$C$$. Consider the circuit in Figure $$\\PageIndex{2}$$.",
null,
"Figure $$\\PageIndex{2}$$: A typical electric circuit, containing a voltage generator $$(V_S)$$, capacitor $$(C)$$, inductor $$(L)$$, and resistor $$(R)$$.\n\nApplying Kirchhoff’s Loop Rule to this circuit, we let $$E$$ denote the electromotive force supplied by the voltage generator. Then\n\n$$E_L+E_R+E_C=E$$.\n\nSubstituting the expressions for $$E_L,E_R,$$ and $$E_C$$ into this equation, we obtain\n\n$$Li′+Ri+\\dfrac{1}{C}q=E.$$\n\nIf there is no capacitor in the circuit, then the equation becomes\n\n$$Li′+R_i=E.$$\n\nThis is a first-order differential equation in $$i$$. The circuit is referred to as an $$LR$$circuit.\n\nNext, suppose there is no inductor in the circuit, but there is a capacitor and a resistor, so $$L=0,R≠0,$$ and $$C≠0.$$ Then Equation can be rewritten as\n\n$$Rq′+\\dfrac{1}{C}q=E,$$\n\nwhich is a first-order linear differential equation. This is referred to as an RC circuit. In either case, we can set up and solve an initial-value problem.\n\nElectric Circuit\n\nA circuit has in series an electromotive force given by $$E=50\\sin 20tV,$$ a resistor of $$5Ω$$, and an inductor of $$0.4H$$. If the initial current is $$0$$, find the current at time $$t>0$$.\n\nSolution\n\nWe have a resistor and an inductor in the circuit, so we use Equation. The voltage drop across the resistor is given by $$E_R=R_i=5_i$$. The voltage drop across the inductor is given by $$E_L=Li′=0.4i′$$. The electromotive force becomes the right-hand side of Equation. Therefore Equation becomes\n\n$0.4i′+5i=50\\sin 20t.$\n\nDividing both sides by $$0.4$$ gives the equation\n\n$i′+12.5i=125\\sin 20t.$\n\nSince the initial current is 0, this result gives an initial condition of $$i(0)=0.$$ We can solve this initial-value problem using the five-step strategy for solving first-order differential equations.\n\nStep 1. Rewrite the differential equation as $$i′+12.5i=125\\sin 20t$$. This gives $$p(t)=12.5$$ and $$q(t)=125\\sin 20t$$.\n\nStep 2. The integrating factor is $$μ(t)=e^{∫12.5dt}=e^{12.5t}$$.\n\nStep 3. Multiply the differential equation by $$μ(t)$$:\n\n$$e^{12.5t}i′+12.5e^{12.5t}i=125e^{12.5t}\\sin 20t$$\n\n$$\\dfrac{d}{dt}[ie^{12.5}t]=125e^{12.5t}\\sin 20t$$.\n\nStep 4. Integrate both sides:\n\n$$∫\\dfrac{d}{dt}[ie^{12.5t}]dt=∫125e^{12.5t}\\sin 20tdt$$\n\n$$ie^{12.5t}=(\\dfrac{250\\sin 20t−400\\cos 20t}{89})e^{12.5t}+C$$\n\n$$i(t)=\\dfrac{250\\sin 20t−400\\cos 20t}{89}+Ce^{−12.5t}$$.\n\nStep 5. Solve for $$C$$ using the initial condition $$v(0)=2$$:\n\n$$i(t)=\\dfrac{250\\sin 20t−400\\cos 20t}{89}+Ce^{−12.5t}$$\n\n$$i(0)=\\dfrac{250sin20(0)−400cos20(0)}{89}+Ce^{−12.5(0)}$$\n\n$$0=−\\dfrac{400}{89}+C$$\n\n$$C=\\dfrac{400}{89}$$.\n\nTherefore the solution to the initial-value problem is\n\n$i(t)=\\dfrac{250\\sin 20t−400\\cos 20t+400e^{−12.5t}}{89}=\\dfrac{250\\sin 20t−400\\cos 20t}{89}+\\dfrac{400e^{−12.5t}}{89}.$\n\nThe first term can be rewritten as a single cosine function. First, multiply and divide by $$\\sqrt{250^2+400^2}=50\\sqrt{89}$$:\n\n$$\\dfrac{250\\sin 20t−400\\cos 20t}{89}=\\dfrac{50\\sqrt{89}}{89}(\\dfrac{250\\sin 20t−400\\cos 20t}{50\\sqrt{89}})=−\\dfrac{50\\sqrt{89}}{89}(\\dfrac{8\\cos 20t}{\\sqrt{89}}−\\dfrac{5\\sin 20t}{\\sqrt{89}})$$.\n\nNext, define $$φ$$ to be an acute angle such that $$\\cos φ=\\dfrac{8}{\\sqrt{89}}$$. 
Then $$\sin φ=\dfrac{5}{\sqrt{89}}$$ and\n\n$$−\dfrac{50\sqrt{89}}{89}(\dfrac{8\cos 20t}{\sqrt{89}}−\dfrac{5\sin 20t}{\sqrt{89}})=−\dfrac{50\sqrt{89}}{89}(\cos φ\cos 20t−\sin φ\sin 20t)=−\dfrac{50\sqrt{89}}{89}\cos(20t+φ).$$\n\nTherefore the solution can be written as\n\n$$i(t)=−\dfrac{50\sqrt{89}}{89}\cos(20t+φ)+\dfrac{400e^{−12.5t}}{89}$$.\n\nThe second term is called the attenuation term, because it disappears rapidly as $$t$$ grows larger. The phase shift is given by $$φ$$, and the amplitude of the steady-state current is given by $$\dfrac{50\sqrt{89}}{89}$$. The graph of this solution appears in Figure $$\PageIndex{3}$$.",
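
The closed-form current can also be sanity-checked numerically before reading off the graph. The sketch below is an illustration we add here (it assumes NumPy and SciPy, which the original text does not use); it integrates the initial-value problem $$i′+12.5i=125\sin 20t,$$ $$i(0)=0$$ and compares the result with the formula above:

```python
# Minimal numerical check of the RL-circuit solution (assumes NumPy/SciPy).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, i):
    # Standard form: i' = -12.5 i + 125 sin(20 t)
    return -12.5 * i + 125.0 * np.sin(20.0 * t)

t_eval = np.linspace(0.0, 1.0, 201)
num = solve_ivp(rhs, (0.0, 1.0), [0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

# Closed-form solution derived above:
closed = (250.0 * np.sin(20.0 * t_eval)
          - 400.0 * np.cos(20.0 * t_eval)
          + 400.0 * np.exp(-12.5 * t_eval)) / 89.0

print(np.max(np.abs(num.y[0] - closed)))   # agreement to roughly 1e-7 or better
```
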
null,
"Figure $$\\PageIndex{3}$$.\n\nExercise $$\\PageIndex{4}$$\n\nA circuit has in series an electromotive force given by $$E=20sin5t$$ V, a capacitor with capacitance $$0.02F$$, and a resistor of $$8Ω$$. If the initial charge is $$4C$$, find the charge at time $$t>0$$.\n\nHint\n\nUse Equation for an $$RC$$ circuit to set up an initial-value problem.\n\nInitial-value problem:\n\n$$8q′+\\dfrac{1}{0.02}q=20sin5t,q(0)=4$$\n\n$$q(t)=\\dfrac{10sin5t−8cos5t+172e^{−6.25t}}{41}$$\n\n## Key Concepts\n\n• Any first-order linear differential equation can be written in the form $$y'+p(x)y=q(x)$$.\n• We can use a five-step problem-solving strategy for solving a first-order linear differential equation that may or may not include an initial value.\n• Applications of first-order linear differential equations include determining motion of a rising or falling object with air resistance and finding current in an electrical circuit.\n\n## Key Equations\n\n• standard form\n\n$$y'+p(x)y=q(x)$$\n\n• integrating factor\n\n$$μ(x)=e^{∫p(x)dx}$$\n\n## Glossary\n\nintegrating factor\nany function $$f(x)$$ that is multiplied on both sides of a differential equation to make the side involving the unknown function equal to the derivative of a product of two functions\nlinear\ndescription of a first-order differential equation that can be written in the form $$a(x)y′+b(x)y=c(x)$$\nstandard form\nthe form of a first-order linear differential equation obtained by writing the differential equation in the form $$y'+p(x)y=q(x)$$"
] | [
null,
"https://biz.libretexts.org/@api/deki/files/5084/girl-160172__340.png",
null,
"https://math.libretexts.org/@api/deki/files/2959/CNX_Calc_Figure_08_05_001.jpeg",
null,
"https://math.libretexts.org/@api/deki/files/2960/CNX_Calc_Figure_08_05_003.jpeg",
null,
"https://math.libretexts.org/@api/deki/files/12453/8.5.1.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7935593,"math_prob":1.0000054,"size":24126,"snap":"2021-31-2021-39","text_gpt3_token_len":8006,"char_repetition_ratio":0.16200979,"word_repetition_ratio":0.061928634,"special_character_ratio":0.36520767,"punctuation_ratio":0.11142511,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.000009,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,9,null,9,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-27T04:05:42Z\",\"WARC-Record-ID\":\"<urn:uuid:a6511135-24a4-4b60-885e-3afc1f59e24d>\",\"Content-Length\":\"139373\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c6caca3d-3c24-476d-bae2-37def68c34ea>\",\"WARC-Concurrent-To\":\"<urn:uuid:05fabb9d-6cea-424b-9d35-108fe3d55e1b>\",\"WARC-IP-Address\":\"99.86.186.32\",\"WARC-Target-URI\":\"https://math.libretexts.org/Courses/Monroe_Community_College/MTH_211_Calculus_II/Chapter_8%3A_Introduction_to_Differential_Equations/8.5%3A_First-order_Linear_Equations\",\"WARC-Payload-Digest\":\"sha1:ZIYAHWDMV7XS5DYRTLGAVUMVBMPCQ3YU\",\"WARC-Block-Digest\":\"sha1:QXEAT7RO76MI3ICWIULSA5NR63N4P6TN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780058263.20_warc_CC-MAIN-20210927030035-20210927060035-00545.warc.gz\"}"} |
https://www.hindawi.com/journals/js/2016/2802343/ | [
"/ / Article\n\nResearch Article | Open Access\n\nVolume 2016 |Article ID 2802343 | https://doi.org/10.1155/2016/2802343\n\nYueqiang Zhang, Langming Zhou, Haibo Liu, Yang Shang, \"A Flexible Online Camera Calibration Using Line Segments\", Journal of Sensors, vol. 2016, Article ID 2802343, 16 pages, 2016. https://doi.org/10.1155/2016/2802343\n\n# A Flexible Online Camera Calibration Using Line Segments\n\nRevised23 Sep 2015\nAccepted27 Sep 2015\nPublished06 Jan 2016\n\n#### Abstract\n\nIn order to make the general user take vision tasks more flexibly and easily, this paper proposes a new solution for the problem of camera calibration from correspondences between model lines and their noisy image lines in multiple images. In the proposed method the common planar items in hand with the standard size and structure are utilized as the calibration objects. The proposed method consists of a closed-form solution based on homography optimization, followed by a nonlinear refinement based on the maximum likelihood approach. To automatically recover the camera parameters linearly, we present a robust homography optimization method based on the edge model by redesigning the classic 3D tracking approach. In the nonlinear refinement procedure, the uncertainty of the image line segment is encoded in the error model, taking the finite nature of the observations into account. By developing the new error model between the model line and image line segment, the problem of the camera calibration is expressed in the probabilistic formulation. Simulation data is used to compare this method with the widely used planar pattern based method. Actual image sequences are also utilized to demonstrate the effectiveness and flexibility of the proposed method.\n\n#### 1. Introduction\n\nCamera calibration has always been an important issue in the field of computer vision, since it is a necessary step to extract metric information from 2D images. The goal of the camera calibration is to recover the mapping between the 3D space and the image plane, which can be separated into two sets of transformations. The first transformation is mapping of the 3D points in the scene to the 3D coordinates in the camera frame, which is described by the extrinsic parameters of the camera model. The second one involves mapping of the 3D points in the camera frame to the 2D coordinates in the image plane. This mapping is described by the intrinsic parameters which models the geometry and optical features of the camera. In general case, these two transformations can be expressed by the ideal pin-hole camera model.\n\nUp to now, much work for camera calibration has been done to accommodate various applications. Those approaches can be roughly grouped into two categories according to whether requiring a calibration object. This first type of camera calibration methods is named as metric calibration, which resolves the camera model with the help of metric information of a reference object. Camera calibration is performed by observing a calibration object whose geometry dimension is known with very high precision. The calibration object can be 3D object with several planes orthogonal to each other [1, 2]. Sometimes a 2D plane undergoing a precisely known translation or free movement is utilized. Recently, a 1D temple is used with three or more markers for camera calibration. In , it was proved that the 1D object undergoing a planar motion was essentially equivalent to the 2D planar object. 
For this type of method, calibration can be done very efficiently and accurately. However, a calibration pattern still needs to be prepared, though the setup in is very easy, with only a planar object attached with a chessboard utilized. The second type of camera calibration method is called self-calibration, which does not use any metric information from the scene or any calibration object. Such methods are also considered a 0D approach, since only image feature correspondences are required. Since two constraints on the intrinsic parameters of the camera can be provided by using image information alone , three images are sufficient to recover the camera parameters including the internal and external parameters and reconstruct the 3D structure of the scene up to similarity [10, 11]. The problem of such methods is that a large number of parameters need to be estimated, resulting in a very unstable solution. If the camera rotation is known, more stable and accurate results can be obtained [12, 13]. However, it is not always easy to get the camera rotation with very high accuracy. In general, metric calibration methods can provide better results than self-calibration methods . Our current research is focused on the smartphone vision system, since the potential for using such a system is large. Smartphones are now becoming ubiquitous and popular in our daily life. To make the general public who are not experts in computer vision do vision tasks easily, the setup for camera calibration should be flexible enough. The method developed in is considered the most flexible technique; however, as the orientation of the model plane with respect to the image plane increases, foreshortening will make the corner detection less precise and can even make it fail. Moreover, the planar pattern needs to be prepared, which is still inconvenient for the general smartphone user. Therefore, it would be best to utilize a handy item as the calibration object. The camera calibration technique described in this paper was designed with these considerations in mind. Compared with the classical techniques, the proposed technique does not need a prepared planar pattern and is considerably more flexible. The calibration objects employed by the proposed method are common and handy in our daily life, such as an A4 paper or even a standard IC card.\n\nOur approach exploits the line/edge features of the handy objects to calibrate both the internal and external parameters of the camera, since they provide a large degree of stability to illumination and viewpoint changes and offer some resilience to harsh imaging conditions such as noise and blur. A first challenge of the solution proposed in this paper is to automatically estimate the homography and establish the correspondences between model and image features. In this sense, we redesigned the model based tracking method to robustly estimate the homography for a common planar object in a cluttered scene. An advantage of such methods is their handling of occlusion and of large illumination and viewpoint changes. With a series of homographies from the planar object to the image plane, the initial camera parameters can be solved linearly. A second challenge is to optimize the camera parameters by developing an effective objective function and by making full use of the finite nature of the observations extracted from the images. 
In this paper, the error function for the model and image line, which encodes the length of the image line segment and the information of the midpoint, is derived from the noisy image edge points in the least squares approach.\n\nThe remainder of the paper is organized as follows. Section 2 gives the procedure of the proposed camera calibration algorithm. Section 3 presents an overview of the redesigned homography tracking method based on the edge model. Section 4 derives the error model between image and model lines and expresses the problem of the camera calibration in the probabilistic formulation by the maximum likelihood approach. Section 5 details how to solve the problem of camera calibration by the nonlinear technique. Some experiment results are given in Section 6.\n\n#### 2. Algorithm\n\nThe proposed algorithm is summarized in this section.\n\nStep 1. Optimize the homography between the model plane and image plane according to our model based homography tracking approach.\n\nStep 2. Fit the image line segment from the image edge points obtained by a 1D search along the normal direction of the corresponding model line.\n\nStep 3. Calculate the initial camera parameters linearly with a series of homography matrices (more than three orientations).\n\nStep 4. Estimate the camera parameters by minimizing the sum of the distances between the finite image line segments and the model lines in the maximum likelihood approach.\n\n#### 3. Model Based Homography Tracking\n\nAs can be seen in Figure 1, the 2D model edge is projected to the image plane using the prior homography of the planar object. Instead of tackling the line segment itself, we sample the projected line segment (black solid line in Figure 1) with a series of points (brown points in Figure 1). Then the visibility test for each of the sample points is performed, since some of these sample points may be out of the camera’s field of view. For each of the visible sample points, a 1D search along the normal direction of the projected model line is employed to find the edge point with the strongest gradient or closest location as its correspondence. Finally, the sum of the errors between the sample points and their corresponding image points is minimized to solve for the homography between frames.\n\n##### 3.1. Probabilistic Formulation for Homography Tracking\n\nThe relationship between a model point and its image point is given (up to scale) by the homography between the model plane and the image plane.\n\nSuppose that a set of projected sample points and their corresponding image points, observed with noise along the normal direction, are given. Then we can define a function to measure the normal distance between a projected sample point and its noisy observation, using the unit normal vector of the projected model line at that sample point.\n\nAssuming a Gaussian distribution for this normal distance, the conditional density of an observation given its sample point is a Gaussian in the normal distance.\n\nTherefore, with the assumption that the observation errors for different sample points are statistically independent, a maximum likelihood estimation of the homography is obtained by minimizing over all of the 3D model points.\n\nIt is clear that the proposed approach can obtain the maximum likelihood estimation of the homography by minimizing the sum of the squares of the normal distances.\n\n##### 3.2. Interaction Matrix-Distance between Points\n\nThe derivation of the interaction matrix for the proposed approach is based on the distance between the projection of a sample point and its corresponding image point. 
The motion velocity of the object is then related to the velocity of these distances in the image.\n\nAssume that we have a current estimation of the homography. The posterior homography can be computed from the prior homography given the incremental motion, and the incremental motion can be represented in terms of a set of generating motions.\n\nThe motion in the image is related to the twist in model space by computing the partial derivative of the normal distance with respect to the th generating motion at the current homography.\n\nThen the corresponding Jacobian matrices can be obtained accordingly, using a unit vector with the th item equal to 1.\n\n##### 3.3. Robust Minimization\n\nThe error vector is obtained by stacking all of the normal distances of each sample point.\n\nThe optimization problem for (5) can be solved as a weighted least squares problem, in which the unknown is the motion vector, the Jacobian matrix links the motion vector to the error vector, and a weight matrix provides robustness (refer to ).\n\nThen, the solution of (11) can be given in closed form.\n\nFinally, the new homography can be computed according to (7).\n\nWith a series of homography matrices (more than three orientations), the camera parameters can be solved linearly by the method of .\n\n#### 4. Maximum Likelihood Estimation of the Camera Parameters\n\nIn this paper, the camera calibration problem can be formulated in terms of a conditional density function that measures the probability of the image observations predicted from the camera parameters given the actual image observations. This section describes how to construct this conditional density function.\n\n##### 4.1. Probabilistic Formulation for the Camera Calibration\n\nConsider the case where there are multiple images of a static scene containing straight line segments. Let the matched set of 3D model and 2D image lines in each image be given; it can be established automatically according to the homography optimization in this paper. With the assumption that the observation errors for different line segments are statistically independent, the conditional density function of the camera parameters can now be defined in terms of the projection function, which takes the camera parameters and a 3D line segment and returns the corresponding edge in the image, together with the intrinsic and extrinsic parameters of the camera in each image.\n\nThen, the maximum likelihood estimation of the camera parameters is obtained by maximizing this conditional density function.\n\nBy taking the negative logarithm, the problem of maximizing a product is converted into a minimization of a sum.\n\nThe intrinsic parameters of the camera are represented by the equivalent focal lengths, the principal point of the camera, and the radial distortion coefficient. The extrinsic parameters of the camera in each image are presented in the usual manner by a translation vector and a rotation matrix. In the remainder of this section, the elements of (16) are discussed in more detail.\n\n##### 4.2. Perspective Projection of 3D Model Line Segment\n\nThroughout this paper, the perspective projection model is utilized. The relationship between a 3D world point and a 2D image point is given by the pin-hole projection: the point is first expressed in the camera frame and divided by its depth coordinate, and the equivalent focal lengths and the principal point then map the normalized coordinates to pixels. 
The radial distortion is modeled as a first-order polynomial along the projection ray from the focal point to the image point.\n\nAs shown in Figure 2, the line segment and its projection in the image plane are represented by their endpoints, and the segments lie on the corresponding infinite lines. The perspective projection of a 3D line segment can be given by the projection of its two endpoints.\n\nWhen noise is present in the measuring data, we denote the noisy observations of the projections of the 3D endpoints and the noisy observation of the projection of the 3D model line accordingly.\n\n##### 4.3. Error Model for the Observation of the Line Segment\n\nLet a series of image edge points be given, with observation noise perpendicular to the line. For convenience, we assume that the true position of the line is parallel to the horizontal axis. Then the vertical coordinates of the edge points are Gaussian random variables with zero mean, and they are mutually independent.\n\nLet the noises for the endpoints along the vertical direction be derived from the edge point noises; this can be done easily for a least squares fit.\n\nIt is clear that these two endpoint noises are negatively correlated. Since the observation noises conform to Gaussian random variables, the joint density for the two endpoint noises is a Gaussian PDF.\n\nSupposing that the length of the line segment is fixed and the intervals of the edge points are all equal, and letting the number of edge points grow large, a simple limiting form is obtained. Substituting (24) into (22) yields the final error model.\n\nFrom (25), it can be seen that the error model allows us to encode the measurement error for an image edge point explicitly and obtain the intuitive impact of the image line length. Moreover, long line segments produce a more accurate location than shorter ones, and small edge-point noise produces higher confidence about the line location.\n\n##### 4.4. Maximum Likelihood (ML) Estimation\n\nThe measurement noise for the localization of the 2D line segments can be decomposed into two components: noise perpendicular to the line and noise along the length of the line. The first noise is modelled as a Gaussian random variable related to orientation error and the noise model has been derived in the last section, whilst the second one is assumed to conform to any distribution (not necessarily Gaussian) related to line fragmentation.\n\nAs can be seen in Figure 3, both the projection of the 3D line segment and its noisy observation are represented by their endpoints, respectively. The noise vector perpendicular to the line and the noise vector along the line are defined so that the components of the first are the distances between the endpoints of the two segments along the direction perpendicular to the line, and the components of the second are the distances between the endpoints of the two line segments along the direction of the line.\n\nIt is assumed that the two random vectors are statistically independent. 
We can then approximate the conditional density of the observed segment given the projected one.\n\nIn the literature , it is proved that the conditional density of the projection of the 3D model line given its observed noisy image line segment is only dependent on the noise perpendicular to the line.\n\nTherefore, with the assumption that the observation errors for different line segments are statistically independent, (16) can be converted into the following form: an objective function that measures the disparity between the actual image observations and their corresponding predicted ones under the current camera parameters, built from the distances from the endpoints of the image line segment to the projected model line.\n\nIf the image line segment is fitted by LST and the intervals of the edge points are fixed for all of the image line segments, then the error terms correspond to the distances from the two endpoints and the midpoint of the image line segment to the projected model line. It is clear that the error function between a 3D model line and a 2D image line is weighted by the length of the image line segment.\n\n#### 5. Nonlinear Technique for the Optimization of Camera Parameters\n\nIn this section, we will describe how to employ the nonlinear technique to solve the problem of camera calibration defined in the previous section. In the initial case, the camera parameters can be provided by the method which is similar to , except that the homography matrices are calculated by the method discussed in Section 2, rather than from the chessboard corners. At each iteration, the linearized error function is minimized to obtain the interframe motion vector for the intrinsic and extrinsic parameters. Then the camera parameters are updated until the objective function converges to a minimum.\n\nThe distance from a point of the image line segment to the projection of the model line is given accordingly (refer to ).\n\nAssume that we have a current estimation of the rotation. The posterior rotation can be computed from the prior rotation given the incremental rotation, using the corresponding skew-symmetric matrix of the incremental rotation vector.\n\nThe transformation from the reference frame to the camera frame can be rewritten in terms of the location of the origin of the camera frame in the world frame.\n\nLet the motion velocities corresponding to translation in the three coordinate directions between the prior translation and the posterior translation be given. Equation (31) can then be rewritten accordingly.\n\nThen, the partial derivative of the error function with respect to the th motion velocity can be computed.\n\nThe partial derivative of the error function with respect to the intrinsic parameters can be given similarly.\n\nThe error vector is obtained by stacking all of the normal distances of each image point, that is, the distance vectors from the midpoint and endpoints of each image line segment to the projected model line.\n\nThe optimization problem for (30) can be solved as a weighted least squares problem in the motion vector, where the Jacobian matrix links the motion vector to the error vector and a weight matrix provides robustness (refer to ).\n\nIf the incremental motion vector has been calculated, the new camera parameters can be computed by the update above.\n\n#### 6. Experimental Results\n\nThe proposed algorithm has been tested on simulated data generated by the computer and real image data captured from our smartphone. The closed-form solution is yielded by the approach , except that the homography matrices are estimated by the proposed method. 
The nonlinear refinement within the IRLS algorithm takes 5 to 8 iterations to converge.\n\n##### 6.1. Computer Simulations\n\nThe simulated perspective camera is supposed to be 2 m from the plane object. The resolution of the virtual camera is . The simulated camera has the following property: , . The model plane is a checker pattern printed on the A4 paper (210 mm × 297 mm) with corners. The images are taken from different orientations in front of the virtual camera. The normal vector of the plane is parallel to the rotation axis represented by a 3D vector , whose magnitude is equal to the rotation angle. The position of the plane is represented by a 3D vector (unit in millimetres). In the experiment, the proposed method is compared with the widely used chessboard corners based method (referred to as the corners based method; the implementation follows the related camera calibration function of OpenCV ). For the corners based method, 154 corners are utilized. In our method, we use 25 lines fitted from the noisy corners by the LST. The reprojection error indicated by RMS is expressed as the root of the mean squared distances, in pixels, between the detected image corners and the projected ones. When only four edges of the plane pattern are utilized, the proposed method is referred to as the 4-line based method.\n\n###### 6.1.1. Performance with respect to the Noise Level\n\nIn this experiment, three planes with , , , , and are used (the three orientations are chosen according to ). Zero mean Gaussian noise is added to the projected image points with the standard deviation ranging from 0.1 pixels to 2.0 pixels in steps of 0.1 pixels. At each noise level, 100 independent trials are generated. The estimated camera parameters are then compared with the ground truth and RMS errors are measured. Moreover, for 154 points with real projections and the recovered projections, the RMS reprojection error is also calculated. Figures 4(a) and 4(b) display the relative errors of the intrinsic parameters which are measured with respect to , while Figure 4(c) shows the reprojection errors of the two methods.\n\nFrom Figure 4, we can see that both the relative errors of the intrinsic parameters and the reprojection errors increase almost linearly with the noise level. The proposed method produces performance equivalent to the corners based method, since the image lines are fitted from the noisy image corners. When 4 lines (the smallest set for homography estimation) are utilized, the errors of the proposed method are larger than those of the corners based method. For , there is little difference between the 4-line based method and the corners based method.\n\nIn addition, we vary the number of sample points that are utilized to fit the line segment to validate the performance of the 4-line based method with . From the results in Figure 5, we can see that the errors decrease significantly when more sample points are utilized. When the number is above 40, where more than 160 points are utilized to fit the 4 line segments, the performance of the 4-line based method is almost similar to that of the 154-corner based method.\n\n###### 6.1.2. Performance with respect to the Number of Planes\n\nIn this experiment, we investigate the performance of the proposed method with respect to the number of the images of the model planes. In the first three images, we use the same orientation and position of the model plane as those used in the last subsection. 
For the following images, the rotation axes are randomly chosen in a uniform sphere with the rotation angle fixed to 30° and the positions are randomly selected around . The number of the model plane images ranges from 3 to 17. At each number of the images, 100 independent trials of independent plane orientations are generated with the noise level for the image points fixed to 0.5 pixels. The errors, including the relative errors in camera intrinsic parameters and the reprojection errors, for the two methods are shown in Figure 6. The errors decrease when more images are used. From 3 to 7, the errors decrease significantly. Moreover, the reprojection errors of the proposed method remain around 0.7 pixels as the number of images varies.\n\n###### 6.1.3. Performance with respect to the Number of Lines\n\nThis experiment examines the performance of the proposed method with respect to the number of the lines utilized to recover the camera parameters. For our method, at least 4 lines should be employed. We vary the number of lines from 4 to 25. Three images of the model plane are also used with the same orientation and position as in the last subsection. 100 independent trials are conducted with the noise level fixed to 0.5 pixels for each number of the lines. The results are shown in Figure 7. When more lines are used, the errors decrease. In particular, from 4 to 15, the errors decrease significantly.\n\n###### 6.1.4. Performance with respect to the Orientation of the Model Plane\n\nThis subsection investigates the influence of the orientation of the model plane with respect to the image plane. In the experiment, three images are used, with two of them similar to the last two planes in Section 6.1.1. The initial rotation axis of the third plane is parallel to the image plane, and the orientation of the planes is randomly chosen from a uniform sphere with the rotation varying from 5° to 75°. The noise level is fixed to 0.5 pixels. The results are displayed in Figure 8. Best performance seems to be achieved with the angle around 40°.\n\n##### 6.2. Real Images\n\nFor the experiment with real data, the proposed algorithm is tested on several image sequences captured from the camera of the smartphone.\n\n###### 6.2.1. Homography Tracking Performance\n\nIn the experiment, three image sequences are captured from the smartphone with a resolution of . In the first image sequence, a chessboard containing interior corners is printed on an A4 paper and put on the desk. About 1500 frames are taken at different orientations. For each image, the homography from the model plane to the image plane is optimized by the proposed method using the four edges of the A4 paper. The interior corners are extracted by the OpenCV function cvFindChessboardCorners and refined by cvFindCornerSubPix. Figure 9 shows some sampled results from the image sequence.\n\nIn the last two image sequences, the covers of two books are chosen as the model planes, respectively. To validate the performance of the proposed homography tracking method, the books are placed in a cluttered environment with the smartphone undergoing large rotation and translation. Both of the last two sequences contain around 2000 images. Figure 10 exhibits some sampled results.\n\n###### 6.2.2. Camera Calibration Performance\n\nIn this subsection, we applied our calibration technique and the corners based method to the four images sampled from the video captured by our smartphone (shown in Figure 11). The image resolution is . 
In the experiment, the chessboard plane contains interior corners and 23 lines. The results are shown in Table 1. We can see that there is little difference between the proposed method and the corners based method. When only four edges of the plane pattern are utilized, the proposed method provides very consistent results with the corners based method, and the offset of the camera parameters is very small, about 5 pixels, with respect to the corners based method. The last column of Table 1 shows the reprojection RMS of the three methods. When all of the 23 lines are utilized, the proposed method provides almost the same reprojection error as the corners based method. The 4-line based method returns a slightly larger reprojection error, since only the minimum number of model lines is utilized.\n\n
| Methods | fx (pixels) | fy (pixels) | u0 (pixels) | v0 (pixels) | Reprojection error |
| --- | --- | --- | --- | --- | --- |
| Corners based method | 1147.23 | 1146.68 | 475.39 | 258.04 | 0.41 |
| Lines based method | 1150.76 | 1151.49 | 474.61 | 262.60 | 0.42 |
| 4-line based method | 1141.14 | 1139.99 | 480.25 | 253.07 | 0.67 |
\n\nIn order to further investigate the stability of the proposed method, we vary the number of lines from 4 to 23. The results are shown in Figure 12. The intrinsic parameters recovered by the proposed method are around the values estimated by the corners based method, with only a small deviation. The reprojection errors for the proposed method decrease significantly from 4 to 17. When the number is above 17, the reprojection error is very close to that of the corners based method.\n\n###### 6.2.3. Application to Image-Based Modelling\n\nIn this subsection, we applied the proposed method on two image sequences. In the first image sequence, the card with the size of 54.0 mm × 85.6 mm is utilized as the model object. The A4 paper with the size of 210 mm × 297 mm is chosen as the model object for the second image sequence. In the experiment, a series of images are sampled from the videos to calibrate the camera intrinsic parameters and then the camera pose is optimized for each image frame. After that, the structure from motion developed by the methods was run on the image sequences to build the complete models of the toys, including Luffy and Hulk. In Figure 13, Figures (A), (B), (C), and (D) are some sampled images from the image sequences. The recovered camera poses by the proposed method are shown in Figure (E). The left one of Figure (E) shows the camera poses for the whole image sequence, while the right one corresponds to the sampled views for the following reconstruction. By recovering the whole motion trajectory of the camera, we can easily choose a subset of the frames which are suitable and adequate for modeling. Two rendered views of the reconstructed objects are shown in Figure (F). From Figure 13, we can see that the complete model of the objects has been reconstructed by moving the camera around the objects. Since the size of the handy items is known, the objects can be reconstructed with metric information.\n\n###### 6.2.4. Discussion\n\nIn practice, corner detection often fails when the angle between the model and image plane is large or when some of the corners are invisible or corrupted by image noise and blur. However, edge detection is more stable in such cases. Moreover, in the simulated experiments, since the line segment is fitted from the corners lying on it, the proposed method provides almost the same performance as the corners based method. 
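
As a concrete illustration of this point, consider the following small Monte Carlo sketch. It is added here for illustration only and assumes a plain NumPy least squares line fit, not the authors' implementation; it shows how the uncertainty of a fitted line's endpoint shrinks as more edge points are used, consistent with the error model of Section 4.3:

```python
# Illustrative Monte Carlo (NumPy only, a sketch rather than the paper's code):
# endpoint uncertainty of a least-squares line fit versus the number of edge points.
import numpy as np

rng = np.random.default_rng(0)
sigma_e = 1.0          # std of edge-point noise perpendicular to the line (pixels)
length = 100.0         # line segment length (pixels)

for n in (10, 40, 160):
    xs = np.linspace(0.0, length, n)           # sample positions along the line
    end_errs = []
    for _ in range(2000):
        ys = sigma_e * rng.standard_normal(n)  # true line is y = 0
        a, b = np.polyfit(xs, ys, 1)           # least-squares fit y = a x + b
        end_errs.append(a * length + b)        # fitted offset at the far endpoint
    print(n, np.std(end_errs))                 # shrinks roughly like 2*sigma_e/sqrt(n)
```

For points spread uniformly along the segment, the standard least squares analysis gives an endpoint standard deviation of about 2σ/√n, so denser sampling (and, for a fixed point spacing, longer segments) yields a more confident line location, in line with the claims of Section 4.3.
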
In our homography tracking framework, much more image edge points corresponding to the sample model points are utilized, and therefore the line segment can be fitted with higher accuracy. In addition, the proposed method is more flexible and suitable for the general user of the smartphone who wants to take the vision task, since it only uses the common and handy planar object rather than the prepared planar pattern.\n\n#### 7. Conclusions\n\nIn this paper, we have investigated the possibility of camera calibration using common and handy planar objects undertaking general motion for the smartphone vision tasks. A linear algorithm supported by the edge model based homography tracking is proposed, followed by a nonlinear technique to refine the results. Both the computer simulated and real images have been utilized to validate the proposed algorithm. The experimental results exhibited that the proposed algorithm is valid and robust and provides more flexible performance than the widely used planar pattern based method.\n\nIn addition, for the general user who will do vision task, the prepared planar calibration may be not always in hand. However, the common items in our daily life almost have the standard size and planar structure. By exploiting the edge information, we proposed an easier and practical camera calibration method. Moreover, in the proposed method, the uncertainty of the image line segment was encoded in the error model, which takes the finite nature of the observations into account. The problem of camera calibration using lines was formalized in the probabilistic framework and solved by the maximum likelihood approach.\n\n#### Conflict of Interests\n\nThe authors declare that there is no conflict of interests regarding the publication of this paper.\n\n#### Acknowledgments\n\nThe research was supported by the National Basic Research Program of China (2013CB733100) and National Natural Science Foundation of China Grant (no. 11332012).\n\n1. O. D. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint, MIT Press, 1993.\n2. J. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 10, pp. 965–980, 1992. View at: Publisher Site | Google Scholar\n3. R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE Journal of Robotics and Automation, vol. 3, no. 4, pp. 323–344, 1987. View at: Publisher Site | Google Scholar\n4. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000. View at: Publisher Site | Google Scholar\n5. E. Peng and L. Li, “Camera calibration using one-dimensional information and its applications in both controlled and uncontrolled environments,” Pattern Recognition, vol. 43, no. 3, pp. 1188–1198, 2010.\n6. F. C. Wu, Z. Y. Hu, and H. J. Zhu, “Camera calibration with moving one-dimensional objects,” Pattern Recognition, vol. 38, no. 5, pp. 755–765, 2005. View at: Publisher Site | Google Scholar\n7. P. Hammarstedt, P. Sturm, and A. Heyden, “Degenerate cases and closed-form solutions for camera calibration with one-dimensional objects,” in Proceedings of the 10th IEEE International Conference on Computer Vision (ICCV '05), vol. 1, pp. 317–324, Beijing, China, October 2005. View at: Publisher Site | Google Scholar\n8. Z. 
9. O. D. Faugeras, Q.-T. Luong, and S. J. Maybank, “Camera self-calibration: theory and experiments,” in Computer Vision—ECCV'92: Second European Conference on Computer Vision, Santa Margherita Ligure, Italy, May 19–22, 1992, Proceedings, vol. 588 of Lecture Notes in Computer Science, pp. 321–334, Springer, Berlin, Germany, 1992.
10. Q.-T. Luong and O. D. Faugeras, “Self-calibration of a moving camera from point correspondences and fundamental matrices,” International Journal of Computer Vision, vol. 22, no. 3, pp. 261–289, 1997.
11. R. I. Hartley, “Algorithm for self calibration from several views,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR '94), pp. 908–912, June 1994.
12. G. P. Stein, “Accurate internal camera calibration using rotation, with analysis of sources of error,” in Proceedings of the 5th International Conference on Computer Vision (ICCV '95), pp. 230–236, Cambridge, Mass, USA, June 1995.
13. R. I. Hartley, “Self-calibration from multiple views with a rotating camera,” in Proceedings of the European Conference on Computer Vision (ECCV '94), pp. 471–478, Stockholm, Sweden, May 1994.
14. C. Ricolfe-Viala and A.-J. Sánchez-Salmerón, “Robust metric calibration of non-linear camera lens distortion,” Pattern Recognition, vol. 43, no. 4, pp. 1688–1699, 2010.
15. A. Petit, E. Marchand, and K. Kanani, “Combining complementary edge, keypoint and color features in model-based tracking for highly dynamic scenes,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '14), pp. 4115–4120, IEEE, Hong Kong, May 2014.
16. C. Choi and H. I. Christensen, “Real-time 3D model-based tracking using edge and keypoint features for robotic manipulation,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 4048–4055, Anchorage, Alaska, USA, May 2010.
17. A. I. Comport, E. Marchand, M. Pressigout, and F. Chaumette, “Real-time markerless tracking for augmented reality: the virtual visual servoing framework,” IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 4, pp. 615–628, 2006.
18. T. Drummond and R. Cipolla, “Real-time visual tracking of complex structures,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 932–946, 2002.
19. R. Hanek, N. Navab, and M. Appel, “Yet another method for pose estimation: a probabilistic approach using points, lines, and cylinders,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '99), vol. 2, IEEE, Fort Collins, Colo, USA, June 1999.
20. R. Kumar and A. R. Hanson, “Robust methods for estimating pose and a sensitivity analysis,” CVGIP: Image Understanding, vol. 60, no. 3, pp. 313–342, 1994.
21. Intel, Open Source Computer Vision Library, 2014.
22. Z. Langming, Z. Xiaohu, and G. Banglei, “A flexible method for multi-view point clouds alignment of small-size object,” Measurement, vol. 58, pp. 115–129, 2014.
23. C. Wu, “Towards linear-time incremental structure from motion,” in Proceedings of the International Conference on 3D Vision (3DV '13), pp. 127–134, Seattle, Wash, USA, July 2013.
24. Y. Furukawa and J. Ponce, “Accurate, dense, and robust multiview stereopsis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 8, pp. 1362–1376, 2010.
https://dsp.stackexchange.com/questions/33529/making-unscented-kalman-filter-robust-for-nonlinear-parameter-estimation-problem
"# Making Unscented Kalman Filter Robust for Nonlinear Parameter Estimation Problems\n\nSo I have built code for an Unscented Kalman filter that can take any specified state and measurement dynamics. I have tested it on various linear problems and it works well, as expected. The main concerns I have is applying this filter to nonlinear parameter estimation problems.\n\nWhen I have applied it to problems where I give a pretty good guess, it tends to do fine. However, more often when I don't have a good guess and make the covariance matrix elements larger, it tends to lose its positive definiteness and in turn fails to work.\n\nDoes anyone have tips on helping avoid these problems other than trial and error tuning of the various covariance matrices?\n\nI have read papers about using the EKF/UKF for parameter estimation of weights in Neural Networks, for example. However, whenever I try to do this, I lose positive definiteness and it's frustrating because I struggle to reproduce the stellar results I see in the literature with respect to using the UKF for machine learning. Due to my failed attempts, I am questioning if it's really as useful as the literature appears to imply.\n\nNote that I tend to formulate the problem like so for the parameter estimation problems I am looking at:\n\n\\begin{align} \\textbf{w}_{i+1} &= \\textbf{w}_{i} + \\mathbf{\\eta}_w\\\\ \\textbf{e}_{i} &= \\text{H}(\\textbf{w}_{i}) + \\mathbf{\\eta}_e \\end{align}\n\nwhere $\\textbf{w}$ is the parameters being estimated and $\\textbf{e}$ is some error generated by the parameters, where the expected value for $\\textbf{e}$ is always $\\textbf{0}$ (since I'm trying to find $\\textbf{w}$ such that the error is as close to 0 as possible). $\\text{H}(\\cdot)$ also tends to represent a cost function that the underlying model is scored against as a function of the current parameters $\\textbf{w}$.\n\n• If you are doing parameter estimation only, and not combined parameter and state estimation, you are probably better off with recursive least squares and similar methods (i.e. only parameter estimation). – Arnfinn Aug 7 '16 at 23:16\n• @Arnfinn Well technically, the way I formulated the problem, the parameters I estimate are the state. Obviously the state dynamics are linear, but the measurement function is nonlinear wrt the state being estimated, hence why I want to use the UKF. I don't even know if the recursive least square applies well here. – spektr Aug 8 '16 at 0:44\n• I believe the EKF yields the standard RLS expressions when there is no dynamics... – Arnfinn Aug 8 '16 at 0:52\n• At any rate, RLS or EKF, some common parameter tips and tricks for convergence are: 1) Make sure you have persistent excitation 2) It can be difficult to estimate parameters that are not affine with state or input 3) Use a forgetting factor to avoid covariance wind-up and increase the local region of attraction – Arnfinn Aug 8 '16 at 1:03\n• @Arnfinn I wouldn't know if you're correct about EKF reproducing RLS in special cases, I would have to try to derive that. But in terms of your tips, maybe you could write an answer explaining some of these comments in detail with some math to help make them more precise? Or if you know of some papers that could help articulate what you're referring to, that would be useful as well. – spektr Aug 8 '16 at 2:48"
https://patents.justia.com/patent/20120324469
"# RESOURCE ALLOCATION APPARATUS, RESOURCE ALLOCATION METHOD, AND COMPUTER READABLE MEDIUM\n\n- NEC CORPORATION\n\nA parameter determination unit 110 substitutes, for each of a plurality of applications, a recommended amount of resources and a quality of experience corresponding to the recommended amount of resources, and a minimum amount of resources and a quality of experience corresponding to the minimum amount of resources into a quality function fin expression (1) indicating a relation between an amount of resources R and a quality of experience Q, to determine parameters a and b. A resource amount determination unit 120 determines an amount of resources to be allocated to the plurality of applications using the quality function f for each application in which the parameters a and b are determined. The quality function f(x) is a monotonically increasing function having an inverse function f−1, connects (−∞) and (+∞), and is symmetrical with respect to x=0. Q=f(x)=f((R−a)/b) (1).\n\n## Latest NEC CORPORATION Patents:\n\nDescription\nTECHNICAL FIELD\n\nThe present invention relates to a technique for allocating resources to a plurality of applications operated on a computer.\n\nBACKGROUND ART\n\nVarious techniques have been suggested to allocate resources in a computer system.\n\nFor example, a QoS control apparatus disclosed in PTL 1 registers, in a table, an amount of resources to be assigned corresponding to the class of QoS that is determined in advance for a task that is being operated in an assignment amount registration means, and determines the resource allocation amount to each task based on the table.\n\nFurther, a distributed resource management system disclosed in PTL 2 creates, when resources to be shared by a plurality of applications are allocated, a demand function indicating the degree of demand of target resources required for the plurality of applications, to allocate the resources based on the demand function.\n\nSmall-sized devices such as mobile telephones, PNDs (Portable Navigation Devices), and DPFs (Digital Photo Frames) have limited resources of memory capacities and arithmetic capacities of CPUs. Thus, allocation of resources to applications is especially important. When a large number of applications are operated in these small-sized devices, these applications may not be able to operate as expected. This is because, since these applications contend for the limited resources, it is impossible to achieve the quality which can be accomplished in the case in which only one application is operated.\n\nIn order to smoothly operate a plurality of applications in a device with limited resources, it is required to appropriately allocate the resources to these applications. Since the appropriate allocation of resources is different depending on the number and the type of the applications that are operated, methods like equal allocation and best effort allocation are not preferable.\n\nOne known method is to calculate, for each of the plurality of applications that are being operated, the amount of resources to be allocated to each of the applications based on a relation between an amount of resources to be used by the application and a quality of user experience. The quality of user experience means the degree of satisfaction felt by users. The user satisfaction can be enhanced by performing resource allocation based on the quality of user experience.\n\nCITATION LIST Patent Literature\n\n• PTL 1: Japanese Unexamined Patent Application Publication No. 
• PTL 2: Japanese Unexamined Patent Application Publication No. 11-66018

SUMMARY OF INVENTION

Technical Problem

For example, one possible method is to apply the technique disclosed in PTL 1: for each application, a table indicating the relation between the amount of resources to be used by the application and the quality of experience is registered in advance, and resources are allocated to the running applications based on the table of each application.

Further, the table indicating the relation between the amount of resources to be used by an application and the quality of experience may be created in advance by the manufacturer of the application or the designer of the device, for example.

However, recent mobile telephones, even though they are small-sized devices, have several tens of applications installed. In such a case, it takes an enormous amount of effort to search in advance for the relation between the amount of resources to be used and the quality of experience for every application on a device on which a large number of applications are installed.

The present invention has been made based on the background stated above, and provides a technique to efficiently acquire the relation between the amount of resources to be used by each application and the quality of experience when the amount of resources is allocated to a plurality of applications operated on a device.

Solution to Problem

One exemplary aspect of the present invention is a resource allocation apparatus that performs allocation of an amount of resources to a plurality of applications operated on a computer. This resource allocation apparatus includes a parameter determination unit and a resource amount determination unit.

The parameter determination unit determines, for each of the plurality of applications, a first parameter a and a second parameter b in a quality function f of expression (1) indicating a relation between an amount of resources to be used and a quality of user experience. More specifically, for each of the applications, a recommended amount of resources and a quality of experience corresponding to the recommended amount of resources, and a minimum amount of resources and a quality of experience corresponding to the minimum amount of resources are substituted into expression (1), to calculate the first parameter a and the second parameter b in expression (1).
Note that the quality function f(x) is a monotonically increasing function having an inverse function f−1, connects (−∞, 0) and (+∞, 1), and is symmetrical with respect to x=0.

Q=f(x)=f((R−a)/b) (1)

where

Q: quality of experience

f: quality function

R: amount of resources

a: first parameter

b: second parameter

The resource amount determination unit determines an amount of resources to be allocated to the plurality of applications using the quality function f for each of the plurality of applications in which the first parameter a and the second parameter b are determined by the parameter determination unit.

Note that a method and a system corresponding to the resource allocation apparatus according to the above aspect, a program for causing a computer to execute the operation of the resource allocation apparatus, a computer readable recording medium storing the program, and the like are also effective as aspects of the present invention.

Advantageous Effects of Invention

According to the technique of the present invention, it is possible to efficiently create the quality function which indicates the relation between the amount of resources to be used and the quality of user experience and which is required to allocate resources to a plurality of applications.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 is a diagram showing a resource allocation apparatus according to a first exemplary embodiment of the present invention;

FIG. 2 is a diagram showing a quality function f;

FIG. 3 is a diagram showing two specific examples of the quality function f;

FIG. 4 is a flowchart showing a process flow in the resource allocation apparatus shown in FIG. 1;

FIG. 5 is a diagram showing an example of a relation between performance of applications and an amount of resources;

FIG. 6 is a diagram showing a quality function created for three applications shown in FIG. 5;

FIG. 7 is a diagram for describing determination of an amount of resources for the three applications shown in FIG. 5;

FIG. 8 is a diagram showing a resource allocation apparatus according to a second exemplary embodiment of the present invention;

FIG. 9 is a diagram showing an example of a reception unit in the resource allocation apparatus shown in FIG. 8 (case 1);

FIG. 10 is a diagram showing an example of the reception unit in the resource allocation apparatus shown in FIG. 8 (case 2);

FIG. 11 is a flowchart showing a process flow in the resource allocation apparatus shown in FIG. 8;

FIG. 12 is a diagram for describing an example of allocation of an amount of resources by the resource allocation apparatus shown in FIG. 8;

FIG. 13 is a diagram showing a resource allocation apparatus according to a third exemplary embodiment of the present invention;

FIG. 14 is a diagram for describing an example of allocation of an amount of resources by the resource allocation apparatus shown in FIG. 13; and

FIG. 15 is a flowchart showing a process flow in the resource allocation apparatus shown in FIG. 13.

DESCRIPTION OF EMBODIMENTS

Hereinafter, exemplary embodiments of the present invention will be described with reference to the drawings. For clarity of description, the following description and the drawings are simplified and partially omitted as appropriate.
Further, each element shown in the drawings as a functional block performing various processing may be formed of a CPU, a memory, and other circuits in hardware, and may be achieved by a program loaded into a memory in software. Accordingly, a person skilled in the art would understand that these functional blocks may be achieved in various ways, e.g., only by hardware, only by software, or a combination thereof, without any limitation. Throughout the drawings, identical components are denoted by identical reference symbols, and overlapping description is omitted as appropriate.

Further, the program mentioned above can be stored and provided to a computer using any type of non-transitory computer readable media. Non-transitory computer readable media include any type of tangible storage media. Examples of non-transitory computer readable media include magnetic storage media (such as flexible disks, magnetic tapes, hard disk drives, etc.), optical magnetic storage media (e.g. magneto-optical disks), CD-ROM (Read Only Memory), CD-R, CD-R/W, and semiconductor memories (such as mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, RAM (Random Access Memory), etc.). The program may be provided to a computer using any type of transitory computer readable media. Examples of transitory computer readable media include electric signals, optical signals, and electromagnetic waves. Transitory computer readable media can provide the program to a computer via a wired communication line (e.g. electric wires and optical fibers) or a wireless communication line.

First Exemplary Embodiment

FIG. 1 shows a resource allocation apparatus 100 according to a first exemplary embodiment of the present invention. This resource allocation apparatus 100 is installed in a small-sized device such as a mobile telephone, and performs allocation of resources to applications operated on the small-sized device.

The resource allocation apparatus 100 includes a parameter determination unit 110 and a resource amount determination unit 120.

The parameter determination unit 110 determines, for each of a plurality of applications activated on the small-sized device, a first parameter a_i and a second parameter b_i in the quality function f of expression (2) indicating the relation between the amount of resources to be used and the quality of user experience.

Q_i = f(x) = f((R_i − a_i)/b_i) (2)

where

i: number of the application

Q_i: quality of experience

f: quality function

R_i: amount of resources

a_i: first parameter

b_i: second parameter

As shown in FIG. 2, the quality function f(x) is a monotonically increasing function having an inverse function f−1, connects (−∞, 0) and (+∞, 1), and is symmetrical with respect to x=0.

As the quality function f(x), a monotonically increasing function is employed whose value increases gradually from (−∞, 0), whose slope increases as x approaches 0 and gradually decreases after x=0, and which gradually approaches (+∞, 1). The reason why a monotonically increasing function f(x) having such characteristics is used as the quality function is that it reproduces the assumed characteristics of the quality of user experience. This assumption is as follows.
That is, if the performance is already sufficient, the user does not feel that the quality has improved even when the performance is enhanced somewhat further, whereas if the performance is hardly accomplished at all, the user does not feel that the quality has worsened even when the performance is decreased further. While the quality function f(x) is preferably continuous and smooth, it may be approximately continuous and smooth.

FIG. 3 shows two specific examples of the quality function f(x). The solid line and the dotted line indicate the f(x) of expression (3) and expression (4), respectively. The symbol “x” in expression (3) and expression (4) is “(R−a)/b”.

$f(x) = \frac{1}{1+e^{-x}}$ (3)

$f(x) = \frac{1}{\pi}\arctan(x) + \frac{1}{2}$ (4)

While the function shown in expression (3) is used as the example of f(x) in the following description, the quality function is not limited to the function shown in expression (3).

Substituting, for each application, the recommended amount of resources R_i^high and the quality of experience Q_i^high corresponding to R_i^high, and the minimum amount of resources R_i^low and the quality of experience Q_i^low corresponding to R_i^low into expression (3) gives the following expression (5) and expression (6). Note that R_i^high, Q_i^high, R_i^low, and Q_i^low are determined by the manufacturers of the applications or the designers of the devices, and are input to the parameter determination unit 110.

$Q_i^{high} = \frac{1}{1+e^{-(R_i^{high}-a_i)/b_i}}$ (5)

$Q_i^{low} = \frac{1}{1+e^{-(R_i^{low}-a_i)/b_i}}$ (6)

Then, solving the simultaneous equations of expression (5) and expression (6) for a_i and b_i gives the first parameter a_i and the second parameter b_i as shown in expressions (7) and (8).

$a_i = \frac{R_i^{low}\ln(1/Q_i^{high}-1) - R_i^{high}\ln(1/Q_i^{low}-1)}{\ln(1/Q_i^{high}-1) - \ln(1/Q_i^{low}-1)}$ (7)

$b_i = \frac{R_i^{low} - R_i^{high}}{\ln(1/Q_i^{high}-1) - \ln(1/Q_i^{low}-1)}$ (8)

The parameter determination unit 110 calculates the first parameter a_i and the second parameter b_i for each application according to expressions (7) and (8) and supplies the calculated results to the resource amount determination unit 120.

When Q_i^high and Q_i^low satisfy the following expression (9), the calculation of the first parameter a_i and the second parameter b_i becomes simple, as shown in expressions (10) and (11).

Q_i^high + Q_i^low = 1 (9)

$a_i = \frac{R_i^{low} + R_i^{high}}{2}$ (10)

$b_i = \frac{R_i^{low} - R_i^{high}}{2\ln(1/Q_i^{high}-1)}$ (11)

When Q_i^high and Q_i^low satisfy expression (9), the first parameter a_i shown in expression (10) is the average value of the recommended amount of resources R_i^high and the minimum amount of resources R_i^low. This holds not only for the quality function f(x) shown in expression (3) but for any monotonically increasing function that has an inverse function f−1, connects (−∞, 0) and (+∞, 1), and is symmetrical with respect to x=0. This average value indicates the amount of resources at which the quality Q is 0.5.
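A minimal sketch of this parameter determination, assuming the logistic quality function of expression (3); the function name is illustrative:

```python
import math

def fit_quality_params(r_high, q_high, r_low, q_low):
    """Solve expressions (7) and (8) for (a_i, b_i) from the recommended and
    minimum amounts of resources and their qualities of experience."""
    lh = math.log(1.0 / q_high - 1.0)
    ll = math.log(1.0 / q_low - 1.0)
    a = (r_low * lh - r_high * ll) / (lh - ll)
    b = (r_low - r_high) / (lh - ll)
    return a, b

# When Q_high + Q_low = 1 (expression (9)), this reduces to expressions
# (10) and (11); e.g. for the H.264 decoder of the example below:
print(fit_quality_params(70, 0.8, 60, 0.2))  # -> (65.0, ~3.61)
```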
The resource amount determination unit 120 determines, using the first parameter a and the second parameter b of each application determined by the parameter determination unit 110, the amount of resources to be allocated to these applications.

In the first exemplary embodiment, the resource amount determination unit 120 determines the amount of resources to be allocated to each application so that the total amount of resources allocated to the applications is a predetermined amount TR and the quality of experience is the same for every application. This common quality of experience is hereinafter called the uniform quality of experience AQ.

More specifically, the resource amount determination unit 120 determines the amount of resources R_i for each application according to expressions (12) and (13).

$AQ = f\!\left(\frac{TR - \sum_i a_i}{\sum_i b_i}\right)$ (12)

where

AQ: uniform quality of experience

TR: predetermined amount

$R_i = b_i f^{-1}(AQ) + a_i$ (13)

Note that the predetermined amount TR is set by the designer of the device, for example. Specifically, it can be determined as the amount of resources that can be used to execute applications out of the total resources of the small-sized device.

FIG. 4 is a flowchart showing a process flow in the resource allocation apparatus 100 shown in FIG. 1.

First, the parameter determination unit 110 substitutes, for each application, the recommended amount of resources R_i^high and the corresponding quality of experience Q_i^high, and the minimum amount of resources R_i^low and the corresponding quality of experience Q_i^low into the simultaneous equations shown by expression (5) and expression (6), to calculate the first parameter a_i and the second parameter b_i (S100).

The resource amount determination unit 120 substitutes the first parameter a_i and the second parameter b_i of each application calculated by the parameter determination unit 110 into expression (12), to calculate the uniform quality of experience AQ (S102).

Then, the resource amount determination unit 120 substitutes, for each application, the first parameter a_i and the second parameter b_i of the application and the uniform quality of experience AQ calculated in Step S102 into expression (13), to calculate the amount of resources R_i to be allocated to the application (S104).

Now, the resource allocation apparatus 100 shown in FIG. 1 will be described in more detail using the following specific example.

For example, the resource to be allocated is the CPU arithmetic capacity of the device, and the amount of resources that can be allocated to applications (the predetermined amount TR stated above) is 100%. There are three applications that are activated: a decoder compliant with H.264 (hereinafter referred to as the H.264 decoder), an application for facial recognition (hereinafter referred to as the facial recognition AP), and an application for news viewing (hereinafter referred to as the news viewing AP).

The H.264 decoder is an application for decoding a file in the H.264 video codec format, and the facial recognition AP is an application for analyzing the input image from a camera included in the device to judge whether a face is displayed.
The news viewing AP is an application for acquiring and displaying news at predetermined intervals.

Further, it is assumed that the relation between the amount of resources to be used and the performance shown in FIG. 5 has been obtained for the three applications. In the example shown in FIG. 5, the performance of the H.264 decoder is indicated by the time required to decode 30 frames, the performance of the facial recognition AP is indicated by the time required to analyze one frame, and the performance of the news viewing AP is indicated by the interval at which news is acquired.

Further, for each application, it is assumed that the quality of experience Q_i^high corresponding to the recommended amount of resources R_i^high is 0.8 and the quality of experience Q_i^low corresponding to the minimum amount of resources R_i^low is 0.2.

The parameter determination unit 110 determines the recommended amount of resources R_i^high and the minimum amount of resources R_i^low based on the relation between the performance and the amount of resources shown in FIG. 5. The determination of R_i^high and R_i^low is performed based on standards such as: for the H.264 decoder, it is sufficient to decode 30 frames in one second; for the facial recognition AP, it is desirable to analyze one frame in 0.5 to 1 second; and for the news viewing AP, it is desirable to acquire news at intervals of one second.

In this way, for example, for the H.264 decoder, R_i^high and R_i^low are 70% and 60%, respectively; for the facial recognition AP, R_i^high and R_i^low are 80% and 40%, respectively; and for the news viewing AP, R_i^high and R_i^low are 30% and 10%, respectively.

The parameter determination unit 110 substitutes, for each application, Q_i^high (in this example, 0.8), R_i^high, Q_i^low (in this example, 0.2), and R_i^low into expressions (5) and (6) and solves for the first parameter a_i and the second parameter b_i. In this example, for the H.264 decoder, the first parameter a_i and the second parameter b_i are 65 and 3.6, respectively; for the facial recognition AP, they are 60 and 14.4, respectively; and for the news viewing AP, they are 20 and 7.2, respectively.

Accordingly, the quality functions of the H.264 decoder, the facial recognition AP, and the news viewing AP are as shown in FIG. 6.

Then, the resource amount determination unit 120 determines the amount of resources to be allocated to each application using the quality functions shown in FIG. 6. More specifically, the uniform quality of experience AQ is calculated first according to expression (12), and then the amount of resources R_i to be allocated to each application is calculated by expression (13). Note that the total amount of resources to be allocated to the applications is 100%.

In this example, as shown in FIG. 7, the uniform quality of experience AQ is 0.14. Further, amounts of resources of 59%, 34%, and 7% are allocated to the H.264 decoder, the facial recognition AP, and the news viewing AP, respectively.
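The S100–S104 flow for this example can be reproduced with a short script. The sketch below assumes the logistic quality function of expression (3) and uses the parameter values (a_i, b_i) just computed; names are illustrative:

```python
import math

def f(x):                       # quality function, expression (3)
    return 1.0 / (1.0 + math.exp(-x))

def f_inv(q):                   # its inverse (the logit function)
    return math.log(q / (1.0 - q))

# (a_i, b_i) determined above for the three applications
params = {"H.264 decoder": (65.0, 3.6),
          "facial recognition AP": (60.0, 14.4),
          "news viewing AP": (20.0, 7.2)}

TR = 100.0                                   # predetermined amount
sum_a = sum(a for a, _ in params.values())
sum_b = sum(b for _, b in params.values())
AQ = f((TR - sum_a) / sum_b)                 # expression (12)
print(f"AQ = {AQ:.2f}")                      # ~0.14

for name, (a, b) in params.items():
    r = b * f_inv(AQ) + a                    # expression (13)
    print(f"{name}: {r:.0f}%")               # ~59%, 34%, 7%
```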
As described above, the resource allocation apparatus 100 according to this exemplary embodiment uses the monotonically increasing function f(x) described above as the quality function when allocating the amount of resources to the applications. Thus, the only data that needs to be prepared in advance for an application in order to determine the parameters of its quality function is the recommended amount of resources R_i^high with the corresponding quality of experience Q_i^high, and the minimum amount of resources R_i^low with the corresponding quality of experience Q_i^low. Accordingly, the efficiency of creating the quality function is improved, and the burden on developers of the applications and designers of the devices can be greatly reduced. While R_i^high and R_i^low are determined above based on the relation between the amount of resources and the performance of the application as shown in FIG. 5, R_i^high and R_i^low themselves may be supplied directly to the parameter determination unit 110.

Further, the quality function f(x) has the characteristic that the quality of experience rises around the minimum amount of resources and stops rising around the recommended amount of resources, so resource allocation according to the user's intuition can be achieved.

Further, the determination of the allocation of the amount of resources using the quality function in which the first parameter a and the second parameter b are determined can be performed analytically, without searching all combinations, which cuts the calculation cost. Therefore, even when the allocation calculation is performed in real time, the calculation overhead required to allocate the amount of resources is small.

Second Exemplary Embodiment

FIG. 8 shows a resource allocation apparatus 200 according to a second exemplary embodiment of the present invention. The resource allocation apparatus 200 is also installed in a small-sized device such as a mobile telephone, for example, and allocates resources to applications operated on the small-sized device.

The resource allocation apparatus 200 includes a parameter determination unit 210, a reception unit 220, a parameter adjustment unit 230, and a resource amount determination unit 240.

The parameter determination unit 210 performs processing similar to that of the parameter determination unit 110 in the resource allocation apparatus 100 shown in FIG. 1. That is, the parameter determination unit 210 calculates, for each of the plurality of applications activated on the small-sized device, the first parameter a_i and the second parameter b_i in the quality function f of expression (2) indicating the relation between the amount of resources to be used and the quality of user experience.

The first parameter a_i and the second parameter b_i of each application calculated by the parameter determination unit 210 are output from the parameter determination unit 210 to the parameter adjustment unit 230.

The reception unit 220 is a user interface that is able to receive an increase/decrease request for each application. Through this user interface, the user inputs, for each application, an increase/decrease request indicating whether the user desires to increase or decrease the amount of resources and by how much.

The parameter adjustment unit 230 adjusts the first parameter a_i and the second parameter b_i output from the parameter determination unit 210, and outputs the adjusted first parameter a_i and second parameter b_i to the resource amount determination unit 240.
In order to differentiate them from the first parameter a_i and the second parameter b_i obtained by the parameter determination unit 210, the first parameter and the second parameter adjusted by the parameter adjustment unit 230 are denoted by the symbols a_i′ and b_i′.

The parameter adjustment unit 230 determines an adjustment factor c_i according to the increase/decrease request input by the user through the reception unit 220, and performs the adjustment shown in expressions (14) and (15) using the determined adjustment factor c_i.

a_i′ = c_i × a_i (14)

b_i′ = c_i × b_i (15)

The default value of the adjustment factor c_i is 1 for all the applications. When no increase/decrease request is input by the user, the parameter adjustment unit 230 uses the default value of the adjustment factor c_i.

In summary, when no increase/decrease request is input by the user, a_i′ equals a_i and b_i′ equals b_i.

When the increase/decrease request is a request to increase the amount of resources, the parameter adjustment unit 230 multiplies the first parameter a_i and the second parameter b_i by the adjustment factor c_i, where c_i is larger than 1 and becomes larger as the desired increase is larger. On the other hand, when the increase/decrease request is a request to decrease the amount of resources, the parameter adjustment unit 230 multiplies the first parameter a_i and the second parameter b_i by the adjustment factor c_i, where c_i is smaller than 1 and becomes smaller as the desired decrease is larger.

FIG. 9 shows one example of the reception unit 220 for receiving increase/decrease requests. As shown in FIG. 9, the reception unit 220 includes two adjustment buttons (adjustment button 222 and adjustment button 224) for each of the applications (AP1, AP2, AP3, . . . ).

The adjustment button 222 is pressed when it is desired to increase the amount of resources, and the number of times the button is pressed is in direct proportion to the desired increase. Hereinafter, the adjustment button 222 is referred to as the increase button.

The adjustment button 224 is pressed when it is desired to decrease the amount of resources, and the number of times the button is pressed is in direct proportion to the desired decrease. Hereinafter, the adjustment button 224 is referred to as the decrease button.

The parameter adjustment unit 230 determines the adjustment factor c_i according to the number of times m_i that the increase button 222 is pressed and the number of times n_i that the decrease button 224 is pressed, as shown in expression (16) and expression (17), for example.

c_i = 1 + 0.1 × m_i (16)

where

m_i: the number of times that the increase button 222 is pressed

c_i = 1 − 0.1 × n_i (17)

where

n_i: the number of times that the decrease button 224 is pressed
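A small sketch of expressions (14)–(17), mapping button presses to the adjustment factor and scaling the parameters; combining (16) and (17) into one helper is an illustrative simplification:

```python
def adjustment_factor(increase_presses=0, decrease_presses=0):
    """Expressions (16)/(17): each press moves c_i by 0.1 from its default 1."""
    return 1.0 + 0.1 * increase_presses - 0.1 * decrease_presses

def adjust_params(a, b, c):
    """Expressions (14)/(15): scale both parameters by the factor c_i."""
    return c * a, c * b

# e.g. two presses of the decrease button for one application:
c = adjustment_factor(decrease_presses=2)  # c_i = 0.8
print(adjust_params(65.0, 3.6, c))         # (52.0, 2.88)
```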
FIG. 10 shows another example of the reception unit 220. As shown in FIG. 10, an adjustment bar 226 is provided for each of the applications (AP1, AP2, AP3, . . . ).

When the adjustment bar 226 is moved upward from the default position, the parameter adjustment unit 230 adds a value corresponding to the moved distance to 1 to obtain the adjustment factor c_i. Meanwhile, when the adjustment bar 226 is moved downward from the default position, the parameter adjustment unit 230 subtracts a value corresponding to the moved distance from 1 to obtain the adjustment factor c_i.

The parameter adjustment unit 230 outputs, for each application, the first parameter a_i′ and the second parameter b_i′ obtained by multiplying the first parameter a_i and the second parameter b_i by the adjustment factor c_i to the resource amount determination unit 240. For an application for which no increase/decrease request is received, a_i′ equals a_i and b_i′ equals b_i.

The resource amount determination unit 240 uses the first parameter a_i′ after adjustment and the second parameter b_i′ after adjustment as the first parameter and the second parameter, and determines the amount of resources to be allocated to each application by an operation similar to that of the resource amount determination unit 120 in the resource allocation apparatus 100.

In summary, according to this exemplary embodiment, the quality function is as shown in expression (18).

$Q_i = f(x) = f((R_i - a_i')/b_i') = f((R_i - c_i \times a_i)/(c_i \times b_i))$ (18)

where

i: number of the application

Q_i: quality of experience

f: quality function

R_i: amount of resources

a_i: first parameter before adjustment

b_i: second parameter before adjustment

a_i′: first parameter after adjustment

b_i′: second parameter after adjustment

Note that expression (18) is equal to the following expression (19).

$Q_i = f(x) = f((R_i/c_i - a_i)/b_i)$ (19)

As is clear from expression (19), the quality function obtained when the values of the first parameter a_i and the second parameter b_i are adjusted is equal to the quality function obtained by multiplying the original recommended amount of resources R_i^high and minimum amount of resources R_i^low by c_i. In short, c_i is a parameter that is set when the user wants to scale an application's original resource allocation according to how important the application is in a given situation.

FIG. 11 is a flowchart showing a process flow in the resource allocation apparatus 200. First, the parameter determination unit 210 substitutes, for each application, the recommended amount of resources R_i^high and the corresponding quality of experience Q_i^high, and the minimum amount of resources R_i^low and the corresponding quality of experience Q_i^low into the simultaneous equations shown by expressions (5) and (6), to calculate the first parameter a_i and the second parameter b_i (S110).
The first parameter a_i and the second parameter b_i are input to the parameter adjustment unit 230.

For an application for which no increase or decrease of the amount of resources is requested through the reception unit 220 (S112: No), the parameter adjustment unit 230 outputs the first parameter a_i and the second parameter b_i to the resource amount determination unit 240 without any adjustment.

On the other hand, for an application for which an increase or decrease of the amount of resources is requested through the reception unit 220 (S112: Yes), the parameter adjustment unit 230 outputs to the resource amount determination unit 240 the values obtained by multiplying the first parameter a_i and the second parameter b_i of the application by the adjustment factor corresponding to the increase/decrease request (S114).

The resource amount determination unit 240 calculates the uniform quality of experience AQ using the adjusted first and second parameters output from the parameter adjustment unit 230, and calculates, for each application, the amount of resources corresponding to the uniform quality of experience AQ as the amount of resources R_i to be allocated to the application (S116, S118).

For example, in the case of the three applications shown in FIG. 5 (the H.264 decoder, the facial recognition AP, and the news viewing AP), assume that an increase/decrease request giving c_i = 0.5 is input for the H.264 decoder and no request is input for the other two applications.

In this case, the quality functions corresponding to the three applications are as shown in FIG. 12. Further, the uniform quality of experience AQ is 0.37, and the amounts of resources allocated to the H.264 decoder, the facial recognition AP, and the news viewing AP are 32%, 52%, and 16%, respectively.

This case corresponds to a situation in which the decoder is not important and it is sufficient for it to decode at 15 frames/sec, half the normal rate. According to the resource allocation apparatus 200, the amount of resources allocated to the H.264 decoder in this case decreases from the 59% of the resource allocation apparatus 100 to 32%, which means the freed resources can be allocated to the other applications.

In short, according to the resource allocation apparatus 200 of the second exemplary embodiment, it is possible to obtain effects similar to those of the resource allocation apparatus 100 according to the first exemplary embodiment, and in addition to reflect the user's intention in the allocation of the amount of resources.
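Extending the allocation sketch of the first exemplary embodiment with the adjustment factor reproduces the FIG. 12 numbers; again a sketch assuming the logistic f, with illustrative names:

```python
import math

f = lambda x: 1.0 / (1.0 + math.exp(-x))          # expression (3)
f_inv = lambda q: math.log(q / (1.0 - q))

# (a_i, b_i, c_i); c_i = 0.5 for the H.264 decoder, 1 otherwise
apps = {"H.264 decoder": (65.0, 3.6, 0.5),
        "facial recognition AP": (60.0, 14.4, 1.0),
        "news viewing AP": (20.0, 7.2, 1.0)}

# Expressions (14)/(15): a_i' = c_i * a_i, b_i' = c_i * b_i
adjusted = {n: (c * a, c * b) for n, (a, b, c) in apps.items()}

TR = 100.0
AQ = f((TR - sum(a for a, _ in adjusted.values()))
       / sum(b for _, b in adjusted.values()))    # ~0.37

for name, (a, b) in adjusted.items():
    print(f"{name}: {b * f_inv(AQ) + a:.0f}%")    # 32%, 52%, 16%
```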
Third Exemplary Embodiment

FIG. 13 shows a resource allocation apparatus 300 according to a third exemplary embodiment of the present invention. This resource allocation apparatus 300 includes a parameter determination unit 310, a resource number controller 320, and a resource amount determination unit 330.

The resource allocation apparatus 300 according to the third exemplary embodiment is able to control the number of resources to be allocated to the applications when there are a plurality of resources of the same type. For example, the number of “CPU arithmetic capacity” resources in a device in which a plurality of CPU cores are installed corresponds to the number of CPU cores.

The parameter determination unit 310 performs the same operation as the parameter determination unit 110 in the resource allocation apparatus 100 and calculates the first parameter a_i and the second parameter b_i for each application, outputting the result to the resource amount determination unit 330.

The resource amount determination unit 330 calculates the uniform quality of experience AQ using the first parameter a_i and the second parameter b_i from the parameter determination unit 310, and outputs the result to the resource number controller 320. The method of calculating the uniform quality of experience AQ is the same as that of the resource amount determination unit 120 in the resource allocation apparatus 100, except that the predetermined amount TR, which is the total amount of resources to be allocated to the applications and is used in this calculation, is supplied from the resource number controller 320.

The resource number controller 320 performs control so that the number of resources allocated to the applications becomes the minimum number under the condition that the uniform quality of experience AQ is equal to or larger than a predetermined threshold when there are a plurality of resources of the same type. For clarity, the description below uses the example of the three applications shown in FIG. 5 (the H.264 decoder, the facial recognition AP, and the news viewing AP), with the threshold of the uniform quality of experience AQ set to 0.5.

The resource number controller 320 first supplies the arithmetic capacity of one CPU core (100%) as the predetermined amount TR to the resource amount determination unit 330.

As shown in FIG. 14, when the predetermined amount TR is 100%, the uniform quality of experience AQ calculated by the resource amount determination unit 330 is 0.14, as in the resource allocation apparatus 100 according to the first exemplary embodiment. Accordingly, the amounts of resources R_i allocated to the H.264 decoder, the facial recognition AP, and the news viewing AP are 59%, 34%, and 7%, respectively, also as in the first exemplary embodiment.

The resource number controller 320 compares the uniform quality of experience AQ calculated by the resource amount determination unit 330 with the threshold (in this example, 0.5). Since the current uniform quality of experience AQ is 0.14, which is smaller than the threshold, the resource number controller 320 changes the predetermined amount TR to the value corresponding to the case in which two CPU cores are provided (e.g., 200%), outputs it to the resource amount determination unit 330, and causes the resource amount determination unit 330 to re-calculate the uniform quality of experience AQ.

The resource amount determination unit 330 re-calculates the uniform quality of experience AQ using the changed predetermined amount TR and outputs the calculation result to the resource number controller 320. The uniform quality of experience AQ at this time is 0.9, as shown in FIG. 14.
Since the uniform quality of experience AQ now exceeds the threshold 0.5, the resource number controller 320 sets the number of CPU cores to two and causes the resource amount determination unit 330 to determine the allocation of the amount of resources. As shown in FIG. 14, the amounts of resources R_i allocated to the H.264 decoder, the facial recognition AP, and the news viewing AP are 73%, 91%, and 36%, respectively, corresponding to the uniform quality of experience AQ of 0.9.

Incidentally, allocation according to the amounts of resources R_i calculated above may not be achievable in reality. For example, when the applications are not parallelized and thus cannot be executed on a plurality of CPUs, it is impossible to realize the 200% total with the allocation of 73%, 91%, and 36%.

In this exemplary embodiment, the resource amount determination unit 330 therefore includes a function of adjusting the determined amounts of resources so that they can actually be allocated. For example, for each application, the amount of resources is adjusted under the condition that the quality of experience remains equal to or larger than the threshold. In the example above, the amount of resources is changed to 100% for the facial recognition AP, which has the largest determined amount, and the remaining 100% is re-allocated between the H.264 decoder and the news viewing AP. Therefore, after adjustment, the amounts of resources of the H.264 decoder, the facial recognition AP, and the news viewing AP are 70%, 100%, and 30%, respectively.
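The resource number control of this embodiment is a simple loop over expression (12); a sketch with the example parameters, assuming the logistic f (the final re-allocation step described above is omitted for brevity):

```python
import math

f = lambda x: 1.0 / (1.0 + math.exp(-x))
f_inv = lambda q: math.log(q / (1.0 - q))

params = {"H.264 decoder": (65.0, 3.6),
          "facial recognition AP": (60.0, 14.4),
          "news viewing AP": (20.0, 7.2)}
CORE, THRESHOLD = 100.0, 0.5      # capacity per CPU core, AQ threshold

def uniform_quality(tr):          # expression (12)
    return f((tr - sum(a for a, _ in params.values()))
             / sum(b for _, b in params.values()))

cores = 1
while uniform_quality(cores * CORE) < THRESHOLD:  # the S152-S156 loop
    cores += 1                    # smallest number of cores with AQ >= 0.5

AQ = uniform_quality(cores * CORE)                # 2 cores, AQ ~ 0.9
for name, (a, b) in params.items():
    print(f"{name}: {b * f_inv(AQ) + a:.0f}%")    # 73%, 91%, 36%
print("cores:", cores)
```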
FIG. 15 is a flowchart showing a process flow in the resource allocation apparatus 300. First, the parameter determination unit 310 substitutes, for each application, the recommended amount of resources R_i^high and the corresponding quality of experience Q_i^high, and the minimum amount of resources R_i^low and the corresponding quality of experience Q_i^low into the simultaneous equations shown in expression (5) and expression (6), to calculate the first parameter a_i and the second parameter b_i (S150). The first parameter a_i and the second parameter b_i are output to the resource amount determination unit 330.

The resource amount determination unit 330 calculates the uniform quality of experience AQ so that the total amount of resources allocated to the applications becomes the predetermined amount TR corresponding to one CPU core, and outputs the result to the resource number controller 320 (S152).

When the uniform quality of experience AQ output from the resource amount determination unit 330 is smaller than the predetermined threshold (S154: No), the resource number controller 320 increments the number of resources by one and outputs the corresponding predetermined amount TR to the resource amount determination unit 330 (S156).

The resource amount determination unit 330 re-calculates the uniform quality of experience AQ using the predetermined amount TR reset by the resource number controller 320 (S152).

The processing from Step S152 is repeated until the uniform quality of experience AQ reaches the threshold or more (S154: No, S156, S152).

When the uniform quality of experience AQ from the resource amount determination unit 330 is equal to or larger than the threshold (S154: Yes), the resource number controller 320 fixes the current number of resources as the number of resources to be used, and the resource amount determination unit 330 determines the amount of resources of each application corresponding to the current uniform quality of experience AQ (S160).

Then, when allocation according to the determined amounts of resources is possible (S162: Yes), the resource amount determination unit 330 performs the allocation as determined. On the other hand, when allocation according to the determined amounts of resources is impossible (S162: No), the resource amount determination unit 330 adjusts the determined amounts so that they can be allocated (S164).

According to the resource allocation apparatus 300 of this exemplary embodiment, it is possible to obtain the effects of the resource allocation apparatus 100. Further, when there are a plurality of resources of the same type, the number of resources is minimized under the condition that the uniform quality of experience AQ is equal to or larger than the threshold, thereby suppressing excessive consumption of resources and reducing power consumption.

The present invention has been described with reference to the exemplary embodiments. However, the exemplary embodiments are merely examples, and various changes, modifications, and combinations may be made to each exemplary embodiment without departing from the spirit of the present invention. A person skilled in the art would understand that variants incorporating such changes, modifications, and combinations are also within the scope of the present invention.

This application claims the benefit of priority of, and incorporates herein by reference in its entirety, Japanese Patent Application No. 2010-054283, filed on Mar. 11, 2010.
INDUSTRIAL APPLICABILITY

The present invention is applicable to allocation of resources to a plurality of applications operated on a computer.

REFERENCE SIGNS LIST

• 100 RESOURCE ALLOCATION APPARATUS
• 110 PARAMETER DETERMINATION UNIT
• 120 RESOURCE AMOUNT DETERMINATION UNIT
• 200 RESOURCE ALLOCATION APPARATUS
• 210 PARAMETER DETERMINATION UNIT
• 220 RECEPTION UNIT
• 222 INCREASE BUTTON
• 224 DECREASE BUTTON
• 226 ADJUSTMENT BAR
• 230 PARAMETER ADJUSTMENT UNIT
• 240 RESOURCE AMOUNT DETERMINATION UNIT
• 300 RESOURCE ALLOCATION APPARATUS
• 310 PARAMETER DETERMINATION UNIT
• 320 RESOURCE NUMBER CONTROLLER
• 330 RESOURCE AMOUNT DETERMINATION UNIT
• AQ UNIFORM QUALITY OF EXPERIENCE
• a FIRST PARAMETER
• b SECOND PARAMETER
• TR PREDETERMINED AMOUNT

## Claims

1-7. (canceled)

8. A resource allocation apparatus for performing allocation of an amount of resources to a plurality of applications operated on a computer, the resource allocation apparatus comprising:

parameter determination means for determining, for each of the plurality of applications, a first parameter a and a second parameter b in a quality function f of expression (1) indicating a relation between an amount of resources to be used and a quality of user experience; and
resource amount determination means for determining an amount of resources to be allocated to the plurality of applications using the quality function f for each of the plurality of applications in which the first parameter a and the second parameter b are determined by the parameter determination means, wherein
the parameter determination means substitutes, for each of the applications, a recommended amount of resources and a quality of experience corresponding to the recommended amount of resources, and a minimum amount of resources and a quality of experience corresponding to the minimum amount of resources into expression (1), to calculate the first parameter a and the second parameter b in expression (1), and
the quality function f(x) is a monotonically increasing function having an inverse function f−1, connects (−∞, 0) and (+∞, 1), and is symmetrical with respect to x=0: Q=f(x)=f((R−a)/b) (1),
where
Q: quality of experience;
f: quality function;
R: amount of resources;
a: first parameter; and
b: second parameter.

9. The resource allocation apparatus according to claim 8, further comprising:

reception means for receiving an increase/decrease request of an amount of resources for any one of the plurality of applications; and
parameter adjustment means for multiplying, for the any one of the applications, the first parameter a and the second parameter b determined by the parameter determination means by a multiple larger than one, the multiple increasing according to an increase amount indicated by the increase/decrease request received by the reception means, the parameter adjustment means multiplying the first parameter a and the second parameter b by a multiple smaller than one, the multiple decreasing according to a decrease amount indicated by the increase/decrease request received by the reception means,
wherein the resource amount determination means determines, for the any one of the applications, the amount of resources to be allocated to the application using the quality function f in which the first parameter a and the second parameter b are adjusted by the parameter adjustment means.

10.
The resource allocation apparatus according to claim 8, wherein the resource amount determination means determines the amount of resources to be allocated to the plurality of applications so that a total amount of resources to be allocated to the plurality of applications is a predetermined amount, and the quality of experience of each of the applications is a uniform quality of experience of the same value.\n\n11. The resource allocation apparatus according to claim 9, wherein the resource amount determination means determines the amount of resources to be allocated to the plurality of applications so that a total amount of resources to be allocated to the plurality of applications is a predetermined amount, and the quality of experience of each of the applications is a uniform quality of experience of the same value.\n\n12. The resource allocation apparatus according to claim 10, further comprising resource number control means for performing control, when there are a plurality of resources of a same type, so that the number of resources to be allocated to the plurality of applications becomes a minimum number under a condition that the uniform quality of experience is equal to or larger than a predetermined threshold.\n\n13. The resource allocation apparatus according to claim 11, further comprising resource number control means for performing control, when there are a plurality of resources of a same type, so that the number of resources to be allocated to the plurality of applications becomes a minimum number under a condition that the uniform quality of experience is equal to or larger than a predetermined threshold.\n\n14. The resource allocation apparatus according to claim 8, wherein the quality function f(x) is one of expression (2) and expression (3):\n\nf(x)=1/(1+e^(−x)) (2)\nf(x)=(1/π)×arctan(x)+1/2 (3).\n\n15. A resource allocation method for performing allocation of an amount of resources to a plurality of applications operated on a computer, the resource allocation method comprising:\n\nfor each of the plurality of applications, substituting a recommended amount of resources and a quality of experience corresponding to the recommended amount of resources, and a minimum amount of resources and a quality of experience corresponding to the minimum amount of resources into a quality function f of expression (4) indicating a relation between an amount of resources to be used R and a quality of user experience Q, to determine a first parameter a and a second parameter b; and\ndetermining an amount of resources to be allocated to the plurality of applications using the quality function f for each of the plurality of applications in which the first parameter a and the second parameter b are determined,\nwherein the quality function f(x) is a monotonically increasing function having an inverse function f−1, connects (−∞, 0) and (+∞, 1), and is symmetrical with respect to x=0: Q=f(x)=f((R−a)/b) (4),\nwhere\nQ: quality of experience;\nf: quality function;\nR: amount of resources;\na: first parameter; and\nb: second parameter.\n\n16. 
A non-transitory computer readable medium storing a program for causing a computer to execute, when performing allocation of an amount of resources to a plurality of applications operated on a computer, the following processing of:\n\nfor each of the plurality of applications, substituting a recommended amount of resources and a quality of experience corresponding to the recommended amount of resources, and a minimum amount of resources and a quality of experience corresponding to the minimum amount of resources into a quality function f of expression (5) indicating a relation between an amount of resources to be used R and a quality of user experience Q, to determine a first parameter a and a second parameter b; and\ndetermining an amount of resources to be allocated to the plurality of applications using the quality function f for each of the plurality of applications in which the first parameter a and the second parameter b are determined,\nwherein the quality function f(x) is a monotonically increasing function having an inverse function f−1, connects (−∞, 0) and (+∞, 1), and is symmetrical with respect to x=0: Q=f(x)=f((R−a)/b) (5),\nwhere\nQ: quality of experience;\nf: quality function;\nR: amount of resources;\na: first parameter; and\nb: second parameter.\nPatent History\nPublication number: 20120324469\nType: Application\nFiled: Dec 9, 2010\nPublication Date: Dec 20, 2012\nPatent Grant number: 8938740\nApplicant: NEC CORPORATION (Tokyo)\nInventors: Kosuke Nishihara (Tokyo), Kazuhisa Ishizaka (Tokyo)\nApplication Number: 13/581,775\nClassifications\nCurrent U.S. Class: Resource Allocation (718/104)\nInternational Classification: G06F 9/50 (20060101);"
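For illustration only, here is a minimal Python sketch of the computation the claims describe, assuming the logistic quality function of expression (2). The helper names and the closed-form solution step are ours, not the patent's:

```python
# Sketch of the claimed scheme with f(x) = 1/(1 + e^(-x)) (expression (2)).
# fit_parameters and uniform_allocation are illustrative names, not from the patent.
import math

def logit(q):
    """Inverse quality function f^-1(Q) for the logistic f."""
    return math.log(q / (1.0 - q))

def fit_parameters(r_rec, q_rec, r_min, q_min):
    """Solve Q = f((R - a)/b) at the recommended and minimum (R, Q) points."""
    b = (r_rec - r_min) / (logit(q_rec) - logit(q_min))
    a = r_rec - b * logit(q_rec)
    return a, b

def uniform_allocation(params, total):
    """Split `total` so every application reaches the same quality AQ.
    Requiring sum_i (a_i + b_i * f^-1(AQ)) = total gives a closed form."""
    sum_a = sum(a for a, _ in params)
    sum_b = sum(b for _, b in params)
    x = (total - sum_a) / sum_b            # the common value f^-1(AQ)
    aq = 1.0 / (1.0 + math.exp(-x))        # uniform quality of experience AQ
    return aq, [a + b * x for a, b in params]

def choose_resource_count(params, per_resource, threshold, max_n=64):
    """Mimic steps S152-S160: grow the resource count until AQ >= threshold."""
    for n in range(1, max_n + 1):
        aq, shares = uniform_allocation(params, n * per_resource)
        if aq >= threshold:
            return n, aq, shares
    raise RuntimeError("threshold not reachable within max_n resources")
```

For example, with two applications whose fitted parameters are (a, b) = (50, 13.65) and (25, 6.83), `uniform_allocation` splits a total of 90 into shares of roughly 60 and 30 at the same AQ, matching the uniform-quality condition of claims 10 and 11.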
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85603815,"math_prob":0.93276405,"size":43246,"snap":"2021-04-2021-17","text_gpt3_token_len":8601,"char_repetition_ratio":0.26525137,"word_repetition_ratio":0.36599678,"special_character_ratio":0.20362577,"punctuation_ratio":0.08483563,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9655213,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-16T00:08:16Z\",\"WARC-Record-ID\":\"<urn:uuid:7390ae21-08fe-4912-bb40-852d6e0f2a80>\",\"Content-Length\":\"123352\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e9120f63-bb93-47c3-8819-f9631532414a>\",\"WARC-Concurrent-To\":\"<urn:uuid:6dda2ce5-b7a3-4175-b764-146b1a835a27>\",\"WARC-IP-Address\":\"52.54.199.89\",\"WARC-Target-URI\":\"https://patents.justia.com/patent/20120324469\",\"WARC-Payload-Digest\":\"sha1:LV6QR2FIXLFGTRVTZLNEYJBALP5726HB\",\"WARC-Block-Digest\":\"sha1:65OY6IBB3SJJV74XMAOCG7FFIWLZDASP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038088264.43_warc_CC-MAIN-20210415222106-20210416012106-00113.warc.gz\"}"} |
https://ccm.net/forum/affich-252352-hiding-rows-when-validation-list-is-changed | [
"# Hiding rows when validation list is changed\n\nSolved/Closed\nDanny - Jan 26, 2010 at 07:35 AM\ntani - Mar 24, 2016 at 11:07 AM\nHello,\n\nI have a excel sheet where have two sets of product line. I would need only details for one set of product line visible if i select it in validation.\n\nFor example:\n\nIn my sheet have validation drop down in cell a2 and data in A3 to S39 and A42 to S77. when is select select 'X\" in validation i should get data in A42 to S77 and other rows(A3 to S39) hidden.If i select rose in cell A2 i need only data for that.\n\nCould any one help?\n\nThanks\n\n## 2 responses\n\nHi Danny,\n\nImplement the following code by right clicking the sheet tab and selecting view code:\n\n```Private Sub Worksheet_Change(ByVal Target As Range)\n\nIf Range(\"A2\") = \"X\" Then\nRows(\"3:39\").EntireRow.Hidden = True\nRows(\"42:77\").EntireRow.Hidden = False\nEnd If\nIf Range(\"A2\") = \"Rose\" Then\nRows(\"42:77\").EntireRow.Hidden = True\nRows(\"3:39\").EntireRow.Hidden = False\nEnd If\nIf Range(\"A2\") = \"\" Then\nRows(\"42:77\").EntireRow.Hidden = False\nRows(\"3:39\").EntireRow.Hidden = False\nEnd If\n\nEnd Sub```\n\nTo display all rows again, delete the content of cell A2.\n\nThe code will be activated whenever a change to the sheet is made.\n\nBest regards,\nTrowa\nThanks. It worked great. For some reason, the view/cursor will go to the very bottom of my excel sheet rows 126:127, everytime I choose in A52. It does hide the 59:61, but the view will go to 126:127.\n\nPrivate Sub Worksheet_Change(ByVal Target As Range)\n\nIf Range(\"A52\") = \"Port-Side Intake\" Then\nRows(\"59:61\").EntireRow.Hidden = True\nElse\nRows(\"59:61\").EntireRow.Hidden = False\nEnd If\n\nIf Range(\"A52\") = \"Port-Side Exhaust\" Then\nRows(\"56:58\").EntireRow.Hidden = True\nElse\nRows(\"56:58\").EntireRow.Hidden = False\nEnd If\n\nIf Range(\"A122\") = \"Port-Side Intake\" Then\nRows(\"124:125\").EntireRow.Hidden = True\nElse\nRows(\"124:125\").EntireRow.Hidden = False\nEnd If\n\nIf Range(\"A122\") = \"Port-Side Exhaust\" Then\nRows(\"126:127\").EntireRow.Hidden = True\nElse\nRows(\"126:127\").EntireRow.Hidden = False\nEnd If\n\nEnd Sub"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5600546,"math_prob":0.8356398,"size":2846,"snap":"2023-40-2023-50","text_gpt3_token_len":858,"char_repetition_ratio":0.20830402,"word_repetition_ratio":0.56490386,"special_character_ratio":0.32888263,"punctuation_ratio":0.16891892,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99318475,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T00:35:56Z\",\"WARC-Record-ID\":\"<urn:uuid:ad4ce812-3f74-4a4b-af1a-7557cafb86c4>\",\"Content-Length\":\"94402\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3950a96c-3b12-4ef6-b4e6-861c5c8fc0ce>\",\"WARC-Concurrent-To\":\"<urn:uuid:c5c1a768-db10-4390-8fdc-9c975b5da7f0>\",\"WARC-IP-Address\":\"104.110.140.30\",\"WARC-Target-URI\":\"https://ccm.net/forum/affich-252352-hiding-rows-when-validation-list-is-changed\",\"WARC-Payload-Digest\":\"sha1:T7DMFTCW73YOFNGO7L2DYC5JU2MOG74H\",\"WARC-Block-Digest\":\"sha1:RYQIVZIWO3H7VNZFC5H7Q4PGIJQMGORX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510529.8_warc_CC-MAIN-20230929222230-20230930012230-00156.warc.gz\"}"} |
http://www.flapw.de/master/documentation/parallelizationSchemes/ | [
"## Choosing good parallelization schemes\n\nThe efficient usage of Fleur on modern (super)computers is ensured by a hybrid MPI/OpenMP parallelization. The $\\vec{k}$-point loop and the eigenvector problem are parallelized via MPI (Message Passing Interface). In addition to that, every MPI process can be executed on several computer cores with shared memory, using either the OpenMP (Open Multi-Processing) interface or multi-threaded libraries.\n\nIn the following the different parallelization schemes are discussed in detail and the resulting parallelization scaling is sketched for several example systems.\n\n### MPI parallelization\n\n• The $\\vec{k}$-point parallelization gives us increased speed when making calculations with large $\\vec{k}$-point sets.\n• The eigenvector parallelization gives us an additional speed up but also allows to tackle larger systems by reducing the amount of memory each MPI process uses.\n\nDepending on the specific architecture, one or the other or both levels of MPI parallelization can be used.\n\n#### $\\vec{k}$-point parallelization\n\nThis type of parallelization is always chosen if the number of $\\vec{k}$-points (K) is a multiple of the number of MPI processes (P). If $K/P$ is not an integer, a mixed parallelization will be attempted and M MPI processes will work on a single k-point, so that $K \\cdot M / P$ is an integer. This type of parallelization can be very efficient because the three most time-consuming parts of the code (Hamiltonian matrix setup, diagonalization and generation of the new charge density) of different $\\vec{k}$ points are independent of each other and there is no need to communicate during the calculation. Therefore this type of parallelization is very beneficial, even if the communication between the nodes/processors is slow. The drawback of this type of parallelization is that the whole matrix must fit in the memory available for one MPI process, i.e., on each MPI process sufficient memory to solve a single eigenvalue-problem for a single $\\vec{k}$ point is required. The scaling is good as long as many $\\vec{k}$ points are calculated and the potential generation does not become a bottleneck. The superideal scaling in the following figure is caused by caching effects.",
null,
"#### Eigenvector Parallelization\n\nIf the number of $\\vec{k}$ points is not a multiple of the number of MPI processes, every $\\vec{k}$ point will be parallelized over several MPI processes. It might be necessary to use this type of parallelization to reduce the memory usage per MPI process, i.e. if the eigenvalue-problem is too large. This type of parallelization depends on external libraries which can solve eigenproblems on parallel architectures. The Fleur code contains interfaces to ScaLAPACK, ELPA and Elemental. Furthermore, for a reduction of the memory footprint it is also possible to use the HDF5 library to store eigenvector data for each $\\vec{k}$ point on the disc. However, this implies a performance drawback.",
null,
"",
null,
"### OpenMP parallelization\n\nModern HPC systems are usually cluster systems, i.e., they consist of shared-memory computer nodes connected through a communication network. It is possible to use the distributed-memory paradigm (MPI parallelization) also inside the node, but in this case the memory available for every MPI process will be considerably smaller. Imagine to use a node with 24 cores and 120 GB memory. Starting a single MPI process will make all 120 GB available to this process, two will only get 60 GB each and so on, if 24 MPI processes are used, only 5 GB of memory will be available for each of them. The intra-node parallelism can be utilized more efficiently when using the shared-memory programming paradigm, for example through the OpenMP interface. In the Fleur code the hybrid MPI/OpenMP parallelization is realised by directly implementing OpenMP pragmas and the usage of multi-threaded libraries. Note that in the examples above OpenMP parallelization was used together with the MPI parallelization. To strongly benefit from this type of parallelization, it is crucial that Fleur is linked to efficient multithreaded software libraries. A good choice is the ELPA library and the multithreaded MKL library. The following figure shows the pure OpenMP parallelization scaling on a single compute node.",
null,
"### Parallel execution: best practices\n\nSince there are several levels of parallelization available in Fleur: $\\vec{k}$-point MPI parallelization, eigenvalue MPI parallelization, and multi-threaded parallelization, it is not always an easy decision how to use the available HPC resources in the most effective way: How many nodes are needed, how many MPI processes per node, how many threads per MPI process. A good choice for the parallelization is based on several considerations.\n\nFirst of all, you need to estimate a minimum amount of nodes. This depends strongly on the size of the unit cell and the memory size of the node. In the table below some numbers for a commodity Intel cluster with 120 GB and 24 cores per node can be found - for comparable unit cells (and hardware) these numbers can be a good starting point for choosing a good parallelization. The two numbers in the \"#nodes\" column show the range from the \"minimum needed\" to the \"still reasonable\" choice. Note that these test calculations only use a single $\\vec{k}$ point. If a simulation crashes with a run-out-of-memory-message, one should try to double the requested resources. The recommended number of MPI processes per node can be found in the next column. If you increasing #k-points or creating a super-cell and you have a simulation which for sure works, you can use that information. For example, if you double #k-points, just double #nodes. If you are making a supercell which contains N times more atomes, than your matrix will be N-square times bigger and require about N-square times more memory.\n\nBest values for some test cases. Hardware: Intel Broadwell, 24 cores per node, 120 GB memory.\n\nName # k-points real/complex # atoms Matrix size LOs # nodes # MPI per node\nNaCl 1 c 64 6217 - 1 4\nAuAg 1 c 108 15468 - 1 4\nCuAg 1 c 256 23724 - 1 - 8 4\nSi 1 r 512 55632 - 2 - 16 4\nGaAs 1 c 512 60391 + 8 - 32 2\nTiO2 1 c 1078 101858 + 16 - 128 2\n\nThe next question, how many MPI processes? The whole amount of MPI parocesses is #MPI per node times #nodes. If the calculation involves several $\\vec{k}$ points, the number of MPI processes should be chosen accordingly. If the number of $\\vec{k}$ points (K) is a multiple of the number of MPI processes (P) then every MPI procces will work on a given $\\vec{k}$ point alone. If $K / P$ is not an integer, a mixed parallelization will be attempted and M MPI processes will work on a single $\\vec{k}$ point, so that $K \\cdot M / P$ is an integer. This means for example that if the calculation uses 48 $\\vec{k}$ points, it is not a good idea to start 47 MPI processes.\n\nAs for the number of OpenMP threads, on the Intel architectures it is usually a good idea to fill the node with threads (i.e. if the node consist of 24 cores and 4 MPI processes are used, each MPI process should spawn 6 threads), but not to use the hyper-threading.\n\n### Example: Choosing the right parallelization on a single node\n\nOn a single Node using as much k-point parallelization is usually the most efficient parallelization. To get this you need to get the number of k-points in your calculation from your inp.xml or kpts.xml. Here we have a calculation with 20 k-points:\n\n<kPointList name=\"default\" count=\"20\" type=\"tria-bulk\">\n\n\nWe also need to know how many cores are on our node. In this example we will assume 12 cores. 
Then the number of MPI processes is given by the greatest common divisor of 20 and 12:\n\nn_mpi = gcd(12,20) = 4\nn_openmp = n_cores / n_mpi = 12/4 = 3\n\n\nTherefore we can start our calculation with:\n\nexport OMP_NUM_THREADS=3\nmpirun -np 4 <insert your fleur binary here>"
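A tiny Python sketch of this rule; only the gcd recipe comes from the text above, while the function name and the printed launch lines are illustrative:

```python
# Split a node's cores into MPI processes and OpenMP threads from the
# k-point count, following the gcd rule of thumb described above.
import math

def fleur_layout(n_kpoints, n_cores):
    """Return (n_mpi, n_openmp) so that n_mpi evenly divides the k-point count."""
    n_mpi = math.gcd(n_kpoints, n_cores)
    return n_mpi, n_cores // n_mpi

n_mpi, n_omp = fleur_layout(20, 12)
print(f"export OMP_NUM_THREADS={n_omp}")  # -> 3
print(f"mpirun -np {n_mpi} fleur")        # -> 4; the binary name is assumed
```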
] | [
null,
"http://www.flapw.de/master/documentation/manyK.png",
null,
"http://www.flapw.de/master/documentation/MnGe881.png",
null,
"http://www.flapw.de/master/documentation/Memory_usage.png",
null,
"http://www.flapw.de/master/documentation/NaCl-OpenMP.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8660667,"math_prob":0.95672446,"size":8348,"snap":"2023-14-2023-23","text_gpt3_token_len":1931,"char_repetition_ratio":0.16790508,"word_repetition_ratio":0.04264706,"special_character_ratio":0.22220892,"punctuation_ratio":0.103804,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9898582,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-02T05:49:52Z\",\"WARC-Record-ID\":\"<urn:uuid:279fc1dd-9947-472c-8e2e-800d4c1fd0f0>\",\"Content-Length\":\"31848\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:36c77911-49e7-4c36-8219-c2b1ee646f24>\",\"WARC-Concurrent-To\":\"<urn:uuid:f8bc893d-db80-47ec-b924-a6b4d64dfff9>\",\"WARC-IP-Address\":\"134.94.130.56\",\"WARC-Target-URI\":\"http://www.flapw.de/master/documentation/parallelizationSchemes/\",\"WARC-Payload-Digest\":\"sha1:JICA3RYTF7FXNM5EXDOODUQFQYQOEU4J\",\"WARC-Block-Digest\":\"sha1:YXYGSJBHOBTSEAUK4NNM6MROYXZHSU76\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648322.84_warc_CC-MAIN-20230602040003-20230602070003-00246.warc.gz\"}"} |
http://www.scheme.dk/blog/archive/2006_04_01_archive.html | [
"## Wednesday, April 26, 2006\n\n### Compression: Integer Encodings\n\nQuick! How many bits are needed to store a natural number n?\n\nWell, let's look at the standard encoding. One bit is needed for 1 (1). Two bits are needed for 2 (10) and 3 (11), furthermore three bits are needed for 4 (100), ..., 7 (111). Hmm. log(1)=0, log(2)=1, log(4)=3, log(8)=4. So the number of bits needed is 1+floor(log n). Or is it?\n\nSuppose the bits 110 are read from a file. Is this the number 3 or is it a 1 followed by a 2? We can't know unless we a priori know how many bits were used to store the number.\n\nLeaving the standard encoding aside for a moment, consider instead the unary encoding.\n\n n unary 1 0 2 10 3 110 4 1110\n\nThe natural number is represented as n-1 1-bits followed by a single 0-bit.\nThe unary encoding does not have the same problem as before - it is a prefix-free code. A code for a number is never a prefix for the code of a different number.\n\nThe unary encoding is useful in situations where the majority of numbers are very small. For large numbers it is horribly ineffecient.\n\nThe problem with the standard encoding was, that we didn't know where the code ended. Consider now for the sake of argument encoding a number n by the unary code for the length of the standard encoding (1+floor(log n) followed by the standard encoding. The total length of that code is 2*(1+floor(log n). For large numbers this much better than the unary code.\n\nA small observation saves a bit. Consider the encoding of 4: 110 100. The encoding starts with the unary encoding of 3, the length of the standard encoding. Since all numbers of length 3 have the bit pattern 1xx it is unncessary to actually store that bit. Thus we can just store 4 as 110 00. This bit saving encoding is called the gamma encoding.\n\nTo sum up, the gamma encoding of a natural number n consists of the unary code for 1+floor(log n) followed by the floor(log n) bits representing n-2^floor(log n)) in binary. The number of bits used by the gamma code is 2*floor(log n))+1.\n\nFor numbers smaller than 5 the unary encoding uses fewer bits than the gamma code, for 5 they use the same number, and for numbers larger than 5 the gamma codes uses fewer bits than the unary code.\n\nA variation is of the gamma code is the delta code. The reasoning goes as follows: For numbers with a length larger than 5 in the standard encoding, it would be better store the length as a gamma code than a unary code. That is, the delta code for a natural number n is consists of the gamma code for the length of the standard encoding of n, followed by the standard encoding.\n\nFor numbers below 32 the gamma code is shorter than the delta code. For numbers between 32 and 53 the codes have the same length. The delta code for 64 and larger numbers are shorter than the gamma code.\n\nA small excerpt from the implementation of these encodings - mail me if you are interested in a copy. [The Blogger software inserts spurious newlines in the atom feed. See Everything Scheme for the original.]\n\n` (require \"../bit-io/bit-io.scm\" (planet \"42.ss\" (\"soegaard\" \"srfi.plt\"))) ;;; ;;; UNARY CODE ;;; ; The unary code for an integer n>=1 is n-1 one bits followed by a zero bit. ; The code for 3 is 110. (define write-unary (case-lambda [(n) (write-unary n (current-output-bit-port))] [(n out-bit-port) (unless (and (integer? n) (positive? 
n)) (error #f \"a positive integer was expected, got: \" n)) (if (> n 1) (write-bits (sub1 n) (sub1 (arithmetic-shift 2 (sub1 (sub1 n)))) out-bit-port)) (write-bits 1 0 out-bit-port)])) (define read-unary (case-lambda [() (read-unary (current-input-bit-port))] [(in-bit-port) (do ([n 1 (+ n 1)]) [(= (read-bits 1 in-bit-port) 0) n])])) `\n\nLabels:\n\n## Sunday, April 23, 2006\n\n### Mini Kanren and the Da Vinci Quest, part II\n\nToday the second symbol puzzle in the Da Vinci Quest was revealed. The puzzle was of the same type as described in Mini Kanren and the Da Vinci Quest - the only twist was that the board was enlarged to 5x5. The modified program for the puzzle, I got, is:\n\n`(define distinct (lambda (x1 x2 x3 x4 x5) (let ([xs (list x1 x2 x3 x4 x5)]) (all (membero 'omega xs) (membero 'blade xs) (membero 'gcross xs) (membero 'fleur xs) (membero 'cross xs)))))(run* (b) (fresh ( x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 x12 x13 x14 x15 x16 x17 x18 x19 x20 x21 x22 x23 x24 x25) (alli ; the board consists of fields fields (== b (list (list x1 x2 x3 x4 x5) (list x6 x7 x8 x9 x10) (list x11 x12 x13 x14 x15) (list x16 x17 x18 x19 x20) (list x21 x22 x23 x24 x25))) ; known symbols (== b (list (list x1 'gcross x3 'fleur x5) (list x6 x7 'cross 'blade x10) (list x11 'omega x13 x14 'cross) (list x16 x17 'fleur x19 x20) (list x21 'cross x23 x24 x25))) ; symbols in rows are distinct (distinct x1 x2 x3 x4 x5) (distinct x6 x7 x8 x9 x10) (distinct x11 x12 x13 x14 x15) (distinct x16 x17 x18 x19 x20) (distinct x21 x22 x23 x24 x25) ; symbols in columns are distinct (distinct x1 x6 x11 x16 x21) (distinct x2 x7 x12 x17 x22) (distinct x3 x8 x13 x18 x23) (distinct x4 x9 x14 x19 x24) (distinct x5 x10 x15 x20 x25) ; symbols in each colored area are distinct (distinct x1 x2 x7 x12 x13) (distinct x3 x8 x4 x5 x10) (distinct x6 x11 x16 x21 x22) (distinct x17 x18 x19 x23 x24))))`\n\n## Friday, April 21, 2006\n\n### Reading and writing bits - bit-io.plt\n\nAs mentioned in \"Index Construction\" compression is used to store the index in a space efficient manner. The number of bits used to encode integers vary according to their size. The standard file operations read and write bytes, so the need for a bit level i/o library arose.\n\nA little shopping around led to Oleg Kiselyov's page on Binary I/O. He presents a very nice solution to the problem of reading bit streams. The function make-bit-reader turns a byte stream into a bit stream. The byte stream is represented as a thunk, that produces a new byte each time it invoked. The returned bit-reader is represented as a function of one argument, the number of bits to read from the stream. The bits read are returned as an unsigned integer.\n\n`(define bit-reader (make-bit-reader (lambda () #b11000101)))> (bit-reader 3)6> (bit-reader 4)`\n\nInspired by this approach I wrote a bit-writer in the same style. Given a byte-writer the function make-bit-writer returns two values: the first is a bit-writer, a function to two arguments the number of bits to write and the actual bits, the second argument is a bit-flusher. Since the bit-writer can't call the underlying byte-writer before a whole byte is received it was necessary to introduce the bit-flusher to flush any remaining bits at the end of a file. The original code made were optimized to handle common cases such as reading a single bit or a single byte fast. 
An effort was made to mimic these optimizations in the writer.\n\nSince the first version of the indexer used normal file operations such as with-output-to-file, write-byte and friends, it wasn't straightforward to change the code to use the stream approach. The solution was to introduce bit-ports, which resemble normal ports. In the following example the numbers from 1 to 8 are written to a temporary file and then read back. Each number is written using a different number of bits.\n\n`> (require (planet \"bit-io.scm\" (\"soegaard\" \"bit-io.plt\")))\n> (with-output-to-bit-file \"tmp\"\n    (lambda ()\n      (do ([i 1 (+ i 1)])\n        [(= i 9) 'done]\n        (write-bits i i)))\n    'replace)\ndone\n> (with-input-from-bit-file \"tmp\"\n    (lambda ()\n      (do ([i 1 (+ i 1)]\n           [ns '() (cons (read-bits i) ns)])\n        [(= i 9) ns])))\n(8 7 6 5 4 3 2 1)`\n\nThe bit-io library is available through PLaneT; the documentation and source are available at the PLT Source Browser.\n\n## Thursday, April 20, 2006\n\n### Mini Kanren and the Da Vinci challenge"
null,
"The film adaption of Dan Brown's bestseller \"The Da Vinci Code\" is about to be released. This prompted the marketing people of Colombia Pictures and Google to create the Google Da Vinci Quest. The quest consists of 24 puzzles to be solved. To avoid cheating each person can expect to get his own set of puzzles. According to 4-time World Puzzle Championship Individual Winner and co-creator of the quest Wei-Hwa Huang there is a total of 12,358 puzzles.",
null,
"The first class of puzzle is a board puzzle. A 4x4 board needs to be filled with symbols in such a way that all symbols in each row, column and colored areas are different. The symbols given are 4 blades, 4 fleur-de-lis, 4 omegas and 4 crosses. A few symbols are filled in from the beginning and can't be moved. [The image above shows a different puzzle than the one solved below.]",
null,
"The puzzle I got was easy, but in anticipation of more challenging puzzles ahead, I decided to dust of my copy of The Reasoned Schemer by Daniel P. Friedman, William E. Byrd and Oleg Kiselyov. The book teaches the logic programming as a natural extension of functional programming, and the paradigm of logic programming fits this puzzle perfectly. The book is very enjoyable, but be careful it might hurt your weight.\n\nThe logic programming system used by The Reasoned Schmer is MiniKanren. It is a much simplified version of Kanren and runs in any R5RS Scheme. Without further ado here is a MiniKanren program to solve the board puzzles from the Da Vinci Quest:\n\n`(define distinct (lambda (x1 x2 x3 x4) (let ([xs (list x1 x2 x3 x4)]) (all (membero 'blade xs) (membero 'cross xs) (membero 'fleur xs) (membero 'omega xs)))))> (run* (b) (fresh (x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 x12 x13 x14 x15 x16) (alli ; the board consists of fields fields (== b (list (list x1 x2 x3 x4) (list x5 x6 x7 x8) (list x9 x10 x11 x12) (list x13 x14 x15 x16))) (== b (list (list x1 x2 x3 x4) (list x5 x6 x7 'blade) (list x9 'omega x11 x12) (list x13 'fleur x15 'cross))) ; symbols in rows are distinct (distinct x1 x2 x3 x4) (distinct x5 x6 x7 x8) (distinct x9 x10 x11 x12) (distinct x13 x14 x15 x16) ; symbols in columns are distinct (distinct x1 x5 x9 x13) (distinct x2 x6 x10 x14) (distinct x3 x7 x11 x15) (distinct x4 x8 x12 x16) ; symbols in each colored area are distinct (distinct x3 x4 x7 x8) (distinct x9 x13 x14 x15) )))(((fleur blade cross omega) (omega cross fleur blade) (cross omega blade fleur) (blade fleur omega cross)))#<procedure:succeed>`\n\n## Tuesday, April 18, 2006\n\n### PLT Source Browser\n\nThe PLT Source Browser is now uptodate.\n\nSince the last update a series of interesting stuff has appeared at PLaneT.\n\nDouglas Williams released version 2.2 of his Science Collection. It now features a browsable html reference manual. It is loaded with examples and plots. For example a histogram of the results of 10.000 trials of summing two dice.\n\nNoel Welsh released Instaweb which is a small tool that makes it developing and testing a single servle in the PLT web server easier.\n\nDave Herman released csv-write to generate output in csv-format. As a spin-off he also released mysqldump to dump a MySQL database as a csv-file.\n\nCarl Eastlund's combinator library was updated, so was Lizorkin's sxml library and Dignatof's Commander Swift aka cdrswift.\n\nFinally Daniel Yoo released some syntactic sugar for Ruby/Python style generators.\n\n## Sunday, April 16, 2006\n\n### Index Construction\n\nUpdate: The last part were rewritten.\n\nSort-based index construction\n\nThe implementation of the indexer have progressed better than I expected. As a warmup to the implementation of the \"sort-based multiway compressed\" algorithm I have implemented both the \"sort-based\" and the \"sort-based compressed\" algorithm. They both work in four phases:\n1. For all terms in all documents a record (t, d, f) of the term number, the document number d and the frequency f of the term t in the document d is made in a temporary file.\n\n2. The temporary file is divided into blocks, which are sorted in-memory.\n\n3. A series of runs are made. Each run merges two sorted blocks. The end result is a large file consisting of (t,d,f) triples sorted in lexicographic order.\n\n4. Based on the (t,d,f) triples the index is constructed. For each term number t an inverted list of the form (t n d1 f1 di f2 ... dn fn) is written to the index file. 
Here n is the number of documents in which the term occurs; di and fi are associated document numbers and frequencies.\nDuring the tokenizing in phase 1 a lexicon is made. The lexicon associates terms and term numbers. Furthermore during phase 4 the location of the term's inverted list in the index file is saved.\n\nSearching for a term is simple: Use the lexicon to look up the location of the inverted list in the index file. Read the inverted list. Present the result - the frequencies are used to rank the documents after relevancy.\n\nTo keep the size of the index and the temporary file down the numbers aren't stored as standard 32 bit integers. Since an integer N requires only ceil(log(N)) bits, 32 bit integers waste a lot of bits for small numbers. Instead more flexible encodings are used in which small numbers are represented in fewer bits than larger numbers [More about bit encodings in a later post].\n\nSort-based index construction with compression\n\nA standard compression trick is to use gaps to store ascending lists. Say a term is found in documents (2 6 9 13) then the corresponding gaps are (2 4 3 4). The original document numbers are found by adding the gaps, e.g. 9 = 2+4+3. The important thing to note is that the gaps are smaller than the original numbers and thus can be stored in fewer bits (if an approriate encoding is chosen).\n\nThis observation can be used to store the inverted lists, since the inverted list of a term (t n d1 f1 di f2 ... dn fn) has ascending document numbers di. That is, changing the representation of an inverted list to (t n dg1 f1 dgi f2 ... dgn fn), where dg stands for document gap, will reduce the size of the final index. The alert reader will note that the same trick applies to the term numbers in the final index. The extra alert reader will note that all terms are present in the final index and each term occurs only once, so we can omit the term number t completely from the final index.\n\nCompressing the final index file will have relatively little effect on the total run time of the indexer. The bulk of the work is namely done in phase 3 during the external sorting of the temporary files consisting of the (t, d, f) triples. Now compressing these temporary files would reduce i/o considerably and hence also the run time. Can we use the same trick again? We sure can - the temporary files consists of blocks of triples and within each block the term numbers t occur in ascending order. The only complication of changing the representation from (t, d, f) to (tg,d,f), where tg is the term number gap, is some additional bookkeeping during merging.\n\nThe difference between the \"sort-based\" and the \"sort-based compression\" algorithm is thus that the latter compresses the temporary files.\n\nOne last observation: By fusing phase 1 and 2 a pass through the temporary file is saved.\n\nLabels:\n\n## Thursday, April 06, 2006\n\n### PLaneT: combinators.ss\n\nCarl Eastlund updated his library of combinator functions combinators.ss a few days ago.\n\nIt contains some classics such as curry and constant, but also includes an \"applying\" compose as well as map2. The map2 works the same way as map, but the mapper function must return two values.\n\nThe functions curry and constant are special cases of Sebastian Egner's cut from srfi-26. 
The names curry and constant are easier on the eyes though.\n\n`> (require (planet \"combinators.ss\" (\"cce\" \"combinators.plt\" 1 1)))> ((curry list 1 2) 3 4)(1 2 3 4)> ((constant 3))3> (define (q+r n) (list (quotient n 3) (remainder n 3)))> ((compose/apply * q+r) 8)4> (map2 quotient/remainder (list 1 2 3 4 5) (list 3 3 3 3 3))(0 0 1 1 1)(1 2 0 1 2)`\n\n## Wednesday, April 05, 2006\n\n### Tokenizing Scheme Source\n\nTo index all words in a file with english, the first thing you need to do is to find all the words. Since I want to index Scheme source, my words are identifiers.\n\nThe default reader of MzScheme extends the syntax of R5RS-identifiers, so it is neccessary to read the fine print before one starts coding. Simplifying a little one divides the characters in two classes delimeters and non-delimeters. To find the next token, one simply keeps reading until a non-delimeter is seen. The token is then read char for char until a delimeter is seen. This simplistic approach would work if it were not for identifiers beginning with hash-percent (#%), however it doesn't take much ingenuity to handle this too.\n\nThe easiest and most efficient way to implement the above approach in PLT Scheme is to use the builtin regular expressions. Other options were to use the more flexible Perl-style regular expressions or even use parser-tools to generate a custom lexer. The builtin regular expressions have the advantage that they work directly on the input port which makes them very fast. Since PLT Scheme uses the utf-8 encoding to store source code, it is neccessary to use byte-regexps (as opposed to string-based). Another advantage of matching directly on the input port is that the port abstraction keeps track of location information such as line number and column number automatically.\n\n` (define (utf-regexp str) (byte-regexp (string->bytes/utf-8 str))) (define whitespace \"[ \\n\\r\\t]\") (define brackets \"[]\\\\[\\\\(\\\\){}\") (define punctuation \"[,'`;]\") (define special \"[\\\\|]\") (define delimeter (string-append \"(\" whitespace \"|\" punctuation \"|\" brackets \"|\" special \"|\" \"\\\"\" \")\")) (define delimeter* (string-append delimeter \"*\")) (define non-symbol-starter-regexp (utf-regexp delimeter*)) (define (skip-to-next-token) (regexp-match non-symbol-starter-regexp (current-input-port)) (if (eqv? (peek-char) #\\#) (unless (equal? (peek-string 2 0) \"#%\") (read-char) (skip-to-next-token)))) (define non-delimeter \"[^]\\\\[\\\\(\\\\){} \\n\\r\\t,'`;\\\\|\\\"]\") (define non-delimeter* (string-append non-delimeter \"*\")) (define non-delimeter*-regexp (utf-regexp non-delimeter*)) (define (read-token) (let* ([pos (file-position (current-input-port))] [m (regexp-match non-delimeter*-regexp (current-input-port))]) (and m (match m [(token . _) (list token pos)])))) (define (for-each-token f) (unless (eof-object? (peek-char)) (skip-to-next-token) (unless (eof-object? (peek-char)) (let ([token (read-token)]) (if token (begin (f token) (for-each-token f)) (error \"internal error: token expected after skipping\")))))) (define (for-each-token-in-file file f) (with-input-from-file file (lambda () (for-each-token f ))))`\n\n### Paste Scheme now with colors\n\nPaste Scheme now has colors. Actually I thought it always had, but I had forgotten to change the link to the stylesheet when I moved the servlet from my own machine to the web-server. 
Since I have a test web-server running on my own machine everything looked okay here...\n\n## Tuesday, April 04, 2006\n\n### Back of the Envelope\n\nManaging Gigabytes",
null,
The authors identify the following key parameters of a search engine:\n\nSymbol | Parameters\nB | Total text size\nN | Number of documents\nn | Number of distinct words\nF | Total number of words\nf | Number of index pointers\nI | Final size of the compressed inverted file\nL | Size of dynamic lexicon structure\n\nThese parameters, together with estimates of disk seek time, disk transfer time per byte etc., are used to estimate the resource usage of the search engine.\n\nMy initial plan is to index all source files of the PLT Source Browser (temporary url). In round numbers there are N=5200 files whose total size is 40 MB. Tokenizing the files reveals that the total number of words is F=3.7 million and that the number of distinct words is n=120,000.\n\nAssuming one word entry will take up 10 bytes, 10F=37 MB is needed for the entire index. For such small indexes the naïve approach (linked lists in memory) to indexing would be fine. I'll stick to my decision of using the sort-based compressed approach though. It will be able to run on smaller machines with larger data sets - and the algorithms are more interesting.\n\n## Sunday, April 02, 2006\n\n### Planet Scheme\n\nAndreas Rottmann's aggregation of Scheme blogs has been dead for a while now. Mailing him to ask whether he plans to revive his Planet Scheme didn't get a reply. As a regular reader of Planet Scheme I decided to put together my own Planet Scheme - at least until Andreas puts his online again. The software was quite easy to set up. The only remaining issues are to find a logo and to figure out why the aggregator inserts extra newlines in my nicely colored Scheme snippets.\n\nScouring the net for Scheme related blogs I discovered that Will Farr already used Scheming as the title of his blog - so I changed the name of this blog to Everything Scheme.\n\n### Paste Scheme\n\n[Note: This post shows up in Planet Scheme with extra line breaks. If you know how to fix this, please send me a mail. Editing the original trying to fix the linebreak issue unfortunately tricks the Planet software into thinking it is a new post.]\n\nBlogging about Scheme leads to the desire to include snippets of Scheme source. Being spoiled by the DrScheme syntax highlighting, this means syntax colored snippets. Luckily Dorai Sitaram wrote some syntax coloring code for SLaTeX. This code was adapted/rewritten by Anton van Straten for the Scheme Cookbook. Using this code I have put together a little servlet, Paste Scheme, which lets you submit a Scheme snippet and returns XHTML ready to paste into your favorite blogging software.\n\nThe servlet source below was produced with Paste Scheme. The web.plt package hasn't been submitted to PLaneT yet, since there is no documentation except for comments in the source yet.\n\n
`;;; paste.ss -- Jens Axel Soegaard\n(module paste mzscheme\n  (provide interface-version timeout start)\n\n  (require (lib \"match.ss\")\n           (lib \"kw.ss\")\n           (planet \"web.scm\" (\"soegaard\" \"web.plt\"))\n           (planet \"html.scm\" (\"soegaard\" \"web.plt\"))\n           \"scm2xexpr.scm\")\n\n  ;;;\n  ;;; SERVLET INTERFACE\n  ;;;\n\n  (define interface-version 'v1)\n  (define timeout 6000)\n\n  (define start\n    ; servlet sets up the various parameters such as\n    ; current-bindings and current-cookies, evaluates\n    ; the body expressions, the last of which should\n    ; evaluate to an xexpr.\n    (servlet\n     (report-errors-to-browser send/finish)\n     (html-paste-page)))\n\n  ;;;\n  ;;; VIEW\n  ;;;\n\n  (define default-snippet\n    \"(define (fact n) (if (= n 0) 1 (* n (fact (- n 1)))))\")\n\n  (define (html-paste-page)\n    (with-binding (current-bindings) (snippet)\n      (let ([snippet (if snippet snippet default-snippet)])\n        (html-page\n         #:title \"Paste Scheme\"\n         #:style-sheet \"http://localhost:8080/paste.css\"\n         #:header '(h1 \"Scheme Paste\")\n         #:body\n         `(div (h2 \"Enter Snippet\")\n               ,(html-form \"submit_snippet\" \"paste.ss\"\n                           `(textarea ((name \"snippet\") (cols \"80\") (rows \"10\")) ,snippet)\n                           '(br)\n                           (html-input \"submit\" #:value \"submit\"))\n               (h2 \"Preview\")\n               ,(scheme-text->xexpr snippet)\n               (h2 \"XHTML\")\n               (pre ,(xexpr->string (scheme-text->xexpr snippet)))\n               (h2 \"Stylesheet\")\n               (pre ,\".scheme { color: brown; margin: 4pt; } /* background punctuation */\n.scheme .keyword { color: rgb(68,0,203); font-weight: bold; }\n.scheme .builtin { color: navy; }\n.scheme .variable { color: black; }\n.scheme .global { color: purple; }\n.scheme .selfeval { color: green; }\n.scheme .comment { color: teal; }\")))))) )`"
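The same encodings are easy to prototype outside Scheme. Below is a rough Python sketch of the gamma code and of the gap trick from the Index Construction post; it is our illustration only, the function names are invented, and it is not part of bit-io.plt:

```python
# Elias gamma codes for integers >= 1, plus gap encoding for ascending
# document-number lists, mirroring the two posts above.
def gamma_encode(n):
    """Return the gamma code of the integer n >= 1 as a bit string."""
    length = n.bit_length()              # 1 + floor(log2 n)
    unary = "1" * (length - 1) + "0"     # unary code for the length
    return unary + bin(n)[3:]            # n without its leading 1-bit

def gamma_decode(bits):
    """Decode one gamma code; return (value, remaining bits)."""
    zero = bits.index("0")               # end of the unary length prefix
    length = zero + 1
    tail = bits[zero + 1 : zero + length]
    value = int("1" + tail, 2) if tail else 1
    return value, bits[zero + length:]

def gaps(docs):
    """(2 6 9 13) -> (2 4 3 4): store an ascending list as differences."""
    prev = 0
    for d in docs:
        yield d - prev
        prev = d

assert gamma_encode(4) == "11000"        # the "110 00" example from the post
stream = "".join(gamma_encode(g) for g in gaps([2, 6, 9, 13]))
```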
] | [
null,
"http://images.amazon.com/images/P/1400079179.01._AA_SCMZZZZZZZ_.jpg",
null,
"http://www.scheme.dk/blog/uploaded_images/da-vinci-puzzle-741423.PNG",
null,
"http://images.amazon.com/images/P/0262562146.01._AA_SCMZZZZZZZ_.jpg",
null,
"http://www.assoc-amazon.com/e/ir",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8856253,"math_prob":0.8338963,"size":3613,"snap":"2023-40-2023-50","text_gpt3_token_len":956,"char_repetition_ratio":0.16985315,"word_repetition_ratio":0.02354788,"special_character_ratio":0.2831442,"punctuation_ratio":0.10737387,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95863426,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,6,null,6,null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T15:10:18Z\",\"WARC-Record-ID\":\"<urn:uuid:70ea5029-1b92-4faf-afbd-ffe1277e376c>\",\"Content-Length\":\"93095\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1686ae49-6c1c-4eb1-8f63-c7a871a152bd>\",\"WARC-Concurrent-To\":\"<urn:uuid:b8d7a660-3a29-447b-aad7-8b4877cda52d>\",\"WARC-IP-Address\":\"107.170.147.219\",\"WARC-Target-URI\":\"http://www.scheme.dk/blog/archive/2006_04_01_archive.html\",\"WARC-Payload-Digest\":\"sha1:BTLFWCJY7Z2SJVYSE3VF4VOUMBGFBN4F\",\"WARC-Block-Digest\":\"sha1:2K7D235LXQSZ3HHOAEZPRCI7KHWQ3RIK\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510903.85_warc_CC-MAIN-20231001141548-20231001171548-00570.warc.gz\"}"} |
https://community.smartsheet.com/discussion/4628/summing-a-column-where-contents-are-formula-results | [
"or Explore Discussions\n\n#### Welcome to the Smartsheet Forum Archives\n\nThe posts in this forum are no longer monitored for accuracy and their content may no longer be current. If there's a discussion here that interests you and you'd like to find (or create) a more current version, please Visit the Current Forums.\n\n# Summing a Column where contents are formula results\n\n03/31/16 Edited 12/09/19\n\nHi\n\nI have a column of numbers which are the results of a formula. I then want to add up the column of numbers but when I use a sum formula it just gives me a blank - can anyone assist?",
null,
"• Are you sure the results are numbers? (Are they alligned to the right automatically?) What formula do you use?\n\n• Hi - sum in column.....=IF(Emma1 = \"A/L\", \"3.75\", \"0\")\n\nSum adding up results from column in one cell - =SUM([Emma by Hours]:[Emma by Hours])\n\nThe column is formatted as text/ number......\n\n• Sara,\n\n3.75 is a number\n\n\"3.75\" is text.\n\n=IF(Emma1 = \"A/L\", 3.75, 0)\n\nand see if that helps.\n\nCraig\n\nThis discussion has been closed."
] | [
null,
"https://us.v-cdn.net/6031209/uploads/drupal_attachment/files/f3/68/f368740fb1f6282719051d25a5105a5a.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.914871,"math_prob":0.8302809,"size":1028,"snap":"2021-21-2021-25","text_gpt3_token_len":266,"char_repetition_ratio":0.11621094,"word_repetition_ratio":0.0,"special_character_ratio":0.27431905,"punctuation_ratio":0.14222223,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9555751,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-11T20:21:41Z\",\"WARC-Record-ID\":\"<urn:uuid:9481808d-b00b-4f0f-825e-b3f65c78682c>\",\"Content-Length\":\"81974\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe6fe8c8-a30b-4972-a162-4c73b87ca536>\",\"WARC-Concurrent-To\":\"<urn:uuid:61f243f6-f496-459c-90bb-5968f78567f1>\",\"WARC-IP-Address\":\"162.159.128.79\",\"WARC-Target-URI\":\"https://community.smartsheet.com/discussion/4628/summing-a-column-where-contents-are-formula-results\",\"WARC-Payload-Digest\":\"sha1:TM6CIPDUODR5NS23DF4ZU7M3ORY7ROK4\",\"WARC-Block-Digest\":\"sha1:LMJUZLBDOXIUBK7XLVRXHKPA72PV3MFH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989856.11_warc_CC-MAIN-20210511184216-20210511214216-00438.warc.gz\"}"} |
https://socratic.org/questions/sandra-invests-5-000-in-a-savings-account-that-pay-5-simple-interest-how-many-ye | [
"# Sandra invests $5,000 in a savings account that pays 5% simple interest. How many years will it take for the account to grow to$15,000, if she makes no withdrawals or deposits?\n\n$40 \\text{ }$years\nThe formula for SI is: $S I = \\frac{P R T}{100}$\nTo have an account of $15,000\" \"(A) means that she needs to have earned $10,000 interest on her investment of $5,000\" \"(P) In simple interest, interest is calculated only on the initial amount. $S I = \\frac{P R T}{100} = 10000$$\\frac{5000 \\times 5 \\times \\textcolor{b l u e}{T}}{100} = 10000$$\\textcolor{b l u e}{T} = \\frac{10000 \\times 100}{5000 \\times 5}$$T = 40$years $\\rightarrow\\$ No one would invest at simple interest!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.858092,"math_prob":0.9997979,"size":454,"snap":"2020-24-2020-29","text_gpt3_token_len":108,"char_repetition_ratio":0.10888889,"word_repetition_ratio":0.0,"special_character_ratio":0.24229075,"punctuation_ratio":0.10989011,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996629,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-27T22:15:30Z\",\"WARC-Record-ID\":\"<urn:uuid:e2f6bcde-9de3-456f-ae7e-ebfafe02c61c>\",\"Content-Length\":\"33314\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3999c847-37d6-44f0-9ab3-acd3331d7762>\",\"WARC-Concurrent-To\":\"<urn:uuid:2373f1ba-dcfe-4192-9bb2-7c88149cddf1>\",\"WARC-IP-Address\":\"216.239.34.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/sandra-invests-5-000-in-a-savings-account-that-pay-5-simple-interest-how-many-ye\",\"WARC-Payload-Digest\":\"sha1:NUKFGRTVPBOASVK2EI6ADSES4NWW2GMD\",\"WARC-Block-Digest\":\"sha1:EHSPG4XCENN7Y2ORT526SCLWXFE5NPG3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347396163.18_warc_CC-MAIN-20200527204212-20200527234212-00463.warc.gz\"}"} |
https://www.ratta.pk/2017/10/1st-year-physics-chapter-9-physical.html | [
"# 1st Year Physics Chapter 9 Physical Optics Notes pdf",
null,
"Here we have published the 1st Year Physics Chapter 10 Physical Optics Notes pdf download or you can also read online notes of chapter 9 11th class physics.\n\nQ. What is wave front?\nWave Fronts\nThe surface on which all the points of waves have same phase of vibration is known as wave front\nExplanation Suppose the light emitted from a point source propagates outward in all direction with speed c After time t the waves reach the surface of an imaginary sphere with center as S and radius as ct As the distance of all these points from the source is same so all the points on the surface of the sphere have the same phase of vibration Such as surface is known as wave front\nNote The wave front from a point source are spherical Thus wave propagates in space by the motion of wave fronts\nThe distance between two consecutive wave fronts is one wave length\nRay of Light\nThe line normal to the wave front which shows the direction of propagation of light is called a ray of light\nSpherical wave front\nThe wave front in which the light waves are propagated in spherical form with the source is called spherical wave front\nFor appoint source of light in a homogeneous medium, the wave fronts are the concentric sphere of increasing radii.\nPlane wave front\nAt very large distance (ie at infinity) from the source, a small portion of spherical wave front will become very nearly plane Such a wave front is known plane wave front\nFor example, the sun light reaches the earth in plane wave fronts points they cancel the effect of each other destructive interference) Such phenomenon is called interference of light\nTypes of Interference\nThere are two types of interference\n1) Constructive Interference\n2) If crest of one wave falls on the crest of other or through of one wave falls on trough of other waves\n3) they support each other. This phenomenon is called constructive interference 4) Whenever the path difference between the two waves is an integral multiple of wavelength, then the\n5) both waves reinforce each other. This effect is called constructive interference\n2) Destructive Interference\nIf crest of one wave fall on the trough of the other wave, then they cancel each other. Such an Interference is known as destructive interference\nWhenever the path difference between the two waves is an odd integral multiple of half of wavelength, then the both waves cancel each other's effect This effect is called constructive interference\nConditions for detectable interference pattern\nThe following condition must be met, in order to observe the interference phenomenon\n1) The interfering beams mu be monochromatic\n2)The interfering beams of light must be coherent\n3) The sources should be narrow and very close to each other\n4) The intensity of the two sources be comparable\nMonochromatic Source\nThe sources which should emit the light of single wave length are called monochromatic sources\nCoherent sources\nmonochromatic sources of light which emit waves, having a constant phase difference are called coherent source\nHow to obtain coherent sources\nA common method to obtain the coherent light beam is to use a monochromatic source to illuminate a screen containing two small closely speed holes, usually in the shape of slits The light emerging from the two slits is coherent because a single source produces the original beam and two slits serve only to split it into two parts The points on a Huygens wave front which send out secondary wavelength are also coherent sources of light\nQ. 
Q. Describe Young's double slit experiment for the demonstration of interference of light. Derive an expression for fringe spacing.

Young's Double Slit Experiment

In 1801, Thomas Young performed the interference experiment to prove the wave nature of light. A screen having two narrow slits is illuminated by a beam of monochromatic light. The portions of the wavefront incident on the slits behave like sources of secondary wavelets. The wavelets leaving the slits are coherent. Superposition of these wavelets results in a series of bright and dark bands, which are observed on a second screen placed at some distance parallel to the first screen. (A sketch of the fringe-spacing derivation is given at the end of these notes.)

Q. Discuss the formation of Newton's rings. Why does the central spot of Newton's rings look dark?

Newton's Rings

When a plano-convex lens of long focal length is placed in contact with a plane glass plate, a thin air film is enclosed between them, forming circular dark and bright fringes known as Newton's rings.

Experimental arrangement

The thickness of the air film between the plano-convex lens and the glass plate is almost zero at the point of contact 'O' and increases gradually towards the periphery of the lens. Thus the points where the thickness of the air film is constant lie on a circle with O as center. A light beam from a monochromatic source 'S' becomes parallel after passing through the convex lens 'L'. This beam of light falls on the glass plate G. Some rays are partly reflected normally towards the air film and partly refracted through G. When light rays fall normally on the lens, they are reflected by the top and bottom surfaces of the air film. As these reflected rays are coherent, they interfere with each other constructively or destructively.

When the light reflected upward is observed through a microscope M focused on the glass plate G, a series of dark and bright circular rings is observed, as shown in the figure. These concentric rings are called Newton's rings.

Dark Central Spot:

At the point of contact of the lens and the glass plate, the thickness of the film is effectively zero, but due to reflection at the lower surface of the air film from the denser medium, an additional path difference of λ/2 (or a phase change of 180°) is introduced. Consequently, the center of Newton's rings is dark due to destructive interference.

Q. Explain the phenomenon of polarization. How is plane polarized light produced and detected?

Polarization

The phenomena of interference and diffraction have proved that light has a wave nature, but these phenomena do not show whether light waves are longitudinal or transverse. In transverse mechanical waves, the vibration can be oriented along the vertical, horizontal or any other direction. In each of these cases, the wave is said to be polarized. The plane of polarization is the plane containing the direction of vibration of the particles of the medium and the direction of propagation of the wave.

A light wave produced by an oscillating charge consists of a periodic variation of the electric field vector along with the magnetic field vector at right angles to each other. The direction of polarization in a plane polarized light wave is taken as the direction of the electric field vector.

Unpolarized light: A beam of ordinary light consisting of a large number of planes of vibration is called unpolarized light.

Polarized light: A beam of light in which all vibrations are confined to a single plane of vibration is called polarized light.

Production and Detection of plane polarized light

The light emitted by an ordinary incandescent bulb is unpolarized, because its vibrations are randomly oriented in space. It is possible to obtain a plane polarized beam of light from unpolarized light by removing all waves from the beam except those having vibrations along one particular direction. This can be achieved by the various methods given below:

1) Selective absorption
2) Reflection from different surfaces
3) Scattering by small particles
4) Refraction through crystals

Selective absorption method

Selective absorption is the most common method to obtain plane polarized light, using certain types of materials called dichroic substances. These transmit only those waves whose vibrations are parallel to a particular direction and absorb those waves whose vibrations are in other directions. One such commercial polarizing material is Polaroid.

If unpolarized light is made incident on a sheet of Polaroid, the transmitted light will be plane polarized. If a second sheet of Polaroid is placed in such a way that the axes of the Polaroids, as shown by the straight lines drawn on them, are parallel, the light is transmitted through the second Polaroid also. If the second Polaroid is slowly rotated about the beam of light as axis of rotation, the light emerging out of the second Polaroid gets dimmer and dimmer and disappears when the axes become mutually perpendicular. The light reappears on further rotation and becomes brightest when the axes are again parallel to each other.

Transverse Nature of Light

This experiment proves that light waves are transverse waves: if the light waves were longitudinal, they would never disappear even if the two Polaroids were mutually perpendicular.

Reflection from different surfaces:

Reflection of light from water, glass, snow and rough road surfaces at larger angles of incidence produces glare. Since the reflected light is partially polarized, glare can be considerably reduced by using polarized sunglasses.

Scattering by small particles:

Sunlight also becomes partially polarized due to scattering by air molecules of the Earth's atmosphere. This effect can be observed by looking directly up through a pair of sunglasses made of polarizing glass. At certain orientations of the lens, less light passes through it than at others.

Polaroid:

A synthetic doubly refracting substance that strongly absorbs polarized light in one plane while easily passing polarized light in another plane at right angles to it.

Q. What is meant by optical rotation?
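The fringe-spacing derivation requested in the first question is not worked out in the notes above; a minimal sketch follows, using the standard notation of slit separation $d$, slit-to-screen distance $L$, wavelength $\lambda$, and the small-angle approximation. For the m-th bright fringe, the path difference between the waves from the two slits must be a whole number of wavelengths, $d\sin\theta = m\lambda$ with $m = 0, 1, 2, \dots$. For a distant screen ($L \gg d$), the angle is small, so $\sin\theta \approx \tan\theta = y_m/L$, which places the m-th bright fringe at $y_m = m\lambda L/d$. The fringe spacing is the separation of adjacent bright (or dark) fringes:

$$\Delta y = y_{m+1} - y_m = \frac{\lambda L}{d}.$$

The spacing is uniform, grows with wavelength and screen distance, and decreases as the slit separation increases.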
https://www.frontiersin.org/articles/10.3389/fphy.2021.594306/full
"## ORIGINAL RESEARCH article\n\nFront. Phys., 23 November 2021\nSec. Soft Matter Physics\nVolume 9 - 2021 | https://doi.org/10.3389/fphy.2021.594306\n\n# Empirical and Theoretical Analysis of Particle Diffusion in Mucus",
null,
"Antonio Cobarrubia1,2,",
null,
"Jarod Tall1,2,3,",
null,
"Austin Crispin-Smith1,2 and",
null,
"Antoni Luque1,4,5*\n• 1Viral Information Institute, San Diego State University, San Diego, CA, United States\n• 2Department of Physics, San Diego State University, San Diego, CA, United States\n• 3Department of Physics and Astronomy, Washington State University, Pullman, WA, United States\n• 4Department of Mathematics and Statistics, San Diego State University, San Diego, CA, United States\n• 5Computational Science Research Center, San Diego State University, San Diego, CA, United States\n\nMucus is a complex fluid that coats multiple organs in animals. Various physicochemical properties can alter the diffusion of microscopic particles in mucus, impacting drug delivery, virus infection, and disease development. The simultaneous effect of these physicochemical properties in particle diffusion, however, remains elusive. Here, we analyzed 106 published experiments to identify the most dominant factors controlling particle diffusion in mucus. The effective diffusion—defined using a one-second sampling time window across experiments—spanned seven orders of magnitude, from 10–5 to 102 μm2/s. Univariate and multivariate statistical analyses identified the anomalous exponent (the logarithmic slope of the mean-squared displacement) as the strongest predictor of effective diffusion, revealing an exponential relationship that explained 89% of the variance. A theoretical scaling analysis revealed that a stronger correlation of the anomalous exponent over the generalized diffusion constant occurs for sampling times two orders of magnitude larger than the characteristic molecular (or local) displacement time. This result predicts that at these timescales, the molecular properties controlling the anomalous exponent, like particle–mucus unbinding times or the particle to mesh size ratio, would be the most relevant physicochemical factors involved in passive microrheology of particles in mucus. Our findings contrast with the fact that only one-third of the studies measured the anomalous exponent, and most experiments did not report the associated molecular properties predicted to dominate the motion of particles in mucus. The theoretical foundation of our work can be extrapolated to other systems, providing a guide to identify dominant molecular mechanisms regulating the mobility of particles in mucus and other polymeric fluids.\n\n## Introduction\n\nMucus is a complex fluid secreted by animals [1,2]. It protects organs against the invasion of pathogens and promotes the interaction with commensal microbes . The diffusivity of particles in mucus is paramount for animal health. The infectivity of animal viruses, like HIV, decreases when their diffusion in mucus is impeded . On the flip side, enhancing the diffusivity of biomolecules in mucus facilitates the delivery of medical drugs in the body , and reducing the diffusivity of commensal viruses that infect bacteria in the gut can increase their infectivity and protection against pathogens [8,9]. Multiple factors modify the diffusivity of particles in mucus, including particle size [7,10], particle charge and ionic strength , pH [1,6,14,16,17], and the concentration and polymeric organization of the characteristic glycoproteins in mucus called mucins . However, the combined effect of these physicochemical factors controlling particle diffusion in mucus remains puzzling.\n\nSmall biomolecules and biomolecular complexes tend to diffuse more readily through mucus, while larger particles are caught in the mucin network [7,10]. 
On the other hand, nonadhesive polystyrene particles with a diameter of 500 nm diffuse faster than smaller particles (200 nm) of the same type. These contrasting results highlight the major impact of parameters other than size on particle diffusion in mucus. In fact, neutrally charged particles display higher diffusivity in mucus than charged particles of the same size with a net negative charge. An increase in salt concentration shields charged particles, leading to diffusivities similar to neutrally charged particles. Low pH increases the distribution of negative charges in mucins and alters mucus' viscoelasticity, reducing the diffusivity of most particles [1,6,14,16,17]. Low pH also thickens mucus, reducing the diffusion and infection rate of viruses like HIV. Interaction with mucins also alters the diffusivity of particles in mucus. Commensal viruses that infect bacteria and reside in the gut display immunoglobulin-like domains that are attracted to mucins. This interaction reduces their diffusivity and increases their infectivity against bacteria [8,9]. Some of these observations may seem contradictory. However, the fact that mucus has been selected across animals suggests that there could be a more comprehensive emerging effect when these different physicochemical factors are combined.

To tackle this problem, we performed a meta-analysis of published experimental data measuring the passive diffusion of microscopic particles in mucus. The correlation between physicochemical properties and particle diffusion in mucus was investigated using univariate and multivariate correlation methods. A theoretical scaling analysis was applied to derive a theoretical framework justifying the empirical results. This framework provided a quantitative understanding of the regulation and control of particle diffusion in mucus and other hydrogels. Our findings predict an effective particle size (and diffusion threshold) where the anomalous exponent becomes dominant, anomalous exponent values for experiments that did not measure it, and molecular factors associated with the anomalous exponent that were not reported in most experiments but should be paramount to understanding particle diffusion in mucus.

## Materials and Methods

The GitHub repository https://github.com/luquelab/Cobarrubia_etal_FrontPhys_2021 contains the codes and instructions necessary to implement the methods and replicate the research.

### Data Extraction

We screened 24 published articles reporting diffusion of particles in mucus or mucus-like hydrogels (Supplementary Data S1). Ten studies contained diffusion data for microscopic, spherical particles that could be compared at the same sampling time window [6,9,11,14,17,18,20–23]. WebPlotDigitizer was used to extract 106 measurements of effective diffusion, Deff, measured at a time window of one second, Δteff = 1 s, that is,

$$D_{\mathrm{eff}} = \frac{\langle \mathrm{MSD} \rangle}{2k\,\Delta t_{\mathrm{eff}}}, \tag{1}$$

where $\langle \mathrm{MSD} \rangle$ was the ensemble mean-squared displacement of a particle, and k was the number of spatial dimensions in which the displacement was tracked. The following variables were obtained in all the experiments analyzed: particle hydrodynamic diameter (d), particle type, mucus source, dominant mucin expression, and temperature (T). If a study did not report the temperature explicitly, room temperature (298 K) was assumed. The following variables were obtained or derived when possible: anomalous diffusion exponent (α), particle effective surface charge (ζ), mucus pH, mucus salt concentration, and mucin concentration.
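As an illustration of Eq. 1, the following minimal sketch (hypothetical helper code, not part of the authors' published pipeline) computes the effective diffusion of two-dimensional tracking data at a 1 s window:

```python
import numpy as np

def effective_diffusion(trajectories, dt_frame, window=1.0, k=2):
    """Effective diffusion from Eq. 1: D_eff = <MSD(window)> / (2 k window)."""
    lag = int(round(window / dt_frame))      # number of frames spanning the window
    t_lag = lag * dt_frame                   # realized time window in seconds
    sq_disp = []
    for r in trajectories:                   # r: (n_frames, k) positions in micrometers
        disp = r[lag:] - r[:-lag]            # displacements over the window
        sq_disp.append(np.sum(disp**2, axis=1))
    msd = np.mean(np.concatenate(sq_disp))   # ensemble mean-squared displacement
    return msd / (2 * k * t_lag)             # micrometers^2 per second

# Synthetic 2D Brownian walk with D = 0.5 um^2/s sampled every 30 ms:
rng = np.random.default_rng(0)
D, dt = 0.5, 0.03
steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(2000, 2))
print(effective_diffusion([np.cumsum(steps, axis=0)], dt))  # ~0.5
```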
The anomalous exponent, also known as the logarithmic slope of the mean-squared displacement in the microrheology community, was obtained from the subdiffusion equation:

$$\langle \mathrm{MSD}(\Delta t) \rangle = 2k\, D_{\alpha}\, \Delta t^{\alpha}. \tag{2}$$

Here, Dα is the generalized diffusion and Δt is the sampling time window (a minimal fitting sketch is given at the end of this subsection). It is important to note that subdiffusion is predicted to be a transient regime in viscoelastic fluids; at very short timescales, particles' motion is dominated by ballistic motion, and by non-anomalous diffusion at long timescales. Nonetheless, subdiffusion is a relevant phenomenon observed in mucus and other polymeric fluids at a range of timescales important in biological systems, from milliseconds to days [7,31,32]. The experiments analyzed in this study fall within this time window where subdiffusion can be important.

Some references shared measured diffusion relative to particle diffusion in water; in these cases, the particle diffusion in water was inferred by applying the Stokes-Einstein equation, using the reported temperature and hydrodynamic particle diameter. It is worth noting that the Stokes–Einstein relation was used only to infer particle diffusion in water. Particle diffusion in mucus is better described using the generalized diffusion equation due to mucus' viscoelastic properties. However, neither the Stokes–Einstein relation nor the generalized diffusion equation was applied to obtain the particle's effective diffusion in mucus. All the diffusion values in mucus were empirical and independent of theoretical assumptions regarding the generalized diffusion equation. The particle types were defined as a qualitative measure of particle-mucin bonds: COOH (carboxylated), pegylated (PEG), virus, amine, antibody/protein, or chitosan. The full data set containing the measurements collected in all experiments is available in Supplementary Data S2.

The dominant mucin composition from each mucus source was obtained by evaluating the expression levels of mucin genes from the genome bioinformatics portal Ensembl. Mucins were identified assuming the tissue/organ associated with each mucus or closely associated tissues. Expression levels were collected by taking the average of the reported median of transcripts per million (TPM) RNA sequence and the most explicitly stated low, medium, and high expression levels. Based on potential gene expression of mucins with reported levels below the cutoff, TPM measuring below the minimum (0.05 TPM) was distinguished from experiments with no data due to possible gene expression. Low, medium, and high expression levels were obtained over reports of below cutoff in the same tissue. The dominant mucin was determined by the highest expression level, and then, if necessary, by the highest average of median TPM. Identification of mucin expression based on tissues was associated with each mucus: human respiratory mucus and human cystic fibrosis mucus were associated with the human lung mucin genes; human cervical mucus and cervicovaginal mucus were associated with human cervix or uterus mucin genes; pig intestinal mucus was originally from the jejunum part of the small intestine, but due to a lack of reports for jejunum tissue, the associated mucin genes were taken as the average of the median TPM of the pig duodenum and pig ileum parts of the small intestine, based on their close proximity to the jejunum; pig ileum intestinal mucus was associated with ileum tissue mucin genes; pig gastric mucus was associated with pig stomach mucin genes.
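The subdiffusion fit of Eq. 2 referenced earlier in this subsection can be sketched as a least-squares fit on log-log axes (illustrative code; the published pipeline in the GitHub repository may differ):

```python
import numpy as np

def fit_subdiffusion(lags, msd, k=2):
    """Fit <MSD(dt)> = 2 k D_alpha dt**alpha (Eq. 2) on log-log axes.

    The slope of log(MSD) vs log(lag) is the anomalous exponent alpha;
    the intercept gives the generalized diffusion D_alpha.
    """
    slope, intercept = np.polyfit(np.log10(lags), np.log10(msd), 1)
    return slope, 10**intercept / (2 * k)    # (alpha, D_alpha)

# Synthetic subdiffusive MSD with alpha = 0.6 and D_alpha = 0.1:
lags = np.logspace(-2, 1, 30)                # sampling windows in seconds
msd = 2 * 2 * 0.1 * lags**0.6                # um^2, with k = 2 dimensions
print(fit_subdiffusion(lags, msd))           # -> (0.6, 0.1)
```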
Supplementary Data S3 contains the full data set obtained from the bioinformatic analysis.

### Statistical Analysis

The multivariate analysis was performed using the nonparametric statistical method random forest, estimating permutation p-values in R, using the rfPermute package. Random forest is a statistical learning method that relies on generating an average ensemble of random decision trees. Here, the effective diffusion was used as the supervised variable for the regression of the random forest algorithm, using the rest of the variables as inputs. The percentage increase in the mean-squared error (%MSE) was used to identify the importance of each variable as a predictor. The selection of the most relevant variables was obtained by applying random forest in two consecutive rounds, discarding the variables that were not statistically significant in each round (p-value > 0.05). The average percentage increase of the mean-squared error (%MSE) was obtained by investigating permutations of three variables at a time. These permutations assessed whether the p-value obtained was robust.

The single-variable correlation analysis was performed using the nonparametric Spearman's correlation coefficient and parametric linear regressions minimized by the least-squares method. The effective diffusion was used as the predicted variable and compared with all the other variables as predictors. The linear regressions explored logarithmic and non-logarithmic scales for both the predictor and predicted variables. The values of the best fit in the linear regression provided an average response and were compared with the mid-values from the measured physical factors for consistency.

### Theoretical Analysis

A scaling ansatz was applied to the subdiffusion equation, Eq. 2, to extract the explicit dependence on the anomalous diffusion exponent α. This theoretical analysis assumed that the microscopic motion of a particle was associated with a characteristic molecular mobility timescale, tD, and displacement scale, LD. This first-order approximation aimed to identify the scaling relationship between the generalized diffusion coefficient and these microscopic observables. The rationale and explicit derivation of this theoretical analysis are included in the Results section. Logarithmic derivatives of the effective diffusion were calculated to estimate the impact of each of these three physical parameters—α, tD, and LD—on the rate of change of the effective diffusion. This determined the threshold condition where the anomalous diffusion exponent, α, was predicted to dominate statistically over the other factors. This theoretical prediction was compared with the empirical values obtained from the statistical analysis.

## Results

### Particles' Effective Diffusion in Mucus Spanned Seven Orders of Magnitude

The microscopic particles studied had diameters, d, covering three orders of magnitude, from 1 to 1,300 nm, and they displayed an effective diffusion spanning seven orders of magnitude, from 10⁻⁵ μm²/s to 10² μm²/s (see Table 1 and Supplementary Data S2). The anomalous exponent, α, ranged from strongly subdiffusive (α ≈ 0.1) to purely diffusive (α ≈ 1), but it was obtained only from a third of the data set (n = 39; 37%). The zeta potential, ζ, measured the effective surface charge of particles in solution. The values ranged from −70 mV to +40 mV and were obtained for half of the data set (n = 57; 52%). The temperature range was narrow, 295–310 K.
The pH ranged from mildly acidic (pH = 3.0) to slightly alkaline (pH = 7.4); however, most particle types were measured at a fixed pH (Supplementary Data S2). A third of the experiments had been conducted in artificial mucus-like hydrogels. The rest of the experiments had been conducted in mucus from four sources: human respiratory, human cervix, pig gastric, and pig intestines. The dominant mucins were MUC2 (n = 59; pig intestines and pig stomach sources), MUC5B (n = 30; human cervix source), and MUC5AC (n = 14; human lung source). See Supplementary Data S3 for the extended outputs of the dominant mucin analysis.

TABLE 1. Summary of empirical data. The effective diffusion, Deff, was obtained for a common time window of 1 s.

### The Anomalous Exponent Displayed the Strongest Correlation With the Effective Diffusion

The random forest analysis selected five significant variables affecting the effective diffusion (sampling time window 1 s), Deff (Figure 1). The anomalous diffusion exponent, α, was the most relevant variable predicting effective diffusion in the random forest model, with an average percentage increase in the mean-squared error (%MSE) of 22 (±3)% (std. dev.) (p-value = 0.0099). The second most relevant variable was particle type with 19 (±7)% in %MSE, followed by zeta potential with 15 (±5)%, mucus source with 13 (±7)%, and dominant mucin with 10 (±8)%.

FIGURE 1. Selected variables impacting effective diffusion. (A) Average percentage increase of mean-squared error (%MSE) for the selected variables. The error bars correspond to the standard deviation. (B) Decision tree for the most important variables. Each node contains the predicted average Deff and the percentage of data predicted. The gradient displays diffusion values from ∼10⁻⁴ μm²/s (white) to ∼10⁻¹ μm²/s (blue).

When analyzing the selected variables individually, the anomalous diffusion exponent, α, displayed a significantly stronger correlation with the effective diffusion, Deff, than the other variables. The nonparametric Spearman correlation was ρ = 0.93 (p-value < 2.0 × 10⁻¹⁶, n = 39) (Supplementary Table S1). The second strongest individual variable was the negative zeta potential, ζ < 0 (ρ = 0.6, p-value = 0.0002, n = 36). See Supplementary Table S1 for the outputs of all correlations.

The effective diffusion, Deff, increased exponentially with α (Figure 2A). The linear regression for the semi-logged data (log-linear axes) explained 89% of the variance (log₁₀ Deff = aα + b, with a = 5.3 ± 0.3, b = −5.0 ± 0.2, R² = 0.89). The anomalous exponent was extracted for 37% (n = 39) of the data, in particular, carboxylated, PEG, and viral particles. The inverse statistical model was fitted to this data set (α = a′ log₁₀ Deff + b′, with a′ = 0.17 ± 0.01, b′ = 0.92 ± 0.02, R² = 0.88, n = 39) to estimate the mean value of α for the remaining 63% (n = 67) of the data, corresponding to amine, chitosan, antibody, and protein particles (Figure 2B). Particles with effective diffusion above Deff > 3.5 μm²/s (n = 21) were predicted to display regular diffusion (α = 1); none of the particles analyzed self-propelled, and thus, superdiffusion (α > 1) was discarded. The majority of particles displaying regular diffusion (α ≈ 1) corresponded to human proteins (n = 18) and two viruses, Norwalk virus and human papillomavirus (HPV).

FIGURE 2. Effective diffusion and anomalous exponent analysis. (A) Effective diffusion was plotted as a function of the anomalous exponent.
The solid line represents the regression model. The gray area represents the 95% confidence interval. The statistically significant slope and R² of the linear regression are displayed. (B) The anomalous exponent was predicted based on the model found empirically in (A). The solid line designates the predicted linear model. The gray area represents the 95% confidence interval of the predicted linear model. The dashed lines represent the 95% prediction interval. (A,B) The distinguished particle types are represented in the legends of both panels.

### The Anomalous Exponent's Correlation Is Significant at Timescales Two Orders of Magnitude Larger Than the Microscopic Displacement Time

To elucidate the physical origin of the dominance of the anomalous exponent, α, its relationship with the effective diffusion, Deff, was derived from Eqs. 1, 2:

$$D_{\mathrm{eff}} = \frac{D_{\alpha}}{\Delta t_{\mathrm{eff}}}\,\Delta t_{\mathrm{eff}}^{\alpha}. \tag{3}$$

Deff displays an explicit exponential dependency on α in the factor $\Delta t_{\mathrm{eff}}^{\alpha}$ and an implicit dependency through the generalized diffusion coefficient (Dα). The form of Dα depends on the specific underlying subdiffusion mechanism [28,38]. Our meta-analysis contained a broad range of data (Table 1), including particles with different chemistry, mucus of different types, different physicochemical conditions, and independent groups carrying out different experimental implementations. Therefore, the functionality of Dα with α was not obvious, and Eq. 3 was not sufficient to justify the dependence and dominance of α in determining the effective diffusion of particles in mucus. To understand this phenomenon, Dα was further scrutinized.

The units of Dα depend on α, Eq. 2. In our study, these were μm²/s^α. Like any other physical quantity, α has an associated uncertainty (error or standard deviation). Thus, the units of Dα are uncertain. In other words, the generalized diffusion coefficient is not a measurable physical quantity. The fact that Dα is not a physical quantity has been previously overlooked and mandates a revision of the classic subdiffusion equation, Eq. 2.

To reformulate the subdiffusion equation, the following ansatz was introduced. It was assumed that the particle diffusion emerges as the stochastic repetition of a particle's local physical motion with a characteristic displacement, LD. This displacement is the consequence of a velocity, vD, propelling the particle during a characteristic time, tD:

$$L_D \sim v_D\, t_D. \tag{4}$$

This is a general formulation independent of the underlying physical mechanism responsible for the particle's mobility. Other characteristic scales might play a role in the anomalous exponent, α, as exemplified in the Discussion section. This led to the following relationship:

$$D_{\alpha} = \frac{L_D^2}{t_D^{\alpha}}. \tag{5}$$

This ansatz was combined with the classic subdiffusion equation, Eq. 2, obtaining:

$$\langle \mathrm{MSD}(\Delta t) \rangle = 2k\, L_D^2 \left(\frac{\Delta t}{t_D}\right)^{\alpha}. \tag{6}$$

This reformulated subdiffusion equation is valid for time windows, Δt, larger than the characteristic mobility timescale, tD, that is, Δt ≫ tD. For smaller time windows, the underlying mobility mechanism will dominate, requiring a different formulation for the displacement.

The reformulated subdiffusion equation, Eq. 6, was combined with the definition of the effective diffusion, Eq. 1, obtaining:

$$D_{\mathrm{eff}}(\alpha) = \frac{L_D^2}{\Delta t_{\mathrm{eff}}} \left(\frac{\Delta t_{\mathrm{eff}}}{t_D}\right)^{\alpha}. \tag{7}$$

The effective diffusion, thus, depends exponentially on the anomalous diffusion exponent, α, justifying the empirical relationship observed for the effective diffusion of particles in mucus (Figure 2A).
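To make this exponential dependence concrete, the sketch below evaluates Eq. 7 for illustrative scales LD and tD of the order obtained later in the consistency analysis; sweeping α at a fixed 1 s window spans about five orders of magnitude in Deff:

```python
L_D, t_D, dt_eff = 3e-3, 5e-6, 1.0                      # um, s, s (illustrative scales)
for alpha in (0.1, 0.4, 0.7, 1.0):
    d_eff = (L_D**2 / dt_eff) * (dt_eff / t_D)**alpha   # Eq. 7
    print(f"alpha = {alpha:.1f} -> D_eff ~ {d_eff:.1e} um^2/s")
# Output spans from ~3e-05 to ~1.8e+00 um^2/s over this alpha range.
```

Note that for these scales, the logarithmic slope log₁₀(Δteff/tD) ≈ 5.3, consistent with the empirical slope a = 5.3 ± 0.3 of the regression reported above.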
The characteristic displacement, LD, and timescale, tD, depend on the specific physical mechanism responsible for the diffusion. Therefore, experiments using different particles and mucus properties are expected to introduce a variance in these two magnitudes, justifying the data dispersion in Figure 2A.

To determine the conditions that select α over LD and tD as the parameter with the strongest correlation with Deff across multiple scales, the logarithm of Eq. 7 was investigated as follows:

$$\log D_{\mathrm{eff}}(\alpha) = \log\frac{L_D^2}{\Delta t_{\mathrm{eff}}} + \alpha \log\frac{\Delta t_{\mathrm{eff}}}{t_D}. \tag{8}$$

For a fixed time window, Δteff, the rate of change of Deff with respect to α is

$$\frac{\partial \log D_{\mathrm{eff}}}{\partial \alpha} = \log\frac{\Delta t_{\mathrm{eff}}}{t_D}. \tag{9}$$

The impact of LD and tD was evaluated using the logarithms of LD and tD to obtain results valid across scales and independent of measuring units, respectively:

$$\frac{\partial \log D_{\mathrm{eff}}}{\partial \log L_D} = 2 \tag{10}$$

and

$$\frac{\partial \log D_{\mathrm{eff}}}{\partial \log t_D} = -\alpha. \tag{11}$$

The change with respect to the length scale, LD, was constant and equal to 2, Eq. 10. The change with respect to the timescale, tD, was smaller than 1 in absolute value, Eq. 11, because the anomalous exponent had an upper limit of 1, α ≤ 1 (in the experiments analyzed, there were no self-propelled particles or active transport mechanisms that could display superdiffusion). Eqs. 9–11 predict that the anomalous diffusion exponent is the physical parameter with the strongest correlation in determining the rate of change in the effective diffusion:

$$\frac{\partial \log D_{\mathrm{eff}}}{\partial \alpha} > \frac{\partial \log D_{\mathrm{eff}}}{\partial \log L_D} > \left|\frac{\partial \log D_{\mathrm{eff}}}{\partial \log t_D}\right|, \tag{12}$$

for sampling time windows two orders of magnitude larger than the characteristic mobility timescale:

$$\frac{\Delta t_{\mathrm{eff}}}{t_D} > 10^{2}. \tag{13}$$

This result applies to any physical system as far as the diffusion is the consequence of a local characteristic physical motion.

### The Theoretical Ansatz Is Consistent With the Statistical Analysis

The predictions from the reformulated subdiffusion equation were investigated for the particle diffusion in mucus data. The slope and intercept obtained from the linear regression in Figure 2A were interpreted with respect to Eq. 8. The values of the best fit represent an average response and were compared to the mid-values from the physicochemical factors to test the consistency of the ansatz, Eq. 5. The mean characteristic length and timescales obtained statistically were LD ≈ 3 nm and tD ≈ 5 µs, respectively. The sampling time window was Δteff = 1 s. Therefore, Δteff/tD ≈ 10⁶ ≫ 10², satisfying the inequality establishing the condition for the strong correlation of the anomalous diffusion exponent, Eq. 13. This implies that the experimental conditions investigating particle diffusion in mucus were in the regime where the anomalous exponent, α, was predicted theoretically to be the dominant factor determining the particle's effective diffusion, Deff, Eq. 13.

To further confirm the consistency of the theoretical framework with the empirical data, it was necessary to justify that the mean values obtained from the linear regression of Eq. 8, that is, LD ≈ 3 nm and tD ≈ 5 µs, were physically sound. Regardless of the physicochemical factors in mucus controlling α, one expects a local displacement caused by a tangible physical mechanism associated with a characteristic velocity vD and a finite timescale tD, Eq. 4. In all experiments analyzed, the particles were passive, and mucus was not forced externally to generate an active transport.
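A quick numerical check of the sensitivity ordering (an illustrative sketch; the parameter values are assumptions of the same order as the scales quoted above):

```python
import numpy as np

dt_eff, t_D, alpha = 1.0, 5e-6, 0.5    # s, s, illustrative exponent
s_alpha = np.log10(dt_eff / t_D)       # Eq. 9:  d(log D_eff)/d(alpha)     ~ 5.3
s_L = 2.0                              # Eq. 10: d(log D_eff)/d(log L_D)
s_t = -alpha                           # Eq. 11: d(log D_eff)/d(log t_D)
print(s_alpha > s_L > abs(s_t))        # True: the ordering of Eq. 12 holds
# s_alpha exceeds s_L exactly when dt_eff / t_D > 10**2, i.e., Eq. 13.
```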
It is reasonable to assume that most particles in the experiments acquired their transient velocity from absorbing kinetic energy from the water molecules in mucus, leading to the characteristic velocity $v_D^2 \sim k_B T/m$, where kB is the Boltzmann constant, T is the temperature, and m is the mass of the particle. The particle's velocity vD would dissipate due to the mucus' viscosity within a characteristic time $t_D \sim t_r \sim m/\gamma$, where γ is the friction coefficient. In the most general formulation, this friction coefficient contains the viscous and elastic effects of the fluid. This leads to the characteristic local displacement $L_D \sim \sqrt{k_B T\, m}/\gamma$. Room temperature was then assumed, together with a typical particle mid-size in the data set, d ∼ 100 nm, and a viscosity of mucus close to water, which was a reasonable assumption because most physiological conditions have a low mucin weight per volume [9,38]. This led to a characteristic local displacement of LD ∼ 1 nm and a characteristic local displacement dissipation time of tD ∼ 1 µs. Therefore, the estimated characteristic scales were consistent with those obtained from the empirical and theoretical analysis, LD ≈ 3 nm and tD ≈ 5 µs. For the case discussed above, it is important to notice that in the limit of regular diffusion, α = 1, the ansatz in Eqs. 4, 5 leads to $D \sim L_D^2/t_D$, recovering, as expected, the diffusion expression $D \sim k_B T\, t_D/m$ associated with the fluctuation–dissipation theorem. However, the theoretical framework defined by the fundamental ansatz is general and does not require particles to be propelled by the absorption of kinetic energy.

### Particles Larger Than 100 nm Are More Sensitive to Anomalous Diffusion

Particle size, d, was not selected as a significant predictor in the random forest analysis (Figure 1A). However, the anomalous exponent analysis predicted that a certain group of particles would display regular diffusion (Figure 2B). This suggested that particle size could have an important indirect role in the effective diffusion. In fact, the analysis of Deff as a function of d displayed a clear threshold around d ∼ 100 nm (Figure 3A). Larger particles, d > 100 nm, displayed a lower effective diffusion, Deff, although with no apparent statistical correlation with size (Spearman correlation ρ = −0.24, p-value = 0.19). Smaller particles, d < 100 nm, displayed an effective diffusion with a significant statistical correlation (Figure 3A). The slope for the log–log data was m = −2.2 ± 0.3 (R² = 0.67), that is, the effective diffusion apparently displayed a power law of order 2 with particle size, Deff ∼ 1/d². Thus, diffusion overall decreased with particle size much more rapidly than in regular diffusion, which is consistent with viscoelastic effects. However, a subset of particles (n = 21) diffused normally (α = 1) in Figure 2B, displaying an effective diffusion inversely proportional to particle size, with a power function exponent m = −1.0 ± 0.1 (p-value = 1.4 × 10⁻⁷, R² = 0.77) (Figure 3A). This empirical scaling, Deff ∼ 1/d, is expected for particles displaying regular diffusion, in agreement with the anomalous exponent prediction in Figure 2B. A similar analysis was performed comparing the effective diffusion rescaled by particle size, Deff·d, as a function of particle size, d. As expected, the particles predicted to display regular diffusion had slope zero, and the conclusions of the analysis were analogous (Supplementary Figure S1).
Statistically, the analysis of Deff was preferred over Deff·d because the use of d as input and output in Supplementary Figure S1 can introduce biases and increase uncertainty.

FIGURE 3. Particle size analysis. (A) The effective diffusion was plotted against particle size, d. The different symbols correspond to different particle types as indicated in the legend. The solid line indicates the linear regression for d < 100 nm particles using log–log data (n = 26), and it displays the slope and coefficient of determination, R². The gray area represents the 95% confidence interval. The group of particles with d > 100 nm (n = 80) did not display a statistically significant relationship, and no solid line is included. The dashed line corresponds to the linear regression of the subset of particles (n = 21) displaying regular diffusion, α = 1, in Figure 2B, using the log–log data. The slope was approximately −1 as expected (slope = −1.0 ± 0.1, p-value = 1.4 × 10⁻⁷, R² = 0.77). (B) The anomalous exponent was plotted as a function of the particle size. The symbols and lines are analogous to panel (A). As in panel (A), the solid line is the regression for the particles with d < 100 nm (n = 26), while the dashed line represents the subset (n = 21) predicted to display regular diffusion, α = 1. Empty symbols correspond to anomalous exponents obtained empirically. The solid symbols correspond to the predicted anomalous exponents for the subset of data that did not include empirical values. The predictions were obtained using the model derived from Figure 2B.

### Most Parameters Reported Display Weak Correlations With the Effective Diffusion

The other four variables selected in the random forest analysis (Figure 1), that is, particle type, particle charge, mucus source, and mucin type, displayed weak correlations or no apparent correlations with Deff as single predictors (Supplementary Table S1).

##### Particle Type

Particle type was selected as the second most relevant variable to predict the effective diffusion based on the random forest analysis (Figure 1). Comparing the effective diffusion for the different particles confirmed this prediction (Supplementary Figure S2A). Antibodies and proteins displayed the fastest effective diffusion, with a median of 48.9 μm²/s. Viruses were the second fastest group, with a mean effective diffusion an order of magnitude smaller, 3.5 μm²/s. PEG and amine particles formed the third group. They displayed statistically similar effective diffusions, with medians 0.99 μm²/s and 2 × 10⁻² μm²/s. This was followed by carboxylated particles, median 3 × 10⁻² μm²/s, and finally chitosan, 4 × 10⁻³ μm²/s. Differences in particle size could explain the reduction in effective diffusion for antibodies/proteins, viruses, and PEG particles (Supplementary Figure S2B). They had median sizes of ∼10, ∼100, and ∼1,000 nm, respectively. It is unclear which physicochemical factors were behind the slower diffusion of amine, COOH, and chitosan particles (Supplementary Figure S2).

##### Particle Charge

The third predictor for effective diffusion was particle charge, expressed as the zeta potential ζ (Figure 1). Particles with negative zeta potential displayed a positive correlation with the effective diffusion, with a Spearman correlation of ρ = 0.6 (p = 0.0002, n = 36) (Figure 4A). The relationship was approximated by an exponential function, $D_{\mathrm{eff}} \sim 10^{m\zeta}$.
The rate with the zeta potential was m = (0.024 ± 0.006) mV⁻¹ (p = 0.0002), obtained from a least-squares linear regression using the log-linear data. This exponential model accounted for 30% of the variance (R² = 0.30). The largest effective diffusions were achieved at neutral zeta potentials. Positive zeta potentials (n = 21) had lower values but did not display a statistically significant correlation with the effective diffusion. Particle size or other properties did not seem to explain the trend observed for negatively charged zeta potentials (Supplementary Figure S3). These particles, however, displayed a linear positive correlation with the anomalous exponent (Figure 4B).

FIGURE 4. Electrostatic analysis. (A) The effective diffusion was plotted against zeta potential. (B) The anomalous exponent was plotted as a function of zeta potential. (A,B) The distinction between empirical and predicted data as well as particle types is represented in the legend. The dotted line indicates ζ = 0. The solid lines correspond to statistically significant linear regressions. The gray areas represent 95% confidence intervals of the linear regressions. The slopes and R² of each linear regression are also displayed in the panels.

##### Mucus Source

The mucus source and dominant mucin were the last two significant predictors of effective diffusion (Figure 1). The effective diffusion was fastest in human cervix samples, with a median ∼10 μm²/s, although the values spanned six orders of magnitude, from ∼10⁻⁴ to ∼10² μm²/s (Supplementary Figure S4). The effective diffusion was slowest in mucus from the human lung (median ∼10⁻² μm²/s) and pig intestine (median ∼10⁻² μm²/s). The median particle size in empirical data from human cervix mucus was more than an order of magnitude smaller, ∼10 nm, than for the empirical data from the other sources. The median pH for the empirical data from human cervix mucus was significantly lower (median 5.5) compared to the other sources (median 7). Lower pH tends to thicken mucus, leading one to expect a slower effective diffusion, but the smaller particle size may have offset this trend. The transcription analysis identified MUC5B, which is dominant in the human cervix, as displaying the largest effective diffusion (median ∼10 μm²/s) compared to the other dominant mucins: MUC2, common in intestinal mucus (median diffusion ∼10⁻¹ μm²/s), and MUC5AC, common in respiratory mucus (median diffusion ∼10⁻² μm²/s) (Supplementary Figure S4).

## Discussion

The meta-analysis of particle diffusion in mucus revealed that the diffusion of microscopic particles spanned seven orders of magnitude in passive conditions (no self-propulsion or active mucus transport) (Table 1). The anomalous diffusion exponent, α, was the factor displaying the strongest correlation with the effective diffusion, Deff (Figure 1). Statistically, the effective diffusion displayed an exponential dependence on the anomalous exponent, explaining 89% of the variance in the data (Figure 2A). This result was based on 39 out of 106 experiments (about a third), which had measured the anomalous exponent. Among the remaining 67 experiments, our statistical model predicts that the anomalous exponent was dominant in determining the effective diffusion in 46 of these experiments, that is, 69% of them (Figure 2B). In the other 21 experiments, the model predicts that particles followed regular diffusion, that is, the anomalous exponent would have no power predicting the change in the effective diffusion (Figure 2B).
Therefore, the anomalous exponent was a strong predictor of the effective diffusion in 80% of all experiments analyzed. The anomalous exponent is an emerging property, and this result offers the opportunity to compare the diffusion of particles subjected to different molecular mechanisms. It is puzzling, however, that only a third of the experiments measured the anomalous exponent. One possible explanation is the fact that the anomalous exponent is a well-known emerging property, but the relationship between this exponent and the underlying molecular factors determining its value is not yet well established in the field. Below, we argue that investigating the molecular basis of the anomalous exponent is the key to characterizing and controlling particle diffusion in mucus and other polymeric fluids at relevant biological timescales.

The theoretical scaling analysis of subdiffusion identified that the anomalous diffusion exponent, α, displays a stronger correlation than other physical factors when the diffusion is characterized at sampling times two orders of magnitude larger than the microscopic timescale fueling the diffusive motion, Eqs. 12, 13. In our analysis, the experiments focused on passive diffusion conditions, but the principles behind the theoretical scaling can be expanded to situations with motile particles as well as energetically active mucus transport. The theoretical derivation just assumes that there is a characteristic timescale and length scale governing the particle's local motion. For the experiments analyzed here, the kinetic energy and viscosity of the fluid were assumed to be associated with the particle's local motion and were used to investigate the consistency with the theory. But in other contexts, the same analysis can be applied, replacing the dependency of the characteristic scales with other mechanisms. For example, if particles run and tumble, like the bacterium Escherichia coli, the transient velocity of the particle depends on the viscosity and concentration of the polymeric network and food sources instead of the kinetic energy [42,43]. The scaling analysis is also consistent with the generalized diffusion equation in complex fluids, which extends the Stokes-Einstein relation to viscoelastic fluids. In this case, the characteristic length and timescales would incorporate the elastic effects of the network. The role of the general characteristic scales in the revised diffusion equation, Eq. 6, aimed to accommodate a diverse set of scenarios. Additionally, it solved the issue of relying on the generalized diffusion constant, which has undefined physical units and is not strictly a well-defined physical magnitude, Eq. 5. The theoretical and empirical analyses presented here highlight the dominance of the anomalous diffusion exponent in determining the range of effective diffusions.

Thus, the problem now translates into identifying the factors that determine the anomalous diffusion exponent. These physical factors depend on the underlying mechanism responsible for the subdiffusion. This is particularly relevant to understand the emergence of the critical particle size d ∼ 100 nm in Figure 3. This critical value may represent the onset particle size where the effects of the mucus mesh become relevant. Most experiments analyzed did not report the mucus mesh size, but the critical size observed was consistent with mesh sizes measured in mucus samples, which ranged from 100 to 400 nm [1,44–47].
For particles with sizes similar to or larger than this mesh size, the theoretical description of the molecular displacement, LD, should include the effects of the mesh size. If the mesh hinders mobility, this will lead to a reduction of the average molecular displacement. If the mesh streamlines the mobility, then the molecular displacement could increase; for example, changing the chemical coating of particles that are relatively large (200–500 nm) compared to the critical size, d ∼ 100 nm, can yield larger diffusivities in mucus than in water. The key point is that for particles larger than 100 nm, the impact of the mesh size on particle diffusivity will be very diverse. In each case, it is necessary to assess the underlying molecular mechanism responsible for the anomalous diffusion to identify the key physical factors governing the diffusivity. We have clarified this below for two mechanisms that may play an important role in mucus. First, microscopic particles can bind to mucin fibers, leading to subdiffusion. Second, mucin fibers form a polymeric mesh that can trap particles, as observed in other hydrogels. These two scenarios are particularly relevant in passive conditions. Scenarios involving the activation of the mucus network via cilia or peristalsis are also of interest but fall beyond the scope of this work.

Binding to mucins does not necessarily lead to subdiffusion. If a particle has a single binding site, the characteristic binding time tb would dilate the characteristic time used to estimate the diffusivity of the particle, $t_D \sim t_r + t_b$, where tr is the relaxation time. The microscopic diffusion would be $D \sim v_D^2 t_r^2/t_D \sim f_r k_B T\, t_r/m$. The diffusion would be reduced by the factor fr < 1, which is the fraction of time spent dissipating the particle's speed, fr = tr/(tr + tb). This would not impact α unless more than one region of the particle can bind stochastically to mucins, increasing the binding time beyond the sampling time, tb ≫ Δteff. This would lead to an effective power-law distribution of binding times with no apparent characteristic binding time [49,50]. The emergence of long-tailed attachment time distributions leads to subdiffusion. The anomalous exponent, α, would be equal to the exponent, ν, of the asymptotic approximated power-law distribution of attachment times [27,28,38]. In this case, the continuous-time random walk approximation leads to the generalized subdiffusion expression $D_\alpha = D\,\tau_D/\tau_D^{\alpha}$. Here, D is the diffusion of the particle in the absence of interactions with mucins, and τD is the average diffusion time of a particle before attaching again to a mucin fiber. This result is consistent with the ansatz introduced in Eq. 5. In this particle–mucin affinity mechanism, the distribution of binding times would control α, becoming the most relevant factor impacting the effective diffusion, Deff. Unfortunately, the experiments analyzed did not explore the particle affinities to mucus explicitly.

The microenvironment trapping mechanism was observed in F-actin networks, where microscopic tracers were shown to follow anomalous diffusion. The anomalous exponent was a linear function of the ratio between the particle size (d) and the network's mesh size (ξ). The empirical dependency obtained was α ≈ 1 for d/ξ < 0.1, α ≈ −1.25 d/ξ + 1.38 for 0.1 < d/ξ < 1.1, and α ≈ 0.1 for d/ξ > 1.1.
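Encoding that quoted empirical dependency as a piecewise function (the numerical constants come from the F-actin study cited above; applying them to mucus is an assumption):

```python
def anomalous_exponent(d_over_xi):
    """Piecewise alpha(d/xi) quoted above for tracers in F-actin networks.

    d_over_xi: particle diameter divided by the network mesh size.
    The transfer of these constants to mucus is an assumption.
    """
    if d_over_xi < 0.1:
        return 1.0                           # tracer much smaller than mesh: normal diffusion
    if d_over_xi <= 1.1:
        return -1.25 * d_over_xi + 1.38      # linear crossover regime
    return 0.1                               # tracer caged by the mesh

for ratio in (0.05, 0.5, 1.0, 2.0):
    print(ratio, anomalous_exponent(ratio))  # 1.0, 0.755, 0.13, 0.1
```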
Thus, particles with sizes about 10% of the mesh size or smaller diffused normally, while particles with a size similar to or larger than the mesh displayed a reduced diffusivity with a low anomalous exponent. The specific parameters of the relationship were not derived, but one would expect similar behavior in mucus. The average mucus in humans has a typical mesh size between 100 and 1,000 nm. In this mechanism, d/ξ controls α, becoming the most important factor determining Deff. This could explain the threshold observed in the effective diffusion as a function of the particle size (Figure 3A). Larger particles, d > 100 nm, displayed lower effective diffusions, Deff, although with no apparent statistical correlation. The variation of α, from 0.15 to 1, could be due to a change in the mesh size (ξ). Unfortunately, the mesh size (or a proxy, like the concentration of mucins) was not measured or reported in most experiments analyzed here.

The two mechanisms discussed above could also help interpret the statistically significant correlations obtained between the effective diffusion and the surface charge of particles (Figure 4). Given the negative charge of mucin fibers, a particle with a larger negative charge would display a larger effective radius within the mucin network. This would increase the particle size to network mesh ratio, thus reducing the anomalous exponent and, consequently, the particle's diffusivity. This scenario would explain the statistical trends observed for the effective diffusion and anomalous exponent at negative zeta potentials (Figure 4). However, one cannot discard other scenarios. For example, negatively charged carboxylated particles competing for cations at high densities can expose hydrophobic regions in mucus, leading mucus fibers to form bundles [6,18]. This might be a less likely but still plausible scenario. For positively charged particles, the particle-mucin binding mechanism could be responsible for the relatively low anomalous exponents observed. The framework explored here suggests that measuring the particle–mucin binding times and mucin mesh size would help disentangle the variance in the data. This framework should also apply to other polymeric fluids. It has been observed, for example, that on the timescale of days, particles of 1 μm (1,000 nm) display subdiffusion and trapping in biofilms, and 0.5 μm (500 nm) particles display a lower mean-squared displacement in regions of higher effective cross-linking. These results resonate with our finding that particles larger than 100 nm in mucus are very sensitive to subdiffusive behavior and trapping. It would be necessary to characterize systematically particle binding to polymer fibers, polymer mesh sizes, and timescales to compare, extrapolate, and unify results across different polymeric fluid systems.

Some of the weak and highly dispersed correlations analyzed above might be obscured by the combined effect of multiple variables, for example, particle charge and pH. Unfortunately, at a given particle chemistry and zeta potential (charge), most particle diffusion measurements were reported at a fixed pH (Supplementary Data S2). Thus, the data analyzed posed an intrinsic limitation to disentangling the convoluted effects of charge and pH on particle diffusion. The impact of mucin type, mucin–mucin interactions, and mucin concentration can also depend on pH.
Low pH alters the molecular structure of mucins from a random coil to extended conformations, facilitating cross-linking and transitioning to a mucus solid-gel phase. However, these key physicochemical mucus properties were difficult to extract from most experiments analyzed.

In any case, our results indicate a common approach to investigate these co-dependent properties on particle diffusion: measuring the anomalous exponent and establishing the underlying mechanism responsible for it. The physical factors controlling the anomalous exponent will be the dominant factors in particle diffusion. The theory introduced here is based on a generic ansatz that represents a first approximation. More refined theoretical approaches will be necessary to identify correction factors associated with specific underlying molecular mechanisms. One possible direction would be adapting the continuum models that characterize polymer fluids as viscoelastic Maxwell fluids [1,30,31,53]. In the presence of viscous delays incorporated with the Basset force term, these models predict an emerging subdiffusive transient region at timescales of milliseconds. That timescale is much shorter than the one investigated here (above seconds), and that transient subdiffusion does not emerge from the molecular mechanisms (the particle–mucus interaction mechanism and the caging effect) identified here as relevant in mucus. Incorporating the molecular characteristics of these two mechanisms in Maxwell fluids would offer a more sophisticated framework to predict particle diffusion in mucus.

In conclusion, our meta-analysis revealed that the anomalous exponent displays the strongest correlation with the effective diffusion of particles in mucus compared to other commonly measured factors. It explained almost 90% of the variance of diffusions across seven orders of magnitude. Our theoretical scaling analysis justified this observation by assuming a characteristic displacement length and time for the local physical motion. This led to a reformulated subdiffusion equation in terms of these characteristic scales of the underlying mobility mechanism, and it demonstrated that the widely accepted generalized diffusion constant is not a measurable physical quantity. The theoretical analysis predicted that the anomalous exponent determines the order of magnitude of the effective diffusion for sampling time windows two orders of magnitude larger than the microscopic mobility timescale. This prediction applies to any physical system and was consistent with the data from particles diffusing in mucus. Our theoretical analysis indicates that the factors regulating the anomalous exponent are essential to characterize the diffusion of particles. At least two of these factors can control the anomalous exponent in mucus: the distribution of particle–mucin binding times and the particle size to mucin mesh ratio. These factors regulate the anomalous exponent and, subsequently, the effective diffusion of microscopic particles. However, these key properties were not reported in most experiments analyzed.
Therefore, our study provides a guide on how to characterize, study, and modify the diffusion of particles in mucus and other hydrogels.

## Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

## Author Contributions

AC, JT, and AC-S collected published data on mucus, carried out the meta-analysis, crafted the figures, and drafted the manuscript. AL designed the research approach, supervised the collection of data and analysis, developed the theoretical analysis, and edited the manuscript.

## Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

## Acknowledgments

This manuscript was originally released as a pre-print at bioRxiv. We thank the Biomath Working Group at San Diego State University for their feedback during the development of the project, and we would like to make a special mention of the insightful comments from Professors Parag Katira and Arlette Baljon. AL acknowledges the support received by the New Investigator Award from the California State University (CSU) Program For Education and Research in Biotechnology (CSUPERB), the CSU Faculty Innovation and Leadership Award, and the National Science Foundation Award 1951678 from the Mathematical Biology Program.

## Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fphy.2021.594306/full#supplementary-material

## References

1. Spagnolie S. Complex Fluids in Biological Systems: Experiment, Theory and Computation. New York: Springer (2015).

2. Krajina BA, Tropini C, Zhu A, DiGiacomo P, Sonnenburg JL, Heilshorn SC, et al. Dynamic Light Scattering Microrheology Reveals Multiscale Viscoelasticity of Polymer Gels and Precious Biological Materials. ACS Cent Sci (2017) 3:1294–303. doi:10.1021/acscentsci.7b00449

3. Bäckhed F, Ley RE, Sonnenburg JL, Peterson DA, Gordon JI. Science (2005) 307:1915. doi:10.1126/science.1104816

4. Bakshani CR, Morales-Garcia AL, Althaus M, Wilcox MD, Pearson JP, Bythell JC, et al. NPJ Biofilms Microbiomes (2018) 4:1–14. doi:10.1038/s41522-018-0057-2

5. Silveira CB, Rohwer FL. NPJ Biofilms Microbiomes (2016) 2:1–5. doi:10.1038/npjbiofilms.2016.10

6. Lai SK, Hida K, Shukair S, Wang Y-Y, Figueiredo A, Cone R, et al. Human Immunodeficiency Virus Type 1 Is Trapped by Acidic but Not by Neutralized Human Cervicovaginal Mucus. J Virol (2009) 83:11196–200. doi:10.1128/jvi.01899-08

7. Cone RA. Barrier Properties of Mucus. Adv Drug Deliv Rev (2009) 61:75–85. doi:10.1016/j.addr.2008.09.008

8. Barr JJ, Auro R, Furlan M, Whiteson KL, Erb ML, Pogliano J, et al. Bacteriophage Adhering to Mucus Provide a Non-host-derived Immunity.
Proc Natl Acad Sci (2013) 110:10771–6. doi:10.1073/pnas.1305923110

9. Barr JJ, Auro R, Sam-Soon N, Kassegne S, Peters G, Bonilla N, et al. Subdiffusive Motion of Bacteriophage in Mucosal Surfaces Increases the Frequency of Bacterial Encounters. Proc Natl Acad Sci U.S.A (2015) 112:13675–80. doi:10.1073/pnas.1508355112

10. Amsden B, Turner N. Diffusion Characteristics of Calcium Alginate Gels. Biotechnol Bioeng (1999) 65:605–10. doi:10.1002/(sici)1097-0290(19991205)65:5<605:aid-bit14>3.0.co;2-c

11. Abdulkarim M, Agulló N, Cattoz B, Griffiths P, Bernkop-Schnürch A, Borros SG, et al. Nanoparticle Diffusion within Intestinal Mucus: Three-Dimensional Response Analysis Dissecting the Impact of Particle Surface Charge, Size and Heterogeneity across Polyelectrolyte, Pegylated and Viral Particles. Eur J Pharmaceutics Biopharmaceutics (2015) 97:230–8. doi:10.1016/j.ejpb.2015.01.023

12. Arends F, Baumgärtel R, Lieleg O. Langmuir (2013) 29:15965. doi:10.1021/la404016y

13. Hansing J, Ciemer C, Kim WK, Zhang X, DeRouchey JE, Netz RR. Eur Phys J E (2016) 39. doi:10.1140/epje/i2016-16053-2

14. Lieleg O, Vladescu I, Ribbeck K. Biophys J (2010) 98:1782. doi:10.1016/j.bpj.2010.01.012

15. Li LD, Crouzier T, Sarkar A, Dunphy L, Han J, Ribbeck K. Biophys J (2013) 105:1357. doi:10.1016/j.bpj.2013.07.050

16. Celli JP, Turner BS, Afdhal NH, Keates S, Ghiran I, Kelly CP, et al. Proc Natl Acad Sci U.S.A (2009) 106:14321. doi:10.1073/pnas.0903438106

17. Suk JS, Lai SK, Boylan NJ, Dawson MR, Boyle MP, Hanes J. Nanomedicine (Lond) (2011) 6:365. doi:10.2217/nnm.10.123

18. Lai SK, O'Hanlon DE, Harrold S, Man ST, Wang Y, Cone R, et al. Proc Natl Acad Sci U.S.A (2007) 104:1482. doi:10.1073/pnas.0608611104

19. Lang T, Larsson E, Johansson MEV, Hansson GC, Samuelsson T. Mol Biol Evol (2016) 33:1921. doi:10.1093/molbev/msw066

20. Olmsted SS, Padgett JL, Yudin AI, Whaley KJ, Moench TR, Cone R. Biophys J (2001) 81:1930. doi:10.1016/s0006-3495(01)75844-4

21. Newby J, Schiller JL, Wessler T, Edelstein J, Forest MG, Lai SK. Nat Commun (2017) 8. doi:10.1038/s41467-017-00739-6

22. Schuster BS, Suk JS, Woodworth GF, Hanes J. Biomaterials (2013) 34:3439. doi:10.1016/j.biomaterials.2013.01.064

23. Yildiz HM, McKelvey CA, Marsac PJ, Carrier RL. J Drug Target (2015) 23. doi:10.3109/1061186x.2015.1086359

24. Rohatgi A. WebPlotDigitizer 4.2 (2019). Pacifica, CA, United States. Available at: https://automeris.io/WebPlotDigitizer

25. Huang F, Watson E, Dempsey C, Suh J. Methods Mol Biol (2013) 991:211. doi:10.1007/978-1-62703-336-7_20

26. McGlynn J, Wu N, Schultz K. J Appl Phys (2020) 127:201101. doi:10.1063/5.0006122

27. Barkai E, Garini Y, Metzler R. Phys Today (2012) 65:29. doi:10.1063/pt.3.1677

28. Metzler R, Jeon J, Cherstvy AG, Barkai E. Phys Chem Chem Phys (2014) 16:24128. doi:10.1039/c4cp03465a

29. Hou R, Cherstvy AG, Metzler R, Akimoto T. Phys Chem Chem Phys (2018) 20:20827. doi:10.1039/c8cp01863d

30. Grimm M, Jeney S, Franosch T. Soft Matter (2011) 7:2076. doi:10.1039/c0sm00636j

31. Grebenkov D, Vahabi M, Bertseva E, Forró L, Jeney S. Phys Rev E (2013) 88:040701. doi:10.1103/physreve.88.040701

32. Chew SC, Kundukad B, Seviour T, Van der Maarel J, Yang L, Rice S, et al. MBio (2014) 5:e01536. doi:10.1128/mbio.01536-14

33. Cruickshank Miller C. Proc R Soc A (1924) 106:724.
Zerbino DR, Achuthan P, Akanni W, Amode MR, Barrell D, Bhai J, et al. Nucleic Acids Res (2018) 46:D754–D761. doi:10.1093/nar/gkx1098\n\n35. Archer E. rfPermute: Estimate Permutation P-Values for Random Forest Importance Metrics (2019). r package version 2.1.7.\n\n36. James G, Witten D, Hastie T, Tibshirani R. An Introduction to Statistical Learning, Vol. 112. Berlin, Germany: Springer (2013).\n\n37. Kumar A, Dixit CK. Advances in Nanomedicine for the Delivery of Therapeutic Nucleic Acids. Sawston, United Kingdom: Woodhead Publishing (2017). p. 43–58. doi:10.1016/b978-0-08-100557-6.00003-1\n\n38. Joiner KL, Baljon A, Barr J, Rohwer F, Luque A. Sci Rep (2019) 9. doi:10.1038/s41598-019-52794-2\n\nCrossRef Full Text\n\n39. Taylor J. Introduction to Error Analysis, the Study of Uncertainties in Physical Measurements. New York: University Science Books (1997).\n\n40. Zwanzig R. Nonequilibrium Statistical Mechanics. Oxford: Oxford University Press (2001).\n\n41. Hwang S, Litt M, Forsman WC. Rheol Acta (1969) 8:438. doi:10.1007/bf01976227\n\nCrossRef Full Text\n\n42. Martinez V, Schwarz-Linek J, Reufer M, Wilson L, Morozov A, Poon W. Proc Natl Acad Sci U.S.A (2014) 111:17771. doi:10.1073/pnas.1415460111\n\n43. Patteson A, Gopinath A, Goulian M, Arratia P. Sci Rep (2015) 5:1. doi:10.1038/srep15761\n\nCrossRef Full Text\n\n44. Suk J, Lai S, Wang Y-Y, Ensign L, Zeitlin P, Boyle M, et al. Biomaterials (2009) 30:2591. doi:10.1016/j.biomaterials.2008.12.076\n\n45. Lai S, Wang Y-Y, Hida K, Cone R, Hanes J. Proc Natl Acad Sci U.S.A (2010) 107:598. doi:10.1073/pnas.0911748107\n\n46. Lai S, Suk J, Pace A, Wang Y-Y, Yang M, Mert O, et al. Biomaterials (2011) 32:6285. doi:10.1016/j.biomaterials.2011.05.008\n\n47. Ensign L, Tang B, Wang Y-Y, Terence A, Hoen T, Cone R, et al. Sci Transl Med (2012) 4:138ra79. doi:10.1126/scitranslmed.3003453\n\n48. Wong IY, Gardel ML, Reichman DR, Weeks ER, Valentine MT, Bausch AR, et al. Phys Rev Lett (2004) 92:178101. doi:10.1103/physrevlett.92.178101\n\n49. Xu Q, Feng L, Sha R, Seeman NC, Chaikin PM. Phys Rev Lett (2011) 106:228102. doi:10.1103/physrevlett.106.228102\n\n50. Armstrong MJ, Rodriguez III JB, Dahl P, Salamon P, Hess H, Katira P. Power Law Behavior in Protein Desorption Kinetics Originating from Sequential Binding and Unbinding. Langmuir (2020) 36 (45):13527–34. doi:10.1021/acs.langmuir.0c02260\n\n51. Leal J, Smyth H, Ghosh D. Int J Pharm (2017) 532:555. doi:10.1016/j.ijpharm.2017.09.018\n\nCrossRef Full Text\n\n52. Cao X, Bansil R, Bhaskar K, Turner B, LaMont J, Niu N, et al. Bioph J (1999) 76:1250. doi:10.1016/s0006-3495(99)77288-7\n\nCrossRef Full Text\n\n53. Levine AJ, Lubensky T. Phys Rev Lett (2000) 85:1774. doi:10.1103/physrevlett.85.1774\n\n54. Cobarrubia A, Tall J, Crispin-Smith A, Luque A. bioRxiv (2020). doi:10.1101/2020.07.25.221416\n\nCrossRef Full Text\n\nKeywords: anomalous diffusion, mucus, microscopic particle, meta-analysis, random forest (bagging) and machine learning\n\nCitation: Cobarrubia A, Tall J, Crispin-Smith A and Luque A (2021) Empirical and Theoretical Analysis of Particle Diffusion in Mucus. Front. Phys. 9:594306. doi: 10.3389/fphy.2021.594306\n\nReceived: 13 August 2020; Accepted: 04 October 2021;\nPublished: 23 November 2021.\n\nEdited by:\n\nJasper Van Der Gucht, Wageningen University and Research, Netherlands\n\nReviewed by:\n\nAntonio Puertas, University of Almeria, Spain\nKelly Schultz, Lehigh University, United States\n\nCopyright © 2021 Cobarrubia, Tall, Crispin-Smith and Luque. 
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.\n\n*Correspondence: Antoni Luque, [email protected]"
] | [
null,
"https://loop.frontiersin.org/images/profile/1063007/24",
null,
"https://loop.frontiersin.org/images/profile/1092812/24",
null,
"https://f96a1a95aaa960e01625-a34624e694c43cdf8b40aa048a644ca4.ssl.cf2.rackcdn.com/Design/Images/newprofile_default_profileimage_new.jpg",
null,
"https://loop.frontiersin.org/images/profile/1005749/24",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8635299,"math_prob":0.83094925,"size":54121,"snap":"2022-40-2023-06","text_gpt3_token_len":12740,"char_repetition_ratio":0.1774119,"word_repetition_ratio":0.038561687,"special_character_ratio":0.2341605,"punctuation_ratio":0.1518257,"nsfw_num_words":3,"has_unicode_error":false,"math_prob_llama3":0.9549521,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-03T14:53:34Z\",\"WARC-Record-ID\":\"<urn:uuid:1741b4b4-8116-4270-999a-03d00fbf3370>\",\"Content-Length\":\"230879\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e46d254f-8d1f-47b3-8450-02dbd39608eb>\",\"WARC-Concurrent-To\":\"<urn:uuid:d8364f2a-f5dc-4c20-85b0-c02d904a1a1d>\",\"WARC-IP-Address\":\"13.107.237.40\",\"WARC-Target-URI\":\"https://www.frontiersin.org/articles/10.3389/fphy.2021.594306/full\",\"WARC-Payload-Digest\":\"sha1:ETOKZMZZAI622JW7PRPNCBQDUCECUZFA\",\"WARC-Block-Digest\":\"sha1:ADOW7ER4LJ553LUQY5T22YRDHWCTQWO4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500056.55_warc_CC-MAIN-20230203122526-20230203152526-00347.warc.gz\"}"} |
https://blog.richmond.edu/physicsbunn/2013/03/01/the-charged-conducting-disk-again/ | [
"# The charged conducting disk (again)\n\nI recently mentioned a couple of new things I’d learned in teaching an electricity and magnetism class this semester. One is the answer to this question:\n\nCharge Q is placed on an infinitely thin conducting disk of radius R. How does it distribute itself over the disk?\n\nThe answer turns out to be that the charge distribution is the same as if you started with a uniform distribution of charge over a spherical shell, and compressed the shell down to a disk by smashing each piece of the surface straight up or down in the direction perpendicular to the disk.\n\nAlthough I know of various proofs of this, particularly one provided by my friend and colleague Ovidiu Lipan, none of them seemed to me like a satisfying explanation of why the result was true. Of course, there might not be such a thing, but when the final answer has such a simple description (compared to most problems in E&M, which have incredibly messy solutions), it seems like there ought to be a nice reason for it.\n\nLater I came across some notes by Kirk McDonald of Princeton provide a somewhat intuitive answer. I’ll summarize the idea here.\n\nStart with something every E&M student should know. Take a spherical shell of radius R and thickness dR, and fill it uniformly with charge. Then the electric field inside the shell is zero. The slick way to prove this is with Gauss’s Law, but you can prove it with more basic geometric reasoning as follows. (In fact, I believe that the argument goes all the way back to Newton, although of course he wasn’t talking about electric fields, since they hadn’t been discovered / invented yet).\n\nSay that you’re located at an arbitrary point inside the spherical shell. Draw two cones with infinitesimal solid angle going out in opposite directions. These intersect the shell giving two little “charge elements” shown in red below.",
null,
"You can convince yourself that the electric field contributions from these charge elements exactly cancel each other. The volume, and hence the amount of charge, of each is proportional to the square of its distance from you, so by the inverse square law they give equal but opposite fields.\n\nSince you can chop up the whole shell into such pairs, the total electric field vanishes.\n\nThe clever insight in McDonald’s notes is that the same argument holds even if you take the spherical shell and compress it into an ellipsoid (i.e., rescale all of the z coordinates by some constant factor, while leaving x and y the same):",
null,
"With a bit of effort, you can convince yourself that all the various distances and volumes scale in such a way that the two field contributions remain equal and opposite.\n\nNow that we know that the field inside this ellipsoid is zero, what can we conclude? First, take the limit as dR goes to zero, so we have a two-dimensional surface charge distribution. The result must be the same as the surface charge distribution on a solid, conducting ellipsoid. After all, the conducting ellipsoid has to have charge only on its surface and have zero field inside. The usual electrostatic uniqueness theorems apply here, which say that there’s only one charge distribution that leads to this result. Since we’ve found a charge distribution that does so, it must be the charge distribution.\n\nKeep smashing the ellipsoid down until it lies in a plane, and you’ve got the solution for the conducting disk.",
null,
"### Ted Bunn\n\nI am chair of the physics department at the University of Richmond. In addition to teaching a variety of undergraduate physics courses, I work on a variety of research projects in cosmology, the study of the origin, structure, and evolution of the Universe. University of Richmond undergraduates are involved in all aspects of this research. If you want to know more about my research, ask me!\n\n## 13 thoughts on “The charged conducting disk (again)”\n\n1.",
null,
"Phillip Helbig says:\n\n“(In fact, I believe that the argument goes all the way back to Newton, although of course he wasn’t talking about electric fields, since they hadn’t been discovered / invented yet).”\n\nHe was discussing gravity, of course. IIRC, he had an algebraic proof first but didn’t communicate the result because the fashion of the time valued geometric proofs more highly.\n\n2.",
null,
"Phillip Helbig says:\n\n“The clever insight in McDonald’s notes is that the same argument holds even if you take the spherical shell and compress it into an ellipsoid”\n\nDoes it hold only for ellipsoids, or are there other shapes for which it holds as well? If the former, is there a proof? If the latter, what are they?\n\n3.",
null,
"Ted Bunn says:\n\nGood question. It’s not hard to convince yourself that it applies to all ellipsoids (including triaxial ellipsoids). I can’t see any way to generalize it beyond that, but I can’t say for sure that there aren’t generalizations. It’d be interesting if there were.\n\n4.",
null,
"iman says:\n\nthere is a simpler aproach to this problem.the equipotential surfuces for a uniform line charge are (in 2D) ellipses with foci on the ends of the line charge.so instead of an ellipsoid we put a rod with uniform charge( we put the corresponding rod which meets our conditions) then we have the potential every where outside the conducting ellipsoid( because we know how to do it for the rod).finally we calculate E on the surface and that gives us the surface charge density!\n\n5.",
null,
"Ted Bunn says:\n\nThis seems like a great approach, but I can’t see how to make it work. I can see that this gives the charge distribution on any prolate spheroid (i.e., an ellipsoid with one long axis and two equal smaller axes), but I can’t see a way to use it to get the distribution on an oblate spheroid, which is what you need if you want to take the limit and get to a disk.\n\nAm I missing something obvious?\n\nAlso, by the way, I don’t know what the parenthetical comment “in 2D” means. The equipotentials of the line of charge are spheroids in 3D, not 2D, right?\n\n6.",
null,
"iman says:\n\nby 2D I meant 2D.you know its ellipse in 2D but we have symmetry which we rotate to get ellipsoid(and excuse me for my bad language,I dont know the exact world.) It is “prolate”.two of axes are equal.so we have an “2a” axis and two “2b” axises.if we take the limit as b approaches zero what do we get? a disk.so it gives the disk thing.and for your point on a general spheroid(three distinctive axes) I think we can again show that because of symmetry the additional phrase we get for the third axis will be similar to the two other.and we get the more general result as wanted.\nby the way..thank you for reading my comment!\n\n7.",
null,
"Ted Bunn says:\n\nMaybe I’m just being stupid, but I still don’t get it. If you take the limit as b approaches zero, you have one long axis and two zero-length axes, which is a line, not a disk.\n\n8.",
null,
"iman says:\n\nyepp.sorry..it was a typo…”a” should approach zero so that you get 2 axises “2b” which gives you a circle of radius “b”.right?\n\n9.",
null,
"iman says:\n\nby the way…if we let “b” approach zero…that again answers a nice question..charge density on a conducting needle..which surprisingly gives constant charge density!!I’ve read an article on the needle thing…doing some computer work as well as analytical work..and then again this approach was still better than the approach the article(by jackson or griffiths I dont recall) had.\n\n10.",
null,
"Ted Bunn says:\n\nSorry, but I’m still not getting it. The argument that you gave doesn’t allow you to take the limit a -> 0. The calculation of the potential due to a line of charge only gives you equipotential surfaces that are prolate spheroids (a>b), not oblate spheroids (a<b). To take the limit a -> 0 with fixed b, you’d need the latter.\n\nYou can find solutions to Laplace’s equation with oblate spheroidal equipotential surfaces by solving the equation in oblate spheroidal coordinates, if you like. In that coordinate system, Laplace’s equation is separable, and the solution that depends only on the “radial” coordinate is precisely the solution for the charged conducting disk. (Similarly, in prolate spheroidal coordinates, the solution that depends only on the radial coordinate is the one corresponding to a charged needle.) That’s another way of getting the solution for the disk, although I don’t think it’s intuitively satisfying in the way the smashed-spherical-shell solution is.\n\n11.",
null,
"iman says:\n\n“Sorry, but I’m still not getting it. The argument that you gave doesn’t allow you to take the limit a -> 0. The calculation of the potential due to a line of charge only gives you equipotential surfaces that are prolate spheroids (a>b), not oblate spheroids (a< b). To take the limit a-> 0 with fixed b, you’d need the latter.”\nI dont see why I cant let a->0.It works.in the calculations I never used that a>b .I cant really say that rigorously but for example in another (similar)problem I reached a radical for potential.in the radical I got a complex quantity.I guessed that because potential is real the answer would be th real part.and it came out correct.\nhere if we let a->0 we get the answer.I dont see whats the problem.you have a function of a.you take a limit .and get a valid answer.the laplacian solution might be more rigoros but I dont have a problem with this one either.\nalso:a similar problem to my “similar problem”:find potential of the disk in space? again if you take a->0 in my answer you get the correct answer.\n\n12.",
null,
"Ted Bunn says:\n\nSo you’re saying that you take the solution from the charged line (which has prolate spheroidal equipotentials) and formally alter the parameters to make the spheroids oblate? I guess that’s fine. It’s perhaps a nicer way of getting the solution than some others. I thought at first that you were talking about a more physical connection between the charged-line problem and the charged-disk problem, as opposed to a formal mathematical manipulation.\n\n(Not that there’s anything wrong with formal mathematical manipulations, of course!)\n\n13.",
null,
"iman says:\n\nwell intuitively I find this in a nice analogy with dipole.for a dipole we let q->0 and d->infinity such that qd->p.\nnow to get a disk as a equipotential in our example we should let the lenght of the charged rod aproach zero and its charge density aproach infinity such that their product approaches 2pie0VR.that V is the potential of the disk and R its Radius.(check the calculations though!)\nby the way to prove that ellipses are equipotentials for a rod there are nice ways>>1)using the field and the property of ellipse.(about angles)2)finding the field lines for discrete charges on a rod.then take a limit and show it gives hyperbola.which are confocal to equipotential surfaces thus ellipses.3)findind V in space.putting it constant.(tricky algebra but simplifies with good choice of variables)"
] | [
null,
"http://blog.richmond.edu/physicsbunn/files/2013/02/diskfigclean1.gif",
null,
"http://blog.richmond.edu/physicsbunn/files/2013/02/diskfigclean2.gif",
null,
"https://secure.gravatar.com/avatar/a96894f1f8e33fe6687a510ceac96d3d",
null,
"https://secure.gravatar.com/avatar/6983fa3a44bd3dd66027863073e1e0b1",
null,
"https://secure.gravatar.com/avatar/6983fa3a44bd3dd66027863073e1e0b1",
null,
"https://secure.gravatar.com/avatar/a96894f1f8e33fe6687a510ceac96d3d",
null,
"https://secure.gravatar.com/avatar/12b33704e3f42044b1e778087843e7fb",
null,
"https://secure.gravatar.com/avatar/a96894f1f8e33fe6687a510ceac96d3d",
null,
"https://secure.gravatar.com/avatar/12b33704e3f42044b1e778087843e7fb",
null,
"https://secure.gravatar.com/avatar/a96894f1f8e33fe6687a510ceac96d3d",
null,
"https://secure.gravatar.com/avatar/12b33704e3f42044b1e778087843e7fb",
null,
"https://secure.gravatar.com/avatar/12b33704e3f42044b1e778087843e7fb",
null,
"https://secure.gravatar.com/avatar/a96894f1f8e33fe6687a510ceac96d3d",
null,
"https://secure.gravatar.com/avatar/12b33704e3f42044b1e778087843e7fb",
null,
"https://secure.gravatar.com/avatar/a96894f1f8e33fe6687a510ceac96d3d",
null,
"https://secure.gravatar.com/avatar/12b33704e3f42044b1e778087843e7fb",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92202485,"math_prob":0.9263307,"size":10032,"snap":"2020-24-2020-29","text_gpt3_token_len":2226,"char_repetition_ratio":0.12604707,"word_repetition_ratio":0.11078886,"special_character_ratio":0.20982856,"punctuation_ratio":0.09382716,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96774507,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,4,null,4,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-31T20:27:02Z\",\"WARC-Record-ID\":\"<urn:uuid:e9db1fc3-29f6-48d2-8b4d-31d6ba21603b>\",\"Content-Length\":\"53742\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b27f671-72a9-4afa-9088-460581ccb11a>\",\"WARC-Concurrent-To\":\"<urn:uuid:7094135b-42bc-411e-b838-ac19ebe71a57>\",\"WARC-IP-Address\":\"141.166.39.64\",\"WARC-Target-URI\":\"https://blog.richmond.edu/physicsbunn/2013/03/01/the-charged-conducting-disk-again/\",\"WARC-Payload-Digest\":\"sha1:2KD5PNYHWUIY7F3OEQ4W4MQMIHPXTN7Q\",\"WARC-Block-Digest\":\"sha1:YZAIY567SNFHUOUFYQIKBAQWV2E47ZUE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347413624.48_warc_CC-MAIN-20200531182830-20200531212830-00454.warc.gz\"}"} |
https://git.musl-libc.org/cgit/musl/commit/src/math/cosh.c?id=0cbb65479147ecdaa664e88cc2a5a925f3de502f | [
"summaryrefslogtreecommitdiff log msg author committer range\npath: root/src/math/cosh.c\ndiff options\n context: 12345678910152025303540 space: includeignore mode: unifiedssdiffstat only\nauthor committer nsz 2012-03-19 23:41:19 +0100 nsz 2012-03-19 23:41:19 +0100 0cbb65479147ecdaa664e88cc2a5a925f3de502f (patch) 7b6dc53fcec6497d55746d3cc47f167a20b7aa57 /src/math/cosh.c b03255af77776703c8d48819e824d09f6f54ba86 (diff) musl-0cbb65479147ecdaa664e88cc2a5a925f3de502f.tar.gz\ncode cleanup of named constants\nzero, one, two, half are replaced by const literals The policy was to use the f suffix for float consts (1.0f), but don't use suffix for long double consts (these consts can be exactly represented as double).\nDiffstat (limited to 'src/math/cosh.c')\n-rw-r--r--src/math/cosh.c12\n1 files changed, 6 insertions, 6 deletions\n diff --git a/src/math/cosh.c b/src/math/cosh.cindex 5f38b276..a1f7dbc7 100644--- a/src/math/cosh.c+++ b/src/math/cosh.c@@ -32,7 +32,7 @@ #include \"libm.h\" -static const double one = 1.0, half = 0.5, huge = 1.0e300;+static const double huge = 1.0e300; double cosh(double x) {@@ -49,21 +49,21 @@ double cosh(double x) /* |x| in [0,0.5*ln2], return 1+expm1(|x|)^2/(2*exp(|x|)) */ if (ix < 0x3fd62e43) { t = expm1(fabs(x));- w = one+t;+ w = 1.0+t; if (ix < 0x3c800000) return w; /* cosh(tiny) = 1 */- return one + (t*t)/(w+w);+ return 1.0 + (t*t)/(w+w); } /* |x| in [0.5*ln2,22], return (exp(|x|)+1/exp(|x|))/2; */ if (ix < 0x40360000) { t = exp(fabs(x));- return half*t + half/t;+ return 0.5*t + 0.5/t; } - /* |x| in [22, log(maxdouble)] return half*exp(|x|) */+ /* |x| in [22, log(maxdouble)] return 0.5*exp(|x|) */ if (ix < 0x40862E42)- return half*exp(fabs(x));+ return 0.5*exp(fabs(x)); /* |x| in [log(maxdouble), overflowthresold] */ if (ix <= 0x408633CE)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5297989,"math_prob":0.9446595,"size":1703,"snap":"2021-43-2021-49","text_gpt3_token_len":709,"char_repetition_ratio":0.13537376,"word_repetition_ratio":0.02586207,"special_character_ratio":0.48737523,"punctuation_ratio":0.18156424,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9785566,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-05T11:15:03Z\",\"WARC-Record-ID\":\"<urn:uuid:81d3a7b2-c9ec-4944-b6c3-b4d0a5f2fbc4>\",\"Content-Length\":\"8900\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0e21dde5-8454-41f3-9663-12c7041e6d64>\",\"WARC-Concurrent-To\":\"<urn:uuid:37e036e1-d40d-459d-a900-f07a86dbec7a>\",\"WARC-IP-Address\":\"45.63.0.111\",\"WARC-Target-URI\":\"https://git.musl-libc.org/cgit/musl/commit/src/math/cosh.c?id=0cbb65479147ecdaa664e88cc2a5a925f3de502f\",\"WARC-Payload-Digest\":\"sha1:YIXIWGPSH3Z22VGIURJFEGH5BWTZA57T\",\"WARC-Block-Digest\":\"sha1:MB2HQPRM5XTEU77GK2HQWQ5L3MTEOGWZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363157.32_warc_CC-MAIN-20211205100135-20211205130135-00477.warc.gz\"}"} |
https://www.numbers.education/3854.html | [
"Is 3854 a prime number? What are the divisors of 3854?\n\nIs 3854 a prime number?\n\nNo, 3854 is not a prime number.\n\nFor example, 3854 can be divided by 2: 3854 / 2 = 1 927.\n\nTo be 3854 a prime number, it would have been required that 3854 has only two divisors, i.e., itself and 1.\n\nParity of 3854\n\n3854 is an even number, because it is evenly divisible by 2: 3854 / 2 = 1 927.\n\nIs 3854 a perfect square number?\n\nA number is a perfect square (or a square number) if its square root is an integer; that is to say, it is the product of an integer with itself. Here, the square root of 3854 is about 62.081.\n\nThus, the square root of 3854 is not an integer, and therefore 3854 is not a square number.\n\nWhat is the square number of 3854?\n\nThe square of a number (here 3854) is the result of the product of this number (3854) by itself (i.e., 3854 × 3854); the square of 3854 is sometimes called \"raising 3854 to the power 2\", or \"3854 squared\".\n\nThe square of 3854 is 14 853 316 because 3854 × 3854 = 38542 = 14 853 316.\n\nAs a consequence, 3854 is the square root of 14 853 316.\n\nNumber of digits of 3854\n\n3854 is a number with 4 digits.\n\nWhat are the multiples of 3854?\n\nThe multiples of 3854 are all integers evenly divisible by 3854, that is all numbers such that the remainder of the division by 3854 is zero. There are infinitely many multiples of 3854. The smallest multiples of 3854 are:\n\n• 0: indeed, 0 is divisible by any natural number, and it is thus a multiple of 3854 too, since 0 × 3854 = 0\n• 3854: indeed, 3854 is a multiple of itself, since 3854 is evenly divisible by 3854 (we have 3854 / 3854 = 1, so the remainder of this division is indeed zero)\n• 7 708: indeed, 7 708 = 3854 × 2\n• 11 562: indeed, 11 562 = 3854 × 3\n• 15 416: indeed, 15 416 = 3854 × 4\n• 19 270: indeed, 19 270 = 3854 × 5\n• etc.\n\nNearest numbers from 3854\n\nFind out whether some integer is a prime number"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8912405,"math_prob":0.99971056,"size":1735,"snap":"2019-43-2019-47","text_gpt3_token_len":568,"char_repetition_ratio":0.20219526,"word_repetition_ratio":0.033426184,"special_character_ratio":0.42478386,"punctuation_ratio":0.14609572,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997053,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-14T13:47:06Z\",\"WARC-Record-ID\":\"<urn:uuid:08a208c0-038c-48fe-a11f-c2b82b872d1e>\",\"Content-Length\":\"11595\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a53b2539-c8e2-42e5-a677-a8526479a81c>\",\"WARC-Concurrent-To\":\"<urn:uuid:8421df64-7a21-4b94-8743-8e399428e12b>\",\"WARC-IP-Address\":\"213.186.33.19\",\"WARC-Target-URI\":\"https://www.numbers.education/3854.html\",\"WARC-Payload-Digest\":\"sha1:XH3QLG4XQ6U2HCXPBCRIIPE5JHJK2AWD\",\"WARC-Block-Digest\":\"sha1:ACIWI7OH6XELHG2GDPJPWFMLJYSWTLUI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986653247.25_warc_CC-MAIN-20191014124230-20191014151730-00419.warc.gz\"}"} |
https://ir.library.carleton.ca/pub/22014 | [
"The nonlinear carrier wave dispersion relation at weak nonlinearity is derived for the linearly polarized (LP) modes of a step-index optical fibre that has both a nonlinear core and a nonlinear cladding. The calculation begins with the exact equations for the nonlinear fibre and the nonlinear shift (coefficient), from its linear value, of the propagation wave-number is given in closed analytical form. The nonlinear coefficient is completely general and accounts both for the nonlinearity and the structure of the guided mode. Some numerical results for the LP01 mode are presented which show that significant deviations occur from the conventionally accepted (averaged) nonlinear coefficient."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81253,"math_prob":0.94131726,"size":1225,"snap":"2019-51-2020-05","text_gpt3_token_len":300,"char_repetition_ratio":0.15315315,"word_repetition_ratio":0.035928145,"special_character_ratio":0.26612246,"punctuation_ratio":0.17272727,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9804824,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-28T19:29:59Z\",\"WARC-Record-ID\":\"<urn:uuid:563d406d-369b-405d-b746-d248466cdbb7>\",\"Content-Length\":\"22052\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0b4a6849-cc4f-4c0a-8c59-9341ea997a48>\",\"WARC-Concurrent-To\":\"<urn:uuid:a18b7eb0-bf82-446f-b8c7-15de8569c9af>\",\"WARC-IP-Address\":\"5.79.73.6\",\"WARC-Target-URI\":\"https://ir.library.carleton.ca/pub/22014\",\"WARC-Payload-Digest\":\"sha1:BF4T4JXEDZTIELUNUO5TC6ZOQHYFGW5V\",\"WARC-Block-Digest\":\"sha1:VN7H6L45KZ46W7SYPDEBIY5QS7K3OLWL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251783000.84_warc_CC-MAIN-20200128184745-20200128214745-00016.warc.gz\"}"} |
https://www.tutorialspoint.com/program-to-calculate-area-and-perimeter-of-trapezium | [
"# Program to calculate area and perimeter of Trapezium\n\nCServer Side ProgrammingProgramming\n\nTrapezium is a type of quadrilateral that has at least one pair of side parallel to each other. Area and perimeter of a trapezium can be found using the below formula,\n\nPerimeter = sum of all sides\n\nArea = ½ x (sum of the lengths of the parallel sides) x perpendicular distance between parallel sides\n\nCode logic − The code will use 5 variables in as all sides of trapezium and one for the perpendicular distance between the two parallel side. For the area variable calculation we will take a float variable that will be initialised with the value. To calculate it we will use the formula “ ½ x (sum of length of parallel sides) x perpendicular distance between parallel sides ”. For the perimeter calculation a variable will be assigned the expression, “(Sum of all sides)”.\n\nBelow code displays the program to calculate the area and perimeter of a trapezium,\n\n## Example\n\nLive Demo\n\n#include <stdio.h>\nint main() {\nint a = 2 , b = 3 , c = 5 , d = 4, h = 5;\nfloat area, perimeter;\nprintf(\"The sides of trapezium are %d , %d , %d , %d \\n\", a,b,c,d);\nprintf(\"Distance between two parallel sides is %d \\n\", h);\nperimeter = a+b+c+d;\narea = 0.5 * (a + b) * h ;\nprintf(\"Perimeter of the trapezium is %.1f\\n\", perimeter);\nprintf(\"Area of the trapezium is: %.3f\", area);\nreturn 0;\n}\n\n## Output\n\nThe sides of trapezium are 2 , 3 , 5 , 4\nDistance between two parallel sides is 5\nPerimeter of the trapezium is 14.0\nArea of the trapezium is: 12.500"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7872838,"math_prob":0.9993082,"size":1397,"snap":"2021-31-2021-39","text_gpt3_token_len":385,"char_repetition_ratio":0.1787509,"word_repetition_ratio":0.046332046,"special_character_ratio":0.2806013,"punctuation_ratio":0.1462585,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99948364,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T05:34:59Z\",\"WARC-Record-ID\":\"<urn:uuid:fcf31d8f-f7a8-4e68-9830-672a791d3896>\",\"Content-Length\":\"32081\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e206a1a6-acc0-418e-8eb4-c7e093735c86>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa449323-8660-43b7-9405-b6ffa51aa058>\",\"WARC-IP-Address\":\"72.21.91.42\",\"WARC-Target-URI\":\"https://www.tutorialspoint.com/program-to-calculate-area-and-perimeter-of-trapezium\",\"WARC-Payload-Digest\":\"sha1:V2VEXEHMMT4UAPSMIU5SFNC77H6GSUVT\",\"WARC-Block-Digest\":\"sha1:HNHM6HYUKWHM6DF4J36JVOZL336IUROZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056297.61_warc_CC-MAIN-20210918032926-20210918062926-00505.warc.gz\"}"} |
http://kempacoustics.com/thesis/node99.html | [
"Back to Kemp Acoustics Home",
null,
"",
null,
"",
null,
"",
null,
"Next: Inductance method Up: Projection at a discontinuity Previous: Volume velocity Contents\n\n# Projection matrix in cylindrical geometry\n\nIn polar coordinates equation (B.6) becomes",
null,
"(B.15)\n\nSubstituting in equation (2.48) for",
null,
"and performing the integration with respect to",
null,
"gives:",
null,
"(B.16)\n\nThis is in the form of the standard integral from equation (A.1) of appendix A. Substituting in the variables:",
null,
",",
null,
",",
null,
"and",
null,
"gives",
null,
"",
null,
"When the evaluation is carried out the contribution when",
null,
"is zero giving:",
null,
"",
null,
"(B.17)\n\nNow noticing from equation (A.2) that",
null,
"and using the fact that",
null,
"is a zero of",
null,
"the second term vanishes:",
null,
"(B.18)\n\nExpressing this in terms of the ratio of the radii,",
null,
"we get",
null,
"(B.19)\n\nhence we have proved equation (2.85).\n\nThe integration used to obtain the analytical expression for",
null,
"is identical to that for",
null,
"except that the labels are interchanged for surface 1 and surface 2. Interchanging",
null,
"and",
null,
"means that",
null,
"will be replaced with",
null,
"giving",
null,
".\n\nBack to Kemp Acoustics Home",
null,
"",
null,
"",
null,
"",
null,
"Next: Inductance method Up: Projection at a discontinuity Previous: Volume velocity Contents\nJonathan Kemp 2003-03-24"
] | [
null,
"http://kempacoustics.com/thesis/img780.png",
null,
"http://kempacoustics.com/thesis/img781.png",
null,
"http://kempacoustics.com/thesis/img345.png",
null,
"http://kempacoustics.com/thesis/img782.png",
null,
"http://kempacoustics.com/thesis/img783.png",
null,
"http://kempacoustics.com/thesis/img784.png",
null,
"http://kempacoustics.com/thesis/img785.png",
null,
"http://kempacoustics.com/thesis/img786.png",
null,
"http://kempacoustics.com/thesis/img787.png",
null,
"http://kempacoustics.com/thesis/img788.png",
null,
"http://kempacoustics.com/thesis/img159.png",
null,
"http://kempacoustics.com/thesis/img787.png",
null,
"http://kempacoustics.com/thesis/img789.png",
null,
"http://kempacoustics.com/thesis/img790.png",
null,
"http://kempacoustics.com/thesis/img166.png",
null,
"http://kempacoustics.com/thesis/img26.png",
null,
"http://kempacoustics.com/thesis/img791.png",
null,
"http://kempacoustics.com/thesis/img241.png",
null,
"http://kempacoustics.com/thesis/img792.png",
null,
"http://kempacoustics.com/thesis/img793.png",
null,
"http://kempacoustics.com/thesis/img766.png",
null,
"http://kempacoustics.com/thesis/img588.png",
null,
"http://kempacoustics.com/thesis/img589.png",
null,
"http://kempacoustics.com/thesis/img241.png",
null,
"http://kempacoustics.com/thesis/img794.png",
null,
"http://kempacoustics.com/thesis/img441.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81909525,"math_prob":0.93250203,"size":1069,"snap":"2020-10-2020-16","text_gpt3_token_len":289,"char_repetition_ratio":0.11455399,"word_repetition_ratio":0.15116279,"special_character_ratio":0.24976614,"punctuation_ratio":0.13170731,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98935795,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,3,null,3,null,6,null,3,null,3,null,3,null,3,null,3,null,6,null,3,null,5,null,6,null,3,null,3,null,null,null,null,null,3,null,null,null,3,null,3,null,5,null,5,null,5,null,null,null,3,null,6,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-27T08:24:34Z\",\"WARC-Record-ID\":\"<urn:uuid:b4c33c53-3e16-4353-9463-3ce3b81f3c27>\",\"Content-Length\":\"11624\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:398abef7-415e-4f31-a1c0-c8bcd785d2c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:c4e22b3c-1f9f-4bf0-b3f6-083b755114d2>\",\"WARC-IP-Address\":\"77.68.64.13\",\"WARC-Target-URI\":\"http://kempacoustics.com/thesis/node99.html\",\"WARC-Payload-Digest\":\"sha1:UVPEBNU5AGYOZC2HIUPQWSNMONU6OVY5\",\"WARC-Block-Digest\":\"sha1:7OG6235BDDBGFJCCG7HWFTYBNZFY7DUC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146665.7_warc_CC-MAIN-20200227063824-20200227093824-00462.warc.gz\"}"} |
https://q4interview.com/accenture/written-round-questions-with-answer/1 | [
"[Updated] Goldman Sachs Aptitude Test Questions and Answers\nPractice List of TCS Digital Coding Questions !!!\nTake 50+ FREE!! Online Data Interpretation Mock test to crack any Exams.\n\n# Accenture Aptitude Questions and Answers for Freshers\n\n247.98K\n\n## Total Qs: 146+\n\nNA\nSHSTTON\n742\nSolv. Corr.\n494\nSolv. In. Corr.\n1236\nAttempted\n1 M:7 S\nAvg. Time\n\n1 / 146\n\nChoose the correct option.\n\nA' and 'B' complete a work togather in 8 days.If 'A' alone can do it in 12 days.Then how many day 'B' will take to complete the work?\n\nA25 days\n\nB24 days\n\nC20 days\n\nDNone of these\n\n| | Important Formulas | Topic: |",
null,
"Asked In 5 |\n\nExplanation:\n\nA & B one day work = 1/8\nA alone one day work = 1/12\nB alone one day work = (1/8 - 1/12) = ( 3/24 - 2/24)\n=> B one day work = 1/24\nso B can complete the work in 24 days.\n\nShortCut By :: 19501A0549\n\n20\n\nNA\nSHSTTON\n832\nSolv. Corr.\n791\nSolv. In. Corr.\n1623\nAttempted\n1 M:0 S\nAvg. Time\n\n2 / 146\n\nChoose the correct option.\n\nIf A alone can do a piece of work in 8 days and B alone can do the same work in 12 days. How many days A and B required to finish the same work if they work togather?\n\nA24/5 days\n\nB24 days\n\nC5day days\n\nD5/24 days\n\nENone of these\n\n| | Important Formulas | Topic: |",
null,
"Asked In 3 |\n\nExplanation:\n\nA alone one day work = 1/8\nB alone one day work = 1/12\nBoth A and B one day work = (1/8 + 1/12) = (3/24 + 2/24)\n= 5/24\nso A and B together finish the work in 24/5 day\nor 4 4/5 days.\n\nNA\nSHSTTON\n378\nSolv. Corr.\n465\nSolv. In. Corr.\n843\nAttempted\n0 M:46 S\nAvg. Time\n\n3 / 146\n\nChoose the correct option.\n\nWhen a number is divided by 13, the remainder is 11. When the same number is divided by 17, then remainder is 9. What is the number ?\n\nA339\n\nB349\n\nC369\n\n| | Important Formulas | Topic: |",
null,
"Asked In 1 |\n\nExplanation:\n\nx = 13p + 11 and x = 17q + 9\n\nShortCut By :: Sairam\n\n349\n\nNA\nSHSTTON\n565\nSolv. Corr.\n696\nSolv. In. Corr.\n1261\nAttempted\n0 M:54 S\nAvg. Time\n\n4 / 146\n\nChoose the correct option.\n\nA can finish a piece work in 18 days and B can do the same work in half the time taken by A. So if they working together, what part of the same work can finished in a day?\n\nA1/7\n\nB1/6\n\nC6\n\nD5/6\n\n| | Important Formulas | Topic: |",
null,
"Asked In |\n\nExplanation:\n\nFirst find the 1 day work of both (A & B)\nA's 1 day's work = 1/18\nand\nB's 1 day's work = 1/9 (B can do work in half time)\n(A + B)'s 1 day's work = (1/18+1/9)\n= (1+2)/18 = 3/18 = 1/6\nso A & B together can do 1/6 of work in 1 day.\n\nNA\nSHSTTON\n90\nSolv. Corr.\n189\nSolv. In. Corr.\n279\nAttempted\n0 M:18 S\nAvg. Time\n\n5 / 146\n\nChoose the correct option.\n\nThe cost price of an article ls 80% of its marked price for sale. How much percent does the trades-man gain after allowing a discount of 12%?\n\nA20\n\nB10\n\nC12\n\nD8\n\n| | Important Formulas | Topic: |",
null,
"Asked In Accenture |\n\nExplanation:\n\nC.P. of the article = Rs. 100\n\nSo Marked price = (100*100)/80 = Rs. 125\nSP after the discount = Rs.(125*88)/100 = Rs. 110\n\ntherefore Gain percent = 10\n\nTags: Accenture\n\nNA\nSHSTTON\n108\nSolv. Corr.\n95\nSolv. In. Corr.\n203\nAttempted\n1 M:30 S\nAvg. Time\n\n6 / 146\n\nChoose the correct option.\n\nIf 6 is subtracted from the present age of Ritu and the remainder is divided by 6, then the present age of Sheema is obtained. If Sheema is 4 years younger to raju whose age is 12 years, then find the age of Ritu\n\nA50\n\nB52\n\nC54\n\nD58\n\n| | Important Formulas | Topic: |",
null,
"Asked In |\n\nExplanation:\n\nLets assume Ritu present age = x years\nso sheema age = (x-6)/6 years\n\nAs per question:\nsheema age [(x-6)/6 + 4] = 12\n(x-6)/6 = 12 - 4\n=> (x-6)/6 = 8\n=> x - 6 = 48 => x = 54\n\nSo Ritu age = 54 years.\n\nNA\nSHSTTON\n83\nSolv. Corr.\n108\nSolv. In. Corr.\n191\nAttempted\n3 M:38 S\nAvg. Time\n\n7 / 146\n\nChoose the correct option.\n\nA car averages 55 mph for the first 4 hours of a trip and averages 70 mph for each additional hour. The average speed for the entire trip was 60 mph. How many hours long is the trip?\n\nA6 hrs.\n\nB14 hrs.\n\nC11 hrs.\n\nD12 hrs.\n\n| | Important Formulas | Topic: |",
null,
"Asked In Accenture |\n\nExplanation:\n\nLets assume additional hours = x hrs.\nSo total No. hours in journey = (4+x)\n[(55*4)+(70*x)]/(4+x)=60 =>\n=>x=2\nTherefore, Total No. of hrs. in Journey = (x+4) = 6 hrs.\n\nTags: Accenture\n\nNA\nSHSTTON\n76\nSolv. Corr.\n129\nSolv. In. Corr.\n205\nAttempted\n0 M:44 S\nAvg. Time\n\n8 / 146\n\nChoose the correct option.\n\n2 boxes, 32 black n 31 red balls the probability of getting black ball is maximum. The maximum probability is\n\nA3/4\n\nB1/4\n\nC2/3\n\nDNone of these\n\n| | Important Formulas | Topic: |",
null,
"Asked In Accenture |\n\nExplanation:\n\nHere is no explanation for this answer\n\nTags: Accenture\n\nNA\nSHSTTON\n42\nSolv. Corr.\n180\nSolv. In. Corr.\n222\nAttempted\n1 M:3 S\nAvg. Time\n\n9 / 146\n\nChoose the correct option.\n\nA ball dropped from H height and moves 80% of height each time. Total distance covered is\n\nA4H\n\nB5H\n\nC7H\n\nD9H\n\n| | Important Formulas | Topic: |",
null,
"Asked In Accenture |\n\nExplanation:\n\nFirst time distance= H\nSecond time = 80H/100 = 4H/5\nsimilarly third time 80% of 4H/5 = H(4^2)/(5^2)\nand so on..\nThis will lead to infinite terms of geometric progression\ni.e H+2*4H/5+2*16H/25..\nSo, Sum = H+ 2*4H/(5(1-4/5)) = 9H\n\nTags: Accenture\n\nNA\nSHSTTON\n47\nSolv. Corr.\n91\nSolv. In. Corr.\n138\nAttempted\n5 M:34 S\nAvg. Time\n\n10 / 146\n\nChoose the correct option.\n\nAt 10:00 am 2 trains started travelling towards each other from station 287 miles apart they passed each other at 1:30 pm the same dayy .if average speed of the faster train exceeded by 6 miles /hr what is speed of faster train in miles/hr:\n\nA34 miles/hrs.\n\nB38 miles/hrs.\n\nC44 miles/hrs.\n\nD40 miles/hrs.\n\n| | Important Formulas | Topic: |",
null,
"Asked In Accenture |\n\nExplanation:\n\nLets assume the speed of slower train = x miles/hrs.\nSo, speed of faster train is = (x+6) miles/hrs.\nGiven, passed each other at 1:30 PM i.e after 3 1/2 hrs.\nBoth train travelling towards each other so total relative speed = x+(x+6) = (2x+6)\nSo, 287/(2x+6) = 7/2 => 574 = 14x + 42\n=> 14x = 542 => x = ~38\n\nSo spee of faster train = (x+6) miles/hrs. = 44 miles/hrs."
] | [
null,
"https://q4interview.com/images/tag.png",
null,
"https://q4interview.com/images/tag.png",
null,
"https://q4interview.com/images/tag.png",
null,
"https://q4interview.com/images/tag.png",
null,
"https://q4interview.com/images/tag.png",
null,
"https://q4interview.com/images/tag.png",
null,
"https://q4interview.com/images/tag.png",
null,
"https://q4interview.com/images/tag.png",
null,
"https://q4interview.com/images/tag.png",
null,
"https://q4interview.com/images/tag.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84233505,"math_prob":0.95238906,"size":4900,"snap":"2023-14-2023-23","text_gpt3_token_len":1609,"char_repetition_ratio":0.12683824,"word_repetition_ratio":0.030239834,"special_character_ratio":0.34959185,"punctuation_ratio":0.09641873,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99409974,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-08T14:18:11Z\",\"WARC-Record-ID\":\"<urn:uuid:8c514682-e827-4f60-8b2f-270e963197d3>\",\"Content-Length\":\"205429\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d6d9acd-49f0-4763-83ed-0c38ede2126c>\",\"WARC-Concurrent-To\":\"<urn:uuid:d55b3ddc-8f3a-4a0d-94d8-69f26d1aaf99>\",\"WARC-IP-Address\":\"50.16.223.119\",\"WARC-Target-URI\":\"https://q4interview.com/accenture/written-round-questions-with-answer/1\",\"WARC-Payload-Digest\":\"sha1:2LDADNVACBEOVGKKUNUJV5X747L7HQQH\",\"WARC-Block-Digest\":\"sha1:RLQT5EQXT5CV5P5LM4SB5DMTNABUKO24\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655027.51_warc_CC-MAIN-20230608135911-20230608165911-00051.warc.gz\"}"} |
https://studylib.net/doc/10487704/problem-set-vii-%E2%80%9Cthrough-intuition%E2%80%9D-f-t | [
"# PROBLEM SET VII “THROUGH INTUITION” f t",
null,
"```PROBLEM SET VII\n“THROUGH INTUITION”\nDUE FRIDAY, OCTOBER \nExercise . For each of the following functions f : [0, +∞) . R, find the set\n∫oft real numbers t ≥ 0 such that f is Riemann-integrable on [0, t], and compute\nf:\n0\n{\nN\n∑\n√\nxk\n1/(x − 1) if x ̸= 1,\nf (x) = x;\nf (x) =\nf (x) =\n.\nk!\n0\nif x = 1;\nk=0\n(In the last example, N is any natural number.)\nDefinition. We say a subset E ⊂ R is bounded if it is a subset E ⊂ [a, b] of\na closed interval. In this case, define the indicator function of E as the function\nχ E : [a, b] . R given by the formula\n{\n1 if x ∈ E;\nχ E (x) :=\n0 if x ∈\n/ E.\nWe say that E is Jordan-measurable if the indicator function χ E is Riemann integrable on [a, b]. In this case, the Jordan measure of E is the value\n∫ b\nμ(E) :=\nχ E.\na\nExercise . Show that a finite set E ⊂ R is Jordan-measurable, and that its Jordan measure is 0. Is it true that any bounded countable set is Jordan-measurable?\nExercise . Show that: () the union of two Jordan-measurable sets is Jordanmeasurable, () the intersection of two Jordan-measurable sets is Jordan-measurable,\nand () the complement of a Jordan-measurable set is Jordan-measurable.\nExercise . Show that if E ⊂ R is a Jordan-measurable set, and if f : R .\nmap given by f (x) = ax + b for real numbers a and b, then the set\nf (E) := {x ∈ R | there exists u ∈ E such that x = f (u)}\nis also Jordan-measurable, and\nμ( f (E)) = aμ(E).\n\nR is a\n\nDUE FRIDAY, OCTOBER \nExercise⋆ . Suppose f : [c, d] . R continuous and g : [a, b] . [c, d] Riemannintegrable. Prove that the composite f ◦ g : [a, b] . R is also Riemann-integrable.\nProvide a counterexample that demonstrates that if we had assumed only that f\nand g are Riemann-integrable, this statement would be false.\nDefinition. Suppose E ⊂ [a, b]. Now for any function f : [a, b] . R, we say that\nf is Riemann-integrable on E if the product χ E · f is Riemann integrable on [a, b],\nand we write\n∫\n∫\nb\nf :=\nE\na\nχ E · f.\nExercise⋆ . Show that if f, g : [a, b] . R are Riemann-integrable functions,\nthen the product f · g : [a, b] . R is also Riemann-integrable. Deduce that if\nE ⊂ [a, b] is Jordan measurable, and if f : [a, b] . R is Riemann integrable on\n[a, b] then it is also Riemann-integable on E. [For the first part, observe that 4f·g =\n(f + g)2 − (f − g)2 .]\nExercise . Give an example to show that the converse above is false. at is, find\ntwo bounded functions f, g : [a, b] . R, one of which is not Riemann integrable,\nsuch that the product f · g : [a, b] . R is Riemann-integrable.\nExercise . Suppose E a Jordan-measurable set, and suppose f a Riemann-integrable\nfunction on E. Show that if A ⊂ E is a Jordan-measurable set such that μ(E − A)\nis zero, then f is Riemann-integrable on A, and\n∫\n∫\nf = f.\nA\nE\n```"
] | [
null,
"https://s2.studylib.net/store/data/010487704_1-85fa0f2b325385060bfaa82f0c015eb0-768x994.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8666546,"math_prob":0.994013,"size":2709,"snap":"2022-05-2022-21","text_gpt3_token_len":913,"char_repetition_ratio":0.19778189,"word_repetition_ratio":0.07044674,"special_character_ratio":0.31672204,"punctuation_ratio":0.17573872,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99979454,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-22T11:50:08Z\",\"WARC-Record-ID\":\"<urn:uuid:f6d4f9e9-6beb-4b4e-9bb2-3886e923616f>\",\"Content-Length\":\"48427\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2fe2159f-b831-4f25-b9c1-75faf460d3f0>\",\"WARC-Concurrent-To\":\"<urn:uuid:aafef52b-76f4-416d-9eb5-9efc498766b6>\",\"WARC-IP-Address\":\"172.67.175.240\",\"WARC-Target-URI\":\"https://studylib.net/doc/10487704/problem-set-vii-%E2%80%9Cthrough-intuition%E2%80%9D-f-t\",\"WARC-Payload-Digest\":\"sha1:XV3YM2YAZ3AN5RN7K7Q5DKVFPIICNZSY\",\"WARC-Block-Digest\":\"sha1:UA5WHMGUQKCBXBLJHNTUKEZPJBOQ5QO2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662545326.51_warc_CC-MAIN-20220522094818-20220522124818-00365.warc.gz\"}"} |