Dataset columns: URL (string, length 15 to 1.68k), text_list (sequence, length 1 to 199), image_list (sequence, length 1 to 199), metadata (string, length 1.19k to 3.08k).
https://cran.uni-muenster.de:443/web/packages/dsmisc/readme/README.html
[ "# Data Science Box of Pandora Miscellaneous\n\nStatus", null, "", null, "", null, "", null, "", null, "", null, "lines of R code: 83, lines of test code: 27\n\nVersion\n\n0.3.3 ( 2020-09-11 18:11:12 )\n\nDescription\n\nTool collection for common and not so common data science use cases. This includes custom made algorithms for data management as well as value calculations that are hard to find elsewhere because of their specificity but would be a waste to get lost nonetheless. Currently available functionality: find sub-graphs in an edge list data.frame, find mode or modes in a vector of values, extract (a) specific regular expression group(s), generate ISO time stamps that play well with file names, or generate URL parameter lists by expanding value combinations.\n\nGPL (>= 2)\nPeter Meissner [aut, cre]\n\nCitation\n\n``citation(\"dsmisc\")``\n``Meissner P (2020). dsmisc: Data Science Box of Pandora Miscellaneous. R package version 0.3.3.``\n\nBibTex for citing\n\n``toBibtex(citation(\"dsmisc\"))``\n``````@Manual{,\ntitle = {dsmisc: Data Science Box of Pandora Miscellaneous},\nauthor = {Peter Meissner},\nyear = {2020},\nnote = {R package version 0.3.3},\n}``````\n\nInstallation\n\nStable version from CRAN:\n\n``install.packages(\"dsmisc\")``\n\n## Usage\n\nstarting up …\n\n``library(\"dsmisc\")``\n\n### Graph computations\n\nfind isolated graphs / networks\n\nA graph described by an edgelist with two distinct subgraphs.\n\n``````edges_df <-\ndata.frame(\nnode_1 = c(1:5, 10:8),\nnode_2 = c(2:6, 7,7,7)\n)\n\nedges_df``````\n``````## node_1 node_2\n## 1 1 2\n## 2 2 3\n## 3 3 4\n## 4 4 5\n## 5 5 6\n## 6 10 7\n## 7 9 7\n## 8 8 7``````\n\nFinding subgraphs and grouping them together via subgraph id.\n\n``````edges_df\\$subgraph_id <-\ngraphs_find_subgraphs(\nid_1 = edges_df\\$node_1,\nid_2 = edges_df\\$node_2,\nverbose = 0\n)\n\nedges_df``````\n``````## node_1 node_2 subgraph_id\n## 1 1 2 1\n## 2 2 3 1\n## 3 3 4 1\n## 4 4 5 1\n## 5 5 6 1\n## 6 10 7 2\n## 7 9 7 2\n## 8 8 7 2``````\n\nspeedtest for large graph\n\n``````edges_df <-\ndata.frame(\nnode_1 = sample(x = 1:10000, size = 10^5, replace = TRUE),\nnode_2 = sample(x = 1:10000, size = 10^5, replace = TRUE)\n)\n\nsystem.time({\nedges_df\\$subgraph_id <-\ngraphs_find_subgraphs(\nid_1 = edges_df\\$node_1,\nid_2 = edges_df\\$node_2,\nverbose = 0\n)\n})``````\n``````## user system elapsed\n## 2.96 0.01 3.02``````\n\n### Stats Functions\n\nCalculating the modus from a collection of values\n\n``````# one modus only\nstats_mode(1:10)``````\n``````## Warning in stats_mode(1:10): modus : multimodal but only one value returned (use warn=FALSE to turn this off)\n\n## 1``````\n``````# all values if multiple modi are found\nstats_mode_multi(1:10)``````\n``## 1 2 3 4 5 6 7 8 9 10``\n\n### String Functions\n\n{stringr} / {stringi} packages are cool … but can they do this (actually they can, of cause but with a little more work and cognitive load needed, e.g.: `stringr::str_match(strings, \"([\\\\w])_(?:\\\\d+)\")[, 2]`)?\n\nExtract specific RegEx groups\n\n``````strings <- paste(LETTERS, seq_along(LETTERS), sep = \"_\")\n\n# whole pattern\nstr_group_extract(strings, \"([\\\\w])_(\\\\d+)\")``````\n``````## \"A_1\" \"B_2\" \"C_3\" \"D_4\" \"E_5\" \"F_6\" \"G_7\" \"H_8\" \"I_9\" \"J_10\" \"K_11\" \"L_12\" \"M_13\" \"N_14\" \"O_15\"\n## \"P_16\" \"Q_17\" \"R_18\" \"S_19\" \"T_20\" \"U_21\" \"V_22\" \"W_23\" \"X_24\" \"Y_25\" \"Z_26\"``````\n``````# first group\nstr_group_extract(strings, \"([\\\\w])_(\\\\d+)\", 1)``````\n``## \"A\" \"B\" \"C\" \"D\" \"E\" 
\"F\" \"G\" \"H\" \"I\" \"J\" \"K\" \"L\" \"M\" \"N\" \"O\" \"P\" \"Q\" \"R\" \"S\" \"T\" \"U\" \"V\" \"W\" \"X\" \"Y\" \"Z\"``\n``````# second group\nstr_group_extract(strings, \"([\\\\w])_(\\\\d+)\", 2)``````\n``````## \"1\" \"2\" \"3\" \"4\" \"5\" \"6\" \"7\" \"8\" \"9\" \"10\" \"11\" \"12\" \"13\" \"14\" \"15\" \"16\" \"17\" \"18\" \"19\" \"20\" \"21\"\n## \"22\" \"23\" \"24\" \"25\" \"26\"``````\n\n### Data.Frame Manipulation\n\nTransform factor columns in a data.frame to character vectors\n\n``````df <-\ndata.frame(\na = 1:2,\nb = factor(c(\"a\", \"b\")),\nc = as.character(letters[3:4]),\nstringsAsFactors = FALSE\n)\nvapply(df, class, \"\")``````\n``````## a b c\n## \"integer\" \"factor\" \"character\"``````\n``````df_df <- df_defactorize(df)\nvapply(df_df, class, \"\")``````\n``````## a b c\n## \"integer\" \"character\" \"character\"``````\n\n### Time Manipulation\n\n``````# current time\ntime_stamp()``````\n``## \"2020-09-11_20_11_33\"``\n``````time_stamp(\nts = as.POSIXct(c(\"2010-01-27 10:23:45\", \"2010-01-27 10:23:45\")),\nsep = c(\"\",\"_\",\"\")\n)``````\n``## \"20100127_102345\" \"20100127_102345\"``\n``````time_stamp(\nts = as.POSIXct(c(\"2010-01-27 10:23:45\", \"2010-01-27 10:23:45\")),\nsep = c(\"\")\n)``````\n``## \"20100127102345\" \"20100127102345\"``\n\n### Web Scraping\n\nprepare multiple URLs via query parameter grid expansion\n\n``web_gen_param_list_expand(id=1:3, lang=c(\"en\", \"de\"))``\n``## \"id=1&lang=en\" \"id=2&lang=en\" \"id=3&lang=en\" \"id=1&lang=de\" \"id=2&lang=de\" \"id=3&lang=de\"``" ]
[ null, "https://api.travis-ci.org/petermeissner/dsmisc.svg", null, "https://ci.appveyor.com/api/projects/status/github/petermeissner/dsmisc", null, "https://codecov.io/gh/petermeissner/dsmisc/branch/master/graph/badge.svg", null, "http://www.r-pkg.org/badges/version/dsmisc", null, "http://cranlogs.r-pkg.org/badges/grand-total/dsmisc", null, "http://cranlogs.r-pkg.org/badges/dsmisc", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6222588,"math_prob":0.89350384,"size":4132,"snap":"2023-14-2023-23","text_gpt3_token_len":1604,"char_repetition_ratio":0.090358526,"word_repetition_ratio":0.075987846,"special_character_ratio":0.45232332,"punctuation_ratio":0.1433869,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9900831,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,6,null,6,null,6,null,6,null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-10T17:19:54Z\",\"WARC-Record-ID\":\"<urn:uuid:245ec2ea-5104-4db8-aff3-74a1aa5fcc5f>\",\"Content-Length\":\"23959\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c9b2f191-4b30-4ab3-a4a5-a8ab2944f1d2>\",\"WARC-Concurrent-To\":\"<urn:uuid:273a0103-b95d-4029-a379-83ed7d9d170f>\",\"WARC-IP-Address\":\"128.176.130.197\",\"WARC-Target-URI\":\"https://cran.uni-muenster.de:443/web/packages/dsmisc/readme/README.html\",\"WARC-Payload-Digest\":\"sha1:3NVHQOMBELX6KGHCFQW22LLK47JLL4OE\",\"WARC-Block-Digest\":\"sha1:GTBBKXNZL5POO4SXZIXBACIYEHZS7VVE\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224657735.85_warc_CC-MAIN-20230610164417-20230610194417-00508.warc.gz\"}"}
https://leanprover-community.github.io/mathlib_docs/ring_theory/witt_vector/witt_polynomial.html
[ "# mathlibdocumentation\n\nring_theory.witt_vector.witt_polynomial\n\n# Witt polynomials #\n\nTo endow witt_vector p R with a ring structure, we need to study the so-called Witt polynomials.\n\nFix a base value p : ℕ. The p-adic Witt polynomials are an infinite family of polynomials indexed by a natural number n, taking values in an arbitrary ring R. The variables of these polynomials are represented by natural numbers. The variable set of the nth Witt polynomial contains at most n+1 elements {0, ..., n}, with exactly these variables when R has characteristic 0.\n\nThese polynomials are used to define the addition and multiplication operators on the type of Witt vectors. (While this type itself is not complicated, the ring operations are what make it interesting.)\n\nWhen the base p is invertible in R, the p-adic Witt polynomials form a basis for mv_polynomial ℕ R, equivalent to the standard basis.\n\n## Main declarations #\n\n• witt_polynomial p R n: the n-th Witt polynomial, viewed as polynomial over the ring R\n• X_in_terms_of_W p R n: if p is invertible, the polynomial X n is contained in the subalgebra generated by the Witt polynomials. X_in_terms_of_W p R n is the explicit polynomial, which upon being bound to the Witt polynomials yields X n.\n• bind₁_witt_polynomial_X_in_terms_of_W: the proof of the claim that bind₁ (X_in_terms_of_W p R) (W_ R n) = X n\n• bind₁_X_in_terms_of_W_witt_polynomial: the converse of the above statement\n\n## Notation #\n\nIn this file we use the following notation\n\n• p is a natural number, typically assumed to be prime.\n• R and S are commutative rings\n• W n (and W_ R n when the ring needs to be explicit) denotes the nth Witt polynomial" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7497128,"math_prob":0.9728251,"size":2714,"snap":"2021-43-2021-49","text_gpt3_token_len":769,"char_repetition_ratio":0.21254613,"word_repetition_ratio":0.035196688,"special_character_ratio":0.25829035,"punctuation_ratio":0.108949415,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989238,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-18T01:36:05Z\",\"WARC-Record-ID\":\"<urn:uuid:f2c8d861-9c4a-48d2-a8c6-c23d20bff725>\",\"Content-Length\":\"433859\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:27cf96df-06df-4d4b-880c-cf92d078d541>\",\"WARC-Concurrent-To\":\"<urn:uuid:b5359218-ae6b-43de-ac9f-bedabed2ceef>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://leanprover-community.github.io/mathlib_docs/ring_theory/witt_vector/witt_polynomial.html\",\"WARC-Payload-Digest\":\"sha1:6NDI2LFBKXPL3QSYV7NBNHPVLOBTVH7O\",\"WARC-Block-Digest\":\"sha1:7TALRKXFHKODWGIM775XPB6DMMHCIM37\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585186.33_warc_CC-MAIN-20211018000838-20211018030838-00336.warc.gz\"}"}
https://mathoverflow.net/questions/331350/interior-smooth-regularity
[ "# Interior smooth regularity\n\nI recently read the PDE book of L. Evans, and in its chapter 6 some kinds of regularities of second-order elliptic equations were discussed. My question is about its proof of interior smooth regularity (thm 3 in Ch. 6.3.1).\n\nThe theorem asserts that if a second-order elliptic PDE $$Lu=f$$ has smooth coefficients and admits an $$H^1$$ weak solution $$u$$ on a bounded domain $$U\\subseteq \\mathbb R^n,$$ then $$u$$ is smooth.\n\nIn the statement of the theorem, no condition is assumed about $$\\partial U,$$ but in the book, the author would like to apply the general Sobolev inequality to assure $$u\\in C^\\infty(U)$$ if $$u\\in H^m_{\\text{loc}}(U)$$ for all $$m\\in\\mathbb N,$$ which can be obtained by the theorem 6 in its chapter 5.6.3, but with $$\\partial U$$ is $$C^1.$$\n\nMy problem is whether the regular assumption on the boundary is necessary. However, without the condition $$\\partial U$$ being $$C^1,$$ I only know some Sobolev inequality about $$W^{k,p}_0(U).$$ I am not sure if these are sufficient to derive the desired conclusion (since I don't know the behavior of $$u$$ near the boundary a priori). Perhaps there are alternatives to this problem, or the conclusion is just wrong without the boundary assumption?\n\nThanks in advance!\n\n## 2 Answers\n\nI assume that you require $$f\\in C^\\infty(U)$$. You do not need regularity of the boundary of $$U\\subset \\mathbb{R}^N$$. The condition $$u\\in H^m_{loc}(U)$$ is equivalent with $$\\widetilde{\\phi u}\\in H^m(\\mathbb{R}^N)$$, for any $$\\phi\\in C^\\infty_0(U)$$. Here $$\\widetilde{g}$$ denotes the extension of the function $$g:U\\to\\mathbb{R}$$ by zero outside $$U$$. Apply the Sobolev embedding theorems to $$\\widetilde{\\phi u}$$.\n\nI strongly recommend opening Brezis' book on functional anaylsis and pde's.\n\n• Thanks! You are right that I forgot to say $f$ is smooth. This is the answer I am looking for! Is this a common technique as dealing with such a problem? – tommy xu3 May 12 at 11:50\n• Yes. For another approach in the general case check Sec 10.3.2. of these notes www3.nd.edu/~lnicolae/Lectures.pdf – Liviu Nicolaescu May 12 at 12:12\n• This helps me a lot, and thanks for your suggestions about good references! – tommy xu3 May 12 at 12:24\n\nIf I understand your question correctly, you speak about interior regularity. Let me quote a classical result for linear elliptic equations with $$C^\\infty$$ coefficients, even true for pseudo-differential equations.\n\nLet $$P$$ be an elliptic differential operator with $$C^\\infty$$ coefficients in an open subset $$\\Omega$$ of $$\\mathbb R^N$$. Then for $$u$$ a distribution on $$\\Omega$$, $$Pu\\in C^\\infty(\\Omega)\\Longrightarrow u\\in C^\\infty(\\Omega).$$ You may refine that result in the Sobolev scale with $$Pu\\in H^s_{loc}(\\Omega)\\Longrightarrow u\\in H^{s+m}_{loc}(\\Omega),$$ where $$m$$ is the order of $$P$$. If you like the wave-front-set, you have $$WF(Pu)\\subset WF(u)\\subset WF(Pu)\\cup \\text{char}P,$$ and in the elliptic case $$\\text{char}P=\\emptyset$$. You can also formulate a result on the $$H^s$$ wave-front-set.\n\n• Thank you! So this result doesn't require any assumption on the boundary of $\\Omega?$ – tommy xu3 May 12 at 11:04\n• No, you do not need any assumption on the boundary for this result. – Bazin May 12 at 16:43\n• Thank you. I will also try to see more general theorems like what you mentioned! – tommy xu3 May 17 at 4:53" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94867235,"math_prob":0.9995548,"size":1209,"snap":"2019-13-2019-22","text_gpt3_token_len":318,"char_repetition_ratio":0.09460581,"word_repetition_ratio":0.0,"special_character_ratio":0.254756,"punctuation_ratio":0.103174604,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999428,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-20T06:03:27Z\",\"WARC-Record-ID\":\"<urn:uuid:c2282a46-ac7b-404f-94c8-79b69b6b3d11>\",\"Content-Length\":\"127788\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d8b89509-798b-4803-b773-7ed93d4f9bca>\",\"WARC-Concurrent-To\":\"<urn:uuid:94b61760-25d2-4b12-b59d-69d9762cb659>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/331350/interior-smooth-regularity\",\"WARC-Payload-Digest\":\"sha1:7Y5WIKLSCCJVMD66EXZKYXDDVVO7QDKC\",\"WARC-Block-Digest\":\"sha1:NPZCDBNYJKNK5CHLBOMMLFG5EVFRHVKX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232255562.23_warc_CC-MAIN-20190520041753-20190520063753-00366.warc.gz\"}"}
http://forums.wolfram.com/mathgroup/archive/2008/Nov/msg00414.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Re: Reduce[a^x + b^x - 2 == 0, x]\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg93654] Re: Reduce[a^x + b^x - 2 == 0, x]\n• From: \"sjoerd.c.devries at gmail.com\" <sjoerd.c.devries at gmail.com>\n• Date: Thu, 20 Nov 2008 04:57:27 -0500 (EST)\n• References: <gg0qcu\\$9j\\[email protected]>\n\n```It seems the equation is not solvable in an algebraic way. However, I\ngive you one solution: x=0. Depending on a and b there may be one\nother solution.\n\nYou should probably try to solve this numerically.\n\nCheers -- Sjoerd\n\nOn Nov 19, 12:39 pm, \"Severin Posta\" <seve... at km1.fjfi.cvut.cz> wrote:\n> If I put\n>\n> Reduce[a^x + b^x - 2 == 0, x]\n>\n> into Mathematica 6.0.0 Win32, I get no result. System is running computat=\nion\n> several days - probably forever :)\n>\n> S.\n\n```\n\n• Prev by Date: FFT in Mathematica\n• Next by Date: Re: List of Month Names\n• Previous by thread: Reduce[a^x + b^x - 2 == 0, x]\n• Next by thread: Re: Reduce[a^x + b^x - 2 == 0, x]" ]
[ null, "http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif", null, "http://forums.wolfram.com/mathgroup/images/head_archive.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/2.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/8.gif", null, "http://forums.wolfram.com/mathgroup/images/search_archive.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7926654,"math_prob":0.8798185,"size":745,"snap":"2019-35-2019-39","text_gpt3_token_len":273,"char_repetition_ratio":0.10121457,"word_repetition_ratio":0.10606061,"special_character_ratio":0.38120806,"punctuation_ratio":0.24064171,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9925847,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-18T15:18:44Z\",\"WARC-Record-ID\":\"<urn:uuid:97fb53f8-4041-4645-a113-c70b27293b42>\",\"Content-Length\":\"42450\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9cac0a55-d769-4746-9a02-0b8b76147ca5>\",\"WARC-Concurrent-To\":\"<urn:uuid:ebee6c27-6d40-478b-a61c-27330f705e38>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2008/Nov/msg00414.html\",\"WARC-Payload-Digest\":\"sha1:WPKSZWJFLJEY6JKNBCV3FKQNVGNWKEFQ\",\"WARC-Block-Digest\":\"sha1:BT4KA4XHTACWRVAQS3DMWO3ZRE76N7OI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573289.83_warc_CC-MAIN-20190918131429-20190918153429-00368.warc.gz\"}"}
http://plasmasturm.org/log/kolmoguni/
[ "# Real as can be, after all?\n\nThursday, 13 Feb 2014\n\nIt is possible to consider that the value of π is not infinite because it is recursively enumerable. You can make as many correct digits as you like using finite means. I think some scientists are thinking this means that π doesn’t “actually” have an infinite number of digits, because there are simple rules for getting as many as you need, without limit.\n\nHowever, there is another arena well known to physicists where an “actual infinity” is much harder to exorcise, and that is the randomness that seems to be built in at a basic level in quantum mechanics. It is now known, thanks to recent results, that it is possible to certify that a sequence of bits produced by quantum processes are truly, irreducibly random. It is also known that it is not possible to compute an irreducibly random sequence of indefinite length using a program of finite size. So as far as I can see, this means that the universe is not any sort of computer, because if it were it would not be possible to physically certify that a random sequence is random.\n\nIf my layman’s understanding is correct, the result is essentially that the Kolmogorov complexity of the universe was shown to be at least “equivalent” to the “size” of the universe, which rules out the possibility of it being the result of computation. (Contrasting with the fact that π can be computed because, even though it is a non-repeating infinite series in the decimal system, its Kolmogorov complexity is yet very modest. (That is not hard to conceive. A trivial way to create a non-repeating infinite series is to emit a series of 1s separated by an always-increasing number of 0s. Cycle length: ∞; Kolmogorov complexity: ~0.))\n\nIf this is correct, and true, I find it very exciting. I always found the concept of the universe as a kind of simulation profoundly unsatisfying, due to the infinite regress that is immediately invoked: is that computer itself simulated, in turn? If so, does the chain end? If it does, why/how there? It all seems like multiplication of entities that achieves nothing in exchange for opening up unlimited arbitrariness potential." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.97361875,"math_prob":0.87431103,"size":2134,"snap":"2021-43-2021-49","text_gpt3_token_len":462,"char_repetition_ratio":0.11032864,"word_repetition_ratio":0.0,"special_character_ratio":0.20431115,"punctuation_ratio":0.100961536,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97760415,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-22T13:01:58Z\",\"WARC-Record-ID\":\"<urn:uuid:ffbf183b-7d95-4b31-8c51-9e343b16db17>\",\"Content-Length\":\"4104\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4164b949-0fe8-4e1f-aae8-7a554d243d45>\",\"WARC-Concurrent-To\":\"<urn:uuid:59fdb177-902d-4af9-936b-69aff43652a9>\",\"WARC-IP-Address\":\"79.140.42.73\",\"WARC-Target-URI\":\"http://plasmasturm.org/log/kolmoguni/\",\"WARC-Payload-Digest\":\"sha1:QHJYPII6SMGT5OOSCXLEY7BWIQXX6EJG\",\"WARC-Block-Digest\":\"sha1:6Q7BTYAYBJDU7PP4IWPEYM7X5PAN2ZUL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585507.26_warc_CC-MAIN-20211022114748-20211022144748-00372.warc.gz\"}"}
https://www.answers.com/Q/How_many_quarts_are_in_a_pound
[ "# How many quarts are in a pound?\n\nA quart is a unit of volume, and a pound is a unit of weight, so there is no direct conversion of the two. If we know what substance we are talking about (such as water, for example) we could then ask how much a quart of water weighs (about two pounds, but I am estimating)." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9560183,"math_prob":0.9971805,"size":1944,"snap":"2019-35-2019-39","text_gpt3_token_len":486,"char_repetition_ratio":0.20670103,"word_repetition_ratio":0.07197943,"special_character_ratio":0.24794239,"punctuation_ratio":0.101149425,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9724114,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-17T10:41:25Z\",\"WARC-Record-ID\":\"<urn:uuid:61a5ef2b-cb5c-4042-bd7e-d3283ce7acd2>\",\"Content-Length\":\"161234\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b7cf734-08ca-4534-a1d3-8e1419211f3f>\",\"WARC-Concurrent-To\":\"<urn:uuid:8224b7b8-1ff7-48b7-b2ca-8bc2775a52b8>\",\"WARC-IP-Address\":\"151.101.248.203\",\"WARC-Target-URI\":\"https://www.answers.com/Q/How_many_quarts_are_in_a_pound\",\"WARC-Payload-Digest\":\"sha1:O224AYSD7JV3O5S32HXFPEVDTSGRUX3J\",\"WARC-Block-Digest\":\"sha1:NMK7WEWIF2I6U4WNFIKKN5MVBZQ3FB2T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027312128.3_warc_CC-MAIN-20190817102624-20190817124624-00325.warc.gz\"}"}
https://www.mathworks.com/help/matlab/ref/quadv.html?refresh=true
[ "Documentation\n\n`quadv` is not recommended. Use `integral` with the `'ArrayValued'` option instead.\n\n## Syntax\n\n```Q = quadv(fun,a,b) Q = quadv(fun,a,b,tol) Q = quadv(fun,a,b,tol,trace) [Q,fcnt] = quadv(...) ```\n\n## Description\n\n`Q = quadv(fun,a,b)` approximates the integral of the complex array-valued function `fun` from `a` to `b` to within an error of `1.e-6` using recursive adaptive Simpson quadrature. `fun` is a function handle. The function `Y = fun(x)` should accept a scalar argument `x` and return an array result `Y`, whose components are the integrands evaluated at `x`. Limits `a` and `b` must be finite.\n\nParameterizing Functions explains how to provide addition parameters to the function `fun`, if necessary.\n\n`Q = quadv(fun,a,b,tol)` uses the absolute error tolerance `tol` for all the integrals instead of the default, which is `1.e-6`.\n\n### Note\n\nThe same tolerance is used for all components, so the results obtained with `quadv` are usually not the same as those obtained with `quad` on the individual components.\n\n`Q = quadv(fun,a,b,tol,trace)` with non-zero `trace` shows the values of ```[fcnt a b-a Q(1)]``` during the recursion.\n\n`[Q,fcnt] = quadv(...)` returns the number of function evaluations.\n\n• The `quad` function might be most efficient for low accuracies with nonsmooth integrands.\n\n• The `quadl` function might be more efficient than `quad` at higher accuracies with smooth integrands.\n\n• The `quadgk` function might be most efficient for high accuracies and oscillatory integrands. It supports infinite intervals and can handle moderate singularities at the endpoints. It also supports contour integration along piecewise linear paths.\n\n• The `quadv` function vectorizes `quad` for an array-valued `fun`.\n\n• If the interval is infinite, $\\left[a,\\infty \\right)$, then for the integral of `fun(x)` to exist, `fun(x)` must decay as `x` approaches infinity, and `quadgk` requires it to decay rapidly. Special methods should be used for oscillatory functions on infinite intervals, but `quadgk` can be used if `fun(x)` decays fast enough.\n\n• The `quadgk` function will integrate functions that are singular at finite endpoints if the singularities are not too strong. For example, it will integrate functions that behave at an endpoint `c` like `log|x-c|` or `|x-c|p` for ```p >= -1/2```. If the function is singular at points inside `(a,b)`, write the integral as a sum of integrals over subintervals with the singular points as endpoints, compute them with `quadgk`, and add the results.\n\n## Examples\n\nFor the parameterized array-valued function `myarrayfun`, defined by\n\n```function Y = myarrayfun(x,n) Y = 1./((1:n)+x);```\n\nthe following command integrates `myarrayfun`, for the parameter value n = 10 between a = 0 and b = 1:\n\n`Qv = quadv(@(x)myarrayfun(x,10),0,1);`\n\nThe resulting array `Qv` has 10 elements estimating ```Q(k) = log((k+1)./(k))```, for `k = 1:10`.\n\nThe entries in `Qv` are slightly different than if you compute the integrals using `quad` in a loop:\n\n```for k = 1:10 Qs(k) = quadv(@(x)myscalarfun(x,k),0,1); end```\n\nwhere `myscalarfun` is:\n\n```function y = myscalarfun(x,k) y = 1./(k+x);```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7029684,"math_prob":0.9963878,"size":3120,"snap":"2019-51-2020-05","text_gpt3_token_len":858,"char_repetition_ratio":0.14634146,"word_repetition_ratio":0.008048289,"special_character_ratio":0.23942308,"punctuation_ratio":0.14376996,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99842274,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T03:52:23Z\",\"WARC-Record-ID\":\"<urn:uuid:c98f1d80-5720-4e57-bf5b-2c319d06ccac>\",\"Content-Length\":\"70680\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bc60f48c-3d04-4976-beb0-17891906af40>\",\"WARC-Concurrent-To\":\"<urn:uuid:ef7c09a8-e2aa-4f93-8762-ed4ef0341457>\",\"WARC-IP-Address\":\"23.5.129.95\",\"WARC-Target-URI\":\"https://www.mathworks.com/help/matlab/ref/quadv.html?refresh=true\",\"WARC-Payload-Digest\":\"sha1:2QH26LGQTY2JFL7BLQIAUJSZXFQMABBV\",\"WARC-Block-Digest\":\"sha1:TGBT44Z36VPN2ZGEC5KS7SKNB32KAON5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540504338.31_warc_CC-MAIN-20191208021121-20191208045121-00492.warc.gz\"}"}
http://gluon-ts.mxnet.io/master/api/gluonts/gluonts.mx.distribution.html
[ "# gluonts.mx.distribution package¶\n\nclass gluonts.mx.distribution.Distribution[source]\n\nBases: object\n\nA class representing probability distributions.\n\nproperty all_dim\n\nNumber of overall dimensions.\n\narg_names: Tuple = None\nproperty args\nproperty batch_dim\n\nNumber of batch dimensions, i.e., length of the batch_shape tuple.\n\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\ncdf(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nReturns the value of the cumulative distribution function evaluated at x\n\ncrps(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the continuous rank probability score (CRPS) of x according to the distribution.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the CRPS score, according to the distribution, for each event in x.\n\nReturn type\n\nTensor\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = False\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nloss(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the loss at x according to the distribution.\n\nBy default, this method returns the negative of log_prob. 
For some distributions, however, the log-density is not easily computable and therefore other loss functions are computed.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the value of the loss for each event in x.\n\nReturn type\n\nTensor\n\nproperty mean\n\nTensor containing the mean of the distribution.\n\nprob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nquantile(level: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCalculates quantiles for the given levels.\n\nParameters\n\nlevel – Level values to use for computing the quantiles. level should be a 1d tensor of level values between 0 and 1.\n\nReturns\n\nQuantile values corresponding to the levels passed. The return shape is\n\n(num_levels, …DISTRIBUTION_SHAPE…),\n\nwhere DISTRIBUTION_SHAPE is the shape of the underlying distribution.\n\nReturn type\n\nquantiles\n\nsample(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from the distribution.\n\nIf num_samples is given the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. This has shape (*batch_shape, *eval_shape) if num_samples = None and (num_samples, *batch_shape, *eval_shape) otherwise.\n\nReturn type\n\nTensor\n\nsample_rep(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\nslice_axis(axis: int, begin: int, end: Optional[int]) → gluonts.mx.distribution.distribution.Distribution[source]\nproperty stddev\n\nTensor containing the standard deviation of the distribution.\n\nproperty variance\n\nTensor containing the variance of the distribution.\n\nclass gluonts.mx.distribution.DistributionOutput[source]\n\nClass to construct a distribution given the output of a network.\n\ndistr_cls: type = None\ndistribution(distr_args, loc: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None, scale: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None) → gluonts.mx.distribution.distribution.Distribution[source]\n\nConstruct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.\n\nParameters\n• distr_args – Constructor arguments for the underlying Distribution type.\n\n• loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\n• scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\ndomain_map(F, *args: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol])[source]\n\nConverts arguments to the right shape and domain. 
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple, of the distributions that this object constructs.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nproperty value_in_support\n\nA float that will have a valid numeric value when computing the log-loss of the corresponding distribution. By default 0.0. This value will be used when padding data series.\n\nclass gluonts.mx.distribution.StudentTOutput[source]\nargs_dim: Dict[str, int] = {'mu': 1, 'nu': 1, 'sigma': 1}\ndistr_cls\n\nalias of StudentT\n\nclassmethod domain_map(F, mu, sigma, nu)[source]\n\nConverts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nclass gluonts.mx.distribution.StudentT(mu: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], sigma: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], nu: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], F=None)[source]\n\nStudent’s t-distribution.\n\nParameters\n• mu – Tensor containing the means, of shape (*batch_shape, *event_shape).\n\n• sigma – Tensor containing the standard deviations, of shape (*batch_shape, *event_shape).\n\n• nu – Nonnegative tensor containing the degrees of freedom of the distribution, of shape (*batch_shape, *event_shape).\n\n• F\n\nproperty F\narg_names = None\nproperty args\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = False\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nproperty mean\n\nTensor containing the mean of the distribution.\n\nsample(num_samples: 
Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from the distribution.\n\nIf num_samples is given the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. This has shape (*batch_shape, *eval_shape) if num_samples = None and (num_samples, *batch_shape, *eval_shape) otherwise.\n\nReturn type\n\nTensor\n\nproperty stddev\n\nTensor containing the standard deviation of the distribution.\n\nclass gluonts.mx.distribution.GammaOutput[source]\nargs_dim: Dict[str, int] = {'alpha': 1, 'beta': 1}\ndistr_cls\n\nalias of Gamma\n\nclassmethod domain_map(F, alpha, beta)[source]\n\nMaps raw tensors to valid arguments for constructing a Gamma distribution.\n\nParameters\n• F\n\n• alpha – Tensor of shape (*batch_shape, 1)\n\n• beta – Tensor of shape (*batch_shape, 1)\n\nReturns\n\nTwo squeezed tensors, of shape (*batch_shape): both have entries mapped to the positive orthant.\n\nReturn type\n\nTuple[Tensor, Tensor]\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nproperty value_in_support\n\nA float that will have a valid numeric value when computing the log-loss of the corresponding distribution. By default 0.0. This value will be used when padding data series.\n\nclass gluonts.mx.distribution.Gamma(alpha: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], beta: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol])[source]\n\nGamma distribution.\n\nParameters\n• alpha – Tensor containing the shape parameters, of shape (*batch_shape, *event_shape).\n\n• beta – Tensor containing the rate parameters, of shape (*batch_shape, *event_shape).\n\n• F\n\nproperty F\narg_names = None\nproperty args\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = False\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nproperty mean\n\nTensor containing the mean of the 
distribution.\n\nsample(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from the distribution.\n\nIf num_samples is given the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. This has shape (*batch_shape, *eval_shape) if num_samples = None and (num_samples, *batch_shape, *eval_shape) otherwise.\n\nReturn type\n\nTensor\n\nproperty stddev\n\nTensor containing the standard deviation of the distribution.\n\nclass gluonts.mx.distribution.BetaOutput[source]\nargs_dim: Dict[str, int] = {'alpha': 1, 'beta': 1}\ndistr_cls\n\nalias of Beta\n\nclassmethod domain_map(F, alpha, beta)[source]\n\nMaps raw tensors to valid arguments for constructing a Beta distribution.\n\nParameters\n• F\n\n• alpha – Tensor of shape (*batch_shape, 1)\n\n• beta – Tensor of shape (*batch_shape, 1)\n\nReturns\n\nTwo squeezed tensors, of shape (*batch_shape): both have entries mapped to the positive orthant.\n\nReturn type\n\nTuple[Tensor, Tensor]\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nproperty value_in_support\n\nA float that will have a valid numeric value when computing the log-loss of the corresponding distribution. By default 0.0. This value will be used when padding data series.\n\nclass gluonts.mx.distribution.Beta(alpha: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], beta: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol])[source]\n\nBeta distribution.\n\nParameters\n• alpha – Tensor containing the alpha shape parameters, of shape (*batch_shape, *event_shape).\n\n• beta – Tensor containing the beta shape parameters, of shape (*batch_shape, *event_shape).\n\n• F\n\nproperty F\narg_names = None\nproperty args\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = False\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nproperty 
mean\n\nTensor containing the mean of the distribution.\n\nsample(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from the distribution.\n\nIf num_samples is given the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. This has shape (*batch_shape, *eval_shape) if num_samples = None and (num_samples, *batch_shape, *eval_shape) otherwise.\n\nReturn type\n\nTensor\n\nproperty stddev\n\nTensor containing the standard deviation of the distribution.\n\nproperty variance\n\nTensor containing the variance of the distribution.\n\nclass gluonts.mx.distribution.GaussianOutput[source]\nargs_dim: Dict[str, int] = {'mu': 1, 'sigma': 1}\ndistr_cls\n\nalias of Gaussian\n\nclassmethod domain_map(F, mu, sigma)[source]\n\nMaps raw tensors to valid arguments for constructing a Gaussian distribution.\n\nParameters\n• F\n\n• mu – Tensor of shape (*batch_shape, 1)\n\n• sigma – Tensor of shape (*batch_shape, 1)\n\nReturns\n\nTwo squeezed tensors, of shape (*batch_shape): the first has the same entries as mu and the second has entries mapped to the positive orthant.\n\nReturn type\n\nTuple[Tensor, Tensor]\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nclass gluonts.mx.distribution.Gaussian(mu: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], sigma: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol])[source]\n\nGaussian distribution.\n\nParameters\n• mu – Tensor containing the means, of shape (*batch_shape, *event_shape).\n\n• sigma – Tensor containing the standard deviations, of shape (*batch_shape, *event_shape).\n\n• F\n\nproperty F\narg_names = None\nproperty args\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\ncdf(x)[source]\n\nReturns the value of the cumulative distribution function evaluated at x\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = True\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn 
type\n\nTensor\n\nproperty mean\n\nTensor containing the mean of the distribution.\n\nquantile(level: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCalculates quantiles for the given levels.\n\nParameters\n\nlevel – Level values to use for computing the quantiles. level should be a 1d tensor of level values between 0 and 1.\n\nReturns\n\nQuantile values corresponding to the levels passed. The return shape is\n\n(num_levels, …DISTRIBUTION_SHAPE…),\n\nwhere DISTRIBUTION_SHAPE is the shape of the underlying distribution.\n\nReturn type\n\nquantiles\n\nsample(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from the distribution.\n\nIf num_samples is given the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. This has shape (*batch_shape, *eval_shape) if num_samples = None and (num_samples, *batch_shape, *eval_shape) otherwise.\n\nReturn type\n\nTensor\n\nsample_rep(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\nproperty stddev\n\nTensor containing the standard deviation of the distribution.\n\nclass gluonts.mx.distribution.LaplaceOutput[source]\nargs_dim: Dict[str, int] = {'b': 1, 'mu': 1}\ndistr_cls\n\nalias of Laplace\n\nclassmethod domain_map(F, mu, b)[source]\n\nConverts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nclass gluonts.mx.distribution.Laplace(mu: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], b: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol])[source]\n\nLaplace distribution.\n\nParameters\n• mu – Tensor containing the means, of shape (*batch_shape, *event_shape).\n\n• b – Tensor containing the distribution scale, of shape (*batch_shape, *event_shape).\n\n• F\n\nproperty F\narg_names = None\nproperty args\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\ncdf(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nReturns the value of the cumulative distribution function evaluated at x\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and 
so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = True\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nproperty mean\n\nTensor containing the mean of the distribution.\n\nquantile(level: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCalculates quantiles for the given levels.\n\nParameters\n\nlevel – Level values to use for computing the quantiles. level should be a 1d tensor of level values between 0 and 1.\n\nReturns\n\nQuantile values corresponding to the levels passed. The return shape is\n\n(num_levels, …DISTRIBUTION_SHAPE…),\n\nwhere DISTRIBUTION_SHAPE is the shape of the underlying distribution.\n\nReturn type\n\nquantiles\n\nsample_rep(num_samples=None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\nproperty stddev\n\nTensor containing the standard deviation of the distribution.\n\nclass gluonts.mx.distribution.MultivariateGaussian(mu: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], L: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], F=None)[source]\n\nMultivariate Gaussian distribution, specified by the mean vector and the Cholesky factor of its covariance matrix.\n\nParameters\n• mu – mean vector, of shape (…, d)\n\n• L – Lower triangular Cholesky factor of covariance matrix, of shape (…, d, d)\n\n• F – A module that can either refer to the Symbol API or the NDArray API in MXNet\n\nproperty F\narg_names = None\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = True\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape 
containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nproperty mean\n\nTensor containing the mean of the distribution.\n\nsample_rep(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from the multivariate Gaussian distributions. Internally, Cholesky factorization of the covariance matrix is used:\n\nsample = L v + mu,\n\nwhere L is the Cholesky factor, v is a standard normal sample.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nTensor with shape (num_samples, …, d).\n\nReturn type\n\nTensor\n\nproperty variance\n\nTensor containing the variance of the distribution.\n\nclass gluonts.mx.distribution.MultivariateGaussianOutput(dim: int)[source]\ndistr_cls = None\ndomain_map(F, mu_vector, L_vector)[source]\n\nConverts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nclass gluonts.mx.distribution.LowrankMultivariateGaussian(dim: int, rank: int, mu: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], D: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], W: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol])[source]\n\nMultivariate Gaussian distribution, with covariance matrix parametrized as the sum of a diagonal matrix and a low-rank matrix\n\n$\\Sigma = D + W W^T$\n\nThe implementation is strongly inspired from Pytorch: https://github.com/pytorch/pytorch/blob/master/torch/distributions/lowrank_multivariate_normal.py.\n\nComplexity to compute log_prob is $$O(dim * rank + rank^3)$$ per element.\n\nParameters\n• dim – Dimension of the distribution’s support\n\n• rank – Rank of W\n\n• mu – Mean tensor, of shape (…, dim)\n\n• D – Diagonal term in the covariance matrix, of shape (…, dim)\n\n• W – Low-rank factor in the covariance matrix, of shape (…, dim, rank)\n\nproperty F\narg_names = None\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = True\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, 
mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nproperty mean\n\nTensor containing the mean of the distribution.\n\nsample_rep(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from the multivariate Gaussian distribution:\n\n$s = \\mu + D u + W v,$\n\nwhere $$u$$ and $$v$$ are standard normal samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nTensor with shape (num_samples, …, dim).\n\nReturn type\n\nTensor\n\nproperty variance\n\nTensor containing the variance of the distribution.\n\nclass gluonts.mx.distribution.LowrankMultivariateGaussianOutput(dim: int, rank: int, sigma_init: float = 1.0, sigma_minimum: float = 0.001)[source]\ndistr_cls = None\ndistribution(distr_args, loc=None, scale=None, **kwargs) → gluonts.mx.distribution.distribution.Distribution[source]\n\nConstruct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.\n\nParameters\n• distr_args – Constructor arguments for the underlying Distribution type.\n\n• loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\n• scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\ndomain_map(F, mu_vector, D_vector, W_vector)[source]\nParameters\n• F\n\n• mu_vector – Tensor of shape (…, dim)\n\n• D_vector – Tensor of shape (…, dim)\n\n• W_vector – Tensor of shape (…, dim * rank)\n\nReturns\n\nA tuple containing tensors mu, D, and W, with shapes (…, dim), (…, dim), and (…, dim, rank), respectively.\n\nReturn type\n\nTuple\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nget_args_proj(prefix: Optional[str] = None) → gluonts.mx.distribution.distribution_output.ArgProj[source]\nclass gluonts.mx.distribution.MixtureDistributionOutput(distr_outputs: List[gluonts.mx.distribution.distribution_output.DistributionOutput])[source]\ndistr_cls = None\ndistribution(distr_args, loc: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None, scale: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None, **kwargs) → gluonts.mx.distribution.mixture.MixtureDistribution[source]\n\nConstruct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.\n\nParameters\n• distr_args – Constructor arguments for the underlying Distribution type.\n\n• loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\n• scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nget_args_proj(prefix: Optional[str] = None) → gluonts.mx.distribution.mixture.MixtureArgs[source]\nclass gluonts.mx.distribution.MixtureDistribution(mixture_probs: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], components: List[gluonts.mx.distribution.distribution.Distribution], F=None)[source]\n\nA mixture distribution where each component is a 
Distribution.\n\nParameters\n• mixture_probs – A tensor of mixing probabilities. The entries should all be positive and sum to 1 across the last dimension. Shape: (…, k), where k is the number of distributions to be mixed. All axes except the last one should either coincide with the ones from the component distributions, or be 1 (in which case, the mixing coefficient is shared across the axis).\n\n• components – A list of k Distribution objects representing the mixture components. Distributions can be of different types. Each component’s support should be made of tensors of shape (…, d).\n\n• F – A module that can either refer to the Symbol API or the NDArray API in MXNet\n\nproperty F\narg_names = None\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\ncdf(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nReturns the value of the cumulative distribution function evaluated at x.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = False\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nproperty mean\n\nTensor containing the mean of the distribution.\n\nsample(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from the distribution.\n\nIf num_samples is given, the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. 
This has shape (*batch_shape, *event_shape) if num_samples = None and (num_samples, *batch_shape, *event_shape) otherwise.\n\nReturn type\n\nTensor\n\nproperty stddev\n\nTensor containing the standard deviation of the distribution.\n\nclass gluonts.mx.distribution.NegativeBinomialOutput[source]\nargs_dim: Dict[str, int] = {'alpha': 1, 'mu': 1}\ndistr_cls\n\nalias of NegativeBinomial\n\ndistribution(distr_args, loc: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None, scale: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None) → gluonts.mx.distribution.neg_binomial.NegativeBinomial[source]\n\nConstruct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.\n\nParameters\n• distr_args – Constructor arguments for the underlying Distribution type.\n\n• loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\n• scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\nclassmethod domain_map(F, mu, alpha)[source]\n\nConverts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nclass gluonts.mx.distribution.NegativeBinomial(mu: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], alpha: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol])[source]\n\nNegative binomial distribution, i.e. the distribution of the number of failures in a sequence of independent Bernoulli trials before a specified number of successes occurs.\n\nParameters\n• mu – Tensor containing the means, of shape (*batch_shape, *event_shape).\n\n• alpha – Tensor of the shape parameters, of shape (*batch_shape, *event_shape).\n\n• F\n\nproperty F\narg_names = None\nproperty args\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = False\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the 
distribution for each event in x.\n\nReturn type\n\nTensor\n\nproperty mean\n\nTensor containing the mean of the distribution.\n\nsample(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from the distribution.\n\nIf num_samples is given, the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. This has shape (*batch_shape, *event_shape) if num_samples = None and (num_samples, *batch_shape, *event_shape) otherwise.\n\nReturn type\n\nTensor\n\nproperty stddev\n\nTensor containing the standard deviation of the distribution.\n\nclass gluonts.mx.distribution.UniformOutput[source]\nargs_dim: Dict[str, int] = {'low': 1, 'width': 1}\ndistr_cls\n\nalias of Uniform\n\nclassmethod domain_map(F, low, width)[source]\n\nConverts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nclass gluonts.mx.distribution.Uniform(low: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], high: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol])[source]\n\nUniform distribution.\n\nParameters\n• low – Tensor containing the lower bound of the distribution domain.\n\n• high – Tensor containing the upper bound of the distribution domain.\n\n• F\n\nproperty F\narg_names = None\nproperty args\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\ncdf(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nReturns the value of the cumulative distribution function evaluated at x.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = True\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nproperty 
mean\n\nTensor containing the mean of the distribution.\n\nquantile(level: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCalculates quantiles for the given levels.\n\nParameters\n\nlevel – Level values to use for computing the quantiles. level should be a 1d tensor of level values between 0 and 1.\n\nReturns\n\nQuantile values corresponding to the levels passed. The return shape is\n\n(num_levels, …DISTRIBUTION_SHAPE…),\n\nwhere DISTRIBUTION_SHAPE is the shape of the underlying distribution.\n\nReturn type\n\nquantiles\n\nsample(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from the distribution.\n\nIf num_samples is given, the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. This has shape (*batch_shape, *event_shape) if num_samples = None and (num_samples, *batch_shape, *event_shape) otherwise.\n\nReturn type\n\nTensor\n\nsample_rep(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\nproperty stddev\n\nTensor containing the standard deviation of the distribution.\n\nclass gluonts.mx.distribution.Binned(bin_log_probs: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], bin_centers: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], label_smoothing: Optional[float] = None)[source]\n\nA binned distribution defined by a set of bins via bin centers and bin probabilities.\n\nParameters\n• bin_log_probs – Tensor containing log probabilities of the bins, of shape (*batch_shape, num_bins).\n\n• bin_centers – Tensor containing the bin centers, of shape (*batch_shape, num_bins).\n\n• F\n\n• label_smoothing – The label smoothing weight, real number in [0, 1). Default None. If not None, then the loss of the distribution will be “label smoothed” cross-entropy. For example, instead of computing cross-entropy loss between the estimated bin probabilities and a hard label (one-hot encoding) [1, 0, 0], a soft label of [0.9, 0.05, 0.05] is taken as the ground truth (when label_smoothing=0.15). 
See Muller et al. (2019) [MKH19] for further reference.\n\nproperty F\narg_names = None\nproperty args\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nproperty bin_probs\ncdf(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nReturns the value of the cumulative distribution function evaluated at x.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = False\nlog_prob(x)[source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nloss(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the loss at x according to the distribution.\n\nBy default, this method returns the negative of log_prob. For some distributions, however, the log-density is not easily computable and therefore other loss functions are computed.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the value of the loss for each event in x.\n\nReturn type\n\nTensor\n\nproperty mean\n\nTensor containing the mean of the distribution.\n\nquantile(level: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCalculates quantiles for the given levels.\n\nParameters\n\nlevel – Level values to use for computing the quantiles. level should be a 1d tensor of level values between 0 and 1.\n\nReturns\n\nQuantile values corresponding to the levels passed. The return shape is\n\n(num_levels, …DISTRIBUTION_SHAPE…),\n\nwhere DISTRIBUTION_SHAPE is the shape of the underlying distribution.\n\nReturn type\n\nquantiles\n\nsample(num_samples=None, dtype=<class 'numpy.float32'>)[source]\n\nDraw samples from the distribution.\n\nIf num_samples is given, the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. 
This has shape (*batch_shape, *event_shape) if num_samples = None and (num_samples, *batch_shape, *event_shape) otherwise.\n\nReturn type\n\nTensor\n\nsmooth_ce_loss(x)[source]\n\nCross-entropy loss with a “smooth” label.\n\nproperty stddev\n\nTensor containing the standard deviation of the distribution.\n\nclass gluonts.mx.distribution.BinnedOutput(bin_centers: mxnet.ndarray.ndarray.NDArray, label_smoothing: Optional[float] = None)[source]\ndistr_cls\n\nalias of Binned\n\ndistribution(args, loc=None, scale=None) → gluonts.mx.distribution.binned.Binned[source]\n\nConstruct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.\n\nParameters\n• distr_args – Constructor arguments for the underlying Distribution type.\n\n• loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\n• scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nget_args_proj(*args, **kwargs) → mxnet.gluon.block.HybridBlock[source]\nclass gluonts.mx.distribution.PiecewiseLinear(gamma: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], slopes: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], knot_spacings: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol])[source]\n\nPiecewise linear distribution.\n\nThis class represents the quantile function (i.e., the inverse CDF) associated with a distribution, as a continuous, non-decreasing, piecewise linear function defined in the [0, 1] interval:\n\n$q(x; \\gamma, b, d) = \\gamma + \\sum_{l=0}^L b_l (x - d_l)_+$\n\nwhere the input $$x \\in [0,1]$$ and the parameters are\n\n• $$\\gamma$$: intercept at 0\n\n• $$b$$: differences of the slopes in consecutive pieces\n\n• $$d$$: knot positions\n\nParameters\n• gamma – Tensor containing the intercepts at zero\n\n• slopes – Tensor containing the slopes of each linear piece. All coefficients must be positive. Shape: (*gamma.shape, num_pieces)\n\n• knot_spacings – Tensor containing the spacings between knots in the splines. All coefficients must be positive and sum to one on the last axis. Shape: (*gamma.shape, num_pieces)\n\n• F\n\nproperty F\narg_names = None\nproperty args\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\ncdf(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nComputes the quantile level $$\\alpha$$ such that $$q(\\alpha) = x$$.\n\nParameters\n\nx – Tensor of shape gamma.shape\n\nReturns\n\nTensor of shape gamma.shape\n\nReturn type\n\nTensor\n\ncrps(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute CRPS in analytical form.\n\nParameters\n\nx – Observation to evaluate. 
Shape equals gamma.shape.\n\nReturns\n\nTensor containing the CRPS.\n\nReturn type\n\nTensor\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = False\nloss(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the loss at x according to the distribution.\n\nBy default, this method returns the negative of log_prob. For some distributions, however, the log-density is not easily computable and therefore other loss functions are computed.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the value of the loss for each event in x.\n\nReturn type\n\nTensor\n\nquantile(level: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCalculates quantiles for the given levels.\n\nParameters\n\nlevel – Level values to use for computing the quantiles. level should be a 1d tensor of level values between 0 and 1.\n\nReturns\n\nQuantile values corresponding to the levels passed. The return shape is\n\n(num_levels, …DISTRIBUTION_SHAPE…),\n\nwhere DISTRIBUTION_SHAPE is the shape of the underlying distribution.\n\nReturn type\n\nquantiles\n\nquantile_internal(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], axis: Optional[int] = None) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nEvaluates the quantile function at the quantile levels contained in x.\n\nParameters\n• x – Tensor of shape *gamma.shape if axis=None, or containing an additional axis at the specified position, otherwise.\n\n• axis – Index of the axis containing the different quantile levels which are to be computed.\n\nReturns\n\nQuantiles tensor, of the same shape as x.\n\nReturn type\n\nTensor\n\nsample(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from the distribution.\n\nIf num_samples is given, the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. 
This has shape (*batch_shape, *event_shape) if num_samples = None and (num_samples, *batch_shape, *event_shape) otherwise.\n\nReturn type\n\nTensor\n\nclass gluonts.mx.distribution.PiecewiseLinearOutput(num_pieces: int)[source]\ndistr_cls\n\nalias of PiecewiseLinear\n\ndistribution(distr_args, loc: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None, scale: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None) → gluonts.mx.distribution.piecewise_linear.PiecewiseLinear[source]\n\nConstruct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.\n\nParameters\n• distr_args – Constructor arguments for the underlying Distribution type.\n\n• loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\n• scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\nclassmethod domain_map(F, gamma, slopes, knot_spacings)[source]\n\nConverts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nclass gluonts.mx.distribution.Poisson(rate: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol])[source]\n\nPoisson distribution, i.e. the distribution of the number of events occurring in a fixed interval of time or space.\n\nParameters\n• rate – Tensor containing the means, of shape (*batch_shape, *event_shape).\n\n• F\n\nproperty F\narg_names = None\nproperty args\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = False\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nproperty mean\n\nTensor containing the mean of the distribution.\n\nsample(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from 
the distribution.\n\nIf num_samples is given, the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. This has shape (*batch_shape, *event_shape) if num_samples = None and (num_samples, *batch_shape, *event_shape) otherwise.\n\nReturn type\n\nTensor\n\nproperty stddev\n\nTensor containing the standard deviation of the distribution.\n\nclass gluonts.mx.distribution.PoissonOutput[source]\nargs_dim: Dict[str, int] = {'rate': 1}\ndistr_cls\n\nalias of Poisson\n\ndistribution(distr_args, loc: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None, scale: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None) → gluonts.mx.distribution.poisson.Poisson[source]\n\nConstruct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.\n\nParameters\n• distr_args – Constructor arguments for the underlying Distribution type.\n\n• loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\n• scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\nclassmethod domain_map(F, rate)[source]\n\nConverts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nclass gluonts.mx.distribution.TransformedPiecewiseLinear(base_distribution: gluonts.mx.distribution.piecewise_linear.PiecewiseLinear, transforms: List[gluonts.mx.distribution.bijection.Bijection])[source]\narg_names = None\ncrps(y: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the continuous ranked probability score (CRPS) of y according to the distribution.\n\nParameters\n\ny – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the CRPS score, according to the distribution, for each event in y.\n\nReturn type\n\nTensor\n\nclass gluonts.mx.distribution.TransformedDistribution(base_distribution: gluonts.mx.distribution.distribution.Distribution, transforms: List[gluonts.mx.distribution.bijection.Bijection])[source]\n\nA distribution obtained by applying a sequence of transformations on top of a base distribution.\n\narg_names = None\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\ncdf(y: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nReturns the value of the cumulative distribution function evaluated at y.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape 
of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nlog_prob(y: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at y.\n\nParameters\n\ny – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in y.\n\nReturn type\n\nTensor\n\nquantile(level: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCalculates quantiles for the given levels.\n\nParameters\n\nlevel – Level values to use for computing the quantiles. level should be a 1d tensor of level values between 0 and 1.\n\nReturns\n\nQuantile values corresponding to the levels passed. The return shape is\n\n(num_levels, …DISTRIBUTION_SHAPE…),\n\nwhere DISTRIBUTION_SHAPE is the shape of the underlying distribution.\n\nReturn type\n\nquantiles\n\nsample(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from the distribution.\n\nIf num_samples is given, the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. This has shape (*batch_shape, *event_shape) if num_samples = None and (num_samples, *batch_shape, *event_shape) otherwise.\n\nReturn type\n\nTensor\n\nsample_rep(num_samples: Optional[int] = None, dtype=<class 'float'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\nclass gluonts.mx.distribution.TransformedDistributionOutput(base_distr_output: gluonts.mx.distribution.distribution_output.DistributionOutput, transforms_output: List[gluonts.mx.distribution.bijection_output.BijectionOutput])[source]\n\nClass to connect a network to a distribution that is transformed by a sequence of learnable bijections.\n\ndistr_cls = None\ndistribution(distr_args, loc: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None, scale: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None) → gluonts.mx.distribution.distribution.Distribution[source]\n\nConstruct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.\n\nParameters\n• distr_args – Constructor arguments for the underlying Distribution type.\n\n• loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\n• scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\ndomain_map(F, *args: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol])[source]\n\nConverts arguments to the right shape and domain. 
The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nget_args_proj(prefix: Optional[str] = None) → gluonts.mx.distribution.distribution_output.ArgProj[source]\nclass gluonts.mx.distribution.InverseBoxCoxTransformOutput(lb_obs: float = 0.0, fix_lambda_2: bool = True)[source]\nargs_dim: Dict[str, int] = {'box_cox.lambda_1': 1, 'box_cox.lambda_2': 1}\nbij_cls\n\nalias of InverseBoxCoxTransform\n\nproperty event_shape\nclass gluonts.mx.distribution.BoxCoxTransformOutput(lb_obs: float = 0.0, fix_lambda_2: bool = True)[source]\nargs_dim: Dict[str, int] = {'box_cox.lambda_1': 1, 'box_cox.lambda_2': 1}\nbij_cls\n\nalias of BoxCoxTransform\n\ndomain_map(F, *args: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Tuple[Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], ...][source]\nproperty event_shape\nclass gluonts.mx.distribution.Dirichlet(alpha: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], float_type: gluonts.core.component.DType = <class 'numpy.float32'>)[source]\n\nDirichlet distribution, specified by the concentration vector alpha of length d. https://en.wikipedia.org/wiki/Dirichlet_distribution\n\nThe Dirichlet distribution is defined on the open (d-1)-simplex, which means that a sample (or observation) x = (x_0,…, x_{d-1}) must satisfy:\n\nsum_k x_k = 1 and for all k, x_k > 0.\n\nParameters\n• alpha – concentration vector, of shape (…, d)\n\n• F – A module that can either refer to the Symbol API or the NDArray API in MXNet\n\nproperty F\narg_names = None\nproperty args\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = False\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nproperty mean\n\nTensor containing the mean of the distribution.\n\nsample(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, 
mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from the distribution.\n\nIf num_samples is given, the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. This has shape (*batch_shape, *event_shape) if num_samples = None and (num_samples, *batch_shape, *event_shape) otherwise.\n\nReturn type\n\nTensor\n\nproperty variance\n\nTensor containing the variance of the distribution.\n\nclass gluonts.mx.distribution.DirichletOutput(dim: int)[source]\ndistr_cls = None\ndistribution(distr_args, loc=None, scale=None) → gluonts.mx.distribution.distribution.Distribution[source]\n\nConstruct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.\n\nParameters\n• distr_args – Constructor arguments for the underlying Distribution type.\n\n• loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\n• scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\ndomain_map(F, alpha_vector)[source]\n\nConverts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nclass gluonts.mx.distribution.DirichletMultinomial(dim: int, n_trials: int, alpha: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], float_type: gluonts.core.component.DType = <class 'numpy.float32'>)[source]\n\nDirichlet-Multinomial distribution, specified by the concentration vector alpha of length dim, and a number of trials n_trials. 
https://en.wikipedia.org/wiki/Dirichlet-multinomial_distribution\n\nThe Dirichlet-Multinomial distribution is a discrete multivariate probability distribution; a sample (or observation) x = (x_0,…, x_{dim-1}) must satisfy:\n\nsum_k x_k = n_trials and for all k, x_k is a non-negative integer.\n\nSuch a sample can be obtained by first drawing a vector p from a Dirichlet(alpha) distribution; x is then drawn from a Multinomial(p) with n_trials trials.\n\nParameters\n• dim – Dimension of any sample\n\n• n_trials – Number of trials\n\n• alpha – concentration vector, of shape (…, dim)\n\n• F – A module that can either refer to the Symbol API or the NDArray API in MXNet\n\nproperty F\narg_names = None\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nis_reparameterizable = False\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nproperty mean\n\nTensor containing the mean of the distribution.\n\nsample(num_samples: Optional[int] = None, dtype=<class 'numpy.float32'>) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nDraw samples from the distribution.\n\nIf num_samples is given, the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. 
This has shape (*batch_shape, *event_shape) if num_samples = None and (num_samples, *batch_shape, *event_shape) otherwise.\n\nReturn type\n\nTensor\n\nproperty variance\n\nTensor containing the variance of the distribution.\n\nclass gluonts.mx.distribution.DirichletMultinomialOutput(dim: int, n_trials: int)[source]\ndistr_cls = None\ndistribution(distr_args, loc=None, scale=None) → gluonts.mx.distribution.distribution.Distribution[source]\n\nConstruct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.\n\nParameters\n• distr_args – Constructor arguments for the underlying Distribution type.\n\n• loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\n• scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\ndomain_map(F, alpha_vector)[source]\n\nConverts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nclass gluonts.mx.distribution.Categorical(log_probs: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol])[source]\n\nA categorical distribution over num_cats-many categories.\n\nParameters\n• log_probs – Tensor containing log probabilities of the individual categories, of shape (*batch_shape, num_cats).\n\n• F\n\nproperty F\narg_names = None\nproperty args\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nlog_prob(x)[source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nproperty mean\n\nTensor containing the mean of the distribution.\n\nproperty probs\nsample(num_samples=None, dtype=<class 'numpy.int32'>)[source]\n\nDraw samples from the distribution.\n\nIf num_samples is given, the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. 
This has shape (*batch_shape, *event_shape) if num_samples = None and (num_samples, *batch_shape, *event_shape) otherwise.\n\nReturn type\n\nTensor\n\nproperty stddev\n\nTensor containing the standard deviation of the distribution.\n\nclass gluonts.mx.distribution.CategoricalOutput(num_cats: int, temperature: float = 1.0)[source]\ndistr_cls\n\nalias of Categorical\n\ndistribution(distr_args, loc=None, scale=None, **kwargs) → gluonts.mx.distribution.distribution.Distribution[source]\n\nConstruct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.\n\nParameters\n• distr_args – Constructor arguments for the underlying Distribution type.\n\n• loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\n• scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\ndomain_map(F, probs)[source]\n\nConverts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs.\n\nclass gluonts.mx.distribution.LogitNormal(mu: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], sigma: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol])[source]\n\nThe logit-normal distribution.\n\nParameters\n• mu – Tensor containing the location, of shape (*batch_shape, *event_shape).\n\n• sigma – Tensor containing the scale, of shape (*batch_shape, *event_shape).\n\n• F\n\nproperty F\narg_names = None\nproperty args\nproperty batch_shape\n\nLayout of the set of events contemplated by the distribution.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape, and computing log_prob (or loss more in general) on such sample will yield a tensor of shape batch_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nproperty event_dim\n\nNumber of event dimensions, i.e., length of the event_shape tuple.\n\nThis is 0 for distributions over scalars, 1 over vectors, 2 over matrices, and so on.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distribution.\n\nFor example, distributions over scalars have event_shape = (), over vectors have event_shape = (d, ) where d is the length of the vectors, over matrices have event_shape = (d1, d2), and so on.\n\nInvoking sample() from a distribution yields a tensor of shape batch_shape + event_shape.\n\nThis property is available in general only in mx.ndarray mode, when the shape of the distribution arguments can be accessed.\n\nlog_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCompute the log-density of the distribution at x.\n\nParameters\n\nx – Tensor of shape (*batch_shape, *event_shape).\n\nReturns\n\nTensor of shape batch_shape containing the log-density of the distribution for each event in x.\n\nReturn type\n\nTensor\n\nquantile(level: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source]\n\nCalculates quantiles for the given levels.\n\nParameters\n\nlevel – Level values to use for 
computing the quantiles. level should be a 1d tensor of level values between 0 and 1.\n\nReturns\n\nQuantile values corresponding to the levels passed. The return shape is\n\n(num_levels, …DISTRIBUTION_SHAPE…),\n\nwhere DISTRIBUTION_SHAPE is the shape of the underlying distribution.\n\nReturn type\n\nquantiles\n\nsample(num_samples=None, dtype=<class 'numpy.float32'>)[source]\n\nDraw samples from the distribution.\n\nIf num_samples is given, the first dimension of the output will be num_samples.\n\nParameters\n• num_samples – Number of samples to be drawn.\n\n• dtype – Data-type of the samples.\n\nReturns\n\nA tensor containing samples. This has shape (*batch_shape, *event_shape) if num_samples = None and (num_samples, *batch_shape, *event_shape) otherwise.\n\nReturn type\n\nTensor\n\nclass gluonts.mx.distribution.LogitNormalOutput[source]\nargs_dim: Dict[str, int] = {'mu': 1, 'sigma': 1}\ndistr_cls\n\nalias of LogitNormal\n\ndistribution(distr_args, loc=None, scale=None, **kwargs) → gluonts.mx.distribution.distribution.Distribution[source]\n\nConstruct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor.\n\nParameters\n• distr_args – Constructor arguments for the underlying Distribution type.\n\n• loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\n• scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution.\n\nclassmethod domain_map(F, mu, sigma)[source]\n\nConverts arguments to the right shape and domain. The domain depends on the type of distribution, while the correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a distribution of the right event_shape.\n\nproperty event_shape\n\nShape of each individual event contemplated by the distributions that this object constructs." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.647606,"math_prob":0.9454814,"size":85354,"snap":"2020-34-2020-40","text_gpt3_token_len":20118,"char_repetition_ratio":0.26672524,"word_repetition_ratio":0.83591026,"special_character_ratio":0.21812686,"punctuation_ratio":0.21338493,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9905491,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-05T22:31:55Z\",\"WARC-Record-ID\":\"<urn:uuid:89f70ac2-c131-4908-a770-894c779771a1>\",\"Content-Length\":\"332585\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:683e40db-eb4b-4c85-bda2-5d0fcd8606f2>\",\"WARC-Concurrent-To\":\"<urn:uuid:f85aab7d-20e1-4b49-9fa1-53a60b65df39>\",\"WARC-IP-Address\":\"13.32.202.59\",\"WARC-Target-URI\":\"http://gluon-ts.mxnet.io/master/api/gluonts/gluonts.mx.distribution.html\",\"WARC-Payload-Digest\":\"sha1:VTZKVY6NSM2UTFP4EDW363O7NHSKCWG2\",\"WARC-Block-Digest\":\"sha1:VMNHZKAKZHZJ25XIKBDVNBV3VEXALIGR\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735989.10_warc_CC-MAIN-20200805212258-20200806002258-00504.warc.gz\"}"}
https://tex.stackexchange.com/questions/346562/how-to-split-cell-text-into-multiline-in-table
[ "# How to split cell text into multiline in table?\n\nI'm writing a paper using the ACM double column template, and I have a table which I want to fit to just one column, and in order to do so, I want to split the text inside the cells to multiple lines. I have the following code segment:\n\n\\documentclass{paper}\n\\begin{document}\n\n\\begin{table}[thb]\n\\centering\n\\begin{tabular}{|l|c|l|} \\hline\n\\textbf{Col1} & \\textbf{Col2} & \\textbf{Col3} \\\\ \\hline\nThis is a \\\\very long line \\\\of text & Short text & Another long \\\\line of text \\\\ \\hline\n$\\pi$ & 1 in 5& Common in math\\\\\n\\hline\\end{tabular}\n\\end{table}\n\n\\end{document}\n\n\nBut, what it gives me an output which is not quite right, as can be seen below.", null, "First of all, the second part of the text of the last column is pushed into the first column. Then, also the last columns borders are not full. Any idea how to split a text to multiline in table cell?\n\nPackage tabularx is your friend:", null, "\\documentclass[twocolumn]{paper}\n\\usepackage{tabularx}\n\\newcolumntype{L}{>{\\raggedright\\arraybackslash}X}\n\n\\begin{document}\n\\begin{table}\n\\centering\n\\begin{tabularx}{\\linewidth}{|L|c|L|}\n\\hline\n\\textbf{Col1} & \\textbf{Col2} & \\textbf{Col3} \\\\\n\\hline\nThis is a very long line of text & Short text & Another long line of text \\\\\n\\hline\n$\\pi$ & 1 in 5& Common in math\\\\\n\\hline\n\\end{tabularx}\n\\end{table}\n\\end{document}\n\n• It is my understanding that the tabu package provides improvements with respect to tabularx, which may be interesting when typesetting long texts into tables, such as footnotes compatibility with hyperref and table use, etc. Maybe this is to be recommended as well. Sep 24, 2018 at 9:36\n• @SergeiPoulp, that was intention, but package is buggy and not maintained. with recent changes in the array (which is used in it) package it is not compatible with it anymopre, so it is not reliable. i would avoid it. Sep 24, 2018 at 9:39\n\nIf you want to control line breaks in cells, you can use the \\makecell command from the homonymous package. In addition, it has tools to add some vertical padding to cells:\n\n\\documentclass{paper}\n\n\\usepackage{array, makecell} %\n\n\\begin{document}\n\n\\begin{table}[thb]\n\\centering\\renewcommand\\cellalign{lc}\n\\setcellgapes{3pt}\\makegapedcells\n\\begin{tabular}{|l|c|l|} \\hline\n\\textbf{Col1} & \\textbf{Col2} & \\textbf{Col3} \\\\ \\hline\n\\makecell{This is a \\\\very long line \\\\of text} & Short text &\\makecell{ Another long \\\\line of text} \\\\ \\hline\n$\\pi$ & 1 in 5& Common in math\\\\\n\\hline\\end{tabular}\n\\end{table}\n\n\\end{document}", null, "Your wrong column alignment results from missing column delimiters (&) in your source code and can be fixed easily. 
This also fixes the broken vertical lines.\n\n\\begin{table}[thb]\n\\centering\n\\begin{tabular}{|l|c|l|}\n\\hline\n\\textbf{Col1} & \\textbf{Col2} & \\textbf{Col3} \\\\ \\hline\nThis is a & & \\\\ % <===== note the empty cells in this line\nvery long line & Short text & Another long \\\\\nof text & & line of text\\\\ \\hline % <===== and in this\n$\\pi$ & 1 in 5 & Common in math \\\\ \\hline\n\\end{tabular}\n\\end{table}", null, "A less manual way would be to use the p column type, but you have to specify the column width and LaTeX will do the linebreaks for you, but your last cell will probably also break:\n\n\\begin{table}[thb]\n\\centering\n\\begin{tabular}{|p{1.8cm}|c|p{2cm}|}\n\\hline\n\\textbf{Col1} & \\textbf{Col2} & \\textbf{Col3} \\\\ \\hline\nThis is a very long line of text & Short text & Another long line of text \\\\ \\hline\n$\\pi$ & 1 in 5 & Common in math \\\\ \\hline\n\\end{tabular}\n\\end{table}", null, "Or you can use the multirow package. Load \\usepackage{multirow} in your preamble {anywhere between \\documentclass{paper} and \\begin{document}, and then you can do:\n\n\\begin{table}[thb]\n\\centering\n\\begin{tabular}{|l|c|l|}\n\\hline\n\\textbf{Col1} & \\textbf{Col2} & \\textbf{Col3} \\\\ \\hline\n\\multirow{3}{*}{\\parbox{1.8cm}{This is a very long line of text}} & & \\multirow{3}{*}{\\parbox{2cm}{Another long line of text}} \\\\\n& Short text & \\\\\n& & \\\\ \\hline\n$\\pi$ & 1 in 5 & Common in math \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\nNote that this aligns the columns vertically centered. The \\parbox{length} defines when your text should be broken.", null, "I found the answer in the link below: https://pt.overleaf.com/learn/latex/Tables I hope it helps!" ]
[ null, "https://i.stack.imgur.com/q45hT.png", null, "https://i.stack.imgur.com/bLgIO.png", null, "https://i.stack.imgur.com/kAhTm.png", null, "https://i.stack.imgur.com/ti3fe.png", null, "https://i.stack.imgur.com/uVeG9.png", null, "https://i.stack.imgur.com/TvTBB.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7931914,"math_prob":0.816031,"size":843,"snap":"2022-05-2022-21","text_gpt3_token_len":247,"char_repetition_ratio":0.115613826,"word_repetition_ratio":0.0,"special_character_ratio":0.2728351,"punctuation_ratio":0.07692308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98261076,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,5,null,5,null,5,null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-25T17:14:34Z\",\"WARC-Record-ID\":\"<urn:uuid:a41d455b-8b36-482c-9e08-bcb66d698335>\",\"Content-Length\":\"257407\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5568d8be-92e4-44a5-8c81-dd7423725359>\",\"WARC-Concurrent-To\":\"<urn:uuid:203dfb25-8a71-49fc-a3b1-35ee802c0c7e>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/346562/how-to-split-cell-text-into-multiline-in-table\",\"WARC-Payload-Digest\":\"sha1:FTVU7IUML75JYV6Z73EW5CS7BFXTKUQ5\",\"WARC-Block-Digest\":\"sha1:KGHNFMTRSN2NWIL5GJIHNFBIX3H7FCD4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662588661.65_warc_CC-MAIN-20220525151311-20220525181311-00543.warc.gz\"}"}
http://sqa.fyicenter.com/1000155_Convert_JMeter_Test_Result_to_TestMan.html
[ "Convert JMeter Test Result to TestMan\n\nQ\n\nHow to Convert JMeter Test Result to TestMan Data Model?\n\n✍: FYIcenter.com\n\nA", null, "In order migrate JMeter test result to TestMan, we need to decide how to convert JMeter test result data to TestMan tables and fields.\n\nOne way to map JMeter test result to TestMan is described below:\n\n• 1. For each unique value of Test_Run_Case_Step.trid, create a record in Test_Run table.\n• 2. For each unique value of Test_Run_Case_Step.tcid within the same jmeter_test_result.trid, create a record in Test_Run_Case table.\n• 3. Each JMeter test result record represents a record in the Test_Run_Case_Step table.\n• 4. \"Duration\", \"Total\", and \"Failed\" in Test_Run and Test_Run_Case tables can be calculated from Test_Run_Case_Step table.\n\nHere is the SQL script ConvertJMeterData.sql to convert the JMeter Test Result to TestMan tables:\n\n-- ConvertJMeterData.sql\n\nuse TestMan;\n\ninsert into Test_Run (\nReference, -- jmeter_test_result.trid\nTimeStamp -- jmeter_test_result.timestamp\n) select\ntrid,\nmin(from_unixtime(timestamp/1000))\nfrom jmeter_test_result\ngroup by trid;\n\ninsert into Test_Run_Case (\nTest_Run_ID,\nReference, -- jmeter_test_result.tcid\nTimeStamp, -- jmeter_test_result.timestamp\nTarget, -- jmeter_test_result.target\nStation, -- jmeter_test_result.station\nTester, -- jmeter_test_result.tester\nName, -- jmeter_test_result.tcname\nComponent, -- jmeter_test_result.component\nFunction, -- jmeter_test_result.function\nTotal\n) select\nr.ID,\nj.tcid,\nmin(from_unixtime(j.timestamp/1000)),\nmin(j.target),\nmin(j.station),\nmin(j.tester),\nmin(j.tcname),\nmin(j.component),\nmin(j.function),\ncount(*)\nfrom jmeter_test_result j, Test_Run r\nwhere j.trid = r.reference\ngroup by r.ID, j.tcid;\n\ninsert into Test_Run_Case_Step (\nTest_Run_Case_ID,\nTimestamp, -- jmeter_test_result.timestamp\nDuration, -- jmeter_test_result.elapsed\nName, -- jmeter_test_result.label\nSuccess, -- jmeter_test_result.success\nOutput -- jmeter_test_result.output\n) select\nc.ID,\nfrom_unixtime(j.timestamp/1000),\nj.elapsed,\nj.label,\nif(j.success='true', 1, 0),\nj.output\nfrom jmeter_test_result j, Test_Run r, Test_Run_Case c\nwhere j.trid = r.Reference\nand r.ID = c.Test_Run_ID\nand j.tcid = c.Reference;\n\nSee the next tutorial on how to update Test_Run and Test_Run_Case tables from Test_Run_Case_Step table.\n\n2017-12-13, 516👍, 0💬" ]
[ null, "http://sqa.fyicenter.com/Test-Management/_icon_Test-Management.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.51931024,"math_prob":0.44581717,"size":2438,"snap":"2019-43-2019-47","text_gpt3_token_len":597,"char_repetition_ratio":0.2013147,"word_repetition_ratio":0.27027026,"special_character_ratio":0.24159147,"punctuation_ratio":0.2560175,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99596095,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-18T12:06:03Z\",\"WARC-Record-ID\":\"<urn:uuid:11adf24f-0f6c-4fe5-b410-63dbe8eed19a>\",\"Content-Length\":\"20563\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:84832d07-90b1-4265-911c-f8a799fca4af>\",\"WARC-Concurrent-To\":\"<urn:uuid:eca6c51a-12e0-415d-9e13-cb95756a3cc3>\",\"WARC-IP-Address\":\"74.208.236.35\",\"WARC-Target-URI\":\"http://sqa.fyicenter.com/1000155_Convert_JMeter_Test_Result_to_TestMan.html\",\"WARC-Payload-Digest\":\"sha1:RHT56A52FEDIQ76OKN4BB4HT3SWTILWX\",\"WARC-Block-Digest\":\"sha1:UB73JG2WGLNGOSYH43YGONHLAXREIVOH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986682037.37_warc_CC-MAIN-20191018104351-20191018131851-00262.warc.gz\"}"}
https://blog.csdn.net/sd4567855/article/details/80789163
[ "【数据结构与算法】第五章:散列\n\n# 第五章:散列\n\n• 散列表(hash table):只支持二叉查找树所允许的一部分操作.\n• 散列(hashing):一种用于以常数平均时间执行插入,删除和查找的级数.\n\n## 5.1 一般想法\n\n• 理想的散列表数据结构只不过是一个包含有关键字的具体固定大小的数组.一般而言,这个关键字就是带有一个相关之的字符串.\n• 我们把表的大小记作 TableSize T a b l e S i z e $TableSize$,并将其理解为散列数据结构的一部分而不仅仅是浮动于全局的某个变量.通常的习惯是让表从 0 0 $0$ TableSize1 T a b l e S i z e − 1 $TableSize - 1$ 变化.\n• 散列函数(hash function):每个关键字映射到从 0 0 $0$ TableSize1 T a b l e S i z e − 1 $TableSize - 1$ 这个范围中的某个数,并且被放到适当额单元中.最理想的情况是,运算简单并且任何两个不同的关键字映射到不同的单元.但是这是不可能的,因为单元的数目是有限的,而关键字是用不完的.\n• 冲突(collision):当两个关键字散列到同一个值的时候.\n\n## 5.2 散列函数\n\n• 当关键字是整数时,我们保证表的大小是一个素数,可以直接返回 KeymodTableSize K e y m o d T a b l e S i z e $Key mod TableSize$\n• 当关键字是字符串时,散列函数需要仔细选择.\n1. 一种选择方法是可以将字符串中的字符的 ASCII 码值相加起来.如下\n//将字符逐个相加来处理整个字符串\ntypedef unsigned int Index;\nIndex Hash( const char *Key, int TableSize){\nunsigned int HashVal = 0;\nwhile( *Key != '\\0')\nHashVal += *Key++;\nreturn HashVal % TableSize;\n}\n\n2.另一种散列函数如下\n\nIndex Hash2( const char *Key, int TableSize){\nreturn( Key + 27 * Key + 729 * Key) % TableSize;\n}\n\n3.第三种散列函数\n\nIndex Hash3( const char *Key, int TableSize){\nunsigned int HashVal = 0;\nwhile( *Key != '\\0')\nHashVal = ( HashVal<<5) + *Key++;\nreturn HashVal % TableSize;\n}\n\n## 5.3 解决冲突的方法\n\n### 5.3.1 分离链接法\n\n• 分离链接法(separate chaining):将散列到同一个值的所有元素保留到一个表中.为了方便,这些表都有表头.如下图.", null, "1. Find操作,我们使用散列函数来确定究竟是哪一个表.此时,我们通常的方式遍历该表并返回所找到的被查找项所在的位置.\n2. Insert操作,我们遍历整个表以检查该元素是否已经处在适当的位置.如果要插入重复元,那么通常要留出一个额外的域,这个域当重复元出现时候增加1.如果这个元素是个新元素,那么它或者被插入表的前端,或者被擦汗如到表的末尾.又是新元素插入到表的前端不仅是因为方便,而且新插入的元素可能最先被访问.\nstruct ListNode;\ntypedef struct ListNode *Position;\nstruct HashTbl;\ntypedef struct HashTbl *HashTable;\n\nstruct ListNode{\nElemeentType Element;\nPosition Next;\n};\n\nstruct HashTbl{\nint TableSize;\nList *TheLists; //指向ListNode指针的指针\n};\n\nHashTable InitializeTable( int TableSize);\nvoid DestroyTable( HashTable H);\nPosition Find( ElementType Key, HashTable H);\nvoid Insert( ElementType Key, HashTable H);\nElementType Retrieve( Position P);\n//初始化过程\nHashTable InitializeTable( int TableSize){\nHashTable H;\nint i;\n\nif( TableSize < MinTableSize){\nError(\" Table is so small\");\nreturn NULL;\n}\n\nH = ( HashTable)malloc( sizeof( struct HashTbl));\nif( H == NULL)\nFatalError(\" Out of space\");\n\nH->TableSize = NextPrime( TableSize);\nH->TheLists = malloc( sizeof( List) * H->TableSize);\nif( H->TheLists == NULL)\nFatalError(\" Out of space\");\n\nfor( i=0; i<H->TableSize; i++){\nH->TheList[i] = ( Position)malloc( sizeof( struct ListNode));\nif( H->TheList[i] == NULL)\nFatalError(\" Out of space\");\nelse\nH->TheLists[i]->Next == NULL;\n}\n}\n\n//寻找并返回一个指针\nPosition Find( ElementType Key, HashTable H){\nPosition P;\nList L;\nL = H->TheLists[ Hash( key, H->TableSize)];\nP = L->Next;//L是头指针\nwhile( P!=NULL && P->Element !=Key)\nP = P->Next;\nreturn P;\n}\n\n//插入\nvoid Insert( ElementType Key, HashTable H){\nPosition Pos, NewCell;\nList L;\nPos = Find( Key, H);\nif( Pos == NULL){\nNewCell = ( Position)malloc( sizeof( struct ListNode));\nif( NewCell == NULL)\nFatalError(\" Out of space\");\nelse{\nL = H->TheLists[ Hash( Key, H->TableSize)];\nNewCell->Next = L->Next;\nNewCell->Element = Key; //当是整数时,Find同.\nL->Next = NewCell;\n}\n}\n}\n• 分离链接散列算法的缺点\n\n1. 需要指针.\n2. 
给新单元分配地址需要时间,导致了算法时间缓慢.\n• 装填因子(load factor) λ λ $\\lambda$:散列表中元素的个数与散列表大小的比值.\n\n### 5.3.2 开放定址法\n\n• 开放定址散列法( Open addressing hashing):如果有冲突发生,那么就要尝试选择另外的单元,直到找出空的单元为止.更一般地,单元 h0(X),h1(X),h2(X), h 0 ( X ) , h 1 ( X ) , h 2 ( X ) , $h_0(X), h_1(X), h_2(X),$ 等等相继被试选其中 hi(X)=(Hash(X)+F(i))modTableSize h i ( X ) = ( H a s h ( X ) + F ( i ) ) m o d T a b l e S i z e $h_i(X) = ( Hash(X) + F(i)) mod TableSize$, 且 F(0)=0 F ( 0 ) = 0 $F(0) = 0$ .\n考虑到所有的数据都要置入表内部,因此开放定址散列法需要的表比分离链接散列表的表要大.一般而言, 装填因子 λ λ $\\lambda$ 应该低于 0.5 .\n\n#### 5.3.2.1 线性探测法\n\n• 在线性探测法中,函数 F F $F$ i i $i$ 的线性函数,典型情形是 F(i)=i F ( i ) = i $F(i) = i$ .这相当于逐个探测每个单元以查找出一个空单元.\nfor example: 把关键字{ 89, 18, 49, 58, 69},插入到一个散列表中,此时的冲突解决办法选择 F(i)=i F ( i ) = i $F(i) = i$ .", null, "第一个冲突发生在插入关键字 49 时,它被放到下一个空闲地址,即 0 处.\n随后,关键字 58 依次与 18,89,49 发生冲突,试选三次后才找到空单元 2 .\n关键字 69 亦是如此.\n只要表足够大,我们总能找到一个自由单元,但是花费的时间是非常多的.\n\n• 一次聚集(primary clustering):使用线性探测法时,即时表相对于比较空的时候,一些占据的单元也会形成一些区块.\n\n#### 5.3.2.2 平方探测法\n\n• 平方探测发时消除线性探测中狙击问题的冲突解决方法.本质上就是冲突函数为幂次函数的探测方法.流行的选择是 F(i)=i2 F ( i ) = i 2 $F(i) = i^2$ .\nfor example: 把关键字{ 89, 18, 49, 58, 69},插入到一个散列表中.", null, "当 49 与 89 冲突时,下一个位置为下一个单元,该单元是空的,因此 49 就放在此处.\n此后 58 在位置 8 处发生冲突,其后相邻的单元经过探测得知发生了另外的冲突.下一个探测的位置就在距离位置 8 为 2^2 = 4 远处,此时,这个单元也是空单元,因此 关键字 58 就放在单元 2 的位置处.\n对于关键字 69 处理的过程也是一样.\n\n• 对于线性探测,让元素几乎填满散列表并不是个好主意,因为这时候表的性能将会降低.\n\n• 对于平方探测,一旦表被填写超过一半,当表的大小不是素数甚至在表被填满一半之前,就不能保证可以找到一个空单元了.这是因为最多有表的一般可以用作解决冲突的备选位置.\n\n• 定理\n如果使用平方探测,并且表的大小是素数,那么当表至少有一半是空的时候,总能够插入一个新的元素.\n\nh(X)+i2=h(X)+j2(modTableSize) h ( X ) + i 2 = h ( X ) + j 2 ( m o d T a b l e S i z e ) $h(X) + i^2 = h(X) + j^2 (mod TableSize)$\ni2=j2(modTableSize) i 2 = j 2 ( m o d T a b l e S i z e ) $i^2 = j^2 (mod TableSize)$\n(ij)(i+j)=0(modTableSize) ( i − j ) ( i + j ) = 0 ( m o d T a b l e S i z e ) $( i - j)( i + j) = 0 (mod TableSize)$\n\ntypedef unsigned int Index;\ntypedef Index Position;\n\nstruct HashTabl;\ntypedef struct HashTbl * HashTable;\n\nstruct HashEntry{\nElementType Element;\nEnum KindOfEntry Info;\n};\n\ntypedef struct HashEntry Cell;\n\nstruct HashTbl{\nint TableSize;\nCell *TheCells;\n};\n\nHashTable InitializeTable( int TableSize);\nvoid DestroyTable( HashTable H);\nPosition Find( ElementType Key, HashTable H);\nvoid Insert( ElementType Key, HashTable H);\n//初始化开放定址散列表\nHashTable InitializeTable( int TableSize){\nHashTable H;\nint i;\nif( TableSize < MinTableSize){\nError(\" Table size is too small\");\nreturn NULL;\n}\n\nH = ( HashTable)malloc( sizeof( struct( HashTbl)));\nif( H == NULL)\nFatalError(\" Out of space\");\n\nH->TableSize = NextPrime( TableSize);\nH->TheCells = malloc( sizeof( Cell) * H->TableSize);\nif( H->TheCells == NULL)\nFatalError(\" Out of space\");\n\nfor( i = 0; i < H->TableSize; i++)\nH->TheCells[i].Info = Empty;\n\nreturn H;\n}\n//Find\nPosition Find( ElementType Key, HashTable H){\nPosition CurrentPos;\nint CollisionNUm;\n\nCollisionNum = 0;\nCurrentPos = Hash( Key, H->TableSize);\nwhile( H->TheCells[ CurrentPos].Info != Empty &&\nH->TheCells[ CurrentPos].Element != Key){\nCurrentPos += 2 * ++CollisionNum - 1;\nif( CurrentPos >= H->TableSize )\nCurrentPos-= H>TableSize;\n}\nreturn CurrentPos;\n}\n//Insert\nvoid Insert( ElementType Key, HashTable H){\nPosition Pos;\nPos = Find( Key, H)\nif( H->TheCells[ Pos].Info != Legitimate){\nH->TheCells[ Pos].Info = Legitimate;\nH->TheCells[ Pos].Element = Key;\n}\n}\n• 二次聚集(secondary clustering)\n\n#### 5.3.2.3 双散列\n\n• 双散列(double hashing):对于双散列,一种流行的选择是 F(i)=iHash2(X) F ( i ) = i ∗ H a s h 2 ( X ) $F(i) = i * Hash_2(X)$ 这个公式意味着,我们将第二个散列函数应用到 X X $X$ 
并在距离 Hash2(X),2Hash2(X) H a s h 2 ( X ) , 2 H a s h 2 ( X ) $Hash_2(X) , 2Hash_2(X)$ 等处探测.但是要记住, Hash2(X) H a s h 2 ( X ) $Hash_2(X)$ 的选择不好将要导致灾难性的后果.\nfor example:把 99 插入到 { 89, 18, 49, 58, 69} 中.通常选择的是 Hash2(X)=Xmod9 H a s h 2 ( X ) = X m o d 9 $Hash_2(X) = X mod 9$ 将不再起作用.\n\n## 5.4 再散列\n\n• 对于使用平方探测的开放定址散列法,如果表的元素被填充的太满,那么操作的运行时间将会开始消耗的非常场,并且 Insert 操作可能失败.这种情况可能发生在有太多插入和移动的情形下.\n• 一种解决办法:建立另外一个大约两倍大的表(而且使用一个相关的新散列函数),扫描整个源是散列表,计算每个未删除的元素的新散列值并将它插入到新的散列表中.\nfor example:将元素 13, 15, 24, 6插入到大小为 7 的开放定制散列表中.其中散列函数是 h(X)=Xmod7 h ( X ) = X m o d 7 $h(X) = X mod 7$.假设使用线性探测法解决这个问题,那么插入结果如下图所示.\n06\n115\n2\n324\n4\n5\n613\n\n06\n115\n223\n324\n4\n5\n613\n\n0\n1\n2\n3\n4\n5\n66\n723\n824\n9\n10\n11\n12\n1313\n14\n1515\n16\n17\n\n- 以上这个过程就叫做再散列(rehashing).但我们可以观察到这是一个非常昂过的操作;其运行时间是 O(N) O ( N ) $O(N)$ 因为有 N N $N$ 个元素要再散列得到的表大小约为 2N 2 N $2N$.不过,由于不经常发生,因此实际效果根本没有这么差.特别地,在最后的再散列之前必然已经存在 N2 N 2 $\\frac{N}{2}$ 次 Insert 操作,当然添加到每一个插入上的花费基本是一个常数开销.\n- 再散列可以用平方探测以多种方法实现.\n1. 一种做法是只要表满到一半就再散列.\n2. 一种极端的方法是只有插入失败的时候再再散列.\n\n HashTable Rehash( HashTable H){\nint i,OldSize;\nCell *OldCells;\n\nOldCells = H->TheCells;\nOldSize = H->TableSize;\n\nH = InitializeTable( 2 * OldSize);\n\nfor( i = 0; i <= OldSize; i++){\nif( OldCells[i].Info == Legitimate)\nInsert( OldCells[i].Element, H);\n}\nfree(OldCells)\nreturn H;\n}\n\n## 5.5 可扩散列\n\n• 当数据量太大以至于装不进主存时,此时主要考虑的时检索数据所需要的硬盘存取次数.\n• 我们假设任意时刻都有 N N $N$ 个记录需要储存, N N $N$ 的值伴随时间变化而发生改变. 此外,最多可以把 M M $M$ 个记录放入一个磁盘区块.\n如果使用开放定址散列法或者分离链接散列法,主要问题在于一次 Find 操作期间冲突可能引起多个区块被考察,甚至对于理想分布的散列表亦是如此.不仅如此,当表过满的时候,必须执行代价巨大的再散列操作,它需要 O(N) O ( N ) $O(N)$ 次磁盘访问.\n• 可扩散列(extendible hashing):它允许用两次磁盘访问执行一次 Find 操作.插入操作也需要很少的磁盘访问.\n• 现在我们假设数据由几个 6 比特的整数组成,参考下图,我们使用B-树的形式储存.", null, "“树”根含有 4 个指针,它们由这些数据的前两个比特来确定.每片树叶之多有 M=4 M = 4 $M = 4$ 个元素.碰巧的时,这里没一片树叶中的数据前两个比特都是相同的.为了更正,我们用 D D $D$ 表示根所使用的比特数,可称之为 目录(directory).于是,目录中的项数为 2D 2 D $2^D$. dL d L $d_L$ 是树叶 L L $L$所有元素共有的最高位的位数, dLD d L ≤ D $d_L \\le D$.\n现在想要插入关键字 100100 ,它进入第三片树叶,但是第三片树叶已经满了,没有空间存放它,此时我们将这片树叶分裂成两片树叶,它们由前三个比特确定.这需要将目录的大小增加到 3.如下图.", null, "注意,所有未被分裂的树叶现在各由两个相邻的目录所指.因此,虽然整个目录被重写,但是其他树叶实际上并没有被访问.\n现在如果插入关键字 000000 ,那么第一片树叶就要被分裂,生成 dL=3 d L = 3 $d_L = 3$ 的两片树叶,由于 D=3 D = 3 $D = 3$ ,因此再目录中唯一的变化便是 000 和 001 指针的更新.如下图.", null, "• 这里我们还有一些重要的细节尚未考虑.\n首先,有可能当一片树叶的元素多余 D+1 D + 1 $D + 1$ 个前导位相同时需要多个目录分裂.例如,从原先的例子开始看起, D=2 D = 2 $D = 2$ ,如果插入 111010, 111011 并在最后插入 111100,那么目录的大小必须增加到 4 以区分这五个关键字.\n其次,存在重复关键字(duplicate key)的可能性,若存在多余 M M $M$ 个重复关键字,那么该算法根本无效,需要做出其他安排.\n\n• 可扩散列的性能:\n一个合理的假设:位模式(bit pattern)是均匀分布的.\n基于这个假设,我们得到树叶的期望个数为 NMlog2e N M l o g 2 e $\\frac{N}{M}log_2e$.因此,平均树叶满的程度为 ln2 l n 2 $ln2$.这个和B-树是一样的,这并不奇怪,因为对于这两个数据结构而言,当第 M+1 M + 1 $M + 1$ 被添加时,一些新的节点便会将建立起来.\n目录的期望大小为 O(N1+1/M/M) O ( N 1 + 1 / M / M ) $O(N^{1+1/M}/M)$ 因此,如果 M M $M$ 的值很小,那么目录可能过分地大.为了位置更小的目录,可以把第二个磁盘访问添加到每个 Find 操作中去.如果目录太大装不进主存,那么第二个磁盘访问怎么说还是需要的.\n\n## 我的微信公众号", null, "08-21", null, "219", null, "01-22", null, "1552\n02-26", null, "5550\n10-04", null, "1513\n08-24", null, "1万+\n10-31", null, "63\n12-24", null, "642\n08-05", null, "1万+\n06-12", null, "1260\n04-06", null, "73" ]
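To complement the C listings above, here is a minimal separate-chaining sketch in Python (my own illustration, not from the chapter; Python's built-in hash stands in for the hash functions of Section 5.2):

```python
# Minimal separate-chaining hash table (illustrative sketch only).
class ChainedHashTable:
    def __init__(self, table_size=11):              # a small prime table size
        self.table_size = table_size
        self.lists = [[] for _ in range(table_size)]

    def _index(self, key):
        return hash(key) % self.table_size          # Key mod TableSize

    def insert(self, key):
        bucket = self.lists[self._index(key)]
        if key not in bucket:                       # ignore duplicates
            bucket.insert(0, key)                   # new elements go to the front

    def find(self, key):
        bucket = self.lists[self._index(key)]
        return key if key in bucket else None

    def load_factor(self):                          # lambda = elements / table size
        return sum(len(b) for b in self.lists) / self.table_size

table = ChainedHashTable()
for k in (89, 18, 49, 58, 69):
    table.insert(k)
print(table.find(58), table.load_factor())          # -> 58 0.4545...
```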
[ null, "http://static.zybuluo.com/vsym/w70ob8ppe72rzaxneb9fizmv/%E6%8D%95%E8%8E%B7.PNG", null, "http://static.zybuluo.com/vsym/erp8mrvzlubc905bzawi3u9s/image.png", null, "http://static.zybuluo.com/vsym/idns57sbgybhxyjvjo05p1vr/image.png", null, "http://static.zybuluo.com/vsym/eneltkrszre2rhqljka76dzv/image.png", null, "http://static.zybuluo.com/vsym/0gns9704ognbv3fxxmv1x0is/1.PNG", null, "http://static.zybuluo.com/vsym/4ojad5zu04pwf27bvitg8n89/image.png", null, "https://img-blog.csdn.net/20171029200801586", null, "https://csdnimg.cn/release/blogv2/dist/pc/img/readCountWhite.png", null, "https://csdnimg.cn/release/blogv2/dist/pc/img/[email protected]", null, "https://csdnimg.cn/release/blogv2/dist/pc/img/readCountWhite.png", null, "https://csdnimg.cn/release/blogv2/dist/pc/img/readCountWhite.png", null, "https://csdnimg.cn/release/blogv2/dist/pc/img/readCountWhite.png", null, "https://csdnimg.cn/release/blogv2/dist/pc/img/readCountWhite.png", null, "https://csdnimg.cn/release/blogv2/dist/pc/img/readCountWhite.png", null, "https://csdnimg.cn/release/blogv2/dist/pc/img/readCountWhite.png", null, "https://csdnimg.cn/release/blogv2/dist/pc/img/readCountWhite.png", null, "https://csdnimg.cn/release/blogv2/dist/pc/img/readCountWhite.png", null, "https://csdnimg.cn/release/blogv2/dist/pc/img/readCountWhite.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.71479046,"math_prob":0.9984704,"size":9860,"snap":"2020-45-2020-50","text_gpt3_token_len":6275,"char_repetition_ratio":0.13494319,"word_repetition_ratio":0.2554645,"special_character_ratio":0.32332656,"punctuation_ratio":0.21902439,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9938858,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,2,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T02:39:28Z\",\"WARC-Record-ID\":\"<urn:uuid:ca32418b-4465-4089-ae47-2e5a594f2910>\",\"Content-Length\":\"300497\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:404ee126-0686-4602-85a4-d873d5757a56>\",\"WARC-Concurrent-To\":\"<urn:uuid:26bee881-fc1c-4464-81b0-2499a0b1fd98>\",\"WARC-IP-Address\":\"101.200.35.175\",\"WARC-Target-URI\":\"https://blog.csdn.net/sd4567855/article/details/80789163\",\"WARC-Payload-Digest\":\"sha1:SH7RQ54QGP4G2BA2NSLK3BFIX3E5V23P\",\"WARC-Block-Digest\":\"sha1:NOWQYY6BLOMCP3X4SWAF3KPNNBLAU65Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107906872.85_warc_CC-MAIN-20201030003928-20201030033928-00696.warc.gz\"}"}
https://www.biostars.org/p/10454/
[ "How To Show The Name Of Genes On Manhattan Plot\n2\n5\nEntering edit mode\n12.4 years ago\nSara ▴ 130\n\nHi,\n\nI draw Manhattan plot in R but I like to show also the name of genes on it (http://www.genetics.org/content/187/2/367/F2.large.jpg) ? how can i do that ?\n\ngwas visualization r • 14k views\n8\nEntering edit mode\n12.4 years ago\nThomas ▴ 760\n\nIn relation to the above comments I just wanted to add a small example of a manhatten plot with addition of rs number (or genes) in R:\n\n### generation of a data set\n\npval<-runif(100000, 0, 1); logPval<--log(pval,base=10); pos=1:100000; chr<-paste(\"chr\",rep(1:20,ea=5000),sep=\"\"); rsID<-paste(\"rs\",1:100000,sep=\"\"); data<-as.data.frame(cbind(chr,pos,rsID,pval,logPval));\n\nvek<-as.numeric(gsub(data$chr,pattern=\"chr\",replacement=\"\"))%%2 ### vector that defines which dot should have a rs number (all with pval<0.0001) vek2<-ifelse(as.numeric(as.character(data[,\"pval\"]))<0.0001,T,F) ### plotting plot(x=as.numeric(data$pos),y=as.numeric(as.character(data$logPval)),col=c(\"red\",\"blue\")[as.factor(vek)]) ### adding the text text(labels=as.character(data$rsID[vek2]),x=as.numeric(data$pos)[vek2],y=as.numeric(as.character(data$logPval))[vek2],pos=4,cex=0.8)\n\n5\nEntering edit mode\n12.4 years ago\nSander Timmer ▴ 710\n\nIf I look at this plot I would almost think that these are placed there by hand.\n\nI'm not sure how your Manhattan plot looks like right now (did you use any package?) but Getting Genetics Done has quite some nice code examples about making a Manhattan plot and QQ plot in R using ggplot2\n\nIf you are willing to leave R you could also look at WGA viewer which can plot your Manhattan plot and add different data tracks next to it.\n\nIf you're just interested in showing the association later for a specific region I would recommend using LocusZoom, that tool can plot gene names + locations and adds LD information to a locus of interest.\n\n3\nEntering edit mode\n\n@Sander - Thanks for the link to my code. I've also used LocusZoom and found it useful. You can probably d/l a local copy and mod the code to add text to the plot.\n\n@Sarah - It shouldn't be hard to add the text in R using a text() command after creating the plot using the code in http://gettinggeneticsdone.blogspot.com/2011/04/annotated-manhattan-plots-and-qq-plots.html. You could even join your list of SNPs to a list of genes (as in http://gettinggeneticsdone.blogspot.com/2011/06/mapping-snps-to-genes-for-gwas.html ) and then automatically add gene labels if p is less than a threshold.\n\n0\nEntering edit mode\n\nI'd agree with Stephen that adding using text is probably the easiest way to do it. Unless you do this everyday and need a function to do it repeatedly, it's probably the best time investment." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.828798,"math_prob":0.72634554,"size":2570,"snap":"2023-40-2023-50","text_gpt3_token_len":696,"char_repetition_ratio":0.098597035,"word_repetition_ratio":0.1534091,"special_character_ratio":0.26692608,"punctuation_ratio":0.15156795,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96664506,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T03:09:17Z\",\"WARC-Record-ID\":\"<urn:uuid:5170b3e5-fa2b-4cf9-b583-af80cf444335>\",\"Content-Length\":\"27836\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7de4d831-f058-4d7c-af42-220a20d8a62f>\",\"WARC-Concurrent-To\":\"<urn:uuid:97cc2556-561c-43dd-870e-6df3be08113e>\",\"WARC-IP-Address\":\"45.79.169.51\",\"WARC-Target-URI\":\"https://www.biostars.org/p/10454/\",\"WARC-Payload-Digest\":\"sha1:WONHSEOLP6TF4E4EUKPHVI6YPWUK4V2B\",\"WARC-Block-Digest\":\"sha1:XEBBQHKBY23J6IQXR6D4YT2UASPFNNLX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100710.22_warc_CC-MAIN-20231208013411-20231208043411-00547.warc.gz\"}"}
https://community.openhab.org/t/solved-calculate-duration-from-number-seconds/37445/11
[ "", null, "# [Solved]Calculate duration from number (seconds)\n\nTags: #<Tag:0x00007f5caa0c1710>\n\nI have an item which receives a number from an http request. that works as expected\nI want to calculate the hours and minutes (hh:mm) and send the new value to another item. Then I want to display that duration.\n\ne.g. 6060 = 01h 41min\n\nCan anyone help`?\n\nI tried several rules but non woked\n\n``````val totalSecs = SecondsNumber.state as Number\n\nval sec = totalSecs % 60\nval min = (totalSecs / 60) % 60\nval hrs = (totalSecs / (60*60)) % 24\nval day = totalSecs / (60*60*24)\n\n``````\n\nCould I ask to give a hint how to include your code to my Player-Item to show the min:sec instead of dezimal.\nHere my item (Squeezeboxplayer)\nI think I have to replace the %d, but how?\n\n``````Number SB_ARZ_Time \"Time [%d]\" { channel=\"squeezebox:squeezeboxplayer:LMS:SB_ARZ:currentPlayingTime\" }\n\n``````\n\nYou don’t. The code above is the body of a Rule. You must create Rules in text .rules files. See\n\nhttp://docs.openhab.org/tutorials/beginner/index.html\n\nhttp://docs.openhab.org/configuration/rules-dsl.html\n\nOK,\nI created a rule:\n\n``````rule \"Calculate decimal time counter\"\nwhen\nItem SB_ARZ_Time changed\nthen\nval totalSecs = SecondsNumber.state as Number\nval sec = totalSecs % 60\nval min = (totalSecs / 60) % 60\nval hrs = (totalSecs / (60*60)) % 24\nval day = totalSecs / (60*60*24)\nend\n``````\n\nBecause of my limited knowledge with object-orientated programming I don’t know how to handle the error which I see in the log.\n\n``````Rule 'Calculate decimal time counter': The name 'SecondsNumber' cannot be resolved to an item or type; line 5, column 18, length 13\n``````\n\nIt just says that you don‘t have an Item called „SecondsNumber“\n\nAlso, this is just a code fragment. You still need to do the work to take day, hrs, min, and sec into a String and postUpdate that to your appropriate Item.\n\nSB_ARZ_Time contains the number, so I changed to:\n\n``````rule \"Calculate decimal time counter\"\nwhen\nItem SB_ARZ_Time changed\nthen\nval totalSecs = SB_ARZ_Time.state as Number\nval sec = totalSecs % 60\nval min = (totalSecs / 60) % 60\nval hrs = (totalSecs / (60*60)) % 24\nval day = totalSecs / (60*60*24)\nend\n``````\n\nNow I get this error:\n\n``````Rule 'Calculate decimal time counter': Unknown variable or command '%'; line 6, column 12, length 14\n``````\n\nI really appreciate your help with some code.\n\nWell shoot. It looks like the Rules DSL can’t work with Numbers for the modulo operator.\n\n``val int totalSecs = (SB_ARZ_Time.state as Number).intValue``\n\nHi Rich,\n\nthank you for your quick reply! I will try it the next days. 
The follow up by Doxer is also very useful to me …\nOnce I have a working solution I will post it here … but it may take some days", null, "kind reagrds\nMarkus\n\nThis is the working rule:\n\n``````rule \"Calculate decimal time counter\"\nwhen\nItem SB_ARZ_Time changed\nthen\nval totalSecs = (SB_ARZ_Time.state as Number).intValue\nval sec = totalSecs % 60\nval min = (totalSecs / 60) % 60\nval hrs = (totalSecs / (60*60)) % 24\nval day = totalSecs / (60*60*24)\nlogInfo(\"default.rules\", min.toString + \":\" + sec.toString)\nend\n``````\n\nNow I have to investigate for the formating (ss:mm) and how to transfer this to the item.", null, "2 Likes\n\nSomething like this should work:\n\n``````TimeString.postUpdate(String::format(\"%02d:%02d:%02d:%02d\", day, hrs, min, sec))\n``````\n\nWhere TimeString is your String Item.\n\n1 Like\n\nGot my problem solved and it is running as expected.\n\nShort summary what I did:\n\nThos items are active:\n\n``````Number VU_Dauer \"Dauer [%s]\" \t\t( VUPLUS )\t{ http=\"<[http://192.168.1.6:80/web/getcurrent:3000:REGEX(.*?<e2eventduration>(.*?)</e2eventduration>.*)]\" }\nString VU_Duration_converted \"Dauer1 [%s]\" \t\t( VUPLUS\n``````\n\nThis rule is set:\n\n``````rule \"Dauer\"\nwhen\nItem VU_Dauer changed\nthen\nval totalSecs = (VU_Dauer.state as Number).intValue\nval min = (totalSecs / 60) % 60\nval hrs = (totalSecs / (60*60)) % 24\nval txt = (VU_Dauer)\nVU_Duration_converted.postUpdate(String::format(\"%02d:%02d\", hrs, min))\nlogWarn(\"hilfe\", \"\" +totalSecs)\nend\n``````\n\nLog tells me the correct received Number in seconds\n\n``````2017-12-21 11:37:49.498 [WARN ] [eclipse.smarthome.model.script.hilfe] - 21600\n``````\n\nRunning on OH 2 and OH 2.2 (release)\nThank your for the help provided, appreciate it very much!\n\nHi Markus,\nyour post helped me how to transfer the value from the rule to the item. Learning by doing", null, "1 Like" ]
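Outside of the Rules DSL, the same conversion is a one-liner with integer division; a tiny Python illustration (not part of the thread):

```python
def format_duration(total_secs: int) -> str:
    """Render seconds as HHh MMmin, e.g. 6060 -> '01h 41min'."""
    hrs, rem = divmod(total_secs, 3600)
    mins, _ = divmod(rem, 60)
    return f"{hrs:02d}h {mins:02d}min"

print(format_duration(6060))   # -> 01h 41min
print(format_duration(21600))  # -> 06h 00min (the value logged in the thread)
```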
[ null, "https://community-openhab-org.s3-eu-central-1.amazonaws.com/original/2X/0/0bf8618e720957d07d17407b2c0182b15ff28db9.png", null, "https://community.openhab.org/images/emoji/twitter/slight_smile.png", null, "https://community.openhab.org/images/emoji/twitter/frowning.png", null, "https://community.openhab.org/images/emoji/twitter/slight_smile.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6622596,"math_prob":0.89685553,"size":842,"snap":"2020-24-2020-29","text_gpt3_token_len":285,"char_repetition_ratio":0.10501193,"word_repetition_ratio":0.0,"special_character_ratio":0.347981,"punctuation_ratio":0.21472393,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97537977,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-31T19:51:53Z\",\"WARC-Record-ID\":\"<urn:uuid:3ef8fb8a-ff59-4757-a605-ac0b5493999c>\",\"Content-Length\":\"47560\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:efaf1b7b-2896-4594-bde8-65d82aabe4e1>\",\"WARC-Concurrent-To\":\"<urn:uuid:42c145c5-9cee-490f-9ef3-9d0a7f470f28>\",\"WARC-IP-Address\":\"46.101.248.207\",\"WARC-Target-URI\":\"https://community.openhab.org/t/solved-calculate-duration-from-number-seconds/37445/11\",\"WARC-Payload-Digest\":\"sha1:QM43P64QEVAEUQBUO77MJ3PY32EXZGSJ\",\"WARC-Block-Digest\":\"sha1:I2UOKSSSWBIQQRTYRYW5UQQM2OOTQUJO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347413624.48_warc_CC-MAIN-20200531182830-20200531212830-00531.warc.gz\"}"}
https://investingyrkwpjl.netlify.app/halbur65579qiqy/how-to-compute-rate-of-return-accounting-fa.html
[ "## How to compute rate of return accounting\n\n1 Feb 2017 Excel offers three functions for calculating the internal rate of return, and I recommend you use all three.\n\nFor the purpose of calculating net present value and internal rate of return, do companies use the accrual basis of accounting? Explain. Why might a firm choose  30 Oct 2019 The accounting rate of return is a method of calculating a projects return as a percentage of the investment in the project. It measures the  17 Mar 2016 But with IRR you calculate the actual return provided by the project's cash flows, then compare that rate of return with your company's hurdle rate (  It is an accounting technique to measure the profitability of the investment proposals. The Accounting Rate of Return (ARR) is calculated by dividing the Average  IRR is harder to calculate than return on investment, but IRR has the advantage of automatically accounting for time differences between investments. This can  1 Feb 2017 Excel offers three functions for calculating the internal rate of return, and I recommend you use all three. five traditional criteria for determining whether a rate of return is appropriate (1). d = annual depreciation expense, which is the annual accounting charge for\n\n## 17 Mar 2016 But with IRR you calculate the actual return provided by the project's cash flows, then compare that rate of return with your company's hurdle rate (\n\nCalculate the IRR (Internal Rate of Return) of an investment with an unlimited number of cash flows. Depreciation should not be included in the economic analysis (nor should it have been included in the cash flow table). Depreciation is merely an accounting item   Close enough to zero, Sam doesn't want to calculate any more. The Internal Rate of Return (IRR) is about 7%. So the key to the whole thing is calculating the  Calculate the internal rate of return using Table 18.11 given the NPV for each Thirdly, IRR uses actual cash flows rather than accounting incomes like the ROR   18 Feb 2015 Accounting rate of return (ARR/ROI) = Average profit / Average book value * 100. The interpretation of the ARR / AAR rate. Abbreviated as ARR\n\n### Net income is the amount of total revenue that remains after accounting for all expenses for production, overhead, operations, administrations, debt service, taxes, amortization, and depreciation, as well as for one-time expenses for unusual events such as lawsuits or large purchases.\n\n24 Jul 2013 Estimate this by finding the cost of equity of projects or investments with similar risk. Like with the cost of debt, if the company has more than one  17 May 2018 a new approach to calculating an internal rate of return that illustrated the Congress, Rome 20-22 April, European Accounting Association. Since the deposits into the investment fund are irregular in their timing, there isn't really any single formula that will give the information you want. Your only hope  14 Aug 2012 This report shows how to calculate the Social. Internal Rate of Return of both technologies. We estimate the SIRR of FTTN at 15.2% and the SIRR  13 Mar 2017 This article will explain how to calculate the return on investment when The resulting number, expressed as a percentage, can be a good indicator of Your chief financial officer or accounting professional can help you\n\n### The average annual rate of return of your investment is the percentage change over several years, averaged out per year. 
A bank might guarantee a fixed rate per year, but the performance of many other investments varies from year to year. It helps to average the percentage change so you have a single number against which to compare other investments.\n\n24 Jul 2013 Estimate this by finding the cost of equity of projects or investments with similar risk. Like with the cost of debt, if the company has more than one  17 May 2018 a new approach to calculating an internal rate of return that illustrated the Congress, Rome 20-22 April, European Accounting Association. Since the deposits into the investment fund are irregular in their timing, there isn't really any single formula that will give the information you want. Your only hope  14 Aug 2012 This report shows how to calculate the Social. Internal Rate of Return of both technologies. We estimate the SIRR of FTTN at 15.2% and the SIRR  13 Mar 2017 This article will explain how to calculate the return on investment when The resulting number, expressed as a percentage, can be a good indicator of Your chief financial officer or accounting professional can help you\n\n## Accounting Rate of Return (ARR) is the percentage rate of return that is expected from an investment or asset compared to the initial cost of investment. Typically,\n\nAccounting Rate of Return (ARR) is the average net income an asset is expected to generate divided by its average capital cost, expressed as an annual percentage. The ARR is a formula used to make capital budgeting decisions, whether or not to proceed with a specific investment (a project, an acquisition, etc.) based on Making Capital Investment Decisions and How to Calculate Accounting Rate of Return – Formula & Example STEP 1. Before we start with calculating accounting rate of return we need to calculate an average STEP 2. The second step in our ARR calculation is to find the Annual depreciation charge. Determine the Annual Profit. This method of determining the Accounting Rate of Return uses the basic formula ARR = Average Annual Profit / Initial Investment. To begin, you'll need to find the Annual Profit. This number is based on accruals, not on cash, and it reflects the costs of amortization and depreciation. Simple Rate of Return Method: Learning Objectives: Compute the simple rate of return for an investment project. Definition and Explanation: The simple rate of return method is another capital budgeting technique that does not involve discounted cash flows. The method is also known as the accounting rate of return, the unadjusted rate of return, and the financial statement method.\n\nIRR is harder to calculate than return on investment, but IRR has the advantage of automatically accounting for time differences between investments. This can  1 Feb 2017 Excel offers three functions for calculating the internal rate of return, and I recommend you use all three." ]
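The basic ARR formula quoted above (ARR = Average Annual Profit / Initial Investment) is easy to sanity-check; a small Python sketch with invented numbers for illustration:

```python
def accounting_rate_of_return(annual_profits, initial_investment):
    """ARR = Average Annual Profit / Initial Investment."""
    average_profit = sum(annual_profits) / len(annual_profits)
    return average_profit / initial_investment

# e.g. a 100,000 investment expected to earn 15k, 20k and 25k over three years
arr = accounting_rate_of_return([15_000, 20_000, 25_000], 100_000)
print(f"ARR = {arr:.1%}")  # -> ARR = 20.0%
```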
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93678325,"math_prob":0.9141415,"size":6148,"snap":"2023-14-2023-23","text_gpt3_token_len":1243,"char_repetition_ratio":0.17138672,"word_repetition_ratio":0.41992188,"special_character_ratio":0.20624593,"punctuation_ratio":0.080789946,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9873741,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-29T15:22:19Z\",\"WARC-Record-ID\":\"<urn:uuid:b6d25bc6-44d9-45f8-a79d-e3818a871c7f>\",\"Content-Length\":\"32993\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:323144a3-6b67-408b-baf7-c65cacf8cf46>\",\"WARC-Concurrent-To\":\"<urn:uuid:1e34e399-3903-4601-af9f-c84cbcc2e8f3>\",\"WARC-IP-Address\":\"35.231.208.25\",\"WARC-Target-URI\":\"https://investingyrkwpjl.netlify.app/halbur65579qiqy/how-to-compute-rate-of-return-accounting-fa.html\",\"WARC-Payload-Digest\":\"sha1:OYFMSXGEVXC7QC2FGVSKUJK4ITTVIZEN\",\"WARC-Block-Digest\":\"sha1:LPLQ6HBOLUJ5A2X3SLITZO7DP6ENMZQB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949009.11_warc_CC-MAIN-20230329151629-20230329181629-00159.warc.gz\"}"}
https://byjus.com/question-answer/a-number-from-1-to-11-is-chosen-at-random-what-is-the-probability-of-choosing-an-odd-number/
[ "", null, "", null, "", null, "", null, "Question\n\n# A number from $1$ to $11$ is chosen at random. What is the probability of choosing an odd number?\n\nOpen in App\nSolution\n\n## Given we can choose a number from $1$ to $11$.Total number of outcomes $=11$From $1$ to $11$, odd numbers are $1,3,5,7,9,11$The number of favorable outcomes $=6$We know that $\\mathrm{Probability}=\\frac{\\mathrm{number}\\mathrm{of}\\mathrm{favorable}\\mathrm{outcomes}}{\\mathrm{total}\\mathrm{number}\\mathrm{of}\\mathrm{outcomes}}$ $=\\frac{6}{11}$Therefore the probability of choosing an odd number from $1$ to $11$is $\\frac{6}{11}$.", null, "", null, "Suggest Corrections", null, "", null, "0", null, "", null, "", null, "", null, "", null, "", null, "Similar questions", null, "", null, "Explore more" ]
[ null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDQiIGhlaWdodD0iNDQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDQiIGhlaWdodD0iNDQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "https://byjus.com/question-answer/_next/image/", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjAiIGhlaWdodD0iMjAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjAiIGhlaWdodD0iMjAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNjAwIiBoZWlnaHQ9IjQwMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDAwIiBoZWlnaHQ9IjQwMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDQiIGhlaWdodD0iNDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDQiIGhlaWdodD0iNDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91040605,"math_prob":1.0000046,"size":590,"snap":"2022-40-2023-06","text_gpt3_token_len":163,"char_repetition_ratio":0.19624573,"word_repetition_ratio":0.17475729,"special_character_ratio":0.2677966,"punctuation_ratio":0.11023622,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997044,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-06T10:04:18Z\",\"WARC-Record-ID\":\"<urn:uuid:040a17e4-038d-402b-a691-3b2302bc1cf6>\",\"Content-Length\":\"160020\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d40a7d28-4dfe-48f4-a5b4-9f0c65242939>\",\"WARC-Concurrent-To\":\"<urn:uuid:9e255b37-d82b-4abb-9b3f-2ab9f9e488c7>\",\"WARC-IP-Address\":\"162.159.129.41\",\"WARC-Target-URI\":\"https://byjus.com/question-answer/a-number-from-1-to-11-is-chosen-at-random-what-is-the-probability-of-choosing-an-odd-number/\",\"WARC-Payload-Digest\":\"sha1:FLE4IYPIUO2LYQSULKCBDBHDY522DEZ2\",\"WARC-Block-Digest\":\"sha1:EZ6FDLFWFXVKOKSKPPU4EPNUUF7LXBZ4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500334.35_warc_CC-MAIN-20230206082428-20230206112428-00584.warc.gz\"}"}
https://www.colorhexa.com/0083e0
[ "# #0083e0 Color Information\n\nIn a RGB color space, hex #0083e0 is composed of 0% red, 51.4% green and 87.8% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 41.5% magenta, 0% yellow and 12.2% black. It has a hue angle of 204.9 degrees, a saturation of 100% and a lightness of 43.9%. #0083e0 color hex could be obtained by blending #00ffff with #0007c1. Closest websafe color is: #0099cc.\n\n• R 0\n• G 51\n• B 88\nRGB color chart\n• C 100\n• M 42\n• Y 0\n• K 12\nCMYK color chart\n\n#0083e0 color description : Pure (or mostly pure) blue.\n\n# #0083e0 Color Conversion\n\nThe hexadecimal color #0083e0 has RGB values of R:0, G:131, B:224 and CMYK values of C:1, M:0.42, Y:0, K:0.12. Its decimal value is 33760.\n\nHex triplet RGB Decimal 0083e0 `#0083e0` 0, 131, 224 `rgb(0,131,224)` 0, 51.4, 87.8 `rgb(0%,51.4%,87.8%)` 100, 42, 0, 12 204.9°, 100, 43.9 `hsl(204.9,100%,43.9%)` 204.9°, 100, 87.8 0099cc `#0099cc`\nCIE-LAB 53.613, 4.914, -55.462 21.568, 21.612, 73.552 0.185, 0.185, 21.612 53.613, 55.68, 275.063 53.613, -31.732, -87.069 46.489, 1.455, -61.262 00000000, 10000011, 11100000\n\n# Color Schemes with #0083e0\n\n• #0083e0\n``#0083e0` `rgb(0,131,224)``\n• #e05d00\n``#e05d00` `rgb(224,93,0)``\nComplementary Color\n• #00e0cd\n``#00e0cd` `rgb(0,224,205)``\n• #0083e0\n``#0083e0` `rgb(0,131,224)``\n• #0013e0\n``#0013e0` `rgb(0,19,224)``\nAnalogous Color\n• #e0cd00\n``#e0cd00` `rgb(224,205,0)``\n• #0083e0\n``#0083e0` `rgb(0,131,224)``\n• #e00013\n``#e00013` `rgb(224,0,19)``\nSplit Complementary Color\n• #83e000\n``#83e000` `rgb(131,224,0)``\n• #0083e0\n``#0083e0` `rgb(0,131,224)``\n• #e00083\n``#e00083` `rgb(224,0,131)``\n• #00e05d\n``#00e05d` `rgb(0,224,93)``\n• #0083e0\n``#0083e0` `rgb(0,131,224)``\n• #e00083\n``#e00083` `rgb(224,0,131)``\n• #e05d00\n``#e05d00` `rgb(224,93,0)``\n• #005694\n``#005694` `rgb(0,86,148)``\n``#0065ad` `rgb(0,101,173)``\n• #0074c7\n``#0074c7` `rgb(0,116,199)``\n• #0083e0\n``#0083e0` `rgb(0,131,224)``\n• #0092fa\n``#0092fa` `rgb(0,146,250)``\n• #149dff\n``#149dff` `rgb(20,157,255)``\n• #2ea8ff\n``#2ea8ff` `rgb(46,168,255)``\nMonochromatic Color\n\n# Alternatives to #0083e0\n\nBelow, you can see some colors close to #0083e0. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #00bbe0\n``#00bbe0` `rgb(0,187,224)``\n• #00a8e0\n``#00a8e0` `rgb(0,168,224)``\n• #0096e0\n``#0096e0` `rgb(0,150,224)``\n• #0083e0\n``#0083e0` `rgb(0,131,224)``\n• #0070e0\n``#0070e0` `rgb(0,112,224)``\n• #005ee0\n``#005ee0` `rgb(0,94,224)``\n• #004be0\n``#004be0` `rgb(0,75,224)``\nSimilar Colors\n\n# #0083e0 Preview\n\nText with hexadecimal color #0083e0\n\nThis text has a font color of #0083e0.\n\n``<span style=\"color:#0083e0;\">Text here</span>``\n#0083e0 background color\n\nThis paragraph has a background color of #0083e0.\n\n``<p style=\"background-color:#0083e0;\">Content here</p>``\n#0083e0 border color\n\nThis element has a border color of #0083e0.\n\n``<div style=\"border:1px solid #0083e0;\">Content here</div>``\nCSS codes\n``.text {color:#0083e0;}``\n``.background {background-color:#0083e0;}``\n``.border {border:1px solid #0083e0;}``\n\n# Shades and Tints of #0083e0\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000508 is the darkest color, while #f4faff is the lightest one.\n\n• #000508\n``#000508` `rgb(0,5,8)``\n• #00101c\n``#00101c` `rgb(0,16,28)``\n• #001c2f\n``#001c2f` `rgb(0,28,47)``\n• #002743\n``#002743` `rgb(0,39,67)``\n• #003357\n``#003357` `rgb(0,51,87)``\n• #003e6a\n``#003e6a` `rgb(0,62,106)``\n• #004a7e\n``#004a7e` `rgb(0,74,126)``\n• #005592\n``#005592` `rgb(0,85,146)``\n• #0061a5\n``#0061a5` `rgb(0,97,165)``\n• #006cb9\n``#006cb9` `rgb(0,108,185)``\n• #0078cc\n``#0078cc` `rgb(0,120,204)``\n• #0083e0\n``#0083e0` `rgb(0,131,224)``\n• #008ef4\n``#008ef4` `rgb(0,142,244)``\n• #0899ff\n``#0899ff` `rgb(8,153,255)``\n• #1ca1ff\n``#1ca1ff` `rgb(28,161,255)``\n• #2fa9ff\n``#2fa9ff` `rgb(47,169,255)``\n• #43b1ff\n``#43b1ff` `rgb(67,177,255)``\n• #57b9ff\n``#57b9ff` `rgb(87,185,255)``\n• #6ac1ff\n``#6ac1ff` `rgb(106,193,255)``\n• #7ec9ff\n``#7ec9ff` `rgb(126,201,255)``\n• #92d2ff\n``#92d2ff` `rgb(146,210,255)``\n• #a5daff\n``#a5daff` `rgb(165,218,255)``\n• #b9e2ff\n``#b9e2ff` `rgb(185,226,255)``\n• #cceaff\n``#cceaff` `rgb(204,234,255)``\n• #e0f2ff\n``#e0f2ff` `rgb(224,242,255)``\n• #f4faff\n``#f4faff` `rgb(244,250,255)``\nTint Color Variation\n\n# Tones of #0083e0\n\nA tone is produced by adding gray to any pure hue. In this case, #677179 is the less saturated color, while #0083e0 is the most saturated one.\n\n• #677179\n``#677179` `rgb(103,113,121)``\n• #5f7381\n``#5f7381` `rgb(95,115,129)``\n• #56748a\n``#56748a` `rgb(86,116,138)``\n• #4e7692\n``#4e7692` `rgb(78,118,146)``\n• #45779b\n``#45779b` `rgb(69,119,155)``\n• #3c79a4\n``#3c79a4` `rgb(60,121,164)``\n• #347aac\n``#347aac` `rgb(52,122,172)``\n• #2b7cb5\n``#2b7cb5` `rgb(43,124,181)``\n• #227dbe\n``#227dbe` `rgb(34,125,190)``\n• #1a7fc6\n``#1a7fc6` `rgb(26,127,198)``\n• #1180cf\n``#1180cf` `rgb(17,128,207)``\n• #0982d7\n``#0982d7` `rgb(9,130,215)``\n• #0083e0\n``#0083e0` `rgb(0,131,224)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #0083e0 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
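The figures above are easy to reproduce programmatically; a short Python sketch (standard library only) that recomputes #0083e0's HSL and CMYK values for illustration:

```python
import colorsys

r, g, b = (v / 255 for v in (0x00, 0x83, 0xE0))     # 0, 131, 224

h, l, s = colorsys.rgb_to_hls(r, g, b)              # note the HLS return order
print(f"HSL: {h * 360:.1f} deg, {s:.1%}, {l:.1%}")  # ~204.9 deg, 100%, 43.9%

k = 1 - max(r, g, b)                                # naive RGB -> CMYK
c, m, y = ((1 - v - k) / (1 - k) for v in (r, g, b))
print(f"CMYK: {c:.0%}, {m:.1%}, {y:.0%}, {k:.1%}")  # ~100%, 41.5%, 0%, 12.2%
```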
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.53023094,"math_prob":0.82887626,"size":3692,"snap":"2022-40-2023-06","text_gpt3_token_len":1671,"char_repetition_ratio":0.14669198,"word_repetition_ratio":0.0073664826,"special_character_ratio":0.55931747,"punctuation_ratio":0.23198198,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.992225,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-30T21:34:12Z\",\"WARC-Record-ID\":\"<urn:uuid:18cca348-49b8-477c-a0f7-5e628fae5994>\",\"Content-Length\":\"36148\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d1ca7ba2-6b8f-4fd1-94f5-ef935c0697cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:ca68eff1-4f27-445f-8027-f6c99f19abea>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/0083e0\",\"WARC-Payload-Digest\":\"sha1:ULDNFOFBOSDMWDMYC6B443IUABVQA37V\",\"WARC-Block-Digest\":\"sha1:VH67OZFOEO5ZSUSUHXGJSRG7HMUGJD54\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335504.37_warc_CC-MAIN-20220930212504-20221001002504-00576.warc.gz\"}"}
https://www.bettersizeinstruments.com/products/dynamic-light-scattering-dls/
[ " Dynamic Light Scattering DLS Particle Size Analyzer System Machine\n• Global", null, "# Dynamic Light Scattering (DLS)\n\nDynamic light scattering (DLS), also known as photon correlation spectroscopy (PCS), is a commonly used characterization method for nanoparticles. The DLS particle size analyzer has the advantages of accuracy, rapidity, and good repeatability for the measurement of Nano particles, emulsions or suspensions. BeNano 90 Zeta nano particle size analyser is a typical nanoparticle size measurement instrument based on dynamic light scattering.It can measure nanomaterial down to 1 nanometer which is an essential tool for nanoparticle size distribution measurement to understand and research nano powdered materials.\nInstruments\n\n## Theoretical background\n\nWhat is light scattering? When a monochromatic and coherent light source irradiates onto the particle, the electromagnetic wave will interact with the charges in atoms that compose the particle, and thus induce the formation of an oscillating dipole in the particle. Light scattering refers to the emission of light in all directions from oscillating dipole. During quasi-elastic light scattering, the frequency changes between scattered light and incident light are small, and the light scattered by the oscillating dipole has a spectrum that broadens around the incident light frequency.\nThe scattered light intensity depends on the particle's intrinsic physical properties such as size and molecular weight. The scattered light intensity is not a constant value; it fluctuates over time due to the random walk of particles that are undergoing Brownian motion which refers to the particle's continuous and spontaneous random walk when placed in the medium resulting from the collisions between the particles and the medium molecules. The fluctuations in scattered light intensity with time allows us to calculate the diffusion coefficient through the auto-correlation function analysis. To quantify the speed of Brownian motion, the translational diffusion coefficient is modelled by the Stokes-Einstein equation. Notice here the diffusion coefficient is specified by the word “translational”, indicating that only the translational, but not the rotational movement of the particle is taken into account. The translational diffusion coefficient has the unit of area per unit time, where the area is introduced to prevent the sign change convention when the particle is moving away from its origin. Then using the Stokes – Einstein equation, the particle size distribution can be calculated from the diffusion coefficient. This technique is called the dynamic light scattering, abbreviated as DLS.\n\nThe Stokes-Einstein equation is expressed as follows:", null, "Equation 1: The Stokes-Einstein equation", null, "Hydrodynamic radius refers to the effective radius of a particle that has identical diffusion to a perfectly spherical particle of that radius. For example, as seen in figure 1, the true radius of the particle refers to the distance between its center and its outer circumference, while the hydrodynamic radius includes the length of the attached segments since they diffuse as a whole. Hydrodynamic radius is inversely proportional to the translational diffusion coefficient.", null, "Figure 1: Illustration of hydrodynamic radius.\n\n## Applications\n\nBy analyzing the fluctuating scattered light intensity due to Brownian motion, DLS can obtain the particle size distribution of small particles suspended in a diluted solution. 
Typically, the measuring size limit of DLS falls in the nano and sub-micron range, from about 1 nanometer to 10 micrometers. There are several advantages to using DLS when sizing particles. First, DLS is non-invasive to the samples, meaning that the structure of the molecules is not destroyed during sizing. A small amount of sample is sufficient to prepare a diluted solution. With the DLS method, results with great repeatability and accuracy can be obtained within a few minutes. The testing process is nearly fully automatic, minimizing operation errors from different operators. With DLS, the particle size distribution can be measured at different temperatures, so a thermal analysis can be conducted on the test sample. DLS gives various industries the opportunity to control the quality of their products, and therefore to maximize product performance, by controlling particle size. These industries include but are not limited to semiconductors, renewable energy, pharmaceuticals, inks, pigments, and batteries.\n\n## Optical Setup\n\nThe whole setup of the DLS instrument is shown in Figure 2.", null, "Figure 2: Dynamic light scattering optical set-up of BeNano 90, Bettersize Instruments.\n\n• Laser\nThe majority of the laser devices in DLS instruments are gas lasers and solid-state lasers. A typical example of a gas laser in a DLS setup is the helium-neon laser, which emits light at a wavelength of 632.8 nm. A solid-state laser refers to a laser device where a solid acts as the gain medium. In a solid-state laser, small amounts of solid impurities called “dopants” are added to the gain medium to change its optical properties. These dopants are often rare-earth elements such as neodymium and ytterbium, or transition metals such as chromium. The most commonly used solid-state laser is neodymium-doped yttrium aluminum garnet, abbreviated as Nd:YAG. A gas laser has the advantages of stable wavelength emission at relatively low cost; however, it usually has a relatively large volume that makes it bulky. A solid-state laser, on the other hand, is smaller and lighter, making it more flexible to handle.\n\n• Detector\nAfter the laser beam irradiates the sample cell, light is scattered by the particles, and this scattered light fluctuates because of Brownian motion. A highly sensitive detector picks up these scattered light fluctuation signals even at low intensity levels and converts them to electrical signals for further analysis in the correlator. Commonly used detectors in a DLS optical setup include the photomultiplier tube (PMT) and the avalanche photodiode (APD). According to Lawrence W.G. et al., PMT and APD have similar signal-to-noise performance at most signal levels, while the APD outperforms the PMT in the red and near-infrared spectral regions. The APD also has higher absolute quantum efficiency than the PMT. For these reasons, the APD has recently been utilized more frequently in DLS devices.\n\n• Correlator\nThe optical setup completes the process of scattering and collecting light intensity. The signals detected by the detector are then analyzed in the correlator to eventually calculate the hydrodynamic radius distribution.\nWe can multiply the scattering intensity collected from the detector with itself after it has been shifted by some arbitrary interval tau (τ) in time.
This τ can be anything between a few nanoseconds and microseconds, but the actual value of the time interval does not affect the test result.\nAfter applying a mathematical algorithm, the auto-correlation function G1(q, τ) can be obtained. G1(q, τ) decays single-exponentially from 1 to 0, with 0 meaning there is no correlation at all between the signals at time t and time t plus τ, and 1 meaning perfect correlation. Finally, with all the known information of the correlation function, the hydrodynamic radius can be computed using the Stokes-Einstein equation.\n\n## Monodisperse vs Polydisperse\n\nMonodisperse particles are all identical in size, shape, and mass, resulting in one narrow peak in the particle size distribution curve. Polydisperse particles, on the other hand, are not uniform in those parameters. It is important to know the polydispersity of the samples, because the algorithms for calculating the hydrodynamic radius distribution in the correlator differ depending on whether the samples are monodisperse or polydisperse.\nTwo main mathematical algorithms are used to solve the auto-correlation function of a polydisperse sample. The first and most common one is the Cumulants method, which involves solving the Taylor expansion of the auto-correlation function. However, the Cumulants method is only valid for samples with small size polydispersity. The calculation can be validated by computing and checking the polydispersity index, or PDI; Cumulants analysis is only valid if the PDI value is relatively small. The CONTIN algorithm can directly compute the hydrodynamic radius distribution for samples that are widely dispersed. It is a relatively complicated mathematical method involving regularization.\n\n## Data Interpretation\n\nInterpreting results helps us evaluate the quality of the particle size test and obtain information about the particle size distribution.\nThe quality of the correlation function should be checked before proceeding to the particle size analysis, since it directly relates to the accuracy of the particle size result. The overall shape of the correlation function is a good indicator of its quality. As shown in figure 6, if the correlation curve is a smooth curve decaying exponentially from 1 to 0 without noise, the correlation was well performed and it is safe to proceed to particle size distribution analysis.", null, "Figure 6: Example of a good correlation function curve.\n\nHowever, if the curve is still overall smooth but shows some level of noise, as in figure 7, this may be due to the presence of impurities in the sample that affect the repeatability of the results. In this scenario, the operator can filter the sample solution again with an appropriate syringe filter pore size to remove impurities such as large dust particles from the solution.", null, "Figure 7: Example of a correlation function curve with noise.\n\nWhen the scattering in a test is insufficient, the correlation function curve looks like the curve in figure 8.", null, "Figure 8: Example of a poor correlation function curve.\n\nIn this case, the maximum value of the function is much less than 1, and it does not exhibit exponential decay behavior. The operator could increase the sample concentration or the number of sub-runs to increase the amount of scattering.\n\nDLS reports results as the z-average particle size, which is a scattered-intensity-weighted size.
## Data Interpretation

Interpreting the results can help us evaluate the quality of the particle size test and obtain information about the particle size distribution.
The quality of the correlation function should be checked before proceeding to the particle size analysis, since it directly relates to the accuracy of the particle size result. The overall shape of the correlation function is a good indicator of its quality. As shown in Figure 6, if the correlation curve is a smooth curve exponentially decaying from 1 to 0 without the presence of noise, it suggests that the correlation was well performed and it is safe to proceed to the particle size distribution analysis.", null, "Figure 6: Example of a good correlation function curve.

However, if the curve is still overall smooth but shows some level of noise, as in Figure 7, it might be due to the presence of impurities in the sample that affect the repeatability of the results. In this scenario, the operator can filter the sample solution again with an appropriate syringe pore size to remove impurities such as large dust particles from the solution.", null, "Figure 7: Example of a correlation function curve with noise.

When the scattering is insufficient in a test, the correlation function curve looks like the curve in Figure 8.", null, "Figure 8: Example of a poor correlation function curve.

In this case, the maximum value of the function is much less than 1, and it does not exhibit exponential decay behavior. The operator could increase the sample concentration or the number of sub-runs to increase the amount of scattering.

DLS reports results as a z-average particle size, which is a scattered-intensity-weighted size. This comes from the fact that, when computing the correlation function integral using the Cumulants and CONTIN methods, an average translational diffusion coefficient is obtained, which yields the average hydrodynamic radius from the Stokes-Einstein equation. The validity of the z-average particle size should be checked with the polydispersity index, or PDI. As shown in the table, a sample report of particle size results from DLS includes the z-average particle size with its uncertainty and the PDI value corresponding to that z-average particle size.
If the value of the PDI is large, indicating that the sample is possibly polydisperse, then the z-average particle size is not a fully representative description of the given sample.
According to ISO 22412:2017, Particle size analysis – Dynamic light scattering (DLS), particle size results should be reported along with their uncertainties and repeatability. The measurement uncertainty is expressed by the standard deviation, while the repeatability is the relative standard deviation that describes how close the results obtained from multiple measurements are to each other within each run of the test. As regulated by ISO 22412:2017, monodisperse materials with diameters between 50 nm and 200 nm should have a z-average particle size with repeatability less than 2%.

## Reference

Chu, B. Laser Light Scattering: Basic Principles and Practice, 2nd ed.; Academic Press: Boston, 1991.

Dian, L.; Yu, E.; Chen, X.; Wen, X.; Zhang, Z.; Qin, L.; Wang, Q.; Li, G.; Wu, C. Enhancing Oral Bioavailability of Quercetin Using Novel Soluplus Polymeric Micelles. Nanoscale Res Lett 2014, 9 (1), 684. https://doi.org/10.1186/1556-276X-9-684.

Dhont, J. K. G. An Introduction to Dynamics of Colloids; Studies in Interface Science; Elsevier: Amsterdam, 1996.

Falke, S.; Betzel, C. Dynamic Light Scattering (DLS): Principles, Perspectives, Applications to Biological Samples. In Radiation in Bioanalysis; Pereira, A. S., Tavares, P., Limão-Vieira, P., Eds.; Bioanalysis; Springer International Publishing: Cham, 2019; Vol. 8, pp 173–193. https://doi.org/10.1007/978-3-030-28247-9_6.

ISO 22412:2017. Particle Size Analysis – Dynamic Light Scattering (DLS). International Organization for Standardization.

Lawrence, W. G.; Varadi, G.; Entine, G.; Podniesinski, E.; Wallace, P. K. A Comparison of Avalanche Photodiode and Photomultiplier Tube Detectors for Flow Cytometry. Imaging, Manipulation, and Analysis of Biomolecules, Cells, and Tissues VI, 2008. https://doi.org/10.1117/12.758958.

Light Scattering from Polymer Solutions and Nanoparticle Dispersions; Springer Laboratory; Springer Berlin Heidelberg: Berlin, Heidelberg, 2007. https://doi.org/10.1007/978-3-540-71951-9.

Nanoscale Informal Science Education Network, NISE Network. Scientific Image – Gold Nanoparticles. Retrieved from https://www.nisenet.org/catalog/scientific-image-gold-nanoparticles." ]
[ null, "https://www.bettersizeinstruments.com/uploads/image/20210120/16/nano-banner.jpg", null, "https://www.bettersizeinstruments.com/uploads/image/20210301/15/the-stokes-einstein-equation.jpg", null, "https://www.bettersizeinstruments.com/uploads/image/20210301/15/equation-1.jpg", null, "https://www.bettersizeinstruments.com/uploads/image/20210301/15/illustration-of-hydrodynamic-radius.jpg", null, "https://www.bettersizeinstruments.com/uploads/image/20210301/15/dynamic-light-scattering-optical-set-up-of-benano-90-bettersize-instruments.jpg", null, "https://www.bettersizeinstruments.com/uploads/image/20210301/15/example-of-a-good-correlation-function-curve.jpg", null, "https://www.bettersizeinstruments.com/uploads/image/20210301/15/example-of-a-correlation-function-curve-with-noise.jpg", null, "https://www.bettersizeinstruments.com/uploads/image/20210301/15/example-of-a-poor-correlation-function-curve.jpg", null, "https://www.bettersizeinstruments.com/themes/simple/img/add.png", null, "https://www.bettersizeinstruments.com/themes/simple/img/email.png", null, "https://www.bettersizeinstruments.com/themes/simple/img/phone.png", null, "https://www.bettersizeinstruments.com/uploads/image/20190613/13/bettersize-wechat.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88801944,"math_prob":0.84033173,"size":16860,"snap":"2021-43-2021-49","text_gpt3_token_len":3419,"char_repetition_ratio":0.15697674,"word_repetition_ratio":0.020145044,"special_character_ratio":0.19270463,"punctuation_ratio":0.14787799,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9509763,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,5,null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-05T11:17:00Z\",\"WARC-Record-ID\":\"<urn:uuid:3a10f1c0-5f82-4ef7-be99-d45d1ad35257>\",\"Content-Length\":\"155939\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e9869b39-8e2e-4e8f-8c6f-192d539f7f4e>\",\"WARC-Concurrent-To\":\"<urn:uuid:9ce38251-9df6-4bd8-b677-e22c508b7f61>\",\"WARC-IP-Address\":\"47.246.23.146\",\"WARC-Target-URI\":\"https://www.bettersizeinstruments.com/products/dynamic-light-scattering-dls/\",\"WARC-Payload-Digest\":\"sha1:5W2QMUSF5KOI6G5LMN6ZAMCEFFJIOS6I\",\"WARC-Block-Digest\":\"sha1:S6JU4RT6VNNOVDHVJ4RUFEABJFS2QYDJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363157.32_warc_CC-MAIN-20211205100135-20211205130135-00069.warc.gz\"}"}
https://jhmtr.semnan.ac.ir/article_149.html
[ "# Effect of magnetic field on the boundary layer flow, heat, and mass transfer of nanofluids over a stretching cylinder\n\nDocument Type : Full Lenght Research Article\n\nAuthors\n\nDepartment of Mechanical Engineering, Shahid Chamran University of Ahvaz, Ahvaz, Iran\n\nAbstract\n\nThe effect of a transverse magnetic field on the boundary layer flow and heat transfer of an\nisothermal stretching cylinder is analyzed. The governing partial differential equations for the\nmagnetohydrodynamic, temperature, and concentration boundary layers are transformed into a set\nof ordinary differential equations using similarity transformations. The obtained ordinary\ndifferential equations are numerically solved for a range of non-dimensional parameters. Results\nshow that the presence of a magnetic field would significantly affects the boundary layer profiles.\nAn increase in magnetic parameter would decrease the reduced Nusselt and Sherwood numbers.\n\nKeywords\n\n#### Full Text\n\n1.\n\n Effect of magnetic field on the boundary layer flow, heat, and mass transfer of nanofluids over a stretching cylinder   Aminreza Noghrehabadi*, Mohammad Ghalambaz, Ehsan Izadpanahi, Rashid Pourrajab   Department of Mechanical Engineering, Shahid Chamran University of Ahvaz, Ahvaz, Iran\n\n PAPER INFO\n\nHistory:\n\nReceived in revised form 22 November 2013\n\nAccepted 14 December 2013\n\nKeywords:\n\nNanofluid\n\nStretching cylinder\n\nMagnetic field\n\nBrownian motion\n\nThermophoresis\n\n A B S T R A C T   The effect of a transverse magnetic field on the boundary layer flow and heat transfer of an isothermal stretching cylinder is analyzed. The governing partial differential equations for the magnetohydrodynamic, temperature, and concentration boundary layers are transformed into a set of ordinary differential equations using similarity transformations. The obtained ordinary differential equations are numerically solved for a range of non-dimensional parameters. Results show that the presence of a magnetic field would significantly affects the boundary layer profiles. An increase in magnetic parameter would decrease the reduced Nusselt and Sherwood numbers.             © 2014 Published by Semnan University Press. All rights reserved.\n Journal of Heat and Mass Transfer Research   Journal homepage: http://jhmtr.journals.semnan.ac.ir\n Journal of Heat and Mass Transfer Research 1 (2014) 9-15\n\nIntroduction\n\nRecently, a number of studies have been carried out on the effects of electrically conducting fluids on the flow and heat transfer of a viscous fluid passing a moving surface in the presence of a magnetic field. Liquid metals and water mixed with a little acid are the two common examples of electrically conducting liquids. There are some examples about technological applications of magnetohydrodynamic (MHD) viscous flow which include hot rolling, wire drawing, annealing, thinning of copper wires, glass-fiber and paper production, drawing of plastic films, metal and polymer extrusion, and metal spinning. In all these cases, the properties of the final product depend on the rate of cooling by drawing such strips in electrically conducting fluids subject to a magnetic field. Therefore, the heating or cooling characteristics during such processes have a significant influence on the quality of the final products. 
The heating or cooling characteristics mostly depend on the skin friction and the surface heat transfer rate.

Common heat transfer fluids such as water, ethylene glycol, and engine oil have limited heat transfer capabilities owing to their low thermal conductivity, whereas metals have much higher thermal conductivities than these fluids. Therefore, dispersing highly thermally conductive solid particles in a conventional heat transfer fluid may enhance the thermal conductivity of the resulting fluid.

A nanofluid is a fluid containing nanometer-sized particles, called nanoparticles. The term "nanofluid" was first proposed by Choi [1] to denote engineered colloids consisting of nanoparticles dispersed in a base fluid. The base fluid is usually a conductive fluid, such as water or ethylene glycol. Other base fluids include bio-fluids, polymer solutions, oils, and other lubricants. One of the outstanding characteristics of nanofluids is their enhanced thermal conductivity. The nanoparticles used in the synthesis of nanofluids are typically metals (Al, Cu), metallic oxides (Al2O3, TiO2), carbides (SiC), nitrides (AlN, SiN), or carbon nanotubes, with diameters ranging between 1 and 100 nm. The thermophoresis and Brownian motion effects are also important in the heat transfer of nanofluids: the migration of nanoparticles because of these effects influences the local heat transfer rate.

Recently, the flow and heat transfer over stretching surfaces have attracted the attention of many researchers [3-6].

Ishak et al. [7] investigated the magnetohydrodynamic flow and heat transfer over a stretching cylinder. They reduced the governing equations to a system of ordinary differential equations, which was then numerically solved using the Keller box method. The effects of the magnetic parameter, Prandtl number, and Reynolds number on the velocity and temperature fields were thoroughly examined. Wang [8] investigated the steady flow of a viscous fluid outside a stretching hollow cylinder. Ishak et al. [9] studied the effect of suction/blowing on the flow and heat transfer past a stretching cylinder. They found that the magnitude of the skin friction coefficient increases with Reynolds number, while the variation of the Prandtl number does not show a significant effect on the skin friction coefficient.

Recently, Rasekh et al. [10] analyzed the flow and heat transfer of nanofluids over a stretching cylinder. They reduced the governing equations to a set of ordinary differential equations and reported that the slip of nanoparticles because of thermophoresis and Brownian motion forces affects the heat transfer rate of nanofluids in the boundary layer. Gorla et al. [11] considered a melting boundary condition on the surface of the stretching cylinder and analyzed the flow and heat transfer of nanofluids.

To the best of the authors' knowledge, the effect of a magnetic field on the boundary layer flow and heat transfer of nanofluids over a stretching cylinder has not been analyzed yet. The present study aims to analyze the development of the steady boundary layer flow and heat transfer of a magnetohydrodynamic nanofluid over a stretching cylinder. The governing partial differential boundary layer equations in cylindrical form are presented and then transformed into a set of ordinary differential equations.
The obtained equations are functions of the magnetic parameter M, the suction/injection parameter γ, the Reynolds number Re, the Prandtl number Pr, the Lewis number Le, the Brownian motion parameter Nb, and the thermophoresis parameter Nt. The equations are solved numerically for a range of non-dimensional parameters.

2. Formulation of the problem

Consider the laminar steady flow of an incompressible, electrically conducting nanofluid (with electrical conductivity σ) over a linearly stretching cylinder. The flow is driven by the stretching of the cylinder; the flow outside the boundary layer is quiescent. A uniform magnetic field of intensity B0 acts in the radial direction. It is assumed that the effect of the induced magnetic field is negligible, which is valid when the magnetic Reynolds number is small. The viscous dissipation, Ohmic heating, and Hall effects are neglected as they are also assumed to be small. Fig. 1 depicts a schematic view of the physical model and the coordinate system: the z-axis is measured along the axis of the cylinder and the r-axis in the radial direction. The axial velocity of the stretching cylinder is assumed to be linear, w_w = 2cz, where c is a positive constant. The surface of the stretching cylinder is permeable and therefore subject to mass transfer, represented as u_w = -caγ; positive and negative values of γ correspond to mass suction and mass injection, respectively. The temperature and concentration of the nanofluid at the surface of the cylinder are the constant values Tw and φw. The thermo-physical properties are assumed to be constant. Under these assumptions, the governing equations for conservation of mass, momentum, thermal energy, and nanoparticle concentration are Eqs. (1)-(5).

[Equations (1)-(5) appeared as images in the source and were not recovered in this extraction.]

Here u and w are the velocity components along the r and z axes, respectively, p is the fluid pressure, ρ is the density of the nanofluid, ν is the kinematic viscosity of the nanofluid, σ is the electrical conductivity of the nanofluid, B0 is the strength of the uniform magnetic field, α is the thermal diffusivity of the nanofluid, DB is the Brownian diffusion coefficient, and DT is the thermophoresis diffusion coefficient. τ is the ratio between the effective heat capacity of the nanoparticles ((ρc)p) and the heat capacity of the nanofluid ((ρc)nf), i.e., τ = (ρc)p /(ρc)nf.

The boundary conditions on the surface of the cylinder are given by Eq. (6), where c is a constant and z is the axial direction; the appropriate boundary conditions at the far field (i.e., r→∞) are given by Eq. (7).

[Equations (6) and (7) were not recovered in this extraction.]

Fig. 1. Physical model and coordinate system.

Introducing similarity variables reduces the governing equations and boundary conditions to a set of ordinary differential equations (Eqs. (9)-(11)) subject to boundary conditions (12) and (13), with the parameters appearing in Eqs. (9)-(11) defined by Eqs. (14) and (15).

[Equations (8a)-(15), including the similarity variables and parameter definitions, were not recovered in this extraction.]

Here Pr, Re, Le, M, Nb and Nt denote the Prandtl number, Reynolds number, Lewis number, the magnetic parameter, the Brownian motion parameter, and the thermophoresis parameter, respectively.
The pressure p can also be determined from Eq. (3) as follows:

[Equation (16) was not recovered in this extraction.]

Physical quantities of interest are the skin friction coefficient Cf, the Nusselt number Nu, and the Sherwood number Sh, which are defined in Eq. (17), where τw is the wall shear stress, qw is the wall heat flux, and mw is the nanoparticle mass flux from the surface of the tube, given by Eq. (18). Using the similarity variables, the non-dimensional skin friction coefficient, Nusselt number, and Sherwood number are obtained as in Eq. (19).

[Equations (17)-(19) were not recovered in this extraction.]

To estimate the accuracy of the present results, an error analysis should be considered. For this purpose the error percentage is introduced in Eq. (20), where X can be any quantity such as f''(1), θ'(1), etc.

[Equation (20) was not recovered in this extraction.]

3. Results and discussion

The ordinary differential equations, Eqs. (9)-(11), subject to the boundary conditions, Eqs. (12) and (13), are numerically solved using the fourth-order Runge-Kutta method together with the Newton-Raphson method, with systematic guessing of f''(1), θ'(1), and Φ'(1) using the shooting technique. A step size of Δη = 0.001 is used for the calculations. The computations were done using Fortran 90.

By neglecting the effects of thermophoresis and Brownian motion, the present study reduces to the case of a pure fluid, which was studied by Ishak et al. [7]. Therefore, the results reported in [7] are used to evaluate the accuracy of the present solution. Table 1 shows a comparison between the present results and those reported by Ishak et al. for different values of the magnetic parameter when Re = 10 and Pr = 7.0; it shows excellent agreement between the two sets of results.

Table 1: Comparison of results for the skin friction coefficient f''(1) and the reduced Nusselt number -θ'(1) for several values of M for Re = 10, Pr = 7 and Nt = Nb = 0.

| M    | -f''(1), current | -f''(1), Ishak et al. | Error (%) | -θ'(1), current | -θ'(1), Ishak et al. | Error (%) |
|------|------------------|-----------------------|-----------|-----------------|----------------------|-----------|
| 0.00 | 3.44448 | 3.4444 | 2.3E-03 | 6.1579 | 6.1592 | 2.1E-02 |
| 0.01 | 3.34617 | 3.3461 | 2.1E-03 | 6.1575 | 6.1588 | 2.1E-02 |
| 0.05 | 3.35291 | 3.3528 | 3.3E-03 | 6.1560 | 6.1583 | 3.7E-02 |
| 0.10 | 3.36131 | 3.3612 | 3.2E-03 | 6.1541 | 6.1554 | 2.1E-02 |
| 0.50 | 3.42743 | 3.4274 | 8.8E-04 | 6.1390 | 6.1402 | 2.0E-02 |
| 1.00 | 3.50769 | 3.5076 | 2.6E-03 | 6.1207 | 6.1219 | 2.0E-02 |
| 2.00 | 3.66154 | 3.6615 | 1.1E-03 | 6.0857 | 6.0864 | 1.1E-02 |
| 5.00 | 4.08263 | 4.0825 | 3.2E-03 | 5.9899 | 5.9895 | 6.7E-03 |
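The paper's coupled similarity equations did not survive this extraction, so as a stand-in for the numerical procedure described above, here is a minimal Python sketch of the same shooting strategy applied to the classic Crane stretching-sheet problem f''' + f f'' - f'^2 = 0 with f(0) = 0, f'(0) = 1, f'(∞) = 0, whose exact solution f'(η) = e^(-η) gives f''(0) = -1. The solver choices (RK45 via solve_ivp, Brent root finding, the bracket, and η_max) are illustrative assumptions, not the authors' Fortran implementation.

```python
# A sketch of the shooting technique: guess the missing initial slope f''(0),
# integrate the ODE, and root-find on the far-field residual f'(eta_max).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

ETA_MAX = 8.0   # stand-in for "infinity"; an assumption of this sketch

def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, fp**2 - f * fpp]   # Crane problem: f''' = f'^2 - f f''

def residual(s):
    """Integrate with guessed f''(0) = s and return f'(ETA_MAX)."""
    sol = solve_ivp(rhs, (0.0, ETA_MAX), [0.0, 1.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]                 # should vanish at the far boundary

s_star = brentq(residual, -1.1, -0.9)   # bracket around the expected slope
print(f"f''(0) = {s_star:.6f}")         # approaches the exact value -1
```

The paper's own runs couple this idea with the temperature and concentration equations and use Newton-Raphson updates on three guessed derivatives instead of a one-dimensional bracket.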
In the present study, the values of the magnetic parameter (M) are chosen in the range 0 < M < 5 to clearly show the effect of this parameter on the dimensionless velocity, temperature, and concentration profiles as well as the Nusselt number. Most nanofluids reported in the literature have large values of the Lewis number, i.e., Le > 1 [13-15]. Hence, the values of the Lewis number are chosen in the range 2 < Le < 10. The same values of Nb and Nt as those adopted by Rasekh et al. [10] and Gorla et al. [11] are used in the present study; these values, compared to the previous studies, allow us to evaluate the effect of magnetic fields.

Fig. 2 exhibits the effect of the magnetic parameter (M) on the velocity profiles. The maximum value of the velocity is at the surface of the cylinder, and the velocity then decreases asymptotically to zero far from the stretching surface.

The velocity profiles decrease as the magnetic parameter increases. Increasing the magnetic parameter increases the induced Lorentz force in the boundary layer and hence decreases the velocity profiles there. This indicates that an increase in the magnetic parameter increases the Lorentz force; the augmented Lorentz force opposes the flow and reduces the fluid motion. However, the variation of the magnetic parameter does not show a significant effect on the thickness of the hydraulic boundary layer.

Fig. 2. Effect of magnetic parameter on the velocity profiles.

Fig. 3 depicts the effect of the magnetic parameter on the temperature profiles. This figure shows that the temperature profiles increase as the magnetic parameter increases. Indeed, the increase of the magnetic parameter reduces the magnitude of the velocity profiles in the boundary layer, and hence the temperature in the boundary layer rises. The variation of the magnetic parameter does not show a significant effect on the thickness of the thermal boundary layer past the stretching cylinder. Fig. 4 illustrates the effect of the magnetic parameter on the concentration profiles. As can be seen, the increase of the magnetic parameter increases the magnitude of the concentration profiles. As mentioned, an increase of the magnetic parameter reduces the magnitude of the velocity profiles in the boundary layer, and the decrease of velocity in the boundary layer induces the diffusion of nanoparticles there. On the other hand, an increase of the magnetic parameter tends to decrease the temperature gradients in the boundary layer (as seen in Fig. 3). In nanofluids, the thermophoresis force acts opposite to the temperature gradient and tends to move nanoparticles from hot to cold regions. The magnitude of thermophoresis is proportional to the temperature gradient. Therefore, a decrease in the temperature gradient decreases the effect of thermophoresis in the boundary layer and consequently tends to decrease the diffusion of nanoparticles. Fig. 4 demonstrates that, as the magnetic parameter increases, the effect of the variation of velocity on the concentration profiles is the dominant effect.

Increasing the magnetic parameter increases the Lorentz force, which opposes the fluid motion; increasing the Lorentz force decreases the velocity in the boundary layer. Based on the momentum equations, it is clear that the magnetic force corresponds to the product of the velocity and the magnetic field magnitude. In the present study, the magnetic field was assumed to be comparatively high and uniform in the boundary layer.

Thus, the magnetic force is solely a function of the velocity of the fluid in the boundary layer. It should be noticed that the velocity in the boundary layer is due to the stretching of the cylinder. Consequently, the maximum velocity can be observed on the surface of the cylinder, while the minimum velocity is zero in the quiescent part of the fluid far from the surface (near the edge of the boundary layer). As a result, the maximum magnitude of the induced Lorentz force is seen in the vicinity of the cylinder (this is where the magnetic field strongly affects the velocity and consequently the temperature profiles). Far from the surface of the cylinder, the velocities are very low, and consequently the induced Lorentz force is also very low. Hence, as can be seen in the figures, the effect of the magnetic field on the thickness of the boundary layer is negligible.

Fig. 3. Effect of magnetic parameter on the temperature profiles. Fig. 4. Effect of magnetic parameter on concentration profiles.

Figs. 5 and 6 depict the effect of the magnetic parameter on the Nusselt number for selected values of the thermophoresis and Brownian motion parameters.
These figures show that the Nusselt number is a decreasing function of the magnetic parameter. This observation is in good agreement with Fig. 3: as mentioned, the increase in the magnetic parameter tends to decrease the temperature gradients in the boundary layer and hence decreases the Nusselt number. Figs. 5 and 6 also show that the Nusselt number is a decreasing function of the thermophoresis and Brownian motion parameters. The Brownian motion effect tends to move the nanoparticles from high concentration areas to low ones. Therefore, in the present study, both the Brownian motion and thermophoresis effects tend to move the nanoparticles away from the stretching cylinder. Indeed, the augmentation of the Brownian motion or thermophoresis parameters intensifies the diffusion of nanoparticles into the boundary layer and consequently decreases the Nusselt number.

Fig. 7 shows the effect of the suction/injection parameter on the Nusselt number for various values of the magnetic parameter. This figure shows that the reduced Nusselt number is an increasing function of the suction/injection parameter, while it is a decreasing function of the magnetic parameter. It is noticed from Fig. 7 that the Nusselt number is significantly affected by the suction/injection parameter. The positive and negative values of γ indicate mass suction and mass injection, respectively.

Fig. 5. Effects of Nt and M on the Nusselt number. Fig. 6. Effects of Nb and M on Nusselt number.

By increasing the suction, i.e., increasing the magnitude of γ > 0, the thickness of the thermal boundary layer decreases, and hence the temperature gradient at the surface of the cylinder (the Nusselt number) increases. This is because increasing the suction brings a large amount of ambient fluid to the surface of the cylinder. In contrast, increasing the mass injection percolates fluid through the boundary layer, which can increase the thickness of the thermal boundary layer, and hence the temperature gradient at the surface of the cylinder decreases.

Fig. 7. Effects of M and γ on Nusselt number.

The Nusselt number is the ratio between the convective heat transfer coefficient and the conductive heat transfer coefficient (i.e., Nu = hnf a / knf). Experiments demonstrate that dispersing nanoparticles can significantly augment the thermal conductivity of the resulting fluid. Therefore, there is an initial significant potential for increasing heat transfer because of the increase in the thermal conductivity of the mixture, as hnf ~ knf when using nanoparticles. The results of the present study indicate that the presence of Brownian motion, thermophoresis, and a magnetic field decreases the reduced Nusselt number. If the increase in the thermal conductivity of the mixture because of the presence of nanoparticles is very significant, then an overall convective enhancement can be seen. However, if the increase in the thermal conductivity of the mixture is not significant, then the overall convective coefficient may deteriorate.

4. Conclusion

A combined similarity and numerical approach was utilized to study the effect of a magnetic field on the flow, temperature, and concentration profiles in the boundary layer. The effect of the non-dimensional parameters on the Nusselt number was analyzed.
The results reveal that:

• An increase in the magnetic parameter decreases the magnitude of the velocity profiles, but increases the magnitude of the temperature and concentration profiles in the boundary layer.
• The variation of the magnetic parameter does not show significant effects on the thickness of the boundary layer profiles (i.e., the velocity, temperature, and concentration profiles).
• The Nusselt number is a decreasing function of the magnetic parameter, the Brownian motion parameter, and the thermophoresis parameter, but an increasing function of the suction/injection parameter.

Based on the results of the present study, it can be concluded that the effect of Brownian motion and thermophoresis on the reduced Nusselt number is significant. As the reduced Nusselt number is a decreasing function of both the Brownian motion and thermophoresis parameters, the heat transfer enhancement associated with using nanofluids may not be as large as the observed enhancement in the thermal conductivity of nanofluids. Therefore, single-phase models, which neglect the Brownian motion and thermophoresis effects, would overestimate the heat transfer rate induced by using nanofluids.

Acknowledgements

The authors are grateful to Shahid Chamran University of Ahvaz for its crucial support.

Nomenclature: a, radius of the cylinder; B0, uniform magnetic field; c, constant; C, nanoparticle volume fraction; C∞, ambient nanoparticle volume fraction; Cw, nanoparticle volume fraction at the stretching cylinder; Cf, skin friction coefficient; DB, Brownian diffusion coefficient; DT, thermophoresis diffusion coefficient; k, thermal conductivity of nanofluid; Le, Lewis number; M, magnetic parameter; mw, wall mass flux; Nb, Brownian motion parameter; Nt, thermophoresis parameter; Nu, Nusselt number; p, pressure; p∞, ambient pressure; Pr, Prandtl number; qw, wall heat flux; Re, Reynolds number; Sh, Sherwood number; T, nanofluid temperature; T∞, ambient nanofluid temperature; Tw, nanofluid temperature at the stretching cylinder; u, w, velocity components along the r- and z-axes; uw, velocity of the stretching cylinder; r, z, coordinates (the z-axis is aligned along the stretching cylinder and the r-axis is normal to it). Greek symbols: α, thermal diffusivity of nanofluid; (ρc)nf, heat capacity of the nanofluid; (ρc)p, effective heat capacity of the nanoparticle material; σ, electrical conductivity of nanofluid; η, similarity variable; φ(η), dimensionless nanoparticle volume fraction; θ(η), dimensionless temperature; ρ, nanofluid density; ρp, nanoparticle mass density; ν, kinematic viscosity; τ, ratio between the effective heat capacity of the nanoparticle material and the heat capacity of the nanofluid; τw, wall shear stress.

References

[1] S.U.S. Choi, Enhancing thermal conductivity of fluids with nanoparticles, in: The Proceedings of the 1995 ASME International Mechanical Engineering Congress and Exposition, San Francisco, USA, ASME, FED 231/MD 66, 99-105 (1995).

[2] H. Masuda, A. Ebata, K. Teramea and N. Hishinuma, Altering the thermal conductivity and viscosity of liquid by dispersing ultra-fine particles, Netsu Bussei, 4, 227-233 (1993).

[3] T. Fang and A. Aziz, Viscous Flow with Second-Order Slip Velocity over a Stretching Sheet, Z. Naturforsch., 65a, 1087-1092 (2010).

[4] T. Hayat, M. Nawaz, Effect of Heat Transfer on Magnetohydrodynamic Axisymmetric Flow Between Two Stretching Sheets, Z. Naturforsch., 65a, 961-968 (2010).

[5] A. S. Butt, S. Munawar, A. Ali, and A. Mehmood, Entropy Analysis of Mixed Convective Magnetohydrodynamic Flow of a Viscoelastic Fluid over a Stretching Sheet, Z. Naturforsch., 67a, 451-459 (2012).
[6] S. Mukhopadhyay, Upper-Convected Maxwell Fluid Flow over an Unsteady Stretching Surface Embedded in Porous Medium Subjected to Suction/Blowing, Z. Naturforsch., 67a, 641-646 (2012).

[7] A. Ishak, R. Nazar and I. Pop, Magnetohydrodynamic (MHD) flow and heat transfer due to a stretching cylinder, Energy Conv. Manag., 49, 3265-3269 (2008).

[8] C.Y. Wang, Fluid flow due to a stretching cylinder, Phys. Fluids, 31, 466-468 (1988).

[9] A. Ishak, R. Nazar and I. Pop, Uniform suction/blowing effect on flow and heat transfer due to a stretching cylinder, Appl. Math. Model, 32, 2059-2066 (2008).

[10] A. Rasekh, D.D. Ganji and S. Tavakoli, Numerical solution for a nanofluid past over a stretching circular cylinder with non-uniform heat source, Frontiers Heat. Mass. Transf. (FHMT), 3, 043003 (2012).

[11] R. Subba, R. Gorla, A. Chamkha and E. Al-Meshaiei, Melting heat transfer in a nanofluid boundary layer on a stretching circular cylinder, J. Naval Arch. Marin. Eng., 9, 1-10 (2012).

[12] M. Subhas Abel, P.G. Siddheshwar and N. Mahesha, Numerical solution of the momentum and heat transfer equations for a hydromagnetic flow due to a stretching sheet of a non-uniform property micropolar liquid, Appl. Math. Comp., 217, 5895-5909 (2011).

[13] W.A. Khan and I. Pop, Boundary-layer flow of a nanofluid past a stretching sheet, Int. J. Heat. Mass. Transf., 53, 2477-2483 (2010).

[14] R. Kandasamy, P. Loganathan and P. Puvi Arasu, Scaling group transformation for MHD boundary-layer flow of a nanofluid past a vertical stretching surface in the presence of suction/injection, Nuclear Eng. Design, 241, 2053-2059 (2011).

[15] N. Bachok, A. Ishak and I. Pop, Unsteady boundary-layer flow and heat transfer of a nanofluid over a permeable stretching/shrinking sheet, Int. J. Heat. Mass. Transf., 55, 2102-2109 (2012).

[16] J. Buongiorno, Convective Transport in Nanofluids, J. Heat. Transf., 128, 240-245 (2006).
### History

• Receive Date: 10 August 2013
• Revise Date: 22 November 2013
• Accept Date: 14 December 2013
• First Publish Date: 01 May 2014" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83733994,"math_prob":0.9032093,"size":23726,"snap":"2023-14-2023-23","text_gpt3_token_len":5402,"char_repetition_ratio":0.17245595,"word_repetition_ratio":0.09516129,"special_character_ratio":0.21592346,"punctuation_ratio":0.13411489,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9741288,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-02T19:20:35Z\",\"WARC-Record-ID\":\"<urn:uuid:43ba7a8c-b920-470a-b55c-97eaaf62a8d4>\",\"Content-Length\":\"98481\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4ac1fd19-bcc3-45db-805b-684f0ceb9505>\",\"WARC-Concurrent-To\":\"<urn:uuid:dd2df357-f521-4816-81fd-622be75bde69>\",\"WARC-IP-Address\":\"94.182.28.25\",\"WARC-Target-URI\":\"https://jhmtr.semnan.ac.ir/article_149.html\",\"WARC-Payload-Digest\":\"sha1:7S4ITDUFVJQ72H4EN5GYAQ5H6UDW5CIN\",\"WARC-Block-Digest\":\"sha1:UWXFGB7O3SIZSABILJJTYW5UNPQJWQTF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648850.88_warc_CC-MAIN-20230602172755-20230602202755-00227.warc.gz\"}"}
http://www.spss123.com.tw/2017/08/blog-post_9.html
[ "## Friday, August 11, 2017

### Using a Table to Quickly Determine Sample Size

Editor’s note: Paul C. Boyd is a principal of the Research Advisors, Franklin, Mass.
There are various formulas for calculating the required sample size for a study. These formulas require knowledge of the variance or proportion in the population and a determination of the maximum desirable error, as well as the acceptable Type I error risk. For example, sample size formulas were addressed in a July 1999 Quirk’s article by Gang Xu (“Estimating sample size for a descriptive study in quantitative research”).
Such formulas are typically presented in such a way as to avoid the issue of population size - they assume that the sample is to be drawn from an infinitely large population. Still, these formulas may appear unnecessarily complex to researchers who have to deal with these issues on only an occasional basis. The question they ask is simple: Isn’t there an easier way?
The answer is yes. After all, why bother with formulas when you don’t have to? Many researchers prefer a simple table to assist them with the determination of the appropriate sample size(s) for their studies.
It is possible to use one of these formulas to construct a table that suggests the optimal sample size - given a population size, a specific margin of error and a desired confidence interval. This can help researchers avoid the formulas altogether and simplify the process of determining appropriate sample sizes. The accompanying table presents the results of one set of these calculations. This table may easily be used to determine the appropriate sample size for almost any study.
For business and social science research the first column within the table is usually considered acceptable (confidence level = 95 percent, margin of error = ±5 percent). To use the table, simply find the size of the population from which the sample is to be drawn down the left-most column (use the next highest value if the exact population size is not listed) and then identify the value in the next column. The value in this column is the sample size that is required to generate a margin of error of ±5 percent for any population proportion with a 95 percent confidence level. Should more precision be required (i.e., a smaller margin of error) or greater confidence be desired (0.99), the other columns of the table should be employed.
Thus, if you have 5,000 customers and you want to sample a sufficient number to generate a 95 percent confidence interval that predicts the proportion who, say, would be repeat customers within ±2-1/2 percent, you would need responses from a (random) sample of 1,176 of all your customers.
As you can see, using the table is much simpler than employing a formula. (A dynamic version of the table is available for download as an Excel spreadsheet that allows the user to change the margin of error, confidence level and/or the population size. Visit http://research-advisors.com/documents/SampleSize-web.xls .)
Suppose these customers were divided into two subgroups - Group A, consisting of 1,500 customers, and Group B, consisting of 3,500 - and you wanted to determine the proportions of each. In order to maintain the same levels of confidence and precision (95 percent and ±2-1/2 percent) you would need random samples of 759 from Group A and 1,068 from Group B.
Caution is urged to avoid lower levels of confidence (e.g., 95 percent) or larger margins of error (5 percent) solely for the purpose of minimizing the required sample size. As with all statistical procedures, the confidence level should be determined by the consequences of drawing the wrong conclusion due to sampling error. Likewise, the margin of error should be determined based upon the usefulness of the interval constructed (and remember that the interval width is twice the margin of error).
The formula used for these calculations is shown here (this is the formula used by Krejcie and Morgan in their article “Determining Sample Size for Research Activities”):

$$n = \frac{\chi^2 \, N \, P(1-P)}{d^2 (N-1) + \chi^2 \, P(1-P)}$$

where n is the required sample size, χ² is the chi-square value for one degree of freedom at the desired confidence level (3.841 for 95 percent confidence), N is the population size, P is the population proportion (assumed to be 0.50, which yields the largest sample size), and d is the margin of error expressed as a proportion.
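The table logic is easy to verify with a few lines of code. The sketch below applies the Krejcie-Morgan formula at the 95 percent confidence level (χ² = 3.841) and a ±2.5 percent margin of error to the population sizes used in the examples above, and reproduces the quoted figures of 1,176, 759, and 1,068.

```python
# Krejcie-Morgan required sample size for a finite population.
def sample_size(N, d, chi2=3.841, P=0.5):
    """N: population size, d: margin of error (as a proportion),
    chi2: chi-square for 1 df at the desired confidence, P: proportion."""
    return chi2 * N * P * (1 - P) / (d**2 * (N - 1) + chi2 * P * (1 - P))

for N in (5000, 1500, 3500):                    # the article's example populations
    print(N, round(sample_size(N, d=0.025)))    # -> 1176, 759, 1068
```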
All of the sample estimates discussed present figures for the largest possible sample size for the desired level of confidence. Should the proportion of the sample with the desired characteristic be substantially different from 50 percent, then the desired level of accuracy can be established with a smaller sample. However, since you typically can’t know what this percentage is until you actually ask a sample, it is wisest to assume that it will be 50 percent and use the listed larger sample size.
References
Krejcie, Robert V. and Morgan, Daryle W. “Determining Sample Size for Research Activities.” Educational and Psychological Measurement 30 (1970): 607-610.
Xu, Gang. “Estimating Sample Size for a Descriptive Study in Quantitative Research.” Quirk’s Marketing Research Review, June 1999." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89400953,"math_prob":0.8375073,"size":5639,"snap":"2019-51-2020-05","text_gpt3_token_len":1172,"char_repetition_ratio":0.13895297,"word_repetition_ratio":0.27030033,"special_character_ratio":0.21298103,"punctuation_ratio":0.10182517,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97579265,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T01:07:34Z\",\"WARC-Record-ID\":\"<urn:uuid:5a07a483-fe20-4a61-9f14-17b73535b687>\",\"Content-Length\":\"62581\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:415b7de1-efb9-4346-aedd-e783e345f1a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:40cded1b-a55c-402f-acdd-c86afba7d1ae>\",\"WARC-IP-Address\":\"172.217.13.83\",\"WARC-Target-URI\":\"http://www.spss123.com.tw/2017/08/blog-post_9.html\",\"WARC-Payload-Digest\":\"sha1:3J76WQYYXZIEOQ4TOKVFJVM42TSNBHOF\",\"WARC-Block-Digest\":\"sha1:IW7HWQGFBLO4OEE7O25UCH27HY7HKPUQ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540534443.68_warc_CC-MAIN-20191212000437-20191212024437-00350.warc.gz\"}"}
https://wow-essay.com/free-essay-samples/essay-sample-on-basic-principles-of-the-algorithms/
[ "# Essay Sample on Basic Principles of the Algorithms

## Introduction to Algorithms

The most basic techniques for developing an algorithm are as follows:

• Divide and conquer
• Dynamic programming
• Backtracking
• Hill-climbing
• Greedy algorithms

A solvable problem can be solved using one algorithm of each type: the methodical way, the obvious way, the clever way, and the miraculous way.

To understand algorithms, the user must know at least one programming language at a level where they can translate code into solutions to a problem. It is also necessary that the user has a working knowledge of data structures: stacks, arrays, linked lists, queues, trees, disjoint sets, heaps, and graphs (Wikibooks, 2004). The user must also know basic algorithms such as sorting, binary search, depth-first search, or breadth-first search. If the user is unfamiliar with these things, it would be helpful to consult further references about data structures before studying algorithms.

## The Importance of Efficiency

Efficiency is not required in every problem. Algorithmic efficiency is concerned with the space and the time needed to execute a task (Wikibooks, 2004). When space or time is inexpensive or abundant, the programmer can focus on the solution without worrying about making the code run faster.

There are particular cases where efficiency matters:

• Limited resources
• A large set of data
• Real-time applications (where latency matters most)
• Computationally costly jobs
• Subroutines that are used frequently during a program run

## Brief Discussion of Common Algorithmic Techniques

## Divide and Conquer

In certain problems, where the program input is already in an array, a solution can be built by cutting the problem into smaller chunks (divide), recursively tackling these small parts, then combining the small individual solutions into one result; a minimal sketch is shown after the Dynamic Programming section below. Good examples of divide-and-conquer algorithms are the quicksort and merge-sort algorithms.

## Backtracking

This technique is not the most efficient because it is a brute-force strategy. However, optimizations can be made to the program to reduce the number of branches. When one leaf is visited, the algorithm goes back up the stack to undo the choices that failed, then proceeds to other branches of the tree. Backtracking is a technique that works best with problems where there is already self-similarity. This means that smaller problem instances resemble the entirety of the problem.

## Randomization

For countless applications, randomization is becoming increasingly important. This technique generates and uses random numbers to tailor a solution to one instance of a problem.

## Hill Climbing

The fundamental concept of hill climbing is to begin with an inefficient solution to the problem, then apply repeated optimization techniques to this poor solution until it becomes more optimal or a specific criterion is met. Hill climbing works best in network flow. It is useful in many problems that depict different relationships, making it applicable outside computing networks. As a hill-climbing technique, network flows can solve matching problems outside of computer networks.

## Dynamic Programming

This technique works well with backtracking algorithms because dynamic programming is an optimization technique. When there is a need to solve subproblems repeatedly, precious time can be saved by dealing with the small subproblems (or leaves) first (smallest to largest, a bottom-up strategy) and storing each subproblem solution in a table.
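To make the divide-and-conquer strategy concrete, here is a minimal merge-sort sketch in Python (an illustration added for clarity; the essay itself contains no code): the input is split, each half is sorted recursively, and the sorted halves are merged into one result.

```python
# Divide and conquer: split the input, solve the halves recursively, merge.
def merge_sort(items):
    if len(items) <= 1:                 # base case: trivially sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # divide ...
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0             # ... and combine by merging
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]
```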
## References

Wikibooks. (2004, September 28). Algorithms/Introduction. Retrieved August 16, 2022, from https://en.wikibooks.org/wiki/Algorithms/Introduction" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90871763,"math_prob":0.812906,"size":3813,"snap":"2023-40-2023-50","text_gpt3_token_len":750,"char_repetition_ratio":0.12785508,"word_repetition_ratio":0.0,"special_character_ratio":0.18804091,"punctuation_ratio":0.114503816,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9706924,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T10:59:20Z\",\"WARC-Record-ID\":\"<urn:uuid:8a95c6cc-f41d-4248-9d8c-542e23663823>\",\"Content-Length\":\"43206\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7a44a43d-0b23-4dc4-b083-0c8267079970>\",\"WARC-Concurrent-To\":\"<urn:uuid:397816a9-4e3d-4c50-ae75-4b958bfc71be>\",\"WARC-IP-Address\":\"152.160.232.195\",\"WARC-Target-URI\":\"https://wow-essay.com/free-essay-samples/essay-sample-on-basic-principles-of-the-algorithms/\",\"WARC-Payload-Digest\":\"sha1:XKD7EQIDS5PN7EAQOYSBV2YRTM5ZZ457\",\"WARC-Block-Digest\":\"sha1:6IP3A2BTYFRR526IR47KHKMF5DH2LGAL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506632.31_warc_CC-MAIN-20230924091344-20230924121344-00603.warc.gz\"}"}
https://physics.stackexchange.com/questions/501145/validity-of-the-derivation-of-time-energy-uncertainty-principle
[ "# Validity of the derivation of time-energy uncertainty principle?

## Background

The top voted answer to this question seems to make some assumptions I'm uncertain about (to say the least): What is $\Delta t$ in the time-energy uncertainty principle?

## Assumptions

• The uncertainty principle is usually a statement about the measurement (while it is true that the relations are applicable during the unitary process as well, they are usually not used in that context).
• Since the measurement is usually considered a discontinuous process, is he considering an interpretation where there is no measurement, such as many worlds?

## Question

Is there a version of this derivation (compatible with the discontinuous-measurement interpretations of quantum mechanics) where one is allowed to use calculus in the way he does (both integration and differentiation)?

P.S.: I have already commented on this.

• I don't understand the question. Is there a version of this derivation where one is allowed to use the calculus in the way he does (both integration and differentiation)? Yes there is, it is the one you are referring to. – Aaron Stevens Sep 10 at 16:26
• @AaronStevens I also mention the measurement is discontinuous (at least in some interpretations). In that case I want a version of this derivation where one can do those manipulations there too. – More Anonymous Sep 10 at 16:28
• Since I'm getting a downvote despite explaining my objection (in the comment), can someone clarify why the objection is wrong? – More Anonymous Sep 10 at 16:31
• Baez's summary is based on the resolution of an optimal clock, Δt, the approximate amount of time it takes for the expectation value of an observable to change by a standard deviation provided the system is in a pure state. What is your question?? – Cosmas Zachos Sep 10 at 16:48
• @CosmasZachos I'm discussing this in the chat at the moment ... (I can't do both simultaneously) – More Anonymous Sep 10 at 16:57

The WP summary of the Mandelstam-Tamm relation is, for an observable $\hat B$,
$$\sigma_E \, \frac{\sigma_B}{\left| \frac{\mathrm{d}\langle \hat B \rangle}{\mathrm{d}t}\right|} \ge \frac{\hbar}{2} ~~,$$
where the second factor on the l.h.s., with dimensions of time, is a lifetime of the state ψ with respect to the hermitian observable $\hat B$. Roughly, it is the time interval (Δt) after which the expectation value ⟨$\hat B$⟩ changes appreciably. For a stationary state, the drift rate of ⟨$\hat B$⟩ goes to zero, and the variance of the energy goes to 0 as well, as it should.

This is all in standard QM, unitarily evolving, with or without measurements. You may do any and all measurements discontinuous, delirious, expialidocious, whatever, and plot your results, but you must be talking about the same state ψ all the time. The distribution in B will have a variance, which is what is under discussion.

(Heuristically, a state ψ that only exists for a short time cannot have a definite energy. To have a definite energy, the frequency of the state must be defined accurately, and this requires the state to persist for many cycles, the reciprocal of the required accuracy. In spectroscopy, excited states have a finite lifetime. By the above, they do not have a definite energy, and, each time they decay, the energy they release is slightly different. The average energy of the outgoing photon has a peak at the theoretical energy of the state, but the distribution has a finite width called the natural linewidth. Fast-decaying states have a broad linewidth, while slow-decaying states have a narrow linewidth.)
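A quick numerical illustration of the relation above (a sketch, with ħ set to 1): take a two-level system prepared in an equal superposition of energy eigenstates with energies 0 and E, probed through B = σx. For this state the bound is saturated, so the product σE · σB / |d⟨B⟩/dt| equals 1/2 at every time where the derivative is nonzero.

```python
# Check of the Mandelstam-Tamm bound (hbar = 1) for psi = (|0> + |1>)/sqrt(2)
# with H|0> = 0, H|1> = E|1>, and the observable B = sigma_x.
import numpy as np

E = 1.0
sigma_E = E / 2.0                       # energy spread of the superposition
for t in (0.3, 0.7, 1.1):               # avoid times where d<B>/dt vanishes
    mean_B = np.cos(E * t)              # <sigma_x>(t)
    sigma_B = np.sqrt(1.0 - mean_B**2)  # since sigma_x squared is the identity
    dBdt = abs(E * np.sin(E * t))       # |d<sigma_x>/dt|
    print(t, sigma_E * sigma_B / dBdt)  # prints 0.5: the bound is saturated
```

Other states generally give a product strictly above 1/2; in the sense of the "optimal clock" comment above, this superposition is as good a clock as the bound allows.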
I have not fully appreciated your misgivings, but they seem to me to also apply to the standard Δx Δp uncertainty principle: a pure state will have corresponding distributions for x and p with nontrivial variances, computable through standard continuous QM, which your measurements will probe.

• I feel I came to the conclusion that my question was based on legitimate doubts, but if you have the patience to understand what they were, here is the link: chat.stackexchange.com/transcript/message/51655733#51655733 Also, mentioning $\Delta x \Delta p$ uncertainty makes me think you suspect the derivation in question enables you to make the claim that $2$ measurements cannot be arbitrarily close together (in time) ... But in the chat we have users comparable to your points (and one even more) who agree this derivation cannot warrant that. – More Anonymous Sep 10 at 21:30
• I'm sorry, I must then not understand your question. The relation, and my answer, have little to do with measurement. They provide probabilities of outcomes when/if you make a measurement. Once you have made a measurement, you have altered the state, and you are not talking about that state anymore. Probabilities are estimated experimentally by repeatedly measuring the "same" state, no? – Cosmas Zachos Sep 10 at 21:37
• Yes, see assumption $1$. And I asked the chatroom what was the experimental context of this derivation and the (accepted) answer seemed to be of very limited scope (which I suspect most upvoters of the original answer probably do not know). – More Anonymous Sep 10 at 21:40
• Oh... If you are thinking about measurements, I have nothing to say. All derivations of this ilk use standard QM and leave it to measurements to experimentally probe the answers. They really, really, deal with the same state ψ... – Cosmas Zachos Sep 10 at 21:43
• Of course. We might agree there. This is (highly abstractly) what spectroscopists do to extract lifetimes out of linewidths. – Cosmas Zachos Sep 10 at 21:48

Time is dissipation of energy. Delta t in that context is the ratio of original to current energy density. Delta E is the remaining original energy. As energy dissipates, Delta t increases, and Delta E goes to zero.

For an analogy, consider measuring a wave in water. Early in the wave the amplitude is high and similar to when the wave was created. Delta t is low and Delta E is high. Over time, the wave peters out. Delta t is high because little amplitude is left and Delta E is low." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91996676,"math_prob":0.9128332,"size":1904,"snap":"2019-35-2019-39","text_gpt3_token_len":452,"char_repetition_ratio":0.123157896,"word_repetition_ratio":0.0,"special_character_ratio":0.21953781,"punctuation_ratio":0.13079019,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.973029,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-20T11:45:36Z\",\"WARC-Record-ID\":\"<urn:uuid:d6805f2b-3e26-4f59-a24c-47880f7398de>\",\"Content-Length\":\"157803\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b561df0d-2ac7-4780-a0b9-fafacba0d59b>\",\"WARC-Concurrent-To\":\"<urn:uuid:333dabad-f1b9-467d-b2b3-6d9ad93b8943>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/501145/validity-of-the-derivation-of-time-energy-uncertainty-principle\",\"WARC-Payload-Digest\":\"sha1:UQL2X3KP625RVV4LH5ZN63LK6FBQUP2O\",\"WARC-Block-Digest\":\"sha1:44EA7E5ISCSONEFD6O4F73ZC2HBK3ZPG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574018.53_warc_CC-MAIN-20190920113425-20190920135425-00056.warc.gz\"}"}
https://sources.debian.org/src/openems/0.0.35+dfsg.1-3/openEMS/matlab/AddMRStub.m/
[ "``````
function CSX = AddMRStub( CSX, materialname, prio, MSL_width, len, alpha, resolution, orientation, normVector, position )
% CSX = AddMRStub( CSX, materialname, prio, MSL_width, len, alpha,
% resolution, orientation, normVector, position )
%
% Microstrip Radial Stub
%
% CSX:          CSX-object created by InitCSX()
% materialname: property for the MSL (created by AddMetal() or AddMaterial())
% prio:         priority
% MSL_width:    width of the MSL to connect the stub to
% len:          length of the radial stub
% alpha:        angle subtended by the radial stub (degrees)
% resolution:   discrete angle spacing (degrees)
% orientation:  angle of main direction of the radial stub (degrees)
% normVector:   normal vector of the stub
% position:     position of the end of the MSL
%
% This radial stub definition is equivalent to the one Agilent ADS uses.
%
% example:
% CSX = AddMRStub( CSX, 'PEC', 10, 1000, 5900, 30, 1, -90, [0 0 1], [0 -10000 254] );
%
% Sebastian Held
% Jun 1 2010
%
% See also InitCSX AddMetal AddMaterial

% check normVector
if ~(normVector(1) == normVector(2) == 0) && ...
   ~(normVector(1) == normVector(3) == 0) && ...
   ~(normVector(2) == normVector(3) == 0) || (sum(normVector) == 0)
    error 'normVector must have exactly one component ~= 0'
end
normVector = normVector ./ sum(normVector); % normVector is now a unit vector

% convert angles to radians
alpha_rad = alpha/180*pi;
orientation_rad = orientation/180*pi;
resolution_rad = resolution/180*pi;

%
% build stub at origin (0,0,0) and translate/rotate it later
%
D = 0.5 * MSL_width / sin(alpha_rad/2);
R = cos(alpha_rad/2) * D;

% point at the center of the MSL
p(1,1) = 0;
p(2,1) = -MSL_width/2;
p(1,2) = 0;
p(2,2) = MSL_width/2;
for a = alpha_rad/2 : -resolution_rad : -alpha_rad/2
    p(1,end+1) = cos(a) * (D+len) - R;
    p(2,end)   = sin(a) * (D+len);
end

% rotate
rot = [cos(-orientation_rad), -sin(-orientation_rad); sin(-orientation_rad), cos(-orientation_rad)];
p = (p.' * rot).';

% translate
idx_elevation = [1 2 3];
idx_elevation = idx_elevation(normVector>0);
dim1 = mod( idx_elevation, 3 ) + 1;
dim2 = mod( idx_elevation+1, 3 ) + 1;
p(1,:) = p(1,:) + position(dim1);
p(2,:) = p(2,:) + position(dim2);
elevation = position(idx_elevation);

CSX = AddPolygon( CSX, materialname, prio, normVector, elevation, p );
``````" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.50759554,"math_prob":0.9968139,"size":2487,"snap":"2020-24-2020-29","text_gpt3_token_len":894,"char_repetition_ratio":0.15304068,"word_repetition_ratio":0.041763343,"special_character_ratio":0.41737032,"punctuation_ratio":0.20238096,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99835086,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-10T14:01:03Z\",\"WARC-Record-ID\":\"<urn:uuid:d4fc86a1-567b-4e7c-960b-1fbca37874f1>\",\"Content-Length\":\"16866\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8db9a673-be4b-4881-b6b1-afdcbd73677f>\",\"WARC-Concurrent-To\":\"<urn:uuid:650265f8-fc80-4c30-80ab-17ed1bb003df>\",\"WARC-IP-Address\":\"209.87.16.74\",\"WARC-Target-URI\":\"https://sources.debian.org/src/openems/0.0.35+dfsg.1-3/openEMS/matlab/AddMRStub.m/\",\"WARC-Payload-Digest\":\"sha1:HYTKE7SLDFHRA3Y5FW4TR2R3OMJG37PS\",\"WARC-Block-Digest\":\"sha1:NSJ3VTMRIB2HDJ27LTARDUZNVHPR4ZBG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655908294.32_warc_CC-MAIN-20200710113143-20200710143143-00246.warc.gz\"}"}
https://testbook.com/question-answer/the-number-of-3-digits-even-number-is-_____--608a63a2c1cd3eae38b22686
[ "# The number of 3 digits even number is _____.\n\nThis question was previously asked in\nMP Patwari Previous Paper 17 (Held On: 16 Dec 2017 Shift 1)\nView all MP Patwari Papers >\n1. 500\n2. 499\n3. 450\n4. 449\n\nOption 3 : 450\n\n## Detailed Solution\n\nConcept used :\n\nL = a + (n - 1)d    (where, L = last term, a = first term, n = number of terms and d is the common difference)\n\nCalculations :\n\nFirst 3 digit even number = 100\n\nLast 3 digit number = 998\n\n3 digit even numbers will be 100, 102, 104,......998.\n\nNow,\n\n998 = 100 + (n -1)2    (as per given formula)\n\n⇒ (n - 1)2 = 898\n\n⇒ n - 1 = 449\n\n⇒ n = 450\n\n∴ The number of 3 digits even number is 450" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7655867,"math_prob":0.9992898,"size":382,"snap":"2021-43-2021-49","text_gpt3_token_len":143,"char_repetition_ratio":0.17195767,"word_repetition_ratio":0.0,"special_character_ratio":0.46596858,"punctuation_ratio":0.17777778,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9980981,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-03T20:02:15Z\",\"WARC-Record-ID\":\"<urn:uuid:858f7e51-8c4c-44cd-95bb-f45c4115328b>\",\"Content-Length\":\"119191\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b381017b-bb04-48f8-8fd4-be473975603b>\",\"WARC-Concurrent-To\":\"<urn:uuid:878958ad-f8f5-45ea-a5e8-c2be4c2b3e8a>\",\"WARC-IP-Address\":\"104.22.44.238\",\"WARC-Target-URI\":\"https://testbook.com/question-answer/the-number-of-3-digits-even-number-is-_____--608a63a2c1cd3eae38b22686\",\"WARC-Payload-Digest\":\"sha1:JNXEJO37JXB2RAGVEHY4YHBN6TBPQB5Y\",\"WARC-Block-Digest\":\"sha1:C5NK5XRJI6ZMR6YUIYGM7LEJJQKB6JVO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362918.89_warc_CC-MAIN-20211203182358-20211203212358-00478.warc.gz\"}"}
https://www.positioniseverything.net/javascript-const
[ "# JavaScript Const Variable: The Ultimate Guide on The Const Keyword", null, "JavaScript const variable, along with keywords such as let, package, and others, plays an important role while writing a script. It is also used in C++ to declare the constant variables.\n\nIn simple words, constants are variables that cannot change values and they do not have to be primitive variables. In this article, we will review JavaScript constants with examples, so let’s get right on that!\n\n## What Is Const in JavaScript: Basics\n\nTo define a JavaScript constant variable using the const keyword in JavaScript, we should know that the keyword would create a read-only reference when it comes to the value. Let’s look at an example:\n\n const CONSTANT_NAME = value;\n\nWhen using constant identifiers, do note that they will be in uppercase. The const in JavaScript declares the block-scoped variables just like the let keyword. The only difference with the JavaScript constant variable is the const keyword would not reassign like it would using the let keyword.\n\n### – The Let Keyword in an Example\n\nThe let keyword declares variables that are mutable as you can change them anytime. The constants in JavaScript are immutable. Once assigned a value, they cannot be reassigned. Let’s see that in an example for better understanding:\n\n let a = 5; a = 10; a = a + 5; console.log(a); // 15\n\n### – The Const Keyword in an Example\n\nIn the example shown above, we can see the usage of the let keyword. Now, let’s see the effect a JavaScript const function has in a similar scenario:\n\n const RATE = 0.3; RATE = 0.5; // TypeError\n\nLooking at the example above, we can see that reassigning the variable declared by the const keyword results in a TypeError. So, what do we do now?\n\n#### What If the Variable Declared by Const Keyword Results in TypeError?\n\nWell, if the variable declared by the const keyword results in a TypeError we need to initialize the value to the variable that is declared by the const in JavaScript. Let’s see what is the result if we miss the initializer during the declaration of the JavaScript constant variable.\n\n const RED; // SyntaxError\n\nAs you can see, we have a syntax error in this case. Now, let’s see the relation of the JavaScript constants and objects.\n\n## What Is Const in JavaScript: Objects\n\nAs mentioned earlier, the const in JavaScript makes sure its variables created are read-only. Does that mean the value for the JavaScript constant variable reference is immutable? No. Let’s look at this example for clarity:\n\n const individual = { weight: 90 }; individual.weight = 78; // OK console.log(person.age); // 78\n\nIf you look at the example shown above, you cannot reassign a different value to the individual constant. However, you can change the individual variable’s value even though it’s a constant. Let’s look at an example:\n\n individual = { weight: 78 }; // TypeError\n\n### – How to Make Individual Objects Immutable\n\nAs we would want to make the individual object immutable, we will need to freeze it. How? By using the object.freeze(). Let’s use this method in the following example:\n\n const individual = Object.freeze({weight: 90}); individual.weight = 78; // TypeError\n\nNow, if you meticulously take a look, you will see the Object.freeze() method as shallow. What does that mean? 
Well, it can freeze the object’s own properties but not the objects that are referenced by those properties.\n\n### – Example of Constant Variable and Freeze Object Relation\n\nLet’s look at this with another example by using a small business. Supposing the small business name is XYZ Store, let’s see the relation of constant and frozen in the following example:\n\n const smallBusiness = Object.freeze({ name: 'XYZ Store', address: { street: 'XYZ Street, 123', city: 'New York', state: 'NY', zipcode: 12345 } });\n\nThe example above shows that Object.freeze has frozen the top-level properties of the object. However, the nested address object is still mutable (can be changed). Hence, we can add a new property to this part if needed. Let’s see this in an example:\n\n smallBusiness.address.country = 'USA'; // OK: the nested object is not frozen\n\n### – Notes on Information So Far\n\nSo far, we have learned the assignment of variables and reassignment, reference properties, and how Object.freeze works with const in JavaScript. Now, let’s move on to JavaScript constants and arrays.\n\n## What Is Const in JavaScript: Arrays\n\nBefore we dive into the relation of the JavaScript constant variable and arrays, let’s briefly describe arrays. Arrays are single variables used to store different kinds of elements. In many languages an array name is treated as a reference to multiple variables, but in JavaScript it is a single variable.\n\n### – Arrays and Const in JavaScript\n\nSince an array is a single variable in JavaScript, we can declare a colors array with one element using the const keyword. Once done, we can still change its contents, for example by adding another color. However, since the constant is a single reference, reassigning it to a new array is not possible. Let’s look at this situation with an example:\n\n const colors = ['blue']; colors.push('orange'); console.log(colors); // ["blue", "orange"] colors.pop(); colors.pop(); console.log(colors); // [] colors = []; // TypeError\n\nIf you look at the example above, mutating the array with push and pop works fine, but assigning a brand-new array to colors brings us a TypeError.\n\nLet’s go ahead and see the effect of const in JavaScript with loops.\n\n## Understanding Loops With JavaScript Const\n\nECMAScript 6 (ES6) has a construct that is fairly new, referred to as for…of. This construct allows loop creation over iterable objects such as arrays, maps, and even sets. Let’s look at some examples to understand the JavaScript const keyword in the for…of loop.\n\n let scores = [25, 40, 60]; for (let score of scores) { console.log(score); }\n\n### – Using Const Variable in a Loop\n\nLet’s suppose you don’t intend to modify the score variable within the loop. What do you do? You can use const in JavaScript to solve this issue. Let’s see it in an example:\n\n let scores = [25, 40, 60]; for (const score of scores) { console.log(score); }\n\nAs you can see in the example above, the for…of loop binds the JavaScript constant within each loop iteration. In simple words, for each iteration, a new score constant is created.\n\n#### Using Const Variable in a Loop: Example Returning Type Error\n\nIn an imperative for loop, however, a JavaScript constant variable will not work as the counter: using the const keyword to declare the loop counter returns a TypeError. Let’s take a look at such a scenario:\n\n for (const i = 0; i < scores.length; i++) { // TypeError console.log(scores[i]); }\n\nWondering why this happens? 
Well, the const declaration is evaluated only once, before the loop starts; the i++ update then tries to reassign the constant i, which is not allowed.\n\nLet’s try to understand more about the syntax of the JavaScript constant variable.\n\n## Understanding Syntax With JavaScript Const\n\nLet’s understand the syntax by looking at constants in JavaScript as names and values, also known as identifiers and expressions. Each constant name nameN must be a legal identifier, whereas each valueN is the constant’s value and must be a legal expression. The following example shows this in detail:\n\n const name1 = value1 [, name2 = value2 [, … [, nameN = valueN]]];\n\nBesides this syntax, we can also use destructuring assignment syntax to declare constants. Let’s look at an example:\n\n const { lot } = xyz; // where xyz = { lot: 5, loz: 8 };\n\nThe example shown above will create a const in JavaScript with the name “lot” and will provide it a value of “5”.\n\nWe used the terms scope and block scope earlier. Let’s see some examples to understand them better.\n\n## Understanding Scope With JavaScript Const\n\nBlock scope in JavaScript defines where in a program a variable is visible. Block-scoped variables, such as loop variables declared inside a for loop, are not accessible after the block ends unless they were defined before it.\n\nNow, let’s see an example to understand this concept further:\n\n if (X === 1) { const Y = 5; console.log('my input is ' + Y); // OK inside the block }\n\nIf we use the example above, Y is declared inside the if block, so it is block-scoped: within the block we made Y = 5 and can log it. Now, let’s try to read it outside the block:\n\n console.log('my input is ' + Y); // ReferenceError\n\nOutside the block the constant does not exist, so the example shown above will show an error.\n\n### – JavaScript Const: Some Things to Consider\n\nWe must know that a JavaScript constant’s scope can be global or local depending on the block in which it is declared. One thing to remember is that, unlike var variables, JavaScript constants do not become a property of the window object.\n\nFor a const in JavaScript, an initializer is required. Also, the temporal dead zone applies to both let and const in JavaScript. For a constant, name sharing with a function or a variable in the same scope block is not possible.\n\n## Must-Knows for JavaScript Const\n\nConstants in JavaScript were introduced in ES6 (2015): variables defined with the const keyword cannot be redeclared or reassigned, and they are block-scoped. These are some quick tips you should remember about const in JavaScript:\n\n• Since the variables cannot be reassigned, they must be assigned when declared\n• If the value will not change, the const keyword should be used\n• Use const in JavaScript when working with arrays, functions, objects, and regExps\n• The const keyword defines a constant reference, not a constant value\n• You can change the elements of a constant array and the properties of a constant object\n\n## Final Thoughts\n\nWe covered all there was to JavaScript const. The topics we covered are:\n\n•", null, "Objects with JavaScript constants\n• Arrays with const in JavaScript\n• Loops with constants in JavaScript\n• Syntax with the JavaScript constant variable\n• Scope with the JavaScript const keyword\n\nWe are certain that this guide has helped you to learn the complexities and the quick fixes to know what is const in JavaScript. 
The more you practice the things we discussed, the more proficient you will become with the JavaScript constant variable." ]
[ null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAASwAAADIAQAAAAB6pOH4AAAAAnRSTlMAAHaTzTgAAAAeSURBVFjD7cExAQAAAMKg9U9tB2+gAAAAAAAAAOA3HngAARco3ZkAAAAASUVORK5CYII=", null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAASwAAADIAQAAAAB6pOH4AAAAAnRSTlMAAHaTzTgAAAAeSURBVFjD7cExAQAAAMKg9U9tB2+gAAAAAAAAAOA3HngAARco3ZkAAAAASUVORK5CYII=", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8280542,"math_prob":0.8692589,"size":10515,"snap":"2023-40-2023-50","text_gpt3_token_len":2293,"char_repetition_ratio":0.193131,"word_repetition_ratio":0.03607666,"special_character_ratio":0.22672373,"punctuation_ratio":0.13105837,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.965822,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T00:02:55Z\",\"WARC-Record-ID\":\"<urn:uuid:f5bbd714-a29a-4664-b560-dec765487593>\",\"Content-Length\":\"210178\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2bfadcdf-5307-4fb4-b0ae-6ad96538a2d5>\",\"WARC-Concurrent-To\":\"<urn:uuid:08a67ede-df25-4e63-ac5b-4696e87fd051>\",\"WARC-IP-Address\":\"35.214.136.217\",\"WARC-Target-URI\":\"https://www.positioniseverything.net/javascript-const\",\"WARC-Payload-Digest\":\"sha1:T4IILCA3NGKVL7JRJFVW4NCLVDY5KUE3\",\"WARC-Block-Digest\":\"sha1:UJLGZOSR5CJUKNM4GW3RICAYSMAPUKPQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506669.96_warc_CC-MAIN-20230924223409-20230925013409-00162.warc.gz\"}"}
https://ukm.pure.elsevier.com/en/publications/hata-based-propagation-loss-formula-using-terrain-criterion-for-1
[ "# Hata based propagation loss formula using terrain criterion for 1800 MHz\n\nMahdi A. Nisirat, Salim Alkhawaldeh, Mahamod Ismail, Liyth Nissirat\n\nResearch output: Contribution to journalArticle\n\n6 Citations (Scopus)\n\n### Abstract\n\nMany mobile propagation models are going under intensive corrections; recently, to suit other new criteria's such as rough terrain areas. This proposed model modifies Hata main urban equation by adding a formula representing a logarithmic linear regression estimator of the standard deviation (σ) of the measuring campaign path in Amman city, Madaba city, and Jiza town, Jordan. High correlation factor of -96.7% is calculated between excess measured path loss compared to Hata urban path loss and log(σ). Root mean square error (RMSE) difference between this model and the measured raw data path loss has overcome RMSE calculated for Hata model, by an average of 21 dB, for open areas. The correction of suburban areas is calculated, on average, as 20 dB, and for urban areas as 17 dB.\n\nOriginal language English 855-859 5 AEU - International Journal of Electronics and Communications 66 10 https://doi.org/10.1016/j.aeue.2012.03.001 Published - Oct 2012\n\n### Fingerprint\n\nMean square error\nLinear regression\n\n### Keywords\n\n• Micro-cells\n• Mobile communications\n• Path loss\n• Terrain roughness\n\n### ASJC Scopus subject areas\n\n• Electrical and Electronic Engineering\n\n### Cite this\n\nHata based propagation loss formula using terrain criterion for 1800 MHz. / Nisirat, Mahdi A.; Alkhawaldeh, Salim; Ismail, Mahamod; Nissirat, Liyth.\n\nIn: AEU - International Journal of Electronics and Communications, Vol. 66, No. 10, 10.2012, p. 855-859.\n\nResearch output: Contribution to journalArticle\n\nNisirat, Mahdi A. ; Alkhawaldeh, Salim ; Ismail, Mahamod ; Nissirat, Liyth. / Hata based propagation loss formula using terrain criterion for 1800 MHz. In: AEU - International Journal of Electronics and Communications. 2012 ; Vol. 66, No. 10. pp. 855-859.\n@article{bdbd6fd9181d42ba86e3f48a151b05fe,\ntitle = \"Hata based propagation loss formula using terrain criterion for 1800 MHz\",\nabstract = \"Many mobile propagation models are going under intensive corrections; recently, to suit other new criteria's such as rough terrain areas. This proposed model modifies Hata main urban equation by adding a formula representing a logarithmic linear regression estimator of the standard deviation (σ) of the measuring campaign path in Amman city, Madaba city, and Jiza town, Jordan. High correlation factor of -96.7{\\%} is calculated between excess measured path loss compared to Hata urban path loss and log(σ). Root mean square error (RMSE) difference between this model and the measured raw data path loss has overcome RMSE calculated for Hata model, by an average of 21 dB, for open areas. 
The correction of suburban areas is calculated, on average, as 20 dB, and for urban areas as 17 dB.\",\nkeywords = \"Micro-cells, Mobile communications, Path loss, Terrain roughness\",\nauthor = \"Nisirat, {Mahdi A.} and Salim Alkhawaldeh and Mahamod Ismail and Liyth Nissirat\",\nyear = \"2012\",\nmonth = \"10\",\ndoi = \"10.1016/j.aeue.2012.03.001\",\nlanguage = \"English\",\nvolume = \"66\",\npages = \"855--859\",\njournal = \"AEU - International Journal of Electronics and Communications\",\nissn = \"1434-8411\",\npublisher = \"Urban und Fischer Verlag Jena\",\nnumber = \"10\",\n\n}\n\nTY - JOUR\n\nT1 - Hata based propagation loss formula using terrain criterion for 1800 MHz\n\nAU - Nisirat, Mahdi A.\n\nAU - Alkhawaldeh, Salim\n\nAU - Ismail, Mahamod\n\nAU - Nissirat, Liyth\n\nPY - 2012/10\n\nY1 - 2012/10\n\nN2 - Many mobile propagation models are going under intensive corrections; recently, to suit other new criteria's such as rough terrain areas. This proposed model modifies Hata main urban equation by adding a formula representing a logarithmic linear regression estimator of the standard deviation (σ) of the measuring campaign path in Amman city, Madaba city, and Jiza town, Jordan. High correlation factor of -96.7% is calculated between excess measured path loss compared to Hata urban path loss and log(σ). Root mean square error (RMSE) difference between this model and the measured raw data path loss has overcome RMSE calculated for Hata model, by an average of 21 dB, for open areas. The correction of suburban areas is calculated, on average, as 20 dB, and for urban areas as 17 dB.\n\nAB - Many mobile propagation models are going under intensive corrections; recently, to suit other new criteria's such as rough terrain areas. This proposed model modifies Hata main urban equation by adding a formula representing a logarithmic linear regression estimator of the standard deviation (σ) of the measuring campaign path in Amman city, Madaba city, and Jiza town, Jordan. High correlation factor of -96.7% is calculated between excess measured path loss compared to Hata urban path loss and log(σ). Root mean square error (RMSE) difference between this model and the measured raw data path loss has overcome RMSE calculated for Hata model, by an average of 21 dB, for open areas. The correction of suburban areas is calculated, on average, as 20 dB, and for urban areas as 17 dB.\n\nKW - Micro-cells\n\nKW - Mobile communications\n\nKW - Path loss\n\nKW - Terrain roughness\n\nUR - http://www.scopus.com/inward/record.url?scp=84863727199&partnerID=8YFLogxK\n\nUR - http://www.scopus.com/inward/citedby.url?scp=84863727199&partnerID=8YFLogxK\n\nU2 - 10.1016/j.aeue.2012.03.001\n\nDO - 10.1016/j.aeue.2012.03.001\n\nM3 - Article\n\nAN - SCOPUS:84863727199\n\nVL - 66\n\nSP - 855\n\nEP - 859\n\nJO - AEU - International Journal of Electronics and Communications\n\nJF - AEU - International Journal of Electronics and Communications\n\nSN - 1434-8411\n\nIS - 10\n\nER -" ]
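For orientation only: the classic Okumura-Hata urban equation that the paper modifies has a well-known closed form (strictly specified for 150-1500 MHz; the paper works at 1800 MHz with its own fitted extension). The sketch below shows the structure of such a model with a placeholder terrain term k1 + k2*log10(sigma); the k values are hypothetical stand-ins, since the paper's fitted coefficients are not reproduced on this page.

```python
import math

def hata_urban_db(f_mhz, h_b, h_m, d_km):
    """Classic Okumura-Hata urban path loss (dB), small/medium city correction."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_b)
            - a_hm + (44.9 - 6.55 * math.log10(h_b)) * math.log10(d_km))

def terrain_corrected_loss_db(f_mhz, h_b, h_m, d_km, sigma, k1=0.0, k2=0.0):
    # k1, k2 are placeholder regression coefficients for the log(sigma) term;
    # the paper fits its own values from measurements in Amman, Madaba, and Jiza
    return hata_urban_db(f_mhz, h_b, h_m, d_km) + k1 + k2 * math.log10(sigma)

print(round(hata_urban_db(f_mhz=900, h_b=30, h_m=1.5, d_km=2), 1))  # ~137 dB
```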
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85719126,"math_prob":0.71570593,"size":3811,"snap":"2019-51-2020-05","text_gpt3_token_len":974,"char_repetition_ratio":0.11137378,"word_repetition_ratio":0.67790896,"special_character_ratio":0.25347677,"punctuation_ratio":0.13571429,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9627782,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T10:40:58Z\",\"WARC-Record-ID\":\"<urn:uuid:5c7d03fe-299d-4199-92b5-982c4f276815>\",\"Content-Length\":\"32267\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3d177141-6927-4a08-9b8c-76b3ce01b70f>\",\"WARC-Concurrent-To\":\"<urn:uuid:9c8a974c-8ac9-4a5d-9a65-81bf84a3943b>\",\"WARC-IP-Address\":\"52.220.215.79\",\"WARC-Target-URI\":\"https://ukm.pure.elsevier.com/en/publications/hata-based-propagation-loss-formula-using-terrain-criterion-for-1\",\"WARC-Payload-Digest\":\"sha1:5C6657B7SNOW54Z7VD7RZBANV7NNZRVY\",\"WARC-Block-Digest\":\"sha1:F43SJ6TV5YHATX2Q4IGVHUG4EI23WLFH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250603761.28_warc_CC-MAIN-20200121103642-20200121132642-00507.warc.gz\"}"}
https://risk.asmedigitalcollection.asme.org/dynamicsystems/article-abstract/139/6/061001/473557/Direct-Integration-Method-for-Time-Delayed-Control
[ "A direct integration method (DIM) for time-delayed control (TDC) is proposed in this research. For a second-order dynamic system with time-delayed controllers, a Volterra integral equation of the second kind is used instead of a state derivative equation. With the proposed DIM, where matrix exponentials are avoided, a semi-analytical representation of the Floquet transition matrix for stability analysis can be derived, and the stability region on the parametric space comprising the control variables can also be plotted. Within this stability region, optimal control variables are subsequently obtained using a multilevel conjugate gradient optimization method. Further simulation examples demonstrated the superiority of the proposed DIM in terms of computational efficiency and accuracy, as well as the effectiveness of the optimization-based controller design approach." ]
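For reference, a standard definition rather than the paper's specific construction: a Volterra integral equation of the second kind, which the abstract says replaces the state-derivative equation, has the general form

```latex
x(t) = f(t) + \int_{t_0}^{t} K(t, s)\, x(s)\, \mathrm{d}s
```

where $K(t,s)$ is the kernel. For the delayed second-order system, the kernel and the forcing term $f(t)$ would be assembled from the system and time-delayed controller parameters; the paper's particular kernel is not given on this page.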
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75331795,"math_prob":0.8182351,"size":5303,"snap":"2023-14-2023-23","text_gpt3_token_len":1362,"char_repetition_ratio":0.17758067,"word_repetition_ratio":0.3202329,"special_character_ratio":0.23100132,"punctuation_ratio":0.21499014,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95381945,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-11T00:59:03Z\",\"WARC-Record-ID\":\"<urn:uuid:fb904e41-fdca-49dc-9715-4e4b0e7af345>\",\"Content-Length\":\"188293\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:66370fca-28ea-4bc1-ba7e-e332f4789c0d>\",\"WARC-Concurrent-To\":\"<urn:uuid:aeaa5dfd-f52d-4589-b563-ab3e56d370cb>\",\"WARC-IP-Address\":\"52.179.114.94\",\"WARC-Target-URI\":\"https://risk.asmedigitalcollection.asme.org/dynamicsystems/article-abstract/139/6/061001/473557/Direct-Integration-Method-for-Time-Delayed-Control\",\"WARC-Payload-Digest\":\"sha1:YF6S5H52YG3AGPK7LSHO7SNMIIQGXJAO\",\"WARC-Block-Digest\":\"sha1:QGQGERTBSEYAYYYPVJFEI6QILC65DL55\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224646652.16_warc_CC-MAIN-20230610233020-20230611023020-00639.warc.gz\"}"}
https://blog.codekiller.top/2021/02/03/%E6%9D%8E%E5%AE%8F%E6%AF%85%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/19_Unsupervised%20Learning%20PCA%20part1/
[ "绿色健康小清新\n\nUnsupervised Learning: PCA(Ⅰ)\n\nPCA for 1-D\n\n$z_1=w^1\\cdot x$,其中$w^1$表示$w$的第一个row vector,假设$w^1$的长度为1,即$||w^1||_2=1$,此时$z_1$就是$x$$w^1$方向上的投影\n\n• 我们希望找一个projection的方向,它可以让projection后的variance越大越好\n\n• 我们不希望projection使这些data point通通挤在一起,导致点与点之间的奇异度消失\n\n• 其中,variance的计算公式:$Var(z_1)=\\frac{1}{N}\\sum\\limits_{z_1}(z_1-\\bar{z_1})^2, ||w^1||_2=1$$\\bar {z_1}$$z_1$的平均值\n\nPCA for n-D\n\n$z=Wx$来说:\n\n• $z_1=w^1\\cdot x$,表示$x$$w^1$方向上的投影\n• $z_2=w^2\\cdot x$,表示$x$$w^2$方向上的投影\n\n$z_1,z_2,...$串起来就得到$z$,而$w^1,w^2,...$分别是$W$的第1,2,…个row,需要注意的是,这里的$w^i$必须相互正交,此时$W$是正交矩阵(orthogonal matrix),如果不加以约束,则找到的$w^1,w^2,...$实际上是相同的值\n\nLagrange multiplier\n\ncalculate $w^1$\n\n• 首先计算出$\\bar{z_1}$\n\n\\begin{aligned} &z_1=w^1\\cdot x\\\\ &\\bar{z_1}=\\frac{1}{N}\\sum z_1=\\frac{1}{N}\\sum w^1\\cdot x=w^1\\cdot \\frac{1}{N}\\sum x=w^1\\cdot \\bar x \\end{aligned}\n\n• 然后计算maximize的对象$Var(z-1)$\n\n其中$Cov(x)=\\frac{1}{N}\\sum(x-\\bar x)(x-\\bar x)^T$\n\n\\begin{aligned} Var(z_1)&=\\frac{1}{N}\\sum\\limits_{z_1} (z_1-\\bar{z_1})^2\\\\ &=\\frac{1}{N}\\sum\\limits_{x} (w^1\\cdot x-w^1\\cdot \\bar x)^2\\\\ &=\\frac{1}{N}\\sum (w^1\\cdot (x-\\bar x))^2\\\\ &=\\frac{1}{N}\\sum(w^1)^T(x-\\bar x)(x-\\bar x)^T w^1\\\\ &=(w^1)^T\\frac{1}{N}\\sum(x-\\bar x)(x-\\bar x)^T w^1\\\\ &=(w^1)^T Cov(x)w^1 \\end{aligned}\n\n• 当然这里想要求$Var(z_1)=(w^1)^TCov(x)w^1$的最大值,还要加上$||w^1||_2=(w^1)^Tw^1=1$的约束条件,否则$w^1$可以取无穷大\n\n• $S=Cov(x)$,它是:\n\n• 对称的(symmetric)\n• 半正定的(positive-semidefine)\n• 所有特征值(eigenvalues)非负的(non-negative)\n• 使用拉格朗日乘数法,利用目标和约束条件构造函数:\n\n$g(w^1)=(w^1)^TSw^1-\\alpha((w^1)^Tw^1-1)$\n\n• $w^1$这个vector里的每一个element做偏微分:\n\n$\\partial g(w^1)/\\partial w_1^1=0\\\\ \\partial g(w^1)/\\partial w_2^1=0\\\\ \\partial g(w^1)/\\partial w_3^1=0\\\\ ...$\n\n• 整理上述推导式,可以得到:\n\n其中,$w^1$是S的特征向量(eigenvector)\n\n$Sw^1=\\alpha w^1$\n\n• 注意到满足$(w^1)^Tw^1=1$的特征向量$w^1$有很多,我们要找的是可以maximize $(w^1)^TSw^1$的那一个,于是利用上一个式子:\n\n$(w^1)^TSw^1=(w^1)^T \\alpha w^1=\\alpha (w^1)^T w^1=\\alpha$\n\n• 此时maximize $(w^1)^TSw^1$就变成了maximize $\\alpha$,也就是当$S$的特征值$\\alpha$最大时对应的那个特征向量$w^1$就是我们要找的目标\n\n• 结论:$w^1$$S=Cov(x)$这个matrix中的特征向量,对应最大的特征值$\\lambda_1$\n\ncalculate $w^2$\n\n• 同样是用拉格朗日乘数法求解,先写一个关于$w^2$的function,包含要maximize的对象,以及两个约束条件\n\n$g(w^2)=(w^2)^TSw^2-\\alpha((w^2)^Tw^2-1)-\\beta((w^2)^Tw^1-0)$\n\n• $w^2$的每个element做偏微分:\n\n$\\partial g(w^2)/\\partial w_1^2=0\\\\ \\partial g(w^2)/\\partial w_2^2=0\\\\ \\partial g(w^2)/\\partial w_3^2=0\\\\ ...$\n\n• 整理后得到:\n\n$Sw^2-\\alpha w^2-\\beta w^1=0$\n\n• 上式两侧同乘$(w^1)^T$,得到:\n\n$(w^1)^TSw^2-\\alpha (w^1)^Tw^2-\\beta (w^1)^Tw^1=0$\n\n• 其中$\\alpha (w^1)^Tw^2=0,\\beta (w^1)^Tw^1=\\beta$\n\n而由于$(w^1)^TSw^2$是vector×matrix×vector=scalar,因此在外面套一个transpose不会改变其值,因此该部分可以转化为:\n\n注:S是symmetric的,因此$S^T=S$\n\n\\begin{aligned} (w^1)^TSw^2&=((w^1)^TSw^2)^T\\\\ &=(w^2)^TS^Tw^1\\\\ &=(w^2)^TSw^1 \\end{aligned}\n\n我们已经知道$w^1$满足$Sw^1=\\lambda_1 w^1$,代入上式:\n\n\\begin{aligned} (w^1)^TSw^2&=(w^2)^TSw^1\\\\ &=\\lambda_1(w^2)^Tw^1\\\\ &=0 \\end{aligned}\n\n• 因此有$(w^1)^TSw^2=0$$\\alpha (w^1)^Tw^2=0$$\\beta (w^1)^Tw^1=\\beta$,又根据\n\n$(w^1)^TSw^2-\\alpha (w^1)^Tw^2-\\beta (w^1)^Tw^1=0$\n\n可以推得$\\beta=0$\n\n• 此时$Sw^2-\\alpha w^2-\\beta w^1=0$就转变成了$Sw^2-\\alpha w^2=0$,即\n\n$Sw^2=\\alpha w^2$\n\n• 由于$S$是symmetric的,因此在不与$w_1$冲突的情况下,这里$\\alpha$选取第二大的特征值$\\lambda_2$时,可以使$(w^2)^TSw^2$最大\n\n• 结论:$w^2$也是$S=Cov(x)$这个matrix中的特征向量,对应第二大的特征值$\\lambda_2$\n\nPCA-decorrelation\n\n$z=W\\cdot x$\n\nPCA可以让不同dimension之间的covariance变为0,即不同new feature之间是没有correlation的,这样做的好处是,减少feature之间的联系从而减少model所需的参数量\n\n-------------本文结束感谢您的阅读-------------" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.86745906,"math_prob":1.0000094,"size":3069,"snap":"2022-05-2022-21","text_gpt3_token_len":2420,"char_repetition_ratio":0.10603589,"word_repetition_ratio":0.0,"special_character_ratio":0.27109808,"punctuation_ratio":0.06477733,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000098,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-25T17:38:42Z\",\"WARC-Record-ID\":\"<urn:uuid:b0f2a363-f446-431a-a684-72aae0439acc>\",\"Content-Length\":\"270898\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b5312c9b-a57c-49aa-9b16-ce10e839cf66>\",\"WARC-Concurrent-To\":\"<urn:uuid:16367bb1-8578-4b36-a281-8855cf48328f>\",\"WARC-IP-Address\":\"47.116.134.123\",\"WARC-Target-URI\":\"https://blog.codekiller.top/2021/02/03/%E6%9D%8E%E5%AE%8F%E6%AF%85%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/19_Unsupervised%20Learning%20PCA%20part1/\",\"WARC-Payload-Digest\":\"sha1:CW3OVYM6H7THYZZMFXLYQSFWE4YNVV6O\",\"WARC-Block-Digest\":\"sha1:GNBCY432N4SFQKXNM6EGFFQIKPAPLSI7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304859.70_warc_CC-MAIN-20220125160159-20220125190159-00045.warc.gz\"}"}
http://blog.jesseseay.com/improving-arduino-sensor-range
[ "## Improving Arduino sensor range\n\nAdding a fixed resistor ½ the value of a variable resistance sensor improves Arduino performance.\n\nWhenever you connect a 2-lead variable resistor (VR) sensor (like a photo cell or bend sensor) to an Arduino, you add a resistor to it. I did this with my knitted stretch sensor. It creates a circuit known as a voltage divider, which controls the voltage level, based on the relative resistance of the resistor to the sensor. This is important because the voltage level is what AnalogRead \"reads\" in Arduino.\n\nI wondered what value would give the best performance for my knitted sensors. So I used the equation below to calculate the output range of voltage dividers, based on the ratio between R1 and R2, given that R2 is my knitted sensor and R1 is the (unchanging) resistor. I graphed the outputs for each VR value at 0%, 25%, 50%, 75%, and 100% of its maximum range.\n\nI found as I widened the ratio, the range widens; the bottom part gets steeper but the top flattens out.\n\nI liked the results of a 1:2 ratio (light blue line) so I used that with my knitted sensors.\n\nLater, I looked online and found Adafruit’s tutorial on photocells, which suggests using the Axel-Benz formula (choose a resistor the square root of the min + max resistance of your resistive sensor). This is useful if your minimum resistance isn’t zero. For my examples, it suggested a value slightly higher than the ½ value I’d settled on.\n\nOn a final note, none of this can match the range of a single potentiometer: when R2 decreases as R1 increases, we get a nice linear sweep with a full range from 0V to V+." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9024453,"math_prob":0.8694602,"size":1657,"snap":"2023-40-2023-50","text_gpt3_token_len":394,"char_repetition_ratio":0.1306715,"word_repetition_ratio":0.0,"special_character_ratio":0.23415811,"punctuation_ratio":0.09763314,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9826638,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T07:49:22Z\",\"WARC-Record-ID\":\"<urn:uuid:7f82003a-762c-4dcd-9563-71177e81e34a>\",\"Content-Length\":\"20428\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bac9df68-4739-45b3-b523-8948490547a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:400ea3ed-37e3-41a8-a393-a4fe35d07506>\",\"WARC-IP-Address\":\"188.93.148.37\",\"WARC-Target-URI\":\"http://blog.jesseseay.com/improving-arduino-sensor-range\",\"WARC-Payload-Digest\":\"sha1:QUBNBWETEBTY7RJSBKMRYYXNWRSCGMT5\",\"WARC-Block-Digest\":\"sha1:KCQ6M6D2VU3LBWPMAC3EQ53MZBKBFUQI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506623.27_warc_CC-MAIN-20230924055210-20230924085210-00794.warc.gz\"}"}
https://fr.mathworks.com/matlabcentral/cody/problems/43065
[ "Cody\n\n# Problem 43065. Energy of an object\n\nCalculate the total mechanical energy of an object.\n\nTotal Energy= Potential energy + Kinetic energy\n\nP.E.=m*g*h\n\nK.E.=1/2*m*v^2\n\ng=9.8m/s^2\n\n### Solution Stats\n\n66.67% Correct | 33.33% Incorrect\nLast Solution submitted on Nov 07, 2019" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7284295,"math_prob":0.93953407,"size":337,"snap":"2019-51-2020-05","text_gpt3_token_len":100,"char_repetition_ratio":0.15615615,"word_repetition_ratio":0.0,"special_character_ratio":0.29376855,"punctuation_ratio":0.08974359,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99007446,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-15T07:22:30Z\",\"WARC-Record-ID\":\"<urn:uuid:7dd31a8c-c0a1-4054-b406-543ef171ff0b>\",\"Content-Length\":\"87737\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:96eb2fe9-95f7-47f8-9590-4cea07db132e>\",\"WARC-Concurrent-To\":\"<urn:uuid:28e6f3aa-4375-41ce-8185-f0817f71a4d8>\",\"WARC-IP-Address\":\"104.91.36.45\",\"WARC-Target-URI\":\"https://fr.mathworks.com/matlabcentral/cody/problems/43065\",\"WARC-Payload-Digest\":\"sha1:R2WIMBKADAKE2NUPBNLPY5ZATJTJUU6H\",\"WARC-Block-Digest\":\"sha1:32R5QLAU2NRGOQKK2IMU33ONQXBQQGUA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541307797.77_warc_CC-MAIN-20191215070636-20191215094636-00392.warc.gz\"}"}
https://www.koobits.com/2013/01/14/the-1-key-to-solving-problem-sums-most-parents-ignore
[ "", null, "# The #1 Key to Solving Problem Sums…that Most Parents Ignore\n\nProblem sums, or math word problems, are one of the most challenging learning tasks for primary school students. Parents often worry about their children’s ability to handle these complex math questions, which require advanced thinking and special methods to solve. As such, kids are frequently drilled on problem sums by parents or even sent for special math tuition classes and workshops to learn various special problem sums solving techniques. Unfortunately, all of these fail to address the number one key to solving problem sums, which is to understand the question.", null, "## The #1 Key to Solving Problem Sums: Comprehension\n\nMathematical inability is usually the last reason why children are unable to solve problem sums and math word problems. In fact, problem sums require only basic arithmetic skills like addition, subtraction, multiplication, division, fractions, and ratios. These questions are designed, rather, to test students’ understanding and application of mathematical concepts. If your child does not comprehend the question and merely uses a certain technique to solve a problem sum, he or she will have even more trouble with Math and other subjects later on in their learning journey. Before trying various heuristics to solve a problem sum, ensure that your child comprehends the question first:\n1. Read through the entire question\n2. Break the question down into parts\n3. Determine the order of steps to get to the solution\nOnce the order of steps has been determined, your child can then apply methods such as model drawing or supposition to solve the problem sum.\n\n#### Example Question (Primary 3)\n\nA kettle has 420ml more water than a jug. When 150ml of water is poured from the jug into the kettle, there is 4 times as much water in the kettle as the jug. How much water was there in the jug at first?\nHere is how the problem can be broken down into parts:\n• The kettle has 420ml more water than the jug.", null, "• When 150ml of water is poured from the jug into the kettle,", null, "• There is 4 times as much water in the kettle as the jug.", null, "With the problem sum broken down into parts and visualized using models, your child can then solve the problem using basic arithmetic.\n\n## How to Practise Problem Sums Comprehension\n\nInstead of drilling your child on various techniques to solve different types of problem sums, focus on reading the question.\n1. Get your child to go through various questions in his/her math workbook or assessment books without solving them.\n2. Then, ask him/her to identify the questions which seem similar.\n3. Determine the steps needed to get the solution, but don’t do the math. See if the steps can be applied to similar questions.\nIncreased reading is the key to improving your child’s English language comprehension skills. With better comprehension, you’ll find that your child can solve problem sums more easily and quickly. Do you have any practical tips of your own (not “secret” techniques) for solving problem sums?" ]
[ null, "https://www.facebook.com/tr", null, "https://www.koobits.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "https://www.koobits.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "https://www.koobits.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null, "https://www.koobits.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94031894,"math_prob":0.82726216,"size":1861,"snap":"2020-24-2020-29","text_gpt3_token_len":399,"char_repetition_ratio":0.13301024,"word_repetition_ratio":0.12101911,"special_character_ratio":0.21332617,"punctuation_ratio":0.09943182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9830103,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-01T01:26:54Z\",\"WARC-Record-ID\":\"<urn:uuid:64bec268-ed01-4d5f-a3fd-196752754d40>\",\"Content-Length\":\"91198\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:efcf4c1f-0056-4df7-8fde-4631ab8211cb>\",\"WARC-Concurrent-To\":\"<urn:uuid:53552408-e9fc-4053-841e-ed344e8f6b90>\",\"WARC-IP-Address\":\"104.22.64.135\",\"WARC-Target-URI\":\"https://www.koobits.com/2013/01/14/the-1-key-to-solving-problem-sums-most-parents-ignore\",\"WARC-Payload-Digest\":\"sha1:F7F25JB7ZVQGBRACLFFT74J7SAADVMHK\",\"WARC-Block-Digest\":\"sha1:TQHLGL2P3YTHNESKZAYEGOHB66HPQCEM\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347413901.34_warc_CC-MAIN-20200601005011-20200601035011-00115.warc.gz\"}"}
https://uniquejankari.in/nptel-introduction-to-machine-learning-assignment-2-answers-2022/
[ "# NPTEL Introduction to Machine Learning Assignment 2 Answers July 2023\n\nNPTEL Introduction to Machine Learning Assignment 2 Answers 2023:- In this post, we have provided the answers to NPTEL Introduction to Machine Learning Assignment 2. The answers are given here only for reference; please do your assignment using your own knowledge.\n\n## NPTEL Introduction To Machine Learning Week 2 Assignment Answer 2023\n\n1. The parameters obtained in linear regression\n\n• can take any value in the real space\n• are strictly integers\n• always lie in the range [0,1]\n• can take only non-zero values\n`Answer :- a. can take any value in the real space`\n\n2. Suppose that we have N independent variables (X1, X2, …, XN) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best fit line using the least square error on this data. You found that the correlation coefficient for one of its variables (say X1) with Y is -0.005.\n\n• Regressing Y on X1 mostly does not explain away Y.\n• Regressing Y on X1 explains away Y.\n• The given data is insufficient to determine if regressing Y on X1 explains away Y or not.\n`Answer :- a. Regressing Y on X1 mostly does not explain away Y.`\n\n3. Which of the following is a limitation of subset selection methods in regression?\n\n• They tend to produce biased estimates of the regression coefficients.\n• They cannot handle datasets with missing values.\n• They are computationally expensive for large datasets.\n• They assume a linear relationship between the independent and dependent variables.\n• They are not suitable for datasets with categorical predictors.\n`Answer :- c. They are computationally expensive for large datasets.`\n\n4. The relation between studying time (in hours) and grade on the final examination (0-100) in a random sample of students in the Introduction to Machine Learning Class was found to be: Grade = 30.5 + 15.2 (h)\n\nHow will a student’s grade be affected if she studies for four hours?\n\n• It will go down by 30.4 points.\n• It will go down by 30.4 points.\n• It will go up by 60.8 points.\n• The grade will remain unchanged.\n• It cannot be determined from the information given\n`Answer :- c. It will go up by 60.8 points.`\n\n5. Which of the statements is/are True?\n\n• Ridge has sparsity constraint, and it will drive coefficients with low values to 0.\n• Lasso has a closed form solution for the optimization problem, but this is not the case for Ridge.\n• Ridge regression does not reduce the number of variables since it never leads a coefficient to zero but only minimizes it.\n• If there are two or more highly collinear variables, Lasso will select one of them randomly\n```Answer :- c. Ridge regression does not reduce the number of variables since it never leads a coefficient to zero but only minimizes it.\n\nd. If there are two or more highly collinear variables, Lasso will select one of them randomly```\n\n6. Find the mean of squared error for the given predictions:", null, "Hint: Find the squared error for each prediction and take the mean of that.\n\n• 1\n• 2\n• 1.5\n• 0\n`Answer :- a. 1`\n\n7. 
Consider the following statements:\n\nStatement A: In Forward stepwise selection, in each step, that variable is chosen which has the maximum correlation with the residual, then the residual is regressed on that variable, and it is added to the predictor.\nStatement B: In Forward stagewise selection, the variables are added one by one to the previously selected variables to produce the best fit till then\n\n• Both the statements are True.\n• Statement A is True, and Statement B is False\n• Statement A is False and Statement B is True\n• Both the statements are False.\n`Answer :- a. Both the statements are True.`\n\n8. The linear regression model y = a0 + a1x1 + a2x2 + … + apxp is to be fitted to a set of N training data points having p attributes each. Let X be the N×(p+1) matrix of input values (augmented by 1‘s), Y be the N×1 vector of target values, and θ be the (p+1)×1 vector of parameter values (a0, a1, a2, …, ap). If the sum squared error is minimized for obtaining the optimal regression model, which of the following equations holds?\n\n• X^TX = XY\n• Xθ = X^T\n• X^TXθ = Y\n• X^TXθ = X^TY\n`Answer :- d. X^TXθ = X^TY`\n\n9. Which of the following statements is true regarding Partial Least Squares (PLS) regression?\n\n• PLS is a dimensionality reduction technique that maximizes the covariance between the predictors and the dependent variable.\n• PLS is only applicable when there is no multicollinearity among the independent variables.\n• PLS can handle situations where the number of predictors is larger than the number of observations.\n• PLS estimates the regression coefficients by minimizing the residual sum of squares.\n• PLS is based on the assumption of normally distributed residuals.\n• All of the above.\n• None of the above.\n`Answer :- a`\n\n10. Which of the following statements about principal components in Principal Component Regression (PCR) is true?\n\n• Principal components are calculated based on the correlation matrix of the original predictors.\n• The first principal component explains the largest proportion of the variation in the dependent variable.\n• Principal components are linear combinations of the original predictors that are uncorrelated with each other.\n• PCR selects the principal components with the highest p-values for inclusion in the regression model.\n• PCR always results in a lower model complexity compared to ordinary least squares regression.\n`Answer :- c. Principal components are linear combinations of the original predictors that are uncorrelated with each other.`
Which of the following statements about principal components in Principal Component Regression (PCR) is true?\n\n• Principal components are calculated based on the correlation matrix of the original predictors.\n• The first principal component explains the largest proportion of the variation in the dependent variable.\n• Principal components are linear combinations of the original predictors that are uncorrelated with each other.\n• PCR selects the principal components with the highest p-values for inclusion in the regression model.\n• PCR always results in a lower model complexity compared to ordinary least squares regression.\n`Answer :- Click Here`\n\n## NPTEL Introduction to Machine Learning Assignment 2 Answers 2022 [July-Dec]\n\nQ1. The parameters obtained in linear regression\n\na. can take any value in the real space\nb. are strictly integers\nc. always lie in the range [0,1]\nd. can take only non-zero values\n\n`Answer:- a`\n\n2. Suppose that we have NN independent variables (X1,X2,…XnX1,X2,…Xn) and the dependent variable is YY . Now imagine that you are applying linear regression by fitting the best fit line using the least square error on this data. You found that the correlation coefficient for one of its variables (Say X1X1) with YY is -0.005.\n\na. Regressing Y on X1 mostly does not explain away Y.\nb. Regressing Y on X1 explains away Y.\nc. The given data is insufficient to determine if regressing Y on X1 explains away Y or not.\n\n`Answer:- a`\n\n3. Consider the following five training examples\n\n``We want to learn a function f(x) of the form f(x)=ax+b which is parameterised by (a,b).Using mean squared error as the loss function, which of the following parameters would you use to model this function to get a solution with the minimum loss?``\n• (4, 3)\n• (1, 4)\n• (4, 1)\n• (3, 4)\n`Answer:- b`\n\n4. The relation between studying time (in hours) and grade on the final examination (0-100) in a random sample of students in the Introduction to Machine Learning Class was found to be:\nGrade = 30.5 + 15.2 (h)\nHow will a student’s grade be affected if she studies for four hours?\n\na. It will go down by 30.4 points.\nb. It will go down by 30.4 points.\nc. It will go up by 60.8 points.\nd. The grade will remain unchanged.\ne. It cannot be determined from the information given\n\n`Answer:- c`\n\n5. Which of the statements is/are True?\n\na. Ridge has sparsity constraint, and it will drive coefficients with low values to 0.\nb. Lasso has a closed form solution for the optimization problem, but this is not the case for Ridge.\nc. Ridge regression does not reduce the number of variables since it never leads a coefficient to zero but only minimizes it.\nd. If there are two or more highly collinear variables, Lasso will select one of them randomly.\n\n`Answer:- c, d`\n\n6. Consider the following statements:\n\nAssertion(A): Orthogonalization is applied to the dimensions in linear regression.\nReason(R): Orthogonalization makes univariate regression possible in each orthogonal dimension separately to produce the coefficients.\n\na. Both A and R are true, and R is the correct explanation of A.\nb. Both A and R are true, but R is not the correct explanation of A.\nc. A is true, but R is false.\nd. A is false, but R is true.\ne. Both A and R are false.\n\n`Answer:- a`\n\n7. 
Consider the following statements:\n\nStatement A: In Forward stepwise selection, in each step, that variable is chosen which has the maximum correlation with the residual, then the residual is regressed on that variable, and it is added to the predictor.\n\nStatement B: In Forward stagewise selection, the variables are added one by one to the previously selected variables to produce the best fit till then\n\na. Both the statements are True.\nb. Statement A is True, and Statement B is False\nc. Statement A if False and Statement B is True\nd. Both the statements are False.\n\n`Answer:- d`\n\n8. The linear regression model y=a0+a1x1+a2x2+…+apxp is to be fitted to a set of N training data points having p attributes each. Let X be N×(p+1) vectors of input values (augmented by 1‘s), Y be N×1 vector of target values, and θ be (p+1)×1 vector of parameter values (a0,a1,a2,…,ap). If the sum squared error is minimized for obtaining the optimal regression model, which of the following equation holds?\n\n`Answer:- d`\n\n## About Introduction to Machine Learning\n\nWith the increased availability of data from varied sources there has been increasing attention paid to the various data driven disciplines such as analytics and machine learning. In this course we intend to introduce some of the basic concepts of machine learning from a mathematically well motivated perspective. We will cover the different learning paradigms and some of the more popular algorithms and architectures used in each of these paradigms.\n\n### COURSE LAYOUT\n\n• Week 0: Probability Theory, Linear Algebra, Convex Optimization – (Recap)\n• Week 1: Introduction: Statistical Decision Theory – Regression, Classification, Bias Variance\n• Week 2: Linear Regression, Multivariate Regression, Subset Selection, Shrinkage Methods, Principal Component Regression, Partial Least squares\n• Week 3: Linear Classification, Logistic Regression, Linear Discriminant Analysis\n• Week 4: Perceptron, Support Vector Machines\n• Week 5: Neural Networks – Introduction, Early Models, Perceptron Learning, Backpropagation, Initialization, Training & Validation, Parameter Estimation – MLE, MAP, Bayesian Estimation\n• Week 6: Decision Trees, Regression Trees, Stopping Criterion & Pruning loss functions, Categorical Attributes, Multiway Splits, Missing Values, Decision Trees – Instability Evaluation Measures\n• Week 7: Bootstrapping & Cross Validation, Class Evaluation Measures, ROC curve, MDL, Ensemble Methods – Bagging, Committee Machines and Stacking, Boosting\n• Week 8: Gradient Boosting, Random Forests, Multi-class Classification, Naive Bayes, Bayesian Networks\n• Week 9: Undirected Graphical Models, HMM, Variable Elimination, Belief Propagation\n• Week 10: Partitional Clustering, Hierarchical Clustering, Birch Algorithm, CURE Algorithm, Density-based Clustering\n• Week 11: Gaussian Mixture Models, Expectation Maximization\n• Week 12: Learning Theory, Introduction to Reinforcement Learning, Optional videos (RL framework, TD learning, Solution Methods, Applications)\n\n### CRITERIA TO GET A CERTIFICATE\n\nAverage assignment score = 25% of average of best 8 assignments out of the total 12 assignments given in the course.\nExam score = 75% of the proctored certification exam score out of 100\n\nFinal score = Average assignment score + Exam score\n\nYOU WILL BE ELIGIBLE FOR A CERTIFICATE ONLY IF AVERAGE ASSIGNMENT SCORE >=10/25 AND EXAM SCORE >= 30/75. If one of the 2 criteria is not met, you will not get the certificate even if the Final score >= 40/100." ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%200%200'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%200%200'%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8603735,"math_prob":0.95806164,"size":16965,"snap":"2023-40-2023-50","text_gpt3_token_len":3906,"char_repetition_ratio":0.13548729,"word_repetition_ratio":0.6661957,"special_character_ratio":0.22841144,"punctuation_ratio":0.12652677,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99838996,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T15:14:35Z\",\"WARC-Record-ID\":\"<urn:uuid:186c87bc-fb70-4fe7-8074-837090f2023e>\",\"Content-Length\":\"117201\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5cb449e8-1d71-476f-a233-b0a8ecc4d81a>\",\"WARC-Concurrent-To\":\"<urn:uuid:7e4e6b6e-7a8c-47e4-b897-6c8d29838f72>\",\"WARC-IP-Address\":\"217.21.88.153\",\"WARC-Target-URI\":\"https://uniquejankari.in/nptel-introduction-to-machine-learning-assignment-2-answers-2022/\",\"WARC-Payload-Digest\":\"sha1:L7LHU7PZLOSLTIUBOV5TQVBCJY43BO25\",\"WARC-Block-Digest\":\"sha1:UKDYI2BQCMJFPCJZWBOXM6NDP6C5GG7Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510903.85_warc_CC-MAIN-20231001141548-20231001171548-00653.warc.gz\"}"}
https://www.buffalobrewingstl.com/raphson-method/info-dto.html
[ "# Info\n\nT0 = 2.9°F, Ts+ ! = 0°F, N = 8, and P = 800 lb/in2 abs. Initial temperature profile to be constant at Tj = 25°F for all j (j = 1, 2, ..., N). The initial vapor rate profile is to be constant at Vj = 90.88 0 = 1, 2, ...,8), and the liquid rates are L} = 6.3092 0 =1, 2, ...,7), and L8 = 15.42 Use the K values and enthalpies given in Tables B-5 and B-6\n\nc3h(\n\nc3h(" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7912168,"math_prob":0.9998722,"size":402,"snap":"2020-24-2020-29","text_gpt3_token_len":156,"char_repetition_ratio":0.11557789,"word_repetition_ratio":0.0,"special_character_ratio":0.45771143,"punctuation_ratio":0.26050422,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9975316,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-28T19:17:17Z\",\"WARC-Record-ID\":\"<urn:uuid:5dca43ac-5aa9-4d56-8f63-ff97bff2f540>\",\"Content-Length\":\"10180\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:40a3928b-4d9f-4cd8-a727-814ff7dbf420>\",\"WARC-Concurrent-To\":\"<urn:uuid:fa8742b9-c1ae-489f-911b-834f8d5de670>\",\"WARC-IP-Address\":\"104.18.56.106\",\"WARC-Target-URI\":\"https://www.buffalobrewingstl.com/raphson-method/info-dto.html\",\"WARC-Payload-Digest\":\"sha1:DCPSG4JECNT2VKBAPSBS7ILTNMGL5ATC\",\"WARC-Block-Digest\":\"sha1:4ZSHX2FC7JYBL57HTLUXULMJ6E2TCC6Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347399830.24_warc_CC-MAIN-20200528170840-20200528200840-00260.warc.gz\"}"}
https://zbmath.org/?q=an:06434322
[ "# zbMATH — the first resource for mathematics\n\nOn the three-dimensional Vahlen theorem. (English. Russian original) Zbl 1369.11048\nMath. Notes 95, No. 1, 136-138 (2014); translation from Mat. Zametki 95, No. 1, 154-156 (2014).\nFrom the text: Let $$\\gamma^{(1)}, \\ldots, \\gamma^{(s)}$$ be the basis nodes of a full-rank lattice\n$\\Gamma = \\left\\{ m_1 \\gamma^{(1)} + \\cdots + m_s \\gamma^{(s)}: m_1,\\ldots, m_s \\in \\mathbb Z \\right\\} \\subset \\mathbb R^s.$\nVahlen’s theorem concerning the approximation of numbers by convergents has the following interpretation in terms of lattices: for every Voronoi basis $$\\{\\gamma^{(1)}, \\gamma^{(2)}\\}$$,\n$\\min\\left\\{\\left|\\gamma_1^{(1)}, \\gamma_2^{(1)}\\right|, \\left|\\gamma_1^{(2)}, \\gamma_2^{(2)}\\right|\\right\\} \\leq \\tfrac{1}{2} \\det \\Gamma.$\n\n##### MSC:\n 11H06 Lattices and convex bodies (number-theoretic aspects)\nFull Text:\n##### References:\n A. V. Ustinov, in Mathematics and Informatics, 1, Sovr. Probl. Matem., To the 75th Birthday of Anatolii Alekseevich Karatsuba (MIAN, Moscow, 2012), Vol. 16, pp. 103–128 [Proc. Steklov Inst. Math., 280, suppl. 2 (2013), S91–S116]. G. F. Voronoi, Collected Works in Three Volumes, Vol. 1 (Izdatel’stvo Akademii Nauk Ukrainskoi SSR, Kiev, 1952) [in Russian]. M. O. Avdeeva and V. A. Bykovskii, Mat. Zametki 79(2), 163 (2006) [Math. Notes 79 (1–2), 151–156 (2006)]. K. Th. Vahlen, J. fürMath. 115(3), 221 (1895). A. Ya. Khinchin, Continued Fractions (Nauka, Moscow, 1978; Dover Publications, Inc., Mineola, NY, 1997). H. Minkowski, Ann. Sci. École Norm. Sup. (3) 13, 41 (1896). · JFM 27.0170.01 H. Hancock, Development of the Minkowski Geometry of Numbers, Vol. 1,2 (Dover Publ., 1964). · Zbl 0123.25603 O. A. Gorkusha, Mat. Zametki 69(3), 353 (2001) [Math. Notes 69 (3–4), 320 (2001)]. V. A. Bykovskii and O. A. Gorkusha, Mat. Sb. 192(2), 57 (2001) [Sb. Math. 192 (1–2), 215–223 (2001)]. M. O. Avdeeva and V. A. Bykovskii, Mat. Sb. 194(7), 3 (2003) [Sb. Math. 194 (7–8), 955 (2003)]. V. A. Bykovskii, Mat. Zametki 66(1), 30 (1999) [Math. Notes 66 (1–2), 24 (1999) (2000)]. S. V. Gassan, Chebyshevskii Sb. 6(3), 51 (2005) [in Russian].\nThis reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.62368286,"math_prob":0.96225274,"size":2762,"snap":"2021-43-2021-49","text_gpt3_token_len":1059,"char_repetition_ratio":0.09390863,"word_repetition_ratio":0.04411765,"special_character_ratio":0.4250543,"punctuation_ratio":0.27925116,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98814756,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-05T10:05:14Z\",\"WARC-Record-ID\":\"<urn:uuid:aee45a32-1d32-4310-bcaf-ee04c3c41b7e>\",\"Content-Length\":\"50376\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c05d682f-88ae-4560-ab77-37c9674f3867>\",\"WARC-Concurrent-To\":\"<urn:uuid:c9f62240-2530-4483-9a5b-b74757ed1de8>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/?q=an:06434322\",\"WARC-Payload-Digest\":\"sha1:OXIHXHH2NDT2KRWE3P5BLVPLJTG7B7YO\",\"WARC-Block-Digest\":\"sha1:75HWMBAACCTPCT54DA253CXAW77ODESM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363157.32_warc_CC-MAIN-20211205100135-20211205130135-00415.warc.gz\"}"}
https://www.stata.com/statalist/archive/2006-03/msg00164.html
[ "", null, "", null, "", null, "", null, "# Re: Re: st: Total Least Squares Regression, anyone\n\n From \"Anders Alexandersson\" To [email protected] Subject Re: Re: st: Total Least Squares Regression, anyone Date Mon, 6 Mar 2006 13:32:21 -0500\n\n```For a graphical approach to total least squares regression, Tero might\nfind my program -ellip- helpful. At least it will give the orthogonal\nregression slope and can handle different error variance ratios. I'm\nnot aware of a Stata command for estimating total least squares\nregression models.\n\nTero Kivel wrote:\n\"The data [...] should be analysed using the method of Total Least\nSquares Regression, and the slope of the regression, the offset of the\nregression and the standard deviation\nof the regression should be made available.\"\n\n*\n* For searches and help try:\n* http://www.stata.com/support/faqs/res/findit.html\n* http://www.stata.com/support/statalist/faq\n* http://www.ats.ucla.edu/stat/stata/\n```" ]
[ null, "https://www.stata.com/includes/images/statalist_front.gif", null, "https://www.stata.com/includes/images/statalist_middle.gif", null, "https://www.stata.com/includes/images/statalist_end.gif", null, "https://www.stata.com/includes/contimages/spacer.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6828732,"math_prob":0.57612216,"size":844,"snap":"2020-45-2020-50","text_gpt3_token_len":201,"char_repetition_ratio":0.13809524,"word_repetition_ratio":0.0,"special_character_ratio":0.21445498,"punctuation_ratio":0.16770187,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98034155,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-25T00:31:18Z\",\"WARC-Record-ID\":\"<urn:uuid:0036cd01-0e32-4561-b78f-5b13e765ccec>\",\"Content-Length\":\"7138\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:60525093-4747-490f-abc1-8e4a28435913>\",\"WARC-Concurrent-To\":\"<urn:uuid:d5b8f72c-9532-4b96-ae26-f2ee6d5e2359>\",\"WARC-IP-Address\":\"66.76.6.5\",\"WARC-Target-URI\":\"https://www.stata.com/statalist/archive/2006-03/msg00164.html\",\"WARC-Payload-Digest\":\"sha1:7X2UNQ23RGNNARZ4MB54WNZHYVQE2TZF\",\"WARC-Block-Digest\":\"sha1:VIGOE3QAOR7R4QIIYYHZLPUP6SRGHSMX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141177607.13_warc_CC-MAIN-20201124224124-20201125014124-00081.warc.gz\"}"}
https://www.colorhexa.com/4e4e40
[ "# #4e4e40 Color Information\n\nIn a RGB color space, hex #4e4e40 is composed of 30.6% red, 30.6% green and 25.1% blue. Whereas in a CMYK color space, it is composed of 0% cyan, 0% magenta, 17.9% yellow and 69.4% black. It has a hue angle of 60 degrees, a saturation of 9.9% and a lightness of 27.8%. #4e4e40 color hex could be obtained by blending #9c9c80 with #000000. Closest websafe color is: #666633.\n\n• R 31\n• G 31\n• B 25\nRGB color chart\n• C 0\n• M 0\n• Y 18\n• K 69\nCMYK color chart\n\n#4e4e40 color description : Very dark grayish yellow.\n\n# #4e4e40 Color Conversion\n\nThe hexadecimal color #4e4e40 has RGB values of R:78, G:78, B:64 and CMYK values of C:0, M:0, Y:0.18, K:0.69. Its decimal value is 5131840.\n\nHex triplet RGB Decimal 4e4e40 `#4e4e40` 78, 78, 64 `rgb(78,78,64)` 30.6, 30.6, 25.1 `rgb(30.6%,30.6%,25.1%)` 0, 0, 18, 69 60°, 9.9, 27.8 `hsl(60,9.9%,27.8%)` 60°, 17.9, 30.6 666633 `#666633`\nCIE-LAB 32.785, -2.8, 8.31 6.792, 7.439, 5.928 0.337, 0.369, 7.439 32.785, 8.769, 108.623 32.785, 0.717, 9.958 27.274, -3.281, 6.205 01001110, 01001110, 01000000\n\n# Color Schemes with #4e4e40\n\n• #4e4e40\n``#4e4e40` `rgb(78,78,64)``\n• #40404e\n``#40404e` `rgb(64,64,78)``\nComplementary Color\n• #4e4740\n``#4e4740` `rgb(78,71,64)``\n• #4e4e40\n``#4e4e40` `rgb(78,78,64)``\n• #474e40\n``#474e40` `rgb(71,78,64)``\nAnalogous Color\n• #47404e\n``#47404e` `rgb(71,64,78)``\n• #4e4e40\n``#4e4e40` `rgb(78,78,64)``\n• #40474e\n``#40474e` `rgb(64,71,78)``\nSplit Complementary Color\n• #4e404e\n``#4e404e` `rgb(78,64,78)``\n• #4e4e40\n``#4e4e40` `rgb(78,78,64)``\n• #404e4e\n``#404e4e` `rgb(64,78,78)``\n• #4e4040\n``#4e4040` `rgb(78,64,64)``\n• #4e4e40\n``#4e4e40` `rgb(78,78,64)``\n• #404e4e\n``#404e4e` `rgb(64,78,78)``\n• #40404e\n``#40404e` `rgb(64,64,78)``\n• #24241e\n``#24241e` `rgb(36,36,30)``\n• #323229\n``#323229` `rgb(50,50,41)``\n• #404035\n``#404035` `rgb(64,64,53)``\n• #4e4e40\n``#4e4e40` `rgb(78,78,64)``\n• #5c5c4b\n``#5c5c4b` `rgb(92,92,75)``\n• #6a6a57\n``#6a6a57` `rgb(106,106,87)``\n• #787862\n``#787862` `rgb(120,120,98)``\nMonochromatic Color\n\n# Alternatives to #4e4e40\n\nBelow, you can see some colors close to #4e4e40. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #4e4b40\n``#4e4b40` `rgb(78,75,64)``\n• #4e4c40\n``#4e4c40` `rgb(78,76,64)``\n• #4e4d40\n``#4e4d40` `rgb(78,77,64)``\n• #4e4e40\n``#4e4e40` `rgb(78,78,64)``\n• #4d4e40\n``#4d4e40` `rgb(77,78,64)``\n• #4c4e40\n``#4c4e40` `rgb(76,78,64)``\n• #4b4e40\n``#4b4e40` `rgb(75,78,64)``\nSimilar Colors\n\n# #4e4e40 Preview\n\nThis text has a font color of #4e4e40.\n\n``<span style=\"color:#4e4e40;\">Text here</span>``\n#4e4e40 background color\n\nThis paragraph has a background color of #4e4e40.\n\n``<p style=\"background-color:#4e4e40;\">Content here</p>``\n#4e4e40 border color\n\nThis element has a border color of #4e4e40.\n\n``<div style=\"border:1px solid #4e4e40;\">Content here</div>``\nCSS codes\n``.text {color:#4e4e40;}``\n``.background {background-color:#4e4e40;}``\n``.border {border:1px solid #4e4e40;}``\n\n# Shades and Tints of #4e4e40\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #030302 is the darkest color, while #f8f8f7 is the lightest one.\n\n• #030302\n``#030302` `rgb(3,3,2)``\n• #0d0d0b\n``#0d0d0b` `rgb(13,13,11)``\n• #181814\n``#181814` `rgb(24,24,20)``\n• #23231d\n``#23231d` `rgb(35,35,29)``\n• #2e2e25\n``#2e2e25` `rgb(46,46,37)``\n• #38382e\n``#38382e` `rgb(56,56,46)``\n• #434337\n``#434337` `rgb(67,67,55)``\n• #4e4e40\n``#4e4e40` `rgb(78,78,64)``\n• #595949\n``#595949` `rgb(89,89,73)``\n• #646452\n``#646452` `rgb(100,100,82)``\n• #6e6e5b\n``#6e6e5b` `rgb(110,110,91)``\n• #797963\n``#797963` `rgb(121,121,99)``\n• #84846c\n``#84846c` `rgb(132,132,108)``\n• #8e8e76\n``#8e8e76` `rgb(142,142,118)``\n• #979780\n``#979780` `rgb(151,151,128)``\n• #a0a08b\n``#a0a08b` `rgb(160,160,139)``\n• #a9a996\n``#a9a996` `rgb(169,169,150)``\n• #b2b2a1\n``#b2b2a1` `rgb(178,178,161)``\n• #babaab\n``#babaab` `rgb(186,186,171)``\n• #c3c3b6\n``#c3c3b6` `rgb(195,195,182)``\n• #ccccc1\n``#ccccc1` `rgb(204,204,193)``\n• #d5d5cc\n``#d5d5cc` `rgb(213,213,204)``\n• #deded6\n``#deded6` `rgb(222,222,214)``\n• #e7e7e1\n``#e7e7e1` `rgb(231,231,225)``\n• #efefec\n``#efefec` `rgb(239,239,236)``\n• #f8f8f7\n``#f8f8f7` `rgb(248,248,247)``\nTint Color Variation\n\n# Tones of #4e4e40\n\nA tone is produced by adding gray to any pure hue. In this case, #494945 is the less saturated color, while #8a8a04 is the most saturated one.\n\n• #494945\n``#494945` `rgb(73,73,69)``\n• #4e4e40\n``#4e4e40` `rgb(78,78,64)``\n• #53533b\n``#53533b` `rgb(83,83,59)``\n• #595935\n``#595935` `rgb(89,89,53)``\n• #5e5e30\n``#5e5e30` `rgb(94,94,48)``\n• #64642a\n``#64642a` `rgb(100,100,42)``\n• #696925\n``#696925` `rgb(105,105,37)``\n• #6f6f1f\n``#6f6f1f` `rgb(111,111,31)``\n• #74741a\n``#74741a` `rgb(116,116,26)``\n• #7a7a14\n``#7a7a14` `rgb(122,122,20)``\n• #7f7f0f\n``#7f7f0f` `rgb(127,127,15)``\n• #858509\n``#858509` `rgb(133,133,9)``\n• #8a8a04\n``#8a8a04` `rgb(138,138,4)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #4e4e40 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.52051187,"math_prob":0.48372132,"size":3673,"snap":"2021-21-2021-25","text_gpt3_token_len":1694,"char_repetition_ratio":0.12128645,"word_repetition_ratio":0.011070111,"special_character_ratio":0.56003267,"punctuation_ratio":0.23404256,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99119705,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-15T21:00:52Z\",\"WARC-Record-ID\":\"<urn:uuid:fb7eb1fb-89d9-4c12-90d6-21520d8c86eb>\",\"Content-Length\":\"36224\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b77b1c44-7bd4-4ee1-92f6-a82656faf5f1>\",\"WARC-Concurrent-To\":\"<urn:uuid:bb8927bd-b5d3-433b-b33b-35bb1a22dee3>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/4e4e40\",\"WARC-Payload-Digest\":\"sha1:D7ILAZCV4KR5MWSNEAJDGSEF7LIYR2QO\",\"WARC-Block-Digest\":\"sha1:PV2B7264EAR6HNP4HJ3N3U3ZN436Z5LB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991378.52_warc_CC-MAIN-20210515192444-20210515222444-00440.warc.gz\"}"}
http://cc1.ifj.edu.pl/docs/doxygen/classsrc_1_1wi_1_1forms_1_1user_1_1_change_quota_form.html
[ "", null, "cc1  v2.1 CC1 source code docs\nsrc.wi.forms.user.ChangeQuotaForm Class Reference\n\nClass for quota's change form. More...\n\n## Static Public Attributes\n\ntuple cpu = forms.IntegerField(label=_(\"Cpu Total\"))\ntuple memory = forms.IntegerField(label=_(\"Memory Total [MB]\"))\ntuple points = forms.IntegerField(min_value=0, label=_(\"Points\"))\ntuple public_ip = forms.IntegerField(min_value=0, label=_(\"Public IPs Total\"))\ntuple storage = forms.IntegerField(label=_(\"Storage Total [MB]\"))\n\n## Detailed Description\n\nClass for quota's change form.\n\nDefinition at line 387 of file user.py.\n\n## Member Data Documentation\n\n tuple src.wi.forms.user.ChangeQuotaForm.cpu = forms.IntegerField(label=_(\"Cpu Total\"))\nstatic\n\nDefinition at line 388 of file user.py.\n\n tuple src.wi.forms.user.ChangeQuotaForm.memory = forms.IntegerField(label=_(\"Memory Total [MB]\"))\nstatic\n\nDefinition at line 389 of file user.py.\n\n tuple src.wi.forms.user.ChangeQuotaForm.points = forms.IntegerField(min_value=0, label=_(\"Points\"))\nstatic\n\nDefinition at line 392 of file user.py.\n\n tuple src.wi.forms.user.ChangeQuotaForm.public_ip = forms.IntegerField(min_value=0, label=_(\"Public IPs Total\"))\nstatic\n\nDefinition at line 391 of file user.py.\n\n tuple src.wi.forms.user.ChangeQuotaForm.storage = forms.IntegerField(label=_(\"Storage Total [MB]\"))\nstatic\n\nDefinition at line 390 of file user.py.\n\nThe documentation for this class was generated from the following file:" ]
[ null, "http://cc1.ifj.edu.pl/docs/doxygen/cc1.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5022122,"math_prob":0.611996,"size":1149,"snap":"2023-14-2023-23","text_gpt3_token_len":270,"char_repetition_ratio":0.18515284,"word_repetition_ratio":0.056603774,"special_character_ratio":0.25935596,"punctuation_ratio":0.22705314,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97050494,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-28T08:38:32Z\",\"WARC-Record-ID\":\"<urn:uuid:acae7d74-6ecb-4be8-becd-991c8d58312e>\",\"Content-Length\":\"13040\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63e6c652-3f7e-4ccc-9ebf-9ccd715c10ae>\",\"WARC-Concurrent-To\":\"<urn:uuid:271f793f-dcd8-4840-bd17-18566d1b0584>\",\"WARC-IP-Address\":\"192.245.169.10\",\"WARC-Target-URI\":\"http://cc1.ifj.edu.pl/docs/doxygen/classsrc_1_1wi_1_1forms_1_1user_1_1_change_quota_form.html\",\"WARC-Payload-Digest\":\"sha1:U4OFMGKPIVBNXYGOVZH6HLGE7E5HM45Q\",\"WARC-Block-Digest\":\"sha1:OF4BIDFLJ6Q4BDN5WRYHSQCFITM55KZR\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948817.15_warc_CC-MAIN-20230328073515-20230328103515-00757.warc.gz\"}"}
http://forums.wolfram.com/mathgroup/archive/2001/Jan/msg00476.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Re: Queries: notebook, matrices\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg27019] Re: [mg26975] Queries: notebook, matrices\n• From: Tomas Garza <tgarza01 at prodigy.net.mx>\n• Date: Tue, 30 Jan 2001 23:22:35 -0500 (EST)\n• References: <[email protected]>\n• Sender: owner-wri-mathgroup at wolfram.com\n\n```1. You can Select the input cells you would like to recalculate and then\nCell | Cell Properties | Initialization Cell. Then, when you reopen your\nnotebook and activate the first cell you will be asked whether you want to\ninitialize all cells. Say Yes and that's it. By the way if there is a lot of\ntime-consuming calculation which doesn't change from one session to the\nnext, you'd do well to save the results with >> and read them back with <<\n\n2. My guess is you'll have to specify each time that you want your matrix\ndisplayed in matrix form. Of course you may define a function for that\npurpose, which will save you the trouble of typing the whole thing, such as,\ne.g.\n\nIn:=\na = {{1, 2, 3, 4}, {4, 5, 6, 7}}\nOut=\n{{1, 2, 3, 4}, {4, 5, 6, 7}}\nIn:=\nd[a_] := MatrixForm[a]\nIn:=\nd[a]\nOut//MatrixForm=\n\\!\\(\\*\nTagBox[\nRowBox[{\"(\", \"\\[NoBreak]\", GridBox[{\n{\"1\", \"2\", \"3\", \"4\"},\n{\"4\", \"5\", \"6\", \"7\"}\n}], \"\\[NoBreak]\", \")\"}],\n(MatrixForm[ #]&)]\\)\n\n3. OK, there is a standard program which I got years ago from Wolfram\nsupport, I guess, which takes data from Mathematica and stores them as\nAscii text files which can then be read by Excel or some other such\nprogram (I presume you want to use your data outside Mathematica;\notherwise there is no point in saving it with a specified format). The\ncode is as follows:\n\nWriteMatrix[filename_String, data_List] :=\nWith[{myfile = OpenWrite[filename]},\nScan[(WriteString[myfile, First[#]];\nScan[WriteString[myfile, \"\\t\", #] &, Rest[#]];\nWriteString[myfile, \"\\n\"]) &, data];\nClose[myfile]]\n\nOnce you define this function WriteMatrix, you may use it as follows:\n\nIn:=\na = {{1, 2, 3}, {1, 4, 5}, {1, 6, 7}};\n\nIn:=\nWriteMatrix[\"b\", a]\nOut=\n\"b\"\n\nThis says: take list a and store it in Ascii form under the name \"b\".\nOnce you execute this, you will find b in the directory you're working.\nThen you may read b from Excel.\n\nTomas Garza\nMexico City\n\n----- Original Message -----\nFrom: \"M. Damerell\" <uhah208 at rhbnc.ac.uk>\nTo: mathgroup at smc.vnet.net\nSubject: [mg27019] [mg26975] Queries: notebook, matrices\n\n>\n>\n> Sorry if these are answered in a FAQ file, I could not\n> find any FAq file for this group. Math. 4.0, windows95.\n>\n> 1. I do a lot of calculation in a notebook, save it,\n> then when I restart Math & open the file I have to\n> recalculate each item before I can resume where I\n> left off. Is there any way to tell Math to recalculate\n> everything when the file is reopened?\n>\n> 2. I enter a matrix as (say) A={{1,2,3},{4,5,6} etc\n> and it displays in that form. Is there any way to make\n> Math display matrices as matrices? I know you can do\n>\n> B=A.A//MatrixForm\n>\n> this looks right but Math doesnt know this is a matrix.\n> So I would like matrices to be held internally in list\n> form but displayed in matrix form.\n>\n> 3. and finally, is there any way to export the matrix\n> into Excel?\n>\n>\n\n```\n\n• Prev by Date: Re: 1. Input of screen coordinates; 2. Fast graphics\n• Next by Date: RE: RE: 1. Input of screen coordinates; 2. 
Fast graphics\n• Previous by thread: Queries: notebook, matrices\n• Next by thread: Re: Queries: notebook, matrices" ]
[ null, "http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif", null, "http://forums.wolfram.com/mathgroup/images/head_archive.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/2.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/1.gif", null, "http://forums.wolfram.com/mathgroup/images/search_archive.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8586208,"math_prob":0.6812727,"size":3322,"snap":"2019-35-2019-39","text_gpt3_token_len":976,"char_repetition_ratio":0.09795057,"word_repetition_ratio":0.01754386,"special_character_ratio":0.33503914,"punctuation_ratio":0.20718232,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9720622,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-16T05:11:02Z\",\"WARC-Record-ID\":\"<urn:uuid:85ac5762-bcae-45e0-a286-9b2ef60da378>\",\"Content-Length\":\"45553\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6dbefb35-12d9-46d2-a022-ff61e771f317>\",\"WARC-Concurrent-To\":\"<urn:uuid:41e9e135-8079-49da-854b-843a82b11719>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2001/Jan/msg00476.html\",\"WARC-Payload-Digest\":\"sha1:T7KOA6GVWNP76OAYJ2N4ECQ45RPEVMKC\",\"WARC-Block-Digest\":\"sha1:PYXA5ZXR6A7NFPA55QDEWSPL4VXJIIY6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514572484.20_warc_CC-MAIN-20190916035549-20190916061549-00111.warc.gz\"}"}
https://www.assignmentexpert.com/homework-answers/physics/mechanics-relativity/question-4320
[ "# Answer to Question #4320 in Mechanics | Relativity for Sara\n\nQuestion #4320\nhi i have a question for you, To set a speed record in measured straight line distance d, a race car must be driven first in one direction in time t1 and then in the opposite direction time t2. a) to eliminate the effects of the wind and obtain the ca&#039;s speed Vc in a windless situation, should we find the average of d/t1 and d/t2 (method 1) or should we devidee d by the average of t1 and t2. b) what is the fractional difference in the two methods when a steady wind belows along the car&#039;s route and the ratio of the wind speed Vw to the car&#039;s speed Vc is 0.0180\n1\n2012-04-03T10:57:02-0400\nv = speed of car with no wind\nu = speed of the wind along the path of the car\nv - u = speed when going against the wind\nv + u = when going in the same direction as the wind\n\nv - u = d/t1\nv + u = d/t2\n2v = [d/t1 + d/t2]\nv = (1/2)[d/t1 + d/t2]\n\nOf course the meanings of t1 and t2 are not important since they appear in the equation in exactly the same way. So the method to use is method 1.\n\nu/v = 0.018 so u = 0.018v\n\nOther method v = d/[(t1 + t2)/2] = (2){1/[t1/d + t2/d]}\n\ndiff = (1/2)[d/t1 + d/t2] - (2){1/[t1/d + t2/d]}\ndiff = (1/2){d/t1 + d/t2 - 4/[t1/d + t2/d]}\ndiff = (1/2){2v - 4/[1/(v - u) + 1/v + u)]}\ndiff = (1/2){2v - 4(v + u)(v - u)/2v}\ndiff = [1/(4v)][4v^2 - 4(v^2 - u^2)]\ndiff = (u)(u/v) = u^2/v\nfractional diff = diff/v = (u/v)^2 = (0.018)^2 = 0,00032\n\nNeed a fast expert's response?\n\nSubmit order\n\nand get a quick answer at the best price\n\nfor any assignment or question with DETAILED EXPLANATIONS!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9332023,"math_prob":0.9981581,"size":1178,"snap":"2019-51-2020-05","text_gpt3_token_len":359,"char_repetition_ratio":0.14139694,"word_repetition_ratio":0.0,"special_character_ratio":0.32852292,"punctuation_ratio":0.04597701,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9919033,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-23T20:56:37Z\",\"WARC-Record-ID\":\"<urn:uuid:9e354869-e339-44af-a24b-33f3f3c0ce30>\",\"Content-Length\":\"47196\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:086ece1c-2424-4200-b298-52962a951c06>\",\"WARC-Concurrent-To\":\"<urn:uuid:48336cc3-2cfc-4b33-b536-9bad84bf885d>\",\"WARC-IP-Address\":\"52.24.16.199\",\"WARC-Target-URI\":\"https://www.assignmentexpert.com/homework-answers/physics/mechanics-relativity/question-4320\",\"WARC-Payload-Digest\":\"sha1:K4SNKRQWNDRO2CLMWXNY76FC4LHVEDYA\",\"WARC-Block-Digest\":\"sha1:5327FQU5NBKPA7HGRLRUAA4XCEQJMFP5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250613416.54_warc_CC-MAIN-20200123191130-20200123220130-00249.warc.gz\"}"}
https://lists.defectivebydesign.org/archive/html/help-octave/1995-09/msg00014.html
[ "help-octave\n[Top][All Lists]\n\n## Re: Arrays of strings\n\n From: John Eaton Subject: Re: Arrays of strings Date: Wed, 6 Sep 1995 17:37:54 -0500\n\n```address@hidden <address@hidden> showed how\nto simulate vectors of strings using numeric matrices and\ntoascii/setstr:\n\n: Here's what I did I wrote a tiny function stostr:\n:\n: function var = stostr(var, loc, string)\n: var(loc,1:max(size(toascii(string)))) = toascii(string);\n\nYes, this is about the best you can do with 1.1.1. For 1.2, Octave\nwill support vectors of strings. Indexing will work as will\nconversion to numeric matrices using toascii (or abs, if\nimplicit_str_to_num_ok is set is set to true).\n\nAlso, the internal representation will be 1-byte characters, not be a\nmatrix of doubles, so strings should be more space-efficient in Octave\ncompared to Matlab, and Octave will not force to to pad strings to\nthe same length in order to put them in an array. For example,\n\nfoo = [\"this\"; \"is\"; \"an\"; \"array\"; \"of\"; \"strings\"]\n\nwill be perfectly acceptable.\n\nThanks,\n\njwe\n\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.72960633,"math_prob":0.6766531,"size":1062,"snap":"2023-14-2023-23","text_gpt3_token_len":290,"char_repetition_ratio":0.11247637,"word_repetition_ratio":0.011976048,"special_character_ratio":0.2777778,"punctuation_ratio":0.18468468,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96358544,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-08T11:18:44Z\",\"WARC-Record-ID\":\"<urn:uuid:a14fd141-7443-42fa-b3c3-5047305adcbb>\",\"Content-Length\":\"5453\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d45ad738-50cf-4352-bd88-9f9200633976>\",\"WARC-Concurrent-To\":\"<urn:uuid:1b561c5a-b700-45dd-a1a2-8fa316fd6497>\",\"WARC-IP-Address\":\"209.51.188.17\",\"WARC-Target-URI\":\"https://lists.defectivebydesign.org/archive/html/help-octave/1995-09/msg00014.html\",\"WARC-Payload-Digest\":\"sha1:EDAMCLXZ6RBM23HN2TDW2HZZM4OVZJA5\",\"WARC-Block-Digest\":\"sha1:QQC4DPMMNRZLAHJHSZ7Z2I34BKUFHYVN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224654871.97_warc_CC-MAIN-20230608103815-20230608133815-00742.warc.gz\"}"}
https://wikidev.in/wiki/assembly/8086/CWD
[ "You are here : assembly8086CWD\n\n# CWD - 8086\n\n`Converts word to double word.If High bit of AX=1 then : DX = 65535 (0FFFFh)else : DX = 0`\n\n`CWD`\n\n### Example\n\n```MOV DX, 0 ; DX = 0\nMOV AX, 0 ; AX = 0\nMOV AX, -5 ; DX AX = 00000h:0FFFBh\nCWD ; DX AX = 0FFFFh:0FFFBh\nRET```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6222649,"math_prob":0.99390656,"size":288,"snap":"2022-40-2023-06","text_gpt3_token_len":126,"char_repetition_ratio":0.16549295,"word_repetition_ratio":0.15151516,"special_character_ratio":0.44444445,"punctuation_ratio":0.2,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9650975,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-08T15:55:40Z\",\"WARC-Record-ID\":\"<urn:uuid:82a88fa6-428b-4328-ac6d-b2837f73a93a>\",\"Content-Length\":\"6635\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e2715189-36b0-4844-8f79-43c218fc0595>\",\"WARC-Concurrent-To\":\"<urn:uuid:303cdeab-b8a7-434c-8b5b-55be075e6607>\",\"WARC-IP-Address\":\"104.21.60.239\",\"WARC-Target-URI\":\"https://wikidev.in/wiki/assembly/8086/CWD\",\"WARC-Payload-Digest\":\"sha1:2YRZMMZRZHC6D7ZUXRBSMG6AO3CTLBUF\",\"WARC-Block-Digest\":\"sha1:ATDCXKEHQE7ICZHX67U3E63OVS6ZRL67\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500837.65_warc_CC-MAIN-20230208155417-20230208185417-00258.warc.gz\"}"}
https://www.coursehero.com/file/p28fjpc/holtlinalg1-61026nva-Consider-the-matrix-A-Find-the-characteristic-polynomial/
[ "# Holtlinalg1 61026nva consider the matrix a find the\n\n• Homework Help\n• 12\n• 98% (114) 112 out of 114 people found this document helpful\n\nThis preview shows page 7 - 9 out of 12 pages.\n\n7.–/3 pointsholtlinalg1 6.1.026.nvaConsider the matrixAFind the characteristic polynomial for the matrixA. (Write your answer in terms ofλ=0 0 11 0 00 1 0..)(No Response)Find the real eigenvalues for the matrixA. (Enter your answers as a comma-separated list.)λ=(No Response)Find a basis for each eigenspace for the matrixA[1;1;1].A=0 0 11 0 00 1 0,11\n1\n8.–/1 pointsHoltLinAlg1 6.1.037.Determine if the statement is true or false, and justify your answer.An eigenvalueλmust be nonzero, but an eigenvectorucan be equal to the zero vector.\n9.–/1 pointsHoltLinAlg1 6.1.039.Determine if the statement is true or false, and justify your answer.Ifuis a nonzero eigenvector ofA, thenuandAupoint in the same direction.True. Sinceuis a nonzero eigenvector ofA, there existsλ> 0 such thatAu=True. Sinceuis a nonzero eigenvector ofA, there existsλ< 0 such thatAu=False.Auanduare perpendicular.False. Ifλ< 0, thenAuandupoint in opposite directions.False. Ifλ> 0, thenAuandupoint in opposite directions.λ< 0,A=1 00 0u=,10Au=uandλu.λu..\n\nCourse Hero member to access this document\n\nCourse Hero member to access this document\n\nEnd of preview. Want to read all 12 pages?\n\nCourse Hero member to access this document\n\nTerm\nSpring\nProfessor\nN/A\nTags\nMath, Characteristic polynomial, Eigenvalue eigenvector and eigenspace\n•", null, "•", null, "•", null, "" ]
[ null, "https://www.coursehero.com/assets/img/doc-landing/start-quote.svg", null, "https://www.coursehero.com/assets/img/doc-landing/start-quote.svg", null, "https://www.coursehero.com/assets/img/doc-landing/start-quote.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6902412,"math_prob":0.83033967,"size":754,"snap":"2022-27-2022-33","text_gpt3_token_len":228,"char_repetition_ratio":0.116,"word_repetition_ratio":0.24742268,"special_character_ratio":0.23342176,"punctuation_ratio":0.21153846,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9546998,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T11:46:52Z\",\"WARC-Record-ID\":\"<urn:uuid:cbfd98bd-7f3d-4d85-92bc-ade4e7e53cd9>\",\"Content-Length\":\"260761\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6cd9d323-8f80-47dd-991f-586b25e35b13>\",\"WARC-Concurrent-To\":\"<urn:uuid:096e0e0c-f5f4-4035-8bdc-6454446aa60a>\",\"WARC-IP-Address\":\"104.17.92.47\",\"WARC-Target-URI\":\"https://www.coursehero.com/file/p28fjpc/holtlinalg1-61026nva-Consider-the-matrix-A-Find-the-characteristic-polynomial/\",\"WARC-Payload-Digest\":\"sha1:KDVVNYUGFMX6EOPHM4LOYWQ7YZBL3UCV\",\"WARC-Block-Digest\":\"sha1:SXHXZGACA2A3NWS4GX7GP46XOLFPSV4D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103940327.51_warc_CC-MAIN-20220701095156-20220701125156-00655.warc.gz\"}"}
https://answers.everydaycalculation.com/simplify-fraction/350-900
[ "Solutions by everydaycalculation.com\n\n## Reduce 350/900 to lowest terms\n\nThe simplest form of 350/900 is 7/18.\n\n#### Steps to simplifying fractions\n\n1. Find the GCD (or HCF) of numerator and denominator\nGCD of 350 and 900 is 50\n2. Divide both the numerator and denominator by the GCD\n350 ÷ 50/900 ÷ 50\n3. Reduced fraction: 7/18\nTherefore, 350/900 simplified to lowest terms is 7/18.\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6174408,"math_prob":0.82122993,"size":383,"snap":"2023-40-2023-50","text_gpt3_token_len":127,"char_repetition_ratio":0.15039578,"word_repetition_ratio":0.0,"special_character_ratio":0.46214098,"punctuation_ratio":0.08219178,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9535244,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-30T07:43:27Z\",\"WARC-Record-ID\":\"<urn:uuid:0fd31f09-a884-409e-9a05-dc3d6b293003>\",\"Content-Length\":\"6710\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3edd87ad-2a02-414d-9a84-7f3f747532c9>\",\"WARC-Concurrent-To\":\"<urn:uuid:df6c3ee4-1544-444d-8b07-ddb4bcd309d5>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/simplify-fraction/350-900\",\"WARC-Payload-Digest\":\"sha1:IC4GBJ7J6SDLQPTH2UW6FNQ7JPKV7Q3F\",\"WARC-Block-Digest\":\"sha1:TDRWELIWW27J3H6J27L4PBMFVHF5FZUN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100172.28_warc_CC-MAIN-20231130062948-20231130092948-00708.warc.gz\"}"}
https://fr.maplesoft.com/support/help/maple/view.aspx?path=Units/Natural/exp&L=F
[ "", null, "exp - Maple Help\n\nUnits[Natural]\n\n exp\n exponential function", null, "Calling Sequence\n\n exp(expr) ${ⅇ}^{\\mathrm{expr}}$", null, "Parameters\n\n expr - algebraic expression", null, "Description\n\n • In the Natural Units environment, the arguments for the exponential functions can be unit-free, or multiplied by a unit with the dimension logarithmic_gain. An error is returned if the dimension of the argument is not one of logarithmic gain.\n • By default, the units of the object returned are a ratio of wave numbers (inverse length). Using energy conversions, this ratio is proportional to a voltage ratio and its square is proportional to a power ratio.\n • Note that the first calling sequence must be entered in 2-D math notation by using either the palettes or command completion. The exponential ${e}^{x}$ will not be recognized if it is entered manually.\n • For other properties, see the global function exp.", null, "Examples\n\n > $\\mathrm{with}\\left(\\mathrm{Units}\\left[\\mathrm{Natural}\\right]\\right):$\n > $\\mathrm{ratio}≔\\mathrm{exp}\\left(15.5\\mathrm{dB}\\right)$\n ${\\mathrm{ratio}}{≔}{5.956621435}{}⟦\\frac{{m}{}\\left({\\mathrm{base}}\\right)}{{m}}⟧$ (1)\n > $\\mathrm{convert}\\left(\\mathrm{ratio},'\\mathrm{units}',\\frac{\\mathrm{volts}}{\\mathrm{volts}\\left(\\mathrm{base}\\right)},'\\mathrm{energy}'\\right)$\n ${5.956621435}{}⟦\\frac{{V}}{{V}{}\\left({\\mathrm{base}}\\right)}⟧$ (2)\n > $\\mathrm{convert}\\left({\\mathrm{ratio}}^{2},'\\mathrm{units}',\\frac{\\mathrm{watt}}{\\mathrm{watt}\\left(\\mathrm{base}\\right)},'\\mathrm{energy}'\\right)$\n ${35.48133892}{}⟦\\frac{{W}}{{W}{}\\left({\\mathrm{base}}\\right)}⟧$ (3)\n > $\\mathrm{log10}\\left(\\right)$\n ${0.7750000000}{}⟦{\\mathrm{Np}}⟧$ (4)\n > $\\mathrm{convert}\\left(,'\\mathrm{units}',\\mathrm{dB}\\right)$\n ${6.731564469}{}⟦{\\mathrm{dB}}⟧$ (5)\n > $\\mathrm{ln}\\left(\\right)$\n ${1.784503447}{}⟦{\\mathrm{Np}}⟧$ (6)\n > $\\mathrm{convert}\\left(,'\\mathrm{units}',\\mathrm{dB}\\right)$\n ${15.50000000}{}⟦{\\mathrm{dB}}⟧$ (7)\n > $\\mathrm{ln}\\left(\\mathrm{exp}\\left(15.5\\mathrm{dB}\\right)\\right)$\n ${1.784503447}{}⟦{\\mathrm{Np}}⟧$ (8)" ]
[ null, "https://bat.bing.com/action/0", null, "https://fr.maplesoft.com/support/help/maple/arrow_down.gif", null, "https://fr.maplesoft.com/support/help/maple/arrow_down.gif", null, "https://fr.maplesoft.com/support/help/maple/arrow_down.gif", null, "https://fr.maplesoft.com/support/help/maple/arrow_down.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69783497,"math_prob":0.99998415,"size":1274,"snap":"2021-43-2021-49","text_gpt3_token_len":370,"char_repetition_ratio":0.0992126,"word_repetition_ratio":0.0,"special_character_ratio":0.2299843,"punctuation_ratio":0.12809917,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99975485,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-09T01:25:16Z\",\"WARC-Record-ID\":\"<urn:uuid:41b470a6-bdf9-47cc-9d1b-379ed696ad94>\",\"Content-Length\":\"186514\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3e105099-ac99-464e-8277-1b8fee1f1ed3>\",\"WARC-Concurrent-To\":\"<urn:uuid:de7f9ed9-e2ce-4bfc-a51d-a785959d903c>\",\"WARC-IP-Address\":\"199.71.183.28\",\"WARC-Target-URI\":\"https://fr.maplesoft.com/support/help/maple/view.aspx?path=Units/Natural/exp&L=F\",\"WARC-Payload-Digest\":\"sha1:R5W23WOFICPUZULCTKAHFEKJ3WD2XOCH\",\"WARC-Block-Digest\":\"sha1:3W64PGD6H72VCJ3G5WMCBGYVKGCL2G5I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363641.20_warc_CC-MAIN-20211209000407-20211209030407-00483.warc.gz\"}"}
http://rouletteonline.live/index.php/2022/09/12/basic-principle-of-betting-odds/
[ "# Basic principle of betting odds\n\nFirst of all, betting odds are based on mathematical considerations or calculations. The concept makes use of the principle of probability.\n\nDifferent formats are used to indicate betting odds depending on the betting provider and, above all, the respective country. In Germany, the decimal odds (e.g. 2.50) are mostly used to indicate odds.\n\nBut what does a certain odds actually mean? From the tipper’s point of view, the betting odds essentially provide two pieces of information:\n\nIt indicates how much money you get if you win\nIt gives an idea of ​​how likely it is that an event will occur\n\nTo illustrate this, we use our example odds of 2.50 below.\ncalculate profit\n\nThe profit from a bet always depends on two factors and is calculated using the following formula:\n\nProfit = stake x betting odds .\n\nWith a bet of €20 and a betting odds of 2.50, the possible profit is €50.\n\nFrom a tipper’s point of view, you should ask yourself one crucial question before placing your bet:\n\nWhat is the probability that event X will occur?\n\nBecause this question is ultimately also the basis for determining betting odds on the part of sports betting providers.\ncalculate probability\n\nThe specified betting odds also provide an indication of this. As mentioned above, bookmakers determine their betting odds based on probabilities. This means betting providers calculate or determine the respective probability for every possible betting option they offer.\n\nFor example, in football bets, factors such as the respective form of the teams, general quality of the squad, venue, and much more. analyzed and used for assessment.\n\nThe probabilities are then converted into a betting odds. This is done with the following formula:\n\nBetting odds = 100/probability of event X .\n\nFor our example odds of 2.50, we can calculate the approximate probability that the betting provider used as a basis for event X:\n\n2.50 = 100/probability of event X\nProbability of event X = 100/2.50 in percent\nprobability = 40%\n\nBasically, this results in the well-known principle: the higher the betting odds, the less likely it is that your bet will work.\nFair betting odds vs real betting odds\n\nIn addition, it should be noted in this regard that when specifying the actual betting odds, a profit margin is also taken into account in addition to the probability of the different events.\n\nFrom a bookmaker’s point of view, this represents a security in order not to make any losses in the long term. A distinction is made in this regard between fair and real betting odds.\n\nAs an example, let’s look at a fictitious 3-way bet for the match between Schalke and Dortmund. There are a total of 3 betting options: Schalke win, draw, Dortmund win.\n\nAccording to the above explanation, a probability is determined for each of these events and the corresponding betting odds are then calculated (the sum of the probabilities must be 100%):\n\nVictory Schalke: 100/45 = 2.22\nDraw: 100/20 = 5.00\nWin Dortmund = 100/35 = 2.86\n\nIt is now a question of the fair betting odds. What does that mean? Assuming all wagers placed on the game were spread across the three bets according to their respective probabilities. Then 메이저사이트 from losing bets would have to be paid out to the winners and bookmakers would not make a profit.\n\nFor this reason, the fair betting odds are converted into real betting odds. 
To do this, the fair odds are multiplied by a factor of less than 1.\n\nReal odds = fair odds x (n < 1)\n\nIn our example, the final real odds could look like this:\n\nVictory Schalke: 2.22 x 0.95 = 2.11\nDraw: 5.00 x 0.95 = 4.75\nVictory Dortmund: 2.86 x 0.95 = 2.72" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.931251,"math_prob":0.9789679,"size":3585,"snap":"2022-40-2023-06","text_gpt3_token_len":822,"char_repetition_ratio":0.16056967,"word_repetition_ratio":0.003236246,"special_character_ratio":0.23375174,"punctuation_ratio":0.123796426,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9978172,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-28T10:40:14Z\",\"WARC-Record-ID\":\"<urn:uuid:55a37fd1-2d6a-4933-b608-33d3e1474476>\",\"Content-Length\":\"34351\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fd4565eb-ac7d-418e-b86d-c8b158d022d5>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a36ab23-981e-4704-b07f-6a9a1d280055>\",\"WARC-IP-Address\":\"185.224.137.150\",\"WARC-Target-URI\":\"http://rouletteonline.live/index.php/2022/09/12/basic-principle-of-betting-odds/\",\"WARC-Payload-Digest\":\"sha1:K77ANKXNJYHONH4OKJTD5V2EPLRTTUKM\",\"WARC-Block-Digest\":\"sha1:PQ7RPTQAYP3JBX52YQZPPBP5P5OEYXVQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335190.45_warc_CC-MAIN-20220928082743-20220928112743-00400.warc.gz\"}"}
https://www.localsolver.com/docs/last/cppapi/modeler/lspvalue.html
[ "# LSPValue Class¶\n\nclass LSPValue\n\nGeneral value container. Any value can be contained in an instance of this class (integer, double, boolean, string, LSExpression, module, map, function and also non exposed types like files or dates).\n\nSince\n\n10.0\n\n## Summary¶\n\n `getType` Returns the type of the value. `isNil` Returns true if the value is nil. `isInt` Returns true if the value is an integer value. `isDouble` Returns true if the value is a double value. `isBool` Returns true if the value is a boolean value. `isExpression` Returns true if the value is an localsolver::LSExpression. `isString` Returns true if the value is a string value. `isMap` Returns true if the value is an LSPMap. `isFunction` Returns true if the value is an LSPFunction. `isModule` Returns true if the value is an LSPModule. `asInt` Returns the value as an integer. `asDouble` Returns the value as a double. `asBool` Returns the value as a boolean. `asExpression` Returns the value as a localsolver::LSExpression. `asString` Returns the value as a string. `asMap` Returns the value as an LSPMap. `asFunction` Returns the value as an LSPFunction. `asModule` Returns the value as an LSPModule.\n `lsint` Returns the value as an integer. `lsdouble` Returns the value as a double. `bool_` Returns the value as a boolean. `LSExpression` Returns the value as a localsolver::LSExpression. `string` Returns the value as a string. `LSPMap` Returns the value as an LSPMap. `LSPFunction` Returns the value as an LSPFunction. `LSPModule` Returns the value as an LSPModule.\n\n## Functions¶\n\nLSPType LSPValue::getType() const\n\nReturns the type of the value.\n\nReturns\n\nType of the value.\n\nSee\n\n`LSPType`\n\nbool LSPValue::isNil() const\n\nReturns true if the value is nil.\n\nbool LSPValue::isInt() const\n\nReturns true if the value is an integer value.\n\nbool LSPValue::isDouble() const\n\nReturns true if the value is a double value.\n\nbool LSPValue::isBool() const\n\nReturns true if the value is a boolean value.\n\nbool LSPValue::isExpression() const\n\nReturns true if the value is an `localsolver::LSExpression`.\n\nbool LSPValue::isString() const\n\nReturns true if the value is a string value.\n\nbool LSPValue::isMap() const\n\nReturns true if the value is an `LSPMap`.\n\nbool LSPValue::isFunction() const\n\nReturns true if the value is an `LSPFunction`.\n\nbool LSPValue::isModule() const\n\nReturns true if the value is an `LSPModule`.\n\nlsint LSPValue::asInt() const\n\nReturns the value as an integer. The value must be an integer.\n\nlsdouble LSPValue::asDouble() const\n\nReturns the value as a double. The value must be a double.\n\nbool LSPValue::asBool() const\n\nReturns the value as a boolean. The value must be a boolean.\n\nLSExpression LSPValue::asExpression() const\n\nReturns the value as a `localsolver::LSExpression`. The value must be a LSExpression.\n\nstd::string LSPValue::asString() const\n\nReturns the value as a string. The value must be a string.\n\nLSPMap LSPValue::asMap() const\n\nReturns the value as an `LSPMap`. The value must be a map.\n\nLSPFunction LSPValue::asFunction() const\n\nReturns the value as an `LSPFunction`. The value must be a function.\n\nLSPModule LSPValue::asModule() const\n\nReturns the value as an `LSPModule`. The value must be a module.\n\nexplicit operator LSPValue::lsint() const\n\nReturns the value as an integer. The value must be an integer.\n\nexplicit operator LSPValue::lsdouble() const\n\nReturns the value as a double. 
The value must be a double.\n\nexplicit operator LSPValue::bool_() const\n\nReturns the value as a boolean. The value must be a boolean.\n\nexplicit operator LSPValue::LSExpression() const\n\nReturns the value as a `localsolver::LSExpression`. The value must be a LSExpression.\n\nexplicit operator LSPValue::std::string() const\n\nReturns the value as a string. The value must be a string.\n\nexplicit operator LSPValue::LSPMap() const\n\nReturns the value as an `LSPMap`. The value must be a map.\n\nexplicit operator LSPValue::LSPFunction() const\n\nReturns the value as an `LSPFunction`. The value must be a function.\n\nexplicit operator LSPValue::LSPModule() const\n\nReturns the value as an `LSPModule`. The value must be a module." ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.606954,"math_prob":0.97416586,"size":2651,"snap":"2022-40-2023-06","text_gpt3_token_len":680,"char_repetition_ratio":0.31620702,"word_repetition_ratio":0.508046,"special_character_ratio":0.22859298,"punctuation_ratio":0.1871345,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98047733,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-08T08:06:14Z\",\"WARC-Record-ID\":\"<urn:uuid:53df1b29-9906-49c1-a60b-ea99a2ad5bc2>\",\"Content-Length\":\"220848\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:17d54310-3ffd-4884-a516-3d478aca0625>\",\"WARC-Concurrent-To\":\"<urn:uuid:773d9070-b89c-42ce-963c-7008a4cd15b4>\",\"WARC-IP-Address\":\"51.38.10.83\",\"WARC-Target-URI\":\"https://www.localsolver.com/docs/last/cppapi/modeler/lspvalue.html\",\"WARC-Payload-Digest\":\"sha1:7WF5WBGG3Q5JKUQGQX66Y2UP3QERUHKF\",\"WARC-Block-Digest\":\"sha1:E2GDUKIGVNZVLNUKR6CS2WSSVJEHUWUE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500719.31_warc_CC-MAIN-20230208060523-20230208090523-00730.warc.gz\"}"}
https://www.jiskha.com/archives/2011/11/20
[ "# Questions Asked onNovember 20, 2011\n\na bicyclist is finishing his repair of a flat tire when a friend rides by at 3.5 m/s. two seconds later, the bicyclist hops on his bike and accelerates at 2.4 m/s^2 until he catches his friend? (a) how much time does it take until he catches his friend?\n\nA student is asked to standardize a solution of barium hydroxide. He weighs out 0.945 g potassium hydrogen phthalate (KHC8H4O4, treat this as a monoprotic acid). It requires 33.9 mL of barium hydroxide to reach the endpoint. A. What is the molarity of the\n\n3. ## physics\n\nA ball is projected horizontally from the edge of a table that is 1.15 m high, and it strikes the floor at a point 1.70 m from the base of the table. (a) What is the initial speed of the ball? m/s (b) How high is the ball above the floor when its velocity\n\n4. ## chemistry\n\nsulfur dioxide (g) + oxygen (g) sulfur trioxide (g) SO2 + O = SO3 Is this correct for the balanced equation?\n\nA radioactive nucleus at rest decays into a second nucleus, an electron, and a neutrino. The electron and neutrino are emitted at right angles and have momenta of 9.15×10−23 kg . m/s and 5.19×10−23 kg . m/s, respectively. 1.)What is the magnitude of\n\n6. ## Electrical Circuits\n\nfind the number of free electrons in a copper conductor having a diameter of 0.064 in. and a length of 1000ft.\n\n7. ## algebra\n\nThe half-life of plutonium-234 is 9 hours. If 40 milligrams is present now, how much will be present in 4 days? (Round your answer to three decimal places.)\n\n8. ## ALGEBRA\n\nFrom a survey of 100 college students, a marketing research company found that 65 students owned iPhones, 40 owned cars, and 30 owned both cars and iPhones. (a) How many students owned either a car or an iPhone (but not both)?\n\n9. ## Physics\n\nIn many locations, old abandoned stone quarries have become filled with water once excavating has been completed. While standing on a quarry wall, a boy tosses a piece of granite into the water below. If he throws the rock horizontally with a velocity of\n\n29. ## Further mathematics\n\nAn exponential sequence of positive terms and a linear sequence have the same first term.the sum of their first terms is 3 the sum of their second terms is 3/2,and the sum of their third terms is 6.find their fifth terms.\n\n30. ## physics\n\nDraw a block and tackle system of pulleys with a velocity ratio of 5 is used to raise a mass of a 25kg through a vertical distance of 40cm at a steady rate.\n\n31. ## Health\n\nWhich one of the following types of training involves 10 to 30 minutes of high-intensity exercise? A. Medium range B. Anaerobic C. Interval D. Long, steady distance i think its either b or a?\n\n32. ## Chemistry\n\nLarger amounts of methane, in the form CH4*6H20 is trapped at the bottom of the ceans in Clatharate compunds (formed when small molecules occupy the empty spaces in cage like structures). Calculate the mass of methane gas trapped with in one Kg of the\n\n33. ## geometry\n\nPoint D is the incenter of triangle ABC. Write an expression for the length x in terms of the three side lengths AB, AC, and BC.\n\n34. ## health\n\n4. What is the ideal length of time for a cool-down following an intense workout? A. 15 to 20 minutes B. 5 to 15 minutes C. 0 to 3 minutes D. 3 to 5 minutes i think its b?\n\n35. ## socials\n\nI compare wartime propaganda posters. first one is - remember hong kong! i google search it and it come. message it give out i cant tell. it look like japanese soldier standing and little white girl below. 
i think maybe show innocence. images used to\n\n36. ## chemistry\n\nAm I headed in the right direction. Sn + MnO4- --> Sn2+ + Mn2+ Sn --> Sn2+ + 2e- 5e + 8H + MnO4- --> Mn2+ + 4H2O If Im right what is the whole number that Im multiplying by\n\n37. ## physics\n\nOn a brisk walk a person burns about 325 Cal/h at this rate how many hours of brisk walk would it take to lose a pound of body fat\n\n38. ## physics\n\nA ball is projected horizontally from the edge of a table that is 1.15 m high, and it strikes the floor at a point 1.70 m from the base of the table. (a) What is the initial speed of the ball? m/s (b) How high is the ball above the floor when its velocity\n\n39. ## fitness\n\nVO2max is related to which one of the following choices? A. The efficiency of the body to secrete perspiration and take in lactic acid B. The efficiency of the body to take in oxygen and eliminate carbon dioxide C. The efficiency of the body to take in\n\n81. ## calculus\n\nfind the rms value of the function i=15(1-e^-1/2t) from t=0 to t=4\n\n82. ## algebra\n\nan insect flies 20 ft. in 1 s. how fast does the insect fly in miles per hour? round to the nearest hundredth if necessary.\n\n83. ## chemistry\n\nConsider the following reaction: C6H5CL+C2HOCL3=>C14H9CL5+H20 If 1142 g of C6H5CL is reacted with 485 g of C2HOCL3, what mass of C14H9CL5 would be formed?\n\n84. ## Math\n\nthe figures are similar. the are of one figure is given. find the area of the other figure to the nearesr whole number. the area of the larger triangle is 1,589 ft squared\n\n85. ## english\n\nHello! I am a senior student who lives in Paris, therefore English is my seocnd language. I need help to my examen oral. Tomorrow, I have to talk about: \"Occupy Wall Street\" movement (what's happening? analyzing the situation, is the movement important?\n\n86. ## math\n\nwhat are the next 3 terms? 12288, 3072 768 192\n\n87. ## calculus\n\nThe top and bottom margins of a paster are each 6 cm and the side margins are each 4cm. If the area of printed material on the poster is fixed at 384cm^2, find the dimensinos of the poster with the smallest area. I would like to know if my work is correct.\n\n88. ## Pre calculus\n\nA window is made up of a circle embedded in a rectangle so that the diameter of the circle is a side of the rectangle. Find the demensions that will admit the most light if the perimeter ofthe window is 44 meters.\n\n89. ## Calculus\n\nDifferentiate x^2*y^2 = (y+1)/(x+1) in terms of x and y. PS My answer is unfortunately at odds with that provided by the authors of the book (i.e. -y (y+1) (3x+2) all over x (x+1) (y+2), and I don't know whether it's a misprint or my method is wrong.\n\n90. ## geometry\n\nABCD is a rectangle. If AC=5x+2 and BD=x+22, find x.\n\n91. ## physics\n\na 8.0 kg object undergoes an acceleration of 2.2 m/s squared.what is the resulting force acting on it?\n\n92. ## chemistry\n\nA 28.4 L sample of methane gas is heated from 35.0 C to 76.0 C. The initial pressure of the gas is 1.00 atm at 35.0 C. Assuming constant volume, what is the final pressure of the gas\n\n93. ## socials\n\nlast poster is the : If they over-run canada your money'll be useless... it right here h t t p : //img341.imageshack . us/img341/3655/214417bhocwbbv . j p g message i don't know who this message pointing at but it look like they say don't give your money\n\n94. ## Math problem\n\n1.You have recently found a location for your bakery and have begun implementing the first phases of your business plan. 
Your budget consists of an $80,000 loan from your family and a$38,250 small business loan. These loans must be repaid in full within\n\n95. ## CALCULUS\n\nevaluate the integral of (e^3x)(cosh)(2x)(dx) A.(1/2)(e^5x)+(1/2)(e^x)+C B.(1/10)(e^5x)+(1/2)(e^x)+C C.(1/4)(e^3x)+(1/2)(x)+C D.(1/10)(e^5x)+(1/5)(x)+C\n\n96. ## Chemistry\n\nThe space shuttle uses fuel cells( AFC's) to produce electricity . In these fuel cells,O2 reacts with water and electrons to produce OH- ioins travel to the other electrode where they react with H2 to release electrons and produce more water. Based on this\n\n97. ## chemistry\n\nBalancing redox equations reactants: Cu, NO3^- (the negative sign is nex to the O, above the 3). products: Cu^2+, NO2 Medium: Acidic Cu+ NO3^- ----> Cu^2+ + NO2 Okay, I don't understand how to do this problem...but something someone needs to clear up for\n\n98. ## linear math\n\nLet u=(2,0,k,-1) v=(-4,0,-3,2) and w= (0,1,0,0). Each answer must be justified(there is no answer where k=nothing) a) find all values of k, (if any) for which u is orthogonal to v b) Find all values of k (if any) for which the set {u,v,w} is linearly\n\n99. ## Biology 11\n\nthe genotype of the hybrids of two pure-breeding pea plants that each have white flowers and round seeds (both traits dominant) would be?\n\n100. ## NJHS Application\n\nWhat does it says in the NJHS Application when they give it to you?\n\n101. ## science\n\nwhat is the concentration of a sugar solution that contains 8 g sugar dissolved in 500 ml of water? explain in ration form\n\n102. ## chemistry\n\nwhat mass of barium sulphate can be produced when 100.0 ml of a 0.100M solution of barium chloride is mixed with 100.0 ml of a 0.100M solution of Iron(lll) sulphate?\n\n103. ## mechanic fluid\n\nA kite weighs 9.8N having an area of 1m square makes an angle of 7.5 degree to the horizontal when flying in a wind at speeds of 35km/h. If the pull on the string attached to the kite is 50N and is inclined to the horizontal 45 degree. Determine the lift\n\n104. ## physics\n\nthe occupants of a car traveling at 25.7 m/s become weightless for an instant when traveling over the top of a hill. Calculate the radius of curvature of hill\n\n105. ## poetry\n\nwhat is the theme for Robert Frost's poem \"After Apple-Picking\"? My long two-pointed ladder’s sticking through a tree Toward heaven still, And there’s a barrel that I didn’t fill Beside it, and there may be two or three Apples I didn’t pick upon\n\n106. ## High School Chemistry\n\nExplain the reaction mechanism of the following reaction by showing the types of bond splitting and atom rearrangements. C + O2 = CO2\n\n107. ## trigonometry\n\nif cosθ=2/3 and tanθ\n\n108. ## world history\n\nWhich of the following was a Chinese school of thought during the fourth century B.C.E. that denounced ethics in favor of obedience? A. Buddhism B. Confucianism C. Daoism D. Legalism i concluded D what you think\n\n109. ## Pre Calculus\n\nFind a third degree polynomial P(x) that has zeros at x=-1, x=1, x=2 and whose x-term has coefficient 3?\n\n110. ## material & structure laboratory\n\nIs the measured normal consistency lies in the range for the Portland cements?\n\n111. ## Algebra\n\nHow do I create a quadratic equation that passes through the points (-1,2),(1,7),(4,5)?\n\n112. ## Law\n\nA retirement plan based on age in which an employer offers $10,000 as a base early retirement incentive for 55-year-olds,$8,000 for 56-year-olds, and so on, and nothing to persons age 60 or older. 
Does such a plan amount to illegal age discrimination? My\n\n113. ## Math homework\n\nPlease help me to solve In your industrial oven, you bake two baking sheets with 12 scones each, two baking sheets with 20 cookies each, and one baking sheet with 2 scones and 10 cookies Write an expression that illustrates the total cost of all baked\n\n114. ## english\n\nThe groom's family presented this letter to the bride's family. is presented an action( physical or mental) verb also It formally accepted the bride into the groom's family. is accepted (physical or mental) action verb?? I think presented is physical and\n\n115. ## fsu\n\nThe individual amounts in the Accounts Receivable Debit column of a sales journal should be posted to the accounts receivable subsidiary ledger, and the column total should be posted to the Accounts Receivable account in the general ledger.is it true or\n\n116. ## physics\n\na disk has a mass of 2kg and a length of 2 meters and it oscillates about an axis that is 1.5 meters from the center of the disk. It is initially displaced at an angular displacement of 0.2 radians and released. A) What is the equation of motion? B)What is\n\n117. ## English\n\nCould you please check these sentences, too? I tried to rephrase the first lines again. 1) Hamlet wonders whether it is better (for him) to endure the assaults of shameless fortune (\"the slings and arrows of an outrageous fortune\") or to fight against his\n\n118. ## physics\n\nIf the brisk walk were done at 4.0mi/h how far would the student have to walk to burn pound of fat\n\n119. ## physics\n\nA 83-kg worker clings to a lightweight rope going over a lightweight, low-friction pulley. The other end of the rope is connected to a 66-kg barrel of bricks. If the worker is initially at rest 16 m above the ground, how fast will he or she be moving when\n\n120. ## trigonometry\n\ntanθ= -2sqrt10/3 and π/2\n\n121. ## optimization calculus\n\nA piece of wire 25 m long is cut into two pieces. One piece is bent into a square and the other is bent into a circle. (a) How much of the wire should go to the square to maximize the total area enclosed by both figures? (b) how much of the wire should go\n\n122. ## chemistry\n\n20g of nitrogen gas and 10g of helium gas are placed together in a 5L container at 25C . Calculate the partial pressure of each gas and the total pressure of the gas mixture.\n\n123. ## chemistry\n\nA 10.6 liter at 25 C and 3.21 atm inside pressure became 12.8 liters at 221 C after driving in summer. What is the pressure in the hot tire?\n\n124. ## economics\n\nwhen you are given the units of resource, total product, and marginal product, what do you need to find to determine how many resources the firm will employ, what the MRP would be etc...\n\n125. ## chemistry\n\nwhat is oxidation state of NO2\n\n126. ## chemistry\n\nExplain the reaction mechanism of the following reaction by showing the types of bond splitting and atom rearrangements. C + O2 = CO2\n\nIf rhys is late for his finite mathematics class, and the probability that he is on time is 3/4. however if he is on time, he is liable to be less concerned about punctuality for the next class and his probability being on time drops to 1/2. rhy is on time\n\n129. ## The Calvin Cycle\n\nHow many molecules of CO2 will the calvin cycle consume in generating 2 molecules of fructose 6-phosphate? How many molecules of ATP will the calvin cycle consume in generating 2 molecules of fructose 6-phosphate? How many molecules of NADH will the calvin\n\n130. 
## D.E\n\nA particle moves on the x-axis with an acceleration, 246msta. Find the position and velocity of the particle at 3t, if the particle is at origin and has a velocity of 10ms when 0t by using either the method of undetermined\n\n131. ## statistics\n\nGroup the largest data set and find mean, median, mode, variance, standard deviation, 15-th, 45-th and 80-th percentiles of the grouped data. Then find the same sample statistics using the ungrouped data. Is there any difference? Comment.\n\n132. ## calculus\n\nEvaluate the integral of xe^2x(dx) A. (1/6)(x^2)(e^3x)+C B. (1/2)(xe^2x)-(1/2)(e^2x)+C C. (1/2)(xe^2x)-(1/4)(e^2x)+C D. (1/2)(x^2)-(1/8)(e^4x)+C\n\n133. ## Chemistry\n\nWhy water is a weak electrolyte?\n\n134. ## science(Ms.Sue)\n\nWhat term almost always describes top consumers? A)herbivores B)omnivores C)carnivores D)producers My answer is B am i right,if not please explain?\n\n135. ## math\n\nif cosθ=2/3 and tanθ\n\n136. ## speech\n\nThis might be a hard question to answer but I need an example for my speech for speech class. My speech is a persuasive on why you should foster an animal. I am talking about the benfits and how learning responsibility is a benefit. I know there are\n\n137. ## science(Ms.Sue)\n\nWhat does a maple tree do for a community? A)it creates energy B)it creates food C)it looks nice D)it stores water My answer is b,am i right?\n\n138. ## Chemistry\n\nExplain the reaction mechanism of the following reaction by showing the types of bond splitting and atom rearrangements. C + O2 = CO2\n\n139. ## Physics\n\nThe maximum speed of a 3.1-{\\rm kg} mass attached to a spring is 0.70 m/s, and the maximum force exerted on the mass is 12 N. What is the amplitude of motion for this mass? Express your answer using two significant figures.\n\n140. ## Chem-Repost\n\nit wouldn't let me post after putting spaces in the url and everything, so i just copied and pasted the q instead :P sorry for all the messyness. i'll see if i can delete q's, in the mean time you can maybe look at this please? Initial rate of rxn of\n\n141. ## To Sra\n\nThank You Sra!!! :) PARP stand for parents as reading partners it was a reading contest I join 3 contests this year reflections, battle of the books @ my school, and theis gift card contest. I working really hard :) i was chosen to be in ladies club we did\n\n142. ## Chemistry\n\nI just wanted to clarify that although Cody did respond to my question, we were still unable to determine the solution. I will post a link to the question in a minute. (i noticed you replied to the other posts a bit ago, but i hope im not being too abrupt.\n\n143. ## physics\n\nCalculate the tidal force experienced by Io. How does it compare to the tidal force experienced by the Moon due to the Earth? What would the Earth-Moon distance (i.e., distance between their centres) need to be in order for the Moon to experience similar\n\n144. ## Calculus\n\nIf f(x) = x ln(1+x) estimate the value of ln (2.1) given that ln2 = 0.6931.\n\n145. ## physics\n\nA 450 kg spherical mass is placed 4.40 meters away from a 550 kg spherical mass. What is the magnitude of the gravitational field midway between the two? ...... Im not quite sure how to do this any help would be appreciated.\n\n146. ## Geometry\n\nthe measure of each exterior angle is 45. how many sides does the polygon have?\n\n147. ## physics\n\na football punter accelerates a football from rest to a speed of 7 m/s during the time in which his toe is in contact with the ball (about 0.18 s). 
If the football has a mass of 0.50 kg, what average force does the punter exert on the ball?\n\n148. ## Algebra\n\nCould anyone help me write a system of equations with this? Thanks - A theater group sold a total of 440 tickets for $3940. Each regular ticket costs$5, each premium ticket costs $15, and each elite ticket cost$25. The number of regular tickets was three\n\n150. ## math problem\n\nCan you determine which number corresponds to the situation THe record high temperature for a date is 95.6 degree above zero What is the corresponding number for this sentence.\n\n151. ## chemistry\n\nhow can you tell if a combination in a mixture forms a precipatate or not?How can you tell from the sollubility table?\n\n152. ## history\n\nwhere can i get facts about winchester cathedral\n\n153. ## English topic sentence please pick one\n\nWhat does it mean to untie the knot of renunciation? It means all the promises made to one another by a couple in a marriage ceremony have been broken which ends with a divorce. or What does it mean to untie the knot of renunciation? It means getting a\n\n154. ## math\n\ngiven sinθ=-5/13 and π\n\n155. ## physics\n\nThe maximum speed of a 3.7-{\\rm kg} mass attached to a spring is 0.66 m/s, and the maximum force exerted on the mass is 11 N. What is the amplitude of motion for this mass? What is the force constant of the spring? What is the frequency of this system?\n\n156. ## geometry\n\nfind how much air is in a beach ball if the inside radius of the ball is 7 inches used pi=7 over 22 and round to the nearest hundredth of an inch.\n\n157. ## History\n\ncan someone help me make a really short, brief history of Chad,Africa?\n\n158. ## chemistry\n\nBalancing redox equations reactants: Cu, NO3\n\n194. ## Physics\n\nIf you pull a 10-kilogram cart along the ground for 5 meters using 2 Newtons of force,how high will it go before it stops?\n\n195. ## Calculus\n\nI need help finding the derivative of this function: f(x)=(X^2+2)^4/5 Thanks in advance! Any help would be greatly appreciated!\n\n196. ## chem\n\nwhat are the strong acids and what is a good way to remember them what are the strong bases and what is a good way to remember them I think but am not sure: are the weak acids and bases everything else?\n\n197. ## physics\n\nI just need a around about answer if a 250 pund man is travling 8mph and hits his face into a stationary objest around how many pounds of pressure is he getting hit with think of it like getting punched in the face\n\n198. ## Chemistry\n\nhow many moles of lead (II) sulfate will be produced when 500 grams of magnesium sulfate is allowedto react with lead (II) nitrate?\n\n199. ## English\n\nI urgently need you to revise these sentences, too. I made some variations, but I have to be sure that everything is OK. I don't know if sentence 1 can make sense. 1) Two philosophical positions, which remain unreconciled in Hamlet's monologue, are\n\n200. ## English\n\nUse a telescope to see into the sky\n\n201. ## English\n\nHere is the second part. Thank you very much. 1) If death,unlike sleep, is an end in itself, then it is desirable and therefore preferable to life's suffering. Once we have 2) As in sleep there is the possibility of dreaming, Hamlet wonders if there will\n\n202. ## science\n\nWhat type of environments do leopards need to live in?\n\n203. ## Chemistry\n\nhow much sodium is in 1.80 grams of Na3PO4\n\n204. ## English\n\nI checked the rephrase as you recommended to me. 
Thanjk you 1)Hamlet wonders whether it is better to bear the nasty things which luck throws his (our) way or to make a stand against his mass of troubles. 2) He can oppose his troubles either by committing\n\n205. ## College Algebra\n\n(ex)x·e4x + 4 = 1… so i need to solve for x…. how the hell do I do that???\n\n206. ## College Algebra\n\nso i am suppose to to find the max value attained by… g(x) = -3x2 - 24x - 18. but my teacher said \"There is a difference between determining the maximum value and naming the location of the maximum value.\" what does that mean??\n\n207. ## College Algebra..LOG\n\nlog4(little four) (40 - 3x) = 0 now i need to solve for x...\n\n208. ## geometry\n\nfind the values of the variables and the measures of the triangles:x+(x+2)+(6x+10)\n\n209. ## Math problem\n\nevaluate a+b/10 for a = 68 and b =12 a+b/10 =\n\n210. ## Pre-Calculus\n\nI'm having a had time with this problem. Could you please help with step by step resolution? e^-x / e^-1 + 3 dx Thank you..\n\n211. ## socials\n\nwhat happened after the cypress hill massacre?? The white invaders formed the Northwest Mounted Police ???? what else happened?" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93616885,"math_prob":0.8648887,"size":43229,"snap":"2020-24-2020-29","text_gpt3_token_len":11802,"char_repetition_ratio":0.14033546,"word_repetition_ratio":0.12512363,"special_character_ratio":0.2589234,"punctuation_ratio":0.08654166,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97903645,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-28T00:28:16Z\",\"WARC-Record-ID\":\"<urn:uuid:310c19a4-9353-4b83-b7fc-7d4662e41ffd>\",\"Content-Length\":\"141417\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2bca4f5d-625d-48f4-b888-687c503509eb>\",\"WARC-Concurrent-To\":\"<urn:uuid:4ad504f3-439f-42c1-9dad-f1c39086e397>\",\"WARC-IP-Address\":\"66.228.55.50\",\"WARC-Target-URI\":\"https://www.jiskha.com/archives/2011/11/20\",\"WARC-Payload-Digest\":\"sha1:LKY4LDA7J7E266YY6TTIDZRNH7PDAKO4\",\"WARC-Block-Digest\":\"sha1:YAQ32J7K6QUBYELNFF2VNUZPXIJ3W6ZJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347396300.22_warc_CC-MAIN-20200527235451-20200528025451-00133.warc.gz\"}"}
https://www.physicsforums.com/threads/initial-speed-of-the-proton.952434/
[ "# Initial speed of the proton?\n\n• kristjan\n\n## Homework Statement\n\nThe moving proton hits the second proton, which we consider to be stationary. In the moment of central strike gap between the protons is 10(-13)m. What was the initial speed of the moving proton? Proton mass is 1.67*10(-27)kg and charge 1.6*10(-19) C.\n\n## Homework Equations\n\nElectric potential energy U=kQq/r\nKinetic energy 1/2 mv2\n\n## The Attempt at a Solution\n\nkinetic energy is transformed into electric potential energy at the point of closest approach:\nelectric potential energy=kinetic energy of moving proton\nFrom there I find initial speed of moving proton to be v=1.66*10(6) m/s, in book answer is 2.35*10(6) m/s\n\n## Homework Statement\n\nThe moving proton hits the second proton, which we consider to be stationary. In the moment of central strike gap between the protons is 10(-13)m. What was the initial speed of the moving proton? Proton mass is 1.67*10(-27)kg and charge 1.6*10(-19) C.\n\n## Homework Equations\n\nElectric potential energy U=kQq/r\nKinetic energy 1/2 mv2\n\n## The Attempt at a Solution\n\nkinetic energy is transformed into electric potential energy at the point of closest approach:\nelectric potential energy=kinetic energy of moving proton\nFrom there I find initial speed of moving proton to be v=1.66*10(6) m/s, in book answer is 2.35*10(6) m/s\nPerhaps, the other proton was stationary at the beginning, but free to move. Then momentum is conserved.\n\nRemark by ehild above is right. If it is asumed that the target proton is (somehow) held stationary, then I am also getting the same answer that kristjan got. If you assume that the target proton is free to move, and that it is a 1-dimensional collision problem, using (non-relativistic) momentum and energy conservation, I get the book answer." ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8699673,"math_prob":0.9838318,"size":646,"snap":"2023-40-2023-50","text_gpt3_token_len":183,"char_repetition_ratio":0.1495327,"word_repetition_ratio":0.0,"special_character_ratio":0.27863777,"punctuation_ratio":0.08148148,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9928944,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T02:09:11Z\",\"WARC-Record-ID\":\"<urn:uuid:a0d567a4-78bf-4453-824d-19cddcbb723e>\",\"Content-Length\":\"68492\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d4ad2974-100b-4293-9cc2-9f310548d43c>\",\"WARC-Concurrent-To\":\"<urn:uuid:f194f11e-1d79-4c39-8db9-7af68e84dba3>\",\"WARC-IP-Address\":\"104.26.14.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/initial-speed-of-the-proton.952434/\",\"WARC-Payload-Digest\":\"sha1:32XBEMYSSVMOJBKNTJS67IWOVPG24T7W\",\"WARC-Block-Digest\":\"sha1:2BRDG4H525HBZKQDXNNFZCFP7UXICF2G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510238.65_warc_CC-MAIN-20230927003313-20230927033313-00887.warc.gz\"}"}
https://electowiki.org/wiki/Graduated_Majority_Judgment
[ "Like its predecessor Majority Judgment, Graduated Majority Judgment or GMJ is a single-winner, median-based voting system. Here's one way to explain it:\n\nBallot Explanation\n\nThe ballot will ask you to grade each candidate on a scale from A (excellent) to F (unacceptable). You may give two candidates the same grade if you wish. Any candidate whom you do not explicitly grade will get an F from you.\n\nCounting\n\nConceptual\n\nTo find the winner, first the \"A\" votes for each candidate are counted. If no candidate gets over 50% of the voters, the \"B\" votes are added to the count, then \"C\" votes, etc. The first candidate to get over 50% is the winner. If two candidates would reach 50% at the same grade, each candidate's votes for that grade are added gradually, and the winner is the one who needs the smallest portion of those votes to reach 50%.\n\nThis gradual process can be stated as a \"graduated score\" for each candidate. If a candidate reaches 50% using 8/10 of their \"C\" votes (along with all their A and B votes), then their graduated score would be 1.7 (a C-). Another candidate who needed only 2/10 of their \"C\" votes to reach 50% would have a graduated score of 2.3 (a C+), so between those two candidates the second would be the winner.\n\nTwo equivalent full procedures\n\nIt works as follows:\n\n1. Each voter grades each candidate on a grading scale such as A, B, C, D, F\n3. If a single candidate has a majority (that is, a number of votes greater than or equal to 50% of voters), they win.\n4. If no candidate has a majority, the next grade down (B) is added to the tally, and go back to step 3.\n5. If more than one candidate has a majority, the last grade tallied is removed from the tallies, and then re-added at the smallest fraction possible so that some candidate has a majority. This is as if the votes at that grade were added 1% at a time until one candidate gets a majority.\n\nThe above process is conceptually simple, but difficult in practice. The following process gives the same results, and is simpler to run in practice:\n\n1. Each voter grades each candidate on a grading scale such as A, B, C, D, F\n2. Each grade for each candidate is tallied.\n3. The tallies are used to find the median grade for each candidate.\n4. Tallies are added to find the V(>M), V(@M), and V(<M) (that is, votes above median, votes at median, and votes below median or blank) for each candidate.\n5. A candidate's adjustment is a number between -0.5 and +0.5, calculated using the formula (V(>M) - V(<M)) / (2 * V(@M))\n6. The candidate with the highest adjustment among those with the highest median, wins.\n\nIf medians are converted to integers (such as 0-4), then the adjusted median scores can easily be reported alongside the full tallies." ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9552382,"math_prob":0.94041675,"size":2882,"snap":"2022-05-2022-21","text_gpt3_token_len":709,"char_repetition_ratio":0.16921473,"word_repetition_ratio":0.057915058,"special_character_ratio":0.24947953,"punctuation_ratio":0.10714286,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98270446,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-28T19:40:20Z\",\"WARC-Record-ID\":\"<urn:uuid:1781a643-7460-4770-8a04-a61d6b5ecd5e>\",\"Content-Length\":\"33944\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ed11f10-1402-49b4-b138-e7c19acd31c7>\",\"WARC-Concurrent-To\":\"<urn:uuid:07b9b389-bc5e-4c15-b359-65531ac2a2e9>\",\"WARC-IP-Address\":\"149.56.141.75\",\"WARC-Target-URI\":\"https://electowiki.org/wiki/Graduated_Majority_Judgment\",\"WARC-Payload-Digest\":\"sha1:7FXXRQFOP7AIRDEWSTA4F7572OFXOJCE\",\"WARC-Block-Digest\":\"sha1:BUKQ7MSAP4KD5K7GEEDQA2R7Y2HWHDKJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320306335.77_warc_CC-MAIN-20220128182552-20220128212552-00490.warc.gz\"}"}
https://tex.stackexchange.com/questions/tagged/algorithmic?tab=Active
[ "# Questions tagged [algorithmic]\n\nalgorithmic is an environment for typesetting {algorithms}.\n\n218 questions\nFilter by\nSorted by\nTagged with\n36 views\n\n### Algorithmic “for” and “end for” don't not begin new line\n\nI'm using a for loop in an algorithmic environment as follows: \\begin{algorithm} \\caption{data collection} \\begin{algorithmic} \\Procedure{Synthesised data}{\\textit{input:} control docs} ...\n17 views\n\n### Errors with algorithm and algorithmic usepackages\n\nI'm writing pseudocode within the algorithm and algorithmic environment but both the numbering and typesetting get completely messed up. \\usepackage{algorithm} \\usepackage{algorithmic} \\usepackage[...\n25 views\n\n### Making equation more compact in algorithmic and algorithm in double column\n\nI am writing a pseudocode for algorithm under sty file aistats2021.sty using algorithmic and algorithm package. My question is how can I fit line 14 in one line? And here is the latex code. \\...\n33 views\n\n### Box around an algorithm that spans two columns\n\nConsider the following code, that provides a box around an algorithm. However, it does not work when the algorithm needs to span both columns in a double column article. To be specific, when I use \\...\n11k views\n\nbegin{algorithm} \\SetAlgoLined \\DontPrintSemicolon \\KwIn{$NPs$\\Comment*[r]{All Noun Phrases of D}} \\KwOut{${NPs}^{'}$ \\Comment*[r]{NPs without PHIs}} \\SetKwFunction{Fun}{PHIsDetection} \\Fun{$... 0answers 16 views ### Pagebreak in an algorithm using algstore/algrestore I'm trying to use algstore/algrestore to split an algorithm over two pages. However, I would like to not insert a title in the split portion of the algorithm, similar to what is shown here. My only ... 1answer 57 views ### Highlight Lines in side-by-side algorithm I have a 2 column latex template (IEEEtran), in which I put 2 algorithms side by side: \\documentclass[10pt,conference]{article} \\usepackage{algorithm} \\usepackage{algorithmic} \\usepackage{xcolor} \\def\\... 1answer 643 views ### Algorithm: overlapping code lines and misplaced final horizontal line I am currently adapting an article to the MDPI template and I have encountered some problems with the algorithm package. Code lines overlap and the last code line appears below the end horizontal line.... 1answer 25 views ### Reference to substeps (substates) of an algorithm I'm using the algorithmicx package, and also a substate (substep) environment from the answer HERE. The problem is that referring to the substates, and even some normal states is broken. I tried ... 1answer 36 views ### Is there a way to make latex ignore case sensitivity in its algorithmic package? I am writing a report which constitutes separately prepared documents. Unfortunately almost all the documents have algorithms written in different formats like \\WHILE and \\While . I am having an issue ... 0answers 21 views ### Why my pseudocode's indentation is rendering incorrectly? I have the following LaTeX code for a pseudocode block. \\begin{algorithm}[H] \\caption{Particle Swarm Optimization para Dados Relacionais} \\begin{algorithmic} \\REQUIRE A matriz de dissimilaridade$D$, ... 0answers 55 views ### The package algorithmic it doesn't work in Wiley template I'm trying to write an algorithm in a Wiley LaTeX template: \\begin{algorithm} \\caption{user-channel assignment algorithm} \\begin{algorithmic} \\STATE$Initializethe \\; AU \\; and \\; UAU \\; lists$\\... 
0answers 28 views ### Indent Latex algorithm numbering I am using algorithm and algorithmic package to right the algorithm. But at some point, indentation in the numbering of the algorithm got mixed up please see the code and image. \\begin{algorithm}[H] \\... 1answer 24 views ### Decrese algorithmic indent I am using algorithmic inside a tcolorbox. It has by itself a margin and combined with the indent from algorithmic it is too much: \\documentclass{standalone} \\usepackage[many]{tcolorbox} \\... 0answers 12 views ### Missing list of Algorithm in the content The command in my tex \\listofalgorithms is detected as unrecognized command. In the generated pdf file. The page of Algorithm is presented, while in the content, this page is skipped. So what can I do ... 1answer 36 views ### Using algorithm2e, how to make IF condition 1 OR conditions 2 OR conditions 3 THEN Using algorithm2e currently when I do this \\If{ condition_1 OR condition_2 OR condition_3} But if the stuff between {} is too long, it wraps around and does not look good since there is no ... 1answer 30 views ### why adding hyperref makes algorithm2e not span pages any more? TL 2020. compiled using lualatex I have set up of the form \\begin{algorithm}[H] .... \\end{algorithm} \\begin{algorithm}[H] .... \\end{algorithm} \\begin{algorithm}[H] .... \\end{... 1answer 24 views ### For loop not working properly in algorithm The example and output is attached \\documentclass[journal]{IEEEtran} \\IEEEoverridecommandlockouts \\usepackage{url} \\usepackage{footnote} \\usepackage{... 1answer 62 views ### Incorrect reference line in algorithmic and algorithm package. Everything is referenced as line number 1 [closed] I am using algorithm and algorithmic package to write pseudocode of algorithm. But when I refer to labels defined inside the algorithm, it always says it's line 1. I am at a loss to figure out how to ... 0answers 12 views ### Something's wrong--perhaps a missing \\item. \\end{algorithmic} Usage of algorithmic package in following way gives me error \\documentclass{article} \\usepackage{algorithm} \\usepackage{algorithmic} \\begin{document} \\begin{algorithm} \\begin{algorithmic} ... 0answers 35 views ### Is there any alternative for using \\algsetup{linenodelimiter={}} with algpseudocode I'm using algpseudocode library to write down my first algorithm in latex while showing line blocks. All I need to do is to change the style of the numbering from 1: to be 1 (without either . or :) I ... 1answer 38 views ### Latex Error “Command \\AND already defined. \\REQUIRE” with ifacconf document class I have been getting the following error \"Command \\AND already defined. \\REQUIRE\" when using algorithms in ifacconf document class. I couldn't figure the cause of it. A sample code can be found below: ... 2answers 26 views ### Hide the numbering of some lines in algorithmc I have the following LaTeX algorithm's code. I just need to hide the numbering of the first three lines; any suggestions? \\documentclass{article} \\usepackage{algorithm} \\usepackage{algorithmic} \\... 1answer 33 views ### Equation in Pseudocode problem I am trying to write a pseudocode in Latex including an equation. My MWE is the following '\\begin{algorithm} \\caption{Fuzzy c-means clustering algorithm} \\begin{algorithmic} \\State Choose ... 0answers 242 views ### Algorithm.sty not found I have been using Miktek 2.9 and Texstudio as editor. There was a missing package of algorithms. I had downloaded it. Generated .sty file from .ins file. 
Copied the files to tex/latex. Refreshed FNDB. ... 0answers 40 views ### cref & algorithmic incorrect referencing [duplicate] I tried to use the tipps in this post cleveref and algorithm2e. But it does not lead to the correct referencing, what is done wrong here and how to i get cref running with algorithm? \\documentclass[]{... 1answer 23 views ### Algorithm Indentation issue with Multicols Issue with indentation in an algorithm block inside multicols: Is this a known issue? \\documentclass{article} \\usepackage{amsfonts} % blackboard math symbols \\usepackage{amsmath} % required for \\... 1answer 87 views ### Missing$ inserted, Missing \\endgroup inserted\n\nI am writing an algorithm in latex. I am getting some error. \\documentclass{article} \\usepackage[utf8]{inputenc} \\usepackage{amsmath} \\title{Demo} \\author{Subhadip Patra } \\date{March 2020} \\...\n144 views\n\n### How to write an algorithm in algorithm environment in Latex?\n\nHow do I write the following algorithm/pseudocode in algorithm environment in Latex? train_ANN(fi,wi,oj) For epochs from 1 to N While (j<=m) Randomly initialize wi={w1,w2…..wn} ...\n293 views\n\n### Latex generate error in IEEE ACCESS template in Figure, Algorithm, and Table\n\n\\documentclass{ieeeaccess} \\usepackage{cite} \\usepackage{amsmath,amssymb,amsfonts} \\usepackage{algorithmic} \\usepackage{graphicx} \\usepackage{textcomp} \\begin{document} \\begin{table}[h!] \\centering ...\n68 views\n\n### How to add C++ style multi-line comment in algorithmic Latex\n\nI am using algorithmic to write a pseudocode in Latex. I wish to add both single(//) and multiple(/**/) lines comment in my code. \\Procedure{Foo}{$x$} \\State Set $a\\gets b+c$ \\LONGCOMMENT{this ...\n10k views\n\n435 views\n\n73 views\n\n### Algorithmic and hyperref packages: strange interaction\n\nI am not sure the bug below has already been reported as I cannot find clear information online. See the MWE: if \\usepackage{hyperref} is not commented, the space below the equation is fairly large, ...\n127 views\n\n### How to fix an issue about multiple output lines in algorithm\n\nWhen I write a pseudo code using algorithm and algorithmic. Output statement overflow a line like below. \\begin{algorithm}[!h] \\textbf{Input:} The set of public $parameters$\\\\ \\textbf{Output:} ...\n279 views\n\n### Vertical Lines in Algorithms not appearing\n\nI am using \\documentclass[5p]{elsarticle} and packages \\usepackage{algorithmic} \\usepackage[ruled, vlined, longend, linesnumbered]{algorithm2e} There is no error but did not get the vertical lines in ...\n179 views\n\n### algorithmic package dont automatically break page when too long [duplicate]\n\nI have an algorithm defined with algorithmc like this: \\documentclass{article} \\usepackage{algorithm} \\usepackage{algorithmicx} \\begin{document} \\begin{algorithm} \\begin{...\n181 views\n\n### Using \\cref for algorithm step\n\nWhat \\crefname do I provide for referring to a step of an algorithm (not the algorithm float itself!)? I am able to label the steps of an algorithm and refer them using \\ref but how do I use \\cref ..." ]
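Since the tag description above is terse, a minimal usage sketch of the environment may help; this is a generic example of mine, not taken from any of the listed questions:

```latex
\documentclass{article}
\usepackage{algorithm}    % the float wrapper
\usepackage{algorithmic}  % the algorithmic environment itself

\begin{document}

\begin{algorithm}
\caption{Linear search}
\begin{algorithmic}[1]    % the optional [1] numbers every line
\REQUIRE an array $a$ of length $n$ and a key $k$
\FOR{$i = 1$ \TO $n$}
    \IF{$a_i = k$}
        \RETURN $i$
    \ENDIF
\ENDFOR
\RETURN $-1$
\end{algorithmic}
\end{algorithm}

\end{document}
```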
{"ft_lang_label":"__label__en","ft_lang_prob":0.82290214,"math_prob":0.7333579,"size":13828,"snap":"2020-45-2020-50","text_gpt3_token_len":3531,"char_repetition_ratio":0.21527778,"word_repetition_ratio":0.01382928,"special_character_ratio":0.23437952,"punctuation_ratio":0.13218616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.984981,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-22T18:08:39Z\",\"WARC-Record-ID\":\"<urn:uuid:60415032-23f4-466a-9eb2-ccbec71690a9>\",\"Content-Length\":\"259706\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f8834010-a1f0-49cc-9e43-b333f4bc22c0>\",\"WARC-Concurrent-To\":\"<urn:uuid:b245b809-f231-464e-98b7-27d51b01e839>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/tagged/algorithmic?tab=Active\",\"WARC-Payload-Digest\":\"sha1:LHB6UOWJAFCQONDEJQR2TSGRFOIQLTVP\",\"WARC-Block-Digest\":\"sha1:JOI6LAH2OTMZSFKTTC5W2KSU2IP27H76\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107880014.26_warc_CC-MAIN-20201022170349-20201022200349-00429.warc.gz\"}"}
https://de.mathworks.com/matlabcentral/cody/problems/109-check-if-sorted/solutions/970617
[ "Cody\n\n# Problem 109. Check if sorted\n\nSolution 970617\n\nSubmitted on 15 Sep 2016 by Michael Ebert\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nx = sort(rand(1,10^5)); y_correct = 1; assert(isequal(sortok(x),y_correct))\n\n2   Pass\nx = [1 5 4 3 8 7 3]; y_correct = 0; assert(isequal(sortok(x),y_correct))" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.57285535,"math_prob":0.96226656,"size":474,"snap":"2019-35-2019-39","text_gpt3_token_len":143,"char_repetition_ratio":0.1425532,"word_repetition_ratio":0.0,"special_character_ratio":0.32911393,"punctuation_ratio":0.10989011,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9698482,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-17T15:37:44Z\",\"WARC-Record-ID\":\"<urn:uuid:7b689a6b-354d-42de-9459-89932ef87eee>\",\"Content-Length\":\"71345\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:49a3d4ae-a52c-404a-9434-d6c429f1a6bd>\",\"WARC-Concurrent-To\":\"<urn:uuid:3008b57a-4755-4a91-acd9-85efaf11535d>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://de.mathworks.com/matlabcentral/cody/problems/109-check-if-sorted/solutions/970617\",\"WARC-Payload-Digest\":\"sha1:5LBU2H6EIVXDV3CCAZLB5NN5DUIUMPIZ\",\"WARC-Block-Digest\":\"sha1:67UJBIT2UOC2ULXY22BT3ZZJGUICFF27\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573080.8_warc_CC-MAIN-20190917141045-20190917163045-00220.warc.gz\"}"}
https://www.happycompiler.com/class-10-specimen-2017/
[ "Categories\n\n# Computer Applications Specimen Paper 2017\n\n## Section A\n\n### Question 1\n\na) Name any four tokens of Java.\nIdentifiers, Keywords, Literals, Operators.\nb) Give the difference between actual parameter and formal parameter.\nThe parameters passed during function call are actual parameters. The parameters used during function definition are formal parameters.\nc) What is an identifier?\nIdentifiers are the variables that are the named storage locations to store values temporarily in a program.\nd) Write an expression in Java for", null, ".\nMath.cos(x) + Math.pow(a * a + b * b)\ne) What is the result produced by 2 – 10 * 3 + 100 / 11? Show the steps.\n2 – 10 * 3 + 100 / 11\n= 2 – 30 + 100 / 11\n= 2 – 30 + 9\n= -28 + 9\n= -19.\n\n### Question 2\n\na) What is the difference between local variable and instance variable?\nThe variables declared inside a function are called local variables. The variables declared inside a class so that each object gets its own copy of data members are called instance variables.\nb) int x = 20, y = 10, z;\nz = ++x * (y–) – y?\nShow the steps.\nz = 21 * 10 – 9\nz = 210 – 9\nz = 201.\nc) What is the purpose of default in a switch?\nThe default case provides a case that gets executed when no other case is matched.\nd) Give the difference between linear search and binary search.\nLinear search can be applied on both sorted and unsorted lists. Binary search can only be applied on sorted lists.\ne) What will be the output of the following code?\nfloat x = 7.87F;\nSystem.out.println(Math.ceil(x));\nSystem.out.println(Math.floor(x));\n8.0\n7.0\n\n### Question 3\n\na) State the difference between if-else-if ladder and switch case.\nThe if-else-if ladder is a bi-directional conditional statement. It can do a variety of comparisons. The switch case is a multi-branching conditional statement. It can only compare for equality of values.\nb) Explain the concept of constructor with an example.\nA constructor is a method in class with the same name as the class name. It is used to create objects and initialize them with legal initial values.\nExample:\n\n``````class Rectangle{\nint length;\npublic Rectangle(int a, int b){\nlength = a;\n}\n}``````\n\nc) What will be the output of the following program segments?\ni) `String s = \"application\";`\n`int p = s.indexOf('a');`\n`System.out.println(p);`\n`System.out.println(p + s);`\n0\n0application\nii) `String st = \"PROGRAM\";`\n`System.out.println(st.indexOf(st.charAt(4)));`\n1\niii) `int a = 0;`\n`if(a > 0 && a < 20)`\n`a++;`\n`else`\n`a--;`\n`System.out.println(a);`\n-1\niv) `int a = 5, b = 2, c;`\n`if(a > b || a != b)`\n`System.out.print(c + \" \" + a + \" \" + b);`\n7 6 1\nv) `int i = 1;`\n`while(i++ <= 1){`\n`i++;`\n`System.out.print(i + \" \");`\n`}`\n`System.out.print(i);`\n3 4\nd) Differentiate between isUpperCase(char) and toUpperCase(char).\nThe isUpperCase(char) returns a boolean value whereas the toUpperCase(char) returns a char data type value.\ne) What is the difference between a constructor and a member function of a class?\nA constructor doesn’t have a return type (not even void). A function always has a return type.\nf) What is the difference between a static member function and a member function which is not static?\nA static member function is invoked without an object. 
A non-static function is invoked through an object.\n\n## Section B\n\n### Question 4\n\nDefine a class TaxiMeter having the following description:\nData members/instance variables:\nint taxiNo: to store the taxi number.\nString name: to store the passenger’s name.\nint km: to store the number of kilometers traveled.\ndouble bill: to store the total bill amount.\nMember functions:\nTaxiMeter(): constructor to initialize taxiNo to 0, name to “” and km to 0.\ninput(): to store taxi number, name, number of kilometers.\ncalculate(): to calculate bill for a customer according to given conditions:\n\nKilometers Traveled (km) Rate per km\n≤ 1 km Rs. 25\n1 < km ≤ 6 Rs. 10\n6 < km ≤ 12 Rs. 15\n12 < km ≤ 18 Rs. 20\n> 18 Rs. 25\n\ndisplay(): to display the details in the following format:\n\nTaxi No. Name Kilometers Traveled Bill Amount\n\nCreate an object in the main() method and call all the above methods in it.\n\n``````import java.io.*;\nclass TaxiMeter{\nint taxiNo;\nString name;\nint km;\ndouble bill;\npublic TaxiMeter(){\ntaxiNo = 0;\nname = \"\";\nkm = 0;\nbill = 0.0;\n}\npublic void input()throws IOException{\nSystem.out.print(\"Taxi Number: \");\nSystem.out.print(\"Name: \");\nSystem.out.print(\"Number of km: \");\n}\npublic void calculate(){\nif(km <= 1)\nbill = 25 * km;\nelse if(km > 1 && km <= 6)\nbill = 10 * km;\nelse if(km > 6 && km <= 12)\nbill = 15 * km;\nelse if(km > 12 && km <= 18)\nbill = 20 * km;\nelse\nbill = 25 * km;\n}\npublic void display(){\nSystem.out.println(\"TaxiNo.\\tName\\tKilometers Traveled\\tBill Amount\");\nSystem.out.println(taxiNo + \"\\t\" + name + \"\\t\" + km + \"\\t\" + bill);\n}\npublic static void main(String args[])throws IOException{\nTaxiMeter obj = new TaxiMeter();\nobj.input();\nobj.calculate();\nobj.display();\n}\n}``````\n\n### Question 5\n\nWrite a menu-driven program to find the sum of the following series depending on the user choosing 1 or 2:\n1. s = 1/4 + 1/8 + 1/12 + … up to N terms.\n2. s = 1/1! – 2/2! + 3/3! – … up to N terms.\nwhere ! stands for factorial of the number and the factorial value of a number is the product of all integers from 1 to that number, e.g. 5! = 1 × 2 × 3 × 4 × 5 = 120. Use switch-case.\n\n``````import java.io.*;\npublic static void main(String args[])throws IOException{\nSystem.out.println(\"1. 1/4 + 1/8 + 1/12 + ... N terms\");\nSystem.out.println(\"2. 1/1! - 2/2! + 3/3! - ... 
N terms\");\nswitch(choice){\ncase 1:\nSystem.out.print(\"N = \");\ndouble sum = 0.0;\nfor(int i = 1; i <= n; i++)\nsum += 1.0 / (4 * i);\nSystem.out.println(\"Sum = \" + sum);\nbreak;\ncase 2:\nSystem.out.print(\"N = \");\nsum = 0.0;\nint sign = 1;\nfor(int i = 1; i <= n; i++){\nint f = 1;\nfor(int j = 1; j <= i; j++)\nf *= j;\nsum += sign * (double)i / f;\nif(sign == 1)\nsign = -1;\nelse\nsign = 1;\n}\nSystem.out.println(\"Sum = \" + sum);\nbreak;\ndefault:\nSystem.out.println(\"Invalid choice!\");\n}\n}\n}``````\n\n### Question 6\n\nWrite a program to accept a sentence and print only the first letter of each word of the sentence in capital letters separated by a full stop.\nExample:\nINPUT: This is a cat.\nOUTPUT: T.I.A.C.\n\n``````import java.io.*;\nclass Sentence{\npublic static void main(String args[])throws IOException{\nSystem.out.print(\"Enter the sentence: \");\ns = s.trim();\ns = s.toUpperCase();\nString t = \"\";\nfor(int i = 0; i < s.length(); i++){\nif(i == 0 || s.charAt(i - 1) == ' ')\nt += s.charAt(i) + \".\";\n}\nSystem.out.println(t);\n}\n}``````\n\n### Question 7\n\nWrite a program to create an array to store 10 integers and print the largest integer and the smallest integer in that array.\n\n``````import java.io.*;\nclass Find{\npublic static void main(String args[])throws IOException{\nint a[] = new int;\nSystem.out.println(\"Enter \" + a.length + \" numbers:\");\nfor(int i = 0; i < a.length; i++)\nint small = a;\nint large = a;\nfor(int i = 1; i < a.length; i++){ if(small > a[i])\nsmall = a[i];\nif(large < a[i])\nlarge = a[i];\n}\nSystem.out.println(\"Smallest number: \" + small);\nSystem.out.println(\"Largest number: \" + large);\n}\n}``````\n\n### Question 8\n\nWrite a program to calculate the sum of all the prime numbers between the range of 1 and 100.\n\n``````class Prime{\npublic static void main(String args[]){\nint sum = 0;\nfor(int i = 1; i <= 100; i++){\nint f = 0;\nfor(int j = 1; j <= i; j++){\nif(i % j == 0)\nf++;\n}\nif(f == 2)\nsum += i;\n}\nSystem.out.println(\"Sum = \" + sum);\n}\n}``````\n\n### Question 9\n\nWrite a program to store 10 names in an array. Arrange these in alphabetical order by sorting. Print the sorted list. Take single word names, all in capital letters, e.g. SAMSON, AJAY, LUCY, etc.\n\n``````import java.io.*;\npublic static void main(String args[])throws IOException{\nString a[] = new String;\nSystem.out.println(\"Enter \" + a.length + \" numbers:\");\nfor(int i = 0; i < a.length; i++){\na[i] = a[i].trim();\nif(a[i].indexOf(' ') > 0)\na[i] = a[i].substring(0, a[i].indexOf(' '));\n}\nfor(int i = 0; i < a.length; i++){\nfor(int j = 0; j < a.length - 1 - i; j++){\nif(a[j].compareTo(a[j + 1]) > 0){\nString temp = a[j];\na[j] = a[j + 1];\na[j + 1] = temp;\n}\n}\n}\nSystem.out.println(\"Sorted List of names:\");\nfor(int i = 0; i < a.length; i++)\nSystem.out.print(a[i] + \"\\t\");\n}\n}``````", null, "" ]
[ null, "data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzMyJyB3aWR0aD0nMTM4JyB4bWxucz0naHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmcnIHZlcnNpb249JzEuMScvPg==", null, "data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzE2MCcgd2lkdGg9JzE2MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59699625,"math_prob":0.9941408,"size":8809,"snap":"2020-45-2020-50","text_gpt3_token_len":2479,"char_repetition_ratio":0.12674616,"word_repetition_ratio":0.09377049,"special_character_ratio":0.3395391,"punctuation_ratio":0.20990202,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9976772,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-30T17:23:58Z\",\"WARC-Record-ID\":\"<urn:uuid:8b9efdd3-e57c-41c1-b8c3-41db46804650>\",\"Content-Length\":\"173573\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:036dc635-9b6a-47c4-83d2-534ecdc5cd7d>\",\"WARC-Concurrent-To\":\"<urn:uuid:88de9870-407d-40eb-871e-304f3174b312>\",\"WARC-IP-Address\":\"166.62.27.186\",\"WARC-Target-URI\":\"https://www.happycompiler.com/class-10-specimen-2017/\",\"WARC-Payload-Digest\":\"sha1:3IAMID7JLVCEGMYKR44XN7FHI4WINEFX\",\"WARC-Block-Digest\":\"sha1:PL7IAJ2ERTGNH7YZJTYSUH7RFBRXXMBN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141216897.58_warc_CC-MAIN-20201130161537-20201130191537-00059.warc.gz\"}"}
https://hsm.stackexchange.com/questions/8073/what-is-the-intuition-behind-brahmagupta-s-rule-for-multiplying-negative-numbers
[ "# What is the intuition behind Brahmagupta’s rule for multiplying negative numbers?\n\nThe rule says:\n\nThe product (or quotient) of two debts is a fortune\n\nWhat I’m struggling with is what exactly is the product of two debts? What accounting need forces one to multiply debts? How do you interpret something like this?\n\nHere’s what Wikipedia has to say but it doesn’t sound right:\n\nThus (−2) × 3  =  −6 and (−2) × (−3)  =  6. The reason behind the first example is simple: adding three −2's together yields −6: (−2) × 3  =  (−2) + (−2) + (−2)  =  −6. The reasoning behind the second example is more complicated. The idea again is that losing a debt is the same thing as gaining a credit. In this case, losing two debts of three each is the same as gaining a credit of six: (−2 debts ) × (−3 each)  =  +6 credit.\n\nThe trippy thing is $$(-3$$ each$$)$$ - that makes no sense IMHO.\n\nI’m okay even coming up with a contrived scenario but I can’t swallow the interpretation above.\n\nSo, how should one interpret “debt times debt is fortune from an accounting POV?\n\nThis question seems to have similar intent but the answers there are inconclusive or not \"intuitively correct\" (multiplication by $$-3$$ people for example). Hence the focus of this question is to purely understand it from an accounting POV vs. multiplying negative numbers in general.\n\n• Possible duplicate of Historically, how did people define multiplication for negative numbers? Dec 17 '18 at 21:33\n• According to Mumford, \"The predominately oral transmission... has not left us with any record of these discoveries. They just appear full blown in Brahmagupta’s summary.\" To the extent that the reasons can be guessed, it seems the rule was adopted because it made algebra work, not because of any intuitions. Explanations, such as Wikipedia's, are late inventions to help memorize the already adopted rule. Dec 17 '18 at 21:38\n• @Conifold- Yes. I'm just trying to rediscover it or even contrive a situation that'd warrant a meaningful output from \"debt times debt\"...\n– PhD\nDec 17 '18 at 22:04\n• For motivation unrelated to history Math SE would be a better place to ask. Dec 17 '18 at 22:08\n• Dec 18 '18 at 7:42\n\n$$3 \\times 2 = 6$$ because $$3 \\times 2 = 3+3$$.\nThe same for $$(−3) \\times 2 = (−3) + (−3) = −6$$.\nStarting from this, we may interpret $$a \\times (−2)$$ as \"repeated subtraction\" : we have to \"subtract\" twice the quantity $$a$$.\nIf $$a$$ is a negative quantitiy, i.e. a debt, to subtract a debt is to earn money : if I have a debt of $$-6$$ with you and I give it to you, my account changes from $$−6$$ to $$0$$ while your account changes from $$0$$ to $$-6$$.\nThe intuition is that since debts cancel credits, $$k0=0$$ determines $$k(-l)$$ from $$kl$$, even for $$k<0$$. Let $$a=(-2)(-3)$$ so $$a-6=(-2)(-3)+(-2)3=(-2)0=0$$. Therefore, $$a=6$$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94298154,"math_prob":0.99647325,"size":1233,"snap":"2021-43-2021-49","text_gpt3_token_len":311,"char_repetition_ratio":0.09601302,"word_repetition_ratio":0.0,"special_character_ratio":0.270073,"punctuation_ratio":0.0875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99965584,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T13:55:03Z\",\"WARC-Record-ID\":\"<urn:uuid:6cebe26c-cd85-4dbf-9243-5c3415b69c96>\",\"Content-Length\":\"148629\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bd85d416-0f59-4d55-b3e2-a7389331c990>\",\"WARC-Concurrent-To\":\"<urn:uuid:3cdc605f-26ae-45b9-8a9a-fab31859cb44>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://hsm.stackexchange.com/questions/8073/what-is-the-intuition-behind-brahmagupta-s-rule-for-multiplying-negative-numbers\",\"WARC-Payload-Digest\":\"sha1:JQFMILXFPL6NC43P37QFP3VX7SVXN3TW\",\"WARC-Block-Digest\":\"sha1:3GHUJ2U47C2KO45CEGZHG736JTT7MFNB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358189.36_warc_CC-MAIN-20211127133237-20211127163237-00184.warc.gz\"}"}
http://mathcentral.uregina.ca/QQ/database/QQ.09.07/h/ned1.html
[ "", null, "", null, "", null, "SEARCH HOME", null, "Math Central Quandaries & Queries", null, "", null, "Question from Ned: Given an arc with length of 192 inches (don't know chord length), and arc height of 6 inches, how would one find the radius?", null, "Hi Ned,", null, "In my diagram if the measure of the angle BCA is θ radians then the length of the arc AB is given by r × θ and hence\n\n192 = r × θ\n\nAlso from triangle DCA\n\ncos(θ/2) = (r - 6)/r\n\nPutting these two equation together gives\n\ncos(192/(2r)) = (r - 6)/r or\ncos(96/r) = (r - 6)/r\n\nUnfortunately this equation can not be solved algebraically for r, the best you can do is approximate r. If you let\n\nf(r) = cos(96/r) - (r - 6)/r\n\nthen you are looking for a root of f(r), that is a value of r so that f(r) = 0.\n\nOne approximation technique is newton's method but even here there are some pitfalls. Newton's method requires an initial guess and because of the oscillatory nature of the cosine function your equation has multiple positive roots and a poor initial guess might lead you to the wrong root. Since the sagitta is so small, we know that θ < π radians, so given that the arc is 192, then r ≥ 192/π, which provides us with a suitable initial guess for newton.\n\nNewton's method applied to your problem yields r = 767 inches and hence θ = 192/767 = 0.2503 radians which is 45.05 degrees.\n\nStephen La Rocque and Harley Weston", null, "", null, "", null, "", null, "", null, "Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences." ]
[ null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/search.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/QQ/database/QQ.09.07/h/ned1.1.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/qqsponsors.gif", null, "http://mathcentral.uregina.ca/lid/images/mciconnotext.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9186624,"math_prob":0.9919975,"size":1124,"snap":"2022-05-2022-21","text_gpt3_token_len":267,"char_repetition_ratio":0.102678575,"word_repetition_ratio":0.0,"special_character_ratio":0.24733096,"punctuation_ratio":0.07174888,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99989295,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-25T10:07:32Z\",\"WARC-Record-ID\":\"<urn:uuid:25a7e51d-c3a4-4fd6-bf0d-2c882d637028>\",\"Content-Length\":\"8402\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0d90fb6a-4404-4bbd-b4be-24bf9d6353d5>\",\"WARC-Concurrent-To\":\"<urn:uuid:57c95f14-8ef4-455c-ac45-9ae688c937ae>\",\"WARC-IP-Address\":\"142.3.156.40\",\"WARC-Target-URI\":\"http://mathcentral.uregina.ca/QQ/database/QQ.09.07/h/ned1.html\",\"WARC-Payload-Digest\":\"sha1:PQO4C2CTR4EJ777NBTMKLMYBPKE526CE\",\"WARC-Block-Digest\":\"sha1:X3DBZQ6IL7CKYWRVKCMG2MENXBBBASQM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662584398.89_warc_CC-MAIN-20220525085552-20220525115552-00272.warc.gz\"}"}
https://askgoodquestions.blog/2019/09/09/10-my-favorite-theorem/
[ "# #10 My favorite theorem\n\nThis blog does not do suspense*, so I’ll come right out with it: Bayes’ Theorem is my favorite theorem.  But even though it is my unabashed favorite, I introduce Bayes’ Theorem to students in a stealth manner.  I don’t present the theorem itself, or even its name, until after students have answered an important question by essentially deriving the result for themselves.  The key is to use a hypothetical table of counts, as the following examples illustrate.  As always, questions that I pose to students appear in italics.\n\n* See question #8 in post #1 here.\n\n1. The ELISA test for HIV was developed in the mid-1980s for screening blood donations.  An article from 1987 (here) gave the following estimates about the ELISA test’s effectiveness in the early stages of its development:\n\n• The test gives a (correct) positive result for 97.7% of blood samples that are infected with HIV.\n• The test gives a (correct) negative result for 92.6% of blood samples that are not infected with HIV.\n• About 0.5% of the American public was infected with HIV.\n\nFirst I ask students: Make a prediction for the percentage of blood samples with positive test results that are actually infected with HIV.  Very few people make a good prediction here, but I think this prediction step is crucial for creating cognitive dissonance that leads students to take a closer look at what’s going on.  Lately I have rephrased this question as multiple choice, asking students to select whether their prediction is closest to 10%, 30%, 50%, 70%, or 90%.  Most students respond with 70% or 90%.\n\nThen I propose the following solution strategy: Assume that the given percentages hold exactly for a hypothetical population of 1,000,000 people, and use the percentages fill in the following table of counts:\n\nThe numbers in parentheses indicate the order in which we can use the given percentages to complete the table of counts.  I insist that all of my students get out their calculators, or use their phone as a calculator, as we fill in the table together, as follows:\n\n1. 005 × 1,000,000 = 5,000\n2. 1,000,000 – 5,000 = 995,000\n3. 0.977 × 5,000 = 4,885\n4. 5,000 – 4,885 = 115\n5. 0.926 × 995,000 = 921,370\n6. 995,000 – 921,370 = 73,630\n7. 4,885 + 73,630 = 78,515\n8. 115 + 921,370 = 921,485\n\nThese calculations produce the following table:\n\nThen I say to my students: That was fun, and it filled 10 minutes of class time, but what was the point?  What do we do now with this table to answer the original question?  Many students are quick to point out that we can determine the percentage of positive results that are actually HIV-infected by starting with 78,515 (the total number of positive results) as the denominator and using 4,885 (the number of these positive results that are actually HIV-infected) as the numerator.  This produces: 4,885 / 78,515 ≈ 0.062, or 6.2%.\n\nAt this point I act perplexed* and say: Can this really be right?  Why would this percentage be so small when the accuracy percentages for the test are both greater than 90%?  This question is much harder for students, but I encourage them to examine the table and see what’s going on.  A student eventually points out that there are a lot more false positives (people who test positive but do not have the disease) than there are true positives (people who test positive and do have the disease).  Exactly! And why is that?  
I often need to direct students’ attention to the base rate: Only half of one percent have the disease, so a very large percentage of them are outnumbered by a fairly small percentage of the 99.5% who don’t have the disease.  In other words, 7.4% of 995,000 people greatly outnumbers 97.7% of 5,000 people.\n\n* I am often truly perplexed, so I have no trouble with acting perplexed to emphasize a point.\n\nI like to think that most students understand this explanation, but there’s no denying that this is a difficult concept.  Simply understanding the question, which requires recognizing the difference between the two conditional percentages (percentage of people with disease who test positive versus percentage of people with positive test result who have disease), can be a hurdle.  To help with this I like to ask: What percentage of U.S. Senators are American males?  What percentage of American males are U.S. Senators?  Are these two percentages the same, fairly close, or very different?  The answer to the first question is a very large percentage: 80/100 = 80% in 2019, but the answer to the second question is an extremely small percentage: 80 / about 160 million ≈ 0.00005%.  These percentages are very different, so it shouldn’t be so surprising that the two conditional percentages* with the ELISA test are also quite different.  At any rate I am convinced that the table of counts makes this more understandable than plugging values into a formula for Bayes’ Theorem would.\n\n* I have avoided using the term conditional probability here, because I think the term conditional percentage is less intimidating to students, suggesting something that can be figured out from a table of counts rather than requiring a mathematical formula.\n\nSome students think this fairly small percentage of 6.2% means that the test result is not very informative, so I ask: How many times more likely is a person to be HIV-infected if they have tested positive, as compared to a person who has not been tested?  This requires some thought, but students recognize that they need to compare 6.2% with 0.5%.  The wording how many times can trip some students up, but many realize that they must take the ratio of the two percentages: 6.2% / 0.5% = 12.4. Then I challenge students with: Write a sentence, using this context, to interpret this value.  A person with a positive test result is 12.4 times more likely to be HIV-infected than someone who has not yet been tested.\n\nI also ask students: Can a person who tests negative feel very confident that they are free of the disease?  Among the blood samples that test negative, what percentage are truly not HIV-infected?  Most students realize that this answer can be determined from the table above: Among the 921,485 who test negative, 921,370 do not have the disease, which is a proportion of 0.999875, or 99.9875%.  A person who tests negative can be quite confident that they do not have the disease.  Such a very high percentage is very important for screening blood donations.  It’s less problematic that only 6.2% of the blood samples that are rejected (due to a positive test result) are actually HIV-infected.\n\nYou might want to introduce students to some terminology before moving on.  The 97.7% value is called the sensitivity of the test, and the 92.6% value is called the specificity.  You could also tell students that they have essentially derived a result called Bayes’ Theorem as they produced and analyzed the table of counts.  
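The same hypothetical-counts arithmetic is easy to script. A minimal sketch in Python (the variable names are mine; the inputs are the ELISA figures quoted above):

```python
# Hypothetical population of 1,000,000, as in the table above.
population = 1_000_000
base_rate = 0.005      # P(HIV-infected)
sensitivity = 0.977    # P(positive | infected)
specificity = 0.926    # P(negative | not infected)

infected = base_rate * population                    # 5,000
not_infected = population - infected                 # 995,000
true_positives = sensitivity * infected              # 4,885
false_positives = (1 - specificity) * not_infected   # 73,630

# Percentage of positive results that are actually infected.
ppv = true_positives / (true_positives + false_positives)
print(round(100 * ppv, 1))  # 6.2
```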
You could give them a formula or two for Bayes’ Theorem.  The first, presented in terms of H for hypothesis and E for evidence, uses a two-event partition (such as disease, not): $$P(H \\mid E) = \\frac{P(E \\mid H)\\,P(H)}{P(E \\mid H)\\,P(H) + P(E \\mid \\bar{H})\\,P(\\bar{H})}.$$ A more general version of Bayes’ Theorem, for a partition $$H_1, \\dots, H_k$$, is $$P(H_i \\mid E) = \\frac{P(E \\mid H_i)\\,P(H_i)}{\\sum_{j=1}^{k} P(E \\mid H_j)\\,P(H_j)}.$$\n\nI present these versions of Bayes’ Theorem in probability courses and in courses for mathematically inclined students, but I do not show any formulas in my statistical literacy course.  For a standard “Stat 101” introductory course, I do not present this topic at all, as the focus is exclusively on statistical concepts and not probability.\n\nBefore we leave this example, I remind students that these percentages were from early versions of the ELISA test in the 1980s, when the HIV/AIDS crisis was first beginning.  Improvements in testing procedures have produced much higher sensitivity and specificity (link).  Running more sophisticated tests on those who test positive initially also greatly decreases the rate of false positives.\n\nI have debated with myself whether to change this HIV testing context for students’ first introduction to these ideas.  One argument against using this context is that the information about sensitivity and specificity is more than three decades old.  Another argument is that 97.7% and 92.6% are not convenient values to work with; perhaps students would be more comfortable with “rounder” values like 90% and 80%.  But I continue to use this context, partly to remind students of how serious the HIV/AIDS crisis was, and because I think the example is compelling.  An alternative that I found recently is to present these ideas in terms of a 2014 study of diagnostic accuracy of breathalyzers sold to the public (link).\n\nWhere to next?  With my statistical literacy course, I give students more practice with constructing and analyzing tables of counts to calculate reverse conditional percentages, as in the following example.\n\n2. A national survey conducted by the Pew Research Center in late 2018 produced the following estimates about educational attainment and Twitter use among U.S. adults:\n\n• 10% have less than a high school diploma; 8% of these adults use Twitter\n• 59% have a high school diploma but no college degree; 20% of these adults use Twitter\n• 31% have a college degree; 30% of these adults use Twitter\n\nWhat percentage of U.S. adults who use Twitter have less than a high school diploma?  What percentage have a high school degree but no college degree?  What percentage have a college degree?  Which education levels are more common among Twitter users than in the adult population overall?  Which are less common?\n\nAgain we can answer these questions (about reverse conditional percentages from what was given) by constructing a table of counts for a hypothetical population.  This time we need three rows rather than two, in order to account for the three education levels.  I recommend providing students with the outline of the table, but without indicating the order in which to fill it in this time:\n\nWith numbers in parentheses again indicating the order in which the cells can be calculated, the completed table (for a hypothetical population of 1,000 adults) turns out to be:\n\n|  | Twitter user | Not a Twitter user | Total |\n| --- | --- | --- | --- |\n| Less than high school diploma | 8 | 92 | 100 |\n| High school diploma, no college degree | 118 | 472 | 590 |\n| College degree | 93 | 217 | 310 |\n| Total | 219 | 781 | 1,000 |\n\nFrom this table we can calculate that 8/219 ≈ .037, or 3.7% of Twitter users have less than a high school degree, 118/219 ≈ .539, or 53.9% of Twitter users have a high school but not college degree, and 93/219 ≈ .425, or 42.5% of Twitter users have a college degree.
These percentages have increased from the base rate only for the college degree holders, as 31% of the public has a college degree but 42.5% of Twitter users do.\n\n3. A third application that I like to present concerns the famous Monty Hall Problem.  Suppose that a new car is hidden behind one door on a game show, while goats are hidden behind two other doors.  A contestant picks a door, and then (to heighten the suspense!) the host reveals what’s behind a different door that he knows to have a goat.  Then the host asks whether the contestant prefers to stay with the original door or switch to the remaining door.  The question for students is: Does it matter whether the contestant stays or switches?  If so, which strategy is better, and why?\n\nMost people believe that staying or switching does not matter.  I recommend that students play a simulated version of the game many times, with both strategies, to get a sense for how the strategies compare.  An applet that allows students to play simulated games appears here.  The following graph shows the results of 1000 simulated games with each strategy:\n\nIt appears that switching wins more often than staying!  We can determine the theoretical probabilities of winning with each strategy by using Bayes’ Theorem.  More to the point, we can use our strategy of constructing a table of hypothetical counts.  Let’s suppose that the contestant initially selects door #1, so the host will show a goat behind door #2 or door #3.  Let’s use 300 for the number of games in our table, just so we’ll have a number that’s divisible by 3.  Here’s the outline of the table:\n\nHow do we fill in this table? Let’s proceed as follows:\n\n1. Row totals: If the car is equally likely to be placed behind any of the three doors, then the car should be behind each door for 100 of the 300 games.\n2. Bottom (not total) row: Remember that the contestant selected door #1, so when the car is actually behind door #3, the host has no choice but to reveal door #2 all 100 times.\n3. Middle row: Just as with the bottom row, now the host has no choice but to reveal door #3 all 100 times.\n4. Top row: When the car is actually behind the same door that the contestant selected, the host can reveal either of the other doors, so let’s assume that he reveals each 50% of the time, or 50 times in 100 games.\n\nThe completed table is therefore:\n\nWe can see from the table that for the 150 games where the host reveals door #2, the car is actually behind door #3 for 100 of those 150 games, which is 2/3 of the games.  In other words, if the contestant stays with door #1, they will win 50/150 times, but by switching to door #3, they win 100/150 times. Equivalently, for the 150 games where the host reveals door #3, the car is actually behind door #2 for 100 of those games, which is again 2/3 of the games.  Bottom line: Switching gives the contestant a 2/3 chance of winning the car, whereas staying only gives a 1/3 chance of winning the car.  The easiest way to understand this, I think, is that by switching, the contestant only loses if they picked the correct door to begin with, which happens one-third of the time.\n\nThis post is already quite long, but I can’t resist suggesting a follow-up question for students: Now suppose that the game show producers place the car behind door #1 50% of the time, door #2 40% of the time, and door #3 10% of the time.  What strategy should you use?  In other words, which door should you pick to begin, and then should you stay or switch?  
What is your probability of winning the car with the optimal strategy in this case?  Explain.\n\nEncourage students to remember the bottom line from above: By switching, you only lose if you were right to begin with.  So, the optimal strategy here is to select door #3, the least likely door, and then switch after the host reveals a door with a goat.  Then you only lose if you were right to begin with, so you only lose 10% of the time.  This optimal strategy gives you a 90% chance of winning the car.  Students who can think this through and describe the correct optimal strategy have truly understood the resolution of the famous Monty Hall Problem.\n\nOne final question for this post: Why is Bayes’ Theorem my favorite?  It provides the mechanism for updating uncertainty in light of partial information, which enables us to answer important questions, such as the reliability of medical diagnostic tests, and also fun recreational ones, such as the Monty Hall Problem.  More than that, Bayes’ Theorem provides the foundation for an entire school of thought about how to conduct statistical inference.  I’ll discuss that in a future post.\n\nP.S. Tom Short and I wrote a JSE article (link) about this approach to teaching Bayes’ Theorem in 1995, but the idea is certainly not original with us.  Gerd Gigerenzer and his colleagues introduced the term “natural frequencies” for this approach; they have demonstrated its effectiveness for improving people’s Bayesian reasoning (link).  The Monty Hall Problem is discussed in many places, including by Jason Rosenhouse in his book (link) titled The Monty Hall Problem.  While I’m mentioning books, I will also point out Sharon Bertsch McGrayne’s wonderful book about Bayesian statistics (link), titled The Theory That Would Not Die." ]
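For readers who want to reproduce the simulation results without the applet, here is a minimal sketch in Python (the function and its defaults are my own construction; the game rules are as described above):

```python
import random

def play_once(stay, priors=(1/3, 1/3, 1/3), pick=0):
    """Play one Monty Hall game; return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choices(doors, weights=priors)[0]
    # The host opens a door that is neither the contestant's pick nor the car.
    host = random.choice([d for d in doors if d != pick and d != car])
    final = pick if stay else next(d for d in doors if d not in (pick, host))
    return final == car

n = 100_000
print(sum(play_once(stay=True) for _ in range(n)) / n)   # about 1/3
print(sum(play_once(stay=False) for _ in range(n)) / n)  # about 2/3

# Follow-up question: car behind doors 1, 2, 3 with probabilities 0.5, 0.4, 0.1.
# Pick the least likely door (index 2 here) and switch: win about 90% of games.
print(sum(play_once(stay=False, priors=(0.5, 0.4, 0.1), pick=2)
          for _ in range(n)) / n)
```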
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9550757,"math_prob":0.8182109,"size":15441,"snap":"2022-40-2023-06","text_gpt3_token_len":3473,"char_repetition_ratio":0.1319557,"word_repetition_ratio":0.024691358,"special_character_ratio":0.23845606,"punctuation_ratio":0.112155594,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9574353,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T20:53:27Z\",\"WARC-Record-ID\":\"<urn:uuid:25557ab6-ff9e-4e80-9451-eba1f00b2376>\",\"Content-Length\":\"117442\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:916f120a-8c20-4031-8a47-c16ae4eea275>\",\"WARC-Concurrent-To\":\"<urn:uuid:6d87d24e-5162-44e8-9619-ee3c54b7e6b2>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://askgoodquestions.blog/2019/09/09/10-my-favorite-theorem/\",\"WARC-Payload-Digest\":\"sha1:2IRI5WXZGL2UV24GYBRVRXQBOCRDQNXU\",\"WARC-Block-Digest\":\"sha1:5BEJ465P2LPGRCNU4SUPTVIU46XA52BK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335365.63_warc_CC-MAIN-20220929194230-20220929224230-00046.warc.gz\"}"}
https://es.mathworks.com/help/pde/ug/pde.femesh.findnodes.html
[ "# findNodes\n\nFind mesh nodes in specified region\n\n## Syntax\n\n``nodes = findNodes(mesh,\"region\",RegionType,RegionID)``\n``nodes = findNodes(mesh,\"box\",xlim,ylim)``\n``nodes = findNodes(mesh,\"box\",xlim,ylim,zlim)``\n``nodes = findNodes(mesh,\"radius\",center,radius)``\n``nodes = findNodes(mesh,\"nearest\",point)``\n\n## Description\n\nexample\n\n````nodes = findNodes(mesh,\"region\",RegionType,RegionID)` returns the IDs of the mesh nodes that belong to the specified geometric region.```\n\nexample\n\n````nodes = findNodes(mesh,\"box\",xlim,ylim)` returns the IDs of the mesh nodes within a bounding box specified by `xlim` and `ylim`. Use this syntax for 2-D meshes.```\n````nodes = findNodes(mesh,\"box\",xlim,ylim,zlim)` returns the IDs of the mesh nodes located within a bounding box specified by `xlim`, `ylim`, and `zlim`. Use this syntax for 3-D meshes.```\n\nexample\n\n````nodes = findNodes(mesh,\"radius\",center,radius)` returns the IDs of mesh nodes located within a circle (for 2-D meshes) or sphere (for 3-D meshes) specified by `center` and `radius`.```\n\nexample\n\n````nodes = findNodes(mesh,\"nearest\",point)` returns the IDs of mesh nodes closest to a query point or multiple query points with Cartesian coordinates specified by `point`.```\n\n## Examples\n\ncollapse all\n\nFind the nodes associated with a geometric region.\n\nCreate a PDE model.\n\n`model = createpde;`\n\nInclude the geometry of the built-in function `lshapeg`. Plot the geometry.\n\n```geometryFromEdges(model,@lshapeg); pdegplot(model,\"FaceLabels\",\"on\",\"EdgeLabels\",\"on\")```", null, "Generate a mesh.\n\n`mesh = generateMesh(model,\"Hmax\",0.5);`\n\nFind the nodes associated with face 2.\n\n`Nf2 = findNodes(mesh,\"region\",\"Face\",2);`\n\nHighlight these nodes in green on the mesh plot.\n\n```figure pdemesh(model,\"NodeLabels\",\"on\") hold on plot(mesh.Nodes(1,Nf2),mesh.Nodes(2,Nf2),\"ok\",\"MarkerFaceColor\",\"g\") ```", null, "Find the nodes associated with edges 5 and 7.\n\n`Ne57 = findNodes(mesh,\"region\",\"Edge\",[5 7]);`\n\nHighlight these nodes in green on the mesh plot.\n\n```figure pdemesh(model,\"NodeLabels\",\"on\") hold on plot(mesh.Nodes(1,Ne57),mesh.Nodes(2,Ne57),\"or\",\"MarkerFaceColor\",\"g\")```", null, "Find the nodes located within a specified box.\n\nCreate a PDE model.\n\n`model = createpde;`\n\nImport and plot the geometry.\n\n```importGeometry(model,\"PlateHolePlanar.stl\"); pdegplot(model)```", null, "Generate a mesh.\n\n```mesh = generateMesh(model,\"Hmax\",2,\"Hmin\",0.4, ... \"GeometricOrder\",\"linear\");```\n\nFind the nodes located within the following box.\n\n`Nb = findNodes(mesh,\"box\",[5 10],[10 20]);`\n\nHighlight these nodes in green on the mesh plot.\n\n```figure pdemesh(model) hold on plot(mesh.Nodes(1,Nb),mesh.Nodes(2,Nb),\"or\",\"MarkerFaceColor\",\"g\")```", null, "Find the nodes located within a specified disk.\n\nCreate a PDE model.\n\n`model = createpde;`\n\nImport and plot the geometry.\n\n```importGeometry(model,\"PlateHolePlanar.stl\"); pdegplot(model)```", null, "Generate a mesh.\n\n```mesh = generateMesh(model,\"Hmax\",2,\"Hmin\",0.4, ... 
\"GeometricOrder\",\"linear\");```\n\nFind the nodes located within radius 2 from the center [5 10].\n\n`Nb = findNodes(mesh,\"radius\",[5 10],2);`\n\nHighlight these nodes in green on the mesh plot.\n\n```figure pdemesh(model) hold on plot(mesh.Nodes(1,Nb),mesh.Nodes(2,Nb),\"or\",\"MarkerFaceColor\",\"g\")```", null, "Find the node closest to a specified point and highlight it on the mesh plot.\n\nCreate a PDE model.\n\n`model = createpde;`\n\nImport and plot the geometry.\n\n```importGeometry(model,\"PlateHolePlanar.stl\"); pdegplot(model)```", null, "Generate a mesh.\n\n`mesh = generateMesh(model,\"Hmax\",2,\"Hmin\",0.4);`\n\nFind the node closest to the point [15;10].\n\n`N_ID = findNodes(mesh,\"nearest\",[15;10])`\n```N_ID = 10 ```\n\nHighlight this node in green on the mesh plot.\n\n```figure pdemesh(model) hold on plot(mesh.Nodes(1,N_ID),mesh.Nodes(2,N_ID),\"or\",\"MarkerFaceColor\",\"g\")```", null, "## Input Arguments\n\ncollapse all\n\nMesh object, specified as the `Mesh` property of a `PDEModel` object or as the output of `generateMesh`.\n\nExample: `model.Mesh`\n\nGeometric region type, specified as `\"Cell\"`, `\"Face\"`, `\"Edge\"`, or `\"Vertex\"`.\n\nExample: `findNodes(mesh,\"region\",\"Face\",1:3)`\n\nData Types: `char`\n\nGeometric region ID, specified as a vector of positive integers. Find the region IDs by using `pdegplot`.\n\nExample: `findNodes(mesh,\"region\",\"Face\",1:3)`\n\nData Types: `double`\n\nx-limits of the bounding box, specified as a two-element row vector. The first element of `xlim` is the lower x-bound, and the second element is the upper x-bound.\n\nExample: ```findNodes(mesh,\"box\",[5 10],[10 20])```\n\nData Types: `double`\n\ny-limits of the bounding box, specified as a two-element row vector. The first element of `ylim` is the lower y-bound, and the second element is the upper y-bound.\n\nExample: ```findNodes(mesh,\"box\",[5 10],[10 20])```\n\nData Types: `double`\n\nz-limits of the bounding box, specified as a two-element row vector. The first element of `zlim` is the lower z-bound, and the second element is the upper z-bound. You can specify `zlim` only for 3-D meshes.\n\nExample: ```findNodes(mesh,\"box\",[5 10],[10 20],[1 2])```\n\nData Types: `double`\n\nCenter of the bounding circle or sphere, specified as a two-element row vector for a 2-D mesh or three-element row vector for a 3-D mesh. The elements of these vectors contain the coordinates of the center of a circle or a sphere.\n\nExample: ```findNodes(mesh,\"radius\",[0 0 0],0.5)```\n\nData Types: `double`\n\nRadius of the bounding circle or sphere, specified as a positive number.\n\nExample: ```findNodes(mesh,\"radius\",[0 0 0],0.5)```\n\nData Types: `double`\n\nCartesian coordinates of query points, specified as a 2-by-N or 3-by-N matrix. These matrices contain the coordinates of the query points. Here, N is the number of query points.\n\nExample: ```findNodes(mesh,\"nearest\",[15 10.5 1; 12 10 1.2])```\n\nData Types: `double`\n\n## Output Arguments\n\ncollapse all\n\nNode IDs, returned as a positive integer or a row vector of positive integers.\n\n## Version History\n\nIntroduced in R2018a" ]
[ null, "https://es.mathworks.com/help/examples/pde/win64/NodesAssociatesWithParticularEdgesAndFacesExample_01.png", null, "https://es.mathworks.com/help/examples/pde/win64/NodesAssociatesWithParticularEdgesAndFacesExample_02.png", null, "https://es.mathworks.com/help/examples/pde/win64/NodesAssociatesWithParticularEdgesAndFacesExample_03.png", null, "https://es.mathworks.com/help/examples/pde/win64/NodesWithinBoundingBoxExample_01.png", null, "https://es.mathworks.com/help/examples/pde/win64/NodesWithinBoundingBoxExample_02.png", null, "https://es.mathworks.com/help/examples/pde/win64/NodesWithinBoundingDiskExample_01.png", null, "https://es.mathworks.com/help/examples/pde/win64/NodesWithinBoundingDiskExample_02.png", null, "https://es.mathworks.com/help/examples/pde/win64/NodesClosestToSpecifiedPointsExample_01.png", null, "https://es.mathworks.com/help/examples/pde/win64/NodesClosestToSpecifiedPointsExample_02.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6542489,"math_prob":0.9807666,"size":5468,"snap":"2022-40-2023-06","text_gpt3_token_len":1460,"char_repetition_ratio":0.13433382,"word_repetition_ratio":0.28070176,"special_character_ratio":0.26060718,"punctuation_ratio":0.20140105,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9963103,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T09:21:33Z\",\"WARC-Record-ID\":\"<urn:uuid:a125deec-7595-4828-b6ad-3e08f9b68f7d>\",\"Content-Length\":\"120599\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1f33d0f0-f2d3-478b-a3f3-f0ac9efd5e0b>\",\"WARC-Concurrent-To\":\"<urn:uuid:412d8622-24f5-4eff-83ce-2c146f90fe6f>\",\"WARC-IP-Address\":\"23.39.174.83\",\"WARC-Target-URI\":\"https://es.mathworks.com/help/pde/ug/pde.femesh.findnodes.html\",\"WARC-Payload-Digest\":\"sha1:4C3VT4LE4KUEXQH2XBOTW6FZBF6AU2VR\",\"WARC-Block-Digest\":\"sha1:TQ6IM54C2NUYGCBRZMER4QPJXOA7PA3U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764494974.98_warc_CC-MAIN-20230127065356-20230127095356-00243.warc.gz\"}"}
https://dsp.stackexchange.com/questions/38363/sound-absorption-coefficients-for-frequencies-higher-than-8khz-and-lower-than-80
[ "# Sound absorption coefficients for frequencies higher than 8kHz and lower than 80Hz?\n\nI am building a matlab simulation that applies room compensation to a 3D audio rendering system, and I'm searching for the frequency dependent sound absorption coefficients of various materials in order to extract the reflection coefficients.\n\nHowever all the tables I found provide absorption coefficients for different octave frequency bands that have center frequency from 125Hz to 4kHz or at most 8kHz, but in my case I need to consider frequencies up to 20kHz.\n\nMy question is why are tables for sound absorption coefficients limited to such frequency range and how should I consider the coefficients for higher frequencies(i.e. higher than 10kHz) and also for lower ones(i.e. lower than 80 Hz)?\n\nI found the tables in the annex section of the book \"Auralization\" by Dr. Michael Vorländer, which uses the ISO 354:2003 standard in order to perform the measurements.\n\nThe only resource where I found tables for frequency values higher than 8kHz was this paper: https://www.degruyter.com/downloadpdf/j/aoa.2013.38.issue-2/aoa-2013-0020/aoa-2013-0020.pdf where it is proposed an alternative method w.r.t. the previously mentioned standard in order to take the measurements from 50Hz to 50kHz, however these are provided just for very few materials.\n\n• Hey Luca, I'm not an acoustics expert myself, but I guess future readers, and among them, potential answerers would be very interested in which literature you found these tables – this is something often asked for on DSP.SE, and it might incentivize people to work harder on an answer. Also, citing your sources is pretty much always a good idea :) – Marcus Müller Mar 14 '17 at 23:12\n• Please don't cross-post, especially not without referring to the other version of a question: physics.stackexchange.com/questions/318807/… – Marcus Müller Mar 14 '17 at 23:18\n• Hi! I am sorry, I didn't know it wasn't polite to cross and I removed the question from Physics Stack Exchange. I also added some references on my question here, thanks for the tips! – Luca Mar 15 '17 at 9:34\n• If you are looking for more of coefficients up to 8kHz for various materials, then take a look at this excel sheet. – jojek Mar 15 '17 at 10:50\n\nGreat question. After about 10kHz, most of the energy is lost due to air absorption depending on your distance to the source. Your room model could approximate this with a lowpass filter whose rolloff is approximated by distance, but don't quote me on that.\n\n• Thanks for the answer!! I tried use a lowpass filter before, but the quality of the rendering degrades too much! Maybe a solution could be to consider the absorption coefficients of the walls as 100% for frequencies higher than 10kHz, in order to simulate the air attenuation? – Luca Mar 15 '17 at 10:42\n• Sidenote: could the downvoter explain their downvote? No, I didn't say use a single lowpass filter, I said try a dynamic one dependent on distance in your source-receiver model. Walls do not absorb 100% of high frequencies unless they are perhaps made of heavy velour, so using alpha = 1 will not be physically accurate. – panthyon Mar 15 '17 at 14:53\n\nThe main reason why the tables don't have it, is that it's hard to measure. The measurement technique in ISO 354:2003 relies on measuring the difference in reverberation times in a reverberation rooms with and with/out a material sample. 
At higher frequencies, the reverb time is dominated by air absorption and the sound field becomes less and less diffuse, which violates the basic assumption behind the measurement. It's not uncommon to measure absorption coefficients larger than 1. That's not good!\n\nI wouldn't mingle air absorption and wall absorption. They are different effects and should be modelled separately. They are basically additive. At 20 kHz air absorption is around 50-ish dB/100m. The early reflections in a room have a travel distance of only a few meters so there is some attenuation, but there is still plenty of energy left.\n\nKeep in mind that wall absorption measurement is NOT done at 8 kHz but is averaged over the 8 kHz octave, so the data covers frequencies up to 11.5 kHz. If you need to go higher, you can often simply extrapolate to 16 kHz by looking at the difference between the 4 kHz and 8 kHz values.\n\nEasiest is to extrapolate in dB for the reflection coefficient. Example: 4 kHz = 0.6, 8 kHz = 0.8. Reflection coefficients are 0.4 and 0.2, or -4dB and -7dB respectively. So that's a 3 dB drop of reflected energy per octave. Extrapolation yields -10 dB for 16 kHz, which is a reflection coefficient of 0.1 or an absorption of 0.9.\n\nSomething like this: $$\\alpha_{16} = 1-10^{0.1 \\cdot (2 \\cdot 10 \\cdot \\log_{10}(1-\\alpha_8)-10 \\cdot \\log_{10}(1-\\alpha_4))}$$\n\n• There is the ISO 10534 standard for measurements of the absorption coefficient using the Impedance Tube. – jojek Mar 15 '17 at 19:45\n• Yep, but if I understand correctly that only gives you reflection/absorption for frontal incidence and not spatially averaged for a diffuse sound field – Hilmar Mar 15 '17 at 21:44\n• Indeed, the physical absorption coefficient is not the same as one measured in the reverberation chamber. Nonetheless it's still useful as it gives you an idea of what the properties of the sample are. – jojek Mar 16 '17 at 8:25\n• Hi, thanks a lot for your detailed answer!! However I was wondering how it is reasonable to extrapolate coefficients for higher frequencies using the way you propose? I understand that this would be just an approximation, but I checked and the relation that you suggest does not hold between the other frequency bands – Luca Mar 16 '17 at 12:34" ]
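The extrapolation heuristic from this answer is a one-liner to implement. A minimal sketch in Python (the function name is mine; the formula is the one displayed above):

```python
import math

def extrapolate_alpha16(alpha4, alpha8):
    """Extrapolate the 16 kHz absorption coefficient from the 4 and 8 kHz
    octave-band values, assuming the reflected energy (in dB) keeps dropping
    by the same amount per octave."""
    r4 = 10 * math.log10(1 - alpha4)   # reflected energy in dB at 4 kHz
    r8 = 10 * math.log10(1 - alpha8)   # reflected energy in dB at 8 kHz
    r16 = 2 * r8 - r4                  # continue the per-octave slope
    return 1 - 10 ** (0.1 * r16)

print(extrapolate_alpha16(0.6, 0.8))   # 0.9, matching the worked example
```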
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92068756,"math_prob":0.6907536,"size":1245,"snap":"2020-45-2020-50","text_gpt3_token_len":278,"char_repetition_ratio":0.13134569,"word_repetition_ratio":0.0,"special_character_ratio":0.22329317,"punctuation_ratio":0.10970464,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9501057,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-24T15:31:56Z\",\"WARC-Record-ID\":\"<urn:uuid:5c543b24-d5e6-4a9a-afb2-da861b538dcd>\",\"Content-Length\":\"168945\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:38a16c1a-7b39-4673-82d4-a7719a57794b>\",\"WARC-Concurrent-To\":\"<urn:uuid:c15649c1-3b22-421d-a1d7-f5f52912ce2d>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://dsp.stackexchange.com/questions/38363/sound-absorption-coefficients-for-frequencies-higher-than-8khz-and-lower-than-80\",\"WARC-Payload-Digest\":\"sha1:OGACW7YV5GC5DCK7SENVE5L5JN6YQ7YO\",\"WARC-Block-Digest\":\"sha1:DYVSFOFZQQI77F6O5LVNWWDTAYNRS6TU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107883636.39_warc_CC-MAIN-20201024135444-20201024165444-00498.warc.gz\"}"}
https://swmath.org/?term=numerical%20computation
[ "• # UMFPACK\n\n• Referenced in 412 articles [sw00989]\n• pattern multifrontal numerical factorization. The pre-ordering and symbolic analysis phase computes an upper bound ... ordering and analyzing a sparse matrix, computing the numerical factorization, solving a system with...\n• # MACSYMA\n\n• Referenced in 720 articles [sw01209]\n• general purpose symbolic-numerical-graphical mathematics software product. Computer algebra system ... complicated computations by means of a large Macsyma program. Macsyma offers: symbolic and numeric manipulation...\n• # QSIMVN\n\n• Referenced in 148 articles [sw32329]\n• function with supporting functions, for the numerical computation of multivariate normal distribution values. The method ... algorithm given in the paper ”Numerical Computation of Multivariate Normal Probabilities”, in J. of Computational...\n• # PASCAL-XSC\n\n• Referenced in 103 articles [sw18863]\n• PASCAL-XSC: PASCAL for Extended Scientific Computing. The programming language PASCAL-XSC was developed ... numerical solution of scientific problems based upon a properly defined and implemented computer ... arithmetic in the usual spaces of numerical computation...\n• # CGAL\n\n• Referenced in 384 articles [sw00118]\n• computation, such as: computer graphics, scientific visualization, computer aided design and modeling, geographic information systems ... generation, numerical methods... More on the projects using CGAL web page. The Computational Geometry Algorithms...\n• # mctoolbox\n\n• Referenced in 1485 articles [sw04827]\n• files containing functions for constructing test matrices, computing matrix factorizations, visualizing matrices, and carrying ... with the book Accuracy and Stability of Numerical Algorithms (SIAM, Second edition, August...\n• # BlackHat\n\n• Referenced in 73 articles [sw10450]\n• states. The program performs all related computations numerically. We make use of recently developed ... stability. We illustrate the numerical stability of our approach by computing and analyzing six-, seven...\n• # HSL\n\n• Referenced in 279 articles [sw00418]\n• large-scale scientific computation written and developed by the Numerical Analysis Group at the STFC ... source of robust and efficient numerical software. Among its best known packages are those ... extensively used on a wide range of computers, from supercomputers to modern PCs. Recent additions...\n• # C-XSC\n\n• Referenced in 110 articles [sw00181]\n• programming environment for verified scientific computing and numerical data processing. C-XSC is a tool ... numerical applications in C and C++. The C-XSC package is available for all computers..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78775257,"math_prob":0.5519457,"size":4448,"snap":"2022-05-2022-21","text_gpt3_token_len":967,"char_repetition_ratio":0.21962196,"word_repetition_ratio":0.0032520324,"special_character_ratio":0.2365108,"punctuation_ratio":0.2117812,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95906997,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-18T04:09:17Z\",\"WARC-Record-ID\":\"<urn:uuid:0ac73ab0-779e-4c48-9075-0d43ba9bdb84>\",\"Content-Length\":\"52043\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:98d37914-271b-4b58-a7c7-8cdafa090328>\",\"WARC-Concurrent-To\":\"<urn:uuid:90252e71-03f2-4905-afc4-69812678857e>\",\"WARC-IP-Address\":\"141.66.193.30\",\"WARC-Target-URI\":\"https://swmath.org/?term=numerical%20computation\",\"WARC-Payload-Digest\":\"sha1:MUSAYHJP33RV3KWTD4DOCAJ63TZEK7HT\",\"WARC-Block-Digest\":\"sha1:JZLQHHD7O3EVGS54KRCCXHTNBRUWBPUC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662521041.0_warc_CC-MAIN-20220518021247-20220518051247-00557.warc.gz\"}"}
https://homework.cpm.org/category/ACC/textbook/acc6/chapter/1%20Unit%2010/lesson/CC2:%201.1.4/problem/1-36
[ "", null, "", null, "### Home > ACC6 > Chapter 1 Unit 10 > Lesson CC2: 1.1.4 > Problem1-36\n\n1-36.\n\nUse the fact that there are $12$ inches in a foot to answer the questions below.\n\n1. How many inches tall is a $7$‑foot basketball player?\n\nThe player is seven times taller than one foot.\n\n$84$ inches\n\n2. If a yard is $3$ feet long, how many inches are in a yard?\n\n$3(12)=36$ inches" ]
[ null, "https://homework.cpm.org/dist/7d633b3a30200de4995665c02bdda1b8.png", null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAfQAAABDCAYAAABqbvfzAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyRpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMC1jMDYxIDY0LjE0MDk0OSwgMjAxMC8xMi8wNy0xMDo1NzowMSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENTNS4xIE1hY2ludG9zaCIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDo5QzA0RUVFMzVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDo5QzA0RUVFNDVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjlDMDRFRUUxNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0IiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOjlDMDRFRUUyNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0Ii8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+RSTQtAAAG9JJREFUeNrsXQmYXEW1Pj09PVtmJjsBDGFXiCKKIBJ2REEQQdaARBBiFFRAnrIoyhqCgLwnEfEpPMAgggsGJG7w2MMuiuwkJDGQINmTycxklu62/r5/0ZWaur3M9GQCc/7vO1/fvrfuvXXr1q3/nFOnqhLZbFYUCoVCoVC8u1GlRaBQKBQKhRK6QqFQKBQKJXSFQqFQKBRK6AqFQqFQKJTQFQqFQqFQQlcoFAqFQqGErlAoFAqFonKoLveE2jM+uTHk+zNGjjZyj5EXqJhgQH3KyClGOo1MNbK2vzOSTWakbmWTjHp+69y2QqFQKBQW85+avvES+kaCKUaOMHK8kcWS9zQkjYzj9l1Gnuj3nCSykuxIaa1VKBQKxbvLQt9I0Gjk30YehtPA2d9tZJGRPYxs0++EnjCaRFe1NC4emSN2hUKhUCiU0MtDjZE3jRwXODaRhP5hI7f1ZyayVRmpWdMoqbb63LZCoVAoFAOFd2tQHHzcWxppChwbxt89+zsTWWOV161okkQ6oTVJoVAoFErovQA8C6OMjA0csy74nSXfn155GA6vXlcj9cuHqnWuUCgUCiX0XqDByOiIUnNu9ThCh/W+T79Z54bEa1c1SnVbjdnW/nOFQqFQKKGXi/cbeR+3Px44PtrZPrw/M1K/vDlSKxQKhUKhUEIvG/tK1IcO7CE9KXVn/v7ZyAFGNqm4dY6hautqpGZNg7rbFQqFQqGE3sv8gtDXOeTt9pMPN/Ixh9CNCS2HVJzQq7JSu3qIJDtTaqErFAqFQgm9FwBZY/z520ZWS9Sfvrdz/AjHeke6RyWaOa6iwJBzuNsTyuYKhUKhUELvFdAn/rREQ9NeN/KkkaN4bAQJ/x7+hy/8RhL+DpVk86p0taRadOy5QqFQKJTQe4NtSNog8aESzdf+RyOfolX+ZSMPSDRbHIBhbXcaaTcyuVKZQP95am2dVHelctsKhUKhUAxGQoeP+hoj1xu5yciFZZwLUv6NRIuwWMKeLdGscRdLFN3+O8lHuY800mbkdiOnSn7CmT4Sukj9imZJZHShOoVCoVAMXkLH/bBc2ywj5xg5wcjnSjgP4803owU+kvsQ8PaskYeMnGbkCu6vd44D15LMT6yIRmLUiZq19WqdKxQKhWJQE/q2Eo0hR7/3GCMLJFoGddciefymkR/zfyN/U7TO20niNhjOTizTwN9/GPmrkfMcsu+ddV6VkVR7nVS31mn/uUKhUCgGNaGDyP9l5F6J3OMdRr5n5FwjH4w55wwjrxj5G/+787dfQwsd/eZf5b46z1IHLqUicVLfzHOR6vYaqepOas1RKBQKxaAldIwXR7/3XIn6wVskcp+D4NEHfomRXbxzDpJorPkPnX2WsDHm/FEeQ/Db13j9as9CF6bDuPSLJLygS4xFns1Z4lYy1encdK+JjA5XUygUCsXgJfQvGblDIrc7VkI71sh2Rg418gKtdFjrdknUCUYmSdTX3u1c533O9uP8vZrKAYLfugKEDpwvkZv/nFIzjGj2mtUNuRnhILWrhkhVV1LXPlcoFArFRocNtR76YUbeMrKElvqJJGlMDvNFWta3GDmGFjf2wa89xchSI0NoqeM6n3KuO4q//5Ro7fPvS34WOZ/Q0ZeO6PoLmPblYpke8crmhtRr1198pSohmaT2nysUCoVi8BH6hySa8AWBaacbSUvUdw7vAJjyK0a+bmSakVVGWiVykSPgDUPVOmlZg/zv4q+d3rXOuQ/c9kdKNFY9ROjAd5nmBiN7SX4IXBCIZI/c7vlkiYS62xUKxYbH/KemayEoCqI/Xe4YKnYKyXO8kZslmhBmUyM/kshNjpXTrpNoARUExX2e5yVI7BCYwwh8m0kLf0vnHm7g22u00LMFCH0l8zSBaRUKhUKhUAvdA4aLoX97FxL19iTVZ0nMcHnDHf5Vh4hB1KOYbpGRtRJN07o/rfKmInm8yMhEEjWC69p4D1x/SMw5mF3uKp77dyN3azVQKBQKhRJ6HqMlH8X+iJHlsn4wW7kAIY+k9b41lYQPkPDx20zLf3zM+bDkEdmO/vUXjbxqZB6tfATGITjvVxK53v+uVUGhUCgUg4rQs15AWCL9jtf+TUrkMM86vyGgfzr3E9sn3WrObzWJFprtZ5z9uOHmRnYzcqCR/WJIHX3wB1GEOYGSgWC4xySKuMc1fm9kHyMLtTooFAqFYtAQet2yJvJxQjLVGelsbn9nnDb25Qg+QzLPRPSbSaZzc59Ho72iKPFkR7VUmbSZmgJG
fO787DtR5bx+xlEefk/ixopqCKA7TOJd7Ql6EPaW/JKrrUyPceyH0HpXKBQKheK9T+gjX9jCsZWz0l3XJV2N7dLZtC43RrtueWN+nXCQfqpb2ke1SMfwVknXduUixhsXDZfGN0fkyD+TSsdb6WZ/d32ndAxtM+SfkM7GDllnrgXNAJO7MPocUfD/TxkvmcRZ5nqnSmkBf5b8ETX/oERD2u7UaqFQKBSK9zyh+y736vaUVLfVSMPbCE5ff4hXDu01UruqIWfNg5xxvHZ1Q2TVGx5PdhbOAqZaradXAOfAI9A+eo20jVljlIeGnMcAln7HsFbpauh8KV3XNaW7oeN2c+1rEunEeEPuXQVvkIAHAHnOol/+DpN+lsnYmWb/v8p1Xkjk1u/QaqVQKBSKjZ7QexB8jsCzBQZ0g+SjrVRrtG4KplB1jPBid3jnfCA3c1tLvQxZNCJH9u+wqSF2XCpd0w3Sv79t9JqPdA5vHZdOdVfB2x6arjVrlIzkulR2yOLmNnMcD5HoGtIxdN3IlrebFozOXb+HghKPL0i0UMxtWq0UCoVC8a4jdAJ907tLNIkMItPB2JgZDtHjz5DofHLEvdFv3SSFJ3gBE6+QaJz569ZDUN2Rst6CKl5naBb6QXcyR+5GMplU98PrRrQuXjt2ec6yr0onc3ey+WhcOFIaI8XgIJuPbFUmaxSOj1V1VafM9bHe+vz1lICsYf2wEgL3va7aolAoFIp3JaFjKVPMwY7JWjaPSYOo8usoLuCixpKoW5R4Lyzmgrnb/8fIn5z1yJO8TjThDAztZHQskU7OHvLvofvVL2/sXrPlMml934qc6z/VWifD5mwqtSuHIP0hhsBnradBGOKnsnCyT+gFACVG54RVKBQKxYCgLzPFYeKY+yUKJNu8QLodSbhYLrXZNXYlmgimVMCC/rREE8P8oKTrJLJ7GgI/VjJVMmzupjLipbHSvHCUjP77VjkyN6RdY6z1qYHz7FaXVhGFQqFQvJcJHdO3wqrdrYxzMIf6LVIZtzQmhil16taLDUE3od8ervjm18fkoutpgcOz8BGtBgqFQqEYrIR+JS30cnGERCupVQJYaAV99sVmo8MSrWfkTHlD4jkijyzwkfQuKBQKhUIxKAkds7JNjDn2N4lWTcPCK/MKWNcIT0/HHEcA3F8kWp0NU7c+GZMO1zi1xDz/l0TLtrr4tqy/trpCoVAoFO9a9CYoDv3YqcB+zNp2vOTHYWNd8wckmnvdBf7vIdHCLCE8Z+RgT+k4wciNJHEXmLK1toByYDGc1vgU/se88F/T169QKBSKwWyhfzSwL03L3J1U5d8S9XPPpcyhzCepJ0pUMtDZfatEAXg+xkq03Gop0eUnG9mV25dIFKGvUCgUCsWgtdBDEe1wky8I7P+NkT95+0DkiB6vr0D+s5JfBqYY4FU4z8i1Ro7ZCN8FFIzNJD+Gvz2QppZeiqxXnp0SnqEuxXJexzSFUMf0uG9cXEKC10tKgWV3nGtUM72ftkviZ9SrYV46me+4Z+qKKSMAK/8hRgLL8S6SwvMcWDQzvascJkuopwm+szYqyA2SH3kRum89v6EE33NrjKLdwLy0Ffh2G4qUg32uVon3YtWxXrWXUEd8FCqftTH765n3cuqEC7zXUczvGyW8W5TzFrwvFmda1k/5wn0wEqelQJ7qWX/XlHC9Jr6z9hLrr0LRKws9tPhJS4FKutaTFjbUcSQcIhO48vcP7F9sZHWJhA58zshvpW/D9SoNNFAIMkRXQ27yHInWkL+ADa2LqTyGCXv+6ciz9GLs7aWfxLT3s4GIAxq8x5n2oALpQCB38X7PeXlw5bNM/2mmfdY59jz/38HjPr7BfFwVk4ejeXxG4NhHeN2XJJr/AOWJlfWOK/IO7D0v8fbv4z0Xnvlv3vNAfsf07+exh6ic+cR5Ae9jPVbYvijwbhDvMZv32jMmz0fy/FsK1P+TmZ9rCjz7VF7nm72ou7vElAfK6RGWq0/4tzL9PwJ1Au/04zH3QnDrLyRaCvkVvtvZRd7tRL7/13gOzv2l9OwGRPndXCBfuO8nipSFfbffKpBmBtNMLXKtk5gOsUTDlKYU/WmhZ2MIvbNCefqQ00BmaG3tE9Nozab2HCLoNY5G7Fp3owNp0T0wpgzFoFLYjB6Mnfn/VeYRDc6lEi0aM9GxEDZhwybcZxeoBfHbYMVT2ABZLX8bCqam/WlMPr4i+eF7Q4rkGaMbtuS76QqUWcJpxOud/HY69cfm91iS6IWedY38xgUsDuXxVd7+/VlvhrNsXmR5oSG+nedMi7EyJ/P4ZCoSqx2PyFjHE5Ry6ppb31c639P2tIirPCX4VxKtBgjMo/W1PZ/9Uzy2wrnODvRWYA6HCQEr3JbDigIWHIJGtyWxX0GPgA+U89Ysq3JRRyXGWrJZx1BA3vYyciiVsLWO8rgd03YG6vBRVODvcu6D7+MevosMFTYowntQcPw7Xt6+4xDnElrmyOsJLG8onU85dXIrJ1+2TXHzdQzzNTNG0Z1MRWwyvYAhq34sy+Ub/BbfiCnT8/jemjYy40PxHrTQQ+iqoFtoNK2PI9kQ7BtDtLDkf+6QiA806D8q4X7PsdFMDED5X83GaIFEa7uPpxxPUsAwv9O9cgZ+xgZ/R/4iNuA2ktN0yc++57pZz2BjEfIQuKMFisUjWCI7xcmDK+PZ+LrXQgO8k5Nmd8fC/j6f3ffQxE3qkw4QKkj8Jv7+kff6MJXDHzLNZVSQfNgpi4VKneuheJjPY8t5MvfPoQJkn/dwrx52eN/Dt0jYq1incc4H+X6XkbAv9JTmDsfrcEGJ5eBiJz4b0OwoE6FvN84zVgz2/UKp2I1ltAOf78tU9A/y6rDN77leHd6dym09CXGYo1TdSDKczfLYieV3GdOc79WhfRwyv5RpbZ14gG3M9Z4HzObrvJh81Xn58pXJcY6XZq8i3w6I+rSYNJ93PAgdou52xQAQ+kBgKt1icV6GIbRKFhS5DhqDtwcg/2igPsftMyVa/jXDjxgW5ZU8dnbAbbmazzWPv3B7TqIS00wLxMeOtH58wHrbtBf5X+TkwZW5bMh90niNx+fTMsJ8BLMc5aAv+CS9Bkv4PHNYlktIpo+wrp8ZOHcij83l/0nOsTbut+X8hkN+9nlej7G0xCGkE7l9Cb0IHSyTu0ggQqKPc69+m5ZoOTiGHoV5zO+kfqzLackHvM7n9g2S78I4WnpOKLXUq8OoEyfxnYEcd2G63aiItbKePM93i/7w7xm5m+lOdK5tn/XPVBiX8ZyX6alq4/UPCTwL7v8vL1+TuB+KcqhLwN77Nf6eUEKZTQ54C1EPz1JaUgw0oW/oRUlg2V5cJE2t89HH4T5q300DUPZoHBpp3TweOD6dpPftwHtKxlhLL3M7zl39TU8Bgqvwq45VWA7K6a6B5VoT2P9bx5rsSx3awfG2LA0cn0Kiv9Xb30yLKMuyWUhLb8uY+6Sc56ktMW9Qlmx/+gOB4w+R3DeR9fvdq0g8C3jfH5dxT6Q71lEGXqVC8MF+qstx5fG04wWqLaH+LCVxAkMdi1e
oWL0WOOde/m7r7NveO+biLXrAzohRxEL5Wu7UK1/p2oyKwTpes4WK+ogSPJH+PBoHSnwMgULRL4Qeck03SnhseiXRzgbxMDZSxQjIRr+jEX8wcBxW0jkFnqm/Yee1XynhaG7sn0Fr3Y+E7o7xSNh+8IXesQdo2XzMs0pgOW1HC/8fZea/EjETbzl5b+jDdWwjG+dpQUAUgsf+GmhA4SlBlwC6CeBih2v1iAq+5yaSWafk+9r9et1CIqnzvrMsLbZVtCi/U+I94fL9AOsBvAD3U2Hqr9EdWQlH2u/rELVfx0PR+weQjLO08oHhzjUk5juxdci2aU1F6sPdVJifCRwL5etAyceCvOwd+yy/ZVjyCGJDtwCi8A8t0Hb+kt/w1x3FxSrcwEyJjw1SKCpiZbkNUKjRapJ8UE9fAGviSoeQYXku4wf+ai8UljQVgNmelfgTiSJJB7rsu6T8/stNaNW6VuC32OgsCxAXgv4w8c+1THc3G3jr3kMU9GllNN7AFWwwk16D9b2YhlJilCrrceiLhZ4sUDcLwbpGf+80pCdy/3SpzOp5SckPLQzFBXQ7+xMBJe0JiVzXeEfnUvF4usg9j3eIK81fBGIhIvxyqVwAq1uXMT/FWueZP8P8WgLzyxJW7OZMm6FX5EQqP4gHedF7t+uKKJZJpwxD9WFXfjdZJ13I6j/Cy9dYenf8fPllfadThw5mHZoRk2d8n2OoKEyi9wWWOUZ9wN3/fxLFZWj/uaLfCT2k9Q7nR+AT+v5s4NNO5QSp3sCPI4TFrNCVBAgGQTBnOhbs1AEue7dhKddDcDLFByL7vyw9o5mHsnFBfy2Gtu1GBeyjtDhmUukpB3EL8/y0DEJ3yyJbobIsFWioD2KjbUdVII5hCZ9tl148R2/ec7H3D+/Xj0jGu7Px372AEjhC8gFwv+bvoxL1Ce9A6/3+CtdlfP+PxRybwW/Px3HSc8hZG7/9s5xyK/ZuE166uHNQhhO8c690lA6LYwKeDHjIEIB7tqeYjGd5tku+L38W0+9PBXtujBJyNQkdVvr/UuGCAYKA1/kyMF5DxSAk9BcC+6C9fs2z8rDvssBHBFxVwPqp7qdnRV6OYkOOhV2WD3DZ9+WDfZtKSZKNACwjuPxulsi1HipTuG2voyJzjuOt+G82pMky84358Z+UvFswUaB+FPKgDFRZHk6yhJvddjesIrmfxkb9mQrlLdGH57CW4mkkzY+TBBbFXOMztEThfXrEsW7RdQOX/cR+IPRuWq7dfKcZEtmdjlLhA11hiB9AVx2i4D9EMjy1l+82UeQcxGu8QuPCkm1XgXwlWc7IF0ZOTAmktYGHs0jCwJtMj2NHSj641QW6l+5gvUM3GQJz0RXWQkLfSqlJsaEI/a8kR/+jQXAV+o7gEkRf4BdjyBxE9KCEg6T6E8v4cR0vPYOjBgJtzsddI4XXhk94FsgvJN//Xw5gZaCf7mj+XyDR+OjeAIQxu49lYPu+OyTvUrWKRZzClw4oA+scS7FURcK6SuGh2JPfQkbyoyKg/F1c5L2Ugg5aZPUSjhOwM9+JxA/Vs+WNbo6LJBri9ouYdLYb4SXvuawCcBjLaWUF6/JKWqpryzgHwai3OSQICxf90RjG+ZyTrt3xMoUwxClnW286vPplFVeLmwsQ+h+db+JNtmeH0ZvldtHVOJb8K3z+JOuntcqhPP1Qes7SZ2daRJ5ukXyA73S2Ux9QalL0Br2xkBBA9ZeYY0fzY/lpDJkDP6FLKjUAz3ujQ2YDjVX8qEfHNFZoQOACnik9I2t7a9kulfUnl7mOjXBvrldXgTKw0elLnEbYTuoyJuacTZ3ycz0WwLiYc6ZQibya/3eSfDQxJtV5lMdhrf+A+xE1vW8FnnEFSQllHJo2eRRJqU16Dvfzgbw9zXNs95Gr6CHP+3H7C95zXeeU38H94G0q1zho8Ej0CSo2/ph7G/W+eUybMc6rD1lHWdk65t7betcOKQhW6XhM8rP8uXBHDZxHb8iD/D2f+6Gc7FqgDOyshlYpvVYpSbGhCd0O8elNANzj1EIH0ipevJGU/Rx6K+okP3TMfS/Q2g8gma8ONKC9xfW0gEAMN/XhOi1lpE1Lz0AsDEeyE7Xc5+x/mL8TAoQKIjuJ2+5qfU84SpAfXTyWFu2+TkNvXaVv0Br7jSP4/6pDin3FUsfiDAUens73PUcKj2e3jf43aFmGukg+T6JEEOTtged6vsBztffxOftSJ9P0PgBwU3/CMyDWkZxPCNSHL3h1QBzP0XHSc6w3vAC7sx17rEi+YO3b2QWP8IwU6+GZS0+DW9b4P9/zBMV5by6nV+g6Cfe3KxQlo7f91a+wgt9awCoKWfbHSt9dmO8VrGUjdj01fFikGGJUS9I6hA3Kd6Uy0dYWi9lgurOR9QYns4FLBOoUvAovelb1+ZJ3PW5FTwkaW7g1f+aR80zWL/R7wmWJvkaMrf86FYGF9LZYPMWG9Bg2pldTYRlH5RPW3WtsNF1X6eUSng4XZT+Lv2OkbxMPZfme9yPBQIGzUd/HOXkBcZQy2uFJWuoXBAh1IrevlfA0txNIdgfwHSxwjkHhCc15kKLy9Eg/fw/38N1/gs/2WYcwf05FBvVkRyp9GP+Ncd8Y5vaW5GeNBG6gVwZu9XtZHkizN89JUZl9roR8WSt9Ar/FQ6lkH+5Y578LnIeI/RlUsnBea8z1URf+UKaCrFBUlNCFHzg+kMvYKMW5YGHJ3yzR0JvVXgPUHEhf7rKmdpUjH0PLuEbcilH93c8PMkFUMmaz+hLFAtbk2bJ+P7V1B5Y6ZrsupkxDQ4CaS3hmt6xPLZBuCQndXmszkqePZ+ideMuziibz3EMCxPQyFZ63A+ckaeH5i6y8SOsObtmjqBRkJD9TnY+H+Qyb0AK8xiub5hiLtNqpey4xoovqFF7ncIcMrKcDBHaHsy/pvOOQJY5vDv26OzvvAwqDndp2ZsxzQcnBzHbbsq5d6NxnP8m7631MjyF06wIfVoa3z9az2oCVPo1K7aFU6OxznMO6jzI8V9aPTH+ZyqXr3XiLRHozy+hG716/ooLgoqlIvv7A+ngg68WmrE9xAYb30usxjnVyRoF7rIkp16GiY9EVG4jQhZYSgt8QbIbpRnciQWXo9kODfZ/0nOjEupum8eNIO/mZ1wt33Q9oSaWdRnCJlD4U6kESjjseGNd4dgO8g8tpBdg5vrtpOaCBn+OlvZ3l83AZStc0elSKWZFX0QouZLV08nqjC3gNkpJ3f2Jq3qmyflBQgiSGYw9IeEz0clpoIL6DmS8ohugT/rX07IKwjeJRJDpEem9BpegR75x2PkMhFze8J6eTIBd75DGNhNEZ4/24hPfw83gTlbOJJJkEy+D2wPtZRpJHw7405tuBBXi8971cwW8t7n2jfqPvfU/nPFiIr0p+oZQQad8Xc715VC7WluF5g7W8jazvIreAgnUWyTLlKaCnsqxQJ7Zk+T7EfS0xyuIEltFeJMc3SMx/jsnXdgXydSYV03rWtWl8f3HBhVA4v0KPwhpHMYIy9XiRMprH72ZlActeoehpcWWz5Q
3/3WrX0wZ7kUmiKjjC62w25NdrtVIoFJXG/KemayEo+tVCH3x0noiN/XlaCg87UigUCoVi47HQFQqFQqFQbHzQgAuFQqFQKJTQFQqFQqFQKKErFAqFQqGoCP4jwADQNvw20jA5ogAAAABJRU5ErkJggg==", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89139074,"math_prob":0.98390365,"size":268,"snap":"2021-43-2021-49","text_gpt3_token_len":82,"char_repetition_ratio":0.13636364,"word_repetition_ratio":0.0,"special_character_ratio":0.3097015,"punctuation_ratio":0.12698413,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9916635,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T08:36:49Z\",\"WARC-Record-ID\":\"<urn:uuid:a870ebce-381c-49d5-9699-b34bb8e99aeb>\",\"Content-Length\":\"32473\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:82c4cc8a-6a4d-4e8b-b532-76823748e58b>\",\"WARC-Concurrent-To\":\"<urn:uuid:0acf2f2d-3998-4315-a10e-e197b251ac52>\",\"WARC-IP-Address\":\"104.26.7.16\",\"WARC-Target-URI\":\"https://homework.cpm.org/category/ACC/textbook/acc6/chapter/1%20Unit%2010/lesson/CC2:%201.1.4/problem/1-36\",\"WARC-Payload-Digest\":\"sha1:NFPZC56MLE6KYCPGWZSBRP2EZKOUHVJE\",\"WARC-Block-Digest\":\"sha1:GH5EXKDCAQ45P3MOHDOSQ54MN3OSGQHJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587854.13_warc_CC-MAIN-20211026072759-20211026102759-00102.warc.gz\"}"}
https://www.glisshop.es/esqui-nordico/material/clasicos/botas/hombre/
[ "Dispones de 100 días para devolver tu compra\n\n# Botas esquí de fondo clásico hombre\n\n## Looking for a pair of classic ski boots?\n\nWinter is coming and you just realised your old cross-country ski boots are completely worn out. Or you just want to give yourself a treat and buy the latest model. Whatever the reason, whether you look for comfort or for performance, Simon Fourcade Nørdic has a large offering allowing every skier to find the right boot.\n\n(=showDesc ? 'Ver menos' : 'Leer más' =)\nÖSTERSUND PROLONGACIÓNHasta la medianoche del lunes CÓDIGO: WORLDCUP", null, "", null, "", null, "(En los productos identificados 'Östersund', Introduce este código al hacer tu pedido)\nÖstersund\n• Desde Desde 181,40 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n• Desde Desde 181,40 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\nÖstersund\n• Desde Desde 282,23 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n• Desde Desde 282,23 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\nÖstersund\n• Desde Desde 161,23 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n• Desde Desde 161,23 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\nÖstersund\n• Desde Desde 119,99 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n• Desde Desde 119,99 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\nÖstersund\n• Desde Desde 150,24 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n• Desde Desde 150,24 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\nÖstersund\n• Desde Desde 90,65 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n• Desde Desde 90,65 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n-30%\n-30%\n• Desde Desde 63,42 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n• Desde Desde 63,42 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n-30%\n-30%\n• Desde Desde 91,66 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n• Desde Desde 91,66 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n-30%\n-30%\n• Desde Desde 77,54 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n• Desde Desde 77,54 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n-25%\n-25%\n• Desde Desde 226,77 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n• Desde Desde 226,77 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n-25%\n-25%\n• Desde Desde 302,40 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)\n• Desde Desde 302,40 €\nO (=dividePaymentUnit=)x(=priceDivided =)(=itemCurrency=)" ]
[ null, "https://glisshop-glisshop-fr-storage.omn.proximis.com/Imagestorage/images/200/0/5de50cd6cc96b_ES_XMASDEALS_10.png", null, "https://glisshop-glisshop-fr-storage.omn.proximis.com/Imagestorage/images/200/0/5de50ce5d7df8_ES_XMASDEALS_15.png", null, "https://glisshop-glisshop-fr-storage.omn.proximis.com/Imagestorage/images/200/0/5de5254473a00_ES_XMASDEALS_20.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.57552785,"math_prob":0.7306131,"size":581,"snap":"2019-51-2020-05","text_gpt3_token_len":144,"char_repetition_ratio":0.08665511,"word_repetition_ratio":0.0,"special_character_ratio":0.19104992,"punctuation_ratio":0.104761906,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99429387,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-07T17:17:32Z\",\"WARC-Record-ID\":\"<urn:uuid:6974192c-ac1b-4044-b495-cb7f5aeb81b9>\",\"Content-Length\":\"507337\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b4f49c60-2022-4ffb-ae74-9479c17e2bc1>\",\"WARC-Concurrent-To\":\"<urn:uuid:d21b4ea1-e587-4525-8899-04053e64a5d7>\",\"WARC-IP-Address\":\"130.211.29.38\",\"WARC-Target-URI\":\"https://www.glisshop.es/esqui-nordico/material/clasicos/botas/hombre/\",\"WARC-Payload-Digest\":\"sha1:HSUEYZAKOHAW33BRPOYTRA6TOXSDXAIL\",\"WARC-Block-Digest\":\"sha1:KXT3NOPXGATL5S2IAEKQG73RLCXXVV4L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540500637.40_warc_CC-MAIN-20191207160050-20191207184050-00314.warc.gz\"}"}
https://www.geeksforgeeks.org/find-whether-an-array-is-subset-of-another-array-using-map/
[ "Related Articles\n\n# Find whether an array is subset of another array using Map\n\n• Difficulty Level : Basic\n• Last Updated : 08 Jun, 2021\n\nGiven two arrays: arr1[0..m-1] and arr2[0..n-1]. Find whether arr2[] is a subset of arr1[] or not. Both the arrays are not in sorted order. It may be assumed that elements in both arrays are distinct.\nExamples:\n\n```Input: arr1[] = {11, 1, 13, 21, 3, 7}, arr2[] = {11, 3, 7, 1}\nOutput: arr2[] is a subset of arr1[]\n\nInput: arr1[] = {1, 2, 3, 4, 5, 6}, arr2[] = {1, 2, 4}\nOutput: arr2[] is a subset of arr1[]\n\nInput: arr1[] = {10, 5, 2, 23, 19}, arr2[] = {19, 5, 3}\nOutput: arr2[] is not a subset of arr1[]```\n\nSimple Approach: A simple approach is to run two nested loops. The outer loop picks all the elements of B[] one by one. The inner loop linearly searches for the element picked by the outer loop in A[]. If all elements are found, then print Yes, else print No. You can check the solution here.\nEfficient Approach: Create a map to store the frequency of each distinct number present in A[]. Then we will check if each number of B[] is present in map or not. If present in the map, we will decrement the frequency value for that number by one and check for the next number. If map value for any number becomes zero, we will erase it from the map. If any number of B[] is not found in the map, we will set the flag value and break the loops and print No. Otherwise, we will print Yes.\n\n## C++\n\n `// C++ program to check if an array is``// subset of another array` `#include ``using` `namespace` `std;` `// Function to check if an array is``// subset of another array` `int` `isSubset(``int` `a[], ``int` `b[], ``int` `m, ``int` `n)``{` `    ``// map to store the values of array a[]``    ``map<``int``, ``int``> mp1;` `    ``for` `(``int` `i = 0; i < m; i++)``        ``mp1[a[i]]++;` `    ``// flag value``    ``int` `f = 0;` `    ``for` `(``int` `i = 0; i < n; i++) {``        ``// if b[i] is not present in map``        ``// then array b[] can not be a``        ``// subset of array a[]` `        ``if` `(mp1.find(b[i]) == mp1.end()) {``            ``f = 1;``            ``break``;``        ``}` `        ``// if if b[i] is present in map``        ``// decrement by one``        ``else` `{``            ``mp1[b[i]]--;` `            ``if` `(mp1[b[i]] == 0)``                ``mp1.erase(mp1.find(b[i]));``        ``}``    ``}` `    ``return` `f;``}` `// Driver code``int` `main()``{``    ``int` `arr1[] = { 11, 1, 13, 21, 3, 7 };``    ``int` `arr2[] = { 11, 3, 7, 1 };` `    ``int` `m = ``sizeof``(arr1) / ``sizeof``(arr1);``    ``int` `n = ``sizeof``(arr2) / ``sizeof``(arr2);` `    ``if` `(!isSubset(arr1, arr2, m, n))``        ``cout<<``\"arr2[] is subset of arr1[] \"``;``    ``else``        ``cout<<``\"arr2[] is not a subset of arr1[]\"``;` `    ``return` `0;``}`\n\n## Java\n\n `// Java program to check if an array is``// subset of another array``import` `java.util.*;` `class` `GFG``{` `    ``// Function to check if an array is``    ``// subset of another array``    ``static` `int` `isSubset(``int` `a[], ``int` `b[], ``int` `m, ``int` `n)``    ``{` `        ``// map to store the values of array a[]``        ``HashMap mp1 = ``new``                ``HashMap();` `        ``for` `(``int` `i = ``0``; i < m; i++)``            ``if` `(mp1.containsKey(a[i]))``            ``{``                ``mp1.put(a[i], mp1.get(a[i]) + ``1``);``            ``}``            ``else``            ``{``                ``mp1.put(a[i], ``1``);``            ``}` `        ``// flag value``     
   ``int` `f = ``0``;` `        ``for` `(``int` `i = ``0``; i < n; i++)``        ``{``            ``// if b[i] is not present in map``            ``// then array b[] can not be a``            ``// subset of array a[]``            ``if` `(!mp1.containsKey(b[i]))``            ``{``                ``f = ``1``;``                ``break``;``            ``}` `            ``// if if b[i] is present in map``            ``// decrement by one``            ``else``            ``{``                ``mp1.put(b[i], mp1.get(b[i]) - ``1``);` `                ``if` `(mp1.get(b[i]) == ``0``)``                    ``mp1.remove(b[i]);``            ``}``        ``}` `        ``return` `f;``    ``}` `    ``// Driver code``    ``public` `static` `void` `main(String[] args)``    ``{``        ``int` `arr1[] = { ``11``, ``1``, ``13``, ``21``, ``3``, ``7` `};``        ``int` `arr2[] = { ``11``, ``3``, ``7``, ``1` `};` `        ``int` `m = arr1.length;``        ``int` `n = arr2.length;` `        ``if` `(isSubset(arr1, arr2, m, n)!=``1``)``            ``System.out.print(``\"arr2[] is subset of arr1[] \"``);``        ``else``            ``System.out.print(``\"arr2[] is not a subset of arr1[]\"``);``    ``}``}` `// This code is contributed by Rajput-Ji`\n\n## Python\n\n `# Python program to check if an array is``# subset of another array` `# Function to check if an array is``# subset of another array``def` `isSubset(a, b, m, n) :``    ` `    ``# map to store the values of array a``    ``mp1 ``=` `{}``    ``for` `i ``in` `range``(m):``        ``if` `a[i] ``not` `in` `mp1:``            ``mp1[a[i]] ``=` `0``        ``mp1[a[i]] ``+``=` `1``    ` `    ``# flag value``    ``f ``=` `0``    ``for` `i ``in` `range``(n):``        ` `        ``# if b[i] is not present in map``        ``# then array b can not be a``        ``# subset of array a``        ``if` `b[i] ``not` `in` `mp1:``            ``f ``=` `1``            ``break``        ` `        ``# if if b[i] is present in map``        ``# decrement by one``        ``else` `:``            ``mp1[b[i]] ``-``=` `1``            ` `            ``if` `(mp1[b[i]] ``=``=` `0``):``                ``mp1.pop(b[i])``    ``return` `f``    ` `# Driver code``arr1 ``=` `[``11``, ``1``, ``13``, ``21``, ``3``, ``7` `]``arr2 ``=` `[``11``, ``3``, ``7``, ``1` `]` `m ``=` `len``(arr1)``n ``=` `len``(arr2)` `if` `(``not` `isSubset(arr1, arr2, m, n)):``    ``print``(``\"arr2[] is subset of arr1[] \"``)``else``:``    ``print``(``\"arr2[] is not a subset of arr1[]\"``)` `# This code is contributed by Shubhamsingh10`\n\n## C#\n\n `// C# program to check if an array is``// subset of another array``using` `System;``using` `System.Collections.Generic;` `class` `GFG``{` `    ``// Function to check if an array is``    ``// subset of another array``    ``static` `int` `isSubset(``int` `[]a, ``int` `[]b, ``int` `m, ``int` `n)``    ``{` `        ``// map to store the values of array []a``        ``Dictionary<``int``, ``int``> mp1 = ``new``                ``Dictionary<``int``, ``int``>();` `        ``for` `(``int` `i = 0; i < m; i++)``            ``if` `(mp1.ContainsKey(a[i]))``            ``{``                ``mp1[a[i]] = mp1[a[i]] + 1;``            ``}``            ``else``            ``{``                ``mp1.Add(a[i], 1);``            ``}` `        ``// flag value``        ``int` `f = 0;` `        ``for` `(``int` `i = 0; i < n; i++)``        ``{``            ``// if b[i] is not present in map``            ``// then array []b can not be a``            ``// subset of array []a``            ``if` 
`(!mp1.ContainsKey(b[i]))``            ``{``                ``f = 1;``                ``break``;``            ``}` `            ``// if if b[i] is present in map``            ``// decrement by one``            ``else``            ``{``                ``mp1[b[i]] = mp1[b[i]] - 1;` `                ``if` `(mp1[b[i]] == 0)``                    ``mp1.Remove(b[i]);``            ``}``        ``}` `        ``return` `f;``    ``}` `    ``// Driver code``    ``public` `static` `void` `Main(String[] args)``    ``{``        ``int` `[]arr1 = {11, 1, 13, 21, 3, 7};``        ``int` `[]arr2 = {11, 3, 7, 1};` `        ``int` `m = arr1.Length;``        ``int` `n = arr2.Length;` `        ``if` `(isSubset(arr1, arr2, m, n) != 1)``            ``Console.Write(``\"arr2[] is subset of arr1[] \"``);``        ``else``            ``Console.Write(``\"arr2[] is not a subset of arr1[]\"``);``    ``}``}` `// This code is contributed by PrinciRaj1992`\n\n## Javascript\n\n ``\nOutput:\n`arr2[] is subset of arr1[]`\n\nTime Complexity: O (n)\n\nAttention reader! Don’t stop learning now. Get hold of all the important DSA concepts with the DSA Self Paced Course at a student-friendly price and become industry ready.  To complete your preparation from learning a language to DS Algo and many more,  please refer Complete Interview Preparation Course.\n\nIn case you wish to attend live classes with experts, please refer DSA Live Classes for Working Professionals and Competitive Programming Live for Students.\n\nMy Personal Notes arrow_drop_up" ]
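For readers who prefer the standard library, the same frequency-map idea reduces to a few lines with Python's collections.Counter. This sketch is an illustration added here, not part of the article above; the helper name is_subset is ours.

``````from collections import Counter

def is_subset(a, b):
    # b is a subset of a iff no value is needed more often
    # than it occurs in a (multiset containment)
    need = Counter(b)
    have = Counter(a)
    return all(have[x] >= c for x, c in need.items())

print(is_subset([11, 1, 13, 21, 3, 7], [11, 3, 7, 1]))  # True
print(is_subset([10, 5, 2, 23, 19], [19, 5, 3]))        # False
``````

With distinct elements, as the problem statement assumes, this is equivalent to a plain set check, set(b) <= set(a).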
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6174727,"math_prob":0.97631,"size":6956,"snap":"2021-31-2021-39","text_gpt3_token_len":2445,"char_repetition_ratio":0.15290564,"word_repetition_ratio":0.34254143,"special_character_ratio":0.40281773,"punctuation_ratio":0.17850526,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99912316,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-25T07:44:43Z\",\"WARC-Record-ID\":\"<urn:uuid:58fd1244-53c2-4a10-8254-4cdf7ff5729b>\",\"Content-Length\":\"164517\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b6bffda9-f007-4b67-a17f-860a877c07d7>\",\"WARC-Concurrent-To\":\"<urn:uuid:c2cc3222-b154-47a7-9c0d-e5dfdab3512f>\",\"WARC-IP-Address\":\"23.45.233.19\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/find-whether-an-array-is-subset-of-another-array-using-map/\",\"WARC-Payload-Digest\":\"sha1:HD22QBPTSPTJWOX3665MUECRR25MJSWV\",\"WARC-Block-Digest\":\"sha1:KNE2HIWQFOTOTNWTNT5VUBCF4CS3NFDV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057598.98_warc_CC-MAIN-20210925052020-20210925082020-00709.warc.gz\"}"}
https://www.math10.com/en/geometry/perimeter.html
[ "Perimeter, perimeter formulas\n\nPerimeter(P): The distance around the outside of a shape.\n\nThe standard notation for perimeter is P\n\nPerimeter of a triangle", null, "P = a + b + c\n\nPerimeter of a square", null, "Squares have four equal sides.\nLet the length of side be a.\nThe perimeter of а square is P = a + a + a + a or:\n\nP = 4 ⋅ a\n\nPerimeter of a rectangle", null, "Let the length and width of a rectangle be a and b.\nThe sum of the lengths of the sides is P = a + b + a + b or:\n\nP = 2 ⋅ a + 2 ⋅ b\n\nPerimeter of a parallelogram", null, "Since the opposite sides of a palallelogram are equal in length its perimeter is P = a + b + a + b or:\n\nP = 2 ⋅ a + 2 ⋅ b\n\nAs we see the perimeter of the parallelogram is equal the perimeter of the rectangle.\n\nPerimeter of a rhombus", null, "P = 4 ⋅ a\n\nPerimeter of an isosceles trapezoid", null, "Let a and b be the lengths of the parallel sides. Since it is isosceles the other two sides are equal in length and let c be their length.\n\nP = a + b + c + c = a + b + 2 ⋅ c\n\nPerimeter of an equilateral triangle", null, "As we know equilateral triangles have 3 equal sides. So if the length of a side is a then the perimeter formula is P = a + a + a\n\nP = 3 ⋅ a\n\nCircumference of a circle", null, "$C = d \\cdot \\pi = 2 \\cdot r \\cdot \\pi$\n\n$\\pi = 3.14$\nd is diameter.\n\nRegular polygon", null, "$P = 2nb\\sin\\frac{\\pi}{n}$\n\nn is the number of edges(vertices).\n$\\pi = 3.14159265359$\n\nContact email:", null, "" ]
[ null, "https://www.math10.com/geomimages/triangle-sides.gif", null, "https://www.math10.com/geomimages/ssquare.gif", null, "https://www.math10.com/geomimages/srectangle.gif", null, "https://www.math10.com/geomimages/sparallelogram.gif", null, "https://www.math10.com/geomimages/rhombus.gif", null, "https://www.math10.com/geomimages/isoscelestrapezoid.gif", null, "https://www.math10.com/geomimages/equilateralTriangle.gif", null, "https://www.math10.com/geomimages/circle-radius.gif", null, "https://www.math10.com/geomimages/regular-poligon-per.gif", null, "https://www.math10.com/images/mathMail.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7754587,"math_prob":1.0000024,"size":1227,"snap":"2019-26-2019-30","text_gpt3_token_len":351,"char_repetition_ratio":0.20932133,"word_repetition_ratio":0.10121457,"special_character_ratio":0.27302364,"punctuation_ratio":0.070866145,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000061,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,3,null,6,null,6,null,6,null,3,null,3,null,6,null,6,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-17T00:47:40Z\",\"WARC-Record-ID\":\"<urn:uuid:a40bb437-942f-4faa-895b-a50ed33ac41b>\",\"Content-Length\":\"17375\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4bfb7c52-cac5-4651-b104-e5d17c9c223e>\",\"WARC-Concurrent-To\":\"<urn:uuid:59c71700-1330-41c8-85d2-a9b6590f7f83>\",\"WARC-IP-Address\":\"104.25.26.16\",\"WARC-Target-URI\":\"https://www.math10.com/en/geometry/perimeter.html\",\"WARC-Payload-Digest\":\"sha1:J52H6PVJBQ2T4SGBLNPINHMVNZ7UQCID\",\"WARC-Block-Digest\":\"sha1:Q4BGX3SBW52WDDVJM4P3BB24MKMDZVM7\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998339.2_warc_CC-MAIN-20190617002911-20190617024911-00081.warc.gz\"}"}
https://www.colorhexa.com/896200
[ "# #896200 Color Information\n\nIn a RGB color space, hex #896200 is composed of 53.7% red, 38.4% green and 0% blue. Whereas in a CMYK color space, it is composed of 0% cyan, 28.5% magenta, 100% yellow and 46.3% black. It has a hue angle of 42.9 degrees, a saturation of 100% and a lightness of 26.9%. #896200 color hex could be obtained by blending #ffc400 with #130000. Closest websafe color is: #996600.\n\n• R 54\n• G 38\n• B 0\nRGB color chart\n• C 0\n• M 28\n• Y 100\n• K 46\nCMYK color chart\n\n#896200 color description : Dark orange [Brown tone].\n\n# #896200 Color Conversion\n\nThe hexadecimal color #896200 has RGB values of R:137, G:98, B:0 and CMYK values of C:0, M:0.28, Y:1, K:0.46. Its decimal value is 9003520.\n\nHex triplet RGB Decimal 896200 `#896200` 137, 98, 0 `rgb(137,98,0)` 53.7, 38.4, 0 `rgb(53.7%,38.4%,0%)` 0, 28, 100, 46 42.9°, 100, 26.9 `hsl(42.9,100%,26.9%)` 42.9°, 100, 53.7 996600 `#996600`\nCIE-LAB 44.311, 8.332, 51.753 14.685, 14.055, 1.939 0.479, 0.458, 14.055 44.311, 52.419, 80.854 44.311, 32.307, 45.21 37.489, 4.311, 23.175 10001001, 01100010, 00000000\n\n# Color Schemes with #896200\n\n• #896200\n``#896200` `rgb(137,98,0)``\n• #002789\n``#002789` `rgb(0,39,137)``\nComplementary Color\n• #891e00\n``#891e00` `rgb(137,30,0)``\n• #896200\n``#896200` `rgb(137,98,0)``\n• #6c8900\n``#6c8900` `rgb(108,137,0)``\nAnalogous Color\n• #1e0089\n``#1e0089` `rgb(30,0,137)``\n• #896200\n``#896200` `rgb(137,98,0)``\n• #006c89\n``#006c89` `rgb(0,108,137)``\nSplit Complementary Color\n• #620089\n``#620089` `rgb(98,0,137)``\n• #896200\n``#896200` `rgb(137,98,0)``\n• #008962\n``#008962` `rgb(0,137,98)``\n• #890027\n``#890027` `rgb(137,0,39)``\n• #896200\n``#896200` `rgb(137,98,0)``\n• #008962\n``#008962` `rgb(0,137,98)``\n• #002789\n``#002789` `rgb(0,39,137)``\n• #3d2b00\n``#3d2b00` `rgb(61,43,0)``\n• #563e00\n``#563e00` `rgb(86,62,0)``\n• #705000\n``#705000` `rgb(112,80,0)``\n• #896200\n``#896200` `rgb(137,98,0)``\n• #a37400\n``#a37400` `rgb(163,116,0)``\n• #bc8600\n``#bc8600` `rgb(188,134,0)``\n• #d69900\n``#d69900` `rgb(214,153,0)``\nMonochromatic Color\n\n# Alternatives to #896200\n\nBelow, you can see some colors close to #896200. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #894000\n``#894000` `rgb(137,64,0)``\n• #894b00\n``#894b00` `rgb(137,75,0)``\n• #895700\n``#895700` `rgb(137,87,0)``\n• #896200\n``#896200` `rgb(137,98,0)``\n• #896d00\n``#896d00` `rgb(137,109,0)``\n• #897900\n``#897900` `rgb(137,121,0)``\n• #898400\n``#898400` `rgb(137,132,0)``\nSimilar Colors\n\n# #896200 Preview\n\nThis text has a font color of #896200.\n\n``<span style=\"color:#896200;\">Text here</span>``\n#896200 background color\n\nThis paragraph has a background color of #896200.\n\n``<p style=\"background-color:#896200;\">Content here</p>``\n#896200 border color\n\nThis element has a border color of #896200.\n\n``<div style=\"border:1px solid #896200;\">Content here</div>``\nCSS codes\n``.text {color:#896200;}``\n``.background {background-color:#896200;}``\n``.border {border:1px solid #896200;}``\n\n# Shades and Tints of #896200\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #130e00 is the darkest color, while #ffffff is the lightest one.\n\n• #130e00\n``#130e00` `rgb(19,14,0)``\n• #271c00\n``#271c00` `rgb(39,28,0)``\n• #3b2a00\n``#3b2a00` `rgb(59,42,0)``\n• #4e3800\n``#4e3800` `rgb(78,56,0)``\n• #624600\n``#624600` `rgb(98,70,0)``\n• #755400\n``#755400` `rgb(117,84,0)``\n• #896200\n``#896200` `rgb(137,98,0)``\n• #9d7000\n``#9d7000` `rgb(157,112,0)``\n• #b07e00\n``#b07e00` `rgb(176,126,0)``\n• #c48c00\n``#c48c00` `rgb(196,140,0)``\n• #d79a00\n``#d79a00` `rgb(215,154,0)``\n• #eba800\n``#eba800` `rgb(235,168,0)``\n• #ffb600\n``#ffb600` `rgb(255,182,0)``\n• #ffbc13\n``#ffbc13` `rgb(255,188,19)``\n• #ffc127\n``#ffc127` `rgb(255,193,39)``\n• #ffc73b\n``#ffc73b` `rgb(255,199,59)``\n• #ffcd4e\n``#ffcd4e` `rgb(255,205,78)``\n• #ffd262\n``#ffd262` `rgb(255,210,98)``\n• #ffd875\n``#ffd875` `rgb(255,216,117)``\n• #ffdd89\n``#ffdd89` `rgb(255,221,137)``\n• #ffe39d\n``#ffe39d` `rgb(255,227,157)``\n• #ffe9b0\n``#ffe9b0` `rgb(255,233,176)``\n• #ffeec4\n``#ffeec4` `rgb(255,238,196)``\n• #fff4d7\n``#fff4d7` `rgb(255,244,215)``\n• #fff9eb\n``#fff9eb` `rgb(255,249,235)``\n• #ffffff\n``#ffffff` `rgb(255,255,255)``\nTint Color Variation\n\n# Tones of #896200\n\nA tone is produced by adding gray to any pure hue. In this case, #4a473f is the less saturated color, while #896200 is the most saturated one.\n\n• #4a473f\n``#4a473f` `rgb(74,71,63)``\n• #4f493a\n``#4f493a` `rgb(79,73,58)``\n• #544b35\n``#544b35` `rgb(84,75,53)``\n• #5a4e2f\n``#5a4e2f` `rgb(90,78,47)``\n• #5f502a\n``#5f502a` `rgb(95,80,42)``\n• #645225\n``#645225` `rgb(100,82,37)``\n• #695420\n``#695420` `rgb(105,84,32)``\n• #6f571a\n``#6f571a` `rgb(111,87,26)``\n• #745915\n``#745915` `rgb(116,89,21)``\n• #795b10\n``#795b10` `rgb(121,91,16)``\n• #7e5d0b\n``#7e5d0b` `rgb(126,93,11)``\n• #846005\n``#846005` `rgb(132,96,5)``\n• #896200\n``#896200` `rgb(137,98,0)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #896200 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
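The RGB, CMYK and HSL figures above all follow from the standard conversion formulas. As an illustration (ours, not part of the page), a minimal Python sketch that reproduces the CMYK line for #896200; the function name hex_to_cmyk is ours:

``````def hex_to_cmyk(hex_color):
    """Convert '#rrggbb' to (C, M, Y, K) percentages using the
    standard formula: K = 1 - max(R, G, B); C, M, Y scale the rest."""
    h = hex_color.lstrip('#')
    r, g, b = (int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
    k = 1.0 - max(r, g, b)
    if k == 1.0:                 # pure black: avoid division by zero
        return (0.0, 0.0, 0.0, 100.0)
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return tuple(round(x * 100, 1) for x in (c, m, y, k))

print(hex_to_cmyk('#896200'))   # (0.0, 28.5, 100.0, 46.3)
``````

The printed values match the CMYK color chart above.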
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.51614696,"math_prob":0.7081006,"size":3665,"snap":"2021-31-2021-39","text_gpt3_token_len":1558,"char_repetition_ratio":0.15214422,"word_repetition_ratio":0.011070111,"special_character_ratio":0.5642565,"punctuation_ratio":0.23224352,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9931822,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T04:22:57Z\",\"WARC-Record-ID\":\"<urn:uuid:41f694ca-a687-43ac-85a4-3c040a20538a>\",\"Content-Length\":\"36104\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e850bb5f-0237-4c13-bdb6-66ecc26a416b>\",\"WARC-Concurrent-To\":\"<urn:uuid:76c267df-ddb8-4f46-b097-77d726199bbe>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/896200\",\"WARC-Payload-Digest\":\"sha1:MT2TPLM23W35EPV3DPNYAAV5NLXVOQJS\",\"WARC-Block-Digest\":\"sha1:A3HTE33ONE5ANIWULZIVLW2F36UAJZZL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057496.18_warc_CC-MAIN-20210924020020-20210924050020-00247.warc.gz\"}"}
http://lists.puremagic.com/pipermail/digitalmars-d/2020-August/307690.html
[ "# Pattern matching is-expressions\n\nWed Aug 19 09:53:41 UTC 2020\n\n```On Wednesday, 19 August 2020 at 07:29:18 UTC, Stefan Koch wrote:\n> Hi there,\n>\n> Recently I have been thinking about converting commonly used is\n> expressions into __traits,\n> since having that makes type functions more consistent.\n>\n[ ... ]\n\nThere is even more inconsistency, which comes from the fact that\npattern matching is expressions are not actually able to match\narbitrarily complex patterns.\n\nYou cannot even use it to extract parts from a function type.\nwhich is why the other wired forms exist\n\nfor an example look at this:\n\npackage void invert(double[] from) {\nfrom[] *= 7.0;\npragma(msg, typeof(invert)); // output void(double[] from)\npragma(msg, is(void function (double[]))); // true void\nfunction (double[]) is a type\npragma(msg, is(typeof(invert) == function)); // true typeof\ninvert is a function\npragma(msg, is(typeof(invert) == void function(double[])));\n// false ???\npragma(msg, is(typeof(invert) == R function (A),\nR, A)); // also false, that would suggest that typeof(invert) is\nactually function without return type or parameters.\n// what we see here is that is expressions are broken.\n// pragma(msg, is(typeof(invert) == R F (A), R, A, F)); //\nif it's not a function let F be a free symbol as well\n\n// answer: expected identifier `F` in declarator\n\n// this is a parser error\n\n// therefore the line had to be commented out\n}\n\nthe same code is also at: https://run.dlang.io/is/76q4wG\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8038932,"math_prob":0.95770127,"size":1555,"snap":"2020-45-2020-50","text_gpt3_token_len":395,"char_repetition_ratio":0.1437782,"word_repetition_ratio":0.0,"special_character_ratio":0.28167203,"punctuation_ratio":0.16171618,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9937595,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-26T17:59:29Z\",\"WARC-Record-ID\":\"<urn:uuid:c82e6443-30c3-40aa-82c8-439a92ae8579>\",\"Content-Length\":\"4513\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f4af7e2-6ff3-4878-9fa5-31842d32f982>\",\"WARC-Concurrent-To\":\"<urn:uuid:67528041-ab80-4105-a23e-4a39e22fa39a>\",\"WARC-IP-Address\":\"52.35.3.5\",\"WARC-Target-URI\":\"http://lists.puremagic.com/pipermail/digitalmars-d/2020-August/307690.html\",\"WARC-Payload-Digest\":\"sha1:UQNFCI4RKQX64F26BPCJINV3HDUEZVCY\",\"WARC-Block-Digest\":\"sha1:CGAITVOXTEY5KDGJ7JVPOFEJOW5IIWVF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141188899.42_warc_CC-MAIN-20201126171830-20201126201830-00235.warc.gz\"}"}
http://jdenuno.com/Chemistry/Chem3.htm
[ "Scientific Measurement and Problem Solving\n\nChapter Objectives\nAt the end of this unit students will be able to\n1.  convert measurements to scientific notation.\n2.  distintuish among accuracy, precision,m and error of a measurement.\n3.  determine the number of significant figures in a measurement and in a calculated answer..\n4.  list SI units of measurement and common SI prefixes\n5.  distinguish between mass and weight of an object.\n6.  convert between Celsius and Kelvin temperature scales\n.\n7.  construct conversion factors from equivalent measurements.\n8.  use dimensional analysis to solve a variety of conversion problems.\n9.  calculate the density of a material from experimental data.\n Key Terms absolute zero accepted value accuracy calorie Celsius scale conversion factor density dimensional analysis energy error experimental vaklue gram International System of Units (SI) joule Kelvin scale kilogram liter (L) mass measurement meter (m) percent error precision scientific notation significant figures temperature weight" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80046415,"math_prob":0.9505784,"size":692,"snap":"2023-40-2023-50","text_gpt3_token_len":135,"char_repetition_ratio":0.13517442,"word_repetition_ratio":0.0,"special_character_ratio":0.20231214,"punctuation_ratio":0.16,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9981019,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T18:22:51Z\",\"WARC-Record-ID\":\"<urn:uuid:27c4bebe-8c61-47b5-9443-d0893020aa4a>\",\"Content-Length\":\"4112\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3c63a2dd-18b5-4ecc-83fb-b9d0630a8e80>\",\"WARC-Concurrent-To\":\"<urn:uuid:a3540e7f-0342-4b65-861b-665ab8a9dd38>\",\"WARC-IP-Address\":\"208.79.200.199\",\"WARC-Target-URI\":\"http://jdenuno.com/Chemistry/Chem3.htm\",\"WARC-Payload-Digest\":\"sha1:NLUCCIHBDUGXD4UEXSRU4YO4FKRB6F4Z\",\"WARC-Block-Digest\":\"sha1:E74XOY33WLEXXNKIMXPDWQODZXIYHEIL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506421.14_warc_CC-MAIN-20230922170343-20230922200343-00288.warc.gz\"}"}
https://everythingtech.dev/2022/01/python-3-try-catch-example/
[ "## No Try Catch but Try Except\n\nIn other programming languages, the keywords when referring to try and except blocks of code are “try” and “catch”. In Python 3, the keyword for “catch” is actually “except”. So, it is called a Try and Except block of code. But what does it do?\n\n## Example of a ValueError Exception\n\nSometimes, when you want to do an operation but you don’t know if it’s going to work or not, we use exception handling.\n\nAs a very basic example, consider that we are validating a user’s input.\n\n``text = input('age: ')``\n\nSay, the age has to strictly consist of numbers and not text or a string of numbers. In that case, we introduce a new variable. We can call this variable number and do the following to convert our text into integers and then print this to the screen:\n\n``````text =input('age: ')\n\nnumber = int(text)\nprint(number)\n``````\n\nAfter doing all this, when we type in “hey” in the username slot, the program will crash with a ValueError: invalid literal for int() with base 10 ‘hey’ error. Because it is unable to convert this text into numbers.\n\nIn this case, we use a try and except block to tell users to input something else rather than having the entire program just crash. We can add the try block to our code this way:\n\n## Catching a ValueError Exception in Python 3\n\nIf you run the code with only try: without an except: command you will get a syntax error at compile time saying “SyntaxError: unexpected EOF while parsing”.\n\n``````text = input('age: ')\ntry:\nnumber = int(text)\nprint(number)\nexcept ValueError:\nprint(\"hey age should be a number\")\n``````\n\nThe above shows how you can catch a ValueError. But what if you want to catch a generic error?\n\n## How To Create your own Exception in Python3?\n\nYou need to create a class as follows and inherit from Exception or BaseException.\n\n``````class ShouldBePositive(Exception):\npass``````\n\nThe above is the bare minimum for an exception.\n\n## How To Add Multiple Except Clauses and Chain Exceptions?\n\nYou can add multiple except clauses and also chain exceptions by Raise-ing an exception as follows:\n\n``````class ShouldBePositive(Exception):\npass\n\ntext = input('age: ')\ntry:\nnumber = int(text)\nprint(number)\nif number < 0:\nraise ShouldBePositive\nexcept ValueError:\nprint(\"hey age should be a number\")\nexcept ShouldBePositive:\nprint(\"Age can't be negative my dude\")``````\n\n## How To Catch All Exceptions In Python 3?\n\nIn Python 3 all Exceptions inherit from the base class BaseException. The builtin Exceptions like ArithmeticException inherit from Exception which then inherits from BaseException. So you can do the following to catch all exceptions.\n\n``````class ShouldBePositive(Exception):\npass\n\ntext = input('age: ')\ntry:\nnumber = int(text)\nprint(number)\nif number < 0:\nraise ShouldBePositive\nexcept BaseException:\nprint(\"something went wrong\")``````\n\n## Final Words\n\nThis is a small note on exceptions but I would suggest you do further reading on its intricacies at https://docs.python.org/3/tutorial/errors.html and also if you want to learn how to test exceptions in unit tests checkout: https://docs.python.org/3/library/unittest.html\n\nIf you would like to learn more Python check out this certificate course by Google: IT Automation with Python" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8083128,"math_prob":0.53730756,"size":3122,"snap":"2022-40-2023-06","text_gpt3_token_len":688,"char_repetition_ratio":0.14271969,"word_repetition_ratio":0.084812626,"special_character_ratio":0.22037156,"punctuation_ratio":0.11056106,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9592757,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T09:07:01Z\",\"WARC-Record-ID\":\"<urn:uuid:67af79ef-af31-4da1-b9bc-7d7ea779d964>\",\"Content-Length\":\"101956\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:76da494f-7ff3-40b0-aa2a-5263dc9f7ca3>\",\"WARC-Concurrent-To\":\"<urn:uuid:f49121fd-636f-43f6-895b-0907ba9d382e>\",\"WARC-IP-Address\":\"160.153.0.91\",\"WARC-Target-URI\":\"https://everythingtech.dev/2022/01/python-3-try-catch-example/\",\"WARC-Payload-Digest\":\"sha1:LHOLRNJOP32ENWRYEWSULGA6BGXCI2MM\",\"WARC-Block-Digest\":\"sha1:PHEA3KJKKY3NTBKBKD67FRWRK2M5X5TF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499710.49_warc_CC-MAIN-20230129080341-20230129110341-00200.warc.gz\"}"}
https://www.physicsforums.com/threads/index-placement-on-4-potential.948270/
[ "# Index placement on 4-potential\n\n• I\nHi.\nI am working through some notes which use the following metric diag(1,-1,-1,-1).\nThey give the 4-potential as ( Aμ ) = ( V/c , A ) where V is the scalar potential and A is the vector potential. This should mean in components A0 = V/c and A1 = A1 and so on but with the metric given shouldn't A1 = - A1 ?\nThanks\n\n## Answers and Replies\n\nOrodruin\nStaff Emeritus\nScience Advisor\nHomework Helper\nGold Member\nBe careful not to confuse potential different meanings of ##A_1## and ##A_1##. When you write ##A_1##, do you mean the Cartesian components of the 3-vector ##\\vec A## or the covariant components of the 4-vector ##A##?\n\nWhen I wrote A1 I meant Ax ie. the x-component of the 3-vector A\n\nOrodruin\nStaff Emeritus\nScience Advisor\nHomework Helper\nGold Member\nWhen I wrote A1 I meant Ax ie. the x-component of the 3-vector A\nThen no. It is not necessary that ##A^1 = - A_x## as ##A_x## is not the same as ##A_1##. The sign difference is between the covariant and contravariant spatial components of the 4-potential. As I said, do not confuse the components of a 3-vector with the covariant (or contravariant for that matter) components of a 4-vector. Typically, the generalisation of a 3-vector to a 4-vector will be such that the 3-vector components are the same as the covariant components of the 4-vector, but this may sometimes be subject to sign conventions and if the 3-vector is more naturally viewed as having covariant or contravariant components. In some cases, there is no 4-vector generalisation of the 3-vector at all, such as in the case of the electric and magnetic field where their components instead together constitute the components of the electromagnetic field tensor.\n\n•", null, "dyn\nThanks. I asked the question because I don't understand the following question. Using E = -∇V - ∂tA the x-component is given as E1/c = -∂x(V/c) -(1/c)∂tA1 = ∂1A0 - ∂0A1\nIs this equation correct ? If so , I don't understand the sign change on the 1st term and it seems to me it uses A1 for Ax\n\nUsing E = -∇V - ∂tA the x-component is given as E1/c = -∂x(V/c) -(1/c)∂tA1 = ∂1A0 - ∂0A1\n\nSay ##x^0=ct, V=A^0##,\n$$E^1=-\\frac{\\partial V}{\\partial x^1}-\\frac{\\partial A^1}{\\partial x^0}$$\n$$=-\\frac{\\partial A^0}{\\partial x^1}-\\frac{\\partial A^1}{\\partial x^0}$$\n$$=-\\frac{\\partial A_0}{\\partial x^1}+\\frac{\\partial A_1}{\\partial x^0}$$\n$$=\\frac{\\partial A_1}{\\partial x^0}-\\frac{\\partial A_0}{\\partial x^1}$$\n$$=F_{10}=-F^{10}$$\nwhere\n$$F_{\\mu\\nu}=\\frac{\\partial A_\\mu}{\\partial x^\\nu}-\\frac{\\partial A_\\mu}{\\partial x^\\nu}$$\n\nSimilarly ##B^1=F_{23}=F^{23}##\nActually E and B are not vectors but components of antisymmetric electromagnetic tensor F.\n\n•", null, "dyn" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95213616,"math_prob":0.9856164,"size":643,"snap":"2021-04-2021-17","text_gpt3_token_len":190,"char_repetition_ratio":0.13458529,"word_repetition_ratio":0.9104478,"special_character_ratio":0.34059098,"punctuation_ratio":0.10638298,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99932027,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-23T05:40:51Z\",\"WARC-Record-ID\":\"<urn:uuid:c59695bf-6430-4260-b365-1fa0cc1f2888>\",\"Content-Length\":\"90942\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8c9b84d2-107d-4fc3-920a-4eb3ff189658>\",\"WARC-Concurrent-To\":\"<urn:uuid:4a34aee0-ec7d-4383-9a12-5edd1a5a59eb>\",\"WARC-IP-Address\":\"104.26.15.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/index-placement-on-4-potential.948270/\",\"WARC-Payload-Digest\":\"sha1:D4VWWNDPIRE7WOYAYVSJC6WUPPNZ77QD\",\"WARC-Block-Digest\":\"sha1:2KLG7TYZZ22TVFLMTORGU4ML74RUA2KQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039601956.95_warc_CC-MAIN-20210423041014-20210423071014-00317.warc.gz\"}"}
https://indjst.org/download-article.php?Article_Unique_Id=INDJST12221&Article_Full_Text_Xml=True
[ "Sciresol Sciresol https://indjst.org/author-guidelines Indian Journal of Science and Technology 0974-5645 10.17485/IJST/v14i13.229 Chemical Reaction and Heat Source Effects on MHD Free Convective Flow over A Linearly Accelerated Moving Vertical Porous Plate Matta Swetha [email protected] 1 Malga Bala Siddulu 2 Appidi Lakshmi 1 Pramod Kumar P 2 Department of Mathematics, CMR Technical Campus Kandlakoya, Telangana, 501401 India Department of Mathematics, GITAM University Hyderabad, Telangana, 502329 India Department of Mathematics, B V Raju Institute of Technology Narsapur, Telangana, 502313 India 14 13 2021 Abstract\n\nObjective: To make discussion on the chemical reaction and heat source effects on MHD flow on micro polar fluid along vertical porous plate. Method: The governing dimensionless equations are involved analytically by using finite element scheme. Findings: The effects of governing parameters on the flow variables are discussed quantitatively with the help of graphs for the flow field, temperature field, concentration field, skin friction and Nusselt number. Novelty: The accuracy of the problem has been verified by comparing with the previous published work and the agreement between the results is excellent, which established confidence in the numerical results reported in this study.\n\nKeywords Heat source parameter Finite Element Method MHD chemical reaction parameter viscous dissipation None\nIntroduction\n\nTheoretical study of magneto-hydrodynamics (MHD) flow problem plays with a chemical reaction is of huge number of applications to scientists, engineers. In a lot of chemical engineering process chemical reaction take place involving a distant mass and working fluid in that the plate was touching. This process takes place in several manufacturing applications like developed of food processing, glassware, ceramic objects and polymer production. Foreign mass possibly here moreover naturally or mixed through the water or air. The existence of a distant mass in water and air effects few types of the chemical reactions. Chemical technology improved with the study of chemical effect. Such as food processing, generating the electric power and polymer fabrication. Ali et al. 1 examined the effects of Heat and Mass movement through free convection flow in a vertical plate. Reddy et al. 2 analyzed the chemical reaction effects on MHD natural convection. Sehra et al. 3 examined the Convection heat mass transfer over a vertical plate with chemical reaction. Nayak et al.4 consideredthe chemical reaction results on stretching sheet. Kumar et al. 5 interpreted the thermal diffusion effects on MHD. Babu et al. 6 analyzed the impacts of chemical reaction, diffusion thermo and radiation on unsteady natural convective flow past an inclined vertical plate under aligned magnetic field. Babu et al. 7 considered the radiation and chemical reaction results on MHD free convention mass transfer fluid flow in a porous plate. Kumar et al. 8 studied the suction special effects on accelerated vertical on permeable plate. Hoq et al. 9 studied the error analysis of heat conduction partial differential equations. Prasad et al. 10 considered radiation, viscous dissipation effects on the heated vertical plate in permeable medium. Magala et al. 11 has investigated dissipation effects on accelerated vertical plate through suction. Kandasamy et al. 12 interpreted the effects of heat and mass transfer along a wedge with heat source and concentration in the presence of suction. Lakshmi et al. 
13 considered viscous dissipation effects on pours. Bhagya Lakshmi et al. 14 has studied the MHD free convective flow of dissipative fluid past an exponentially accelerated vertical plate. Shehzad et al. 15 studied the MHD flow medium. Pramod Kumar et al. 16 reviewed induced magnetic field in the free convective Radiating Stream above Permeable Laminate. Ekakitie et al. 17 interpreted the impact of chemical reaction and heat source on MHD free convection flow over an inclined porous Surface. Anupam et al. 18 studied the radiation effects on stretching sheet. Chamkha et al. 19 explained in a permeable medium. Sankar et al. 20 explained the radiation impacts in the semi-infinite permeable plate. Sweta et al. 21 analyzed fem of heat absorption effects on MHD Casson fluid flows above exponentially accelerate temperature through ramped surface concentration.\n\nThe objective of the present work is to study the chemical reaction and heat source effects on MHD flow on micro polar fluid along vertical porous plate. It has been noticed that the chemical reaction and heat source parameter had an effect on velocity profile, temperature profile, and concentration profile. The governing partial differential equations are solved by Galerkin Finite Element Method. We have extended the problem of Bhagya Lakshmi 14 in the presence of chemical reaction, heat source parameter and the accuracy of present problem have been verified by comparing with theoretical solution of Bhagya Lakshmi 14 through figures and the agreement between the results is excellent. This has established confidence in the numerical results reported in this paper.\n\nMathematical Formulations\n\nThe transitory magneto-hydrodynamic free convective flow of fluid over a growing accelerated plate with unstable temperature presented. Here x, y - axis are taken as along the plate in vertically straight way and normal to the plate. The plate is considering bound less in x-direction, all the flows quantity turn into self-similar missing from the leading edge. The total physical quantities turning into functions of t,y. By time t<=0. Fluids are at equal to t and C less important than the constant wall temperature, concentration correspondingly. By t>0, the plate was growing accelerated through a velocity= u exp (at) in its own plane and temperature of the plate, concentration are rising linearly with time- t. An unvarying magnetic playing field amount H is applying in the y-direction. Therefore the magnetic field and the velocity are taken by H=(0,H ), q=(u,v). The fluids electrically conducted the magnetic Reynolds number is to a great extent less than 1 and therefore induced magnetic field can be uncared for in comparison with applying magnetic field in the non appearance of input electric field. 
The heat due to viscous dissipation is taken into account. Under the above assumptions, with the Boussinesq approximation, the equations governing the free-convection boundary-layer flow over the vertical plate can be written as:

Continuity equation:

∂v'/∂y' = 0

Momentum equation:

∂u'/∂t' = gβ(T' − T'∞) + gβ*(C' − C'∞) + ν ∂²u'/∂y'² − (σμe²H0²/ρ) u'

Energy equation:

ρcp ∂T'/∂t' = K ∂²T'/∂y'² + μ (∂u'/∂y')² + S*ρcp (T' − T'∞)

Concentration equation:

∂C'/∂t' = D ∂²C'/∂y'² − K1* (C' − C'∞)

The boundary conditions in dimensional form are

t' ≤ 0: u' = 0, T' = T'∞, C' = C'∞ for all y'
t' > 0: u' = u0 exp(a't'), T' = T'∞ + (T'w − T'∞)At', C' = C'∞ + (C'w − C'∞)At' at y' = 0, and u' → 0, T' → T'∞, C' → C'∞ as y' → ∞

where A = u0²/ν, and T'w and C'w are constants.

Introduce the following dimensionless quantities:

u = u'/u0, t = t'u0²/ν, y = y'u0/ν, θ = (T' − T'∞)/(T'w − T'∞), C = (C' − C'∞)/(C'w − C'∞),
M = σμe²H0²ν/(ρu0²), Gr = gβν(T'w − T'∞)/u0³, Gc = gβ*ν(C'w − C'∞)/u0³, Pr = μcp/K, Sc = ν/D,
E = u0²/(cp(T'w − T'∞)), a = a'ν/u0², K1 = K1*ν/u0², S = S*ν/u0²

Then the resulting non-dimensional equations are:

∂u/∂t = Grθ + GcC + ∂²u/∂y² − Mu    (7)
∂θ/∂t = (1/Pr) ∂²θ/∂y² + E (∂u/∂y)² + Sθ    (8)
∂C/∂t = (1/Sc) ∂²C/∂y² − K1C    (9)

with boundary conditions in dimensionless form

u = 0, θ = 0, C = 0 for all y, t ≤ 0
u = exp(at), θ = t, C = t at y = 0, and u → 0, θ → 0, C → 0 as y → ∞, for t > 0    (10)

(An illustrative numerical sketch of equations (7)-(10) follows the nomenclature at the end of the article.)
Method of Solution

The finite element method (FEM) is employed to solve the transformed, coupled boundary value problem defined by Equations (7)–(9) under Eq. (10). The fundamental steps of a finite-element analysis of a problem are as follows:

Step 1: Discretization of the fluid domain into finite elements

Step 2: Generation of element equations

Step 3: Assembly of element equations

Step 4: Imposition of boundary conditions

Step 5: Solution of the assembled equations

Numerical study of a dissipative fluid flow

The variational formulation associated with Equations (7)–(9), joined with the boundary conditions (10), represents the velocity u, temperature θ and concentration C. Applying the Galerkin finite element method to equation (7) over a typical two-noded linear element e (yj ≤ y ≤ yk) gives

∫ from yj to yk of Nᵀ ( ∂²u/∂y² − ∂u/∂t − Mu + Grθ + GcC ) dy = 0

where u = N·φ(e), N = [Nj, Nk], φ(e) = [uj, uk]ᵀ, Nj = (yk − y)/l, Nk = (y − yj)/l and l = yk − yj = h.
We write the element equations for the elements y(i−1) ≤ y ≤ y(i) and y(i) ≤ y ≤ y(i+1) and assemble them, with r = k/h², where h and k are the mesh sizes along the y-direction and the time direction respectively. The indices i and j refer to space and time respectively. The mesh system uses h = 0.4 for the velocity and concentration profiles, and k = 0.5 has been considered for the computations. Taking i = 1(1)n in the above equations and using the initial and boundary conditions (10), the following system of equations is obtained:

AiXi = Bi, i = 1, 2, 3, ...

where the Ai are matrices of order n and the Xi and Bi are column matrices with n components. The solutions of the above system of equations are obtained using the Thomas algorithm for velocity, angular velocity, temperature and concentration. To verify the convergence and stability of the Galerkin finite element method, the C program was rerun with slightly changed values of h and k; no significant difference was observed in the values of velocity, angular velocity, temperature and concentration. Hence, the Galerkin finite element method is stable and convergent.

Skin friction: τ = (∂u/∂y)|y=0 = (U(i+1) − U(i))/h

Skin friction variation
 M | Gr | Gc | Pr | Sc | a | S | E | K1 | t | Skin friction
 4 | 5 | 5 | 0.71 | 0.22 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 1.99328
 6 | 5 | 5 | 0.71 | 0.22 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 2.00754
 8 | 5 | 5 | 0.71 | 0.22 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 2.02190
 2 | 5 | 5 | 0.71 | 0.22 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 0.7457
 2 | 5 | 5 | 1.0 | 0.22 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 0.8456
 2 | 5 | 5 | 7.0 | 0.22 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 1.9312
 2 | 5 | 5 | 0.71 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 1.06449
 2 | 5 | 5 | 0.71 | 2.0 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 1.34124
 2 | 5 | 5 | 0.71 | 3.0 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 1.97356
 2 | 5 | 5 | 0.71 | 0.22 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 1.1051
 2 | 5 | 5 | 0.71 | 0.22 | 1.0 | 0.5 | 0.5 | 0.5 | 0.2 | 1.4678
 2 | 5 | 5 | 0.71 | 0.22 | 2.0 | 0.5 | 0.5 | 0.5 | 0.2 | 1.7856
 2 | 5 | 5 | 0.71 | 0.22 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 0.14901
 2 | 5 | 5 | 0.71 | 0.22 | 0.5 | 1.0 | 0.5 | 0.5 | 0.2 | 0.15634
 2 | 5 | 5 | 0.71 | 0.22 | 0.5 | 2.0 | 0.5 | 0.5 | 0.2 | 0.16787
 2 | 5 | 5 | 0.71 | 0.22 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 0.37851
 2 | 5 | 5 | 0.71 | 0.22 | 0.5 | 0.5 | 0.5 | 1.0 | 0.2 | 0.36234
 2 | 5 | 5 | 0.71 | 0.22 | 0.5 | 0.5 | 0.5 | 2.0 | 0.2 | 0.35781
 2 | 5 | 5 | 0.71 | 0.22 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 1.43456
 2 | 7 | 7 | 0.71 | 0.22 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 1.36853
 2 | 10 | 10 | 0.71 | 0.22 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 1.23456
 2 | 5 | 5 | 0.71 | 0.22 | 0.5 | 0.5 | 0.5 | 0.5 | 0.2 | 0.14321
 2 | 5 | 5 | 0.71 | 0.22 | 0.5 | 0.5 | 2.0 | 0.5 | 0.2 | 0.12348

Nusselt number: Nu = −(∂θ/∂y)|y=0 = (T(i+1) − T(i))/h

Nusselt number
 M | Pr | E | S | t | Nu
 2 | 0.07 | 0.05 | 0.5 | 0.2 | 0.14065
 3 | 0.07 | 0.05 | 0.5 | 0.2 | 0.14114
 5 | 0.07 | 0.05 | 0.5 | 0.2 | 0.14209
 2 | 0.07 | 0.05 | 0.5 | 0.6 | 0.43462
 3 | 0.07 | 0.05 | 0.5 | 0.6 | 0.43536
 5 | 0.07 | 0.05 | 0.5 | 0.6 | 0.43677
 2 | 0.07 | 0.05 | 0.5 | 0.2 | 0.14901
 2 | 1.0 | 0.05 | 0.5 | 0.2 | 0.35726
 2 | 3.0 | 0.05 | 0.5 | 0.2 | 0.38420
 2 | 0.07 | 0.05 | 0.5 | 0.6 | 0.43536
 2 | 3.0 | 0.05 | 0.5 | 0.6 | 1.15277
 2 | 0.07 | 0.05 | 0.5 | 0.2 | 0.14901
 2 | 0.07 | 3.0 | 0.5 | 0.2 | 0.14653
 2 | 0.07 | 7.0 | 0.5 | 0.2 | 0.14317
 2 | 0.07 | 0.05 | 0.5 | 0.6 | 0.44710
 2 | 0.07 | 3.0 | 0.5 | 0.6 | 0.44340
 2 | 0.07 | 7.0 | 0.5 | 0.6 | 0.43838
 2 | 0.07 | 0.05 | 0.5 | 0.2 | 0.14901
 2 | 0.07 | 0.05 | 3.0 | 0.2 | 0.14635

Sherwood number: Sh = −(∂C/∂y)|y=0 = (C(i+1) − C(i))/h

Sherwood number variation
 Sc | K1 | t | Sh
 0.22 | 0.5 | 0.2 | 0.26167
 0.6 | 0.5 | 0.2 | 0.32474
 0.96 | 0.5 | 0.2 | 0.35716
 2.00 | 0.5 | 0.2 | 0.37851
 0.22 | 0.5 | 0.6 | 0.78501
 0.6 | 0.5 | 0.6 | 1.00599
 0.96 | 0.5 | 0.6 | 1.07148
 2.00 | 0.5 | 0.6 | 1.13554
 0.22 | 0.5 | 0.2 | 0.26167
 0.22 | 2.0 | 0.2 | 0.26372
 0.22 | 4.0 | 0.2 | 0.26642
 0.22 | 0.5 | 0.6 | 0.78501

Results and Discussion

To serve the purpose of the problem, numerical solutions are computed and the non-dimensional profiles are plotted for Schmidt number Sc = 0.22, Eckert number E, acceleration parameter a, mass Grashof number Gc, magnetic parameter M, heat source parameter S, chemical reaction parameter K1, Prandtl number Pr = 0.71, thermal Grashof number Gr and times t = 0.2, 0.6.

Figure 5(a) shows the effect of the Prandtl number (Pr): the velocity profiles decrease as Pr increases, since fluids of high viscosity move slowly.

Figure 7(b) and Figure 5(b) show the effect of the Eckert number (E) on the velocity and temperature fields: both increase as E increases. The Eckert number adds dissipated energy to the flow field, yielding a higher temperature and a greater buoyancy force; the increase in buoyancy force due to the larger dissipation parameter in turn raises the velocity.

Figure 5(c) shows the effect of the acceleration parameter (a) on the velocity field: a rise in the acceleration parameter leads to a rise in the velocity profile.

Figure 5(d) and Figure 5(e) show the effects of the mass Grashof number (Gc) and the thermal Grashof number (Gr) on the velocity profiles.
It is observed that greater cooling of the surface, i.e. larger Grashof and mass Grashof numbers, raises the velocity of the air. This is because larger values of the thermal and mass Grashof numbers strengthen the thermal and mass buoyancy effects, which increase the induced flow.

Figure 5(f) shows the effect of time t on the velocity in the cooling of the plate. It is obvious from the figure that the velocity increases with increasing time t.

Figure 5(g) shows that the velocity profile decreases as the magnetic parameter (M) increases. The transverse magnetic field produces a resistive force on the fluid flow, the Lorentz force, which slows down the motion of the electrically conducting fluid.

Figure 5(h) and Figure 8(a) show that the velocity and concentration fields decrease as the chemical reaction parameter (K1) increases.

Figure 7(2a) shows that the temperature profiles decrease as the Prandtl number (Pr) increases; physically, fluids of high viscosity move slowly.

Figure 5(i) and Figure 7(2c) illustrate the effect of the heat absorption/generation parameter S on the temperature and velocity fields: both increase with increasing source parameter. Heat generation tends to accelerate the motion of the fluid; a positive sign indicates heat generation (heat source), whereas a negative sign means heat absorption (heat sink). A heat source physically implies the generation of heat, which raises the temperature in the flow field; therefore, as the heat source parameter increases, the temperature rises steeply. The influence of the heat source parameter S > 0 on the velocity and temperature profiles differs very significantly from that of the heat sink parameter S < 0. These results are clearly supported from the physical point of view.

Figure 8(3b) illustrates the non-dimensional concentration fields for the Schmidt number (Sc): the concentration decreases with increasing Sc.
Also, it is noticed that the concentration boundary layer becomes thinner as the Schmidt number increases.

[Figures 1-4: profile plots referenced above; captions not recoverable from the source XML]

Conclusion

This study considered the effects of chemical reaction and a heat source on the problem of magneto-hydrodynamic free convection of a dissipative fluid past an exponentially accelerated plate. The velocity, concentration and temperature fields were studied for different parameters: heat source parameter, Prandtl number, Schmidt number, modified (mass) Grashof and thermal Grashof numbers, acceleration parameter and chemical reaction parameter. We observed how the velocity fields at t = 0.2, 0.6 respond to increases in the magnetic parameter (M), chemical reaction parameter K1, modified Grashof number Gc, Prandtl number Pr, acceleration parameter a and Grashof number Gr. The following conclusions can be drawn:

The velocity and temperature fields increase with increasing heat source parameter.

The velocity of the fluid increases with increasing viscous dissipation.

The concentration profiles decrease as the Schmidt number Sc increases.

The skin friction at time t increases with increasing M, Sc, Pr, S and a, while it decreases with an increase in Gr, Gc and K1.

We noticed that the Nusselt number increases with increasing M and Pr.
The Nusselt number decreases with increasing E and S.

We noticed that the Sherwood number increases with increasing K1 and Sc.

Recommendation

In future work, the same problem can be extended to include the Dufour effect.

Nomenclature
 a* Absorption coefficient A Constant B0 External magnetic field C Dimensionless concentration C' Concentration Sc Schmidt number Cw' Concentration of the plate g Acceleration due to gravity C∞' Concentration in the fluid far from the plate D Chemical molecular diffusivity Cp Specific heat at constant pressure Gr Thermal Grashof number Gm Mass Grashof number k Thermal conductivity of the fluid M Magnetic field parameter Pr Prandtl number qr Radiative heat flux in the y-direction R Radiation parameter T' Temperature of the fluid near the plate Tw' Temperature of the plate T∞' Temperature of the fluid far away from the plate t' Time t Dimensionless time u Dimensionless velocity u0 Characteristic velocity of the plate y' Coordinate axis normal to the plate y Dimensionless coordinate axis normal to the plate u' Velocity of the fluid in the x'-direction Greek symbols α Thermal diffusivity of the fluid µ Viscosity of the fluid ν Kinematic viscosity σ Electrical conductivity erf Error function θ Dimensionless temperature ρ Density of the fluid τ Dimensionless skin friction β* Volumetric coefficient of expansion with concentration β Volumetric coefficient of thermal expansion Subscripts ∞ Free-stream conditions w Conditions at the wall" ]
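To make the dimensionless system (7)-(10) concrete, here is a small numerical sketch. It is an illustration only: a plain explicit finite-difference march rather than the authors' Galerkin finite element method, with demonstration values for the parameters and grid.

``````import numpy as np

# Illustrative explicit finite-difference march of the dimensionless
# system (7)-(9) with boundary conditions (10); parameter and grid
# values are demonstration choices only.
Gr, Gc, M = 5.0, 5.0, 2.0
Pr, Sc = 0.71, 0.22
E, S, K1, a = 0.05, 0.5, 0.5, 0.5

ny, y_max = 201, 20.0
dt, t_end = 1e-4, 0.2
y = np.linspace(0.0, y_max, ny)
dy = y[1] - y[0]

u = np.zeros(ny)      # velocity
th = np.zeros(ny)     # temperature theta
C = np.zeros(ny)      # concentration

def lap(f):
    # centred second derivative on interior points
    return (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dy**2

t = 0.0
while t < t_end:
    t += dt
    uy = (u[2:] - u[:-2]) / (2.0 * dy)   # centred first derivative
    un, thn, Cn = u.copy(), th.copy(), C.copy()
    un[1:-1] = u[1:-1] + dt * (Gr * th[1:-1] + Gc * C[1:-1]
                               + lap(u) - M * u[1:-1])
    thn[1:-1] = th[1:-1] + dt * (lap(th) / Pr + E * uy**2 + S * th[1:-1])
    Cn[1:-1] = C[1:-1] + dt * (lap(C) / Sc - K1 * C[1:-1])
    # boundary conditions (10)
    un[0], thn[0], Cn[0] = np.exp(a * t), t, t
    un[-1] = thn[-1] = Cn[-1] = 0.0
    u, th, C = un, thn, Cn

print("u near the plate:", np.round(u[:4], 4))
print("theta near plate:", np.round(th[:4], 4))
``````

With these settings the time step satisfies the usual explicit stability restriction dt ≲ dy² · min(1, Pr, Sc)/2, so the march stays stable over the short time window considered.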
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78170913,"math_prob":0.97668254,"size":8818,"snap":"2023-40-2023-50","text_gpt3_token_len":3287,"char_repetition_ratio":0.25890628,"word_repetition_ratio":0.29212254,"special_character_ratio":0.44000906,"punctuation_ratio":0.20327869,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9665857,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-03T20:24:41Z\",\"WARC-Record-ID\":\"<urn:uuid:78097130-08a2-4274-ac86-17bebaf4d599>\",\"Content-Length\":\"160475\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7a47bcc5-7fad-458c-8267-962a9a99dac7>\",\"WARC-Concurrent-To\":\"<urn:uuid:d4f3cd9e-4633-499b-bc89-6628c645ede5>\",\"WARC-IP-Address\":\"104.21.82.183\",\"WARC-Target-URI\":\"https://indjst.org/download-article.php?Article_Unique_Id=INDJST12221&Article_Full_Text_Xml=True\",\"WARC-Payload-Digest\":\"sha1:LI7JDM4Q3ZHKHI6CJXFFU7NS7ELOCK2X\",\"WARC-Block-Digest\":\"sha1:DEW2CQCC4HRJ5CE5MKFNAMYJS4UKC6N5\",\"WARC-Identified-Payload-Type\":\"application/xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100508.53_warc_CC-MAIN-20231203193127-20231203223127-00841.warc.gz\"}"}
https://zbmath.org/authors/?q=ai%3Anane.erkan
[ "# zbMATH — the first resource for mathematics\n\n## Nane, Erkan\n\nCompute Distance To:\n Author ID: nane.erkan", null, "Published as: Nane, Erkan External Links: MGP\n Documents Indexed: 52 Publications since 2006\nall top 5\n\n#### Co-Authors\n\n 9 single-authored 10 Meerschaert, Mark M. 8 Mijena, Jebessa B. 6 Nguyen Huy Tuan 6 Xiao, Yimin 5 Foondun, Mohammud 5 Vellaisamy, Palaniappan 3 Asogwa, Sunday A. 3 Ni, Yinan 2 Bäumer, Boris 2 D’Ovidio, Mirko 2 Guerngar, Ngartelbaye 2 Zeleke, Aklilu 1 Allouba, Hassan 1 Bañuelos, Rodrigo 1 Çenesiz, Yücel 1 Chen, Zhen-Qing 1 Dang, Duc Trong 1 Kirane, Mokhtar 1 Kumar, Arun M. 1 Kumar, Avinash 1 Kurt, Ali 1 Liu, Wei 1 Meng, Xiangqian 1 Nguyen, Dang Minh 1 Nwaeze, Eze Raymond 1 Omaba, McSylvester Ejighikeme 1 O’Regan, Donal 1 Phuong, Nguyen Duc 1 Tuan, Nguyen Hoang 1 Wu, Dongsheng\nall top 5\n\n#### Serials\n\n 10 Statistics & Probability Letters 6 Journal of Mathematical Analysis and Applications 5 Stochastic Processes and their Applications 4 Proceedings of the American Mathematical Society 4 Potential Analysis 2 Transactions of the American Mathematical Society 2 Electronic Journal of Probability 2 European Series in Applied and Industrial Mathematics (ESAIM): Probability and Statistics 2 Fractional Calculus & Applied Analysis 2 Stochastics and Dynamics 1 Inverse Problems 1 Chaos, Solitons and Fractals 1 Acta Scientiarum Mathematicarum 1 The Annals of Probability 1 Journal of Applied Probability 1 Journal of Differential Equations 1 Mathematische Zeitschrift 1 Stochastic Analysis and Applications 1 NoDEA. Nonlinear Differential Equations and Applications 1 International Journal of Pure and Applied Mathematics 1 ALEA. Latin American Journal of Probability and Mathematical Statistics 1 SIAM/ASA Journal on Uncertainty Quantification 1 Modern Stochastics. Theory and Applications\nall top 5\n\n#### Fields\n\n 44 Probability theory and stochastic processes (60-XX) 25 Partial differential equations (35-XX) 8 Real functions (26-XX) 5 Numerical analysis (65-XX) 2 Integral equations (45-XX) 2 Operator theory (47-XX) 2 Statistics (62-XX) 2 Statistical mechanics, structure of matter (82-XX) 1 Measure and integration (28-XX) 1 Special functions (33-XX) 1 Ordinary differential equations (34-XX) 1 Global analysis, analysis on manifolds (58-XX)\n\n#### Citations contained in zbMATH Open\n\n45 Publications have been cited 667 times in 418 Documents Cited by Year\nFractional Cauchy problems on bounded domains. Zbl 1247.60078\nMeerschaert, Mark M.; Nane, Erkan; Vellaisamy, P.\n2009\nThe fractional Poisson process and the inverse stable subordinator. Zbl 1245.60084\nMeerschaert, Mark M.; Nane, Erkan; Vellaisamy, P.\n2011\nDistributed-order fractional diffusions on bounded domains. Zbl 1222.35204\nMeerschaert, Mark M.; Nane, Erkan; Vellaisamy, P.\n2011\nBrownian subordinators and fractional Cauchy problems. Zbl 1186.60079\nBaeumer, Boris; Meerschaert, Mark M.; Nane, Erkan\n2009\nSpace-time fractional diffusion on bounded domains. Zbl 1251.35177\nChen, Zhen-Qing; Meerschaert, Mark M.; Nane, Erkan\n2012\nSpace-time fractional stochastic partial differential equations. Zbl 1329.60216\nMijena, Jebessa B.; Nane, Erkan\n2015\nCorrelated continuous time random walks. Zbl 1179.60024\nMeerschaert, Mark M.; Nane, Erkan; Xiao, Yimin\n2009\nHigher order PDE’s and iterated processes. Zbl 1157.60071\nNane, Erkan\n2008\nTime-changed Poisson processes. 
Zbl 1227.60063\nKumar, A.; Nane, Erkan; Vellaisamy, P.\n2011\nIntermittence and space-time fractional stochastic partial differential equations. Zbl 1341.60063\nMijena, Jebessa B.; Nane, Erkan\n2016\nAsymptotic properties of some space-time fractional stochastic equations. Zbl 1378.60090\nFoondun, Mohammud; Nane, Erkan\n2017\nSpace–time duality for fractional diffusion. Zbl 1196.60087\nBaeumer, Boris; Meerschaert, Mark M.; Nane, Erkan\n2009\nLarge deviations for local time fractional Brownian motion and applications. Zbl 1147.60025\nMeerschaert, Mark M.; Nane, Erkan; Xiao, Yimin\n2008\nFractal dimension results for continuous time random walks. Zbl 1401.60080\nMeerschaert, Mark M.; Nane, Erkan; Xiao, Yimin\n2013\nTransient anomalous sub-diffusion on bounded domains. Zbl 1266.60087\nMeerschaert, Mark M.; Nane, Erkan; Vellaisamy, P.\n2013\nContinuity of solutions of a class of fractional equations. Zbl 1407.35205\nDang, Duc Trong; Nane, Erkan; Nguyen, Dang Minh; Tuan, Nguyen Huy\n2018\nNon-linear noise excitation for some space-time fractional stochastic equations in bounded domains. Zbl 1355.60084\nFoondun, Mohammud; Mijena, Jebessa B.; Nane, Erkan\n2016\nIterated Brownian motion in parabola-shaped domains. Zbl 1090.60071\nNane, Erkan\n2006\nStability of the solution of stochastic differential equation driven by time-changed Lévy noise. Zbl 1364.65015\nNane, Erkan; Ni, Yinan\n2017\nInverse source problem for time-fractional diffusion with discrete random noise. Zbl 1417.35223\nTuan, Nguyen Huy; Nane, Erkan\n2017\nStrong analytic solutions of fractional Cauchy problems. Zbl 1284.35457\nMijena, Jebessa B.; Nane, Erkan\n2014\nLaws of the iterated logarithm for a class of iterated processes. Zbl 1173.60317\nNane, Erkan\n2009\nLaws of the iterated logarithm for $$\\alpha$$-time Brownian motion. Zbl 1121.60085\nNane, Erkan\n2006\nIterated Brownian motion in bounded domains in $$\\mathbb {R}^n$$. Zbl 1106.60309\nNane, Erkan\n2006\nApproximate solutions of inverse problems for nonlinear space fractional diffusion equations with randomly perturbed data. Zbl 1384.35146\nNane, Erkan; Tuan, Nguyen Huy\n2018\nStochastic solutions of conformable fractional Cauchy problems. Zbl 1417.35218\nÇenesiz, Yücel; Kurt, Ali; Nane, Erkan\n2017\nStochastic solutions of a class of higher order Cauchy problems in $$\\mathbb R^{d}$$. Zbl 1205.60129\nNane, Erkan\n2010\nLifetime asymptotics of iterated Brownian motion in $$\\mathbb R^n$$. Zbl 1181.60127\nNane, Erkan\n2007\nSome non-existence results for a class of stochastic partial differential equations. Zbl 1420.35480\nFoondun, Mohammud; Liu, Wei; Nane, Erkan\n2019\nStochastic solution of fractional Fokker-Planck equations with space-time-dependent coefficients. Zbl 1342.60106\nNane, Erkan; Ni, Yinan\n2016\nTwo-term trace estimates for relativistic stable processes. Zbl 1307.60060\nBañuelos, Rodrigo; Mijena, Jebessa B.; Nane, Erkan\n2014\nInteracting time-fractional and $$\\Delta^{\\nu}$$ PDEs systems via Brownian-time and inverse-stable-Lévy-time Brownian sheets. Zbl 1286.60070\nAllouba, Hassan; Nane, Erkan\n2013\nTime dependent random fields on spherical non-homogeneous surfaces. Zbl 1310.60060\nD’Ovidio, Mirko; Nane, Erkan\n2014\nA strong law of large numbers with applications to self-similar stable processes. Zbl 1274.60098\nNane, Erkan; Xiao, Yimin; Zeleke, Aklilu\n2010\nIsoperimetric-type inequalities for iterated Brownian motion in $$\\mathbb R^n$$. 
Zbl 1134.60051
Nane, Erkan
2008
On a backward problem for multidimensional Ginzburg-Landau equation with random data. Zbl 06850149
Kirane, Mokhtar; Nane, Erkan; Tuan, Nguyen Huy
2018
Intermittency fronts for space-time fractional stochastic partial differential equations in $$(d + 1)$$ dimensions. Zbl 1375.60105
Asogwa, Sunday A.; Nane, Erkan
2017
Path stability of stochastic differential equations driven by time-changed Lévy noises. Zbl 06866535
Nane, Erkan; Ni, Yinan
2018
Some properties of non-linear fractional stochastic heat equations on bounded domains. Zbl 1374.60116
Foondun, Mohammud; Guerngar, Ngartelbaye; Nane, Erkan
2017
Fractional Cauchy problems on compact manifolds. Zbl 1341.60068
D’Ovidio, Mirko; Nane, Erkan
2016
$$\alpha$$-time fractional Brownian motion: PDE connections and local times. Zbl 1278.60074
Nane, Erkan; Wu, Dongsheng; Xiao, Yimin
2012
Critical parameters for reaction-diffusion equations involving space-time fractional derivatives. Zbl 1442.35207
Asogwa, Sunday A.; Foondun, Mohammud; Mijena, Jebessa B.; Nane, Erkan
2020

#### Cited by 540 Authors

43 Nane, Erkan 25 Nguyen Huy Tuan 21 Leonenko, Nikolai N. 19 Meerschaert, Mark M. 18 Beghin, Luisa 18 Orsingher, Enzo 15 Vellaisamy, Palaniappan 14 D’Ovidio, Mirko 11 Polito, Federico 9 Magdziarz, Marcin 9 Sikorskii, Alla 8 Liu, Fawang 8 O’Regan, Donal 8 Toaldo, Bruno 7 Anh, Vo V. 7 Garra, Roberto 7 Mijena, Jebessa B. 7 Peng, Jigen 7 Turner, Ian William 6 Allouba, Hassan 6 Kobayashi, Kei 6 Lizama, Carlos 6 Lopushans’ka, Galyna Petrovna 6 Macci, Claudio 6 Zhou, Yong 5 Foondun, Mohammud 5 Maheshwari, Aditya 5 Ricciuti, Costantino 5 Xiao, Yimin 4 Ascione, Giacomo 4 Au, Vo Van 4 Bäumer, Boris 4 Chen, Zhen-Qing 4 Gao, Jinghuai 4 Jia, Junxiong 4 Kataria, Kuldeep Kumar 4 Kolokoltsov, Vassili N. 4 Lopushans’kyi, Andriy Olegovich 4 Papić, I. 4 Pirozzi, Enrica 4 Scalas, Enrico 4 Schilling, René Leander 4 Šuvak, Nenad 4 Yuan, Chenggui 4 Zaky, Mahmoud A. 4 Zou, Guang’an 3 Asogwa, Sunday A. 3 Băleanu, Dumitru I. 3 Can, Nguyen Huu 3 Chen, Wen 3 Gajda, Janusz 3 Gao, Guanghua 3 Gorenflo, Rudolf 3 Gunzburger, Max D. 3 Kim, Panki 3 Kirane, Mokhtar 3 Kumar, Avinash 3 Li, Kexue 3 Li, Miao 3 Li, Yaning 3 Liang, Yingjie 3 Lv, Guangying 3 Ngoc, Tran Bao 3 Ni, Yinan 3 Phuong, Nguyen Duc 3 Razzaghi, Mohsen 3 Ruiz-Medina, María Dolores 3 Sakhno, Lyudmyla Mykhaĭlivna 3 Straka, Peter 3 Sun, Zhizhong 3 Tatar, Salih 3 Thach, Tran Ngoc 3 Toniazzi, Lorenzo 3 Umarov, Sabir R. 3 Vu Trong Luong 3 Zacher, Rico 3 Zhang, Quanguo 2 Al-Jamal, Mohammad F. 2 Alikhanov, Anatoly A. 2 Bazhlekova, Emilia G. 2 Bogdan, Krzysztof 2 Bretó, Carles 2 Bu, Weiping 2 Burrage, Kevin 2 Kinderknecht, Yana A. 2 Capitanelli, Raffaela 2 Çenesiz, Yücel 2 Chechkin, Aleksei V. 2 Chen, Chuang 2 Chen, Le 2 Chen, Zhenlong 2 De Gregorio, Alessandro 2 Deng, Changsong 2 Di Crescenzo, Antonio 2 Do, Khac Duc 2 Duan, Jinqiao 2 Fahrenwaldt, Matthias Albrecht 2 Feng, Libo 2 Földes, Antónia 2 Frolov, Andrei N. 
...and 440 more Authors

#### Cited in 121 Serials

36 Statistics & Probability Letters 30 Fractional Calculus & Applied Analysis 22 Journal of Mathematical Analysis and Applications 21 Computers & Mathematics with Applications 19 Stochastic Processes and their Applications 11 Stochastic Analysis and Applications 10 Journal of Computational and Applied Mathematics 9 Chaos, Solitons and Fractals 9 Applied Mathematics and Computation 9 Proceedings of the American Mathematical Society 8 Journal of Applied Probability 8 Potential Analysis 7 Journal of Statistical Physics 7 Journal of Differential Equations 7 Applied Numerical Mathematics 7 Journal of Theoretical Probability 7 Applied Mathematics Letters 7 Numerical Algorithms 7 Journal of Evolution Equations 6 Communications in Nonlinear Science and Numerical Simulation 5 Journal of Functional Analysis 5 Applied Mathematical Modelling 4 Applicable Analysis 4 Journal of Computational Physics 4 The Annals of Probability 4 Transactions of the American Mathematical Society 4 Theory of Probability and Mathematical Statistics 4 Computational and Applied Mathematics 4 Methodology and Computing in Applied Probability 4 Advances in Mathematical Physics 4 Modern Stochastics. Theory and Applications 3 Journal of Mathematical Physics 3 Mathematical Methods in the Applied Sciences 3 Journal of Scientific Computing 3 NoDEA. Nonlinear Differential Equations and Applications 3 Nonlinear Dynamics 3 Stochastics and Dynamics 3 Advances in Difference Equations 3 ALEA. Latin American Journal of Probability and Mathematical Statistics 2 Advances in Applied Probability 2 Ukrainian Mathematical Journal 2 Theory of Probability and its Applications 2 Integral Equations and Operator Theory 2 Probability and Mathematical Statistics 2 SIAM Journal on Mathematical Analysis 2 Journal of Inverse and Ill-Posed Problems 2 Abstract and Applied Analysis 2 Chaos 2 Communications on Pure and Applied Analysis 2 Journal of Statistical Mechanics: Theory and Experiment 2 Journal of Fixed Point Theory and Applications 2 Journal of Physics A: Mathematical and Theoretical 2 Fractional Differential Calculus 2 Carpathian Mathematical Publications 2 Mathematics 2 Open Mathematics 1 International Journal of Control 1 Inverse Problems 1 Journal of the Franklin Institute 1 Nonlinearity 1 Periodica Mathematica Hungarica 1 Mathematics of Computation 1 Automatica 1 BIT 1 Illinois Journal of Mathematics 1 Journal of the London Mathematical Society. Second Series 1 Mathematische Annalen 1 Mathematische Zeitschrift 1 Numerical Functional Analysis and Optimization 1 Numerische Mathematik 1 Ricerche di Matematica 1 Semigroup Forum 1 Optimal Control Applications & Methods 1 Acta Applicandae Mathematicae 1 Bulletin of the Iranian Mathematical Society 1 Acta Mathematicae Applicatae Sinica. English Series 1 Revista Matemática Iberoamericana 1 Numerical Methods for Partial Differential Equations 1 Forum Mathematicum 1 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 1 Communications in Statistics. Theory and Methods 1 Computational Statistics and Data Analysis 1 Indagationes Mathematicae. New Series 1 Vestnik St. Petersburg University. 
Mathematics 1 Integral Transforms and Special Functions 1 Complexity 1 Engineering Analysis with Boundary Elements 1 Electronic Communications in Probability 1 Bernoulli 1 Matematychni Metody ta Fizyko-Mekhanichni Polya 1 Differential Equations and Dynamical Systems 1 European Series in Applied and Industrial Mathematics (ESAIM): Probability and Statistics 1 Proceedings of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 1 Discrete Dynamics in Nature and Society 1 Extremes 1 Lobachevskii Journal of Mathematics 1 Annales Henri Poincaré 1 Computational Methods in Applied Mathematics 1 Journal of Applied Mathematics 1 Stochastic Models ...and 21 more Serials

#### Cited in 33 Fields

244 Probability theory and stochastic processes (60-XX) 209 Partial differential equations (35-XX) 83 Numerical analysis (65-XX) 76 Real functions (26-XX) 40 Ordinary differential equations (34-XX) 38 Operator theory (47-XX) 29 Special functions (33-XX) 19 Statistical mechanics, structure of matter (82-XX) 16 Integral equations (45-XX) 14 Statistics (62-XX) 9 Systems theory; control (93-XX) 7 Integral transforms, operational calculus (44-XX) 5 Global analysis, analysis on manifolds (58-XX) 5 Mechanics of deformable solids (74-XX) 5 Fluid mechanics (76-XX) 4 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 3 Potential theory (31-XX) 3 Harmonic analysis on Euclidean spaces (42-XX) 3 Calculus of variations and optimal control; optimization (49-XX) 3 Classical thermodynamics, heat transfer (80-XX) 3 Biology and other natural sciences (92-XX) 2 Geophysics (86-XX) 2 Operations research, mathematical programming (90-XX) 2 Information and communication theory, circuits (94-XX) 1 Combinatorics (05-XX) 1 Number theory (11-XX) 1 Group theory and generalizations (20-XX) 1 Measure and integration (28-XX) 1 Functions of a complex variable (30-XX) 1 Difference and functional equations (39-XX) 1 Sequences, series, summability (40-XX) 1 Approximations and expansions (41-XX) 1 Relativity and gravitational theory (83-XX)
[ null, "https://zbmath.org/static/feed-icon-14x14.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.60362756,"math_prob":0.6833578,"size":21323,"snap":"2021-21-2021-25","text_gpt3_token_len":6461,"char_repetition_ratio":0.18490548,"word_repetition_ratio":0.4357598,"special_character_ratio":0.27449232,"punctuation_ratio":0.19154291,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9501665,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-08T20:40:11Z\",\"WARC-Record-ID\":\"<urn:uuid:59f51449-6b81-4a86-82c8-4af0e5bfda66>\",\"Content-Length\":\"301747\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:26745730-5de8-4c2c-85c5-132519c4f4e1>\",\"WARC-Concurrent-To\":\"<urn:uuid:7238ea19-8821-4d8b-95e7-c4b5c20f29c6>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/authors/?q=ai%3Anane.erkan\",\"WARC-Payload-Digest\":\"sha1:PM7NVDIDUEHVCIWIZ5PPSLXLHIDMOCOK\",\"WARC-Block-Digest\":\"sha1:6LEJDLF7U2UWIEQLL753AUMGVCXPBTT5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988923.22_warc_CC-MAIN-20210508181551-20210508211551-00071.warc.gz\"}"}
https://www.arxiv-vanity.com/papers/1802.08232/
[ "# The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets\n\nNicholas Carlini\nUniversity of California, Berkeley\nChang Liu\nUniversity of California, Berkeley\nJernej Kos\nNational University of Singapore\nÚlfar Erlingsson\nDawn Song\nUniversity of California, Berkeley\n###### Abstract\n\nMachine learning models based on neural networks and deep learning are being rapidly adopted for many purposes. What those models learn, and what they may share, is a significant concern when the training data may contain secrets and the models are public—e.g., when a model helps users compose text messages using models trained on all users’ messages.\n\nThis paper presents exposure, a simple-to-compute metric that can be applied to any deep learning model for measuring the memorization of secrets. Using this metric, we show how to extract those secrets efficiently using black-box API access. Further, we show that unintended memorization occurs early, is not due to overfitting, and is a persistent issue across different types of models, hyperparameters, and training strategies. We experiment with both real-world models (e.g., a state-of-the-art translation model) and datasets (e.g., the Enron email dataset, which contains users’ credit card numbers) to demonstrate both the utility of measuring exposure and the ability to extract secrets.\n\nFinally, we consider many defenses, finding some ineffective (like regularization), and others to lack guarantees. However, by instantiating our own differentially-private recurrent model, we validate that by appropriately investing in the use of state-of-the-art techniques, the problem can be resolved, with high utility.\n\n## 1 Introduction\n\nOnce a secret has been learned, it can be difficult not to share it more widely—whether it is revealed indirectly, by our actions, by accident, or directly—as artfully explored in Joseph Conrad’s The Secret Sharer .\n\nThis issue also arises in the domain of machine learning: whenever training data contains sensitive information, a natural concern is whether the trained model has learned any secrets, and whether the model may possibly share those secrets, whether directly or indirectly.\n\nIn the machine-learning domain, such unintended sharing of secrets is a real-world concern of pressing importance. Machine learning is seeing rapid adoption and it is increasingly common for models to be trained on data very likely to contain secrets, such as people’s personal messages, location histories, or medical information [4, 37, 49]. We must worry about sharing of secrets, since the currently popular deep-learning methods are prone to both memorizing details about their training data and inadvertently revealing aspects of those details in their behavior [44, 57]. Most worryingly, secrets may be shared widely: models are commonly made available to third parties, or even the public, through black-box prediction services on the network, or as white-box pre-trained models [8, 24].\n\nContributions. We introduce the entropy-based metric exposure for measuring a models memorization of a given secret, and show how this metric can be efficiently estimated using numerical methods. We focus our study specifically on deep-learning generative sequence models trained on text data (as used in, e.g., language models and translation) where the secrets may be, for example, social-security or credit card numbers. 
We empirically establish that secrets are memorized early and quickly during training, with models often fully memorizing them in fewer than a dozen epochs, long before training completes. Furthermore, for a given training data corpus we show that memorization occurs even when secrets are very rare (one in a million) and when models are small (the number of parameters is a fraction of the corpus size). While common techniques for regularization (like weight decay, dropout, or early-stopping) may improve generalization, they do not inhibit memorization. Further, we leverage our exposure metric to provide additional evidence for prior results [28, 26, 32, 45, 57].

Building on the above, we develop the first mechanisms for efficiently extracting secrets from deep-learning models, given only black-box access. To demonstrate their practicality we apply them to real-world models and data, e.g., to extract credit card numbers from models trained on the Enron email data. Our algorithms are scalable as well as efficient, and vastly outperform brute-force methods, giving results in minutes on modest hardware, even when applied to large search spaces, such as credit card numbers.

Finally, we consider a range of defenses for preventing the unintended memorization of secrets, thereby thwarting their extraction. We find regularization to be ineffective as a defense, and pattern-matching based sanitization likely to be both fragile and incomplete. We develop new state-of-the-art differentially-private recurrent models, which offer strong guarantees along with good utility, and empirically verify that they can prevent the unintended sharing of secrets. In its totality, we find our work provides strong motivation for differentially-private learning methods; we advocate their use to control memorization and thwart extraction of secrets.

## 2 Background: Neural Networks

This section presents the technical preliminaries relevant to our work, covering material that will look familiar to readers knowledgeable about neural networks and recurrent generative sequence models.

### 2.1 Concepts, Notation, and Training

A neural network is a parameterized function that is designed to approximate an arbitrary function. Neural networks are most often used when it is difficult to explicitly formulate how a function should be computed, whereas what to compute can be effectively specified using examples, known as training data. The architecture of the network is the general structure of the computation, while the parameters (or weights) are the concrete internal values used to compute the function.

We use standard notation: a neural network is a function $f_\theta$ with parameters $\theta$. Given a training set $\mathcal{X} = \{(x_i, y_i)\}$ consisting of the training data $x_i$ and labels $y_i$, the process of training teaches the neural network to map each given instance to the corresponding label. Training is achieved through performing non-linear optimization, e.g., by performing gradient descent with respect to the parameters $\theta$ on a loss function that measures how close the network is to correctly classifying each input. The most common loss function used is the cross-entropy loss $H$, and so the sample-wise loss is $L(x, y, \theta) = H(f_\theta(x), y)$ for a network $f_\theta$.

To perform training, we first sample a random minibatch consisting of $m'$ labeled training examples $(\bar{x}_j, \bar{y}_j)$ drawn from $\mathcal{X}$ (where $m'$ is the batch size; often between 32 and 1024). 
Standard gradient descent updates the weights of the neural network by setting

$$\theta_{\text{new}} \leftarrow \theta_{\text{old}} - \epsilon \cdot \frac{1}{m'} \sum_{j=1}^{m'} \nabla_\theta L(\bar{x}_j, \bar{y}_j, \theta)$$

That is, adjust the weights $\epsilon$-far in the direction that minimizes the loss of the network on the minibatch, using the current weights $\theta$. Here, $\epsilon$ is called the learning rate.

It is often necessary to train over the training data for multiple iterations (each iteration is called one epoch) in order to reach maximum accuracy.

### 2.2 Generative Sequence Models

A generative sequence model is a fundamental module for many tasks such as language-modeling, translation, and dialogue systems. A generative sequence model is designed to generate a sequence of tokens $x_1 \dots x_n$ according to an (unknown) distribution $\Pr(x_1 \dots x_n)$.

Generative sequence models empirically compute this distribution, which can be decomposed by Bayes' rule as $\Pr(x_1 \dots x_n) = \prod_{i=1}^{n} \Pr(x_i \mid x_1 \dots x_{i-1})$, into a sequence of computations of the conditional distribution for a single token $x_i$ at timestep $i$.

Modern generative sequence models employ neural networks to estimate this conditional distribution. We write the probability that a neural network with input $x_1 \dots x_{i-1}$ outputs the token $x_i$ as

$$\Pr{}^{f_\theta}(x_i \mid x_1 \dots x_{i-1}) = \Pr(x_i \mid f_\theta(x_1 \dots x_{i-1}))$$

A neural network typically handles fixed-size inputs, but a sequence model must take a variable-length sequence as its input. To handle this case, the deep learning community typically takes one of the following two approaches:

• Fixed-size windowing partitions the text into multiple (possibly overlapping) fixed-size sub-sequences, treating each of them independently.

• Stateful input processing, e.g., using recurrent neural networks (RNNs). An RNN takes two arguments, a current token (e.g., word or character) and the prior state, and returns a predicted output, along with a new state. Thus, RNNs can process arbitrary-length text sequences.

We consider both architectures in this paper. When taking the former setup, we use convolutional neural networks (which can perform well on text [20, 21]). The majority of this paper considers the latter choice, which traditionally gives superior accuracy [3, 19, 52].

### 2.3 Overfitting in Machine Learning

Overfitting is one of the core difficulties in machine learning. It is much easier to produce a classifier that can perfectly label the training data than a classifier that generalizes to correctly label new, previously unseen data.

Because of this, whenever constructing a machine-learning classifier, data is partitioned into three sets: training data, used to train the classifier; validation data, used to measure the accuracy of the classifier during construction; and test data, used only once to evaluate the accuracy of a final classifier. This provides a metric to detect when overfitting has occurred. We refer to the "training loss" and "testing loss" as the loss averaged across the entire training (or testing) inputs.

Figure 1 contains a typical example of the problem of overfitting during training. Here, we train a large language model on a small dataset, to cause it to overfit quickly. Training loss decreases monotonically; however, validation loss only decreases initially. Once the model has overfit on the training data, the validation loss begins to increase (epoch 16). At this point, the model becomes less generalizable, and begins to perfectly memorize the labels of the training data.

When we study high-entropy secret memorization in this paper, we do not overfit the model to the training data; in fact, as we will show, the memorization of these high-entropy secrets occurs before the network has reached its minimum validation loss.
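To make the minibatch update rule and the train/validation bookkeeping above concrete, here is a minimal NumPy sketch. It is an illustration under stated assumptions rather than the paper's training code: `loss` and `loss_grad` are hypothetical stand-ins for the cross-entropy loss $L$ and its minibatch-averaged gradient.

```python
import numpy as np

def train_sgd(theta, loss, loss_grad, train, val, lr=0.1, batch_size=64, epochs=40):
    """Minibatch SGD (Section 2.1) with per-epoch loss tracking (Section 2.3).

    loss(theta, X, y) -> float and loss_grad(theta, X, y) -> np.ndarray are
    assumed stand-ins for L and its gradient averaged over the minibatch.
    """
    (X, y), (X_val, y_val) = train, val
    history = []
    for _ in range(epochs):
        order = np.random.permutation(len(X))        # reshuffle every epoch
        for start in range(0, len(X), batch_size):
            batch = order[start:start + batch_size]
            # theta_new <- theta_old - lr * (1/m') * sum_j grad L(x_j, y_j, theta)
            theta = theta - lr * loss_grad(theta, X[batch], y[batch])
        history.append((loss(theta, X, y), loss(theta, X_val, y_val)))
    # Overfitting shows up in `history` as training loss falling while
    # validation loss starts rising again.
    return theta, history
```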
## 3 Motivation and Problem Statements

In this section, we provide an overview of the memorization problem in deep-learning generative sequence models, and of how to extract secrets from the models via black-box accesses. We first present an illustrative attack scenario, and then formally explain generative sequence models and define the memorization problem. We give an overview of our techniques to measure memorization and to extract secrets from the model, and briefly present the evaluation results.

### 3.1 Notation and Motivating Example

When training a generative sequence model on natural language, we must be concerned with training data containing potentially sensitive information. For example, if training on email data, we might be concerned about the data containing the secret "My social security number is 123-45-6789".

We assume the format of the secret is known to the adversary (e.g., "My SSN is ___-__-____"). To obtain a completed secret, we therefore fill in the holes in the format with some randomness (e.g., "123456789"). We refer to the randomness space (denoted by $\mathcal{R}$) as the set of possible randomness values (e.g., nine digits, 0-9).

We denote the template instantiated with randomness $r$ as the secret $s[r]$. Finally, we call the inserted secret (denoted by $s[\hat{r}]$) the actual secret that is contained in the training data. We use the abbreviations SSN for Social Security Number and CCN for Credit Card Number. The problem we study then asks:

Given a known format, can we extract completed secrets from a model when given only black-box access?

We consider a scenario in which a machine learning service provider trains a sequence generative model using their private data, and exposes access to the model, allowing us to query $f_\theta$ (but not to inspect the weights $\theta$). The attacker then tries to use this query access to learn secrets that were used during the training phase. Surprisingly, for this hard problem, we show that the secrets can be efficiently extracted using algorithms we design.

### 3.2 Formalized Problem Statement

We begin with a definition of log-perplexity, which measures the likelihood of a given sequence under the distribution of a model.

###### Definition 1.

The log-perplexity of a secret $x_1 \dots x_n$ is

$$\mathrm{Px}_\theta(x_1 \dots x_n) = -\log_2 \Pr(x_1 \dots x_n) = \sum_{i=1}^{n} \left( -\log_2 \Pr(x_i \mid f_\theta(x_1 \dots x_{i-1})) \right)$$

We would like to define the memorization of a model with respect to the above log-perplexity. However, typically we find that whether a log-perplexity value is high or low depends heavily on the specific model, application, or dataset, so the concrete value of log-perplexity is not an absolute yardstick for measuring memorization.

• Memorization Problem: Given a model $f_\theta$, a format $t$, and a randomness $r \in \mathcal{R}$ (where $\mathcal{R}$ is the randomness space), we say the model memorizes $r$ if the log-perplexity of $s[r]$ is among the smallest for all $r' \in \mathcal{R}$, and completely memorizes $r$ if the log-perplexity of $s[r]$ is the absolute smallest. (When considering multiple secrets, we discuss each independently.)

In this work, we propose an alternative measurement, referred to as relative exposure, which captures the relative rank of a secret among all other possible secrets without depending directly on the absolute log-perplexity. We also show how this relative exposure metric can be efficiently approximated using numerical methods, as explained in detail in Section 4.
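As a concrete illustration of Definition 1, the sketch below computes log-perplexity from a model's next-token distributions. The `next_token_probs` query interface is a hypothetical stand-in for black-box access to $f_\theta$, not an API from the paper.

```python
import math

def log_perplexity(next_token_probs, sequence):
    """Log-perplexity of `sequence` under Definition 1.

    next_token_probs(prefix) is assumed to return a mapping from each
    candidate token x_i to Pr(x_i | f_theta(x_1 ... x_{i-1})).
    """
    total = 0.0
    for i, token in enumerate(sequence):
        p = next_token_probs(sequence[:i])[token]
        total += -math.log2(p)  # surprisal of the i-th token
    return total
```

Lower values mean the model considers the sequence more likely.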
Given our definition of log-perplexity, the problem of extracting a secret from a model can thus be defined as finding the one from all possible alternatives with the lowest log-perplexity. Formally, we have:

• Secret Extraction Problem: Given a model $f_\theta$, a format $t$, and a randomness space $\mathcal{R}$, the secret extraction problem searches for $\arg\min_{r \in \mathcal{R}} \mathrm{Px}_\theta(s[r])$.

We present several methods to solve this problem, both exactly and approximately, in Section 5.

## 4 Measuring Unintended Memorization

In this section we perform simple experiments to concretely demonstrate that neural networks memorize secrets, by showing that models memorize random numbers inserted in the training data.

### 4.1 Memorization in Neural Networks

For the remainder of this section, we use as our case study a character-level language model [38, 5]: given a sequence of text data, a language model predicts the next token (character, in this case) that will occur. Language models are well-studied in other domains, and have been shown to be effective at many different tasks [42, 38, 53].

Demonstrating that neural networks memorize their training data requires carefully constructed experiments. To clearly demonstrate this, we insert a completely random string into the training data, and show that the log-perplexity of this randomly inserted secret is statistically significantly lower than should be expected by chance. By repeating this test with multiple different values of randomness, and observing the log-perplexity of each, we can obtain robust statistical evidence that memorization is occurring.

#### Experimental setup.

We train a two-layer LSTM with 200 hidden units (with several hundred thousand trainable parameters) on the Penn Treebank dataset (a few MB of data). The output of the model is a probability distribution over all 50 possible output characters that occur in the PTB dataset. Full hyperparameter settings are given in Table 8 (Appendix A).

Let the secret format be "The random number is _________", where each underscore is a digit hole. We then choose a completely random $\hat{r}$, and insert the secret $s[\hat{r}]$ at a random position one time in the Penn Treebank dataset. We train our language model on this modified dataset, and compute the log-perplexity of $s[\hat{r}]$ versus the log-perplexity of a different (not inserted) secret $s[r']$ of the same format. Our hypothesis, that memorization is occurring, is therefore that $\mathrm{Px}_\theta(s[\hat{r}]) < \mathrm{Px}_\theta(s[r'])$.

#### Results.

We perform the above experiment repeatedly. We train each model for only one epoch (i.e., after training the model on the secret one time), and compare the log-perplexity of $s[\hat{r}]$ and $s[r']$. In 88 of the cases we observe $\mathrm{Px}_\theta(s[\hat{r}]) < \mathrm{Px}_\theta(s[r'])$, allowing us to reject the null hypothesis and conclude that the model has at least partially memorized the secret (at a statistically significant p-value).

### 4.2 Exposure: An Improved Measure

Although log-perplexity is helpful to demonstrate that neural networks memorize training data, it is unclear to what extent this occurs. To aid our study, we define the rank as an improved measure:

###### Definition 2.

The rank of a secret $s[r]$ is

$$\mathrm{rank}_\theta(s[r]) = \left| \{ r' \in \mathcal{R} : \mathrm{Px}_\theta(s[r']) \le \mathrm{Px}_\theta(s[r]) \} \right|$$
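Definition 2 can be evaluated by brute-force enumeration over the randomness space. A minimal sketch, reusing the hypothetical `log_perplexity` helper from above, where `fill(r)` is an assumed helper that instantiates the known format with randomness `r`:

```python
def rank(next_token_probs, fill, randomness_space, r):
    """Rank of s[r] (Definition 2): the number of candidates r' whose
    instantiation is at least as likely as s[r] (so the minimum rank is 1)."""
    target = log_perplexity(next_token_probs, fill(r))
    return sum(
        1
        for r_prime in randomness_space
        if log_perplexity(next_token_probs, fill(r_prime)) <= target
    )
```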
We now repeat the experiment from earlier, only now training our language model to minimum validation loss instead of stopping after one epoch. When we compute the rank of the inserted secret by enumerating all possible secrets, we find that $\mathrm{rank}_\theta(s[\hat{r}]) = 1$ (i.e., that $s[\hat{r}]$ has the lowest log-perplexity among all possible secrets).

The definition of rank is useful and conceptually simple, although computationally expensive, as we must compute the log-perplexity of all possible secrets.

To overcome this issue, we define a new measure: the exposure. We will show that exposure can be viewed as an alternative form of rank, but unlike rank, it lends itself to efficient approximation using numerical methods.

###### Definition 3.

Given a secret $s[r]$, a model with parameters $\theta$, and the randomness space $\mathcal{R}$, the exposure of a secret is as follows. (Theorem 1 in Appendix C derives the relation to rank.)

$$\mathrm{exposure}_\theta(s[r]) = \log_2 |\mathcal{R}| - \log_2 \mathrm{rank}_\theta(s[r])$$

Note that $\log_2 |\mathcal{R}|$ is a constant. Thus the exposure is essentially computing the negative log-rank, plus a constant to ensure the exposure is always positive.

### 4.3 Efficiently Approximating Exposure

Using random sampling to estimate exposure is effective when the rank of the secret is large enough that other secrets $s[t]$ with $\mathrm{Px}_\theta(s[t]) \le \mathrm{Px}_\theta(s[r])$ are likely to be found in a random search. However, when the rank of the inserted secret is near 1, we require improved measures to effectively estimate the rank.

To compute $\mathrm{exposure}_\theta(s[r])$, first observe that

$$\Pr_{t \in \mathcal{R}}\left[\mathrm{Px}_\theta(s[t]) \le \mathrm{Px}_\theta(s[r])\right] = \sum_{v \le \mathrm{Px}_\theta(s[r])} \Pr_{t \in \mathcal{R}}\left[\mathrm{Px}_\theta(s[t]) = v\right].$$

Thus, from its summation form, we can approximate the discrete distribution of log-perplexity using an integral of a continuous distribution, via

$$\mathrm{exposure}_\theta(s[r]) \approx -\log_2 \int_0^{\mathrm{Px}_\theta(s[r])} \rho(x)\, dx$$

where $\rho(x)$ is the continuous density function.

To implement this idea, we must choose a continuous distribution class so that (a) the integral can be efficiently computed, and (b) the continuous distribution class can accurately approximate the discrete distribution. In this work, we use a skew-normal distribution with mean $\mu$, standard deviation $\sigma$, and skew $\alpha$.

The above approach can effectively approximate the exposure. Figure 2 shows a histogram of the log-perplexity of all different secrets, overlaid with the approximating skew-normal distribution in dashed red. We observe that the approximating skew-normal distribution almost perfectly matches the discrete distribution based on log-perplexity.

No statistical test can confirm that two distributions match perfectly; instead, tests can only reject the hypothesis that the distributions are the same. When we run the Kolmogorov–Smirnov goodness-of-fit test on a moderate number of samples, we fail to reject the null hypothesis; only with a much larger number of samples is the test able to reject it. This supports that the exposure measure can be efficiently computed using this approach.

Note that while the relative exposure is upper-bounded by $\log_2 |\mathcal{R}|$, when the inserted phrase is more likely than all others, the estimated relative exposure has no theoretical upper bound. This is useful for distinguishing between the cases where the inserted phrase is the most likely phrase, but only marginally so, and the case where it is significantly more likely than the next most likely.
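The numerical approximation above can be sketched with SciPy's skew-normal fit. The sampling interface below is an assumption for illustration (reusing the hypothetical `log_perplexity` and `fill` helpers), and in practice the number of samples trades off query cost against fit quality.

```python
import numpy as np
from scipy.stats import skewnorm

def estimate_exposure(next_token_probs, fill, r_hat, space_size, n_samples=10_000):
    """Estimate exposure (Definition 3) without enumerating all of R.

    Fits a skew-normal density rho to sampled log-perplexities, then uses
    exposure ~ -log2 of the integral of rho up to Px(s[r_hat]), i.e. -log2
    of the estimated fraction of candidates at least as likely as the secret.
    """
    sample = np.random.randint(0, space_size, size=n_samples)  # random r in R
    perplexities = [log_perplexity(next_token_probs, fill(r)) for r in sample]
    a, loc, scale = skewnorm.fit(perplexities)    # shape (skew), location, scale
    target = log_perplexity(next_token_probs, fill(r_hat))
    mass = skewnorm.cdf(target, a, loc, scale)    # ~ rank / |R|
    return -np.log2(max(mass, np.finfo(float).tiny))  # guard against zero mass
```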
## 5 Black Box Secret Extraction

We now present different algorithms to extract secrets from a model. Given black-box access to a model with parameters $\theta$ and a format $t$, extracting the randomness $\hat{r}$ from the model is equivalent to finding the $r$ that minimizes $\mathrm{Px}_\theta(s[r])$. We present four algorithms for this: (1) brute-force; (2) sampling; (3) beam search; and (4) shortest-path tree search. In the following section, we first present the algorithms, and then present some evaluation to illustrate the effectiveness of the different algorithms.

### 5.1 Brute-force algorithm

The brute-force algorithm simply enumerates all possible $r \in \mathcal{R}$, computes $\mathrm{Px}_\theta(s[r])$, and selects the one with the smallest value. We include experiments in Appendix B (see Table 9) to show the top-20 most likely secrets with their log-perplexity, and we can observe that the inserted secret has the lowest log-perplexity.

While it is effective, brute force can be extremely slow when the randomness space $\mathcal{R}$ is large. For example, the space of all credit card numbers has size $10^{16}$; brute-force search over this space may take up to 4,100 GPU-years.

### 5.2 Generative Sampling

We can use a generative model to sample a set of secrets, and then select the one minimizing $\mathrm{Px}_\theta(s[r])$. Since our goal is to find the secret with minimum log-perplexity (and therefore maximum likelihood), it follows that the true randomness $\hat{r}$ should be more likely to be sampled than other values. Thus, sampling a small subset of $\mathcal{R}$ is more efficient than brute-force search, but maintains a high probability of finding $\hat{r}$.

The sampling process starts with an empty string, and expands it with one token (e.g., a character) at a time. When expanding the $i$-th token, it references the template $t$. If $t_i$ is not a hole, the algorithm adds $t_i$ directly at the end of the generated string; otherwise, it samples a token from the distribution over valid tokens (e.g., all digits 0-9) given by the model. The process terminates when the length of the generated string is identical to the length of the template $t$. The sampling algorithm repeats this sampling process a fixed number of times and adds each generated string into the candidate set.
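A minimal sketch of this sampling procedure, using the same hypothetical `next_token_probs` interface, with holes in the template marked by '#':

```python
import random

def sample_secret(next_token_probs, template, hole="#"):
    """Generate one candidate secret by sampling each hole from the model."""
    out = ""
    for ch in template:
        if ch != hole:
            out += ch                              # fixed character: copy it
        else:
            probs = next_token_probs(out)          # model's next-token distribution
            tokens = [t for t in probs if t.isdigit()]  # valid fillers: digits
            weights = [probs[t] for t in tokens]
            out += random.choices(tokens, weights=weights)[0]
    return out
```

Repeating this and keeping the candidate with the smallest `log_perplexity` implements the selection step described above.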
### 5.3 Beam search

Given a model that can predict likelihood scores of future text occurring given some context, beam search is the de facto procedure used in deep learning to compute the most likely output [34, 52, 20].

Beam search keeps a set of at most $b$ candidate partially generated strings, and iteratively extends the length of each candidate, keeping only the top-$b$ most likely. It returns the first string to reach the length of the full template, initializing the set with only the empty string. On each iteration $i$, beam search expands every string in the set with every possible token.

Formally, beam search explores a sequence of sets $B_0, B_1, \dots$. Each set $B_i$ contains partially generated prefix strings of a potential secret. Its successor set is $B_{i+1} = \{ p \,\|\, c : p \in B_i \}$, where $\|$ denotes string concatenation and $c$ ranges over the possible tokens. Once $B_{i+1}$ is computed, we retain only its $b$ smallest elements (as determined by perplexity).

Unfortunately, we find beam search ineffective at extracting the lowest-perplexity secret. While the full-length secret has the lowest perplexity, not all prefixes of the secret have the lowest perplexity, and so the prefix that would generate the inserted secret often falls out of the retained set at some earlier step.

### 5.4 Shortest Path Search

[Figure 3: An example illustrating the shortest-path search algorithm. Each node represents one partially generated string. Each edge denotes the conditional probability $\Pr(x_i \mid x_1 \dots x_{i-1})$. The path corresponding to the secret (i.e., minimizing its log-perplexity) is highlighted, and the perplexity is depicted below the path.]

Both the sampling algorithm and beam search are approximation algorithms, which are not guaranteed to find the optimal solution. Our next algorithm, shortest-path search, is guaranteed to find the string with minimum log-perplexity.

At a high level, in the same way that beam search can be viewed as breadth-first search (with a limited-size frontier), our shortest-path algorithm is essentially a variant of Dijkstra's algorithm.

We can organize all possible partial strings generated from the template as a tree, where the empty string is at the root. A partial string $p'$ is a child of $p$ if $p'$ expands $p$ by one token $c$; the weight on the edge from $p$ to $p'$ is $-\log_2 \Pr(c \mid p)$. Therefore, finding the completed string that minimizes the total cost of its path is equivalent to minimizing its log-perplexity. Figure 3 presents an example to illustrate the idea.

The shortest-path algorithm is inspired by Dijkstra's algorithm, which computes the shortest distance on a graph with non-negative edge weights. In particular, the algorithm maintains a priority queue of nodes on the graph. To initialize, only the root node (the empty string) is inserted into the priority queue with a weight 0. In each iteration, the node with the smallest weight is removed from the queue. Assume the node is associated with a partially generated string $p$ and a weight $w$. Then for each token $c$ such that $p \,\|\, c$ is a child of $p$, we insert the node $p \,\|\, c$ into the priority queue with weight $w - \log_2 \Pr(c \mid p)$, i.e., the node's weight plus the weight of the edge from $p$ to $p \,\|\, c$.

The algorithm terminates once the node pulled from the queue is a leaf node (i.e., a node of maximum length). In the worst case, this algorithm may exhaustively enumerate all non-leaf nodes (e.g., when all possible strings are evenly distributed). However, empirically we find shortest-path search enumerates from 2 to 4 orders of magnitude fewer nodes.

During this process, the main computational bottleneck is computing the edge weights $-\log_2 \Pr(c \mid p)$. A modern GPU can simultaneously evaluate a neural network on many thousands of inputs in the same amount of time as it takes to evaluate one. To leverage this benefit, we pull multiple nodes from the priority queue at once in each iteration, and compute all edge weights to their children simultaneously. In doing so, we observe a large reduction in overall runtime.

Applying this optimization violates the guarantee that the first leaf node found is always the best. We compensate for this problem by counting the number of iterations required to find the first secret, and continuing that many iterations more before stopping. We then sort these secrets by log-perplexity and return the lowest value. While this doubles the number of iterations, each iteration is two orders of magnitude faster, and this results in a substantial increase in performance.
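The procedure just described reduces to Dijkstra's algorithm over prefix strings. Below is a minimal single-node-per-iteration sketch (without the batched-GPU optimization); `cond_nll(prefix, token)` is a hypothetical helper returning $-\log_2 \Pr(\text{token} \mid \text{prefix})$ from the model.

```python
import heapq

def shortest_path_search(template, charset, cond_nll, hole="#"):
    """Return the completed template with minimum log-perplexity.

    Nodes are partial strings; the cost of a root-to-leaf path equals the
    string's log-perplexity, and because every edge weight -log2 Pr(.) is
    non-negative, the first full-length string popped is optimal.
    """
    queue = [(0.0, "", 0)]              # (cost so far, prefix, next position)
    while queue:
        cost, prefix, i = heapq.heappop(queue)
        if i == len(template):
            return prefix, cost         # leaf reached: optimal secret found
        options = charset if template[i] == hole else [template[i]]
        for c in options:               # expand the node by one token
            heapq.heappush(queue, (cost + cond_nll(prefix, c), prefix + c, i + 1))
    return None
```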
## 6 Characterizing Memorization of Secrets

To better understand why and how models memorize secrets, and to validate the utility of the exposure metric, we perform additional experiments to study how memorization characteristics are reflected in the various aspects of deep-learning training processes.

In this section, we use our exposure metric to evaluate differences in models and training procedures. Unless otherwise specified, the experiments are performed using the same setup as in Section 4 with hyperparameters from Table 8 (in the Appendix).

### 6.1 Across Training Iterations

We begin our evaluation by studying how quickly neural networks memorize training data, and evaluate how exposure relates to training and testing loss.

[Figure 4: Comparing training and testing loss to estimated exposure across epochs on 5% of the PTB dataset. Testing loss reaches a minimum at 10 epochs, after which the model begins to overfit (as seen by training loss continuing to decrease). Estimated exposure also peaks at this point, and decreases afterwards.]

Figure 4 shows a plot of how memorization occurs during training on a sample of 5% of the PTB dataset, chosen so that the model will overfit. When training on this subset of the data we use a slightly larger learning rate to obtain higher accuracy. The first few epochs see the testing loss drop rapidly, until the minimum testing loss is achieved at epoch 10. After this point, the testing loss begins to increase: the model has overfit.

Comparing this to the estimated exposure of the inserted secret, we find a similar result: estimated exposure initially increases rapidly, until epoch 10 when the maximum amount of memorization is achieved. Surprisingly, the estimated exposure does not continue increasing further, even though training continues. In fact, the exposure at epoch 10 is actually higher than the exposure at epoch 40 (a statistically significant difference). While this is interesting, in practice it has little effect: the exposure remains high enough to extract the secret for all epochs after this point.

This result confirms one of the findings of Tishby and Schwartz-Ziv and Zhang et al., who argue that neural networks first learn to minimize the loss on the training data by memorizing the training data.

The other observation we make is that memorization begins to occur after only one epoch of training: at this point, the exposure of the inserted secret is already 3, indicating the secret is more likely than a random phrase. After five epochs, when the model is still far away from its minimum testing loss, if the adversary knew the first half of the secret, they would be able to uniquely extract the second half.

### 6.2 Across Different Architectures

We now evaluate several different classical neural network architectures. The results are presented in Table 1. We show that all of them suffer from the memorization problem. We observe that the two classical recurrent neural networks, i.e., LSTM and GRU, demonstrate both the highest accuracy and the highest exposure values. Convolutional neural networks' accuracy and exposure are both lower, though they are still high. Therefore, through this experiment, we show that memorization is not an issue for only one particular architecture, but may be a ubiquitous issue across many deep neural networks.

### 6.3 Across Training Strategies

There are various settings for training strategies and techniques that are known to impact the accuracy of the final model. We briefly evaluate the impact that each of these has on the exposure of the inserted phrase.

#### Batch Size

In stochastic gradient descent, recall that we train on minibatches of multiple examples simultaneously, and average their gradients to update the model parameters. This is usually done for computational efficiency: due to their parallel nature, modern GPUs can evaluate a neural network on many thousands of inputs simultaneously.

To evaluate the effect of the batch size on memorization, we train our language model with a range of batch sizes. (At each batch size, we train 10 models and average the results.) All models reach nearly identical final training and testing loss. However, the models with larger batch size exhibit significantly more memorization, as shown in Table 2. 
This experiment provides additional evidence for prior work which has argued that using a smaller batch size yields models which generalize better [28, 26, 32].

While this does give a method of reducing memorization for some models, it unfortunately comes at a significant cost: training with a small batch size is often prohibitively slow, and prevents parallelizing training across GPUs (and servers, in a decentralized fashion). (In fact, recent work has begun using even larger batch sizes to train models many orders of magnitude more quickly than previously possible [55, 54, 23].)

#### Shuffling, Bagging, and Optimization Method.

Given a fixed batch size, we now examine how other settings impact memorization. We train our model with different optimizers: SGD, Momentum SGD [40, 48], RMSprop, Adagrad, Adadelta, and Adam; and with either shuffling (where training data is shuffled before each epoch) or bagging (where training samples in a minibatch are sampled with replacement from the training data).

Not all models converge to the same final test accuracy. However, when we control for the final test accuracy by taking a checkpoint from an earlier epoch for those models that perform better, we find no statistically significant difference in the exposure of the inserted secret with any of these settings; we therefore do not include these results.

### 6.4 Across Secret Formats and Context

One surprising observation we make while studying the memorization of secrets during training is that the context the adversary is aware of significantly affects the adversary's ability to detect whether memorization has occurred.

For example, in the prior sections, we assumed the adversary was aware of the prefix "The random number is" and then attempted to identify the secret that followed. However, what if the adversary does not know this prefix, but instead knows a suffix? Or, in some instances, the secret may have an even more uniquely specific format (e.g., social security numbers are formatted "___-__-____"). Does this extra formatting impact the level of detectable memorization?

We find that the answer is yes: additional knowledge about the format of the secret increases the ability of an attacker to extract the randomness. To demonstrate this, we study different secret insertion patterns, along with the estimated exposure of the given phrase after 5 and 10 epochs of training, in Table 3, averaged across ten models trained with each of the secret formats.

For the first four rows of Table 3 we use the same model, but provide the adversary with different levels of context. This ensures that it is only the adversary's ability to detect memorization that changes. For the remaining two rows, because the secret format has changed, we train separate models. We find that increasing the available context also increases the exposure, especially when inner context is available; this additional context becomes increasingly important as training proceeds.

### 6.5 Memorization across Multiple Simultaneous Secrets

As a final set of experiments, we now examine what happens when multiple secrets are inserted in the dataset (each potentially inserted multiple times). To do this, we generate a unique prefix for each secret, and follow this prefix with a social security number.

We insert between 1 and 500 secrets into the dataset, each between 1 and 10 times. In Table 4 we show the results of this analysis. 
### 6.6 Intriguing Memorization Selectivity

The fact that models completely memorize secrets in the training data is unexpected: our language model is only KB when compressed (see Appendix D.3 for how we perform this compression), and the PTB dataset is MB when compressed. Assuming that the PTB dataset cannot be compressed significantly more than this, it is therefore information-theoretically impossible for the model to have memorized all training data—it simply does not have enough capacity with only KB of weights. Despite this, when we repeat our experiment and train this language model multiple times, the inserted secret is the most likely candidate in most runs (and in the remaining runs the secret is always within the top 10 most likely). At present we are unable to fully explain why this occurs. We conjecture that the model learns a lossy compression of the training data, on which it is forced to learn and generalize. But since secrets are random, incompressible parts of the training data, no such force prevents the model from simply memorizing their exact details.

## 7 Evaluating the Extraction of Secrets

In this section, we first evaluate different secret extraction algorithms using our language model on the PTB dataset (used in Sections 4 and 6). Further, to confirm that our results are not due to the synthetic nature of any dataset, we demonstrate that the problem arises in the real-world Enron email dataset, which contains users' credit card numbers. Similarly, to confirm that our language model and training approach are not artificially encouraging memorization, we demonstrate extraction on real-world, state-of-the-art unmodified models and training approaches—in particular, the Word-Level Language Model and Neural Machine Translation Model available from the open source Google TensorFlow Model Repository.

### 7.1 Evaluating Extraction Algorithms

To evaluate different secret extraction approaches, we use the same language model we have been using on the PTB dataset, with a single 9-digit random secret inserted once. This model completely memorizes the inserted secret: its exposure is over 30.

#### Brute force.

As a baseline, we are able to perform brute-force secret extraction on a single social security number in approximately 4 hours by enumerating all 10^9 candidate secrets.

#### Generative sampling.

Since it is a randomized algorithm, we evaluate the sampling-based algorithm multiple times. We observe that, on average, it finds the secret with high probability after fewer iterations than brute force requires; it is thus faster than the brute-force algorithm on average.

#### Beam search.

We run the beam search with a large maximum pool size, but it still rarely generates the true inserted secret: while the full sequence is the most likely of any, this is not the case for all of its prefixes. Indeed, the prefix that would generate the inserted secret is often not among the top-k candidates at some earlier step.", null, "Figure 5: Number of iterations of the shortest path search algorithm required to identify the inserted 9-digit secret. Extracting with brute-force search requires 10^9 iterations. An exposure of 30 corresponds to the point at which the phrase is uniquely extractable.

#### Shortest path search.

Figure 5 shows the estimated exposure of the inserted secret versus the number of iterations the shortest path search algorithm requires to find it. The shortest-path search algorithm greatly reduces the number of secrets enumerated in the search when the exposure of the inserted phrase is greater than 30. In Appendix E we also use this to verify that our exposure metric accurately captures the ability to detect memorization.
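The shortest-path search can be viewed as a uniform-cost (Dijkstra-style) search over the tree of digit sequences, always expanding the most likely prefix first. The sketch below is a simplified rendering under that reading; `next_token_logprobs` is a hypothetical stand-in for one query to the trained model, and a practical implementation would batch queries and bound the frontier.

```python
import heapq

def shortest_path_extract(next_token_logprobs, prefix_tokens, digit_vocab, secret_len):
    """Return the most likely digit sequence following a known prefix.

    next_token_logprobs(tokens) -> {token: log p(token | tokens)} is a
    hypothetical model query. Path cost is the negative log-likelihood
    of the partial secret, so the first full-length sequence popped
    from the heap is the most likely one.
    """
    heap = [(0.0, [])]  # (cost so far, partial secret)
    while heap:
        cost, partial = heapq.heappop(heap)
        if len(partial) == secret_len:
            return partial, cost
        logprobs = next_token_logprobs(prefix_tokens + partial)
        for d in digit_vocab:
            heapq.heappush(heap, (cost - logprobs[d], partial + [d]))
    return None
```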
### 7.2 Dataset Evaluation: Enron Emails

We now confirm that our results on the PTB dataset, where we artificially inserted one random number, also hold true on real-world data. That is to say, instead of running experiments on data with inserted secrets, we run experiments on naturally-occurring data where secrets are already present.

The Enron Email Dataset (http://www.cs.cmu.edu/~enron/) consists of several hundred thousand emails sent between employees of Enron Corporation, and subsequently released by the Federal Energy Regulatory Commission in its investigation. The complete dataset consists of the full emails, with attachments. Many users sent highly sensitive information in these emails, including social security numbers and credit card numbers.

We preprocess this dataset by removing all email attachments and keeping only the body of each email. We remove the text of the email being responded to, and filter out automatically-generated emails and emails sent to the entire company.

We separate the emails by sender, with per-user datasets ranging from MB to MB in size (about the size of the PTB dataset), and train one character-level language model per user who has sent at least one secret. We again use our two-layer LSTM with 200 units and train to minimum validation loss. In Appendix G we give detailed statistics of the datasets.

We summarize our results in Table 5. Three of these secrets, which pre-exist in the dataset, are memorized to such a degree that they can be extracted by our shortest-path search algorithm.

### 7.3 Evaluating Word-Level Models

To confirm that our language model is not memorizing only because it is a character-level model, we train a word-level language model. We take an off-the-shelf word-level language model given in the TensorFlow Model repository, designed to be trained on the PTB dataset.

This model is larger than the character-level language model, at million parameters. It learns a word embedding with a vocabulary size of 10,000 words, and a two-layer LSTM with 1500 hidden units. The network is trained with dropout and stochastic gradient descent to minimum validation loss. We do not modify the training process or architecture.

Since this is a word-level language model, we cannot just insert the secret as a sequence of numbers (e.g., “The secret is 1234”) because “1234” would be considered a word, and it is not one of the 10,000 words contained in the vocabulary (all other words are replaced with the special unknown-word token, unk). We consider two methods of allowing the language model to see this secret:

• Split the numbers so that each is its own word: “The random number is 1 2 3 4”.
This approach is often taken in practice when using a limited-size vocabulary [33, 43].

• Change the format of the secret to fit the model, and use the English word for each number: “the random number is one two three four”.

We train this larger model on our PTB dataset modified with one inserted secret, using all default model parameters, and verify that no overfitting occurs.

We repeat our evaluation from Section 4.2 with the same 9-digit secret using each of the two formats. When using the former insertion approach—inserting the digits themselves—the estimated exposure of the inserted phrase is high. Using the latter approach, the exposure is much lower: the phrase is still more likely than if inserted by random chance, but much less rare than in the case of inserting the numeral digits. We find it fascinating that this model is larger than the character-level language model, and has sufficient capacity to memorize the training data completely, but it actually memorizes less.

### 7.4 Evaluating Neural Translation Models

After language models, perhaps the next most common use of generative sequence models is Neural Machine Translation (NMT). NMT is the process of training a neural network to translate from one language to another. We demonstrate that the memorization problem also occurs in models performing this task.

Specifically, an NMT model takes as input a sequence of words in the source language and outputs a sequence of words in the target language. The model operates by reading its input words one at a time and uses an LSTM to predict each translated word. For notational simplicity, we represent this as a function which takes as input a sentence and outputs a probability distribution over all of the possible translations of this sentence.

We again make use of the TensorFlow Model repository, which contains an implementation of one of the initial papers demonstrating effective NMT.

To train our model, we follow the exact steps described in the documentation, on the provided English-Vietnamese dataset containing approximately 100k sentences written in both English and Vietnamese.

We add to this dataset an English phrase of the format “My social security number is - - ” and a corresponding Vietnamese phrase of the same format, with the English text replaced by its Vietnamese translation. We insert this pair once, twice, or four times.

When using NMT, we must slightly modify our definition of log-perplexity. In translating from Vietnamese to English, the likelihood of generating the next English word depends both on the English words generated so far and on the entire Vietnamese sentence used as input. We therefore adjust the entropy measure to account for this; effectively, we modify our notion of perplexity to fit the task.

Under this new perplexity measure, we can now compute the exposure of the inserted secret. We summarize these results in Table 6. After inserting the secret only once, it is already more likely than random chance; after inserting it four times, it is completely memorized. A sketch of this adjusted log-perplexity follows.
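Here `translation_logprobs` is a hypothetical stand-in for the NMT model's next-word distribution given the full source sentence and the target prefix; the base-2 logarithm matches the exposure computation used elsewhere.

```python
def conditional_log_perplexity(translation_logprobs, source_sentence, target_sentence):
    """Log-perplexity of a target sentence conditioned on the source
    sentence, the modified perplexity sketched for NMT in Section 7.4.

    translation_logprobs(source, target_prefix) -> {word: log2 p(word | ...)}
    is a hypothetical model query.
    """
    total = 0.0
    prefix = []
    for word in target_sentence:
        logprobs = translation_logprobs(source_sentence, prefix)
        total -= logprobs[word]  # accumulate -log2 p(w_i | source, w_<i)
        prefix.append(word)
    return total
```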
## 8 Evaluating Defenses

As we have shown above, neural networks quickly memorize secrets. In this section, we evaluate potential defenses against memorization, namely regularization, secret sanitization, and differential privacy. We empirically analyze their impact on memorization and accuracy.

### 8.1 Regularization

It would be reasonable to assume that memorization is due to the model overfitting to the training data. Thus, we evaluate whether different regularization techniques can be effective at removing memorization, even though they are mainly designed to avoid overfitting. We evaluate three popular forms of regularization: weight decay, dropout, and weight quantization. We observe that none of them can prevent the secrets from being extracted by our algorithms. Thus, we conclude that regularization designed to avoid overfitting is not an effective defense against memorization. Details of this analysis are presented in Appendix D.

### 8.2 Sanitization

Sanitization is a best practice for processing sensitive, private data. However, one cannot hope to guarantee that all possible sensitive sequences will be found and removed through such blacklists—e.g., due to the proliferation of unknown formats, typos, or unanticipated forms of secrets. Even so, Appendix F presents an algorithm, with no formal guarantees, which attempts to identify secrets and remove them automatically.

### 8.3 Differential Privacy

Differential privacy [13, 16, 15] is a privacy notion that bounds, with high confidence, the information an algorithm reveals about its input. As background, we introduce its formal definition as follows.

###### Definition 4.

A randomized algorithm A is (ε, δ)-differentially private if

$$\Pr(A(D)\in S)\leq\exp(\varepsilon)\cdot\Pr(A(D')\in S)+\delta$$

for any set S of possible outputs of A, and any two data sets D and D' that differ in at most one element.

Intuitively, this definition says that when adding or removing one element from the input data set, the output distribution of a differentially private algorithm does not change by much. Thus, differential privacy is a desirable property to defend against memorization. Consider a data set D that contains one occurrence of the secret, and D' the same data set with that occurrence removed. Slightly imprecisely speaking, the output model of a differentially private training algorithm running over D, which contains the secret, is similar to the output model trained from D', which does not contain the secret. Thus, such a model cannot memorize the secret as completely.

We use an improved differentially private stochastic gradient descent (DP-SGD) algorithm to verify that differential privacy is an effective defense against memorization. We use the authors' open source DP-SGD code to train our character-level language model from Section 4. We slightly modify the code to adapt it to recurrent neural networks (and LSTMs in particular) and to allow per-example gradient computations. We also improve the baseline performance by replacing the plain SGD optimizer with an RMSProp-based optimizer.

We train six differentially private models, using various values of ε, for a fixed number of epochs on the PTB dataset augmented with one secret value. Training a differentially private algorithm is known to be slower than standard training; our unoptimized implementation of this algorithm is slower than standard training. For computing the privacy budget we use the moments accountant. We use the same value of δ in each case. The gradient is clipped to a threshold to avoid gradient explosion. We initially evaluate two different optimizers (the plain SGD used by the original DP-SGD authors, and RMSProp), but focus most experiments on training with RMSProp, as it tends to achieve much better baseline results than SGD. (We do not perform hyperparameter tuning with SGD or RMSProp. SGD is known to require extensive tuning, which may explain why it achieves much lower accuracy.) A minimal sketch of the per-example clip-and-noise step at the heart of this approach follows.
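The sketch assumes the standard Gaussian mechanism with per-example clipping; parameter names are illustrative.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One noisy gradient aggregation step in the style of DP-SGD.

    per_example_grads: shape (batch, dim), one gradient per example
                       (this is why per-example computation is needed).
    Each gradient is clipped to L2 norm `clip_norm`; Gaussian noise
    calibrated to the clip norm is added to the sum before averaging.
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)
```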
Table 7 shows the evaluation results.

The most useful differentially private model achieves only slightly worse test accuracy than the baseline model trained without differential privacy. As we decrease ε, the exposure drops to the point at which this secret is no more likely than any other, showing that DP-RMSProp can fully eliminate the memorization effect from a model. Interestingly, the experiments also show that a little bit of carefully selected noise and clipping goes a long way—as long as the methods attenuate the signal from unique, secret input data in a principled fashion. Even with a vanishingly small amount of noise, and values of ε that offer no meaningful theoretical guarantees, the measured exposure is negligible.

## 9 Related Work

There is a large body of work related to the privacy of training data. We briefly summarize it here.

#### Backdoor (intentional) memorization.

Perhaps the most closely related work to ours is that of Song et al., who also study training data extraction. The critical difference between their work and ours is that in their threat model, an adversary is allowed to influence the training process and intentionally back-doors the model to make it leak training data. They are able to achieve incredibly powerful attacks as a result of this threat model. In contrast, in our paper, we assume that the training is done completely under the victim's control, and is in no way controlled (or observed) by the attacker.

#### Membership Inference.

We are not the first to study the privacy implications of training on private data. Recent work has demonstrated membership inference attacks: given a neural network f trained on training data D, and an instance x, it is possible to construct a membership oracle that answers the question “Is x a member of D, the training data of the model f?”

Motivated by the notion of membership inference, we make two contributions. First, we develop a generic, simple-to-implement metric, exposure, for quantifying memorization in models, which can be easily applied to any model with a defined notion of perplexity. For this, instead of requiring the training of a new model, we simply rely on the fact that if x is in D, then f will be more confident in its prediction on x. Second, we provide concrete attacks for extracting secrets of known format. (Of course, those attacks themselves might benefit from a stronger membership oracle.)

#### Generalization in Neural Networks.

The other inspiration for our work is the demonstration by Zhang et al. that standard models can be trained to perfectly fit completely random data. Specifically, the authors show that the same architecture that can classify MNIST digits correctly with high test accuracy can also be trained on completely random data to achieve perfect training-data accuracy (but obviously poor test accuracy).

Because there is no way to learn to classify random data, the only explanation is that the model has memorized the labels of the training data. Given that neural networks are able to memorize the labels of random training data, in this paper we study whether neural networks also memorize normal training data, and whether this can be detected.

#### Training data leakages.

Ateniese et al. demonstrate that if an adversary is given access to a remote machine learning model (e.g., support vector machines, hidden Markov models, neural networks, etc.)
that performs better than their own model, it is often possible to learn information about the remote model's training data, which can then be used to improve the adversary's own model. In that work the authors “are not interested in privacy leaks, but rather in discovering anything that makes classifiers better than others.” In contrast, we focus only on the private training data.

#### Model stealing.

Model stealing studies a problem related to ours: under a black-box threat model, it attempts to extract from a remote model its parameters (or parameters similar to them), so that the adversary can have their own copy. While model extraction is designed to steal the parameters of the remote model, training data extraction is designed to recover the training data that was used to produce it. That is, even if we were given direct access to the model (possibly through a successful model stealing attack), it would remain a difficult challenge to extract the training data.

#### Model inversion.

Model inversion [17, 18] is an attack that attempts to learn aggregate statistics about the training data, potentially revealing private information. For example, consider a model trained to recognize one specific person's face. Given an image of a face, it returns the probability that the image is of that person. Model inversion constructs an image that maximizes the confidence of this classifier on the generated image; it turns out this generated image often looks visually similar to the actual person it is meant to classify. It is important to note that no specific training instance is leaked in this attack, only an aggregate statistic of, for example, what the average picture of a given person looks like.

#### Private Learning.

Along with the attacks described above, a large amount of effort has been spent on training private machine learning algorithms. The centerpiece of these defenses is often differential privacy [13, 16, 15], a property that states, roughly, that it is impossible for an adversary to distinguish between the case that a model was trained with or without a given secret in the training data. Differential privacy has been applied to several classes of machine learning algorithms, including neural networks.

In Section 8.3, we empirically analyze the privacy gained by training a model with differential privacy. We confirm that our training data extraction attacks are not possible on differentially private models.

## 10 Conclusions

Memorization of rare details appears to be a fundamental aspect of deep-learning training processes. This has been indicated in earlier work, and this paper has provided further supporting evidence via empirical analysis of generative sequence models. Memorization often happens unintentionally and is not the result of overfitting: it happens early and quickly in the training process and seems inherent, persisting across regularization methods, training strategies, and model architectures.

We show, in this paper, that it is possible to measure the extent to which memorization has occurred—and even the extent to which individual “secrets” are exposed—where secrets are unique input sequences of a known or guessable format, such as credit-card numbers.
Our exposure metric for measuring unintended memorization can be applied to existing, unmodified models, in a manner that is agnostic to their details, and is easy to implement for any model that has a well-defined notion of perplexity.

Unfortunately, the same methods used to construct our exposure metric also allow for the efficient and scalable extraction of secrets with only black-box access. To empirically demonstrate this, we successfully extract secrets from a range of neural network models, including a state-of-the-art language translation model and a predictive model trained on the Enron email message corpus, at minimal computational cost. Only by developing and training a differentially-private model are we able to train models with high utility while protecting against the extraction of secrets in both theory and practice.

## Acknowledgements

We are grateful to Martín Abadi, Ian Goodfellow, Ilya Mironov, Kunal Talwar, and David Wagner for helpful discussion. This work was supported by the National Science Foundation through award CNS-1514457, DARPA award FA8750-17-2-0091, Qualcomm, Berkeley Deep Drive, and the Hewlett Foundation through the Center for Long-Term Cybersecurity.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

## Appendix A Configuration of Memorization Study

| Hyper-parameter | Setting |
| --- | --- |
| Optimizer | RMSProp |
| Batch Size | 128 |
| Learning Rate | 0.001 |
| Decay Rate | On Plateau |
| Architecture | LSTM |
| Units | 200 |
| Layers | 2 |
| Dropout | None |
| Epochs | 100 |
| Early-Stopping | Yes |
| Sequence Length | 20 |

## Appendix B Secrets Sorted by Log-Perplexity

| Secret | Log-Perplexity |
| --- | --- |
| The random number is 281265017 | 14.63 |
| The random number is 281265117 | 18.56 |
| The random number is 281265011 | 19.01 |
| The random number is 286265117 | 20.65 |
| The random number is 528126501 | 20.88 |
| The random number is 281266511 | 20.99 |
| The random number is 287265017 | 20.99 |
| The random number is 281265111 | 21.16 |
| The random number is 281265010 | 21.36 |
| The random number is 281265811 | 21.90 |
| The random number is 281265817 | 21.95 |
| The random number is 286265175 | 22.12 |
| The random number is 282665117 | 22.16 |
| The random number is 286265017 | 22.24 |
| The random number is 281965017 | 22.25 |
| The random number is 281265517 | 22.41 |
| The random number is 288265017 | 22.61 |
| The random number is 281265018 | 22.63 |
| The random number is 281266517 | 22.69 |
| The random number is 286265177 | 22.78 |

## Appendix C Formulating Exposure using Rank

###### Theorem 1.

Given a secret s[r], a model with parameters θ, and the space of randomness R, we have

$$\mathrm{exposure}_{\theta}(s[r])=\log_2|\mathcal{R}|-\log_2\mathrm{rank}_{\theta}(s[r]).$$

###### Proof.

By definition, the exposure is the negative log-probability that a uniformly random r' in R is at least as likely as the inserted value, so

$$\mathrm{exposure}_{\theta}(s[r]) = -\log_2\frac{|\{r'\in\mathcal{R}:L_{\theta}(s[r'])\leq L_{\theta}(s[r])\}|}{|\mathcal{R}|} = -\bigl(\log_2\mathrm{rank}_{\theta}(s[r])-\log_2|\mathcal{R}|\bigr) = \log_2|\mathcal{R}|-\log_2\mathrm{rank}_{\theta}(s[r]),$$

since rank_θ(s[r]) is exactly the number of r' whose log-perplexity is at most that of s[r]. ∎
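As a concrete check of this identity, using values consistent with the 9-digit secrets in the body of the paper (the numbers below are illustrative arithmetic, not reported experimental results):

```latex
% With |R| = 10^9 (all 9-digit secrets) and the canary ranked first:
\[
  \mathrm{exposure}_\theta(s[r]) = \log_2 10^9 - \log_2 1 \approx 29.9,
\]
% matching the observation that an exposure of about 30 marks the point
% of unique extractability. If the canary were instead only the
% 1024-th most likely candidate:
\[
  \mathrm{exposure}_\theta(s[r]) = \log_2 10^9 - \log_2 2^{10} \approx 19.9.
\]
```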
## Appendix D Expanded Regularization Evaluation

One of the core difficulties in training neural networks is overfitting. Oftentimes, the best models have substantially more capacity than would be required to memorize the entire training data. As such, there has been significant work on various forms of regularization designed to inhibit a model's ability to overfit the specific training data. In this appendix, we evaluate three of the most common methods of regularizing neural networks and show they have little to no effect against memorization on the PTB dataset with the LSTM language model from earlier.

### d.1 Weight Decay

Weight decay is a traditional approach to combat overfitting. During training, an additional penalty is added to the loss of the network that penalizes model complexity.

Recall that our language model has k parameters and is trained on the MB PTB dataset. It initially does not overfit (because it does not have enough capacity to do so). Therefore, when we train our model with weight decay, we do not observe any improvement in validation loss, nor any reduction in memorization.

So, again, we take a slice of the training data and train our model on this smaller dataset. We compare two approaches: (a) use early stopping to halt training when the validation loss begins to increase, and (b) use dropout to prevent overfitting (with early stopping to prevent any remaining overfitting).

In order to directly measure the effect of weight decay on a model that does overfit, we take an initial slice of the PTB dataset and train our language model on it. This time the model does overfit the dataset without regularization. When we add regularization, we see that less overfitting occurs. However, we observe no effect on memorization.

### d.2 Dropout

Dropout is a more recent regularization approach that has been shown to effectively prevent overfitting in neural networks. Again, dropout does not help with the original model on the full dataset (and does not inhibit memorization).

We repeat the experiment above by training on the same slice of the data, this time with dropout. We vary the probability of dropping a neuron, and train ten models at each dropout rate to eliminate the effects of noise.

Across the lower dropout rates, the final test accuracies of the models are comparable (dropout rates that are too high reduce test accuracy on our model). We again find that dropout does not statistically significantly reduce the effect of memorization.

### d.3 Quantization

In our language model, each of the K parameters is represented as a 32-bit float. This puts the information-theoretic capacity of the model at MB, which is larger than the MB size of the compressed PTB dataset. To demonstrate that the model is not storing a complete copy of the training data, we show that the model can be compressed to be much smaller while maintaining the same secret exposure and test accuracy.

To do this, we perform weight quantization: given a trained network with weights W, we force each weight to be one of only 256 distinct values, so that each parameter can be represented in 8 bits. As found in prior work, quantization does not significantly affect validation loss: our quantized model achieves nearly the same loss as the original. Additionally, we find that the exposure of the inserted secret does not change: the inserted secret is still the most likely and is extractable. A minimal sketch of this quantization appears below.
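Assuming uniform 8-bit quantization over the observed weight range (the exact scheme is not spelled out here, so the details are illustrative):

```python
import numpy as np

def quantize_weights(weights, n_levels=256):
    """Snap each float32 weight to one of `n_levels` evenly spaced
    values spanning the observed range, so it fits in 8 bits."""
    lo, hi = float(weights.min()), float(weights.max())
    if hi == lo:
        return weights.copy()  # degenerate case: all weights equal
    step = (hi - lo) / (n_levels - 1)
    codes = np.round((weights - lo) / step).astype(np.uint8)  # 8-bit codes
    return lo + codes * step  # dequantized weights used at inference
```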
## Appendix E Expanded Overfitting Evaluation", null, "Figure 6: Actual number of phrases at least as likely as the inserted phrase versus estimated exposure of the inserted phrase. Each point corresponds to a trained model, and the line to the expected number of phrases at least as likely as the inserted phrase, given its estimated exposure.

As a brief aside, our shortest-path search algorithm gives us a much more efficient method of verifying the claim made earlier in Section 4.2 that estimated exposure closely mirrors the actual exposure. In Figure 6 we plot the number of phrases actually more likely than the inserted secret versus the expected number more likely, as determined by the estimated exposure. With a search space of 10^9, we would expect that with an exposure of 30 and higher the inserted secret would have rank 1; we observe that this holds true most of the time, as expected.

## Appendix F Secret Sanitization

The second class of defenses we consider is to sanitize secrets from the training data. Intuitively, if the defender can identify secrets in the training data, then they can be removed before the model is trained. Such an approach is guaranteed to prevent memorization if the secrets can be identified, since the secrets will then not appear in the training data, and thus will never be observed by the model during training.

The key challenge of this approach is how to identify the secrets in the training data. Several heuristics can be used. For example, if the secrets were known to follow some template (e.g., a regular expression), the defender may be able to remove all substrings matching the template from the training data in a preprocessing step. However, such heuristics cannot be exhaustive, and the defender may never be aware of all potential templates that exist in the training data. When the secrets cannot be captured by the heuristics, the defense will fail.

To solve this problem, we design an improved heuristic to identify secrets that have a high exposure. In the remainder of this section, we first explain the algorithm, then evaluate its effectiveness, and finally discuss its limitations.

#### Identifying and removing secrets with log-perplexity-difference.

Our defense works by identifying likely secrets through their log-perplexity and removing them before the data is ever trained on.

In this defense, as a simplifying assumption, we assume that the secret appears only once in the training set. The defense first partitions the training data into two even partitions.

We train two models, F and G, on the first and second partitions respectively. We enumerate all samples x in the training data, and compute the log-perplexity-difference defined as follows:

$$LD(x)=\max(L_F(x),L_G(x))-\min(L_F(x),L_G(x)).$$

Intuitively, because the secret x appears only once, it will be placed in only one of the partitions (without loss of generality, assume it is placed in the first, so that F trains on it). Then L_F(x) will be small, whereas we would expect L_G(x) to be large, since x does not appear in G's training data. This ensures that LD(x) is large for the secret x.

On the other hand, when a non-secret sample x appears multiple times in the training data set, it (or, if not exactly, a very similar phrase) is likely to be contained in the training data of both F and G. In this case, LD(x) is likely to be small, since both models will have seen it before.

Given this intuition, the defense removes the samples with the largest log-perplexity-difference from the training data set, and trains the model on the rest. A minimal sketch of this scoring step follows.
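Here `lp_f` and `lp_g` are hypothetical stand-ins for the log-perplexity functions of the two half-trained models F and G:

```python
def flag_likely_secrets(lp_f, lp_g, samples, n_remove):
    """Score each training sample by the log-perplexity-difference
    LD(x) = max(L_F(x), L_G(x)) - min(L_F(x), L_G(x)) and return the
    n_remove highest-scoring samples, the candidates to sanitize."""
    scored = []
    for x in samples:
        ld = abs(lp_f(x) - lp_g(x))  # max - min of two values = |difference|
        scored.append((ld, x))
    scored.sort(key=lambda t: t[0], reverse=True)  # largest LD first
    return [x for _, x in scored[:n_remove]]
```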
#### Evaluation.

The critical piece of this defense is whether we are able to consistently identify the secrets contained in the training data. We randomly partition the training data into two sets and repeat the training multiple times. In every case, the inserted secret was at the very top of the training data when sorted by log-perplexity-difference. We remove the samples with the largest log-perplexity-difference from the training data, retrain a model, and evaluate its training loss and memorization. In doing so, the training loss increases only slightly. The model does not memorize the secret at all, since it has been removed from the training data. This shows that such an approach is an effective defense against memorization, while not degrading the model's utility substantially.

#### Limitations.

While this approach is an effective defense, we have not proven any formal guarantees about its effectiveness. Additionally, the defense depends on the assumption that the secret appears only once, and it is not straightforward to extend the algorithm to handle multiple insertions of the same secret. In future work we hope to prove theoretical guarantees about an improved version of this approach.

## Appendix G Expanded Enron Dataset Statistics

| User | Secret Type | Exposure | Times Present | Dataset Size |
| --- | --- | --- | --- | --- |
| A | CCN | 52 | 2 | 3.8MB |
| B | SSN | 13 | 2 | 2.8MB |
| C | SSN | 16 | 1 | 2.3MB |
| C | SSN | 10 | 1 | 2.3MB |
| C | SSN | 22 | 1 | 2.3MB |
| D | SSN | 32 | 3 | 5.7MB |
| F | SSN | 13 | 1 | 2.2MB |
| G | CCN | 36 | 1 | 1.7MB |
| G | CCN | 29 | 1 | 1.7MB |
| G | CCN | 48 | 1 | 1.7MB |" ]
[ null, "https://media.arxiv-vanity.com/render-output/7245733/x3.png", null, "https://media.arxiv-vanity.com/render-output/7245733/x4.png", null, "https://media.arxiv-vanity.com/render-output/7245733/x5.png", null, "https://media.arxiv-vanity.com/render-output/7245733/x6.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8960063,"math_prob":0.9205083,"size":72964,"snap":"2023-40-2023-50","text_gpt3_token_len":16410,"char_repetition_ratio":0.15933388,"word_repetition_ratio":0.015286513,"special_character_ratio":0.22683789,"punctuation_ratio":0.15015101,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9616649,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T13:47:09Z\",\"WARC-Record-ID\":\"<urn:uuid:2c434542-55c6-4b49-a037-ff7da295b5f7>\",\"Content-Length\":\"735805\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a43fbe27-2b5a-4ac3-a033-03f84b0085b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:762ce479-9878-4922-8316-8ba5b104da62>\",\"WARC-IP-Address\":\"172.67.158.169\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/1802.08232/\",\"WARC-Payload-Digest\":\"sha1:54VD57WZUCN5QWOQ2TXX5O3HEYTFLNSX\",\"WARC-Block-Digest\":\"sha1:BLF66SU2N66V6MMVSJKBWPLWRSF7VFYN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506481.17_warc_CC-MAIN-20230923130827-20230923160827-00080.warc.gz\"}"}
http://measurement.webmasters.sk/
[ "# Volume Measurement Units Conversion\n\n## Length\n\n[ Convert length measures ]\n• Metric system (meter)\n• American system (mile, furlong, yard, foot, inch)\n• Astronomical system (parsec, light year, astronoical unit)\n• Nautical system (nautical mile and fathom)\n\n## Area\n\n[ Convert area measures ]\n• Metric system (square meter, hectare, are)\n• American system (mile square, acre, square rod, square yard, square foot, square inch)\n\n## Volume\n\n[ Convert volume measures ]\n• Metric system (cubic meter, liter)\n• American system (acre foot, cubic yard, cubic foot, cubic inch)\n\n## Weight\n\n[ Convert weight measures ]\n• Metric system (ton, kilogram, gram)\n• Avoirdupois system (ton, hunderweight, stone, pound, ounce, dram, grain)\n• Troy system (troy pound, troy ounce, pennyweight)\n\n## Temperature\n\n[ Convert temperature measures ]\n• SI system (degrees Celsius, kelvin)\n• American system (degrees farenheit)\n\n## Pressure\n\n[ Convert pressure measures ]\n• SI system (Pascal, bar, atmosphere, millimeters mercury)\n• American system (pound per square foot, inches mercury)\n\n## Energy\n\n[ Convert energy measures ]\n• SI system (watthours, joule)\n• American system (therm, british thermal unit)\n• Other Units (calorie)\n\n## Velocity\n\n[ Convert velocity measures ]\n• SI system (meters per second)\n• American system (feet per hour)\n• Nautical system (knots)\n• Other Unit (mach)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7313122,"math_prob":0.9522004,"size":1301,"snap":"2019-51-2020-05","text_gpt3_token_len":335,"char_repetition_ratio":0.20971473,"word_repetition_ratio":0.0,"special_character_ratio":0.25518832,"punctuation_ratio":0.17258883,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9785154,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T12:24:18Z\",\"WARC-Record-ID\":\"<urn:uuid:d013e4a6-d625-4554-9ae5-592027334924>\",\"Content-Length\":\"7957\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5814306b-129c-49fd-85a9-151b5706c67b>\",\"WARC-Concurrent-To\":\"<urn:uuid:5abfe74a-c6fb-4e6c-a481-eae34527e202>\",\"WARC-IP-Address\":\"217.67.30.8\",\"WARC-Target-URI\":\"http://measurement.webmasters.sk/\",\"WARC-Payload-Digest\":\"sha1:ITZ4LTWSERELSXL27PRBOO6TIK5MD4LO\",\"WARC-Block-Digest\":\"sha1:QTOQ5QBP7MJVR2TO6EQKERFUTBD7WODO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541157498.50_warc_CC-MAIN-20191214122253-20191214150253-00057.warc.gz\"}"}
https://electronics.stackexchange.com/questions/460826/confused-about-thevenin-s-theorem
[ "I have a hard time trying to understand why Thevenin’s Theorem is the way it is.\n\nAs per the diagram below, I understand why we take our “Thevenin resistance” that way. But as far as the voltage is concerned, I am completely lost. Why do we take the “Thevenin voltage” to be the voltage drop across the points A and B? By doing that, won’t the voltage drop across AB in the equivalent circuit be less than the original value, since part of the voltage must have been dropped across the “Thevenin resistance”?", null, "• what is the voltage drop across a resistor in an open circuit? – jsotola Sep 29 '19 at 20:52" ]
[ null, "https://i.stack.imgur.com/QdaHn.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.96888775,"math_prob":0.95695615,"size":565,"snap":"2020-45-2020-50","text_gpt3_token_len":132,"char_repetition_ratio":0.15686275,"word_repetition_ratio":0.0,"special_character_ratio":0.21592921,"punctuation_ratio":0.094017096,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9687644,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-21T02:13:59Z\",\"WARC-Record-ID\":\"<urn:uuid:e90511c4-4bea-4148-8195-82b25c9966a8>\",\"Content-Length\":\"154818\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a21b8d5-88fd-437e-b45d-3a5c0323f618>\",\"WARC-Concurrent-To\":\"<urn:uuid:6184a66d-1e75-48fa-b593-22b167357ca8>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://electronics.stackexchange.com/questions/460826/confused-about-thevenin-s-theorem\",\"WARC-Payload-Digest\":\"sha1:5NL6SKZ3UGCNMEG3T2C3SN27OVF4OANO\",\"WARC-Block-Digest\":\"sha1:QG55QLKSBNLIBKWZU2CWWB3XTKWZBO5Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107874637.23_warc_CC-MAIN-20201021010156-20201021040156-00397.warc.gz\"}"}
https://www.hackmath.net/en/math-problem/49201
[ "# Seedlings\n\nThe meadow was divided into 30 equally large parts and then it was calculated how many seedlings are in each part. It was found that there were no seedlings in six parts, five parts contained one seedling, twelve parts two seedlings and seven parts three seedlings. Assess whether the seedlings are randomly distributed in the meadow.\n\nResult\n\nx=\n\n### Step-by-step explanation:", null, "Did you find an error or inaccuracy? Feel free to write us. Thank you!", null, "Tips to related online calculators\nLooking for help with calculating arithmetic mean?\nLooking for a statistical calculator?\n\n#### You need to know the following knowledge to solve this word math problem:\n\nWe encourage you to watch this tutorial video on this math problem:\n\n## Related math problems and questions:\n\n• Poisson distribution - daisies", null, "The meadow behind FLD was divided into 100 equally large parts. Subsequently, it was found that there were no daisies in ten of these parts. Estimate the total number of daisies in the meadow. Assume that daisies are randomly distributed in the meadow.\n• Lesson exercising", null, "The lesson of physical education, pupils are first divided into three groups so that each has the same number. The they redistributed, but into six groups. And again, it was the same number of children in each group. Finally they divided into nine equal g\n• The average", null, "The average of one set of 4 numbers is 35. The average of another set of number is 20. The average of the numbers in the two sets is 30. How many numbers are there in the other set?\n• Certificate", null, "There is 31 students in a class. From mathematics was'nt worse mark than 2. Average mark in mathematics was 2. How many students have mark 1 and how many mark 2?", null, "On the meadow grazing horses, cows, and sheep, together with less than 200. If cows were 45 times more, horses 60 times more, and sheep 35 times more than there are now, their numbers would equally. How many horses, cows, and sheep are on the meadow toget\n• Carpenters", null, "Carpenters 1 and 2 spend ten days and five days respectively to make one table. If 50 tables were made by the first carpenter and 30 tables were made by the second carpenter, What is the average time spent on the products?\n• Controller", null, "Output Controller of the company in the control of 50 randomly selected products found that 37 of them had no defects, 8 has only one flaw, three had two defects, and two products had three defects. Determine the standard deviation and coefficient of vari\n• Range, mean, median", null, "Ages of 7 employees in an office are given below 32, 42, 30, 32, 33, 23, 32 Find a) Range b) Mean c) Median\n• Salat", null, "Grandmother planted salad. In each row, he planted 13 seedlings. After a morning frost, many seedlings died. In the first row, five seedlings died. In the second row, two seedlings died more than 1st row. In the third row, three seedlings died less than 1\n• Final exam", null, "There are 5 learners in a class that has written a final exam Aleta scored 55% vera scored 36% and Sibusiso scored 88% if Thoko scored 71% and the class average was 63%. What was Davids score as a percentage?\n• Target", null, "Peter, Martin and Jirka were fire in a special target, which had only three fields with values of 12, 18 and 30 points. 
## Related math problems and questions:

• Poisson distribution - daisies", null, "The meadow behind FLD was divided into 100 equally large parts. Subsequently, it was found that there were no daisies in ten of these parts. Estimate the total number of daisies in the meadow. Assume that daisies are randomly distributed in the meadow.
• Lesson exercising", null, "In a physical education lesson, pupils are first divided into three groups so that each has the same number. Then they are redistributed into six groups, and again there is the same number of children in each group. Finally they divide into nine equal g
• The average", null, "The average of one set of 4 numbers is 35. The average of another set of numbers is 20. The average of the numbers in the two sets is 30. How many numbers are there in the other set?
• Certificate", null, "There are 31 students in a class. In mathematics, no mark was worse than 2, and the average mark in mathematics was 2. How many students have mark 1 and how many mark 2?", null, "Horses, cows, and sheep graze on the meadow, fewer than 200 together. If there were 45 times more cows, 60 times more horses, and 35 times more sheep than there are now, their numbers would be equal. How many horses, cows, and sheep are on the meadow toget
• Carpenters", null, "Carpenters 1 and 2 spend ten days and five days respectively to make one table. If 50 tables were made by the first carpenter and 30 tables were made by the second carpenter, what is the average time spent on the products?
• Controller", null, "The company's output controller, checking 50 randomly selected products, found that 37 of them had no defects, 8 had only one flaw, three had two defects, and two products had three defects. Determine the standard deviation and coefficient of vari
• Range, mean, median", null, "Ages of 7 employees in an office are given below: 32, 42, 30, 32, 33, 23, 32. Find a) Range b) Mean c) Median
• Salat", null, "Grandmother planted salad. In each row, she planted 13 seedlings. After a morning frost, many seedlings died. In the first row, five seedlings died. In the second row, two more seedlings died than in the first row. In the third row, three fewer seedlings died than in the 1
• Final exam", null, "There are 5 learners in a class that wrote a final exam. Aleta scored 55%, Vera scored 36%, and Sibusiso scored 88%. If Thoko scored 71% and the class average was 63%, what was David's score as a percentage?
• Target", null, "Peter, Martin and Jirka were firing at a special target, which had only three fields, with values of 12, 18 and 30 points. All boys fired the same number of arrows, all the arrows hit the target, and the results of every two boys differed in on
• The mowers", null, "The mowers were to mow two meadows, one twice as big as the other. In the first half of the day, they divided into two equal groups. One continued to mow the larger meadow and cut it all by the end of the day. The second group mowed the smaller meadow but did
• Four families", null, "Four families were on a joint trip. In the first family, there were three siblings: Alica, Betka and Cyril. In the second family, there were four siblings: David, Erik, Filip and Gabika. In the third family, there were two siblings: Hugo and
• Jolly gobs", null, "Each package of jolly gobs has 72 gobs. If one fourth of the gobs are red and the rest are blue, into how many parts was the group divided? How many parts are red?
• Norm", null, "Three workers planted 3555 tomato seedlings in one day. The first worked at the standard norm, the second planted 120 seedlings more, and the third 135 seedlings more than the first worker. How many seedlings were the standard norm?
• Variance and average", null, "From 40 values, the average mx = 7.5 and variance sx = 2.25 were calculated. A later check found that the two values x41 = 3.8 and x42 = 7 were missing. Correct the above characteristics (mx and sx).
• Group", null, "A group of kids wanted to ride. When the children were divided into groups of 3 children, one remained. When divided into groups of 4 children, 1 remained. When divided into groups of 6 children, 1 remained. After dividing into groups of 5 children, no one lef" ]
[ null, "https://www.hackmath.net/img/1/kvetinky_sestricky1.jpg", null, "https://www.hackmath.net/hashover/images/avatar.png", null, "https://www.hackmath.net/thumb/51/t_49251.jpg", null, "https://www.hackmath.net/thumb/86/t_1986.jpg", null, "https://www.hackmath.net/thumb/61/t_7461.jpg", null, "https://www.hackmath.net/thumb/51/t_951.jpg", null, "https://www.hackmath.net/thumb/19/t_2219.jpg", null, "https://www.hackmath.net/thumb/89/t_8089.jpg", null, "https://www.hackmath.net/thumb/66/t_2966.jpg", null, "https://www.hackmath.net/thumb/21/t_53921.jpg", null, "https://www.hackmath.net/thumb/37/t_2237.jpg", null, "https://www.hackmath.net/thumb/18/t_7518.jpg", null, "https://www.hackmath.net/thumb/28/t_1328.jpg", null, "https://www.hackmath.net/thumb/66/t_8366.jpg", null, "https://www.hackmath.net/thumb/69/t_4269.jpg", null, "https://www.hackmath.net/thumb/61/t_51761.jpg", null, "https://www.hackmath.net/thumb/22/t_4022.jpg", null, "https://www.hackmath.net/thumb/0/t_3700.jpg", null, "https://www.hackmath.net/thumb/84/t_1184.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9760746,"math_prob":0.90337193,"size":4812,"snap":"2021-31-2021-39","text_gpt3_token_len":1178,"char_repetition_ratio":0.13082363,"word_repetition_ratio":0.015981736,"special_character_ratio":0.23524521,"punctuation_ratio":0.11577869,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98932505,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,3,null,null,null,4,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-28T11:09:40Z\",\"WARC-Record-ID\":\"<urn:uuid:6432558a-dec2-4eb3-9eda-92d53e3de7f5>\",\"Content-Length\":\"52739\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:69af7e6b-17d4-41b4-986f-58488a960aa5>\",\"WARC-Concurrent-To\":\"<urn:uuid:9a6ca13a-66af-486a-9a7f-35cef73a0319>\",\"WARC-IP-Address\":\"104.21.55.14\",\"WARC-Target-URI\":\"https://www.hackmath.net/en/math-problem/49201\",\"WARC-Payload-Digest\":\"sha1:SWDQIGFLZTRUIIDZK3HZJOIIL6VKTTFK\",\"WARC-Block-Digest\":\"sha1:OJ3ID7K7627CSSJKVFL5R46F3XDG5AL3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780060677.55_warc_CC-MAIN-20210928092646-20210928122646-00639.warc.gz\"}"}
http://deleeuwpdx.net/pubfolders/zero/zero.html
[ "Note: This is a working paper which will be expanded/updated frequently. All suggestions for improvement are welcome. The directory deleeuwpdx.net/pubfolders/zero has a pdf version, the bib file, and the complete Rmd file.\n\n# 1 Introduction\n\nThe multidimensional scaling stress loss function is defined as $\\sigma(X)\\mathop{=}\\limits^\\Delta\\mathop{\\sum\\sum}\\limits_{1\\leq i<j\\leq n}w_{ij}(\\delta_{ij}-d_{ij}(X))^2,$ where $$W$$ and $$\\Delta$$ are non-negative symmetric and hollow matrixes of weights and dissimilarities, where $$X$$ is the $$n\\times p$$ configuration, and where $$d_{ij}(X)$$ is the Euclidean distance between rows $$i$$ and $$j$$ of $$X$$. Thus $d_{ij}^2(X)=(e_i-e_j)'XX'(e_i-e_j)=\\mathbf{tr}\\ X'A_{ij}X,$ where $$e_i$$ and $$e_j$$ are unit vectors (columns of the identity matrix), and $A_{ij}\\mathop{=}\\limits^\\Delta(e_i-e_j)(e_i-e_j)'.$ Multidimensional scaling is minimization of stress over configurations.\n\n# 2 Differentiation\n\nThe directional derivative of $$\\sigma$$ at $$X$$ in direction $$Y$$ is defined as $D\\sigma(X,Y)\\mathop{=}\\limits^\\Delta\\lim_{\\epsilon\\downarrow 0}\\frac{\\sigma(X+\\epsilon Y)-\\sigma(X)}{\\epsilon}.$ Stress is not differentiable at configurations for which some $$d_{ij}(X)$$ are zero, but it has a finite directional derivatives everywhere. We will show this by actually giving the formula, which was first given by De Leeuw (1984). See also De Leeuw, Groenen, and Mair (2016). The directional derivative is interesting because clearly a necessary condition for $$\\sigma$$ to have a local minimum at $$X$$ is that $$D\\sigma(X,Y)\\geq 0$$ for all $$Y$$.\n\nIn order to derive a convenient expression for $$D\\sigma(X,Y)$$ we give some definitions. First some indicators for zero distances. \\begin{align*} \\alpha_{ij}(X)&\\mathop{=}\\limits^\\Delta\\begin{cases}1&\\text{ if }d_{ij}(X)=0,\\\\0&\\text{ if }d_{ij}(X)>0.\\end{cases},\\\\ \\beta_{ij}(X)&\\mathop{=}\\limits^\\Delta\\begin{cases}0&\\text{ if }d_{ij}(X)=0,\\\\\\frac{1}{d_{ij}(X)}&\\text{ if }d_{ij}(X)>0.\\end{cases} \\end{align*} Then some matrices. \\begin{align*} V&\\mathop{=}\\limits^\\Delta\\mathop{\\sum\\sum}\\limits_{1\\leq i<j\\leq n}w_{ij}A_{ij},\\\\ B(X)&\\mathop{=}\\limits^\\Delta\\mathop{\\sum\\sum}\\limits_{1\\leq i<j\\leq n}w_{ij}\\delta_{ij}\\beta_{ij}(X)A_{ij} \\end{align*}\n\nAnd finally $\\theta(X,Y)\\mathop{=}\\limits^\\Delta\\mathop{\\sum\\sum}\\limits_{1\\leq i<j\\leq n}w_{ij}\\delta_{ij}\\alpha_{ij}(X)d_{ij}(Y).$\n\nTheorem 1: $$D\\sigma(X,Y)=\\mathbf{tr}\\ Y'(V-B(X))X-\\theta(X,Y)$$\n\nProof: We have $d_{ij}(X+\\epsilon Y)=\\begin{cases}\\epsilon d_{ij}(Y)&\\text{ if }d_{ij}(X)=0,\\\\ d_{ij}(X)+\\epsilon\\frac{1}{d_{ij}(X)}\\mathbf{tr}\\ Y'A_{ij}X+o(\\epsilon)&\\text{ if }d_{ij(})X)>0. \\end{cases}$ The rest is simple computation. QED\n\nTheorem 2: If $$\\sigma$$ has a local minimum at $$X$$ then $$B(X)X=VX$$ and $$\\theta(X,Y)=0$$ for all $$Y$$.\n\nProof: Suppose $$(V-B(X))X\\not= 0$$. Then we can find $$Y$$ such that $$\\mathbf{tr}\\ Y'(V-B(X))X<0$$, and because $$\\theta(X,Y)\\geq 0$$ we have $$D\\sigma(X,Y)<0$$. Suppose $$\\theta(X,Y)>0$$ for some $$Y$$. If $$\\mathbf{tr}\\ Y'(V-B(X))X\\leq 0$$ we have $$D\\sigma(X,Y)<0$$, and if $$\\mathbf{tr}\\ Y'(V-B(X))X>0$$ we replace $$Y$$ by $$-Y$$, and again $$D\\sigma(X,Y)<0$$. 
Corollary 1: If $$\sigma$$ has a local minimum at $$X$$ then $$d_{ij}(X)>0$$ for all $$(i,j)$$ with $$w_{ij}\delta_{ij}>0$$.

Proof: If there is an $$(i,j)$$ such that $$w_{ij}\delta_{ij}\alpha_{ij}(X)>0$$ then there is a $$Y$$ such that $$\theta(X,Y)>0$$. QED

Corollary 2: If $$\sigma$$ has a local minimum at $$X$$ then $$w_{ij}\delta_{ij}=0$$ for all $$(i,j)$$ with $$d_{ij}(X)=0$$.

Proof: This is just another way of saying that $$\theta(X,Y)=0$$ for all $$Y$$. QED

Corollary 3: If $$w_{ij}\delta_{ij}>0$$ for all $$i\not= j$$ then $$\sigma$$ is differentiable at a local minimum.

# 3 Final Result

Corollary 3 in the previous section allows for the possibility that $$\sigma$$ is not differentiable at local minima if $$w_{ij}\delta_{ij}=0$$ for some index pairs $$(i,j)$$. This is important, for example, in unfolding, where the indices $$1,2,\cdots, n$$ are partitioned into two disjoint subsets and all within-subset weights are zero. It turns out that a fairly trivial manipulation allows us to find the appropriate generalization of our result.

Theorem 3: $$\sigma$$ is differentiable at local minima.

Proof: An essentially equivalent way to formulate the MDS problem is to minimize $\sigma(X)=\sum_{k=1}^K w_k(\delta_k-d_k(X))^2,$ where $$d_k(X)=\sqrt{\mathbf{tr}\ X'A_{ij}X}$$ for some pair $$1\leq i<j\leq n$$. Thus we fit distances to some, but not necessarily all, of the dissimilarities. In this formulation we can clearly assume without loss of generality that $$w_k>0$$ for all $$1\leq k\leq K$$. Suppose $$\mathcal{K}_0$$ and $$\mathcal{K}_1$$ are the subsets of indices for which, respectively, $$\delta_k=0$$ and $$\delta_k>0$$. Then $\sigma(X)=\sum_{k\in\mathcal{K}_1} w_k(\delta_k-d_k(X))^2+\sum_{k\in\mathcal{K}_0} w_k^{\ }d_k^2(X).$ By the same reasoning as before, at a local minimum $$X$$ we have $$d_k(X)>0$$ for all $$k\in\mathcal{K}_1$$, which implies that $$\sigma$$ is differentiable at that local minimum. QED

Note that $$d_k(X)>0$$ for all $$k\in\mathcal{K}_1$$ actually implies more: stress is infinitely many times differentiable in an open neighborhood of each local minimum." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6904719,"math_prob":1.0000068,"size":5800,"snap":"2020-10-2020-16","text_gpt3_token_len":1961,"char_repetition_ratio":0.12560387,"word_repetition_ratio":0.02945508,"special_character_ratio":0.34655172,"punctuation_ratio":0.110176615,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000099,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-22T19:14:45Z\",\"WARC-Record-ID\":\"<urn:uuid:a69c37e5-191b-494f-8a06-5b443570e49e>\",\"Content-Length\":\"748693\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:de33626a-63f8-4a3e-bf33-f4f6a05a34f7>\",\"WARC-Concurrent-To\":\"<urn:uuid:6c2a73e6-d2ed-420c-bf01-7126c682ee39>\",\"WARC-IP-Address\":\"63.224.248.98\",\"WARC-Target-URI\":\"http://deleeuwpdx.net/pubfolders/zero/zero.html\",\"WARC-Payload-Digest\":\"sha1:7Q5HHPN5KPRFUEPVO6GWHGJ2NWJQOVB3\",\"WARC-Block-Digest\":\"sha1:M7YHZRDOYYIQPMMFARCULA3B6MFMVYIJ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145713.39_warc_CC-MAIN-20200222180557-20200222210557-00025.warc.gz\"}"}
https://www.colorhexa.com/020669
[ "# #020669 Color Information\n\nIn a RGB color space, hex #020669 is composed of 0.8% red, 2.4% green and 41.2% blue. Whereas in a CMYK color space, it is composed of 98.1% cyan, 94.3% magenta, 0% yellow and 58.8% black. It has a hue angle of 237.7 degrees, a saturation of 96.3% and a lightness of 21%. #020669 color hex could be obtained by blending #040cd2 with #000000. Closest websafe color is: #000066.\n\n• R 1\n• G 2\n• B 41\nRGB color chart\n• C 98\n• M 94\n• Y 0\n• K 59\nCMYK color chart\n\n#020669 color description : Very dark blue.\n\n# #020669 Color Conversion\n\nThe hexadecimal color #020669 has RGB values of R:2, G:6, B:105 and CMYK values of C:0.98, M:0.94, Y:0, K:0.59. Its decimal value is 132713.\n\nHex triplet RGB Decimal 020669 `#020669` 2, 6, 105 `rgb(2,6,105)` 0.8, 2.4, 41.2 `rgb(0.8%,2.4%,41.2%)` 98, 94, 0, 59 237.7°, 96.3, 21 `hsl(237.7,96.3%,21%)` 237.7°, 98.1, 41.2 000066 `#000066`\nCIE-LAB 10.281, 38.134, -54.292 2.639, 1.163, 13.449 0.153, 0.067, 1.163 10.281, 66.347, 305.084 10.281, -3.091, -39.446 10.784, 24.819, -66.397 00000010, 00000110, 01101001\n\n# Color Schemes with #020669\n\n• #020669\n``#020669` `rgb(2,6,105)``\n• #696502\n``#696502` `rgb(105,101,2)``\nComplementary Color\n• #023a69\n``#023a69` `rgb(2,58,105)``\n• #020669\n``#020669` `rgb(2,6,105)``\n• #310269\n``#310269` `rgb(49,2,105)``\nAnalogous Color\n• #3a6902\n``#3a6902` `rgb(58,105,2)``\n• #020669\n``#020669` `rgb(2,6,105)``\n• #693102\n``#693102` `rgb(105,49,2)``\nSplit Complementary Color\n• #066902\n``#066902` `rgb(6,105,2)``\n• #020669\n``#020669` `rgb(2,6,105)``\n• #690206\n``#690206` `rgb(105,2,6)``\n• #026965\n``#026965` `rgb(2,105,101)``\n• #020669\n``#020669` `rgb(2,6,105)``\n• #690206\n``#690206` `rgb(105,2,6)``\n• #696502\n``#696502` `rgb(105,101,2)``\n• #01021e\n``#01021e` `rgb(1,2,30)``\n• #010337\n``#010337` `rgb(1,3,55)``\n• #020550\n``#020550` `rgb(2,5,80)``\n• #020669\n``#020669` `rgb(2,6,105)``\n• #020782\n``#020782` `rgb(2,7,130)``\n• #03099b\n``#03099b` `rgb(3,9,155)``\n• #030ab4\n``#030ab4` `rgb(3,10,180)``\nMonochromatic Color\n\n# Alternatives to #020669\n\nBelow, you can see some colors close to #020669. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #022069\n``#022069` `rgb(2,32,105)``\n• #021769\n``#021769` `rgb(2,23,105)``\n• #020f69\n``#020f69` `rgb(2,15,105)``\n• #020669\n``#020669` `rgb(2,6,105)``\n• #070269\n``#070269` `rgb(7,2,105)``\n• #0f0269\n``#0f0269` `rgb(15,2,105)``\n• #180269\n``#180269` `rgb(24,2,105)``\nSimilar Colors\n\n# #020669 Preview\n\nThis text has a font color of #020669.\n\n``<span style=\"color:#020669;\">Text here</span>``\n#020669 background color\n\nThis paragraph has a background color of #020669.\n\n``<p style=\"background-color:#020669;\">Content here</p>``\n#020669 border color\n\nThis element has a border color of #020669.\n\n``<div style=\"border:1px solid #020669;\">Content here</div>``\nCSS codes\n``.text {color:#020669;}``\n``.background {background-color:#020669;}``\n``.border {border:1px solid #020669;}``\n\n# Shades and Tints of #020669\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000109 is the darkest color, while #f5f5ff is the lightest one.

• #000109
``#000109` `rgb(0,1,9)``
• #01021c
``#01021c` `rgb(1,2,28)``
• #01032f
``#01032f` `rgb(1,3,47)``
• #010443
``#010443` `rgb(1,4,67)``
• #020556
``#020556` `rgb(2,5,86)``
• #020669
``#020669` `rgb(2,6,105)``
• #02077c
``#02077c` `rgb(2,7,124)``
• #03088f
``#03088f` `rgb(3,8,143)``
• #0309a3
``#0309a3` `rgb(3,9,163)``
• #030ab6
``#030ab6` `rgb(3,10,182)``
• #040bc9
``#040bc9` `rgb(4,11,201)``
• #040ddc
``#040ddc` `rgb(4,13,220)``
• #050ef0
``#050ef0` `rgb(5,14,240)``
• #0e17fa
``#0e17fa` `rgb(14,23,250)``
• #2129fb
``#2129fb` `rgb(33,41,251)``
• #343cfb
``#343cfb` `rgb(52,60,251)``
• #474efc
``#474efc` `rgb(71,78,252)``
• #5b61fc
``#5b61fc` `rgb(91,97,252)``
• #6e73fc
``#6e73fc` `rgb(110,115,252)``
• #8186fd
``#8186fd` `rgb(129,134,253)``
• #9498fd
``#9498fd` `rgb(148,152,253)``
• #a8abfd
``#a8abfd` `rgb(168,171,253)``
• #bbbdfe
``#bbbdfe` `rgb(187,189,254)``
• #ced0fe
``#ced0fe` `rgb(206,208,254)``
• #e1e2fe
``#e1e2fe` `rgb(225,226,254)``
• #f5f5ff
``#f5f5ff` `rgb(245,245,255)``
Tint Color Variation

# Tones of #020669

A tone is produced by adding gray to any pure hue. In this case, #333438 is the least saturated color, while #020669 is the most saturated one.

• #333438
``#333438` `rgb(51,52,56)``
• #2f303c
``#2f303c` `rgb(47,48,60)``
• #2b2c40
``#2b2c40` `rgb(43,44,64)``
• #272844
``#272844` `rgb(39,40,68)``
• #232448
``#232448` `rgb(35,36,72)``
• #1f214c
``#1f214c` `rgb(31,33,76)``
• #1b1d50
``#1b1d50` `rgb(27,29,80)``
• #171954
``#171954` `rgb(23,25,84)``
• #121559
``#121559` `rgb(18,21,89)``
• #0e115d
``#0e115d` `rgb(14,17,93)``
• #0a0e61
``#0a0e61` `rgb(10,14,97)``
• #060a65
``#060a65` `rgb(6,10,101)``
• #020669
``#020669` `rgb(2,6,105)``
Tone Color Variation

# Color Blindness Simulator

Below, you can see how #020669 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.

Monochromacy
• Achromatopsia 0.005% of the population
• Atypical Achromatopsia 0.001% of the population
Dichromacy
• Protanopia 1% of men
• Deuteranopia 1% of men
• Tritanopia 0.001% of the population
Trichromacy
• Protanomaly 1% of men, 0.01% of women
• Deuteranomaly 6% of men, 0.4% of women
• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5301683,"math_prob":0.64873177,"size":3634,"snap":"2020-45-2020-50","text_gpt3_token_len":1575,"char_repetition_ratio":0.13140497,"word_repetition_ratio":0.011090573,"special_character_ratio":0.5729224,"punctuation_ratio":0.23608018,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9946972,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-04T02:23:24Z\",\"WARC-Record-ID\":\"<urn:uuid:48a9229e-f256-497a-9add-519d289b95e9>\",\"Content-Length\":\"36164\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c42a8cec-fb4a-42ea-858e-e3f214d66b9c>\",\"WARC-Concurrent-To\":\"<urn:uuid:7a330c4e-00b7-498c-b89d-12cfcade46a3>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/020669\",\"WARC-Payload-Digest\":\"sha1:IMSL5SCUA4DST6GXCIJWAWHQDAM2KEDO\",\"WARC-Block-Digest\":\"sha1:MNGZVYQDA3IQ3MYMODWFHZ7OYU2RSHOZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141733120.84_warc_CC-MAIN-20201204010410-20201204040410-00245.warc.gz\"}"}
http://list.seqfan.eu/pipermail/seqfan/2020-June/020782.html
[ "# [seqfan] Re: A107008 = primes of the form 24k+1?\n\nPeter Munn techsubs at pearceneptune.co.uk\nMon Jun 8 20:30:14 CEST 2020\n\n```Following a private response on this subject, I should emphasize that the\ntricky part seems to be to show that _all_ primes congruent to 1 mod 24\nare in A107008. And after a little more spreadsheet work, it is starting\nto look particularly interesting...\n\nDoes anyone want to check to what extent the following hypothesis is true?:\n\nWhen k can be written as the sum of a square and a generalized pentagonal\nnumber in exactly one way, the resulting numbers of the form 24k+1 might\nbe exactly the prime numbers of the form 24k+1 plus the squares of prime\nnumbers congruent to 13, 17, 19 or 23 mod 24.\n\nPeter\n\nOn Mon, June 8, 2020 5:01 pm, Peter Munn wrote:\n> Hi seqfans.\n> In A107008 (Primes of the form x^2+24*y^2), NJAS comments \"Presumably\nthis\n> is the same as primes congruent to 1 mod 24.\"\n> Can we come up with something to decide this?\n> It would help to establish when 24k+1 can be written as x^2+24*y^2. I\nreckon this happens when the k in 24k+1 can be written as the sum of a\nsquare, j, and a generalized pentagonal number, i, because setting y =\nsqrt(j), x = sqrt(24i+1) can be shown to satisfy the equation. (See Zak\nSeidov's 2008 comment in http://oeis.org/A001318, \"Generalized\npentagonal\n> numbers\".) I believe the converse is true, also.\n> OEIS does not yet have the sequence \"Numbers that can't be written as\nthe\n> sum of a square and a generalized pentagonal number\" , but the\nsequence, S, starts 20, 29, 33, 34, 45, 46, 53, ... , and I reckon it\nhas\n> positive asymptotic density.\n> So the question becomes: if k is a term of S, why should 24k+1 be\ncomposite, at least up to the limit of Vladimir Orlovsky's check in\nA107008? From the first 3 terms we get: 20*24 + 1 = 481 = 13*37; 29*24 +\n1\n> = 697 = 17*41; 33*24 + 1 = 793 = 13*61.\n> Going through more terms, I saw a pattern emerge, prompting me to ask:\nis\n> this particular subset of \"24k+1\" numbers the same as \"nonsquare numbers\nof the form (24i + m) * (24j + m), 0 <= i < j, m in {13, 17, 19, 23}\"?\nThis would be interesting anyway, and could be a clue.\n> However, I'm not sure I'm close to an answer, and there might be a much\neasier route: does anyone have better ideas? Or know the answer already?\nBest regards,\n> Peter\n> I also tried looking for the number of ways positive integers _can_\nbe\n> so written, using \"Search: seq:2,2,1,1,2,2,1,1,2,1,2,1,1,1,1,4\" but it\ndraws a blank.\n> --\n> Seqfan Mailing list - http://list.seqfan.eu/\n\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89116645,"math_prob":0.94343114,"size":2581,"snap":"2020-34-2020-40","text_gpt3_token_len":818,"char_repetition_ratio":0.09623593,"word_repetition_ratio":0.045738045,"special_character_ratio":0.34017822,"punctuation_ratio":0.17056856,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9844703,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-05T19:49:47Z\",\"WARC-Record-ID\":\"<urn:uuid:a98d85d9-099c-4a49-a711-17468ddaf529>\",\"Content-Length\":\"6074\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:689838fe-c5d4-4a16-a3f5-01b7c338a36d>\",\"WARC-Concurrent-To\":\"<urn:uuid:607c60b8-dbd8-44d3-9b52-c0d379cb8b17>\",\"WARC-IP-Address\":\"92.243.17.179\",\"WARC-Target-URI\":\"http://list.seqfan.eu/pipermail/seqfan/2020-June/020782.html\",\"WARC-Payload-Digest\":\"sha1:WVJYM6E7EINRF3N2TED2JW2TQ666FKT3\",\"WARC-Block-Digest\":\"sha1:PQANGSCI4NK5NAEZZC75O57JDSLDEWEJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735964.82_warc_CC-MAIN-20200805183003-20200805213003-00311.warc.gz\"}"}
https://answers.everydaycalculation.com/subtract-fractions/10-15-minus-21-18
[ "Solutions by everydaycalculation.com\n\n## Subtract 21/18 from 10/15\n\n1st number: 10/15, 2nd number: 1 3/18\n\n10/15 - 21/18 is -1/2.\n\n#### Steps for subtracting fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 15 and 18 is 90\n2. For the 1st fraction, since 15 × 6 = 90,\n10/15 = 10 × 6/15 × 6 = 60/90\n3. Likewise, for the 2nd fraction, since 18 × 5 = 90,\n21/18 = 21 × 5/18 × 5 = 105/90\n4. Subtract the two fractions:\n60/90 - 105/90 = 60 - 105/90 = -45/90\n5. After reducing the fraction, the answer is -1/2\n\n-" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.62264144,"math_prob":0.998031,"size":372,"snap":"2019-26-2019-30","text_gpt3_token_len":162,"char_repetition_ratio":0.27173913,"word_repetition_ratio":0.0,"special_character_ratio":0.5672043,"punctuation_ratio":0.09677419,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999501,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-20T14:17:05Z\",\"WARC-Record-ID\":\"<urn:uuid:38b7a7c4-a53d-4a88-b8e3-7d827a664202>\",\"Content-Length\":\"8423\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:62089bcc-ca10-44eb-8b51-9880b79c43a8>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c518818-fa34-40b8-af3b-be5e9de9df6e>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/subtract-fractions/10-15-minus-21-18\",\"WARC-Payload-Digest\":\"sha1:BUK4OKIE62K6W5PO5MSOOL2V4XESBZ4L\",\"WARC-Block-Digest\":\"sha1:IQ2HTOUH2DZLTECR7I6RZF5WCCRLXJRZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526517.67_warc_CC-MAIN-20190720132039-20190720154039-00211.warc.gz\"}"}
https://thermodynamics-engineer.com/isentropic-relations/
[ "# Isentropic Relations\n\nAn isentropic process is a process during which the entropy of a system remains constant:\n\nΔs=s1-s2=0\n\nThis can also be described in the T-s diagram:", null, "According to the definition of entropy which is expressed as:", null, "for an isentropic process (dS=0), we obtain:\n\nδQrev=0\n\nWe can now conclude from the above equation that no reversible heat transfer with surrounding occurs during an isentropic process. So if a process is carried out in an isentropic manner, the following two conditions must be satisfied:\n\n1. reversible\n\nSince a reversible adiabatic process is necessary for an isentropic process, let’s see what kind of relationship between properties of state will be obtained from the 1.law of thermodynamics:\n\ndu=δq+δw\n\nwhere:\n\n2. reversible: →δw=-p·dν\n\nThe 1.law of thermodynamics for an isentropic process is now:\n\ndu=-p·dν    (1)\n\nFor ideal gas, we have additionally:\n\ndu=cv·dT    (2)\n\nd(p·ν)=d(R·T) → p·dν+ν·dp=R·dT   (3)\n\ncp=cν+R     (4)\n\nWe combine above three equations (1), (2), (3) and (4) and simplify it, then we obtain a differential equation:\n\ncp·p·dν + ν·cν·dp=0\n\nWe define here the ratio of heat capacity as isentropic exponent κ (kappa):\n\nκ=cp/cν\n\nSo after integrating the above differential equation and by using the ideal gas law, we obtain the following important isentropic relations for ideal gas:\n\np·νκ=constant\n\np1-κ·Tκ=constant\n\nT·νκ-1=constant" ]
[ null, "https://thermodynamicsengineering.files.wordpress.com/2013/04/ts-diagram-isentropic.png", null, "https://thermodynamicsengineering.files.wordpress.com/2013/01/q_rev_ds.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8533254,"math_prob":0.95699805,"size":1404,"snap":"2021-43-2021-49","text_gpt3_token_len":404,"char_repetition_ratio":0.14857143,"word_repetition_ratio":0.0,"special_character_ratio":0.23789173,"punctuation_ratio":0.08928572,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99900776,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-16T20:09:12Z\",\"WARC-Record-ID\":\"<urn:uuid:d0798e10-da74-48ee-800a-b8c3cf78ce53>\",\"Content-Length\":\"81236\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2d5836c8-af22-49b1-84b1-8d9580bd2d28>\",\"WARC-Concurrent-To\":\"<urn:uuid:a04f2c51-132b-4f64-b74c-ba3446cd74c6>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://thermodynamics-engineer.com/isentropic-relations/\",\"WARC-Payload-Digest\":\"sha1:LKWCGFEI32NHXFM3S6INF2RWUZ56KBXY\",\"WARC-Block-Digest\":\"sha1:4B4VYUXX4MSFLGSLMYGTCFLFNNOFSVE3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585025.23_warc_CC-MAIN-20211016200444-20211016230444-00674.warc.gz\"}"}
https://mathoverflow.net/tags/algorithms/new
[ "# Tag Info\n\nAccepted\n\n### Metropolis-Hastings kernel in measure theory\n\nYou asked another question that seems somewhat likely to be closed for not being research-level mathematics. I will include my posted answer to that here because that one may get closed. One does not ...\n1 vote\nAccepted\n\n### Slicing bivariate exponential generating functions on x and y\n\nAssuming $D(0) = 0$, we can restate the problem as $F(x, y) = A(x)^y$ for $A(x) = e^{D(x)}$. It means that we can find $G_n(k)$ for any number $k$ as $$G_n(k) = \\left[\\frac{x^n}{n!}\\right] A(x)^k.$$ ..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8725768,"math_prob":0.998017,"size":2497,"snap":"2023-40-2023-50","text_gpt3_token_len":661,"char_repetition_ratio":0.09907742,"word_repetition_ratio":0.019900497,"special_character_ratio":0.26271525,"punctuation_ratio":0.12195122,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999848,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T14:05:15Z\",\"WARC-Record-ID\":\"<urn:uuid:ed11aa6a-8708-421e-a744-96d4664833b2>\",\"Content-Length\":\"81483\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e840544b-affb-492a-b2f4-66dbcf6d6009>\",\"WARC-Concurrent-To\":\"<urn:uuid:7bdcefc7-0553-4ee4-a2b2-583b4050d7c4>\",\"WARC-IP-Address\":\"172.64.150.182\",\"WARC-Target-URI\":\"https://mathoverflow.net/tags/algorithms/new\",\"WARC-Payload-Digest\":\"sha1:NSQME5BURC26RMFZ5ESBM6AL566IMUO3\",\"WARC-Block-Digest\":\"sha1:W2HSLB72CMU64SED652JCLRM36H4KJZN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100912.91_warc_CC-MAIN-20231209134916-20231209164916-00269.warc.gz\"}"}
https://www.seawayacademy.com/chartwork
[ "", null, "CHARTWORK\n\n### Explain with the use of a diagram the Earth's Magnetic Lines.", null, "### Explain Magnetic Variation.\n\nMagnetic Variation ; Angle East or West between True North and Magnetic North", null, "### Explain with the use of a compass rose Variation.\n\nVariation", null, "### Explain with the use of a diagram the approximate area where a Magnetic Compass is unstable.\n\nUnstable area", null, "### Explain why a Boat Compass does not point exactly towards Magnetic North.\n\nThe Boat Compass does not point exactly towards the Magnetic North, because it is affected by ferrous masses and electric currents in the vacinity of the compass.\n\nElectric wires, portable radios, binoculars, tools and other fero-magnetic objects will affect the boat compass.", null, "### Explain Compass Deviation.\n\nCompass Deviation; Angle East or West between Magnetic North and Boat Compass.\n\n### Explain with use of a diagram Compass Deviation.\n\nCompass Deviation", null, "### What causes Compass Deviation?\n\nCauses of Compass Deviation\n\nLocal magnetic fields:\n\n• Magnetic masses (winches / magnets from loud speakers / nearby radios / tools, etc.)\n• Electric currents producing a magnetic filed (lights / remote controls, etc.)\n\n### Explain Conversions between the Norths.\n\nConversions between the Norths\n\n• True to Magnetic / Magnetic to True\n• Magnetic to Compass / Compass to Magnetic\n• True to Compass / Compass to True", null, "### When taking 2 bearings what results give the best accuracy?\n\n2 LOP at right angles to each other. To obtain more accurate results 3 LOP should be taken if possible to eliminate any errors that may occur from taking 2 LOP.\n\n2 bearing LOP errors;\n\n- Error in identifying the landmark (wrong lighthouse)\n\n- Error in the concersion from Magnetic to True", null, "### When taking LOP what consideration should be taken to obtain the most accurate results.\n\nFor best results 3 LOP should be taken at about 120 degrees. Taking a 3 bearing LOP eliminates errors than can occur with a 2 bearing LOP.\n\n2 bearing LOP errors;\n\n- Error in identifying the landmark (wrong lighthouse)\n\n- Error in the concersion from Magnetic to True", null, "### Explain how you would determine your distance from a landmark by doubling your bearing.\n\nDetermining the distance from a landmark\n\nUsing a hand bearing compass, a pelorus, or an Automatic Direction Finder (ADF)\n\nDoubling the bearing, e.g. from 30° to 60°:\n\nDistance from the landmark (BC) = distance travelled by the boat (AB)", null, "### Explain how you would determine your distance from a landmark by doubling your bearing.\n\nDoubling the bearing. ie. 
from 45° to 90°:\n\nDistance from the landmark (BC) = distance travelled by the boat (AB)", null, "### Determine the distance off a landmark using distance travelled.\n\nDistance from the landmark ≈ 6 times the distance travelled (A-B) for a change of bearing of 10°.", null, "### Explain distance off by Vertical Sextant Angle.\n\nVertical Sextant angle.\n\nd = h / tg α (where tg is the tangent)\n\n- Measure α with a vertical sextant\n\n- read “tg α” off a trigonometric table\n\n- read h off the marine chart\n\n- calculate d", null, "### Explain how to calculate the position of your vessel by using a vertical sextant angle and a bearing.\n\nThe boat position can be determined from a distance to the landmark (circle of position, vertical sextant), and a bearing (hand bearing compass)", null, "### Explain how to determine your position using 3 landmarks with horizontal sextant angles.\n\nThe boat position can be established from two circles of position (horizontal sextant angles), using three landmarks.", null, "### Explain the GPS accuracy on electronic charts.\n\nUnder the best conditions, the horizontal accuracy of the GPS system is approximately 3 to 8 metres, 95% of the time.\n\nThe use of the Wide Area Augmentation System (WAAS), which provides corrections through a geostationary satellite, is one way to increase accuracy. Differential GPS, sending local corrections via ground transmitters close to US and Canadian shorelines, is another way.\n\nVertical accuracy is considerably lower.\n\n### Explain the Limiting Factors of using GPS.\n\nLimiting Factors of using GPS:\n\n- Ionospheric and tropospheric interference\n- Satellite positioning\n- Calculating ability and accuracy of the GPS unit\n- Multipath signals; natural or artificial interference\n- Horizontal chart datum and chart reliability\n- Disappearance of details in vector charts at small scales\n- Loss of signal due to accidental disconnection\n- Loss of signal due to automatic cut-off at slow speed\n\n### With the use of a diagram explain the cycle of tides.\n\nTide cycle.", null, "### Explain tides. Duration / Time Intervals / Range / Height.\n\nTides;\n\n- Duration of Tide: time between Hi & Lo tides around measurement time.\n\n- Time Intervals: time between measurement time and nearest Hi or Lo.\n\n- Range of tide: height difference between Hi & Lo.\n\n- Height Difference: height difference between height at measurement and nearest Hi or Lo.\n\n### Explain how to determine the height of tides in feet without tide tables.\n\nThe rule of incremental 12ths:\n\n- Particularly useful for tides in feet\n- Assumes: Tide duration = 6 hours; change in heights before or after a High or a Low =\n\n• 1/12th of the total tide range during the first hour\n\n• 2/12th during the second hour\n\n• 3/12th during the third hour, and then the reverse: 3, 2, and 1 twelfths.\n\nHeight at any time = sum of the 12ths of the range accumulated since the High or the Low tide used as a reference.\n\nExample:\n\n1/12, 2/12, 3/12\n\n- Falling tide\n- Range = 11 feet (difference between the height of the High and the height of the following Low)\n- Drop, four hours after the High? 
- (1/12 + 2/12 + 3/12 + 3/12) x 11' = 9/12 x 11' = 8.25', i.e. 99 inches\n\n### Explain how to determine the height of tides in meters without tide tables.\n\nThe rule of 10ths and Quarters: 1/10, 1/4, 2/4, 3/4, 9/10\n\n- Particularly useful for tides in meters\n- Assumes: Tide duration = 6 hours; change in heights before or after a High or a Low =\n\n• 1/10th of the total tide range after one hour\n• 1/4 after two hours\n• 2/4 after three hours\n• 3/4 after four hours\n• 9/10th after five hours\n\n- Gives directly the height of the tide, without adding fractions.\n\n- More precise, close to a High or a Low, than the method of the 12ths.\n\nExample:\n\n1/10, 1/4, 2/4, 3/4, 9/10\n\n- Rising tide\n- Range = 6 m\n- Tide, four hours after the Low?\n\n• 4 hours after the Low = 2 hours before the next High\n• Tide height 4 hours after the Low = 3/4 x (6 m) = 4.5 m\n\n### Explain a Vector.\n\nA vector is a way to represent a phenomenon which can be defined by:\n\n- A direction\n- A magnitude (intensity)\n\nExamples:\n\n- A movement (direction, and distance travelled)\n- A speed (direction, and magnitude)\n- A force (direction, and magnitude)\n\n### Explain how to determine Resulting Speed using vectors.\n\nResulting speed: Place the two speed vectors (“distance travelled” in one hour) end to end.", null, "### Explain how to determine Resulting Distance Travelled using vectors.\n\nResulting distance travelled: Place the two vectors representing the “distance travelled” during the time considered (e.g. 50 min or 1 h 40 min) end to end.", null, "### Explain with the use of a diagram Set and Drift.\n\nMeasuring current Direction (“Set”) and Speed (“Drift”)", null, "### With the use of a diagram explain how to compensate for a Known Current.\n\nCompensating for known current;\n\nThe vector construction is for one hour. Given the boat speed of 6 kn (arc of circle, 6 NM radius), the boat direction is chosen to offset the effect of the current (AC) and bring the boat back to the desired track (AB).", null, "### Explain with the use of a diagram how to compensate for leeway.\n\nLeeway to port (wind on the starboard side)", null, "### Explain Nautical Miles\n\nDistances are measured in Nautical Miles\n\n- Abbreviation: “M” or, preferably, “NM”.\n- Original definition: 1’ of arc (measured from the center of the earth) at the surface of the earth.\n- Only the Parallels of Latitude have a constant separation. Therefore, 1 NM = 1’ of Latitude.\n- Current definition: 1,852 m\n\nPDF file of\n\nChart symbols & abbreviations" ]
[ null, "https://static.wixstatic.com/media/505d79_d4d5a71d19aa40b880a180dd23bb92cb~mv2.jpg/v1/fill/w_210,h_50,al_c,q_80,usm_0.66_1.00_0.01/505d79_d4d5a71d19aa40b880a180dd23bb92cb~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_e41cd556de2f407e96f84dec4c96e9d6~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_ec66afc666704cd78b6796d0d35a4171~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_d52add16215146a1adca9b3ce47c068f~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_3098122938a74a1f9a55573f7074557b~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_f68991367f92495ab6257f82bcb0e66f~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_f61064b990e54aa6bdc36684b0ce7178~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_b91fce69899948af94d46bd1a9113a7e~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_c1623c414d0640a8b20f57ada7c59f5e~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_8908284b8af24069a0a98dd556be60b2~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_2e08362010a345ffbdbd8558748c3313~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_a904fe2ded0842a39288b99b0a1adb23~mv2.png", null, "https://static.wixstatic.com/media/505d79_5ce2238231e6484280c1f925737b2c99~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_37e26d0f4c7e4feebcf09db3d19919dc~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_85ceb2f3f74a4c46a7dfc89495173b61~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_e67d5fa26c0349e3a69dc94d8dc39ffe~mv2.png", null, "https://static.wixstatic.com/media/505d79_04c8cef66d534f209a446f0db1e3aaba~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_959cd368c030450c8d80786149ed6187~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_b37e0c62db274c8282b3be673290ff4c~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_0dcd64d236b14f938527b43af041283f~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_d63c756c82244bcea79204618b421fa2~mv2.jpg", null, "https://static.wixstatic.com/media/505d79_87801ad58fd14678a7a96add2dc4d011~mv2.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85951227,"math_prob":0.90490574,"size":6337,"snap":"2021-43-2021-49","text_gpt3_token_len":1506,"char_repetition_ratio":0.13974419,"word_repetition_ratio":0.12347989,"special_character_ratio":0.22723687,"punctuation_ratio":0.10969388,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9562713,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T11:30:52Z\",\"WARC-Record-ID\":\"<urn:uuid:f094e8f1-3783-457b-a455-b0a743acb952>\",\"Content-Length\":\"636194\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d9d0ebb-5bdf-4012-a638-6abc47cc2a14>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b15eab6-c7f5-4d0a-8f69-cdae0c10e41b>\",\"WARC-IP-Address\":\"199.232.65.84\",\"WARC-Target-URI\":\"https://www.seawayacademy.com/chartwork\",\"WARC-Payload-Digest\":\"sha1:NGZZ3WZRSIJABCV2KVXWIH5HYXR3YN4B\",\"WARC-Block-Digest\":\"sha1:MLGW5234BQNST6WHMWJTDXLW7CYPS6SV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585997.77_warc_CC-MAIN-20211024111905-20211024141905-00712.warc.gz\"}"}
https://capitalistlad.wordpress.com/2017/07/21/time-is-money-maths-edition/
[ "# Time is Money: Maths Edition", null, "Is \\$100,000 worth more today or in a year’s time? Most people would want to use that money to solve their current problems, like pay off their student debt, buy an apartment or a car, etc. Why would anyone want it later when they can have it now? Investors see this in a very similar light. Money is worth more now, because of its potential earning capacity. Simply put, the money you receive today can be invested and will earn interest which will mean that the sum of money after a year would most definitely be worth more than a \\$100,000 in a year’s time.\n\nThis is a great explanation by Investopedia:\n\nSo how is this concept useful and how do you apply it? Well, it shows you that time is money and through a few simple calculations, you will be able to see how to obtain the value of potential investments at different points in time. This also forms the basis of some variations of stock valuation.\n\n### Future Value\n\nThe first calculation is super simple. I’m sure you’ve all done this in your secondary school maths class. It is to find the future value of an investment that returns a fixed amount of money in multiple fixed intervals.\n\nI hope I don’t bring back any bad memories doing this but here are a few basic question you can try to solve first and see if you get it right:", null, "If you put \\$100,000 in a bank that returns 2% a year,\n\n1. how much would you have at the end of the first year?\n2. How much would you have after 15 years?\n\nObviously, the answer to the first question is 100,000(1+0.02) = \\$102,000.\n\nThe answer to the second question is 100,000(1+0.02)^15= \\$134,586.\n\nGot it? Well done! You have learnt how to calculate the future value of an annuity (a series of continuous cash flows that lasts for a certain period of time. This will help you to estimate or even accurately predict what you will be receiving based on the decision that you make.", null, "If you haven’t got it, or want a clearer picture, the equation is basically: Future Value = Present Value*(1+ Interest Rate per Time Period)^ No. of Time Periods. Now let’s move on to the next section, present value!\n\n### Present Value\n\nIf you can do the maths, to get present value, you can simply rearrange the equation to find it.\n\nSo the equation basically becomes: Present Value =  Future Value/ (1+ Interest Rate per Time Period)^ No. of Time Periods.\n\nSo if let say you received \\$100,000 in 5 years from now and you could’ve put that sum in a risk free bank for 2% per annum, how much would it be worth now?\n\n100,000/(1+0.02)^5 = \\$90573.08\n\nTadah! Good job! You now know how to calculate the present value of future cash flows! Finding the present value of an investment is sort of like calculating how much you have to put in now to receive that sum in x number of years.\n\n### Never Ending Cash Flows (Perpetuity)\n\nYou’re now ready for the next stage of this article/challenge (whatever you think it is). How do you calculate the present value of a never ending cash flow? So for example, what is the present value of an investment that pays \\$100 a year with an interest rate (also known as the discount rate when calculating present value) of 2%?\n\nIf you think about it, it is the addition of 100/(1+0.02) + 100/(1+0.02)^2 +100/(1+0.02)^3 … to infinity. This is what you call a perpetuity. It is a continuous constant cash flow that never ends. If you simplify the above equation, the formula is: Cash Flow/ Interest Rate (Discount Rate). 
So you will basically get 100/0.02, which is \$5000. If the discount rate is increased to, let’s say, 5%, the present value of the investment will fall: 100/0.05 = \$2000. This is because the higher the interest rate, the lower the present value needs to be to make up the future value.\n\nIf you’re a nerd and you like maths, here’s how the formula is simplified. Don’t worry if you don’t get it:", null, "PV = Present Value\n\nC = Cash Flow\n\nr = Interest Rate\n\nPV = C/(1+r) + C/(1+r)^2 + C/(1+r)^3 …\n\nMultiply both sides by (1+r):\n\nPV(1+r) = C(1+r)/(1+r) + C(1+r)/(1+r)^2 + C(1+r)/(1+r)^3 …\n\nPV(1+r) = C + C/(1+r) + C/(1+r)^2 + C/(1+r)^3 …\n\nThen subtract PV from both sides:\n\nPV(1+r) – PV = C + [C/(1+r) + C/(1+r)^2 + C/(1+r)^3 …] – [C/(1+r) + C/(1+r)^2 + C/(1+r)^3 …]\n\nThen simplify:\n\nPV*r = C\n\nPV = C/r\n\nFinally, we are on to the last stage; if you’ve made it here, give yourself a pat on the back. We are going to show you how to calculate a growing perpetuity. The same as the last example, except that it grows at a constant rate till eternity. This is the basis of the discounted cash flow valuation, where the value of the company is seen to be the sum of all future cash flows discounted back to present value. So how does one calculate this? If you think about it, the way to calculate a growing perpetuity is the same as a normal perpetuity except that you add the growth into every formula as shown below:\n\nPV = C/(1+r) + C(1+g)/(1+r)^2 + C(1+g)^2/(1+r)^3 …\n\nWhen you simplify it, you get PV = C/(r-g)\n\nSo for example, if the cash flow is \$100 and it grows by 5% forever and the interest rate is 7%, the present value is 100/(0.07-0.05) = \$5000.\n\nAwesome! Now that you’ve got the basics down, you’re in a position to make slightly better decisions. Although finding a perfect perpetuity in real life is pretty much impossible, understanding it will eventually help you to value different investments in the future. I hope you enjoyed this article (it’s a little more technical and difficult than all the other articles so far) and pray that it wasn’t too dry for your tastes. If you have any questions we’ll be happy to answer them in the comments section.\n\nP.S. If you really are a maths nerd then the derivation for the growing perpetuity formula is shown below.\n\nPV = C/(1+r) + C(1+g)/(1+r)^2 + C(1+g)^2/(1+r)^3 …\n\nSimplify it:\n\nPV = C/(1+r) + C/(1+r)·((1+g)/(1+r)) + C/(1+r)·((1+g)/(1+r))^2 …\n\nYou can see that ((1+g)/(1+r)) is the ratio that repeats to infinity, so this is an infinite geometric series with sum 1/(1-x), where x = (1+g)/(1+r).\n\nSo when you put it back into the equation it becomes:\n\nPV = (C/(1+r)) / (1 – (1+g)/(1+r))\n\nMultiply the numerator and denominator by (1+r) to get\n\nPV = C/((1+r) – (1+g))\n\nPV = C/(r-g)" ]
[ null, "https://capitalistlad.files.wordpress.com/2017/07/ka44p.jpg", null, "https://i0.wp.com/i3.kym-cdn.com/photos/images/newsfeed/001/179/679/d95.jpg", null, "https://i0.wp.com/m.memegen.com/890e1q.jpg", null, "https://mathequality.files.wordpress.com/2014/01/math-meme-gandalf.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9323611,"math_prob":0.99347305,"size":6160,"snap":"2022-40-2023-06","text_gpt3_token_len":1664,"char_repetition_ratio":0.12589344,"word_repetition_ratio":0.040350877,"special_character_ratio":0.29967532,"punctuation_ratio":0.089065894,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996958,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-30T02:29:53Z\",\"WARC-Record-ID\":\"<urn:uuid:52653189-9fcf-4d27-97e3-e1739e7f5c36>\",\"Content-Length\":\"161496\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:034b2740-addd-4530-82f6-b9ebbe1ff44b>\",\"WARC-Concurrent-To\":\"<urn:uuid:b224ca66-ebb3-4ba7-842a-1445726faf75>\",\"WARC-IP-Address\":\"192.0.78.12\",\"WARC-Target-URI\":\"https://capitalistlad.wordpress.com/2017/07/21/time-is-money-maths-edition/\",\"WARC-Payload-Digest\":\"sha1:7WWTWHQHX6QV35MNF7FLIVSOISVKX456\",\"WARC-Block-Digest\":\"sha1:6NCOP4IXO7HDSBOCOOB27GCX4NEGDNT4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499790.41_warc_CC-MAIN-20230130003215-20230130033215-00770.warc.gz\"}"}
https://www.vceela.com/shop/category/clothing-men-scarfs-shawls-60
[ "", null, "X\n\nView Filter\n\nx\nClear all x\n\nRs. 13,000\nRs. 25,000\nRs. 800\nRs. 1,000\nRs. 1,000\nRs. 6,150\nRs. 1,500\nRs. 1,500\nRs. 3,000\nRs. 3,000\nRs. 3,000\nRs. 900\nRs. 1,800\nRs. 1,800\nRs. 1,700\nRs. 1,700\nRs. 1,700\nRs. 3,000\nRs. 2,500\nRs. 3,000\nRs. 2,500\nRs. 3,000\nRs. 2,500\nRs. 1,500\nRs. 1,500\nRs. 1,680\nRs. 3,000\nRs. 1,880\nRs. 3,000\nRs. 3,000\nRs. 3,000\nRs. 3,000\nRs. 3,000\nRs. 3,300\nRs. 860\nRs. 1,800\nRs. 1,450\nRs. 1,450\nRs. 1,450" ]
[ null, "https://www.vceela.com/web/image/645/shop.jpeg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.56309843,"math_prob":0.99999666,"size":6987,"snap":"2021-31-2021-39","text_gpt3_token_len":2612,"char_repetition_ratio":0.19447228,"word_repetition_ratio":0.5617879,"special_character_ratio":0.25461572,"punctuation_ratio":0.13046876,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98065627,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-21T08:33:28Z\",\"WARC-Record-ID\":\"<urn:uuid:d664f5eb-3f9b-4bf9-9ad2-2a9c24436fd3>\",\"Content-Length\":\"323747\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:96038bcf-62f6-4cf8-880e-32cc65d3fc0d>\",\"WARC-Concurrent-To\":\"<urn:uuid:a2aaeed7-01e3-4774-a798-20e5f28b8e15>\",\"WARC-IP-Address\":\"18.140.24.82\",\"WARC-Target-URI\":\"https://www.vceela.com/shop/category/clothing-men-scarfs-shawls-60\",\"WARC-Payload-Digest\":\"sha1:O46WWS7XZZP3LGMD45ON6FXBTQSR4BHY\",\"WARC-Block-Digest\":\"sha1:E4KQLSRVYNKQVZM3K2L52DMOKG2A2MPV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057199.49_warc_CC-MAIN-20210921070944-20210921100944-00327.warc.gz\"}"}
http://www.him.uni-bonn.de/de/programs/past-programs/past-trimester-programs/signal-processing-2016/workshop-on-harmonic-analysis-graphs-and-learning/schedule/
[ "", null, "", null, "# Schedule of the Workshop on Harmonic Analysis, Graphs and Learning\n\nAll talks take place in the lecture hall \"Kleiner Hörsaal\" (Wegelerstr. 10). The registration, the coffee breaks and the poster session are in the room \"Zeichensaal\" (Wegelerstr. 10).\n\n## Monday, March 14\n\n 10:15 - 10:45 Registration & Welcome coffee 10:45 - 10:55 Opening remarks 10:55 - 11:40 Joel Tropp: Universality laws for randomized dimension reduction 11:45 - 12:30 Rachel Ward: Some recent results in hashing and clustering 12:35 - 14:20 Lunch break 14:20 - 15:05 Gilad Lerman: Fast, Robust and Non-convex Subspace Recovery 15:10 - 15:55 Andrea Montanari: Graph estimation via semi-definite programming 16:00 - 16:30 Coffee break 16:30 - 17:15 Naoki Saito: Matrix Data Analysis using Hierarchical Co-Clustering and Multiscale Basis Dictionaries on Graphs afterwards Reception at HIM (Poppelsdorfer Allee 45)\n\n## Tuesday, March 15\n\n 9:00 - 9:45 Radu Balan: Lipschitz analysis of the phase retrieval problem 9:50 - 10:35 Vladimir Temlyakov: Dictionary descent in optimization 10:40 - 11:10 Group photo and coffee break 11:10 - 11:55 Afonso Bandeira: On solving certain semidefinite programs with low-rank solutions 12:00 - 12:45 Stephan Dahlke: Shearlet Coorbit Spaces and Shearlet Groups 12:50 - 14:20 Lunch break 14:20 - 16:00 Poster Session 16:00 - 16:30 Coffee break 16:30 - 17:15 Gitta Kutyniok: Optimal Compressive Imaging of Fourier Data\n\n## Wednesday, March 16\n\n 9:00 - 9:45 Gabriele Steidl: Variational Models for Restoring Manifold-Valued Images 9:50 - 10:35 Holger Boche: Banach-Steinhaus Theory Revisited: Lineability, Spaceability, and related Questions for Phase Retrieval Problems for Functions with finite Energy 10:40 - 11:10 Coffee break 11:10 - 11:55 Pascal Frossard: Sparse representation learning for graph signals 12:00 - 12:45 Tomas Sauer: Learning sparse exponentials from discrete data 12:50 - 14:20 Lunch break 14:20 - 15:05 Robert Calderbank: GraphConnect: A Framework for Regularizing Neural Networks afterwards Excursion: Exhibition \"M.C. Escher\" (Max Ernst Museum Brühl) 19:00 - Conference dinner in the Restaurant Meyer's (Clemens-August-Str. 
51a)\n\n## Thursday, March 17\n\n9:00 - 9:45 Joan Bruna: Signal Recovery from Scattering Convolutional Networks\n9:50 - 10:35 Helmut Bölcskei: Deep convolutional feature extraction: Theory and new architectures\n10:40 - 11:10 Coffee break\n11:10 - 11:55 Ronen Talmon: Hierarchical Coupled Geometry for Data-driven Analysis of Dynamical Systems\n12:00 - 12:45 Yue Lu: Dynamics of Online M-Estimation for High-Dimensional Regression: the ODE Approach\n12:50 - 14:20 Lunch break\n14:20 - 15:05 Matthias Hein: A nodal domain theorem and a higher-order Cheeger inequality for the graph p-Laplacian - CANCELLED -\n15:10 - 15:55 Akram Aldroubi: Iterative actions of normal operators\n16:00 - 16:30 Coffee break\n16:30 - 17:15 Pierre Vandergheynst: Random sampling of bandlimited signals on graphs\n\n## Friday, March 18\n\n9:00 - 9:45 Alain Pajor: On the covariance matrix - CANCELLED -\n9:50 - 10:35 Christine De Mol: Combining information or forecasts for predicting economic variables\n10:40 - 11:10 Coffee break\n11:10 - 11:55 Philipp Grohs: Perspectives of Computational Harmonic Analysis in Numerics\n12:00 - 12:45 Ingrid Daubechies / Rima Alaifari: Phase retrieval in the infinite-dimensional setting\n12:50 - Lunch break - end of workshop\n\n# Abstracts\n\n## Akram Aldroubi: Iterative actions of normal operators\n\nLet A be a normal operator in a Hilbert space H, and let G ⊂ H be a countable set of vectors. We investigate the relations between A, G, and L that make the system of iterations {A^n g : g ∈ G, 0 ≤ n < L(g)} complete, Bessel, a basis, or a frame for H. The problem is motivated by the dynamical sampling problem and is connected to several topics in functional analysis, including frame theory and spectral theory. It also has relations to topics in applied harmonic analysis, including wavelet theory and time-frequency analysis.\n\nTop\n\n## Radu Balan: Lipschitz analysis of the phase retrieval problem\n\nThe phaseless reconstruction problem can be stated as follows. Given the magnitudes of a vector's coefficients with respect to a linear redundant system (frame), we want to reconstruct the unknown vector. This problem first occurred in X-ray crystallography, starting from the early 20th century.\n\nThe same nonlinear reconstruction problem shows up in speech processing, particularly in speech recognition.\n\nIn this talk we present a Lipschitz analysis of the problem as well as Cramer-Rao Lower Bounds that govern any reconstruction algorithm. In particular we show that the left inverse of the nonlinear analysis map can be extended to the entire measurement space with a small increase in the Lipschitz constant independent of the dimension of the space or the frame redundancy.\n\nThis is joint work with Dongmian Zou.\n\nTop\n\n## Afonso Bandeira: On solving certain semidefinite programs with low-rank solutions\n\nTo address difficult optimization problems, convex relaxations based on semidefinite programming are now commonplace in many fields. Although solvable in polynomial time, large semidefinite programs tend to be computationally challenging. Over a decade ago, exploiting the fact that in many applications of interest the desired solutions are low rank, Burer and Monteiro proposed a heuristic to solve such semidefinite programs by restricting the search space to low-rank matrices. The accompanying theory does not explain the extent of the empirical success. 
We focus on Synchronization and Community Detection problems and provide theoretical guarantees shedding light on the remarkable efficiency of this heuristic. This is joint work with Nicolas Boumal and Vlad Voroninski.\n\nTop\n\n## Holger Boche: Banach-Steinhaus Theory Revisited: Lineability, Spaceability, and related Questions for Phase Retrieval Problems for Functions with finite Energy\n\nIn the first part of the talk we study the divergence behavior of linear approximation processes in general Banach spaces. We are interested in the structure of the set of divergence creating functions. The Banach-Steinhaus theory gives some information about this set, however, it cannot be used to answer the question whether this set contains infinite dimensional sets with linear structure, i.e. whether the set contains infinite dimensional subspaces. We give necessary and sufficient conditions for the lineability and the spaceability of the set of divergence creating functions, i.e., the existence of infinite dimensional subspaces and closed subspaces. We also discuss the connection to Gowers' dichotomy theorem for Banach spaces.\n\nIn the second part of the talk we use the results on lineability and spaceability to characterize the behavior of general reconstruction processes for classical phase retrieval problems for functions with finite energy, i.e. solvability of the dispersion relation for functions satisfying Dirichlet's Principle.\n\nTop\n\n## Helmut Bölcskei: Deep convolutional feature extraction: Theory and new architectures\n\nDeep convolutional neural networks have led to breakthrough results in practical feature extraction applications. The mathematical analysis of such networks was initiated by Mallat, 2012. Specifically, Mallat considered so-called scattering networks based on semi-discrete shift-invariant wavelet frames and modulus non-linearities in each network layer, and proved translation invariance (asymptotically in the wavelet scale parameter) and deformation stability of the corresponding feature extractor. In this talk, we show how Mallat’s theory can be developed further by allowing for general convolution kernels, or in more technical parlance, general semi-discrete shift-invariant frames (including Weyl-Heisenberg, curvelet, shearlet, ridgelet, and wavelet frames) and general Lipschitz-continuous non-linearities (e.g., rectified linear units, shifted logistic sigmoids, hyperbolic tangents, and modulus functions), as well as Lipschitz-continuous pooling operators, all of which can be different in different network layers. We prove deformation stability for a large class of deformations, establish a new translation invariance result which is of vertical nature in the sense of the network depth determining the amount of invariance, and show energy conservation under certain technical conditions. On a conceptual level our results establish that deformation stability, vertical translation invariance, and energy conservation are guaranteed by the network structure per se rather than the specific convolution kernels, non-linearities, and pooling operators. This offers an explanation for the tremendous success of deep convolutional neural networks in a wide variety of practical feature extraction applications. 
The mathematical techniques we employ are based on continuous frame theory, as developed by Ali et al., 1993, and Kaiser, 1994, and allow to completely detach our proofs from the algebraic structures of the underlying frames and the particular form of the Lipschitz non-linearities and pooling operators. Finally, we introduce new network architectures and we present performance results for cartoon functions.\n\nThis is joint work with T. Wiatowski, P. Grohs, and M. Tschannen.\n\nTop\n\n## Joan Bruna: Signal Recovery from Scattering Convolutional Networks\n\nConvolutional Neural Networks (CNN) are a powerful class of non-linear representations that have shown through numerous supervised learning tasks their ability to extract rich information from images, speech and text, with excellent statistical generalization. These are examples of truly high-dimensional signals, in which classical statistical models suffer from the so-called curse of dimensionality, referring to their inability to generalize well unless provided with exponentially large amounts of training data.\n\nIn order to gain insight into the reasons for such success, in this talk we will start by studying statistical models defined from wavelet scattering networks, a class of CNNs where the convolutional filter banks are given by complex, multi-resolution wavelet families. As a result of this extra structure, they are provably stable and locally invariant signal representations, and yield state-of-the-art classification results on several pattern and texture recognition problems.\n\nWe will give conditions under which signals can be recovered from their scattering coefficients, and will introduce a family of Gibbs processes defined by a collection of scattering CNN sufficient statistics, from which one can sample image and auditory textures. Although the scattering recovery is non-convex and corresponds to a generalized phase recovery problem, gradient descent algorithms show good empirical performance and enjoy weak convergence properties. We will discuss connections with non-linear compressed sensing, applications to texture synthesis and inverse problems such as super-resolution, as well as generalizations to unsupervised learning using deep convolutional sufficient statistics.\n\nTop\n\n## Robert Calderbank: GraphConnect: A Framework for Regularizing Neural Networks\n\nDeep neural networks have proved very successful in domains where large training sets are available, but when the number of training samples is small, their performance suffers from overfitting. Prior methods of reducing overfitting such as weight decay, DropOut and DropConnect are data-independent. We shall describe GraphConnect, a complementary method for regularizing deep networks that is data-dependent, and is motivated by the observation that data of interest typically lie close to a manifold. Our proposed method encourages the relationships between the learned decisions to resemble a graph representing the original manifold structure. Essentially GraphConnect is designed to learn attributes that are present in data samples in contrast to weight decay, DropOut and DropConnect which are simply designed to make it more difficult to fit to random error or noise. Analysis of empirical Rademacher complexity suggests that GraphConnect is stronger than weight decay as a regularization. 
Experimental results on several benchmark datasets validate the theoretical analysis, and show that when the number of training samples is small, GraphConnect is able to significantly improve performance over weight decay, and when used in isolation, is competitive with DropOut.\n\nTop\n\n## Stephan Dahlke: Shearlet Coorbit Spaces and Shearlet Groups\n\nThis talk is concerned with recent progress in shearlet theory. First of all, we discuss the group theoretical background of the continuous shearlet transform. Then, we explain how these relationships can be used to combine the shearlet approach with the coorbit theory introduced by Feichtinger and Gröchenig in a series of papers. This combination gives rise to new smoothness spaces, the shearlet coorbit spaces. Then, we discuss the relations of the shearlet groups to other classical groups such as the Weyl-Heisenberg groups and the symplectic groups. In the last part of the talk, we will study the structure of the shearlet coorbit spaces, i.e., we will discuss trace and embedding theorems.\n\nTop\n\n## Christine De Mol: Combining information or forecasts for predicting economic variables\n\nWe consider the problem of predicting a given macroeconomic or financial variable, (i) based on the information contained in a large ensemble of (stationary) time series, typically strongly correlated with the series to forecast, using penalized regression methods such as ridge or lasso; (ii) based on an optimal combination of forecasts of that variable provided by different sources (surveys, various models, etc.).\n\nTop\n\n## Pascal Frossard: Sparse representation learning for graph signals\n\nThe sparse representation of signals residing on weighted graphs has to incorporate the intrinsic geometric structure of the irregular data domain into the atoms of the dictionary. In this work, we propose a parametric dictionary learning algorithm to design data-adapted, structured dictionaries that sparsely represent graph signals. In particular, we model graph signals as combinations of overlapping local patterns. We impose the constraint that each dictionary is a concatenation of subdictionaries, with each subdictionary being a polynomial of the graph Laplacian matrix, representing a single pattern translated to different areas of the graph. Illustrative application of such dictionaries will be discussed in distributed signal processing and visual data representation.\n\nTop\n\n## Philipp Grohs: Perspectives of Computational Harmonic Analysis in Numerics\n\nIn the last decades, directional representation systems such as curvelets, shearlets or ridgelets have made a big impact in applied harmonic analysis and image- and signal processing, due to their superior ability to sparsify curved singularities in multivariate functions, arising for instance as edges in image data.Their approximation properties are vastly superior over standard discretizations such as wavelets for FEM for the approximation of functions with curved singularities and therefore the use of directional representation systems also carries great potential in numerical analysis.\n\nIn this talk we discuss the useability of directional representation systems in numerical analysis, namely for the numerical solution of different types of PDEs whose generic solutions possess curved singularities. 
Concretely, we present a ridgelet-based adaptive solver for linear advection equations which, for the first time, can be shown to converge optimally, even for solutions with line singularities, as well as a construction of shearlet systems on bounded domains which may be used for the design of the first optimally convergent adaptive solvers for elliptic PDEs whose solutions possess singularities along curved submanifolds.\n\nThe results are based on joint work with Simon Etter, Gitta Kutyniok, Jackie Ma, Axel Obermeier and Philipp Petersen.\n\nTop\n\n## Matthias Hein: A nodal domain theorem and a higher-order Cheeger inequality for the graph p-Laplacian\n\nWe consider the nonlinear graph p-Laplacian and its set of eigenvalues and associated eigenvectors of this operator defined by a variational principle. We prove a nodal domain theorem for the graph p-Laplacian for any p ≥ 1. While for p > 1 the bounds on the number of weak and strong nodal domains are the same as for the linear graph Laplacian (p = 2), the behavior changes for p = 1. We show that the bounds are tight for p ≥ 1 as the bounds are attained by eigenvectors of the graph p-Laplacian on two graphs. Finally, using the properties of the nodal domains, we prove a higher-order Cheeger inequality for the graph p-Laplacian for p > 1. If the eigenvector associated to the k-th variational eigenvalue of the graph p-Laplacian has exactly k strong nodal domains, then the higher order Cheeger inequality becomes tight as p tends to 1.\n\nTop\n\n## Gitta Kutyniok: Optimal Compressive Imaging of Fourier Data\n\nOne fundamental problem in applied mathematics is the issue of recovery of continuous data from specific samples. Of particular importance is the case of pointwise samples of the associated Fourier transform, which are, for instance, collected in Magnetic Resonance Imaging (MRI). Strategies to reduce the number of samples required for reconstruction with a prescribed accuracy have thus a direct impact on such applications – which in the case of MRI will, for instance, shorten the time a patient is forced to lie in the scanner.\n\nIn this talk, we will present a sparse subsampling strategy of Fourier samples which can be shown to perform optimally for multivariate functions, which are typically governed by anisotropic features. For this, we will introduce a dualizable shearlet frame for reconstruction, which provides provably optimally sparse approximations of cartoon-like images, a class typically regarded as a suitable model for images. The sampling scheme will be based on compressed sensing ideas combined with a coherence-adaptive sampling density considering the coherence between the Fourier basis and the shearlet frame.\n\nThis is joint work with W.-Q. Lim (Fraunhofer Heinrich-Hertz-Institute Berlin).\n\nTop\n\n## Gilad Lerman: Fast, Robust and Non-convex Subspace Recovery\n\nThis joint work with Tyler Maunu presents a fast and non-convex algorithm for robust subspace recovery. The datasets considered include inliers drawn around a low-dimensional subspace of a higher dimensional ambient space, and a possibly large portion of outliers that do not lie nearby this subspace. The proposed algorithm, which we refer to as Fast Median Subspace (FMS), is designed to robustly determine the underlying subspace of such datasets, while having lower computational complexity than existing methods. We prove convergence of the FMS iterates to a stationary point. 
Further, under a special model of data, we prove that FMS converges globally sublinearly, and locally r-linearly with overwhelming probability to a point which is near to the global minimum. The latter theorem holds for any fixed fraction of outliers (less than 1) and any fixed positive distance between the limit point and the global minimum. Numerical experiments on synthetic and real data demonstrate its competitive speed and accuracy. The real data experiments emphasize the usefulness of FMS for dimension reduction.\n\nTop\n\n## Yue Lu: Dynamics of Online M-Estimation for High-Dimensional Regression: the ODE Approach\n\nIn this talk, I will present an exact analysis of the dynamics of a large class of online row-action methods for solving high-dimensional M-estimation problems. In the large systems limit, the dynamics of these algorithms converges to trajectories governed by a set of deterministic, coupled ODEs. Combined with suitable spectral initialization, this analysis establishes the theoretical performance guarantee of these row-action methods for solving both convex and nonconvex M-estimation problems in high dimensions.\n\nTop\n\n## Andrea Montanari: Graph estimation via semi-definite programming\n\nDenote by A the adjacency matrix of an Erdos-Renyi graph with bounded average degree. I consider the problem of maximizing <A,X> over the set of positive semidefinite matrices X with diagonal entries equal to one. I will  prove that for large (bounded) average degree d, the value of this semidefinite program (SDP) is –with high probability– 2n√d, plus lower order terms.\n\nI will next consider the sparse, two-groups, symmetric community detection problem (also known as planted partition). I will prove that SDP achieves the information-theoretically optimal detection threshold for large (bounded) degree. Our approach is based on tools from different research areas: (i) A new 'higher-rank' Grothendieck inequality for symmetric matrices; (ii) An interpolation method inspired from statistical physics; (iii) An analysis of the eigenvectors of deformed Gaussian random matrices.\n\nI will finally compare this rigorous results, with a series of non-rigorous predictions that come from statistical physics, and outline a number of open problems in this area.\n\nBased on Joint work with Adel Javanmard, Federico Ricci-Tersenghi and Subhabrata Sen.\n\nTop\n\n## Alain Pajor: On the covariance matrix\n\nWe will survey recent results on the empirical covariance matrix of a sample from a random vector which coordinates are not necessarily independent. We will discuss the quantitative point of view as well as the asymptotic point of view.\n\nTop\n\n## Naoki Saito: Matrix Data Analysis using Hierarchical Co-Clustering and Multiscale Basis Dictionaries on Graphs\n\nMany modern data analysis tasks often require one to efficiently handle and analyze large matrix-form datasets such as term-document matrices and spatiotemporal measurements made via sensor networks. Since such matrices are often shuffled and scrambled, they do not have spatial coherency and smoothness compared to usual images, and consequently, the conventional wavelets and their relatives cannot be used in practice. Instead we propose to use our multiscale basis dictionaries for graphs, i.e., the Generalized Haar-Walsh Transform (GHWT) and the Hierarchical Graph Laplacian Eigen Transform (HGLET). 
In particular, we build such dictionaries for columns and those for rows, extract the column best basis and the row best basis from the basis dictionaries, and finally construct the tensor product of such best bases, which turns out to reveal hidden dependency and underlying geometric structure of a given matrix data. To build our basis dictionaries, the hierarchical recursive bi-partitioning of columns and rows is essential, and to do so, we fully adopt the spectral co-clustering of I. S. Dhillon at each recursion.\n\nIn this talk, I will first review both this co-clustering method and the GHWT dictionary construction. Then, I will discuss the case study of our approach using the popular Science News database consisting of relative frequencies of occurrences of 1042 words over 1153 documents, which are categorized into eight different subjects: Anthropology; Astronomy; Behavioral Sciences; Earth Sciences; Life Sciences; Math/CS; Medicine; and Physics. Our results are quite encouraging, e.g., some of the selected row basis vectors (i.e., a combination of certain words) discriminate the documents of a specific category from the others while some column basis vectors (indicating the grouping structure of the documents) reveal the statistics of the word usages of those document groups as a whole. This is joint work with Jeff Irion.\n\nTop\n\n## Tomas Sauer: Learning sparse exponentials from discrete data\n\nReconstructing a function of the form f(x) = ∑_{ω∈Ω} f_ω e^{ω⋅x} for some \"small\" set Ω of complex multivariate frequencies is known as Prony's problem. The talk points out the algebraic structure of this problem and how an approximate solution of this problem can be obtained very fast by methods from numerical Linear Algebra. In addition, some aspects of minimal sampling sets, in particular their dependency on the set Ω, will be discussed.\n\nTop\n\n## Gabriele Steidl: Variational Models for Restoring Manifold-Valued Images\n\nWe introduce a new non-smooth variational model for the restoration of manifold-valued data which includes second order differences in the regularization term. While such models were successfully applied for real-valued images, we introduce the second order difference and the corresponding variational models for manifold data, which up to now only existed for cyclic data. The approach requires a combination of techniques from numerical analysis, convex optimization and differential geometry. First, we establish a suitable definition of absolute second order differences for signals and images with values in a manifold. Employing this definition, we introduce a variational denoising model based on first and second order differences in the manifold setup.\n\nIn order to minimize the corresponding functionals, we generalize three kinds of algorithms: i) inexact cyclic proximal point algorithm, ii) half-quadratic minimization, iii) Douglas-Rachford splitting. We propose an efficient strategy for the computation of the corresponding proximal mappings in symmetric spaces. For the first algorithm we utilize the machinery of Jacobi fields. We demonstrate the performance of our algorithms in particular for the n-sphere, the rotation group SO(3) and the manifold of symmetric positive definite matrices. We prove the convergence of the proposed algorithms in Hadamard spaces.\n\nThis is joint work with Miroslav Bacak (MPI Leipzig), Ronny Bergmann, Johannes Persch (TU Kaiserslautern), R. 
Hielscher (TU Chemnitz), and Andreas Weinmann (GSF München).

Top

## Ronen Talmon: Hierarchical Coupled Geometry for Data-driven Analysis of Dynamical Systems

In this talk, we will introduce a data-driven method for building a hierarchical coupled geometry of data arising from complex dynamical systems. This method gives rise to the joint organization of the system variables, parameters and dynamic patterns in hierarchical data structures at multiple scales. These structures provide local to global representations of the data, from local partitioning in flexible trees through a new multiscale metric to a global low-dimensional embedding. We will show applications to simulated data as well as to in-vivo recordings of neuronal activity from awake animals. The application of our technique to such recordings demonstrates its capability of capturing the spatio-temporal network complexity in sufficient resolution. More specifically, it enables us to extract neuronal activity patterns and to identify temporal trends associated with particular behavioral events and manipulations introduced in the experiments.

Top

## Vladimir Temlyakov: Dictionary descent in optimization

The problem of convex optimization will be discussed. Usually in convex optimization the minimization is over a d-dimensional domain. Very often the convergence rate of an optimization algorithm depends on the dimension d. The algorithms discussed in this talk utilize dictionaries instead of the canonical basis used in coordinate descent algorithms. We investigate which properties of a dictionary are beneficial for the convergence rate of typical greedy-type algorithms.

Top

## Joel Tropp: Universality laws for randomized dimension reduction

Dimension reduction is the process of embedding high-dimensional data into a lower-dimensional space to facilitate its analysis. In the Euclidean setting, one fundamental technique for dimension reduction is to apply a random linear map to the data. The question is how large the embedding dimension must be to ensure that randomized dimension reduction succeeds with high probability.

This talk describes a phase transition in the behavior of the dimension reduction map as the embedding dimension increases. The location of this phase transition is universal for a large class of datasets and random dimension reduction maps. Furthermore, the stability properties of randomized dimension reduction are also universal. These results have many applications in numerical analysis, signal processing, and statistics.

Joint work with Samet Oymak.

Top

## Pierre Vandergheynst: Random sampling of bandlimited signals on graphs

We study the problem of sampling k-bandlimited signals on graphs, as models for information over networks. We propose two sampling strategies that consist in selecting a small subset of nodes at random. The first strategy is non-adaptive, i.e., independent of the graph structure, and its performance depends on a parameter called the graph coherence. On the contrary, the second strategy is adaptive but yields optimal results: no more than O(k log k) measurements are sufficient to ensure an accurate and stable recovery of all k-bandlimited signals. This second strategy is based on a careful choice of the sampling distribution, which can be estimated quickly. Then, we propose a computationally efficient decoder to reconstruct k-bandlimited signals from their samples. 
We prove that it yields accurate reconstructions and that it is also stable to noise. Finally, we illustrate this technique by showing how one can use it to sketch networked data and efficiently approximate spectral clustering.

Joint work with Gilles Puy, Nicolas Tremblay and Rémi Gribonval.

Top

## Rachel Ward: Some recent results in hashing and clustering

In the first part of the talk, we discuss locality sensitive hashing (LSH) for approximate nearest neighbor search. We introduce a \"fast\" variant of the cross-polytope LSH scheme for angular distance. To our knowledge, this is the first LSH scheme with provably optimal sensitivity which is also fast. In the second part of the talk, we discuss an SDP relaxation of the k-means clustering problem. Under a \"stochastic ball model\", the SDP provably recovers global k-means clusters with high probability. Stability of the SDP in the setting of overlapping clusters is also discussed.

Top" ]
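For orientation, the SDP relaxations mentioned in the Montanari and Ward abstracts above can be written compactly. The following is one common form of the k-means relaxation (our illustration; the talks may use variants), with D the matrix of squared pairwise distances and k the number of clusters:

```latex
\min_{X}\ \langle D, X\rangle
\quad\text{s.t.}\quad
X \succeq 0,\quad X \ge 0 \ \text{(entrywise)},\quad X\mathbf{1} = \mathbf{1},\quad \operatorname{tr}(X) = k ,
```

while the community-detection SDP of the Montanari abstract maximizes ⟨A, X⟩ over X ⪰ 0 with diag(X) = 1.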
[ null, "http://www.him.uni-bonn.de/de/programs/past-programs/past-trimester-programs/signal-processing-2016/workshop-on-harmonic-analysis-graphs-and-learning/schedule/typo3temp/menu/303247fd16.gif", null, "http://www.him.uni-bonn.de/de/programs/past-programs/past-trimester-programs/signal-processing-2016/workshop-on-harmonic-analysis-graphs-and-learning/schedule/typo3temp/menu/35230312de.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8843964,"math_prob":0.8866973,"size":29891,"snap":"2019-26-2019-30","text_gpt3_token_len":6169,"char_repetition_ratio":0.116538964,"word_repetition_ratio":0.07276888,"special_character_ratio":0.18952192,"punctuation_ratio":0.10942728,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9522133,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-16T02:45:37Z\",\"WARC-Record-ID\":\"<urn:uuid:78e3923c-703b-4f2b-b757-754a5561d9fe>\",\"Content-Length\":\"72545\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f06aeed-8652-4bd9-a2c5-3c9afa3b0903>\",\"WARC-Concurrent-To\":\"<urn:uuid:cae1999a-d6e8-4bb1-bc49-8dfd1f0ab3e7>\",\"WARC-IP-Address\":\"131.220.77.59\",\"WARC-Target-URI\":\"http://www.him.uni-bonn.de/de/programs/past-programs/past-trimester-programs/signal-processing-2016/workshop-on-harmonic-analysis-graphs-and-learning/schedule/\",\"WARC-Payload-Digest\":\"sha1:PKWYQSUFGB344S3PNAWGSZXAT4CIHVTU\",\"WARC-Block-Digest\":\"sha1:IG35E7ZLP5EGANSESZTPF6MP2E3PQY44\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195524475.48_warc_CC-MAIN-20190716015213-20190716041213-00089.warc.gz\"}"}
https://www.qalaxia.com/questions/The-straight-line-math-l2math-passes-through-the-origin-and
[ "", null, "Krishna\n0\n\nStep 1: Substitute either values slope(m) and any one point into the equation of a straight line.\n\nNOTE:  Take the point as (0, 0)\n\nFORMULA: Equation of a straight line\n\ny - y_1 = m(x - x_1)\n\nWhere m = slope and ( x_1, y_1) = any point\n\nEXAMPLE: I took (2, 0) as a point and m = \\frac{4}{5}\n\ny - 0 = \\frac{4}{5} ( x - 2)\n\n5y = 4x - 8\n\nStep 2: Simplify and make the equation in the form of Ax + By + C =  0" ]
[ null, "https://d3648m43e37g8z.cloudfront.net/o72ob1cnnte20haf0hsrriib7agw8uprq765mp1519993736668.jpeg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91044414,"math_prob":0.9999931,"size":512,"snap":"2019-51-2020-05","text_gpt3_token_len":179,"char_repetition_ratio":0.13779527,"word_repetition_ratio":0.0,"special_character_ratio":0.375,"punctuation_ratio":0.096491225,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999906,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-29T20:41:02Z\",\"WARC-Record-ID\":\"<urn:uuid:2da2abcb-0776-4deb-aa35-63a71b4a6034>\",\"Content-Length\":\"17550\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c3428e8a-546e-4209-a4d5-ed3e4a5282ae>\",\"WARC-Concurrent-To\":\"<urn:uuid:4d870b54-e732-48ae-ad23-ba4da1f50b67>\",\"WARC-IP-Address\":\"3.95.14.250\",\"WARC-Target-URI\":\"https://www.qalaxia.com/questions/The-straight-line-math-l2math-passes-through-the-origin-and\",\"WARC-Payload-Digest\":\"sha1:Q2ILZNBBCBIBIFKEFPCLCES27XVOBEFI\",\"WARC-Block-Digest\":\"sha1:KT23CV5EOSDI5MH5ZKFW4A66RTBS2TPG\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251802249.87_warc_CC-MAIN-20200129194333-20200129223333-00220.warc.gz\"}"}
https://richardzach.org/2005/05/30/the-second-incompleteness-theorem-for-weak-theories/?replytocom=153
[ "# The Second Incompleteness Theorem for Weak Theories\n\nThe other day, Arnon Avron asked on FOM whether the Second Incompleteness Theorem holds for Robinson’s Q. I remembered wondering about that myself back when I was preparing for the foundations qual. The issue is this: the usual proof of the unprovability of Con(T) requires that T is not just “sufficiently strong” in the usual sense, i.e., you can arithmetize syntax in T, but the provability predicate used to formalize Con(T) must satisfy the Hilbert-Bernays-Löb provability conditions:\n\n1. If T proves A, then T proves Pr(A)\n2. T proves Pr(AB) → (Pr(A) → Pr(B))\n3. T proves Pr(A) → Pr(Pr(A))\n\nNow Q doesn’t have induction, so it can prove only very few universal claims, in particular, it does not satisfy condition 3. (Does it satisfy condition 2?) So does Q prove Con(Q), i.e., ¬PrQ(0 = 1)? It does not. The first result to this effect, as far as I know, is Bezboruah and Shepherdson, Gödel’s Second Incompleteness Theorem for Q (JSL 1976). In the late 70s, Wilkie and others developed the model theory of weak fragments of arithmetic sufficiently for more informative results to be obtained, e.g., Pudlák’s 1985 “Cuts, consistency statements and interpretations.” Wilkie and Paris (On the scheme of induction for bounded arithmetic formulas, APAL 1987, Thm 3.5) showed that Con(Q) isn’t even provable in IΔ0 + exp.\n\nI recommend Bezboruah and Shepherdson’s paper also for the references to the discussion in Kreisel’s papers and elsewhere about the philosophical significance of the derivability conditions. The classic treatment of the logic of provability predicates satisfying these conditions is Boolos’s Logic of Provability.\n\nUPDATE: There are some good replies to Arnon’s question in the FOM archives, especially that by Curtis Franks and the one by Arnold Beckmann. Incidentally, Curtis’s post made me look up Buss’s article on proof theory of arithmetic in the Handbook of Proof Theory, where Buss formulates the Second Incompleteness Theorem as:\n\nLet T be a decidable [should be: axiomatizable], consistent theory and suppose TQ. The T does not prove Con(T). (p. 121).\n\n## 2 thoughts on “The Second Incompleteness Theorem for Weak Theories”\n\n1.", null, "Anonymous says:\n\nThis is all I could find from Boolos (LoP).”Certain appreciably weaker systems, whose axioms do not include all of the induction axioms, suffice for the theory of finite sequences and the proofs of the derivability condtiions (for PA and for those weaker systems themselves). In those weaker theories, it should also be noted, stronger theorems about the syntax fo PA than those we have stated can also be proved. One example is the single sentence of the language of PA that generalizes condition (ii) [your (2)]…another is a similar generalization of conditions (iii) [your (3)].”He doesn’t say specifically what those “appreciably weaker systems” are, but later he proves the diagonal lemma specifically for Q. (The converse of (1) is also provable in Q–see his “Computability and Logic”.) Posted by lumpy pea coat\n\n2.", null, "Anonymous says:\n\nsorry for this comment but I have some questions and don’t know where to askI don’t understand why can we construct a set theory using math.logic and construct math. logic using a set theory…isn’t it incorrect construction(of foundations)?I can’t find answer in a books, net, google…maybe it’s trivial please help me, thanks" ]
[ null, "https://secure.gravatar.com/avatar/", null, "https://secure.gravatar.com/avatar/", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9066152,"math_prob":0.9064258,"size":3293,"snap":"2021-04-2021-17","text_gpt3_token_len":799,"char_repetition_ratio":0.10367893,"word_repetition_ratio":0.0,"special_character_ratio":0.22259338,"punctuation_ratio":0.10675039,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9725099,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-14T11:46:11Z\",\"WARC-Record-ID\":\"<urn:uuid:7d797f3b-96fb-405a-bf9f-e73d53442e7a>\",\"Content-Length\":\"49730\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7148ee9-236d-4ad1-b55a-d85c4b0f237f>\",\"WARC-Concurrent-To\":\"<urn:uuid:d76e5be7-a4e1-4907-bd3a-26faf5c8c521>\",\"WARC-IP-Address\":\"74.208.236.59\",\"WARC-Target-URI\":\"https://richardzach.org/2005/05/30/the-second-incompleteness-theorem-for-weak-theories/?replytocom=153\",\"WARC-Payload-Digest\":\"sha1:57AT43M2XANFRHEZHW4QBYBZ5NILVAXT\",\"WARC-Block-Digest\":\"sha1:UFLRCMYHAMF5UTL7BJLYDX6QZFZ2JNHS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038077810.20_warc_CC-MAIN-20210414095300-20210414125300-00440.warc.gz\"}"}
http://qdlianzhu.com/qspevdu_d001017008
[ "• 安徽\n• 安庆市\n• 蚌埠市\n• 亳州市\n• 巢湖市\n• 池州市\n• 滁州市\n• 阜阳市\n• 合肥市\n• 淮北市\n• 淮南市\n• 黄山市\n• 六安市\n• 马鞍山市\n• 宿州市\n• 铜陵市\n• 芜湖市\n• 宣城市\n• 广西\n• 百色市\n• 北海市\n• 崇左市\n• 防城港市\n• 贵港市\n• 桂林市\n• 河池市\n• 贺州市\n• 来宾市\n• 柳州市\n• 南宁市\n• 钦州市\n• 梧州市\n• 玉林市\n• 河南\n• 安阳市\n• 鹤壁市\n• 焦作市\n• 开封市\n• 洛阳市\n• 漯河市\n• 南阳市\n• 平顶山市\n• 濮阳市\n• 三门峡市\n• 商丘市\n• 新乡市\n• 信阳市\n• 许昌市\n• 郑州市\n• 周口市\n• 驻马店市\n• 吉林\n• 白城市\n• 白山市\n• 长春市\n• 吉林市\n• 辽源市\n• 四平市\n• 松原市\n• 通化市\n• 延边朝鲜族自治州\n• 广东\n• 潮州市\n• 东莞市\n• 佛山市\n• 广州市\n• 河源市\n• 惠州市\n• 江门市\n• 揭阳市\n• 茂名市\n• 梅州市\n• 清远市\n• 汕头市\n• 汕尾市\n• 韶关市\n• 深圳市\n• 阳江市\n• 云浮市\n• 湛江市\n• 肇庆市\n• 中山市\n• 珠海市\n• 辽宁\n• 鞍山市\n• 本溪市\n• 朝阳市\n• 大连市\n• 丹东市\n• 抚顺市\n• 阜新市\n• 葫芦岛市\n• 锦州市\n• 辽阳市\n• 盘锦市\n• 沈阳市\n• 铁岭市\n• 营口市\n• 湖北\n• 鄂州市\n• 恩施土家族苗族自治州\n• 黄冈市\n• 黄石市\n• 荆门市\n• 荆州市\n• 直辖行政单位\n• 十堰市\n• 随州市\n• 武汉市\n• 咸宁市\n• 襄阳市\n• 孝感市\n• 宜昌市\n• 江西\n• 抚州市\n• 赣州市\n• 吉安市\n• 景德镇市\n• 九江市\n• 南昌市\n• 萍乡市\n• 上饶市\n• 新余市\n• 宜春市\n• 鹰潭市\n• 浙江\n• 杭州市\n• 湖州市\n• 嘉兴市\n• 金华市\n• 丽水市\n• 宁波市\n• 衢州市\n• 绍兴市\n• 台州市\n• 温州市\n• 舟山市\n• 青海\n• 果洛藏族自治州\n• 海北藏族自治州\n• 海东地区\n• 海南藏族自治州\n• 海西蒙古族藏族自治州\n• 黄南藏族自治州\n• 西宁市\n• 玉树藏族自治州\n• 甘肃\n• 白银市\n• 定西市\n• 甘南藏族自治州\n• 嘉峪关市\n• 金昌市\n• 酒泉市\n• 兰州市\n• 临夏回族自治州\n• 陇南市\n• 平凉市\n• 庆阳市\n• 天水市\n• 武威市\n• 张掖市\n• 贵州\n• 安顺市\n• 毕节市\n• 贵阳市\n• 六盘水市\n• 黔东南苗族侗族自治州\n• 黔南布依族苗族自治州\n• 黔西南布依族苗族自治州\n• 铜仁地区\n• 遵义市\n• 陕西\n• 安康市\n• 宝鸡市\n• 汉中市\n• 商洛市\n• 铜川市\n• 渭南市\n• 西安市\n• 咸阳市\n• 延安市\n• 榆林市\n• 西藏\n• 阿里地区\n• 昌都地区\n• 拉萨市\n• 林芝地区\n• 那曲地区\n• 日喀则地区\n• 山南地区\n• 宁夏\n• 固原市\n• 石嘴山市\n• 吴忠市\n• 银川市\n• 中卫市\n• 福建\n• 福州市\n• 龙岩市\n• 南平市\n• 宁德市\n• 莆田市\n• 泉州市\n• 三明市\n• 厦门市\n• 漳州市\n• 内蒙古\n• 阿拉善盟\n• 巴彦淖尔市\n• 包头市\n• 赤峰市\n• 鄂尔多斯市\n• 呼和浩特市\n• 呼伦贝尔市\n• 通辽市\n• 乌海市\n• 乌兰察布市\n• 锡林郭勒盟\n• 兴安盟\n• 云南\n• 保山市\n• 楚雄彝族自治州\n• 大理白族自治州\n• 德宏傣族景颇族自治州\n• 迪庆藏族自治州\n• 红河哈尼族彝族自治州\n• 昆明市\n• 丽江市\n• 临沧市\n• 怒江傈僳族自治州\n• 曲靖市\n• 思茅市\n• 文山壮族苗族自治州\n• 西双版纳傣族自治州\n• 玉溪市\n• 昭通市\n• 新疆\n• 阿克苏地区\n• 阿勒泰地区\n• 巴音郭楞蒙古自治州\n• 博尔塔拉蒙古自治州\n• 昌吉回族自治州\n• 哈密地区\n• 和田地区\n• 喀什地区\n• 克拉玛依市\n• 克孜勒苏柯尔克孜自治州\n• 直辖行政单位\n• 塔城地区\n• 吐鲁番地区\n• 乌鲁木齐市\n• 伊犁哈萨克自治州\n• 黑龙江\n• 大庆市\n• 大兴安岭地区\n• 哈尔滨市\n• 鹤岗市\n• 黑河市\n• 鸡西市\n• 佳木斯市\n• 牡丹江市\n• 七台河市\n• 齐齐哈尔市\n• 双鸭山市\n• 绥化市\n• 伊春市\n• 香港\n• 香港\n• 九龙\n• 新界\n• 澳门\n• 澳门\n• 其它地区\n• 台湾\n• 台中市\n• 台南市\n• 高雄市\n• 台北市\n• 基隆市\n• 嘉义市\n•", null, "牡丹江生物质颗粒机哪家好_葫芦岛哪里有卖得好的生物质颗粒机\n\n品牌:旺达,,\n\n出厂地:金秀瑶族自治县(金秀镇)\n\n报价:面议\n\n绥中县旺达生物质颗粒综合开发有限公司\n\n黄金会员:", null, "主营:辽宁生物质颗粒,吉林生物质颗粒,黑龙江生...\n\n•", null, "朝阳二手欧曼货车信息-可信赖的辽宁二手欧曼货车推荐\n\n品牌:赢信,,\n\n出厂地:金秀瑶族自治县(金秀镇)\n\n报价:面议\n\n绥中县赢信汽车贸易有限公司\n\n黄金会员:", null, "主营:辽宁二手车价格,辽宁二手解放货车,辽宁二...\n\n•", null, "树脂瓦厂家-树脂瓦专业供货商\n\n品牌:建昌幸福门窗树脂瓦厂,,\n\n出厂地:金秀瑶族自治县(金秀镇)\n\n报价:面议\n\n建昌县幸福门窗树脂瓦厂\n\n黄金会员:", null, "主营:沈阳树脂瓦厂家,辽宁树脂瓦厂家,辽宁树脂...\n\n•", null, "盘锦快递气泡袋|兴城顶固包装为您提供质量有保证的快递气泡袋\n\n品牌:顶固包装,,\n\n出厂地:金秀瑶族自治县(金秀镇)\n\n报价:面议\n\n兴城市顶固包装制品厂\n\n黄金会员:", null, "主营:葫芦岛气泡袋,辽宁纸箱厂,葫芦岛气泡膜,...\n\n•", null, "报价:面议\n\n绥中民生大果榛子苗木繁育中心\n\n黄金会员:", null, "主营:沈阳平欧榛子苗,辽宁杂交榛子苗,沈阳榛子...\n\n•", null, "报价:面议\n\n绥中县旺达生物质颗粒综合开发有限公司\n\n黄金会员:", null, "主营:辽宁生物质颗粒,吉林生物质颗粒,黑龙江生...\n\n•", null, "供应猪蹄烧毛机 肉类液化气烧毛机\n\n品牌:汇康牌\n\n出厂地:凤山县(凤城镇)\n\n报价:面议\n\n诸城市汇康食品机械有限公司\n\n主营:肉类加工设备,果蔬清洗机,食品烘干机,周...\n\n•", null, "鹤岗生物质颗粒价格_辽宁可靠生物质颗粒行情价格\n\n品牌:旺达,,\n\n出厂地:金秀瑶族自治县(金秀镇)\n\n报价:面议\n\n绥中县旺达生物质颗粒综合开发有限公司\n\n黄金会员:", null, "主营:辽宁生物质颗粒,吉林生物质颗粒,黑龙江生...\n\n•", null, "凌河空气能采暖设备厂家-辽宁优惠的空气能采暖设备哪里有供应\n\n品牌:格力空气能,天舒空气能,EK欧科中央空调\n\n出厂地:金秀瑶族自治县(金秀镇)\n\n报价:面议\n\n葫芦岛市巍光新能源有限公司\n\n黄金会员:", null, "主营:葫芦岛空气能采暖,葫芦岛空气能热泵,葫芦...\n\n•", null, "鞍山生物质颗粒成品厂家|供应葫芦岛好的生物质颗粒成品\n\n品牌:旺达,,\n\n出厂地:金秀瑶族自治县(金秀镇)\n\n报价:面议\n\n绥中县旺达生物质颗粒综合开发有限公司\n\n黄金会员:", null, "主营:辽宁生物质颗粒,吉林生物质颗粒,黑龙江生...\n\n• 没有找到合适的葫芦岛市供应商?您可以发布采购信息\n\n没有找到满足要求的葫芦岛市供应商?您可以搜索 批发 公司\n\n### 最新入驻厂家\n\n相关产品:\n牡丹江生物质颗粒机哪家好 
" ]
[ null, "http://image-ali.bianjiyi.com/1/2019/0718/16/15634388523978.jpg", null, "http://www.shangwuwang.com/Public/Images/ForeApps/grade2.png", null, "http://image-ali.bianjiyi.com/1/2019/0320/14/15530647304601.jpg", null, "http://www.shangwuwang.com/Public/Images/ForeApps/grade2.png", null, "http://image-ali.bianjiyi.com/1/2019/0716/16/15632642898806.png", null, "http://www.shangwuwang.com/Public/Images/ForeApps/grade2.png", null, "http://image-ali.bianjiyi.com/1/2018/1025/11/15404380193668.jpg", null, "http://www.shangwuwang.com/Public/Images/ForeApps/grade2.png", null, "http://image-ali.bianjiyi.com/1/2018/1116/17/15423620529917.jpg", null, "http://www.shangwuwang.com/Public/Images/ForeApps/grade2.png", null, "http://image-ali.bianjiyi.com/1/2018/0317/10/5aac8110f0bf6.jpg", null, "http://www.shangwuwang.com/Public/Images/ForeApps/grade2.png", null, "http://imagebooksir.258fuwu.com/images/business/2018920/17/3651641301537434161.jpeg", null, "http://image-ali.bianjiyi.com/1/2018/0317/10/5aac80ab7b993.jpg", null, "http://www.shangwuwang.com/Public/Images/ForeApps/grade2.png", null, "http://image-ali.bianjiyi.com/1/2019/0428/13/15564310724594.png", null, "http://www.shangwuwang.com/Public/Images/ForeApps/grade2.png", null, "http://image-ali.bianjiyi.com/1/2019/0718/17/15634420950605.jpg", null, "http://www.shangwuwang.com/Public/Images/ForeApps/grade2.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.59931344,"math_prob":0.49687618,"size":853,"snap":"2019-43-2019-47","text_gpt3_token_len":1117,"char_repetition_ratio":0.24852768,"word_repetition_ratio":0.0,"special_character_ratio":0.22626026,"punctuation_ratio":0.3469388,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9580647,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,2,null,null,null,9,null,null,null,1,null,null,null,4,null,null,null,1,null,null,null,4,null,null,null,3,null,6,null,null,null,3,null,null,null,4,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-22T15:26:53Z\",\"WARC-Record-ID\":\"<urn:uuid:97019275-b917-4e45-9d1d-33ebfebef927>\",\"Content-Length\":\"104369\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a60eed39-2fd2-484b-a2ac-8199328e1606>\",\"WARC-Concurrent-To\":\"<urn:uuid:2c3a47b6-2e9e-43d0-9198-29392de646e4>\",\"WARC-IP-Address\":\"45.192.106.130\",\"WARC-Target-URI\":\"http://qdlianzhu.com/qspevdu_d001017008\",\"WARC-Payload-Digest\":\"sha1:7SQAWFU3MUCNLIL4AROMBOOQCEKP7FTH\",\"WARC-Block-Digest\":\"sha1:M6MBG2NXR3EGIHOATQ35JEKO7IHV6N7H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496671363.79_warc_CC-MAIN-20191122143547-20191122172547-00065.warc.gz\"}"}
https://www.codespeedy.com/arithmetic-expression-evaluation-in-cpp/
[ "# Arithmetic expression evaluation program in C++\n\nIn this tutorial, we will learn the program for arithmetic expression evaluation in C++ with some cool and easy examples. In many situations, you might have to come up with this type of requirement.\n\nI know you are here just because you need this program in C++.\n\nIf you don’t know the program for arithmetic expression evaluation then you are at the right place. Because, in this tutorial, we going to learn the program for arithmetic expression evaluation.\n\n## Arithmetic expression evaluation in C++\n\nFirstly, For evaluating arithmetic expressions the stack organization is preferred and also effective.\n\nAdditionally, here we come across a keyword Infix notation. Expressions that are represented in this each operator is written between two operands (i.e., x + y).\n\nFrom the above notation, one should distinguish between (x + y)*z and x + (y * z) by using some operator-precedence convention or parentheses. Hence, the sequence of operands and operators in an arithmetic expression does not find the order where these operations are to be performed.\n\n1. Prefix notation\nIf the operator is placed before its operands then it is said to be a prefix notation. Here no parentheses are required, i.e.,\n`+xy`\n1. Postfix notation\nIf the operator is placed after its operands then it is said to be a postfix notation. Here also no parentheses are required, i.e.\n`xy+`\n\nNote; Prefix notation also called polish notation.\nReverse polish notation is postfix notation.\n\nInfix notation is the traditional notation, so for stack organized computers, postfix notation is best-suited notation. Hence, we have to convert them both. This conversion is considered as the operational hierarchy.\n\nWe have different precedence levels for 5 binary operators:\n\n• Lowest: Addition (+) and Subtraction (-)\n• Highest: Exponentiation (^)\n• Next highest: Division(/)  and Multiplication(*).\n\nFor example –\n\nInfix : (x-y)*[z/(p+q)+r]\n\nPost-fix : xy- zpq +/r +*.\n\nThen, inside parentheses (x-y) and (p+q) we perform the arithmetic. The z/(p+q) division has to be done prior to the addition of r. Then the two terms inside the bracket and parentheses are multiplied.\n\nNow by using stack we have to solve out the values.\n\n### The approach for the result are:\n\n1. Expression must be converted into postfix notation.\n2. Operands must be pushed into the stack in the order they appear.\n3. Pop two topmost operands when any operator encounters for executing the operation.\n4. We must push them inside the stack.\n5. 
After these steps, the topmost element of the stack is our result.

Example –

```
Infix: (1+2) * (3+4)

Post-fix: 1 2 + 3 4 + *

Result: 21
```

Stack operations

Example: converting infix notation to postfix notation

```cpp
#include <bits/stdc++.h>
using namespace std;

// precedence of an operator; -1 for anything else
int pre(char ch)
{
    if (ch == '^')
        return 3;
    else if (ch == '*' || ch == '/')
        return 2;
    else if (ch == '+' || ch == '-')
        return 1;
    else
        return -1;
}

void infixToPostfix(string s)
{
    stack<char> st;
    st.push('N');                // sentinel marking the bottom of the stack
    int l = s.length();
    string ns;                   // output (postfix) string
    for (int i = 0; i < l; i++)
    {
        // operands go straight to the output
        if ((s[i] >= 'a' && s[i] <= 'z') || (s[i] >= 'A' && s[i] <= 'Z'))
            ns += s[i];
        else if (s[i] == '(')
            st.push('(');
        else if (s[i] == ')')
        {
            // pop operators until the matching '('
            while (st.top() != 'N' && st.top() != '(')
            {
                ns += st.top();
                st.pop();
            }
            if (st.top() == '(')
                st.pop();        // discard the '(' itself
        }
        else
        {
            // operator: first pop operators of higher or equal precedence
            while (st.top() != 'N' && pre(s[i]) <= pre(st.top()))
            {
                ns += st.top();
                st.pop();
            }
            st.push(s[i]);
        }
    }
    // flush the remaining operators
    while (st.top() != 'N')
    {
        ns += st.top();
        st.pop();
    }
    cout << ns << endl;
}

int main()
{
    string sol = \"a+b*(c^d-e)^(f+g*h)-i\";
    infixToPostfix(sol);
    return 0;
}
```

```
output:
abcd^e-fgh*+^*+i-
```

Example: evaluating a postfix expression

```cpp
#include <iostream>
#include <cstdlib>
#include <cstring>
#include <cctype>
using namespace std;

// simple stack of ints backed by a dynamically allocated array
struct Stack
{
    int first;                   // index of the top element, -1 when empty
    unsigned cap;
    int* array;
};

Stack* stackCreate(unsigned cap)
{
    Stack* stack = (Stack*) malloc(sizeof(Stack));
    if (!stack) return NULL;
    stack->first = -1;
    stack->cap = cap;
    stack->array = (int*) malloc(stack->cap * sizeof(int));
    if (!stack->array) return NULL;
    return stack;
}

int empty(Stack* stack) { return stack->first == -1; }

int pop(Stack* stack)
{
    if (!empty(stack))
        return stack->array[stack->first--];
    return -1;                   // underflow marker
}

void push(Stack* stack, int ope) { stack->array[++stack->first] = ope; }

int postfixEvaluate(const char* expression)
{
    Stack* stack = stackCreate(strlen(expression));
    if (!stack) return -1;
    for (int i = 0; expression[i]; ++i)
    {
        if (isdigit(expression[i]))
            push(stack, expression[i] - '0');   // operand: push its value
        else
        {
            // operator: pop the two topmost operands, push the result
            int val1 = pop(stack);
            int val2 = pop(stack);
            switch (expression[i])
            {
                case '+': push(stack, val2 + val1); break;
                case '-': push(stack, val2 - val1); break;
                case '*': push(stack, val2 * val1); break;
                case '/': push(stack, val2 / val1); break;
            }
        }
    }
    return pop(stack);
}

int main()
{
    char expression[] = \"123*+4-\";
    cout << \"postfix evaluation: \" << postfixEvaluate(expression);
    return 0;
}
```

```
output:
postfix evaluation: 3
```

Explanation

123*+4- is evaluated as (1 + (2*3)) - 4 = 3" ]
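Putting the two stages together: a compact, self-contained variant of our own (limited to single-digit operands and left-associative operators, so it is a sketch rather than a full parser) that reproduces the (1+2)*(3+4) example above:

```cpp
#include <cctype>
#include <iostream>
#include <stack>
#include <string>

// Operator precedence as described above: ^ highest, then * /, then + -.
static int prec(char op) {
    if (op == '^') return 3;
    if (op == '*' || op == '/') return 2;
    if (op == '+' || op == '-') return 1;
    return -1;
}

// Infix -> postfix for single-digit operands (shunting-yard style).
static std::string toPostfix(const std::string& in) {
    std::stack<char> ops;
    std::string out;
    for (char c : in) {
        if (std::isdigit(static_cast<unsigned char>(c))) out += c;
        else if (c == '(') ops.push(c);
        else if (c == ')') {
            while (!ops.empty() && ops.top() != '(') { out += ops.top(); ops.pop(); }
            if (!ops.empty()) ops.pop();               // discard '('
        } else {
            while (!ops.empty() && ops.top() != '(' && prec(c) <= prec(ops.top())) {
                out += ops.top(); ops.pop();
            }
            ops.push(c);
        }
    }
    while (!ops.empty()) { out += ops.top(); ops.pop(); }
    return out;
}

// Evaluate a postfix string of single-digit operands.
static int evalPostfix(const std::string& post) {
    std::stack<int> st;
    for (char c : post) {
        if (std::isdigit(static_cast<unsigned char>(c))) st.push(c - '0');
        else {
            int b = st.top(); st.pop();   // second operand
            int a = st.top(); st.pop();   // first operand
            switch (c) {
                case '+': st.push(a + b); break;
                case '-': st.push(a - b); break;
                case '*': st.push(a * b); break;
                case '/': st.push(a / b); break;
            }
        }
    }
    return st.top();
}

int main() {
    std::string infix = \"(1+2)*(3+4)\";
    std::string post = toPostfix(infix);                      // \"12+34+*\"
    std::cout << post << \" = \" << evalPostfix(post) << '\\n';  // 12+34+* = 21
    return 0;
}
```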
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6632954,"math_prob":0.9854356,"size":5093,"snap":"2022-40-2023-06","text_gpt3_token_len":1332,"char_repetition_ratio":0.14482217,"word_repetition_ratio":0.055012226,"special_character_ratio":0.31415668,"punctuation_ratio":0.1727542,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9985301,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T04:43:57Z\",\"WARC-Record-ID\":\"<urn:uuid:2025879b-d16f-4a3f-a20f-b808ce7c59e5>\",\"Content-Length\":\"55459\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a898960b-0c68-4d8e-8a49-eb651e9235be>\",\"WARC-Concurrent-To\":\"<urn:uuid:758f3d5c-e6f0-42a4-aad7-3a5d0f2b7585>\",\"WARC-IP-Address\":\"104.21.85.98\",\"WARC-Target-URI\":\"https://www.codespeedy.com/arithmetic-expression-evaluation-in-cpp/\",\"WARC-Payload-Digest\":\"sha1:JLVA6TGVMSE7KEJHN2AZDDSAGIIEIOPC\",\"WARC-Block-Digest\":\"sha1:SBLGURB2RHG3WODMK2XKN7QAQHCE5FPY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499842.81_warc_CC-MAIN-20230131023947-20230131053947-00347.warc.gz\"}"}
https://de.scribd.com/document/204855137/3034
[ "Sie sind auf Seite 1von 44\n\n# LIBRARY\n\n## ROTAl AIRCRAFT FSTABUSHMEMT\n\nBEDt:'-ORD\"\nMINISTRY OF SUPPLY\nAERONAUTICAL RESEARCH COUNCIL\nREPORTS AND MEMORANDA\nR\" & M. No. 3034\n(18,234)\nA.R.C. Technical Report\nThe Matrix Force Method of Structural\nAnalysis and Some New Applications\nBy\nPROFESSOR J. H. ARGYRIS, D.E., AND S. KELSEY, B.Sc.(ENG.)\nUniversity of London, Imperial College of Science and Technology.\nLONDON: HER MAJESTY'S STATIONERY OFFICE\n1957\nTWELVE SHILLINGS NET\nThe Matrix Force Method of Structural Analysis and\nSome New Applications\nBy\nPROFESSOR J. H. ARGYRIS, D.E., AND S. KELSEY, B.Sc.(ENG.)\nUniversity of London\nCOMMUNICATED BY THE DIRECTOR-GENERAL OF SCIENTIFIC RESEARCH (AIR),\nMINISTRY OF SUPPLY\nReports and Memoranda No. 3034\nFebruary, I 956\nSummary.-The purpose of this paper is :\n(a) to summarise the basic principles of the matrix force method of structural analysis given in Ref. 1 and to\npresent also some new applications of the general theory;\n(b) to establish and illustrate on simple examples the special method of cut-outs developed in Ref. 1. In this\nprocedure the stresses in a structure with cut-outs are derived from the simpler analysis of the corresponding\nstructure without cut-outs under the same loads and/or temperature distribution;\n(c) to present a method for the determination of the stresses in a structure, some of whose components have been\nmodified subsequent to an initial stress analysis. This procedure, not included in Ref. 1, is, in fact, a\ngeneralisation of the cut-out method (b) and gives the stresses in the modified system solely in terms of the\nstresses of the original system subject to the same loads and temperature distribution.\nThe theory is illustrated on some simple examples which show clearly the extreme simplicity of the powerful\ntechniques (b) and (c). We emphasise that the application of these methods requires only one stress analysis: that\nof the continuous structure under the same external loads, as the modified structure. No additional stress analysis\ndue to other, e.g., perturbation. loads is involved.\nI ntroduction.-It is being increasingly recognised that none of the standard methods of\nstructural analysis is really suitable for the determination of the stress distribution and flexibility\nof modern aircraft structures. This realisation is brought about by many concurring reasons.\nThus, it is accepted that a more reliable analysis of the stresses is necessary now since many of\nthe designs introduced at present are too complex and unusual to be analysed by the often so\ncrude approximation of the past. It must not be forgotten, moreover, that the ever-present\ndanger of a major catastrophy due to fatigue failure compels us to seek a more careful estimate\nof the stresses. But even if an accurate analysis of some of the more modern structural designs\ncould be accomplished by any of the standard methods it would take such a long time as to be\nuseless except as a rather belated check on the final design. In fact; quite apart from technical\nreasons, purely economic considerations require the completion of an accurate stress analysis at\nan early stage to ensure efficient design and to check it prior to production and the planning of\nthe necessary full-scale tests.\nFaced with this situation we require a radical change in our approach to the problem. 
This is\nbeing offered by the combination of matrix methods of structural analysis and the electronic\ndigital computer with its enormous potentialities. However, let us stress ab initio that in speaking\nof matrix methods of analysis we do not mean, and in fact exclude, any attempt to obtain the\nequations in the unknowns by the usual longhand method and then solve them only by the\ninversion of the matrix of coefficients of the unknowns. It is in our opinion futile to seek any\nprogress along these lines. What is required and is in fact essential is a formulation of structural\nanalysis completely in matrix algebra, starting with the compilation of the basic data.\nIn Ref. 1 such a general matrix method of structural analysis has been developed with both\nforces and displacements as unknowns. These two methods are completely dual in character;\nas demonstrated there. In fact, knowing the equations in either of the two procedures we can\nwrite down by a simple' translation' process the equations in the other procedure. We do not\nintend to enter here into any lengthy arguments on which of the two methods is preferable but\nrefer to some relevant considerations in section 1. The important fact is that in either of the two\nmethods we require initially only three simple matrices which, in many cases, can be written\ndown by inspection or by very simple calculations. Naturally, matrix methods of structural\nanalysis have been given before but it is believed that none was as general and comprehensive and\nyet as simple as that of Ref. 1; see also Ref. 7.\nIn the present paper we establish first in Part 1, sections 1 to 8, the main results of structural\nanalysis by the matrix force method. This procedure may be used to programme for one continuous\noperation on the digital computer the complete stress analysis for any system of loads or thermal\nstrains, including the derivation of the flexibility matrix of the given structure. In fact, the\nactual programming for a particular digital computer, the medium-sized Ferranti Pegasus, has\nbeen completed and is being published (see Hunt'\"). It is of interest that this method is already\nbeing applied both in this country and abroad. However, the purpose of this paper is not only\nto derive some of the main results of Ref. 1 but to present also some new developments of practical\nimportance. We refer here only to the surprisingly simple formula in section 8 giving the thermal\ndistortion of an arbitrary structure.\nOne of the most important applications of the general theory of Ref. 1 is that of the stress\nanalysis of structures with cut-outs under any given set of loads or thermal strains. This particular\nmethod is reviewed here in Part II, section 9, and we show, once more, that it is possible to find\nthe stress distribution in a structure with cut-outs, under any load, solely in terms of the stress\ndistribution under the same loading in a corresponding continuous structure where the cut-outs\nhave been filled in. We emphasise that no additional stress analysis of the continuous structure,\nunder other (perturbation) loads, is required as in techniques developed for circular fuselages by\nCicala\" and others. In these, the openings were likewise filled in, but special perturbation stress\nsystems were used to nullify the stresses in the cut-out elements. 
The practical application of\nthe present method is so simple and foolproof that there is little doubt that it is the ideal procedure,\nnot only for finding the stresses in the structure due to the introduction of cut-outs subsequent to\nthe analysis, which often materialise at a late stage of design, but also for the stress analysis of the\nstructure when the cut-outs are known initially. The physical justification of the method derives\nfrom the idea that we can impose such initial strains on the filled-in elements of the continuous\nstructure that their total stresses due to applied loads and initial strains are zero, i.e., that they\nare effectively non-existent. It is now very simple to express the necessary initial strains in\nterms of the given set of loads and hence, derive the stress distribution in the cut-out structure\nfrom the initial stress analysis of the continuous system under the original loads only. Interestingly\nenough the same method was proposed later by Goodey\" who also filled in the missing elements.\nHe, however, used a purely mathematical idea to obtain the final formulae. Thus, Goodey\nconsiders in the continuous fictitious structure the variational problem of the minimum of strain\nenergy with the additional condition of zero stress in the fictitious members. Naturally the final\nresult is essentially identical to that of Ref. 1. For this reason we need not consider it here any\nmore, except to point out that Goodey informs us that he has used it successfully in fuselages\nwith large cut-outs, including the part-removal of frames, etc. The authors' attention has been\ndrawn by Morley to a report\" published in 1949, discussing the case of a cut-out in a circular\nfuselage. This report uses a method akin to that of Goodey' and may be considered to contain\na germ of the idea developed in Ref. 1.\nA generalisation of the above method on cut-outs to deal with structural modifications of\ncomponents suggests itself naturally. This problem is solved in section 12 and the final formulae\nare, of course, similar to those in the cut-out case with only one additional term; a matrix\nexpressing the difference of the flexibilities of the modified and original elements. Here again we\nfind the stress distribution in the modified structure solely in terms of the stresses in the original\n2\nstructure under the same loads or temperature distribution, without any additional stress analysis\nof perturbation systems. The practical importance of the method in view of its simplicity need\nhardly be emphasised. In the last paragraph of Part II we derive also a formula for the\nflexibility matrix of the modified structure in terms of the original flexibility.\nThe theory reviewed in this report is illustrated on a series of examples. Their purpose is\nmainly to draw attention to the potentialities of the method on cut-outs and modifications.\nAdmittedly the cases treated are simple but basically the same operations are involved in any\ncomplex structure. This is shown by the general computer programme developed by Hunt!\".\nAs an example we mention that once the three basic matrices are given, the structural calculations\nfor a wing with a hundred redundancies, under loads and thermal strains, does not take longer\non the Ferranti-Pegasus than approximately a week. 
This, moreover, includes the alternative stress distributions when up to 30 subsequent cut-outs or structural modifications are introduced.

In conclusion we emphasize once more that the progress of structural analysis achieved by these methods is only possible by developing the analysis ab initio in matrix form. With standard longhand notation it would be difficult, if not impossible, to detect many of the important new theorems. But this is not the only aspect where we must change our approach. The basic simplicity of the theory can only be immediately apparent if we free our minds from the confining strain-energy considerations that obscure and complicate the mathematical derivations. By using the unit-load and the unit-displacement methods respectively, as given in Ref. 1, all the results flow out naturally from the initial idea in an immediately obvious form.

PRINCIPAL NOTATION

σxx, etc., σxy, etc. -- Direct and shear stresses
εxx, etc., εxy, etc. -- Direct and shear strains
q -- Shear flow in sheet
[σ] -- Column matrix of direct and shear stresses
[ε] -- Column matrix of direct and shear strains
R -- Column matrix of applied (generalised) forces
r -- Column matrix of (generalised) displacements
X -- Column matrix of redundant (generalised) forces
S -- Column matrix of (generalised) stresses on structural elements
v -- Column matrix of (generalised) strains of structural elements
H -- Column matrix of (generalised) initial strains on structural elements
b -- Rectangular transformation matrix for stresses
f -- Flexibility matrix of unassembled structural elements
F -- Flexibility matrix
θ -- Temperature
1, j, ...., m -- Directions of external forces and displacements
1, i, ...., n -- Directions of redundant forces
Suffix h -- denotes elements to be eliminated or modified
Suffices m, c -- denote structure with modifications and cut-outs respectively incorporated
I -- Unit matrix
O, o -- Zero matrix
A', A^-1 -- Transposed and reciprocal matrix respectively of A
{ .... } -- Column matrix

REFERENCES

1. J. H. Argyris. Energy theorems and structural analysis. A generalised discourse with applications on energy principles of structural analysis, including the effects of temperature and non-linear stress-strain relations. Part I. General theory. Airc. Eng. Vol. XXVI, October and November, 1954; Vol. XXVII, February, March, April and May, 1955.

2. P. M. Hunt. The electronic digital computer in aircraft structural analysis. The programming of the Argyris matrix formulation of structural theory for an electronic digital computer. Part I. A description of a matrix interpretive scheme and its application to a particular example. Airc. Eng. Vol. XXVIII, March, 1956. A.R.C. 18,240, February, 1956. Part II. The use of preset and programme parameters with the matrix interpretive scheme and their application to general purpose programmes for the force method of analysis. Airc. Eng. Vol. XXVIII, April, 1956. Part III. General purpose programmes for the force and displacement methods in large structures including the use of magnetic tape storage. Airc. Eng. Vol. XXVIII, May, 1956.

3. P. M. Hunt. The computational procedure for the Argyris matrix method of structural analysis. Ferranti publication, List CS 72, February, 1956.

4. W. J. Goodey. Notes on a general method of treatment of structural discontinuities. J. R. Ae. Soc. Vol. 59, No. 538, p. 695, October, 1955.
5. L. S. D. Morley and W. K. G. Floor. Load distribution and relative stiffness parameters for a reinforced circular cylinder containing a rectangular cut-out. N.L.L. Amsterdam, Report S.362, 1949.

6. P. Cicala. Effects of cut-outs in semi-monocoque structures. J. Ae. Sci. Vol. 15, No. 3, pp. 171-179, March, 1948.

7. J. H. Argyris. Die Matrizentheorie der Statik. Ingenieur Archiv. Vol. XXV, No. 3, pp. 174-192, May, 1957.

PART I -- The Basic Principles of the Matrix Force Method of Structural Analysis

1. The Problem.--Two basic methods exist for the analysis of arbitrary structures and are developed in matrix form in Ref. 1:

(a) the force method, in which forces or stress resultants (or generalised forces) are taken as unknowns;

(b) the displacement method, in which deflections or slopes (or generalised displacements) are taken as unknowns.

The two methods are completely dual in character, as demonstrated in Ref. 1, where Table 2 of the March, 1955, issue of Aircraft Engineering gives the basic theorems. For a number of reasons the force method is, in general, superior for the analysis of continuous systems like stressed-skin structures. Firstly, in the force analysis of such systems there are fewer unknowns than in the displacement analysis and the equations are usually better conditioned. Furthermore, method (a) yields directly the flexibility matrix, which we require for aero-elastic investigations, whilst method (b) gives correspondingly the stiffness matrix from which the flexibility can only be obtained by inversion. Finally, the stress determination by the force analysis is always more accurate (and considerably so), since in the displacement analysis the stresses are found by what is essentially a differentiation process. On the other hand the compilation of the basic matrices may be simpler by the displacement method when the structure is irregular.

We consider here only the matrix formulation of the force method of analysis, of which we give a general presentation including some important new theorems developed subsequently to Ref. 1. However, following the procedure of Table 2 of Ref. 1, all equations given may be immediately 'translated' into the dual relations of the displacement method; see also Ref. 7.

Assume a linearly elastic structure subjected to a system of m loads or generalised forces R1, R2, ...., Rj, ...., Rm which we denote by the column matrix:

R = {R1 R2 .... Rj .... Rm}   (1)

Let the structure have n redundancies Xi which we form in a column matrix:

X = {X1 X2 .... Xi .... Xn}   (2)

The particular system arising from the imposition of X = 0 on our structure is called the basic system and is here statically determinate.

We seek to determine:

(a) the k stresses or stress resultants Sf denoted by the column matrix:

S = {S1 S2 .... Sf .... Sk}   (3)

(b) the flexibility matrix F for the points and directions of the applied forces R. By definition:

r = F R   (4)

where

r = {r1 r2 .... rj .... rm}   (5)

is the column matrix of the (generalised) deformations (deflections) in the directions of the forces R, and F is a symmetric square matrix.

It is obvious that we can always write:

S = b0 R + b1 X   (6)

where the matrices b0 and b1 are of dimensions k × m and k × n respectively and are determined solely by statics. Thus, the elements of the fth row of b0 are the (generalised) stresses at f due to each of the unit loads Rj = 1 applied to the basic system.
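The equations fixing the redundancies X in terms of R are established later in the report and are not reproduced in this excerpt. As a bridge for the reader, the standard force-method relations in the notation just defined (a summary sketch using the element flexibility matrix f of the unassembled elements and the compatibility condition b1'v = 0 at the cuts; the explicit inverses are our shorthand, not a quotation):

```latex
% element strains v = f S = f (b_0 R + b_1 X); compatibility b_1' v = 0 gives
X = -\,(b_1' f\, b_1)^{-1}\, b_1' f\, b_0\, R ,
\qquad
S = \left[\, b_0 - b_1 (b_1' f\, b_1)^{-1} b_1' f\, b_0 \,\right] R = b\, R ,
```

and, by the same compatibility argument applied to initial strains H, the redundancies of equation (10) below follow as X_H = -(b_1' f b_1)^{-1} b_1' H.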
Note that the stresses:

S0 = b0 R   (7)

are statically equivalent to the applied loads R. Once X has been determined in terms of R, equation (6) can be written in the form:

S = b R   (8)

The above considerations may be applied immediately to the more general case when the basic system is itself redundant (see Ref. 1).

Parallel to the applied loads R, the system may also be subjected to initial strains H (e.g., thermal strains) which we arrange again in a column matrix:

H = {Ha Hb .... Hg .... Hs}   (9)

Due to the redundancy of the system the initial strains cannot, in general, develop freely and stresses are set up. These may be calculated from:

SH = b1 XH   (10)

where XH are the redundant stresses or forces due to H.

2. A Simple Example for the b0 and b1 Matrices.--

FIG. 1. Redundant pin-jointed framework to illustrate b0 and b1 matrices.

The b0 and b1 matrices are most easily illustrated on a conventional framework. Consider, for example, the five times redundant structure of Fig. 1, assumed symmetrical. Due to symmetry of loading and structure the system is effectively only three times redundant. As basic system we select the statically determinate structure of Fig. 1b obtained by cutting bars 5, 6 and 12. The load and redundancy matrices are:

R = {R1 R2}   (1a) ;   X = {X1 X2 X3}   (2a)

The b0 and b1 matrices for half the structure, including the central vertical member (11), are found easily and are given in Table 1. The numbers over the columns refer to the external loads R1, R2 and the redundancies X1, X2, X3 respectively, and the numbers opposite the rows to the numbered bars of Fig. 1b.

TABLE 1 -- Basic Matrices for Framework of Fig. 1

b0 (columns 1, 2 = loads R1, R2; rows 1 to 12 = bars):

          1        2
  1       0        0
  2     -a/h    -a/2h
  3      a/h     a/2h
  4      a/h      a/h
  5       0        0
  6       0        0
  7     -d/h    -d/2h
  8       0     -d/2h
  9       0        0
 10       1       1/2
 11       0        1
 12       0        0          (11)

b1 (columns 1, 2, 3 = redundancies X1, X2, X3; rows 1 to 12 = bars):

          1        2        3
  1     -a/d      0        0
  2       0     -a/d       0
  3     -a/d      0        1
  4       0     -a/d       1
  5       1       0        0
  6       0       1        0
  7       1       0        0
  8       0       1        0
  9     -h/d      0        0
 10     -h/d    -h/d       0
 11       0    -2h/d       0
 12       0       0        1          (12)

The flexibility matrix f of the unassembled bars is the 12 × 12 diagonal matrix

f = diag { a/2A1E  a/2A2E  a/2A3E  a/2A4E  d/2A5E  d/2A6E  d/2A7E  d/2A8E  h/2A9E  h/2A10E  h/A11E  h/2A12E }   (33a)

3. The Idealised Structure.--The problem of determining the stress distribution in a shell type of structure characteristic of major aircraft components is strictly infinitely redundant. Hence, it is necessary to introduce, for practical calculations, considerable simplifications or idealisations. These are discussed at some length in Ref. 1. It is sufficient for the purpose of the present paper to mention the standard simplification by which a wing structure with spars and ribs (if such are used), approximately orthogonal to each other, is represented by a three-dimensional grid of flanges carrying only direct loads and walls carrying only shear flows. The cross-section of the wing may be arbitrary and the spars may taper differently in plan view and elevation but the angle of taper 2θ is assumed to be so small that cos θ ≈ 1 and sin θ ≈ θ. The same restriction applies to the taper of ribs in plan view. 
Note that delta wings are included in these specifications as long as spars and ribs conform to the stipulated geometry.

In each flange of the idealised structure the direct load is assumed to vary linearly between adjacent nodal points of the grid. Furthermore, the shear flow in each field bounded by two intersecting pairs of adjacent flanges is taken as constant. The method of extracting this simplified system or idealised model from the actual structure will be found in Ref. 1 (March, 1955, p. 87), where also a more refined procedure is given for increasing the accuracy of the stress analysis.

For fuselages the method of idealisation is very similar to that in wings and need not be discussed here. Detailed procedures for more complex wing and aircraft structures will be given in subsequent publications.

The idealised structure possesses now a finite degree of redundancy, the determination of which requires more subtle considerations, especially in the presence of cut-outs, than are necessary in frameworks. This aspect is also discussed in Ref. 1. In what follows we assume that the process of idealisation has already been performed and that the degree of redundancy of the idealised structure is known. For brevity we shall use the terms 'structure' or 'system' for idealised structure.

4. The Selection of the Basic System and the Redundant Forces.--For a major aircraft component like a wing the selection of the basic system and the redundancies deserves careful attention since a skilful choice can simplify the calculations considerably. Our ideas, however, of what constitutes an appropriate choice must now be drastically revised due to the development of structural analysis in matrix form and the introduction of the electronic digital computer with its enormous potentialities. Thus, in the pre-electronic era of computations it was, in general, accepted that the choice of the basic system (which in itself might be statically indeterminate) was the more successful the less the chosen statically equivalent system,

S0 = b0 R   (7)

differed from the actual stress matrix

S = b R   (8)

This argument was then correct since the calculation of a highly redundant system like a wing could not but take considerable time (if ever it was undertaken) and the complete design of the aeroplane could obviously not wait for its completion. Hence a reliable first estimate of the stresses in the structure, based solely on the statically equivalent system, was naturally indicated.

However, the advent of the digital computer requires a radical change in our approach to structural analysis. To realise this inevitable development we need only consider that it is possible to programme on the digital computer, in one continuous operation, the complete structural analysis of a wing and fuselage for any loads and temperature distribution including the derivation of the 'exact' flexibility matrix. Moreover, the time taken by the machine in performing all these calculations is strikingly small. For example, for the analysis of a typical wing structure with a hundred redundancies, a medium-sized computer like the Ferranti-Pegasus will take approximately a week and give also the flexibility matrix for, say, 50 points, once all initial data of the structure like the b0 and b1 matrices are known and stored in the machine. In fact, the compilation and checking of these latter matrices and, in particular, b1, will, in general, take considerably longer than the actual machine operations.
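To make the machine arithmetic of the last remark concrete, the following is a minimal sketch in modern form (ours, not the Pegasus programme of Refs. 2 and 3) of the computation performed once b0, b1 and the diagonal f are stored: it forms D1 = b1'fb1 and D0 = b1'fb0, solves D1 X = -D0 R by Gaussian elimination, and recovers the stresses from equation (6). The matrices used here are small illustrative data, not those of Table 1.

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// C = A' * diag(f) * B   (A is k x p, B is k x q, f has k entries)
static Mat atfb(const Mat& A, const std::vector<double>& f, const Mat& B) {
    size_t k = A.size(), p = A[0].size(), q = B[0].size();
    Mat C(p, std::vector<double>(q, 0.0));
    for (size_t i = 0; i < p; ++i)
        for (size_t j = 0; j < q; ++j)
            for (size_t e = 0; e < k; ++e)
                C[i][j] += A[e][i] * f[e] * B[e][j];
    return C;
}

// Solve D x = r by Gaussian elimination with partial pivoting.
static std::vector<double> solve(Mat D, std::vector<double> r) {
    size_t n = D.size();
    for (size_t c = 0; c < n; ++c) {
        size_t piv = c;
        for (size_t i = c + 1; i < n; ++i)
            if (std::abs(D[i][c]) > std::abs(D[piv][c])) piv = i;
        std::swap(D[c], D[piv]); std::swap(r[c], r[piv]);
        for (size_t i = c + 1; i < n; ++i) {
            double m = D[i][c] / D[c][c];
            for (size_t j = c; j < n; ++j) D[i][j] -= m * D[c][j];
            r[i] -= m * r[c];
        }
    }
    std::vector<double> x(n);
    for (size_t i = n; i-- > 0; ) {
        double s = r[i];
        for (size_t j = i + 1; j < n; ++j) s -= D[i][j] * x[j];
        x[i] = s / D[i][i];
    }
    return x;
}

int main() {
    // Illustrative data only: 4 elements, 1 load column, 2 redundancies.
    Mat b0 = {{1}, {1}, {0}, {0}};                 // k x m statics data
    Mat b1 = {{-1, 0}, {0, -1}, {1, 0}, {0, 1}};   // k x n redundancy data
    std::vector<double> f = {1, 2, 1, 2};          // element flexibilities
    double R = 10.0;                               // single applied load

    Mat D1 = atfb(b1, f, b1);                      // b1' f b1
    Mat D0 = atfb(b1, f, b0);                      // b1' f b0
    std::vector<double> rhs = {-D0[0][0] * R, -D0[1][0] * R};
    std::vector<double> X = solve(D1, rhs);        // redundancies

    for (size_t e = 0; e < b0.size(); ++e) {       // S = b0 R + b1 X, eq. (6)
        double S = b0[e][0] * R;
        for (size_t i = 0; i < X.size(); ++i) S += b1[e][i] * X[i];
        std::printf("S[%zu] = %g\n", e + 1, S);
    }
    return 0;
}
```

The three arrays b0, b1 and f are the only structural input, which is precisely the point made above about the compilation of the basic matrices dominating the overall effort.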
Actually the time for the preparation of the initial matrices will be further increased if we select the basic system on the same considerations as in the standard longhand method. Now it is obvious that the latter considerations lose their validity once we use a digital computer since the 'exact' stress distribution can be obtained so quickly as to be available for immediate application in the design office. In place of our old ideas a new criterion of the suitability of a basic system emerges: the best choice of a basic system is that in which b0 may be written down either by inspection or at most by the very simplest static calculations. This not only reduces drastically the time necessary for its determination but allows also easy checking of the data. The latter point is, in fact, most important and cannot be emphasised sufficiently.

FIG. 2. Delta wing structure, illustrating selection of basic system. (Basic system formed by independent spars.)

To fix ideas, consider the multispar structure, in the form of a delta wing, of Fig. 2. It is obvious that by far the simplest basic system is that of the independent spars, for which b0 may be written down with great ease. Naturally, such a b0 bears no similarity to the final b but as stated above this is of no consequence once we accept the digital computer as the standard tool of structural analysis.

Associated with the choice of the b0 matrix is the selection of the redundancies X which determine the b1 matrix. Here, too, for ease of compilation and checking of this basic matrix the Xi systems are preferably chosen as simple as possible; in particular, each system should affect as few elements of the structure as possible. Moreover, such a choice usually satisfies the essential requirement of well-conditioned equations in the unknown redundancies. A point particularly emphasised in Ref. 1 is that we must not take the customary narrow view and consider redundancies as single forces or moments applied at actual physical cuts of the structure. A more satisfactory approach is to select as redundancies self-equilibrating systems of forces or stresses (i.e., generalised forces). Besides giving a far greater flexibility in the choice of redundancies this procedure yields more symmetrical and immediately obvious expressions for the elements of the b1 matrix. Three standard types of system for wing analysis were proposed in Ref. 1 (March, 1955). These are the X, Y and Z systems reproduced in Figs. 3, 4 and 5 of the present paper, where full details are given of the flange loads and shear flows due to unit value of each of them. The Y-system (Fig. 4), which acts over two bays in the spanwise direction, is seen to be closely related to the boom load function P introduced by Argyris and Dunne in 1947*. The Z-system (Fig. 5) is similar to the Y-system but is applied in the chordwise direction. Finally, the X-systems (Fig. 3) are essentially diffusion systems applied over four panels in the upper or lower covers. Ref. 1 discusses in detail the selection of the best combination of X, Y, Z systems in various wing structures. In the present notation the column matrix X is taken to include all these systems.

FIG. 3. Unit, self-equilibrating stress system of type X = 1. (Multi-web wing.)

FIG. 4. Unit, self-equilibrating stress system of type Y = 1. (Longitudinal four-boom tube; multi-web wing.)
*See J. H. Argyris and P. C. Dunne, 'The General Theory, etc.', J. R. Ae. Soc. Vol. LI, February, September, November, 1947.

FIG. 5. Unit, self-equilibrating stress system of type Z = 1. (Transverse four-boom tube; multi-web wing.)

5. The Unit-Load Method and Some Applications.--The analysis of structures by the force method is most conveniently based on the formulation of the unit-load method given in Ref. 1. This approach has, moreover, the great advantage that the effect of thermal or other initial strains can be included immediately without any further development of the theory. The unit-load method indicates also the procedure to be followed when the basic system is not statically determinate as in the standard analyses but is itself redundant. Finally, we may apply this basic theorem to structures with non-linear elasticity.

We introduce the following definitions:

(1) Let εxx, εxy, etc., be the true direct and shear strains in a structure due to a given set of loads, prescribed displacements, thermal strains or any other initial strains, e.g., those arising due to inaccurate manufacture. Also let rj be the deformation (deflection, rotation or generalised displacement) at the point and direction j due to the same causes.

(2) Let σ̄xxj, σ̄xyj, etc., be the direct and shear stresses statically equivalent to a unit load (force, moment or generalised force) applied at the point and direction j. Note that σ̄xxj, σ̄xyj, etc., need satisfy merely the external and internal equilibrium conditions of the structure but not necessarily the compatibility conditions. In general, there is an infinite number of such systems, of which one is the true stress system due to the unit load and consequently satisfies both equilibrium and compatibility conditions.

The unit-load theorem may now be written in the form:

1 · rj = ∫V [σ̄]j' [ε] dV   (13)

where [ε], [σ̄]j are the column matrices:

[ε] = {εxx εyy εzz εxy εyz εzx}   (14)

[σ̄]j = {σ̄xx σ̄yy σ̄zz σ̄xy σ̄yz σ̄zx}j   (15)

and the integration extends over the volume V of the structure. An elementary illustration of this theorem is shown in Fig. 6. It is assumed that the true strains in the wing due to any external loading or initial (thermal) strains are known. To find then the deflection rj in any given direction j we need only determine the simplest possible statically equivalent stress system [σ̄]j corresponding to a unit load at j and apply equation (13). The most suitable choice is obviously the E.T.B. stress system in the independent spar under the unit load. Note that the structure need not be linearly elastic but may obey any non-linear stress-strain relation.

FIG. 6. Example of a statically equivalent system for deflection by the unit-load method. (Temperature distribution; statically equivalent system for deflection rj.)

An alternative form of theorem (13) is occasionally useful but is only applicable to linearly elastic structures. Thus, in such systems (13) may also be written:

1 · rj = ∫V [σ]j' [ε̄] dV   (13a)

where

[σ]j = {σxx σyy σzz σxy σyz σzx}j   (15a)

is the true stress matrix corresponding to the unit load at j and

[ε̄] = {ε̄xx ε̄yy ε̄zz ε̄xy ε̄yz ε̄zx}   (14a)

is the strain matrix arising from the imposition of the given loads and initial strains on a system merely statically equivalent to the given structure. Thus [ε̄] may be found in any suitable statically determinate basic system.

The introduction of statically equivalent stress or strain systems can obviously simplify the calculations considerably.

We present now some very simple applications of theorem (13) which are helpful to subsequent developments.

(a) Field with constant shear.--

FIG. 7. Generalised strain v of a rectangular field under constant shear flow.

Consider a rectangular field under a constant shear flow (see Fig. 7). The true strain system is q/Gt and the selected unit-load system is the unit shear flow. Application of (13) yields:

v = (ω/Gt) q   (16)

where v is a generalised displacement corresponding to the generalised force of the unit shear flow, and ω = ab is the area of the shear field. Result (16) is, of course, trivial and follows immediately from the value q/Gt of the shear strain.

We call

f = ω/Gt   (17)

the (shear) flexibility of the field. Thus:

v = f q   (16a)
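A one-line check of (16) against the general theorem (13), under the usual thin-sheet assumptions (constant shear stress q/t through a field of plan dimensions a × b and uniform thickness t; our summary, not the report's text):

```latex
% unit shear flow gives \bar{\sigma}_{xy} = 1/t ; true strain \varepsilon_{xy} = q/Gt ; volume V = abt
v = \int_V \bar{\sigma}_{xy}\, \varepsilon_{xy}\, dV
  = \frac{1}{t}\cdot\frac{q}{Gt}\cdot abt
  = \frac{ab}{Gt}\, q = \frac{\omega}{Gt}\, q .
```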
Thus [ε̄] may be found in any suitable statically determinate basic system. The introduction of statically equivalent stress or strain systems can obviously simplify the calculations considerably.

We present now some very simple applications of theorem (13) which are helpful to subsequent developments.

(a) Field with constant shear.-

FIG. 7. Generalised strain v of a rectangular field under constant shear flow.

Consider a rectangular field under a constant shear flow q (see Fig. 7). The true strain system is q/Gt and the selected unit-load system is the unit shear flow. Application of (13) yields:

$$ v = \frac{\omega}{Gt}\,q \quad (16) $$

where v is a generalised displacement corresponding to the generalised force of the unit shear flow, and ω = ab is the area of the shear field. Result (16) is, of course, trivial and follows immediately from the value q/Gt of the shear strain.

We call

$$ f = \frac{\omega}{Gt} \quad (17) $$

the (shear) flexibility of the field. Thus:

$$ v = fq \quad (16a) $$

(b) Flange under a linearly varying end load.-

FIG. 8. Derivation of generalised strain v for a flange with linearly varying end load.

Consider a flange of constant area A, Young's modulus E and length l, subjected to an end load varying linearly from S_1 to S_2 (see Fig. 8). Overall equilibrium is achieved by uniform tangential shear flows applied to the flange. The direct load S at any station x may be written:

$$ S = \left[\left(1 - \frac{x}{l}\right)\;\; \frac{x}{l}\right]\begin{Bmatrix} S_1 \\ S_2 \end{Bmatrix} \quad (18) $$

If we consider the matrix {S_1 S_2} as defining the external load system on the flange, then equation (18) is merely a particular example of equation (8), S = bR, with

$$ b = \left[\left(1 - \frac{x}{l}\right)\;\; \frac{x}{l}\right], \qquad R = \{S_1\ S_2\} \quad (19) $$

The true strains in the flange are defined by the end strains

$$ \{\varepsilon_1\ \varepsilon_2\} = \frac{1}{EA}\{S_1\ S_2\} \quad (20) $$

We are interested in determining the generalised displacements v = {v_1 v_2} corresponding to the load system S. To find them we select as unit-load systems the alternative systems shown in Fig. 8. Application of (13) yields:

$$ v = \begin{Bmatrix} v_1 \\ v_2 \end{Bmatrix} = \begin{bmatrix} l/3EA & l/6EA \\ l/6EA & l/3EA \end{bmatrix}\begin{Bmatrix} S_1 \\ S_2 \end{Bmatrix} = fS \quad (21) $$

where

$$ f = \begin{bmatrix} l/3EA & l/6EA \\ l/6EA & l/3EA \end{bmatrix} \quad (22) $$

is called the flexibility of the uniform flange corresponding to the loading of Fig. 8. Note that v_1 and v_2 are generalised displacements and not the displacements at the ends of the flange. Similar expressions to (22) may be obtained for flanges with varying area and different end-load variation. Furthermore, the same presentation may also be applied to beams under transverse loads (see Ref. 1, February, 1955, equation (136), p. 47).

In the simple case of a uniform flange under constant end load S, relation (21) reduces to the trivial form:

$$ v = fS \quad (21a) $$

where

$$ f = \frac{l}{EA} \quad (22a) $$

and v is now the elongation Δl of the flange. Form (22a) of the flexibility applies, for example, to the bars in a pin-jointed framework (see Fig. 1).
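As a quick numerical illustration of equations (21) and (22), the short R sketch below builds the flange flexibility matrix and evaluates the generalised strains. All numeric values are illustrative assumptions introduced here, not data from the report:

```r
# Flexibility of a uniform flange under a linearly varying end load,
# equations (21)-(22): v = f %*% S with f = (l/(6*E*A)) * [[2,1],[1,2]].
flange_flexibility <- function(l, E, A) {
  (l / (6 * E * A)) * matrix(c(2, 1,
                               1, 2), nrow = 2, byrow = TRUE)
}

l <- 20            # flange length (in.), illustrative
E <- 10.5e6        # Young's modulus (lb/in^2), illustrative
A <- 0.5           # cross-sectional area (in^2), illustrative
S <- c(1000, 400)  # end loads S1, S2 (lb), illustrative

f <- flange_flexibility(l, E, A)
v <- f %*% S       # generalised displacements v1, v2 of equation (21)
v
```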
(c)* Flange under linearly varying initial strain.-Consider a flange (1, 2) of length l subjected to an initial strain η varying linearly from η_1 at nodal point 1 to η_2 at nodal point 2. η may, for example, be a thermal strain αθ imposed on the flange.

We seek now the generalised strains

$$ v = \{v_1\ v_2\} \quad (23) $$

arising from η and corresponding to the unit-load systems of Fig. 8. We have:

$$ \eta = \left[\left(1 - \frac{x}{l}\right)\;\; \frac{x}{l}\right]\begin{Bmatrix} \eta_1 \\ \eta_2 \end{Bmatrix} \quad (23a) $$

Application of (13) yields:

$$ v = \begin{Bmatrix} v_1 \\ v_2 \end{Bmatrix} = \bar{l}\begin{Bmatrix} \eta_1 \\ \eta_2 \end{Bmatrix} \quad (24) $$

where

$$ \bar{l} = l\begin{bmatrix} 1/3 & 1/6 \\ 1/6 & 1/3 \end{bmatrix} \quad (25) $$

If the initial strain is uniform along the length of the element and the unit load is taken as constant, as would be the case in the bars of a pin-jointed framework under thermal strain, formula (25) reduces to the obvious:

$$ v = l\eta \quad (25a) $$

or

$$ v = l\alpha\theta \quad \text{for thermal extension} \quad (25b) $$

v is now merely the elongation of the flange or bar.

* May be omitted at first reading.

In what follows the (generalised) strains v arising from initial strains are denoted by H. We shall also use the abbreviated terms 'stress' and 'strain' to denote the matrices S and v respectively.

6. The Formulation of the Unit-Load Method in Matrix Notation.-We now turn our attention to a structure consisting of an assembly of elements denoted by a, b, c, ..., g, ..., s. In their simplest form these elements may be shear panels, flanges between nodal points, beams under transverse loads, ribs, etc. We emphasise, however, that the elements need not be the simplest constituent parts of the structure. We may select as elements suitable part-assemblies of the latter components, which may, in fact, form redundant sub-systems. Thus, in a fuselage we can choose as an element a complete ring, and this applies even if the ring is not of uniform circular shape but is itself a complex component, say of arbitrarily varying cross-section and doubly connected form. However, whatever these elements may be, we assume for the moment that their strains v_g are known. They may arise from external loads and/or initial strains (see, for example, equations (16a), (21), (24)). The strains of all elements can be expressed as a column matrix:

$$ v = \{v_a\ v_b\ \ldots\ v_g\ \ldots\ v_s\} \quad (26) $$

Similarly the stresses S on the elements can be written as a matrix:

$$ S = \{S_a\ S_b\ \ldots\ S_g\ \ldots\ S_s\} \quad (27) $$

where equation (27) is merely a re-arranged form of equation (3).

We consider now a stress system S̄ statically equivalent to the m unit loads R_j = 1 applied to our structure. We may write the stresses S̄ as:

$$ \bar{S} = \bar{b}\,\{1\ \ldots\ 1\ \ldots\ 1\} \quad (28) $$

where the matrix notation b̄ expresses the condition that the stresses S̄ need only be statically equivalent to the applied unit forces. Thus, a particular case of b̄ would be b_0, which may be found in the simplest and most convenient basic system of our structure. On the other hand, we can always substitute b (the true stress matrix corresponding to unit loads) for b̄, but the application of a suitable b̄ can often simplify the computations considerably.

We define next the displacement column matrix:

$$ r = \{r_1\ r_2\ \ldots\ r_m\} \quad (29) $$

where the r_j are the actual deformations or generalised displacements in the m directions fixed in the previous paragraph due to the strains v of equation (26). Applying now equation (13) for each of the deflections r we find:

$$ r = \bar{b}'v \quad (30) $$

This is the matrix formulation of the unit-load theorem and is of fundamental importance to our theory. It must be emphasised that in equation (30) the exact cause of the strains v is immaterial; they may be due to loads and/or initial strains.

An alternative form of equation (30) follows immediately from equation (13a). Thus, the column matrix r may also be found from:

$$ r = b'\bar{v} \quad (30a) $$

where b is now the true stress matrix corresponding to the m unit loads R_j = 1, and v̄ is a strain matrix due to the applied loads or initial strains which need be found only in a system statically equivalent to the given structure. 
An interesting application of equation (30a) arises when the stress distribution b due to a set of unit loads is known and we want to find the deflections in their directions due to another system of loads applied in different directions, or due to any initial strains. Then equation (30a) shows that it is not necessary to solve the redundant problem for the second set of loads or the initial strains, since we can determine v̄ in the statically determinate basic system.

If the strains v arise only from a load system R (equation (1)) acting on a linearly elastic structure, then v can always be written in the form:

$$ v_g = f_g S_g \quad (31) $$

where f_g is the flexibility of the g-th element. Hence:

$$ v = fS \quad (32) $$

where S, the stress matrix of equation (27), is

$$ S = bR \quad (8) $$

and f, the flexibility of the unassembled elements of the structure, is:

$$ f = \begin{bmatrix} f_a & 0 & \ldots & 0 & \ldots & 0 \\ 0 & f_b & \ldots & 0 & \ldots & 0 \\ \vdots & & \ddots & & & \vdots \\ 0 & 0 & \ldots & f_g & \ldots & 0 \\ \vdots & & & & \ddots & \vdots \\ 0 & 0 & \ldots & 0 & \ldots & f_s \end{bmatrix} \quad (33) $$

Note that the diagonal elements of f are scalar numbers for shear fields but 2 × 2 matrices for flanges under a varying end load. For a pin-jointed framework, f_g is simply given by equation (22a), and a corresponding typical flexibility matrix f is shown in equation (33a), Table 1, for the system of Fig. 1; the factor 2 in all flexibilities, except for bar (11), arises from the use of symmetry of the structure in writing b_0 and b_1 for only half the system including bar (11). For other types of elements f_g may be determined from equation (13).

Substituting equations (32) and (8) into equation (30) we find the deflections r due to the loads R:

$$ r = \bar{b}'fbR = b'f\bar{b}R \quad (34) $$

Therefore, the flexibility F of the structure in the given m directions is:

$$ F = b'fb = \bar{b}'fb = b'f\bar{b} \quad (35) $$

The last relation follows, of course, also from the reciprocal theorem of Maxwell-Betti (symmetry of the flexibility matrix). Note finally the interesting dual relationships:

$$ S = bR, \qquad \bar{S} = \bar{b}R \quad (8), (28) $$

and

$$ r = b'v = \bar{b}'v = b'\bar{v} \quad (30b) $$

7. The Calculation of the Redundancies X and the True Stress Matrix S.-We consider now once more an n-times redundant structure under a system of m loads R. We select the basic system, in which the redundancies X = 0, and apply to it the loads R. The conditions of compatibility in the original structure demand that the generalised relative displacements v_X at the n 'cut' redundancies are zero if we impose also the correct magnitudes of the redundancies X on the basic system. Thus, in matrix form:

$$ v_X = \{v_{X1}\ \ldots\ v_{Xn}\} = 0 \quad (36) $$

Applying now the unit-load method in the form (30) and noting that in the present case

$$ r \equiv v_X \quad \text{and} \quad \bar{b} \equiv b_1 \quad (37) $$

we find immediately, using equation (33):

$$ v_X = b_1'v = b_1'fbR = b_1'fb_0R + b_1'fb_1X = 0 \quad (38) $$

Thus:

$$ X = -D^{-1}D_0R \quad (39) $$

where

$$ D = b_1'fb_1 \quad \text{and} \quad D_0 = b_1'fb_0 \quad (40) $$

The solution of the n equations (38) in the unknowns X is here obtained formally by inversion of the matrix of the coefficients of the unknowns. Naturally, we may apply other methods of solution; a sketch of the basic pipeline follows below.
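The chain from the three basic matrices to the redundancies and true stresses is easy to mechanise. The R sketch below runs equations (39)-(44) on small hypothetical b_0, b_1 and f matrices (two loads, one redundancy, three elements); the numbers are placeholders chosen only to make the algebra concrete, not values from the report:

```r
# Matrix force method, equations (39)-(44):
#   D  = b1' f b1,  D0 = b1' f b0,  X = -D^-1 D0 R,
#   b  = b0 - b1 D^-1 D0,  S = b R,  F = F0 - D0' D^-1 D0.
b0 <- matrix(c(1, 0,
               0, 1,
               1, 1), nrow = 3, byrow = TRUE)  # basic-system stresses (3 elements x 2 loads)
b1 <- matrix(c(1, -1, 1), nrow = 3)            # unit redundancy system (3 x 1), illustrative
f  <- diag(c(2, 3, 1))                         # flexibilities of the unassembled elements

D  <- t(b1) %*% f %*% b1
D0 <- t(b1) %*% f %*% b0

b  <- b0 - b1 %*% solve(D, D0)                 # true stress matrix, equation (42)
R  <- c(1000, 500)                             # applied loads (lb), illustrative
X  <- -solve(D, D0 %*% R)                      # redundancies, equation (39)
S  <- b %*% R                                  # true element stresses

F0 <- t(b0) %*% f %*% b0                       # flexibility of the basic system, eq. (44)
F  <- F0 - t(D0) %*% solve(D, D0)              # flexibility of the actual system, eq. (43)
```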
In practice, the most appropriate technique will depend on the number of unknowns and the capacity of the store of the digital computer (see Refs. 1, 2 and 3). Thus, the Mercury computer of Ferranti, with a store of 16,000 numbers or orders, can solve directly say 110 linear equations by inversion of the matrix of the coefficients. With a medium-sized computer like the Pegasus of Ferranti we can invert directly matrices of order say 32 × 32; for larger matrices we should have to use on this computer the method of partitioning or other suitable techniques.

Substituting now equation (39) into equation (6) we find the true stresses:

$$ S = b_0R + b_1X = [b_0 - b_1D^{-1}D_0]R \quad (41) $$

Hence:

$$ S = bR \quad \text{with} \quad b = b_0 - b_1D^{-1}D_0 \quad (42) $$

which solves the problem of the stress distribution completely.

To derive the flexibility matrix for the loads R we use equation (35) and note that here we can put b̄ = b_0. Thus, F = b_0'fb, which, using equation (42), becomes:

$$ F = F_0 - D_0'D^{-1}D_0 \quad (43) $$

where

$$ F_0 = b_0'fb_0 \quad (44) $$

is the flexibility of the basic system for the directions of the loads R.

Equations (41) to (44) show that for the complete structural analysis of any redundant structure under a given set of loads R we need only three basic matrices: b_0, b_1 and f. All three can be compiled very easily once we follow the procedure suggested in sections 4 and 5.

The flexibility F of the actual system may also be considered as the condensed matrix of the flexibility of the basic system for the directions of both R and X; writing out the deflections of the actual structure in this partitioned form, we deduce immediately equation (43). This derivation of the flexibility matrix (43) may be used to solve a slightly more general problem. Assume that we know the flexibility F of a structure for m of its points and require the flexibility F_I for k points (k < m) only, when the remaining l = m - k points are fixed in space. Again we write F in the obvious partitioned form:

$$ F = \begin{bmatrix} F_{kk} & F_{kl} \\ F_{lk} & F_{ll} \end{bmatrix} $$

and find easily:

$$ F_I = F_{kk} - F_{kl}F_{ll}^{-1}F_{lk} \quad (43a) $$

As an example of the application of equation (43a) consider a wing of which some of the spars transfer only shear at the root, their flanges being unattached. The flexibility of this wing is known for a set of m = k + l points and directions, including those of the unattached flanges at the root. If a number l of the latter flanges is now fixed at the root we can derive the new flexibility F_I immediately from equation (43a).

The analysis of this paragraph may easily be generalised for a statically indeterminate basic system (see Ref. 1).

8. The Redundancies and Stress Distribution for Thermal or Other Initial Strains H.-Assume that an n-times redundant structure is subjected to initial (e.g., thermal) strains η. In the basic (statically determinate) structure these can develop freely and their magnitude is defined conveniently by the (generalised) strain matrix:

$$ H = \{H_a\ H_b\ \ldots\ H_g\ \ldots\ H_s\} \quad (45) $$

where H_g is the generalised initial strain of element g and may be determined easily from equation (13). We have found, for example, in section 5(c), the matrix H for a flange subject to a linearly varying initial direct strain. For a shear field g under an initial shear strain η_g we confirm immediately the trivial result H_g = ω_g η_g.

Denoting the unknown redundancies due to H by X_H, the true strains v of the system are obviously:

$$ v = fb_1X_H + H \quad (46) $$

Hence the compatibility equation (38) becomes here:

$$ b_1'[fb_1X_H + H] = 0 \quad (47) $$

or

$$ X_H = -D^{-1}b_1'H \quad (48) $$

Using equation (48) in equation (10) we obtain the stresses S_H due to the initial strain matrix H:

$$ S_H = b_1X_H = -b_1D^{-1}b_1'H \quad (49) $$
The thermal distortion (deflection) r_H of the structure in the directions of the m loads R of section 6 is found from equation (30) as:

$$ r_H = \bar{b}'v = \bar{b}'[fb_1X_H + H] \quad (50) $$

$$ r_H = \bar{b}'[I_s - fb_1D^{-1}b_1']H \quad (51) $$

where I_s is the unit matrix of order s. We derive an interesting and surprising result by substituting b_0 for b̄ in the last equation. In fact,

$$ r_H = [b_0' - b_0'fb_1D^{-1}b_1']H = [b_0' - D_0'D^{-1}b_1']H = [b_0 - b_1D^{-1}D_0]'H = b'H $$

which is, of course, a direct consequence of equation (30a). Thus, to determine the thermal distortion of a structure at a given set of points and directions due to imposed initial strains (e.g., thermal strains) we need not solve the redundant problem for these strains if we know the true stress matrix b corresponding to unit loads in the prescribed directions.

FIG. 9. Thermal distortion of a redundant, pin-jointed framework. (Panels of the original figure: true stress matrix b for loads R; true thermal distortion r_H; free thermal strains H in a statically determinate system; free thermal strains H in an alternative statically determinate system.)

To illustrate the application of equation (51), consider the pin-jointed framework of Fig. 9 whose upper flange is subjected to a uniform temperature rise θ. It is required to find the thermal deflections r_Hj at points 1 to 5. Assume that we know from previous calculations the true stress matrix b corresponding to the loads R_1 to R_5. Then, noting that the initial strain matrix in the present case is simply given by (see equation (25b)):

$$ H = \{l_1\alpha\theta\ \ l_1\alpha\theta\ \ l_1\alpha\theta\ \ l_2\alpha\theta\ \ l_2\alpha\theta\ \ l_2\alpha\theta\ \ l_2\alpha\theta\} $$

the deflection matrix r_H follows as r_H = b_u'H, where b_u is the submatrix of b corresponding to the upper flange only.

PART II

The Structure with Cut-Outs or Modified Elements

9. The New Approach to the Problem of Cut-Outs.-The force method developed above is naturally valid for structures with any kind of cut-outs, stiffened or unstiffened by closed frames, as long as the overall geometry and idealisation conform with the initial assumptions. Nevertheless it must be admitted that cut-outs require special attention in this approach, both as far as the b_0 and b_1 matrices are concerned. In fact, in the selection of the basic system the existence of a cut-out will, in general, enforce a more complicated choice than in the corresponding continuous structure without cut-outs. Moreover, in the region of a cut-out we shall have to use special non-standard redundant force systems. A further drawback arising from cut-outs is that the checking of the b_0 and b_1 matrices is in such cases not so straightforward, since the uniformity of the patterns of their elements, characteristic of these matrices in continuous structures, is lost. All these points are considered in some detail in Ref. 1, where the appropriate procedure is described for each type of wing cut-out.

To avoid these complications in the presence of cut-outs it is worthwhile to apply an artifice first developed in Ref. 1, which avoids all the above-mentioned special considerations. Moreover, it gives us the ideal method of finding the redistribution of stresses due to the subsequent introduction of cut-outs in our system without having to repeat all the computations ab initio.

The method is as follows. To preserve the pattern of the matrices and equations disturbed by missing shear panels or flanges, we fill in the cut-outs by introducing fictitious shear panels or flanges with arbitrary thicknesses or cross-sectional areas. 
Naturally, it is usually preferable for computational reasons to select for these dimensions those of the surrounding structure. To obtain, nevertheless, the same flange loads and shear flows in our continuous structure as in the original system, initial strains are imposed on the additional elements of such magnitude that their total stresses due to both loads and initial strains become zero. The effect of the fictitious elements is thus nullified whilst the uniform pattern of our equations is retained.

Let the column matrix of the unknown (generalised) initial strains, imposed on the additional elements only, be H.

In the new continuous structure we determine the flexibility matrix f and the matrices b_0 and b_1. For the subsequent developments we require b_1 also in the partitioned form:

$$ b_1 = \begin{bmatrix} b_{1g} \\ b_{1h} \end{bmatrix} \quad (52) $$

where the suffixes g and h refer to the stresses or forces in the elements of the original structure and the fictitious new elements respectively.

Denoting the column matrix of the redundancies of the continuous structure by X and writing the initial strain matrix imposed on this system in the partitioned form

$$ \begin{Bmatrix} 0 \\ H \end{Bmatrix} \quad (53) $$

we can find the unknown X from equations (38) and (47), which become here:

$$ b_1'fb_1X + b_1'fb_0R + b_1'\begin{Bmatrix} 0 \\ H \end{Bmatrix} = 0 \quad (54) $$

Hence, using equation (52):

$$ X = -D^{-1}D_0R - D^{-1}b_{1h}'H \quad (55) $$

where D and D_0 are given by equation (40) and are found, of course, in the continuous structure.

The stress matrix S follows as:

$$ S = [b_0 - b_1D^{-1}D_0]R - b_1D^{-1}b_{1h}'H \quad (56) $$

The expression in the square bracket is the matrix b of equation (42), which we write in the partitioned form:

$$ b = \begin{bmatrix} b_g \\ b_h \end{bmatrix} \quad (57) $$

where the suffixes g and h have the same meaning as before.

To find now the column matrix H we put the stresses in the additional elements to zero. Thus, the matrix S must become:

$$ S = \begin{Bmatrix} S_c \\ 0 \end{Bmatrix} \quad (58) $$

where S_c are the true stresses (forces) in the original structure with cut-outs.

Applying equations (52), (57), (58) in equation (56) we find:

$$ b_hR - b_{1h}D^{-1}b_{1h}'H = 0 $$

Hence:

$$ H = [b_{1h}D^{-1}b_{1h}']^{-1}b_hR \quad (59) $$

The true stresses in our actual cut structure are then:

$$ S_c = b_gR - b_{1g}D^{-1}b_{1h}'[b_{1h}D^{-1}b_{1h}']^{-1}b_hR \quad (60) $$

which solves our problem. The important and unique characteristic of the present method is that it yields the stresses in the cut structure solely in terms of the stresses already calculated in the fictitious continuous structure. It should be noted that the order of the square matrix to be inverted*,

$$ [b_{1h}D^{-1}b_{1h}']^{-1} $$

is equal to the number of linearly independent stresses or stress resultants to be nullified†. Thus, if we remove one shear panel the order is one and the matrix is a mere scalar number. If we eliminate one flange between two adjoining nodal points the order is two, etc. The amount of work in any practical calculation is surprisingly small, as is illustrated in the examples of Part III.

* The inversion of D will have been performed previously in finding b.
† See example in the authors' paper: 'Structural analysis by the matrix force method with applications to aircraft wings', Wissenschaftliche Gesellschaft für Luftfahrt, Jahrbuch 1956.

The operations leading to equation (60) are easily programmed on the digital computer (see Hunt). To check the computations it is useful to include in the final stress matrix the condition of zero stress in the fictitious elements. 
This is achieved by writing equation (60) in the form:

$$ S_c = b_cR \quad (61) $$

where

$$ b_c = b - b_1D^{-1}b_{1h}'[b_{1h}D^{-1}b_{1h}']^{-1}b_h \quad (62) $$

The suffix c indicates the stresses in the cut structure.

As mentioned already, the method is ideally suited for finding the alteration to the stresses in a structure through a subsequent introduction of cut-outs, such as access doors, which usually seem to materialise at a late stage of design. But even if all cut-outs are known initially, the new approach will easily be seen to be preferable to the standard method when analysing wings and fuselages. Thus, in fuselage stressing, bomb bays, doors and window openings should present no difficulties when formula (62) is used. Naturally, the degree of redundancy is increased by the 'filling in' of the cut-outs, but this is of no importance for the automatic computations envisaged here.

10. Illustration of the Validity of Equation (62).-Consider a structure with n redundancies under any system of loads. Analysing this system by the method of Part I we obtain the complete b matrix. Assume now that we eliminate all redundancies of the original structure by the technique of the previous paragraph. In this case equation (62) should reduce to:

$$ b_c = b_0 $$

i.e., to the stress distribution in the basic system, since the latter is by assumption identical to the cut system.

The proof is straightforward. A simple consideration shows that the matrix b_1h reduces now, for a certain sequence of the redundancies, to the unit matrix I_n of the n-th order; see, for example, the framework of Fig. 1, where b_1h = I_3 can be checked directly on equation (12). Equation (62) reduces hence to:

$$ b_c = b - b_1D^{-1}[D^{-1}]^{-1}b_h = b - b_1b_h $$

However, in the present case b_hR = X, and thus, using also equation (6):

$$ S_c = bR - b_1X = b_0R \qquad \text{q.e.d.} $$

11. The Flexibility of the Cut Structure.-Following equation (35) the flexibility of the cut structure is:

$$ F_c = b_c'fb_c \quad (63) $$

If the cuts do not affect the basic system chosen for the calculation of the continuous structure then, but only then, may we write in place of equation (63):

$$ F_c = b_0'fb_c \quad (64) $$

Note that f includes the finite flexibilities of the fictitious elements, but the result is still correct since the corresponding rows in b_c are zero. If b_c were defined only for the elements of the original cut structure* (i.e., we exclude the zero rows), then the flexibility could be expressed as:

$$ F_c = \hat{b}_c'\hat{f}\hat{b}_c \quad (63a) $$

where f̂ is the flexibility matrix of the unassembled elements of the true cut structure. However, the form (63) appears preferable, since the matrix f would anyhow have to be stored in the computer for the calculation of the D and D_0 matrices.

* b̂_c is then identical to b_g of equation (57).

An alternative approach to the flexibility F_c is of particular interest, since it relates it to the flexibility F of the continuous structure. Following equations (4) and (51) the deflections of the cut structure may be written in the two alternative forms:

$$ r_c = F_cR = FR + b'\begin{Bmatrix} 0 \\ H \end{Bmatrix} $$

or, using equations (57) and (59):

$$ F_c = F + b_h'[b_{1h}D^{-1}b_{1h}']^{-1}b_h \quad (65) $$

Note the symmetry and simplicity of the formula.
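Continuing the toy example sketched earlier, the following R code applies equations (62) and (65): it nullifies a chosen set of h-elements of the fictitious continuous structure and returns the stress matrix and flexibility of the cut structure. The index of the removed element is purely illustrative:

```r
# Cut-out correction, equations (62) and (65).
# h: indices of the fictitious elements whose stresses are to be nullified.
cut_structure <- function(b, b1, D, F, h) {
  b1h <- b1[h, , drop = FALSE]
  bh  <- b[h, , drop = FALSE]
  K   <- solve(b1h %*% solve(D, t(b1h)))           # [b1h D^-1 b1h']^-1
  bc  <- b - b1 %*% solve(D, t(b1h)) %*% K %*% bh  # equation (62)
  Fc  <- F + t(bh) %*% K %*% bh                    # equation (65)
  list(bc = bc, Fc = Fc)
}

cut <- cut_structure(b, b1, D, F, h = 3)  # remove element 3 (illustrative choice)
cut$bc %*% R                              # stresses in the cut structure; row 3 is ~0
```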
12. A Generalisation of the Previous Method: The Structure with Modified Elements.-The previous developments on the method of cut-outs suggest that it should be possible to devise a similar procedure when elements of the structure are modified subsequent to the completion of the structural analysis. Thus, we seek now a device to find the stress distribution in the altered system solely in terms of the stress distribution in the original system. Naturally, we can always find the stress distribution of the modified structure by an ab initio analysis, but this will involve, in general, considerably more lengthy calculations.

The method is as follows. Assume that the b matrix for the original structure has been determined. As in section 9, equations (57) and (52), we partition b and b_1 in the form:

$$ b = \begin{bmatrix} b_g \\ b_h \end{bmatrix} \quad \text{and} \quad b_1 = \begin{bmatrix} b_{1g} \\ b_{1h} \end{bmatrix} $$

where the suffixes g and h refer now to the elements to remain unaltered and to be modified respectively. We denote, furthermore, the flexibilities of the unassembled elements in the original and modified state by the respective symbols f_h and f_hm. We impose now on the h elements of the original structure such initial strains H (again a column matrix) that their total strains are identical to those in the modified elements of the new structure. The stresses in the original system are then also those of our altered system subject to the same loads but no initial strains, and this solves our problem completely.

The stress matrix in both systems is given by:

$$ S = bR - b_1D^{-1}b_{1h}'H $$

(see also equation (56)). Hence the strains in the h elements are, in the original structure:

$$ f_h[b_hR - b_{1h}D^{-1}b_{1h}'H] + H $$

and in the new structure:

$$ f_{hm}[b_hR - b_{1h}D^{-1}b_{1h}'H] $$

Equality of the two expressions yields:

$$ H = [b_{1h}D^{-1}b_{1h}' + \Delta f_h^{-1}]^{-1}b_hR \quad (66) $$

Hence the stresses S_m in the modified system are given by:

$$ S_m = b_mR \quad (67) $$

where

$$ b_m = b - b_1D^{-1}b_{1h}'[b_{1h}D^{-1}b_{1h}' + \Delta f_h^{-1}]^{-1}b_h \quad (68) $$

and

$$ \Delta f_h = f_{hm} - f_h \quad (69) $$

Note that the inversion of the matrix in the square bracket is only of the order of f_h, as was also the case for the cut-out. The modification of the elements may, naturally, involve either a reinforcement or a lightening; in the first case Δf_h < 0 and in the second Δf_h > 0. Formula (68) may, of course, also be applied when the stresses arise from initial strains H instead of loads R. In fact, in such a case we have only to substitute b_1X_H = S_H for bR and b_1hX_H = S_Hh for b_hR to find the stress S_Hm in the modified structure as:

$$ S_{Hm} = S_H - b_1D^{-1}b_{1h}'[b_{1h}D^{-1}b_{1h}' + \Delta f_h^{-1}]^{-1}S_{Hh} \quad (68a) $$

Limiting cases:

(a) Elimination of the h elements. Then f_hm → ∞, and b_m reduces to the expression of equation (62):

$$ b_m = b_c = b - b_1D^{-1}b_{1h}'[b_{1h}D^{-1}b_{1h}']^{-1}b_h $$

(b)* Rigidification of the h elements. Then f_hm → 0, and b_m reduces to:

$$ b_m = b - b_1D^{-1}b_{1h}'[b_{1h}D^{-1}b_{1h}' - f_h^{-1}]^{-1}b_h \quad (70) $$

f_h^{-1} is, of course, k_h, the original stiffness of the unassembled h elements.

* The dual theorem in the displacement method solves the cut-out problem; see Ref. 7.

A further special modification of the h elements deserves mention. In many instances the alteration of each of the h elements will be geometrically similar to the original elements. Then:

$$ f_{hm} = \left[\frac{1}{\beta}\right]f_h $$

where [1/β] is a mere diagonal matrix. Equation (68) becomes now:

$$ S_m = \left[b - b_1D^{-1}b_{1h}'\left[b_{1h}D^{-1}b_{1h}' + \frac{\beta}{1-\beta}f_h^{-1}\right]^{-1}b_h\right]R \quad (71) $$

13. The Flexibility of the Modified Structure.-The flexibility F_m of the modified structure is:

$$ F_m = b_m'f_mb_m \quad (72) $$

where f_m is the flexibility of the unassembled elements in the modified structure:

$$ f_m = \begin{bmatrix} f_g & 0 \\ 0 & f_{hm} \end{bmatrix} \quad (73) $$

We may also write for F_m:

$$ F_m = b_0'f_mb_m \quad (74) $$

but this form is inapplicable in the limiting case of modifications consisting of cut-outs affecting the basic system.

As in section 11 it is important from the practical point of view to relate F_m to the flexibility F of the original structure. Following the same argument as in the case of cut-outs, we have for the deflections r_m of the modified system:

$$ r_m = F_mR = FR + b'\begin{Bmatrix} 0 \\ H \end{Bmatrix} \quad (69a) $$

or, using equation (66):

$$ F_m = F + b_h'[b_{1h}D^{-1}b_{1h}' + \Delta f_h^{-1}]^{-1}b_h \quad (75) $$
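The same toy matrices can exercise the modified-element formula. The sketch below implements equation (68) with the Δf_h correction of equations (66)-(69); making f_hm very large reproduces the cut-out result of equation (62), which is a convenient numerical check (element index and factors are again illustrative):

```r
# Modified elements, equation (68):
#   bm = b - b1 D^-1 b1h' [b1h D^-1 b1h' + dF^-1]^-1 bh,  dF = fhm - fh.
modify_structure <- function(b, b1, D, h, fh, fhm) {
  b1h <- b1[h, , drop = FALSE]
  bh  <- b[h, , drop = FALSE]
  dF  <- fhm - fh
  K   <- solve(b1h %*% solve(D, t(b1h)) + solve(dF))
  b - b1 %*% solve(D, t(b1h)) %*% K %*% bh
}

fh <- f[3, 3, drop = FALSE]                # original flexibility of element 3
bm <- modify_structure(b, b1, D, h = 3,
                       fh = fh, fhm = 2 * fh)      # element softened (beta = 1/2)
bc <- modify_structure(b, b1, D, h = 3,
                       fh = fh, fhm = 1e12 * fh)   # fhm -> Inf approximates eq. (62)
```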
PART III

Simple Applications of the Theory

14. The Four-Flange Tube under Transverse Loads.-We illustrate now the theory developed in Parts I and II on a simple example. To this effect we consider the singly-symmetrical four-flange tube shown in Fig. 10 under the set of transverse loads R = {R_1 ... R_12}.

FIG. 10. Four-flange tube. (Seven rib stations at 20-in. bay spacing; the remaining dimensional annotations of the original figure are not recoverable from the scan.)

TABLE 2. Geometric and elastic data of the tube. (Only fragments are legible in the scan, among them E = 10 × 10^6 lb/in.^2 and G = 3.85 × 10^6 lb/in.^2, the latter except for the ribs.)

FIG. 11. Structural elements of the four-flange tube: numbering of elements (flanges l1 to l6, covers s, webs w1 to w6, ribs r), basic system and redundancies.

At first we assume that the structure is continuous, i.e., we exclude any kind of cut-outs. The selected elements of the system and their numbering are depicted in Fig. 11, and the dimensions and elastic properties are given in Table 2. Only the outline of the calculations is presented here, since this case is investigated in greater detail in Ref. 1. The complete programming of the problem has been developed by Hunt.

As basic system we select the two spars acting independently as beams, and it is obvious that top and bottom covers and ribs are unloaded in such a system. For the redundancies we choose the six Y-systems of Fig. 4 applied at the rib stations 1 to 6 and denote the matrix of the redundancies:

$$ Y = \{Y_1\ Y_2\ Y_3\ Y_4\ Y_5\ Y_6\} \quad (76) $$

The elements of the structure affected by each of the Y-systems are shown in Fig. 11. The rib flanges are loaded neither in the basic system nor by the redundancies and need not be considered at all.

To calculate the stress distribution under any system of loads R and the corresponding flexibility F we require initially only the very simple matrices b_0, b_1 and f. However, the computational work and the programming are considerably simplified by suitable partitioning of the b_0 and b_1 matrices; the reader may consult Ref. 1 and in particular Hunt for more details of the procedure. Suffice it here to state that we write b_0 in the form:

$$ b_0 = \begin{bmatrix} b_{0la} \\ b_{0lb} \\ b_{0s} \\ b_{0wa} \\ b_{0wb} \\ b_{0r} \end{bmatrix}, \qquad b_{0s} = 0, \quad b_{0r} = 0 \quad (77) $$

where the columns refer to the loads R_1 to R_12. Subscripts l, s, w, r are used to denote longitudinal flanges, covers, spar webs and ribs respectively; subscripts a and b refer to the front and rear spar respectively. Note that, by virtue of the choice of the basic system, b_0s and b_0r are zero matrices. In compiling b_0 we take advantage of the symmetry of the structure by writing terms for the top surface only. The rows follow in each of the submatrices the numbering of the elements, which, in each case, is from the root to the tip. The total number of rows in b_0 is 49.

The choice of the basic system ensures very simple submatrices which can, in fact, be written down by inspection. This may be seen from Table 3, where b_0la and b_0wa, of dimensions 12 × 12 and 6 × 12 respectively, are given. On the other hand, if we had selected the E.T.B. and Bredt-Batho stresses of the tube as statically equivalent stress system, the calculation of b_0 would, in comparison, have been quite complicated. This illustrates clearly our earlier comments on the best choice of the basic system; a sketch of how such a submatrix can be generated mechanically follows below.
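For the spar flanges the entries of b_0la are just cantilever bending moments divided by the effective spar depth. Assuming the 20-in. bay length and reading the printed factor of Table 3 as 20/6 (i.e., an effective depth of 6 in., our interpretation of the garbled scan), the station pattern underlying the 12 × 12 table can be generated in two lines of R; treat the factor as an assumption to be checked against the report:

```r
# Core 6 x 6 station pattern of b0la (Table 3): flange load in the independent
# front spar at station i due to a unit transverse load R_j, proportional to
# the moment arm; in the printed table each flange element contributes two
# such rows, and columns R7-R12 are zero.
pattern   <- outer(1:6, 1:6, function(i, j) -pmax(j - i + 1, 0))
b0la_core <- (20 / 6) * pattern   # factor 20/6 as printed (assumed reading)
b0la_core
```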
The b_1 matrix, of 49 rows, is similarly partitioned to give:

$$ b_1 = \begin{bmatrix} b_{1la} \\ b_{1lb} \\ b_{1s} \\ b_{1wa} \\ b_{1wb} \\ b_{1r} \end{bmatrix} \quad (78) $$

where the columns refer to the six redundancies Y_1 to Y_6 and the suffixes have the same meaning as in the case of b_0. The elements of the submatrices are obtained immediately from the information in Fig. 4. The simplicity of these matrices may be seen from b_1la and b_1wa, shown in Table 3.

The corresponding partitioned form of the flexibility matrix f of the unassembled elements is:

$$ f = \begin{bmatrix} f_{la} & 0 & 0 & 0 & 0 & 0 \\ 0 & f_{lb} & 0 & 0 & 0 & 0 \\ 0 & 0 & f_s & 0 & 0 & 0 \\ 0 & 0 & 0 & f_{wa} & 0 & 0 \\ 0 & 0 & 0 & 0 & f_{wb} & 0 \\ 0 & 0 & 0 & 0 & 0 & f_r \end{bmatrix} \quad (79) $$

where f_la and f_lb are calculated from equation (22) and f_s to f_r from equation (17). The submatrix f_lb is shown in Table 4; the factor 2 is introduced to take care of the lower flange, since b_0 and b_1 contain only the loads for the top flange.

We cannot emphasise sufficiently the great simplicity of the derivation of the three basic matrices b_0, b_1, f. Although our example is naturally trivial, the method is basically the same in the case of much more complex aircraft structures, e.g., delta wings, once we can ignore the effect of cut-outs, have selected the simplest possible basic system and have tabulated all information for the b_1 and f matrices.

The analysis of section 6 requires the matrix multiplications b_1'fb_0, b_1'fb_1 and b_0'fb_0. Using the above partitioned form of the basic matrices we find the simple result:

$$ D_0 = b_1'fb_0 = [b_1'fb_0]_{la} + [b_1'fb_0]_{lb} + [b_1'fb_0]_{wa} + [b_1'fb_0]_{wb} $$

$$ D = b_1'fb_1 = [b_1'fb_1]_{la} + [b_1'fb_1]_{lb} + [b_1'fb_1]_s + [b_1'fb_1]_{wa} + [b_1'fb_1]_{wb} + [b_1'fb_1]_r $$

$$ F_0 = b_0'fb_0 = [b_0'fb_0]_{la} + [b_0'fb_0]_{lb} + [b_0'fb_0]_{wa} + [b_0'fb_0]_{wb} \quad (80) $$

These matrix multiplications and additions are very easily programmed for the digital computer, where the form of D, D_0, F_0 in (80) is most useful, especially in the case of large structures, since it allows an efficient use of the computer store (see Hunt). Naturally it is also advantageous when we use a mere desk machine. If the number of external forces exceeds say 50 and that of the redundancies say 32, it will be necessary to partition the b_0 and b_1 matrices also by columns (the numbers given refer to the Ferranti Pegasus). The partitioning of the b_1 matrix by columns is actually closely related to the idea of a statically indeterminate basic system (see Ref. 1, p. 82, March, 1955).

Having D, we find the inverted matrix D^{-1} and the product D^{-1}D_0 most conveniently and speedily on the digital computer, especially if the latter operates, as do the Ferranti machines, with a matrix interpretive scheme (see Hunt). In our present example D is of order 6 × 6 and the inversion requires on the Pegasus only 14 sec, whilst approximately 6 hours are necessary on the desk machine when using the Jordan technique. If we are interested only in the stress distribution for a particular load group R, we can form on the digital computer D_0R (which is a column matrix) and then find D^{-1}[D_0R] without first obtaining D^{-1}. This shortens further the computing time. For example, on the Pegasus the time of 14 sec for D^{-1} is reduced to 9 sec when calculating the column matrix D^{-1}[D_0R]. Naturally, these times increase rapidly with the order of the D matrix, but they are not more than 17 min 52 sec and 7 min 19 sec respectively when this order is 32 × 32. 
Again the quoted times apply to the Pegasus, on which we would have to use partitioning when D is of higher order (see also the statement at the end of the previous paragraph).

The D^{-1} and D matrices for the present example are given in Table 4. It is now very simple to find from equations (42) and (43) the final b matrix and the exact 12 × 12 flexibility F. Table 5 shows, in particular, the submatrices b_la and b_wa, whilst F is found in Table 6. Finally, we present the flange loads and shear flows in the webs for a single load R_3 = 1,000 lb in Fig. 12.

15. Thermal Stresses.-We determine next the stresses and distortion of the same structure due to a non-uniform temperature distribution. For this purpose we assume that the upper flange of spar 'a' has the temperature distribution shown in Fig. 15. The variation is taken to be linear between nodal points and hence is defined by the values at the nodal points. The column matrix η of the initial strains at the beginning and end of each element l1 to l6 of the top flange 'a' is:

$$ \eta = \{\{\alpha\theta_1\ \alpha\theta_2\}\ \{\alpha\theta_2\ \alpha\theta_3\}\ \{\alpha\theta_3\ \alpha\theta_4\}\ \{\alpha\theta_4\ \alpha\theta_5\}\ \{\alpha\theta_5\ \alpha\theta_6\}\ \{\alpha\theta_6\ \alpha\theta_7\}\} \quad (81) $$

Following equations (46) and (48) we require the generalised strains H corresponding to the linearly varying flange loads. These may be found from equation (24), which in the present case may be written:

$$ H = \bar{l}\,\eta \quad (82) $$

where

$$ \bar{l} = \begin{bmatrix} L & 0 & 0 & 0 & 0 & 0 \\ 0 & L & 0 & 0 & 0 & 0 \\ 0 & 0 & L & 0 & 0 & 0 \\ 0 & 0 & 0 & L & 0 & 0 \\ 0 & 0 & 0 & 0 & L & 0 \\ 0 & 0 & 0 & 0 & 0 & L \end{bmatrix} \quad (83) $$

in which, since l is the same in each bay, all L's are identical 2 × 2 matrices given by equation (25). The matrix l̄ is written out in full in Table 7.

The redundancies Y_θ due to temperature may now be derived from equation (48):

$$ Y_\theta = -D^{-1}b_1'\{H\ \ 0\} = -D^{-1}b_{1la}'H \quad (84) $$

This last relation follows since the initial strains are applied only to the upper flange of spar 'a'. Using b_1la from Table 3 and taking α = 23 × 10^{-6}, we find (in lb in.):

$$ Y_\theta = \{-41{,}330\ \ -36{,}590\ \ -27{,}770\ \ -18{,}000\ \ -13{,}980\ \ -11{,}230\} \quad (85) $$

The thermal stresses can now be determined from:

$$ S_\theta = b_1Y_\theta \quad (86) $$

In particular, Fig. 15 shows the flange loads and web shear flows.

Of some interest, finally, is the thermal distortion of the structure. Thus, using the simple relation of equation (51), the deflections r_θj at the stations 1 to 12 are:

$$ r_\theta = b_{la}'H \quad (87) $$

since the free thermal strains exist only in the upper flange 'a'; b_la is the true stress matrix corresponding to the unit loads R_1 to R_12 and is given in Table 5. We find (in in., for stations 1 to 12 respectively; the decimal points, lost in the scan, are restored on the assumption of a consistent three-decimal layout):

$$ r_\theta = \{-0.019\ -0.081\ -0.191\ -0.327\ -0.492\ -0.693\ -0.016\ -0.071\ -0.171\ -0.308\ -0.474\ -0.661\} \quad (88) $$

It will be seen that the complete determination of the thermal stresses and distortion is surprisingly simple; the steps (81) to (88) are summarised in the short sketch below.
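A compact way to see the whole thermal chain (81)-(88) is the R sketch below. It assumes hypothetical nodal temperatures θ_1 ... θ_7, and the matrices b_1la, b_la and D are taken as already computed (stubbed out here), so the sketch shows the data flow rather than the report's numbers:

```r
# Thermal-stress pipeline, equations (81)-(88), upper flange of spar 'a'.
alpha <- 23e-6                            # coefficient of expansion (as in the text)
theta <- c(100, 85, 70, 55, 40, 25, 10)   # nodal temperatures, illustrative only
l     <- 20                               # bay length (in.)

# eta: initial strains {alpha*theta_i, alpha*theta_{i+1}} per element, eq. (81)
eta <- as.vector(rbind(alpha * theta[1:6], alpha * theta[2:7]))

# lbar: block-diagonal matrix of L = l * [[1/3, 1/6], [1/6, 1/3]], eqs. (83), (25)
L    <- l * matrix(c(1/3, 1/6, 1/6, 1/3), 2, 2)
lbar <- kronecker(diag(6), L)
H    <- lbar %*% eta                      # generalised initial strains, eq. (82)

# With b1la (12 x 6), b1 (49 x 6) and bla (12 x 12) from the earlier analysis:
# Ytheta <- -solve(D, t(b1la) %*% H)      # redundancies, eq. (84)
# Stheta <- b1 %*% Ytheta                 # thermal stresses, eq. (86)
# rtheta <- t(bla) %*% H                  # thermal distortion, eq. (87)
```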
16. Effect of Cut-Outs.-In this paragraph we give a number of applications of the method developed in sections 9 and 10 for cut-outs. Three different cases of cut-outs are investigated (see Fig. 11 for notation):

(1) elimination of web w3, i.e., the web of spar 'a' in the third bay;
(2) elimination of two fields, web w3 and cover s2, and an additional cut of the flanges of spar 'a' at the root;
(3) elimination of flange l3.

Two loading cases are considered:

(a) a single load R_3 = 1,000 lb;
(b) the thermal loading of Fig. 15, but only for cut-out cases (1) and (2).

Inspection of formulae (62), (65) and (68a) shows that we require the submatrices b_1h and b_h. These are easily extracted from the available matrices b_1 and b. Then we have to form b_1hD^{-1}b_1h' and invert it to find:

$$ [b_{1h}D^{-1}b_{1h}']^{-1} \quad (89) $$

The order of this matrix is 1 × 1, 3 × 3 and 2 × 2 for cut-outs (1), (2) and (3) respectively. All these data are collected in Table 8. By substitution of these matrices into equations (62) and (68a) we obtain the stress distributions for the cut-outs and loading cases considered. The results are given in Figs. 12, 13, 14 for the load R_3 = 1,000 lb and in Figs. 15 and 16 for the thermal loading. For the single cut-out (1) we consider also the flexibility F_c of the cut structure. Its derivation is particularly simple since in this case b_1hD^{-1}b_1h' is a scalar. Thus:

$$ [b_{1h}D^{-1}b_{1h}']^{-1} = \frac{192^2}{3.801 \times 10^6} \quad (90) $$

$$ F_c = F + \Delta F = F + \frac{192^2}{3.801 \times 10^6}\,b_h'b_h \quad (91) $$

where the elements of the 12 × 12 matrix b_h'b_h are obtained merely by single multiplications. The incremental change ΔF of the flexibility due to the cut-out is given in Table 6.

17. Modified Structure.-The last example to be analysed is one illustrating the application of the method of section 12. The case investigated is that of the tube of section 14, but with the cross-sectional areas of flanges l1 and l2 firstly halved and secondly doubled.

The stress distribution in the two modified structures may now be derived from the original stress distribution using equation (71). To do this we have to find:

$$ \left[b_{1h}D^{-1}b_{1h}' + \frac{\beta}{1-\beta}f_h^{-1}\right]^{-1} \quad (92) $$

which, in the present case, is a matrix of order 4 × 4. The factor β is 1/2 and 2 when the cross-sectional areas are halved and doubled respectively, and Table 9 gives the corresponding two matrices (92). For a single load of R_3 = 1,000 lb the flange loads and web shear flows in the two modified structures are shown in Fig. 17 and can be compared with the stress distribution in the original structure.

In conclusion we draw attention to the extreme simplicity of the methods on cut-outs and modifications given in Ref. 1 and this paper. 
The reader may consult in this connection also Hunt.

TABLE 3

Basic Matrices

b_0la (12 × 12; columns = loads R_1 to R_12, of which columns 7 to 12 are zero; common factor 20/6). The rows, one for element l1, two each for elements l2 to l6, and a zero tip row, are:

l1: -1 -2 -3 -4 -5 -6
l2: 0 -1 -2 -3 -4 -5 (twice)
l3: 0 0 -1 -2 -3 -4 (twice)
l4: 0 0 0 -1 -2 -3 (twice)
l5: 0 0 0 0 -1 -2 (twice)
l6: 0 0 0 0 0 -1 (twice)
tip: 0 0 0 0 0 0

b_0wa (6 × 12; rows = webs w1 to w6; common factor 1/6; columns 7 to 12 zero):

w1: 1 1 1 1 1 1
w2: 0 1 1 1 1 1
w3: 0 0 1 1 1 1
w4: 0 0 0 1 1 1
w5: 0 0 0 0 1 1
w6: 0 0 0 0 0 1

b_1la (12 × 6; columns = redundancies Y_1 to Y_6; common factor 1/6). The rows carry single unit entries in the columns Y_1; Y_2, Y_2; Y_3, Y_3; Y_4, Y_4; Y_5, Y_5; Y_6, Y_6; with a zero final row.

b_1wa (6 × 6; rows = webs w1 to w6; common factor 1/192, the printed factor being garbled in the scan but consistent with the web row of Table 8):

w1: -1 1 0 0 0 0
w2: 0 -1 1 0 0 0
w3: 0 0 -1 1 0 0
w4: 0 0 0 -1 1 0
w5: 0 0 0 0 -1 1
w6: 0 0 0 0 0 -1

Numbers against rows denote elements.

TABLE 4

Basic Matrices-continued

$$ D = 10^{-8} \times \begin{bmatrix} 16{\cdot}800 & -8{\cdot}334 & 2{\cdot}083 & 0 & 0 & 0 \\ -8{\cdot}334 & 40{\cdot}126 & -11{\cdot}448 & 2{\cdot}083 & 0 & 0 \\ 2{\cdot}083 & -11{\cdot}448 & 44{\cdot}133 & -10{\cdot}712 & 2{\cdot}083 & 0 \\ 0 & 2{\cdot}083 & -10{\cdot}712 & 47{\cdot}892 & -10{\cdot}974 & 2{\cdot}083 \\ 0 & 0 & 2{\cdot}083 & -10{\cdot}974 & 52{\cdot}081 & -10{\cdot}270 \\ 0 & 0 & 0 & 2{\cdot}083 & -10{\cdot}270 & 57{\cdot}240 \end{bmatrix} $$

$$ D^{-1} = 10^6 \times \begin{bmatrix} 6{\cdot}6384 & 1{\cdot}3916 & 0{\cdot}0347 & -0{\cdot}0558 & -0{\cdot}0132 & -0{\cdot}0003 \\ 1{\cdot}3916 & 2{\cdot}9840 & 0{\cdot}7156 & 0{\cdot}0249 & -0{\cdot}0244 & -0{\cdot}0053 \\ 0{\cdot}0347 & 0{\cdot}7156 & 2{\cdot}5828 & 0{\cdot}5494 & 0{\cdot}0088 & -0{\cdot}0184 \\ -0{\cdot}0558 & 0{\cdot}0249 & 0{\cdot}5494 & 2{\cdot}3167 & 0{\cdot}4661 & -0{\cdot}0007 \\ -0{\cdot}0132 & -0{\cdot}0244 & 0{\cdot}0088 & 0{\cdot}4661 & 2{\cdot}0885 & 0{\cdot}3578 \\ -0{\cdot}0003 & -0{\cdot}0053 & -0{\cdot}0184 & -0{\cdot}0007 & 0{\cdot}3578 & 1{\cdot}8112 \end{bmatrix} $$

(rows and columns correspond to Y_1 to Y_6; decimal points, lost in the scan, are restored so that D D^{-1} = I to printed accuracy).

TABLE 5

True Stress Distribution: Typical Matrices

b_la (rows = rib stations 1 to 7, duplicated rows for the two elements meeting at a station merged into one; columns = loads R_1 to R_12; station 7 is zero throughout):

1: -1.447 -2.098 -2.726 -3.385 -4.071 -4.749 -0.433 -1.100 -1.797 -2.479 -3.156 -3.832
2: 0.398 -0.841 -1.658 -2.434 -3.268 -4.088 -0.096 -0.681 -1.482 -2.311 -3.134 -3.954
3: 0.011 0.430 -0.971 -1.966 -2.974 -3.994 -0.004 -0.164 -0.950 -1.964 -2.997 -4.024
4: -0.016 -0.017 0.323 -1.080 -2.134 -3.182 0.004 -0.006 -0.213 -1.082 -2.158 -3.239
5: -0.004 -0.021 -0.030 0.246 -1.066 -1.992 0.001 0.006 -0.006 -0.207 -1.038 -2.002
6: -0.000 -0.003 -0.017 -0.038 0.266 -0.945 0.000 0.001 0.007 0.002 -0.126 -0.824

b_wa = 10^{-3} × (rows = web elements w1 to w6):

w1: 12.015 10.178 9.587 9.222 8.762 8.313 1.052 1.308 0.982 0.524 0.069 -0.381
w2: -1.211 10.221 8.399 7.711 7.169 6.544 0.288 1.615 1.663 1.086 0.428 -0.218
w3: -0.083 -1.397 10.293 9.020 8.874 8.788 0.022 0.495 2.305 2.755 2.620 2.452
w4: 0.038 -0.014 1.103 10.394 9.587 8.935 -0.008 0.036 0.647 2.735 3.502 3.866
w5: 0.012 0.057 0.040 -0.887 10.411 9.970 -0.003 -0.014 0.038 0.654 2.849 3.682
w6: 0.000 0.010 0.053 0.118 -0.830 9.520 -0.000 -0.004 -0.021 -0.007 0.393 2.574

TABLE 6

Flexibility of Continuous Tube and Increment Due to Cut-Out Web w3

F = 10^{-6} × (symmetric 12 × 12 matrix); ΔF = 10^{-6} × (symmetric 12 × 12 increment of equation (91)). The positions of the decimal points of both matrices cannot be restored reliably from the scan and the numerical entries are therefore not reproduced here.

TABLE 7

(The matrix l̄ of equation (83); its entries are lost in the scan.)

TABLE 8

1. Cut-out web w3:

$$ b_{1h} = \frac{1}{192}\,[\,0\ \ 0\ -1\ \ 1\ \ 0\ \ 0\,] $$

2. Cut-out web w3, cover s2 and flange l1 at root. Here b_1h has the rows (1/6)[1 0 0 0 0 0] (flange), (1/192)[0 0 -1 1 0 0] (web) and (1/320)[0 1 1 0 0 0] (cover), and

$$ b_{1h}D^{-1}b_{1h}' = 10^3 \times \begin{bmatrix} 184{\cdot}40 & 0{\cdot}07856 & -0{\cdot}7067 \\ 0{\cdot}07856 & 0{\cdot}10310 & 0{\cdot}02185 \\ -0{\cdot}7067 & 0{\cdot}02185 & 0{\cdot}04039 \end{bmatrix} $$

$$ [b_{1h}D^{-1}b_{1h}']^{-1} = 10^{-3} \times \begin{bmatrix} 0{\cdot}005893 & -0{\cdot}02979 & 0{\cdot}1192 \\ -0{\cdot}02979 & 11{\cdot}1063 & -6{\cdot}5305 \\ 0{\cdot}1192 & -6{\cdot}5305 & 3{\cdot}0380 \end{bmatrix} $$

3. Cut-out flange element l3. Here b_1h has the rows (1/6)[0 0 1 0 0 0] and (1/6)[0 0 0 1 0 0], and

$$ b_{1h}D^{-1}b_{1h}' = 10^4 \times \begin{bmatrix} 7{\cdot}1744 & 1{\cdot}5261 \\ 1{\cdot}5261 & 6{\cdot}4353 \end{bmatrix}, \qquad [b_{1h}D^{-1}b_{1h}']^{-1} = 10^{-4} \times \begin{bmatrix} 0{\cdot}1468 & -0{\cdot}0348 \\ -0{\cdot}0348 & 0{\cdot}1636 \end{bmatrix} $$

TABLE 9

Flange Elements l1, l2 Modified

$$ b_{1h} = \frac{1}{6}\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix} $$

$$ b_{1h}D^{-1}b_{1h}' = 10^6 \times \begin{bmatrix} 0{\cdot}18440 & 0{\cdot}03865 & 0{\cdot}03865 & 0{\cdot}00096 \\ 0{\cdot}03865 & 0{\cdot}08289 & 0{\cdot}08289 & 0{\cdot}01987 \\ 0{\cdot}03865 & 0{\cdot}08289 & 0{\cdot}08289 & 0{\cdot}01987 \\ 0{\cdot}00096 & 0{\cdot}01987 & 0{\cdot}01987 & 0{\cdot}07174 \end{bmatrix} $$

$$ f_h^{-1} = 10^6 \times \begin{bmatrix} 0{\cdot}650 & -0{\cdot}325 & 0 & 0 \\ -0{\cdot}325 & 0{\cdot}650 & 0 & 0 \\ 0 & 0 & 0{\cdot}650 & -0{\cdot}325 \\ 0 & 0 & -0{\cdot}325 & 0{\cdot}650 \end{bmatrix} $$

1. β = 1/2 (flange areas l1, l2 halved), β/(1-β) = 1:

$$ \left[b_{1h}D^{-1}b_{1h}' + f_h^{-1}\right]^{-1} = 10^{-6} \times \begin{bmatrix} 1{\cdot}4025 & 0{\cdot}5705 & -0{\cdot}1769 & -0{\cdot}0924 \\ 0{\cdot}5705 & 1{\cdot}6237 & -0{\cdot}2823 & -0{\cdot}1648 \\ -0{\cdot}1769 & -0{\cdot}2823 & 1{\cdot}7100 & 0{\cdot}7309 \\ -0{\cdot}0924 & -0{\cdot}1648 & 0{\cdot}7309 & 1{\cdot}6992 \end{bmatrix} $$

2. β = 2 (flange areas l1, l2 doubled), β/(1-β) = -2:

$$ \left[b_{1h}D^{-1}b_{1h}' - 2f_h^{-1}\right]^{-1} = 10^{-6} \times \begin{bmatrix} -1{\cdot}3971 & -0{\cdot}8025 & -0{\cdot}1525 & -0{\cdot}0973 \\ -0{\cdot}8025 & -1{\cdot}2898 & -0{\cdot}1788 & -0{\cdot}1190 \\ -0{\cdot}1525 & -0{\cdot}1788 & -1{\cdot}2007 & -0{\cdot}6578 \\ -0{\cdot}0973 & -0{\cdot}1190 & -0{\cdot}6578 & -1{\cdot}1749 \end{bmatrix} $$

FIG. 12. Stress distribution in four-flange tube due to a single force, R_3 = 1,000 lb, with and without cut-out web w3. (Plots of flange loads P_a, P_b and web shear flows q_wa, q_wb; the axis data are not recoverable from the scan.)
FIG. 13. Stress distribution in four-flange tube due to a single force, R_3 = 1,000 lb, with and without cut-out web w3, cover s2 and flange l1 at root. (Plots of flange loads and web shear flows; axis data not recoverable from the scan.)

FIG. 14. Stress distribution in four-flange tube due to a single force, R_3 = 1,000 lb, with and without cut-out flange element l3.

FIG. 15. Stress distribution in four-flange tube due to the thermal loading (temperature distribution of the upper flange of spar 'a'), with and without cut-out web w3.

FIG. 16. Stress distribution in four-flange tube due to the thermal loading, with and without cut-outs web w3, cover s2 and flange l1 at root.

FIG. 17. Stress distribution in four-flange tube due to a single force R_3 = 1,000 lb, with areas of flange elements l1, l2 doubled and halved, compared with the original structure.
" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8055452,"math_prob":0.96429765,"size":32731,"snap":"2019-43-2019-47","text_gpt3_token_len":9491,"char_repetition_ratio":0.1393345,"word_repetition_ratio":0.017331023,"special_character_ratio":0.3172222,"punctuation_ratio":0.16321321,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9786283,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-18T08:58:00Z\",\"WARC-Record-ID\":\"<urn:uuid:cd06ca32-03b1-41f8-af2e-41050071e223>\",\"Content-Length\":\"459676\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f10c8313-0f07-4b64-83f4-84c104c44d10>\",\"WARC-Concurrent-To\":\"<urn:uuid:305755ef-5764-47f6-8c50-baaa9002dc86>\",\"WARC-IP-Address\":\"151.101.250.152\",\"WARC-Target-URI\":\"https://de.scribd.com/document/204855137/3034\",\"WARC-Payload-Digest\":\"sha1:V63NF4SUQ4JHYE6FIIZAQIORN7A2ZJYE\",\"WARC-Block-Digest\":\"sha1:UWT3QASPLG6OGECQPWTCPRYI442BJTPO\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496669730.38_warc_CC-MAIN-20191118080848-20191118104848-00104.warc.gz\"}"}
https://stats.stackexchange.com/questions/27030/how-to-set-limits-using-constroptim-in-r
[ "# How to set limits using constrOptim in R?\n\nI am using constrOptim to minimize a log likelihood function for maximum likelihood estimation of parameters.\n\nI wish to set the bounds on my parameters, but to not understand the constrOptim definition of the feasibility region.\n\nThe feasible region is defined by ui %*% theta - ci >= 0\n\nI have a set of parameters with bounds [lower, upper]\n\na[0,5] ie 0<a<5\nb[0,Inf]\nc[0,Inf]\ne[0,1]\n\ntheta (starting values) = c(1, 1, 0.01,0.1)\n\n\nWhat are the ui (constraint matrix (k x p)) and ci (constraint vector of length k) for these parameter bounds?\n\nIs there a straightforward way to get from a list of upper and lower bounds to a ui and ci value?\n\nHere's an example that we can use to illustrate ui and ci, with some extraneous output removed for brevity. It's maximizing the log likelihood of a normal distribution. In the first part, we use the optim function with box constraints, and in the second part, we use the constrOptim function with its version of the same box constraints.\n\n# function to be optimized\n> foo.unconstr <- function(par, x) -sum(dnorm(x, par, par, log=TRUE))\n\n> x <- rnorm(100,1,1)\n\n> optim(c(1,1), foo.unconstr, lower=c(0,0), upper=c(5,5), method=\"L-BFGS-B\", x=x)\n$par 1.147652 1.077654$value\n 149.3724\n\n>\n> # constrOptim example\n>\n> ui <- cbind(c(1,-1,0,0),c(0,0,1,-1))\n> ui\n[,1] [,2]\n[1,] 1 0\n[2,] -1 0\n[3,] 0 1\n[4,] 0 -1\n> ci <- c(0, -5, 0, -5)\n>\n> constrOptim(c(1,1), foo.unconstr, grad=NULL, ui=u1, ci=c1, x=x)\n$par 1.147690 1.077712$value\n 149.3724\n\n... blah blah blah ...\n\nouter.iterations\n 2\n\n$barrier.value -0.001079475 > If you look at the ui matrix and imagine multiplying by the parameter vector to be optimized, call it$\\theta$, you'll see that the result has four rows, the first of which is$\\theta_1$, the second$-\\theta_1$, the third$\\theta_2$, and the fourth$-\\theta_2$. Subtracting off the ci vector and enforcing the$\\ge 0$constraint on each row results in$\\theta_1 \\ge 0$,$-\\theta_1 + 5 \\ge 0$,$\\theta_2 \\ge 0$and$-\\theta_2 + 5 \\ge 0$. Obviously, multiplying the second and fourth constraints by -1 and moving the constant to the right hand side gets you to$\\theta_1 \\le 5$and$\\theta_2 \\le 5$, the upper bound constraints. Just substitute your own values into the ci vector and add appropriate columns (if any) to the ui vector to get the box constraint set you want. • I am still slightly confused about how to construct ui and ci, but getting there. Thank you very much Apr 25 '12 at 21:22 • This is a useful link to understand how to set up your ui and ci for constrOptim(): youtube.com/watch?v=MCvz-c6UUkw. The key takeaway is that all your constraints have to be set up in the Ax >= b format. Aug 7 '15 at 8:11 Your constraints are of two types, either$\\theta_i \\geq a_i$, or$\\theta_i \\leq b_i$. 
The first ones are already in the right form (and the matrix ui is just the identity matrix), while the others can be written as $-\theta_i \geq -b_i$: ui is then $-I_n$ and ci is $-b$.

# Constraints
bounds <- matrix(c(
  0, 5,
  0, Inf,
  0, Inf,
  0, 1
), nc=2, byrow=TRUE)
colnames(bounds) <- c("lower", "upper")

# Convert the constraints to the ui and ci matrices
n <- nrow(bounds)
ui <- rbind( diag(n), -diag(n) )
ci <- c( bounds[,1], - bounds[,2] )

# Remove the infinite values
i <- as.vector(is.finite(bounds))
ui <- ui[i,]
ci <- ci[i]

# Constrained minimization (the call completes the truncated original;
# any strictly interior starting point works)
f <- function(u) sum((u+1)^2)
constrOptim(theta = rep(0.5, n), f = f, grad = NULL, ui = ui, ci = ci)

We can check how the constraint matrices ci and ui are interpreted:

# Print the constraints
k <- length(ci)
p <- ncol(ui)
for(i in seq_len(k)) {
  j <- which( ui[i,] != 0 )
  cat(paste( ui[i,j], " * ", "x[", (1:p)[j], "]", sep="", collapse=" + " ))
  cat(" >= ")
  cat( ci[i], "\n" )
}
# 1 * x[1] >= 0
# 1 * x[2] >= 0
# 1 * x[3] >= 0
# 1 * x[4] >= 0
# -1 * x[1] >= -5
# -1 * x[4] >= -1

Some of the algorithms in optim allow you to specify the lower and upper bounds directly: that is probably easier to use.

• Doesn't removing the infinities affect which parameter is being bounded by which value? Apr 25 '12 at 21:20
• The parameters correspond to the columns of ui, but what I remove is rows, i.e., constraints of the form x[i] <= Inf. Apr 25 '12 at 22:43
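As the answer notes, bounds-only problems can skip the ui/ci construction entirely. A minimal sketch of the same box constraints via optim's L-BFGS-B method, reusing the bounds matrix and objective defined above (the starting point is an arbitrary interior value):

```r
# Same box constraints, passed directly as lower/upper vectors;
# L-BFGS-B accepts Inf for unbounded sides.
optim(par = rep(0.5, nrow(bounds)), fn = f, method = "L-BFGS-B",
      lower = bounds[, "lower"], upper = bounds[, "upper"])
```
" ]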
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6915412,"math_prob":0.9931283,"size":1703,"snap":"2021-43-2021-49","text_gpt3_token_len":580,"char_repetition_ratio":0.13301942,"word_repetition_ratio":0.0,"special_character_ratio":0.38226658,"punctuation_ratio":0.19181585,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995012,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T16:03:55Z\",\"WARC-Record-ID\":\"<urn:uuid:0d64c297-b146-422f-b50c-2196cc153182>\",\"Content-Length\":\"180441\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5c9a3a48-1fc2-4193-ac59-fd546f3cb568>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d2a2744-82b1-4c46-b2a8-01b2250d091d>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/27030/how-to-set-limits-using-constroptim-in-r\",\"WARC-Payload-Digest\":\"sha1:PEUYGITXOKNI6PV7DYD4MGWOW25L7SWW\",\"WARC-Block-Digest\":\"sha1:OM5GYDH4W2OFVDVWMYQ5MKBLDVMKM6EI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323586043.75_warc_CC-MAIN-20211024142824-20211024172824-00086.warc.gz\"}"}
https://digilib.unikom.ac.id/man/php/function.gmp-init.html
[ "Scholar Repository\nHome>Manual>Create GMP number\n\n# gmp_init\n\n(PHP 4 >= 4.0.4, PHP 5)\n\ngmp_initCreate GMP number\n\n### Description\n\nresource gmp_init ( mixed \\$number [, int \\$base = 0 ] )\n\nCreates a GMP number from an integer or string.\n\n### Parameters\n\nnumber\n\nAn integer or a string. The string representation can be decimal, hexadecimal or octal.\n\nbase\n\nThe base.\n\nThe base may vary from 2 to 36. If base is 0 (default value), the actual base is determined from the leading characters: if the first two characters are 0x or 0X, hexadecimal is assumed, otherwise if the first character is \"0\", octal is assumed, otherwise decimal is assumed.\n\n### Return Values\n\nA GMP number resource.\n\n### Changelog\n\nVersion Description\n5.3.2 The base was extended from 2 to 36, to 2 to 62 and -2 to -36.\n4.1.0 The optional base parameter was added.\n\n### Notes\n\nNote:\n\nTo use the extended base introduced in PHP 5.3.2, then PHP must be compiled against GMP 4.2.0 or greater.\n\n### Examples\n\nExample #1 Creating GMP number\n\n```<?php \\$a = gmp_init(123456);\\$b = gmp_init(\"0xFFFFDEBACDFEDF7200\");?>```\n\n### Notes\n\nNote:\n\nIt is not necessary to call this function if you want to use integer or string in place of GMP number in GMP functions, like gmp_add(). Function arguments are automatically converted to GMP numbers, if such conversion is possible and needed, using the same rules as gmp_init().\n\nHome>Manual>Create GMP number" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.67866665,"math_prob":0.83675486,"size":1134,"snap":"2019-51-2020-05","text_gpt3_token_len":311,"char_repetition_ratio":0.119469024,"word_repetition_ratio":0.0,"special_character_ratio":0.28218696,"punctuation_ratio":0.16803278,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9763606,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T12:36:03Z\",\"WARC-Record-ID\":\"<urn:uuid:9bfbeadf-edcc-4ff2-9984-cb29079d7340>\",\"Content-Length\":\"8391\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b0dfca65-8ad5-4cdb-9119-44a06782bb93>\",\"WARC-Concurrent-To\":\"<urn:uuid:cc36b995-1f12-4d70-86ff-fac2591b2372>\",\"WARC-IP-Address\":\"103.112.189.161\",\"WARC-Target-URI\":\"https://digilib.unikom.ac.id/man/php/function.gmp-init.html\",\"WARC-Payload-Digest\":\"sha1:QB44URVVGDQTYJCNBUOAYL5K3CE5UWLQ\",\"WARC-Block-Digest\":\"sha1:RYFYQPZJ5G3DPITAFBSNA6CGPFG5LBAX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540488620.24_warc_CC-MAIN-20191206122529-20191206150529-00073.warc.gz\"}"}
https://link.springer.com/article/10.1007/s10640-016-0028-0/tables/16?error=cookies_not_supported&code=37284b24-a9be-4e91-b823-91a63737723d
[ "# Table 16 Hypothetical bias without different income categories (Ind)\n\n$$\\hbox {Inc}\\ne \\hbox {low}$$ $$\\hbox {Inc}\\ne \\hbox {middle }$$ $$\\hbox {Inc}\\ne \\hbox {high}$$ $$\\hbox {Inc}\\ne \\hbox {missing}$$\nPrice $$0.97^{***}$$ $$0.97^{***}$$ $$0.97^{***}$$ $$0.98^{***}$$\nMale 1.03 1.00 0.90 0.98\nAge 1.00 1.00 1.00 1.00\nKids in HH 0.78 $$0.71^{**}$$ $$0.68^{***}$$ 0.84\nSingle 1.11 1.05 1.25 1.16\nUniversity degree 1.12 0.94 1.18 1.20\nVotes Green 1.52$$^{**}$$ 1.13 1.48 1.67$$^{***}$$\nHH Income\nLow Ref Ref Ref\nMiddle Ref 1.04 0.96\nHigh and very high 1.38$$^{**}$$ 1.35 1.34\nMissing $$1.49^{***}$$ $$1.42^{**}$$ $$1.50^{**}$$\nWorry $$1.42^{**}$$ $$1.59^{***}$$ $$1.54^{***}$$ $$1.73^{***}$$\nPolicy preferences\nEmission trading $$2.37^{***}$$ $$2.57^{***}$$ $$2.90^{***}$$ $$2.07^{***}$$\nRenewables $$1.51^{***}$$ $$1.65^{***}$$ $$1.54^{***}$$ $$1.44^{***}$$\nTrading##Renewables $$0.45^{***}$$ $$0.35^{***}$$ $$0.31^{***}$$ $$0.47^{***}$$\nDilemma awareness $$0.71^{***}$$ $$0.64^{***}$$ $$0.74^{**}$$ $$0.73^{***}$$\nPersonal norm $$1.30^{***}$$ $$1.28^{***}$$ $$1.28^{***}$$ $$1.16^{**}$$\nHYPO 1.20 $$1.48^{***}$$ $$1.36^{**}$$ $$1.45^{***}$$\nObservations 3477 2907 3201 3663\nPseudo $$R^{2}$$ 0.065 0.072 0.073 0.061\n1. Choice of certificate is the dependent variable, coefficients are presented as odds ratios, ** $$p < 0.05$$; *** $$p < 0.01$$, standard errors are corrected for clustered observations\n2. Ref reference category" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.54466885,"math_prob":1.00001,"size":1458,"snap":"2021-04-2021-17","text_gpt3_token_len":677,"char_repetition_ratio":0.30261347,"word_repetition_ratio":0.0,"special_character_ratio":0.68655694,"punctuation_ratio":0.23154363,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000061,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-16T14:59:36Z\",\"WARC-Record-ID\":\"<urn:uuid:0780839c-1654-4335-82ff-e254487e08ef>\",\"Content-Length\":\"83419\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a5539b8b-79ca-4fbe-874d-7c34e428b705>\",\"WARC-Concurrent-To\":\"<urn:uuid:f0a79353-312f-458f-b510-0199075242d1>\",\"WARC-IP-Address\":\"151.101.200.95\",\"WARC-Target-URI\":\"https://link.springer.com/article/10.1007/s10640-016-0028-0/tables/16?error=cookies_not_supported&code=37284b24-a9be-4e91-b823-91a63737723d\",\"WARC-Payload-Digest\":\"sha1:7YNXVWGD7QJMDHRDTM4OEA67IQVACYPP\",\"WARC-Block-Digest\":\"sha1:4JTQFAPLKFEMT5LCW6WTWP3XEMC4MZ5K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038066981.0_warc_CC-MAIN-20210416130611-20210416160611-00375.warc.gz\"}"}
https://www.buffalobrewingstl.com/cell-culture/oo.html
[ "## Oo\n\nThis does not have a C=O group, but instead intermediate", null, "Figure 3. IR spectra of l-glutamic acid- related substances.\n\nFigure 3. IR spectra of l-glutamic acid- related substances.\n\nsome between C—O and C=O. The spectra of the amino acids at the isoelectric point of pH do not show the existence of C—O, which indicates that the —COOH group is ionized. In a similar way, the ionization of the —NH2 is indicated.\n\nThe optical rotation changes with changing pH in solution at 25 °C; glutamic acid [a]D(G±; H2O) is +12.0, [a]D(G+; 5N HCl) is +31.8, [a]D(G-; NaOH) is -4.2, and [a]D(G=; NaOH) is +10.9, which corresponds to the ratio of ionic form in solution (21).\n\n### Solubility\n\nFor the production of the umami seasoning MSG, the characteristic solubility changes in L-Glu HCl, L-Glu, and L-GluNa are used to separate it from other impurities to meet the category of food additives. The change in solubility of L-Glu for a- and /¡-forms, respectively, with temperature are expressed by the following (22):\n\nlog S = 0.01741 - 0.377 (0-30 °C) for a-form log S = 0.01531 - 0.328 (30-70 °C) for a-form log S = 0.01591 - 0.481 for /-form\n\nThe solubility of the /-form is lower than that of the a-form throughout the temperature range measured. It follows that the /-form is a stable form from the aspect of thermodynamic kinetics. In Table 1, the solubility for various salts of L-Glu versus temperature are summarized (23).\n\nThe solubility changes with change in the concentration of both hydrochloric acid and sodium hydroxide in Figure 4, which corresponds to the existence of solid crystals, in\n\nTable 1. Solubility of Various Salts of L-Glu (g/L)\n\nTemperature (°C) l-Glu dl-Glu l-GluNa dl-GluNa l-GluHCl dl-GluHCl\n\nG 3.41 8.55 514 158 298 471\n\n25 8.64 2G.5 627 243 479 698\n\n5G 21.9 49.3 765 372 769 1,G3G\n\n75 55.3 119 933 57G 1,24G 1,54G\n\n1GG 14G 285 1,14G 875 1,99G 2,28G", null, "Figure 4. The change in the solubility of l-glutamic acid with change in the concentration of both hydrochloric acid and sodium hydroxide.\n\nFigure 4. The change in the solubility of l-glutamic acid with change in the concentration of both hydrochloric acid and sodium hydroxide.\n\nCrystal: L-Glu is a polymorph, having two crystal forms, a and ft. Both are anhydrous.\n\nCrystals of the a-form are obtained by crystallization from the saturated solution of L-Glu at 60 to 70 °C by cooling rapidly with agitation or from the acidic or alkalic solution of L-Glu to which alkali or acid was added to rapidly neutralize it until the isoelectric point of L-Glu was reached (pH 3.2).\n\nThe crystal shape is a column or pyramid. The a-form crystals transform gradually to b-form when kept in solution for a long time at room temperature. b-form crystals are obtained from either the saturated solution of L-Glu at 80 to 90 °C, which is cooled gradually with agitation, or the relatively higher concentration of L-Glu hydrochloride or sodium salt, to which alkali or acid was added to neutralize it slowly until pH 3.2 was reached. The crystal shape is a needle or thin plate (24).\n\nequilibrium solution, of the hydrochloride, free, and sodium salts of L-Glu, respectively (i.e., in the range of 0 to 30% for HCl and of 0 to 20% for NaOH). In the range of 0 to 7.7% of HCl, the solubility increases linearly so that L-Glu is soluble until the equivalent amount of HCl in solution in which L-Glu exists as solid crystals. 
Above the concentration of 7.7% HCl, with an increase in the HCl concentration, the solubility decreases steeply. It flattens in the vicinity of 20% and then becomes a constant of about 1% above 25% HCl. Above 7.7% HCl, the hydrochloride forms the solid crystals in solution. At the invariant point, where equilibrium is attained between l-Glu and l-Glu·HCl, the maximum solubility of 31.1% is obtained.

On the other hand, in the range of 0 to 9.33% NaOH, l-Glu solvates an equal amount of NaOH in solution and exists as solid crystals. At the invariant point (NaOH 9.33%), the solubility of l-Glu reaches the maximum of 35.38%. Above the concentration of 9.33% NaOH, l-GluNa·H2O exists as solid crystals. A slight decrease in solubility is observed in the range from 9.33 to about 13%, and the solubility then increases with increasing concentration of NaOH. The minimum value of the solubility of l-Glu is obtained in the range of pH 2 to 4, in the vicinity of the pI." ]
[ null, "https://www.buffalobrewingstl.com/cell-culture/images/2284_111_44.jpg", null, "https://www.buffalobrewingstl.com/cell-culture/images/2284_111_45.png", null, "https://www.buffalobrewingstl.com/images/downloads/eJw9ykEKgCAQAMDfeFRLNAqkp4Stkkvpihl-P7p0mdPE1soiRMcTe9ilnHgiIg6UhKeeL3J-Sy67I1Rxh-x5iWV10JCyLQjtqYH9Eb3VRk1g9DwOWgGLdtSS9c8X7DIkAQ.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89885813,"math_prob":0.92129153,"size":4697,"snap":"2020-10-2020-16","text_gpt3_token_len":1344,"char_repetition_ratio":0.14532281,"word_repetition_ratio":0.09268293,"special_character_ratio":0.2748563,"punctuation_ratio":0.11684518,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9601829,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-10T05:57:15Z\",\"WARC-Record-ID\":\"<urn:uuid:32ff0a60-db51-4e2c-b964-27b56db7d09a>\",\"Content-Length\":\"26637\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6b43aec1-893a-40f0-8b5b-e746c5c20f8f>\",\"WARC-Concurrent-To\":\"<urn:uuid:3fb579d0-ddb3-49f0-b776-e67ff8c8add3>\",\"WARC-IP-Address\":\"104.18.56.106\",\"WARC-Target-URI\":\"https://www.buffalobrewingstl.com/cell-culture/oo.html\",\"WARC-Payload-Digest\":\"sha1:N6JOIMU4EWYKP5Q3AJA4MMQOYU5AIA3Z\",\"WARC-Block-Digest\":\"sha1:YIY2OJOWYXZCQ7WKFUEZS7LWX7AH5ZRL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371886991.92_warc_CC-MAIN-20200410043735-20200410074235-00244.warc.gz\"}"}
https://cstheory.stackexchange.com/questions/6261/need-a-term-for-a-graph-theoretic-metric-concept
[ "# Need a term for a graph-theoretic/metric concept\n\nLet $(X,d)$ be a metric space, and define $\\rho$ to be the largest distance of any $x\\in X$ to its nearest neighbor.\n\nFormally, $$\\rho = \\sup_{x \\in X}~ d(x, X \\setminus \\{x\\}).$$\n\nDoes this quantity have a name? It's zero in continuous spaces and is only interesting in discrete ones.\n\n• If I name it, I might call it “largest isolation.” – Tsuyoshi Ito Apr 26 '11 at 17:29\n• I was thinking \"isolation distance\". – Aryeh Apr 26 '11 at 17:34" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8782124,"math_prob":0.98611164,"size":285,"snap":"2019-43-2019-47","text_gpt3_token_len":84,"char_repetition_ratio":0.035587188,"word_repetition_ratio":0.0,"special_character_ratio":0.3122807,"punctuation_ratio":0.13114753,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9982651,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-12T10:46:44Z\",\"WARC-Record-ID\":\"<urn:uuid:a8953197-ac3a-440d-949f-116c24451056>\",\"Content-Length\":\"133016\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7765f147-6904-411d-9f2d-5ad399844ebc>\",\"WARC-Concurrent-To\":\"<urn:uuid:201404e8-30ac-4cba-a8e0-158f9bc15d32>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://cstheory.stackexchange.com/questions/6261/need-a-term-for-a-graph-theoretic-metric-concept\",\"WARC-Payload-Digest\":\"sha1:ATMNBBKDBONODARQXYWOJCMOR2GBXIYN\",\"WARC-Block-Digest\":\"sha1:5A6NSPYTP2GNZKJ4CXHLUVLD42GOLP65\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496665521.72_warc_CC-MAIN-20191112101343-20191112125343-00340.warc.gz\"}"}
http://www.neoarchaic.net/posts/rain/
[ "# Rain\n\nPart of a library of environmental effect graphics generators this Rhino Script produces a series of lines which fall like rain upon Rhino’s cplane.  If a droplet hits there is a splash. The user can input the number clusters where the rain will fall from, the relative radius of the cluster, the percentage of the way that the rain can fall short as well as a few other conditions.  The rain is colored with a gradient according to the distance along the V domain of the surface to allow for the production of depth.\n\n### Rhino Script\n\n```Option Explicit\n'Script written by <insert name>\n'Script copyrighted by <insert company name>\n'Script version Sunday, March 28, 2010 5:08:39 PM\n\nCall Main()\nSub Main()\nDim strSurface, arrInputs\nstrSurface = Rhino.GetObject(\"Select Surface to Rain From\", 8, True)\nIf isNull(strSurface) Then Exit Sub\n\narrInputs = Rhino.PropertyListBox(array(\"Sources\", \"Stream Radius\", \"Droplets\", \"Rate 0-1\", \"Splash Radius\", \"Stream Angle Radius\"), array(50, 0.1, 10, 0.5, 0.5, 4), \"Rain Parameters\", \"Rain Parameters\")\nIf isNull(arrInputs) Then Exit Sub\n\nCall reparameterize(strSurface)\nCall Rhino.EnableRedraw(False)\nCall rainMaker(strSurface, arrInputs(0), arrInputs(1), arrInputs(2), arrInputs(3), arrInputs(4), arrInputs(5))\nCall Rhino.EnableRedraw(True)\n\nEnd Sub\nFunction rainMaker(strSurface, dblCount, dblDispersion, dblDensity, dblRate, dblSplash, dblVarience)\nrainMaker = Null\nDim i, j, k, a, w, v, s, t, u, z, clr, var\nDim srfDom(1),srfD(1),srfA(1),srfR(1),srfS(1),srfE(1),pt(1), r(1)\nDim ln(), spl(), temp(3)\n\nFor i = 0 To 1 Step 1\nsrfDom(i) = Rhino.surfacedomain(strSurface, i)\nsrfD(i) = (srfDom(i)(1) - srfDom(i)(0))\nsrfA(i) = srfD(i) - srfD(i) * dblDispersion\nsrfR(i) = srfD(i) * (dblDispersion / 2)\nsrfS(i) = srfDom(i)(0) + srfD(i) * (dblDispersion / 2)\nsrfE(i) = srfDom(i)(1) - srfD(i) * (dblDispersion / 2)\nNext\n\nw = 0\na = 0\nvar = rnd()\nFor i = 0 To dblCount Step 1\nu = srfS(0) + rnd() * srfA(0)\nv = srfS(1) + rnd() * srfA(1)\nr(0) = srfR(0) / 2 + rnd() * (srfR(0) / 2)\nr(1) = srfR(1) / 2 + rnd() * (srfR(1) / 2)\nclr = v / srfE(1)\nFor j = 0 To dblDensity Step 1\nReDim Preserve ln(w)\npt(0) = Rhino.EvaluateSurface(strSurface, array(u + rnd() * r(0), v + rnd() * r(1)))\npt(1) = array(pt(0)(0) + dblVarience * sin(PI * 2 * var), pt(0)(1) + dblVarience * cos(PI * 2 * var), 0)\nt = 1 - rnd() * dblRate\nln(w) = Rhino.ScaleObject(Rhino.AddLine(pt(0), pt(1)), pt(0), array(t, t, t), False)\npt(1) = Rhino.CurveEndPoint(ln(w))\n\nCall Rhino.ObjectColor(ln(w), RGB(clr * 255, clr * 255, clr * 255))\n\nIf pt(1)(2) < 0.1 Then\nReDim Preserve spl(a)\nFor k = 0 To 3 Step 1\nt = rnd()\ntemp(k) = Rhino.AddLine(pt(1), array(pt(1)(0) + dblSplash * sin(PI * 2 * t), pt(1)(1) + dblSplash * cos(PI * 2 * t), pt(1)(2) + dblSplash * sin(PI * t)))\nCall Rhino.ObjectColor(temp(k), RGB(clr * 255, clr * 255, clr * 255))\nNext\nspl(a) = temp\na = a + 1\nEnd If\nw = w + 1\nNext\nNext\n\nrainMaker = array(ln, spl)\nEnd Function\nFunction reparameterize(strObjectID)\nIf Rhino.IsCurve(strObjectID) = True Then\nCall rhino.SelectObject(strObjectID)\nCall rhino.Command(\"reparameterize 0 1\", False)\nCall rhino.UnselectAllObjects()\nEnd If\nIf Rhino.IsSurface(strObjectID) = True Then\nCall rhino.SelectObject(strObjectID)\nCall rhino.Command(\"reparameterize 0 1 0 1\", False)\nCall rhino.UnselectAllObjects()\nEnd If\nEnd Function\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.50753146,"math_prob":0.992656,"size":3293,"snap":"2023-40-2023-50","text_gpt3_token_len":1134,"char_repetition_ratio":0.12435391,"word_repetition_ratio":0.072265625,"special_character_ratio":0.34558153,"punctuation_ratio":0.15555556,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99155474,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T06:11:10Z\",\"WARC-Record-ID\":\"<urn:uuid:f5918bb8-368d-439d-9eea-8220f8c4e0c7>\",\"Content-Length\":\"109428\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1651d54f-b5b7-4f40-96b8-073d2b288d87>\",\"WARC-Concurrent-To\":\"<urn:uuid:206b2e39-1c07-4af5-b2fc-c00ec876298b>\",\"WARC-IP-Address\":\"160.153.95.104\",\"WARC-Target-URI\":\"http://www.neoarchaic.net/posts/rain/\",\"WARC-Payload-Digest\":\"sha1:L2L7OZAE7BIQR2HGBHBFQM7XWQW46BCA\",\"WARC-Block-Digest\":\"sha1:TZYXO2LZWVNJTV4YSD6QX5SS5PW3DMML\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679103558.93_warc_CC-MAIN-20231211045204-20231211075204-00144.warc.gz\"}"}
https://www.researchgate.net/publication/275897333_An_analytic_regularisation_scheme_on_curved_spacetimes_with_applications_to_cosmological_spacetimes
[ "ArticlePDF Available\n\n# An analytic regularisation scheme on curved spacetimes with applications to cosmological spacetimes\n\nAuthors:\n\n## Abstract and Figures\n\nWe develop a renormalisation scheme for time--ordered products in interacting field theories on curved spacetimes which consists of an analytic regularisation of Feynman amplitudes and a minimal subtraction of the resulting pole parts. This scheme is directly applicable to spacetimes with Lorentzian signature, manifestly generally covariant, invariant under any spacetime isometries present and constructed to all orders in perturbation theory. Moreover, the scheme captures correctly the non--geometric state--dependent contribution of Feynman amplitudes and it is well--suited for practical computations. To illustrate this last point, we compute explicit examples on a generic curved spacetime, and demonstrate how momentum space computations in cosmological spacetimes can be performed in our scheme. In this work, we discuss only scalar fields in four spacetime dimensions, but we argue that the renormalisation scheme can be directly generalised to other spacetime dimensions and field theories with higher spin, as well as to theories with local gauge invariance.\nThis content is subject to copyright.\nAn analytic regularisation scheme on curved\nspacetimes with applications to cosmological\nspacetimes\nAntoine Géré\n1,a\n, Thomas-Paul Hack\n1,a\n, Nicola Pinamonti\n1,2,c\n1\nDipartimento di Matematica, Università di Genova, Via Dodecaneso 35, I-16146 Genova, Italy.\n2\nIstituto Nazionale di Fisica Nucleare, Sezione di Genova, Via Dodecaneso, 33 I-16146 Genova, Italy.\nE-Mail:\na\[email protected],\nb\[email protected],\nc\[email protected]\nNovember 13, 2015\nAbstract.\nWe develop a renormalisation scheme for time–ordered products in interacting field\ntheories on curved spacetimes which consists of an analytic regularisation of Feynman amplitudes and\na minimal subtraction of the resulting pole parts. This scheme is directly applicable to spacetimes\nwith Lorentzian signature, manifestly generally covariant, invariant under any spacetime isometries\npresent and constructed to all orders in perturbation theory. Moreover, the scheme captures correctly\nthe non–geometric state–dependent contribution of Feynman amplitudes and it is well–suited for\npractical computations. To illustrate this last point, we compute explicit examples on a generic curved\nspacetime, and demonstrate how momentum space computations in cosmological spacetimes can be\nperformed in our scheme. In this work, we discuss only scalar fields in four spacetime dimensions, but\nwe argue that the renormalisation scheme can be directly generalised to other spacetime dimensions\nand field theories with higher spin, as well as to theories with local gauge invariance.\nContents\n1 Introduction 2\n2 Introduction to pAQFT 3\n2.1 Basic definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3\n2.2 Relation to the standard formulation of perturbative QFT . . . . . . . . . . . . . . . . 5\n3 Analytic regularisation and minimal subtraction on curved spacetimes 9\n3.1 Analytic regularistion of time–ordered products and the minimal subtraction scheme . 10\n3.2 Analytic regularisation of the Feynman propagator H\nF\non curved spacetimes . . . . . 14\n3.3 Generalised Euler operators and principal parts of homogeneous expansions . . . . . . 
  3.3.1 The differential form of generalised Euler operators and homogeneous expansions of Feynman amplitudes
  3.4 Properties of the minimal subtraction scheme
  3.5 Examples
    3.5.1 Computation of the renormalised fish and sunset graphs in our scheme
    3.5.2 Alternative computation of the renormalised fish and sunset graphs
    3.5.3 A more complicated graph
4 Explicit computations in cosmological spacetimes
  4.1 Propagators in Fourier space
  4.2 The renormalised fish and sunset graphs in Fourier space
  4.3 Example: the two-point function for a quartic potential up to second order
  4.4 More complicated graphs on cosmological spacetimes
5 Summary and outlook
A Conventions and computational details
  A.1 Propagators of the free Klein-Gordon field and their relations
  A.2 Fourier transform of the logarithmic term on Minkowski spacetime

arXiv:1505.00286v3 [math-ph] 12 Nov 2015

1. Introduction

In the perturbative construction of models in quantum field theory on curved spacetimes one encounters time-ordered products of field polynomials which are a priori ill-defined due to the appearance of UV divergences. Several renormalisation schemes which deal with these divergences in the presence of non-trivial spacetime curvature have been discussed in the literature, such as for example local momentum space methods [Bu81], dimensional regularisation in combination with heat kernel techniques [Lü82, To82], differential renormalisation [CHL95, Pr97], zeta-function renormalisation [BF13], generic Epstein-Glaser renormalisation [BF00, HW01, HW04], and, on cosmological spacetimes, Mellin-Barnes techniques [Ho10] and dimensional regularisation with respect to the comoving spatial coordinates [BCK10].

Some of these schemes, such as heat kernel approaches, zeta-function techniques and local momentum space methods, are based on constructions which are initially only well-defined for spacetimes with Euclidean signature. These constructions can be partly transported to general Lorentzian spacetimes by local Wick-rotation techniques developed in [Mo99]. However, whereas the Feynman propagator is essentially unique on Euclidean spacetimes, this is not the case on Lorentzian spacetimes, where this propagator has a non-unique contribution depending on the quantum state of the field model. Consequently, the Euclidean renormalisation techniques, and the numerous practical computations already performed by means of these methods (see for example the monographs [BD82, BOS92, PT09]), are able to capture the correct divergent and geometric parts of Feynman amplitudes, but a priori not their non-geometric and state-dependent contributions.

A renormalisation scheme which is directly applicable to curved spacetimes with Lorentzian signature has been developed in [BF00, HW01, HW04] in the framework of algebraic quantum field theory.
This scheme implements ideas of [EG73] and [St71] and is based on microlocal techniques which replace the momentum space methods available in Minkowski spacetime and have been introduced to quantum field theory in curved spacetime by the seminal work [Ra96]. However, although the generalised Epstein-Glaser scheme developed in [BF00, HW01, HW04] is conceptually clear and mathematically rigorous, it is not easily applicable in practical computations. On the other hand, Lorentzian schemes which are better suited for this purpose have not been developed to all orders in perturbation theory [CHL95, Pr97], are tailored to specific spacetimes [Ho10] or are not manifestly covariant [BCK10].

Motivated by this, we develop a renormalisation scheme for time-ordered products in interacting field theories on curved spacetimes which is directly applicable to spacetimes with Lorentzian signature, manifestly generally covariant, invariant under any spacetime isometries present and constructed to all orders in perturbation theory. Moreover, the scheme captures correctly the non-geometric state-dependent contribution of Feynman amplitudes and it is well-suited for practical computations. In this work, we discuss only scalar fields in four spacetime dimensions, but we shall argue that the renormalisation scheme can be directly generalised to other spacetime dimensions and field theories with higher spin, as well as to theories with local gauge invariance. Our analysis will take place in the framework of perturbative algebraic quantum field theory (pAQFT) [BF00, HW01, HW04, BDF09, FR12, FR14], which is a conceptually clear framework in which fundamental physical properties of perturbative interacting models on curved spacetimes can be discussed. However, we will make an effort to review how the formulation of pAQFT is related to the more standard formulation of perturbative QFT.

The renormalisation scheme we propose is inspired by the works [Ke10, DFKR14], which deal with perturbative QFT in Minkowski spacetime. In these works, the authors introduce an analytic regularisation of the position-space Feynman propagator in Minkowski spacetime which is similar to the one discussed in [BG72]. Based on this, time-ordered products are constructed recursively by an Epstein-Glaser type procedure and it is shown that this recursion can be resolved by a position-space forest formula similar to the one of Zimmermann used in BPHZ renormalisation in momentum space.

In order to extend the scheme proposed in [DFKR14] to curved spacetimes, and motivated by [BG72] and by the form of Feynman propagators on curved spacetimes, we introduce an analytic regularisation $H^{(\alpha)}_F$ of a Feynman propagator $H_F$ by

$$H^{(\alpha)}_F := \lim_{\epsilon \to 0^+} \frac{1}{8\pi^2} \left[ \frac{u}{(\sigma + i\epsilon)^{1+\alpha}} + \frac{v}{\alpha} \left( 1 - \frac{1}{(\sigma + i\epsilon)^{\alpha}} \right) \right] + w\,,$$

where $u$, $v$ and $w$ are the so-called Hadamard coefficients and $\sigma$ is 1/2 times the squared geodesic distance. This analytic regularisation, namely the construction of certain distributions by means of powers of the geodesic distance, is reminiscent of the use of Riesz distributions to define advanced and retarded Greens functions on Minkowski spacetime. A careful discussion of Riesz distributions and their extension to the curved case is presented in [BGP07].
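Let us note explicitly that the $\alpha \to 0$ limit of this regularisation reproduces the Hadamard singularity structure (a one-line check): expanding $(\sigma + i\epsilon)^{-\alpha} = e^{-\alpha \log(\sigma + i\epsilon)} = 1 - \alpha \log(\sigma + i\epsilon) + O(\alpha^2)$, one finds

$$\lim_{\alpha \to 0} H^{(\alpha)}_F = \lim_{\epsilon \to 0^+} \frac{1}{8\pi^2} \left[ \frac{u}{\sigma + i\epsilon} + v \log(\sigma + i\epsilon) \right] + w\,,$$

which is the Feynman counterpart of the Hadamard form (2.2) recalled below, with the scale $M^2$ inside the logarithm absorbed into a redefinition of the smooth part $w$.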
The regularisation we use is loosely related to dimensional regularisation, because the leading singularity of a Feynman propagator in $N$ spacetime dimensions is proportional to $(\sigma + i\epsilon)^{1-N/2}$, see e.g. [Mo03, Appendix A]. A regularisation of the Feynman propagator similar to the one above has recently been discussed in [Da15]. In this work, we shall combine the analytic regularisation of the Feynman propagator with the minimal subtraction scheme encoded in a forest formula of the kind discussed in [Ho10, Ke10, DFKR14], in order to obtain a time-ordered product which satisfies the causal factorisation property, i.e. a product which is indeed "time-ordered". In order to prove that the analytically regularised amplitudes constructed out of $H^{(\alpha)}_F$ have the meromorphic structure necessary for the application of the forest formula, and in order to show how the corresponding Laurent series can be computed explicitly, we shall make use of generalised Euler operators. The practical feasibility of the renormalisation scheme shall be demonstrated by computing a few examples.

A large part of our analysis is devoted to demonstrating that the scheme we propose is consistent and well-defined on general curved spacetimes. Readers interested in directly applying our scheme may use Proposition 3.9 with (3.22) and (3.23) in order to compute the Laurent series of the Feynman amplitudes (3.11) regularised by means of (3.12). The correct order of pole subtractions is encoded in the forest formula (3.9), which is explained prior to its display. A number of examples is discussed in Section 3.5.

In quantum field theory on cosmological spacetimes, i.e. Friedmann-Lemaître-Robertson-Walker (FLRW) spacetimes, one usually exploits the high symmetry of these spacetimes in order to evaluate analytical expressions in spatial Fourier space. However, the renormalisation scheme discussed in this work operates on quantities such as the geodesic distance and the Hadamard coefficients, whose explicit position space and momentum space forms are not even explicitly known in FLRW spacetimes. Notwithstanding, we shall devote a large part of this work to developing simple methods to evaluate quantities renormalised in our scheme on FLRW spacetimes in momentum space, and we shall illustrate these methods by explicit examples.

The paper is organised as follows. In the next section we present a brief introduction to pAQFT and its connection with the more standard formulation of perturbative QFT. Afterwards we introduce the renormalisation scheme, demonstrate that it is well-defined and analyse its properties in Section 3, where we also illustrate the scheme by computing examples. In the fourth section we demonstrate the applicability of the renormalisation scheme to momentum space computations on cosmological spacetimes. Finally, a few conclusions are drawn in the last section of this paper. Conventions regarding the various propagators of a scalar field theory and a few technical computations are collected in the appendix.

2. Introduction to pAQFT

2.1. Basic definitions
Throughout this work, we shall consider four-dimensional globally hyperbolic spacetimes $(M, g)$, where $g$ is a Lorentzian metric whose signature is $(-,+,+,+)$, and we use the sign conventions of [Wa84] regarding the definitions of curvature tensors.

We recall the perturbative construction of an interacting quantum field theory on a generic curved spacetime in the framework of perturbative algebraic quantum field theory (pAQFT), recently developed in [BDF09, FR12, FR14] based on earlier work. In this construction, the basic object of the theory is an algebra of observables which is realised as a suitable set of functionals on field configurations equipped with a suitable product. In order to implement the perturbative constructions following the ideas of Bogoliubov and others, the field configurations $\phi$ are assumed to be off-shell. Namely, $\phi \in \mathcal{E}(M) = C^\infty(M)$ is a smooth function on the globally hyperbolic spacetime $(M,g)$ and observables are modelled by functionals $F : \mathcal{E}(M) \to \mathbb{C}$ satisfying further properties. In particular all the functional derivatives exist as distributions of compact support, where we recall that the functional derivative of a functional $F$ is defined for all $\psi_1, \dots, \psi_n \in \mathcal{D}(M) = C^\infty_0(M)$ as

$$F^{(n)}(\phi)(\psi_1 \otimes \cdots \otimes \psi_n) := \frac{d^n}{d\lambda_1 \dots d\lambda_n}\, F(\phi + \lambda_1 \psi_1 + \dots + \lambda_n \psi_n)\Big|_{\lambda_1 = \cdots = \lambda_n = 0} \in \mathcal{E}'(M^n)\,.$$

The set of these functionals is indicated by $\mathcal{F}$. Further regularity properties are assumed for the construction of an algebraic product. In particular, the set of local functionals $\mathcal{F}_{loc} \subset \mathcal{F}$ is formed by the functionals whose $n$-th order functional derivatives are supported on the total diagonal $d_n = \{(x, \dots, x),\ x \in M\} \subset M^n$. Furthermore, their singular directions are required to be orthogonal to $d_n$, namely $WF(F^{(n)}) \subseteq \{(x,k) \in T^*M^n,\ x \in d_n,\ k \perp T d_n\}$, where $WF$ denotes the wave front set. A generic local functional is a polynomial $P(\phi)(x)$ in $\phi$ and its derivatives integrated against a smooth and compactly supported tensor. The functionals whose functional derivatives are compactly supported smooth functions are instead called regular functionals and indicated by $\mathcal{F}_{reg}$.

The quantum theory is specified once a product among elements of $\mathcal{F}_{loc}$ and a $*$-operation (an involution on $\mathcal{F}$) are given. For the case of free (linear) theories the product can be explicitly given by a $\star$-product

$$F \star_H G = \sum_n \frac{\hbar^n}{n!} \left\langle F^{(n)},\, H_+^{\otimes n}\, G^{(n)} \right\rangle\,, \qquad (2.1)$$

where $H_+$ is a Hadamard distribution of the linear theory we are going to quantize, namely a distribution whose antisymmetric part is proportional to the commutator function $\Delta = \Delta_R - \Delta_A$ and whose wave front set satisfies the Hadamard condition, see e.g. [Ra96, BFK96] for further details and Section A.1 for our propagator conventions. Owing to the properties of $H_+$, iterated $\star_H$-products of local functionals $F_1 \star_H \cdots \star_H F_n$ are well defined and $\star_H$ is associative.

In a normal neighbourhood of $(M,g)$, a Hadamard distribution $H_+$ is of the form

$$H_+(x,y) = \frac{1}{8\pi^2} \left( \frac{u(x,y)}{\sigma_+(x,y)} + v(x,y) \log\!\left(M^2 \sigma_+(x,y)\right) \right) + w(x,y)\,, \qquad (2.2)$$

where $\sigma_+(x,y) = \sigma(x,y) + i\epsilon\,(t(x) - t(y)) + \epsilon^2/2$ (with the limit $\epsilon \to 0^+$ understood) with $t$ a time-function, i.e. a global time-coordinate, $2\sigma(x,y)$ is the squared geodesic distance between $x$ and $y$, and $M$ is an arbitrary mass scale.
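For orientation, we recall some standard facts about the Hadamard expansion in four dimensions (see e.g. [Mo03] and references therein; the precise form will not be needed in what follows): the coefficient $u$ is the square root of the van Vleck-Morette determinant, and $v$ admits a covariant expansion in powers of $\sigma$,

$$u(x,y) = \Delta^{1/2}(x,y)\,, \qquad u(x,x) = 1\,, \qquad v(x,y) = \sum_{n \ge 0} v_n(x,y)\, \sigma(x,y)^n\,,$$

with the $v_n$ determined recursively by transport equations along the geodesic connecting $x$ and $y$, so that $u$ and $v$ are fixed entirely by the local geometry and the parameters of the Klein-Gordon operator.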
The Hadamard coefficients $u$ and $v$ are purely geometric and thus state-independent, whereas $w$ is smooth and state-dependent if $H_+(x,y)$ is the two-point function of a quantum state.

For the perturbative construction of interacting models we further need a time-ordered product $\cdot_{T_H}$ on local functionals. This product is characterised by symmetry and the causal factorisation property, which requires that

$$F \cdot_{T_H} G = F \star_H G \quad \text{if } F \gtrsim G\,, \qquad (2.3)$$

where $F \gtrsim G$ indicates that $F$ is later than $G$, i.e. there exists a Cauchy surface $\Sigma$ of $(M,g)$ such that $\mathrm{supp}(F) \subseteq J^+(\Sigma)$ and $\mathrm{supp}(G) \subseteq J^-(\Sigma)$. However, the causal factorisation fixes uniquely only the time-ordered products among regular functionals, in which case

$$F \cdot_{T_H} G = \sum_n \frac{\hbar^n}{n!} \left\langle F^{(n)},\, H_F^{\otimes n}\, G^{(n)} \right\rangle\,, \qquad (2.4)$$

where $H_F$ is the time-ordered (Feynman) version of $H_+$, i.e. $H_F = H_+ + i\Delta_A$ with $\Delta_A$ the advanced propagator of the free theory, cf. Section A.1. For local functionals, (2.4) is only correct up to the need to employ a non-unique renormalisation procedure, cf. Section 3.1. This renormalisation can be performed in such a way that iterated $\cdot_{T_H}$-products of local functionals $F_1 \cdot_{T_H} \cdots \cdot_{T_H} F_n$ are well defined, with $\cdot_{T_H}$ being associative. Moreover, $\star_H$-products of such time-ordered products of local functionals are well-defined as well, cf. [HW02, BDF09, FR12, FR14]. Consequently, we may consider the algebra $\mathcal{A}_0$ $\star_H$-generated by iterated $\cdot_{T_H}$-products of local functionals. This algebra contains all observables of the free theory which are relevant for perturbation theory.

In the perturbative construction of interacting models, namely when the free action is perturbed by a non-linear local functional $V$, the observables associated with the interacting theory are represented on the free algebra $\mathcal{A}_0$ by means of the Bogoliubov formula. This is given in terms of the local S-matrix, i.e., the time-ordered exponential

$$S(V) = \sum_{n=0}^{\infty} \frac{i^n}{n!\,\hbar^n}\, \underbrace{V \cdot_{T_H} \cdots \cdot_{T_H} V}_{n \text{ times}}\,, \qquad (2.5)$$

where $V$ is the interacting Lagrangean. In particular, for every interacting observable $F$ the corresponding representation on the free algebra $\mathcal{A}_0$ is given by

$$R_V(F) = S^{-1}(V) \star_H \left( S(V) \cdot_{T_H} F \right)\,, \qquad (2.6)$$

where $S^{-1}(V)$ is the inverse of $S(V)$ with respect to the $\star_H$-product. The problem in using $R_V(F)$ as generators of the algebra of interacting observables lies in the construction of the time-ordered product, which a priori is an ill-defined operation.

This problem can be solved using ideas which go back to Epstein and Glaser, see e.g. [BF00], by means of which the time-ordered product among local functionals is constructed recursively. The time-ordered products can be expanded in terms of distributions smeared with compactly supported smooth functions which play the role of coupling constants (multiplied by a spacetime-cutoff). At each recursion step the causal factorisation property (2.3) permits to construct the distributions defining the time-ordered product up to the total diagonal. The extension to the total diagonal can be performed by extending the distributions previously obtained without altering the scaling degree towards the diagonal. In this procedure there is the freedom of adding finite local contributions supported on the total diagonal. This freedom is the well-known renormalisation freedom.
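Schematically, the renormalisation freedom takes the following form: in any chart around the total diagonal, two admissible extensions $t$ and $t'$ of one of the distributions mentioned above differ by

$$t' - t = \sum_{|\beta| \le d} c_\beta\, \partial^\beta\, \delta(x_1 - x_n, \dots, x_{n-1} - x_n)\,,$$

with finitely many coefficients $c_\beta$ and $d$ bounded in terms of the scaling degree of $t$ towards the diagonal; in a generally covariant setting the counterterms are in addition built covariantly from the metric and the coupling constants. (We display this only for orientation; the precise statement may be found in [BF00, HW01].)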
In addition to the properties already discussed, the renormalised time-ordered product is required to satisfy further physically reasonable conditions. We refer to [HW02, HW04] for details on these properties and the proof that they can be implemented in the recursive Epstein-Glaser construction.

In spite of the theoretical clarity of this construction, the Epstein-Glaser renormalisation is quite difficult to implement in practice. The aim of this paper is to discuss a renormalisation scheme which is suitable for practical computations.

2.2. Relation to the standard formulation of perturbative QFT

In this subsection we outline the relation of the pAQFT framework to the standard formulation of perturbative QFT. As an example, we demonstrate how the two-point (Wightman) function of the interacting field in $\phi^4$ theory on a four-dimensional curved spacetime is computed, where we assume that the quantum state of the interacting field is just the state of the free field modified by the interacting dynamics. We further assume that the free field is in a pure and Gaussian Hadamard state.

Let us recall the relevant formulae in perturbative algebraic quantum field theory, where we shall always try to write expressions both in the pAQFT and in the more standard notation, indicating the latter by $\doteq$. Given a local action $V$, such as $V = \int_M d^4x \sqrt{-g}\, \frac{\lambda}{4} \phi(x)^4$ in $\phi^4$-theory, the corresponding S-matrix, which is loosely speaking "the S-matrix in the interaction picture", is defined by (2.5) and corresponds to $S(V) \doteq T e^{\frac{i}{\hbar} V}$.

The interacting field, i.e. "the field in the interaction picture" $\phi_I(x)$, is defined by the Bogoliubov formula

$$\phi_I(x) = R_V(\phi(x)) = S(V)^{-1} \star_H \left( S(V) \cdot_{T_H} \phi(x) \right) \doteq T(e^{\frac{i}{\hbar} V})^{-1}\, T(e^{\frac{i}{\hbar} V} \phi(x)) \qquad (2.7)$$

similarly to (2.6), where by unitarity $S(V)^{-1} = S(V)^*$. Interacting versions of more complicated expressions in the field, e.g. polynomials at different and coinciding points, are defined analogously. A thorough discussion of the relation between the Bogoliubov formula and the more common formulation of observables in the interaction picture may be found e.g. in [Li13, Section 3.1]. We only remark that, in the Minkowski vacuum state $\Omega_0$, the expectation value of the Bogoliubov formula can be shown to read (also for more general expressions in the field)

$$\langle \phi_I(x) \rangle_{\Omega_0} \doteq \left\langle T(e^{\frac{i}{\hbar} V})^{-1}\, T(e^{\frac{i}{\hbar} V} \phi(x)) \right\rangle_{\Omega_0} = \frac{\left\langle T(e^{\frac{i}{\hbar} V} \phi(x)) \right\rangle_{\Omega_0}}{\left\langle T(e^{\frac{i}{\hbar} V}) \right\rangle_{\Omega_0}}\,,$$

which is the theorem of Gell-Mann and Low, see [Du96, DF00] for details.

In the algebraic formulation one usually cuts off the interaction in order to avoid infrared problems by replacing $\lambda \to \lambda f(x)$ with a compactly supported smooth function $f$, and one takes the (adiabatic) limit $f \to 1$ in the end when computing expectation values. As our aim is to compute expectation values in this section, we shall write the results in the adiabatic limit, keeping in mind that proving the absence of infrared problems, i.e. the convergence of the spacetime integrals, is non-trivial and may depend on the state of the free field chosen.
Note that the so-called "in-in-formalism" often used in perturbative QFT on cosmological spacetimes corresponds to considering a cutoff function $f$ of the form $f(t, x) = \Theta(t - t_0)$, i.e. $f$ is a step function in time and the parameter $t_0$ corresponds to the time where the interaction is switched on.

Our choice for the quantum state of the interacting field implies that e.g. the interacting two-point function

$$\langle \phi_I(x) \star_H \phi_I(y) \rangle_\omega \doteq \langle \phi_I(x) \phi_I(y) \rangle_\omega$$

is computed by writing $\phi_I$ in terms of the free field $\phi$ and computing the expectation value of the resulting observable of the free field in the chosen pure, Gaussian, Hadamard state of the free field, which we may thus denote by the same symbol $\omega$. The interacting vacuum state in Minkowski spacetime is of this form, whereas interacting thermal states in flat spacetime do not belong to this class, as they roughly speaking require to take into account both the change of dynamics and the change of spectral properties induced by $V$ [FL13].

The functionals in the functional picture of pAQFT correspond to Wick-ordered quantities of the free field in the sense we shall explain now. To this avail we recall the form of the (quantum) $\star_H$-product and (time-ordered) $\cdot_{T_H}$-product in (2.1) and (2.4), which are defined by means of a Hadamard distribution $H_+$ and its Feynman version $H_F = H_+ + i\Delta_A$. Up to renormalisation of the time-ordered product, these products computed for the special case of the functional $\phi^2(x)$ give

$$\phi(x)^2 \star_H \phi(y)^2 = \phi(x)^2 \phi(y)^2 + 4\hbar\, \phi(x)\phi(y)\, H_+(x,y) + 2\hbar^2 H_+^2(x,y)\,,$$
$$\phi(x)^2 \cdot_{T_H} \phi(y)^2 = \phi(x)^2 \phi(y)^2 + 4\hbar\, \phi(x)\phi(y)\, H_F(x,y) + 2\hbar^2 H_F^2(x,y)\,.$$

This example shows that the $\star_H$-product ($\cdot_{T_H}$-product) implements the Wick theorem for normal-ordered (time-ordered) fields, and thus the previous formulae can be interpreted in more standard notation as

$$:\phi(x)^2:_H\ \star_H\ :\phi(y)^2:_H\ =\ :\phi(x)^2 \phi(y)^2:_H\ +\ 4\hbar :\phi(x)\phi(y):_H H_+(x,y) + 2\hbar^2 H_+^2(x,y)\,,$$
$$T\left( :\phi(x)^2:_H\ :\phi(y)^2:_H \right) =\ :\phi(x)^2 \phi(y)^2:_H\ +\ 4\hbar :\phi(x)\phi(y):_H H_F(x,y) + 2\hbar^2 H_F^2(x,y)\,,$$

where

$$:A:_H\ :=\ \alpha_{-H_+}(A)\ :=\ e^{-\frac{\hbar}{2} \left\langle H_S(x,y),\, \frac{\delta}{\delta\phi(x)} \frac{\delta}{\delta\phi(y)} \right\rangle} A\,, \qquad (2.8)$$

$$H_S(x,y) := \frac{1}{2} \left( H_+(x,y) + H_+(y,x) \right)\,,$$

(so that, for a general real symmetric kernel $k$, $\alpha_k = e^{\frac{\hbar}{2} \langle k,\, \delta^2 / (\delta\phi\, \delta\phi) \rangle}$ and only the symmetric part of $H_+$ contributes in the exponent), e.g.

$$:\phi(x)^2:_H = \lim_{x \to y} \left( \phi(x)\phi(y) - H_+(x,y) \right)\,.$$

(2.8) is a convenient way to encode the combinatorics of normal ordering, whereby the exponential series terminates for polynomial functionals such as $A = \phi(x)^2$.

The Wick theorem relates (time-ordered) products of Wick-ordered quantities to sums of Wick-ordered versions of contracted products, where the definitions of "Wick-ordering" and "contraction" are directly related: they both depend on the Hadamard distribution $H_+$ chosen. Thus, if we choose a particular $H_+$ to define $\star_H$ and $\cdot_{T_H}$ in pAQFT, we immediately fix the interpretation of all functionals in terms of expressions Wick-ordered with respect to $H_+$.

For the algebraic formulation the choice of $H_+$ is not important; indeed, choosing a different $H'_+$ with the same properties, one has that $w := H'_+ - H_+ = H'_F - H_F$ (the contributions $i\Delta_A$ cancel in the difference) is unique and thus universal.
Moreover, $w$ is real, smooth and symmetric, and

$$A \star_{H'} B = \alpha_w\left( \alpha_{-w}(A) \star_H \alpha_{-w}(B) \right)\,, \qquad A \cdot_{T_{H'}} B = \alpha_w\left( \alpha_{-w}(A) \cdot_{T_H} \alpha_{-w}(B) \right)\,,$$

with $\alpha$ defined as in (2.8); thus the algebras associated to $\star_H$, $\cdot_{T_H}$ and $\star_{H'}$, $\cdot_{T_{H'}}$ are isomorphic via

$$\alpha_w : \mathcal{A}_0 \to \mathcal{A}'_0\,,$$

where we recall that $\mathcal{A}_0$ is the algebra $\star_H$-generated by $\cdot_{T_H}$-products of local functionals.

Hence, one may choose a suitable $H_+$ according to one's needs. However, since $\alpha_w(A) \ne A$ for functionals containing multiple field powers, statements like "the potential is $\phi^4$" are ambiguous in pAQFT, and in fact also in the standard treatment of QFT. They become non-ambiguous only if one says "the potential is $:\phi^4:_H$", i.e. $\phi^4$ Wick-ordered with respect to $H_+$. In pAQFT the corresponding non-ambiguous statement would be "the potential is the functional $\phi^4$ in the algebra $\mathcal{A}_0$ constructed by means of $H_+$". If one then passes to the algebra $\mathcal{A}'_0$ constructed by means of $H'_+$, the potential picks up quadratic and c-number terms, as we shall compute explicitly below. Alternatively, this ambiguity may be seen to correspond to the renormalisation ambiguity of tadpoles in Feynman diagrams.

Given a Gaussian and Hadamard free field state $\omega$, a convenient choice of representation of the algebra is to take $H_+ = \Delta_+$, where $\Delta_+(x,y) = \langle \phi(x) \star \phi(y) \rangle_\omega \doteq \langle \phi(x)\phi(y) \rangle_\omega$ is the two-point function of the free field in the state $\omega$. This corresponds to standard normal-ordering, and consequently in this representation the expectation values of all expressions which contain non-trivial powers of the field vanish, i.e.

$$\langle A \rangle_\omega = A\big|_{\phi=0} \doteq \langle :A:_{\Delta_+} \rangle_\omega\,. \qquad (2.9)$$

Keeping the state $\omega$ fixed, but passing on to a representation of the algebra with arbitrary $H_+$, the expectation value is computed as

$$\langle A \rangle_\omega = \alpha_w(A)\big|_{\phi=0} \doteq \langle :A:_H \rangle_\omega\,, \qquad w = \Delta_+ - H_+\,,$$

for instance

$$\langle \phi^2(x) \rangle_\omega = \alpha_w(\phi^2(x))\big|_{\phi=0} = \left( \phi^2(x) + w(x,x) \right)\big|_{\phi=0} = w(x,x) \doteq \langle :\phi^2(x):_H \rangle_\omega\,,$$

which in more standard terms would be computed as

$$\langle :\phi^2(x):_H \rangle_\omega = \lim_{x \to y} \langle \phi(x)\phi(y) - H_+(x,y) \rangle_\omega = \lim_{x \to y} \left( \Delta_+(x,y) - H_+(x,y) \right) = w(x,x)\,.$$

In QFT in curved spacetimes normal-ordering is in principle problematic, because (pointlike) observables should be defined in a local and generally covariant way, i.e. they should only depend on the spacetime in an arbitrarily small neighbourhood of the observable localisation [BFV01, HW01]. This is not satisfied for e.g. field polynomials Wick-ordered with $\Delta_+(x,y)$, because this distribution satisfies the Klein-Gordon equation and thus it encodes non-local information on the curved spacetime [HW01]. It is still possible to compute in the convenient normal-ordered representation in the following way. In the example of $\phi^4$-theory, one defines the potential $\frac{\lambda}{4}\phi(x)^4$ as a local and covariant observable by identifying it with the corresponding monomial in a representation of the algebra furnished by a purely geometric $H_+$, i.e.
a $H_+$ of the form (2.2) with $w = 0$.

In other words, we set once and for all in the $H_+$-representation

$$V_H = \int_M d^4x \sqrt{-g}\, \frac{\lambda}{4} \phi(x)^4 \;\doteq\; \int_M d^4x \sqrt{-g}\, \frac{\lambda}{4} :\phi(x)^4:_H\,.$$

This does not fix $V$ uniquely, because $H$ depends on the scale $M$ inside of the logarithm, but the freedom in defining $V_H$, and analogously the free/quadratic part of the Klein-Gordon action, as above corresponds to the usual freedom in choosing the "bare mass" $m$, "bare coupling to the scalar curvature" $\xi$, "bare cosmological constant" $\Lambda$, "bare Newton constant" $G$, as well as the "bare coefficients" $\beta_1$, $\beta_2$ of higher-derivative gravitational terms in the extended Einstein-Hilbert-Klein-Gordon action

$$S(\phi, g_{ab}) = \int_M d^4x \sqrt{-g} \left( \frac{R - 2\Lambda}{16\pi G} + \beta_1 R^2 + \beta_2 R_{ab} R^{ab} - \frac{(\nabla\phi)^2}{2} - \frac{(m^2 + \xi R)\phi^2}{2} - \frac{\lambda}{4}\phi^4 \right)\,.$$

In order to switch to the normal-ordered representation, we use the map $\alpha_w$ defined in (2.8), where $w = \Delta_+ - H_+$ is the state-dependent part of the Hadamard distribution $\Delta_+$, whose dependence on the choice of $M$ in $H_+$ corresponds to the above-mentioned freedom in the definition of the Wick-ordered Klein-Gordon action. That is, we have in the normal-ordered representation associated with the state $\omega$

$$V := \alpha_w(V_H) = \int_M d^4x \sqrt{-g} \left( \frac{\lambda}{4}\phi(x)^4 + \frac{3\lambda}{2} w(x,x)\,\phi(x)^2 + \frac{3\lambda}{4} w(x,x)^2 \right) \qquad (2.10)$$

$$\doteq \int_M d^4x \sqrt{-g} \left( \frac{\lambda}{4} :\phi(x)^4:_{\Delta_+} + \frac{3\lambda}{2} w(x,x) :\phi(x)^2:_{\Delta_+} + \frac{3\lambda}{4} w(x,x)^2 \right)\,.$$

We observe that the combination of the requirements that the interaction potential is a local and covariant observable and that, in order to compute expectation values in the state $\omega$, one would like to compute in the convenient normal-ordered representation with respect to $\omega$, leads to the introduction of an effective spacetime-dependent and state-dependent (squared) mass term $\mu(x) = 3\lambda w(x,x)$ in the interaction potential, which of course leads to additional Feynman graphs in perturbation theory, cf. Figures 1 and 2. The field-independent term $\frac{3\lambda}{4} w(x,x)^2$ plays no role for computations of quantities which do not involve functional derivatives of the extended Einstein-Hilbert-Klein-Gordon action with respect to the metric (an example where it does play a role is the stress-energy tensor), just as the modification of the free action by the change of representation plays no role for the computation of such quantities. A similar phenomenon as in (2.10) occurs in thermal quantum field theory on Minkowski spacetime, where the effective mass generated by changing from the normal-ordered picture with respect to the free vacuum state to the normal-ordered picture with respect to the free thermal state is termed "thermal mass", cf. [Li13, Section 2.3.2] for details.

After these general considerations, we can proceed to compute as an example the two-point function of the interacting field $\phi_I$ in $\phi^4$-theory up to second order in $\lambda$, whereby $\phi_I$ is assumed to be in a state induced by a Gaussian Hadamard state $\omega$ of the free field.
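Before doing so, note the counting behind the coefficients in (2.10) (with the normalisation of $\alpha_w$ as in (2.8) and $\hbar = 1$): a single $w$-contraction of two of the four factors of $\phi$ in $\phi^4$ can be made in $\binom{4}{2} = 6$ ways, and a complete contraction into two pairs in $3$ ways, whence

$$\alpha_w\!\left( \frac{\lambda}{4}\, \phi(x)^4 \right) = \frac{\lambda}{4}\, \phi(x)^4 + \frac{6\lambda}{4}\, w(x,x)\, \phi(x)^2 + \frac{3\lambda}{4}\, w(x,x)^2\,,$$

in agreement with (2.10).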
To this avail, we shall exclusively compute in the associated normal-ordered representation and thus omit the subscripts on the star product and the time-ordered product, $\star := \star_\omega$, $\cdot_T := \cdot_{T_\omega}$.

We start from the Bogoliubov formula (2.7) and compute (from now on $\hbar = 1$)

$$S(V) = 1 + iV - \frac{1}{2} V \cdot_T V + O(\lambda^3)$$

$$S(V)^{\star -1} = 1 - iV + \frac{1}{2} V \cdot_T V - V \star V + O(\lambda^3)$$

$$\phi_I = \phi - iV \star \phi + iV \cdot_T \phi + \frac{1}{2}(V \cdot_T V) \star \phi - V \star V \star \phi - \frac{1}{2} V \cdot_T V \cdot_T \phi + V \star (V \cdot_T \phi) + O(\lambda^3)\,.$$

It remains to compute the $\star$-product of $\phi_I(x)$ and $\phi_I(y)$ and to set $\phi = 0$ in the remaining expression in order to obtain the expectation value in the state $\omega$. The result can as always be conveniently expressed in terms of Feynman diagrams, where we use the Feynman rules depicted in Figure 1.

In the computation of $\langle \phi_I(x) \star \phi_I(y) \rangle_\omega$, many expressions can be shortened considerably by using the relation $\Delta_F - \Delta_+ = i\Delta_A$; in particular this holds for the external legs of the appearing Feynman diagrams. The resulting Feynman diagrams are depicted in Figure 2.
in [\nDFKR14\n] which efficiently encodes the full combinatorics of Feynman diagrams in a compact\nform. Namely, the time–ordered product of\nn\nlocal functionals\nV\n1\n, . . . , V\nn\ncan be formally defined in\nthe following way\n1\nV\n1\n·\nT\nH\n··· ·\nT\nH\nV\nn\n:= T\nn\n(V\n1\n··· V\nn\n) := m T\nn\n(V\n1\n··· V\nn\n) , (3.1)\nwhere\nm\ndenotes the pointwise product\nm\n(\nF\n1\n··· F\nn\n)(\nφ\n) =\nF\n1\n(\nφ\n)\n. . . F\nn\n(\nφ\n) and the operator\nT\nn\nis\nwritten in terms of an exponential\nT\nn\n= exp\nX\n1i<jn\nij\n=\nY\n1i<jn\nX\nl\nij\n0\nl\nij\nij\nl\nij\n!\n(3.2)\nwith\nij\n:=\nH\nF\n,\nδ\n2\nδφ\ni\nδφ\nj\n. (3.3)\nHere the functional derivative\nδ\nδφ\ni\nacts on the\ni\nth element of the tensor product\nV\n1\n··· V\nn\nand\nH\nF\n=\nH\n+\n+\ni\nA\nis the time–ordered version of the Hadamard distribution\nH\n+\nentering the\nconstruction of the free algebra\nA\n0\nvia\nH\n. The exponential\n(3.2)\nterms of Feynman graphs. More precisely, it can be written as a sum over all graphs Γ in\nG\nn\n, the set of\nall graphs with vertices\nV\n(Γ) =\n{\n1\n, . . . , n}\nand\nl\nij\nedges\ne E\n(Γ) joining the vertices\ni, j\n. Furthermore,\nin this construction, there are no tadpoles\nl\nii\n= 0 (cf. Section 2.2 for details on why these are absent)\nand the edges are not oriented l\nij\n= l\nji\n. With this in mind\nT\nn\n=\nX\nΓ∈G\nn\n1\nN(Γ)\n*\nτ\nΓ\n,\nδ\n2|E(Γ)|\nQ\niV (Γ)\nQ\nE(Γ)3ei\nδφ\ni\n(x\ni\n)\n+\n, (3.4)\nwhere\nN\n(Γ) =\nQ\ni<j\nl\nij\n! is a numerical factor counting the possible permutations among the lines\njoining the same two vertices, the second product\nQ\nei\nis over the edges having\ni\nas a vertex and\nx\ni\nis a point in\nM\ncorresponding to the vertex\ni\n. Moreover,\nτ\nΓ\nis a distribution which is well–defined\noutside of all partial diagonals, namely on M\nn\n\\ D\nn\n, where\nD\nn\n:= {x\n1\n, . . . , x\nn\n|x\ni\n= x\nj\nfor at least one pair (i, j), i 6= j} (3.5)\nand τ\nΓ\nhas the form\nτ\nΓ\n=\nY\ne=(i,j)E(Γ)\nH\nF\n(x\ni\n, x\nj\n) =\nY\n1i<jn\nH\nF\n(x\ni\n, x\nj\n)\nl\nij\n. (3.6)\nThe a priori restricted domain of\nτ\nΓ\nis the reason why\nT\nn\ndefined as above is not a well–defined\noperation on F\nn\nloc\n. In this context we recall that the total diagonal d\nn\nD\nn\nis defined as\nd\nn\n:= {(x, . . . , x), x M} M\nn\n. (3.7)\nIn order to complete the construction we need to extend the obtained distributions to the diagonals\nD\nn\n. This is not a straightforward limit because the singular structure of the Feynman propagator\nH\nF\ncontains the one of the\nδ\n–distribution and because pointwise products of the latter distribution are\nill–defined. Consequently, a renormalisation procedure needs to be implemented in order to extend\nτ\nΓ\nto the full M\nn\n. This extension is in general not unique, but subject to renormalisation freedom.\n1\nIn fact, in view of locality and covariance a better definition of the time–ordered product is\nT\n1\n(\nV\n1\n)\n·\nT\nH\n· · ··\nT\nH\nT\n1\n(\nV\nn\n) :=\nT\nn\n(\nV\n1\n⊗· · ·V\nn\n) where\nT\n1\n:\nF\nloc\nF\nloc\nA\n0\nplays the role of identifying local and covariant (smeared) Wick polynomials\nas particular elements of the free algebra\nA\n0\n, cf. [\nHW04\n]. 
As we shall not touch upon this point in our renormalisation scheme, we choose to omit $T_1$ in our formulas for simplicity.

Here we shall discuss a procedure to extend the distributions $\tau_\Gamma$ to $D_n$ called minimal subtraction (MS), which makes use of an analytic regularisation $\Delta_{ij}^{\alpha_{ij}}$ of $\Delta_{ij}$ given in terms of a family of deformations $H^{\alpha_{ij}}_F$ of the Feynman propagator $H_F$, parametrised by complex parameters $\alpha_{ij}$ contained in some neighbourhood of $0 \in \mathbb{C}$. To this end, we follow [DFKR14] and call $t^{(\alpha)}$ an analytic regularisation of a distribution $t$ defined outside of a point $x_0 \in M$ if for all $f \in \mathcal{D}(M)$, $\langle t^{(\alpha)}, f \rangle$ is a meromorphic function in $\alpha$ for $\alpha$ in some neighbourhood of 0 which is analytic for $\alpha \ne 0$. Moreover, $t^{(\alpha)}$ may be extended to $x_0$ for $\alpha \ne 0$, whereas $\lim_{\alpha \to 0} t^{(\alpha)} = t$ on $M \setminus \{x_0\}$.

We shall introduce an analytic regularisation of the Feynman propagator $H_F$ in the following section, but the basic idea of the MS-scheme is independent of the details of the analytic regularisation. Namely, given any analytic regularisation $H^{(\alpha)}_F$ of $H_F$, we repeat the formal construction of $T_n$ presented above by replacing $H_F$ by $H^{(\alpha)}_F$ in (3.3) and $\Delta_{ij}$ by the induced $\Delta_{ij}^{\alpha_{ij}}$ in (3.2). Proceeding in this way we define

$$\mathcal{T}^{(\alpha)}_n := e^{\sum_{i<j} \Delta_{ij}^{\alpha_{ij}}} \qquad \text{with} \quad \alpha := \{\alpha_{ij}\}_{i<j}\,,$$

and the corresponding integral kernels $\tau^{(\alpha)}_\Gamma$ of Feynman graphs $\Gamma$ in analogy to (3.4). We expect that the distributions $\tau^{(\alpha)}_\Gamma$ are multivariate meromorphic functions which have poles at the origin for some of the $\alpha_{ij}$. Hence, in order to obtain well-defined distributions in the limit $\alpha_{ij} \to 0$, and consequently a renormalised time-ordered product $\cdot_{T_H}$, all these poles need to be subtracted.

The properties of the analytically regularised Feynman propagator imply that $\tau^{(\alpha)}_\Gamma$ is well-defined on $M^n \setminus D_n$ (3.5) even if all $\alpha_{ij}$ are vanishing. Since $\tau^{(\alpha)}_\Gamma$ is a multivariate meromorphic function in $\alpha$ which is analytic if restricted to $M^n \setminus D_n$, we may deduce that the principal part of $\tau^{(\alpha)}_\Gamma$ for some $\alpha_{ij}$ must be supported on a partial diagonal of $M^n$. In fact, in order for the time-ordered products to fulfil the factorisation property (2.3), the subtraction of the principal parts of $\tau^{(\alpha)}_\Gamma$ needs to be done in such a way that at each step only local terms are subtracted. However, the previous discussion only implies that the support of the principal parts is contained in $D_n$, i.e. the union of all the partial diagonals in $M^n$. In order to satisfy the causal factorisation property, the principal parts need to be removed in a recursive way, starting from the partial diagonals corresponding to two vertices and proceeding with the partial diagonals corresponding to an increasing number $m \le n$ of vertices,

$$d_I := \{ (x_1, \dots, x_n) \in M^n \,:\, x_i = x_j \;\;\forall\, i, j \in I \subset \{1, \dots, n\} \}\,, \qquad |I| = m\,.$$

The correct recursion procedure is implemented by the so-called Epstein-Glaser forest formula, which is a position-space analogue of the Zimmermann forest formula; see [Ho10, Ke10, DFKR14] for a careful analysis of the subject. We shall here follow the treatment discussed in [DFKR14]. To this end, we consider the set of indices $\underline{n} := \{1, \dots, n\}$ and define a forest $F$ as

$$F = \{ I_1, \dots, I_k \}\,, \qquad I_j \subset \underline{n} \;\text{ and }\; |I_j| \ge 2\,,$$

where for every pair $I_i, I_j \in F$ either $I_i \cap I_j = \emptyset$, or $I_i \subset I_j$, or $I_j \subset I_i$. The set of all forests of $n$ indices, together with the empty forest $\{\}$, is denoted by $\mathcal{F}_n$.
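Before proceeding, it may help to see the forest combinatorics explicitly. The following brute-force Python sketch (again a purely illustrative aside, not part of the scheme itself) enumerates $\mathcal{F}_n$ for small $n$ directly from the nested-or-disjoint condition:

```python
from itertools import combinations

def nested_or_disjoint(I, J):
    """The defining compatibility condition for members of a forest."""
    return I.isdisjoint(J) or I <= J or J <= I

def forests(n):
    """Enumerate F_n: all sets of pairwise nested-or-disjoint subsets of
    {1, ..., n} with at least two elements each, plus the empty forest."""
    cand = [frozenset(c) for r in range(2, n + 1)
            for c in combinations(range(1, n + 1), r)]
    return [set(F) for r in range(len(cand) + 1)
            for F in combinations(cand, r)
            if all(nested_or_disjoint(I, J) for I, J in combinations(F, 2))]

# For n = 3 the candidates are {1,2}, {1,3}, {2,3}, {1,2,3}; any two of
# the 2-element sets overlap without nesting, so F_3 consists of the
# empty forest, the four single-set forests, and the three forests
# pairing one 2-element set with {1,2,3}: eight forests in total.
assert len(forests(3)) == 8
```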
For every subset $I \subset \underline{n}$ we denote by $R_I$ the operator which extracts the principal part with respect to $\alpha_I$ of a multivariate meromorphic function $f(\{\alpha_{ij}\}_{i<j})$, where for every $i, j \in I$ we set $\alpha_{ij} = \alpha_I$, and multiplies it with $-1$:

$$R_I f := -\,\mathrm{pp} \lim_{\substack{\alpha_{ij} \to \alpha_I \\ i,j \in I}} f(\{\alpha_{ij}\}_{i<j})\,. \qquad (3.8)$$

We complement this definition by setting $R_{\{\}}$ to be the identity.

Given all these data, we define the renormalised time-ordered product in the MS-scheme as in, e.g., [DFKR14, Theorem 3.1] by

$$T_n = (T_n)_{\mathrm{ms}} := \lim_{\alpha \to 0}\; m \circ \sum_{F \in \mathcal{F}_n} \prod_{I \in F} R_I\; \mathcal{T}^{(\alpha)}_n\,, \qquad (3.9)$$

where, in the product over $I \in F$, $R_I$ appears before $R_J$ if $I \supset J$. Furthermore, for each graph $\Gamma$, the limit $\alpha = \{\alpha_{ij}\}_{i<j} \to 0$ is computed by setting $\alpha_{ij} = \alpha_\Gamma$ for every $i < j$ before taking the sum over the forests and finally considering the limit $\alpha_\Gamma \to 0$. In this context we recall that, for every element of the sum over $\mathcal{F}_n$, part of the limit $\alpha_{ij} \to \alpha_\Gamma$ is already taken by applying $R_I$, see (3.8).

Given the renormalised $T_n$ in the MS-scheme, the corresponding local $S$-matrix may be constructed as

$$S(V) = \sum_{n=0}^{\infty} \frac{i^n}{\hbar^n\, n!}\; T_n(V \otimes \cdots \otimes V)$$

for any local interaction Lagrangean $V$.

In order to implement the minimal subtraction scheme as outlined above, we first need to specify an analytic regularisation $H^{(\alpha)}_F$ of the Feynman propagator $H_F$ on generic curved spacetimes. Afterwards we have to demonstrate that, for all graphs $\Gamma \in \mathcal{G}_n$, the analytically regularised integral kernels

$$\tau^{(\alpha)}_\Gamma = \prod_{e = (i,j) \in \Gamma} H^{\alpha_{ij}}_F(x_i, x_j) = \prod_{1 \le i < j \le n} H^{\alpha_{ij}}_F(x_i, x_j)^{l_{ij}} \qquad (3.10)$$

appearing in

$$\mathcal{T}^{(\alpha)}_n = \sum_{\Gamma \in \mathcal{G}_n} \frac{1}{N(\Gamma)} \left\langle \tau^{(\alpha)}_\Gamma\,,\, \frac{\delta^{2|E(\Gamma)|}}{\prod_{i \in V(\Gamma)} \prod_{e \ni i} \delta\phi_i(x_i)} \right\rangle \qquad (3.11)$$

satisfy the properties necessary for the implementation of the MS-scheme. In particular, we need to demonstrate that the distribution $\tau^{(\alpha)}_\Gamma$, which is a priori defined only on $M^n \setminus D_n$, can be uniquely extended to the full $M^n$ without renormalisation, where the uniqueness of this extension is important in order to obtain a definite renormalisation scheme. Moreover, we need to show that this distribution $\tau^{(\alpha)}_\Gamma \in \mathcal{D}'(M^n)$ is weakly meromorphic in $\alpha$ in a neighbourhood of 0, where in view of the forest formula it is only necessary to show that, setting $\alpha_{ij} = \alpha_I$ for all $i, j \in I$, $\tau^{(\alpha)}_\Gamma$ is weakly meromorphic in $\alpha_I$. Additionally, we need to prove that, if $\tau_\Gamma$ prior to regularisation is well-defined outside of the partial diagonal $d_I$, then the pole of $\tau^{(\alpha)}_\Gamma$ (with $\alpha_{ij} = \alpha_I$ for all $i, j \in I$) in $\alpha_I$ is supported on $d_I$ and thus local.
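For a single regulator, the subtraction (3.8) and the limit in (3.9) amount to the familiar minimal subtraction of a Laurent series. As a toy illustration with no geometric content (an editor's sketch assuming only standard SymPy, not code from the paper), one can extract and remove the pole of an explicitly meromorphic "amplitude":

```python
import sympy as sp

alpha = sp.symbols("alpha")

# toy regularised amplitude, meromorphic near alpha = 0:
# f(alpha) = 2*Gamma(alpha)/Gamma(alpha + 3) = 1/alpha - 3/2 + O(alpha)
f = 2 * sp.gamma(alpha) / sp.gamma(alpha + 3)

res = sp.residue(f, alpha, 0)   # coefficient of the simple pole -> 1
pp = res / alpha                # principal part pp(f)

# R_I f = -pp(f); adding it to f and sending alpha -> 0 mimics (3.9):
# only the finite part of the Laurent series survives
ms_value = sp.limit(f - pp, alpha, 0)
assert res == 1 and ms_value == sp.Rational(-3, 2)
```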
Finally, we need to prove that our MS-scheme satisfies all properties given in [HW02, HW04] which a physically meaningful renormalisation scheme on curved spacetimes should satisfy, and we need to provide means to explicitly compute the minimal subtraction, which after all is the main motivation for this work.

Our plan to construct the mentioned quantities and to prove their required properties is as follows.

a) In Section 3.2 we construct an analytic regularisation $H^{(\alpha)}_F$ of the Feynman propagator, based on the observation that locally $H_F$ is of the form (2.2) with $\sigma$, one half of the squared geodesic distance, supplied with the Feynman $\epsilon$-prescription $\sigma_F := \sigma + i\epsilon$. Motivated by the fact that the singular structure of $H_F$ originates from the form in which $\sigma_F$ appears, we set locally

$$H^{(\alpha)}_F := \lim_{\epsilon \to 0^+} \frac{1}{8\pi^2} \left( \frac{u\, M^{2\alpha}}{\sigma_F^{1+\alpha}} + \frac{v}{\alpha}\Big( 1 - M^{2\alpha}\, \sigma_F^{-\alpha} \Big) \right) + w\,, \qquad (3.12)$$

where we use the (arbitrary but fixed) mass scale $M$ present in (2.2) also for preserving the mass dimension of $H_F$ in the regularisation.
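As a quick consistency check of (3.12) (assuming the local Hadamard form $H_F = \lim_{\epsilon \to 0^+} \frac{1}{8\pi^2}\big( u/\sigma_F + v \log(\sigma_F/M^2) \big) + w$ suggested by (2.2), which is not restated here), write $M^{2\alpha}\sigma_F^{-\alpha} = e^{-\alpha \log(\sigma_F/M^2)}$ and expand in $\alpha$:

$$\frac{u\, M^{2\alpha}}{\sigma_F^{1+\alpha}} = \frac{u}{\sigma_F}\,\big( 1 + O(\alpha) \big)\,, \qquad \frac{v}{\alpha}\Big( 1 - e^{-\alpha \log(\sigma_F/M^2)} \Big) = v \log\frac{\sigma_F}{M^2} + O(\alpha)\,,$$

so the $\alpha \to 0$ limit of (3.12) reproduces the unregularised propagator, as required of an analytic regularisation.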
b) In Proposition 3.7 we then prove that the relevant distributions

$$t^{(\alpha)}_\Gamma := \prod_{1 \le i < j \le n} \frac{1}{\sigma_F(x_i, x_j)^{\,l_{ij}(1+\alpha_{ij})}} \;\in\; \mathcal{D}'(M^n \setminus D_n) \qquad (3.13)$$

are multivariate analytic functions. The distribution (3.13) only displays the most singular contribution of $\tau^{(\alpha)}_\Gamma$ (3.10), but the subleading contributions are clearly of the same form, up to replacing some of the factors $(1 + \alpha_{ij})$ in the exponents by $\alpha_{ij}$ or 0.

c) In order to show that $t^{(\alpha)}_\Gamma$ can be uniquely extended from $M^n \setminus D_n$ to $M^n$ in a weakly meromorphic fashion, i.e. that the singularities relevant for the forest formula are poles of finite order, we follow a strategy similar to the one used in [HW02] and consider a scaling expansion with respect to a suitable scaling transformation. We first argue in Proposition 3.8 that an analytically regularised distribution $t^{(\alpha)} \in \mathcal{D}'(M^n \setminus d_n)$, which can be written as a sum of homogeneous terms with respect to this scaling transformation plus a sufficiently regular remainder, can be extended to $M^n$ in a weakly meromorphic way, where the uniqueness of the extension follows from its weak meromorphicity. In Proposition 3.10 we give a sufficient condition for the existence of such a homogeneous expansion, and we demonstrate in Proposition 3.12 that the distributions $t^{(\alpha)}_\Gamma$ satisfy this condition.

d) The above-mentioned results are proved by means of generalised Euler operators (see [Da13] for a related concept), which can be written abstractly in terms of a scaling transformation, but also in terms of covariant differential operators whose explicit form can be straightforwardly computed, as we argue in Section 3.3.1. In Proposition 3.9 we use these operators in order to demonstrate how the full relevant pole structure of $t^{(\alpha)}_\Gamma$ can be computed, thus showing the practical feasibility of the MS-scheme. We find that our renormalisation scheme corresponds in fact to a particular form of differential renormalisation, and we expand on this by computing a few examples in Section 3.5.

e) Finally, in Proposition 3.14 we prove that the MS-scheme satisfies the axioms of [HW02, HW04] for time-ordered products and in addition preserves invariance under any spacetime isometries present.

Remark 3.1. The local form (2.2) of $H_F$, and correspondingly the analytically continued $H^{(\alpha)}_F$ defined in (3.12), are only meaningful on normal neighbourhoods $N$ of $(M, g)$. In order to define $H^{(\alpha)}_F$ and the induced distributions $\tau^{(\alpha)}_\Gamma$ (3.10) globally, we may employ suitable partitions of unity. Rather than providing general and cumbersome formulas, we prefer to illustrate the idea at the example of the triangular graph

$$\tau_\Gamma = H_{F,13}\, H_{F,23}\, H_{F,12}^2 := H_F(x_1, x_3)\, H_F(x_2, x_3)\, H_F(x_1, x_2)^2\,,$$

the renormalisation of which is discussed in detail in Section 3.5.3.

We recall that a corollary of Lemma 10 of Chapter 5 in [O'N83] guarantees that there exists a covering $C$ of $M$ consisting of open geodesically convex sets such that $N_i \cap N_j$ is geodesically convex for every $N_i, N_j \in C$.² With this $C$ at our disposal, we define the sets

$$N_{12} := \bigcup_{N \in C} N \times N \subset M^2\,, \qquad N_{123} := \bigcup_{N \in C} N \times N \times N \subset M^3\,.$$

We call sets of the form $N_{12}$ and $N_{123}$ a normal neighbourhood of the total diagonal. This definition is essentially motivated by the fact that for every $x \in M$ we can find a normal neighbourhood $N_x \in C$ of $x$ in $M$. The squared geodesic distance $\sigma$ is then well defined on $N_{12}$, whereas the same is in general not true if we replace $C$ in the previous formula with a covering of $M$ formed by all open geodesically convex sets.

Setting $\sigma_{ij} := \sigma(x_i, x_j)$, we observe that $\sigma_{12}$ is well-defined on $N_{12}$, and that $\sigma_{12}$, $\sigma_{13}$ and $\sigma_{23}$ are well-defined on $N_{123}$. We now consider smooth and compactly supported functions $\chi_{12} \in \mathcal{D}(N_{12})$, $\chi_{123} \in \mathcal{D}(N_{123})$ which are such that $\chi_{12} = 1$ on $d_2 \subset N_{12}$ and $\chi_{123} = 1$ on $d_3 \subset N_{123}$. Note that by construction $\chi_{12}$ and $\chi_{123}$ vanish outside of $N_{12}$ and $N_{123}$ respectively. We may now define the analytically regularised distribution $\tau^{(\alpha)}_\Gamma$ by setting

$$\tau^{(\alpha)}_\Gamma := H^{(\alpha_{13})}_{F,13}\, H^{(\alpha_{23})}_{F,23}\, \big(H^{(\alpha_{12})}_{F,12}\big)^2\, \chi_{12}\, \chi_{123} \;+\; H_{F,13}\, H_{F,23}\, H_{F,12}^2\, (1 - \chi_{12}) \;+\; H_{F,13}\, H_{F,23}\, \big(H^{(\alpha_{12})}_{F,12}\big)^2\, \chi_{12}\, (1 - \chi_{123})\,,$$

where the Feynman propagators are regularised as in (3.12). By construction, $\tau^{(\alpha)}_\Gamma$ is globally well-defined, and the analysis outlined above and performed in the following sections implies that it can be uniquely extended to a weakly meromorphic distribution on the full $M^3$. Moreover, the local pole contributions corresponding to $\alpha_{12} = \alpha_I$ with $I = \{1,2\}$ and to $\alpha_{12} = \alpha_{13} = \alpha_{23} = \alpha_J$ with $J = \{1,2,3\}$ are clearly independent of the choice of $\chi_{12}$, $\chi_{123}$ and $N_{12}$, $N_{123}$, such that the MS-regularised amplitude $(\tau_\Gamma)_{\mathrm{ms}}$ is both globally well-defined and independent of the quantities entering the global definition of the analytic regularisation.

² We would like to thank Valter Moretti for pointing this result out to us.

Keeping this approach to define global analytically regularised quantities in mind, we shall for simplicity work only with local quantities in the following.

3.2. Analytic regularisation of the Feynman propagator $H_F$ on curved spacetimes

Following the plan outlined in Section 3.1, we would like to define an analytic regularisation $H^{(\alpha)}_F$ of $H_F$ by (3.12). To this end, we start our analysis by constructing the distribution $1/\sigma_F^{1+\alpha}$ in $M^2$ for $\alpha \in \mathbb{C} \setminus \mathbb{N}$.
As anticipated in Section 3.1, we shall make use of scaling properties of $1/\sigma_F^{1+\alpha}$ and the induced quantities $t^{(\alpha)}_\Gamma$ (3.13) with respect to a particular geometric scaling transformation.

For every pair of points $x_1, x_i$ in a normal neighbourhood $N$ of $(M, g)$ there exists a unique geodesic $\gamma$ connecting $x_1$ and $x_i$. We shall assume that $\gamma : \lambda \mapsto x_i(\lambda)$ is affinely parametrised and that $x_i(0) = x_1$ whereas $x_i(1) = x_i$. For all $\lambda \ge 0$ and all $f \in \mathcal{D}(N^n)$, with $N^n \subset M^n$ a normal neighbourhood of the total diagonal $d_n$ (cf. Remark 3.1), the geometric …
https://yourfreetemplates.com/venn-diagram-template/
# Venn diagram template

The Venn diagram template in PowerPoint format includes three slides. The first slide has Venn diagrams with two circles, the second presents a Venn diagram template with three circles, and the third has Venn diagrams composed of four circles. In the same diagram PowerPoint template series, you can also find our Maslow's hierarchy of needs, Data Mining, Machine Learning, cloud computing, Artificial Intelligence and BlockChain PowerPoint templates.

The Venn diagram PowerPoint templates include three slides.

## Slide 1 and 2, Venn diagram template with two or three circles.

A Venn diagram consists of multiple overlapping closed curves, usually circles, each representing a set. The points inside a curve labeled A represent elements of the set A, while points outside the boundary represent elements not in the set A. This lends itself to easily read visualizations; for example, the set of all elements that are members of both sets A and B, A ∩ B, is represented visually by the area of overlap of the regions A and B. In Venn diagrams the curves are overlapped in every possible way, showing all possible relations between the sets. The same reasoning applies to the three-circle version.

Venn diagrams are similar to Euler diagrams. However, a Venn diagram for n component sets must contain all 2^n hypothetically possible zones that correspond to some combination of inclusion or exclusion in each of the component sets. Euler diagrams contain only the actually possible zones in a given context. In Venn diagrams, a shaded zone may represent an empty zone, whereas in an Euler diagram the corresponding zone is missing from the diagram.

## Slide 3, Venn diagram template with four circles example.

The region common to A, B, C and D, where the four sets overlap, is called the intersection of A, B, C and D, denoted by A∩B∩C∩D. For detailed info on Venn diagrams, please refer to Wikipedia.
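As a minimal illustration (an editor's example, not part of the original template description), Python's set operations mirror the labelled Venn regions directly:

```python
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
C = {4, 6, 7}

print(A & B)        # overlap of circles A and B: {3, 4}
print(A & B & C)    # central region of a three-circle diagram: {4}
print(A - (B | C))  # part of A outside both B and C: {1, 2}
```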
https://eduzip.com/ask/question/the-solar-constant-is-defined-as-the-energy-incident-per-unit-are-273356
Physics

# The solar constant is defined as the energy incident per unit area per second. The dimensional formula for the solar constant is

$[ML^0T^{-3}]$

##### SOLUTION
Energy incident per unit area per second has dimensions
$\displaystyle \dfrac{\text{Energy}}{\text{Area} \times \text{second}} = \dfrac{ML^2T^{-2}}{L^2 T} = ML^0T^{-3}\,.$
A mechanical check of this dimension bookkeeping is sketched after the related questions below.

#### Related Questions

Q1 (single correct, medium). A highly rigid cubical block $A$ of small mass $M$ and side $L$ is fixed rigidly on to another cubical block $B$ of the same dimensions and of low modulus of rigidity $\eta$, such that the lower face of $A$ completely covers the upper face of $B$. The lower face of $B$ is rigidly held on a horizontal surface. A small force $F$ is applied perpendicular to one of the side faces of $A$. After the force is withdrawn, block $A$ executes small oscillations, the time period of which is given by:
- A. $2\pi \sqrt{M\eta L}$
- B. $2\pi \sqrt{\dfrac{M\eta}{L}}$
- C. $2\pi \sqrt{\dfrac{ML}{\eta}}$
- D. $2\pi \sqrt{\dfrac{M}{L\eta}}$

Q2 (subjective, medium). What is the cause of backlash error?

Q3 (subjective, medium). The pressure on a square plate is measured by measuring the force on the plate and the length of the sides of the plate, using the formula $P = F/l^2$. If the maximum errors in the measurement of force and length are $4\%$ and $2\%$, respectively, then what is the maximum error in the measurement of pressure?

Q4 (single correct, medium). In the equation $\displaystyle \left( \frac{1}{P\beta} \right) = \frac{y}{k_B T}$, where $P$ is the pressure, $y$ is the distance, $k_B$ is the Boltzmann constant and $T$ is the temperature, the dimensions of $\beta$ are:
- A. $M^0 L^2 T^0$
- B. $M^1 L^{-1} T^{-2}$
- C. $M^0 L^0 T^0$
- D. $M^{-1} L^1 T^2$

Q5 (single correct, medium). The light year is a unit of:
- A. time
- B. length
- C. mass
- D. none of these
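Returning to the main question: as promised above, the dimensional arithmetic in the solution can be checked mechanically. Below is a small illustrative Python sketch (a hypothetical helper, not from the original page) that tracks dimensions as (M, L, T) exponent tuples:

```python
# dimensions as (mass, length, time) exponents
ENERGY = (1, 2, -2)   # [E] = M L^2 T^-2
AREA   = (0, 2, 0)    # [A] = L^2
TIME   = (0, 0, 1)    # [t] = T

def divide(a, b):
    """Dividing two quantities subtracts their dimension exponents."""
    return tuple(x - y for x, y in zip(a, b))

solar_constant = divide(divide(ENERGY, AREA), TIME)
print(solar_constant)  # (1, 0, -3), i.e. [M L^0 T^-3]
```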
https://my.oschina.net/u/4271062/blog/3599281
# Ten Classic Sorting Algorithms (time complexity, space complexity, stability, with animated demonstrations)

2019/03/25 21:03

Comparison-based sorts:

| Algorithm | Time complexity | Space complexity |
| --- | --- | --- |
| Bubble sort | O(n^2) | O(1) |
| Selection sort | O(n^2) | O(1) |
| Insertion sort | O(n^2) | O(1) |
| Merge sort | O(n*logn) | O(N) |
| Quick sort | O(n*logn) | O(logN)~O(N) |
| Heap sort | O(n*logn) | O(1) |
| Shell sort | O(n*logn) | O(1) |

Non-comparison sorts:

| Algorithm | Time complexity | Space complexity |
| --- | --- | --- |
| Counting sort | O(N) | O(N) |
| Radix sort | O(N) | O(N) |
| Bucket sort | O(N) | O(N) |

Stability: suppose the sequence to be sorted contains several records with equal keys. If the relative order of these records is unchanged after sorting, the algorithm is said to be stable; otherwise it is unstable.

Quick sort (a runnable sketch follows at the end of this section):

- By default, choose the first number as the pivot.
- Arrange the numbers smaller than the pivot to its left and the larger ones to its right.
- Set a left and a right pointer that keep moving toward the middle: the left pointer looks for a number larger than the pivot and marks it, the right pointer looks for a number smaller than the pivot and marks it, then the two marked numbers are swapped.
- When the two pointers meet, stop moving them and swap the pivot with the number at the left pointer's position.
- Repeat the steps above, applying the same operations to the two resulting partitions.

Heap sort:

- Build a max-heap.
- Swap the element at the top of the heap with the last element.
- Remove that last element from the heap.
- Restore the max-heap and repeat steps 2 and 3.

Bucket sort:

- Set up a fixed-size array of empty buckets.
- Traverse the input data and drop each item into its corresponding bucket.
- Sort each non-empty bucket.
- Concatenate the sorted data from the non-empty buckets.
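As promised above, here is a runnable Python sketch of the described quick-sort procedure (first element as pivot, two pointers moving inward); it is an editor's illustration rather than code from the original post:

```python
def quick_sort(a, lo=0, hi=None):
    """In-place quicksort: pivot = first element, two inward-moving pointers."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot, i, j = a[lo], lo, hi
    while i < j:
        while i < j and a[j] >= pivot:   # right pointer: find value < pivot
            j -= 1
        while i < j and a[i] <= pivot:   # left pointer: find value > pivot
            i += 1
        if i < j:
            a[i], a[j] = a[j], a[i]      # swap the two marked values
    a[lo], a[i] = a[i], a[lo]            # pointers met: swap pivot into place
    quick_sort(a, lo, i - 1)             # recurse on both partitions
    quick_sort(a, i + 1, hi)
    return a

print(quick_sort([3, 7, 1, 9, 4, 2]))    # [1, 2, 3, 4, 7, 9]
```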
https://math.libretexts.org/Bookshelves/Applied_Mathematics/Book%3A_Mathematics_for_Elementary_Teachers_(Manes)/06%3A_Place_Value_and_Decimals/6.03%3A_x-mals
# 6.3: x-mals

Contributed by Michelle Manes, Professor (Mathematics) at University of Hawaii

Just like in base 10, we can add boxes to the right of the decimal point in other bases, like base 5.

However, the prefix "dec" in "decimal point" means ten. So we really shouldn't call it a decimal point anymore. Maybe a "pentimal point"? (In fact, the general term is radix point.)

Think / Pair / Share

- Use reasoning like you saw on page 6 for the base ten system to think about other number systems:
- Figure out the values of x and y in the picture of the base-5 system above. Be sure you can explain your reasoning.
- Draw a base-4 "Dots & Boxes" model, including a radix point and some boxes to the right. Label at least three boxes to the left of the ones place and three boxes to the right of the ones place.
- Draw a base-6 "Dots & Boxes" model, including a radix point and some boxes to the right. Label at least three boxes to the left of the ones place and three boxes to the right of the ones place.

In general, in a base-b system, the boxes to the left of the ones place represent positive powers of the base b. Boxes to the right of the ones place represent reciprocals of those powers.

Work on the following exercises on your own or with a partner.

1. Draw a "Dots & Boxes" picture of each number. $$(a)\; 0.03_{five} \quad (b)\; 0.22_{six} \quad (c)\; 0.103_{four} \quad (d)\; 0.002_{three} \ldotp$$
2. Find a familiar (base-10) fraction value for each number. $$(a)\; 0.04_{five} \quad (b)\; 0.3_{six} \quad (c)\; 0.02_{four} \quad (d)\; 0.03_{nine} \ldotp$$
3. Find a familiar (base-10) fraction value for each number. (You might want to re-read the example of $$0.31$$ in base ten from the previous section.) $$(a)\; 0.13_{five} \quad (b)\; 0.25_{six} \quad (c)\; 0.101_{two} \quad (d)\; 0.24_{seven} \quad (e)\; 0.55_{eight} \ldotp$$

Think / Pair / Share

Tami and Courtney were working on converting $$0.44_{five}$$ to a familiar base-10 fraction.
Courtney said this:

The places in base five to the right of the point are like $$\frac{1}{5}$$ and then $$\frac{1}{25}$$. Since this has two places, the answer should be $$\frac{44}{25}$$.

Tami thought about what Courtney said and replied:

I don't know what the right answer is, but I know that can't be right. The number $$0.44_{five}$$ is less than one, since there are no numbers in the ones place and no explosions that we can do. But the fraction $$\frac{44}{25}$$ is more than one. It's almost two. So they can't be the same number.

- Who makes the most sense, Courtney or Tami? Why do you think so?
- Find the right answer to the problem Courtney and Tami were working on. (A code-based check is sketched below.)

Problem 1

Find the "decimal" representation of $$\frac{1}{4}$$ in each of the following bases. Be sure that you can justify your answer. (You might want to review the example of $$12 \frac{3}{4}$$ in the previous section.) $$\begin{split} base\; 2 \qquad base\; 4& \qquad base\; 6 \\ base\; 8 \qquad base\; 10& \qquad base\; 12 \end{split}$$
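For checking answers such as the one Courtney and Tami debated, here is a small illustrative Python helper (an editor's sketch, not part of the original lesson) that converts a base-b string such as "0.44" in base five into an exact fraction:

```python
from fractions import Fraction

def radix_to_fraction(digits: str, base: int) -> Fraction:
    """Convert a string like '0.44' in the given base to an exact fraction."""
    whole, _, frac = digits.partition('.')
    value = Fraction(int(whole, base) if whole else 0)
    for k, d in enumerate(frac, start=1):
        value += Fraction(int(d, base), base ** k)   # digit over base^k
    return value

print(radix_to_fraction('0.44', 5))   # 24/25: less than one, as Tami argued
print(radix_to_fraction('0.13', 5))   # 8/25
```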
[Figures from the original page: a base-5 "Dots & Boxes" chart with a radix point; the general base-b chart; Courtney and Tami's number 0.44 in base five.]
https://blog.qianxiaoduan.com/archives/652
# 20. Valid Parentheses

### Solution

The idea: repeatedly delete adjacent matched pairs "()", "[]" and "{}". A valid string collapses to the empty string; since each round removes at least one pair, at most n/2 rounds are needed for a string of length n.

```javascript
/**
 * @param {string} s
 * @return {boolean}
 */
var isValid = function (s) {
  let a = s;
  // each pass deletes all adjacent matched pairs; n/2 passes suffice
  for (let i = 0; i < s.length / 2; i++) {
    a = a.replace(/\(\)|\[\]|\{\}/g, '');
  }
  return a === '';
};
```
https://undergroundmathematics.org/chain-rule/r5678/suggestion
Review question

# Can we find the angle between these two tangents?

Ref: R5678

## Suggestion

The point $P$ on the hyperbola $xy=c^2$ is such that the tangent to the hyperbola at $P$ passes through the focus of the parabola $y^2=4ax$. Find the coordinates of $P$ in terms of $a$ and $c$.

Every parabola has a special point, called the focus. If you imagine turning the parabola into a mirror and turning it to face the sun, then all of the sun's rays would be reflected by the parabola through its focus. For the parabola $y^2 = 4ax$, the focus is at $(a,0)$.

You might like to take a look at our Parabolic Mirrors resource.

What's the equation of a line with gradient $m$ that passes through $(a,0)$? Where does this line intersect $xy=c^2$?

If the line is a tangent, how many times does it intersect the curve?

What does this tell us about $m$? Once we've found $m$, can we find $P$?

If $P$ also lies on the parabola, prove that $a^4=2c^4$ …

In the interactive figure, the blue curve is the hyperbola $xy = c^2$, while the red curve is the parabola $y^2=4ax$. The green line is the tangent to the hyperbola through the parabola's focus $F$, which touches the hyperbola at $P$. The values of $a^4$ and $2c^4$ are given. Can we use this to verify what the question asks us to show?

… and calculate the acute angle between the tangents to the two curves at $P$.

How is the gradient of a line linked to the angle it makes with the $x$-axis?

Or, as an alternative method, the dot product of two vectors can help us find the angle between them.
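For readers who want to check their working, here is a SymPy sketch following exactly the hinted route (an editor's illustration, not part of the original resource; the parabola slope $2a/y$ comes from implicitly differentiating $y^2 = 4ax$):

```python
import sympy as sp

x, m = sp.symbols('x m')
a, c = sp.symbols('a c', positive=True)

# line through the focus (a, 0): y = m*(x - a); substituting into
# x*y = c^2 gives the quadratic m*x**2 - m*a*x - c**2 = 0
quadratic = m * x**2 - m * a * x - c**2

# tangency means a repeated root, i.e. vanishing discriminant
m_tan = [s for s in sp.solve(sp.discriminant(quadratic, x), m) if s != 0][0]
px = sp.solve(quadratic.subs(m, m_tan), x)[0]    # a/2
py = sp.expand(m_tan * (px - a))                 # 2*c**2/a
print(m_tan, px, py)                             # -4*c**2/a**2, a/2, 2*c**2/a

# P on y^2 = 4*a*x forces 4*c**4/a**2 - 2*a**2 = 0, i.e. a**4 = 2*c**4
condition = sp.simplify(py**2 - 4 * a * px)
a_val = 2**sp.Rational(1, 4) * c                 # a with a**4 = 2*c**4
assert sp.simplify(condition.subs(a, a_val)) == 0

# acute angle between the two tangents at P in that case
m1 = sp.simplify(m_tan.subs(a, a_val))           # hyperbola slope: -2*sqrt(2)
m2 = sp.simplify((a**2 / c**2).subs(a, a_val))   # parabola slope 2a/y at P: sqrt(2)
angle = sp.atan(sp.simplify(sp.Abs((m1 - m2) / (1 + m1 * m2))))
print(angle)                                     # atan(sqrt(2)), about 54.7 degrees
```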