URL (stringlengths: 15 to 1.68k)
text_list (listlengths: 1 to 199)
image_list (listlengths: 1 to 199)
metadata (stringlengths: 1.19k to 3.08k)
https://techstalking.com/programming/python/hash-value-for-directed-acyclic-graph/
[ "How do I transform a directed acyclic graph into a hash value such that any two isomorphic graphs hash to the same value? It is acceptable, but undesirable for two isomorphic graphs to hash to different values, which is what I have done in the code below. We can assume that the number of vertices in the graph is at most 11.\n\nI am particularly interested in Python code.\n\nHere is what I did. If `self.lt` is a mapping from node to descendants (not children!), then I relabel the nodes according to a modified topological sort (that prefers to order elements with more descendants first if it can). Then, I hash the sorted dictionary. Some isomorphic graphs will hash to different values, especially as the number of nodes grows.\n\nI have included all the code to motivate my use case. I am calculating the number of comparisons required to find the median of 7 numbers. The more that isomorphic graphs hash to the same value the less work that has to be redone. I considered putting larger connected components first, but didn’t see how to do that quickly.\n\n``````from tools.decorator import memoized # A standard memoization decorator\n\nclass Graph:\ndef __init__(self, n):\nself.lt = {i: set() for i in range(n)}\n\ndef compared(self, i, j):\nreturn j in self.lt[i] or i in self.lt[j]\n\ndef withedge(self, i, j):\nretval = Graph(len(self.lt))\nimplied_lt = self.lt[j] | set([j])\nfor (s, lt_s), (k, lt_k) in zip(self.lt.items(),\nretval.lt.items()):\nlt_k |= lt_s\nif i in lt_k or k == i:\nlt_k |= implied_lt\nreturn retval.toposort()\n\ndef toposort(self):\nmapping = {}\nwhile len(mapping) < len(self.lt):\nfor i, lt_i in self.lt.items():\nif i in mapping:\ncontinue\nif any(i in lt_j or len(lt_i) < len(lt_j)\nfor j, lt_j in self.lt.items()\nif j not in mapping):\ncontinue\nmapping[i] = len(mapping)\nretval = Graph(0)\nfor i, lt_i in self.lt.items():\nretval.lt[mapping[i]] = {mapping[j]\nfor j in lt_i}\nreturn retval\n\ndef median_known(self):\nn = len(self.lt)\nfor i, lt_i in self.lt.items():\nif len(lt_i) != n // 2:\ncontinue\nif sum(1\nfor j, lt_j in self.lt.items()\nif i in lt_j) == n // 2:\nreturn True\nreturn False\n\ndef __repr__(self):\nreturn(\"[{}]\".format(\", \".join(\"{}: {{{}}}\".format(\ni,\n\", \".join(str(x) for x in lt_i))\nfor i, lt_i in self.lt.items())))\n\ndef hashkey(self):\nreturn tuple(sorted({k: tuple(sorted(v))\nfor k, v in self.lt.items()}.items()))\n\ndef __hash__(self):\nreturn hash(self.hashkey())\n\ndef __eq__(self, other):\nreturn self.hashkey() == other.hashkey()\n\n@memoized\ndef mincomps(g):\nprint(\"Calculating:\", g)\nif g.median_known():\nreturn 0\nnodes = g.lt.keys()\nreturn 1 + min(max(mincomps(g.withedge(i, j)),\nmincomps(g.withedge(j, i)))\nfor i in nodes\nfor j in nodes\nif j > i and not g.compared(i, j))\n\ng = Graph(7)\nprint(mincomps(g))\n``````\n\nTo effectively test for graph isomorphism you will want to use nauty. Specifically for Python there is the wrapper pynauty, but I can’t attest its quality (to compile it correctly I had to do some simple patching on its `setup.py`). If this wrapper is doing everything correctly, then it simplifies nauty a lot for the uses you are interested and it is only a matter of hashing `pynauty.certificate(somegraph)` — which will be the same value for isomorphic graphs.\n\nSome quick tests showed that `pynauty` is giving the same certificate for every graph (with same amount of vertices). But that is only because of a minor issue in the wrapper when converting the graph to nauty’s format. 
After fixing this, it works for me (I also used the graphs at http://funkybee.narod.ru/graphs.htm for comparison). Here is the short patch which also considers the modifications needed in `setup.py`:\n\n``````diff -ur pynauty-0.5-orig/setup.py pynauty-0.5/setup.py\n--- pynauty-0.5-orig/setup.py 2011-06-18 20:53:17.000000000 -0300\n+++ pynauty-0.5/setup.py 2013-01-28 22:09:07.000000000 -0200\n@@ -31,7 +31,9 @@\n\next_pynauty = Extension(\nname = MODULE + '._pynauty',\n- sources = [ pynauty_dir + \"https://stackoverflow.com/\" + 'pynauty.c', ],\n+ sources = [ pynauty_dir + \"https://stackoverflow.com/\" + 'pynauty.c',\n+ os.path.join(nauty_dir, 'schreier.c'),\n+ os.path.join(nauty_dir, 'naurng.c')],\ndepends = [ pynauty_dir + \"https://stackoverflow.com/\" + 'pynauty.h', ],\nextra_compile_args = [ '-O4' ],\nextra_objects = [ nauty_dir + \"https://stackoverflow.com/\" + 'nauty.o',\ndiff -ur pynauty-0.5-orig/src/pynauty.c pynauty-0.5/src/pynauty.c\n--- pynauty-0.5-orig/src/pynauty.c 2011-03-03 23:34:15.000000000 -0300\n+++ pynauty-0.5/src/pynauty.c 2013-01-29 00:38:36.000000000 -0200\n@@ -320,7 +320,7 @@\nPyObject *p;\n\n- int i,j;\n+ Py_ssize_t i, j;\nint x, y;\n``````\n\nGraph isomorphism for directed acyclic graphs is still GI-complete. Therefore there is currently no known (worst case sub-exponential) solution to guarantee that two isomorphic directed acyclic graphs will yield the same hash. Only if the mapping between different graphs is known – for example if all vertices have unique labels – one could efficiently guarantee matching hashes.\n\nOkay, let’s brute force this for a small number of vertices. We have to find a representation of the graph that is independent of the ordering of the vertices in the input and therefore guarantees that isomorphic graphs yield the same representation. Further this representation must ensure that no two non-isomorphic graphs yield the same representation.\n\nThe simplest solution is to construct the adjacency matrix for all n! permutations of the vertices and just interpret the adjacency matrix as n2 bit integer. Then we can just pick the smallest or largest of this numbers as canonical representation. This number completely encodes the graph and therefore ensures that no two non-isomorphic graphs yield the same number – one could consider this function a perfect hash function. And because we choose the smallest or largest number encoding the graph under all possible permutations of the vertices we further ensure that isomorphic graphs yield the same representation.\n\nHow good or bad is this in the case of 11 vertices? Well, the representation will have 121 bits. We can reduce this by 11 bits because the diagonal representing loops will be all zeros in an acyclic graph and are left with 110 bits. This number could in theory be decreased further; not all 2110 remaining graphs are acyclic and for each graph there may be up to 11! – roughly 225 – isomorphic representations but in practice this might be quite hard to do. Does anybody know how to compute the number of distinct directed acyclic graphs with n vertices?\n\nHow long will it take to find this representation? Naively 11! or 39,916,800 iterations. This is not nothing and probably already impractical but I did not implement and test it. But we can probably speed this up a bit. If we interpret the adjacency matrix as integer by concatenating the rows from top to bottom left to right we want many ones (zeros) at the left of the first row to obtain a large (small) number. 
Therefore we pick as first vertex the one (or one of the vertices) with largest (smallest) degree (indegree or outdegree depending on the representation) and than vertices connected (not connected) to this vertex in subsequent positions to bring the ones (zeros) to the left.\n\nThere are likely more possibilities to prune the search space but I am not sure if there are enough to make this a practical solution. Maybe there are or maybe somebody else can at least build something upon this idea.\n\nHow good does the hash have to be? I assume that you do not want a full serialization of the graph. A hash rarely guarantees that there is no second (but different) element (graph) that evaluates to the same hash. If it is very important to you, that isomorphic graphs (in different representations) have the same hash, then only use values that are invariant under a change of representation. E.g.:\n\n• the total number of nodes\n• the total number of (directed) connections\n• the total number of nodes with `(indegree, outdegree) = (i,j)` for any tuple `(i,j)` up to `(max(indegree), max(outdegree))` (or limited for tuples up to some fixed value `(m,n)`)\n\nAll these informations can be gathered in O(#nodes) [assuming that the graph is stored properly]. Concatenate them and you have a hash. If you prefer you can use some well known hash algorithm like `sha` on these concatenated informations. Without additional hashing it is a continuous hash (it allows to find similar graphs), with additional hashing it is uniform and fixed in size if the chosen hash algorithm has these properties.\n\nAs it is, it is already good enough to register any added or removed connection. It might miss connections that were changed though (`a -> c` instead of `a -> b`).\n\nThis approach is modular and can be extended as far as you like. Any additional property that is being included will reduce the number of collisions but increase the effort necessary to get the hash value. Some more ideas:\n\n• same as above but with second order in- and outdegree. Ie. the number of nodes that can be reached by a `node->child->child` chain ( = second order outdegree) or respectively the number of nodes that lead to the given node in two steps.\n• or more general n-th order in- and outdegree (can be computed in O((average-number-of-connections) ^ (n-1) * #nodes) )\n• number of nodes with eccentricity = x (again for any x)\n• if the nodes store any information (other than their neighbours) use a `xor` of any kind of hash of all the node-contents. Due to the `xor` the specific order in which the nodes where added to the hash does not matter.\n\nYou requested “a unique hash value” and clearly I cannot offer you one. But I see the terms “hash” and “unique to every graph” as mutually exclusive (not entirely true of course) and decided to answer the “hash” part and not the “unique” part. A “unique hash” (perfect hash) basically needs to be a full serialization of the graph (because the amount of information stored in the hash has to reflect the total amount of information in the graph). If that is really what you want just define some unique order of nodes (eg. 
sorted by own outdegree, then indegree, then outdegree of children and so on until the order is unambiguous) and serialize the graph in any way (using the position in the formentioned ordering as index to the nodes).\n\nOf course this is much more complex though.\n\nYears ago, I created a simple and flexible algorithm for exactly this problem (finding duplicate structures in a database of chemical structures by hashing them).\n\nI named it “Powerhash”, and to create the algorithm it required two insights. The first is the power iteration graph algorithm, also used in PageRank. The second is the ability to replace power iteration’s inside step function with anything that we want. I replaced it with a function that does the following on each step, and for each node:\n\n• Sort the hashes of the node’s neighbors\n• Hash the concatenated sorted hashes\n\nOn the first step, a node’s hash is affected by its direct neighbors. On the second step, a node’s hash is affected by the neighborhood 2-hops away from it. On the Nth step a node’s hash will be affected by the neighborhood N-hops around it. So you only need to continue running the Powerhash for N = graph_radius steps. In the end, the graph center node’s hash will have been affected by the whole graph.\n\nTo produce the final hash, sort the final step’s node hashes and concatenate them together. After that, you can compare the final hashes to find if two graphs are isomorphic. If you have labels, then add them in the internal hashes that you calculate for each node (and at each step).\n\nFor more on this you can look at my post here:\n\nThe algorithm above was implemented inside the “madIS” functional relational database. You can find the source code of the algorithm here:\n\nImho, If the graph could be topologically sorted, the very straightforward solution exists.\n\n1. For each vertex with index i, you could build an unique hash (for example, using the hashing technique for strings) of his (sorted) direct neighbours (p.e. if vertex 1 has direct neighbours {43, 23, 2,7,12,19,334} the hash functions should hash the array of {2,7,12,19,23,43,334})\n2. For the whole DAG you could create a hash, as a hash of a string of hashes for each node: Hash(DAG) = Hash(vertex_1) U Hash(vertex_2) U ….. Hash(vertex_N);\nI think the complexity of this procedure is around (N*N) in the worst case. If the graph could not be topologically sorted, the approach proposed is still applicable, but you need to order vertices in an unique way (and this is the hard part)\n\nI will describe an algorithm to hash an arbitrary directed graph, not taking into account that the graph is acyclic. In fact even counting the acyclic graphs of a given order is a very complicated task and I believe here this will only make the hashing significantly more complicated and thus slower.\n\nA unique representation of the graph can be given by the neighbourhood list. For each vertex create a list with all it’s neighbours. Write all the lists one after the other appending the number of neighbours for each list to the front. Also keep the neighbours sorted in ascending order to make the representation unique for each graph. So for example assume you have the graph:\n\n``````1->2, 1->5\n2->1, 2->4\n3->4\n5->3\n``````\n\nWhat I propose is that you transform this to `({2,2,5}, {2,1,4}, {1,4}, {0}, {1,3})`, here the curly brackets being only to visualize the representation, not part of the python’s syntax. 
So the list is in fact: `(2,2,5, 2,1,4, 1,4, 0, 1,3)`.\n\nNow to compute the unique hash, you need to order these representations somehow and assign a unique number to them. I suggest you do something like a lexicographical sort to do that. Lets assume you have two sequences `(a1, b1_1, b_1_2,...b_1_a1,a2, b_2_1, b_2_2,...b_2_a2,...an, b_n_1, b_n_2,...b_n_an)` and `(c1, d1_1, d_1_2,...d_1_c1,c2, d_2_1, d_2_2,...d_2_c2,...cn, d_n_1, d_n_2,...d_n_cn)`, Here c and a are the number of neighbours for each vertex and b_i_j and d_k_l are the corresponding neighbours. For the ordering first compare the sequnces `(a1,a2,...an)` and `(c1,c2, ...,cn)` and if they are different use this to compare the sequences. If these sequences are different, compare the lists from left to right first comparing lexicographically `(b_1_1, b_1_2...b_1_a1)` to `(d_1_1, d_1_2...d_1_c1)` and so on until the first missmatch.\n\nIn fact what I propose to use as hash the lexicographical number of a word of size `N` over the alphabet that is formed by all possible selections of subsets of elements of `{1,2,3,...N}`. The neighbourhood list for a given vertex is a letter over this alphabet e.g. `{2,2,5}` is the subset consisting of two elements of the set, namely `2` and `5`.\n\nThe alphabet(set of possible letters) for the set `{1,2,3}` would be(ordered lexicographically):\n\n`{0}, {1,1}, {1,2}, {1,3}, {2, 1, 2}, {2, 1, 3}, {2, 2, 3}, {3, 1, 2, 3}`\n\nFirst number like above is the number of elements in the given subset and the remaining numbers- the subset itself. So form all the 3 letter words from this alphabet and you will get all the possible directed graphs with 3 vertices.\n\nNow the number of subsets of the set `{1,2,3,....N}` is `2^N` and thus the number of letters of this alphabet is `2^N`. Now we code each directed graph of `N` nodes with a word with exactly `N` letters from this alphabet and thus the number of possible hash codes is precisely: `(2^N)^N`. This is to show that the hash code grows really fast with the increase of `N`. Also this is the number of possible different directed graphs with `N` nodes so what I suggest is optimal hashing in the sense it is bijection and no smaller hash can be unique.\n\nThere is a linear algorithm to get a given subset number in the the lexicographical ordering of all subsets of a given set, in this case `{1,2,....N}`. Here is the code I have written for coding/decoding a subset in number and vice versa. It is written in `C++` but quite easy to understand I hope. 
For the hashing you will need only the code function but as the hash I propose is reversable I add the decode function – you will be able to reconstruct the graph from the hash which is quite cool I think:\n\n``````typedef long long ll;\n\n// Returns the number in the lexicographical order of all combinations of n numbers\n// of the provided combination.\nll code(vector<int> a,int n)\n{\nsort(a.begin(),a.end()); // not needed if the set you pass is already sorted.\nint cur = 0;\nint m = a.size();\n\nll res =0;\nfor(int i=0;i<a.size();i++)\n{\nif(a[i] == cur+1)\n{\nres++;\ncur = a[i];\ncontinue;\n}\nelse\n{\nres++;\nint number_of_greater_nums = n - a[i];\nfor(int j = a[i]-1,increment=1;j>cur;j--,increment++)\nres += 1LL << (number_of_greater_nums+increment);\ncur = a[i];\n}\n}\nreturn res;\n}\n// Takes the lexicographical code of a combination of n numbers and returns the\n// combination\nvector<int> decode(ll kod, int n)\n{\nvector<int> res;\nint cur = 0;\n\nint left = n; // Out of how many numbers are we left to choose.\nwhile(kod)\n{\nll all = 1LL << left;// how many are the total combinations\nfor(int i=n;i>=0;i--)\n{\nif(all - (1LL << (n-i+1)) +1 <= kod)\n{\nres.push_back(i);\nleft = n-i;\nkod -= all - (1LL << (n-i+1)) +1;\nbreak;\n}\n}\n}\nreturn res;\n}\n``````\n\nAlso this code stores the result in `long long` variable, which is only enough for graphs with less than 64 elements. All possible hashes of graphs with 64 nodes will be `(2^64)^64`. This number has about 1280 digits so maybe is a big number. Still the algorithm I describe will work really fast and I believe you should be able to hash and ‘unhash’ graphs with a lot of vertices.\n\nAlso have a look at this question.\n\nI’m not sure that it’s 100% working, but here is an idea:\n\nLet’s code a graph into a string and then take its hash.\n\n1. hash of an empty graph is “”\n2. hash of a vertex with no outgoing edges is “.”\n3. hash of a vertex with outgoing edges is concatenation of every child hash with some delimiter (e.g. “,”)\n\nTo produce the same hash for isomorphic graphs before concatenation in step3 just sort the hashes (e.g. in lexicographical order).\n\nFor hash of a graph just take hash of its root (or sorted concatenation, if there are several roots).\n\nedit While I hoped that the resulting string will describe graph without collisions, hynekcer found that sometimes non-isomorphic graphs will get the same hash. That happens when a vertex has several parents – then it “duplicated” for every parent. For example, the algorithm does not differentiate a “diamond” {A->B->C,A->D->C} from the case {A->B->C,A->D->E}.\n\nI’m not familiar with Python and it’s hard for me to understand how graph stored in the example, but here is some code in C++ which is likely convertible to Python easily:\n\n``````THash GetHash(const TGraph &graph)\n{\nreturn ComputeHash(GetVertexStringCode(graph,FindRoot(graph)));\n}\nstd::string GetVertexStringCode(const TGraph &graph,TVertexIndex vertex)\n{\nstd::vector<std::string> childHashes;\nfor(auto c:graph.GetChildren(vertex))\nchildHashes.push_back(GetVertexStringCode(graph,*c));\nstd::sort(childHashes.begin(),childHashes.end());\nstd::string result=\".\";\nfor(auto h:childHashes)\nresult+=*h+\",\";\nreturn result;\n}\n``````\n\nI am assuming there are no common labels on vertices or edges, for then you could put the graph in a canonical form, which itself would be a perfect hash. 
This proposal is therefore based on isomorphism only.\n\nFor this, combine hashes for as many simple aggregate characteristics of a DAG as you can imagine, picking those that are quick to compute. Here is a starter list:\n\n1. 2d histogram of nodes’ in and out degrees.\n2. 4d histogram of edges a->b where a and b are both characterized by in/out degree.\n\nLet me be more explicit. For 1, we’d compute a set of triples `<I,O;N>` (where no two triples have the same `I`,`O` values), signifying that there are `N` nodes with in-degree `I` and out-degree `O`. You’d hash this set of triples or better yet use the whole set arranged in some canonical order e.g. lexicographically sorted. For 2, we compute a set of quintuples `<aI,aO,bI,bO;N>` signifying that there are `N` edges from nodes with in degree `aI` and out degree `aO`, to nodes with `bI` and `bO` respectively. Again hash these quintuples or else use them in canonical order as-is for another part of the final hash.\n\nStarting with this and then looking at collisions that still occur will probably provide insights on how to get better.\n\nWhen I saw the question, I had essentially the same idea as @example. I wrote a function providing a graph tag such that the tag coincides for two isomorphic graphs.\n\nThis tag consists of the sequence of out-degrees in ascending order. You can hash this tag with the string hash function of your choice to obtain a hash of the graph.\n\nEdit: I expressed my proposal in the context of @NeilG’s original question. The only modification to make to his code is to redefine the `hashkey` function as:\n\n``````def hashkey(self):\nreturn tuple(sorted(map(len,self.lt.values())))\n``````\n\nWith suitable ordering of your descendents (and if you have a single root node, not a given, but with suitable ordering (maybe by including a virtual root node)), the method for hashing a tree ought to work with a slight modification.\n\nExample code in this StackOverflow answer, the modification would be to sort children in some deterministic order (increasing hash?) before hashing the parent.\n\nEven if you have multiple possible roots, you can create a synthetic single root, with all roots as children." ]
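One of the answers in the row above describes, in prose only, a brute-force canonical form for very small DAGs: encode the adjacency matrix as an integer for every vertex permutation and keep the smallest value, so isomorphic graphs map to the same number. Below is a minimal, hedged Python sketch of that idea; the dict-of-successor-sets input format mirrors the question's `self.lt` and is an assumption, and 11! permutations means this is only workable near the stated limit of about 11 vertices.

```python
from itertools import permutations

def canonical_code(adj):
    """adj maps each vertex 0..n-1 to a set of successor vertices."""
    n = len(adj)
    best = None
    for perm in permutations(range(n)):              # perm[v] = new label of vertex v
        code = 0
        for u in range(n):
            for v in adj[u]:
                code |= 1 << (perm[u] * n + perm[v])  # set the bit of the relabelled edge
        if best is None or code < best:
            best = code
    return best        # hash(best), or best itself, can serve as the graph hash

# Two isomorphic 3-vertex DAGs with different labellings get the same code.
g1 = {0: {1, 2}, 1: {2}, 2: set()}
g2 = {0: set(), 1: {0, 2}, 2: {0}}
assert canonical_code(g1) == canonical_code(g2)
```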
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87073827,"math_prob":0.9438986,"size":21466,"snap":"2022-40-2023-06","text_gpt3_token_len":5383,"char_repetition_ratio":0.12682882,"word_repetition_ratio":0.02063896,"special_character_ratio":0.26120377,"punctuation_ratio":0.1449016,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9918125,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T01:21:53Z\",\"WARC-Record-ID\":\"<urn:uuid:919c89bc-7ea2-4da8-9db5-9ca84e43a66c>\",\"Content-Length\":\"114906\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe81d7c0-1f58-4b19-b1a5-ea24eeba2d07>\",\"WARC-Concurrent-To\":\"<urn:uuid:914923b6-07be-448e-b1ad-e02ee1803375>\",\"WARC-IP-Address\":\"104.21.42.107\",\"WARC-Target-URI\":\"https://techstalking.com/programming/python/hash-value-for-directed-acyclic-graph/\",\"WARC-Payload-Digest\":\"sha1:TMYPUH4AUYUDWUV3ENR52GJH27O4MDVX\",\"WARC-Block-Digest\":\"sha1:CR4FOH7YVQDA24MTPAMJ3SLNNCUW4RWO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337446.8_warc_CC-MAIN-20221003231906-20221004021906-00580.warc.gz\"}"}
https://ctan.org/pkg/pi
[ "# pi – Calculate pi\n\nGenerates pi, using the formula: ``` Pi=16*arctan(1/5)-4*arctan(1/239) ``` and leaves the result in an array \\xr, printing what is calculated as it goes along.\n\nThe number of digits you can compute depends on your implementation of . The last digit may be wrong, and if it's 0 or 9, the penultimate digit may also be wrong.\n\n Sources `/macros/plain/contrib/misc/pi.tex` Version 0.993 Maintainer Denis B. Roegel Topics Calculation" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.72225934,"math_prob":0.97581285,"size":761,"snap":"2021-43-2021-49","text_gpt3_token_len":197,"char_repetition_ratio":0.09643329,"word_repetition_ratio":0.0,"special_character_ratio":0.23127463,"punctuation_ratio":0.1294964,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96961117,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-20T13:51:41Z\",\"WARC-Record-ID\":\"<urn:uuid:2fd99763-b7f8-4317-86fb-a0776aa9082f>\",\"Content-Length\":\"14467\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:20809366-3626-48ec-bb96-4bfeb4653fe7>\",\"WARC-Concurrent-To\":\"<urn:uuid:092d32a6-7044-4ff4-a946-b4c04e288a6c>\",\"WARC-IP-Address\":\"5.35.249.60\",\"WARC-Target-URI\":\"https://ctan.org/pkg/pi\",\"WARC-Payload-Digest\":\"sha1:5VC3ZITYBSVLJ7KXKGW7BFGQBBKAOCTM\",\"WARC-Block-Digest\":\"sha1:WRFNW7YAYK5APASOCUWMN7JITE73X42F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585321.65_warc_CC-MAIN-20211020121220-20211020151220-00679.warc.gz\"}"}
https://www.junhaow.com/lc/problems/linked-list/cycle-detection/141_linked-list-cycle.html
[ "Reference: LeetCode\nDifficulty: EasyTwo Pointers\n\n## Problem\n\nGiven a linked list, determine if it has a cycle in it.\n\nNote:\n\n• To represent a cycle in the given linked list, we use an integer pos which represents the position (0-indexed) in the linked list where tail connects to. If pos is -1, then there is no cycle in the linked list.\n• There could be duplicates.\n\nExample:", null, "", null, "Follow up: Can you solve it using $O(1)$ (i.e. constant) memory?\n\n• Yes. Use two pointers or modify the next.\n\n## Analysis\n\nNote: In terms of duplicate nodes, they are different node objects. So it is okay to use hashCode() and ==.\n\nTest Case:\n\n### Hash Set\n\nUse Set<ListNode> instead of Set<Integer>.\n\nTime: $O(N)$.\nSpace: $O(N)$.\n\n### Two Pointers\n\nThe space complexity can be reduced to $O(1)$ by considering two pointers at different speed. A slow pointer and a fast pointer. The slow pointer moves one step at a time while the fast pointer moves two steps at a time. If there is no cycle in the list, the fast pointer will eventually reach the end and we can return false in this case.", null, "• No cycle\nThe fast pointer reaches the end first and the run time depends on the list’s length, which is $O(N)$.\n• Cycle exists\n• Non-cyclic part:\n• The slow pointer takes non-cyclic length steps to enter the cycle. The length is $L1$.\n• Cyclic part:\n• When two pointers enter into the cycle, it will take:\n• (distance between two pointers (at most $C$) / difference of speed) moves for the fast pointer to catch up with the slow runner.\n\nTherefore, the worst case runtime is $O(L1 + C)$, which is $O(N)$.\n\nWhy will the fast pointer eventually meet the slow pointer?\n\nConsider the one-step case:\n\nIf the fast pointer is two steps behind the slower runner, this case will change into the one-step case above (each time the distance decreases by $1$):\n\nNote: So many corner cases.\n\nMy originally bad code (using flag):\n\nImprovement:\n\nTime: $O(N)$\nSpace: $O(1)$\n\n### Mark Node\n\nBy Argondey: I used a slightly faster destructive approach. I created a new node called “mark” and iterated through the list, setting the next value of each node to the mark node. When reaching the new node the first thing I did was check if the node is the mark node. If it ever is the mark node, we have looped. Brilliant!\n\nTime: $O(N)$\nSpace: $O(1)$\n\nComment", null, "Junhao Wang\na software engineering cat" ]
[ null, "https://bloggg-1254259681.cos.na-siliconvalley.myqcloud.com/jtr5l.jpg", null, "https://bloggg-1254259681.cos.na-siliconvalley.myqcloud.com/vlrjw.jpg", null, "https://bloggg-1254259681.cos.na-siliconvalley.myqcloud.com/c7fpv.png", null, "https://www.junhaow.com/resources/avatar/avatar_2020.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.823454,"math_prob":0.9769121,"size":2657,"snap":"2023-40-2023-50","text_gpt3_token_len":732,"char_repetition_ratio":0.120618165,"word_repetition_ratio":0.05636743,"special_character_ratio":0.2754987,"punctuation_ratio":0.1390845,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9977006,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T06:38:23Z\",\"WARC-Record-ID\":\"<urn:uuid:a7fe833f-2eae-47c4-a677-f95fa8c42f8f>\",\"Content-Length\":\"38697\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:336cb3fc-79c1-4c36-a29e-b0d684d5e8ca>\",\"WARC-Concurrent-To\":\"<urn:uuid:f60f9f8f-4931-4317-ac3f-89ea28d414e2>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"https://www.junhaow.com/lc/problems/linked-list/cycle-detection/141_linked-list-cycle.html\",\"WARC-Payload-Digest\":\"sha1:7LQ3Z6C5XAQOQ36WLDBXJL2CAT4MFDZP\",\"WARC-Block-Digest\":\"sha1:SSZLKCSGFCWK4DNZADQAFFIQEDZVOH7E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679103558.93_warc_CC-MAIN-20231211045204-20231211075204-00059.warc.gz\"}"}
https://www.mathcity.org/msc/notes/partial-differential-equations-m-usman-hamid
[ "# Partial Differential Equations by M Usman Hamid\n\nThe course provides a foundation to solve PDE’s with special emphasis on wave, heat and Laplace equations, formulation and some theory of these equations are also intended. We are really very thankful to Prof. Muhammad Usman Hamid for providing these notes and appreciates his effort to publish these notes on MathCity.org", null, "Name Partial Differential Equations (PDE’s) Muhammad Usman Hamid 113 pages PDF (see Software section for PDF Reader) 2.22 MB\n• INTRODUCTION\n• Basic Concepts and Definitions, Superposition Principle, Exercises\n• FIRST-ORDER QUASI-LINEAR EQUATIONS AND METHOD OF CHARACTERISTICS\n• Classification of first-order equations, Construction of a First-Order Equation, Method of Characteristics and General Solutions, Canonical Forms of First-Order Linear Equations, Method of Separation of Variables, Exercises\n• MATHEMATICAL MODELS\n• Heat Equations and consequences, Wave Equations and consequences.\n• CLASSIFICATION OF SECOND-ORDER LINEAR EQUATIONS\n• Second-Order Equations in Two Independent Variables, Canonical Forms, Equations with Constant Coefficients, General Solutions, Summary and Further Simplification, Exercises\n• FOURIER SERIES, FOURIER TRANSFORMATION AND INTEGRALS WITH APPLICATIONS\n• Introduction, Fourier Transform, Properties of Fourier Transform , Convolution theorem, Fourier Sine and Fourier Cosine, Exercises, Fourier Series and its complex form\n• LAPLACE TRANSFORMS\n• Properties of Laplace Transforms, Convolution Theorem of the Laplace Transform, Laplace Transforms of the Heaviside and Dirac Delta Functions, Hankel Transforms, Properties of Hankel Transforms and Applications, Few results about finite fourier transforms, Exercises\n\nPlease click on View Online to see inside the PDF.\n\n• msc/notes/partial-differential-equations-m-usman-hamid" ]
[ null, "https://www.mathcity.org/_media/msc/notes/partial-differential-equations-m-usman-hamid.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8406814,"math_prob":0.90174013,"size":3572,"snap":"2023-14-2023-23","text_gpt3_token_len":857,"char_repetition_ratio":0.12724215,"word_repetition_ratio":0.046728972,"special_character_ratio":0.19428891,"punctuation_ratio":0.08007449,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99573386,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-06T08:54:37Z\",\"WARC-Record-ID\":\"<urn:uuid:bbce194c-2c8b-4687-b0e2-5b6573a5e630>\",\"Content-Length\":\"36623\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ef45f7a7-5b21-4338-8594-e860313406b1>\",\"WARC-Concurrent-To\":\"<urn:uuid:98700cf4-4bc5-4cb4-ac57-d5f1b759f03a>\",\"WARC-IP-Address\":\"172.67.200.195\",\"WARC-Target-URI\":\"https://www.mathcity.org/msc/notes/partial-differential-equations-m-usman-hamid\",\"WARC-Payload-Digest\":\"sha1:OU3Z2BYSRJW5EDU4KNHBLPXQRJWLPOSV\",\"WARC-Block-Digest\":\"sha1:7PQ4TJCV5SFGSBL3WG2K5AMVYSESS6KL\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224652494.25_warc_CC-MAIN-20230606082037-20230606112037-00401.warc.gz\"}"}
https://testbook.com/objective-questions/mcq-on-dynamic-programming--5eea6a0c39140f30f369e0db
[ "# Dynamic Programming MCQ Quiz - Objective Question with Answer for Dynamic Programming - Download Free PDF\n\nLast updated on Sep 9, 2023\n\nMultiple Choice Questions (MCQs) on dynamic programming are valuable for assessing knowledge and understanding of this algorithmic problem-solving technique. Dynamic Programming MCQ help evaluate familiarity with dynamic programming principles, concepts, and applications. By attempting these Dynamic Programming MCQ, individuals can enhance their comprehension of topics such as overlapping subproblems, optimal substructure, and memoization. These questions cover various aspects, including solving optimization problems, finding optimal paths, and efficient resource allocation. Dynamic Programming MCQs enable learners to consolidate their understanding of this powerful algorithmic approach and its practical applications in computer science, mathematics, and other fields.\n\n## Latest Dynamic Programming MCQ Objective Questions\n\n#### Dynamic Programming Question 1:\n\nLet X = {a/25, b/20, c/10, d/20, e/50} be the alphabet and its frequency distribution. What is the optimum prefix code for this distribution?\n\n1. 275\n2. 220\n3. 230\n4. 245\n\nOption 1 : 275\n\n#### Dynamic Programming Question 1 Detailed Solution\n\nHuffman Coding: It is a lossless data compression algorithm. The input characters are assigned with variable length codes and the lengths of the assigned codes are based on the frequencies of corresponding characters. The most frequent character gets the smallest code and the least frequent character gets the largest code.\n\nExplanation:\n\n• Let us have a min-heap with frequencies as key.\nX =", null, "• According to the Huffman algorithm, we will merge the two minimum frequencies, that is, c and d. Now, n1 will be inserted back into the heap.\nX =", null, "", null, "• Now, again we will merge the two minimum frequencies, that is, a and b. Now, n2 will be inserted back into the heap.\nX =", null, "", null, "• Now, we will merge the two minimum frequencies among the remaining ones, that is, n2 and n1. Now, n3 will be inserted back into the heap.\nX =", null, "", null, "• Finally, we will merge the two last frequencies remaining, that is, n3 and e, and root will be inserted in the tree.", null, "• Now the Huffman code for a is 111, b is 110, c is 100, d is 101 and e is 0.\nHence, the minimum cost or optimum prefix code = $$B(C) = \\sum_{p=1}^{n} f(p_i)L(c(p_i))$$ where\n• $$f(p_i) \\rightarrow$$ frequency distribution of alphabet $$p = \\{p_1, p_2, p_3, ...... p_n\\}$$\n• $$L(c(p_i)) \\rightarrow$$ length of code word $$c(p_i)$$\n• Hence, minimum cost or optimum prefix code =\n$$\\rightarrow 10 * 3 + 20 * 3 + 20 * 3 + 25 * 3 + 50 * 1\\\\ \\rightarrow 30 + 60 + 60 + 75 + 50\\\\ \\rightarrow 275$$\n\n#### Dynamic Programming Question 2:\n\nConsider 4 matrices Q, R, S and T with dimensions 13 × 12, 12 × 30, 30 × 15 and 15 × 18 respectively. What is the least number of scalar multiplications needed to find the product QRST using the basic matrix multiplication method?\n\n1. 11150\n2. 11250\n3. 11350\n4. 11450\n\nOption 2 : 11250\n\n#### Dynamic Programming Question 2 Detailed Solution\n\nGiven matrices Q, R, S, and T with dimensions 13 × 12, 12 × 30, 30 × 15, and 15 × 18 respectively. 
From the given matrices sequence of dimensions can be seen as:\n\n0, P1, P2, P3, P4> = <13, 12, 30, 15, 18>\n\nLet c[i, j]  be the minimum number of scalar multiplications needed to compute the product QRST.\n\nUsing Dynamic Programming recursive definition of c[i, j] is given as-\n\n$$c[i,j]=\\begin{array}{l} \\left\\{ {\\begin{array}{*{20}{c}} 0\\\\ {\\min }\\\\{i \\le k < j\\left\\{ {c\\left[ {i,k} \\right] + c\\left[ {k + 1,j} \\right] + {P_i}_{ - 1}{P_k}{P_j}} \\right\\}if\\,i = j} \\end{array}if\\,i = j} \\right.\\\\ \\end{array}$$\n\nIn the given scenario i and j can be 1, 2 or 3, and j can be 2, 3 or 4.\n\n$$\\begin{array}{l} c\\left[ {1,2} \\right] = \\begin{array}{*{20}{c}} {\\min }\\\\ {1 \\le K < 2} \\end{array}\\left\\{ {k = 1:\\begin{array}{*{20}{c}} {c\\left[ {1,1} \\right] + c\\left[ {2,2} \\right] + {P_0}{P_1}{P_2}}\\\\ {0 + 0 + 4680} \\end{array}} \\right\\} \\end{array}$$\n\nc[1, 2] = min {4680} = 4680\n\n$$c\\left[ {2,3} \\right] = \\begin{array}{*{20}{c}} {\\min }\\\\ {2 \\le k < 3} \\end{array}\\left\\{ {k = 2:\\begin{array}{*{20}{c}} {c\\left[ {2,2} \\right] + c\\left[ {3,3} \\right] + {P_1}{P_2}{P_3}}\\\\ {0 + 0 + 5400} \\end{array}} \\right\\}$$\n\nc[2, 3]= min {5400} = 5400\n\n$$c\\left[ {3,4} \\right] = \\begin{array}{*{20}{c}} {\\min }\\\\ {3 \\le k < 4} \\end{array}\\left\\{ {k = 3:\\begin{array}{*{20}{c}} {c\\left[ {3,3} \\right] + c\\left[ {4,4} \\right] + {P_2}{P_3}{P_4}}\\\\ {0 + 0 + 8100} \\end{array}} \\right\\}$$\n\nc[3, 4]= min {8100} = 8100\n\n$$c\\left[ {1,3} \\right] = \\begin{array}{*{20}{c}} {\\min }\\\\ {1 \\le k < 3} \\end{array}\\left\\{ {\\begin{array}{*{20}{c}} {k = 1:}\\\\ {K = 2:} \\end{array}\\begin{array}{*{20}{c}} {c\\left[ {2,2} \\right] + c\\left[ {2,3} \\right]{P_0}{P_1}{P_3}}\\\\ {0 + 5400 + 2340}\\\\ {c\\left[ {1,2} \\right] + c\\left[ {3,3} \\right]{P_0}{P_2}{P_3}}\\\\ {4680 + 0 + 5850} \\end{array}} \\right\\}$$\n\n$$c[1,3] = \\min \\left\\{ {\\frac{{7740}}{{10530}}} \\right\\} = 7740$$\n\n$$c\\left[ {2,4} \\right] = \\begin{array}{*{20}{c}} {\\min }\\\\ {2 \\le k < 4} \\end{array}\\left\\{ {\\begin{array}{*{20}{c}} {k = 2:}\\\\ {K = 3:} \\end{array}\\begin{array}{*{20}{c}} {c\\left[ {2,2} \\right] + c\\left[ {3,4} \\right]{P_1}{P_2}{P_4}}\\\\ {0 + 8100 + 6480}\\\\ {c\\left[ {2,3} \\right] + c\\left[ {4,4} \\right]{P_1}{P_3}{P_4}}\\\\ {5400 + 0 + 3240} \\end{array}} \\right\\}$$\n\n$$c[2,4] = \\min \\left\\{ {\\frac{{14580}}{{8640}}} \\right\\} = 8640$$\n\n$$c\\left[ {1,4} \\right] = \\begin{array}{*{20}{c}} {\\min }\\\\ {1 \\le k < 4} \\end{array}\\left\\{ {\\begin{array}{*{20}{c}} {k = 1:}\\\\ {K = 2:}\\\\ {K = 3:} \\end{array}\\begin{array}{*{20}{c}} {c\\left[ {1,1} \\right] + c\\left[ {2,4} \\right] + {P_0}{P_1}{P_4}}\\\\ {0 + 8640 + 2808}\\\\ {c\\left[ {1,2} \\right] + c\\left[ {3,4} \\right] + {P_0}{P_2}{P_4}}\\\\ {4680 + 8100 + 7020}\\\\ {c\\left[ {1,3} \\right] + c\\left[ {4,4} \\right] + {P_0}{P_3}{P_4}}\\\\ {7740 + 0 + 3510} \\end{array}} \\right\\}$$\n\n$$c[1,4]= \\min \\left\\{ {\\begin{array}{*{20}{c}} {11448}\\\\ {19800}\\\\ {11250} \\end{array}} \\right\\} = 11250$$\n\nHence the minimum number of scalar multiplications needed to compute the product QRST is 11250 using (Q(RS))T).\n\n#### Dynamic Programming Question 3:\n\nNote:\n\nWhere n is the number of items and W is the weight of the Knapsack .\n\nWhich of the following statements are true?\n\n1. Time complexity of Knapsack is O(n* W) where W is the weight of the Knapsack and there are n items.\n2. 
Time complexity of Knapsack is min( O(n*W) , O(2^n) ) where W is the weight of the Knapsack and there are n items.\n3. Knapsack can be implemented in O(n*W) space .\n4. Knapsack can be implemented in O(W) space.\n\nOption :\n\n#### Dynamic Programming Question 3 Detailed Solution\n\nThe correct answer is option 2, option 3, and option 4.\n\nConcept:\n\nOption 1 and option 2:\n\nIf the Weight of the Knapsack is in theta (2^n) , then we should use a brute force approach rather than the dynamic programming approach. So the time complexity of the Knapsack problem is min( O(n*W) , O(2^n) ).\n\nNow, the normal dynamic programming approach takes O(n*W) space, but we can optimize it using some observation.\n\nThe observation is when we are trying to fill the Knapsack with ith object, we only need the optimum result after filling the Knapsack with (i-1)th object, we don’t need entries for (i-2)th , (i-3) th ……. 1 th object. The pseudo-code for implementing the Knapsack problem in O(W) space is\n\nInput : int wt[n] - weight of items to be filled\nint value[n] - value of the items\nint W - Weight of the Knapsack\n\nAlgorithm:\n\n1. Declare two 1 D A[] and B[] arrays of size W+1 each (indexed 0 to W).\n\n2. Initialise them to 0.", null, "Though the time complexity of the above implementation is O(nW), but the space complexity reduces. We use two arrays of size W each. Thus space complexity is O(2W) =O(W)\n\nHence the correct answer is option 2, option 3, and option 4.\n\n#### Dynamic Programming Question 4:\n\nConsider the following items with their associated weights and values.\n\n Item Weight Value 1 4 44 2 7 21 3 5 38 4 2 11 5 7 40 6 10 37 7 3 13\n\nIf a knapsack of capacity 22 units of weight is available and it is allowed to take either the item completely or leave it, the maximum possible profit using the dynamic approach will be _____.\n\n#### Dynamic Programming Question 4 Detailed Solution\n\nThis is a case of 0-1 knapsack because we have to either take the item or completely leave it, we cannot take partial items here.\nInitially, we have 22 units weight of knapsack.\n\nNow calculate the value by weight ratio and sort it in decreasing order.\n\n Item Weight Value Value/Weight 1 4 44 11 3 5 38 7.6 5 7 40 5.7 4 2 11 5.5 7 3 13 4.3 6 10 37 3.7 2 7 21 3\n\nEach stage of the algorithm is described below:\n\n1. In the value column, 11 is the maximum profit ratio, So, we have to take 4kg weight for that. Capacity left=22-4=18 unit. Total profit=44\n2. Next highest ratio is 7.6, So, by taking 5 kg weight for that. Capacity left=18-5=13 unit left. Total profit=44+38=82\n3. Next highest ratio is 5.7, So, by taking 7 kg weight for that. Capacity left=13-7=6 unit. Total profit=82+40=122\n4. Next highest ratio is 5.5, So, by taking 2 kg weight, for that. Capacity left=6-2=4 unit. Total profit=122+11=133\n5. Next highest ratio is 4.3, So, by taking 3 kg weight, for that. Capacity left=4-3=1 unit. Total profit=133+13=146\n6. Now, we have 1 unit of weight space left, but none of the items from 6 and 2 can fit this empty space, so the optimal solution is with 21 units with 146 profit value.\n\nSo, 146 will be the maximum possible profit.\n\n#### Dynamic Programming Question 5:\n\nThe time complexity of an efficient algorithm to find the longest monotonically increasing subsequence of n numbers is\n\n1. O(n)\n2. O(n Ig n)\n3. O(n2)\n4. O(log n)\n5. 
None of the above\n\nOption 2 : O(n Ig n)\n\n#### Dynamic Programming Question 5 Detailed Solution\n\nThe correct answer is option 2.\n\nConcept:\n\nThe Longest Increasing Subsequence (LIS) problem is to find the length of the longest subsequence of a given sequence such that all elements of the subsequence are sorted in increasing order.\n\nAlgorithm:\n\nAlgorithm LongestIncreasingSubsequenceLength( vector& v)\n\n{\nif (v.size() == 0)\nreturn 0;\n\nvector tail(v.size(), 0);\nint length = 1; // always points empty slot in tail\n\ntail = v;\nfor (size_t i = 1; i < v.size(); i++)\n\n{\n\n// new smallest value\nif (v[i] < tail)\ntail = v[i];\n\n// v[i] extends largest subsequence\nelse if (v[i] > tail[length - 1])\ntail[length++] = v[i];\n\n// v[i] will become end candidate of an existing subsequence or Throw away larger elements in all LIS, to make room for upcoming greater elements than v[i] and also, v[i] would have already appeared in one of LIS, identify the location and replace it.\nelse\ntail[CeilIndex(tail, -1, length - 1, v[i])] = v[i];\n\n//uses the binary search (note boundaries in the caller)\n\n}\n\nreturn length;\n}\n\nAnalysis:\n\nTime Complexity:\n\nThe loop runs for N elements. In the worst case, we may end up querying ceil value using binary search (log i) for many A[i]. Therefore, T(n) < O( log N! )  = O(N log N). Analyze to ensure that the upper and lower bounds are also O( N log N ). The complexity is θ(N log N).\n\nHence the correct answer is O(n Ig n).\n\n## Top Dynamic Programming MCQ Objective Questions\n\n#### Dynamic Programming Question 6\n\nMatch the following with respect to algorithm paradigms:\n\n List - I List - II (a) The 8-Queen’s problem (i) Dynamic programming (b) Single-Source shortest paths (ii) Divide and conquer (c) STRASSEN’s Matrix multiplication (iii) Greedy approach (d) Optimal binary search trees (iv) Backtracking\n\n1. (a) - (iv), (b) - (i), (c) - (iii), (d) - (ii)\n2. (a) - (iv), (b) - (iii), (c) - (i), (d) - (ii)\n3. (a) - (iii), (b) - (iv), (c) - (ii), (d) - (i)\n4. (a) - (iv), (b) - (iii), (c) - (ii), (d) - (i)\n\nOption 4 : (a) - (iv), (b) - (iii), (c) - (ii), (d) - (i)\n\n#### Dynamic Programming Question 6 Detailed Solution\n\n8 - queen’s problem:\n\n• In this, 8 queens are placed on a 8 × 8 chessboard such that none of them can take same place.\n• It is the backtracking approach. In backtracking approach, all the possible configurations are checked and check whether result is obtained or not.\n•  A queen can only be attacked if it is in the same row or column or diagonal of another queen.\n\nSingle source shortest path:\n\n• It is used to find the shortest path starting from a source to the all other vertices.\n• It is based on greedy approach.\n• Example of single source shortest path algorithm is Dijkstra’s algorithm which initially consider all the distance to infinity and distance to source is 0.\n\nStrassen’s matrix multiplication:\n\n• It is a method used for matrix multiplication. It uses divide and conquer approach to multiply the matrices.\n• Time consumption is improved by using Strassen’s method as compared to standard multiplication method.\n• It is faster than standard method. Strassen’s multiplication can be performed only on square matrices.\n\nOptimal binary search tree:\n\n• It is also known as weight balanced binary tree. In optimal binary search tree, dummy key is added in the tree by first constructing a binary search tree.\n• It is based on dynamic programming approach. 
Total cost must be as small as possible.\n\n#### Dynamic Programming Question 7\n\nDefine Rn to be the maximum amount earned by cutting a rod of length n meters into one or more pieces of integer length and selling them. For i > 0, let p[i] denotes the selling price of a rod whose length is i meters. Consider the array of prices:\n\np = 1, p = 5, p = 8, p = 9, p = 10, p = 17, p = 18\n\nWhich of the following statements is/are correct about R7?\n\n1. R7 cannot be achieved by a solution consisting of three pieces.\n2. R7 = 19\n3. R7 = 18\n4. R7 is achieved by three different solutions.\n\nOption :\n\n#### Dynamic Programming Question 7 Detailed Solution\n\nAnswer: Option 3 and Option 4\n\nData:\n\np = 1, p = 5, p = 8,\n\np = 9, p = 10, p = 17, p = 18\n\nCalculation\n\nR7= max amount and length can be integer\n\n Pieces Amount 1, 1, 1, 1, 1, 1, 1 7 1, 1, 1, 1, 1, 2 10 1, 1, 1, 1, 3 12 1, 1, 1, 4 12 1, 1, 5 12 1, 6 18 1, 1, 2, 3 15 1, 3, 3 17 1,2,4 15 2, 2, 3 18 7 18\n\nmaximum cost possible is 18 and there are 3 solutions for R7.\n\n#### Dynamic Programming Question 8\n\nLet A1, A2, A3, and A4 be four matrices of dimensions 10 × 5, 5 × 20, 20 × 10, and 10 × 5, respectively. The minimum number of scalar multiplications required to find the product A1A2A3A4 using the basic matrix multiplication method is ______.\n\n#### Dynamic Programming Question 8 Detailed Solution\n\nConcept:\n\nIf we multiply two matrices of order l × m and m × n,\n\nthen number of scalar multiplications required = l × m × n\n\nData:\n\nA1 = 10 × 5, A2 = 5× 20, A3 = 20 × 10, A4 = 10 × 5\n\nCalculation:\n\nThere are 5 ways in which we can multiply these 4 matrices.\n\n(A1A2)(A3A4) , A1A2(A3A4), A1 ((A2A3) A4) , (A1(A2A3))A4, ((A1A2)A3)A4\n\nMinimum number of scalar multiplication can be find out using A1 ((A2A3)A4)\n\nFor A2A3 (order will become 5 ×10) = 5 × 20 × 10 = 1000\n\nFor (A2A3)A4 (order will become 5 × 5) = 5 × 10 × 5 = 250\n\nFor A1 ((A2A3)A4) [Order will become 10 × 5] = 10×  5 × 5 = 250\n\nMinimum number of scalar multiplication required = 1000 + 250 + 250 = 1500\n\nIn all other cases, scalar multiplication are more than 1500.\n\n#### Dynamic Programming Question 9\n\nAssume that multiplying a matrix G1 of dimension 𝑝 × 𝑞 with another matrix G2 of dimension 𝑞 × 𝑟 requires 𝑝𝑞𝑟 scalar multiplications. Computing the product of n matrices G1G2G3…Gn can be done by parenthesizing in different ways. Define Gi Gi+1 as an explicitly computed pair for a given paranthesization if they are directly multiplied. For example, in the matrix multiplication chain G1G2G3G4G5G6 using parenthesization (G1(G2G3))(G4(G5G6)), G2G3 and G5G6 are the only explicitly computed pairs.Consider a matrix multiplication chain F1F2F3F4F5, where matrices F1, F2, F3, F4 and F5 are of dimensions 2 × 25, 25 × 3, 3 × 16, 16 × 1 and 1 × 1000, respectively. In the parenthesization of F1F2F3F4F5 that minimizes the total number of scalar multiplications, the explicitly computed pairs is/are\n\n1. F1F2 and F3F4 only\n2. F2F3 only\n3. F3F4 only\n4. F1F2 and F4F5 only\n\nOption 3 : F3F4 only\n\n#### Dynamic Programming Question 9 Detailed Solution\n\nAs F5 is 11000 matrix so 1000 will play vital role in cost. 
So it is good to multiply F5 at very last step.So, the sequence giving minimal cost: (((F1(F2(F3F4))(F5)) = 48+75+50+2000 = 2173\n\nExplicitly computed pairs is (F3F4).\n\n#### Dynamic Programming Question 10\n\nWhich of the following is a correct time complexity to solve the 0/1 knapsack problem where n and w represents the number of items and capacity of knapsack respectively?\n\n1. O(n)\n2. O(w)\n3. O(nw)\n4. O(n+w)\n\nOption 3 : O(nw)\n\n#### Dynamic Programming Question 10 Detailed Solution\n\nKnapsack problem has the following two variants-\n\n1. Fractional Knapsack Problem\n2. 0/1 Knapsack Problem\n\nTime Complexity-\n\n• Each entry of the table requires constant time θ(1) for its computation.\n• It takes θ(nw) time to fill (n+1)(w+1) entries. Therefore, O(nw + n + w +1) =  O(nw )\n• It takes θ(n) time for tracing the solution since the tracing process traces the n rows.\n• Thus, overall θ(nw) time is taken to solve 0/1 knapsack problem using dynamic programming.\n\n#### Dynamic Programming Question 11\n\nAssembly line scheduling and Longest Common Subsequence problems are an example of __________\n\n1. Dynamic Programming\n2. Greedy Algorithms\n3. Greedy Algorithms and Dynamic Programming respectively\n4. Dynamic Programming and Branch and Bound respectively\n\nOption 1 : Dynamic Programming\n\n#### Dynamic Programming Question 11 Detailed Solution\n\nThe longest common subsequence(LCM)\n\nLCS problem is the problem of finding the longest subsequence common to all sequences in a set of sequences.\n\nLongest Common Subsequence problems is an example of Dynamic Programming.\n\nIn LCS:\n\nIf there is match, A[i, j] = A[i – 1, j  – 1] + 1\n\nIf not match: max(A[i – 1, j], A[i, j – 1])\n\nAssembly line scheduling\n\nThe main goal of assembly line scheduling is to give the best route or can say fastest from all assembly line.\n\nAssembly line schedulingproblems is an example of Dynamic Programming.\n\n#### Dynamic Programming Question 12\n\nConsider product of three matrices M1, M2 and M3 having w rows and x columns, x rows and y columns, and y rows and z columns. Under what condition will it take less time to compute the product as (M1M2)M3 than to compute M1 (M2M3)?\n\n1. Always take the same time\n2. (1/x + 1/z) < (1/w + 1/y)\n3. x > y\n4. (w + x) > (y + z)\n\nOption 2 : (1/x + 1/z) < (1/w + 1/y)\n\n#### Dynamic Programming Question 12 Detailed Solution\n\nConcept:\n\nOrder of M1 = w × x\n\nOrder of M2 = x × y\n\nOrder of M3 = y × z\n\nFor cost of (M1M2)M3 = wxy + wyz. Detailed steps below.\n\n• Cost of M1M2 = w × x × y\n• This gives us a new matrix. Let the new matrix be M.\n• Order of M is w × y\n• Cost of MM3 = w × y × z\n• Total cost = M1M2 cost + MM3 cost\n\nSimilarly, cost of M1 (M2M3) = xyz + wxz\n\nFor (M1M2)M3 to take less time than M1(M2M3)\n\n(wxy + wyz) < (xyz + wxz)\n\nDividing both the sides of above equation, we get\n\n(1/x + 1/z) < (1/w + 1/y)\n\n#### Dynamic Programming Question 13\n\nConsider the following two sequences :\n\nX = < B, C, D, C, A, B, C >\n\nand Y = < C, A, D, B, C, B >\n\nThe length of longest common subsequence of X and Y is :\n\n1. 5\n2. 3\n3. 4\n4. 
2\n\nOption 3 : 4\n\n#### Dynamic Programming Question 13 Detailed Solution\n\nConcept:\n\nIf there is match, A[i, j] = A[i – 1, j  – 1] + 1\n\nIf not match: max(A[i – 1, j], A[i, j – 1])\n\nCALCULATION\n\nLet M =  length of X and N = length of Y\n\nint dp[N+1][M+1]\n\nTable of longest common subsequence:\n\n X → B C D C A B C Y ↓ 0 0 0 0 0 0 0 0 C 0 0 1 1 1 1 1 1 A 0 0 1 1 1 2 2 2 D 0 0 1 2 2 2 2 2 B 0 1 1 2 2 2 3 3 C 0 1 2 2 3 3 3 3 B 0 1 2 2 3 3 4 4\n\nThe length of longest common subsequence of X and Y = A[N][M] = 4\n\n#### Dynamic Programming Question 14\n\nThe following paradigm can be used to find the solution of the problem in minimum time:\n\nGiven a set of non-negative integer, and a value K, determine if there is a subset of the given set with sum equal to K:\n\n1. Divide and Conquer\n2. Dynamic Programming\n3. Greedy Algorithm\n4. branch and Bound\n\nOption 2 : Dynamic Programming\n\n#### Dynamic Programming Question 14 Detailed Solution\n\nConcept:\n\n“Subset sum problem”:\n\nGiven a set of non-negative integers, and a value sum, determine if there is a subset of the given set with sum equal to given sum(K).\n\nExplanation:\n\nAssume array of non-negative integer of size n. This problem can u solved by -\n\nMethod 1: Using recursion\n\n• Time complexity - O(2n)\n• Space complexity - O(n)\n\nMethod 1: Using Dynamic Programming (Optimal solution of “Subset sum problem”)\n\n• Time complexity - O(n.k)\n• Space complexity - O(n.k)\n\nDynamic Programming paradigm can be used to find the solution of the problem in minimum time.\n\nSubset sum problem is NP Complete Problem, because 0/1 Knapsack problem polynomial reduced to sum of\n\nsubset problem and 0/1 Knapsack problem is NP complete problem.\n\n#### Dynamic Programming Question 15\n\nA Young tableau is a 2D array of integers increasing from left to right and from top to bottom. Any unfilled entries are marked with ∞, and hence there cannot be any entry to the right of, or below a ∞. The following Young tableau consists of unique entries.\n\n 1 2 5 14 3 4 6 23 10 12 18 25 31 ∞ ∞ ∞\n\nWhen an element is removed from a Young tableau, other elements should be moved into its place so that the resulting table is still a Young tableau (unfilled entries may be filled in with a ∞). The minimum number of entries (other than 1) to be shifted, to remove 1 from the given Young tableau is ________.\n\n#### Dynamic Programming Question 15 Detailed Solution\n\nAs it is given here, that there can’t be any entry to the right of or below a ∞\n\nSo, when a 1 is removed, it is replaced by its right-side position value i.e. 2 keeping the 5 and 14 intact.\n\n 2 5 14 3 4 6 23 10 12 18 25 31 ∞ ∞ ∞\n\nNow we replace the vacant position by 4 below it is keeping other numbers at same position.\n\n 2 4 5 14 3 6 23 10 12 18 25 31 ∞ ∞ ∞\n\nNow, fill the vacant position by shifting 6 to its left.\n\n 2 4 5 14 3 6 23 10 12 18 25 31 ∞ ∞ ∞\n\nNow, similarly shift 18 to up to fill vacant position keeping other number at same position.\n\n 2 4 5 14 3 6 18 23 10 12 25 31 ∞ ∞ ∞\n\nNow, shift 25 to it’s left\n\n 2 4 5 14 3 6 18 23 10 12 25 31 ∞ ∞ ∞\n\nNow, fill this vacant position with ∞.\n\n 2 4 5 14 3 6 18 23 10 12 25 ∞. 31 ∞ ∞ ∞\n\nSo, in this way minimum entries that needs to be shifted after removal of 1 from table are 5." ]
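As a runnable illustration of the LCS recurrence quoted in Question 13 above (a sketch added here, not part of the original page), the following Python builds the same table and returns 4 for the given sequences X and Y.

```python
def lcs_length(X, Y):
    """Length of the longest common subsequence via the standard DP table."""
    m, n = len(X), len(Y)
    A = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                A[i][j] = A[i - 1][j - 1] + 1          # match: extend the diagonal
            else:
                A[i][j] = max(A[i - 1][j], A[i][j - 1])  # no match: best of top/left
    return A[m][n]

X = ['B', 'C', 'D', 'C', 'A', 'B', 'C']
Y = ['C', 'A', 'D', 'B', 'C', 'B']
print(lcs_length(X, Y))   # 4, matching the table worked out above
```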
[ null, "https://storage.googleapis.com/tb-img/production/22/09/F1_Vinanti_Engineering_22.09.22_D7.png", null, "https://storage.googleapis.com/tb-img/production/22/09/F1_Vinanti_Engineering_22.09.22_D8.png", null, "https://storage.googleapis.com/tb-img/production/22/09/F1_Vinanti_Engineering_22.09.22_D9.png", null, "https://storage.googleapis.com/tb-img/production/22/09/F1_Vinanti_Engineering_22.09.22_D10.png", null, "https://storage.googleapis.com/tb-img/production/22/09/F1_Vinanti_Engineering_22.09.22_D11.png", null, "https://storage.googleapis.com/tb-img/production/22/09/F1_Vinanti_Engineering_22.09.22_D12.png", null, "https://storage.googleapis.com/tb-img/production/22/09/F1_Vinanti_Engineering_22.09.22_D13.png", null, "https://storage.googleapis.com/tb-img/production/22/09/F1_Vinanti_Engineering_22.09.22_D14.png", null, "https://storage.googleapis.com/tb-img/production/23/04/F1_Madhuri_Engineering_17.04.2023_D12.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78524476,"math_prob":0.99647975,"size":20758,"snap":"2023-40-2023-50","text_gpt3_token_len":6871,"char_repetition_ratio":0.13607016,"word_repetition_ratio":0.1611411,"special_character_ratio":0.3652086,"punctuation_ratio":0.110371076,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994413,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T08:50:51Z\",\"WARC-Record-ID\":\"<urn:uuid:b492c6a3-2b68-4d0a-97c5-d94559b52874>\",\"Content-Length\":\"588438\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:00f2206d-67f8-4ec9-8e39-925393b4cade>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b38f7e9-d8c5-483e-8ad5-020ee9d92394>\",\"WARC-IP-Address\":\"104.22.44.238\",\"WARC-Target-URI\":\"https://testbook.com/objective-questions/mcq-on-dynamic-programming--5eea6a0c39140f30f369e0db\",\"WARC-Payload-Digest\":\"sha1:LENPXAEXSFPT76K2YDGMJYX5SIXNJF54\",\"WARC-Block-Digest\":\"sha1:PDX7A2XHBYY4XQBH3LBM2ZIBHGWEB6OW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100873.6_warc_CC-MAIN-20231209071722-20231209101722-00693.warc.gz\"}"}
https://codeclimate.com/github/poliastro/poliastro/src/poliastro/constants/general.py/source
[ "# poliastro/poliastro\n\nsrc/poliastro/constants/general.py\n\n### Summary\n\nB\n7 hrs\n###### Test Coverage\n``````\"\"\"Astronomical and physics constants.\n\nThis module complements constants defined in `astropy.constants`,\n\nNote that `GM_jupiter` and `GM_neptune` are both referred to the whole planetary system gravitational parameter.\n\nUnless otherwise specified, gravitational and mass parameters were obtained from:\n\n* Luzum, Brian et al. “The IAU 2009 System of Astronomical Constants: The Report of the IAU Working Group on Numerical\nStandards for Fundamental Astronomy.” Celestial Mechanics and Dynamical Astronomy 110.4 (2011): 293–304.\nCrossref. Web. `DOI: 10.1007/s10569-011-9352-4`_\n\n* Archinal, B. A. et al. “Report of the IAU Working Group on Cartographic Coordinates and Rotational Elements: 2009.”\nCelestial Mechanics and Dynamical Astronomy 109.2 (2010): 101–135. Crossref. Web. `DOI: 10.1007/s10569-010-9320-4`_\n\n.. _`DOI: 10.1007/s10569-011-9352-4`: http://dx.doi.org/10.1007/s10569-011-9352-4\n.. _`DOI: 10.1007/s10569-010-9320-4`: http://dx.doi.org/10.1007/s10569-010-9320-4\n\nJ2 for the Sun was obtained from:\n\n* https://hal.archives-ouvertes.fr/hal-00433235/document (New values of gravitational moments J2 and J4 deduced\nfrom helioseismology, Redouane Mecheri et al)\n\n\"\"\"\n\nfrom astropy import time\nfrom astropy.constants import Constant\nfrom astropy.constants.iau2015 import (\nM_earth as _M_earth,\nM_jup as _M_jupiter,\nM_sun as _M_sun,\n)\n\n__all__ = [\n\"J2000\",\n\"J2000_TDB\",\n\"J2000_TT\",\n\"GM_sun\",\n\"GM_earth\",\n\"GM_mercury\",\n\"GM_venus\",\n\"GM_mars\",\n\"GM_jupiter\",\n\"GM_saturn\",\n\"GM_uranus\",\n\"GM_neptune\",\n\"GM_pluto\",\n\"GM_moon\",\n\"M_earth\",\n\"M_jupiter\",\n\"M_sun\",\n\"R_mean_earth\",\n\"R_mean_mercury\",\n\"R_mean_venus\",\n\"R_mean_mars\",\n\"R_mean_jupiter\",\n\"R_mean_saturn\",\n\"R_mean_uranus\",\n\"R_mean_neptune\",\n\"R_mean_pluto\",\n\"R_mean_moon\",\n\"R_earth\",\n\"R_mercury\",\n\"R_venus\",\n\"R_mars\",\n\"R_jupiter\",\n\"R_saturn\",\n\"R_sun\",\n\"R_uranus\",\n\"R_neptune\",\n\"R_pluto\",\n\"R_moon\",\n\"R_polar_earth\",\n\"R_polar_mercury\",\n\"R_polar_venus\",\n\"R_polar_mars\",\n\"R_polar_jupiter\",\n\"R_polar_saturn\",\n\"R_polar_uranus\",\n\"R_polar_neptune\",\n\"R_polar_pluto\",\n\"R_polar_moon\",\n\"J2_sun\",\n\"J2_earth\",\n\"J3_earth\",\n\"J2_mars\",\n\"J3_mars\",\n\"J2_venus\",\n\"J3_venus\",\n\"H0_earth\",\n\"rho0_earth\",\n\"Wdivc_sun\",\n]\n\n# HACK: sphinx-autoapi variable definition\nM_earth = _M_earth\nM_jupiter = _M_jupiter\nM_sun = _M_sun\n\n# See for example USNO Circular 179\nJ2000_TT = time.Time(\"J2000\", scale=\"tt\")\nJ2000_TDB = time.Time(\"J2000\", scale=\"tdb\")\nJ2000 = J2000_TT\n\nGM_sun = Constant(\n\"GM_sun\",\n\"Heliocentric gravitational constant\",\n1.32712442099e20,\n\"m3 / (s2)\",\n0.0000000001e20,\n\"IAU 2009 system of astronomical constants\",\nsystem=\"si\",\n)\n\nGM_earth = Constant(\n\"GM_earth\",\n\"Geocentric gravitational constant\",\n3.986004418e14,\n\"m3 / (s2)\",\n0.000000008e14,\n\"IAU 2009 system of astronomical constants\",\nsystem=\"si\",\n)\n\n# Anderson, John D. et al. “The Mass, Gravity Field, and Ephemeris of Mercury.” Icarus 71.3 (1987): 337–349.\n# Crossref. Web. DOI: 10.1016/0019-1035(87)90033-9\nGM_mercury = Constant(\n\"GM_mercury\",\n\"Mercury gravitational constant\",\n2.203209e13,\n\"m3 / (s2)\",\n0.91,\n\"IAU 2009 system of astronomical constants\",\nsystem=\"si\",\n)\n\n# Konopliv, A.S., W.B. Banerdt, and W.L. Sjogren. 
“Venus Gravity: 180th Degree and Order Model.”\n# Icarus 139.1 (1999): 3–18. Crossref. Web. DOI: 10.1006/icar.1999.6086\nGM_venus = Constant(\n\"GM_venus\",\n\"Venus gravitational constant\",\n3.24858592e14,\n\"m3 / (s2)\",\n0.006,\n\"IAU 2009 system of astronomical constants\",\nsystem=\"si\",\n)\n\n# Konopliv, Alex S. et al. “A Global Solution for the Mars Static and Seasonal Gravity, Mars Orientation, Phobos and\n# Deimos Masses, and Mars Ephemeris.” Icarus 182.1 (2006): 23–50.\n# Crossref. Web. DOI: 10.1016/j.icarus.2005.12.025\nGM_mars = Constant(\n\"GM_mars\",\n\"Mars gravitational constant\",\n4.282837440e13,\n\"m3 / (s2)\",\n0.00028,\n\"IAU 2009 system of astronomical constants\",\nsystem=\"si\",\n)\n\n# Jacobson, R. A. et al. “A comprehensive orbit reconstruction for the galileo prime mission in the JS200 system.”\n# The Journal of the Astronautical Sciences 48.4 (2000): 495–516.\n# Crossref. Web.\nGM_jupiter = Constant(\n\"GM_jupiter\",\n\"Jovian system gravitational constant\",\n1.2671276253e17,\n\"m3 / (s2)\",\n2.00,\n\"IAU 2009 system of astronomical constants\",\nsystem=\"si\",\n)\n\n# Jacobson, R. A. et al. “The Gravity Field of the Saturnian System from Satellite Observations and Spacecraft\n# Tracking Data.” The Astronomical Journal 132.6 (2006): 2520–2526.\n# Crossref. Web. DOI: 10.1086/508812\nGM_saturn = Constant(\n\"GM_saturn\",\n\"Saturn gravitational constant\",\n3.79312077e16,\n\"m3 / (s2)\",\n1.1,\n\"IAU 2009 system of astronomical constants\",\nsystem=\"si\",\n)\n\n# Jacobson, R. A. et al. “The Masses of Uranus and Its Major Satellites from Voyager Tracking Data and Earth-Based\n# Uranian Satellite Data.” The Astronomical Journal 103 (1992): 2068.\n# Crossref. Web. DOI: 10.1086/116211\nGM_uranus = Constant(\n\"GM_uranus\",\n\"Uranus gravitational constant\",\n5.7939393e15,\n\"m3 / (s2)\",\n13.0,\n\"IAU 2009 system of astronomical constants\",\nsystem=\"si\",\n)\n\n# Jacobson, R. A. “THE ORBITS OF THE NEPTUNIAN SATELLITES AND THE ORIENTATION OF THE POLE OF NEPTUNE.”\n# The Astronomical Journal 137.5 (2009): 4322–4329. Crossref. Web. DOI:\n# 10.1088/0004-6256/137/5/4322\nGM_neptune = Constant(\n\"GM_neptune\",\n\"Neptunian system gravitational constant\",\n6.836527100580397e15,\n\"m3 / (s2)\",\n10.0,\n\"IAU 2009 system of astronomical constants\",\nsystem=\"si\",\n)\n\n# Tholen, David J. et al. “MASSES OF NIX AND HYDRA.” The Astronomical Journal 135.3 (2008): 777–784. Crossref. Web.\n# DOI: 10.1088/0004-6256/135/3/777\nGM_pluto = Constant(\n\"GM_pluto\",\n\"Pluto gravitational constant\",\n8.703e11,\n\"m3 / (s2)\",\n3.7,\n\"IAU 2009 system of astronomical constants\",\nsystem=\"si\",\n)\n\n# Lemoine, Frank G. et al. “High-Degree Gravity Models from GRAIL Primary Mission Data.”\n# Journal of Geophysical Research: Planets 118.8 (2013): 1676–1698.\n# Crossref. Web. DOI: 10.1002/jgre.20118\nGM_moon = Constant(\n\"GM_moon\",\n\"Moon gravitational constant\",\n4.90279981e12,\n\"m3 / (s2)\",\n0.00000774,\n\"Journal of Geophysical Research: Planets 118.8 (2013)\",\nsystem=\"si\",\n)\n\n# Archinal, B. A., Acton, C. H., A’Hearn, M. F., Conrad, A., Consolmagno,\n# G. J., Duxbury, T., … Williams, I. P. (2018). Report of the IAU Working\n# Group on Cartographic Coordinates and Rotational Elements: 2015. Celestial\n# Mechanics and Dynamical Astronomy, 130(3). 
doi:10.1007/s10569-017-9805-5\n\nR_mean_earth = Constant(\n\"R_mean_earth\",\n6.3710084e6,\n\"m\",\n0.1,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_mean_mercury = Constant(\n\"R_mean_mercury\",\n2.4394e6,\n\"m\",\n100,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_mean_venus = Constant(\n\"R_mean_venus\",\n6.0518e6,\n\"m\",\n1000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_mean_mars = Constant(\n\"R_mean_mars\",\n3.38950e6,\n\"m\",\n2000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_mean_jupiter = Constant(\n\"R_mean_jupiter\",\n6.9911e7,\n\"m\",\n6000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2009\",\nsystem=\"si\",\n)\n\nR_mean_saturn = Constant(\n\"R_mean_saturn\",\n5.8232e7,\n\"m\",\n6000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_mean_uranus = Constant(\n\"R_mean_uranus\",\n2.5362e7,\n\"m\",\n7000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_mean_neptune = Constant(\n\"R_mean_neptune\",\n2.4622e7,\n\"m\",\n19000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_mean_pluto = Constant(\n\"R_mean_pluto\",\n1.188e6,\n\"m\",\n1600,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_mean_moon = Constant(\n\"R_mean_moon\",\n1.7374e6,\n\"m\",\n0,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_sun = Constant(\n\"R_sun\",\n6.95700e8,\n\"m\",\n0,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_earth = Constant(\n\"R_earth\",\n6.3781366e6,\n\"m\",\n0.1,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_mercury = Constant(\n\"R_mercury\",\n2.44053e6,\n\"m\",\n40,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_venus = Constant(\n\"R_venus\",\n6.0518e6,\n\"m\",\n1000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_mars = Constant(\n\"R_mars\",\n3.39619e6,\n\"m\",\n100,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_jupiter = Constant(\n\"R_jupiter\",\n7.1492e7,\n\"m\",\n4000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2009\",\nsystem=\"si\",\n)\n\nR_saturn = Constant(\n\"R_saturn\",\n6.0268e7,\n\"m\",\n4000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_uranus = Constant(\n\"R_uranus\",\n2.5559e7,\n\"m\",\n4000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_neptune = Constant(\n\"R_neptune\",\n2.4764e7,\n\"m\",\n15000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_pluto = Constant(\n\"R_pluto\",\n1.1883e6,\n\"m\",\n1600,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_moon = Constant(\n\"R_moon\",\n1.7374e6,\n\"m\",\n0,\n\"IAU Working Group on Cartographic Coordinates and Rotational 
Elements: 2015\",\nsystem=\"si\",\n)\n\nR_polar_earth = Constant(\n\"R_polar_earth\",\n6.3567519e6,\n\"m\",\n0.1,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_polar_mercury = Constant(\n\"R_polar_mercury\",\n2.43826e6,\n\"m\",\n40,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_polar_venus = Constant(\n\"R_polar_venus\",\n6.0518e6,\n\"m\",\n1000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_polar_mars = Constant(\n\"R_polar_mars\",\n3.376220e6,\n\"m\",\n100,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_polar_jupiter = Constant(\n\"R_polar_jupiter\",\n6.6854e7,\n\"m\",\n10000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2009\",\nsystem=\"si\",\n)\n\nR_polar_saturn = Constant(\n\"R_polar_saturn\",\n5.4364e7,\n\"m\",\n10000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_polar_uranus = Constant(\n\"R_polar_uranus\",\n2.4973e7,\n\"m\",\n20000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_polar_neptune = Constant(\n\"R_polar_neptune\",\n2.4341e7,\n\"m\",\n30000,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_polar_pluto = Constant(\n\"R_polar_pluto\",\n1.1883e6,\n\"m\",\n1600,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nR_polar_moon = Constant(\n\"R_polar_moon\",\n1.7374e6,\n\"m\",\n0,\n\"IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015\",\nsystem=\"si\",\n)\n\nJ2_sun = Constant(\n\"J2_sun\",\n\"Sun J2 oblateness coefficient\",\n2.20e-7,\n\"\",\n0.01e-7,\n\"HAL archives\",\nsystem=\"si\",\n)\n\nJ2_earth = Constant(\n\"J2_earth\",\n\"Earth J2 oblateness coefficient\",\n0.00108263,\n\"\",\n1,\n\"HAL archives\",\nsystem=\"si\",\n)\n\nJ3_earth = Constant(\n\"J3_earth\",\n\"Earth J3 asymmetry between the northern and southern hemispheres\",\n-2.5326613168e-6,\n\"\",\n1,\n\"HAL archives\",\nsystem=\"si\",\n)\n\nJ2_mars = Constant(\n\"J2_mars\",\n\"Mars J2 oblateness coefficient\",\n0.0019555,\n\"\",\n1,\n\"HAL archives\",\nsystem=\"si\",\n)\n\nJ3_mars = Constant(\n\"J3_mars\",\n\"Mars J3 asymmetry between the northern and southern hemispheres\",\n3.1450e-5,\n\"\",\n1,\n\"HAL archives\",\nsystem=\"si\",\n)\n\nJ2_venus = Constant(\n\"J2_venus\",\n\"Venus J2 oblateness coefficient\",\n4.4044e-6,\n\"\",\n1,\n\"HAL archives\",\nsystem=\"si\",\n)\n\nJ3_venus = Constant(\n\"J3_venus\",\n\"Venus J3 asymmetry between the northern and southern hemispheres\",\n-2.1082e-6,\n\"\",\n1,\n\"HAL archives\",\nsystem=\"si\",\n)\n\nH0_earth = Constant(\n\"H0_earth\",\n\"Earth H0 atmospheric scale height\",\n8_500,\n\"m\",\n1,\n\"de Pater and Lissauer 2010\",\nsystem=\"si\",\n)\n\nrho0_earth = Constant(\n\"rho0_earth\",\n\"Earth rho0 atmospheric density prefactor\",\n1.3,\n\"kg / (m3)\",\n1,\n\"de Pater and Lissauer 2010\",\nsystem=\"si\",\n)\n\nWdivc_sun = Constant(\n\"Wdivc_sun\",\n\"total radiation power of Sun divided by the speed of light\",\n1.0203759306204136e14,\n\"kg km / (s2)\",\n1,\n\"Howard Curtis\",\nsystem=\"si\",\n)``````" ]
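As a quick illustration of how these Constant objects are used (they behave like astropy quantities with units attached, assuming the package re-exports them as `poliastro.constants` normally does), here is a short sketch that computes a circular-orbit speed and period from `GM_earth` and `R_earth`; the 500 km altitude is just an example value, not something defined in this module.

```
import numpy as np
from astropy import units as u
from poliastro.constants import GM_earth, R_earth

alt = 500 * u.km                 # example altitude (assumption, not from the module)
r = R_earth + alt                # orbital radius built from the constants above

v_circ = np.sqrt(GM_earth / r).to(u.km / u.s)              # circular-orbit speed
period = (2 * np.pi * np.sqrt(r**3 / GM_earth)).to(u.min)  # orbital period

print(v_circ)   # roughly 7.6 km / s
print(period)   # roughly 94-95 min
```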
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.64152265,"math_prob":0.88697076,"size":13253,"snap":"2020-34-2020-40","text_gpt3_token_len":4418,"char_repetition_ratio":0.2113367,"word_repetition_ratio":0.26223564,"special_character_ratio":0.38210216,"punctuation_ratio":0.2979066,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96666104,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-05T07:45:48Z\",\"WARC-Record-ID\":\"<urn:uuid:af9fa8b2-824e-4ce7-8a92-98258f7c18ed>\",\"Content-Length\":\"30445\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c7fdbbdc-759a-4507-8317-8245f09b0a08>\",\"WARC-Concurrent-To\":\"<urn:uuid:6165d251-10d0-4306-97ef-f0b0c7220a2c>\",\"WARC-IP-Address\":\"52.204.137.100\",\"WARC-Target-URI\":\"https://codeclimate.com/github/poliastro/poliastro/src/poliastro/constants/general.py/source\",\"WARC-Payload-Digest\":\"sha1:XFRSLB677JTUKYOG45D4ONCJ6F3RIWJN\",\"WARC-Block-Digest\":\"sha1:ULUMQALFHSLCVQ62OLMRZ6MJSCFOM3U5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735916.91_warc_CC-MAIN-20200805065524-20200805095524-00364.warc.gz\"}"}
http://azfoo.net/gdt/babs/numbers/n/number51315.html
[ "### About the Number 51,315 (fifty-one thousand three hundred fifteen)\n\nThis nBAB for the number 51315 was created on the palindromic date of 5/13/15.\n\n```MathBabbler Number Analyst (MBNA) output:\n=========================================\n51315 is a natural, whole, integer\n51315 is odd\n51315 proper divisors are: 1,3,5,11,15,33,55,165,311,933,1555,\n3421,4665,10263,17105,\n51315 has 15 proper divisors\n51315 is deficient (sum of divisors is 38541; ratio: 0.751067)\n51315 is unhappy\n51315 is a Squarefree Number\n51315 is Harshad (Niven) number\n51315 is composite (not prime)\n51315 has the prime factors: 3*5*11*311 (sum=330)\n51315 is a joke (Smith) number\n51315 is a hoax number\n51315 is palindromic\n51315 is Arithmetic (A003601)\n51315 is not in A038853\n51315 in octal is 0144163\n51315 in hexadecimal is 0xc873\n51315 in binary is 1100100001110011 (is evil)\n51315 nearest square numbers: -239...214 (51076...51529 )\nsqrt(51315) = 226.528\nln(51315) = 10.8457\nlog(51315) = 4.71024\n51315 reciprocal is .00001948747929455324953717236675\n51315 is an apocalyptic power\n51315! is inf\n51315 is 16334.1 Pi years\n51315 is 2565 score and 15 years\n51315 is 25658^2 - 25657^2 and 25658 + 25657 = 51315\n51315 is a multiple of 1 & it contains a 1 (A011531)\n51315 is a multiple of 3 & it contains a 3 (A121023)\n51315 is a multiple of 5 & it contains a 5 (A121025)\n```\n\nCreator: Gerald Thurman [[email protected]]\nCreated: 13 May 2015", null, "" ]
[ null, "http://i.creativecommons.org/l/by/3.0/us/88x31.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7858999,"math_prob":0.9915835,"size":1340,"snap":"2019-35-2019-39","text_gpt3_token_len":482,"char_repetition_ratio":0.22080839,"word_repetition_ratio":0.014563107,"special_character_ratio":0.56044775,"punctuation_ratio":0.14652015,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9837414,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-19T19:27:20Z\",\"WARC-Record-ID\":\"<urn:uuid:130fd47e-9991-48fb-bfef-ed652bfa83ee>\",\"Content-Length\":\"3620\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ad9283c4-52ef-4079-8094-dc33920aa0b7>\",\"WARC-Concurrent-To\":\"<urn:uuid:bf59699c-0780-478b-8c6e-6bf284a39b50>\",\"WARC-IP-Address\":\"70.38.112.232\",\"WARC-Target-URI\":\"http://azfoo.net/gdt/babs/numbers/n/number51315.html\",\"WARC-Payload-Digest\":\"sha1:RQOHZSRWQJVRT5XSDFX5EWAVUNHO4AB4\",\"WARC-Block-Digest\":\"sha1:EM5DL7IF3SBYRC5WQU3U2HQU7C6UL77G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573570.6_warc_CC-MAIN-20190919183843-20190919205843-00464.warc.gz\"}"}
https://admin.clutchprep.com/chemistry/practice-problems/86511/the-temperature-inside-a-pressure-cooker-is-115-c-calculate-the-vapor-pressure-o
[ "Chemistry Practice Problems Clausius-Clapeyron Equation Practice Problems Solution: The temperature inside a pressure cooker is 115 ˚C...\n\n⚠️Our tutors found the solution shown to be helpful for the problem you're searching for. We don't have the exact solution yet.\n\n# Solution: The temperature inside a pressure cooker is 115 ˚C. Calculate the vapor pressure of water inside the pressure cooker. What would be the temperature inside the pressure cooker if the vapor pressure of water was 3.50 atm?\n\n###### Problem\n\nThe temperature inside a pressure cooker is 115 ˚C. Calculate the vapor pressure of water inside the pressure cooker. What would be the temperature inside the pressure cooker if the vapor pressure of water was 3.50 atm?\n\nClausius-Clapeyron Equation\n\nClausius-Clapeyron Equation\n\n#### Q. The enthalpy of vaporization for acetone is 32.0 kJ/mol. The normal boiling point for acetone is 56.5 ˚C. What is the vapor pressure of acetone at 23....\n\nSolved • Thu Oct 18 2018 16:58:27 GMT-0400 (EDT)\n\nClausius-Clapeyron Equation\n\n#### Q. Carbon tetrachloride, CCl 4, has a vapor pressure of 213 torr at 40. ˚C and 836 torr at 80. ˚C. What is the normal boiling point of CCl4?\n\nSolved • Thu Oct 18 2018 15:40:17 GMT-0400 (EDT)\n\nClausius-Clapeyron Equation\n\n#### Q. In Breckenridge, Colorado, the typical atmospheric pressure is 520. torr. What is the boiling point of water (ΔHvap = 40.7 kJ/mol) in Breckenridge?\n\nSolved • Thu Oct 18 2018 15:38:06 GMT-0400 (EDT)\n\nClausius-Clapeyron Equation\n\n#### Q. From the following data for liquid nitric acid, determine its heat of vaporization and normal boiling point.\n\nSolved • Thu Oct 18 2018 15:37:19 GMT-0400 (EDT)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.742682,"math_prob":0.91506916,"size":487,"snap":"2020-10-2020-16","text_gpt3_token_len":126,"char_repetition_ratio":0.17184265,"word_repetition_ratio":0.12307692,"special_character_ratio":0.18069816,"punctuation_ratio":0.074074075,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98453814,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-29T10:09:53Z\",\"WARC-Record-ID\":\"<urn:uuid:58447667-c5a2-4db4-966a-9535e76e2eaa>\",\"Content-Length\":\"80086\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:72a6ddd5-0652-4c37-813b-34941242ef5c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0edebe72-8d69-4b6e-a2c1-75b0aa5fbf16>\",\"WARC-IP-Address\":\"34.228.134.34\",\"WARC-Target-URI\":\"https://admin.clutchprep.com/chemistry/practice-problems/86511/the-temperature-inside-a-pressure-cooker-is-115-c-calculate-the-vapor-pressure-o\",\"WARC-Payload-Digest\":\"sha1:TCESSRU633A2BVGUZFM6P7KY2RTW2HIV\",\"WARC-Block-Digest\":\"sha1:6NFR2RMWDW5QICCIVLWWX7LERGBWG6PG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370494064.21_warc_CC-MAIN-20200329074745-20200329104745-00203.warc.gz\"}"}
https://forums.ni.com/t5/LabVIEW/Wavelet-Transformation-Advanced-signal-processing/td-p/4186926
[ "# LabVIEW\n\ncancel\nShowing results for\nDid you mean:\n\n## Wavelet Transformation- Advanced signal processing\n\nGoal: To find the defect frequencies of the bearing (rolling-element) from the \"simulated\" acceleration signal.\n\nI am trying to use \"Wavelet transform\" functions of Advanced signal processing toolkit for the defect identification.\n\nBelow shown is the acceleration signal in green (cosine of 10 Hz frequency), which has noise and defect signal (in RED plot, having 5 Hz frequency). I tried to get back the red plot with the help of Wavelet analysis, but was not possible as it is hard for me to comprehend these wavelet functions.", null, "Below is the graph showing what I have got in (a) time domain and in (b) frequency domain", null, "the above graph should get closer to the defect (Red plot shown in the first graph) signal.", null, "I have used 1. BP Filter (see below for config.) -> white Plot", null, "2. Wavelet packet analysis tool (see  below for config.) -> red Plot. used db14", null, "3. Band pass filter (IIR filter with LPF- 3 Hz and HPF-7 Hz (since i know my defect frequency!))-> Green plot (PERFECT!!  I need this but from WPT analysis with that fact that defect frequency is unkown, is that possible ?)\n\n4. Wavelet denoise  (see  below for config.) -> Blue Plot. Just used Bior3_X, since it was more closer to the defect waveform shape than daubechies.", null, "I know that to find the defect signal, I need to use a mother wavelet which is very similar to the defect waveform. But I dont know to choose that. below is my defect signal with 500Hz frequency, which is repeating at 5 Hz.", null, "I have attached the labview file (saved as html, if any trouble opening it, let me know) that I developed along with the  acceleration signal dataset (with 3 Y-axes columns- Signal + noise, defect, 1st and 2nd columns superimposed) .\n\nI try to organise the question as clear as possible, but I can do more if you cannot understand my points.\n\nMessage 1 of 2\n(224 Views)\n\n## Re: Wavelet Transformation- Advanced signal processing\n\nHello Everyone,\n\nas I did not get any reply for my post, I ask doubts regard to more concrete implementation of Wavelet tools.\n\nWavelet packet analysis: How do I get wavelet denoised signal for each of the node levels. For eg., I used Wavelet: db04 with Level: 3 then there are 8 nodes possible 3.0, 3.1, 3.2, ... 3.7, then I do I get the signala for each of these 8 nodes. When I display the node coefficients (refer to my block diagram in my precious post), i get a (only one) co-effciient of -0,661829 for node \"0\". What does this number signify?\n\nAlso, is it possible to define a \"morely\" mother wavelet in labview when I know the center frequency(fc) and bandwidth (sigma)? IF so, let me know. And which tool to use to filter my signal with this wavelet ? any eg. implementation." ]
[ null, "https://forums.ni.com/t5/image/serverpage/image-id/293770i28E5199A3770869D/image-size/medium", null, "https://forums.ni.com/t5/image/serverpage/image-id/293771iA42A53C7AC9770E1/image-size/medium", null, "https://forums.ni.com/t5/image/serverpage/image-id/293773i7B4667D0C9650BF7/image-size/medium", null, "https://forums.ni.com/t5/image/serverpage/image-id/293774i48D9238E4CCDD79B/image-size/medium", null, "https://forums.ni.com/t5/image/serverpage/image-id/293775i38DE6E3364A137F4/image-size/medium", null, "https://forums.ni.com/t5/image/serverpage/image-id/293777i0C52DC683B66F38F/image-size/medium", null, "https://forums.ni.com/t5/image/serverpage/image-id/293780i85F103EEBF8BD72C/image-size/medium", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9290297,"math_prob":0.68358725,"size":2779,"snap":"2022-05-2022-21","text_gpt3_token_len":697,"char_repetition_ratio":0.123603605,"word_repetition_ratio":0.006060606,"special_character_ratio":0.2529687,"punctuation_ratio":0.1303602,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97442365,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-23T08:19:38Z\",\"WARC-Record-ID\":\"<urn:uuid:10e87402-e2ca-4a8a-bb07-af6f4560d70c>\",\"Content-Length\":\"272409\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:01bfda15-f417-48de-ac09-c4ef2206c2f2>\",\"WARC-Concurrent-To\":\"<urn:uuid:cdf054d9-5633-4fe6-bdb1-9d7752803e84>\",\"WARC-IP-Address\":\"104.16.35.15\",\"WARC-Target-URI\":\"https://forums.ni.com/t5/LabVIEW/Wavelet-Transformation-Advanced-signal-processing/td-p/4186926\",\"WARC-Payload-Digest\":\"sha1:J66DILABYP3UBPXZJS567CQWCNFU5MO2\",\"WARC-Block-Digest\":\"sha1:SDVAWQB4GXBEQE7K7CYLRGOAFLQG6AJ4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662556725.76_warc_CC-MAIN-20220523071517-20220523101517-00773.warc.gz\"}"}
https://hackeradam.com/post/bubble-sort/
[ "Bubble sort is the easiest possible sorting algorithm you could imagine. It works by simply iterating over an array of data and comparing adjacent items. If those items are in the wrong order, then they are swapped. In doing this, the largest items “bubble up” to the correct position in the array (assuming ascending order, anyway).\n\nWhile this is great from a simplicity standpoint, it’s a pretty awful solution in terms of efficiency. The worst-case performance of bubble sort is O(n2). The best-case turns out to be O(n).\n\nAs you can see, this algorithm is not suitable for any sufficiently large datasets.\n\n## Implementation\n\nThis algorithm is so simple that I think we can just jump right into the implementation without worrying about too much more explanation.\n\nHere’s a bubble sort implementation in Java:\n\n 1package com.hackeradam.sorting;\n2public class Main {\n3 // Performs a bubble sort on a int array\n4 public static void bubblesort(int arr[]) {\n5 int size = arr.length;\n6 int tmp;\n7 // This boolean flag is a small optimization\n8 // that how long the loop needs to run in some\n9 // situations\n10 boolean swapped = false;\n11 for (int i = 0; i < size - 1; i++) {\n12 swapped = false;\n13 for (int j = 0; j < size - i - 1; j++) {\n14 if (arr[j] > arr[j+1]) {\n15 // Swap the items\n16 tmp = arr[j];\n17 arr[j] = arr[j + 1];\n18 arr[j + 1] = tmp;\n19 swapped = true;\n20 }\n21 }\n22 // If we didn't need to do a swap in that inner loop\n23 // then we must be sorted, so break\n24 if (!swapped)\n25 break;\n26 }\n27 }\n28 public static void printArray(int arr[]) {\n29 for (int i=0; i < arr.length; i++) {\n30 System.out.print(arr[i] + \" \");\n31 }\n32 System.out.println();\n33 }\n34 public static void main(String... args) {\n35 int items[] = {5, 2, 88, 100, 3, 0, -1, 20, 30, 7, 99};\n36 printArray(items);\n37 System.out.println(\"After bubble sort: \");\n38 bubblesort(items);\n39 printArray(items);\n40 }\n41}" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6882553,"math_prob":0.96234924,"size":1904,"snap":"2022-40-2023-06","text_gpt3_token_len":506,"char_repetition_ratio":0.1,"word_repetition_ratio":0.0,"special_character_ratio":0.34033614,"punctuation_ratio":0.15776698,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96511894,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-01T23:16:38Z\",\"WARC-Record-ID\":\"<urn:uuid:f19c981f-2975-48b3-9fd4-2c8318fc782f>\",\"Content-Length\":\"17193\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:712fce01-2a05-491c-bd62-a980898fb29d>\",\"WARC-Concurrent-To\":\"<urn:uuid:e47f5416-8c30-4b2e-9e38-918487fc971f>\",\"WARC-IP-Address\":\"104.21.63.22\",\"WARC-Target-URI\":\"https://hackeradam.com/post/bubble-sort/\",\"WARC-Payload-Digest\":\"sha1:UMMR3J4APVKH2WG5OWF644LNZEXGMUCV\",\"WARC-Block-Digest\":\"sha1:OM66S4HIKDR6W5O23ZX7F3A6SXZPQ6ZK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030336978.73_warc_CC-MAIN-20221001230322-20221002020322-00496.warc.gz\"}"}
https://kodey.co.uk/2020/07/30/normal-distribution-and-the-empirical-rule/
[ "# Normal Distribution And The Empirical Rule\n\nA normal distribution looks a little bit like the below. It’s where the mean, median and mode are on top of one another.\n\nRemember in a previous article, we discussed Chebyshev’s theorem, which gave us a guideline for what percentage of datapoints fell between two intervals? Well, the empirical rule is the same, but better. It gives us a much more accurate approximation of the percentage of data that falls between certain intervals – but, unlike Chebyshev’s theorem, only works for normally distributed data.\n\nHere’s how they compare:\n\nIf we look at the empirical rule, it states the approximate amount of data that will fall between our sigma intervals. The below chart shows this graphically.\n\nSo, we simply take the mean and +/- sigma to define our upper and lower boundaries.\n\n##### 1 comment\n\nPrevious Article", null, "## Standard Normal Distributions Understood\n\nNext Article", null, "## Measures Of Variance\n\n##### Related Posts", null, "## Standard Normal Distributions Understood", null, "## A Guide To Basic Linear Algebra Notation For Machine Learning", null, "## Z Scores & Probability", null, "" ]
[ null, "https://kodey.co.uk/wp-content/uploads/2020/09/pexels-elevate-1267338.jpg", null, "https://kodey.co.uk/wp-content/uploads/2020/09/pexels-daniel-reche-1556707.jpg", null, "https://kodey.co.uk/wp-content/uploads/2020/09/pexels-elevate-1267338.jpg", null, "https://kodey.co.uk/wp-content/uploads/2020/12/pexels-rfstudio-3825462-260x195.jpg", null, "https://kodey.co.uk/wp-content/uploads/2020/09/pexels-matthias-groeneveld-4200740.jpg", null, "https://kodey.co.uk/wp-content/uploads/2020/09/pexels-karolina-grabowska-4021883.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9270274,"math_prob":0.77556133,"size":810,"snap":"2022-27-2022-33","text_gpt3_token_len":168,"char_repetition_ratio":0.094292805,"word_repetition_ratio":0.0,"special_character_ratio":0.20123456,"punctuation_ratio":0.12101911,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9719068,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,5,null,5,null,5,null,7,null,7,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T04:12:37Z\",\"WARC-Record-ID\":\"<urn:uuid:6efef3eb-5d73-4004-abcb-f31416de8461>\",\"Content-Length\":\"149206\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d9e641a4-8efa-452d-82e6-ee3fb300c2b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:68792e44-fda4-45c0-bbc8-85d2d65a1530>\",\"WARC-IP-Address\":\"35.214.29.137\",\"WARC-Target-URI\":\"https://kodey.co.uk/2020/07/30/normal-distribution-and-the-empirical-rule/\",\"WARC-Payload-Digest\":\"sha1:CKIZPCWPW2I7NQ6IAZ5U7IBRKEDK2NUP\",\"WARC-Block-Digest\":\"sha1:VJGZ4QHZLR7S5N22EPTXDKTIPXTKLDJK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103037089.4_warc_CC-MAIN-20220626040948-20220626070948-00587.warc.gz\"}"}
https://wiki.docking.org/index.php?title=Google_sheets_hit_picking&diff=prev&oldid=11516
[ "# Difference between revisions of \"Google sheets hit picking\"\n\nTo use Google sheets for hit-picking\n\n1. Put zinc_ids in sorted order in a column, e.g. B2-B100\n\n2. To depict the molecules copy and paste the following in each cells in new column. E.g. put this in cell C2 to depict the molecule in B2\n\n``` =IMAGE(CONCATENATE(\"http://zinc15.docking.org/substances/\", B2, \"-small.png\"))\n```\n\n3. Use IMPORTDATA to populate additional columns of interest in the header of the next columns. It looks like it can only load 3 columns at a time, so for example put the first cell D1, the next . Look up additional columns of interest here `http://zinc15.docking.org/substances/help/`\n\n``` Put in cell D1: =IMPORTDATA(CONCATENATE(\"http://zinc15.docking.org/substances.csv?zinc_id-in=\", JOIN(\"+\", B2:B100),\"&output_fields=smiles mol_formula logp\"))\nPut in cell G1: =IMPORTDATA(CONCATENATE(\"http://zinc15.docking.org/substances.csv?zinc_id-in=\", JOIN(\"+\", B2:B100),\"&output_fields=mwt num_chiral_centers reactivity\"))\nPut in cell K1: =IMPORTDATA(CONCATENATE(\"http://zinc15.docking.org/substances.csv?zinc_id-in=\", JOIN(\"+\", B2:B100),\"&output_fields=predicted_gene_names purchasability supplier_codes\"))\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6307402,"math_prob":0.5151164,"size":2821,"snap":"2022-40-2023-06","text_gpt3_token_len":820,"char_repetition_ratio":0.12034079,"word_repetition_ratio":0.5148515,"special_character_ratio":0.29493088,"punctuation_ratio":0.2359155,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9590532,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-25T05:33:28Z\",\"WARC-Record-ID\":\"<urn:uuid:baed6199-f0bb-4719-98ff-ebcf70cec053>\",\"Content-Length\":\"22570\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c5a97333-9467-481b-aa22-768d683ad9b1>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b638bf9-4b2c-4d27-b449-06498a7df43f>\",\"WARC-IP-Address\":\"169.230.26.169\",\"WARC-Target-URI\":\"https://wiki.docking.org/index.php?title=Google_sheets_hit_picking&diff=prev&oldid=11516\",\"WARC-Payload-Digest\":\"sha1:BDWVWFJGYPRO6ARKTYXUSBKAAUDXR3FG\",\"WARC-Block-Digest\":\"sha1:FSPZLALN2AJZFTAHNYYFJF4XGXGVV7GS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334514.38_warc_CC-MAIN-20220925035541-20220925065541-00430.warc.gz\"}"}
https://www.planetanalog.com/random-number-generator-using-leap-forward-techniques/
[ "# Random Number Generator Using Leap-Forward Techniques\n\nA wide variety of applications, including data encryption and circuit testing, require random numbers. As the cost of the hardware become cheaper, it is feasible and frequently necessary to implement the random number generator directly in hardware itself. Ideally, the generated random number should be uncorrelated. A generator can be either “truly random” or “pseudo random.” The former exhibits true randomness and the value of next number is unpredictable. The later only appears to be random. The sequence is based on specific mathematical algorithms, and thus the pattern is repetitive and predictable. However, if the cycle period is very large, the sequence appears to be non-repetitive and random. The focus of this paper is on pseudo-random number generators.\n\nThis paper provides an overview on the methods that are suitable for hardware implementation. The paper also describes single-bit and shows a leap-forward LFSR implementation.\n\nSingle Bit Random Number Generator Using LFSR\n\nA single bit random number generator produces a value of 0 or 1. The most efficient implementation is to use a Linear Feedback Shift Register (LFSR)", null, "", null, "", null, ". It is based on the recurrence equation:", null, "(1)\n\nHere xi is the ith number generated, ai is a pre-determined constant that can be either 0 or 1 and * and", null, "are And and Xor operator respectively. This equation implies that a new number (xn) can be obtained by utilizing m previous values (xn-1 , xn-2 ,…xn-m ) through a sequence of AND-XOR operations.\n\nIn pseudo-random number generator, the generated pattern will repeat after a certain number of cycles. It is know as the period of the generator. In an LFSR, the maximum achievable period, determined by m , is 2m-1 . We use a special set of ai s in order to achieve the maximum period. Despite their simplicity, the recurrence equations are different for different values of m . The table in Figure 1 lists the recurrence equation for m with values from 2 to 8.", null, "Figure 1:  Sample Recurrence Equations\n\nFor example, when m is 4, the equation becomes:", null, "(2)\n\nAssume the initial seed (i.e., s0, s1, s2, s3) is 1000. We can obtain the random number sequence by the equation: 100110101111000. This pattern repeats itself after 15 numbers. The new value, sn , depends on the m previous values, sn-1 , sn-2 ,?sn-m . Therefore, m 1-bit registers are required to store these values. After a new value is generated, the oldest stored value is no longer needed for future generation and can be done by an m -slot shift register, which shifts out the oldest value and shifts in a new value in every clock cycle. In addition to registers, a few XOR gates are also required to perform exclusive operations. Let's use the previous example for m =4 again. We need four 1-bit registers to store the required values. Let q3 , q2 , q1 , and q0 be the outputs of the registers and q3_next , q2_next , q1_next , and q0_next be their next values. The Boolean equation can be written as:", null, "Figure 2\n\nAn LFSR random number generator is very efficiently implemented. It only needs an m-bit shift register and 1 to 3 XOR gates, and thus the resulting circuit is very small and its operation is extremely simple and fast. Furthermore, since the period grows exponentially with the size of the register, we can easily generate a large non-repetitive sequence. 
For example, with a 64-bit generator running at 1 GHz, the period is more than 500 years.\n\nMultiple-Bit Leap Forward LFSR\n\nSome applications require more accuracy and need more than a single-bit random number. Since consecutive numbers produced by a single-bit LFSR random number generator are correlated, one way to obtain a multi-bit random number is to accumulate several single-bit numbers.\n\nThe leap-forward LFSR method utilizes only one LFSR and shifts out several bits. This method is based on the observation that the LFSR is a linear system and the register state can be written in vector format:", null, "q(i + 1) = A · q(i)   (3)\n\nIn this equation, q(i + 1) and q(i) are the contents of the shift register at the (i+1)th and ith steps, and A is the transition matrix. After the LFSR advances k steps, the equation becomes:", null, "q(i + k) = A^k · q(i)   (4)\n\nWe can calculate A^k and determine the XOR structure accordingly. The new circuit leaps k steps in one clock cycle. It still consists of the identical shift register, although the feedback combinational circuitry becomes more complex. The implementation with a 4-bit LFSR can be written as:", null, "(5)\n\nAssume that we need a four-bit random number generator and therefore have to advance four steps at a time. The new transition matrix becomes:", null, "(6)\n\nFor the purpose of circuit implementation, the equation can be written as:", null, "(7)\n\nAfter performing the operations, we can derive the feedback equation for each signal as given below:", null, "(8)\n\nFigure 3 shows the corresponding block diagram.", null, "Figure 3:  Block diagram\n\nFigure 4 shows the four-bit output random sequence of the single LFSR and the leap-forward LFSR. The sequences are generated from the seed value 1000.", null, "Figure 4:  Four-bit output random sequence of a single LFSR and leap-forward\n\nResults\n\nThe simulation results are shown in Figure 5. The first (top) plot shows the simulation results of the leap-forward LFSR technique, while the second (bottom) plot is the plain LFSR result. Consecutive outputs of the plain LFSR appear highly correlated, whereas the leap-forward outputs do not; the improvement comes at the cost of extra hardware. The synthesis details are shown in Table 1.\n\nSynthesis Details\nTool: Xilinx ISE\nDevice family: Xilinx5x Spartan2\nTarget device: 2s30pq208\n\nTable 1:  Synthesis details for simulation results\n\nConclusion\n\nSeveral design techniques for hardware random number generators have been examined, along with their feasibility for FPGA devices. The LFSR is the most effective method for a single-bit random number generator. When multiple bits are required, the LFSR can be extended with extra circuitry. For a small number of bits, the leap-forward LFSR method is ideal because it balances the combinational circuitry against the registers and makes balanced use of FPGA resources. The technique can be extended to a greater number of bits, where one can obtain even less correlated sequences for random number generation." ]
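Equation (2), the transition matrices and the feedback equations (5)-(8) were figures in the original article and are missing from this copy, but the m = 4 example can be reconstructed from the quoted output: the sequence 100110101111000 obtained from seed 1000 is consistent with the recurrence x(n) = x(n-3) XOR x(n-4). The sketch below is my own illustration, not the article's hardware description; it regenerates that bit stream and then groups four bits per step, which is the output a leap-forward LFSR delivers in a single clock cycle.

```
def lfsr_bits(seed=(1, 0, 0, 0), steps=15):
    # x(n) = x(n-3) XOR x(n-4); the state holds the last four generated values
    state = list(seed)
    out = []
    for _ in range(steps):
        new = state[-3] ^ state[-4]
        out.append(new)
        state = state[1:] + [new]
    return out

bits = lfsr_bits()
print("".join(map(str, bits)))   # 100110101111000, period 15 as stated in the article

# A leap-forward LFSR advances 4 steps per clock, i.e. it emits these bits 4 at a time.
stream = lfsr_bits(steps=16)
nibbles = ["".join(map(str, stream[i:i + 4])) for i in range(0, 16, 4)]
print(nibbles)                   # ['1001', '1010', '1111', '0001'], one 4-bit value per clock
```

If the recurrence table in Figure 1 actually uses a different tap set for m = 4, only the `state[-3] ^ state[-4]` line would need to change.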
[ null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-old-features-numbers-two-blue.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-old-features-numbers-comma.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-old-features-numbers-three-blue.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-feature-bola-equation1.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-feature-bola-circle.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-feature-bola-figure1.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-feature-bola-equation2.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-feature-bola-figure2.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-feature-bola-equation3.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-feature-bola-equation4.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-feature-bola-equation5.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-feature-bola-equation6.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-feature-bola-equation7.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-feature-bola-equation8.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-feature-bola-figure3.gif", null, "https://www.planetanalog.com/wp-content/uploads/images-common-techonline-images-community-content-feature-bola-figure4.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9030761,"math_prob":0.99159384,"size":5951,"snap":"2020-45-2020-50","text_gpt3_token_len":1292,"char_repetition_ratio":0.14831007,"word_repetition_ratio":0.002002002,"special_character_ratio":0.21391363,"punctuation_ratio":0.106984966,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9935509,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,6,null,null,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-20T15:08:54Z\",\"WARC-Record-ID\":\"<urn:uuid:0117e229-8cfa-448c-898e-247f3ea73d13>\",\"Content-Length\":\"154943\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:61ad4637-bb46-44c8-a621-0d2ee417bd20>\",\"WARC-Concurrent-To\":\"<urn:uuid:fc1d626e-4b33-464b-8566-d9773d90b636>\",\"WARC-IP-Address\":\"192.0.66.120\",\"WARC-Target-URI\":\"https://www.planetanalog.com/random-number-generator-using-leap-forward-techniques/\",\"WARC-Payload-Digest\":\"sha1:VQYYR4YDSFYHL2OWG2P72JHN2ODGU7UT\",\"WARC-Block-Digest\":\"sha1:TTROFLVAIOQVBLRODQMYXWJFQQYOTPRI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107872746.20_warc_CC-MAIN-20201020134010-20201020164010-00137.warc.gz\"}"}
https://www.cornerstonecurriculum.com/product-page/making-math-meaningful-level-4
[ "top of page", null, "`In LEVEL 4 your child is working mostly on their own. Level 4 uses word problems to expand the understanding of addition and subtraction (0 - 9,999), and teaches multiplication of 2 and 3 digit numbers and division. Fractions are expanded from Level 3 to include whether two fractons are equal-not equal, and less than-greater than. Algebra is further developed. The Student Book offers opportunities to apply and practice the concepts and skills covered in the lessons from Level 4. Making Math Meaningful is an exceptional program that elevates a child’s conceptual reasoning! This curriculum teaches the child to understand not only how to add, subtract, multiply, and divide, but more importantly why and when. The student will be equipped to solve every kind of math problem or word problem, will have the understanding to apply their knowledge to everyday life, and will be prepared for algebra and other higher math. Making Math Meaningful trains children in advanced reasoning, to understand math concepts, and to think mathematically. `\n\n# Making Math Meaningful: Level 4\n\n\\$60.00Price\nbottom of page" ]
[ null, "https://static.wixstatic.com/media/0c502d_058190403fda4b94ae2769c9cd8ffe8f~mv2.jpg/v1/fill/w_980,h_1280,al_c,q_85,usm_0.66_1.00_0.01,enc_auto/0c502d_058190403fda4b94ae2769c9cd8ffe8f~mv2.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9344321,"math_prob":0.82728565,"size":1082,"snap":"2023-40-2023-50","text_gpt3_token_len":220,"char_repetition_ratio":0.10204082,"word_repetition_ratio":0.0,"special_character_ratio":0.19963032,"punctuation_ratio":0.10552764,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9521055,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-30T10:49:46Z\",\"WARC-Record-ID\":\"<urn:uuid:2bd072aa-f878-43e3-a02e-2974b1f82ce9>\",\"Content-Length\":\"1050548\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8883b149-f561-4f80-b56d-541b566faf95>\",\"WARC-Concurrent-To\":\"<urn:uuid:48bb8e05-b726-4f57-abef-e2107c96fc86>\",\"WARC-IP-Address\":\"146.75.33.84\",\"WARC-Target-URI\":\"https://www.cornerstonecurriculum.com/product-page/making-math-meaningful-level-4\",\"WARC-Payload-Digest\":\"sha1:DETV6OLBDHTBB7PLRF42FQAGMMH45DP5\",\"WARC-Block-Digest\":\"sha1:XHF4BMI74YMBKSE7HO3MXLWXNLOY22AS\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100184.3_warc_CC-MAIN-20231130094531-20231130124531-00311.warc.gz\"}"}
https://cotejer.github.io/2017/06/12/translation
[ "# Translation\n\nIf there’s a bane to everyone’s existence in mathematics, the word problem seems to be it. These are routinely the most tricky problems to solve, and so many students find themselves understanding the material, only to stumble on this kind of problem. If you ask a student, chances are that they will say these problems are difficult because they aren’t as straightforward as a question that asks to solve for $x$ or to find the area of a figure. I don’t want to spend today debating whether or not it’s a good idea to have word problems, but instead I want to address the fundamental difficulty that I’ve found many students to have in this area.\n\nIn one word, here’s the difficulty: translation.\n\nLet’s face it. Mathematics is a language, just like any other. This means that mathematics has its own grammatical structure, as well as ways to construct sentences. Additionally, there are ways to make your mathematical sentences clear, and there are many other ways to make them incomprehensible. Unfortunately, many students don’t get the experience of seeing what a “good” mathematical sentence looks like, which means they have difficult knowing what expresses total nonsense and what conveys meaning.\n\nTo combat this, one of the skills that can help is an ability to translate between the mathematics and your preferred language. My mother tongue is English, so I’ll be referring to it here, but there’s nothing special about English. The important part is to be able to move between the mathematics and your language. If you can express a mathematical equation in plain English, and (arguably, more importantly) vice versa, it’s so helpful in decoding problems that you will come across. This ability seems so scarce in schools now that it’s almost like a superpower.\n\nI think there’s no better way to show this than through an example:\n\nAn airplane flies against the wind from A to B in 8 hours. The same airplane returns from B to A, in the same direction as the wind, in 7 hours. Find the ratio of the speed of the airplane (in still air) to the speed of the wind.\n\nIf you want to have any hope of solving problems like this in a systematic way, you need to know how to translate between the words and the mathematics you’ll use to answer the question. If you want, I encourage you to try this question out, and make a detailed solution. This doesn’t mean you need to write ten pages of explanation, but it means that everything you do should be clear.\n\nGot it? Okay, let’s go through the way I would solve this.\n\nFirst, you should identify the quantities you’re looking for in this problem. From what I can see, the quantities we are looking for are the speed of the plane when there’s no wind, and the speed of the wind itself. Do we know the numerical values of these quantities? We do not, so let’s give them symbols. Let’s call the speed of the plane without any wind $v$ and the speed of the wind itself $w$. Note that the units of both are something akin to $m/s$.\n\nNow, we don’t know how far the distance is from point A to B, so we’ll just call this distance $d$, with units of metres.\n\nWith all of the variables of the question in hand, it’s time to translate from the words in the problem to mathematical equations. This is a crucial part of solving these problems, and it’s good to take it sentence by sentence. But first, let’s state clearly what we are trying to determine. We want the ratio of the plane’s speed in still air to the wind speed. 
In our variables, this corresponds to the expression $\\frac{v}{w}$.\n\nHere’s the first sentence again: An airplane flies against the wind from A to B in 8 hours.\n\nFor this sentence, we need to relate the time traveled (8 hours) and the distance from A to B ($d$). The most natural way to do this is through the speed of the trip. Note that, since the airplane is flying against the wind, it’s “net” speed is $v-w$. This is due to the fact that the wind is slowing the plane down. We then know that the speed of anything is given by a distance over a time (here, we are talking about average speed), so we have the speed being $\\frac{d}{8}$. Therefore, we get the following equation:\n\n$v-w=\\frac{d}{8}$\n\nLet’s parse the second sentence: The same airplane returns from B to A, in the same direction as the wind, in 7 hours.\n\nThis is similar to the above, except now we have a time of 7 hours, and a “net” speed of $v+w$. You can think of it as the wind “helping” the airplane as it moves to its destination. The distance is once again the same, so we have the following relation:\n\n$v+w=\\frac{d}{7}$\n\nThis is by far the most difficult part of a word problem. For the most part, students can do the actual computations, but the difficult part is the translation. In this problem, we see that the tricky aspects include knowing that speed is given by a distance divided by a time, and being able to relate the sentences into that form. This isn’t a skill that’s developed within a week. It’s something that you need to focus on for many problems in order to understand how these kinds of situations go.\n\nHere are the big parts of translating from English to mathematics:\n\n• Knowing what equality means. But I know what equality means, you say. Of course, we know implicitly what it means, but the reality is that many students will claim to know what equalities mean, yet write things that aren’t equalities. This is easily seen in terms of units, where many students will make equations whose units don’t agree. Indeed, being able to look at the units of your quantities is a great way to help you come up with equations. This is called dimensional analysis.\n• Knowing what the words “more”, “less”, “at most”, “in total”, and so on, mean. Each of those words has a precise mathematical meaning, and it’s critical that you know the difference between them.\n• Relational words like “double”, and “half”. This closely follows from the above, but this also gets a lot of students tripped up. If I said to you that I (let’s call me $x$) had double the amount of something than you (which we’ll label $y$), is the inequality $x\\ge 2y$ or is it $y \\ge 2x$? It’s very important to know how to differentiate between these two. (Hint: it’s the former.)\n• Knowing how to parse through negation. When you see the word “not”, can you infer the correct meaning? It’s an unfortunate reality that many questions throw in a bunch of extra (and confusing) negation in order to trick students for no good reason, and so being able to deal with this is a must.\n\nIn the end, the important thing is that you can comfortably go between English and mathematics. The arrows should go both ways, even if you’re only tested on one way. The reason is that it’s helpful when learning a new concept to express it in words, since it can make the concept more clear than in abstract notation. 
Of course, that notation is important when you’re trying to precisely answer a question, but it’s good to be able to talk more informally about an idea.\n\nMy advice is therefore this: when you encounter a word problem, go through it line by line, asking yourself, “How can I translate these words into mathematics?” Similarly, do it the other way when you encounter equations, so you get a sense of the other side of the coin. By doing this, it becomes easier to move between the two languages, without having to carry an English-to-mathematics dictionary." ]
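To close the loop on the airplane example, the two translated equations can be handed to a computer algebra system. This small SymPy sketch (my addition, not part of the original post) solves v - w = d/8 and v + w = d/7 and confirms that the ratio of the plane's still-air speed to the wind speed is 15.

```
import sympy as sp

v, w, d = sp.symbols("v w d", positive=True)

solution = sp.solve([sp.Eq(v - w, d / 8), sp.Eq(v + w, d / 7)], [v, w])
print(solution)                                  # {v: 15*d/112, w: d/112}
print(sp.simplify(solution[v] / solution[w]))    # 15
```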
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95369655,"math_prob":0.92990565,"size":7375,"snap":"2021-04-2021-17","text_gpt3_token_len":1648,"char_repetition_ratio":0.12345679,"word_repetition_ratio":0.04236006,"special_character_ratio":0.22440678,"punctuation_ratio":0.104139715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98547024,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-26T09:06:14Z\",\"WARC-Record-ID\":\"<urn:uuid:f5220388-d3f6-4f6c-83ed-3e0f9ea0bd8f>\",\"Content-Length\":\"10456\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dc1503e3-0d2c-4e33-83e6-b91e2ef3ee65>\",\"WARC-Concurrent-To\":\"<urn:uuid:b1bb4302-78cf-4021-9d83-3a9249032a2c>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://cotejer.github.io/2017/06/12/translation\",\"WARC-Payload-Digest\":\"sha1:I6WR4HQ6HVPRPBVR3I4PRNO2BGJKNGDO\",\"WARC-Block-Digest\":\"sha1:2X5ZY5IKKJ5QDZYLKMEVXZ7N6FGOJGLF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704799711.94_warc_CC-MAIN-20210126073722-20210126103722-00291.warc.gz\"}"}
https://www.excelhow.net/how-to-calculate-average-ignore-non-numeric-values-and-errors.html
[ "# How to Calculate Average Ignore Non-Numeric Values and Errors\n\nIn daily work we often need to calculate the average of some numbers in a range for further analysis. Thus, we need to know some basic functions of calculate average in Excel. From now on, we will introduce you some basic functions like AVERAGE, AVERAGEIF, and show you some common formulas to calculate average value.\n\nIn this article, we will let you know how to calculate average by AVERAGE function easily and the way to calculate average based on a given criteria by AVERAGEIF function. We will introduce you the syntax, arguments, and basic usage about the two functions, and let you know the working process of our created formulas.\n\n## 1. EXAMPLE\n\nRefer to “Numbers” column, some numbers are listed in range A2:A11. C2 is used for entering a formula which can calculate the average of numbers in range A2:A11. In fact, Excel has amount of built-in functions and they can execute most simple calculations properly, and AVERAGE is one of the most common used functions.\n\nIn C2, enter “=AVERAGE(A2:A11)”.\n\nThen press Enter, AVERAGE function returns 63.8.\n\nYou can adjust decimal places by “Increase Decimal” or “Decrease Decimal” in “Number” section.\n\nBut if some errors or blank or non-numeric values exist in the list, how can we only calculate average for numbers ignoring the invalid values?  See example below.\n\nWe get an error when calculating average for range A2:A11, the difference is there are some invalid values in the list. To calculate average ignoring the errors, that means we need to calculate average of numbers in a range with one or more criteria, thus, we can apply AVERGAEIF function instead of basic AVERAGE function.\n\n## 2. CREATE A FORMULA with AVERAGEIF FUNCTION\n\nStep1: In C2, enter the formula =AVERAGEIF(A2:A11,”>0″).\n\nYou can also name range “Numbers” for A2:A11, then you can enter =AVERAGEIF(Numbers,”>0″).\n\nStep2: Press Enter after typing the formula.\n\nIgnore the improper cells, and only calculate the average of numbers 54, 40, 88, 76, 100, 90 and 44, total 7 numbers. (54+40+88+76+100+90+44)/7=70.3. The formula works correctly.\n\n### a. FUNCTION INTRODUCTION\n\nAVERAGEIF function is AVERAGE+IF. It returns the average of some numbers in a range based on given criteria.\n\nSyntax:\n\n``=AVERAGEIF(range, criteria, [average_range])``\n\nIt supports wildcards like asterisk ‘*’ and question mark ‘?’, also supports logical operators like ‘>’,’<’. If wildcards or logical operators are required, they should be enclosed into double quotes (““), in this case we entered “>0” to show the criteria.\n\nAVERAGEIF – RANGE\n\nThis example is very simple, we have only one list. A2:A11 is criteria range and average range.\n\nIn the formula bar, select “A2:A11”, press F9, values in this range are expanded in an array.\n\nIf average range=criteria range, average range can be omitted.\n\nAVERAGEIF – CRITERIA\n\nObviously, the criteria in our case is “>0”, this condition can filter numbers from criteria range.\n\n### b. HOW THE FORMULA WORKS\n\nAfter expanding values, the formula is displayed as:\n\n``=AVERAGEIF({54;40;#N/A;88;\"ABC\";76;0;100;90;44},\">0\") ``\nNote: blank cell is recorded as 0 in the array.\n\nBecause of criteria “>0”, so invalid cells are ignored. Replace all invalid values with 0. Keep all numbers.\n\n``{54;40;#N/A;88;\"ABC\";76;0;100;90;44} -> {54;40;0;88;0;76;0;100;90;44}``\n\nNow this new array only contains numbers. We can calculate the average now.\n\n## 3. 
\n\nYou can also create a User Defined Function to calculate the average of a range of cells, ignoring any non-numeric values or error values. Just follow these steps:\n\nStep1: Click on the “Visual Basic” button in the Developer tab to open the Visual Basic Editor.\n\nStep2: In the Visual Basic Editor, click on “Insert” in the menu and select “Module” to create a new module.\n\nStep3: Paste the VBA code provided into the new module.\n\n``````Function AverageIgnoreNonNumeric(rng As Range) As Variant\n    Dim cell As Range\n    Dim sum As Double\n    Dim count As Long\n\n    sum = 0\n    count = 0\n\n    ' Add up only the cells that hold a real number (no errors, no blanks, no text)\n    For Each cell In rng\n        If IsNumeric(cell.Value) And Not IsError(cell.Value) And Not IsEmpty(cell.Value) Then\n            sum = sum + cell.Value\n            count = count + 1\n        End If\n    Next cell\n\n    If count > 0 Then\n        AverageIgnoreNonNumeric = sum / count\n    Else\n        AverageIgnoreNonNumeric = CVErr(xlErrNA)   ' no numeric cells: return #N/A\n    End If\nEnd Function``````\n\nStep4: Save the workbook as a macro-enabled workbook by selecting “Excel Macro-Enabled Workbook” from the “Save as type” drop-down menu.\n\nStep5: Close the Visual Basic Editor and return to the Excel worksheet.\n\nStep6: Enter the formula below in a blank cell where you want to calculate the average:\n\n``=AverageIgnoreNonNumeric(A2:A11)``\n\nStep7: Press “Enter” to calculate the average, ignoring non-numeric values.\n\n## 4. Related Functions\n\n• Excel AVERAGE function\nThe Excel AVERAGE function returns the average of the numbers that you provide. The syntax of the AVERAGE function is as below: =AVERAGE (number1,[number2],…)….\n• Excel AVERAGEIF function\nThe Excel AVERAGEIF function returns the average of all numbers in a range of cells that meet given criteria. The syntax of the AVERAGEIF function is as below: = AVERAGEIF (range, criteria, [average_range])…." ]
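For readers who want to check the same logic outside of Excel, here is a small Python sketch (my own illustration, not part of the original article) that averages only the numeric entries of a list, skipping errors, blanks, and text, just as the AVERAGEIF formula and the VBA function above do.

```python
# Average only the numeric entries, ignoring text, blanks and error markers.
def average_ignore_non_numeric(values):
    numbers = [v for v in values
               if isinstance(v, (int, float)) and not isinstance(v, bool)]
    if not numbers:
        raise ValueError("no numeric values to average")
    return sum(numbers) / len(numbers)

# The example column from the article: numbers mixed with an error, text and a blank.
column = [54, 40, "#N/A", 88, "ABC", 76, None, 100, 90, 44]
print(round(average_ignore_non_numeric(column), 1))  # -> 70.3
```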
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7827958,"math_prob":0.97629184,"size":5101,"snap":"2023-40-2023-50","text_gpt3_token_len":1274,"char_repetition_ratio":0.16853051,"word_repetition_ratio":0.017412934,"special_character_ratio":0.24877475,"punctuation_ratio":0.157129,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989886,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T04:07:10Z\",\"WARC-Record-ID\":\"<urn:uuid:bdeb4119-35d6-40ae-b164-67c1f78efb7f>\",\"Content-Length\":\"95016\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7528657e-1325-4b0d-aefa-8ab6a90c7be2>\",\"WARC-Concurrent-To\":\"<urn:uuid:914d5108-1724-48cb-b035-f262a3251cce>\",\"WARC-IP-Address\":\"18.213.98.197\",\"WARC-Target-URI\":\"https://www.excelhow.net/how-to-calculate-average-ignore-non-numeric-values-and-errors.html\",\"WARC-Payload-Digest\":\"sha1:YHGRBXQ5OJCP7WWIN2FYU6XRRBR6J5MJ\",\"WARC-Block-Digest\":\"sha1:ZCOFZKVNXVRCADJCSY6KRMBOJCSYTMSL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506479.32_warc_CC-MAIN-20230923030601-20230923060601-00752.warc.gz\"}"}
https://math.stackexchange.com/questions/1684223/formula-for-a-geometric-series-weighted-by-binomial-coefficients-sum-over-the-u
[ "# Formula for a geometric series weighted by binomial coefficients (sum over the upper index):$\\sum_{i=0}^L {n+i\\choose n}\\ x^i =\\ ?$\n\nThe binomial sum is $$\\sum\\limits_{i=0}^n {n\\choose i}\\ x^i = (1+x)^n,$$ where $\\displaystyle{n\\choose i}=\\frac{n!}{(n-i)!i!}.$\n\nIs there a corresponding formula when you sum over the upper index of the binomial coefficients, not the lower index: $$\\sum_{i=0}^L {n+i\\choose n}\\ x^i =\\ ?$$ Equivalently, I am looking for the generating function of the sequence $$a_i={r+i\\choose k}\\qquad i\\ge0,$$ where $r\\ge k\\ge0$ are fixed parameters.\n\n• When you change upper index you continously change powers i think there isnt any closed form – Archis Welankar Mar 5 '16 at 14:46\n• There is a closed form which contains the incomplete beta function. – Claude Leibovici Mar 5 '16 at 14:49\n\n$$\\sum_{i=0}^L {n+i\\choose n}\\ x^i =\\frac{1-(L+1) \\binom{L+n+1}{n} B_x(L+1,n+1) }{(1-x)^{n+1}}$$ where appears the incomplete beta function." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.55636156,"math_prob":0.99972576,"size":435,"snap":"2021-21-2021-25","text_gpt3_token_len":150,"char_repetition_ratio":0.118329465,"word_repetition_ratio":0.0,"special_character_ratio":0.33103448,"punctuation_ratio":0.11578947,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999944,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-08T03:51:38Z\",\"WARC-Record-ID\":\"<urn:uuid:8191fb44-8555-46cd-955d-5cc0c2f366c9>\",\"Content-Length\":\"166531\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78abcc4b-a315-47e3-be6a-bd95ee82edf0>\",\"WARC-Concurrent-To\":\"<urn:uuid:bb22bd58-eb53-4182-813d-3b697c1c2b1f>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1684223/formula-for-a-geometric-series-weighted-by-binomial-coefficients-sum-over-the-u\",\"WARC-Payload-Digest\":\"sha1:ARAKTPHO3VA645W3J2CIEDWQOWJX7P2D\",\"WARC-Block-Digest\":\"sha1:UN2HXPPJ27UHOOJUGKBQEGQ2GKZGICKM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988837.67_warc_CC-MAIN-20210508031423-20210508061423-00096.warc.gz\"}"}
https://pwg.gsfc.nasa.gov/stargaze/Slog5.htm
[ "# Disclaimer: The following material is being kept online for archival purposes.\n\n## Although accurate at the time of publication, it is no longer being updated. The page may contain broken links or outdated information, and parts may not function in current web browsers.\n\n Site Map Math Index Glossary Timeline Questions & Answers Lesson Plans\n\n# (M-17)     The Number \"e\"\n\nHere begins an optional extension of the sections on logarithms. Its starting point seems completely unrelated--namely, the interest paid on money deposited in a bank. It starts with a very elementary discussion of interest, and users familiar with this subject may quickly skim this part. However, be assured --a connection with logarithms soon appears!\n\n### Interest and Banking--a Very Elementary Introduction\n\nMost people are familiar with interest, the extra amount paid by borrowers to temporarily use other people's money.\n\nSuppose you want to buy an expensive item--car, house, business, farm machinery--but cannot pay the full price right away. You then borrow what you need from a bank, and gradually pay it back, adding a little extra as interest, in return for the privilege of borrowing.\n\nInterest is traditionally measured in percent, hundredths of the amount borrowed, also denoted by the symbol %. If the interest rate is, say, 10 dollars per year for every 100 dollars borrowed, your interest is \"10 percent\" or 10% (\"per cent\" means, \"for each hundred\").\n\nBanks usually lend money for only part of the cost of the purchased item--because then, if the borrower is unable to pay (\"defaults\"), they can legally \"repossess\" the item, and having paid only part of its cost, they suffer no loss.\n\nBecause a \"down payment\" is usually needed before borrowing, one needs to save money to cover it. After putting the money aside for such savings, people often lend it to a bank. The bank will pay a lower interest--say, only 6%--while using the same money to lend it out at some higher rate, such as 10%. That way the bank makes a profit and covers its own costs; but it is still better to lend out money at a lower rate and make a profit of \\$6 for every \\$100--better than not making any profit at all.\n\n### Simple Interest and Compound Interest\n\n(Below, after the *** mark, you are expected here to use a calculator with a yx button)\n\nSuppose you lend the bank \\$ 1000 at 6% interest. The amount you lend is known as \"the principal.\"\n\nAfter a year you have earned \\$60 and now have \\$1060\nAfter 2 years, you have earned another \\$60 and now have \\$1120\nAfter 3 years, you have earned another \\$60 and now have \\$1180\nAfter 4 years, you have earned another \\$60 and now have \\$1240\n\n(To save for an expensive purchase, most people would of course save \\$1000 every year and accumulate cash much faster. In this calculation, however, we concentrate on just the first \\$1000)\n\nYou can actually do better! The \\$60 earned in the first year can be added to the \"principal\" amount, so that in the second year it, too, earns interest. In fact, in any year you can add the profits to the principal, and earn more.\n\nThis changes the rules. Until now, what you earned in any year were \\$60, six percent of the amount originally invested. That is called \"simple interest.\" The amount you have at the end of the year is just \\$60 more.\n\nNow the amount you have at the end of the year (assuming you took nothing out) is 1.06 times what you had at the beginning of the year. 
This is called compound interest, compounded at the end of each year. Let us calculate how much you have at the end of each year. Suppose you started with x dollars. With simple interest\n\nAfter 1 year          1.06 x\nAfter 2 years         1.12 x\nAfter 3 years         1.18 x\nAfter 4 years         1.24 x\n\nWith compound interest as described above, to an accuracy of 4 decimals\n\n After one year you have 1.06 x\n After another year 1.06 [1.06 x] = (1.06)^2 x = 1.1236 x\n After 3 years 1.06 [(1.06)^2 x] = (1.06)^3 x = 1.191 x\n After 4 years 1.06 [(1.06)^3 x] = (1.06)^4 x = 1.2624 x\n\nSo yes, you are increasing your profit, though the increase is rather moderate.\n\nYou get still more if you add the earned interest to the principal not at the end of each year, but at the end of each half year. The interest earned each half year is only 3%, but the number of \"compounding periods\" is doubled--two per year. Let's see what you get from an original amount of x dollars (calculating to an accuracy of 5 decimals)\n\n After half a year 1.03 x\n After 1 year 1.03 [1.03 x] = (1.03)^2 x = 1.0609 x\n After 1.5 years 1.03 [(1.03)^2 x] = (1.03)^3 x = 1.09273 x\n After 2 years 1.03 [(1.03)^3 x] = (1.03)^4 x = 1.12551 x\n After 3 years (1.03)^6 x = 1.19405 x\n After 4 years (1.03)^8 x = 1.26677 x\n\nAnd so on\n\nBut why wait half a year? One may just as well add the money monthly, assuming here all months are equal and each earns 0.5% interest. (*** you better have a yx button for the next steps, though with a lot of patience the same results can also be derived without one. They are given to an accuracy of 6 decimals.)\n\nAfter 1 year --12 periods, 0.5% each--you have (1.005)^12 x = 1.061678 x\nAfter 2 years--24 periods, 0.5% each--you have (1.005)^24 x = 1.127159 x\nAfter 3 years--36 periods, 0.5% each--you have (1.005)^36 x = 1.196680 x\nAfter 4 years--48 periods, 0.5% each--you have (1.005)^48 x = 1.270489 x\n\nIt is instructive to compare the total amount after, say, 4 years:\n\n Simple interest 1.24 x\n Compounded yearly 1.2624 x\n Compounded twice a year 1.26677 x\n Compounded 12 times a year 1.270489 x\n\nSo yes, you keep getting more each time. You might get even more if you compounded every day (as some banks have brazenly promised) but the number seems to be creeping towards a limit, which you can never pass. You will never get rich this way!\n\nIn the next section we'll try to derive that limit. Stand by for some algebra!\n\n### The Limit\n\nSuppose you lend out x dollars at a percentage of p percent (it was 6 in the above example). After one year we have\n(1 + p/100) x\n\nLet us divide the year into N equal parts, each of which earns interest p/N percent. After each such period, the money which was invested grows by a factor\n\n[1 + p/(100 N)]                 (1)\n\nThere exist N such periods in the year, so after one year, our investment has grown by a factor\n\nF = [1 + p/(100 N)]^N                 (2)\n\nDividing the power N by some number and also multiplying by it does not change a thing--it's like multiplying by (Q/Q)=1 (whatever Q might be). So if Q=p/100\n\nF = [1 + p/(100 N)]^[(100N/p) (p/100)]                 (3)\n\nIf you cancel the fractions, you are back where we started.
However, instead of doing so, let us introduce a new variable quantity y: let\n\n100N/p = y                         (4)\nThen (3) becomes\nF = (1 + 1/y)^(y·p/100)                 (5)\n\nand remembering that one power raised to another is like raising to a power that is the product of both exponents\nF = [(1 + 1/y)^y]^(p/100)                (6)\nThe reason for introducing the new variable y is to put at the core of our expression the rather interesting quantity:\n(1 + 1/y)^y                           (7)\n\n(raised to power p/100). Once again, by (4) y is defined as\ny = 100 N/p\n\nIf interest is compounded every second of the year, N is a bit over 31 million. Suppose N, the number of compounding periods, grows without limit, and so, therefore, does y. Then (7) becomes a rather strange expression!\n\nOn one hand, the expression we are raising to the yth power is (1 + 1/y), very close to 1, and we know that any power of 1 is still 1--no matter how many times you multiply 1 by itself, the result stays the same. On the other hand, a high power y of any number larger than 1 (even larger by just a tiny bit) will grow without limit.\n\nWhich case is the one here? Let's try the calculator, and assume it has buttons for both (1/x) and x^2. We can then easily derive the expression (7) for values of y which are powers of 2:\n\n(1 + 1/2)^2 = 2.25\n(enter 0.5, add 1, hit squaring button)\n(1 + 1/4)^4 = 2.441406...\n(enter 0.5, hit squaring button, add 1, hit x^2 button 2 times more)\n(1 + 1/16)^16 = 2.6379284...\n(enter 0.5, hit squaring button 2 times, add 1, hit x^2 button 4 times)\n(1 + 1/256)^256 = 2.7129916...\n(enter 0.5, hit squaring button 3 times, add 1, hit x^2 button 8 times, since 256 = 2^8)\n(1 + 1/65536)^65536 = 2.71826039...\n(enter 0.5, hit squaring button 4 times, add 1, hit x^2 button 16 times, as 65536 = 2^16)\n\nYou can see that the result approaches a limit which is neither zero nor infinity, but a number between 2 and 3. Mathematicians denote it by the letter e. And the limiting factor, beyond which compounded interest with percentage p cannot rise, no matter how frequently the compounding is performed, is by (6)\ne^(p/100)             (8)\n\nProcesses which are compounded naturally--for instance, the number of bacteria (or other living creatures) given an unlimited supply of food, or the number of neutrons in an uncontrolled chain reaction--all these \"grow exponentially\" following a law like the one above.\n\nMany properties of e involve calculus, or an expanded definition of numbers including the square root of (–1), also denoted i. For instance, if the symbol N! (\"factorial of N\") defines the product of whole numbers up to N\n\n1! = 1       2! = 1·2 = 2      3! = 1·2·3 = 6       4! = 24       5! = 120    etc.\n\nand so on, it may be shown that\n\ne = 1 + 1/(1!) + 1/(2!) + 1/(3!) + 1/(4!) + 1/(5!) + ...\n\nand because N! grows extremely fast with increasing values of N, one soon gets pretty accurate values of e. Furthermore, the number e is also \"the base of natural logarithms.\" That, however, is the story of the next section.\n\n### Exploring further\n\nYou saw how\n(1 + 1/y)^y\n\ngoes to a limit e = 2.71828... as y gets larger and larger. How about\n\n(1 – 1/y)^y\n\n--does it go to a limit too? Indeed, it does. With your calculator you can derive approximations to it, as was done with e.
If your calculator has a sign-reversing button \"+/–\" you can use the same steps as before, except that before adding \"1\", reverse the sign of the power of 0.5 by means of that button.\n\nNext, the obvious question--how is that limit related to \"e\"? Try to discover on your own!\n\nAuthor and Curator:   Dr. David P. Stern\nMail to Dr.Stern:   stargaze(\"at\" symbol)phy6.org .\n\nLast updated 10 October 2007" ]
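The whole argument is easy to reproduce numerically. Below is a small Python sketch (my own illustration, not part of the original page) that evaluates the compounding factor F = (1 + p/(100 N))^N for more and more compounding periods and compares it with the limit e^(p/100), here for p = 6 percent.

```python
# Compounding factor after one year for p percent interest split into N periods,
# compared with the limiting value exp(p/100).
from math import exp

def factor(p, N):
    return (1 + p / (100 * N)) ** N

p = 6
for N in (1, 2, 12, 365, 31_536_000):   # yearly, twice a year, monthly, daily, every second
    print(f"N = {N:>10}: F = {factor(p, N):.8f}")
print(f"limit e**(p/100) = {exp(p / 100):.8f}")   # about 1.06183655
```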
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94417524,"math_prob":0.980382,"size":8621,"snap":"2022-27-2022-33","text_gpt3_token_len":2244,"char_repetition_ratio":0.11163978,"word_repetition_ratio":0.028427036,"special_character_ratio":0.27862197,"punctuation_ratio":0.12994653,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9848186,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-09T02:21:01Z\",\"WARC-Record-ID\":\"<urn:uuid:590e11ca-eca1-439e-be97-13dfedc70aa0>\",\"Content-Length\":\"19458\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2dbaaad5-1d43-4217-9bfb-fc39675246c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:f866cdf1-3bb6-4532-9f34-e10b8816db6a>\",\"WARC-IP-Address\":\"169.154.154.72\",\"WARC-Target-URI\":\"https://pwg.gsfc.nasa.gov/stargaze/Slog5.htm\",\"WARC-Payload-Digest\":\"sha1:FLARX3OYIAHCP2KOOOJ4WZFWW7PRTWBA\",\"WARC-Block-Digest\":\"sha1:2VET5VK6WKO7ZY5BTET5RVLTDZRK5AOD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570879.37_warc_CC-MAIN-20220809003642-20220809033642-00262.warc.gz\"}"}
https://www.emathhelp.net/en/calculators/pre-algebra/prime-factorization-calculator/?i=4607
[ "# Prime factorization of $4607$\n\nThe calculator will find the prime factorization of $4607$, with steps shown.\n\nIf the calculator did not compute something or you have identified an error, or you have a suggestion/feedback, please write it in the comments below.\n\nFind the prime factorization of $4607$.\n\n### Solution\n\nStart with the number $2$.\n\nDetermine whether $4607$ is divisible by $2$.\n\nSince it is not divisible, move to the next prime number.\n\nThe next prime number is $3$.\n\nDetermine whether $4607$ is divisible by $3$.\n\nSince it is not divisible, move to the next prime number.\n\nThe next prime number is $5$.\n\nDetermine whether $4607$ is divisible by $5$.\n\nSince it is not divisible, move to the next prime number.\n\nThe next prime number is $7$.\n\nDetermine whether $4607$ is divisible by $7$.\n\nSince it is not divisible, move to the next prime number.\n\nThe next prime number is $11$.\n\nDetermine whether $4607$ is divisible by $11$.\n\nSince it is not divisible, move to the next prime number.\n\nThe next prime number is $13$.\n\nDetermine whether $4607$ is divisible by $13$.\n\nSince it is not divisible, move to the next prime number.\n\nThe next prime number is $17$.\n\nDetermine whether $4607$ is divisible by $17$.\n\nIt is divisible, thus, divide $4607$ by ${\\color{green}17}$: $\\frac{4607}{17} = {\\color{red}271}$.\n\nThe prime number ${\\color{green}271}$ has no other factors then $1$ and ${\\color{green}271}$: $\\frac{271}{271} = {\\color{red}1}$.\n\nSince we have obtained $1$, we are done.\n\nNow, just count the number of occurences of the divisors (green numbers), and write down the prime factorization: $4607 = 17 \\cdot 271$.\n\nThe prime factorization is $4607 = 17 \\cdot 271$A." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.866643,"math_prob":0.999951,"size":1569,"snap":"2023-14-2023-23","text_gpt3_token_len":426,"char_repetition_ratio":0.20127796,"word_repetition_ratio":0.37068966,"special_character_ratio":0.39515615,"punctuation_ratio":0.13937283,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999703,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T14:49:45Z\",\"WARC-Record-ID\":\"<urn:uuid:7712e0fc-664f-43bf-ab54-a344b9e59ab9>\",\"Content-Length\":\"26060\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ba896591-9e1a-4303-acd6-1554b249c270>\",\"WARC-Concurrent-To\":\"<urn:uuid:52e4347e-dab9-4af4-a5e3-5307151a85f0>\",\"WARC-IP-Address\":\"69.55.60.125\",\"WARC-Target-URI\":\"https://www.emathhelp.net/en/calculators/pre-algebra/prime-factorization-calculator/?i=4607\",\"WARC-Payload-Digest\":\"sha1:WJFAD3K4Y5BNJPJLTXQFG6ET7UZIONHZ\",\"WARC-Block-Digest\":\"sha1:QKPSYOLGCQND7M5SG2OOBW25PBDNAIH6\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224656737.96_warc_CC-MAIN-20230609132648-20230609162648-00497.warc.gz\"}"}
https://www.asknumbers.com/inch-to-mm/199-inches-to-mm.aspx
[ "# How Many Millimeters in 199 Inches?\n\n199 Inches to mm converter. How many millimeters in 199 inches?\n\n199 Inches equal to 5054.6 mm or there are 5054.6 millimeters in 199 inches.\n\n←→\nstep\nRound:\nEnter Inch\nEnter Millimeter\n\n## How to convert 199 inches to mm?\n\nThe conversion factor from inches to mm is 25.4. To convert any value of inches to mm, multiply the inch value by the conversion factor.\n\nTo convert 199 inches to mm, multiply 199 by 25.4, that makes 199 inches equal to 5054.6 mm.\n\n199 inches to mm formula\n\nmm = inch value * 25.4\n\nmm = 199 * 25.4\n\nmm = 5054.6\n\nCommon conversions from 199.x inches to mm:\n(rounded to 3 decimals)\n\n• 199 inches = 5054.6 mm\n• 199.1 inches = 5057.14 mm\n• 199.2 inches = 5059.68 mm\n• 199.3 inches = 5062.22 mm\n• 199.4 inches = 5064.76 mm\n• 199.5 inches = 5067.3 mm\n• 199.6 inches = 5069.84 mm\n• 199.7 inches = 5072.38 mm\n• 199.8 inches = 5074.92 mm\n• 199.9 inches = 5077.46 mm\n\nWhat is a Millimeter?\n\nMillimeter (millimetre) is a metric system unit of length. The symbol is \"mm\".\n\nWhat is a Inch?\n\nInch is an imperial and United States Customary systems unit of length, equal to 1/12 of a foot. 1 inch = 25.4 mm. The symbol is \"in\".\n\nCreate Conversion Table\nClick \"Create Table\". Enter a \"Start\" value (5, 100 etc). Select an \"Increment\" value (0.01, 5 etc) and select \"Accuracy\" to round the result." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76966774,"math_prob":0.99279356,"size":934,"snap":"2023-40-2023-50","text_gpt3_token_len":333,"char_repetition_ratio":0.23978494,"word_repetition_ratio":0.0,"special_character_ratio":0.44218415,"punctuation_ratio":0.18454936,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98610955,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-06T01:53:08Z\",\"WARC-Record-ID\":\"<urn:uuid:833b5061-f1df-487e-9007-acb6a9a3c18b>\",\"Content-Length\":\"44607\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e90f3d29-b848-4389-b1e0-8c0560c24aba>\",\"WARC-Concurrent-To\":\"<urn:uuid:c1623a4f-aa89-41d1-939b-5605b730e1d2>\",\"WARC-IP-Address\":\"104.21.33.54\",\"WARC-Target-URI\":\"https://www.asknumbers.com/inch-to-mm/199-inches-to-mm.aspx\",\"WARC-Payload-Digest\":\"sha1:52M4IUA7VS33QWVNLHJQBAX53EBAMBDY\",\"WARC-Block-Digest\":\"sha1:OMJO5EOZTWNLWI3XGTW7YXU73CPE2BIM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100575.30_warc_CC-MAIN-20231206000253-20231206030253-00595.warc.gz\"}"}
https://www.thelearningpoint.net/computer-science/learning-python-programming-and-data-structures/learning-python-programming-and-data-structures--tutorial-19--self-referential-data-structures-linked-lists-single-double-and-circular-binary-search-trees
[ "## 1. Stacks in Python\n\nFor a detailed explanation about the general ideas and principles behind a stack data structure, you might find this tutorial useful. Default Python lists can easily be used as stacks in Python. Appending to a list is the equivalent on pushing an element onto a stack. Popping from a list is the equivalent of de-queing an element from the stack.\n\n ```# You can use Lists as Stacks in Python stack = [10,9,8,7,6,5] # Original contents of the stack print \"Original Contents of the Stack\" print stack # Appending to a list is the same as pushing to a stack stack.append(1) stack.append(2) # In the two steps above we push 1 and 2 onto the stack print \"After pushing 1 and 2 onto the stack it looks like:\" print stack # Now we explore the pop operation poppedValue = stack.pop() # Display the popped value print \"Popped Value:\" print poppedValue # Now display what the stack looks like: print \"After the pop operation the stack looks like:\" print stack # Now we explore the pop operation again poppedValue = stack.pop() # Display the popped value print \"Again, we pop a value from the top of the stack. Popped Value:\" print poppedValue # Now display what the stack looks like: print \"After the second pop operation the stack looks like:\" print stack ```\n\n### Output of the Program above:\n\n`~/work/pythontutorials\\$ python DataStructuresStacks.py `\n`Original Contents of the Stack`\n`[10, 9, 8, 7, 6, 5]`\n`After pushing 1 and 2 onto the stack it looks like:`\n`[10, 9, 8, 7, 6, 5, 1, 2]`\n`Popped Value:`\n`2`\n`After the pop operation the stack looks like:`\n`[10, 9, 8, 7, 6, 5, 1]`\n\n## 2. Queues in Python\n\nFor a detailed explanation about the functioning of the Queue data structure (which is a FIFO Data Structure) you might find this tutorial very useful.\nFor creating Queues in Python we can use the collections.deque data structure because it was designed to have efficient pops and pushes from both ends unlike lists which are designed to work efficiently from the right end.\n ```# Deques from collections are convenient to use as Queues # It is not efficient to use lists because they are efficient for reading/appending/popping from the end # But they are not so efficient for dequeing from the beginning from collections import deque queue = deque([\"London\",\"Paris\",\"New York\",\"Delhi\"]) print \"The Original Queue:\" print queue # Now we also queue a few more cities queue.append(\"Mumbai\") queue.append(\"Kolkata\") # Now display the queue after en-queueing these print queue # You will observe that Mumbai and Kolkata have been en-queued at the end of the queue # Now let us start to De-que element from this queue dequedElement1 = queue.popleft() dequedElement2 = queue.popleft() # Let us display the Dequeued Elements. 
# Given that a Queue is a First In First Out (FIFO) data structure the de-queued elements will be London and Paris print \"Two cities were de-queued\" print \"First Dequeued city:\" print dequedElement1 print \"First Dequeued city:\" print dequedElement2 # You will notice how the first two cities have been removed from the Queue, in FIFO order print \"Current state of the queue after dequeing two cities:\" print queue ```\n\n### Output of the Program above:\n\n`~/work/pythontutorials\\$ python DataStructuresQueues.py `\n`The Original Queue:`\n`deque(['London', 'Paris', 'New York', 'Delhi'])`\n`deque(['London', 'Paris', 'New York', 'Delhi', 'Mumbai', 'Kolkata'])`\n`Two cities were de-queued`\n`First Dequeued city:`\n`London`\n`First Dequeued city:`\n`Paris`\n`Current state of the queue after dequeing two cities:`\n`deque(['New York', 'Delhi', 'Mumbai', 'Kolkata'])`\n\n## 3. Linked Lists in Python\n\nFor a detailed explanation about Linked Lists and the common operations on them, check out the tutorial here. Linked lists are linear, self referential data structures. Common operations on them are insertion, deletion, traveral/display, and finding elements.\n```class Node:\n\ndef __init__(self,data=None,next=None):\nself.data = data\nself.next = next\n\ndef __str__(self):\nreturn \"Node[Data=\" + `self.data` + \"]\"\n\ndef __init__(self):\n\n# Inserting new data at the end of the list\n# Iterate through the list till we encounter the last node.\n# A new node is created for this data element\n# And the last pointer points to this\ndef insert(self,data):\nelse:\nwhile (current.next != None) and (current.data == data):                            current = current.next\t\t\tcurrent.next = Node(data)\n\n# Deleting a given data value from the linked list\n# If the head contains this data value\n# Set head = node which comes next after the current head\n# Otherwise go to the node such that the node after it (next to it) contains the value we're looking for\n# set node.next = node.next.next\n# so, the node which dontains the specified value/data; is skipped\ndef delete(self,data):\nif current.data == data:\nif current == None:\nreturn False\nelse:\nwhile (current != None) and (current.next != None) and (current.next.data != data):\ncurrent = current.next\nif (current != None) and (current.next != None) :\ncurrent.next = current.next.next\n\n# Find a given data value in the linked list\n# Traverse the linked list till you either find the data value or you come to the end of the list\n\ndef find(self,data):\nfound = False\nwhile ((current != None) and (current.data != data) and ( current.next != None)):\ncurrent = current.next\nif current != None:\nfound = True\nreturn found\n\n# Traverse the linked list till you reach its end\n# Display each node which you traverse\ndef display(self):\nstring_representation = \" \"\nwhile current != None:\nstring_representation += str(current) + \"--->\"\ncurrent = current.next\nprint string_representation\n\n# Initialize a new linked list\n\n# Insert values in the linked list\nprint \"Inserting values 1,2,3,9 in the Linked List\"\nll.insert(1)\nll.insert(2)\nll.insert(3)\nll.insert(9)\n\nll.display()\n\n# Delete an element from the linked list. Demonstrate the Delete function\nprint \"Delete an element (data = 3) from the linked list\"\nll.delete(3)\n\nprint \"Display the linked list again. The value 3 is deleted. 
\"\nll.display()\n\n# Try to find the value 2 in the linked list (Demonstrating the Find function)\nprint \"Try to find the value 2 in the linked list\"\nfound = ll.find(2)\nif found == True:\nprint \"The value 2 is present in the Linked List\"\nelse:\nprint \"The value 2 is not present in the linked list\"\n\n# Try to find the value 20 in the linked list\nprint \"Try to find the value 20 in the linked list\"\nfound = ll.find(20)\nif found == True:\nprint \"The value 20 is present in the Linked List\"\nelse:\nprint \"The value 20 is not present in the linked list\"\n```\n\n### Output of the Program above:\n\n`~/work/pythontutorials\\$ python DataStructuresLinkedLists.py `\n`Initializing linked list`\n`Inserting values 1,2,3,9 in the Linked List`\n`Displaying the linked list`\n` Node[Data=1]--->Node[Data=2]--->Node[Data=3]--->Node[Data=9]--->`\n`Delete an element (data = 3) from the linked list`\n`Display the linked list again. The value 3 is deleted. `\n` Node[Data=1]--->Node[Data=2]--->Node[Data=9]--->`\n`Try to find the value 2 in the linked list`\n`The value 2 is present in the Linked List`\n`Try to find the value 20 in the linked list`\n`The value 20 is not present in the Linked List`\n\n## 4. Double Linked Lists in Python\n\n ```class Node: # Constructor to initialize data # If data is not given by user,its taken as None def __init__(self, data=None, next=None, prev=None): self.data = data self.next = next self.prev = prev # __str__ returns string equivalent of Object def __str__(self): return \"Node[Data = %s]\" % (self.data,) class DoubleLinkedList: def __init__(self): self.head = None self.tail = None def insert(self, data): if (self.head == None): # To imply that if head == None self.head = Node(data) self.tail = self.head else: current = self.head while(current.next != None): current = current.next current.next = Node(data, None, current) self.tail = current.next def delete(self, data): current = self.head # If given item is the first element of the linked list if current.data == data: self.head = current.next self.head.prev = None return True # In case the linked list is empty if current == None: return False # If the element is at the last if self.tail == data: self.tail = self.tail.prev self.tail.next = None return True # If the element is absent or in the middle of the linked list while current != None: if current.data == data : current.prev.next = current.next current.next.prev = current.prev return True current = current.next # The element is absent return False def find(self, data): current = self.head while current != None: if current.data == data : return True current = current.next return False def fwd_print(self): current = self.head if current == None: print(\"No elements\") return False while (current!= None): print (current.data) current = current.next return True def rev_print(self): current = self.tail if (self.tail == None): print(\"No elements\") return False while (current != None): print (current.data) current = current.prev return True # Initializing list l = DoubleLinkedList() # Inserting Values l.insert(1) l.insert(2) l.insert(3) l.insert(4) # Forward Print l.fwd_print() # Reverse Print l.rev_print() # Try to find 3 in the list if (l.find(3)): print(\"Found\") else : print(\"Not found\") # Delete 3 from the list l.delete(3) # Forward Print l.fwd_print() # Reverse Print l.rev_print() # Now if we find 3, we will not get it in the list if (l.find(3)): print(\"Found\") else : print(\"Not found\") ```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7386524,"math_prob":0.7643894,"size":9442,"snap":"2023-14-2023-23","text_gpt3_token_len":2424,"char_repetition_ratio":0.18022886,"word_repetition_ratio":0.19513798,"special_character_ratio":0.2656217,"punctuation_ratio":0.1540107,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9907615,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T18:50:35Z\",\"WARC-Record-ID\":\"<urn:uuid:9155a222-2846-45c7-be51-9a6453c35c6b>\",\"Content-Length\":\"152217\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b8a0532f-1aad-4a10-850a-64f2bc5eb6a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:76e431b9-6966-4314-b72e-45103eff4fa5>\",\"WARC-IP-Address\":\"172.67.209.130\",\"WARC-Target-URI\":\"https://www.thelearningpoint.net/computer-science/learning-python-programming-and-data-structures/learning-python-programming-and-data-structures--tutorial-19--self-referential-data-structures-linked-lists-single-double-and-circular-binary-search-trees\",\"WARC-Payload-Digest\":\"sha1:S3O4JLIP6KKAHURYWESXBR4ZAOBHMLML\",\"WARC-Block-Digest\":\"sha1:KSIFVVZATHCZ3W7ZWQ37OCFIAMARRYZU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224654012.67_warc_CC-MAIN-20230607175304-20230607205304-00597.warc.gz\"}"}
https://mda.tools/docs/plsda-classification-plots.html
[ "## Classification plots\n\nMost of the plots for visualisation of classification results described in SIMCA chapter can be also used for PLS-DA models and results. Let’s start with classification plots. By default it is shown for cross-validation results (we change position of the legend so it does not hide the points). You can clearly spot for example three false positives and one false negatives in the one-class PLS-DA model for virginica.\n\npar(mfrow = c(1, 2))\nplotPredictions(m.all)\nplotPredictions(m.vir)", null, "In case of multiple classes model you can select which class to show the predictions for.\n\npar(mfrow = c(1, 2))\nplotPredictions(m.all, nc = 1)\nplotPredictions(m.all, nc = 3)", null, "" ]
[ null, "https://mda.tools/docs/_main_files/figure-html/unnamed-chunk-152-1.png", null, "https://mda.tools/docs/_main_files/figure-html/unnamed-chunk-153-1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8638602,"math_prob":0.9633679,"size":529,"snap":"2023-40-2023-50","text_gpt3_token_len":105,"char_repetition_ratio":0.13714285,"word_repetition_ratio":0.0,"special_character_ratio":0.18336484,"punctuation_ratio":0.052083332,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9582886,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T04:35:26Z\",\"WARC-Record-ID\":\"<urn:uuid:eb2329f0-7d08-414a-b0f2-f9f8b902bfb7>\",\"Content-Length\":\"28472\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a061dbc-aa44-438b-ba0d-d40c59200fb8>\",\"WARC-Concurrent-To\":\"<urn:uuid:10be7cf1-e5c2-4a38-8652-1b5111b4a8b9>\",\"WARC-IP-Address\":\"172.66.0.96\",\"WARC-Target-URI\":\"https://mda.tools/docs/plsda-classification-plots.html\",\"WARC-Payload-Digest\":\"sha1:TIXYBOHCHXNGLWKHGEEPF2L6A5D3NWVI\",\"WARC-Block-Digest\":\"sha1:XXY4NJLEBZRSGBLVGTSRL2LCZGJWJHJG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510481.79_warc_CC-MAIN-20230929022639-20230929052639-00417.warc.gz\"}"}
https://chemistry.stackexchange.com/questions/153175/calculating-the-apparent-molecule-length-from-density-of-the-bulk
[ "# Calculating the apparent molecule length from density of the bulk [closed]\n\nI'm interested to estimate the apparent length of the water molecule assuming it could be approximated as a cube. This is an intentional simplification.\n\n## Calculating molecule length from volume in density equation\n\nI used this equation to find the volume for a mole of water molecules with $$\\varrho$$ (water density, assumed $$\\pu{0.997 g cm^{-3}}$$), $$m$$ (mass), and $$V$$ (volume).\n\nFor the determination of the molar volume, I substituted mass $$m$$ by molar mass $$M$$ and rearranged\n\n$$\\varrho = m / V$$\n\ninto the equivalent form\n\n$$V_{\\mbox{molar}} = \\frac{M} {\\varrho} = \\frac {\\pu{18.01 g mol^{-1}}} {\\pu{0.997 g cm^{-3}}} = \\pu{18.06 cm^3 mol^{-1}}$$\n\nBecause $$\\pu{1 m^3} = \\pu{10^6 cm^3}$$, the molar volume equates to $$\\pu{18.06 \\times 10^{-6} m^3 mol^{-1}}$$.\n\nGiven Avogadro's number, I computed the volume of the individual volume with\n\n$$V_{\\mbox{molecule}} = \\frac{ \\pu{18.06 \\times 10^{-6} m^3 mol^{-1}}} { \\pu{6.022 \\times 10^{23} mol^{-1}} } = \\pu{2.999 \\times 10^{-29} m^3}$$\n\nThen I took the cube root to find the length, as I want $$m^3 \\rightarrow m$$:\n\n$$\\ell(\\text{molecule}) = \\sqrt{\\pu{2.999 \\times 10^{-29} m^3}} = \\pu{3.10 \\times 10^{-10} m} = \\pu{0.31 nm}.$$\n\n#### What I need help with\n\nThis isn't taking intermolecular forces into account, but I want to know if it's an ok method for getting a rough idea of the molecule length, and if it's done correctly.\n\n• Density often is denoted by $\\rho$ ($\\rho$) or $\\varrho$ ($\\varrho$ in LaTeX / mhchem syntax available to you here on chemistry.se), which is different to $p$ then typically used about pressure. But this is smallish compared to the initial equation, because density is not mass times volume, but mass divided by volume (e.g., water, 1 g per cubic centimetre, or about a [metric] ton per cubic metre. The length you obtain is about twice as wide as the (intramolecular) H-H distance (about 0.15 nm). – Buttonwood Jun 20 at 20:29\n• Assuming molecules as small cubes is a simplification and it depends a lot on the circumstances if this is acceptable, or not. It may be a start of a model later to be refined (e.g. limiting the number of interactions modelling crystal growth) for water's bent, or benzene's flat shape. For future reference: if you use an equality sign, then both numbers and dimensions must equate, too. This shows e.g., when they cancel out (dimension analysis) even if this demands a bit more to type / write. – Buttonwood Jun 20 at 21:58\n• Over time (and repeated exposure) you will pick up skills new to you. Once you have enough reputation, for example, you may e.g. access to the edits made by other users. This equally is a source of inspiration (and eventually, training) by reference for how to format questions and answers in the community of chemistry.se (maybe you know the expression of «when in Rome, do as the Romans do»). – Buttonwood Jun 21 at 19:47\n• Strange this question was closed for the reason given but from things you have said @buttonwood my question is answered (about if it can be used roughly to estimate length), would gladly have given you the answer if your stuff wasn’t in a comment. – Nickotine Jun 22 at 11:40\n• It actually is fascinating how close you get with this absolutely simplistic approach. I#m not sure why this was closed, I wouldn't have. But I'm also not confident enough to overrule this decision. 
Also (but not entirely sure how helpful this is): researchgate.net/post/… – Martin - マーチン Jun 24 at 21:51" ]
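The arithmetic in the question is easy to script. Here is a minimal Python sketch (my own illustration, not part of the question or its comments) that reproduces the estimate from the molar mass, the density, and Avogadro's number.

```python
# Rough "cube edge" estimate of a water molecule from bulk density.
M = 18.01e-3        # molar mass of water, kg/mol
rho = 997.0         # density of liquid water, kg/m^3 (0.997 g/cm^3)
N_A = 6.022e23      # Avogadro's number, 1/mol

molar_volume = M / rho                 # m^3/mol, about 1.81e-5
molecule_volume = molar_volume / N_A   # m^3 per molecule, about 3.0e-29
edge_length = molecule_volume ** (1 / 3)

print(f"{edge_length:.2e} m  (~{edge_length * 1e9:.2f} nm)")  # roughly 3.1e-10 m, i.e. about 0.31 nm
```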
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7666839,"math_prob":0.99579144,"size":1865,"snap":"2021-31-2021-39","text_gpt3_token_len":591,"char_repetition_ratio":0.12466416,"word_repetition_ratio":0.0,"special_character_ratio":0.35603216,"punctuation_ratio":0.1064935,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992737,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-23T20:15:27Z\",\"WARC-Record-ID\":\"<urn:uuid:f129f8a9-af4d-42b6-8e8f-507e1a7f53ad>\",\"Content-Length\":\"144609\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:53561313-0fee-44d4-a7e1-a05548767349>\",\"WARC-Concurrent-To\":\"<urn:uuid:e9d092e6-33dd-4e46-a45b-b8ec3e2807b0>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://chemistry.stackexchange.com/questions/153175/calculating-the-apparent-molecule-length-from-density-of-the-bulk\",\"WARC-Payload-Digest\":\"sha1:4PDS7QSPL4VUJFSD6CJDV2N2DHFMJRB2\",\"WARC-Block-Digest\":\"sha1:BYE4FLE6A7OSGGDDFTYWZGG4K55KF6AD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046150000.59_warc_CC-MAIN-20210723175111-20210723205111-00056.warc.gz\"}"}
https://bitesizebio.com/10951/how-does-it-move-interpreting-motion-of-an-object-with-the-mean-squared-displacement/
[ "# How Does it Move? Interpreting Motion of an Object with the Mean-Squared Displacement\n\nStuff moves. It is useful to study how stuff moves, because motion analysis can tell us a lot about the object that is moving. For example, we can learn if an object’s motion is aimless, diffusive wandering, or directed towards some goal, free to explore the available environment, or restricted to a confined space.\n\nStudying the motion of a single object requires different tools than studying the average motion of a population of objects. Here, I want to introduce you to one of my favorite tools, that mainstay of biophysics, the mean-squared displacement.  In this article, I’ll first explain what mean-squared displacement (MSD) is and show you a few of the easiest things to learn from a MSD plot. In a follow-up article, I’ll show you how to make your own plot from a series of measurements of an object’s location.\n\nWhat kind of object you study doesn’t really matter; I use MSD to analyze motion of proteins and nucleic acids in cells, but the analysis could just as validly be applied to study the motion of individual insects, of amoeboid cells in a petri dish, of a macrophage in a tissue, or really whatever you’re interested in. The only measure required is the object’s location over time.\n\n## So what is MSD?\n\nMSD is a statistic that tells you how much your object displaces, on average, in a given interval of time, squared. So if you saw something like this:", null, "You would interpret it as saying that your object displaces 1000nm2 in a 5 second time interval, on average. For easier interpretation, you could take the square root of that, which gives you the “root mean squared displacement.” Then, you’d be able to say that your object displaces about 100nm in 5 seconds, on average.\n\nIf it is easier to think about specific data points by taking their square root, why don’t we show root MSD plots instead of MSD plots? Well, some people do. But most still use MSD, because it turns out that looking at motion^2 makes it easy to learn some cool things about the kind of motion the object is doing, as you’ll see below.\n\nYou might have noticed I keep saying that MSD describes how much your object displaces, not how much your object moves. This is because usually when you take movies of your object, there is time between the frames. And though you can measure the difference in the object’s position between two frames (i.e. displacement), you don’t really know how much the object moved in that time. For all you know, it may have traveled to the sun and back when your camera wasn’t looking.\n\nIt’s also good to keep in mind that MSD is a mean. Your object might have moved 1 nm in one 5 second interval and 1000 nm in another 5 second interval and have the same MSD as an object that moved 500nm in both 5 second intervals. It’s important to look at the standard deviation of MSDs to see how consistent the displacements are.\n\nSo MSD tells you how much your object displaced in a given interval of time. That’s great, but the really cool thing about MSD is what you can learn when you look at graphs that show many MSD data points over increasing intervals of time. Let’s do that:", null, "The x-axis is what frequently trips people up: the x-axis in in units of a time interval. Often, this is represented as delta-t, a change in time (although sometimes confusingly is just represented as “time”). People often see a “t” or an “s” on the x-axis and then assume the plot is showing a time series. 
IT IS NOT A TIME SERIES! The x-axis shows time intervals.  I cannot stress this enough.\n\nNow that you know what MSD is and the axes, let’s briefly go through the four main things you can detect with a fast glance at an MSD plot.  The details behind the following truths are a bit beyond the scope of this article, but can be found in the primary lit.\n\n## 1.  Diffusion\n\nIf the MSD plot follows a linear trend, then the object whose motion you are observing moves diffusively; i.e. without direction, as in a random walk. For example, an unbound GFP particle will move diffusively through a cell’s cytoplasm.", null, "## 2. Active motion\n\nIf the MSD plot does not follow a linear trend but instead follows an increasing slope, then the object exhibits directed motion. For example, the motor protein dynein moves along microtubules in a directed fashion, always going from the “plus” end of microtubules to the “minus” end.", null, "## 3. Constrained motion\n\nIf the plot plateaus, then the motion of the object is constrained. Furthermore, the square root of the height of the plateau (minus error) defines the size of the region of constraint. For example, if you labeled a centromere in a budding yeast cell, it could never move outside of the nucleus, and this would be reflected in an MSD plot by a constraint likely with the plateau at the height of the diameter of the nucleus, squared.", null, "## 4. Measurement error\n\nThe intercept of an MSD plot is the measurement error (which can include a number of sources, including microscopy errors and errors in particle tracking in your image analysis software). This makes intuitive sense: the intercept of an MSD plot is where the time interval = 0. Since an object can’t move in zero time, the intercept must reflect measurement error.", null, "With this foundation, you’re equipped to do a basic interpretation of MSD plots. Keep in mind that while we have learned a lot from a simple plot, we can go deeper still, and learn all sorts of cool things like mechanical properties of molecular polymers.  Stay tuned for the follow-up, when we learn to make MSD plots from a time series of object locations!\n\n1.", null, "Najib on July 14, 2017 at 8:09 pm\n\n2.", null, "j on January 7, 2016 at 11:31 am\n\nPretty useful article for understanding these graphs! But isn’t the square root of 1000 31.6 instead of 100?\n\nCheers,\n\nJ\n\n•", null, "Joe on November 12, 2019 at 7:56 pm" ]
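As a preview of the promised follow-up, here is a minimal Python sketch (my own illustration, not from the article) of how an MSD curve can be computed from a recorded trajectory: for each time interval (lag), average the squared displacements over all pairs of frames separated by that lag.

```python
import numpy as np

def mean_squared_displacement(positions, max_lag):
    """positions: array of shape (n_frames, n_dims); returns MSD for lags 1..max_lag."""
    positions = np.asarray(positions, dtype=float)
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        displacements = positions[lag:] - positions[:-lag]   # all frame pairs separated by `lag`
        msd[lag - 1] = np.mean(np.sum(displacements**2, axis=1))
    return msd

# Example: a 2D random walk with ~10 nm steps; its MSD should grow roughly linearly with the lag.
rng = np.random.default_rng(0)
trajectory = np.cumsum(rng.normal(scale=10.0, size=(1000, 2)), axis=0)
print(mean_squared_displacement(trajectory, max_lag=5))
```

For diffusive motion the resulting values grow roughly linearly with the lag, which is exactly the signature described in point 1 above.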
[ null, "https://bitesizebio.com/wp-content/uploads/2013/08/First-figure-MSD-copy.jpg", null, "https://bitesizebio.com/wp-content/uploads/2013/08/Figure-2-MDS1.jpg", null, "https://bitesizebio.com/wp-content/uploads/2013/09/diffusion-353x265.png", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20width='282'%20height='212'%20viewBox='0%200%20282%20212'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20width='282'%20height='212'%20viewBox='0%200%20282%20212'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20width='282'%20height='212'%20viewBox='0%200%20282%20212'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20width='80'%20height='80'%20viewBox='0%200%2080%2080'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20width='80'%20height='80'%20viewBox='0%200%2080%2080'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20width='80'%20height='80'%20viewBox='0%200%2080%2080'%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9328218,"math_prob":0.889003,"size":5912,"snap":"2022-27-2022-33","text_gpt3_token_len":1311,"char_repetition_ratio":0.12626946,"word_repetition_ratio":0.011417697,"special_character_ratio":0.21532476,"punctuation_ratio":0.10288066,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9710382,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,4,null,4,null,4,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T23:57:54Z\",\"WARC-Record-ID\":\"<urn:uuid:b94dec8e-ee7b-438b-a090-aef3738f01b3>\",\"Content-Length\":\"98977\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:481a0688-8e2d-45c5-ba6a-30d4c9cb11ee>\",\"WARC-Concurrent-To\":\"<urn:uuid:4fa6aa74-8aba-47a5-a3b6-c6e1c5befbf5>\",\"WARC-IP-Address\":\"162.159.135.42\",\"WARC-Target-URI\":\"https://bitesizebio.com/10951/how-does-it-move-interpreting-motion-of-an-object-with-the-mean-squared-displacement/\",\"WARC-Payload-Digest\":\"sha1:BUT5ZSRGGS7L5NWTMV6AVH4WUDJJ6DGS\",\"WARC-Block-Digest\":\"sha1:NTW5HRB7N6CFV2ZZLW3ICEVCEHZO5QOS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103322581.16_warc_CC-MAIN-20220626222503-20220627012503-00195.warc.gz\"}"}
https://everything2.com/title/Green%2527s+Theorem
[ "Theorem 1: Green's Theorem:   Let D be a simple closed region in two dimensions, and let its boundary be C. For a vector field F(x,y) = (P(x,y), Q(x,y)), the line integral of F on the positive orientation of C is equal to the surface integral over the region D of the partial derivative of Q with repsect to x minus the partial derivative of P with respect to y\n\nTheorem 2: Area of a Region:   If C is a simple closed curve that bounds a region to which Theorem 1 applies, then the area of the region D bounded by C is one-half the line integral along C of F(x,y)=(-y,x)\n\nTheorem 3: Vector Form of Green's Theorem:   Let D be a subset of R-2 be a region to which Green's theorem applies, let C be its boundary (oriented counter-clockwise), and let F=(P,Q) be continuously differentiable vector field on D. Then, the line integral along C is equal to the surface integral of <curl(F),k> over the region D.\n\ngreen machine = G = greenbar\n\nGreen's Theorem prov.\n\n[TMRC] For any story, in any group of people there will be at least one person who has not heard the story. A refinement of the theorem states that there will be exactly one person (if there were more than one, it wouldn't be as bad to re-tell the story). [The name of this theorem is a play on a fundamental theorem in calculus. --ESR]\n\n--The Jargon File version 4.3.1, ed. ESR, autonoded by rescdsk.\n\nLog in or register to write something here or to contact authors." ]
https://shichaoxin.com/2021/11/09/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E5%9F%BA%E7%A1%80-%E7%AC%AC%E4%BA%8C%E5%8D%81%E4%B9%9D%E8%AF%BE-%E9%9B%86%E6%88%90%E5%AD%A6%E4%B9%A0%E4%B9%8BBagging%E4%B8%8E%E9%9A%8F%E6%9C%BA%E6%A3%AE%E6%9E%97/
[ "# 【机器学习基础】第二十九课:集成学习之Bagging与随机森林\n\n## Bagging,“包外估计”(out-of-bag estimate),随机森林(Random Forest)\n\nPosted by x-jeff on November 9, 2021\n\n【机器学习基础】系列博客为参考周志华老师的《机器学习》一书,自己所做的读书笔记。\n\n# 1.前言\n\n【机器学习基础】第二十七课:集成学习之个体与集成可知,欲得到泛化性能强的集成,集成中的个体学习器应尽可能相互独立;虽然“独立”在现实任务中无法做到,但可以设法使基学习器尽可能具有较大的差异。给定一个训练数据集,一种可能的做法是对训练样本进行采样,产生出若干个不同的子集,再从每个数据子集中训练出一个基学习器。这样,由于训练数据不同,我们获得的基学习器可望具有比较大的差异。然而,为获得好的集成,我们同时还希望个体学习器不能太差。如果采样出的每个子集都完全不同,则每个基学习器只用到了一小部分训练数据,甚至不足以进行有效学习,这显然无法确保产生出比较好的基学习器。为解决这个问题,我们可考虑使用相互有交叠的采样子集。\n\n# 2.Bagging", null, "$\\mathcal{D}_{bs}$是自助采样产生的样本分布。\n\nBagging是一个很高效的集成学习算法。值得一提的是,自助采样过程还给Bagging带来了另一个优点:由于每个基学习器只使用了初始训练集中约63.2%的样本,剩下约36.8%的样本可用作验证集来对泛化性能进行“包外估计”(out-of-bag estimate)。为此需记录每个基学习器所使用的训练样本。不妨令$D_t$表示$h_t$实际使用的训练样本集,令$H^{oob}(\\mathbf x)$表示对样本$\\mathbf x$的包外预测,即仅考虑那些未使用$\\mathbf x$训练的基学习器在$\\mathbf x$上的预测,有:\n\n$H^{oob}(\\mathbf x)=\\arg \\max \\limits_{y \\in \\mathcal{Y}} \\sum^T_{t=1} \\mathbb{I} (h_t(\\mathbf x)=y) \\cdot \\mathbb{I} (\\mathbf x \\notin D_t)$\n\n$\\epsilon^{oob} = \\frac{1}{\\lvert D \\rvert} \\sum_{(\\mathbf x,y)\\in D} \\mathbb{I} (H^{oob}(\\mathbf x) \\neq y)$\n\n# 3.随机森林", null, "" ]
[ null, "https://github.com/x-jeff/BlogImage/raw/master/MachineLearningSeries/Lesson29/29x1.png", null, "https://shichaoxin.com/img/shanghaiwaitan.jpeg", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.94303924,"math_prob":0.9991246,"size":2095,"snap":"2022-27-2022-33","text_gpt3_token_len":1990,"char_repetition_ratio":0.09803922,"word_repetition_ratio":0.0,"special_character_ratio":0.1718377,"punctuation_ratio":0.030303031,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99579203,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T00:26:13Z\",\"WARC-Record-ID\":\"<urn:uuid:5777043e-0078-492d-a1b3-a9f81f409275>\",\"Content-Length\":\"30367\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cf92f5cb-e7ad-483a-ba47-ecb4cf7e574e>\",\"WARC-Concurrent-To\":\"<urn:uuid:8c1d928f-8862-4e99-a8cb-a1c579d9b1ba>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://shichaoxin.com/2021/11/09/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E5%9F%BA%E7%A1%80-%E7%AC%AC%E4%BA%8C%E5%8D%81%E4%B9%9D%E8%AF%BE-%E9%9B%86%E6%88%90%E5%AD%A6%E4%B9%A0%E4%B9%8BBagging%E4%B8%8E%E9%9A%8F%E6%9C%BA%E6%A3%AE%E6%9E%97/\",\"WARC-Payload-Digest\":\"sha1:OAWHQIJTKYXS7EBNVMVWPGRE22YNHU67\",\"WARC-Block-Digest\":\"sha1:J7DDEGABR6MOHHGNPGDKE7ACBMJ7EWTU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103344783.24_warc_CC-MAIN-20220627225823-20220628015823-00186.warc.gz\"}"}
https://www.intechopen.com/books/optimization-algorithms-examples/piecewise-parallel-optimal-algorithm
[ "Open access peer-reviewed chapter\n\n# Piecewise Parallel Optimal Algorithm\n\nBy Zheng Hong Zhu and Gefei Shi\n\nSubmitted: October 14th 2017Reviewed: March 20th 2018Published: September 5th 2018\n\nDOI: 10.5772/intechopen.76625\n\n## Abstract\n\nThis chapter studies a new optimal algorithm that can be implemented in a piecewise parallel manner onboard spacecraft, where the capacity of onboard computers is limited. The proposed algorithm contains two phases. The predicting phase deals with the open-loop state trajectory optimization with simplified system model and evenly discretized time interval of the state trajectory. The tracking phase concerns the closed-loop optimal tracking control for the optimal reference trajectory with full system model subject to real space perturbations. The finite receding horizon control method is used in the tracking program. The optimal control problems in both programs are solved by a direct collocation method based on the discretized Hermite–Simpson method with coincident nodes. By considering the convergence of system error, the current closed-loop control tracking interval and next open-loop control predicting interval are processed simultaneously. Two cases are simulated with the proposed algorithm to validate the effectiveness of proposed algorithm. The numerical results show that the proposed parallel optimal algorithm is very effective in dealing with the optimal control problems for complex nonlinear dynamic systems in aerospace engineering area.\n\n### Keywords\n\n• optimal control\n• parallel onboard optimal algorithm\n• discretizing Hermite–Simpson method\n• nonlinear dynamic system\n• aerospace engineering\n\n## 1. Introduction\n\nSpace tether system is a promising technology over decades. It has wide potential applications in the space debris mitigation & removal, space detection, power delivery, cargo transfer and other newly science & technic missions. Recently, there is continuous interest in the space tether systems, in leading space agencies such as, NASA’s US National Aeronautics and Space Administration, ESA’s European Space Agency, and JAXA’s Japan Aerospace Exploration Agency . Their interest technologies include the electrodynamic tether (EDT) propulsion technology, retrieval of tethered satellite system, multibody tethered system and space elevator system. Compared with existing technologies adopted by large spacecraft such as the rocket or thruster, the space tether technology has the advantages of fuel-efficiency (little or no propellant required), compact size, low mass, and ease-of-use . These advantages make it reasonable to apply the space tethered system for deorbiting the fast-growing low-cost micro/nano-satellites and no-fuel cargo transfer. The difficulty associated with space tether system is to control & suppress its attitudes during a mission process for the technology to be functional and practical. Many works have been devoted to solving this problem, and one effort is to use the optimal control due to its good performances in the complex and unstable nonlinear dynamic systems. In this chapter, a new piecewise onboard parallel optimal control algorithm is proposed to control and suppress the attitudes of the space tether system. To test its validity, two classical space tether systems, the electrodynamic tether system (EDT) and partial space elevator (PSE) system are considered and tested.\n\nAn EDT system with constant tether length is underactuated. 
The electric current is the only control input if there are no other active forces, such as propulsion, acting on the ends of an EDT. The control strategy commonly adopted in the literature for this underactuated control problem is current regulation using energy-based feedback. Furthermore, many efforts have been made to solve this problem with optimal control. Stevens and Baker studied the optimal control problem of EDT libration control and orbital maneuver efficiency by separating the fast and slow motions, using averaged libration state dynamics as constraints instead of instantaneous dynamic constraints in the optimal control algorithm. The instantaneous states are propagated from the initial conditions using the optimal control law in a piecewise fashion. Williams treated the slow orbital and fast libration motions separately with two different discretization schemes in the optimal control of an EDT orbit transfer. The differential state equations of the libration motion are enforced at densely allocated nodes, while the orbital motion variables are discretized by a quadrature approach at sparsely allocated nodes. The two discretization schemes are unified by a specially designed node mapping method to reflect the coupled nature of the orbital and libration motions. The control reference, however, is assumed known in advance.

A PSE system consists of one main satellite and two subsatellites (a climber and an end body) connected to each other by tethers. The difficulty associated with such a system is suppressing the libration motion of the climber and the end body. This libration is produced by the moving climber through the Coriolis force, which can render the system unstable. When the climber moves quickly along the tether, the Coriolis force can lead to tumbling of the PSE system. Thus, stability control to suppress such motion is critical for a successful climber transfer mission. To limit fuel consumption, tension control is widely used to stabilize the libration motion of space tethered systems because it can be realized by consuming electric energy only. Many efforts have been devoted to suppressing the libration motion of space tethered systems. For example, Wen et al. stabilized the libration of a tethered system with an analytical feedback control law that accounts explicitly for the tension constraint; the study shows good computational performance, and the proposed method requires little data storage. Ma et al. used adaptive saturated sliding mode control to suppress the attitude angle during the deployment period of a space tethered system. Optimal control [8, 9] has also been shown to be a way to overcome the libration issue. The above tension control schemes are helpful for both two-body and three-body tethered systems. To date, however, little work has been devoted to libration suppression of a PSE system using tension control only. Williams used optimal control to design the speed function of a climber for a full space elevator. Modeled by simplified dynamic equations, an optimal control problem is solved, and the solution results in zero in-plane libration motion of the ribbon in the ending phase of the climber motion. The study shows that it is possible to eliminate the in-plane oscillations by reversing the direction of the elevator. Kojima et al. extended the mission function control method to eliminate the libration motion of a three-body tethered system.
The proposed method is effective when the total tether length is fixed and the maximum speed of the climber is no more than 10 m/s. Although these efforts are useful for suppressing the libration motion of the PSE system, it is still difficult to control the attitudes of such a system during the transfer period.

To overcome the challenges in the aforementioned works, we propose a parallel onboard optimal algorithm that contains two phases. Phase 1 concerns the reference state trajectory optimization within a given time interval, where an optimal control model is formulated based on the timescale separation concept [3, 12] to simplify the dynamic calculations of the EDT and PSE systems. An open-loop optimal state trajectory is then obtained by minimizing a cost function subject to given constraints. The trajectory of paired state and control input variables is solved approximately by a direct collocation method based on the Hermite-Simpson method. In this phase, the simplified dynamic model is used. Phase 2 concerns the tracking of the open-loop optimal state trajectory within the same interval. A closed-loop optimal control problem is formulated in a quadrature form to track the optimal state trajectory obtained in phase 1. Unlike phase 1, all the major perturbative forces are included, and more realistic geomagnetic and gravitational field models are considered. While the system is running the phase 2 process on one CPU, the next phase 1 calculation is running on another CPU, with data modification based on the errors obtained in the previous calculation. The simulation results demonstrate the effectiveness of the approach for fast satellite deorbit by EDTs in equatorial orbit. Furthermore, for the fast transfer period of the partial space elevator, the proposed method also shows a good effect in suppressing the libration angles of the climber and the end body with tension control only.

## 2. Optimal control algorithm

### 2.1. Control scheme

Assume two CPUs are used to process the calculation. CPU-1 is used to determine the open-loop optimal control trajectory of the dynamic states employing the simple dynamic equations. The obtained optimal state trajectory is then tracked by CPU-2 using closed-loop RHC. While the system is tracking the i-th interval, the (i + 1)-th optimal trajectory is being calculated on CPU-1. Once the tracking of the i-th interval is finished (implemented by CPU-2), the real final state $S_i$ is stored in memory and the (i + 1)-th optimal trajectory can be tracked. By repeating this process, the optimal suppression control problem is solved in a parallel piecewise manner until the transfer period is over.

The optimal trajectory calculated on CPU-1 is a predicted state trajectory whose initial state is estimated as $\tilde{S}_i^{i+1}=\tilde{S}_i^{i}+\frac{1}{2}e_{i-1}$, where $\tilde{S}_i^{i}$ denotes the estimated final state of the i-th interval obtained by CPU-1, and the superscript and subscript denote the interval number and the state node number, respectively. $\tilde{S}_i^{i+1}$ is the estimated initial state of the (i + 1)-th interval, and $e_{i-1}=S_{i-1}-\tilde{S}_{i-1}^{i-1}$ is the error between the real and the estimated final state of the (i-1)-th interval; the data used to calculate the error are retrieved from memory. For the first interval, i = 1, $S_0=\tilde{S}_0^{1}$ and $e_0=0$. The computational diagram of the entire control strategy is shown in Figure 1.
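The predict-in-parallel-with-track structure described above can be sketched in Python as follows. This is an illustrative skeleton only, not code from the chapter: the two worker functions are trivial stand-ins, scalar states are used instead of state vectors, and the process pool simply plays the role of CPU-1:

```python
from concurrent.futures import ProcessPoolExecutor

def predict_open_loop(initial_state_estimate, interval):
    # Placeholder for phase 1 (CPU-1): solve the open-loop trajectory optimization
    # and return the estimated final state of the interval (the "reference").
    return initial_state_estimate + 1.0

def track_closed_loop(reference_final_state, true_initial_state, interval):
    # Placeholder for phase 2 (CPU-2): track the reference with receding horizon
    # control and return the real final state reached at the end of the interval.
    return 0.9 * reference_final_state + 0.1 * true_initial_state

def run_piecewise_parallel(S0, n_intervals):
    S_real, e_prev = S0, 0.0
    ref_final = predict_open_loop(S0, 0)                 # reference for the first interval
    with ProcessPoolExecutor(max_workers=1) as cpu1:     # plays the role of CPU-1
        for i in range(n_intervals):
            # Estimated initial state of interval i+1: estimated final state of
            # interval i corrected by half the tracking error of interval i-1.
            S_est = ref_final + 0.5 * e_prev
            future = cpu1.submit(predict_open_loop, S_est, i + 1)  # phase 1, interval i+1
            S_real = track_closed_loop(ref_final, S_real, i)       # phase 2, interval i
            e_prev = S_real - ref_final                            # error fed forward
            ref_final = future.result()                            # next reference, computed in parallel
    return S_real

if __name__ == "__main__":
    print(run_piecewise_parallel(S0=0.0, n_intervals=5))
```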
### 2.2. Open-loop control trajectory

The libration angles of the climber and the end body are required to stay between the desired upper/lower bounds during the climber transfer process. To make the calculation convenient and simple, the transfer process is divided into a series of intervals $[t_i, t_{i+1}]$, where $t_i$ and $t_{i+1}$ are the initial and final times of an interval, respectively. $t_{i+1}$ can be obtained from the transfer length of the climber and its speed. To make the calculation easy to realize in practical conditions, the transfer process is divided evenly. The optimal trajectory should be found to satisfy the desired cost function for each time interval,

$$J_i=\int_{t_i}^{t_{i+1}}\Pi(\mathbf{x},\mathbf{u})\,dt \tag{1}$$

subject to the simplified dynamic equations. All the errors between the simple model and the full model are regarded as perturbations. The above cost function minimization problem is solved by a direct solution method, which uses a discretization scheme to transform the continuous problem into a discrete parameter optimization problem of nonlinear programming within the interval, to avoid the difficulty usually encountered when standard approaches are used to derive the required conditions for optimality. There are a number of efficient discretization schemes in the literature for discretizing the continuous problem, such as the Hermite-Legendre-Gauss-Lobatto method and the Chebyshev pseudospectral method. In the current work, a direct collocation method based on the Hermite-Simpson scheme [14, 17] is adopted because of its simplicity and accuracy.

Assume that the time interval $[t_i, t_{i+1}]$ is discretized into $n$ subintervals with $n+1$ nodes at the discretized times $\tau_k$ $(k=0,1,\ldots,n)$,

$$\gamma_k=\tau_{k+1}-\tau_k,\qquad \sum_{k=1}^{n}\gamma_k=t_{i+1}-t_i \tag{2}$$

The state vectors and control inputs are discretized at the $n+1$ nodes, $x_0, x_1, x_2, \ldots, x_n$ and $\upsilon_0, \upsilon_1, \upsilon_2, \ldots, \upsilon_n$. Further, denote the state vectors and control inputs at mid-points between adjacent nodes by $x_{0.5}, x_{1.5}, x_{2.5}, \ldots, x_{n-0.5}$ and $\upsilon_{0.5}, \upsilon_{1.5}, \upsilon_{2.5}, \ldots, \upsilon_{n-0.5}$. The mid-point state vectors $x_{k+0.5}$ can be derived by the Hermite interpolation scheme,

$$x_{k+0.5}=\frac{1}{2}\left(x_k+x_{k+1}\right)+\frac{\gamma}{8}\left[\Gamma(x_k,\upsilon_k,\tau_k)-\Gamma(x_{k+1},\upsilon_{k+1},\tau_{k+1})\right] \tag{3}$$

Accordingly, the cost function in Eq. (1) can be discretized by the Simpson integration formula as

$$J\approx\frac{\gamma}{6}\sum_{k=0}^{n-1}\left[\Pi(x_k,\upsilon_k,\tau_k)+4\,\Pi(x_{k+0.5},\upsilon_{k+0.5},\tau_{k+0.5})+\Pi(x_{k+1},\upsilon_{k+1},\tau_{k+1})\right] \tag{4}$$

The nonlinear constraints based on the tether libration dynamics, written as the first-order state equations $\dot x=\Gamma(x,\upsilon,\tau)$, can also be expressed as discretized equations using the Simpson integration formula, such that

$$\frac{\gamma}{6}\left[\Gamma(x_k,\upsilon_k,\tau_k)+4\,\Gamma(x_{k+0.5},\upsilon_{k+0.5},\tau_{k+0.5})+\Gamma(x_{k+1},\upsilon_{k+1},\tau_{k+1})\right]+x_k-x_{k+1}=0 \tag{5}$$

The left-hand side of Eq. (5) is known as the Hermite-Simpson defect vector in the literature. Finally, the discretization process is completed by replacing the constraints on the initial states and the continuous box constraints with the discretized constraints,

$$x_0=x_{start},\qquad x_{min}\le x_k\le x_{max},\qquad \upsilon_{min}\le\upsilon_k\le\upsilon_{max},\qquad \upsilon_{min}\le\upsilon_{k+0.5}\le\upsilon_{max} \tag{6}$$

The minimization problem of a continuous cost function is now transformed into a nonlinear programming problem: it searches for optimal values of the programming variables that minimize the discretized form of the cost function in Eq. (4) while satisfying the constraints of Eqs. (5) and (6). The subscript index "k" is reset in the next time interval.
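As an illustration of Eqs. (3) and (5), the following Python sketch evaluates the Hermite-Simpson defects for a generic dynamics function. It is a generic illustration, not code from the chapter; the function names, the NumPy usage, and the treatment of the mid-point controls as separate inputs are assumptions:

```python
import numpy as np

def hermite_simpson_defects(gamma, xs, us, taus, dynamics, us_mid):
    """Return the Hermite-Simpson defect vectors for subintervals k = 0..n-1.

    xs, us, taus : node states (n+1, dim), node controls (n+1,), node times (n+1,)
    dynamics     : callable Gamma(x, u, tau) returning dx/dt as an array of shape (dim,)
    us_mid       : mid-point controls (n,), treated as free optimization variables
    """
    n = len(taus) - 1
    defects = []
    for k in range(n):
        f_k  = dynamics(xs[k],     us[k],     taus[k])
        f_k1 = dynamics(xs[k + 1], us[k + 1], taus[k + 1])
        # Eq. (3): Hermite interpolation of the mid-point state
        x_mid = 0.5 * (xs[k] + xs[k + 1]) + (gamma / 8.0) * (f_k - f_k1)
        tau_mid = 0.5 * (taus[k] + taus[k + 1])
        f_mid = dynamics(x_mid, us_mid[k], tau_mid)
        # Eq. (5): Simpson quadrature of the dynamics plus the state decrement
        defects.append(gamma / 6.0 * (f_k + 4.0 * f_mid + f_k1) + xs[k] - xs[k + 1])
    return np.array(defects)   # every entry must vanish at a feasible solution
```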
### 2.3. Closed-loop optimal control for tracking the open-loop optimal state trajectory

The RHC is implemented by converting the continuous optimal control problem into a discrete parameter optimization problem that can be solved analytically. As in the open-loop trajectory optimization problem, the same direct collocation based on the Hermite-Simpson method is used to discretize the RHC problem.

Using a discretization notation similar to that above, the cost function is discretized using the Simpson integration formula as

$$G=\frac{1}{2}\delta x_{i+1}^{T}S\,\delta x_{i+1}+\frac{\gamma}{12}\sum_{k=0}^{n-1}\left[\delta x_k^{T}Q\,\delta x_k+4\,\delta x_{k+0.5}^{T}Q\,\delta x_{k+0.5}+\delta x_{k+1}^{T}Q\,\delta x_{k+1}+R\left(\delta\upsilon_k^{2}+4\,\delta\upsilon_{k+0.5}^{2}+\delta\upsilon_{k+1}^{2}\right)\right] \tag{7}$$

and the constraints are discretized into

$$\delta x_k-\delta x_{k+1}+\frac{\gamma}{6}\left[A_k\delta x_k+B_k\delta\upsilon_k+4A_{k+0.5}\delta x_{k+0.5}+4B_{k+0.5}\delta\upsilon_{k+0.5}+A_{k+1}\delta x_{k+1}+B_{k+1}\delta\upsilon_{k+1}\right]=0 \tag{8}$$

$$\delta x_k=x(\tau_k)-x_{opt}(\tau_k) \tag{9}$$

$$\delta x_{k+0.5}=\frac{1}{2}\left(\delta x_k+\delta x_{k+1}\right)+\frac{\gamma}{8}\left(A_k\delta x_k+B_k\delta\upsilon_k-A_{k+1}\delta x_{k+1}-B_{k+1}\delta\upsilon_{k+1}\right) \tag{10}$$

where $A_k=A(\tau_k)$, $A_{k+0.5}=A(\tau_{k+0.5})$, $B_k=B(\tau_k)$, and $B_{k+0.5}=B(\tau_{k+0.5})$.

The derivation based on Eqs. (8)-(10) finally leads to a quadratic programming problem for the programming vector $Z=\left[\delta x_0^T\ \delta x_1^T\ \cdots\ \delta x_n^T\ \ \delta\upsilon_0\ \delta\upsilon_1\ \cdots\ \delta\upsilon_n\ \ \delta\upsilon_{0.5}\ \delta\upsilon_{1.5}\ \cdots\ \delta\upsilon_{n-0.5}\right]^T$, which minimizes the cost function

$$G=\frac{1}{2}Z^TMZ \tag{11}$$

subject to

$$CZ=X,\qquad X=\left[\delta x_0^T\ 0\ \cdots\ 0\right]^T \tag{12}$$

where the matrices $C$ and $M$ are given in the Appendix.

The solution of this standard quadratic programming problem can easily be found analytically,

$$Z=M^{-1}C^T\left(CM^{-1}C^T\right)^{-1}X \tag{13}$$

and the control correction at the current time can be obtained as

$$\delta\upsilon(t_i)=VZ=VM^{-1}C^T\left(CM^{-1}C^T\right)^{-1}X\equiv K(t_i,n,t_h)\left[x(t_i)-x_{opt}(t_i)\right] \tag{14}$$

where the row vector $V$ is defined to "choose" the target value from the optimal solution, and the position of the "1" in the row vector $V$ is the same as the position of $\delta\upsilon_0$ in the column vector $Z$. Finally, the control input of the closed-loop control, $\upsilon(t_i)$, is

$$\upsilon(t_i)=\upsilon_{opt}(t_i)+\delta\upsilon(t_i)=\upsilon_{opt}(t_i)+K(t_i,n,t_h)\left[x(t_i)-x_{opt}(t_i)\right] \tag{15}$$

It is apparent that the closed-loop control law derived here is a linear proportional feedback control law, and the feedback gain matrix $K$ is a function of time. Without any explicit integration of differential equations, $K$ can be determined either offline or online, depending on the computation and storage capability onboard the satellite.

It is worth pointing out some advantages of this approach. First, the matrices $M$ and $C$ are both formulated from the influence matrices at certain discretization nodes ($A_k$, $B_k$, $A_{k+0.5}$, $B_{k+0.5}$). If $t_i$ and $t_{i+1}$ are both set to coincide with the discretization nodes used in the open-loop control problem, then most of the influence matrices calculated previously can be used directly in the tracking control process to reduce the computational effort. This is the advantage of using the same discretization method in the current two-phase optimal control approach. Second, the matrix $M$ is unchanged if the terminal horizon time $t_i+t_h$ is treated as the time-to-go and the future horizon interval $[t_i, t_{i+1}]$ is kept unchanged. This means the inverse of $M$ needs to be calculated only once within the same interval, which is attractive for the online implementation of RHC, where the computational effort is critical. As the entire interval can be discretized into small intervals, if these intervals are sufficiently small relative to the computing power of the satellite, the calculation can be carried out by the onboard computer. Furthermore, for small intervals, the computation of the open-loop optimal trajectory for the next interval can be done by CPU-1 while the tracking is still in process (see Figure 1). This makes the proposed optimal suppression control a parallel online implementation, which is another advantage of this control scheme.
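The analytic solution in Eqs. (13) and (14) is a standard equality-constrained quadratic program. A minimal NumPy sketch of this step follows; it is illustrative only (the matrix shapes and names are assumptions), and it uses linear solves rather than explicit matrix inverses:

```python
import numpy as np

def rhc_correction(M, C, V, dx0):
    """Solve min (1/2) Z^T M Z  subject to  C Z = X, and return the control correction V Z.

    M   : (N, N) positive-definite weight matrix from the Simpson-discretized cost, Eq. (11)
    C   : (m, N) constraint matrix from the discretized linearized dynamics, Eq. (12)
    V   : (N,)   selector row picking delta_upsilon_0 out of Z
    dx0 : current tracking error x(t_i) - x_opt(t_i)
    """
    m = C.shape[0]
    X = np.zeros(m)
    X[:len(dx0)] = dx0                     # X = [dx0^T, 0, ..., 0]^T, Eq. (12)
    Minv_Ct = np.linalg.solve(M, C.T)      # M^{-1} C^T without forming M^{-1}
    lam = np.linalg.solve(C @ Minv_Ct, X)  # (C M^{-1} C^T)^{-1} X
    Z = Minv_Ct @ lam                      # Eq. (13)
    return V @ Z                           # delta_upsilon(t_i), Eq. (14)

# The feedback gain K(t_i) of Eq. (14) is the linear map dx0 -> V Z, so it can be
# precomputed once per interval because M and C do not change within the interval.
```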
## 3. Case studies

### 3.1. Parallel optimal algorithm in attitude control of an EDT system

In order to test the validity of the proposed optimal algorithm, a case study of attitude control of an EDT system in aerospace engineering is used. The obtained results are compared with some existing control methods.

#### 3.1.1. Problem formulation

The EDT system's orbital motion is generally described in an Earth geocentric inertial frame (OXYZ) with the origin O at the Earth's centre (see Figure 2(a)). The X-axis points toward the vernal equinox, the Z-axis aligns with the Earth's rotational axis, and the Y-axis completes a right-hand coordinate system. The equation of orbital motion can be written in the Gaussian perturbation form, a set of ordinary differential equations in six independent orbital elements $(a, \Omega, i, e_x, e_y, \varphi)$:

$$\frac{d\Omega}{dt}=\frac{\sigma_y r\sin u}{na^2\sqrt{1-e^2}\sin i} \tag{17}$$

$$\frac{di}{dt}=\frac{\sigma_y r\cos u}{na^2\sqrt{1-e^2}} \tag{18}$$

$$\frac{de_x}{dt}=\frac{\sqrt{1-e^2}}{na}\left[\sigma_z\sin u+\sigma_x\left(\left(1+\frac{r}{p}\right)\cos u+\frac{r}{p}e_x\right)\right]+\frac{d\Omega}{dt}e_y\cos i \tag{19}$$

$$\frac{de_y}{dt}=\frac{\sqrt{1-e^2}}{na}\left[-\sigma_z\cos u+\sigma_x\left(\left(1+\frac{r}{p}\right)\sin u+\frac{r}{p}e_y\right)\right]-\frac{d\Omega}{dt}e_x\cos i \tag{20}$$

$$\frac{d\varphi}{dt}=n-\frac{1}{na}\left[\sigma_z\left(\frac{2r}{a}+\frac{1-e^2}{1+\sqrt{1-e^2}}e\cos\nu\right)-\sigma_x\left(1+\frac{r}{p}\right)\frac{1-e^2}{1+\sqrt{1-e^2}}e\sin\nu\right]-\frac{\sigma_y r\cos i\sin u}{na^2\sqrt{1-e^2}\sin i} \tag{21}$$

Figure 2. Illustration of the coordinate systems for the EDT's orbital (a) and libration (b) motion.

The components of the perturbative accelerations are defined in a local frame. The components $\sigma_x$ and $\sigma_z$ are in the orbital plane, with $\sigma_z$ the radial component pointing outwards; the out-of-plane component $\sigma_y$ completes a right-hand coordinate system. The components of the perturbative accelerations depend on the tether attitude, and the EDT's orbital dynamics are coupled with the tether libration motion.

The libration motion of a rigid EDT system is described in the orbital coordinate system shown in Figure 2(b). The z-axis of the orbital coordinate system points from the Earth's center to the CM of the EDT system, the x-axis lies in the orbital plane and points in the direction of the EDT orbital motion, perpendicular to the z-axis, and the y-axis completes a right-hand coordinate system. The unit vectors along the axes are denoted $e_{ox}$, $e_{oy}$, and $e_{oz}$, respectively. The instantaneous attitude of the EDT system is then described by an in-plane angle $\alpha$ (pitch angle, rotation about the y-axis) followed by an out-of-plane angle $\beta$ (roll angle, rotation about the x'-axis, i.e., the x-axis after the first rotation about the y-axis). Thus, the equations of libration motion of the EDT system can be derived as

$$\ddot\alpha+\ddot\nu-2\left(\dot\alpha+\dot\nu\right)\dot\beta\tan\beta+\frac{3\mu}{r^3}\sin\alpha\cos\alpha=\frac{Q_\alpha}{\tilde m L^2\cos^2\beta} \tag{22}$$

$$\ddot\beta+\left(\dot\alpha+\dot\nu\right)^2\sin\beta\cos\beta+\frac{3\mu}{r^3}\cos^2\alpha\sin\beta\cos\beta=\frac{Q_\beta}{\tilde m L^2} \tag{23}$$

where $\tilde m=\left[m_1m_2+\left(m_1+m_2\right)m_t/3+m_t^2/12\right]m_{EDT}^{-1}$ is the equivalent mass and $(Q_\alpha, Q_\beta)$ are the corresponding perturbative torques produced by the perturbative forces discussed below.

The perturbative accelerations $(\sigma_x, \sigma_y, \sigma_z)$ and torques $(Q_\alpha, Q_\beta)$ are induced by multiple orbital perturbative effects, namely (i) the electrodynamic force exerted on a current-carrying EDT due to the electromagnetic interaction with the geomagnetic field, (ii) the Earth's atmospheric drag, (iii) the Earth's non-homogeneity and oblateness, (iv) the lunisolar gravitational perturbations, and (v) the solar radiation pressure. The EDT system is assumed thrust-less during the deorbit process, while the atmosphere, geomagnetic field, and ambient plasma field are assumed to rotate with the Earth at the same rate.
The geodetic altitude, instead of geocentric altitude, should be used in the evaluation of the environmental parameters, such as, atmospheric and plasma densities, to realistically account for the Earth’s ellipsoidal surface, such that,\n\nhg=rrpo1eE2cos2θ1/2E24\n\nwhere the polar radius rpoand the Earth’s eccentricity eEare provided by NASA .\n\nMoreover, the local strength of geomagnetic field is described by the IGRF2000 model [20, 21, 22] in a body-fixed spherical coordinates of the Earth, such that\n\nBϕ=1sinθn=1r0rn+2m=0nmgnmsinhnmcosPnmθcBθ=n=1r0rn+2m=0ngnmcos+hnmsinPnmθcθcBr=n=1r0rn+2n+1m=0ngnmcos+hnmsinPnmθcE25\n\nwhere r0 = 6371.2 × 103 km is the reference radius of the Earth, respectively.\n\nThe average current in the EDT is defined as\n\nIave=1L0LIsdsE26\n\nThe open-loop optimal control problem for EDT deorbit can be stated as finding a state-control pair xtυtover each time interval titi+1to minimize a cost function of the negative work done by the electrodynamic force\n\nJ=titi+1Fe·vdtttti+1ΠxυtdtE27\n\nsubject to the nonlinear state equations of libration motion\n\nx1=x2x2=2η3esinν+2x2+η2x4tanx33η3sinx1cosx1+sinitanx32sinucosx1cosusinx1cosiςλIaveμmμm˜η3E28\nx3=x4x4=x2+η22sinx3cosx33η3cos2x1sinx3cosx3sini2sinusinx1+cosucosx1ςλIaveμmμm˜η3E29\n\nwhere x1x2x3x4=ααββ, η=1+ecosν, ν̇=μ/p30.51+ecosν2, r=p1+ecosν1, λ=m1+0.5mt/mEDTis determined by the mass ratio between the end-bodies, and ςis determined by the distribution of current along the EDT, such that, ς=Iave1L20LsIsds. Accordingly, ς=0.5is used for the assumption of a constant current in the EDT. The initial conditions xti=xstartand the box constraint ααmax,ββmax,IminIaveImax. The environmental perturbations are simplified by considering only the electrodynamic force with a simple non-tilted dipole model of geomagnetic field, such that,\n\nB=μmr3cosusinieox+μmr3cosieoy2μmr3sinusinieozE30\n\nAccordingly, the electrodynamic force Feexerting on the EDT can be obtained as,\n\nFe=0LB×Ilds=IaveLB×lE31\n\n#### 3.1.2. Results and discussion\n\nThe initial and boundary conditions of box constraints of the case are shown in Tables 1 and 2.\n\nParametersValues\nMass of the main satellite5 kg\nMass of subsatellite1.75 kg\nMass of the tether0.25 kg\nDimensions of main satellite0.2 × 0.2 × 0.2 m\nDimensions of subsatellite0.1 × 0.17 × 0.1 m\nTether length500 m\nTether diameter0.0005 m\nTether conductivity (aluminum)3.4014 × 107 Ω−1 m−1\nTether current lower/upper limits0 ~ 0.8 A for the equatorial orbit\nOrbital altitudes700 ~ 800 km\n\n### Table 1.\n\nParameters of an EDT system.\n\nParametersValues\nImax(equatorial)0.4 A\nImax(inclined)0.1sinisinβ¯BsinΩG+α¯BΩ+cosicosβ¯Bcos1iβ¯BA\nImin(equatorial & inclined)0 A\nαmax(equatorial & inclined)45 degrees\nβmax(equatorial & inclined)45 degrees\n\n### Table 2.\n\nBoundary Values of Box Constraints in the Open-Loop Trajectory Optimization.\n\nFirstly, the validity of the proposed optimal control scheme in the equatorial orbit where the EDT system gets the highest efficiency is demonstrated. The solid line in Figure 4 shows the time history of the EDT’s average current control trajectory obtained from the open-loop optimal control problem. It is clearly shown that the average current in the open-loop case reaches the upper limit most of the time, which indicates the electrodynamic force being maximized for the fast deorbit. As expected, the current is not always at the upper limit in order to avoid the tumbling of the EDT system. 
This is evident that the timing of current reductions coincides with the peaks of pitch angles shown in Figure 5. The effectiveness of the proposed control scheme in terms of keeping libration stability is further demonstrated by the solid lines in Figure 5, where the trajectory of libration angles is no more than 45 degrees. It is also found that the amplitude of the pitch angle nearly reaches 45 degrees, the maximum allowed value, whereas the roll angle is very small in the whole deorbit process. As a comparison, tracking optimal control with the non-tilted dipole model and the IGRF 2000 model of the geomagnetic fields are conducted respectively.\n\nThe dashed lines in Figures 3 and 4 show the tracking control simulations where all perturbations mentioned before are included with the non-tilted dipole geomagnetic field model. As expected, the closed-loop tracking control works well in this case since the primary electrodynamic force perturbation is the same as the one used in the open-loop trajectory optimization. It is shown clearly in Figure 4 that the pitch angle under the proposed closed-loop control tracks the open-loop optimal trajectory very closely with this simple environment model. Figure 4 also shows the roll angle is almost zero even if it is not tracked. At the same time, Figure 3 shows that the current control modification to the optimal current trajectory is relative small, i.e., 12% above the maximum current, for the same reason. Now the same cases are analyzed again using a more accurate geomagnetic field model – the IGRF 2000 model with up to 7th order terms (Figures 5 and 6). The solid line in Figure 5 is the open-loop current control trajectory while the dashed line is the modified current control input obtained by the receding horizon control. Compared with Figure 3, it shows more current control modifications are needed to track the open-loop control trajectory because of larger differences in dynamic models between the open-loop and closed-loop optimal controls, primarily due to the different geomagnetic field models. Because of the same reason, it is noticeable in Figure 6 that the instantaneous states of the EDT system, controlled by the closed-loop optimal control law, are different from the open-loop reference state trajectory at the end of the interval. The instantaneous states are used for the next interval as initial conditions for the open-loop optimal control problem to derive the optimal control trajectory in that interval. This is reflected in Figure 6 that the solid lines are discontinuous at the beginning of each interval. The dashed line in Figure 7 shows that the pitch and roll angle under the closed-loop control has been controlled to the open-loop control trajectory, indicating the effectiveness of the proposed optimal control law. The roll angle is not controlled in this case as mentioned before. Compared Figure 4 with Figure 6, it shows that the roll angle increases significantly since there is an out-of-plane component of the electrodynamic force resulting from the IGRF 2000 geomagnetic model. However, the amplitude of the roll angle is acceptable within the limits and will not lead to a tumbling of the EDT system.", null, "Figure 3.Time history of average current in the equatorial orbit (non-tilted dipole geomagnetic model). Solid line: Open-loop state trajectory. Dashed line: Close-loop tracking trajectory.", null, "Figure 4.Time history of pitch and roll angles in the equatorial orbit (non-tilted dipole geomagnetic model). 
Solid line: Open-loop state trajectory. Dashed line: Close-loop tracking trajectory.", null, "Figure 5.Time history of average current in the equatorial orbit (IGRF 2000 model). Solid line: Open-loop state trajectory. Dashed line: Close-loop tracking trajectory.", null, "Figure 6.Time history of pitch and roll angles in the equatorial orbit (IGRF 2000 model). Solid line: Open-loop control trajectory. Dashed line: Close-loop tracking trajectory.", null, "Figure 7.Comparison of EDT deorbit rates using different control laws and geomagnetic field models.\n\nFinally, we make a comparison to show the performance of the proposed onboard parallel optimal control law from the aspect of deorbit rate. A simple current on–off control law from a previous work of Zhong and Zhu is used here as baselines for the comparison of EDT deorbit efficiency. The current on–off control becomes active only if the libration angles exceed the maximum allowed values. Furthermore, it will turn on the current only in the condition that the electrodynamic force does negative work in both pitch and roll directions. In this paper, the maximum allowed amplitude for pitch and roll angles was set to 20° and the turned-on current was assumed to be 0.4 A, roughly the average value of the current control input into the closed-loop optimal control. Besides, a minimum interval of 10 minutes for the switching was imposed to avoid equipment failure that might happen due to the frequent switching. Figure 8 shows the comparisons of the deorbit rates in different cases (the present optimal control and the current switching control with the non-tilted dipole or the IGRF 2000 model of geomagnetic field). It is shown that the EDT deorbit under the proposed optimal control scheme is faster than the current on–off control regardless which geomagnetic field model is used. The deorbit time of proposed optimal control based on the IGRF2000 model is about 25 hours, which equals approximately 15 orbits, whereas the deorbit time of simple current on–off control based on the same geomagnetic field model is about 55 hours, which equals approximately 33 orbits. The results also indicate that in the optimal control scheme, the effect is mostly shown in the current control input, instead of the deorbit rate, where Figure 6 shows much more current control effort is required due to the different magnetic field models were used in the open-loop control trajectory optimization and the closed-loop optimal tracking control.\n\n### 3.2. Parallel optimal algorithm in libration suppression of partial space elevator\n\nFor further test of the effect of the onboard parallel algorithm, the proposed control method is used to suppress the libration motions of the partial space elevator system. As studied in , this system is a non-equilibrium nonlinear dynamic system. It is difficult to suppress such a system in the mission period by using the common control design methods. In this case, we mainly concern obtaining the local time optimization.\n\n#### 3.2.1. Problem formulation\n\nConsider an in-plane PSE system in a circular orbit is shown in Figure 8, where the main satellite, climber and the end body are connected by two inelastic tethers L1and L2, respectively. The masses of the tethers are neglected. Assuming the system is subject to a central gravitational field and orbiting in the orbital plane. All other external perturbations are neglected. 
The main satellite, climber and end body are modeled as three point masses (M, m1and m2) since the tether length is much greater than tethered bodies [5, 23]. Thus, the libration motions can be expressed in an Earth inertial coordinate system OXYwith its origin at the centre of Earth. Denoting the position of the main satellite (M) by a vector rmeasuring from the centre of Earth. The climber m1is connected to the main satellite Mby a tether 1 with the length of L1and a libration angle θ1measured from the vector r. The distance between them is controlled by reeling in/out tether 1 at main satellite. The end body m2is connected to m1by a tether 2 with the length of L2and a libration angle θ2measured from the vector r. The length of tether 2 L2is controlled by reeling in or out tether 2 at end body. The mass of the main satellite is assumed much greater than the masses of the climber and the end body. Therefore, the CM of the PSE system can be assumed residing in the main satellite that moves in a circular orbit. Based on the aforementioned assumptions, the dynamic equations can be written as.\n\nθ¨1=3ω2sin2θ122ω+θ̇1L̇1L1sinθ1θ2T2L1m1E32\nθ¨2=3ω2sin2θ222ω+θ̇2L̇cL̇1L0L1+Lc+sinθ1θ2T1L0L1+Lcm1E33\nL¨1=3ω2L1cos2θ1+2ωL1θ̇1+L1θ̇12T1m1+cosθ1θ2T2m1E34\nL¨c=3ω2L0L1+Lccos2θ2+3ω2L1cos2θ1+2ω+θ̇2L0L1+Lcθ̇2+2ω+θ̇1L1θ̇1+cosθ1θ21T1m1m1m2cosθ1θ2+m2T2m1m2E35\n\nwhere L0is the initial total length of two pieces of the tethers and Lcis the length increment relates to L0.\n\nThe libration angles are required to be kept between the desired upper/lower bounds in the climber transfer process. The accessing process is divided into a series of intervals. In this case, we modified the aforementioned parallel optimal algorithm. The total transfer length L10L1fof the climber is discretized evenly. The optimal trajectory should be found to make the transfer time minimize in each equal tether transferring length. Then the cost function can be rewritten as\n\nJi=tfE36\n\nsubject to the simplified dynamic equations\n\nθ¨1=3ω2θ12ω+θ̇1L̇1L1θ1θ2T2L1m1E37\nθ¨2=3ω2θ22ω+θ̇2L̇cL̇1L0L1+Lc+θ1θ2T1L0L1+Lcm1E38\nL¨1=3ω2L1+2ωL1θ̇1+L1θ̇12+T2T1m1E39\nL¨c=3ω2L0+Lc+2ω+θ̇2L0L1+Lcθ̇2+2ω+θ̇1L1θ̇1T2m2E40\n\nwhere u=T1T2, x=θ1θ̇1θ2θ̇2L1L̇1and idenotes the interval number. In (26) the gravitational perturbations and the trigonometric functions are ignored, then they can be simplified following the assumptions: sinθjθj,cosθj1j=12. All the errors between simple model and the entire model are regarded as perturbations.\n\nTo ensure the availability and the suppression of the libration angles, following constrains are also required to be subjected 0T1T1max,0T2T2max,θ1θ1max,θ2θ2max,LcLcLimitL̇1mL̇1L̇1M,L̇cmL̇cL̇cM, where T1maxand T2maxare the upper bounds of the tension control inputs T1 and T2, respectively. θ1max, θ2maxand LcLimitare the magnitudes of libration angles θ1, θ2 and the maximum available length scale of Lc, respectively. L̇1mand L̇1Mare the lower and upper bounds of climber’s moving speed L̇1, respectively. L̇cmand L̇cMare the lower and upper bounds of end-bodies’ moving speed L̇c, respectively. It should be noting that, to avoid the tether slacking, the control tensions are not allowed smaller than zero. Dividing the time interval titi+1evenly into nsubintervals. The cost function minimization problem for each time interval can be solved by Hermite–Simpson method, due its simplicity and accuracy . 
Then nonlinear programming problem is to search optimal values for the programming variables that minimize the cost function for each interval shown in (25). The closed-loop optimal tracking control method, is same as that in case 1. Direct transcription methods are routinely implemented with standard nonlinear programming (NLP) software. The sparse sequential quadratic programming software SNOPT is used via a MATLAB-executable file interface.\n\n#### 3.2.2. Results and discussion\n\nThe proposed control scheme is used to suppress the libration angles of the PSE system in the ascending process with following system parameters and initial conditions: r = 7100 km, m1 = 500 kg, m2 = 1000 kg, θ1(0) = θ2(0) = 0, L0 = 20 km, L1(0) = 19,500 m, Lc(0) = 0, θ̇10=θ̇20=0, L̇10=20m/s, and L̇c0=0for the ascending process. The whole transfer trajectory is divided into 50 intervals. The climber’s ascending speed along tether 1 is allowed to be controlled to help suppress the libration angles and keep the states of the system in an acceptable area. The constrains are set as T1max=T2max=200N, θ1max=θ2max=0.3rad, L̇1m=15m/s, L̇1M=25m/s, L̇cm=10m/sand L̇cM=10m/s.\n\nThe simulation results of this case are shown in Figures 911. The climber’s open-loop libration angle approaches its upper bound at 850 s. After 850 sθ1is kept at 0.3 radby the end of the ascending period, see the dashed line in Figure 10. Using the closed-loop control, the tracking trajectory of θ1matches the open-loop trajectory very well overall, see solid line in Figure 9. A short gap appears between 875 s – 880 s, this is caused by the errors of the model and computation. Figure 9 also shows the changes of the trajectories of θ2.The trajectory of θ2obtained by closed-loop control tracks the open-loop trajectory well and reaches 0.1 radby the end of the ascending period. The closed-loop trajectories of L1and Lcare shown in Figure 10. They are the reflections of the control inputs. Both L1and Lcshow smooth fluctuations between 40s and 350 s. Figure 10 shows the time history of trajectories of L̇1and L̇c, respectively. In the first 140 s the trajectory of L̇cincreases continuously until reaches its upper bound. Then, it keeps at 10 m/sby 270 s with some slight fluctuations. From 270 s to 355 s, it reduces continuously to −3 m/s. After that, L̇cfluctuates around −3 m/s by the end of the transfer period. As a reflection of the control input, L̇1also shows fluctuation during the transfer phase with obvious small-scale fluctuations appear in the period of 120 s – 260 s and 360 s – 750 s. This impacts during the whole transfer period, the changeable speed of the climber has the ability to help the suppression of the libration angles and states trajectory tracking. This time history of the control inputs is shown in Figure 11with frequent changes between its lower bound and upper bound.\n\n## 4. Conclusions\n\nThis chapter investigated a piecewise parallel onboard optimal control algorithm to solve the optimal control issues in complex nonlinear dynamic systems in aerospace engineering. To test the validity of the proposed two-phase optimal control scheme, the long-term tether libration stability and fast nano-satellite deorbit under complex environmental perturbations and the libration suppression for PSE system are considered. For EDT system, instead of optimizing the control of fast and stable nano-satellite deorbit over the complete process, the current approach divides the deorbit process into a set of intervals. 
For the PSE system, each time interval is set depends on the minimize transfer time for equal transfer length interval. Within each interval, the predicting phase simplifies significantly the optimal control problem. The dynamic equations of libration motion are further simplified to reduce computational loads using the simple dynamic models. The trajectory of the stable libration states and current control input is then optimized for the fast deorbit within the interval based on the simplified dynamic equations. The tracking optimizes the trajectory tracking control using the finite receding horizon control theory within the time interval corresponding to the open-loop control state trajectory with the same interval number. By applying the close-loop control modification, the system motions are integrated without any simplification of the dynamics or environmental perturbations and the instantaneous states of the orbital and libration motions. The i-th time interval’s closed-loop tracking is processed in tracking phase while the (i + 1)-th time interval’s optimal state trajectory is predicted in the predicting phase. This prediction is based on the error between the real state and the predicting state in the (i-1)-th time interval. By repeating the process, the optimal control problem can be achieved in a piecewise way with dramatically reduced computation effort. Compared with the current on–off control where the stable libration motion is the only control target, numerical results show that the proposed optimal control scheme works well in keeping the libration angles within an allowed limit.\n\n## Acknowledgments\n\nThis work is funded by the Discovery Grant of the Natural Sciences and Engineering Research Council of Canada, the National Natural Science Foundation of China, Grand No. 11472213 and the Chinese Scholarship Council Scholarship No. 201606290135.\n\n## A. Appendix\n\nThe detailed expressions for the matrixes Cand Mare shown as followed,\n\nC=C1C2C3,M=M11M120M12TM22000M33\nC1=[E0χ01χ02χ11χ12χn11χn120E],C2=[00χ03χ04χ13χ14χn13χn1400],C3=23γ¯[00B0.5Bn0.500]χj1=E+γ¯6Aj+γ¯3Aj+0.5+γ¯212Aj+0.5Aj,χj2=E+γ¯6Aj+1+γ¯3Aj+0.5γ¯212Aj+0.5Aj+1χj3=γ¯212Aj+0.5Bj+γ¯6Bj,χj4=γ¯212Aj+0.5Bj+1+γ¯6Bj+1\nM11=ϖ0011ϖ01110ϖ0111Tϖ1111ϖn1n111ϖn1n110ϖn1n11Tϖnn11M22=ϖ0022ϖ01220ϖ0122Tϖ1122ϖn1n122ϖn1n220ϖn1n22Tϖnn22\nM12=ϖ0012ϖ01120ϖ1012ϖ1112ϖn1n112ϖn1n120ϖnn112ϖnn12M33=23γ¯R00R\nϖjj11=γ¯64Q+γ¯28AjTQAj,ϖjj+111=γ¯6Qγ¯216AjTQAj+1+γ¯4AjTQγ¯4QAj+1ϖ0011=γ¯62Q+γ¯216A0TQA0+γ¯4A0TQ+γ¯4QA0,ϖnn11=γ¯62Q+γ¯216AnTQAnγ¯4AnTQγ¯4QAn+Sfϖjj22=γ¯62R+γ¯28BjTQBj,ϖjj+122=γ¯BjTQBj+1ϖ0022=γ¯6R+γ¯216B0TQB0,ϖnn22=γ¯6R+γ¯216BnTQBnϖjj12=γ¯348AjTQBj,ϖjj+112=γ¯224QBj+1+γ¯4AjTQBj+1ϖj+1j12=γ¯224QBjγ¯4Aj+1TQBj,ϖ0012=γ¯224QB0+γ¯4A0TQB0,ϖnn12=γ¯224QBN¯γ¯4AnTQBn\n\nwhere Eis the unit matrix which has the same dimension as Aj, and j = 0,1,2,…,n-1.\n\nchapter PDF\nCitations in RIS format\nCitations in bibtex format\n\n## More\n\n© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\n## How to cite and reference\n\n### Cite this chapter Copy to clipboard\n\nZheng Hong Zhu and Gefei Shi (September 5th 2018). Piecewise Parallel Optimal Algorithm, Optimization Algorithms - Examples, Jan Valdman, IntechOpen, DOI: 10.5772/intechopen.76625. 
[ null, "https://www.intechopen.com/media/chapter/61007/media/F2.png", null, "https://www.intechopen.com/media/chapter/61007/media/F3.png", null, "https://www.intechopen.com/media/chapter/61007/media/F4.png", null, "https://www.intechopen.com/media/chapter/61007/media/F5.png", null, "https://www.intechopen.com/media/chapter/61007/media/F6.png", null, "https://www.intechopen.com/media/chapter/61007/media/F7.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8972564,"math_prob":0.94817185,"size":23108,"snap":"2021-21-2021-25","text_gpt3_token_len":4554,"char_repetition_ratio":0.17010042,"word_repetition_ratio":0.02739726,"special_character_ratio":0.18746755,"punctuation_ratio":0.07555221,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98484504,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-17T01:48:56Z\",\"WARC-Record-ID\":\"<urn:uuid:5f848fdc-0fdd-4acb-b3c1-a5afdf52063c>\",\"Content-Length\":\"712229\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e1d028d-6526-422e-8cd2-32eea9532378>\",\"WARC-Concurrent-To\":\"<urn:uuid:3e296546-ebe8-4aca-bd11-83977afef45d>\",\"WARC-IP-Address\":\"35.171.73.43\",\"WARC-Target-URI\":\"https://www.intechopen.com/books/optimization-algorithms-examples/piecewise-parallel-optimal-algorithm\",\"WARC-Payload-Digest\":\"sha1:ARVLLW7C7FH5C3BDUOYDDA3CBL26P4OC\",\"WARC-Block-Digest\":\"sha1:7WGLRB7DWR3GV52COTBXZQ44DIPH6FRE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991921.61_warc_CC-MAIN-20210516232554-20210517022554-00353.warc.gz\"}"}
https://quant.stackexchange.com/questions/41605/libor-market-model-lmm-under-risk-neutral-measure
[ "# Libor Market Model (LMM) under risk neutral measure\n\nI would like to establish the equations of forward libors under risk neutral measure. Here is how I do it, and what I get :\n\nUnder the $P_{T_j}$ measure, forward Libor $L_j$ is martingale. Thus:\n\n$$dL_j = L_j \\times \\sigma_j(t) \\times dW^{T_j}$$\n\nThe change of numéraire implies that:\n\n$$\\frac{dQ^{T_j}}{dQ^*} = \\frac{P(t,T)\\times\\exp{\\left(-\\int_0^T{r(s)ds}\\right)}}{P(0,T)}$$\n\nUnder $Q^*$, $P(t,T)$ discounted is martingale, which means that:\n\n$$\\frac{dP(t,T)}{P(t,T)} = r(t) dt + \\eta(t) dW^*$$\n\nSolving this gives:\n\n$$P(t,T) = P(0,T) \\times \\exp\\left(\\int_0^T\\left(r(s)-\\frac{1}{2} \\eta(s)^2\\right)ds + \\int_0^T{\\eta(s)}dW^{T_j}\\right)$$\n\nThus:\n\n$$\\frac{dQ^{T_j}}{dQ^*} = \\exp\\left(-\\int_0^T{\\frac{1}{2}} \\eta(s)^2ds + \\int_0^T{\\eta(s)}dW^{T_j}\\right)$$\n\nGirsanov shows that:\n\n$$dW^{T_j} = dW^* - \\eta(t) dt$$\n\nUnder $Q^*$:\n\n$$\\frac{dL_j}{L_j} = \\alpha(t) dt + \\sigma_j(t) dW^{*}$$\n\nWriting $P(t,T)$ as $\\exp(-\\int_t^T{f(t,s)ds}) = \\exp(Y_t)$:\n\n$$\\frac{dP(t,T)}{P(t,T)} = dY_t +\\frac{1}{2}<Y_t>dt$$\n\nwith $dY_t = f(t,t) dt -\\int_t^T{\\alpha(t) dt ds } - \\int_t^T{\\sigma(t)dt dW_s}$. Identifying, I conclude that:\n\n$$\\eta(t) = \\sigma(t)$$\n\nFinally:\n\n$$dL_j(t) = -L_j(t) \\sigma_j(t) \\int_t^{T_j}{\\sigma_j(s)ds} dt + L_j\\sigma_j(t) dW^*.$$\n\nIs it correct ?\n\n• It is not clear why $dY_t = f(t,t) dt -\\int_t^T{\\alpha(t) dt ds } - \\int_t^T{\\sigma(t)dt dW_s}$. Note that $f(t, T)$ is the instantaneous forward rate, while $L_j$ is a forward rate over a given time period, for example, $[T_{j-1}, \\, T_j]$. – Gordon Sep 7 '18 at 18:02\n\nWe assume that, under the $T_j$-forward probability measure $P_{T_j}$, \\begin{align*} \\frac{dP(t, T_j)}{P(t, T_j)} = \\mu_P(t, T_j) dt + \\sigma_P(t, T_j) dW_t^{T_j}, \\end{align*} where $\\mu_P(t, T_j)$ and $\\sigma_P(t, T_j)$ are the respective drift and volatility functions. Let $Q$ be the risk-neutral probability measure. Then \\begin{align*} \\frac{dQ}{dP_{T_j}}\\big|_t &= \\frac{e^{\\int_0^t r_s ds}P(0, T_j)}{P(t, T_j)}\\\\ &=e^{\\int_0^t \\big(r_s -\\mu_P(s, T_j)+\\frac{1}{2} \\sigma_P(s, T_j)^2 \\big) ds - \\int_0^t \\sigma_P(s, T_j) dW_s^{T_j}}. \\end{align*} Since $\\frac{dQ}{dP_{T_j}}\\big|_t$ is a martingale under $P_{T_j}$, \\begin{align*} \\int_0^t \\Big(r_s -\\mu_P(s, T_j)+\\frac{1}{2} \\sigma_P(s, T_j)^2 \\Big) ds = -\\frac{1}{2}\\int_0^t \\sigma_P(s, T_j)^2 ds. \\end{align*} That is, \\begin{align*} \\mu_P(t, T_j) = r_t + \\sigma_P(t, T_j)^2, \\end{align*} and \\begin{align*} \\frac{dQ}{dP_{T_j}}\\big|_t &= e^{-\\frac{1}{2} \\int_0^t\\sigma_P(s, T_j)^2 ds - \\int_0^t \\sigma_P(s, T_j) dW_s^{T_j}}. \\end{align*} Then, under the risk-neutral probability measure $Q$, $\\{W_t, \\, t \\ge 0\\}$, where, for $t \\ge 0$, \\begin{align*} W_t = W_t^{T_j} + \\int_0^t \\sigma_P(s, T_j) ds, \\end{align*} is a standard Brownian motion. Moreover, \\begin{align*} \\frac{dL_j}{L_j} = -\\sigma_j(t) \\sigma_P(t, T_j) dt + \\sigma_j(t) d W_t. \\end{align*}" ]
https://www.elibrary.imf.org/view/journals/001/2015/139/article-A001-en.xml
[ "Does Easing Monetary Policy Increase Financial Instability?\n• 1 0000000404811396https://isni.org/isni/0000000404811396International Monetary Fund\n\n## Contributor Notes\n\nThis paper develops a model featuring both a macroeconomic and a financial friction that speaks to the interaction between monetary and macro-prudential policies. There are two main results. First, real interest rate rigidities in a monopolistic banking system have an asymmetric impact on financial stability: they increase the probability of a financial crisis (relative to the case of flexible interest rate) in response to contractionary shocks to the economy, while they act as automatic macro-prudential stabilizers in response to expansionary shocks. Second, when the interest rate is the only available policy instrument, a monetary authority subject to the same constraints as private agents cannot always achieve a (constrained) efficient allocation and faces a trade-off between macroeconomic and financial stability in response to contractionary shocks. An implication of our analysis is that the weak link in the U.S. policy framework in the run up to the Global Recession was not excessively lax monetary policy after 2002, but rather the absence of an effective regulatory framework aimed at preserving financial stability.\n\n## Abstract\n\nThis paper develops a model featuring both a macroeconomic and a financial friction that speaks to the interaction between monetary and macro-prudential policies. There are two main results. First, real interest rate rigidities in a monopolistic banking system have an asymmetric impact on financial stability: they increase the probability of a financial crisis (relative to the case of flexible interest rate) in response to contractionary shocks to the economy, while they act as automatic macro-prudential stabilizers in response to expansionary shocks. Second, when the interest rate is the only available policy instrument, a monetary authority subject to the same constraints as private agents cannot always achieve a (constrained) efficient allocation and faces a trade-off between macroeconomic and financial stability in response to contractionary shocks. An implication of our analysis is that the weak link in the U.S. policy framework in the run up to the Global Recession was not excessively lax monetary policy after 2002, but rather the absence of an effective regulatory framework aimed at preserving financial stability.\n\n## I Introduction\n\nThe global financial crisis and ensuing Great Recession of 2007-09 have ignited a debate on the role of policies for the stability of the financial system or the economy as a whole (i.e., so called macro-prudential policies). 
In advanced economies, this debate is revolving around the role of monetary and regulatory policies in causing the global crisis and how the conduct of monetary policy and the supervision of financial intermediaries should be altered in the future to avoid the recurrence of such a catastrophic event.\n\nIn this paper we develop a simple model featuring both a macroeconomic and a financial friction—i.e., a real interest rate rigidity that give rise to a traditional macroeconomic stabilization objective and a pecuniary externality that give rise to a more novel financial stability objective—which speaks to the necessity to complement monetary policy with macro-prudential policies in response to contractionary shocks.\n\nThe prime objective of macro-prudential policy is to limit build-up of system-wide financial risk in order to reduce the frequency and mitigate the impact of a financial crash.1 Most commonly used prudential tools, however, interact with other policy instruments. The overlap between different policy areas is a major challenge for policy-makers, who have to consider the unintended impact of their instruments on other policy objectives as well as the unintended impact of other policy-makers’ instruments on their own policy objective (Svensson, 2012).\n\nFor instance, monetary policy can affect financial stability: investors may be induced to substitute low-yielding, safe assets for higher-yielding, riskier assets (Rajan, 2005, Dell’Ariccia, Laeven, and Marquez, 2011); investors may also be encouraged to take greater risks if they perceive that monetary policy is being used asymmetrically to support asset prices during downturns (Issing, 2009); and asset price increases induced by falling interest rates might cause banks to increase their holdings of risky assets through active balance sheet management (Adrian and Shin, 2009, 2010). On the other hand, macro-prudential policy instruments can affect macroeconomic stability. In fact, by affecting variables such as asset prices and credit, macro-prudential policy is likely to affect a key transmission mechanism of monetary policy (see e.g., Ingves, 2011). This overlap in the respective areas of influence entails the possibility of the instruments having offsetting or amplifying effects on their objectives if they are implemented in an uncoordinated manner, possibly leading to worse outcomes than if the instruments had been coordinated (see Bean, Paustian, Penalver, and Taylor, 2010, Angelini, Neri, and Panetta, 2011, between others).\n\nAgainst this background, some observers have assigned to monetary policy a key role in exacerbating the severity of the global financial crisis of 2007-09. Taylor (2007), in particular, noticed that during the period from 2002 to 2006 the U.S. federal funds rate was well below what a good rule of thumb for U.S. monetary policy would have predicted. Figure 1 displays the actual federal funds rate (solid line) and the counterfactual policy rate that would have prevailed if monetary policy had followed a standard Taylor rule (dashed line). Indeed, the interest rate implied by the Taylor rule is well above the actual federal funds rate, starting from the second quarter of 2002. 
Taylor (2007) argues that such a counterfactual policy rate would have contained the housing market bubble; moreover, Taylor also supports the idea that deviating from this rule-based monetary policy framework has been a major factor in determining the likelihood and the severity of the 2007-09 crisis (Taylor, 2010).

Despite a fairly widespread sentiment that the Federal Reserve is partly to blame for the housing bubble, the issue is highly controversial in academia and the policy community. Besides Taylor (2007, 2010), Borio and White (2003), Gordon (2005), and Borio (2006) support the idea that monetary policy contributed significantly to the boom that preceded the global financial crisis. In contrast, Posen (2009), Bean (2010), and Svensson (2010) argue against this thesis.2

To address some of these issues, we develop a simple model of consumption-based asset pricing with collateralized borrowing, monopolistic banking, real interest rate rigidities and pecuniary externalities. The presence of real and financial frictions gives rise to both a traditional macroeconomic stabilization role for policy and a more novel financial stability objective.

The macroeconomic stabilization objective arises from the presence of monopolistic competition and real interest rate rigidities in the banking sector. Due to monopolistic power, banks apply a markup on lending rates. Moreover, when banks cannot fully adjust their lending rates in response to macroeconomic shocks, the economy displays distortions typical of models with staggered price setting, generating equilibrium allocations that are not Pareto efficient (Hannan and Berger, 1991, Kwapil and Scharler, 2010, Gerali, Neri, Sessa, and Signoretti, 2010).

The financial stability objective stems from the fact that the model endogenously generates financial crises when the borrowing constraint occasionally binds. When access to credit is subject to an occasionally binding collateral constraint, a pecuniary externality arises. Private agents do not internalize the effect of their individual decisions on the market price of collateral, thus borrowing and consuming more than is socially efficient and increasing the frequency and the severity of financial crises.

The analysis yields two main results. First, our model economy shows that real interest rate rigidities have a different impact on financial stability depending on the sign of the shock hitting the economy. In response to positive shocks to interest rates, aggregate lending rates rise, too. However, because of interest rate stickiness, they increase less than in the flexible interest rate case. This affects next period net worth through two effects. On the one hand, lower lending rates prompt consumers to borrow more than in the flexible rate case, thus lowering next period net worth; on the other hand, interest rate repayments are lower relative to the flexible case, thus increasing next period net worth. As the second effect dominates the first one in equilibrium (for a wide range of parameter values), the probability of a crisis is lower with interest rate stickiness. Thus, interest rate rigidity acts as an automatic macro-prudential stabilizer in response to shocks that require interest rates to increase. In contrast, when interest rates are hit by a negative shock, aggregate lending interest rates decrease relatively less.
Because of the same mechanisms working in reverse, real interest rate rigidity leads to a higher probability of a crisis in response to shocks that lower interest rates (relative to the flexible interest rates case).

Second, the model shows that a policy authority, facing the same constraints faced by private agents and with only one policy instrument (namely, the policy interest rate), may not achieve efficiency when a shock that lowers interest rates hits the economy. Specifically, in response to shocks that lower interest rates, achieving both macroeconomic and financial stability entails a trade-off because the two objectives require interventions in opposite directions on the same policy tool, in our case the interest rate. However, when two different instruments are at the policy-maker's disposal (as, for example, a tax on debt and the policy interest rate), efficiency can be achieved in response to both positive and negative shocks to the risk-free interest rate.

Our analysis has important implications regarding the role of U.S. monetary policy for the stability of the financial system in the run-up to the Great Recession. In particular, we show that Taylor's argument (i.e., that higher interest rates would have reduced both the probability and the severity of the Great Recession) is supported by our theoretical model only if we make the auxiliary assumption that the Fed had to address all distortions in the economy with only one instrument, namely the policy interest rate. However, Taylor's argument cannot be rationalized in the context of our model when the policy authority has two different instruments. In this case, in response to a negative aggregate demand shock, interest rates ought to be lowered as much as needed without concerns for financial stability. As suggested by Bernanke (2010) and Blanchard, Dell'Ariccia, and Mauro (2010), this implies that the same monetary policy stance as the one adopted by the Fed during the 2002-06 period, accompanied by stronger regulation and supervision of the financial system, might have been more effective in reducing the likelihood and the severity of the crisis, relative to a tighter monetary policy stance with the same financial supervision and regulation observed during the 2002-06 period.

This paper is related to several strands of literature. The first is the branch of the New Keynesian literature that considers financial frictions and Taylor-type interest rate rules (see Angelini, Neri, and Panetta, 2011, Beau, Clerc, and Mojon, 2012, Kannan, Rabanal, and Scott, 2012, for example). These papers consider either interest rules augmented with macro-prudential arguments (such as credit growth, asset prices, or loan-to-value limits) or a combination of interest and macro-prudential rules in order to allow monetary policy to "lean against financial winds." However, in this class of models, macro-prudential regulation is taken for granted, in the sense that it does not target a clearly identified market failure giving rise to a well defined financial stability objective. In our model, there is a well defined pecuniary externality that justifies government intervention for financial stability purposes.

The second is a growing literature on pecuniary externalities that interprets financial crises as episodes of financial amplification in environments where credit constraints are only occasionally binding (see, among others, Korinek, 2010, Bianchi, 2011, Jeanne and Korinek, 2010a, b, Benigno, Chen, Otrok, Rebucci, and Young, 2013).
In this class of models the need for macro-prudential policies stems from a well-defined market failure: a pecuniary externality originating from the presence of the price of collateral in the aggregate borrowing constraint faced by private agents. However, in all these models, the financial friction is the only distortion in the economy. The question of how the pursuit of financial stability may affect macroeconomic stability is therefore novel relative to this literature.

The third and final strand is a small but growing literature that considers both macroeconomic and financial frictions at the same time. Benigno, Chen, Otrok, Rebucci, and Young (2011) analyze a fully specified three-period new open economy macroeconomics model that features the same financial friction analyzed here and Calvo-style nominal rigidities. The solution of the fully non-linear version of that model (i.e., without resorting to approximation techniques) shows that there is a trade-off between macroeconomic and financial stability, but it is quantitatively too small to warrant the use of a second policy instrument in addition to the interest rate. Kashyap and Stein (2012) use a modified version of the pecuniary externality framework of Stein (2012) where the central bank has both a price stability and a financial stability objective. Similar to our findings, a trade-off emerges between the two objectives when the policy interest rate is the only instrument, and it disappears when there is a second instrument (a non-zero interest rate on reserves, in their case). However, they do not model the price stability objective explicitly. Woodford (2012), in contrast, sets up a New Keynesian model with credit frictions, where the probability of a financial crisis is endogenous (i.e., it is a regime-switching process that depends on the model variables). Woodford characterizes optimal policy in this environment, showing that, under certain circumstances, the central bank may face a trade-off between macroeconomic and financial stability. However, he does not explicitly model financial stability.

In contrast, in our paper, both the macroeconomic and the financial stability objective are well defined and each objective originates from a friction that we model explicitly. The interaction between the macroeconomic and the financial friction delivers a stark trade-off between macroeconomic and financial stability, which helps rationalize the role of monetary policy and macro-prudential policy (or the lack thereof) in the run-up to the Great Recession in the United States.

The rest of the paper is organized as follows. Section 2 describes the model economy. Sections 3 and 4 characterize the decentralized and the socially planned equilibrium of the economy, respectively. In Section 5 we discuss the implications of our model in terms of the role played by U.S. monetary policy for the stability of the financial system in the run-up to the Great Recession. Section 6 concludes.

## II The Model

We include monopolistic banking and real interest rate rigidities in the pecuniary externality framework of Jeanne and Korinek (2010a). In Jeanne and Korinek (2010a)'s setup, consumers borrow directly from international capital markets (or foreign banks). In our model, consumers must borrow from a stylized monopolistic banking sector that intermediates foreign saving.
Assuming that some of the changes in US interest rates ultimately originate abroad is in line with the view that the global external imbalances (i.e., the behavior of foreign savings) influenced significantly the US economy in the run-up to the Great Recession. Alternatively, borrowers could be interpreted as entrepreneurs/households in a closed economy enjoying a comparative advantage in owning certain assets.\n\nThe financial friction is given by the presence of collateralized borrowing. Strictly speaking, the real frictions are two: the first is the presence of market power in loan markets, exercised by monopolistically competitive banks, and the second is infrequent adjustment of interest rates by banks.\n\nThe economy is populated by two sets of agents: a continuum of monopolistically competitive banks and a continuum of identical atomistic individuals who borrow from banks and consume. Each set of agents has a mass normalized to one. There are only three periods, denoted t = 0, 1, 2.\n\nAt the beginning of period 0 consumers own an asset whose available stock is normalized to 1. In order to consume they can either sell a fraction of the asset (1 – θi, 1) at market prices or borrow from banks (bi, 1). They have a well-defined demand function for loans which is decreasing in the lending interest rate (RL1). Monopolistic banks freely borrow from foreign lenders at the risk free interest rate (Rt = R*) and—given loans demand—optimally set their lending rates. The risk-free interest rate can be hit by a temporary shock (R* ± ν) at the beginning of period 0. We assume that only a fraction of banks (μ) can reset their lending rates conditional on this shock, while the remaining banks (1 – μ) need to keep their lending rates fixed. The purpose of this assumption is to introduce macroeconomic stabilization considerations in relatively simple manner.3 The credit market clears after the realization of the shock, which is observed by all agents. At the end of period 0, households consume (ci, 0).\n\nIn period 1, consumers are endowed with a stochastic endowment (e), they repay their debt (bi, 1 RL1), borrow an additional amount from banks (bi, 2), realize banks profits (πi, 1), and consume (ci, 1). Note that debt rollover is subject to a collateral constraint: additional borrowing (bi, 2) is limited to a fraction of consumers’ assets (at their market value). The purpose of this assumption is to introduce financial stability considerations. If hit by a shock in period 0, the level of the risk-free interest rate returns to its pre-shock value (R*) in period 1.\n\nPeriod 2 represents the long run. Consumers get a deterministic return on the asset that they own (y), repay their debt (bi, 2 RL2), realize banks profits (πi, 2), and consume (ci, 2).\n\nWe now discuss the consumers’ and banks’ problems in turn.\n\n### A Consumers and Loan Demand\n\nThe utility of each consumer, indexed by i ϵ [0,1], is given by:\n\n$\\begin{array}{cc}u\\left({c}_{i,0}\\right)+u\\left({c}_{i,1}\\right)+{c}_{i,2},& \\left(1\\right)\\end{array}$\n\nwhere, for simplicity, we assume a unitary discount factor. 
The period utility function, u(·), is a standard CES function:

$u(c)=\frac{c^{1-\rho}}{1-\rho}. \quad (2)$

The budget constraint can be written as:

$\begin{cases} c_{i,0}=b_{i,1}+(1-\theta_{i,1})\,p_{0},\\ c_{i,1}+b_{i,1}R_{L1}=e+b_{i,2}+(\theta_{i,1}-\theta_{i,2})\,p_{1}+\pi_{i,1},\\ c_{i,2}+b_{i,2}R_{L2}=\theta_{i,2}\,y+\pi_{i,2}. \end{cases} \quad (3)$

Initially, each consumer owns θi, 0 = 1 unit of the asset, where the price of the asset in period t is denoted by pt. Consumers can buy or sell the asset in a perfectly competitive market, but they cannot sell it to the lenders and rent it back. As in Jeanne and Korinek (2010b), we assume that consumers derive some important benefits from owning the asset.4 Note that consumers are identical and, in a symmetric equilibrium, we must have θi, 0 = θi, 1 = θi, 2 = 1.

As is evident from the budget constraint, in order to consume in period 0, consumers need to either sell a fraction of their assets (1 – θi, 1) or borrow from banks (bi, 1). Moreover, each consumer, in period 1, faces a collateral constraint of the form:

$b_{i,2}\le \theta_{i,1}\,p_{1}, \quad (4)$

where θi, 1 is the quantity of domestic collateral held by the consumer at the beginning of period 1.

The microfoundation of the collateral constraint follows the spirit of Kiyotaki and Moore (1997). However, for tractability, while in Kiyotaki and Moore (1997) borrowing capacity is an increasing function of the future value of the collateral, we assume that borrowing capacity is an increasing function of the current value of the collateral. The same modelling choice has been made by Mendoza (2010), Jeanne and Korinek (2010b) and Mendoza and Smith (2006), and is justified by the work of Cordoba and Ripoll (2004) and Kocherlakota (2000), who show that collateral constraints specified with the next-period price of the collateral asset do not yield quantitatively significant differences in response to shocks.

Consumers maximize (1) subject to the budget constraint (3) and the collateral constraint (4). The utility maximization problem of the representative consumer (i.e., variables without the subscript i) can be written as:

$\max_{b_{1},b_{2},\theta_{2}}\; u\!\left(b_{1}+(1-\theta_{1})p_{0}\right)+\mathbb{E}_{0}\!\left[u\!\left(e+b_{2}+(\theta_{1}-\theta_{2})p_{1}+\pi_{1}-b_{1}R_{L1}\right)+\theta_{2}y+\pi_{2}-b_{2}R_{L2}-\lambda\left(b_{2}-\theta_{1}p_{1}\right)\right]. \quad (5)$

Solving this problem backwards, the first order conditions are:

$p_{1}=\frac{y}{u'(c_{1})}, \qquad u'(c_{1})=R_{L2}+\lambda, \qquad u'(c_{0})=R_{L1}\,\mathbb{E}_{0}\!\left[u'(c_{1})\right]. \quad (6)$

The first equation represents the asset pricing condition for the economy.
The second and third equations are the Euler equations for consumption in period 1 and 0, respectively.5

#### Consumers' demand of loans

In order to allow for market power in the banking sector, we model the market for loans in a Dixit and Stiglitz (1977) framework.6 That is, we assume that loan contracts bought by consumers are a constant elasticity of substitution composite basket of slightly differentiated financial products, each supplied by a bank j, with an elasticity of substitution ζ (which will be the main determinant of the spread between bank rates and the risk-free rate).

In particular, consumer i, in order to obtain a loan of a given size bi,t, needs to take out a continuum of loans bi, j, t from all existing banks j, such that:

$b_{i,t}\le \left(\int_{0}^{1} b_{i,j,t}^{\frac{\zeta-1}{\zeta}}\,dj\right)^{\frac{\zeta}{\zeta-1}}, \quad (7)$

where ζ > 1 is the elasticity of substitution between differentiated loans (or banking services, in general). Demand by consumer i seeking a real amount of loans equal to bi,t can be derived by minimizing the total repayment due to the continuum of banks j over bi,j,t. Aggregating over symmetric households, this minimization problem yields downward-sloping loan demand curves of the kind:

$b_{j,t}=\left(\frac{R_{Lj,t}}{R_{Lt}}\right)^{-\zeta} b_{t}, \quad (8)$

with the aggregate interest rate on loans given by:

$R_{Lt}=\left(\int_{0}^{1} R_{Lj,t}^{1-\zeta}\,dj\right)^{\frac{1}{1-\zeta}}. \quad (9)$

### B Banks and Loan Supply

There is a continuum of monopolistically competitive domestic banks indexed by j ϵ [0,1] owned by households. Microeconomic theory typically considers market power as a distinctive feature of the banking sector (Freixas and Rochet, 2008).7 In particular, we assume that each bank j supplies a slightly differentiated financial product, and no other bank produces the same variety: each bank has, therefore, some monopoly power over its product. However, each bank competes with all other banks, since consumers consider the banks' varieties imperfect substitutes. As banks have market power over the supply of their variety, they set prices to maximize profits, taking into account the elasticity of demand for their variety.

Each bank j collects fully insured deposits dj,t from foreign investors at the risk-free interest rate Rt = R*, where R* is exogenous and given. We further assume that foreign lenders have an infinite supply of deposits, so that banks can satisfy any demand for loans. Finally, banks use deposits to supply loans to consumers with the following constant-returns-to-scale production function:

$b_{j,t}=d_{j,t}. \quad (10)$

In each period, bank j maximizes its profits by choosing prices and quantities:

$\max_{R_{Lj,t},\,b_{j,t}}\; b_{j,t}R_{Lj,t}-d_{j,t}R_{t},$

subject to the demand schedule in (8) and to the production function in (10). The first order condition for this problem implies that the optimal lending rate applied by banks is a positive constant gross markup (M) over the marginal cost:

$R_{Lj,t}=\frac{\zeta}{\zeta-1}R_{t}=M\,R_{t}. \quad (11)$

Note that, together with consumers' optimality conditions, equation (11) determines the equilibrium of the economy.
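For concreteness, a minimal numerical sketch of the loan-demand block follows. It simply evaluates equations (8), (9), and (11) at the elasticity ζ = 33.3 used in the calibration of Section II.C; the discrete average in the rate index stands in for the integral over the continuum of banks.

```python
import numpy as np

zeta = 33.3                     # elasticity of substitution between loan varieties
markup = zeta / (zeta - 1.0)    # equation (11): M = zeta / (zeta - 1), ~1.031
R_star = 1.015                  # gross risk-free rate, the banks' marginal cost

R_L_desired = markup * R_star   # lending rate a fully flexible bank would set

def loan_demand(R_Lj, R_L_aggregate, b_total):
    """Equation (8): demand for bank j's loans given the aggregate rate."""
    return (R_Lj / R_L_aggregate) ** (-zeta) * b_total

def aggregate_rate(rates):
    """Equation (9) for a discrete set of banks (mean approximates the integral)."""
    rates = np.asarray(rates, dtype=float)
    return np.mean(rates ** (1.0 - zeta)) ** (1.0 / (1.0 - zeta))

print(round(markup, 3), round(R_L_desired, 3))   # 1.031 1.046
```

Because ζ is large, a bank pricing even slightly above the aggregate rate loses most of its demand, which is what disciplines the markup in equation (11).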
Once the lending rate has been set by banks, households make their consumption (and, therefore, borrowing) decisions. Market clearing in the loan market closes the model.

We also assume that the banking sector displays short-run interest rate stickiness. In particular, we assume that banks cannot immediately adjust their lending rates in response to macroeconomic developments. The presence of interest rate stickiness in the banking sector can be justified by the presence of adjustment costs and monopolistic power. For example, Hannan and Berger (1991) show that, in the presence of fixed adjustment costs, banks re-set their lending rates only if the costs of changing the interest rate are lower than the costs of maintaining a non-equilibrium rate (see also Neumark and Sharpe, 1992). Empirically, it is a well documented fact that the adjustment of banks' lending rates to changes in the risk-free rate is only partial and heterogeneous, in particular in the short run. For example, Kwapil and Scharler (2010) show that the interest rate pass-through of consumer loans in the U.S. can be as low as 0.3, implying that interest rates charged on consumer loans are smoothed heavily by banks. For tractability, we implement interest rate stickiness by means of a simple one-period real rigidity, while we assume that in the long run interest rates are fully flexible.

In particular we assume that, if the risk-free interest rate is hit by a temporary shock (ν) in period 0, only a fraction μ of the banks can reset their rates, whereas the remaining 1 – μ banks cannot. This entails that, following a shock to the risk-free interest rate, the aggregate lending rate will in general differ from the one desired by banks: remembering that consumers are price takers and that their loan demand depends on the average interest rate in the economy, this friction will lead to a distortion in the competitive equilibrium and will create the policy scope for restoring efficiency. Moreover, given that the incomplete pass-through of changes in the risk-free rate to lending rates is a realistic assumption only in the short run, we assume that from period 1 interest rates are again fully flexible.

Finally, note that shocks to the interest rate in period 0 are observed by all agents before they make their decisions. We will consider three different scenarios: no shock to the risk-free interest rate (ν = 0), a temporary increase in the risk-free rate (ν > 0), and a temporary reduction in the risk-free rate (ν < 0). We can interpret these three scenarios as the result of a realized temporary "shock" to the risk-free interest rate at the beginning of period 0. Specifically, the shock ν can be interpreted as a demand shock (such as a preference shock or a government spending shock) in a closed economy or as a foreign demand shock in a small open economy (see Harrison and Oomen (2010) and Cook and Devereux (2011), for example).

### C Shocks and Parameter Values

To be able to solve and simulate the model we need to make assumptions about key parameters: the distribution of the stochastic endowment (e), the return of the asset (y), households' preferences (ρ), the degree of monopolistic competition in the banking sector (ζ), the risk-free interest rate (R*), the degree of interest rate stickiness (μ), and the size of the shocks to the interest rate (ν). Table 1 summarizes the assumptions we make on these processes and parameters.

Table 1. Calibration of Model Parameters
Note: 3M US T-Bill is the average 3-Month Treasury Bill rate deflated with US CPI; RL is the 15-Year mortgage fixed rate deflated with US CPI. U.S. monthly data from 1985:M1 to 2007:M3.

We assume that the endowment e has a deterministic component $\bar{e}$ and a stochastic component $\tilde{\epsilon}$:

$e=\bar{e}+\tilde{\epsilon}, \quad (12)$

where $\tilde{\epsilon}$ is uniformly distributed over the [-ε, +ε] interval. This implies that the endowment e is uniformly distributed over the [$\bar{e}$ − ε, $\bar{e}$ + ε] interval. We will analyze the model's properties for different values of the maximum size of the shock to the endowment (ε). In particular, we will consider values for ε such that the economy may be constrained for a sufficiently large negative realization of the shock, but would not be constrained in the absence of disturbances. As shown in Appendix A, under these assumptions the model can be solved largely in closed form.

While it is possible to make reasonable assumptions for the majority of parameters, two degrees of freedom are left for the solution of the model: the return of the asset (y) and the expected value of the endowment ($\bar{e}$). Following Jeanne and Korinek (2010a), we assume $\bar{e}$ = 1.3 and y = 0.8. Jeanne and Korinek (2010a) choose these two parameters jointly with the maximum size of the shock to the endowment (ε) to control when the borrowing constraint binds. Note also that, given the stylized nature of the model, we do not use it for quantitative analysis. The parametrization is chosen to study the solution in the case in which the borrowing constraint does not bind today, but can bind tomorrow. The analysis of a model with an occasionally binding constraint adds a layer of complexity that is not present in standard New Keynesian models and banking models, and thus justifies simplifying in other dimensions. In the last section of the paper we shall use the qualitative predictions of the model to interpret the recent U.S. experience in the run-up to the Great Recession.

We calibrate the remaining parameters using U.S. data from 1985 to 2007, i.e., from the beginning of the Great Moderation to the beginning of the Great Recession. The gross risk-free real interest rate is set to R* = 1.015 in order to match the average yield of the 3-Month Treasury Bill (deflated with US CPI) over the period 1985-2007. We set the elasticity of substitution between financial products to ζ = 33.3, which implies a gross markup of M ≃ 1.03. This markup yields approximately a spread of 250 basis points over the risk-free interest rate, which is consistent with the average spread of the 15-year mortgage fixed rate over the 3-Month Treasury Bill rate.8 Household preferences are given by a constant elasticity of substitution utility function, with a relative risk aversion coefficient ρ = 2, which is a conventional value.

Under these assumptions, the model economy is never constrained when ε ≤ εb = 0.095. That is, the constraint never binds below the threshold εb, and the probability of observing a crisis in period 1 is zero. In this case, the model has a closed-form solution given by optimality conditions (6) together with λ = 0.
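The calibration can be collected in a small dictionary for later experiments. The helper below is only a bookkeeping sketch: it reproduces the markup implied by ζ and evaluates the uniform endowment distribution; the cutoff `e_cut` is a free input, since the endowment level at which the constraint actually starts to bind is pinned down by the equilibrium and solved numerically in Appendix A.

```python
import numpy as np

# Calibration as reported in Section II.C / Table 1.
params = dict(
    R_star=1.015,  # gross risk-free real rate (avg. 3-month T-bill, 1985-2007)
    zeta=33.3,     # elasticity of substitution between loan varieties
    rho=2.0,       # relative risk aversion
    mu=0.5,        # share of banks able to reset rates after the period-0 shock
    nu=0.02,       # absolute size of the interest-rate shock
    e_bar=1.3,     # mean endowment
    y=0.8,         # long-run return on the asset
    eps_b=0.095,   # threshold for the maximum endowment shock
)
params["M"] = params["zeta"] / (params["zeta"] - 1.0)  # gross markup, ~1.03

def prob_endowment_below(e_cut, eps_max, e_bar=params["e_bar"]):
    """P(e < e_cut) when e is uniform on [e_bar - eps_max, e_bar + eps_max].
    Illustrative only: e_cut is a placeholder for the equilibrium cutoff."""
    lo, hi = e_bar - eps_max, e_bar + eps_max
    return float(np.clip((e_cut - lo) / (hi - lo), 0.0, 1.0))

print(round(params["M"], 3))                       # 1.031
print(prob_endowment_below(1.25, eps_max=0.12))    # ~0.29 for these inputs
```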
In contrast, when ε > εb = 0.095 there is a positive probability that the constraint will bind in period 1: in this case the model does not have a closed-form solution and, therefore, the levels of debt and consumption have to be solved for numerically (as shown in Appendix A).

The calibration of the degree of interest rate stickiness (μ) is more difficult. Even if there is compelling evidence on the imperfect adjustment of retail interest rates to movements in the risk-free rate, the degree of such rigidity is not consistently quantified. For the U.S., Kwapil and Scharler (2010) estimate a short-run pass-through of 0.3 for consumer loans.9 Based on this evidence we assume that, in the short run, only 50 percent of the banks can adjust their lending rates conditional on a movement in the interest rate. In the long run, in contrast, pass-through is assumed to be complete.10

Finally, we assume that the risk-free interest rate is affected by a shock in period 0, such that:

$R_{1}=R^{*}+\nu, \quad (13)$

where ν can take three values, namely ν ∈ {0, +0.02, −0.02}. The size of the shock matches the standard deviation of the yield on the U.S. 3-Month Treasury Bill over the 1985-2007 period.

## III Decentralized Equilibrium

We can now analyze the decentralized equilibrium of the economy. In order to build intuition, we will first consider the effects of the financial friction (which manifests itself conditional on shocks to the endowment) by comparing the allocation in our model economy with an economy in which the collateral constraint is never binding. Second, we will analyze the effect of the macroeconomic friction (which manifests itself conditional on shocks to the risk-free interest rate) by comparing the allocation in our model economy with an economy with fully flexible interest rates. Third, and finally, we will analyze the full model, when both frictions are at work simultaneously.

### A Financial Friction

The financial friction affects the economy only when the collateral constraint is not binding today but can bind tomorrow with a positive probability. In particular, a shock (ε) to the endowment received by households, if large enough to make the collateral constraint binding, will lead to a downward spiral of declining consumption, falling asset prices, and tighter borrowing constraints typical of financial accelerator models, such as Bernanke, Gertler, and Gilchrist (1996) and Kiyotaki and Moore (1997). We label states in which the collateral constraint is binding as "crisis states" and define the probability that the constraint will bind in period 1 (i.e., the crisis probability) as our measure of financial stability.11

We consider different values of the maximum size of the shock (ε) so that i) the collateral constraint never binds (i.e., the shock $\tilde{\epsilon}$ is never large enough to push the economy into the constrained region); and ii) the collateral constraint is occasionally binding (i.e., for large enough realizations of the shock $\tilde{\epsilon}$ the economy can enter the constrained region and experience a financial crisis). As we discussed earlier, the threshold for ε that makes the collateral constraint bind with positive probability is εb ≃ 0.095.

Figure 2 displays the behavior of some endogenous variables in our model for different values of the maximum size of the shock (ε, displayed on the horizontal axis).
The upper-left panel of Figure 2 plots the equilibrium level of borrowing in period 0 (b1). Conditional on b1, it is possible to compute net worth (e – b1 RL1), consumption (c1), and the probability of observing a crisis (π) in period 1.

When ε ≤ εb, the economy is never constrained and households' decisions are not affected by the size of the shock ε: if hit by a negative endowment shock, households can borrow from banks to smooth consumption. In contrast, when the maximum size of the shock is above its threshold (εb = 0.095), consumers take into account that there is a positive probability that the constraint will bind in period 1. They insure by reducing borrowing in period 0 (so that their net worth next period will be higher) and by reducing their consumption in period 1. The probability of a crisis (π) is positive and increases in a non-linear way with the maximum size of the shock to the endowment.

The intuition for the comparative statics in Figure 2 is the following. The Lagrange multiplier (λ) in the Euler equation (6) represents the shadow value of the collateral constraint. When the shock to the endowment is not large enough to push the economy into the constrained region, λ = 0 and the economy achieves its efficient allocation (as we are not explicitly considering here other distortions). In contrast, when the maximum size of the shock (ε) is large enough, λ may be positive and increasing in ε. Therefore, the larger ε, the larger are λ and the level of precautionary saving undertaken by consumers.

### B Macroeconomic Friction

Let us now analyze how the macroeconomic friction affects our model economy. As is well known from the standard New Keynesian literature, there are two potential distortions in models with monopolistic competition and staggered pricing. First, monopolistic power forces average output below the socially optimal level. Second, staggered pricing implies that both the economy's average markup and the relative price of different goods will vary over time in response to shocks, violating efficiency conditions.12 As we shall see below, our model displays a similar behavior.

Let us assume for the moment that interest rates can freely adjust and that lending rates at the beginning of period 0 are set at the desired level, as a markup over the marginal cost (RL1 = M R*). If a positive shock ν > 0 hits the economy, banks face a new, higher marginal cost and update their lending interest rates such that RL1 = M (R* + ν). Households update their loan demand accordingly and the loan market clears at a higher lending rate. In response to the higher interest rate, consumption and borrowing in period 0 fall relative to the case in which ν = 0. This allocation (henceforth the "flex-rates" allocation) is efficient, conditional on the shock ν.

In a sticky-rates environment, not all banks can reset their lending rate so as to be consistent with the new marginal cost. The fraction μ of banks that can reset lending rates will set:

$R_{L1}^{\mu}=M\,(R^{*}+\nu).$

In contrast, the remaining 1 – μ banks will not be allowed to reset their lending rates, implying that:

$R_{L1}^{1-\mu}=M\,R^{*}<R_{L1}^{\mu}.$

As a consequence, the aggregate lending rate in the economy would differ from its flex-rates counterpart.
According to equation (9), the aggregate lending rate in the sticky-rates economy becomes:

$R_{L1}=M\,(R^{*}+\mu\nu),$

which is lower than the lending rate prevailing under flex rates in the case of positive shocks to the interest rate. A similar gap of opposite sign emerges when the ν shock is negative.

The model properties analyzed in this section can be summarized as follows. In general, interest rate stickiness results in an average interest rate, RL1, which differs from the one required to obtain the flex-rates allocation, therefore affecting the aggregate level of borrowing and consumption. More specifically, when a positive shock hits the interest rate, debt and consumption are higher than in the flex-rates economy, because interest rates increase by less than they would in a fully flexible world. But, when a negative shock hits the economy, debt and consumption are lower than in the flex-rates economy, because interest rates decrease by less than they would in a fully flexible world. As we shall see, this property has crucial implications for the results of our analysis when the macroeconomic friction interacts with the financial friction.

### C The Interaction between Financial Friction and Macroeconomic Friction

In this section we show that the impact of staggered interest rate setting on the crisis probability (our measure of financial stability) depends on the sign of the shock hitting the economy. Specifically, we shall see that, in response to positive shocks to the risk-free interest rate, the probability of a crisis in the sticky-rates economy is lower (increases less) than in the flex-rates economy. Instead, in response to negative shocks to the risk-free interest rate, the crisis probability is higher (it falls less) than in the flex-rates economy. In this sense, we say that interest rate rigidities have an asymmetric impact on financial stability.

We first analyze the effect of a positive shock to the risk-free interest rate (Figure 3). The benchmark is the economy with both frictions but no interest rate shocks (solid line, i.e., the same allocation as in Figure 2). The thin line with asterisk markers and the thin line with circle markers display the equilibrium after the interest rate shock has hit, under flexible and sticky interest rates respectively.

As we showed above, under the assumption of sticky interest rates, the aggregate lending rate in the economy does not increase as much as the risk-free rate following a positive shock. What are the implications for the probability of a crisis, our measure of financial stability? On the one hand, lower lending rates (relative to the flex-rates case) prompt consumers to borrow more (b1) in period 0 and to consume more (c1) in period 1, as shown by the difference between the circles line and the asterisks line. All else equal, this implies higher expected next-period refinancing needs (b2) and, therefore, a higher expected probability that the constraint will be binding in period 1. On the other hand, and despite the higher level of borrowing in period 0, expected net worth (e – b1RL1) in period 1 is larger under sticky rates than under flex rates, because of lower interest rate repayments. All else equal, this implies a relaxation of the borrowing constraint in period 1.
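The two offsetting channels can be illustrated with simple bookkeeping. In the sketch below the borrowing responses are placeholders rather than the model's Euler-equation solutions, so the numbers only show how the lending-rate gap and the repayment burden enter next-period net worth under the calibration above.

```python
# Bookkeeping sketch for a positive shock (nu > 0). Borrowing levels are
# placeholders; in the model they come from the consumers' Euler equation.
M, R_star, mu, nu = 33.3 / 32.3, 1.015, 0.5, 0.02
e_expected = 1.3

R_flex   = M * (R_star + nu)        # all banks reset:         R_L1 = M (R* + nu)
R_sticky = M * (R_star + mu * nu)   # only a fraction mu does: R_L1 = M (R* + mu nu)

b1_flex, b1_sticky = 0.500, 0.503   # placeholder: slightly more borrowing when sticky

net_worth_flex   = e_expected - b1_flex * R_flex
net_worth_sticky = e_expected - b1_sticky * R_sticky

print(R_sticky < R_flex)                    # True: the lending rate rises by less
print(net_worth_sticky > net_worth_flex)    # True here: lower repayments dominate
```

Whether the repayment channel dominates is a quantitative question; the text argues that it does for ρ > 1, which is what the modest borrowing response in the placeholders is meant to mimic.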
The net effect is displayed in the bottom right-hand panel of Figure 3: when a positive shock hits the economy, with sticky interest rates the probability that the constraint will bind in period 1 increases by less than in the flex-rates case. This is because, as long as the coefficient of relative risk aversion (ρ) is larger than 1, the effect of the lower interest rates on net worth dominates the effect on borrowing and consumption. Note that this result is robust to assuming different values for all other parameters of the model, including the size of the shock to the interest rate (ν) and the degree of interest rate stickiness (μ). Changing these parameters does not affect the mechanisms driving the result, but only the magnitude of the effects. In other words, for every possible value of ν and μ the allocation under sticky-rates (circles line) is bounded between the allocation under flex-rates (asterisks line) and the allocation where no shock hits the economy (solid line).\n\nConsider now a negative shock. In the case of a negative shock, sticky interest rates exacerbate the effects of the financial friction rather than dampening it. To see that, Figure 4 displays how the model equilibrium and the crisis probability vary in response to a negative shock to the risk-free interest rate.\n\nUnder interest rate stickiness (circles line), the average lending rate now falls by less than the risk-free interest rate. In the sticky rate economy, consumption and borrowing are lower (or increase less) than in the flex-rate economy (asterisks line) in response to the shock, but next period interest payments are higher. As a result next-period net worth in the sticky rates economy is lower than in the flex-rate economy, and the crisis probability is higher (or it falls less).\n\nIn conclusion, the analysis of the interaction between the macroeconomic and financial friction can be summarized as follows: when both the macroeconomic and the financial friction are present, interest rate stickiness, conditional on positive shocks to the interest rate reduces the crisis probability relative to the flex-rate equilibrium; conditional on negative shocks to the interest rate, it increases the crisis probability relative to the flex-rate equilibrium. In this sense, interest rate rigidities have an asymmetric impact on financial stability.\n\n## IV Restoring Efficiency\n\nIn this section we discuss how policy intervention, and in particular monetary policy, can address the market failures of our model economy.\n\nTo build understanding and intuition for the main results, we first analyze the case in which there is only the financial friction or the macroeconomic friction. Then, we consider the case in which the policy authority faces both frictions with either one or two policy instruments.\n\nA key result is that a policy-maker with a macro-prudential instrument (a tax on borrowing) and a monetary policy instrument (the policy interest rate) can address both distortions induced by the financial friction and the macroeconomic friction. In contrast, if the interest rate is the only available instrument, the policy-maker faces a trade-off between macroeconomic and financial stability when the economy is hit by negative shocks.\n\n### A Addressing the Pecuniary Externality\n\nAs it is well known, the occasionally binding constraint that is in our model generates a pecuniary externality. 
This pecuniary externality drives a wedge between private and socially optimal outcomes because private agents do not internalize the effect of their decisions on the asset price that enters the specification of the borrowing constraint. A social planner, unlike private agents, can internalize that consumption decisions affect the asset price (as shown by the asset price equation in (6)), which, in turn, affects the aggregate collateral constraint in (4).13

Following Jeanne and Korinek (2010a), the planner's problem for this economy can be written as:

$\max_{b_{1},b_{2}}\; u(b_{1})+\mathbb{E}_{0}\!\left[u\!\left(e+b_{2}+\pi_{1}-b_{1}R_{L1}\right)+y-b_{2}R_{L2}-\lambda^{sp}\!\left(b_{2}-p_{1}\!\left(e+b_{2}-b_{1}R_{L1}\right)\right)\right],$

where the maximization is subject to the budget constraint (3), the aggregate borrowing constraint (4), and the pricing rule of the competitive equilibrium allocation:

$p_{1}(c_{1})=\frac{y}{u'(c_{1})},$

where the asset price, p1(c1), is now a function of aggregate consumption.

The corresponding first order conditions are:

$u'(c_{0})=R_{L1}\,\mathbb{E}_{0}\!\left[u'(c_{1})+\lambda^{sp}p'(c_{1})\right], \qquad u'(c_{1})=R_{L2}+\lambda^{sp}\left(1-p'(c_{1})\right). \quad (14)$

By comparing (6) and (14) and noting that p′(c1) > 0, it is clear that there is a wedge between the decentralized and the social planner allocation: the social planner saves more than private agents whenever the borrowing constraint is expected to bind in period 1 with positive probability (i.e., whenever λsp > 0). This reflects the fact that the social planner internalizes the endogeneity of next period's asset price to this period's aggregate saving. As a consequence, when the constraint never binds, the allocation of resources in the economy is efficient (ignoring the other frictions in the model). However, when there is a positive probability that the constraint binds in period 1, the allocation is not efficient. Consumption and borrowing in the decentralized equilibrium are excessive relative to the allocation chosen by the social planner (i.e., there is overborrowing, in the parlance of the literature). As a result, the crisis probability is also higher in the decentralized equilibrium than in the social planner equilibrium.

#### A Pigouvian Tax on Borrowing

In this set-up, Jeanne and Korinek (2010a) show that efficiency can be restored in the decentralized economy by imposing a Pigouvian tax on borrowing in period 0, namely b1(1 – τ), which is rebated with lump-sum transfers (TR). The optimal tax is given by:

$\tau=\mathbb{E}\!\left[\frac{\lambda^{sp}\,p'(c_{1})}{u'(c_{1})}\right]. \quad (15)$

This equation states that whenever the borrowing constraint binds in period 1 with positive probability, the policy-maker imposes a positive tax on borrowing in period 0, prompting private agents to issue less debt in period 0 than in the decentralized equilibrium.
This is because both the shadow value of the collateral constraint (λsp) and the derivative ${p}_{1}^{\\prime }\\left({c}_{1}\\right)$ are positive.\n\n#### An Interest Rate Policy\n\nA Pigouvian tax on borrowing may be difficult to implement. But the constrained efficient allocation can also be decentralized with the interest rate. The policy-maker can equally curtail households’ borrowing by increasing lending interest rates. For instance, the policy-maker (e.g., a central bank in this specific case) can increase the interest rate at the beginning of period 0, affecting banks marginal cost and, therefore, consumers’ borrowing and consumption decisions.\n\nThis increase in interest rates—if rebated with lump sum transfers (TR)—has the same effect of the Pigouvian tax analyzed above. To see this, assume for simplicity that the central bank can affect the interest rate by an additive factor ψ, so that the marginal cost for banks would be given by R* + ψ (see Stein, 2012). The consumers’ maximization problem becomes:\n\n$\\underset{{b}_{1},{b}_{2}}{\\text{max}}\\left\\{\\begin{array}{c}u\\left({b}_{1}\\right)+{\\mathbb{\\text{E}}}_{0}\\left[u\\left(e+{b}_{2}+{\\pi }_{1}-{b}_{1}M\\left({R}^{*}+\\psi \\right)+TR\\right)+\\\\ +y-{b}_{2}{R}_{L2}-{\\lambda }^{sp}\\left({b}_{2}-{p}_{1}\\right)\\right]\\end{array}\\right\\}.$\n\nBy equalizing the first order condition with respect to b1 of the decentralized equilibrium and the social planner equilibrium, we can derive the level of ψ which closes the wedge:\n\n$\\left\\{\\begin{array}{l}{u}^{\\prime }\\left({c}_{0}\\right)={R}_{L1}{\\mathbb{\\text{E}}}_{0}\\left[{u}^{\\prime }\\left({c}_{1}\\right)+{\\lambda }^{sp}{p}^{\\prime }\\left({c}_{1}\\right)\\right],\\\\ {u}^{\\prime }\\left({c}_{0}\\right)=M\\left({R}^{*}+\\psi \\right){U}^{\\prime }\\left({c}_{1}\\right),\\end{array}$\n\nSolving for ψ yields:\n\n$\\begin{array}{cc}\\psi ={\\mathbb{\\text{E}}}_{0}\\left[\\frac{{\\lambda }^{sp}p\\prime \\left({c}_{1}\\right)}{{u}^{\\prime }\\left({c}_{1}\\right)}\\right]{R}^{*}.& \\left(16\\right)\\end{array}$\n\nNotice that as long as the shadow value of the collateral constraint (λsp) is different from zero, ψ is positive and can be interpreted as a prudential “markup” factor on the risk-free interest rate. This, in turn, implies that whenever the constraint is binding with positive probability, the central bank would raise interest rates so that households consume less and issue less debt in period 0, reducing the probability of hitting the constraint in case of an adverse shock in period 1.\n\nIn summary, when the borrowing constraint is the only friction in the economy and the policy rate is the only policy instrument, a social planner can achieve constrained efficiency by increasing interest rates in period 0. This allocation is isomorphic to the one obtained with the Pigouvian tax on debt analyzed in the previous section. As we shall see below, however, this is not always the case. When both the financial and the macroeconomic frictions are present it will depend on the sign of the shock hitting the economy.\n\n### B Addressing Monopolistic Competition and Interest Rate Stickiness\n\nAs mentioned in the previous sections, our model embeds two macroeconomic distortions. The first distortion is the presence of market power in loan markets. The second distortion is staggered adjustment of lending rates. 
In this section we discuss how policy can address them.

Monopolistic competition in the banking sector implies an inefficiently low level of consumption, because lending interest rates are, on average, higher than under perfect competition. As is standard in the New Keynesian literature, this inefficiency could be eliminated in the decentralized economy through the suitable choice of a subsidy to interest rate repayments such that:

$R_{Lt}=M\left(1-\eta_{t}\right)R_{t}. \quad (17)$

Hence, the optimal allocation can be attained if M (1 − ηt) = 1 or, equivalently, by setting ηt = 1/ζ.

Staggered interest rate setting implies an inefficient level of borrowing, consumption, and net worth because the economy's aggregate lending rate will generally differ from the one prevailing under flexible rates.

One way to address the consequences of interest rate stickiness is as follows. Assume that the central bank can affect the interest rate by an additive factor ψ. Thus, the marginal cost of funds for banks, conditional on a shock to the risk-free interest rate, would be given by R* + ν + ψ. Then, the central bank could choose ψ such that:

$R_{L1}=M\left(R^{*}+\nu\right),$

which is the efficient (i.e., without distortions) level of the lending interest rate. Solving this equality yields:

$\psi=\frac{1-\mu}{\mu}\,\nu. \quad (18)$

Hence, in response to a positive shock to the risk-free rate (ν > 0), the central bank would raise interest rates above the competitive equilibrium level by the factor ψ > 0; in contrast, in response to a negative shock to the risk-free rate (ν < 0), the central bank would lower interest rates below the competitive equilibrium level by the factor ψ < 0.

To see why this would be a constrained-efficient decentralized equilibrium, note the following: in response to such a policy intervention, banks that can adjust their interest rates would do so and make an optimal decision. In contrast, banks that are not allowed to change their interest rates would not be optimizing anyway. But consumers would face the same aggregate interest rate prevailing without sticky rates (that is, as in the undistorted economy) and hence make optimal decisions.

### C Addressing Both Frictions with Two Instruments

We now analyze how to implement the constrained-efficient allocation in the decentralized economy when both frictions are present. We focus on the case in which the subsidy to interest rate repayments, η in equation (17), is always in place (so as to remove the distortion generated by monopolistic competition) and the policy-maker has two policy instruments at its disposal. Specifically, we consider a policy-maker who maximizes the expected utility of consumers (5), subject to their budget constraints (3) and borrowing constraints (4). The two instruments to address the two distortions in the economy are (1) the interest rate wedge (ψ), to address the distortion generated by staggered interest rate setting, and (2) a prudential tax on debt (τ), to address the distortion generated by the pecuniary externality.

When two instruments are available, the policy-maker can address the macroeconomic and financial stabilization problems separately.
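As a summary of the two wedges, the sketch below evaluates equation (18) for the interest-rate instrument and equation (15) for the prudential tax. The draws of λsp and c1 are placeholders standing in for the planner's period-1 solution (computed numerically in Appendix A), so the output is only meant to show that the two instruments load on different objects.

```python
import numpy as np

rho, y, mu = 2.0, 0.8, 0.5          # calibration from Section II.C

def u_prime(c):                      # CRRA marginal utility
    return c ** (-rho)

def p_prime(c1):                     # derivative of p1(c1) = y / u'(c1) = y * c1**rho
    return rho * y * c1 ** (rho - 1.0)

def psi_macro(nu):
    """Equation (18): interest-rate wedge undoing the stickiness distortion."""
    return (1.0 - mu) / mu * nu

def tau_prudential(lam_sp, c1):
    """Equation (15): tau = E[lambda^sp * p'(c1) / u'(c1)] over period-1 states."""
    lam_sp, c1 = np.asarray(lam_sp), np.asarray(c1)
    return float(np.mean(lam_sp * p_prime(c1) / u_prime(c1)))

# Placeholder states: the constraint binds (lambda^sp > 0) in two of five states.
lam_sp = np.array([0.0, 0.0, 0.0, 0.05, 0.10])
c1     = np.array([0.95, 1.00, 1.05, 0.85, 0.80])

print(psi_macro(0.02))                         # 0.02: raise rates after a positive rate shock
print(round(tau_prudential(lam_sp, c1), 4))    # small positive tax on borrowing
```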
This separation holds regardless of whether a single policy authority is in charge of both monetary and financial-stability policy (e.g., a central bank) or whether one authority is in charge of monetary policy and the other is in charge of macro-prudential policy. In other words, in our set-up, there are no incentives for a central bank and a financial stability authority to deviate from a coordinated equilibrium.

With these considerations in mind, we consider first a positive shock to the risk-free interest rate. The dashed line of Figure 5 displays the model equilibrium when the policy-maker restores efficiency. Figure 5 also displays two additional allocations: the competitive equilibrium in which a positive shock hits the economy and interest rates are flexible (triangles line); and the competitive equilibrium in which a positive shock hits the economy and interest rates are sticky (squares line). Note that, unlike in the decentralized equilibria discussed in the previous section, here a subsidy (η) also removes the distortionary effect of the markup and restores the level of interest rates that would prevail in a perfectly competitive banking sector. For this reason, borrowing under flexible interest rates in Figure 5 (triangles line) is now larger than borrowing in Figure 3 (asterisks line).14

The policy-maker undertakes two independent policy actions, one to address the distortion generated by staggered interest rate setting (the macroeconomic friction) and another one to address the distortion generated by the occasionally binding borrowing constraint (the financial friction). Consider first the macroeconomic friction and then the financial friction.15 The policy-maker first raises interest rates by a factor ψ > 0 to restore the aggregate lending rate that would prevail under flex rates, moving the economy from the sticky-rates competitive equilibrium (squares line) to the flex-rates competitive equilibrium (triangles line). Then, the policy-maker imposes a tax on debt (τ) to restore the efficient level of borrowing, moving the economy to the constrained efficient equilibrium (dashed line).

As shown in equation (15), the optimal level of τ is zero when the constraint never binds and positive when the constraint is expected to bind with positive probability in period 1. Figure 5 shows that when ε ≤ εb the triangles line and the dashed line coincide. However, when ε > εb the tax on borrowing is positive, borrowing in period 0 is lower than in the flex-rates competitive equilibrium (upper-right panel of Figure 5), while consumption in period 1 is larger (lower-right panel of Figure 5). That is, whenever the collateral constraint is expected to bind with a positive probability, the policy-maker forces private agents to borrow less in period 0 (therefore increasing their net worth next period) and to consume more in period 1, thereby reducing the probability of a financial crisis.

Note here that, as before, in the flex-rate equilibrium net worth is lower (and the crisis probability higher) than in the sticky-rate one because of the higher debt repayment in period 1. With flexible interest rates, borrowing is lower but it is more costly to service, so net worth in period 1 is lower and the probability of a crisis is higher. Adding the tax on debt curtails borrowing without increasing debt service costs. As the maximum size of the endowment shock increases, the optimal level of debt falls.
Above a certain threshold, the crisis probability even falls below the one in the sticky-rate equilibrium.

Consider now a negative shock to the risk-free interest rate (Figure 6). To address the pecuniary externality, the policy-maker can impose a tax on debt whenever there is a positive probability that the constraint will bind in period 1, regardless of the sign of the shock. To address the interest rate rigidity, when a negative shock hits the economy, the policy-maker can lower interest rates by ψ. In this case, and differently from a positive shock to the risk-free rate, achieving the flex-rate equilibrium already reduces the probability of a crisis, as the economy moves from the sticky-rates competitive equilibrium (squares line) to the flex-rates competitive equilibrium (triangles line). This is because the higher borrowing relative to the sticky-rate case is more than offset by the lower interest payment. So, when the policy-maker also uses the tax on debt, the probability of a crisis decreases even further relative to the sticky-rate case, and it is always below it.

In summary, with two instruments, such as a tax on borrowing and the monetary policy interest rate, a policy-maker can address both the financial and the macroeconomic friction, thereby achieving constrained efficiency, independently of the sign of the shock hitting the economy.

### D The Trade-off: Addressing Both Frictions with One Instrument

Let us now consider the case in which both frictions are present in the model but the interest rate is the only instrument at the policy-maker's disposal.

Before proceeding it is useful to recall that, in our model, the financial friction results in more borrowing than socially desirable in period 0 whenever the collateral constraint has a positive probability of binding in period 1, regardless of the sign of the shock. In contrast, the macroeconomic friction generates either more or less borrowing than socially desirable depending on whether the economy is hit by a positive or a negative shock. It is thus evident that, if the policy-maker has only one instrument, she/he may face a trade-off in the face of negative shocks, when the economy requires interventions in opposite directions.

Consider a positive shock to the risk-free interest rate. As we showed before, both the macroeconomic and the financial friction result in higher borrowing in period 0 relative to the socially efficient allocation. To address the macroeconomic friction, the policy-maker can raise interest rates by the factor ψ = (1 − μ) ν/μ > 0, as implied by equation (18); and, to address the financial friction, she/he can further raise interest rates by the factor ψ = E0[λsp p′(c1)/u′(c1)] R* > 0, as implied by equation (16). Therefore, when a positive shock hits the economy, a single instrument can restore efficiency.

However, when a negative shock hits the economy, the macroeconomic friction and the financial friction require opposite actions on the interest rate. The macroeconomic friction requires a decrease in interest rates: given that interest rates fall by less than in the flexible rate case, the social planner intervenes to lower interest rates by the factor ψ = (1 − μ) ν/μ < 0 (negative because ν < 0), as implied by equation (18). In contrast, the financial friction requires an increase in interest rates independently of the sign of the shock.
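The sign conflict can be read off the two wedges directly. In the sketch below the financial-stability component is a placeholder positive constant standing in for E0[λsp p′(c1)/u′(c1)] R*, which is positive whenever the constraint may bind next period.

```python
mu = 0.5
psi_financial = 0.01      # placeholder for E0[lambda^sp p'(c1)/u'(c1)] * R*, > 0

def psi_macro(nu):
    """Equation (18): wedge needed to undo interest rate stickiness."""
    return (1.0 - mu) / mu * nu

for nu in (+0.02, -0.02):
    macro = psi_macro(nu)
    conflict = (macro < 0) and (psi_financial > 0)
    print(f"nu={nu:+.2f}: macro wedge {macro:+.3f}, "
          f"financial wedge {psi_financial:+.3f}, conflict={conflict}")

# Positive shock: both wedges call for higher rates, so one instrument suffices.
# Negative shock: the wedges have opposite signs, and a trade-off emerges.
```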
Hence, if the interest rate is the only instrument, the social planner would try to lower interest rates to address the macroeconomic friction and, at the same time, to raise the interest rate to address the financial friction.\n\nSummarizing, when both macroeconomic and financial frictions are present, if the policy interest rate is the only available instrument, a policy-maker that aims to achieve both macroeconomic and financial stability faces a policy trade-off. In particular, the trade-off emerges when the economy is hit by negative interest rate shocks, because addressing both frictions requires interventions of opposite sign on the policy instrument.\n\nNote here that this result is consistent with the findings of Kashyap and Stein (2012), who raise the issue of a potential conflict between price stability and financial stability when the policy rate is the only policy instrument. Formally, they show that the introduction of a second instrument (interest payments on reserves, in their model) can resolve that trade-off. Our analysis above not only corroborates their result in a different setting, but also shows that the trade-off emerges depending on the sign of the shock.\n\n## V Implications for US Monetary Policy and Financial Stability\n\nIn this section we look at the recent U.S. experience through the lens of our model in a qualitative way. Specifically, the theoretical results in the previous section have implications for the debate on the role of U.S. monetary policy in the run-up to the Great Recession.\n\nUnder former Chairman Alan Greenspan, the Federal Reserve lowered its benchmark rate from 6.5 percent to about 2 percent in 2000-01 as a response to the burst of the dot-com bubble. It further lowered interest rates to 1 percent in 2002-03 in response to a deflationary scare, and finally started a long sequence of tightening actions that, during the 2004-06 period, brought the federal funds rate back to 5 percent (see Figure 1).\n\nAgainst this background, Taylor (2007) put forth the idea that the Federal Reserve helped inflate U.S. housing prices by keeping rates too low for too long after 2002. His main argument started from the observation that the policy rate was well below the level implied by a standard Taylor rule, which had been a good approximation to the conduct of monetary policy in the previous several years (Figure 1). As a consequence, “those low interest rates were not only unusually low but they logically were a factor in the housing boom and therefore ultimately the bust.”16 Therefore, according to this view, higher interest rates would have reduced both the probability and the severity of the bust that led to the Great Recession.\n\nIn this section we evaluate this claim against the qualitative predictions of our model. In particular, we will show that Taylor’s argument can be rationalized within the logic of our model only if we make the following auxiliary assumptions: the policy authority is responsible for financial stability—in addition to the traditional objective of price stability—and it has only one instrument at its disposal. However, Taylor’s argument is no longer valid within the logic of our model if the policy authority has two instruments to address the macroeconomic and the financial friction or, as we showed in the previous section, when there are two different policy authorities for macroeconomic and financial stability with one instrument each.
In the latter case, which is the institutional set-up prevailing in the United States, in response to a negative aggregate demand shock, the “optimal” response of the central bank is to slash interest rates without concern for financial stability, which is addressed with the second instrument (or by the other authority).\n\nAs we discuss below, the evidence suggests that the U.S. regulators were at best ineffective in curbing the continued expansion of subprime mortgage lending well past the point at which prime lending had started to decline. We conclude from this analysis that Taylor’s claim that U.S. monetary policy is to blame for the Great Recession is not justified within the logic of our model, given the regulatory regime prevailing in the United States and the evidence we report on its inability to curb subprime lending while monetary policy was tightening its stance during the 2004-06 period.\n\nTo assess Taylor’s contention through the lenses of the model, consider a negative shock hitting the economy, such as the one that occurred in March 2000 when the dot-com bubble burst. Set the beginning of period 0 as the year 2000 and assume that the economy comes back to its pre-shock level of activity after four years, namely at the beginning of 2004—consistent with the fact that the policy rate was raised for the first time in July 2004. Therefore, each time period in our model corresponds to about 4 years in the data.\n\nFigure 7 reports the qualitative behavior of the lending interest rate as implied by our model when a negative shock hits the economy. We consider two policy regimes. First, the policy-maker has just one instrument to address both frictions (Panel a). Second, the policy-maker has two separate instruments to address macroeconomic and financial friction (Panel b). Notice that each panel of Figure 7 reports the behavior of two interest rates. The solid line is the lending rate that would prevail when there is no interest rate stickiness and no policy action is undertaken (i.e., in the decentralized economy when interest rates are fully flexible) and will serve as a benchmark. The dashed line is the lending interest rate that would prevail when interest rate stickiness is present under the two policy regimes analyzed.\n\nAs we discussed in the previous section, if the interest rate is the only policy instrument, there is a trade-off between macroeconomic and financial stability conditional on a negative shock to our model economy. To achieve the efficient allocation, the policy-maker ought to move interest rates in opposite directions. On the one hand, she would have to lower interest rates to restore the aggregate lending rate that would prevail in the absence of interest rate stickiness. On the other hand, she/he would have to raise them to contain the excess borrowing generated by the pecuniary externality. As a result, interest rates in this environment would be set higher than the level predicated by focusing only on macroeconomic stability. As illustrated by the left-hand panel of Figure 7, assuming that the weight attached to macroeconomic and financial stability is the same, average lending interest rates would fall by less than in the flex-rates case. 
Therefore, under this regime, our model is consistent with Taylor’s argument in the sense that it suggests keeping interest rates higher than the flex-rate case to avoid excessive borrowing and large asset price increases, and to reduce the probability of a crisis if the economy is hit by a negative shock in the future.\n\nThe results are different when the policy-maker has two separate instruments to address financial and macroeconomic friction. As noted above, this is equivalent to the case in which there are two separate and independent policy authorities, such as a central bank with the objective of price stability and a financial regulator with the objective of financial stability. As we discussed above, in this case, the policy-maker can achieve efficiency with two independent policy actions, regardless of the sign of the shock. Therefore, once the excess borrowing generated by the financial friction is addressed with a macro-prudential tool, it is optimal for the central bank to lower interest rates in order to address interest rate stickiness and restore the flex-rates allocation. As a matter of fact, the right-hand panel of Figure 7 displays how the average lending rate under this regime (dashed line) is effectively equal to the one prevailing under the flexible interest rates (solid line).\n\nIn the United States, institutional responsibility for financial stability is shared among a multiplicity of agencies. Therefore, for Taylor’s contention to be justified within our model, we would have to observe an effective regulatory clampdown on mortgage lending during the period in which monetary policy was unusually lax by the standard of the Taylor rule. As we shall see below, the regulatory effort to contain mortgage lending during the period 2003-06 was at best ineffective, if not absent altogether. The evidence we report, therefore, provides support for the idea that regulation (or, more exactly, the lack thereof) was a key factor in determining the magnitude of the boom-bust cycle experienced by the U.S. housing market rather than monetary policy per se.\n\nSince the Glass-Steagall Act of 1932, U.S. depository institutions (e.g., banks, thrifts, credit unions, savings and loans, etc.) have been regulated by different federal agencies.17 In contrast, non-depository mortgage originators have enjoyed much more freedom even when they were subsidiaries of bank holding companies (see Engel and McCoy, 2011, Demyanyk and Loutskina, 2012). Moreover, as it is well known, the rise of securitization was accompanied by a shift in the structure of the mortgage industry from an originate-and-hold model to an originate-and-distribute model. Thus, well before the crisis, financial intermediation theory pointed out the risks associated with this shift, with securitization potentially leading to a reduction of financial intermediaries’ incentives to carefully screen borrowers (see Diamond and Rajan, 2001, Petersen and Rajan, 2002).\n\nFigure 8 provides a picture of the evolution of the U.S. mortgage market and monetary policy over the 2000-07 period. Broadly speaking, the picture shows that, after the Federal Reserve started to tighten its monetary policy stance and the prime segment of the mortgage market turned around, the subprime segment of the market continued to boom, with increased perceived risk of loans portfolios and declining lending standards. 
Despite this evidence, the first restrictive regulatory action was undertaken only in late 2006, after almost two years of steady increases in the federal funds rate.\n\nThe upper-left panel of Figure 8 (Panel a) reports the evolution of the federal funds rate (annual average) together with mortgage originations by category over the period 2001-2007. While prime mortgage originations started to fall in 2003, non-prime mortgage originations continued to increase in 2004 and 2005.18 As a matter of fact, the share of non-prime mortgage over total mortgage originations went from about 20 percent in 2001 to more than 50 percent in 2006, experiencing the largest increase in 2004, while the Federal Reserve was already tightening its monetary policy stance. A similar pattern emerges by looking at the issuance of mortgage backed securities (MBS).19 The upper-right panel of Figure 8 (Panel b) shows how the share of private label MBS sharply increased in the 2003-06 period.\n\nThe lower-left panel of Figure 8 (Panel c) reports the federal funds rate together with the share of mortgage originations with a Loan-to-Value (LTV) ratio greater than 90 percent. Note here that, while the use of countercyclical LTV ratios has been suggested—and in some emerging market economies has already been adopted—as a macro-prudential policy tool, the share of high LTV ratio mortgages in the U.S. spiked in 2005, two years after the beginning of the monetary policy tightening.\n\nFinally, the lower-right panel of Figure 8 (Panel d) reports additional evidence on the fact that, while loan quality was relatively stable or improving from 2000 to 2003, it deteriorated sharply from 2004 to 2007. The Office of the Comptroller of the Currency publishes an annual underwriting survey to identify trends in lending standards and credit risk for the most common types of commercial and retail credit offered by national banks. Using data from the 2009 survey, which covered 52 banks engaged in residential real estate lending, Panel (d) reports the evolution of changes in underwriting standards (dash-dotted line) and the perceived level of credit risk (dashed line) in residential real estate loan portfolios.20 The figure shows that, while the level of perceived risk was sharply increasing starting from 2004, banks started easing their lending standards from 2003 and did even more so in the 2004-05 period.\n\nDespite this evidence, U.S. regulators did not take action while monetary policy was being tightened. On the contrary, for instance, the SEC proposed in 2004 a system of voluntary regulation under the Consolidated Supervised Entities program, allowing investment banks to hold less capital in reserve and increase leverage that might have contributed to fueling the demand for mortgage-backed securities (vertical line in our charts under label SEC).\n\nWhen regulators finally decided to act, it was too late. It was not until September 2006 that regulators agreed on new guidelines (vertical line under label FDIC 1) aimed at tightening “non-traditional” mortgage lending practices. Note however that, even if it served as a signal to the mortgage market of changing direction of regulatory policy, the new underwriting criteria did not apply to subprime loans, whose standards were discussed in a subsequent regulatory action which was introduced in June 2007 (vertical line under label FDIC 2). 
By that time, more than 30 subprime lenders had gone bankrupt and many more followed suit.\n\nIn summary, the evidence above suggests that Taylor’s contention that excessively lax monetary policy might have contributed to the occurrence and the severity of the great recession does not appear justified within the logic of our model. Indeed, in the context of a framework in which the regulatory and monetary policy functions are assigned to different agencies that can rely on different instruments, the evidence above suggests that monetary policy was appropriately targeting macroeconomic stability. The regulatory function of the system, instead, was at best ineffective in addressing the financial imbalance that continued to grow in the subprime mortgage market while monetary policy was tightened in 2004-05. With the fall in interest rates after the burst of the dot-com bubble and with house prices at bubble-inflated levels, the mortgage industry found creative ways to expand lending and make large profits. Government regulators maintained a hands-off approach for too long: even though the variables plotted are equilibrium outcomes, Figure 8 shows that policy measures aimed at tightening a largely unregulated sector of the U.S. mortgage market kicked in much later than the tightening of monetary policy enacted by the Federal Reserve.\n\n## VI Conclusions\n\nIn this paper, we develop a model featuring both a macroeconomic and a financial friction that speaks to the interaction between monetary and macro-prudential policy and to the role of U.S. monetary policy in the run up to the Great Recession.\n\nThere are two main results. First, we show that real interest rate rigidities have a different impact on financial stability (defined and measured as the probability that a borrowing constraint binds), depending on the sign of the shock hitting the economy. In response to positive shocks to the risk-free interest rate, real interest rate rigidity acts as an automatic macro-prudential stabilizer. This is because higher debt today associated with lower interest rates (relative to the flexible interest rate case) is offset by lower interest repayments, resulting in higher net worth and lower probability of a crisis in the future. In contrast, when the risk-free rate is hit by a negative shock, real interest rate rigidity leads to a relatively higher crisis probability through the same mechanisms working in reverse (borrowing and consumption are relatively lower today, but they are offset by relatively higher debt service tomorrow, resulting in lower future net-worth and higher crisis probability).\n\nSecond, we show that, when the interest rate is the only policy instrument to address both the macroeconomic and the financial friction, and a shock that lowers interest rates hits the economy, a policy trade-off emerges. This is because the two frictions require interventions of opposite direction on the same instrument. Other instruments, however, may be at the policy-maker’s disposal in order to achieve and maintain financial stability. Our model shows that, when two instruments are available, this trade-off disappears and efficiency can be restored.\n\nOur analysis has interesting implications regarding the role of U.S. monetary policy in the runup to the Great Recession. In a series of recent papers Taylor (2007, 2010) suggested that higher interest rates in the 2002-2006 period would have reduced both the likelihood and the severity of the Great Recession. 
Our findings above support this argument only if we make the auxiliary assumption that the policy authority seeks to address all distortions in the model with a single instrument, namely the policy interest rate. In contrast, when the policy authority has two different instruments, interest rates can be lowered as much as needed in response to a contractionary shock without concerns for financial stability. This is consistent with the view of Bernanke (2010) that additional policy tools, to limit dangerous expansions in leverage, were needed to prevent the global financial crisis.\n\n## References\n\n• Adrian, T., and H. S. Shin (2009): “Money, Liquidity, and Monetary Policy,” American Economic Review, 99(2), 600–605.\n\n• Adrian, T., and H. S. Shin (2010): “Liquidity and leverage,” Journal of Financial Intermediation, 19(3), 418–437.\n\n• Angelini, P., S. Neri, and F. Panetta (2011): “Monetary and macroprudential policies,” Temi di discussione (Economic working papers) 801, Bank of Italy, Economic Research Department.\n\n• Bank of England (2009): “The Role of Macroprudential Policy. A Discussion Paper,” Bank of England.\n\n• Bean, C. (2010): “Joseph Schumpeter Lecture: The Great Moderation, The Great Panic, and The Great Contraction,” Journal of the European Economic Association, 8(2-3), 289–325.\n\n• Bean, C., M. Paustian, A. Penalver, and T. Taylor (2010): “Monetary Policy after the Fall,” in Federal Reserve Bank of Kansas City Annual Conference, Jackson Hole, Wyoming, 28 August 2010.\n\n• Beau, D., L. Clerc, and B. Mojon (2012): “Macro-Prudential Policy and the Conduct of Monetary Policy,” Working Papers 390, Banque de France.\n\n• Benes, J., and K. Lees (2007): “Monopolistic Banks and Fixed Rate Contracts: Implications for Open Economy Inflation Targeting,” Unpublished manuscript, Reserve Bank of New Zealand.\n\n• Benigno, G., H. Chen, C. Otrok, A. Rebucci, and E. R. Young (2011): “Monetary and Macro-Prudential Policies: An Integrated Analysis,” Unpublished manuscript.\n\n• Benigno, G., H. Chen, C. Otrok, A. Rebucci, and E. R. Young (2013): “Financial crises and macro-prudential policies,” Journal of International Economics, 89(2), 453–470.\n\n• Berger, A., A. Demirguc-Kunt, R. Levine, and J. Haubrich (2004): “Bank Concentration and Competition: An Evolution in the Making,” Journal of Money, Credit and Banking, 36(3), 433–451.\n\n• Bernanke, B., M. Gertler, and S. Gilchrist (1996): “The Financial Accelerator and the Flight to Quality,” The Review of Economics and Statistics, 78(1), 1–15.\n\n• Bernanke, B. S. (2010): “Monetary Policy and the Housing Bubble,” Speech at the Annual Meeting of the American Economic Association, Atlanta, Georgia, January 3, 2010.\n\n• Bianchi, J. (2011): “Overborrowing and Systemic Externalities in the Business Cycle,” American Economic Review, 101(7), 3400–3426.\n\n• Blanchard, O., G. Dell’Ariccia, and P. Mauro (2010): “Rethinking Macroeconomic Policy,” Journal of Money, Credit and Banking, 42(s1), 199–215.\n\n• Borio, C. (2011): “Rediscovering the macroeconomic roots of financial stability policy: journey, challenges and a way forward,” BIS Working Papers 354, Bank for International Settlements.\n\n• Borio, C., and W. R. White (2003): “Whither monetary and financial stability: the implications of evolving policy regimes,” Proceedings, Federal Reserve Bank of Kansas City, 131–211.\n\n• Borio, C. E. V. (2006): “Monetary and prudential policies at a crossroads? New challenges in the new century,” BIS Working Papers 216, Bank for International Settlements.\n\n• Borio, C. E. V., and W. Fritz (1995): “The response of short-term bank lending rates to policy rates: a cross-country perspective,” BIS Working Papers 27, Bank for International Settlements.\n\n• Cook, D., and M. B. Devereux (2011): “Optimal fiscal policy in a world liquidity trap,” European Economic Review, 55(4), 443–462.\n\n• Cordoba, J.-C., and M. Ripoll (2004): “Credit Cycles Redux,” International Economic Review, 45(4), 1011–1046.\n\n• Cottarelli, C., and A. Kourelis (1994): “Financial Structure, Bank Lending Rates, and the Transmission Mechanism of Monetary Policy,” IMF Staff Papers, 41(4), 587–623.\n\n• Degryse, H., and S. Ongena (2008): “Competition and Regulation in the Banking Sector: A Review of the Empirical Evidence on the Sources of Bank Rents,” in Handbook of Financial Intermediation and Banking, ed. by A. Boot and A. Thakor. Elsevier.\n\n• Dell’Ariccia, G., L. Laeven, and R. Marquez (2011): “Monetary Policy, Leverage, and Bank Risk-taking,” CEPR Discussion Papers 8199, C.E.P.R. Discussion Papers.\n\n• Demyanyk, Y., and E. Loutskina (2012): “Mortgage companies and regulatory arbitrage,” Working Paper 1220, Federal Reserve Bank of Cleveland.\n\n• Diamond, D. W. (1984): “Financial Intermediation and Delegated Monitoring,” Review of Economic Studies, 51(3), 393–414.\n\n• Diamond, D. W., and R. G. Rajan (2001): “Liquidity Risk, Liquidity Creation, and Financial Fragility: A Theory of Banking,” Journal of Political Economy, 109(2), 287–327.\n\n• Dixit, A. K., and J. E. Stiglitz (1977): “Monopolistic Competition and Optimum Product Diversity,” American Economic Review, 67(3), 297–308.\n\n• Engel, K., and P. McCoy (2011): Banks and Capital Markets: A Survey, Handbook of Empirical Corporate Finance. Oxford University Press.\n\n• Freixas, X., and J.-C. Rochet (2008): Microeconomics of Banking, 2nd Edition, vol. 1 of MIT Press Books. The MIT Press.\n\n• Gerali, A., S. Neri, L. Sessa, and F. M. Signoretti (2010): “Credit and Banking in a DSGE Model of the Euro Area,” Journal of Money, Credit and Banking, 42(s1), 107–141.\n\n• Gordon, R. J. (2005): “What Caused the Decline in US Business Cycle Volatility?,” in The Changing Nature of the Business Cycle, ed. by C. Kent and D. Norman, RBA Annual Conference Volume. Reserve Bank of Australia.\n\n• Hannan, T. H., and A. N. Berger (1991): “The Rigidity of Prices: Evidence from the Banking Industry,” American Economic Review, 81(4), 938–945.\n\n• Harrison, R., and O. Oomen (2010): “Evaluating and estimating a DSGE model for the United Kingdom,” Bank of England Working Papers 380, Bank of England.\n\n• IMF (2011): “Macroprudential Policy: An Organizing Framework,” International Monetary Fund.\n\n• Ingves, S. (2011): “Challenges for the design and conduct of macroprudential policy,” Speech by the Governor of the Sveriges Riksbank at the BOK-BIS Conference, Seoul, Korea.\n\n• Issing, O. (2009): “Asset Prices and Monetary Policy,” Cato Journal, 29(1), 45–51.\n\n• Jeanne, O., and A. Korinek (2010a): “Excessive Volatility in Capital Flows: A Pigouvian Taxation Approach,” American Economic Review, 100(2), 403–407.\n\n• Jeanne, O., and A. Korinek (2010b): “Managing Credit Booms and Busts: A Pigouvian Taxation Approach,” NBER Working Papers 16377, National Bureau of Economic Research, Inc.\n\n• Kannan, P., P. Rabanal, and A. M. Scott (2012): “Monetary and Macroprudential Policy Rules in a Model with House Price Booms,” The B.E. Journal of Macroeconomics, 12(1), 16.\n\n• Kashyap, A. K., and J. C. Stein (2012): “The Optimal Conduct of Monetary Policy with Interest on Reserves,” American Economic Journal: Macroeconomics, 4(1), 266–282.\n\n• Kiyotaki, N., and J. Moore (1997): “Credit Cycles,” Journal of Political Economy, 105(2), 211–248.\n\n• Kocherlakota, N. R. (2000): “Creating business cycles through credit constraints,” Quarterly Review (Summer), 2–10.\n\n• Korinek, A. (2010): “Regulating Capital Flows to Emerging Markets: An Externality View,” Unpublished manuscript.\n\n• Kwapil, C., and J. Scharler (2010): “Interest rate pass-through, monetary policy rules and macroeconomic stability,” Journal of International Money and Finance, 29, 236–251.\n\n• Mendoza, E. G. (2010): “Sudden Stops, Financial Crises, and Leverage,” American Economic Review, 100(5), 1941–1966.\n\n• Mendoza, E. G., and K. A. Smith (2006): “Quantitative implications of a debt-deflation theory of Sudden Stops and asset prices,” Journal of International Economics, 70(1), 82–114.\n\n• Moazzami, B. (1999): “Lending rate stickiness and monetary transmission mechanism: the case of Canada and the United States,” Applied Financial Economics, 9(6), 533–538.\n\n• Neumark, D., and S. A. Sharpe (1992): “Market Structure and the Nature of Price Rigidity: Evidence from the Market for Consumer Deposits,” The Quarterly Journal of Economics, 107(2), 657–680.\n\n• Petersen, M. A., and R. G. Rajan (2002): “Does Distance Still Matter? The Information Revolution in Small Business Lending,” Journal of Finance, 57(6), 2533–2570.\n\n• Posen, A. (2009): “Finding the Right Tool for Dealing with Asset Price Booms,” Speech to the MPR Monetary Policy and the Markets Conference, London, 1 December, available at http://www.bankofengland.co.uk.\n\n• Rajan, R. G.
(2005): “Has financial development made the world riskier?,” Proceedings, Federal Reserve Bank of Kansas City (August), 313–369.\n\n• Stein, J. C. (2012): “Monetary Policy as Financial-Stability Regulation,” The Quarterly Journal of Economics, 127(1), 57–95.\n\n• Svensson, L. E. (2010): “Inflation Targeting,” in Handbook of Monetary Economics, ed. by B. M. Friedman and M. Woodford, vol. 3 of Handbook of Monetary Economics, chap. 22, pp. 1237–1302. Elsevier.\n\n• Svensson, L. E. (2012): “The Relation between Monetary Policy and Financial Policy,” International Journal of Central Banking, 8(Supplement 1), 293–295.\n\n• Taylor, J. B. (2007): “Housing and monetary policy,” Proceedings, Federal Reserve Bank of Kansas City, 463–476.\n\n• Taylor, J. B. (2010): “Getting back on track: macroeconomic policy lessons from the financial crisis,” Review, Federal Reserve Bank of St. Louis (May), 165–176.\n\n• Woodford, M. (2012): “Inflation Targeting and Financial Stability,” NBER Working Papers 17967, National Bureau of Economic Research, Inc.\n\n### A Appendix. Numerical Solution\n\nFirst order conditions. We solve for the equilibrium going backward, as in Jeanne and Korinek (2010a). In period 1, the consumers maximize their utility subject to the budget constraint in (3) and the collateral constraint in (4). The problem for the representative consumer therefore is:\n\n$\mathcal{V}_1 = \max_{b_2,\,\theta_2}\left\{ u\left(e + b_2 + (\theta_1-\theta_2)p_1 + \pi_1 - b_1 R_{L1}\right) + \theta_2 y + \pi_2 - b_2 R_{L2} - \lambda\left(b_2 - \theta_1 p_1\right)\right\},$\n\nwhere notice that the net worth $e - b_1 R_{L1}$ is taken as given. The first order conditions read:\n\n$\begin{cases} FOC(b_2): & u'(c_1) = R_{L2} + \lambda,\\ FOC(\theta_2): & p_1 = y/u'(c_1). \end{cases}$\n\nIn period 0, consumers solve the following problem:\n\n$\max_{b_1}\left\{ u(b_1) + \mathbb{E}_0\left[\mathcal{V}_1\right]\right\},$\n\nwhere we make use of the fact that, in equilibrium, θt = 1. The maximization yields:\n\n$u'(c_0) = R_{L1}\,\mathbb{E}_0\left[u'(c_1)\right].$\n\nPreliminaries.
The first order conditions of the competitive equilibrium (CE) therefore are:\n\n$\begin{cases} FOC(b_1): & u'(c_0) = R_{L1}\,\mathbb{E}_0\left[u'(c_1)\right],\\ FOC(b_2): & u'(c_1) = R_{L2} + \lambda,\\ FOC(\theta_2): & p_1 = y/u'(c_1). \end{cases}$\n\nWhen the economy is not constrained (λ = 0) the model has the following closed-form solution:\n\n$\begin{cases} u'(c_1) = R_{L2},\\ u'(c_0) = \mathbb{E}_0\left[R_{L2}R_{L1}\right],\\ p_1 = \frac{y}{R_{L2}}, \end{cases} \;\Rightarrow\; \begin{cases} c_1^* = \left(R_{L2}\right)^{-\frac{1}{\rho}},\\ c_0^* = b_1^* = \left(R_{L2}R_{L1}\right)^{-\frac{1}{\rho}},\\ p_1^* = \frac{y}{R_{L2}}. \end{cases}$\n\nMoreover, by definition, the collateral constraint must hold when the economy is not constrained21:\n\n$\underbrace{b_2^*}_{c_1^* + b_1^* R_{L1} - e} \le \underbrace{p_1^*}_{\frac{y}{R_{L2}}},$\n\nwhich we can rewrite as:\n\n$e \ge e^b = c_1^* + b_1^* R_{L1} - \frac{y}{R_{L2}}.$\n\nThat is, whenever the endowment is above a certain threshold (e ≥ eb) the economy is not constrained. On the other hand, when the economy is constrained (e < eb) the collateral constraint is binding and consumers would like to borrow b2 > p1. Given that this is not possible, consumers will borrow as much as they can, trying to maximize their consumption in period 1. In this case, the collateral constraint will bind with equality b2 = p1, so that:\n\n$c_1 + b_1 R_{L1} - e = \frac{y}{u'(c_1)},$\n\nand using the fact that the utility function is in CES form:\n\n$c_1 + b_1 R_{L1} - e = y c_1^{\rho}. \qquad (\text{A.1})$\n\nTherefore, depending on whether the constraint is binding or not, we can express borrowing in period 0 as:\n\n$b_1 = \begin{cases} \left(R_{L2}R_{L1}\right)^{-\frac{1}{\rho}} & e \ge e^b,\\ \frac{y c_1^{\rho} - c_1 + e}{R_{L1}} & e < e^b. \end{cases} \qquad (\text{A.2})$\n\nFinally, we assume that the endowment is stochastic and follows a uniform distribution $e \sim U(\overline{e}-\epsilon,\ \overline{e}+\epsilon)$.\n\nAssumption on parameter values. To be able to solve the model we need to make assumptions on the values of two parameters: y and $\overline{e}$. In particular, we will consider values such that 1) the economy may be constrained for sufficiently large negative shocks but 2) would not be constrained in the absence of uncertainty.\n\nFirst, we want a condition that is necessary and sufficient for the economy to be constrained with some probability, when $e \sim U(\overline{e}-\epsilon,\ \overline{e}+\epsilon)$.
Let’s reason the other way round: we already showed that the economy is indeed unconstrained in period 1 if and only if:\n\n$\overline{e} \ge e^b = c_1^* + b_1^* R_{L1} - \frac{y}{R_{L2}}.$\n\nWhen e is stochastic, for the economy to be unconstrained, the above inequality must hold for all possible realizations of e (in particular the adverse realizations). In other words it must be the case that:\n\n$\begin{aligned} \overline{e}-\epsilon &\ge c_1^* + b_1^* R_{L1} - \frac{y}{R_{L2}},\\ \overline{e} &\ge c_1^* + b_1^* R_{L1} - \frac{y}{R_{L2}} + \epsilon. \end{aligned}$\n\nTherefore, when $\overline{e} < c_1^* + b_1^* R_{L1} - \frac{y}{R_{L2}} + \epsilon$ there exists a non-zero probability that the constraint binds.\n\nSecond, we want a condition that is necessary and sufficient for the economy to be unconstrained when there is no uncertainty around the realizations of e (i.e., ε = 0 and $\overline{e} = e$). When ε = 0, the constraint is not binding in period 1 if and only if $e = \overline{e} \ge e^b$, that is:\n\n$\overline{e} \ge c_1^* + b_1^* R_{L1} - \frac{y}{R_{L2}}.$\n\nTherefore, with no uncertainty, when $\overline{e} \ge c_1^* + b_1^* R_{L1} - \frac{y}{R_{L2}}$ the constraint never binds.\n\nSummarizing, we choose an $\overline{e}$ such that the economy would not be constrained in the absence of uncertainty but may be constrained for sufficiently large negative shocks:\n\n$\left(R_{L2}\right)^{-\frac{1}{\rho}} + \left(R_{L2}R_{L1}\right)^{-\frac{1}{\rho}}R_{L1} - \frac{y}{R_{L2}} \le \overline{e} < \left(R_{L2}\right)^{-\frac{1}{\rho}} + \left(R_{L2}R_{L1}\right)^{-\frac{1}{\rho}}R_{L1} - \frac{y}{R_{L2}} + \epsilon.$\n\nThis implies that there will be a threshold for the size of the shock (εb) above which the collateral constraint will start to be binding with positive probability. Specifically, the collateral constraint would be binding for realizations of e in the interval $\left[\overline{e}-\epsilon,\ \overline{e}-\epsilon^b\right]$. The level of εb can be easily computed as:\n\n$\epsilon^b = \overline{e} - e^b = \overline{e} - c_1^* - b_1^* R_{L1} + \frac{y}{R_{L2}}.$
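For readers who want to reproduce these objects, the closed-form allocation and the thresholds eb and εb are straightforward to compute. The snippet below is only an illustrative sketch: the parameter values are placeholders chosen to satisfy the restriction above, not the paper's calibration.

```python
# Placeholder parameters (chosen to satisfy the restriction above; not the paper's calibration)
rho, y = 2.0, 0.5          # CRRA coefficient and asset payoff, with u'(c) = c**(-rho)
R_L1, R_L2 = 1.02, 1.02    # gross lending rates between periods 0-1 and 1-2
e_bar, eps = 1.52, 0.10    # mean endowment and maximum shock size

# Closed-form unconstrained allocation
c1_star = R_L2 ** (-1.0 / rho)
b1_star = (R_L2 * R_L1) ** (-1.0 / rho)      # equals c0_star
p1_star = y / R_L2

# Endowment threshold e^b and shock-size threshold eps^b
e_b = c1_star + b1_star * R_L1 - y / R_L2
eps_b = e_bar - e_b

# Parameter restriction: unconstrained without uncertainty, possibly constrained with shocks
assert e_b <= e_bar < e_b + eps
print(f"e_b = {e_b:.4f}, eps_b = {eps_b:.4f}")
```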
Competitive equilibrium. We will find numerical values for consumption at time 1 (c1) by using the Euler equation FOC(b1), which gives us an optimal relation between consumption in period 0 and consumption in period 1.22 In order to be able to solve this equation we need 1) to find an expression for borrowing as a function of consumption for both constrained and unconstrained states, as we already did in equation (A.2); and 2) to weight those states by their probability. Combining FOC(b1), the budget constraint, and the expression for b1 derived earlier in equation (A.2) we get the following system of equations:\n\n$\begin{cases} b_1^{-\rho} = R_{L1}\,\mathbb{E}_0\left[c_1^{-\rho}\right],\\ b_1 = \begin{cases} \left(R_{L2}R_{L1}\right)^{-\frac{1}{\rho}} & e \ge e^b,\\ \frac{y c_1^{\rho} - c_1 + e}{R_{L1}} & e < e^b. \end{cases}\end{cases}$\n\nBy plugging the second equation into the first one we can write:\n\n$\text{Pr}\left(e < e^b\right)\cdot\left[b_1^{-\rho}\right]^{\text{binding}} + \text{Pr}\left(e \ge e^b\right)\cdot\left[b_1^{-\rho}\right]^{\text{non-binding}} = R_{L1}\,\mathbb{E}_0\left[c_1^{-\rho}\right].$\n\nNow, by substituting for b1, the LHS of the previous equation can be expressed as follows:23\n\n$\begin{aligned} b_1^{-\rho} &= \frac{1}{2\epsilon}\int_{\overline{e}-\epsilon}^{\overline{e}-\epsilon^{b}}\left(\frac{y c_1^{\rho}-c_1+e}{R_{L1}}\right)^{-\rho}de + \frac{1}{2\epsilon}\int_{\overline{e}-\epsilon^{b}}^{\overline{e}+\epsilon} R_{L2}R_{L1}\,de \\ &= \frac{1}{2\epsilon}\int_{\overline{e}-\epsilon}^{\overline{e}-\epsilon^{b}}\left(\frac{y c_1^{\rho}-c_1}{R_{L1}}+\frac{e}{R_{L1}}\right)^{-\rho}de + \frac{R_{L2}R_{L1}}{2\epsilon}\Big[e\Big]_{\overline{e}-\epsilon^{b}}^{\overline{e}+\epsilon} \\ &= \frac{1}{2\epsilon}\left[R_{L1}\frac{\left(\frac{y c_1^{\rho}-c_1}{R_{L1}}+\frac{e}{R_{L1}}\right)^{-\rho+1}}{-\rho+1}\right]_{\overline{e}-\epsilon}^{\overline{e}-\epsilon^{b}} + \frac{R_{L2}R_{L1}}{2\epsilon}\left[\epsilon+\epsilon^{b}\right] \\ &= \frac{R_{L1}^{\rho}}{2\epsilon(1-\rho)}\left[\left(y c_1^{\rho}-c_1+e\right)^{-\rho+1}\right]_{\overline{e}-\epsilon}^{\overline{e}-\epsilon^{b}} + \frac{R_{L2}R_{L1}}{2\epsilon}\left[\epsilon+\epsilon^{b}\right]. \end{aligned}$\n\nBy equating the LHS to the RHS numerically, we obtain the competitive equilibrium level of consumption at time 1, where remember that:\n\n$\begin{aligned} \text{LHS} &= \frac{R_{L1}^{\rho}}{2\epsilon(1-\rho)}\left[\left(y c_1^{\rho}-c_1+\overline{e}-\epsilon^{b}\right)^{-\rho+1} - \left(y c_1^{\rho}-c_1+\overline{e}-\epsilon\right)^{-\rho+1}\right] + \frac{R_{L2}R_{L1}}{2\epsilon}\left[\epsilon+\epsilon^{b}\right],\\ \text{RHS} &= R_{L1}\,\mathbb{E}_0\left[c_1^{-\rho}\right]. \end{aligned}$\n\nFinally, one can also derive the level of optimal debt at time 0, by using again FOC(b1):\n\n$b_1 = \left(R_{L1}\,\mathbb{E}_0\left[c_1^{-\rho}\right]\right)^{-\frac{1}{\rho}}.$
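The "equate LHS and RHS" step lends itself to a standard one-dimensional root finder. The sketch below continues the previous snippet (same placeholder parameters) and, following the treatment in the text, handles c1 as a single unknown; it only illustrates the numerical step and is not the authors' code. It assumes ρ ≠ 1 so that the antiderivative formula above applies.

```python
from scipy.optimize import brentq

def lhs(c1):
    # E0[b1^{-rho}], with b1 from (A.2), integrated over the uniform endowment shock
    a = y * c1**rho - c1
    return (R_L1**rho / (2 * eps * (1 - rho))
            * ((a + e_bar - eps_b)**(1 - rho) - (a + e_bar - eps)**(1 - rho))
            + R_L2 * R_L1 * (eps + eps_b) / (2 * eps))

def rhs(c1):
    # R_L1 * E0[c1^{-rho}], treating c1 as a single unknown as in the text
    return R_L1 * c1**(-rho)

# Competitive-equilibrium consumption in period 1 (bracket valid for the placeholder values above)
c1_ce = brentq(lambda c: lhs(c) - rhs(c), 0.5, c1_star)

# Borrowing in period 0 from FOC(b1): b1 = (R_L1 * E0[c1^{-rho}])^(-1/rho)
b1_ce = rhs(c1_ce) ** (-1.0 / rho)
print(f"CE: c1 = {c1_ce:.4f}, b1 = {b1_ce:.4f}")
```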
Social planner. The social planner problem is solved with the same strategy. The first order conditions are:\n\n$\begin{cases} FOC(b_1): & u'(c_0) = R_{L1}\,\mathbb{E}_0\left[u'(c_1) + \lambda\, p'(c_1)\right],\\ FOC(b_2): & u'(c_1) = R_{L2} + \lambda\left(1 - p'(c_1)\right),\\ FOC(\theta_2): & p_1 = \frac{y}{u'(c_1)}. \end{cases}$\n\nFirst we have to find an expression for p(c1). From FOC(θ2) we get:\n\n$p(c_1) = \frac{y}{u'(c_1)} = y c_1^{\rho},$\n\nand computing the derivative:\n\n$p'(c_1) = \frac{\partial\left(y c_1^{\rho}\right)}{\partial c_1} = \rho y c_1^{\rho-1}.$\n\nNotice that p(c1) is positive and increasing in c1, so that p′(c1) > 0. Notice also that, by definition, the Lagrange multiplier (λ) is positive only when the constraint is binding. By looking at FOC(b1) of the social planner problem, we can state that the planner limits over-borrowing. In fact, $u'(c_0)^{sp} > u'(c_0)^{CE}$, which implies that consumption and, therefore, borrowing at time 0 are lower relative to the competitive equilibrium. On the other hand, the planner increases consumption in period 1: given that p′(c1) > 0, from FOC(b2) we see that $u'(c_1)^{sp} < u'(c_1)^{CE}$.\n\nWe also need a value of λ. Notice that the Lagrange multiplier of the social planner is numerically different from the one of the competitive equilibrium problem. In fact, from FOC(b2) we get\n\n$\lambda = \frac{c_1^{-\rho} - R_{L2}}{1 + y}.$\n\nCombining these two results we can compute:\n\n$\lambda\, p'(c_1) = \begin{cases} 0 & e \ge e^b,\\ \frac{\rho y}{1+y}\left(c_1^{-1} - R_{L2}\,c_1^{\rho-1}\right) & e < e^b. \end{cases}$\n\nWe can now solve for the level of c1. The FOC(b1) can be written:\n\n$b_1^{-\rho} = R_{L1}\,\mathbb{E}_0\left[c_1^{-\rho} + \lambda\, p'(c_1)\right].$\n\nThe LHS has already been computed before.
The RHS is:\n\n$\begin{aligned} &\frac{R_{L1}}{2\epsilon}\int_{\overline{e}-\epsilon}^{\overline{e}-\epsilon^{b}}\left(c_1^{-\rho} + \frac{\rho y}{1+y}\left(c_1^{-1} - R_{L2}\,c_1^{\rho-1}\right)\right)de + \frac{R_{L1}}{2\epsilon}\int_{\overline{e}-\epsilon^{b}}^{\overline{e}+\epsilon} c_1^{-\rho}\,de \\ =\ &\frac{R_{L1}}{2\epsilon}\left[\left(c_1^{-\rho} + \frac{\rho y}{1+y}\left(c_1^{-1} - R_{L2}\,c_1^{\rho-1}\right)\right)\left(\epsilon - \epsilon^{b}\right) + c_1^{-\rho}\left(\epsilon + \epsilon^{b}\right)\right] \\ =\ &\frac{R_{L1}}{2\epsilon}\left[\frac{\rho y}{1+y}\left(c_1^{-1} - R_{L2}\,c_1^{\rho-1}\right)\left(\epsilon - \epsilon^{b}\right) + 2 c_1^{-\rho}\epsilon\right]. \end{aligned}$\n\nAs we have done above, by equating the LHS to the RHS numerically, we obtain the social planner’s level of consumption at time 1, where remember that:\n\n$\begin{aligned} \text{LHS} &= \frac{R_{L1}^{\rho}}{2\epsilon(1-\rho)}\left[\left(y c_1^{\rho}-c_1+\overline{e}-\epsilon^{b}\right)^{-\rho+1} - \left(y c_1^{\rho}-c_1+\overline{e}-\epsilon\right)^{-\rho+1}\right] + \frac{R_{L2}R_{L1}}{2\epsilon}\left[\epsilon+\epsilon^{b}\right],\\ \text{RHS} &= \frac{R_{L1}}{2\epsilon}\left[\frac{\rho y}{1+y}\left(c_1^{-1} - R_{L2}\,c_1^{\rho-1}\right)\left(\epsilon - \epsilon^{b}\right) + 2 c_1^{-\rho}\epsilon\right]. \end{aligned}$\n\nFinally, one can derive the optimal expression for borrowing at time 0 from the social planner FOC(b1):\n\n$b_1 = \left(R_{L1}\,\mathbb{E}_0\left[c_1^{-\rho} + \lambda\, p'(c_1)\right]\right)^{-\frac{1}{\rho}}.$\n\nCrisis Probability. The crisis probability is defined as the probability that the constraint is binding:\n\n$\text{Pr}\left[b_2 > p_1\right] = \frac{1}{2\epsilon}\int_{\overline{e}-\epsilon}^{\overline{e}-\epsilon^{b}}de = \frac{1}{2\epsilon}\left(\epsilon - \epsilon^{b}\right),$\n\nwhich, using the optimality conditions and the budget constraint, can be written as\n\n$\text{Pr}\left[c_1 - \left(e - b_1 R_{L1}\right) > \frac{y}{u'(c_1)}\right].$\n\nNow, knowing that $e = \overline{e} + \tilde{\epsilon}$ and that $\tilde{\epsilon}\sim\mathcal{U}(-\epsilon,\epsilon)$, we can write\n\n$\text{Pr}\left[\tilde{\epsilon} < \underbrace{c_1 - \overline{e} + b_1 R_{L1} - \frac{y}{u'(c_1)}}_{x}\right].$\n\nIn particular, the probability that the constraint binds is given by:\n\n$\text{Pr}\left[-\epsilon \le \tilde{\epsilon} < x\right] = \frac{x+\epsilon}{2\epsilon}.$
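To make this last step concrete, the planner's fixed point and the implied crisis probability can be computed with the same root-finding approach used for the competitive equilibrium. The sketch below continues the illustrative snippets above (same placeholder parameters and the lhs function defined earlier); it is not the authors' code.

```python
def rhs_sp(c1):
    # R_L1 * E0[c1^{-rho} + lambda * p'(c1)], as derived above
    wedge = (rho * y / (1 + y)) * (c1**(-1) - R_L2 * c1**(rho - 1))
    return (R_L1 / (2 * eps)) * (wedge * (eps - eps_b) + 2 * c1**(-rho) * eps)

# Social planner consumption and borrowing (bracket valid for the placeholder values)
c1_sp = brentq(lambda c: lhs(c) - rhs_sp(c), 0.5, c1_star)
b1_sp = rhs_sp(c1_sp) ** (-1.0 / rho)

def crisis_probability(c1, b1):
    # Pr[-eps <= eps_tilde < x], with x = c1 - e_bar + b1*R_L1 - y*c1**rho
    x = c1 - e_bar + b1 * R_L1 - y * c1**rho
    return min(max((x + eps) / (2 * eps), 0.0), 1.0)

print(f"SP: c1 = {c1_sp:.4f}, b1 = {b1_sp:.4f}, "
      f"crisis prob = {crisis_probability(c1_sp, b1_sp):.3f}")
```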
Bank of England and Johns Hopkins University, respectively. This paper was previously circulated under the title “Coordinating Monetary and Macro-Prudential Policies.” We would like to thank Jihad Dagher and the seminar participants at the 2013 Macro Banking and Finance Workshop, the 2013 MMF Conference, the Cattolica University, the Bank of Italy, the EPFL, the Bank of Portugal, the Banque de France, the Bank of England, the IADB, LACEA 2012 Annual Meetings, and EEA 2012 Annual Meetings for useful discussions and helpful comments. The information and opinions presented in this paper are entirely those of the authors, and not necessarily those of the Bank of England.\n\nBernanke (2010) recently said that “the best response to the housing bubble would have been regulatory, rather than monetary”.\n\nThis assumption may be justified by both theoretical and empirical findings (see, for example, Hannan and Berger, 1991, Neumark and Sharpe, 1992, Kwapil and Scharler, 2010, Gerali, Neri, Sessa, and Signoretti, 2010).\n\nConsumers can be interpreted as households owning durable assets (such as their homes, for example).\n\nDetails on the derivation of the equilibrium conditions are reported in Appendix A.\n\nThe presence of market power can be justified by the existence of switching costs which lead to long-term relationships between banks and borrowers (see Diamond (1984) for example). Empirically, the presence of market power in the banking sector, as well as its determinants over the business cycle, are well documented. See, for example, Berger, Demirguc-Kunt, Levine, and Haubrich (2004) and Degryse and Ongena (2008).\n\nNotice here that Gerali, Neri, Sessa, and Signoretti (2010) set the elasticity of loan contracts to about 2.5, to match an average spread of 170 basis points of deposit rates on the policy rate. Our number differs from theirs because we assume that the markup is applied to the gross interest rate (i.e., MR) instead of the net interest rate (i.e., 1+Mr).\n\nThese estimates are in line with older studies on interest rate pass-through in the U.S. For example, Cottarelli and Kourelis (1994) estimate a short-run pass-through of 0.32 and a long-run pass-through of 1; Moazzami (1999) and Borio and Fritz (1995) report short-run coefficients of 0.4 and 0.34, respectively.\n\nNote that the calibration of this parameter does not affect the qualitative behavior of our model.\n\nNote that both Woodford (2012) and Benigno, Chen, Otrok, Rebucci, and Young (2013) define financial stability in these terms.\n\nNote here that, if no shock pushes the economy away from its equilibrium, the average markup would be equal to the constant frictionless markup and the price of all goods in the economy would be the same, implying that no efficiency condition would be violated.\n\nSee Korinek (2010), Bianchi (2011), Jeanne and Korinek (2010a,b), Benigno, Chen, Otrok, Rebucci, and Young (2013) for a more detailed discussion.\n\nResults reported in Figure 5 are robust to the case in which the distortions generated by monopolistic competition are not removed.\n\nNote here that changing the order of the policy actions would not alter the results.\n\nJohn Taylor, interviewed by Bloomberg at the American Economic Association’s annual meeting, Atlanta, January 5, 2010, available at: http://www.bloomberg.com/apps/news?pid=newsarchive&sid=a44P5KTDjWWY\n\nFor instance, the Office of the Comptroller of the Currency is in charge of nationally chartered banks and their subsidiaries. The Federal Reserve covers affiliates of nationally chartered banks. The Office of Thrift Supervision oversees savings institutions.
The Federal Deposit Insurance Corporation insures deposits of both state-chartered and nationally chartered banks.\n\nBy prime loans we refer to loans that conform to Government Sponsored Enterprises (GSE) guidelines; by non-prime loans we refer to Alt-A, Home Equity, FHA/VA, and subprime mortgages.\n\nMBS which are issued or guaranteed by a government sponsored enterprise (GSE) such as Fannie Mae or Freddie Mac are referred to as “agency MBS.” Some private institutions, such as subsidiaries of investment banks, banks, financial institutions, non-bank mortgage lenders and home builders, also issue mortgage securities, the so-called “private label” MBS.\n\nNet percentage calculated by subtracting the percent of banks easing from the percent of banks tightening. Negative values, therefore, indicate easing.\n\nNote here that we are assuming that profits are realized at the end of the period so that they have no effect on the borrowing constraint.\n\nRemember that c0 = b1 from the budget constraint.\n\nIf X is uniformly distributed as U(a, b), then the nth moment of X is given by $\mathbb{E}_0\left[X^n\right] = \frac{1}{b-a}\int_a^b x^n\,dx$.\n\nDoes Easing Monetary Policy Increase Financial Instability?\nAuthors: Ambrogio Cesa-Bianchi and Alessandro Rebucci\n\n• Figure 1. A Counterfactual Path for the U.S. Policy Rate. This chart replicates the counterfactual federal funds rate reported by Taylor (2007). The counterfactual path for the policy rate from 1996 to 2007 is obtained with a Taylor rule of the type: it = rt + πt + 1.5(πt − π) + 0.5(yt − yt*), where rt, the long-run real value of the federal funds rate, is set to 2 percent, πt is CPI inflation, π is target inflation (assumed at 2 percent), yt is real GDP growth, and yt* is real potential GDP growth. (A small implementation sketch of this rule is given after the figure list.)\n\n• Figure 2. Model Equilibrium with Financial Friction. On the horizontal axis is the maximum size of the endowment shock (ε).\n\n• Figure 3. Model Equilibrium with Both Frictions: Positive Shock to the Interest Rate. On the horizontal axis is the maximum size of the endowment shock (ε). The thick solid line displays the equilibrium when no shock hits the risk-free interest rate; the thin line with asterisk markers and the thin line with circle markers display the equilibrium after a positive shock hits the risk-free rate under flex rates and sticky rates, respectively.\n\n• Figure 4. Model Equilibrium with Both Frictions: Negative Shock to the Interest Rate. On the horizontal axis is the maximum size of the endowment shock (ε). The thick solid line displays the equilibrium when no shock hits the risk-free interest rate; the thin line with asterisk markers and the thin line with circle markers display the equilibrium after a negative shock hits the risk-free rate under flex rates and sticky rates, respectively.\n\n• Figure 5. Efficient Allocation with Both Frictions: Positive Shock to the Interest Rate. On the horizontal axis is the maximum size of the endowment shock (ε). The thin lines with triangle and square markers display the equilibrium after a positive shock hits the risk-free rate under flex rates and sticky rates, respectively; the dashed line displays the efficient allocation with two policy instruments. A subsidy is in place to remove the distortionary effect of monopolistic competition.\n\n• Figure 6. Efficient Allocation with Both Frictions: Negative Shock to the Interest Rate. On the horizontal axis is the maximum size of the endowment shock (ε). The thin lines with triangle and square markers display the equilibrium after a negative shock hits the risk-free rate under flex rates and sticky rates, respectively; the dashed line displays the efficient allocation with two policy instruments. A subsidy is in place to remove the distortionary effect of monopolistic competition.\n\n• Figure 7. Alternative Path of the Lending Interest Rate under Different Assumptions about the Number of Instruments at the Policy-maker’s Disposal. Competitive Eq. - Flex (ν < 0) displays the lending interest rate in the decentralized economy under fully flexible interest rates; Policy displays the lending interest rate that would prevail with a policy-maker addressing both the macroeconomic and the financial frictions with one or two instruments, respectively.\n\n• Figure 8. Monetary Policy and the U.S. Housing Sector. The figure provides a picture of the evolution of the U.S. mortgage market and monetary policy over the 2000-07 period.
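The Figure 1 note above spells out the Taylor rule used for the counterfactual policy path. For concreteness, a minimal implementation of that rule is sketched below, with the caption's coefficients and the assumed 2 percent values for the equilibrium real rate and the inflation target; the sample inputs are made up purely for illustration.

```python
def taylor_rule(inflation, gdp_growth, potential_growth,
                real_rate=2.0, target_inflation=2.0):
    """Counterfactual policy rate: i_t = r + pi_t + 1.5*(pi_t - pi*) + 0.5*(y_t - y_t*)."""
    return (real_rate + inflation
            + 1.5 * (inflation - target_inflation)
            + 0.5 * (gdp_growth - potential_growth))

# Illustrative (made-up) inputs: 2.5% CPI inflation, 3.5% GDP growth, 3% potential growth
print(taylor_rule(2.5, 3.5, 3.0))  # -> 5.5 percent
```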
[ null, "https://www.elibrary.imf.org/skin/73d3da6f9c8e812b492dcc7a9cf15c322f6a1944/img/Blank.svg", null, "https://www.elibrary.imf.org/skin/73d3da6f9c8e812b492dcc7a9cf15c322f6a1944/img/Blank.svg", null, "https://www.elibrary.imf.org/skin/73d3da6f9c8e812b492dcc7a9cf15c322f6a1944/img/Blank.svg", null, "https://www.elibrary.imf.org/skin/73d3da6f9c8e812b492dcc7a9cf15c322f6a1944/img/Blank.svg", null, "https://www.elibrary.imf.org/skin/73d3da6f9c8e812b492dcc7a9cf15c322f6a1944/img/Blank.svg", null, "https://www.elibrary.imf.org/skin/73d3da6f9c8e812b492dcc7a9cf15c322f6a1944/img/Blank.svg", null, "https://www.elibrary.imf.org/skin/73d3da6f9c8e812b492dcc7a9cf15c322f6a1944/img/Blank.svg", null, "https://www.elibrary.imf.org/skin/73d3da6f9c8e812b492dcc7a9cf15c322f6a1944/img/Blank.svg", null, "https://www.elibrary.imf.org/skin/73d3da6f9c8e812b492dcc7a9cf15c322f6a1944/img/Blank.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9124814,"math_prob":0.969959,"size":101659,"snap":"2022-40-2023-06","text_gpt3_token_len":22044,"char_repetition_ratio":0.1797944,"word_repetition_ratio":0.17020339,"special_character_ratio":0.21808203,"punctuation_ratio":0.12515937,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98840696,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-26T09:36:43Z\",\"WARC-Record-ID\":\"<urn:uuid:19ab2266-f574-4af5-bee3-73685de7b082>\",\"Content-Length\":\"1019822\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:371e79d1-bd41-49bf-9888-41a99ad5e908>\",\"WARC-Concurrent-To\":\"<urn:uuid:4be37f5a-3108-4fde-8086-b85fc469e92e>\",\"WARC-IP-Address\":\"3.248.172.205\",\"WARC-Target-URI\":\"https://www.elibrary.imf.org/view/journals/001/2015/139/article-A001-en.xml\",\"WARC-Payload-Digest\":\"sha1:Q2YMQQKBHYDVF5BRYNMNEOR7V34LTVYX\",\"WARC-Block-Digest\":\"sha1:6GHX7DNTC5WS3556PXXQJMFXKD5DJO27\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334855.91_warc_CC-MAIN-20220926082131-20220926112131-00098.warc.gz\"}"}
https://mathoverflow.net/questions/326280/6-linear-pde-for-only-3-unknowns
[ "# 6 linear PDE for only 3 unknowns?\n\nLet $$x \\in (0,L)$$, $$t \\in (0,T)$$, and let $$u_0 = u_0(x) \\in \\mathbb{R}^3$$, $$g=g(t) \\in \\mathbb{R}^3$$, $$P = P(x,t) \\in \\mathbb{R}^3$$ and $$Q = Q(x,t) \\in \\mathbb{R}^3$$ be continuously differentiable functions.\n\nDenote by $$\\hat{P}, \\hat{Q}$$ the matrices \\begin{align*} \\hat{P} = \\begin{bmatrix} 0 & -P_3 & P_2 \\\\ Q_3 & 0 & -P_1 \\\\ -P_2 & P_1 & 0 \\end{bmatrix}, \\qquad \\hat{Q} = \\begin{bmatrix} 0 & -Q_3 & Q_2 \\\\ Q_3 & 0 & -Q_1 \\\\ -Q_2 & Q_1 & 0 \\end{bmatrix}. \\end{align*}\n\nMy question is:\n\nAssuming that the following four conditions hold:\n\nCondition 1: \\begin{align*} \\partial_x P - \\partial_t Q = \\hat{P}Q \\qquad \\text{in }(0,L) \\times (0,T), \\end{align*} Condition 2: \\begin{align*} u_0(0) = g(0), \\end{align*} Condition 3: \\begin{align*} u_0' = - \\hat{Q}u_0 \\qquad \\text{for } x \\in (0,L), \\end{align*} Condition 4: \\begin{align*} g' = - \\hat{P}g \\qquad \\text{for } t \\in (0,T), \\end{align*} is there a solution $$u = u(x,t) \\in \\mathbb{R}^3$$ to \\begin{align*} \\begin{cases} \\partial_t u = -\\hat{P}u &\\text{in }(0,L)\\times(0,T) \\\\ \\partial_x u = - \\hat{Q}u &\\text{in }(0,L)\\times(0,T) \\\\ u(x,0) = u_0(x) & \\text{for } x \\in (0,L)\\\\ u(0,t) = g(t) & \\text{for }t \\in (0,T) \\end{cases} \\end{align*} or not ?\n\nIn the above, $$f'$$, $$\\partial_t f$$a and $$\\partial_x f$$ denote the derivative, partial time derivative and partial space derivative respectively; and $$P_k$$, $$Q_k$$, $$u_k$$, $$u_{0k}$$ and $$g_k$$ denote the components of $$P$$, $$Q$$, $$u$$, $$u_0$$ and $$g$$ respectively.\n\nMy question with all terms developed rewrites as follows. Assuming that the following four conditions hold:\n\nCondition 1: \\begin{align*} \\begin{cases} \\partial_x P_1 - \\partial_t Q_1 = P_2 Q_3 - P_3 Q_2\\\\ \\partial_x P_2 - \\partial_t Q_2 = P_3 Q_1 - P_1 Q_3\\\\ \\partial_x P_3 - \\partial_t Q_3 = P_1 Q_2 - P_2 Q_1 \\end{cases} \\qquad \\text{in }(0,L) \\times (0,T), \\end{align*} Condition 2: \\begin{align*} u_0(0) = g(0), \\end{align*} Condition 3: \\begin{align*} \\begin{cases} u_{01}' = \\ \\ Q_3(\\cdot,0) u_{02} - Q_2(\\cdot,0) u_{03} \\\\ u_{02}' = -Q_3(\\cdot,0) u_{01} + Q_1(\\cdot,0) u_{03} \\\\ u_{03}' = \\ \\ Q_2(\\cdot,0) u_{01} - Q_1(\\cdot,0) u_{02} \\end{cases} \\qquad \\text{for } x \\in (0,L), \\end{align*} Condition 4: \\begin{align*} \\begin{cases} g_1' &= \\ \\ P_3(0,\\cdot) g_2 - P_2(0,\\cdot) g_3 \\\\ g_2' &= -P_3(0,\\cdot) g_1 + P_1(0,\\cdot) g_3 \\\\ g_3' &= \\ \\ P_2(0,\\cdot) g_1 - P_1(0,\\cdot) g_2 \\end{cases} \\qquad \\text{for } t \\in (0,T), \\end{align*} is there a solution $$u = u(x,t) \\in \\mathbb{R}^3$$ to the problem: \\begin{align*} \\begin{cases} \\partial_t u_1 = \\ \\ P_3 u_2 - P_2 u_3 &\\text{in }(0,L)\\times(0,T) \\\\ \\partial_t u_2 = -P_3 u_1 + P_1 u_3 &\\text{in }(0,L)\\times(0,T) \\\\ \\partial_t u_3 = \\ \\ P_2 u_1 - P_1 u_2 & \\text{in }(0,L)\\times(0,T) \\\\ \\partial_x u_1 = \\ \\ Q_3 u_2 - Q_2 u_3 & \\text{in }(0,L)\\times(0,T) \\\\ \\partial_x u_2 = -Q_3 u_1 + Q_1 u_3 &\\text{in }(0,L)\\times(0,T) \\\\ \\partial_x u_3 = \\ \\ Q_2 u_1 - Q_1 u_2 & \\text{in }(0,L)\\times(0,T) \\\\ u(x,0) = u_0(x) & \\text{for } x \\in (0,L)\\\\ u(0,t) = g(t) & \\text{for }t \\in (0,T). \\end{cases} \\end{align*}\n\nWhat I started. I started reasoning the following way. 
For $$x \\in (0,L)$$ fixed, a function of the form \\begin{align*} u_1(x,t) &= u_{01}(x) + \\int_0^t P_3(x,s)u_2(x,s) - P_2(x,s)u_3(x,s)ds\\\\ u_2(x,t) &= u_{02}(x) + \\int_0^t -P_3(x,s)u_1(x,s) + P_1(x,s)u_3(x,s)ds\\\\ u_3(x,t) &= u_{03}(x) + \\int_0^t P_2(x,s)u_1(x,s) - P_1(x,s)u_2(x,s)ds, \\end{align*} satisfies the initial value problem involving time derivatives, with $$u(\\cdot, 0) = u_0$$, while for $$t \\in (0,T)$$ fixed, a solution of the form \\begin{align*} \\begin{aligned} u_1(x,t) &= g_1(t) + \\int_0^x Q_3(\\xi,t)u_2(\\xi,t) - Q_2(\\xi, t)u_3(\\xi, t)d\\xi \\\\ u_2(x,t) &= g_2(t) + \\int_0^x -Q_3(\\xi, t)u_1(\\xi, t) + Q_1(\\xi, t) u_3(\\xi, t)d\\xi\\\\ u_3(x,t) &= g_3(t) + \\int_0^x Q_2(\\xi, t)u_1(\\xi, t) - Q_1(\\xi, t)u_2(\\xi, t)d\\xi. \\end{aligned} \\end{align*} satisfies the initial value problem involving space derivatives, with $$u(0, \\cdot) = g$$. I wanted to show, using the four conditions, that both expressions for the solution coincide. Using conditions 3 and 4 we can rewrite both expressions as \\begin{align*} u_1(x,t) &= u_{01}(0) + \\int_0^x Q_3(\\xi, 0)u_{02}(\\xi) - Q_2(\\xi, 0)u_{03}(\\xi)d\\xi \\\\ &\\quad + \\int_0^t P_3(0,s)g_2(s) - P_2(0,s)g_3(s)ds + \\int_0^t \\int_0^x \\partial_x (P_3 u_2 - P_2 u_3 )(\\xi, s)d\\xi ds\\\\ u_2(x,t) &= u_{02}(0) + \\int_0^x -Q_3(\\xi, 0)u_{01} + Q_1(\\xi, 0)u_{03}(\\xi)d\\xi \\\\ &\\quad + \\int_0^t -P_3(0,s)g_1(s) + P_1(0,s)g_3(s)ds + \\int_0^t \\int_0^x \\partial_x(-P_3 u_1 + P_1 u_3)(\\xi, s)ds\\\\ u_3(x,t) &= u_{03}(0) + \\int_0^x Q_2(\\xi, 0)u_{01}(\\xi) - Q_1(\\xi, 0)u_{02}(\\xi)\\\\ &\\quad + \\int_0^t P_2(0,s)g_1(s) - P_1(0,s)g_2(s)ds + \\int_0^t \\int_0^x \\partial_x (P_2 u_1 - P_1 u_2)(\\xi,s)d\\xi ds, \\end{align*} and \\begin{align*} u_1(x,t) &= g_1(0) + \\int_0^t P_3(0,s)g_2(s) - P_2(0,s)g_3(s) ds \\\\ &\\quad + \\int_0^x Q_3(\\xi,0)u_{02}(\\xi) - Q_2(\\xi, 0)u_{03}(\\xi)d\\xi + \\int_0^x \\int_0^t \\partial_t ( Q_3 u_2 - Q_2 u_3)(\\xi, s)dsd\\xi\\\\ u_2(x,t) &= g_2(0) + \\int_0^t -P_3(0,s)g_1(s) + P_1(0,s)g_3(s) ds\\\\ &\\quad + \\int_0^x -Q_3(\\xi, 0)u_{01}(\\xi) + Q_1(\\xi, 0) u_{03}(\\xi)d\\xi + \\int_0^x \\int_0^t \\partial_t (-Q_3 u_1 + Q_1 u_3)(\\xi,s)dsd\\xi\\\\ u_3(x,t) &= g_3(0) + \\int_0^t P_2(0,s)g_1(s) - P_1(0,s)g_2(s) ds \\\\ &\\quad + \\int_0^x Q_2(\\xi, 0)u_{01}(\\xi) - Q_1(\\xi, 0)u_{02}(\\xi)d\\xi + \\int_0^x \\int_0^t \\partial_t ( Q_2 u_1 - Q_1 u_2)(\\xi, s)ds d\\xi \\end{align*} respectively. It seems that both expressions would coincide if the following equalities hold: \\begin{align*} \\partial_x (P_3 u_2 - P_2 u_3 ) &= \\partial_t ( Q_3 u_2 - Q_2 u_3)\\\\ \\partial_x(-P_3 u_1 + P_1 u_3) &= \\partial_t (-Q_3 u_1 + Q_1 u_3)\\\\ \\partial_x (P_2 u_1 - P_1 u_2) &= \\partial_t ( Q_2 u_1 - Q_1 u_2). \\end{align*} To show that these equalities hold, I would like to use condition 1. However, I do not know how to argue from there.\n\nAny suggestion of how to proceed from here, any alternative way of reasoning, or explanation of why there would be no solution, would be welcome. Thank you.\n\n(This question is also on Mathematics Stack Exchange: https://math.stackexchange.com/questions/3160600/6-linear-pde-for-only-3-unknowns)" ]
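A remark that may help with the last step: Condition 1 is exactly the cross-derivative compatibility of the two systems. Differentiating $\partial_t u = -\hat{P}u$ in $x$, differentiating $\partial_x u = -\hat{Q}u$ in $t$, and equating the mixed partials gives $(\partial_x\hat{P}-\partial_t\hat{Q})u=(\hat{P}\hat{Q}-\hat{Q}\hat{P})u$; since $\hat{P}\hat{Q}-\hat{Q}\hat{P}=\widehat{P\times Q}$ and $P\times Q=\hat{P}Q$, this holds for every $u$ precisely when Condition 1 holds. The sketch below is only a toy numerical check of that consistency in the simplest case of constant $P$ and $Q$, where Condition 1 forces $P\times Q=0$ and the two flows commute, so integrating in $t$ then $x$ or in $x$ then $t$ must give the same $u(x,t)$. It assumes $\hat{P}$ is the skew-symmetric matrix built from $P$, i.e. its $(2,1)$ entry is $P_3$ (the $Q_3$ printed in the question looks like a transcription slip, judging from the expanded Condition 4). The helper names and sample values are ours; this illustrates, it does not prove, the general time-dependent case.

```python
# Toy check: constant P, Q with P x Q = 0 (Condition 1), so the two flows commute.
import numpy as np
from scipy.linalg import expm

def hat(v):
    """Skew-symmetric matrix such that hat(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

P = np.array([0.0, 0.0, 1.0])   # constant coefficient vectors,
Q = np.array([0.0, 0.0, 0.5])   # chosen parallel so that P x Q = 0
assert np.allclose(np.cross(P, Q), 0.0)   # Condition 1 for constant P, Q

u00 = np.array([1.0, 2.0, 3.0])  # u(0, 0) = u_0(0) = g(0)
x, t = 0.7, 1.3

# Integrate du/dt = -hat(P) u first, then du/dx = -hat(Q) u ...
u_t_then_x = expm(-x * hat(Q)) @ expm(-t * hat(P)) @ u00
# ... and in the other order.
u_x_then_t = expm(-t * hat(P)) @ expm(-x * hat(Q)) @ u00

print(u_t_then_x, u_x_then_t)    # identical up to rounding
assert np.allclose(u_t_then_x, u_x_then_t)
```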
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5123088,"math_prob":1.0000064,"size":6094,"snap":"2019-35-2019-39","text_gpt3_token_len":2905,"char_repetition_ratio":0.17405583,"word_repetition_ratio":0.15502183,"special_character_ratio":0.50639975,"punctuation_ratio":0.13763441,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999615,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-25T03:57:05Z\",\"WARC-Record-ID\":\"<urn:uuid:77cc0c6a-3cef-458a-a2ea-50d358319c79>\",\"Content-Length\":\"114618\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:21b5901d-a48d-45d6-b9a2-ea1c4a7f7148>\",\"WARC-Concurrent-To\":\"<urn:uuid:4ac4cd53-fbad-4f01-be8a-bb895c21c17e>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/326280/6-linear-pde-for-only-3-unknowns\",\"WARC-Payload-Digest\":\"sha1:ASQFFX6RXLH33MJZQBFWDJW3MEWNCXQS\",\"WARC-Block-Digest\":\"sha1:UXPC4LJJ3TRWIJ3JBIMJQPEHP4NPLIVE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027322170.99_warc_CC-MAIN-20190825021120-20190825043120-00103.warc.gz\"}"}
http://www.fact-index.com/r/re/reflection_coefficient.html
[ "Main Page | See live article | Alphabetical index\n\n# Reflection coefficient\n\nIn telecommunication, the term reflection coefficient (RC) has the following meanings:\n\n1. The ratio of the amplitude of the reflected wave and the amplitude of the incident wave.\n\n2. At a discontinuity in a transmission line, the complex ratio of the electric field strength of the reflected wave to that of the incident wave.\n\nNote 1: The reflection coefficient may also be established using other field or circuit quantities.\n\nNote 2: The reflection coefficient is given by the equations below, where Z 1 is the impedance toward the source, Z 2 is the impedance toward the load, the vertical bars designate absolute magnitude, and SWR is the standing wave ratio:\n\nSource: from Federal Standard 1037C and from MIL-STD-188" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8900114,"math_prob":0.92422706,"size":574,"snap":"2022-05-2022-21","text_gpt3_token_len":118,"char_repetition_ratio":0.15789473,"word_repetition_ratio":0.021052632,"special_character_ratio":0.20383275,"punctuation_ratio":0.116071425,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95995635,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-28T17:31:41Z\",\"WARC-Record-ID\":\"<urn:uuid:18805c8a-108e-4163-acaf-d94f69199daa>\",\"Content-Length\":\"4199\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8757b876-44f7-48b8-b991-036caf48576a>\",\"WARC-Concurrent-To\":\"<urn:uuid:ada29835-8676-4a43-8631-35e38c4c5d63>\",\"WARC-IP-Address\":\"23.227.169.68\",\"WARC-Target-URI\":\"http://www.fact-index.com/r/re/reflection_coefficient.html\",\"WARC-Payload-Digest\":\"sha1:6YNU6X6KWGXWPKZEVRPDC43NNNMMRF2W\",\"WARC-Block-Digest\":\"sha1:2DSCMYDSX5XXY7SDODMRWAFMYTL6AB6D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652663016949.77_warc_CC-MAIN-20220528154416-20220528184416-00252.warc.gz\"}"}
https://www.percentagecal.com/answer/560-is-what-percent-of-248000
[ "#### Solution for 560 is what percent of 248000:\n\n560:248000*100 =\n\n(560*100):248000 =\n\n56000:248000 = 0.23\n\nNow we have: 560 is what percent of 248000 = 0.23\n\nQuestion: 560 is what percent of 248000?\n\nPercentage solution with steps:\n\nStep 1: We make the assumption that 248000 is 100% since it is our output value.\n\nStep 2: We next represent the value we seek with {x}.\n\nStep 3: From step 1, it follows that {100\\%}={248000}.\n\nStep 4: In the same vein, {x\\%}={560}.\n\nStep 5: This gives us a pair of simple equations:\n\n{100\\%}={248000}(1).\n\n{x\\%}={560}(2).\n\nStep 6: By simply dividing equation 1 by equation 2 and taking note of the fact that both the LHS\n(left hand side) of both equations have the same unit (%); we have\n\n\\frac{100\\%}{x\\%}=\\frac{248000}{560}\n\nStep 7: Taking the inverse (or reciprocal) of both sides yields\n\n\\frac{x\\%}{100\\%}=\\frac{560}{248000}\n\n\\Rightarrow{x} = {0.23\\%}\n\nTherefore, {560} is {0.23\\%} of {248000}.\n\n#### Solution for 248000 is what percent of 560:\n\n248000:560*100 =\n\n(248000*100):560 =\n\n24800000:560 = 44285.71\n\nNow we have: 248000 is what percent of 560 = 44285.71\n\nQuestion: 248000 is what percent of 560?\n\nPercentage solution with steps:\n\nStep 1: We make the assumption that 560 is 100% since it is our output value.\n\nStep 2: We next represent the value we seek with {x}.\n\nStep 3: From step 1, it follows that {100\\%}={560}.\n\nStep 4: In the same vein, {x\\%}={248000}.\n\nStep 5: This gives us a pair of simple equations:\n\n{100\\%}={560}(1).\n\n{x\\%}={248000}(2).\n\nStep 6: By simply dividing equation 1 by equation 2 and taking note of the fact that both the LHS\n(left hand side) of both equations have the same unit (%); we have\n\n\\frac{100\\%}{x\\%}=\\frac{560}{248000}\n\nStep 7: Taking the inverse (or reciprocal) of both sides yields\n\n\\frac{x\\%}{100\\%}=\\frac{248000}{560}\n\n\\Rightarrow{x} = {44285.71\\%}\n\nTherefore, {248000} is {44285.71\\%} of {560}.\n\nCalculation Samples" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81854653,"math_prob":0.9996181,"size":2288,"snap":"2020-34-2020-40","text_gpt3_token_len":775,"char_repetition_ratio":0.19264448,"word_repetition_ratio":0.41315788,"special_character_ratio":0.4763986,"punctuation_ratio":0.15261044,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999949,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-04T20:37:52Z\",\"WARC-Record-ID\":\"<urn:uuid:d25cf47d-1a9e-412c-b907-0790db8f2460>\",\"Content-Length\":\"10511\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:127ead53-40a6-4e92-90de-b840635dd10f>\",\"WARC-Concurrent-To\":\"<urn:uuid:4c0d3895-03b1-4acf-b5f4-ec175b20832e>\",\"WARC-IP-Address\":\"217.23.5.136\",\"WARC-Target-URI\":\"https://www.percentagecal.com/answer/560-is-what-percent-of-248000\",\"WARC-Payload-Digest\":\"sha1:G42KBJDMDFRQAZ35ITJGTIREODVVKABI\",\"WARC-Block-Digest\":\"sha1:G473BUVBDR5UCZ2FCAZLOCCXH3OPZGBQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735882.86_warc_CC-MAIN-20200804191142-20200804221142-00314.warc.gz\"}"}
https://www.hindawi.com/journals/aa/2020/7421396/
[ "/ / Article\nSpecial Issue\n\n## Recent Trends in Celestial Mechanics\n\nView this Special Issue\n\nResearch Article | Open Access\n\nVolume 2020 |Article ID 7421396 | https://doi.org/10.1155/2020/7421396\n\nA. Mostafa, M. H. El Dewaik, \"Balanced Low Earth Satellite Orbits\", Advances in Astronomy, vol. 2020, Article ID 7421396, 12 pages, 2020. https://doi.org/10.1155/2020/7421396\n\n# Balanced Low Earth Satellite Orbits\n\nGuest Editor: Euaggelos E. Zotos\nAccepted10 Jul 2020\nPublished19 Aug 2020\n\n#### Abstract\n\nThe present work aims at constructing an atlas of the balanced Earth satellite orbits with respect to the secular and long periodic effects of Earth oblateness with the harmonics of the geopotential retained up to the 4th zonal harmonic. The variations of the elements are averaged over the fast and medium angles, thus retaining only the secular and long periodic terms. The models obtained cover the values of the semi-major axis from 1.1 to 2 Earth’s radii, although this is applicable only for 1.1 to 1.3 Earth’s radii due to the radiation belts. The atlas obtained is useful for different purposes, with those having the semi-major axis in this range particularly for remote sensing and meteorology.\n\n#### 1. Introduction\n\nThe problem of the motion of an artificial satellite of the Earth was not given serious attention until 1957. At this time, little was known about the magnitudes of the coefficients of the tesseral and sectorial harmonics in the Earth’s gravitational potential. It was pretty well known at this time (1957–1960) that the contributions of the 3rd, 4th, and 5th zonal harmonics were of order higher than the contribution of the 2nd zonal harmonic, but the values of the coefficients C30, C40, and C50 were not very well established. No reliable information was available for the tesseral or the sectorial coefficients except that the observations of orbiting satellites indicated that these coefficients must be small, certainly no more than the first order with respect to C20.\n\nFor low Earth orbits within an altitude less than 480 km, if the satellite attitude is stabilized, or at least a mean projected area could be estimated, the perturbative effects of atmospheric drag should be included. Unfortunately, the literature is still void of even a mention of this topic of balancing this kind of very low Earth orbits. The reason may be the present increased interest in space communications and broadcasting, which still make use of the geostationary orbits that lie beyond the effects of atmospheric drag, though they still suffer the effects of drift solar radiation pressure.\n\nWith the advance of the space age, it became clear that most space applications require fixing, as strictly as possible, the areas covered by the satellite or the constellation of satellites. In turn, fixing the coverage regions requires fixed nodes and fixed apsidal lines. This in turn leads to the search for orbits satisfying these requirements. The families of orbits satisfying such conditions are called “frozen orbits” . Clearly, the design of such orbits includes the effects of the perturbing influences that affect the motion of the satellite. As the present work is interested in low Earth orbits, only the effect of Earth oblateness is taken into concern. 
These have been extensively treated in the literature .\n\nThis paper is aiming at constructing an atlas of the balanced low Earth satellite orbits, which fall in the range from 600 km to 2000 km above sea surface, in the sense that the variations of the elements are averaged over the fast angle to keep only long periodic and secular variations that affect the orbit accumulatively with time. In this paper, a model is given for the averaged effects (over the mean anomaly) of Earth oblateness. Then, the Lagrange planetary equations for perturbations of the elements are investigated to get sets of orbital values at which the variations of the elements can be cancelled simultaneously.\n\n#### 2. Earth Potential\n\nThe actual shape of the Earth is that of an eggplant. The center of mass does not lie on the spin axis, and neither the meridian nor the latitudinal contours are circles. The net result of this irregular shape is to produce a variation in the gravitational acceleration to that predicted using a point mass distribution. This variation reaches its maximum value at latitude 45 deg and approaches zero at latitudes 0 and 90 deg.\n\nThe motion of a particle around the Earth can be visualized best by resolving it into individual motions along the meridian and the latitudinal contours. The motion around the meridian can be thought of as consisting of different periodic motions called “zonal harmonics.” Similarly, the motion along a latitudinal contour can be visualized as consisting of different periodic motions called “tesseral harmonics.” The zonal harmonics describe the deviations of a meridian from a great circle, while the tesseral harmonics describe the deviations of a latitudinal contour from a circle.\n\nAt points exterior to the Earth, the mass density is zero, thus, at external points the gravitational potential satisfieswhere V is a scalar function representing the potential. Also, the gravitational potential of the Earth must vanish as we recede to infinitely great distances. With these conditions on the above equation, the potential V at external points can be represented in the following form :\n\nThis expression of the potential is called “Venti potential,” and it was adopted by the IAU (International Astronomical Union) in 1961. The terms arising in the above equation are Cnm and Snm are harmonic coefficients (they are bounded as is always the case in physical problems), R is the equatorial radius of the Earth, is the Earth’s gravitational constant, G is the universal constant of gravity, M is the mass of the Earth, and (r, α, δ) are the geocentric coordinates of the satellite (Figure 1, ) with α measured east of Greenwich, and represents the associated Legendre polynomials.The terms with m = 0 are called “zonal harmonics.”The terms with 0 < m < n are called “tesseral harmonics.”The terms with m = n are called “sectorial harmonics.”\n\nThe case of axial symmetry is expressed by taking m = 0, while if equatorial symmetry is assumed, we consider only even harmonics since P2n+1 (−x) = −P2n+1 (x). Also, the coefficients C21 and S21 are vanishingly small. 
Further if the origin is taken at the center of mass, the coefficients C10, C11, and S11 will be equal to zero.\n\nConsidering axial symmetry, with origin at the center of mass, we can writewhere Jn = −Cn0.\n\nTaking terms up to j4, we can write V in the following form:where\n\nIt is a purely geometrical transformation to express the potential function V(r, δ), given by the above equations, as a function of the Keplerian orbital elements a, e, i, Ω, ω, and I in their usual meanings (Figure 2, ), where a and e are the semi-major axis and the eccentricity of the orbit, respectively, i is the inclination of the orbit to the Earth’s equatorial plane, Ω and ω describes the position of the orbit in space where Ω is the longitude of the ascending node and ω is the argument of perigee, and finally, l is the mean anomaly to describe the position of the satellite with respect to the orbit.\n\nThen, V (a, e, i, Ω, ω, I) is in a form suitable to use in Lagrange’s planetary equations, and in canonical perturbations methods through the relation. From the spherical trigonometry of the celestial sphere, we havewhere f is the true anomaly, ω is the argument of perigee, and Si= sin i, Ci= cos i.\n\nSubstituting for δ, in Pn (sin δ), we get\n\nThus, we get V2, V3, and V4 as functions of the orbital elements.\n\nWe now proceed to evaluate the effects of Earth oblateness, considering the geopotential up to the zonal harmonic J4.\n\nIn the present solution, we consider only the secular and long periodic terms, averaging over the mean anomaly l.\n\n##### 2.1. The Disturbing Function\n\nThe disturbing function is defined as follows:where V2, V3, and V4 are the 2nd, 3rd, and 4th terms of the geopotential, namelywhere , , and is the true anomaly.\n\nSince terms depending only on the fast variable l will not affect the orbit in an accumulating way with time, we average the perturbing function R over the mean anomaly with its period 2π. The average function is defined by\n\nAs the perturbing function R is a function of the true anomaly f not the mean anomaly l, we use the relationwhere both angles have the same period and the same end points 0 and 2π, and therefore the average function will be given by\n\nApplying the required integrals, we get the averaged disturbing function where\n\n##### 2.2. Lagrange Equations for the Averaged Variations of the Elements\n\nThe Lagrange planetary equations for the variations of the elements for a disturbing potential R are as follows :where n is the mean motion given by .\n\nSubstituting for the averaged disturbing function due to Earth oblateness, the Lagrange equations becomewhere the equation for l is neglected because we concentrate on balancing the orbit position not the satellite motion in the orbit.\n\n##### 2.3. Variations of the Elements due to Earth Oblateness\n\nSubstituting for the required derivatives in equations (16) to (20) yields\n\nDefining\n\nWe getor\n\nDefining\n\nWe get\n\nDefining as in equation (28), we get equation (29):\n\nEquations (22)–(29) give the average effects of Earth oblateness including the zonal harmonics of the geopotential up to J4 on the Keplerian elements of the satellite orbit.\n\n#### 3. Balanced Low Earth Satellite Orbits\n\nIn what follows, we try to find orbits that are balanced in the sense that the averaged (over the fast variable l) variations of the orbit elements are set equal to zero. 
In equation (23), we put it equal to zero and get a relation between the argument of perigee ω and the inclination i, while treating the eccentricity e and the semi-major axis a as parameters. This will give a range of values for ω and i at different values of e and a, which are all give balanced orbits with respect to both the eccentricity e and the inclination i. The same is done with equations (26) and (29), while putting  = 0 and  = 0.\n\nThe applicable ranges for this model of the semi-major axis a are , where the range 1.4R–2R is avoided due to the predominance of the radiation belts at these levels, to avoid the damages of the equipment that it may produce, besides its fatal effects on human life (for inhabited spacecrafts). The values for the eccentricity e are taken between 0.01 (almost circular orbit) and 0.5.\n\n##### 3.1. Orbits with Fixed Eccentricity and Inclination\n\nBy equating the variation of e by zero, we get\n\nThis implies\n\nSo either orwhere\n\nEquations (32) and (33) give the family of low orbits that have both the eccentricity and the inclination fixed.\n\nThe condition for the existence of such orbits is clearly thatwhich is guaranteed by\n\nWe put it as a condition on the eccentricity i only when\n\n##### 3.2. Orbits with Fixed Node\n\nThe families of orbits for which dΩ/dt = 0 (i.e with fixed nodes) are obtained from\n\nThis implies\n\nThis can be arranged as a second order equation for sin(ω), giving a family of orbits with fixed argument of perigee for different values of ω as a function of the inclination i, the semi-major axis a, and the eccentricity e.\n\nWhen solving for sin(ω), we get the family of orbits for which the longitude of the node is balanced,where\n\nThe condition for having the orbit if real solution exists is again that , which gives\n\n##### 3.3. Orbits with Fixed Perigee\n\nFor the argument of perigee to balance, we solve .We substitute from equation (26) into equation (29) then expand cos(2ω) and collect terms with respect to sin(ω). We get\n\nThus for , we getwhere\n\nEquations (44)–(47) give the relation between sin(ω) and the inclination i, the semi-major axis a, and the eccentricity e, which gives the family of orbits that balance the argument of perigee ω, subject of course to the restriction that is a real value between −1 and 1.\n\n#### 4. Numerical Results\n\nIn this section, numerical results and graphs are obtained for the case of seasat a = 7100 km by putting ω as a function of e and i from equations (32) and (33). The curves are plotted within the possible range given by condition (36) to give curves of balanced e and i. The curves are against the eccentricity e in the range [0.01, 0.5]. The numerical values involved are J2 = 0.001082645, J3 = −0.000002546, J4 = −0.000001649, R = 6378.165 km, α = 7100 m, and μ = 398600.5 km3sec−2.\n\nThe condition (36) gives the upper and lower bounds of the function F(i) as a function of e, and since the function is increasing with e and has no critical points in the interval [0.01, 0.5], then the minimum value occurs at e= 0.01 and the maximum at e= 0.5, which gives . The graphs are plotted for different values of F(i), which corresponds to specific values of i found by solving the equation resulting from setting F(i) equal to the required values. After that we plot  = 0 and  = 0 simultaneously for the same values of F(i) at each specific i-value, to find the orbit values at which we have nearest values for  = 0 and  = 0. 
In the graphs of and , the relation (32) was kept to ensure that e and i are already balanced.\n\nFive selected values of F(i) are chosen: F(i) = 0.01, 0.5, 0.10, 0.15, and 0.20. Negative values will give the same results with negative sign since F(i) is odd with respect to i. Also for each value, the solution of F(i) = x, with x equals one of the above values we get four real values of i on the form: i1, 180 − i1, i2, and −180 − i2, where i1 and i2 are near the critical inclination one of them is positive and the other is negative. Table 1 gives the values of i corresponding to the selected values of F(i).\n\n F(i) i1 180 − i1 i2 −180 − i2 0.01 63.63 116.37 −63.22 −116.78 0.05 64.27 115.73 −62.11 −117.89 0.10 64.84 115.16 −59.76 −120.24 0.15 65.25 114.75 −55.14 −124.86 0.20 65.56 114.44 −47.16 −132.84\n\nWe note that as F(i) increases, i gets away from the critical inclination, and the curve gets shorter indicating less stability of ω as expected.\n\nThe graphs are plotted for each value of F(i) first for the balanced values of e and i, then for the corresponding four values of i, four graphs are plotted for and to find the nearest values of zero variation for both elements.\n\nFigures 312 show the possibility of balancing ω with e and i, while Ω will have a variation of order 10−6/sec or it must be balanced alone at i = 90 deg (as shown in equation (26)), according to the orbit kind. Figures 3, 5, 7, 9, and 11 show the possible values of ω(e) that balance e and i, while Figures 4, 6, 8, 10, and 12 show the curves of and at the values of i at which .\n\n#### 5. Conclusion\n\nLet balanced orbits be defined as those for which the orbital elements are set equal to zero under the effect of secular and long periodic perturbations. In this work, the effect of Earth oblateness is the considered perturbing force because of dealing with low Earth orbits. The above analysis then shows that such an orbit will be balanced within a reliable tolerance only for few weeks since we are forced to accept the motion of either the node or the perigee by about 10−6 deg/sec. The reason is that under the influence of the Earth oblateness, (exactly) for , while only near the critical inclination deg. Hence, the best procedure is to design a satellite constellation for which the nodal shifts due to the perturbative effects and Earth rotation are modeled to yield continuous coverage. The perigees are either fixed or arranged to realize that the perigee (or the apogee) be overhead the coverage region (regions) in due times. This may require near commensurability with the admitted nodal periods.\n\n#### Data Availability\n\nNo data were used to support this study.\n\n#### Conflicts of Interest\n\nThe authors declare that they have no conflicts of interest.\n\n#### Acknowledgments\n\nThis work was funded by the University of Jeddah, Saudi Arabia, under grant no. UJ-26-18-DR. Thus, the author therefore acknowledges with thanks the university technical and financial support.\n\n1. N. Delsate, P. Robutel, A. Lemaître, and T. Carletti, “Frozen orbits at high eccentricity and inclination: application to mercury orbiter,” Celestial Mechanics and Dynamical Astronomy, vol. 108, no. 3, pp. 275–300, 2010. View at: Publisher Site | Google Scholar\n2. E. Condoleo, M. Cinelli, E. Ortore, and C. Circi, “Stable orbits for lunar landing assistance,” Advances in Space Research, vol. 60, no. 7, pp. 1404–1412, 2017. View at: Publisher Site | Google Scholar\n3. A. Elipe and M. 
Lara, “Frozen orbits about the moon,” Journal of Guidance, Control, and Dynamics, vol. 26, no. 2, pp. 238–243, 2003. View at: Publisher Site | Google Scholar\n4. C. Circi, E. Condoleo, and E. Ortore, “A vectorial approach to determine frozen orbital conditions,” Celestial Mechanics and Dynamical Astronomy, vol. 128, no. 2-3, pp. 361–382, 2017. View at: Publisher Site | Google Scholar\n5. E. Condoleo, M. Cinelli, E. Ortore, and C. Circi, “Frozen orbits with equatorial perturbing bodies: the case of Ganymede, Callisto, and Titan,” Journal of Guidance, Control, and Dynamics, vol. 39, no. 10, pp. 2264–2272, 2016. View at: Publisher Site | Google Scholar\n6. V. Kudielka, “Balanced earth satellite orbits,” Celestial Mechanics & Dynamical Astronomy, vol. 60, no. 4, pp. 455–470, 1994. View at: Publisher Site | Google Scholar\n7. S. L. Coffey, A. Deprit, and E. Deprit, “Frozen orbits for satellites close to an earth-like planet,” Celestial Mechanics & Dynamical Astronomy, vol. 59, no. 1, pp. 37–72, 1994. View at: Publisher Site | Google Scholar\n8. M. Lara, A. Deprit, and A. Elipe, “Numerical continuation of families of frozen orbits in the zonal problem of artificial satellite theory,” Celestial Mechanics & Dynamical Astronomy, vol. 62, no. 2, pp. 167–181, 1995. View at: Publisher Site | Google Scholar\n9. D. Brouwer, “Solution of the problem of artificial satellite theory without drag,” The Astronomical Journal, vol. 64, p. 378, 1959. View at: Publisher Site | Google Scholar\n10. Y. Kozai, “The motion of a close earth satellite,” The Astronomical Journal, vol. 64, p. 367, 1959. View at: Publisher Site | Google Scholar\n11. P. M. Fitzpatrick, Principles of Celestial Mechanics, Academic Press, Cambridge, MA, USA, 1970.\n12. H. D. Curtis, Orbital Mechanics for Engineering Students, Elsevier, Amsterdam, Netherlands, 2005.\n13. D. Brower and G. M. Clemence, Methods of Celestial Mechanics, Academic Press, Cambridge, MA, USA, 1961.\n\n#### More related articles\n\nArticle of the Year Award: Outstanding research contributions of 2020, as selected by our Chief Editors. Read the winning articles." ]
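The paper retains zonal harmonics through J4, but the qualitative behaviour it relies on (node regression for every inclination except 90 degrees, and the perigee drift vanishing near the critical inclination of about 63.4 degrees) is already visible in the standard first-order J2-only secular rates, as given for instance in Curtis (reference 12 above). The sketch below uses those simpler J2 formulas with the numerical constants quoted in Section 4; it is not the paper's full J2 through J4 model, and the function names are ours.

```python
import numpy as np

# Constants as used in Section 4 of the paper
J2 = 0.001082645
R  = 6378.165        # km, Earth equatorial radius
mu = 398600.5        # km^3 / s^2
a  = 7100.0          # km (the Seasat-like case)
e  = 0.01

def j2_secular_rates(a, e, inc_deg):
    """First-order J2 secular rates (dOmega/dt, domega/dt) in deg/day."""
    i = np.radians(inc_deg)
    n = np.sqrt(mu / a**3)                 # mean motion, rad/s
    p = a * (1.0 - e**2)                   # semi-latus rectum
    k = 1.5 * n * J2 * (R / p)**2
    node_rate    = -k * np.cos(i)                          # rad/s
    perigee_rate = 0.5 * k * (5.0 * np.cos(i)**2 - 1.0)    # rad/s
    to_deg_per_day = np.degrees(1.0) * 86400.0
    return node_rate * to_deg_per_day, perigee_rate * to_deg_per_day

for inc in (28.5, 63.43, 90.0, 98.6):
    dnode, dperi = j2_secular_rates(a, e, inc)
    print(f"i = {inc:6.2f} deg : dOmega/dt = {dnode:+7.3f} deg/day, "
          f"domega/dt = {dperi:+7.3f} deg/day")

# Critical inclination, where 5 cos^2(i) = 1
print("critical inclination:", np.degrees(np.arccos(1 / np.sqrt(5))), "deg")
```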
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8887233,"math_prob":0.95736176,"size":18433,"snap":"2021-43-2021-49","text_gpt3_token_len":4493,"char_repetition_ratio":0.14417495,"word_repetition_ratio":0.05844981,"special_character_ratio":0.24222861,"punctuation_ratio":0.13833243,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9905909,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-18T16:08:51Z\",\"WARC-Record-ID\":\"<urn:uuid:f8a78c84-fe19-4ad2-b540-1ed645696c98>\",\"Content-Length\":\"1049303\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bc4a4bad-7858-436f-8f9c-a231ecd36fbd>\",\"WARC-Concurrent-To\":\"<urn:uuid:e4820648-ffb5-461b-8831-ad7d5431cf6b>\",\"WARC-IP-Address\":\"13.32.208.94\",\"WARC-Target-URI\":\"https://www.hindawi.com/journals/aa/2020/7421396/\",\"WARC-Payload-Digest\":\"sha1:2Y6UY5PLIIZ5VZSLU5DXTKTLUMAOVXSG\",\"WARC-Block-Digest\":\"sha1:MHO5JKRTOWICRVZRC46MEMZQSYWNTIBQ\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585204.68_warc_CC-MAIN-20211018155442-20211018185442-00427.warc.gz\"}"}
https://benidormclubdeportivo.org/what-are-the-factor-pairs-of-60/
[ "Factors of 60 room integers that deserve to be divided evenly right into 60. There are 12 factors of 60 of which 60 chin is the best factor and also its prime determinants are 2, 3 and 5 The amount of all components of 60 is 168.\n\nYou are watching: What are the factor pairs of 60\n\nFactors that 60: 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30 and 60Negative determinants of 60: -1, -2, -3, -4, -5, -6, -10, -12, -15, -20, -30 and -60Prime factors of 60: 2, 3, 5Prime factorization of 60: 2 × 2 × 3 × 5 = 22 × 3 × 5Sum of components of 60: 168\n 1 What are the determinants of 60? 2 How come Calculate factors of 60? 3 Factors the 60 by element Factorization 4 Factors of 60 in Pairs 5 FAQs on factors of 60\n\n## What are the components of 60?\n\nThe factors of 60 room all the numbers that provide the value 60 as soon as multiplied. Together 60 is an also number, it has much more than one factor. To understand why it is composite, let\"s remind the definition of a composite number. A number is stated to be composite if the has more than two factors.\n\n60 ÷ 2 = 335 and 12 are factors of 60.Factors that 60 are 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and 60.\n\n## How to Calculate Factors the 60?\n\nTo calculate the factors of 60, we can use different methods like prime factorization and also the division method. In element factorization, us express 60 as a product the its prime factors and in the department method, we view what numbers divide 60 exactly.\n\nHence, the components of 60 are, 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30 and 60.\n\nExplore components using illustrations and interactive examples.\n\n## Factors of 60 by prime Factorization\n\nPrime factorization is the process of creating a number as a product the its prime factors. Permit us learn exactly how to uncover the components of a number making use of prime factorization. We begin by trying to find the smallest prime number which can divide 60 leaving the remainder as 0. As 60 is an even number, 2 is among its factors.\n\nThe number 60 is split by the the smallest prime number i beg your pardon divides 60 exactly, i.e. It pipeline a remainder 0. The quotient is then split by the smallest or second smallest element number and also the procedure continues till the quotient i do not care indivisible. Let us divide 60 by the prime number 2⇒ 60 ÷ 2 = 30Now we should divide the quotient 30 by the next least prime number. Since 30 is an even number, it is divisible through 2⇒ 30 ÷ 2 =1515 will be divisible through 3⇒ 15 ÷ 3= 55 is a element number and hence not additional divisible.", null, "Prime factorization of 60 = 2 × 2 × 3 × 5Therefore, the prime factors of 60 are 2, 3, and also 5.\n\n### Prime administer by variable Tree\n\nThe other method of element factorization is acquisition 60 as the root and creating branches by splitting it by the the smallest prime number. This an approach is comparable to the division method above. The distinction lies in presenting the factorization. The figure listed below shows the variable tree the 60.", null, "Collect every the number in circles and also write 60 as the product of this numbers.Hence, 60 = 2 × 2 × 3 × 5\n\nPair factors are the factors of a number given in pairs, which as soon as multiplied together, offer that initial number.6 and also 10 both are determinants of 60. 
For example, 6 × 10 = 60.The pair components of 60 would be the two numbers which, when multiplied together, an outcome in 60.\n\nThe complying with table represents the pair determinants of 60:\n\n Product Pair factors 1 × 60 (1, 60) 2 × 30 (2, 30) 3 × 20 (3, 20) 4 × 15 (4, 15) 5 × 12 (5, 12) 6 × 10 (6, 10)\n\nWe have the right to have an unfavorable factors likewise for a provided number. The negative pair factors of 60 are (-1, -60), (-2, -30), (-3, -20), (-4, -15), (-5, -12) and also (-6, -10).\n\nImportant Notes:\n\nThe factors of 60 are 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and 60.Every also number will have 2 as among its factors.60 is a composite number as it has much more factors various other than 1 and itself.\n\nChallenging Questions:\n\nWhat are the common factors of 60 and also 600?Are 2/3 and 4/5 determinants of 600?\n\nExample 1: Fifteen friends have actually 60 pizzas from a pizza shop and distributed among themselves equally. How many pizzaz would each among them get?\n\nSolution:\n\n15 pizzas are to be divided amongst friends equally.This implies we must divide 60 by 15.60 ÷ 15 = 4Hence, each girlfriend will obtain 4 pizzas.\n\nExample 2: Sia wants to know if over there are any common components of 60 and 80. Help her in recognize the answer to it.\n\nSolution:\n\nFactors that 60 = 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and also 60Factors the 80 = 1, 2, 4, 5, 8, 10, 16, 20, 40, 80The typical factors the 60 and 80 = 1, 2, 4, 5, 10, and 20Hence, the typical factors that 60 and 80 are 1, 2, 4, 5, 10, and also 20.\n\nExample 3: uncover the product of every the prime factors of 60.\n\nSolution:\n\nSince, the prime components of 60 are 2, 3, 5. Therefore, the product the prime components = 2 × 3 × 5 = 30.", null, "## FAQs on components of 60\n\n### What space the factors of 60?\n\nThe determinants of 60 are 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60 and also its negative factors space -1, -2, -3, -4, -5, -6, -10, -12, -15, -20, -30, -60.\n\n### What is the Greatest common Factor the 60 and also 21?\n\nThe factors of 60 and 21 are 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60 and 1, 3, 7, 21 respectively.Common determinants of 60 and 21 space <1, 3>.Hence, the GCF the 60 and 21 is 3.\n\n### What Numbers space the Prime components of 60?\n\nThe prime components of 60 are 2, 3, 5.\n\nSee more: If You Have Faith The Size Of A Mustard Seed Kjv, Bible Verses About Mustard Seed\n\n### What space the usual Factors of 60 and 38?\n\nSince, the determinants of 60 room 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60 and also the determinants of 38 space 1, 2, 19, 38.Hence, <1, 2> space the common factors of 60 and also 38.\n\n### What is the sum of all the determinants of 60?\n\nSum that all components of 60 = (22 + 1 - 1)/(2 - 1) × (31 + 1 - 1)/(3 - 1) × (51 + 1 - 1)/(5 - 1) = 168" ]
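A short sanity check of the numbers quoted on this page; the helper functions are ours:

```python
def divisors(n):
    """All positive divisors of n, in increasing order."""
    return [d for d in range(1, n + 1) if n % d == 0]

def prime_factorization(n):
    """Prime factors of n with multiplicity, by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

divs = divisors(60)
print(divs)                       # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
print(len(divs), sum(divs))       # 12 factors, sum 168
print(prime_factorization(60))    # [2, 2, 3, 5]
print([(d, 60 // d) for d in divs if d * d <= 60])   # pairs (1, 60) ... (6, 10)
```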
[ null, "https://benidormclubdeportivo.org/what-are-the-factor-pairs-of-60/imager_1_2771_700.jpg", null, "https://benidormclubdeportivo.org/what-are-the-factor-pairs-of-60/imager_2_2771_700.jpg", null, "https://benidormclubdeportivo.org/what-are-the-factor-pairs-of-60/imager_3_2771_700.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92159474,"math_prob":0.99523723,"size":5746,"snap":"2022-27-2022-33","text_gpt3_token_len":1817,"char_repetition_ratio":0.19157088,"word_repetition_ratio":0.094858155,"special_character_ratio":0.35398537,"punctuation_ratio":0.18020679,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99874145,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-05T11:47:37Z\",\"WARC-Record-ID\":\"<urn:uuid:6fd40328-72f5-4a69-910e-04859607b715>\",\"Content-Length\":\"16547\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d0e266c3-777a-4a6e-be77-fabce0cc5641>\",\"WARC-Concurrent-To\":\"<urn:uuid:db1261e3-7824-484e-b1a7-44ab86918661>\",\"WARC-IP-Address\":\"172.67.177.16\",\"WARC-Target-URI\":\"https://benidormclubdeportivo.org/what-are-the-factor-pairs-of-60/\",\"WARC-Payload-Digest\":\"sha1:OAFNX55P2IOJE6GSEUI447KPIH5RULNU\",\"WARC-Block-Digest\":\"sha1:JOCNXX73TZQD5DUADTMSAXF7LCNQEHCG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104576719.83_warc_CC-MAIN-20220705113756-20220705143756-00086.warc.gz\"}"}
https://wealthofunderstanding.com/portfolio/simple-vs-compound-interest/
[ "", null, "## Simple vs Compound Interest", null, "Recently we discussed that not all loans are created equal because of the way the lender structures your payments. But loans can also differ in the way they charge interest. And this is bigger than loans–this is also a major key in how you earn interest.\n\nThe big difference is whether the loan/investment uses Simple or Compound interest.\n\nSimple Interest Examples:\n– Savings Bonds\n\\$100 invested at 4% for 10 years.\n\\$100 x .04 x 10 = \\$40 interest earned.\n\n– Line of Credit\n\\$100 borrowed at 5% for 5 years.\n\\$100 x .05 x 5 = \\$25 interest owed.\n\nWith Simple Interest, the interest builds in a straight line.\n\nCompound Interest Examples:\n– Stock Market\n\\$100 invested at 4% for 10 years.\n\\$100 x (1 + .04)10 – \\$100 = \\$48 interest earned\n\n-Home Mortgage\n\\$100 borrowed at 5% for 5 years.\n\\$100 x (1 + .05)5 – \\$100 = \\$27 interest owed.\n\nWith Compound Interest, the interest builds in a curved line.\n\nWe used small examples to keep the math easy, but let’s run it again with some larger examples:\n\nSavings Bond: \\$10,000 invested at 4% for 30 years\n\\$10,000 x .04 x 30 = \\$12,000 interest earned.\n\nLine of Credit: \\$100,000 borrowed at 5% for 30 years\n\\$100,000 x .05 x 30 = \\$150,000 interest owed.\n\nStock Market: \\$10,000 invested at 4% for 30 years\n\\$10,000 x (1 + .04)30 – \\$10,000 = \\$22,433 interest earned.\n\nHome Mortgage: \\$100,000 borrowed at 5% for 30 years\n\\$100,000 x (1 + .05)30 – \\$100,000 = \\$332,194 interest owed.\n\nThe act of compounding has the power to double the amount of interest generated by a loan/investment over the course of 30 years." ]
[ null, "https://www.facebook.com/tr", null, "https://149474742.v2.pressablecdn.com/wp-content/uploads/2020/10/SimpleVsCompound_1x1.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9276663,"math_prob":0.9877534,"size":1524,"snap":"2023-40-2023-50","text_gpt3_token_len":434,"char_repetition_ratio":0.17631578,"word_repetition_ratio":0.16546762,"special_character_ratio":0.3753281,"punctuation_ratio":0.14925373,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9529883,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T22:01:13Z\",\"WARC-Record-ID\":\"<urn:uuid:0f1c35ae-4533-4bec-bed4-00866acd2517>\",\"Content-Length\":\"53775\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:90420167-9070-45a3-bae2-4f26c6313293>\",\"WARC-Concurrent-To\":\"<urn:uuid:0cbea702-2b9a-4bca-932e-97567c4c6bbf>\",\"WARC-IP-Address\":\"199.16.172.46\",\"WARC-Target-URI\":\"https://wealthofunderstanding.com/portfolio/simple-vs-compound-interest/\",\"WARC-Payload-Digest\":\"sha1:PW7M6OL3JGRRMLEVGY4V55LNPHFOSCMH\",\"WARC-Block-Digest\":\"sha1:B3WJVYQX4QM4LKHTDFJIU3VSGAENHFYR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100146.5_warc_CC-MAIN-20231129204528-20231129234528-00807.warc.gz\"}"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=60&t=38750&p=131017
[ "## Are all pH calculations done to 2 decimal places or do we use sig figs in pH calculations? [ENDORSED]\n\nHenri_de_Guzman_3L\nPosts: 88\nJoined: Fri Sep 28, 2018 12:25 am\n\n### Are all pH calculations done to 2 decimal places or do we use sig figs in pH calculations?\n\nFor example 6B.5c in the 7th edition textbook asks us to calculate the pH for 0.0092 M Ba(OH)2. This is 2 sig figs but the answer in the back gives 12.96. Should the answer instead be 13?\n\nanthony_trieu2L\nPosts: 60\nJoined: Fri Sep 28, 2018 12:29 am\n\n### Re: Are all pH calculations done to 2 decimal places or do we use sig figs in pH calculations?  [ENDORSED]\n\nFor pH calculations, the sig figs are only accounted for after the decimal point. In this case, since there are two sig figs in 0.0092 M Ba(OH)2, the final answer should present two digits after the decimal point. Thus, 12.96 is correct.\n\nRandallNeeDis3K\nPosts: 34\nJoined: Fri Sep 28, 2018 12:25 am\n\n### Re: Are all pH calculations done to 2 decimal places or do we use sig figs in pH calculations?\n\npH calculations only follow sig figs after the decimal point! so two sig figs after the decimal point makes sense.\n\nReturn to “Calculating pH or pOH for Strong & Weak Acids & Bases”\n\n### Who is online\n\nUsers browsing this forum: No registered users and 2 guests" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7158797,"math_prob":0.9851308,"size":1257,"snap":"2020-45-2020-50","text_gpt3_token_len":360,"char_repetition_ratio":0.1963288,"word_repetition_ratio":0.33050847,"special_character_ratio":0.291965,"punctuation_ratio":0.13620071,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.979542,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T03:56:07Z\",\"WARC-Record-ID\":\"<urn:uuid:fe1a55b3-c63c-4540-ab8a-88dadb623a21>\",\"Content-Length\":\"55892\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:688325a9-39cc-4970-ac7d-85d04bc7e042>\",\"WARC-Concurrent-To\":\"<urn:uuid:fb2d2703-a598-4729-95a0-16a9541a8e67>\",\"WARC-IP-Address\":\"169.232.134.130\",\"WARC-Target-URI\":\"https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=60&t=38750&p=131017\",\"WARC-Payload-Digest\":\"sha1:SVMTN7S6Y5HSQVYFMQNR4GUGSJEHKVEU\",\"WARC-Block-Digest\":\"sha1:T2GALHCACA5OLVB5DDYOVRSESHC5QTUP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141746033.87_warc_CC-MAIN-20201205013617-20201205043617-00642.warc.gz\"}"}
https://dsp.stackexchange.com/questions/41370/interpolation-formula-for-two-dimensional-signal-reconstruction-in-the-frequency
[ "# Interpolation formula for two dimensional signal reconstruction in the frequency domain from polar samples\n\nIn the book, Advanced Topics in Shannon Sampling and Interpolation Theory by Robert J. Marks II, one may find an interpolation formula for reconstructing a two dimensional signal from regular polar samples in the frequency domain:\n\nLet $f(x,y)$ be space-limited to $2A$. Let its Fourier transform in polar coordinates $F(\\rho,\\phi)$ be angularly band-limited to $K$. Then $F(\\rho,\\phi)$ can be reconstructed from its polar samples via\n\n$$F(\\rho,\\phi)=\\sum_{n=-\\infty}^\\infty\\sum_{k=0}^{N-1}\\widetilde{F}\\left(\\frac{n}{2A},\\frac{2\\pi k}{N}\\right)\\operatorname{sinc}\\left[\\frac{2A(\\rho-n)}{2A}\\right]\\frac{\\sin\\left[\\frac{1}{2}(N-1)\\left(\\phi-\\frac{2\\pi k}{N}\\right)\\right]}{N\\sin\\left[\\frac{1}{2}\\left(\\phi-\\frac{2\\pi k}{N}\\right)\\right]},$$ where $N$ is assumed even and $$\\widetilde{F}\\left(\\frac{n}{2A},\\frac{2\\pi k}{N}\\right)=\\begin{cases}F\\left(\\frac{n}{2A},\\frac{2\\pi k}{N}\\right),&n\\ge 0, \\\\ F\\left(-\\frac{n}{2A},\\frac{2\\pi k}{N}+\\pi\\right),&n<0.\\end{cases}$$\n\nI figured that the argument of the cardinal sine function contained a typo, since the $2A$ in the numerator and denominator would appear to cancel. Thus I would guess that the author actually had\n\n$$\\operatorname{sinc}\\left[\\frac{\\rho-2An}{2A}\\right]$$\n\nin mind. Correct me if I'm wrong?\n\nThen the last quotient with the sine functions looks very similar to the Dirichlet kernel:\n\n$$D_N\\left(\\phi-\\frac{2\\pi k}{N}\\right)=\\frac{\\sin\\left[\\left(N+\\frac{1}{2}\\right)\\left(\\phi-\\frac{2\\pi k}{N}\\right)\\right]}{\\sin\\left[\\frac{\\phi-\\frac{2\\pi k}{N}}{2}\\right]}$$\n\nbut is not quite the same.\n\nIncidentally, my attempts to implement the interpolation formula in the theorem via Matlab for specific examples have failed completely with regards to reconstructing a signal. Perhaps someone can point me in the direction of a correct formula for signal reconstruction in the frequency domain from polar samples?", null, "suggests that $$\\operatorname{sinc}\\left[2A(\\rho-\\frac{n}{2A})\\right]$$ might be the correct form of the $\\operatorname{sinc}$ term.\n• Then this is the same as $$\\operatorname{sinc}\\left[\\frac{\\rho-n\\Delta\\rho}{\\Delta\\rho}\\right],$$ with $\\Delta\\rho=\\frac{1}{2A}$. Interestingly I also tried to implement this formula on Matlab too, but with no success. – Jason Born Jun 9 '17 at 18:08" ]
[ null, "https://i.stack.imgur.com/QTkK6.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.72242886,"math_prob":0.9997428,"size":1858,"snap":"2019-51-2020-05","text_gpt3_token_len":577,"char_repetition_ratio":0.16882417,"word_repetition_ratio":0.0,"special_character_ratio":0.2949408,"punctuation_ratio":0.0754717,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999972,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T21:47:28Z\",\"WARC-Record-ID\":\"<urn:uuid:f6722b3e-ddc6-4e9a-9cdb-4f60c8cd7e06>\",\"Content-Length\":\"133632\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:62145085-6b09-4628-96b9-29a9f651fc9c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0e31b427-bb81-4917-ba40-6ce6eaa88a03>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://dsp.stackexchange.com/questions/41370/interpolation-formula-for-two-dimensional-signal-reconstruction-in-the-frequency\",\"WARC-Payload-Digest\":\"sha1:L4PBF63MANH3SN2T5QGTPJ2QKHTE3C37\",\"WARC-Block-Digest\":\"sha1:TNUORCPGD2QZATTLTNYYTO43BXK7AQ5Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540547165.98_warc_CC-MAIN-20191212205036-20191212233036-00413.warc.gz\"}"}
https://metanumbers.com/42265
[ "## 42265\n\n42,265 (forty-two thousand two hundred sixty-five) is an odd five-digits composite number following 42264 and preceding 42266. In scientific notation, it is written as 4.2265 × 104. The sum of its digits is 19. It has a total of 3 prime factors and 8 positive divisors. There are 33,072 positive integers (up to 42265) that are relatively prime to 42265.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 5\n• Sum of Digits 19\n• Digital Root 1\n\n## Name\n\nShort name 42 thousand 265 forty-two thousand two hundred sixty-five\n\n## Notation\n\nScientific notation 4.2265 × 104 42.265 × 103\n\n## Prime Factorization of 42265\n\nPrime Factorization 5 × 79 × 107\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 3 Total number of distinct prime factors Ω(n) 3 Total number of prime factors rad(n) 42265 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) -1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 42,265 is 5 × 79 × 107. Since it has a total of 3 prime factors, 42,265 is a composite number.\n\n## Divisors of 42265\n\n1, 5, 79, 107, 395, 535, 8453, 42265\n\n8 divisors\n\n Even divisors 0 8 4 4\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 8 Total number of the positive divisors of n σ(n) 51840 Sum of all the positive divisors of n s(n) 9575 Sum of the proper positive divisors of n A(n) 6480 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 205.585 Returns the nth root of the product of n divisors H(n) 6.52238 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 42,265 can be divided by 8 positive divisors (out of which 0 are even, and 8 are odd). The sum of these divisors (counting 42,265) is 51,840, the average is 6,480.\n\n## Other Arithmetic Functions (n = 42265)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 33072 Total number of positive integers not greater than n that are coprime to n λ(n) 8268 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 4413 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 33,072 positive integers (less than 42,265) that are coprime with 42,265. 
And there are approximately 4,413 prime numbers less than or equal to 42,265.\n\n## Divisibility of 42265\n\n m n mod m 2 3 4 5 6 7 8 9 1 1 1 0 1 6 1 1\n\nThe number 42,265 is divisible by 5.\n\n## Classification of 42265\n\n• Arithmetic\n• Deficient\n\n• Polite\n\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n• Sphenic\n\n## Base conversion (42265)\n\nBase System Value\n2 Binary 1010010100011001\n3 Ternary 2010222101\n4 Quaternary 22110121\n5 Quinary 2323030\n6 Senary 523401\n8 Octal 122431\n10 Decimal 42265\n12 Duodecimal 20561\n20 Vigesimal 55d5\n36 Base36 wm1\n\n## Basic calculations (n = 42265)\n\n### Multiplication\n\nn×i\n n×2 84530 126795 169060 211325\n\n### Division\n\nni\n n⁄2 21132.5 14088.3 10566.2 8453\n\n### Exponentiation\n\nni\n n2 1786330225 75499246959625 3190975672748550625 134866586808717492165625\n\n### Nth Root\n\ni√n\n 2√n 205.585 34.8332 14.3382 8.41775\n\n## 42265 as geometric shapes\n\n### Circle\n\n Diameter 84530 265559 5.61192e+09\n\n### Sphere\n\n Volume 3.16251e+14 2.24477e+10 265559\n\n### Square\n\nLength = n\n Perimeter 169060 1.78633e+09 59771.7\n\n### Cube\n\nLength = n\n Surface area 1.0718e+10 7.54992e+13 73205.1\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 126795 7.73504e+08 36602.6\n\n### Triangular Pyramid\n\nLength = n\n Surface area 3.09401e+09 8.89767e+12 34509.2\n\n## Cryptographic Hash Functions\n\nmd5 1b9b3786eae526755467c2593d194005 37156b9eddb65ef2207af1e927b7e6c083bfe0e8 b5b7ec8069eae2248ed162d8e0f878ef4e93a3f63c48a6b8323e827356f49358 11a51a0a8e07cc7429775fb63627d21cf968a8a959dd51e3ca327f212d6d4ffac5f7c6153afab0da8efcb7891dd2eac98abaa11ddeec69185b6715e490b438e2 0a800039d9092ad65ed56ce20872df2dfa0bc522" ]
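A quick, hedged recomputation of the headline values on this page, using SymPy's number-theory helpers plus math.lcm for the Carmichael function (valid here because 42,265 is a product of distinct odd primes):

```python
from math import lcm
from sympy import factorint, divisors, divisor_sigma, totient

n = 42265
print(factorint(n))              # {5: 1, 79: 1, 107: 1}
print(divisors(n))               # [1, 5, 79, 107, 395, 535, 8453, 42265]
print(len(divisors(n)))          # tau(n) = 8
print(divisor_sigma(n))          # sigma(n) = 51840
print(divisor_sigma(n) - n)      # aliquot sum s(n) = 9575
print(totient(n))                # Euler phi = 33072
print(lcm(5 - 1, 79 - 1, 107 - 1))   # Carmichael lambda = lcm(4, 78, 106) = 8268
```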
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6073133,"math_prob":0.9844902,"size":4535,"snap":"2020-24-2020-29","text_gpt3_token_len":1609,"char_repetition_ratio":0.11895829,"word_repetition_ratio":0.028106509,"special_character_ratio":0.45181918,"punctuation_ratio":0.076030925,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9958916,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-13T04:35:15Z\",\"WARC-Record-ID\":\"<urn:uuid:5e5b305c-7058-417b-9ce3-48699dff93aa>\",\"Content-Length\":\"48312\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ef7959ba-e343-484e-bcd2-185b5935efed>\",\"WARC-Concurrent-To\":\"<urn:uuid:2a2d473a-5166-44b8-ae57-294ee1f4b021>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/42265\",\"WARC-Payload-Digest\":\"sha1:7VRA4JFQNQCCVSRPSFJZWQ7J3IHQ5S6P\",\"WARC-Block-Digest\":\"sha1:HHMELXUASJN6OIN27O6XNPNJ4WFHYZK6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657142589.93_warc_CC-MAIN-20200713033803-20200713063803-00316.warc.gz\"}"}
https://getchemistryhelp.com/chemistry-lesson-significant-digits-rounding/
[ "", null, "", null, "Dr. Kent McCorkle, or \"Dr. Kent' to his student, has helped thousands of students be successful in chemistry.\n\n# Chemistry Lesson: Significant Digits & Rounding\n\n[View the accompanying Lesson on Significant Digits & Measurements here.]\n\n## Significant Digits & Rounding\n\nAll numbers from a measurement are significant.  However, we often generate nonsignificant digits when performing calculations. We get rid of nonsignificant digits by rounding off numbers. There are three rules for rounding off numbers.\n\n## Rules for Rounding Numbers\n\n1. If the first nonsignificant digit is less than 5, drop all nonsignificant digits.\n2. If the first nonsignificant digit is greater than or equal to 5, increase the last significant digit by 1 and drop all nonsignificant digits.*\n3. If a calculation has two or more operations, retain at least one nonsignificant digit until the final operation and then round off the answer.\n\n## Examples\n\nA calculator displays 17.846239 and 3 significant digits are justified.\n• The first three significant digits are 17.8.  The first nonsignificant digit is 4.  According to rule #1, because 4 is less than 5 we round down, discarding all of the nonsignificant digits, leaving us with 17.8.\nA calculator displays 17.856239 and 3 significant digits are justified.\n• The first three significant digits are 17.8.  The first nonsignificant digit is 5.  According to rule #2, because it is 5 or greater then we add 1 to the last significant digit (the 8 becomes a 9) and drop all of the nonsignificant digits.  The answer is thus 17.9.\n\n## Placeholder Zeros\n\nRound the measurement 151 mL to 2 significant digits.\n\n• We keep the first two significant digits, 15. The first nonsignificant digit, 1, tells us to round down leaving us with 15. However, 15 is 10x smaller than the original number so we need to put a 0 in the ones place to maintain the magnitude of the number. So the answer would be 150. Remember, such placeholder zeros (trailing zeros) are not significant so 150 does have two significant digits.\n\nRound the measurement 2788 g to 3 significant digits.\n\n• We keep the first three significant digits, 278. The first nonsignificant digit is the final 8, which indicates we are to round up, adding 1 to the final significant 8 in the tens places making it 279. But again, 279 is much smaller than 2788 so we again need a placeholder zero in the ones place. So the answer would be 2790.\n\n## Examples\n\n1. Round 4.1278 to 3 significant digits.\n• 4.13\n2. Round 4.1278 to 2 significant digits.\n• 4.1\n3. Round 63401 to 3 significant digits.\n• 63400\n4. Round 0.0562 to 2 significant digits.\n• 0.056\n\nAn alternate rule says that if the first nonsignificant digit is a 5 then always round so that last significant digit is even. Why? Since a 0 doesn’t really require ’rounding’, there are only 9 numbers that can be either dropped or rounded up. 5 is exactly in the middle of 1-9 so always rounding up at 5 would lead to more numbers being rounded up (5,6,7,8,9) than are rounded down (1,2,3,4). To avoid this, if a number ends in 5 then some of the time it’s rounded up and some of the time it’s rounded down – whichever way will make the last significant digit even.\n\nFor additional practice problems on significant digits and rounding, visit Significant Digits & Rounding Practice Problems.", null, "" ]
https://bv.fapesp.br/en/auxilios/91067/lp-lq-decay-estimates-for-evolution-operators/
Research Grants 15/16038-2 - Partial differential equations - BV FAPESP

# Lp-Lq decay estimates for Evolution Operators

- Grant number: 15/16038-2
- Support type: Regular Research Grants
- Duration: November 01, 2015 - October 31, 2017
- Field of knowledge: Physical Sciences and Mathematics - Mathematics - Analysis
- Principal researcher: Marcelo Rempel Ebert
- Grantee: Marcelo Rempel Ebert
- Home Institution: Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto (FFCLRP), Universidade de São Paulo (USP), Ribeirão Preto, SP, Brazil

## Abstract

In this project, we are interested in Lp-Lq decay estimates (not necessarily on the conjugate line) in time for linear hyperbolic equations or, more generally, p-evolution equations. The results are derived by developing a suitable WKB analysis. We plan to apply these estimates to study semi-linear problems. In particular, we are interested in proving results about global existence (in time) of the solution, possibly assuming small initial data. Here we plan to understand in which cases the decay rates of solutions to the semi-linear problems coincide with those of the corresponding linear problem, and in which other cases a loss of decay appears. This raises the question of the exact loss of decay, so methods to show optimality should be developed. We plan to study both models with constant coefficients and models with time-dependent coefficients. In the case of time-dependent coefficients, we will assume suitable regularity and a sufficient control of the oscillations. The interaction of the time-dependent coefficients will also be studied, to avoid a bad influence on the asymptotic profile or to obtain better decay estimates. Initially, we will mainly consider wave-type equations, possibly with damping terms and with nonlocal terms, such as fractional powers of the Laplacian. In this way we cover external and structural damping up to the visco-elastic case. Finally, we plan to study higher order equations and, if possible, first-order systems, p-evolution equations and problems in an abstract setting. (AU)
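For concreteness, one prototypical model in the class described above - given here only as an illustration, not quoted from the project - is the semi-linear wave equation with a time-dependent, possibly fractional (structural) damping term,

$$u_{tt} - \Delta u + b(t)\,(-\Delta)^{\delta} u_t = f(u), \qquad 0 \le \delta \le 1,$$

for which one looks for linear decay estimates of the form

$$\|u(t,\cdot)\|_{L^{q}} \le C\,(1+t)^{-\kappa(p,q)}\,\bigl(\|u_{0}\|_{L^{p}} + \|u_{1}\|_{L^{p}}\bigr),$$

with δ = 0 corresponding to external damping and δ = 1 to the visco-elastic case, and then asks whether the semi-linear problem inherits these decay rates or loses part of them.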
https://www.learncram.com/cbse/mcq-questions-for-class-8-science-chapter-16/
[ "# MCQ Questions for Class 8 Science Chapter 16 Light with Answers\n\nWe have compiled the NCERT MCQ Questions for Class 8 Science Chapter 16 Light with Answers Pdf free download covering the entire syllabus. Practice MCQ Questions for Class 8 Science with Answers on a daily basis and score well in exams. Refer to the Light Class 8 MCQs Questions with Answers here along with a detailed explanation.\n\n## Light Class 8 MCQs Questions with Answers\n\nChoose the correct option.\n\nQuestion 1.\nWe are able to see an object due to the presence of\n(a) light\n(b) dark\n(c) refraction\n(d) object\n\nQuestion 2.\nThe bouncing back of light into the same medium is called\n(a) refraction\n(b) reflection\n(c) dispersion\n(d) diffraction\n\nQuestion 3.\nA mirror has _____ surface.\n(a) rough\n(b) polished\n(c) dark\n(d) all of these\n\nQuestion 4.\nMaximum part of light is reflected by\n(a) opaque object\n(b) translucent object\n(c) transparent object\n(d) all of these\n\nQuestion 5.\nBeam of light striking the reflecting surface is called\n(a) incident ray\n(b) reflected ray\n(c) refracted ray\n(d) normal ray\n\nQuestion 6.\nThe back side of a plane mirror contains\n(a) gold coating\n(b) silver coating\n(c) aluminium coating\n(d) copper coating\n\nQuestion 7.\nThe perpendicular drawn to the reflecting surface is called\n(a) normal\n(b) incident ray\n(c) reflected ray\n(d) none of these\n\nQuestion 8.\nThere are ________ laws of reflection.\n(a) one\n(b) two\n(c) three\n(d) four\n\nQuestion 9.\nThe angle of incidence is always _______ to the angle of reflection.\n(a) greater\n(b) smaller\n(c) equal\n(d) none of these\n\nQuestion 10.\nThe angle between the reflected ray and the normal is called\n(a) angle of incidence\n(b) reflected ray\n(c) angle of reflection\n(d) point of incidence\n\nQuestion 11.\nThe reflection of light from a smooth surface is called\n(a) diffused reflection\n(b) regular reflection\n(c) dispersion\n(d) spectrum\n\nQuestion 12.\nWhich of the following results in diffused reflection?\n(a) Plane mirror\n(b) Shiny surface\n(c) Silver\n(d) Wood\n\nQuestion 13.\nThe nature of image formed by plane mirror is\n(a) real and inverted\n(b) virtual and erect\n(c) real and erect\n(d) virtual and inverted\n\nQuestion 14.\nIf you hold a pen in your right hand and stand in front of the mirror, the pen will be in the left hand in the image. This phenomenon is called\n(a) lateral inversion\n(b) diffraction\n(c) reflection\n(d) inversion\n\nQuestion 15.\nIf two plane mirrors are inclined at an angle of 40°, number of images formed will be\n(a) 7\n(b) 8\n(c) 9\n(d) 5\n\nQuestion 16.\nType of mirror used as side view mirror is\n(a) convex mirror\n(b) plane mirror\n(c) concave mirror\n(d) ground mirror\n\nQuestion 17.\nBand of seven colours is called\n(a) VIBGYOR\n(b) dispersion\n(c) spectrum\n(d) reflection\n\nQuestion 18.\nFront bulged part of the eyeball is called\n(a) cornea\n(b) iris\n(c) retina\n(d) pupil\n\nQuestion 19.\nTwo mirrors A and B are placed at right angles to each other. A ray of light incident on mirror A at an angle of 25° falls on mirror B after reflection. The angle of reflection for the ray reflected from mirror B would be\n(a) 25°\n(b) 50°\n(c) 65°\n(d) 115°\n\nQuestion 20.\nVisually impaired people can read and write using\n(a) electronic writer\n(b) Braille system\n(c) digital pens\n(d) hearing aids\n\nQuestion 21.\nA toy is placed at 10 cm in front of a plane mirror. 
What is the distance of image from the mirror?\n(a) 20 cm\n(b) 40 cm\n(c) 10 cm\n(d) 30 cm\n\nQuestion 22.\nA candle is 30 cm high. What is the height of its image in a plane mirror?\n(a) 10 cm\n(b) 15 cm\n(c) 30 cm\n(d) 45 cm\n\nQuestion 23.\nWhich of the following works on the concept of multiple reflections?\n(a) Telescope\n(b) Binoculars\n(c) Kaleidoscope\n(d) Sunglasses\n\nQuestion 24.\nVisually challenged people can read and write with\n(a) hearing aid\n(b) electronic type writer\n(c) Braille system\n(d) digital pen\n\nQuestion 25.\nThe human eye can clearly see up to which distance?\n(a) Infinity\n(b) 1000 km\n(c) 100 km\n(d) 10 km\n\nQuestion 26.\nThe human eye cannot see clearly at a distance which is less than\n(a) 2.5 cm\n(b) 25 cm\n(c) 15 cm\n(d) 1.5 cm\n\nQuestion 27.\nThe angle between the incident ray and the normal is called angle of\n(a) reflection\n(b) refraction\n(c) transmission\n(d) incidence\n\nQuestion 28.\nWhich of the following parts of an eye controls the amount of light entering the eye by contracting or dilating?\n(a) Retina\n(b) Cornea\n(c) Pupil\n(d) Iris\n\nQuestion 29.\nThe phenomenon of breaking up of white light into its seven constituent colours is called\n(a) reflection of light\n(b) refraction of light\n(c) dispersion of light\n\nQuestion 30.\nWe can see ourselves in a mirror or a polished surface but not on walls because of\n(a) regular reflection\n(b) normal reflection\n(c) irregular reflection\n(d) specular reflection\n\nQuestion 31.\nHow many cells are there in a Braille character?\n(a) 12\n(b) 9\n(c) 3\n(d) 6\n\nQuestion 32.\nWhich of the following will produce a regular reflection?\n(a) Tree leaf\n(b) Wood\n(c) Wall\n(d) Mirror\n\nQuestion 33.\nAn instrument which enables us to see things which are too small to be seen with naked eye is called\n(a) microscope\n(b) periscope\n(c) kaleidoscope\n(d) none of these\n\nQuestion 34.\nThe property of a plane mirror to make ‘right appear as left’ and vice versa, is called\n(a) vertical inversion\n(b) lateral inversion\n(c) reflection\n(d) refraction\n\nFill in the blanks with suitable word/s.\n\nQuestion 1.\nLight is a form of __________\n\nQuestion 2.\nThe ray of light which strikes the reflecting surface is called __________ ray.\n\nQuestion 3.\nThe bouncing back of light after it falls on a surface is called __________\n\nQuestion 4.\nA mirror has _________ and _________ surface.\n\nQuestion 5.\n__________ is the perpendicular line on the incidence point.\n\nQuestion 6.\nA plane mirror forms a _________ image.\n\nQuestion 7.\nThe ray which returns after striking the surface is called __________ ray.\n\nQuestion 8.\nAngle of incidence is always __________ to the angle of reflection.\n\nQuestion 9.\nReflection from a smooth surface is called __________ reflection.\n\nQuestion 10.\nThe size of image formed by the plane mirror is __________ as size of object.\n\nQuestion 11.\nSplitting of light into seven colours is called __________\n\nQuestion 12.\nKaleidoscope is based on the concepts of __________\n\nQuestion 13.\nThe point on the surface at which incident ray strikes is called __________\n\nQuestion 14.\nThe reflection of light from an uneven surface is called __________\n\nQuestion 15.\nPaper is a _________ surface.\n\nQuestion 16.\nMirror is a __________ surface.\n\nQuestion 17.\nWhen the mirrors are inclined at 900, we get images.\n\nQuestion 18.\nIn bright light, the size of pupil __________\n\nQuestion 19.\nThe space between the cornea and lens is filled with a liquid called __________\n\nQuestion 
20.\nBraille system was invented by __________\n\nQuestion 21.\nThe image formed by a plane mirror is ………………….. inverted.\n\nQuestion 22.\nThe angle of incidence is always equal to the angle of …………………..\n\nQuestion 23.\n………………….. formation is the natural phenomenon showing dispersion.\n\nQuestion 24.\nThe lens of the eye focuses light on …………………..\n\nQuestion 25.\nThe size of the pupil becomes ………………….. when we see in dim light.\n\nTrue or False\n\nQuestion 1.\nDeficiency of vitamin B causes night blindness.\n\nQuestion 2.\nIn the Braille system, patterns are made with coloured dots.\n\nQuestion 3.\nNormal make 60° angle with the reflecting surface.\n\nQuestion 4.\nAngle of incidence is always equal to angle of reflection.\n\nQuestion 5.\nDiffused reflection occurs due to rough surface.\n\nQuestion 6.\nKaleidoscope is based on the principle of dispersion of light.\n\nQuestion 7.\nBoth incident ray and reflected ray lie in the same plane.\n\nQuestion 8.\nThe choroid prevents the internal reflection of light and protects the light sensitive inner parts of the eye.\n\nQuestion 9.\nRainbow forms due to dispersion.\n\nQuestion 10.\nRods are sensitive to bright light.\n\nQuestion 11.\nThe iris is the coloured part of the eye.\n\nQuestion 12.\nDiffused reflection is due to the failure of laws of reflection.\n\nQuestion 13.\nCiliary muscles changes the shape of the lens in the eye.\n\nQuestion 14.\nWe should not wash our eyes.\n\nQuestion 15.\nBraille was designed by Louis Braille.\n\nQuestion 16.\nCones are sensitive to dark light.\n\nQuestion 17.\nToo much light is good for eyes.\n\nQuestion 18.\nThe size of the pupil becomes large when we see in dim light.\n\nQuestion 19.\nThe angle of incidence is not equal to the angle of reflection in irregular reflection.\n\nQuestion 20.\nThe angle between the normal and the incident rays is called the angle of incidence.\n\nMatch the following\n\n Column I Column II 1. Reflection (a) Regular reflection 2. Normal vision (b) bouncing back of light 3. Smooth surface (c) 25 cm 4. Kaleidoscope (d) Short-sightedness 5. Hypermetropia (e) Dispersion of light 6. Cornea (f) For visually challenged person 7. Rainbow (g) Multiple images 8. Blind spot (h) Front part of the eye 9. Braille system (i) Sensitive for bright light 10. Rods (j) Long-sightedness 11. Cones (k) Sensitive for dim light 12. Myopia (l) No sensory nerves 13. Retina (m) Cataract 14. Cloudy lens (n) Ability to focus 15. Accommodation power (o) Image formed" ]
https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-11-17
[ "# A benchmark for statistical microarray data analysis that preserves actual biological and technical variance\n\n## Abstract\n\n### Background\n\nRecent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically-relevant data to evaluate the performance of statistical methods.\n\n### Results\n\nOur novel method ranks the probesets from a dataset composed of publicly-available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality.\n\n### Conclusions\n\nPerformance analysis refined the results from benchmarks published previously.\n\nWe show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better.\n\n### Availability\n\nThe R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.\n\n## Background\n\n### Objectives\n\nThe sensitivity of microchip data analysis tools is strongly limited by the weakness of the estimation of variance because the number of replicates is generally low and variance heterogeneity is high. Several methods, variants of the classical t test , have been developed in recent years to increase this sensitivity by improving the estimation of variance. These methods are generally benchmarked on artificial (\"spike-in\") or simulated data. Consequently, the ability of the methods to better estimate variance is tested only on technical or modelled variances, and not on biological variance. We propose to evaluate these statistical strategies on actual biological data in order to avoid this bias. As the use of actual data does not allow for definition of the unambiguous \"truth\" to identify true and false positives, we propose a novel approach to circumvent this limitation.\n\n### State of the art\n\nMicrochip data analyses are confronted with the double-edged problem of multiple testing and weak variance estimation due to the often limited number of replicates. Furthermore, departure from normality and variance heterogeneity between genes and between experimental conditions for a given gene can decrease the confidence of statistical tests. Moreover, data has shown that a non-trivial mean-variance relationship benefits to methods analyzing groups of genes [2, 3] instead of analyzing genes separately. This relies on the fact that n genes sharing similar expression levels also share more similar variances than n genes sampled randomly.\n\nAside from the classical Welch correction for variance heterogeneity , numerous heuristics have been developed over recent years to improve the estimation of variance and consequently the statistical power of the tests. The Window t test, Regularized t test and LPE test [2, 3, 5] assume an empirical relationship with the average expression level. SAM, the Regularized t test and Moderate t test (Limma) use an empirical Bayes model to estimate the variance [2, 6, 7]. 
The Moderate t test and two versions of a shrinkage approach base the estimation of variance on distributional assumptions .\n\nTable 1 shows the variance shrinkage used in different heuristics (Regularized t test, SAM, Moderate t and Shrinkage t). The general formulation of the variance estimator can be written: V = a V0 + b V g . Thus, the variability estimator is estimated from two terms, respectively background variability V0 and individual variability V g . Those terms are first used either to compute a sum (SAM) or a weighted average value (Regularized t test, Moderated t, Shrinkage t). This is operated at the variance level, except in the SAM procedure where the offset term is added at the level of the standard deviation.\n\nIn the Regularized t test procedure, an arbitrary parameter is used to weight this mean value, based on the number of replicates (n + n0 = k = arbitrary value = 10 in the original procedure, where n is the number of replicates and n0 is the number of \"virtual replicates\" used to compute the background variance). Limma also uses the degrees of freedom associated with each variability term as weighting factors. The degrees of freedom associated with background variance are computed from the expression data matrix, considering a mixture of genes with and without differential expression.\n\nTo compute the Shrinkage t statistic, those terms are weighted according to the minimum value between one and an intermediate statistic reflecting the dispersion of individual estimates compared with their deviation from the median value.\n\nAnother diverging aspect of the procedures is estimation of the correction term used to shrink variance towards an optimal value. The background term is computed using different procedures: (i) from a relationship between expression level and variability (Regularized t test), (ii) from the value minimizing the dispersion of the Student t derivate statistic (SAM), (iii) from a mathematical model describing the mixture of two sets of genes (Moderated t), and from the median value of the individual variance distribution (Shrinkage t).\n\nUsing their own variance estimates, each method computes a t statistic, either based on equal variances (SAM, Moderated t, Shrinkage t) or unequal variances (Regularized t test, Shrinkage t). The significance of each statistic is then assessed by comparison with a null distribution, in accordance with the model: (i) the t distribution with degrees of freedom computed following Welch's correction (Regularized t test), (ii) from the cumulative distribution associated with the 2 sets of genes, with corrected degrees of freedom (Moderated t), or (iii) empirically from permutations of sample labels (SAM). The Shrinkage t procedure does not include a null distribution and only uses the t-like statistic to rank the results.\n\nThe differences between the statistics thus lie in the way in which the variances and/or their degrees of freedom are computed. Paradoxically, the datasets available to compute the rates of false positives and negatives and thus evaluate the sensitivity and specificity of each approach are based on simulated or spike-in data. Such data is characterized by variances, variance heterogeneities and mean-variance relationships which differ from those actually observed with biological data. 
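Before turning to the benchmark datasets themselves, it may help to make the shared structure of the variance estimators above explicit in code. The sketch below implements a single-gene statistic of the form V = a·V0 + b·Vg with a Regularized-t-style pseudo-count weighting. It is only an illustration of the common principle - the exact weights, offsets and degrees-of-freedom corrections differ between SAM, Limma, the Regularized t test and the Shrinkage t - and the function name and defaults are choices made for this sketch, not code from any of the packages discussed.

```python
import numpy as np

def shrunken_t(x, y, v0, n0=10):
    """Illustrative shrinkage t statistic for one gene.

    x, y : log-expression values of the gene under the two conditions.
    v0   : background variance V0 (e.g. estimated from genes with a similar
           average expression level, or from the median of all variances).
    n0   : weight of the background term, expressed as a number of
           'virtual replicates' (k = n + n0 in the Regularized t test).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    # V = a*V0 + b*Vg, with weights proportional to n0 and to n - 1
    # (one simple weighting choice; the published methods differ here).
    vx = (n0 * v0 + (nx - 1) * x.var(ddof=1)) / (n0 + nx - 1)
    vy = (n0 * v0 + (ny - 1) * y.var(ddof=1)) / (n0 + ny - 1)
    # Unequal-variance (Welch-like) t statistic on the shrunken variances.
    return (x.mean() - y.mean()) / np.sqrt(vx / nx + vy / ny)
```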
The problem when benchmarking these methods is precisely the discrepancies between the data used and the performances allegedly tested.\n\n### Existing benchmark datasets\n\nOver recent years, we have witnessed the emergence of a huge number of pre-processing and processing methods for microarray analysis. To validate these approaches and compare their performances, we need datasets for which both non differentially-expressed genes and truly differentially-expressed genes (DEG) are known. Up to now, this question has been addressed by spike-in experiments or in silico simulations.\n\n#### a - Spike-in datasets\n\nA spike-in experiment is a microarray experiment where RNA is added in known quantities. A few datasets of this type are available, namely the two Latin Square datasets from Affymetrix and the \"golden spike\" experiment . These datasets were used in several papers to compare methods that analyse differential expression in microarray experiments . However, the results appear to be highly dependent on the dataset chosen to test the methods, which can be explained by the extremely divergent characteristics of these datasets.\n\nThe Affymetrix Latin Square datasets are characterized by a very low number of differentially-expressed genes (42/2230 genes, about 0.2% of all genes, in the HG-U133 Latin Square), an extreme fold-change range (from 2 to 512) and a large concentration range (from 0.125 pM to 512 pM in the HG-U133 Latin Square). In these datasets, a complex human RNA mixture (human cRNA from the pancreas in HG-U95) was added under all experiment conditions to mimic the bulk of non differentially-expressed genes.\n\nChoe's spike-in dataset , made with a Drosophila chip, was designed to compensate for the failings of existing datasets, and differs considerably from the Latin Square datasets on a number of points: (i) the proportion of spiked DEGs is high, about 10% of all genes; (ii) RNA was spiked in high quantities (iii) only up-regulated genes were included in the dataset, which is not expected in real experiments; (iv) no unrelated background RNA was used, but an important number of genes were spiked in equal quantities on all arrays. This made it possible to distinguish between empty genes, and genes expressed with no differential expression. In Affymetrix's Latin Squares, the complex and undefined background RNA eliminated the possibility to distinguish between unexpressed and expressed genes.\n\nThe aim of spike-in datasets is to mimic a typical microarray experiment, and their main problem is determination of parameters such as the proportion of DEGs and their concentration, the up- or down-regulation of genes, the amount of the mixture that is added to mimic the bulk of equally-expressed genes. However, these parameters influence the results as well. For example, the proportion of DEGs influences the normalization procedure, which assumes that the majority of genes are not differentially expressed, but it cannot be defined from actual experiments where this proportion remains unknown. Each one of the two available types of spike-in datasets has dramatic biases, and re-analyses have been performed on Choe's dataset to take them into account [12, 13, 15].\n\nPerformances can be compared together on both datasets considered as two extreme but imperfect conditions. Then the \"best\" combination of pre-processing and processing would be that which provides the best performance in both tests. 
However, this pragmatic approach does not lead to an improved understanding of the underlying mechanisms and parameters which make a method perform better than another under given conditions. Moreover, biological variance is not taken into account, as both datasets contain only technical replicates.\n\n#### b - Simulation datasets\n\nSome authors have tried to model in silico microarrays. Among others, Parrish et al and Singhal et al attempted to model the complex reality of a microarray on the basis of observation or real datasets.\n\nThe first study was based on a multivariate normal distribution (by selecting mathematical transformations of the underlying expression measures such that the transformed variables approximately follow a Gaussian distribution, and then estimating the associated parameters) in order to model transformed gene expression values within a subject population, while accounting for co-variances among genes and/or probes. This model was then used to simulate probe intensity data by a modified Cholesky matrix factorization .\n\nThough Singhal's general approach might appear to be similar to ours as it is also based on real datasets, his method differs in the fact that he extracted parameters (biological and technical variance) from these datasets to simulate datasets based on the parameters , while we use the data itself. Like all simulated datasets, numerous simplifications are made and skew reality. So, Parrish et al approximated Gaussian distributions and Singhal et al approximated biological and technical variance using mathematical equations, which inevitably skews or impoverishes reality.\n\nIn conclusion, in a traditional spike-in dataset (Affymetrix Latin Squares and Golden Spike experiments), over-expression is simulated by the addition of RNA fragments at known concentrations, with great reproducibility between the replicates . Important biological variability observed in real datasets is completely eliminated. When the truth is simulated in silico [16, 17], classical biases generated by the simplification of modeling are expected.\n\nThe classification of statistical methods thus reflects their ability to detect true positives and avoid false negatives in an artificial context, which is in obvious contradiction with the fact that methods differ primarily in their approach to the estimation of variance. The reliability of these benchmarks is thus open to discussion at the very least. A biological microarray dataset for which the truth is known simply does not exist.\n\n## Methods\n\n### Strategy proposed\n\nThe goal of our approach was to benchmark different statistical methods on authentic biological data in order to preserve the actual mean-variance relationship. The \"truth\" is not inferred by simulation or induced by spike-in of a known concentration of genetic material. Different sets of genes are defined as the truth, designed to be more or less difficult to isolate from the background.\n\nWe selected the genes from archived experiments on at least 15 replicates under two experimental conditions on one same platform (Affymetrix's HG-U133a). This number of replicates represented a good compromise between dataset availability and variance estimation quality. Indeed, when n = 15, the difference between the Z and t distributions is very slight.\n\nThe \"truth\" is defined as a set of genes characterized by a predetermined ratio between differential expression and variability between replicates. 
This ratio is computed such that, under optimal conditions (normality, homoscedasticity and known variance), the classical t test would be characterized by a given sensitivity and a given positive predictive power (see below: theoretical background). The sensitivity and positive predictive power are then fine-tuned to render genes increasingly difficult to distinguish from the background. The capability of the various statistical methods to detect these sets of genes is then tested on a limited subset of replicates selected at random from among those used to define "the truth".

Thus, the benchmark does not compute false positive and negative rates in comparison with an experimentally-validated "truth" - which is unrealistic - but tells us that, if the "truth" were this set of genes, the performances of the methods would be those evaluated.

Several problems are circumvented using this approach: (i) the fundamental problems of respecting actual biological variance, the dependence of this variance on the level of gene expression, and the differences in variance between genes as well as between control and test replicates are addressed by collecting actual experimental data; (ii) the prevalence of differentially-expressed genes, often limited in spike-in data (0.2% DEGs in the Latin Square datasets), is controlled and kept constant by re-sampling in over 1,000,000 DB-probesets (we call one row of the DB matrix a "DB-probeset" to avoid confusion with an original probeset in a classic expression set) obtained through the combination of a large number of datasets; (iii) uneven detection efficiency due to a mix of extreme fold changes in the same benchmark (from 2 to 512 in the Latin Square datasets) is avoided by defining more homogeneous differentially-expressed sets of genes. This means that methods are evaluated for a given detection limit and not for a mix of genes in which some are trivial and some are too difficult to detect.

Finally, a non-trivial problem addressed here is that varying the number of replicates influences the statistical power both through the quality of the variance estimation and through the magnitude of the standard error. We fine-tuned the ratio between differential expression and variability according to the number of replicates n (the higher the n, the lower the ratio) so that the difficulty to find a set of genes considered as the truth would remain constant if the variance were known. The effect of n on variance estimation quality can thus be strictly isolated and improvements in the estimation of variance can be evaluated in detail.

### Theoretical background

The positive predictive power (PPP) of a test is defined as a function of the numbers of true positives (TP) and false positives (FP) according to Equation 1.

$$PPP = \frac{TP}{TP + FP} \qquad (1)$$

Let P be the prevalence of over- or under-expressed genes, α the probability of type I error and β the probability of type II error. Equation 1 can be transformed to express PPP as a function of (1-β), α and P (Equation 2) and, from there, α as a function of (1-β), P and PPP (Equation 3).

$$PPP = \frac{P\,(1-\beta)}{P\,(1-\beta) + (1-P)\,\alpha} \qquad (2)$$

$$\alpha = \frac{P\,(1-\beta)\,(1-PPP)}{(1-P)\,PPP} \qquad (3)$$
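As an illustration (this helper is not part of the paper's R scripts; the function name and example values are chosen here for convenience), Equation 3 translates directly into a one-line computation of the α implied by a target PPP, sensitivity and prevalence:

```python
def alpha_from_ppp(ppp, power, prevalence):
    """Type I error rate implied by Equation 3.

    ppp        : requested positive predictive power (PPP).
    power      : requested sensitivity, 1 - beta.
    prevalence : proportion P of truly differentially expressed genes.
    """
    return prevalence * power * (1.0 - ppp) / ((1.0 - prevalence) * ppp)

# Run 2 of the benchmark uses PPP = 0.5 and sensitivity = 0.99 at 1% prevalence:
print(alpha_from_ppp(ppp=0.5, power=0.99, prevalence=0.01))  # 0.01
```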
Let n be the number of replicates (considered constant along a procedure), σ² the variance (considered homogeneous) and μ0 - μ1 the difference between gene expression under conditions 0 and 1. n can then be expressed as a function of power (1-β) and confidence (1-α) according to Equation 4.

$$n = \frac{2\,\sigma^{2}\,\bigl(z_{1-\alpha} + z_{1-\beta}\bigr)^{2}}{(\mu_{0} - \mu_{1})^{2}} \qquad (4)$$

Let D = M0 - M1 be the estimate of μ0 - μ1, with variance 2σ²/n, and let S be the estimate of σ. The D/S threshold expressed in Equation 5 is directly related to the ability of the Student t test to detect a given differential expression on n replicates with power (1-β) and confidence (1-α).

$$\left(\frac{D}{S}\right)_{\mathrm{threshold}} = \bigl(z_{1-\alpha} + z_{1-\beta}\bigr)\,\sqrt{\frac{2}{n}} \qquad (5)$$

As the D/S threshold is computed from 15 replicates, it provides a rather good estimate of μ0 - μ1 and σ. It approximates the limit for rejecting H0 under ideal conditions (normal distribution and homoscedasticity) with the Student t test at a given n, power and confidence.

Two main qualities of a test may be considered to be its sensitivity (1-β) and positive predictive power (PPP) (Equations 1 and 2). When high, these probabilities ensure that the user will find an important part of the truth, with low random noise, respectively.

In our benchmark, we fixed the number of replicates for the subsets of replicates (n), the prevalence (P), the sensitivity (1-β) and the positive predictive power (PPP). The value of α is deduced from Equation 3 and the corresponding D/S threshold is computed from Equation 5. This allows us to define a subset of genes which is more or less easy to detect from the background, and to keep this difficulty constant when increasing n to improve the quality of the variance estimate.

### Implementation

Data collection (cel files) was performed using Gene Expression Omnibus on the Affymetrix platform HG-U133a (Human Genome model U133a). This collection consists of 34 datasets (table 2) for which there are at least 15 replicates for each of 2 different experimental conditions. With all the cel files from one experiment, we built an Affybatch object, which is simply a structured concatenation of the files. These 34 Affybatch objects were pre-treated using the R package GCRMA. As the benchmark is tested gene by gene, a pre-treatment including all Affybatch objects globally was not needed.

Giant datasets (e.g., GSE3790 with 202 replicates in three different brain regions) were first split into subsets according to their biological content. The datasets were then sampled as follows: when the number of replicates was ≤ 29, 15 replicates were selected randomly. When the number of replicates was ≥ 30, 15 replicates were selected randomly a first time, and a second time in the remaining replicates, and so on for 45, 60 replicates or more.

The resulting "Expression sets" were appended in a single matrix (named DB below) of 2 × 15 columns (replicates) collecting 1,292,414 lines (DB-probesets). The D/S ratio was computed for each DB-probeset, where D is the difference between the means and S is the square root of the mean of the variances under the two experimental conditions. The matrix DB was sorted according to the |D/S| value, from the top, corresponding to the most over- or under-expressed genes (relative to their standard error), to the bottom, corresponding to non differentially-expressed genes (figure 1). Subset matrices were sampled randomly 5 times from DB as follows: the dimensions were set at 20,000 DB-probesets and 2 × n replicates to correspond roughly to an actual expression set (figure 1). The prevalence, which was set at 1% (200 DB-probesets), is a compromise between having enough genes to accurately compute the frequencies of true and false positives and not so many that the relative homogeneity of D/S within the set is lost. Incidentally, this prevalence is in the order of magnitude of current lists of expected genes of interest in many biological contexts.
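Schematically, the construction of DB and of one subset matrix corresponds to the following NumPy sketch. It is an illustration with stand-in data and invented array names, not the authors' R code; the layout assumes the first 15 columns hold condition 0 and the last 15 hold condition 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def abs_d_over_s(expr, n_per_condition=15):
    """|D/S| for each DB-probeset (row) of a 2 x 15 replicate matrix.

    D is the difference between the two condition means; S is the square
    root of the mean of the two within-condition variances.
    """
    a = expr[:, :n_per_condition]
    b = expr[:, n_per_condition:]
    d = a.mean(axis=1) - b.mean(axis=1)
    s = np.sqrt((a.var(axis=1, ddof=1) + b.var(axis=1, ddof=1)) / 2.0)
    return np.abs(d / s)

# Stand-in for DB (the real matrix has 1,292,414 DB-probesets x 30 replicates).
db = rng.normal(loc=8.0, scale=1.0, size=(50_000, 30))
ranking = np.argsort(-abs_d_over_s(db))      # most differential rows first
db_sorted = db[ranking]

# One 20,000 x 30 subset matrix at 1% prevalence: the 200 top-ranked rows as
# targets, 19,800 background rows sampled from the lower-ranked bulk.
targets = db_sorted[:200]
pool = db_sorted[200:]
background = pool[rng.choice(len(pool), size=19_800, replace=False)]
subset = np.vstack([targets, background])
# For an n-replicate benchmark, keep n randomly chosen columns per condition.
```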
For a relative evaluation of the statistical methods, the D/S threshold was moved according to the increase of n, such that the difficulty to find a set of genes considered as the truth would remain constant if the variance were known. For a given set of parameters n, PPP and (1-β), the D/S threshold was computed and the 200 genes above the limit were selected in DB and considered as the target genes (the true positives). A second D/S threshold was computed to correspond to 0.9 × (1-β), and 19,800 genes considered as the background (the true negatives) were selected randomly in DB under this limit. The genes in the "twilight zone" between the two limits were not considered, to avoid an abrupt transition between both gene statuses.

Finally, to evaluate the absolute performance of the best statistical methods, the D/S threshold was computed for n = 2 and for given combinations of PPP and (1-β), thus keeping the "truth" constant for every set of subset matrices for the run considered; five subset matrices of 15 replicates - instead of two - were generated as described above and resampled for n = 2, 3, up to 10, such that the difficulty to find a set of genes considered as the truth increased according to n, due to the combined effect of improved variance estimation and reduced standard error.

Each subset matrix was treated using the PEGASE software developed in our laboratory (Berger et al., CEJB, under revision). Briefly, several differential expression analysis methods were implemented from scratch and gathered in the R package called PEGASE. Among the methods currently implemented for differential expression analysis are the classic Student t test and Welch correction for heteroscedasticity, SAM, the Regularized t test and the Window t test. The package includes a performance evaluation of the implemented methods when a list of truly differentially-expressed genes is provided. Limma and Shrinkage t are not yet implemented in PEGASE and were downloaded and run stand-alone.

For each combination of parameters and statistical analysis, we computed the observed power, or sensitivity (Equation 6), and the false positive rate (Equation 7) for five samples, for increasing values of α, step by step.

$$\mathrm{Sensitivity} = \frac{TP}{TP + FN} \qquad (6)$$

$$FPR = \frac{FP}{FP + TN} \qquad (7)$$

An ANOVA with 3 fixed criteria (statistical methods, n and runs) was run over the five random samplings of replicates in DB, on the sensitivity computed at 1% FDR. This ANOVA produced a residual mean square (RMS) value corresponding to the error term of each fixed effect. This RMS was used in post-hoc comparisons performed for pairwise comparisons of the methods and for comparisons of each method with the reference method.

### Algorithm

The input used in the algorithm which computed the performance curve coordinates (FDR, sensitivity and specificity) was the full list of p values for each method. As each list of p values does not cover the same range of values, we needed to use the minimum significance value to define the starting point of the procedure. Moreover, as the beginning of the curves is the most informative, corresponding to small p values, we decided to define each step from a regular progression on the logarithm of the p values. The pseudo-code of the algorithm used is described below:

1) Retrieve the minimum p value (min.pval);

2) Compute the logarithm of this minimum value (log.min.pval), with base = 10;

3) Compute log.int = vector with 1000 values defining regular intervals between log.min.pval and 0 (corresponding to the maximal p value = 1);

4) Compute the final list of values defining the intervals (int), using int = 10^log.int;

5) For each value in int, compute FDR, sensitivity, specificity from each list of method-specific p values.
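A direct Python transcription of these steps is sketched below. It is a schematic re-implementation for illustration only (the scripts actually used are in R and available at the URL given in the abstract); `pvals` is a vector of method-specific p values and `is_target` flags the 200 true positives.

```python
import numpy as np

def performance_curve(pvals, is_target, n_steps=1000):
    """FDR, sensitivity and specificity along a grid of p-value cut-offs.

    The grid follows steps 1-4 of the pseudo-code: n_steps regularly spaced
    values on the log10 scale between the smallest p value and 1.
    """
    pvals = np.asarray(pvals, dtype=float)
    is_target = np.asarray(is_target, dtype=bool)
    log_min = np.log10(max(pvals.min(), 1e-300))   # guard against p = 0
    cutoffs = 10.0 ** np.linspace(log_min, 0.0, n_steps)

    n_pos, n_neg = is_target.sum(), (~is_target).sum()
    rows = []
    for c in cutoffs:                              # step 5: one point per cut-off
        called = pvals <= c
        tp = np.sum(called & is_target)
        fp = np.sum(called & ~is_target)
        fdr = fp / max(tp + fp, 1)
        sensitivity = tp / n_pos
        specificity = 1.0 - fp / n_neg
        rows.append((c, fdr, sensitivity, specificity))
    return np.array(rows)
```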
## Results

### Mean - Standard deviation relationship

Figure 2 illustrates the empirical relationship between the average expression level and the standard deviation. Figure 2A represents the relationship observed for the total benchmark dataset. Figure 2B shows the corresponding plot obtained from a subset of the total biological benchmark dataset. Finally, figure 2C presents an example of the same graph generated on a biological dataset (E-MEXP-231 from ArrayExpress) [23, 24] which was not included in the creation of the benchmark dataset.

Interpreted together, the plots shown in figure 2 reveal that the design of the Biological Benchmark from real datasets (2A) leads to a similar expression level/variability dependence compared with real datasets (2C). The definition of subsets based on the D/S statistic, combined with the positive predictive value and power parameters, generates datasets with properties which are similar to real datasets (2B).

### MAplot

An MAplot represents the average log expression (A) versus the average log ratio (M) between conditions. Figure 3C shows the MAplot obtained from the same dataset used for figure 2, after pre-processing (GCRMA). As can be seen in the figure, points are typically widely distributed along the X-axis while being centered around M = 0 on the Y-axis. Figures 3A and 3B show the MAplots from our total benchmark dataset and a subset thereof. The similarity between these two figures and figure 3C highlights that our dataset distributions are close to a real dataset distribution. In comparison, MAplots obtained from spike-in datasets clearly show their biases, especially the extreme fold changes of the Latin Square LS95, and the absence of down-regulated DEGs in the Golden Spike Experiment dataset (LS95, LS133: [10, 11]).

### Volcano plots

Different sets of parameters were tested, and those retained here are the most typical, intermediate values which provide intermediate results. Four runs were performed for increasing difficulties to find the target DB-probesets. For run 1 (PPP = 0.99 and sensitivity = 0.99), the true positives were easy to find and there was little noise. For run 2 (PPP = 0.5 and sensitivity = 0.99), the true positives were easy to find and there was more noise. For run 3 (PPP = 0.99 and sensitivity = 0.5), the true positives were harder to find but there was less noise. Finally, for run 4 (PPP = 0.5 and sensitivity = 0.5), the true positives were difficult to find and there was more noise.

Volcano plots present the DB-probesets in a graph of p values according to a given statistical test versus fold change. In the present context, they represent the increasing difficulty to find true positives through the runs (figure 4).
Typically, interesting features are located in the upper left and right corners of the graphs, as their fold change values (X axis) and p values (Y axis) exceed the usual thresholds used for analysis.

Volcano plots were drawn for the four runs with the Student t test on three replicates. Of the 200 DEGs in run 1, nearly one half (92) of the target DB-probesets (black points) had a p value lower than 10^-2 and the average fold change was -1.19 ± 4.1 (M ± 2 s.d.). As expected, in run 2, fewer target DB-probesets were found to be significant (44) and the average fold change was -0.29 ± 3.82. In run 3, most of the target DB-probesets did not exceed the significance threshold of 10^-2 (29 did) and the average fold change was -0.147 ± 3.44. Finally, under the most difficult conditions (run 4), only a few (12) target DB-probesets still exceeded the statistical threshold and the average fold change was -0.006 ± 1.96. Surprisingly, as revealed by the negative mean values, most of the DEGs were down-regulated, but this fact does not have an impact on our results.

### Relative performances of the statistical methods

Figures 5 and 6 show the change in sensitivity for all (figure 5) or a selection (figure 6) of the methods studied. They present a summary of the information from all of the ROC curves for all n, taken at an FPR equal to the prevalence (1%). This value was chosen because ROC curves become non-informative when the FDR exceeds the prevalence. For more details, see additional file 1.

Run 2 (figure 5) is illustrated for all of the methods tested. As the Shrinkage t and Limma on one hand, and the Regularized t test and Window t test on the other, showed only slight differences between them, only a selection of methods is displayed for purposes of clarity in figure 6 for runs 1, 3 and 4, with the Student t test as the reference.

Though it has become obvious that the Shrinkage t is most often the best method, we show that, when there are only two replicates, the Regularized t test and Window t test are better. We also observed that when there are 10 or more replicates, the choice of the method becomes less important, as all of the methods perform roughly equally.

A ranking of the methods is presented in table 3 for all runs and numbers of replicates (rank 1 being the best). We only show statistical differences found by Dunnet's post-hoc comparison at p ≤ 0.05. We could not highlight any significant difference for run 4. Across all of the statistically relevant data, the Shrinkage t appears to have the lowest mean ranking. It shall therefore be considered as the reference method from now on. However, we noted a striking change between 2 and 3 or more replicates: the Regularized t test and Window t test appear to have the best performances only for n = 2.

### Absolute performance

To assess the absolute performances of the methods tested, we performed a new test such that the difficulty to find a set of genes considered as the truth increases according to n, due to the combined effect of improved variance estimation and reduced standard error (see Implementation).

The ranking of the three methods presented is the same as before: the Shrinkage t is better overall, except for n = 2, where the Regularized t test (superimposed on the Window t test, data not shown) is slightly better.
Moreover, this figure presents the maximal performances of the methods with respect to the run considered, as the truth defined for two replicates is the easiest to recover.

For the Shrinkage t, in a gene list where 1 false positive is expected for 1 true positive, 80% sensitivity is expected for run 1 with n = 3 (fold change -1.234 ± 4.42), for run 2 with n = 4 (fold change -0.812 ± 4.5), for run 3 with n = 7 (fold change -0.46 ± 3.86), and for run 4 only with n > 10 (fold change -0.013 ± 2.3).

## Discussion

Our benchmark dataset is difficult to compare objectively with previously published benchmarks because we used a different approach, in which the definition of the truth is not as straightforward and irrefutable but actual variance is conserved.

It is probable that use of the D/S ratio to infer the truth introduces a bias towards t-like statistics. This is why we only measure performances for such methods. As for individual methods, we think that this bias does not change their ranking.

For example, the Limma method may be favored by this bias, as it relies on the existence of two different distributions (DEG and non-DEG) and the benchmark creates those two distributions using a twilight zone. Thus, the Limma method's performances should be better than those of the Shrinkage t test, which is globally based on the same principles but does not use a pre-defined distribution. However, we show that the Shrinkage t performs slightly but significantly better than Limma.

In the Golden Spike Experiment, the authors compared the Regularized t test with the Student t test and SAM. The relative ranking of the methods was comparable with our results (several methods tested here were not published then). In the original Limma paper, among other results, the authors showed that Limma performs better than the Student t test. This conclusion was in keeping with our findings. In the original Shrinkage t paper, the authors ranked the methods as follows: Shrinkage t similar to, but in some cases better than, Limma, which is better than the Student t test, which is better than SAM.
Such datasets should have the following characteristics: (1) a realistic spike-in concentration, (2) a mixture of up- and down-regulated genes, (3), unrelated fold change and intensity, and (4) a large number of arrays.\n\nHere, we propose a dataset that is not a spike-in dataset, though we believe that it meets the conditions stipulated in the article by Pearson.\n\nSeveral studies (e.g. [2, 3]) on differential expression analysis have postulated a complex relationship between variability and expression level. In some methodologies [2, 3, 5], this empirical relationship was used to improve the assessment of variance in a statistical framework. Spike-in and simulated datasets do not take this empirical relationship into account, compared with the biological benchmark described in this paper. The relationship found in our data (figure 2) reveals that the design of our biological benchmark from real datasets leads to a similar expression level/variability dependence compared with real datasets.\n\nMany factors can influence the variability of expression of probesets, from technical sources to biological properties, and simulation of realistic variance components is not completely straightforward. Genes present both shared and diverging properties. In this context, creation of a benchmark dataset from a repository of biological datasets preserves individual variability properties, as no assumptions on individual variance are needed during the creation of the benchmark dataset. Each potential source of variation is retrieved from real data, thus retaining the contributions from sources of variability, without the need to quantify or list them.\n\nThe MAplots of our datasets show that the genes which we defined as DEGs are present at all concentrations, with variable fold change (1) and meeting point (3). Selected genes were shown to be a mixture of up- and down-regulated genes, meeting point (2). Finally, we performed analyses using a number of replicates going from ten to two by condition, meeting point (4).\n\nWe have shown with the mean versus standard deviation relationship, MAplots and volcano plots that the datasets we built are closer to real datasets in terms of expression and fold-change distribution, than those of spike-in datasets such as the Latin Square HGU95 and HGU133a from Affymetrix or Choe's Golden Spike Experiment. Moreover, the resulting dataset contains biological as well as technical variability and we have shown that it is representative of the mean-variance relationship of real datasets.\n\nROC curves were only used in this work to generate the data used to construct figures 5 and 6. We used the values for a FPR equal to the prevalence, as, above this limit, the number of false negative exceeds the number of positives (see additional file 1 for details).\n\nThese figures present the core benchmarking results. They reveal that, among the methods tested, the Shrinkage t test performs best under all conditions (number of replicates and difficulty to find the truth), although when the number of replicates is very low (< 3), the Regularized t test and Window t test show slightly better performances and when the number of replicates is high (≤ 10), the choice of the method has a lower impact on performance. The reason why the Shrinkage t does not perform well for two replicates is that it does not rely on a pre-defined distributional model. This implies that it needs several replicates to assess this distribution. 
The fact that the Window t test and Regularized t test take the number of replicates into account in the statistic calculation is the reason why they perform better when the number of replicates is low.\n\nWe then computed the absolute performances of three methods. The results presented in figure 7, although limited to one-color arrays under GCRMA as pre-treatment, confirm the trends which we suggested with relative results (figure 5 and 6). Keeping in mind that, in some pathways, even slight differences in gene expression can lead to dramatic changes in terms of metabolic effects, one should be aware that the methods tested here, although among the best available today, could still be greatly improved.\n\nOne could thus raise the question as to the reliability of the results when the number of replicates is low. One way to address this issue would be to adapt the methods to better estimate variance when the number of replicates is low. Another way would be to perform statistical analysis on relevant groups of genes rather than on isolated genes. The design of relevant groups of genes still remains a challenge.\n\n## Conclusions\n\nThe benchmark method proposed here differs from other approaches published, as actual biological and experimental variability is preserved. The obtained Mean - Standard deviation relationships and MAplots confirm that the variance structure of the data we studied is closer to biological data than that of spike-in or simulation studies. One other advantage of the method lies in the fact that virtually all parameters can be fine-tuned, allowing researchers to assess those methods which are truly suited for their particular approaches.\n\nWe applied the benchmark to a set of published methods. The results show better performances for the Shrinkage t test, except when there are only two replicates, where the Regularized t test and Window t test perform better.\n\n## Perspectives\n\nIn order to compare all the analytical methods, including pretreatments, we also plan to modify the way in which the truth is defined in our DB matrix, for example using an in silico spike-in procedure and finding a way to preserve the biological variances associated with the DB-probesets. However this constraint is not trivial to circumvent.\n\nIn this study, we only work with GCRMA as the pretreatment, with a prevalence of 1%. Some authors show that correlations between probesets can also influence performances of the statistical methods, namely favoring the Shrinkage t and Limma. In the future, our work will concentrate on an exhaustive study of the nested effects of those three parameters (pretreatment, prevalence and correlation), but is outside the scope of this paper due to its complexity. In the same way, we could improve the way we present the results by using a classification based on the level of expression for example.\n\n## References\n\n1. Student: The Probable Error of Mean. Biometrika 1908, 6(1):1–25. 10.2307/2331554\n\n2. Baldi P, Long AD: A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics 2001, 17: 509–519. 10.1093/bioinformatics/17.6.509\n\n3. Berger F, De Hertogh B, Pierre M, Gaigneaux A, Depiereux E: The \"Window t test\": a simple and powerfull approach to detect differentially expressed genes in microarray datasets. Centr Eur J Biol 2008, 3: 327–344. 10.2478/s11535-008-0030-9\n\n4. 
Welch BL: The significance of the difference between two means when the populations are inequal. Biometrika 1938, 29(3–4):350–362. 10.1093/biomet/29.3-4.350\n\n5. Jain N, Thatte J, Braciale T, Ley K, O'Connell M, Lee JK: Local-pooled-error test for identifying differentially expressed genes with a small number of replicated microarrays. Bioinformatics 2003, 19: 1945–1951. 10.1093/bioinformatics/btg264\n\n6. Tusher VG, Tibshirani R, Chu G: Significance analysis of microarrays applied to the ionizing radiation response. Proc Natl Acad Sci USA 2001, 98: 5116–5121. 10.1073/pnas.091062498\n\n7. Smyth GK: Linear models and empirical bayes methods for assessing differential expression in microarray experiments. Stat Appl Genet Mol Biol 2004, 3: Article 3.\n\n8. Cui X, Hwang JT, Qiu J, Blades NJ, Churchill GA: Improved statistical tests for differential gene expression by shrinking variance components estimates. Biostatistics 2005, 6: 59–75. 10.1093/biostatistics/kxh018\n\n9. Opgen-Rhein R, Strimmer K: Accurate ranking of differentially expressed genes by a distribution-free shrinkage approach. Stat Appl Genet Mol Biol 2007, 6: Article 9.\n\n10. Choe SE, Boutros M, Michelson AM, Church GM, Halfon MS: Preferred analysis methods for Affymetrix GeneChips revealed by a wholly defined control dataset. Genome Biol 2005, 6: R16. 10.1186/gb-2005-6-2-r16\n\n11. Pearson RD: A comprehensive re-analysis of the Golden Spike data: towards a benchmark for differential expression methods. BMC Bioinformatics 2008, 9: 164. 10.1186/1471-2105-9-164\n\n12. Irizarry RA, Cope LM, Wu Z: Feature-level exploration of a published Affymetrix GeneChip control dataset. Genome Biol 2006, 7: 404. 10.1186/gb-2006-7-8-404\n\n13. Irizarry RA, Wu Z, Jaffee HA: Comparison of Affymetrix GeneChip expression measures. Bioinformatics 2006, 22: 789–794. 10.1093/bioinformatics/btk046\n\n14. Dabney AR, Storey JD: A reanalysis of a published Affymetrix GeneChip control dataset. Genome Biol 2006, 7: 401. 10.1186/gb-2006-7-3-401\n\n15. Parrish RS, Spencer Iii HJ, Xu P: Distribution modeling and simulation of gene expression data. Computational Statistics & Data Analysis 2009, 53: 1650–1660. 10.1016/j.csda.2008.03.023\n\n16. Singhal S, Kyvernitis CG, Johnson SW, Kaiser LR, Liebman MN, Albelda SM: Microarray data simulator for improved selection of differentially expressed genes. Cancer Biol Ther 2003, 2: 383–391.\n\n17. Dagnelie P: Statistique descriptive et bases de l'inférence statistique. Bruxelles: De Boeck; 2007.\n\n18. Barrett T, Troup DB, Wilhite SE, Ledoux P, Rudnev D, Evangelista C, Kim IF, Soboleva A, Tomashevsky M, Marshall KA, Philippy KH, Sherman PM, Muertter RN, Edgar R: NCBI GEO: archive for high-throughput functional genomic data. Nucleic Acids Res 2009, 37: D885–890. 10.1093/nar/gkn764\n\n19. Wu Z, Irizarry RA, Gentleman R, Hernandez D, Gras R, Smith DK, Danchin A: A model-based background adjustement for oligonucleotide expression arrays. J Am Stat Assoc 2005, 8. [http://www.bepress.com/jhubiostat/paper1]\n\n20. Scheffé H: The analysis of variance. New York,: Wiley; 1959.\n\n21. Dunnet CW: A multiple comparison procedure for comparing several treatments with a control. Journal of the American Statistical Association 1955, 50: 26. 10.2307/2281208\n\n22. Yap YL, Lam DC, Luc G, Zhang XW, Hernandez D, Gras R, Wang E, Chiu SW, Chung LP, Lam WK, Smith DK, Minna JD, Danchin A, Wong MP: Conserved transcription factor binding sites of cancer markers derived from primary lung adenocarcinoma microarrays. 
Nucleic Acids Res 2005, 33: 409–421. 10.1093/nar/gki188\n\n23. Parkinson H, Kapushesky M, Shojatalab M, Abeygunawardena N, Coulson R, Farne A, Holloway E, Kolesnykov N, Lilja P, Lukk M, Mani R, Rayner T, Sharma A, William E, Sarkans U, Brazma A: ArrayExpress--a public database of microarray experiments and gene expression profiles. Nucleic Acids Res 2007, 35: D747–750. 10.1093/nar/gkl995\n\n24. Gaigneaux A: Discussion about ROC curves and other figures used to compare microarray statistical analyses. In BBC 2008 conference; Maastricht, Nederlands. BiGCaT. Maastricht University; 2008.\n\n25. Zuber V, Strimmer K: Gene ranking and biomarker discovery under correlation. Bioinformatics 2009, in press.\n\n## Acknowledgements\n\nWe would like to thank Mauro Delorenzi from the SIB (Lausanne, Switzerland), Gianluca Bontempi from the Machine learning group (ULB, Belgium), Jean-Louis Ruelle and Swan Gaulis from GSK biological (Rixensart, Belgium) and Marcel Remon from the Statistics Unit (FUNDP, Namur) for useful discussion and comments. This work is supported by the FRS-FNRS Télévie (B. DM.), GSK Biologicals (Rixensart Belgium) (F.B.), FRIA (M.P.), CTB (E.B.) and DGTRE/BIOXPRs.a. (A.G.).\n\n## Author information\n\nAuthors\n\n### Corresponding author\n\nCorrespondence to Eric Depiereux.\n\n### Authors' contributions\n\nBDH. took part in designing the method we present here, as well as in interpreting results. BDM. scripted the whole methodology, apart from the PEGASE package. He also ran the analysis and took part in designing and interpreting the results. FB scripted the PEGASE package and took part in graphical representation of the results. MP analyzed and interpreted the volcano plots and related data. EB took part in scripting and in collection of the data. AG analyzed and interpreted the MA plots and related data. ED coordinated the whole work and gave final approval for submission. Each author read and approved this manuscript.\n\nBenoît De Hertogh, Bertrand De Meulder contributed equally to this work.\n\n## Electronic supplementary material\n\n### 12859_2009_3474_MOESM1_ESM.DOC\n\nAdditional file 1: Supplementary data. Parameterization. ROC curve analysis details. Figures 5, 6 and 7 with error bars. Calculation of the number of rows used throughout all the analysis. (DOC 620 KB)\n\n## Authors’ original submitted files for images\n\nBelow are the links to the authors’ original submitted files for images.\n\n## Rights and permissions\n\nReprints and Permissions\n\nDe Hertogh, B., De Meulder, B., Berger, F. et al. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance. BMC Bioinformatics 11, 17 (2010). https://doi.org/10.1186/1471-2105-11-17\n\n• Accepted:\n\n• Published:\n\n• DOI: https://doi.org/10.1186/1471-2105-11-17\n\n### Keywords\n\n• Real Dataset\n• Benchmark Dataset\n• Average Fold Change\n• Positive Predictive Power\n• Residual Mean Square", null, "" ]
[ null, "https://media.springernature.com/full/springer-static/image/art%3A10.1186%2F1471-2105-11-17/MediaObjects/12859_2009_Article_3474_Equ1_HTML.gif", null, "https://media.springernature.com/full/springer-static/image/art%3A10.1186%2F1471-2105-11-17/MediaObjects/12859_2009_Article_3474_Equ2_HTML.gif", null, "https://media.springernature.com/full/springer-static/image/art%3A10.1186%2F1471-2105-11-17/MediaObjects/12859_2009_Article_3474_Equ3_HTML.gif", null, "https://media.springernature.com/full/springer-static/image/art%3A10.1186%2F1471-2105-11-17/MediaObjects/12859_2009_Article_3474_Equ4_HTML.gif", null, "https://media.springernature.com/full/springer-static/image/art%3A10.1186%2F1471-2105-11-17/MediaObjects/12859_2009_Article_3474_Equ5_HTML.gif", null, "https://media.springernature.com/full/springer-static/image/art%3A10.1186%2F1471-2105-11-17/MediaObjects/12859_2009_Article_3474_Equ6_HTML.gif", null, "https://media.springernature.com/full/springer-static/image/art%3A10.1186%2F1471-2105-11-17/MediaObjects/12859_2009_Article_3474_Equ7_HTML.gif", null, "https://bmcbioinformatics.biomedcentral.com/track/article/10.1186/1471-2105-11-17", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9236463,"math_prob":0.88735694,"size":43608,"snap":"2022-40-2023-06","text_gpt3_token_len":9301,"char_repetition_ratio":0.14877075,"word_repetition_ratio":0.03798931,"special_character_ratio":0.20819575,"punctuation_ratio":0.098518424,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96536994,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-08T19:19:15Z\",\"WARC-Record-ID\":\"<urn:uuid:a9bfcb80-ddbd-42dd-98d4-6df5b969775f>\",\"Content-Length\":\"321243\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:26f372da-ef3f-4a4b-b525-205ffb91ff9e>\",\"WARC-Concurrent-To\":\"<urn:uuid:f1cf4c67-3b7d-4c98-8998-b55de5113506>\",\"WARC-IP-Address\":\"146.75.32.95\",\"WARC-Target-URI\":\"https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-11-17\",\"WARC-Payload-Digest\":\"sha1:3ARNCVTRDFHSET45JAT3LWTBSYCWR5NQ\",\"WARC-Block-Digest\":\"sha1:P2W4NXOAYF7A4BL7OI2VSJ536UIMGOF6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500904.44_warc_CC-MAIN-20230208191211-20230208221211-00466.warc.gz\"}"}
https://reference.wolfram.com/language/ref/HalfSpace.html
[ "# HalfSpace\n\nHalfSpace[n,p]\n\nrepresents the half-space of points", null, "such that", null, ".\n\nHalfSpace[n,c]\n\nrepresents the half-space of points", null, "such that", null, ".\n\n# Details", null, "• HalfSpace can be used as a geometric region and a graphics primitive.\n• HalfSpace corresponds to half-line or half-infinite interval in", null, ", a half-plane in", null, ", etc.\n•", null, "• HalfSpace represents the set", null, "or", null, ".\n• HalfSpace can be used in Graphics and Graphics3D.\n• HalfSpace will be clipped by PlotRange when rendering.\n• Graphics rendering is affected by directives such as FaceForm, EdgeForm, Opacity, and color.\n• Graphics3D rendering is affected by directives such as Opacity and color.\n\n# Examples\n\nopen allclose all\n\n## Basic Examples(3)\n\nA HalfSpace in 2D:\n\nAnd in 3D:\n\nDifferent styles applied to a half-space region:\n\nDetermine if points belong to a given half-space region:\n\n## Scope(15)\n\n### Graphics(5)\n\n#### Specification(2)\n\nA half-space in 2D defined by a normal vector and a point:\n\nThe same half-space defined by a normal vector and a constant:\n\nDefine a half-space in 3D using a normal vector and a point:\n\nDefine the same half-space using a normal vector and a constant:\n\nHalf-spaces varying in direction of the normal:\n\n#### Styling(2)\n\nColor directives specify the color of the half-space:\n\nFaceForm and EdgeForm can be used to specify the styles of the faces and edges:\n\n#### Coordinates(1)\n\nPoints and vectors can be Dynamic:\n\n### Regions(10)\n\nEmbedding dimension is the dimension of the coordinates:\n\nGeometric dimension is the dimension of the region itself:\n\nPoint membership test:\n\nGet the conditions for membership:\n\nA half-space has infinite measure and undefined centroid:\n\nDistance from a point:\n\nSigned distance from a point:\n\nNearest point in the region:\n\nNearest points:\n\nA half-space is unbounded:\n\nFind the region range:\n\nIn the axis-aligned case:\n\nIntegrate over a half-space:\n\nOptimize over a half-space:\n\nSolve equations over a half-space:\n\n## Applications(5)\n\nVisualize 2D half-planes:\n\nThe upper half-plane:\n\nThe lower half-plane:\n\nThe left half-plane:\n\nThe right half-plane:\n\nVisualize 3D half-spaces:\n\nThe upper half-space:\n\nThe lower half-space:\n\nThe left half-space:\n\nThe right half-space:\n\nThe front half-space:\n\nThe back half-space:\n\nPartition space in a BubbleChart:\n\nCombine the graphics:\n\nAny convex polygon in 2D can be represented as an intersection of half-spaces:\n\nAny convex polyhedron in 3D can be represented as an intersection of half-spaces:\n\n## Properties & Relations(7)\n\nClipPlanes, for a given", null, ", results in a graphic that does not render anything within the", null, ":\n\nHalfSpace is a special case of ConicHullRegion:\n\nHalfSpace is a special case of AffineHalfSpace:\n\nHalfLine is a special case of HalfSpace:\n\nHalfPlane is a special case of HalfSpace:\n\nImplicitRegion can represent any HalfSpace in", null, ":\n\nIn", null, ":\n\nIn", null, ":\n\nParametricRegion can represent any HalfSpace in", null, ":\n\nIn", null, ":\n\nIn", null, ":\n\n## Neat Examples(1)\n\nA collection of random half-spaces in", null, ":\n\nIn", null, ":" ]
[ null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/1.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/2.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/3.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/4.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/details_1.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/5.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/6.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/Image_7.gif", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/8.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/9.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/10.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/11.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/12.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/13.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/14.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/15.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/16.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/17.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/18.png", null, "https://reference.wolfram.com/language/ref/Files/HalfSpace.en/19.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74961615,"math_prob":0.9605279,"size":2139,"snap":"2021-43-2021-49","text_gpt3_token_len":496,"char_repetition_ratio":0.15925059,"word_repetition_ratio":0.14237288,"special_character_ratio":0.20991117,"punctuation_ratio":0.16266666,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98272014,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T22:27:17Z\",\"WARC-Record-ID\":\"<urn:uuid:bbbbcb13-7ca8-47b1-92ef-821bf2e2c8aa>\",\"Content-Length\":\"144856\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:231722d1-142a-49db-aa00-12251f12ba6d>\",\"WARC-Concurrent-To\":\"<urn:uuid:497c1b01-9f84-430a-955b-57af367903f1>\",\"WARC-IP-Address\":\"140.177.205.163\",\"WARC-Target-URI\":\"https://reference.wolfram.com/language/ref/HalfSpace.html\",\"WARC-Payload-Digest\":\"sha1:ILH5GDO3WOL2GIFN36KOE6UP2WMSDOCO\",\"WARC-Block-Digest\":\"sha1:7KC4OCEDFGE2HOY5MSG3O6OWJOJ4PVFW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588244.55_warc_CC-MAIN-20211027212831-20211028002831-00566.warc.gz\"}"}
https://cs.stackexchange.com/questions/50384/how-the-deletion-takes-place-in-b-tree
[ "# How the deletion takes place in B+ Tree\n\nMy professor was giving a lecture on B+ Trees deletion, and I got very confused. According to him for deleting any key from a B+ Tree:\n\n1- First navigate to the leaf *L* where it belongs.\n2- If the *L* is at least half full if you can simply delete it.\n3- If it contains d-1 elements then you need to redistribute and merge.\n\n\nIf you see the below image, here I want to delete 19 and 20 from the B+ Tree.", null, "After deleting 19 and 20 from the B+ Tree.", null, "Question:\n\nI am confused why the redistribution and merging is required here at all? If you just simply delete 19 and 20 from the leaf nodes without any distribution it should work right? Why redistribution is performed here? Could anyone explain?\n\nIs it because the left pointer of 24 is pointing to 20 but no 19. Thats why redistribution is required for 20 but not 19.\n\n• B-trees are documented in many places on the web and in standard textbooks. Have you checked those resources?\n– D.W.\nDec 6, 2015 at 5:10\n• @D.W. B+-trees are less standard and may only be available in database books which may not exhibit the concepts as clearly as an algorithms texbook would. (Guessing here, I haven't actually read a database book.)\n– Raphael\nJan 5, 2016 at 10:02\n\nOkay I understood the issue.\n\nProperties of B+ Tree.\n\n• All leaves should be at the same depth, and the mininum element in each leaf node should be equal to depth of the tree. See the example below:\n\n• All the leaves are in same depth, and here d = 2.\n\n• Each leaf node must contain d number of elements, otherwise redistribution and merging has to be performed.\n• All the data pointers are contained in leaf nodes.\n• All elements should be contained in leaf nodes.\n• There should be between d to 2*d keys at node except possibly the root.\n• There should be between d + 1 to 2*d + 1 child pointers.\n\nIn the B+ Tree given below, each node has 2 and 2*2 data entries except possible the root. Each node has a mininum of 2 keys.\n\nOnly root of the B+ Tree can only fewer than d keys, its the only exception we have.\n\nIn my question, when you remove 19, the property of B+ Tree is not violated but when you remove 20, the total number of elements contained in the node is less than d. Hence redistribution and merging have to be performed so that the property of B+ tree is not violated.", null, "" ]
[ null, "https://i.stack.imgur.com/NBt66.png", null, "https://i.stack.imgur.com/eCgVD.png", null, "https://i.stack.imgur.com/NqBU4.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95054585,"math_prob":0.8466518,"size":813,"snap":"2023-40-2023-50","text_gpt3_token_len":198,"char_repetition_ratio":0.13597034,"word_repetition_ratio":0.05882353,"special_character_ratio":0.25830257,"punctuation_ratio":0.09195402,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95967937,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T16:48:50Z\",\"WARC-Record-ID\":\"<urn:uuid:75eb4987-1ca5-4159-8c68-190676b5e060>\",\"Content-Length\":\"157003\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a015a9a-e5c5-434c-918d-695d6604d8d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:862eecca-59ac-4f9b-929a-a98a2a8a4586>\",\"WARC-IP-Address\":\"104.18.11.86\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/50384/how-the-deletion-takes-place-in-b-tree\",\"WARC-Payload-Digest\":\"sha1:PWH6MRKOYKRPGPWNJOI3N3MVBSKMCNQ5\",\"WARC-Block-Digest\":\"sha1:452M6VM7ZFOUPRY7HWCMJQOD67STC434\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233509023.57_warc_CC-MAIN-20230925151539-20230925181539-00724.warc.gz\"}"}
https://sciencegatepub.com/sgp-books/gcsr/gcsr_vol6/
[ "# Handbook of Fuzzy Sets Comparison – Theory, Algorithms and Applications", null, "## Handbook of Fuzzy Sets Comparison Theory, Algorithms and Applications\n\n### DOI 10.15579/gcsr.vol6\n\nEdited by\n\nGeorge A. Papakostas, Anestis G. Hatzimichailidis and Vassilis G. Kaburlasos\n\n(pages i-v)\n\n#### Chapter 1 – On Constructing Distance and Similarity Measures based on Fuzzy Implications\n\nAbstract\n\nThis chapter deals with the construction of distance and similarity measures by utilizing the theoretical advantages of the fuzzy implications. To this end the basic definitions of fuzzy implications are initially discussed and the conditions of typical distance and similarity measures that need to be satisfied are defined next. On the basis of this theory a straightforward methodology for building fuzzy implications based measures is analysed. The main advantage of the proposed methodology is its generality that makes it easy to be adopted in several types of fuzzy sets.\n\n#### Chapter 2 – Toward a Synergy of a Lattice Implication Algebra with Fuzzy Lattice Reasoning – A Lattice Computing Approach\n\nAbstract\n\nAutomated reasoning can be instrumental in real-world applications involving “intelligent” machines such as (semi-)autonomous vehicles as well as robots. From an analytical point of view, reasoning consists of a series of inferences or, equivalently, implications. In turn, an implication is a function which obtains values in a welldefined set. For instance, in classical Boolean logic an implication obtains values in the set {0, 1}, i.e. it is either true (1) or false (0); whereas, in narrow fuzzy logic an implication obtains values in the specific complete mathematical lattice unit-interval, symbolically [0, 1], i.e. it is partially true/false. A lattice implication algebra (LIA) assumes implication values in a general complete mathematical lattice toward enhancing the representation of ambiguity in reasoning. This work introduces a LIA with implication values in a complete lattice of intervals on the real number axis. Since real numbers stem from real-world measurements, this work sets a ground for real-world applications of a LIA. We show that the aforementioned lattice of intervals includes all the enabling mathematical tools for fuzzy lattice reasoning (FLR). It follows a capacity to optimize, in principle, LIA-reasoning based on FLR as described in this work.\n\n#### Chapter 3 – Relationships Among Several Fuzzy Measures\n\nAbstract\n\nIn fuzzy set theory, similarity measure, divergence measure, subsethood measure and fuzzy entropy are four basic concepts. They surface in many fields, such as image processing, fuzzy neural networks, fuzzy reasoning, fuzzy control, and so on. The similarity measure describes the degree of similarity of fuzzy sets A and B. The divergence measure describes the degree of difference of fuzzy sets A and B. The subsethood measure (also called inclusion measure) is a relation between fuzzy sets A and B, which indicates the degree to which A is contained in B. The entropy of a fuzzy set is the fuzziness of that set.\nThis chapter focuses on discussing relationships among these four fuzzy measures. 
All of the fuzzy measures are discussed on discrete universes here; the cases for continuous universes can be researched similarly.\n\n#### Chapter 4 – Pattern Classification using Generalized Recurrent Exponential Fuzzy Associative Memories\n\nAbstract\n\nGeneralized recurrent exponential fuzzy associative memories (GRE-FAMs) are biologically inspired models designed for the storage and recall of fuzzy sets. They can be viewed as a recurrent multilayer neural network that employs a fuzzy similarity measure in its first hidden layer. In this chapter, we provide theoretical results concerning the storage capacity and noise tolerance of a single-step GRE-FAM. Furthermore, we describe how a GRE-FAM model can be applied for pattern classification. Computational experiments show that the accuracy of certain GRE-FAM classifiers is competitive with some well-known classifiers from the literature.\n\n#### Chapter 5 – Fuzzy Set Similarity using a Distance-Based Kernel on Fuzzy Sets\n\nAbstract\n\nSimilarity measures computed by kernels are well studied and a vast literature is available. In this work, we use distance-based kernels to define a new similarity measure for fuzzy sets. In this sense, a distance-based kernel on fuzzy sets implements a similarity measure for fuzzy sets with a geometric interpretation in functional spaces.\nWhen the kernel is positive definite, the similarity measure between fuzzy sets is an inner product of two functions on a Reproducing kernel Hilbert space. This new view of similarity measures for fuzzy sets given by kernels leverages several applications in areas as machine learning, image processing, and fuzzy data analysis. Moreover, it extends the application of kernel methods to the case of fuzzy data. We show an application of our method in a kernel hypothesis testing on fuzzy data.\n\n#### Chapter 6 – FSSAM: A Fuzzy Rule-Based System for Financial Decision Making in Real-Time\n\nAbstract\n\nThis chapter looks into some problems financial managers face when they have to make decisions in real time while confronted with restrictions, such as coping with imprecise information or processing enormous amount of financial data. However, it is not concerned with existing software systems for supporting investment decisions, neither the ones based on fundamental analysis nor those based on technical analysis of stock markets.\nWhat the chapter describes in detail is a real-time software application – Fuzzy Software System for Asset Management (FSSAM). FSSAM collects and processes the data autonomously, and produces outputs that support the process of financial management.\nThus, the fuzzy rule-based systems (FRBS) are presented as a type of technology which provides tools for overcoming the above-mentioned difficulties, with its unique features, such as the capacity for implementing human knowledge; error tolerance and the ability to, relatively easily, create models of complex dynamic and non-deterministic systems with volatile and/or uncertain parameters.\n\n#### Chapter 7 – Application of Fuzzy Rule Base Design Method\n\nAbstract\n\nIn many classification tasks the final goal is usually to determine classes of objects. The final goal of fuzzy clustering is also the distribution of elements with highest membership functions into classes. The key issue is the possibility of extracting fuzzy rules that describe clustering results. The paper develops a method of fuzzy rule base designing for the numerical data, which enables extracting fuzzy rules in the form IFTHEN. 
To obtain the membership functions, the fuzzy c-means clustering algorithm is employed. The described methodology of fuzzy rule base designing allows one to classify the data. The practical part contains implementation examples." ]
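As a concrete, minimal illustration of the kind of measures the handbook compares, the sketch below implements one classic similarity measure for discrete fuzzy sets (a Jaccard-style ratio of minimum to maximum memberships) and its complementary distance. This is a generic textbook measure, not a construction taken from any particular chapter above.

```python
import numpy as np

def fuzzy_similarity(mu_a, mu_b):
    """Jaccard-style similarity between two discrete fuzzy sets given as
    membership vectors over the same finite universe:
        S(A, B) = sum(min(muA, muB)) / sum(max(muA, muB))
    The value lies in [0, 1] and equals 1 when the membership functions coincide."""
    mu_a, mu_b = np.asarray(mu_a, float), np.asarray(mu_b, float)
    denom = np.maximum(mu_a, mu_b).sum()
    if denom == 0.0:          # both fuzzy sets are empty
        return 1.0
    return np.minimum(mu_a, mu_b).sum() / denom

def fuzzy_distance(mu_a, mu_b):
    """A matching divergence-style measure: one minus the similarity."""
    return 1.0 - fuzzy_similarity(mu_a, mu_b)

A = [0.1, 0.7, 1.0, 0.3]
B = [0.2, 0.6, 0.9, 0.0]
print(round(fuzzy_similarity(A, B), 3), round(fuzzy_distance(A, B), 3))  # 0.727 0.273
```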
[ null, "https://sciencegatepub.com/wp-content/uploads/2020/05/GCSR_Vol06_FCover_1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88934916,"math_prob":0.86788636,"size":12341,"snap":"2021-43-2021-49","text_gpt3_token_len":2503,"char_repetition_ratio":0.13139337,"word_repetition_ratio":0.8191721,"special_character_ratio":0.1870999,"punctuation_ratio":0.10351288,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9848432,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T20:54:54Z\",\"WARC-Record-ID\":\"<urn:uuid:02252a0f-d6cd-4472-a5b6-563e34e13f08>\",\"Content-Length\":\"71063\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0ef3ac86-1516-4d5d-85f4-ee4273cbabaf>\",\"WARC-Concurrent-To\":\"<urn:uuid:0a1bd884-39e4-4936-95b8-252e920745de>\",\"WARC-IP-Address\":\"185.4.133.228\",\"WARC-Target-URI\":\"https://sciencegatepub.com/sgp-books/gcsr/gcsr_vol6/\",\"WARC-Payload-Digest\":\"sha1:HR6IGII3X2R4GSJLC2RHKHPFL73WFPEF\",\"WARC-Block-Digest\":\"sha1:PQVSTK7SHWZSJC3JP5NWEWG4BCZECU4M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585441.99_warc_CC-MAIN-20211021195527-20211021225527-00120.warc.gz\"}"}
https://ittutorialpoint.com/binance-api-apierrorcode-1111-precision-is-over-the-maximum-defined-for-this-asset-python/
[ "# Binance API: APIError(code=-1111): Precision is over the maximum defined for this asset. || Python\n\nWe Are Going To Discuss About Binance API: APIError(code=-1111): Precision is over the maximum defined for this asset. || Python. So lets Start this Python Article.\n\n## Binance API: APIError(code=-1111): Precision is over the maximum defined for this asset. || Python\n\n1. How to solve Binance API: APIError(code=-1111): Precision is over the maximum defined for this asset. || Python\n\nBased on your code and a bit of familiarity with trading crypto I am assuming that the 'balance' you are calculating is your USDT balance and not your ADA balance via `i_o = float(bal[\"free\"])` which would mean that you are placing a buy order for an amount of ADA equal to your current USDT balance (rounded) — are you sure that's what you want to be doing?\nTo more directly answer your question as to why you are getting that error, I can only deduce that you must be calculating the price at which you would like to buy using a function that includes division and that your resultant `v_min` is ending up being something like 1.32578935987532098325 and not 1.2.\nTherefore, you would like to also round it, for your convenience:\n`price = float(round(v_min,8)) ,`\n\n2. Binance API: APIError(code=-1111): Precision is over the maximum defined for this asset. || Python\n\nBased on your code and a bit of familiarity with trading crypto I am assuming that the 'balance' you are calculating is your USDT balance and not your ADA balance via `i_o = float(bal[\"free\"])` which would mean that you are placing a buy order for an amount of ADA equal to your current USDT balance (rounded) — are you sure that's what you want to be doing?\nTo more directly answer your question as to why you are getting that error, I can only deduce that you must be calculating the price at which you would like to buy using a function that includes division and that your resultant `v_min` is ending up being something like 1.32578935987532098325 and not 1.2.\nTherefore, you would like to also round it, for your convenience:\n`price = float(round(v_min,8)) ,`\n\n## Solution 1\n\nBased on your code and a bit of familiarity with trading crypto I am assuming that the ‘balance’ you are calculating is your USDT balance and not your ADA balance via `i_o = float(bal[\"free\"])` which would mean that you are placing a buy order for an amount of ADA equal to your current USDT balance (rounded) — are you sure that’s what you want to be doing?\n\nTo more directly answer your question as to why you are getting that error, I can only deduce that you must be calculating the price at which you would like to buy using a function that includes division and that your resultant `v_min` is ending up being something like 1.32578935987532098325 and not 1.2.\n\nTherefore, you would like to also round it, for your convenience:\n\n`price = float(round(v_min,8)) ,`\n\nOriginal Author Cfomodz Of This Content\n\n## Solution 2\n\nSolution:\n\nI rounded the variable that specifies the price for each buy and sell order, aswell as `i_o` (“USDT” balance), by 2.\n\nOriginal Author Cfomodz Of This Content\n\n## Solution 3\n\n``````tradingPairs = ['BTCUSDT','ETHUSDT','BNBUSDT']\n\n#Loop though cryptos\n\ninfo = client.futures_exchange_info()\n\nprint(\"Price Pre \",info['symbols']['pricePrecision'])\n\npricePrecision = info['symbols']['pricePrecision']\nquantityS = 5.2\nquantityB = \"{:0.0{}f}\".format(quantityS, pricePrecision)\n``````\n\nOriginal Author Tshepo Phiri Of This 
Content\n\n## Solution 4\n\nCode for getting precision data using python Binance API:\n\n``````from binance.client import Client\nclient = Client()\ninfo = client.futures_exchange_info()\n\nrequestedFutures = ['BTCUSDT', 'ETHUSDT', 'BNBUSDT', 'SOLUSDT', 'DYDXUSDT']\nprint(\n{si['symbol']:si['quantityPrecision'] for si in info['symbols'] if si['symbol'] in requestedFutures}\n)\n``````\n\nOriginal Author mde Of This Content\n\n## Conclusion\n\nSo This is all About This Tutorial. Hope This Tutorial Helped You. Thank You.", null, "" ]
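For readers landing here from the error message itself: APIError(code=-1111) means the order's price or quantity carries more decimal places than the symbol allows. The rounding calls in the solutions above work, but a more robust habit is to quantize values to the symbol's step/tick size. The sketch below is a minimal, hedged example; the commented lines show typical use with python-binance, where the filter field names follow Binance's exchangeInfo response for the spot API and may differ for futures endpoints.

```python
from decimal import Decimal

def quantize_to_step(value, step):
    """Round value DOWN to an exact multiple of the symbol's step/tick size,
    e.g. quantize_to_step(13.576893, "0.01") -> Decimal('13.57').
    Rounding down avoids both precision errors and overspending the balance."""
    value, step = Decimal(str(value)), Decimal(str(step))
    return (value // step) * step

# Typical use with python-binance (sketch; field names follow Binance's
# exchangeInfo response for the spot API):
# info = client.get_symbol_info("ADAUSDT")
# filters = {f["filterType"]: f for f in info["filters"]}
# qty   = quantize_to_step(raw_qty,   filters["LOT_SIZE"]["stepSize"])
# price = quantize_to_step(raw_price, filters["PRICE_FILTER"]["tickSize"])

print(quantize_to_step(1.32578935987532098325, "0.1"))  # 1.3
print(quantize_to_step(123.456789, "0.001"))            # 123.456
```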
[ null, "https://ittutorialpoint.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89176226,"math_prob":0.8101158,"size":3894,"snap":"2022-27-2022-33","text_gpt3_token_len":955,"char_repetition_ratio":0.11825193,"word_repetition_ratio":0.6919275,"special_character_ratio":0.2560349,"punctuation_ratio":0.0988858,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9794847,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-16T04:50:03Z\",\"WARC-Record-ID\":\"<urn:uuid:96f7914f-e4c5-46a9-82e5-302be3d4d51d>\",\"Content-Length\":\"191637\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:abd56595-2bc8-480f-b7f7-9aba78b7a4c0>\",\"WARC-Concurrent-To\":\"<urn:uuid:132b031a-aabe-4dca-9a05-422d37c5419b>\",\"WARC-IP-Address\":\"172.67.135.219\",\"WARC-Target-URI\":\"https://ittutorialpoint.com/binance-api-apierrorcode-1111-precision-is-over-the-maximum-defined-for-this-asset-python/\",\"WARC-Payload-Digest\":\"sha1:GQOWEIPNR7VJUCIXCFRTAD7LR33GQRVU\",\"WARC-Block-Digest\":\"sha1:G7KOK5NJOPG245AFK3INF2K74ZUHQBGF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572220.19_warc_CC-MAIN-20220816030218-20220816060218-00103.warc.gz\"}"}
https://metanumbers.com/55646
[ "## 55646\n\n55,646 (fifty-five thousand six hundred forty-six) is an even five-digits composite number following 55645 and preceding 55647. In scientific notation, it is written as 5.5646 × 104. The sum of its digits is 26. It has a total of 2 prime factors and 4 positive divisors. There are 27,822 positive integers (up to 55646) that are relatively prime to 55646.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 5\n• Sum of Digits 26\n• Digital Root 8\n\n## Name\n\nShort name 55 thousand 646 fifty-five thousand six hundred forty-six\n\n## Notation\n\nScientific notation 5.5646 × 104 55.646 × 103\n\n## Prime Factorization of 55646\n\nPrime Factorization 2 × 27823\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 2 Total number of prime factors rad(n) 55646 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 55,646 is 2 × 27823. Since it has a total of 2 prime factors, 55,646 is a composite number.\n\n## Divisors of 55646\n\n1, 2, 27823, 55646\n\n4 divisors\n\n Even divisors 2 2 1 1\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 4 Total number of the positive divisors of n σ(n) 83472 Sum of all the positive divisors of n s(n) 27826 Sum of the proper positive divisors of n A(n) 20868 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 235.894 Returns the nth root of the product of n divisors H(n) 2.66657 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 55,646 can be divided by 4 positive divisors (out of which 2 are even, and 2 are odd). The sum of these divisors (counting 55,646) is 83,472, the average is 20,868.\n\n## Other Arithmetic Functions (n = 55646)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 27822 Total number of positive integers not greater than n that are coprime to n λ(n) 27822 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 5648 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 27,822 positive integers (less than 55,646) that are coprime with 55,646. 
And there are approximately 5,648 prime numbers less than or equal to 55,646.\n\n## Divisibility of 55646\n\n m n mod m 2 3 4 5 6 7 8 9 0 2 2 1 2 3 6 8\n\nThe number 55,646 is divisible by 2.\n\n## Classification of 55646\n\n• Arithmetic\n• Semiprime\n• Deficient\n\n### Expressible via specific sums\n\n• Polite\n• Non-hypotenuse\n\n• Square Free\n\n## Base conversion (55646)\n\nBase System Value\n2 Binary 1101100101011110\n3 Ternary 2211022222\n4 Quaternary 31211132\n5 Quinary 3240041\n6 Senary 1105342\n8 Octal 154536\n10 Decimal 55646\n12 Duodecimal 28252\n20 Vigesimal 6j26\n36 Base36 16xq\n\n## Basic calculations (n = 55646)\n\n### Multiplication\n\nn×i\n n×2 111292 166938 222584 278230\n\n### Division\n\nni\n n⁄2 27823 18548.7 13911.5 11129.2\n\n### Exponentiation\n\nni\n n2 3096477316 172306576726136 9588171768502563856 533543406230093668330976\n\n### Nth Root\n\ni√n\n 2√n 235.894 38.1778 15.3588 8.89379\n\n## 55646 as geometric shapes\n\n### Circle\n\n Diameter 111292 349634 9.72787e+09\n\n### Sphere\n\n Volume 7.21756e+14 3.89115e+10 349634\n\n### Square\n\nLength = n\n Perimeter 222584 3.09648e+09 78695.3\n\n### Cube\n\nLength = n\n Surface area 1.85789e+10 1.72307e+14 96381.7\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 166938 1.34081e+09 48190.8\n\n### Triangular Pyramid\n\nLength = n\n Surface area 5.36326e+09 2.03065e+13 45434.8\n\n## Cryptographic Hash Functions\n\nmd5 7eeb15d6027a367ea82bc7c5e9f25abd 3a1a83aa65dba3bf1b358e62c07061d611be64a3 425ad33eae7e62354dbd7d4fb4ce96404e69622479effa31ba5dcf9029199b6a 048f9096a771fd53059c718e5023f8e734abaa5ee8f57436f83e67befea8acf266dabbcd58a23459caaf65d67cc18dd4ac3d35476b67e47be15ec9984c776244 88c90385a9052af8531659d523de605f364e88d8" ]
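The headline facts on this page (the factorization 2 × 27823, the four divisors, the divisor sums, and Euler's totient) can be reproduced with a few lines of Python. Below is a minimal sketch using plain trial division; it confirms in passing that 27823 is prime, as stated above.

```python
def factorize(n):
    """Prime factorization by trial division (fine for small n such as 55646)."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def divisors(n):
    return sorted(d for d in range(1, n + 1) if n % d == 0)

n = 55646
f = factorize(n)
divs = divisors(n)
sigma = sum(divs)              # sum of divisors, sigma(n)
phi = n
for p in f:                    # Euler's totient from the factorization
    phi = phi // p * (p - 1)

print(f)                       # {2: 1, 27823: 1}  -> 2 x 27823, a semiprime
print(divs)                    # [1, 2, 27823, 55646]
print(sigma, sigma - n, phi)   # 83472 27826 27822
```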
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.619834,"math_prob":0.9820761,"size":4525,"snap":"2020-34-2020-40","text_gpt3_token_len":1594,"char_repetition_ratio":0.11900022,"word_repetition_ratio":0.02835821,"special_character_ratio":0.4521547,"punctuation_ratio":0.07522698,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9961621,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-04T17:48:28Z\",\"WARC-Record-ID\":\"<urn:uuid:72bad895-5b08-4ac5-8a49-cf242b89d7a6>\",\"Content-Length\":\"47902\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fee00e59-5571-4143-924e-df05c2c711fe>\",\"WARC-Concurrent-To\":\"<urn:uuid:50c2fce8-c357-4278-81ea-65812928c2aa>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/55646\",\"WARC-Payload-Digest\":\"sha1:IHHFBN423IY57YYF7YZH6AA7HR6ETGA6\",\"WARC-Block-Digest\":\"sha1:GFTGZONZO7DKVFZYXIGCIPY44QJPOC2T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735881.90_warc_CC-MAIN-20200804161521-20200804191521-00593.warc.gz\"}"}
https://crypto.stackexchange.com/questions/8454/what-security-authorities-and-standards-reject-e-3-in-rsa-when-and-with-what
[ "# What security authorities and standards reject $e=3$ in RSA, when, and with what rationale?\n\nIn RSA, some security authorities and/or standards allow the public exponent $e=3$, others require or recommend $e>2^{16}$ (or perhaps some other minimum). I gathered the following:\n\n• PKCS#1 allows $e=3$ for both RSA digital signature and encryption (but see 3. below).\n• ISO/IEC 9796-2 allows $e=3$ (in the context of RSA digital signature).\n• FIPS 186-4 section B.3.1 requires $e>2^{16}$ (in the context of RSA digital signature); no rationale is given.\n• ANSSI's RGS 1.0 annex B1 (French official recommendations), section 2.2.1.1, requires $e>2^{16}$ for encryption, and recommends it for every application of RSA. The rationale mentions existing attacks on RSA encryption schemes with very small exponents, but they are left unspecified.\n\nI'm asking the status with other standards and authorities, and any justification to the ban of low public exponent they give, or otherwise exists, including in the context of attacks on implementations (e.g. side-channel attacks).\n\nUltimately, I want to understand the conditions to use RSA with $e=3$ safely, and inasmuch as possible without clash with official security recommendations, or at least their rationale. That's because I am considering using $e=3$ for some RSA digital signature scheme, and for authentication based on RSA encryption of a random challenge. In such applications, $e=2^{16}+1$ would make the verifier's job like eight times slower than $e=3$.\n\nMy list of reasons not to use low public exponent, in particular $e=3$, has grown to:\n\n1. RSA without padding is vulnerable to a non-modular $e^{th}$ root attack, for some bound on the size of input which is a concern for low $e$ only.\n2. RSA encryption is vulnerable when sending the same message to $e$ recipients using the same padding for each recipient. For this (and a gentle introduction to the attack in 3. below) see Dan Boneh's Twenty Years of Attacks on the RSA Cryptosystem, section 4. Note: a nice comment by @CodesInChaos explain how the recipient's public key could be used rather than randomness to fix the multiple-recipients vulnerability; however some randomness is still required for semantic security, as in any public-key encryption scheme.\n3. With less than about $n/e^2$ bits of random padding (where $n$ is the bit length of the public modulus N), RSA encryption is vulnerable; see Don Coppersmith's Small solutions to polynomial equations, and low exponent RSA vulnerabilities. This bound has been extended in practical use cases by Coron, Joye, Naccache and Paillier in New Attacks on PKCS#1 v1.5 Encryption, assuming that a suitable section of the plaintext is all-zero. PKCS#1v2.2 now warns to guard against these attacks when using RSAES-PKCS1-V1_5 in combination with low public exponent, and recommends not using this scheme to encipher arbitrary plaintext (which, contrary to random keys, could exhibit the characteristic enabling the new attack, which remains threatening to some lesser degree for any public exponent).\n4. Some questionable RSA signature padding schemes are worse with low exponents. An example is the INCITS/ISO/IEC 9796:1991 digital signature standard (also in section 11.3.5 of the Handbook of Applied Cryptography), that was withdrawn following attacks: the padding scheme turned out to be slightly worse for $e=3$ than for $e=2^{16}+1$ (forgery from the signature of a single chosen messages for $e=3$, versus a grand three chosen messages for $e=2^{16}+1$).\n5. 
(latest update) A general class of attacks based on factorization of poorly padded messages, introduced by Desmedt and Odlyzko in A chosen text attack on the RSA cryptosystem and some discrete logarithm schemes, is perhaps (I have not made my mind) slightly easier for low public exponent $e$, in particular when applied to chosen-message attacks on some ad-hoc signature schemes, like ISO/IEC 9796-2 scheme 1, as in this attack (because the limiting step is picking a non-trivial linear combination of sparse vectors summing to zero, with elements of the vectors in $\\mathbb Z_e$).\n6. (update) Some attacks on implementations based on partial information about the private key (e.g. obtained by approximate extraction of DRAM content by cold-boot attack) have reported cost growing with $e$; e.g. Heninger and Shacham's Reconstructing RSA Private Keys from Random Key Bits, and perhaps Constantinos Patsakis's RSA private key reconstruction from random bits using SAT solvers.\n\nWith the exception of attack on implementations, I have so far located no attack enabled by low RSA public exponent:\n\n• in an encryption scheme raising an essentially random element in $\\mathbb Z_N$ to the public exponent, as in naked RSA with the message random and about the size of the public modulus; RSAES-PKCS1-V1_5 when enciphering random plaintext of any size; and RSAES-PSS with any plaintext;\n• in a signature scheme with an otherwise fully unbroken padding, including those randomized following the principle of full domain hash (giving a strong argument of equivalence to the underlying RSA problem with $e=3$), such as RSASSA-PSS of PKCS#1v2, and ISO/IEC 9796-2 schemes 2 and 3 (introduced in the 2002 edition, unmodified in the 2010 edition; scheme 1, also known as ISO/IEC 9796-2:1997, does not have such proof).\n• You don't need random padding to avoid the $e$ recipients vulnerability. You could make the padding depend on the recipient's public key e.g. hash(e||n). – CodesInChaos May 25 '13 at 17:42\n• I would guess that the differences in the standards you posted is simply a matter of how conservative about security the standards committees are. Like you, I haven't found any fatal attack against RSA with $e=3$ assuming proper padding, but I suspect the standards that require $e>2^{16}$ are nervous about all of the potential traps an implementer could fall into with $e=3$. – Reid May 25 '13 at 18:26\n• Honestly, I think everything below the horizontal rule would make a great self-answer to this question. – Reid May 27 '13 at 2:03\n• @Reid: I'll think about making the second part a separate answer/community wiki. But it is not a satisfactory answer, at least yet. I am afraid that I do not have a complete list of relevant attacks, and more generally reasons to avoid very low public exponents. In particular I did not touch hardness of the RSA problem for random argument w.r.t. low public exponent; and implementation attacks. And there are other standards and official recommendations. – fgrieu May 27 '13 at 4:36\n• @Gilles: the question linked in your comment is less focused. It is also (and the answers are mainly) about RSA with small private exponent. This, in summary, is unsafe when used to a degree such that it offers a worthwhile speed advantage. – fgrieu May 28 '13 at 10:48\n\nThe advice to avoid $e=3$ comes down primarily to superstition, historical inertia, and general caution, rather than anything with a solid technical basis.\n\nHistorically, some of the early schemes that used $e=3$ were subject to attack. 
At the time, many folks drew the conclusion that this means $e=3$ is insecure. However, we now know that that conclusion was faulty. We now know that the real problem was failure to use a secure padding scheme.\n\nAs long as you use a secure padding scheme, using $e=3$ is perfectly safe. So, use any well-regarded provably-secure padding scheme, and $e=3$ is fine.\n\nIn fact, there's no real reason why you need to use $e=3$, either. If you want to squeeze out every last bit of possible performance out of the verification operation, there are perfectly safe variations on RSA that effectively use $e=2$. You have to make some slight tweaks to the scheme (to account for the fact that $\\gcd(e,\\varphi(n))\\ne 1$), but it's been worked out how to do that. If you want to make verification as fast as humanly possible, Dan Bernstein has several papers that show how to do this, by using a variety of tricks: one trick is to use $e=2$; another trick is to check the verification condition (namely, $s^2=H(m)$) modulo a small secret prime. See his papers for more details.\n\n• Nice to see this voiced in no uncertain terms! That makes me even more willing to trust \"full domain\" padding schemes with $e=3$ (despite my current lack of understanding of the debate on theoretical hardness of the RSA problem for $e=3$ vs random $e$). But there's also the issue of implementations attacks. Perhaps $e=3$ makes side channel leakage in encryption more of a concern? – fgrieu May 28 '13 at 10:27\n• On using $e=2$: I love Rabin schemes for their performance. But there's a very real practical issue: current lack of support in commercial security-evaluated devices (Smart Cards, HSMs) of even standardized schemes (ISO/IEC 9796-2 with $e=2$). Also the common description of this is one mistake/fault/weakness away from total disaster (messing up the Jacobi evaluation reveals the secret key, so does an attack on padding); and Jacobi evaluation (required for semantic security of encryption AFAIK, and also a concern when signing secret material) has its channel leakage hardly explored. – fgrieu May 28 '13 at 10:32\n• @fgrieu Personally, when I see a string of partial attacks on e=3, that leads me to worry about future development of more general attacks. Why not reduce the risk now at minimal cost by using a larger exponent? – Antimony May 29 '13 at 6:18\n• @Antinomy: the \"partial attacks on e=3\" we have (at least the ones not related to implementation weaknesses) are best (IMHO) described as attacks on ad-hoc padding schemes, that happen to be slightly facilitated by $e=3$. But padding schemes with a security argument (or \"proof\") have one that holds for $e=3$, so why reject $e=3$ with such schemes when $e=3$ has other serious advantages, like being 8 times faster on the public-key side? – fgrieu May 29 '13 at 6:22\n• @fgrieu: I wonder why the 8x speed difference on the public-key side isn't viewed as significant? If key length is limited by the time budget for public key operations, an 8x increase in performance would more than double the size of key one could process within the budgeted amount of time. – supercat May 29 '18 at 18:58\n\nRather than making an overly long question even longer, I post this as an answer.\n\nAs part of the update process of the French security recommendations linked in the question, I suggested (June 2013) a waiver for the requirement/recommendation that $e>2^{16}$ when using a padding scheme with a security proof. It was kindly refused (within 6 weeks), with rationale. 
The updated rules and recommendations V2.03 (in French) forming Annex B1 of the general security referential RGS V2 (in French), as approved (in French) June 2014, is identical in this regard [a noticeable change though: 2048-bit RSA is deemed good to 2030 rather than 2020 in the previous edition].\n\nHere is the rationale, paraphrased and condensed to $2\\over5$ (originally in French, attributed to the cryptographic laboratory of ANSSI / SDE / ST / LCR):\n\n1. An argument can me made that no attack effective against RSA with $e=3$ and a padding scheme with security proof is currently known. However, it is generally useful to keep some safety margin w.r.t. the state of the art in cryptanalytic attacks, as such margin could minimize the impact of new cryptanalytic advances, should they occur.\n2. While the \"classic\" security proof of RSA-OAEP works assuming the hardness of the RSA problem for random plaintext, independent of the magnitude of $e$, recent articles [KOS10, LOS13] suggest a proof of RSA-OAEP in the standard model based on the \"Phi-Hiding\" hypothesis. That proof requires a large enough $e$ and gives no assurance for low $e$. Therefore, from the standpoint of provable security, large $e$ arguably gives increased security insurance.\n3. Security proof of RSA-OAEP is not an absolute insurance of security, for there can be attacks outside a proof's framework. Examples of unaccounted parameters are errors in data formatting, use of a poor RNG or hash function (the \"classic\" security proof of RSA-OAEP uses a random oracle model).\n4. A review of the rich existing literature suggest low exponents are more vulnerable. For example, many attacks on RSA PKCS#1 v1.5 [CFPR96, CNJP00, BCNTV10] apply for small $e$ only.\n5. The recommendation makes a satisfactory distinction between encryption (where $e<2^{16}+1$ is prohibited) and signature (where such $e$ is only discouraged), because:\n• More attacks exploiting low $e$ have been proposed against encryption schemes than against signature schemes; that may be because attacks against encryption in general are easier to perform and harder to prevent.\n• Gradual weakening could prove much more a problem for an encryption scheme than for a signature scheme. That's because a successful attack (e.g. against RSA-OAEP) would compromise long-term confidentiality of past messages; while in the case of signature, there are many application for which a relatively short validity of signatures is acceptable (e.g. access control); or, it might be acceptable to increase the validity of a signature made with an obsolete mechanism by having an authority re-issue a signature before the attack becomes realistic.\n\nComment: I very much appreciate the balanced arguments in this rationale. In point 3, I interpret errors in data formatting [en Français: erreurs de formatage des données] to include fault injection, which indeed is an area where $e=3$ could facilitate attacks against a scheme (signature or encryption) with provable security and randomized message representative. My opinion remains that beyond fault injection and side-channels (timing, power analysis and friends), emergence of such attack is highly implausible." ]
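As a concrete companion to item 1 in the question's list of reasons, here is a minimal Python sketch of the non-modular e-th root ("cube root") attack on textbook (unpadded) RSA with e = 3 and a short message. It is illustrative only: the modulus below is an arbitrary large odd number standing in for a public modulus (the attack never touches the private key), and any of the padding schemes with security proofs discussed above defeats it by making the padded message comparable in size to the modulus.

```python
def int_root(x, k):
    """Exact integer k-th root by binary search: largest r with r**k <= x."""
    lo, hi = 0, 1 << (x.bit_length() // k + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** k <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

e = 3
# Any modulus much larger than m**e will do for this demonstration; this is an
# arbitrary 2048-bit odd number, not a well-formed RSA modulus.
n = 2**2048 - 1942289

m = int.from_bytes(b"a short secret", "big")   # short message, no padding
c = pow(m, e, n)                               # "textbook" RSA encryption

assert m ** e < n                              # the whole problem in one line
recovered = int_root(c, e)                     # non-modular e-th root of c
print(recovered.to_bytes((recovered.bit_length() + 7) // 8, "big"))  # b'a short secret'
```

Because m**3 never wraps around the modulus, the ciphertext is just the integer cube of the message, and an exact integer cube root recovers it without any factoring.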
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8862132,"math_prob":0.8830305,"size":5229,"snap":"2019-51-2020-05","text_gpt3_token_len":1230,"char_repetition_ratio":0.12382775,"word_repetition_ratio":0.0049321824,"special_character_ratio":0.23178428,"punctuation_ratio":0.10224949,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.968539,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-23T20:55:47Z\",\"WARC-Record-ID\":\"<urn:uuid:a75392f3-95a5-44d5-8b6b-1c0aeeb4bce5>\",\"Content-Length\":\"175152\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:43343398-cce7-4258-b805-b6fc7558d622>\",\"WARC-Concurrent-To\":\"<urn:uuid:1e1f0728-7388-4bf9-a66b-1b8f608b661b>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/8454/what-security-authorities-and-standards-reject-e-3-in-rsa-when-and-with-what\",\"WARC-Payload-Digest\":\"sha1:Z5CUYMTG5VUDW4NZEDFMDZIDCQBRNUYB\",\"WARC-Block-Digest\":\"sha1:VCRXCPFQG6S4N4P5JIHKBV64NGSDC3HB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250613416.54_warc_CC-MAIN-20200123191130-20200123220130-00216.warc.gz\"}"}
https://brilliant.org/practice/instantaneous-rate-of-change/?subtopic=differentiation&amp;chapter=derivatives-2
[ "", null, "Calculus\n\nInstantaneous Rate of Change\n\nLet $f(x)=6 x^2+7x-5$. Then if the average rate of change of $f(x)$ when $x$ changes from $0$ to $18$ is the same as the rate of change of $f(x)$ at $x=a$, what is the value of $a$?\n\nWhat is the rate of change of $y = \\frac{x(x-5)^2}{(x+3)^3}$ at $x = 1$?\n\nFor a function $f(x)= x^2+px+q$, if the average rate of change of $f(x)$ when $x$ changes from $a$ to $b$ is the same as the rate of change at $x=c$, what is the value of $c$?\n\nA person $180$ cm tall walks away from the base of a streetlight $4$ m high. If she follows a straight line at a velocity of $121$ m/min, what is the rate of change of the length of her shadow (in m/min)?", null, "What is the rate of change of the function $y = \\ln ( 8 x)$ when $x = \\frac{ 1}{16}$?\n\n×" ]
[ null, "https://ds055uzetaobb.cloudfront.net/brioche/chapter/Derivatives-7GB6xT.png", null, "https://ds055uzetaobb.cloudfront.net/brioche/solvable/images/Streetlamp_p-7lUHJi11xJ.JPG", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8999835,"math_prob":1.0000097,"size":700,"snap":"2019-43-2019-47","text_gpt3_token_len":167,"char_repetition_ratio":0.22270115,"word_repetition_ratio":0.44055945,"special_character_ratio":0.25142857,"punctuation_ratio":0.15568863,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000099,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T08:32:37Z\",\"WARC-Record-ID\":\"<urn:uuid:78d4957f-cba6-400c-87a8-aeab984b81b9>\",\"Content-Length\":\"99143\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ecbb6d03-17d5-4e5e-834d-4cc1aeb6014e>\",\"WARC-Concurrent-To\":\"<urn:uuid:26b2069c-ce75-4ce6-a3d8-f26eebb2c997>\",\"WARC-IP-Address\":\"104.20.35.242\",\"WARC-Target-URI\":\"https://brilliant.org/practice/instantaneous-rate-of-change/?subtopic=differentiation&amp;chapter=derivatives-2\",\"WARC-Payload-Digest\":\"sha1:7U7GR2ZFXYZOU3I4MP6IIKV5KIO7OTPS\",\"WARC-Block-Digest\":\"sha1:DCVZLNMNHV4OSCZUNOX6C3D6PDFO2GOY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986692126.27_warc_CC-MAIN-20191019063516-20191019091016-00173.warc.gz\"}"}
https://fr.mathworks.com/academia/books/matlab-attaway.html
[ "", null, "MATLAB: A Practical Introduction to Programming and Problem Solving, 5e\n\nMATLAB: A Practical Introduction to Programming and Problem Solving guides the reader through both programming and built-in functions to easily make full use of the extensive capabilities of MATLAB for tackling engineering and scientific problems. Assuming no knowledge of programming, this book starts with programming concepts, such as variables, assignments, and selection statements; moves on to loops; then solves problems using both the programming concept and the power of MATLAB. The fifth edition has been updated to reflect the functionality of MATLAB R2018a, including the addition of local functions in scripts, the new string type, coverage of recently introduced functions to import data from web sites, and updates to the MATLAB Live Editor and App Designer.\n\nNew to the Fifth Edition\n\n• Use of MATLAB R2018a\n• A revised Text Manipulation chapter, which includes manipulating character vectors as well as the new string type\n• Introduction to alternate MATLAB platforms, including MATLAB Mobile\n• Local functions within scripts\n• The new output format for most expression types\n• Introduction to the RESTFUL web functions which import data from web sites\n• Increased coverage of App Designer\n• Introduction to recording audio from a built-in device such as a microphone\n• Modified and new end-of-chapter exercises\n• More coverage of data structures, including categorical arrays and tables\n• Increased coverage of built-in functions in MATLAB\n\nStormy Attaway, Boston University\n\nButterworth-Heinemann, 2019\n\nISBN: 9780128163450\nLanguage: English\n\nMATLAB Courseware\n\nTeaching materials based on MATLAB and Simulink.\n\nTrials Available\n\nTry the latest MATLAB and Simulink products.\n\nGet trial software" ]
[ null, "https://fr.mathworks.com/content/mathworks/fr/fr/academia/books/matlab-attaway/_jcr_content/cover.img.png/1545422728976.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8937912,"math_prob":0.47472638,"size":1455,"snap":"2019-26-2019-30","text_gpt3_token_len":269,"char_repetition_ratio":0.13645762,"word_repetition_ratio":0.0,"special_character_ratio":0.1814433,"punctuation_ratio":0.07234043,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9892461,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-24T13:54:49Z\",\"WARC-Record-ID\":\"<urn:uuid:c07fba0f-0a9b-4aff-9ff4-16e1dd15c69b>\",\"Content-Length\":\"44975\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f4c61759-b843-4f9d-84f8-762748e8dc2d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a552685-5322-4d5b-af7d-c6b0189463a3>\",\"WARC-IP-Address\":\"23.50.112.17\",\"WARC-Target-URI\":\"https://fr.mathworks.com/academia/books/matlab-attaway.html\",\"WARC-Payload-Digest\":\"sha1:E4L6PAJ6HJVWGWU253FBHNVABAEENK2V\",\"WARC-Block-Digest\":\"sha1:6FT5NSAQC3RVIDRAZXINGV2T6OL3JBCA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999539.60_warc_CC-MAIN-20190624130856-20190624152856-00207.warc.gz\"}"}
https://apo.studysixsigma.com/category/how-to/analyze-phase/
[ "# Archive | Analyze Phase", null, "## Chi Square Tests with SigmaXL\n\nChi Square (Contingency Tables) We have looked at hypothesis tests to analyze the proportion of one population vs. a specified value, and the proportions of two populations, but what do we do if we want to analyze more than two populations? A chi-square test is a hypothesis test in which the sampling distribution of the […]", null, "## Median Test with SigmaXL\n\nWhat is Mood’s Median Test? Mood’s median test is a statistical test to compare the medians of two or more populations. The symbol k is the number of groups of our interest and is equal to or greater than two. Mood’s median is an alternative to Kruskal–Wallis. For the data with outliers, Mood’s median test […]", null, "## Kruskal Wallis with SigmaXL\n\nKruskal–Wallis One-Way Analysis of Variance The Kruskal Wallis one-way analysis of variance is a statistical hypothesis test to compare the medians among more than two groups. It is an extension of Mann–Whitney test. While the Mann–Whitney test allows us to compare the samples of two populations, the Kruskal–Wallis test allows us to compare the samples […]", null, "## Paired t Test with SigmaXL\n\nPaired t Test The third type of a Two Sample t-Test is the Paired t Test.  This test is used when the two populations are dependent of each other, so each data point from one distribution corresponds to a data point in the other distribution. When using a paired t test, the test statistic is calculated […]" ]
[ null, "https://www.leansigmacorporation.com/wp/wp-content/uploads/2016/01/Chi-Square-EQ1.png", null, "https://www.leansigmacorporation.com/wp/wp-content/uploads/2015/12/Median-Test-SXL_01.png", null, "https://lsc.studysixsigma.com/wp-content/uploads/sites/6/2015/12/cropped-equation.png", null, "https://www.leansigmacorporation.com/wp/wp-content/uploads/2015/12/Paired-t-Test-SXL_01.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86529166,"math_prob":0.9086211,"size":1957,"snap":"2020-45-2020-50","text_gpt3_token_len":439,"char_repetition_ratio":0.12852022,"word_repetition_ratio":0.18507463,"special_character_ratio":0.19519673,"punctuation_ratio":0.044077136,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98650956,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T18:40:47Z\",\"WARC-Record-ID\":\"<urn:uuid:a22036eb-8296-4d82-8454-1174060d73e2>\",\"Content-Length\":\"55449\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6099841d-350e-45dd-9f89-a765f91aab6a>\",\"WARC-Concurrent-To\":\"<urn:uuid:a3d17648-b11a-45c3-bec6-41a546ce5549>\",\"WARC-IP-Address\":\"67.227.137.173\",\"WARC-Target-URI\":\"https://apo.studysixsigma.com/category/how-to/analyze-phase/\",\"WARC-Payload-Digest\":\"sha1:ZZR6HBKJ6GJUWWCJNUKJXF6ISZGN62EC\",\"WARC-Block-Digest\":\"sha1:WUK6K5E2CBADFQ5LU2CLSWFTZR55M3OZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141748276.94_warc_CC-MAIN-20201205165649-20201205195649-00073.warc.gz\"}"}
https://www.analyticsvidhya.com/blog/2021/07/an-ultimate-guide-to-opencv-learning-libraries-1-0/
[ "Saakshi Malhotra — Published On July 1, 2021\n\nThis article was published as a part of the Data Science Blogathon\n\n## Introduction\n\n“Vision is an Art of Seeing what is invisible to others.”  – Jonathan Swift\n\nComputer Vision enables computers to see what is invisible to them and this technology has been evolved so much for humans to see things that are not even there. For example, a black and white version of a Colored Image.\n\nThere are many Computer Vision and Machine Learning Libraries that allow us to do this. Like: OpenCV, Korina, Scikit – Image, PIL, SimpleCV, PG Magic.\n\nThis is a series, we will be learning about the basic implementation and features of these libraries starting with OpenCV.\n\n## Table contents\n\n1.  Image Theory\n2. I/O Image, Video, and Webcam\n3.  Basic functions\n4.  Resizing and Cropping Image\n5.  Adding text to images\n\nBefore we get started with using computer vision, let us get familiar with the most basic understanding of an image.\n\n## The Image Theory\n\nFor a machine, an Image is interpreted as a two-dimensional matrix formed with rows and columns of pixels.\n\nPixel: here refers to the smaller to the smallest accessible part of an image.\n\nImage Resolution: Image Resolution tells us the number of pixels in an image.\n\nIt is defined by the width and the height of an image\n\nSome common types of image resolutions are :\n\n• VGA (Video Graphics Array) : 640 X 480\n• HD (High Definition ): 1280 X 720\n• FHD (Full High Definition) : 1920 X 1080\n• 4K : 3840 X 2160\n\nBinary Image: A black and white image is called a binary image.\nWhy? As we know images are made of small elements/boxes in a black and white image, a box is either filled with black denoted by zero or it could be white denoted by 1.\nHence, it is referred to as Binary Image.\n\nBut these are not the usual black and white images that we see, they are more detailed.\n\nThat just means it has more levels.\n\n## Setting up the Environment\n\n`import cv2sou`\n\nNote: OpenCV released two python interfaces, cv, and cv2. cv2 is the latest version and we will be using the same.\n\n## I/O Images, Videos, and Webcam\n\nWe have imread() function in OpenCV to read images. But for that we need to create an image variable, let’s say ‘img’.\n\nThe imread() function will take the path of the image file that needs to be read as an argument.\n\n(directly inserting path in imshow() needs double commas “ . “ )\n\nEg : img = cv2.imread(“Resources/lena.png”)\n\nSyntax :\n\n`img = cv2.imread(“ path location of Image folder”)`\n\nOR ( this method of using path variable has single comas ‘ . 
‘ )\n\n```path = ‘Resources/lena.png’\nimg = cv2.imshow(path)```\n\n## Basic Functions\n\n### Converting to different colorspaces\n\nUsing cvtcolor() function is used to convert an image into more than 150 color spaces.\n\nSyntax :\n\n`img= cv2.cvtcolor( prev_img_object, colorspace)`\n\nExample :\n\n```#converts to gray colorspace\nimgGray=cv2.cvtcolor(img,COLOR_BGR2GRAY)\ncv2.imshow(\"Gray Image\",imGray)\ncv2.waitKey(0)```\n\nNote: CONVENTIONALLY we use RBG, BUT in OpenCV, we write it as BGR.\n\n###", null, "Output after applying Gray Scale\n\n### Blur\n\nThe Gaussian Blur function is used to give blur effect to an image.\n\nSyntax :\n\n```imgBlur = cv2.GaussianBlur(prev_img_object, (kernel size, Kernel Size) , sigma x)\n# Kernel size can only be odd numbers\n#Sigma x - controls variance of kernel applied on image```\n\nExample :\n\n```img = cv2.imread (“Resources/lena.png”)\nimgGray= cv2.cvtcolor(img, COLOR_BGR2GRAY)\nimgBlur = cv2.GaussianBlur(imgBlur, (7,7),0)\ncv2.imshow(“Blur Image “ , imgBlur)```\n\n###", null, "Output after Gaussian Blurr\n\n### Canny Edge Detector\n\nCanny Edge Detection is a technique used to extract the structural information, i.e. Edges. It takes grayscale image as input.\n\nSyntax :\n\n`imgCanny = cv2.Canny(prev_img_obj, threshold 1 , threshold 2)`\n\nExample :\n\n`imgCanny = cv2.Canny(img,150,200)`\n\n###", null, "Output for Canny Edge Detector\n\n### Image Dilation\n\nIt is a technique used to expand the pixels of an image along the edges. It is used to thicken the edges in the image to remove the gap in images.\n\nNote: We will be dealing with kernel sizes – matrices, therefore, import NumPy.\n\nThe first step would be to define the kernel.\n\nSyntax :\n\n```Kernel = np.ones( size1, size2)\n#Where size 1 and size 2 are dimensions of matrix size 1 X size2\nimgDialation = cv2.dilate( imgCanny, kernel , interactions =n)```\n\nExample  :\n\n```kernel = np.ones( 5,5) // ones - mean we want all the values of kernel to be 1\nimgDialation = cv2.dilate( imgCanny, kernel , interactions =1)```\n• Used imgCanny as prev object in image dilation as image dilation is applied on the Edges (Canny Edges)\n\n• The number of Iterations influences the thickness of the edges dilated.", null, "Output for Dilation\n\n### Erosion – Complement to Dilation\n\nSince erosion is the opposite of dilation, if we apply erosion to the dilated version of the image, we can get results similar to the Canny Edge version of the Image. Erosion computes the minimum area of a pixel over a given area.\n\nSyntax :\n\n`imgEroded = cv2.erode( imgCanny, kernel , interactions =n)`\n\nExample :\n\n```imgEroded = cv2.erode( imgDilation,kernel , interactions =n)\ncv2.imshow(“Eroded Image “ , imgEroded)```\n\nOutput for Erosion\n\n### Resizing and Cropping\n\nOpenCV Convention\n\n## Resizing Image\n\nNote: To resize the image we need to know the current size of the image. 
For this, we use the shape function.\n\nFinding the current size of the image :\n\nSyntax :\n\n`print(img_name.shape)`\n\nExample :\n\n`img = cv2.imread(“Resources/lambo.png”)`\n```cv2.imshow(“Original Image”,img\n\n#checking the shape of image\nprint(img.shape)```\n\nOutput :\n\n [ 1 ]   462 623 3\n\nNote: Here 462 is the height, 623 in Width, and 3 represents the RBG channels.\n\nResize Function is used to resize the image, it takes the new height and width as parameters.\n\nSyntax :\n\n`ResizedImage = cv2.resize(orgninalimagename,(new height, new width))`\n\nExample :\n\n```imgResize = cv2.resize (img,( 300,200))\nprint(imgResize.shape)```\n Output  :\n [ 1 ] 200 300 3\n\nNote: Here 200 and 300 are the new height and width respectively.\n\n### Cropping Image\n\nImage is an array of pixels, it is a matrix. So we don’t require any special OpenCV function, we can simply do that using matrix functions of Numpy.\n\nSyntax :\n\n`imgCropped = img [ Start_height : End_height , start_width : end_width]`\n\nNote In Open Cv functions we write width first and then the height\n\n## Adding Text to Images\n\nWe use the function cv2.put Text()  to add text to an image. It takes the text to be added, starts coordinates, font type, thickness, and the color of the text in BRG format and the scale – size of the text.\n\nSyntax :\n\n`cv2.putText ( img_obj, “text_here”, (start coordinates) , cv2.Font_Type , scale , (B,R,G),Thickness)`\n\nExample :\n\n`cv2.putText ( img, “ OPENCV ”, (100,200) , cv2.Font_HERSHEY_COMPLEX, 1 , (255,0,0),2)`\n\nThese are some of the fonts and scales available in OpenCV:\n\n• FONT_HERSHEY_SIMPLEX = 0\n• FONT_HERSHEY_PLAIN = 1\n• FONT_HERSHEY_DUPLEX = 2\n• FONT_HERSHEY_COMPLEX = 3\n• FONT_HERSHEY_TRIPLEX = 4\n• FONT_HERSHEY_COMPLEX_SMALL = 5\n• FONT_HERSHEY_SCRIPT_SIMPLEX = 6\n• FONT_HERSHEY_SCRIPT_COMPLEX = 7\n\nAdded Text to display the number of vehicles [Source : GitHub/saakshi077/vehicle-count-model\n\n## Endnote\n\nThank you for reading my first blog 🙂\n\nIn this blog, I explained some basic functions of OpenCV Library to get started with learning and using this library.\nI am hoping to write some more on the same ( will be in continuation of this blog).\n\nAlong with that, I am looking forward to writing the same kind of articles to help you get started with other Computer Vision Libraries.\n\n### Hey! I am Saakshi Malhotra …\n\nA Computer Science student, passionate about Artificial Intelligence, Deep Learning, and Computer Vision.\nI welcome all the suggestions and doubts in the comments section. Thank you for being here : )\n\n### About the Author", null, "" ]
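A few identifiers in the snippets above differ slightly from the actual cv2 API: the conversion function is cv2.cvtColor (not cvtcolor), the color code needs the cv2 prefix, the keyword for dilate/erode is iterations (not interactions), and np.ones expects the kernel shape as a tuple. As a consolidated, hedged sketch, the pipeline below strings the same operations together in runnable form; "Resources/lena.png" is the placeholder path used in the article, so substitute any local image.

```python
# Consolidated, runnable version of the operations described above.
import cv2
import numpy as np

img = cv2.imread("Resources/lena.png")             # read the image from disk
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # convert BGR -> grayscale
imgBlur = cv2.GaussianBlur(imgGray, (7, 7), 0)     # Gaussian blur, 7x7 kernel, sigmaX = 0
imgCanny = cv2.Canny(imgGray, 150, 200)            # Canny edge detection on the grayscale image
kernel = np.ones((5, 5), np.uint8)                 # kernel shape passed as a tuple
imgDilation = cv2.dilate(imgCanny, kernel, iterations=1)   # thicken the detected edges
imgEroded = cv2.erode(imgDilation, kernel, iterations=1)   # thin them back again

imgResize = cv2.resize(img, (300, 200))            # resize takes (width, height)
imgCropped = img[0:200, 100:300]                   # cropping is plain NumPy slicing: [rows, cols]
cv2.putText(img, "OPENCV", (100, 200), cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 0), 2)

cv2.imshow("Gray", imgGray)
cv2.imshow("Edges", imgDilation)
cv2.waitKey(0)
```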
[ null, "https://editor.analyticsvidhya.com/uploads/72085output1.png", null, "https://editor.analyticsvidhya.com/uploads/46059output2.jpg", null, "https://editor.analyticsvidhya.com/uploads/33901output3.png", null, "https://editor.analyticsvidhya.com/uploads/39228dilation.png", null, "https://www.analyticsvidhya.com/wp-content/themes/analytics-vidhya/images/default_avatar.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7833054,"math_prob":0.90296626,"size":7854,"snap":"2022-40-2023-06","text_gpt3_token_len":1980,"char_repetition_ratio":0.11324841,"word_repetition_ratio":0.013793103,"special_character_ratio":0.24738987,"punctuation_ratio":0.15175612,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96729845,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-26T06:24:14Z\",\"WARC-Record-ID\":\"<urn:uuid:4cd6d59a-dbcf-42d2-9a3e-b43656c06c92>\",\"Content-Length\":\"144880\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3f32d5c7-4161-4486-bab8-d075e5c171c3>\",\"WARC-Concurrent-To\":\"<urn:uuid:5bca3b93-4e38-400f-a071-e877260b4974>\",\"WARC-IP-Address\":\"104.22.54.101\",\"WARC-Target-URI\":\"https://www.analyticsvidhya.com/blog/2021/07/an-ultimate-guide-to-opencv-learning-libraries-1-0/\",\"WARC-Payload-Digest\":\"sha1:N4UGTA5EVWVGAELFSEYOW53WKCBJT4OJ\",\"WARC-Block-Digest\":\"sha1:OISG73AQOWBFJ72IAVQPFVDVE6LDMLY2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334802.16_warc_CC-MAIN-20220926051040-20220926081040-00526.warc.gz\"}"}
https://www.edureka.co/blog/data-structures-in-python/
[ "", null, "# Data Structures You Need To Learn In Python\n\nLast updated on Nov 26,2019 34K Views", null, "I love technology and I love sharing it with everyone. I work...\n2 / 62 Blog from Python Fundamentals\n\nPython has been used worldwide for different fields such as making websites, artificial intelligence and much more. But to make all of this possible, data plays a very important role which means that this data should be stored efficiently and the access to it must be timely. So how do you achieve this? We use something called Data Structures. With that being said, let us go through the topics we will cover in Data Structures in Python\n\nThe article has been broken down into the following parts:", null, "So, let’s get started :)\n\n## What is a Data Structure?\n\nOrganizing, managing and storing data is important as it enables easier access and efficient modifications. Data Structures allows you to organize your data in such a way that enables you to store collections of data, relate them and perform operations on them accordingly.\n\n## Types of Data Structures in Python\n\nPython has implicit support for Data Structures which enable you to store and access data. These structures are called List, Dictionary, Tuple and Set.\n\nPython allows its users to create their own Data Structures enabling them to have full control over their functionality. The most prominent Data Structures are Stack, Queue, Tree, Linked List and so on which are also available to you in other programming languages. So now that you know what are the types available to you, why don’t we move ahead to the Data Structures and implement them using Python.", null, "## Built-in Data Structures\n\nAs the name suggests, these Data Structures are built-in with Python which makes programming easier and helps programmers use them to obtain solutions faster. Let’s discuss each of them in detail.\n\n### Lists\n\nLists are used to store data of different data types in a sequential manner. There are addresses assigned to every element of the list, which is called as Index. The index value starts from 0 and goes on until the last element called the positive index. There is also negative indexing which starts from -1 enabling you to access elements from the last to first. Let us now understand lists better with the help of an example program.\n\n#### Creating a list\n\nTo create a list, you use the square brackets and add elements into it accordingly. 
If you do not pass any elements inside the square brackets, you get an empty list as the output.\n\n```my_list = [] #create empty list\nprint(my_list)\nmy_list = [1, 2, 3, 'example', 3.132] #creating list with data\nprint(my_list)\n```\n\nOutput:\n[]\n[1, 2, 3, ‘example’, 3.132]\n\nAdding the elements in the list can be achieved using the append(), extend() and insert() functions.\n\n• The append() function adds all the elements passed to it as a single element.\n• The extend() function adds the elements one-by-one into the list.\n• The insert() function adds the element passed to the index value and increase the size of the list too.\n```my_list = [1, 2, 3]\nprint(my_list)\nmy_list.append([555, 12]) #add as a single element\nprint(my_list)\nmy_list.extend([234, 'more_example']) #add as different elements\nprint(my_list)\nprint(my_list)\n```\n\nOutput:\n[1, 2, 3]\n[1, 2, 3, [555, 12]]\n[1, 2, 3, [555, 12], 234, ‘more_example’]\n[1, ‘insert_example’, 2, 3, [555, 12], 234, ‘more_example’]\n\n#### Deleting Elements\n\n• To delete elements, use the del keyword which is built-in into Python but this does not return anything back to us.\n• If you want the element back, you use the pop() function which takes the index value.\n• To remove an element by its value, you use the remove() function.\n\nExample:\n\n```my_list = [1, 2, 3, 'example', 3.132, 10, 30]\ndel my_list #delete element at index 5\nprint(my_list)\nmy_list.remove('example') #remove element with value\nprint(my_list)\na = my_list.pop(1) #pop element from list\nprint('Popped Element: ', a, ' List remaining: ', my_list)\nmy_list.clear() #empty the list\nprint(my_list)\n```\n\nOutput:\n[1, 2, 3, ‘example’, 3.132, 30]\n[1, 2, 3, 3.132, 30]\nPopped Element: 2 List remaining: [1, 3, 3.132, 30]\n[]\n\n#### Accessing Elements\n\nAccessing elements is the same as accessing Strings in Python. You pass the index values and hence can obtain the values as needed.\n\n```my_list = [1, 2, 3, 'example', 3.132, 10, 30]\nfor element in my_list: #access elements one by one\nprint(element)\nprint(my_list) #access all elements\nprint(my_list) #access index 3 element\nprint(my_list[0:2]) #access elements from 0 to 1 and exclude 2\nprint(my_list[::-1]) #access elements in reverse\n```\n\nOutput:\n1\n2\n3\nexample\n3.132\n10\n30\n[1, 2, 3, ‘example’, 3.132, 10, 30]\nexample\n[1, 2]\n[30, 10, 3.132, ‘example’, 3, 2, 1]\n\n#### Other Functions\n\nYou have several other functions that can be used when working with lists.\n\n• The len() function returns to us the length of the list.\n• The index() function finds the index value of value passed where it has been encountered the first time.\n• The count() function finds the count of the value passed to it.\n• The sorted() and sort() functions do the same thing, that is to sort the values of the list. The sorted() has a return type whereas the sort() modifies the original list.\n```my_list = [1, 2, 3, 10, 30, 10]\nprint(len(my_list)) #find length of list\nprint(my_list.index(10)) #find index of element that occurs first\nprint(my_list.count(10)) #find count of the element\nprint(sorted(my_list)) #print sorted list but not change original\nmy_list.sort(reverse=True) #sort original list\nprint(my_list)\n```\n\n### Output:\n\n```6\n3\n2\n[1, 2, 3, 10, 10, 30]\n[30, 10, 10, 3, 2, 1]```\n\n### Dictionary\n\nDictionaries are used to store key-value pairs. To understand better, think of a phone directory where hundreds and thousands of names and their corresponding numbers have been added. 
Now the constant values here are Name and the Phone Numbers which are called as the keys. And the various names and phone numbers are the values that have been fed to the keys. If you access the values of the keys, you will obtain all the names and phone numbers. So that is what a key-value pair is. And in Python, this structure is stored using Dictionaries. Let us understand this better with an example program.\n\n#### Creating a Dictionary\n\nDictionaries can be created using the flower braces or using the dict() function. You need to add the key-value pairs whenever you work with dictionaries.\n\n```my_dict = {} #empty dictionary\nprint(my_dict)\nmy_dict = {1: 'Python', 2: 'Java'} #dictionary with elements\nprint(my_dict)\n```\n\nOutput:\n{}\n{1: ‘Python’, 2: ‘Java’}\n\n#### Changing and Adding key, value pairs\n\nTo change the values of the dictionary, you need to do that using the keys. So, you firstly access the key and then change the value accordingly. To add values, you simply just add another key-value pair as shown below.\n\n```my_dict = {'First': 'Python', 'Second': 'Java'}\nprint(my_dict)\nmy_dict['Second'] = 'C++' #changing element\nprint(my_dict)\nmy_dict['Third'] = 'Ruby' #adding key-value pair\nprint(my_dict)\n```\n\nOutput:\n{‘First’: ‘Python’, ‘Second’: ‘Java’}\n{‘First’: ‘Python’, ‘Second’: ‘C++’}\n{‘First’: ‘Python’, ‘Second’: ‘C++’, ‘Third’: ‘Ruby’}\n\n#### Deleting key, value pairs\n\n• To delete the values, you use the pop() function which returns the value that has been deleted.\n• To retrieve the key-value pair, you use the popitem() function which returns a tuple of the key and value.\n• To clear the entire dictionary, you use the clear() function.\n```my_dict = {'First': 'Python', 'Second': 'Java', 'Third': 'Ruby'}\na = my_dict.pop('Third') #pop element\nprint('Value:', a)\nprint('Dictionary:', my_dict)\nb = my_dict.popitem() #pop the key-value pair\nprint('Key, value pair:', b)\nprint('Dictionary', my_dict)\nmy_dict.clear() #empty dictionary\nprint('n', my_dict)\n```\n\nOutput:\n\nValue: Ruby\nDictionary: {‘First’: ‘Python’, ‘Second’: ‘Java’}\n\nKey, value pair: (‘Second’, ‘Java’)\nDictionary {‘First’: ‘Python’}\n\n{}\n\n#### Accessing Elements\n\nYou can access elements using the keys only. You can use either the get() function or just pass the key values and you will be retrieving the values.\n\n```my_dict = {'First': 'Python', 'Second': 'Java'}\nprint(my_dict['First']) #access elements using keys\nprint(my_dict.get('Second'))\n```\n\nOutput:\nPython\nJava\n\n#### Other Functions\n\nYou have different functions which return to us the keys or the values of the key-value pair accordingly to the keys(), values(), items() functions accordingly.\n\n```my_dict = {'First': 'Python', 'Second': 'Java', 'Third': 'Ruby'}\nprint(my_dict.keys()) #get keys\nprint(my_dict.values()) #get values\nprint(my_dict.items()) #get key-value pairs\nprint(my_dict.get('First'))\n```\n\nOutput:\ndict_keys([‘First’, ‘Second’, ‘Third’])\ndict_values([‘Python’, ‘Java’, ‘Ruby’])\ndict_items([(‘First’, ‘Python’), (‘Second’, ‘Java’), (‘Third’, ‘Ruby’)])\nPython\n\n### Tuple\n\nTuples are the same as lists are with the exception that the data once entered into the tuple cannot be changed no matter what. The only exception is when the data inside the tuple is mutable, only then the tuple data can be changed. 
The example program will help you understand better.\n\n#### Creating a Tuple\n\nYou create a tuple using parenthesis or using the tuple() function.\n\n```my_tuple = (1, 2, 3) #create tuple\nprint(my_tuple)\n```\n\nOutput:\n(1, 2, 3)\n\n#### Accessing Elements\n\nAccessing elements is the same as it is for accessing values in lists.\n\n```my_tuple2 = (1, 2, 3, 'edureka') #access elements\nfor x in my_tuple2:\nprint(x)\nprint(my_tuple2)\nprint(my_tuple2)\nprint(my_tuple2[:])\nprint(my_tuple2)\n```\n\nOutput:\n1\n2\n3\nedureka\n(1, 2, 3, ‘edureka’)\n1\n(1, 2, 3, ‘edureka’)\ne\n\n#### Appending Elements\n\nTo append the values, you use the ‘+’ operator which will take another tuple to be appended to it.\n\n```my_tuple = (1, 2, 3)\nmy_tuple = my_tuple + (4, 5, 6) #add elements\nprint(my_tuple)\n```\n\nOutput:\n(1, 2, 3, 4, 5, 6)\n\n#### Other Functions\n\nThese functions are the same as they are for lists.\n\n```my_tuple = (1, 2, 3, ['hindi', 'python'])\nmy_tuple = 'english'\nprint(my_tuple)\nprint(my_tuple.count(2))\nprint(my_tuple.index(['english', 'python']))\n```\n\nOutput:\n(1, 2, 3, [‘english’, ‘python’])\n1\n3\n\n### Sets\n\nSets are a collection of unordered elements that are unique. Meaning that even if the data is repeated more than one time, it would be entered into the set only once. It resembles the sets that you have learnt in arithmetic. The operations also are the same as is with the arithmetic sets. An example program would help you understand better.\n\n#### Creating a set\n\nSets are created using the flower braces but instead of adding key-value pairs, you just pass values to it.\n\n```my_set = {1, 2, 3, 4, 5, 5, 5} #create set\nprint(my_set)\n```\n\nOutput:\n{1, 2, 3, 4, 5}\n\nTo add elements, you use the add() function and pass the value to it.\n\n```my_set = {1, 2, 3}\nprint(my_set)\n```\n\nOutput:\n{1, 2, 3, 4}\n\n#### Operations in sets\n\nThe different operations on set such as union, intersection and so on are shown below.\n\n```my_set = {1, 2, 3, 4}\nmy_set_2 = {3, 4, 5, 6}\nprint(my_set.union(my_set_2), '----------', my_set | my_set_2)\nprint(my_set.intersection(my_set_2), '----------', my_set & my_set_2)\nprint(my_set.difference(my_set_2), '----------', my_set - my_set_2)\nprint(my_set.symmetric_difference(my_set_2), '----------', my_set ^ my_set_2)\nmy_set.clear()\nprint(my_set)\n```\n• The union() function combines the data present in both sets.\n• The intersection() function finds the data present in both sets only.\n• The difference() function deletes the data present in both and outputs data present only in the set passed.\n• The symmetric_difference() does the same as the difference() function but outputs the data which is remaining in both sets.\n\nOutput:\n{1, 2, 3, 4, 5, 6} ———- {1, 2, 3, 4, 5, 6}\n{3, 4} ———- {3, 4}\n{1, 2} ———- {1, 2}\n{1, 2, 5, 6} ———- {1, 2, 5, 6}\nset()\n\nNow that you have understood the built-in Data Structures, let’s get started with the user-defined Data Structures. User-defined Data Structures, the name itself suggests that users define how the Data Structure would work and define functions in it. This gives the user whole control over how the data needs to be saved, manipulated and so forth.\n\nLet us move ahead and study the most prominent Data Structures in most of the programming languages.\n\n## User-Defined Data Structures\n\n### Arrays vs. Lists\n\nArrays and lists are the same structure with one difference. 
Lists allow heterogeneous data element storage whereas Arrays allow only homogenous elements to be stored within them.\n\n### Stack\n\nStacks are linear Data Structures which are based on the principle of Last-In-First-Out (LIFO) where data which is entered last will be the first to get accessed. It is built using the array structure and has operations namely, pushing (adding) elements, popping (deleting) elements and accessing elements only from one point in the stack called as the TOP. This TOP is the pointer to the current position of the stack. Stacks are prominently used in applications such as Recursive Programming, reversing words, undo mechanisms in word editors and so forth.", null, "### Queue\n\nA queue is also a linear data structure which is based on the principle of First-In-First-Out (FIFO) where the data entered first will be accessed first. It is built using the array structure and has operations which can be performed from both ends of the Queue, that is, head-tail or front-back. Operations such as adding and deleting elements are called En-Queue and De-Queue and accessing the elements can be performed. Queues are used as Network Buffers for traffic congestion management, used in Operating Systems for Job Scheduling and many more.", null, "### Tree\n\nTrees are non-linear Data Structures which have root and nodes. The root is the node from where the data originates and the nodes are the other data points that are available to us. The node that precedes is the parent and the node after is called the child. There are levels a tree has to show the depth of information. The last nodes are called the leaves. Trees create a hierarchy which can be used in a lot of real-world applications such as the HTML pages use trees to distinguish which tag comes under which block. It is also efficient in searching purposes and much more.", null, "Linked lists are linear Data Structures which are not stored consequently but are linked with each other using pointers. The node of a linked list is composed of data and a pointer called next. These structures are most widely used in image viewing applications, music player applications and so forth.", null, "### Graph\n\nGraphs are used to store data collection of points called vertices (nodes) and edges (edges). Graphs can be called as the most accurate representation of a real-world map. They are used to find the various cost-to-distance between the various data points called as the nodes and hence find the least path. Many applications such as Google Maps, Uber, and many more use Graphs to find the least distance and increase profits in the best ways.", null, "### HashMaps\n\nHashMaps are the same as what dictionaries are in Python. They can be used to implement applications such as phonebooks, populate data according to the lists and much more.", null, "That wraps up all the prominent Data Structures in Python. I hope you have understood built-in as well as the user-defined Data Structures that we have in Python and why they are important.\n\nNow that you have understood the Data Structures in Python, check out the Python Programming Certification by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe.\n\nEdureka’s Python Programming Certification Training course is designed for students and professionals who want to be a Python Programmer. 
The course is designed to give you a head start into Python programming and train you for both core and advanced concepts.\n\nGot a question for us? Please mention it in the comments section of this “Data Structures You Need to Learn with Python” blog and we will get back to you as soon as possible.", null, "REGISTER FOR FREE WEBINAR", null, "Thank you for registering Join Edureka Meetup community for 100+ Free Webinars each month" ]
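Two small notes on the article above, followed by a sketch. First, a few calls in the built-in-structures snippets lost their arguments during extraction: judging from the printed outputs, the deletion example is del my_list[5], the insertion is my_list.insert(1, 'insert_example'), and the set example adds 4 with my_set.add(4). Second, the user-defined structures (stack, queue, and so on) are described without code, so here is a minimal illustrative sketch of a stack and a queue in Python; the class and method names are my own choices, not part of the original article.

```python
# Minimal sketches of two of the user-defined structures described above.
from collections import deque

class Stack:
    """LIFO stack built on a Python list; push and pop happen at the end."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)      # add on top

    def pop(self):
        return self._items.pop()      # remove from the top (raises IndexError if empty)

    def peek(self):
        return self._items[-1]        # look at the top without removing it

    def is_empty(self):
        return not self._items

class Queue:
    """FIFO queue built on collections.deque; enqueue at the back, dequeue at the front."""
    def __init__(self):
        self._items = deque()

    def enqueue(self, item):
        self._items.append(item)

    def dequeue(self):
        return self._items.popleft()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1); s.push(2); s.push(3)
print(s.pop(), s.pop())           # 3 2  (last in, first out)

q = Queue()
q.enqueue('a'); q.enqueue('b'); q.enqueue('c')
print(q.dequeue(), q.dequeue())   # a b  (first in, first out)
```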
[ null, "https://googleads.g.doubleclick.net/pagead/viewthroughconversion/977137586/", null, "data:image/gif;base64,R0lGODlhAQABAIAAAMLCwgAAACH5BAAAAAAALAAAAAABAAEAAAICRAEAOw==", null, "https://www.edureka.co/blog/wp-content/uploads/2019/10/Data-Structures-in-Python-300x169.png", null, "data:image/gif;base64,R0lGODlhAQABAIAAAMLCwgAAACH5BAAAAAAALAAAAAABAAEAAAICRAEAOw==", null, "data:image/gif;base64,R0lGODlhAQABAIAAAMLCwgAAACH5BAAAAAAALAAAAAABAAEAAAICRAEAOw==", null, "data:image/gif;base64,R0lGODlhAQABAIAAAMLCwgAAACH5BAAAAAAALAAAAAABAAEAAAICRAEAOw==", null, "data:image/gif;base64,R0lGODlhAQABAIAAAMLCwgAAACH5BAAAAAAALAAAAAABAAEAAAICRAEAOw==", null, "data:image/gif;base64,R0lGODlhAQABAIAAAMLCwgAAACH5BAAAAAAALAAAAAABAAEAAAICRAEAOw==", null, "data:image/gif;base64,R0lGODlhAQABAIAAAMLCwgAAACH5BAAAAAAALAAAAAABAAEAAAICRAEAOw==", null, "data:image/gif;base64,R0lGODlhAQABAIAAAMLCwgAAACH5BAAAAAAALAAAAAABAAEAAAICRAEAOw==", null, "https://d1jnx9ba8s6j9r.cloudfront.net/blog/wp-content/themes/edu-new/img/blog-001.svg", null, "https://d1jnx9ba8s6j9r.cloudfront.net/blog/wp-content/themes/edu-new/img/blog-tick.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85316753,"math_prob":0.876963,"size":13595,"snap":"2020-45-2020-50","text_gpt3_token_len":3461,"char_repetition_ratio":0.15914944,"word_repetition_ratio":0.03839038,"special_character_ratio":0.2780434,"punctuation_ratio":0.15820456,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96985286,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,null,null,5,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-26T15:53:48Z\",\"WARC-Record-ID\":\"<urn:uuid:53dea339-e110-45df-b1b7-374c0b0f917f>\",\"Content-Length\":\"222695\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:da620e6e-4dfa-4407-96d9-e13c3fc9aed2>\",\"WARC-Concurrent-To\":\"<urn:uuid:614aec5c-1c87-4d4b-80d6-779c741f5ab2>\",\"WARC-IP-Address\":\"52.85.144.108\",\"WARC-Target-URI\":\"https://www.edureka.co/blog/data-structures-in-python/\",\"WARC-Payload-Digest\":\"sha1:Z2VWYKYECIWDMYHEXBPZLXZ22KLCAXL6\",\"WARC-Block-Digest\":\"sha1:JSEKHDH4OPWOSIHUZXUU4CXHEQZE4EBY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107891428.74_warc_CC-MAIN-20201026145305-20201026175305-00453.warc.gz\"}"}
https://www.geeksforgeeks.org/constraint-cubic-spline/?ref=rp
[ "", null, "Open in App\nNot now\n\n# Constraint Cubic spline\n\n• Last Updated : 24 Oct, 2021\n\n### Cubic Spline Interpolation\n\nCubic spline interpolation is a way of finding a curve that connects data points with a degree of three or less. Splines are polynomial that are smooth and continuous across a given plot and also continuous first and second derivatives where they join.\n\nWe take a set of points [xi, yi] for i = 0, 1, …, n for the function y = f(x). The cubic spline interpolation is a piecewise continuous curve, passing through each of the values in the table.\n\n• Following are the conditions for the spline of degree K=3:\n• The domain of s is in intervals of [a, b].\n• S, S’, S” are all continuous function on [a,b].", null, "Here Si(x) is the cubic polynomial that will be used on the subinterval [xi, xi+1].\n\nThe main factor about spline is that it combines different polynomials and does not use a single polynomial of degree n to fit all the points at once, it avoids high degree polynomials and thereby the potential problem of overfitting. These low-degree polynomials need to be such that the spline they form is not only continuous but also smooth.\n\nBut for the spline to be smooth and continuous, the two consecutive polynomials and  Si (x) and Si+1 (x) must join at xi", null, "Or, Si (x) must be passed through two end-points:", null, "Assume, S” (x) = Mi (i= 0,1,2, … , n). Since S(x) is cubic polynomial , so S” (x) is the linear polynomial in  [xi, xi+1], then S”’ (x) will be:", null, "By applying the Taylor series:", null, "Let, x = xi+1:", null, "Similarly, we apply above equation b/w range [xi-1, xi]:", null, "Let hi =xi – xi-1", null, "Now, we have n-1 equations, but have n+1 variables i.e M0, M1, M2,…Mn-1, Mn. Therefore, we need to get 2 more equations. For that, we will be using additional boundary conditions.\n\nLet’s consider that we know S’ (x0) = f0‘ and S’ (xn) = fn‘, especially if S’ (x0) and S’ (xn) both are 0. This is called the clamped boundary condition.", null, "", null, "", null, "Similarly, for Mn", null, "or", null, "Combining the above equation in to the matrix form, we get the following matrix:", null, "### Constraint Cubic Spline\n\nConstraint cubic spline was proposed by the  CJC Kruger in his article. The algorithm is an extension of cubic spline interpolation. The important step in it is the calculation of the slope at each point. The idea behind the slope calculation is that the slope at a point will be b/w the slope of the two adjacent lines joining that point, if one of them is 0 then the slope at point should also be 0.\n\nLet take a collection of point (x_0, y_0), (x_1, x_2) …\\, …\\, … \\, (x_{i-1}, y_{i-1}),(x_{i}, y_{i}), (x_{i+1}, y_{i+1})…\\, …\\, … \\, (x_n, y_n). 
The cubic curve can be given by:", null, "The above curve pass through all of the following points:", null, "THe first order derivative must be continuous at intermediate points:", null, "which can be calculated by following formula for intermediate points:", null, "if slope changes sign at point.\n\nFirst derivative (slope) of each end point is calculated by following formula:", null, "Second derivative are calculated by following formula:", null, "Solving for the coefficient of curve gives:", null, "### Conclusion", null, "Constraint vs Natural Spline (Upward)", null, "Constraint vs Cubic (downward)\n\nConstraint cubic spline interpolation has the following advantages as compared to a standard cubic spline.\n\n• It generates a relatively smooth curve as compare to standard cubic spline\n• Interpolated values are calculated without solving a system of equations.\n• It never overshoots the immediate values.\n\n### References:\n\nMy Personal Notes arrow_drop_up" ]
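The constrained spline itself is not available as a ready-made library routine, but the classical cubic spline it modifies is. The hedged sketch below uses scipy.interpolate.CubicSpline with the clamped boundary condition mentioned in the derivation above (S'(x_0) and S'(x_n) fixed) and compares it to the natural spline; the data points are invented for illustration.

```python
# Sketch: standard cubic spline interpolation with clamped and natural boundary
# conditions. (Kruger's constrained spline is not in scipy; this illustrates the
# classical spline it modifies. Data points are invented for illustration.)
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.8, 0.9, 0.1, -0.7])

# 'clamped' sets the first derivative to 0 at both ends;
# bc_type=((1, a), (1, b)) would clamp the end slopes to arbitrary values a and b.
cs_clamped = CubicSpline(x, y, bc_type='clamped')
cs_natural = CubicSpline(x, y, bc_type='natural')   # second derivative = 0 at the ends

xs = np.linspace(0, 4, 9)
print(np.round(cs_clamped(xs), 3))       # interpolated values
print(np.round(cs_clamped(xs, 1), 3))    # first derivative along the curve
print(np.round(cs_natural(xs), 3))
```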
[ null, "https://media.geeksforgeeks.org/gfg-gg-logo.svg", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-bfe0b4a8fddd8d349888f0eb303f7f44_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-91588b727260f8aa1d557119810cd7c1_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-4917bb314ca035a7da43010fd3688b5c_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-1c32505b7ac87f0da679b14a1c63f3ae_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-94ae45ec6ecf12cb34ca7e6851d81df0_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-bccea473ae3393eef67b40259f1d54fe_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-d6f44dcfa64cf5551cd27fc3bb4a786f_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-0f80a5930d5c250491eb45aef40720cc_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-1ddfdc44f41d95aa9419e727c597a94d_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-705d8826de69074b5b2b908881ab17fd_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-77dbad75e78cc4a7a6b6f70976ea7ce9_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-14f8cfc70273796d25bf3d741add7386_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-b444ad87090e51a4b1236bc283d364f2_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-777a681911717d5878bd78d7b2d8c96b_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-2978b631c02500487e68725df39a0846_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-89a1b25705d262a752ceb2426072c154_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-4b26f524075ff1760c8719083ac7672c_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-5eee1d5ca51e363d5329967f7e77f3d3_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-35efbfe5caed5bf3129c1f0fc6ff22b1_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-5131cb275c8aa2abfeda6eb8fc718dcd_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-a3106991bafa979e84babbe8bb49812d_l3.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/20211001183235/constraintvsnatural-660x509.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/20211001183233/constraintvsnatural2-660x509.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91887856,"math_prob":0.9981678,"size":3306,"snap":"2022-40-2023-06","text_gpt3_token_len":842,"char_repetition_ratio":0.12598425,"word_repetition_ratio":0.0,"special_character_ratio":0.2565033,"punctuation_ratio":0.13682678,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99976367,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48],"im_url_duplicate_count":[null,null,null,8,null,6,null,6,null,6,null,6,null,4,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T05:54:46Z\",\"WARC-Record-ID\":\"<urn:uuid:fee50305-d719-4f0f-9cd5-68a36945bc58>\",\"Content-Length\":\"150111\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eade8483-b26a-40e6-b61c-ef6fef0f67f2>\",\"WARC-Concurrent-To\":\"<urn:uuid:ef73ed9d-599f-4dc9-a55f-fe2c8bdfcfb3>\",\"WARC-IP-Address\":\"23.15.9.35\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/constraint-cubic-spline/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:S2RFERD2DX55MY3HBNU556ITXKXYC6DO\",\"WARC-Block-Digest\":\"sha1:PVQZ4TVAADWYZXOKVH5RPJFOURXVCHWB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500384.17_warc_CC-MAIN-20230207035749-20230207065749-00401.warc.gz\"}"}
https://www.linstitute.net/archives/685453
[ "# IB DP Physics: HL复习笔记8.2.3 The Solar Constant, Albedo & Emissivity\n\n### The Solar Constant\n\n• Since life on Earth is entirely dependant on the Sun’s energy, it is useful to quantify how much of its energy reaches the top of the atmosphere\n• This is known as the solar constant\n• The solar constant is defined as:\n\nThe amount of solar radiation across all wavelengths that is incident in one second on one square metre at the mean distance of the Earth from the Sun\n\n• The value of the solar constant varies year-round because:\n• The Earth’s is in an elliptical orbit around the Sun, meaning at certain times of year the Earth is closer to the Sun, and other times of year it is further away\n• The Sun’s output varies by about 0.1% during its 11-year sunspot cycle\n• Calculations of the solar constant assume that:\n• This radiation is incident on a plane perpendicular to the Earth's surface\n• The Earth is at its mean distance from the Sun\n\n#### Worked Example\n\nThe Sun emits 4 × 1026 J in one second. The mean distance of the Earth from the Sun is 1.5 × 1011 m.\n\nUsing this data, calculate the solar constant.\n\nStep 1: List the known quantities\n\n• Power output of Sun, P = 4 × 1026 W\n• Distance between the Earth and Sun, r = 1.5 × 1011 m\n\nStep 2: Model the scenario using geometry\n\n• As light leaves the surface of the Sun, it begins to spread out uniformly through a spherical shell\n• The surface area of a sphere = 4πr2\n• The radius r of this sphere is equal to the distance between the Sun and the Earth", null, "Step 3: Write an equation to calculate the solar constant\n\n###", null, "Albedo & Emissivity\n\n#### Albedo\n\n• Albedo, a, is defined as\n\nThe proportion of light that is reflected by a given surface\n\n• It can be calculated using the equation\n•", null, "More specifically, the albedo of a planet is defined as\n\nThe ratio between the total scattered, or reflected, radiation and the total incident radiation of that planet\n\n• Earth’s albedo is generally taken to be 0.3, which means 30% of the Sun’s rays that reach the ground are reflected, or scattered, back into the atmosphere\n• Earth’s albedo varies daily and depends on:\n• Cloud formations and season – the thicker the cloud cover, the higher the degree of reflection\n• Latitude\n• Terrain – different materials reflect light to different degrees\n• It is useful to know the albedo of common materials:\n• Fresh asphalt = 0.04\n• Bare soil = 0.17\n• Green grass = 0.25\n• Desert sand = 0.40\n• New concrete = 0.55\n• Ocean ice = 0.50 - 0.70\n• Fresh snow = 0.85\n• Albedo has no units because it is a ratio (or fraction) of power", null, "#### Emissivity\n\n• Stars are good approximations to a black body, whereas planets are not\n• This can be quantified using the emissivity\n• Emissivity, e, is defined as\n\nThe power radiated by a surface divided by the power radiated from a black body of the same surface area and temperature\n\n• It can be calculated using the equation\n•", null, "Calculations of the emissivity assume that the black body:\n• Is at the same temperature as the object\n• Has the same dimensions as the object\n• For a perfect black body, emissivity is equal to 1\n• When using the Stefan-Boltzmann law for an object which is not a black body, the equation becomes:\n\nP = eσAT4\n\n• Where:\n• P = total power emitted by the object (W)\n• = emissivity of the object\n• σ = the Stefan-Boltzmann constant\n• A = total surface area of the object black body (m2)\n• T = absolute temperature of the body 
(K)\n\n#### Worked Example", null, "Step 1: Define albedo\n\n• Albedo = the proportion of radiation that is reflected\n• Therefore, the energy reflected by fresh snow = 0.85\nStep 2: Identify the proportion of radiation that is absorbed\n• If 85% of the radiation is reflected, we can assume that 15% is absorbed\n• Therefore, the energy absorbed by fresh snow = 1 – 0.85 = 0.15\nStep 3: Calculate the ratio", null, "#### Exam Tip\n\nYou will be expected to remember that a perfect black body has an emissivity of 1 - this information is not included in the data booklet!", null, "" ]
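The worked example and the emissivity form of the Stefan–Boltzmann law reduce to a few lines of arithmetic. The sketch below simply re-computes the solar constant from the numbers quoted above and evaluates P = eσAT⁴; the values chosen for e, A and T are assumptions for illustration, not figures from the text.

```python
# Re-computing the worked example above: solar constant = P / (4 * pi * r^2),
# plus the emissivity form of the Stefan-Boltzmann law, P = e * sigma * A * T^4.
import math

P_sun = 4e26          # W, energy emitted by the Sun per second (from the worked example)
r = 1.5e11            # m, mean Earth-Sun distance (from the worked example)
solar_constant = P_sun / (4 * math.pi * r**2)
print(f"Solar constant ~ {solar_constant:.0f} W m^-2")   # ~1415 W m^-2

sigma = 5.67e-8       # W m^-2 K^-4, Stefan-Boltzmann constant
e = 0.6               # emissivity (illustrative value; a perfect black body has e = 1)
A = 1.0               # m^2, radiating surface area (illustrative)
T = 288               # K, temperature (illustrative)
P = e * sigma * A * T**4
print(f"Radiated power ~ {P:.1f} W")
```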
[ null, "https://oss.linstitute.net/wechatimg/2022/08/8-2-3-we-solar-constant_sl-physics-rn-1.png", null, "https://oss.linstitute.net/wechatimg/2022/08/1-23.png", null, "https://oss.linstitute.net/wechatimg/2022/08/2-4.png", null, "https://oss.linstitute.net/wechatimg/2022/08/8-2-3-ib-sl-rn-albedo.png", null, "https://oss.linstitute.net/wechatimg/2022/08/1-24.png", null, "https://oss.linstitute.net/wechatimg/2022/08/2-5.png", null, "https://oss.linstitute.net/wechatimg/2022/08/1-25.png", null, "https://oss.linstitute.net/wechatimg/2022/07/网站扫码-1.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90953046,"math_prob":0.98636734,"size":3765,"snap":"2022-27-2022-33","text_gpt3_token_len":958,"char_repetition_ratio":0.12682797,"word_repetition_ratio":0.025069637,"special_character_ratio":0.25551128,"punctuation_ratio":0.07660167,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9970833,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-17T07:05:47Z\",\"WARC-Record-ID\":\"<urn:uuid:ee8da401-f0fd-461c-a476-ac5dd3d625e7>\",\"Content-Length\":\"83157\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fbe87fa2-5972-4a78-ae9c-a81103baccdc>\",\"WARC-Concurrent-To\":\"<urn:uuid:44f930da-345d-4081-8f84-e343640686ad>\",\"WARC-IP-Address\":\"106.14.134.64\",\"WARC-Target-URI\":\"https://www.linstitute.net/archives/685453\",\"WARC-Payload-Digest\":\"sha1:I5OW4BZGJ4BANZ5QGAOVNVSUTTLJ7RED\",\"WARC-Block-Digest\":\"sha1:HUYFMYU7PKWNFR53YIL5RXNVKLTVW4YI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572870.85_warc_CC-MAIN-20220817062258-20220817092258-00461.warc.gz\"}"}
https://everything2.com/title/Lennard-Jones+potential
[ "Equation that emperically models the van der Waals forces between two atoms. It usually takes on the form:\n\n```\n1 1\nE = a * ( --- - b --- )\nrx ry\n\n```\nThe values of x and y are usually 12 and 6 (known as the 12-6 Lennard Jones potential - surprisingly). The r12 term is a repulsive term that gets very high at small values of r. This represents the electron orbitals coming too close to each other. The r6 term represents dipole-dipole interactions generally. The values of a and b depend on the types of atoms interacting. Combining this with Coulomb's law to model charge-charge interactions is often sufficient for a rudimentary force field to describe molecular interactions. The shape of a 12-6 Lennard-Jones potential shows the equilibrium distance of a particular interaction:\n```\n\n| *\n| *\n| *\n| *\n| *\n| *\n| *\n| *\nE | *\n| *\n-----*-----------------********------- r\n| * *****\n| * ***\n| ** **\n| ****\nreq\n\nE = energy\nr = distance between two objects\nreq = equilibrium distance for system.\n\n```\n\nI apologize that this graph looks horrible, but the essential elements are as follows: At very short distances (the left end of the graph), the two molecules are repelling each other very strongly. As this distances approaches zero, the energy goes to infinity. At some distance req, the repulsion 1/r12 term and the attractive 1/r6 term are balanced such that an energy minimum is achieved. This is the most stable distance for the system to lie in. As r increases, the attractive force (which follows a 1/r6 dependence) drops off gradually and the energy converges to zero.\n\nLog in or register to write something here or to contact authors." ]
https://accessbiomedicalscience.mhmedical.com/content.aspx?bookid=2724&sectionid=227193841
[ "## KEY CONCEPTS\n\nKEY CONCEPTS\n\n•", null, "Probability is an important concept in statistics. Both objective and subjective probabilities are used in the medical field.\n\n•", null, "Basic definitions include the concept of an event or outcome. A number of essential rules tell us how to combine the probabilities of events.\n\n•", null, "Bayes’ theorem relates to the concept of conditional probability—the probability of an outcome depending on an earlier outcome. Bayes’ theorem is part of the reasoning process when interpreting diagnostic procedures.\n\n•", null, "Populations are rarely studied; instead, re­searchers study samples.\n\n•", null, "Several methods of sampling are used in medical research; a key issue is that any method should be random.\n\n•", null, "When researchers select random samples and then make measurements, the result is a random variable. This process makes statistical tests and inferences possible.\n\n•", null, "The binomial distribution is used to determine the probability of yes/no events—the number of times a given outcome occurs in a given number of attempts.\n\n•", null, "The Poisson distribution is used to determine the probability of rare events.\n\n•", null, "The normal distribution is used to find the probability that an outcome occurs when the observations have a bell-shaped distribution. It is used in many statistical procedures.\n\n•", null, "If many random samples are drawn from a population, a statistic, such as the mean, follows a distribution called a sampling distribution.\n\n•", null, "The central limit theorem tells us that means of observations, regardless of how they are distributed, begin to follow a normal distribution as the sample size increases. This is one of the reasons the normal distribution is so important in statistics.\n\n•", null, "It is important to know the difference between the standard deviation, which describes the spread of individual observations, from the standard error of the mean, which describes the spread of the mean observations.\n\n•", null, "One of the purposes of statistics is to use a sample to estimate something about the population. Estimates form the basis of statistical tests.\n\n•", null, "Confidence intervals can be formed around an estimate to tell us how much the estimate would vary in repeated samples.\n\n## PRESENTING PROBLEMS\n\n### Presenting Problem 1\n\nThe World Health Organization (WHO) ­collects influenza rates worldwide. The CDC collects the statistics for the United States and territories. The data collection is completed weekly through the NREVSS collaborating laboratories. The data for the 2017–2018 flu season is used as a Presenting Problem to demonstrate the concepts of probability and displayed in Table 4–1.\n\nTable 4–1.Summary of flu virus type by age group for the 2017–2018 season.\n\n### Pop-up div Successfully Displayed\n\nThis div only appears when the trigger link is hovered over. Otherwise it is hidden from view." ]
[ null, "https://accessbiomedicalscience.mhmedical.com/Images/spinnerLarge.gif", null, "https://accessbiomedicalscience.mhmedical.com/Images/spinnerLarge.gif", null, "https://accessbiomedicalscience.mhmedical.com/Images/spinnerLarge.gif", null, "https://accessbiomedicalscience.mhmedical.com/Images/spinnerLarge.gif", null, "https://accessbiomedicalscience.mhmedical.com/Images/spinnerLarge.gif", null, "https://accessbiomedicalscience.mhmedical.com/Images/spinnerLarge.gif", null, "https://accessbiomedicalscience.mhmedical.com/Images/spinnerLarge.gif", null, "https://accessbiomedicalscience.mhmedical.com/Images/spinnerLarge.gif", null, "https://accessbiomedicalscience.mhmedical.com/Images/spinnerLarge.gif", null, "https://accessbiomedicalscience.mhmedical.com/Images/spinnerLarge.gif", null, "https://accessbiomedicalscience.mhmedical.com/Images/spinnerLarge.gif", null, "https://accessbiomedicalscience.mhmedical.com/Images/spinnerLarge.gif", null, "https://accessbiomedicalscience.mhmedical.com/Images/spinnerLarge.gif", null, "https://accessbiomedicalscience.mhmedical.com/Images/spinnerLarge.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8962048,"math_prob":0.89931315,"size":2995,"snap":"2022-40-2023-06","text_gpt3_token_len":683,"char_repetition_ratio":0.12102976,"word_repetition_ratio":0.050104383,"special_character_ratio":0.25308847,"punctuation_ratio":0.10622711,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9908525,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-06T22:53:04Z\",\"WARC-Record-ID\":\"<urn:uuid:ea4836d5-8925-4731-b3b3-4f3488742ef5>\",\"Content-Length\":\"137982\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:abb067aa-f7be-45fa-aac6-5f9e79a81877>\",\"WARC-Concurrent-To\":\"<urn:uuid:803e07d5-14c0-48f5-8d9b-debf84b47d88>\",\"WARC-IP-Address\":\"52.152.192.18\",\"WARC-Target-URI\":\"https://accessbiomedicalscience.mhmedical.com/content.aspx?bookid=2724&sectionid=227193841\",\"WARC-Payload-Digest\":\"sha1:5FLFOA6QENGJXKSSOS4F6IZWVYH5YW7D\",\"WARC-Block-Digest\":\"sha1:GPJUTEYLLKDNZ62TJDCMUASECZKZAOKO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500365.52_warc_CC-MAIN-20230206212647-20230207002647-00603.warc.gz\"}"}
https://medicpdf.com/p/people.math.carleton.ca1.html
[ "## People.math.carleton.ca\n\nIrreducible characters which are zero on only Institute of Advance Studies in Basic Sciences, Zanjan, Iran Suppose that G is a …nite solvable group which has an irreducible which vanishes on exactly one conjugacy class.\nshow that G has a homomorphic image which is a nontrivial 2-transitivepermutation group. The latter groups have been classi…ed by Huppert.\nWe can also say more about the structure of G depending on whetheris primitive or not.\nMathematics Subject Classi…cation 2000: 20C15 20D10 20B20 be an irreducible character of a …nite group G. A well-known theorem on at least one conjugacy class of G. Groups having an irreducible characterthat vanishes on exactly one class were studied by Zhmud’in (see also ).\nChillag [2, Lemma 2.4] has proved that if the restriction of vanishes on exactly one class of G, then G is a Frobenius group with a complement of order 2 and an abelian odd-order kernel.\nOur purpose in this paper is to show that, if an irreducible character …nite solvable group G vanishes on exactly one conjugacy class, then G has ahomomorphic image which is a nontrivial 2-transitive permutation group. Thelatter groups have been classi…ed by Huppert: they have degree pd where p isprime, and are subgroups of the extended a¢ ne group A L(1; pd) except for sixexceptional degrees (see Remark 8 below).\nWe shall initially assume that our character is faithful, and make the followingassumptions: (*) G is a …nite group with a faithful irreducible character only one class which we denote by C. Furthermore, G has a chief factorK=L which is an elementary abelian p-group of order pd such that therestriction must be nonlinear, the latter condition clearly holds whenever G is solvable, but for the present we shall not assume solvability.\nProposition 1 Suppose (*) holds. Then C = K n L, K = hCi and L is equalto L0 := fu 2 G j uC = Cg. In particular, C consists of p-elements (since L doesnot contain a Sylow p-subgroup of K). Moreover, either: pd is the sum of pd distinct G-conjugate irreducible Proof. Since K is irreducible, the theorem of Burnside quoted above shows Now since K=L is an abelian chief factor, and from [9, (6.18)] that either (i) d is even, pd is the sum of pd distinct G-conjugate irreducible characters We shall consider these two cases separately.\nIn case (i) we note that, since C \\ L = ;, the irreducible character is assumed to be faithful, L is contained in the centre Z(G) of G. On the otherhand, for each z 2 Z(G), (z) is a scalar of the form 1: Thus for each x 2 C wehave K for all z 2 Z(G). This shows that Z(G) is a normal is a nonlinear irreducible character of K, we conclude that Z(G) = L. Finally,since K=Z(G) is abelian, [9, (2.30)] shows that K n L = C.\nK is an irreducible constituent of ( 1)K and so comparison normal subgroup L, and so K n L = C in this case as well.\nFinally since jC [ f1gj > 1 jKj, therefore K = hCi. Finally, it is easily seen that L0 is a normal subgroup of G, and that L0 C = K n L is a union of cosets of L, we see that L C * L0 since C is not a subgroup. Therefore L0 C G and L Corollary 2 Under the hypothesis (*) every normal subgroup N of G eithercontains K (when N is irreducible) or is contained in L (when N is reducible).\nIn particular, K=L is the unique chief factor such that is reducible and K=L is the socle of G=L. Since K has a nonlinear irreduciblecharacter, K is not abelian and so L 6= 1.\nRemark 3 Both cases (i) and (ii) in Proposition 1 can actually occur. 
Thegroup SL(2; 3) has three primitive characters of degree 2 which satisfy (*) (case(i) with jKj = 8 and jLj = 2 for each character), and S4 has an imprimitivecharacter of degree 3 which satis…es (*) (case (ii) with jKj = 12 and jLj = 4).\nProposition 4 Suppose that the hypothesis (*) and case (i) of Proposition 1hold. Then L = Z(G) has order p, K is an extraspecial p-group and Proof. Let z 2 L. Then for any x 2 C we have zx 2 C and so zx = y 1xy for some y 2 G. Since K=L is an elementary abelian p-group, zpxp = (zx)p =y 1xpy = xp, and so zp = 1.\nrepresented faithfully as a group of scalar matrices by a representation a¤ording , it follows that L is cyclic and hence jLj = p.\n(K) = L = Z(K) and so K is an extraspecial p-group.\nis primitive. Indeed, otherwise there is a maximal induced character shows that G is 0 on each conjugacy class disjoint fromH. As is well-known every proper subgroup of a …nite group is disjoint fromsome conjugacy class, and so we conclude that C is the unique class such thatC \\ H = ;. By Proposition 1 this implies that H \\ K (1) = pd=2, we obtain a contradiction. Thus Proposition 5 Suppose that the hypothesis (*) and case (ii) of Proposition 1hold (so is imprimitive). Then there exists a subgroup M of index pd in G = G for some 2 Irr(M), G = MK and M \\K = L = coreG(M).\nProof. As noted in the proof of Proposition 1 irreducible constituents i. Because K is irreducible, these constituents are K-conjugates (as well as G-conjugates): Let M := IG( 1) be the inertial subgroup…xing the constituent Then jG : Mj = pd and G = MK because K acts jK : M \\ Kj, we conclude that M \\ K = L: On the other hand, since G is0 on any class which does not intersect M , the hypothesis on that ux does not lie in any y 1M y, and hence ux 2 C. Thus with the notationof Proposition 1, coreG(M ) L0 = L. Since L is a normal subgroup contained in M , the reverse inequality is also true and so coreG(M ) = L.\nThe proof of the next result requires a theorem of Isaacs [10, Theorem 2] Let H be a …nite group with centre Z and K be a normal subgroup of H with Z = Z(K). Suppose that H centralizes K=Z and jHom(K=Z; Z)jjK=Zj. Then H=Z = K=Z Proposition 6 Under the hypothesis (*) the centralizer CG(K=L) equals K.\nis primitive, then Proposition 4 shows that the hypotheses of Isaacs’theorem are satis…ed for H := CG(K=L) (the condition jHom(K=Z; Z)jjK=Zj is trivial since the irreducibility of H implies that Z is cyclic). Also,since K is irreducible, CG(K ) = Z (G) = L, and so Isaacs’theorem shows that is imprimitive, then using the notation of Proposition 5 we can show that M \\ H = L where H := CG(K=L): Indeed, it is clear from Proposition5 that L M \\ H. To prove the reverse inequality suppose that u 2 M \\ H.\nThen for each x 2 K we have xu = yux for some y 2 L. Choose i such that in x 1M x. Since this is true for all x 2 K, it follows from Proposition 5 thatu 2 coreG(M) = L. Thus M \\ H = L as claimed. Finally H = H \\ MK =(H \\ M)K = LK = K as required.\nCorollary 7 Under the hypothesis (*) G acts transitively by conjugation on thenontrivial elements of the vector space K=L and the kernel of this action is K.\nThus G=K is isomorphic to a subgroup of GL(d; p) which is transitive on thenonzero elements of the underlying vector space.\nRemark 8 Huppert [8, Chapter XII Theorem 7.3] has classi…ed all solvable sub-groups S of GL(d; p) which are transitive on the nonzero vectors of the underly-ing vector space. 
Apart from six exceptional cases (where pd = 32; 52; 72; 112; 232 or 34), the underlying vector space can be identi…ed with the Galois …eld GF (pd)in such a way that S is a subgroup of the group t is an automorphism of the …eld. The group 1)d. A classi…cation for nonsolvable groups has been carried out by Her- ing , . It is considerably more complicated to state and prove, but amongother things it shows that such groups have only a single nonsolvable compositionfactor (a summary is given in [8, page 386]).\nSince the latter half of hypothesis (*) is certainly satis…ed in a solvable group, we can specialize to solvable groups and drop the condition that Theorem 9 Let G be a …nite solvable group which has an irreducible character which takes the value 0 on only one conjugacy class C. Let K := hCi : Then: (b) There is a unique normal subgroup L of G such that K=L is a chief factor of G and K n L = C (we set jK : Lj = pd).\n(c) G=K acts transitively on the set (K=L)# of nontrivial elements of the vector space K=L and so is one of the groups classi…ed by Huppert.\nis imprimitive, then G=L is a 2-transitive Frobenius group of degree Remark 10 We also note that (c) and Huppert’s classi…cation show that theinteger k in (a) is bounded. Indeed, since except in the six exceptional cases. Computations using GAP show that inthe remaining cases k Proof. (a) Let k be the largest integer such that K we know that the restriction G(k+1) is reducible, and so G(k+1) (b), (c) and (d) follow from Proposition 1, Corollary 7 and Proposition 4.\n(e) Let M be the subgroup de…ned in Proposition 5. Since jMj : Since G = MK and G=K acts transitively on (K=L)# we 1 by Proposition 1, so equality must hold throughout.\n1. Hence M=L acts regularly on (K=L)# and so G=L = (M=L)(K=L) is a 2-transitive Frobenius group.\nRemark 11 Not all groups having an irreducible character which takes 0 ona single conjugacy class satisfy the second half of hypothesis (*).\nple, the Atlas shows that A5 has three characters with this property and itscentral cover 2 A5 also has three.\nthe required property and each of the groups L2(2k) (k = 3; 4; :::) appears tohave one such character (of degree 2k). It would be interesting to know if thesewere the only simple groups with this property, or whether a group with such acharacter can have more than one nonabelian composition factor (see Remark8).\nAnother question which can be asked is what can be said about the kernel of such a character; evidently this kernel is contained in the normal subgroupL0 := fu 2 G j uC = Cg.\n Ya. G. Berkovich and E.M. Zhmud’, Characters of Finite Groups. Part (2), Vol. 181, Mathematical Monographs, Amer. Math. Soc., Rhode Island,1999.\n D. Chillag, On zeros of characters of …nite groups, Proc. Amer. Math. Soc.\n J.H. Conway et al, Atlas of Finite Simple Groups, Clarendon Press, Oxford, The GAP Group, GAP–Groups, Algorithms and Programming, Version 4.4.4 (2005) (http://www.gap-system.org).\n Ch. Hering, Transitive linear groups and linear groups which contain irre- ducible subgroups of prime order, Geometriae Dedicata 2 (1974), 425–460.\n Ch. Hering, Transitive linear groups and linear groups which contain irre- ducible subgroups of prime order II, J. Algebra 93 (1985) 151–164.\n B. Huppert, Endliche Gruppen I, Springer-Verlag, Berlin, 1967.\n B. Huppert and N. Blackburn, Finite Groups III, Springer-Verlag, Berlin, I.M. Isaacs, Character Theory of Finite Groups, Academic Press, New York, I.M. Isaacs, Character degrees and derived length of a solvable group, E.M. 
Zhmud’, On finite groups having an irreducible complex character with one class of zeros, Soviet Math. Dokl. 20 (1979) 795-797.\n\nSource: http://people.math.carleton.ca/~jdixon/OneZero.pdf" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.873626,"math_prob":0.9580611,"size":11473,"snap":"2021-21-2021-25","text_gpt3_token_len":3257,"char_repetition_ratio":0.13418782,"word_repetition_ratio":0.035818007,"special_character_ratio":0.26401114,"punctuation_ratio":0.11980033,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996324,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-25T06:02:50Z\",\"WARC-Record-ID\":\"<urn:uuid:1d4c7ea6-89bb-403e-beff-25fcb04e2ea4>\",\"Content-Length\":\"15790\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25eefd18-e361-44aa-9851-ed184daafd71>\",\"WARC-Concurrent-To\":\"<urn:uuid:a0f60099-c41c-4c6f-a92b-82faf48a0974>\",\"WARC-IP-Address\":\"172.67.162.46\",\"WARC-Target-URI\":\"https://medicpdf.com/p/people.math.carleton.ca1.html\",\"WARC-Payload-Digest\":\"sha1:F5GRUDQEFBAWGXLI4VYW46YBYD7N5R27\",\"WARC-Block-Digest\":\"sha1:R255UWLT45NGZ3HIWISFVZQQ4FYUYQAA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487622113.11_warc_CC-MAIN-20210625054501-20210625084501-00456.warc.gz\"}"}
https://math.stackexchange.com/questions/347544/showing-that-if-p-k2n1-with-k-odd-then-the-jacobi-symbol-frackp
[ "# Showing that if $p = k2^n+1$ with $k$ odd, then the Jacobi symbol $(\\frac{k}{p})$ equals $1$\n\nI am observing if $p = k2^n+1$ (a Proth number), $k$ is odd, there is always an integer $x$, such that $k = x^2 \\bmod p$, i.e. the Jacobi symbol $(\\frac{k}{p})$ is always $1$. Can someone give a formal proof?\n\nThank you.\n\n• k has to be odd, so p is a Proth Number – Kurtul Mar 31 '13 at 19:45\n• Never heard that term before. (checks wikipedia). Did you mean $k2^n+1$? – user14972 Mar 31 '13 at 19:46\n• Correct I edited the Question – Kurtul Mar 31 '13 at 19:54\n\nThis is an easy exercise in Quadratic reciprocity:\n\n$$\\left(\\frac{k}{k2^n + 1}\\right) = \\mu \\left( \\frac{k2^n+1}{k} \\right) = \\mu \\left( \\frac{1}{k} \\right) = \\mu$$\n\nand so we just have to figure out what $\\mu$ is. When the numerator and denominator are odd, the general theorem says that $\\mu$ is $1$ except in the case where both the numerator and denominator are $3 \\bmod 4$, in which case $\\mu$ is $-1$.\n\nFor $n \\geq 2$, the denominator is $1 \\bmod 4$, so $\\mu$ is $1$. But what about $n=1$?\n\nIt turns out your conjecture is not true, and we can easily produce a counterexample: for $k=3$ and $n=1$, we see that $3$ is not a square modulo $7$.\n\nTo be a Proth number, wikipedia says that we're supposed to have $2^n > k$. If you restrict to this case (and everything being positive), then $k=1$ is the only allowed option when $n=1$, and in this case $1$ is a square modulo $3$.\n\n• Thank you, So for the case n>3, μ is 1, and the conjecture is true. – Kurtul Mar 31 '13 at 20:36" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85611886,"math_prob":0.99998426,"size":1318,"snap":"2019-51-2020-05","text_gpt3_token_len":431,"char_repetition_ratio":0.10350076,"word_repetition_ratio":0.32231405,"special_character_ratio":0.35280728,"punctuation_ratio":0.12,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000049,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-25T02:21:10Z\",\"WARC-Record-ID\":\"<urn:uuid:c4a6e4a7-a8cc-478b-b450-14293855cbeb>\",\"Content-Length\":\"141254\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b6e06e74-c38f-41b1-8e23-e59e5ec4c959>\",\"WARC-Concurrent-To\":\"<urn:uuid:92c03b1f-b065-444d-b22f-7a09b581390f>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/347544/showing-that-if-p-k2n1-with-k-odd-then-the-jacobi-symbol-frackp\",\"WARC-Payload-Digest\":\"sha1:DM4WAWEEBFUSI7255PJBZ4KE73LEAZ4H\",\"WARC-Block-Digest\":\"sha1:OCENSSF4GB4EBJ2YDMY2IRMXXNGDXARN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250628549.43_warc_CC-MAIN-20200125011232-20200125040232-00295.warc.gz\"}"}
http://mathandmultimedia.com/2012/09/page/5/
[ "## How to Generate Colored Latex For Your Blog or Website\n\nSince I started this blog, I developed a habit of drawing my illustrations in bright colors. Recently, In my most recent post, I have generated colored Latex to hopefully further illuminate mathematical ideas. In this post, I am going to teach you how to generate it for your blog or website.\n\nAs of this writing, I believe that WordPress does not yet support colored Latex, so we will use Codecogs.com latex generator. Here is a sample output of the colored latex generated from the mentioned site.  » Read more\n\n## Math Word Problems: Solving Number Problems Part 2\n\nThis is the continuation of the previous post on solving number problems. In this post, I will give three more examples on how to solve word problems about numbers.\n\nPROBLEM 4\n\nOne number is smaller than the other by", null, "$12$. Their sum is", null, "$102$. What is the smaller number?\n\nSolution\n\nIn the previous post, we talked about two numbers, one is being larger than the other. In this problem, the other number is smaller. If a number is", null, "$15$, and the other number is", null, "$6$ smaller than it, then that number is", null, "$15 - 6$. So, in the problem above, if we let", null, "$n$ be the number, then", null, "$n - 12$ is the smaller number.  Again, their sum is", null, "$102$, so we can now set up the equation » Read more\n\n## The 2011 Most Popular Posts\n\nI found a new plugin that adds the number of shares (Tweets, Facebook likes, and Google+ plus 1’s) and I  thought that I should share to you last year’s most shared posts. Below are the 7 most popular posts with their corresponding number of shares.", null, "" ]
[ null, "http://s0.wp.com/latex.php", null, "http://s0.wp.com/latex.php", null, "http://s0.wp.com/latex.php", null, "http://s0.wp.com/latex.php", null, "http://s0.wp.com/latex.php", null, "http://s0.wp.com/latex.php", null, "http://s0.wp.com/latex.php", null, "http://s0.wp.com/latex.php", null, "http://www.linkwithin.com/pixel.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9024856,"math_prob":0.98559934,"size":565,"snap":"2020-10-2020-16","text_gpt3_token_len":117,"char_repetition_ratio":0.13012478,"word_repetition_ratio":0.0,"special_character_ratio":0.19646017,"punctuation_ratio":0.10810811,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98455733,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-25T13:30:57Z\",\"WARC-Record-ID\":\"<urn:uuid:590d71ac-8559-40ad-8824-bab687a658cc>\",\"Content-Length\":\"40964\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0cc1c7e8-3c92-4d81-9864-5106f7a4cf1e>\",\"WARC-Concurrent-To\":\"<urn:uuid:ae055810-a172-498c-b554-4959f4e687db>\",\"WARC-IP-Address\":\"184.168.164.1\",\"WARC-Target-URI\":\"http://mathandmultimedia.com/2012/09/page/5/\",\"WARC-Payload-Digest\":\"sha1:2M6AO7CD6UYXWJ2DCPWG7BRN2V7UKP46\",\"WARC-Block-Digest\":\"sha1:XNQDXO5RDD2IGRDXETNMYOQKUKTKWPUS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146066.89_warc_CC-MAIN-20200225110721-20200225140721-00321.warc.gz\"}"}
http://slideplayer.com/slide/777291/
[ "", null, "Overview of Lecture Factorial Designs Experimental Design Names\n\nPresentation on theme: \"Overview of Lecture Factorial Designs Experimental Design Names\"— Presentation transcript:\n\nOverview of Lecture Factorial Designs Experimental Design Names Partitioning the Variablility The Two-Way Between Groups ANOVA Evaluating the Null Hypotheses Main effects Interactions Analytical Comparisons\n\nFactorial Design Much experimental psychology asks the question: What effect does a single independent variable have on a single dependent variable? It is quite reasonable to ask the following question as well. What effects do multiple independent variables have on a single dependent variable? Designs which include multiple independent variables are known as factorial designs.\n\nAn example factorial design\nIf we were looking at GENDER and TIME OF EXAM, these would be two independent factors GENDER would only have two levels: male or female TIME OF EXAM might have multiple levels, e.g. morning, noon or night This is a factorial design\n\nExperimental Design Names\nThe name of an experimental design depends on three pieces of information The number of independent variables The number of levels of each independent variable The kind of independent variable Between Groups Within Subjects (or Repeated Measures)\n\nExperimental design names\nIf there is only one independent variable then The design is a one-way design (e.g. does coffee drinking influence exam scores) If there are two independent variables The design is a two-way design (e.g. does time of day or coffee drinking influence exam scores). If there are three independent variables The design is a three-way design (e.g. does time of day, coffee drinking or age influence exam scores).\n\nExperimental Design Names\nIf there are 2 levels of the first IV and 3 levels of the second IV It is a 2x3 design E.G.: coffee drinking x time of day Factor coffee has two levels: cup of coffee or cup of water Factor time of day has three levels: morning, noon and night If there are 3 levels of the first IV, 2 levels of the second IV and 4 levels of the third IV It is a 3x2x4 design E.G.: coffee drinking x time of day x exam duration Factor coffee has three levels: 1 cup, 2 cup 3 cups Factor time of day has two levels: morning or night Factor exam duration has 4 levels: 30min, 60min, 90min, 120min\n\nExperimental Design Names\nIf all the IVs are between groups then It is a Between Groups design If all the IVs are repeated measures It is a Repeated Measures design If at least one IV is between groups and at least one IV is repeated measures It is a Mixed or Split-Plot design\n\nExperimental design names\nThree IVs IV 1 is between groups and has two levels (e.g. a.m., p.m.) IV 2 is between groups and has two levels (e.g. coffee, water). IV 3 is repeated measures and has 3 levels (e.g. 1st year, 2nd year and 3rd year). 
The design is: A three-way (2x2x3) mixed design.\n\nExperimental design names\nThe effect of a single variable is known as a main effect The effect of two variables considered together is known as an interaction For the two-way between groups design, an F-ratio is calculated for each of the following: The main effect of the first variable The main effect of the second variable The interaction between the first and second variables\n\nAnalysis of a 2-way between groups design using ANOVA\nTo analyse the two-way between groups design we have to follow the same steps as the one-way between groups design State the Null Hypotheses Partition the Variability Calculate the Mean Squares Calculate the F-Ratios\n\nNull Hypotheses There are 3 null hypotheses for the two-way (between groups design. The means of the different levels of the first IV will be the same, e.g. The means of the different levels of the second IV will be the same, e.g. The differences between the means of the different levels of the interaction are not the same, e.g.\n\nAn example null hypothesis for an interaction\nThe differences betweens the levels of factor A are not the same.\n\nPartitioning the variability\nIf we consider the different levels of a one-way ANOVA then we can look at the deviations due to the between groups variability and the within groups variability. If we substitute AB into the above equation we get This provides the deviations associated with between and within groups variability for the two-way between groups design.\n\nPartitioning the variability\nThe between groups deviation can be thought of as a deviation that is comprised of three effects. In other words the between groups variability is due to the effect of the first independent variable A, the effect of the second variable B, and the interaction between the two variables AxB.\n\nPartitioning the variability\nThe effect of A is given by Similarly the effect of B is given by The effect of the interaction AxB equals which is known as a residual\n\nThe sum of squares The sums of squares associated with the two-way between groups design follows the same form as the one-way We need to calculate a sum of squares associated with the main effect of A, a sum of squares associated with the main effect of B, a sum of squares associated with the effect of the interaction. From these we can estimate the variability due to the two variables and the interaction and an independent estimate of the variability due to the error.\n\nThe mean squares In order to calculate F-Ratios we must calculate an Mean Square associated with The Main Effect of the first IV The Main Effect of the second IV The Interaction. The Error Term\n\nThe mean squares The main effect mean squares are given by: The interaction mean squares is given by: The error mean square is given by:\n\nThe F-ratios The F-ratio for the first main effect is: The F-ratio for the second main effect is: The F-ratio for the interaction is:\n\nAn example 2x2 between groups ANOVA\nFactor A - Lectures (2 levels: yes, no) Factor B - Worksheets (2 levels: yes, no) Dependent Variable - Exam performance (0…30) Mean Std Error LECTURES WORKSHEETS yes 19.200 2.04 no 25.000 1.23 16.000 1.70 9.600 0.81\n\nResults of ANOVA When an analysis of variance is conducted on the data (using Experstat) the following results are obtained Source Sum of Squares df Mean Squares F p A (Lectures) 1 37.604 0.000 B (Worksheets) 0.450 0.039 0.846 AB 16.178 0.001 Error 16 11.500\n\nWhat does it mean? 
- Main effects\nA significant main effect of Factor A (lectures) “There was a significant main effect of lectures (F1,16=37.604, MSe=11.500, p<0.001). The students who attended lectures on average scored higher (mean=22.100) than those who did not (mean=12.800). No significant main effect of Factor B (worksheets) “The main effect of worksheets was not significant (F1,16=0.039, MSe=11.500, p=0.846)”\n\nWhat does it mean? - Interaction\nA significant interaction effect “There was a significant interaction between the lecture and worksheet factors (F1,16=16.178, MSe=11.500, p=0.001)” However, we cannot at this point say anything specific about the differences between the means unless we look at the null hypothesis Many researches prefer to continue to make more specific observations. Mean Std Error LECTURES WORKSHEETS yes 19.200 2.04 no 25.000 1.23 16.000 1.70 9.600 0.81\n\nSimple main effects analysis\nWe can think of a two-way between groups analysis of variance as a combination of smaller one-way anovas. The analysis of simple main effects partitions the overall experiment in this way Worksheets Yes No Lectures (Yes) Worksheets Worksheets Yes No Yes No Lectures (No) Yes Lectures No Worksheets (yes) Worksheets (no) Yes Yes Lectures Lectures No No\n\nResults of a simple main effects analysis\nUsing ExperStat it possible to conduct a simple main effects analysis relatively easily Source of Sum of df Mean F p Variation Squares Squares Lectures at worksheets(yes) worksheets(no) Error Term Worksheets at lectures(yes) lectures(no)\n\nWhat does it mean? - Simple main effects of Lectures\nNo significant simple main effect of lectures at worksheets (yes) “There was no significant difference between those students who did attend lectures (mean=19.20) or did not attend lectures (mean=16.00) when they completed worksheets (F1,16=2.226, MSe=11.500, p=0.155).” Significant simple main effect of lectures at worksheets (no) “There was a significant difference between those students who did attend lectures (mean=25.00) or did not attend lectures (mean=9.60) when they did not complete worksheets (F1,16=51.557, MSe=11.500, p<0.001). When students who attended lectures did not complete worksheets they scored higher on the exam than those students who neither attended lectures nor completed worksheets.”\n\nWhat does it mean? - Simple main effects of worksheets\nSignificant simple main effect of worksheets at lectures (yes) “There was a significant difference between those students who did complete worksheets (mean=19.20) or did not complete worksheets (mean=25.00) when they attended lectures (F1,16=7.313, MSe=11.500, p=0.016). Students who attended the lectures and completed worksheets did less well than those students who attended lectures but did not complete the worksheets.” Significant simple main effect of worksheets at lectures (no) “There was a significant difference between those students who did complete worksheets (mean=16.00) or did not complete worksheets (mean=9.60) when they did not attend lectures (F1,16=51.557, MSe=11.500, p<0.001). When students who attended lectures did not complete worksheets they scored higher on the exam than those students who neither attended lectures nor completed worksheets.”\n\nAnalytic comparisons in general\nIf there are more than two levels of a Factor And, if there is a significant effect (either main effect or simple main effect) Analytical comparisons are required. 
Post hoc comparisons include Tukey tests, the Scheffé test, or t-tests (Bonferroni corrected)." ]
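The 2×2 worked example in the slides (lectures × worksheets, five scores per cell, MSe = 11.5 with 16 error df) can be reproduced in outline with any ANOVA routine. A hedged sketch using statsmodels follows; the raw scores are invented to match only the reported cell means (19.2, 25.0, 16.0, 9.6), so the F values will not match the deck exactly.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative scores only: chosen to reproduce the slides' cell means,
# not their variances, since the raw data are not given in the deck.
data = pd.DataFrame({
    "lectures":   ["yes"] * 10 + ["no"] * 10,
    "worksheets": (["yes"] * 5 + ["no"] * 5) * 2,
    "exam": [19, 21, 18, 20, 18,   24, 26, 25, 24, 26,    # lectures = yes
             16, 15, 17, 16, 16,    9, 10, 10,  9, 10],   # lectures = no
})

# One F-ratio each for the two main effects and the interaction,
# all tested against the same within-cells (error) mean square.
model = ols("exam ~ C(lectures) * C(worksheets)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```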
[ null, "http://slideplayer.com/static/blue_design/img/slide-loader4.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9222295,"math_prob":0.97854435,"size":9917,"snap":"2019-43-2019-47","text_gpt3_token_len":2302,"char_repetition_ratio":0.16271563,"word_repetition_ratio":0.13993809,"special_character_ratio":0.23495008,"punctuation_ratio":0.09752322,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99106634,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-16T14:56:46Z\",\"WARC-Record-ID\":\"<urn:uuid:be5d2e87-157c-4b10-9b7a-d7cdae68b7cd>\",\"Content-Length\":\"187252\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cf650a34-a45c-416f-bff3-98cbce40eea1>\",\"WARC-Concurrent-To\":\"<urn:uuid:51070fae-cc39-4f01-8665-43628c5be7f7>\",\"WARC-IP-Address\":\"144.76.224.208\",\"WARC-Target-URI\":\"http://slideplayer.com/slide/777291/\",\"WARC-Payload-Digest\":\"sha1:Y3PV2I4NLHNAH5MSOCB6LZ7Z23CM6HRG\",\"WARC-Block-Digest\":\"sha1:XXQRQYEUALKQXSTHSGLTNDS3KI5HWW4G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986668994.39_warc_CC-MAIN-20191016135759-20191016163259-00115.warc.gz\"}"}
https://mathoverflow.net/questions/69461/on-the-difference-between-two-concepts-of-even-cardinalities-is-there-a-model-o?noredirect=1
[ "# On the difference between two concepts of even cardinalities: Is there a model of ZF set theory in which every infinite set can be split into pairs, but not every infinite set can be cut in half?\n\nAn interesting question has arisen over at this math.stackexchange question about two concepts of even in the context of infinite cardinalities, which are equivalent under the axiom of choice, but which it seems might separate when choice fails.\n\nOn the one hand, a set $A$ can be even in the sense that it can be split into pairs, meaning that there is a partition of $A$ into sets of size two, or in other words, if there is an equivalence relation on $A$, such that every equivalence class has exactly two elements.\n\nOn the other hand, a set $A$ can be even in the sense that it can be cut in half, meaning that $A$ is the union of two disjoint sets that are equinumerous.\n\nNote that if $A$ can be cut in half, then it can be split into pairs, since if $A=A_0\\sqcup A_1$ and $f:A_0\\cong A_1$ is a bijection, then $A$ is the union of the family of pairs $\\{x,f(x)\\}$ for $x\\in A_0$. And this argument does not use the axiom of choice.\n\nConversely, if $A$ can be split into pairs, and if we have the axiom of choice for sets of pairs, then we may select one element from each pair, and $A$ is the union of this choice set and its complement in $A$, which are equinumerous.\n\nThus, when the axiom of choice for sets of pairs holds, then the two concepts of even are equivalent. Note also that every infinite well-orderable set is even in both senses, and so in ZFC, every infinite set is even in both senses. My question is, how bad can it get when choice fails?\n\n• Is there a model of ZF in which every infinite set can be split into pairs, but not every infinite set can be cut in half?\n\n• Is there a model of ZF having at least one infinite set that can be split into pairs, but not cut in half?\n\n• What is the relationship between the equivalence of the two concepts of even and the axiom of choice for sets of pairs?\n\n• Joel: There is a nice reference. The theme of your question is precisely addressed in the paper by Blass et al. I mention in this answer: mathoverflow.net/questions/63596/… Jul 4 '11 at 14:40\n• Thanks very much, Andres! Perhaps you can post as an answer? Jul 5 '11 at 10:51\n• @Andres: Thanks for pointing out my paper. Having often benefited from the convention in mathematics that authors are listed in alphabetic order, I must, in fairness, point out that this paper is \"Blair et al.\" My co-authors are David Blair (who did part of this work in an REU project) and Paul Howard. Jul 6 '11 at 0:14\n• @Theo: I'd omit \"distinguished\" in both versions. In the cut-in-half case, from the mere existence of a bijection, we can conclude the existence of a splitting into pairs. Every bijection can be explicitly converted into a partition into pairs. (If we had a distinguished bijection, that would give a distinguished partition into pairs, but that's not required by the definition of split-into-pairs.) On the other hand, from a partition into pairs and (therefore) the existence of bijections to {0,1} for each of these pairs, I can't conclude much without choice. 
Jul 7 '11 at 2:32\n• I kinda liked the \"MathUnderflow\" nickname for math.SE :-) Dec 18 '11 at 22:55\n\n(I have removed my CW answer since it was irrelevant and somewhat wrong, I just did not know that back then.)\n\nFirst we'll answer the easiest question: We shall construct a model with a 2-amorphous set, which is an amorphous set which can be partitioned into pairs but cannot be cut into two infinite sets.\n\nWe start with a model of ZFA where the set of atoms is countable (for instance), write it $A=\\coprod_n P_n$ where the $P_n$ are pairs.\n\nNow take $\\mathscr G$ to be a group of permutations such that $\\pi P_n = P_k$ (that is, the permutation respect this partition), along with the ideal of finite subsets for support. Let $\\mathfrak A$ be the permutation model defined by those permutations and the chosen support.\n\nClaim I: If $B\\subseteq A$ is infinite and in the permutation model then only for finitely many $P_n$ we have $|P_n\\cap B|=1$.\n\nProof: Assume by contradiction, let $E$ be a finite support of $B$ and $P_k=\\{a,b\\}$ a pair which meet $B$ at a single point (suppose $a\\in B$), as well $P_k\\cap E=\\varnothing$. Define $\\pi(a)=b, \\pi(b)=a$ and $\\pi(x)=x$ otherwise. It is clear that $\\pi$ fixes $E$, but $\\pi B\\neq B$. $\\square$\n\nClaim II: If $B\\subseteq A$ is infinite and in the permutation model, then it is cofinite.\n\nProof: Suppose otherwise, let $E$ be a support of $B$ and $F$ be a support of $A\\setminus B$. We can assume without loss of generality that $E=F$ (take union of both otherwise). By the previous claim we have $\\{a_k,b_k\\}=P_k\\subseteq B$ and $\\{a_n,b_n\\}=P_n\\subseteq A\\setminus B$ such that $E\\cap (P_n\\cup P_k)=\\varnothing$. Take $\\pi$ to be a permutation for which $\\pi(x_k)=x_n, \\pi(x_n)=x_k$ (for $x=a,b$) and the identity otherwise.\n\nAgain it is clear that $\\pi$ fixes $E$ alas $\\pi E\\neq E$, which is a contradiction. $\\square$\n\nClaim III: The partition $\\mathbb P = \\{P_n\\}$ is in the permutation model (however clearly not a countable set there).\n\nProof: It is clear that every permutation in our chosen group has $\\pi(\\mathbb P)=\\mathbb P$. $\\square$\n\nUse any transfer theorem (Jech-Sochor, Pincus, etc.) to have this in a model of ZF, thus answering the question that it is in fact possible to have a set which can be split to pairs but not cut in half.\n\nOne can take instead a variation on the second Cohen model. In this model we add countably many real numbers indexed as $\\{x_{n,\\varepsilon,i}\\mid n,i\\in\\omega,\\varepsilon\\in 2\\}$. We then take $X_{n,\\varepsilon}=\\{x_{n,\\varepsilon,i}\\mid i\\in\\omega\\}$ and $P_n=\\{X_{n,0}, X_{n,1}\\}$.\n\nCohen's took permutations of $\\omega\\times 2\\times\\omega$ which preserve the $P_n$'s. We can instead take permutations which only preserve the partition and have the same result as above. This model, I believe should answer positively the first question (I cannot see why, yet).\n\nIn this model we have that the collection of the $X_{n,\\varepsilon}$ can be split into pairs, however the index set of these pairs need not be split into pairs itself. Indeed such splitting would induce a partition into sets of $4$ elements, which would also be splittable and so inducing a partition of $8$ elements, and so on to have every $2^n$ size of partition. 
I doubt that this is the case in the variant I have described above.\n\nIt also remains to check these properties on the set of all our generic reals - whether it is countable, or uncountable (but not well-orderable), and can that collection be split too?\n\n• Isn't the model you describe in the last two paragraphs the same as the one used to prove the Jech-Sochor and Pincus transfer theorems? I guess the general proof of the transfer theorems would use larger cardinals in place of $\\omega$, but that would be needed only to transfer statements that look at higher ranks in the permutation model. Dec 19 '11 at 21:17\n• @Andreas: It is possible, yes. However the assertion that every infinite set is splittable would be an unbounded statement, so even if I did come up with a way to prove that in the permutation model - I don't see an immediate argument why any of the transfer theorems (which I know, at least) can take it into a model of ZF. Dec 19 '11 at 21:23\n\nHere is a somewhat informal example which seems relevant: We know that the binomial coefficient $\\binom{n}{2}=\\frac{n(n-1)}2$ is an integer because the numerator is even. One explanation is \"well, $n$ is even or else $n-1$ is even so either way the numerator is even.\" A combinatorial view is that this says that $(\\mathbf{n})_2$ is even where $\\mathbf{n}=\\lbrace0,1,\\cdots,n-1\\rbrace$ and for each set $S$ we let $(S)_2=\\lbrace(a,b) \\mid a \\ne b\\rbrace.$\n\nIn what sense is it true that $(S)_2$ is \"even\" for arbitrary sets $S?$ In particular is it different for the two notions of \"even\"?\n\nThere is an obvious way to split $(S)_2$ into couples. (I changed the phrase since the elements of $(S)_2$ are themselves pairs) If we assume that $S$ has a distinguished order then there is a natural way to cut $(S)_2$ in half. Otherwise there is not a clear way to do it.\n\nConsider the question\n\nIn what sense is $S \\cup (S \\times S)$ even?\n\nwe can couple $s$ with the pair $(s,s)$ and treat the remaining pairs as before. Again, cutting in half seems less natural.\n\nI am not as sure just how tie this into formal models of ZF. (topoi? equivariant sets?) Maybe Asaf's answer which deserves closer scrutiny than I have given it.\n\nlater thoughts\n\n1) It would seem to me that saying $(S)_2$ can always be cut in half should be equivalent to the axiom of choice for 2-element sets. I suppose that this is not so clear because it depends on the intuition that $(a,b)$ and $(b,a)$ would never be in the same half. It is hard to imagine how they could be if we have no special knowledge about $S$ but maybe it does not follow axiomatically.\n\n2) I don't think that this is what Theo was asking about in his comment, but does cutting $V$ in half mean that we have an ordered pair $(V_1,V_2)$ of disjoint sets with union $V$ and a bijection OR does it mean that we have a set $\\lbrace Y,Z\\rbrace$ of disjoint sets with union $V$ and a bijection? In other words, does it say that we have a directed graph with vertex set $V$ and total degree (in plus out) $1$ or does it mean we have an undirected graph with vertex set $V$ regular of degree 1? In the former case we do have choice for pairs.\n\n3) I am reminded of John Conways work on Effective implications between the \"finite\" choice axioms\n\n• Thanks for your answer Aaron. Regarding your question (2), for a set $V$ to be cut in half means that there are disjoint sets $V_1$ and $V_2$ whose union is $V$, such that there is a bijection between $V_1$ and $V_2$. 
You don't need AC to pick this bijection, since this is just one choice. I agree completely with Andreas's remarks on Theo's comment. Dec 19 '11 at 20:45\n• So could we say that for a set $S=\\lbrace a,b \\rbrace$ that the set $V=\\lbrace (a,b),(b,a) \\rbrace$ can be split into a pair of disjoint sets $(\\lbrace(x,y)\\rbrace,\\lbrace(y,x)\\rbrace)$ where $x$ is either $a$ or $b$? and could we say that then we had a made a choice of $x$ (say) from $S$. Dec 19 '11 at 23:27\n• Well, in this case, yes, but choice on finite families is provable without any extra axioms, and we could have equivalently asked for an unordered pair when splitting in half, rather than an ordered pair (it is equivalent). Nevertheless, I share your sense that the general possibility is related to the principle of choice for pairs (and this is my third question), but don't see a proof yet. Dec 20 '11 at 2:16" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9513337,"math_prob":0.99407816,"size":1806,"snap":"2021-43-2021-49","text_gpt3_token_len":448,"char_repetition_ratio":0.13596004,"word_repetition_ratio":0.16081871,"special_character_ratio":0.23975636,"punctuation_ratio":0.09898477,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997924,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-18T04:44:05Z\",\"WARC-Record-ID\":\"<urn:uuid:d35f753b-0770-4d00-9c8a-64e8ec8a0642>\",\"Content-Length\":\"166968\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:61a024cc-ea23-48cc-a013-7675faa40114>\",\"WARC-Concurrent-To\":\"<urn:uuid:4167e0ee-592c-41dd-bd13-a50bd0875014>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/69461/on-the-difference-between-two-concepts-of-even-cardinalities-is-there-a-model-o?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:ENES73BIOSCP7KIZYMA5B3A4MGDAWVBO\",\"WARC-Block-Digest\":\"sha1:TKDNSWWMZGUKKFLCDTOA2L3D22INLJLT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585196.73_warc_CC-MAIN-20211018031901-20211018061901-00660.warc.gz\"}"}
https://search.r-project.org/CRAN/refmans/addScales/html/revert.html
[ "## Revert A Scaled Trellis Plot To Its Previous Unscaled Form\n\n### Description\n\nS3 generic and scaledTrellis method to remove all scaling information from a `scaledTrellis` object, returning the prior unscaled trellis object.\n\n### Usage\n\n```revert(obj,...)\n## S3 method for class 'scaledTrellis'\nrevert(obj, ...)\n```\n\n### Arguments\n\n `obj` An object inheriting from class `scaledTrellis`. `...` Currently ignored\n\n### Details\n\nReturns the last version of the `trellis` object with all `addScales` scales and legends removed. Note that this is not the original `trellis` object if that was subsequently modified by `update` calls. See the examples.\n\n### Value\n\nA `trellis` object that can be printed/plotted as usual.\n\n### Author(s)\n\nBert Gunter [email protected]\n\n`update.scaledTrellis`\n\n### Examples\n\n```## Using simple artificial data\nset.seed (2233)\nx <- rep(1:10,4)\ny <- rnorm(40, mean = rep(seq(10, 25, by = 5), each = 10),\nsd = rep(1:4, each = 10))\nf <- rep(c(\"AA\",\"BB\",\"CC\",\"DD\"), each = 10)\n##\n## trellis plot the data with \"free\" y axis sxaling\norig <- xyplot(y ~ x|f, type = c(\"l\",\"p\"), col.line = \"black\",\nscales = list(alternating =1,\ny = list(relation = \"free\")),\nas.table = TRUE,\nlayout = c(2,2),\nmain = \"revert() Example\"\n)\n## Plot it\norig\n\n## Remove the y axis scales and add horizontal scalelines\norig <- update(orig, scales = list(alternating =1,\ny = list(relation = \"free\", draw = FALSE)))\n## Plot it\nupd1\nclass(upd1)\n\n## revert\nupd2 <- revert(upd1)\n## Plot it\nupd2\nclass(upd2)\n\n## clean up\nrm(x, y, f, orig, upd1, upd2)\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6338553,"math_prob":0.9360926,"size":1377,"snap":"2021-43-2021-49","text_gpt3_token_len":415,"char_repetition_ratio":0.119446464,"word_repetition_ratio":0.036036037,"special_character_ratio":0.3318809,"punctuation_ratio":0.1904762,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98227227,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T09:08:11Z\",\"WARC-Record-ID\":\"<urn:uuid:23eb577c-ead4-4e8a-a3d7-9638e4354585>\",\"Content-Length\":\"3308\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df5e4724-169c-42e8-9bd8-143f7b64a34b>\",\"WARC-Concurrent-To\":\"<urn:uuid:ebaab0aa-c2f2-4bc5-80b9-5016e1be56bd>\",\"WARC-IP-Address\":\"137.208.57.46\",\"WARC-Target-URI\":\"https://search.r-project.org/CRAN/refmans/addScales/html/revert.html\",\"WARC-Payload-Digest\":\"sha1:TJRZUPQGX2TXYHDOUV6LSCECDYNYGGOE\",\"WARC-Block-Digest\":\"sha1:CGPTYRDJLVEQFXACNUSJGGF2Y2YYXVJT\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588113.25_warc_CC-MAIN-20211027084718-20211027114718-00411.warc.gz\"}"}
https://programmingshots.com/better-precision-and-faster-index-building-in-annoy-%C2%B7-erik-bernhardsson/amp/
[ "# Better precision and faster index building in Annoy · Erik Bernhardsson\n\nSometimes you have these great insights.A few days ago I Thoughts How to improve index building Annoying..\n\nFor those unfamiliar with Annoy, this is a C ++ library with Python bindings that provides fast, high-dimensional nearest neighbor search.\n\nAnnoy recursively builds the tree by specifying a set of points. The algorithm so far was: At all levels, select a random hyperplane from all possible hyperplanes that intersect the convex hull given by the set of points. The hyperplane defines how to divide a set of points into two subsets. Recursively apply the same algorithm to each subset until there are few sets of points.\n\nA much smarter way is this: Sampling two points from a set of points, Calculate hyperplanes equidistant from those points and use this hyperplane to divide the point set.\n\n(I explained what happens with Euclidean distance. The angles are almost the same, but a little simpler).\n\nImplementing this turn will create an index 4x faster For Euclidean distance. But more importantly Greatly improved search quality, Both angular distance and Euclidean distance. This difference is especially noticeable in higher dimensional spaces.\n\nSummarized test It measures the accuracy of the above nearest neighbor search GloVe pre-trained vector Use hard-coded values ​​for various parameters (10 trees, 10 nearest neighbors). See below:", null, "", null, "this is, Commits are actually red rather than green – The new algorithm is much simpler and I was able to remove a lot of old ones that I no longer need.\n\nThe intuitive reason why this works so well is to think about what happens when there are 200 dimensions, but the data is actually “almost”, for example, in a low dimensional space of 20 dimensions. Annoy then finds a more consistent division of the distribution of the data. I think these cases are quite common in higher dimensional spaces.\n\nI also fixed another Randomness problem This looks pretty serious (though I don’t think it really happened) and I’ve added a unit test that runs the f = 25 test shown in the graph above.\n\nNew 1.3.1 version is out PyPI And Github – Get it while it’s hot!\n\nTagged:" ]
[ null, "data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQwMCIgd2lkdGg9IjEyMDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQwMCIgd2lkdGg9IjEyMDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91569597,"math_prob":0.9034163,"size":2103,"snap":"2021-43-2021-49","text_gpt3_token_len":437,"char_repetition_ratio":0.09909481,"word_repetition_ratio":0.0,"special_character_ratio":0.20161673,"punctuation_ratio":0.0959596,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96698594,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-22T10:30:51Z\",\"WARC-Record-ID\":\"<urn:uuid:26217500-c727-4070-8899-dfc32bfd57fc>\",\"Content-Length\":\"29425\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f7d8b4d9-1b61-4448-924b-20dfe3a27b8d>\",\"WARC-Concurrent-To\":\"<urn:uuid:37d5e8b5-c99a-47f2-a6c4-c16c389efe6c>\",\"WARC-IP-Address\":\"172.67.220.239\",\"WARC-Target-URI\":\"https://programmingshots.com/better-precision-and-faster-index-building-in-annoy-%C2%B7-erik-bernhardsson/amp/\",\"WARC-Payload-Digest\":\"sha1:DHXCPFSXHBKXPNP3LRHYMOWFDBITXARH\",\"WARC-Block-Digest\":\"sha1:ACQXK5PY2BUMEBBCAXFBJXUMZVJMS3BS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585504.90_warc_CC-MAIN-20211022084005-20211022114005-00000.warc.gz\"}"}
http://www.investingforbeginners.eu/internal_rate_of_return
[ "Investing for Beginners .EU, investing", null, "The lack of money is the root of all evil.Mark Twain\n\nInvestment Dictionary\n\nBrowse by search:\n\nBrowse by Letter: A B C D E F G H I J K L M N O P Q R S T U V W X Y Z All\n\nInternal Rate of Return\n\nAn internal rate of return (IRR) is a ratio used very often to measure a profitability of some investment project. IRR is determined as a discount rate when NPV of the project is equal to zero. If IRR is higher than cost of capital, then investment should be proceeded, but if cost of capital is higher than IRR of the project, then investment should be rejected, because it would destroy the value for shareholders, but not maximize it.\n\nInternal rate of return is very popular among managers that are dilettantes in finance field because of this method simplicity. (The easiest way to calculate IRR is to use MS Excel’s IRR function).\n\nDespite the simplicity of the IRR it should not be the only criteria for filtering investment projects. It would be advisable to calculate NPV, sensitivity analysis, riskiness and internal costs, alternative costs, future profitability ratios and dependability on KPIs, perform future cash flow analysis, analyze impact of changes in competition, fulfill market forecasts and other methods of financial analysis.\n\nDespite the fact that IRR in many cases is over trusted, it is the one of most complete ratios compared to its simplicity. One of the disadvantages of this ratio is that internal rate of return do not adjusts to time effect to the value of money if an IRR isn’t positive (that may be important in more specific cases). However, IRR is a perfect ratio for fast calculation when numbers are clear. Another, yet simplified version of return rate is CAGR, which is calculated as geometric mean too.\n\nSee an example of internal rate of return calculation (using Excel formula IRR – input Net cash flow):\n\n Year 1 Year 2 Year 3 Year 4 IRR Investments 500 200 0 0 Return 0 100 300 500 Net cash flow -500 -100 300 500 12%\n\nRead about other ratios or method’s that can be used in return analysis:", null, "", null, "" ]
[ null, "http://www.investingforbeginners.eu/img/logo.jpg", null, "http://ssl.gstatic.com/images/icons/gplus-16.png", null, "http://www.investingforbeginners.eu/img/funtest_white.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91475844,"math_prob":0.9539752,"size":2008,"snap":"2019-43-2019-47","text_gpt3_token_len":411,"char_repetition_ratio":0.11576846,"word_repetition_ratio":0.0,"special_character_ratio":0.187251,"punctuation_ratio":0.08539945,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98220015,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T18:31:00Z\",\"WARC-Record-ID\":\"<urn:uuid:ec708209-5f73-4d72-afdb-0224c94722c3>\",\"Content-Length\":\"18662\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f7f6600c-323a-4ac1-ae02-173c887d5b00>\",\"WARC-Concurrent-To\":\"<urn:uuid:26f9562d-688a-408c-9fbf-f7b29983df51>\",\"WARC-IP-Address\":\"194.135.87.142\",\"WARC-Target-URI\":\"http://www.investingforbeginners.eu/internal_rate_of_return\",\"WARC-Payload-Digest\":\"sha1:LF5A5YD4747ESU6IJQ7W4D7LLYWFVCRZ\",\"WARC-Block-Digest\":\"sha1:K4MMVFHDE2BIEH3ZXJ4T2ZRIIS5TM4PP\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986675598.53_warc_CC-MAIN-20191017172920-20191017200420-00285.warc.gz\"}"}
https://www.newworldencyclopedia.org/entry/Thermal_conductivity
[ "# Thermal conductivity\n\nIn physics, thermal conductivity,", null, "$k$, is the property of a material that indicates its ability to conduct heat. It appears primarily in Fourier's Law for heat conduction.\n\nConduction is the most significant means of heat transfer in a solid. By knowing the values of thermal conductivities of various materials, one can compare how well they are able to conduct heat. The higher the value of thermal conductivity, the better the material is at conducting heat. On a microscopic scale, conduction occurs as hot, rapidly moving or vibrating atoms and molecules interact with neighboring atoms and molecules, transferring some of their energy (heat) to these neighboring atoms. In insulators the heat flux is carried almost entirely by phonon vibrations.\n\n## Mathematical background\n\nFirst, heat conduction can be defined by the formula:", null, "$H=\\frac{\\Delta Q}{\\Delta t}=k\\times A\\times\\frac{\\Delta T}{x}$\n\nwhere", null, "$\\frac{\\Delta Q}{\\Delta t}$ is the rate of heat flow, k is the thermal conductivity, A is the total surface area of conducting surface, ΔT is temperature difference and x is the thickness of conducting surface separating the two temperatures.\n\nThus, rearranging the equation gives thermal conductivity,", null, "$k=\\frac{\\Delta Q}{\\Delta t}\\times\\frac{1}{A}\\times\\frac{x}{\\Delta T}$\n\n(Note:", null, "$\\frac{\\Delta T}{x}$ is the temperature gradient)\n\nIn other words, it is defined as the quantity of heat, ΔQ, transmitted during time Δt through a thickness x, in a direction normal to a surface of area A, due to a temperature difference ΔT, under steady state conditions and when the heat transfer is dependent only on the temperature gradient.\n\nAlternately, it can be thought of as a flux of heat (energy per unit area per unit time) divided by a temperature gradient (temperature difference per unit length)", null, "$k=\\frac{\\Delta Q}{A\\times{} \\Delta t}\\times\\frac{x}{\\Delta T}$\n\nTypical units are SI: W/(m·K) and English units: Btu·ft/(h·ft²·°F). To convert between the two, use the relation 1 Btu·ft/(h·ft²·°F) = 1.730735 W/(m·K).\n\n## Examples\n\nIn metals, thermal conductivity approximately tracks electrical conductivity according to the Wiedemann-Franz law, as freely moving valence electrons transfer not only electric current but also heat energy. However, the general correlation between electrical and thermal conductance does not hold for other materials, due to the increased importance of phonon carriers for heat in non-metals. As shown in the table below, highly electrically conductive silver is less thermally conductive than diamond, which is an electrical insulator.\n\nThermal conductivity depends on many properties of a material, notably its structure and temperature. For instance, pure crystalline substances exhibit very different thermal conductivities along different crystal axes, due to differences in phonon coupling along a given crystal axis. Sapphire is a notable example of variable thermal conductivity based on orientation and temperature, for which the CRC Handbook reports a thermal conductivity of 2.6 W/(m·K) perpendicular to the c-axis at 373 K, but 6000 W/(m·K) at 36 degrees from the c-axis and 35 K (possible typo?).\n\nAir and other gases are generally good insulators, in the absence of convection. Therefore, many insulating materials function simply by having a large number of gas-filled pockets which prevent large-scale convection. 
Examples of these include expanded and extruded polystyrene (popularly referred to as \"styrofoam\") and silica aerogel. Natural, biological insulators such as fur and feathers achieve similar effects by dramatically inhibiting convection of air or water near an animal's skin.\n\nThermal conductivity is important in building insulation and related fields. However, materials used in such trades are rarely subjected to chemical purity standards. Several construction materials' k values are listed below. These should be considered approximate due to the uncertainties related to material definitions.\n\nThe following table is meant as a small sample of data to illustrate the thermal conductivity of various types of substances. For more complete listings of measured k-values, see the references.\n\n## List of thermal conductivities\n\nThis is a list of approximate values of thermal conductivity, k, for some common materials. Please consult the list of thermal conductivities for more accurate values, references and detailed information.\n\nMaterial Thermal conductivity\nW/(m·K)\nCement, portland 0.29\nConcrete, stone 1.7\nAir 0.025\nWood 0.04 - 0.4\nAlcohols and oils 0.1 - 0.21\nSilica Aerogel 0.004-0.03\nSoil 1.5\nRubber 0.16\nEpoxy (unfilled) 0.19\nEpoxy (silica-filled) 0.30\nWater (liquid) 0.6\nThermal grease 0.7 - 3\nThermal epoxy 1 - 4\nGlass 1.1\nIce 2\nSandstone 2.4\nStainless steel 12.11 ~ 45.0\nAluminum 237\nGold 318\nCopper 401\nSilver 429\nDiamond 900 - 2320\nLPG 0.23 - 0.26\n\n## Measurement\n\nGenerally speaking, there are a number of possibilities to measure thermal conductivity, each of them suitable for a limited range of materials, depending on the thermal properties and the medium temperature. There can be made a distinction between steady-state and transient techniques.\n\nIn general the steady-state techniques perform a measurement when the temperature of the material that is measured does not change with time. This makes the signal analysis straight forward (steady state implies constant signals). The disadvantage generally is that it takes a well-engineered experimental setup. The Divided Bar (various types) is the most common device used for consolidated rock samples.\n\nThe transient techniques perform a measurement during the process of heating up. The advantage is that measurements can be made relatively quickly. Transient methods are usually carried out by needle probes (inserted into samples or plunged into the ocean floor).\n\nFor good conductors of heat, Searle's bar method can be used. For poor conductors of heat, Lees' disc method can be used. An alternative traditional method using real thermometers can be used as well. 
A thermal conductance tester, one of the instruments of gemology, determines if gems are genuine diamonds using diamond's uniquely high thermal conductivity.\n\n### Standard Measurement Techniques\n\n• IEEE Standard 442-1981, \"IEEE guide for soil thermal resistivity measurements\" see als soil_thermal_properties.\n• IEEE Standard 98-2002, \"Standard for the Preparation of Test Procedures for the Thermal Evaluation of Solid Electrical Insulating Materials\"\n• ASTM Standard D5470-06, \"Standard Test Method for Thermal Transmission Properties of Thermally Conductive Electrical Insulation Materials\"\n• ASTM Standard E1225-04, \"Standard Test Method for Thermal Conductivity of Solids by Means of the Guarded-Comparative-Longitudinal Heat Flow Technique\"\n• ASTM Standard D5930-01, \"Standard Test Method for Thermal Conductivity of Plastics by Means of a Transient Line-Source Technique\"\n• ASTM Standard D2717-95, \"Standard Test Method for Thermal Conductivity of Liquids\"\n\n## Difference between US and European notation\n\nIn Europe, the k-value of construction materials (e.g. window glass) is called λ-value.\n\nU-value used to be called k-value in Europe, but is now also called U-value.\n\nK-value (with capital k) refers in Europe to the total isolation value of a building. K-value is obtained by multiplying the form factor of the building (= the total inward surface of the outward walls of the building divided by the total volume of the building) with the average U-value of the outward walls of the building. K-value is therefore expressed as (m2.m-3).(W.K-1.m-2) = W.K-1.m-3. A house with a volume of 400 m³ and a K-value of 0.45 (the new European norm. It is commonly referred to as K45) will therefore theoretically require 180 W to maintain its interior temperature 1 degree K above exterior temperature. So, to maintain the house at 20°C when it is freezing outside (0°C), 3600 W of continuous heating is required.\n\n## Related terms\n\nThe reciprocal of thermal conductivity is thermal resistivity, measured in kelvin-metres per watt (K·m·W−1).\n\nWhen dealing with a known amount of material, its thermal conductance and the reciprocal property, thermal resistance, can be described. Unfortunately there are differing definitions for these terms.\n\n### First definition (general)\n\nFor general scientific use, thermal conductance is the quantity of heat that passes in unit time through a plate of particular area and thickness when its opposite faces differ in temperature by one degree. For a plate of thermal conductivity k, area A and thickness L this is kA/L, measured in W·K−1 (equivalent to: W/°C). Thermal conductivity and conductance are analogous to electrical conductivity (A·m−1·V−1) and electrical conductance (A·V−1).\n\nThere is also a measure known as heat transfer coefficient: the quantity of heat that passes in unit time through unit area of a plate of particular thickness when its opposite faces differ in temperature by one degree. The reciprocal is thermal insulance. In summary:\n\n• thermal conductance = kA/L, measured in W·K−1\n• thermal resistance = L/kA, measured in K·W−1 (equivalent to: °C/W)\n• heat transfer coefficient = k/L, measured in W·K−1·m−2\n• thermal insulance = L/k, measured in K·m²·W−1.\n\nThe heat transfer coefficient is also known as thermal admittance\n\n### Thermal Resistance\n\nWhen thermal resistances occur in series, they are additive. 
So when heat flows through two components each with a resistance of 1 °C/W, the total resistance is 2 °C/W.\n\nA common engineering design problem involves the selection of an appropriately sized heat sink for a given heat source. Working in units of thermal resistance greatly simplifies the design calculation. The following formula can be used to estimate the performance:", null, "$R_{hs} = \\frac {\\Delta T}{P_{th}} - R_s$\n\nwhere:\n\n• Rhs is the maximum thermal resistance of the heat sink to ambient, in °C/W\n•", null, "$\\Delta T$ is the temperature difference (temperature drop), in °C\n• Pth is the thermal power (heat flow), in watts\n• Rs is the thermal resistance of the heat source, in °C/W\n\nFor example, if a component produces 100 W of heat and has a thermal resistance of 0.5 °C/W, what is the maximum thermal resistance of the heat sink? Suppose the maximum temperature is 125 °C and the ambient temperature is 25 °C; then the", null, "$\\Delta T$ is 100 °C. The heat sink's thermal resistance to ambient must then be 0.5 °C/W or less.\n\n### Second definition (buildings)\n\nWhen dealing with buildings, thermal resistance or R-value means what is described above as thermal insulance, and thermal conductance means the reciprocal. For materials in series, these thermal resistances (unlike conductances) can simply be added to give a thermal resistance for the whole.\n\nA third term, thermal transmittance, incorporates the thermal conductance of a structure along with heat transfer due to convection and radiation. It is measured in the same units as thermal conductance and is sometimes known as the composite thermal conductance. The term U-value is another synonym.\n\nIn summary, for a plate of thermal conductivity k (the k value), area A and thickness L:\n\n• thermal conductance = k/L, measured in W·K−1·m−2;\n• thermal resistance (R value) = L/k, measured in K·m²·W−1;\n• thermal transmittance (U value) = 1/(Σ(L/k)) + convection + radiation, measured in W·K−1·m−2.\n\n## Textile industry\n\nIn textiles, a tog value may be quoted as a measure of thermal resistance in place of a measure in SI units.\n\n## Origins\n\nThe thermal conductivity of a system is determined by how the atoms comprising the system interact. There are no simple, correct expressions for thermal conductivity. There are two different approaches for calculating the thermal conductivity of a system.\n\nThe first approach employs the Green-Kubo relations. Although these are analytic expressions that in principle can be solved, calculating the thermal conductivity of a dense fluid or solid with this relation requires the use of molecular dynamics computer simulation.\n\nThe second approach is based upon relaxation times. Due to the anharmonicity within the crystal potential, the phonons in the system are known to scatter. There are three main mechanisms for scattering (Srivastava, 1990):\n\n• Boundary scattering, a phonon hitting the boundary of a system;\n• Mass defect scattering, a phonon hitting an impurity within the system and scattering;\n• Phonon-phonon scattering, a phonon breaking into two lower energy phonons or a phonon colliding with another phonon and merging into one higher energy phonon." ]
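The heat-sink example above is easy to check numerically. Below is a minimal Python sketch of the formula $R_{hs} = \Delta T / P_{th} - R_s$; the function and argument names are illustrative, not from the original article.

```python
def max_heatsink_resistance(t_max_c, t_ambient_c, power_w, r_source):
    """Largest heat-sink-to-ambient resistance (degC/W) that keeps the part at or below t_max_c."""
    delta_t = t_max_c - t_ambient_c       # temperature drop, degC
    return delta_t / power_w - r_source   # R_hs = dT / P_th - R_s

# Worked example from the text: 100 W, R_s = 0.5 degC/W, 125 degC limit, 25 degC ambient
print(max_heatsink_resistance(125, 25, 100, 0.5))   # -> 0.5 degC/W
```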
[ null, "https://www.newworldencyclopedia.org/math/8/c/e/8ce4b16b22b58894aa86c421e8759df3.png ", null, "https://www.newworldencyclopedia.org/math/f/4/b/f4baea0aa032cd5e0ceb89479521803b.png ", null, "https://www.newworldencyclopedia.org/math/f/b/5/fb5544db7cc3913abfb5d87fc35a6207.png ", null, "https://www.newworldencyclopedia.org/math/8/6/6/866db26dcb96f3853a32bb42b7942120.png ", null, "https://www.newworldencyclopedia.org/math/8/7/d/87d4299b2ca24f0d35cf509bb41e942c.png ", null, "https://www.newworldencyclopedia.org/math/c/e/6/ce631e2b9dfcd3bbbee451438c9c5ffa.png ", null, "https://www.newworldencyclopedia.org/math/a/4/7/a476f1d5ae3be8f98072d3908edbcdf7.png ", null, "https://www.newworldencyclopedia.org/math/3/e/c/3ecdbbc9a64a9bfc57883ae306bf51cd.png ", null, "https://www.newworldencyclopedia.org/math/3/e/c/3ecdbbc9a64a9bfc57883ae306bf51cd.png ", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9094365,"math_prob":0.9820446,"size":11053,"snap":"2019-51-2020-05","text_gpt3_token_len":2364,"char_repetition_ratio":0.1616436,"word_repetition_ratio":0.01992966,"special_character_ratio":0.21071203,"punctuation_ratio":0.1071076,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98822314,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,10,null,1,null,1,null,1,null,1,null,1,null,1,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-24T17:49:47Z\",\"WARC-Record-ID\":\"<urn:uuid:f7ef7d7e-8ffd-4686-98c9-4f9f2a700498>\",\"Content-Length\":\"70996\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:65128e38-88f3-42c3-a0dc-dcf26b5a64d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:ad65942e-3b3c-4c0d-aedf-ebcff0cd8610>\",\"WARC-IP-Address\":\"104.31.89.182\",\"WARC-Target-URI\":\"https://www.newworldencyclopedia.org/entry/Thermal_conductivity\",\"WARC-Payload-Digest\":\"sha1:VUKSZB3SV7XADT6MV2UV53YSQXR36MFN\",\"WARC-Block-Digest\":\"sha1:FG5C5S6AW3QUTVB6DRLWVPLDVM7BT6PV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250624328.55_warc_CC-MAIN-20200124161014-20200124190014-00386.warc.gz\"}"}
https://electronics-tutorial.net/lab-test-and-measurement/SOLID-STATE-DEVICES-AND-CIRCUITS/Exp-5-SSDC/
[ "EXPT. NO.: 5\n\nTITLE: To simulate voltage series, voltage shunt and current shunt feedback topologies.\n\nAIM: To simulate voltage series, voltage shunt and current shunt feedback topologies.\n\nAPPARATUS:\n\n1. PC with simulation software loaded.\n\n2. Printer.\n\nTHEORY:\n\nIn the feedback process, a part of the output is sampled and fed back to the input. Thus, at the input of an amplifier two signals will be simultaneously present. One of them is the original signal and the other is the fed-back signal itself. Feedback is defined as the process in which a part of the output signal (voltage or current) is returned to the input. An amplifier that operates on the principle of feedback is known as a feedback amplifier.\n\nTYPES OF FEEDBACK\n\n1. Positive feedback 2. Negative feedback\n\nIf the original input signal and the feedback signal are in phase, the feedback is called ‘positive feedback’. However, if these two signals are out of phase, then the feedback is known as ‘negative feedback’. Positive feedback is used in oscillators and negative feedback is used in amplifiers.\n\nCLASSIFICATION OF AMPLIFIERS BASED ON FEEDBACK TOPOLOGY\n\nAmplifiers are classified into four categories based on the magnitudes of the input and output impedances relative to the source and load impedances respectively. The categories are as follows:\n\n1. Voltage amplifier 2. Current amplifier\n\n3. Transconductance amplifier 4. Transresistance amplifier\n\nFEEDBACK TOPOLOGIES:\n\nWe can combine the sampling and mixing techniques to yield the following feedback configurations (a small lookup-table sketch of this classification appears after the result tables):\n\n1. Voltage series feedback 2. Voltage shunt feedback\n\n3. Current shunt feedback\n\n| Signal or ratio | Voltage series | Current series | Current shunt | Voltage shunt |\n| --- | --- | --- | --- | --- |\n| Xo | Voltage | Current | Current | Voltage |\n| Xs, Xf, Xd | Voltage | Voltage | Current | Current |\n| A | Av | Gm | Ai | Rm |\n| β | Vf/Vo | Vf/Io | If/Io | If/Vo |\n\nCURRENT SERIES FEEDBACK:\n\nThe block diagram of a current series feedback amplifier is shown in fig. 1. Here we use the combination of ‘current sampling’ and ‘series mixing’. Therefore,\n\nCurrent series feedback = current sampling + series mixing.\n\nVOLTAGE SHUNT FEEDBACK:\n\nThe block diagram of a voltage shunt feedback amplifier is shown in fig. 2. Here we use the combination of ‘voltage sampling’ and ‘shunt mixing’. Therefore,\n\nVoltage shunt feedback = voltage sampling + shunt mixing. Voltage shunt feedback is present in transresistance amplifiers.\n\nCURRENT SHUNT FEEDBACK:\n\nThe block diagram of a current shunt feedback amplifier is shown in fig. 3. Here we use the combination of ‘current sampling’ and ‘shunt mixing’. Therefore,\n\nCurrent shunt feedback = current sampling + shunt mixing. Current shunt feedback is present in current amplifiers.\n\nAdvantages of negative feedback:\n\n1. Negative feedback stabilizes the gain of the amplifier.\n\n2. There is a significant increase in the bandwidth of the amplifier.\n\n3. Distortions in the amplifier output are reduced.\n\n4. Input resistance increases for certain feedback configurations.\n\n5. Output resistance decreases for certain feedback configurations.\n\n6. The operating point is stabilized.\n\nDisadvantages of negative feedback:\n\n1. Reduction in gain.\n\n2. Reduction in input resistance in the case of voltage shunt and current shunt type amplifiers.\n\n3. Reduction in output resistance in the case of current shunt and current series type amplifiers.\n\n4. More amplifier stages are required in order to maintain the required gain.\n\nApplications of negative feedback:\n\n1. Almost all electronic amplifiers.\n\n2. Regulated power supplies.\n\n3. 
Wideband amplifiers (amplifiers having a large bandwidth).\n\n4. Circuits like bootstrapping.\n\nEffect of negative feedback on Ri and Ro:\n\n| Type of feedback | Effect on input resistance | Effect on output resistance |\n| --- | --- | --- |\n| Voltage series | Increases | Decreases |\n| Voltage shunt | Decreases | Decreases |\n| Current shunt | Decreases | Increases |\n\nCALCULATIONS:\n\nRESULT:\n\n1. VOLTAGE SERIES:\n\n| Sr. No. | Parameter | With feedback | Without feedback |\n\n2. VOLTAGE SHUNT:\n\n| Sr. No. | Parameter | With feedback | Without feedback |\n\n3. CURRENT SHUNT:\n\n| Sr. No. | Parameter | With feedback | Without feedback |" ]
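The classification above lends itself to a small lookup table. The following Python sketch encodes the topology table and the Ri/Ro table from this experiment; the current-series Ri/Ro entries are not in the tables above and are filled in from standard feedback theory, and all names are illustrative.

```python
# Sampling + mixing -> topology, stabilized gain type, and qualitative effect on Ri/Ro.
FEEDBACK_TOPOLOGIES = {
    ("voltage", "series"): {"gain": "Av", "Ri": "increases", "Ro": "decreases"},
    ("voltage", "shunt"):  {"gain": "Rm", "Ri": "decreases", "Ro": "decreases"},
    ("current", "series"): {"gain": "Gm", "Ri": "increases", "Ro": "increases"},  # assumed; not in the table above
    ("current", "shunt"):  {"gain": "Ai", "Ri": "decreases", "Ro": "increases"},
}

def describe(sampling, mixing):
    t = FEEDBACK_TOPOLOGIES[(sampling, mixing)]
    return (f"{sampling} {mixing} feedback: gain type {t['gain']}, "
            f"input resistance {t['Ri']}, output resistance {t['Ro']}")

print(describe("current", "shunt"))   # current shunt feedback: gain type Ai, ...
```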
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6937545,"math_prob":0.8699263,"size":4706,"snap":"2021-04-2021-17","text_gpt3_token_len":1061,"char_repetition_ratio":0.2894513,"word_repetition_ratio":0.16142857,"special_character_ratio":0.3036549,"punctuation_ratio":0.14403293,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9660763,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-28T13:59:43Z\",\"WARC-Record-ID\":\"<urn:uuid:066718f4-f250-420b-81e2-45acd338465b>\",\"Content-Length\":\"128038\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df7188b2-ae99-4213-a501-8a0059064b31>\",\"WARC-Concurrent-To\":\"<urn:uuid:daf6089e-6ce1-469b-9d44-17be1217a38a>\",\"WARC-IP-Address\":\"172.67.183.252\",\"WARC-Target-URI\":\"https://electronics-tutorial.net/lab-test-and-measurement/SOLID-STATE-DEVICES-AND-CIRCUITS/Exp-5-SSDC/\",\"WARC-Payload-Digest\":\"sha1:HMZEAL2OZBQGSUAHZM63KPRNKDWY72QM\",\"WARC-Block-Digest\":\"sha1:SOQ2HAM5IMWOVBIGG6LDLG2JSLP5YVQH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704847953.98_warc_CC-MAIN-20210128134124-20210128164124-00367.warc.gz\"}"}
https://de.maplesoft.com/support/help/maplesim/view.aspx?path=LinearAlgebra%2FBasis
[ "", null, "Basis - Maple Help\n\nLinearAlgebra\n\n Basis\n return a basis for a vector space\n SumBasis\n return a basis for the direct sum of vector space(s)\n IntersectionBasis\n return a basis for the intersection of vector space(s)", null, "Calling Sequence Basis(V, options) SumBasis(VS, options) IntersectionBasis(VS, options)", null, "Parameters\n\n V - Vector, list of Vectors, or set of Vectors VS - list whose elements represent vector spaces; each list element is a Vector, a list of Vector(s), or a set of Vector(s) whose span represents the vector space options - (optional); constructor options for the result object", null, "Description\n\n • For all of the functions, the following statements hold:\n - The dimension and orientation of all Vectors (in all vector spaces) must be the same.\n - If constructor options are specified, each resulting Vector has the same specified options.\n • The Basis(V) function returns a list or set of Vectors that forms a basis for the vector space spanned by the original Vectors in terms of the original  Vectors.  A basis for the 0-dimensional space is an empty list or set.\n If V is a list of Vectors, the Basis(V) function returns a list of Vectors.  If V is a single Vector or a set of Vectors, a set of Vectors is returned.\n • The SumBasis(VS) function returns a list or set of Vector(s) that forms a basis for the direct sum of the vector spaces defined by the Vector(s) in each list element of VS.  SumBasis([]) returns the empty set.\n If all of the elements of VS are lists of Vectors, the SumBasis(V) function returns a list of Vectors.  Otherwise, a set of Vectors is returned.\n • The IntersectionBasis(VS) function returns a list or set of Vector(s) that forms a basis for the intersection of the vector spaces defined by the Vector(s) in each list element of VS.  IntersectionBasis([]) returns the empty set.\n If all of the elements of VS are lists of Vectors, the IntersectionBasis(V) function returns a list of Vectors.  Otherwise, a set of Vectors is returned.\n • The constructor options provide additional information (readonly, shape, storage, order, datatype, and attributes) to the Vector constructor that builds the result. These options may also be provided in the form outputoptions=[...], where [...] represents a Maple list.  If a constructor option is provided in both the calling sequence directly and in an outputoptions option, the latter takes precedence (regardless of the order).\n • These functions are part of the LinearAlgebra package, and so they can be used in the form Basis(..), SumBasis(..), or IntersectionBasis(..) only after executing the command with(LinearAlgebra). 
However, it can always be accessed through the long form of the command by using LinearAlgebra[Basis](..), LinearAlgebra[SumBasis](..), or LinearAlgebra[IntersectionBasis](..).", null, "Examples\n\n > $\\mathrm{with}\\left(\\mathrm{LinearAlgebra}\\right):$\n > $\\mathrm{v1}≔⟨1|0|0⟩:$\n > $\\mathrm{v2}≔⟨0|1|0⟩:$\n > $\\mathrm{v3}≔⟨0|0|1⟩:$\n > $\\mathrm{v4}≔⟨0|1|1⟩:$\n > $\\mathrm{v5}≔⟨1|1|1⟩:$\n > $\\mathrm{v6}≔⟨4|2|0⟩:$\n > $\\mathrm{v7}≔⟨3|0|-1⟩:$\n > $\\mathrm{Basis}\\left(\\left[\\mathrm{v1},\\mathrm{v2},\\mathrm{v2}\\right]\\right)$\n $\\left[\\left[\\begin{array}{ccc}{1}& {0}& {0}\\end{array}\\right]{,}\\left[\\begin{array}{ccc}{0}& {1}& {0}\\end{array}\\right]\\right]$ (1)\n > $\\mathrm{Basis}\\left(\\left\\{\\mathrm{v4},\\mathrm{v6},\\mathrm{v7}\\right\\}\\right)$\n $\\left\\{\\left[\\begin{array}{ccc}{0}& {1}& {1}\\end{array}\\right]{,}\\left[\\begin{array}{ccc}{4}& {2}& {0}\\end{array}\\right]{,}\\left[\\begin{array}{ccc}{3}& {0}& {-1}\\end{array}\\right]\\right\\}$ (2)\n > $\\mathrm{Basis}\\left(\\mathrm{v1}\\right)$\n $\\left\\{\\left[\\begin{array}{ccc}{1}& {0}& {0}\\end{array}\\right]\\right\\}$ (3)\n > $\\mathrm{Basis}\\left(\\mathrm{Vector}\\left(4,\\mathrm{shape}=\\mathrm{zero}\\right)\\right)$\n ${\\varnothing }$ (4)\n > $\\mathrm{SumBasis}\\left(\\left[\\left[\\mathrm{v1},\\mathrm{v2}\\right],\\left[\\mathrm{v6},⟨0|1|0⟩\\right]\\right]\\right)$\n $\\left[\\left[\\begin{array}{ccc}{1}& {0}& {0}\\end{array}\\right]{,}\\left[\\begin{array}{ccc}{0}& {1}& {0}\\end{array}\\right]\\right]$ (5)\n > $\\mathrm{SumBasis}\\left(\\left[\\left\\{\\mathrm{v1}\\right\\},\\left[\\mathrm{v2},\\mathrm{v3}\\right],\\mathrm{v5}\\right]\\right)$\n $\\left\\{\\left[\\begin{array}{ccc}{1}& {0}& {0}\\end{array}\\right]{,}\\left[\\begin{array}{ccc}{0}& {1}& {0}\\end{array}\\right]{,}\\left[\\begin{array}{ccc}{0}& {0}& {1}\\end{array}\\right]\\right\\}$ (6)\n > $\\mathrm{IntersectionBasis}\\left(\\left[\\left[\\mathrm{v1},\\mathrm{v2},\\mathrm{v3}\\right],\\left\\{\\mathrm{v4},\\mathrm{v6},\\mathrm{v7}\\right\\},\\left[\\mathrm{v3},\\mathrm{v4},\\mathrm{v5}\\right]\\right]\\right)$\n $\\left\\{\\left[\\begin{array}{ccc}{1}& {1}& {1}\\end{array}\\right]{,}\\left[\\begin{array}{ccc}{0}& {1}& {1}\\end{array}\\right]{,}\\left[\\begin{array}{ccc}{0}& {0}& {1}\\end{array}\\right]\\right\\}$ (7)\n > $\\mathrm{IntersectionBasis}\\left(\\left[\\mathrm{v1},\\left\\{\\mathrm{v3},\\mathrm{v7}\\right\\}\\right]\\right)$\n $\\left\\{\\left[\\begin{array}{ccc}{3}& {0}& {0}\\end{array}\\right]\\right\\}$ (8)\n > $\\mathrm{IntersectionBasis}\\left(\\left[\\left[\\mathrm{v1},\\mathrm{v2}\\right],\\left[\\mathrm{v3}\\right]\\right]\\right)$\n $\\left[\\right]$ (9)" ]
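For readers working outside Maple, here is a rough Python/SymPy sketch of the same ideas (a basis for a span, for a sum of spans, and for an intersection of two spans). It only illustrates the underlying linear algebra, not the Maple API; the helper names are invented.

```python
from sympy import Matrix

def span_basis(vectors):
    """Basis for span{vectors}, returned as columns of the input; analogue of Basis."""
    return Matrix.hstack(*vectors).columnspace()   # pivot columns

def sum_basis(*vector_lists):
    """Basis for U1 + U2 + ...; analogue of SumBasis."""
    return span_basis([v for vs in vector_lists for v in vs])

def intersection_basis(us, vs):
    """Basis for span(us) ∩ span(vs), via the nullspace of [U | -V];
    two-space analogue of IntersectionBasis."""
    U, V = Matrix.hstack(*us), Matrix.hstack(*vs)
    W = Matrix.hstack(U, -V)
    vecs = [U * w[:U.cols, 0] for w in W.nullspace()]
    return span_basis(vecs) if vecs else []

v1, v2, v4 = Matrix([1, 0, 0]), Matrix([0, 1, 0]), Matrix([0, 1, 1])
print(span_basis([v1, v2, v2]))            # two independent vectors, as in example (1)
print(intersection_basis([v1, v2], [v4]))  # empty: the spans only share the zero vector
```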
[ null, "https://bat.bing.com/action/0", null, "https://de.maplesoft.com/support/help/maplesim/arrow_down.gif", null, "https://de.maplesoft.com/support/help/maplesim/arrow_down.gif", null, "https://de.maplesoft.com/support/help/maplesim/arrow_down.gif", null, "https://de.maplesoft.com/support/help/maplesim/arrow_down.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6514692,"math_prob":0.9996598,"size":3285,"snap":"2022-05-2022-21","text_gpt3_token_len":910,"char_repetition_ratio":0.1865285,"word_repetition_ratio":0.22881356,"special_character_ratio":0.24079147,"punctuation_ratio":0.1597015,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99925315,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-26T02:43:52Z\",\"WARC-Record-ID\":\"<urn:uuid:1160d121-3d53-41b1-b2fa-be8ac2a851d5>\",\"Content-Length\":\"266456\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:03bbca24-42ce-43e0-ab4a-6378d72e3186>\",\"WARC-Concurrent-To\":\"<urn:uuid:ea204606-f6d6-42b6-ab97-0c83534c0123>\",\"WARC-IP-Address\":\"199.71.183.28\",\"WARC-Target-URI\":\"https://de.maplesoft.com/support/help/maplesim/view.aspx?path=LinearAlgebra%2FBasis\",\"WARC-Payload-Digest\":\"sha1:UL4ZQMOSL7RG2LAY47QR2FFKVD76UZJT\",\"WARC-Block-Digest\":\"sha1:K665UNAWHVYBWNGBNNR32QSMHNN4EUDN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662595559.80_warc_CC-MAIN-20220526004200-20220526034200-00293.warc.gz\"}"}
https://percent-table.com/calculate/what-is-29-of-4727/
[ "# Percent of Calculator\n\nCalculate percentage of X, quick & simple.\n\n29% of 4727 is:\n1370.83\n\n## Percent of - Table For 4727\n\nPercent of Difference\n1% of 4727 is 47.27 4679.73\n2% of 4727 is 94.54 4632.46\n3% of 4727 is 141.81 4585.19\n4% of 4727 is 189.08 4537.92\n5% of 4727 is 236.35 4490.65\n6% of 4727 is 283.62 4443.38\n7% of 4727 is 330.89 4396.11\n8% of 4727 is 378.16 4348.84\n9% of 4727 is 425.43 4301.57\n10% of 4727 is 472.7 4254.3\n11% of 4727 is 519.97 4207.03\n12% of 4727 is 567.24 4159.76\n13% of 4727 is 614.51 4112.49\n14% of 4727 is 661.78 4065.22\n15% of 4727 is 709.05 4017.95\n16% of 4727 is 756.32 3970.68\n17% of 4727 is 803.59 3923.41\n18% of 4727 is 850.86 3876.14\n19% of 4727 is 898.13 3828.87\n20% of 4727 is 945.4 3781.6\n21% of 4727 is 992.67 3734.33\n22% of 4727 is 1039.94 3687.06\n23% of 4727 is 1087.21 3639.79\n24% of 4727 is 1134.48 3592.52\n25% of 4727 is 1181.75 3545.25\n26% of 4727 is 1229.02 3497.98\n27% of 4727 is 1276.29 3450.71\n28% of 4727 is 1323.56 3403.44\n29% of 4727 is 1370.83 3356.17\n30% of 4727 is 1418.1 3308.9\n31% of 4727 is 1465.37 3261.63\n32% of 4727 is 1512.64 3214.36\n33% of 4727 is 1559.91 3167.09\n34% of 4727 is 1607.18 3119.82\n35% of 4727 is 1654.45 3072.55\n36% of 4727 is 1701.72 3025.28\n37% of 4727 is 1748.99 2978.01\n38% of 4727 is 1796.26 2930.74\n39% of 4727 is 1843.53 2883.47\n40% of 4727 is 1890.8 2836.2\n41% of 4727 is 1938.07 2788.93\n42% of 4727 is 1985.34 2741.66\n43% of 4727 is 2032.61 2694.39\n44% of 4727 is 2079.88 2647.12\n45% of 4727 is 2127.15 2599.85\n46% of 4727 is 2174.42 2552.58\n47% of 4727 is 2221.69 2505.31\n48% of 4727 is 2268.96 2458.04\n49% of 4727 is 2316.23 2410.77\n50% of 4727 is 2363.5 2363.5\n51% of 4727 is 2410.77 2316.23\n52% of 4727 is 2458.04 2268.96\n53% of 4727 is 2505.31 2221.69\n54% of 4727 is 2552.58 2174.42\n55% of 4727 is 2599.85 2127.15\n56% of 4727 is 2647.12 2079.88\n57% of 4727 is 2694.39 2032.61\n58% of 4727 is 2741.66 1985.34\n59% of 4727 is 2788.93 1938.07\n60% of 4727 is 2836.2 1890.8\n61% of 4727 is 2883.47 1843.53\n62% of 4727 is 2930.74 1796.26\n63% of 4727 is 2978.01 1748.99\n64% of 4727 is 3025.28 1701.72\n65% of 4727 is 3072.55 1654.45\n66% of 4727 is 3119.82 1607.18\n67% of 4727 is 3167.09 1559.91\n68% of 4727 is 3214.36 1512.64\n69% of 4727 is 3261.63 1465.37\n70% of 4727 is 3308.9 1418.1\n71% of 4727 is 3356.17 1370.83\n72% of 4727 is 3403.44 1323.56\n73% of 4727 is 3450.71 1276.29\n74% of 4727 is 3497.98 1229.02\n75% of 4727 is 3545.25 1181.75\n76% of 4727 is 3592.52 1134.48\n77% of 4727 is 3639.79 1087.21\n78% of 4727 is 3687.06 1039.94\n79% of 4727 is 3734.33 992.67\n80% of 4727 is 3781.6 945.4\n81% of 4727 is 3828.87 898.13\n82% of 4727 is 3876.14 850.86\n83% of 4727 is 3923.41 803.59\n84% of 4727 is 3970.68 756.32\n85% of 4727 is 4017.95 709.05\n86% of 4727 is 4065.22 661.78\n87% of 4727 is 4112.49 614.51\n88% of 4727 is 4159.76 567.24\n89% of 4727 is 4207.03 519.97\n90% of 4727 is 4254.3 472.7\n91% of 4727 is 4301.57 425.43\n92% of 4727 is 4348.84 378.16\n93% of 4727 is 4396.11 330.89\n94% of 4727 is 4443.38 283.62\n95% of 4727 is 4490.65 236.35\n96% of 4727 is 4537.92 189.08\n97% of 4727 is 4585.19 141.81\n98% of 4727 is 4632.46 94.54\n99% of 4727 is 4679.73 47.27\n100% of 4727 is 4727 0\n\n### Here's How to Calculate 29% of 4727\n\nLet's take a quick example here:\n\nYou have a Target coupon of \\$4727 and you need to know how much will you save on your purchase if the discount is 29 percent.\n\nSolution:\n\nAmount Saved = Original Price x Discount in 
Percent / 100\n\nAmount Saved = (4727 x 29) / 100\n\nAmount Saved = 137083 / 100\n\nAmount Saved = \$1370.83 (answer).\n\nIn other words, a 29% discount for a purchase with an original price of \$4727 equals \$1370.83 (Amount Saved), so you'll end up paying \$3356.17." ]
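The same calculation is a one-liner in code. A small Python sketch (the function name is just for illustration):

```python
def percent_of(percent, amount):
    """Return percent% of amount, e.g. 29% of 4727."""
    return amount * percent / 100

saved = percent_of(29, 4727)
print(saved)          # 1370.83, the amount saved
print(4727 - saved)   # 3356.17, the price actually paid
```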
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8802309,"math_prob":0.9096248,"size":3750,"snap":"2022-40-2023-06","text_gpt3_token_len":2099,"char_repetition_ratio":0.31607047,"word_repetition_ratio":0.004322767,"special_character_ratio":0.80053335,"punctuation_ratio":0.1907357,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99994826,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-06T10:46:38Z\",\"WARC-Record-ID\":\"<urn:uuid:2282cb66-7ec8-40c5-9700-ee600e3a97fc>\",\"Content-Length\":\"47252\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f8970d3d-2b6f-4623-8a3d-34285b1b1ca0>\",\"WARC-Concurrent-To\":\"<urn:uuid:36a6cd98-9c94-4ec5-8113-cfd9f8aff851>\",\"WARC-IP-Address\":\"104.21.50.244\",\"WARC-Target-URI\":\"https://percent-table.com/calculate/what-is-29-of-4727/\",\"WARC-Payload-Digest\":\"sha1:C2YQ7SCX2YVNC2DZYOES73YD3KZ5N2O4\",\"WARC-Block-Digest\":\"sha1:2VUD7JPGUMUNPATX5MFROQRK6V2TUYCJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337803.86_warc_CC-MAIN-20221006092601-20221006122601-00334.warc.gz\"}"}
https://m.haicoder.net/php/php-comparison-operator.html
[ "# PHP Comparison Operators\n\n## PHP comparison operators tutorial\n\nIn PHP, the result of a comparison (relational) operator is always of type bool, i.e. either true or false. Relational expressions are most often used in the condition of an if structure or of a loop structure.\n\n## PHP comparison operator syntax\n\n| Operator | Meaning | Example | Result |\n| --- | --- | --- | --- |\n| == | equal | 4 == \"4\" | true |\n| === | identical | 4 === \"4\" | false |\n| != | not equal | 4 != 3 | true |\n| !== | not identical | 4 !== 3 | true |\n| <> | not equal | 4 <> 3 | true |\n| < | less than | 4 < 3 | false |\n| > | greater than | 4 > 3 | true |\n| <= | less than or equal to | 4 <= 3 | false |\n| >= | greater than or equal to | 4 >= 3 | true |\n\n## Examples\n\n### Equality comparison\n\n``````<?php\necho \"嗨客网(www.haicoder.net)<br>\";\n\\$num1 = 4;\n\\$num2 = 3;\n\\$isEqual = \\$num1 == \\$num2;\n\\$isNotEqual = \\$num1 != \\$num2;\nvar_dump(\\$isEqual);\necho \"<br>\";\nvar_dump(\\$isNotEqual);\n``````", null, "### Identical (strict) comparison\n\n``````<?php\necho \"嗨客网(www.haicoder.net)<br>\";\n\\$num1 = 4;\n\\$num2 = \"4\";\n\\$isEqual = \\$num1 == \\$num2;\n\\$isNotEqual = \\$num1 != \\$num2;\n\\$isEqual1 = \\$num1 === \\$num2;\n\\$isNotEqual1 = \\$num1 !== \\$num2;\nvar_dump(\\$isEqual);\necho \"<br>\";\nvar_dump(\\$isNotEqual);\necho \"<br><br>\";\nvar_dump(\\$isEqual1);\necho \"<br>\";\nvar_dump(\\$isNotEqual1);\n``````", null, "### Greater-than / less-than comparison\n\n``````<?php\necho \"嗨客网(www.haicoder.net)<br>\";\n\\$num1 = 4;\n\\$num2 = 3;\n\\$isGt = \\$num1 > \\$num2;\n\\$isLt = \\$num1 < \\$num2;\n\\$isGte = \\$num1 >= \\$num2;\n\\$isLte = \\$num1 <= \\$num2;\nvar_dump(\\$isGt);\necho \"<br>\";\nvar_dump(\\$isLt);\necho \"<br>\";\nvar_dump(\\$isGte);\necho \"<br>\";\nvar_dump(\\$isLte);\n``````", null, "" ]
[ null, "https://m.haicoder.net/uploads/pic/server/php/php-operator/07_PHP比较运算符.png", null, "https://m.haicoder.net/uploads/pic/server/php/php-operator/08_PHP绝对比较运算符.png", null, "https://m.haicoder.net/uploads/pic/server/php/php-operator/09_PHP比较运算符.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.63002115,"math_prob":0.99966407,"size":1433,"snap":"2021-31-2021-39","text_gpt3_token_len":825,"char_repetition_ratio":0.19524142,"word_repetition_ratio":0.19387755,"special_character_ratio":0.4033496,"punctuation_ratio":0.20542635,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9960815,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-26T16:03:04Z\",\"WARC-Record-ID\":\"<urn:uuid:b9e122c4-08b7-4f52-9f62-e737a73f561d>\",\"Content-Length\":\"10120\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0cea99fb-6c7f-4b0d-8336-06fcf547a75c>\",\"WARC-Concurrent-To\":\"<urn:uuid:fba5686b-f216-4910-a1db-cd99d64a16e4>\",\"WARC-IP-Address\":\"123.56.218.165\",\"WARC-Target-URI\":\"https://m.haicoder.net/php/php-comparison-operator.html\",\"WARC-Payload-Digest\":\"sha1:M5TAMCPOVKSOK6LHPQADNZH6FYGZ2DLA\",\"WARC-Block-Digest\":\"sha1:BR5ILHP4CU7T34V2CLBNMIIEJCEGE2JF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057882.56_warc_CC-MAIN-20210926144658-20210926174658-00631.warc.gz\"}"}
http://essay.helpstudents.xyz/vip/Math-geometry-proof-solver.html
[ "# Math geometry proof solver\n\nStudents need to learn the proofs for some geometry theorems because this is a huge part of geometry. We have a geometry proof solver who will make this easier and simpler for you by helping you learn theories fast and in a convenient way. If you want to solve geometry problems, we have qualified and experienced experts who can help you.", null, "BASIC MATH PROOFS. The math proofs that will be covered in this website fall under the category of basic or introductory proofs. They are considered “basic” because students should be able to understand what the proof is trying to convey, and be able to follow the simple algebraic manipulations or steps involved in the proof itself.", null, "Geometry teachers can use our editor to upload a diagram and create a Geometry proof to share with students. In the proof editor, you can dynamically add steps and optionally pin their positions in the proof as hints for students. The editor gives you easy access to common Geometry symbols. After creating a proof, teachers can send students a.", null, "Geometry Calculators and Solvers. Easy to use online geometry calculators and solvers for various topics in geometry such as calculating area, volume, distance, and points of intersection. These may be used to check homework answers, practice or explore with various values for deep understanding. Triangle Calculators Right Triangle Calculator and Solver.", null, "Free math problem solver answers your algebra homework questions with step-by-step explanations.", null, "Links, Videos, demonstrations for proving triangles congruent including ASA, SSA, ASA, SSS and Hyp-Leg theorems.\n\n## Geometry Calculators and Solvers - analyzemath.com.", null, "Some of the most important geometry proofs are demonstrated here. I will provide you with solid and thorough examples. Understanding a proof can be a daunting task. Writing a proof can be even more daunting. I kept the reader(s) in mind when I wrote the proof outlines below. A crystal clear proof of the area of a triangle.", null, "Roughly 2400 years ago, Euclid of Alexandria wrote Elements, which served as the world's geometry textbook until recently. Studied by Abraham Lincoln in order to sharpen his mind and truly appreciate mathematical deduction, it is still the basis of what we consider a first-year course in geometry.", null, "Enter any 3 sides into our free online tool and it will apply the triangle inequality and show all work (see the small sketch below).", null, "Many algebra proofs are done using proof by mathematical induction. To demonstrate the power of mathematical induction, we shall prove an algebraic equation and a geometric formula with induction. If you are not familiar with proofs using induction, carefully study proof by mathematical induction given as a reference above.", null, "Videos, examples, solutions, worksheets, games and activities to help Geometry students learn how to use two column proofs. A two-column proof consists of a list of statements, and the reasons why those statements are true. The statements are in the left column and the reasons are in the right column.", null, "WebMath is designed to help you solve your math problems. 
Composed of forms to fill in, it then returns an analysis of a problem and, when possible, provides a step-by-step solution. Covers arithmetic, algebra, geometry, calculus and statistics.", null, "Math lessons, videos, online tutoring, and more for free. All the geometry help you need right here, all free. Also math games, puzzles, articles, and other math help resources.\n\n## Geometry Calculator - Symbolab - Symbolab Math Solver.\n\nFor many students, geometry is hard and the two-column proof is a dreaded math experience. Use these tips to teach your student like a math tutor and provide them with high-quality geometry help. Keep in mind, you can also use the Thinkster online tutoring program to add an additional layer of experience for all of your student’s math needs. Step-by-step solutions to all your Geometry homework questions - Slader. They are the best in the knowledge of geometry homework help as they always write unique answers. Of course, they are masters at solving any problem related to geometry. They consider each instruction of the order and include it naturally to solve the research problem. We do in-depth study and interact with you in a friendly and professional manner.\n\nHow to Make Geometry Proofs Easier: document each statement with supporting evidence. This process is exactly what you need to do to solve a geometry proof -- but solving a crime can seem more interesting than working through a math problem. Here's the Secret to Acing All Your Math Tests. Math Free Problem Solver. Use errors as proof of misconceptions, not carelessness or random guessing. A goal is another mental construct of a stakeholder. Isn't a purpose something we want to reach?" ]
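The triangle-inequality check mentioned above takes only a few lines of Python. This is merely an illustration of the rule such a tool applies, not the site's actual code:

```python
def is_valid_triangle(a, b, c):
    """Triangle inequality: each side must be shorter than the sum of the other two."""
    return a + b > c and a + c > b and b + c > a

print(is_valid_triangle(3, 4, 5))   # True
print(is_valid_triangle(1, 2, 8))   # False
```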
[ null, "http://www.hippocampus.org/hippocampus3/skins/shared/pageDecor/home/Chalkboard_hippo_logo_explore.png", null, "http://www.sru.edu//images/news/2019/April/041019c-Inline.jpg", null, "http://s3.thingpic.com/images/u9/R8xHJue6tBa8PKvJaVd7qRbQ.jpeg", null, "http://flood-rescue.com/img/math-homework-help-websites-12.jpg", null, "http://flood-rescue.com/img/homework-help-online-accounting-12.png", null, "http://flood-rescue.com/img/who-can-write-a-paper-for-me.png", null, "http://asialifecambodia.com/img/319250f3788a9e20db87b82b82094f69.jpg", null, "http://image.slidesharecdn.com/weekfiveassignment-wileyplus-140521181628-phpapp01/95/acc291week5assignmentwileyplusweek-five-assignment-wiley-plus-6-638.jpg", null, "http://projectnativeinformant.com/wp-content/uploads/2016/12/texe-zer-kunst-_-dis-dragged-3-865x1280.jpg", null, "http://www.markedbyteachers.com/media/docs/newdocs/gcse/english/english_literature/drama/william_shakespeare/romeo_and_juliet/104644/images/full/img_cropped_1.png", null, "http://image.slidesharecdn.com/essayforweakss-140205001523-phpapp01/95/essay-for-weak-students-5-638.jpg", null, "http://www.financehomeworkhelp.org/wp-content/uploads/2014/08/Very-Accurate-Finance-Homework-Help.png", null, "http://flood-rescue.com/img/college-students-homework-help-16.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92332196,"math_prob":0.88379484,"size":4465,"snap":"2020-45-2020-50","text_gpt3_token_len":867,"char_repetition_ratio":0.1248599,"word_repetition_ratio":0.0,"special_character_ratio":0.18365061,"punctuation_ratio":0.104808874,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99401,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,5,null,5,null,5,null,8,null,7,null,7,null,3,null,4,null,10,null,null,null,10,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-04T14:57:07Z\",\"WARC-Record-ID\":\"<urn:uuid:8dbb06da-51d0-4c72-8e07-3679f914fd7f>\",\"Content-Length\":\"21104\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9109f3bb-8475-4a20-9ea4-d48ca7c8af89>\",\"WARC-Concurrent-To\":\"<urn:uuid:1150d182-f204-418e-9460-14431a64fc9a>\",\"WARC-IP-Address\":\"144.91.111.158\",\"WARC-Target-URI\":\"http://essay.helpstudents.xyz/vip/Math-geometry-proof-solver.html\",\"WARC-Payload-Digest\":\"sha1:VN6F2VOORJT4JKWBW3GSIUFNY7MSBLFT\",\"WARC-Block-Digest\":\"sha1:LYUQ5BLQBG3UKYMTL6TIPW6LR7RRD5Y6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141737946.86_warc_CC-MAIN-20201204131750-20201204161750-00028.warc.gz\"}"}
https://dictionary.obspm.fr/?showAll=1&formSearchTextfield=viscosity
[ "# An Etymological Dictionary of Astronomy and AstrophysicsEnglish-French-Persian\n\n## فرهنگ ریشه شناختی اخترشناسی-اخترفیزیک\n\n### M. Heydari-Malayeri    -    Paris Observatory\n\nHomepage\n\nNumber of Results: 5 Search : viscosity\n absolute viscosity   وشکسانی ِ اوست   vošksâni-ye avastFr.: viscosité absolue   Same as → viscosity and → dynamic viscosity.→ absolute; → viscosity. coefficient of viscosity   همگر ِ وشکسانی   hamgar-e vošksâniFr.: coefficient de viscosité   A quantity that indicates a property of fluids and is defined by the ratio of shearing → stress to the rate of change of shearing → strain. It is also simply called viscosity. The coefficient of viscosity is expressed by: μ = (F/A) / (dv/dy), where F is the force required to maintain a steady velocity difference dv between any two parallel layers of the fluid, A is the area of the layers, and dv/dy is the → velocity gradient between two points separated by a small distance measured at right angles to the direction of flow. The unit of viscosity is that of force times distance divided by area times velocity. Thus, in the cgs system, the unit is 1 dyne.cm/cm2.(cm/s), which reduces to 1 dyne.s/cm2. This unit is called 1 → poise.→ viscosity; → coefficient. dynamic viscosity   وشکسانی ِ توانیک   vošksâni-y tavânikFr.: viscosité dynamique   Same as → viscosity and → absolute viscosity.→ dynamic; → viscosity. kinematic viscosity   وشکسانی ِ جنبشیک   vošksâni-ye jonbešikFr.: viscosité cinématique   The ratio of the → dynamic viscosity (η) to the density (ρ) of a fluid: ν = η/ρ. The unit of kinematic viscosity in the → SI system is m2s-1. In the → cgs system, cm2s-1, equal to 10-4 m2s-1, is called the → stokes (st).→ kinematic; → viscosity. viscosity   وشکسانی   vošksâni (#)Fr.: viscosité   The property of a → fluid that resists the force tending to cause the fluid to flow. Viscosity may be thought of as the internal → friction of two fluid layers which flow parallel to each other at different speeds. The cause of viscosity is the transport of → momentum by the molecules from one layer to the other. Viscosity is given by η = φ.u.λ.ρ, where φ is a coefficient which depends on the nature of the interaction between the molecules, u is the average velocity of thermal motion of the molecules, λ is the → mean free path, and ρ the → density of the fluid. Also called → dynamic viscosity or → absolute viscosity. See also → kinematic viscosity.Noun from → viscous; → -ity." ]
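The relation between dynamic and kinematic viscosity given in the last two entries is easy to illustrate numerically. A small Python sketch follows; the water values are approximate textbook numbers assumed here purely for illustration.

```python
def kinematic_viscosity(eta_pa_s, density_kg_m3):
    """nu = eta / rho, in m^2/s (SI); 1 m^2/s = 1e4 stokes (St)."""
    return eta_pa_s / density_kg_m3

# Water near 20 degC (assumed approximate values):
eta = 1.0e-3      # dynamic viscosity, Pa*s
rho = 998.0       # density, kg/m^3
nu = kinematic_viscosity(eta, rho)
print(nu, "m^2/s =", nu * 1e4, "St")   # about 1e-6 m^2/s, i.e. roughly 1 centistokes
```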
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8926355,"math_prob":0.900167,"size":2033,"snap":"2023-14-2023-23","text_gpt3_token_len":528,"char_repetition_ratio":0.17249876,"word_repetition_ratio":0.011331445,"special_character_ratio":0.23413675,"punctuation_ratio":0.12468828,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97539914,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-08T21:42:29Z\",\"WARC-Record-ID\":\"<urn:uuid:1147c668-2b73-4358-963a-499696e84c27>\",\"Content-Length\":\"18919\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1ce07d59-51c1-49b7-9dfb-e9b20d353988>\",\"WARC-Concurrent-To\":\"<urn:uuid:e250fbf6-4c16-4c07-9464-c0c669936cb6>\",\"WARC-IP-Address\":\"145.238.181.72\",\"WARC-Target-URI\":\"https://dictionary.obspm.fr/?showAll=1&formSearchTextfield=viscosity\",\"WARC-Payload-Digest\":\"sha1:I3XJ43Q46K74GE23TCNQYN3K6LRHAPBC\",\"WARC-Block-Digest\":\"sha1:AI5CUBFFN7BBMBS2GLKRWEJCENVJ5LDF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655143.72_warc_CC-MAIN-20230608204017-20230608234017-00392.warc.gz\"}"}
https://dev.opencascade.org/content/shape-split-fail
[ "# Shape split Fail\n\nHi,\nI need to perform a face splitting.\nSo I have a TopoDS_Face and a wire (built by projection on the face).\nWith:\nTopoDS_Shape prism = BRepPrimAPI_MakePrism(wire,faceDir,bInfinite,bCopy,bCanonize);\n\nI try to create an infinite prism.\nI use the prism with BRepAlgoAPI_Section to find the intersection between the two shapes and perform the splitting using BRepFeat_SplitShape.\n\nHere is a bit of code:\n\n//SPLIT CREATION\nbInfinite=true;\nbCopy=false;\nbCanonize=true;\ngp_Dir faceDir=(... normal of the face I need to split);\n\n//wire is a TopoDS_Wire created by curve projection on the surface of face\n\nTopoDS_Shape prism = BRepPrimAPI_MakePrism(wire,faceDir,bInfinite,bCopy,bCanonize);\n\n//SPLITTING\nBRepAlgoAPI_Section aSecAlgo (face, prism);\nBRepFeat_SplitShape aSplitter(face);\n\n//TopoDS_Iterator its(aSecAlgo);\nTopoDS_Iterator its(wire);\nint iEdge=0;\nfor (; its.More(); its.Next())\n{\nconst TopoDS_Edge& aEdge = TopoDS::Edge(its.Value());\n\n// determine an ancestor face of this edge\nTopoDS_Face aFace;\nif (aSecAlgo.HasAncestorFaceOn2(aEdge, aFace))\n{\niEdge++;\n}\n\n}\n\nIf iEdge is > 0, I retrieve the left faces and so on.\n\nBut (here is the problem) I get iEdge = 0!\n\nAny hint?", null, "Have you found a solution to your problem?\nIf so, I'm interested as well.\nThanks,\nalexandre", null, "I had to use a semi-infinite prism, because I was not able to get it working with an infinite prism.\nI'm still working on it. I'll post a solution as soon as possible." ]
[ null, "https://dev.opencascade.org/sites/default/files/images/userpic_default.png", null, "https://dev.opencascade.org/sites/default/files/images/userpic_default.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7633523,"math_prob":0.69302076,"size":1512,"snap":"2022-05-2022-21","text_gpt3_token_len":475,"char_repetition_ratio":0.11007958,"word_repetition_ratio":0.0,"special_character_ratio":0.2685185,"punctuation_ratio":0.19798657,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96744,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-29T05:58:05Z\",\"WARC-Record-ID\":\"<urn:uuid:2c058949-da02-4afa-b7ab-b609f4d244aa>\",\"Content-Length\":\"29342\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0ad1b873-28d7-41ca-83cc-d091440ed09a>\",\"WARC-Concurrent-To\":\"<urn:uuid:2672154a-fc91-4f1d-920c-a668d37e5521>\",\"WARC-IP-Address\":\"5.196.194.182\",\"WARC-Target-URI\":\"https://dev.opencascade.org/content/shape-split-fail\",\"WARC-Payload-Digest\":\"sha1:LVIR4MIGLNTTSNRIV7SJVGBHMGXABWXI\",\"WARC-Block-Digest\":\"sha1:VR5N2BJ64MTXQYXPMPH5SZE7UHS2MR6Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652663039492.94_warc_CC-MAIN-20220529041832-20220529071832-00395.warc.gz\"}"}
https://www.studypug.com/algebra-2/graphing-piecewise-linear-functions
[ "# Graphing piecewise linear functions\n\n### Graphing piecewise linear functions\n\n#### Lessons\n\n• 1.\nFor the following function", null, "a)\nSketch the graph\n\nb)\nState the domain\n\nc)\nState the range\n\n• 2.\nFor the following function", null, "a)\nSketch the graph\n\nb)\nState the domain\n\nc)\nState the range\n\n• 3.\nFor the following function", null, "a)\nSketch the graph\n\nb)\nState the domain\n\nc)\nState the range\n\n• 4.\nWrite a piecewise function that models the following graph:", null, "• 5.\na)\nGraph $f(x) = |3x - 2|$; and\n\nb)\nExpress $f(x) = |3x - 2|$ as a piecewise function" ]
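As a worked illustration related to part 5 b) above (this is only an illustration, not the lesson's own solution), the absolute value splits at the point where its argument changes sign, x = 2/3:

$$f(x) = |3x - 2| = \begin{cases} 3x - 2, & x \ge \tfrac{2}{3} \\ 2 - 3x, & x < \tfrac{2}{3} \end{cases}$$

Each piece is linear, so the graph is a pair of rays meeting at the point (2/3, 0).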
[ null, "https://dcvp84mxptlac.cloudfront.net/diagrams2/MATH12-24-2-X-1.jpg", null, "https://dcvp84mxptlac.cloudfront.net/diagrams2/MATH12-24-2-X-2.jpg", null, "https://dcvp84mxptlac.cloudfront.net/diagrams2/MATH12-24-2-X-3.jpg", null, "https://dcvp84mxptlac.cloudfront.net/diagrams2/MATH12-24-2-X-4.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6058878,"math_prob":0.9959727,"size":627,"snap":"2019-51-2020-05","text_gpt3_token_len":156,"char_repetition_ratio":0.20866774,"word_repetition_ratio":0.19767442,"special_character_ratio":0.21212122,"punctuation_ratio":0.10185185,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993829,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T06:39:13Z\",\"WARC-Record-ID\":\"<urn:uuid:4525dc93-e1a1-4f87-9455-2924adc07cc2>\",\"Content-Length\":\"190283\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1ac1b5b1-bbfe-42be-97e5-d54a491f425e>\",\"WARC-Concurrent-To\":\"<urn:uuid:a4341c6f-94bc-47bf-b4f9-e9d1a062f040>\",\"WARC-IP-Address\":\"34.200.169.6\",\"WARC-Target-URI\":\"https://www.studypug.com/algebra-2/graphing-piecewise-linear-functions\",\"WARC-Payload-Digest\":\"sha1:RXRW7UCHD2TPIKWVOAGRAA2KNVG5RWPG\",\"WARC-Block-Digest\":\"sha1:HIIIBQPN43G4WJBQESO7P4ZXQOVQT5V6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250601615.66_warc_CC-MAIN-20200121044233-20200121073233-00441.warc.gz\"}"}
https://raisingthebar.nl/2022/02/19/estimation-of-parameters-of-a-normal-distribution/
[ "19\nFeb 22\n\n## Estimation of parameters of a normal distribution\n\nHere we show that the knowledge of the distribution of", null, "$s^{2}$ for linear regression allows one to do without long calculations contained in the guide ST 2134 by J. Abdey.\n\nTheorem. Let", null, "$y_{1},...,y_{n}$ be independent observations from", null, "$N\\left( \\mu,\\sigma ^{2}\\right)$. 1)", null, "$s^{2}\\left( n-1\\right) /\\sigma ^{2}$ is distributed as", null, "$\\chi _{n-1}^{2}.$ 2) The estimators", null, "$\\bar{y}$ and", null, "$s^{2}$ are independent. 3)", null, "$Es^{2}=\\sigma ^{2},$ 4)", null, "$Var\\left( s^{2}\\right) =\\frac{2\\sigma ^{4}}{n-1},$ 5)", null, "$\\frac{s^{2}-\\sigma ^{2}}{\\sqrt{2\\sigma ^{4}/\\left(n-1\\right) }}$ converges in distribution to", null, "$N\\left( 0,1\\right) .$\n\nProof. We can write", null, "$y_{i}=\\mu +e_{i}$ where", null, "$e_{i}$ is distributed as", null, "$N\\left( 0,\\sigma ^{2}\\right) .$ Putting", null, "$\\beta =\\mu ,\\ y=\\left(y_{1},...,y_{n}\\right) ^{T},$", null, "$e=\\left( e_{1},...,e_{n}\\right) ^{T}$ and", null, "$X=\\left( 1,...,1\\right) ^{T}$ (a vector of ones) we satisfy (1) and (2). Since", null, "$X^{T}X=n,$ we have", null, "$\\hat{\\beta}=\\bar{y}.$ Further,", null, "$r\\equiv y-X\\hat{ \\beta}=\\left( y_{1}-\\bar{y},...,y_{n}-\\bar{y}\\right) ^{T}$\n\nand", null, "$s^{2}=\\left\\Vert r\\right\\Vert ^{2}/\\left( n-1\\right) =\\sum_{i=1}^{n}\\left( y_{i}-\\bar{y}\\right) ^{2}/\\left( n-1\\right) .$\n\nThus 1) and 2) follow from results for linear regression.\n\n3) For a normal variable", null, "$X$ its moment generating function is", null, "$M_{X}\\left( t\\right) =\\exp \\left(\\mu t+\\frac{1}{2}\\sigma ^{2}t^{2}\\right)$ (see Guide ST2133, 2021, p.88). For the standard normal we get", null, "$M_{z}^{\\prime }\\left( t\\right) =\\exp \\left( \\frac{1}{2}t^{2}\\right) t,$", null, "$M_{z}^{\\prime \\prime }\\left( t\\right) =\\exp \\left( \\frac{1}{2}t^{2}\\right) (t^{2}+1),$", null, "$M_{z}^{\\prime \\prime \\prime}\\left( t\\right) =\\exp \\left( \\frac{1}{2}t^{2}\\right) (t^{3}+2t+t),$", null, "$M_{z}^{(4)}\\left( t\\right) =\\exp \\left( \\frac{1}{2}t^{2}\\right) (t^{4}+6t^{2}+3).$\n\nApplying the general property", null, "$EX^{r}=M_{X}^{\\left( r\\right) }\\left( 0\\right)$ (same guide, p.84) we see that", null, "$Ez=0,$", null, "$Ez^{2}=1,$", null, "$Ez^{3}=0,$", null, "$Ez^{4}=3,$", null, "$Var(z)=1,$", null, "$Var\\left( z^{2}\\right) =Ez^{4}-\\left( Ez^{2}\\right) ^{2}=3-1=2.$\n\nTherefore", null, "$Es^{2}=\\frac{\\sigma ^{2}}{n-1}E\\left( z_{1}^{2}+...+z_{n-1}^{2}\\right) =\\frac{\\sigma ^{2}}{n-1}\\left( n-1\\right) =\\sigma ^{2}.$\n\n4) By independence of standard normals", null, "$Var\\left( s^{2}\\right) =$", null, "$\\left(\\frac{\\sigma ^{2}}{n-1}\\right) ^{2}\\left[ Var\\left( z_{1}^{2}\\right) +...+Var\\left( z_{n-1}^{2}\\right) \\right] =\\frac{\\sigma ^{4}}{\\left( n-1\\right) ^{2}}2\\left( n-1\\right) =\\frac{2\\sigma ^{4}}{n-1}.$\n\n5) By standardizing", null, "$s^{2}$ we have", null, "$\\frac{s^{2}-Es^{2}}{\\sigma \\left(s^{2}\\right) }=\\frac{s^{2}-\\sigma ^{2}}{\\sqrt{2\\sigma ^{4}/\\left( n-1\\right) }}$ and this converges in distribution to", null, "$N\\left( 0,1\\right)$ by the central limit theorem." ]
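A quick Monte Carlo check of parts 3) and 4) of the theorem is straightforward. The following NumPy sketch estimates $Es^{2}$ and $Var(s^{2})$ by simulation; the parameter values are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 2.0, 3.0, 10, 200_000

y = rng.normal(mu, sigma, size=(reps, n))
s2 = y.var(axis=1, ddof=1)        # unbiased sample variance, divisor n - 1

print(s2.mean(), "vs", sigma**2)                  # ~9:  E[s^2] = sigma^2
print(s2.var(), "vs", 2 * sigma**4 / (n - 1))     # ~18: Var[s^2] = 2 sigma^4 / (n - 1)
```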
[ null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82315105,"math_prob":1.0000098,"size":926,"snap":"2023-40-2023-50","text_gpt3_token_len":226,"char_repetition_ratio":0.121475056,"word_repetition_ratio":0.0,"special_character_ratio":0.24190065,"punctuation_ratio":0.09550562,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000094,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T03:22:00Z\",\"WARC-Record-ID\":\"<urn:uuid:3af0c83d-99c2-4b86-acee-18364417e809>\",\"Content-Length\":\"101602\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d96051d-ca5c-4c27-98e6-e45ba767d4fe>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d778980-3e09-496c-9daf-29e99c3f384c>\",\"WARC-IP-Address\":\"35.214.134.104\",\"WARC-Target-URI\":\"https://raisingthebar.nl/2022/02/19/estimation-of-parameters-of-a-normal-distribution/\",\"WARC-Payload-Digest\":\"sha1:HC7ZOZ2NI2LTL2JRUUFWSJYHUXGTJJHL\",\"WARC-Block-Digest\":\"sha1:PQ2KOH3NMX5QMUVUHGQYLS2VCEODM3WW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506479.32_warc_CC-MAIN-20230923030601-20230923060601-00401.warc.gz\"}"}
http://7daixie.com/202006301428028431.html
[ "# Homework # 14 SVD\n\nHomework # 14\n\n1. • Reading from Sauer: The beginning of Section 12.3 discusses the SVD. Sections 12.4.1 to 12.4.3 discuss applications of the SVD.\n• Reading from Elements of Statistical Learning: Section 14.3.6 discusses K-means.\n2. With this assignment you will find the file users-shows.txt. This file gives a 9985 by 563 matrix, call it A, corresponding to 563 television shows and 9985 television users. The matrix is composed of 0’s and 1’s. A 1 means that the user likes the show, a 0 means they do not. The shows, if you are interested, are listed in the file shows.txt. The 500th user, let’s call him Alex, has had his preferences for the first 100 shows removed from the matrix and replaced with all 0’s. His actual preferences are in the file Alex.txt. Your goal is to use the SVD to suggest 5 shows from the first 100 that you believe Alex would like. You should use R’s svd function to compute the SVD of A.\n(a) As a warmup to this problem, show the following. Given the\n(b) Compute the SVD of A and plot the singular values. How many singular values would accurately approximate this matrix? (What accurate means here is up to you.)\n(c) Use the SVD to reduce the data to two dimensions as follows. Project the users onto the appropriate two-dimensional PCA space and plot; do the same for the shows. Using these projections, suggest five shows for Alex. (This problem and dataset are taken from a homework in a class taught by Jure Leskovec in the Stanford CS department. The dataset was originally produced by Chris Volinsky in the Columbia CS department.)\n3. In this problem you will implement K-means on two datasets.\n(a) Here is a fact I mentioned in class that is essential to the k-means algorithm. Suppose you are given N points x^(i) for i = 1, 2, ..., N, with each point in R^n. Compute the point m ∈ R^n that minimizes the sum of squared distances to the points, Σ_i ||x^(i) - m||^2. To find the m, take the gradient of this expression, set it to zero, and solve for m. You should find that m is the mean of the x^(i).\n(b) Write a function MyKmeans(x, K) that accepts a data matrix X and the number of means K and returns the solution to the k-means problem as well as the number of iterations needed to reach the solution through the k-means algorithm. (You can check your function against R’s kmeans function and, if you like, you can also include a parameter in MyKmeans that chooses a starting value for the assignments or means.) Explain why the k-means algorithm is a descent algorithm.\n(c) Apply MyKmeans to the attached dataset synthetic kmeans data.csv with K = 2. This is an artificial dataset for which the sample points are in R^2. Plot the points of the dataset and the location of your 2 means at various iterations to see how the means move to their optimal location.\n(d) The attached dataset tumor microarray data.csv comes from the Elements of Statistical Learning book. Each row represents a cancer cell. The first column, labeled cancer, gives the type of the cancer cell (e.g. RENAL, LEUKEMIA). The rest of the columns are numeric and give measurements of different proteins in the cell. The point here is to attempt to distinguish cancer cells by the level of proteins found in the cell. Perform K-means using R’s kmeans function. The cluster associated with a given mean is the set of sample points assigned to it. Try different K, and determine if the clusters formed separate the cancers (e.g. 
certain cancers are found within certain clusters).\n(See Elements of Statistical Learning Table 14.2, which\ndoes this for K = 2.)" ]
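For part 2(c), here is a rough NumPy sketch of one way to do the projection and scoring. The assignment itself asks for R's svd; the file is assumed to be a whitespace-separated 0/1 matrix, and the dot-product scoring rule is just one simple choice, not the required method.

```python
import numpy as np

A = np.loadtxt("users-shows.txt")          # 9985 x 563 matrix of 0/1 ratings
U, s, Vt = np.linalg.svd(A, full_matrices=False)

users_2d = U[:, :2] * s[:2]                # project users onto the top-2 PCA space
shows_2d = Vt[:2, :].T * s[:2]             # project shows onto the same space

alex = users_2d[499]                       # the 500th user (0-based index 499)
scores = shows_2d[:100] @ alex             # score the first 100 shows by similarity to Alex
top5 = np.argsort(scores)[::-1][:5]
print("suggested show indices:", top5)
```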
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9024641,"math_prob":0.9159458,"size":3605,"snap":"2020-24-2020-29","text_gpt3_token_len":892,"char_repetition_ratio":0.10719245,"word_repetition_ratio":0.0,"special_character_ratio":0.23550624,"punctuation_ratio":0.1205298,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9909755,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-11T05:33:55Z\",\"WARC-Record-ID\":\"<urn:uuid:b003d647-af67-4b12-ad53-505b49237c54>\",\"Content-Length\":\"19540\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:898ea7f5-444c-48e7-bda7-5c8fdd98b199>\",\"WARC-Concurrent-To\":\"<urn:uuid:aed06131-01b2-452a-80df-4d4d7ab3f231>\",\"WARC-IP-Address\":\"103.48.169.250\",\"WARC-Target-URI\":\"http://7daixie.com/202006301428028431.html\",\"WARC-Payload-Digest\":\"sha1:IWQINWI76OXKLY7K2M5P6NDAJED2GE3M\",\"WARC-Block-Digest\":\"sha1:5J4JBMUA244NF2RHX5ALLCOLQ3EBXKNJ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655921988.66_warc_CC-MAIN-20200711032932-20200711062932-00039.warc.gz\"}"}
http://keplarllp.com/what-affects-the-period-of-a-pendulum.html
[ "# What affects the period of a pendulum. Exploring Pendulums 2019-01-27\n\nWhat affects the period of a pendulum Rating: 8,9/10 972 reviews\n\n## What effects the period of a pendulum?", null, "The period was measured in seconds, with the stop clock measuring to the degree of two decimal places of a second. The results of the regression analysis are shown. A pendulum with a longer string has a lower frequency, meaning it swings back and forth less times in a given amount of time than a pendulum with a shorter string length. Many students believe that changing any of the variables string length, mass, or where we release the pendulum will change the frequency of the pendulum. Obviously, the greater the mass, the less any air friction or friction at the pivot will slow the pendulum. This also affects the frequency of the pendulum, which is the rate at which the pendulum swings back and forth. Whatever potential energy is lost in going from position A to position D appears as kinetic energy.\n\nNext\n\n## How Does Amplitude Affect the Period of a Pendulum?", null, "Find the slope of your straight line, and note down the equation which relates the two variables. Yes No Thanks for your feedback! The Sinusoidal Nature of Pendulum Motion In , we investigated the sinusoidal nature of the motion of a mass on a spring. Dependent variable: Time period of the pendulum. It affected the literal, philosophical and social points of view in a way that some of its major doctrines are still living on and has its followers. Evaluation I think that the experiment went well, however if I were to do it again I would allow myself more time so that I could see if my prediction about the length of the string was correct. Maybe you can think of some new ways to use a pendulum, too! Now your intuition should tell you what's going to happen but even if it doesn't you can plug this value into your equation for the pendulum's period to find out what happens …. Engineers also use inventions and discoveries to build new things.\n\nNext\n\n## Physics 4A Lab 8: THe Simple Pendulum", null, "The type of structure that develops will be one that provides. In each plot, values of period the dependent variable are placed on the vertical axis. Organizational size The larger an organization becomes, the more complicated its structure. Factors affecting the time period of a pendulum Aim: To investigate the factors affecting the time period of a pendulum. Equation for the line of best fit, with T instead of y, and root L instead of x. Repeat for the two other bobs, keeping the string length and amplitude constant.\n\nNext\n\n## What Affects the Swing Rate of a Pendulum?", null, "Now here come the words. A pendulum is a string hanging from a fixed spot with a weight called a bob at one end that can swing back and forth. Give them a chance to debate and discuss their answers before continuing. In Taiwan's capital city, the Taipei 101 skyscraper has a giant 726-ton pendulum suspended over the 88th floor to counteract winds, reducing the building's sway and keeping motion sickness at bay. The graph will be generated automatically. You will have to make your own table and graph for this use the following page on excel — it is unlocked.\n\nNext\n\n## Physics 4A Lab 8: THe Simple Pendulum", null, "The acceleration of gravity is the force gravity exerts on an object. For each bob you should have: 3 period data points, the average value of those data points and the average deviation. Last modified: April 6, 2018. 
Just as objects with different masses but similar shapes fall at the same rate for example, a ping-pong ball and a golf ball, or a grape and a large ball bearing , the pendulum is pulled downward at the same rate no matter how much the bob weighs. Results 100g 200g 300g 400g 500g Attempt 1 20 33.\n\nNext\n\n## What factors affect the period of a pendulum", null, "So as the bob swings to the left of its equilibrium position, the tension force is at an angle - directed upwards and to the right. Today, we will follow in Galileo's footsteps to learn about how pendulums behave. As you swing, you smoothly ride from the top of one arc, through the bottom, to the top on the other side of the swing, and back again. The momentum built up by the acceleration of gravity causes the mass to swing in the opposite direction to a height equal to the original position. Ask for ten volunteers from the class to come up to the front of the room, and give each person one of the pieces of paper. This force is known as inertia. Either way the principle of periodic motion affects the pendulum.\n\nNext\n\n## Physics 4A Lab 8: THe Simple Pendulum", null, "I think this will happen because the bigger the string, the greater the distance the pendulum needs to travel and so the more time it will take. The back and forth swinging motion of the bob of a pendulum. Today, engineers use pendulums in clocks, but they also use them for detecting earthquakes and helping buildings resist shaking. By so doing, the experimenters were able to investigate the possible effect of the mass upon the period. As the amplitude of the pendulum increases, the period increases. Suppose that the performers can be treated as a simple pendulum with a length of 16 m. The time period of a pendulum is directly proportional to thesquare root of its length.\n\nNext\n\n## Period of a Pendulum Lab", null, "If you increase the length four times, you will double the period. The shape of the curve indicates some sort of power relationship between period and length. Give attention to your algebra: Square both sides of the equation to remove the radical. Lexical processing consists of 3 main components, identifying, naming, and understanding. The possessed by an object is the energy it possesses due to its motion. In other words, the height must be measured as a vertical distance above some reference position. Measure the period as above for at least 5 different angles, ranging between 10-40 degrees.\n\nNext" ]
[ null, "http://hyperphysics.phy-astr.gsu.edu/hbase/imgmec/pendp.gif", null, "x-raw-image:/91ede0991fa064ec6981600beb08ac76f644a824ae3df38d54944545f54068b3", null, "http://www.physicsclassroom.com/Class/waves/u10l0c5.gif", null, "https://upload.wikimedia.org/wikipedia/commons/thumb/b/b2/Simple_gravity_pendulum.svg/300px-Simple_gravity_pendulum.svg.png", null, "https://plot.ly/~jackwilson/35/effect-of-bob-mass-on-period-of-a-pendulum.png", null, "https://physicsbykinsella.weebly.com/uploads/4/7/5/3/47539039/8035418_orig.png", null, "https://slideplayer.com/slide/4149596/13/images/2/In+this+presentation+you+will+learn+about+the+pendulum+and+its+cycle%2C+period%2C+frequency+and+amplitude.+You+will+learn+which+factors+affect+its+behaviour+and+how+can+they+be+calculated..jpg", null, "https://upload.wikimedia.org/wikipedia/commons/4/43/Coupled_oscillators.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94262797,"math_prob":0.91666204,"size":5367,"snap":"2021-21-2021-25","text_gpt3_token_len":1125,"char_repetition_ratio":0.13816893,"word_repetition_ratio":0.006329114,"special_character_ratio":0.20607416,"punctuation_ratio":0.09609895,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9638574,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,8,null,null,null,null,null,7,null,2,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-17T15:17:31Z\",\"WARC-Record-ID\":\"<urn:uuid:89e77721-34de-45ff-88ab-b3fb7d26a6cd>\",\"Content-Length\":\"10240\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c396ddfe-3bfe-4237-a19c-2e8f22f89792>\",\"WARC-Concurrent-To\":\"<urn:uuid:a75b0b11-3b2c-4053-a6ee-bbea85ac7b48>\",\"WARC-IP-Address\":\"52.217.90.171\",\"WARC-Target-URI\":\"http://keplarllp.com/what-affects-the-period-of-a-pendulum.html\",\"WARC-Payload-Digest\":\"sha1:SGFUBRT4QTLTFW4XPOBP36VLTLAW74ZK\",\"WARC-Block-Digest\":\"sha1:AZSK46ZNHMKLIW6W5PG3ORQCUHPT2NGH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991258.68_warc_CC-MAIN-20210517150020-20210517180020-00121.warc.gz\"}"}
https://thegrandparadise.com/writing-tricks/is-arima-and-box-jenkins-same/
[ "# Is ARIMA and Box-Jenkins same?\n\n## Is ARIMA and Box-Jenkins same?\n\nAutoregressive integrated moving average (ARIMA) models are a form of Box-Jenkins model. The terms ARIMA and Box-Jenkins are sometimes used interchangeably.\n\nWhat is Box-Jenkins method of forecasting?\n\nBox – Jenkins Analysis refers to a systematic method of identifying, fitting, checking, and using integrated autoregressive, moving average (ARIMA) time series models. The method is appropriate for time series of medium to long length (at least 50 observations).\n\nWhen was Arima model invented?\n\n1930’s-1940’s\nAre an adaptation of discrete-time filtering methods developed in 1930’s-1940’s by electrical engineers (Norbert Wiener et al.)\n\n### How do you do a time series analysis in SPSS?\n\nMaking Time Series Using SPSS\n\n1. Open SPSS.\n2. Click on the circle next to “Type in data”.\n3. Enter the time values in one of the columns, and enter the non-time values in another column.\n4. Click on the “Variable View” tab.\n5. Type in names for the time variable and the non-time variable.\n\nHow does Arima model work?\n\nARIMA uses a number of lagged observations of time series to forecast observations. A weight is applied to each of the past term and the weights can vary based on how recent they are. AR(x) means x lagged error terms are going to be used in the ARIMA model. ARIMA relies on AutoRegression.\n\nWhat is ARIMA Modelling?\n\nAn autoregressive integrated moving average, or ARIMA, is a statistical analysis model that uses time series data to either better understand the data set or to predict future trends. A statistical model is autoregressive if it predicts future values based on past values.\n\n## What is p and Q in ARIMA?\n\nA nonseasonal ARIMA model is classified as an “ARIMA(p,d,q)” model, where: p is the number of autoregressive terms, d is the number of nonseasonal differences needed for stationarity, and. q is the number of lagged forecast errors in the prediction equation." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8640896,"math_prob":0.8885942,"size":1898,"snap":"2023-14-2023-23","text_gpt3_token_len":432,"char_repetition_ratio":0.11774023,"word_repetition_ratio":0.0,"special_character_ratio":0.21285564,"punctuation_ratio":0.10526316,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98668545,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-21T02:03:24Z\",\"WARC-Record-ID\":\"<urn:uuid:4096a89c-43ca-4790-88f2-3727fc1f1186>\",\"Content-Length\":\"54493\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4f5b2a81-674a-4084-a062-938dc299904a>\",\"WARC-Concurrent-To\":\"<urn:uuid:d1fe9cc5-91a5-4d23-b546-819c2f8c28e7>\",\"WARC-IP-Address\":\"151.139.128.10\",\"WARC-Target-URI\":\"https://thegrandparadise.com/writing-tricks/is-arima-and-box-jenkins-same/\",\"WARC-Payload-Digest\":\"sha1:WRR3FLX2C7EUWDE47UHT3FCUMVSUSLPC\",\"WARC-Block-Digest\":\"sha1:JPZYRGBP4VLXZK2NBDMPLYDDDZRF7PAR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943589.10_warc_CC-MAIN-20230321002050-20230321032050-00794.warc.gz\"}"}
https://whatisconvert.com/1275-imperial-pints-in-cubic-meters
[ "# What is 1275 Imperial Pints in Cubic Meters?\n\n## Convert 1275 Imperial Pints to Cubic Meters\n\nTo calculate 1275 Imperial Pints to the corresponding value in Cubic Meters, multiply the quantity in Imperial Pints by 0.00056826125 (conversion factor). In this case we should multiply 1275 Imperial Pints by 0.00056826125 to get the equivalent result in Cubic Meters:\n\n1275 Imperial Pints x 0.00056826125 = 0.72453309375 Cubic Meters\n\n1275 Imperial Pints is equivalent to 0.72453309375 Cubic Meters.\n\n## How to convert from Imperial Pints to Cubic Meters\n\nThe conversion factor from Imperial Pints to Cubic Meters is 0.00056826125. To find out how many Imperial Pints in Cubic Meters, multiply by the conversion factor or use the Volume converter above. One thousand two hundred seventy-five Imperial Pints is equivalent to zero point seven two five Cubic Meters.", null, "## Definition of Imperial Pint\n\nThe pint (symbol: pt) is a unit of volume or capacity in both the imperial and United States customary measurement systems. The imperial pint is equal to one-eighth of an imperial gallon. One imperial pint is equal to 568.26125 millilitres (≈ 568 ml).\n\n## Definition of Cubic Meter\n\nThe cubic meter (also written \"cubic metre\", symbol: m3) is the SI derived unit of volume. It is defined as the volume of a cube with edges one meter in length. Another name, not widely used any more, is the kilolitre. It is sometimes abbreviated to cu m, m3, M3, m^3, m**3, CBM, cbm.\n\n### Using the Imperial Pints to Cubic Meters converter you can get answers to questions like the following:\n\n• How many Cubic Meters are in 1275 Imperial Pints?\n• 1275 Imperial Pints is equal to how many Cubic Meters?\n• How to convert 1275 Imperial Pints to Cubic Meters?\n• How many is 1275 Imperial Pints in Cubic Meters?\n• What is 1275 Imperial Pints in Cubic Meters?\n• How much is 1275 Imperial Pints in Cubic Meters?\n• How many m3 are in 1275 uk pt?\n• 1275 uk pt is equal to how many m3?\n• How to convert 1275 uk pt to m3?\n• How many is 1275 uk pt in m3?\n• What is 1275 uk pt in m3?\n• How much is 1275 uk pt in m3?" ]
[ null, "https://whatisconvert.com/images/1275-imperial-pints-in-cubic-meters", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8153871,"math_prob":0.97281206,"size":2022,"snap":"2020-24-2020-29","text_gpt3_token_len":560,"char_repetition_ratio":0.246779,"word_repetition_ratio":0.12569833,"special_character_ratio":0.29574677,"punctuation_ratio":0.11358025,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9932552,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-09T13:46:46Z\",\"WARC-Record-ID\":\"<urn:uuid:97b03818-91d0-42c0-8edf-3e35dbf28952>\",\"Content-Length\":\"29844\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:188396db-34a8-478f-ad3e-7fd59b7406af>\",\"WARC-Concurrent-To\":\"<urn:uuid:955848de-2834-4d61-9de4-0306451ba431>\",\"WARC-IP-Address\":\"104.31.70.53\",\"WARC-Target-URI\":\"https://whatisconvert.com/1275-imperial-pints-in-cubic-meters\",\"WARC-Payload-Digest\":\"sha1:A5V5Z62PTGVP5PND2EEGJDPEE6TBQXXO\",\"WARC-Block-Digest\":\"sha1:KLCMNLIM5T6SS3P4YAOU5VUXCABHT7HZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655900335.76_warc_CC-MAIN-20200709131554-20200709161554-00393.warc.gz\"}"}
http://infocenter-archive.sybase.com/help/topic/com.sybase.dc33621_33620_33619_1250/html/ptallbk/ptallbk678.htm
[ "# How the optimizer uses densities and histograms\n\nWhen the optimizer analyzes a SARG, it uses the histogram values, densities, and the number of rows in the table to estimate the number of rows that match the value specified in the SARG:\n\n• If the SARG value matches a frequency cell, the estimated number of matching rows is equal to the weight of the frequency cell multiplied by the number of rows in the table. This query includes a data value with a high number of duplicates, so it matches a frequency cell:\n\n```where authors.city = \"New York\"\n```\n\nIf the weight of the frequency cell is #.015606, and the authors table has 5000 rows, the optimizer estimates that the query returns 5000 * .015606 = 78 rows.\n\n• If the SARG value falls within a range cell, the optimizer uses the range cell density to estimate the number of rows. For example, a query on a city value that falls in a range cell, with a range cell density of .000586 for the column, would estimate that 5000 * .000586 = 3 rows would be returned.\n\n• For range queries, the optimizer adds the weights of all cells spanned by the range of values. When the beginning or end of the range falls in a range cell, the optimizer uses interpolation to estimate the number of rows from that cell that are included in the range." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8128394,"math_prob":0.9918621,"size":1219,"snap":"2020-45-2020-50","text_gpt3_token_len":271,"char_repetition_ratio":0.16872428,"word_repetition_ratio":0.08597285,"special_character_ratio":0.24364233,"punctuation_ratio":0.10526316,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99506,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-20T00:30:30Z\",\"WARC-Record-ID\":\"<urn:uuid:4b471e4a-9451-48cc-986a-463253f8e486>\",\"Content-Length\":\"3132\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:79500fa6-2e58-49ed-adac-3daa60dd1ca7>\",\"WARC-Concurrent-To\":\"<urn:uuid:0d3d329c-df25-4a71-9302-249ffff9e81b>\",\"WARC-IP-Address\":\"169.145.10.96\",\"WARC-Target-URI\":\"http://infocenter-archive.sybase.com/help/topic/com.sybase.dc33621_33620_33619_1250/html/ptallbk/ptallbk678.htm\",\"WARC-Payload-Digest\":\"sha1:SBONUGRNTFSBTGYLXOFB7ES74V7V5AGA\",\"WARC-Block-Digest\":\"sha1:AKYVDI4YSBHIXH3LBCBXZNIMRFLL7QMJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107867463.6_warc_CC-MAIN-20201019232613-20201020022613-00127.warc.gz\"}"}
https://learn.sparkfun.com/tutorials/analog-to-digital-conversion/arduino-adc-example
[ "# Analog to Digital Conversion\n\nPages\n\nTo show this in the real world let’s use the Arduino to detect an analog voltage. Use a trimpot, or light sensor, or simple voltage divider to create a voltage. Let’s setup a simple trimpot circuit for this example:\n\nTo start, we need to define the pin as an input. To match the circuit diagram we will use A3:\n\n``````pinMode(A3, INPUT);\n``````\n\nand then do the analog to digital version by using the analogRead() command:\n\n``````int x = analogRead(A3); //Reads the analog value on pin A3 into x\n``````\n\nThe value that is returned and stored in x will be a value from 0 to 1023. The Arduino has a 10-bit ADC (2^10 = 1024). We store this value into an int because x is bigger (10 bits) than what a byte can hold (8 bits).\n\nLet’s print this value to watch it as it changes:\n\n``````Serial.print(“Analog value: “);\nSerial.println(x);\n``````\n\nAs we change the analog value, x should also change. For example, if x is reported to be 334, and we’re using the Arduino at 5V, what is the actual voltage? Pull out your digital multimeter and check the actual voltage. It should be approximately 1.63V. Congratulations! You have just created your own digital multimeter with an Arduino!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7748122,"math_prob":0.9649312,"size":1177,"snap":"2023-40-2023-50","text_gpt3_token_len":295,"char_repetition_ratio":0.12702473,"word_repetition_ratio":0.0,"special_character_ratio":0.25488532,"punctuation_ratio":0.1254902,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996293,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T19:31:46Z\",\"WARC-Record-ID\":\"<urn:uuid:3ac3158d-f0a9-4971-ab41-ba5959e4c36f>\",\"Content-Length\":\"51808\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d024205b-e790-4998-812f-fdb9d54214a2>\",\"WARC-Concurrent-To\":\"<urn:uuid:8bd29a7d-8d7d-40e4-be96-d583219d7280>\",\"WARC-IP-Address\":\"3.216.222.191\",\"WARC-Target-URI\":\"https://learn.sparkfun.com/tutorials/analog-to-digital-conversion/arduino-adc-example\",\"WARC-Payload-Digest\":\"sha1:2APJLCDWBPZYPR22ISMT5MXN43VYWJEU\",\"WARC-Block-Digest\":\"sha1:5AFU7X5AYD23QO2GAFYRWUSISUUHN4WR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100769.54_warc_CC-MAIN-20231208180539-20231208210539-00739.warc.gz\"}"}
https://www.colorhexa.com/33ffc5
[ "# #33ffc5 Color Information\n\nIn a RGB color space, hex #33ffc5 is composed of 20% red, 100% green and 77.3% blue. Whereas in a CMYK color space, it is composed of 80% cyan, 0% magenta, 22.7% yellow and 0% black. It has a hue angle of 162.9 degrees, a saturation of 100% and a lightness of 60%. #33ffc5 color hex could be obtained by blending #66ffff with #00ff8b. Closest websafe color is: #33ffcc.\n\n• R 20\n• G 100\n• B 77\nRGB color chart\n• C 80\n• M 0\n• Y 23\n• K 0\nCMYK color chart\n\n#33ffc5 color description : Vivid cyan - lime green.\n\n# #33ffc5 Color Conversion\n\nThe hexadecimal color #33ffc5 has RGB values of R:51, G:255, B:197 and CMYK values of C:0.8, M:0, Y:0.23, K:0. Its decimal value is 3407813.\n\nHex triplet RGB Decimal 33ffc5 `#33ffc5` 51, 255, 197 `rgb(51,255,197)` 20, 100, 77.3 `rgb(20%,100%,77.3%)` 80, 0, 23, 0 162.9°, 100, 60 `hsl(162.9,100%,60%)` 162.9°, 80, 100 33ffcc `#33ffcc`\nCIE-LAB 89.975, -60.845, 14.27 47.199, 76.25, 65.05 0.25, 0.405, 76.25 89.975, 62.496, 166.801 89.975, -72.091, 31.298 87.321, -56.329, 16.957 00110011, 11111111, 11000101\n\n# Color Schemes with #33ffc5\n\n• #33ffc5\n``#33ffc5` `rgb(51,255,197)``\n• #ff336d\n``#ff336d` `rgb(255,51,109)``\nComplementary Color\n• #33ff5f\n``#33ff5f` `rgb(51,255,95)``\n• #33ffc5\n``#33ffc5` `rgb(51,255,197)``\n• #33d3ff\n``#33d3ff` `rgb(51,211,255)``\nAnalogous Color\n• #ff5f33\n``#ff5f33` `rgb(255,95,51)``\n• #33ffc5\n``#33ffc5` `rgb(51,255,197)``\n• #ff33d3\n``#ff33d3` `rgb(255,51,211)``\nSplit Complementary Color\n• #ffc533\n``#ffc533` `rgb(255,197,51)``\n• #33ffc5\n``#33ffc5` `rgb(51,255,197)``\n• #c533ff\n``#c533ff` `rgb(197,51,255)``\n• #6dff33\n``#6dff33` `rgb(109,255,51)``\n• #33ffc5\n``#33ffc5` `rgb(51,255,197)``\n• #c533ff\n``#c533ff` `rgb(197,51,255)``\n• #ff336d\n``#ff336d` `rgb(255,51,109)``\n• #00e6a4\n``#00e6a4` `rgb(0,230,164)``\n• #00ffb7\n``#00ffb7` `rgb(0,255,183)``\n• #1affbe\n``#1affbe` `rgb(26,255,190)``\n• #33ffc5\n``#33ffc5` `rgb(51,255,197)``\n• #4dffcc\n``#4dffcc` `rgb(77,255,204)``\n• #66ffd4\n``#66ffd4` `rgb(102,255,212)``\n• #80ffdb\n``#80ffdb` `rgb(128,255,219)``\nMonochromatic Color\n\n# Alternatives to #33ffc5\n\nBelow, you can see some colors close to #33ffc5. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #33ff92\n``#33ff92` `rgb(51,255,146)``\n• #33ffa3\n``#33ffa3` `rgb(51,255,163)``\n• #33ffb4\n``#33ffb4` `rgb(51,255,180)``\n• #33ffc5\n``#33ffc5` `rgb(51,255,197)``\n• #33ffd6\n``#33ffd6` `rgb(51,255,214)``\n• #33ffe7\n``#33ffe7` `rgb(51,255,231)``\n• #33fff8\n``#33fff8` `rgb(51,255,248)``\nSimilar Colors\n\n# #33ffc5 Preview\n\nThis text has a font color of #33ffc5.\n\n``<span style=\"color:#33ffc5;\">Text here</span>``\n#33ffc5 background color\n\nThis paragraph has a background color of #33ffc5.\n\n``<p style=\"background-color:#33ffc5;\">Content here</p>``\n#33ffc5 border color\n\nThis element has a border color of #33ffc5.\n\n``<div style=\"border:1px solid #33ffc5;\">Content here</div>``\nCSS codes\n``.text {color:#33ffc5;}``\n``.background {background-color:#33ffc5;}``\n``.border {border:1px solid #33ffc5;}``\n\n# Shades and Tints of #33ffc5\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000c08 is the darkest color, while #f7fffd is the lightest one.\n\n• #000c08\n``#000c08` `rgb(0,12,8)``\n• #001f16\n``#001f16` `rgb(0,31,22)``\n• #003325\n``#003325` `rgb(0,51,37)``\n• #004733\n``#004733` `rgb(0,71,51)``\n• #005a41\n``#005a41` `rgb(0,90,65)``\n• #006e4f\n``#006e4f` `rgb(0,110,79)``\n• #00815d\n``#00815d` `rgb(0,129,93)``\n• #00956b\n``#00956b` `rgb(0,149,107)``\n• #00a979\n``#00a979` `rgb(0,169,121)``\n• #00bc87\n``#00bc87` `rgb(0,188,135)``\n• #00d095\n``#00d095` `rgb(0,208,149)``\n• #00e4a3\n``#00e4a3` `rgb(0,228,163)``\n• #00f7b1\n``#00f7b1` `rgb(0,247,177)``\n• #0cffba\n``#0cffba` `rgb(12,255,186)``\n• #1fffbf\n``#1fffbf` `rgb(31,255,191)``\n• #33ffc5\n``#33ffc5` `rgb(51,255,197)``\n• #47ffcb\n``#47ffcb` `rgb(71,255,203)``\n• #5affd0\n``#5affd0` `rgb(90,255,208)``\n• #6effd6\n``#6effd6` `rgb(110,255,214)``\n• #81ffdb\n``#81ffdb` `rgb(129,255,219)``\n• #95ffe1\n``#95ffe1` `rgb(149,255,225)``\n• #a9ffe6\n``#a9ffe6` `rgb(169,255,230)``\n• #bcffec\n``#bcffec` `rgb(188,255,236)``\n• #d0fff2\n``#d0fff2` `rgb(208,255,242)``\n• #e4fff7\n``#e4fff7` `rgb(228,255,247)``\n• #f7fffd\n``#f7fffd` `rgb(247,255,253)``\nTint Color Variation\n\n# Tones of #33ffc5\n\nA tone is produced by adding gray to any pure hue. In this case, #91a19c is the less saturated color, while #33ffc5 is the most saturated one.\n\n• #91a19c\n``#91a19c` `rgb(145,161,156)``\n• #89a9a0\n``#89a9a0` `rgb(137,169,160)``\n• #81b1a3\n``#81b1a3` `rgb(129,177,163)``\n• #7ab8a7\n``#7ab8a7` `rgb(122,184,167)``\n• #72c0aa\n``#72c0aa` `rgb(114,192,170)``\n``#6ac8ad` `rgb(106,200,173)``\n• #62d0b1\n``#62d0b1` `rgb(98,208,177)``\n``#5ad8b4` `rgb(90,216,180)``\n• #52e0b7\n``#52e0b7` `rgb(82,224,183)``\n• #4be7bb\n``#4be7bb` `rgb(75,231,187)``\n• #43efbe\n``#43efbe` `rgb(67,239,190)``\n• #3bf7c2\n``#3bf7c2` `rgb(59,247,194)``\n• #33ffc5\n``#33ffc5` `rgb(51,255,197)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #33ffc5 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5635963,"math_prob":0.9184259,"size":3687,"snap":"2021-31-2021-39","text_gpt3_token_len":1622,"char_repetition_ratio":0.13711648,"word_repetition_ratio":0.011049724,"special_character_ratio":0.53241116,"punctuation_ratio":0.22732492,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98480517,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T01:57:07Z\",\"WARC-Record-ID\":\"<urn:uuid:8f9cd635-644c-4eae-b9d8-403c36773e66>\",\"Content-Length\":\"36176\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:01e2be89-7024-4d3b-93e3-921e5c5b4a22>\",\"WARC-Concurrent-To\":\"<urn:uuid:5814aca2-2153-457d-a91e-fc0a1979c5f2>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/33ffc5\",\"WARC-Payload-Digest\":\"sha1:ZVVSDW4EBE2LDBH5DXKCQ7NU6LSFPNCC\",\"WARC-Block-Digest\":\"sha1:5PVA33VYTOGNSMQY5VHWVW6GMXCOJ2TA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056120.36_warc_CC-MAIN-20210918002951-20210918032951-00332.warc.gz\"}"}
https://www.nagwa.com/en/videos/809168017565/
[ "# Video: Finding the Unknown Components of Three Forces in Equilibrium Acting on a Point\n\nThe forces 𝐹₁ = 2𝑖 + 7𝑗, 𝐹₂ = 𝑎𝑖 − 6𝑗 and 𝐹₃ = 6𝑖 + (𝑏 + 8)𝑗 act on a particle, where 𝑖 and 𝑗 are two perpendicular unit vectors. Given that the system is in equilibrium, determine the values of 𝑎 and 𝑏.\n\n02:22\n\n### Video Transcript\n\nThe forces 𝐹 one, which is equal to two 𝑖 plus seven 𝑗; 𝐹 two, which is equal to 𝑎𝑖 minus six 𝑗; and 𝐹 three, which is equal to six 𝑖 plus 𝑏 plus eight 𝑗, act on a particle, where 𝑖 and 𝑗 are two perpendicular unit vectors. Given that the system is in equilibrium, determine the values of 𝑎 and 𝑏.\n\nIf the system is in equilibrium, then the resultant force must equal zero. This means that the 𝑖-components — two 𝑖, 𝑎𝑖, and six 𝑖 — must equal zero. The coefficients are two, 𝑎, and six. Therefore, two plus 𝑎 plus six equals zero. Two plus six is equal to eight. Therefore, 𝑎 plus eight is equal to zero. Subtracting eight from both sides of the equation gives us a value of 𝑎 of negative eight. This means that the force 𝐹 two is negative eight 𝑖 minus six 𝑗.\n\nAs the 𝑗-components must also equal zero, seven 𝑗 minus six 𝑗 and 𝑏 plus eight 𝑗 must equal zero. Once again, the coefficients are seven, negative six, and 𝑏 plus eight. Seven minus six plus 𝑏 plus eight equals zero. Seven take away six is one. One plus eight is equal to nine. Therefore, 𝑏 plus nine equals zero. Subtracting nine from both sides of this equation gives us a value of 𝑏 equal to negative nine.\n\nThis means that the force 𝐹 three is equal to six 𝑖 plus negative nine plus eight 𝑗. Negative nine plus eight is negative one. Therefore, 𝐹 three is six 𝑖 minus one 𝑗. If the three forces 𝐹 one, 𝐹 two, and 𝐹 three are acting on a particle where the system is in equilibrium, then the value of 𝑎 is negative eight and the value of 𝑏 is negative nine." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93154484,"math_prob":0.9997775,"size":1536,"snap":"2020-10-2020-16","text_gpt3_token_len":452,"char_repetition_ratio":0.1808094,"word_repetition_ratio":0.069536425,"special_character_ratio":0.23046875,"punctuation_ratio":0.12820514,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999081,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-21T03:31:26Z\",\"WARC-Record-ID\":\"<urn:uuid:4cab767d-aaae-4664-aec2-24f4459b0f38>\",\"Content-Length\":\"25612\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ad0a3b33-1ad5-4cf5-9cfa-232b845256f1>\",\"WARC-Concurrent-To\":\"<urn:uuid:47639dd8-b193-4a7f-be8f-34b675b7ce92>\",\"WARC-IP-Address\":\"34.196.176.227\",\"WARC-Target-URI\":\"https://www.nagwa.com/en/videos/809168017565/\",\"WARC-Payload-Digest\":\"sha1:I6RKH3NX5EQKHWHC7KRITAOCUSMLBQ44\",\"WARC-Block-Digest\":\"sha1:UHD4GHCJUUS3ZEVGX2ZRSETGFEIY57HA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145438.12_warc_CC-MAIN-20200221014826-20200221044826-00041.warc.gz\"}"}
https://astronomy.stackexchange.com/questions/1860/how-fast-is-the-universe-expanding
[ "# How fast is the universe expanding?\n\nI've hear several theories stating that the universe is expanding faster than the speed of light, others claim that the universe expands faster the further away you measure it. Which of this is correct and how do you prove it (mathematically)? Furthermore, does this correctly imply, then, that eventually galaxies will be so far away, and moving so fast, that we will never see them again?\n\nBoth are correct, although the first can be further explained a bit. I won't give you a mathematical proof, though; Instead I'll play with characters.\n\nLet's assume, for the sake of gedankenexperiment, that:\n\n• The speed of light is 5 characters/sec;\n• our universe is expanding at 1 character per 5 characters per sec.\n\nThis is our current universe, and we launch a photon from body A aiming at body E. (Space generated each second is marked with a # symbol.)\n\nT=0s A----B----C----D----E Bodies\n* Photon - 19 chars to E\nT=1s A--#--B--#--C--#--D--#--E\n--#--* 17 chars to E\nT=2s A--#---B-#----C#-----#D----#-E\n--#-----#-* 17 chars to E\nT=3s A--#----B#-----#-C---#----D#-----#-E\n--#-----#-----#* 18 chars to E\nT=4s A--#-----#B----#----C#-----#---D-#-----#--E\n--#-----#-----#-----#* 19 chars to E\n\n\nI know, the graph isn't too granular, and the space generation isn't evenly distribute. I apologize for that, but it's for the sake of demonstration.\n\nNotice that at T=2 some space is already generated between A and the photon. But that's irrelevant: E is sitting at the event horizon, and will never be reached by the photon, because the amount of space being generated between photon * and body E is equal, or superior, to the speed of light.\n\nGiven any positive expansion rate, there will be an event horizon - a point where the accumulated dilation of space is more than the amount of space a particle moving at the speed of light can travel.\n\nA galaxy sitting initially at say, 1000 characters from A at T=0, will be at staggering 1200C at T=1 - that's 40 times our speed of light.\n\nAt T=16s, B (that was passed by the original photon at T=1) will be sitting exactly where E was relative to A, and at T=17 will fall out of our event horizon. A new photon emitted from A will never reach it.\n\n• I'm being very suspitions about this. It reminds me the most famous Zeno's paradox about Achilles and the tortoise. I'm quite curious what function of distance in time would look like. Sounds like some weird periodical function. Mar 1 '14 at 14:01\n\nThis was supposed to be more a comment than an answer, but since I can not comment due to reputation lack I will spend few words here. First of all, the \"theories\" you mentioned are not inconsistent each other.\n\nWe know the simple Hubble law:\n\n$v = H D$\n\nwhere $v$ is the receding velocity of a galaxy, $H$ is the Hubble constant, $D$ is the distance of the considered galaxy. This means that the further is the galaxy you observe, the faster this galaxy is receding. At some point it will become faster than light (or superluminal). At some point, the space between us and the light emitter will grow so fast that the light can never reach us, and this will make those objects invisible. Indeed, all we can observe is by definition our observable universe. This is growing with time, but still some objects will stay invisible forever. 
The very first thing you mention, I suppose you should put it more correctly, since it would be better to talk about the expansion rate of the universe (instead of velocity), and this is given by the Hubble constant itself, around $70 km/s/Mpc$. Take care of the units of this \"constant\", and you will grasp why this argument is not so intuitive. Please, wait for more experienced people, since this was just a very rough summary of cosmology concepts.\n\n• And relativity does not apply here? I suppose that at some moment, the objects should appear to have the same speed regardless of the distance, thanks to the relativity concept. Mar 1 '14 at 13:59\n\nThe universe expands with about 70 km/s per Mega parsec, due to Hubble's law. This means that the velocity at which two objects move away from each other is proportional to their distance. At one Mega parsec it's 70 km/s. That's an average value, which doesn't need to hold for each single object.\n\nBy dividing the speed of light of about 300,000 km/s by the Hubble constant, you get that objects further away than about 4300 Mega parsecs move away from each other faster than the speed of light.\n\nThe expansion is measured by the redshift of spectra, meaning absorption and emission lines are shifted. Together with distance estimates, based on several methods, the expansion per distance, i.e. the Hubble constant, can be estimated.\n\nThe observable universe is 879,873,000,000,000,000,000,000 kilometers across. Using the mean measured Hubble Constant of 75 kilometers per second per megaparsec (30,800,000,000,000,000,000 km) for the expansion of space, you can then calculate the rate of expansion for the entire universe, and that number is 2,113,636 kilometers per second. That says the universe, across its entire diameter, is expanding at 7.05 (+/-2.33) times the speed of light.\n\n• Diameter of the universe 93,000,000,000 LY; Megaparsec 3,300,000 LY\n• ((Universe Diameter / Megaparsec = 28,182) x 75 kps) = 2,113,636 kps, or 7.05 x the speed of light.\n• This means the universe expands 0.0000000000000000000008 % every earth year.\n\nLocally this works out to ...\n\nThe Milky Way should experience 3.46 kilometers per second expansion, or 108,988,052 kilometers per year (using 75 kps - within margin of error).\n\nThere are a few papers now reporting an indirect but measurable increase in space locally. One peer-reviewed paper published in 2015 in the Gravitation and Cosmology periodical states the measured effect on Earth's orbit is about 5 meters per year (about 1/2 of the Hubble constant): Manifestations of dark energy in the solar system\n\n• Scientific notation of numbers would make this post more readable. Mar 15 '17 at 11:50\n• The article you link to is a proceeding, not a peer reviewed paper. In fact the author has virtually no citations for any of his papers, except self-citations. It is generally believed that gravity prevents systems such as solar systems and galaxies from expanding.\n– pela\nMar 16 '17 at 7:39" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87399644,"math_prob":0.91518354,"size":1781,"snap":"2021-43-2021-49","text_gpt3_token_len":506,"char_repetition_ratio":0.1412493,"word_repetition_ratio":0.0,"special_character_ratio":0.33127457,"punctuation_ratio":0.09497207,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9567995,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T21:05:43Z\",\"WARC-Record-ID\":\"<urn:uuid:15048dd2-e4a2-4fa7-8c8f-abd38bb249ef>\",\"Content-Length\":\"168276\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d4834923-d508-422c-bf71-0ff424c0f2c9>\",\"WARC-Concurrent-To\":\"<urn:uuid:613fdf3f-7576-44eb-8817-20ee76b049e8>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://astronomy.stackexchange.com/questions/1860/how-fast-is-the-universe-expanding\",\"WARC-Payload-Digest\":\"sha1:CJHIF5ZJ4SO5UQPYVOAOZUPYPTRMSOUB\",\"WARC-Block-Digest\":\"sha1:CLKLSBA2QDNVCC4ZYZLAGWD4GSQNQQZZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363418.83_warc_CC-MAIN-20211207201422-20211207231422-00596.warc.gz\"}"}
https://jinlongli.me/2017/04/28/effect-of-water-table-on-safe-bearing-capacity-of-soil/
[ "# EFFECT OF WATER TABLE ON SAFE BEARING CAPACITY OF SOIL\n\nThe position of ground water has a significant effect on the bearing capacity of soil. Presence of water table at a depth less than the width of the foundation from the foundation bottom will reduce the bearing capacity of the soil.\n\nThe bearing capacity equation incorporating the ground water table correction factors is given below.", null, "Where", null, "= Ultimate bearing capacity of soil in", null, "c = Cohesion of soil in", null, "Nc, Nq, N? are Therzaghi’s bearing capacity constants.", null, "= depth of foundation in meters\n\nB = Width of the foundation in meters", null, "and", null, "are water table correction factors\n\nThe water table correction factors can be obtained from the equations given below.\n\n1. When the water table is below the base of foundation at a distance ‘b’ the correction", null, "is given by the following equation", null, ";\n\nwhen b =0,", null, "= 0.5\n\n2. When water table further rises above base of foundation, correction factor", null, "comes in to action, which is given by the following equation.", null, "when a =", null, ",", null, "= 0.5", null, "Fig 1: Showing the influence of water table below foundation\n\nThe use of these equations is explained with the help of the Fig 1.\n\nFirst let us begin with the correction factor", null, "When water table is at a depth greater than or equals to the width of foundation, from the foundation bottom, the correction factor", null, "is 1. i.e. there is no effect on the safe bearing capacity.\n\nLet us assume water table started rising then the effect of", null, "comes in to action. The correction factor will be less than 1. When the water table reaches the bottom of foundation, i.e, when b = 0,", null, "= 0.5.\n\nNow let us assume water table further raises, above the depth of foundation. When the depth of water table is just touching the bottom of foundation, a = 0. This means", null, "= 1.0. On further rising, when the water table reaches the ground level, Rw1 becomes 0.5.\n\nHence, the assessment of ground water level is an important aspect in any site investigation." ]
[ null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image00244.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image00438.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image00631.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image006110.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image00828.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image01025.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image01227.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image012110.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image01417.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image01228.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image010110.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image01611.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image008110.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image01026.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image01811.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image01231.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image01241.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image01251.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image01262.jpg", null, "https://theconstructor.org/wp-content/uploads/2010/10/clip_image01031.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90715873,"math_prob":0.99595386,"size":1870,"snap":"2023-14-2023-23","text_gpt3_token_len":417,"char_repetition_ratio":0.19506967,"word_repetition_ratio":0.018126888,"special_character_ratio":0.22192514,"punctuation_ratio":0.122015916,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9973215,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-31T20:13:17Z\",\"WARC-Record-ID\":\"<urn:uuid:266c482d-9f29-41fd-85b9-3bb3c36dd3f2>\",\"Content-Length\":\"98257\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8fed0800-7ec9-425e-a1fb-b01e802425b8>\",\"WARC-Concurrent-To\":\"<urn:uuid:550da941-785c-4d03-83f7-87406510d20d>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://jinlongli.me/2017/04/28/effect-of-water-table-on-safe-bearing-capacity-of-soil/\",\"WARC-Payload-Digest\":\"sha1:XM4LU3Q3TXJKKCRXJG6AHUTKWWHXIVSD\",\"WARC-Block-Digest\":\"sha1:OTBTODXM6PZ2BP6RBSN4UJ24D44QIKMG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224647409.17_warc_CC-MAIN-20230531182033-20230531212033-00773.warc.gz\"}"}
https://www.colorhexa.com/00dd50
[ "# #00dd50 Color Information\n\nIn a RGB color space, hex #00dd50 is composed of 0% red, 86.7% green and 31.4% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 0% magenta, 63.8% yellow and 13.3% black. It has a hue angle of 141.7 degrees, a saturation of 100% and a lightness of 43.3%. #00dd50 color hex could be obtained by blending #00ffa0 with #00bb00. Closest websafe color is: #00cc66.\n\n• R 0\n• G 87\n• B 31\nRGB color chart\n• C 100\n• M 0\n• Y 64\n• K 13\nCMYK color chart\n\n#00dd50 color description : Pure (or mostly pure) cyan - lime green.\n\n# #00dd50 Color Conversion\n\nThe hexadecimal color #00dd50 has RGB values of R:0, G:221, B:80 and CMYK values of C:1, M:0, Y:0.64, K:0.13. Its decimal value is 56656.\n\nHex triplet RGB Decimal 00dd50 `#00dd50` 0, 221, 80 `rgb(0,221,80)` 0, 86.7, 31.4 `rgb(0%,86.7%,31.4%)` 100, 0, 64, 13 141.7°, 100, 43.3 `hsl(141.7,100%,43.3%)` 141.7°, 100, 86.7 00cc66 `#00cc66`\nCIE-LAB 77.453, -72.909, 55.055 27.303, 52.289, 16.243 0.285, 0.546, 52.289 77.453, 91.361, 142.942 77.453, -71.394, 79.183 72.311, -59.148, 37.3 00000000, 11011101, 01010000\n\n# Color Schemes with #00dd50\n\n• #00dd50\n``#00dd50` `rgb(0,221,80)``\n• #dd008d\n``#dd008d` `rgb(221,0,141)``\nComplementary Color\n• #1fdd00\n``#1fdd00` `rgb(31,221,0)``\n• #00dd50\n``#00dd50` `rgb(0,221,80)``\n• #00ddbf\n``#00ddbf` `rgb(0,221,191)``\nAnalogous Color\n• #dd001e\n``#dd001e` `rgb(221,0,30)``\n• #00dd50\n``#00dd50` `rgb(0,221,80)``\n• #bf00dd\n``#bf00dd` `rgb(191,0,221)``\nSplit Complementary Color\n• #dd5000\n``#dd5000` `rgb(221,80,0)``\n• #00dd50\n``#00dd50` `rgb(0,221,80)``\n• #5000dd\n``#5000dd` `rgb(80,0,221)``\n• #8ddd00\n``#8ddd00` `rgb(141,221,0)``\n• #00dd50\n``#00dd50` `rgb(0,221,80)``\n• #5000dd\n``#5000dd` `rgb(80,0,221)``\n• #dd008d\n``#dd008d` `rgb(221,0,141)``\n• #009134\n``#009134` `rgb(0,145,52)``\n• #00aa3e\n``#00aa3e` `rgb(0,170,62)``\n• #00c447\n``#00c447` `rgb(0,196,71)``\n• #00dd50\n``#00dd50` `rgb(0,221,80)``\n• #00f759\n``#00f759` `rgb(0,247,89)``\n• #11ff67\n``#11ff67` `rgb(17,255,103)``\n• #2bff77\n``#2bff77` `rgb(43,255,119)``\nMonochromatic Color\n\n# Alternatives to #00dd50\n\nBelow, you can see some colors close to #00dd50. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #00dd19\n``#00dd19` `rgb(0,221,25)``\n• #00dd2b\n``#00dd2b` `rgb(0,221,43)``\n• #00dd3e\n``#00dd3e` `rgb(0,221,62)``\n• #00dd50\n``#00dd50` `rgb(0,221,80)``\n• #00dd62\n``#00dd62` `rgb(0,221,98)``\n• #00dd75\n``#00dd75` `rgb(0,221,117)``\n• #00dd87\n``#00dd87` `rgb(0,221,135)``\nSimilar Colors\n\n# #00dd50 Preview\n\nThis text has a font color of #00dd50.\n\n``<span style=\"color:#00dd50;\">Text here</span>``\n#00dd50 background color\n\nThis paragraph has a background color of #00dd50.\n\n``<p style=\"background-color:#00dd50;\">Content here</p>``\n#00dd50 border color\n\nThis element has a border color of #00dd50.\n\n``<div style=\"border:1px solid #00dd50;\">Content here</div>``\nCSS codes\n``.text {color:#00dd50;}``\n``.background {background-color:#00dd50;}``\n``.border {border:1px solid #00dd50;}``\n\n# Shades and Tints of #00dd50\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000502 is the darkest color, while #f1fff6 is the lightest one.\n\n• #000502\n``#000502` `rgb(0,5,2)``\n• #001909\n``#001909` `rgb(0,25,9)``\n• #002c10\n``#002c10` `rgb(0,44,16)``\n• #004017\n``#004017` `rgb(0,64,23)``\n• #00541e\n``#00541e` `rgb(0,84,30)``\n• #006725\n``#006725` `rgb(0,103,37)``\n• #007b2c\n``#007b2c` `rgb(0,123,44)``\n• #008f34\n``#008f34` `rgb(0,143,52)``\n• #00a23b\n``#00a23b` `rgb(0,162,59)``\n• #00b642\n``#00b642` `rgb(0,182,66)``\n• #00c949\n``#00c949` `rgb(0,201,73)``\n• #00dd50\n``#00dd50` `rgb(0,221,80)``\n• #00f157\n``#00f157` `rgb(0,241,87)``\n• #05ff60\n``#05ff60` `rgb(5,255,96)``\n• #19ff6c\n``#19ff6c` `rgb(25,255,108)``\n• #2cff79\n``#2cff79` `rgb(44,255,121)``\n• #40ff85\n``#40ff85` `rgb(64,255,133)``\n• #54ff92\n``#54ff92` `rgb(84,255,146)``\n• #67ff9e\n``#67ff9e` `rgb(103,255,158)``\n• #7bffab\n``#7bffab` `rgb(123,255,171)``\n• #8fffb7\n``#8fffb7` `rgb(143,255,183)``\n• #a2ffc4\n``#a2ffc4` `rgb(162,255,196)``\n• #b6ffd0\n``#b6ffd0` `rgb(182,255,208)``\n• #c9ffdd\n``#c9ffdd` `rgb(201,255,221)``\n• #ddffe9\n``#ddffe9` `rgb(221,255,233)``\n• #f1fff6\n``#f1fff6` `rgb(241,255,246)``\nTint Color Variation\n\n# Tones of #00dd50\n\nA tone is produced by adding gray to any pure hue. In this case, #66776c is the less saturated color, while #00dd50 is the most saturated one.\n\n• #66776c\n``#66776c` `rgb(102,119,108)``\n• #5e806a\n``#5e806a` `rgb(94,128,106)``\n• #558867\n``#558867` `rgb(85,136,103)``\n• #4d9165\n``#4d9165` `rgb(77,145,101)``\n• #449963\n``#449963` `rgb(68,153,99)``\n• #3ca260\n``#3ca260` `rgb(60,162,96)``\n• #33aa5e\n``#33aa5e` `rgb(51,170,94)``\n• #2bb35c\n``#2bb35c` `rgb(43,179,92)``\n• #22bb59\n``#22bb59` `rgb(34,187,89)``\n• #1ac457\n``#1ac457` `rgb(26,196,87)``\n• #11cc55\n``#11cc55` `rgb(17,204,85)``\n• #09d552\n``#09d552` `rgb(9,213,82)``\n• #00dd50\n``#00dd50` `rgb(0,221,80)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #00dd50 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5097011,"math_prob":0.88055676,"size":3687,"snap":"2020-24-2020-29","text_gpt3_token_len":1595,"char_repetition_ratio":0.13521586,"word_repetition_ratio":0.010989011,"special_character_ratio":0.55438024,"punctuation_ratio":0.23146068,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98680687,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-09T21:22:57Z\",\"WARC-Record-ID\":\"<urn:uuid:5e0931fc-4d11-49d0-812b-87ab1285ce24>\",\"Content-Length\":\"36246\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e0cb7468-6a7b-45e9-b4c6-98bc0800dd01>\",\"WARC-Concurrent-To\":\"<urn:uuid:17f96514-d40e-4acb-a6f3-e195b28cf9e7>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/00dd50\",\"WARC-Payload-Digest\":\"sha1:KMYSP4HLDSSU3T4J5MMNQI3YUKWWRT5Y\",\"WARC-Block-Digest\":\"sha1:ILMPRG6MW3OZLGJ5EF5OUVYRX33CJKCR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655901509.58_warc_CC-MAIN-20200709193741-20200709223741-00181.warc.gz\"}"}
https://threesixty360.wordpress.com/2008/02/10/root-extraction-part-i-square-roots/
[ "## Root extraction, part I: square roots\n\nby", null, "I recently discussed the traditional algorithms for computing square and cube roots in my History of Math class.   Our reading, on mathematics in ancient China, gave both algorithms as a set of rules for manipulating number rods.  For me, it was fascinating to see past the text: the rules as given would transfer directly to an abacus/soroban calculation, and were essentially the same as the rules that prior generations of American schoolchildren would have been drilled on in school.\n\nMy students (mostly high school math teachers) found the book’s explanation of the method obscure;  the key is to view the process geometrically, rather than as a mechanical set of rules for manipulating digits.\n\nI make no claim of originality in what follows; I offer it here in part because I can’t find any lucent discussions along these lines on the web.\n\nExample:  Find", null, "$\\sqrt{388129}$.\n\nStart by noting that  100²=10000,  while 1000²=1000000, so the root lies between 100 and 1000.  Further experimentation convinces us that the root is in the six-hundreds.\n\nDraw a square whose area is to be 388129, and whose sides have length 600 something.  Our goal is to determine the tens place of the root, so we label each side as having length 600 and 10d, where d is a digit from 0 to 9.", null, "The area of the red region is 360000; the combined area of the green regions is 12000d.  The area of the remaining corner would be", null, "$100d^2$, which is quite small in comparison to the other regions.  A reasonable guess for the value of d (the next digit in the square root) would come from setting 12000d to being close to but less than 28129 (the total area minus the 360000 we’ve committed ourselves to having).   d = 2 looks appropriate, and if d = 2, the L shaped region of the square has area 12000+12000+400=24400, giving a total area (so far) of 384400, the square of 620.\n\nWe repeat the process, again with a square whose area wants to be 388129, but now whose sides have lengths 620 plus d (where d will be the ones place of the root).", null, "We’ve found the area of the red square to be 384400; the combined area of the green regions is 1240d, and again we ignore the extremely small", null, "$d^2$;  we want the area of the L shaped region to be 3729.  It looks like d=3 is a good guess, and when d=3 the actual area of the L-shaped region is 1860+1860+9 =3729!  We’ve successfully found that", null, "$\\sqrt{388129}=623$.\n\nWe’re not always so lucky with our digit guesses; sometimes you have to backtrack:\n\nExample:  Find the first four decimal places of", null, "$\\sqrt{2}$.\n\nOnce we deduce that the lead digit of the answer is 1, we draw a square whose area will be 2, and whose sides are broken up into a portion of length 1 and a remainder.  Seeking the tenths digit for the length of the side, we assume the side has length 1 + d(0.1), and proceed as before.", null, "The red region has area 1; the two green regions have a total area of 0.2d, and for convenience we’ll ignore the relatively small area in the bottom right corner.  Comparing 0.2d to the remaining area 1 (= 2 – 1), at first glance one expects to set d=5.  
But when d=5, the area of the L shaped region is 0.5+0.5+0.25 = 1.25, which yields a total area that is too large.\n\nSo we revise our guess, set d=4, and now find the L shaped region has area 0.4+0.4+0.16 = 0.96 < 1.\n\nWe repeat the cycle, with our square now having width 1.4 + d(0.01), where now d will be the digit in the hundredths place.", null, "Our previous calculation shows that the red square has area 1.96; the green regions a total area of 0.028d, and again for convenience we neglect the lower right corner.   Comparing 0.028d with 0.04 (=2 – 1.96), we see that d=1 is the largest possible digit to use while keeping the area no more than our goal.  With that choice for d, the area of the L shaped region is 0.014+0.014+0.0001=0.0281, and the total area of the entire square is now 1.96+0.0281=1.9881.\n\nFor the next iteration, we set the width to be 1.41 + d(0.001) in order to determine the digit in the thousandths place.", null, "The previous calculation shows that the area of the red region is 1.9881; the green regions have a total area of 0.00282d, and the right corner is negligible.  We compare 0.00282d to 0.0119, and find that d=4 should work.  Setting d=4, the L shaped region has an area of 0.00564+0.00564+0.000016=0.011296, and hence the entire square has area 1.9881+0.011296= 1.999396.\n\nOne continues in this way indefinitely, finding an additional digit of", null, "$\\sqrt{2}$  at each iteration.\n\nThis geometric process is equivalent to the traditional method of breaking the radicand into digit pairs, doubling the root found so far, dividing the doubled root into the remainder, etc….\n\nComing up in Part II:  extending this method to cube roots\n\n### 6 Responses to “Root extraction, part I: square roots”\n\n1. Science After Sunclipse Says:\n\nSquare Roots by Hand\n\nA little while back, several of my fellow math-and-science bloggers and I got into a discussion of a particularly hare-brained way to reform math education, and I mentioned that nobody in my generation seems to have learned how to take square roots by …\n\n2.", null, "Root extraction, part II: cube roots « 360 Says:\n\n[…] 360 12 tables, 24 chairs, and plenty of chalk « Root extraction, part I: square roots […]\n\n3. Carnival of Mathematics 1000 « JD2718 Says:\n\n[…] roots (for the brave) over at Blog 360. (Also, for the brave and non-brave alike, discussion of how to extract square roots). 00 – Ternary Geometry and Ternary Geometry II from Arcadian Functor, a New Zealand physics blog […]\n\n4.", null, "john Says:\n\nhow can I solve the equation\n\n5 to the power of x = 390211\n\n5.", null, "TwoPi Says:\n\nThis is a very different situation, since the unknown is your exponent, not the base. (We don’t know what kind of roots to apply, in effect — square roots, cube roots, xth roots?)\n\nFurthermore, since 390211 isn’t even an integer multiple of five, much less an integer power of five, we can’t use elementary methods such as prime factorization to find x.\n\nThat leaves one viable path: logarithms.\n\nI’ll leave the rest to your own devices….\n\n6.", null, "SasQ Says:\n\n@john: 5^x = 390211\nJust take a logarithm to the base of 5 from both sides to cancel the base and get the x back:\nlog_5(5^x) = log_5(390211)\nx = log_5(390211)\nSo now the problem is another question (a bit simpler):\nWhat power of 5 gives us 390211?\n5^7 is too few: 78125.\n5^8 is too much: 390625, but very close. If you try something between, you’ll get closer and closer approximation. 
It will be something around 7.999341135…\nSo the answer is: 5^7.999341135… = 390211" ]
[ null, "https://threesixty360.files.wordpress.com/2008/02/taraxacum-officinalis-plant.jpg", null, "https://s0.wp.com/latex.php", null, "https://threesixty360.files.wordpress.com/2008/02/sqra.jpg", null, "https://s0.wp.com/latex.php", null, "https://threesixty360.files.wordpress.com/2008/02/sqrb.jpg", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://threesixty360.files.wordpress.com/2008/02/sqrt2a.jpg", null, "https://threesixty360.files.wordpress.com/2008/02/sqrt2b.jpg", null, "https://threesixty360.files.wordpress.com/2008/02/sqrt2c.jpg", null, "https://s0.wp.com/latex.php", null, "https://secure.gravatar.com/blavatar/97ee2c898c6ee488b119d15bd629397a", null, "https://2.gravatar.com/avatar/841e0fbfb5fea89d4a53b47e564c4779", null, "https://0.gravatar.com/avatar/921f0a0f57d6cfa073889ead2a9495ed", null, "https://1.gravatar.com/avatar/db0ffc046fd218d4a2a0d7314e6c1d98", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9202019,"math_prob":0.9822984,"size":6432,"snap":"2023-14-2023-23","text_gpt3_token_len":1729,"char_repetition_ratio":0.12912259,"word_repetition_ratio":0.031650983,"special_character_ratio":0.30363807,"punctuation_ratio":0.1296426,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99421024,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,4,null,null,null,4,null,null,null,4,null,null,null,null,null,null,null,4,null,4,null,4,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-20T10:49:18Z\",\"WARC-Record-ID\":\"<urn:uuid:8dcbc3a4-118c-4963-b360-424a3b894155>\",\"Content-Length\":\"93643\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:15d16adc-a3d7-4530-8e3a-3436e7d06acd>\",\"WARC-Concurrent-To\":\"<urn:uuid:faa74637-a71a-4086-8730-c26f2f98e452>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://threesixty360.wordpress.com/2008/02/10/root-extraction-part-i-square-roots/\",\"WARC-Payload-Digest\":\"sha1:ZB4DHVHXOKXME3IZGQD7EYT5ATDUNAUK\",\"WARC-Block-Digest\":\"sha1:IWE5W4REDEJZ6PAIZNSE4PSFN6OB3MXC\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943471.24_warc_CC-MAIN-20230320083513-20230320113513-00762.warc.gz\"}"}
https://www.wisdomjobs.com/e-university/discrete-mathematics-tutorial-471/discrete-mathematics-spanning-trees-25707.html
[ "# Discrete Mathematics Spanning Trees - Discrete Mathematics\n\n## What is a spanning tree in Discrete Mathematics?\n\nThe tree which includes all the vertices of the connected undirected graph G very minimally is known as a spanning tree. A single graph can have many spanning trees.\n\n### Example", null, "", null, "## What is Minimum Spanning Tree?\n\nWhen the assigned weight is less than or equal to the weight of all possible spanning tree of weighted, connected and undirected graph G, such a spanning tree is called as Minimum Spanning Tree (MST). The weight of the spanning tree is calculated by the total of all the weights that are assigned to each edge of the spanning tree.\n\n### Example", null, "## What is Kruskal's Algorithm?\n\nAn algorithm that is used for finding the minimum spanning tree of a connected weighted graph is known as Kruskal’s algorithm. A tree among the graph is identified which includes every vertex and where the total weight of all the edges in the tree is less than or equal to the spanning tree.\n\n### Algorithm\n\nStep 1 – All the edges of the given graph G(V,E) are arranged in the non-decreasing order in accordance with the weight of the edge.\n\nStep 2 – The smallest weighted edge from the graph is chosen and is checked if it forms a spanning tree earlier.\n\nStep 3 – This edge is included to the spanning tree if there is no cycle, or else it is discarded.\n\nStep 4 − Repeat Step 2 and Step 3 until (V−1) number of edges are left in the spanning tree.\n\n### Problem\n\nFor instance, identify the minimum spanning tree from the following graph G by using the Kruskal’s algorithm.", null, "### Solution\n\nThe following table is constructed from the above graph:\n\n Edge No. Vertex Pair Edge Weight E1 (a, b) 20 E2 (a, c) 9 E3 (a, d) 13 E4 (b, c) 1 E5 (b, e) 4 E6 (b, f) 5 E7 (c, d) 2 E8 (d, e) 3 E9 (d, f) 14\n\nIn accordance with the weight of the Edge, the table is rearranged in ascending order.\n\n Edge No. Vertex Pair Edge Weight E4 (b, c) 1 E7 (c, d) 2 E8 (d, e) 3 E5 (b, e) 4 E6 (b, f) 5 E2 (a, c) 9 E3 (a, d) 13 E9 (d, f) 14 E1 (a, b) 20", null, "", null, "", null, "As all the edges are covered in the last figure, the algorithm is stopped and this is considered as the minimal spanning tree and the total weight of the spanning tree is (1+2+3+5+9)=20.\n\n## What is Prim's Algorithm?\n\nThe minimum spanning tree for a connected weighted graph by Prim’s algorithm, developed by the mathematicians Vojtech Jarnik and Robert C. Prim. In the Graph, the tree that includes every vertex and total weight of all the edges in the tree is less than or equal to every possible spanning tree. 
Prim’s algorithm works faster on dense graphs.\n\n### Algorithm\n\n• Initialize the minimum spanning tree with a single vertex, chosen arbitrarily from the graph.\n• Repeat the following two steps until all the vertices are included in the tree.\n• Select an edge that connects the tree with a vertex not yet in the tree, such that the weight of the edge is minimal and including the edge does not form a cycle.\n• Add the selected edge and the vertex it connects to the tree.\n\n### Problem\n\nFor instance, consider the following graph G and identify the minimum spanning tree using Prim’s algorithm", null, "### Solution\n\nIt is started with vertex ‘a’.", null, "", null, "", null, "", null, "The above graph is the minimum spanning tree and the total weight is (1+2+3+5+9)=20\n\nDiscrete Mathematics Topics" ]
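To make the steps concrete, here is a small Python sketch of Kruskal's algorithm (a union-find version; the helper names are my own, not from the tutorial) run on the edge list of the worked example above. It picks the edges (b, c), (c, d), (d, e), (b, f) and (a, c) and reports the same total weight of 20.

```python
def kruskal(vertices, edges):
    """edges is a list of (weight, u, v); returns the MST edges and their total weight."""
    parent = {v: v for v in vertices}

    def find(v):                              # union-find representative, with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst, total = [], 0
    for w, u, v in sorted(edges):             # Step 1: non-decreasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                          # Steps 2-3: keep the edge only if no cycle forms
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
        if len(mst) == len(vertices) - 1:     # Step 4: stop once the tree has V-1 edges
            break
    return mst, total

edges = [(20, 'a', 'b'), (9, 'a', 'c'), (13, 'a', 'd'), (1, 'b', 'c'), (4, 'b', 'e'),
         (5, 'b', 'f'), (2, 'c', 'd'), (3, 'd', 'e'), (14, 'd', 'f')]
print(kruskal('abcdef', edges))
# ([('b', 'c', 1), ('c', 'd', 2), ('d', 'e', 3), ('b', 'f', 5), ('a', 'c', 9)], 20)
```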
[ null, "https://www.wisdomjobs.com/userfiles/graph_in_span.jpg", null, "https://www.wisdomjobs.com/userfiles/spanning_tree(1).jpg", null, "https://www.wisdomjobs.com/userfiles/minimum_spanning_tree(1).jpg", null, "https://www.wisdomjobs.com/userfiles/kruskal_problem.jpg", null, "https://www.wisdomjobs.com/userfiles/kruskal_adding_vertex_edge.jpg", null, "https://www.wisdomjobs.com/userfiles/kruskal_adding_vertex_edge1.jpg", null, "https://www.wisdomjobs.com/userfiles/kruskal_adding_vertex_edge2.jpg", null, "https://www.wisdomjobs.com/userfiles/prim.jpg", null, "https://www.wisdomjobs.com/userfiles/prim_vertex_a_added.jpg", null, "https://www.wisdomjobs.com/userfiles/prim_vertex_c_b_added.jpg", null, "https://www.wisdomjobs.com/userfiles/prim_vertex_d_e_added.jpg", null, "https://www.wisdomjobs.com/userfiles/prim_vertex_f_added.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.885422,"math_prob":0.9751792,"size":3366,"snap":"2020-24-2020-29","text_gpt3_token_len":860,"char_repetition_ratio":0.15942891,"word_repetition_ratio":0.22292994,"special_character_ratio":0.26767677,"punctuation_ratio":0.0821485,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99619263,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-05T05:09:08Z\",\"WARC-Record-ID\":\"<urn:uuid:d284d99f-71b9-4ee9-a287-9bffc444c79f>\",\"Content-Length\":\"278359\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bbf465ed-884f-4e9a-9e47-d4c384f88d4d>\",\"WARC-Concurrent-To\":\"<urn:uuid:40b29ade-eede-46b4-bbc7-eb6517ab63f4>\",\"WARC-IP-Address\":\"139.59.66.94\",\"WARC-Target-URI\":\"https://www.wisdomjobs.com/e-university/discrete-mathematics-tutorial-471/discrete-mathematics-spanning-trees-25707.html\",\"WARC-Payload-Digest\":\"sha1:S2AU3VQWBHWN3K5565OJXA4PWJGY5SI5\",\"WARC-Block-Digest\":\"sha1:WDO27QYYVSAYYXVZIM4HUVCK4AV4ERYY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655886865.30_warc_CC-MAIN-20200705023910-20200705053910-00138.warc.gz\"}"}
http://electricalacademia.com/instrumentation-and-measurements/ohmmeter-basic-concepts-and-working-principle-ohmmeter-definition/
[ "Home / Instrumentation / Ohmmeter Basic Concepts and Working Principle\n\n# Ohmmeter Basic Concepts and Working Principle\n\nWant create site? Find Free WordPress Themes and plugins.\n\n## Ohmmeter Definition\n\nAn ohmmeter is an instrument used to measure the resistance. It is an instrument containing a voltage source and a meter directly calibrated in ohms.\n\n## Series Ohmmeter\n\nOne type of ohmmeter is the series ohmmeter, so called because the meter movement is in series with the source of emf and the unknown resistance. The circuit diagram and basic ohmmeter scale are shown in figure 1.", null, "(a)", null, "(b)\n\nFig.1: (a) Circuit; (b) Basic Scale\n\nIf the terminals of the ohmmeter are left open, ${{R}_{x}}=\\infty$ and no current flows. As shown on the basic scale, zero meter deflection corresponds to   ${{R}_{x}}=\\infty$. However, when the terminals or leads are shorted, ${{R}_{x}}=0$ and maximum current flows. So that exactly full-scale current flows when  ${{R}_{x}}=0$ , a zero adjust control is provided. The zero adjust allows one to calibrate the ohmmeter with the test leads shorted, thus compensating for lead resistance and battery aging.\n\nIf the sum of all the resistances internal to the ohmmeter is called ${{R}_{i}}$,\n\n${{R}_{i}}={{R}_{1}}+{{R}_{2}}+{{R}_{m}}\\text{ (1)}$\n\nAnd the meter current for any value of ${{R}_{x}}$ is\n\n$I=\\frac{E}{{{R}_{i}}+{{R}_{x}}}\\text{ (2)}$\n\nIt follows from equation (2) that if full-scale current flows when${{R}_{x}}=0\\text{ }\\Omega$, half scale current flows when  ${{R}_{x}}={{R}_{i}}$ . Within the limitations of the emf and meter mechanism used, the internal resistances R1 and R2 are selected to provide a particular mid-scale resistance, Rmid.\n\nNotice that for the left half of the ohmmeter scale, resistances between $\\infty \\text{ and }{{\\text{R}}_{\\text{mid}}}$  are indicated; for right half, resistances between ${{\\text{R}}_{\\text{mid}}}\\text{ and 0}$ are indicated. The resulting scale is nonlinear, and at either scale end, the accuracy is poor. Therefore, multi-range ohmmeters having different mid-scale resistances values are desirable. The different scales are obtained through the use of range switching, meter mechanism shunts, and different potential sources.\n\n## Shunt Ohmmeter\n\nThe second type of ohmmeter is the shunt ohmmeter, so called because the meter movement is in parallel with the unknown resistance. The basic shunt ohmmeter circuit and scale are shown in figure 2.", null, "(a)", null, "(b)\n\nFig.2: The Shunt Ohmmeter: (a) Circuit (b) Basic Scale\n\nNotice that a switch, S, is necessary to prevent current flow from the source of emf when the ohmmeter is not in use.\n\nIf the terminals of the shunt ohmmeter are shorted, ${{R}_{x}}=0\\text{ }\\Omega$ and all current is shunted away from the meter mechanism. However, when the terminals are open, ${{R}_{x}}=\\infty$ and maximum current flows. As before, a control is provided for the adjustment of full-scale deflection, but it is now an infinity adjust (∞ adjust). As in the series-type ohmmeter, when the unknown resistance equals the meter resistance, the meter reading is at half scale. In comparison to the series type, though, the shunt ohmmeter has a low meter resistance, making it particularly useful for unknown resistances that are relatively low. Regardless of the type of ohmmeter used, one must be certain that it is not connected to an energized or active circuit.\n\nDid you find apk for android? 
### About Ahmad Faizan", null, "Mr. Ahmed Faizan Sheikh, M.Sc. (USA), Research Fellow (USA), a member of IEEE & CIGRE, is a Fulbright Alumnus and earned his Master’s Degree in Electrical and Power Engineering from Kansas State University, USA." ]
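As a quick numeric illustration of equation (2) for the series ohmmeter, the short Python snippet below (the emf and internal-resistance values are assumed for the example, not taken from the article) shows that the pointer sits at exactly half scale when the unknown resistance Rx equals the internal resistance Ri.

```python
E = 1.5          # assumed source emf in volts
R_i = 15.0       # assumed total internal resistance R1 + R2 + Rm in ohms
I_fs = E / R_i   # full-scale current, which flows when Rx = 0

for R_x in [0.0, 7.5, 15.0, 45.0, 135.0]:
    I = E / (R_i + R_x)                                    # equation (2)
    print(f"Rx = {R_x:6.1f} ohm -> {I / I_fs:6.1%} of full-scale deflection")
# Rx = 15.0 ohm (Rx = Ri) gives exactly 50% deflection, the mid-scale mark;
# larger unknown resistances crowd toward the zero-deflection (infinity) end of the scale.
```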
[ null, "http://electricalacademia.com/wp-content/uploads/2017/03/ohmmeter.gif", null, "http://electricalacademia.com/wp-content/uploads/2017/03/ohmmeter-scale-700x281.gif", null, "http://electricalacademia.com/wp-content/uploads/2017/03/shunt-ohmmeter.gif", null, "http://electricalacademia.com/wp-content/uploads/2017/03/shunt-ohmmeter-scale-700x294.gif", null, "http://1.gravatar.com/avatar/12ca3fae784a44fa8f8e9aca2b9fa673", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8649867,"math_prob":0.99218917,"size":3336,"snap":"2019-13-2019-22","text_gpt3_token_len":855,"char_repetition_ratio":0.1707683,"word_repetition_ratio":0.03137255,"special_character_ratio":0.2529976,"punctuation_ratio":0.101404056,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.993816,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-26T10:28:26Z\",\"WARC-Record-ID\":\"<urn:uuid:fb10965b-083c-4d94-971b-3ede6c2c53d3>\",\"Content-Length\":\"74853\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3ea6a5fb-e03a-41b0-8e69-3da59b09992f>\",\"WARC-Concurrent-To\":\"<urn:uuid:45783c0b-61ec-4681-b48a-7c501b30c9bf>\",\"WARC-IP-Address\":\"107.180.46.155\",\"WARC-Target-URI\":\"http://electricalacademia.com/instrumentation-and-measurements/ohmmeter-basic-concepts-and-working-principle-ohmmeter-definition/\",\"WARC-Payload-Digest\":\"sha1:65NICQK34QXA5NT7LC7T6TFEBTBFQGG3\",\"WARC-Block-Digest\":\"sha1:M2NGMCOM7MOXJHD55VP5ZMRIRQDQ2L4X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232259015.92_warc_CC-MAIN-20190526085156-20190526111156-00135.warc.gz\"}"}
https://www.studyrankers.com/2019/01/rd-sharma-solutions-class10-ch3-pair-of-linear-equation-in-two-variables-3.html
[ ">\n\n## Chapter 3 Pair of Linear Equations in Two Variables R.D. Sharma Solutions for Class 10th Math Exercise 3.3\n\nExercise 3.3\n\n1. Solve the following systems of equations:\n11x + 15y + 23 = 0\n7x – 2y – 20 = 0\n\nSolution\n\nThe given system of equation is", null, "Hence, the solution of the given system of equations is x = 2,y = - 3.\n\n2. 3x – 7y + 10 = 0\ny – 2x – 3 = 0\n\nSolution\n\nThe given system of equation is", null, "Hence, the solution of the given system of equations is  x = -1, y = 1.\n\n3. 0.4x + 0.3y = 1.7\n0.7x + 0.2y = 0.8\n\nSolution\n\nThe given system of equation is", null, "Hence, the solution of the given system of equation is x = 2, y = 3.\n\n4. x/2 + y = 0.8\n\nSolution", null, "5. 7(y + 3) – 2 (x + 3) = 14\n4(y – 2) + 3 (x – 3) = 2\n\nSolution\n\nThe given system of equations is", null, "Hence, the solution of the given system of equations is x = 5,y = 1.\n\n6. x/7 + y/3 = 5\nx/2 - y/9 = 6\n\nSolution\n\nThe given system of equation is", null, "Hence, the solution of thee given system of equations is  x=14, y=9.\n\n7. x/3 + y/4 = 11\n5x/6 - y/3 = 7\n\nSolution\n\nThe given system of equations is", null, "Let us eliminate y from the given equations. The coefficients of y in the equations(iii) and\n(iv) are 3 and 2 respectively. The L.C.M of 3 and 2 is 6. So, we make the coefficient of y\nequal to 6 in the two equations.\nMultiplying (iii) by 2 and (iv) by 3, we get", null, "Hence, the solution of the given system of equations is  x = 6, y = 36.\n\n8. 4u + 3y = 8\n6u - 4y = -5\n\nSolution", null, "So, the solution of the given system of equation is  x=2, y=2.", null, "9. x + y/2 = 4\nx/3 + 2y = 5\n\nSolution\n\nThe given system of equation is", null, "Hence, solution of the given system of equation is  x=3, y=2.\n\n10. x + 2y = 3/2\n2x + y = 3/2\n\nSolution\n\nThe given system of equation is", null, "Let us eliminate y from the given equations. The Coefficients of y in the given equations\nare 2 and 1 respectively. The L.C.M of 2 and 1 is 2. So, we make the coefficient of y equal\nto 2 in the two equations.", null, "Hence, solution of the given system of equation is x = 1/2, y = 1/2.\n\n11. √2 x + √3 y = 0\n√3 x - √8 y = 0\n\nSolution", null, "12. 3x - (y+7)/11 + 2 = 10\n2y + (x+11)/7 = 10\n\nSolution\n\nThe given systems of equation is", null, "Hence, solution of the given system of equation is  x=3, y=4.\n\n13. 2x - 3/y = 9\n3x + 7/y = 2, y ≠ 0\n\nSolution\n\nThe given systems of equation is", null, "Hence, solution of the given system of equation is  x=3,y=1.\n\n14. 0.5x + 0.7y = 0.74\n0.3x + 0.5y = 0.5\n\nSolution\n\nThe given systems of equations is", null, "Hence, solution of the given system of equation is x = 0 5, y = 0.7.\n\n15 . 1/7x + 1/6y = 3\n1/2x - 1/3y = 5\n\nSolution", null, "16. 1/2x + 1/3y = 2\n1/3x + 1/2y = 13/6\n\nSolution\n\nThe given equations are:", null, "Multiply equation (i) by 1/2 and (ii) by 1/3 and subtract equation (ii) from (i) we get", null, "Hence the value of x= 1/2 and y = 1/3 .\n\n17. 15/u + 2v = 17\n1/u + 1/v = 36/5\n\nSolution\n\nThe given equations are:", null, "Multiply equation (ii) by 2 and subtract (ii) from (i), we get", null, "Put the value of u in equation (i), we get", null, "Hence the value of u=5 and v=1/7\n\n18. 3/x - 1/y = -9\n2/x + 3/y = 5\n\nSolution\n\nThe given equations are:", null, "Multiply equation (i) by 3 and add both equations, we get", null, "Put the value of x in equation(i), we get", null, "Hence the value of x=1/2 and y=1/3\n\n19. 
2/x + 5/y = 1\n60/x + 40/y = 19\n\nSolution\n\nThe given equations are:", null, "Multiply equation (i) by 8 and subtract (ii) from equation (i), we get", null, "Put the value of x in equation(i), we get", null, "Hence the value x=4 and y=10.\n\n20. 1/5x + 1/6y = 12\n1/3x - 3/7y = 8\n\nSolution\n\nThe given equations are:", null, "Multiply equation (i) by 3/7 and equation (ii) by 1/6, add both equation, we get", null, "Put the value of x in equation (i), we get", null, "Hence the value of = x - 89/4080 and y=89/1512\n\n21. 4/x + 3/y = 14\n3/x - 4y = 23\n\nSolution\n\nThe given equations are:", null, "Multiply equation (i) by 4 and equation (ii) by 3, add both equations, we get", null, "Put the value of x in equation (i), we get", null, "Hence the value of x = 1/5 and y = -2\n\n22. 4/x + 5y = 7\n3/x + 4y = 5\n\nSolution\n\nThe given equations are:", null, "Multiply equation (i) by 4 and equation (ii) by 5 and subtract (ii) from (i) we get", null, "Put the value of x in equation (i), we get", null, "Hence the value of x=1/3 and y=-1\n\n23.  2/x + 3/y = 13\n5/x - 4y = -2\n\nSolution", null, "Multiply equation (i) by 4 and equation (ii) by 3 and add both equations we get", null, "Put the value of x in equation (i), we get", null, "Hence the value of x=1/2 and y=1/3.\n\n24. 2/√x + 3/√y = 2\n4/√x - 9/√y = -1\n\nSolution\n\nThe given equations are:", null, "Multiply equation (i) by 3 and add both equations we get", null, "Put the value x in equation (i), we get", null, "Hence the value of x = 4 and y = 9\n\n25. (x+y)/xy = 2\n(x-y)/xy = 6\n\nSolution\n\nThe given equations are:", null, "", null, "Put the value y in equation (i), we get", null, "Hence the value of x=-1/2 and y=1/4\n\n26. 2/x + 3/y = 9/xy\n4/x + 9/y = 21/xy\n\nSolution\n\nThe given equations are:", null, "Multiply equation (i) by 3 and subtract (ii) from (i), we get", null, "Put the value of x in equation (i), we get", null, "Hence the value of x=1 and y=3 .\n\n27. 6/(x+y) = 7/(x-y) + 3\n1/2(x+y) = 1/3(x-y)\n\nSolution\n\nThe given equations are:", null, "28. xy/(x+y) = 6/5\nxy/(y-x) = 6\n\nSolution\n\nThe given equations are:", null, "29. 22/(x+y) + 15/(x-y) = 5\n55/(x+y) + 45/(x-y) = 14\n\nSolution\n\nThe given equations are:", null, "30. 5/(x+y) - 2/(x-y) = -1\n15/(x+y) + 7/(x-y) = 10\n\nSolution\n\nThe given equations are:", null, "31. 3/(x+y) + 2/(x-y) = 2\n9/(x+y) - 4/(x-y) = 1\n\nSolution\n\nThe given equations are:", null, "32. 1/2(x+2y) + 5/3(3x-2y) = -3/2\n4(x+2y) - 3/5(3x-2y) = 61/60\n\nSolution\n\nThe given equations are:", null, "33. 5/(x+1) - 2/(y-1) = 1/2\n10/(x+1) + 2/(y-1) = 5/2\n\nSolution\n\nThe given equations are:", null, "34. x+y=5xy\n3x+2y=13xy\n\nSolution\n\nThe given equations are:", null, "35. x+y = 2xy\nx-y/xy = 6\n\nSolution\n\nThe given equations are:", null, "36. 2(3u-v) = 5uv\n2(u+3v) = 5uv\n\nSolution\n\nThe given equations are:", null, "37. 2/(3x+2y) + 3/(3x-2y) = 17/5\n5/(3x+2y) + 1/(3x-2y) = 2\n\nSolution\n\nThe given equations are:", null, "38. 44/(x+y) + 30/(x-y) = 10\n55/(x+y) + 40/(x-y) = 13\n\nSolution\n\nThe given equations are:", null, "39. 5/(x-1) + 1/(y-2) = 2\n6/(x-1) - 3(y-2) = 1\nSolution\n\nThe given equations are:", null, "40. 10/(x+y) + 2/(x-y) = 4\n15/(x+y) - 9/(x-y) = -2\n\nSolution\n\nThe given equations are:", null, "41. 1/(3x+y) + 1/(3x-y) = 3/4\n1/2(3x+y) - 1/2(3x-y) = -1/8\n\nSolution\n\nThe given equations are:", null, "42. (7x-2y)/xy = 5\n(8x+7y)/xy = 15\n\nSolution\n\nThe given equations are:", null, "43. 
152x-378y=-74\n-378x + 152y = -604\n\nSolution\n\nThe given equations are:", null, "44. 99x + 101y = 499\n101x + 99y = 501\n\nSolution", null, "45. 23x − 29y = 98\n29x − 23y = 110\n\nSolution\n\nThe given equations are:", null, "46. x -y + z = 4\nx - 2y - 2z = 9\n2x + y + 3z = 1\n\nSolution\n\nThe given equations are:", null, "47. x − y + z = 4\nx + y + z = 2\n2x + y − 3z = 0\n\nSolution\n\nThe given equations are:", null, "48. 21x + 47y = 110\n47x + 21y = 162\n\nSolution\n\n21x + 47y = 110 ...(i)\n47x + 21y = 162 ...(ii)\nAdding (i) and (ii), we get\n68x + 68y = 272\n⇒ x + y = 4 ...(iii)\nSubtracting (i) from (ii), we get\n26x - 26y = 52\n⇒ x - y = 2 ...(iv)\nAdding (iii) and (iv), we get\n2x = 6\n⇒ x = 3\nPutting x = 3 in (iv), we get\n3 - y = 2\n⇒ y = 1\n\n49. If x+1 is a factor of 2x3 + ax2 + 2bx + 1, then find the values of a and b given that 2a - 3b = 4\n\nSolution\n\nSince (x+1) is a factor of 2x3 + ax2 + 2bx + 1, so\n2(-1)3 + a(-1)2 + 2b(-1) + 1 = 0\n⇒ -2 + a - 2b + 1 = 0\n⇒ a - 2b - 1 = 0\n⇒ a - 2b = 1 ...(i)\nAlso, we are given\n2a - 3b = 4 ...(ii)\nFrom (i) and (ii) we get\na = 1 + 2b ...(iii)\nSubstituting the value of a in (ii), we get\n2(1 + 2b) - 3b = 4\n⇒ 2 + 4b - 3b = 4\n⇒ b = 2\nPutting b = 2 in (iii), we get\na = 1 + 2 × 2 = 5\nThus, the value of a = 5 and b = 2.\n\n50. Find the solution of the pair of equations x/10 + y/5 - 1 = 0 and x/8 + y/6 = 15 . Hence, find λ, if y = λ x + 5.\n\nSolution\n\nThe given equations are", null, "Thus, the value of λ = -1/2 .\n\n51. Find the values of x and y in the following rectangle .\nSolution\n\nABCD is the given rectangle. So, AB = CD and AD = BC .\nThus,\nx + 3y = 13 ....(i)\n3x + y = 7  ....(ii)\nAdding (i) and (ii), we get\n4x + 4y = 20\n⇒ x + y = 5 ....(iii)\nSubtracting (i) from (ii), we get\n2x - 2y = -6\n⇒ x - y = -3 ....(iv)\nAdding (iii) and (iv), we get\n2x = 2\n⇒ x = 1\nPutting x = 1 in (iii), we get\n1+y = 5\n⇒ y = 4\nThus, x = 1 and y = 4.\n\n52. Write an equation of a line passing through the point representing solution of the pair of linear equation x + y = 2 and 2x - y = 1. How many such lines can we find?\n\nSolution\n\nThe given equations are\nx + y = 2 ....(i)\n2x - y = 1 ....(ii)\nAdding (i) and (ii), we get\n3x = 3\n⇒ x = 1\nPutting x = 1 in (i), we get\n1 + y = 2\n⇒ y = 1\nThus, the solution of the given equations is (1, 1).\nWe know that, infinitely many straight lines pass through a single point.\nSo, the equation of one such line can be 3x + 2y = 5 or 2x + 3y = 5.\n\n53. Write a pair of linear equations which has the unique solution x = -1,  y = 3. How many such pairs can you write?\n\nSolution\n\nThe unique solution is given as x = −1 and y = 3.\nThe one pair of linear equations having x = −1 and y = 3 as unique solution can be\n12x + 5y = 3\n2x + y = 1\nSimilarly, infinitely many pairs of linear equations are possible.\nX" ]
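Any of the two-variable systems above can be checked mechanically. As one example, a short Python/NumPy sketch (mine, not part of the textbook solution) verifies Question 1 by writing 11x + 15y + 23 = 0 and 7x − 2y − 20 = 0 in matrix form A·[x, y] = b:

```python
import numpy as np

A = np.array([[11.0, 15.0],     # coefficients of x and y in 11x + 15y = -23
              [7.0, -2.0]])     # coefficients of x and y in 7x - 2y = 20
b = np.array([-23.0, 20.0])

x, y = np.linalg.solve(A, b)
print(x, y)                     # approximately 2.0 and -3.0, matching the worked answer
```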
[ null, "https://4.bp.blogspot.com/-UwogX9j6NAk/Wv6yMb9JJQI/AAAAAAAAJ0A/cms5uJQpA4E_v7_CRmvXHQGQUvE7hYXGgCLcBGAs/s640/q1%2Bi.PNG", null, "https://3.bp.blogspot.com/-MHvalwbZT-M/Wv6zDfgVXYI/AAAAAAAAJ0I/Ij7xMen-bAwmeWJVlGaOiHRVdSykEjVCACLcBGAs/s320/2%2Bans.PNG", null, "https://4.bp.blogspot.com/-xsfIMmg_HaU/Wv60g4ngyII/AAAAAAAAJ0U/yBK0LV4e4WcJ6nsHIfHJ8mPkldSXK2iewCLcBGAs/s640/3%2Bi.PNG", null, "https://1.bp.blogspot.com/-GyHZYzxl2sU/Wv61obr6eHI/AAAAAAAAJ0o/ED6GVUsgOGkO-_3ezzWNQwpOmyZ-P3OBQCLcBGAs/s400/q4.PNG", null, "https://4.bp.blogspot.com/-mntCnc67V4s/Wv62nksKfQI/AAAAAAAAJ0w/XXUJQKk0uoc5OrWCHdCOVbVmCzRI_cWxQCLcBGAs/s640/5%2Bans%2Bi.PNG", null, "https://4.bp.blogspot.com/-fdu9D4QUl48/Wv65gXE-WHI/AAAAAAAAJ1E/dRftv_GNw8sqPJQZcoxd-wkFLQm_iiOCgCLcBGAs/s640/6%2Bi.PNG", null, "https://2.bp.blogspot.com/-UqTZ7BpZE70/Wv66a9JbKgI/AAAAAAAAJ1U/PvB1PeSjs_8byatfLRdiThHhy55RolfgACLcBGAs/s320/7%2Bi.PNG", null, "https://4.bp.blogspot.com/-nMYMAg2sZnQ/Wv66vhHmfJI/AAAAAAAAJ1c/pOjCiToqLQ0bPAmhV3AdZRr4fk55NWjIACLcBGAs/s320/7%2Bans.PNG", null, "https://2.bp.blogspot.com/-dkIefcXST2w/Wv6_UpNsnuI/AAAAAAAAJ1o/3iYPH9oHREoCsGL_KKgNk3mRZDGhj3oIACLcBGAs/s640/8%2Bi.PNG", null, "https://1.bp.blogspot.com/-dgzV3nJIXl4/Wv-x9StjEgI/AAAAAAAAJ38/idIqIJWBltwi82u8srOFD5U3uvy2WzBHACEwYBhgL/s640/14%2Bi.PNG", null, "https://1.bp.blogspot.com/-PVFbP3RiPww/Wv-sKmUz03I/AAAAAAAAJ2U/YiMLhQQWzYcqpUREc2J8PZmptzSlg5S1QCLcBGAs/s640/9%2Bi.PNG", null, "https://3.bp.blogspot.com/-gDTG_v8elkI/Wv-suLLn3KI/AAAAAAAAJ2k/JNwheCPUhlI083gt0FhXZ3mfA3PM9g1nQCLcBGAs/s1600/10%2Bi.PNG", null, "https://3.bp.blogspot.com/-cOqmjD2HoF4/Wv-tNOJRRCI/AAAAAAAAJ2s/ul1C0O-4c9ExhZTJ2sWP1gdc3LQh6j82wCLcBGAs/s640/10%2Bii.PNG", null, "https://1.bp.blogspot.com/-jzaYq83c_9k/Wv-t_ipSEEI/AAAAAAAAJ3A/yrcTd3DI4G8GTXCF6h3iM0khOk8faWoXQCLcBGAs/s400/11%2Bi.PNG", null, "https://1.bp.blogspot.com/-JYaNJDQmAGw/Wv-vPls0IaI/AAAAAAAAJ3c/y3pJ3nMcFxcwHbvTYke6-aLYKfrdvy-8QCLcBGAs/s640/12%2Bi.PNG", null, "https://1.bp.blogspot.com/-Xut2eYBfuT4/Wv-wPwFOjCI/AAAAAAAAJ3s/jE1pm0ccgQwTqEQxw5wYyarBmJgzu5RRgCLcBGAs/s640/13%2Bi.PNG", null, "https://4.bp.blogspot.com/-dgzV3nJIXl4/Wv-x9StjEgI/AAAAAAAAJ4A/oulZIqUmSGc1vL1VNU408dfnKMs9of-wACEwYBhgL/s640/14%2Bi.PNG", null, "https://1.bp.blogspot.com/-RBk9tFmtWdo/Wv-z5e6pKNI/AAAAAAAAJ4Q/mrKTwMCq38omJKo_ydN9tHMSJn-bFEgigCLcBGAs/s640/15%2Bi.PNG", null, "https://3.bp.blogspot.com/-b-KPC4sCyfo/Wv_0AErErOI/AAAAAAAAKC4/9rmMw_MIElwjhJgQZMsBrETg5G7FcRS_gCLcBGAs/s1600/16%2Bi.PNG", null, "https://1.bp.blogspot.com/-HohJK1bsbnI/Wv_0ZjuhrpI/AAAAAAAAKDA/r8Nk38Ji2mgOeYQveaIr8e9eIQ2fhH3SgCLcBGAs/s320/16%2Bii.PNG", null, "https://3.bp.blogspot.com/-ZDyexzFOkmM/Wv_1ReWe-5I/AAAAAAAAKDU/TCsPhb-7eAQMOb17BB8l6RywJr7Our3BwCLcBGAs/s1600/17%2Bi.PNG", null, "https://3.bp.blogspot.com/-kUX0dwb2xsI/Wv_1v9MNmII/AAAAAAAAKDg/iSdSvGjDunUUCyN2ZGhVF_3W63jYeOW4gCLcBGAs/s1600/17%2Bii.PNG", null, "https://1.bp.blogspot.com/-PJajo2FZ2bo/Wv_3iQXnfdI/AAAAAAAAKD8/wWkHcRLKubs2rkxYRuaIii36fYhuTpeTwCLcBGAs/s1600/17%2Biii.PNG", null, "https://1.bp.blogspot.com/-uZ3LLzLFl6k/Wv_4c6wJgSI/AAAAAAAAKEQ/59mdHY4rsSMeYb2KMZF-0E8N4sKO92-pgCLcBGAs/s1600/18%2Bi.PNG", null, "https://3.bp.blogspot.com/-qIuX_WU2SQ4/Wv_5LszXymI/AAAAAAAAKEY/swQZqexFrBwEguoJLK8Mh4OcPJjJ-Y-igCLcBGAs/s1600/18%2Bii.PNG", null, "https://3.bp.blogspot.com/-wIoID2b_x44/Wv_55vBKMKI/AAAAAAAAKEk/UrlvBPMXJg4X4IOMXEKdZ0ysYQ6CGWHDACLcBGAs/s1600/18%2Biii.PNG", null, "https://3.bp.blogspot.com/-2dL-eIgE4eY/Wv_7PUhgBQI/AAAAAAAAKE8/9ljuLnqweGkwt0eYMWrIOnOh59QoEJyywCLcBGAs/s1600/19%2Bi.PNG", null, 
"https://4.bp.blogspot.com/-htFjGyH35vc/Wv_7xPD0CrI/AAAAAAAAKFE/m2ohV4FFVzsobJMJ-4IkFHSbvY1AjQngwCLcBGAs/s1600/19%2Bans.PNG", null, "https://3.bp.blogspot.com/-Hu1xLxz4Se0/Wv_8SYci1cI/AAAAAAAAKFM/4K4h5SWBIDo3tez3k7q59E8jzhSUfYk6QCLcBGAs/s1600/19%2Bans%2Bii.PNG", null, "https://3.bp.blogspot.com/-qbR8oXa_7KY/Wv_9QSLlRdI/AAAAAAAAKFg/rQc5LL0G_EIVUkP8w898hjn30mE3-bQdACLcBGAs/s1600/20%2Bi.PNG", null, "https://3.bp.blogspot.com/-ZFGm48keZow/Wv_-GmDIY-I/AAAAAAAAKFs/QlhQGEGDPUgMKgO_eisPmrlpVMbcfsGRACLcBGAs/s1600/20%2Bii.PNG", null, "https://4.bp.blogspot.com/-c9-j6ecsXc8/WwACAWYCSPI/AAAAAAAAKF4/K6mwt2lnvXcx_v7zfBTBmvDk_17xaVCNwCLcBGAs/s1600/20%2Biii.PNG", null, "https://1.bp.blogspot.com/-NmHpx5HU74k/WwAEvCL9YtI/AAAAAAAAKGY/Or8gSe5fwiUpsQvLeCLL_3E-67aYf3p3ACLcBGAs/s1600/21%2Ba.PNG", null, "https://1.bp.blogspot.com/-PIqnvAfeR1A/WwAGAGVGo7I/AAAAAAAAKGk/OFaz5LQjkdc6XXBSK7J7fSwttObr5HYfgCLcBGAs/s1600/21%2Bb.PNG", null, "https://2.bp.blogspot.com/-FKa6Ob4Oy2g/WwAHGlZf9UI/AAAAAAAAKGw/w0TPIapqk2MbzqvLzRQYsHVeEYIwVqprACLcBGAs/s1600/21%2Bc.PNG", null, "https://3.bp.blogspot.com/-WPw0ouTFnH0/WwAH0pdzw-I/AAAAAAAAKHA/Y70xmQl4OqgP9QcnQoFpvVzxzsMTDrsUgCLcBGAs/s1600/22%2Bi.PNG", null, "https://1.bp.blogspot.com/-tNGr9sx_4s4/WwAIcRIZ6hI/AAAAAAAAKHM/V05WDnL6BVExv8LCrXeX9DsR5bR7lsCFACLcBGAs/s1600/22%2Bii.PNG", null, "https://4.bp.blogspot.com/-TIQwxDscN2M/WwAIuxTd6DI/AAAAAAAAKHU/RWmyzO3-Jho8X9odlP8u0UaWDMacugl1QCLcBGAs/s1600/22%2Biii.PNG", null, "https://2.bp.blogspot.com/-5mHK-eR8-lA/WwAJ3Ag-jTI/AAAAAAAAKHk/CAuYGWNXNGkEslWZFhuO0ulwDXTz5h1bACLcBGAs/s1600/23%2Bi.PNG", null, "https://1.bp.blogspot.com/-eeiDZdKQ5hQ/WwAKVLG9PnI/AAAAAAAAKHs/Q_C3Sr0PSTcua6IGYp2BsLC4g687ff1kQCLcBGAs/s1600/23%2Bii.PNG", null, "https://2.bp.blogspot.com/-B7BBFIIDqhs/WwAK5lz10kI/AAAAAAAAKH0/R4E9liFo0usGBaHJtTkDmZG_WqqPWd7wACLcBGAs/s1600/23%2Biii.PNG", null, "https://3.bp.blogspot.com/-qxgrfNIPzyc/WwAMZFWKdMI/AAAAAAAAKII/6bqr7pwgriEWH9BL8woT1zYOSaIGhMeEACLcBGAs/s1600/24%2Bi.PNG", null, "https://2.bp.blogspot.com/-gbpqRsGr4h4/WwAMzw7O_PI/AAAAAAAAKIQ/H0AjhxllRDUflLmE7xFzgUwmLMd_JBnkACLcBGAs/s1600/24%2Bii.PNG", null, "https://3.bp.blogspot.com/-ScaXL8r7ids/WwANtPUBrgI/AAAAAAAAKIc/B3O2tBqTs5klyZmhGVltdmdsc3CHdaeoACLcBGAs/s1600/24%2Biii.PNG", null, "https://1.bp.blogspot.com/-wDGD43WuUwI/WwAOw6fw27I/AAAAAAAAKIw/_GfrWJ21zaIH5TDMCnPk4AVKKGZrr-jWwCLcBGAs/s1600/25%2Bi.PNG", null, "https://1.bp.blogspot.com/-zurijLL5GNA/WwAPEb4nVfI/AAAAAAAAKI4/MQN4j6mtKJMpHbiLvFFCvyfRxE6gtjKwwCLcBGAs/s1600/25%2Bii.PNG", null, "https://3.bp.blogspot.com/-yLQuKXMCGnQ/WwAPYmqFNqI/AAAAAAAAKJA/JaEXu144XhkuTk2Mmjsn_miyHOzGyRFpwCLcBGAs/s1600/25%2Biii.PNG", null, "https://1.bp.blogspot.com/-9dixovKmX00/WwAQdTPpjsI/AAAAAAAAKJU/yiVHUgAiMSM9Fd5xlz-QL26R4eqZ-Qu9wCLcBGAs/s1600/26%2Bi.PNG", null, "https://2.bp.blogspot.com/-IRyxGwHVbqE/WwAQ4tijSKI/AAAAAAAAKJc/qyG8gptkfKADAuJXvRJplhCovlccgzXhQCLcBGAs/s1600/26%2Bii.PNG", null, "https://2.bp.blogspot.com/-OFuDbEB_9yU/WwAR84bD2lI/AAAAAAAAKJo/mtLd11s1JJoNdeMV-ApjMp2ZW9H9bQCZACLcBGAs/s1600/26%2Biv.PNG", null, "https://2.bp.blogspot.com/-yyCWStRgm7U/WwJOxUSRLuI/AAAAAAAAKKY/VgAPeHF3GgwmfJGnKsvIEv5sKjHAfLQQACLcBGAs/s1600/27%2BI.PNG", null, "https://3.bp.blogspot.com/-RT8Rc66qmj8/WwJQIuUo8EI/AAAAAAAAKKo/Qx__KWc6KvMgB8K7pRlawSrl1QlG3-8wwCLcBGAs/s400/28%2Bans.PNG", null, "https://1.bp.blogspot.com/-MA8cVQaVx20/WwJRJKP_qFI/AAAAAAAAKK4/31UDitfK2KgYpaWZcfVG4BtqVVDePw9dACLcBGAs/s1600/29%2Bi.PNG", null, "https://2.bp.blogspot.com/-8YW-aBC2ImQ/WwJSb6JQnoI/AAAAAAAAKLQ/GYEr8qRwU0E1Govro5hGisX4EMJtNR6agCLcBGAs/s1600/30%2Bi.PNG", null, 
"https://2.bp.blogspot.com/-PbR62a8q7IQ/WwJTjE4TubI/AAAAAAAAKLk/nb-G3V95ReYrSLQf5LK8oRwIyx9iVTWgwCLcBGAs/s1600/31%2Bi.PNG", null, "https://2.bp.blogspot.com/-o74jASoZuh4/WwJVLWC0cUI/AAAAAAAAKL4/U0dIztbXKYwUf8RtCRtueixCiBTw2zOTgCLcBGAs/s1600/32%2Bi.PNG", null, "https://4.bp.blogspot.com/-b6Sm2_ocNcE/WwJWZLhFmtI/AAAAAAAAKMM/J5n_E0-z_vsXxVi9B7HyAb37Ot90uFeSwCLcBGAs/s1600/33%2Bi.PNG", null, "https://3.bp.blogspot.com/-cys9JFCWmwg/WwJXX5fH0DI/AAAAAAAAKMU/Wdy1_X2ZovM6-a-CvWPTtMxsL7QlJLKuQCLcBGAs/s400/34%2Bi.PNG", null, "https://2.bp.blogspot.com/-HZIok9vMdq0/WwJYODY2iaI/AAAAAAAAKMk/WdzBBrluJvE1yKInOO3oRBZUIibn5GthwCLcBGAs/s400/35%2Bi.PNG", null, "https://2.bp.blogspot.com/-GBSHgsk156A/WwJY_huJKXI/AAAAAAAAKMs/96U4YLrswrYaZWv-dA2snUkoBF-GKjY_wCLcBGAs/s1600/36%2Bans.PNG", null, "https://2.bp.blogspot.com/-N1Alt2hGTJM/WwJaAgB9AvI/AAAAAAAAKM8/HtcQcwoTOBswGHKxsCXmSrZV-ph-VSitgCLcBGAs/s1600/37%2Bans%2Bi.PNG", null, "https://1.bp.blogspot.com/-1rTfV3xFZJY/WwJbC3-djcI/AAAAAAAAKNU/mA8IRYo9jH81KORt_RCbVQaZTh-m3QWQgCLcBGAs/s1600/38%2Bi.PNG", null, "https://1.bp.blogspot.com/-Syun2JsbY6o/WwJb61idlwI/AAAAAAAAKNk/CAnD_YnDWNUaPzOBeNmObsncr6xfbGGAACLcBGAs/s640/39%2Bi.PNG", null, "https://4.bp.blogspot.com/-ibVmyfzi2gQ/WwJdrXR7EZI/AAAAAAAAKN4/j3-kthJ34JshMYKfmB0Ej9mzjFt-r-5nQCLcBGAs/s1600/40%2Bi.PNG", null, "https://4.bp.blogspot.com/-lMpKUlUa5eo/WwJf7TlgJmI/AAAAAAAAKOM/iHhx42JihfU9SVLHydCS4nM_FfhZP6NgACLcBGAs/s1600/41%2Bi.PNG", null, "https://1.bp.blogspot.com/-WXzvZh7AfjA/WwJhTWgQZ9I/AAAAAAAAKOg/1LLyfZjOuwkJKKnP4NJPSa3t1FrVk6GlQCLcBGAs/s1600/q43.PNG", null, "https://1.bp.blogspot.com/-4HLupByXF9w/WwJjjpn6fJI/AAAAAAAAKOs/zuVJyfA-QEwdgGAivVKIyUtYsF_aYeovQCLcBGAs/s640/43%2Bans.PNG", null, "https://3.bp.blogspot.com/-PXPhJZiPn2I/WwJkGeEEJQI/AAAAAAAAKO0/EM4b0Q6Kdt0leZLGY5yqsRIx0EaMIWoHgCLcBGAs/s1600/44%2Bans.PNG", null, "https://2.bp.blogspot.com/-3FA0ddIrvLM/WwJlN12GSyI/AAAAAAAAKPA/fUu1LhzNemkxPv6psjSav9OUAGWlatB0QCLcBGAs/s640/45%2Bans.PNG", null, "https://4.bp.blogspot.com/-bh8yR13BY8I/WwJmWCQweLI/AAAAAAAAKPM/avQA-cip2D4nIAZZjxew7OIbj-7xUCEWwCLcBGAs/s640/46%2Bi.PNG", null, "https://3.bp.blogspot.com/-Aqvl1WfOJFY/WwJnw7ZYFmI/AAAAAAAAKPg/kXPWp5t1ePsr53NwiSVv9gm3LCap8KqvACLcBGAs/s1600/47%2Bans.PNG", null, "https://2.bp.blogspot.com/-xAYiislaVXA/WwJxJPFy7fI/AAAAAAAAKQA/iT8ujnhUzTAsNvtjTtrG8igJqxmUg5oUACLcBGAs/s640/50%2Bi.PNG", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8064027,"math_prob":1.0000095,"size":9815,"snap":"2021-43-2021-49","text_gpt3_token_len":4424,"char_repetition_ratio":0.26928958,"word_repetition_ratio":0.39607844,"special_character_ratio":0.4861946,"punctuation_ratio":0.12293144,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":1.0000098,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144],"im_url_duplicate_count":[null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-16T14:26:03Z\",\"WARC-Record-ID\":\"<urn:uuid:8b45141c-e0dc-4b09-980f-537e30290b76>\",\"Content-Length\":\"349503\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2f3e7ff9-70cf-4841-8ba1-87170d9fdfe8>\",\"WARC-Concurrent-To\":\"<urn:uuid:745a92a7-3dc9-4ed6-8e47-b1e41b95d11e>\",\"WARC-IP-Address\":\"142.251.33.211\",\"WARC-Target-URI\":\"https://www.studyrankers.com/2019/01/rd-sharma-solutions-class10-ch3-pair-of-linear-equation-in-two-variables-3.html\",\"WARC-Payload-Digest\":\"sha1:MCTN43VPKFXSYTHPJHL4MZMH44VVCE2S\",\"WARC-Block-Digest\":\"sha1:WZPQKCZSX5E4XIDMPRI3T2ZKX7B4TKTI\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323584886.5_warc_CC-MAIN-20211016135542-20211016165542-00592.warc.gz\"}"}
https://blog.jverkamp.com/2014/08/23/ludum-dare-30-hints-of-a-game/
[ "# Ludum Dare 30: Hints of a game\n\nWe’re getting there. 18 hours in and I have the first hints of what might actually be a game…\n\n(I’ll include a demo at the bottom of the post)\n\nBasically, I went with allowing each box to be controlled individually by the keyboard. There will be some problems with having that many people on the keyboard at once, but we’ll deal with that later (if we can).\n\nOne additional thing I wanted (and mostly figured out) is somewhat ‘loose’ controls. Basically, rather than explicitly dealing with moving only when a key is down, we’ll accelerate when the key is down, preserving velocity even after keys are raised. There will be some small amount of friction as well, to make sure that eventually pieces will slow down.\n\nIt’s actually not too hard to implement a system like this:\n\nFirst, load in the keybindings from the interface you can see in the screenshot above:\n\n// Load key bindings\nvar loadKeyBindings = function() {\nkeys = {};\n\nconsole.log('Loading key bindings...');\n\n$('#controls table').each(function(i, eli) { var player = parseInt($(eli).attr('data-player'));\n\nconsole.log('loading controls for player ' + player);\n\n$(eli).find('input').each(function(j, elj) { var command =$(elj).attr('name');\nvar key = $(elj).val(); keys[key] = [player, command, false]; }); }); }; This will put everything into the key array, indexed by key name and storing the player it refers to, the direction you are going (one of left, right, up, or down), and if that key is currently active (pressed down). If I can, I may add additional key bindings (such as rotation or powerups), otherwise, that’s pretty good for the moment. Next, we’ll add a function to tell when keys are active: var onkey = function(event) { switch (event.keyCode) { case 37: key = 'LEFT'; break; case 38: key = 'UP'; break; case 39: key = 'RIGHT'; break; case 40: key = 'DOWN'; break; case 97: key = 'NUM1'; break; case 98: key = 'NUM2'; break; case 99: key = 'NUM3'; break; case 100: key = 'NUM4'; break; case 101: key = 'NUM5'; break; case 102: key = 'NUM6'; break; case 103: key = 'NUM7'; break; case 104: key = 'NUM8'; break; case 105: key = 'NUM9'; break; default: key = String.fromCharCode(event.keyCode).toUpperCase(); } if (key in keys) { if (event.type == 'keydown') { keys[key] = true; } else if (event.type == 'keyup') { keys[key] = false; } } }; Longer than I wanted, but it correctly deals with both the numpad and arrow keys, which is kind of necessary if you want to support 4 human players all at the same time. Perhaps I’ll implement AIs, but until I do, we’re going to have to allow for a bunch of players… Okay, so what do we do with all of this information? Well, just like before, we have a tick function: var tick = function(event) {$.each(keys, function(i, el) {\nvar player = el;\nvar command = el;\nvar active = el;\n\n$game =$('#tiles');\n$tile =$('#tiles *[data-player=\"' + player + '\"]');\n\n// Update velocity\n...\n\n// Use friction to slow each box down over time\n...\n\n// Cap velocity so we don't go too fast\n...\n\n// Update the current position based on velocity\n...\n\n// Bounce off the edges of the screen\n...\n\n// Finally, update the position\n$tile.css({'top': top, 'left': left}); }); if (running) { setTimeout(tick, 1000/30); } }; Oof. That’s a heck of a function. Luckily, the individual parts aren’t that bad. First, we want to update the velocity. 
This is where the active parameter (the third in each key definition) comes into play: // Update velocity if (active) { if (command == 'up') { vel[player] -= PER_TICK_ACCELERATION; } else if (command == 'down') { vel[player] += PER_TICK_ACCELERATION; } else if (command == 'left') { vel[player] -= PER_TICK_ACCELERATION; } else if (command == 'right') { vel[player] += PER_TICK_ACCELERATION; } } That’s simple enough. As before, we have to decide that up and down are inverted (they almost always are when it comes to computers), but once you’ve decided that’s easy enough. Now, outside of that black, the next thing we’ll do is apply friction. This way the boxes will slow down over time, forcing players both to pay attention and to let them bounce around like madmen. // Use friction to slow each box down over time // If we're close enough to zero that friction will accelerate us, just stop if (Math.abs(vel[player]) < PER_TICK_FRICTION) { vel[player] = 0; } else { vel[player] += (vel[player] > 0 ? -PER_TICK_FRICTION : PER_TICK_FRICTION); } if (Math.abs(vel[player]) < PER_TICK_FRICTION) { vel[player] = 0; } else { vel[player] += (vel[player] > 0 ? -PER_TICK_FRICTION : PER_TICK_FRICTION); } // Cap velcity so we don't go too fast vel[player] = Math.min(VELOCITY_CAP, Math.max(-VELOCITY_CAP, vel[player])); vel[player] = Math.min(VELOCITY_CAP, Math.max(-VELOCITY_CAP, vel[player])); Also at the end there, we make sure we don’t keep accelerating indefinitely. That both helps keep the game a little easier to play and prevents edge cases (such as moving further in one tick than we’re allowed). Next, we can finally update the position: // Update the current position based on velocity var left =$tile.offsetLeft + vel[player];\nvar top = $tile.offsetTop + vel[player]; // Bounce off the edges of the screen if (left < 0) { left = 0; vel[player] = Math.abs(vel[player]); } else if (left >$game.width() - $tile.width()) { left =$game.width() - $tile.width(); vel[player] = -1 * Math.abs(vel[player]); } if (top < 0) { top = 0; vel[player] = Math.abs(vel[player]); } else if (top >$game.height() - $tile.height()) { top =$game.height() - $tile.height(); vel[player] = -1 * Math.abs(vel[player]); } Once again, we want to clip the positions. This time though, we’re actually going to use the velocities we have rather than zeroing them out. Instead: bounce! It’s nice, because it makes the game feel more ‘realistic’ (for some definitions of the word). And that’s about it. With that, we can have the boxes moving around and interacting as they did last night. We’re actually starting to get a game going here. 
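Stripped of the jQuery plumbing, the per-tick movement model described above (accelerate while a key is held, apply friction toward zero, cap the speed, then bounce off the walls) is just a few arithmetic steps. The following single-axis Python sketch is my own restatement with made-up constants, shown only to make the order of operations explicit:

```python
PER_TICK_ACCELERATION = 0.6   # made-up constants, for illustration only
PER_TICK_FRICTION = 0.1
VELOCITY_CAP = 8.0

def tick_axis(pos, vel, key_down, lo=0.0, hi=400.0):
    """One tick of the accelerate / friction / cap / bounce cycle for a single axis."""
    if key_down:                                  # accelerate while the key is held
        vel += PER_TICK_ACCELERATION
    if abs(vel) < PER_TICK_FRICTION:              # friction would overshoot zero: just stop
        vel = 0.0
    else:                                         # otherwise nudge the velocity toward zero
        vel -= PER_TICK_FRICTION if vel > 0 else -PER_TICK_FRICTION
    vel = max(-VELOCITY_CAP, min(VELOCITY_CAP, vel))   # cap the speed
    pos += vel
    if pos < lo:                                  # bounce: clamp and keep the speed
        pos, vel = lo, abs(vel)
    elif pos > hi:
        pos, vel = hi, -abs(vel)
    return pos, vel

pos, vel = 200.0, 0.0
for frame in range(200):                          # hold the key for 30 frames, then release
    pos, vel = tick_axis(pos, vel, key_down=(frame < 30))
print(round(pos, 1), vel)                         # the box coasts, bounces once off the right wall, and friction stops it
```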
One other tweak is the control code: var tiles = new Tiles(); var controls = new Controls(); var MS_PER_GAME = 60 * 1000; var startTime = new Date().getTime(); var running = true;$(function() {\ncontrols.run();\n});\n\nfunction tick() {\nvar soFar = new Date().getTime() - startTime;\nvar remainingSec = Math.floor((MS_PER_GAME - soFar) / 1000);\n\nif (remainingSec > 0) {\n$('#tiles #countdown').text(remainingSec + ' sec remaining'); } else { stop(); } if (running) { setTimeout(tick, 1000/30); } } function run() { tiles.run(); controls.run(); startTime = new Date().getTime(); running = true; tick(); return false; } function stop() { tiles.stop(); controls.stop(); startTime = new Date().getTime() - MS_PER_GAME; running = false;$('#tiles #countdown').text('game over');\n\nreturn false;\n}\n\n\nTechnically, it’s not a gameloop, since everything is done asynchronously via setTimeout (and make absolutely sure that you don’t use setInterval…), but it’s close enough. What this does give us though is a strict time before the game ends. Otherwise, the boxes will eventually fill up, and where’s the fun in that? (Although that might be an interesting alternative end condition).\n\nAfter that, all I have to figure out is scoring. And I have another 6 hours until the one day mark. If I can make it by then, I’ll feel pretty good–and can use all of the rest of the time for polish. I’m thinking some simple music, sound effects, a title screen (initial letters in sand?). Of course, I still have to figure out the scoring algorithm..\n\nSame as yesterday, the entire source (warning: still ugly) if available on GitHub: jpverkamp/sandbox-battle\n\nDemo:\n\nPlayer 1 - Blue\nUp\nLeft\nRight\nDown\nPlayer 2 - Red\nUp\nLeft\nRight\nDown\nPlayer 3 - Green\nUp\nLeft\nRight\nDown\nPlayer 4 - Pink\nUp\nLeft\nRight\nDown\n\n[ Run! ] [ Stop! ]\n\nI’m sure there are bugs… And I’m working on it right at the moment. If you have any questions or comments though, feel free to drop me a line below.\n\ncomments powered by Disqus" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7876598,"math_prob":0.94605505,"size":7987,"snap":"2019-35-2019-39","text_gpt3_token_len":2115,"char_repetition_ratio":0.1211324,"word_repetition_ratio":0.041825093,"special_character_ratio":0.31012896,"punctuation_ratio":0.18249534,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9652802,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-18T01:33:56Z\",\"WARC-Record-ID\":\"<urn:uuid:c9281692-f1fc-46eb-91f9-9d3f5d814943>\",\"Content-Length\":\"78524\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6bc39c97-0d0b-4e12-82f7-4228fb47e97b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d5e290f3-68e2-4535-8879-a3a4e55eface>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://blog.jverkamp.com/2014/08/23/ludum-dare-30-hints-of-a-game/\",\"WARC-Payload-Digest\":\"sha1:SRSWGMVD5IF77SOBTGCYHRGZBFIEHAX2\",\"WARC-Block-Digest\":\"sha1:VOYUN5KTZUNALUBLS25HCD4N77USV6EH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027313536.31_warc_CC-MAIN-20190818002820-20190818024820-00080.warc.gz\"}"}
https://www.teachoo.com/8041/2515/Ex-4.2--4/category/Ex-4.2/
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Subscribe to our Youtube Channel - https://you.tube/teachoo\n\n1. Chapter 4 Class 7 Simple Equations\n2. Serial order wise\n3. Ex 4.2\n\nTranscript\n\nEx 4.2, 4 Solve the following equations: (a) 10p = 10010p = 100 Dividing 10 both sides 10𝑝/10 = 100/10 p = 10 Ex 4.2, 4 Solve the following equations: (b) 10p + 10 = 100 10p + 10 = 100 Subtracting both sides by 10 10p + 10 − 10 = 100 − 10 10p = 90 Dividing both sides by 10 10𝑝/10 = 90/10 p = 9 Ex 4.2, 4 Solve the following equations: (c) 𝑝/4 = 5𝑝/4 = 5 Multiply both sides by 4 𝑝/4 × 4 = 5 × 4 p = 20 Ex 4.2, 4 Solve the following equations: (d) (−𝑝)/3 = 5(−𝑝)/3 = 5 Multiplying both sides by 3 (−𝑝)/3 × 3 = 5 × 3 −p × 1 = 15 −p = 15 Multiplying both sides by −1 −p × −1 = 15 × −1 p = −15 Ex 4.2, 4 Solve the following equations: (e) 3𝑝/4 = 63𝑝/4 = 6 Multiplying both sides by 4 3𝑝/4 × 4 = 6 × 4 3p = 24 Diving both sides by 3 3𝑝/3 = 24/3 p = 8 Ex 4.2, 4 Solve the following equations: (f) 3s = –9 Ex 4.2, 4 Solve the following equations: (g) 3s + 12 = 0 3s + 12 = 0 Subtracting both sides by 12 3s + 12 − 12 = 0 × 12 3s = −12 Diving both sides by 3 3𝑠/3 = (−12)/3 s = −4 Ex 4.2, 4 Solve the following equations: (h) 3s = 03s = 0 Diving both sides by 3 3𝑠/3 = 0/3 s = 0 Ex 4.2, 4 Solve the following equations: (i) 2q = 62q = 6 Diving both sides by 2 2𝑞/2 = 6/2 q = 3 Ex 4.2, 4 Solve the following equations: (j) 2q – 6 = 02q − 6 = 0 Adding 6 to both sides 2q − 6 + 6 = 0 + 6 2q = 6 Diving both sides by 2 2𝑞/2 = 6/2 q = 3 Ex 4.2, 4 Solve the following equations: (k) 2q + 6 = 02q + 6 = 0 Subtracting both sides by 6 2q + 6 − 6 = 0 − 6 2q = −6 Diving both sides by 2 2𝑞/2 = (−6)/2 q = −3 Ex 4.2, 4 Solve the following equations: (l) 2q + 6 = 122q + 6 = 12 Subtracting both sides by 6 2q + 6 − 6 = 12 − 6 2q = 6 Diving both sides by 2 2𝑞/2 = 6/2 q = 3\n\nEx 4.2", null, "" ]
[ null, "https://d1avenlh0i1xmr.cloudfront.net/e3782dbf-099b-4433-bbb1-0e91f00f8522/slide21.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/9384fa33-b8bf-435c-be82-db8e5caebbad/slide22.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/57ea7fca-5642-49ff-aa2f-c975400b8c0c/slide23.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/61424cf1-862f-4f80-8904-95d813715d53/slide24.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/1aaf8b99-9176-446e-876a-10e44deb48ae/slide25.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/777e352d-e435-4ade-b47c-3c47025e437b/slide26.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/5d8009b9-d536-47c4-8bb8-cd3bed5f19a7/slide27.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/45d83253-5494-4a21-8f4f-9d12db5bd165/slide28.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/7a72bf11-729b-4b4b-a4fc-072489b7184e/slide29.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/51cfadfd-50f5-4822-b73b-2c8d9fa7f6f3/slide30.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/c71e05ca-0752-4b05-8a08-10b7ef7e4625/slide31.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/90f0e538-4a42-4faa-87d8-4c6e0a03393e/slide32.jpg", null, "https://delan5sxrj8jj.cloudfront.net/misc/Davneet+Singh.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9058087,"math_prob":1.0000079,"size":1840,"snap":"2020-45-2020-50","text_gpt3_token_len":878,"char_repetition_ratio":0.208061,"word_repetition_ratio":0.25336322,"special_character_ratio":0.5125,"punctuation_ratio":0.09297052,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000048,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,8,null,5,null,6,null,7,null,7,null,4,null,7,null,5,null,5,null,6,null,6,null,7,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T08:24:27Z\",\"WARC-Record-ID\":\"<urn:uuid:08a13488-c8df-4d8a-8d02-31ab83bd2a54>\",\"Content-Length\":\"50626\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25c83425-d98f-4c0b-bfd3-6c243122474f>\",\"WARC-Concurrent-To\":\"<urn:uuid:f3c25167-6e98-4732-ab8a-311e4ebf3be6>\",\"WARC-IP-Address\":\"52.44.17.83\",\"WARC-Target-URI\":\"https://www.teachoo.com/8041/2515/Ex-4.2--4/category/Ex-4.2/\",\"WARC-Payload-Digest\":\"sha1:E3ISWINTJQ5ZVK3HZJXXZ4EURQ267O2V\",\"WARC-Block-Digest\":\"sha1:26HTZLLYV33IS2TO3QHGDXA2SNIMDSJ5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141191511.46_warc_CC-MAIN-20201127073750-20201127103750-00661.warc.gz\"}"}
https://answers.everydaycalculation.com/compare-fractions/50-20-and-10-42
[ "Solutions by everydaycalculation.com\n\n## Compare 50/20 and 10/42\n\n1st number: 2 10/20, 2nd number: 10/42\n\n50/20 is greater than 10/42\n\n#### Steps for comparing fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 20 and 42 is 420\n\nNext, find the equivalent fraction of both fractional numbers with denominator 420\n2. For the 1st fraction, since 20 × 21 = 420,\n50/20 = 50 × 21/20 × 21 = 1050/420\n3. Likewise, for the 2nd fraction, since 42 × 10 = 420,\n10/42 = 10 × 10/42 × 10 = 100/420\n4. Since the denominators are now the same, the fraction with the bigger numerator is the greater fraction\n5. 1050/420 > 100/420 or 50/20 > 10/42\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8839316,"math_prob":0.9872908,"size":880,"snap":"2023-40-2023-50","text_gpt3_token_len":310,"char_repetition_ratio":0.18949772,"word_repetition_ratio":0.0,"special_character_ratio":0.45,"punctuation_ratio":0.07446808,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9941625,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T23:43:18Z\",\"WARC-Record-ID\":\"<urn:uuid:4cca2f06-cd5a-40e6-ad59-133381206cb6>\",\"Content-Length\":\"7597\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cdd83f9e-8000-41af-a71b-0c766558661e>\",\"WARC-Concurrent-To\":\"<urn:uuid:8f1aec5c-b365-4d78-aa17-83aa10705f1d>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/compare-fractions/50-20-and-10-42\",\"WARC-Payload-Digest\":\"sha1:A2DQBBPJWTXIDXP522JE23A7EYEFGN6F\",\"WARC-Block-Digest\":\"sha1:GHWOQPEJVXC4BBN6FRSYI67HM63HKJAJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510529.8_warc_CC-MAIN-20230929222230-20230930012230-00478.warc.gz\"}"}
https://www.hackmath.net/en/word-math-problems/unit-conversion?tag_id=3
[ "# Unit conversion + ratio - math problems\n\n#### Number of problems found: 66\n\n• Photo egative", null, "Negative dimensions are 36mm and 28mm. What will be the photo size in the 21:4 ratio?\n• Squares ratio", null, "The first square has a side length of a = 6 cm. The second square has a circumference of 6 dm. Calculate the proportions of the perimeters and the proportions of the contents of these squares? (Write the ratio in the basic form). (Perimeter = 4 * a, conte\n• Marlon", null, "Marlon drew a scale drawing of a summer camp. In real life, the sand volleyball court is 8 meters wide. It is 4 centimeters wide in the drawing. What is the drawing's scale factor? Simplify your answer and write it as a ratio, using a colon.\n• The sides", null, "The sides of a rectangle are in a ratio of 2:3, and its perimeter is 1 1/4 inches. What are the lengths of its side? Draw it.\n• Car", null, "Car is going 95 km per hour. How long distance goes in 1 minute?\n• Thrift woman", null, "Calculate how long grandmother will save to new shoes priced 108 euros if save 3 Eur monthly.\n• Two brothers", null, "The two brothers were to be divided according to the will of land at an area of 1ha 86a 30m2 in a ratio of 5:4. How many will everyone get?\n• TV diagonal", null, "Diagonal TV is 0.56 m long, how big the television sreen is if the aspect ratio is 16:9?\n• Lunch", null, "Jane eats whole lunch for the 30 minutes. Which part of the lunch is eaten in 180 seconds?\n• Map 3", null, "Map scale is M = 1: 25000 . Two cottages which are shown on the map are actually 15 km away. What is its distance on the map?\n• Cooling liquid", null, "Cooling liquid is diluted with water in a ratio of 3:2 (3 parts by volume of coolant with 2 volumes of water). How many volumes of coolant must be prepared for a total 0.7 dm3 (liters) of the mixture?\n• Nutballs", null, "The dough for nutballs contains, among other things, two basic raw materials: flour and nuts in a ratio of 2:1. How much flour and how many nuts are needed for 1 kg of dough if \"other\" is 100g?\n• Scale of plan", null, "On the plan of the village in the scale of 1: 1000 a rectangular garden is drawn. Its dimensions on the plan are 25mm and 28mm. Determine the area of the garden in ares.\n• Clock", null, "What distance will pass end of 8 cm long hour hand for 15 minutes?\n• Forest nursery", null, "In Forest nursery plant one pine to 1.9 m2. Calculate how many plants are planting in area 362 acres.\n• Server", null, "Calculate how many average minutes a year is a webserver is unavailable, the availability is 99.99%.\n• Geometric plan", null, "At what scale the building plan if one side of the building is 45m long and 12mm long on a plan?\n• Map", null, "Forest has an area of ​​36 ha. How much area is occupied by forest on the map at scale 1:500?\n• Two villages", null, "On the map with a scale of 1:40000 are drawn two villages actually 16 km away. What is their distance on the map?\n• Widescreen monitor", null, "Computer businesses hit by a wave of widescreen monitors and televisions. Calculate the area of ​​the LCD monitor with a diagonal size 20 inches at ratio 4:3 and then 16:9 aspect ratio. Is buying widescreen monitors with the same diagonal more advantageou\n\nDo you have an interesting mathematical word problem that you can't solve it? Submit a math problem, and we can try to solve it.\n\nWe will send a solution to your e-mail address. Solved examples are also published here. 
Please enter the e-mail correctly and check whether you don't have a full mailbox.\n\nPlease do not submit problems from current active competitions such as Mathematical Olympiad, correspondence seminars etc...\n\nCheck out our ratio calculator." ]
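Many of the problems listed above reduce to splitting a quantity in a given ratio. As one illustration (my own arithmetic, using the "Two brothers" problem from the list), 1 ha 86 a 30 m² is 18 630 m², and dividing it in the ratio 5:4 gives 10 350 m² and 8 280 m²:

```python
def split_in_ratio(total, parts):
    unit = total / sum(parts)              # size of one ratio share
    return [unit * p for p in parts]

total_m2 = 1 * 10_000 + 86 * 100 + 30     # 1 ha + 86 a + 30 m2 = 18630 m2
print(split_in_ratio(total_m2, (5, 4)))   # [10350.0, 8280.0]
```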
[ null, "https://www.hackmath.net/thumb/13/t_4913.jpg", null, "https://www.hackmath.net/thumb/3/t_18103.jpg", null, "https://www.hackmath.net/thumb/53/t_16053.jpg", null, "https://www.hackmath.net/thumb/85/t_6285.jpg", null, "https://www.hackmath.net/thumb/91/t_391.jpg", null, "https://www.hackmath.net/thumb/20/t_220.jpg", null, "https://www.hackmath.net/thumb/30/t_6330.jpg", null, "https://www.hackmath.net/thumb/5/t_2405.jpg", null, "https://www.hackmath.net/thumb/77/t_1577.jpg", null, "https://www.hackmath.net/thumb/76/t_1276.jpg", null, "https://www.hackmath.net/thumb/92/t_5292.jpg", null, "https://www.hackmath.net/thumb/63/t_18163.jpg", null, "https://www.hackmath.net/thumb/2/t_6502.jpg", null, "https://www.hackmath.net/thumb/62/t_1962.jpg", null, "https://www.hackmath.net/thumb/15/t_215.jpg", null, "https://www.hackmath.net/thumb/40/t_440.jpg", null, "https://www.hackmath.net/thumb/53/t_8253.jpg", null, "https://www.hackmath.net/thumb/82/t_382.jpg", null, "https://www.hackmath.net/thumb/1/t_12101.jpg", null, "https://www.hackmath.net/thumb/62/t_162.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9237762,"math_prob":0.97196996,"size":3192,"snap":"2020-45-2020-50","text_gpt3_token_len":841,"char_repetition_ratio":0.10696361,"word_repetition_ratio":0.0032258064,"special_character_ratio":0.26597744,"punctuation_ratio":0.10656934,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97425103,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-06T01:41:17Z\",\"WARC-Record-ID\":\"<urn:uuid:a88d720a-f8d1-4ed3-95ba-32df83f7b187>\",\"Content-Length\":\"39599\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2a097364-a269-4f4b-9a6a-53b465bdaf6b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d564497e-3668-4807-90fe-bdff2462d706>\",\"WARC-IP-Address\":\"172.67.143.236\",\"WARC-Target-URI\":\"https://www.hackmath.net/en/word-math-problems/unit-conversion?tag_id=3\",\"WARC-Payload-Digest\":\"sha1:HTMA63XEECGZFCD5PIDZU3QB6T34KXEN\",\"WARC-Block-Digest\":\"sha1:Y3B4FZN6W7CGZW3MOVO45TOGNBTMGOSD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141753148.92_warc_CC-MAIN-20201206002041-20201206032041-00009.warc.gz\"}"}
http://basic-calculator.com/
[ "# Basic Calculator\n\nBasic Calculator is used to perform simple calculations online. This calculator suits basic needs. The basic calculator offers four functions, which include addition, subtraction, multiplication and division. This is useful for calculations in lower level math courses such as solving the variable in algebra and adding up the angles in geometry.\n\nFor clarification, Basic Calculator (this site) is different from TI's BASIC calculator—the BASIC calculator from Texas Instruments is a calculator powered by the language BASIC." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9251541,"math_prob":0.9926548,"size":525,"snap":"2019-35-2019-39","text_gpt3_token_len":92,"char_repetition_ratio":0.1765835,"word_repetition_ratio":0.0,"special_character_ratio":0.16761905,"punctuation_ratio":0.10465116,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99510264,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-25T04:43:42Z\",\"WARC-Record-ID\":\"<urn:uuid:619eb9a8-0242-40af-ac28-24b0162a0b24>\",\"Content-Length\":\"5635\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca0bb067-7152-4f76-8a2a-632fb7cb944d>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ab2ad58-9878-47b8-9692-a05090bf3198>\",\"WARC-IP-Address\":\"89.238.188.205\",\"WARC-Target-URI\":\"http://basic-calculator.com/\",\"WARC-Payload-Digest\":\"sha1:KT6SY5BKNMQ37EDYILV2KZAALTHG7MZ5\",\"WARC-Block-Digest\":\"sha1:ROENTS7Q5ONFKAVZRK77MH4DT5QCWDCH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027323067.50_warc_CC-MAIN-20190825042326-20190825064326-00250.warc.gz\"}"}
https://www.edureka.co/blog/linear-discriminant-analysis/
[ "", null, "# How To Implement Linear Discriminant Analysis in R?\n\nLast updated on Sep 09,2019 466 Views", null, "", null, "### myMock Interview Service for Real Tech Jobs", null, "### myMock Interview Service for Real Tech Jobs\n\n• Mock interview in latest tech domains i.e JAVA, AI, DEVOPS,etc\n• Get interviewed by leading tech experts\n• Real time assessment report and video recording\n\nLinear Discriminant Analysis is a very popular Machine Learning technique that is used to solve classification problems. In this article we will try to understand the intuition and mathematics behind this technique. An example of implementation of LDA in R is also provided.\n\nSo let us get started then\n\n## Linear Discriminant Analysis Assumption\n\nLinear Discriminant Analysis is based on the following assumptions:\n\n• The dependent variable Y is discrete. In this article we will assume that the dependent variable is binary and takes class values {+1, -1}. The probability of a sample belonging to class +1, i.e P(Y = +1) = p. Therefore, the probability of a sample belonging to class -1 is 1-p.\n\n• The independent variable(s) X come from gaussian distributions. The mean of the gaussian distribution depends on the class label Y. i.e. if Yi = +1, then the mean of Xi is 𝜇+1, else it is 𝜇-1. The variance 𝜎2 is the same for both classes. Mathematically speaking, X|(Y = +1) ~ N(𝜇+1, 𝜎2) and X|(Y = -1) ~ N(𝜇-1, 𝜎2), where N denotes the normal distribution.\n\nWith this information it is possible to construct a joint distribution P(X,Y) for the independent and dependent variable. Therefore, LDA belongs to the class of Generative Classifier Models. A closely related generative classifier is Quadratic Discriminant Analysis(QDA). It is based on all the same assumptions of LDA, except that the class variances are different.\n\nLet us continue with Linear Discriminant Analysis article and see\n\n## Intuition\n\nConsider the class conditional gaussian distributions for X given the class Y. The below figure shows the density functions of the distributions. In this figure, if Y = +1, then the mean of X is 10 and if Y = -1, the mean is 2. The variance is 2 in both cases.", null, "Now suppose a new value of X is given to us. Lets just denote it as xi. The task is to determine the most likely class label for this xi, i.e. yi. For simplicity assume that the probability p of the sample belonging to class +1 is the same as that of belonging to class -1, i.e. p=0.5.\n\nIntuitively, it makes sense to say that if xi is closer to 𝜇+1 than it is to 𝜇-1, then it is more likely that yi = +1. More formally, yi = +1 if:\n\n|xi – 𝜇+1| < |xi – 𝜇-1|\n\nNormalizing both sides by the standard deviation:\n\n|xi – 𝜇+1|/𝜎 < |xi – 𝜇-1|/𝜎\n\nSquaring both sides:\n\n(xi – 𝜇+1)2/𝜎2 < (xi – 𝜇-1)2/𝜎2\n\nxi2/𝜎2 + 𝜇+12/𝜎2 – 2 xi𝜇+1/𝜎2 < xi2/𝜎2 + 𝜇-12/𝜎2 – 2 xi𝜇-1/𝜎2\n\n2 xi (𝜇-1 – 𝜇+1)/𝜎2  – (𝜇-12/𝜎2 – 𝜇+12/𝜎2) < 0\n\n-2 xi (𝜇-1 – 𝜇+1)/𝜎2  + (𝜇-12/𝜎2 – 𝜇+12/𝜎2) > 0\n\nThe above expression is of the form bxi + c > 0 where b = -2(𝜇-1 – 𝜇+1)/𝜎2 and c = (𝜇-12/𝜎2 – 𝜇+12/𝜎2).\n\nIt is apparent that the form of the equation is linear, hence the name Linear Discriminant Analysis.\n\nLet us continue with Linear Discriminant Analysis article and see,\n\n## Mathematical Description of LDA\n\nThe mathematical derivation of the expression for LDA is based on concepts like Bayes Rule and Bayes Optimal Classifier. Interested readers are encouraged to read more about these concepts. 
One way to derive the expression can be found here.\n\nWe will provide the expression directly for our specific case where Y takes two classes {+1, -1}. We will also extend the intuition shown in the previous section to the general case where X can be multidimensional. Let’s say that there are k independent variables. In this case, the class means 𝜇-1 and 𝜇+1 would be vectors of dimensions k*1 and the variance-covariance matrix 𝜮 would be a matrix of dimensions k*k\n\nThe classifier function is given as\n\nY = h(X) = sign(bTX + c)\n\nWhere,\n\nb = -2 𝜮 -1(𝜇-1 – 𝜇+1)\n\nc = 𝜇-1T𝜮 -1𝜇-1 – 𝜇-1T𝜮 -1𝜇-1 -2 ln{(1-p)/p}\n\nThe sign function returns +1 if the expression bTx + c > 0, otherwise it returns -1. The natural log term in c is present to adjust for the fact that the class probabilities need not be equal for both the classes, i.e. p could be any value between (0, 1), and not just 0.5.\n\n## Learning the Model Parameters\n\nGiven a dataset with N data-points (x1, y1), (x2, y2), … (xn, yn), we need to estimate p, 𝜇-1, 𝜇+1 and 𝜮. A statistical estimation technique called Maximum Likelihood Estimation is used to estimate these parameters. The expressions for the above parameters are given below.\n\n𝜇+1= (1/N+1) * 𝚺i:yi=+1 xi\n\n𝜇-1 = (1/N-1) * 𝚺i:yi=-1 xi\n\np = N+1/N\n\n𝜮 = (1/N) * 𝚺i=1:N(xi – 𝜇i)(xi – 𝜇i)T\n\nWhere N+1 = number of samples where yi = +1 and N-1 = number of samples where yi = -1.\n\nWith the above expressions, the LDA model is complete. One can estimate the model parameters using the above expressions and use them in the classifier function to get the class label of any new input value of independent variable X.\n\nLet us continue with Linear Discriminant Analysis article and see\n\n## Example in R\n\nThe following code generates a dummy data set with two independent variables X1 and X2 and a dependent variable Y. For X1 and X2, we will generate sample from two multivariate gaussian distributions with means 𝜇-1= (2, 2) and 𝜇+1= (6, 6). 40% of the samples belong to class +1 and 60% belong to class -1, therefore p = 0.4.\n\n```library(ggplot2)\nlibrary(MASS)\nlibrary(mvtnorm)\n#Variance Covariance matrix for random bivariate gaussian sample\nvar_covar = matrix(data = c(1.5, 0.3, 0.3, 1.5), nrow=2)\n#Random bivariate gaussian samples for class +1\nXplus1 <- rmvnorm(400, mean = c(6, 6), sigma = var_covar)\n# Random bivariate gaussian samples for class -1\nXminus1 <- rmvnorm(600, mean = c(2, 2), sigma = var_covar)\n#Samples for the dependent variable\nY_samples <- c(rep(1, 400), rep(-1, 600))\n#Combining the independent and dependent variables into a dataframe\ndataset <- as.data.frame(cbind(rbind(Xplus1, Xminus1), Y_samples))\ncolnames(dataset) <- c(\"X1\", \"X2\", \"Y\")\ndataset\\$Y <- as.character(dataset\\$Y)\n#Plot the above samples and color by class labels\nggplot(data = dataset)+\ngeom_point(aes(X1, X2, color = Y))\n```", null, "In the above figure, the blue dots represent samples from class +1 and the red ones represent the sample from class -1. There is some overlap between the samples, i.e. the classes cannot be separated completely with a simple line. 
In other words, they are not perfectly linearly separable.\n\nWe will now train an LDA model using the above data.\n\n```#Train the LDA model using the above dataset\nlda_model <- lda(Y ~ X1 + X2, data = dataset)\n#Print the LDA model\nlda_model\n```\n\nOutput:\n\nPrior probabilities of groups:\n\n-1    1\n\n0.6  0.4\n\nGroup means:\n\nX1       X2\n\n-1 1.928108 2.010226\n\n1  5.961004 6.015438\n\nCoefficients of linear discriminants:\n\nLD1\n\nX1 0.5646116\n\nX2 0.5004175\n\nAs one can see, the class means learnt by the model are (1.928108, 2.010226) for class -1 and (5.961004, 6.015438) for class +1. These means are very close to the class means we had used to generate these random samples. The prior probability for group +1 is the estimate for the parameter p. The b vector contains the linear discriminant coefficients.\n\nWe will now use the above model to predict the class labels for the same data.\n\n```#Predicting the class for each sample in the above dataset using the LDA model\ny_pred <- predict(lda_model, newdata = dataset)\\$class\n#Adding the predictions as another column in the dataframe\ndataset\\$Y_lda_prediction <- as.character(y_pred)\n#Plot the above samples and color by actual and predicted class labels\ndataset\\$Y_actual_pred <- paste(dataset\\$Y, dataset\\$Y_lda_prediction, sep=\",\")\nggplot(data = dataset)+\ngeom_point(aes(X1, X2, color = Y_actual_pred))\n```\n\nIn the above figure, the purple samples are from class +1 that were classified correctly by the LDA model. Similarly, the red samples are from class -1 that were classified correctly. The blue ones are from class +1 but were classified incorrectly as -1. The green ones are from class -1 which were misclassified as +1. The misclassifications are happening because these samples are closer to the other class mean (centre) than their actual class mean.\n\nThis brings us to the end of this article. Check out the R training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. Edureka's Data Analytics with R training will help you gain expertise in R Programming, Data Manipulation, Exploratory Data Analysis, Data Visualization, Data Mining, Regression, Sentiment Analysis and using R Studio for real life case studies on Retail, Social Media.\n\nGot a question for us? Please mention it in the comments section of this article and we will get back to you as soon as possible." ]
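For readers who want to see the closed-form estimates from the text in code, below is a minimal NumPy sketch of the same two-class LDA. It is not a drop-in replacement for R's `MASS::lda`, and all names are illustrative. It assumes the constant c uses 𝜇+1 in its second quadratic term (the expression in the text appears to repeat 𝜇-1), and the toy data's covariance is simplified to 1.5 times the identity.

```python
# Minimal NumPy sketch of two-class LDA with labels in {+1, -1} (illustrative only).
import numpy as np

def fit_lda(X, y):
    """Maximum-likelihood estimates of p, the class means and the shared covariance."""
    X_plus, X_minus = X[y == 1], X[y == -1]
    p = len(X_plus) / len(X)                                   # P(Y = +1)
    mu_plus, mu_minus = X_plus.mean(axis=0), X_minus.mean(axis=0)
    centred = np.where((y == 1)[:, None], X - mu_plus, X - mu_minus)
    sigma = centred.T @ centred / len(X)                       # pooled covariance (divide by N)
    return p, mu_plus, mu_minus, sigma

def lda_predict(X, p, mu_plus, mu_minus, sigma):
    """Classify with sign(b^T x + c), with b and c as described in the text."""
    sigma_inv = np.linalg.inv(sigma)
    b = -2 * sigma_inv @ (mu_minus - mu_plus)
    c = mu_minus @ sigma_inv @ mu_minus - mu_plus @ sigma_inv @ mu_plus - 2 * np.log((1 - p) / p)
    return np.where(X @ b + c > 0, 1, -1)

# Toy data similar to the R example above (covariance simplified to 1.5 * I).
rng = np.random.default_rng(0)
X = np.vstack([rng.multivariate_normal([6, 6], 1.5 * np.eye(2), 400),
               rng.multivariate_normal([2, 2], 1.5 * np.eye(2), 600)])
y = np.array([1] * 400 + [-1] * 600)
params = fit_lda(X, y)
print((lda_predict(X, *params) == y).mean())                   # training accuracy on the toy data
```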
[ null, "https://googleads.g.doubleclick.net/pagead/viewthroughconversion/977137586/", null, "https://secure.gravatar.com/avatar/caa951201d8ba48c9e177b5996e29ccb", null, "https://d1jnx9ba8s6j9r.cloudfront.net/blog/wp-content/themes/edu-new/img/mi-new-launch.svg", null, "https://d1jnx9ba8s6j9r.cloudfront.net/blog/wp-content/themes/edu-new/img/myMock-monile-banner-bg.svg", null, "https://www.edureka.co/blog/wp-content/uploads/2019/07/r1.png", null, "data:image/gif;base64,R0lGODlhAQABAIAAAMLCwgAAACH5BAAAAAAALAAAAAABAAEAAAICRAEAOw==", null, "data:image/gif;base64,R0lGODlhAQABAIAAAMLCwgAAACH5BAAAAAAALAAAAAABAAEAAAICRAEAOw==", null, "https://d1jnx9ba8s6j9r.cloudfront.net/blog/wp-content/themes/edu-new/img/blog-001.svg", null, "https://d1jnx9ba8s6j9r.cloudfront.net/blog/wp-content/themes/edu-new/img/blog-tick.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83802325,"math_prob":0.9901565,"size":8310,"snap":"2019-51-2020-05","text_gpt3_token_len":2410,"char_repetition_ratio":0.127137,"word_repetition_ratio":0.0372191,"special_character_ratio":0.28267148,"punctuation_ratio":0.11488863,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99903786,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,4,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T16:50:18Z\",\"WARC-Record-ID\":\"<urn:uuid:4ebcc849-c121-4d11-a496-742e48dba5f9>\",\"Content-Length\":\"188363\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a7d3af4a-ac2d-4faa-b610-ada291c45c6b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d0305795-709e-4ef8-adb8-f35f808238fd>\",\"WARC-IP-Address\":\"13.249.44.26\",\"WARC-Target-URI\":\"https://www.edureka.co/blog/linear-discriminant-analysis/\",\"WARC-Payload-Digest\":\"sha1:DPRXQUQ2MCVAR6IFP46MLXHLJDUM2Z5S\",\"WARC-Block-Digest\":\"sha1:73YJHJGXDMNZJ5LVTNISYO6D6ESHPWK2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540544696.93_warc_CC-MAIN-20191212153724-20191212181724-00111.warc.gz\"}"}
https://git.openssl.org/gitweb/?p=openssl.git;a=blob;f=crypto/ec/ec_mult.c;h=979b4540ef34e9f3d60eede324852940d5bef640
[ "979b4540ef34e9f3d60eede324852940d5bef640\n1 /* crypto/ec/ec_mult.c */\n2 /*\n3  * Originally written by Bodo Moeller and Nils Larsch for the OpenSSL project.\n4  */\n5 /* ====================================================================\n6  * Copyright (c) 1998-2007 The OpenSSL Project.  All rights reserved.\n7  *\n8  * Redistribution and use in source and binary forms, with or without\n9  * modification, are permitted provided that the following conditions\n10  * are met:\n11  *\n12  * 1. Redistributions of source code must retain the above copyright\n13  *    notice, this list of conditions and the following disclaimer.\n14  *\n15  * 2. Redistributions in binary form must reproduce the above copyright\n16  *    notice, this list of conditions and the following disclaimer in\n17  *    the documentation and/or other materials provided with the\n18  *    distribution.\n19  *\n20  * 3. All advertising materials mentioning features or use of this\n21  *    software must display the following acknowledgment:\n22  *    \"This product includes software developed by the OpenSSL Project\n23  *    for use in the OpenSSL Toolkit. (http://www.openssl.org/)\"\n24  *\n25  * 4. The names \"OpenSSL Toolkit\" and \"OpenSSL Project\" must not be used to\n26  *    endorse or promote products derived from this software without\n27  *    prior written permission. For written permission, please contact\n28  *    [email protected].\n29  *\n30  * 5. Products derived from this software may not be called \"OpenSSL\"\n31  *    nor may \"OpenSSL\" appear in their names without prior written\n32  *    permission of the OpenSSL Project.\n33  *\n34  * 6. Redistributions of any form whatsoever must retain the following\n35  *    acknowledgment:\n36  *    \"This product includes software developed by the OpenSSL Project\n37  *    for use in the OpenSSL Toolkit (http://www.openssl.org/)\"\n38  *\n39  * THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT AS IS'' AND ANY\n40  * EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n41  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\n42  * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE OpenSSL PROJECT OR\n43  * ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n44  * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT\n45  * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n46  * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n47  * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,\n48  * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n49  * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED\n50  * OF THE POSSIBILITY OF SUCH DAMAGE.\n51  * ====================================================================\n52  *\n53  * This product includes cryptographic software written by Eric Young\n54  * ([email protected]).  This product includes software written by Tim\n55  * Hudson ([email protected]).\n56  *\n57  */\n58 /* ====================================================================\n59  * Copyright 2002 Sun Microsystems, Inc. 
ALL RIGHTS RESERVED.\n60  * Portions of this software developed by SUN MICROSYSTEMS, INC.,\n61  * and contributed to the OpenSSL project.\n62  */\n64 #include <string.h>\n65 #include <openssl/err.h>\n67 #include \"internal/bn_int.h\"\n68 #include \"ec_lcl.h\"\n70 /*\n71  * This file implements the wNAF-based interleaving multi-exponentation method\n73  * for multiplication with precomputation, we use wNAF splitting\n75  */\n77 /* structure for precomputed multiples of the generator */\n78 typedef struct ec_pre_comp_st {\n79     const EC_GROUP *group;      /* parent EC_GROUP object */\n80     size_t blocksize;           /* block size for wNAF splitting */\n81     size_t numblocks;           /* max. number of blocks for which we have\n82                                  * precomputation */\n83     size_t w;                   /* window size */\n84     EC_POINT **points;          /* array with pre-calculated multiples of\n85                                  * generator: 'num' pointers to EC_POINT\n86                                  * objects followed by a NULL */\n87     size_t num;                 /* numblocks * 2^(w-1) */\n88     int references;\n89 } EC_PRE_COMP;\n91 /* functions to manage EC_PRE_COMP within the EC_GROUP extra_data framework */\n92 static void *ec_pre_comp_dup(void *);\n93 static void ec_pre_comp_free(void *);\n94 static void ec_pre_comp_clear_free(void *);\n96 static EC_PRE_COMP *ec_pre_comp_new(const EC_GROUP *group)\n97 {\n98     EC_PRE_COMP *ret = NULL;\n100     if (!group)\n101         return NULL;\n103     ret = OPENSSL_malloc(sizeof(EC_PRE_COMP));\n104     if (!ret) {\n105         ECerr(EC_F_EC_PRE_COMP_NEW, ERR_R_MALLOC_FAILURE);\n106         return ret;\n107     }\n108     ret->group = group;\n109     ret->blocksize = 8;         /* default */\n110     ret->numblocks = 0;\n111     ret->w = 4;                 /* default */\n112     ret->points = NULL;\n113     ret->num = 0;\n114     ret->references = 1;\n115     return ret;\n116 }\n118 static void *ec_pre_comp_dup(void *src_)\n119 {\n120     EC_PRE_COMP *src = src_;\n122     /* no need to actually copy, these objects never change! 
*/\n126     return src_;\n127 }\n129 static void ec_pre_comp_free(void *pre_)\n130 {\n131     int i;\n132     EC_PRE_COMP *pre = pre_;\n134     if (!pre)\n135         return;\n137     i = CRYPTO_add(&pre->references, -1, CRYPTO_LOCK_EC_PRE_COMP);\n138     if (i > 0)\n139         return;\n141     if (pre->points) {\n142         EC_POINT **p;\n144         for (p = pre->points; *p != NULL; p++)\n145             EC_POINT_free(*p);\n146         OPENSSL_free(pre->points);\n147     }\n148     OPENSSL_free(pre);\n149 }\n151 static void ec_pre_comp_clear_free(void *pre_)\n152 {\n153     int i;\n154     EC_PRE_COMP *pre = pre_;\n156     if (!pre)\n157         return;\n159     i = CRYPTO_add(&pre->references, -1, CRYPTO_LOCK_EC_PRE_COMP);\n160     if (i > 0)\n161         return;\n163     if (pre->points) {\n164         EC_POINT **p;\n166         for (p = pre->points; *p != NULL; p++) {\n167             EC_POINT_clear_free(*p);\n168             OPENSSL_cleanse(p, sizeof *p);\n169         }\n170         OPENSSL_free(pre->points);\n171     }\n172     OPENSSL_cleanse(pre, sizeof *pre);\n173     OPENSSL_free(pre);\n174 }\n176 /*\n177  * TODO: table should be optimised for the wNAF-based implementation,\n178  * sometimes smaller windows will give better performance (thus the\n179  * boundaries should be increased)\n180  */\n181 #define EC_window_bits_for_scalar_size(b) \\\n182                 ((size_t) \\\n183                  ((b) >= 2000 ? 6 : \\\n184                   (b) >=  800 ? 5 : \\\n185                   (b) >=  300 ? 4 : \\\n186                   (b) >=   70 ? 3 : \\\n187                   (b) >=   20 ? 2 : \\\n188                   1))\n190 /*-\n191  * Compute\n192  *      \\sum scalars[i]*points[i],\n193  * also including\n194  *      scalar*generator\n195  * in the addition if scalar != NULL\n196  */\n197 int ec_wNAF_mul(const EC_GROUP *group, EC_POINT *r, const BIGNUM *scalar,\n198                 size_t num, const EC_POINT *points[], const BIGNUM *scalars[],\n199                 BN_CTX *ctx)\n200 {\n201     BN_CTX *new_ctx = NULL;\n202     const EC_POINT *generator = NULL;\n203     EC_POINT *tmp = NULL;\n204     size_t totalnum;\n205     size_t blocksize = 0, numblocks = 0; /* for wNAF splitting */\n206     size_t pre_points_per_block = 0;\n207     size_t i, j;\n208     int k;\n209     int r_is_inverted = 0;\n210     int r_is_at_infinity = 1;\n211     size_t *wsize = NULL;       /* individual window sizes */\n212     signed char **wNAF = NULL;  /* individual wNAFs */\n213     size_t *wNAF_len = NULL;\n214     size_t max_len = 0;\n215     size_t num_val;\n216     EC_POINT **val = NULL;      /* precomputation */\n217     EC_POINT **v;\n218     EC_POINT ***val_sub = NULL; /* pointers to sub-arrays of 'val' or\n219                                  * 'pre_comp->points' */\n220     const EC_PRE_COMP *pre_comp = NULL;\n221     int num_scalar = 0;         /* flag: will be set to 1 if 'scalar' must be\n222                                  * treated like other scalars, i.e.\n223                                  * precomputation is not available */\n224     int ret = 0;\n226     if (group->meth != r->meth) {\n227         ECerr(EC_F_EC_WNAF_MUL, EC_R_INCOMPATIBLE_OBJECTS);\n228         return 0;\n229     }\n231     if ((scalar == NULL) && (num == 0)) {\n232         return EC_POINT_set_to_infinity(group, r);\n233     }\n235     for (i = 0; i < num; i++) {\n236         if (group->meth != points[i]->meth) {\n237             ECerr(EC_F_EC_WNAF_MUL, EC_R_INCOMPATIBLE_OBJECTS);\n238             return 
0;\n239         }\n240     }\n242     if (ctx == NULL) {\n243         ctx = new_ctx = BN_CTX_new();\n244         if (ctx == NULL)\n245             goto err;\n246     }\n248     if (scalar != NULL) {\n249         generator = EC_GROUP_get0_generator(group);\n250         if (generator == NULL) {\n251             ECerr(EC_F_EC_WNAF_MUL, EC_R_UNDEFINED_GENERATOR);\n252             goto err;\n253         }\n255         /* look if we can use precomputed multiples of generator */\n257         pre_comp =\n258             EC_EX_DATA_get_data(group->extra_data, ec_pre_comp_dup,\n259                                 ec_pre_comp_free, ec_pre_comp_clear_free);\n261         if (pre_comp && pre_comp->numblocks\n262             && (EC_POINT_cmp(group, generator, pre_comp->points, ctx) ==\n263                 0)) {\n264             blocksize = pre_comp->blocksize;\n266             /*\n267              * determine maximum number of blocks that wNAF splitting may\n268              * yield (NB: maximum wNAF length is bit length plus one)\n269              */\n270             numblocks = (BN_num_bits(scalar) / blocksize) + 1;\n272             /*\n273              * we cannot use more blocks than we have precomputation for\n274              */\n275             if (numblocks > pre_comp->numblocks)\n276                 numblocks = pre_comp->numblocks;\n278             pre_points_per_block = (size_t)1 << (pre_comp->w - 1);\n280             /* check that pre_comp looks sane */\n281             if (pre_comp->num != (pre_comp->numblocks * pre_points_per_block)) {\n282                 ECerr(EC_F_EC_WNAF_MUL, ERR_R_INTERNAL_ERROR);\n283                 goto err;\n284             }\n285         } else {\n286             /* can't use precomputation */\n287             pre_comp = NULL;\n288             numblocks = 1;\n289             num_scalar = 1;     /* treat 'scalar' like 'num'-th element of\n290                                  * 'scalars' */\n291         }\n292     }\n294     totalnum = num + numblocks;\n296     wsize = OPENSSL_malloc(totalnum * sizeof wsize);\n297     wNAF_len = OPENSSL_malloc(totalnum * sizeof wNAF_len);\n298     wNAF = OPENSSL_malloc((totalnum + 1) * sizeof wNAF); /* includes space\n299                                                              * for pivot */\n300     val_sub = OPENSSL_malloc(totalnum * sizeof val_sub);\n302     /* Ensure wNAF is initialised in case we end up going to err */\n303     if (wNAF)\n304         wNAF = NULL;         /* preliminary pivot */\n306     if (!wsize || !wNAF_len || !wNAF || !val_sub) {\n307         ECerr(EC_F_EC_WNAF_MUL, ERR_R_MALLOC_FAILURE);\n308         goto err;\n309     }\n311     /*\n312      * num_val will be the total number of temporarily precomputed points\n313      */\n314     num_val = 0;\n316     for (i = 0; i < num + num_scalar; i++) {\n317         size_t bits;\n319         bits = i < num ? BN_num_bits(scalars[i]) : BN_num_bits(scalar);\n320         wsize[i] = EC_window_bits_for_scalar_size(bits);\n321         num_val += (size_t)1 << (wsize[i] - 1);\n322         wNAF[i + 1] = NULL;     /* make sure we always have a pivot */\n323         wNAF[i] =\n324             bn_compute_wNAF((i < num ? 
scalars[i] : scalar), wsize[i],\n325                             &wNAF_len[i]);\n326         if (wNAF[i] == NULL)\n327             goto err;\n328         if (wNAF_len[i] > max_len)\n329             max_len = wNAF_len[i];\n330     }\n332     if (numblocks) {\n333         /* we go here iff scalar != NULL */\n335         if (pre_comp == NULL) {\n336             if (num_scalar != 1) {\n337                 ECerr(EC_F_EC_WNAF_MUL, ERR_R_INTERNAL_ERROR);\n338                 goto err;\n339             }\n340             /* we have already generated a wNAF for 'scalar' */\n341         } else {\n342             signed char *tmp_wNAF = NULL;\n343             size_t tmp_len = 0;\n345             if (num_scalar != 0) {\n346                 ECerr(EC_F_EC_WNAF_MUL, ERR_R_INTERNAL_ERROR);\n347                 goto err;\n348             }\n350             /*\n351              * use the window size for which we have precomputation\n352              */\n353             wsize[num] = pre_comp->w;\n354             tmp_wNAF = bn_compute_wNAF(scalar, wsize[num], &tmp_len);\n355             if (!tmp_wNAF)\n356                 goto err;\n358             if (tmp_len <= max_len) {\n359                 /*\n360                  * One of the other wNAFs is at least as long as the wNAF\n361                  * belonging to the generator, so wNAF splitting will not buy\n362                  * us anything.\n363                  */\n365                 numblocks = 1;\n366                 totalnum = num + 1; /* don't use wNAF splitting */\n367                 wNAF[num] = tmp_wNAF;\n368                 wNAF[num + 1] = NULL;\n369                 wNAF_len[num] = tmp_len;\n370                 if (tmp_len > max_len)\n371                     max_len = tmp_len;\n372                 /*\n373                  * pre_comp->points starts with the points that we need here:\n374                  */\n375                 val_sub[num] = pre_comp->points;\n376             } else {\n377                 /*\n378                  * don't include tmp_wNAF directly into wNAF array - use wNAF\n379                  * splitting and include the blocks\n380                  */\n382                 signed char *pp;\n383                 EC_POINT **tmp_points;\n385                 if (tmp_len < numblocks * blocksize) {\n386                     /*\n387                      * possibly we can do with fewer blocks than estimated\n388                      */\n389                     numblocks = (tmp_len + blocksize - 1) / blocksize;\n390                     if (numblocks > pre_comp->numblocks) {\n391                         ECerr(EC_F_EC_WNAF_MUL, ERR_R_INTERNAL_ERROR);\n392                         goto err;\n393                     }\n394                     totalnum = num + numblocks;\n395                 }\n397                 /* split wNAF in 'numblocks' parts */\n398                 pp = tmp_wNAF;\n399                 tmp_points = pre_comp->points;\n401                 for (i = num; i < totalnum; i++) {\n402                     if (i < totalnum - 1) {\n403                         wNAF_len[i] = blocksize;\n404                         if (tmp_len < blocksize) {\n405                             ECerr(EC_F_EC_WNAF_MUL, ERR_R_INTERNAL_ERROR);\n406                             goto err;\n407                         }\n408                         tmp_len -= blocksize;\n409                     } else\n410                         /*\n411                          * last block gets whatever is left (this could be\n412                          * more or less than 
'blocksize'!)\n413                          */\n414                         wNAF_len[i] = tmp_len;\n416                     wNAF[i + 1] = NULL;\n417                     wNAF[i] = OPENSSL_malloc(wNAF_len[i]);\n418                     if (wNAF[i] == NULL) {\n419                         ECerr(EC_F_EC_WNAF_MUL, ERR_R_MALLOC_FAILURE);\n420                         OPENSSL_free(tmp_wNAF);\n421                         goto err;\n422                     }\n423                     memcpy(wNAF[i], pp, wNAF_len[i]);\n424                     if (wNAF_len[i] > max_len)\n425                         max_len = wNAF_len[i];\n427                     if (*tmp_points == NULL) {\n428                         ECerr(EC_F_EC_WNAF_MUL, ERR_R_INTERNAL_ERROR);\n429                         OPENSSL_free(tmp_wNAF);\n430                         goto err;\n431                     }\n432                     val_sub[i] = tmp_points;\n433                     tmp_points += pre_points_per_block;\n434                     pp += blocksize;\n435                 }\n436                 OPENSSL_free(tmp_wNAF);\n437             }\n438         }\n439     }\n441     /*\n442      * All points we precompute now go into a single array 'val'.\n443      * 'val_sub[i]' is a pointer to the subarray for the i-th point, or to a\n444      * subarray of 'pre_comp->points' if we already have precomputation.\n445      */\n446     val = OPENSSL_malloc((num_val + 1) * sizeof val);\n447     if (val == NULL) {\n448         ECerr(EC_F_EC_WNAF_MUL, ERR_R_MALLOC_FAILURE);\n449         goto err;\n450     }\n451     val[num_val] = NULL;        /* pivot element */\n453     /* allocate points for precomputation */\n454     v = val;\n455     for (i = 0; i < num + num_scalar; i++) {\n456         val_sub[i] = v;\n457         for (j = 0; j < ((size_t)1 << (wsize[i] - 1)); j++) {\n458             *v = EC_POINT_new(group);\n459             if (*v == NULL)\n460                 goto err;\n461             v++;\n462         }\n463     }\n464     if (!(v == val + num_val)) {\n465         ECerr(EC_F_EC_WNAF_MUL, ERR_R_INTERNAL_ERROR);\n466         goto err;\n467     }\n469     if (!(tmp = EC_POINT_new(group)))\n470         goto err;\n472     /*-\n473      * prepare precomputed values:\n474      *    val_sub[i] :=     points[i]\n475      *    val_sub[i] := 3 * points[i]\n476      *    val_sub[i] := 5 * points[i]\n477      *    ...\n478      */\n479     for (i = 0; i < num + num_scalar; i++) {\n480         if (i < num) {\n481             if (!EC_POINT_copy(val_sub[i], points[i]))\n482                 goto err;\n483         } else {\n484             if (!EC_POINT_copy(val_sub[i], generator))\n485                 goto err;\n486         }\n488         if (wsize[i] > 1) {\n489             if (!EC_POINT_dbl(group, tmp, val_sub[i], ctx))\n490                 goto err;\n491             for (j = 1; j < ((size_t)1 << (wsize[i] - 1)); j++) {\n493                     (group, val_sub[i][j], val_sub[i][j - 1], tmp, ctx))\n494                     goto err;\n495             }\n496         }\n497     }\n499     if (!EC_POINTs_make_affine(group, num_val, val, ctx))\n500         goto err;\n502     r_is_at_infinity = 1;\n504     for (k = max_len - 1; k >= 0; k--) {\n505         if (!r_is_at_infinity) {\n506             if (!EC_POINT_dbl(group, r, r, ctx))\n507                 goto err;\n508         }\n510         for (i = 0; i < totalnum; i++) {\n511             if (wNAF_len[i] > (size_t)k) {\n512                 int digit = wNAF[i][k];\n513                 int is_neg;\n515                 if 
(digit) {\n516                     is_neg = digit < 0;\n518                     if (is_neg)\n519                         digit = -digit;\n521                     if (is_neg != r_is_inverted) {\n522                         if (!r_is_at_infinity) {\n523                             if (!EC_POINT_invert(group, r, ctx))\n524                                 goto err;\n525                         }\n526                         r_is_inverted = !r_is_inverted;\n527                     }\n529                     /* digit > 0 */\n531                     if (r_is_at_infinity) {\n532                         if (!EC_POINT_copy(r, val_sub[i][digit >> 1]))\n533                             goto err;\n534                         r_is_at_infinity = 0;\n535                     } else {\n537                             (group, r, r, val_sub[i][digit >> 1], ctx))\n538                             goto err;\n539                     }\n540                 }\n541             }\n542         }\n543     }\n545     if (r_is_at_infinity) {\n546         if (!EC_POINT_set_to_infinity(group, r))\n547             goto err;\n548     } else {\n549         if (r_is_inverted)\n550             if (!EC_POINT_invert(group, r, ctx))\n551                 goto err;\n552     }\n554     ret = 1;\n556  err:\n557     if (new_ctx != NULL)\n558         BN_CTX_free(new_ctx);\n559     EC_POINT_free(tmp);\n560     if (wsize != NULL)\n561         OPENSSL_free(wsize);\n562     if (wNAF_len != NULL)\n563         OPENSSL_free(wNAF_len);\n564     if (wNAF != NULL) {\n565         signed char **w;\n567         for (w = wNAF; *w != NULL; w++)\n568             OPENSSL_free(*w);\n570         OPENSSL_free(wNAF);\n571     }\n572     if (val != NULL) {\n573         for (v = val; *v != NULL; v++)\n574             EC_POINT_clear_free(*v);\n576         OPENSSL_free(val);\n577     }\n578     if (val_sub != NULL) {\n579         OPENSSL_free(val_sub);\n580     }\n581     return ret;\n582 }\n584 /*-\n585  * ec_wNAF_precompute_mult()\n586  * creates an EC_PRE_COMP object with preprecomputed multiples of the generator\n587  * for use with wNAF splitting as implemented in ec_wNAF_mul().\n588  *\n589  * 'pre_comp->points' is an array of multiples of the generator\n590  * of the following form:\n591  * points =     generator;\n592  * points = 3 * generator;\n593  * ...\n594  * points[2^(w-1)-1] =     (2^(w-1)-1) * generator;\n595  * points[2^(w-1)]   =     2^blocksize * generator;\n596  * points[2^(w-1)+1] = 3 * 2^blocksize * generator;\n597  * ...\n598  * points[2^(w-1)*(numblocks-1)-1] = (2^(w-1)) *  2^(blocksize*(numblocks-2)) * generator\n599  * points[2^(w-1)*(numblocks-1)]   =              2^(blocksize*(numblocks-1)) * generator\n600  * ...\n601  * points[2^(w-1)*numblocks-1]     = (2^(w-1)) *  2^(blocksize*(numblocks-1)) * generator\n602  * points[2^(w-1)*numblocks]       = NULL\n603  */\n604 int ec_wNAF_precompute_mult(EC_GROUP *group, BN_CTX *ctx)\n605 {\n606     const EC_POINT *generator;\n607     EC_POINT *tmp_point = NULL, *base = NULL, **var;\n608     BN_CTX *new_ctx = NULL;\n609     BIGNUM *order;\n610     size_t i, bits, w, pre_points_per_block, blocksize, numblocks, num;\n611     EC_POINT **points = NULL;\n612     EC_PRE_COMP *pre_comp;\n613     int ret = 0;\n615     /* if there is an old EC_PRE_COMP object, throw it away */\n616     EC_EX_DATA_free_data(&group->extra_data, ec_pre_comp_dup,\n617                          ec_pre_comp_free, ec_pre_comp_clear_free);\n619     if ((pre_comp = ec_pre_comp_new(group)) == NULL)\n620         return 0;\n622     
generator = EC_GROUP_get0_generator(group);\n623     if (generator == NULL) {\n624         ECerr(EC_F_EC_WNAF_PRECOMPUTE_MULT, EC_R_UNDEFINED_GENERATOR);\n625         goto err;\n626     }\n628     if (ctx == NULL) {\n629         ctx = new_ctx = BN_CTX_new();\n630         if (ctx == NULL)\n631             goto err;\n632     }\n634     BN_CTX_start(ctx);\n635     order = BN_CTX_get(ctx);\n636     if (order == NULL)\n637         goto err;\n639     if (!EC_GROUP_get_order(group, order, ctx))\n640         goto err;\n641     if (BN_is_zero(order)) {\n642         ECerr(EC_F_EC_WNAF_PRECOMPUTE_MULT, EC_R_UNKNOWN_ORDER);\n643         goto err;\n644     }\n646     bits = BN_num_bits(order);\n647     /*\n648      * The following parameters mean we precompute (approximately) one point\n649      * per bit. TBD: The combination 8, 4 is perfect for 160 bits; for other\n650      * bit lengths, other parameter combinations might provide better\n651      * efficiency.\n652      */\n653     blocksize = 8;\n654     w = 4;\n655     if (EC_window_bits_for_scalar_size(bits) > w) {\n656         /* let's not make the window too small ... */\n657         w = EC_window_bits_for_scalar_size(bits);\n658     }\n660     numblocks = (bits + blocksize - 1) / blocksize; /* max. number of blocks\n661                                                      * to use for wNAF\n662                                                      * splitting */\n664     pre_points_per_block = (size_t)1 << (w - 1);\n665     num = pre_points_per_block * numblocks; /* number of points to compute\n666                                              * and store */\n668     points = OPENSSL_malloc(sizeof(EC_POINT *) * (num + 1));\n669     if (!points) {\n670         ECerr(EC_F_EC_WNAF_PRECOMPUTE_MULT, ERR_R_MALLOC_FAILURE);\n671         goto err;\n672     }\n674     var = points;\n675     var[num] = NULL;            /* pivot */\n676     for (i = 0; i < num; i++) {\n677         if ((var[i] = EC_POINT_new(group)) == NULL) {\n678             ECerr(EC_F_EC_WNAF_PRECOMPUTE_MULT, ERR_R_MALLOC_FAILURE);\n679             goto err;\n680         }\n681     }\n683     if (!(tmp_point = EC_POINT_new(group)) || !(base = EC_POINT_new(group))) {\n684         ECerr(EC_F_EC_WNAF_PRECOMPUTE_MULT, ERR_R_MALLOC_FAILURE);\n685         goto err;\n686     }\n688     if (!EC_POINT_copy(base, generator))\n689         goto err;\n691     /* do the precomputation */\n692     for (i = 0; i < numblocks; i++) {\n693         size_t j;\n695         if (!EC_POINT_dbl(group, tmp_point, base, ctx))\n696             goto err;\n698         if (!EC_POINT_copy(*var++, base))\n699             goto err;\n701         for (j = 1; j < pre_points_per_block; j++, var++) {\n702             /*\n703              * calculate odd multiples of the current base point\n704              */\n705             if (!EC_POINT_add(group, *var, tmp_point, *(var - 1), ctx))\n706                 goto err;\n707         }\n709         if (i < numblocks - 1) {\n710             /*\n711              * get the next base (multiply current one by 2^blocksize)\n712              */\n713             size_t k;\n715             if (blocksize <= 2) {\n716                 ECerr(EC_F_EC_WNAF_PRECOMPUTE_MULT, ERR_R_INTERNAL_ERROR);\n717                 goto err;\n718             }\n720             if (!EC_POINT_dbl(group, base, tmp_point, ctx))\n721                 goto err;\n722             for (k = 2; k < blocksize; k++) {\n723                 if (!EC_POINT_dbl(group, base, base, ctx))\n724                     goto err;\n725        
     }\n726         }\n727     }\n729     if (!EC_POINTs_make_affine(group, num, points, ctx))\n730         goto err;\n732     pre_comp->group = group;\n733     pre_comp->blocksize = blocksize;\n734     pre_comp->numblocks = numblocks;\n735     pre_comp->w = w;\n736     pre_comp->points = points;\n737     points = NULL;\n738     pre_comp->num = num;\n740     if (!EC_EX_DATA_set_data(&group->extra_data, pre_comp,\n741                              ec_pre_comp_dup, ec_pre_comp_free,\n742                              ec_pre_comp_clear_free))\n743         goto err;\n744     pre_comp = NULL;\n746     ret = 1;\n747  err:\n748     if (ctx != NULL)\n749         BN_CTX_end(ctx);\n750     if (new_ctx != NULL)\n751         BN_CTX_free(new_ctx);\n752     if (pre_comp)\n753         ec_pre_comp_free(pre_comp);\n754     if (points) {\n755         EC_POINT **p;\n757         for (p = points; *p != NULL; p++)\n758             EC_POINT_free(*p);\n759         OPENSSL_free(points);\n760     }\n761     EC_POINT_free(tmp_point);\n762     EC_POINT_free(base);\n763     return ret;\n764 }\n766 int ec_wNAF_have_precompute_mult(const EC_GROUP *group)\n767 {\n768     if (EC_EX_DATA_get_data\n769         (group->extra_data, ec_pre_comp_dup, ec_pre_comp_free,\n770          ec_pre_comp_clear_free) != NULL)\n771         return 1;\n772     else\n773         return 0;\n774 }" ]
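The file above is built around width-w NAF (wNAF) scalar representations; see `bn_compute_wNAF` and the interleaving loop in `ec_wNAF_mul`. To make the representation concrete, here is a textbook sketch of computing wNAF digits in Python. It is only an illustration of the idea, not OpenSSL's implementation.

```python
# Textbook width-w NAF of a non-negative integer (illustrative, not OpenSSL's bn_compute_wNAF).
def wnaf(k: int, w: int) -> list[int]:
    """Digits d_0, d_1, ... with k = sum(d_i * 2**i); each d_i is 0 or odd with |d_i| < 2**(w-1)."""
    digits = []
    while k > 0:
        if k & 1:
            d = k % (1 << w)             # least significant w bits
            if d >= (1 << (w - 1)):
                d -= 1 << w              # choose the negative representative
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits

digits = wnaf(413, 4)
assert sum(d << i for i, d in enumerate(digits)) == 413
print(digits)  # [-3, 0, 0, 0, 0, -3, 0, 0, 0, 1]: any w consecutive digits have at most one nonzero
```

With digits of this form, scalar multiplication only ever needs the precomputed odd multiples ±P, ±3P, ..., ±(2^(w-1)-1)P, which is exactly what the `val_sub` arrays in the code above hold.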
https://mosspravka.pro/trigonometry-worksheet/
[ "", null, "Home coloring page Trigonometry Worksheet\n\n# Trigonometry Worksheet\n\nA complete list of all of our math worksheets relating to trigonometry. Answers are also provided on the worksheet so students are able to check their work and self assess.", null, "Teaching Trigonometry Trigonometry, Trigonometry worksheets\n\nTable of Contents\n\n### Trigonometry is a mathematical method used to define relations between elements of a triangle.", null, "Trigonometry worksheet. A worksheet designed to be printed back to back and folded to make an a5 booklet. Trigonometry is one of the most useful topics in mathematics, and these thorough, detailed worksheets will give students a solid foundation in it. See more ideas about trigonometry worksheets, trigonometry, worksheets.\n\nTrigonometry worksheets on this page you will find: Free trigonometry cheat sheet compilation of all trigonometric identities such as those found in trigonometry books trigonometry cheat sheets algebra formulas one of the best teaching strategies employed in most classrooms today is worksheets. It includes basics like finding the ratios of sin cos and tan for various right angled triangles.\n\nComplementary and supplementary word problems worksheet. Trigonometry is the study of triangles. It is helpful to memorize the following triangles equilateral side length 2.\n\nApplications of trigonometry rub and reveal worksheet. Problem types find angle find length. 45 applications of trigonometry use the previous example to put these steps in the correct order:.\n\nTrignometry is one of the major section of advance mathematics for different exams including competitive exams.trignometry study materials pdf with practice questions worksheet is available here to download in english and hindi language. (worksheet t1 may help you!!) 1. Below are a number of worksheets covering trigonometry problems.\n\nRead  Mouse Paint Coloring Page\n\nPrintable math worksheets for grade 10. You will need to work out which to use and how! View sohcahtoa worksheet.pdf from bio 134 at oregon institute of technology.\n\nYou will find addition lessons, worksheets, homework, and quizzes in each section. Full curriculum of exercises and videos. Questions 1 and 2 require sine, questions 3 and 4 require cosine, question 5 and 6 require tangent.\n\nTrig ratios to find missing lengths and angles together with questions on elevation/depression as well as bearings make for a good challenge. Choose if you want the problems to be given in metric or imperial units. The angle of elevation of the top of the building at a distance of 50 m from its foot on a horizontal plane is found to be 60 degree.\n\nSum of the angles in a triangle is 180 degree worksheet. Trigonometry worksheets free worksheets with answer keys. Use these practical worksheets to ground students in the law of sines, the law of cosines, tangents, trigonometric functions, and much more!\n\nDownload free printable worksheets for cbse class 10 trigonometry with important topic wise questions, students must practice the ncert class 10 trigonometry worksheets, question banks, workbooks and exercises with solutions which will help them in revision of important concepts class 10 trigonometry. Sine, cosine, and tangent, which get shortened to sin, cos, and tan. A = b = c sin a sin b sin c.\n\nChoose a specific addition topic below to view all of our worksheets in that content area. 
Trigonometry study materials pdf with practice questions worksheet: Printable worksheets > trigonometry worksheets.\n\n43 applications of trigonometry rub and reveal. Our maths trigonometry worksheets with answers will help your child or student to grasp and understand basic and more advanced ways of solving trigonometric equations. Trigonometry word problems worksheet with answers question 1 :\n\nRead  Wall Decor Ideas For Living Room\n\nLearn trigonometry for free—right triangles, the unit circle, graphs, identities, and more. Developing learners will be able to calculate the missing angles of a right angle triangle using trigonometry. Trigonometry in 3d we are learning to:\n\nFree printable trigonometry worksheets, right triangle trigonometry worksheet and inverse trig functions worksheet are some main things we want to show you based on the gallery title. Right triangle trigonometry worksheet fresh 10 best of trigonometry sin cos tan worksheets in 2020 trigonometry worksheets trigonometry right to education. Make sure you are happy with the following topics before continuing.\n\n44 applications of trigonometry the sine rule can also be rearranged to give: Trigonometry worksheets, worksheets for 4th grade, 5th grade, 6th grade, 7th grade and middle school Printable math worksheets for grade 10.\n\nThese worksheets for grade 10 trigonometry, class assignments and practice tests have been. For other icse worksheet for class 9 mathematic check out main page of entrancei. Examples in 4 different formats for your preference;\n\nExplore the surplus collection of trigonometry worksheets that cover key skills in quadrants and angles, measuring angles in degrees and radians, conversion between degrees, minutes and radians, understanding the six trigonometric ratios, unit circles, frequently used trigonometric identities, evaluating, proving and verifying trigonometric expressions and the list go on. When we talk concerning trigonometry worksheets and answers pdf, below we will see several variation of images to complete your ideas. Law of sines and cosines worksheet (this sheet is a summative worksheet that focuses on deciding when to use the law of sines or cosines as well as on using both formulas to solve for a single triangle's side or angle) law of sines.\n\nRead  Thanksgiving Lesson Ideas For Preschool\n\nNumber of problems 4 problems 8 problems 12 problems 15 problems. 
Use trigonometry to solve problems in three dimensions." ]
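The ratios and rules this page keeps referring to, SOHCAHTOA and the sine rule a/sin A = b/sin B = c/sin C, are easy to check numerically. The helpers below are an illustrative way to verify worksheet answers in Python; they are not part of any of the worksheets.

```python
# Quick checks for right-triangle ratios (SOHCAHTOA) and the sine rule (illustrative only).
import math

def opposite_from_angle(angle_deg: float, hypotenuse: float) -> float:
    """Right triangle: opposite = hypotenuse * sin(angle)."""
    return hypotenuse * math.sin(math.radians(angle_deg))

def missing_angle(opposite: float, adjacent: float) -> float:
    """Right triangle: angle in degrees from tan(angle) = opposite / adjacent."""
    return math.degrees(math.atan2(opposite, adjacent))

def sine_rule_side(a: float, angle_a_deg: float, angle_b_deg: float) -> float:
    """Sine rule a / sin A = b / sin B, rearranged for b."""
    return a * math.sin(math.radians(angle_b_deg)) / math.sin(math.radians(angle_a_deg))

print(round(opposite_from_angle(30, 10), 2))  # 5.0
print(round(missing_angle(1, 1), 1))          # 45.0
print(round(sine_rule_side(8, 60, 40), 2))    # about 5.94
```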
[ null, "https://sstatic1.histats.com/0.gif", null, "https://i.pinimg.com/originals/3e/08/9f/3e089f228dc75102914ee6f612b06434.png", null, "https://i.pinimg.com/originals/b1/59/01/b159016123438b3b2a4c68b54f5f71b7.jpg", null, "https://i.pinimg.com/originals/f0/4c/53/f04c530812df8a028d834024ee9bd394.jpg", null, "https://i.pinimg.com/originals/e9/60/0b/e9600ba2b03a2e3bbe9908b2570d67e9.jpg", null, "https://i.pinimg.com/originals/75/73/53/757353a3feec53c816a0debed9df1030.png", null, "https://i.pinimg.com/originals/dd/23/70/dd2370e28c59ff43aa7ef13f2047fb35.jpg", null, "https://i.pinimg.com/originals/02/91/62/0291622047ab1fe2a9d0cf1dd38a31e1.jpg", null, "https://i.pinimg.com/originals/17/85/9a/17859a8859e86bee573f2e6f628a53de.jpg", null, "https://i.pinimg.com/originals/7f/ba/7c/7fba7ca1d72a96308f93c39265d9fa47.jpg", null, "https://i.pinimg.com/originals/7b/2f/8e/7b2f8e64c327fed0cebbc66800469650.png", null, "https://i.pinimg.com/originals/34/d7/d5/34d7d51e5557272da1c8f66b480cd37f.jpg", null, "https://i.pinimg.com/originals/f3/2d/0c/f32d0c70d94f8dde6c28a171399570be.jpg", null, "https://i.pinimg.com/originals/b1/59/01/b159016123438b3b2a4c68b54f5f71b7.jpg", null, "https://i.pinimg.com/originals/ab/16/35/ab1635fdf2b073d05a8e3ece8754fda3.jpg", null, "https://i.pinimg.com/originals/f5/d5/c8/f5d5c8146114c85f83c69a6825fc49e0.jpg", null, "https://i.pinimg.com/originals/0c/b2/e9/0cb2e996f12d912e51a01376cbc7762f.jpg", null, "https://i.pinimg.com/originals/12/9e/15/129e1506a52831e3cab30b10bd6751ad.jpg", null, "https://i.pinimg.com/736x/f6/f1/6f/f6f16fa106269a727d643320cdfa59ad.jpg", null, "https://i.pinimg.com/originals/a1/46/b9/a146b9f14db54329f779951040e499f7.jpg", null, "https://i.pinimg.com/originals/51/f8/b5/51f8b53f38a30129692243119ab951cc.jpg", null, "https://i.pinimg.com/originals/b5/26/26/b52626f57ab88be3edca7b05145198a8.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86428565,"math_prob":0.8341693,"size":6627,"snap":"2021-04-2021-17","text_gpt3_token_len":1349,"char_repetition_ratio":0.23901555,"word_repetition_ratio":0.017311608,"special_character_ratio":0.17881395,"punctuation_ratio":0.09532539,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99699765,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,null,null,null,null,null,null,1,null,3,null,null,null,3,null,6,null,null,null,null,null,null,null,1,null,10,null,null,null,null,null,8,null,null,null,null,null,null,null,5,null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-13T01:55:02Z\",\"WARC-Record-ID\":\"<urn:uuid:9ce80617-0d37-4935-b2b1-646ff4c285b1>\",\"Content-Length\":\"145462\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1344dc2e-989c-4e92-8511-cfbae467bffd>\",\"WARC-Concurrent-To\":\"<urn:uuid:3a87986b-959a-4e63-a7e9-41d5fc9ef175>\",\"WARC-IP-Address\":\"104.21.45.228\",\"WARC-Target-URI\":\"https://mosspravka.pro/trigonometry-worksheet/\",\"WARC-Payload-Digest\":\"sha1:G5TCZOLK6MJQMYUEDLO36XAJJUL33N6K\",\"WARC-Block-Digest\":\"sha1:UBPVEYIWNPFWH2QIE5UFWWZ3KWAOXCZ2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038071212.27_warc_CC-MAIN-20210413000853-20210413030853-00014.warc.gz\"}"}
https://warwick.ac.uk/fac/sci/masdoc/current/msc-modules/ma916/cm/parametric/
[ "# Parametric Setting\n\nIn order to solve the coupled system analytically or numerically, we need to give a description of our surface. There are several approaches we could take; parametric, graph, level set, phase field. We will consider the first two methods.\n\n• Parametric approach - Mesh point will evolve with normal velocity leading to mesh degeneration. We aim to resolve this issue with the DeTurck trick.\n• Graph approach - Although this assumption is restrictive in terms of modelling, it will enable us to prove local existence. The ideas and techniques will be a useful tool in understanding the general situation.\n\nWe parametrise the evolving closed curve", null, "$\\Gamma(t) \\subset \\mathbb{R}^2$ using a stationary reference manifold,", null, "$\\Gamma(t) = x(S^1,t) , \\qquad x:\\mathcal{S}^1\\times[0,T)\\to\\mathbb{R}^2.$", null, "Mesh degeneration\n\nA key problem with this approach is that our mesh points move with normal velocity resulting in mesh degeneration. As a result, this will produce large errors in our numerical schemes. To resolve this issue we will use the DeTurck trick, which is a reparametrisation that introduces tangential motion.\n\nForced curve shortening flow with the DeTurck trick\n\nApplying the DeTurck reparametrisation to curve shortening flow with forcing, we have", null, "$\\partial_t \\hat{x}_\\alpha=(-\\kappa+f)\\, \\nu\\circ \\hat{x}_\\alpha -\\tfrac{1}{\\alpha} \\, d\\hat{x}_\\alpha(V_\\alpha)$\n\nIn local coordinates the equation becomes:", null, "$\\alpha \\hat{X}_{\\alpha, t} + (1-\\alpha)(\\nu \\cdot \\hat{X}_{\\alpha,t}) \\, \\nu = |\\hat{X}_{\\alpha, \\theta \\theta}|^{-2} \\, \\hat{X}_{\\alpha,\\theta\\theta} + f(\\hat{X}_{\\alpha}) \\, \\nu$\nSemi-discrete scheme\n\nDiscretising the spatial variables using linear finite elements we obtain the semi-discrete scheme", null, "$\\int_0^{2\\pi} \\big{(} \\alpha \\hat{X}_{ht} \\cdot \\phi_h + (1-\\alpha)(\\nu_h\\cdot \\hat{X}_{ht})(\\nu_h\\cdot \\phi_h) \\big{)} |\\hat{X}_{h\\theta}|^2 + \\hat{X}_{h\\theta} \\cdot \\phi_{h\\theta} \\, d\\theta = \\int_0^{2\\pi} |\\hat{X}_{h\\theta}|^2 \\, f(\\hat{X}_{h}) \\, \\nu_h \\cdot \\phi_h \\, d\\theta.$\n\nA suitable convergence result can be proved, however the constant in the error estimate will blow up as", null, "$\\alpha \\rightarrow 0$. The problem of convergence with", null, "$\\alpha = 0$ remains open." ]
[ null, "https://mathtex.warwick.ac.uk/cgi-bin/mathtex.cgi", null, "https://mathtex.warwick.ac.uk/cgi-bin/mathtex.cgi", null, "https://warwick.ac.uk/fac/sci/masdoc/current/msc-modules/ma916/cm/parametric/tikz.png", null, "https://mathtex.warwick.ac.uk/cgi-bin/mathtex.cgi", null, "https://mathtex.warwick.ac.uk/cgi-bin/mathtex.cgi", null, "https://mathtex.warwick.ac.uk/cgi-bin/mathtex.cgi", null, "https://mathtex.warwick.ac.uk/cgi-bin/mathtex.cgi", null, "https://mathtex.warwick.ac.uk/cgi-bin/mathtex.cgi", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8572588,"math_prob":0.9982542,"size":1434,"snap":"2020-34-2020-40","text_gpt3_token_len":287,"char_repetition_ratio":0.11328671,"word_repetition_ratio":0.0,"special_character_ratio":0.17642957,"punctuation_ratio":0.09795918,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99971575,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,null,null,3,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-26T10:43:51Z\",\"WARC-Record-ID\":\"<urn:uuid:61d2e996-851e-4187-8eb7-94287598a039>\",\"Content-Length\":\"28571\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dec5a91c-476b-47fa-b6cb-0def76031609>\",\"WARC-Concurrent-To\":\"<urn:uuid:23f57d1f-31a2-4554-974c-b57b0fde60d0>\",\"WARC-IP-Address\":\"137.205.28.41\",\"WARC-Target-URI\":\"https://warwick.ac.uk/fac/sci/masdoc/current/msc-modules/ma916/cm/parametric/\",\"WARC-Payload-Digest\":\"sha1:YQZC3CWTMVPWYKUOSERBK7XADIRFAV5A\",\"WARC-Block-Digest\":\"sha1:6Z52CL4PS5VSDWOTRHTOE4B2SDYJ3J5R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400241093.64_warc_CC-MAIN-20200926102645-20200926132645-00797.warc.gz\"}"}
https://www.allbusinesstemplates.com/cn/template/OM4KQ/liquid-measurement-equivalent-chart/
[ "# Liquid Measurement Equivalent Chart\n\nSponsored Link\n\n## 免费模板                                  保存,填空,打印,三步搞定!", null, "点击图片放大 / 点击下面的按钮查看更多图片\n\nAdobe PDF (.pdf)\n\n• 本文档已通过专业认证\n• 100%可定制\n• 这是一个数字下载 (456.93 kB)\n• 语: English\n\nSponsored Link\n\nABT 模板评分: 8\n\nHow to draft a Liquid Measurement Equivalent Chart? An easy way to start completing your document is to download this Liquid Measurement Equivalent Chart template now!\n\nEvery day brings new projects, emails, documents, and task lists, and often it is not that different from the work you have done before. Many of our day-to-day tasks are similar to something we have done before. Don't reinvent the wheel every time you need to make measurement calculations.\n\nThe conversion measurement template provides info regarding the following:\n\nNautical mile = 6,076.1155\nfeet 1 knot = 1 nautical mile per hour 1 fathom = 2 yards = 6 feet 1 furlong = 1⁄ 8 mile = 660 feet = 220 yards 1 league = 3 miles = 24 furlongs 1 chain = 100 links = 22 yards SQUARE 1 square foot = 144 square inches 1 square yard = 9 square feet 1 square rod = 30 1⁄ 4 square yards = 272 1⁄ 4 square feet 1 acre = 160 square rods = 43,560 square feet 1 square mile = 640 acres = 102,400 square rods 1 square rod = 625 square links 1 square chain = 16 square rods 1 acre = 10 square chains CUBIC 1 cubic foot = 1,728 cubic inches 1 cubic yard = 27 cubic feet 1 cord = 128 cubic feet 1 U.S. liquid gallon = 4 quarts = 231 cubic inches 1 imperial gallon = 1.20 U.S. gallons = 0.16 cubic foot 1 board foot = 144 cubic inches KITCHEN 3 teaspoons = 1 tablespoon 16 tablespoons = 1 cup 1 cup = 8 ounces 2 cups = 1 pint 2 pints = 1 quart 4 quarts = 1 gallon\n\nTO CONVERT CELSIUS AND FAHRENHEIT : °C = (°F – 32)/1.8 °F = (°C × 1.8) + 32 Metric Conversions\n\nLINEAR 1 inch = 2.54 centimeters 1 centimeter = 0.39 inch 1 meter = 39.37 inches 1 yard = 0.914 meter 1 mile = 1.61 kilometers 1 kilometer = 0.62 mile SQUARE 1 square inch = 6.45 square centimeters 1 square yard = 0.84 square meter 1 square mile = 2.59 square kilometers 1 square kilometer = 0.386 square mile 1 acre = 0.40 hectare 1 hectare = 2.47 acres 1 ⁄ 4 cup = 60 mL ⁄ 3 cup = 75 mL 1 ⁄ 2 cup = 125 mL 2 ⁄ 3 cup = 150 mL 3 ⁄ 4 cup = 175 mL 1 cup = 250 mL 1 liter = 1.057 U.S. liquid quarts CUBIC 1 U.S. liquid quart = 0.946 liter 1 cubic yard = 0.76 cubic meter 1 U.S. liquid gallon = 3.78 liters 1 cubic meter = 1.31 cubic yards 1 gram = 0.035 ounce 1 ounce = 28.349 grams HOUSEHOLD 1 kilogram = 2.2 pounds 1 ⁄ 2 teaspoon = 2 mL 1 pound = 0.45 kilogram 1 teaspoon = 5 mL 1 tablespoon = 15 mL The Old Farmer’s Almanac 1.\nInstead, we provide this standardized Liquid Measurement Equivalent Chart template with text and formatting as a starting point to help you do the conversion or find the equivalent. If time or quality is of the essence, this ready-made template can help you to save time and to focus on the topics that really matter!\n\nUsing this chart template guarantees you will save time, cost, and effort! It comes in Microsoft Office format, is ready to be tailored to your personal needs. Completing your chart has never been easier! If you wish to convert Celcius to Fahrenheit, have a look at this C to F temperature conversion chart.\n\nDownload this Liquid Measurement Equivalent Chart template now for your own benefit!\n\nDISCLAIMER\nNothing on this site shall be considered legal advice and no attorney-client relationship is established.", null, "Sponsored Link" ]
[ null, "https://www.allbusinesstemplates.com/thumbs/c9a26c4d-94df-4bc6-9b90-ff9f4637aa43_1.png", null, "https://www.allbusinesstemplates.com/img/avatar_1x.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79113626,"math_prob":0.99837404,"size":3071,"snap":"2021-04-2021-17","text_gpt3_token_len":933,"char_repetition_ratio":0.13172482,"word_repetition_ratio":0.011494253,"special_character_ratio":0.3337675,"punctuation_ratio":0.101404056,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9700792,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-12T18:06:51Z\",\"WARC-Record-ID\":\"<urn:uuid:dfa641e7-8284-4f71-8bbe-6bf18ea04a36>\",\"Content-Length\":\"28424\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e0991a76-8cce-4e1c-8922-d466148ec289>\",\"WARC-Concurrent-To\":\"<urn:uuid:88a9e58a-66ae-457b-8e1c-87485e48422d>\",\"WARC-IP-Address\":\"104.26.7.31\",\"WARC-Target-URI\":\"https://www.allbusinesstemplates.com/cn/template/OM4KQ/liquid-measurement-equivalent-chart/\",\"WARC-Payload-Digest\":\"sha1:PMDDUCEJIVEUKEUHYULXCRRUKUXJ5E6M\",\"WARC-Block-Digest\":\"sha1:IJ6I3IKE2QXYPRQZWMEKWEK6HIJIIWAA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038069133.25_warc_CC-MAIN-20210412175257-20210412205257-00339.warc.gz\"}"}