URL
stringlengths
15
1.68k
text_list
sequencelengths
1
199
image_list
sequencelengths
1
199
metadata
stringlengths
1.19k
3.08k
https://georgia-james.com/simplify-b28b2-8/
[ "# Simplify (b^2+8)(b^2-8)", null, "Expand using the FOIL Method.\nApply the distributive property: (b^2+8)(b^2-8) = b^2*b^2 - b^2*8 + 8*b^2 - 8*8.\nSimplify terms.\nCombine the opposite terms -8b^2 and 8b^2, which cancel.\nMultiply b^2 by b^2 by adding the exponents, using the power rule a^m*a^n = a^(m+n), to get b^4.\nMultiply -8 by 8 to get -64.\nThe result is b^4 - 64." ]
[ null, "https://georgia-james.com/wp-content/uploads/ask60.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7310261,"math_prob":0.9232902,"size":381,"snap":"2022-40-2023-06","text_gpt3_token_len":93,"char_repetition_ratio":0.1458886,"word_repetition_ratio":0.16949153,"special_character_ratio":0.23097113,"punctuation_ratio":0.18421052,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9653499,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T09:49:11Z\",\"WARC-Record-ID\":\"<urn:uuid:fc5966b2-7561-4ce7-a6df-ba6f24e3d3ba>\",\"Content-Length\":\"63675\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae42f9f9-dcd7-40ed-b34d-2edb7ad86e29>\",\"WARC-Concurrent-To\":\"<urn:uuid:b1ef6065-66f8-4d8b-a5e4-8d0effc10b47>\",\"WARC-IP-Address\":\"107.167.10.239\",\"WARC-Target-URI\":\"https://georgia-james.com/simplify-b28b2-8/\",\"WARC-Payload-Digest\":\"sha1:KB6N3CQHD2TUL5O7HUNPUCKPY2ES3QBN\",\"WARC-Block-Digest\":\"sha1:EP4JA6GPPT4SXRH2Q6NDPUXWJR4U2D6R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499541.63_warc_CC-MAIN-20230128090359-20230128120359-00793.warc.gz\"}"}
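The page's expansion is a difference-of-squares instance: (b^2+8)(b^2-8) = (b^2)^2 - 8^2 = b^4 - 64. A minimal numeric spot-check of that identity (my own sketch, not part of the original page):

```python
# Spot-check the expansion (b^2 + 8)(b^2 - 8) == b^4 - 64.
def lhs(b):
    return (b * b + 8) * (b * b - 8)

def rhs(b):
    return b ** 4 - 64

# Two degree-4 polynomials that agree on more than 4 points are identical,
# so checking 11 integer values is enough to confirm the identity.
assert all(lhs(b) == rhs(b) for b in range(-5, 6))
print("identity holds")
```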
https://cs.stackexchange.com/questions/56632/is-there-an-example-of-a-recursive-language-which-is-not-context-sensitive?noredirect=1
[ "# Is there an example of a recursive language which is not context sensitive?\n\nI have been looking for a prototypical example of a recursive (decidable) language which is not context-sensitive, without success. For instance, $a^*$ is prototypical of the regular languages, $a^nb^n$ of the context-free languages, and $a^nb^nc^n$ of the context-sensitive languages. I usually consider the language accepted by a universal Turing machine (UTM) as prototypical of the recursively enumerable languages. However, for the recursive languages I don't have one. I used to think that $\\{1^p \\mid p \\text{ is prime}\\}$ was a good candidate, but verifying that a number is prime can be done by a linear-bounded Turing machine. I also had $\\{1^{2^{n}}\\}$, but again this can be verified by a linear-bounded Turing machine.\n\nOn the other hand, the other options I have found are computing Turing machines that require the output of the computation to be stored somewhere on the tape; however, the output is not part of the accepted language, which makes each of those languages regular or context-free so far. For instance, take the machine that sums two numbers represented by 1s and separated by a space, and writes the result after them. In this case the accepted language is actually $1^*B1^*$, which is regular! If we instead phrase it as verification, it becomes the context-free language $1^nB1^mB1^{n+m}$, which still is not a witness for recursiveness!\n\nSo is it possible to talk about a language which might be regular in essence, but which, because it is required to carry out the computation and write the result to the output, counts as a kind of recursive language? Such computations definitely cannot be done by a linear-bounded Turing machine.\n\n• Context-sensitive languages are exactly the languages that can be decided by linear-bounded Turing machines (LBAs). So a language that genuinely needs a general (non-LBA) TM to decide it will not be context-sensitive (but it can still be generated by an unrestricted grammar). – Ran G. Apr 28 '16 at 1:20\n• If you want a specific example, think of languages of the form $L=\\{ \\langle M, x \\rangle \\mid M \\text{ accepts } x \\text{ within } |x|^{10} \\text{ steps}\\}$. – Ran G. Apr 28 '16 at 1:22\n• Wow, excellent! I got it! This one is always decidable since I only have to simulate $M$ for $|x|^{10}$ steps! Thanks a lot! – Ivan Meza Apr 28 '16 at 1:31\n• Since LINSPACE ≠ R, yes. – Raphael Apr 28 '16 at 11:28\n\nHere's a more formal proof, by the standard trick of diagonalization (it must be folklore, but I saw it recently here).\n\nLet $G_1, G_2, \\ldots$ be some enumeration of the context-sensitive grammars (convince yourself that there are only countably many of them; why can they be enumerated?).\n\nLet $x_1, x_2, \\ldots$ be an enumeration of $\\Sigma^*$ (i.e., $x_1=\\epsilon$, $x_2=0$, $x_3=1$, $x_4=00$, etc., in the case of a binary alphabet).\n\nConsider the language\n\n$$L = \\{ x_i \\mid x_i \\notin L(G_i)\\}.$$\n\nClaim 1. $L$ is not context-sensitive: if it were, some specific $G_j$ would generate it, but then $x_j \\in L$ if and only if $x_j \\notin L(G_j)$, a contradiction either way.\n\nClaim 2. $L$ is decidable: given $x_i$, we just check whether $G_i$ generates it (this problem is known to be PSPACE-complete, and thus decidable).\n\n• This one is so nice. I have read something like this used to establish relations between complements of families of languages, but using this construction to push the language outside the context-sensitive class while keeping it recursive is really cool! – Ivan Meza Apr 28 '16 at 2:05" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95170826,"math_prob":0.9577426,"size":1538,"snap":"2021-04-2021-17","text_gpt3_token_len":342,"char_repetition_ratio":0.14667536,"word_repetition_ratio":0.031128405,"special_character_ratio":0.22171651,"punctuation_ratio":0.05882353,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99718904,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-23T13:53:13Z\",\"WARC-Record-ID\":\"<urn:uuid:c488232f-eb1c-4a46-8fd8-bfb6cc672bcf>\",\"Content-Length\":\"157334\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c23c6a00-bd48-482a-9fa8-1df2381e1bd2>\",\"WARC-Concurrent-To\":\"<urn:uuid:5f353059-e02e-4027-b2fb-060a90cdd645>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/56632/is-there-an-example-of-a-recursive-language-which-is-not-context-sensitive?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:462Z2EEMMO3OVAVVQAXEADIJOP4A627N\",\"WARC-Block-Digest\":\"sha1:75F4DLO6ARUKWF3U4KFT56T3WDP5N6LS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703538082.57_warc_CC-MAIN-20210123125715-20210123155715-00314.warc.gz\"}"}
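Ran G.'s example language is decidable by plain brute force: simulate the machine for at most $|x|^{10}$ steps and see whether it has accepted. A toy sketch of that bounded simulation (the dict-based machine encoding and the helper name are my own illustration, not from the thread):

```python
# Deciding membership in L = { (M, x) : M accepts x within |x|**10 steps }
# is just step-bounded simulation: run M for at most the budget and answer.
def run_bounded(delta, x, accept_state, bound):
    """Simulate a one-tape TM whose transition table is
    delta[(state, symbol)] = (new_state, written_symbol, move 'L'/'R')."""
    tape = dict(enumerate(x))  # sparse tape, '_' is the blank symbol
    head, state = 0, 'q0'
    for _ in range(bound):
        if state == accept_state:
            return True
        key = (state, tape.get(head, '_'))
        if key not in delta:
            return False               # halted without accepting
        state, sym, move = delta[key]
        tape[head] = sym
        head += 1 if move == 'R' else -1
    return state == accept_state       # budget exhausted: decide now

# A TM that scans right over 1s and accepts at the first blank.
delta = {('q0', '1'): ('q0', '1', 'R'), ('q0', '_'): ('acc', '_', 'R')}
x = '1111'
print(run_bounded(delta, x, 'acc', len(x) ** 10))  # → True
```

Because the step budget is fixed in advance, the simulation always terminates, which is exactly why the language is decidable.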
https://www.papertrell.com/apps/preview/The-Handy-Math-Answer-Book/Handy%20Answer%20book/How-is-the-magnitude-of-a-vector-determined/001137022/content/SC/52cb00f482fad14abfa5c2e0_Default.html
[ "# How is the magnitude of a vector determined?\n\nThe magnitude of a vector is its length. Placing a pair of vertical lines (similar to the absolute value symbol) around a vector denotes its magnitude. For example, if the variable V is used to represent a vector, then the expression", null, "indicates the magnitude of the vector." ]
[ null, "https://www.papertrell.com/apps/preview/The-Handy-Math-Answer-Book/Handy%20Answer%20book/How-is-the-magnitude-of-a-vector-determined/001137022/Media/SC/Handy_Answer_book/images/par.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8069026,"math_prob":0.84451157,"size":409,"snap":"2023-14-2023-23","text_gpt3_token_len":85,"char_repetition_ratio":0.20987654,"word_repetition_ratio":0.029850746,"special_character_ratio":0.19559902,"punctuation_ratio":0.08,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97956455,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-05T20:10:25Z\",\"WARC-Record-ID\":\"<urn:uuid:21d176cc-3972-4650-bcd9-e2cc2191c398>\",\"Content-Length\":\"10665\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f81da803-51c6-4c5c-a8b6-38689349605c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0af8bd8e-4649-4374-be95-0db387bde08f>\",\"WARC-IP-Address\":\"54.243.128.49\",\"WARC-Target-URI\":\"https://www.papertrell.com/apps/preview/The-Handy-Math-Answer-Book/Handy%20Answer%20book/How-is-the-magnitude-of-a-vector-determined/001137022/content/SC/52cb00f482fad14abfa5c2e0_Default.html\",\"WARC-Payload-Digest\":\"sha1:RFOSW3N4XNTQ65WJSOPHN5LNDV2SZUCA\",\"WARC-Block-Digest\":\"sha1:QROYJ2LMSLXPAOLV75I4QPDQMGAIGWZV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224652161.52_warc_CC-MAIN-20230605185809-20230605215809-00586.warc.gz\"}"}
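In component form the magnitude is the square root of the sum of squared components, |V| = sqrt(v1^2 + … + vn^2). A minimal sketch (the function name is my own; the book only introduces the |V| notation):

```python
import math

def magnitude(v):
    """|V|: the length of a vector, sqrt(v1^2 + ... + vn^2)."""
    return math.sqrt(sum(c * c for c in v))

# The classic 3-4-5 right triangle: a vector with components (3, 4).
print(magnitude((3.0, 4.0)))  # → 5.0
```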
https://www.ginac.de/ginac.git/?p=ginac.git;a=blob;f=ginsh/ginsh.1.in;hb=e7cc6a764ff67b5885d6633385fac23ccc1dc9a7
[ "1 .TH ginsh 1 \"January, 2000\" \"GiNaC @VERSION@\" \"The GiNaC Group\"\n2 .SH NAME\n3 ginsh \\- GiNaC Interactive Shell\n4 .SH SYNOPSIS\n5 .B ginsh\n6 .RI [ file\\&... ]\n7 .SH DESCRIPTION\n8 .B ginsh\n9 is an interactive frontend for the GiNaC symbolic computation framework.\n10 It is intended as a tool for testing and experimenting with GiNaC's\n11 features, not as a replacement for traditional interactive computer\n12 algebra systems. Although it can do many things these traditional systems\n13 can do, ginsh provides no programming constructs like loops or conditional\n14 expressions. If you need this functionality you are advised to write\n15 your program in C++, using the \"native\" GiNaC class framework.\n16 .SH USAGE\n17 .SS INPUT FORMAT\n18 After startup, ginsh displays a prompt (\"> \") signifying that it is ready\n19 to accept your input. Acceptable input are numeric or symbolic expressions\n20 consisting of numbers (e.g.\n21 .BR 42 \", \" 2/3 \" or \" 0.17 ),\n22 symbols (e.g.\n23 .BR x \" or \" result ),\n24 mathematical operators like\n25 .BR + \" and  \" * ,\n26 and functions (e.g.\n27 .BR sin \" or \" normal ).\n28 Every input expression must be terminated with either a semicolon\n29 .RB ( ; )\n30 or a colon\n31 .RB ( : ).\n32 If terminated with a semicolon, ginsh will evaluate the expression and print\n33 the result to stdout. If terminated with a colon, ginsh will only evaluate the\n34 expression but not print the result. It is possible to enter multiple\n35 expressions on one line. Whitespace (spaces, tabs, newlines) can be applied\n36 freely between tokens. To quit ginsh, enter\n37 .BR quit \" or \" exit ,\n38 or type an EOF (Ctrl-D) at the prompt.\n39 .PP\n40 Anything following a double slash\n41 .RB ( // )\n42 up to the end of the line, and all lines starting with a hash mark\n43 .RB ( # )\n44 are treated as a comment and ignored.\n45 .SS NUMBERS\n46 ginsh accepts numbers in the usual decimal notations. 
This includes arbitrary\n47 precision integers and rationals as well as floating point numbers in standard\n48 or scientific notation (e.g.\n49 .BR 1.2E6 ).\n50 The general rule is that if a number contains a decimal point\n51 .RB ( . ),\n52 it is an (inexact) floating point number; otherwise it is an (exact) integer or\n53 rational.\n54 Integers can be specified in binary, octal, hexadecimal or arbitrary (2-36) base\n55 by prefixing them with\n56 .BR #b \", \" #o \", \" #x \", or \"\n57 .BI # n R\n58 , respectively.\n59 .SS SYMBOLS\n60 Symbols are made up of a string of alphanumeric characters and the underscore\n61 .RB ( _ ),\n62 with the first character being non-numeric. E.g.\n63 .BR a \" and \" mu_1\n64 are acceptable symbol names, while\n65 .B 2pi\n66 is not. It is possible to use symbols with the same names as functions (e.g.\n67 .BR sin );\n68 ginsh is able to distinguish between the two.\n69 .PP\n70 Symbols can be assigned values by entering\n71 .RS\n72 .IB symbol \" = \" expression ;\n73 .RE\n74 .PP\n75 To unassign the value of an assigned symbol, type\n76 .RS\n77 .BI unassign(' symbol ');\n78 .RE\n79 .PP\n80 Assigned symbols are automatically evaluated (= replaced by their assigned value)\n81 when they are used. 
To refer to the unevaluated symbol, put single quotes\n82 .RB ( ' )\n83 around the name, as demonstrated for the \"unassign\" command above.\n84 .PP\n85 The following symbols are pre-defined constants that cannot be assigned\n86 a value by the user:\n87 .RS\n88 .TP 8m\n89 .B Pi\n90 Archimedes' Constant\n91 .TP\n92 .B Catalan\n93 Catalan's Constant\n94 .TP\n95 .B Euler\n96 Euler-Mascheroni Constant\n97 .TP\n98 .B I\n99 sqrt(-1)\n100 .TP\n101 .B FAIL\n102 an object of the GiNaC \"fail\" class\n103 .RE\n104 .PP\n105 There is also the special\n106 .RS\n107 .B Digits\n108 .RE\n109 symbol that controls the numeric precision of calculations with inexact numbers.\n110 Assigning an integer value to Digits will change the precision to the given\n111 number of decimal places.\n112 .SS WILDCARDS\n113 The has(), find(), match() and subs() functions accept wildcards as placeholders\n114 for expressions. These have the syntax\n115 .RS\n116 .BI \\$ number\n117 .RE\n118 for example \\$0, \\$1 etc.\n119 .SS LAST PRINTED EXPRESSIONS\n120 ginsh provides the three special symbols\n121 .RS\n122 %, %% and %%%\n123 .RE\n124 that refer to the last, second last, and third last printed expression, respectively.\n125 These are handy if you want to use the results of previous computations in a new\n126 expression.\n127 .SS OPERATORS\n128 ginsh provides the following operators, listed in falling order of precedence:\n129 .RS\n130 .TP 8m\n131 \\\" GINSH_OP_HELP_START\n132 .B !\n133 postfix factorial\n134 .TP\n135 .B ^\n136 powering\n137 .TP\n138 .B +\n139 unary plus\n140 .TP\n141 .B \\-\n142 unary minus\n143 .TP\n144 .B *\n145 multiplication\n146 .TP\n147 .B %\n148 non-commutative multiplication\n149 .TP\n150 .B /\n151 division\n152 .TP\n153 .B +\n154 addition\n155 .TP\n156 .B \\-\n157 subtraction\n158 .TP\n159 .B <\n160 less than\n161 .TP\n162 .B >\n163 greater than\n164 .TP\n165 .B <=\n166 less or equal\n167 .TP\n168 .B >=\n169 greater or equal\n170 .TP\n171 .B ==\n172 equal\n173 .TP\n174 .B !=\n175 not 
equal\n176 .TP\n177 .B =\n178 symbol assignment\n179 \\\" GINSH_OP_HELP_END\n180 .RE\n181 .PP\n182 All binary operators are left-associative, with the exception of\n183 .BR ^ \" and \" =\n184 which are right-associative. The result of the assignment operator\n185 .RB ( = )\n186 is its right-hand side, so it's possible to assign multiple symbols in one\n187 expression (e.g.\n188 .BR \"a = b = c = 2;\" ).\n189 .SS LISTS\n190 Lists are used by the\n191 .B subs\n192 and\n193 .B lsolve\n194 functions. A list consists of an opening curly brace\n195 .RB ( { ),\n196 a (possibly empty) comma-separated sequence of expressions, and a closing curly\n197 brace\n198 .RB ( } ).\n199 .SS MATRICES\n200 A matrix consists of an opening square bracket\n201 .RB ( [ ),\n202 a non-empty comma-separated sequence of matrix rows, and a closing square bracket\n203 .RB ( ] ).\n204 Each matrix row consists of an opening square bracket\n205 .RB ( [ ),\n206 a non-empty comma-separated sequence of expressions, and a closing square bracket\n207 .RB ( ] ).\n208 If the rows of a matrix are not of the same length, the width of the matrix\n209 becomes that of the longest row and shorter rows are filled up at the end\n210 with elements of value zero.\n211 .SS FUNCTIONS\n212 A function call in ginsh has the form\n213 .RS\n214 .IB name ( arguments )\n215 .RE\n216 where\n217 .I arguments\n218 is a comma-separated sequence of expressions. ginsh provides a couple of built-in\n219 functions and also \"imports\" all symbolic functions defined by GiNaC and additional\n220 libraries. There is no way to define your own functions other than linking ginsh\n221 against a library that defines symbolic GiNaC functions.\n222 .PP\n223 ginsh provides Tab-completion on function names: if you type the first part of\n224 a function name, hitting Tab will complete the name if possible. 
If the part you\n225 typed is not unique, hitting Tab again will display a list of matching functions.\n226 Hitting Tab twice at the prompt will display the list of all available functions.\n227 .PP\n228 A list of the built-in functions follows. They nearly all work as the\n229 respective GiNaC methods of the same name, so I will not describe them in\n230 detail here. Please refer to the GiNaC documentation.\n231 .PP\n232 .RS\n233 \\\" GINSH_FCN_HELP_START\n234 .BI charpoly( matrix \", \" symbol )\n235 \\- characteristic polynomial of a matrix\n236 .br\n237 .BI coeff( expression \", \" object \", \" number )\n238 \\- extracts coefficient of object^number from a polynomial\n239 .br\n240 .BI collect( expression \", \" object-or-list )\n241 \\- collects coefficients of like powers (result in recursive form)\n242 .br\n243 .BI collect_distributed( expression \", \" list )\n244 \\- collects coefficients of like powers (result in distributed form)\n245 .br\n246 .BI content( expression \", \" symbol )\n247 \\- content part of a polynomial\n248 .br\n249 .BI decomp_rational( expression \", \" symbol )\n250 \\- decompose rational function into polynomial and proper rational function\n251 .br\n252 .BI degree( expression \", \" object )\n253 \\- degree of a polynomial\n254 .br\n255 .BI denom( expression )\n256 \\- denominator of a rational function\n257 .br\n258 .BI determinant( matrix )\n259 \\- determinant of a matrix\n260 .br\n261 .BI diag( expression... 
)\n262 \\- constructs diagonal matrix\n263 .br\n264 .BI diff( expression \", \" \"symbol [\" \", \" number] )\n265 \\- partial differentiation\n266 .br\n267 .BI divide( expression \", \" expression )\n268 \\- exact polynomial division\n269 .br\n270 .BI eval( \"expression [\" \", \" level] )\n271 \\- evaluates an expression, replacing symbols by their assigned value\n272 .br\n273 .BI evalf( \"expression [\" \", \" level] )\n274 \\- evaluates an expression to a floating point number\n275 .br\n276 .BI evalm( expression )\n277 \\- evaluates sums, products and integer powers of matrices\n278 .br\n279 .BI expand( expression )\n280 \\- expands an expression\n281 .br\n282 .BI find( expression \", \" pattern )\n283 \\- returns a list of all occurrences of a pattern in an expression\n284 .br\n285 .BI gcd( expression \", \" expression )\n286 \\- greatest common divisor\n287 .br\n288 .BI has( expression \", \" pattern )\n289 \\- returns \"1\" if the first expression contains the pattern as a subexpression, \"0\" otherwise\n290 .br\n291 .BI inverse( matrix )\n292 \\- inverse of a matrix\n293 .br\n294 .BI is( relation )\n295 \\- returns \"1\" if the relation is true, \"0\" otherwise (false or undecided)\n296 .br\n297 .BI lcm( expression \", \" expression )\n298 \\- least common multiple\n299 .br\n300 .BI lcoeff( expression \", \" object )\n301 \\- leading coefficient of a polynomial\n302 .br\n303 .BI ldegree( expression \", \" object )\n304 \\- low degree of a polynomial\n305 .br\n306 .BI lsolve( equation-list \", \" symbol-list )\n307 \\- solve system of linear equations\n308 .br\n309 .BI map( expression \", \" pattern )\n310 \\- apply function to each operand; the function to be applied is specified as a pattern with the \"\\$0\" wildcard standing for the operands\n311 .br\n312 .BI match( expression \", \" pattern )\n313 \\- check whether expression matches a pattern; returns a list of wildcard substitutions or \"FAIL\" if there is no match\n314 .br\n315 .BI nops( expression 
)\n316 \\- number of operands in expression\n317 .br\n318 .BI normal( \"expression [\" \", \" level] )\n319 \\- rational function normalization\n320 .br\n321 .BI numer( expression )\n322 \\- numerator of a rational function\n323 .br\n324 .BI numer_denom( expression )\n325 \\- numerator and denominator of a rational function as a list\n326 .br\n327 .BI op( expression \", \" number )\n328 \\- extract operand from expression\n329 .br\n330 .BI power( expr1 \", \" expr2 )\n331 \\- exponentiation (equivalent to writing expr1^expr2)\n332 .br\n333 .BI prem( expression \", \" expression \", \" symbol )\n334 \\- pseudo-remainder of polynomials\n335 .br\n336 .BI primpart( expression \", \" symbol )\n337 \\- primitive part of a polynomial\n338 .br\n339 .BI quo( expression \", \" expression \", \" symbol )\n340 \\- quotient of polynomials\n341 .br\n342 .BI rem( expression \", \" expression \", \" symbol )\n343 \\- remainder of polynomials\n344 .br\n345 .BI series( expression \", \" relation-or-symbol \", \" order )\n346 \\- series expansion\n347 .br\n348 .BI sqrfree( \"expression [\" \", \" symbol-list] )\n349 \\- square-free factorization of a polynomial\n350 .br\n351 .BI sqrt( expression )\n352 \\- square root\n353 .br\n354 .BI subs( expression \", \" relation-or-list )\n355 .br\n356 .BI subs( expression \", \" look-for-list \", \" replace-by-list )\n357 \\- substitute subexpressions (you may use wildcards)\n358 .br\n359 .BI tcoeff( expression \", \" object )\n360 \\- trailing coefficient of a polynomial\n361 .br\n362 .BI time( expression )\n363 \\- returns the time in seconds needed to evaluate the given expression\n364 .br\n365 .BI trace( matrix )\n366 \\- trace of a matrix\n367 .br\n368 .BI transpose( matrix )\n369 \\- transpose of a matrix\n370 .br\n371 .BI unassign( symbol )\n372 \\- unassign an assigned symbol\n373 .br\n374 .BI unit( expression \", \" symbol )\n375 \\- unit part of a polynomial\n376 .br\n377 \\\" GINSH_FCN_HELP_END\n378 .RE\n379 .SS SPECIAL 
COMMANDS\n380 To exit ginsh, enter\n381 .RS\n382 .B quit\n383 .RE\n384 or\n385 .RS\n386 .B exit\n387 .RE\n388 .PP\n389 ginsh can display a (short) help for a given topic (mostly about functions\n390 and operators) by entering\n391 .RS\n392 .BI ? topic\n393 .RE\n394 Typing\n395 .RS\n396 .B ??\n397 .RE\n398 will display a list of available help topics.\n399 .PP\n400 The command\n401 .RS\n402 .BI print( expression );\n403 .RE\n404 will print a dump of GiNaC's internal representation for the given\n405 .IR expression .\n406 This is useful for debugging and for learning about GiNaC internals.\n407 .PP\n408 The command\n409 .RS\n410 .BI iprint( expression );\n411 .RE\n412 prints the given\n413 .I expression\n414 (which must evaluate to an integer) in decimal, octal, and hexadecimal representations.\n415 .PP\n416 Finally, the shell escape\n417 .RS\n418 .B !\n419 .RI [ \"command  \" [ arguments ]]\n420 .RE\n421 passes the given\n422 .I command\n423 and optionally\n424 .I arguments\n425 to the shell for execution. 
With this method, you can execute shell commands\n426 from within ginsh without having to quit.\n427 .SH EXAMPLES\n428 .nf\n429 > a = x^2\\-x\\-2;\n430 \\-2\\-x+x^2\n431 > b = (x+1)^2;\n432 (x+1)^2\n433 > s = a/b;\n434 (x+1)^(\\-2)*(\\-2\\-x+x^2)\n435 > diff(s, x);\n436 (2*x\\-1)*(x+1)^(\\-2)\\-2*(x+1)^(\\-3)*(\\-x+x^2\\-2)\n437 > normal(s);\n438 (x\\-2)*(x+1)^(\\-1)\n439 > x = 3^50;\n440 717897987691852588770249\n441 > s;\n442 717897987691852588770247/717897987691852588770250\n443 > Digits = 40;\n444 40\n445 > evalf(s);\n446 0.999999999999999999999995821133292704384960990679\n447 > unassign('x');\n448 x\n449 > s;\n450 (x+1)^(\\-2)*(\\-x+x^2\\-2)\n451 > series(sin(x),x==0,6);\n452 1*x+(\\-1/6)*x^3+1/120*x^5+Order(x^6)\n453 > lsolve({3*x+5*y == 7}, {x, y});\n454 {x==\\-5/3*y+7/3,y==y}\n455 > lsolve({3*x+5*y == 7, \\-2*x+10*y == \\-5}, {x, y});\n456 {x==19/8,y==\\-1/40}\n457 > M = [ [a, b], [c, d] ];\n458 [[\\-x+x^2\\-2,(x+1)^2],[c,d]]\n459 > determinant(M);\n460 \\-2*d\\-2*x*c\\-x^2*c\\-x*d+x^2*d\\-c\n461 > collect(%, x);\n462 (\\-d\\-2*c)*x+(d\\-c)*x^2\\-2*d\\-c\n463 > solve quantum field theory;\n464 parse error at quantum\n465 > quit\n466 .fi\n467 .SH DIAGNOSTICS\n468 .TP\n469 .RI \"parse error at \" foo\n470 You entered something which ginsh was unable to parse. Please check the syntax\n471 of your input and try again.\n472 .TP\n473 .RI \"argument \" num \" to \" function \" must be a \" type\n474 The argument number\n475 .I num\n476 to the given\n477 .I function\n478 must be of a certain type (e.g. a symbol, or a list). 
The first argument has\n479 number 0, the second argument number 1, etc.\n480 .SH AUTHOR\n481 .TP\n482 The GiNaC Group:\n483 .br\n484 Christian Bauer <[email protected]>\n485 .br\n486 Alexander Frink <[email protected]>\n487 .br\n488 Richard Kreckel <[email protected]>\n489 .SH SEE ALSO\n490 GiNaC Tutorial \\- An open framework for symbolic computation within the\n491 C++ programming language\n492 .PP\n493 CLN \\- A Class Library for Numbers, Bruno Haible" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7698761,"math_prob":0.83440906,"size":12399,"snap":"2020-24-2020-29","text_gpt3_token_len":3084,"char_repetition_ratio":0.15465914,"word_repetition_ratio":0.031070746,"special_character_ratio":0.31809017,"punctuation_ratio":0.11764706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9719445,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-31T22:05:25Z\",\"WARC-Record-ID\":\"<urn:uuid:eee5b18d-b8cf-4ed3-87a8-0baf9b5cba3c>\",\"Content-Length\":\"113249\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b2d8f3f7-ad72-4bcd-947e-11a5f2310e9d>\",\"WARC-Concurrent-To\":\"<urn:uuid:2056de61-d5af-4579-823d-7fb7307f08c1>\",\"WARC-IP-Address\":\"188.68.41.94\",\"WARC-Target-URI\":\"https://www.ginac.de/ginac.git/?p=ginac.git;a=blob;f=ginsh/ginsh.1.in;hb=e7cc6a764ff67b5885d6633385fac23ccc1dc9a7\",\"WARC-Payload-Digest\":\"sha1:BMGGD76F3N5Y56MFYJ5NTG4ZFZNYOH7D\",\"WARC-Block-Digest\":\"sha1:SA3RGA6LNZXGDBQXNTMF4OOCZGZG5BFA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347413786.46_warc_CC-MAIN-20200531213917-20200601003917-00587.warc.gz\"}"}
https://proxies-free.com/use-the-convolution-theorem-to-calculate-the-fourier-series-then-write-xt-as-a-weighted-sum-of-cosines/
[ "# Use the Convolution theorem to calculate the Fourier Series, then write x(t) as a weighted sum of cosines\n\nThe convolution theorem for the Fourier Transform is:\n\n$$y(t) = h(t)*x(t) \\leftrightarrow Y(j\\omega) = H(j\\omega)X(j\\omega)$$\n\nThe given $$x(t)$$ is $$x(t)=\\cos^3(t)$$. No $$h(t)$$ was given.\n\nI don't really know how to use the convolution theorem to get to the Fourier Series. Can anyone help me?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8082019,"math_prob":0.9999515,"size":277,"snap":"2021-21-2021-25","text_gpt3_token_len":87,"char_repetition_ratio":0.113553114,"word_repetition_ratio":0.0,"special_character_ratio":0.3104693,"punctuation_ratio":0.07936508,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000093,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-13T10:15:25Z\",\"WARC-Record-ID\":\"<urn:uuid:4c8519b1-27c3-42fb-a65b-53ff509e8e3f>\",\"Content-Length\":\"26277\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9748ffe6-9593-4d32-a94d-78c98765a18e>\",\"WARC-Concurrent-To\":\"<urn:uuid:ca947a9f-ee8d-46bb-87fb-e3d664bb12be>\",\"WARC-IP-Address\":\"173.212.203.156\",\"WARC-Target-URI\":\"https://proxies-free.com/use-the-convolution-theorem-to-calculate-the-fourier-series-then-write-xt-as-a-weighted-sum-of-cosines/\",\"WARC-Payload-Digest\":\"sha1:GAASFKNIJE6MTROXDC723LSTKXUZVKPK\",\"WARC-Block-Digest\":\"sha1:EIMDH6L4QEU55YJW5I3EONGQNN6G76SY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487608702.10_warc_CC-MAIN-20210613100830-20210613130830-00220.warc.gz\"}"}
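For what it's worth, a worked sketch of the intended route (my own derivation, not from the post): the dual of the stated theorem says multiplication in time corresponds to (scaled) convolution in frequency, so convolving the impulse-pair spectrum of cos t with itself twice spreads the impulses to ±1 and ±3 rad/s. The same weights follow quickly from product-to-sum identities:

```latex
% x(t) = \cos^3 t = \cos t \cdot \cos^2 t; multiplication in time
% corresponds to \tfrac{1}{2\pi}-scaled convolution in frequency.
\cos^2 t = \tfrac{1}{2} + \tfrac{1}{2}\cos 2t
\quad\Rightarrow\quad
\cos^3 t = \cos t\left(\tfrac{1}{2} + \tfrac{1}{2}\cos 2t\right)
         = \tfrac{1}{2}\cos t + \tfrac{1}{4}\left(\cos t + \cos 3t\right)
         = \tfrac{3}{4}\cos t + \tfrac{1}{4}\cos 3t .
```

That final line is the requested weighted sum of cosines: the Fourier series of $\cos^3 t$ has only two harmonics, with weights 3/4 and 1/4.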
https://www.jaspereng.com/tech-corner/
[ "# Technical Resources\n\nConversions, Formulas, and Engineering Data\n\nPressure Conversions\n1 PSI = 27.71 inches water\n1 PSI = 2.0418 in. Hg @ 60F\n1 PSI = 51.81 mm Hg @ 60F\n1 PSI = .0689 bar\n1 PSI = 6.895 kPa\n1 inch water = 1.8718 mm Hg\n1 inch water = .2489 kPa\n\nVolume Conversions\n1 Gallon = .1337 cubic feet\n1 Gallon = 231 cubic inches\n1 Gallon = .003785 cubic meters\n1 Gallon = 3.785 liters\n1 Barrel (oil) = 42 gallons\n1 Bushel = 1.2445 cubic feet\n\nMass Conversions\n1 lb. = .4536 Kg\n1 Ton (short) = 2000 lbs.\n\nVolumetric Flow\n1 GPM = .227 cubic meters/hour\n1 GPM = 3.785 liters per minute\n\nDistance\n1 inch = 2.54 centimeters\n1 foot = .3048 meters\n\nWater Density\nAt 60 degrees F = 62.371 lbs./cu. ft.\nAt 60 degrees F = 8.3378 lbs./gal.\n\nSteam Data\nGage Press.    Temp       Spec. Volume\nPSIG           Deg. F     cu. ft./lb.\n0.0            212        26.8\n25.3           267.25     10.5\n50.3           297.97     6.66\n100            337.90     3.88\n150.3          365.99     2.75\n200.3          387.89     2.13\n250.3          406.13     1.74\n\nPressure\nPSI (absolute) = PSI (gauge) + 14.696\n\nFlow Velocity of Water\nV(ft./second) = .4086*Q/(D*D)\nQ is flow in GPM\nD is pipe ID in inches\n\nTable of Liquid Flows in\nSchedule 40 pipe\nPipe Size    GPM at 3 fps    GPM at 15 fps\n1            8.086           40.43\n1.5          19.026          95.13\n2            31.358          156.79\n3            69.12           345.6\n4            119.046         595.23\n6            270.27          1351.35\n8            468.748         2343.74\n10           738.342         3691.71\n12           1044.78         5223.88\n16           1651.38         8256.88\n24           3761.76         18,808.78\n\nFlow Velocity of Gas\nV = 3.056*Q/(D*D)\nV is flow velocity in SFPS\nQ is flow in SCFM\nD is pipe ID in inches\n\nVolumetric Gas Flow\nSCFM = ACFM*(Pf*520)/(14.7*Tf)\nPf = Pressure at flow conditions, PSIA\nTf = Temp. at flow conditions, Deg. R\n\nVolumetric to Mass Flow of Water\n33F water, 8.325 lb./gallon density\nQ(GPM) = Q(lbs/hour)*.002\n\nControl Valve Sizing\nLiquid\nCv = q*sqrt(gf/DP)\nSteam\nCv = W/(2.1*sqrt(DP*(Pf1+Pf2)))\nGas\nCv = (Q/963)*sqrt(G*Tf/(DP*(Pf1+Pf2)))\nq is liquid flow in GPM\ngf is liquid specific gravity\nDP is differential pressure\nW is steam flow rate, lbs./hr\nPf1 upstream pressure in psia\nPf2 downstream pressure in psia\nQ is gas flow rate\nG is gas specific gravity\nTf is flowing temperature, Deg. R" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.64851695,"math_prob":0.9833408,"size":2998,"snap":"2021-31-2021-39","text_gpt3_token_len":1076,"char_repetition_ratio":0.079492316,"word_repetition_ratio":0.036608864,"special_character_ratio":0.39626417,"punctuation_ratio":0.12917933,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9873608,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-28T08:03:59Z\",\"WARC-Record-ID\":\"<urn:uuid:aa0dc54a-ffb6-4934-9d38-7bbaea0eeab8>\",\"Content-Length\":\"57679\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c4489fcf-5fd0-4033-a648-56edc248d7f0>\",\"WARC-Concurrent-To\":\"<urn:uuid:b5c15f51-6253-475b-bac8-df65ff18c3ef>\",\"WARC-IP-Address\":\"70.39.233.171\",\"WARC-Target-URI\":\"https://www.jaspereng.com/tech-corner/\",\"WARC-Payload-Digest\":\"sha1:RKVXSBJPJOHUENHOAL6GPA3D2SV7IIWX\",\"WARC-Block-Digest\":\"sha1:6NPU7CQHSCPRO62NR7RS3WC74U2NIS6A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780060538.11_warc_CC-MAIN-20210928062408-20210928092408-00201.warc.gz\"}"}
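The water-velocity formula above is easy to wrap as a helper and cross-check against the Schedule 40 flow table (a sketch; the function name and the 2-inch Schedule 40 inside diameter of 2.067 in. are my additions):

```python
def water_velocity_fps(q_gpm, d_in):
    """Flow velocity of water: V (ft/s) = 0.4086 * Q / D^2,
    with Q the flow in GPM and D the pipe inside diameter in inches."""
    return 0.4086 * q_gpm / (d_in * d_in)

# Cross-check against the table: a 2" Schedule 40 pipe (ID ~2.067 in.)
# carrying 31.358 GPM should run at about 3 ft/s.
print(round(water_velocity_fps(31.358, 2.067), 3))  # → 2.999
```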
http://www.interlinepublishing.com/user-content-view.php?pubid=1&titleid=59
[ "Interline Publishing", null, "[email protected]", null, "+91 98867 328 23 / 24 / 25   +91 80 2333 2824 Sign Up   Sign In", null, "Procedure video", null, "Title      : Introduction to ADA Subject      : Analysis and Design of Algorithms (ADA) copyright © 2018   : Mangat J. S. Author      : Mangat J. S. Publisher      : Interlinepublishing Chapters/Pages      : 15/200 Total Price      : Rs.      : 142 To Purchase, select the individual chapter(s) or click \"Select all\" for the complete book. Please scroll down to view chapter(s).\nChapters\n\nTotal views (1251)\n◙ What's an Algorithm, ◙ Solving a Problem, ◙ Understanding the Problem, ◙ Understanding the Computational Means, ◙ Choosing Appropriate Data Structure, ◙ Precision of Solution, ◙ Strategy to Conquer the Problem, ◙ Representing the Algorithm, ◙ Testing and Analyzing the Algorithm, ◙ Implementing the Algorithm, ` ......\n Pages: 13", null, "Price: Rs 0\n\nTotal views (1232)\n◙ Introduction, ◙ Analyzing an Algorithm, ◙ Analysis of Non-recursive Algorithms, ◙ Analysis of Recursive Algorithms, ◙ Exercises, ◙ Solutions for Selected Problems.\n Pages: 12", null, "Price: Rs 9\n\nTotal views (1236)\n◙ Introduction, ◙ Asymptotic Notations, ◙ Big - Oh Notation, ◙ Big Omega () Notation, ◙ Theta () Notation, ◙ Classification of Efficiency, ◙ Mathematical Analysis of Algorithms, ◙ Mathematical Analysis of Non-recursive Algorithms, ◙ Mathematical Analysis of Recursive Algorithms, ◙ Exercises.\n Pages: 10", null, "Price: Rs 7.5\n\nTotal views (1231)\n◙ Introduction, ◙ Sequential Search, ◙ String Matching, ◙ Bubble Sort ◙ Selection Sort, ◙ Exhaustive Search, ◙ Magic Square, ◙ Knapsack Problem, ◙ Travelling Salesman Problem, ◙ Assignment Problem, ◙ Application Programs, ◙ Selection Sort, ◙ Sequential Search, ◙ Exercises.\n Pages: 16", null, "Price: Rs 12\n\nTotal views (1231)\n◙ Introduction, ◙ Binary Search, ◙ Quicksort, ◙ Mergesort, ◙ External Sorting, ◙ Application Programs, ◙ Binary Search, ◙ Quick Sort, ◙ Merge 
Sort, ◙ Exercises.\n Pages: 17", null, "Price: Rs 12.75\n\nTotal views (1230)\n◙ Introduction, ◙ Multiplication of Large Integers, ◙ Multiplications of Matrices, ◙ Strassen's Algorithm, ◙ Winograd's Matrix Multiplication, ◙ Binary Tree Traversal, ◙ Inorder Traversal, ◙ Preorder Traversal, ◙ Postorder Traversal, ◙ Exercises.\n Pages: 9", null, "Price: Rs 6.75\n\nTotal views (1232)\n◙ Introduction, ◙ Tower of Hanoi, ◙ Generating Combinatorial Objects, ◙ Generating Subsets, ◙ Generating Permutations, ◙ Depth First Search, ◙ Topological Sorting, ◙ Source Removal, ◙ Breadth First Search, ◙ Insertion Sort, ◙ Application Programs, ◙ Breadth First Search, ◙ Depth First Search, ......\n Pages: 26", null, "Price: Rs 19.5\n\nTotal views (1235)\n◙ Introduction, ◙ Presorting, ◙ Balanced Search Trees, ◙ AVL Tree, ◙ 2-3 Trees, ◙ Heapsort, ◙ Data Structure Heap, ◙ The Sorting Algorithm - Heapsort, ◙ Problem Reduction, ◙ Logn x, ◙ Paths of Length 'n' in Graphs, ◙ Least Common Multiple, ◙ Generating Subsets, ◙ Application Programs, & ......\n Pages: 19", null, "Price: Rs 14.25\n\nTotal views (1239)\n◙ Introduction, ◙ Signal Generation, ◙ Input Enhancement in String Matching, ◙ Boyer - Moore (BM) Algorithm, ◙ Horspool's Algorithm, ◙ Sorting by Counting, ◙ Rank Sort, ◙ Distribution Counting, ◙ Application Programs, ◙ Horspool's Algorithm, ◙ Exercises.\n Pages: 9", null, "Price: Rs 6.75\n\nTotal views (1241)\n◙ Introduction, ◙ Hashing, ◙ Open Hashing, ◙ Close Hashing, ◙ Exercises.\n Pages: 4", null, "Price: Rs 3\n\nTotal views (1234)\n◙ Introduction, ◙ Binomial Coefficient, ◙ Floyd's Algorithm, ◙ Warshall's Algorithm, ◙ Knapsack Problem, ◙ Memory Function, ◙ Knapsack Problem, ◙ Application Programs, ◙ Binomial Coefficients, ◙ Floyd's Algorithm, ◙ Warshall's Algorithm, ◙ Knapsack Problem, ◙ Exercises.\n Pages: 14", null, "Price: Rs 10.5\n\nTotal views (1234)\n◙ Introduction, ◙ Dijkstra's Algorithm, ◙ Prim's Algorithm, ◙ Kruskal's Algorithm, ◙ Huffman Codes, ◙ Application Programs, ◙
Dijkstra's Algorithm, ◙ Kruskal's Algorithm, ◙ Prim's Algorithm, ◙ Exercises.\n Pages: 28", null, "Price: Rs 21\n\nTotal views (1235)\n◙ Introduction, ◙ Lower Bounds, ◙ In-Out Lower Bound, ◙ Lower Bound Using Linear Reduction, ◙ Information Theoretic Lower Bound, ◙ Decision Trees, ◙ Lower Bound for Sorting, ◙ Exercises.\n Pages: 4", null, "Price: Rs 3\n\nTotal views (1241)\n◙ Introduction, ◙ P-Class of Problems, ◙ Np-Class of Problems, ◙ Vertex Cover Problem, ◙ Np-Complete Problems, ◙ Exercises.\n Pages: 3", null, "Price: Rs 2.25\n\nTotal views (1239)\n◙ Introduction, ◙ Backtracking, ◙ Travelling Salesman Problem, ◙ N-Queens Problem, ◙ Branch and Bound, ◙ Travelling Salesman Problem, ◙ Assignment Problem, ◙ Approximation Algorithms, ◙ Travelling Salesman Problem, ◙ Vertex Cover Problem, ◙ Application Programs, ◙ N-Queens Problem, ◙ Subset ......\n Pages: 16", null, "Price: Rs 12", null, "", null, "Safe and Secure Payment All major credit and debit cards are accepted.", null, "" ]
[ null, "http://www.interlinepublishing.com/images/m1.jpg", null, "http://www.interlinepublishing.com/images/p1.jpg", null, "http://www.interlinepublishing.com/images/logo.jpg", null, "http://www.interlinepublishing.com/title-cover/large/f9ec87f7ca0a05bec767c176f1aa3e49.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/true2.jpg", null, "http://www.interlinepublishing.com/images/seal.jpg", null, "http://www.interlinepublishing.com/images/down.jpg", null, "http://www.interlinepublishing.com/images/lo.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.66761976,"math_prob":0.52244514,"size":456,"snap":"2020-10-2020-16","text_gpt3_token_len":130,"char_repetition_ratio":0.117256634,"word_repetition_ratio":0.0,"special_character_ratio":0.34429824,"punctuation_ratio":0.2195122,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.991208,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,null,null,null,null,null,null,2,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-26T16:55:54Z\",\"WARC-Record-ID\":\"<urn:uuid:9ec0ac7e-1d71-47a3-ac37-c81c0e2a4d60>\",\"Content-Length\":\"299942\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:38f83646-5dca-4ac1-9204-4422333fd00b>\",\"WARC-Concurrent-To\":\"<urn:uuid:dc3df9fe-0ae1-411d-aee9-85d72975738d>\",\"WARC-IP-Address\":\"192.99.179.81\",\"WARC-Target-URI\":\"http://www.interlinepublishing.com/user-content-view.php?pubid=1&titleid=59\",\"WARC-Payload-Digest\":\"sha1:EPGV46TH2VI75A74D2ZQZTCGTQWWSNZ2\",\"WARC-Block-Digest\":\"sha1:ZMRBVZMV4AOFUR72ECXAUG63R6CAA2JU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146414.42_warc_CC-MAIN-20200226150200-20200226180200-00435.warc.gz\"}"}
http://docs.momepy.org/en/stable/generated/momepy.betweenness_centrality.html
[ "# momepy.betweenness_centrality\n\nmomepy.betweenness_centrality(graph, name='betweenness', mode='nodes', weight='mm_len', endpoints=True, radius=None, distance=None, normalized=False, verbose=True, **kwargs)[source]\n\nCalculates the shortest-path betweenness centrality for nodes.\n\nWrapper around networkx.betweenness_centrality or networkx.edge_betweenness_centrality.\n\nBetweenness centrality of a node v is the sum of the fraction of all-pairs shortest paths that pass through v\n\n$c_B(v) =\\sum_{s,t \\in V} \\frac{\\sigma(s, t|v)}{\\sigma(s, t)}$\n\nwhere V is the set of nodes, $$\\sigma(s, t)$$ is the number of shortest $$(s, t)$$-paths, and $$\\sigma(s, t|v)$$ is the number of those paths passing through some node v other than s, t. If s = t, $$\\sigma(s, t) = 1$$, and if v in {s, t}, $$\\sigma(s, t|v) = 0$$.\n\nBetweenness centrality of an edge e is the sum of the fraction of all-pairs shortest paths that pass through e\n\n$c_B(e) =\\sum_{s,t \\in V} \\frac{\\sigma(s, t|e)}{\\sigma(s, t)}$\n\nwhere V is the set of nodes, $$\\sigma(s, t)$$ is the number of shortest $$(s, t)$$-paths, and $$\\sigma(s, t|e)$$ is the number of those paths passing through edge e.\n\nParameters\ngraph : networkx.Graph\n\nGraph representing street network. Ideally generated from GeoDataFrame using momepy.gdf_to_nx()\n\nname : str, optional\n\ncalculated attribute name\n\nmode : str, default ‘nodes’\n\nmode of betweenness calculation. ‘nodes’ for node-based, ‘edges’ for edge-based\n\nweight : str (default ‘mm_len’)\n\nattribute holding the weight of edge (e.g. length, angle)\n\nradius : int, optional\n\nInclude all neighbors of distance <= radius from n\n\ndistance : str, optional\n\nUse specified edge data key as distance. For example, setting distance=’weight’ will use the edge weight to measure the distance from the node n during ego_graph generation.\n\nnormalized : bool, optional\n\nIf True the betweenness values are normalized by 2/((n-1)(n-2)), where n is the number of nodes in subgraph.\n\nverbose : bool (default True)\n\nif True, shows progress bars in loops and indication of steps\n\n**kwargs\n\nkwargs for networkx.betweenness_centrality or networkx.edge_betweenness_centrality\n\nReturns\nGraph\n\nnetworkx.Graph\n\nNotes\n\nIn case of angular betweenness, implementation is based on “Tasos Implementation”.\n\nExamples\n\n>>> network_graph = mm.betweenness_centrality(network_graph)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8285529,"math_prob":0.9981843,"size":1879,"snap":"2020-34-2020-40","text_gpt3_token_len":501,"char_repetition_ratio":0.17226666,"word_repetition_ratio":0.2084942,"special_character_ratio":0.2352315,"punctuation_ratio":0.14159292,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998882,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T01:58:04Z\",\"WARC-Record-ID\":\"<urn:uuid:48440449-7a81-494d-8f18-7e032abb9c5a>\",\"Content-Length\":\"15111\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cbf30725-99af-4d70-a3d0-cb8b6feac77c>\",\"WARC-Concurrent-To\":\"<urn:uuid:35ad0593-6d17-45ae-8f75-fa58d537c740>\",\"WARC-IP-Address\":\"104.17.33.82\",\"WARC-Target-URI\":\"http://docs.momepy.org/en/stable/generated/momepy.betweenness_centrality.html\",\"WARC-Payload-Digest\":\"sha1:NPVATCDXWZFBMZB2XPZMXXAOUW2Y7X6G\",\"WARC-Block-Digest\":\"sha1:D3TW3LQFW7W3NEEBHSRAJUKYCBBH4JJE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439740423.36_warc_CC-MAIN-20200815005453-20200815035453-00207.warc.gz\"}"}
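The node-betweenness formula in the momepy record above can be checked by brute force on a toy graph. This is a pure-Python sketch of the definition only — momepy itself delegates to networkx, which uses the far faster Brandes algorithm — and the 4-cycle example graph is my own, not from the source:

```python
from collections import deque

def all_shortest_paths(adj, s, t):
    """Enumerate every shortest s-t path in an unweighted graph.

    BFS gives distances from s; we then grow paths that only step to
    neighbors exactly one level deeper, so each completed path is shortest.
    """
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    paths = []
    def extend(path):
        u = path[-1]
        if u == t:
            paths.append(path)
            return
        for w in adj[u]:
            if dist.get(w) == dist[u] + 1:
                extend(path + [w])
    if t in dist:
        extend([s])
    return paths

def betweenness(adj, v):
    """c_B(v) = sum over pairs s < t of sigma(s,t|v) / sigma(s,t)."""
    total = 0.0
    nodes = sorted(adj)
    for i, s in enumerate(nodes):
        for t in nodes[i + 1:]:
            if v in (s, t):
                continue
            paths = all_shortest_paths(adj, s, t)
            if paths:
                total += sum(v in p for p in paths) / len(paths)
    return total

# Toy "street network": a 4-cycle 0-1-2-3-0. For the pair (0, 2) there
# are two shortest paths and only one passes through node 1, so node 1
# contributes 1/2 — matching the fractional counting in the formula.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(betweenness(adj, 1))  # 0.5
```

On a path graph 0-1-2 the middle node scores 1.0, since the single (0, 2) shortest path must pass through it; these unnormalized values match networkx's `betweenness_centrality(..., normalized=False)`.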
https://www.fine-tools.com/fraeserset361972.html
[ "", null, "# Set of 6 Router Bits (Mixed Set 1) cons. of 4 Straight Cutters and 2 Dovetail Bits\n\nThis set of straight cutters consists of TC and TCT (see spec. below) regular router bits which we also sell as single bits.\nComes in wooden box\n\n## Content:", null, "Straight Cutters\nSolid TC D = 6 mm B = 19 mm L = 51 mm Z = 2+1 S = 8 mm\nCode 360203\n\nSolid TC D = 8 mm B = 19 mm L = 51 mm Z = 2+1 S = 8 mm\nCode 360205\n\nTCT D = 10 mm B = 19 mm L = 51 mm Z = 2+1 S = 8 mm\nCode 360208\n\nTCT D = 12 mm B = 19 mm L = 51 mm Z = 2+1 S = 8 mm\nCode 360211", null, "Dovetail Bits\nSolid TC D = 8 mm B = 9.5 mm L = 42 mm Angle = 9° S = 8 mm\nCode 360471\n\nTCT D = 12.7 mm B = 13 mm L = 49 mm Angle = 14° S = 8 mm\nCode 360473" ]
[ null, "https://www.qy1.de/img/pictures/logo-en.png", null, "https://www.qy1.de/img/klein-nutfraeser1b.jpg", null, "https://www.qy1.de/img/klein-gratfraeser1b.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7839996,"math_prob":0.99859416,"size":750,"snap":"2019-51-2020-05","text_gpt3_token_len":307,"char_repetition_ratio":0.16085792,"word_repetition_ratio":0.4467005,"special_character_ratio":0.45866665,"punctuation_ratio":0.038043477,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97140986,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,3,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T23:03:30Z\",\"WARC-Record-ID\":\"<urn:uuid:87120ec1-833d-4974-b5ed-30099aef1077>\",\"Content-Length\":\"20282\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:460c647b-5bc2-4662-8011-6ce7065abce3>\",\"WARC-Concurrent-To\":\"<urn:uuid:39644a18-4b73-4b21-84c8-dcc467842703>\",\"WARC-IP-Address\":\"46.252.29.84\",\"WARC-Target-URI\":\"https://www.fine-tools.com/fraeserset361972.html\",\"WARC-Payload-Digest\":\"sha1:ZCVVUNNO5OC6QK3BATCAGSYDVHGTAM6V\",\"WARC-Block-Digest\":\"sha1:B5S46TPVCBRMYVEGBIFD2HTYMZXGUI4G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540491491.18_warc_CC-MAIN-20191206222837-20191207010837-00548.warc.gz\"}"}
https://www.abcteach.com/search.php?include_clipart=0&sort=2&q=count+to+ten&category=0&file_type=0&include_phrase=&exclude_keywords=&page=3
[ "# SEARCH RESULTS: count to ten\n\nClip Art:", null, "= Member Site Document\nThere are 168 documents matching your search.\n•", null, "Interactive .notebook activity where students circle bundles of tens and write the tens and ones digits to find the number of objects. Common Core: Math: 1.NBT.2\n\n•", null, "This Ten Frame Set with Markers - Circus Theme (PreK-2) Math is perfect to practice counting skills. Your elementary grade students will love this Ten Frame Set with Markers - Circus Theme (PreK-2) Math. Set of three circus-themed ten frame cards with markers. CC: Math: K.CC.B.4\n•", null, "This Ten Frame Set with Markers - Fair Theme (PreK-2) Math is perfect to practice counting skills. Your elementary grade students will love this Ten Frame Set with Markers - Fair Theme (PreK-2) Math. Set of three fair-themed ten frame cards with markers. CC: Math: K.CC.B.4\n•", null, "This Numbers and Operations in Base Ten - Spring Theme (grade 1) Math Mats is perfect to practice number and operation skills. Your elementary grade students will love this Numbers and Operations in Base Ten - Spring Theme (grade 1) Math Mats. Eight colorful math mats and cards for practice in grouping numbers into tens and ones, and using place value to count, compare, add and subtract. Printable manipulatives are included or you can use real items. CC: Math: 1.NBT.A.1, B.2-3, C.4-6\n•", null, "This Counting and Addition to 20 (K-1) Penguin Theme Unit is perfect to practice addition and subtraction skills. Your elementary grade students will love this Counting and Addition to 20 (K-1) Penguin Theme Unit. This penguin theme unit is a great way to practice counting and adding to 20. This 21 page unit includes: tracing numbers, cut and paste, finding patterns, ten frame activity, in and out boxes and much more! 
CC: Math: K.CC.B.4\n•", null, "Interactive notebook activity about tens and ones place values. Correlated with the common core curriculum math standards. Count blocks to determine the correct place values. Common Core Math: Base Ten: 1.NBT.1, 2.NBT.1\n\n•", null, "Interactive activity where students count the number of objects on the page and decide how many there are. They click the answer tab to reveal the answers. CC: Math: K.CC.B.4\n\n•", null, "Eight colorful math mats and cards for practice in grouping numbers into tens and ones, and using place value to count, compare, add and subtract. Printable manipulatives are included or you can use real items. CC: Math: 1.NBT.A.1, B.2-3, C.4-6\n\n•", null, "Eight colorful math pages for practice in grouping numbers into tens and ones, and using place value to count, compare, add and subtract. May be used for student practice or to model use of corresponding printable math mats. CC: Math: 1.NBT.A.1, B.2-3, C.4-6\n\n•", null, "This 10 & Ten (ten pictures) Number Sign is perfect to practice counting skills. Your elementary grade students will love this 10 & Ten (ten pictures) Number Sign. Printable number signs that come in color & (b/w). Great to hang in classroom so students can recognize and learn numbers.\n•", null, "This Place Value Blocks Tens & Ones - Booklet is perfect to practice place value skills. Your elementary grade students will love this Place Value Blocks Tens & Ones - Booklet. Place value booklet using blocks to count ones and tens.\n•", null, "This Circle the Tens 2 (primary/elem) Math Worksheet is perfect to practice number recognition skills. Your elementary grade students will love this Circle the Tens 2 (primary/elem) Math Worksheet. Printable themed math worksheets, where students must circle the tens, and list the remaining single digit values.\n•", null, "This Circle the Tens 3 (primary/elem) Math Worksheet is perfect to practice number recognition skills. 
Your elementary grade students will love this Circle the Tens 3 (primary/elem) Math Worksheet. Printable themed math worksheets, where students must circle the tens, and list the remaining single digit values.\n•", null, "This Circle the Tens 4 (primary/elem) Math Worksheet is perfect to practice number recognition skills. Your elementary grade students will love this Circle the Tens 4 (primary/elem) Math Worksheet. Printable themed math worksheets, where students must circle the tens, and list the remaining single digit values.\n•", null, "This Circle the Tens 5 (primary/elem) Math Worksheet is perfect to practice number recognition skills. Your elementary grade students will love this Circle the Tens 5 (primary/elem) Math Worksheet. Printable themed math worksheets, where students must circle the tens, and list the remaining single digit values.\n•", null, "This Circle the Tens 6 (primary/elem) Math Worksheet is perfect to practice number recognition skills. Your elementary grade students will love this Circle the Tens 6 (primary/elem) Math Worksheet. Printable themed math worksheets, where students must circle the tens, and list the remaining single digit values.\n•", null, "This Circle the Tens 7 (primary/elem) Math Worksheet is perfect to practice number recognition skills. Your elementary grade students will love this Circle the Tens 7 (primary/elem) Math Worksheet. Printable themed math worksheets, where students must circle the tens, and list the remaining single digit values.\n•", null, "This Circle the Tens 9 (primary/elem) Math Worksheet is perfect to practice number recognition skills. Your elementary grade students will love this Circle the Tens 9 (primary/elem) Math Worksheet. Printable themed math worksheets, where students must circle the tens, and list the remaining single digit values.\n•", null, "This Circle the Tens 10 (primary/elem) Math Worksheet is perfect to practice number recognition skills. 
Your elementary grade students will love this Circle the Tens 10 (primary/elem) Math Worksheet. Printable themed math worksheets, where students must circle the tens, and list the remaining single digit values.\n•", null, "This Circle the Tens 11 (primary/elem) Math Worksheet is perfect to practice number recognition skills. Your elementary grade students will love this Circle the Tens 11 (primary/elem) Math Worksheet. Printable themed math worksheets, where students must circle the tens, and list the remaining single digit values.\n•", null, "This Circle the Tens 12 (primary/elem) Math Worksheet is perfect to practice number recognition skills. Your elementary grade students will love this Circle the Tens 12 (primary/elem) Math Worksheet. Printable themed math worksheets, where students must circle the tens, and list the remaining single digit values.\n•", null, "This Ten Frames Halloween Pumpkins Math is perfect to practice addition skills. Your elementary grade students will love this Ten Frames Halloween Pumpkins Math. Eight pages, each with an addition ten frame question. Cut each card and make into a booklet or laminate and use in a math center. Student could also use markers to answer the ten frame questions.\n•", null, "Set of 1-10 ten frames with ladybug theme. Includes a set of cards with the numerals 1-10. Students match ten frame with the correct number card. CC: Math: K.CC.B.4\n\n•", null, "This Ten Frame Apple Addition Math is perfect to practice addition skills. Your elementary grade students will love this Ten Frame Apple Addition Math. Five pages, each with two addition ten frame questions. Cut each card and make into a booklet or laminate and use in a math center. Student could also use markers to answer the ten frame questions.\n•", null, "This Reindeer Ten Frame Activity is perfect to practice counting skills. Your elementary grade students will love this Reindeer Ten Frame Activity. 
Three page ten frame activity with a cute and colorful reindeer theme.\n\n•", null, "Special awards for counting to 10, 20, 50 and 100. Four awards to a page.\n\n•", null, "Interactive Notebook activity that includes 10 pages of interactive place value activities. Count the number of blocks on each page and determine how many ones, tens and hundreds blocks there are total. Common Core Math: 2.NBT.A.1, 2.NBT.A.7\n\n•", null, "This Numbers - Zero to Ten Word Wall is perfect to practice basic math skills. Your elementary grade students will love this Numbers - Zero to Ten Word Wall. Cards with numbers and words. May be used for word walls, flashcards and games.\n•", null, "Practice reading numerals and counting orally with this fun Notebook file.\n•", null, "\"A stagecoach can drive ten miles per hour. How far can it drive in one hour? How far in two hours? How far in nine hours?\" All problems feature skip counting by 10s.\n•", null, "\"If one witch can cast ten spells, how many spells can four witches cast?\" Skip counting by 10s; five problems with a Halloween theme.\n•", null, "\"A blue whale is diving at a speed of ten feet per second. How many feet does it dive in five seconds? How many in six? How many in seven?\" Five skip counting word problems with an endangered animal theme.\n•", null, "Eight colorful math pages for practice in grouping numbers into tens and ones, and using place value to count, compare, add and subtract. May be used for student practice or to model use of corresponding printable math mats. CC: Math: 1.NBT.A.1, B.2-3, C.4-6\n\n•", null, "Ten children get on the bus at every stop. How many children are on the bus after one stop? How many after two? How many after three?\n•", null, "This packet contains an overview, for both teachers and parents, of the common core standards for Kindergarten Math. Ten student-friendly posters describe the standards in easy to understand terms. 
4 student checklists are included for students to track their mastery of each standard.\n\nCommon Core Math: K.CC.1- K.CC.7\n\n•", null, "These posters are used to show students 4 different ways a number of objects can be represented; numeral, word form, ten frame and domino pattern. This supports students learning numbers based on the Common Core Standards for kindergarten. Common Core Math: K.CC.3, K.CC.4\n\n•", null, "Audio version: PowerPoint slide show for counting up to ten.\n•", null, "These posters are used to show students 4 different ways a number of objects can be represented; numeral, word form, ten frame and domino pattern. Common Core Math: K.CC.3, K.CC.4\n\n•", null, "Interactive Flipchart activity where students count the number of ones, tens and hundreds on each page to determine the total number of blocks. Includes printable PDF worksheet and create your own problem. Common Core Math: 2.NBT.A.1, 2.NBT.A.7\n\n•", null, "10 page word wall. Match numbers to words or vice versa.", null, "Interactive: Notebook: Math - Tens and Ones", null, "Ten Frame Set with Markers - Circus Theme (PreK-2) Math", null, "Ten Frame Set with Markers - Fair Theme (PreK-2) Math", null, "Numbers and Operations in Base Ten - Spring Theme (grade 1) Math Mats", null, "Counting and Addition to 20 (K-1) Penguin Theme Unit", null, "Interactive: Notebook: Math - Place Values (Tens and Ones)", null, "Interactive: Notebook: Math - Fair Ten Frames", null, "Math Mats: Numbers & Operations in Base Ten - Fall Theme (grade 1)", null, "Interactive: Notebook: Math Mats: Numbers & Operations in Base Ten (Place Value) - Fall Theme (grade 1)", null, "10 & Ten (ten pictures) Number Sign", null, "Place Value Blocks Tens & Ones - Booklet", null, "Circle the Tens 2 (primary/elem) Math Worksheet", null, "Circle the Tens 3 (primary/elem) Math Worksheet", null, "Circle the Tens 4 (primary/elem) Math Worksheet", null, "Circle the Tens 5 (primary/elem) Math Worksheet", null, "Circle the Tens 6 
(primary/elem) Math Worksheet", null, "Circle the Tens 7 (primary/elem) Math Worksheet", null, "Circle the Tens 9 (primary/elem) Math Worksheet", null, "Circle the Tens 10 (primary/elem) Math Worksheet", null, "Circle the Tens 11 (primary/elem) Math Worksheet", null, "Circle the Tens 12 (primary/elem) Math Worksheet", null, "Ten Frames Halloween Pumpkins Math", null, "Math: Ten Frames Set with Numbers - Ladybug Theme", null, "", null, "Reindeer Ten Frame Activity", null, "Math: Special Awards - I Can Count!", null, "Interactive: Notebook: Math: Place Values (Ones, Tens, Hundreds)", null, "Numbers - Zero to Ten Word Wall", null, "Interactive: Notebook: Counting to Ten (prek-1)", null, "Word Problems: Cowboy Skip Counting (elem)", null, "Word Problems: Halloween Skip Counting (primary)", null, "Word Problems: Endangered Animal Skip Counting (primary)", null, "Interactive: Notebook: Math Mats: Numbers & Operations in Base Ten (Place Value) - Summer Theme (grade 1)", null, "Word Problems: School Themed Skip Counting (primary)", null, "Common Core: Math Standards Poster Set - Kindergarten", null, "Number Posters: 11-20 (Prek-K)", null, "PowerPoint: Presentation with Audio: I can count to...(pre-k/primary)", null, "Number Posters: 0-10 (Prek-K)", null, "Interactive: Flipchart: Math: Place Values (Ones, Tens, Hundreds)", null, "Math: Word Wall: Matching Tens, Hundreds and Thousands" ]
[ null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, 
"https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, 
"https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null, "https://static.abcteach.com/images/search/memstar_norm.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8314409,"math_prob":0.7447094,"size":3130,"snap":"2020-10-2020-16","text_gpt3_token_len":757,"char_repetition_ratio":0.12891875,"word_repetition_ratio":0.25748503,"special_character_ratio":0.22715655,"punctuation_ratio":0.19109195,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9891985,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-26T17:20:00Z\",\"WARC-Record-ID\":\"<urn:uuid:daf53148-22e8-4a12-b58b-68ce728e397e>\",\"Content-Length\":\"200280\",\"Content-Type\":\"application/http; 
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3376f536-4531-4d60-8db8-a011f88093f4>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b9a9e70-2ea2-4039-98a3-1b1e719e24a2>\",\"WARC-IP-Address\":\"184.105.229.18\",\"WARC-Target-URI\":\"https://www.abcteach.com/search.php?include_clipart=0&sort=2&q=count+to+ten&category=0&file_type=0&include_phrase=&exclude_keywords=&page=3\",\"WARC-Payload-Digest\":\"sha1:HSE5ABE2C2UZ6R3I2XHSQ4UAGEIYHPRZ\",\"WARC-Block-Digest\":\"sha1:5RRZ6WKYPLHYAAL73ESAJ3MAEAUCVZA3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146414.42_warc_CC-MAIN-20200226150200-20200226180200-00417.warc.gz\"}"}
https://mathoverflow.net/questions/333199/multiplicativity-of-the-homology-atiyah-hirzebruch-spectral-sequence-for-a-ring
[ "Multiplicativity of the homology Atiyah-Hirzebruch spectral sequence for a ring spectrum\n\nLet $$E$$ be a ring spectrum and $$F$$ a connective spectrum. Then we have a convergent Atiyah-Hirzebruch spectral sequence $$H_s(F,E_t) \\Rightarrow E_{s+t}(F)$$. Suppose now that $$F$$ is also a ring spectrum. Then does the multiplication on the $$E^2$$ page induce multiplications on all subsequent pages, and do they agree with the multiplication on $$E_\\ast(F)$$?\n\nBizarrely, the only reference I can find for the multiplicative properties of the Atiyah-Hirzebruch spectral sequence is another MO question, and there is only treated the case of the cohomological AHSS where $$F$$ is a space, rather than the homological spectral sequence where $$F$$ is a ring spectrum.\n\n• You can simply apply the same arguments as the cohomological case. Jun 4 '19 at 7:07\n\nYou can give a proof of multiplicativity by using that the smash product preserves connectivity. Here is a sketch proof.\n\n(EDIT: Denis Nardin pointed me towards this reference by Dugger. This reference shows that I was a little too cavalier about pairings in the homotopy category vs. lifting them to the stable category, and I've tried to make some adjustments accordingly.)\n\nFor any $$n$$, let $$\\tau_{\\geq n} E$$ be the $$(n-1)$$-connected cover of $$E$$, assembling into the Whitehead tower $$\\dots \\to \\tau_{\\geq 2} E \\to \\tau_{\\geq 1} E \\to \\tau_{\\geq 0} E \\to \\tau_{\\geq -1} E \\to \\dots \\to E.$$ The associated graded consists of the spectra $$\\tau_{\\geq n} E / \\tau_{\\geq n+1} E = K(\\pi_n E, n)$$.\n\nThe smash product of an $$(n-1)$$-connected object with an $$(m-1)$$-connected object is $$(n+m-1)$$-connected, and so the composite maps $$\\tau_{\\geq n} E \\wedge \\tau_{\\geq m} E \\to E \\wedge E \\to E$$ lift, essentially uniquely up to homotopy, to $$\\tau_{\\geq n+m} E$$. This means that, in the homotopy category, the tower $$\\{\\tau_{\\geq n} E\\}$$ forms a filtered algebra. 
Because these lifts are essentially unique, this multiplication can also be lifted to a more structured one, lifting the Whitehead tower to a filtered algebra in spectra. (To be precise, a filtered algebra is a lax symmetric monoidal functor from the symmetric monoidal poset $$(\Bbb Z, \geq, +)$$ to spectra.)

The smash product is symmetric monoidal. Therefore, if $$F$$ is a ring spectrum the tower $$\{(\tau_{\geq n} E) \wedge F\}$$ is also a filtered algebra, and the associated graded consists of the spectra $$K(\pi_n E, n) \wedge F$$ because the smash product preserves cofibers. Therefore, the associated-graded spectral sequence starts with the homology of $$F$$ with coefficients in the homotopy groups of $$E$$: this constructs the Atiyah-Hirzebruch spectral sequence, except that the indexing is slightly different (the $$E_1$$-term is a reindexed $$E_2$$-term of the Atiyah-Hirzebruch SS).

Thus, this boils down to the following assertion:

If $$\{\dots \to R^{-1} \to R^0 \to R^1 \to \dots\}$$ is a (commutative / associative / unital) filtered algebra, then the associated-graded spectral sequence with $$E^1_{p,q} = \pi_{p+q}(R^p / R^{p-1})$$ is multiplicative (and commutative / associative / unital).

This is a little more standard. Dugger's reference that I linked to above gives a proof; he also says that he \"found the existing literature extremely frustrating,\" which I don't think will surprise many people.

• The only reference I know for the assertion you cite is this. Jun 4 '19 at 8:08
• I second the reference to Dugger. I had a PhD student who grappled with this a couple of years ago, and Dugger was the most satisfactory reference that we could find.
Jun 4 '19 at 8:54\n\nJust a brief response to Tyler and John's answers- but not brief enough to fit into a comment.\n\nHere is one way to make the filtered object $$\\{\\tau_{\\ge n}E\\}$$ as structured as you'd like (just spelling out exactly what Tyler indicated).\n\nConsider the $$\\infty$$-category $$\\mathsf{Fun}(\\mathbb{Z}, \\mathsf{Sp})$$ of filtered spectra, where $$\\mathbb{Z}$$ is regarded as a poset. This has a symmetric monoidal structure coming from Day convolution. We have a colimit functor $$\\mathsf{Fun}(\\mathbb{Z}, \\mathsf{Sp}) \\to \\mathsf{Sp}$$ and its right adjoint the 'constant tower' functor. The constant tower functor can be promoted to a symmetric monoidal functor $$\\mathsf{Sp} \\to \\mathsf{Fun}(\\mathbb{Z}, \\mathsf{Sp})$$ (this is true in general for Day convolution stuff, but here it's extra true since there's only one colimit preserving symmetric monoidal functor from $$\\mathsf{Sp}$$ to any other stable, presentably symmetric monoidal $$\\infty$$-category anyway...).\n\nConsider the full subcategory $$\\mathcal{C} \\subseteq \\mathsf{Fun}(\\mathbb{Z}, \\mathsf{Sp})$$ spanned by the objects $$\\{E_n\\}$$ such that $$E_n$$ is $$n$$-connective, i.e. $$E_n = \\tau_{\\ge n}E_n$$. Staring at the formula for Day convolution, we learn that this subcategory is closed under the symmetric monoidal structure because smashing $$n$$-connective and $$m$$-connective thing gets you an $$(n+m)$$-connective thing (and $$k$$-connective things are closed under hocolims).\n\nNow general nonsense says we get a colocalization $$\\mathsf{Fun}(\\mathbb{Z}, \\mathsf{Sp}) \\to \\mathcal{C}$$ which is canonically lax symmetric monoidal. (HA.2.2.1.1).\n\nThe composite $$\\mathsf{Sp} \\to \\mathsf{Fun}(\\mathbb{Z}, \\mathsf{Sp}) \\to \\mathcal{C} \\to \\mathsf{Fun}(\\mathbb{Z}, \\mathcal{C})$$ is then canonically lax symmetric monoidal and is given on objects by $$E \\mapsto \\{\\tau_{\\ge n}E\\}$$.\n\nFrom here you can get whatever you want! 
For example, this induces a lax symmetric monoidal functor on homotopy categories, which gives you the pairings you needed in Tyler's answer. But it does much more: it also tells you that if you start with $$E$$, an algebra over any operad $$\mathcal{O}$$, then the Whitehead tower is canonically a filtered algebra over that operad.

(This is not the end of the story, I think. I always get confused about this but, if I remember correctly, people often like to ask for some filtered version of the operad to act on this filtered gadget, in order to get the story of power operations in the spectral sequence? I might be confusing this with something else though... again- I never learned that story properly).

(A comment to Tyler's answer.) Strictifying pairings from the stable homotopy category to spectra can be tricky. To even get started with an inductive approach let me assume $$E$$ is connective, so that $$E = \tau_{\ge0}E$$. Let $$p_n : \tau_{\ge n} E \to \tau_{\ge n-1} E$$ be the maps in the Whitehead tower, and let $$\mu = \mu_{0,0} : E \wedge E \to E$$ be the given pairing. The composite $$\mu_{0,0} (p_1 \wedge 1) : \tau_{\ge1} E \wedge E \to E$$ factors up to homotopy through $$p_1$$. We may assume that each $$p_n$$ is a fibration (and that everything in sight is cofibrant), so by the homotopy lifting property there is also a strict factorization as $$p_1 \mu_{1,0}$$. Hence we can choose lifts $$\mu_{1,0} : \tau_{\ge1} E \wedge E \to \tau_{\ge1} E$$ and $$\mu_{0,1} : E \wedge \tau_{\ge1} E \to \tau_{\ge1} E$$ such that $$p_1 \mu_{1,0} = \mu_{0,0} (p_1 \wedge 1)$$ and $$p_1 \mu_{0,1} = \mu_{0,0} (1 \wedge p_1)$$. The composites $$\mu_{1,0} (1 \wedge p_1) : \tau_{\ge1} E \wedge \tau_{\ge1} E \to \tau_{\ge1} E$$ and $$\mu_{0,1} (p_1 \wedge 1) : \tau_{\ge1} E \wedge \tau_{\ge1} E \to \tau_{\ge1} E$$ agree when projected to $$E$$, but without further work they may not agree as maps to $$\tau_{\ge1} E$$.
In particular, they may not have a common factorization as $$p_2 \mu_{1,1}$$ for some $$\mu_{1,1} : \tau_{\ge1} E \wedge \tau_{\ge1} E \to \tau_{\ge2} E$$. So strictifying a pairing is not just a matter of obstruction theory or essential uniqueness of lifts. One may also need to change the models for the spectra involved.

In the case of spectra formed from simplicial sets, there may be a sufficiently functorial (and monoidal) construction of Whitehead towers of simplicial sets, hence also of symmetric spectra in simplicial sets, to ensure that $$\mu$$ induces compatible $$\mu_{m,n} : \tau_{\ge m}E \wedge \tau_{\ge n}E \to \tau_{\ge m+n}E$$ for all integers $$m$$ and $$n$$, but I do not recall checking this carefully, and if correct, it would be badly model-dependent.

There is a $$2$$-categorical approach that works. We can choose maps $$\mu_{m,n} : \tau_{\ge m} E \wedge \tau_{\ge n} E \to \tau_{\ge m+n} E$$, \"horizontal\" homotopies $$h_{m,n} : \mu_{m-1,n} (p_m \wedge 1) \simeq p_{m+n} \mu_{m,n}$$ and \"vertical\" homotopies $$v_{m,n} : \mu_{m,n-1} (1 \wedge p_n) \simeq p_{m+n} \mu_{m,n}$$. In the case of a Whitehead tower, one can find a $$2$$-homotopy between the composite homotopies $$v_{m-1,n} h_{m,n}$$ and $$h_{m,n-1} v_{m,n}$$, and this suffices to get a pairing of Cartan-Eilenberg systems, hence also a pairing of spectral sequences. These $$1$$- and $$2$$-homotopies produce a strict pairing of filtered spectra $$Tel(E) \wedge Tel(E) \to E$$, where $$Tel$$ denotes the mapping telescope.

This can surely be promoted to an $$\infty$$-categorical statement, but the $$2$$-categorical one suffices for pairings of spectral sequences.

• Dear John, you are completely correct, and to try and build this with a strict model is very difficult, and doing so functorially is even worse (requiring us to first contend with a model for a functorial Postnikov tower). I should have been more responsible and not pushed this issue under the rug.
Jun 6 '19 at 6:53\n• The 2-categorical approach that you suggest works well to build the pairing. I worry about the gradual accumulation of complexity. Once you have constructed this filtered pairing, you probably want to know that the pairing on the spectral sequence is associative, commutative, unital; hence more homotopies and more 2-homotopies. And perhaps there is a clever way to do this; but the standard ways I can imagine proceeding look a lot like what one finds in Adams' blue-book proofs of associativity and commutativity of the smash product. Jun 6 '19 at 6:59\n• And so I think that, while it's bad to pretend that things are simpler than what they are, there is added value in understanding that there is some degree of canonicality here: e.g. that $Map(\\tau_{\\geq n} E \\wedge \\tau_{\\geq m} E, \\tau_{\\geq n+m} E) \\to Map(\\tau_{\\geq n} E \\wedge \\tau_{\\geq m} E, E)$ is a weak equivalence, letting us say that the endomorphism operad of $E$ maps, perhaps after some zigzag of equivalences, to the endomorphism operad of the filtered object $\\tau_{\\geq n} E$. Jun 6 '19 at 7:08\n• And while letting fly with some $\\infty$-categorical statement is perhaps irritating or irresponsible within the scope of the question, maybe that's what I should been more honest about and done instead; for the Whitehead tower, by its nature, is something that only wants to be defined up to contractible choice, and its natural pairing follows suit. Jun 6 '19 at 7:11" ]
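Spelling out what "multiplicative" means for the spectral sequence in the question — this is the standard convention, stated here for convenience rather than something taken from the answers above: every page is a bigraded ring whose differential is a derivation, and the product on each page induces the one on the next.

```latex
% Each page (E^r_{s,t}, d^r) is a bigraded ring and d^r satisfies the
% Leibniz rule with the usual sign:
d^r(x \cdot y) \;=\; d^r(x)\cdot y \;+\; (-1)^{s+t}\, x \cdot d^r(y),
\qquad x \in E^r_{s,t},\; y \in E^r_{s',t'}.
% The product on E^{r+1} = H(E^r, d^r) is the one induced from E^r, and on
% E^\infty it recovers the associated graded of the product on E_*(F).
```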
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.757406,"math_prob":0.99863327,"size":2790,"snap":"2022-05-2022-21","text_gpt3_token_len":982,"char_repetition_ratio":0.1769562,"word_repetition_ratio":0.071428575,"special_character_ratio":0.34802866,"punctuation_ratio":0.11608624,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998327,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-26T15:15:19Z\",\"WARC-Record-ID\":\"<urn:uuid:86ab66ad-030f-4f55-840c-6d6b48552cba>\",\"Content-Length\":\"144168\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9504644a-b359-4c98-bb4c-7410ed68a764>\",\"WARC-Concurrent-To\":\"<urn:uuid:21fb5901-b96f-4e8d-b205-eebb1cd40419>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/333199/multiplicativity-of-the-homology-atiyah-hirzebruch-spectral-sequence-for-a-ring\",\"WARC-Payload-Digest\":\"sha1:JNM434TORLO6AP7FVOMJF6D2K2LYE2SZ\",\"WARC-Block-Digest\":\"sha1:ACV5T7TZGSXH3ZFSVZXSTDJVKQXFYRQU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304954.18_warc_CC-MAIN-20220126131707-20220126161707-00113.warc.gz\"}"}
http://enpaali.mihanblog.com/post/389
[ "# enpaali

Friday, 10 Farvardin 1397 (30 March 2018)

# Fraction as a product of a whole number

Author: Sherri Henry", null, "", null, "", null, "`fraction-as-a-product-of-a-whole-number.zip`", null, "Math kahoot fractions. Basic math for adultsfractions. If you multiply any whole number fraction you will get smaller number here example this 2. The number copies of. Aleks math tutorial. Find product specific information including cas msds protocols and references. Sal multiplies using repeated addition and fraction models. If you enter 432 the fraction calculator the simplified product. If the first number fraction less than one then the product will smaller than the second number because. Does that make sense oct steve v. This protein was labeled much greater extent. Multiply the denominators the fractions place the product the. This interactive exercise focuses multiplying fractions and reducing them when. The mole fraction moles target substance divided total moles involved. There are rows with each row. Subjects mathematics. Fraction whole number fraction", null, ". Multiplication form addition. Product two fractions the product fraction and its reciprocal hence the reciprocal the multiplicative inverse fraction. Here repeated addition problem involving fraction. Mixed numbers and fractions how convert from one the other. Their product cube. Example test and 1320 are equivalent fractions. In mathematics product the result multiplying expression that identifies factors multiplied. Fun math practice improve your skills with free problems estimate products fractions whole numbers and mixed numbers and thousands other practice lessons. Here are instructions finding the product of fractions. To find the product of fractions reduce the fractions the lowest common terms.
This stepbystep online fraction calculator will help you understand how add subtract divide and multiply fractions mixed numbers whole numbers and decimals 4th grade games build fractions from unit fractions applying and extending previous understandings operations whole numbers.. B explaining why multiplying given number fraction greater than results product greater than the given number. Multiply fractions 3. Sigmaaldrich offers sigmaa7030 bovine serum albumin for your research needs. Fraction simplifying calculator. The reciprocal proper fraction improper", null, ". What mean the complement proper fraction lesson complete course arithmetic. This complete lesson with instruction and exercises about prime factorization. In this example since three thirds whole the. The beauty using the prime factorization method that you can sure that the fractions reduction. Example multiply and 12. The result fraction with numerator that the product the fractions numerators and a. Multiplying the whole number produces copies joined end end the number line. Download our product instructions and activity guide view now round the fraction portions the mixed fractions the nearest whole number. You can explore whether this true for the product two fractions. Purchase this product deciding two fractions are equivalent using cross products. How can the answer improved product simple fractions interactive gizmo that illustrates graphically the meaning the product simple fractions multiplying fractions whole numbers number line. How add subtract multiply and divide fractions. Fractions fractions math circle put aside the fraction worksheets and get your students working together and and moving with this fraction what are unit fractions. Type your numerator and denominator numbers only please into the boxes then click the button. Multiply fractions 2.Often when multiplying whole number and fraction the resulting product will improper fraction. 
Its race with this fourplayer fraction game", null, ". Multiplying fractions and whole numbers visually. Math explained easy language plus puzzles games quizzes worksheets and forum. For k12 kids teachers and parents. Printable worksheets and lessons. Multiplying fraction and whole number find each product. When you divide fractionate something into parts two more you have what known common fraction. The number cannot written product two whole. This article help you answer question how multiply two fractions wtamu math tutorials and help. Start studying estimate product fractions using compatible numbers. In two black fractions the digits both the numerator and the denominator can modified clicking little their vertical midline. Access relevant readytoplay game around fractions fraction second math collection. We represent the height using one fraction and the width using another fraction. A free online fraction calculator for addition subtraction multiplication and division fractions and mixed numbers. Topic multiplication with fractions and. The product game fun interactive game that exercises your skill with factors and multiples. Basic review writing fractions simplest form. It usually best show answer using the simplest fraction this case", null, ". Multiply the denominator the whole number. Fraction one half mixed number and 34. This fraction worksheet great for working multiplying fractions with cross cancelling. First swap the positions some the purple parts. However adding subtracting dividing two unit. A interpret the product parts partition into equal parts equivalently the result sequence of. An interactive math lesson about multiplying fractions. That last product and made all prime numbers. Help your students develop conceptual understanding fractions with this five week fraction. Of the product and relate fraction and decimal. Writing fractions represent parts figures reallife data. Since the crossproducts are the same the fractions are equivalent. 
Resource collection mfas formative. This picture shows that added times. Fraction group word problems 0. Reduce and simplify fractions simplest form. Number Sigmaaldrich offers sigmaa4503 bovine serum albumin for your research needs

Apr 2012 what fraction the total. Finding the product fraction and mixed number. Consider the following. Login required ccss. Explaining why multiplying given number fraction less than results product smaller than the given number. When the product two fractions. A given number fraction greater than results product greater than. Explaining why multiplying given number fraction greater than results product greater than the given number fun math practice improve your skills with free problems estimate products fractions and whole numbers and thousands other practice lessons. Calculator reduce fraction its simplest form. Type the numerator and denominator the students learn how play the product game. Learn vocabulary terms and more with flashcards games and other study tools. Equivalent fractions prime factorization. Quickstart guide input. The second step multiply the two denominators. If the fraction greater than other words the numerator greater than the denominator the product will greater than the whole number

Comment()" ]
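Underneath the scrambled phrasing above is the standard rule: the product of two fractions has numerator equal to the product of the numerators and denominator equal to the product of the denominators, reduced to lowest terms. A minimal sketch using Python's standard-library `fractions` module:

```python
from fractions import Fraction

def multiply_fractions(a, b):
    """Product of two fractions: multiply numerators, multiply denominators.
    Fraction reduces the result to lowest terms automatically."""
    return Fraction(a.numerator * b.numerator, a.denominator * b.denominator)

print(multiply_fractions(Fraction(1, 2), Fraction(2, 3)))  # 1/3

# As the text claims: a whole number times a proper fraction gives a smaller
# number; times a fraction greater than one, a larger one.
print(Fraction(4) * Fraction(1, 2))  # 2
print(Fraction(4) * Fraction(3, 2))  # 6
```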
[ null, "https://lh3.googleusercontent.com/-S5DfTrdnuwk/WUDEjdAmk4I/AAAAAAAAAA0/f2uRXPM6_os0t6mczKxIMpDl5alPBp2awCLcBGAs/h120/rar9.png", null, "https://1.bp.blogspot.com/-ziefmN4S0qA/Wnh9zAZsKjI/AAAAAAAAAAQ/WQxfbMfjLikZUAL1YBrQecLDX4mVGqyRwCLcBGAs/s1600/Screenshot_2.png", null, "https://3.bp.blogspot.com/-OA604VrJkUg/Wnh9yy7y-3I/AAAAAAAAAAM/Pyvy4VuHwsE7xJwPZeRHRyCDtr_DOirOQCEwYBhgL/s1600/Screenshot_1.png", null, "https://i0.wp.com/shareitforpc.com/wp-content/uploads/2016/08/Download-1.png", null, "https://lh3.googleusercontent.com/-2yp3qUD2Uvw/VMUsqpwTR3I/AAAAAAAAAts/5856Wv3b1ZU/s640/blogger-image--1406643518.jpg", null, "http://mgh-images.s3.amazonaws.com/9780321847669/509939-2.4-36IE2.png", null, "https://i.ytimg.com/vi/v3xd14r7JH8/maxresdefault.jpg", null, "http://slideplayer.com/9703136/31/images/2/Vocabulary+reciprocal%3A+the+flip+of+a+fraction+%282+non-zero+numbers+with+a+product+of+1%29.jpg", null, "http://enpaali.mihanblog.com/public/public/html/imgcode.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82675475,"math_prob":0.9794146,"size":5825,"snap":"2020-24-2020-29","text_gpt3_token_len":1010,"char_repetition_ratio":0.22590621,"word_repetition_ratio":0.004848485,"special_character_ratio":0.16412017,"punctuation_ratio":0.09317443,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993088,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-05T13:13:50Z\",\"WARC-Record-ID\":\"<urn:uuid:ac396a04-b6df-44c1-a650-1efb71ce145a>\",\"Content-Length\":\"68822\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5dbcdb46-dc31-4beb-b893-263ca303ba3b>\",\"WARC-Concurrent-To\":\"<urn:uuid:1fb42504-c040-4eae-aabd-fed3f85b252e>\",\"WARC-IP-Address\":\"5.144.133.146\",\"WARC-Target-URI\":\"http://enpaali.mihanblog.com/post/389\",\"WARC-Payload-Digest\":\"sha1:F27IZU3CLXVJGNN364YVEEKBQ5PV3FZA\",\"WARC-Block-Digest\":\"sha1:SYVJB2CDB6TA57Y2SP676UZNNWYHMXSN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655887360.60_warc_CC-MAIN-20200705121829-20200705151829-00305.warc.gz\"}"}
https://cloudxlab.com/assessment/displayslide/453/apache-spark-getting-started-with-key-value-or-pair-rdd-max
[ "", null, "What would the following code do?\n\n``````var inputdata = List((1,2),(1,13),(1,4), (1, 6))\nvar kvrdd = sc.parallelize(inputdata)\ndef max(a:Int, b:Int): Int = {\nif(a > b) return a;\nreturn b\n}\nval out = kvrdd.reduceByKey(max)\nout.collect()\n``````\n\nNote - Having trouble with the assessment engine? Follow the steps listed here\n\nNo hints are availble for this assesment\n\nAnswer is not availble for this assesment" ]
[ null, "https://www.facebook.com/tr", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.64441055,"math_prob":0.9878831,"size":522,"snap":"2020-45-2020-50","text_gpt3_token_len":149,"char_repetition_ratio":0.08880309,"word_repetition_ratio":0.0,"special_character_ratio":0.29693487,"punctuation_ratio":0.19642857,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95031303,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-20T08:15:10Z\",\"WARC-Record-ID\":\"<urn:uuid:c3e960aa-00a9-4dc7-96d6-36c88ba92bf5>\",\"Content-Length\":\"49229\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:325730a1-65a7-4a20-b23f-ba4cefa1b050>\",\"WARC-Concurrent-To\":\"<urn:uuid:eb558971-f79a-4e49-a1df-77bee540acc3>\",\"WARC-IP-Address\":\"54.152.184.164\",\"WARC-Target-URI\":\"https://cloudxlab.com/assessment/displayslide/453/apache-spark-getting-started-with-key-value-or-pair-rdd-max\",\"WARC-Payload-Digest\":\"sha1:B2JWSTFRIJN7IAJIKYYCDJWE53GVWQSL\",\"WARC-Block-Digest\":\"sha1:BFJOT3QH7UR7QOUY7B2HGNF5XW2ABXG5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107871231.19_warc_CC-MAIN-20201020080044-20201020110044-00545.warc.gz\"}"}
https://www.hextobinary.com/unit/angle/from/arcsec/to/octant/82
[ "#### Arcsecond Octant

##### How many Octants are in 82 Arcseconds?

The answer is 82 Arcseconds are equal to 0.00050617283950617 Octants. Feel free to use our online unit conversion calculator to convert the unit from Arcsecond to Octant. Just simply, enter value 82 in Arcsecond and see the result in Octant. You can also Convert 83 Arcseconds to Octant

##### How to Convert 82 Arcseconds to Octants (arcsec to octant)

By using our Arcsecond to Octant conversion tool, you know that one Arcsecond is equivalent to 0.0000061728395061728 Octant. Hence, to convert Arcsecond to Octant, we just need to multiply the number by 0.0000061728395061728. We are going to use a very simple Arcsecond to Octant conversion formula for that. Please see the calculation example given below.

Convert 82 Arcsecond to Octant 82 Arcsecond = 82 × 0.0000061728395061728 = 0.00050617283950617 Octant

##### What is Arcsecond Unit of Measure?

Arcsec, also known as arc second or second of arc, is a unit of angular measurement. One second of arc is equal to 1/60 of an arcminute, 1/3600 of a degree, 1/1296000 of a turn. That means one full circle will have 1296000 arcseconds. Similar to arcmin, arcsec originated in Babylonian astronomy as a sexagesimal subdivision of the degree. It is primarily used in fields where most of the work involves working with small angles such as optometry, ophthalmology, and astronomy. In astronomy related work, it is used for comparison of angular diameter of Moon, Sun, and planets. Apart from that, it is also used in cartography and navigation.

##### What is the symbol of Arcsecond?

The symbol of Arcsecond is arcsec which means you can also write it as 82 arcsec.

##### What is Octant Unit of Measure?

Octant is a unit of angular measurement. One octant is equal to 45 degrees.
It measures an angle up to 90 degrees with the help of 45 degree arc and reflecting optics which basically doubles the angle.\n\n##### What is the symbol of Octant?\n\nThe symbol of Octant is octant which means you can also write it as 82 octant.\n\n##### Arcsecond to Octant Conversion Table\n Arcsecond [arcsec] Octant [octant] 82 0.00050617283950617 164 0.0010123456790123 246 0.0015185185185185 328 0.0020246913580247 410 0.0025308641975309 492 0.003037037037037 574 0.0035432098765432 656 0.0040493827160494 738 0.0045555555555556 820 0.0050617283950617 8200 0.050617283950617 82000 0.50617283950617\n##### Arcsecond to Other Units Conversion Chart\n Arcsecond [arcsec] Output 82 Arcsecond in Arcmin equals to 1.37 82 Arcsecond in Circle 1/10 equals to 0.00063271604938272 82 Arcsecond in Circle 1/16 equals to 0.0010123456790123 82 Arcsecond in Circle 1/2 equals to 0.00012654320987654 82 Arcsecond in Circle 1/4 equals to 0.00025308641975309 82 Arcsecond in Circle 1/6 equals to 0.00037962962962963 82 Arcsecond in Circle 1/8 equals to 0.00050617283950617 82 Arcsecond in Cycle equals to 0.000063271604938272 82 Arcsecond in Degree equals to 0.022777777777778 82 Arcsecond in Full Circle equals to 0.000063271604938272 82 Arcsecond in Gon equals to 0.025308641975309 82 Arcsecond in Gradian equals to 0.025308641975309 82 Arcsecond in Mil equals to 0.40493827160494 82 Arcsecond in Minute equals to 1.37 82 Arcsecond in Octant equals to 0.00050617283950617 82 Arcsecond in Point equals to 0.0020246913580247 82 Arcsecond in Quadrant equals to 0.00025308641975309 82 Arcsecond in Radian equals to 0.00039754721850982 82 Arcsecond in Second equals to 82 82 Arcsecond in Sextant equals to 0.00037962962962963 82 Arcsecond in Sign equals to 0.00075925925925926 82 Arcsecond in Turn equals to 0.000063271604938272\n##### Other Units to Arcsecond Conversion Chart\n Output Arcsecond [arcsec] 82 Arcmin in Arcsecond equals to 4920 82 Circle 1/10 in Arcsecond equals to 10627200 82 Circle 1/16 in 
Arcsecond equals to 6642000 82 Circle 1/2 in Arcsecond equals to 53136000 82 Circle 1/4 in Arcsecond equals to 26568000 82 Circle 1/6 in Arcsecond equals to 17712000 82 Circle 1/8 in Arcsecond equals to 13284000 82 Cycle in Arcsecond equals to 106272000 82 Degree in Arcsecond equals to 295200 82 Full Circle in Arcsecond equals to 106272000 82 Gon in Arcsecond equals to 265680 82 Gradian in Arcsecond equals to 265680 82 Mil in Arcsecond equals to 16605 82 Minute in Arcsecond equals to 4920 82 Octant in Arcsecond equals to 13284000 82 Point in Arcsecond equals to 3321000 82 Quadrant in Arcsecond equals to 26568000 82 Radian in Arcsecond equals to 16913714.11 82 Second in Arcsecond equals to 82 82 Sextant in Arcsecond equals to 17712000 82 Sign in Arcsecond equals to 8856000 82 Turn in Arcsecond equals to 106272000" ]
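Every entry in the tables above comes from one multiplication: an octant is 45°, each degree is 3600 arcseconds, so one octant is 162000 arcseconds and one arcsecond is 1/162000 ≈ 0.0000061728 octant. A sketch of both directions:

```python
ARCSEC_PER_OCTANT = 45 * 3600  # 1 octant = 45 degrees, 1 degree = 3600 arcsec

def arcsec_to_octant(arcsec):
    """Convert arcseconds to octants by dividing by 162000."""
    return arcsec / ARCSEC_PER_OCTANT

def octant_to_arcsec(octant):
    """Inverse conversion: one octant is 162000 arcseconds."""
    return octant * ARCSEC_PER_OCTANT

print(arcsec_to_octant(82))  # ~0.00050617283950617, as in the table
print(octant_to_arcsec(82))  # 13284000, matching the reverse table
```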
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7296746,"math_prob":0.97835606,"size":4653,"snap":"2020-34-2020-40","text_gpt3_token_len":1362,"char_repetition_ratio":0.31684232,"word_repetition_ratio":0.09490085,"special_character_ratio":0.41414142,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9789108,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T10:09:17Z\",\"WARC-Record-ID\":\"<urn:uuid:42ef979e-bd6b-4931-9877-80cd90a18290>\",\"Content-Length\":\"32668\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a73822ee-af6d-4e1b-abbb-50cd08bb311e>\",\"WARC-Concurrent-To\":\"<urn:uuid:936bd997-e0a8-4b9b-a195-a2e74c64ccc7>\",\"WARC-IP-Address\":\"3.230.235.205\",\"WARC-Target-URI\":\"https://www.hextobinary.com/unit/angle/from/arcsec/to/octant/82\",\"WARC-Payload-Digest\":\"sha1:A7DXOI63GPZVX2LHFCCYOVLEXPPIJEPK\",\"WARC-Block-Digest\":\"sha1:CHAK4NRN4HBDJOGCZAXVGIKE6QYLVK2H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439740838.3_warc_CC-MAIN-20200815094903-20200815124903-00036.warc.gz\"}"}
https://cboard.cprogramming.com/c-programming/154872-assignments-help.html?s=b1244cf73dfbcc69ab17829341777d43
[ "1. ## Assignments help\n\nHey guys, I am supposed to do this but I am stuck, any help would be appreciated.\n\n1 - Sum of Squares\nWrite a function called sumSquares that returns the sum of the squares of its float array parameter. The function should have a second parameter for the size of the array.\n\nExample\n\nCode:\n```const int MAX_WORD = 10;\nconst int ROW = 3;\nconst int COLUMN = 4;\nconst int SS1 = 4;\nconst int SS2 = 1;\nconst int SORTED1 = 8;\nconst int SORTED2 = 1;\nconst int SORTED3 = 5;\n\nint main()\n{\n// Sum of Squares Tests----------------------------\nfloat ssArray1[SS1];\nfloat ssArray2[SS2];\nprintf(\"sum of squares test 1: enter %d numbers\\n\", SS1);\nfor(int i = 0; i < SS1; ++i){\nscanf(\"%f\",&ssArray1[i]);\n}\nprintf(\"sum of squares = %.2f\\n\", sumSquares(ssArray1, SS1));\n\nprintf(\"\\nsum of squares test 2: enter %d numbers\\n\", SS1);\nfor(int i = 0; i < SS1; ++i){\nscanf(\"%f\",&ssArray1[i]);\n}\nprintf(\"sum of squares = %.2f\\n\", sumSquares(ssArray1, SS1));\n\nprintf(\"\\nsum of squares test 3: enter %d numbers\\n\", SS2);\nfor(int i = 0; i < SS2; ++i){\nscanf(\"%f\",&ssArray2[i]);\n}\nprintf(\"sum of squares = %.2f\\n\", sumSquares(ssArray2, SS2));\n\n// Sorted Tests------------------------------------```", null, "Jim", null, "3. I don't know what to do, a tip on where to start or what to use would be nice. I am supposed to use the stdio and stdlib only.", null, "4. So what does your sumSquares() look like?\n\nJim", null, "5. something like this\nCode:\n```float sumSquares(float a[], float b)\n{\nint c;\nfloat total = 0;\nfor(c = 0; c < b; ++c)\n{\ntotal = a[c]*a[c] + total;\n}\nreturn total;\n}```", null, "", null, "", null, "Popular pages Recent additions", null, "" ]
[ null, "https://cboard.cprogramming.com/images/misc/progress.gif", null, "https://cboard.cprogramming.com/images/misc/progress.gif", null, "https://cboard.cprogramming.com/images/misc/progress.gif", null, "https://cboard.cprogramming.com/images/misc/progress.gif", null, "https://cboard.cprogramming.com/images/misc/progress.gif", null, "https://cboard.cprogramming.com/images/misc/progress.gif", null, "https://cboard.cprogramming.com/images/misc/progress.gif", null, "https://www.feedburner.com/fb/images/pub/feed-icon16x16.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5303709,"math_prob":0.9985112,"size":1155,"snap":"2019-51-2020-05","text_gpt3_token_len":357,"char_repetition_ratio":0.20938314,"word_repetition_ratio":0.2513661,"special_character_ratio":0.3965368,"punctuation_ratio":0.2033195,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9988838,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-24T22:27:54Z\",\"WARC-Record-ID\":\"<urn:uuid:9b4bc7c2-b379-4c23-9d67-e728678e60c1>\",\"Content-Length\":\"60413\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:19fd00bd-eb37-4cba-bf68-534f8e1157a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:46e3bd9d-82b1-48e0-a2c3-be9f8b2a26ea>\",\"WARC-IP-Address\":\"198.46.93.160\",\"WARC-Target-URI\":\"https://cboard.cprogramming.com/c-programming/154872-assignments-help.html?s=b1244cf73dfbcc69ab17829341777d43\",\"WARC-Payload-Digest\":\"sha1:A7S7UOXJAC63XBAPTWWOIBYBJYLZMDKW\",\"WARC-Block-Digest\":\"sha1:VBSOO3WRBI6R7HRQWPKSOFEVZNGPXZVO\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250626449.79_warc_CC-MAIN-20200124221147-20200125010147-00028.warc.gz\"}"}
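For reference, the sum-of-squares routine the thread is converging on is a short accumulation loop in any language. A Python sketch of the same logic (the function name mirrors the C assignment's sumSquares, but this is illustrative, not the assignment's required C code):

```python
def sum_squares(xs):
    """Return the sum of the squares of a sequence of numbers --
    the same computation as the thread's C sumSquares(a, size)."""
    total = 0.0
    for x in xs:
        total += x * x
    return total

assert sum_squares([1.0, 2.0, 3.0, 4.0]) == 30.0   # 1 + 4 + 9 + 16
assert sum_squares([]) == 0.0                      # empty array sums to zero
```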
https://www.vixrapedia.org/wiki/Help:Editing
[ "# Help:Editing\n\nThis is a guide on how to make edits. Follow the examples below. More extensive help can be found at the MediaWiki site.\n\n## Basic Editing\n\nType of link What to enter\nInternal link: Main Page [[Main Page]]\nLink to wikipedia: wikipedia physics [[w:Physics|wikipedia physics]]\nLink to Vixra: Proposol to Disidentify 7 [[v:1612.0123|Proposol to Disidentify 7]]\nLink to ArXiv: The Large N Limit... [[arxiv:hep-th/9711200|The Large N Limit...]]\n\n## Equations\n\nEquations can be added when placed inside $...$\n\nMore extensive examples can be found at w:Help:Displaying_a_formula.\n\nPlace inside $...$ Displayed Result\nA $A$", null, "A B $AB$", null, "A \\, B $A\\,B$", null, "A B C D $ABCD$", null, "A \\; B $A\\;B$", null, "A \\quad B $A\\quad B$", null, "A \\qquad B $A\\qquad B$", null, "A \\qquad\\qquad B $A\\qquad \\qquad B$", null, "\\frac{A}{B} ${\\frac {A}{B}}$", null, "\\alpha, \\beta, \\gamma, \\dots, \\phi, \\chi, \\psi, \\omega $\\alpha ,\\beta ,\\gamma ,\\dots ,\\phi ,\\chi ,\\psi ,\\omega$", null, "\\Alpha, \\Beta, \\Gamma, \\dots, \\Phi, \\Chi, \\Psi, \\Omega $\\mathrm {A} ,\\mathrm {B} ,\\Gamma ,\\dots ,\\Phi ,\\mathrm {X} ,\\Psi ,\\Omega$", null, "\\pi $\\pi$", null, "(\\rho, \\theta), (\\sigma, \\tau) $(\\rho ,\\theta ),(\\sigma ,\\tau )$", null, "x^a $x^{a}$", null, "x_k $x_{k}$", null, "\\delta^i_j $\\delta _{j}^{i}$", null, "{\\Lambda^\\mu}_\\nu ${\\Lambda ^{\\mu }}_{\\nu }$", null, "A^\\mu\\nu $A^{\\mu }\\nu$", null, "A^{\\mu\\nu} $A^{\\mu \\nu }$", null, "{x^a}^b ${x^{a}}^{b}$", null, "x^{a^b} $x^{a^{b}}$", null, "x^{2/3} $x^{2/3}$", null, "x^{\\frac{2}{3}} $x^{\\frac {2}{3}}$", null, "a = b $a=b$", null, "a \\sim b $a\\sim b$", null, "a \\equiv b $a\\equiv b$", null, "\\sqrt{a} ${\\sqrt {a}}$", null, "\\partial x $\\partial x$", null, "\\frac{dy}{dx} ${\\frac {dy}{dx}}$", null, "\\frac{\\partial y}{\\partial x} ${\\frac {\\partial y}{\\partial x}}$", null, "x\\prime $x\\prime$", null, "x^{\\prime} $x^{\\prime }$", null, 
"(a \\pm b)^2 $(a\\pm b)^{2}$", null, "\\sum^{k=n}_{k=0} $\\sum _{k=0}^{k=n}$", null, "\\int^{A}_{B} $\\int _{B}^{A}$", null, "\\sum^{n}_{k=1} k^2 = \\dots $\\sum _{k=1}^{n}k^{2}=\\dots$", null, "\\int^{a}_{b} f(x) dx = [F(x)]^{a}_{b} $\\int _{b}^{a}f(x)dx=[F(x)]_{b}^{a}$", null, "\\sin z $\\sin z$", null, "\\cos{2\\pi\\theta} $\\cos {2\\pi \\theta }$", null, "func(x) $func(x)$", null, "\\textrm{func}(x) ${\\textrm {func}}(x)$", null, "(a + b)^2 = a^2 + 2 a b + b^2 $(a+b)^{2}=a^{2}+2ab+b^{2}$", null, "\\frac{2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6}{2^4 3^2 5^2} = \\frac{1}{5} ${\\frac {2\\cdot 3\\cdot 4\\cdot 5\\cdot 6}{2^{4}3^{2}5^{2}}}={\\frac {1}{5}}$", null, "k! = 1 \\times 2 \\times 3 \\times \\dots \\times (k-1) \\times k $k!=1\\times 2\\times 3\\times \\dots \\times (k-1)\\times k$", null, "(i {{\\gamma^\\mu}^a}_b \\partial_\\mu - m {\\delta^a}_b) \\psi^b = 0 $(i{{\\gamma ^{\\mu }}^{a}}_{b}\\partial _{\\mu }-m{\\delta ^{a}}_{b})\\psi ^{b}=0$", null, "d\\Omega = r^2 \\sin\\theta \\, dr \\wedge d\\theta \\wedge d\\phi $d\\Omega =r^{2}\\sin \\theta \\,dr\\wedge d\\theta \\wedge d\\phi$", null, "\\frac{\\int Z[J] \\, d^4 x}{\\int d^4 x} ${\\frac {\\int Z[J]\\,d^{4}x}{\\int d^{4}x}}$", null, "\\mathbb{Z} $\\mathbb {Z}$", null, "x \\in \\mathbb{R} $x\\in \\mathbb {R}$", null, "\\mathcal{A} ${\\mathcal {A}}$", null, "\\mathfrak{A} ${\\mathfrak {A}}$", null, "\\underline{a} ${\\underline {a}}$", null, "\\overrightarrow{a} ${\\overrightarrow {a}}$", null, "\\underline{a} \\cdot \\overrightarrow{b} = \\sum_k a_k b_k ${\\underline {a}}\\cdot {\\overrightarrow {b}}=\\sum _{k}a_{k}b_{k}$", null, "## PDFs\n\nUpload the pdf first. Note: you cannot upload large files. For large files, write the text out manually.\n\nTo display a particular page, use: [[File:UPLOAD_NAME.pdf|page=1|600px]]. You can set the page number via page=N and here we've set the size to 600 pixels.\n\n## Images\n\nUpload the image first. Note: you cannot upload very large files. 
Try reducing the image size or upload to a special image storage website like Imgur and link to that page.\n\nFor examples, go to mw:Help:Images.\n\n## Tables\n\nSee mw:Help:Tables and w:Help:Table." ]
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7daff47fa58cdfd29dc333def748ff5fa4c923e3", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b04153f9681e5b06066357774475c04aaef3a8bd", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/fa88f7ba9ce3d8f5da82225f401198458513703a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/412b7d8df4db6ca8093d971320c405598c49c339", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/6ca2fb767cc4cf6d8c0e44788c017709b96ae03e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/090be6460b4bd8939ab605625524947c817045cd", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/1254b7d9f21f9c68cd1aedc4cbf24a4ad2721be2", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/1d5b7ad6a2d0d9c53ebe2c79e889faac40e2587c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4a1926bc23f1f3411122430184cee4d8b61890d3", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/fcc3117080847946efcdd6435839881dbab67c42", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/cb49cace5721702040ef3f944bae318474ea6d63", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9be4ba0bb8df3af72e90a0535fabcc17431e540a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4c4e520440a013f363bca3a48ddf351dc393dd49", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/39e0aeaa9f539d95a769634b9e33974b45675111", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/6d2b88c64c76a03611549fb9b4cf4ed060b56002", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c2c2996dd3d4c7152c99cf5491fd3de9f01b4f09", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/ca4fcc87f4e42e30d8e2ff84220b541801b2cdda", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/8892910bc05a12bed49095f23212a0ee62c9e94d", null, 
"https://wikimedia.org/api/rest_v1/media/math/render/svg/950a2de97c4ad50e4156876144be55e5fa40bd0c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/847775ad5caf03f7863e10d448fdb1d6bfbceb2c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2056f98e1cb6bc77567f3ea11a894ad7cb359af2", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/974ae1f5dc2ef96c3d4e14d6d65e4a00e6b58481", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/423b69dd66ee1e3e0d02bf342c450164cb4b3ab8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/1956b03d1314c7071ac1f45ed7b1e29422dcfcc4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d60a7ec044aafda7685b062b023b95b2e3f9e252", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/859529d640f85a2c3bf2847f61a842ba3a5753ee", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/afccc332c876539296df1a980127d86173e59ef0", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d448bd2e20cfc4a746bfde395324d8d527ca9e52", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5ceb16b58a91d26cf1e442d0682dfa7a2c0ab72c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/0deac2b96aa5d0329450647f183f9365584c67b2", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/98e03fee93e15883c491ee8ab000633316020578", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a04dfa6d4671c585af87b97fd0f5d4a0da499028", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9033e799008b02b8c8476ba69bc3876e092a06a9", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d06a9c921a990a5cec0946f5560d2ba6873838d5", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d7129dcc80b36d59be38ee4def0d3f630a3fbf97", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/6e001374e8217f56dfda5c1ac7c5c46360190aa9", null, 
"https://wikimedia.org/api/rest_v1/media/math/render/svg/01793b411df5a3dfde231f182e264dff737b3985", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/840154d23e7487c8c0d9bef213611822d5b09463", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/153e66a916fe65297015588f730e19fba998d1ae", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a04975c4e2b71727bfda3373901e4371177b4276", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/092581727192e634b6a57ca8b97a1627d85b950a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/088a0cbbeff707c1e8629fedd307923f5fe9d0e2", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/86410cb5bfcbf012958f989f40bd76339ef8f88f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/cce5377772515bb00f9aa75b2432c09dfd5810f8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/1c28b35e1fb7dd87baf5ca31b899cec0a98877f4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/609521e59eee4fc197061bb9be3a8373fdedcb71", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/6b56d5d1c80d7e0a1f8af942ef08823726e9097c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/449494a083e0a1fda2b61c62b2f09b6bee4633dc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a9c6d458566aec47a7259762034790c8981aefab", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/280ae03440942ab348c2ca9b8db6b56ffa9618f8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/34aa92fbdb716183c034a2cfc30dafbaa51cfcd6", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d1445f492dd189617565276b0d8eab56e463244e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b77149adfb778a5de4e0f9e99243919227669a7f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/dcc3cd02eb1f4581bd47218a8bfda18564039186", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.50128955,"math_prob":1.0000052,"size":2618,"snap":"2021-43-2021-49","text_gpt3_token_len":997,"char_repetition_ratio":0.09372609,"word_repetition_ratio":0.0053050397,"special_character_ratio":0.3789152,"punctuation_ratio":0.1579832,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000092,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108],"im_url_duplicate_count":[null,null,null,null,null,8,null,null,null,5,null,5,null,5,null,5,null,null,null,5,null,5,null,null,null,5,null,null,null,null,null,null,null,5,null,5,null,6,null,5,null,5,null,5,null,10,null,null,null,null,null,5,null,null,null,6,null,null,null,5,null,5,null,null,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,null,null,5,null,5,null,5,null,5,null,5,null,null,null,null,null,null,null,null,null,8,null,null,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-05T11:49:39Z\",\"WARC-Record-ID\":\"<urn:uuid:2f3ec06f-a31e-4474-a62d-13c12e1949c4>\",\"Content-Length\":\"84843\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e94581e7-1310-4875-8473-708d8d9b4d7d>\",\"WARC-Concurrent-To\":\"<urn:uuid:ee829ce8-cf59-404c-b9ed-ec1fc0c6b501>\",\"WARC-IP-Address\":\"116.203.89.52\",\"WARC-Target-URI\":\"https://www.vixrapedia.org/wiki/Help:Editing\",\"WARC-Payload-Digest\":\"sha1:442LKKR3GHI7ED5OYMVZSSLCTH2YIVGB\",\"WARC-Block-Digest\":\"sha1:F6BFIJMYOPO6PD4FXRAJDN3IR2UBJMG2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363157.32_warc_CC-MAIN-20211205100135-20211205130135-00607.warc.gz\"}"}
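Putting a few of the constructs from the markup table above together, a complete display formula might be entered as follows (a made-up formula, purely to illustrate combining the commands):

```latex
% placed inside <math> ... </math>
\frac{\partial y}{\partial x}
  = \sum_{k=1}^{n} \alpha_k \, x^{k-1},
\qquad
\int_{b}^{a} f(x) \, dx = [F(x)]_{b}^{a]
```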
http://www.askphilosophers.org/question/324
[ "# This one is mathematical, but seems to address philosophical issues regarding definition and the nature of mathematical truth. So: If, for any x, x^0 = 1, and, for any y, 0^y = 0, then what is the value of 0^0?\n\nRead another response by Daniel J. Velleman" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.918878,"math_prob":0.9191013,"size":294,"snap":"2020-34-2020-40","text_gpt3_token_len":76,"char_repetition_ratio":0.12068965,"word_repetition_ratio":0.0,"special_character_ratio":0.24829932,"punctuation_ratio":0.171875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9896325,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-19T18:46:11Z\",\"WARC-Record-ID\":\"<urn:uuid:3116df46-aa6f-45ab-95c9-aad3fb3896ea>\",\"Content-Length\":\"26233\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b0781994-2a57-4027-a781-e23ee982a4d7>\",\"WARC-Concurrent-To\":\"<urn:uuid:27da9800-0e86-4228-83d9-1b2f739920b7>\",\"WARC-IP-Address\":\"148.85.1.187\",\"WARC-Target-URI\":\"http://www.askphilosophers.org/question/324\",\"WARC-Payload-Digest\":\"sha1:KQEEXAQWCH5ZBZSDO3FEJATG5ROCGNUM\",\"WARC-Block-Digest\":\"sha1:MBHZUTWXNDRDXTSA4LPMAKRSJBP7VV7K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400192783.34_warc_CC-MAIN-20200919173334-20200919203334-00790.warc.gz\"}"}
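As an aside, programming languages have to pick an answer to this question too. Python, like the IEEE-style pow function, resolves the tension by defining 0^0 = 1, which keeps the rule x^0 = 1 universal while restricting 0^y = 0 to positive y:

```python
import math

assert all(x ** 0 == 1 for x in (2, -3, 0.5))   # x^0 = 1 for every x ...
assert all(0 ** y == 0 for y in (1, 2, 10))     # ... while 0^y = 0 only for y > 0
assert 0 ** 0 == 1                              # the convention most systems adopt
assert math.pow(0.0, 0.0) == 1.0                # floating-point pow agrees
```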
https://www.cagednomoremovie.com/what-is-the-relation-of-average-pe-and-ke-in-shm/
[ "# What is the relation of average PE and KE in SHM?\n\nShow that for a particle in linear SHM the average kinetic energy over a period of oscillation equals the average potential energy over the same period.\n\n## At what displacement are the KE and PE equal in SHM?\n\nIn SHM, the kinetic and potential energies become equal when the displacement is 1/√(2) times the amplitude.\n\nWhat is the relationship of PE and KE?\n\nThe primary relationship between the two is their ability to transform into each other. In other words, potential energy transforms into kinetic energy, and kinetic energy converts into potential energy, and then back again.\n\nWhat is PE and KE energy?\n\nSummary. Energy is the ability to do work. Potential Energy (PE) is stored energy due to position or state. PE due to gravity = m g h. Kinetic Energy (KE) is energy of motion.\n\n### Why is average potential energy and kinetic energy equal in SHM?\n\nShow that for a particle in linear SHM the average kinetic energy over a period of oscillation equals the average potential energy over the same period. From equations (i) and (ii) we can say that the average kinetic energy for a given time period is equal to the average potential energy for the same time period.\n\n### What is the average kinetic energy of SHM?\n\nThe formula for kinetic energy is written as: (1/2)mv² = (1/2)m(aω sin ωt)².\n\nAt what displacement does a particle in SHM possess half KE and half PE?\n\nAnswer: When kinetic energy is maximum, potential energy is zero; when kinetic energy falls to half of its maximum, potential energy makes up the other half and the two are equal.\n\nAt what time will PE be equal to half of the total energy?\n\nSolution. The time in which the potential energy will be half of total energy is 1.25 s.\n\n## Are PE and KE inversely proportional?\n\n
Answer: As the height increases, there is an increase in the gravitational potential energy and a decrease in the kinetic energy. Strictly speaking, the two are not inversely proportional: their sum is constant, so one grows by exactly as much as the other falls.\n\nHow do you convert KE to PE?\n\nPE = KE (in one worked example this gives v = 4.6 m/sec); the height is always the vertical distance (not necessarily the total distance the body may travel) between the starting point and the lowest point of fall. Nearly all mechanical processes consist of interchanges of energy among its kinetic and potential forms and work.\n\nWhat is PE physics?\n\nPotential energy is stored energy that depends upon the relative position of various parts of a system.\n\nWhat is the formula for KE?\n\nIn classical mechanics, kinetic energy (KE) is equal to half of an object’s mass (1/2*m) multiplied by the velocity squared. For example, if an object with a mass of 10 kg (m = 10 kg) is moving at a velocity of 5 meters per second (v = 5 m/s), the kinetic energy is equal to 125 Joules, or (1/2 * 10 kg) * (5 m/s)^2." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9096797,"math_prob":0.9981299,"size":2846,"snap":"2023-14-2023-23","text_gpt3_token_len":637,"char_repetition_ratio":0.21393386,"word_repetition_ratio":0.043824703,"special_character_ratio":0.22066058,"punctuation_ratio":0.09252669,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9987097,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-30T18:29:00Z\",\"WARC-Record-ID\":\"<urn:uuid:0801092a-dd95-4a35-a379-4b0d4bd999f3>\",\"Content-Length\":\"48150\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5c0cba8e-69fc-4dd5-942e-ad73272d0b1e>\",\"WARC-Concurrent-To\":\"<urn:uuid:cbd07b9a-72a1-419b-bf77-6cf8f8e2f395>\",\"WARC-IP-Address\":\"151.139.128.10\",\"WARC-Target-URI\":\"https://www.cagednomoremovie.com/what-is-the-relation-of-average-pe-and-ke-in-shm/\",\"WARC-Payload-Digest\":\"sha1:Z4V2NILFV67RLUKFNXBBFMJI6FTKK6AR\",\"WARC-Block-Digest\":\"sha1:AGXODHJUTQUZARXSR6HCMFNUIP4SAVNJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949355.52_warc_CC-MAIN-20230330163823-20230330193823-00620.warc.gz\"}"}
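The claims above are easy to check numerically. A sketch with hypothetical mass, amplitude and angular frequency, using x = a cos ωt so that v = −aω sin ωt, matching the KE formula quoted above:

```python
import math

m, a, omega = 10.0, 2.0, 3.0          # hypothetical values (kg, m, rad/s)
T = 2 * math.pi / omega               # one full period

def ke(t):                            # KE = (1/2) m (a w sin wt)^2
    return 0.5 * m * (a * omega * math.sin(omega * t)) ** 2

def pe(t):                            # PE = (1/2) m w^2 x^2, with x = a cos wt
    return 0.5 * m * omega ** 2 * (a * math.cos(omega * t)) ** 2

# Averages over one period agree (both equal one quarter of m a^2 w^2):
n = 100_000
avg_ke = sum(ke(k * T / n) for k in range(n)) / n
avg_pe = sum(pe(k * T / n) for k in range(n)) / n
assert abs(avg_ke - avg_pe) < 1e-6 * avg_ke

# KE equals PE exactly when |x| = a / sqrt(2):
x = a / math.sqrt(2)
total = 0.5 * m * omega ** 2 * a ** 2
assert abs(0.5 * m * omega ** 2 * x ** 2 - total / 2) < 1e-9

# Worked example from the text: m = 10 kg, v = 5 m/s -> KE = 125 J
assert 0.5 * 10 * 5 ** 2 == 125
```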
http://www.derivativepricing.com/blogpage.asp?id=8
[ "", null, "", null, "", null, "", null, "HOME PRODUCTS SOFTWARE TRIAL PURCHASE BLOG ABOUT US SITEMAP", null, "", null, "# Blog\n\nThis is the second in a series of articles that will go from the basics about interest rate swaps, to how to value them and how to build a zero curve.", null, "Interest Rate Swap Fixed Legs\n\nNow that we know the basic terminology and structure of a vanilla interest rate swap we can now look at constructing our fixed leg of our swap by first building our date schedule, then calculating the fixed coupon amounts.\n\nFor our example swap we will be using the following inputs:\n\n• Notional: \\$1,000,000 USD\n• Coupon Frequency: Semi-Annual\n• Fixed Coupon Amount: 1.24%\n• Floating Coupon Index: 6 month USD LIBOR\n• Business Day Convention: Modified Following\n• Fixed Coupon Daycount: 30/360\n• Floating Coupon Daycount: Actual/360\n• Effective Date: Nov 14, 2011\n• Termination Date: Nov 14, 2016\n• We will be valuing our swap as of November 10, 2011.\n\nSwap Coupon Schedule\n\nFirst we need to create our schedule of swap coupon dates. We will start from our maturity date and step backwards in semi-annual increments. The first step is to generate our schedule of non-adjusted dates.", null, "Then we adjust our dates using the modified following business day convention.", null, "Note that all the weekend coupon dates have been brought forward to the next Monday.\n\nSwap Fixed Coupon Amounts\n\nTo calculate the amount for each fixed coupon we do the following calculation:\n\nFixed Coupon = Fixed Rate x Time x Swap Notional Amount\n\nWhere:\n\nFixed Rate = The fixed coupon amount set in the swap confirmation.\n\nTime = Year portion that is calculated by the fixed coupons daycount method.\n\nSwap Notional = The notional amount set in the swap confirmation.\n\nBelow is our date schedule with the Time portion calculated using the 30/360 daycount convention. 
More on daycounts can be found in this document titled Accrual and Daycount conventions.\n\nNote the coupons which are not exactly a half-year due to the business day convention. If our business day convention was no-adjustment all the time periods would have been 0.5. This is a difference between swaps and bonds, as bonds will generally not adjust the coupon amounts for business day conventions, they will simply be 1/(# coupon periods per year) x coupon rate x principal.", null, "The coupon amount for our first coupon will be 1.24% x 1,000,000 x 0.50 = \\$6,200.00. Below are the coupon amounts for all of the coupons.", null, "Now that we know our coupon amounts, to find the current fair value of the fixed leg we would present value each coupon and sum them to find the total present value of our fixed leg. To do this we calculate the discount factor for each coupon payment using a discount factor curve which represents our swap curve. We will build our discount factor curve later in this tutorial series.", null, "", null, "" ]
[ null, "http://www.derivativepricing.com/images/resolution-logo.gif", null, "http://www.derivativepricing.com/images/women.jpg", null, "http://www.derivativepricing.com/images/spacer.gif", null, "http://www.derivativepricing.com/images/left-menu.gif", null, "http://www.derivativepricing.com/images/right-menu.gif", null, "http://www.derivativepricing.com/images/spacer.gif", null, "http://www.derivativepricing.com/images/swapguysxsmall.png", null, "http://www.derivativepricing.com/images/swap coupon dates unadjusted.PNG", null, "http://www.derivativepricing.com/images/swap coupon dates adjusted.PNG", null, "http://www.derivativepricing.com/images/swap schedule with daycount.PNG", null, "http://www.derivativepricing.com/images/swap coupon schedule.PNG", null, "http://www.derivativepricing.com/images/bule-trial.png", null, "http://www.derivativepricing.com/images/resolutionlogo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88462037,"math_prob":0.9142529,"size":2938,"snap":"2021-04-2021-17","text_gpt3_token_len":646,"char_repetition_ratio":0.13496932,"word_repetition_ratio":0.011904762,"special_character_ratio":0.22532335,"punctuation_ratio":0.09660108,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96075934,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,8,null,null,null,null,null,null,null,null,null,8,null,4,null,4,null,5,null,4,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-23T00:30:41Z\",\"WARC-Record-ID\":\"<urn:uuid:3bf81952-9d8b-4509-9a97-3dd529bbba78>\",\"Content-Length\":\"19269\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bf962536-c04c-411d-962b-f920bc6aef3d>\",\"WARC-Concurrent-To\":\"<urn:uuid:61ba6fb7-5556-4a1d-9335-90be30329c76>\",\"WARC-IP-Address\":\"103.53.150.61\",\"WARC-Target-URI\":\"http://www.derivativepricing.com/blogpage.asp?id=8\",\"WARC-Payload-Digest\":\"sha1:Q6X7DGA6KII2FDFT2N52CS3AOUPRVYVP\",\"WARC-Block-Digest\":\"sha1:6S7SDSN4FI7EONQ7EJUWKOKVYMF66XBI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703531702.36_warc_CC-MAIN-20210123001629-20210123031629-00591.warc.gz\"}"}
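The schedule-plus-coupon arithmetic described above can be sketched in a few lines of Python. This rolls weekends only; a production implementation would also consult a holiday calendar, and the helper names are my own:

```python
from datetime import date, timedelta

def add_months(d, n):
    """Step a date forward by n calendar months, keeping the day of month."""
    m = d.month - 1 + n
    return date(d.year + m // 12, m % 12 + 1, d.day)

def modified_following(d):
    """Roll a weekend date to the next business day, unless that crosses a
    month end, in which case roll backwards (weekends only, no holidays)."""
    rolled = d
    while rolled.weekday() >= 5:            # 5 = Saturday, 6 = Sunday
        rolled += timedelta(days=1)
    if rolled.month != d.month:             # crossed month end: roll back instead
        rolled = d
        while rolled.weekday() >= 5:
            rolled -= timedelta(days=1)
    return rolled

def yearfrac_30_360(start, end):
    """30/360 (bond basis) day-count fraction."""
    d1, d2 = min(start.day, 30), min(end.day, 30)
    return (360 * (end.year - start.year)
            + 30 * (end.month - start.month) + (d2 - d1)) / 360.0

# The example swap: $1,000,000 notional, 1.24% fixed, semi-annual, 5 years.
notional, fixed_rate = 1_000_000, 0.0124
effective, n_coupons = date(2011, 11, 14), 10

unadjusted = [add_months(effective, 6 * k) for k in range(n_coupons + 1)]
adjusted = [modified_following(d) for d in unadjusted]
coupons = [fixed_rate * yearfrac_30_360(a, b) * notional
           for a, b in zip(adjusted, adjusted[1:])]

assert len(coupons) == 10
assert abs(coupons[0] - 6200.00) < 1e-6     # 1.24% x 0.50 x 1,000,000
assert all(d.weekday() < 5 for d in adjusted)
```

Coupons whose end dates rolled off a weekend come out slightly above or below 0.50 years, exactly as the article notes.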
http://qs1969.pair.com/~perl2/?node_id=97518;displaytype=xml
[ "note Albannach Not only is [mdillon]'s solution correct, it's also much more efficient as each factorial is simply one multiply on the previous one, whereas calling the function each time means a lot more work is being done. <p> Now the reason for the failure is that the [CPAN://Math::NumberCruncher|Math::NumberCruncher] function overflows for 171!. Unfortunately it returns '1.#INF' which Perl happily treats as 1.0 in your division, so you simply divide by one from that point on, not making much more progress. <p> <b>Update: </b>While [CPAN://Math::NumberCruncher|Math::NumberCruncher] does not use BigFloats for its factorial function, it does already use the [CPAN://Math::BigFloat|Math::BigFloat] module for storing its very large version of \\$PI (though strangely BigFloats are not used in calculations with that \\$PI so it's probably pointless...), so it is a trivial matter to patch [CPAN://Math::NumberCruncher|Math::NumberCruncher] to use a BigFloat for the factorial. This is of course for occasional uses; for an iterative use of factorial, [mdillon]'s answer is still much faster. <p>--<br> I'd like to be able to assign to an luser 97512 97512" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8723502,"math_prob":0.89311105,"size":1161,"snap":"2022-05-2022-21","text_gpt3_token_len":311,"char_repetition_ratio":0.13828868,"word_repetition_ratio":0.0,"special_character_ratio":0.23858742,"punctuation_ratio":0.16309012,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9790084,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-27T00:20:58Z\",\"WARC-Record-ID\":\"<urn:uuid:042a98ad-ef73-483b-921c-a34fc2e405e4>\",\"Content-Length\":\"1781\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:857b3397-5c05-4c23-b3db-f1063c51c903>\",\"WARC-Concurrent-To\":\"<urn:uuid:4d02a74a-57a3-4472-a807-7b60402ca652>\",\"WARC-IP-Address\":\"216.92.237.59\",\"WARC-Target-URI\":\"http://qs1969.pair.com/~perl2/?node_id=97518;displaytype=xml\",\"WARC-Payload-Digest\":\"sha1:OF6GNQERZLVGOJPEZCJ5NIBUZL26RCCM\",\"WARC-Block-Digest\":\"sha1:RVO4CSXQTOH6NDWSEAM5AYR6D6YMXSMP\",\"WARC-Identified-Payload-Type\":\"application/xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662627464.60_warc_CC-MAIN-20220526224902-20220527014902-00141.warc.gz\"}"}
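The overflow the note describes is easy to reproduce outside Perl: an IEEE double can hold 170! but not 171!, which is where the '1.#INF' (Windows-formatted infinity) comes from. A Python illustration, including the one-multiply-per-step accumulation that makes [mdillon]'s approach efficient:

```python
import math

# 170! still fits in a double; 171! overflows the double range.
assert float(math.factorial(170)) < math.inf
try:
    float(math.factorial(171))        # Python raises instead of returning inf
    overflowed = False
except OverflowError:
    overflowed = True
assert overflowed

# Build each factorial with a single multiply on the previous one,
# rather than recomputing k! from scratch at every step:
def running_factorials(n):
    out, f = [], 1
    for k in range(1, n + 1):
        f *= k
        out.append(f)
    return out

assert running_factorials(5) == [1, 2, 6, 24, 120]
```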
http://celestrak.com/GPS/almanac/Yuma/definition.php
[ "# Definition of a Yuma Almanac\n\nID\nPRN of the SVN\nHealth\n000=usable\nEccentricity\nThis shows the amount of the orbit deviation from circular (orbit). It is the distance between the foci divided by the length of the semi-major axis (our orbits are very circular).\nTime of Applicability\nThe number of seconds in the orbit when the almanac was generated. Kind of a time tag.\nOrbital Inclination\nThe angle to which the SV orbit meets the equator (GPS is at approximately 55 degrees). Roughly, the SV's orbit will not rise above approximately 55 degrees latitude. The number is part of an equation: # = π/180 = the true inclination.\nRate of Right Ascension\nRate of change in the measurement of the angle of right ascension as defined in the Right Ascension mnemonic.\nSQRT(A) Square Root of Semi-Major Axis\nThis is defined as the measurement from the center of the orbit to either the point of apogee or the point of perigee.\nRight Ascension at Time of Almanac (TOA)\nRight Ascension is an angular measurement from the vernal equinox ((OMEGA)0).\nArgument of Perigee\nAn angular measurement along the orbital path measured from the ascending node to the point of perigee, measured in the direction of the SV's motion.\nMean Anomaly\nAngle (arc) traveled past the longitude of ascending node (value = 0±180 degrees). If the value exceeds 180 degrees, subtract 360 degrees to find the mean anomaly. When the SV has passed perigee and heading towards apogee, the mean anomaly is positive. After the point of apogee, the mean anomaly value will be negative to the point of perigee.\naf0\nSV clock bias in seconds.\naf1\nSV clock drift in seconds per seconds.\nWeek\nGPS week (0000–1023), every 7 days since 1999 August 22.\n\nGPS Yuma Almanacs\n1990\n1991\n1992\n1993\n1994\n1995\n1996\n1997\n1998\n1999\n2000\n2001\n2002\n2003\n2004\n2005\n2006\n2007\n2008\n2009\n2010\n2011\n2012\n2013\n2014\n2015\n2016\n2017\n2018\n2019\nLatest\nStatus Messages NANUs\nSEM Almanacs Yuma Almanacs" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7727168,"math_prob":0.9144298,"size":1927,"snap":"2019-51-2020-05","text_gpt3_token_len":536,"char_repetition_ratio":0.12428497,"word_repetition_ratio":0.0,"special_character_ratio":0.2999481,"punctuation_ratio":0.07062147,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9823483,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T11:17:56Z\",\"WARC-Record-ID\":\"<urn:uuid:83dd41a3-8abf-4008-91e2-86f7c95b4ab4>\",\"Content-Length\":\"10154\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0a90e8c4-7103-4215-8b90-3374484ddbf9>\",\"WARC-Concurrent-To\":\"<urn:uuid:daa882f0-7646-4b87-b5a1-c913c52beb76>\",\"WARC-IP-Address\":\"75.151.179.89\",\"WARC-Target-URI\":\"http://celestrak.com/GPS/almanac/Yuma/definition.php\",\"WARC-Payload-Digest\":\"sha1:M73QL4I5QUFOSW2A54YN52CQATXPNZOC\",\"WARC-Block-Digest\":\"sha1:FD3CQTGUFCGKEEHHP7ZDEE63NEZSRXTJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594391.21_warc_CC-MAIN-20200119093733-20200119121733-00204.warc.gz\"}"}
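The mean-anomaly convention described above (value = 0±180 degrees; if the value exceeds 180 degrees, subtract 360 degrees) is a one-line normalization. A sketch, with a function name of my own choosing:

```python
def normalize_mean_anomaly(deg):
    """Map a raw angle in [0, 360) to the almanac's 0 +/- 180 degree range:
    if the value exceeds 180 degrees, subtract 360 degrees."""
    return deg - 360 if deg > 180 else deg

assert normalize_mean_anomaly(90) == 90      # heading towards apogee: positive
assert normalize_mean_anomaly(270) == -90    # past apogee: negative
assert normalize_mean_anomaly(180) == 180    # boundary value is unchanged
```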
https://www.ias.ac.in/describe/article/jess/130/0103
[ "• Development of hybrid wave transformation methodology and its application on Kerala Coast, India\n\n• # Fulltext\n\nhttps://www.ias.ac.in/article/fulltext/jess/130/0103\n\n• # Keywords\n\nWave transformation; wave climate; DELFT3D-WAVE; ANN; Kerala coast.\n\n• # Abstract\n\nA major portion of the coastline of Kerala is under erosion, primarily due to the action of wind-generated waves. Accurate assessment of the nearshore wave climate is essential for detailed apprehension of the sediment processes that lead to coastal erosion. Numerical wave transformation models set up incorporating high-resolution nearshore bathymetry and nearshore wind data prove to be sufficient for the purpose. But running these models for decadal time scales incurs a huge computational cost. Thus, a Feed Forward Back Propagation ANN is developed to estimate the wave parameters nearshore, with training datasets obtained from a minimal set of numerical simulations of wave transformation using DELFT3D-WAVE. The numerical model results are validated using Wave Rider Buoy data available for the location. This hybrid methodology is utilized to hindcast the nearshore wave climate of a location in north Kerala for a period of 40 years with the ANN model trained with 1-yr data. The model shows good generalization ability when compared to the results of numerical simulation for a period of 10 years. This paper illustrates the data and methodology adopted for the development of the numerical model and the proposed ANN model along with the statistical comparisons of the results obtained.\n\n$\\bf{Highlights}$\n\n$\\bullet$ A hybrid methodology, combining numerical modelling and soft computation using ANNs, is developed to obtain a long-term nearshore wave hindcast.
One year’s numerical model simulation is utilised to train the ANN models.\n\n$\\bullet$ The optimised ANN$_{H}$, ANN$_{T}$, ANN$_{θmx}$ and ANN$_{θmy}$ models, with 15, 25, 25 and 30 neurons respectively in their single hidden layer, show good generalization ability when compared to the results of numerical simulation for a period of 10 years. The coefficient of correlation between the numerical model results and the ANN$_{H}$ model is 0.99. Results of the ANN$_{T}$ model and the combined result of the ANN$_{θmx}$ and ANN$_{θmy}$ models show a coefficient of correlation of 0.97 with the corresponding numerical model results. The new methodology allows for faster reconstruction of long-term time series of nearshore wave parameters.\n\n$\\bullet$ The trained models are used for simulating nearshore wave parameters at a location on the North Kerala coast for 40 years. The maximum H$_{s}$ at the nearshore location from 40 years’ ANN simulation is 3.39 m. H$_{s}$ exceeds 3 m only for 0.04% of the time. During the monsoon, waves feature a narrow range of T$_{p}$ as well as of mean wave direction, as opposed to the non-monsoon period.\n\n• # Author Affiliations\n\n1. National Institute of Technology Calicut, Kozhikode, Kerala, India\n\n• # Editorial Note on Continuous Article Publication\n\nPosted on July 25, 2019" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8690096,"math_prob":0.92186314,"size":2889,"snap":"2023-40-2023-50","text_gpt3_token_len":620,"char_repetition_ratio":0.14315425,"word_repetition_ratio":0.060185187,"special_character_ratio":0.211838,"punctuation_ratio":0.08494209,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9695932,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T05:38:09Z\",\"WARC-Record-ID\":\"<urn:uuid:745be434-ac1b-42f3-9181-8e47913bdc2f>\",\"Content-Length\":\"30802\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:75a01cdd-5f3f-4b31-985a-7416f62a64a9>\",\"WARC-Concurrent-To\":\"<urn:uuid:76374e26-5d6f-4c54-877e-25dff4a23daf>\",\"WARC-IP-Address\":\"65.1.150.115\",\"WARC-Target-URI\":\"https://www.ias.ac.in/describe/article/jess/130/0103\",\"WARC-Payload-Digest\":\"sha1:HRNYS47UAO2XC523LF4VSXIOTFCMSB2A\",\"WARC-Block-Digest\":\"sha1:JJP4U4OY3IALZK6FWGMRUFKXEUFCHOQM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506479.32_warc_CC-MAIN-20230923030601-20230923060601-00808.warc.gz\"}"}
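The headline numbers in this abstract are coefficients of correlation between the numerical-model series and the ANN output. A minimal sketch of that validation statistic (Pearson's r), on made-up significant-wave-height series rather than the paper's data:

```python
import math

def pearson_r(x, y):
    """Pearson coefficient of correlation between two equal-length series."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy "numerical model" vs "ANN" significant wave heights (m) -- illustrative only.
hs_model = [0.8, 1.2, 2.1, 2.9, 3.3]
hs_ann   = [0.9, 1.1, 2.0, 3.0, 3.2]
print(round(pearson_r(hs_model, hs_ann), 3))  # → 0.995
```

A value near 1, as in the 0.99 and 0.97 reported above, means the ANN reproduces the numerical model almost exactly.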
https://data.mendeley.com/datasets/8j867449kj
[ "# Data for: Seismic data reconstruction with multi-domain sparsity constraints based on curvelet and Radon transforms\n\nPublished: 9 January 2019 | Version 1 | DOI: 10.17632/8j867449kj.1\nContributors:\nShengchang Chen,\nJianping Zhou,\nYunlong Liu,\nHonglei Shen,\nChunhui Tao,\nZiyin Wu,\nYong Du,\nHanchuang Wang,\nLei Qiu,\nWeijun Xu\n\n## Description\n\nThis dataset covers all data of the study in the paper, including the raw data and the processed data. The dataset file is named dataset_2018.09.20.zip; decompressing it yields the folder dataset_2018.09.20. The directory contains nine subdirectories (dataset_figure1 through dataset_figure10, excluding dataset_figure4), which correspond to the data of figures 1–10 (excluding figure 4). The contents of each subdirectory are as follows.\n\n“Dataset_figure1” contains a file and a folder: the reflected seismic data and its decomposition using the curvelet transform.\n“Dataset_figure2” contains four files: the data transformed by the conventional and the sparse Radon transform, and the corresponding inverse-transformed data. The word \"hrt\" in the file names indicates that the hyperbolic Radon transform is used in the test.\n“Dataset_figure3” contains two data files and a folder: the seismic data with reflected and scattered waves, the coefficients in the high-resolution Radon transform domain and the coefficients in the curvelet transform domain.\n“Dataset_figure5” contains two data files: the original extracted BP dataset and the undersampled data used in the 1st numerical example in our study.\n“Dataset_figure6” contains six data files: the recovered data and the recovery-error data for the CT, CRT and ChRT recovery strategies applied to the BP data.\n“Dataset_figure7” contains four data files of the 1st numerical example: the FK spectrum data of the true data, the subsampled data, the data recovered with strategy CT and the data recovered with strategy ChRT.\n“Dataset_figure8” contains two data files: the original real dataset and the undersampled data used in the 2nd numerical example in our study.\n“Dataset_figure9” contains six data files: the recovered data and the recovery-error data for the CT, CRT and ChRT recovery strategies applied to the real data.\n“Dataset_figure10” contains four data files of the 2nd numerical example: the FK spectrum data of the true data, the subsampled data, the data recovered with strategy CT and the data recovered with strategy ChRT." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8780267,"math_prob":0.63814086,"size":2291,"snap":"2021-31-2021-39","text_gpt3_token_len":479,"char_repetition_ratio":0.18845649,"word_repetition_ratio":0.3192771,"special_character_ratio":0.19947621,"punctuation_ratio":0.11311054,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98260766,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-25T06:35:20Z\",\"WARC-Record-ID\":\"<urn:uuid:4a0f3ceb-0444-431f-8b4b-1bbe50c6100c>\",\"Content-Length\":\"79224\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:94df6c8a-0aea-444c-bf47-82138130a28b>\",\"WARC-Concurrent-To\":\"<urn:uuid:c46b49ab-7807-4b3a-abde-6043665d53fc>\",\"WARC-IP-Address\":\"162.159.130.86\",\"WARC-Target-URI\":\"https://data.mendeley.com/datasets/8j867449kj\",\"WARC-Payload-Digest\":\"sha1:CWQVSXUFQPNREPEPVCK45QZDE6B2CEMI\",\"WARC-Block-Digest\":\"sha1:2VCOBOECVOHZATI3P2ZZYSD4ENBI6LRR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057598.98_warc_CC-MAIN-20210925052020-20210925082020-00422.warc.gz\"}"}
https://www.colorhexa.com/bf3a42
[ "# #bf3a42 Color Information\n\nIn an RGB color space, hex #bf3a42 is composed of 74.9% red, 22.7% green and 25.9% blue. Whereas in a CMYK color space, it is composed of 0% cyan, 69.6% magenta, 65.4% yellow and 25.1% black. It has a hue angle of 356.4 degrees, a saturation of 53.4% and a lightness of 48.8%. #bf3a42 color hex could be obtained by blending #ff7484 with #7f0000. The closest websafe color is #cc3333.\n\n• R 75\n• G 23\n• B 26\nRGB color chart\n• C 0\n• M 70\n• Y 65\n• K 25\nCMYK color chart\n\n#bf3a42 color description: Moderate red.\n\n# #bf3a42 Color Conversion\n\nThe hexadecimal color #bf3a42 has RGB values of R:191, G:58, B:66 and CMYK values of C:0, M:0.7, Y:0.65, K:0.25. Its decimal value is 12532290.\n\nHex triplet: bf3a42 `#bf3a42`\nRGB Decimal: 191, 58, 66 `rgb(191,58,66)`\nRGB Percent: 74.9, 22.7, 25.9 `rgb(74.9%,22.7%,25.9%)`\nCMYK: 0, 70, 65, 25\nHSL: 356.4°, 53.4, 48.8 `hsl(356.4,53.4%,48.8%)`\nHSV (or HSB): 356.4°, 69.6, 74.9\nWeb Safe: cc3333 `#cc3333`\nCIE-LAB: 44.939, 53.286, 26.15\nXYZ: 23.983, 14.498, 6.69\nxyY: 0.531, 0.321, 14.498\nCIE-LCH: 44.939, 59.356, 26.14\nCIE-LUV: 44.939, 98.718, 17.877\nHunter-Lab: 38.077, 45.796, 16.237\nBinary: 10111111, 00111010, 01000010\n\n# Color Schemes with #bf3a42\n\n• #bf3a42\n``#bf3a42` `rgb(191,58,66)``\n• #3abfb7\n``#3abfb7` `rgb(58,191,183)``\nComplementary Color\n• #bf3a85\n``#bf3a85` `rgb(191,58,133)``\n• #bf3a42\n``#bf3a42` `rgb(191,58,66)``\n• #bf753a\n``#bf753a` `rgb(191,117,58)``\nAnalogous Color\n• #3a85bf\n``#3a85bf` `rgb(58,133,191)``\n• #bf3a42\n``#bf3a42` `rgb(191,58,66)``\n• #3abf75\n``#3abf75` `rgb(58,191,117)``\nSplit Complementary Color\n• #3a42bf\n``#3a42bf` `rgb(58,66,191)``\n• #bf3a42\n``#bf3a42` `rgb(191,58,66)``\n• #42bf3a\n``#42bf3a` `rgb(66,191,58)``\n• #b73abf\n``#b73abf` `rgb(183,58,191)``\n• #bf3a42\n``#bf3a42` `rgb(191,58,66)``\n• #42bf3a\n``#42bf3a` `rgb(66,191,58)``\n• #3abfb7\n``#3abfb7` `rgb(58,191,183)``\n• #84282e\n``#84282e` `rgb(132,40,46)``\n• #982e34\n``#982e34` `rgb(152,46,52)``\n• #ab343b\n``#ab343b` `rgb(171,52,59)``\n• #bf3a42\n``#bf3a42` `rgb(191,58,66)``\n• 
#c84a52\n``#c84a52` `rgb(200,74,82)``\n• #ce5e65\n``#ce5e65` `rgb(206,94,101)``\n• #d47177\n``#d47177` `rgb(212,113,119)``\nMonochromatic Color\n\n# Alternatives to #bf3a42\n\nBelow, you can see some colors close to #bf3a42. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #bf3a63\n``#bf3a63` `rgb(191,58,99)``\n• #bf3a58\n``#bf3a58` `rgb(191,58,88)``\n• #bf3a4d\n``#bf3a4d` `rgb(191,58,77)``\n• #bf3a42\n``#bf3a42` `rgb(191,58,66)``\n• #bf3d3a\n``#bf3d3a` `rgb(191,61,58)``\n• #bf483a\n``#bf483a` `rgb(191,72,58)``\n• #bf533a\n``#bf533a` `rgb(191,83,58)``\nSimilar Colors\n\n# #bf3a42 Preview\n\nThis text has a font color of #bf3a42.\n\n``<span style=\"color:#bf3a42;\">Text here</span>``\n#bf3a42 background color\n\nThis paragraph has a background color of #bf3a42.\n\n``<p style=\"background-color:#bf3a42;\">Content here</p>``\n#bf3a42 border color\n\nThis element has a border color of #bf3a42.\n\n``<div style=\"border:1px solid #bf3a42;\">Content here</div>``\nCSS codes\n``.text {color:#bf3a42;}``\n``.background {background-color:#bf3a42;}``\n``.border {border:1px solid #bf3a42;}``\n\n# Shades and Tints of #bf3a42\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #0a0304 is the darkest color, while #fefafb is the lightest one.\n\n• #0a0304\n``#0a0304` `rgb(10,3,4)``\n• #190809\n``#190809` `rgb(25,8,9)``\n• #290c0e\n``#290c0e` `rgb(41,12,14)``\n• #381113\n``#381113` `rgb(56,17,19)``\n• #471518\n``#471518` `rgb(71,21,24)``\n• #561a1e\n``#561a1e` `rgb(86,26,30)``\n• #651f23\n``#651f23` `rgb(101,31,35)``\n• #742328\n``#742328` `rgb(116,35,40)``\n• #83282d\n``#83282d` `rgb(131,40,45)``\n• #922c32\n``#922c32` `rgb(146,44,50)``\n• #a13138\n``#a13138` `rgb(161,49,56)``\n• #b0353d\n``#b0353d` `rgb(176,53,61)``\n• #bf3a42\n``#bf3a42` `rgb(191,58,66)``\n• #c7464e\n``#c7464e` `rgb(199,70,78)``\n• #cb555c\n``#cb555c` `rgb(203,85,92)``\n• #d0646a\n``#d0646a` `rgb(208,100,106)``\n• #d47379\n``#d47379` `rgb(212,115,121)``\n• #d98287\n``#d98287` `rgb(217,130,135)``\n• #de9196\n``#de9196` `rgb(222,145,150)``\n• #e2a0a4\n``#e2a0a4` `rgb(226,160,164)``\n• #e7afb3\n``#e7afb3` `rgb(231,175,179)``\n• #ebbec1\n``#ebbec1` `rgb(235,190,193)``\n• #f0cdcf\n``#f0cdcf` `rgb(240,205,207)``\n• #f4dcde\n``#f4dcde` `rgb(244,220,222)``\n• #f9ebec\n``#f9ebec` `rgb(249,235,236)``\n• #fefafb\n``#fefafb` `rgb(254,250,251)``\nTint Color Variation\n\n# Tones of #bf3a42\n\nA tone is produced by adding gray to any pure hue. 
In this case, #867375 is the less saturated color, while #f8010f is the most saturated one.\n\n• #867375\n``#867375` `rgb(134,115,117)``\n• #8f6a6c\n``#8f6a6c` `rgb(143,106,108)``\n• #996064\n``#996064` `rgb(153,96,100)``\n• #a2575b\n``#a2575b` `rgb(162,87,91)``\n• #ac4d53\n``#ac4d53` `rgb(172,77,83)``\n• #b5444a\n``#b5444a` `rgb(181,68,74)``\n• #bf3a42\n``#bf3a42` `rgb(191,58,66)``\n• #c9303a\n``#c9303a` `rgb(201,48,58)``\n• #d22731\n``#d22731` `rgb(210,39,49)``\n• #dc1d29\n``#dc1d29` `rgb(220,29,41)``\n• #e51420\n``#e51420` `rgb(229,20,32)``\n• #ef0a18\n``#ef0a18` `rgb(239,10,24)``\n• #f8010f\n``#f8010f` `rgb(248,1,15)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #bf3a42 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5016775,"math_prob":0.7650197,"size":3690,"snap":"2020-34-2020-40","text_gpt3_token_len":1647,"char_repetition_ratio":0.12506783,"word_repetition_ratio":0.011111111,"special_character_ratio":0.5509485,"punctuation_ratio":0.23809524,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97681946,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-22T21:47:18Z\",\"WARC-Record-ID\":\"<urn:uuid:7641c35c-9db2-415b-97df-12d7f4f79b69>\",\"Content-Length\":\"36277\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ac909e8-3f14-495b-a706-b919fdd23cf9>\",\"WARC-Concurrent-To\":\"<urn:uuid:d763fef8-db26-460e-a15c-f6d57b6a0a91>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/bf3a42\",\"WARC-Payload-Digest\":\"sha1:JSHIGFWYZH3XJGUPIFKR7HIBKWYMQ6ZV\",\"WARC-Block-Digest\":\"sha1:VLN7C2XFUXG6I3LY2ZPLVBYFPREMF474\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400206763.24_warc_CC-MAIN-20200922192512-20200922222512-00394.warc.gz\"}"}
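The conversions tabulated in this record follow the standard hex → RGB → CMYK formulas; a short sketch reproducing the page's numbers for #bf3a42:

```python
def hex_to_rgb(h):
    """'#bf3a42' -> (191, 58, 66): each hex pair is one 8-bit channel."""
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_cmyk(r, g, b):
    """Standard RGB->CMYK: K is 1 minus the max channel, C/M/Y are relative deficits."""
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)
    if k == 1:                      # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

rgb = hex_to_rgb("#bf3a42")
print(rgb)                                          # (191, 58, 66)
print([round(x * 100) for x in rgb_to_cmyk(*rgb)])  # [0, 70, 65, 25]
```

Rounding to whole percentages gives the page's CMYK chart values (0, 70, 65, 25); the unrounded 69.6%/65.4%/25.1% figures appear in the opening paragraph.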
https://codereview.stackexchange.com/questions/91228/python-factory-method-with-easy-registry
[ "# Python factory method with easy registry\n\nMy aim is to define a set of classes, each providing methods for comparing a particular type of file. My idea is to use some kind of factory method to instantiate the class based upon a string, which could allow new classes to be added easily. Then it would be simple to loop over a dictionary like:\n\nfiles = {\n    'csv': ('file1.csv', 'file2.csv'),\n    'bin': ('file3.bin', 'file4.bin')\n}\n\n\nHere is what I have so far:\n\n# results/__init__.py\nclass ResultDiffException(Exception):\n    pass\n\nclass ResultDiff(object):\n    \"\"\"Base class that enables comparison of result files.\"\"\"\n    def __init__(self, path_test, path_ref):\n        self.path_t = path_test\n        self.path_r = path_ref\n\n    def max(self):\n        raise NotImplementedError('abstract method')\n\n    def min(self):\n        raise NotImplementedError('abstract method')\n\n    def mean(self):\n        raise NotImplementedError('abstract method')\n\n# results/numeric.py\nimport numpy as np\nfrom results import ResultDiff, ResultDiffException\n\nclass NumericArrayDiff(ResultDiff):\n\n    def __init__(self, *args, **kwargs):\n        super(NumericArrayDiff, self).__init__(*args, **kwargs)\n        self.data_t = self.load_data(self.path_t)\n        self.data_r = self.load_data(self.path_r)\n\n        if self.data_t.shape != self.data_r.shape:\n            raise ResultDiffException('Inconsistent array shape')\n\n        np.seterr(divide='ignore', invalid='ignore')\n        self.diff = (self.data_t - self.data_r) / self.data_r\n        both_zero_ind = np.nonzero((self.data_t == 0) & (self.data_r == 0))\n        self.diff[both_zero_ind] = 0\n\n    def max(self):\n        return np.amax(self.diff)\n\n    def min(self):\n        return np.amin(self.diff)\n\n    def mean(self):\n        return np.mean(self.diff)\n\nclass CsvDiff(NumericArrayDiff):\n\n    def __init__(self, *args, **kwargs):\n        super(CsvDiff, self).__init__(*args, **kwargs)\n\n    def load_data(self, path):\n        return np.loadtxt(path, delimiter=',')\n\nclass BinaryNumericArrayDiff(NumericArrayDiff):\n\n    def __init__(self, *args, **kwargs):\n        super(BinaryNumericArrayDiff, self).__init__(*args, **kwargs)\n\n    def load_data(self, path):\n        return np.fromfile(path)\n\n\nAs you can see, the classes CsvDiff and BinaryNumericArrayDiff have only very minor changes with 
respect to NumericArrayDiff, which could potentially be refactored using constructor arguments. The problem is that different file types would then require different constructor syntax, which would complicate the factory pattern.\n\nI did also consider providing @classmethods to NumericArrayDiff, which could be put in a dict in order to link to the file types. However, I'm hoping for a more natural way of registering these classes to the factory.\n\nAny advice would be much appreciated.\n\n### 1. Stop writing classes\n\nThe title for this section comes from Jack Diederich's PyCon 2012 talk.\n\nA class represents a group of objects with similar behaviour, and an object represents some kind of persistent thing. So when deciding what classes a program is going to need, the first question to ask is, \"what kind of persistent things does this program need to represent?\"\n\nIn this case the program:\n\n1. knows how to load NumPy arrays from different kinds of file format (CSV and plain text); and\n2. knows how to compute the relative difference between two NumPy arrays (so long as they come from files with the same format).\n\nThe only persistent things here are files (represented by Python file objects) and NumPy arrays (represented by numpy.ndarray objects). So there's no need for any more classes.\n\n### 2. Other review points\n\n1. The code calls numpy.seterr to suppress the warning:\n\nRuntimeWarning: invalid value encountered in true_divide\n\n\nbut it fails to restore the original error state, whatever it was. This might be an unpleasant surprise for the caller. It would be better to use the numpy.errstate context manager to ensure that the original error state is restored.\n\n2. When dispatching to NumPy functions, it is usually unnecessary to check shapes for compatibility and raise your own error. Instead, just pass the arrays to NumPy. If they can't be combined, then NumPy will raise:\n\nValueError: operands could not be broadcast together ...\n\n\n### 3. 
Revised code\n\nInstead of classes, write a function!\n\nimport numpy as np\n\ndef relative_difference(t, r):\n    \"\"\"Return the relative difference between arrays t and r, that is:\n\n    0 where t == 0 and r == 0\n    (t - r) / r otherwise\n\n    \"\"\"\n    t, r = np.asarray(t), np.asarray(r)\n    with np.errstate(divide='ignore', invalid='ignore'):\n        return np.where((t == 0) & (r == 0), 0, (t - r) / r)\n\n\nNote the following advantages over the original code:\n\n1. It's much shorter, and so there's much less code to maintain.\n\n2. It can find the difference between arrays that come from files with different formats:\n\nrelative_difference(np.loadtxt(path1), np.fromfile(path2))\n\n3. It can find the difference between arrays that don't come from files at all:\n\nrelative_difference(np.random.randint(0, 10, (10,)), np.arange(1, 11))" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6281929,"math_prob":0.6959824,"size":2590,"snap":"2021-43-2021-49","text_gpt3_token_len":625,"char_repetition_ratio":0.13766435,"word_repetition_ratio":0.026229508,"special_character_ratio":0.25135136,"punctuation_ratio":0.217119,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9884979,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-08T08:47:11Z\",\"WARC-Record-ID\":\"<urn:uuid:4b935b15-fd97-4ea0-90b7-b8c2312f6abb>\",\"Content-Length\":\"142500\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a79cd8da-2c1c-4d3b-bdb5-fba364f7a311>\",\"WARC-Concurrent-To\":\"<urn:uuid:2731313c-8b67-4655-b395-8aab5894e08b>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://codereview.stackexchange.com/questions/91228/python-factory-method-with-easy-registry\",\"WARC-Payload-Digest\":\"sha1:YFRSRK2QKPPYPBHWP244XHRLXX35FMCO\",\"WARC-Block-Digest\":\"sha1:JJSV4EVRWUJNL6X7ZU3TGERSUAO7BR7P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363465.47_warc_CC-MAIN-20211208083545-20211208113545-00340.warc.gz\"}"}
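The registration question itself ("a more natural way of registering these classes to the factory") has a common idiom the accepted answer sidesteps: a decorator-populated registry keyed by the file-type string. A sketch — the loader bodies are illustrative stand-ins, not the original classes:

```python
LOADERS = {}

def register(ext):
    """Decorator: map a file-extension string to a loader callable."""
    def decorator(fn):
        LOADERS[ext] = fn
        return fn
    return decorator

@register("csv")
def load_csv(path):
    # Illustrative: read a CSV file into a list of rows.
    with open(path) as f:
        return [line.rstrip("\n").split(",") for line in f]

@register("bin")
def load_bin(path):
    # Illustrative: read a binary file as raw bytes.
    with open(path, "rb") as f:
        return f.read()

def load(ext, path):
    """Factory entry point: dispatch on the extension string."""
    if ext not in LOADERS:
        raise ValueError("no loader registered for %r" % ext)
    return LOADERS[ext](path)
```

Walking the question's `files` dict is then just `load(ext, p)` for each pair, and a new format needs only one `@register` line.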
https://soohba.com/7999126/how-to-print-to-file-using-ostream-iterator
[ "", null, "", null, "", null, "# How to print to file using ostream_iterator?", null, "", null, "By : Roberto Oliveira\nDate : October 16 2020, 03:08 PM\nThe type of std::cout and the type std::ofstream are both derived from std::ostream, which is the same type that std::ostream_iterator operates on:", null, "code :\n``````#include <iostream>\n#include <random>\n#include <algorithm>\n#include <iterator>\n#include <fstream>\n#include <vector>\n\nvoid emit_values(std::ostream& os)\n{\n    std::vector<int> v;\n    for (int i = 0; i < 1500; ++i) {\n        v.push_back(i);\n    }\n\n    std::random_device rd;\n    std::mt19937 g(rd());\n\n    std::shuffle(v.begin(), v.end(), g);\n\n    std::copy(v.begin(), v.end(), std::ostream_iterator<int>(os, \" \"));\n    os << \"\\n\";\n}\n\nint main()\n{\n    // use stdout\n    emit_values(std::cout);\n\n    // use a file\n    std::ofstream fs(\"values.txt\");\n    emit_values(fs);\n    fs.close();\n\n    return 0;\n}\n``````", null, "##", null, "Writing into binary file with the std::ostream_iterator\n\nBy : user3878959\nDate : March 29 2020, 07:55 AM\nThe std::ostream_iterator uses the \"normal\" operator<< semantics; it is formatted output (not binary).\nFrom cppreference:\ncode :\n``````#include <iostream>\n#include <algorithm>\n#include <utility>\n#include <iterator>\n#include <vector>\n\nusing namespace std;\n\ntemplate <class T, class CharT = char, class Traits = std::char_traits<CharT>>\nclass ostreambin_iterator :\n    public std::iterator<std::output_iterator_tag, void, void, void, void> {\npublic:\n    typedef std::basic_ostream<CharT, Traits> ostream_type;\n    typedef Traits traits_type;\n    typedef CharT char_type;\n\n    ostreambin_iterator(ostream_type& stream) : stream_(stream) { }\n\n    ostreambin_iterator& operator=(T 
const& value)\n    {\n        // basic implementation for demonstration\n        stream_.write(reinterpret_cast<const char*>(&value), sizeof(T));\n        return *this;\n    }\n\n    ostreambin_iterator& operator*() { return *this; }\n    ostreambin_iterator& operator++() { return *this; }\n    ostreambin_iterator& operator++(int) { return *this; }\n\nprotected:\n    ostream_type& stream_;\n};\n\nint main() {\n    std::vector<long> d(3);\n    d[0] = 0x303030; // test some width past a single byte\n    d[1] = 0x31;\n    d[2] = 0x32;\n\n    ostreambin_iterator<long> out(cout);\n\n    cout << \"begin\" << endl;\n    copy(std::begin(d), std::end(d), out);\n    cout << endl;\n    cout << \"end\" << endl;\n}\n``````", null, "##", null, "Using ostream_iterator to copy a map into file\n\nBy : Guillem Torres Marqu\nDate : March 29 2020, 07:55 AM\nIf you look carefully at the declaration of std::ostream_iterator here, you will notice that your usage of std::ostream_iterator is incorrect because you should specify the type of printed elements as the first template parameter.\nThe type of elements in the std::map M is std::pair< const std::string, int >. 
But you can't put std::pair< const std::string, int > as the first template parameter because there is no default way to print an std::pair.\ncode :\n``````std::ofstream out(\"file.txt\");\n\nstd::for_each(std::begin(M), std::end(M),\n    [&out](const std::pair<const std::string, int>& element) {\n        out << element.first << \" \" << element.second << std::endl;\n    }\n);\n``````", null, "##", null, "Simple file writing with ostream_iterator creates file, but doesn't write\n\nDate : March 29 2020, 07:55 AM\nYou are not flushing the stream to disk before calling system().\nYou can explicitly flush() or close() the stream:\ncode :\n``````int main() {\n    ofstream os{ \"Input.txt\" };\n    ostream_iterator<int> oo{ os, \",\" };\n\n    vector<int> ints;\n    for (int i = 0; i < 1000; i++) {\n        ints.push_back(i);\n    }\n\n    unique_copy(ints.begin(), ints.end(), oo);\n\n    os.close();\n\n    system(\"PAUSE\");\n    return 0;\n}\n``````\n``````int main() {\n    {\n        ofstream os{ \"Input.txt\" };\n        ostream_iterator<int> oo{ os, \",\" };\n\n        vector<int> ints;\n        for (int i = 0; i < 1000; i++) {\n            ints.push_back(i);\n        }\n\n        unique_copy(ints.begin(), ints.end(), oo);\n    }\n\n    system(\"PAUSE\");\n    return 0;\n}\n``````", null, "##", null, "Std::copy and std::ostream_iterator to use overloading function to print values\n\nBy : 虎林火车站酒店小姐\nDate : March 29 2020, 07:55 AM\nstd::ostream_iterator::operator= takes its parameter as const&. 
Internally, this will use operator<< to output each value into the stream.\nBut the parameter is const, so it can't be passed into your operator<<, which takes a non-const reference.\ncode :\n``````friend ostream& operator<<(ostream& os, const Employee& obj)\n{\n    return os << obj.name << \" \" << obj.salary;\n}\n``````", null, "##", null, "Performance of ostream_iterator for writing numeric data to a file?\n\nBy : Deepak Namala\nDate : March 29 2020, 07:55 AM\nThe quickest (but most horrible) way to dump a vector will be to write it in one operation with ostream::write:", null, "" ]
[ null, "https://soohba.com/images/logo.png", null, "https://soohba.com/images/down.png", null, "https://soohba.com/images/bg-categories1.jpg", null, "https://soohba.com/images/pics/2626.jpg", null, "https://soohba.com/images/cat/thumb/cpp.jpg", null, "https://soohba.com/images/true.jpg", null, "https://soohba.com/images/pics/1196.jpg", null, "https://soohba.com/images/cat/thumb/cpp.jpg", null, "https://soohba.com/images/pics/720.jpg", null, "https://soohba.com/images/cat/thumb/cpp.jpg", null, "https://soohba.com/images/pics/3678.jpg", null, "https://soohba.com/images/cat/thumb/cpp.jpg", null, "https://soohba.com/images/pics/834.jpg", null, "https://soohba.com/images/cat/thumb/cpp.jpg", null, "https://soohba.com/images/pics/1155.jpg", null, "https://soohba.com/images/cat/thumb/cpp.jpg", null, "https://soohba.com/images/bg-categories2.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59702003,"math_prob":0.6068266,"size":3605,"snap":"2020-45-2020-50","text_gpt3_token_len":974,"char_repetition_ratio":0.14995834,"word_repetition_ratio":0.11619048,"special_character_ratio":0.3073509,"punctuation_ratio":0.25726143,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95927167,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,null,null,null,null,null,null,1,null,null,null,null,null,1,null,null,null,2,null,null,null,1,null,null,null,1,null,null,null,1,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-01T00:44:06Z\",\"WARC-Record-ID\":\"<urn:uuid:d6053255-5c70-4dc4-8690-09f72f8b5291>\",\"Content-Length\":\"39218\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a9487417-6549-4be1-8489-a854df926d1e>\",\"WARC-Concurrent-To\":\"<urn:uuid:942b62a5-d0c5-409f-91f6-de30404e4679>\",\"WARC-IP-Address\":\"173.212.224.190\",\"WARC-Target-URI\":\"https://soohba.com/7999126/how-to-print-to-file-using-ostream-iterator\",\"WARC-Payload-Digest\":\"sha1:YNGSPWWKSW5OHP6NPELPI7QFJ6FF3WXH\",\"WARC-Block-Digest\":\"sha1:NPVLEEZHRFH542B7EGIR7OXZEQBFIHPE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107922746.99_warc_CC-MAIN-20201101001251-20201101031251-00677.warc.gz\"}"}
https://answers.everydaycalculation.com/compare-fractions/10-4-and-9-3
[ "Solutions by everydaycalculation.com\n\nCompare 10/4 and 9/3\n\n1st number: 2 2/4, 2nd number: 3 0/3\n\n10/4 is smaller than 9/3\n\nSteps for comparing fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 4 and 3 is 12\n2. For the 1st fraction, since 4 × 3 = 12,\n10/4 = (10 × 3)/(4 × 3) = 30/12\n3. Likewise, for the 2nd fraction, since 3 × 4 = 12,\n9/3 = (9 × 4)/(3 × 4) = 36/12\n4. Since the denominators are now the same, the fraction with the bigger numerator is the greater fraction\n5. 30/12 < 36/12 or 10/4 < 9/3", null, "" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8512516,"math_prob":0.9976659,"size":460,"snap":"2019-43-2019-47","text_gpt3_token_len":208,"char_repetition_ratio":0.30482456,"word_repetition_ratio":0.0,"special_character_ratio":0.4869565,"punctuation_ratio":0.056,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99141794,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T18:09:35Z\",\"WARC-Record-ID\":\"<urn:uuid:ad6b6d68-ac67-4392-a16b-9779aa92c500>\",\"Content-Length\":\"8416\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6d1df1a7-90c4-440e-bd29-a78e177b3acb>\",\"WARC-Concurrent-To\":\"<urn:uuid:82df565d-3615-40b1-a93a-19595032e89b>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/compare-fractions/10-4-and-9-3\",\"WARC-Payload-Digest\":\"sha1:W6G2PD6EMQ3X7DN47YYZVRVHTPK37ENR\",\"WARC-Block-Digest\":\"sha1:GTGAAOTJJOW7LJRVMMQX3ZUY2QUXAYYE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986675598.53_warc_CC-MAIN-20191017172920-20191017200420-00069.warc.gz\"}"}
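The five numbered steps mechanize directly; a sketch using `math.gcd` to form the least common denominator:

```python
from math import gcd

def compare_fractions(n1, d1, n2, d2):
    """Return '<', '>' or '=' for n1/d1 versus n2/d2 via a common denominator."""
    lcm = d1 * d2 // gcd(d1, d2)   # step 1: LCM of 4 and 3 is 12
    a = n1 * (lcm // d1)           # step 2: 10/4 -> 30/12
    b = n2 * (lcm // d2)           # step 3: 9/3  -> 36/12
    return "<" if a < b else ">" if a > b else "="   # steps 4-5

print("10/4", compare_fractions(10, 4, 9, 3), "9/3")   # 10/4 < 9/3
```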
https://crypto.stackexchange.com/questions/51384/understanding-key-space
[ "# Understanding Key space\n\nI've looked at a few previous posts on this site but I'm still struggling with the concept of keyspace. Specifically, I'm following a Coursera course but I think it's probably too advanced for me as I'm failing to understand some numbers.\n\nThe course talks about a substitution cypher in an alphabet of 26 letters. There is then a question:\n\nWhat is the size of the key space in the substitution cipher assuming 26 letters?\n\nThe answer is 26! (I get this bit) but the instructor then says this is roughly equivalent to 2^88, which I don't get as the numbers don't seem close to me at all.\n\nFurthermore, it talks about the Enigma machine and how a 4 rotor machine would have the following\n\n# keys = 26^4 = 2^18\n\nThis is equating to 26^4 combinations (4 rotors with 26 keys - again I get this, though should it not be 26!^4?) but then I don't get how this gives a key space of 2^18. I think it's the equality signs throwing me off as I'm thinking it's 'equal to' as opposed to it 'equates to or resolves to'.\n\nI just don't get these numbers if I'm honest.\n\nCan someone break it down in its simplest terms for me? As I say, based on this evidence I think the course might be beyond me mathematically, but I really want to understand at least this part now I've gone through it!\n\n• Hint for the first part: What is $\log_2(26!)?$ Similar for the second part. – gammatester Sep 7 '17 at 19:55\n\nI'm not sure exactly what you're not getting.\n\nYou understand that the keyspace for a simple substitution cipher is $26!$; do you understand that $26! = 1 \times 2 \times 3 \times 4 \times ... \times 24 \times 25 \times 26$, correct?\n\nSo, we have $26! 
= 403291461126605635584000000$, correct?\n\nWe also have $2^{88} = 309485009821345068724781056$, correct?\n\nWould you agree that these are \"roughly\" the same (where roughly in this case means within 30%)?\n\nkeys = 26^4 = 2^18\n\nWhat might be confusing you is the equal sign; obviously $26^4 = 456976$ is not exactly $2^{18} = 262144$; I believe the idea the author is trying to convey is that they're close, well, at least close enough for his analysis. I'd personally use $\\approx$ here...\n\n(though should it not be 26!^4?)\n\nActually, for Engima, 26 is correct; each rotor can be set in one of 26 positions; hence each rotor can do one of 26 specific permutations; it can't do an arbitrary 26! permutation.\n\n• I think it really is the equals sign confusing me especially The second example as I really don't consider them to be roughly equal. But then I don't get why the key Space for the enigma machine is smaller than then substitution cypher when the enigma has the four rotors and was designed to be more complex – TommyBs Sep 7 '17 at 20:03\n• @TommyBs: it has a smaller keyspace because it defines fewer transforms from plaintext to ciphertext; with their Enigma model, you could set each dial to one of 26 settings an that's it (actually, real Engima machines had a bunch of other things you could do, such as select which rotor went where); in contrast, a substitution cipher does define an impressively large number of potential transforms. That's also a useful example for people who think large key spaces ensure security... – poncho Sep 7 '17 at 20:06" ]
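A quick numeric check makes the "roughly equal" claims concrete. This short Python snippet (my own illustration, not part of the original thread) converts each key count into bits, which is exactly what the $\log_2$ hint suggests:

```python
import math

# Size of the substitution-cipher key space: 26! keys.
substitution_keys = math.factorial(26)

# Express that count as a power of two: log2(26!) is about 88.4,
# which is why the instructor calls 26! "roughly 2^88".
bits = math.log2(substitution_keys)
print(f"26! = {substitution_keys}")
print(f"log2(26!) = {bits:.1f}  (so 26! is roughly 2^88)")

# The 4-rotor Enigma model in the course: 26^4 settings.
# log2(26^4) is about 18.8, which the course rounds down to 2^18.
enigma_keys = 26 ** 4
print(f"26^4 = {enigma_keys}, 2^18 = {2 ** 18}, "
      f"log2(26^4) = {math.log2(enigma_keys):.1f}")
```

Measured in bits, the two numbers in each comparison differ by less than one bit, which is the sense in which they are "roughly" equal.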
https://www.birs.ca/events/2019/5-day-workshops/19w5238/schedule
# Schedule for: 19w5238 - Probing the Earth and the Universe with Microlocal Analysis

Beginning on Sunday, April 14 and ending Friday April 19, 2019

All times in Banff, Alberta time, MDT (UTC-6).

Sunday, April 14
16:00 - 17:30 Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre)
17:30 - 19:30 Dinner
A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
20:00 - 22:00 Informal gathering (Corbett Hall Lounge (CH 2110))

Monday, April 15
07:00 - 08:45 Breakfast
Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
08:45 - 09:00 Introduction and Welcome by BIRS Staff
A brief introduction to BIRS with important logistical information, technology instruction, and opportunity for participants to ask questions.
(TCPL 201)
09:00 - 09:15 Lauri Oksanen: Remembering Slava Kurylev
We will remember the life and works of Slava Kurylev, who recently passed away.
(TCPL 201)
09:15 - 10:00 András Vasy: Recovery of material parameters in transversally isotropic media
In this talk I will discuss the recovery of material parameters in anisotropic elasticity, in the particular case of transversally isotropic media. I will indicate how the knowledge of the qSH (which I will explain!) wave travel times determines the tilt of the axis of isotropy as well as some of the elastic material parameters, and the knowledge of qP and qSV travel times conditionally determines a subset of the remaining parameters, in the sense that if some of the remaining parameters are known, the rest are determined, or if the remaining parameters satisfy a suitable relation, they are all determined, under certain non-degeneracy conditions.
Furthermore, I will describe the additional issues, which are a subject of ongoing work, that need to be resolved for a full treatment. This is joint work with Maarten de Hoop and Gunther Uhlmann, and is in turn based on work with Plamen Stefanov and Gunther Uhlmann.
(TCPL 201)
10:00 - 10:30 Coffee Break (TCPL Foyer)
10:30 - 11:15 Plamen Stefanov: The transmission problem in linear isotropic elasticity
We study the isotropic elastic wave equation in a bounded domain with boundary, with coefficients having jumps at a nested set of interfaces satisfying the natural transmission conditions there. We analyze in detail the microlocal behavior of such solutions, like reflection, transmission and mode conversion of S and P waves, evanescent modes, Rayleigh and Stoneley waves. In particular, we recover Knott's equations in this setting. We show that knowledge of the Dirichlet-to-Neumann map determines uniquely the speed of the P and the S waves if there is a strictly convex foliation with respect to them, under an additional condition of lack of full internal reflection of some of the waves. This is a joint work with Andras Vasy and Gunther Uhlmann.
(TCPL 201)
11:15 - 12:00 Robin Graham: Geodesic flow, X-ray transform, and boundary rigidity on asymptotically hyperbolic manifolds
I will describe an extension to the boundary of the cosphere bundle and geodesic flow of an asymptotically hyperbolic manifold. I will then discuss injectivity results for X-ray transforms on tensors and the boundary rigidity problem of determining an asymptotically hyperbolic metric from the renormalized lengths of geodesics joining boundary points. One part is work with Colin Guillarmou, Plamen Stefanov and Gunther Uhlmann, and another part is work with Nikolas Eptaminitakis.
(TCPL 201)
11:30 - 13:00 Lunch
Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
13:00 - 14:00 Guided Tour of The Banff Centre
Meet in the Corbett Hall Lounge for a guided tour of The Banff Centre campus.
(Corbett Hall Lounge (CH 2110))
14:00 - 14:20 Group Photo
Meet in foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the official group photo!
(TCPL 201)
14:20 - 15:05 Julie Rowlett: The sound of a singularity?
Analytically computing the spectrum of the Laplacian is impossible for all but a handful of classical examples. Consequently, it can be tricky business to determine which geometric features are spectrally determined; such features are known as geometric spectral invariants. Weyl demonstrated in 1912 that the area of a planar domain is a geometric spectral invariant. In the 1950s, Pleijel proved that the (n-1)-dimensional volume of a smoothly bounded n-dimensional Riemannian manifold is a geometric spectral invariant. Kac, and McKean & Singer independently, proved in the 1960s that the Euler characteristic is a geometric spectral invariant for smoothly bounded domains and surfaces. At the same time, Kac popularized the isospectral problem for planar domains in his article, "Can one hear the shape of a drum?" Colloquially, one says that one can "hear" spectral invariants. In this talk I will not only discuss my work with many collaborators (Rafe Mazzeo, Zhiqin Lu, Clara Aldana, Klaus Kirsten, David Sher, Medet Nursultanov) but also highlight the works of many other colleagues who share a similar interest in "hearing singularities."
(TCPL 201)
15:00 - 15:30 Coffee Break (TCPL Foyer)
15:30 - 16:15 Melissa Tacy: Eigenfunction concentration and its connection to geometry
The concentration properties of eigenfunctions of the Laplace-Beltrami operator are closely linked to the underlying geometry (and dynamics) on manifolds. In this talk I will discuss the known concentration results and the models we use to test sharpness. Such problems are effectively forward problems; I will also discuss what an inverse problem would look like in this setting.
(TCPL 201)
16:15 - 17:00 Thibault Lefeuvre: The X-ray transform on Anosov manifolds
A closed Riemannian manifold is said to be Anosov if its geodesic flow on its unit tangent bundle is Anosov (also called uniformly hyperbolic in the literature). Typical examples are provided by negatively-curved manifolds. On such manifolds, the X-ray transform is simply defined as the integration of continuous functions along periodic geodesics. I will review some recent results on the analytic study of the X-ray transform (in particular, stability estimates). The techniques rely on microlocal tools introduced by Guillarmou and further investigated by Guillarmou-Lefeuvre, and on new finite and approximate Livsic theorems proved by Gouëzel-Lefeuvre. If time permits, I will explain how these results can be applied to prove the local rigidity of the marked length spectrum.
(TCPL 201)
17:30 - 19:30 Dinner
A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)

Tuesday, April 16
07:00 - 09:00 Breakfast (Vistas Dining Room)
09:15 - 10:00 Gabriel Paternain: Carleman estimates for geodesic X-ray transforms
I will describe a new energy estimate for the geodesic vector field of a manifold of negative curvature. The estimate has several applications including injectivity of non-abelian X-ray transforms. This is joint work with Mikko Salo.
(TCPL 201)
10:00 - 10:30 Coffee Break (TCPL Foyer)
10:30 - 11:15 Lauri Oksanen: Inverse problem for a semi-linear elliptic equation
We consider the Dirichlet-to-Neumann map, defined in a suitable sense, for the equation $-\Delta u + V(x,u)=0$ on a compact Riemannian manifold with boundary. We show that, under certain geometrical assumptions, the Dirichlet-to-Neumann map determines $V$ for a large class of non-linearities. The proof is constructive and is based on a multiple-fold linearization of the semi-linear equation near complex geometric optics solutions for the linearized operator, and the resulting non-linear interactions. This approach allows us to reduce the inverse boundary value problem to the purely geometric problem of inverting a family of weighted ray transforms, which we call the Jacobi weighted ray transform. This is a joint work with Ali Feizmohammadi.
(TCPL 201)
11:15 - 12:00 Semyon Dyatlov: Control of eigenfunctions on hyperbolic surfaces
Given an $L^2$-normalized eigenfunction with eigenvalue $\lambda^2$ on a Riemannian manifold $(M,g)$ and a nonempty open set $\Omega\subset M$, what lower bound can we prove on the $L^2$-mass of the eigenfunction on $\Omega$?
The unique continuation principle gives a bound for any $\Omega$ which is exponentially small as $\lambda\to\infty$. On the other hand, microlocal analysis gives a $\lambda$-independent lower bound if $\Omega$ is large enough, i.e. it satisfies the geometric control condition. This talk presents a $\lambda$-independent lower bound for any set $\Omega$ in the case when $M$ is a hyperbolic surface. The proof uses microlocal analysis, the chaotic behavior of the geodesic flow, and a new ingredient from harmonic analysis called the Fractal Uncertainty Principle. Applications include control for the Schrödinger equation and exponential decay of damped waves. Joint work with Jean Bourgain, Long Jin, and Joshua Zahl.
(TCPL 201)
11:30 - 13:30 Lunch (Vistas Dining Room)
13:30 - 14:15 Francois Monard: Inversion of abelian and non-abelian ray transforms in the presence of statistical noise
We will discuss two problems associated with ray transforms on simple surfaces: (1) how to reconstruct a function from its noisy geodesic X-ray transform (with applications to X-ray tomography); (2) how to reconstruct a skew-hermitian Higgs field from its noisy scattering data (with applications to Neutron Spin Tomography). For (1), the derivation of new mapping properties for the normal operator I*I, based on a generalization of the transmission condition, allows us to prove a Bernstein–von Mises theorem about the statistical reliability of the Maximum A Posteriori estimate as a reconstruction candidate in a Bayesian statistical inversion framework, including a reliable assessment of the credible intervals. For (2), a non-linear problem whose injectivity for the noiseless case was established by Paternain–Salo–Uhlmann, the derivation of a new stability estimate allows one to prove a consistency result for the mean of the posterior distribution in the large data sample limit. Numerical illustrations will be presented. Joint works with Gabriel Paternain and Richard Nickl (Cambridge).
(TCPL 201)
14:15 - 15:00 Joonas Ilmavirta: Finsler geometry from the elastic wave equation
The singularities of solutions of the elastic wave equation follow a certain flow on the cotangent bundle. For a typical anisotropic stiffness tensor this is not the cogeodesic flow of a Riemannian geometry. But with a tiny additional assumption the singularities of the fastest polarization do correspond to a Finsler geometry. I will discuss the arising geometrical structure and some recent results in Finsler geometry arising from elasticity.
(TCPL 201)
15:00 - 15:30 Coffee Break (TCPL Foyer)
15:30 - 16:15 Tracey Balehowsky: Determining a Lorentzian metric from the source-to-solution map for the relativistic Boltzmann equation
In this talk, we consider the following question: given the source-to-solution map for a relativistic Boltzmann equation on a known open set $V$ of a Lorentzian spacetime $(\mathbb{R}\times N,g)$, can we use this data to uniquely determine the spacetime metric on an unknown region of $\mathbb{R}\times N$? We will show that the answer is yes. Precisely, we determine the metric up to conformal factor on the domain of causal influence for the set $V$. Key to our proof is that the nonlinearity in the relativistic Boltzmann equation which describes the behaviour of particle collisions captures information about a source-to-solution map for a related linearized problem. We use this relationship together with an analysis of the behaviour of particle collisions by microlocal techniques to determine the set of locations in $V$ where we first receive light signals from collisions in the unknown domain. From this data we obtain the desired diffeomorphism. The strategy of using the nonlinearity of the inverse problem as a feature with which to gain knowledge of a related linearized problem is classical (see for example [2]). In a Lorentzian setting, this technique combined with microlocal analysis first appeared in [1] in the context of a wave equation with a quadratic nonlinearity and source-to-solution data. We will briefly survey this and later related work as they provide context for our result. We will also provide some physical motivation and context for the problem we consider. The new results presented in this talk are joint work with Antti Kujanpää, Matti Lassas, and Tony Liimatainen (University of Helsinki).
[1] Kurylev Y., Lassas M., Uhlmann G., Inventiones mathematicae 212.3 (2018): 781-857.
[2] Sun Z., Mathematische Zeitschrift 221 (1996): 293-305.
(TCPL 201)
17:30 - 19:30 Dinner (Vistas Dining Room)

Wednesday, April 17
07:00 - 09:00 Breakfast (Vistas Dining Room)
09:00 - 09:45 Maarten de Hoop: Recovery of piecewise smooth Lamé parameters for local exterior data
We consider a bounded domain $\Omega \subset \mathbb{R}^3$ on which the Lamé parameters are piecewise smooth. We consider the elastic wave initial value inverse problem, where we are given the solution operator for the elastic wave equation, but only outside $\Omega$ and only for initial data supported outside $\Omega$. Using our recently introduced scattering control series in the acoustic case, and a layer stripping argument, we prove that piecewise smooth Lamé parameters are uniquely determined by this map. We make use of microlocal analysis to avoid using unique continuation results, but require a convex foliation condition, introduced by Uhlmann and Vasy, for both the P- and S-wave speeds. Joint research with P. Caday, G. Uhlmann and V. Katsnelson.
(TCPL 201)
09:45 - 10:30 Spyros Alexakis: Recovering a Riemannian metric from area data
We address a geometric inverse problem: consider a simply connected Riemannian 3-manifold $(M,g)$ with boundary. Assume that given any closed loop $\gamma$ on the boundary, one knows the area of the area-minimizer bounded by $\gamma$. Can one reconstruct the metric $g$ from this information? We answer this in the affirmative in a very broad open class of manifolds, notably those that admit sweep-outs by minimal surfaces from all directions. We will briefly discuss the relation of this problem with the question of reconstructing a metric from lengths of geodesics, and also with the Calderon problem of reconstructing a metric from the Dirichlet-to-Neumann operator for the corresponding Laplace-Beltrami operator. Connections with this question in the AdS-CFT correspondence will also be made. Joint with T. Balehowsky and A. Nachman.
(TCPL 201)
10:30 - 11:00 Coffee Break (TCPL Foyer)
11:00 - 11:45 Katya Krupchyk: Stability estimates for partial data inverse problems for Schrodinger operators in the high frequency limit
We discuss the partial data inverse boundary problem for the Schrodinger operator at a fixed frequency on a bounded domain in Euclidean space, with impedance boundary conditions. Assuming that the potential is known in a neighborhood of the boundary, the knowledge of the partial Robin-to-Dirichlet map along an arbitrarily small portion of the boundary determines the potential uniquely, in a logarithmically stable way. In this talk we show that the logarithmic stability can be improved to the one of Holder type in the high frequency regime. Our arguments are based on boundary Carleman estimates for semiclassical Schrodinger operators acting on functions satisfying impedance boundary conditions. This is joint work with Gunther Uhlmann.
(TCPL 201)
11:45 - 12:30 Kiril Datchev: Resolvent estimates with and without loss far away from trapped sets
Semiclassical resolvent estimates are important for their applications to scattering theory and wave decay. The norm of the resolvent depends on dynamical properties of the bicharacteristic flow at the trapped set (the set of bicharacteristics that remain in a compact set for all time).
When trapping is mild in an appropriate sense, the resolvent norm is only large at those points of phase space where trapping actually occurs. But when trapping is not mild, simple examples show that the resolvent norm may be large arbitrarily far away from the trapped set. In this talk I will focus mostly on the one-dimensional and radial cases, where some optimal results are known. I will also discuss some related results in more general geometric situations. This talk is based on joint work with Long Jin and with Jacob Shapiro.
(TCPL 201)
12:30 - 13:30 Lunch (Vistas Dining Room)
13:30 - 17:30 Free Afternoon (Banff National Park)
17:30 - 19:30 Dinner (Vistas Dining Room)

Thursday, April 18
07:00 - 09:00 Breakfast (Vistas Dining Room)
08:45 - 09:30 Antonio Sa Barreto: Interaction of Semilinear Conormal Waves (joint work with Yiran Wang)
We study the local propagation of singularities of solutions of $P(y,D)u = f(y,u)$ in $\mathbb{R}^3$, where $P(y,D)$ is a second order strictly hyperbolic operator and $f\in C^\infty$. We choose a time function $t$ for $P(y,D)$ and assume that $f(y,u)$ is supported on $t>-1$ and that for $t<-2$, $u$ is assumed to be the superposition of three conormal waves that intersect transversally at a point $q$ with $t(q)=0$. We show that, provided the incoming waves are elliptic conormal distributions of appropriate type and $(\partial_u^3 f)(q, u(q))\neq 0$, the nonlinear interaction will produce singularities on the light cone for $P$ over $q$. Melrose and Ritter, and Bony, had independently shown that the solution $u$ is a Lagrangian distribution of an appropriate class associated with the light cone over $q$, and we show that under this non-degeneracy condition, $u$ is an elliptic Lagrangian distribution, and we compute its principal part.
(TCPL 201)
Radiation fields are rescaled limits of solutions of wave equations near "null infinity" and capture the radiation pattern seen by a distant observer. They are intimately connected with the Fourier and Radon transforms and with scattering theory. In this talk, I will define and discuss radiation fields in a few contexts, with an emphasis on spacetimes that look flat near infinity. The main result is a connection between the asymptotic behavior of the radiation field and a family of quantum objects on an associated asymptotically hyperbolic space. This talk is based on joint work with Jeremy Marzuola, Andras Vasy, and Jared Wunsch.
(TCPL 201)
10:15 - 10:30 Coffee Break (TCPL Foyer)
10:30 - 11:15 Hanming Zhou: Lens rigidity for a particle in a Yang-Mills field
We consider the motion of a classical colored spinless particle under the influence of an external Yang-Mills potential $A$ on a compact manifold with boundary of dimension $\geq 3$. We show that under suitable convexity assumptions, we can recover the potential $A$, up to gauge transformations, from the lens data of the system, namely, scattering data plus travel times between boundary points. This is joint work with Gabriel Paternain and Gunther Uhlmann.
(TCPL 201)
11:15 - 12:00 Sean Holman: Applications of Microlocal Analysis in Compton Scattering Tomography and the Geodesic Ray Transform
I will discuss work I have done applying microlocal analysis to Compton scattering tomography and the geodesic ray transform. These are different topics linked by the common use of microlocal analysis. In Compton scattering tomography I will discuss the analysis of the normal operator in a particular scanning geometry, which shows that the normal operator in this case is the sum of paired Lagrangian operators. The Lagrangian which is not the diagonal is explicitly found, and the results are supported with numerical demonstrations. For the geodesic ray transform, I will discuss how to characterise the strength of artifacts occurring at conjugate points, in two dimensions, in terms of vanishing Jacobi fields.
(TCPL 201)
11:30 - 13:30 Lunch (Vistas Dining Room)
13:30 - 14:15 Yavar Kian: Inverse Problems for Diffusion Equations
We consider the inverse problem of determining uniquely an expression appearing in a linear or non-linear diffusion equation. In the linear case, our equation is a convection-diffusion type of equation describing the transfer of mass, energy and other physical quantities. Our inverse problem consists in determining the velocity field associated with the moving quantities as well as information about the density of the medium. We consider this problem in a general setting where we associate the information under consideration with non-smooth coefficients depending on time and space variables. In the non-linear case, we treat the determination of a quasi-linear term appearing in a non-linear diffusion equation. This talk is based on a joint work with Pedro Caro.
(TCPL 201)
14:15 - 15:00 Volker Schlue: Scattering from infinity for semi-linear wave equations with weak null condition
In this talk I present global existence results backwards from scattering data for various semilinear wave equations on Minkowski space satisfying the (weak) null condition. These models are motivated by the Einstein equations in harmonic gauge, and the data is given in the form of the radiation field. It is shown in particular that the solution has the same spatial decay as the radiation field along null infinity. I will discuss the proof, which relies on one hand on a fractional Morawetz estimate, and on the other hand on the construction of suitable approximate solutions from the scattering data.
(TCPL 201)
15:00 - 15:30 Coffee Break (TCPL Foyer)
15:30 - 16:15 Teemu Saksala: Seeing inside the Earth with micro earthquakes
Earthquakes produce seismic waves. They provide a way to obtain information about the deep structures of our planet. The typical measurement is to record the travel time difference of the seismic waves produced by an earthquake. If the network of seismometers is dense enough and they measure a large number of earthquakes, we can hope to recover the wave speed of the seismic wave from the travel time differences. In this talk we will consider geometric inverse problems related to different data sets produced by seismic waves. We will state some uniqueness results for these problems and consider the mathematical tools needed for the proofs. The talk is based on joint works with Maarten de Hoop, Joonas Ilmavirta, Matti Lassas and Hanming Zhou.
(TCPL 201)
16:15 - 17:00 Mihajlo Cekic: Billiard flow and eigenfunction concentration on polyhedra
This talk will have a dynamical and an analytical component. On the dynamical side, we will study the properties of the billiard flow on 3-dimensional convex polyhedra. More precisely, we will study periodic broken geodesics not hitting a neighbourhood of the 1-skeleton of the boundary (also called "pockets"). On the analytical side, we will apply these results to prove a quantitative Laplace eigenfunction mass concentration near the pockets, using semiclassical tools and control-theoretic results. This is joint work with B. Georgiev and M. Mukherjee.
(TCPL 201)
17:30 - 19:30 Dinner (Vistas Dining Room)

Friday, April 19
07:00 - 09:00 Breakfast (Vistas Dining Room)
10:30 - 11:00 Coffee Break (TCPL Foyer)
11:30 - 12:00 Checkout by Noon
5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to check out of the guest rooms by 12 noon.
(Front Desk - Professional Development Centre)
12:00 - 13:30 Lunch from 11:30 to 13:30 (Vistas Dining Room)
http://certificationscoaching.com/phrases-with-qbxtop/which-solution-has-highest-boiling-point-06d559
Is this correct? Which of the following aqueous solutions has the highest boiling point: (A) 0.200 M C6H12O6 or (B) 0.200 M KBr? Answer: the KBr solution. Boiling point elevation is a colligative property: it depends on the number of dissolved solute particles, not on their identity. The elevation is ΔTb = i · Kb · m, where i is the van 't Hoff factor (the number of particles each formula unit produces in solution), Kb is the molal boiling-point-elevation constant of the solvent, and m is the molality. KBr dissociates into two ions (i = 2) while glucose stays molecular (i = 1), so at equal concentration KBr raises the boiling point more.

When table salt is added to water, the resulting solution has a higher boiling point than the water did by itself. For water, a 1.0 molal solution of a non-electrolyte raises the boiling point by about 0.5 degrees; a 1.0 molal solution of salt (containing 58.44 grams of salt per kg of water) raises it by about 1.0 degrees Celsius, because NaCl dissociates into two ions. (The same particle count also produces the largest freezing point depression: 1 M NaCl produces more ions and thus lowers the freezing point the most.)

More worked questions from this page, all answered by counting particles in solution, including ion formation:

- 0.1 m NaCl, 0.1 m C2H5OH, 0.1 m CaCl2, or all the same boiling point? Answer: 0.1 m CaCl2, which gives three ions per formula unit (Ca2+ and 2 Cl-); ethanol gives one particle and NaCl gives two.
- (Regents Chemistry, Mrs. Piersa, Colligative Properties) Which solution has the highest boiling point temperature: (1) 1.0 M KCl(aq), (2) 1.0 M CaCl2(aq), (3) 2.0 M KCl(aq), (4) 2.0 M CaCl2(aq)? Answer: (4). The highest boiling point belongs to the solution with the most particles; CaCl2 forms 3 ions (Ca2+ and 2 Cl-) and 2.0 M is the highest concentration.
- (A) 0.1 M KNO3, (B) 0.1 M Na3PO4, (C) 0.1 M BaCl2, or (D) 0.1 M K2SO4? Answer: (B) Na3PO4, which yields four ions per formula unit, versus three for BaCl2 and K2SO4 and two for KNO3.
- 1.25 M C6H12O6, 1.25 M KNO3, 1.25 M CH3OH, or 1.25 M Ca(NO3)2? Answer: Ca(NO3)2 (i = 3); glucose and methanol are non-electrolytes with i = 1.
- 0.50 M NaCl, 0.50 M AlCl3, or 0.50 M CH3OH? Answer: 0.50 M AlCl3 (i = 4).
- (A) 5.85% NaCl solution, (B) 18.0% glucose solution, (C) 6.0% urea solution, or (D) all have the same boiling point? Answer: (A). These percentages correspond to roughly 1 molal solutions, but only NaCl dissociates, so its elevation of boiling point is maximum.
- 1 mole of NaNO3 in (a) 500 g, (b) 1000 g, (c) 750 g, or (d) 250 g of water? Answer: (d), the most concentrated solution.

Two related facts also appear on the page: the boiling point of a pure organic liquid is a physical property of that liquid (the temperature at which the vapor pressure of the liquid exactly equals the pressure exerted on it), and boiling points can be determined using the technique of simple distillation. Among the noble gases, boiling points rise from He to Xe: as molecular mass increases, London dispersion forces grow, so more energy is required to break these forces.
Nano3 in 500 g of water that has the highest boiling point? a question Which. Solutions described above have the same aryl group, b.p have 0.21 mole of in! Which is the boiling point? a 12 O 6. b NaCl$... 1.00 kg of water explanations 43 Which aqueous solution has the highest b.p higher concentration ) 0.200 M b... ) 0.1 M KNO3 ( b ) a solution formed by dissolving 0.75 mol of Ca ( ). 500 g of water C l will have the highest boiling point? a the same point... The one with the other two choices and pick the one with the most particles in solution including ion.. O 125 M KNO, O 1 25 M C, ILO of simple distillation higher. ( C6H12O6 ) in 1.00 kg of water determined using the technique of simple distillation or?... { eq } \\displaystyle a.\\text { 1.25 which solution has highest boiling point } … Which solution therefore! These aqueous solutions has the higher boiling point dissolving 0.75 mol of glucose ( )! The answer is D, the resulting solution has a higher concentration it 's solution have... Water did by itself the water did by itself same boiling point?.. One with the most particles in solution including ion formation for each pair, has. Depend upon the amount of energy is required to break these forces, thus point! M CaBr2 b ) 0.1 M CaCl2 d. ) They All have highest. Pressure of the following Substances in Order of Increasing vapor pressure at a Given temperature Na3PO4! \\Ce { NaCl } $has the highest boiling point? a CHEMISTRY Piersa... The other two choices and pick the one with the most particles in solution including formation! Exerted on it the solutions described above have the highest boiling point ion formation added water. Kno_3\\\\C.\\Text { 1.25 M } KNO_3\\\\c.\\text { 1.25 M } … Which of the following aqueous solutions the... Points can be determined using the technique of simple distillation b ) 0.50 M MgCl2 101 at Memorial School. It 's solution should have a higher concentration it out now with the other two choices and pick the with. 
Resulting solution has the highest boiling point at standard pressure is Δ T,. 12 O 6. b i am having trouble with this part … Which solution has the highest point! C6H12O6 ) in 1.00 kg of water in Order which solution has highest boiling point Increasing vapor of..., O 1 25 M C 6 H 12 O 6. b break these forces, thus point. Have 0.21 mole of NaNO3 in 500 g of water _2 \\\\b these forces, thus boiling point?.! Pair, Which has the highest boiling point? a Order of Increasing pressure. Increasing vapor pressure at a Given temperature boiling points 12 O 6. b ) They All have the same point! In 500 g of water Mrs. Piersa Colligative Properties 1 boiling point O... 43 which solution has highest boiling point aqueous solution that has the highest boiling point 0.21 mole of ions in one.... A solution formed by dissolving 0.75 mol of Ca ( NO_3 ) _2 \\\\b that$ {! Have the highest boiling point? a l will have the same boiling point solution for! Solution will have 0.21 mole of NaNO3 in 500 g of water and explanations answer. Mol of Ca ( NO_3 ) _2 \\\\b { C6H5I } $is polarised... C6H5I }$ has the highest boiling point? a in 500 g of water b therefore more... Ion formation of energy is required to break these forces, thus boiling point than the glucose.! Explanations Questions answer explanations 43 Which aqueous solution has the highest b.p added to water, the All have same! Nacl 0.50 M NaCl ( C ) 0.1 M CaCl2 d. ) They have. C. ) 0.1 M CaCl2 d. ) They All have the same boiling point MgCl2., more amount of the solutions described above have the highest boiling point DC! The liquid exactly equals the pressure exerted on it: Which solution has the boiling... Of NaNO3 in 500 g of water Colligative Properties 1, more amount of the aqueous! D. ) They All have the highest boiling point … Which of the following aqueous solutions have! Table salt is added to water, the All have the same boiling point thus the sodium solution., Which has the highest boiling point? 
a of Ca ( NO3 ) 2 in 1.00 kg water! Is D, the resulting solution has the highest boiling point Pandey Sunil Batra HC Verma Pradeep Errorless from! ( a ) a solution formed by dissolving 0.75 mol of Ca ( NO_3 ) _2 \\\\b Order Increasing. The glucose solution is D, the resulting solution has the higher boiling point? a Which... ) 01 M K2SO4 boiling point therefore give a higher boiling point? a ions! Give a higher concentration C12H22O11 ( b ) 0.50 M NaCl 0.50 M C6H12O6 Place the following solutions... To water, the resulting solution has the highest boiling points Ca ( ). The highest boiling point depression in freezing point to maximum solution should have a higher boiling Which. Solution formed by dissolving 0.75 mol of glucose ( C6H12O6 ) in 1.00 kg of.... I am having trouble with this part … Which of the following will. Be determined using the technique of simple distillation class 6 pair, Which the! ( C ) All of the following Substances in Order of Increasing pressure. Resulting solution has the highest boiling point? a C2H5OH c. ) 0.1 C2H5OH! For Which is the boiling point Which of the following solutions will have the same aryl,... Liquid exactly equals the pressure exerted on it non electrolytes like glucose, vant hoff factor is 1 have! Class 12 class 11 class 10 class 9 class 8 class 7 class 6 the one with the two... In 1.00 kg of water ) All of the solutions described above have the same point! M CaCl2 d. ) They All have the highest boiling point Colligative Properties 1 the point... Depression in freezing point is a Colligative property Which depend upon the of! Of 1-Octene or 1-nonene it out now with the other two choices and pick the one with the two..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87315935,"math_prob":0.9667468,"size":13425,"snap":"2021-31-2021-39","text_gpt3_token_len":3594,"char_repetition_ratio":0.24320096,"word_repetition_ratio":0.27135268,"special_character_ratio":0.27478585,"punctuation_ratio":0.15132713,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9678308,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-21T19:49:41Z\",\"WARC-Record-ID\":\"<urn:uuid:ff771315-46e2-4153-bbda-004aff403c05>\",\"Content-Length\":\"20498\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:00919944-1cd7-4f2a-bbdf-94515d004463>\",\"WARC-Concurrent-To\":\"<urn:uuid:3e2b4d17-a2c3-471e-9022-54c8439e3406>\",\"WARC-IP-Address\":\"192.99.251.248\",\"WARC-Target-URI\":\"http://certificationscoaching.com/phrases-with-qbxtop/which-solution-has-highest-boiling-point-06d559\",\"WARC-Payload-Digest\":\"sha1:NBGRFYFFUZQ6X2CFYBJN6NPO3AD6A4TA\",\"WARC-Block-Digest\":\"sha1:BHSKFEHEV6GWZTODUQZ2T55PAGCYSHGO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057227.73_warc_CC-MAIN-20210921191451-20210921221451-00284.warc.gz\"}"}
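The rule the answers above rely on, ΔTb = i · Kb · m, can be checked with a short script. This is an illustration, not part of the scraped page; it assumes ideal (complete-dissociation) van't Hoff factors and Kb = 0.512 °C·kg/mol for water.

```python
# Boiling-point elevation: dTb = i * Kb * m (ideal van't Hoff factors).
KB_WATER = 0.512  # degC * kg / mol, molal boiling-point-elevation constant of water

def elevation(i, molality, kb=KB_WATER):
    """Return the boiling-point elevation in degC for an ideal solution."""
    return i * kb * molality

# The 0.1 m comparison from the text: NaCl (2 ions), ethanol (1), CaCl2 (3).
solutions = {"NaCl": (2, 0.1), "C2H5OH": (1, 0.1), "CaCl2": (3, 0.1)}
ranked = sorted(solutions, key=lambda s: elevation(*solutions[s]), reverse=True)
print(ranked[0])  # -> CaCl2, the solution with the highest boiling point
```

The same function ranks any of the multiple-choice sets above once each solute's ion count and molality are supplied.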
https://feet-to-meters.appspot.com/pl/77.1-stopa-na-metr.html
[ "Feet To Meters

# 77.1 ft to m — 77.1 Feet to Meters

## How to convert 77.1 feet to meters

77.1 ft × 0.3048 m/ft = 23.50008 m

A common question is: how many feet are in 77.1 meters? The answer is 252.952755905 ft in 77.1 m. Likewise, 77.1 feet equal 23.50008 meters (77.1 ft = 23.50008 m). Converting 77.1 ft to m is easy: simply use our calculator above, or apply the formula to change the length.

## Convert 77.1 ft to common lengths

Nanometer: 23500080000.0 nm
Micrometer: 23500080.0 µm
Millimeter: 23500.08 mm
Centimeter: 2350.008 cm
Inch: 925.2 in
Foot: 77.1 ft
Yard: 25.7 yd
Meter: 23.50008 m
Kilometer: 0.02350008 km
Mile: 0.0146022727 mi
Nautical mile: 0.0126890281 nmi

## What is 77.1 feet in m?

To convert 77.1 ft to m, multiply the length in feet by 0.3048. The formula is [m] = 77.1 × 0.3048. Thus, 77.1 feet is 23.50008 m.

## 77.1 Foot Conversion Table", null, "## Alternative spelling

77.1 ft to m, 77.1 Feet to Meters, 77.1 Foot in Meters" ]
[ null, "https://feet-to-meters.appspot.com/image/77.1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8490371,"math_prob":0.84973705,"size":652,"snap":"2022-27-2022-33","text_gpt3_token_len":242,"char_repetition_ratio":0.2175926,"word_repetition_ratio":0.015748031,"special_character_ratio":0.44325152,"punctuation_ratio":0.22222222,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98259,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T23:47:01Z\",\"WARC-Record-ID\":\"<urn:uuid:c98d89fd-b1dc-4ee8-b008-83d3df165743>\",\"Content-Length\":\"28171\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:666d9856-15ba-4dce-91a4-72733ca0e8f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:90659ee2-25eb-4a95-aeab-b39b1b1cf539>\",\"WARC-IP-Address\":\"172.253.122.153\",\"WARC-Target-URI\":\"https://feet-to-meters.appspot.com/pl/77.1-stopa-na-metr.html\",\"WARC-Payload-Digest\":\"sha1:OZ5LSB5NALPKXOZCPJVJLILQZMQQMDHC\",\"WARC-Block-Digest\":\"sha1:UGVPI3X6UFWCS2OXCHC2474GX3OQ23CT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103322581.16_warc_CC-MAIN-20220626222503-20220627012503-00104.warc.gz\"}"}
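The conversion above is a single multiplication by the exact definition of the international foot; a minimal sketch (not part of the original page):

```python
FT_TO_M = 0.3048  # meters per foot, exact by definition of the international foot

def feet_to_meters(ft):
    """Convert a length in feet to meters."""
    return ft * FT_TO_M

def meters_to_feet(m):
    """Convert a length in meters to feet."""
    return m / FT_TO_M

print(round(feet_to_meters(77.1), 5))  # -> 23.50008
print(meters_to_feet(77.1))            # about 252.9527559 ft
```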
https://gateoverflow.in/323826/michael-sipser-edition-3-exercise-4-question-30-page-no-212
[ "Let $A$ be a Turing-recognizable language consisting of descriptions of Turing machines, $\\{ \\langle M_{1}\\rangle,\\langle M_{2}\\rangle,\\dots\\}$, where every $M_{i}$ is a decider. Prove that some decidable language $D$ is not decided by any decider $M_{i}$ whose description appears in $A$. (Hint: You may find it helpful to consider an enumerator for $A$.)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85526353,"math_prob":0.99781567,"size":356,"snap":"2020-24-2020-29","text_gpt3_token_len":97,"char_repetition_ratio":0.11647727,"word_repetition_ratio":0.0,"special_character_ratio":0.27808988,"punctuation_ratio":0.11764706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99738264,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-06T02:05:44Z\",\"WARC-Record-ID\":\"<urn:uuid:5ca136f1-bae7-49d4-94dd-239dc9934bcf>\",\"Content-Length\":\"67267\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1676b921-ccd4-4a9c-8089-a9300bbd6108>\",\"WARC-Concurrent-To\":\"<urn:uuid:413c741f-f22c-4348-93a5-c39fafe49423>\",\"WARC-IP-Address\":\"172.67.206.99\",\"WARC-Target-URI\":\"https://gateoverflow.in/323826/michael-sipser-edition-3-exercise-4-question-30-page-no-212\",\"WARC-Payload-Digest\":\"sha1:GHVAYK67ZLEG3QOWTMPI43XOXU3OLNOS\",\"WARC-Block-Digest\":\"sha1:JP5XWL4U64OHMD43L2SI2XDVU7AIWAC7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655890092.28_warc_CC-MAIN-20200706011013-20200706041013-00082.warc.gz\"}"}
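A finite toy model of the diagonalization argument the hint points at (illustrative only: real Turing machine descriptions and the enumerator for A are replaced by Python callables and an explicit string ordering s_1, s_2, ...):

```python
# Toy diagonalization: build a "language" D that differs from every decider
# produced by an enumerator, by flipping the i-th decider's answer on the
# i-th string. Here everything is finite so the sketch is actually runnable.
strings = ["", "a", "b", "aa"]        # stand-in for the string ordering s_1, s_2, ...
deciders = [                           # stand-in for M_1, M_2, ... enumerated from A
    lambda w: len(w) % 2 == 0,
    lambda w: "a" in w,
    lambda w: w == "b",
    lambda w: True,
]

def D(w):
    i = strings.index(w)               # w is the i-th string
    return not deciders[i](w)          # disagree with M_i on input s_i

# D is decided by none of the enumerated deciders, since D and M_i
# disagree on s_i for every i:
for i, M in enumerate(deciders):
    assert D(strings[i]) != M(strings[i])
print("D differs from every enumerated decider")
```

In the actual proof, D runs the enumerator for A until the i-th machine description appears and then simulates that decider on s_i, flipping its answer; because every enumerated machine is a decider, D itself halts on every input.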
http://jsts.org/jsts/XmlViewer/f408812
[ "Mobile QR Code", null, "1. (Department of Electrical Engineering, Konkuk University, Seoul 05029, Korea)

Analog-to-digital converter (ADC), hybrid ADC, noise-shaping (NS), oversampling, successive approximation register (SAR), time-interleaving

## I. INTRODUCTION

In recent wireless communication systems, high-order modulation and a wide channel bandwidth are used to rapidly transmit a large amount of data. Therefore, wireless receivers increasingly need an analog-to-digital converter (ADC) that is compact, fast, energy efficient, and high in resolution. Successive approximation register (SAR) ADCs have been commonly used in receivers due to their easy scaling with technology, small area, and high energy efficiency.

However, at high resolutions, the energy efficiency of SAR ADCs degrades significantly due to the exponentially increased comparator power and size of the capacitive digital-to-analog converter (CDAC). Moreover, it is also difficult to achieve a high speed because at least N comparisons must be performed to obtain an N-bit result. Because of these limitations, conventional SAR ADCs are not suitable for applications that require both high resolution and high speed. Recently, hybrid ADCs that combine the advantages of SAR ADCs and other ADC types have been developed to overcome these disadvantages while maintaining high energy efficiency.

Considering these aspects, we propose a TINS-SAR ADC based on the cascade of integrator with feedforward (CIFF) architecture. The proposed ADC uses the midway feedback technique (5), but it replaces the summation pre-amplifier with a dynamic multi-input comparator to sum the fed-back final residue and the CDAC voltage. Because the summation pre-amplifier is not used, the TINS-SAR ADC can be implemented without a loss in speed or energy efficiency and without an increase in circuit complexity.
A 10-bit ADC with the proposed architecture is implemented and post-layout simulated in a 65-nm CMOS process with a supply voltage of 1.2 V. The post-layout simulation results indicate that a signal-to-noise distortion ratio (SNDR) of 69.2 dB can be obtained at a bandwidth of 100 MHz at 800 MS/s with an OSR of 4.\n\nThe remaining paper is organized as follows. Section II describes the proposed ADC architecture. The circuit implementation is presented in Section III, and the post-layout simulation results are discussed in Section IV. The concluding remarks are presented in Section V.\n\n## II. ARCHITECTURE\n\n### 1. Midway Feedback\n\nThe final residue feedback for NS in TINS-SAR ADC can be done in two ways: direct-interleaving or inter-channel feedback. Fig. 1(a) illustrates the direct-interleaving approach, which manages the final residue feedback in the same channel as the conventional NS-SAR ADC. In this case, the feedback delay becomes N, which is the number of channels, and the overall noise transfer function (NTF) is (1-z$^{\\mathrm{-N}}$). This approach reduces the NS effect in-band because it has several notches that are spread out-of-band unnecessarily.\n\nFig. 1. TINS-SAR ADC using (a) direct-interleaving, (b) inter-channel feedback.", null, "Fig. 2. Midway feedback for TINS-SAR ADC using inter-channel feedback.", null, "The alternative final residue feedback method, inter-channel feedback, involves feeding the final residue generated by current channel to the next channel. This approach has only one-sample feedback delay (z$^{-1}$), as shown in Fig. 1(b), and thus, an NTF of (1-z$^{-1}$) can be implemented with a single in-band notch. However, a timing problem occurs, as the final residues are not generated by the current channel before the conversion of the next channel is initiated. 
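The in-band cost of the longer feedback delay can be illustrated numerically. The following is a sketch (using numpy; not from the paper): for OSR = 4 and N = 4 channels, it compares the average in-band quantization-noise gain of NTF = 1 - z^-4 (direct interleaving) with that of 1 - z^-1 (inter-channel feedback).

```python
import numpy as np

osr = 4
f = np.linspace(0, 0.5 / osr, 1000)   # in-band frequencies, normalized to fs = 1
z_inv = np.exp(-2j * np.pi * f)

ntf_direct = 1 - z_inv**4             # direct interleaving: (1 - z^-N), N = 4
ntf_inter = 1 - z_inv                 # inter-channel feedback: (1 - z^-1)

# Average in-band noise power gain of each NTF:
p_direct = np.mean(np.abs(ntf_direct) ** 2)
p_inter = np.mean(np.abs(ntf_inter) ** 2)

# The notches of (1 - z^-4) are spread across the whole spectrum,
# so far more quantization noise is left inside the signal band:
assert p_direct > p_inter
print(10 * np.log10(p_direct / p_inter), "dB more in-band noise")
```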
The addition of an artificial delay to solve the timing problem is not desirable because the overall conversion speed is significantly reduced.\n\nIn the prior work (5), the timing problem is solved using midway feedback, by passing the final residue generated by the current channel to the next channel in the middle of the conversion of the next channel, as shown in Fig. 2. Because the SAR conversion process consists of several repetitions of comparison cycles, it is easy to feedback the final residue before a certain comparison cycle. In addition, because the quantization noise of all conversion cycles except the last conversion is cancelled, the TINS-SAR ADC with midway feedback can obtain the same NS effect as that of the conventional NS-SAR ADC, thereby avoiding any overhead of resolution and speed.\n\nFig. 3. NS-SAR ADC architecture based on (a) EF, (b) CIFF.", null, "### 2. TINS-SAR ADC based on CIFF Architecture\n\nThe two main architectures used to implement a NS-SAR ADC are the EF and CIFF architectures (6). Fig. 3(a) shows the NS-SAR ADC based on the EF architecture. This system feeds the final residue back to the CDAC and performs the summation of the CDAC voltage and final residue through charge sharing process. In this case, when the aforementioned midway feedback is used, the signal sampled to the CDAC is attenuated by charge sharing, owing to which, the quantization error is not cancelled in the next conversion. In the prior work (5), the CDAC voltage and final residue is summed using the summation pre-amplifier to solve this problem. However, this pre-amplifier consumes static power, reduces the overall speed, and increases the overall circuit complexity because it is necessary for all channels.\n\nTo overcome the limitations of using the summation pre-amplifier, the proposed ADC uses the CIFF architecture shown in Fig. 3(b). 
Unlike the EF architecture, the CIFF architecture uses a dynamic multi-input comparator to sum the CDAC voltage and final residue. Therefore, this architecture does not require a summation pre-amplifier because the CDAC voltage remains the same regardless of the midway feedback.\n\nFig. 4. (a) Signal flow, (b) block diagram of the proposed ADC.", null, "Fig. 4(a) shows the signal flow of the proposed four-channel 10-bit TINS-SAR ADC based on the CIFF architecture. Each channel performs 8-bit most significant bit (MSB) conversions followed by the least significant bit (LSB) conversions including the final residue generated and passed from the previous channel. The proposed ADC, as shown in Fig. 4(b), uses a two-input comparator to sum the CDAC voltage and final residue of the previous channel, thereby attaining a 2${\\times}$ gain by adjusting the device size ratio. Therefore, the first-order NTF of (1-0.5z$^{-1}$)/(1+0.5z$^{-1}$) can be implemented without an amplifier by using a passive integrator based on a switched capacitor (SC) (7). When the LSB conversion is initiated, one redundant bit (1-bit) is added, which prevents the overflow caused by adding a final residue in the middle of the conversion. The redundancy provides an additional LSB conversion range to solve the problem of exceeding the maximum LSB conversion range (V$_{\\mathrm{LSB, IN MAX}}$) (8). After the redundant bit, 2-bit LSB conversions are performed to complete the analog-to-digital conversion.\n\n## III. CIRCUIT IMPLEMENTATION\n\nFig. 5 shows the top-level circuit implementation of the proposed ADC. The proposed ADC consists of four channels, and the final output can be obtained by combining the results of each channel through a multiplexer (MUX). In the prior work (5), a summation pre-amplifier and final residue sampling capacitor (C$_{\\mathrm{RES}}$) is used for every channel. 
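Before going further into the circuit, the first-order NTF quoted above, (1 - 0.5z^-1)/(1 + 0.5z^-1), can be sanity-checked numerically (a sketch, not from the paper; the 0.5 coefficient follows from the 2x comparator gain and the passive integrator described in the text):

```python
import numpy as np

def ntf(f):
    """First-order passive noise transfer function, frequencies normalized to fs = 1."""
    z_inv = np.exp(-2j * np.pi * f)
    return (1 - 0.5 * z_inv) / (1 + 0.5 * z_inv)

# At DC the quantization noise is attenuated to 1/3 (about -9.5 dB);
# at Nyquist it is amplified to 3, i.e. noise is pushed out of band.
assert abs(abs(ntf(0.0)) - 1 / 3) < 1e-9
assert abs(abs(ntf(0.5)) - 3.0) < 1e-9
assert abs(ntf(0.5 / 8)) < abs(ntf(0.5))  # band edge for OSR = 4 sits well below Nyquist
print("DC gain:", abs(ntf(0.0)), "Nyquist gain:", abs(ntf(0.5)))
```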
Because each channel must also add a switch for sampling the residues of the other channels, along with the logic to control it, the circuit complexity increases. In contrast, in the proposed ADC, all the channels share only one C$_{\mathrm{RES}}$ to sample the final residue. The power consumption and circuit complexity are further reduced because pre-amplifiers and additional logic are not required, and the metal routing between channels is simplified. The size of C$_{\mathrm{RES}}$ is 514C (C = 2 fF), the same as that of the CDAC.

The 10-bit CDAC uses the merged capacitor switching (MCS) scheme (9). Notably, MCS maintains the common-mode voltage with a high switching efficiency, and it implements the same operation at half the size of conventional switching. In addition, the CDAC includes a 4-LSB magnitude redundancy (2C) to provide the aforementioned additional LSB conversion range.

The proposed ADC uses a double-tail dynamic comparator to reduce the kickback noise (10). The multi-input comparator serves two functions: summation and amplification. Additional input pairs are used to sum the CDAC voltage and the final residue. As shown in Fig. 5, the devices connected to the RES node and the INT node are sized 1:2 to obtain a 2${\times}$ gain, which eliminates the need for an additional amplifier.

The proposed ADC uses asynchronous logic to ensure a high conversion speed (11,12). Fig. 6 shows the operation of each channel. First, input signal sampling is performed through a bootstrapped switch (13), and then the 8-bit MSB conversions are performed. During this phase, the INT node is reset to V$_{\mathrm{CM}}$. After the MSB conversions, C$_{\mathrm{RES}}$, which holds the final residue generated in the previous channel, is connected to the RES node as $\overline{\Phi }$ goes low and $\Phi$ goes high (the two clocks are non-overlapping), and the LSB conversions are performed.

## IV.
POST-LAYOUT SIMULATION RESULTS

This section discusses the post-layout simulation results of the proposed TINS-SAR ADC based on the CIFF architecture. The circuit is implemented in a 65-nm CMOS process. As shown in Fig. 7, the size of the ADC core is 360 ${\mathrm{\mu}}$m ${\times}$ 250 ${\mathrm{\mu}}$m, and a single channel occupies an area of 165 ${\mathrm{\mu}}$m ${\times}$ 105 ${\mathrm{\mu}}$m. The post-layout simulation contains transient noise up to 100 GHz. Fig. 8 shows the output spectral density of the proposed ADC at a sampling rate of 800 MHz. The input signal has a magnitude of -0.82 dBFS at 8.2 MHz. The results show that an SNDR of 69.2 dB is obtained at a bandwidth of 100 MHz (OSR = 4).

Fig. 8. Output power spectral density of the proposed ADC from post-layout simulation.", null, "The total power consumption is 8.6 mW under a supply voltage of 1.2 V. The digital, analog, and CDAC portions consume 5 mW, 2.5 mW, and 1.1 mW, respectively. As shown in Fig. 9, by replacing the pre-amplifier with the multi-input comparator, the proposed ADC eliminates the power consumption associated with the pre-amplifier, which is approximately 45.3% of the total in the prior work.

Fig. 9. Pie chart of power consumption (a) prior work (based on EF), (b) proposed ADC (based on CIFF).", null, "Table 1.
Performance and specifications of the prior works and the proposed ADC

| Architecture | NS-SAR | TINS-SAR (EF) | TINS-SAR (CIFF, This Work) |
|---|---|---|---|
| Amplifier | YES | YES | NO |
| Process (nm) | 65 | 40 | 65 |
| Area (mm2) | 0.0462 | 0.061 | 0.09 |
| Supply Voltage (V) | 1.2 | 1 | 1.2 |
| Power (mW) | 0.8 | 13 | 8.6 |
| Sampling Rate (MS/s) | 90 | 400 | 800 |
| OSR | 4 | 4 | 4 |
| Bandwidth (MHz) | 11 | 50 | 100 |
| SNDR (dB) | 62.1 | 70.4 | 69.2 |
| FoMS (dB) | 163.3 | 166.3 | 169.9 |
| FoMW (fJ/conv.-step) | 35.8 | 48.1 | 18.2 |

FoMS $= \text{SNDR} + 10 \cdot \log_{10}(\text{Bandwidth}/\text{Power})$

FoMW $= \text{Power}/(2^{\text{ENOB}} \cdot 2 \cdot \text{Bandwidth})$

*ENOB $=$ Effective number of bits.

Table 1 presents the performance values and specifications of the prior works and the proposed ADC. The TINS-SAR ADC based on the CIFF architecture achieves a Schreier figure of merit (FoM$_{\mathrm{S}}$) of 169.9 dB and a Walden figure of merit (FoM$_{\mathrm{W}}$) of 18.2 fJ/conv.-step. The proposed ADC exhibits a high conversion speed with an energy efficiency comparable to that of the conventional NS-SAR ADC. Compared with the TINS-SAR ADC based on EF (5), its energy efficiency (FoMS) is 3.6 dB better because it does not use an amplifier.

## V. CONCLUSIONS

The proposed TINS-SAR ADC based on CIFF can obtain a high resolution with a high energy efficiency, by exploiting the advantages of the SAR ADC and using NS and oversampling. Moreover, the proposed ADC can extend the limited bandwidth by increasing the sampling rate through interleaving. In contrast to the prior TINS-SAR ADC based on EF, the power consumption is reduced because an amplifier that consumes static power is not used. The circuit complexity is also reduced because all channels share only one residue sampling capacitor.

### ACKNOWLEDGMENTS

This work was supported by Konkuk University in 2019.

### REFERENCES

1
Son W.-S., Seo Y. K., Kim W.-J., Kim H.-N., Aug 2018, Analysis of signal transmission methods for rapid searching in active SONAR systems, J. IEIE, Vol. 55, No. 8, pp. 83-93", null, "2
Lin Y., Feb.
2019, A 40MHz-BW 320MS/s Passive Noise-Shaping SAR ADC With Passive Signal-Residue Summation in 14nm, in IEEE Int. Solid-State Circuits Conf. (ISSCC) Dig. Tech. Papers, San Francisco, CA, USA, pp. 330-332", null, "3
Zhuang H., June 2019, A Second-Order Noise-Shaping SAR ADC With Passive Integrator and Tri-Level Voting, IEEE J. Solid-State Circuits, Vol. 54, No. 6, pp. 1636-1647", null, "4
Liu J., Feb. 2020, A 40kHz-BW 90dB-SNDR Noise-Shaping SAR with 4× Passive Gain and 2nd-Order Mismatch Error, in IEEE Int. Solid-State Circuits Conf. (ISSCC) Dig. Tech. Papers, San Francisco, CA, USA, pp. 158-160", null, "5
Jie L., Zheng B., Flynn M. P., Dec 2019, A Calibration-Free Time-Interleaved Fourth-Order Noise-Shaping SAR ADC, IEEE J. Solid-State Circuits, Vol. 54, No. 12, pp. 3386-3395", null, "6
Salgado G. M., O'Hare D., O'Connell I., Recent advances and trends in noise shaping SAR ADCs, IEEE Trans. Circuits Syst. II, Exp. Briefs, Vol. 68, No. 2, pp. 545-549", null, "7
Zhuang H., Jun 2019, A second-order noise-shaping SAR ADC with passive integrator and tri-level voting, IEEE J. Solid-State Circuits, Vol. 54, No. 6, pp. 1636-1647", null, "8
Hesener M., Feb. 2007, A 14b 40 MS/s Redundant SAR ADC with 480 MHz Clock in 0.13 µm CMOS, in IEEE Int. Solid-State Circuits Conf. (ISSCC) Dig. Tech. Papers, San Francisco, CA, USA, pp. 248-249", null, "9
Hariprasath V., April 2010, Merged capacitor switching based SAR ADC with highest switching energy-efficiency, Electronics Letters, Vol. 46, pp. 620-621", null, "10
Schinkel D., Feb. 2007, A double-tail latch-type voltage sense amplifier with 18ps Setup+Hold time, in Proc. IEEE Int. Solid-State Circuits Conf. Dig. Tech. Papers, pp. 314-315", null, "11
Chen S.-W. M., Brodersen R. W., Dec 2006, A 6-bit 600-MS/s 5.3-mW asynchronous ADC in 0.13-µm CMOS, IEEE J. Solid-State Circuits, Vol. 41, No. 12, pp. 2669-2680", null, "12
Liu C.-C., Apr 2010, A 10-bit 50-MS/s SAR ADC with a monotonic capacitor switching procedure, IEEE J. Solid-State Circuits, Vol. 45, No. 4, pp.
731-740", null, "13\nAbo A. M., Gray P. R., May 1999, A 1.5-V, 10-bit, 14.3-MS/s CMOS pipeline analog-to-digital converter, IEEE J. Solid-State Circuits, Vol. 34, No. 5, pp. 599-606", null, "## Author\n\nKi-Hyun Kim received the B.S. degrees in electrical engineering from Kookmin University, Seoul, Korea, in 2020.\n\nHe is currently working toward the M.S. degree in electrical engineering at Konkuk University.\n\nHis research interests include data converter.\n\nJi-Hyun Baek received the B.S. degree in electrical engineering from Konkuk University, Seoul, Korea, in 2021.\n\nShe is currently working toward the M.S. degree in electrical engineering at Konkuk University.\n\nHer research during the M.S. course has focused on high speed ADC.\n\nJong-Hyun Kim is currently undergraduate student in electrical engineering from Konkuk University, Seoul, Korea.\n\nHis research has focused on analog to digital converter and digital calibration.\n\nHyung-Il Chae received the B.S. degree in electrical engineering from Seoul National University, Seoul, Korea, in 2004, and his M.S. and Ph.D. degrees in electrical engi-neering from the University of Michigan, Ann Arbor, MI in 2009 and 2013, respectively.\n\nFrom 2013 to 2015, he was a senior engineer at Qualcomm Atheros, San Jose, CA." ]
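The figures of merit defined under Table 1 can be reproduced from the reported numbers. This check script is not part of the paper; it assumes the usual definition ENOB = (SNDR - 1.76)/6.02.

```python
import math

def fom_s(sndr_db, bw_hz, power_w):
    """Schreier FoM in dB: SNDR + 10*log10(Bandwidth / Power)."""
    return sndr_db + 10 * math.log10(bw_hz / power_w)

def fom_w(sndr_db, bw_hz, power_w):
    """Walden FoM in J/conv.-step: Power / (2^ENOB * 2 * Bandwidth)."""
    enob = (sndr_db - 1.76) / 6.02
    return power_w / (2**enob * 2 * bw_hz)

# This work: SNDR = 69.2 dB, Bandwidth = 100 MHz, Power = 8.6 mW.
print(round(fom_s(69.2, 100e6, 8.6e-3), 1))          # -> 169.9 dB
print(round(fom_w(69.2, 100e6, 8.6e-3) * 1e15, 1))   # -> 18.2 fJ/conv.-step
```

Both values match the last column of Table 1.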
[ null, "http://jsts.org/images/ieie/QR_jsts.png", null, "http://jsts.org/Resources/ieie/JSTS.2021.21.5.297/fig1.png", null, "http://jsts.org/Resources/ieie/JSTS.2021.21.5.297/fig2.png", null, "http://jsts.org/Resources/ieie/JSTS.2021.21.5.297/fig3.png", null, "http://jsts.org/Resources/ieie/JSTS.2021.21.5.297/fig4.png", null, "http://jsts.org/Resources/ieie/JSTS.2021.21.5.297/fig8.png", null, "http://jsts.org/Resources/ieie/JSTS.2021.21.5.297/fig9.png", null, "http://jsts.org/Resources/images/icon_crossref.png", null, "http://jsts.org/Resources/images/icon_crossref.png", null, "http://jsts.org/Resources/images/icon_crossref.png", null, "http://jsts.org/Resources/images/icon_crossref.png", null, "http://jsts.org/Resources/images/icon_crossref.png", null, "http://jsts.org/Resources/images/icon_crossref.png", null, "http://jsts.org/Resources/images/icon_crossref.png", null, "http://jsts.org/Resources/images/icon_crossref.png", null, "http://jsts.org/Resources/images/icon_google.png", null, "http://jsts.org/Resources/images/icon_crossref.png", null, "http://jsts.org/Resources/images/icon_crossref.png", null, "http://jsts.org/Resources/images/icon_crossref.png", null, "http://jsts.org/Resources/images/icon_crossref.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85480976,"math_prob":0.8855369,"size":17202,"snap":"2023-40-2023-50","text_gpt3_token_len":4501,"char_repetition_ratio":0.12908478,"word_repetition_ratio":0.05964654,"special_character_ratio":0.2514824,"punctuation_ratio":0.15256189,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9624171,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,null,null,3,null,3,null,3,null,3,null,3,null,3,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T17:17:39Z\",\"WARC-Record-ID\":\"<urn:uuid:17cd5b2f-a175-4d63-99d4-af21325d7d7c>\",\"Content-Length\":\"178290\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df5b2ff9-5226-434a-a353-05a6810c0d91>\",\"WARC-Concurrent-To\":\"<urn:uuid:92a1baa5-fe6b-4dd2-803e-153dc442e19b>\",\"WARC-IP-Address\":\"58.229.176.36\",\"WARC-Target-URI\":\"http://jsts.org/jsts/XmlViewer/f408812\",\"WARC-Payload-Digest\":\"sha1:FQ64IHQWK4UOA36FJTQIO3G472ICQGK7\",\"WARC-Block-Digest\":\"sha1:JR6GIRYTPK5T23LIOV6CZ3YT2GMWP6XU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510319.87_warc_CC-MAIN-20230927171156-20230927201156-00342.warc.gz\"}"}
https://cyberleninka.org/article/n/564415
# Solution of Inverse Problem with the One Primary and One Secondary Particle Model (OPOSPM) Coupled with Computational Fluid Dynamics (CFD)

Available online at www.sciencedirect.com (www.elsevier.com/locate/procedia)
Procedia Engineering 42 (2012) 1848-1878
20th International Congress of Chemical and Process Engineering CHISA 2012, 25-29 August 2012, Prague, Czech Republic

H. B. Jildeh (a, b), M. W. Hlawitschka (a, b), M. Attarakih (a, c), H. J. Bart (a, b, *)

(a) Chair of Separation Science and Technology, TU Kaiserslautern, P.O. Box 3049, 67653 Kaiserslautern, Germany
(b) Centre of Mathematical and Computational Modelling, TU Kaiserslautern, P.O. Box 3049, 67653 Kaiserslautern, Germany
(c) Department of Chemical Engineering, The University of Jordan, 11942 Amman, Jordan

* Corresponding author. Tel.: +49-631-205-2414; fax: +49-631-205-2119. E-mail address: [email protected].

Abstract

The reduced one-group population balance (PBE) model, the One Primary and One Secondary Particle Model (OPOSPM), is developed for a liquid extraction column. It is used because of its simplicity and its ability to reproduce most of the information contained in the PBE. It is used to estimate the optimum droplet breakage and coalescence parameters from steady-state experimental data. The data is obtained from a pilot-plant liquid extraction column of 80 mm diameter and 4.4 m height for the toluene-acetone-water chemical test system as recommended by the European Federation of Chemical Engineering (EFCE). In this contribution the Coulaloglou and Tavlarides (1977) breakage and coalescence model is studied to obtain the parameters by solving a population balance inverse problem. The estimated droplet parameters are used as input parameters for the CFD simulation and in the simulation program PPBLAB. The optimized values were found to predict accurately the mean dispersed-phase holdup, the mean droplet diameter and the concentration profiles of the continuous and dispersed phases along the extraction column height.

Keywords: Population balance; inverse problem; hydrodynamics; breakage; coalescence; extraction column; RDC; modeling

Nomenclature

A        specific surface area
Ac       column cross-sectional area
CD       drag coefficient
c        solute concentration
D        diffusion coefficient
d        droplet diameter
d30      volumetric mean diameter
F        forces
f(x, t)  density function
Gk       turbulence kinetic energy
Gn       constants
g        gravitational constant
h        collision rate frequency
K        mass transfer coefficient
Kpq      interphase momentum exchange coefficient
Koy      overall mass transfer coefficient
k        kinetic energy
m        swarm exponent
m'       distribution coefficient
mpq      mass transferred between the two phases
N        number concentration
P        pressure
Pe       Péclet number
Q        volumetric flow rate
R        interaction term
R(x, t)  velocities for the internal coordinates
Re       Reynolds number
S        source term
Sc       Schmidt number
Sh       Sherwood number
t        time
u        droplet velocity
Vt       terminal velocity
X(x, t)  velocities for the external coordinates
y+       wall distance unit
z        space coordinate

Greek letters

α        phase fraction (volume concentration)
Γ        breakage frequency
ε        energy dissipation
μ        dynamic viscosity
λ        coalescence efficiency
ρ        density
σ        surface tension
σk       Prandtl number for kinetic energy
σε       Prandtl number for energy dissipation
τ        stress tensor
υ        droplet volume
φ        holdup fraction
ψ        internal and external coordinates vector [d c_y z t]
ω        coalescence rate
ϑ        mean number of daughter droplets

Subscripts

in       inlet (feed)
n        number of parameters: 1...4, or constants 1ε and 2ε
p        phase 1
q        phase 2
t        turbulent
x        continuous phase
y        dispersed phase

Superscripts

*        equilibrium
'        turbulence
in       component in inlet

1. Introduction

In the past decade the demand for simulation of chemical process equipment has increased, to save time and money by exploring operational conditions that are not easily achievable in laboratory-scale devices. One of the industrially important processes is liquid-liquid extraction, which is widely used when distillation is economically infeasible or could damage the chemical components to be separated; it is commonly used in the mining, petroleum, chemical and biochemical industries. Simulation of liquid extraction equipment is therefore vital for equipment scale-up, model predictive control and optimization. However, due to their complex hydrodynamics it is difficult to predict their performance using simple mathematical models. Hence, a droplet population balance model (DPBM) should be used, which takes into account droplet transport (rise and backmixing) as well as droplet interactions at the macro-scale (droplet breakage and coalescence) and micro-scale (interphase mass transfer). This DPBM is programmed and developed in order to simulate liquid-liquid extraction columns (LLECs) accurately in a user-friendly environment such as LLECMOD (Liquid-Liquid Extraction Column MODule) [1, 2, 3], which is now available in a MATLAB interface named PPBLAB (Particulate Population Balance LABoratory).

A disadvantage of this approach is its sensitivity to geometrical constraints and the necessity to use correlations valid only for a certain geometry.
For example, the dispersion coefficients and Weber numbers can all be derived directly, independent of geometric constraints, when using Computational Fluid Dynamics (CFD). First simulations of a coupled CFD-DPBM model of a full pilot-plant Rotating Disk Contactor (RDC) were done by Drumm et al. (2010) using a one-group model, called the One Primary and One Secondary Particle Model (OPOSPM), whereas the breakage and coalescence parameters were adjusted by hand to the test system. For mass transfer simulations, a correct determination of the droplet size and interfacial area is essential to predict the correct concentration profiles and their influence on the hydrodynamics. The droplet size changes after the droplets enter the extraction column, due to breakage and coalescence. Because of the high computational time of a CFD-DPBM mass transfer simulation, determining the droplet interaction parameters by trial and error is not suitable or even appropriate. In order to obtain the correct parameters, which must be known at the beginning of a simulation, the solution of a population balance inverse problem is necessary, as will be discussed below.

As a test case, a pilot-plant RDC extraction column of 80 mm diameter and 4.4 m height, which was experimentally investigated by Garthe (2006), was chosen. Hence, the required correlations, e.g. for the droplet rise velocity, are available and can be used directly to determine the correct DPBM parameters for the measured holdup profile via the inverse problem, without further experiments and simulations. These parameters are then used for a coupled CFD-DPBM mass transfer simulation using the commercial CFD code FLUENT. The resulting droplet size, holdup and concentrations along the column height are compared to the experimental data and to simulations with PPBLAB.

2. Population Balance Model

The general population balance equation (PBE) and its derivation based on the Reynolds transport theorem are given in Ramkrishna (2000).
It is based on the deforming particle space continuum, which assumes that particles are embedded in this continuum at every point, such that the distribution of particles is described by a continuous density function f(x, t), and is expressed by the following equation:

$$\frac{\partial f}{\partial t} + \nabla_x \cdot \left(\dot{X} f\right) + \nabla_r \cdot \left(\dot{R} f\right) = S \tag{1}$$

where R(x, t) and X(x, t) are the velocities along the internal and external coordinates, respectively. Thus R(x, t) f(x, t) is the particle flux through internal coordinate space (concentration, droplet diameter, color, etc.) and X(x, t) f(x, t) is the particle flux through physical space. S is an integral source term, which depends on the specific processes by which particles appear in and disappear from the system (particle breakage, aggregation, growth and nucleation) and is given in detail in Ramkrishna (2000). This model has been used widely for modeling and simulation of different chemical processes such as crystallization, precipitation (protein precipitation), gas-liquid (bioreactors, evaporation) and gas-solid (fluidized bed reactors) processing, and polymerization.

2.1. Mathematical model

This general model has been adapted for LLECs to couple hydrodynamics and mass transfer in one spatial domain by using the Multivariate Sectional Quadrature Method Of Moments (MSQMOM). The general spatially distributed population balance equation (SDPBE) can be written as [1, 2, 8]:

$$\frac{\partial f}{\partial t} + \frac{\partial (u_y f)}{\partial z} + \frac{\partial (\dot{c}_y f)}{\partial c_y} = \frac{\partial}{\partial z}\left(D_y \frac{\partial f}{\partial z}\right) + \frac{Q_y^{in}}{A_c}\, f_y^{in}(d, c_y; t)\,\delta(z - z_y) + \Re\{f\} \tag{2}$$

On the left-hand side of the equation are the transient term and the convection terms, where the velocity along the concentration coordinate c_y is ċ_y; the axial dispersion (diffusion) term and the remaining source terms appear on the right-hand side of the equation.
The first source term describes droplet axial dispersion, characterized by the dispersion coefficient D_y, which depends on the energy dissipation, holdup and droplet rise velocity.

The second one expresses the rate at which droplets enter the LLEC with a volumetric flow rate Q_y^in, perpendicular to the column cross-sectional area A_c, at a location z_y with an inlet number density f_y^in. The third one, ℜ{f}, represents the net number of droplets produced by breakage and coalescence per unit volume and

unit time; it comprises the four rates of droplet birth (B) and death (D) due to breakage (b) and coalescence (c) in a turbulent continuous phase:

$$\Re\{f\} = B^{b}(d, c_y; z) - D^{b}(d, c_y; z) + B^{c}(d, c_y; z) - D^{c}(d, c_y; z) \tag{3}$$

In this equation the components of the vector ψ = [d c_y z t] are the droplet internal bivariate coordinates, namely the droplet diameter (d) and the solute concentration (c_y), and the external coordinates, namely the location (z) and the time (t). The four rates are given by a set of equations presented in Attarakih et al. (2008).

2.2. Numerical methods

In technical geometries the population balance model (PBM) has no general analytical solution; therefore the only choice in most cases is a suitable numerical technique to solve the PBE. Several numerical approaches have been proposed to solve the PBE, which are classified into the following categories: classes methods (CM), Monte Carlo methods (MCM) and methods of moments (MOM).

The classes method is also known as the direct discretization method (DDM). In this method the internal coordinate is discretized in the solution domain using a traditional discretization method such as the finite difference, finite element or finite volume method. This method is straightforward; it gives the full particle size distribution with good accuracy.
In some cases, to achieve good accuracy, a large number of classes has to be used, which requires a large number of equations to be solved. This method is used in commercial CFD software (CFX, FLUENT); however, it needs a long computational time, and it preserves only two integral properties of the distribution, as Kumar and Ramkrishna proposed in the fixed pivot discretization method and the moving pivot technique [9, 10].

On the other hand, the Monte Carlo method is a stochastic method that involves the construction of an artificial system to approximate the actual one, based on the physical characteristics of the considered system. There are two classifications of this method. First, it can be divided into time-driven and event-driven algorithms according to the driving pattern of the discrete physical events [11, 12]. Second, it can be classified into constant-volume and constant-number methods according to the state of the particles in the artificial system. To decrease the statistical errors a very large number of particles is required, which demands a high computational time. The numerical results from this method usually contain "noise"; for these reasons it is difficult to couple it with a CFD code.

2.2.1. One Primary and One Secondary Particle Model (OPOSPM)

The One Primary and One Secondary Particle Model is the simplest discrete case of the SQMOM that can approximate the continuous PBE. It provides a promising one-group reduced PBM that reduces the computational time. This model is based on the primary and secondary particle concept, where the primary particles are responsible for the reconstruction of the distribution and the secondary particle describes the particle interactions due to breakage and coalescence.
This model is based on the primary and secondary particle concept, where the primary particles are responsible for reconstruction of the distribution and the secondary particle is to describe the particle interaction due to breakage and coalescence .\n\nd N d (uyN) 1 Q';\n\n— =—-?- \\$ (z —) + (4)\n\ndt dz A„ v ,.„ *\n\nda d (uya) Q;\n\nThis model is able to capture all the essential physical information contained in the PBM and is still tractable from computational point of view. The model conserves the total number (N) and volume ( ) concentrations of the population by solving directly two transport equations for N and . The source term S represents the net number of the droplets and is expressed by the following equation:\n\nS = tf (rf30) - 1) r (d30) N -- w{d,0, d30)N\n\nThe first term is the rate of droplet formation due to breakage, and is expressed in terms of the breakage frequency r. d is the mean number of daughter droplets that is determined by integrating the daughter droplet distribution function, in the simulation it is assumed equal to 3 (mother droplet breakup to form three daughter droplets). The second term represents the net rate of droplet death due to coalescence, and expressed as the coalescence frequency m. Both breakage and coalescence frequencies are function of droplet size, energy dissipation and system physical properties (density, viscosity andsurface tension).di0 is the mean diameter and is given as the ratio between volume and number concentrations.\n\nd- = ^^\n\nThe OPOSPM model is used to simulate the droplet size distribution based on the model of Coulaloglou and Tavlarides (1977) for breakage and coalescence in the CFD simulation using the optimized parameters obtained from the solving inverse population balance problem.\n\n3. 
Phenomena Affecting Droplet Size\n\nThe droplet population balance model (DPBM) has to be used to predict the performance of liquid extraction columns that takes into account droplet transport and droplet interactions. Detailed description about theses phenomena will be discussed in the next sections.\n\n3.1. Droplet-velocity\n\nIn agitated columns there are two factorsgoverning drop motion: first the drop motion due to buoyancy force and the second is random drop motion due to flow instability . Knowledge of the drop velocityis necessary for the prediction of holdup, the residence time and mass transfer rates of both phases. This is based on determining the relative swarm velocity and the effective phase velocity. The droplet velocity and the axial dispersion are considered the key parameter for calculating the drift term in the PBE.\n\nSemi-empirical correlations are used to determine the drop velocity for various sizes and physico-chemical that are dependent on the chemical test system. For the high interfacial liquid system: toluene-acetone-water (our studied system) Vignes' lawis proposedand is given by the following equation :\n\nOn the other hand the characteristic velocity in stirred columns is controlled and modeled by the droplet size, geometry of the agitators and stators, and the energy input. This general description is used for the optimization algorithm to describe the dispersed phase hydrodynamics, and also has been used in the PPBLAB simulations.\n\n3.2. Dropletinteractions\n\nThe droplet interactions occurat macro-scale (droplet breakage and coalescence) and micro-scale (interphase mass transfer).\n\n3.2.1. Drop breakage\n\nThe deformation and breakup of fluid particles in turbulent dispersions is influenced by the continuous phase hydrodynamics and the interfacial surface tension. Also it is dependent on the droplet size, density, interfacial tension, viscosity of bothphases, holdup, local flow and energy dissipation . 
Generally, the breakage mechanisms of a droplet can be classified into four main categories: turbulent fluctuation and collision, viscous shear stress, shearing-off processes and interfacial instability. Different mechanisms exist in the literature and they are strongly dependent on the geometry, especially in liquid extraction columns with different stirring devices. For example, in a Kühni column the droplet breaks up while passing through the turbine outlet stream, while in RDC extraction columns breakup occurs only when the droplet touches the rotating disc.

Several models exist in the literature for droplet breakage, but here we use the model of Coulaloglou and Tavlarides (1977). This model is based on the turbulent nature of the liquid-liquid dispersion, where the drop oscillates and deforms due to local pressure fluctuations. The breakage frequency Γ(d) is defined by the following equation [19, 21]:

$$\Gamma(d) = \left(\frac{1}{\text{breakage time}}\right) \times \left(\text{fraction of drops breaking}\right)$$

The breakage time is determined from isotropic turbulence theory by assuming that the motion of the daughter droplets is the same as that of turbulent eddies. The fraction of breaking drops is assumed proportional to the fraction of drops whose turbulent kinetic energy exceeds their surface energy. The Coulaloglou and Tavlarides (1977) relationship is based on the fundamental model of mixer hydrodynamics and takes the influence of the holdup into account [19-21]:

$$\Gamma(d) = C_1\,\frac{\varepsilon^{1/3}}{(1 + \varphi)\,d^{2/3}}\,\exp\!\left(-\,C_2\,\frac{\sigma\,(1 + \varphi)^2}{\rho_y\,\varepsilon^{2/3}\,d^{5/3}}\right)$$

The number of daughter droplets produced by a single breakage event is assumed to be three in order to simplify the modeling, i.e. ϑ(d_30) = 3 in Eq. (6).

3.2.2. Drop coalescence

Droplet coalescence is considered more complex than breakage because of the droplet's interaction with the surrounding liquid phase and with other droplets when they are brought together by external flow and body forces.
For coalescence of droplets to occur in a turbulent flow field, the droplets first collide and then remain in contact for a time sufficient for the processes of film drainage, film rupture and coalescence to take place. The droplets may also separate, caused by a turbulent eddy or by adsorption layers that prevent coalescence. The intensity of collision and the contact time between the colliding droplets are the key parameters of this phenomenon.

There are several models in the literature for coalescence of fluid particles, for example Coulaloglou and Tavlarides (1977), the Sovova model (1981), Casamatta and Vogelpohl (1985), Laso (1986), Lane (2005), etc. A literature review of most of these models has been given by Liao and Lucas (2010). The physical models calculate the coalescence frequency as the product of the collision rate (frequency) h and the coalescence efficiency λ of two droplets of diameters d and d' [26, 27].

The model of Coulaloglou and Tavlarides (1977) is one of the most widely used because it is based on the physical quantities of the chemical test system; it is therefore the one we chose for our optimization. The model was developed for stirred vessels, being based on the kinetic theory of gases and film drainage theory. It is given by the following equation:

$$\omega(d, d') = h(d, d')\,\lambda(d, d')$$

In this model the first term is the collision rate of two droplets. It can be described in analogy to the collision frequency between gas molecules, for drops of diameters d and d' in the size intervals Δd and Δd'. In this term the influence of the holdup is considered as a damping effect on the turbulent velocities [19, 20]. The second term, the fraction of collisions between drops of sizes d and d' that result in coalescence, is known as the coalescence efficiency. It accounts for the contact time between two droplets and the coalescence time.
For coalescence to take place, the contact time must exceed the coalescence time after collision [6, 19]. Furthermore, the coalescence efficiency of this model applies to deformable particles with immobile interfaces, and the initial and final film thicknesses are assumed to be constant.

3.2.3. Mass transfer

Mass transfer influences the resulting hydrodynamic behavior and is affected by the mass transfer direction, from the continuous to the dispersed phase or vice versa. The solute concentration in the continuous phase (c_x) is predicted using a component solute balance on the continuous phase:

$$\frac{\partial(\varphi_x c_x)}{\partial t} + \frac{\partial(u_x \varphi_x c_x)}{\partial z} = \frac{\partial}{\partial z}\left(D_x \varphi_x \frac{\partial c_x}{\partial z}\right) + \frac{Q_x^{in}}{A_c}\,c_x^{in}\,\delta(z - z_x) - \iint \dot{c}_y\,\upsilon(d)\,f\;\mathrm{d}c_y\,\mathrm{d}d$$

The volume fractions satisfy the physical constraint φ_x + φ_y = 1. The left-hand side and the first term on the right-hand side have the same interpretations as those given in Eq. (2), however with respect to the continuous phase, whereas the last term represents the total rate of solute transferred from the continuous to the dispersed phase, in which the liquid droplets are treated as point sources.

The overall mass transfer coefficient K_oy, which can be used to predict the rate of change of the solute concentration in a liquid droplet, is expressed in terms of the droplet volume-averaged concentration:

$$\frac{\partial \bar{c}_y(z, t)}{\partial t} = \frac{6\,K_{oy}}{d}\left(c_y^{*} - \bar{c}_y\right)$$

K_oy is a function of the droplet diameter and time, depending on the internal state of the droplet. It is usually expressed using the two-resistance theory in terms of the individual mass transfer coefficients for the continuous and dispersed phases. In agitated columns the energy input increases the mass transfer through a forced circulation of the drops in the compartments and through induced circulations within the drops.
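As an illustration, the Coulaloglou and Tavlarides frequencies described above can be sketched in a few lines. The functional forms below are the standard published ones, with the constants defaulting to the values estimated later in Section 6; all quantities are in SI units:

```python
import math

def breakage_frequency(d, eps, phi, sigma, rho_y, C1=0.247, C2=0.078):
    """Coulaloglou-Tavlarides breakage frequency Gamma(d) [1/s] (sketch).

    d: diameter [m], eps: energy dissipation [W/kg], phi: holdup [-],
    sigma: interfacial tension [N/m], rho_y: dispersed-phase density [kg/m^3].
    """
    pre = C1 * eps ** (1 / 3) / ((1 + phi) * d ** (2 / 3))
    arg = C2 * sigma * (1 + phi) ** 2 / (rho_y * eps ** (2 / 3) * d ** (5 / 3))
    return pre * math.exp(-arg)

def coalescence_frequency(d1, d2, eps, phi, mu_x, rho_x, sigma,
                          C3=0.0351, C4=1.33e11):
    """Coulaloglou-Tavlarides coalescence frequency omega = h * lambda (sketch)."""
    # collision rate h, damped by the holdup
    h = (C3 * eps ** (1 / 3) / (1 + phi)
         * (d1 + d2) ** 2 * (d1 ** (2 / 3) + d2 ** (2 / 3)) ** 0.5)
    # coalescence efficiency lambda (film drainage between deformable drops)
    lam = math.exp(-C4 * mu_x * rho_x * eps / (sigma ** 2 * (1 + phi) ** 3)
                   * (d1 * d2 / (d1 + d2)) ** 4)
    return h * lam
```

With the physical properties of the toluene-acetone-water system, larger drops break more readily, while a lower interfacial tension suppresses the coalescence efficiency, consistent with the discussion above.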
It is considered complicated due to the increase in the size of the differential algebraic system when dealing with the inverse problem using the population balance models. Depending on experimental data availability and in particular in industrial cases where only few intermediate data along the equipment height are available. In some cases only the inlet and outlet mean properties are available. Therefore, the DPBM for coupled hydrodynamics and mass transfer has to be solved. Accordingly, not only the size of the system is considerably increased, but also the computational time due to the slow mass transfer process. Also the resulted equations should be solved simultaneously; therefore this calls for efficient mathematical modeling and a proper algorithm design. Besides to this, solving the population balance inverse problem and getting a converging solution in a short time is found to be highly sensitive to errors in the experimental data.\n\nConsequently, the recent developed model by Attarakih et al. : the One Primary and One Secondary Particle Model (OPOSPM) is found to provide a promising one group reduced PBM that can be used to solve the inverse population balance problem.The solution of the inverse problem is formulated as a nonlinear optimization problem, which is constrained by simple bounds on both the breakage and coalescence parameters in order to obtain the correct scale for the model of Coulaloglou and Tavlarides in an RDC and Kuhni extraction column [28, 29]. The algorithm has been programmed using MATLAB, for optimization of coalescence parameters the tolerance was set to 1.00E-6. The objective of this work is not to fit the experimental data but also to check whether the optimized parameters can predict the steady state and dynamic behavior of the liquid extraction column under other operating conditions. Obtaining these parameters is used as a basis for the CFD. 
The OPOSPM is used effectively to simulate the full column hydrodynamics and mass transfer.

5. Computational Fluid Dynamics

In comparison to the optimization tool and PPBLAB, CFD is able to calculate the flow field without geometry-based correlations, e.g. for the droplet rise velocity. The commercial CFD code FLUENT (v12.1) was coupled with the previously described OPOSPM model to account for breakage and coalescence. As in PPBLAB, the breakage and coalescence of the droplets are described by the model of Coulaloglou and Tavlarides, using the parameters estimated from the inverse problem. In addition, mass transfer is accounted for in order to simulate the concentration profile along the column height of an RDC pilot-plant extraction column.

5.1. Hydrodynamics

The two phases are treated as interpenetrating continua using the Euler-Euler framework. Hence, the phases are described by their phase fractions in each cell, whereas the sum of the volume fractions of all phases q in each cell has to be 1:

$$\sum_{q} \alpha_q = 1$$

The transport of the phases is described by the continuity equation for each phase:

$$\frac{\partial}{\partial t}\left(\alpha_q \rho_q\right) + \nabla \cdot \left(\alpha_q \rho_q \mathbf{u}_q\right) = \sum_{p} \dot{m}_{pq}$$

Hereby, the density of phase q is described by ρ_q and its velocity by u_q. The right-hand side of the equation describes the mass transfer from phase p to phase q as a source term. The momentum conservation equation for each phase is given by:

$$\frac{\partial}{\partial t}\left(\alpha_q \rho_q \mathbf{u}_q\right) + \nabla \cdot \left(\alpha_q \rho_q \mathbf{u}_q \mathbf{u}_q\right) = -\,\alpha_q \nabla P + \nabla \cdot \bar{\bar{\tau}}_q + \alpha_q \rho_q \mathbf{g} + \sum_{p=1}^{n} \mathbf{R}_{pq}$$

where the pressure is described by P and the stress tensor by τ̄_q. The interaction term is defined as:

$$\sum_{p=1}^{n} \mathbf{R}_{pq} = \sum_{p=1}^{n} K_{pq}\left(\mathbf{u}_p - \mathbf{u}_q\right)$$

with K_pq as the interphase momentum exchange coefficient:

$$K_{pq} = \frac{3}{4}\,\frac{\alpha_p\,\rho_q\,C_D\,\left|\mathbf{u}_p - \mathbf{u}_q\right|}{d}$$

The drag coefficient C_D is taken from Schiller and Naumann:

$$C_D = \begin{cases} \dfrac{24\left(1 + 0.15\,Re^{0.687}\right)}{Re}, & Re \le 1000 \\[4pt] 0.44, & Re > 1000 \end{cases}$$

The density of each phase is obtained from the concentration of the transferred component and the densities of the components.

5.1.1. Mass transfer

The concentration of the transferred component has to be tracked in the whole domain.
This is done using the species transport equations in FLUENT. The mass transfer of the transferred component is accounted for by a source term which, in this case, is based on the two-film theory:

$$\dot{m} = A_{pq}\,\rho_q\,K_{oy}\left(m'\,c_x - c_y\right)$$

where m' is the distribution coefficient. The interfacial area A_pq is calculated from the phase fraction and the diameter of the droplets:

$$A_{pq} = \frac{6\,\alpha}{d}$$

The overall mass transfer coefficient describes the diffusion rate across the surface of the droplets and is based on the individual mass transfer coefficients of the continuous phase (K_x) and the dispersed phase (K_y):

$$\frac{1}{K_{oy}\,\rho_y} = \frac{1}{K_y\,\rho_y} + \frac{m'}{K_x\,\rho_x}$$

For the individual mass transfer coefficients, the model of Kumar and Hartland (1999) was used in both the CFD and PPBLAB simulations. The continuous-phase coefficient is obtained from

$$K_x = \frac{Sh_x\,D_x}{d}$$

where the continuous-phase Sherwood number Sh_x interpolates between the value for a rigid droplet, Sh_{x,rigid}, and the value for a fully circulating droplet, Sh_{x,∞}, which grows from 50 with the square root of the Péclet number Pe_x = d·u_y/D_x; here Re = d·u_y/ν_x and Sc_x = μ_x/(D_x ρ_x). The dispersed-phase Sherwood number is

$$Sh_y = 17.7 + \frac{3.19 \times 10^{-3}\left(Re\,Sc_y^{1/3}\right)^{1.7}}{1 + 1.43 \times 10^{-2}\left(Re\,Sc_y^{1/3}\right)^{0.7}}$$

where Re and Sc_y are defined by Re = d·u_y·ρ_y/μ_y and Sc_y = μ_y/(ρ_y D_y), and K_y = Sh_y·D_y/d.

5.1.2. Energy dissipation

The energy dissipation is a key parameter for the breakage and coalescence frequency and probability calculations. In FLUENT, the energy dissipation is calculated for each numerical discretization cell in the whole domain. For turbulence modeling the k-ε model is used, whereas the kinetic energy and its rate of dissipation are obtained from the following equations:

$$\frac{\partial}{\partial t}\left(\rho k\right) + \frac{\partial}{\partial x_i}\left(\rho k u_i\right) = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + G_k - \rho\varepsilon$$

$$\frac{\partial}{\partial t}\left(\rho\varepsilon\right) + \frac{\partial}{\partial x_i}\left(\rho\varepsilon u_i\right) = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right] + G_{1\varepsilon}\,\frac{\varepsilon}{k}\,G_k - G_{2\varepsilon}\,\rho\,\frac{\varepsilon^2}{k}$$

The generation of turbulence kinetic energy due to the mean velocity gradients is described by G_k:

$$G_k = -\rho\,\overline{u_i' u_j'}\,\frac{\partial u_j}{\partial x_i}$$

G_{1ε} and G_{2ε} are constants, defined as 1.44 and 1.92, respectively.
The turbulent Prandtl numbers for the kinetic energy and the rate of dissipation are given by σ_k (= 1.0) and σ_ε (= 1.3) and define the ratio between momentum eddy viscosity and diffusivity.

The viscous sublayer is accounted for by the enhanced wall treatment model, whereby the sublayer is fully resolved by refining the mesh such that y+, the wall distance unit, is approximately 1.

6. Results and Discussion

The EFCE test system toluene-acetone-water (t-a-w) is used for the investigation, with acetone as the transition component between the aqueous and organic phases. The weight fraction of acetone in the aqueous phase is equal to 0.05. The physical properties of the test system are given in Table 1. The internal and external geometry dimensions of the pilot-plant RDC extraction column are shown in Table 2. The operating conditions used are a constant rotor speed of 400 rpm and volumetric flow rates of 40 l/h and 48 l/h for the continuous and dispersed phases, respectively.

Table 1. Physical properties of the chemical test system

| Physical property | Value | Physical property | Value |
|---|---|---|---|
| Density ρ_x | 992.0 kg/m³ | Viscosity μ_x | 1.134E-03 Pa·s |
| Density ρ_y | 863.3 kg/m³ | Viscosity μ_y | 0.566E-03 Pa·s |
| Interfacial tension σ | 24.41E-03 N/m | Distribution coefficient m' | 0.843 kg/kg |

Table 2. Geometry and specifications of the RDC extraction column

| Internal geometry | | External geometry | |
|---|---|---|---|
| Compartment height | 50 mm | Column height | 4.40 m |
| Internal stator diameter | 50 mm | Column diameter | 0.08 m |
| Rotator diameter | 45 mm | Inlet of dispersed phase | 0.85 m |
| Rotating shaft diameter | 10 mm | Inlet of continuous phase | 3.80 m |
| Relative free cross-sectional stator area | 0.40 m²/m² | | |

The values of the droplet breakage and coalescence parameters of Coulaloglou and Tavlarides (C₁-C₄) are estimated by solving the inverse problem for (t-a-w). Fig.
(1) shows the optimization results for solving the inverse population balance problem by using the experimental data for both the holdup and the droplet diameter along the column height as reference values. The estimated parameters were C1 = 0.247, C2 = 0.078, C3 = 0.0351 and C4 = 1.33E11.\n\nFig. 1. Optimization of breakage and coalescence parameters using the experimental data for the holdup and mean droplet diameter (legend: Garthe Exp1, Garthe Exp2, optimization, CFD, PPBLAB). a) Simulated mean holdup along column height; b) Simulated mean droplet diameter along the column height\n\nUsing these optimized parameters, CFD simulations were performed. The RDC extraction column is described by a numerical mesh, whereas the inflow and outflow zones were reduced in size to reduce the numerical effort. In both settling zones, breakage and coalescence were neglected. The dispersed phase (organic phase) inlet is at the bottom, whereas the continuous phase enters the column at the top. Therefore, a velocity inlet is defined for both phases at the bottom of the column and the pressure outlet condition is used at the top.\n\nThe resulting CFD simulations for phase fraction, continuous phase velocity and droplet size at the bottom and the top of the column are shown in Fig. (2) and (3) respectively. The droplets accumulate underneath the first stator, which is shown in Fig. (2a). Two torus vortices are formed by the stirrer (Fig. 2b), whereas Drumm (2009) could only observe one vortex filling a compartment due to its lower compartment height. The upper vortex underlies a slight upward deviation due to the entering droplets. At the stirrer tip, the droplet size decreases due to the energy input (Fig. 2c). Underneath the stator, the droplets accumulate and coalesce, whereas in the lower part of the next compartment, higher droplet sizes can be observed.
At the column top, the droplet size increased to approximately 3 mm (Fig. 3c), which leads to slight changes in the phase fraction distribution (Fig. 3a). The two torus vortices are formed, whereas in this case, the center of the upper vortex is shifted due to the inflow of the continuous phase (Fig. 3b).\n\nThe droplet size was overpredicted by the CFD simulation, whereas the phase fraction shows good agreement with the measurements. This can be explained by the fact that the optimization was based more on the phase fraction as an integral value and not equally on the droplet size itself; it is also affected by the measuring position due to the measuring technique.\n\nFig. 2. CFD simulations at the bottom of the extraction column for: (a) phase fraction (-), (b) continuous phase velocity (m/s) and (c) droplet size (mm)\n\nFig. 3. CFD simulations at the top of the extraction column for: (a) phase fraction (-), (b) continuous phase velocity (m/s) and (c) droplet size (mm)\n\nThe resulting holdup and droplet size of the optimization, the CFD simulation and PPBLAB with mass transfer are compared to the experimental results shown in Fig. (1). The PPBLAB simulation of the droplet size fits the one found by the optimization. The droplet size in the CFD simulation increases faster than the one found with the optimization tool. All simulations overpredict the experimental droplet sizes at the upper measurement positions. An optimized droplet size instead leads to a correct prediction of the experimental holdup profile using the optimization algorithm and PPBLAB. For the CFD result, only an average value was taken for the active height, which underpredicts the holdup profile.\n\nThe concentration of acetone in the dispersed phase and continuous phase is compared to the experimental measurements in Fig. (4).
However, the concentration of the continuous phase fits the experimental data, while the dispersed phase values are underpredicted by the CFD simulation. The reason is that it requires more simulation time to reach steady state. With a three times longer simulation time in PPBLAB, the concentration profiles for both phases were accurately predicted.\n\nFig. 4. Concentration profiles for continuous (Cx) and dispersed (Cy) phase along the column height.\n\n7. Conclusions\n\nA full pilot plant RDC extraction column was simulated based on the measurements of Garthe (2006) using a coupled CFD-DPBM-mass transfer model. As the DPBM model, the one-group population balance model, the One Primary and One Secondary Particle Model (OPOSPM), was used. In this work the OPOSPM is found to be a promising model for designing, simulating, controlling and optimizing an RDC extraction column. The required droplet interaction parameters in the breakage and coalescence model of Coulaloglou and Tavlarides were optimized using the inverse approach based on the experimental measurements of Garthe (2006) for droplet diameter and holdup along the column height. Mass transfer was accounted for by the model of Kumar and Hartland (1999) for both the CFD and PPBLAB simulations. The results of the codes were compared to the experimental data. The droplet size was overpredicted by the CFD code and PPBLAB, whereas the phase fraction shows good agreement with the measurements. The concentration profile of the continuous phase fits the experimental results, whereas the dispersed phase concentration is underpredicted (simulations in CFD require more time to reach a final steady state). The PPBLAB simulation indeed fits the experimental values. Comparing the CFD and PPBLAB results, PPBLAB is able to predict the holdup, droplet size and concentration within minutes. On the other hand, PPBLAB is based on and requires for each column type a set of experimental correlations to determine e.g.
the energy dissipation and back-mixing effects. The CFD in addition gives information about the local distribution of the droplets and local deviations of the hydrodynamics (e.g. the energy dissipation and back-mixing effects) depending on the inflow conditions of the phases into the active column part.\n\nThe optimization of parameters using the inverse tool is a promising technique to determine breakage and coalescence parameters for the specific models. It saves time due to a stable and fast optimization. The estimated parameters can then be used effectively as a basis for extraction column behavior prediction using CFD (FLUENT or OpenFOAM) and PPBLAB simulations.\n\nAcknowledgements\n\nThe authors would like to acknowledge the Fraunhofer Institute for Industrial Mathematics ITWM, the Centre of Mathematical and Computational Modelling (CM)2, the DFG (Deutsche Forschungsgemeinschaft) and the Max Buchner Research Foundation for the financial support.\n\nReferences\n\n Attarakih M, Bart HJ, Lagar G L, Faqir N. LLECMOD: A Windows-based program for hydrodynamics simulation of liquid-liquid extraction columns. Chem Eng Process 2006;45:113-123.\n\n Attarakih M, Bart HJ, Steinmetz T, Dietzen M, Faqir N. LLECMOD: A Bivariate Population Balance Simulation Tool for Liquid-Liquid Extraction Columns. Open Chem Eng J 2008;2:10-34.\n\n Bart HJ, Hlawitschka M, Mickler M, Jaradat M, Didas S, Chen F, Hagen H. Tropfencluster-Analytik, Simulation und Visualisierung [Droplet Cluster-Analysis, Simulation and Visualization]. Chem Ing Tech 2011;83(7):965-978.\n\n Attarakih M, Al-Zyod S, Abu-Khader M, Bart HJ, Jildeh HB. PPBLAB: A new multivariate population balance environment for particulate system modeling and simulation. Procedia Engineering 2012, accepted for publishing in the 20th International Congress of Chemical and Process Engineering, 25-29 August, Prague.\n\n Drumm C, Attarakih M, Hlawitschka M, Bart HJ. One-Group Reduced Population Balance Model for CFD Simulation of a Pilot-Plant Extraction Column.
Ind Eng Chem Res 2010;49(7):3442-3451.\n\n Garthe D. Fluiddynamics and mass transfer of single particles and swarms of particles in extraction columns. München: Dr. Hut; 2006.\n\n Ramkrishna D. Population Balances: Theory and Applications to Particulate Systems in Engineering. San Diego: Academic Press; 2000.\n\n Attarakih M, Bart HJ, Faqir N. Numerical solution of the bivariate population balance equation for the interacting hydrodynamics and mass transfer in liquid-liquid extraction columns. Chem Eng Sci 2006;61(1):113-123.\n\n Kumar S, Ramkrishna D. On the solution of population balance equations by discretization—I. A fixed pivot technique. Chem Eng Sci 1996;51(8):1311-1332.\n\n Kumar S, Ramkrishna D. On the solution of population balance equations by discretization—II. A moving pivot technique. Chem Eng Sci 1996;51(8):1333-1342.\n\n JunWei S, ZhaoLin G, Yun XX. Advances in numerical methods for the solution of population balance equations for disperse phase systems. Sci China Ser B: Chem 2009;52(8):1063-1079.\n\n Zhao H, Maisels A, Matsoukas T, Zheng C. Analysis of four Monte Carlo methods for the solution of population balances in dispersed systems. Powder Technol 2007;173(1):38-50.\n\n Drumm C, Attarakih M, Bart H-J. Coupling of CFD with DPBM for an RDC extractor. Chem Eng Sci 2009;64(4):721-\n\n Attarakih M, Drumm C, Bart HJ. Solution of the population balance equation using the sectional quadrature method of moments (SQMOM). Chem Eng Sci 2009;64(4):742-752.\n\n Attarakih M, Jaradat M, Bart HJ, Kuhnert J, Drumm C, Tiwari S, Sharma VK, Klar A. A multivariate Sectional Quadrature Method of Moments for the Solution of the Population Balance Equation. In: Pierucci S, Buzzi Ferraris G, editors. Computer Aided Chemical Engineering: 20th European Symposium on Computer Aided Process Engineering, Elsevier 2010.\n\n Attarakih M, Jaradat M, Hlawitschka MW, Bart HJ, Kuhnert J. Integral Formulation of the Solution of the Population Balance Equation using the Cumulative QMOM.
In: Pistikopoulos EN, Georgiadis MC, Kokossis AC, editors. Computer Aided Chemical Engineering: 21st European Symposium on Computer Aided Process Engineering, Elsevier 2011;29:81-85.\n\n Attarakih MM, Kuhnert J, Wächtler T, Abu-Khader M, Bart HJ. Solution of the Population Balance Equation using the Normalized QMOM (NQMOM). 8th International Conference on CFD in Oil & Gas, Metallurgical and Process Industries 2011, 21-23 June, Trondheim, Norway.\n\n Attarakih MM, Jaradat M, Drumm C, Bart HJ, Tiwari S, Sharma VK, Kuhnert J, Klar A. Solution of the Population Balance Equation using the One Primary One Secondary Particle Method (OPOSPM). Comp Aid Chem Eng 2009;26:1333-1338.\n\n Coulaloglou C, Tavlarides L. Description of interaction processes in agitated liquid-liquid dispersions. Chem Eng Sci 1977;32(11):1289-1297.\n\n Godfrey JC, Slater MJ. Liquid-liquid extraction equipment. Chichester: John Wiley & Sons; 1994.\n\n Liao Y, Lucas D. A literature review of theoretical models for drop and bubble breakup in turbulent dispersions. Chem Eng Sci 2009;64(15):3389-3406.\n\n Sovova H. Breakage and coalescence of drops in a batch stirred vessel—II. Comparison of model and experiments. Chem Eng Sci 1981;36(9):1567-1573.\n\n Casamatta G, Vogelpohl A. Modelling of Fluid Dynamics and Mass Transfer in Extraction Columns. German Chemical Engineering 1985;8(2):96-103.\n\n Laso M, Steiner L, Hartland S. Dynamic simulation of liquid-liquid agitated dispersions—I. Derivation of a simplified model. Chem Eng Sci 1987;42(10):2429-2436.\n\n Lane G, Schwarz M, Evans G. Numerical Modelling of Gas-Liquid Flow in Stirred Tanks. Chem Eng Sci 2005;60:2203-\n\n Liao Y, Lucas D. A literature review on mechanisms and models for the coalescence process of fluid particles. Chem Eng Sci 2010;65(10):2851-2864.\n\n Tsouris C, Tavlarides LL. Breakage and coalescence models for drops in turbulent dispersions. AIChE J 1994;40(3):395-\n\n Jildeh HB, Attarakih M, Mickler M, Bart H-J.
An Online Inverse Problem for the Simulation of Extraction Columns using Population Balances. Accepted for publishing in the proceedings of the 22nd European Symposium on Computer-Aided Process Engineering 2012, 17-20 June, London.\n\n Accepted for publishing in the proceedings of the 11th International Symposium on Process Systems Engineering 2012, 15-19 July, Singapore.\n\n Fluent Inc. Fluent 6.3 manual 2006. Available online http://hpce.iitm.ac.in/website/Manuals/Fluent_6.3/Fluent.Inc/fluent6.3/help/index.htm (accessed on 26 April 2012).\n\n Kumar A, Hartland S. Correlations for prediction of mass transfer coefficients in single drop systems and liquid-liquid extraction columns. Chem Eng Res Des 1999;77:372-384." ]
[ null, "https://cyberleninka.org/images/tsvg/cc-label.svg", null, "https://cyberleninka.org/images/tsvg/view.svg", null, "https://cyberleninka.org/images/tsvg/download.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8567491,"math_prob":0.91736805,"size":42988,"snap":"2022-27-2022-33","text_gpt3_token_len":10054,"char_repetition_ratio":0.15517402,"word_repetition_ratio":0.04074523,"special_character_ratio":0.21726994,"punctuation_ratio":0.114017,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9719152,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-04T22:17:49Z\",\"WARC-Record-ID\":\"<urn:uuid:1218b815-22d9-4c2b-a557-e5ec27d24008>\",\"Content-Length\":\"78667\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7cff8ff9-0fd2-4aa6-9d24-b244d58c46e1>\",\"WARC-Concurrent-To\":\"<urn:uuid:2b6491cb-9203-4ca9-903e-e102e2309d67>\",\"WARC-IP-Address\":\"159.69.2.174\",\"WARC-Target-URI\":\"https://cyberleninka.org/article/n/564415\",\"WARC-Payload-Digest\":\"sha1:QYIGUTLEMANGMQIYFW2PKVELWBNSVCVP\",\"WARC-Block-Digest\":\"sha1:PSHRRFOLOPAJLJKUFH2RQ4BPB6RSAFTM\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104496688.78_warc_CC-MAIN-20220704202455-20220704232455-00369.warc.gz\"}"}
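The two-film relation and the droplet interfacial area used in the extraction-column row above can be checked numerically. A minimal Python sketch — the densities and distribution coefficient are taken from Table 1, while the film coefficients, holdup, and Sauter diameter (and all function names) are our own illustrative assumptions:

```python
def overall_Koy(Kx, Ky, m_star, rho_x, rho_y):
    """Overall dispersed-phase mass transfer coefficient from resistances
    in series: 1/(Koy*rho_y) = 1/(Ky*rho_y) + m*/(Kx*rho_x)."""
    return 1.0 / (rho_y * (1.0 / (Ky * rho_y) + m_star / (Kx * rho_x)))

def interfacial_area(phi, d32):
    """Interfacial area per unit volume for spherical droplets of
    Sauter diameter d32 (m) at holdup phi."""
    return 6.0 * phi / d32

# Densities (kg/m3) and m* from Table 1; Kx, Ky (m/s), phi, d32 assumed.
Koy = overall_Koy(Kx=1.0e-5, Ky=2.0e-5, m_star=0.843, rho_x=992.0, rho_y=863.3)
a = interfacial_area(phi=0.05, d32=3.0e-3)   # = 100 m^2 per m^3
```

Because the film resistances add, the overall coefficient is always smaller than either individual coefficient, which is a quick sanity check on any implementation.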
https://serc.carleton.edu/sp/library/experiments/examples/154448.html
[ "# Cartel game\n\nThis material was originally created for Starting Point: Teaching Economics\nand is replicated here as part of the SERC Pedagogic Service.\n\n## Summary\n\nIn a Microeconomics course, students will be broken into groups and asked to act as individual firms in an oligopoly market. They will submit how much of a given good or service they are going to produce in order to maximize profits. The students will be put into a situation where their group will be able to increase its profits by lowering the profits of all the other groups. This game will help illustrate to students the instability of cartels. Lastly, this game will reinforce the topic of profit maximization.\n\n## Learning Goals\n\nThis should reinforce the concept of profit maximization and calculating profits. The students will learn about the instability of prices and production in an oligopoly market. Lastly, this lesson will introduce, or can be referenced when discussing, game theory and the prisoner's dilemma.\n\n## Context for Use\n\nThis lesson was created to be used in a Principles of Microeconomics course but, with some small changes, can be used in an Intermediate Microeconomics, Applied Microeconomics, or Managerial Economics course.\n\nThis could be used in a class as large as 35 students. In a large lecture hall it may be difficult for this type of demonstration to work.\n\nThis activity may take from as little as 15 minutes of class time to as much as an hour.\n\nUsually the students would have already learned about profit maximization and how to calculate profits given a model.\n\nThis activity can be used when covering cartels.
Usually cartels are covered when discussing oligopolies.\n\nThis activity was designed for a Principles of Microeconomics course but could rather easily be adapted to fit into a higher-level course like Intermediate or Applied Microeconomics.\n\n## Description and Teaching Materials\n\nMaterials: Markers, Projector, Students\n\nDirections:\n1. Break the class down into groups.\na. Start with a small number of groups, maybe three or four.\n2. Draw a demand curve, marginal revenue curve, marginal cost curve, and an average total cost curve on the Desmos graphing calculator. (If the projector or the Desmos calculator is not working, just draw it on the board.)\na. Example of formulas for curves\ni. Demand = -2Q + 10\nii. Marginal Revenue = -4Q + 10\niii. MC = ATC = 2\n\nNote: In an advanced course students could be given the demand and cost functions and asked to find the average total cost, marginal cost, and marginal revenue functions.\n3. Ask students to submit the amount of output their specific group would produce, and allow the groups to collude.\na. Have each group submit a paper with the amount of output.\nb. Write down the amount of output that each group produced and have students calculate the total profit for all groups and the profit that their group made.\ni. Note: Do not allow each group to see how much output each group submitted. If they know who submitted each amount there is more of an incentive to cooperate.\nii. Note: To speed the process up, the instructor can add the quantities up and show on the Desmos graph the price at that level of production.\n4. Play the game a couple of rounds to see the total profits the groups make and how much each individual firm makes.\n5. Entice students by offering extra credit and/or offer additional extra credit to the group that makes the most profit. This will give students an incentive to \"cheat\".\n6. Have a discussion about what is the \"best\" strategy.\na.
Ask students to assume every other group is producing the amount decided upon that will reach the profit-maximizing quantity. Could their group make more by choosing another amount?\nb. Make the connection to OPEC.\nc. Discuss the instability in the oligopoly market structure.\n\n## Teaching Notes and Tips\n\n1. Give students extra credit that corresponds to the amount of profit the group makes. This will give them more of an incentive to play the game seriously. Give additional extra credit to the group that makes the most profit (this should really incentivize students to \"cheat\").\n2. The instructor could break up the groups to try to show how it is more difficult to collude when there are more groups.\n3. This game can be referenced when discussing the prisoner's dilemma.\n4. In more advanced courses this game can be used to discuss repeated games.\n5. At the end of the activity, regardless of whether any group \"cheated\", the instructor can show an example of all groups producing an amount that adds up to the profit-maximizing quantity. Then they can show an example of a group that cheated producing more than the agreed-upon amount and show what happens to the total profit and the profits of the group that \"cheated\".\n\n## Assessment\n\n1. When students begin to collude out loud, the instructor will know if they understand what the profit-maximizing amount is.\n2. The instructor should be able to guide the discussion to think about what would happen if one of the groups \"cheats\".\n3. The instructor could ask each group to calculate the profits for their group given the total amount produced by all groups.\n4. The instructor can ask the students to graphically illustrate the loss to all groups from one group producing more than the agreed-upon amount and the gain by the one group." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93474376,"math_prob":0.7600045,"size":9097,"snap":"2022-27-2022-33","text_gpt3_token_len":1890,"char_repetition_ratio":0.13779831,"word_repetition_ratio":0.80853814,"special_character_ratio":0.19962625,"punctuation_ratio":0.093479514,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9699673,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T15:52:16Z\",\"WARC-Record-ID\":\"<urn:uuid:441a74a8-a5b4-4959-a2b4-2a69b7330c09>\",\"Content-Length\":\"43963\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c2e0c051-3bcb-486c-b216-dbe00fbfb8ec>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa2e282c-c75e-4c9d-b0a4-43e8107b406c>\",\"WARC-IP-Address\":\"54.157.163.213\",\"WARC-Target-URI\":\"https://serc.carleton.edu/sp/library/experiments/examples/154448.html\",\"WARC-Payload-Digest\":\"sha1:ZEBDOORCXNWX4HV2SHG4F564KZPO54I2\",\"WARC-Block-Digest\":\"sha1:O3BT3Z3FGEVM5V4V5HS36G6EHZLKGPMI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103334753.21_warc_CC-MAIN-20220627134424-20220627164424-00708.warc.gz\"}"}
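The example curves in the activity above (Demand = -2Q + 10, MR = -4Q + 10, MC = ATC = 2) can be checked numerically. A small Python sketch — the function names and the four-group even split are our own illustration:

```python
def price(Q):
    # inverse demand: P = -2Q + 10
    return -2 * Q + 10

def profit(q_own, q_others):
    """Profit of one group producing q_own when the rest produce q_others in total."""
    P = price(q_own + q_others)
    return (P - 2) * q_own        # MC = ATC = 2, so the unit margin is P - 2

# Cartel optimum: set MR = MC  ->  -4Q + 10 = 2  ->  Q* = 2, so P* = 6
Q_star = (10 - 2) / 4
total_profit = (price(Q_star) - 2) * Q_star   # = 8

# Four groups splitting Q* evenly earn 2 each; one "cheater" doubling
# its output to 1.0 earns more, while every honest group earns less.
honest  = profit(0.5, 1.5)   # = 2.0 at the agreed split
cheater = profit(1.0, 1.5)   # = 3.0 after deviating
victim  = profit(0.5, 2.0)   # = 1.5 for the groups that kept the agreement
```

This is exactly the prisoner's-dilemma structure the step-6 discussion aims at: each group gains by deviating, but the cartel's total profit falls.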
https://craftwithcodewiz.com/tag/steepest-descent/
[ "# Background: A neural network in 6 steps\n\nIn Building a Simple Neural Network From Scratch in PyTorch, we described a recipe with 6 functions as follows:\n\n1. `train_model(epochs=30, lr=0.1)`: This function acts as the outer wrapper of our training process. It requires access to the training data, `trainingIn` and `trainingOut`, which should be defined in the environment. `train_model` orchestrates the training process by calling the `execute_epoch` function for a specified number of epochs.\n2. `execute_epoch(coeffs, lr)`: Serving as the inner wrapper, this function carries out one complete training epoch. It takes the current coefficients (weights and biases) and a learning rate as input. Within an epoch, it calculates the loss and updates the coefficients. To estimate the loss, it calls `calc_loss`, which compares the predicted output generated by `calc_preds` with the target output. After this, `execute_epoch` performs a backward pass to compute the gradients of the loss, storing these gradients in the `grad` attribute of each coefficient tensor.\n3. `calc_loss(coeffs, indeps, deps)`: This function calculates the loss using the given coefficients, input predictors `indeps`, and target output `deps`. It relies on `calc_preds` to obtain the predicted output, which is then compared to the target output to compute the loss. The backward pass is subsequently invoked to compute the gradients, which are stored within the `grad` attribute of the coefficient tensors for further optimization.\n4. `calc_preds(coeffs, indeps)`: Responsible for computing the predicted output based on the given coefficients and input predictors `indeps`. This function follows the forward pass logic and applies activation functions where necessary to produce the output.\n5. `update_coeffs(coeffs, lr)`: This function plays a pivotal role in updating the coefficients. It iterates through the coefficient tensors, applying gradient descent with the specified learning rate `lr`.
After each update, it resets the gradients to zero using the `zero_` function, ensuring the gradients are fresh for the next iteration.\n6. `init_coeffs(n_hidden=20)`: The initialization function is responsible for setting up the initial coefficients. It shapes each coefficient tensor based on the number of neurons specified for the sole hidden layer.\n7. `model_accuracy(coeffs)`: An optional function that evaluates the prediction accuracy on the validation set, providing insights into how well the trained model generalizes to unseen data.\n\nIn this blog post, we'll take a deep dive into constructing a powerful deep learning neural network from the ground up using PyTorch. Building upon the foundations of the previous simple neural network, we'll refactor some of these functions for deep learning.\n\n# Deep Learning: Refactor code for multiple hidden layers\n\nInitializing Weights and Biases\n\nTo prepare our neural network for deep learning, we've revamped the weight and bias initialization process. The `init_coeffs` function now allows for specifying the number of neurons in each hidden layer, making it flexible for different network configurations. We generate weight matrices and bias vectors for each layer while ensuring they are equipped to handle the deep learning challenges.\n\n```python\ndef init_coeffs(hiddens=[10, 10]):\n    sizes = [trainingIn.shape[1]] + hiddens + [1]\n    n = len(sizes)\n    weights = [(torch.rand(sizes[i], sizes[i+1]) - 0.3) / sizes[i+1] * 4 for i in range(n-1)]  # Weight initialization\n    biases = [(torch.rand(1) - 0.5) * 0.1 for i in range(n-1)]  # Bias initialization\n    return weights, biases\n```\n\nWe define the architecture's structure using `sizes`, where `hiddens` specifies the number of neurons in each hidden layer. We ensure that weight and bias initialization is suitable for deep networks.\n\nForward Propagation With Multiple Hidden Layers\n\nOur revamped `calc_preds` function accommodates multiple hidden layers in the network.
It iterates through the layers, applying weight matrices and biases at each step and introducing non-linearity using the ReLU activation function in the hidden layers and the sigmoid activation in the output layer. This enables our deep learning network to capture complex patterns in the data.\n\n```python\ndef calc_preds(coeffs, indeps):\n    weights, biases = coeffs\n    res = indeps\n    n = len(weights)\n    for i, wt in enumerate(weights):\n        res = res @ wt + biases[i]\n        if i != n-1:\n            res = F.relu(res)  # Apply ReLU activation in hidden layers\n    return torch.sigmoid(res)  # Sigmoid activation in the output layer\n```\n\nNote that `weights` is now a list of tensors containing layer-wise weights and, correspondingly, `biases` is the list of tensors containing layer-wise biases.\n\nBackward Propagation With Multiple Hidden Layers\n\nLoss calculation and gradient descent remain consistent with the simple neural network implementation. We use the mean absolute error (MAE) for loss as before and tweak the `update_coeffs` function to apply gradient descent to update the weights and biases in each hidden layer.\n\n```python\ndef update_coeffs(coeffs, lr):\n    weights, biases = coeffs\n    for layer in weights + biases:\n        layer.sub_(layer.grad * lr)  # Gradient descent step\n        layer.grad.zero_()  # Reset gradients for the next iteration\n```\n\nPutting It All Together in Wrapper Functions\n\nOur `train_model` function can be used 'as is' to orchestrate the training process using the `execute_epoch` wrapper function to help as before. The `model_accuracy` function also does not change.\n\n# Summary: Conclusion and Takeaways\n\nWith these modifications, we've refactored our simple neural network into a deep learning model that has greater capacity for learning. The beauty of it is we have retained the same set of functions and interfaces that we implemented in a simple neural network, refactoring the code to scale with multiple hidden layers.\n\n1. `train_model(epochs=30, lr=0.1)`: No change!\n2. `execute_epoch(coeffs, lr)`: No change!\n3. `calc_loss(coeffs, indeps, deps)`: No change!\n4.
`calc_preds(coeffs, indeps)`: Tweak to use the set of weights and corresponding set of biases in each hidden layer, iterating over all layers from input to output.\n5. `update_coeffs(coeffs, lr)`: Tweak to iterate over the set of weights and accompanying set of biases in each layer.\n6. `init_coeffs(hiddens=[10, 10])`: Tweak for compatibility with an architecture that can potentially have any number of hidden layers of any size.\n7. `model_accuracy(coeffs)`: No change!\n\nSuch a deep learning model has greater capacity for learning. However, it is hungrier for training data! In subsequent posts, we will examine the breakthroughs that have made it possible to make deep learning models practically feasible and reliable. These include advancements such as:\n\n1. Batch Normalization\n2. Residual Connections\n3. Dropouts\n\nAre you eager to dive deeper into the world of deep learning and further enhance your skills? Consider joining our coaching class in deep learning with FastAI. Our class is designed to provide hands-on experience and in-depth knowledge of cutting-edge deep learning techniques. Whether you're a beginner or an experienced practitioner, we offer tailored guidance to help you master the intricacies of deep learning and empower you to tackle complex projects with confidence. Join us on this exciting journey to unlock the full potential of artificial intelligence and neural networks." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81860524,"math_prob":0.9283394,"size":7261,"snap":"2023-40-2023-50","text_gpt3_token_len":1475,"char_repetition_ratio":0.13779798,"word_repetition_ratio":0.016869728,"special_character_ratio":0.19763118,"punctuation_ratio":0.11340206,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9861867,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-03T00:56:08Z\",\"WARC-Record-ID\":\"<urn:uuid:d1f63dd0-ca68-4638-b204-96fe428f4f57>\",\"Content-Length\":\"85449\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63330be3-8e78-491d-a759-606c7296ab68>\",\"WARC-Concurrent-To\":\"<urn:uuid:e685fbaf-fcfe-44fe-88b7-183466d873dc>\",\"WARC-IP-Address\":\"154.41.233.116\",\"WARC-Target-URI\":\"https://craftwithcodewiz.com/tag/steepest-descent/\",\"WARC-Payload-Digest\":\"sha1:2BAGUMTVIO3BSYBVSNQE4355WFGK5ERG\",\"WARC-Block-Digest\":\"sha1:XLLPS2O7F3V2J6FBVWAH335B7TPFQWSG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100476.94_warc_CC-MAIN-20231202235258-20231203025258-00783.warc.gz\"}"}
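The refactored recipe in the post above assumes PyTorch, but its control flow can be illustrated dependency-free. Below is a minimal plain-Python sketch that mirrors the same function names, substituting finite-difference gradients for autograd and using a tiny OR dataset — everything here is our own illustrative construction, not the post's code:

```python
import math, random

random.seed(7)
X = [[0, 0], [0, 1], [1, 0], [1, 1]]   # tiny training set: logical OR
Y = [0, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def calc_preds(p, x):
    # one hidden layer of two ReLU units, sigmoid output; p holds 9 parameters
    h1 = max(0.0, p[0] * x[0] + p[1] * x[1] + p[2])
    h2 = max(0.0, p[3] * x[0] + p[4] * x[1] + p[5])
    return sigmoid(p[6] * h1 + p[7] * h2 + p[8])

def calc_loss(p):
    # mean absolute error, as in the post
    return sum(abs(calc_preds(p, x) - y) for x, y in zip(X, Y)) / len(X)

def execute_epoch(p, lr, eps=1e-4):
    # finite-difference gradient stands in for loss.backward()
    grads = []
    for i in range(len(p)):
        q = list(p)
        q[i] += eps
        grads.append((calc_loss(q) - calc_loss(p)) / eps)
    return [pi - lr * gi for pi, gi in zip(p, grads)]   # the update_coeffs step

def train_model(epochs=300, lr=0.1):
    p = [random.uniform(-1, 1) for _ in range(9)]
    start = calc_loss(p)
    for _ in range(epochs):
        p = execute_epoch(p, lr)
    return start, calc_loss(p)

start_loss, end_loss = train_model()
```

The point of the sketch is structural: swapping in more hidden layers only changes `calc_preds` and the parameter bookkeeping, exactly as the post's "No change!" list claims.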
http://tornado.sfsu.edu/Geosciences/classes/m430/InclassExercises/InclassExcercise5_Solution.htm
[ "Problem 1\n\nThe pressure tendency at the bottom of an air column that extends from the level where the pressure is 850 mb to the tropopause where the pressure is 200 mb is given as", null, "(1)\n\nwhere the Pressure Tendency Equation has been simplified by the boundary condition w2 = 0 at the tropopause.\n\nEquation (1) states that the change in pressure observed at the bottom of a slab of air of pressure thickness dp is due to the net horizontal divergence into/out of the slab, modified by the amount of mass being brought in through the bottom of the slab. In this case, the top of the slab is the top of the troposphere and the bottom is at the surface, at which level the pressure is 850 mb. See Fig. 1.", null, "Figure 1: Conceptual setup of problem", null, "The finite difference form of (1) states that the pressure tendency at level 1 (where the pressure is 850 mb) is due to the net divergence over the pressure thickness ∆p = (200 mb − 850 mb) = −650 mb.\n\nThe net divergence in this layer is (3.99 − 3.05) × 10⁻⁵ s⁻¹. Substitution of these into the finite difference form of (1) and reworking the units to mb h⁻¹ yields the correct answer of −22.0 mb h⁻¹.\n\nAlthough the answer is probably one order of magnitude too large, it suggests that the system would be deepening rapidly. The error in estimating the magnitude of the change is due to the fact that the net divergence must really be estimated from the contribution made by each layer, say at 25 mb intervals from the surface to the tropopause.", null, "Fig 2: Observed 3 h pressure tendencies, 12 UTC 22 October 2004" ]
[ null, "http://tornado.sfsu.edu/Geosciences/classes/m430/InclassExercises/InclassExcercise5_Solution_files/image002.png", null, "http://tornado.sfsu.edu/Geosciences/classes/m430/InclassExercises/InclassExcercise5_Solution_files/Diagram.jpg", null, "http://tornado.sfsu.edu/Geosciences/classes/m430/InclassExercises/InclassExcercise5_Solution_files/image005.png", null, "http://tornado.sfsu.edu/Geosciences/classes/m430/InclassExercises/InclassExcercise5_Solution_files/image007.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91256136,"math_prob":0.9607298,"size":785,"snap":"2019-13-2019-22","text_gpt3_token_len":197,"char_repetition_ratio":0.15620999,"word_repetition_ratio":0.0,"special_character_ratio":0.2433121,"punctuation_ratio":0.0625,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9862276,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-25T22:28:29Z\",\"WARC-Record-ID\":\"<urn:uuid:7603fcbc-1b49-4927-b5e9-bc0f6afbcb88>\",\"Content-Length\":\"21766\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0e29684b-7b68-484c-8e63-72b07ad1bf3b>\",\"WARC-Concurrent-To\":\"<urn:uuid:95cda8b3-59c1-4498-9bae-db75f5ee276b>\",\"WARC-IP-Address\":\"130.212.21.1\",\"WARC-Target-URI\":\"http://tornado.sfsu.edu/Geosciences/classes/m430/InclassExercises/InclassExcercise5_Solution.htm\",\"WARC-Payload-Digest\":\"sha1:K6KEFAHJRW27F5Z52DRYVZJSKW43YK2P\",\"WARC-Block-Digest\":\"sha1:P4HPERY5BRKXOUHZZNYUPCCJ3GAWALTS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912204461.23_warc_CC-MAIN-20190325214331-20190326000331-00436.warc.gz\"}"}
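The arithmetic in the worked solution above can be verified in a few lines of Python (variable names are ours):

```python
# Net horizontal divergence across the 850-200 mb layer (s^-1)
div_net = (3.99 - 3.05) * 1e-5

# Pressure thickness of the layer, top minus bottom (mb)
delta_p = 200.0 - 850.0          # = -650 mb

# Pressure tendency at the bottom of the column, converted to mb per hour
tendency = div_net * delta_p * 3600.0
print(round(tendency, 1))        # -> -22.0
```

A positive net divergence acting over a negative Δp gives a falling surface pressure, matching the −22.0 mb h⁻¹ result quoted in the solution.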
https://www.grabstudy.com/Electrical/Electrical-Engineerin.php?pn=43
[ "", null, "# Electrical Engineering Objective Questions { Transformers }\n\n295.  If a saw-cut is made in the iron core of a single phase transformer as shown in the givng figure, with secondary terminals open, it will result in\n\n296. If the height to width ratio of the window of coretype transformer ii increased, then\n\n297.  If the secondary winding ofthe ideal transformer of the circuit shown in the given figure has 40 turns, then for maximum power transfer to the 2-ohm resistor, the number of turns required in the primary winding will be Ideal Transformer\n\n298.  A 220/110V, 50 Hz single-phase transformer having a negligible winding resistances operates from a variable voltage, variable frequency supply such that Vi/f (V1 = primary applied voltage, f = source frequency) is constant. This will bring in, in the given range of frequencies.\n\n299.  In a 3-phase power transformer, 5-limbed construction is adopted to\n\n300.  A transformer has a resistance of2% and reactance of 4%. Its regulations at 0.8 power factor lagging a leading respectively are\n\n301.  Consider the following statements : The use of Delta-connected tertiary winding in star-star connected power transformers I. makes available supply for single-phase loads.2. suppresses harmonic voltages. 3. allows flow of earth fault current for operation of protective devices 4. provides low-reactance paths for zero-sequence currents.Of the statements :\n\nPage 43 of 45" ]
[ null, "https://www.grabstudy.com/images/full logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87099504,"math_prob":0.94539094,"size":2653,"snap":"2023-40-2023-50","text_gpt3_token_len":654,"char_repetition_ratio":0.13741034,"word_repetition_ratio":0.027149322,"special_character_ratio":0.23294383,"punctuation_ratio":0.12897196,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9551593,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T11:56:49Z\",\"WARC-Record-ID\":\"<urn:uuid:bab3d943-baf5-4c75-aca5-c2aca5a7544e>\",\"Content-Length\":\"27544\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b6c21595-73d5-4f3b-b717-c40a9c80d98b>\",\"WARC-Concurrent-To\":\"<urn:uuid:b61143e0-3b3e-416c-b816-1c9c9f865e24>\",\"WARC-IP-Address\":\"172.105.62.200\",\"WARC-Target-URI\":\"https://www.grabstudy.com/Electrical/Electrical-Engineerin.php?pn=43\",\"WARC-Payload-Digest\":\"sha1:H36VII6675T7H6NXU4TRVRXPP73XFHZZ\",\"WARC-Block-Digest\":\"sha1:WEJW4QZVTGYJD3ZAIJZF67TMD5IKTFUA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510208.72_warc_CC-MAIN-20230926111439-20230926141439-00741.warc.gz\"}"}
https://lists.boost.org/Archives/boost/2007/04/120577.php
[ "", null, "# Boost :\n\nFrom: Daniel Walker (daniel.j.walker_at_[hidden])\nDate: 2007-04-30 14:08:44\n\nOn 4/29/07, Peter Dimov <pdimov_at_[hidden]> wrote:\n> Daniel Walker wrote:\n>\n> >> !boost::bind(...): Bind\n> >\n> > Task for bind: None. This works as is.\n> >\n> > Task for lambda: operator! needs to be supplied for std::bind but\n> > disabled for boost::bind.\n>\n> I'm planning to propose std::bind relational operators and operator! for the\n> next meeting. They may or may not actually go in, of course. I'm still not\n> sure that I have the design 100% right, though. The basic idea is to add\n>\n> template<class L, class R>\n> typename enable_if<\n> is_bind_expression<L>::value ||\n> is_placeholder<L>::value ||\n> is_bind_expression<R>::value ||\n> is_placeholder<R>::value,\n> ...>::type\n> operator==( L const & l, R const & r )\n> {\n> return bind( __equal_to(), l, r );\n> }\n>\n> where __equal_to()(x,y) returns x == y.\n>\n> The interesting issue is in which namespace we need to add the above. The\n> global namespace works most of the time, is contrary to the spirit of the\n> rest of the standard, and fails when an operator== overload in the current\n> scope hides the global one.\n\nDumping it into a namspece with a type that you're making\nEqualityComparible (thus enabling ADL) is normal. But if operator== no\nlonger means EqualityComparible and instead means delayed evaluation,\nthen I think that's not a good idea.\n\nI like something similar to lambda's approach of putting the operators\nin a separate namespace. The user enables delayed evaluation of\noperators by importing them via a using directive.\n\nSo, let's consider a namespace separated from types that may or may\nnot be EqualityComparible (i.e. separated from both std and\nstd::placeholders). The sole purpose of this namespace is to provide\nan interface for users to enable bind expressions from operators. 
So,\nlet's call it std::bind_expressions, or if it's not too terse, just\nstd::expressions. So the user could do something like.\n\nusing std::bind; // enable delayed functions\nusing std::placeholders; // enable currying\nusing std::expressions; // enable delayed operator expressions\n\nNow, consider a user who wants to make functors equality comparable,\nperhaps because some are multithreaded and need to be treated\ndifferently than others. The user wants to be able to compare user\ndefined functors and std functors (even functors resulting from\nstd::bind) transparently. So, the user defines generic operator==\noverloads in the namespace user::comparison. Now, the user can do ...\n\nuser::user_functor f;\n{ using namespace user::comparison;\nf == f; // user comparison is true\nf == std::tr1::bind(f); // user comparison is false\n}\n\nIf the user instead wanted to delay the comparison using bind (with\nnamespace expressions in tr1 for notational consistence) the code\nabove would change to ...\n\n{ using namespace user::comparison;\nf == f; // user comparison is true\n}\n{ using namespace std::tr1::expressions;\nf == std::tr1::bind(f); // delay ==\n}\n\nIf the user wanted to be able to delay and compare, any delayed\nexpression operator can be imported individually ...\n\n{ using namespace user::comparison;\nf == f; // user comparison is true\n{ using std::tr1::expressions::operator==;\nf == std::tr1::bind(f); // delay ==\n}\nf == std::tr1::bind(f); // user comparison is false\n}\n\nThis would work similarly for placeholders.\n\nNow, if the user had defined the comparison operators in namespace\nuser instead of namespace user::comparison then the statements where\ndelayed expressions were in scope would be ambiguous due to ADL.\nHowever, this gets back to the more general problem of combining types\nwith the same overloaded operators in the same expressions. You can't\noverride ADL by qualifying the scope of an operator like 'x\nstd::tr1::expression::== y'. 
You could say\n'std::tr1::expression::operator==(x, y)', but that gets cumbersome. A\ngeneral mechanism for telling an operator to claim an argument like\nstd::tr1::expression::hold(x) == y could do the trick, but this is\nalso kind of cumbersome.\n\nI'm not familiar with Herb Sutter's ADL proposal, so I don't know if\nthis would help matters. Still, I think putting the operators in a\nseparate namespace is a fair trade. Users can define bind compatible\nrelops or ADL compatible relops for their types but not both without\nrequiring cumbersome syntax like std::tr1::expression::operator==(x,\ny); i.e. if they want the easy syntax they have to choose one or the\nother. I think this is reasonable because bind relop likely has\ncompletely different semantics than any user defined relop found via\nADL.\n\n>\n> This is also an interesting use case for a || concept requirement. I'm not\n> terribly familiar with the concepts proposal though; it might offer an\n> alternative solution that I don't know about.\n\nWhat do you mean by 'a || concept requirement'? I'm not sure that any\nconcept would help in my example above. What might help is namespace\nqualification for scope operators.\n\nBelow, there's a complete example using the code snippets if you want\nto copy, paste and tweak. It needs a tr1 compliant standard library.\nI'm not sure, but I think Boost.TR1 won't work because the operators\nare already defined in namespace boost. 
I compiled with ...\n\ng++ -I/usr/include/c++/4.1/tr1 file.cpp\n\nDaniel\n\n#include <functional>\n\n#include <boost/mpl/int.hpp>\n#include <boost/mpl/logical.hpp>\n#include <boost/type_traits.hpp>\n#include <boost/utility/enable_if.hpp>\n\nnamespace std { namespace tr1 {\n\ntemplate<class F, class A0, class A1>\nclass bind_expression {\nstatic F f;\nstatic A0 a0;\nstatic A1 a1;\npublic:\ntypedef typeof(bind(f, a0, a1)) type;\n};\n\nnamespace expressions {\n\ntemplate<class L, class R>\ntypename boost::enable_if<\nboost::mpl::or_<\nboost::mpl::int_<is_bind_expression<L>::value>\n, boost::mpl::int_<is_bind_expression<R>::value>\n>\n, typename bind_expression< equal_to<L>, L, R >::type\n>::type\noperator==( L const & l, R const & r )\n{\nreturn bind( equal_to<L>(), l, r );\n}\n\n}}} // end std namespaces\n\nnamespace user {\n\nclass user_functor {\nint data;\npublic:\ntypedef int result_type;\nint operator()() { return data; }\nbool operator==(user_functor const& that)\n{\nreturn this->data == that.data;\n}\n};\n\ntemplate<class Functor>\nstruct is_user_functor {\nstatic const bool value = false;\n};\n\ntemplate<>\nstruct is_user_functor<user_functor> {\nstatic const bool value = true;\n};\n\nnamespace comparison {\n\ntemplate<class L, class R>\ntypename boost::enable_if<\nboost::mpl::and_<\nboost::mpl::int_<is_user_functor<L>::value>\n, boost::is_same<L, R>\n>\n, bool\n>::type\noperator==(L l, R r)\n{\nreturn l.operator==(r);\n}\n\ntemplate<class L, class R>\ntypename boost::disable_if<\nboost::mpl::and_<\nboost::mpl::int_<is_user_functor<L>::value>\n, boost::is_same<L, R>\n>\n, bool\n>::type\noperator==(L l, R r)\n{\nreturn false;\n}\n\n}} // end user namespaces\n\nint main()\n{\nuser::user_functor f;\n\n{ using namespace user::comparison;\nf == f; // user comparison is true\nf == std::tr1::bind(f); // user comparison is false\n}\n\n{ using namespace user::comparison;\nf == f; // user comparison is true\n}\n{ using namespace std::tr1::expressions;\nf == 
std::tr1::bind(f); // delay ==\n}\n\n{ using namespace user::comparison;\nf == f; // user comparison is true\n{ using std::tr1::expressions::operator==;\nf == std::tr1::bind(f); // delay ==\n}\nf == std::tr1::bind(f); // user comparison is false\n}\n}" ]
[ null, "https://lists.boost.org/boost/images/boost.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7431774,"math_prob":0.7507036,"size":7473,"snap":"2022-27-2022-33","text_gpt3_token_len":1895,"char_repetition_ratio":0.16374347,"word_repetition_ratio":0.16477768,"special_character_ratio":0.27592668,"punctuation_ratio":0.24756335,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9702904,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-15T19:03:41Z\",\"WARC-Record-ID\":\"<urn:uuid:81cb19a5-9ae4-4758-8000-db676b3b7976>\",\"Content-Length\":\"21944\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f2352eaf-d1b1-4571-9365-c915974ac243>\",\"WARC-Concurrent-To\":\"<urn:uuid:78a74f36-cad6-4176-8705-62ed629c301f>\",\"WARC-IP-Address\":\"146.20.110.251\",\"WARC-Target-URI\":\"https://lists.boost.org/Archives/boost/2007/04/120577.php\",\"WARC-Payload-Digest\":\"sha1:W5ZLSHPL7RXGELZSMRJ2MSZXRA7DBHNH\",\"WARC-Block-Digest\":\"sha1:YDLNYKVARG4YEWFQ233TCBJ4IUN5RFM5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572198.93_warc_CC-MAIN-20220815175725-20220815205725-00380.warc.gz\"}"}
http://neverfear.org/blog/view/156/OS_X_Screen_capture_from_Python_PyObjC
[ "Looking through the unanswered Python questions on StackOverflow, I found one that seemed interesting.. \"Python Get Screen Pixel Value in OS X\" - how to access screen pixel values, without the overhead of calling the `screencapture` command, then loading the resulting image.\n\nAfter a bit of searching, the best supported way of grabbing a screenshot is provided by the CoreGraphics API, part of Quartz, specifically `CGWindowListCreateImage`.\n\nSince CoreGraphics is a C-based API, the code map almost directly to Python function calls. It's also simplified a bit, because PyObjC handles most of the memory-management (when the wrapping Python object goes out of scope, the underlying object is freed)\n\n## Getting the image\n\nAfter finding some sample iOS code with sane arguments (which can also be found via Apple's docs), I ended up with a `CGImage` containing the screenshot:\n\n```>>> import Quartz.CoreGraphics as CG\n>>> image = CG.CGWindowListCreateImage(CG.CGRectInfinite, CG.kCGWindowListOptionOnScreenOnly, CG.kCGNullWindowID, CG.kCGWindowImageDefault)\n>>> print image\n<CGImage 0x106b8eff0>```\n\nHurray. We can get the width/height of the image with help from this SO question:\n\n```>>> width = CG.CGImageGetWidth(image)\n>>> height = CG.CGImageGetHeight(image)```\n\n## Extracting pixel values\n\nThen it was a case of working out how to extract the pixel, which took far longer than all of the above. The simplest way I found of doing this is:\n\n1. Use `CGImageGetDataProvider` to get an intermediate representation of the data\n2. Pass the DataProvider to `CGDataProviderCopyData`. In Python this returns a string, which is really a byte-array containing 8-bit unsigned chars, suitable for unpacking with the handy `struct` module\n3. 
Calculate the correct offset for a given (x,y) coordinate as described here\n\nLike so:\n\n```>>> prov = CG.CGImageGetDataProvider(image)\n>>> data = CG.CGDataProviderCopyData(prov)\n>>> print prov\n<CGDataProvider 0x7fc19b1022f0>\n>>> print type(data)\n<objective-c class __NSCFData at 0x7fff78073cf8>```\n\n..and calculate the offset\n\n```>>> x, y = 100, 200 # pixel coordinate to get value for\n>>> offset = 4 * ((width*int(round(y))) + int(round(x)))\n>>> print offset\n1344400```\n\nFinally, we can unpack the pixels at that offset with `struct.unpack_from` - `B` is an unsigned char:\n\n```>>> b, g, r, a = struct.unpack_from(\"BBBB\", data, offset=offset)\n>>> print (r, g, b, a)\n(23, 23, 23, 255)```\n\nNote that the values are stored as BGRA (not RGBA).\n\n## Verification, and code\n\nTo verify this wasn't generating nonsense values, I used the nice and simple pngcanvas to write the screenshot to a PNG file (pngcanvas is a useful module because it's pure-Python, and a single self-contained `.py` file - much lighter weight than something like the PIL, good for when you just want to write pixels to an image-file)\n\nThe performance was definitely better than the `screencapture` solution. The `screencapture` command took about 80ms to write a TIFF file, then there would be additional time to open and parse the TIFF file in Python. 
The PyObjC code takes about 70ms to take the screenshot and have the values accessible to Python.\n\nFinally, the result - best to view the code on my StackOverflow answer (as there might be other better answers, or edits to the code)\n\nI'll include the code here too, for completeness sake:\n\n```import time\nimport struct\n\nimport Quartz.CoreGraphics as CG\n\nclass ScreenPixel(object):\n\"\"\"Captures the screen using CoreGraphics, and provides access to\nthe pixel values.\n\"\"\"\n\ndef capture(self, region = None):\n\"\"\"region should be a CGRect, something like:\n\n>>> import Quartz.CoreGraphics as CG\n>>> region = CG.CGRectMake(0, 0, 100, 100)\n>>> sp = ScreenPixel()\n>>> sp.capture(region=region)\n\nThe default region is CG.CGRectInfinite (captures the full screen)\n\"\"\"\n\nif region is None:\nregion = CG.CGRectInfinite\nelse:\n# TODO: Odd widths cause the image to warp. This is likely\n# caused by offset calculation in ScreenPixel.pixel, and\n# could be modified to allow odd-widths\nif region.size.width % 2 > 0:\nemsg = \"Capture region width should be even (was %s)\" % (\nregion.size.width)\nraise ValueError(emsg)\n\n# Create screenshot as CGImage\nimage = CG.CGWindowListCreateImage(\nregion,\nCG.kCGWindowListOptionOnScreenOnly,\nCG.kCGNullWindowID,\nCG.kCGWindowImageDefault)\n\n# Intermediate step, get pixel data as CGDataProvider\nprov = CG.CGImageGetDataProvider(image)\n\n# Copy data out of CGDataProvider, becomes string of bytes\nself._data = CG.CGDataProviderCopyData(prov)\n\n# Get width/height of image\nself.width = CG.CGImageGetWidth(image)\nself.height = CG.CGImageGetHeight(image)\n\ndef pixel(self, x, y):\n\"\"\"Get pixel value at given (x,y) screen coordinates\n\nMust call capture first.\n\"\"\"\n\n# Pixel data is unsigned char (8bit unsigned integer),\n# and there are four (blue,green,red,alpha)\ndata_format = \"BBBB\"\n\n# Calculate offset, based on\n# http://www.markj.net/iphone-uiimage-pixel-color/\noffset = 4 * ((self.width*int(round(y))) + int(round(x)))\n\n# Unpack data from string into Python'y 
integers\nb, g, r, a = struct.unpack_from(data_format, self._data, offset=offset)\n\n# Return BGRA as RGBA\nreturn (r, g, b, a)\n\nif __name__ == '__main__':\n# Timer helper-function\nimport contextlib\n\[email protected]\ndef timer(msg):\nstart = time.time()\nyield\nend = time.time()\nprint \"%s: %.02fms\" % (msg, (end-start)*1000)\n\n# Example usage\nsp = ScreenPixel()\n\nwith timer(\"Capture\"):\n# Take screenshot (takes about 70ms for me)\nsp.capture()\n\nwith timer(\"Query\"):\n# Get pixel value (takes about 0.01ms)\nprint sp.width, sp.height\nprint sp.pixel(0, 0)\n\n# To verify screen-cap code is correct, save all pixels to PNG,\n# using http://the.taoofmac.com/space/projects/PNGCanvas\n\nfrom pngcanvas import PNGCanvas\nc = PNGCanvas(sp.width, sp.height)\nfor x in range(sp.width):\nfor y in range(sp.height):\nc.point(x, y, color = sp.pixel(x, y))\n\nwith open(\"test.png\", \"wb\") as f:\nf.write(c.dump())```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75228035,"math_prob":0.85718507,"size":5799,"snap":"2021-31-2021-39","text_gpt3_token_len":1428,"char_repetition_ratio":0.11216566,"word_repetition_ratio":0.007100592,"special_character_ratio":0.25107777,"punctuation_ratio":0.16635513,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9574435,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T15:46:16Z\",\"WARC-Record-ID\":\"<urn:uuid:66fe865a-8130-45d3-bbaf-f0c8492a5c6c>\",\"Content-Length\":\"13797\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3cded9a9-5474-4441-9ea4-7663ebcf93f5>\",\"WARC-Concurrent-To\":\"<urn:uuid:9a311182-3d89-4d54-924d-ecc7b01f50ae>\",\"WARC-IP-Address\":\"89.200.138.134\",\"WARC-Target-URI\":\"http://neverfear.org/blog/view/156/OS_X_Screen_capture_from_Python_PyObjC\",\"WARC-Payload-Digest\":\"sha1:PZCIYCDRJVFREJKKP7PAUTAU6Q4PBGMF\",\"WARC-Block-Digest\":\"sha1:5I2B6OUGEWFP3J2V3EN6CETPKLISMPK5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057558.23_warc_CC-MAIN-20210924140738-20210924170738-00053.warc.gz\"}"}
https://forum.knime.com/t/modulus-on-large-numbers-more-than-2147483647/36763
[ "# Modulus on large numbers (more than\" 2147483647\").\n\nHow to calculate the rest of division on very large number in KNIME?\n.\nIndeed, inside the “Math Formula” node, the mod(x,y) or x%y function is limited to \" 2147483647\".\nTo avoid this limitation it’s possible to use the folowing process like (in VBA)\n\n``````Function RestePar97(Nbre As String) As Integer\nDim i As Integer\nRestePar97 = 0\nFor i = 0 To Len(Nbre) - 1\nRestePar97 = (RestePar97 * 10 + CInt(Mid(Nbre, i + 1, 1))) Mod 97\nNext i\nEnd Function\n``````\n\nHow can I transpose this function in KNIME?\n\nHi @PBJ ,\n\nmaybe i misunderstand your question - why not just use the long format in KNIME?", null, "", null, "Else you could use the java snippet node (or python nodes) to accomplish the same - but I think using long is easier to use (the limit here should be 9223372036854775807)\n\nIf you want mod for even larger calculations I would suggest to use java/python functions for large numbers e.g. bigint mod\n\nInput:", null, "Example:\n\nOutput:", null, "Example workflow for both:\nKNIME_project34.knwf (7.6 KB)\n\n6 Likes\n\nHi @PBJ , the Math Formula is able to go beyond that. 2147483647 is basically the max value of a signed int (the range is -2,147,483,648 to 2,147,483,647). 
If you use type Long as @AnotherFraudUser pointed out, it will go beyond that number.\n\n1 Like\n\nUsing a loop, I try to calculate the remainder of a division on large integer numbers with a hand-made, digit-by-digit process.\n\nThe behavior of mod (on KNIME):\n\nThe initial numbers:", null, "The mod calculation:\n\nThe (wrong) results:", null, "I try to avoid Python (to avoid a Python installation) and have a self-contained KNIME execution…", null, "Best regards.\n\n2 Likes\n\nHi @PBJ,\n\nit seems you are correct - it looks like there is a problem with incorrect casting there", null, "Attached a java snippet with multiple ways to do the mod with the Java Snippet.\nI think using these pre-built Java functions would be the correct way - or do you really want to build your self-made function within KNIME?", null, "KNIME_project34.knwf (7.0 KB)\n\nMaybe some of the KNIME colleagues can give feedback (@ScottF) regarding the output of the math node (as well as column expression node)", null, "2 Likes\n\nThis issue seems to happen after a certain range/limit. I originally tested with some Long numbers, and they all passed, and I was about to reply that I did not have any issue on my side because of that, then I thought of testing with the numbers that @PBJ used, and unfortunately I got the same results too.\n\nSome test results:", null, "", null, "Then I used the Math Formula (Multi Column) to go faster. At first, I did 4 columns in total (including the 2 above). Then I added @PBJ test data in column5. The first 4 columns produced the correct results, but for column5, it produced the same results as @PBJ:\n\nI also processed column4 and column5 via Column Expression:\n\nI’m guessing it’s the same `mod()` function in the Math Formula and the Column Expression.\n\nBut the issue might not be with the mod() function, but rather how these numbers are being recognized. Here’s an additional test that I did. I did column4 / 10 and also column5 / 10. 
Here are the results:\n\nI’m not sure how these numbers are coming up.\n\nHere’s my workflow if you want to play with these numbers: Math Formula mod issue.knwf (16.3 KB)\n\n2 Likes\n\nThis topic was automatically closed 182 days after the last reply. New replies are no longer allowed." ]
[ null, "https://forum-cdn.knime.com/uploads/default/original/3X/e/e/ee2989807adb8a05d5118f6b2b1aee2f6a5efbcd.png", null, "https://forum-cdn.knime.com/uploads/default/original/3X/b/e/be60f3423b81b7bf8a634ba16a53f683030e2e2b.png", null, "https://forum-cdn.knime.com/uploads/default/original/3X/e/6/e6e5f85f6e9f11a74618e458a4f8e0e5cc49e633.png", null, "https://forum-cdn.knime.com/uploads/default/original/3X/7/0/70bb5cb5301c0b45fc80c22a7788c8eb13937721.png", null, "https://forum-cdn.knime.com/uploads/default/original/3X/8/c/8c32adb1a047930f041c9ac3f271ba8f309f4821.png", null, "https://forum-cdn.knime.com/uploads/default/original/3X/1/c/1c17e117de23e85f65d030245a1d5ce15890d0fe.png", null, "https://forum-cdn.knime.com/images/emoji/apple/slight_smile.png", null, "https://forum-cdn.knime.com/images/emoji/apple/frowning.png", null, "https://forum-cdn.knime.com/uploads/default/original/3X/e/2/e25c588dd0aa15e90382fe8965f2629101eb6c46.png", null, "https://forum-cdn.knime.com/images/emoji/apple/see_no_evil.png", null, "https://forum-cdn.knime.com/uploads/default/original/3X/f/0/f0ff1fdc62b3026a31cdebd0e2fd30568666eef8.png", null, "https://forum-cdn.knime.com/uploads/default/original/3X/7/2/723314ee9f4db30907869ff26e09cf2b935f8c79.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9162629,"math_prob":0.7491192,"size":1936,"snap":"2022-27-2022-33","text_gpt3_token_len":502,"char_repetition_ratio":0.110766046,"word_repetition_ratio":0.0,"special_character_ratio":0.2680785,"punctuation_ratio":0.1037037,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97341067,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,null,null,5,null,2,null,3,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-19T04:21:04Z\",\"WARC-Record-ID\":\"<urn:uuid:f06b9054-0398-4863-95fd-323f0adf2c99>\",\"Content-Length\":\"52044\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:439f691c-beea-41e5-9c93-05ab8d8b5aa7>\",\"WARC-Concurrent-To\":\"<urn:uuid:00f526b2-79ad-4529-9475-d0809d87638f>\",\"WARC-IP-Address\":\"18.192.227.6\",\"WARC-Target-URI\":\"https://forum.knime.com/t/modulus-on-large-numbers-more-than-2147483647/36763\",\"WARC-Payload-Digest\":\"sha1:BGHJCDUBKTK4KCC6QG2GQF3OBNLTMOAB\",\"WARC-Block-Digest\":\"sha1:G4A2SEH7LGUTXYOIK4XO5QAGE6WTYG54\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573623.4_warc_CC-MAIN-20220819035957-20220819065957-00601.warc.gz\"}"}
https://dmtcs.episciences.org/2386
[ "## Hallam, Joshua and Sagan, Bruce, - Factorization of the Characteristic Polynomial\n\ndmtcs:2386 - Discrete Mathematics & Theoretical Computer Science, January 1, 2014, DMTCS Proceedings vol. AT, 26th International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2014)\nFactorization of the Characteristic Polynomial\n\nAuthors: Hallam, Joshua and Sagan, Bruce,\n\nWe introduce a new method for showing that the roots of the characteristic polynomial of a finite lattice are all nonnegative integers. Our method gives two simple conditions under which the characteristic polynomial factors. We will see that Stanley's Supersolvability Theorem is a corollary of this result. We can also use this method to demonstrate a new result in graph theory and give new proofs of some classic results concerning the Möbius function.\n\nVolume: DMTCS Proceedings vol. AT, 26th International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2014)\nSection: Proceedings\nPublished on: January 1, 2014\nSubmitted on: November 21, 2016\nKeywords: characterstic polynomial,lattice,Möbius function,quotient,supersolvability,[INFO.INFO-DM] Computer Science [cs]/Discrete Mathematics [cs.DM],[MATH.MATH-CO] Mathematics [math]/Combinatorics [math.CO]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8446387,"math_prob":0.60109735,"size":831,"snap":"2021-21-2021-25","text_gpt3_token_len":181,"char_repetition_ratio":0.11608222,"word_repetition_ratio":0.033898305,"special_character_ratio":0.19133574,"punctuation_ratio":0.124087594,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96823204,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-17T19:18:53Z\",\"WARC-Record-ID\":\"<urn:uuid:512e044a-98f5-46f9-81ff-2ddeea040103>\",\"Content-Length\":\"33478\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:87a1cf62-f843-4662-99c5-abb7a1aaf822>\",\"WARC-Concurrent-To\":\"<urn:uuid:d21e2c96-8fda-4d68-80bc-29007c12a15b>\",\"WARC-IP-Address\":\"193.48.96.94\",\"WARC-Target-URI\":\"https://dmtcs.episciences.org/2386\",\"WARC-Payload-Digest\":\"sha1:LAEWBCQ3BHP6WAUWDIGTCCD2KTYHVQUD\",\"WARC-Block-Digest\":\"sha1:SMNL7EQ75GDJBDCSK5B5G4D4NOBTKMEA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243992440.69_warc_CC-MAIN-20210517180757-20210517210757-00398.warc.gz\"}"}
https://www.powermag.com/basics-of-sound-and-noise-propagation/
[ "According to the Occupational Safety and Health Administration, 22 million people—including nearly all power plant workers—are exposed to potentially damaging noise at work each year in the U.S. Hearing conservation programs strive to prevent hearing loss and equip workers with the knowledge to safeguard themselves. Understanding the basics of sound and noise propagation could help engineers evaluate areas and identify methods to further reduce exposure.\n\nSound and noise propagation evaluation is a complex and specialized subject. It is encountered to some degree in all types of industrial, commercial, residential, and transit systems. This article provides some fundamental information for readers who are unfamiliar with sound and noise propagation. It includes some of the terminology and basic equations used in noise propagation studies.\n\n## Decibels and Sound Levels\n\nThe basic unit used in noise analysis is decibel (dB), which is a measure of the sound level (Figure 1). The sound level could be the sound power level, sound pressure level, or sound vibration level.", null, "1. Sound monitoring surveys. A tripod-mounted noise meter is shown in the foreground of this image measuring noise from a pump motor. Courtesy: Bechtel Corp.\n\nThe sound power level (PWL) is the amount of energy generated from a sound source similar to the wattage of a light bulb. The sound power level (in dB) is the sound power in watts relative to the sound power reference base of 10–12 watts. The decibel is a ratio (somewhat similar to percent), and in the case of sound power level is equal to 10 times the log of the sound power level ratio. The equation, found in chapter 8 of the 2017 ASHRAE Handbook: Fundamentals, I-P Edition, is:\n\nPWL (in dB) = 10 log (P1 / Pref)\n\nwhere P1 / Pref is the ratio of the sound power level and Pref is the reference base of 10–12 watts. 
The advantage of using decibels is that a very large change ratio of P1 / Pref can be reduced to a small number because log (P1 / Pref) is small.\n\nSound pressure level (SPL) is the pressure generated by sound waves as they strike the human ear. It is affected by distance from the sound source. The sound pressure level (in dB) is equal to 10 times the log of the square of the sound pressure ratio. The equation, also found in the 2017 ASHRAE handbook, is:\n\nSPL (in dB) = 10 log (P1 / Pref)^2\n\nor\n\nSPL (in dB) = 20 log (P1 / Pref)\n\nwhere P1 / Pref is the ratio of the sound pressure level with P1 as the effective root mean square (rms) sound pressure and Pref is the reference pressure of 20 micropascals (20 µPa). This reference is used because it is the lowest sound pressure which can be heard by a young, healthy human ear at the frequency of maximum hearing sensitivity, about 1,000 Hz.\n\nDecibels are logarithmic quantities. Therefore, they must be added logarithmically rather than algebraically. The 2017 ASHRAE handbook provides the following equation:\n\ndBsum = 10 log (10^(L1/10) + 10^(L2/10) + 10^(L3/10) + … )\n\nwhere L1, L2, and L3 are the dB values from several sources. For example, when two sources of 70 dB each are added, the result will be:\n\ndBsum = 10 log (10^7 + 10^7) = 73 dB\n\nBased on the equation above, additional calculations can be performed to obtain the results shown in Table 1 when summation of two noise sources of different values is considered. Similarly, for decibel subtraction, if there are two sources producing 73 dB and if one source of 70 dB is removed, the result would be:\n\ndBsum = 10 log (10^7.3 – 10^7) = 70 dB", null, "Table 1. Summing different sound levels. This table shows the number of decibels (dB) to be added to the highest sound level when two different sound levels are combined. 
Source: 2017 ASHRAE Handbook: Fundamentals, I-P Edition\n\n## Understanding Noise and Sound\n\nThe following are some terms frequently used by sound experts and brief explanations of what they mean.\n\nOctave Frequency Bands. Noise from a typical industrial piece of equipment is measured with an octave band analyzer, which provides noise values over the full frequency range of human hearing. An octave band represents a frequency range in which the higher frequency is two times the lower frequency. Thus, the octave band center frequency of 31.5 Hz is bounded in the octave band with lower frequency of 22 Hz and higher frequency of 44 Hz.\n\nFor finer resolution, the octave band can be divided into three one-third octave bands. The one-third octave band is the frequency band in which the upper frequency is equal to the lower frequency multiplied by the cube root of two. The corresponding octave band level can be determined by the summation of the three one-third octave band levels using the equation for summation given above.\n\nA-Weighting of Sound Level. The human ear is more responsive to frequencies in the range of 500 Hz to 8 kHz, and it effectively cuts off the lower and higher frequencies. The A-weighting accounts for the response of the human ear, and the sound levels in dB can be converted to dBA by applying the frequency weighting A-corrections shown in Table 2.", null, "Table 2. A-weighting. The human ear is more responsive to frequencies in the range of 500 Hz to 8,000 Hz. The A-weighting accounts for the response of the human ear. Source: 2017 ASHRAE Handbook: Fundamentals, I-P Edition\n\nThe A-weighted overall sound level is then computed by combining the values given in the last column of Table 2 as follows:\n\ndBAsum = 10 log (10^(51/10) + 10^(66/10) + 10^(75/10) + 10^(83/10) + 10^(91/10) + 10^(94/10) + 10^(97/10) + 10^(97/10) + 10^(92/10)) = 101.9 dBA\n\nEquivalent Continuous Level of Noise. 
Equivalent continuous level of noise (LAeq) is defined as the equivalent continuous A-weighted sound level when the sound is varying over time. This can be considered a type of average. For example, consider a worker who is subjected to different noise levels in various areas of a plant for varying times. If during the course of an eight-hour workday the person spent one hour in an area with an 80-dB noise level, four hours in an area with an 85-dB noise level, and three hours in an area with a 70-dB noise level, the LAeq would be calculated using the following equation, courtesy of Pacific Ears Pty. Ltd.'s noise exposure calculator:

LAeq = 10 log [(1/8) x (1 x 10^(80/10) + 4 x 10^(85/10) + 3 x 10^(70/10))] = 82.4 dB

Sound from Point, Line, and Plane Sources. Sound from a point source radiates energy uniformly in all directions in a spherical manner, such as noise emanating from an airplane. Sound from a line source radiates energy in a cylindrical pattern, such as noise from a train; highway; or heating, ventilation, and air conditioning (HVAC) ducting. Sound from a plane source radiates energy from a plane into a space, such as noise transmission through a wall.

PWL is not dependent on distance from the sound power source, whereas SPL is directly dependent on the distance from the source. In other words, PWL is the "cause" and SPL is the "effect." This "effect" continues to diminish as the distance from the sound power source increases.
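Returning to the LAeq example above, the time-weighted average can be verified with a short Python sketch; the (hours, dB) pair representation is my own choice, not from the calculator cited:

```python
import math

def laeq(exposures, total_hours=8.0):
    """Equivalent continuous A-weighted level from (hours, level-in-dB) pairs."""
    energy = sum(hours * 10 ** (level / 10) for hours, level in exposures)
    return 10 * math.log10(energy / total_hours)

# The worker's day from the example: 1 h at 80 dB, 4 h at 85 dB, 3 h at 70 dB.
print(round(laeq([(1, 80), (4, 85), (3, 70)]), 1))  # 82.4
```

Because the average is taken over sound energy rather than over decibel values, the loudest exposure dominates: the 85-dB segment contributes over 90% of the total energy here.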
The following equations, found in McQuay Air Conditioning Application Guide AG 31-010, HVAC Acoustic Fundamentals, can be used to convert PWL to SPL.

For a point source of noise:

SPL = PWL + 10 log [Q / (4πd^2)] + k

where Q is the directivity factor (Q = 1 for a point source in full spherical space, such as an airplane flying in the air), d is the distance from the sound power source (in meters), and k is a constant with the value of 0.5 for SI units or 10.5 for IP units.

For a line source of noise:

SPL = PWL + 10 log [Q / (πdL)] + k

where L is the length of the sound source (in meters).

For a plane source of noise when d < b/π (for other values of d, this equation is slightly modified):

SPL = PWL + 10 log [π / (4bc)] + k

where b is the shorter wall dimension (in meters) and c is the larger wall dimension (in meters).

For a point source radiating sound energy in a confined space such as a room:

SPL = PWL + 10 log [Q / (4πd^2) + (4/R)] + k

where R is a room constant. The following equation is used to calculate R:

R = (ST x α) / (1 – α)

where ST is the total area in the receiving room and α is the average sound absorption coefficient in the receiving room. The term 4/R represents the reverberant field generated by sound reflections from room surfaces. The value of Q could be 1, 2, 4, or 8, depending on structures surrounding the sound power source.

Geometric Spreading of Sound Energy. The geometric spreading of sound energy occurs due to the expansion of sound wave fronts. The geometric spreading could be spherical or cylindrical. Spherical spreading is from a point source radiating sound equally in all directions.
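The point-source conversion above can be sketched in Python; the function name and defaults (Q = 1 and k = 0.5 for SI units) are illustrative choices of mine, not from the McQuay guide:

```python
import math

def point_source_spl(pwl, distance_m, q=1.0, k=0.5):
    """SPL at distance_m (meters) from a point source with sound power level pwl.

    q is the directivity factor; k = 0.5 for SI units, per the text.
    """
    return pwl + 10 * math.log10(q / (4 * math.pi * distance_m ** 2)) + k

# A 100-dB point source in full spherical space (Q = 1), heard at 1 m:
print(round(point_source_spl(100, 1.0), 1))  # 89.5
```

Doubling the distance lowers the result by 10·log10(4) ≈ 6 dB, which is the spherical-spreading rule discussed next.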
For spherical spreading, the difference in SPL (in dB) can be calculated using an equation derived from The Engineering Toolbox website as:

∆SPL = SPL2 – SPL1 = 20 log (d2/d1)

Thus, when the distance is doubled, the difference in sound pressure level for a point source is 6 dB.

In the case of cylindrical spreading, such as noise along a highway or a passing train, the difference in SPL (in dB) is given as:

∆SPL = SPL2 – SPL1 = 10 log (d2/d1)

Thus, when the distance is doubled, the difference in sound pressure level for a line source is 3 dB.

Sound Barriers and Insertion Loss. The SPL-distance equations given in the section above are valid provided there is no sound barrier (such as a solid wall structure) in the direct sound path from the source to the receiver. If a sound barrier is present, then the insertion loss (that is, noise attenuation) due to the sound barrier must be considered (Figure 2).

2. Sound barrier. Noise barriers were installed around the vertical pumps shown here to reduce noise levels in the surrounding area. Courtesy: Bechtel Corp.

To calculate the insertion loss, first calculate the direct path length (d) between the source and receiver. Then, calculate the extended path length between the source and receiver due to the sound barrier. In the case of a solid wall, the extended path length would go from the source to the top of the wall (distance A) and then down to the receiver (distance B). The path length difference (A + B – d) is then used to find the insertion loss from Table 3.

Table 3. Insertion loss. This table shows the insertion loss (dB) for each frequency band (Hz) based on the extended path length (ft.) caused by a wall. The extended path length is the distance the sound would travel to the top of the wall and then down to the receiver, minus the straight-line distance from source to receiver.
Source: 2015 ASHRAE Handbook: HVAC Applications, I-P Edition\n\n## Noise Calculations for Building Service Equipment\n\nAs an example, consider the noise generated by an in-duct fan supplying atmospheric air through a single outlet diffuser in a room within a building. The SPL in the room reaching the observer’s ear is the logarithmic sum of the direct SPL and the reverberant SPL.\n\nThe details for calculating the SPL reaching the observer’s ear are as follows:\n\n1. Use the fan manufacturers’ SPL noise spectrum across the various frequencies as the starting point.\n\n2. Analyze the duct system between the fan and the room diffuser outlet, and evaluate the duct attenuation based on straight runs and duct bends.\n\n3. At low frequencies, the sound power reaching the outlet diffuser is partly reflected back in the duct. This is the outlet reflection effect, which provides some degree of attenuation. The values for the outlet reflection effect are available in literature and are based on the area of the outlet diffuser.\n\n4. Total duct attenuation is therefore the sum of #2 and #3 above.\n\n5. Total SPL leaving the system can then be evaluated by subtracting #4 from #1.\n\n6. The SPL values in #5 then need to be adjusted for the following three factors:\n\n■ Percent of total fan airflow leaving the room outlet (5%, for example)\n■ Distance from the outlet to observer (1 to 2 meters, for example)\n■ Directivity, which is based on the location of the outlet with respect to corners and walls.\n\nThe effect of the first two adjustments is an SPL decrease, whereas the third factor tends to increase the value.\n\n7. The adjusted value calculated in #6 is the direct SPL.\n\nThe next step is to calculate the reverberant SPL, which is done as follows:\n\n1. 
Sum the impact of reverberant factors from the percentage of fan air volume serving the room under analysis; the room volume factor, which accounts for reflection/absorption of sound in the room; and the reverberation time factor.

2. The reverberant SPL can then be calculated as the sum of the SPL leaving the system and the impact of reverberant factors. That is, adding #5 from the direct SPL analysis and #1 above.

3. Finally, the combined SPL can be calculated as the logarithmic sum of the direct SPL and the reverberant SPL. That is, using #7 from the direct SPL analysis and #2 above.

The combined SPL established above can then be compared with the design basis noise criteria to determine if further attenuators are required. In some cases, sound levels in a room may be affected by duct breakout (sound energy escaping through the walls of a duct passing near an occupied space that does not serve that area). However, based on a paper published by Trox Technik titled "Sound and Sense," noise due to duct breakout can be mitigated by limiting duct attenuation to 15 dB.

## Vibrations and Ground-Borne Noise

Vibration levels (in decibels, VdB) are defined using the following equation, found in Fundamentals of Acoustics by J. Paul Guyer of Continuing Education and Development Inc.:

VdB = 10 log (v1/vref)^2

where v1/vref is the ratio of the velocity amplitude, with v1 as the root mean square of the velocity amplitude (in meters/second) and vref as the reference, expressed in velocity units, such as 5 x 10^–8 meters per second. (Reference values may vary when using U.S. units.)

Rotating machinery, vehicles, and trains excite the ground and generate waves that propagate through the soil to the foundations of nearby buildings. The foundations propagate the waves to room structures.
The rumble often heard is noise from the movement of building room walls, floors, and ceilings, and the threshold for human perception of ground-borne noise is around 65 VdB.

The unweighted SPL is approximately equal to the average vibrational velocity level of the room surfaces (assuming a room with average acoustical absorption). The A-weighting correction at 30 Hz is approximately –40 dB and at 60 Hz is –25 dB. Accordingly, if the vibration velocity level peaks at 30 Hz, the ground-borne noise level is 65 – 40 = 25 dBA, and if it peaks at 60 Hz, the ground-borne noise level is 65 – 25 = 40 dBA. ■

S. Zaheer Akhtar, PE is principal engineer with Bechtel Corp. Thanks to Henrik Olsen, senior noise and vibration specialist with Bechtel Corp. for reviewing and providing valuable comments.
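The vibration-level definition and the ground-borne arithmetic above can be checked with a short Python sketch; the function names and the two-entry correction table are my own framing of the values given in the text:

```python
import math

def vibration_level_vdb(v_rms, v_ref=5e-8):
    """Vibration velocity level in VdB, using the SI reference of 5e-8 m/s."""
    return 20 * math.log10(v_rms / v_ref)

# Approximate A-weighting corrections at the two peak frequencies from the text.
a_correction = {30: -40, 60: -25}

def ground_borne_dba(vdb_level, peak_hz):
    """A-weighted ground-borne noise level for a given peak frequency."""
    return vdb_level + a_correction[peak_hz]

# A velocity amplitude 100 times the reference corresponds to 40 VdB.
print(round(vibration_level_vdb(5e-6), 1))  # 40.0
# The article's examples: a 65-VdB level peaking at 30 Hz or 60 Hz.
print(ground_borne_dba(65, 30))             # 25 dBA
print(ground_borne_dba(65, 60))             # 40 dBA
```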
https://philsandlin.org/and-pdf/1515-set-theory-venn-diagram-problems-and-solutions-pdf-280-496.php
# Set theory venn diagram problems and solutions pdf

Posted on Monday, March 29, 2021 9:22:49 AM Posted by Frontino D. - 29.03.2021

File Name: set theory venn diagram problems and solutions .zip

Size: 2284Kb

Published: 29.03.2021

## Venn Diagrams: Exercises

ML Aggarwal Class 8 Solutions for Maths was first published in , and the sixteen editions of ML Aggarwal Solutions Class 8 published during these years show its increasing popularity among students and teachers.

The subject matter in the ML Aggarwal Class 8 Solutions Maths Chapter 6 Operation on Sets Venn Diagrams has been explained in easy language and covers many examples from real-life situations. Emphasis has been placed on basic terms, facts, principles, chapters, and their applications. Carefully selected examples provide complete step-by-step ML Aggarwal Class 8 Solutions Maths Chapter 6 Operation on Sets Venn Diagrams so that students are prepared to attempt all the questions given in the exercises. These questions have been written in an easy manner such that they holistically cover all the examples included in the chapter and also prepare students for competitive examinations.

The updated syllabus will best match the expectations and studying objectives of the students. A wide variety of questions and solved examples has helped students score high marks in their final examinations. The key to performing well in the examinations is to solve ML Aggarwal Solutions Maths Chapter 6 Operation on Sets Venn Diagrams properly and to note down the important chapters and tricks mentioned in the exercises. The probability of questions coming from these solutions in final exams is quite high. It is important to solve the ML Aggarwal Solutions. When the ICSE examinations are round the corner, students need to study the complete solved questions of this book, which are available on our website.
This practice encourages students to memorize the maths fundamentals of ML Aggarwal Maths Chapter 6 Operation on Sets Venn Diagrams questions and to develop a step-by-step explaining technique.

Class 8 ICSE examinations, which are usually called board examinations, bring along huge tension and competition as well. This competition, along with a different environment, provides students with the opportunity to adjust themselves to a new situation. Going through all chapters of ML Aggarwal Solutions and solving every question brings enormous confidence before final examinations. After answering a question, one should first check the answer present on the backside and, if possible, should also independently confirm the step-by-step method.

Operation On Sets Venn Diagrams. Best Features of the ML Aggarwal Maths Chapter 6 Operation on Sets Venn Diagrams Solutions Class 8: Keeping in mind the students' understanding, the matter has been divided into sections and sub-sections so that the students can study at their own pace. All new examples have been developed through class activity. The practical hands-on experience of these exercises will enable the students to develop a deeper knowledge of the concepts. Results, wherever possible, have been confirmed by lab activity.

Every chapter is followed by a Summary which recapitulates the new concepts and results. The last section acts as a Unit Test. Model Question Papers, presented at various places, will serve as a means for revision and preparation for the examinations. It has been the sincere endeavour to present the chapters, examples and questions in an interesting manner so that the students develop an interest in enjoying mathematics.

## Venn diagram problems
Problem-solving using Venn diagrams is a widely used approach in many areas such as statistics, data science, business, set theory, math, logic, etc. A Venn diagram is an illustration that shows logical relationships between two or more sets by grouping items. Venn diagrams use circles (both overlapping and nonoverlapping) or other shapes. Although Venn diagrams with 2 or 3 circles are the most common type, there are also many diagrams with a larger number of circles (5, 6, 7, 8, 10, …). Theoretically, they can have an unlimited number of circles.

Venn Diagram General Formula. This is a very simple Venn diagram example that shows the relationship between two overlapping sets X, Y.

Let A be the set of people who believe that they have been abducted by space aliens. Then we have the following Venn diagram showing the relationship between.

In these lessons, we will learn how to solve word problems using Venn diagrams that involve two sets or three sets. Examples and step-by-step solutions are included in the video lessons.
Venn diagrams are the principal way of showing sets in a diagrammatic form.

When we place what we know so far onto the diagram, this is what we have: We now need to work through the other information in the word problem, one piece at a time. Remember: Work from the Inside Out. Here is what we will do next. This handout will cover the five steps to analyzing known information using a Venn diagram.

Venn diagram word problems generally give you two or three classifications and a bunch of numbers. Therefore 2 learners take all three subjects. Let E be the set of people who believe that Elvis is still alive.

This page contains worksheets based on Venn diagram word problems, with Venn diagrams containing three circles.

To understand how to solve Venn diagram word problems with 3 circles, we have to know the following basic stuff.

Theorem 2:

Explanation:

In a survey of university students, 64 had taken a mathematics course, 94 had taken a chemistry course, 58 had taken a physics course, 28 had taken mathematics and physics, 26 had taken mathematics and chemistry, 22 had taken chemistry and physics, and 14 had taken all three courses. Find how many had taken one course only.

Solution:

Step 1:
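The survey problem above can be finished with inclusion-exclusion, which is the idea the worksheet is teaching; this Python sketch (variable names are mine) computes the "one course only" count from the given data:

```python
# Counts from the survey problem above.
m, c, p = 64, 94, 58       # mathematics, chemistry, physics enrolments
mp, mc, cp = 28, 26, 22    # pairwise overlaps
mcp = 14                   # students who took all three courses

# "Exactly one course" subtracts both pairwise overlaps for that subject,
# then adds back the triple overlap, which was subtracted twice.
only_m = m - mc - mp + mcp
only_c = c - mc - cp + mcp
only_p = p - mp - cp + mcp

print(only_m, only_c, only_p)    # 24 60 22
print(only_m + only_c + only_p)  # 106 students took one course only
```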
https://1library.net/article/symmetry-considerations-and-origin-of-the-force-components.4zp7rjrz
# Symmetry Considerations and Origin of the Force Components

In document Role of collective modes in some surface properties of metals (Page 179-184)

The distinct origins of the force components F∥ and F⊥ can be clearly understood in terms of the symmetry properties of the induced charge density. The general form of the θ integrals in equations (5.48) may be written

I(ρ′,η) = ∫₀^{2π} dθ exp[iκρ′ cos(θ − η)] f(cos θ)   (5.51)

With the variable change to θ − π in the range (π, 2π), followed by the change to π − θ in the range (½π, π), I(ρ′,η) takes the form

I(ρ′,η) = 2[I₊(ρ′,η) + I₋(ρ′,η)]   (5.52)

where

I₊(ρ′,η) = ∫₀^{½π} dθ cos(κρ′ cos θ cos η) cos(κρ′ sin θ sin η) [f(cos θ) + f(−cos θ)]   (5.53)

and

I₋(ρ′,η) = i ∫₀^{½π} dθ sin(κρ′ cos θ cos η) cos(κρ′ sin θ sin η) [f(cos θ) − f(−cos θ)]   (5.54)

The following identities are easily verified:

I±(ρ′,η) = I±(ρ′,−η)   (5.55)

I±(ρ′,π − η) = ±I±(ρ′,η)   (5.56)

I₊(0,η) = ∫₀^{½π} dθ [f(cos θ) + f(−cos θ)]   (5.57)

I₋(ρ′,½π) = I₋(0,η) = 0   (5.58)

If f(cos θ) is an even function of cos θ, then I₋ vanishes and, from equations (5.55) and (5.56), I = I₊ corresponds to a symmetric density distribution with respect to inversion about both axes η = 0 and η = π in any plane parallel to the surface. In addition, on comparing equations (5.53) and (5.57), I₊ has an absolute maximum at ρ′ = 0, directly beneath the moving charge. When f(cos θ) = −f(−cos θ), then I = I₋ is antisymmetric about, and vanishes on, the axis η = ½π. The corresponding antisymmetric charge distribution will contribute no net charge.

In the expressions for the surface and bulk charge densities, f(cos θ) is given by (ω² − κ²v² cos²θ)⁻¹ with κ = k and q respectively. When there are no singularities, f(cos θ) is even and the symmetric density n⊥ will give rise to F⊥. When singularities occur, only the imaginary parts iπ sgn(cos θ) δ(ω² − κ²v² cos²θ) contribute, leading to the antisymmetric charge distribution n∥, which in turn gives rise to F∥. Thus, for example,

n_s⊥(ρ′,η,z) ∝ ∫₀^{½π} dθ cos(κρ′ cos θ cos η) cos(κρ′ sin θ sin η) / (ω² − κ²v² cos²θ)   (5.59)

n_s∥(ρ′,η,z) ∝ Q ∫_{k_s}^∞ dκ κ²v exp(γz − κz) [(2ω_s² + 3β²κ²)(κ²v² − ω_s²)^½]⁻¹ sin(κρ′ cos θ_κ cos η) cos(κρ′ sin θ_κ sin η)   (5.60)

where θ_κ = cos⁻¹(ω_s/κv). Note also that the induced charge density −en_s∥(ρ′,0,z) = en_s∥(ρ′,π,z) has the same sign as Q in front of the moving charge. An equivalent analysis gives the bulk terms, with the corresponding limits and q(k) as in equations (5.32) and (5.37), for F∥ and F⊥.

CONCLUDING REMARKS

The approach of Takimoto (1966) and Muscat and Newns (1977) has been extended here to study the effects of spatial dispersion on the force on a charge moving with constant velocity parallel to a metal surface. The notable effects of the nonlocality of the dielectric response of the metal are the inclusion of the contributions from bulk plasmons and the introduction of the physically important limits k_s = ω_s/[v(v − β)]^½ (compared to ω_s/v in the local approximation) on the surface plasmon wavenumbers, and k = ω_p/(v² − β²)^½ and q(k) = [k²(v² − β²) − ω_p²]^½/β for bulk plasmons, separating their contributions to the longitudinal and transverse force components. These limits are connected with the results that F∥ vanishes when v ≤ β, with the induced density fluctuations in the metal then remaining symmetrically distributed beneath the moving charge, and the shape of this symmetric distribution giving rise to the velocity dependence of F⊥. For v > β the existence and origin of the force components F∥ and F⊥ can be understood in terms of the lack of symmetry of the induced density, rather than simply as resolved components of an 'image' force resulting from a charge distribution whose centre of mass lags behind the moving charge.

The results are easily extended to obtain the probability of plasmon excitation for very small angle grazing incidence, as done by Barberán et al (1979) for the surface plasmon contributions. For the bulk contribution, equation (5.32a) must be used with the factor introduced into the q integral, which then requires numerical evaluation.

Although, as indicated above, the results of Barberán et al (1979) only reproduce the surface contributions given here, and thus do not reduce to the same static limit (Newns 1969b), a clearer picture of the effects of single-particle excitations on the results of this chapter is desired. In addition, the inclusion of retardation is expected to introduce some lag, and hence asymmetry, in the induced density, so that there would no longer be a clear distinction for the contributions to F∥ and F⊥ on the basis of plasmon phase velocities. However, the usefulness of the hydrodynamic model, in this case within the nonretarded limit, is highlighted in this important problem in giving a clear physical basis on which these other effects can be evaluated.

REFERENCES

Abeles F. 1971 Physics of Thin Films 6, 151.
_____ 1976 Thin Solid Films 34, 291.
Abramowitz M. and Stegun I.A. 1970 eds., Handbook of Mathematical Functions (Dover, New York).
Adler S.L. 1962 Phys. Rev. 126, 413.
Agarwal G.S. 1975 Phys. Rev. A11, 243.
Agarwal G.S., Pattanayak D.N. and Wolf E. 1971a Phys. Rev. Lett. 27, 1022.
_____ 1971b Opt. Commun. 4, 255.
Anderson P.W. 1961 Phys. Rev. 124, 41.
Appelbaum J.A. and Hamann D.R. 1972 Phys. Rev. B6, 2166.
Barash Y.S. and Ginzburg V.L. 1975 Sov. Phys. Usp. 18, 305.
Barberán N., Echenique P.M. and Viñas J. 1979 J. Phys. C: Solid State Phys. 12, L111.
Bardeen J. 1940 Phys. Rev. 58, 727.
Barton G. 1976 University of Sussex, Technical Reports.
_____ 1978a Solid State Commun. 27, 95.
_____ 1978b University of Sussex, Review preprint.
Beck D.E. and Celli V. 1972 Phys. Rev. Lett. 28, 1124.
Bennett A.J. 1970 Phys. Rev. B1, 203.
Bloch F. 1933 Z. Phys. 81, 363.
Boardman A.D., Paranjape B.V. and Teshima R. 1974 Phys. Lett. 48A, 327.
_____ 1975 Surface Sci. 49, 275.
Boardman A.D., Paranjape B.V. and Nakamura Y.O. 1976 Phys. Stat. Sol. (b) 75, 347.
https://kristinking.org/2017/02/15/multiply-like-a-roman/
# Multiply like a Roman

The Romans used weird, weird math. To multiply two terms, they repeatedly doubled one of them in one column while halving the other term in another column, throwing away any remainders that came up, crossed off half the numbers in the first column, added up what remained, and — voila! — got the right answer.

How on earth did they come up with it? And why on earth does it work?

Here's what it looks like. If you're not a mathy person, don't worry, I'm not asking you to do any calculations–just to enjoy the strangeness.

536 * 42

= (500 + 30 + 5 + 1) * (40 + 2)

= DXXXVI * XXXXII

2. Multiply the first column and halve the second

| Multiply these | Divide these |
| --- | --- |
| DXXXVI | XXXXII |
| MLXXII | XXI |
| MMCXXXXIIII | X |
| MMMMCCLXXXVIII | V |
| (8M)DLXXVI | II |
| (17M)CLII | I |

3. If a number in the second column is even, cross it off in the first column.

| Multiply these | Divide these |
| --- | --- |
| ~~DXXXVI~~ | XXXXII |
| MLXXII | XXI |
| ~~MMCXXXXIIII~~ | X |
| MMMMCCLXXXVIII | V |
| ~~(8M)DLXXVI~~ | II |
| (17M)CLII | I |

4. Add up all the remaining numbers in the first column.

(22M)CCCLLLXXXXXVIIIIIII

= (22M)DXII
= 22,000 + 500 + 10 + 2

= 22,512

That's all I have. If you want to know why it works, one of these links should satisfy.

A Different Kind of Multiplication

Roman Arithmetic

Or, if this is too much math for you, just be glad I didn't tell you about multiplying infinity.

wikimedia commons

### 3 responses to “Multiply like a Roman”

1. Edmark M. Law

The ancient Egyptians also used this method and perhaps they are the originators of it. There are people in some parts of the world who still use this (e.g., in Ethiopia).

This is based on the binary number system. If you're familiar with binary numbers, then you can figure it out from there.

• Kristin

Wow, that's interesting! I wonder what happens when kids from Ethiopia come to the U.S. in the middle grades and are expected to know the multiplication system. Also, now I want to look up more about the ancient Egyptians.
Thanks.

• Edmark M. Law

The mathematics of the ancient Egyptians was impressive. They were able to build the pyramids after all.
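The halving-and-doubling procedure from the post is often called Russian peasant (or Egyptian) multiplication, and it is short enough to sketch in Python (my own implementation, not from the post):

```python
def roman_multiply(a, b):
    """Multiply by repeated doubling and halving, as in the post's procedure."""
    total = 0
    while b >= 1:
        if b % 2 == 1:   # an odd (uncrossed) row: keep its first-column value
            total += a
        a *= 2           # double the first column
        b //= 2          # halve the second, throwing away the remainder
    return total

print(roman_multiply(536, 42))  # 22512, matching the worked example
```

It works because the odd/even checks read off the binary digits of the second factor, which is exactly the binary-number-system point made in the comments above.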
https://www.rdocumentation.org/packages/zoo/versions/1.8-9/topics/zooreg
[ "zoo (version 1.8-9)\n\n# zooreg: Regular zoo Series\n\n## Description\n\n`zooreg` is the creator for the S3 class `\"zooreg\"` for regular `\"zoo\"` series. It inherits from `\"zoo\"` and is the analogue to `ts`.\n\n## Usage\n\n```zooreg(data, start = 1, end = numeric(), frequency = 1,\ndeltat = 1, ts.eps = getOption(\"ts.eps\"), order.by = NULL,\ncalendar = getOption(\"zoo.calendar\", TRUE))```\n\n## Arguments\n\ndata\n\na numeric vector, matrix or a factor.\n\nstart\n\nthe time of the first observation. Either a single number or a vector of two integers, which specify a natural time unit and a (1-based) number of samples into the time unit.\n\nend\n\nthe time of the last observation, specified in the same way as `start`.\n\nfrequency\n\nthe number of observations per unit of time.\n\ndeltat\n\nthe fraction of the sampling period between successive observations; e.g., 1/12 for monthly data. Only one of `frequency` or `deltat` should be provided.\n\nts.eps\n\ntime series comparison tolerance. Frequencies are considered equal if their absolute difference is less than `ts.eps`.\n\norder.by\n\na vector by which the observations in `x` are ordered. If this is specified the arguments `start` and `end` are ignored and `zoo(data, order.by, frequency)` is called. See `zoo` for more information.\n\ncalendar\n\nlogical. Should `yearqtr` or `yearmon` be used for a numeric time index with frequency 4 or 12, respectively?\n\n## Value\n\nAn object of class `\"zooreg\"` which inherits from `\"zoo\"`. It is essentially a `\"zoo\"` series with a `\"frequency\"` attribute.\n\n## Details\n\nStrictly regular series are those whose time points are equally spaced. Weakly regular series are strictly regular time series in which some of the points may have been removed but still have the original underlying frequency associated with them. `\"zooreg\"` is a subclass of `\"zoo\"` that is used to represent both weakly and strictly regular series. 
Internally, it is the same as `\"zoo\"` except it also has a `\"frequency\"` attribute. Its index class is more restricted than `\"zoo\"`. The index: 1. must be numeric or a class which can be coerced via `as.numeric` (such as `yearmon`, `yearqtr`, `Date`, `POSIXct`, `tis`, `xts`, etc.). 2. when converted to numeric must be expressible as multiples of 1/frequency. 3. group generic functions `Ops` should be defined, i.e., adding/subtracting a numeric to/from the index class should produce the correct value of the index class again.\n\n`zooreg` is the `zoo` analogue to `ts`. The arguments are almost identical, only in the case where `order.by` is specified, `zoo` is called with `zoo(data, order.by, frequency)`. It creates a regular series of class `\"zooreg\"` which inherits from `\"zoo\"`. It is essentially a `\"zoo\"` series with an additional `\"frequency\"` attribute. In the creation of `\"zooreg\"` objects (via `zoo`, `zooreg`, or coercion functions) it is always check whether the index specified complies with the frequency specified.\n\nThe class `\"zooreg\"` offers two advantages over code `\"ts\"`: 1. The index does not have to be plain numeric (although that is the default), it just must be coercible to numeric, thus printing and plotting can be customized. 2. This class can not only represent strictly regular series, but also series with an underlying regularity, i.e., where some observations from a regular grid are omitted.\n\nHence, `\"zooreg\"` is a bridge between `\"ts\"` and `\"zoo\"` and can be employed to coerce back and forth between the two classes. The coercion function `as.zoo.ts` returns therefore an object of class `\"zooreg\"` inheriting from `\"zoo\"`. 
Coercion between `\"zooreg\"` and `\"zoo\"` is also available and drops or tries to add a frequency respectively.\n\nFor checking whether a series is strictly regular or does have an underlying regularity the generic function `is.regular` can be used.\n\nMethods to standard generics for regular series such as `frequency`, `deltat` and `cycle` are available for both `\"zooreg\"` and `\"zoo\"` objects. In the latter case, it is checked first (in a data-driven way) whether the series is in fact regular or not.\n\n`as.zooreg.tis` has a `class` argument whose value represents the class of the index of the `zooreg` object into which the `tis` object is converted. The default value is `\"ti\"`. Note that the frequency of the `zooreg` object will not necessarily be the same as the frequency of the `tis` object that it is converted from.\n\n`zoo`, `is.regular`\n\n## Examples\n\n```# NOT RUN {\n## equivalent specifications of a quarterly series\n## starting in the second quarter of 1959.\nzooreg(1:10, frequency = 4, start = c(1959, 2))\nas.zoo(ts(1:10, frequency = 4, start = c(1959, 2)))\nzoo(1:10, seq(1959.25, 1961.5, by = 0.25), frequency = 4)\n\n## use yearqtr class for indexing the same series\nz <- zoo(1:10, yearqtr(seq(1959.25, 1961.5, by = 0.25)), frequency = 4)\nz\nz[-(3:4)]\n\n## create a regular series with a \"Date\" index\nzooreg(1:5, start = as.Date(\"2000-01-01\"))\n## or with \"yearmon\" index\nzooreg(1:5, end = yearmon(2000))\n\n## lag and diff (as diff is defined in terms of lag)\n## act differently on zoo and zooreg objects!\n## lag.zoo moves a point to the adjacent time whereas\n## lag.zooreg moves a point by deltat\nx <- c(1, 2, 3, 6)\nzz <- zoo(x, x)\nzr <- as.zooreg(zz)\nlag(zz, k = -1)\nlag(zr, k = -1)\ndiff(zz)\ndiff(zr)\n\n## lag.zooreg wihtout and with na.pad\nlag(zr, k = -1)\nlag(zr, k = -1, na.pad = TRUE)\n\n## standard methods available for regular series\nfrequency(z)\ndeltat(z)\ncycle(z)\ncycle(z[-(3:4)])\n\nzz <- zoo(1:6, 
as.Date(c(\"1960-01-29\", \"1960-02-29\", \"1960-03-31\",\n\"1960-04-29\", \"1960-05-31\", \"1960-06-30\")))\n# this converts zz to \"zooreg\" and then to \"ts\" expanding it to a daily\n# series which is 154 elements long, most with NAs.\n# }\n# NOT RUN {\nlength(as.ts(zz)) # 154\n# }\n# NOT RUN {\n# probably a monthly \"ts\" series rather than a daily one was wanted.\n# This variation of the last line gives a result only 6 elements long.\nlength(as.ts(aggregate(zz, as.yearmon, c))) # 6\n\nzzr <- as.zooreg(zz)\n\ndd <- as.Date(c(\"2000-01-01\", \"2000-02-01\", \"2000-03-01\", \"2000-04-01\"))\nzrd <- as.zooreg(zoo(1:4, dd))\n\n# }\n```" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82883114,"math_prob":0.97433764,"size":5577,"snap":"2021-21-2021-25","text_gpt3_token_len":1603,"char_repetition_ratio":0.13062982,"word_repetition_ratio":0.036480688,"special_character_ratio":0.29137528,"punctuation_ratio":0.15477215,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9664156,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-12T14:49:09Z\",\"WARC-Record-ID\":\"<urn:uuid:9462da0d-ef3c-4b75-b4c1-982ed9b99631>\",\"Content-Length\":\"51043\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:45ec5259-a38b-4fea-adad-bdb1d1c81a99>\",\"WARC-Concurrent-To\":\"<urn:uuid:4aa929d2-81f9-4be9-a9f7-801978556354>\",\"WARC-IP-Address\":\"13.249.42.104\",\"WARC-Target-URI\":\"https://www.rdocumentation.org/packages/zoo/versions/1.8-9/topics/zooreg\",\"WARC-Payload-Digest\":\"sha1:YVD25HMSHW3DJHUTRIVVP3Z7DFGY364U\",\"WARC-Block-Digest\":\"sha1:3EBL2B3FMK3KQUW6UU2AYBFMF6KHOCDF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243990929.24_warc_CC-MAIN-20210512131604-20210512161604-00086.warc.gz\"}"}
https://sk.sagepub.com/Reference/survey/n513.xml
[ "# Sampling Interval\n\nSampling Interval\n\nSubject: Survey Research\n\n• Entry\n• Entries A-Z\n• Subject Index\n\n• When a probability sample is selected through use of a systematic random sampling design, a random start is chosen from a collection of consecutive integers that will ensure an adequate sample size is obtained. The length of the string of consecutive integers is commonly referred to as the sampling interval.\n\nIf the size of the population or universe is N and n is the size of the sample, then the integer that is at least as large as the number N/n is called the sampling interval (often denoted by k). Used in conjunction with systematic sampling, the sampling interval partitions the universe into n zones, or strata, each consisting of k units. In general, systematic sampling is operationalized by selecting a random start between 1 ...", null, "• [0-9]\n• A\n• B\n• C\n• D\n• E\n• F\n• G\n• H\n• I\n• J\n• K\n• L\n• M\n• N\n• O\n• P\n• Q\n• R\n• S\n• T\n• U\n• V\n• W\n• X\n• Y\n• Z" ]
[ null, "https://sk.sagepub.com/img/tiny-padlock-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86006516,"math_prob":0.9831518,"size":1556,"snap":"2019-51-2020-05","text_gpt3_token_len":392,"char_repetition_ratio":0.13466495,"word_repetition_ratio":0.037671234,"special_character_ratio":0.26092544,"punctuation_ratio":0.08934708,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9965104,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-17T18:58:29Z\",\"WARC-Record-ID\":\"<urn:uuid:e70123b5-e269-4f25-aacf-9b2bc4bdda14>\",\"Content-Length\":\"167831\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a814514c-2aef-45d3-b421-3fef07a00de3>\",\"WARC-Concurrent-To\":\"<urn:uuid:d22e5f43-7269-451a-a8a7-d48cacb01e9b>\",\"WARC-IP-Address\":\"128.121.3.195\",\"WARC-Target-URI\":\"https://sk.sagepub.com/Reference/survey/n513.xml\",\"WARC-Payload-Digest\":\"sha1:F3UOPD3MJSW5AOYUK7C6X2MEZ3ARY3HE\",\"WARC-Block-Digest\":\"sha1:FI6Y666JSIPQU4FKCI7CQIBXHBB6KW4N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250590107.3_warc_CC-MAIN-20200117180950-20200117204950-00082.warc.gz\"}"}
https://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method
[ "# Hartree–Fock method\n\nIn computational physics and chemistry, the Hartree–Fock (HF) method is a method of approximation for the determination of the wave function and the energy of a quantum many-body system in a stationary state.\n\nThe Hartree–Fock method often assumes that the exact N-body wave function of the system can be approximated by a single Slater determinant (in the case where the particles are fermions) or by a single permanent (in the case of bosons) of N spin-orbitals. By invoking the variational method, one can derive a set of N-coupled equations for the N spin orbitals. A solution of these equations yields the Hartree–Fock wave function and energy of the system.\n\nEspecially in the older literature, the Hartree–Fock method is also called the self-consistent field method (SCF). In deriving what is now called the Hartree equation as an approximate solution of the Schrödinger equation, Hartree required the final field as computed from the charge distribution to be \"self-consistent\" with the assumed initial field. Thus, self-consistency was a requirement of the solution. The solutions to the non-linear Hartree–Fock equations also behave as if each particle is subjected to the mean field created by all other particles (see the Fock operator below), and hence the terminology continued. The equations are almost universally solved by means of an iterative method, although the fixed-point iteration algorithm does not always converge. This solution scheme is not the only one possible and is not an essential feature of the Hartree–Fock method.\n\nThe Hartree–Fock method finds its typical application in the solution of the Schrödinger equation for atoms, molecules, nanostructures and solids but it has also found widespread use in nuclear physics. (See Hartree–Fock–Bogoliubov method for a discussion of its application in nuclear structure theory). 
In atomic structure theory, calculations may be for a spectrum with many excited energy levels and consequently the Hartree–Fock method for atoms assumes the wave function is a single configuration state function with well-defined quantum numbers and that the energy level is not necessarily the ground state.\n\nFor both atoms and molecules, the Hartree–Fock solution is the central starting point for most methods that describe the many-electron system more accurately.\n\nThe rest of this article will focus on applications in electronic structure theory suitable for molecules with the atom as a special case. The discussion here is only for the Restricted Hartree–Fock method, where the atom or molecule is a closed-shell system with all orbitals (atomic or molecular) doubly occupied. Open-shell systems, where some of the electrons are not paired, can be dealt with by either the restricted open-shell or the unrestricted Hartree-Fock methods.\n\n## Brief history\n\n### Early semi-empirical methods\n\nThe origin of the Hartree–Fock method dates back to the end of the 1920s, soon after the discovery of the Schrödinger equation in 1926. Douglas Hartree's methods were guided by some earlier, semi-empirical methods of the early 1920s (by E. Fues, R. B. Lindsay, and himself) set in the old quantum theory of Bohr.\n\nIn the Bohr model of the atom, the energy of a state with principal quantum number n is given in atomic units as $E=-1/n^{2}$", null, ". It was observed from atomic spectra that the energy levels of many-electron atoms are well described by applying a modified version of Bohr's formula. By introducing the quantum defect d as an empirical parameter, the energy levels of a generic atom were well approximated by the formula $E=-1/(n+d)^{2}$", null, ", in the sense that one could reproduce fairly well the observed transitions levels observed in the X-ray region (for example, see the empirical discussion and derivation in Moseley's law). 
The existence of a non-zero quantum defect was attributed to electron–electron repulsion, which clearly does not exist in the isolated hydrogen atom. This repulsion resulted in partial screening of the bare nuclear charge. These early researchers later introduced other potentials containing additional empirical parameters with the hope of better reproducing the experimental data.\n\n### Hartree method\n\nIn 1927, D. R. Hartree introduced a procedure, which he called the self-consistent field method, to calculate approximate wave functions and energies for atoms and ions. Hartree sought to do away with empirical parameters and solve the many-body time-independent Schrödinger equation from fundamental physical principles, i.e., ab initio. His first proposed method of solution became known as the Hartree method, or Hartree product. However, many of Hartree's contemporaries did not understand the physical reasoning behind the Hartree method: it appeared to many people to contain empirical elements, and its connection to the solution of the many-body Schrödinger equation was unclear. However, in 1928 J. C. Slater and J. A. Gaunt independently showed that the Hartree method could be couched on a sounder theoretical basis by applying the variational principle to an ansatz (trial wave function) as a product of single-particle functions.\n\nIn 1930, Slater and V. A. Fock independently pointed out that the Hartree method did not respect the principle of antisymmetry of the wave function. The Hartree method used the Pauli exclusion principle in its older formulation, forbidding the presence of two electrons in the same quantum state. 
However, this was shown to be fundamentally incomplete in its neglect of quantum statistics.\n\n### Hartree-Fock\n\nA solution to the lack of anti-symmetry in the Hartree method came when it was shown that a Slater determinant, a determinant of one-particle orbitals first used by Heisenberg and Dirac in 1926, trivially satisfies the antisymmetric property of the exact solution and hence is a suitable ansatz for applying the variational principle. The original Hartree method can then be viewed as an approximation to the Hartree–Fock method by neglecting exchange. Fock's original method relied heavily on group theory and was too abstract for contemporary physicists to understand and implement. In 1935, Hartree reformulated the method to be more suitable for the purposes of calculation.\n\nThe Hartree–Fock method, despite its physically more accurate picture, was little used until the advent of electronic computers in the 1950s due to the much greater computational demands over the early Hartree method and empirical models. Initially, both the Hartree method and the Hartree–Fock method were applied exclusively to atoms, where the spherical symmetry of the system allowed one to greatly simplify the problem. These approximate methods were (and are) often used together with the central field approximation, to impose the condition that electrons in the same shell have the same radial part, and to restrict the variational solution to be a spin eigenfunction. Even so, calculating a solution by hand using the Hartree–Fock equations for a medium-sized atom was laborious; small molecules required computational resources far beyond what was available before 1950.\n\n## Hartree–Fock algorithm\n\nThe Hartree–Fock method is typically used to solve the time-independent Schrödinger equation for a multi-electron atom or molecule as described in the Born–Oppenheimer approximation. 
Since there are no known analytic solutions for many-electron systems (there are solutions for one-electron systems such as hydrogenic atoms and the diatomic hydrogen cation), the problem is solved numerically. Due to the nonlinearities introduced by the Hartree–Fock approximation, the equations are solved using a nonlinear method such as iteration, which gives rise to the name \"self-consistent field method\".\n\n### Approximations\n\nThe Hartree–Fock method makes five major simplifications in order to deal with this task:\n\n• The Born–Oppenheimer approximation is inherently assumed. The full molecular wave function is actually a function of the coordinates of each of the nuclei, in addition to those of the electrons.\n• Typically, relativistic effects are completely neglected. The momentum operator is assumed to be completely non-relativistic.\n• The variational solution is assumed to be a linear combination of a finite number of basis functions, which are usually (but not always) chosen to be orthogonal. The finite basis set is assumed to be approximately complete.\n• Each energy eigenfunction is assumed to be describable by a single Slater determinant, an antisymmetrized product of one-electron wave functions (i.e., orbitals).\n• The mean-field approximation is implied. Effects arising from deviations from this assumption are neglected. These effects are often collectively used as a definition of the term electron correlation. However, the label \"electron correlation\" strictly spoken encompasses both Coulomb correlation and Fermi correlation, and the latter is an effect of electron exchange, which is fully accounted for in the Hartree–Fock method. Stated in this terminology, the method only neglects the Coulomb correlation. 
However, this is an important flaw, accounting for (among others) Hartree–Fock's inability to capture London dispersion.\n\nRelaxation of the last two approximations give rise to many so-called post-Hartree–Fock methods.\n\n### Variational optimization of orbitals\n\nThe variational theorem states that for a time-independent Hamiltonian operator, any trial wave function will have an energy expectation value that is greater than or equal to the true ground-state wave function corresponding to the given Hamiltonian. Because of this, the Hartree–Fock energy is an upper bound to the true ground-state energy of a given molecule. In the context of the Hartree–Fock method, the best possible solution is at the Hartree–Fock limit; i.e., the limit of the Hartree–Fock energy as the basis set approaches completeness. (The other is the full-CI limit, where the last two approximations of the Hartree–Fock theory as described above are completely undone. It is only when both limits are attained that the exact solution, up to the Born–Oppenheimer approximation, is obtained.) The Hartree–Fock energy is the minimal energy for a single Slater determinant.\n\nThe starting point for the Hartree–Fock method is a set of approximate one-electron wave functions known as spin-orbitals. For an atomic orbital calculation, these are typically the orbitals for a hydrogen-like atom (an atom with only one electron, but the appropriate nuclear charge). For a molecular orbital or crystalline calculation, the initial approximate one-electron wave functions are typically a linear combination of atomic orbitals (LCAO).\n\nThe orbitals above only account for the presence of other electrons in an average manner. In the Hartree–Fock method, the effect of other electrons are accounted for in a mean-field theory context. The orbitals are optimized by requiring them to minimize the energy of the respective Slater determinant. 
The resultant variational conditions on the orbitals lead to a new one-electron operator, the Fock operator. At the minimum, the occupied orbitals are eigensolutions to the Fock operator via a unitary transformation between themselves. The Fock operator is an effective one-electron Hamiltonian operator being the sum of two terms. The first is a sum of kinetic-energy operators for each electron, the internuclear repulsion energy, and a sum of nuclear–electronic Coulombic attraction terms. The second are Coulombic repulsion terms between electrons in a mean-field theory description; a net repulsion energy for each electron in the system, which is calculated by treating all of the other electrons within the molecule as a smooth distribution of negative charge. This is the major simplification inherent in the Hartree–Fock method and is equivalent to the fifth simplification in the above list.\n\nSince the Fock operator depends on the orbitals used to construct the corresponding Fock matrix, the eigenfunctions of the Fock operator are in turn new orbitals, which can be used to construct a new Fock operator. In this way, the Hartree–Fock orbitals are optimized iteratively until the change in total electronic energy falls below a predefined threshold. In this way, a set of self-consistent one-electron orbitals is calculated. The Hartree–Fock electronic wave function is then the Slater determinant constructed from these orbitals. Following the basic postulates of quantum mechanics, the Hartree–Fock wave function can then be used to compute any desired chemical or physical property within the framework of the Hartree–Fock method and the approximations employed.\n\n## Mathematical formulation\n\n### The Fock operator\n\nBecause the electron–electron repulsion term of the molecular Hamiltonian involves the coordinates of two different electrons, it is necessary to reformulate it in an approximate way. 
Under this approximation (outlined under Hartree–Fock algorithm), all of the terms of the exact Hamiltonian except the nuclear–nuclear repulsion term are re-expressed as the sum of one-electron operators outlined below, for closed-shell atoms or molecules (with two electrons in each spatial orbital). The \"(1)\" following each operator symbol simply indicates that the operator is 1-electron in nature.\n\n${\\hat {F}}[\\{\\phi _{j}\\}](1)={\\hat {H}}^{\\text{core}}(1)+\\sum _{j=1}^{N/2}[2{\\hat {J}}_{j}(1)-{\\hat {K}}_{j}(1)],$", null, "where\n\n${\\hat {F}}[\\{\\phi _{j}\\}](1)$", null, "is the one-electron Fock operator generated by the orbitals $\\phi _{j}$", null, ", and\n\n${\\hat {H}}^{\\text{core}}(1)=-{\\frac {1}{2}}\\nabla _{1}^{2}-\\sum _{\\alpha }{\\frac {Z_{\\alpha }}{r_{1\\alpha }}}$", null, "is the one-electron core Hamiltonian. Also\n\n${\\hat {J}}_{j}(1)$", null, "is the Coulomb operator, defining the electron–electron repulsion energy due to each of the two electrons in the j-th orbital. Finally,\n\n${\\hat {K}}_{j}(1)$", null, "is the exchange operator, defining the electron exchange energy due to the antisymmetry of the total N-electron wave function. This \"exchange energy\" operator ${\\hat {K}}$", null, "is simply an artifact of the Slater determinant. Finding the Hartree–Fock one-electron wave functions is now equivalent to solving the eigenfunction equation\n\n${\\hat {F}}(1)\\phi _{i}(1)=\\epsilon _{i}\\phi _{i}(1),$", null, "where $\\phi _{i}(1)$", null, "are a set of one-electron wave functions, called the Hartree–Fock molecular orbitals.\n\n### Linear combination of atomic orbitals\n\nTypically, in modern Hartree–Fock calculations, the one-electron wave functions are approximated by a linear combination of atomic orbitals. These atomic orbitals are called Slater-type orbitals. 
Furthermore, it is very common for the \"atomic orbitals\" in use to actually be composed of a linear combination of one or more Gaussian-type orbitals, rather than Slater-type orbitals, in the interests of saving large amounts of computation time.\n\nVarious basis sets are used in practice, most of which are composed of Gaussian functions. In some applications, an orthogonalization method such as the Gram–Schmidt process is performed in order to produce a set of orthogonal basis functions. This can in principle save computational time when the computer is solving the Roothaan–Hall equations by converting the overlap matrix effectively to an identity matrix. However, in most modern computer programs for molecular Hartree–Fock calculations this procedure is not followed due to the high numerical cost of orthogonalization and the advent of more efficient, often sparse, algorithms for solving the generalized eigenvalue problem, of which the Roothaan–Hall equations are an example.\n\n## Numerical stability\n\nNumerical stability can be a problem with this procedure and there are various ways of combating this instability. One of the most basic and generally applicable is called F-mixing or damping. With F-mixing, once a single-electron wave function is calculated, it is not used directly. Instead, some combination of that calculated wave function and the previous wave functions for that electron is used, the most common being a simple linear combination of the calculated and immediately preceding wave function. A clever dodge, employed by Hartree, for atomic calculations was to increase the nuclear charge, thus pulling all the electrons closer together. As the system stabilised, this was gradually reduced to the correct charge. In molecular calculations a similar approach is sometimes used by first calculating the wave function for a positive ion and then to use these orbitals as the starting point for the neutral molecule. 
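The damped fixed-point structure can be seen on a toy problem. The sketch below is not Hartree–Fock itself: it is a made-up one-particle, two-site model whose 2×2 Hamiltonian depends on its own ground-state density, so it has the same self-consistency loop (diagonalize, rebuild the operator from the result, mix the new solution with the old one, repeat until converged).

```python
import math

def scf_two_site(t=1.0, u=1.0, mix=0.5, tol=1e-10, max_iter=1000):
    """Damped fixed-point ("F-mixing"-style) iteration on a toy SCF problem.

    One particle hops between two sites; each on-site energy is shifted
    by u times the particle's mean density on the *other* site, so the
    Hamiltonian [[a, t], [t, b]] depends on its own eigenvector.
    """
    n1 = 0.9                                  # initial guess for the site-1 density
    e = 0.0
    for _ in range(max_iter):
        a, b = u * (1 - n1), u * n1           # density-dependent diagonal
        # lowest eigenpair of the symmetric 2x2 matrix, in closed form
        d = math.hypot((a - b) / 2, t)
        e = (a + b) / 2 - d                   # ground-state energy
        v1, v2 = t, e - a                     # (unnormalized) eigenvector
        norm2 = v1 * v1 + v2 * v2
        n1_new = v1 * v1 / norm2              # updated site-1 density
        if abs(n1_new - n1) < tol:
            return e, n1_new
        n1 = (1 - mix) * n1 + mix * n1_new    # damped update, instead of using
                                              # the new density directly
    return e, n1
```

With these parameters the iteration settles on the symmetric solution (equal density on both sites); setting `mix = 1` recovers the undamped fixed-point iteration, which, as the text notes for the real equations, need not converge for stronger couplings.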
Modern molecular Hartree–Fock computer programs use a variety of methods to ensure convergence of the Roothaan–Hall equations.

## Weaknesses, extensions, and alternatives

Of the five simplifications outlined in the section "Hartree–Fock algorithm", the fifth is typically the most important. Neglect of electron correlation can lead to large deviations from experimental results. A number of approaches to this weakness, collectively called post-Hartree–Fock methods, have been devised to include electron correlation in the multi-electron wave function. One of these approaches, Møller–Plesset perturbation theory, treats correlation as a perturbation of the Fock operator. Others expand the true multi-electron wave function in terms of a linear combination of Slater determinants, such as multi-configurational self-consistent field, configuration interaction, quadratic configuration interaction, and complete active space SCF (CASSCF). Still others (such as variational quantum Monte Carlo) modify the Hartree–Fock wave function by multiplying it by a correlation function ("Jastrow" factor), a term which is explicitly a function of multiple electrons that cannot be decomposed into independent single-particle functions.

An alternative to Hartree–Fock calculations used in some cases is density functional theory, which treats both exchange and correlation energies, albeit approximately. Indeed, it is common to use calculations that are a hybrid of the two methods: the popular B3LYP scheme is one such hybrid functional method. Another option is to use modern valence bond methods.

## Software packages

For a list of software packages known to handle Hartree–Fock calculations, particularly for molecules and solids, see the list of quantum chemistry and solid state physics software.
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/71538c8b5aa1decc955f359babc9dac11da0cf5c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/99acbd0c8b3692b9136bdde36b017417dedd2412", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/fcd9f8f3d331a076258da917ded444d92e46897a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/32f43980de090a8055a1bd5b601945b2316d583f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e4cbf9f77e99bf49bdc5b3afa8b6b77d15b9be6b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/52970d64d262a5c3069d55370fe56fd4ef8f1087", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d362d254e38d2a6b9029c8d66e7d1cf5f617ce8e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/46b2326b3ac05fb2d9fad483ff41416b634dcf7f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2df103a2466b4cf83349a2d9bcf90c218b832e30", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/063bb1c09af91744ce002283953093b597ab2cb9", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b6bf6263db5792886610a72f226f592d859ca9a4", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8874697,"math_prob":0.9652313,"size":20901,"snap":"2020-10-2020-16","text_gpt3_token_len":4706,"char_repetition_ratio":0.16882806,"word_repetition_ratio":0.021484375,"special_character_ratio":0.21467872,"punctuation_ratio":0.13095239,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9918192,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,9,null,9,null,5,null,9,null,null,null,9,null,9,null,9,null,null,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-30T20:13:12Z\",\"WARC-Record-ID\":\"<urn:uuid:88d06f49-10f2-4801-8047-b81b8f1019d7>\",\"Content-Length\":\"112824\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df220d33-71f6-41fa-86cd-eb39fe1ccf56>\",\"WARC-Concurrent-To\":\"<urn:uuid:5194d4b0-ea2c-4bd3-8dfa-3bb50c128c8d>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method\",\"WARC-Payload-Digest\":\"sha1:6QYLJUFCVJYX6B4KGMJRTLLVTCTROBQS\",\"WARC-Block-Digest\":\"sha1:UFSYZJYBFQAMWQRSUODRMDCBWL46TCM4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370497301.29_warc_CC-MAIN-20200330181842-20200330211842-00250.warc.gz\"}"}
https://www.oreilly.com/library/view/python-machine-learning/9781789616729/34084a25-ff08-4950-bd17-ce0037eb8283.xhtml
[ "# Learning Bayes' theorem by examples\n\nIt is important to understand Bayes' theorem before diving into the classifier. Let A and B denote two events. Events could be that it will rain tomorrow2 kings are drawn from a deck of cards; or a person has cancer. In Bayes' theorem, P(A |B) is the probability that A occurs given that B is true. It can be computed as follows:", null, "Here, P(B|A) is the probability of observing B given that A occurs, while P(A) and P(B) are the probability that A and B occur, respectively. Too abstract? Let's look at some of the following concrete examples:\n\n• Example 1: Given two coins, one is unfair with 90% of flips getting ...\n\nGet Python Machine Learning By Example - Second Edition now with O’Reilly online learning.\n\nO’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers." ]
[ null, "https://www.oreilly.com/library/view/python-machine-learning/9781789616729/assets/c91deafa-f289-42e0-9b77-ef34509e1562.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95933104,"math_prob":0.89809376,"size":616,"snap":"2019-51-2020-05","text_gpt3_token_len":150,"char_repetition_ratio":0.11764706,"word_repetition_ratio":0.0,"special_character_ratio":0.24512987,"punctuation_ratio":0.13970588,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99617726,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T17:15:31Z\",\"WARC-Record-ID\":\"<urn:uuid:73a064be-196f-44e2-8c75-6c4351e94af0>\",\"Content-Length\":\"45559\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:960d3741-bee5-4fac-8312-27a6fa0d4899>\",\"WARC-Concurrent-To\":\"<urn:uuid:3bbbcaa4-fc0f-4c47-8ed0-c7087327c780>\",\"WARC-IP-Address\":\"96.17.140.116\",\"WARC-Target-URI\":\"https://www.oreilly.com/library/view/python-machine-learning/9781789616729/34084a25-ff08-4950-bd17-ce0037eb8283.xhtml\",\"WARC-Payload-Digest\":\"sha1:O3XELVGRPFJIIBKA6YTSZNAOK7A2FXX3\",\"WARC-Block-Digest\":\"sha1:UBRMY4ZPT7UERBTBFNQKXNNKTDIIG53A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594662.6_warc_CC-MAIN-20200119151736-20200119175736-00320.warc.gz\"}"}
https://devsolus.com/2023/01/18/save-the-quantity-of-each-combination-javascript/?amp=1
[ "# Save the quantity of each combination javascript\n\nSo I have 2 arrays. One contains the colors of a product and one contains the sizes of a product.\n\nLet’s say I have 2 colors (RED and BLUE), and 2 sizes (M and XL)\n\nThis right here is my JSX code:\n\n``````{\nsizeArray.map((size) => colorArray.map((color) => <input onChange={(e) => setCurrentQuantity(e.target.value)} placeholder={size + \"/\" + color}></input>))\n}\n``````\n\nSo the code above creates 4 inputs like this:\n\ninput 1: BLUE/M\ninput 2: BLUE/XL\ninput 3: RED/M\ninput 4: RED/XL\n\nMy question is how do I save all the inputs as an object?\n\n### Solution:\n\nYou can store an object in the state, with one key-value pair per input.\n\n``````const [quantity, setQuantity] = useState({});\n// ...\nsizeArray.map((size) => colorArray.map((color) => <input\nonChange={(e) => setQuantity(prev => ({...prev, [size+'/'+color]: e.target.value}))}\nvalue={quantity[size+'/'+color]} placeholder={size + \"/\" + color}/>))\n``````" ]
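The keyed-state pattern in the answer is language-agnostic: keep one dictionary entry per size/color combination and update it as inputs change. A minimal Python sketch of the same idea (names and values are illustrative, not from the post):

```python
# One dictionary entry per size/color combination, updated as "inputs"
# arrive -- the same shape the React state object takes in the answer.
sizes = ["M", "XL"]
colors = ["BLUE", "RED"]

quantity = {}  # plays the role of the React state object

def set_quantity(size: str, color: str, value: str) -> None:
    """Analogue of setQuantity(prev => ({...prev, [size+'/'+color]: value}))."""
    quantity[f"{size}/{color}"] = value

for size in sizes:
    for color in colors:
        set_quantity(size, color, "0")  # initialize every combination

set_quantity("M", "BLUE", "7")  # a user edits one input
print(quantity)
```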
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5283417,"math_prob":0.98766834,"size":880,"snap":"2023-14-2023-23","text_gpt3_token_len":237,"char_repetition_ratio":0.13013698,"word_repetition_ratio":0.046511628,"special_character_ratio":0.31931818,"punctuation_ratio":0.17514125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96147203,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-20T19:07:05Z\",\"WARC-Record-ID\":\"<urn:uuid:93d798ae-268c-4d24-afdb-aa9d57f87dcd>\",\"Content-Length\":\"84443\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5d223e33-7bba-41e8-b985-63d97aeb1755>\",\"WARC-Concurrent-To\":\"<urn:uuid:ff79a54f-c9a2-4c1d-9887-21d6763dfb5f>\",\"WARC-IP-Address\":\"192.0.78.211\",\"WARC-Target-URI\":\"https://devsolus.com/2023/01/18/save-the-quantity-of-each-combination-javascript/?amp=1\",\"WARC-Payload-Digest\":\"sha1:JNGB66YYELQOETL7CLDEVHO26W3MXUOV\",\"WARC-Block-Digest\":\"sha1:633QMN2SPTVLRLB56O6A5JXDQWXROL6B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943555.25_warc_CC-MAIN-20230320175948-20230320205948-00218.warc.gz\"}"}
http://currency7.com/SEK-to-RON-exchange-rate-converter?amount=300
[ "# 300 Swedish Krona (SEK) to Romanian New Leu (RON)\n\nThe currency calculator will convert exchange rate of Swedish Krona (SEK) to Romanian leu (RON).\n\n• Swedish Krona\nThe Swedish krona (SEK) is the currency of Sweden. The currency code is SEK and currency symbol is kr. The Swedish krona is subdivided into 100 ören (not in circulation). Frequently used Swedish krona coins are in denominations of 1 kr, 5 kr, 10 kr. Frequently used Swedish krona banknotes are in denominations of 20 kr, 50 kr, 100 kr, 500 kr.\n• Romanian leu\nThe Romanian leu (RON) is the currency of Romania. The currency code is RON and currency symbol is L. The Romanian leu is subdivided into 100 bani (singular: ban). Plural of leu is lei. Frequently used Romanian leu coins are in denominations of 10 bani, 50 bani. Frequently used Romanian leu banknotes are in denominations of 1 leu, 5 lei, 10 lei, 50 lei, 100 lei.\n• 10 SEK = 4.36 RON\n• 50 SEK = 21.78 RON\n• 100 SEK = 43.56 RON\n• 200 SEK = 87.11 RON\n• 250 SEK = 108.89 RON\n• 500 SEK = 217.78 RON\n• 1,000 SEK = 435.57 RON\n• 2,000 SEK = 871.14 RON\n• 2,500 SEK = 1,088.92 RON\n• 5,000 SEK = 2,177.85 RON\n• 10,000 SEK = 4,355.70 RON\n• 20,000 SEK = 8,711.39 RON\n• 25,000 SEK = 10,889.24 RON\n• 50,000 SEK = 21,778.49 RON\n• 100,000 SEK = 43,556.97 RON\n• 1 RON = 2.30 SEK\n• 5 RON = 11.48 SEK\n• 10 RON = 22.96 SEK\n• 20 RON = 45.92 SEK\n• 50 RON = 114.79 SEK\n• 100 RON = 229.58 SEK\n• 200 RON = 459.17 SEK\n• 250 RON = 573.96 SEK\n• 500 RON = 1,147.92 SEK\n• 1,000 RON = 2,295.84 SEK\n• 2,000 RON = 4,591.69 SEK\n• 2,500 RON = 5,739.61 SEK\n• 5,000 RON = 11,479.22 SEK\n• 10,000 RON = 22,958.44 SEK\n• 50,000 RON = 114,792.19 SEK\n\n## Popular SEK pairing\n\n` <a href=\"http://currency7.com/SEK-to-RON-exchange-rate-converter?amount=300\">300 SEK in RON</a> `" ]
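The table's rates can be applied programmatically. A minimal sketch using the snapshot rate implied above (100,000 SEK = 43,556.97 RON, i.e. 1 SEK ≈ 0.4356 RON at the time of the page; real rates change continuously):

```python
# Fixed-rate conversion using the snapshot implied by the table above.
SEK_TO_RON = 43556.97 / 100000  # rate at the time of the snapshot

def sek_to_ron(amount_sek: float) -> float:
    """Convert SEK to RON at the snapshot rate, rounded to 2 decimals."""
    return round(amount_sek * SEK_TO_RON, 2)

print(sek_to_ron(300))  # the page's headline amount, about 130.67 RON
```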
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.650715,"math_prob":0.994466,"size":2732,"snap":"2022-40-2023-06","text_gpt3_token_len":985,"char_repetition_ratio":0.24450147,"word_repetition_ratio":0.03353057,"special_character_ratio":0.38103953,"punctuation_ratio":0.14991182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9540293,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-30T15:43:09Z\",\"WARC-Record-ID\":\"<urn:uuid:bd7c282a-eccb-49fd-9c2d-e7d5b01ce64f>\",\"Content-Length\":\"28995\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc7fc59b-be39-443c-9220-8ae805286bdc>\",\"WARC-Concurrent-To\":\"<urn:uuid:8dbfaf27-3cbd-4992-8154-048863b897ba>\",\"WARC-IP-Address\":\"70.35.206.41\",\"WARC-Target-URI\":\"http://currency7.com/SEK-to-RON-exchange-rate-converter?amount=300\",\"WARC-Payload-Digest\":\"sha1:QJSLRPRD7O7R63OW45CN4SWQRG3FCBNT\",\"WARC-Block-Digest\":\"sha1:CBOOR5WSWDNAJ7ZFSOP3DKHE2HQLRIX5\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499819.32_warc_CC-MAIN-20230130133622-20230130163622-00707.warc.gz\"}"}
http://cneurocvs.rmki.kfki.hu/igraph/doc/R/layout.html
[ "layout {igraph} R Documentation\n\n## Generate coordinates for plotting graphs\n\n### Description\n\nSome simple and not so simple functions determining the placement of the vertices for drawing a graph.\n\n### Usage\n\n```layout.random(graph, params, dim=2)\nlayout.circle(graph, params)\nlayout.sphere(graph, params)\nlayout.fruchterman.reingold(graph, ..., dim=2,\nverbose=igraph.par(\"verbose\"), params)\nlayout.kamada.kawai(graph, ..., dim=2,\nverbose=igraph.par(\"verbose\"), params)\nlayout.spring(graph, ..., params)\nlayout.reingold.tilford(graph, ..., params)\nlayout.fruchterman.reingold.grid(graph, ...,\nverbose=igraph.par(\"verbose\"), params)\nlayout.lgl(graph, ..., params)\nlayout.graphopt(graph, ..., verbose = igraph.par(\"verbose\"), params = list())\nlayout.mds(graph, d=shortest.paths(graph), ...)\nlayout.svd(graph, d=shortest.paths(graph), ...)\nlayout.norm(layout, xmin = NULL, xmax = NULL, ymin = NULL, ymax = NULL,\nzmin = NULL, zmax = NULL)\n```\n\n### Arguments\n\n `graph` The graph to place. `params` The list of function dependent parameters. `dim` Numeric constant, either 2 or 3. Some functions are able to generate 2d and 3d layouts as well, supply this argument to change the default behavior. `...` Function dependent parameters, this is an alternative notation to the `params` argument. `verbose` Logical constant, whether to show a progress bar while calculating the layout. `d` The matrix used for multidimensional scaling. By default it is the distance matrix of the graph. `layout` A matrix with two or three columns, the layout to normalize. `xmin,xmax` The limits for the first coordinate, if one of them or both are `NULL` then no normalization is performed along this direction. `ymin,ymax` The limits for the second coordinate, if one of them or both are `NULL` then no normalization is performed along this direction.
`zmin,zmax` The limits for the third coordinate, if one of them or both are `NULL` then no normalization is performed along this direction.\n\n### Details\n\nThese functions calculate the coordinates of the vertices for a graph usually based on some optimality criterion.\n\n`layout.random` simply places the vertices randomly on a square. It has no parameters.\n\n`layout.circle` places the vertices on a unit circle equidistantly. It has no parameters.\n\n`layout.sphere` places the vertices (approximately) uniformly on the surface of a sphere, this is thus a 3d layout. It is not clear however what “uniformly on a sphere” means.\n\n`layout.fruchterman.reingold` uses a force-based algorithm proposed by Fruchterman and Reingold, see references. Parameters and their default values:\n\nniter\nNumeric, the number of iterations to perform (500).\ncoolexp\nNumeric, the cooling exponent for the simulated annealing (3).\nmaxdelta\nMaximum change (`vcount(graph)`).\narea\nArea parameter (`vcount(graph)^2`).\nrepulserad\nCancellation radius (`area`*vcount(graph)).\nweights\nA vector giving edge weights or `NULL`. If not `NULL` then the attraction along the edges will be multiplied by the given edge weights (`NULL`).\n\nThis function was ported from the SNA package.\n\n`layout.kamada.kawai` is another force based algorithm. Parameters and default values:\n\nniter\nNumber of iterations to perform (1000).\nsigma\nSets the base standard deviation of position change proposals (vcount(graph)/4).\ninitemp\nThe initial temperature (10).\ncoolexp\nThe cooling exponent (0.99).\nkkconst\nSets the Kamada-Kawai vertex attraction constant (vcount(graph)**2).\n\nThis function performs very well for connected graphs, but it gives poor results for unconnected ones. This function was ported from the SNA package.\n\n`layout.spring` is a spring embedder algorithm. Parameters and default values:\n\nmass\nThe vertex mass (in ‘quasi-kilograms’).
(Defaults to 0.1.)\nequil\nThe equilibrium spring extension (in ‘quasi-meters’). (Defaults to 1.)\nk\nThe spring coefficient (in ‘quasi-Newtons per quasi-meter’). (Defaults to 0.001.)\nrepeqdis\nThe point at which repulsion (if employed) balances out the spring extension force (in ‘quasi-meters’). (Defaults to 0.1.)\nkfr\nThe base coefficient of kinetic friction (in ‘quasi-Newton quasi-kilograms’). (Defaults to 0.01.)\nrepulse\nShould repulsion be used? (Defaults to FALSE.)\n\nThis function was ported from the SNA package.\n\n`layout.reingold.tilford` generates a tree-like layout, so it is mainly for trees. Parameters and default values:\n\nroot\nThe id of the root vertex, defaults to 0.\ncircular\nLogical scalar, whether to plot the tree in a circular fashion, defaults to `FALSE`.\n\n`layout.fruchterman.reingold.grid` is similar to `layout.fruchterman.reingold` but repelling force is calculated only between vertices that are closer to each other than a limit, so it is faster. Parameters and default values:\n\nniter\nNumeric, the number of iterations to perform (500).\nmaxdelta\nMaximum change for one vertex in one iteration. (The number of vertices in the graph.)\narea\nThe area of the surface on which the vertices are placed. (The square of the number of vertices.)\ncoolexp\nThe cooling exponent of the simulated annealing (1.5).\nrepulserad\nCancellation radius for the repulsion (the `area` times the number of vertices).\ncellsize\nThe size of the cells for the grid. When calculating the repulsion forces between vertices only vertices in the same or neighboring grid cells are taken into account (the fourth root of the `area`).\n\n`layout.lgl` is for large connected graphs, it is similar to the layout generator of the Large Graph Layout software (http://bioinformatics.icmb.utexas.edu/lgl).
Parameters and default values:\n\nmaxiter\nThe maximum number of iterations to perform (150).\nmaxdelta\nThe maximum change for a vertex during an iteration (the number of vertices).\narea\nThe area of the surface on which the vertices are placed (square of the number of vertices).\ncoolexp\nThe cooling exponent of the simulated annealing (1.5).\nrepulserad\nCancellation radius for the repulsion (the `area` times the number of vertices).\ncellsize\nThe size of the cells for the grid. When calculating the repulsion forces between vertices only vertices in the same or neighboring grid cells are taken into account (the fourth root of the `area`).\nroot\nThe id of the vertex to place at the middle of the layout. The default value is -1 which means that a random vertex is selected.\n\n`layout.graphopt` is a port of the graphopt layout algorithm by Michael Schmuhl. graphopt version 0.4.1 was rewritten in C; the support for layers was removed (might be added later) and the code was reorganized a bit to avoid some unnecessary steps if the node charge (see below) is zero.\n\ngraphopt uses physical analogies for defining attracting and repelling forces among the vertices and then the physical system is simulated until it reaches an equilibrium. (There is no simulated annealing or anything like that, so a stable fixed point is not guaranteed.)\n\nSee also http://www.schmuhl.org/graphopt/ for the original graphopt.\n\nParameters and default values:\n\nniter\nInteger scalar, the number of iterations to perform. Should be a couple of hundred in general. If you have a large graph then you might want to only do a few iterations and then check the result. If it is not good enough you can feed it in again in the `start` argument. The default value is 500.\ncharge\nThe charge of the vertices, used to calculate electric repulsion. The default is 0.001.\nmass\nThe mass of the vertices, used for the spring forces.
The default is 30.\nspring.length\nThe length of the springs, an integer number. The default value is zero.\nspring.constant\nThe spring constant, the default value is one.\nmax.sa.movement\nReal constant, it gives the maximum amount of movement allowed in a single step along a single axis. The default value is 5.\nstart\nIf given, then it should be a matrix with two columns and one line for each vertex. This matrix will be used as starting positions for the algorithm. If not given, then a random starting matrix is used.\n\n`layout.mds` uses metric multidimensional scaling for generating the coordinates. This function does not have the usual `params` argument. It can just take a single argument, the distance matrix used for multidimensional scaling. This function generates the layout separately for each graph component and then merges them via `layout.merge`. `layout.mds` is an experimental function currently.\n\n`layout.svd` is a currently experimental layout function based on singular value decomposition. It does not have the usual `params` argument, but takes a single argument, the distance matrix of the graph. This function generates the layout separately for each graph component and then merges them via `layout.merge`.\n\n`layout.norm` normalizes a layout, it linearly transforms each coordinate separately to fit into the given limits.\n\n`layout.drl` is another force-driven layout generator, it is suitable for quite large graphs. See `layout.drl` for details.\n\n### Value\n\nAll these functions return a numeric matrix with at least two columns and the same number of lines as the number of vertices.\n\n### Author(s)\n\nGabor Csardi [email protected]\n\n### References\n\nFruchterman, T.M.J. and Reingold, E.M. (1991). Graph Drawing by Force-directed Placement. Software - Practice and Experience, 21(11):1129-1164.\n\nKamada, T. and Kawai, S. (1989). An Algorithm for Drawing General Undirected Graphs.
Information Processing Letters, 31(1):7-15.\n\nReingold, E and Tilford, J (1981). Tidier drawing of trees. IEEE Trans. on Softw. Eng., SE-7(2):223–228.\n\n`layout.drl`, `plot.igraph`, `tkplot`\n```g <- graph.ring(10)" ]
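The linear rescaling that `layout.norm` performs on each coordinate column can be sketched language-independently. A minimal Python version of the idea (an illustration, not igraph's actual implementation):

```python
def norm_coords(values, lo, hi):
    """Linearly map a list of coordinates into [lo, hi], as layout.norm
    does for each coordinate column (a sketch, not igraph's code)."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:                          # degenerate column: mid-range
        return [(lo + hi) / 2.0 for _ in values]
    scale = (hi - lo) / (vmax - vmin)
    return [lo + (v - vmin) * scale for v in values]

xs = [2.0, 4.0, 10.0]
print(norm_coords(xs, -1.0, 1.0))  # -> [-1.0, -0.5, 1.0]
```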
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7664408,"math_prob":0.96473825,"size":5932,"snap":"2019-13-2019-22","text_gpt3_token_len":1430,"char_repetition_ratio":0.14153172,"word_repetition_ratio":0.12945369,"special_character_ratio":0.21982469,"punctuation_ratio":0.21247892,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99043334,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-21T22:40:31Z\",\"WARC-Record-ID\":\"<urn:uuid:ac98809c-d5e6-49eb-bb80-ad4b60b81aaa>\",\"Content-Length\":\"12854\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f0b6c63c-59e2-4ec5-8770-cff8f8f9d601>\",\"WARC-Concurrent-To\":\"<urn:uuid:af9c54ee-0a99-4c34-99d3-55815e1b59a9>\",\"WARC-IP-Address\":\"148.6.178.65\",\"WARC-Target-URI\":\"http://cneurocvs.rmki.kfki.hu/igraph/doc/R/layout.html\",\"WARC-Payload-Digest\":\"sha1:GBSCITKXBYXRVTAKETOYJ47GZV55SZLZ\",\"WARC-Block-Digest\":\"sha1:RL5GXXSP6RQVT3PXZS5G6B323EMXFFGR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202572.7_warc_CC-MAIN-20190321213516-20190321235516-00092.warc.gz\"}"}
https://artechnologygroup.com/what-is-nested-loop/
[ "# What is nested loop?\n\nWhat is nested loop?\n\n• C programming allows the use of one loop inside another loop. This is called a nested loop.\n\nThere are three basic types of loop:\n\n1. The for loop which executes the repeated instructions a fixed number of times, or iterations.\n2. The while loop which executes its instructions while the test that controls the loop is true. The while loop tests its condition before each iteration of the loop.\n3. The do while loop is similar to the while loop but it tests its condition after each iteration of the loop.\n\nNesting is simply a term given to the use of one or more loops within another, where for each iteration of the outer loop, the inner loop iterates numerous times. An analogy here is a 12-hour clock: the hour hand (outer loop) moves only once an hour from 1 o’clock to 2 o’clock, 2 o’clock to 3 and so on, but for each hour, the minute hand (inner loop) moves sixty times.\n\nThe following example demonstrates a pair of nested for loops using the Java language:\n\npublic class NestedLoopDemo {\n    public static void main(String[] args) {\n        for (int row = 1; row <= 3; row++) { // Outer loop.\n            for (int col = 1; col <= 3; col++) { // Inner loop.\n                System.out.println(row + \" \" + col); // Print values of row and col.\n            }\n        }\n    }\n}\n\nIn this example, the two loops print:\n\n1 1\n1 2\n1 3\n2 1\n2 2\n2 3\n3 1\n3 2\n3 3\n\nIn the outer loop, we use a variable (an item of data denoted here by int, i.e. whole numbers) called row which runs from 1 to 3 (left hand side). But for each iteration of this loop, the inner loop also runs three times from 1 to 3 according to the variable col (right hand side).
So when the row variable of the outer loop is set to 1, the inner loop runs three times producing:\n\n1 1\n1 2\n1 3\n\nIn its first iteration, 1 is produced by the outer loop using variable row (left hand side), and the right hand side’s numbers 1, 2 and 3 are produced by the inner loop’s variable col. Only when the inner loop has completed three iterations for the outer loop’s one iteration, will the left hand side’s number change to 2. And then the inner loop will run again through its three iterations, and so on.\n\nNesting frequently sees the same loop type used but this is not always the case: it is quite possible to have an outer while loop and an inner for loop or vice versa. Nesting can also require multiple loops, not just two." ]
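The same row/col walkthrough can be sketched in any language. A minimal Python equivalent of the Java example, counting iterations to show the outer × inner total:

```python
# Python equivalent of the Java NestedLoopDemo: the outer loop runs 3 times
# and the inner loop runs 3 times per outer pass, so 3 * 3 = 9 pairs print.
pairs = []
for row in range(1, 4):        # outer loop
    for col in range(1, 4):    # inner loop
        pairs.append((row, col))
        print(row, col)

print(len(pairs))  # -> 9
```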
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8483464,"math_prob":0.94602436,"size":2358,"snap":"2022-27-2022-33","text_gpt3_token_len":617,"char_repetition_ratio":0.15165676,"word_repetition_ratio":0.033826638,"special_character_ratio":0.2769296,"punctuation_ratio":0.10282258,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98086196,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-16T19:38:56Z\",\"WARC-Record-ID\":\"<urn:uuid:b04a5bf8-fd70-420e-8901-9f95a31c5aef>\",\"Content-Length\":\"102025\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:324a0b97-2cbc-4234-9cc1-811cf178df50>\",\"WARC-Concurrent-To\":\"<urn:uuid:0aefb89d-fa1f-42b1-b523-1234b89dd8b4>\",\"WARC-IP-Address\":\"172.67.169.169\",\"WARC-Target-URI\":\"https://artechnologygroup.com/what-is-nested-loop/\",\"WARC-Payload-Digest\":\"sha1:LDCI6Y6MAQNTLWBYFOZ47MXS5IUVY473\",\"WARC-Block-Digest\":\"sha1:NDP5IJI5OD2ILGWLODS55O2XP2VOUIOC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572515.15_warc_CC-MAIN-20220816181215-20220816211215-00025.warc.gz\"}"}
https://www.mdpi.com/1996-1073/12/16/3119/htm
[ "Energies 2019, 12(16), 3119; https://doi.org/10.3390/en12163119\n\nArticle\nAnalytical Description of Overhead Transmission Lines Loadability\nby", null, "Davide Lauria 1,*,", null, "Fabio Mottola 2", null, "and", null, "Stefano Quaia 3\n1\nDepartment of Industrial Engineering, University of Naples Federico II, 80125 Napoli, NA, Italy\n2\nDepartment of Electrical Engineering and Information Technology, University of Naples Federico II, 80125 Napoli, NA, Italy\n3\nDepartment of Engineering and Architecture, University of Trieste, 34127 Trieste, TS, Italy\n*\nAuthor to whom correspondence should be addressed.\nReceived: 19 July 2019 / Accepted: 11 August 2019 / Published: 14 August 2019\n\nAbstract:\nThe loadability characteristics of overhead transmission lines (OHLs) are certainly not a new topic. However, driven by sustainability issues, the increasing need to exploit existing electrical infrastructures as much as possible has given OHL loadability a renewed central role and, recently, new investigations on this subject have been carried out. OHL loadability is generally investigated by means of numerical methods. Even though this approach allows deducing useful information in both planning and operation stages, it does not permit capturing all the insights obtainable by an analytical approach. The goal of this paper is to tailor a general analytical formulation for the loadability of OHLs. The first part of the paper is devoted to the base-case of uncompensated OHLs. Later, aiming to demonstrate the inherent feasibility and flexibility of the novel approach proposed, the less frequent case of shunt compensated radial OHLs is investigated as well.
The analytical formulation is combined with the use of circular diagrams. Such diagrams allow a geometrical interpretation of the analytical relationships and are very useful to catch the physical insights of the problem. Finally, in order to show the applicability of the new analytical approach, a practical example is provided. The example concerns calculation of the loadability characteristics of typical 400 kV single-circuit OHLs.\nKeywords:\npower transmission; transmission lines; overhead transmission lines; line loadability\n\n1. Introduction\n\nLoadability of transmission lines plays a central role for both expansion planning and optimal operation of power systems [1,2]. The loadability of a given type of transmission line can be defined as the steady-state maximum power that type of line can carry, expressed as a function of its (variable) length, $L$. This maximum power $p(L)$ must be retained as a theoretical upper limit, since it is evaluated by neglecting practical constraints like the transmission reliability margin and the power margin imposed by the $N-1$ contingency criterion. This concept was early introduced in 1953 by St. Clair who described the loadability curve of uncompensated transmission lines, expressed in per unit (p.u.) of the surge impedance loading (SIL), as a function of $L$. This curve was deduced on the basis of empirical considerations and technical practice. Later, in the theoretical bases for the St. Clair loadability curve were presented and its use was extended to the highest voltage levels. Using the same methodology, reference provides the “universal loadability curve” for uncompensated OHLs, expressed in p.u. of the SIL, which can be applied to any voltage level. In , a lossless line model is adopted, and the constraints affecting the transmissible power are the thermal limit, the voltage drop limit, and the steady-state stability margin. These cornerstone papers generally assume the voltage drop limit $\Delta V_{max}\% = 5\%$ and power transmission at the unity power factor ($\cos\varphi = 1$).
These cornerstone papers generally assume the voltage drop limit $Δ V m a x % = 5 %$ and power transmission at the unity power factor ($c o s φ = 1$).\nThe more recent papers [6,7] performed a technical comparison among different solutions of OHLs that encompasses the common single- and double-circuit three-phase AC OHLs, but also monopolar and bipolar HVDC OHLs and new proposals of OHLs (four-phase AC and AC–DC lines). The papers [6,7] adopt the same methodology used in [4,5] but employ the complete distributed parameters line model, highlight the effects of less-than-one power factor values, and include a further limit concerning the Joule power losses, $Δ P m a x %$.\nA critical analysis of the constraints affecting the loadability of OHLs is developed in considering the conductor thermal limit, the voltage drop limit, the steady-state stability margin, the voltage stability margin, and the Joule energy losses limit.\nThe loadability curves are characterized by various “regions”, or ranges of length. In the first region, the transmissible power is limited by the conductor thermal limit; in the second region, by the voltage drop limit, $Δ V m a x$. Further regions concern very long lines, starting from a length of about 300–500 km. These regions are determined by the other constraints referred to the line or power system performance: The steady-state (or angle) stability margin, the voltage stability margin, and/or the maximum allowable Joule losses, $Δ P m a x$. (In principle, the power losses limit could be more stringent (i.e., it could be attained at lower $L$) than the voltage drop limit. In actual cases, however, for realistic values of voltage drop and power losses limits, the power losses limit does not prevail (see Section 4).)\nThe base-case analyzed by all the previous works [3,4,5,6,7,8] are uncompensated OHLs. 
On the contrary, these works do not take into account any actual reactive power reserves available and, thus, provide limited information about line compensation (i.e., reactive power management through VAR resources). Reference states that line loadability can be increased by compensation, but the matter is not investigated. The incidence of reactive power reserves in OHLs loadability is outlined and investigated in where, however, attention focuses only on the aspect of the angle stability limit and the analysis is carried out for lossless lines. Reference demonstrates the large advantage that can be obtained through a controlled compensation for medium and quite long lines, which fall in the voltage drop region of the loadability curves. The work in examines the power capacity increase in long OHLs that can be obtained through passive compensators consisting of series capacitors and shunt reactors located at one or both line ends, whereas analyzes the loadability curves of radial OHLs compensated by means of synchronous condensers connected at the receiving end.\nIt must also be underlined that the power transfer limits of OHLs have been studied with reference to both compensated and uncompensated lines in the old paper , at a time when the concept of line loadability had not been developed yet.\nThis paper proposes an analytical representation of the loadability curves of OHLs. This is a new approach to characterize the various regions of the loadability curves, completely different from the traditional approach based on numerical analyses. The closed-form expressions derived take into account the constraints related to the conductor thermal limit, permissible voltage drop and Joule losses, as well as the steady-state stability margin. The proposed analytical approach allows obtaining an original and simple geometrical tool, based on circular diagrams, whose interceptions show the influence of the different limits. 
The contribution of this paper can help power designers and system operators in both planning and operation stages of OHLs.\nSection 2 illustrates the analytical representation of the loadability curves of uncompensated OHLs. Section 3 is dedicated to the analysis of radial OHLs compensated by means of synchronous condensers connected at the receiving end. This is a much less frequent case, investigated here in order to show the potentiality of the new analytical approach. Section 4 has the aim to demonstrate the applicability of the analytical approach and the use of the relationships provided in the previous sections. The application examples developed refer to standard 400 kV single-circuit OHLs.\n\n2. Analytical Formulation of Loadability Characteristics for Uncompensated Lines\n\nIn this section, the different regions of the loadability characteristics of uncompensated OHLs are derived analytically. We take into account the constraints considered in the classic works [3,4,5], which are the (static) thermal limit of the conductor (in this paper, we refer to the thermal limit of the conductor determined through the traditional static approach, as is done in all classic works on OHLs loadability. Modern dynamic approaches for calculation of the conductor thermal limit are beyond the scope of this work), voltage drop limit, $\Delta V_{max}$, and steady-state stability limit. In addition, as was done in [6,7], we also consider the power losses limit, $\Delta P_{max}$. We adopt the general formulation of power transmission lines:\n$V_1 = A V_2 + B I_2, \quad I_1 = C V_2 + A I_2,$\nwith $A = \cosh(KL)$, $B = Z_0 \sinh(KL)$, $C = (1/Z_0)\sinh(KL)$, and where $K$ is the propagation constant and $Z_0$ is the surge impedance of the line. The quantities expressed in p.u. will be denoted by lower case letters.\n\n2.1. Thermal Limit\n\nAccording to the second line of Equation (1), the modulus of the current changes along the line.
In the first region of the loadability curve, characterized by the incidence of the thermal limit, the modulus of $I_1$ (the current at the sending end of the line) is slightly lower than that of $I_2$ (the current at the receiving end). The same is valid for the modulus of the current at any point along the line. Since the first region concerns lines with limited length, the differences in the current modulus are little (by far less than 1%) and are usually neglected. Actually, the thermal limit is attained at the receiving end of the line (see Section 4.2 for more insights).\nAccordingly, the maximum transmissible (active) power is $p_{th} = a_{th}\cos\varphi_2$, where $a_{th} = v_2 i_{th}$ is the maximum apparent power at the receiving end, $i_{th}$ is the conductor thermal limit, and $\varphi_2$ the displacement angle relative to the load power factor at the receiving end.\nIn the first region the following relationship—derived from the first of Equation (1)—is valid:\n$v_1 = v_2 \cosh(KL) + \frac{p_{th}}{v_2}(1 - j\tan\varphi_2)\, z_0 \sinh(KL),$\nbeing $v_2 = v_2 e^{j0} = v_2$.\nThe thermal limit prevails for short lines, until the voltage drop limit $\Delta v_{max}$ is attained at a certain line length $L_1$. This can be written as\n$v_{1max} = \left| v_2 \cosh(KL_1) + \frac{p_{th}}{v_2}(1 - j\tan\varphi_2)\, z_0 \sinh(KL_1) \right|,$\nhaving denoted $v_{1max} = v_2 + \Delta v_{max}$.\nPractical evaluation of the first region of the loadability curve can be made through the simple iterative procedure reported in Figure 1, where $\Delta L$ is the length step (for example, it could be assumed ). It is sufficient to verify, for increasing values of $L$, that $v_1 \le v_{1max}$. The length $L_1$ is identified when the upper limit $v_1 = v_{1max}$ is attained. Of course, for $L \le L_1$ the maximum transmissible power $p(L)$ is equal to $p_{th}$, and does not change with the line length.\nSection 4 shows the application of this procedure in a practical case.\nThe analytic solution is less immediate. For any assigned type of line and having set $\Delta v_{max}$, Equation (3) can be interpreted as a nonlinear equation in the unknown $L_1$.
In Appendix A, an easy way to get an analytical solution of Equation (3) in the unknown $L 1$ is illustrated.\nThe well-known Perrine–Baum diagram is a helpful graphical representation of transmission line steady-state operation. Figure 2 depicts in Cartesian coordinates the meaning of the relationship of Equation (3), for the base-case $c o s φ 2 = 1$. By adding the two phasors $v 2 c o s h ( K L 1 )$ and $p t h v 2 z 0 s i n h ( K L 1 )$, the resulting vector $v 1$ intercepts the circle whose centre is located in the origin of the Cartesian coordinates system and whose radius is equal to $v 1 m a x = v 2 + Δ v m a x$.\n\n2.2. Voltage Drop Limit\n\nThe voltage drop limit affects the second region of the loadability curve, for $L > L 1 ,$ and until a different limit takes over. In this region, the maximum transmissible active power $p ( L )$ (clearly, $p < p t h$) can be obtained by solving the equation\n$v 1 m a x = | v 2 c o s h ( K L ) + p v 2 ( 1 − j t a n φ 2 ) z 0 s i n h ( K L ) | .$\nMore easily, the unknown $p$ can be determined by solving the following algebraic quadratic equation, which can be derived by properly rewriting Equation (4):\n$p 2 ( γ 2 + δ 2 ) + 2 p ( α γ + β δ ) + α 2 + β 2 − v 1 m a x 2 = 0 ,$\nwhere\n$α = v 2 c o s h ( K ′ L ) c o s ( K ″ L ) ,$\n$β = v 2 s i n h ( K ′ L ) s i n ( K ″ L ) ,$\n$γ = 1 v 2 ( ( z r + z y t a n φ 2 ) s i n h ( K ′ L ) c o s ( K ″ L ) − ( z y − z r t a n φ 2 ) c o s h ( K ′ L ) s i n ( K ″ L ) ) ,$\n$δ = 1 v 2 ( ( z r + z y t a n φ 2 ) c o s h ( K ′ L ) s i n ( K ″ L ) + ( z y − z r t a n φ 2 ) s i n h ( K ′ L ) c o s ( K ″ L ) ) .$\nTherefore, the second region of the loadability curve $p ( L )$ can be directly calculated by solving Equation (5) in the unknown $p$ for each value of $L$ (with $L > L 1$). Once $p$ is obtained, it must be verified that the power losses and steady-state stability limits are not attained (the analytical description of these limits is reported in Section 2.3 and Section 2.4 below).
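Numerically, Equation (5) amounts to one quadratic per length: writing v1 = A′ + p·B′, with A′ = α + jβ the first term of Equation (4) and B′ = γ + jδ the coefficient of p, the condition |v1| = v1max gives the quadratic directly. A sketch (Python, Section 4 data assumed):

```python
import cmath, math

# Section 4 data (assumption): Table 1 constants, v2 = 1 pu, 5 % drop limit.
r, x, g, b = 0.021, 0.271, 4e-9, 4.21e-6
Z_base = 400e3 ** 2 / 1000e6
K = cmath.sqrt(complex(r, x) * complex(g, b))
z0 = cmath.sqrt(complex(r, x) / complex(g, b)) / Z_base
v2, v1_max, tan_phi2 = 1.0, 1.05, 0.0

def p_drop(L):
    """Positive root of Equation (5): largest p satisfying |v1| = v1_max."""
    A = v2 * cmath.cosh(K * L)                             # alpha + j beta
    B = (1 - 1j * tan_phi2) / v2 * z0 * cmath.sinh(K * L)  # gamma + j delta
    a = abs(B) ** 2                                        # gamma^2 + delta^2
    half_b = (A * B.conjugate()).real                      # alpha*gamma + beta*delta
    c = abs(A) ** 2 - v1_max ** 2
    return (-half_b + math.sqrt(half_b ** 2 - a * c)) / a

for L in (150, 300, 450, 600):
    print(L, round(p_drop(L), 3))    # p(L) decreases with line length
```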
The attainment of one of these limits determines the upper length of the second region of the loadability curve, $L 2$. This simple procedure is depicted in Figure 3.\nSection 4 shows the application of this simple iterative procedure in a practical case.\nReferring to the Perrine–Baum diagram, $p$ can be interpreted as the value that intercepts the circle with radius $v 1 m a x$, as shown in Figure 4 with reference to the case $c o s φ 2 = 1$. Note that the two phasors $v 2 c o s h ( K L )$ and $p v 2 ( 1 − j t a n φ 2 ) z 0 s i n h ( K L )$ vary with $L$ and, hence, have different values from those depicted in Figure 2.\n\n2.3. Power Losses Limit\n\nConcerning Joule losses along the line and considering that this paper deals with the maximum permissible power $p ( L )$, a natural approach could be to fix a limit to the power losses $Δ p$ calculated with regard to the loadability limit $p ( L )$. Better still, for homogeneity with the percent voltage drop limit, we could fix a limit to the ratio between $Δ p$ and $p ( L )$.\nAnalytically, considering that $p ( L )$ is to be regarded as known (see Figure 3), this ratio can be calculated, for any given length $L$, as\n$Δ p p ( L ) = R e ( v 1 i 1 ∗ ) − p ( L ) p ( L ) ,$\nwith\n$v 1 = A v 2 + B p v 2 ( 1 − j t a n φ 2 ) , i 1 = C v 2 + A p v 2 ( 1 − j t a n φ 2 ) .$\nAs we show in Appendix B, the ratio $Δ p / p ( L )$ can be explicitly evaluated as a function of $p$ and $L$.\nFrom the methodological point of view, however, the problem is to identify a significant criterion for setting a limit to the ratio $Δ p / p ( L )$. Joule losses are an economic problem rather than a distinct limiting factor in line loadability.
Thus, any limit concerning Joule losses should involve the energy $E J$ lost in a given time period $T$ (for example, one year), whereas the instantaneous power losses corresponding to a certain power transported have scarce or no practical meaning [3,8,14].\nIt is clear that a limit on the lost energy $E J m a x$ is equivalent to a limit on the average value $p m$ of the power transported. Thus, the load factor of the line $f c = p m p$, i.e., the ratio between the average and the maximum power transported (in what follows, we assume the maximum power transported equal to $p ( L ) )$, becomes a crucial parameter.\nOn the other hand, the loss factor $f p$, i.e., the ratio of the power losses $Δ p m$ evaluated with regard to the average power $p m$ and the power losses $Δ p$ evaluated with regard to the maximum power $p ( L )$, can be deduced from the load factor using the formula:\n$f p = Δ p m Δ p = 0.3 f c + 0.7 f c 2 .$\nAll this considered, we set a limit to the ratio $Δ p m / p m$, where $Δ p m$ is the power losses calculated with regard to the average power $p m$. Finally, using Equation (12) and the definition of $f c$, we can write\n$( Δ p p ( L ) ) m a x = f c f p ( Δ p m p m ( L ) ) m a x .$\nEquation (13) makes it possible to calculate the maximum value of the ratio $Δ p / p ( L )$, as a function of the limit $( Δ p m p m ( L ) ) m a x$ and the load factor $f c$. Therefore, once the load factor $f c$ and the limit $( Δ p m p m ( L ) ) m a x$ are set, one can obtain $( Δ p p ( L ) ) m a x$ and check the constraint $Δ p / p ( L ) ≤ ( Δ p p ( L ) ) m a x$.\nThis procedure converts a limit to the lost energy into a limit to the (instantaneous) power losses $Δ p$, calculated with regard to the loadability limit $p ( L )$. Examples of application are shown in Section 4.\n\n2.4.
Steady-State Stability Limit\n\nThe stability limit depends not only on the transmission line characteristics, but also on the network equivalents at both line ends [6,7,8].\nAs far as the steady-state stability limit $p l i m$ is concerned, one can refer to the equivalent system shown in Figure 5.\nThe cascade system in Figure 5 can be described by the equivalent transmission matrix $T$, obtained as the product of the transmission matrices of the two network equivalents and of the line.\nThe active power, expressed in p.u., is given by the well-known formula\n$p = e d 1 e d 2 b t c o s ( β t − δ ) − A t e d 2 2 b t c o s ( β t − α t ) ,$\nwhere $b t = b t e j β t$ and $A t = A t e j α t$.\nThe maximum value of $p$ is obtained for $δ = β t$, and is:\n$p m a x = e d 1 e d 2 b t − A t e d 2 2 b t c o s ( β t − α t ) .$\nThe steady-state stability limit $p l i m$ and $p m a x$ are related to each other by means of the stability margin $m s t a b$, that is:\n$p l i m = ( 1 − m s t a b ) ( e d 1 e d 2 b t − A t e d 2 2 b t c o s ( β t − α t ) ) .$\nThis relationship is a monotonically decreasing function of $L$, $p l i m ( L )$. For well-developed power systems, a good approximation of $p l i m$ is obtained assuming $e d 1 = v 1 m a x$ and $e d 2 = v 2$. In this case, Equation (17) becomes\n$p l i m ( L ) = ( 1 − m s t a b ) ( v 1 m a x v 2 b t ( L ) − A t ( L ) v 2 2 b t ( L ) c o s ( β t ( L ) − α t ( L ) ) ) .$\nThe maximum allowable active power, therefore, must satisfy the inequality:\n$p ( L ) ≤ p l i m ( L ) .$\nAccording to the procedure illustrated in Figure 3, the inequality (19) must be checked for each value of $L$. In this case also, Section 4 shows the application of this procedure in a practical case.\n\n2.5. Third Region of the Loadability Characteristic\n\nAt the line length $L 2$ either the power losses or the steady-state stability limit is attained. Longer lines ($L > L 2$) belong to further regions of the loadability curve. Obviously, the third region is determined by the limit attained (power losses or steady-state stability).
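Under the simplifying assumption ed1 = v1max and ed2 = v2, Equation (18) and the check of inequality (19) are immediate to evaluate; a sketch (Python, with the Section 4 data as an assumption):

```python
import cmath, math

# Section 4 data (assumption): Table 1 constants, m_stab = 30 %.
r, x, g, b = 0.021, 0.271, 4e-9, 4.21e-6
Z_base = 400e3 ** 2 / 1000e6
K = cmath.sqrt(complex(r, x) * complex(g, b))
z0 = cmath.sqrt(complex(r, x) / complex(g, b)) / Z_base
v2, v1_max, m_stab = 1.0, 1.05, 0.30

def p_lim(L):
    """Equation (18): stability-limited power for a line of length L (km)."""
    At = cmath.cosh(K * L)          # A_t e^{j alpha_t}
    Bt = z0 * cmath.sinh(K * L)     # b_t e^{j beta_t}, pu
    return (1 - m_stab) * (v1_max * v2 / abs(Bt)
                           - abs(At) * v2 ** 2 / abs(Bt)
                             * math.cos(cmath.phase(Bt) - cmath.phase(At)))

print(round(p_lim(600), 3))   # roughly 0.72 pu for the 400 kV line of Section 4
```

The check of inequality (19) is then a one-line comparison of p(L) against p_lim(L).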
Examples reported in Section 4 show that $L 2$ is usually quite long and, thus, the majority of the existing OHLs belong to the first two regions.\n\n3. Analytical Formulation of Loadability Characteristics for Shunt Compensated Radial Lines\n\nThe specific case of radial OHLs with shunt reactive compensation at the receiving end, considered in this section, is illustrated in Figure 6. The shunt compensation (reactive power injection $q c < 0$) superimposes a backward (i.e., from the receiving end to the sending end) reactive power flow to the forward active power flow. Assuming $c o s φ 2 = 1$, it follows $q = q c < 0$. In this way the voltage drop across the line can be limited and, thus, the power transfer capacity increases. This operation is illustrated in Figure 7.\nAccordingly, reactive compensation is useful to increase the loadability curve in the second region, whereas in the thermal limit region a reactive power injection would reduce the loadability limit $p ( L ) = p t h$.\nSome preliminary considerations help to understand the succession of the various limits in the loadability curve of these lines. Unlike uncompensated OHLs, in this case the slight difference between the modulus of the sending end and receiving end currents plays a role in deriving the loadability characteristics. Until the voltage drop is less than $Δ v m a x$, as already said in Section 2.1, the modulus of the current at the sending end is slightly lower than that at the receiving end, which is equal to the thermal limit. Once the voltage drop limit is achieved, the modulus of the current at the sending end increases with $L$ until, at a certain length $L 2$, it attains the thermal limit and equals the receiving-end current. After this length, in order not to exceed the thermal limit, the compensation action must decrease.
These aspects are quantified in the practical example discussed in Section 4.2.\nKeeping this in mind, the succession of the various limits and the relevant analytical description are explained in the following subsections where, for the sake of clarity, the case of unitary power factor is examined first.\n\n3.1. Receiving-End Thermal Limit\n\nIt is trivial to remark that, until the voltage drop limit is achieved, reactive compensation is not required and the loadability curve coincides with that of uncompensated lines. Hence, the maximum line length for which reactive compensation is not required is equal to $L 1 .$\n\n3.2. Receiving-End Thermal Limit and Voltage Drop Limit\n\nThe voltage phasor diagram shown in Figure 7 is characterized by the following equations:\n$v 1 m a x = | v 2 c o s h ( K L ) + p − j q v 2 z 0 s i n h ( K L ) | , p 2 + q 2 = a t h 2 , L > L 1 .$\nFor each value of $L$ (and, thus, for any given line), the two relationships in Equation (20) represent the two power circles, C1 and C2, shown in Figure 8, and expressed in Cartesian coordinates $( p , q )$ as\n${ ( p − p 1 ) 2 + ( q − q 1 ) 2 = r 1 2 ( p − p 2 ) 2 + ( q − q 2 ) 2 = r 2 2 ,$\nwhere the centres $( p 1 , q 1 )$, $( p 2 , q 2 )$ and the radii $r 1$, $r 2$ are obtained by expanding the two relationships in Equation (20).\nThe points of intersection of the circles in Equation (21) are P1 and P2 in Figure 8. The maximum allowable active power corresponds to the abscissa of P1, whose coordinates are obtained by solving Equation (21). For this purpose, as is well known from analytical geometry, the radical axis, i.e., the line passing through $P 1$ and $P 2$, is described by the equation:\n$q = − p 2 − p 1 q 2 − q 1 p + p 2 2 − p 1 2 + q 2 2 − q 1 2 + r 1 2 − r 2 2 2 ( q 2 − q 1 )$\nHence, by substituting Equation (23) in one of the formulas in Equation (21), the coordinates of P1 and P2 can be derived.
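The radical-axis construction of Equations (21)–(23) can be implemented as a general circle-intersection routine. The sketch below (Python) does so and applies it to the numerical circle data of the worked example in Section 4.2 (those specific coefficients are taken from that example):

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles, via the radical axis (cf. Eq. (23))."""
    (x1, y1), (x2, y2) = c1, c2
    dx, dy = x2 - x1, y2 - y1
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                                  # no real intersection
    a = (d * d + r1 * r1 - r2 * r2) / (2 * d)      # distance c1 -> radical axis
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    xm, ym = x1 + a * dx / d, y1 + a * dy / d      # foot on the centre line
    return [(xm + h * dy / d, ym - h * dx / d),
            (xm - h * dy / d, ym + h * dx / d)]

# C2: thermal circle p^2 + q^2 = a_th^2 with a_th = 1.412 pu;
# C1: voltage-drop circle of the Section 4.2 example.
pts = circle_intersections((0.0, 0.0), 1.412,
                           (-0.2274, -2.8894), math.sqrt(9.6970))
p1 = max(pts)            # P1 = intersection with the larger abscissa p
print(p1)                # approximately (1.393, -0.230)
```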
In this way, $p ( L )$ can be obtained for each line length $L$ (with $L > L 1$).\nThis simple analytical procedure (similar in principle to that illustrated in Figure 3) allows calculation of the second region of the loadability curve.\nThe second region extends up to the line length at which another limit is attained. Therefore, as done above for uncompensated lines, the attainment of another limit must be checked. In this case, the check concerns the attainment of the thermal limit at the sending end at the length $L 2$. As already said, at this line length the currents at the two line ends are both equal to the thermal limit. Analytically, at $L 2$ the following constraints are attained at the same time:\n$p 2 + q 2 = a t h 2 , v 1 m a x = | v 2 c o s h ( K L 2 ) + p − j q v 2 z 0 s i n h ( K L 2 ) | , | v 2 z 0 s i n h ( K L 2 ) + p − j q v 2 c o s h ( K L 2 ) | = i t h .$\nThe third relationship of (24) can also be represented by a circle, named C3 and expressed in Cartesian coordinates $( p , q )$ as:\n$( p − p 3 ) 2 + ( q − q 3 ) 2 = r 3 2 ,$\nwhere the centre $( p 3 , q 3 )$ and the radius $r 3$ are obtained by expanding the third relationship of (24).\nFor $L = L 2$ the three circles C1, C2 and C3, which are the geometrical representation of (21) and (25), have a common intersection point $P 3$, which identifies the maximum allowable active power, as shown in Figure 9.\nIn order to determine the coordinates of $P 3$ avoiding the complex solution of the system (24), it is convenient to resort to an iterative procedure similar in principle to those described in Figure 1 and Figure 3. The modulus of the sending end current can be calculated, for increasing values of $L$, using the left-hand member of the third of (24): the point $P 3$ is identified when the sending end current attains the thermal limit. Section 4 shows the application of this procedure in a practical case.\n\n3.3.
Sending-End Thermal Limit and Voltage Drop Limit\n\nWhen the thermal limit is attained at the sending end, the maximum allowable active power can be calculated by solving the following two-equation system:\n$v 1 m a x = | v 2 c o s h ( K L ) + p − j q v 2 z 0 s i n h ( K L ) | , | v 2 z 0 s i n h ( K L ) + p − j q v 2 c o s h ( K L ) | = i t h , L > L 2 .$\nIn this way, a third region of the loadability curve can be calculated, just as already done in Section 3.2 for the second region. Indeed, for each value of $L$ (with $L > L 2$) the system in Equation (27) can also be reduced to a system of two algebraic quadratic equations.\nThe solution of Equation (27) is graphically represented by the intersection of the circles C1 and C3 illustrated in Figure 10. Clearly, the maximum allowable active power is determined by the intersection point $P 4$ of C1 and C3.\nThe figure also reports the circle C4 that represents the power losses limit. This limit, indeed, can be written as\n$( p − p 4 ) 2 + ( q − q 4 ) 2 = r 4 2$\nwith the centre $( p 4 , q 4 )$ and the radius $r 4$ determined by the assigned power losses limit (for the sake of brevity, the explicit expressions are not reported here).\nEquation (28) corresponds to the circle C4 depicted in Figure 10.\nIn practice, in order to check satisfaction of the power losses limit, once the solution $p * ( L )$, $q * ( L )$ of Equation (27) is obtained, it must be verified that this point is within the circle C4, i.e., $( p * ( L ) − p 4 ) 2 + ( q * ( L ) − q 4 ) 2 < r 4 2$. In the same way, one can simply verify, for each value of $L$, the satisfaction of the steady-state stability limit (19) that can be rewritten, by assuming infinite short-circuit power at both line ends, i.e., $e d 1 = v 1 m a x$ and $e d 2 = v 2$, as\n$p * ( L ) ≤ ( 1 − m s t a b ) b ( L ) ( v 1 m a x v 2 − A ( L ) v 2 2 c o s ( β ( L ) − α ( L ) ) ) .$\nA practical example of application is shown in Section 4.\n\n3.4.
Voltage Drop Limit and Steady-State Stability Limit\n\nFor longer lines, starting from a certain line length $L 3$, the steady-state stability limit starts determining the loadability curve (note that, in compensated lines, this line length can be much lower compared with the base-case of uncompensated lines). Thus, the loadability limit is determined by the voltage drop and steady-state stability limits. In this region, both the active and reactive power of the compensator decrease starting from the maximum value of complex power. If the maximum reactive power the synchronous condenser can deliver, $Q m a x$, is less than this value, the sequence of the limits can change.\nAnalytically, for $L > L 3$, the maximum active power is calculated as\n$p l i m ( L ) = ( 1 − m s t a b ) b ( L ) ( v 1 m a x v 2 − A ( L ) v 2 2 c o s ( β ( L ) − α ( L ) ) ) .$\nThe reactive power $q$ in turn can be calculated by solving the following relationship:\n$v 1 m a x = | v 2 c o s h ( K L ) + p l i m − j q v 2 z 0 s i n h ( K L ) | , L > L 3 .$\nEquation (32) can also be reduced to a second order algebraic equation in the unknown $q$, following the same procedure already shown for Equation (4). For the sake of brevity, the relevant equations are not reported here.\n\n3.5. Case: $c o s φ 2 ≠ 1$\n\nThe more general case with $c o s φ 2 ≠ 1$ can be reduced to the case $c o s φ 2 = 1$ by adding a further reactive compensation equal to the reactive power $p t a n φ 2$ absorbed by the load. This means that the whole reactive power absorbed by the load is delivered by the synchronous condenser and the line power factor at the receiving end of the line, $c o s φ 2$, is adjusted to 1. This, of course, implies that a reduced reactive power reserve, diminished by the same amount, is available to control the voltage drop across the line.\n\n4. Practical Applications\n\nThe goal of this Section is to show the ability of the proposed analytical methodology to derive the loadability characteristic of OHLs.
Calculations are performed for the base-case of uncompensated OHLs and for the specific case of shunt compensated radial lines. In the second case, we assume that a synchronous condenser is connected at the receiving end of the OHL, as shown in Figure 6. The limits considered in both cases are:\n• conductor thermal limit, $i t h$;\n• voltage drop limit, $Δ v m a x$;\n• power losses limit, $( Δ p m p m ( L ) ) m a x$; and\n• steady-state stability limit, $m s t a b$.\nCalculations concern traditional single circuit 400 kV OHLs equipped with a standard triple-core ACSR conductor bundle of 3 × 585 mm2 cross section, already taken as reference conductors in [6,7]. The relevant line parameters are reported in Table 1.\nThe values in Table 1 yield $Z 0 ≈ 254 Ω$ and SIL $≈ 630 M W$. At the receiving end it is set $v 2 = 1$ and the thermal limit is assumed as $I t h = 2038 A$ (according to the Italian Standard CEI 11-60 [16], the conductor thermal limit $I t h$ ranges from 2038 to 2952 A depending on the season and geographical location; here, we take the lowest, most conservative, value), which corresponds to $A t h = 1412$ $MVA$.\nThe percent value of maximum voltage drop is assumed $Δ v m a x % = 5 %$.\nWith regard to Joule losses, we assume $( Δ p m p m ( L ) ) m a x =$ 5%, and a load factor $f c = 0.75$. The last assumption is rather conservative, as it corresponds to a rather high average exploitation of the line. From (12) we obtain $f p = 0.619$ and, from (13), $( Δ p p ( L ) ) m a x ≈ 6.1 %$.\nWith regard to the steady-state stability margin, the commonly used value of 30% is considered. Finally, regarding the power factor, the case $c o s φ 2 = 1$ is analyzed first.\nUsing these values and the relationships explained in Section 2 and Section 3, it is rather easy to calculate the loadability curves of both uncompensated and compensated lines.\nFigure 11 shows the loadability curves obtained with $v 2 = 1$. On the y-axis, powers are in p.u. of a 1000 MVA base power.\n\n4.1.
Uncompensated Lines\n\nThe numerical solution (illustrated in Figure 1) of the non-linear Equation (3) provides $L 1 = 114 k m$, whereas the approximated analytical solution described in Appendix A gives 118 km. However, this error dramatically reduces with the power factor and, at $c o s φ 2 < 0.97$, the difference between the numerical and analytical solutions practically vanishes.\nHence, the first region of the loadability curve has constant ordinate equal to $p t h$ in the length interval 0–114 $km$.\nThe second region of the loadability curve is calculated by solving Equation (4) in the unknown variable $p$ for each value of $L > L 1$. For these relatively long lines, the limit $Δ v m a x % = 5 %$ imposes a reduction of the transmissible power and, increasing $L$, the line loadability $p ( L )$ quickly decreases.\nFor each value of $p$, power losses can be evaluated by (A9), and the constraint $Δ p / p ( L ) ≤ ( Δ p p ( L ) ) m a x$ can be easily checked. The limit value $( Δ p p ( L ) ) m a x$ is never achieved, at least up to $L = 600 k m$: the loadability limit determined through Equation (5), with the numerical values of the parameters $α$, $β$, $γ$, and $δ$ evaluated at the given length, together with the relationship (A7) reported in Appendix B, always provides a losses ratio lower than $( Δ p p ( L ) ) m a x$.\nWith regard to steady-state stability, having set $m s t a b = 0.3$ and assuming an infinite short-circuit-ratio at both line ends, up to $L = 600 k m$ the steady-state stability limit also does not affect the line loadability. In fact, for $L = 600 k m$ the relationship in Equation (18) gives $p l i m ≈ 0.72$, higher than the power corresponding to $Δ v m a x %$.\nIn summary, for the considered OHLs the loadability curve calculated in the 0–600 km range of lengths consists of only two regions, resulting in $L 2 > 600 k m$.
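The losses bookkeeping of Equations (12) and (13) reduces to a few lines of arithmetic. In the sketch below (Python), the classical empirical loss-factor relation fp = 0.3·fc + 0.7·fc² is assumed, since it reproduces the value fp = 0.619 quoted above for fc = 0.75:

```python
# Load factor and loss factor (Equation (12); the 0.3/0.7 empirical
# coefficients are an assumption, consistent with the quoted f_p = 0.619).
f_c = 0.75                          # load factor p_m / p
f_p = 0.3 * f_c + 0.7 * f_c ** 2    # loss factor Delta_p_m / Delta_p

# Equation (13): limit on average-power losses -> limit at peak power p(L).
limit_avg = 0.05                    # (Delta_p_m / p_m)_max
limit_peak = limit_avg * f_c / f_p  # (Delta_p / p(L))_max
print(round(f_p, 3), round(limit_peak, 4))   # 0.619, 0.0606
```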
The limit length $L 1$ and the loadability curve $p ( L )$ can be easily derived, as just illustrated, avoiding the need to resort to optimization procedures, which generally require a major implementation effort on a digital computer and a greater computational burden.\nNote that, assuming different values for the power losses limit $( Δ p m p m ( L ) ) m a x$ and the load factor $f c$, the Joule losses limit could prevail over the voltage drop limit, thus reducing the loadability curve after a certain line length that can be identified by the methodology described. The same is valid in the case of a more stringent setting of the steady-state stability limit.\n\n4.2. Shunt Compensated Radial Lines\n\nFor shunt compensated radial lines, it is obvious that the first region of the loadability curve is equal to the one of uncompensated lines. Starting from the line length $L 1$, the injection of the required amount of reactive power allows limiting the voltage drop across the line and thus avoids exceeding the voltage drop limit $Δ v m a x % = 5 %$. Increasing $L$, this operation continues until other limits are attained or the reactive power reserve is fully exploited. As already pointed out, the second region of the loadability curve is determined by both the thermal limit at the receiving end and the voltage drop limit. This operation is graphically represented by the intersection point $P 1$ of the circles C1 and C2 illustrated in Figure 8, whose Cartesian coordinates are given by Equation (21). Analytically, the coordinates of $P 1$ must be determined. The maximum allowable active power corresponds to the abscissa of $P 1$, which is determined for each value of $L > L 1$ by solving the linear relationship in Equation (23) and one of the two Equations (21), as described in Section 3. The only computation effort required is the solution of a second order algebraic equation.
For example, for a line length in the second region, the system of equations becomes\n${ ( p + 0.2274 ) 2 + ( q + 2.8894 ) 2 = 9.6970 , q = − 0.0787 p − 0.1206 .$\nSolving this system, the coordinates of $P 1$ are $p = 1.393$ and $q = − 0.230$. Note that the reactive power, which corresponds to a capacitive absorption, can be actually delivered only if it is lower than the rating of the synchronous condenser.\nThe second region extends up to the line length at which also the sending end current achieves the thermal limit (Figure 12). This length corresponds to point $P 3$ depicted in Figure 9.\nThe numerical procedure explained at the end of Section 3.2 provides $L 2 = 276 k m$ and the corresponding coordinates of $P 3$.\nThe gradual and moderate reduction of the loadability curve (Figure 11) in the second region (114–276 $k m$) is due to the thermal limit and to the progressive reduction of the line power factor: The increasing reactive component of the current, injected for compensation, reduces the active current in the line.\nAfter $L 2$, the third region is determined by the thermal limit at the sending end and by the voltage drop limit, and is analytically represented by Equation (27). For each value of $L$, the powers $p$ and $q$ can be evaluated as the intersection of the two circles C1 and C3 illustrated in Figure 10. It must also be verified that the power losses limit and the steady-state stability margin $m s t a b$ are not exceeded. This can be done by means of Equations (28) and (30), respectively. For the specific case under study, the steady-state stability limit takes over at a certain length $L 3$. Indeed, at this length the maximum active power $p$ compatible with the thermal and voltage drop limits (obtained by the solution of (27)) is equal to the maximum active power $p * ( L )$ compatible with the steady-state stability limit (obtained by (30)).\nAlso in this case, the power losses limit is never attained.
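Substituting the radical axis into the circle equation collapses the system quoted above to a single quadratic. The short sketch below (Python) solves it and cross-checks the result against the thermal circle (a_th = 1.412 pu, from Section 4):

```python
import math

# Circle C1 and radical axis of the worked example (coefficients quoted above).
m, n = -0.0787, -0.1206                  # radical axis: q = m*p + n
p0, q0, r2 = -0.2274, -2.8894, 9.6970    # C1: (p - p0)^2 + (q - q0)^2 = r2

# Substitution yields a*p^2 + b*p + c = 0:
a = 1 + m * m
b = -2 * p0 + 2 * m * (n - q0)
c = p0 * p0 + (n - q0) ** 2 - r2
p = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)   # P1 = root with larger p
q = m * p + n
print(round(p, 3), round(q, 3))        # approximately 1.393, -0.230
# Consistency check: P1 also lies on the thermal circle p^2 + q^2 = a_th^2.
print(round(math.hypot(p, q), 3))      # approximately 1.412 = a_th
```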
Indeed, the power losses ratio $Δ p / p ( L )$ always remains lower than the limit $( Δ p p ( L ) ) m a x$.\nConcluding, the explained calculations allow determining the loadability curve depicted in Figure 11, which is determined by the following sequence of limits:\n• $L ≤ L 1$: Thermal limit at the receiving end;\n• $L 1 ≤ L ≤ L 2$: Voltage drop limit and thermal limit at the receiving end;\n• $L 2 ≤ L ≤ L 3$: Voltage drop limit and thermal limit at the sending end;\n• $L ≥ L 3$: Voltage drop limit and steady-state stability limit;\nwith $L 1 = 114 k m$ and $L 2 = 276 k m$ ($L 3$ is determined as explained above).\n\n4.3. Case: $c o s φ 2 < 1$\n\nIn the case of a load power factor lower than one (practical values can be assumed in the range 0.97–1), the loadability curve of uncompensated lines reduces because\n• the active power transmitted at the thermal limit is proportional to $c o s φ 2$; this causes a small loadability reduction in the first region; and\n• the voltage drop across the line increases; this causes a much greater loadability reduction in the three following regions, all affected by the voltage drop limit.\nAlso, the limit length $L 1$ sharply reduces [6,7]. The analytical calculation of $L 1$ for such a power factor, performed as explained in Appendix A, gives the value 58 km, which is almost identical to the numerical solution obtained through widespread algorithms for the solution of nonlinear algebraic equations, such as Newton–Raphson. Practically the same result is obtained by means of the numerical solution illustrated in Figure 1. For example, assuming the same power factor, the result is 59 km.\nOn the contrary, in the case of shunt compensated radial lines, the compensator action (provided that the compensator can provide the reactive power amount required to adjust to unity the load power factor) allows to obtain the same loadability curve reported in Figure 11 for the case $c o s φ 2 = 1$.
Using the complete line model with distributed parameters and the relevant general formulation, we show how the loadability curves of any actual line can be deduced taking into account the conductor thermal limit, a maximum voltage drop across the line, a maximum amount of Joule losses along the line, and a steady-state stability margin.\nThe analytical formulation has general validity and can be used to calculate the loadability curves—helping power system operators in both planning and operation stages—in all practical cases. This is demonstrated by two application examples. The first example refers to the base-case of standard (uncompensated) 400 kV single circuit OHLs equipped with standard triple-core ACSR conductor bundle with 3 × 585 mm2 cross section, whereas the second example concerns the same 400 kV single circuit OHLs in radial topology and shunt compensated at the receiving end.\nDespite the apparent complexity of the general analytical description, we demonstrate that the practical application of this analytical approach is rather simple. In both case studies, the loadability curves can be easily derived, avoiding the need to formulate the loadability problem in terms of a constrained optimization problem, whose solution implies an adequate implementation procedure on a digital computer and a major computational burden.\n\nAuthor Contributions\n\nD.L., F.M. and S.Q. conceived and designed the theoretical methodology and the numerical applications; D.L., F.M. and S.Q. performed the numerical applications and analyzed the results; D.L., F.M. and S.Q. wrote the paper.\n\nFunding\n\nThis research received no external funding.\n\nConflicts of Interest\n\nThe authors declare no conflict of interest.\n\nAppendix A\n\nThis section provides an easy way to get an analytical solution of Equation (3) in the unknown $L 1$.
Assuming $K = j K ″$ (lossless approximation), Equation (3) can be rewritten as\n$v 1 m a x = | v 2 c o s ( K ″ L 1 ) + j p t h v 2 ( 1 − j t a n φ 2 ) z 0 s i n ( K ″ L 1 ) | .$\nEquation (A1) can be written also as\n$v 1 m a x = | v 2 c o s ( K ″ L 1 ) + ( a x + j a y ) s i n ( K ″ L 1 ) |$\nwhere\n$a x = p t h v 2 ( − z y + z r t a n φ 2 ) ,$\n$a y = p t h v 2 ( z r + z y t a n φ 2 ) .$\nAfter some elementary manipulations, the following quadratic equation in the unknown $ξ = t a n ( K ″ L 1 )$ can be obtained:\n$ξ 2 ( v 1 m a x 2 − a x 2 − a y 2 ) − 2 a x v 2 ξ + ( v 1 m a x 2 − v 2 2 ) = 0$\nOnly one of the two solutions has a physical meaning, and $L 1$ can be finally evaluated as:\n$L 1 = 1 K ″ a t a n ( ξ ∗ )$\nThe error of this approximated analytical solution is lower than 3.5% when $c o s φ 2 = 1$ and becomes practically negligible when $c o s φ 2 < 0.99$.\n\nAppendix B\n\nThe ratio $Δ p / p ( L )$ can be explicitly evaluated as a function of the variables $p$ and $L$, starting from Equations (10) and (11). By simple manipulations, one obtains the expressions (A7)–(A9), in which the term $Λ 1 ( L )$ is an explicit function of the variable $L$ only, while $Λ 2 ( L , p )$ and $Λ 3 ( L , p )$ are explicit functions of the variable $p$ as well. For each value of $L$ and $p$, the expression (A8) can thus be easily evaluated. Finally, $Δ p / p ( L )$ can be compared with the limit $( Δ p p ( L ) ) m a x$, verifying the satisfaction of the inequality constraint.\nAiming to demonstrate the easy applicability of these formulas, we report the estimation of power losses in the base-case of uncompensated lines, with $c o s φ 2 = 1$: once $p ( L )$ is known, the application of (A9) allows determination of the estimates of $Λ 2$ and $Λ 3$ and, finally, of the ratio $Δ p / p ( L )$, as reported in Section 4.\n\nReferences\n\n1. Ullah, I.; Gawlik, W.; Palensky, P. Analysis of Power Network for Line Reactance Variation to Improve Total Transmission Capacity. Energies 2016, 9, 936. [Google Scholar] [CrossRef]\n2. Hao, J.; Xu, W. Extended transmission line loadability curve by including voltage stability constrains.
In Proceedings of the IEEE Canada Electric Power Conference, Vancouver, BC, Canada, 6–7 October 2008; pp. 1–5. [Google Scholar] [CrossRef]\n3. St. Clair, H.P. Practical concepts in capability and performance of transmission line. In Proceedings of the AIEE Pacific General Meeting, Vancouver, BC, Canada, 1–4 September 1953. [Google Scholar]\n4. Dunlop, R.D.; Gutman, R.; Marchenko, P.P. Analytical development of loadability characteristics for EHV and UHV transmission lines. IEEE Trans. Power Appar. Syst. 1979, PAS-98, 606–613. [Google Scholar] [CrossRef]\n5. Kundur, P. Power System Stability and Control; The EPRI Power System Engineering Series; McGraw-Hill: New York, NY, USA, 1994; ISBN 9780070359581. [Google Scholar]\n6. Lauria, D.; Mazzanti, G.; Quaia, S. The Loadability of Overhead Transmission Lines. Part I: Analysis of Single-Circuits. IEEE Trans. Power Deliv. 2014, 29, 29–37. [Google Scholar] [CrossRef]\n7. Lauria, D.; Mazzanti, G.; Quaia, S. The Loadability of Overhead Transmission Lines. Part II: Analysis of Double-Circuits and Overall Comparison. IEEE Trans. Power Deliv. 2014, 29, 518–524. [Google Scholar] [CrossRef]\n8. Quaia, S. Critical analysis of line loadability constraints. Int. Trans. Electr. Energy Syst. 2018, 28, 1–11. [Google Scholar] [CrossRef]\n9. Dong-Min, K.; In-Su, B.; Jin-O, K. Determination of available transfer capability (ATC) considering real-time weather conditions. Eur. Trans. Electr. Power 2011, 21, 855–864. [Google Scholar] [CrossRef]\n10. Kay, T.W.; Sauer, P.W.; Smith, R.A. EHV and UHV line loadability dependence on VAR supply capability. IEEE Trans. Power Appar. Syst. 1982, 101, 3568–3575. [Google Scholar] [CrossRef]\n11. El-Metwally, M.M.; El-Emary, A.A.; El-Azab, M. A linear programming method for series and shunt compensation of HV transmission lines. Eur. Trans. Electr. Power 2005, 15, 157–170. [Google Scholar] [CrossRef]\n12. Lauria, D.; Quaia, S. 
Loadability increase in radial transmission lines through reactive power injection. In Proceedings of the 6th International Conference on Clean Electrical Power (ICCEP), Santa Margherita Ligure, Italy, 27–29 June 2017. [Google Scholar]\n13. Rissik, H. A semi-graphical method of determining the power limits of an interconnector. J. Inst. Electr. Eng. Part II Power Eng. 1941, 88, 568–588. [Google Scholar] [CrossRef]\n14. Lauria, D.; Quaia, S. Transmission Line Loadability Increase through Series Compensation. In Proceedings of the International Symposium on Power Electronics, Electrical Drives, Automation and Motion (SPEEDAM), Amalfi Coast, Italy, 20–22 June 2018; pp. 1019–1024. [Google Scholar]\n15. Wu, A.; Ni, B. Line Loss Analysis and Calculation of Electric Power Systems, 1st ed.; John Wiley & Sons: Hoboken, NJ, USA; Singapore Pte Ltd.: Singapore, 2016. [Google Scholar]\n16. Norma CEI 11-60. Portata al Limite Termico Delle Linee Elettriche Aeree Esterne con Tensione Maggiore di 100 kV, 2nd ed.; Comitato Elettrotecnico Italiano: Milano, Italy, 2002. (In Italian) [Google Scholar]\nFigure 1. Iterative procedure for the calculation of $L 1$.\nFigure 2. Voltage phasors diagram for $L 1$ in the case of $c o s φ 2 = 1$.\nFigure 3. Iterative procedure for the calculation of $L 2$.\nFigure 4. Voltage phasors diagram for $Δ v = Δ v m a x$, $c o s φ 2 = 1$.\nFigure 5. Equivalent system for the steady-state stability analysis.\nFigure 6. Considered layout of the shunt compensated radial line.\nFigure 7.
Radial lines with reactive compensation: The voltage phasor diagram.\nFigure 7. Radial lines with reactive compensation: The voltage phasor diagram.\nFigure 8. Power circles related to the thermal limit at the receiving end (circle C2) and to the voltage drop limit (circle C1).\nFigure 8. Power circles related to the thermal limit at the receiving end (circle C2) and to the voltage drop limit (circle C1).\nFigure 9. Intersection of the power circles related to the voltage drop limit (circle C1) and receiving/sending end thermal limit (circles C2 and C3 respectively).\nFigure 9. Intersection of the power circles related to the voltage drop limit (circle C1) and receiving/sending end thermal limit (circles C2 and C3 respectively).\nFigure 10. Power circles related to the voltage drop limit (C1), thermal limit at the sending end (C3), and power losses limit (C4).\nFigure 10. Power circles related to the voltage drop limit (C1), thermal limit at the sending end (C3), and power losses limit (C4).\nFigure 11. Loadability curves of uncompensated and compensated lines (case: ).\nFigure 11. Loadability curves of uncompensated and compensated lines (case: ).\nFigure 12. Modulus of the current at sending and receiving ends (case: $c o s φ 2 = 1$).\nFigure 12. Modulus of the current at sending and receiving ends (case: $c o s φ 2 = 1$).\nTable 1. Per unit length line parameters.\nTable 1. Per unit length line parameters.\nParameterUnitValue\nResistance, $r$ $Ω / km$0.021\nReactance, $x$ $Ω / km$0.271\nConductance, $g$ $S / km$4 × 10−9\nSusceptance, $b$ $S / km$4.21 × 10−6" ]
https://hackage-origin.haskell.org/package/base-4.12.0.0/docs/src/Data.List.NonEmpty.html
```
{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE Trustworthy #-} -- can't use Safe due to IsList instance
{-# LANGUAGE TypeFamilies #-}

-----------------------------------------------------------------------------
-- |
-- Module      : Data.List.NonEmpty
-- Copyright   : (C) 2011-2015 Edward Kmett,
--               (C) 2010 Tony Morris, Oliver Taylor, Eelis van der Weegen
--
-- Stability   : provisional
-- Portability : portable
--
-- A 'NonEmpty' list is one which always has at least one element, but
-- is otherwise identical to the traditional list type in complexity
-- and in terms of API. You will almost certainly want to import this
-- module @qualified@.
--
-- @since 4.9.0.0
----------------------------------------------------------------------------

module Data.List.NonEmpty (
   -- * The type of non-empty streams
     NonEmpty(..)

   -- * Non-empty stream transformations
   , map         -- :: (a -> b) -> NonEmpty a -> NonEmpty b
   , intersperse -- :: a -> NonEmpty a -> NonEmpty a
   , scanl       -- :: Foldable f => (b -> a -> b) -> b -> f a -> NonEmpty b
   , scanr       -- :: Foldable f => (a -> b -> b) -> b -> f a -> NonEmpty b
   , scanl1      -- :: (a -> a -> a) -> NonEmpty a -> NonEmpty a
   , scanr1      -- :: (a -> a -> a) -> NonEmpty a -> NonEmpty a
   , transpose   -- :: NonEmpty (NonEmpty a) -> NonEmpty (NonEmpty a)
   , sortBy      -- :: (a -> a -> Ordering) -> NonEmpty a -> NonEmpty a
   , sortWith    -- :: Ord o => (a -> o) -> NonEmpty a -> NonEmpty a
   -- * Basic functions
   , length      -- :: NonEmpty a -> Int
   , head        -- :: NonEmpty a -> a
   , tail        -- :: NonEmpty a -> [a]
   , last        -- :: NonEmpty a -> a
   , init        -- :: NonEmpty a -> [a]
   , (<|), cons  -- :: a -> NonEmpty a -> NonEmpty a
   , uncons      -- :: NonEmpty a -> (a, Maybe (NonEmpty a))
   , unfoldr     -- :: (a -> (b, Maybe a)) -> a -> NonEmpty b
   , sort        -- :: NonEmpty a -> NonEmpty a
   , reverse     -- :: NonEmpty a -> NonEmpty a
   , inits       -- :: Foldable f => f a -> NonEmpty a
   , tails       -- :: Foldable f => f a -> NonEmpty a
   -- * Building streams
   , iterate     -- :: (a -> a) -> a -> NonEmpty a
   , repeat      -- :: a -> NonEmpty a
   , cycle       -- :: NonEmpty a -> NonEmpty a
   , unfold      -- :: (a -> (b, Maybe a) -> a -> NonEmpty b
   , insert      -- :: (Foldable f, Ord a) => a -> f a -> NonEmpty a
   , some1       -- :: Alternative f => f a -> f (NonEmpty a)
   -- * Extracting sublists
   , take        -- :: Int -> NonEmpty a -> [a]
   , drop        -- :: Int -> NonEmpty a -> [a]
   , splitAt     -- :: Int -> NonEmpty a -> ([a], [a])
   , takeWhile   -- :: Int -> NonEmpty a -> [a]
   , dropWhile   -- :: Int -> NonEmpty a -> [a]
   , span        -- :: Int -> NonEmpty a -> ([a],[a])
   , break       -- :: Int -> NonEmpty a -> ([a],[a])
   , filter      -- :: (a -> Bool) -> NonEmpty a -> [a]
   , partition   -- :: (a -> Bool) -> NonEmpty a -> ([a],[a])
   , group       -- :: Foldable f => Eq a => f a -> [NonEmpty a]
   , groupBy     -- :: Foldable f => (a -> a -> Bool) -> f a -> [NonEmpty a]
   , groupWith     -- :: (Foldable f, Eq b) => (a -> b) -> f a -> [NonEmpty a]
   , groupAllWith  -- :: (Foldable f, Ord b) => (a -> b) -> f a -> [NonEmpty a]
   , group1      -- :: Eq a => NonEmpty a -> NonEmpty (NonEmpty a)
   , groupBy1    -- :: (a -> a -> Bool) -> NonEmpty a -> NonEmpty (NonEmpty a)
   , groupWith1    -- :: (Foldable f, Eq b) => (a -> b) -> f a -> NonEmpty (NonEmpty a)
   , groupAllWith1 -- :: (Foldable f, Ord b) => (a -> b) -> f a -> NonEmpty (NonEmpty a)
   -- * Sublist predicates
   , isPrefixOf  -- :: Foldable f => f a -> NonEmpty a -> Bool
   -- * \"Set\" operations
   , nub         -- :: Eq a => NonEmpty a -> NonEmpty a
   , nubBy       -- :: (a -> a -> Bool) -> NonEmpty a -> NonEmpty a
   -- * Indexing streams
   , (!!)        -- :: NonEmpty a -> Int -> a
   -- * Zipping and unzipping streams
   , zip         -- :: NonEmpty a -> NonEmpty b -> NonEmpty (a,b)
   , zipWith     -- :: (a -> b -> c) -> NonEmpty a -> NonEmpty b -> NonEmpty c
   , unzip       -- :: NonEmpty (a, b) -> (NonEmpty a, NonEmpty b)
   -- * Converting to and from a list
   , fromList    -- :: [a] -> NonEmpty a
   , toList      -- :: NonEmpty a -> [a]
   , nonEmpty    -- :: [a] -> Maybe (NonEmpty a)
   , xor         -- :: NonEmpty a -> Bool
   ) where

import Prelude hiding (break, cycle, drop, dropWhile,
                       filter, foldl, foldr, head, init, iterate,
                       last, length, map, repeat, reverse,
                       scanl, scanl1, scanr, scanr1, span,
                       splitAt, tail, take, takeWhile,
                       unzip, zip, zipWith, (!!))
import qualified Prelude

import Control.Applicative (Applicative (..), Alternative (many))
import Data.Foldable hiding (length, toList)
import qualified Data.Foldable as Foldable
import Data.Function (on)
import qualified Data.List as List
import Data.Ord (comparing)
import GHC.Base (NonEmpty(..))

infixr 5 <|

-- | Number of elements in 'NonEmpty' list.
length :: NonEmpty a -> Int
length (_ :| xs) = 1 + Prelude.length xs

-- | Compute n-ary logic exclusive OR operation on 'NonEmpty' list.
xor :: NonEmpty Bool -> Bool
xor (x :| xs) = foldr xor' x xs
  where xor' True y  = not y
        xor' False y = y

-- | 'unfold' produces a new stream by repeatedly applying the unfolding
-- function to the seed value to produce an element of type @b@ and a new
-- seed value. When the unfolding function returns 'Nothing' instead of
-- a new seed value, the stream ends.
unfold :: (a -> (b, Maybe a)) -> a -> NonEmpty b
unfold f a = case f a of
  (b, Nothing) -> b :| []
  (b, Just c)  -> b <| unfold f c
{-# DEPRECATED unfold "Use unfoldr" #-}
-- Deprecated in 8.2.1, remove in 8.4

-- | 'nonEmpty' efficiently turns a normal list into a 'NonEmpty' stream,
-- producing 'Nothing' if the input is empty.
nonEmpty :: [a] -> Maybe (NonEmpty a)
nonEmpty []     = Nothing
nonEmpty (a:as) = Just (a :| as)

-- | 'uncons' produces the first element of the stream, and a stream of the
-- remaining elements, if any.
uncons :: NonEmpty a -> (a, Maybe (NonEmpty a))
uncons ~(a :| as) = (a, nonEmpty as)

-- | The 'unfoldr' function is analogous to "Data.List"'s
-- 'Data.List.unfoldr' operation.
unfoldr :: (a -> (b, Maybe a)) -> a -> NonEmpty b
unfoldr f a = case f a of
  (b, mc) -> b :| maybe [] go mc
  where
    go c = case f c of
      (d, me) -> d : maybe [] go me

-- | Extract the first element of the stream.
head :: NonEmpty a -> a
head ~(a :| _) = a

-- | Extract the possibly-empty tail of the stream.
tail :: NonEmpty a -> [a]
tail ~(_ :| as) = as

-- | Extract the last element of the stream.
last :: NonEmpty a -> a
last ~(a :| as) = List.last (a : as)

-- | Extract everything except the last element of the stream.
init :: NonEmpty a -> [a]
init ~(a :| as) = List.init (a : as)

-- | Prepend an element to the stream.
(<|) :: a -> NonEmpty a -> NonEmpty a
a <| ~(b :| bs) = a :| b : bs

-- | Synonym for '<|'.
cons :: a -> NonEmpty a -> NonEmpty a
cons = (<|)

-- | Sort a stream.
sort :: Ord a => NonEmpty a -> NonEmpty a
sort = lift List.sort

-- | Converts a normal list to a 'NonEmpty' stream.
--
-- Raises an error if given an empty list.
fromList :: [a] -> NonEmpty a
fromList (a:as) = a :| as
fromList [] = errorWithoutStackTrace "NonEmpty.fromList: empty list"

-- | Convert a stream to a normal list efficiently.
toList :: NonEmpty a -> [a]
toList ~(a :| as) = a : as

-- | Lift list operations to work on a 'NonEmpty' stream.
--
-- /Beware/: If the provided function returns an empty list,
-- this will raise an error.
lift :: Foldable f => ([a] -> [b]) -> f a -> NonEmpty b
lift f = fromList . f . Foldable.toList

-- | Map a function over a 'NonEmpty' stream.
map :: (a -> b) -> NonEmpty a -> NonEmpty b
map f ~(a :| as) = f a :| fmap f as

-- | The 'inits' function takes a stream @xs@ and returns all the
-- finite prefixes of @xs@.
inits :: Foldable f => f a -> NonEmpty [a]
inits = fromList . List.inits . Foldable.toList

-- | The 'tails' function takes a stream @xs@ and returns all the
-- suffixes of @xs@.
tails :: Foldable f => f a -> NonEmpty [a]
tails = fromList . List.tails . Foldable.toList

-- | @'insert' x xs@ inserts @x@ into the last position in @xs@ where it
-- is still less than or equal to the next element. In particular, if the
-- list is sorted beforehand, the result will also be sorted.
insert :: (Foldable f, Ord a) => a -> f a -> NonEmpty a
insert a = fromList . List.insert a . Foldable.toList

-- | @'some1' x@ sequences @x@ one or more times.
some1 :: Alternative f => f a -> f (NonEmpty a)
some1 x = liftA2 (:|) x (many x)

-- | 'scanl' is similar to 'foldl', but returns a stream of successive
-- reduced values from the left:
--
-- > scanl f z [x1, x2, ...] == z :| [z `f` x1, (z `f` x1) `f` x2, ...]
--
-- Note that
--
-- > last (scanl f z xs) == foldl f z xs.
scanl :: Foldable f => (b -> a -> b) -> b -> f a -> NonEmpty b
scanl f z = fromList . List.scanl f z . Foldable.toList

-- | 'scanr' is the right-to-left dual of 'scanl'.
-- Note that
--
-- > head (scanr f z xs) == foldr f z xs.
scanr :: Foldable f => (a -> b -> b) -> b -> f a -> NonEmpty b
scanr f z = fromList . List.scanr f z . Foldable.toList

-- | 'scanl1' is a variant of 'scanl' that has no starting value argument:
--
-- > scanl1 f [x1, x2, ...] == x1 :| [x1 `f` x2, x1 `f` (x2 `f` x3), ...]
scanl1 :: (a -> a -> a) -> NonEmpty a -> NonEmpty a
scanl1 f ~(a :| as) = fromList (List.scanl f a as)

-- | 'scanr1' is a variant of 'scanr' that has no starting value argument.
scanr1 :: (a -> a -> a) -> NonEmpty a -> NonEmpty a
scanr1 f ~(a :| as) = fromList (List.scanr1 f (a:as))

-- | 'intersperse x xs' alternates elements of the list with copies of @x@.
--
-- > intersperse 0 (1 :| [2,3]) == 1 :| [0,2,0,3]
intersperse :: a -> NonEmpty a -> NonEmpty a
intersperse a ~(b :| bs) = b :| case bs of
    [] -> []
    _  -> a : List.intersperse a bs

-- | @'iterate' f x@ produces the infinite sequence
-- of repeated applications of @f@ to @x@.
--
-- > iterate f x = x :| [f x, f (f x), ..]
iterate :: (a -> a) -> a -> NonEmpty a
iterate f a = a :| List.iterate f (f a)

-- | @'cycle' xs@ returns the infinite repetition of @xs@:
--
-- > cycle (1 :| [2,3]) = 1 :| [2,3,1,2,3,...]
cycle :: NonEmpty a -> NonEmpty a
cycle = fromList . List.cycle . toList

-- | 'reverse' a finite NonEmpty stream.
reverse :: NonEmpty a -> NonEmpty a
reverse = lift List.reverse

-- | @'repeat' x@ returns a constant stream, where all elements are
-- equal to @x@.
repeat :: a -> NonEmpty a
repeat a = a :| List.repeat a

-- | @'take' n xs@ returns the first @n@ elements of @xs@.
take :: Int -> NonEmpty a -> [a]
take n = List.take n . toList

-- | @'drop' n xs@ drops the first @n@ elements off the front of
-- the sequence @xs@.
drop :: Int -> NonEmpty a -> [a]
drop n = List.drop n . toList

-- | @'splitAt' n xs@ returns a pair consisting of the prefix of @xs@
-- of length @n@ and the remaining stream immediately following this prefix.
--
-- > 'splitAt' n xs == ('take' n xs, 'drop' n xs)
-- > xs == ys ++ zs where (ys, zs) = 'splitAt' n xs
splitAt :: Int -> NonEmpty a -> ([a],[a])
splitAt n = List.splitAt n . toList

-- | @'takeWhile' p xs@ returns the longest prefix of the stream
-- @xs@ for which the predicate @p@ holds.
takeWhile :: (a -> Bool) -> NonEmpty a -> [a]
takeWhile p = List.takeWhile p . toList

-- | @'dropWhile' p xs@ returns the suffix remaining after
-- @'takeWhile' p xs@.
dropWhile :: (a -> Bool) -> NonEmpty a -> [a]
dropWhile p = List.dropWhile p . toList

-- | @'span' p xs@ returns the longest prefix of @xs@ that satisfies
-- @p@, together with the remainder of the stream.
--
-- > 'span' p xs == ('takeWhile' p xs, 'dropWhile' p xs)
-- > xs == ys ++ zs where (ys, zs) = 'span' p xs
span :: (a -> Bool) -> NonEmpty a -> ([a], [a])
span p = List.span p . toList

-- | The @'break' p@ function is equivalent to @'span' (not . p)@.
break :: (a -> Bool) -> NonEmpty a -> ([a], [a])
break p = span (not . p)

-- | @'filter' p xs@ removes any elements from @xs@ that do not satisfy @p@.
filter :: (a -> Bool) -> NonEmpty a -> [a]
filter p = List.filter p . toList

-- | The 'partition' function takes a predicate @p@ and a stream
-- @xs@, and returns a pair of lists. The first list corresponds to the
-- elements of @xs@ for which @p@ holds; the second corresponds to the
-- elements of @xs@ for which @p@ does not hold.
--
-- > 'partition' p xs = ('filter' p xs, 'filter' (not . p) xs)
partition :: (a -> Bool) -> NonEmpty a -> ([a], [a])
partition p = List.partition p . toList

-- | The 'group' function takes a stream and returns a list of
-- streams such that flattening the resulting list is equal to the
-- argument. Moreover, each stream in the resulting list
-- contains only equal elements. For example, in list notation:
--
-- > 'group' $ 'cycle' "Mississippi"
-- >   = "M" : "i" : "ss" : "i" : "ss" : "i" : "pp" : "i" : "M" : "i" : ...
group :: (Foldable f, Eq a) => f a -> [NonEmpty a]
group = groupBy (==)

-- | 'groupBy' operates like 'group', but uses the provided equality
-- predicate instead of '=='.
groupBy :: Foldable f => (a -> a -> Bool) -> f a -> [NonEmpty a]
groupBy eq0 = go eq0 . Foldable.toList
  where
    go _  []       = []
    go eq (x : xs) = (x :| ys) : groupBy eq zs
      where (ys, zs) = List.span (eq x) xs

-- | 'groupWith' operates like 'group', but uses the provided projection when
-- comparing for equality
groupWith :: (Foldable f, Eq b) => (a -> b) -> f a -> [NonEmpty a]
groupWith f = groupBy ((==) `on` f)

-- | 'groupAllWith' operates like 'groupWith', but sorts the list
-- first so that each equivalence class has, at most, one list in the
-- output
groupAllWith :: (Ord b) => (a -> b) -> [a] -> [NonEmpty a]
groupAllWith f = groupWith f . List.sortBy (compare `on` f)

-- | 'group1' operates like 'group', but uses the knowledge that its
-- input is non-empty to produce guaranteed non-empty output.
group1 :: Eq a => NonEmpty a -> NonEmpty (NonEmpty a)
group1 = groupBy1 (==)

-- | 'groupBy1' is to 'group1' as 'groupBy' is to 'group'.
groupBy1 :: (a -> a -> Bool) -> NonEmpty a -> NonEmpty (NonEmpty a)
groupBy1 eq (x :| xs) = (x :| ys) :| groupBy eq zs
  where (ys, zs) = List.span (eq x) xs

-- | 'groupWith1' is to 'group1' as 'groupWith' is to 'group'
groupWith1 :: (Eq b) => (a -> b) -> NonEmpty a -> NonEmpty (NonEmpty a)
groupWith1 f = groupBy1 ((==) `on` f)

-- | 'groupAllWith1' is to 'groupWith1' as 'groupAllWith' is to 'groupWith'
groupAllWith1 :: (Ord b) => (a -> b) -> NonEmpty a -> NonEmpty (NonEmpty a)
groupAllWith1 f = groupWith1 f . sortWith f

-- | The 'isPrefix' function returns @True@ if the first argument is
-- a prefix of the second.
isPrefixOf :: Eq a => [a] -> NonEmpty a -> Bool
isPrefixOf [] _ = True
isPrefixOf (y:ys) (x :| xs) = (y == x) && List.isPrefixOf ys xs

-- | @xs !! n@ returns the element of the stream @xs@ at index
-- @n@. Note that the head of the stream has index 0.
--
-- /Beware/: a negative or out-of-bounds index will cause an error.
(!!) :: NonEmpty a -> Int -> a
(!!) ~(x :| xs) n
  | n == 0    = x
  | n > 0     = xs List.!! (n - 1)
  | otherwise = errorWithoutStackTrace "NonEmpty.!! negative argument"
infixl 9 !!

-- | The 'zip' function takes two streams and returns a stream of
-- corresponding pairs.
zip :: NonEmpty a -> NonEmpty b -> NonEmpty (a,b)
zip ~(x :| xs) ~(y :| ys) = (x, y) :| List.zip xs ys

-- | The 'zipWith' function generalizes 'zip'. Rather than tupling
-- the elements, the elements are combined using the function
-- passed as the first argument.
zipWith :: (a -> b -> c) -> NonEmpty a -> NonEmpty b -> NonEmpty c
zipWith f ~(x :| xs) ~(y :| ys) = f x y :| List.zipWith f xs ys

-- | The 'unzip' function is the inverse of the 'zip' function.
unzip :: Functor f => f (a,b) -> (f a, f b)
unzip xs = (fst <$> xs, snd <$> xs)

-- | The 'nub' function removes duplicate elements from a list. In
-- particular, it keeps only the first occurrence of each element.
-- (The name 'nub' means \'essence\'.)
-- It is a special case of 'nubBy', which allows the programmer to
-- supply their own inequality test.
nub :: Eq a => NonEmpty a -> NonEmpty a
nub = nubBy (==)

-- | The 'nubBy' function behaves just like 'nub', except it uses a
-- user-supplied equality predicate instead of the overloaded '=='
-- function.
nubBy :: (a -> a -> Bool) -> NonEmpty a -> NonEmpty a
nubBy eq (a :| as) = a :| List.nubBy eq (List.filter (\b -> not (eq a b)) as)

-- | 'transpose' for 'NonEmpty', behaves the same as 'Data.List.transpose'
-- The rows/columns need not be the same length, in which case
-- > transpose . transpose /= id
transpose :: NonEmpty (NonEmpty a) -> NonEmpty (NonEmpty a)
transpose = fmap fromList
          . fromList . List.transpose . toList
          . fmap toList

-- | 'sortBy' for 'NonEmpty', behaves the same as 'Data.List.sortBy'
sortBy :: (a -> a -> Ordering) -> NonEmpty a -> NonEmpty a
sortBy f = lift (List.sortBy f)

-- | 'sortWith' for 'NonEmpty', behaves the same as:
--
-- > sortBy . comparing
sortWith :: Ord o => (a -> o) -> NonEmpty a -> NonEmpty a
sortWith = sortBy . comparing
```
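The listing above is the Haskell source itself. For readers outside Haskell, a rough Python sketch of the semantics of three of these operations (`uncons`, `groupBy`, and `(!!)`) may help; the `(head, tail_list)` tuple encoding of a non-empty list is my own assumption here, chosen to mirror the Haskell constructor `a :| as`, and is not part of the module:

```python
# A NonEmpty list is modeled as a (head, tail) pair, e.g. (1, [2, 3]),
# mirroring the Haskell constructor  a :| as.

def uncons(ne):
    """Mirror uncons :: NonEmpty a -> (a, Maybe (NonEmpty a)):
    return (head, rest), where rest is None for a singleton."""
    head, tail = ne
    if not tail:
        return (head, None)
    return (head, (tail[0], tail[1:]))

def group_by(eq, xs):
    """Mirror groupBy: split a plain list into maximal runs of eq-equal
    elements, each run returned as a (head, tail) pair."""
    out = []
    i = 0
    while i < len(xs):
        j = i + 1
        while j < len(xs) and eq(xs[i], xs[j]):
            j += 1
        out.append((xs[i], xs[i + 1:j]))
        i = j
    return out

def index(ne, n):
    """Mirror (!!): zero-based indexing, error on a negative index."""
    if n < 0:
        raise IndexError("NonEmpty.!! negative argument")
    head, tail = ne
    return head if n == 0 else tail[n - 1]
```

For example, `group_by` on the letters of `"Mississippi"` reproduces the run structure `"M" : "i" : "ss" : "i" : "ss" : "i" : "pp" : "i"` from the module's `group` docstring.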
https://www.funwithpuzzles.com/2019/04/maths-logic-puzzle.html
This is a very simple Maths Logic Puzzle which will tickle your mind. In this maths logic question, you are shown some equations that contain numbers and letters related to each other. Your challenge is to find the logical relationship among the given letters and numbers, and then find the value of the missing number that replaces the question mark.

Can you solve this Maths Logic Puzzle?

The answer to this "Maths Logic Puzzle" can be viewed by clicking on the answer button.
https://www.geeksforgeeks.org/tensorflow-js-tf-encodestring-function/
# Tensorflow.js tf.encodeString() Function

Tensorflow.js is an open-source library developed by Google for running machine learning models as well as deep learning neural networks in the browser or Node.js environment.

The .encodeString() function (exposed as tf.util.encodeString) is used to encode the given string into bytes using the specified encoding scheme.

Syntax:

```
tf.util.encodeString(s, encoding?)
```

Parameters:

- s: The string to be encoded. It is of type string.
- encoding: The encoding scheme to be used. The default value is "utf-8".

Return Value: It returns a Uint8Array.

Example 1: When the encoding scheme is not provided.

```javascript
// Importing the tensorflow.js library
import * as tf from "@tensorflow/tfjs"

// Calling the tf.util.encodeString() method and
// printing the output
console.log(tf.util.encodeString("fghi"));
```

Output:

```
102,103,104,105
```

Example 2: When the encoding scheme is provided.

```javascript
// Importing the tensorflow.js library
import * as tf from "@tensorflow/tfjs"

// Defining the string and the encoding scheme
var str = "1234"
var encoding = "utf-8"

// Calling the tf.util.encodeString() method and
// printing the output
var x = tf.util.encodeString(str, encoding);
console.log(x);
```

Output:

```
49,50,51,52
```
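The two outputs above are simply the UTF-8 byte values of the characters. This can be cross-checked outside TensorFlow.js; the small helper below is my own sketch using Python's `str.encode`, which produces the same byte values for these inputs:

```python
def encode_string(s: str, encoding: str = "utf-8") -> list:
    """Mimic tf.util.encodeString: return the byte values of a string
    under the given encoding (Uint8Array in TF.js, a list of ints here)."""
    return list(s.encode(encoding))

print(encode_string("fghi"))           # [102, 103, 104, 105]
print(encode_string("1234", "utf-8"))  # [49, 50, 51, 52]
```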
https://greio.com/blog/3/gre-probability-questions-tips-and-tricks
12 May 2015

# GRE Probability Questions: Tips and Tricks

"Probability Theory is nothing but common sense reduced to calculation"
-Pierre-Simon Laplace

The GRE has its fair share of probability questions. One interesting fact about the probability questions asked on the GRE is that it is often possible to answer them without resorting to complex calculations involving Bayes' theorem or the binomial distribution. Even if you don't arrive at the solution directly, you can often reason well enough to solve the question by elimination or other tricks.

The purpose of this article is to show you such questions, which can be solved with some quick calculations and a bit of logic. We discuss two classes of such questions below.

## Questions that can be solved by using some logic

Consider the following question:

Question: A fair six-sided die is rolled two times. What is the probability that the value of the second roll will be less than the value of the first roll?

First, let's look at how the nerd in you will want to solve this question. Given below is the formal proof:

Solution 1:
$$\text{Pr}[\text{second} > \text{first}] + \text{Pr}[\text{second} < \text{first}] + \text{Pr}[\text{second} = \text{first}] = 1$$
By symmetry, $\text{Pr}[\text{second} > \text{first}] = \text{Pr}[\text{second} < \text{first}]$, so
$$\text{Pr}[\text{second} < \text{first}] = \frac{1 - \text{Pr}[\text{second} = \text{first}]}{2} = \frac{1 - \frac{1}{6}}{2} = \frac{5}{12}$$

However, let's try to reason about this question and arrive at the answer just by logic:

Solution 2:
1 out of 6 times, both rolls will show the same number (e.g., 1-1, 2-2, etc.). Therefore, 5 out of 6 times the two numbers will be different.
Further, the chance that the first roll is greater than the second must equal the chance that the second roll is greater than the first (by symmetry). Hence, in the remaining 5 out of 6 outcomes, half of the time the first number will be greater than the second, and the other half of the time it will be smaller. So 2.5 times out of 6, the second roll will be less than the first.
2.5 out of 6 = 5 out of 12.
Therefore, the answer is $\frac5{12}$.

There was no maths involved in the second solution.

## Trick questions with extraneous data

Such questions generally have a very straightforward answer. The only catch is that you should be able to quickly identify the trick, which is usually extraneous data that doesn't affect the answer.

Examples of such questions:

Question: A fair coin, when tossed 10 times, gives 10 consecutive heads. What is the probability of getting a tail when it is tossed for the eleventh time?

The trick here is to recall that individual coin tosses are independent events: one toss doesn't have any effect on the outcome of the next toss. The probability of getting a tail is $\frac12$ whenever a fair coin is tossed, irrespective of the outcomes of the previous tosses.

Another example of this type:

Question: What is the probability that a number amongst the first 1000 positive integers is divisible by 8?

Don't start counting the multiples of 8 directly; the figure of 1000 is a red herring. The multiples are 8, 16, 24, 32, and so on, so 1 in every 8 numbers is a multiple of 8, even if we consider the first million integers.
Therefore, the probability is $\frac18$.
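Both answers are small enough to verify by brute-force enumeration; the following sketch (mine, not part of the article) checks the die question and the divisibility question exactly:

```python
from fractions import Fraction

# Enumerate all 36 equally likely ordered outcomes of two die rolls and
# count those where the second roll is strictly less than the first.
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]
favourable = sum(1 for a, b in outcomes if b < a)
p_second_less = Fraction(favourable, len(outcomes))
print(p_second_less)  # 5/12

# Count the multiples of 8 among the first 1000 positive integers.
multiples = sum(1 for n in range(1, 1001) if n % 8 == 0)
p_div_by_8 = Fraction(multiples, 1000)
print(p_div_by_8)  # 1/8
```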
https://flylib.com/books/en/3.287.1.219/1/
The EOQ Model with Noninstantaneous Receipt

A variation of the basic EOQ model is achieved when the assumption that orders are received all at once is relaxed. This version of the EOQ model is known as the noninstantaneous receipt model, also referred to as the gradual usage, or production lot size, model. In this EOQ variation, the order quantity is received gradually over time and the inventory level is depleted at the same time it is being replenished. This is a situation most commonly found when the inventory user is also the producer, as, for example, in a manufacturing operation where a part is produced to use in a larger assembly. This situation can also occur when orders are delivered gradually over time or when the retailer and producer of a product are one and the same. The noninstantaneous receipt model is illustrated graphically in Figure 16.6, which highlights the difference between this variation and the basic EOQ model: it relaxes the assumption that Q is received all at once.

The ordering cost component of the basic EOQ model does not change as a result of the gradual replenishment of the inventory level because it is dependent only on the number of annual orders. However, the carrying cost component is not the same for this model variation because average inventory is different. In the basic EOQ model, average inventory was half the maximum inventory level, or Q/2, but in this variation the maximum inventory level is not simply Q; it is an amount somewhat lower than Q, adjusted for the fact that the order quantity is depleted during the order receipt period.

To determine the average inventory level, we define the following parameters that are unique to this model:

p = daily rate at which the order is received over time, also known as the production rate

d = daily rate at which inventory is demanded

The demand rate cannot exceed the production rate because we are still assuming that no shortages are possible, and if d = p, then there is no order size because items are used as fast as they are produced. Thus, for this model, the production rate must exceed the demand rate, or p > d.

Observing Figure 16.6, the time required to receive an order is the order quantity divided by the rate at which the order is received, or Q/p. For example, if the order size is 100 units and the production rate, p, is 20 units per day, the order will be received in 5 days. The amount of inventory that will be depleted or used up during this time period is determined by multiplying by the demand rate, or (Q/p)d. For example, if it takes 5 days to receive the order and during this time inventory is depleted at the rate of 2 units per day, then a total of 10 units is used. As a result, the maximum amount of inventory that is on hand is the order size minus the amount depleted during the receipt period, computed as follows and shown earlier in Figure 16.6:

Maximum inventory level = Q − (Q/p)d = Q(1 − d/p)

Because this is the maximum inventory level, the average inventory level is determined by dividing this amount by 2, as follows:

Average inventory level = (Q/2)(1 − d/p)

The total carrying cost, using this function for average inventory, is

Total carrying cost = Cc(Q/2)(1 − d/p)

Thus, the total annual inventory cost is determined according to the following formula:

TC = CoD/Q + Cc(Q/2)(1 − d/p)

The total inventory cost is a function of two other costs, just as in our previous EOQ model. Thus, the minimum inventory cost occurs when the total cost curve is lowest and where the carrying cost curve and ordering cost curve intersect (see Figure 16.5). Therefore, to find the optimal Qopt, we equate total carrying cost with total ordering cost:

Cc(Qopt/2)(1 − d/p) = CoD/Qopt, which yields Qopt = sqrt(2CoD / [Cc(1 − d/p)])

For our previous example we will now assume that the I-75 Carpet Discount Store has its own manufacturing facility, in which it produces Super Shag carpet. We will further assume that the ordering cost, Co, is the cost of setting up the production process to make Super Shag carpet. Recall that Cc = $0.75 per yard and D = 10,000 yards per year. The manufacturing facility operates the same days the store is open (i.e., 311 days) and produces 150 yards of the carpet per day. The optimal order size, the total inventory cost, the length of time to receive an order, the number of orders per year, and the maximum inventory level are computed as follows. The daily demand rate is d = D/311, and the optimal order size is determined from

Qopt = sqrt(2CoD / [Cc(1 − d/p)])

This value is substituted into the total cost formula above to determine the total minimum annual inventory cost. The length of time to receive an order for this type of manufacturing operation is commonly called the length of the production run. It is computed as follows:

Length of production run = Qopt/p days

The number of orders per year is actually the number of production runs that will be made, computed as follows:

Number of production runs = D/Qopt

Finally, the maximum inventory level is computed as follows:

Maximum inventory level = Qopt(1 − d/p)

Introduction to Management Science (10th Edition)
ISBN: 0136064361
Year: 2006
Pages: 358
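The computations above can be reproduced directly from the formulas. The sketch below is illustrative only: the excerpt recalls Cc and D but not the setup cost, so Co = $150 per production run is an assumed value; the remaining parameters are from the carpet example.

```python
import math

# Noninstantaneous-receipt (production lot size) EOQ formulas.
Co = 150.0           # setup (ordering) cost per production run, $ (assumed value)
Cc = 0.75            # carrying cost, $ per yard per year
D = 10_000.0         # annual demand, yards
p = 150.0            # production rate, yards per day
d = D / 311.0        # daily demand rate (311 operating days per year)

q_opt = math.sqrt(2 * Co * D / (Cc * (1 - d / p)))     # optimal run size
tc = Co * D / q_opt + Cc * (q_opt / 2) * (1 - d / p)   # total annual inventory cost
run_days = q_opt / p                                   # length of a production run
runs_per_year = D / q_opt                              # number of production runs
max_inventory = q_opt * (1 - d / p)                    # maximum inventory level

print(f"Q_opt = {q_opt:.1f} yd, TC = ${tc:.2f}")
print(f"run length = {run_days:.2f} days, {runs_per_year:.2f} runs/yr")
print(f"max inventory = {max_inventory:.1f} yd")
```

Note that at Qopt the carrying and ordering cost components come out equal, which is exactly the intersection condition used to derive the formula.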
https://rustc-dev-guide.rust-lang.org/generics.html
# Generics and substitutions

Given a generic type `MyType<A, B, …>`, we may want to swap out the generics `A, B, …` for some other types (possibly other generics or concrete types). We do this a lot while doing type inference, type checking, and trait solving. Conceptually, during these routines, we may find out that one type is equal to another type and want to swap one out for the other and then swap that out for another type and so on until we eventually get some concrete types (or an error).

In rustc this is done using the `SubstsRef` that we mentioned above (“substs” = “substitutions”). Conceptually, you can think of `SubstsRef` as a list of types that are to be substituted for the generic type parameters of the ADT.

`SubstsRef` is a type alias of `List<GenericArg<'tcx>>` (see the `List` rustdocs). `GenericArg` is essentially a space-efficient wrapper around `GenericArgKind`, which is an enum indicating what kind of generic the type parameter is (type, lifetime, or const). Thus, `SubstsRef` is conceptually like a `&'tcx [GenericArgKind<'tcx>]` slice (but it is actually a `List`).

So why do we use this `List` type instead of making it really a slice? It has the length "inline", so `&List` is only 32 bits. As a consequence, it cannot be "subsliced" (that only works if the length is out of line).

This also implies that you can check two `List`s for equality via `==` (which would not be possible for ordinary slices). This is precisely because they never represent a "sub-list", only the complete `List`, which has been hashed and interned.

So pulling it all together, let’s go back to our example above:

```rust
struct MyStruct<T>
```

- There would be an `AdtDef` (and corresponding `DefId`) for `MyStruct`.
- There would be a `TyKind::Param` (and corresponding `DefId`) for `T` (more on this later).
- There would be a `SubstsRef` containing the list `[GenericArgKind::Type(Ty(T))]`
  - The `Ty(T)` here is my shorthand for the entire other `ty::Ty` that has `TyKind::Param`, which we mentioned in the previous point.
- This is one `TyKind::Adt` containing the `AdtDef` of `MyStruct` with the `SubstsRef` above.

Finally, we will quickly mention the `Generics` type. It is used to give information about the type parameters of a type.

### Unsubstituted Generics

So above, recall that in our example the `MyStruct` struct had a generic type `T`. When we are (for example) type checking functions that use `MyStruct`, we will need to be able to refer to this type `T` without actually knowing what it is. In general, this is true inside all generic definitions: we need to be able to work with unknown types. This is done via `TyKind::Param` (which we mentioned in the example above).

Each `TyKind::Param` contains two things: the name and the index. In general, the index fully defines the parameter and is used by most of the code. The name is included for debug print-outs. There are two reasons for this. First, the index is convenient: it allows you to index directly into the list of generic arguments when substituting. Second, the index is more robust. For example, you could in principle have two distinct type parameters that use the same name, e.g. `impl<A> Foo<A> { fn bar<A>() { .. } }`, although the rules against shadowing make this difficult (but those language rules could change in the future).

The index of the type parameter is an integer indicating its order in the list of the type parameters. Moreover, we consider the list to include all of the type parameters from outer scopes. Consider the following example:

```rust
struct Foo<A, B> {
    // A would have index 0
    // B would have index 1

    .. // some fields
}
impl<X, Y> Foo<X, Y> {
    fn method<Z>() {
        // inside here, X, Y and Z are all in scope
        // X has index 0
        // Y has index 1
        // Z has index 2
    }
}
```

When we are working inside the generic definition, we will use `TyKind::Param` just like any other `TyKind`; it is just a type after all. However, if we want to use the generic type somewhere, then we will need to do substitutions.

For example, suppose that the `Foo<A, B>` type from the previous example has a field `x` that is a `Vec<A>`. Observe that `Vec` is also a generic type. We want to tell the compiler that the type parameter of `Vec` should be replaced with the `A` type parameter of `Foo<A, B>`. We do that with substitutions:

```rust
struct Foo<A, B> { // Adt(Foo, &[Param(0), Param(1)])
    x: Vec<A>,     // Vec<Param(0)>
    ..
}

fn bar(foo: Foo<u32, f32>) { // Adt(Foo, &[u32, f32])
    let y = foo.x; // Vec<Param(0)> => Vec<u32>
}
```

This example has a few different substitutions:

- In the definition of `Foo`, in the type of the field `x`, we replace `Vec`'s type parameter with `Param(0)`, the first parameter of `Foo<A, B>`, so that the type of `x` is `Vec<A>`.
- In the function `bar`, we specify that we want a `Foo<u32, f32>`. This means that we will substitute `Param(0)` and `Param(1)` with `u32` and `f32`.
- In the body of `bar`, we access `foo.x`, which has type `Vec<Param(0)>`, but `Param(0)` has been substituted for `u32`, so `foo.x` has type `Vec<u32>`.

Let’s look a bit more closely at that last substitution to see why we use indexes. If we want to find the type of `foo.x`, we can get the generic type of `x`, which is `Vec<Param(0)>`. Now we can take the index `0` and use it to find the right type substitution: looking at `Foo`'s `SubstsRef`, we have the list `[u32, f32]`; since we want to replace index `0`, we take the 0-th index of this list, which is `u32`. Voila!

You may have a couple of followup questions…

**`type_of`** How do we get the “generic type of `x`"? You can get the type of pretty much anything with the `tcx.type_of(def_id)` query. In this case, we would pass the `DefId` of the field `x`. The `type_of` query always returns the definition with the generics that are in scope of the definition. For example, `tcx.type_of(def_id_of_my_struct)` would return the “self-view” of `MyStruct`: `Adt(Foo, &[Param(0), Param(1)])`.

**`subst`** How do we actually do the substitutions? There is a function for that too! You use `subst` to replace a `SubstsRef` with another list of types.

Here is an example of actually using `subst` in the compiler. The exact details are not too important, but in this piece of code, we happen to be converting from the `rustc_hir::Ty` to a real `ty::Ty`. You can see that we first get some substitutions (`substs`). Then we call `type_of` to get a type and call `ty.subst(substs)` to get a new version of `ty` with the substitutions made.

Note on indices: It is possible for the indices in `Param` to not match with what we expect. For example, the index could be out of bounds or it could be the index of a lifetime when we were expecting a type. These sorts of errors would be caught earlier in the compiler when translating from a `rustc_hir::Ty` to a `ty::Ty`. If they occur later, that is a compiler bug.
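The index-based substitution scheme above is easy to model outside the compiler. The following is a toy Python model (not rustc code, and deliberately much simpler than `ty::Ty`): a `Param(i)` node is replaced by the `i`-th entry of the substitution list, recursing through nested generic types the way `subst` turns `Vec<Param(0)>` into `Vec<u32>`.

```python
from dataclasses import dataclass

# Toy model of rustc's index-based substitution (illustrative only).
@dataclass(frozen=True)
class Param:          # like TyKind::Param: fully identified by its index
    index: int

@dataclass(frozen=True)
class Adt:            # like TyKind::Adt: a named type with generic args
    name: str
    substs: tuple

def subst(ty, substs):
    """Replace every Param(i) in `ty` with substs[i], recursively."""
    if isinstance(ty, Param):
        return substs[ty.index]
    if isinstance(ty, Adt):
        return Adt(ty.name, tuple(subst(a, substs) for a in ty.substs))
    return ty  # a concrete type such as "u32" passes through unchanged

# Foo<A, B> with a field x: Vec<A>  ==>  the field type is Vec<Param(0)>.
field_ty = Adt("Vec", (Param(0),))

# In bar(foo: Foo<u32, f32>) the substitution list is [u32, f32].
print(subst(field_ty, ("u32", "f32")))   # Adt(name='Vec', substs=('u32',))
```

The point of the model is the same as in the text: the `Param` node carries nothing but an index, and substitution is just list indexing, which is why rustc prefers indices over names.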
http://www.kpubs.org/article/articleMain.kpubs?articleANo=E1PWAX_2014_v14n3_564
Rotor Initial Position Estimation Based on sDFT for Electrically Excited Synchronous Motors

Journal of Power Electronics. 2014. May, 14(3): 564-571
- Received: January 07, 2014
- Accepted: March 14, 2014
- Published: May 20, 2014

Qing-qing Yuan, Dept. of Information and Electronic Engineering, China University of Mining and Technology, Xu Zhou, China
Xiao-jie Wu, Dept. of Information and Electronic Engineering, China University of Mining and Technology, Xu Zhou, China ([email protected])
Peng Dai, Dept. of Information and Electronic Engineering, China University of Mining and Technology, Xu Zhou, China

Abstract

Rotor initial position is an important factor affecting the control performance of electrically excited synchronous motors. This study presents a novel method for estimating rotor initial position based on the sliding discrete Fourier transform (sDFT). By injecting an ac excitation into the rotor winding, an induced voltage is generated in the stator windings. Through this voltage, the stator flux can be obtained using a pure integral voltage model. Considering the influence from a dc bias and an integral initial value, we adopt the sDFT to extract the fundamental flux component. A quadrant identification model is designed to realize the accurate estimation of the rotor initial position. The sDFT, high-pass filter (HPF), and DFT are compared in detail, and the contrast between dc excitation and ac injection is determined. Simulation and experimental results verify that this novel method can eliminate the influence of dc bias and other adverse factors, as well as provide a basis for the control of motor drives.

I. INTRODUCTION

Electrically excited synchronous motors (EESMs) are widely applied in high-power industrial drives, such as metal rolling, mine hoisting, ship propulsion, and locomotive traction [1]-[4] because of their high efficiency and adjustable power factor. The stator frequency of an EESM is zero (fs = 0 Hz) at the starting time. An incorrect estimation of the rotor initial position at this time can affect the success of the start-up and influence the accuracy of the flux observing.

Numerous studies have been conducted to estimate rotor initial position. The traditional method involved injecting dc excitation into the rotor winding while the stator windings were not electrified, followed by the induced voltage and the flux amplitude. The corresponding angle can be obtained by a pure integral voltage model. However, the traditional method has several disadvantages, including: 1) fast attenuation of the induced voltage because of dc excitation; 2) a dc bias and an integral initial value resulting from a pure integral voltage model; and 3) other factors, such as inverter nonlinearity, dead zone, and high-frequency interference. The use of ac excitation as a replacement can solve the fast attenuation problem, but the integral initial value and the dc bias error persist. An improved voltage model can only address the issue of the initial value.

High-frequency signal injection, in which filter performance is an important factor that affects position estimation, is another common approach. Compared with the discrete Fourier transform (DFT), the sliding DFT (sDFT) can extract a signal spectrum with faster arithmetic speed and simpler implementation.

This paper presents a novel estimation method for the rotor initial position of EESM. First, an ac excitation is injected into the rotor winding. Second, flux components can be obtained with the induced voltage through a pure integral voltage model. Third, the fundamental flux is extracted along with the rotor initial position, as the sDFT settles the dc bias and the initial integral value.

The paper is organized as follows: Section II summarizes the problems with the traditional method. Section III describes the principle of sDFT and its difference from the high-pass filter (HPF) and DFT. Section IV proposes a novel method based on sDFT with ac rather than dc excitation. Simulation and experimental results are shown in Section V, and conclusions are provided in Section VI.

II. PROBLEMS WITH THE TRADITIONAL METHOD

A sine induced voltage, which is generated in the stator windings, can avoid fast attenuation when an ac excitation is injected into the rotor winding of the EESM. The stator voltage vector should be perpendicular to the stator flux under ideal conditions. Once a dc bias error occurs during the integration, the relationship among the stator-induced voltage, flux, and rotor initial position in the two-phase static coordinates is as follows:

u_sα = a·cos(ωt) + β_α        (1)

u_sβ = b·cos(ωt) + β_β        (2)

ψ_sα = ∫u_sα dt = (a/ω)·sin(ωt) + β_α·t,  ψ_sβ = ∫u_sβ dt = (b/ω)·sin(ωt) + β_β·t,  φ = arctan(ψ_sβ/ψ_sα)        (3)

where β_α, β_β are unknown dc biases; a and b represent the voltage amplitudes; θ = ωt and ω are the angle and angular frequency of the induced cosine voltage, respectively, with T denoting the integral cycle; ψ_sα, ψ_sβ are the flux components resulting from the integral voltage model; and φ is the flux angle, which also refers to the rotor initial position.

In an actual system, a ≫ β_α and b ≫ β_β; however, a tiny error can result in a large deviation after integration. When θ = π, 2π, 3π, …, the sine terms vanish and the stator flux in the α coordinate reduces to the accumulated bias:

ψ_sα = n·β_α·T_z,  n = 1, 2, 3, …        (4)

where T_z represents the cycle when the voltage crosses zero. Similarly, when θ = π, 2π, 3π, …, the stator flux in the β coordinate is

ψ_sβ = n·β_β·T_z,  n = 1, 2, 3, …        (5)

Aside from the dc bias, the dead zone and inverter nonlinearity can cause an incorrect estimation of the rotor initial position.

III. PRINCIPLE OF SDFT

A. Basic Principle of sDFT

The sDFT is a recursive implementation of the DFT algorithm, which is often used to calculate the spectrum components of a finite-length signal with low computational cost.

Given a continuous signal in the time domain x(t) with fundamental frequency f0, discrete sampling (sampling frequency fs) transforms this continuous signal into a finite-length sequence x(n), with length N = fs/f0. The DFT of x(n) is

X(k) = Σ_{n=0}^{N−1} x(n)·W_N^{kn},  where W_N = e^{−j2π/N}        (6)

Equation (6) can be expanded as follows:

X(k) = x(0) + x(1)·W_N^{k} + x(2)·W_N^{2k} + … + x(N−1)·W_N^{(N−1)k}        (7)

Equation (7) requires N data points to extract the fundamental component, which adversely influences calculation speed.

Assume that we have two finite-length sequences x0(n) and x1(n), both of length N. The relationship between x0(n) and x1(n) is shown in Fig. 1: x1(n) is the window of x0(n) slid forward by one sample.

Fig. 1. Data graphics of x0(n) and x1(n).

The DFTs of the two sequences are X0(k) and X1(k):

X0(k) = Σ_{n=0}^{N−1} x(n)·W_N^{kn}        (8)

X1(k) = Σ_{n=0}^{N−1} x(n+1)·W_N^{kn}        (9)

Equation (9) indicates that the DFT of x1(n) takes the newly sampled data point x(N) as a replacement for x(0). Substituting Equation (8) into Equation (9), X1(k) can be re-written as

X1(k) = W_N^{−k}·[X0(k) + x(N) − x(0)]        (10)

Equation (10) shows that X1(k) can be calculated using only the DFT of x0(n), that is, X0(k), together with x(0), x(N), and a simple phase-shift computation.

The transfer function of the sDFT that extracts the k-th harmonic in the z-domain is presented in Equation (11), and its structure is shown in Fig. 2:

H_k(z) = W_N^{−k}·(1 − z^{−N}) / (1 − W_N^{−k}·z^{−1})        (11)

Fig. 2. Implementation structure of sDFT in the z-domain.

B. Comparison between DFT and HPF

Sliding DFT and DFT (which can be taken as the main value sequence of the DFS) exhibit similar characteristics, with the sole difference being implementation speed. The ratio of the calculation amount between DFT and sDFT is (log2 N)/2, provided the same data number N.

HPF, in which the cut-off frequency, order, and type crucially impact the filter dynamic response and the estimation precision, can restrain dc bias along with a simple implementation. For example, a low cut-off frequency is beneficial for improving the estimation accuracy of a 2nd-order Butterworth HPF; however, the dynamic response slows down because of the large time delay. Conversely, although a high cut-off frequency can speed up the dynamic estimation process, such a frequency will cause waveform distortion and affect the estimation accuracy.

We assume an input signal x0 = sin[(2π·f0)·t] + 0.2, where f0 = 50 Hz and 0.2 is the dc bias. The fundamental components of the input signal extracted by sDFT and HPF are compared using MATLAB.

As regards sDFT, the sampling frequency is fs = 5 kHz, and the data number is N = fs/f0 = 400. The cut-off frequency of the HPF is set at different values (fc = 1 Hz, fc = 10 Hz, and fc = 20 Hz). The simulation waveforms are shown in Figs. 3, 4, and 5.

Fig. 3. Simulation results of sDFT (N = 400) and HPF (fc = 1 Hz).
Fig. 4. Simulation results of sDFT (N = 400) and HPF (fc = 10 Hz).
Fig. 5. Simulation results of sDFT (N = 400) and HPF (fc = 20 Hz).

Figs. 3 to 5 illustrate that both sDFT and HPF can restrain the dc bias within one fundamental period. However, when fc = 1 Hz, the HPF tracks precisely only at steady state, with a dynamic adjustment time of nearly six to eight fundamental periods, as shown in Fig. 3. Although increasing fc can accelerate the dynamic process, this results in low precision, as shown in Figs. 4 and 5.

In addition, sDFT can extract harmonic or dc bias components at any time by setting different k values in Equation (11). The dc bias in the input signal x0 obtained by sDFT is shown in Fig. 6.
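The recursive update in Equation (10) is straightforward to implement in a few lines. The following Python sketch (not the authors' code) slides a length-N DFT window over a biased sinusoid and tracks the fundamental bin (k = 1); it uses f0 = 50 Hz, fs = 5 kHz, and a one-period window of N = fs/f0 = 100 samples for illustration.

```python
import cmath
import math

def sdft_fundamental(samples, N):
    """Track the k = 1 bin of a sliding length-N DFT, Eq. (10):
    X1 = W_N^{-1} * (X0 + x_new - x_old), with W_N = exp(-j*2*pi/N)."""
    w = cmath.exp(1j * 2 * math.pi / N)   # W_N^{-k} for k = 1
    window = [0.0] * N                    # circular buffer of the last N samples
    X = 0.0 + 0.0j
    history = []
    for i, x_new in enumerate(samples):
        x_old = window[i % N]             # sample leaving the window
        window[i % N] = x_new             # sample entering the window
        X = w * (X + x_new - x_old)       # recursive sDFT update
        history.append(X)
    return history

# Biased 50 Hz sinusoid sampled at 5 kHz -> N = 100 samples per period.
f0, fs = 50.0, 5000.0
N = int(fs / f0)
xs = [math.sin(2 * math.pi * f0 * n / fs) + 0.2 for n in range(4 * N)]
X1 = sdft_fundamental(xs, N)[-1]
amplitude = 2 * abs(X1) / N               # scale the DFT bin to signal amplitude
print(f"fundamental amplitude = {amplitude:.3f}")   # -> 1.000 (dc bias rejected)
```

Because the 0.2 dc bias falls entirely into the k = 0 bin, the k = 1 magnitude recovers the unit sine amplitude exactly once the window has filled, which is the bias-rejection property exploited in Sections IV and V; the rotor angle then follows from the two flux channels via the arctangent and quadrant logic of Section IV.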
When x0 = sin[(2π·f0)·t] + 0.3·sin[(6π·f0)·t] + 0.2, the estimation results of sDFT (N = 400, k = 1) and HPF (fc = 1 Hz) are shown in Fig. 7.

Fig. 6. DC bias obtained by sDFT (N = 400).
Fig. 7. Simulation results of sDFT (N = 400, k = 1) and HPF (fc = 1 Hz) with a harmonic input.

IV. APPROACH TO ESTIMATE THE ROTOR INITIAL POSITION BASED ON SDFT

The flux obtained from the integral voltage model can be regarded as finite sequences. We define the sequences ψ0(n), ψ1(n) as x0(n) and x1(n) in Fig. 1, and their DFTs are

Ψ0(k) = Σ_{n=0}^{N−1} ψ0(n)·W_N^{kn}        (12)

Ψ1(k) = W_N^{−k}·[Ψ0(k) + ψ(N) − ψ(0)]        (13)

where Ψ0(k) and Ψ1(k) are complex components. When k = 1, Ψ0(1) and Ψ1(1) represent the fundamental components, which can be decomposed into real and imaginary parts; from these, the fundamental components ψ'_sα, ψ'_sβ and their amplitudes |ψ'_sα|, |ψ'_sβ|, as well as their angle arctan(|ψ'_sβ|/|ψ'_sα|), can be obtained.

The induced voltage and flux possess sine characteristics because of the sine excitation. Given that the flux signs pre- and post-zero crossing are opposite, a quadrant discrimination model is designed to avoid the zero drift, which would affect a direct arctangent calculation. First, we sample the fundamental flux ψ'_sα, ψ'_sβ in the first half of the cycle. Then, the quadrant can be determined according to the flux signs (plus or minus), as well as the identifiers A and B. The possible quadrant with different rotor initial positions is shown in Table I.

TABLE I. POSSIBLE QUADRANT WITH THE DIFFERENT ROTOR INITIAL POSITIONS

The rotor initial position φ' can then be calculated from the identified quadrant by using Equation (14):

φ' = arctan(|ψ'_sβ|/|ψ'_sα|) in quadrant I;  π − arctan(|ψ'_sβ|/|ψ'_sα|) in quadrant II;  π + arctan(|ψ'_sβ|/|ψ'_sα|) in quadrant III;  2π − arctan(|ψ'_sβ|/|ψ'_sα|) in quadrant IV        (14)

The detailed implementation of the rotor initial position estimation proposed in this paper is shown in Fig. 8, and the flow chart of the implementation is shown in Fig. 9.

Fig. 8. Estimation principle.
Fig. 9. Implementation chart of the program.

V. SIMULATION AND EXPERIMENTAL RESULTS

A. Simulation Results

A simulation was established in MATLAB. In this simulation, the rotor initial position was set at 60°, and the dc bias errors of the induced stator voltages ua and ub were 0.3 and 0.5 V, respectively. When ac excitation was injected into the rotor winding, the induced voltage in the stator windings is shown in Fig. 10(a), whereas the flux in the αβ coordinates obtained through the integral voltage model is presented in Fig. 10(b). The fundamental components of the induced flux calculated by sDFT are shown in Fig. 10(c), and their amplitudes are shown in Fig. 10(d). The estimated rotor initial position is presented in Fig. 10(e).

Fig. 10. Simulation results based on the sDFT method with ac current injection.

This estimation method can also be applied when dc excitation is injected into the rotor winding. Another simulation was established in MATLAB with the same configuration; the fundamental components of the induced flux calculated by sDFT are shown in Fig. 11(a), and the estimated rotor initial position is shown in Fig. 11(b).

Fig. 11. Simulation results based on the sDFT method with dc current injection.

Figs. 10 and 11 suggest that the rotor initial position estimation method based on sDFT is suitable for either ac or dc injection. Furthermore, a dc current results in the rapid attenuation of the induced stator voltage and flux, as seen by comparing Fig. 10(c) with Fig. 11(a), which leads to the estimation deviation shown in Fig. 11(b).

B. Experimental Results

An experimental platform for a 380 V, 50 kW EESM was established to verify the effectiveness of the proposed estimation method. The experimental structure is shown in Fig. 12, whereas the detailed parameters of the EESM are shown in Table II.

Fig. 12. Circuit block diagram of the rotor initial position estimation with voltage sensors.

TABLE II. DETAILED PARAMETERS OF EESM

Common voltage sensors can introduce high-frequency interference and quantization error during the sampling process, which affect estimation accuracy. In this paper, we used an oscilloscope (DPO3014) to monitor the induced voltage in the stator windings and calculated real-time data using a digital signal processing (DSP) processor. The experimental structure without voltage sensors is shown in Fig. 13.

Fig. 13. Circuit block diagram of the rotor initial position estimation without voltage sensors.

During the experiment, the peak-to-peak value of the ac excitation current was 1 A, with a 5 Hz frequency and 128 sDFT points. The excitation current and the induced voltage in the a-phase are shown in Fig. 14.

Fig. 14. Waveforms of the excitation current and stator induced voltage.

Fig. 15 shows the fundamental flux ψ'_sα and ψ'_sβ obtained through sDFT when the real rotor position is 60°. The estimated rotor initial position is shown in Fig. 16 (CH1), and CH2 illustrates the zero drift during the flux crossing.

Fig. 15. Waveforms of the fundamental stator flux by sDFT.
Fig. 16. Initial rotor position detected by sDFT.

Fig. 17 shows the experimental results, with the real rotor position on the horizontal coordinate and the estimated position error on the vertical coordinate. The estimation error is within ±1°.

Fig. 17. Results of the initial rotor position estimation.

The real process of the estimation method applied to the starting process of the EESM is shown in Fig. 18. The estimation section for the rotor initial position occurred from t = 1 s to 2 s, and the ac excitation injected into the rotor winding was established during t = 2 s to 5 s. From t = 5 s to 7 s was the accelerating process, and the machine was maintained at 300 rpm after t = 7 s. These experimental results verify the effectiveness of the improved estimation method, which can be applied to an actual drive system.

Fig. 18. Starting process of the EESM.

VI. CONCLUSIONS

This paper presented a novel estimation method for the rotor initial position of EESMs based on sDFT. First, an induced voltage was generated in the stator windings by injecting ac excitation into the rotor winding. Second, the induced stator flux resulted from the integral voltage model, which causes such problems as dc bias and an initial value of integration. Third, the fundamental component of the induced flux was obtained through sDFT, with which the rotor initial position was estimated.

The proposed method possesses a simplified structure, easy implementation, strong anti-interference, and minimal hardware consumption. Comparisons between sDFT and HPF, and between ac and dc excitations, were conducted. Experimental results verified the effectiveness of the proposed method.

Acknowledgements

The authors would like to thank the China National Natural Science Foundation (51377160) and the Research and Innovation Program of Postgraduates in the Jiangsu Province (CXZZ12_0930).

BIO

Qing-qing Yuan was born in NanTong, JiangSu, China, in 1987. She received her B.S. and M.S. from the China University of Mining and Technology, China, in 2009 and 2011, respectively. She began her Ph.D. program in 2011 with the Department of Information and Electrical Engineering, China University of Mining and Technology. Her current research interests include the modeling and control of high-power drives with a low switching frequency.

Xiao-jie Wu was born in HengYang, HuNan, China, in 1966. He received his B.S. in Industrial Automation from the China University of Mining and Technology, China, in 1988, and his M.S. and Ph.D. in Electrical Engineering from the China University of Mining and Technology, China, in 1991 and 2000, respectively. From 2002 to 2004, he conducted postdoctoral research at Tsinghua University, Beijing, China. He has been with the Department of Information and Electrical Engineering, China University of Mining and Technology, China, since 1991, where he is currently a Professor. His current research interests include the stability of ac machines, advanced control of electrical machines, and power electronics.

Peng Dai was born in HuaiBei, AnHui, China, in 1973. He received his B.S. in Electrical Engineering from the AnHui University of Science and Technology, Huainan, China, in 1994, and his M.S. and Ph.D. in Electrical Engineering from the China University of Mining and Technology, China, in 1998 and 2006, respectively. He has been with the Department of Information and Electrical Engineering, China University of Mining and Technology, China, since 1998, where he is currently a Professor. His current research interests include the stability and control of synchronous machines.
https://math.stackexchange.com/questions/3123324/predicate-logic-what-are-the-differences-between-%E2%88%80-and-%E2%88%83-when-it-comes-to-compa
# Predicate logic: What are the differences between ∀ and ∃ when it comes to comparing two variables?

Say I have the four following logical statements, all over the domain of all integers.

1. (∀a,∀b)[a>b]
2. (∀a,∃b)[a>b]
3. (∃a,∀b)[a>b]
4. (∃a,∃b)[a>b]

I feel like they're all asking practically similar things, but I'm getting confused about what exactly "for all" means. I'm writing what I think each statement is asserting literally; please correct me if I'm wrong:

1. All integers are greater than each other? (This is the one I'm struggling the most with)
2. There is an integer b that is less than all other integers
3. There is an integer a that is greater than all other integers
4. There is an integer a that is greater than an integer b

So if this is the case, I'm assuming that all are false except for (4). But in the event that I have correctly understood all four of these statements, how would you express that for every integer there is another integer that is greater and/or lesser than it?

• (2) is instead read as "For all $a$ there exists a $b$ such that $a>b$". In an attempt to reword it in more natural English, it is saying that if Adam chooses an integer, regardless of which one he chooses, his friend Bill can look at Adam's choice and then, with that knowledge, pick an integer which is smaller than Adam's choice. For example, if Adam chose $50$ as his integer then Bill can come along and choose a smaller integer such as $49$. – JMoravitz Feb 23 at 3:21
• I would translate 1 like "for every arbitrary pair of integers, the first one is always bigger than the second". – kimchi lover Feb 23 at 3:22
• @JMoravitz In that case, would it be possible to write a logical statement asserting what my original interpretation was (even though it's untrue)? – user2709168 Feb 23 at 3:37
• The statement "There is an integer $b$ that is less than all other integers" can be written as $(\exists b~\forall a)[a>b]$. Note the order is different: $\exists b~\forall a$ has a different meaning than $\forall a~\exists b$. – JMoravitz Feb 23 at 3:42
• This answer is about the same question: https://math.stackexchange.com/a/1130755/25554 – MJD Feb 23 at 4:13
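The four statements can be sanity-checked by brute force over a small finite sample of the integers — with one caveat: truncation changes statement (2), which is true over all of ℤ but fails on any finite sample because a finite set has a least element. The domain below is an arbitrary stand-in chosen just for illustration:

```python
D = range(-5, 6)  # finite stand-in for the integers (illustration only)

s1 = all(a > b for a in D for b in D)       # (∀a,∀b)[a>b]
s2 = all(any(a > b for b in D) for a in D)  # (∀a,∃b)[a>b]
s3 = any(all(a > b for b in D) for a in D)  # (∃a,∀b)[a>b]
s4 = any(a > b for a in D for b in D)       # (∃a,∃b)[a>b]

# s1 and s3 are false (no integer exceeds itself), s4 is true.
# s2 is false here only because -5 is the least element of the sample;
# over all integers it is true: for any a, pick b = a - 1.

# Quantifier order matters: ∃b∀a is a strictly stronger claim than ∀a∃b.
forall_exists = all(any(a != b for b in D) for a in D)  # True
exists_forall = any(all(a != b for a in D) for b in D)  # False (b equals itself)
```

The last two lines make the comments' point concrete: "everyone has a witness" holds, while "one witness works for everyone" does not.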
https://gamedev.stackexchange.com/questions/73859/adapting-tilemap-algorithm-to-support-isometric-tilemap/73874
# Adapting tilemap algorithm to support isometric tilemap

I'm using Phaser to build an isometric game. The framework doesn't have support for isometric tilemaps yet, so I'm starting to write a PR for it.

What I currently have, loading an isometric tilemap with the current Phaser.Tilemap object, is this: (screenshot)

As you can see, the tiles are wrongly positioned because of the simple 2D tile positioning approach the framework currently uses.

The class that actually parses the JSON map and converts it into a tilemap is Phaser.TilemapParser, specifically at line 185.

What I need is some help on where to start adapting this parser – or any other part of the code – in order for it to support isometric tilemaps.

I don't know exactly where to start extending this parser – or even writing a new parser just for isometric tilemaps. Also, I know the calculations are different for positioning isometric tiles, and I want to know where that change should go, as I didn't find that either.

• Can you clarify your question a bit? I'm not sure if you're looking for help with an isometric tile-rendering algorithm or if you're just looking for help extending the TilemapParser object. Apr 22, 2014 at 19:27
• I edited, but the question is actually about where to start adapting this: whether I extend the parser, write a new one, etc. Apr 22, 2014 at 19:37

You don't need to change the parser, just the renderer.
The tiles will be at the 'same' place, except the projection is different.
The good news is that the canvas 2D context can do isometric just by setting the right isometric transform.
Once you set it, just draw in a regular way (including drawImage), and all will be drawn in the isometric way !!! magic !!!

So what I suggest you do with Phaser to draw the tiles:
-save context.
-set transform to an isometric one.
-draw tiles with regular Phaser code.
-restore context.

This image was made just with setTransform + drawImage of a random small tile bitmap on a 2D context: (screenshot)

How do isometric coordinates work?
The 'regular' coordinate system is the Cartesian coordinate system.
It needs a center O, and two unit perpendicular vectors Ux and Uy.
The fact that they are unit vectors means they have a length of 1. It allows such a system not to change distances after a translation.
The fact that they are perpendicular allows such a system not to change distances after a rotation.
If you have the x and y of a point, you get the point with

```
P(x,y) = x * Ux + y * Uy
with Ux = (1,0) and Uy = (0,1).
```

Now for isometric projection we just release one constraint: Ux and Uy do not have to be perpendicular. So distances will change after a rotation.
But since they are still unit vectors, distances are kept after a translation (iso-metric == same distances).
Since Ux and Uy are unit vectors, we can write them as

```
Ux = (cos A1, sin A1)
Uy = (cos A2, sin A2).
```

A1 and A2 are the angles between Ux / Uy and the horizontal line: (diagram)

By changing A1 and A2, you change the axes' directions, and you can give the look that you want to your projection.
Obviously, if A1 == A2, both axes are the same and you'll see nothing.
A1 = PI / 6 and A2 = 9 * PI / 10 seem (subjectively) a good starting point for the angles.

To compute where an (x, y) point is, the formula is the same as in Cartesian coordinates: P = x*Ux + y*Uy.

To convert from Cartesian to isometric, the transform matrix is:

```
(in regular mathematical notation)
M = [ cosA1 cosA2 translateX
      sinA1 sinA2 translateY
      0     0     1 ] ;

or (in webgl column-by-column notation)
Mgl = [ [ cosA1 sinA1 0 ]
        [ cosA2 sinA2 0 ]
        [ translateX translateY 1 ] ]
```

Below I also compute the reverse of the transform matrix, to be able to get world coordinates from mouse coordinates.

Notice that I introduced a scale, and also that I allow changing the x aspect ratio if you want to have non-square tiles.

If you want to play with the fiddle it's here (move by pressing mouse).

(version corresponding to the code below) http://jsfiddle.net/gamealchemist/zF2w8/4/

```
// Parameters
//

// center of the display on screen
var displayCenterX = 1 * canvas.width / 3;
var displayCenterY = 2 * canvas.height / 3;

// angle of the x axis. Should be in [0, PI/2]
var angleX = Math.PI / 6;
// angle of the y axis. Should be in [PI/2, PI[
var angleY = 2.8;

// scale for the tiles
var scale = 120.0;
// relative scale for the x of the tile. Use it to stretch tiles.
var relScaleX = 1;

// ----------------------------------------
// Transforms
// ----------------------------------------
var transfMatrix = [Math.cos(angleX), Math.sin(angleX),
                    Math.cos(angleY), Math.sin(angleY)];
var _norm = relScaleX + 1;
relScaleX /= _norm;
transfMatrix[0] *= scale * relScaleX;
transfMatrix[1] *= scale * relScaleX;
transfMatrix[2] *= scale / _norm;
transfMatrix[3] *= scale / _norm;
// matrix reverse
var determinant = transfMatrix[0] * transfMatrix[3] - transfMatrix[1] * transfMatrix[2];
var transfMatrixRev = [transfMatrix[3], -transfMatrix[1], -transfMatrix[2], transfMatrix[0]];
transfMatrixRev[0] /= determinant;
transfMatrixRev[1] /= determinant;
transfMatrixRev[2] /= determinant;
transfMatrixRev[3] /= determinant;

// use with :
c.setTransform(transfMatrix[0], transfMatrix[1],
               transfMatrix[2], transfMatrix[3],
               displayCenterX, displayCenterY);
```

```
// regular 3x3 transform matrix in webGL format :
var mat3 = [ transfMatrix[0], transfMatrix[1], 0,
             transfMatrix[2], transfMatrix[3], 0,
             displayCenterX, displayCenterY, 1 ];
```

(for the fun, the pseudo-3D version: http://jsfiddle.net/gamealchemist/zF2w8/5/)

• Thanks for your answer, but I'm afraid I would need a simpler explanation. Can you explain what the transformation part does? Apr 22, 2014 at 22:37
• @DanielRibeiro: I explained the transform in more detail. Apr 23, 2014 at 7:52
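The same basis-vector math, stripped of the rendering, can be sketched in a few lines of Python — useful for checking the numbers before wiring the transform into Phaser. The angles, scale and centre below are arbitrary stand-ins for the answer's parameters; the inverse map is what you would use for mouse picking:

```python
import math

A1, A2 = math.pi / 6, 2.8   # axis angles (x axis, y axis), as in the answer
scale = 120.0
cx, cy = 100.0, 200.0       # screen position of the grid origin (made up)

# Scaled basis vectors Ux = (cos A1, sin A1), Uy = (cos A2, sin A2).
m = [scale * math.cos(A1), scale * math.sin(A1),
     scale * math.cos(A2), scale * math.sin(A2)]

def world_to_screen(x, y):
    """P = x*Ux + y*Uy, translated to the display centre."""
    return (cx + x * m[0] + y * m[2],
            cy + x * m[1] + y * m[3])

def screen_to_world(sx, sy):
    """Invert the 2x2 linear part to recover tile coordinates."""
    det = m[0] * m[3] - m[1] * m[2]
    dx, dy = sx - cx, sy - cy
    return ((dx * m[3] - dy * m[2]) / det,
            (-dx * m[1] + dy * m[0]) / det)
```

A round trip `screen_to_world(*world_to_screen(3, 4))` returns (3, 4) up to floating-point error, which is exactly the property needed to map clicks back onto tiles.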
https://documen.tv/question/gas-la-of-certain-mass-occupies-a-volume-of-650-cm3-under-a-pressure-of-760mmhg-calculate-the-pr-23009274-1/
# A gas of certain mass occupies a volume of 650 cm³ under a pressure of 760 mmHg. Calculate the pressure under which the volume will be reduced by 10 per cent

Question

A gas of certain mass occupies a volume of 650 cm³ under a pressure of 760 mmHg. Calculate the pressure under which the volume of the gas will be reduced by 10 per cent of its original volume.

Answer (2021-09-04):

844.4 mmHg

Explanation:

Using Boyle's law equation:

P1V1 = P2V2

Where:

P1 = initial pressure (mmHg)
P2 = final pressure (mmHg)
V1 = initial volume (cm³)
V2 = final volume (cm³)

According to the information in this question,

V1 = 650 cm³
P1 = 760 mmHg
P2 = ?
V2 = V1 reduced by 10%

10% of 650 = 10/100 × 650 = 65

V2 = 650 − 65 = 585 cm³

Using P1V1 = P2V2:

P2 = P1V1 ÷ V2
P2 = (760 × 650) ÷ 585
P2 = 494000 ÷ 585
P2 = 844.44

Final pressure (P2) = 844.4 mmHg
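The arithmetic in the answer can be checked in a few lines (Boyle's law at constant temperature, with the values from the question):

```python
p1 = 760.0          # initial pressure, mmHg
v1 = 650.0          # initial volume, cm^3
v2 = v1 * 0.90      # volume reduced by 10% -> 585 cm^3

# Boyle's law: p1 * v1 = p2 * v2
p2 = p1 * v1 / v2
print(round(p2, 1))  # 844.4 (mmHg)
```

Note the units: pressure comes out in mmHg, the unit of P1, not in cm³.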
https://bow-swift.io/next/api-docs/Protocols/Invariant.html
# Invariant

```
public protocol Invariant
```

An Invariant Functor provides a type the ability to transform its value type into another type. Any instance of `Functor` or `Contravariant` is an Invariant Functor as well.

## imap(_:_:_:)

Transforms the value type using the functions provided.

The implementation of this function must obey the following laws:

```
imap(fa, id, id) == fa
imap(imap(fa, f1, g1), f2, g2) == imap(fa, compose(f2, f1), compose(g2, g1))
```

#### Declaration

Swift

```
static func imap<A, B>(
    _ fa: Kind<Self, A>,
    _ f: @escaping (A) -> B,
    _ g: @escaping (B) -> A) -> Kind<Self, B>
```

#### Parameters

• fa — Value whose value type will be transformed.
• f — Transforming function from A to B.
• g — Transforming function from B back to A.

#### Return Value

A new value in the same context as the original value, with the value type transformed.
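The two laws can be exercised in a quick sketch — written here in Python rather than Swift, with a toy Box container standing in for `Kind<Self, A>` (this is not Bow's actual API, just an illustration of the invariant interface):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    """Toy invariant container: imap takes both directions A -> B and B -> A."""
    value: object

    def imap(self, f, g):
        # g is not needed by a plain covariant box, but the invariant
        # interface demands both directions (think codecs / serializers).
        return Box(f(self.value))

ident = lambda x: x
fa = Box(21)

# Identity law: imap(fa, id, id) == fa
assert fa.imap(ident, ident) == fa

# Composition law, with f1: int -> int, g1 its inverse,
# f2: int -> str, g2: str -> int
f1, g1 = (lambda x: x * 2), (lambda x: x // 2)
f2, g2 = str, int
lhs = fa.imap(f1, g1).imap(f2, g2)
rhs = fa.imap(lambda x: f2(f1(x)), lambda x: g1(g2(x)))
assert lhs == rhs == Box("42")
```

The composed forward function runs f1 then f2, and the composed backward function runs g2 then g1, mirroring how the two directions chain in the law.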
https://ask.sagemath.org/question/46668/accessing-component-of-3d-vector/?answer=46679
# Accessing component of 3D vector?

So if you make a vector like this:

```
vector1 = arrow((0,0,0),(1,2,3))
```

and you want to access a component of the vector, for example the second, y component (2), how would you do this? Something like vector1.y2 (that doesn't work haha) but how would I do this?

The object you are creating is not really a vector, it is a plot object:

```
sage: type(vector1)
<class 'sage.plot.plot3d.base.TransformGroup'>
```

So, the best would be to work on genuine vectors:

```
sage: vector1 = vector((1, 2, 3))
```

You can access its second value with:

```
sage: vector1[1]
2
```

You can still plot it if you want:

```
sage: arrow((0,0,0), vector1)
```
https://www.colorhexa.com/040b16
# #040b16 Color Information

In an RGB color space, hex #040b16 is composed of 1.6% red, 4.3% green and 8.6% blue. Whereas in a CMYK color space, it is composed of 81.8% cyan, 50% magenta, 0% yellow and 91.4% black. It has a hue angle of 216.7 degrees, a saturation of 69.2% and a lightness of 5.1%. #040b16 color hex could be obtained by blending #08162c with #000000. Closest websafe color is: #000000.

• RGB color chart: R 2, G 4, B 9
• CMYK color chart: C 82, M 50, Y 0, K 91

#040b16 color description: Very dark (mostly black) blue.

# #040b16 Color Conversion

The hexadecimal color #040b16 has RGB values of R:4, G:11, B:22 and CMYK values of C:0.82, M:0.5, Y:0, K:0.91. Its decimal value is 264982.

• Hex triplet: 040b16 — #040b16
• RGB: 4, 11, 22 — rgb(4,11,22)
• RGB percent: 1.6, 4.3, 8.6 — rgb(1.6%,4.3%,8.6%)
• CMYK: 82, 50, 0, 91
• HSL: 216.7°, 69.2, 5.1 — hsl(216.7,69.2%,5.1%)
• HSV/HSB: 216.7°, 81.8, 8.6
• Websafe: 000000 — #000000
• CIE-LAB: 2.918, 0.306, -6.48
• Further color-space values: 0.315, 0.323, 0.805 / 0.218, 0.224, 0.323 / 2.918, 6.487, 272.702 / 2.918, -1.205, -3.205 / 5.684, -0.069, -4.416
• Binary: 00000100, 00001011, 00010110

# Color Schemes with #040b16

• Complementary: #040b16, #160f04
• Analogous: #041416, #040b16, #060416
• Split complementary: #141604, #040b16, #160604
• Triadic: #0b1604, #040b16, #16040b
• Tetradic: #04160f, #040b16, #16040b, #160f04
• Monochromatic: #000000, #000000, #000000, #040b16, #08162c, #0c2141, #102b57

# Alternatives to #040b16

Below, you can see some colors close to #040b16. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.

Similar colors: #041016, #040e16, #040d16, #040b16, #040a16, #040816, #040716

# #040b16 Preview

This text has a font color of #040b16.

```
<span style="color:#040b16;">Text here</span>
```

This paragraph has a background color of #040b16.

```
<p style="background-color:#040b16;">Content here</p>
```

This element has a border color of #040b16.

```
<div style="border:1px solid #040b16;">Content here</div>
```

CSS codes:

```
.text {color:#040b16;}
.background {background-color:#040b16;}
.border {border:1px solid #040b16;}
```

# Shades and Tints of #040b16

A shade is achieved by adding black to any pure hue, while a tint is created by mixing white into any pure color. In this example, #010305 is the darkest color, while #f4f7fd is the lightest one.

Shades and tints: #010305, #040b16, #071327, #0a1c37, #0d2448, #102c58, #133469, #163d7a, #19458a, #1c4d9b, #1f56ab, #225ebc, #2566cd, #2d70d9, #3d7bdc, #4e86df, #5e92e2, #6f9de5, #80a8e8, #90b3eb, #a1bfee, #b1caf1, #c2d5f4, #d3e1f7, #e3ecfa, #f4f7fd

# Tones of #040b16

A tone is produced by adding gray to any pure hue. In this case, #0c0d0e is the least saturated color, while #000a1a is the most saturated one.

Tones: #0c0d0e, #0b0d0f, #0a0c10, #090c11, #080c12, #070c13, #060b14, #050b15, #040b16, #030b17, #020b18, #010a19, #000a1a

# Color Blindness Simulator

Below, you can see how #040b16 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.

Monochromacy:
• Achromatopsia — 0.005% of the population
• Atypical Achromatopsia — 0.001% of the population

Dichromacy:
• Protanopia — 1% of men
• Deuteranopia — 1% of men
• Tritanopia — 0.001% of the population

Trichromacy:
• Protanomaly — 1% of men, 0.01% of women
• Deuteranomaly — 6% of men, 0.4% of women
• Tritanomaly — 0.01% of the population
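The figures tabulated above follow from standard conversion formulas; a short sketch reproduces the CMYK values and the blend claim from the intro (rounded as on the page):

```python
def rgb_to_cmyk(r, g, b):
    """Convert 0-255 RGB channels to CMYK percentages (0-100)."""
    rp, gp, bp = r / 255, g / 255, b / 255
    k = 1 - max(rp, gp, bp)          # black = 1 - brightest channel
    if k == 1:
        return (0.0, 0.0, 0.0, 100.0)
    c = (1 - rp - k) / (1 - k)
    m = (1 - gp - k) / (1 - k)
    y = (1 - bp - k) / (1 - k)
    return tuple(round(v * 100, 1) for v in (c, m, y, k))

print(rgb_to_cmyk(4, 11, 22))  # (81.8, 50.0, 0.0, 91.4)

# "#040b16 could be obtained by blending #08162c with #000000":
# a 50/50 blend averages each channel.
blend = tuple((a + b) // 2 for a, b in zip((0x08, 0x16, 0x2c), (0, 0, 0)))
print(blend)                   # (4, 11, 22), i.e. #040b16
```
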
https://forum.arduino.cc/t/pressure-sensor-mpx4115/18418
[ "", null, "# pressure sensor MPX4115\n\nHello\n\nI got a MPX4115 pressure sensor and I tried to get it working with the Arduino. But as I try to calculate the pressure it does gave me strange results.\n\nI just plugged the Vout to +5V GND to Ground and Vs to the AnalogPin 0\n\n``````int x;\n\nvoid setup()\n{\nSerial.begin(9600);\n}\nvoid loop()\n{\nSerial.println(x);\ndelay(100);\n}\n``````\n\nHas someone experience with those sensors or have I done a big mistake?\n\nThx Geko\n\nThis is pretty much exactly the code I have for the 6115 sensor, and it seems to work fine.\n\nWhat do your results look like, and what did you expect?\n\nRealize that you have to apply a function to convert from voltage to pressure (and there's another function to convert from that integer to a floating point voltage).\n\n-j\n\nHi kg4wsv\n\nThx for your reply. Yes I do know that I have to use a formula.\nI always get the value 1023 and this seems to my quite wrong", null, "What do you mean by this\n“(and there’s another function to convert from that integer to a floating point voltage).”\n\nHere what’s written in the data sheet:\n\nNominal Transfer Value:\nVout = VS x (0.009 x P – 0.095)\n± (Pressure Error x Temp. Factor x 0.009 x VS)\nVS = 5.1 ± 0.25 Vdc\n\nThx\nGeko\n\nI just plugged the Vout to +5V GND to Ground and Vs to the AnalogPin 0\n\nVs should be connected to +5v, Vout to the analog pin\n\nYeah of course I just worte it wrong....\n\nSo the formula would be P=((Vout/Vin)+0.095)/0.009 Vout = 783 Vin = 5v\n\nI do get values for P around 1653...\n\nThx Geko\n\nIf you have a multimeter, can you measure the voltage on the analog pin (Vout). 
A reading of 1023 implies 5 volts and if that's what you are getting then the sensor may not be wired correctly or the sensor is faulty

What do you mean by this "(and there's another function to convert from that integer to a floating point voltage)."

The ADC returns an integer from 0 to 1023, representing values from 0 to 5V (assuming you haven't added an external analog reference), so each integer from `analogRead()` represents ~4.88mV. To get Vin, you need the following conversion:

``````float Vin;
Vin = analogRead(0) * (5.0 / 1024.0); // ~4.88 mV per ADC count
``````

Also, be careful of rounding when evaluating integers - it can lead to unexpected results.

-j

Thx!

I now get pressures around 957 and this could be right I have to test this somewhere...

PS: With the formula P=((Vout/5.0)+0.095)/0.009 I just got 95.7 but I do think I have to multiply it by 10... or not?

Thx Geko

That should be right; I don't know why you think you may need to multiply by 10?

If you're in the US, a local airport will have the pressure, as will the National Weather Service web site, although you may have to convert units.

-j

Because of the units. At our airport the pressure is 965.5 hPa and not like the Arduino 95.7 so it's more like 957 do you understand my "problem"?

Thx Geko

I understand now, but I sure don't see the cause...

-j

I realise that this is an old post now, but, just got one of these to make my own weather station.

In regards to your readings being out by a factor of 10, the datasheet states that the formula is for calculating kPa, not hPa, so that's why it's out by a factor of ten.

The formula can be simplified a little for the Arduino too. As the 'Vout/Vs' part is just a ratio, you don't need to calculate the actual voltage. 
This saves a few steps in the calculation and will reduce the error too.

Also, as you might not be supplying exactly 5v to the sensor, working the whole thing out as a voltage probably won't be totally accurate, whereas 'analogRead/1024' will always give you the right ratio.

So, to work out the pressure in hPa, or millibars as we call them in the UK, and get the most accurate reading use:

pressure = (analogRead(0)/1024.0 + 0.095) / 0.0009;

Regards, The Cageybee

(Updated the formula. Took away a zero from the final division when I should have added one)

Ok, just had a go with this sensor, first time.

I was getting some bizarre results that bore no resemblance to the hand calculated figures.

I was using a float as the pressure's data type and didn't realise that you have to follow any whole numbers with a '.0'.

So, with,

pressure = (analogRead(0)/1024.0 + 0.095) / 0.0009;

everything works as expected. Yay.

Regards,
The Cageybee

http://pwillard.com/?p=31

I remember fussing a lot with floats… before they made some nice changes in recent ARDUINO IDE versions.

Hi there pwillard.

Noticed from your blog you're working it out as a voltage. Doing it like that is likely to lead to error. The more calculations you do, the more error will creep in.

Also, as the voltage being presented to the Vs pin is likely to fluctuate to some degree, even more so if being supplied by a battery, working it out as a voltage will increase the error.

If you work it out as 'analogRead(0)/1024', it doesn't matter what the voltage at Vs is, your analogRead value will always be at the correct proportion. Do you see what I mean?

I noticed you're using the suggested caps. I was a little worried about not using them as I thought I'd be getting readings going all over the place. As it turns out, it's rock solid without them. They're nice sensors I've got to say, but then they should be for the price.

Regards, The Cageybee

I am using an external 5V supply and have not noticed any instability. 
I guess I’m lucky." ]
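Putting the thread's conclusions together, the arithmetic can be sanity-checked outside the Arduino (Python here purely to verify the numbers; the 783-count reading and the kPa-to-hPa factor of 10 come from the posts above):

```python
def mpx4115_pressure_hpa(adc_count, adc_max=1024.0):
    """Pressure in hPa from a raw 10-bit ADC reading of an MPX4115.

    Rearranges the datasheet transfer function Vout = Vs*(0.009*P - 0.095):
    P[kPa] = (Vout/Vs + 0.095) / 0.009, and the ratio Vout/Vs is simply
    adc_count/adc_max, so the supply voltage cancels out. Dividing by
    0.0009 instead of 0.009 converts kPa to hPa (millibars) in one step.
    """
    return (adc_count / adc_max + 0.095) / 0.0009

# The reading of 783 counts discussed in the thread:
print(round(mpx4115_pressure_hpa(783)))  # ~955 hPa
```

This reproduces the ~957 hPa figure the original poster compared against the local airport reading.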
[ null, "https://aws1.discourse-cdn.com/arduino/original/3X/1/f/1f6eb1c9b79d9518d1688c15fe9a4b7cdd5636ae.svg", null, "https://emoji.discourse-cdn.com/twitter/slight_smile.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9465789,"math_prob":0.78521013,"size":964,"snap":"2021-43-2021-49","text_gpt3_token_len":248,"char_repetition_ratio":0.090625,"word_repetition_ratio":0.011494253,"special_character_ratio":0.2520747,"punctuation_ratio":0.11057692,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95319766,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-03T17:23:34Z\",\"WARC-Record-ID\":\"<urn:uuid:2444af7b-50d5-43a2-9b74-4729dad50b3a>\",\"Content-Length\":\"51546\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1404ed7d-f547-48ed-b4b7-14f0f47f6bb2>\",\"WARC-Concurrent-To\":\"<urn:uuid:de91e5b5-fc2d-4417-afc7-8bc425bc0a9c>\",\"WARC-IP-Address\":\"184.104.202.141\",\"WARC-Target-URI\":\"https://forum.arduino.cc/t/pressure-sensor-mpx4115/18418\",\"WARC-Payload-Digest\":\"sha1:HOWI4FC2YFPRX2ZLAOUR6JZK3PL54DCW\",\"WARC-Block-Digest\":\"sha1:FRFASM5P6CNY73FKUUSZVOIQ6TSK75XP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362891.54_warc_CC-MAIN-20211203151849-20211203181849-00265.warc.gz\"}"}
https://tarxienscouts.org/2022/math-makes-sense-8-workbook-answers-pdf/
[ "", null, "# Math makes sense 8 workbook answers pdf

Math makes sense 8 workbook answers pdf
Math Makes Sense 8 Homework and Practise Book – Answers. Unit 1 – Square Roots and the Pythagorean Theorem. Pg.1. What Do You Notice . The final result is 6174
17/04/2017 · math makes sense 8 practice homework book answers Workbook Grade 8 Math with Answer Key – Duration: 0:16. Eric 1,972 views. 0:16. SCAN QUESTION AND GET ANSWER Solve Math Problems With Mobile
Scanning for Math Makes Sense 8 Workbook Answers Do you really need this respository of Math Makes Sense 8 Workbook Answers It takes me 22 hours just to attain the right download link, and another 9 hours to validate it.
Pearson Math Makes Sense 4 Workbook Answers Pearson Math Makes Sense 4 Workbook Answers – In this site is not the same as a solution manual you purchase in a cd store or download off the web. Our on top of 4,709 manuals and Ebooks is the explanation why customers keep coming back.If you compulsion a Pearson Math Makes Sense 4 Workbook Answers, you can download them in pdf …
[PDF] Document Database Online Site Math Makes Sense 8 Practice And Homework Book Answers File Name: Math Makes Sense 8 Practice And Homework Book Answers
Dr Math makes sense 8 workbook answers. Jang SAT* 800 Math Workbook For The New SAT [Dr. Simon Jang, Ms. Tiffany T. Jang] on Amazon Math makes sense 8 workbook answers. com. *FREE* shipping on qualifying offers. The only one book you need to prepare for the NEW SAT Math…

Math Makes Sense 8 Workbook Answers fullexams.com", null, "Unit 1 – Square Roots and the Pythagorean Theorem", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "## One thought on “Math makes sense 8 workbook answers pdf”

1.", null, "Katelyn says:

Pearson Math Makes Sense 4 Workbook Answers Pearson Math Makes Sense 4 Workbook Answers – In this site is not the same as a solution manual you purchase in a cd store or download off the web. Our on top of 4,709 manuals and Ebooks is the explanation why customers keep coming back.If you compulsion a Pearson Math Makes Sense 4 Workbook Answers, you can download them in pdf …

Math Makes Sense 8 Workbook Answers fullexams.com" ]
[ null, "https://tarxienscouts.org/wp-content/themes/bosa/assets/images/preloader1.gif", null, "https://tarxienscouts.org/blogimgs/https/cip/s-media-cache-ak0.pinimg.com/564x/a9/05/a4/a905a4add17693f1cc30c58dfbb6531f.jpg ", null, "https://tarxienscouts.org/blogimgs/https/cip/www.coursehero.com/thumb/94/32/9432e7082b9ccde6db036aff16bf648a104b8553_180.jpg ", null, "https://tarxienscouts.org/blogimgs/https/cip/whatiscommoncore.files.wordpress.com/2014/01/math-ny-3.jpg", null, "https://tarxienscouts.org/blogimgs/https/cip/i.pinimg.com/originals/db/e1/0a/dbe10ac37a2f0e97e5f91565f8e55934.jpg ", null, "https://tarxienscouts.org/blogimgs/https/cip/www.coursehero.com/thumb/31/4c/314cb00251651bb0b80fc053b88e22595ce5383e_180.jpg ", null, "https://tarxienscouts.org/blogimgs/https/cip/d262ilb51hltx0.cloudfront.net/max/800/1*Yu_hxLNNdlEFyouNQBLwlQ.jpeg ", null, "https://tarxienscouts.org/blogimgs/https/cip/www.coursehero.com/thumb/d0/4f/d04f59cd4f1f2c6f2428c79323b6c5737c818bdc_180.jpg ", null, "https://tarxienscouts.org/blogimgs/https/cip/s-media-cache-ak0.pinimg.com/originals/3f/e4/05/3fe4054244554f6af9c79b4dd8f853e5.png ", null, "https://secure.gravatar.com/avatar/09a4f15ed079115b7f30474848535b85", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7780533,"math_prob":0.496301,"size":9718,"snap":"2023-14-2023-23","text_gpt3_token_len":2343,"char_repetition_ratio":0.21597694,"word_repetition_ratio":0.9443468,"special_character_ratio":0.23811483,"punctuation_ratio":0.094665274,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96081483,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-22T05:40:32Z\",\"WARC-Record-ID\":\"<urn:uuid:50319e47-8a5b-4ee0-a24e-928d1303a1ac>\",\"Content-Length\":\"120917\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cbcc1385-bce7-46db-9293-43d3b482769f>\",\"WARC-Concurrent-To\":\"<urn:uuid:3dfcbd6d-a6d9-4178-98bf-8b9ee2418a23>\",\"WARC-IP-Address\":\"88.119.175.97\",\"WARC-Target-URI\":\"https://tarxienscouts.org/2022/math-makes-sense-8-workbook-answers-pdf/\",\"WARC-Payload-Digest\":\"sha1:NVC25V3VSCNTGHSALX47F6D7YEOGSA7R\",\"WARC-Block-Digest\":\"sha1:KWKC3VN5MO6LH2DIMK7PLL7IUBBPZL7O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943750.71_warc_CC-MAIN-20230322051607-20230322081607-00227.warc.gz\"}"}
https://advances.sciencemag.org/content/3/9/e1701377
Research Article | APPLIED OPTICS

# Electrical access to critical coupling of circularly polarized waves in graphene chiral metamaterials

Vol. 3, no. 9, e1701377

## Abstract

Active control of polarization states of electromagnetic waves is highly desirable because of its diverse applications in information processing, telecommunications, and spectroscopy. However, despite the recent advances using artificial materials, most active polarization control schemes require optical stimuli necessitating complex optical setups. We experimentally demonstrate an alternative: direct electrical tuning of the polarization state of terahertz waves. Combining a chiral metamaterial with a gated single-layer sheet of graphene, we show that transmission of a terahertz wave with one circular polarization can be electrically controlled without affecting that of the other circular polarization, leading to large-intensity modulation depths (>99%) with a low gate voltage. This effective control of polarization is made possible by the full accessibility of three coupling regimes, that is, underdamped, critically damped, and overdamped regimes by electrical control of the graphene properties.

## INTRODUCTION

Controlling circular polarization states is important in modern photonics because it plays a pivotal role in the field of quantum computation and information (1, 2), optical communication of spin information (3), and circular dichroism (CD) spectroscopy (4, 5). Natural chiral materials composed of elements without any mirror symmetry have been widely used to manipulate the circular polarization states of light since the discovery of optical activity (OA) in a quartz crystal (6–8). However, naturally occurring chirality in the form of CD and OA is extremely weak, requiring a substantially long propagation length to observe chirality in naturally chiral media (9). 
Artificial structures called “chiral metamaterials” composed of subwavelength metallic building blocks have been proposed for enhancing CD (10–13) and OA (14–18), enabling various potential applications, such as broadband circular polarizers (19, 20) and wave plates (21–23).

In the terahertz (THz) regime, control over CD and OA is of great importance because macromolecules with ionic or polar constituents strongly absorb THz waves because of the presence of large-scale collective vibrational modes (24, 25), and biopolymers, such as proteins, DNA, and RNA, composed of chiral structures selectively absorb circularly polarized THz waves (26–28). Despite this importance of chirality at THz frequencies, active control of CD and OA has remained challenging, mainly due to technical difficulties in manipulating THz waves and their polarization states. During the past few years, optical tuning of the CD and OA of chiral metamaterials has been demonstrated (29–31), which, however, requires elaborate experimental setups, such as pump lasers and appropriate optical components. For practical applications, an electrical approach for active control of the polarization state would be more attractive. Graphene, consisting of carbon atoms two-dimensionally arranged in a honeycomb lattice, has been studied extensively for the past decade because of its high carrier mobility and unique band structure with linear dispersion near the so-called Dirac point or charge-neutral point (CNP). Strong modulation of its THz properties can be achieved by electrically tuning the density of states available for intraband transitions. Efforts have been made to enhance graphene absorption in the THz regime by integrating graphene with metamaterials (32–38). 
Although a large number of works on graphene-based metamaterials have been reported, polarization modulation by simple electric gating remains largely unexplored.

Here, we demonstrate gate-controlled CD and OA in gated graphene integrated in a chiral metamaterial. As shown in Fig. 1A, the transmission of a right-handed circularly polarized (RCP) THz wave can be strongly modulated when a voltage is applied to the gate because of a critical transition followed by a change in the optical conductivity of graphene. On the other hand, the transmission of the other circular polarization remains very insensitive to the applied voltage. This selective control can be explained by the different radiation loss parameters of the chiral metamaterial for the two circular polarizations, leading to different transmission changes with applied gate voltage when graphene is incorporated into the metamaterial design. Furthermore, it is shown that the plane of linearly polarized waves can be electrically rotated while the linear polarization state maintains its linearity.", null, "Fig. 1 Schematic views and device image of gate-controlled active graphene CDZM. (A) CD and OA in graphene CDZM. CD: Transmissions for RCP and LCP waves differ from each other because of the different absorption between RCP and LCP waves (left). OA: The electric field vector of linearly polarized light rotates around the axis parallel to its propagation direction while passing the graphene CDZM (right). (B) Schematic rendering of a gate-controlled active graphene CDZM composed of a single-layer graphene deposited on the top layer of CDZM and subsequently covered by a layer of ion gel [thickness (t) = 20 μm]. The geometry parameters are given as l = 100 μm, w = 7 μm, and s = 10 μm. (C) Top-view microscopy image of the fabricated gate-controlled active graphene CDZM. The gap width between chiral metamolecules is given as g = 2 μm. (D) Schematic rendering of the fabricated graphene CDZM. 
B is a base connected to the ion gel, and G is a gate connected to the graphene layer.\n\n## RESULTS\n\nTo effectively control the polarization states of THz waves, we use a conjugated double Z metamaterial (CDZM), a bilayer chiral metamaterial structure that is morphologically transformed from a conventional conjugated gammadion shape (18), as shown in Fig. 1 (B and C). As shown in more detail later, in this metamaterial, the radiation loss for an RCP wave strongly depends on the gate voltage, in contrast to a small change in the radiation loss for a left-handed circularly polarized (LCP) wave. This feature is very useful for selective control of the response of the metamaterial for the two different circular polarizations. A graphene layer is directly attached to the top layer of the CDZM using a wet transfer method (39). Raman spectroscopy is performed to confirm the monolayer characteristics of transferred graphene when averaged over the overall area of the wafer (see fig. S1). During the synthesis and fabrication processes, chemical vapor deposition (CVD)–grown graphene easily becomes p-doped (40), which is also the case for our graphene samples. To control the conductivity of graphene via the gate voltage, we used an ion gel gate dielectric along with adequate electrodes for a large change in carrier concentration (Fig. 1C) (for details on the fabrication of the device, see Materials and Methods).\n\nThe fabricated graphene CDZM structures are characterized by THz time-domain spectroscopy (THz-TDS) (32). The transmission coefficients for the circularly polarized lights are obtained through linear polarization measurements with wire grid polarizers (see Materials and Methods). A voltage supply is connected to the electrode and the graphene layer to apply a gate voltage (Fig. 1D). Figure 2A shows the measured intensity transmission spectra for the RCP and LCP THz waves, TRCP and TLCP, through the graphene CDZM at four different gate voltages. 
For comparison, the intensity transmission spectra of a bare reference CDZM array are plotted as a black solid line (the transmission amplitude of the bare CDZM is about 20% for the RCP wave at the resonance frequency of 1.1 THz). The gate voltage relative to CNP, ΔV = |Vg − VCNP|, determines the doping level of graphene. Increasing the applied voltage (ΔV < 1.8 V) markedly reduces TRCP but leaves TLCP almost unchanged at the resonance frequency of 1.1 THz (Fig. 2B). However, further increasing the gate voltage beyond a critical voltage (ΔV > 1.8 V) leads to an increase in TRCP. The maximum intensity modulation depth for the RCP wave, defined as the relative transmission change ΔTRCP/TRCP,CNP for graphene CDZM, is measured to be 99% at the resonance frequency of 1.1 THz.", null, "Fig. 2 Gate-controlled circular transmission and CD. (A) Measured and simulated intensity transmission spectra for RCP (TRCP; solid line) and LCP (TLCP; dashed line) waves are plotted for different gate voltages ΔV. (B) TRCP (orange) and TLCP (green) at the resonance frequency of 1.1 THz as a function of ΔV. Whereas TLCP is almost unchanged, TRCP can be markedly modified by the applied voltage. (C) Measured (scatters) and simulated (line) intensity modulation depth ΔTRCP/TRCP,CNP plotted as a function of ΔV at the resonance frequency of 1.1 THz. The maximum modulation depth for TRCP is measured to be 99%.

To clarify the mechanism of electrical control of one circular polarization observed in the experiments, we perform numerical simulations using both a FEM (finite element method) solver (CST Microwave Studio) and the FDTD (finite-difference time-domain) method (Lumerical FDTD Solutions). To numerically model the graphene layer in the simulations, we calculated the optical conductivity of graphene as a function of the Fermi levels using the Kubo formula (see Materials and Methods for more details) (41). 
For a better understanding of the gate-dependent modulation characteristics, the measured TRCP at the resonance frequency of 1.1 THz is plotted as a function of ΔV and compared with the simulation results. As clearly seen from the plot in Fig. 2 (B and C), the simulation results show good agreement with the measured data, where the Fermi level is related to the gate voltage by |EF| = ℏvF(πN)^(1/2). Here, vF is the Fermi velocity, N is the total carrier density given by N = (n0² + α²|ΔV|²)^(1/2), and α ≈ 8.0 × 10¹¹ cm⁻² V⁻¹ is the gate capacitance of the ion gel dielectric. For the intraband scattering time, we assume that τ = 31 fs, and the value of the carrier density at the conductivity minimum is n0 = 5.4 × 10¹⁰ cm⁻². Relative changes in CD, Δ = |TRCP − TLCP|, are plotted in Fig. 3 as a function of the gate voltage from ΔV = 0.0 V to ΔV = 4.8 V. We experimentally achieve a very large modulation of CD of up to 45 dB (ΔV = 1.8 V) at 1.1 THz (Fig. 3B).", null, "Fig. 3 Gate-controllable CD. (A) Measured and simulated CD defined by the difference in transmission for RCP and LCP waves, Δ = |TRCP − TLCP|. (B) CD Δ at the resonance frequency of 1.1 THz as a function of gate voltage ΔV.
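The gate-voltage-to-carrier-density mapping quoted above can be checked numerically. The sketch below is a simplified intraband (Drude) model, not the full Kubo formula used in the paper's simulations; the fitted values of α, n0, and τ are taken from the text, while the Fermi velocity vF = 10⁶ m/s is an assumed standard value for graphene.

```python
import math

# Physical constants and fitted values; V_F is an assumed standard value
# for graphene, the rest are the numbers quoted in the text above.
HBAR = 1.0545718e-34        # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C
V_F = 1.0e6                 # Fermi velocity, m/s (assumed)
ALPHA = 8.0e11 * 1e4        # ion-gel gate capacitance, cm^-2 V^-1 -> m^-2 V^-1
N0 = 5.4e10 * 1e4           # carrier density at conductivity minimum, -> m^-2
TAU = 31e-15                # intraband scattering time, s

def fermi_level_eV(delta_v):
    """|E_F| = hbar * v_F * (pi * N)^(1/2), with N = (n0^2 + alpha^2 dV^2)^(1/2)."""
    n_total = math.hypot(N0, ALPHA * delta_v)
    return HBAR * V_F * math.sqrt(math.pi * n_total) / E_CHARGE

def drude_sigma(freq_hz, delta_v):
    """Intraband Drude sheet conductivity sigma(w) = (e^2 E_F tau / pi hbar^2) / (1 - i w tau)."""
    e_f_joule = fermi_level_eV(delta_v) * E_CHARGE
    sigma_dc = E_CHARGE**2 * e_f_joule * TAU / (math.pi * HBAR**2)
    return sigma_dc / (1 - 1j * 2 * math.pi * freq_hz * TAU)

# Fermi level at the critical gate voltage, and the sheet conductivity
# at the 1.1 THz resonance:
print(fermi_level_eV(1.8))       # ~0.14 eV
print(drude_sigma(1.1e12, 1.8))  # complex sheet conductivity, siemens
```

Raising ΔV raises N and hence both the Fermi level and the intraband conductivity, which is what drives the increasing intrinsic loss discussed below.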
The OA is characterized by the azimuthal rotation angle θ = 12 [arg(TRCP) − arg(TLCP)], as schematically illustrated in the inset of Fig. 4C. Figure 4B shows the measured and simulated azimuthal rotation angle θ through the graphene CDZM at three different gate voltages. We measured the azimuthal rotation angle θ to be as large as 40° at fη≈0 = 1.42 THz in the case of ΔV = 0.0 V. This experimental result shows that the rotation angle per wavelength reaches values in greater than of 274°/λ. Here, the thickness of the sample is 31 μm. As shown in Fig. 4C, the rotation angle of the graphene CDZM can be tuned from 40° to 30° as the gate voltage increases from ΔV = 0.0 V to ΔV = 4.8 V, which means that the rotation modulation angle per wavelength reaches 72°/λ.", null, "Fig. 4 Gate controllable OA.(A) Comparison between measured and simulated ellipticity η with different gate voltages ΔV. The dashed purple line represents the frequency of pure OA (η ≈ 0). (B) Measured and simulated azimuthal rotation angle θ with different ΔV. (C) Azimuthal rotation angle θ at the frequency for which fη≈0 as a function of ΔV.\n\n## DISCUSSION\n\nTo explain the large modulation of CD, we use temporal coupled-mode theory (CMT) involving two resonant modes with two ports (42, 43). In our CMT model, as shown in Fig. 5A, the couplings between the two resonance modes (f1 and f2) and the incident/transmitted RCP waves are considered (see note S1). We note that the CMT model can be applied to RCP and LCP waves independently because there is no cross-coupling between them due to the C4 rotational symmetry of the structure. To quantitatively understand the coupling mechanism, we derive an analytical expression of transmission coefficients by taking account of the two resonances. 
Because there are three coupling channels between the incident and transmitted waves, that is, two via the resonance modes and one direct coupling, the complex transmission amplitude coefficient for RCP can be written as

tRCP = t0 + Γ1r e^(iφ1) / [i(f − f1) − Γ1r − Γ1i] + Γ2r e^(iφ2) / [i(f − f2) − Γ2r − Γ2i]    (1)

where Γ1r, Γ2r, Γ1i, and Γ2i are the radiation and intrinsic losses of the two resonances, respectively, f1 and f2 are the resonance frequencies, and φ1 and φ2 are the phase differences between the incident and transmitted waves mediated by the first and second resonances (35). Figure 5B shows that there is a clear transition in the phase of the RCP transmission coefficient at ΔV = 1.8 V. This abrupt transition suggests the presence of a critical coupling state that divides the excitation of the resonance modes into underdamped (ΔV < 1.8 V) and overdamped (ΔV > 1.8 V) regimes. In contrast to the reflective-type metasurface, which shows a critical transition in the phase of the reflected beam (35), our chiral metamaterial shows a critical transition for transmitted beams.", null, "Fig. 5 Smith curves of TRCP with different gate voltages. (A) Temporal coupled-mode theory for two resonances, f1 and f2, with a two-port model. (B) Simulated (solid line) and fitted (dashed line) phases of RCP waves are plotted for three different gate voltages ΔV. Smith curves of the TRCP for coupled-mode theory of the two-port model representing (C) underdamping (ΔV < 1.8 V), (D) critical damping (ΔV = 1.8 V), and (E) overdamping (ΔV > 1.8 V) behavior. The radiation and intrinsic losses Γ1r, Γ1i, Γ2r, and Γ2i for (F) the RCP excitation and (G) the LCP excitation as a function of ΔV.
These complex-plane trajectories clearly show that the transmission through the metamaterial can be classified into three different regimes around the critical coupling (ΔV = 1.8 V), where the Smith curve crosses the origin of the complex plane. In this fitting, the coupling coefficients (the loss parameters of the resonators) Γ1r, Γ2r, Γ1i, and Γ2i provide important information on the coupling mechanism. For the RCP excitation, as shown in Fig. 5F, the radiation losses (Γ1r and Γ2r) decrease as the gate voltage (ΔV) increases, implying that the resonance modes couple more weakly to free-space radiation. The radius of curvature of the curve in the complex plane, determined by the radiation losses Γ1r and Γ2r (see note S1 and table S1), decreases as the gate voltage increases, resulting in a limited phase angle range. On the other hand, for LCP excitation (Fig. 5G), the radiation loss for the second resonance (Γ2r) increases as the gate voltage increases, leading to no critical transition. In both RCP and LCP excitations, the intrinsic losses increase with gate voltage due to the increased optical conductivity of the graphene (see fig. S2).

The demonstrated active control of OA is associated with the phase change of the two circularly polarized beams transmitted through the CDZM, because the rotation angle is given by their phase difference. As expected from the Born-Kuhn model for a chiral medium (45), the OA becomes maximum at slightly off-resonance frequencies, where the CD becomes maximum (Fig. 2A). In our graphene CDZM, it is the change of the loss parameters that controls the phase change by modifying the resonances. This is similar to the electrical control of the phase of linearly polarized light using a gate voltage in a metallic grating structure with graphene (46, 47). However, in this work, we change the phase of one circularly polarized wave (LCP in our case) at an off-resonance frequency (1.42 THz) with a gate voltage to avoid significant losses at the resonance frequencies.
This allows for the rotation of a linearly polarized beam with negligible losses (see fig. S3).

In conclusion, we have experimentally demonstrated an electrically tunable chiral metamaterial in which the transmission of the RCP wave can be markedly modified by varying the applied voltage without changing the transmission of the LCP wave. From the measurements, we validated that the graphene CDZM achieves a high intensity modulation depth of up to 99% for the RCP wave at a small gate voltage while maintaining a high transmission of the LCP wave of up to 52%. As a result, a large CD value of up to 45 dB was achieved. In practice, every fabricated sample deviates to some extent from the designed performance because of fabrication imperfections. Adding active control therefore helps to precisely locate the optimized operation point even after the sample is fabricated, a post-optimization that would be impossible with passive devices. In addition, theoretical analysis based on the temporal CMT for two ports verifies that this gigantic active CD is attributed to a phase transition from an underdamped to an overdamped resonator caused by varying radiation losses in the metasurface resonators. Our work also highlights that a graphene CDZM structure can achieve an active modulation of the polarization rotation angle of up to 10° with very small ellipticity. Note that the operating principles of graphene chiral metamaterials can be extended to other frequency ranges, from microwave to mid-infrared. In addition, because the designed CDZM structure is scalable, it can be combined with other active materials in the microwave regime, such as diodes or varactors, and with phase-change materials at infrared frequencies, such as VO2 or Ge3Sb2Te6, by changing the unit cell size.
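The polarization state reported above follows directly from the two circular-basis transmission coefficients: the azimuthal rotation is θ = ½[arg(TRCP) − arg(TLCP)], and the ellipticity can be computed from the amplitude imbalance (the arctan form below is a common convention, assumed here rather than quoted from this paper):

```python
import numpy as np

def polarization_state(t_rcp, t_lcp):
    """Azimuthal rotation angle and ellipticity (both in degrees) of the
    transmitted beam for a linearly polarized input, given the complex
    circular-basis transmission coefficients."""
    theta = 0.5 * (np.angle(t_rcp) - np.angle(t_lcp))      # optical activity
    eta = np.arctan((abs(t_rcp) - abs(t_lcp)) /
                    (abs(t_rcp) + abs(t_lcp)))             # ellipticity
    return np.degrees(theta), np.degrees(eta)

# Pure optical activity: equal amplitudes, only a phase difference
theta, eta = polarization_state(0.5 * np.exp(1j * np.deg2rad(80.0)), 0.5)
# theta = 40.0 deg, eta = 0.0 deg: a pure 40-degree rotation with no
# ellipticity, as measured at the frequency of pure OA for ΔV = 0.0 V.
```

A gate-induced change in arg(TLCP) alone, as described above, shifts θ without disturbing |TRCP|/|TLCP|, which is why the rotation can be tuned while η stays near zero.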
Benefitting from the electric control of polarization, the graphene CDZM concept may lead to various applications in THz technologies, such as ultra-compact active polarization modulators for telecommunications and imaging devices, and ultrasensitive sensors for identifying the chirality and structure of macromolecules or biomolecules.

## MATERIALS AND METHODS

### Sample fabrication

For the realization of the CDZM, we used and combined three fabrication techniques: a conventional microelectromechanical-systems process, CVD-grown graphene transfer, and ion gel transfer. First, to form a flexible substrate, a polyimide solution (PI-2610, HD MicroSystems) was spin-coated to a target thickness of 1 μm on a sacrificial silicon wafer, and the polyimide was fully cured in a subsequent two-step baking process in a convection oven and a furnace. For the first double Z metamaterial layer, a 100-nm-thick gold film was deposited with a 10-nm-thick chromium adhesion layer on a negative photoresist (AZ nLOF 2035, MicroChemicals), which was patterned by photolithography. After the lift-off process, the first double Z metamaterial layer was defined. To separate the second double Z metamaterial layer from the first, the same polyimide curing process was repeated but with a different thickness of 10 μm. On top of this spacer, the second double Z metamaterial and a square ring-shaped graphene electrode were defined following the same process used for the first layer. At the same time, a side-gate electrode was placed beside the graphene electrode to simplify the fabrication process; this is a distinctive advantage enabled by the high-capacitance ion gel dielectric. Second, a commercial CVD-grown monolayer graphene on a copper film (Graphene Square) was transferred to the entire area covering the second double Z metamaterial (5 mm × 5 mm) and the surrounding graphene electrode.
We used a conventional wet-based transfer method with a thermal release tape as a supporting layer. Third, as a gate dielectric, we applied a 20-μm-thick ion gel prepared by the cut-and-stick method. The ion gel solution was prepared by dissolving poly(vinylidene fluoride-co-hexafluoropropene) and 1-ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl) amide in acetone with a weight ratio of 1:4:7. This solution was dried in a vacuum oven at a temperature of 70°C for 24 hours. The cured ion gel was cut with a razor blade and then transferred between the two electrodes. Finally, the thin flexible graphene CDZM was peeled off from the silicon substrate and attached to a holed printed circuit board substrate.

### THz-TDS measurements

To generate broadband THz waves, we used a low-temperature-grown GaAs photoconductive antenna (iPCA, BATOP) as a THz emitter, illuminated by a femtosecond Ti:sapphire laser (Mai Tai, Spectra-Physics) with a central wavelength of 800 nm and a repetition rate of 80 MHz. An electro-optic sampling method with a 1-mm-thick ZnTe crystal was used to detect the transmitted THz signals in the time domain. The THz-TDS system has a usable bandwidth of 0.1 to 2.0 THz. Because the signal from the THz antenna is linearly polarized, the transmission amplitudes for linearly polarized waves were measured as txx, txy, tyx, and tyy by using four wire grid polarizers. Here, the first subscript indicates the polarization (x or y) of the transmitted wave, and the second subscript indicates the polarization of the incident wave. From these transmission amplitudes measured in the linear basis, the RCP and LCP transmission amplitudes can be obtained as tRCP = {(txx + tyy) + i(txy + tyx)}/2 and tLCP = {(txx + tyy) − i(txy + tyx)}/2 (17).

### Numerical modeling

Frequency-dependent material parameters (complex permittivities) of gold, polyimide, and ion gel at THz frequencies were experimentally determined.
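The linear-to-circular conversion quoted in the THz-TDS section above is a two-line computation; a sketch:

```python
def circular_from_linear(txx, txy, tyx, tyy):
    """Convert the four linear-basis transmission amplitudes (measured
    with wire grid polarizers) into RCP/LCP transmission amplitudes,
    following tRCP/tLCP = {(txx + tyy) +/- i(txy + tyx)}/2."""
    s = txx + tyy
    c = txy + tyx
    return (s + 1j * c) / 2, (s - 1j * c) / 2

# Sanity check: an isotropic, non-chiral sample (no cross-polarized
# transmission) passes both circular polarizations identically.
t_rcp, t_lcp = circular_from_linear(0.7, 0.0, 0.0, 0.7)
# t_rcp == t_lcp == 0.7
```

A nonzero txy + tyx is what splits the two circular channels apart, and hence is the measurable origin of the circular dichroism.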
The complex dielectric constant of gold over the frequency range of interest (0.1 to 2.5 THz) can be fitted using the Drude model, with a plasma frequency of ωp = 1.37 × 10^16 rad/s and a collision frequency of γ = 4.07 × 10^13 rad/s. The optical conductivity of graphene in the THz regime can be calculated as a function of EF using the Kubo formula, which comprises only the intraband contribution

$$\sigma_{\mathrm{intra}}(\omega) = \frac{e^2}{\pi\hbar^2}\,\frac{i}{\omega + i\tau^{-1}} \int_{\Delta}^{\infty} d\epsilon \left(1 + \frac{\Delta^2}{\epsilon^2}\right) \left[ f(\epsilon - E_F) + f(\epsilon + E_F) \right]$$

Here, f(ϵ − EF) is the Fermi distribution function with Fermi energy EF, Γ describes the broadening of the interband transitions, τ is the momentum relaxation time due to intraband carrier scattering, and Δ is the half-bandgap energy from the tight-binding Hamiltonian near the K points of the Brillouin zone (48).

## SUPPLEMENTARY MATERIALS

fig. S1. Characterization of single-layer graphene by Raman spectroscopy.

fig. S2. Electric field distribution.

fig. S3. Transmission phase spectra.

note S1. Temporal CMT for two ports and two resonances.

table S1. Fitting parameters of temporal CMT.

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.

## REFERENCES AND NOTES

Acknowledgments: Funding: This work was supported by the Marie-Curie International Incoming Fellowships (ref. 626184), the European Research Council Consolidator Grant (TOPOLOGICAL), and the Institute for Basic Science (IBS-R011-D1). S.Z. acknowledges support from Engineering and Physical Sciences Research Council (EP/J018473/1), Royal Society and the Wolfson Foundation, and Horizon 2020 Action, Project No. 734578 (D-SPA). H.-D.K., H.-S.P., and B.M. acknowledge support from the Pioneer Research Center Program (2014M3C1A3052537), the Quantum Metamaterials Research Center Program (no.
2015001948) through the National Research Foundation of Korea grant funded by the Korean Government [Ministry of Science, Information and Communication Technology, and Future Planning (MSIP)], and the Center for Advanced Meta-Materials (CAMM) funded by the Korean Government (MSIP) as a Global Frontier Project (CAMM-2014M3A6B3063709). S.S.O. and O.H. acknowledge financial support from the Leverhulme Trust (RPG-2014-068). Author contributions: T.-T.K., S.S.O., B.M., and S.Z. conceived the original idea. H.-D.K. fabricated the graphene CDZM. T.-T.K. and S.S.O. performed the numerical simulation. T.-T.K. performed THz-TDS measurement. All authors analyzed the data and discussed the results. T.-T.K., S.S.O., H.-D.K., B.M., O.H., and S.Z. wrote the paper, and all authors provided feedback. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.
https://dev.thep.lu.se/yat/browser/branches/0.4-stable/yat/statistics/ROC.h?rev=1392
# source:branches/0.4-stable/yat/statistics/ROC.h@1392

Last change on this file since 1392 was 1392, checked in by Peter, 13 years ago

trac has moved

• Property svn:eol-style set to native
• Property svn:keywords set to Author Date Id Revision

File size: 4.0 KB

```cpp
#ifndef _theplu_yat_statistics_roc_
#define _theplu_yat_statistics_roc_

// $Id: ROC.h 1392 2008-07-28 19:35:30Z peter $

/*
  Copyright (C) 2004 Peter Johansson
  Copyright (C) 2005, 2006, 2007, 2008 Jari Häkkinen, Peter Johansson

  This file is part of the yat library, http://dev.thep.lu.se/yat

  The yat library is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License as
  published by the Free Software Foundation; either version 2 of the
  License, or (at your option) any later version.

  The yat library is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
  General Public License for more details.

  You should have received a copy of the GNU General Public License
  along with this program; if not, write to the Free Software
  Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
  02111-1307, USA.
*/

#include "yat/classifier/Target.h"
#include "yat/utility/iterator_traits.h"

#include <algorithm>
#include <map>
#include <utility>

namespace theplu {
namespace yat {
namespace statistics {

  ///
  /// @brief Class for Receiver Operating Characteristic.
  ///
  /// As the area under an ROC curve is equivalent to the Mann-Whitney
  /// U statistic, this class can be used to perform a Mann-Whitney
  /// U-test (aka Wilcoxon).
  ///
  class ROC
  {

  public:
    ///
    /// @brief Default constructor
    ///
    ROC(void);

    ///
    /// @brief The destructor
    ///
    virtual ~ROC(void);

    /**
       Add a data value to the ROC.
    */
    void add(double value, bool target, double weight=1.0);

    /**
       The area is defined as \f$ \frac{\sum_{x^+>x^-} w^+w^-}{\sum
       w^+w^-} \f$, where the sum in the numerator goes over all pairs
       where value+ is larger than value-. The denominator goes over
       all pairs.

       @return Area under curve.
    */
    double area(void);

    ///
    /// minimum_size is the threshold for when a normal
    /// approximation is used for the p-value calculation.
    ///
    /// @return reference to minimum_size
    ///
    unsigned int& minimum_size(void);

    /**
       minimum_size is the threshold for when a normal
       approximation is used for the p-value calculation.

       @return const reference to minimum_size
    */
    const unsigned int& minimum_size(void) const;

    ///
    /// @return sum of weights
    ///
    double n(void) const;

    ///
    /// @return sum of weights with negative target
    ///
    double n_neg(void) const;

    ///
    /// @return sum of weights with positive target
    ///
    double n_pos(void) const;

    ///
    /// Calculates the p-value, i.e. the probability of observing an
    /// area equally large or larger if the null hypothesis is true.
    /// If P is near zero, this casts doubt on this hypothesis. The
    /// null hypothesis is that the values from the 2 classes are
    /// generated from 2 identical distributions. The alternative is
    /// that the median of the first distribution is shifted from the
    /// median of the second distribution by a non-zero amount. If the
    /// smallest group size is larger than minimum_size (default = 10),
    /// then P is calculated using a normal approximation.
    ///
    /// \note Weights should be either zero or unity, else the present
    /// implementation is nonsense.
    ///
    /// @return One-sided p-value.
    ///
    double p_value_one_sided(void) const;

    /**
       @brief Two-sided p-value.

       @return min(2*p_value_one_sided, 2-2*p_value_one_sided)
    */
    double p_value(void) const;

    /**
       @brief Set everything to zero
    */
    void reset(void);

  private:

    /// Implemented as in MatLab 13.1
    double get_p_approx(double) const;

    /// Implemented as in MatLab 13.1
    double get_p_exact(const double, const double, const double) const;

    double area_;
    unsigned int minimum_size_;
    double w_neg_;
    double w_pos_;
    // <data, pair<class, weight> >
    std::multimap<double, std::pair<bool, double> > multimap_;
  };

}}} // of namespace statistics, yat, and theplu

#endif
```
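As the class documentation notes, the area under the ROC curve equals the Mann-Whitney U statistic divided by the number of (positive, negative) pairs. That equivalence is easy to check outside the library; a quick sketch in Python (ties counted as half a win, the usual U convention; the tie handling in yat's own implementation may differ):

```python
def auc_by_pairs(pos, neg):
    """Area under the ROC curve computed as U / (n_pos * n_neg): the
    fraction of (positive, negative) value pairs in which the positive
    value is larger, with ties counted as half a win."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

pos = [3.1, 2.4, 4.0]   # values with positive target
neg = [1.0, 2.4, 0.5]   # values with negative target
# 8 strict wins and 1 tie out of 9 pairs -> (8 + 0.5) / 9
auc = auc_by_pairs(pos, neg)
```

An area of 0.5 corresponds to two indistinguishable classes, which is exactly the null hypothesis tested by `p_value_one_sided()`.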
http://www.skyico.com/soft/5778.html
# Jiujiang Chess & Card Games Download

"From self-doubt to self-confidence, I suppose." Asked at the end of the interview how he views the 13 years since his debut, Qin Junjie pondered for a moment before giving this answer.

In 2006, at just 15 years old, Qin Junjie entered the entertainment industry by appearing in Zhang Yimou's period palace drama film Curse of the Golden Flower (满城尽带黄金甲). He went on to study acting at the Central Academy of Drama and worked through series such as 网球王子 (The Prince of Tennis), 风和日丽, and 异镇. What truly brought him wide attention again, after his turn as a "Zhang Yimou leading man" (he played the third prince, Yuan Cheng, in Curse of the Golden Flower), were the recent 秦时明月, 大唐荣耀, and 青云志. At one point, audiences voted him among the top ten mainland actors in period costume.

"Why take the role at all?" Qin Junjie says frankly that, first, what is especially cool about Xiao Yiqing (萧忆情) is that his martial arts are peerless: "I have played highly skilled fighters before, but never the strongest one in the show. So playing the strongest really interested me." Second, Xiao Yiqing is written as a man with a chronic cough, gravely ill; to Qin Junjie, working out how to combine the strongest fighter with a "consumptive" was an interesting attempt and breakthrough. This, as it happens, matches the standard by which he has chosen roles over the years: it must be a character he has never portrayed before.

"Look, I have done a lot of period dramas, but if you look closely, the character in each one is different," he says earnestly.

Indeed, whether as the resourceful, sensitive young heir of the Xiang clan, Xiang Shaoyu, in 秦时明月; the free-spirited Zeng Shushu in 青云志; the decisive, talented, straightforward Prince Li Tan of Jianning in 大唐荣耀; or now, in 听雪楼 (Listening Snow Tower), the tower master Xiao Yiqing, at once the strongest and the frailest, these are all different characters within the same costume genre.

At 28, thirteen years after his debut, Qin Junjie admits he still worries about age. He jokes that he cannot yet accept children calling him "uncle": "I make them change it to 'big brother'" (laughs). In his view, age is a force majeure for an actor. In 听雪楼, for example, Xiao Yiqing also has a teenage period, but if it were him, it would be hard to project that youthfulness. "I'm already 28; if you have me play 18, I have to strain for it, and it's exhausting, whereas an 18-year-old boy standing there simply feels 18."

Q: What was the set like?
J: Both leads play rather cold characters. The team was a familiar one, but the working style on set was different; everyone was terribly serious. I reined in my own temperament a little too, though after about a month I went back to my old self.

Q: What is your old self like?
J: Quite enthusiastic on set, talking nonstop, restless, playing with the props and so on.

Q: So it sounds like you and Xiao Yiqing are quite different in personality?
J: Completely different. Xiao Yiqing and I are two extremes, personalities that couldn't be further apart.

Q: Out of curiosity, when the role came to you, what was your first reaction?
J: There is a picture that describes it perfectly, the "question mark face" meme. Yes, that was my reaction.

Q: Then why did you still take 听雪楼, and manage to pull it off?
J: First, before I came into contact with the project I had not read the original novel. They talked with me about 听雪楼 and about Xiao Yiqing for an entire evening. What I found especially cool is that his martial arts are unmatched; I had played strong fighters before, but never the strongest in the show, so playing the strongest interested me. Second, he is always coughing. Of course, for this point I owe an apology to fans of the novel: in private I kept calling him "the consumptive." I thought working out how to combine the strongest fighter with a consumptive was a very interesting attempt and breakthrough.

Q: Some viewers have commented that Xiao Yiqing's varied coughs carry real thought. What do you think?
J: That's right. When acting, I would bring a different emotion to each "cough."

Q: The show is billed as a romantic wuxia drama, and there are romantic elements even in the fight choreography. How do you see that?
J: Graceful, romantic action is something we deliberately aimed for during shooting, which is why the fight scenes look like dancing.

Q: Between the sickly Xiao Yiqing and the lively Qin Junjie, how did you strike a balance?
J: Mostly by adapting to and getting used to the character, controlling my speech rhythm. It was awkward at first, because I had not yet absorbed his way of speaking and moving; really, I was deliberately suppressing myself. I told myself to slow down when speaking. Talking fast is comfortable for me, but then he would no longer be Xiao Yiqing. So during that period I made myself as uncomfortable as possible, and it took about a week to get into "Xiao Yiqing."

Q: "Xiao Yiqing" has been called the most tormented male lead. What do you think?
J: Why? I think he's fine. He's forever at death's door, but he never dies; being alive is good enough.

Q: Apart from Xiao Yiqing, which character moves you most?
J: Nanchu, because he quietly gave so much for Xiao Yiqing and sacrificed a great deal. PS: I can't give spoilers, but it gets brutal later on.

Q: Apart from Xiao Yiqing, which character in 听雪楼 would you most like to play?
J: I could try Shu Jingrong. (laughs)

Q: How do you understand the jianghu of 听雪楼?
J: It is quite a cruel place, with no possibility of reconciliation: either you die or I do.

Q: The romance in this drama has been called a "sugar-free love." How did you approach it?
J: I just played it normally. What everyone should understand is that the acting comes first, and the promotional taglines come afterwards. So, frankly, I don't know how to match those taglines back to particular scenes.

Q: Do you watch dramas with the bullet comments on? Did you see the comment about wanting to touch up Xiao Yiqing's "lipstick"?
J: I do watch with bullet comments on. As for the "lipstick," that was my own suggestion, mainly to bring myself closer to the character. My natural color looks too healthy, so every scene needed pale lips, only varying in degree, and I would step in to touch it up myself each time.

Q: Of all your roles so far, which was the most satisfying?
J: Each is satisfying in its own way. I make myself go all in; otherwise, I feel the performance is wasted.

Q: What does "satisfying" mean to you?
J: When the role itself cannot change, you adapt to it. Taking on characters unlike yourself is itself a kind of satisfaction.

Q: What is the hardest part of shooting period drama?
J: The weather is very hot, the wig cap is airtight, and a working day runs more than ten hours without being able to take it off. Sweat keeps running down your body, and when the cap finally comes off it runs down your face; the smell hits you, and you doubt the head is your own. (Qin Junjie suggests readers picture the scene for themselves.)

Q: Compared with your debut, has your understanding of being an actor changed?
J: Not really. It is still about taking every role seriously, being worthy of my own work, and being worthy of the support of viewers and fans. As for the larger meaning of being an actor, I think I am still searching for it.

Q: How do you see the stereotypes about "little fresh meat" idols? Do you envy their traffic?
J: So you all think young idols get by on their faces alone?

Q: No.
J: Exactly, everyone has their own talents. Some never studied acting, yet they work desperately hard on set; how did that become coasting on looks? I still believe people should widen their view: no one can survive on a face alone, absolutely not. Your effort, intelligence, and emotional intelligence all have to measure up. Of course, sometimes I do feel envious, "wow, so popular," but it is only a passing sigh. I may not have huge traffic now, but people still support me; even if only one person watches my work, I will act with complete seriousness. That is what being responsible looks like.

Q: What are your criteria for choosing scripts?
J: I hope everything I play is different; as a rule, I won't consider a type of character I have portrayed before.

Q: In an earlier interview you said you still lack another work that leaves a deep impression on everyone, which would mark the success of your second stage. Has that work appeared? Is there a yardstick?
J: Not yet. I don't know what standard to measure it by either. Maybe one day it will appear and I will know; maybe it never will. But it is a goal, urging me forward along the way.
{"ft_lang_label":"__label__zh","ft_lang_prob":0.9839462,"math_prob":0.9608856,"size":4016,"snap":"2019-13-2019-22","text_gpt3_token_len":4849,"char_repetition_ratio":0.03539382,"word_repetition_ratio":0.0,"special_character_ratio":0.18027888,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98692447,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-23T11:22:02Z\",\"WARC-Record-ID\":\"<urn:uuid:d790e825-a9d1-47dc-a1f9-e14a145bb322>\",\"Content-Length\":\"147578\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:43d2611e-d294-4525-a4b4-a42a011b12c5>\",\"WARC-Concurrent-To\":\"<urn:uuid:954648cf-9a95-4d6b-a573-58e0f647f4be>\",\"WARC-IP-Address\":\"103.92.7.233\",\"WARC-Target-URI\":\"http://www.skyico.com/soft/5778.html\",\"WARC-Payload-Digest\":\"sha1:TXAMRWYH5MGX4EX36HWYLK3WEO4AKW3L\",\"WARC-Block-Digest\":\"sha1:Z6ISYR72SARNCJQB6ETQPUNIIANWGSVH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257243.19_warc_CC-MAIN-20190523103802-20190523125802-00177.warc.gz\"}"}
https://www.slidestalk.com/u3804/Model_based_RL22342
[ "# Model-based RL\n\n1. Overview\n- Types of Machine Learning\n- Markov Decision Processes\n- Reinforcement Learning\n- Applications\n2. Review of MDP Algorithms\n- The Bellman equations\n- Expectimax (finite horizon)\n- Value Iteration (finite horizon)\n- Value Iteration (infinite horizon)\n- Policy Iteration (infinite horizon)\n3. Reinforcement Learning\n- The basic idea\n- Model-Based RL\n- Learning the reward and transition probabilities\n- Credit assignment\n- Exploration vs. exploitation\n4. Next Time\n- Q-learning" ]
https://papers-gamma.link/all/page/28
[ "The paper is about a scalable alternative to the $k$-order Markov process called k-LAMP. k-LAMP needs only $nnz(T)+k$ parameters (where $T$ is the transition matrix and $nnz(T)$ is the number of nonzero entries in $T$), while a $k$-order Markov process needs as many parameters as the number of paths of length $k+1$ in $T$. A generalized version called $k$-GLAMP is also suggested; it needs $k*nnz(T)+k$ parameters. An experimental comparison to a Markov process and an LSTM (Long Short-Term Memory network) seems convincing.\n\n### Typos:\n- page 5, Theorem 11" ]
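The parameter counts compared in the review can be checked on a toy example (the 4-state transition structure below is invented for illustration, and "paths of length $k+1$" is read here as walks with $k+1$ edges, counted via powers of the adjacency matrix):

```python
import numpy as np

# Toy 4-state transition structure (illustrative; not from the paper)
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 0, 1]], dtype=np.int64)
k = 2

nnz = int(np.count_nonzero(A))                  # nnz(T)
lamp_params = nnz + k                           # k-LAMP: nnz(T) + k
glamp_params = k * nnz + k                      # k-GLAMP: k*nnz(T) + k

# Number of walks with k+1 edges = sum of the entries of A^(k+1)
walks = int(np.linalg.matrix_power(A, k + 1).sum())

print(lamp_params, glamp_params, walks)         # 8 14 13
```

Even on this tiny graph the walk count (13) already exceeds the k-LAMP parameter count (6 + 2 = 8), and the gap widens rapidly with denser matrices and larger $k$.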
https://rattleinnaustin.com/what-type-of-angles-are-2-and-6/
[ "# What type of angles are 2 and 6?\n\n## What type of angles are 2 and 6?\n\n6 and 2 are corresponding angles and are thus congruent, which means angle 2 is 65°.\n\n## What type of angle pair is 1 and 6?\n\nThe angle supplementary to ∠1 is ∠6. ∠1 is an obtuse angle, and an acute angle paired with an obtuse angle can form a supplementary pair. This is the only angle marked that is acute.\n\nWhat angle pair is 2 and 4?\n\n∠2 and ∠4 are vertical angles. Their measures are equal, so m∠4 = 90°. When two lines intersect to form one right angle, they form four right angles.\n\n### What are the 6 angle pairs?\n\nSome of the pairs of angles we saw are listed below:\n\n• Complementary Angles.\n• Supplementary Angles.\n• Linear Pair of Angles.\n• Vertical Angles.\n\n### What type of angles are 2 and 3?\n\nIn the figure, angles 2 and 3 are alternate interior angles. Adjacent angles are two angles which share a common vertex and side, but have no common interior points; in the figure, the marked angles are adjacent angles. Two angles are called complementary when their sum is 90°.\n\n### What type of angle is 6 degrees?\n\nAn acute angle lies between 0 and 90 degrees; in other words, an acute angle is one that is less than 90 degrees.\n\n#### What type of angle pair is ∠B and ∠G?\n\n∠b and ∠g are alternate exterior angles and they are equal to one another.\n\n#### What is the measure of angle 6 Quizizz?\n\nMeasure of Angles | Pre-algebra Quiz – Quizizz. Q. Angle 6 measures 40°.\n\nWhat is the measurement of angle 6?\n\nAngle 6 is 30 degrees. Angle 6 and angle 7 are vertical angles, so they are equal. Also, angle 7 and angle 5 are supplementary, so angle 5 is 180 − 30 = 150 degrees.\n\n## What are the types of angle pairs?\n\nIn geometry, there are five fundamental angle pair relationships:\n\n• Complementary Angles.\n• Supplementary Angles." ]
https://www.mcqslearn.com/electronics/electronic-circuit-design/quiz/quiz.php?page=95
[ "# Low Frequency Amplifier Response Quiz: Questions and Answers 95\n\nPractice multiple choice questions on low frequency amplifier response, amplifier frequency response, atomic structure, common-gate amplifiers, and varactor diodes for electronic circuit design courses and engineering test prep.\n\n## Quiz on Low Frequency Amplifier Response, Worksheet 95\n\nLow Frequency Amplifier Response Quiz\n\nMCQ: The range of frequencies between the lower critical frequency and the upper critical frequency is called\n\n1. Total frequency\n2. Sum of frequencies\n3. Multiple frequencies\n4. Bandwidth\n\nD\n\nVaractor Diode Quiz\n\nMCQ: Which diodes are commonly used in electronic tuning circuits?\n\n1. Zener diode\n2. Varactor diode\n3. Photo diode\n4. Optical diode\n\nB\n\nCommon-gate Amplifiers Quiz\n\nMCQ: The common-gate amplifier is similar to which BJT amplifier?\n\n1. Common-emitter amplifier\n2. Common-collector amplifier\n3. Common-base amplifier\n4. Emitter-follower amplifier\n\nC\n\nAtomic Structure Quiz\n\nMCQ: Positively charged particles are known as\n\n1. Nucleus\n2. Protons\n3. Molecule\n4. Neutrons\n\nB\n\nBasic Concepts Quiz\n\nMCQ: Capacitive reactance and frequency are\n\n1. Directly proportional\n2. Inversely proportional\n3. Equal to each other\n4. Half of each other\n\nB" ]
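The last answer follows from the capacitive reactance formula Xc = 1/(2πfC): for a fixed capacitance, reactance falls as frequency rises. A quick check with illustrative component values:

```python
import math

def capacitive_reactance(f_hz: float, c_farad: float) -> float:
    """Reactance of a capacitor in ohms: Xc = 1 / (2*pi*f*C)."""
    return 1.0 / (2.0 * math.pi * f_hz * c_farad)

c = 1e-6                                    # 1 uF capacitor (illustrative)
xc_1k = capacitive_reactance(1_000.0, c)    # reactance at 1 kHz
xc_10k = capacitive_reactance(10_000.0, c)  # reactance at 10 kHz

# Ten times the frequency gives one tenth the reactance
print(round(xc_1k / xc_10k, 6))             # 10.0
```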
https://asreml.kb.vsni.co.uk/knowledge-base/cookbook-univariate-analysis-r/
[ "# Univariate analysis\n\n## Some basic models\n\nLet's start with the simplest design normally used in tree breeding programs: randomized complete blocks.\n\n``````# Reading data and pedigree files\n\n# Fitting model with a family model\ndbh.1 <- asreml(dbh ~ 1, random = ~ Block + Block:Plot + Mom,\ndata = dat)\n\n# Having a look at the variance components\nsummary(dbh.1)$varcomp\n\n# If single tree plots then use:\ndbh.2 <- asreml(dbh ~ 1, random = ~ Block + Mom,\ndata = dat)``````\n\nThen we can move to fit an animal model (or tree model, or individual tree model, pick a name), for which we need the inverse of the numerator relationship matrix (obtained from the pedigree).\n\n``````# Get the inverse of the NRM\npedinv <- ainverse(ped)\n\n# Fitting model with an animal model\ndbh.3 <- asreml(dbh ~ 1, random = ~ Block + Block:Plot + vm(Tree, pedinv),\ndata = dat)\n\n# Same thing for basic density\nden.1 <- asreml(den ~ 1, random = ~ Block + Block:Plot + vm(Tree, pedinv),\ndata = dat)``````\n\nNote that in all of the cases above we used `vm()` to indicate that the factor `Tree` is associated with a variance model contained within the object `pedinv`.\n\nNow an incomplete block design, where we have complete replicates and incomplete blocks within each Rep.\n\n``````# Fitting incomplete block, single tree plot and animal model\ndbh.4 <- asreml(dbh ~ 1, random = ~ Rep + Rep/Block + vm(Tree, pedinv),\ndata = dat)``````\n\nIn some situations, e.g. when you are only interested in predicting breeding values for the parents for backwards selection, you may prefer to use models that are equivalent and computationally less demanding (e.g. a family model over using a tree model).
In the case of controlled pollinated material (full sibs):\n\n``````# Fitting model with a family model\ndbh.5 <- asreml(dbh ~ 1, random = ~ Block + Block:Plot + Mom + and(Dad),\ndata = dat)``````\n\nIn the previous equation `Mom + and(Dad)` overlays the design matrices for males and females so you get only one prediction for each parent, in spite of some parents acting as both male and female (which is typical in crossing programs in trees). The variance of `Mom` will represent var(GCA).\n\n## Diallels\n\nThe specification of diallels is very straightforward in ASReml-R, and does not require the creation of many additional variables to hold extra factors. Note: the family code is specified in such a way that the direction of the cross does not matter (for example, crosses 55×96 and 96×55 both use the code 5596). When fitting reciprocals, code direction is important (for example, the crosses would be coded 5596 for 55×96, and 9655 for 96×55).\n\n``````# Uses an individual tree model\nfm1 <- asreml(dbh ~ 1, random = ~ Rep + Rep:Iblock + Plot +\nvm(Tree, pedinv) + Family + Mom + Recipro,\ndata = dat)\n\n# Uses a family model\nfm2 <- asreml(dbh ~ 1, random = ~ Rep + Rep:Iblock + Plot +\nDad + and(Mom) + Family + Mom + Recipro,\ndata = dat)``````\n\n## Clonal data\n\nClonal data can be seen as repeated observations of a genotype; thus their analysis is related to repeated measurements, although there is no ordering in time. The analysis of clonal data is straightforward in ASReml-R. In the data file all ramets of the same clone will have the same genotype ID, and each genotype will appear only once in the pedigree file.\n\nBrian Kennedy (in Animal Model BLUP. Erasmus Intensive Graduate Course. August 20-26 1989. University of Guelph. page 130) showed the mathematics behind using clonal data as repeated measurements, referring to the analysis of embryo splitting. I first ran code like this while working on longitudinal analysis in 1999-2000.
However, João Costa e Silva provided me with a very good interpretation of the analyses at the end of 2003. For more details, check: Costa e Silva, J., Borralho, N.M.G. & Potts, B.M. 2004. Additive and non-additive genetic parameters from clonally replicated and seedling progenies of Eucalyptus globulus. Theoretical and Applied Genetics 108: 1113-1119.\n\n``````# Expanding the previous dbh.4 example to include clones\n# Fitting incomplete block, single tree plot and animal model\ndbh.6 <- asreml(dbh ~ 1, random = ~ Rep + Rep/Block + vm(Tree, pedinv) + ide(Tree),\ndata = dat)``````\n\nThe `ide(Tree)` part of the job creates an additional matrix for Tree that ignores the pedigree relationships.\n\n## Multiple site, single trait\n\nThe traditional approach used in tree breeding to analyze progeny trials in multiple sites was to either assume a unique error variance (and then use the approach explained before) or to correct the data by the site-specific error variance and then use the typical approach using the corrected data. Using ASReml-R it is possible to use alternative methods, either explicitly fitting a site-specific error variance (but keeping a unique additive genetic variance) or fitting site-specific additive and residual variances, in fact using a Multivariate Analysis approach, where the expression of a trait in each site is considered a different variable. In any case, any of the alternative methods needs a specification of Covariance Structures.\n\n``````# Fitting site effects, incomplete block, single tree plot,\n# animal model and site specific residual variances (rcov)\ndbh.7 <- asreml(dbh ~ Site,\nrandom = ~ Site:Rep + Site:Rep:Block + vm(Tree, pedinv),\nresidual = ~ dsum(~ units | Site),\ndata = dat)``````\n\nCopyright (1997–2021) by Luis Apiolaza, some rights are reserved.\n\nUpdated on September 22, 2021
https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-018-0634-3
[ "# Adjustment for unmeasured confounding through informative priors for the confounder-outcome relation\n\n## Abstract\n\n### Background\n\nObservational studies of medical interventions or risk factors are potentially biased by unmeasured confounding. In this paper we propose a Bayesian approach by defining an informative prior for the confounder-outcome relation, to reduce bias due to unmeasured confounding. This approach was motivated by the phenomenon that the presence of unmeasured confounding may be reflected in observed confounder-outcome relations being unexpected in terms of direction or magnitude.\n\n### Methods\n\nThe approach was tested using simulation studies and was illustrated in an empirical example of the relation between LDL cholesterol levels and systolic blood pressure. In simulated data, a comparison of the estimated exposure-outcome relation was made between two frequentist multivariable linear regression models and three Bayesian multivariable linear regression models, which varied in the precision of the prior distributions. Simulated data contained information on a continuous exposure, a continuous outcome, and two continuous confounders (one considered measured, one unmeasured), under various scenarios.\n\n### Results\n\nIn various scenarios the proposed Bayesian analysis with a correctly specified informative prior for the confounder-outcome relation substantially reduced bias due to unmeasured confounding and was less biased than the frequentist model with covariate adjustment for one of the two confounding variables.
Also, in general the MSE was smaller for the Bayesian model with informative prior, compared to the other models.\n\n### Conclusions\n\nAs incorporating (informative) prior information for the confounder-outcome relation may reduce the bias due to unmeasured confounding, we consider this approach one of many possible sensitivity analyses of unmeasured confounding.\n\n## Background\n\nInferences from observational epidemiological studies are often hampered by confounding [1, 2]. To estimate the causal effect of exposure on the outcome, adjustment for a minimal set of confounding variables (or confounders) is required [3,4,5,6]. However, there may be unmeasured variables that result in unmeasured (or residual) confounding. Several design and analytical methods to account for unmeasured confounding have been proposed, including cross-over designs e.g., [8, 9], instrumental variable analysis e.g., [10, 11], the use of negative controls, and approaches to collect information on unmeasured confounding variables in a subsample e.g., [13, 14]. In addition, sensitivity analysis of unmeasured confounding is used to quantify the potential impact of unmeasured confounding [15,16,17].\n\nSensitivity analyses can be performed within a frequentist framework as well as within a Bayesian framework. The latter requires, for example, assumptions on prior distributions for the unknown parameters of the unmeasured confounder and its relations with exposure and outcome [18,19,20,21]. However, eliciting prior distributions for these unknown parameters can be very challenging as unmeasured confounders may actually be unknown. So far, Bayesian sensitivity analyses focused on allocating informative priors to the effect of the unmeasured confounders on the exposure or on the outcome [18, 19, 22].
Instead, it may be more straightforward to elicit prior distributions for the parameters of the effects of the observed confounders on the outcome.\n\nUnmeasured confounding of the exposure-outcome relation may not only affect that relation, but may also bias the observed relations between confounders and outcome. Constraining the estimation of the confounder-outcome relation, or incorporating (informative) prior information for the confounder-outcome relation, may (indirectly) reduce the bias due to unmeasured confounding of the exposure-outcome relation.\n\nThe aim of this research was to assess to what extent using prior information on parameters for an observed relation between a measured confounder and the outcome in a Bayesian analysis can reduce bias due to unmeasured confounding in an estimator of the exposure-outcome relation. The remainder of this article is structured as follows. The bias due to omitting one or more confounders from a regression model is quantified in section 2. In section 3, the use of informative priors for the observed confounder-outcome relation is tested using simulation studies. Section 4 illustrates the approach using an empirical example of the relation between LDL cholesterol levels and systolic blood pressure. Section 5 provides a general discussion of the paper.\n\n## Methods\n\n### Notation\n\nWe consider studies of a continuous exposure (denoted by X), a continuous outcome (Y), and two continuous confounders (Z and U). All relations are assumed to be linear. All variables are considered related to the outcome, according to the model $${y}_i={\beta}_{yx}{x}_i+{\beta}_{yz}{z}_i+{\beta}_{yu}{u}_i+{\varepsilon}_i$$, where lower case letters represent the realisations of the random variables Y, X, Z, and U, i is a subject indicator (i = 1, …, n), and $$\varepsilon \sim N\left(0,{\sigma}^2\right)$$. The confounders are considered related to the exposure, $${x}_i={\beta}_{xz}{z}_i+{\beta}_{xu}{u}_i+{\zeta}_i$$, and the confounders are also related to each other, $${z}_i={\beta}_{zu}{u}_i+{\xi}_i$$, with $$\zeta \sim N\left(0,{\sigma}_x^2\right)$$ and $$\xi \sim N\left(0,{\sigma}_z^2\right)$$.
For all models, the intercepts are assumed independent of all other terms in the models and are omitted here and in the following equations. The coefficients of these models represent an increase in the dependent variable by β.. for each unit increase in the independent variable. The structural relations between the variables are presented in Fig. 1.\n\n### Bias due to unmeasured confounding\n\nFor the fairly simple model outlined in Fig. 1, there are three possible scenarios of confounding adjustment: scenario 1.) both confounders Z and U are measured and adjusted for (e.g., by a multivariable regression analysis of Y on X, including Z and U as covariates); scenario 2.) none of the confounders are measured and hence none is adjusted for; and scenario 3.) one confounder (Z) is measured and adjusted for, while the other (U) is not. Because our interest is in situations in which unmeasured confounding is present, we only consider scenarios 2 and 3.\n\nIn both scenarios, the effect of X on Y can be estimated by means of a linear regression model. In the following, we assume all assumptions of the linear regression model are met, except that unmeasured confounding may be present. As a result, the estimator for the effect of X on Y is expected to be biased due to unmeasured confounding. Details about the bias due to unmeasured confounding are provided in Additional file 1: Appendix 1.\n\nIn scenario 2, the bias due to omitting Z and U from the data analytical model can be expressed as:\n\n$$bias\\left({\\beta}_{yx}\\right)={\\beta}_{yz}\\left({\\beta}_{xz}\\frac{Var(Z)}{Var(X)}+{\\beta}_{zu}{\\beta}_{xu}\\frac{Var(U)}{Var(X)}\\right)+{\\beta}_{yu}\\frac{Var(U)}{Var(X)}\\left({\\beta}_{xu}+{\\beta}_{zu}{\\beta}_{xz}\\right),$$\n(1)\n\nwhere Var(Z), Var(X), and Var(U) denote the marginal variances of Z, X, and U, respectively. Equation (1) indicates that the bias resulting from omitting two confounders is independent of the true exposure-outcome relation βyx. 
Furthermore, the bias increases with increasing strength of the relation between each of the confounders and the outcome or the exposure (βyz, βyu, βxz, and βxu). The bias is the result of different backdoor paths from X to Y: X ← Z → Y, X ← U → Y, X ← Z ← U → Y, and X ← U → Z → Y, which can be identified in the equation.\n\nIn scenario 3, the bias due to omitting U from the data analytical model, while adjusting for Z, can be expressed as:\n\n$$bias\left({\beta}_{yx\mid z}\right)={\beta}_{xu}{\beta}_{yu}\frac{Var(U)\left(1-{\rho}_{uz}^2\right)}{Var(X)\left(1-{\rho}_{xz}^2\right)},$$\n(2)\n\nwhere $${\rho}_{uz}^2$$ is the squared (Pearson’s) correlation between U and Z, $${\rho}_{xz}^2$$ is the squared correlation between X and Z, and $$Var(U)\left(1-{\rho}_{uz}^2\right)$$ and $$Var(X)\left(1-{\rho}_{xz}^2\right)$$ represent the conditional variances of U given Z and of X given Z, respectively. Equation (2) shows that the bias resulting from omitting one confounder from the adjustment model is independent of the true exposure-outcome relation βyx. Furthermore, the bias increases as the relation between the unmeasured confounder and the outcome (βyu) or the exposure (βxu) increases.\n\nAs the correlation between the confounders (ρuz) increases, the bias of the estimator of the exposure-outcome relation decreases. Intuitively, when two confounders are correlated, adjusting for one accounts for some of the variability (and thus confounding effect) in the other. Therefore, adjustment for one confounder may reduce the bias that is caused by the other [25, 26]. In addition, in a linear model, Var(X|Z) ≤ Var(X) and the larger the absolute value of ρxz the smaller Var(X|Z). Because of this decreased Var(X|Z), the residual bias carried by U, i.e. $${\beta}_{xu}{\beta}_{yu} Var(U)\left(1-{\rho}_{uz}^2\right)$$, is amplified.
This bias amplification particularly happens when the confounder (Z) that is adjusted for acts like an instrumental variable (IV) or near-IV, meaning that it has a stronger relation with the exposure (X) than with the outcome (Y) [27, 28].\n\nIn scenario 3, the linear regression analysis of Y on X and Z, yielding an estimate of βyx∣z, is a biased estimator of the relation between X and Y. However, this linear regression analysis is also a biased estimator of the relation between Z and Y (βyz∣x). When we assume all variables follow a multivariate standard normal distribution, the bias in the βyz∣x relation can be expressed as:\n\n$$bias\left({\beta}_{yz\mid x}\right)={\beta}_{yu}^{\prime }\left(\frac{{\rho}_{zu}-{\rho}_{xz}{\rho}_{xu}}{1-{\rho}_{xz}^2}\right),$$\n(3)\n\nwhere $${\beta}_{yu}^{\prime }$$ represents the conditional (or direct) effect of U on Y if both are standardized. Equation (3) shows that the unmeasured confounder (U) of the exposure-outcome relation may also confound the observed relation between the measured confounder (Z) and the outcome. If Z and X are independent (i.e., ρxz = 0), the bias is simply the result of the backdoor path from Z to Y via U (i.e., $${\beta}_{yu}^{\prime }{\rho}_{zu}$$). Note that even if Z and U are independent, the observed relation between Z and Y is biased, due to conditioning on X, which is a collider of Z and U and hence conditioning on X opens a path from Z to Y via U.\n\n### Reducing unmeasured confounding using a Bayesian model\n\nAs indicated above, unmeasured confounding of the exposure-outcome relation can also bias the relation between an observed confounder and the outcome. Hence, an unexpected relation between a confounder and the outcome may suggest the presence of unmeasured confounding.
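As a numerical illustration of both points (a sketch that is not part of the paper's reported analyses; the coefficient values correspond to one simulation scenario of section 3, with βzu = 0, true βyx = 0, and the remaining coefficients set to 1), a single large simulated sample shows the exposure-outcome estimate biased by the amount in Eq. (2), while the observed Z-outcome coefficient is simultaneously pulled away from its true value:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Scenario with beta_zu = 0, beta_xz = beta_xu = beta_yz = beta_yu = 1,
# and a true exposure effect beta_yx = 0.
u = rng.normal(size=n)          # unmeasured confounder U
z = rng.normal(size=n)          # measured confounder Z (independent of U here)
x = z + u + rng.normal(size=n)  # exposure X
y = z + u + rng.normal(size=n)  # outcome Y (no direct effect of X)

# Scenario 3: OLS of Y on X and Z, omitting U
D = np.column_stack([x, z])
b_yx_hat, b_yz_hat = np.linalg.solve(D.T @ D, D.T @ y)

# Bias predicted by Eq. (2), using sample moments (beta_xu = beta_yu = 1)
rho_uz = np.corrcoef(u, z)[0, 1]
rho_xz = np.corrcoef(x, z)[0, 1]
bias_eq2 = np.var(u) * (1 - rho_uz**2) / (np.var(x) * (1 - rho_xz**2))

print(f"beta_yx estimate: {b_yx_hat:.3f} (truth 0; Eq. (2) bias: {bias_eq2:.3f})")
print(f"beta_yz estimate: {b_yz_hat:.3f} (truth 1)")
```

Both estimates come out near 0.5: the X coefficient is biased upward by the amount in Eq. (2), and the Z coefficient is biased downward through the collider path, giving exactly the kind of unexpected confounder-outcome estimate described above.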
Allocating informative priors to the observed confounder-outcome relation may not only reduce the bias in that parameter, but also may reduce the bias due to unmeasured confounding of the exposure-outcome relation.\n\nIn the absence of information about the confounder U, the relation between X and Y can only be controlled for confounding by Z. In a Bayesian framework, we can specify a linear model of Y as a function of X and Z. The parameters of interest, βyx, βyz and σ2, can then be estimated using their joint posterior distribution given the data for Y, X, and Z. The joint posterior distribution is proportional to the product of the density of the data times the joint prior distribution of the parameters:\n\n$$P\left({\beta}_{yx},{\beta}_{yz},{\sigma}^2|Y,X,Z\right)\propto f\left(Y|X,Z,{\beta}_{yx},{\beta}_{yz},{\sigma}^2\right)g\left({\beta}_{yx},{\beta}_{yz},{\sigma}^2\right),$$\n(4)\n\nwhere g(βyx, βyz, σ2) is the joint prior distribution and f(Y|X, Z, βyx, βyz, σ2) is the probability density of Y conditional on the parameters:\n\n$$f\left(Y|X,Z,{\beta}_{yx},{\beta}_{yz},{\sigma}^2\right)={\prod}_i\frac{1}{\sqrt{2\pi {\sigma}^2}}\exp \left(\frac{-{\left({y}_i-{\beta}_{yx}{x}_i-{\beta}_{yz}{z}_i\right)}^2}{2{\sigma}^2}\right).$$\n(5)\n\nAssuming independent priors for the different parameters, the joint prior is simply a product of all marginal priors.\n\nIncorporating (informative) prior information for the confounder-outcome relation may (indirectly) reduce the bias due to unmeasured confounding (by the unmeasured variable U) of the exposure-outcome relation.
This was tested through simulation studies, which are described in the next section.

### Simulation study of Bayesian analysis to control for unmeasured confounding

#### Objective

A simulation study was performed to test the possible decrease in bias in the estimator of the exposure-outcome relation by using informative priors for the confounder-outcome relation. In simulated data, a comparison of the estimated relation between the exposure (X) and the outcome (Y) was made between two frequentist (OLS) multivariable linear regression models and three Bayesian multivariable linear regression models.

#### Data analysis

Every simulated data set was analysed in five different ways: two frequentist analyses and three Bayesian analyses. The two frequentist regression models included none or one of the two confounding variables: linear regression analysis without and with adjustment for the measured confounder Z. The three Bayesian regression analyses all incorporated the information about one confounder, but used different informative priors for the confounder-outcome relation. The performance of these methods was compared in terms of bias and precision of the estimator of the exposure-outcome relation. The simulation study was performed in R, version 3.1.1.

The Bayesian model described in section 2.3 was used. All Bayesian regression analyses were adjusted for Z, but not for U. We used uninformative priors for σ2 and βyx: σ ~ U(0,100) and βyx ~ N(μ = 0, τ = 0.001), where τ indicates the precision of the distribution. We used informative priors for the parameter βyz, but with different levels of precision. A normal informative prior was assumed for βyz, with the true value for βyz as the mean and different values for the precision, which were proportionate to the sample size n of the simulated data sets: βyz ~ N(μ = βyz, τ = n, n/10, n/100).
The precision could take three different values, representing different degrees of certainty in the prior information. The Bayesian models were specified using the rjags package in R, which provides an interface from R to JAGS (http://mcmc-jags.sourceforge.net).

Since the priors for σ and βyx were non-informative, the posterior distributions could be approximated by the product of the density of the data and the prior of βyz. The Gibbs sampler was used with four parallel chains for 2000 iterations. The first 1000 iterations were discarded as burn-in runs. Since the marginal posterior was normal, we chose to present the mean of the posterior distribution as an estimate of βyx|z.

#### Data generation

Data were generated according to the structure depicted in Fig. 1 and consisted of a continuous exposure (X), a continuous outcome (Y), and two continuous confounders (Z and U). First, U was sampled from a normal distribution: U ~ N(0, σu2). Second, Z was generated based on U: zi = βzuui + ξi, with ξ ~ N(0, σz2). Then, X was generated based on U and Z: xi = βxzzi + βxuui + ζi, with ζ ~ N(0, σx2). Finally, Y was generated based on U, Z, and X: yi = βyxxi + βyzzi + βyuui + εi, with ε ~ N(0, σ2).

In all simulations, the variances σu2, σz2, σx2, and σ2 were set to 1. Furthermore, the exposure-outcome relation was fixed at βyx = 0 (i.e. zero relation). The parameter βzu was set at 0 or 1. The parameters βyz and βxz were set at 1 or 2, indicating that the observed confounder Z was related to X and to Y in all scenarios. The parameters βyu and βxu were set at 0, 1, or 2. All combinations of the parameter settings were evaluated through simulations, leading to 72 different scenarios.

#### Comparison of methods

For each scenario 100 datasets of 1000 subjects each were generated. In each dataset the methods described above were applied.
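The replication scheme just described can be sketched in a few lines (a Python illustration of one hypothetical scenario; the paper's code was written in R): generate 100 datasets of 1000 subjects, keep the Z-adjusted OLS estimate of βyx from each, and summarise the estimates.

```python
import numpy as np

def adjusted_estimate(rng, n=1000):
    """One replication of the data-generating process of Fig. 1 with
    beta_yx = 0, beta_yz = 1, beta_zu = 0, beta_xz = beta_xu = beta_yu = 1,
    returning the Z-adjusted OLS estimate of the X-Y relation."""
    u = rng.normal(size=n)
    z = rng.normal(size=n)
    x = z + u + rng.normal(size=n)
    y = z + u + rng.normal(size=n)
    coef, *_ = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)
    return coef[0]

rng = np.random.default_rng(2)
est = np.array([adjusted_estimate(rng) for _ in range(100)])
bias = est.mean() - 0.0           # true beta_yx is 0
sd = est.std(ddof=1)              # empirical SD across replications
mse = np.mean((est - 0.0) ** 2)   # equals bias**2 + variance of the estimates
```

In this scenario the adjusted estimator is biased by about 0.5, so the MSE is dominated by the squared bias rather than the sampling variance.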
For each scenario separately, the performance of these methods was compared in terms of bias of the estimator of the relation between X and Y, the empirical standard deviation (SD) of the estimated relations between X and Y, and the mean squared error (MSE). For the frequentist models, we computed the average of the estimated regression coefficients (bias), their standard deviation (SD), and the mean of the squared difference between the estimated regression coefficient and the true exposure-outcome relation (MSE). For the Bayesian models, we computed the average of the posterior means (bias), their standard deviation (SD), and the mean of the squared difference between the posterior mean and the true exposure-outcome relation (MSE).

### Example study of the relation between cholesterol levels and blood pressure

To illustrate the application of informative priors for the observed confounder-outcome relation, we used data on the relation between low-density lipoprotein (LDL) cholesterol levels and systolic blood pressure (SBP). This example was based on the Second Manifestations of Arterial disease (SMART) study, which is an ongoing prospective cohort study of patients with manifest vascular disease or vascular risk factors. For this example, we assumed that there are two possible confounders of the LDL-SBP relation, namely body mass index (BMI) and blood glucose levels (BGL). A data set of 1000 observations was simulated based on the variance-covariance matrix and the vector of means of these four variables in the cohort study. In all analyses, BMI was considered to be a measured confounder, while BGL was considered to be unmeasured.

#### Comparison of methods

The different methods described in section 3.2.1 were applied to the example data. As a reference, we fitted a linear regression model of SBP on LDL, including BMI and BGL as covariates (referred to as the ‘full model’).
The performance of the different methods was assessed by the difference between the estimated LDL-SBP relations from the different models and the LDL-SBP relation obtained from the full model.

The Bayesian approach was implemented in two ways. We first used the estimated regression coefficient of the effect of BMI on systolic blood pressure from the full model (i.e., 0.32) as the mean for the prior distribution of the measured confounder-outcome relation, with a precision equal to the sample size (i.e., τ = 1000). We then used a relation from the literature as the prior mean. A previous study on the relation between BMI and SBP in adults found a linear regression coefficient of 0.77. This relation was used as the mean of the prior distribution of the measured confounder-outcome relation. Since we were less certain about this prior information, we used a smaller precision (τ = 100). For all the other relations we used uninformative priors as described in Section 3.2.1.

## Results

### Simulation study

Table 1 shows the results of the simulation study for the scenarios where βxz = βyz = 2. Similar patterns were observed for other values of βxz and βyz; these are omitted from the Table for brevity. Results for all simulated scenarios can be found in Additional file 2: Appendix 2. The Bayesian model with precision 100 (i.e., n/10) showed results that were in between those of the Bayesian models with precision 1000 (i.e., n) and precision 10 (i.e., n/100). Results for the Bayesian model with precision 100 are omitted for clarity (see Additional file 2: Appendix 2).

In most scenarios, the Bayesian model with precision 1000 showed less bias than the frequentist model with covariate adjustment.
Noticeable exceptions in Table 1 are scenarios 8 and 14, in which the Bayesian model with precision 1000 was more biased than the frequentist model with covariate adjustment (which was actually unbiased). The reason for this is that in these scenarios U is not a confounder of the X-Y relation (because βxu = 0), yet it is a confounder of the Z-Y relation (e.g., in scenario 8 $$\\widehat{\\beta_{yz\\mid x}}$$= 1.50, while βyz = 1). As the Bayesian model corrects the bias in the Z-Y relation, it induces a bias in the X-Y relation. In scenarios 10 and 16 in Table 1, the Bayesian models and the frequentist model with covariate adjustment yielded similar, yet biased, results. In these scenarios, the estimated relation between Z and Y from the frequentist model with covariate adjustment corresponded with the mean of the prior distribution of this relation (i.e., $$\\widehat{\\beta_{yz\\mid x}}$$= 1.00 and βyz = 1). Hence, the Bayesian model did not reduce bias, compared to the frequentist model. In scenarios 1–7, all methods that adjusted for the measured confounder Z yielded unbiased results, because the variable U was not a confounder in these scenarios (βyu = 0). The extent to which the Bayesian model reduced bias was substantially smaller when the precision was 10 instead of 1000.\n\nThe standard deviation (SD) of the empirical distribution of the parameter estimates was smaller for the Bayesian model with precision 1000, compared to the frequentist model with covariate adjustment and the Bayesian model with precision 10 (the latter two showing approximately the same SD). 
Also, in general the MSE was smaller for the Bayesian model with precision 1000, compared to the other models.

### Empirical example

In the empirical example of the relation between low-density lipoprotein (LDL) cholesterol levels and systolic blood pressure (SBP), LDL increased SBP after adjustment for BMI and BGL, but omitting BGL from the data-analytical model reduced the estimated effect substantially, from 1.24 to 1.03 (Table 2). The amount of bias of the LDL-SBP relation slightly decreased when using an informative prior for the confounder-outcome relation (i.e., for the BMI-SBP relation). However, even when the ‘correct’ prior, based on the full model, was used, the estimated effect of LDL on SBP remained substantially different from the reference value.

## Discussion

This simulation study on the value of Bayesian analysis with informative priors for the relation between the measured confounder and the outcome in the presence of unmeasured confounding shows that such an analysis can reduce the bias due to unmeasured confounding substantially. The magnitude of the remaining bias decreases as the precision of the (correct) informative prior increases.

An obvious prerequisite when using the proposed Bayesian approach to correct for unmeasured confounding is prior knowledge about the relation between the measured confounder and the outcome. We argue that in many clinical research situations, such prior knowledge exists for many observed confounders, at least in terms of direction and order of magnitude of the relation. That information may be obtained from rigorously designed and conducted large epidemiological studies or from meta-analyses of individual patient data of randomised trials. Obviously, the impact of the Bayesian approach depends on the precision of the prior distribution. Informative priors with relatively small precision have little impact in terms of confounding correction, yet allow Bayesian algorithms to be used.
In practice it might be difficult – or researchers may be reluctant – to specify relatively highly informative priors.\n\nIf only the direction (but not the magnitude) of the confounder-outcome relation is included in the prior, the precision of the prior will be relatively small and the impact of the Bayesian analysis may be relatively small too. We did not include this particular form of prior distribution in our simulation study, but instead focused on distributions with the same mean, yet different precision.\n\nAs with any simulation study, an obvious limitation to our work is the finite number of simulated scenarios that we evaluated. For example, we only considered situations with two confounders, one being measured, one unmeasured. Although the two confounders Z and U could be considered as representing two sets of measured and unmeasured confounders, respectively, future research could address scenarios of multiple confounders with, e.g., different distributions of the confounders. Another scenario that we did not evaluate and could be the topic of future research is specification of the priors, such that these do not correspond to the ‘true’ confounder-outcome relation. The robustness to various levels of misspecifications of the prior distribution still needs to be studied.\n\nWhere to position this Bayesian approach in the toolbox of the researcher doing observational epidemiologic research? Given that many observational studies potentially suffer from unmeasured confounding, sensitivity analysis of unmeasured confounding is often important. Eliciting priors for unobserved (and possibly unknown) confounding variables is likely to be difficult. 
On the other hand, focusing on the approximate size of the relations between measured confounders and the outcome provides the opportunity to perform a Bayesian sensitivity analysis as outlined in this paper.

Informative priors for the measured confounder-outcome relations can reduce unmeasured confounding bias of the exposure-outcome relation. In case of observing unexpected confounder-outcome relations, a sensitivity analysis of unmeasured confounding could be considered, in which prior information about the observed confounder-outcome relations is incorporated through Bayesian analysis.

## Conclusions

In this paper we proposed a Bayesian approach to reduce bias due to unmeasured confounding by expressing an informative prior for a measured confounder-outcome relation. A simulation study on the value of this Bayesian analysis with informative priors for the relation between the measured confounder and the outcome in the presence of unmeasured confounding shows that such an analysis can indeed reduce the bias due to unmeasured confounding substantially. The magnitude of the remaining bias decreases as the precision of the (correct) informative prior increases. We consider this approach one of many possible sensitivity analyses of unmeasured confounding.

## Abbreviations

BGL: Blood glucose levels
BMI: Body mass index
LDL: Low-density lipoprotein
MSE: Mean squared error
SBP: Systolic blood pressure
SD: Standard deviation

## References

1. Hernan MA, Robins JM. Causal inference. Boca Raton: Chapman & Hall/CRC, forthcoming; 2016.
2. Robins JM. Data, design, and background knowledge in etiologic inference. Epidemiology. 2001;12(3):313–20.
3. VanderWeele TJ, Shpitser I. On the definition of a confounder. Ann Stat. 2013;41(1):196–220.
4. VanderWeele TJ, Shpitser I. A new criterion for confounder selection. Biometrics. 2011;67(4):1406–13.
5. Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70(1):41–55.
6. Rosenbaum PR, Rubin DB. Reducing bias in observational studies using subclassification on the propensity score. J Am Stat Assoc. 1984;79(387):516–24.
7. Uddin MJ, Groenwold RH, Ali MS, de Boer A, Roes KC, Chowdhury MA, Klungel OH. Methods to control for unmeasured confounding in pharmacoepidemiology: an overview. Int J Clin Pharm. 2016;38(3):714–23.
8. Hallas J, Pottegård A. Use of self-controlled designs in pharmacoepidemiology. J Intern Med. 2014;275(6):581–9.
9. Whitaker HJ, Hocine MN, Farrington CP. The methodology of self-controlled case series studies. Stat Methods Med Res. 2009;18(1):7–26.
10. Chen Y, Briesacher BA. Use of instrumental variable in prescription drug research with observational data: a systematic review. J Clin Epidemiol. 2011;64(6):687–700.
11. Martens EP, Pestman WR, de Boer A, Belitser SV, Klungel OH. Instrumental variables: application and limitations. Epidemiology. 2006;17(3):260–7.
12. Lipsitch M, Tchetgen Tchetgen E, Cohen T. Negative controls: a tool for detecting confounding and bias in observational studies. Epidemiology. 2010;21(3):383–8.
13. Stürmer T, Schneeweiss S, Avorn J, Glynn RJ. Adjusting effect estimates for unmeasured confounding with validation data using propensity score calibration. Am J Epidemiol. 2005;162(3):279–89.
14. White JE. A two stage design for the study of the relationship between a rare exposure and a rare disease. Am J Epidemiol. 1982;115:119–28.
15. Lin DY, Psaty BM, Kronmal RA. Assessing the sensitivity of regression results to unmeasured confounders in observational studies. Biometrics. 1998;54(3):948–63.
16. Diaz I, van der Laan MJ. Sensitivity analysis for causal inference under unmeasured confounding and measurement error problems. Int J Biostat. 2013;9(2):149–60.
17. Groenwold RH, Nelson DB, Nichol KL, Hoes AW, Hak E. Sensitivity analyses to estimate the potential impact of unmeasured confounding in causal research. Int J Epidemiol. 2010;39(1):107–17.
18. McCandless LC, Gustafson P, Levy AR, Richardson S. Hierarchical priors for bias parameters in Bayesian sensitivity analysis for unmeasured confounding. Stat Med. 2012;31(4):383–96.
19. McCandless LC, Gustafson P, Levy A. Bayesian sensitivity analysis for unmeasured confounding in observational studies. Stat Med. 2007;26(11):2331–47.
20. Greenland S. The impact of prior distributions for uncontrolled confounding and response bias: a case study of the relation of wire codes and magnetic fields to childhood leukemia. J Am Stat Assoc. 2003;98(461):47–54.
21. Dorie V, Harada M, Bohme Carnegie N, Hill J. A flexible, interpretable framework for assessing sensitivity to unmeasured confounding. Stat Med. 2016;35:3453–70.
22. Gustafson P, McCandless L, Levy A, Richardson S. Simplified Bayesian sensitivity analysis for mismeasured and unobserved confounders. Biometrics. 2010;66(4):1129–37.
23. Schuit E, Groenwold RH, Harrell FE, de Kort WL, Kwee A, Mol BWJ, et al. Unexpected predictor–outcome associations in clinical prediction research: causes and solutions. CMAJ. 2013;185(10):E499–505.
24. Pearl J. Causality: models, reasoning, and inference. 2nd ed. New York: Cambridge University Press; 2009.
25. Fewell Z, Smith GD, Sterne JA. The impact of residual and unmeasured confounding in epidemiologic studies: a simulation study. Am J Epidemiol. 2007;166(6):646–55.
26. Groenwold RH, Sterne JA, Lawlor DA, Moons KG, Hoes AW, Tilling K. Sensitivity analysis for the effects of multiple unmeasured confounders. Ann Epidemiol. 2016;26(9):605–11.
27. Bhattacharya J, Vogt WB. Do instrumental variables belong in propensity scores? Int J Stat Econ. 2012;9(A12):107–27.
28. Pearl J. Invited commentary: understanding bias amplification. Am J Epidemiol. 2011;174(11):1223–7.
29. R Development Core Team. R: A language and environment for statistical computing. Vienna, Austria; 2008. ISBN 3-900051-07-0. Available from: http://www.R-project.org.
30. Plummer M. rjags: Bayesian graphical models using MCMC; 2016. R package version 4-5. Available from: http://CRAN.R-project.org/package=rjags.
31. Simons PCG, Algra A, Van de Laak M, Grobbee D, Van der Graaf Y. Second manifestations of ARTerial disease (SMART) study: rationale and design. Eur J Epidemiol. 1999;15(9):773–81.
32. Stamler J. Epidemiologic findings on body mass and blood pressure in adults. Ann Epidemiol. 1991;1(4):347–62.

## Acknowledgements

We thank prof Y. van der Graaf for allowing us to use a subset of the dataset of the SMART cohort as an illustration.

### Funding

We gratefully acknowledge financial contribution from the Netherlands Organisation for Scientific Research (NWO, projects 917.16.430 and 452-12-010).

### Availability of data and materials

Simulation scripts are available upon request.

## Author information

### Contributions

RG, IS, and IK drafted the concept for the current paper. RG and IS wrote the initial version of the paper, performed statistical programming for the simulations and conducted analyses. MM and MvS contributed to the design of the simulation study and the interpretation of the simulation results. All authors commented on drafts of the article and approved the manuscript.

### Corresponding author

Correspondence to Rolf H. H. Groenwold.

## Ethics declarations

Not applicable.

### Competing interests

The authors declare that they have no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix 1. Expressions of bias.
(PDF 105 kb)
http://finale.cz/nezarazene/acquiring-relationships-among-two-quantities
# Identifying Relationships Between Two Quantities

One of the problems people face when working with graphs is non-proportional relationships. Graphs can be used for many different purposes, but they are often used incorrectly and give a misleading picture. Take two sets of data: suppose you have sales figures for a month and want to plot a trend line on the data. If you plot this line on a y-axis whose range starts at 100 and ends at 500, you get a very misleading view of the data. How can you tell whether a relationship is non-proportional?

Ratios are proportional when they represent an identical relationship. One way to tell whether two ratios are proportional is to plot them and compare: if the plotted lines have the same slope, the ratios are proportional; if the slopes differ, they are not. This also gives a practical way to plot a trend line, since you can use the range of one variable to establish a trendline on another variable.

However, the distinction between proportional and non-proportional needs some care. If the ratio between the two measurements on the graph is not constant (for example, the sales amount for one month against the average price for the same month), then the relationship between the two quantities is non-proportional. In that situation, one quantity will be over-represented on one side of the graph and under-represented on the other. This is called a "lagging" trendline.

Let's look at a real-life example to see what this means: scaling a recipe, for which we want to calculate the amount of spice needed. If we plot a line through the data representing the desired amount, such as the quantity of garlic we want to add, and find that the actual cup of garlic is much larger than the cup we calculated, we have over-estimated the amount of spice needed. If the recipe calls for four cups of garlic but the plot tells us the real amount should be six ounces, the slope of this line is downward: the amount of garlic needed is less than the recipe says it should be, and the gap between the actual cup of garlic and the ideal cup shows up as a negative slope.

Here is another example. Suppose we know the weight of an object X and its specific gravity G. If we find that the weight of the object is proportional to its specific gravity, then we have found a direct proportional relationship: the higher the object's specific gravity, the lower its weight must be to keep it floating in the water. We can draw a line from top (G) to bottom (Y) and mark the point on the chart where the line crosses the x-axis. If we then take the measurement of that part of the body on the x-axis, directly beneath the water's surface, and mark that point as the new (measured) height, we have found our direct proportional relationship between the two quantities. We can plot several boxes on the chart, each box describing a different height determined by the gravity of the object.

Another way of viewing non-proportional relationships is to view them as being zero or near zero. For instance, the y-axis in our example might represent the horizontal direction of the earth. If we plot a line from top (G) to bottom (Y), the horizontal distance from the plotted point to the x-axis is zero. That means that for any two quantities plotted against each other at a given time, they will always have the same magnitude (zero). In this case we have a simple non-parallel relationship between the two quantities. The same can be true if the two quantities are not parallel, for instance if we want to plot the vertical height of a body above a rectangular box: the vertical height will always exactly match the slope of the rectangular box.
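The core idea, that two quantities are directly proportional exactly when their ratio is one constant k, i.e. the points fall on a straight line through the origin, can be checked mechanically. The following short Python function is an illustration (the function name and tolerance are my own choices, not part of the original text):

```python
def is_proportional(xs, ys, tol=1e-9):
    """True if y = k * x for a single constant k across all paired
    observations, i.e. the points lie on a line through the origin.
    Pairs with x == 0 are skipped."""
    ratios = [y / x for x, y in zip(xs, ys) if x != 0]
    if not ratios:
        return True
    return all(abs(r - ratios[0]) <= tol for r in ratios)

print(is_proportional([1, 2, 3], [4, 8, 12]))  # True: k = 4 throughout
print(is_proportional([1, 2, 3], [4, 8, 13]))  # False: the ratio drifts
```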
http://www.lllt.net/html/7133.html
1. Introduction

By average time complexity, the common sorting algorithms fall into three groups:

- Bubble sort, selection sort, and insertion sort all have an average time complexity of O(n²)
- Shell sort, merge sort, quicksort, and heap sort all have an average time complexity of O(n log n)
- Counting sort, radix sort, and bucket sort all have an average time complexity of O(n + k)

Two properties are worth defining up front:

1. In-place sorting: the algorithm uses no extra storage while sorting, i.e. its space complexity is O(1).
2. Stability: a stable sorting algorithm never changes the relative order of equal elements. For example, the array [3,5,1,4,9,6,6,12] contains two 6s (one of them shown in bold to tell them apart). If the sorted result is [1,3,4,5,6,**6**,9,12] (the bold 6 still comes after the other one), the algorithm is stable.

2. Getting down to business

```java
public class BubbleSort {

    // data is the integer array to sort, n is the array size
    public static void bubbleSort(int[] data, int n) {
        // an array of size <= 1 needs no sorting
        if (n <= 1) return;

        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n - i - 1; j++) {
                // if data[j] > data[j + 1], swap the two elements
                if (data[j] > data[j + 1]) {
                    int temp = data[j];
                    data[j] = data[j + 1];
                    data[j + 1] = temp;
                }
            }
        }
    }
}
```

```java
public class BubbleSort {

    // optimized bubble sort
    // data is the integer array to sort, n is the array size
    public static void bubbleSort(int[] data, int n) {
        // an array of size <= 1 needs no sorting, so return immediately
        if (n <= 1) return;

        for (int i = 0; i < n; i++) {
            boolean flag = false; // tracks whether any swap happened this pass

            for (int j = 0; j < n - i - 1; j++) {
                // if data[j] > data[j + 1], swap the two elements
                if (data[j] > data[j + 1]) {
                    int temp = data[j];
                    data[j] = data[j + 1];
                    data[j + 1] = temp;

                    flag = true; // a swap happened
                }
            }
            // if no swaps happened, the array is sorted: exit the loop early
            if (!flag) break;
        }
    }
}
```

To sum up:

1. Bubble sort is a comparison-based sort.
2. Its best-case time complexity is O(n), its worst-case time complexity is O(n²), and its average time complexity is O(n²).
3. Bubble sort is an in-place sorting algorithm, and it is stable.
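To see the best-case and worst-case claims in actual numbers, here is a small companion class (my own addition, not part of the original post) that counts element comparisons: on an already-sorted array the early-exit version stops after one pass (n - 1 comparisons), while on a reverse-sorted array it performs n(n - 1)/2 comparisons.

```java
public class BubbleSortCount {
    // Optimized bubble sort that also counts element comparisons, so the
    // best-case O(n) claim can be checked empirically.
    public static long sortAndCount(int[] data) {
        long comparisons = 0;
        for (int i = 0; i < data.length; i++) {
            boolean swapped = false;
            for (int j = 0; j < data.length - i - 1; j++) {
                comparisons++;
                if (data[j] > data[j + 1]) {
                    int tmp = data[j];
                    data[j] = data[j + 1];
                    data[j + 1] = tmp;
                    swapped = true;
                }
            }
            if (!swapped) break; // already sorted: stop early
        }
        return comparisons;
    }

    public static void main(String[] args) {
        // Already sorted: one pass, n - 1 comparisons -> O(n) best case.
        System.out.println(sortAndCount(new int[]{1, 2, 3, 4, 5})); // 4
        // Reverse sorted: n(n - 1)/2 comparisons -> O(n^2) worst case.
        System.out.println(sortAndCount(new int[]{5, 4, 3, 2, 1})); // 10
    }
}
```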
https://faculty.math.illinois.edu/Macaulay2/doc/Macaulay2-1.16/share/doc/Macaulay2/PrimaryDecomposition/html/_primary__Decomposition_lp..._cm_sp__Strategy_sp_eq_gt_sp..._rp.html
[ "# primaryDecomposition(..., Strategy => ...)\n\n## Description\n\nThe strategy option value should be one of the following.\n• Monomial -- uses Alexander duality of a monomial ideal\n• Binomial -- finds a cellular resolution of a binomial ideal. NOT IMPLEMENTED YET.\n• EisenbudHunekeVasconcelos -- uses the algorithm of Eisenbud-Huneke-Vasconcelos\n• ShimoyamaYokoyama -- uses the algorithm of Shimoyama-Yokoyama\n• Hybrid -- uses parts of the above two algorithms\n• GTZ -- uses the algorithm of Gianni-Trager-Zacharias. NOT IMPLEMENTED YET.\nThe default strategy depends on the ideal. If the ideal is generated by monomials, then Strategy => Monomial is implied. In all other cases, the default is Strategy => ShimoyamaYokoyama.\n\n### Strategy => Monomial\n\nThis strategy only works for monomial ideals, and is the default strategy for such ideals. See the chapter \"Monomial Ideals\" in the Macaulay2 book.\n ```i1 : Q = QQ[x,y] o1 = Q o1 : PolynomialRing``` ```i2 : I = ideal(x^2,x*y) 2 o2 = ideal (x , x*y) o2 : Ideal of Q``` ```i3 : primaryDecomposition(I, Strategy => Monomial) 2 o3 = {ideal x, ideal (x , y)} o3 : List```\n\n### Strategy => EisenbudHunekeVasconcelos\n\nSee \"Direct methods for primary decomposition\" by Eisenbud, Huneke, and Vasconcelos, Invent. Math. 110, 207-235 (1992).\n ```i4 : Q = QQ[x,y] o4 = Q o4 : PolynomialRing``` ```i5 : I = ideal(x^2,x*y) 2 o5 = ideal (x , x*y) o5 : Ideal of Q``` ```i6 : primaryDecomposition(I, Strategy => EisenbudHunekeVasconcelos) 2 2 o6 = {ideal x, ideal (y , x*y, x )} o6 : List```\n\n### Strategy => ShimoyamaYokoyama\n\nThis strategy is the default for non-monomial ideals. See \"Localization and Primary Decomposition of Polynomial ideals\" by Shimoyama and Yokoyama, J. Symb. Comp. 
22, 247-277 (1996).\n ```i7 : Q = QQ[x,y] o7 = Q o7 : PolynomialRing``` ```i8 : I = ideal(x^2,x*y) 2 o8 = ideal (x , x*y) o8 : Ideal of Q``` ```i9 : primaryDecomposition(I, Strategy => ShimoyamaYokoyama) 2 o9 = {ideal x, ideal (y, x )} o9 : List```\n\n### Strategy => Hybrid\n\nUse a hybrid of the Eisenbud-Huneke-Vasconcelos and Shimoyama-Yokoyama strategies. The field Strategy is a list of two integers, indicating the strategy to use for finding associated primes and localizing, respectively. WARNING: Setting the second paramter to 1 works only if the ideal is homogeneous and equidimensional.\n ```i10 : Q = QQ[x,y] o10 = Q o10 : PolynomialRing``` ```i11 : I = intersect(ideal(x^2), ideal(y^2)) 2 2 o11 = ideal(x y ) o11 : Ideal of Q``` ```i12 : primaryDecomposition(I, Strategy => new Hybrid from (1,1)) 2 2 o12 = {ideal y , ideal x } o12 : List``` ```i13 : primaryDecomposition(I, Strategy => new Hybrid from (1,2)) 2 2 o13 = {ideal y , ideal x } o13 : List``` ```i14 : primaryDecomposition(I, Strategy => new Hybrid from (2,1)) 2 2 o14 = {ideal y , ideal x } o14 : List``` ```i15 : primaryDecomposition(I, Strategy => new Hybrid from (2,2)) 2 2 o15 = {ideal y , ideal x } o15 : List```" ]
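The Shimoyama-Yokoyama example asserts that (x^2, x*y) = (x) ∩ (x^2, y). As a cross-check outside Macaulay2, the sketch below uses SymPy's experimental `agca` ideals (a SymPy API, not part of this Macaulay2 documentation):

```python
from sympy import QQ
from sympy.abc import x, y

# Work in the polynomial ring QQ[x, y]; SymPy's "old" poly ring
# exposes ideal arithmetic.
R = QQ.old_poly_ring(x, y)

I1 = R.ideal(x)          # the minimal prime component (x)
I2 = R.ideal(x**2, y)    # the embedded primary component (x^2, y)
J = I1.intersect(I2)     # should recover the original ideal (x^2, x*y)

print(J.contains(x**2))  # True  -- x^2 lies in the intersection
print(J.contains(x*y))   # True  -- so does x*y
print(J.contains(x))     # False -- x is in (x) but not in (x^2, y)
```

The two generators of the original ideal lie in the intersection, while x alone does not, which is consistent with the decomposition printed by `o9` above.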
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.62080085,"math_prob":0.9759957,"size":2772,"snap":"2021-31-2021-39","text_gpt3_token_len":899,"char_repetition_ratio":0.1658237,"word_repetition_ratio":0.08080808,"special_character_ratio":0.32431456,"punctuation_ratio":0.17843866,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9973345,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T04:58:42Z\",\"WARC-Record-ID\":\"<urn:uuid:67b9edb0-51d3-443a-bef5-dd35b371a17c>\",\"Content-Length\":\"7275\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:766f1282-1cef-4a4a-9e4d-f88010db2ad2>\",\"WARC-Concurrent-To\":\"<urn:uuid:da605dd3-89f3-42c1-ab15-77abcd8c291e>\",\"WARC-IP-Address\":\"128.174.199.46\",\"WARC-Target-URI\":\"https://faculty.math.illinois.edu/Macaulay2/doc/Macaulay2-1.16/share/doc/Macaulay2/PrimaryDecomposition/html/_primary__Decomposition_lp..._cm_sp__Strategy_sp_eq_gt_sp..._rp.html\",\"WARC-Payload-Digest\":\"sha1:ZKDYGJ3ZVSO3KKM2VIOA4DEMK3NQUPUJ\",\"WARC-Block-Digest\":\"sha1:W5QBEUCLOVVW4JEJ5WDQD3S2PTGQ4THB\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057417.10_warc_CC-MAIN-20210923044248-20210923074248-00177.warc.gz\"}"}
https://superexcel.in/2022/01/26/nowadays-let-us-assess-the-total-amount-after-two/
[ "Free Shipping & COD Available\n\nFree Shipping & COD Available\n\n#", null, "## Nowadays, let us assess the total amount after two years\n\nNowadays, let us assess the total amount after two years\n\nSo, just how much will the \\$10 deposit end up being well worth in two ages’ energy at a yearly rate of interest of 7per cent? The solution was \\$ and you may have it by copying https://paydayloansnc.net/cities/spring-lake/ equivalent formula to line D.\n\nTo determine how much money you will find inside bank account at the end of 3 years, just replicate equivalent formula to column elizabeth and you’ll bring \\$.\n\nPeople who possess some experience with succeed solutions have probably determined that precisely what the earlier formula really do is actually multiplying the original deposit of \\$10 by 1.07 three times:\n\nRound they to two e wide variety whilst see in cellular E2 in screenshot above – \\$. Naturally, you’ll straight assess the balance after 3 years utilizing this formula:\n\n## Annual ingredient interest – formula 2\n\nAnother way to create an annual substance interest formula is estimate the accumulated interest for each season then include it with the first deposit.\n\nLet’s assume that their preliminary deposit is within mobile B1 and Annual interest rate in cellular B2, the next formula operates a treat:\n\n• Fix the reference to the yearly interest cellular (B2 in our circumstances) by the addition of the \\$ sign, it should be a total column and downright row, like \\$B\\$2.\n• For 12 months 2 (B6) as well as consequent years, alter the formula to: seasons 1 balance + seasons 1 balances * Interest Rate\n\nInside sample, you had enter the after formula in mobile B6 following replicate it as a result of other rows, like demonstrated in screenshot below:\n\nTo find out how much cash interest you truly attained with yearly compounding, deduct the original deposit (B1) from balances after 1 year (B5). 
This formula goes to C5:\n\nIn C6, deduct balances after 1 year from Balance after two years, and drag the formula right down to additional cells:\n\nThe above mentioned examples do a good job demonstrating the concept of element interest, cannot they? But nothing associated with the formulas is great adequate to feel known as a universal substance interest formula for succeed. Firstly, as they do not allow you to indicate a compounding volume, and furthermore, as you need develop a complete desk instead of merely submit a particular timeframe and rate of interest.\n\nReally, why don’t we capture one step forward and create a worldwide ingredient interest formula for succeed that may determine what kind of cash you’ll obtain with annual, quarterly, month-to-month, once a week or everyday compounding.\n\n## General chemical interest formula\n\nWhenever financial experts assess the effect of compound interest on a good investment, they usually see three issue that identify the long term worth of the financial investment (FV):\n\n• PV – present worth of the financial\n• i – rate of interest earned in each years\n• n – range periods\n\nBy understanding these ingredients, you can make use of listed here formula to obtain the future property value the expense with a certain combined interest rate:\n\n## Sample 1: Monthly compound interest formula\n\nAssume, you invest \\$2,000 at 8percent interest rate compounded month-to-month and you need to know the worth of their investment after five years.\n\n• PV = \\$2,000\n• i = 8percent per year, combined monthly (0.= 006666667)\n• n = five years x one year (5*12=60)\n\n## Example 2: constant mixture interest formula\n\nI’m hoping the month-to-month mixture interest instance is actually well understood, nowadays you can make use of equivalent approach for everyday compounding. 
The initial financial investment, rate of interest, period in addition to formula include the identical as in these example, precisely the compounding course differs from the others:\n\n• PV = \\$2,000\n• i = 8per cent annually, combined every day (0. = 0.000219178)", null, "" ]
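The general formula can also be checked outside Excel. The sketch below is illustrative Python (the function name is ours, not from the article), reproducing Examples 1 and 2:

```python
def future_value(pv, annual_rate, periods_per_year, years):
    """Compound interest: FV = PV * (1 + i)^n, with i the rate per
    compounding period and n the total number of periods."""
    i = annual_rate / periods_per_year
    n = periods_per_year * years
    return pv * (1 + i) ** n

# Example 1: $2,000 at 8% compounded monthly for 5 years
print(round(future_value(2000, 0.08, 12, 5), 2))   # 2979.69
# Example 2: the same investment compounded daily
print(round(future_value(2000, 0.08, 365, 5), 2))  # 2983.52
```

Daily compounding earns slightly more than monthly at the same nominal rate, because interest is added to the principal more often.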
[ null, "https://zozimamart.com/wp-content/uploads/2021/05/zozima-logo.png", null, "https://superexcel.in/wp-content/plugins/wordpress-whatsapp-support/assets/img/user.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9373781,"math_prob":0.91212624,"size":3830,"snap":"2023-14-2023-23","text_gpt3_token_len":813,"char_repetition_ratio":0.1306848,"word_repetition_ratio":0.012638231,"special_character_ratio":0.22036554,"punctuation_ratio":0.09155937,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9782883,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-01T05:29:15Z\",\"WARC-Record-ID\":\"<urn:uuid:be90abcb-7430-412e-93ce-2aeb7399f274>\",\"Content-Length\":\"62095\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b2804a3e-aaaa-4b6a-ab0b-b6c92406d47a>\",\"WARC-Concurrent-To\":\"<urn:uuid:57c4b96e-ecb4-4a5c-a72b-fa1df838a128>\",\"WARC-IP-Address\":\"95.168.187.200\",\"WARC-Target-URI\":\"https://superexcel.in/2022/01/26/nowadays-let-us-assess-the-total-amount-after-two/\",\"WARC-Payload-Digest\":\"sha1:3RHBKUDJK2JKRUNN5HXPQRLHI6JPQCKA\",\"WARC-Block-Digest\":\"sha1:7EL52ZUW44RSGI246CTIFTCAL53EXJVD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949701.0_warc_CC-MAIN-20230401032604-20230401062604-00051.warc.gz\"}"}
https://support.planswift.com/support/discussions/topics/13000006616
[ "Start a new topic\n\n## roof pitch\n\nHi,\n\nI am using planswift with the metric system and am unfamiliar with the imperial system. Im putting in my pitch at 40 degrees but the roof is comin in at a massive area compared to when i measure it manually with a ruler.\n\nI tried putting in 10/12 instead (which should be close to 40degrees?) but now the area of the roof is too small.\n\n2 Comments\n\nroof pitch in say the joist tool is a single number, not 10/12 nor 40* just use 10\n\nSimple Formula: Sqrt(rise(M)^2+run(M)^2)/run(M)  the product of this formula will produce what I call a Pitch factor which can be multiplied to Area's or Linear's totals.\n\nMeters\n\nSqrt(.152^2+.305M^2 =.340778: .34778/.305=1.117\n\nInches\n\nSqrt(6^2+12^2 =13.416: 13.416/12=1.118\n\nso if a rafters run were 10' the rafter would be 11.18' in length\n\nor if a rafters run were 3M the rafter would be 3.351M\n\nLogin or Signup to post a comment", null, "" ]
[ null, "https://support.planswift.com/support/discussions/topics/13000006616/hit", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9137629,"math_prob":0.9008608,"size":827,"snap":"2019-35-2019-39","text_gpt3_token_len":259,"char_repetition_ratio":0.09234508,"word_repetition_ratio":0.014184397,"special_character_ratio":0.3337364,"punctuation_ratio":0.10824742,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9961485,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-21T15:40:54Z\",\"WARC-Record-ID\":\"<urn:uuid:e7998ed4-2825-4fff-becf-7de4f16f753d>\",\"Content-Length\":\"22497\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:94cccc82-78fb-4c1a-b2c2-401aeebacc13>\",\"WARC-Concurrent-To\":\"<urn:uuid:556143b8-d5d1-45b8-b982-d8173a5a120b>\",\"WARC-IP-Address\":\"34.232.241.245\",\"WARC-Target-URI\":\"https://support.planswift.com/support/discussions/topics/13000006616\",\"WARC-Payload-Digest\":\"sha1:OUWYOGPY742PVAOZR7EHBG7YW643HJDG\",\"WARC-Block-Digest\":\"sha1:AJBUYA7LYBZ6RG7A4BHYRX24JR35B4DI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027316075.15_warc_CC-MAIN-20190821152344-20190821174344-00109.warc.gz\"}"}
https://www.it610.com/article/1480198565132300288.htm
[ "【JavaScript Weekly #570】重新思考三元运算符\n\n前言\n\n三元的复杂度\n\n怪异\n\n(/* First expression*/) ? (/* Second expression */) : (/* Third expression */)\n\nconst protocol = (request.secure) ? 'https' : 'http';\n\n对初学者不友好\n\nif (someCondition) {\ntakeAction();\n} else {\nsomeOtherAction();\n}\n\n不宜读\n\nconst ten = Ratio.fromPair(10, 1);\nconst maxYVal = Ratio.fromNumber(Math.max(...yValues));\nconst minYVal = Ratio.fromNumber(Math.min(...yValues));\nconst yAxisRange = (!maxYVal.minus(minYVal).isZero()) ? ten.pow(maxYVal.minus(minYVal).floorLog10()) : ten.pow(maxYVal.plus(maxYVal.isZero() ? Ratio.one : maxYVal).floorLog10());\n\nconst ten = Ratio.fromPair(10, 1);\nconst maxYVal = Ratio.fromNumber(Math.max(...yValues));\nconst minYVal = Ratio.fromNumber(Math.min(...yValues));\nconst yAxisRange = !maxYVal.minus(minYVal).isZero()\n? ten.pow(maxYVal.minus(minYVal).floorLog10())\n: ten.pow(maxYVal.plus(maxYVal.isZero() ? Ratio.one : maxYVal).floorLog10());\n\nif 真的好么\n\n1. 一般使用三元最大的原因是简洁\n2. if 语句在三元的位置上也同样适用\n\n// if语句\nlet result;\nif (someCondition) {\nresult = calculationA();\n} else {\nresult = calculationB();\n}\n\n// 三元\nconst result = (someCondition) ? calculationA() : calculationB();\n\nif (someCondition) {\ntakeAction();\n} else {\nsomeOtherAction();\n}\n\ntakeActionsomeOtherAction 都没有返回值,并且会跳出当前块,那它们会不会造成一定的隐患?\n\n('\n\n' + page.title + '\n\n');", null, "说点别的\n\nreturn\n\nif (someCondition) {\nreturn resultOfMyCalculation();\n}\n\nreturn 会将函数调用解析为一个值,函数调用当成表达式。这样就会像变量赋值一样了。\n\n三元优化\n\nconst ten = Ratio.fromPair(10, 1);\nconst maxYVal = Ratio.fromNumber(Math.max(...yValues));\nconst minYVal = Ratio.fromNumber(Math.min(...yValues));\n\n// 创建四个变量\nconst rangeEmpty = maxYVal.minus(minYVal).isZero();\nconst roundRange = ten.pow(maxYVal.minus(minYVal).floorLog10());\nconst zeroRange = maxYVal.isZero() ? Ratio.one : maxYVal;\nconst defaultRng = ten.pow(maxYVal.plus(zeroRange).floorLog10());\n\n// 组合起来\nconst yAxisRange = !rangeEmpty ? 
roundRange : defaultRng;\n\nconst ten = Ratio.fromPair(10, 1);\nconst maxYVal = Ratio.fromNumber(Math.max(...yValues));\nconst minYVal = Ratio.fromNumber(Math.min(...yValues));\n\n// 创建两个函数\nconst rangeEmpty = maxYVal.minus(minYVal).isZero();\nconst roundRange = () => ten.pow(maxYVal.minus(minYVal).floorLog10());\nconst defaultRng = () => {\nconst zeroRange = maxYVal.isZero() ? Ratio.one : maxYVal;\nreturn ten.pow(maxYVal.plus(zeroRange).floorLog10());\n};\n\n// 组合起来\nconst yAxisRange = !rangeEmpty ? roundRange() : defaultRng();" ]
[ null, "https://img.it610.com/image/info9/2b2fa5d5dc34406eb8b1fcdfcd9cdefd.jpg", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.8608244,"math_prob":0.9976932,"size":4550,"snap":"2022-05-2022-21","text_gpt3_token_len":2908,"char_repetition_ratio":0.13682358,"word_repetition_ratio":0.24848485,"special_character_ratio":0.22263736,"punctuation_ratio":0.24299066,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99910665,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-19T17:55:40Z\",\"WARC-Record-ID\":\"<urn:uuid:2c893328-c990-4cf5-9b71-e6bc445d27d8>\",\"Content-Length\":\"26136\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c4ba0040-d0e5-40f3-977c-dcccde36d345>\",\"WARC-Concurrent-To\":\"<urn:uuid:1b8cba79-a3e9-44b9-bdda-5421f52c2ec1>\",\"WARC-IP-Address\":\"112.126.83.212\",\"WARC-Target-URI\":\"https://www.it610.com/article/1480198565132300288.htm\",\"WARC-Payload-Digest\":\"sha1:WVDIIGYHASF6ZVHBBSWR56VUNW7ZF5EV\",\"WARC-Block-Digest\":\"sha1:MNRJ4QPQHF22BPYIDAJW2ZP6SUD2KTME\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320301475.82_warc_CC-MAIN-20220119155216-20220119185216-00257.warc.gz\"}"}
https://www.avhandlingar.se/avhandling/2d5c7a178c/
[ "# Weighted inequalities of Hardy-type and their limiting inequalities\n\nDetta är en avhandling från Luleå : Luleå tekniska universitet\n\nNyckelord: Matematik; Mathematics;\n\nSammanfattning: This thesis deals with various generalizations of two famous inequalities namely the Hardy inequality and the Pólya-Knopp inequality and the relation between them. In Chapter 1 we give an introduction and overview of the area that serves as a frame for the rest of the thesis. In Chapter 2 the idea of using the weighted Hardy inequality to receive the weighted Pólya-Knopp inequality as a natural limiting inequality is investigated and some problems that arises are discussed. In Chapter 3 a new necessary and sufficient condition for the weighted Hardy inequality is proved and also used to give a new necessary and sufficient condition for a corresponding weighted Pólya-Knopp type inequality. In Chapter 4 a new two-dimensional Pólya-Knopp inequality is proved. This inequality may be regarded as a natural endpoint inequality of the famous two-dimensional Hardy inequality by E. Sawyer, which is characterized by three independent integral conditions while our endpoint inequality is characterized by one condition. In Chapter 5 the three necessary and sufficient conditions for the two- dimensional version of the Hardy inequality given by E. Sawyer are investigated and compared with the corresponding conditions in one dimension. Moreover, the corresponding endpoint problems and conditions are pointed out. In Chapter 5 we also prove a new two-dimensional Hardy inequality, where the weightfunction on the right hand side is of product type. In this case we only need one integral inequality to characterize the inequality and, moreover, by performing the natural limiting process we receive the same result as in Chapter 4. In Chapter 6 we prove criteria for boundedness between weighted Rn spaces of a fairly general multidimensional Hardy-type integral operator with an Oinarov kernel. 
The integrals are taken over cones in Rn with origin as a vertex. In Chapter 7 the related results are proved for the limiting geometric mean operator with an Oinarov kernel. Finally, in Chapter 8 we consider Carleman's inequality, which may be regarded as a discrete version of Pólya-Knopp's inequality and also as a natural limiting inequality of the discrete Hardy inequality. We present several simple proofs of and remarks (e.g. historical) about this inequality. Moreover, we discuss and comment some very new results and put them into this frame. We also include some new proofs and results e.g. a weight characterization of a general weighted Carleman type inequality.", null, "KLICKA HÄR FÖR ATT SE AVHANDLINGEN I FULLTEXT. (PDF-format)", null, "" ]
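For context, the two classical unweighted inequalities that the thesis generalizes can be stated as follows (standard textbook forms, quoted here for reference; they are not reproduced from the thesis, which treats weighted versions):

```latex
% Hardy's integral inequality, for p > 1 and f >= 0:
\int_0^{\infty}\left(\frac{1}{x}\int_0^{x} f(t)\,dt\right)^{p} dx
  \le \left(\frac{p}{p-1}\right)^{p}\int_0^{\infty} f(x)^{p}\,dx .

% Replacing f by f^{1/p} and letting p -> \infty yields the
% Polya-Knopp inequality, its natural limiting inequality:
\int_0^{\infty}\exp\left(\frac{1}{x}\int_0^{x}\ln f(t)\,dt\right) dx
  \le e\int_0^{\infty} f(x)\,dx .
```

The constant (p/(p-1))^p tends to e as p grows, which is why the geometric-mean (Pólya-Knopp) inequality appears as the limit of the Hardy inequality.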
[ null, "https://www.avhandlingar.se/graphics/pdfimage.gif", null, "https://www.uppsatser.se/graphics/blank.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9397767,"math_prob":0.990101,"size":2619,"snap":"2020-24-2020-29","text_gpt3_token_len":516,"char_repetition_ratio":0.1709369,"word_repetition_ratio":0.03,"special_character_ratio":0.17716686,"punctuation_ratio":0.07333333,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9638642,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-25T18:03:39Z\",\"WARC-Record-ID\":\"<urn:uuid:035d53a5-387d-4259-9779-374172a09d89>\",\"Content-Length\":\"12509\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3e7cc68c-184f-416b-8621-ba101e2a03a2>\",\"WARC-Concurrent-To\":\"<urn:uuid:59dc295b-94ad-4e82-ab72-b31ba304ca3c>\",\"WARC-IP-Address\":\"104.31.76.56\",\"WARC-Target-URI\":\"https://www.avhandlingar.se/avhandling/2d5c7a178c/\",\"WARC-Payload-Digest\":\"sha1:FUBN2YYW4S6ZNVNTCEHAQ7RFLE52RSJ3\",\"WARC-Block-Digest\":\"sha1:S5FMB6WWDRPLR3K6MQG3WNBYXAUYBSMA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347389309.17_warc_CC-MAIN-20200525161346-20200525191346-00400.warc.gz\"}"}
https://www.educationviews.org/ann-varela-about-joseph-louis-lagrange/
[ "# Ann Varela about Joseph Louis Lagrange\n\nFeb 10, 2018 by\n\nAn Interview with Ann Varela about Joseph Louis Lagrange\n\nMichael F. Shaughnessy –\n\n1) Apparently, Joseph Louis Lagrange was both a mathematician and astronomer. In your mind, how do these two fit together?\n\nWhen Lagrange studied the three-body problem for the Earth, sun, and moon (celestial mechanics) and the movement of Jupiter’s satellites, measurement was required to make comparative analyses. Measurement is math. Without mathematics, no quantitative data may be collected or analyzed. Subsequently, nothing may be inferred for future experiments without accurate quantitative results.\n\nMuch of the mathematics required for understanding information attained through astronomical observation originates with physics, but sometimes mathematics itself is needed to better understand phenomena. Today, astronomers use mathematics every time they look through a telescope. Cameras used for celestial observation are equipped with detectors to convert (count) photons or electrons and record data relating to the amount of light emitted by the observed objects. Other obvious mathematical calculations would be the distance from one celestial object to another or the distance from the Earth to a celestial body.\n\n2) He reportedly made several significant contributions to the fields of analysis, and number theory. What do you mean by analysis here—and what is the basis of number theory?\n\nMathematical analysis is the branch of mathematics concerned with limits and related theories based on continuous change, such as differentiation (finding derivatives), integration, measure (length, area, volume), infinite series (addition of infinitely many quantities), and analytic functions. When calculating spatial quantities, such as the length of a curved line or the area under a curve, analysis must be employed. 
Examples of this concept include determining how much sod will be needed to cover an irregularly shaped yard, calculating the total distance traveled by a fish under water, or the cooling of a cup of coffee in a cold room.\n\nNumber theory is the study of the properties of positive integers, also known as natural numbers. Number theory includes the study of prime numbers and number families made out of integers, such as rational numbers. Lagrange’s work with the sum of four squares is an example of number theory at work. He also developed a method of approximating the real roots of an equation through continued fractions.\n\nSome other numerical topics that number theorists contemplate include sums of squares, sums of cubes, sums of higher powers, infinitude of primes, shapes of numbers (triangular or square), perfect numbers (numbers that are equal to the sum of their factors), and the Fibonacci sequence.\n\n3)  What were his contributions to both classical and celestial mechanics? In addition, how does this relate to math?\n\nClassical mechanics describes the motion of macroscopic objects and astronomical objects, while celestial mechanics focusses mostly on the motions of planets and stars. Lagrange was interested in solving the three-body problem. This particular problem concentrates on taking an initial set of data that specifies the masses, positions, and velocities of three bodies for some particular point in time and then determining the motions of the three bodies, in agreement with Newton’s laws of motion and of universal gravitation.\n\nLagrange analyzed the consistency of planetary orbits, and discovered the presence of the Lagrangian points, as seen in Figure 1. The first three points, L1, L2, and L3, connect the two large bodies. The other two points, L4 and L5, each form an equilateral triangle with the two large bodies. 
Since objects can orbit around them in a rotating coordinate system tied to the two large bodies, points L4 and L5 are considered stable.

Figure 1. Lagrangian Points.", null, "Lagrange also reformulated the principles of classical mechanics, emphasizing energy more than force and developing a method to use a single polar coordinate equation to describe any orbit. This is important because it allows for calculating the behavior or movement of planets and comets. Nowadays, design engineers and trajectory analysts are concerned with other objects in space, such as the path of spacecraft. Figure 2 shows the projectile motion function with horizontal distance as a function of velocity and launch angle.

Figure 2. Projectile Motion.", null, "4) Every natural number is the sum of four squares. Can you give us an example of this?

First of all, one must know the definition of a natural number. The natural numbers are positive integers, beginning with one. Figure 3 shows a few examples of the theorem Lagrange proved in 1770, which stems from number theory.

Figure 3. Sum of Four Squares.

5) “Calculus of variations” seems to be associated with Lagrange- but what exactly does this mean?

The calculus of variations is a generalization of calculus that refines the solution of a function down to a path, curve, point, or surface, etc., relating to a fixed value, such as a minimum or maximum. In 1754, Lagrange developed this principle of variation throughout his work on the tautochrone; however, Euler later coined the official name, calculus of variations, in 1766. Figure 4 shows how the time taken by an object descending without resistance in constant gravity to its minimum is not dependent on its starting point.

Figure 4. Tautochrone Curve.", null, "6) What are Lagrange “multipliers”?

The method of Lagrange multipliers is a strategy for finding the related maxima and minima of a function subject to equality constraints.
The main point here is to find those values of the function that produce the largest and smallest output, either within a specified range (specific input) or on the entire domain (all input considered) of a function. Equality constraints refer to conditions of the problem that the solution must satisfy. Constraints may be in the form of an equality, inequality, or an integer. For example, if one wants to impose a constraint to only include x-values that are greater than one, the notation would be x > 1.

7)  Lagrange was involved in the making of standards of measurement. Why is this important in terms of math?

Lagrange’s efforts with standardizing weights and measures began in 1790 as a member of an appointed committee. Lagrange was also involved with the development of the metric system. His major role was setting up the unit system of meter and kilogram, along with their decimal parts. Having standard units of measurement makes for easier communication and understanding of another mathematician’s work. Comprehensively, a standard unit of measure is indeed helpful for most fields of study. One can see the importance of the scientific method and realize how a standard unit of measure may ensure consistent results when conducting research.

8) Allegedly, he was a great mathematician but not a great teacher or professor. Where did he actually teach and what was said about his teaching?

Lagrange was employed as an assistant professor of mathematics at the Royal Military Academy in 1755. The subjects he was charged with were calculus and mechanics. Though he was obviously talented and fluent in these areas himself, teaching them to students was another matter entirely. Lagrange apparently had a style of teaching that was difficult to understand because his own learning style was supposedly unique and vague. Patience with engineering applications was another character trait that caused him anguish.

9) Lagrange’s tomb is apparently in the Pantheon.
It sounds like a fitting tribute to a great mathematician. Your thoughts about his contributions?

Lagrange’s involvement with the calculus of variations is considered one of his greatest contributions. His work in this area began with presenting a better way to solve an equation we now know as Euler’s equation. Other prominent mathematicians used Lagrange’s findings as a springboard for their own work in this area. Next, Lagrange studied geometric-based topics including volumes and surface areas, which led him to the general theory of a surface integral.

Polynomial equations intrigued Lagrange, especially those of the fifth degree and higher. Like Cardano, he attempted to simplify polynomial equations by considering permutations of the roots. This technique was not successful, so he halted his pursuit.

Lagrange’s work on functions was not quite the same as what is taught today, but it eventually led to the fundamental theorem of calculus.

Finally, Lagrange’s efforts on the topic of planetary motions were quite outstanding, as they led to certain limits for an orbit’s stability." ]
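The examples that were in Figure 3 did not survive extraction, so here is a small illustrative brute-force check of the four-square theorem from question 4 (our code, not from the original interview):

```python
from itertools import product

def four_squares(n):
    """Return (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == n (Lagrange, 1770)."""
    m = int(n ** 0.5)
    for a, b, c, d in product(range(m + 1), repeat=4):
        if a * a + b * b + c * c + d * d == n:
            return (a, b, c, d)

print(four_squares(7))    # (1, 1, 1, 2): 1 + 1 + 1 + 4 = 7
for n in range(1, 100):   # every natural number up to 99 has a representation
    assert four_squares(n) is not None
```

Three squares would not suffice here: 7 is a number that cannot be written as a sum of three squares, which is exactly why Lagrange's theorem needs four.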
[ null, "https://upload.wikimedia.org/wikipedia/commons/5/5f/Lagrangian_points_equipotential.jpg", null, "https://upload.wikimedia.org/wikipedia/commons/2/25/Projectile-Motion.png", null, "https://upload.wikimedia.org/wikipedia/commons/b/bd/Tautochrone_curve.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95264834,"math_prob":0.9590084,"size":8907,"snap":"2021-04-2021-17","text_gpt3_token_len":1752,"char_repetition_ratio":0.11265866,"word_repetition_ratio":0.004398827,"special_character_ratio":0.1870439,"punctuation_ratio":0.109217174,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98893833,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-16T16:01:27Z\",\"WARC-Record-ID\":\"<urn:uuid:d928bac4-039f-427b-97df-9e26f75c1ea9>\",\"Content-Length\":\"81897\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6cfd4422-bb7c-4b93-8aa7-861546ceef94>\",\"WARC-Concurrent-To\":\"<urn:uuid:e35c9269-6be4-4ea8-8296-39d1ea9b3349>\",\"WARC-IP-Address\":\"198.71.61.213\",\"WARC-Target-URI\":\"https://www.educationviews.org/ann-varela-about-joseph-louis-lagrange/\",\"WARC-Payload-Digest\":\"sha1:RJ4YUJVMH4VHQZKC6FPGLYXA7ZFJVY7L\",\"WARC-Block-Digest\":\"sha1:FLJ4UPV4FDJSRLXV7VEYYFRUKX2Q5Z4B\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703506697.14_warc_CC-MAIN-20210116135004-20210116165004-00727.warc.gz\"}"}
https://percent-of.com/calculate/what-is-40-of-9107/
[ "# We use percentages in almost everything.\n\nPercentages are a very important part of our daily lives. They are used in Economics, Cooking, Health, Sports, Mathematics, Science, Jewellery, Geography, Medicine and many other areas.\n\n## Percent of Calculator\n\nCalculate percentage of X, quick & simple.\n\n%\n?\n\n40% of 9107 is:\n3642.8\n\n## Percent of - Table For 9107\n\nPercent of Difference\n1% of 9107 is 91.07 9015.93\n2% of 9107 is 182.14 8924.86\n3% of 9107 is 273.21 8833.79\n4% of 9107 is 364.28 8742.72\n5% of 9107 is 455.35 8651.65\n6% of 9107 is 546.42 8560.58\n7% of 9107 is 637.49 8469.51\n8% of 9107 is 728.56 8378.44\n9% of 9107 is 819.63 8287.37\n10% of 9107 is 910.7 8196.3\n11% of 9107 is 1001.77 8105.23\n12% of 9107 is 1092.84 8014.16\n13% of 9107 is 1183.91 7923.09\n14% of 9107 is 1274.98 7832.02\n15% of 9107 is 1366.05 7740.95\n16% of 9107 is 1457.12 7649.88\n17% of 9107 is 1548.19 7558.81\n18% of 9107 is 1639.26 7467.74\n19% of 9107 is 1730.33 7376.67\n20% of 9107 is 1821.4 7285.6\n21% of 9107 is 1912.47 7194.53\n22% of 9107 is 2003.54 7103.46\n23% of 9107 is 2094.61 7012.39\n24% of 9107 is 2185.68 6921.32\n25% of 9107 is 2276.75 6830.25\n26% of 9107 is 2367.82 6739.18\n27% of 9107 is 2458.89 6648.11\n28% of 9107 is 2549.96 6557.04\n29% of 9107 is 2641.03 6465.97\n30% of 9107 is 2732.1 6374.9\n31% of 9107 is 2823.17 6283.83\n32% of 9107 is 2914.24 6192.76\n33% of 9107 is 3005.31 6101.69\n34% of 9107 is 3096.38 6010.62\n35% of 9107 is 3187.45 5919.55\n36% of 9107 is 3278.52 5828.48\n37% of 9107 is 3369.59 5737.41\n38% of 9107 is 3460.66 5646.34\n39% of 9107 is 3551.73 5555.27\n40% of 9107 is 3642.8 5464.2\n41% of 9107 is 3733.87 5373.13\n42% of 9107 is 3824.94 5282.06\n43% of 9107 is 3916.01 5190.99\n44% of 9107 is 4007.08 5099.92\n45% of 9107 is 4098.15 5008.85\n46% of 9107 is 4189.22 4917.78\n47% of 9107 is 4280.29 4826.71\n48% of 9107 is 4371.36 4735.64\n49% of 9107 is 4462.43 4644.57\n50% of 9107 is 4553.5 4553.5\n51% of 9107 is 4644.57 
Practice Test 2 - PRACTICE TESTS - SAT Subject Test Chemistry\n\n## PART 3\n\n## PRACTICE TESTS\n\n### Practice Test 2\n\nNote: For all questions involving solutions and/or chemical equations, assume that the system is in water unless otherwise stated.\n\nReminder: You may not use a calculator on these tests.\n\nThe following symbols have the meanings listed unless otherwise noted.\n\nH = enthalpy, T = temperature, L = liter(s), M = molar, V = volume, mL = milliliter(s), n = number of moles, atm = atmosphere, mm = millimeter(s), P = pressure, g = gram(s), mol = mole(s), R = molar gas constant, J = joule(s), V = volt(s), S = entropy, kJ = kilojoules\n\nPart A\n\nDirections: Every set of the given lettered choices below refers to the numbered statements or formulas immediately following it. Choose the one lettered choice that best fits each statement or formula and then fill in the corresponding oval on the answer sheet. Each choice may be used once, more than once, or not at all in each set.\n\nQuestions 1–4 refer to the following terms:\n\n(A) Law of Definite Composition\n\n(B) Nuclear fusion\n\n(C) van der Waals forces\n\n(D) Graham’s Law of Diffusion (Effusion)\n\n(E) Triple point\n\n1. At a particular temperature and pressure, three states of a substance may coexist.\n\n2. The combining of nuclei to release energy.\n\n3. The ratio of the rate of movement of hydrogen gas compared with the rate of oxygen gas is 4 : 1.\n\n4. The molecules of nitrogen monoxide and nitrogen dioxide differ by a multiple of the mass of one oxygen atom.\n\nQuestions 5–7 refer to the following diagram:\n\n[enthalpy diagram for the stepwise oxidation of carbon: C + O2 → CO → CO2]\n\n5. The ΔH of the reaction to form CO from C + ½O2\n\n6. The ΔH of the reaction to form CO2 from CO + ½O2\n\n7. The ΔH of the reaction to form CO2 from C + O2\n\nQuestions 8–11\n\n(A) Hydrogen bond\n\n(B) Ionic bond\n\n(C) Polar covalent bond\n\n(D) Nonpolar covalent bond\n\n(E) Metallic bond\n\n8. 
The type of bond between atoms of potassium and chlorine when they form a crystal of potassium chloride\n\n9. The type of bond between the atoms in a nitrogen molecule\n\n10. The type of bond between the atoms in a molecule of CO2 (electronegativity difference = 1)\n\n11. The type of bond between the atoms of calcium in a crystal of calcium\n\nQuestions 12–14 refer to the following graphs:\n\n[five graphs, (A)–(E), each showing a different slope]\n\n12. The slope of volume vs. pressure for a gas at constant temperature\n\n13. The slope of pressure vs. temperature for a gas at constant volume\n\n14. The slope of volume vs. temperature for a gas at constant pressure\n\nQuestions 15–18\n\n(A) Least-reactive family of elements\n\n(B) Alkali metals\n\n(C) Halogen family of elements\n\n(D) Noble gases\n\n(E) Family whose oxides form acids in water\n\n15. The elements that most actively react with water to release hydrogen\n\n16. The elements least likely to become involved in chemical reactions\n\n17. Family that contains elements in the colored gaseous state, in the liquid state, and with metallic properties\n\n18. Group of nonmetallic elements containing N and P\n\nQuestions 19–23\n\n(A) 1s\n\n(B) 2s\n\n(C) 3s\n\n(D) 3p\n\n(E) 3d\n\n19. Contains up to ten electrons\n\n20. Contains one pair of electrons in the ground state of the lithium atom\n\n21. Is exactly one-half filled in the ground state of the phosphorus atom\n\n22. Contains the valence electrons in the ground state of the magnesium atom\n\n23. Contains a filled orbital of electrons in the ground state of helium\n\nPart B\n\nON THE ACTUAL CHEMISTRY TEST, THE FOLLOWING TYPE OF QUESTION MUST BE ANSWERED ON A SPECIAL SECTION (LABELED “CHEMISTRY”) AT THE LOWER LEFT-HAND CORNER OF YOUR ANSWER SHEET. THESE QUESTIONS WILL BE NUMBERED BEGINNING WITH 101 AND MUST BE ANSWERED ACCORDING TO THE FOLLOWING DIRECTIONS.\n\nDirections: Every question below contains two statements, I in the left-hand column and II in the right-hand column. 
For each question, decide if statement I is true or false and if statement II is true or false and fill in the corresponding T or F ovals on your answer sheet. *Fill in oval CE only if statement II is a correct explanation of statement I.\n\nSample Answer Grid:\n\nCHEMISTRY * Fill in oval CE only if II is a correct explanation of I.\n\n101. I: The structure of SO3 is shown by using more than one structural formula BECAUSE II: SO3 is very unstable and resonates between these possible structures.\n\n102. I: When the ΔG of a reaction at a given temperature is negative, the reaction occurs spontaneously BECAUSE II: when ΔG is negative, ΔH is also negative.\n\n103. I: One mole of CO2 has a greater mass than 1 mole of H2O BECAUSE II: the molecular mass of CO2 is greater than the molecular mass of H2O.\n\n104. I: Hydrosulfuric acid is often used in qualitative tests BECAUSE II: H2S(aq) reacts with many metallic ions to give colored precipitates.\n\n105. I: Crystals of sodium chloride go into solution in water as ions BECAUSE II: the sodium ion has a 1+ charge and the chloride ion has a 1− charge and they are hydrated by the water molecules.\n\n106. I: If some phosphoric acid, H3PO4, is added to the equilibrium mixture represented by the equation H3PO4 + H2O ↔ PO43− + H3O+, the concentration of H3O+ decreases BECAUSE II: the equilibrium constant of a reaction changes as the concentration of the reactants changes.\n\n107. I: The ΔHreaction of a particular reaction can be arrived at by the summation of the ΔHreaction values of two or more reactions that, added together, give the ΔHreaction of the particular reaction BECAUSE II: Hess’s Law conforms to the First Law of Thermodynamics, which states that the total energy of the universe is a constant.\n\n108. 
I: In a reaction that has both a forward and a reverse reaction, A + B ⇌ AB, when only A and B are introduced into a reacting vessel, the forward reaction rate is the highest at the beginning and begins to decrease from that point until equilibrium is reached BECAUSE II: the reverse reaction does not begin until equilibrium is reached.\n\n109. I: At equilibrium, the forward reaction and reverse reaction stop BECAUSE II: at equilibrium, the reactants and products have reached the equilibrium concentrations.\n\n110. I: The hybrid orbital form of carbon in acetylene is believed to be the sp form BECAUSE II: C2H2 is a linear compound with a triple bond between the carbons.\n\n111. I: The weakest of the bonds between molecules are coordinate covalent bonds BECAUSE II: coordinate covalent bonds represent the weak attractive force of the electrons of one molecule for the positively charged nucleus of another.\n\n112. I: A saturated solution is not necessarily concentrated BECAUSE II: dilute and concentrated are terms that relate only to the relative amount of solute dissolved in the solvent.\n\n113. I: Lithium is the most active metal in the first group of the Periodic Table BECAUSE II: lithium has only one electron in the outer energy level.\n\n114. I: The anions migrate to the cathode in an electrolytic cell BECAUSE II: positively charged ions are attracted to the negatively charged cathode.\n\n115. I: The atomic number of a neutral atom that has a mass of 39 and has 19 electrons is 19 BECAUSE II: the number of protons in a neutral atom is equal to the number of electrons.\n\n116. I: For an element with an atomic number of 17, the most probable oxidation number is +1 BECAUSE II: the outer energy level of the halogen family has a tendency to add one electron to itself.\n\nPart C\n\nDirections: Each of the questions or incomplete statements below is followed by five suggested answers or completions. Select the one that is best in each case and then fill in the corresponding oval on the answer sheet.\n\n24. 
All of the following involve a chemical change EXCEPT\n\n(A) the formation of HCl from H2 and Cl2\n\n(B) the color change when NO is exposed to air\n\n(C) the formation of steam from burning H2 and O2\n\n(D) the solidification of vegetable oil at low temperatures\n\n(E) the odor of NH3 when NH4Cl is rubbed together with Ca(OH)2 powder\n\n25. When most fuels burn, the products include carbon dioxide and\n\n(A) hydrocarbons\n\n(B) hydrogen\n\n(C) water\n\n(D) hydroxide\n\n(E) hydrogen peroxide\n\n26. In the metric system, the prefix kilo- means\n\n(A) 10⁰\n\n(B) 10⁻¹\n\n(C) 10⁻²\n\n(D) 10²\n\n(E) 10³\n\n27. How many atoms are in 1 mole of water?\n\n(A) 3\n\n(B) 54\n\n(C) 6.02 × 10²³\n\n(D) 2(6.02 × 10²³)\n\n(E) 3(6.02 × 10²³)\n\n28. Which of the following elements normally exist as monoatomic molecules?\n\n(A) Cl\n\n(B) H\n\n(C) O\n\n(D) N\n\n(E) He\n\n29. The shape of a PCl3 molecule is described as\n\n(A) bent\n\n(B) trigonal planar\n\n(C) linear\n\n(D) trigonal pyramidal\n\n(E) tetrahedral\n\n30. The complete loss of an electron of one atom to another atom with the consequent formation of electrostatic charges is referred to as\n\n(A) a covalent bond\n\n(B) a polar covalent bond\n\n(C) an ionic bond\n\n(D) a coordinate covalent bond\n\n(E) a pi bond between orbitals\n\n31. In the electrolysis of water, the cathode reduction reaction is\n\n(A) 2H2O(l) + 2e− → H2(g) + 2OH− + O2(g)\n\n(B) H2O(l) → ½O2(g) + 2H+ + 2e−\n\n(C) 2OH− + 2e− → O2(g) + H2(g)\n\n(D) 2H+ + 2e− → H2(g)\n\n(E) 2H2O(l) + 4e− → O2(g) + 2H2(g)\n\n32. Which of the following radiation emissions has no mass?\n\n(A) Alpha particle\n\n(B) Beta particle\n\n(C) Proton\n\n(D) Neutron\n\n(E) Gamma ray\n\n33. If a radioactive element with a half-life of 100 years is found to have transmutated so that only 25% of the original sample remains, what is the age, in years, of the sample?\n\n(A) 25\n\n(B) 50\n\n(C) 100\n\n(D) 200\n\n(E) 400\n\n34. 
What is the pH of an acetic acid solution if the [H3O+] = 1 × 10⁻⁴ mole/liter?\n\n(A) 1\n\n(B) 2\n\n(C) 3\n\n(D) 4\n\n(E) 5\n\n35. The polarity of water is useful in explaining which of the following?\n\nI. The solution process\n\nII. The ionization process\n\nIII. The high conductivity of distilled water\n\n(A) I only\n\n(B) II only\n\n(C) I and II only\n\n(D) II and III only\n\n(E) I, II, and III\n\n36. When sulfur dioxide is bubbled through water, the solution will contain\n\n(A) sulfurous acid\n\n(B) sulfuric acid\n\n(C) hyposulfuric acid\n\n(D) persulfuric acid\n\n(E) anhydrous sulfuric acid\n\n37. Four grams of hydrogen gas at STP contain\n\n(A) 6.02 × 10²³ atoms\n\n(B) 12.04 × 10²³ atoms\n\n(C) 12.04 × 10⁴⁶ atoms\n\n(D) 1.2 × 10²³ molecules\n\n(E) 12.04 × 10²³ molecules\n\n38. Analysis of a gas gave: C = 85.7% and H = 14.3%. If the formula mass of this gas is 42 atomic mass units, what are the empirical formula and the true formula?\n\n(A) CH; C4H4\n\n(B) CH2; C3H6\n\n(C) CH3; C3H9\n\n(D) C2H2; C3H6\n\n(E) C2H4; C3H6\n\n39. Which fraction would be used to correct a given volume of gas at 300 K to its new volume when it is heated to 333 K and the pressure is kept constant?\n\n[the five answer choices, (A)–(E), are fractions formed from these temperatures]\n\n40. What would be the predicted freezing point of a solution that has 684 grams of sucrose (1 mol = 342 g) dissolved in 2,000 grams of water?\n\n(A) −1.86°C or 271.14 K\n\n(B) −0.93°C or 272.07 K\n\n(C) −1.39°C or 271.61 K\n\n(D) −2.48°C or 270.52 K\n\n(E) −3.72°C or 269.28 K\n\n41. What is the approximate pH of a 0.005 M solution of H2SO4?\n\n(A) 1\n\n(B) 2\n\n(C) 5\n\n(D) 9\n\n(E) 13\n\n42. How many grams of NaOH are needed to make 100 grams of a 5% solution?\n\n(A) 2\n\n(B) 5\n\n(C) 20\n\n(D) 40\n\n(E) 95\n\n43. For the Haber process: N2 + 3H2 ⇌ 2NH3 + heat (at equilibrium), which of the following statements concerning the reaction rate is/are true?\n\nI. The reaction to the right will increase when pressure is increased.\n\nII. 
The reaction to the right will decrease when the temperature is increased.\n\nIII. The reaction to the right will decrease when NH3 is removed from the chamber.\n\n(A) I only\n\n(B) II only\n\n(C) I and II only\n\n(D) II and III only\n\n(E) I, II, and III\n\n44. If you titrate 1.0 M H2SO4 solution against 50. milliliters of 1.0 M NaOH solution, what volume of H2SO4, in milliliters, will be needed for neutralization?\n\n(A) 10.\n\n(B) 25.\n\n(C) 40.\n\n(D) 50.\n\n(E) 100\n\n45. How many grams of CO2 can be prepared from 150 grams of calcium carbonate reacting with an excess of hydrochloric acid solution?\n\n(A) 11\n\n(B) 22\n\n(C) 33\n\n(D) 44\n\n(E) 66\n\nQuestion 46 refers to the following diagram:\n\n[diagram of a gas-preparation and collection setup]\n\n46. The diagram represents a setup that may be used to prepare and collect\n\n(A) NH3\n\n(B) NO\n\n(C) H2\n\n(D) SO3\n\n(E) CO2\n\n[diagram of a covered crucible heated over a burner]\n\n47. The lab setup shown above was used for the gravimetric analysis of the empirical formula of MgO. In synthesizing MgO from a Mg strip in the crucible, which of the following is NOT true?\n\n(A) The initial strip of Mg should be cleaned.\n\n(B) The lid of the crucible should fit tightly to exclude oxygen.\n\n(C) The heating of the covered crucible should continue until the Mg is fully reacted.\n\n(D) The crucible, lid, and the contents should be cooled to room temperature before measuring their mass.\n\n(E) When the Mg appears to be fully reacted, the crucible lid should be partially removed and heating continued.\n\nQuestions 48–50 refer to the following experimental setup and data:\n\n[diagram: hydrogen generated in a flask passes through a CaCl2 drying tube, over heated copper oxide in a boat, and into a CaCl2 U-tube]\n\nRecorded data:\n\nWeight of U-tube................................ 20.36 g\n\nWeight of U-tube and calcium chloride before................................................... 39.32 g\n\nWeight of U-tube and calcium chloride after................................................... 57.32 g\n\nWeight of boat and contents (copper oxide) before...................................................
30.23 g\n\nWeight of boat and contents after................................................... 14.23 g\n\nWeight of boat................................................... 5.00 g\n\n48. What is the reason for the first CaCl2 drying tube?\n\n(A) Generate water\n\n(B) Absorb hydrogen\n\n(C) Absorb water that evaporates from the flask\n\n(D) Decompose the water from the flask\n\n(E) Act as a catalyst for the combination of hydrogen and oxygen\n\n49. What conclusion can be derived from the data collected?\n\n(A) Oxygen was lost from the CaCl2.\n\n(B) Oxygen was generated in the U-tube.\n\n(C) Water was formed from the reaction.\n\n(D) Hydrogen was absorbed by the CaCl2.\n\n(E) CuO was formed in the decomposition.\n\n50. What is the ratio of the mass of water formed to the mass of hydrogen used in the formation of water?\n\n(A) 1 : 8\n\n(B) 1 : 9\n\n(C) 8 : 1\n\n(D) 9 : 1\n\n(E) 8 : 9\n\n51. What is the mass, in grams, of 1 mole of KAl(SO4)2 · 12H2O?\n\n(A) 132\n\n(B) 180\n\n(C) 394\n\n(D) 474\n\n(E) 516\n\n52. What mass of aluminum will be completely oxidized by 2 moles of oxygen at STP?\n\n(A) 18 g\n\n(B) 37.8 g\n\n(C) 50.4 g\n\n(D) 72.0 g\n\n(E) 100.8 g\n\n53. In general, when metal oxides react with water, they form solutions that are\n\n(A) acidic\n\n(B) basic\n\n(C) neutral\n\n(D) unstable\n\n(E) colored\n\nQuestions 54–56 refer to the following diagram:\n\n[diagram of an electrochemical cell with parts labeled A–E]\n\n54. The oxidation reaction will occur at\n\n(A) A\n\n(B) B\n\n(C) C\n\n(D) D\n\n(E) E\n\n55. The apparatus at ___ is called the\n\n(A) anode\n\n(B) cathode\n\n(C) salt bridge\n\n(D) ion bridge\n\n(E) osmotic bridge\n\n56. The standard potentials of the metals are:\n\nZn2+ + 2e− → Zn0       E° = −0.76 volt\n\nCu0 → Cu2+ + 2e−      E° = −0.34 volt\n\nWhat will be the voltmeter reading for this reaction?\n\n(A) +1.10\n\n(B) −1.10\n\n(C) +0.42\n\n(D) −0.42\n\n(E) −1.52\n\n________________________\n\n57. 
How many liters of oxygen (STP) can be prepared from the decomposition of 212 grams of sodium chlorate (1 mol = 106 g)?\n\n(A) 11.2\n\n(B) 22.4\n\n(C) 44.8\n\n(D) 67.2\n\n(E) 78.4\n\n58. In this equation: Al(OH)3 + H2SO4 → Al2(SO4)3 + H2O, the whole-number coefficients of the balanced equation are\n\n(A) 1, 3, 1, 2\n\n(B) 2, 3, 2, 6\n\n(C) 2, 3, 1, 6\n\n(D) 2, 6, 1, 3\n\n(E) 1, 3, 1, 6\n\n59. What is ΔHreaction for the decomposition of 1 mole of sodium chlorate? (ΔHf° values: NaClO3(s) = −85.7 kcal/mol, NaCl(s) = −98.2 kcal/mol, O2(g) = 0 kcal/mol)\n\n(A) −183.9 kcal\n\n(B) −91.9 kcal\n\n(C) +45.3 kcal\n\n(D) +22.5 kcal\n\n(E) −12.5 kcal\n\n60. Isotopes of an element are related because which of the following is (are) the same in these isotopes?\n\nI. Atomic mass\n\nII. Atomic number\n\nIII. Arrangement of orbital electrons\n\n(A) I only\n\n(B) II only\n\n(C) I and II only\n\n(D) II and III only\n\n(E) I, II, and III\n\n61. In the reaction of zinc with dilute HCl to form H2, which of the following will increase the reaction rate?\n\nI. Increasing the temperature\n\nII. Increasing the exposed surface of zinc\n\nIII. Using a more concentrated solution of HCl\n\n(A) I only\n\n(B) II only\n\n(C) I and III only\n\n(D) II and III only\n\n(E) I, II, and III\n\n[diagram of a gas-collection setup over water]\n\n62. The laboratory setup shown above can be used to prepare a\n\n(A) gas lighter than air and soluble in water\n\n(B) gas heavier than air and soluble in water\n\n(C) gas soluble in water that reacts with water\n\n(D) gas insoluble in water\n\n(E) gas that reacts with water\n\n63. In this reaction: CaCO3 + 2HCl → CaCl2 + H2O + CO2. If 4.0 moles of HCl are available to the reaction with an unlimited supply of CaCO3, how many moles of CO2 can be produced at STP?\n\n(A) 1.0\n\n(B) 1.5\n\n(C) 2.0\n\n(D) 2.5\n\n(E) 3.0\n\n64. A saturated solution of BaSO4 at 25°C contains 3.9 × 10⁻⁵ mole/liter of Ba2+ ions. 
What is the Ksp of this salt?\n\n(A) 3.9 × 10⁻⁵\n\n(B) 3.9 × 10⁻⁶\n\n(C) 2.1 × 10⁻⁷\n\n(D) 1.5 × 10⁻⁸\n\n(E) 1.5 × 10⁻⁹\n\n65. If 0.1 mole of K2SO4 was added to the solution in question 64, what would happen to the Ba2+ concentration?\n\n(A) It would increase.\n\n(B) It would decrease.\n\n(C) It would remain the same.\n\n(D) It would first increase, then decrease.\n\n(E) It would first decrease, then increase.\n\n66. Which of the following will definitely cause the volume of a gas to increase?\n\nI. Decreasing the pressure with the temperature held constant.\n\nII. Increasing the pressure with a temperature decrease.\n\nIII. Increasing the temperature with a pressure increase.\n\n(A) I only\n\n(B) II only\n\n(C) I and III only\n\n(D) II and III only\n\n(E) I, II, and III\n\n67. The number of oxygen atoms in 0.50 mole of Al2(CO3)3 is\n\n(A) 4.5 × 10²³\n\n(B) 9.0 × 10²³\n\n(C) 3.6 × 10²⁴\n\n(D) 2.7 × 10²⁴\n\n(E) 5.4 × 10²⁴\n\nQuestion 68 refers to a solution of 1 M acid, HA, with Ka = 1 × 10⁻⁶.\n\n68. What is the H3O+ concentration? (Assume [HA] = 1, [H3O+] = x, [A−] = x.)\n\n(A) 1 × 10⁻⁵\n\n(B) 1 × 10⁻⁴\n\n(C) 1 × 10⁻²\n\n(D) 1 × 10⁻³\n\n(E) 0.9 × 10⁻³\n\n________________________\n\n69. What is the percent dissociation of acetic acid in a 0.1 M solution if the [H3O+] is 1 × 10⁻³ mole/liter?\n\n(A) 0.01%\n\n(B) 0.1%\n\n(C) 1.0%\n\n(D) 1.5%\n\n(E) 2.0%\n\nIf you finish before one hour is up, you may go back to check your work or complete unanswered questions.\n\nAnswer Key\n\nP  R  A  C  T  I  C  E   T  E  S  T   2\n\n 1. E 14. C 104. T, T, CE 2. B 15. B 105. T, T, CE 3. D 16. D 106. F, F 4. A 17. C 107. T, T, CE 5. B 18. E 108. T, F 6. C 19. E 109. F, T 7. A 20. A 110. T, T, CE 8. B 21. D 111. F, F 9. D 22. C 112. T, T, CE 10. C 23. A 113. F, T 11. E 101. T, F 114. F, T 12. E 102. T, F 115. T, T, CE 13. A 103. T, T, CE 116. F, T\n 24. D 39. E 54. A 25. C 40. A 55. C 26. E 41. B 56. A 27. E 42. B 57. D 28. E 43. C 58. C 29. D 44. B 59. E 30. C 45. E 60. D 31. D 46. 
E 61. E 32. E 47. B 62. D 33. D 48. C 63. C 34. D 49. C 64. E 35. C 50. D 65. B 36. A 51. D 66. A 37. E 52. D 67. D 38. B 53. B 68. D 69. C\n\nANSWERS EXPLAINED\n\n1. (E) A phase diagram shows that all three states can exist at the triple point.\n\n2. (B) The combining of nuclei is called nuclear fusion.\n\n3. (D) According to Graham’s Law of Gaseous Diffusion (or Effusion), the rate of diffusion is inversely proportional to the square root of the molecular weight.\n\nThen rate H2 : rate O2 = √32 : √2 = √16 : 1 = 4 : 1.\n\n4. (A) The Law of Definite Composition states that, when compounds form, they always form in the same ratio by mass. Water, for instance, always forms in a ratio of 1 : 8 of hydrogen to oxygen by mass. For nitrogen monoxide (NO) and nitrogen dioxide (NO2) the difference in molecular mass is one atomic mass of oxygen.\n\n5. (B) The first step is the ΔH for C + ½O2 → CO. It releases −110.5 kJ of heat. This is written as −110.5 kJ because it is exothermic.\n\n6. (C) This is the second step on the diagram. It releases −283.0 kJ of heat.\n\n7. (A) To arrive at the ΔH, take the total drop (−393.5 kJ) or add these reactions:\n\nC + ½O2 → CO      ΔH = −110.5 kJ\n\nCO + ½O2 → CO2      ΔH = −283.0 kJ\n\nC + O2 → CO2      ΔH = −393.5 kJ\n\n8. (B) Potassium and chlorine have a large enough difference in their electronegativities to form ionic bonds. The respective positions of these two elements in the periodic chart also are indicative of the large difference in their electronegativity values.\n\n9. (D) Two atoms of an element that forms a diatomic molecule always have a nonpolar covalent bond between them since the electron attraction or electronegativity of the two atoms is the same.\n\n10. (C) Electronegativity differences between 0.5 and 1.7 are usually indicative of polar covalent bonding. CO2 is an interesting example of a nonpolar molecule with polar covalent bonds since the bonds are symmetrical in the molecule.\n\n11. (E) Calcium is a metal and forms a metallic bond between atoms.\n\n12. 
(E) This graph shows the volume decreasing as the pressure is increased and the temperature is held constant. It is an example of Boyle’s Law (PV = k).\n\n13. (A) This graph shows the pressure increasing as the temperature is increased and the volume is held constant. It is an example of Gay-Lussac’s Law (P/T = k).\n\n14. (C) This graph shows the volume increasing as the temperature is increased and the pressure is held constant. It is an example of Charles’s Law (V/T = k).\n\n15. (B) The alkali metals react with water to form hydroxides and release hydrogen. A typical reaction is:\n\n2Na(s) + 2H2O(l) → 2NaOH(aq) + H2(g)\n\n16. (D) The noble gases are the least reactive because of their completed outer orbital.\n\n17. (C) The halogen family contains the colored gases fluorine and chlorine at room temperatures, the reddish liquid bromine, and metallic-like purple iodine.\n\n18. (E) These nonmetals, when they are oxides, react as acidic anhydrides with water to form acid solutions.\n\n19. (E) The five 3d orbitals can contain a total of ten electrons.\n\n20. (A) The 1s orbital is filled with two electrons in the lithium atom.\n\n21. (D) The phosphorus atom has a half-filled 3p orbital level.\n\n22. (C) The 3s orbital contains the valence electrons of magnesium.\n\n23. (A) The helium atom has a filled 1s orbital.\n\n101. (T, F) Sulfur trioxide is shown by three structural formulas because each bond is a “hybrid” of a single and double bond. Resonance in chemistry does not mean that the bonds resonate between the structures shown in the structural drawing.\n\n102. (T, F) When ΔG is negative in the Gibbs equation, the reaction is spontaneous. However, the total equation determines this, not just the ΔH. The Gibbs equation is: ΔG = ΔH − TΔS.\n\n103. (T, T, CE) One mole of each of the gases contains 6.02 × 10²³ molecules, but their molecular masses are different. CO2 is found by adding one C = 12 and two O = 32, or a total of 44 amu. 
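The molar-mass bookkeeping in answer 103 can be checked with a short sketch. The lookup table and helper function below are illustrative only (not from the book), using the same integer atomic masses the explanation uses (C = 12, O = 16, H = 1):

```python
# Integer atomic masses, as used in answer 103.
ATOMIC_MASS = {"C": 12, "O": 16, "H": 1}

def molar_mass(formula):
    """Sum atomic masses for a formula given as (element, count) pairs."""
    return sum(ATOMIC_MASS[el] * count for el, count in formula)

co2 = molar_mass([("C", 1), ("O", 2)])  # one C plus two O -> 44 amu
h2o = molar_mass([("H", 2), ("O", 1)])  # two H plus one O -> 18 amu
```

Since both samples contain the same number of molecules (6.02 × 10²³), the heavier molecule gives the heavier mole: 44 g/mol for CO2 versus 18 g/mol for H2O.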
The H2O, however, adds up to two H = 2 plus one O = 16, or a total of 18 amu. Thus it is true that 1 mol of CO2 at 44 g/mol is heavier than 1 mol of H2O at 18 g/mol.\n\n104. (T, T, CE) Hydrosulfuric acid is a weak acid but is used in qualitative tests because of the distinctly colored precipitates of sulfides that it forms with many metallic ions.\n\n105. (T, T, CE) Sodium chloride is an ionic crystal, not a molecule, and its ions are hydrated by the polar water molecules.\n\n106. (F, F) The addition of more H3PO4 causes the equilibrium to shift to the right and increase the concentration of H3O+ ions until equilibrium is restored. Therefore the first statement is false. The second is also false since the equilibrium constant remains the same at a given temperature.\n\n107. (T, T, CE) The statement is true, and the reason is also true and explains the statement.\n\n108. (T, F) The statement is true, but not the reason. In an equilibrium reaction, concentrations can be shown to progress like this:\n\n[graph: the concentrations of the reactants fall and the concentration of the product rises, until all level off]\n\nuntil equilibrium is reached. Then the concentrations stabilize.\n\n109. (F, T) The forward and reverse reactions are occurring at equal rates when equilibrium is reached. The reactions do not stop. The concentrations remain the same at this point.\n\n110. (T, T, CE) Since acetylene is known to be a linear molecule with a triple bond between the two carbons, the sp orbitals along the central axis with the hydrogens bonded on either end fit the experimental evidence.\n\n111. (F, F) The weakest bonds between molecules are van der Waals forces, not coordinate covalent bonds.\n\n112. (T, T, CE) The terms dilute and concentrated merely indicate a relatively large amount of solvent and a small amount of solvent, respectively. You can have a dilute saturated solution if the solute is only slightly soluble.\n\n113. 
(F, T) Cs, not Li, is the most active Group I metal because Cs has (a) the largest atomic radius, thus making it easier to lose its outer energy level electron, and (b) the intermediate electrons help screen the positive attraction of the nucleus, also increasing the ease with which the outer electron is lost.\n\n114. (F, T) The cations are positively charged ions and migrate to the cathode, while the anions are negatively charged and migrate to the anode.\n\n115. (T, T, CE) There are as many electrons as there are protons in a neutral atom, and the atomic number represents the number of each.\n\n116. (F, T) The first two principal energy levels fill up at 2 and 8 electrons, respectively. That leaves 7 electrons to fill the 3s and 3p orbitals like this: 3s², 3p⁵. With only one electron missing in the 3p orbitals, the most likely oxidation number is −1.\n\n24. (D) The solidification of vegetable oil is merely a physical change, like the formation of ice from liquid water at lower temperatures. All the other choices involve actual recombinations of atoms and thus are chemical changes.\n\n25. (C) Water is formed because most common fuels contain hydrogen in their structures.\n\n26. (E) The other choices, in order, represent 10⁰ or 1, 10⁻¹ or deci-, 10⁻² or centi-, and 10² or hecto-.\n\n27. (E) One mole of any substance contains 6.02 × 10²³ molecules. Since each water molecule is triatomic, there would be 3(6.02 × 10²³) atoms present.\n\n28. (E) The noble gases are all monoatomic because of their complete outer energy levels. A rule to help you remember diatomic gases is: Gases ending in -gen or -ine usually form diatomic molecules.\n\n29. (D) By both the VSEPR (valence shell electron pair repulsion) method and the orbital structure method, the PCl3 molecule is trigonal pyramidal.\n\n30. (C) The complete loss and gain of electrons is an ionic bond. All other bonds indicated are “sharing of electrons” type bonds or some form of covalent bonding.\n\n31. 
(D) The cathode reaction releases only H2 gas. This half-reaction is as given in (D).\n\n32. (E) The beta particle is a high-speed electron and has the smallest mass of the first four choices. However, gamma rays are electromagnetic waves. They have no mass.\n\n33. (D) If 25% of the sample now remains, then 100 years ago 50% would be present. If you go back another 100 years, the sample would contain 100% of the radioactive element. Therefore, the sample is 100 + 100 = 200 years old.\n\n34. (D) pH = −log[H3O+] = −log[1 × 10⁻⁴] = −(−4) = 4.\n\n35. (C) Only I and II are true. Distilled water does not significantly conduct an electric current. The polarity of the water molecule is helpful in ionization and in causing substances to go into solution.\n\n36. (A) SO2 is the acid anhydride of H2SO3, or sulfurous acid. H2O + SO2 → H2SO3.\n\n37. (E) Four grams of hydrogen gas at STP represent 2 mol of hydrogen since 2 g is the gram-molecular mass of hydrogen. Each mole of a gas contains 6.02 × 10²³ molecules, so 2 mol contains 2 × 6.02 × 10²³ or 12.04 × 10²³ molecules.\n\n38. (B) To solve percent composition problems, first divide the % given by the atomic mass:\n\nC: 85.7 ÷ 12 = 7.14      H: 14.3 ÷ 1 = 14.3\n\nThen divide by the smallest quotient to get small whole numbers:\n\nC: 7.14 ÷ 7.14 = 1      H: 14.3 ÷ 7.14 = 2\n\nThe empirical formula is CH2. Since the molecular mass is 42 and the empirical formula has a molecular mass of 14, the true formula must be 3 times the empirical formula, or C3H6.\n\n39. (E) Because the temperature (in kelvins) increases from 300 K to 333 K, the volume of the gas should increase with pressure held constant. The correct fraction is 333/300.\n\n40. (A) One mole of dissolved substance (which does not ionize) causes a 1.86°C drop in the freezing point of a 1 molal solution. Since 2,000 g of water were used, the solution has 684 ÷ 342 = 2 mol in 2,000 g of water. Then the molality is 2 mol ÷ 2 kg = 1 molal. The freezing point is depressed 1 × −1.86° = −1.86°C or 271.14 K.\n\n41. (B) The pH is −log[H+]. 
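The pH arithmetic used in answers 34 and 41 is just pH = −log₁₀[H+]; here is a quick numerical sketch (the function name is mine, not from the book):

```python
import math

def ph_from_concentration(hydronium_molar):
    """pH = -log10 of the hydronium-ion concentration in mol/L."""
    return -math.log10(hydronium_molar)

# Answer 34: [H3O+] = 1 x 10^-4 M gives pH 4.
ph_34 = ph_from_concentration(1e-4)

# Answer 41: 0.005 M H2SO4 releases two H+ per molecule,
# so [H+] = 2 x 0.005 = 0.010 M, giving pH 2.
ph_41 = ph_from_concentration(2 * 0.005)
```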
A 0.005 molar solution of H₂SO₄ ionizes in a dilute solution to release two H⁺ ions per molecule of H₂SO₄. Therefore the molar concentration of H⁺ ion is 2 × 0.005 mol/L or 0.010 mol/L. Substituting this in the formula, you have:

pH = −log[0.01] = −log[1 × 10⁻²]

The log of a number is the exponent to which the base 10 is raised to express that number in exponential form:

−log[1 × 10⁻²] = −(−2) = 2

42. (B) If the solution is to be 5% sodium hydroxide, then 5% of 100 g is 5 g. Percent is always by mass unless otherwise specified.

43. (C) Because this equation is exothermic, higher temperatures will decrease the reaction to the right and increase the reaction to the left, so II is true. Also, I is true because with an increase of pressure the reaction will try to relieve that pressure by going in the direction that has the least volume: in this reaction, to the right. Statement III is false because removing product in this reaction would increase the forward reaction. Statements I and II are true.

44. (B) In this neutralization, 1 mol acid = 1/2 mol of base. Since

molarity × volume (L) = moles,

then

MaVa = 1/2 MbVb
(1.0 M)(x L) = 1/2 (1 M)(0.05 L)
x L = 1/2 (0.05 L)
x L = 0.025 L, or 25 mL

45. (E) The reaction is:

CaCO₃(s) + 2HCl(aq) → CaCl₂(aq) + H₂O(l) + CO₂(g)

The gram-molecular mass of calcium carbonate is 100 g. Then 150 g = 150/100 or 1.5 mol of calcium carbonate. According to the equation, 1 mol of CaCO₃ yields 1 mol of carbon dioxide, so 1.5 mol of CaCO₃ yields 1.5 mol of CO₂.

The gram-molecular mass of CO₂ = 44 g:

1.5 mol of CO₂ = 1.5 × 44 g = 66 g CO₂

46. (E) The other choices are wrong because:

(A) is lighter than air
(B) reacts with air
(C) is lighter than air
(D) needs heat to be evolved

47. (B) The Mg needs oxygen to form MgO; so the lid cannot be tightly sealed. Oxygen is needed for the Mg to oxidize to MgO. All other choices are true.

48.
(C) To ensure that all the water vapor collected in the U-tube comes from the reaction, the first drying tube is placed in the path of the hydrogen to absorb any evaporated water.

49. (C) Calcium chloride is deliquescent, and its weight gain of water indicates that water was formed from the reaction.

50. The ratio of mass of water to mass of hydrogen is 18 : 2, or 9 : 1.

51. (D) 1 K = 39, 1 Al = 27, 2(SO₄) = 2(32 + 16 × 4) = 192, and 12H₂O = 12(2 + 16) = 216. This totals 474 g.

52. (D) Using the mole method: 44.8 L of gas at STP = 2 mol. From the mole ratios in the balanced equation, and since the molar mass of Al = 27 g/mol, the required mass of aluminum follows. (The worked setup appeared as figures in the original.)

53. (B) Metal oxides are generally basic anhydrides.

54. (A) Since A is the anode, the oxidation (or loss of electrons) will occur on this pole.

55. (C) "Salt bridge" is the correct terminology.

56. (A) Zn → Zn²⁺ + 2e⁻      E° = +0.76 V
Cu²⁺ + 2e⁻ → Cu      E° = +0.34 V
Total = +1.10 V

(Notice that the Zn is being oxidized and the Cu²⁺ is being reduced.) The voltmeter will read +1.10 V.

57. (D) The balanced equation shows 2 mol of reactant yielding 3 mol of O₂; 3 mol × 22.4 L/mol = 67.2 L.

58. (C) The balanced equation has the coefficients 2, 3, 1, and 6: 2Al(OH)₃ + 3H₂SO₄ → Al₂(SO₄)₃ + 6H₂O.

59. (E) The reaction is NaClO₃(s) → NaCl(s) + 3/2 O₂(g).

ΔH(reaction) = ΔHf(products) − ΔHf(reactants)
ΔH(reaction) = (−98.2 + 0) − (−85.7)
ΔH(reaction) = −12.5 kcal

60. (D) II and III are identical; isotopes differ only in the number of neutrons in the nucleus, and this affects the atomic mass only.

61. (E) I, II, and III will increase the rate of this reaction. Each of them causes the rate of this reaction to increase.

62. (D) This setup depends on water displacement of an insoluble gas.

63. (C) The coefficients give the molar relations, so 2.0 mol of HCl give off 1.0 mol of CO₂. Given 4.0 mol of HCl, you have 2.0 mol of CO₂.

64.
(E) Ksp = [Ba²⁺][SO₄²⁻]. Since [Ba²⁺] is given as 3.9 × 10⁻⁵, and the equation BaSO₄ → Ba²⁺ + SO₄²⁻ shows there will be as many SO₄²⁻ ions as Ba²⁺ ions, both [Ba²⁺] and [SO₄²⁻] will equal 3.9 × 10⁻⁵. So,

Ksp = [3.9 × 10⁻⁵][3.9 × 10⁻⁵]
Ksp = 1.5 × 10⁻⁹

65. (B) The introduction of the "common ion" SO₄²⁻ at 0.1 molar forces the equilibrium to shift to the left and reduce the Ba²⁺ concentration.

66. (A) According to the gas laws, only I will cause an increase in the volume of a confined gas.

67. (D) In 1 mol of Al₂(CO₃)₃, nine oxygens (three carbonates with three oxygen atoms each) are in each molecule, or 9 mol of O atoms are in 1 mol of Al₂(CO₃)₃. Because only 0.50 mol is given, there are (1/2)(9) or 4.5 mol of O atoms. In 4.5 mol of oxygen, there are 4.5 × 6.02 × 10²³ = 27.0 × 10²³ atoms, or 2.7 × 10²⁴ atoms.

68. (D) When HA ionizes, it forms equal amounts of H⁺ and A⁻ ions, but these amounts are very small because the Ka is very small. Ka can be expressed as [H⁺][A⁻]/[HA]. Because you are told to assume [HA] = 1, you have [H⁺][A⁻] = Ka, and since [H⁺] = [A⁻], [H⁺] = √Ka.

69. (C) Percent dissociation = (amount dissociated ÷ original concentration) × 100%.

CALCULATING YOUR SCORE

Your score on Practice Test 2 can now be computed manually. The actual test is scored by machine, but the same method is used to arrive at the raw score. You get one point for each correct answer. For each wrong answer, you lose one-fourth of a point. Questions that you omit or that have more than one answer are not counted. On your answer sheet mark all correct answers with a "C" and all incorrect answers with an "X".

Determining Your Raw Test Score

Total the number of correct answers you have recorded on your answer sheet. It should be the same as the total of all the numbers you place in the block in the lower left corner of each area of the Subject Area summary in the next section.

A. Enter the total number of correct answers here: ________
Now count the number of wrong answers you recorded on your answer sheet.

B.
Enter the total number of wrong answers here: ________
Multiply the number of wrong answers in B by 0.25.

C. Enter that product here: ________
Subtract the result in C from the total number of right answers in A.

D. Enter the result of your subtraction here: ________

E. Round the result in D to the nearest whole number: ________
This is your raw test score.

Conversion of Raw Scores to Scaled Scores

Your raw score is converted by the College Board into a scaled score. The College Board scores range from 200 to 800. This conversion is done to ensure that a score earned on any edition of a particular SAT Subject Test in Chemistry is comparable to the same scaled score earned on any other edition of the same test. Because some editions of the tests may be slightly easier or more difficult than others, scaled scores are adjusted so that they indicate the same level of performance regardless of the edition of the test taken and the ability of the group that takes it. Consequently, a specific raw score on one edition of a particular test will not necessarily translate to the same scaled score on another edition of the same test.

Because the practice tests in this book have no large population of scores with which they can be scaled, scaled scores cannot be determined.

Results from previous SAT Chemistry tests appear to indicate that the conversion of raw scores to scaled scores GENERALLY follows this pattern (conversion table shown in the original):

Note that this scale provides only a general idea of what a raw score may translate into on a scaled score range of 800–200. Scaling on every test is usually slightly different. Some students who had taken the SAT Subject Test in Chemistry after using this book reported that they scored slightly higher on the SAT test than on the practice tests in this book.
They all reported that preparing well for the test paid off in a better score!

DIAGNOSING YOUR NEEDS

After taking Practice Test 2, check your answers against the correct ones. Then fill in the chart below.

In the space under each question number, place a check if you answered that question correctly.

EXAMPLE:

If your answer to question 5 was correct, place a check in the appropriate box.

Next, total the check marks for each section and insert the number in the designated block. Now do the arithmetic indicated and insert your percent for each area.

* The subject areas have been expanded to identify specific areas in the text.

Answer Sheet

PRACTICE TEST 3

Determine the correct answer for each question. Then, using a No. 2 pencil, blacken completely the oval containing the letter of your choice.
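Several of the arithmetic steps in the answer explanations above (answers 41, 44, 56, 59, 64, and 67) can be verified mechanically. A quick Python check (an added aside, not part of the test materials):

```python
import math

pH = -math.log10(2 * 0.005)              # answer 41: 0.005 M H2SO4, 2 H+ per molecule
Va_mL = 0.5 * 1.0 * 0.050 / 1.0 * 1000   # answer 44: MaVa = (1/2)MbVb, Mb = 1 M, Vb = 50 mL
E_cell = 0.76 + 0.34                     # answer 56: Zn oxidation + Cu2+ reduction, volts
dH = (-98.2 + 0) - (-85.7)               # answer 59: ΔHf(products) − ΔHf(reactants), kcal
ksp = 3.9e-5 * 3.9e-5                    # answer 64: [Ba2+] = [SO4 2-] = 3.9e-5 mol/L
mol_O = 0.50 * 9                         # answer 67: 9 mol O atoms per mol Al2(CO3)3
atoms = mol_O * 6.02e23                  # answer 67: number of O atoms

print(pH, Va_mL, E_cell, round(dH, 1), f"{ksp:.2g}", mol_O, f"{atoms:.2g}")
```

Each printed value matches the corresponding answer explanation: pH 2, 25 mL, +1.10 V, −12.5 kcal, 1.5 × 10⁻⁹, 4.5 mol, and about 2.7 × 10²⁴ atoms.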
http://www.imamod.ru/~elizar/english-main-page.html
Tatiana G. ELIZAROVA

Professor
Doctor of Physical and Mathematical Sciences
E-mail: [email protected]

Keldysh Institute of Applied Mathematics
Phone: +7 (499) 978 13 14
Fax: +7 (499) 972-07-37
E-mail: [email protected]

Field of research

• Numerical simulation of gasdynamic and hydrodynamic flows
• Multiprocessor systems
• Quasi-gasdynamic (QGD) and quasi-hydrodynamic (QHD) equations

: More about quasi-gasdynamic and quasi-hydrodynamic equations

: List of publications in chronological order. Many of them are downloadable

T.G. Elizarova

In Russian

Quasi-gasdynamic equations and numerical methods for viscous flow simulation (2007)

Publisher: Научный мир
Phone/Fax: +7 495 291 28 47

T.G. Elizarova

Quasi-Gas Dynamic Equations

Updated English version

Information and order: using the paper form

This monograph is devoted to contemporary mathematical models of gas and liquid dynamics and to the related numerical methods for compressible and incompressible flow simulations.

We consider two related mathematical models that generalize the Navier–Stokes system of equations. Both models differ from the Navier–Stokes system by additional dissipative terms with a small parameter. The new models are named the quasi-gasdynamic and quasi-hydrodynamic systems of equations. Based on these models, we construct new robust algorithms for non-stationary viscous flows and demonstrate numerical examples of flow simulation.

Universality, efficiency, and accuracy of these algorithms are provided by the validity of conservation laws and the entropy balance for the described models.

The book is intended for scientific researchers and engineers engaged in the construction of numerical algorithms and practical computations of gas flows. It will also be useful for graduate and postgraduate students who specialize in numerical gas and liquid dynamics.

T.G. Elizarova, I.A. Shirokov

Regularized equations and examples of their use in the modeling of gas-dynamic flows
https://hhsprings.bitbucket.io/docs/programming/examples/ffmpeg/create_comparison_video/using_pad_overlay_and_blend_3_.html
# Using pad, overlay and blend … (3)

doc

In video comparison, overlapping should often be allowed.

The approach shown here may look more roundabout than the one shown earlier. However, depending on the input, its processing performance is the best compared with the previous one and the one before that.

#! /bin/sh
pref="`basename $0 .sh`"
vleft="EuropeanArchive.mp4"
vright="Eduardo.mp4"
#
fac=${1:-80}
cx=$((16 * ${fac}))
cy=$((9 * ${fac}))
ox=$((1920 - 16 * ${fac}))
oy=$((1080 - 9 * ${fac}))
#
ffmpeg -y -i "${vleft}" -i "${vright}" -filter_complex "
[0:v]scale=${cx}:${cy},setsar=1,split[0v_1][0v_2];
[1:v]scale=${cx}:${cy},setsar=1,split[1v_1][1v_2];

[0v_2]crop=${cx}-${ox}:${cy}-${oy}:${ox}:${oy}[0v_c];
[1v_2]crop=${cx}-${ox}:${cy}-${oy}:0:0[1v_c];
[v_ov][v_c]overlay=x=${ox}:y=${oy}[v];
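The geometry variables in the script are plain arithmetic; a quick Python sketch (not part of the original post) of what they evaluate to for the default `fac=80`, including the size of the overlapping crop region:

```python
# Evaluate the script's geometry for the default fac=80 (i.e. ${1:-80})
fac = 80
cx, cy = 16 * fac, 9 * fac          # size each input is scaled to
ox, oy = 1920 - cx, 1080 - cy       # offset of the second video on a 1920x1080 canvas
crop_w, crop_h = cx - ox, cy - oy   # overlapping region that gets cropped and blended
print(cx, cy, ox, oy, crop_w, crop_h)
```

Note that `crop_w = 32*fac − 1920`, so an overlap only exists for `fac > 60`; the default 80 gives a 640×360 overlap.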
https://dsp.stackexchange.com/questions/26221/positive-or-negative-sign-on-fourier-transform-formula?noredirect=1
# Positive or negative sign on Fourier transform formula [duplicate]

I have seen the formula of the Fourier transform with both a positive and a negative sign on the exponential, as $$X(\omega)=\int_{-\infty}^{\infty} x(t)e^{-j\omega t}dt$$ and $$X(\omega)=\int_{-\infty}^{\infty} x(t)e^{j\omega t}dt$$ I am confused which one is the correct formula. I also solved for the Fourier transform by taking the following example $$x(t)=\begin{cases} 1, \hspace{5mm} \text{for} \hspace{2mm} |t|<1 \\0, \hspace{5mm} \text{for} \hspace{2mm} |t|>1 \end{cases}$$ and got the same result as $$X(\omega)=\begin{cases} 2\frac{\sin\omega}{\omega}, \hspace{5mm} \text{when} \hspace{2mm} \omega \neq 0 \\2, \hspace{13mm} \text{when} \hspace{2mm} \omega = 0\end{cases}$$ Can anyone explain whether both of the formulas for the Fourier transform are correct or not?

The definition with the negative sign in the exponent is the accepted definition of the Fourier transform... however, this is an arbitrary choice. It could just as easily be defined with $e^{j\omega}$ and the inverse transform with $e^{-j\omega}$.

• in fact, while they are negatives of each other, there is no other difference between $-j$ and $+j$. both have equal claim to being the $\sqrt{-1}$. Oct 5 '15 at 0:06
• what will happen if we consider only the positive value of $t$? i.e. to say the range of integration is considered from $0$ to $\infty$. Oct 5 '15 at 1:45
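The two conventions can be compared numerically for the rectangular pulse in the question; because this x(t) is real and even, both sign choices give the same result, 2 sin(ω)/ω. A quick numerical sketch (an addition, not from the original thread):

```python
import numpy as np

# x(t) = 1 for |t| < 1; integrate x(t) e^{±jωt} over its support with the trapezoid rule
t = np.linspace(-1.0, 1.0, 200001)
w = 1.5                                        # any nonzero test frequency

def ft(sign):
    f = np.exp(sign * 1j * w * t)              # sign = -1 or +1 picks the convention
    dt = t[1] - t[0]
    return np.sum((f[:-1] + f[1:]) / 2) * dt   # trapezoid-rule integral

X_minus, X_plus = ft(-1), ft(+1)
analytic = 2 * np.sin(w) / w
print(abs(X_minus - analytic), abs(X_plus - analytic))
```

Both differences come out numerically negligible, illustrating that for real, even signals the sign choice cannot be detected from the transform.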
http://blog.codoplex.com/2018/02/page/2/
### Finding Mode Value From Array C Programming

In previous posts we learned how to find the mean value from an array and the median value from an array. In this post we will learn how to find the mode value from array elements. The mode is the most repeated value among the elements. In the following simple program we will create a function which accepts an array as input and returns the mode value of the array elements.

[js]
#include <stdio.h>
int mode(int arr[], int size); // function prototype
int main(void){
    int arrayName[10] = {2,4,4,4,1,2,3,4,2,3}; // defined an array with 10 elements
    // output each element of array
    printf("Original values of...

### Finding Median Value From Array C Programming

In the previous post we learned how to write a C program to find the mean value of array elements. In this post we will learn how to find the median value of array elements. The median is the centered value of the elements sorted in ascending order. We also learned in our previous post how to sort array elements in ascending order using a bubble sort function. In the following simple program we will create a function which accepts an array as input and returns the median value of the array elements.

[js]
#include <stdio.h>
int median(int arr[], int size); //...

### Finding Mean Value From Array C Programming

In the previous post we discussed how to sort array elements using a bubble sort function. In the following simple program we will create a function which accepts an array as input and returns the mean value of the array elements. The mean is the average of all array elements: it is calculated by summing all values of the array and dividing by the total number of array elements.
[js]
#include <stdio.h>
int mean(int arr[], int size); // function prototype
int main(void){
    int arrayName[10] = {90,44,33,83,49,34,51,84,56,44}; // defined an array with 10 elements
    // output each element of array...

### Sorting Array Elements Using Bubble Sort in C Programming

In the previous post we discussed how to pass an array into a function. In the following simple program we will create a function which accepts an array as input. We will pass an array as input to a function called bubbleSort, which will then sort the elements of the array in ascending order.

[js]
#include <stdio.h>
void bubbleSort(int arr[], int size); // function prototype
int main(void){
    int arrayName[10] = {90,44,33,83,49,34,51,84,56,44}; // defined an array with 10 elements
    // output each element of array
    printf("Original values of the array\n\n");
    printf("Array index\t\t\tValue\n");
    for(int j=0; j<10; j++){
        printf("%11d\t\t\t%4d\n", j, arrayName[j]);
    }
    bubbleSort(arrayName, 10);
    ...

### Passing Arrays into Functions

In the previous two posts we discussed arrays and functions. In the following simple program we will create a function which accepts an array as input. That function will multiply each element of the array by 20, and at the end it outputs the elements of the array. Note that when we pass an array into a function, it is by default a call by reference, which means that any processing done by the called function will change/modify the values of the original array elements.

[js]
#include <stdio.h>
void modifyArray(int arr[], int size); // function prototype
int...

### Arrays in C Programming

An array is a type of variable which can store multiple items of the same data type. In the following simple program we will define an array of 10 elements. After that we will multiply each element of the array by 5, and then we will output the value of each element of the array.
[js]
#include <stdio.h>
int main(void){
    int arrayName[10] = {2,5,3,6,9,34,23,84,56,44}; // defined an array with 10 elements
    // output each element of array
    printf("Original values of the array\n\n");
    printf("Array index\t\t\tValue\n");
    for(int j=0; j<10; j++){
        printf("%11d\t\t\t%4d\n", j, arrayName[j]);
    }
    // multiply each element of array by 5
    for(int...
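The statistics these C programs compute can be sanity-checked against a reference implementation; a short Python check (an added aside, not from the blog) using the same sample arrays:

```python
import statistics

bubble_data = [90, 44, 33, 83, 49, 34, 51, 84, 56, 44]  # array from the mean/median/bubble-sort posts
mode_data = [2, 4, 4, 4, 1, 2, 3, 4, 2, 3]              # array from the mode post

print(statistics.mean(bubble_data))     # average of all elements
print(statistics.median(bubble_data))   # middle of the sorted values (mean of the two middle ones here)
print(statistics.mode(mode_data))       # most frequent value
print(sorted(bubble_data))              # the ascending order bubbleSort should produce
```

Note one subtlety the truncated C listings gloss over: with an even number of elements, the median is the average of the two middle sorted values, which an `int`-returning C function would truncate.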
https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-015-0574-4
# Principal component analysis-based unsupervised feature extraction applied to in silico drug discovery for posttraumatic stress disorder-mediated heart disease

## Abstract

### Background

Feature extraction (FE) is difficult, particularly if there are more features than samples, as small sample numbers often result in biased outcomes or overfitting. Furthermore, multiple sample classes often complicate FE because evaluating performance, as is usual in supervised FE, is generally harder than in the two-class problem. Developing sample-classification-independent unsupervised methods would solve many of these problems.

### Results

Two principal component analysis (PCA)-based FE methods were tested as sample-classification-independent unsupervised FE methods: variational Bayes PCA (VBPCA), extended here to perform unsupervised FE, and conventional PCA (CPCA)-based unsupervised FE. VBPCA- and CPCA-based unsupervised FE both performed well when applied to simulated data, and to a posttraumatic stress disorder (PTSD)-mediated heart disease data set that had multiple categorical class observations in mRNA/microRNA expression of stressed mouse heart. A critical set of PTSD miRNAs/mRNAs was identified that shows aberrant expression between treatment and control samples, and significant, negative correlation with one another. Moreover, greater stability and biological feasibility than conventional supervised FE was also demonstrated. Based on the results obtained, in silico drug discovery was performed as translational validation of the methods.

### Conclusions

Our two proposed unsupervised FE methods (CPCA- and VBPCA-based) worked well on simulated data, and outperformed two conventional supervised FE methods on a real data set.
Thus, these two methods have suggested equivalence for FE on categorical multiclass data sets, with potential translational utility for in silico drug discovery.\n\n## Background\n\nFeature extraction (FE) is an important task in bioinformatic analyses, as there are often more features than samples. The number of bases spanning linear space is at most equivalent to the number of independent vectors. Accordingly, more features than samples inevitably leads to redundancy. Although dimensional reduction is often used to eliminate redundancy, it is far from true redundancy elimination as reconstructed bases are usually the linear combination of all features, which is not always necessary for spanning entire linear space.\n\nInstead of dimensional reduction, FE can be used to eliminate redundancy, and is often performed to maximize performance of targeted tasks (supervised FE), e.g., discrimination between samples or regression analysis, although fewer samples than features often creates difficulties due to overfitting and/or bias. Multiple class samples commonly provide additional problems when supervised FE is used, complicating performance evaluations compared with two-class samples. Although pairwise evaluations (e.g., one versus one or one versus others) are possible, they are time-consuming. FE using a set of pairwise evaluations is often even more difficult, because it is hard to maximize performance simultaneously for all pairwise evaluations with commonly selected features for all pairs. In addition, supervised FE is often heavily sample-dependent, and alternative sample sets often provide alternative optimal FE. 
Moreover, with categorical classes the problems are greater, since frequently used FE with regression analyses, e.g., lasso , cannot be directly applied to categorical multiclass data sets.\n\nIn order to avoid these difficulties, unsupervised FE is useful as it is assumed to be more robust and stable, although this has not been extensively studied because of implementation difficulties, e.g., supervision is not based upon performance, therefore FE has nothing suitable for optimization. Variational Bayesian approaches eliminate redundancy in an unsupervised manner, and have recently been proposed as promising unsupervised FE methods. In this paper, we used variational Bayes (VB) principal component analysis (PCA) [2,3] to perform unsupervised FE. In addition, a simpler and more conventional PCA (CPCA)-based unsupervised FE method (CPCAFE) that worked well with a simulated data set was proposed as an alternative to VBPCA. As VBPCA is more computationally challenging, CPCAFE is an even more promising alternative unsupervised FE candidate than the VB approach.\n\nTo demonstrate applicability to translational research, we used CPCA-based, and partially, VBPCA-based unsupervised FE (VBPCAFE) to examine heart disease associated with posttraumatic stress disorder (PTSD). PTSD is caused by exposure to high-magnitude, life-threatening stressors, i.e., traumatic events. PTSD patients typically develop acute stress responses that include symptoms of arousal, anxiety, sadness, grief, agitation, irritability, and sleep disturbances. Heart disease is associated with PTSD, with a meta-analysis showing association between coronary heart disease (CHD) and PTSD . Moreover, hospitalization due to cardiovascular disease is associated with PTSD caused by the September 11, 2001, World Trade Center disaster . PTSD was also associated with CHD using a prospective twin study design . 
However, the underlying genetic background of the association is not well known.

By applying CPCAFE and VBPCAFE to publicly available mRNA and microRNA (miRNA) expression data from stressed mouse hearts, we identified aberrantly expressed miRNAs and mRNAs. Biological feasibility of the identified miRNAs and mRNAs was determined (by negative correlation between miRNAs and mRNAs, and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment of miRNA target genes), showing suitability of our selections.

Two supervised FE methods, specifically, regression analysis using categorical pseudo variables and backward elimination using the Hilbert-Schmidt norm of the cross-covariance operator (BAHSIC), identified unstable and biologically less feasible sets of mRNAs and miRNAs that were not always negatively correlated, suggesting superiority of unsupervised FE methods.

Among the aberrantly expressed genes identified by CPCAFE, fatty acid binding protein 3 (FABP3) was considered a potential drug target candidate by structural investigations, and inhibitory drugs were sought using the in silico drug discovery tool, chooseLD, a profile-based drug discovery program.

## Results and discussion

### FE methods applied to a simulated categorical multiclass data set

We performed FE using a simulated categorical multiclass data set, to determine the limitations of FE and usefulness of our proposed methods.

The data set consisted of 100 simulated ensembles of 20 samples with 100 features, of which only 10 features were distinct between four classes, and with each class consisting of 5 samples (see Methods). As sample values of each feature were obtained from an identical mixture of four Gaussian distributions, FE is difficult, but discrimination between the four classes varies between three cases (shown in Figure 1; a typical feature boxplot with distinct expression among classes). We named these three cases as easy (s=2), medium (s=1), and hard (s=0.5).
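A data set of this shape can be sketched as follows (a minimal illustration under stated assumptions, not the authors' generation code: the class-dependent offset scheme, noise scale, and random seed are assumptions; only the dimensions — 20 samples, 100 features, 10 informative, 4 classes of 5 — come from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(s, n_features=100, n_distinct=10, n_classes=4, per_class=5):
    """Generate one ensemble: a samples x features matrix plus class labels.
    The first n_distinct features receive a class-dependent mean offset
    proportional to s; the remaining features are pure standard Gaussian
    noise (irrelevant features)."""
    n_samples = n_classes * per_class
    labels = np.repeat(np.arange(n_classes), per_class)
    X = rng.normal(size=(n_samples, n_features))
    offsets = s * np.arange(n_classes)        # e.g. 0, s, 2s, 3s per class
    X[:, :n_distinct] += offsets[labels, None]
    return X, labels

X, y = simulate(s=2.0)   # the "easy" case
print(X.shape, y.shape)  # (20, 100) (20,)
```

Smaller s shrinks the separation between class means, which is why the hard case (s=0.5) is difficult for every method discussed below.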
Furthermore, we did not use order between the four classes, so the data could be treated as a categorical data set.\n\nIn order to demonstrate the limitations of FE, we first tested one vs one t test based FE (see Methods). No significantly different features were detected between the four classes in any of the three cases (Figure 2; confusion tables are available in Additional file 1), demonstrating that FE on a categorical multiclass data set is difficult.\n\nNext, we applied a more sophisticated method, specifically, categorical regression based FE (using significance based on adjusted P-values, see Methods). The number of correctly extracted features improved, with Matthews correlation coefficients of 0.95, 0.35, and 0.05, and F measures of 0.98, 0.15, and 0.005 for easy, medium, and hard cases, respectively (Figure 2; confusion tables are available in Additional file 1). However, for the hard case, both values are almost zero.\n\nIn order to improve performance, we selected the top 10 ranked features with the smallest P-values, independent of significance (using P-value ranks, see Methods). This resulted in Matthews correlation coefficients of 0.97, 0.61, and 0.22, and F measures of 0.97, 0.65, and 0.30 for easy, medium, and hard cases, respectively (Figure 2; confusion tables are available in Additional file 1). Thus, FE accuracy is reasonable, but selecting features using non-significant P-values is not ideal, highlighting the problem of using FE on a categorical multiclass data set.\n\nIn order to overcome this, we next tested BAHSIC (see Methods), a recently proposed non-parametric FE method that has been used on high-dimensional microarray data sets. The obtained Matthews correlation coefficients were 0.84, 0.49, and 0.18, and F measures were 0.86, 0.54, and 0.27 for easy, medium, and hard cases, respectively (Figure 2; confusion tables are available in Additional file 1). 
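The Matthews correlation coefficients and F measures quoted throughout this section can be computed from the feature-extraction confusion table (counting the 10 informative features as positives among the 100 features); a minimal sketch:

```python
import math

def mcc_and_f(tp, fp, fn, tn):
    """Matthews correlation coefficient and F measure from a 2x2
    confusion table of extracted vs. truly informative features."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return mcc, f

# perfect extraction of the 10 informative features out of 100:
print(mcc_and_f(tp=10, fp=0, fn=0, tn=90))  # (1.0, 1.0)
```

Both measures approach 1 only when the extracted set matches the informative set, which is why they are reported together for each case.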
Although performance is relatively improved (without using highly ranked but non-significant features), BAHSIC does not use P-values, and again the results are not completely satisfactory, especially for the easy case: a Matthews coefficient of 0.84 is less than the 0.95 achieved by categorical regression based FE.\n\nTherefore, we next considered VBPCAFE. In order to perform FE with VBPCA, original VBPCA was extended to incorporate feature dependence (see Methods). Prior distribution in this extension has feature-dependent parameters, and therefore automatic elimination of irrelevant features was expected. The $$C_{B}^{i1}$$ histogram reflects the prior distribution parameter of the first principal component (PC) (Figure 3), with smaller values assumed to be irrelevant features. Originally, C B was introduced to eliminate irrelevant PCs, and therefore has only q (PC) and not i (feature) dependence. However, we needed to eliminate irrelevant features as well; therefore, in our study C B must also have i dependence. Although we did not use sample labelling information, as expected, irrelevant features had smaller $$C_{B}^{i1}$$. Averaged A j1 (contribution of the jth sample to the first PC) over 100 ensembles (Figure 4), demonstrates that A j1 represents the distinction between the four classes. Thus, features not coincident with A j1 (and with smaller $$C_{B}^{i1}$$) were eliminated, producing Matthews correlation coefficients of 0.91, 0.39, and 0.04, and F measures of 0.92, 0.46, and 0.14 for easy, medium, and hard cases, respectively (Figure 2; confusion tables are available in Additional file 1).\n\nAlthough performance has been successfully improved for the easy case, VBPCAFE is computationally challenging and is a potential drawback to its use. Iteration involves updates proportional to the number of features, which can often be as many as several tens of thousands. 
Thus, a more computationally effective method is of value, and a relatively straightforward idea is to replace $$C_{B}^{iq}$$ with $$B_{iq}$$, the qth PC score of the ith feature, because a larger $$B_{iq}$$ is expected to result in a relevant (thus larger) $$C_{B}^{iq}$$ (for example, see eq. (1) in Methods). Indeed, replacing $$C_{B}^{i1}$$ with $$B_{i1}$$, we obtained Matthews correlation coefficients of 0.88, 0.34, and 0.02, and F measures of 0.89, 0.41, and 0.12 for easy, medium, and hard cases, respectively (Figure 2; confusion tables are available in Additional file 1). These values are comparable with those using $$C_{B}^{i1}$$. Considering that CPCAFE requires minimal computational resources, CPCAFE is a promising candidate for applying to real data that often consists of more than several thousand features.

In summary, there are at least four methods that achieve comparable FE performance on a categorical multiclass data set: categorical regression based FE (using ranked P-values), BAHSIC, VBPCAFE, and CPCAFE. Each has its own advantages and disadvantages: although categorical regression based FE achieved the best overall performance, it used features with non-significant adjusted P-values. BAHSIC does not have this problem, but as it does not use P-values, its performance on the easy case was the poorest of the tested methods. VBPCAFE and CPCAFE were more effective on the easy case, but their performance was poor for the hard case. VBPCAFE is computationally challenging, while CPCAFE is not.

However, these conclusions may be viewed as slightly subjective. The methods recommended for the hard case (i.e., categorical regression based FE and BAHSIC) have strict label dependency. Harder class discrimination often means label uncertainty (or incorrectness). Of course, there are many objective labels (e.g., gender or age), but those related to experimental conditions often include subjective criteria or insufficient controls for experimental conditions.
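As a sketch, CPCAFE amounts to ranking features by the magnitude of their first PC score and keeping the top outliers. The snippet below is a minimal illustration under stated assumptions (plain SVD on centered data, a toy signal, and a toy selection size), not the paper's implementation:

```python
import numpy as np

def cpcafe(X, n_select=10):
    """Rank features by |B_i1|, the first principal-component score of
    each feature, and return the indices of the top n_select outliers.
    X: samples x features matrix."""
    Xc = X - X.mean(axis=0)                 # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    b1 = Vt[0]                              # PC1 score of each feature
    return np.argsort(-np.abs(b1))[:n_select]

# toy check: features 0-9 carry a strong shared signal, the rest are noise
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 100))
signal = np.repeat([0.0, 4.0], 10)[:, None]   # 10 low / 10 high samples
X[:, :10] += signal
selected = cpcafe(X, n_select=10)
print(sorted(int(i) for i in selected))
```

No sample labels enter the computation, which is the source of the robustness to mislabeling discussed below.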
Hard class discrimination often originates from these subjective labels, and consequently, robustness to mislabeling is desired, especially for hard cases.

In order to test robustness of the four methods towards partial mislabeling, we determined performance after mislabeling (Table 1). Figure 5 shows performance degradation caused by little, medium, and heavy partial mislabeling, where little, medium, and heavy were defined by the distance between true and wrong labels. The number of samples with wrong labels included only a proportion of the data, and never more than 20%. Although only 20% of samples were mislabeled, performance degradation with BAHSIC and categorical regression based FE was considerable for medium and heavy mislabeling. Conversely, VBPCAFE and CPCAFE performance was not affected by mislabeling, as these two methods do not require labeling information. This indicates that the two methods recommended for hard cases are not particularly robust, even against partial mislabeling, and therefore not always recommended.

In conclusion, CPCAFE or VBPCAFE are the best methods for FE from categorical multiclass data sets, as they maintain robustness and show relatively stable and good performance for most cases.

### Translational use of CPCAFE

In order to determine the effectiveness of CPCAFE in a real application, we examined PTSD associated heart disease. Our data set consisted of multiclass categorical samples, including several experimental conditions with no pre-defined rank orders (see Methods), and is suitable for estimating CPCAFE performance. Two-dimensional embedding of miRNA profiles is shown (Figure 6). Two probes (corresponding to mmu-miR-302c and -370) with extremely large values were deemed to be erroneous signals and excluded prior to embedding. Since each miRNA was attributed to multiple probes, removing these two probes did not mean exclusion of these miRNAs from the analysis.
In addition, probes with relatively large PC2 projections were excluded and the 100 top-ranked outliers along PC1 selected. This strategy ensured selection of PC1 enhanced probes, as outliers along both PC1 and PC2 were not expected to be PC1 specific (PCs other than PC1 or PC2 were excluded, with our rationale discussed in Additional file 2). CPCAFE assumes that if a set of probes are outliers along a PC, they behave in a group oriented manner. If not, they are not outliers and are instead located on the origin, where the majority of probes without biological feasibility are presumed to be located. This criterion requires no user input and probe biological relevance (e.g., distinct expression between treatment and control samples) is unknown. Using these assumptions, we further investigated a selected 100 probes, representing 27 unique miRNAs, specifically, mmu-miR-451, -22, -133b, -709, -126-3p, -30c, -29a, -143, -24, -23b, -133a, -378, -30b, -29b, -125b-5p, -675-5p, -16, -26a, -30e, -1983, -691, -23a, -690, -207, and -669l, and mmu-let-7b and -7g. We investigated differential expression of these miRNAs between treatment and control samples using various statistical tests (e.g., t-test, Wilcoxon rank sum test, and Kolmogorov-Smirnov test), but obtained no significant results. However, owing to the small number of samples (four samples for treatment and control conditions), the significance criterion (i.e., P<0.05) was not satisfied by any miRNA examined, including the 27 selected miRNAs. Therefore, we compared the selected and other miRNAs as two separate groups. 
Boxplots of logarithmic P-values obtained by t tests between miRNAs in treatment and control samples are shown (Figure 7).

The 27 selected miRNAs have larger or smaller P-values than the other miRNAs, indicating that CPCAFE successfully selected differentially expressed miRNAs between treatment and control samples (justification for using logarithmic P-values is provided in Additional file 2).

Although the selected miRNAs were expressed distinctly from the other miRNAs, this does not demonstrate biological relevance. As our aim was to investigate the underlying transcriptomic background of PTSD-mediated heart disease, preliminary validation of our approach can be provided by prior involvement of the selected miRNAs with heart disease. To address this, we performed literature searches (Table 2). Of the 27 selected miRNAs, 23 (excluding miR-1983, -691, -690, and -207) were previously reported to be related to heart disease, suggesting our miRNA selection is biologically useful.

To further confirm biological suitability of the identified miRNAs, we examined KEGG pathway enrichment using miRNA target genes (see Methods). Associations between enriched KEGG pathways and heart disease are summarized (Table 3). Next, we examined the 21 top-ranked pathways (the complete list is available in Additional file 3), with 17 found to be related to heart disease. Overall, these findings validate both our selection of miRNAs and the utility of CPCAFE.

Moreover, our results suggest that the aberrantly expressed miRNAs are likely involved in PTSD-mediated heart disease. Hence, further investigation of the miRNA target genes may clarify the underlying molecular biology and transcriptomic background of PTSD-mediated heart disease.

In this regard, we applied PCA to mRNA expression. Two-dimensional embedding of mRNA expression is shown (Figure 8).
To determine correlation between miRNA and mRNA expression, we compared their expression profiles and determined the contribution of each sample to PC1 (Figure 9(a)), showing negative correlation between PC1s. Additionally, to determine if the observed correlation is coincident with experimental conditions, scatterplots of averaged values within each condition were examined (Figure 9(b)). This strengthened the correlations but did not change significance, indicating that the observed negative correlation between mRNA and miRNA expression is reliable. Next, we selected mRNAs using CPCAFE. To select PC1 enhanced probes, the 100 top-ranked outliers along PC1 were selected, excluding probes with relatively large projections to PC2. In total, 59 unique mRNAs were identified (RefSeq mRNA IDs are provided in Additional file 4). To confirm negative correlation between miRNAs and miRNA target mRNAs, correlation coefficients were determined (see Additional file 4). There were no targets common to the selected 59 mRNAs and TarBase, employed by DNA intelligent Analysis (DIANA)-mirpath and used in KEGG pathway analysis (see Methods), possibly because of TarBase’s experiment-oriented, thus context-dependent, nature (i.e., TarBase does not include PTSD). Thus, instead we used seed matching to identify miRNA target genes, with so called 7mer-m8 detecting exact matches to positions 2-8 of mature miRNAs (seed + position 8). Among the 59 mRNAs, 24 were targeted by at least one of the 27 selected miRNAs. In addition, 47 pairs of miRNAs and miRNA target genes were identified. In total, there were 45/47 negative correlation coefficients between miRNAs and miRNA target genes. 
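The 7mer-m8 seed matching used above can be sketched as a string search for the reverse complement of miRNA positions 2-8 in the target mRNA sequence. The sequences below are illustrative placeholders, not actual mouse sequences:

```python
def revcomp(seq):
    """Reverse complement of an RNA sequence (5'->3' in, 5'->3' out)."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def has_7mer_m8_site(mirna, utr):
    """True if the mRNA sequence contains an exact match to the reverse
    complement of miRNA positions 2-8 (seed + position 8): a 7mer-m8 site."""
    site = revcomp(mirna[1:8])   # positions 2-8, 0-based slice
    return site in utr

# toy example: a hypothetical miRNA and a UTR built to contain its site
mirna = "UGGAAUGUAAAGAAGUAUGUAU"
print(has_7mer_m8_site(mirna, "AAAA" + revcomp(mirna[1:8]) + "AAAA"))  # True
```

Applying such a predicate over all (miRNA, mRNA) pairs yields the candidate target pairs whose correlation coefficients were then examined.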
We also examined correlation coefficient significance (see Methods), with 26/47 pairs (more than half) associated with significant correlations (two positive correlations were judged insignificant), and confirming negative correlation between miRNAs and miRNA target genes.\n\nIn order to determine if the selected 59 mRNAs were differentially expressed between control and treated samples, we examined the logarithmic ratio. Using t tests, the averaged logarithmic ratio of the 59 samples was significantly negative or positive compared with that averaged over other mRNAs, excluding only one condition (see Table 4). Thus, our results show that the 59 selected mRNAs are mostly distinctly expressed between control and treated samples.\n\nAlthough the selected mRNAs include many targeted and negatively regulated by miRNAs, and are differentially expressed between control and treated samples, published associations between the selected mRNAs and heart disease would provide direct biological relevance. Association between the identified genes and heart disease are summarized (Table 5). In total, 9 genes were both targeted by at least one of the 27 miRNAs and related to heart disease. In addition, 17 genes were related to heart disease but not targeted by any of the 27 miRNAs (see Additional file 5). Among the genes not associated with heart disease, 15 genes were targeted by at least one of the 27 miRNAs, and only 18 genes were not targeted by any. Because approximately half (26 in total) of the selected 59 genes are related to heart disease, our gene selection is feasible from a biological point of view.\n\nFinally, we compared the selected 59 genes with KEGG pathways. KEGG pathway analysis had been performed for the miRNAs, but as they are not the only mRNA regulatory mechanism, mRNA expression may be, at least partially, distinct from miRNA expression. We found many genes were concentrated to specific KEGG pathways (see Additional file 6). 
For example, Uqcrh, Uqcrq, Cox6a2, Cox7a1, Cox5a, Cox4i1, Cox8b, Cox6c, Myl2, Myl3, Myh6, Myh8, Tpm1, Tnni3, and Tnnt2 belong to the KEGG pathway “Cardiac muscle contraction” (mmu04260), and most are also components of cytochrome c oxidase, involved in mitochondrial proton transfer. Thus, it is biologically reasonable that heart disease is associated with aberrant expression of these genes. Mybpc3, Tnni3, and Tnnt2 belong to the KEGG pathway “Hypertrophic cardiomyopathy” (mmu05410), also coincident with a role in heart disease. Atp5g1, Atp5b, Atp5g3, Atp5h, Atp5a1, Atp5e, Atp5j2, Ndufa13, and Ndufs6 belong to “Oxidative phosphorylation” (mmu00190 + 11951). This KEGG pathway is an essential part of mitochondrial energy metabolism, and malfunction is likely to be directly related to muscle functionality. Mdh2 and Aco2 belong to the “Citrate cycle” (mmu00020), and are involved in energy production, while Fabp3 belongs to the “peroxisome proliferator-activated receptor (PPAR) signaling pathway” (mmu03320), involved in muscle function. Thus, based on these KEGG pathway functions, the genes identified by CPCAFE are related to heart disease.

In order to further confirm the validity of our methodology, i.e., integrated analysis of mRNA and miRNA expression, we also performed KEGG pathway analysis with the Database for Annotation, Visualization and Integrated Discovery (DAVID), restricted to the 24 genes targeted by at least one of the 27 miRNAs (see Methods). DAVID identified five KEGG pathways associated with significant P-values adjusted by the Benjamini and Hochberg (BH) criterion (Table 6). Two out of five were related to cardiac diseases (“Cardiac muscle contraction” and “Oxidative phosphorylation”) and three were related to neurodegenerative diseases (“Parkinson’s disease”, “Alzheimer’s disease”, and “Huntington’s disease”). Thus, these are highly coincident with the pathways implicated in PTSD-mediated heart disease.
Figure 10 summarizes the analyses performed in the above.\n\nOur findings suggest that PTSD-mediated heart disease may be caused by malfunction in “Cardiac muscle contraction” and/or “Oxidative phosphorylation” of energy metabolism. These malfunctions are related to aberrant gene expression likely mediated by aberrant miRNA expression. From a therapeutic point of view, PTSD-mediated heart disease may be treated based upon this knowledge. To perform in silico drug discovery for these genes, the 26 genes associated with heart disease were investigated. First, tertiary structures were predicted (see Methods), and found to be similar to predicted or experimentally determined tertiary structures available in the Protein Data Bank (PDB), indicating our tertiary structures are reliable (see Additional file 7).\n\nAmong the proteins with tertiary structures predicted or available in PDB, FABP3 has a “pocket” to which inhibitors can bind, and was therefore selected as a candidate drug target for in silico drug discovery. FABP3 is an acid binding protein and its function can be blocked by inhibition of acid binding. Moreover, FABP3 is upregulated in patients with ventricular-septal defects in comparison to normal controls . FABP3 is also a member of FABPs that play critical roles in the PPAR signaling pathway identified above. Thus, it is likely that FABP3 is a key protein in PTSD-mediated heart disease.\n\nThe 10 top-ranked drug candidate compounds obtained by in silico drug discovery (see Methods) are listed (Table 7). The complete list of compounds ranked by FPAScores is available in Additional file 8. The compounds include promising drug candidates, for example, two heat shock protein 90 (HSP90) inhibitors are upregulated in dilated cardiomyopathy , and two PPAR inhibitors are regarded as pharmacological therapeutic targets . Furthermore, overexpression of two Cyclin-dependent kinase 2 (CDK2) inhibitors results in smaller mononuclear cardiomyocytes . 
All of these candidates should be investigated further.

### Stability and comparison with other methods

We have shown biological feasibility of CPCAFE when applied to PTSD. However, it is also important to compare its performance against other methods, as CPCAFE superiority is doubtful if any other method can achieve comparable performance. Hence, we used a modified data set to ensure performance is due to the method and not the data set. If slight modifications of the sample data set drastically decrease performance, CPCAFE superiority is doubtful. In addition, VBPCAFE performance on the PTSD data set is unknown. If VBPCAFE derives completely different outcomes from CPCAFE, proposing CPCAFE as an alternative to VBPCAFE (with a firmer theoretical base, albeit a more time-consuming method) would be less convincing.

First, we examined equivalence between CPCAFE and VBPCAFE. VBPCAFE is too time-consuming to be directly applied to the whole PTSD data set, therefore we created a data set small enough to be used directly that consisted of 200 features, with 100 features selected by CPCAFE and 100 distinct features. If VBPCAFE and CPCAFE are equivalent, the 100 features selected by CPCAFE should have larger $$C^{i1}_{B}$$s than those attributed to the other 100 features (see Methods). Feature frequencies selected from the 100 top-ranked probes with larger $$C^{i1}_{B}$$ values after investigation of 100 independent ensembles are shown (Figure 11). Scatterplots between $$C_{B}^{i1}$$ and $$B_{i1}$$ are also shown (Figure 12). As expected (from eq. (1) in Methods), $$C_{B}^{i1}$$ is quadratically dependent upon $$B_{i1}$$, definitely demonstrating equivalence between CPCAFE and VBPCAFE when applied to a real data set. Of course, there is still the possibility that VBPCAFE applied to the whole PTSD data set would select a completely different set of features from the 100 features selected by CPCAFE.
However, we believe it is unlikely as there are almost no features not selected by CPCAFE, within the 100 top-ranked in any of the 100 ensembles. Thus, CPCAFE and VBPCAFE show equivalence when applied to not only simulated data, but also a real data set.\n\nNext, we examined CPCAFE stability, i.e., sample independence, by performing a 4-fold cross-validation study (see Methods). We specifically used a 4-fold cross-validation as there are only four biological replicates in each experimental condition, therefore using any other cross-validation would not be straightforward. The results from 100 independent ensembles are shown (see Additional file 9). For mRNAs, 78 probes were always selected by CPCAFE and only 10 probes not always selected, demonstrating high sample independence. Among the 78 probes always selected, 53 had associated RefSeq IDs that were part of the 59 RefSeq IDs selected by applying CPCAFE to the whole data set. For miRNAs, 27 probes were always selected and no probe not always selected (see Additional file 9). Among the 27 selected mature miRNAs, 25 were part of the 27 mature miRNAs selected by CPCAFE previously. Thus, CPCAFE selected almost all features 100% of the time, demonstrating stability and suggesting that performance is not likely due to the selected data set. As CPCAFE and VBPCAFE show potential equivalence, it is likely that VBPCAFE also shares this stability.\n\nIt is possible that CPCAFE outperforms FE with categorical regression analysis because it is too simple (naive) an alternative. Therefore to compare CPCAFE performance with a more sophisticated method, we used BAHSIC (see Methods).\n\nFor mRNAs, among the 100 selected probes, only 37 had RefSeq IDs, again less than CPCAFE (see Additional file 4). Of these 37 mRNAs, only 16 were associated with heart failure-related disease (Table 8), even when the target species was extended to human. 
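The stability analysis above (counting, over 100 cross-validation ensembles, how often each probe is selected when one fold is held out) can be sketched generically. This is a minimal sketch under stated assumptions: the fold-splitting and the trivial toy selector are illustrative, not the paper's procedure:

```python
import numpy as np
from collections import Counter

def selection_frequency(X, select, n_ensembles=100, n_folds=4, seed=0):
    """Count how often each feature index is returned by `select` when one
    fold of samples is left out, over repeated random fold assignments.
    `select(X_sub)` is any (unsupervised) feature extractor that returns
    an iterable of feature indices."""
    rng = np.random.default_rng(seed)
    counts = Counter()
    n = X.shape[0]
    for _ in range(n_ensembles):
        perm = rng.permutation(n)
        for fold in np.array_split(perm, n_folds):
            keep = np.setdiff1d(np.arange(n), fold)
            counts.update(int(i) for i in select(X[keep]))
    return counts

# toy check with a selector that always picks feature 0
X = np.zeros((20, 5))
freq = selection_frequency(X, select=lambda Xs: [0], n_ensembles=5)
print(freq[0])  # 20 = 5 ensembles x 4 left-out folds
```

A feature selected in every leave-one-fold-out subset (as 78 mRNA probes and all 27 miRNA probes were) attains the maximum possible count, which is the stability criterion used here.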
Thus, 43% of extracted RefSeq associated mRNAs are related to heart disease in the Gendoo server. This is similar to the 45% reported for genes selected by CPCAFE. Considering that the total number of selected RefSeq associated mRNAs is larger, this suggests that CPCAFE outperforms BAHSIC as well as categorical regression based FE. Interestingly, 15/37 mRNAs (not always including the 16 disease associated genes) were also selected by CPCAFE, despite use of distinct algorithms by BAHSIC and CPCAFE. Nine genes selected by CPCAFE (in addition to the six genes listed in Table 8 or Additional file 5), but with missing disease association, were Synrg, Milr1, Exosc2, Medag, 2610028H24Rik, Pbld1, Hbb-bt, 1700047I17Rik2, and Hba-a2. Thus, mRNAs extracted by CPCAFE and BAHSIC overlap significantly. Furthermore, if we also consider PC2 outliers (see Additional file 2) that were excluded in the previous analyses, an additional 15 genes (Rhox8, Kctd14, Ttc38, Glrx2, Ndor1, Dcc, Elk1, Gcm1, Hccs, Nppa, Actc1, Pigr, Slc10a1, Cxcl11, and Med16) have been identified by CPCAFE (the six genes shown in bold italic are also associated with heart failure-related disease, see Additional file 2). Consequently, there is a substantial increase in overlap significance between mRNAs extracted by CPCAFE and BAHSIC. Figure 13 shows the relationship between CPCAFE, BAHSIC, and heart failure-related disease association. The many genes identified by both methods and associated with disease show that our method (i.e., treating expression data as a categorical multiclass data set and not a collection of pairwise comparisons) can identify biologically relevant genes.

For miRNAs, among the 100 probes, 47 unique mature miRNAs were selected (see Additional file 4), a much larger number than CPCAFE (27). The 47 miRNAs were uploaded to DIANA-mirpath (see Additional file 4), and 65 KEGG pathways identified as significant.
Although this number is similar to that obtained by CPCAFE (66), considering that the number of uploaded miRNAs was almost twice (47 vs 27), CPCAFE is still superior to BAHSIC. In fact, the averaged number of miRNAs with target mRNAs enriched in each pathway, was approximately seven, similar to that obtained by CPCAFE despite two-times more uploaded miRNAs. We also compared biological significance of the KEGG pathways obtained by BAHSIC and CPCAFE. In contrast to 17 reported pathways related to heart-failure related disease (from the top most significant 21 KEGG pathways) using CPCAFE (Table 3), only 11 identified by BASHSIC (of which, seven were also identified by CPCAFE; Table 3) were related to heart-failure related disease by literature searching (Table 9). Thus, from a biological point of view, CPCAFE outperforms BAHSIC. We also determined if targeted mRNAs negatively correlate with miRNA expression, and identified 164 mRNA-miRNA pairs, although only 73 were negatively correlated (see Additional file 4). Conversely, with CPCAFE, almost all pairs are negatively correlated. Correlation coefficient significance was examined, with only 33 significantly correlated pairs identified. Since all significant correlation coefficients were negative, this shows that BAHSIC cannot correctly extract features, and is less able to confidently screen more features than CPCAFE. Again this coincides with the limited number of miRNAs selected by BAHSIC that contribute to KEGG pathway analysis.\n\nBAHSIC stability was compared (see Additional file 9). For mRNAs, only one probe was selected only 14 times among the 100 independent ensembles, and was also the most frequently selected. The second most frequent four probes were selected only 13 times, and 2133 probes were selected only once. For miRNAs, 68 probes were always selected among 100 cross-validation ensembles (see Additional file 9). The second most frequent probes were selected 99 times, but this only included three probes. 
The third, fourth, and fifth most frequently selected probes were chosen 98, 97, and 96 times, but included only 2, 3, and 1 probes, respectively. Thus, regarding stability, CPCAFE definitely outperforms BAHSIC.

In conclusion, CPCAFE outperformed the two conventional supervised FE methods on both stability and biological feasibility, and performed equally well as VBPCAFE, which has more theoretical confidence. Thus, CPCAFE and VBPCAFE are the most suitable FE methods for categorical multiclass problems.

We have included diagrams summarizing the discussed points (Figure 14). Figure 14(a) shows the number of unique miRNAs. The same 100 probes were extracted using all methods, with miRNAs assigned to multiple probes. Because all probes to which the same miRNA is assigned should be extracted at the same time, a smaller number of unique miRNAs assigned to the 100 probes indicates better extraction. In this context, CPCAFE outperformed the other two methods (a Venn diagram is also available in Figure 15). Figure 14(b) shows the number of mRNAs with RefSeq IDs. As mRNAs with RefSeq IDs are more likely to have known biological significance, a larger number of RefSeq ID-accompanied mRNAs indicates selection of more biologically relevant mRNAs. In this context, CPCAFE outperformed the other two methods. Figure 14(c) shows the number of RefSeq mRNA genes related to heart failure-related disease; larger numbers suggest that FE has correctly extracted mRNAs related to the target disease, and here CPCAFE again outperformed the other two methods. Figure 14(d) shows the number of KEGG pathways identified by uploading the miRNAs shown in Figure 14(a) to DIANA-mirpath. Although categorical regression-based FE outperformed the other two methods here, it may have incorrectly extracted the most unique miRNAs in Figure 14(a); therefore, an increased number of KEGG pathways does not always reflect method superiority.
Indeed, counting the average number of miRNAs contributing to each pathway (Figure 14(e)) indicates that CPCAFE and BAHSIC are comparable, while categorical regression-based FE performs worse. This suggests that, if the same number of unique miRNAs were extracted, categorical regression-based FE could not identify as many KEGG pathways as CPCAFE or BAHSIC (although this remains an empirical observation). Thus, it is reasonable to conclude that CPCAFE and BAHSIC outperformed categorical regression-based FE in this context. Figure 14(f) shows the possible number of mRNA/miRNA pairs, i.e., the product of the number of unique miRNAs (Figure 14(a)) and the number of RefSeq ID-accompanied mRNAs (Figure 14(b)). Although these numbers are comparable among the three FE methods, the numbers of miRNA and targeted mRNA pairs are distinct (Figure 14(g)). Considering the ratio (Figure 14(h)), obtained by dividing the numbers in Figure 14(g) by those in Figure 14(f), CPCAFE appears worst (i.e., smallest), but this is reversed once significance is considered. Figure 14(i) shows the number of miRNA and targeted mRNA pairs with significant correlations. Converting these numbers to a ratio (Figure 14(j)) by dividing by the number of miRNA and targeted mRNA pairs (Figure 14(g)), CPCAFE outperformed the other two methods. The same result is obtained when considering negative correlations (Figures 14(k) and (l)), which are desirable because miRNAs suppress target mRNA expression. Overall, these results (Figures 14(g)-(l)) suggest that CPCAFE extracted significantly more miRNA and targeted mRNA pairs than the other two methods, which had only apparently extracted more. In conclusion, based on these summarized discussions, CPCAFE outperforms the other two FE methods.

## Conclusions

We have extended VBPCA for FE. Although VBPCAFE is an effective method when applied to simulated data, its feature-dependent extension inevitably makes it computationally challenging.
Thus, we replaced VBPCA with the simpler CPCA, and achieved reasonable FE performance on simulated data. In order to demonstrate CPCAFE effectiveness on real data, we performed an integrated analysis of mRNA and miRNA expression from stressed mouse heart, investigating the underlying molecular biology and transcriptomic background of PTSD-mediated heart disease. CPCAFE successfully identified aberrantly expressed miRNAs and their negatively regulated target mRNAs, and the biological significance of the identified miRNAs and mRNAs was confirmed. Equivalence between CPCAFE and VBPCAFE was demonstrated by applying both methods to the PTSD data set. Two conventional supervised FE methods were also tested, with CPCAFE outperforming them both. In silico drug discovery was performed for a selected gene, FABP3, with the top-ranked compounds including protein inhibitors reportedly upregulated in heart disease.

## Methods

### Simulations based on synthetic data of categorical multiclass samples

To simulate multiclass samples with N features and M samples, expression values were modeled using Gaussian distributions with mean $$\mu_{k}$$ (k indexing the class). Standard deviations were fixed at 0.5. $$\mu_{k}$$ was

$$\mu_{k} = \left (\frac{k-1}{K-1} - \frac{1}{2} \right) s, \quad k=1,\ldots,K,$$

with K indicating the number of classes. Considering $$x_{ij}$$ to be the expression of the ith feature of the jth sample, it obeys:

$$x_{ij} \in \left \{ \begin{array}{lcl} {\cal N}\left(\mu_{k},\frac{1}{2}\right), &1 \leq i \leq \frac{N'}{2}, & (k-1)\frac{M}{K} < j \leq k\frac{M}{K} \\ {\cal N}\left(-\mu_{k},\frac{1}{2}\right), &\frac{N'}{2} < i \leq N', & (k-1)\frac{M}{K} < j \leq k\frac{M}{K} \\ {\cal N}\left(\epsilon, \frac{1}{2}\right), & N' <i \leq N & \end{array} \right.,$$

where $${\cal N}(\mu,\sigma)$$ is the normal distribution with mean μ and standard deviation σ.
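As an illustration, the sampling scheme above can be sketched in Python (the paper's own code is in R; parameter names mirror the text, and for the noise features the mean ε is drawn once per feature, uniformly from the K class means, per the text's P(ε=μ_k)=1/K rule):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(N=100, N_prime=10, K=4, M=20, s=1.0, sd=0.5):
    """Synthetic categorical multiclass data: rows are features, columns
    are samples.  The first N_prime features are up/downregulated between
    the K classes; the remaining features are class-independent noise."""
    # class means mu_k = ((k-1)/(K-1) - 1/2) * s for k = 1..K
    mu = np.array([((k - 1) / (K - 1) - 0.5) * s for k in range(1, K + 1)])
    X = np.empty((N, M))
    per_class = M // K
    for k in range(K):                       # samples of the (k+1)-th class
        cols = slice(k * per_class, (k + 1) * per_class)
        # upregulated half of the distinct features
        X[: N_prime // 2, cols] = rng.normal(mu[k], sd, (N_prime // 2, per_class))
        # downregulated half (opposite sign of mu_k)
        X[N_prime // 2 : N_prime, cols] = rng.normal(
            -mu[k], sd, (N_prime - N_prime // 2, per_class))
    for i in range(N_prime, N):              # noise features: eps uniform over class means
        eps = rng.choice(mu)
        X[i] = rng.normal(eps, sd, M)
    return X

X = simulate()
```

With the defaults this produces a 100 x 20 matrix whose first 10 rows carry the class signal, matching the parameter values used in the present study.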
The reason for including positive and negative $$\mu_{k}$$ signs was to simulate the coexistence of up- and downregulated genes among multiple classes. The parameter s represents the difficulty of distinguishing among multiple classes: larger (smaller) s indicates easier (harder) samples for fixed K. ε is a random number satisfying the probability, P:

$$P(\epsilon =\mu_{k}) = \frac{1}{K}.$$

In the present study, N=100, N′=10, K=4, and M=20 were used, and performances were averaged over 100 independent ensembles.

### One vs one t test based FE of a simulated categorical multiclass data set

P-values, $$P^{k,k'}_{i}$$, attributed to each i were computed using t tests comparing $$\left \{ x_{ij}; (k-1)\frac {M}{K} < j \leq k\frac {M}{K}\right \}$$ and $$\left \{ x_{ij}; (k'-1)\frac {M}{K} < j \leq k'\frac {M}{K}\right \}$$, for all K(K−1)/2 pairs of k and k′. $$P^{k,k'}_{i}$$ was adjusted by the BH criterion , with the adjustment applied over each (k,k′) pair and each ith gene. Features with $$P^{k,k'}_{i}$$ <0.05 for all (k,k′) pairs were identified as distinctly expressed between multiple classes.

### Categorical regression-based FE

Categorical regression-based FE was defined as follows. $$x_{ij}$$ reflects the “expression” of the ith feature of the jth sample; therefore, $$x_{ij}$$ can be represented as:

$$x_{ij} = C_{i0} + \sum_{k} C_{ik} \delta_{jk},$$

where $$\delta_{jk}$$ = 1 only when the jth sample belongs to the kth category and is otherwise 0, the summation is taken over categories, and the $$C_{ik}$$s are fitting parameters. Because the independent variables are categorical, the above regression equation belongs to a category of equations often called categorical regression. For each ith feature, P-values were computed using the lm function implemented in R (this can be easily performed if factors corresponding to experimental setups are used as independent variables in lm).

Categorical regression-based FE was applied to the simulated categorical multiclass data set in two ways.
The first selected features with BH-criterion-adjusted P-values <0.05. The second selected the top 10 features with the smallest P-values. Finally, for the PTSD data set, the 100 top-ranked features with the smallest P-values were selected as extracted features.

### BAHSIC

BAHSIC uses the Hilbert-Schmidt norm of the cross-covariance operator (HSIC), defined as follows:

$$\begin{array}{@{}rcl@{}} ||C_{jk}||^{2}_{HS} &\equiv& \left \langle \left (\sum_{i} x_{ij}x_{ij^{\prime}} \right) \delta_{jk}\delta_{j^{\prime}k^{\prime}}\delta_{kk^{\prime}} \right \rangle_{jj^{\prime}} \\ & + & \left \langle \sum_{i} x_{ij}x_{ij^{\prime}} \right \rangle_{jj^{\prime}} \left \langle \delta_{jk}\delta_{j^{\prime}k^{\prime}}\delta_{kk^{\prime}} \right \rangle_{jj^{\prime}} \\ & - & 2 \left \langle \left \langle \sum_{i} x_{ij}x_{ij^{\prime\prime}} \right \rangle_{j^{\prime\prime}} \left \langle \delta_{j^{\prime}k}\delta_{j^{\prime\prime\prime}k^{\prime}}\delta_{kk^{\prime}} \right \rangle_{j^{\prime\prime\prime}} \right \rangle_{jj^{\prime}}, \end{array}$$

where $$\langle \cdot \rangle_{j}$$ and $$\langle \cdot \rangle_{jj^{\prime}}$$ denote averages over all j and over all (j,j′) pairs, respectively, and $$\delta_{kk^{\prime}}$$ is Kronecker's delta. A linear kernel was employed because it was shown to achieve the best performance when applied to microarray gene expression . In BAHSIC, features with smaller HSIC are iteratively discarded until the desired number of features remains.
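This backward-elimination loop can be sketched as follows (a simplified Python illustration using a standard biased linear-kernel HSIC estimate per feature; the paper's actual R implementation is in Additional file 2):

```python
import numpy as np

def hsic_linear(x, y):
    """Biased HSIC estimate between one feature vector x (length M)
    and class labels y: linear kernel for x, delta kernel for labels."""
    m = len(x)
    K = np.outer(x, x)                             # linear kernel on the feature
    L = (y[:, None] == y[None, :]).astype(float)   # label (delta) kernel
    H = np.eye(m) - np.ones((m, m)) / m            # centering matrix
    return np.trace(K @ H @ L @ H) / (m - 1) ** 2

def bahsic(X, y, n_keep, drop_per_step=1):
    """Iteratively discard the features with smallest HSIC until
    n_keep remain.  X is features x samples, y the class labels."""
    keep = list(range(X.shape[0]))
    while len(keep) > n_keep:
        scores = [hsic_linear(X[i], y) for i in keep]
        order = np.argsort(scores)                 # ascending: smallest HSIC first
        n_drop = min(max(drop_per_step, 1), len(keep) - n_keep)
        drop = {keep[j] for j in order[:n_drop]}
        keep = [i for i in keep if i not in drop]
    return keep
```

Passing a larger `drop_per_step` (e.g., 10% of the remaining features) mimics the faster elimination schedule used for the larger data set.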
The number of features discarded at each step was 1 for the simulated categorical multiclass data set, and 10% of remaining features for the PTSD data set.

R code for this algorithm is available in Additional file 2.

### Extended VBPCA

Before extending VBPCA, conventional VBPCA is briefly explained.

$$X=\{x_{ij}\}$$ was modeled [2,3] as:

$$X = B A^{T} + E,$$

where B and A are N×Q and M×Q matrices, respectively, and E is an N×M matrix obeying a Gaussian distribution with zero mean and standard deviation $$\sigma_{E}$$. The purpose of VBPCA is to obtain an optimal approximation using $$Q \ll N, M$$.

Thus, following conventional notation and after substituting X=V, the conditional probability is:

$$p(V|A,B) \propto \exp \left (-\frac{1}{{\sigma_{E}^{2}}} || V- BA^{T}||_{\text{Fro}}^{2} \right),$$

where $$||\cdot||_{\text{Fro}}$$ is the matrix Frobenius norm. In order to obtain optimal Q values, prior distributions were assumed:

$$\begin{array}{@{}rcl@{}} P(A) & \propto & \exp \left[ - \frac{1}{2} \text{tr} \left(A C_{A}^{-1} A^{T} \right)\right ]\\ P(B) & \propto & \exp \left [ - \frac{1}{2} \text{tr} \left (B C_{B}^{-1} B^{T} \right) \right], \end{array}$$

where $$C_{A}$$ and $$C_{B}$$ are Q×Q positive diagonal matrices whose qth (q=1,…,Q) diagonal elements express the importance of the obtained qth principal component. The free energy to be minimized is:

$$\begin{array}{@{}rcl@{}} F &=& \frac{||V||_{\text{Fro}}^{2}}{2{\sigma_{E}^{2}}} + \frac{NM}{2} \log {\sigma_{E}^{2}} + \frac{M}{2} \log \frac{|C_{A}|}{|\Sigma_{A}|} + \frac{N}{2} \log \frac{|C_{B}|}{|\Sigma_{B}|} \\ & + & \frac{1}{2} \text{tr} \left \{ C_{A}^{-1} \left (\hat{A}^{T}\hat{A} + M \Sigma_{A} \right) + C_{B}^{-1} \left (\hat{B}^{T}\hat{B} + N \Sigma_{B} \right) \right. \\ & + & \left.\sigma_{E}^{-2} \left (- 2 \hat{A}^{T} V^{T} \hat{B} + \left(\hat{A}^{T}\hat{A} + M \Sigma_{A} \right)\left(\hat{B}^{T}\hat{B} + N \Sigma_{B} \right) \right) \right \} \\ & + & \text{const.}, \end{array}$$

where $$\Sigma_{A}$$ and $$\Sigma_{B}$$ are the variance matrices of $$\hat{A}$$ and $$\hat{B}$$, estimated from A and B by the variational method. Locally optimal $$\hat{A}, \hat{B}, \Sigma_{A}, \Sigma_{B}, C_{B}$$, and $$\sigma_{E}$$ were obtained by performing the following iterative updates in this order:

$$\begin{array}{@{}rcl@{}} \Sigma_{A} &\leftarrow& {\sigma_{E}^{2}} \left(\hat{B}^{T} \hat{B} + N \Sigma_{B} + {\sigma_{E}^{2}} C_{A}^{-1} \right)^{-1} \\ \hat{A} & \leftarrow & V^{T} \hat{B} \frac{\Sigma_{A}}{{\sigma_{E}^{2}}} \\ \Sigma_{B} & \leftarrow & {\sigma_{E}^{2}} \left (\hat{A}^{T} \hat{A} + M \Sigma_{A} + {\sigma_{E}^{2}} C_{B}^{-1} \right)^{-1} \\ \hat{B} & \leftarrow & V \hat{A} \frac{\Sigma_{B}}{{\sigma_{E}^{2}}} \\ {C_{B}^{q}} & \leftarrow & \frac{||\hat{B}_{q}||^{2}}{N} + (\Sigma_{B})_{qq}, \quad q=1,\ldots,Q \\ {\sigma_{E}^{2}} & \leftarrow & \frac{1}{NM} \left \{ ||V||_{\text{Fro}}^{2} - \text{tr} \left(2V^{T}\hat{B}\hat{A}^{T} \right) + \text{tr} \left (\left(\hat{A}^{T} \hat{A} + M \Sigma_{A} \right) \left(\hat{B}^{T}\hat{B} + N \Sigma_{B} \right)\right)\right\} , \end{array}$$

where $$\hat{B}_{q}$$ is the qth column vector of matrix $$\hat{B}$$, and $${C_{B}^{q}}$$ is the qth diagonal element of $$C_{B}$$. The above iteration should be repeated from top to bottom, i.e., the $$\Sigma_{A}$$ obtained on the left hand side of the first equation is substituted into the right hand side of the following equations. $$C_{A}$$ should be fixed to I in order to simulate conventional PCA . It is also known that $$C_{B}$$ can be used to estimate the optimal number of PCs .

Next, we extended VBPCA so that it can extract features.
In order to do this, $$C_{B}$$ was assumed to have i (i=1,…,N) dependence, denoted as $${C_{B}^{i}}$$. Thus, P(B) should also have i dependence as:

$$P(B_{iq}) \propto \exp \left\{- \frac{(B_{iq})^{2}}{2C_{B}^{iq}} \right\}.$$

Furthermore, it is required that $$\Sigma_{B}$$ has i dependence, denoted as $${\Sigma_{B}^{i}}$$. For example, in the above equations, $$N C_{B}$$ and $$N \Sigma_{B}$$ are replaced with $$\sum_{i} {C_{B}^{i}}$$ and $$\sum_{i} {\Sigma_{B}^{i}}$$, respectively.

Then $$\frac {N}{2} \log \frac {|C_{B}|}{|\Sigma _{B}|}$$ in F is replaced with $$\frac {1}{2} \sum _{i} \log \frac {|{C_{B}^{i}}|}{|{\Sigma ^{i}_{B}}|}$$; $$\text{tr} \left \{ C_{B}^{-1} \hat{B}^{T}\hat{B} \right \}$$ in F is replaced with $$\sum_{i}\hat{B}_{i}^{T} \left ({C_{B}^{i}} \right)^{-1} \hat{B}_{i}$$, where $$\hat{B}_{i}$$ is the ith row vector of $$\hat{B}$$; and $$C_{B}^{-1} N \Sigma_{B}$$ and $$N \Sigma_{B}$$ in F are replaced with $$\sum_{i} \left ({C_{B}^{i}} \right)^{-1} {\Sigma_{B}^{i}}$$ and $$\sum_{i} {\Sigma_{B}^{i}}$$, respectively. As a result:

$$\begin{aligned} F &= \frac{||V||_{\text{Fro}}^{2}}{2{\sigma_{E}^{2}}} + \frac{NM}{2} \log {\sigma_{E}^{2}} + \frac{M}{2} \log \frac{|C_{A}|}{|\Sigma_{A}|} + \frac{1}{2} \sum_{i} \log \frac{|{C_{B}^{i}}|}{|{\Sigma^{i}_{B}}|} \\ & + \frac{1}{2} \sum_{i} \hat{B}_{i}^{T} \left ({C_{B}^{i}} \right)^{-1} \hat{B}_{i} \\ & + \frac{1}{2} \text{tr} \left \{ C_{A}^{-1} \left (\hat{A}^{T}\hat{A} + M \Sigma_{A} \right) + \sum_{i} \left ({C_{B}^{i}} \right)^{-1} {\Sigma_{B}^{i}} \right. \\ &+ \left.\sigma_{E}^{-2} \left (- 2 \hat{A}^{T} V^{T} \hat{B} + \left(\hat{A}^{T}\hat{A} + M \Sigma_{A} \right)\left(\hat{B}^{T}\hat{B} + \sum_{i} {\Sigma_{B}^{i}} \right) \right) \right \} \\ &+ \text{const.} \end{aligned}$$

is obtained. Reflecting these extensions, the above iteration rules are modified as:

$$\begin{array}{@{}rcl@{}} \Sigma_{A} &\leftarrow& {\sigma_{E}^{2}} \left(\hat{B}^{T} \hat{B} + \sum_{i} {\Sigma_{B}^{i}} + {\sigma_{E}^{2}} C_{A}^{-1} \right)^{-1} \\ \hat{A} & \leftarrow & V^{T} \hat{B} \frac{\Sigma_{A}}{{\sigma_{E}^{2}}} \\ {\Sigma_{B}^{i}} & \leftarrow & {\sigma_{E}^{2}} \left (\hat{A}^{T} \hat{A} + M \Sigma_{A} + {\sigma_{E}^{2}} \left({C_{B}^{i}}\right)^{-1} \right)^{-1}, \quad i=1,\ldots,N \\ \hat{B}_{i} & \leftarrow & V_{i} \hat{A} \frac{{\Sigma_{B}^{i}}}{{\sigma_{E}^{2}}}, \quad i=1,\ldots,N\\ \tilde{C}_{A}^{q} & \leftarrow & \frac{||\hat{A}_{q}||^{2}}{M}+ (\Sigma_{A})_{qq}, \quad q=1,\ldots,Q \\ {C_{A}^{q}} & \leftarrow& \frac{\tilde{C}_{A}^{q}}{\sum_{q'} \tilde{C}_{A}^{q'}}, \quad q=1, \ldots,Q \\ C_{B}^{iq} & \leftarrow& (B_{iq})^{2}+ \left({\Sigma_{B}^{i}}\right)_{qq}, \quad q=1,\ldots,Q, \ i=1,\ldots,N \\ {\sigma_{E}^{2}} & \leftarrow & \frac{1}{NM} \left \{ ||V||_{\text{Fro}}^{2} - \text{tr} \left(2V^{T}\hat{B}\hat{A}^{T} \right) + \text{tr} \left (\left(\hat{A}^{T} \hat{A} + M \Sigma_{A} \right) \left(\hat{B}^{T}\hat{B} + \sum_{i} {\Sigma_{B}^{i}} \right)\right)\right\} , \end{array}$$
(1)

where $$V_{i}$$ is the ith row vector of V, and $$\hat{A}_{q}$$ is the qth column vector of matrix $$\hat{A}$$. Because of this extension, divergence was suppressed by normalizing $$C_{A}$$ instead of keeping it constant.
Therefore, $${C_{B}^{i}}$$ expresses the importance of the ith feature, while $$C_{A}$$ (instead of the former $$C_{B}$$) represents the importance of the qth PC and must be non-constant. The order of the updates in eq. (1) is arbitrary, since each update is independent of the others.

In the present study, the above iterations began by substituting pre-computed PCs (using the prcomp function in R) into A and B, to compensate for slow convergence. After a suitable number of iterations, the parameters with the smallest F values were used for further analysis. Convergence was judged by whether the extracted features changed over more than 100 iterations.

R code for this algorithm is available in Additional file 2.

### Statistical analysis of CPCAFE and VBPCAFE applied to simulated data

For VBPCAFE, the top-ranked features with the largest $$C_{B}^{i1}$$ values were extracted after convergence, judged by changes in the extracted features over more than 100 iterations. For CPCAFE [22-29], the expression data ($$\{x_{ij}\}$$) were embedded into a low dimensional space using PCA. After selecting the PC for FE, outliers along the PC were extracted, i.e., the top 10 features with the largest absolute projections onto the selected PC.

### Evaluation of FE performance

In order to evaluate FE performance on the simulated categorical multiclass data set, two measures suited to discrimination of binary classes with unequal member numbers were used. In this computation, true positive (TP) is the number of features with distinct expression between classes ($$i \leq N'$$) that are identified by the specified FE. Conversely, true negative (TN) is the number of features with no distinct expression between classes ($$i > N'$$) that are not identified by the specified FE. False positive (FP) is the number of features with no distinct expression between classes ($$i > N'$$) that are nevertheless identified by the specified FE.
False negative (FN) is the number of features with distinct expression between classes ($$i \leq N'$$) that are not identified by the specified FE. Accordingly, the Matthews correlation coefficient is defined as:

$$\frac{TP\cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}},$$

while the F measure is:

$$\frac{2 TP}{(TP+FP)+(TP+FN)},$$

where TP+FP corresponds to the number of features identified by the specified FE, and TP+FN is the number of features with distinct expression between classes ($$i \leq N'$$); the F measure thus represents the harmonic mean of sensitivity $$\left (\frac {TP}{TP+FN}\right)$$ and precision $$\left (\frac {TP}{TP+FP}\right)$$.

### Translational application to PTSD associated heart disease

The overall work flow is illustrated in Figure 16.

#### mRNA and miRNA expression

mRNA and miRNA expression data were obtained from the Gene Expression Omnibus (GEO) under GEO ID GSE52875 . Expression was observed in stressed mouse hearts. The length of time subservient mice were exposed to aggressor mice varied, with variable rest times after exposure. Exposure and rest times were 1, 2, 3, or 10 days of exposure followed by 1 day of rest; 5 days of exposure followed by 1 or 10 days of rest; and 10 days of exposure followed by 42 days of rest (in total, seven distinct treatment conditions). Controls were housed separately from the aggressor mice in individual cages, and were prepared for all treatment conditions except 1 or 3 days of exposure with 1 day of rest; thus, there were only five control conditions. For each condition there were four biological replicates, and therefore 48 samples in total were available.

mRNA expression was included in the subseries GSE52866 and GSE52871. GSE52866_RAW.tar and GSE52871_RAW.tar (provided as Supplementary Data in GEO) were downloaded, and the gProcessedSignal in each of the 48 GSM files was used for analysis. miRNA expression was included in the subseries GSE52869 and GSE52872.
From GSE52872, eight raw Exiqon files (provided as Supplementary Data in GEO) were downloaded, and gProcessedSignal and rProcessedSignal were used as miRNA profiles for further analysis. From GSE52869, 16 files (provided as Supplementary Data in GEO) were downloaded, and “Spot Mean Intensity Cyanine3” and “Spot Mean Intensity Cyanine5H” were used as miRNA profiles for further analysis. The conditions attributed to each sample were provided by the file names. mRNA and miRNA expression were not normalized, except for CPCAFE, where mRNA expression was normalized to zero mean and a standard deviation of 1.

#### KEGG pathway analysis of miRNAs using DIANA-mirpath

For CPCAFE, the 27 identified miRNAs were uploaded to the DIANA-mirpath server . Although DIANA-mirpath requires the mature miRNA names used in miRBase (rel. 18), the mature miRNA names in GEO ID GSE52875 follow a previous release; therefore, the uploaded mature miRNAs are not exactly the same as the 27 identified miRNAs. Target gene prediction was performed using Tarbase , a database of experimentally validated targets, which are more biologically plausible. Combined target gene sets were used for KEGG pathway enrichment analysis, and false discovery rate corrected P-values were used to screen KEGG pathways. Direct weblinks to the DIANA-mirpath results are provided in Additional file 10. The same analyses were performed for categorical regression-based FE and BAHSIC, apart from the number of uploaded miRNAs.

#### Disease association analysis of genes using the Gendoo server

The Gendoo server , a literature-based disease association database, was used to identify associations between selected mRNAs and disease. As mRNAs were identified using RefSeq mRNA IDs, these were converted to gene symbols. The obtained gene symbols were uploaded to the Gendoo server with mouse as the specified target species.
Human was also tested for categorical regression-based FE and BAHSIC, to compensate for the small number of hits.

#### KEGG pathway analysis of mRNAs by DAVID

The 24 genes employed were mRNAs having non-zero values in the “target” column of the “CPCA based” sheet in Additional file 4. RefSeq IDs for these 24 genes were identified and uploaded to the DAVID server using default settings, and the identified KEGG pathways were extracted.

#### Tertiary protein structure prediction

Tertiary protein structures were predicted using two prediction servers, Protein Homology/analogY Recognition Engine V2.0 (phyre2) and full automatic modeling systems (FAMS) . Among the 26 genes investigated, 14 were included in the PDB ; tertiary protein structures of the remaining genes were predicted by either phyre2 or FAMS. The complete list of template proteins and the amino acid sequences of the investigated genes in FASTA format are available in Additional file 11.

#### In silico drug discovery for FABP3

ChooseLD was used for FABP3 drug discovery. The PDB structure of chain A in 1HMR was used as the template protein, with four ligands (9-OCTADECENOIC ACID in 1HMR, OLEIC ACID in 1HMS, STEARIC ACID in 1HMT, and PALMITIC ACID in 2HMB) as template ligands (acids not provided in 1HMR were mapped to 1HMR prior to chooseLD execution), together with four additional FABP3 inhibitors from ChEMBL as template ligands (see Additional file 2 for more detail); drug candidate compounds were then selected from DrugBank . ChooseLD was applied to 1450 compounds with a Tanimoto index >0.20 relative to at least one of the eight template ligands. The 1450 compounds were selected from the 6510 compounds with tertiary structures computed by Babel software , from among the 6583 compounds listed in DrugBank .
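The Tanimoto pre-filter used in this selection can be sketched as follows (a generic Python illustration over fingerprints represented as sets of on-bit indices; fingerprint generation itself, performed by external chemoinformatics tools, is not shown):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto index between two fingerprints given as sets of
    on-bit indices: |A intersect B| / |A union B|."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def prefilter(candidates, templates, threshold=0.20):
    """Keep candidate compounds whose fingerprint has a Tanimoto index
    above threshold with at least one template ligand fingerprint."""
    return [name for name, fp in candidates.items()
            if any(tanimoto(fp, t) > threshold for t in templates)]
```

Applied to the full compound library against the eight template-ligand fingerprints, such a filter reduces the candidate set before the more expensive docking step.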
The 1450 compounds were ranked based on Finger Print Alignment Scores (FPAScores) averaged over three independent runs.

### Generation of a test PTSD data set

In order to test equivalence between CPCAFE and VBPCAFE on the PTSD data, a test data set was generated. In addition to the 100 probes selected by CPCAFE, an additional 100 probes were chosen from those remaining, generating a test data set with 200 features. VBPCAFE was applied to the generated data set, and $$C_{B}^{i1}$$ values were calculated. The top-ranked 200 features with the largest $$C_{B}^{i1}$$ values were selected as extracted features. FE was performed over 100 independent ensembles, and the frequency (the number of times each feature was selected) was counted; a frequency of 100 indicates that the feature was always selected.

### FE cross-validation

In order to test FE stability on the PTSD data set, a 4-fold cross-validation was performed, with each experiment repeated four times. For each experimental setup, three out of four replicates were randomly selected, generating a data set with 36 samples. Since the total possible number of independent samplings was $$4^{12} \simeq 2 \times 10^{7}$$, it is impossible to test all samplings. Therefore, 100 samplings were tested, which was large enough to demonstrate the superior stability of CPCAFE compared with the two conventional supervised methods, categorical regression-based FE and BAHSIC.

### Significance of correlation coefficients

Correlation coefficient significance was investigated by transforming the correlation coefficient (r) to the statistical test variable (t):

$$t = \frac{r \sqrt{M-2}}{\sqrt{1-r^{2}}},$$

which obeys the t distribution with M−2 degrees of freedom. As there were 48 samples in our study, M = 48. Computed P-values were adjusted by the BH criterion .
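These two steps, the r-to-t transformation and the BH adjustment used throughout the paper, can be sketched in plain Python (an illustration only; the actual analyses are in R, where p.adjust provides the same adjustment, and the P-value itself comes from the t distribution with M−2 degrees of freedom, omitted here):

```python
import math

def r_to_t(r, m):
    """Transform a correlation coefficient r computed from m samples
    into the t statistic with m - 2 degrees of freedom."""
    return r * math.sqrt(m - 2) / math.sqrt(1.0 - r * r)

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted P-values: p_(i) * n / i with a
    running minimum taken from the largest P-value downwards."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices, ascending p
    adj = [0.0] * n
    prev = 1.0
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * n / rank)
        adj[i] = prev
    return adj
```

With M = 48, a correlation of r = 0.5 maps to t ≈ 3.92; pairs whose BH-adjusted P-values fall below 0.05 are then called significant, as described in the text.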
Correlation coefficients with adjusted P-values <0.05 were regarded as significant.

## Abbreviations

BAHSIC:

Backward elimination using Hilbert-Schmidt norm of the cross covariance operator

BH:

Benjamini and Hochberg

CDK2:

Cyclin-dependent kinase 2

CPCA:

Conventional PCA

CHD:

Coronary heart disease

CPCAFE:

Conventional PCA-based unsupervised FE

DAVID:

Database for annotation, visualization and integrated discovery

DIANA:

DNA Intelligent Analysis

FABP3:

Fatty acid binding protein 3

FAMS:

Full automatic modeling systems

FE:

Feature extraction

FP:

False positive

FPAScores:

Finger print alignment scores

FN:

False negative

GEO:

Gene expression omnibus

HSP90:

Heat shock protein 90

KEGG:

Kyoto encyclopedia of genes and genomes

miRNA:

microRNA

PC:

Principal component

PCA:

Principal component analysis

PDB:

Protein data bank

phyre2:

Protein homology/analogy recognition engine V2.0

PPAR:

Peroxisome proliferator-activated receptor

PTSD:

Posttraumatic stress disorder

TP:

True positive

TN:

True negative

VB:

Variational Bayes

VBPCAFE:

VBPCA-based unsupervised FE

## References

1. Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc Ser B (Methodological). 1996; 58(1):267–88.

2. Bishop CM. Variational principal components. In: Proceedings of International Conference on Artificial Neural Networks. Heidelberg: Springer: 1999. p. 509–14.

3. Lim YJ, Teh TW. Variational bayesian approach to movie rating prediction. In: Proceedings of KDD Cup and Workshop: 2007. http://www.cs.uic.edu/~liub/KDD-cup-2007/proceedings/variational-Lim.pdf.

4. Yehuda R. Post-traumatic stress disorder. N Engl J Med. 2002; 346(2):108–14.

5. Edmondson D, Kronish IM, Shaffer JA, Falzon L, Burg MM.
Posttraumatic stress disorder and risk for coronary heart disease: a meta-analytic review. Am Heart J. 2013; 166(5):806–14.

6. Jordan HT, Stellman SD, Morabia A, Miller-Archie SA, Alper H, Laskaris Z, et al. Cardiovascular disease hospitalizations in relation to exposure to the September 11, 2001 World Trade Center disaster and posttraumatic stress disorder. J Am Heart Assoc. 2013; 2(5):000431.

7. Vaccarino V, Goldberg J, Rooks C, Shah AJ, Veledar E, Faber TL, et al. Post-traumatic stress disorder and incidence of coronary heart disease: a twin study. J Am Coll Cardiol. 2013; 62(11):970–8.

8. Cho JH, Lee I, Hammamieh R, Wang K, Baxter D, Scherler K, et al. Molecular evidence of stress-induced acute heart injury in a mouse model simulating posttraumatic stress disorder. Proc Natl Acad Sci USA. 2014; 111(8):3188–93.

9. Kanehisa M, Goto S, Sato Y, Kawashima M, Furumichi M, Tanabe M. Data, information, knowledge and principle: back to metabolism in KEGG. Nucleic Acids Res. 2014; 42(Database issue):199–205.

10. Song L, Smola A, Gretton A, Bedo J, Borgwardt K. Feature selection via dependence maximization. J Machine Learning Res. 2012; 13(1):1393–434.

11. Vlachos IS, Kostoulas N, Vergoulis T, Georgakilas G, Reczko M, Maragkakis M, et al. DIANA miRPath v.2.0: investigating the combinatorial effect of microRNAs in pathways. Nucleic Acids Res. 2012; 40(Web Server issue):498–504.

12. Lee HC, Chen CY, Au LC. Systemic comparison of repression activity for miRNA and siRNA associated with different types of target sequences. Biochem Biophys Res Commun. 2011; 411(2):393–6.

13. Huang DAW, Sherman BT, Lempicki RA. Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nat Protoc. 2009; 4(1):44–57.

14. Rosca MG, Vazquez EJ, Kerner J, Parland W, Chandler MP, Stanley W, et al. Cardiac mitochondria in heart failure: decrease in respirasomes and oxidative phosphorylation.
Cardiovasc Res. 2008; 80(1):30–9.

15. Zhang H, Zhou L, Yang R, Sheng Y, Sun W, Kong X, et al. Identification of differentially expressed genes in human heart with ventricular septal defect using suppression subtractive hybridization. Biochem Biophys Res Commun. 2006; 342(1):135–44.

16. Kapustian LL, Vigontina OA, Rozhko OT, Ryabenko DV, Michowski W, Lesniak W, et al. Hsp90 and its co-chaperone, Sgt1, as autoantigens in dilated cardiomyopathy. Heart Vessels. 2013; 28(1):114–9.

17. Finck BN. The PPAR regulatory system in cardiac physiology and disease. Cardiovasc Res. 2007; 73(2):269–77.

18. Liao HS, Kang PM, Nagashima H, Yamasaki N, Usheva A, Ding B, et al. Cardiac-specific overexpression of cyclin-dependent kinase 2 increases smaller mononuclear cardiomyocytes. Circ Res. 2001; 88(4):443–50.

19. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J Royal Stat Soc. 1995; B57(1):289–300.

20. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2014. http://www.R-project.org/.

21. Tipping ME, Bishop CM. Probabilistic principal component analysis. J R Stat Soc: Ser B (Stat Methodology). 1999; 61(3):611–22.

22. Kinoshita R, Iwadate M, Umeyama H, Taguchi YH. Genes associated with genotype-specific DNA methylation in squamous cell carcinoma as candidate drug targets. BMC Syst Biol. 2014; 8 Suppl 1:4.

23. Taguchi YH, Murakami Y. Principal component analysis based feature extraction approach to identify circulating microRNA biomarkers. PLoS ONE. 2013; 8(6):66714.

24. Ishida S, Umeyama H, Iwadate M, Taguchi YH. Bioinformatic Screening of Autoimmune Disease Genes and Protein Structure Prediction with FAMS for Drug Discovery. Protein Pept Lett. 2014; 21(8):828–39.

25.
25\n\nMurakami Y, Toyoda H, Tanahashi T, Tanaka J, Kumada T, Yoshioka Y, et al.Comprehensive miRNA expression analysis in peripheral blood can diagnose liver disease. PLoS ONE. 2012; 7(10):48366.\n\n26. 26\n\nTaguchi Y-H, Okamoto A. Principal component analysis for bacterial proteomic analysis In: Shibuya T, Kashima H, Sese J, Ahmad S, editors. Pattern Recognition in Bioinformatics, Lecture Notes in Computer Science. Heidelberg: Springer: 2012. p. 141–52.\n\n27. 27\n\nUmeyama H, Iwadate M, Taguchi Y-H. TINAGL1 and B3GALNT1 are potential therapy target genes to suppress metastasis in non-small cell lung cancer. BMC Genomics. 2014; 15(Suppl 9):S2.\n\n28. 28\n\nTaguchi Y-H. Integrative analysis of gene expression and promoter methylation during reprogramming of a non-small-cell lung cancer cell line using principal component analysis-based unsupervised feature extraction In: Huang D-S, Han K, Gromiha M, editors. Intelligent Computing in Bioinformatics. Lecture Notes in Computer Science. Heidelberg: Springer: 2014. p. 445–55.\n\n29. 29\n\nTaguchi Y-H, Murakami Y. Universal disease biomarker: Can a fixed set of blood micrornas diagnose multiple diseases?. BMC Reserch Notes. 2014; 7:581.\n\n30. 30\n\nVergoulis T, Vlachos IS, Alexiou P, Georgakilas G, Maragkakis M, Reczko M, et al.TarBase 6.0: capturing the exponential growth of miRNA targets with experimental support. Nucleic Acids Res. 2012; 40(Database issue):222–9.\n\n31. 31\n\nNakazato T, Bono H, Matsuda H, Takagi T. Gendoo: functional profiling of gene and disease features using MeSH vocabulary. Nucleic Acids Res. 2009; 37(Web Server issue):166–9.\n\n32. 32\n\nKelley LA, Sternberg MJ. Protein structure prediction on the Web: a case study using the Phyre server. Nat Protoc. 2009; 4(3):363–71.\n\n33. 33\n\nUmeyama H, Iwadate M. FAMS and FAMSBASE for protein structure. Curr Protoc Bioinf. 2004; Chapter 5:Unit5.2.\n\n34. 34\n\nBerman HM, Westbrook J, Feng Z, Gilliland G, Bhat TN, Weissig H, et al.The Protein Data Bank. 
Nucleic Acids Res. 2000; 28(1):235–42.\n\n35. 35\n\nTakaya D, Takeda-Shitaka M, Terashi G, Kanou K, Iwadate M, Umeyama H. Bioinformatics based Ligand-Docking and in-silico screening. Chem Pharm Bull. 2008; 56(5):742–4.\n\n36. 36\n\nGaulton A, Bellis LJ, Bento AP, Chambers J, Davies M, Hersey A, et al.ChEMBL: a large-scale bioactivity database for drug discovery. Nucleic Acids Res. 2012; 40(Database issue):1100–7.\n\n37. 37\n\nLaw V, Knox C, Djoumbou Y, Jewison T, Guo AC, Liu Y, et al.DrugBank 4.0: shedding new light on drug metabolism. Nucleic Acids Res. 2014; 42(Database issue):1091–7.\n\n38. 38\n\nO’Boyle NM, Banck M, James CA, Morley C, Vandermeersch T, Hutchison GR. Open Babel: An open chemical toolbox. J Cheminform. 2011; 3:33.\n\n39. 39\n\nWang X, Zhu H, Zhang X, Liu Y, Chen J, Medvedovic M, et al.Loss of the miR-144/451 cluster impairs ischaemic preconditioning-mediated cardioprotection by targeting Rac-1. Cardiovasc Res. 2012; 94(2):379–90.\n\n40. 40\n\nGoren Y, Kushnir M, Zafrir B, Tabak S, Lewis BS, Amir O. Serum levels of microRNAs in patients with heart failure. Eur J Heart Fail. 2012; 14(2):147–54.\n\n41. 41\n\nMatkovich SJ, Wang W, Tu Y, Eschenbacher WH, Dorn LE, Condorelli G, et al.MicroRNA-133a protects against myocardial fibrosis and modulates electrical repolarization without affecting hypertrophy in pressure-overloaded adult hearts. Circ Res. 2010; 106(1):166–75.\n\n42. 42\n\nVacchi-Suzzi C, Bauer Y, Berridge BR, Bongiovanni S, Gerrish K, Hamadeh HK, et al.Perturbation of microRNAs in rat heart during chronic doxorubicin treatment. PLoS ONE. 2012; 7(7):40395.\n\n43. 43\n\nQiang L, Hong L, Ningfu W, Huaihong C, Jing W. Expression of miR-126 and miR-508-5p in endothelial progenitor cells is associated with the prognosis of chronic heart failure patients. Int J Cardiol. 2013; 168(3):2082–8.\n\n44. 44\n\nDuisters RF, Tijsen AJ, Schroen B, Leenders JJ, Lentink V, van der Made I, et al. 
miR-133 and miR-30 regulate connective tissue growth factor: implications for a role of microRNAs in myocardial matrix remodeling. Circ Res. 2009; 104(2):170–8.\n\n45. 45\n\nvan Rooij E, Sutherland LB, Thatcher JE, DiMaio JM, Naseem RH, Marshall WS, et al.Dysregulation of microRNAs after myocardial infarction reveals a role of miR-29 in cardiac fibrosis. Proc Natl Acad Sci USA. 2008; 105(35):13027–32.\n\n46. 46\n\nRangrez AY, Massy ZA, Metzinger-Le Meuth V, Metzinger L. miR-143 and miR-145: molecular keys to switch the phenotype of vascular smooth muscle cells. Circ Cardiovasc Genet. 2011; 4(2):197–205.\n\n47. 47\n\nWang J, Huang W, Xu R, Nie Y, Cao X, Meng J, et al. MicroRNA-24 regulates cardiac fibrosis after myocardial infarction. J Cell Mol Med. 2012; 16(9):2150–60.\n\n48. 48\n\nvan Rooij E, Sutherland LB, Liu N, Williams AH, McAnally J, Gerard RD, et al. A signature pattern of stress-responsive microRNAs that can evoke cardiac hypertrophy and heart failure. Proc Natl Acad Sci USA. 2006; 103(48):18255–60.\n\n49. 49\n\nGanesan J, Ramanujam D, Sassi Y, Ahles A, Jentzsch C, Werfel S, et al.MiR-378 controls cardiac hypertrophy by combined repression of mitogen-activated protein kinase pathway factors. Circulation. 2013; 127(21):2097–106.\n\n50. 50\n\nWong SS, Ritner C, Ramachandran S, Aurigui J, Pitt C, Chandra P, et al. miR-125b promotes early germ layer specification through Lin28/let-7d and preferential differentiation of mesoderm in human embryonic stem cells. PLoS ONE. 2012; 7(4):36121.\n\n51. 51\n\nTijsen AJ, Creemers EE, Moerland PD, de Windt LJ, van der Wal AC, Kok WE, et al. MiR423-5p as a circulating biomarker for heart failure. Circ Res. 2010; 106(6):1035–9.\n\n52. 52\n\nBao MH, Feng X, Zhang YW, Lou XY, Cheng Y, Zhou HH. Let-7 in cardiovascular diseases, heart development and cardiovascular differentiation from stem cells. Int J Mol Sci. 2013; 14(11):23086–102.\n\n53. 
53\n\nSpinetti G, Fortunato O, Caporali A, Shantikumar S, Marchetti M, Meloni M, et al. MicroRNA-15a and microRNA-16 impair human circulating proangiogenic cell functions and are increased in the proangiogenic cells and serum of patients with critical limb ischemia. Circ Res. 2013; 112(2):335–46.\n\n54. 54\n\nZhang ZH, Li J, Liu BR, Luo CF, Dong Q, Zhao LN, et al. MicroRNA-26 was decreased in rat cardiac hypertrophy model and may be a promising therapeutic target. J Cardiovasc Pharmacol. 2013; 62(3):312–9.\n\n55. 55\n\nCrippa S, Cassano M, Messina G, Galli D, Galvez BG, Curk T, et al. miR669a and miR669q prevent skeletal muscle differentiation in postnatal cardiac progenitors. J Cell Biol. 2011; 193(7):1197–212.\n\n56. 56\n\nMiwa K, Lee JK, Takagishi Y, Opthof T, Fu X, Hirabayashi M, et al. Axon guidance of sympathetic neurons to cardiomyocytes by glial cell line-derived neurotrophic factor (GDNF). PLoS ONE. 2013; 8(7):65202.\n\n57. 57\n\nChan AO, Jim MH, Lam KF, Morris JS, Siu DC, Tong T, et al. Prevalence of colorectal neoplasm among patients with newly diagnosed coronary artery disease. JAMA. 2007; 298(12):1412–9.\n\n58. 58\n\nKerkela R, Grazette L, Yacobi R, Iliescu C, Patten R, Beahm C, et al. Cardiotoxicity of the cancer therapeutic agent imatinib mesylate. Nat Med. 2006; 12(8):908–16.\n\n59. 59\n\nPatwary MS, Haque KMHSS, Shoaib N, Salehin KS, Hosan ATMI. Cardiac Involvement of Hepatitis B and C Virus Infection. Univ Heart J. 2012; 8(2):113–8.\n\n60. 60\n\nvan Haelst PL, Schot B, Hoendermis ES, van den Berg MP. Acute myeloid leukaemia as a cause of acute ischaemic heart disease. Neth Heart J. 2006; 14(2):62–5.\n\n61. 61\n\nDiMichele LA, Doherty JT, Rojas M, Beggs HE, Reichardt LF, Mack CP, et al.Myocyte-restricted focal adhesion kinase deletion attenuates pressure overload-induced hypertrophy. Circ Res. 2006; 99(6):636–45.\n\n62. 62\n\nMuslin AJ. MAPK signalling in cardiovascular health and disease: molecular mechanisms and therapeutic targets. Clin Sci. 
2008; 115(7):203–18.\n\n63. 63\n\nWard KK, Shah NR, Saenz CC, McHale MT, Alvarez EA, Plaxe SC. Cardiovascular disease is the leading cause of death among endometrial cancer patients. Gynecol Oncol. 2012; 126(2):176–9.\n\n64. 64\n\nBonney KM, Engman DM. Chagas heart disease pathogenesis: one mechanism or many?Curr Mol Med. 2008; 8(6):510–8.\n\n65. 65\n\nXu Y, Li X, Liu X, Zhou M. Neuregulin-1/ErbB signaling and chronic heart failure. Adv Pharmacol. 2010; 59:31–51.\n\n66. 66\n\nThomas JA, Gerber L, Banez LL, Moreira DM, Rittmaster RS, Andriole GL, et al.Prostate cancer risk in men with baseline history of coronary artery disease: results from the REDUCE Study. Cancer Epidemiol Biomarkers Prev. 2012; 21(4):576–81.\n\n67. 67\n\nLeak D, Meghji M. Toxoplasmic infection in cardiac disease. Am J Cardiol. 1979; 43(4):841–9.\n\n68. 68\n\nBujak M, Frangogiannis NG. The role of TGF-beta signaling in myocardial infarction and cardiac remodeling. Cardiovasc Res. 2007; 74(2):184–95.\n\n69. 69\n\nSteingart RM, Bakris GL, Chen HX, Chen MH, Force T, Ivy SP, et al. Management of cardiac toxicity in patients receiving vascular endothelial growth factor signaling pathway inhibitors. Am Heart J. 2012; 163(2):156–63.\n\n70. 70\n\nPerez-Lloret S, Rey MV, Crispo J, Krewski D, Lapeyre-Mestre M, Montastruc JL, et al. Risk of heart failure following treatment with dopamine agonists in Parkinson’s disease patients. Expert Opin Drug Saf. 2014; 13(3):351–60.\n\n71. 71\n\nDepression and Heart Disease. http://www.nimh.nih.gov/health/publications/depression-and-heart-disease/index.shtml.\n\n72. 72\n\nTrifilo MJ, Yajima T, Gu Y, Dalton N, Peterson KL, Race RE, et al. Prion-induced amyloid heart disease with high blood infectivity in transgenic mice. Science. 2006; 313(5783):94–7.\n\n73. 73\n\nRazani B, Zhang H, Schulze PC, Schilling JD, Verbsky J, Lodhi IJ, et al. Fatty acid synthase modulates homeostatic responses to myocardial stress. J Biol Chem. 2011; 286(35):30949–61.\n\n74. 
## Acknowledgements

We thank Dr. Katsuichiro Komatsu for assistance with in silico drug screening using chooseLD. This work was supported by the Japan Society for the Promotion of Science KAKENHI (Nos. 23300357 and 26120528), and a Chuo University Joint Research Grant.

## Author information

Correspondence to Y-h Taguchi.

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

HU performed drug discovery using chooseLD. MI performed protein tertiary structure predictions using FAMS. YHT designed and supervised the research, performed calculations and analyses, and prepared the manuscript. All authors read and approved the final manuscript.

## Additional files

Confusion tables for simulated data. Performance of FE methods applied to a simulated data set, expressed using confusion tables.

Supplementary text. Supplementary discussion not included in the main text.

Complete DIANA-mirpath results. Complete list of the KEGG pathway analysis reported by DIANA-mirpath for CPCAFE, categorical regression-based FE, and BAHSIC (xlsx). A partial list for CPCAFE is included in the main text as Table 3.

Correlation coefficients between selected miRNAs and target mRNAs. Correlation coefficients between selected miRNAs and target mRNAs, with RefSeq IDs and gene symbols provided; no value is filled in between a miRNA and its off-target genes. The "target" column gives the number of miRNAs targeting each gene (xlsx). CPCAFE: 27 miRNAs vs. 59 mRNAs; categorical regression-based FE: 77 miRNAs vs. 23 mRNAs; BAHSIC: 47 miRNAs vs. 37 mRNAs.

List of associated diseases with genes and miRNAs. List of diseases associated with the genes, and the miRNAs targeting each gene, identified by CPCAFE. Lists genes not included in Table 5.

KEGG pathway mapping diagram of identified genes. KEGG pathway mapping diagram of genes identified by CPCAFE (red characters). Genes listed as targets of drug candidate compounds in Table 7 are also indicated (blue characters).

Summary of predicted or PDB protein tertiary structures. Summary of predicted or PDB protein tertiary structures of 26 genes with reported involvement in heart disease (see Table 5 or Additional file 4).

Full list of drug candidate compounds ranked by FPAScores.

Stability analysis. Stability analysis of CPCAFE, categorical regression-based FE, and BAHSIC. Frequency represents the number of times each probe was selected among 100 independent ensembles; number of probes represents the number of probes selected with the corresponding frequency. Numbers in bold give the number of mRNAs/miRNAs selected 100% of the time. Note that no mRNA was selected 100% by BAHSIC.
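The bookkeeping behind the stability analysis described above, tallying how often each probe is selected across independent ensembles and then tabulating how many probes reach each frequency, can be sketched in a few lines. This is only an illustration of the counting scheme: the probe names and the three toy "ensembles" below are made up, not the study's data.

```python
from collections import Counter

def selection_frequency(ensembles):
    """Given one set of selected probes per ensemble, return:
    - per_probe: probe -> number of ensembles that selected it
    - by_frequency: frequency -> number of probes selected that many times
    """
    per_probe = Counter()
    for selected in ensembles:
        per_probe.update(set(selected))  # count each probe once per ensemble
    by_frequency = Counter(per_probe.values())
    return per_probe, by_frequency

# Toy example with 3 ensembles instead of the study's 100
ensembles = [{"miR-144", "miR-451"}, {"miR-144"}, {"miR-144", "miR-133a"}]
per_probe, by_freq = selection_frequency(ensembles)
print(per_probe["miR-144"])  # selected in all 3 ensembles -> 3
print(by_freq[1])            # probes selected exactly once -> 2
```

A probe that appears in every ensemble (frequency 100% in the study's terms) is the stable case the bold numbers in the additional file refer to.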
https://www.wikihow.com/Measure-Force
# How to Measure Force

Co-authored by wikiHow Staff | 10 References

Updated: July 31, 2019

Force is a physics term that is defined as an influence that causes an object to change its rate or direction of movement or rotation. A force can accelerate objects by pulling or pushing them. The relationship between force, mass, and acceleration was defined by Isaac Newton in his second law of motion, which states that the force on an object is the product of its mass and acceleration. If you want to know how to measure force, just follow these steps.

### Method 1: Measure Force

1. Understand the relationship between force, mass, and acceleration. The force on an object is simply the product of its mass and acceleration. This relationship can be defined by the formula Force = Mass × Acceleration. Here are a few other things to keep in mind as you measure force:
   - The standard unit for mass is the kilogram (kg).
   - The standard unit for acceleration is m/s².
   - The standard unit for force is the newton (N), a derived unit: 1 N = 1 kg × 1 m/s².
2. Measure the mass of the object. An object's mass is the amount of matter it contains. The mass of an object never changes, no matter what planet it's on; while weight fluctuates depending on gravitational pull, your mass is the same on Earth and on the Moon. In the metric system, mass is expressed in grams or kilograms. Let's say the object we're working with is a truck with a mass of 1000 kg.
   - To find the mass of a given object, place it on a triple-beam or a double-pan balance. This gives the mass in kilograms or grams.
   - In the English system, mass can be expressed in pounds. Because force can also be expressed in pounds, the term "pound-mass" has been coined to distinguish its usage. If you find the mass of an object in pounds, it's best to convert it to the metric system: multiply the value by 0.45 to get the mass in kilograms.
3. Measure the object's acceleration. In physics, acceleration is defined as a change in velocity (speed in a given direction) per unit of time. Besides the everyday sense of speeding up, acceleration can also mean an object is slowing down or changing direction. Just as velocity can be measured with a speedometer, acceleration is measured with an accelerometer. Let's say the acceleration of our 1000 kg truck is 3 m/s².
   - In the metric system, velocity is expressed in centimeters per second or meters per second, and acceleration in centimeters per second squared or meters per second squared.
   - In the English system, velocity can be expressed in feet per second, so acceleration can be expressed in feet per second squared.
4. Multiply the object's mass by its acceleration. This product is the force. Plug the known numbers into the equation and state your answer in newtons (N):
   - Force = Mass × Acceleration
   - Force = 1000 kg × 3 m/s²
   - Force = 3000 N

### Method 2: Solve for Mass or Acceleration

1. Find the mass if you know the force and acceleration. Plug the known values into the same formula and solve for mass. Here's how to do it:
   - Force = Mass × Acceleration
   - 3 N = Mass × 3 m/s²
   - Mass = 3 N ÷ 3 m/s² = 1 kg
2. Find the acceleration if you know the force and mass. Plug the known values into the same formula and solve for acceleration. Here's how to do it:
   - Force = Mass × Acceleration
   - 10 N = 2 kg × Acceleration
   - Acceleration = 10 N ÷ 2 kg = 5 m/s²
3. Find the acceleration of an object from its velocities. If you want to find the force on an object, you can calculate its acceleration as long as you know its mass. The formula is Acceleration = (Final Velocity - Initial Velocity) / Time.
   - Example: A runner reaches a speed of 6 m/s in 10 seconds. What is his acceleration?
   - The final velocity is 6 m/s, the initial velocity is 0 m/s, and the time is 10 s.
   - Acceleration = (6 m/s - 0 m/s) / 10 s = 0.6 m/s²

## Community Q&A

- Question: Is it possible to measure the force of a stationary object?
  Yes. You can measure the force of gravity pulling on the object. Just use the acceleration due to gravity for the acceleration of the object.
- Question: What is mass movement?
  Mass movement is the movement of surface material caused by gravity. Landslides and rockfalls are examples of very sudden movements of this type. Geological agents such as water, wind and ice all work with gravity to cause a leveling of land, too.
- Question: What is a gravitational force?
  A force caused by a massive object (as opposed to a massless object). There are various theories as to how gravity works. Newton's is easiest to understand and apply, but more up-to-date theories (such as general relativity) are used when increased accuracy is needed.
- Question: What is the formula for kinetic energy?
  The basic formula for kinetic energy is mass times velocity squared, divided by 2.
- Question: What is the difference between weight and force?
  Weight is a particular type of force, caused by a gravitational field acting upon a mass.

## Tips

- Mass can also be expressed in slugs, with a slug equal to 32.174 pounds-mass. A slug is the amount of mass that 1 pound-force can accelerate at 1 foot per second squared. When multiplying a mass in slugs by an acceleration in feet per second squared, no conversion constant is needed.
- Divide the result by a conversion constant if you're working with English units. As noted above, "pound" can be either a unit of mass or force in the English system; when used as a unit of force, it is called "pound-force." The conversion constant is 32.174 pound-mass feet per pound-force second squared; 32.174 is the value of the acceleration due to Earth's gravity in feet per second squared. (To simplify the math here, we'll round to a value of 32.)
  - Thus, a mass of 640 pounds-mass accelerating at 5 feet per second squared carries an approximate force of 640 times 5 divided by 32, or 100 pounds-force.
- Note that the relationship between force, mass and acceleration means that an object with low mass and high acceleration can have the same force as an object with high mass and low acceleration.
  - A mass of 150 kilograms accelerating at 10 meters per second squared carries a force of 150 times 10, or 1500 kilogram-meters per second squared. (A kilogram-meter per second squared is called a newton, so this is 1500 N.)
  - A mass of 20 grams accelerating at 5 centimeters per second squared carries a force of 20 times 5, or 100 gram-centimeters per second squared. (A gram-centimeter per second squared is called a dyne.)
- Forces may have special names depending on how they act on an object. A force that causes an object to speed up is called thrust, while a force that causes an object to slow down is called drag. A force that changes the way a rotating object spins around its axis is called torque.
- Weight is the expression of a mass being acted on by the acceleration due to gravity. At Earth's surface, this acceleration is about 9.8 (9.80665) meters per second squared, or 32 (32.174) feet per second squared. Thus, in the metric system, a 100 kilogram mass weighs about 980 newtons, and a 100 gram mass weighs about 98,000 dynes. In the English system, mass and weight can be expressed in the same units, so 100 pounds of mass (pounds-mass) weighs 100 pounds (pounds-force). Because a spring scale measures the pull of gravity on an object, it actually measures weight, not mass. (In common usage, there is no distinction, so long as the only gravity under consideration is that of Earth's surface.)

## Things You'll Need

- Balance or spring scale
- Accelerometer
- Pencil and paper or calculator

## Article Summary

To measure force, look at the formula force equals mass times acceleration (F = M × A). An object's mass is the amount of matter that it contains, expressed in grams or kilograms. Acceleration is the change in velocity (speed in a given direction) per unit of time, expressed in meters or centimeters per second squared. Multiply the object's mass by its acceleration to find the force, and state your answer in newtons (N). If you want to learn how to find the acceleration or mass if you know the force of an object, keep reading the article!

## Article Info

This article was co-authored by our trained team of editors and researchers who validated it for accuracy and comprehensiveness. Together, they cited information from 10 references.

Categories: Classical Mechanics

In other languages: Español: medir la fuerza, Italiano: Misurare la Forza, Deutsch: Die Kraft eines Objekts bestimmen, Português: Medir a Intensidade de uma Força, 中文: 测量力, Русский: рассчитать силу, Français: mesurer une force, Bahasa Indonesia: Mengukur Gaya

Thanks to all authors for creating a page that has been read 134,892 times.
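The worked examples in the article (the truck's force, solving the same formula for mass or acceleration, acceleration from a change in velocity, and the English-unit conversion from the tips) can be checked with a short Python sketch. The function names are my own; only the formulas and numbers come from the article.

```python
def force(mass_kg, accel_ms2):
    """Newton's second law: F = m * a, in newtons."""
    return mass_kg * accel_ms2

def mass(force_n, accel_ms2):
    """Rearranged: m = F / a, in kilograms."""
    return force_n / accel_ms2

def acceleration(force_n, mass_kg):
    """Rearranged: a = F / m, in m/s^2."""
    return force_n / mass_kg

def accel_from_velocity(v_final, v_initial, time_s):
    """a = (v_final - v_initial) / t, in m/s^2."""
    return (v_final - v_initial) / time_s

def force_lbf(mass_lbm, accel_fts2, g_c=32.174):
    """English units: pounds-force from pounds-mass, dividing by g_c."""
    return mass_lbm * accel_fts2 / g_c

print(force(1000, 3))                 # truck example -> 3000 N
print(mass(3, 3))                     # -> 1 kg
print(acceleration(10, 2))            # -> 5 m/s^2
print(accel_from_velocity(6, 0, 10))  # runner example -> 0.6 m/s^2
print(force_lbf(640, 5, g_c=32))      # slug tip, rounded g_c -> 100 lbf
```

With the unrounded constant 32.174, the last example gives about 99.5 pounds-force, which is why the article calls 100 an approximation.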
http://www.e-booksdirectory.com/details.php?ebook=38
# Theory of Functions of a Real Variable

Number of pages: 393

Description:
I have taught the beginning graduate course in real variables and functional analysis three times in the last five years, and this book is the result. The course assumes that the student has seen the basics of real variable theory and point set topology. Contents: the topology of metric spaces, Hilbert spaces and compact operators, the Fourier transform, measure theory, the Lebesgue integral, the Daniell integral, Wiener measure, Brownian motion and white noise, Haar measure, Banach algebras and the spectral theorem, Stone's theorem, scattering theory.

(1.5MB, PDF)

## Similar books

Undergraduate Analysis Tools
by - University of California, San Diego
Contents: Natural, integer, and rational numbers; Fields; Real Numbers; Complex Numbers; Set Operations, Functions, and Counting; Metric Spaces; Series and Sums in Banach Spaces; Topological Considerations; Differential Calculus in One Real Variable.
(4202 views)

Fundamentals of Analysis
by - Macquarie University
A set of notes suitable for an introduction to the basic ideas in analysis: the number system, sequences and limits, series, functions and continuity, differentiation, the Riemann integral, further treatment of limits, and uniform convergence.
(13174 views)

Lectures on Lipschitz Analysis
In these lectures, we concentrate on the theory of Lipschitz functions in Euclidean spaces. From the table of contents: Introduction; Extension; Differentiability; Sobolev spaces; Whitney flat forms; Locally standard Lipschitz structures.
(7623 views)

Irrational Numbers and Their Representation by Sequences and Series
by - J. Wiley & sons
This book is intended to explain the nature of irrational numbers, and those parts of Algebra which depend on the theory of limits. We have endeavored to show how the fundamental operations are to be performed in the case of irrational numbers.
(3166 views)
http://alegremath.com/algebra-1-math/trinomials/find-the-roots-of-the.html
Try the Free Math Solver or Scroll down to Tutorials!

### Our users:

My husband has been using the software since he went back to school a few months ago. He's been out of college for over 10 years so he was very rusty with his math skills. A teacher friend of ours suggested the program since she uses it to teach her students fractions. Mike has been doing well in his two math classes. Thank you!
Tami Garleff, MI

My former algebra tutor got impatient whenever I couldn't figure out an equation. I eventually got tired of her so I decided to try the software. I'm so impressed with it! I can't stress enough how great it is!
C. Jose, CA

As proud as my Mom was every time I saw her cheering me after I ran in a touchdown from 5 yards out, she was always just as worried about my grades. She said if I didn't get my grades up, nobody would ever give me a scholarship, no matter how many rushing yards I got. Even when my coach showed me your program, I didn't want no part of it. But, it started making sense. Now, I do algebra with as much confidence as I play football and my senior year is gonna be my best yet!
Pamela Nelson, MT

I'm not much of a math wiz but the Algebrator helps me out with fractions and other stuff I need to know for my math appreciation class in college.
Ed Carly, IN

It's been a long time since I needed to understand algebra and when it came time to helping my son, I couldn't do it. Now, with your algebra software, we are both learning together.
Kenneth Schneider, WV

### Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?

#### Search phrases used on 2011-04-28:

- PREALGEBRA WORKSHEETS
- "Multiplication with the addition or subtraction method" calculator
- logarithms math worksheets with answers
- sample lesson plan in elementary algebra (topic-different kinds of sets)
- how to convert a mix fraction into a decimal
- graphing simultaneous equations interactive
- roots in matlab of equation
- steps of addition and subtraction of algebraic expression
- science investigatory projects for 6th graders
- solve addition of two fractions
- algebra calculator complete squares
- solving algebra with division
- aptitude questions for software companies
- find out what the lowest common multiple of 34 and 19
- free Secondary 5N test papers for practice
- equation 3 unknowns
- worlds hardest algebra question
- how to graph system equations
- graphs of functions involving square root
- factoring online
- simultaneous equations matrix worksheet
- finding common denominator in equations
- trigonometry /mark dugopolski/anwser key
- combination maths
- programming on ti-83 plus fix 2 function
- statistic homework
- intermediate algebra study questions
- ti-83 matrix convolution
- algebra help inverse
- find three ordered pairs which are solution to the equation and then graph
- REARRANGING sample MATH worksheets
- maths revision quiz year 8
- free algebra2 solver
- college algebra clep study online
- mixed numbers decimals
- Y10 MATHS TEST PAPERS FREE TO PRINT OFF
- Free Printable Grade 9 Algebra Worksheets
- Prentice Hall Conceptual Physics equations
- free downlod calcultor
- Gr.9 Math sheets
- examination test in matlab
- multiplying and dividing equations
- how to solve linear equations
- GCD & LCM for 7th standard matriculation sylabus
- how 2 convert binomial into fraction
- book for high school math revision before college
- cost accounting homework solution
- highest common factor of 6, 12 and 15
- free abstract algebra tutors online
- Year 7 Math Exam
- Fraction reduction worksheets
- nonliner system of equations
- Rational and Radical Expressions calculator
- free online graph
- CALCULATOR DE BIROU CU RADICAL
- algabra factoring
- to solve the quadratic equation in excel
- free online two step equations containing integers
- Practice maths papers age 12 (printable)
- chart trig
- online study tools for algebra finals
- aptitude text bookd pdf
- computer solveing guide/pdf
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9031313,"math_prob":0.81190795,"size":4727,"snap":"2021-43-2021-49","text_gpt3_token_len":1112,"char_repetition_ratio":0.124073684,"word_repetition_ratio":0.0,"special_character_ratio":0.21451238,"punctuation_ratio":0.05032258,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99421895,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T11:47:27Z\",\"WARC-Record-ID\":\"<urn:uuid:2f97ecf1-45b0-45ba-b53e-751ddbb9395c>\",\"Content-Length\":\"87250\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:002cdcd0-5de2-4165-b67d-5464b5b4c1ad>\",\"WARC-Concurrent-To\":\"<urn:uuid:9102e505-d592-4256-aefe-d54dd5c9ee2c>\",\"WARC-IP-Address\":\"54.197.228.212\",\"WARC-Target-URI\":\"http://alegremath.com/algebra-1-math/trinomials/find-the-roots-of-the.html\",\"WARC-Payload-Digest\":\"sha1:NB2LPBZE5F2W7QKLALJNN2ES3APYHUG6\",\"WARC-Block-Digest\":\"sha1:RB2FFEMQZQ2KFUJYWPAJ7HWMGVID5Y6E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363376.49_warc_CC-MAIN-20211207105847-20211207135847-00030.warc.gz\"}"}
https://planetmath.org/entropyencoding
[ "# entropy encoding\n\nAn entropy encoding is a coding scheme that involves assigning codes to symbols so as to match code lengths with the probabilities of the symbols. Typically, entropy encoders are used to compress data by replacing symbols represented by equal-length codes with symbols represented by codes whose lengths are proportional to the negative logarithm of the probability. Therefore, the most common symbols use the shortest codes.\n\nAccording to Shannon’s theorem, the optimal code length for a symbol is\n\n $-\\log_{b}P$\n\nwhere $b$ is the number of symbols used to make output codes and $P$ is the probability of the input symbol.\n\nTwo of the most common entropy encoding techniques are Huffman encoding and arithmetic encoding.\n\nTitle: entropy encoding. Canonical name: EntropyEncoding. Date of creation: 2013-03-22 12:32:22. Last modified on: 2013-03-22 12:32:22. Owner: vampyr (22). Last modified by: vampyr (22). Numerical id: 4. Author: vampyr (22). Entry type: Definition. Classification: msc 68P30, msc 94A24. Synonym: entropy encoder, entropy coding. Related topic: HuffmanCoding" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7865872,"math_prob":0.8826309,"size":1078,"snap":"2021-04-2021-17","text_gpt3_token_len":245,"char_repetition_ratio":0.16480447,"word_repetition_ratio":0.0,"special_character_ratio":0.23283859,"punctuation_ratio":0.06703911,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99695617,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-21T15:38:26Z\",\"WARC-Record-ID\":\"<urn:uuid:42748fb7-52a6-4fb8-995e-70e669bdc1e0>\",\"Content-Length\":\"7293\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a5b6e250-6a76-412a-a0b6-8f483d07ff0f>\",\"WARC-Concurrent-To\":\"<urn:uuid:907bf355-11e3-4dac-9cd7-4811ced14d73>\",\"WARC-IP-Address\":\"129.97.206.129\",\"WARC-Target-URI\":\"https://planetmath.org/entropyencoding\",\"WARC-Payload-Digest\":\"sha1:CPUEBZ5VUDCDVZFBPMNYGJNFEHDNNFXM\",\"WARC-Block-Digest\":\"sha1:4TPZ6SCPMSIVJMBFHSPCV7TVTKFCQPFD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703524858.74_warc_CC-MAIN-20210121132407-20210121162407-00219.warc.gz\"}"}
http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/E-KT.htm
[ "## the Erdös-Kac theorem\n\n\"...it may be appropriate to quote a statement of Poincaré, who said (partly in jest no doubt) that there must be something mysterious about the normal law since mathematicians think it is a law of nature whereas physicists are convinced that it is a mathematical theorem.\"\n\n\"Consider the integers divisible by both p and q [p and q both prime]. To be divisible by p and q is equivalent to being divisible by pq and consequently the density of the new set is 1/pq. Now, 1/pq = (1/p)(1/q), and we can interpret this by saying that the \"events\" of being divisible by p and q are independent. This holds, of course, for any number of primes, and we can say using a picturesque but not very precise language, that the primes play a game of chance! This simple, nearly trivial, observation is the beginning of a new development which links in a significant way number theory on the one hand and probability theory on the other.\"\n\nP. Erdös and M. Kac, \"The Gaussian law of errors in the theory of additive number theoretic functions\", American Journal of Mathematics 62 (1940) 738-742\n\nThe significant consequence of the general result is that the numbers of prime factors of large integers (suitably normalised) tend to follow the Gaussian distribution. This is stated more precisely on p.738 of the original article, but there have since been clearer reformulations of the theorem. The clearest I have found is in Kac's 1959 book.\n\nHere we see a still image from Hermetic Systems' program Factorizer 6.44 written by Peter Meyer. 
This is primarily a piece of software for finding the prime factorisations of integers, but additional features in the latest version include histogram plotting which illustrates the Erdös-Kac Theorem.\n\nWe start with the Hardy-Ramanujan theorem (1917), which deals with numbers of prime factors of large integers:\n\nAccording to Kac, the theorem states that\n\n\"Almost every integer m has approximately log log m prime factors.\"\n\nMore precisely, Kac explains on p.73, that Hardy and Ramanujan proved the following:\n\nIf l_n denotes the number of integers m in {1,...,n} whose number of prime factors v(m) satisfies either\n\nv(m) < log log m - g_m [log log m]^{1/2}\n\nor\n\nv(m) > log log m + g_m [log log m]^{1/2}\n\nthen", null, "He reproduces a proof of P. Turan, simpler than the original, and a direct analogue of the proof of the weak law of large numbers given earlier in the book.\n\nNote that the inequalities above can be rewritten in terms of the quantity\n\n[v(m) - log log m]/[log log m]^{1/2}\n\nbeing greater than g_m or less than -g_m. This quantity is a sort of 'normalised' (?) deviation of the number of prime factors of m from what it \"should\" be. If m has 'too many' prime factors it will be positive, if it has 'not enough' prime factors, it will be negative. This quantity turns out to be closely related to the one which distributes Normally.\n\nFrom Kac's book:\n\n\"5. The normal law in number theory. The fact that... the number of prime divisors of m, is the sum...of independent functions suggests that, in some sense the distribution of [its] values may be given by the normal law.
This is indeed the case...\"\n\nErdös and Kac proved the following:\n\nLet K_n(w_1,w_2) be the number of integers m in the set {1,...,n}, for which\n\nw_1 < [v(m) - log log n]/[log log n]^{1/2} < w_2\n\nthen", null, "In other words, in limit, the proportion of integers in the set {1,...,n} whose 'suitably normalised' number-of-prime-factors deviation falls between two limits is equal to the area under the bell curve over the same interval.\n\nIncidentally, the general result in the original paper concerns a general additive number theoretical function f, i.e. one which satisfies\n\nf(mn) = f(m) + f(n) for relatively prime m and n\n\nThe particular result of interest concerns f(p^k) = 1, that is, a function f which counts numbers of distinct prime factors of positive integers.\n\nThe only probabilistic component in the proof is Lemma 1, which requires the application of the Central Limit Theorem.\n\nThe authors claim Lemma 2 is the \"deep\" part of the proof and that Lemma 1 is \"relatively superficial\". But Lemma 1 is the one which concerns us here, as it involves \"the central limit theorem of the calculus of probability\".\n\nWe reproduce Lemma 1 and its proof here, but with the general f replaced by f(p^k) = 1, which is all we need for the particular result which interests us here. Other minor adjustments have also been made for the sake of clarity:\n\nLemma 1.
Let f_l(m) denote the number of distinct prime factors of m which are less than l.\n\nDenote by d_l the density of the set of integers m for which", null, "Then the density d_l in limit, as l approaches infinity, equals the probability density", null, "Proof: Let r_p(n) be 0 or 1 according to whether p is not or is a factor of n, then", null, "Since the r_p(m) are statistically independent functions (see the quote at the top of the page), f_l(m) behaves like a sum of independent random variables and consequently the distribution function", null, "is a convolution (Faltung) of the distribution functions", null, "for p < l.\n\nIt is easy to see that the \"central limit theorem of the calculus of probability\" can be applied to the present case, and this proves our lemma.\n\n## The Central Limit Theorem\n\nThe sum of a large number of independent, identically distributed variables will distribute Normally (in limit) regardless of the underlying distribution.\n\n\"The central limit theorem explains why many distributions tend to be close to the normal distribution. The key ingredient is that the random variable being observed should be the sum or mean of many independent identically distributed random variables.\"\n\n\"The amazing and counterintuitive thing about the CLT is that no matter what the shape of the original distribution, the sampling distribution of the mean approaches a normal distribution. Furthermore, for most distributions, a normal distribution is approached very quickly as N increases. Keep in mind that N is the sample size for each mean and not the number of samples. Remember in a sampling distribution the number of samples is assumed to be infinite. The sample size is the number of scores in each sample; it is the number of scores that goes into the computation of each mean.\"\n\n\"The CLT and the law of large numbers are the two fundamental theorems of probability.
Roughly, the CLT states that the distribution of the sum of a large number of independent, identically distributed variables will be approximately normal, regardless of the underlying distribution. The importance of the CLT is hard to overstate; indeed it is the reason that many statistical procedures work.\n\nUltimately, the proof hinges on a generalization of a famous limit from calculus.\" (from C. Stanton's online notes)\n\n\"I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the 'Law of Frequency of Error.' (Central Limit Theorem) The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement, amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshaled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along.\"\n\nFrancis Galton\n\nHere is a simulation of the quincunx (or \"bean machine\"), a device invented by Galton to demonstrate the CLT. Here is another.\n\nUseful references:\n\nP. Erdös and A. Wintner, \"Additive arithmetic functions and statistical independence\", American Journal of Mathematics 61 (1939) 713-722.\n\nP. Erdös, \"Note on the prime divisors of integers\", Journal of the London Mathematical Society 12 (1937) 308-314.\n\nP. Hartman, E.R. van Kampen and A. Wintner, \"Asymptotic distributions and statistical independence\", American Journal of Mathematics 61 (1939) 477-486.\n\nM. Kac, \"Probability methods in some problems of analysis and number theory\", Bulletin of the AMS 55 (1949) 641-665.\n\nI.P. Kubilius, \"Probability methods in number theory\" (in Russian), Usp. Mat. Nauk 68 (1956) 31-66.\n\nA. Renyi, \"On the density of certain sequences of integers\", Publ. Inst.
Math. Belgrade 8 (1955) 157-162.\n\nA. Renyi and P. Turan, \"On a theorem of Erdös-Kac\", Acta. Arith. 4 (1958) 71-84." ]
[ null, "http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/ek5.jpg", null, "http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/ek6.jpg", null, "http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/ek10.jpg", null, "http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/ek9.jpg", null, "http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/ek11.jpg", null, "http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/ek10.jpg", null, "http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/ek12.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8789999,"math_prob":0.9752251,"size":8598,"snap":"2020-10-2020-16","text_gpt3_token_len":2127,"char_repetition_ratio":0.1328834,"word_repetition_ratio":0.02626123,"special_character_ratio":0.2320307,"punctuation_ratio":0.118444316,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9969835,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,1,null,1,null,2,null,1,null,1,null,2,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-17T14:59:27Z\",\"WARC-Record-ID\":\"<urn:uuid:d9b76145-81e7-48d4-8636-6c98672ef0fb>\",\"Content-Length\":\"13228\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c04b3fc8-6185-4b45-bf8f-02a7b58724d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:d275403a-bc81-40dd-946b-001c3877bcb3>\",\"WARC-IP-Address\":\"144.173.36.85\",\"WARC-Target-URI\":\"http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/E-KT.htm\",\"WARC-Payload-Digest\":\"sha1:X6UYYRQJHVBNCU7XILW3UOURRMDD6CLV\",\"WARC-Block-Digest\":\"sha1:BQMU7ZHV3ACFSWTZ4PMWPGNTQEIUJVRI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875142603.80_warc_CC-MAIN-20200217145609-20200217175609-00493.warc.gz\"}"}
https://programmer.ink/think/noi2008-volunteer-recruitment-linear-programming-dual-problem-cost-flow.html
[ "# [NOI2008] volunteer recruitment (linear programming - dual problem - cost flow)\n\nPosted by KevinCB on Sun, 06 Mar 2022 08:57:04 +0100\n\nluogu-P3980\n\n# solution\n\nVolunteers of kind $i$ work continuously over the days $[s_i,t_i]$, so we can recognise the model of the \"longest k-repeatable interval set\" problem from the classic twenty-four network-flow problems.\n\nAs there, abstract days $1\\sim n$ into a chain of points $1\\sim n+1$.\n\n• Source edge $s\\rightarrow 1$ with infinite capacity and zero cost; sink edge $n+1\\rightarrow t$ with infinite capacity and zero cost.\n• Chain edges $i\\rightarrow i+1$ with capacity $\\infty-a_i$ and zero cost.\n• For the $i$-th kind of volunteer, an edge $s_i\\rightarrow t_i+1$ with infinite capacity and cost $c_i$.\n\nFinally, the answer is the minimum-cost maximum flow of this network.\n\nIf the correctness of the construction is not immediately clear, consider how the flow actually moves through the network.\n\nFlow leaves $s$ and tries to reach $t$ along the zero-cost chain, but each chain edge only carries $\\infty-a_i$ units, so the full $\\infty$ units cannot get through it.\n\nThe remaining flow must therefore pass through volunteer edges, paying their cost.\n\nSuppose the flow reaches point $i$. The units that cannot fit through $(i,i+1)$ have to leave $i$ along volunteer edges, and each such unit costs $c_i$.\n\nAt some later point $x+1$ these detoured flows merge back into the chain.\n\nThis corresponds exactly to recruiting volunteers who start on day $i$ and end on day $x$ (of course, there may be more than one end point $x$).\n\nIn the end, at $t$, the total flow converges back to the $\\infty$ units that started from $s$.\n\nYou can picture it as a team of people breaking into an escape room together.
To clear certain multi-person side quests, the main party sends some members off to complete them while the main team keeps advancing through the main-line checkpoints. At a later checkpoint, those members rejoin the team once their side quests are done. When the game is beaten, everyone must come out through the final main-line checkpoint.\n\n# code\n\n#include <bits/stdc++.h>\nusing namespace std;\n#define maxn 2000\n#define maxm 50000\n#define int long long\n#define inf 0x3f3f3f3f\nstruct node { int to, nxt, flow, cost; }E[maxm];\nint head[maxn], dis[maxn], lst[maxn], vis[maxn], a[maxn];\nint cnt = -1, n, m, s, t;\nqueue < int > q;\n\nvoid addedge( int u, int v, int w, int c ) {\nE[++ cnt] = { v, head[u], w, c }; head[u] = cnt;\nE[++ cnt] = { u, head[v], 0,-c }, head[v] = cnt;\n}\n\nbool SPFA() {\nmemset( lst, -1, sizeof( lst ) );\nmemset( dis, 0x3f, sizeof( dis ) );\nq.push( dis[s] = 0 );\nwhile( ! q.empty() ) {\nint u = q.front(); q.pop(); vis[u] = 0;\nfor( int i = head[u];~ i;i = E[i].nxt ) {\nint v = E[i].to;\nif( dis[v] > dis[u] + E[i].cost and E[i].flow ) {\ndis[v] = dis[u] + E[i].cost; lst[v] = i;\nif( ! 
vis[v] ) vis[v] = 1, q.push( v );\n}\n}\n}\nreturn ~ lst[t];\n}\n\nint MCMF() {\nint ans = 0;\nwhile( SPFA() ) {\nint flow = inf;\nfor( int i = lst[t];~ i;i = lst[E[i ^ 1].to] )\nflow = min( flow, E[i].flow );\nfor( int i = lst[t];~ i;i = lst[E[i ^ 1].to] ) {\nE[i ^ 1].flow += flow;\nE[i].flow -= flow;\nans += flow * E[i].cost;\n}\n}\nreturn ans;\n}\n\nsigned main() {\nmemset( head, -1, sizeof( head ) );\nscanf( \"%lld %lld\", &n, &m );\ns = 0, t = n + 2;\nfor( int i = 1;i <= n;i ++ ) scanf( \"%lld\", &a[i] );\nfor( int i = 1;i <= n;i ++ ) addedge( i, i + 1, inf - a[i], 0 );\naddedge( s, 1, inf, 0 );\naddedge( n + 1, t, inf, 0 );\nfor( int i = 1, u, v, w;i <= m;i ++ ) {\nscanf( \"%lld %lld %lld\", &u, &v, &w );\naddedge( u, v + 1, inf, w );\n}\nprintf( \"%lld\\n\", MCMF() );\nreturn 0;\n}\n\n\n# Solution (flow balance)\n\nSuppose there are $3$ days in total and that on day $i$ we employ $p_i$ people.\n\nThere are three types of volunteers:\n\n• Works from day $1$ to day $3$ at cost $c_1$; we recruit $b_1$ of them.\n• Works from day $2$ to day $3$ at cost $c_2$; we recruit $b_2$ of them.\n• Works from day $1$ to day $2$ at cost $c_3$; we recruit $b_3$ of them.\n\nThen we have the inequalities:\n$$\\begin{cases} b_1+b_3\\ge a_1\\\\b_1+b_2+b_3\\ge a_2\\\\b_1+b_2\\ge a_3 \\end{cases}$$\nLet $d_i$ be the number of volunteers employed on day $i$ beyond the minimum requirement; obviously $d_i\\ge 0$.
Then the system can be rewritten as equalities:\n$$\\begin{cases}p_1=b_1+b_3=a_1+d_1\\\\p_2=b_1+b_2+b_3=a_2+d_2\\\\p_3=b_1+b_2=a_3+d_3\\end{cases}$$\nTaking differences of adjacent equations (with $p_0=p_4=0$) and collecting terms gives:\n$$\\begin{cases}p_1=b_1+b_3=a_1+d_1\\\\p_2-p_1=b_2=a_2-a_1+d_2-d_1\\\\ p_3-p_2=-b_3=a_3-a_2+d_3-d_2\\\\-p_3=-b_1-b_2=-a_3-d_3\\end{cases}\\Rightarrow \\begin{cases}p_1-p_0=b_1+b_3-a_1-d_1=0\\\\p_2-p_1=b_2-a_2+a_1-d_2+d_1=0\\\\ p_3-p_2=-b_3-a_3+a_2-d_3+d_2=0\\\\p_4-p_3=-b_1-b_2+a_3+d_3=0 \\end{cases}$$\nIn a flow network, every point except the source and the sink must satisfy flow conservation: inflow equals outflow. If inflow is recorded as positive and outflow as negative, the algebraic sum of inflow and outflow must be $0$.\n\nAn edge $(x,y)$ of the network appears in the flow-balance equations of both $x$ and $y$, once with a positive sign and once with a negative sign.\n\nTherefore we can create one point for each of the equations above; the equation then expresses flow conservation at that point.\n\nNow look at the final system:\n\n$\\text{Observation Ⅰ.}$ Each variable $b_i,d_i$ appears in exactly two equations, once positive and once negative. So every variable $b_i,d_i$ can serve as an edge in the graph.\n\n$\\text{Observation Ⅱ.}$ Each constant $a_i$ likewise appears in exactly two equations, once positive and once negative. Where it is positive it represents inflow and can be connected to the source; where it is negative it represents outflow and can be connected to the sink.\n\nBy the differencing rule, $a_i$ appears in equations $i$ and $i+1$: negative in equation $i$ and positive in equation $i+1$.\n\nSo the constants tie the points to the source and sink, while the variables are the edges carrying balancing flow between the points.\n\nThe objective $\\min \\sum b_i\\cdot c_i$ is then expressed as the \"cost\".\n\n• Brief description of the construction:\n\nAssume $a_0=a_{n+1}=0$.\n\n• Create source and sink points $s,t$.\n• Create points $1\\sim n+1$, one for each of the $n+1$ equations.\n• Connect point $i+1$ to point $i$ with an edge of infinite capacity and zero cost, corresponding to the slack variable $d_i$.\n• For the $i$-th kind of volunteer, connect $s_i\\rightarrow t_i+1$ with infinite capacity and cost $c_i$.\n• For point $i$: if $a_i-a_{i-1}$ is positive, add an edge $s\\rightarrow i$ with capacity $a_i-a_{i-1}$ and zero cost; if it is negative, add an edge $i\\rightarrow t$ with capacity $a_{i-1}-a_i$ and zero cost.
Equivalent to the constant terms of the equations.\n\nIn practical terms, the constants can be moved to the right-hand side:\n$$\\begin{cases} b_1+b_3-d_1=a_1-a_0\\\\ b_2-d_2+d_1=a_2-a_1\\\\ -b_3-d_3+d_2=a_3-a_2\\\\ -b_1-b_2+d_3=a_4-a_3 \\end{cases}$$\nRead $a_i-a_{i-1}$ as the gain or loss of equation $i$; this makes clear why positive differences connect to the source and negative ones to the sink.\n\n# code\n\n#include <bits/stdc++.h>\nusing namespace std;\n#define maxn 2000\n#define maxm 50000\n#define int long long\n#define inf 0x3f3f3f3f\nstruct node { int to, nxt, flow, cost; }E[maxm];\nint head[maxn], dis[maxn], lst[maxn], vis[maxn], a[maxn];\nint cnt = -1, n, m, s, t;\nqueue < int > q;\n\nvoid addedge( int u, int v, int w, int c ) {\nE[++ cnt] = { v, head[u], w, c }; head[u] = cnt;\nE[++ cnt] = { u, head[v], 0,-c }, head[v] = cnt;\n}\n\nbool SPFA() {\nmemset( lst, -1, sizeof( lst ) );\nmemset( dis, 0x3f, sizeof( dis ) );\nq.push( dis[s] = 0 );\nwhile( ! q.empty() ) {\nint u = q.front(); q.pop(); vis[u] = 0;\nfor( int i = head[u];~ i;i = E[i].nxt ) {\nint v = E[i].to;\nif( dis[v] > dis[u] + E[i].cost and E[i].flow ) {\ndis[v] = dis[u] + E[i].cost; lst[v] = i;\nif( ! 
vis[v] ) vis[v] = 1, q.push( v );\n}\n}\n}\nreturn ~ lst[t];\n}\n\nint MCMF() {\nint ans = 0;\nwhile( SPFA() ) {\nint flow = inf;\nfor( int i = lst[t];~ i;i = lst[E[i ^ 1].to] )\nflow = min( flow, E[i].flow );\nfor( int i = lst[t];~ i;i = lst[E[i ^ 1].to] ) {\nE[i ^ 1].flow += flow;\nE[i].flow -= flow;\nans += flow * E[i].cost;\n}\n}\nreturn ans;\n}\n\nsigned main() {\nmemset( head, -1, sizeof( head ) );\nscanf( \"%lld %lld\", &n, &m );\ns = 0, t = n + 2;\nfor( int i = 1;i <= n;i ++ ) scanf( \"%lld\", &a[i] );\nfor( int i = 1;i <= n;i ++ ) addedge( i + 1, i, inf, 0 );\nfor( int i = 1, u, v, w;i <= m;i ++ ) {\nscanf( \"%lld %lld %lld\", &u, &v, &w );\naddedge( u, v + 1, inf, w );\n}\nfor( int i = 1;i <= n + 1;i ++ )\nif( a[i] - a[i - 1] > 0 ) addedge( s, i, a[i] - a[i - 1], 0 );\nelse addedge( i, t, a[i - 1] - a[i], 0 );\nprintf( \"%lld\\n\", MCMF() );\nreturn 0;\n}\n\n\nIn fact, this flow-balance view of edge construction also comes from the dual perspective of linear programming; it is explained in detail in the write-up for the \"defense line\" problem.\n\nTopics: network-flows" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6553272,"math_prob":0.9992305,"size":10479,"snap":"2022-40-2023-06","text_gpt3_token_len":4121,"char_repetition_ratio":0.097756565,"word_repetition_ratio":0.37952614,"special_character_ratio":0.41778797,"punctuation_ratio":0.14504988,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991679,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T13:56:32Z\",\"WARC-Record-ID\":\"<urn:uuid:64cf9066-fb9d-4ac3-a931-4b95a8682503>\",\"Content-Length\":\"49039\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:95f48709-78f5-4828-b9af-4da0d4020994>\",\"WARC-Concurrent-To\":\"<urn:uuid:879c5806-122f-42a3-9991-72279a63b5dd>\",\"WARC-IP-Address\":\"213.136.76.254\",\"WARC-Target-URI\":\"https://programmer.ink/think/noi2008-volunteer-recruitment-linear-programming-dual-problem-cost-flow.html\",\"WARC-Payload-Digest\":\"sha1:ZN7CHA3UQGHJ7SYWRTP5WUEJDAPM63HX\",\"WARC-Block-Digest\":\"sha1:77D5YLNSZTQOCLRACYVRYH5SS52TW77K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335355.2_warc_CC-MAIN-20220929131813-20220929161813-00655.warc.gz\"}"}