| column | string length range |
| --- | --- |
| url | 14 to 2.42k |
| text | 100 to 1.02M |
| date | 19 to 19 |
| metadata | 1.06k to 1.1k |
https://socratic.org/questions/how-do-you-use-the-factor-theorem-to-determine-whether-y-2-is-a-factor-of-3y-4-6
# How do you use the factor theorem to determine whether y-2 is a factor of 3y^4 – 6y^3 – 5y + 10? May 14, 2018 $(y - 2) \text{ is a factor}$ #### Explanation: $\text{if } y-2 \text{ is a factor, then } f(2) = 0$ $f(2) = 3(2)^{4} - 6(2)^{3} - 5(2) + 10 = 48 - 48 - 10 + 10 = 0$ $\Rightarrow (y - 2) \text{ is a factor of the polynomial}$
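As a cross-check (not part of the original answer), the quartic also factors by grouping, which shows the quotient explicitly: $3y^4 - 6y^3 - 5y + 10 = 3y^3(y - 2) - 5(y - 2) = (y - 2)(3y^3 - 5)$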
2020-09-23 23:43:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7571215033531189, "perplexity": 1344.3158068246173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400212959.12/warc/CC-MAIN-20200923211300-20200924001300-00599.warc.gz"}
https://iotmakerblog.wordpress.com/2016/06/05/09-neural-networks-learning/
# [ 09: Neural Networks – Learning ] Octave Programing Neural network cost function • NNs – one of the most powerful learning algorithms • Is a learning algorithm for fitting the derived parameters given a training set • Let’s have a first look at a neural network cost function • Focus on application of NNs for classification problems • Here’s the set up • Training set is {(x1, y1), (x2, y2), (x3, y3) … (xn, ym) • L = number of layers in the network • In our example below L = 4 • sl = number of units (not counting bias unit) in layer l • So here • l   = 4 • s1 = 3 • s2 = 5 • s3 = 5 • s4 = 4 Types of classification problems with NNs • Two types of classification, as we’ve previously seen • Binary classification • 1 output (0 or 1) • So single output node – value is going to be a real number • k = 1 • NB k is number of units in output layer • sL = 1 • Multi-class classification • k distinct classifications • Typically k is greater than or equal to three • If only two just go for binary • sL = k • So y is a k-dimensional vector of real numbers Cost function for neural networks • The (regularized) logistic regression cost function is as follows; • For neural networks our cost function is a generalization of this equation above, so instead of one output we generate k outputs • Our cost function now outputs a k dimensional vector • hƟ(x) is a k dimensional vector, so hƟ(x)i refers to the ith value in that vector • Costfunction J(Ɵ) is • [-1/m] times a sum of a similar term to which we had for logic regression • But now this is also a sum from k = 1 through to K (K is number of output nodes) • Summation is a sum over the k output units – i.e. for each of the possible classes • So if we had 4 output units then the sum is k = 1 to 4 of the logistic regression over each of the four output units in turn • This looks really complicated, but it’s not so difficult • We don’t sum over the bias terms (hence starting at 1 for the summation) • Even if you do and end up regularizing the bias term this is not a big problem • Is just summation over the terms Woah there – lets take a second to try and understand this! • There are basically two halves to the neural network logistic regression cost function First half • This is just saying • For each training data example (i.e. 1 to m – the first summation) • Sum for each position in the output vector • This is an average sum of logistic regression Second half • This is a massive regularization summation term, which I’m not going to walk through, but it’s a fairly straightforward triple nested summation • This is also called a weight decay term • As before, the lambda value determines the important of the two halves • The regularization term is similar to that in logistic regression • So, we have a cost function, but how do we minimize this bad boy?! Summary of what’s about to go down The following section is, I think, the most complicated thing in the course, so I’m going to take a second to explain the general idea of what we’re going to do; • We’ve already described forward propagation • This is the algorithm which takes your neural network and the initial input into that network and pushes the input through the network • It leads to the generation of an output hypothesis, which may be a single real number, but can also be a vector • We’re now going to describe back propagation • Back propagation basically takes the output you got from your network, compares it to the real value (y) and calculates how wrong the network was (i.e. 
how wrong the parameters were) • It then, using the error you’ve just calculated, back-calculates the error associated with each unit from the preceding layer (i.e. layer L – 1) • This goes on until you reach the input layer (where obviously there is no error, as the activation is the input) • These “error” measurements for each unit can be used to calculate the partial derivatives • Partial derivatives are the bomb, because gradient descent needs them to minimize the cost function • We use the partial derivatives with gradient descent to try minimize the cost function and update all the Ɵ values • This repeats until gradient descent reports convergence • A few things which are good to realize from the get go • There is a Ɵ matrix for each layer in the network • This has each node in layer l as one dimension and each node in l+1 as the other dimension • Similarly, there is going to be a Δ matrix for each layer • This has each node as one dimension and each training data example as the other Back propagation algorithm • We previously spoke about the neural network cost function • Now we’re going to deal with back propagation • Algorithm used to minimize the cost function, as it allows us to calculate partial derivatives! • The cost function used is shown above • We want to find parameters Ɵ which minimize J(Ɵ) • To do so we can use one of the algorithms already described such as • Advanced optimization algorithms • To minimize a cost function we just write code which computes the following • J(Ɵ) • i.e. the cost function itself! • Use the formula above to calculate this value, so we’ve done that • Partial derivative terms • So now we need some way to do that • This is not trivial! Ɵ is indexed in three dimensions because we have separate parameter values for each node in each layer going to each node in the following layer • i.e. each layer has a Ɵ matrix associated with it! • We want to calculate the partial derivative Ɵ with respect to a single parameter • Remember that the partial derivative term we calculate above is a REAL number (not a vector or a matrix) • Ɵ is the input parameters • Ɵ1 is the matrix of weights which define the function mapping from layer 1 to layer 2 • Ɵ101 is the real number parameter which you multiply the bias unit (i.e. 1) with for the bias unit input into the first unit in the second layer • Ɵ111 is the real number parameter which you multiply the first (real) unit with for the first input into the first unit in the second layer • Ɵ211 is the real number parameter which you multiply the first (real) unit with for the first input into the second unit in the second layer • As discussed, Ɵijl i • i here represents the unit in layer l+1 you’re mapping to (destination node) • j is the unit in layer l you’re mapping from (origin node) • l is the layer your mapping from (to layer l+1) (origin layer) • NB • The terms destination node, origin node and origin layer are terms I’ve made up! • So – this partial derivative term is • The partial derivative of a 3-way indexed dataset with respect to a real number (which is one of the values in that dataset) • One training example • Imagine we just have a single pair (x,y) – entire training set • How would we deal with this example? 
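For reference, the regularized neural-network cost function discussed above is, in the course's standard notation (with $m$ training examples, $K$ output units and regularization over all non-bias weights):

$$J(\Theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\left[ y_k^{(i)}\log\left(h_\Theta(x^{(i)})\right)_k + \left(1-y_k^{(i)}\right)\log\left(1-\left(h_\Theta(x^{(i)})\right)_k\right)\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\left(\Theta_{ji}^{(l)}\right)^2$$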
• The forward propagation algorithm operates as follows • Layer 1 • a1 = x • z2 = Ɵ1a1 • Layer 2 • a2 = g(z2) (add a02) • z3 = Ɵ2a2 • Layer 3 • a3 = g(z3) (add a03) • z4 = Ɵ3a3 • Output • a4 = hƟ(x) = g(z4) • This is the vectorized implementation of forward propagation • Lets compute activation values sequentially (below just re-iterates what we had above!) What is back propagation? • Use it to compute the partial derivatives • Before we dive into the mechanics, let’s get an idea regarding the intuition of the algorithm • For each node we can calculate (δjl) – this is the error of node j in layer l • If we remember, ajl is the activation of node j in layer l • Remember the activation is a totally calculated value, so we’d expect there to be some error compared to the “real” value • The delta term captures this error • But the problem here is, “what is this ‘real’ value, and how do we calculate it?!” • The NN is a totally artificial construct • The only “real” value we have is our actual classification (our y value) – so that’s where we start • If we use our example and look at the fourth (output) layer, we can first calculate • δj= aj– yj • [Activation of the unit] – [the actual value observed in the training example] • We could also write ajas hƟ(x)j • Although I’m not sure why we would? • This is an individual example implementation • Instead of focussing on each node, let’s think about this as a vectorized problem • δ= a– y • So here δis the vector of errors for the 4th layer • ais the vector of activation values for the 4th layer • With δcalculated, we can determine the error terms for the other layers as follows; • Taking a second to break this down • Ɵis the vector of parameters for the 3->4 layer mapping • δis (as calculated) the error vector for the 4th layer • g'(z3) is the first derivative of the activation function g evaluated by the input values given by z • You can do the calculus if you want (…), but when you calculate this derivative you get • g'(z3) = a. * (1 – a3) • So, more easily • δ= (Ɵ3)δ. *(a. *(1 – a3)) • . * is the element wise multiplication between the two vectors • Why element wise? Because this is essentially an extension of individual values in a vectorized implementation, so element wise multiplication gives that effect • We highlighted it just in case you think it’s a typo! Analyzing the mathematics • And if we take a second to consider the vector dimensionality (with our example above [3-5-5-4]) • Ɵ3 = is a matrix which is [4 X 5] (if we don’t include the bias term, 4 X 6 if we do) •  3)T = therefore, is a [5 X 4] matrix •  δ4 = is a 4×1 vector • So when we multiply a [5 X 4] matrix with a [4 X 1] vector we get a [5 X 1] vector • Which, low and behold, is the same dimensionality as the a3 vector, meaning we can run our pairwise multiplication • For δwhen you calculate the derivative terms you get a. * (1 – a3) • Similarly For δ2 when you calculate the derivative terms you get a. * (1 – a2) • So to calculate δ2 we do δ= (Ɵ2)δ3 . *(a. * (1 – a2)) • There’s no δ1 term • Because that was the input! Why do we do this? 
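As a concrete illustration of the forward pass and the δ computations just described, here is a minimal NumPy sketch for the 3-5-5-4 example network. The random weights, the input/target values and the sigmoid helper are placeholders for illustration, not code from the original notes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder weights for the 3-5-5-4 example network (each matrix includes a bias column).
rng = np.random.default_rng(0)
Theta1 = rng.normal(size=(5, 4))    # layer 1 (3 units + bias) -> layer 2 (5 units)
Theta2 = rng.normal(size=(5, 6))    # layer 2 (5 units + bias) -> layer 3 (5 units)
Theta3 = rng.normal(size=(4, 6))    # layer 3 (5 units + bias) -> layer 4 (4 outputs)

x = np.array([0.5, -1.2, 0.3])      # a single training example (3 features)
y = np.array([0.0, 1.0, 0.0, 0.0])  # its one-hot target (4 classes)

# Forward propagation: a1 = x, add the bias unit before each mapping.
a1 = np.concatenate(([1.0], x))
z2 = Theta1 @ a1
a2 = np.concatenate(([1.0], sigmoid(z2)))
z3 = Theta2 @ a2
a3 = np.concatenate(([1.0], sigmoid(z3)))
z4 = Theta3 @ a3
a4 = sigmoid(z4)                    # = h_theta(x), the network output

# Back propagation of the delta ("error") terms, using g'(z) = a .* (1 - a).
# The bias column of each Theta is dropped so the deltas line up with the non-bias units.
delta4 = a4 - y
delta3 = (Theta3[:, 1:].T @ delta4) * a3[1:] * (1 - a3[1:])
delta2 = (Theta2[:, 1:].T @ delta3) * a2[1:] * (1 - a2[1:])
# There is no delta1: the input layer has no error term.
print(delta4, delta3, delta2)
```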
• We do all this to get all the δ terms, and we want the δ terms because through a very complicated derivation you can use δ to get the partial derivative of Ɵ with respect to individual parameters (if you ignore regularization, or regularization is 0, which we deal with later) •  = ajδi(l+1) • By doing back propagation and computing the delta terms you can then compute the partial derivative terms • We need the partial derivatives to minimize the cost function! Putting it all together to get the partial derivatives! • What is really happening – lets look at a more complex example • Training set of m examples • First, set the delta values • Set equal to 0 for all values • Eventually these Δ values will be used to compute the partial derivative • Will be used as accumulators for computing the partial derivatives • Next, loop through the training set • i.e. for each example in the training set (dealing with each example as (x,y) • Set a(activation of input layer) = xi • Performforward propagation to compute afor each layer (l = 1,2, … L) • i.e. run forward propagation • Then, use the output label for the specific example we’re looking at to calculate δL where δ= a– yi • So we initially calculate the delta value for the output layer • Then, using back propagation we move back through the network from layer L-1 down to layer • Finally, use Δ to accumulate the partial derivative terms • Note here • l = layer • j = node in that layer • i = the error of the affected node in the target layer • You can vectorize the Δ expression too, as • Finally • After executing the body of the loop, exit the for loop and compute • When j = 0 we have no regularization term • At the end of ALL this • You’ve calculated all the D terms above using Δ • NB – each D term above is a real number! • We can show that each D is equal to the following • We have calculated the partial derivative for each parameter • We can then use these in gradient descent or one of the advanced optimization algorithms • Phew! • What a load of hassle! Back propagation intuition • Some additionally back propagation notes • In case you found the preceding unclear, which it shouldn’t be as it’s fairly heavily modified with my own explanatory notes • Back propagation is hard(ish…) • But don’t let that discourage you • It’s hard in as much as it’s confusing – it’s not difficult, just complex • Looking at mechanical steps of back propagation Forward propagation with pictures! • Feeding input into the input layer (xi, yi) • Note that x and y here are vectors from 1 to n where n is the number of features • So above, our data has two features (hence x1 and x2) • With out input data present we use forward propagation • The sigmoid function applied to the z values gives the activation values • Below we show exactly how the z value is calculated for an example Back propagation • With forwardprop done we move on to do back propagation • Back propagation is doing something very similar to forward propagation, but backwards • Very similar though • Let’s look at the cost function again… • Below we have the cost function if there is a single output (i.e. binary classification) • This function cycles over each example, so the cost for one example really boils down to this • Which, we can think of as a sigmoidal version of the squared difference (check out the derivation if you don’t believe me) • So, basically saying, “how well is the network doing on example “? 
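For reference, the accumulation and partial-derivative formulas described in the back-propagation algorithm above are, in the course's standard notation (the regularization scaling shown is the $\lambda/m$ convention used in the course's programming exercises):

$$\frac{\partial}{\partial \Theta_{ij}^{(l)}} J(\Theta) = a_j^{(l)}\,\delta_i^{(l+1)} \qquad \text{(ignoring regularization)}$$

$$\Delta^{(l)} := \Delta^{(l)} + \delta^{(l+1)}\left(a^{(l)}\right)^T$$

$$D_{ij}^{(l)} = \frac{1}{m}\Delta_{ij}^{(l)} + \frac{\lambda}{m}\Theta_{ij}^{(l)} \;\;(j \ge 1), \qquad D_{ij}^{(l)} = \frac{1}{m}\Delta_{ij}^{(l)} \;\;(j = 0), \qquad \frac{\partial}{\partial \Theta_{ij}^{(l)}} J(\Theta) = D_{ij}^{(l)}$$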
• We can think about a δ term on a unit as the “error” of cost for the activation value associated with a unit • Where cost is as defined above • Cost function is a function of y value and the hypothesis function • So – for the output layer, back propagation sets the δ value as [a – y] • Difference between activation and actual value • We then propagate these values backwards; • Looking at another example to see how we actually calculate the delta value; • So, in effect, • Back propagation calculates the δ, and those δ values are the weighted sum of the next layer’s delta values, weighted by the parameter associated with the links • Forward propagation calculates the activation (a) values, which • Depending on how you implement you may compute the delta values of the bias values • However, these aren’t actually used, so it’s a bit inefficient, but not a lot more! Implementation notes – unrolling parameters (matrices) • Needed for using advanced optimization routines • Is the MATLAB/octave code • But theta is going to be matrices • fminunc takes the costfunction and initial theta values • These routines assume theta is a parameter vector • Also assumes the gradient created by costFunction is a vector • For NNs, our parameters are matrices • e.g. Example • Use the thetaVec = [ Theta1(:); Theta2(:); Theta3(:)]; notation to unroll the matrices into a long vector • To go back you use • Theta1 = resape(thetaVec(1:110), 10, 11) • Backpropagation has a lot of details, small bugs can be present and ruin it 😦 • This may mean it looks like J(Ɵ) is decreasing, but in reality it may not be decreasing by as much as it should • So using a numeric method to check the gradient can help diagnose a bug • Gradient checking helps make sure an implementation is working correctly • Example • Have an function J(Ɵ) • Estimate derivative of function at point Ɵ (where Ɵ is a real number) • How? • Numerically • Compute Ɵ + ε • Compute Ɵ – ε • Join them by a straight line • Use the slope of that line as an approximation to the derivative • Usually, epsilon is pretty small (0.0001) • If epsilon becomes REALLY small then the term BECOMES the slopes derivative • The is the two sided difference (as opposed to one sided difference, which would be J(Ɵ + ε) – J(Ɵ) /ε •  If Ɵ is a vector with n elements we can use a similar approach to look at the partial derivatives • So, in octave we use the following code the numerically compute the derivatives • So on each loop thetaPlus = theta except for thetaPlus(i) • Resets thetaPlus on each loop • Create a vector of partial derivative approximations • Using the vector of gradients from backprop (DVec) • Check that gradApprox is basically equal to DVec • Gives confidence that the Backproc implementation is correc • Implementation note • Implement back propagation to compute DVec • Implement numerical gradient checking to compute gradApprox • Check they’re basically the same (up to a few decimal places) • Before using the code for learning turn off gradient checking • Why? • GradAprox stuff is very computationally expensive • In contrast backprop is much more efficient (just more fiddly) Random initialization • Pick random small initial values for all the theta values • If you start them on zero (which does work for linear regression) then the algorithm fails – all activation values for each layer are the same • So chose random values! 
• Between 0 and 1, then scale by epsilon (where epsilon is a constant) Putting it all together • 1) – pick a network architecture • Number of • Input units – number of dimensions x (dimensions of feature vector) • Output units – number of classes in classification problem • Hidden units • Default might be • 1 hidden layer • Should probably have • Same number of units in each layer • Or 1.5-2 x number of input features • Normally • More hidden units is better • But more is more computational expensive • We’ll discuss architecture more later • 2) – Training a neural network • 2.1) Randomly initialize the weights • Small values near 0 • 2.2) Implement forward propagation to get hƟ(x)i for any xi • 2.3) Implement code to compute the cost function J(Ɵ) • 2.4) Implement back propagation to compute the partial derivatives • General implementation below for i = 1:m { Forward propagation on (xi, yi) –> get activation (a) terms Back propagation on (xi, yi) –> get delta (δ) terms Compute Δ := Δl + δl+1(al)T } With this done compute the partial derivative terms • Notes on implementation • Usually done with a for loop over training examples (for forward and back propagation) • Can be done without a for loop, but this is a much more complicated way of doing things • Be careful • 2.5) Use gradient checking to compare the partial derivatives computed using the above algorithm and numerical estimation of gradient of J(Ɵ) • Disable the gradient checking code for when you actually run it • 2.6) Use gradient descent or an advanced optimization method with back propagation to try to minimize J(Ɵ) as a function of parameters Ɵ • Here J(Ɵ) is non-convex • Can be susceptible to local minimum • In practice this is not usually a huge problem • Can’t guarantee programs with find global optimum should find good local optimum at least • e.g. above pretending data only has two features to easily display what’s going on • Our minimum here represents a hypothesis output which is pretty close to y • If you took one of the peaks hypothesis is far from y • Gradient descent will start from some random point and move downhill • Back propagation calculates gradient down that hill Application of  Neural Networks
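Tying steps 2.1-2.6 above together, the sketch below is one rough NumPy implementation of the training computations (cost, back-propagated gradients, parameter unrolling and a numerical gradient check) for the illustrative 3-5-5-4 layout. All function and variable names are illustrative assumptions, and the random data is only a smoke test, not anything from the original notes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(thetas, x):
    """Forward propagation; returns the activation vector of every layer (bias units included)."""
    a = np.concatenate(([1.0], x))
    activations = [a]
    for i, Theta in enumerate(thetas):
        a = sigmoid(Theta @ a)
        if i < len(thetas) - 1:                    # no bias unit on the output layer
            a = np.concatenate(([1.0], a))
        activations.append(a)
    return activations

def cost(thetas, X, Y, lam):
    """Regularized cross-entropy cost J(Theta) over the whole training set."""
    m = X.shape[0]
    J = 0.0
    for x, y in zip(X, Y):
        h = forward(thetas, x)[-1]
        J += -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))
    J /= m
    J += lam / (2 * m) * sum(np.sum(T[:, 1:] ** 2) for T in thetas)
    return J

def gradients(thetas, X, Y, lam):
    """Back propagation: accumulate Delta over all examples, then form the D (gradient) matrices."""
    m = X.shape[0]
    Deltas = [np.zeros_like(T) for T in thetas]
    for x, y in zip(X, Y):
        acts = forward(thetas, x)
        delta = acts[-1] - y                       # output-layer error
        for l in range(len(thetas) - 1, -1, -1):
            Deltas[l] += np.outer(delta, acts[l])  # Delta += delta_(l+1) * a_l^T
            if l > 0:
                a = acts[l]
                delta = (thetas[l][:, 1:].T @ delta) * a[1:] * (1 - a[1:])
    Ds = []
    for T, Delta in zip(thetas, Deltas):
        D = Delta / m
        D[:, 1:] += lam / m * T[:, 1:]             # regularize everything except the bias column
        Ds.append(D)
    return Ds

def numerical_gradient(thetas, X, Y, lam, eps=1e-4):
    """Two-sided difference estimate of every partial derivative -- slow, only for debugging."""
    grads = []
    for T in thetas:
        G = np.zeros_like(T)
        for idx in np.ndindex(*T.shape):
            orig = T[idx]
            T[idx] = orig + eps
            plus = cost(thetas, X, Y, lam)
            T[idx] = orig - eps
            minus = cost(thetas, X, Y, lam)
            T[idx] = orig
            G[idx] = (plus - minus) / (2 * eps)
        grads.append(G)
    return grads

# Smoke test on the illustrative 3-5-5-4 layout with random data.
rng = np.random.default_rng(0)
thetas = [rng.normal(scale=0.1, size=s) for s in [(5, 4), (5, 6), (4, 6)]]
X = rng.normal(size=(8, 3))
Y = np.eye(4)[rng.integers(0, 4, size=8)]

# "Unrolling" the parameter matrices into one long vector for an advanced optimizer.
theta_vec = np.concatenate([T.ravel() for T in thetas])

backprop = gradients(thetas, X, Y, lam=0.1)
numeric = numerical_gradient(thetas, X, Y, lam=0.1)
print(max(np.max(np.abs(b - n)) for b, n in zip(backprop, numeric)))
# the two gradient estimates should agree to several decimal places
```

As the notes say, the numerical check is far too expensive to leave on during training; it is run once to validate the back-propagation code and then disabled.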
2017-10-21 19:20:20
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8001513481140137, "perplexity": 1583.2579442866129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824894.98/warc/CC-MAIN-20171021190701-20171021210701-00883.warc.gz"}
http://openstudy.com/updates/55ae6808e4b071e6530cee27
anonymous one year ago Solve for x: |x + 2| + 16 = 14 A. x = −32 and x = −4 B. x = −4 and x = 0 C. x = 0 and x = 28 D. No solution 1. anonymous @HectorH 2. anonymous i think its D 3. Janu16 No solution 4. anonymous ok i was right thnx :) 5. anonymous We subtract 16 from both sides to get $\left| x+2 \right| = -2$ Then thats impossible since absolute value bars keep it from going negative. 6. anonymous yup :) thnx 7. Janu16 @jkl5149 its no solution 8. anonymous @Janu16 Yes.
2016-10-27 11:39:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6933900713920593, "perplexity": 3019.002583718347}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721268.95/warc/CC-MAIN-20161020183841-00228-ip-10-171-6-4.ec2.internal.warc.gz"}
https://brkmnd.com/pages/projects/Default.aspx?id=63
# Dataset ## 02.02.2022 ### Contents/Index Introduction Lexical Analysis @Dataset Neural Network Complete Me The data set consists of C programs obtained from the GNU GCC repo (I'm somewhat sure) and the id Software repo. Each program is of varying length. We call a tokenized C program a sample. A program limit of 5000 tokens has been imposed - we discard longer programs for now (not to mention empty programs). So the stats for the dataset are: data processed, stats: nr prgs : 387 avg len : 1634 var len : 1430 max len : 4947 min len : 9 discarded len : 144 ## Data Preparation We tokenize each C program: we transform it into a list of tokens, and it is now a sample. We store all samples in a json-file. This is done in process_data.py. Then we load them and split them into a training set and a test set. This is supervised learning: each sample is a pair of the original input and the same sequence shifted one token to the left and padded with an <eof> token. For the program described on the previous page the original is: @ident @ident ( ) { @ident ( @string ) ; } And the shifted and padded version is: @ident ( ) { @ident ( @string ) ; } <eof> The original sample we treat as a data point $x$, the input of the network. The shifted and padded version we treat as the target $y$. This is the true value, the one we want the prediction to match. We have the following stats for the train/test data. Here $vocab$ is just the set of tokens that have been seen in the C files. For example we most likely have $@num \in vocab$. Now: vocab size : 118 dataset size : 387 train-set size : 310 test-set size : 77 train-set ratio : 0.8010335917312662 Cross-validation has been used: the training set has been divided into $k = 10$ folds, with 9/10 used for training and 1/10 for validation. We have the following split sizes for each epoch: train_split length : 279 val_split length : 31 ## Make a note One thing to note about this parser: it is bound to learn the coding style of the coder - at least to some degree. Looking through the original programs I have noticed a lot of one-line functions. These contain one return statement and nothing else. Furthermore a lot of #include and #define statements are at the top of each program. This might have an impact on the parser. It might develop a belief that functions are most likely one-liners. This is left as a note. One can just use a different dataset with a different coding style to obtain a different parser.
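A rough Python sketch of the input/target pairing and the 80/20 split described above; the file name, the shuffling seed and the helper names are assumptions for illustration and are not taken from process_data.py.

```python
import json
import random

def make_pair(tokens, eof="<eof>"):
    """Build (x, y): y is x shifted one token to the left and padded with <eof>."""
    return tokens, tokens[1:] + [eof]

# Load the tokenized samples (one token list per program); file name is an assumption.
with open("samples.json") as fh:
    samples = json.load(fh)

# Discard empty programs and programs longer than the 5000-token limit.
samples = [s for s in samples if 0 < len(s) <= 5000]

pairs = [make_pair(s) for s in samples]

# 80/20 train/test split.
random.seed(0)
random.shuffle(pairs)
cut = int(0.8 * len(pairs))
train, test = pairs[:cut], pairs[cut:]

# Vocabulary = every token seen in the corpus.
vocab = {tok for s in samples for tok in s}
print(len(vocab), len(train), len(test))
```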
2022-09-27 11:09:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3193455934524536, "perplexity": 1697.9012738094646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00429.warc.gz"}
https://testbook.com/question-answer/for-every-10-fold-increase-in-temperature-the-rev--5f71c1ca6af969983a541e47
# For every 10 fold increase in temperature, the reverse saturation current of a p-n junction will be increased by This question was previously asked in ISRO (VSSC) Technical Assistant Electronics 2018 Official Paper 1. 10 times 2. 2 times 3. 4 times 4. remains same Option 2 : 2 times ## Detailed Solution Reverse saturation current: • The reverse saturation current of the diode increases with an increase in the temperature. • The rise is about 7% per °C for both germanium and silicon, and the current approximately doubles for every 10°C rise in temperature. Mathematically, if the reverse saturation current is $I_{01}$ at temperature $T_1$ and $I_{02}$ at temperature $T_2$, then: $$I_{02} = I_{01} \cdot 2^{(T_2-T_1)/10}$$ The reverse saturation current is given by: $${I_o} = A.q.n_i^2\left[ {\frac{{{D_p}}}{{{L_P}{N_D}}} + \frac{{{D_n}}}{{{L_n}{N_A}}}} \right]$$ $${I_o} \propto n_i^2\;$$ Also, $$n_i^2=N_cN_ve^{-{\frac{E_g}{KT}}}$$ $$n_i^2 \propto {e^{ - \frac{{{E_g}}}{{{KT}}}}}\;$$ The diode for which $E_g/KT$ is larger will have a smaller $n_i$ and consequently a smaller reverse saturation current.
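Substituting a rise of $T_2 - T_1 = 10\,^{\circ}\mathrm{C}$ into the relation above gives $I_{02} = I_{01}\cdot 2^{10/10} = 2\,I_{01}$, i.e. the reverse saturation current doubles, which is option 2.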
2021-09-24 20:28:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5753727555274963, "perplexity": 6259.7442920120975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057580.39/warc/CC-MAIN-20210924201616-20210924231616-00125.warc.gz"}
http://gmatclub.com/forum/the-single-family-house-constructed-by-the-yana-a-native-85169.html?sort_by_oldest=true
Find all School-related info fast with the new School-Specific MBA Forum It is currently 30 Aug 2016, 08:14 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # The single family house constructed by the Yana, a Native Author Message TAGS: ### Hide Tags Manager Affiliations: CFA L3 Candidate, Grad w/ Highest Honors Joined: 03 Nov 2007 Posts: 130 Location: USA Schools: Chicago Booth R2 (WL), Wharton R2 w/ int, Kellogg R2 w/ int WE 1: Global Operations (Futures & Portfolio Financing) - Hedge Fund ($10bn+ Multi-Strat) WE 2: Investment Analyst (Credit strategies) - Fund of Hedge Fund ($10bn+ Multi-Strat) Followers: 1 Kudos [?]: 83 [1] , given: 9 The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 11 Oct 2009, 12:48 1 KUDOS 38 This post was BOOKMARKED 00:00 Difficulty: 55% (hard) Question Stats: 56% (01:46) correct 44% (00:57) wrong based on 1046 sessions ### HideShow timer Statistics The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, its framework of poles overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet. A) banked with dirt to a height of B) banked with dirt as high as that of C) banked them with dirt to a height of D) was banked with dirt as high as E) was banked with dirt as high as that of [Reveal] Spoiler: A) banked with dirt to a height of. Personally, I had it down to either A or D. I do not get why D is wrong. I thought "as high as" was the correct idiom. Is it because "was" is redundant? [Reveal] Spoiler: OA SVP Joined: 29 Aug 2007 Posts: 2492 Followers: 66 Kudos [?]: 693 [4] , given: 19 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 11 Oct 2009, 13:41 4 KUDOS 9 This post was BOOKMARKED robertrdzak wrote: The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, its framework of poles overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet. A) banked with dirt to a height of B) banked with dirt as high as that of C) banked them with dirt to a height of D) was banked with dirt as high as E) was banked with dirt as high as that of [Reveal] Spoiler: A) banked with dirt to a height of. Personally, I had it down to either A or D. I do not get why D is wrong. I thought "as high as" was the correct idiom. Is it because "was" is redundant? Lets look this way: The single family house constructed by the Yana (...) was conical in shape, its framework of poles overlaid with slabs of bark and banked with dirt to a height of three to four feet. A) "banked" is parallel with "overlaid" as under banked with dirt (to a height) of x feet overlaid with slabs of bark Hope that makes sense. B) "as high as" is unnecessary and "that of.." is inapproperiate. C) "them"? D) "was" is not paralalled with "overlaid" and is in passive. 
"as high as" is unnecessary for no comparision. E) Similar to D. "was" is not paralllel with "overlaid" and is in passive. "as high as" is unnecessary for no comparision. "that of"? _________________ Gmat: http://gmatclub.com/forum/everything-you-need-to-prepare-for-the-gmat-revised-77983.html GT Manager Affiliations: CFA Level 2 Candidate Joined: 29 Jun 2009 Posts: 221 Schools: RD 2: Darden Class of 2012 Followers: 3 Kudos [?]: 180 [6] , given: 2 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 11 Oct 2009, 14:52 6 KUDOS 2 This post was BOOKMARKED OA is A Trick is to realize that banked with dirt applies to the poles and not the house overlaid with slabs is parallel to banked with dirt. I made the same mistake with this question Intern Joined: 17 Dec 2009 Posts: 9 Followers: 0 Kudos [?]: 23 [14] , given: 0 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 05 May 2010, 10:58 14 KUDOS 9 This post was BOOKMARKED A very interesting point here to note is that when ever there is a "as high as" or as low as" or any such comparison, it should be done against a fixed number. Here it is compared against a variable figure, 3 to 4. So B, D and E are out. "Banked them", no use... 'A' is All Clear.... SVP Joined: 17 Feb 2010 Posts: 1558 Followers: 17 Kudos [?]: 499 [0], given: 6 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 05 May 2010, 11:50 good explanation by gmattiger and sumeetgill. A is best GMAT Club Legend Joined: 01 Oct 2013 Posts: 9310 Followers: 807 Kudos [?]: 165 [0], given: 0 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 07 Oct 2013, 02:28 Hello from the GMAT Club VerbalBot! Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos). Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email. Manager Joined: 26 Jan 2014 Posts: 71 Followers: 1 Kudos [?]: 14 [1] , given: 77 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 27 Jan 2014, 00:23 1 KUDOS GMAT TIGER & SumeetGill have made the whole point of this Qs. GREAT JOB! I was missed until I find this post Manager Joined: 24 Jun 2013 Posts: 65 Schools: ISB '16, NUS '15 Followers: 0 Kudos [?]: 4 [0], given: 49 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 07 Jun 2014, 08:19 Hi E-GMAT, My analysis , Sentence structure for this sentence would be , The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, its framework of poles overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet. Bold face is a MAIN Sub and Main Verb. My question is "its framework of poles overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet." a noun plus noun modifier? 
Thanks e-GMAT Representative Joined: 02 Nov 2011 Posts: 1984 Followers: 1947 Kudos [?]: 6450 [3] , given: 260 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 10 Jun 2014, 12:11 3 KUDOS Expert's post Nitinaka19 wrote: Hi E-GMAT, My analysis , Sentence structure for this sentence would be , The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, its framework of poles overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet. Bold face is a MAIN Sub and Main Verb. My question is "its framework of poles overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet." a noun plus noun modifier? Thanks Hi Nitinaka19, Thank you for the query. Let’s look at the structure of this sentence: • The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, o its framework of poles overlaid with slabs of bark, either cedar or pine, • and banked with dirt to a height of three to four feet. You have correctly identified that “its framework…..four feet” is a noun + noun modifier. In this modifier: Noun- its framework of poles Noun Modifier- overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet. So, the two parallel verb-ed modifiers ‘overlaid’ and ‘banked’ modify the noun in this modifier . Hope this helps! Deepak _________________ | '4 out of Top 5' Instructors on gmatclub | 70 point improvement guarantee | www.e-gmat.com Manager Joined: 18 Oct 2013 Posts: 51 Concentration: Entrepreneurship, Strategy GMAT 1: 660 Q49 V31 WE: Consulting (Computer Software) Followers: 0 Kudos [?]: 5 [0], given: 9 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 11 Jun 2014, 02:09 robertrdzak wrote: The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, its framework of poles overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet. A) banked with dirt to a height of B) banked with dirt as high as that of C) banked them with dirt to a height of D) was banked with dirt as high as E) was banked with dirt as high as that of [Reveal] Spoiler: A) banked with dirt to a height of. Personally, I had it down to either A or D. I do not get why D is wrong. I thought "as high as" was the correct idiom. Is it because "was" is redundant? Hi E-GMAT, I am able to eliminate option B,C and E. However, Option A and D have confused me. If we consider the parallelism then "overlaid" and "banked" are modifying noun "framework of poles". The Option A says "banked" in which the doer of the action "banked" is "framework of poles" but the meaning says that doer of the action "banked" should be someone else. So, I believe that the right answer should be option D where "was banked" is stated. Is my thought process correct? Thanks! _________________ Preparing for the next shot!!! e-GMAT Representative Joined: 02 Nov 2011 Posts: 1984 Followers: 1947 Kudos [?]: 6450 [2] , given: 260 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 12 Jun 2014, 09:36 2 KUDOS Expert's post 1 This post was BOOKMARKED anujag24 wrote: Hi E-GMAT, I am able to eliminate option B,C and E. 
However, Option A and D have confused me. If we consider the parallelism then "overlaid" and "banked" are modifying noun "framework of poles". The Option A says "banked" in which the doer of the action "banked" is "framework of poles" but the meaning says that doer of the action "banked" should be someone else. So, I believe that the right answer should be option D where "was banked" is stated. Is my thought process correct? Thanks! Hi anujag24, Thank you for the query. You are making an error in differentiating between the simple past tense verb and the verb-ed modifiers. In the original sentence, ‘overlaid’ and ‘banked’ are not the verbs for the subject ‘framework of poles’. They act as verb-ed modifiers in this sentence. Let’s take an example to understand the verb-ed modifiers: The lamp placed on the table is a gift. Now, does the lamp to the action of placing? No. The lamp cannot place itself. So, ‘placed’ is not a verb here. The verb for the subject ‘the lamp’ is ‘is’. What is the function of the phrase ‘placed on the table’? It provides us additional information about the lamp. So, we can write the above sentence as: The lamp (that is) placed on the table is a gift. Now, the verb-ed form ‘placed’ is known as the verb-ed modifier. Please refer to the following link to know more about verb-ed modifiers: ed-forms-verbs-or-modifiers-134691.html Once you have gone through the article, try to apply the concept on the above question. If you face any problems, refer to my explanation below: • The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, • its framework of poles o overlaid with slabs of bark, either cedar or pine, o and banked with dirt to a height of three to four feet. Is ‘its framework of poles’ the doer of the verbs ‘overlaid’ and ‘banked’? No, it is not. A framework can’t perform the actions ‘overlaid’ and ‘banked’. So, these verb-ed forms are actually modifiers. So, these two modifiers are providing additional information about the noun ‘framework of poles’ that this framework was overlaid with slabs and it was banked with dirt. Now, if we consider OPTION D: • The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, • its framework of poles o overlaid with slabs of bark, either cedar or pine, o and was banked with dirt as high as three to four feet. Since ‘overlaid’ is a modifier and ‘was banked’ is a verb, these two can’t be parallel to each other. Hence this option is incorrect. Also, the expression ‘as… as’ is unnecessary in this option. ‘As…. as’ is used to show a comparison between two entities. There is no comparison in this sentence. It just tells us how high the dirt was. Hope this helps! Deepak _________________ | '4 out of Top 5' Instructors on gmatclub | 70 point improvement guarantee | www.e-gmat.com Current Student Status: Everyone is a leader. Just stop listening to others. Joined: 22 Mar 2013 Posts: 993 Location: India GPA: 3.51 WE: Information Technology (Computer Software) Followers: 159 Kudos [?]: 1263 [1] , given: 226 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 12 Jun 2014, 14:06 1 KUDOS its framework of poles overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet. 
Above mentioned part of the sentence is an absolute phrase modifier or noun + noun modifier, which is perfectly used in the end of sentence. Note: this part can be placed even in the front of the sentence. Its framework of poles overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet, the single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, its framework of poles overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet. A) banked with dirt to a height of -- correct -- banked is -ed modifier to framework. B) banked with dirt as high as that of -- as that of is not ok. C) banked them with dirt to a height of -- only plural possible antecedent for them is people. D) was banked with dirt as high as -- As per original sentence framework is banked with dirt, but in this sentence as verb was is introduced The single family house (subject) appears to be banked with dirt. E) was banked with dirt as high as that of -- same as D. _________________ Piyush K ----------------------- Our greatest weakness lies in giving up. The most certain way to succeed is to try just one more time. ― Thomas A. Edison Don't forget to press--> Kudos My Articles: 1. WOULD: when to use? | 2. All GMATPrep RCs (New) Tip: Before exam a week earlier don't forget to exhaust all gmatprep problems specially for "sentence correction". Manager Joined: 18 Oct 2013 Posts: 51 Concentration: Entrepreneurship, Strategy GMAT 1: 660 Q49 V31 WE: Consulting (Computer Software) Followers: 0 Kudos [?]: 5 [0], given: 9 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 12 Jun 2014, 22:38 egmat wrote: anujag24 wrote: Hi E-GMAT, I am able to eliminate option B,C and E. However, Option A and D have confused me. If we consider the parallelism then "overlaid" and "banked" are modifying noun "framework of poles". The Option A says "banked" in which the doer of the action "banked" is "framework of poles" but the meaning says that doer of the action "banked" should be someone else. So, I believe that the right answer should be option D where "was banked" is stated. Is my thought process correct? Thanks! Hi anujag24, Thank you for the query. You are making an error in differentiating between the simple past tense verb and the verb-ed modifiers. In the original sentence, ‘overlaid’ and ‘banked’ are not the verbs for the subject ‘framework of poles’. They act as verb-ed modifiers in this sentence. Let’s take an example to understand the verb-ed modifiers: The lamp placed on the table is a gift. Now, does the lamp to the action of placing? No. The lamp cannot place itself. So, ‘placed’ is not a verb here. The verb for the subject ‘the lamp’ is ‘is’. What is the function of the phrase ‘placed on the table’? It provides us additional information about the lamp. So, we can write the above sentence as: The lamp (that is) placed on the table is a gift. Now, the verb-ed form ‘placed’ is known as the verb-ed modifier. Please refer to the following link to know more about verb-ed modifiers: ed-forms-verbs-or-modifiers-134691.html Once you have gone through the article, try to apply the concept on the above question. 
If you face any problems, refer to my explanation below: • The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, • its framework of poles o overlaid with slabs of bark, either cedar or pine, o and banked with dirt to a height of three to four feet. Is ‘its framework of poles’ the doer of the verbs ‘overlaid’ and ‘banked’? No, it is not. A framework can’t perform the actions ‘overlaid’ and ‘banked’. So, these verb-ed forms are actually modifiers. So, these two modifiers are providing additional information about the noun ‘framework of poles’ that this framework was overlaid with slabs and it was banked with dirt. Now, if we consider OPTION D: • The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, • its framework of poles o overlaid with slabs of bark, either cedar or pine, o and was banked with dirt as high as three to four feet. Since ‘overlaid’ is a modifier and ‘was banked’ is a verb, these two can’t be parallel to each other. Hence this option is incorrect. Also, the expression ‘as… as’ is unnecessary in this option. ‘As…. as’ is used to show a comparison between two entities. There is no comparison in this sentence. It just tells us how high the dirt was. Hope this helps! Deepak Thanks Deepak..... Now, I can figure out the difference. _________________ Preparing for the next shot!!! Manager Status: Perspiring Joined: 15 Feb 2012 Posts: 119 Concentration: Marketing, Strategy GPA: 3.6 WE: Engineering (Computer Software) Followers: 6 Kudos [?]: 91 [0], given: 216 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 18 Sep 2014, 17:25 Dear @E-GMAT, If banked is parallel with overlaid, then there must have been an 'and' after was conical in shape, AND its framework of poles... ? I thought this to be list with 3 items : was conical..., its frameworks..., and was banked... Manager Joined: 27 May 2014 Posts: 96 Location: India Concentration: Technology, General Management Schools: HKUST '15, ISB '15 GMAT Date: 12-26-2014 GPA: 3 Followers: 1 Kudos [?]: 80 [0], given: 43 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 19 Sep 2014, 01:16 robertrdzak wrote: The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, its framework of poles overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet. A) banked with dirt to a height of B) banked with dirt as high as that of C) banked them with dirt to a height of D) was banked with dirt as high as E) was banked with dirt as high as that of [Reveal] Spoiler: A) banked with dirt to a height of. Personally, I had it down to either A or D. I do not get why D is wrong. I thought "as high as" was the correct idiom. Is it because "was" is redundant? I understood the question as ..."The single family house constructed by the Yana...was conical in shape ...and was banked with dirt".So, I made "was conical" and "was banked" parallel. Is this grammatically incorrect ? Yes, It might sound non-nonsensical but you never know that somebody might like to put mound of dirt in their house, May be it was a dog's house that they built Is option D wrong because of "As high as" only as explained by sumeet ? 
_________________ Success has been and continues to be defined as Getting up one more time than you have been knocked down. Manager Joined: 05 Jun 2012 Posts: 130 Schools: IIMA Followers: 1 Kudos [?]: 10 [0], given: 66 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 12 Nov 2014, 23:29 "as high as" or as low as" is used for fixed comparisons, here height is 3 to 4 so A is best option to rely on +1 A _________________ If you are not over prepared then you are under prepared !!! GMAT Club Legend Joined: 01 Oct 2013 Posts: 9310 Followers: 807 Kudos [?]: 165 [0], given: 0 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 06 Jan 2016, 10:00 Hello from the GMAT Club VerbalBot! Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos). Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email. Manager Joined: 25 Sep 2015 Posts: 149 Schools: Ross PT '20, ISB '17 GMAT 1: 200 Q0 V0 Followers: 0 Kudos [?]: 18 [0], given: 72 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 20 Jan 2016, 21:43 robertrdzak wrote: The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, its framework of poles overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet. A) banked with dirt to a height of B) banked with dirt as high as that of C) banked them with dirt to a height of D) was banked with dirt as high as E) was banked with dirt as high as that of [Reveal] Spoiler: A) banked with dirt to a height of. Personally, I had it down to either A or D. I do not get why D is wrong. I thought "as high as" was the correct idiom. Is it because "was" is redundant? C out - banked 'them' - Parallelism + Pronoun (no clear antecedent -as house is singular in the sentence) error D and E out - Was - Parallelism error B - wordy A - correct Senior Manager Joined: 22 Jun 2014 Posts: 443 Concentration: General Management, Technology GMAT 1: 540 Q45 V20 GPA: 2.49 WE: Information Technology (Computer Software) Followers: 11 Kudos [?]: 137 [0], given: 90 Re: The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 04 Apr 2016, 03:41 robertrdzak wrote: The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, its framework of poles overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet. part of sentence that matters to find out erros: ...House....was conical in shape, its framework of poles overlaid ... and banked with dirt to a height of 3 to 4 feet. A) banked with dirt to a height of B) banked with dirt as high as that of C) banked them with dirt to a height of D) was banked with dirt as high as E) was banked with dirt as high as that of _________________ --------------------------------------------------------------- Target - 720-740 helpful post means press '+1' for Kudos! 
http://gmatclub.com/forum/information-on-new-gmat-esr-report-beta-221111.html http://gmatclub.com/forum/list-of-one-year-full-time-mba-programs-222103.html Manager Joined: 08 Dec 2015 Posts: 204 GMAT 1: 600 Q44 V27 Followers: 1 Kudos [?]: 6 [0], given: 33 The single family house constructed by the Yana, a Native [#permalink] ### Show Tags 03 May 2016, 15:30 Hi, @E-GMAT I got confused with The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, its framework of poles overlaid with slabs of bark, either cedar or pine, and banked with dirt to a height of three to four feet. I thought that (comma) FANBOYS starts a new independent clause, so I used option D) because it has a verb. Only after I understood that the clause still wouldn't have a subject... So when do we have to be careful with the Independent clause and S-V agreement?.. When we see this comma, FANBOYS formation, should we look for an Indep. Clause straight away?... Could be an issue in this case? What if the sentence said: The single family house constructed by the Yana, a Native American people who lived in what is now northern California, was conical in shape, its framework of poles overlaid with slabs of bark, either cedar or pine, and it was banked with dirt to a height of three to four feet. Would it be correct? Thank you! The single family house constructed by the Yana, a Native   [#permalink] 03 May 2016, 15:30 Go to page    1   2    Next  [ 23 posts ] Similar topics Replies Last post Similar Topics: The single-family house constructed by the Yana, a Native 1 24 Apr 2012, 21:59 6 The single-family house constructed by the Yana, a Native 8 05 Aug 2010, 09:53 1 The single-family house constructed by the Yana, a Native 5 28 May 2010, 00:45 The single-family house constructed by the Yana, a Native 6 26 Jan 2008, 10:29 The single-family house constructed by the Yana, a Native 9 31 Oct 2006, 09:15 Display posts from previous: Sort by
2016-08-30 15:14:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5951747298240662, "perplexity": 8265.727234057336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982984973.91/warc/CC-MAIN-20160823200944-00272-ip-10-153-172-175.ec2.internal.warc.gz"}
https://thoughtstreams.io/paltman/atom-editor/3777/
# Atom Editor 11 thoughts last posted March 6, 2014, 5:48 p.m. I wonder how hard it would be to write a package for flake8.
2020-05-28 19:06:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8534050583839417, "perplexity": 8691.29063965096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347399830.24/warc/CC-MAIN-20200528170840-20200528200840-00378.warc.gz"}
http://nrich.maths.org/7164/index?nomenu=1
The shorter sides of a right-angles isosceles triangle are each $10$cm long. The triangle is folded in half along its line of symmetry to form a smaller triangle. How much longer is the perimeter of the larger triangle than that of the smaller? If you liked this problem, here is an NRICH task which challenges you to use similar mathematical ideas. This problem is taken from the UKMT Mathematical Challenges. View the archive of all weekly problems grouped by curriculum topic View the previous week's solution View the current weekly problem
2016-02-14 21:03:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18236415088176727, "perplexity": 1429.0452314448532}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454702032759.79/warc/CC-MAIN-20160205195352-00080-ip-10-236-182-209.ec2.internal.warc.gz"}
https://dsp.stackexchange.com/questions/37676/what-are-the-known-use-cases-for-different-wavelet-families
# What are the known use cases for different wavelet families? There are numerous wavelet families that differ by a number of parameters (like the number of vanishing moments, symmetry, etc.). • What are the known use cases where one particular family is known to be more suitable and provide better results? For example, Daubechies 9/7 are used in the JPEG2000 image format. As far as I understand Daubechies wavelets were designed specially to have a high number of vanishing moments so that makes them more suitable for signal compression tasks. What are the known use cases for other families of wavelets? And as second part: • How does one decide which one is the most suitable for it's task? • Are there any known heuristics, common knowledge, etc? • Hi Alexander, welcome. Your question is IMHO too broad. That's like going to a mechanical engineering site and asking what the different use cases for different kinds of mechanical joints are. There's far too many to give you a complete listing, and for those that are common, we'd just point you to textbook examples. So maybe you'd want to explain why you're asking for "known use cases"; this feels like you're writing some report as homework, and we'll probably be more happy to help you find good ressources than to give you a list that you can drag&drop into your document. – Marcus Müller Feb 17 '17 at 9:22 • I have been long gathering good practices and caveats on using wavelets. Still, I cannot tell which could be a best. One cannot easily separate the transformation by itself from the subsequent processing. – Laurent Duval Feb 17 '17 at 9:28 • The issue of "What is the best wavelet" has been coming up frequently and in various forms with lots of great respones, on this SE site, you might want to have a look at those too. – A_A Feb 17 '17 at 9:57 Say yes. Let us concentrate on 2-band real discrete wavelet first. JPEG 2000 is a special case, where CDF9/7 and 5/3 biorthogonal wavelets are used. For the first one, the analysis wavelet is moderately regular, but its moments nicely concentrate piecewise regular parts of the data. The synthesis wavelet is smoother, and visually compensates quantization artifacts. The 5/3 works similarly, with the addition of exact computations on integers. There are almost symmetric, which is somehow better for edge location. The paper Mathematical Properties of the JPEG2000 Wavelet Filters gathers some of the most notable. Answer 0.5. apart from that example, there is very few "fit-for-all signals" optimality results. The quality of wavelets depends a lot on the nature of the data: signals, images, on the task, on the resource constraints. But the most important is the mastery of the wavelets you use. Because most often, you are trying to enhance or restore data, but the target is unknown, and quality metrics are often imperfect, so you can only get informed guesses on simulated datasets. Theoretical properties are guides, but you need models for signals or noise at least (eg spline wavelets with a fractional Brownian noise). Orthogonal wavelets, says Daubechies', are widely used because orthogonality is useful to prove stuff, and their "shortest" support interval for given moments is nice. On natural signals, a rule of thumb is that it is rare to observe polynomial parts above degree 3. Hence short DB should suffice. However, longer ones are often used to compensates for the stronger asymmetry of the shortest. 
The (overall) symmetry of the wavelet is also observed: anti-symmetric to act like derivatives, symmetric to mimic Laplacians. There is a no-go theorem of discrete 1D 2-band wavelets: except for Haar, they cannot be real, orthogonal with symmetry and finite support. Haar seems crude, but it is very fast, and since you can almost compute everything (delays, etc.) it can do a marvelous job. But if you go deeper, you can find that $M$-band wavelets $M=4$ for instance, can have all the required properties, with more degrees of freedom. And they have better aliasing cancelation between channels, which makes them better at textures, for denoising. Answer 1: a good knowledge of wavelets properties restricts the candidate list, and a little trial-and-error is often unavoidable But you can also use a set of different wavelets to enhance the result of a single one: perform the processing with each wavelet, and (wisely) combine the result. One of the most efficient technique is based on a pair of Hilbert-related wavelets: take any wavelet basis you like (for its properties), and build a second basis with an (approximate) Hilbert transform relationship. The resulting primal and dual wavelets share the same moment property, have an opposite symmetry, and form together a dual-tree wavelet transform, that provides almost shift invariance at little cost. But you can also use different shifted versions (time-invariant wavelets), symmetrized combinations, etc. Answer 2: a union of wavelet bases is often a powerful combination.
2020-01-29 05:44:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4988739788532257, "perplexity": 1005.467863423706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251788528.85/warc/CC-MAIN-20200129041149-20200129071149-00175.warc.gz"}
https://www.physicsforums.com/threads/force-velocity-and-power.223428/
Force, Velocity, and Power 1. Mar 21, 2008 mjdiaz89 1. The problem statement, all variables and given/known data The engine of an automobile requires 40 hp to maintain a constant speed of 78 km/h. (a) What is the resistive force against the automobile? correct check mark 1376.465N (correct) (b) If the resistive force is proportional to the velocity, what must the engine power be to drive at constant speeds of 65 km/h? wrong check mark hp (c) What must the engine power be to drive at constant speeds of 145 km/h under the same conditions? wrong check mark hp 2. Relevant equations P=Fv = $$\frac{Fd}{s}$$ 3. The attempt at a solution Questions B and C: It seems I have two unknowns (Resistive force and engine power output). Any ideas? 2. Mar 21, 2008 lavalamp Since you are assuming that resistive force is directly proportional to velocity, you know: $$F \propto v$$ $$F = kv$$ Therefore: $$P = Fv$$ $$P = kv^{2}$$ You now need to find the constant k, then you'll be all set.
2016-12-07 17:03:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6805070638656616, "perplexity": 1282.9581745311584}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542217.43/warc/CC-MAIN-20161202170902-00139-ip-10-31-129-80.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-simplify-root4-93-sqrt-92
# How do you simplify root4(93)*sqrt(92)? Jun 4, 2018 You can use logarithms if your calculator cannot do higher-order roots directly. $29.787$ #### Explanation: .^4√93⋅√92 = $\frac{1}{4} \log \left(93\right) + \frac{1}{2} \log \left(92\right)$ Logarithm tables are readily available online and in libraries. $\frac{1}{4} \log \left(93\right) + \frac{1}{2} \log \left(92\right)$ = $\frac{1}{4} \left(1.9685\right) + \frac{1}{2} \left(1.9638\right)$ $0.4921 + 0.9819 = 1.4740$ To convert this back to the desired number, use the exponential function (inverse of logarithm):$\exp \left(1.4740\right) = {10}^{1.4740} = 29.787$ Direct calculator calculation: $3.1054 \times 9.5917 = 29.787$
2020-03-29 03:36:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8303130269050598, "perplexity": 3206.4491771637963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493684.2/warc/CC-MAIN-20200329015008-20200329045008-00114.warc.gz"}
http://mathoverflow.net/revisions/31111/list
3 added 3 characters in body When I teach computability, I usually use the following example to illustrate the point. Let $f(n)=1$, if there are $n$ consecutive $1$s somewhere in the decimal expansion of $\pi$, and $f(n)=0$ otherwise. Is this a computable function? Some students might try naively to compute it like this: on input $n$, start to enumerate the digits of $\pi$, and look for $n$ consecutive $1$s. If found, then output $1$. But then they realize: what if on a particular input, you have searched for 10 years, and still not found the instance? You don't seem justified in outputting $0$ quite yet, since perhaps you might find the consecutive $1$s by searching a bit more. Nevertheless, we can prove that the function is computable as follows. Either there are arbitrarily long strings of $1$ in $\pi$ or there is a longest string of $1$s of some length $N$. In the former case, the function $f$ is the constant $1$ function, which is definitely computable. In the latter case, $f$ is the function with value $1$ for all input $n n\lt N$ and value $0$ for $n\geq N$, which for any fixed $N$ is also a computable function. So we have proved that $f$ is computable in effect by providing an infinite list of programs and proving that one of them computes $f$, but we don't know which one exactly. Indeed, I believe it is an open question of number theory which case is the right one. In this sense, this example has a resemblence to Gerhard's examples. 2 added 72 characters in body When I teach computability, I usually use the following example to illustrate the point. Let $f(n)=1$, if there are $n$ consecutive $1$s somewhere in the decimal expansion of $\pi$, and $f(n)=0$ otherwise. Is this a computable function? Some students might try naively to compute it like this: on input $n$, start to enumerate the digits of $\pi$, and look for $n$ consecutive $1$s. If found, then output $1$. But then they realize: what if on a particular input, you have searched for 10 years, and still not found the instance? You don't seem justified in outputting $0$ quite yet, since perhaps you might find the consecutive $1$s by searching a bit more. Nevertheless, we can prove that the function is computable as follows. Either there are arbitrarily long strings of $1$ in $\pi$ or there is a longest string of $1$s of some length $N$. In the former case, the function $f$ is the constant $1$ function, which is definitely computable. In the latter case, $f$ is the function with value $1$ for all input $n So although we have proved that$f$is computable , we didn't provide a definite algorithm in effect by providing an infinite list of programs and proving that one of them computes it$f$, but we don't know which one exactly. Indeed, and I believe it is an open question in of number theory which case actually occursis the right one. In this sense, this example has a resemblence to Gerhard's examples. 1 [made Community Wiki] When I teach computability, I usually use the following example to illustrate the point. Let$f(n)=1$, if there are$n$consecutive$1$s somewhere in the decimal expansion of$\pi$, and$f(n)=0$otherwise. Is this a computable function? Some students might try naively to compute it like this: on input$n$, start to enumerate the digits of$\pi$, and look for$n$consecutive$1$s. If found, then output$1$. But then they realize: what if on a particular input, you have searched for 10 years, and still not found the instance? 
You don't seem justified in outputting$0$quite yet, since perhaps you might find the consecutive$1$s by searching a bit more. Nevertheless, we can prove that the function is computable as follows. Either there are arbitrarily long strings of$1$in$\pi$or there is a longest string of$1$s of some length$N$. In the former case, the function$f$is the constant$1$function, which is definitely computable. In the latter case,$f$is the function with value$1$for all input$n So although we have proved that $f$ is computable, we didn't provide a definite algorithm that computes it, and I believe it is an open question in number theory which case actually occurs. In this sense, this example has a resemblence to Gerhard's examples.
2013-05-22 22:21:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9028233289718628, "perplexity": 183.21560905126648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702452567/warc/CC-MAIN-20130516110732-00082-ip-10-60-113-184.ec2.internal.warc.gz"}
https://icml.cc/Conferences/2021/ScheduleMultitrack?event=11635
Timezone: » Towards Achieving Adversarial Robustness Beyond Perceptual Limits Sravanti Addepalli · Samyak Jain · Gaurang Sriramanan · Shivangi Khare · Venkatesh Babu Radhakrishnan The vulnerability of Deep Neural Networks to Adversarial Attacks has fuelled research towards building robust models. While most existing Adversarial Training algorithms aim towards defending against imperceptible attacks, real-world adversaries are not limited by such constraints. In this work, we aim to achieve adversarial robustness at larger epsilon bounds. We first discuss the ideal goals of an adversarial defense algorithm beyond perceptual limits, and further highlight the shortcomings of naively extending existing training algorithms to higher perturbation bounds. In order to overcome these shortcomings, we propose a novel defense, Oracle-Aligned Adversarial Training (OA-AT), that attempts to align the predictions of the network with that of an Oracle during adversarial training. The proposed approach achieves state-of-the-art performance at large epsilon bounds ($\ell_\infty$ bound of $16/255$) while outperforming adversarial training algorithms such as AWP, TRADES and PGD-AT at standard perturbation bounds ($\ell_\infty$ bound of $8/255$) as well.
2022-01-23 22:41:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7872880697250366, "perplexity": 3672.5824635441436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304309.59/warc/CC-MAIN-20220123202547-20220123232547-00495.warc.gz"}
https://www.thetyrantcollection.com/portfolio-item/uncertain-ionia-electrum-hekte-sixth-stater-2-27-g-ca-625-600-bc/
Anonymous, Linzalone 1063 (this coin); Elektron I 17; SNG Kayhan 698; Zhuyuetang Collection 3, Extremely Fine Lydo-Milesian standard. Geometric design somewhat resembling a star, composed of a cross centered upon a polygon of eight sides within a square with slightly rounded sides . Reverse: Rectangular incuse punch divided by a horizontal and vertical line into four compartments, two of the compartments further divided by a thin horizontal line creating four compartments, two with a thin line and two with pellets. The other two larger compartments are further divided into six compartments by two diagonal lines. . .
2022-11-29 05:45:57
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8429506421089172, "perplexity": 4927.287695103414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710685.0/warc/CC-MAIN-20221129031912-20221129061912-00132.warc.gz"}
https://einsteintoolkit.org/thornguide/CactusNumerical/MoL/documentation.html
## Method of Lines Date Abstract The Method of Lines is a way of separating the time integration from the rest of an evolution scheme. This thorn is intended to take care of all the bookwork and provide some basic time integration methods, allowing for easy coupling of different thorns. ### 1 Purpose The Method of Lines (MoL) converts a (system of) partial differential equation(s) into an ordinary differential equation containing some spatial differential operator. As an example, consider writing the hyperbolic system of PDE’s ${\partial }_{t}q+{A}^{i}\left(q\right){\partial }_{i}B\left(q\right)=s\left(q\right)$ (1) in the alternative form ${\partial }_{t}q=L\left(q\right),$ (2) which (assuming a given discretization of space) is an ODE. Given this separation of the time and space discretizations, well known stable ODE integrators such as Runge-Kutta can be used to do the time integration. This is more modular (allowing for simple extensions to higher order methods), more stable (as instabilities can now only arise from the spatial discretization or the equations themselves) and also avoids the problems of retaining high orders of convergence when coupling different physical models. MoL can be used for hyperbolic, parabolic and even elliptic problems (although I definitely don’t recommend the latter). As it currently stands it is set up for systems of equations in the first order type form of equation (2). If you want to implement a multilevel scheme such as leapfrog it is not obvious to me that MoL is the thing to use. However if you have lots of thorns that you want to interact, for example ADM_BSSN and a hydro code plus maybe EM or a scalar field, and they can easily be written in this sort of form, then you probably want to use MoL. This thorn is meant to provide a simple interface that will implement the MoL inside Cactus as transparently as possible. It will initially implement only the optimal Runge-Kutta time integrators (which are TVD up to RK3, so suitable for hydro) up to fourth order and iterated Crank Nicholson. All of the interaction with the MoL thorn should occur directly through the scheduler. For example, all synchronization steps should now be possible at the schedule level. This is essential for interacting cleanly with different drivers, especially to make Mesh Refinement work. For more information on the Method of Lines the most comprehensive references are the works of Jonathan Thornburg [12] - see especially section 7.3 of the thesis. From the CFD viewpoint the review of ENO methods by Shu, [3], has some information. For relativistic fluids the paper of Neilsen and Choptuik [4] is also quite good. ### 2 How to use #### 2.1 Thorn users For those who used the old version of MoL, this version is unfortunately slightly more effort to use. That is, for most methods you’ll now have to set 4 parameters instead of just one. If you already have a thorn that uses the method of lines, then there are four main parameters that are relevant to change the integration method. The keyword MoL_ODE_Method chooses between the different methods. Currently supported are RK2, RK3, ICN, ICN-Avg and Generic. These are second order Runge-Kutta, third order Runge-Kutta, Iterative Crank Nicholson, Iterative Crank Nicholson with averaging, and the generic Shu-Osher type Runge-Kutta methods. 
To switch between the different types of generic methods there is also the keyword Generic_Type which is currently restricted to RK for the standard TVD Runge-Kutta methods (first to fourth order) and ICN for the implementation of the Iterative Crank Nicholson method in generic form. Full descriptions of the currently implemented methods are given in section 4. The parameter MoL_Intermediate_Steps controls the number of intermediate steps for the ODE solver. For the generic Runge-Kutta solvers it controls the order of accuracy of the method. For the ICN methods this parameter controls the number of iterations taken, which does not check for stability. This parameter defaults to 3. The parameter MoL_Num_Scratch_Levels controls the amount of scratch space used. If this is insufficient for the method selected there will be an error at parameter checking time. This parameter defaults to 0, as no scratch space is required for the efficient ICN and Runge-Kutta 2 and 3 solvers. For the generic solvers this must be at least MoL_Intermediate_Steps - 1. Another parameter is MoL_Memory_Always_On which switches on memory for the scratch space always if true and only during evolution if false. This defaults to true for speed reasons; the memory gains are likely to be limited unless you’re doing something very memory intensive at initialization or analysis. There is also a parameter MoL_NaN_Check that will check your RHS grid functions for NaNs using the NaNChecker thorn from CactusUtils. This will make certain that you find the exact grid function computing the first NaN; of course, this may not be the real source of your problem. The parameter disable_prolongation only does anything if you are using mesh refinement, and in particular Carpet. With mesh refinement it may be necessary to disable prolongation in intermediate steps of MoL. This occurs when evolving systems containing second spatial derivatives. This is done by default in MoL. If your system is purely first order in space and time you may wish to set this to "no". Ideally, initial data thorns should always set initial data at all time levels. However, sometimes initial data thorns fail to do this. In this case you can do one of three things: • Fix the initial data thorn. This is the best solution. • If you’re using Carpet, it has some facilities to take forward/backward time steps to initialize multiple time levels. See the Carpet parameters init_each_timelevel and init_3_timelevels for details. • Finally, if you set (the MoL parameter) initial_data_is_crap, MoL will copy the current time level of all variables it knows about (more precisely, using the terminology of section 2.2, all evolved, save-and-restore, and constrained variables which have multiple time levels) to all the past time levels. Note that this copies the same data to each past time level; this will be wrong if your spacetime is time-dependent! If enabled, the copy happens in the CCTK_POSTINITIAL schedule bin. By default this happens before the MoL_PostStep schedule group; the parameter copy_ID_after_MoL_PostStep can be used to change this to after MoL_PostStep. #### 2.2 Thorn writers To port an existing thorn using the method of lines, or to write a new thorn using it, should hopefully be relatively simple. As an example, within the MoL arrangement is WaveMoL which duplicates the WaveToy thorn given by CactusWave in a form suitable for use by MoL. In this section, “the thorn” will mean the user thorn doing the physics. We start with some terminology. 
Grid functions are split into four cateogories. 1. The first are those that are evolved using a MoL form. That is, a right hand side is calculated and the variable updated using it. These we call evolved variables. 2. The second category are those variables that are set by a thorn at every intermediate step of the evolution, usually to respect the constraints. Examples of these include the primitive variables in a hydrodynamics code. Another example would be the gauge variables if these were set by constraints at every intermediate step (which is slightly artificial; the usual example would be the use of maximal slicing, which is only applied once every $N$ complete steps). These are known as constrained variables. 3. The third category are those variables that a thorn depends on but does not set or evolve. An example would include the metric terms considered from a thorn evolving matter. Due to the way that MoL deals with these, they are known as Save and Restore variables. 4. The final category are those variables that do not interact with MoL. These would include temporary variables for analysis or setting up the initial data. These can safely be ignored. As a generic rule of thumb, variables for which you have a time evolution equation are evolved (obviously), variables which your thorn sets but does not evolve are constrained, and any other variables which your thorn reads during evolution is a Save and Restore variable. MoL needs to know every GF that falls in one of the first three groups. If a GF is evolved by one thorn but is a constrained variable in another (for example, the metric in full GR Hydro) then each thorn should register the function as they see it. For example, the hydro thorn will register the metric as a Save and Restore variable and the spacetime thorn will register the metric as an evolved variable. The different variable categories are given the priority evolved, constrained, Save and Restore. So if a variable is registered as belonging in two different categories, it is always considered by MoL to belong to the category with the highest priority. MoL needs to know the total number of GFs in each category at parameter time. To do this, your thorn needs to use some accumulator parameters from MoL. As an example, here are the paramaters from WaveMoL: shares: MethodOfLines USES CCTK_INT MoL_Num_Evolved_Vars USES CCTK_INT MoL_Num_Constrained_Vars USES CCTK_INT MoL_Num_SaveAndRestore_Vars restricted: CCTK_INT WaveMoL_MaxNumEvolvedVars \ "The maximum number of evolved variables used by WaveMoL" \ ACCUMULATOR-BASE=MethodofLines::MoL_Num_Evolved_Vars { 5:5           :: "Just 5: phi and the four derivatives" } 5 CCTK_INT WaveMoL_MaxNumConstrainedVars \ "The maximum number of constrained variables used by WaveMoL" \ ACCUMULATOR-BASE=MethodofLines::MoL_Num_Constrained_Vars { 0:1           :: "A small range, depending on testing or not" } 1 CCTK_INT WaveMoL_MaxNumSandRVars \ "The maximum number of save and restore variables used by WaveMoL" \ ACCUMULATOR-BASE=MethodofLines::MoL_Num_SaveAndRestore_Vars { 0:1           :: "A small range, depending on testing or not" } 1 This should give the maximum number of variables that your thorn will register. Every thorn should register every grid function that it uses even if you expect it to be registered again by a different thorn. 
For example, a hydro thorn would register the metric variables as Save and Restore, whilst the spacetime evolution thorn would register them as evolved (in ADM) or constrained (in ADM_BSSN), both of which have precedence. To register your GFs with MoL schedule a routine in the bin MoL_Register which just contains the relevant function calls. For an evolved variable the GF corresponding to the update term ($L\left(q\right)$ in equation (2)) should be registered at the same time. The appropriate functions are given in section 5. These functions are provided using function aliasing. For details on using function aliasing, see sections B10.5 and F2.2.3 of the UsersGuide. For the case of real GFs, you simply add the following lines to your interface.ccl: ########################################## ### PROTOTYPES - DELETE AS APPLICABLE! ### ########################################## CCTK_INT FUNCTION MoLRegisterEvolved(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) CCTK_INT FUNCTION MoLRegisterEvolvedSlow(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) CCTK_INT FUNCTION MoLRegisterConstrained(CCTK_INT ConstrainedIndex) CCTK_INT FUNCTION MoLRegisterSaveAndRestore(CCTK_INT SandRIndex) CCTK_INT FUNCTION MoLRegisterEvolvedGroup(CCTK_INT EvolvedIndex, \ CCTK_INT RHSIndex) CCTK_INT FUNCTION MoLRegisterConstrainedGroup(CCTK_INT ConstrainedIndex) CCTK_INT FUNCTION MoLRegisterSaveAndRestoreGroup(CCTK_INT SandRIndex) CCTK_INT FUNCTION MoLChangeToEvolved(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) CCTK_INT FUNCTION MoLChangeToEvolvedSlow(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) CCTK_INT FUNCTION MoLChangeToConstrained(CCTK_INT ConstrainedIndex) CCTK_INT FUNCTION MoLChangeToSaveAndRestore(CCTK_INT SandRIndex) CCTK_INT FUNCTION MoLChangeToNone(CCTK_INT RemoveIndex) ############################################# ### USE STATEMENT - DELETE AS APPLICABLE! ### ############################################# USES FUNCTION MoLRegisterEvolved USES FUNCTION MoLRegisterEvolvedSlow USES FUNCTION MoLRegisterConstrained USES FUNCTION MoLRegisterSaveAndRestore USES FUNCTION MoLRegisterEvolvedGroup USES FUNCTION MoLRegisterConstrainedGroup USES FUNCTION MoLRegisterSaveAndRestoreGroup USES FUNCTION MoLChangeToEvolved USES FUNCTION MoLChangeToConstrained USES FUNCTION MoLChangeToSaveAndRestore USES FUNCTION MoLChangeToNone Note that the list of parameters not complete; see the section on parameters for the use of arrays. However, the list of functions is, and is expanded on in section 5. MoL will check whether a group or variable is a GF or an array and whether it is real. Having done that, one routine (or group of routines) which we’ll here call Thorn_CalcRHS must be defined. This does all the finite differencing that you’d usually do, applied to $q$, and finds the right hand sides which are stored in $L$. This routine should be scheduled in MoL_CalcRHS. The precise order that these are scheduled should not matter, because no updating of any of the user thorns $q$ will be done until after all the RHSs are calculated. Important note: all the finite differencing must be applied to the most recent time level $q$ and not to the previous time level ${q}_{p}$ as you would normally do. Don’t worry about setting up the data before the calculation, as MoL will do that automatically. Finally, if you have some things that have to be done after each update to an intermediate level, these should be scheduled in MoL_PostStep. 
Examples of things that need to go here include the recalculation of primitive variables for hydro codes, the application of boundary conditions1 , the solution of elliptic equations (although this would be a very expensive place to solve them, some sets of equations might require the updating of some variables by constraints in this fashion). When applying boundary conditions the cleanest thing to do is to write a routine applying the symmetries to the appropriate GFs and, when calling it from the scheduler, adding the SYNC statement to the appropriate groups. An example is given by the routine WaveToyMoL_Boundaries in thorn WaveMoL. Points to note. The thorn routine Thorn_CalcRHS does not need to know and in fact should definitely not know where precisely in the MoL step it is. It just needs to know that it is receiving some intermediate data stored in the GFs $q$ and that it should return the RHS $L\left(q\right)$. All the book-keeping to ensure that it is passed the correct intermediate state at that the GFs contain the correct data at the end of the MoL step will be dealt with by the MoL thorn and the flesh. When using a multirate scheme the thorn routine Thorn_CalcRHS must check the MoL grid scalars MoL::MoL_SlowStep and MoL::MoL_SlowPostStep which MoL sets to true (non-zero) if it is time for the slow sector to compute its RHS or to apply its poststep routine. MoL::MoL_SlowPostStep is always true outside of the RHS loop, eg. in CCTK_POST_RESTRICT. #### 2.3 Evolution method writers If you want to try adding a new evolution method to MoL the simplest way is to use the generic table option to specify it completely in the parameter file - no coding is required at all. All the generic methods evolve the equation ${\partial }_{t}q=L\left(q\right)$ (3) using the following algorithm for an $N$-step method: $\begin{array}{rcll}{q}^{\left(0\right)}& =& {q}^{n},& \text{}\\ {q}^{\left(i\right)}& =& \sum _{k=0}^{i-1}\left({\alpha }_{ik}{q}^{\left(k\right)}\right)+\Delta t{\beta }_{i-1}L\left({q}^{\left(i-1\right)}\right),\phantom{\rule{2em}{0ex}}i=1,\dots ,N,& \text{(4)}\text{}\text{}\\ {q}^{n+1}& =& {q}^{\left(N\right)}.& \text{}\end{array}$ This method is completely specified by $N$ (GenericIntermediateSteps) and the $\alpha$ (GenericAlphaCoeffs) and $\beta$ (GenericBetaCoeffs) arrays. The names in parentheses give the keys in a table that MoL will use. This table is created from the string parameter Generic_Method_Descriptor. As an example, the standard TVD RK2 method that is implemented both in optimized and generic form is written as $\begin{array}{rcll}{q}^{\left(1\right)}& =& {q}^{n}+\Delta tL\left({q}^{n}\right),& \text{(5)}\text{}\text{}\\ {q}^{n+1}& =& \frac{1}{2}\left({q}^{n}+{q}^{\left(1\right)}+\Delta tL\left({q}^{\left(1\right)}\right)\right).& \text{(6)}\text{}\text{}\end{array}$ To implement this using the generic table options, use methodoflines::MoL_Intermediate_Steps = 2 methodoflines::MoL_Num_Scratch_Levels = 1 methodoflines::Generic_Method_Descriptor = \ "GenericIntermediateSteps = 2 \ GenericAlphaCoeffs = { 1.0 0.0 0.5 0.5 } \ GenericBetaCoeffs = { 1.0 0.5 }" The number of steps specified in the table must be the same as MoL_Intermediate_Steps, and the number of scratch levels should be at least MoL_Intermediate_Steps - 1. The generic methods are somewhat inefficient for use in production runs, so it is frequently better to write an optimized version once you are happy with the method. 
To do this you should • write your code into a new file, called from the scheduler under the alias MoL_Add, • make certain that at each intermediate step the correct values of cctk_time and cctk_delta_time are set in SetTime.c for mesh refinement, boundary conditions and so on, • make certain that you check for the number of intermediate steps in ParamCheck.c. ### 3 Example As a fairly extended example of how to use MoL I’ll outline how ADM_BSSN works in this context. The actual implementation of this is given in the thorn AEIThorns/BSSN_MoL. As normal the required variables are defined in the interface.ccl file, together with the associated source terms. For example, the conformal factor and source are defined by { { ..., ... } Also in this file we write the function aliasing prototypes. Once the sources are defined the registration with MoL is required, for which the essential file is MoLRegister.c. In the ADM_BSSN system the standard metric coefficients ${g}_{ij}$ are not evolved, and neither are the standard extrinsic curvature components ${K}_{ij}$. However these are used by ADM_BSSN in a number of places, and are calculated from evolved quantities at the appropriate points. In the MoL terminology these variables are constrained. As the appropriate storage is defined in thorn ADMBase, the actual calls have the form The actual evolved variables include things such as the conformal factor. This, and the appropriate source term, is defined in thorn ADM_BSSN, and so the call has the form As well as the evolved variables, and those constrained variables such as the metric, there are the gauge variables. Precisely what status these have depends on how they are set. If harmonic or 1+log slicing is used then the lapse is evolved: A matter density $\rho$ might not require such a high order scheme and can be evolved using a multi-rate scheme ierr += MoLRegisterEvolvedSlow(CCTK_VarIndex("GRHydro::dens"), CCTK_VarIndex("GRHydro::dendsrhs")); If maximal or static slicing is used then the lapse is a constrained variable2 : Finally, if none of the above apply we assume that the lapse is evolved in some unknown fashion, and so it must be registered as a Save and Restore variable: However, it is perfectly possible that we may wish to change how we deal with the gauge during the evolution. This is dealt with in the file PreLoop.F. If the slicing changes then the appropriate routine is called. For example, if we want to use 1+log evolution then we call ierr = ierr + MoLChangeToEvolved(lapseindex, lapserhsindex) It is not required to tell MoL what the lapse is changing from, or indeed if it is changing at all; MoL will work this out for itself. Finally there are the routines that we wish to apply after every intermediate step. These are ADM_BSSN_removetrA which enforces various constraints (such as the tracefree conformal extrinsic curvature remaining trace free), ADM_BSSN_Boundaries which applies symmetry boundary conditions as well as various others (such as some of the radiative boundary conditions). Note all the calls to SYNC at this point. We also convert from the updated BSSN variables back to the standard ADM variables in ADM_BSSN_StandardVariables, and also update the time derivative of the lapse in ADM_BSSN_LapseChange. ### 4 Time evolution methods provided by MoL The default method is Iterative Crank-Nicholson. There are many ways of implementing this. 
The standard "ICN" and "Generic"/"ICN" methods both implement the following, assuming an $N$ iteration method: $\begin{array}{rcll}{q}^{\left(0\right)}& =& {q}^{n},& \text{(7)}\text{}\text{}\\ {q}^{\left(i\right)}& =& {q}^{\left(0\right)}+\frac{\Delta t}{2}L\left({q}^{\left(i-1\right)}\right),\phantom{\rule{1em}{0ex}}i=1,\dots ,N-1,& \text{(8)}\text{}\text{}\\ {q}^{\left(N\right)}& =& {q}^{\left(N-1\right)}+\Delta tL\left({q}^{\left(N-1\right)}\right),& \text{(9)}\text{}\text{}\\ {q}^{n+1}& =& {q}^{\left(N\right)}& \text{(10)}\text{}\text{}\end{array}$ The “averaging” ICN method "ICN-avg" instead calculates intermediate steps before averaging: $\begin{array}{rcll}{q}^{\left(0\right)}& =& {q}^{n},& \text{(11)}\text{}\text{}\\ {\stackrel{̃}{q}}^{\left(i\right)}& =& \frac{1}{2}\left({q}^{\left(i\right)}+{q}^{n}\right),\phantom{\rule{1em}{0ex}}i=0,\dots ,N-1& \text{(12)}\text{}\text{}\\ {q}^{\left(i\right)}& =& {q}^{\left(0\right)}+\Delta tL\left({\stackrel{̃}{q}}^{\left(N-1\right)}\right),& \text{(13)}\text{}\text{}\\ {q}^{n+1}& =& {q}^{\left(N\right)}& \text{(14)}\text{}\text{}\end{array}$ The Runge-Kutta methods are those typically used in hydrodynamics by, e.g., Shu and others — see [3] for example. Explicitly the first order method is the Euler method: $\begin{array}{rcll}{q}^{\left(0\right)}& =& {q}^{n},& \text{(15)}\text{}\text{}\\ {q}^{\left(1\right)}& =& {q}^{\left(0\right)}+\Delta tL\left({\stackrel{̃}{q}}^{\left(0\right)}\right),& \text{(16)}\text{}\text{}\\ {q}^{n+1}& =& {q}^{\left(1\right)}.& \text{(17)}\text{}\text{}\end{array}$ The second order method is: $\begin{array}{rcll}{q}^{\left(0\right)}& =& {q}^{n},& \text{(18)}\text{}\text{}\\ {q}^{\left(1\right)}& =& {q}^{\left(0\right)}+\Delta tL\left({q}^{\left(0\right)}\right),& \text{(19)}\text{}\text{}\\ {q}^{\left(2\right)}& =& \frac{1}{2}\left({q}^{\left(0\right)}+{q}^{\left(1\right)}+\Delta tL\left({q}^{\left(1\right)}\right)\right),& \text{(20)}\text{}\text{}\\ {q}^{n+1}& =& {q}^{\left(2\right)}.& \text{(21)}\text{}\text{}\end{array}$ The third order method is: $\begin{array}{rcll}{q}^{\left(0\right)}& =& {q}^{n},& \text{(22)}\text{}\text{}\\ {q}^{\left(1\right)}& =& {q}^{\left(0\right)}+\Delta tL\left({q}^{\left(0\right)}\right),& \text{(23)}\text{}\text{}\\ {q}^{\left(2\right)}& =& \frac{1}{4}\left(3{q}^{\left(0\right)}+{q}^{\left(1\right)}+\Delta tL\left({q}^{\left(1\right)}\right)\right),& \text{(24)}\text{}\text{}\\ {q}^{\left(3\right)}& =& \frac{1}{3}\left({q}^{\left(0\right)}+2{q}^{\left(2\right)}+2\Delta tL\left({q}^{\left(2\right)}\right)\right),& \text{(25)}\text{}\text{}\\ {q}^{n+1}& =& {q}^{\left(3\right)}.& \text{(26)}\text{}\text{}\end{array}$ The fourth order method, which is not strictly TVD, is: $\begin{array}{rcll}{q}^{\left(0\right)}& =& {q}^{n},& \text{(27)}\text{}\text{}\\ {q}^{\left(1\right)}& =& {q}^{\left(0\right)}+\frac{1}{2}\Delta tL\left({q}^{\left(0\right)}\right),& \text{(28)}\text{}\text{}\\ {q}^{\left(2\right)}& =& {q}^{\left(0\right)}+\frac{1}{2}\Delta tL\left({q}^{\left(1\right)}\right),& \text{(29)}\text{}\text{}\\ {q}^{\left(3\right)}& =& {q}^{\left(0\right)}+\Delta tL\left({q}^{\left(2\right)}\right),& \text{(30)}\text{}\text{}\\ {q}^{\left(4\right)}& =& \frac{1}{6}\left(-2{q}^{\left(0\right)}+2{q}^{\left(1\right)}+4{q}^{\left(2\right)}+2{q}^{\left(3\right)}+\Delta tL\left({q}^{\left(3\right)}\right)\right),& \text{(31)}\text{}\text{}\\ {q}^{n+1}& =& {q}^{\left(4\right)}.& \text{(32)}\text{}\text{}\end{array}$ #### 4.1 Multirate methods A scheme for coupling different parts of a system of 
equations $\begin{array}{rcll}{\partial }_{t}g& =& F\left(g,q\right),& \text{(33)}\text{}\text{}\\ {\partial }_{t}q& =& G\left(g,q\right),& \text{(34)}\text{}\text{}\end{array}$ representing eg. spacetime and matter variables, respectively, with different RK integrators is given by multirate RK schemes (e.g. [56]). Here, we pursuit the simple Ansatz of performing one RK2 intermediate RHS evaluation for two RK4 intermediate RHS evaluations. That is, the additional RK4 intermediate RHS evaluations simple use the results from the last intermediate RK2 step. To be more explicit, given the equation ${\partial }_{t}y=f\left(t,y\right)\phantom{\rule{0.3em}{0ex}},$ (35) where $f$ corresponds to the RHS possibly including spatial derivatives, we write a generic RK scheme according to $\begin{array}{rcll}{y}_{n+1}& =& {y}_{n}+\Delta t\sum _{i=1}^{s}{b}_{i}\phantom{\rule{0.3em}{0ex}}{k}_{i}\phantom{\rule{0.3em}{0ex}},& \text{(36)}\text{}\text{}\\ {k}_{i}& =& f\left({t}_{n}+{c}_{i}\Delta t\phantom{\rule{0.3em}{0ex}},{y}_{n}+\Delta t\sum _{j=1}^{s}{a}_{ij}{k}_{j}\right)\phantom{\rule{0.3em}{0ex}}.& \text{(37)}\text{}\text{}\end{array}$ The coefficients ${b}_{i}$, ${c}_{i}$, and ${a}_{ij}$ can be written in the standard Butcher notation. In our multirate scheme, we use two different sets of coefficients. One set of coefficients determines the RK4 scheme used for integrating the spacetime variables (33), the other set determines the RK2 scheme for the hydro variables (34). The coefficients for the RK2 scheme are arranged such that RHS evaluations coincide with RK4 RHS evaluations. We list the corresponding multirate Butcher tableau in Table 1. Table 1: Butcher tableau for an explicit multirate RK4/RK2 scheme. The right table (separated by the double vertical line) shows the coefficients ${b}_{i}$ (bottom line), ${c}_{i}$ (first vertical column), and ${a}_{ij}$ for the classical RK4 scheme. The left table shows the corresponding RK2 coefficients evaluated at timesteps that coincide with RK4 timesteps. 0 0 0 0 1/2 1/2 0 0 0 1/2 0 1/2 1 1 0 0 1 0 0 1/2 1/2 0 0 1/2 1/3 1/6 1/6 1/3 ### 5 Functions provided by MoL All the functions listed below return error codes in theory. However at this current point in time they always return 0 (success). Any failure to register or change a GF is assumed fatal and MoL will issue a level 0 warning stopping the code. This may change in future, in which case negative return values will indicate errors. These are all aliased functions. You can get the functions directly through header files, but this feature may be phased out. Using function aliasing is the recommended method. ### MoLRegisterEvolved Tells MoL that the given GF is in the evolved category with the associated update GF. Synopsis C CCTK_INT ierr = MoLRegisterEvolved(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) CCTK_INT ierr = MoLRegisterEvolvedSlow(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) Fortran CCTK_INT ierr = MoLRegisterEvolved(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) CCTK_INT ierr = MoLRegisterEvolvedSlow(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) Result Currently if there is an error, MoL will issue a level 0 warning. No sensible return codes exist. 0 success Parameters EvolvedIndex Index of the GF to be evolved. RHSIndex Index of the associated update GF. Discussion Should be called in a function scheduled in MoL_Register. Use the Slow variant to register the slow sector of a multirate scheme. CCTK_VarIndex() Get the variable index. MoLRegisterSaveAndRestore() Register Save and Restore variables. 
MoLRegisterConstrained() Register constrained variables. MoLChangeToEvolved() Change a variable at runtime to be evolved. MoLChangeToEvolvedSlow() Change a variable at runtime to be evolved in the slow sector. Examples C ierr = MoLRegisterEvolved(CCTK_VarIndex("wavetoymol::phi"), CCTK_VarIndex("wavetoymol::phirhs")); Fortran call CCTK_VarIndex(EvolvedIndex, "wavetoymol::phi") call CCTK_VarIndex(RHSIndex, "wavetoymol::phirhs") ierr = MoLRegisterEvolved(EvolvedIndex, RHSIndex) ### MoLRegisterConstrained Tells MoL that the given GF is in the constrained category. Synopsis C CCTK_INT ierr = MoLRegisterConstrained(CCTK_INT ConstrainedIndex) Fortran CCTK_INT ierr = MoLRegisterConstrained(CCTK_INT ConstrainedIndex) Result Currently if there is an error, MoL will issue a level 0 warning. No sensible return codes exist. 0 success Parameters ConstrainedIndex Index of the constrained GF. Discussion Should be called in a function scheduled in MoL_Register. CCTK_VarIndex() Get the variable index. MoLRegisterEvolved() Register evolved variables. MoLRegisterSaveAndRestore() Register Save and Restore variables. MoLChangeToConstrained() Change a variable at runtime to be constrained. Examples C Fortran ierr = MoLRegisterConstrained(ConstrainedIndex) ### MoLRegisterSaveAndRestore Tells MoL that the given GF is in the Save and Restore category. Synopsis C CCTK_INT ierr = MoLRegisterSaveAndRestore(CCTK_INT SandRIndex) Fortran CCTK_INT ierr = MoLRegisterSaveAndRestore(CCTK_INT SandRIndex) Result Currently if there is an error, MoL will issue a level 0 warning. No sensible return codes exist. 0 success Parameters SandRIndex Index of the Save and Restore GF. Discussion Should be called in a function scheduled in MoL_Register. CCTK_VarIndex() Get the variable index. MoLRegisterEvolved() Register evolved variables. MoLRegisterConstrained() Register constrained variables. MoLChangeToSaveAndRestore() Change a variable at runtime to be Save and Restore. Examples C Fortran ierr = MoLRegisterSaveAndRestore(SandRIndex) ### MoLRegisterEvolvedGroup Tells MoL that the given group is in the evolved category with the associated update group. Synopsis C CCTK_INT ierr = MoLRegisterEvolvedGroup(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) CCTK_INT ierr = MoLRegisterEvolvedGroupSlow(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) Fortran CCTK_INT ierr = MoLRegisterEvolvedGroup(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) CCTK_INT ierr = MoLRegisterEvolvedGroupSlow(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) Result Currently if there is an error, MoL will issue a level 0 warning. No sensible return codes exist. 0 success Parameters EvolvedIndex Index of the group to be evolved. RHSIndex Index of the associated update group. Discussion Should be called in a function scheduled in MoL_Register. Use the Slow variant to register the slow sector of a multirate scheme. CCTK_GroupIndex() Get the group index. MoLRegisterSaveAndRestoreGroup() Register Save and Restore variables. MoLRegisterConstrainedGroup() Register constrained variables. Examples C ierr = MoLRegisterEvolvedGroup(CCTK_GroupIndex("wavetoymol::scalarevolvemol"), CCTK_GroupIndex("wavetoymol::scalarevolvemolrhs")); Fortran call CCTK_GroupIndex(EvolvedIndex, "wavetoymol::scalarevolvemol") call CCTK_GroupIndex(RHSIndex, "wavetoymol::scalarevolvemolrhs") ierr = MoLRegisterEvolvedGroup(EvolvedIndex, RHSIndex) ### MoLRegisterConstrainedGroup Tells MoL that the given group is in the constrained category. 
Synopsis C CCTK_INT ierr = MoLRegisterConstrainedGroup(CCTK_INT ConstrainedIndex) Fortran CCTK_INT ierr = MoLRegisterConstrainedGroup(CCTK_INT ConstrainedIndex) Result Currently if there is an error, MoL will issue a level 0 warning. No sensible return codes exist. 0 success Parameters ConstrainedIndex Index of the constrained group. Discussion Should be called in a function scheduled in MoL_Register. CCTK_GroupIndex() Get the group index. MoLRegisterEvolvedGroup() Register evolved variables. MoLRegisterSaveAndRestoreGroup() Register Save and Restore variables. MoLChangeToConstrained() Change a variable at runtime to be constrained. Examples C Fortran ierr = MoLRegisterConstrainedGroup(ConstrainedIndex) ### MoLRegisterSaveAndRestoreGroup Tells MoL that the given group is in the Save and Restore category. Synopsis C CCTK_INT ierr = MoLRegisterSaveAndRestoreGroup(CCTK_INT SandRIndex) Fortran CCTK_INT ierr = MoLRegisterSaveAndRestoreGroup(CCTK_INT SandRIndex) Result Currently if there is an error, MoL will issue a level 0 warning. No sensible return codes exist. 0 success Parameters SandRIndex Index of the save and restore group. Discussion Should be called in a function scheduled in MoL_Register. CCTK_GroupIndex() Get the group index. MoLRegisterEvolvedGroup() Register evolved variables. MoLRegisterConstrainedGroup() Register constrained variables. Examples C Fortran ierr = MoLRegisterSaveAndRestoreGroup(SandRIndex) ### MoLChangeToEvolved Sets a GF to belong to the evolved category, with the associated update GF. Not used for the initial setting. Synopsis C CCTK_INT ierr = MoLChangeToEvolved(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) CCTK_INT ierr = MoLChangeToEvolvedSlow(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) Fortran CCTK_INT ierr = MoLChangeToEvolvedSlow(CCTK_INT EvolvedIndex, CCTK_INT RHSIndex) Result Currently if there is an error, MoL will issue a level 0 warning. No sensible return codes exist. 0 success Parameters EvolvedIndex Index of the evolved GF. RHSIndex Index of the associated update GF. Discussion Should be called in a function scheduled in MoL_PreStep. Note that this function was designed to allow mixed slicings for thorn ADMBase. This set of functions is largely untested and should be used with great care. CCTK_VarIndex() Get the variable index. MoLRegisterEvolved() Register evolved variables. MoLChangeToSaveAndRestore() Change a variable at runtime to be Save and Restore. MoLChangeToConstrained() Change a variable at runtime to be constrained. Examples C Fortran ierr = MoLChangeToEvolved(EvolvedIndex, RHSIndex) ### MoLChangeToConstrained Sets a GF to belong to the constrained category. Not used for the initial setting. Synopsis C CCTK_INT ierr = MoLChangeToConstrained(CCTK_INT EvolvedIndex) Fortran CCTK_INT ierr = MoLChangeToConstrained(CCTK_INT EvolvedIndex) Result Currently if there is an error, MoL will issue a level 0 warning. No sensible return codes exist. 0 success Parameters ConstrainedIndex Index of the constrained GF. Discussion Should be called in a function scheduled in MoL_PreStep. Note that this function was designed to allow mixed slicings for thorn ADMBase. This set of functions is largely untested and should be used with great care. CCTK_VarIndex() Get the variable index. MoLRegisterConstrained() Register constrained variables. MoLChangeToSaveAndRestore() Change a variable at runtime to be Save and Restore. MoLChangeToEvolved() Change a variable at runtime to be evolved. 
Example (Fortran):
    ierr = MoLChangeToConstrained(EvolvedIndex)

### MoLChangeToSaveAndRestore

Sets a GF to belong to the Save and Restore category. Not used for the initial setting.

Synopsis (C): CCTK_INT ierr = MoLChangeToSaveAndRestore(CCTK_INT SandRIndex)
Synopsis (Fortran): CCTK_INT ierr = MoLChangeToSaveAndRestore(CCTK_INT SandRIndex)

Result: Currently if there is an error, MoL will issue a level 0 warning. No sensible return codes exist. 0 = success.

Parameters: SandRIndex - Index of the Save and Restore GF.

Discussion: Should be called in a function scheduled in MoL_PreStep. Note that this function was designed to allow mixed slicings for thorn ADMBase. This set of functions is largely untested and should be used with great care.

See also: CCTK_VarIndex() (get the variable index), MoLRegisterSaveAndRestore() (register Save and Restore variables), MoLChangeToEvolved() (change a variable at runtime to be evolved), MoLChangeToConstrained() (change a variable at runtime to be constrained).

Example (Fortran):
    ierr = MoLChangeToSaveAndRestore(SandRIndex)

### MoLChangeToNone

Sets a GF to belong to the "unknown" category. Not used for the initial setting.

Synopsis (C): CCTK_INT ierr = MoLChangeToNone(CCTK_INT RemoveIndex)
Synopsis (Fortran): CCTK_INT ierr = MoLChangeToNone(CCTK_INT RemoveIndex)

Result: Currently if there is an error, MoL will issue a level 0 warning. No sensible return codes exist. 0 = success.

Parameters: RemoveIndex - Index of the "unknown" GF.

Discussion: Should be called in a function scheduled in MoL_PreStep. Note that this function was designed to allow mixed slicings for thorn ADMBase. This set of functions is largely untested and should be used with great care.

See also: CCTK_VarIndex() (get the variable index), MoLChangeToEvolved() (change a variable at runtime to be evolved), MoLChangeToSaveAndRestore() (change a variable at runtime to be Save and Restore), MoLChangeToConstrained() (change a variable at runtime to be constrained).

Example (Fortran):
    ierr = MoLChangeToNone(RemoveIndex)

### MoLQueryEvolvedRHS

Queries MoL for the index of the update variable for a given GF in the evolved category.

Synopsis (C): CCTK_INT RHSindex = MoLQueryEvolvedRHS(CCTK_INT EvolvedIndex)
Synopsis (Fortran): CCTK_INT RHSindex = MoLQueryEvolvedRHS(CCTK_INT EvolvedIndex)

Result: If the grid function passed does not exist, MoL will issue a level 0 warning. If the grid function is not of an evolved type (fast or slow sector), $-1$ will be returned. Otherwise the variable index of the update GF is returned ($>0$).

Parameters: EvolvedIndex - Index of the GF whose update GF is to be returned.

Discussion: Both slow and fast evolved variables can be queried.

See also: CCTK_VarIndex() (get the variable index), MoLRegisterEvolved() (register evolved variables), MoLChangeToEvolvedSlow() (change a variable at runtime to be evolved in the slow sector).

Examples

C:
    rhsindex = MoLQueryEvolvedRHS(CCTK_VarIndex("wavetoymol::phi"));

Fortran:
    call CCTK_VarIndex(EvolvedIndex, "wavetoymol::phi")
    rhsindex = MoLQueryEvolvedRHS(EvolvedIndex)
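As an illustration of how the registration functions above fit together in practice, here is a minimal C sketch of a routine scheduled in the MoL_Register bin. The thorn and variable names (mythorn::psi, mythorn::energy, MyThorn_MoLRegister) are hypothetical, and the matching schedule.ccl and interface.ccl declarations are omitted; only calls documented above are used.

```c
#include "cctk.h"
#include "cctk_Arguments.h"

/* Scheduled in MoL_Register: tell MoL which variables it should evolve
   and which it should merely keep consistent. */
void MyThorn_MoLRegister(CCTK_ARGUMENTS)
{
  DECLARE_CCTK_ARGUMENTS;
  CCTK_INT ierr = 0;

  /* an evolved variable together with its right-hand-side (update) variable */
  ierr += MoLRegisterEvolved(CCTK_VarIndex("mythorn::psi"),
                             CCTK_VarIndex("mythorn::psirhs"));

  /* a variable MoL should treat as constrained (recomputed, not evolved) */
  ierr += MoLRegisterConstrained(CCTK_VarIndex("mythorn::energy"));

  if (ierr != 0)
  {
    CCTK_WARN(0, "Problem registering variables with MoL");
  }
}
```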
https://www.kybernetika.cz/content/2019/1/166
# Abstract:

In this paper we are concerned with a class of time-varying discounted Markov decision models $\mathcal{M}_n$ with unbounded costs $c_n$ and state-action dependent discount factors. Specifically we study controlled systems whose state process evolves according to the equation $x_{n+1}=G_n(x_n,a_n,\xi_n), n=0,1,\ldots$, with state-action dependent discount factors of the form $\alpha_n(x_n,a_n)$, where $a_n$ and $\xi_n$ are the control and the random disturbance at time $n$, respectively. Assuming that the sequences of functions $\lbrace\alpha_n\rbrace$, $\lbrace c_n\rbrace$ and $\lbrace G_n\rbrace$ converge, in certain sense, to $\alpha_\infty$, $c_\infty$ and $G_\infty$, our objective is to introduce a suitable control model for this class of systems and then, to show the existence of optimal policies for the limit system $\mathcal{M}_\infty$ corresponding to $\alpha_\infty$, $c_\infty$ and $G_\infty$. Finally, we illustrate our results and their applicability in a class of semi-Markov control models.

# Keywords:

discounted optimality, non-constant discount factor, time-varying Markov decision processes

93E20, 90C40
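For readers less familiar with this setting, the performance index such time-varying models typically minimize is an expected total discounted cost with state-action dependent discount factors. The following is only a sketch of the standard form; the paper's precise definitions may differ in detail:

$$V(\pi,x)\;=\;\mathbb{E}^{\pi}_{x}\left[\sum_{n=0}^{\infty}\left(\prod_{k=0}^{n-1}\alpha_k(x_k,a_k)\right)c_n(x_n,a_n)\right],$$

with the empty product (for $n=0$) taken equal to one, minimized over admissible policies $\pi$; the limit model $\mathcal{M}_\infty$ is obtained by replacing $\alpha_n$, $c_n$ and $G_n$ with $\alpha_\infty$, $c_\infty$ and $G_\infty$.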
https://forum.azimuthproject.org/discussion/369/azimuth-project-values
# Azimuth Project values

In the comments for his blog post announcing the Azimuth Project, John Baez wrote:

The distinction between people of ill intent and people of good intent who happen to disagree with what we think can be hard to determine. (Here "we" stands for any group of people who have come to some agreement about what's good.) There is also the danger of groupthink.

So is there some agreement as to "What is good?" This is an important topic for many reasons. If we are to work hard with others, it will help to understand what they too believe to be good and what they believe to be ill. I'm sure we each have our personal opinions and life philosophies that serve as the basis for evaluating present conditions and likely future scenarios. But there will inevitably be conflicts, and we need to think about how to manage these conflicts. Are there certain values that the project must hold that we can all agree to?

Further, in order to make decisions to develop an actionable plan, there needs to be a commonly agreed set of values which will serve as the basis for the decisions. For example, one of the issues of sustainability is the differing access to scarce resources. Prices will go up. Some people won't have the money to pay for it. Yet our current global socioeconomic resource-allocation system distributes resources on the basis of birth country, family background, personal effort, and other factors that affect wealth. If nothing changes, this means that millions of poor people will starve through no lack of effort on their part.

My personal opinion is that this is a great tragedy that we should avoid at all costs. It is also my personal belief that all human beings are equally valuable and should get the same opportunities. I'm not one who views the life of a few from my own country as more important than the life of people from far away places. I'm probably on one extreme end of the scale. But I know that many others share this belief only to varying degrees. Some find it perfectly acceptable that some should have easy access to food and water while others do not. Some people weigh the value of animals as equal to humans. Some as practically worthless.

If the Azimuth Project is to produce concrete plans, implicit in those plans will be value judgments of this sort. So we need to have some, as John says, "agreement about what is good," in order to be a "we" according to his definition.

In another discussion, David Tweed mentioned: "there should be an acceptance that people have different "non-negotiable core values." Does this mean that we who are working on the Azimuth Project should have different non-negotiable core values too? It seems to me that we won't be able to work together unless we can find a superset of these values that everyone agrees do indeed define "good." It would be very useful to make those values explicit. They will need to be sufficiently detailed to serve as guiding principles for the prioritization of effort and resources going forward. This means they need to be prioritized values, not just a jumble of feel-good ideas that everyone knows are good things. You can't make decisions without prioritization.

So what is good?

Comment 1 (edited January 2011):

Some sketchy thoughts:

> It would be very useful to make those values explicit. They will need to be sufficiently detailed to serve as guiding principles for the prioritization of effort and resources going forward.
There could be some good reasons for not making the Azimuth Project values explicit and detailed. I want to attract a lot of scientists and engineers to contribute. The more detailed our stated values are, the more things there are to disagree about, and perhaps the more reason to not join in.

But maybe you think that more clearly stated values will attract more people, or focus their energies better. Maybe so. I've never done anything like this before, so I don't feel I'm doing an optimal job of it.

I guess maybe one of my values is that we should take a broadly inclusive "umbrella" approach, where many people with differing values can enjoy contributing, as opposed to some more narrow kind of "advocacy" approach, where (for example) we decide that "biochar is great" and then try to convince the world of this.

I think scientists and engineers broadly agree on the principle that accurate information is a wonderful thing. And so, I'm hoping we can get a lot of them to contribute accurate information and correct misinformation... with the aim of assembling a clear bird's-eye view of the planet's problems and possible solutions. They don't need to agree on whether biochar is great. It's possible that everyone here agrees with this stuff, but I'm not sure.

> This means they need to be prioritized values, not just a jumble of feel-good ideas that everyone knows are good things. You can't make decisions without prioritization.

The thing is, nobody here can make decisions that bind anyone else against their will. It's not like a company where somebody is the boss and they get to give other people pay raises, or fire them. Instead, we all just do what we want and some kind of structure gradually emerges. It worked very nicely on the nLab and nForum, and now it's happening here. (If anyone here isn't familiar with the nLab and nForum, they should take a good look! They set the pattern that we're roughly following. They're different in a lot of ways, but they're more developed so we have a lot to learn from them.)

Of course, I know that in some mysterious way I'm the "leader" here, just as Urs Schreiber is the leader of the nLab. Since I'm not paying anyone a salary, I don't really have the right to tell people what to do. But I work my butt off, and I find that if I get interested in something, other people tend to follow along. I sometimes think one of my defects is that my interests flicker about too wildly. If I were less erratic I might be a better leader.

I also follow along with other people's interests. For example, I would not have gotten so interested in stochastic differential equations if Tim van Beek hadn't been persistent in writing about those. And I might not be thinking about Lotka-Volterra equations if Graham Jones hadn't written a simulation of those.

I guess I'm considerably more insistent than most people here about the earth-shaking importance of the Azimuth Project. In fact Curtis Faith may be the first person to have outdone me in this respect.
Comment 2 (edited January 2011):

> There could be some good reasons for not making the Azimuth Project values explicit and detailed. I want to attract a lot of scientists and engineers to contribute. The more detailed our stated values are, the more things there are to disagree about, and perhaps the more reason to not join in.

I think this is more of a theoretical problem than an actual one.
I'm pretty sure that if you just stated the two or three values that the project must uphold from your perspective there might be one or two more than others would add but the end result wouldn't be off putting to anyone you'd want to help anyway.

> But maybe you think that more clearly stated values will attract more people, or focus their energies better. Maybe so. I've never done anything like this before, so I don't feel I'm doing an optimal job of it.

There are "values statements" all over the place that tend to be just pure corporate feelgoodism BS written by committee and generally not actually evident in the company's actions. That's not what I'm talking about. EDIT: Changed: "That's now" to "That's not" - OOPS! I'm talking about values that we'll use to decide what gets put on the blog, what projects are worth organizing from your perspective, etc.

> I guess maybe one of my values is that we should take a broadly inclusive "umbrella" approach, where many people with differing values can enjoy contributing, as opposed to some more narrow kind of "advocacy" approach, where (for example) we decide that "biochar is great" and then try to convince the world of this.

Yes, this is the kind of value that it is important to make clear up front for reasons I'll describe below.

> I think scientists and engineers broadly agree on the principle that accurate information is a wonderful thing. And so, I'm hoping we can get a lot of them to contribute accurate information and correct misinformation... with the aim of assembling a clear bird's-eye view of the planet's problems and possible solutions. They don't need to agree on whether biochar is great.

A value that I think you share is that: "scientific objective truth is paramount." While many scientists, probably most, hold this value, it is important to spell it out. The reasons become more important the more distributed and meritocratic the organizational structure.

> The thing is, nobody here can make decisions that bind anyone else against their will. It's not like a company where somebody is the boss and they get to give other people pay raises, or fire them.

There aren't many good examples of ways to lead independent agents but that really is the task at hand. The best way to get smart people working together is to let them work on exactly what they want, for their own reasons. The difficulty is finding a way to make sure that people are not working at cross purposes. This is one of the reasons that a clearly articulated set of goals and values is important. They should be brief and to the point and universally shared. If the values are explicit then you know you can trust the other people on the project because they share those core values. If the values are a mish mash of different perspectives then you can never be sure.

Getting good people working in unison on a big project requires leadership by influence rather than power. To lead, you set an example of the right way to do things from your limited perspective. If you are lucky the other participants will see the high standards and make sure that what they do meets or exceeds those standards. They will even call you out when you don't live up to what you say. But they can't do this if you don't say what your values are. I find that often people will surprise and delight you if you expect great things from them. I also find that people really like working on something important, they really like doing work they can be proud of.
Further, working in a team where an individual can contribute something significant for the good of the planet is a rare opportunity. That's why the plans need to be good ones that everyone can understand. Working on something that you doubt will amount to anything is disheartening not inspiring. Working on a realistic project to help save the planet? What could be more inspiring?

> Instead, we all just do what we want and some kind of structure gradually emerges.

You are being modest, and I think you are underestimating the role of the guidance you provide. As well as that which will be necessary in the future, at least for the first few years of the project. Structure doesn't just emerge from my experience. But you probably meant that the obvious structure we should take will become more and more obvious over time, right? That's why you need to build adaptability and change into any actual structure so the Azimuth Project is like an organism. Lots of different cells doing what they each do best, each contributing to the larger purpose. No one cell is queen or king. They are all important.

What the Azimuth Project eventually becomes will depend as much on the culture that you and the other participants create in the beginning. So far, I very much like what I see, that's why I signed up. But keeping the culture healthy will require regular consistent soft nudges.

> Of course, I know that in some mysterious way I'm the "leader" here, just as Urs Schreiber is the leader of the nLab. Since I'm not paying anyone a salary, I don't really have the right to tell people what to do. But I work my butt off, and I find that if I get interested in something, other people tend to follow along.

Yes, leading by example is the best way to get smart people to follow. No doubt about it. Even a small team can do great things if everyone works together. Most people don't have the guts to start something anywhere near as ambitious as the Azimuth Project. But there are a good many of the types of people you want to help that can see the writing on the wall, and will get behind a plan if it looks like it is well thought out and rational. That's one of the reasons that I believe that explicitly stating the most important values is important. It is also one of the reasons that I think I can help as I have experience with various aspects of building a plan that may not be obvious when you first think about doing something big.

(Continued)
Comment 3 (edited January 2011):

(Continuation)

> I sometimes think one of my defects is that my interests flicker about too wildly.

I'd be surprised if your interests flickered more than mine. I'm almost pathologically opposed to boredom and routine. I've spent most of my adult life looking for ways to help make the world better and I always set my sights very high. Every time I have failed I set out to understand the reasons for the failure. Generally some assumption I was making didn't hold and caused the failure. So I always set out to learn the new domain where I was missing information. Understanding the new domain has required me to switch from learning about technology, to marketing, to finance and venture capital, to business, to leadership, to politics, etc.

> If I were less erratic I might be a better leader.

Leadership is about inspiration more than anything else. It is about helping others figure out how to do their best work. It is about helping more than being helped. If you were less erratic you wouldn't be able to inspire because you wouldn't be able to see so clearly that something needs to be done. It is the clarity in your vision made possible because of your generalist nature among specialists that makes you well-suited to leadership of independent-minded scientists and engineers. People want to follow certainty. They want to follow someone who knows where they are going even when they are not sure of the exact path they will take.

> I guess I'm considerably more insistent than most people here about the earth-shaking importance of the Azimuth Project. In fact Curtis Faith may be the first person to have outdone me in this respect.

I just turned 47 on Sunday. I was a first-time father last year, and I have a 9-month old daughter. She is amazing.
I don't want her to grow up and wonder why our generation let things get so bad and I didn't do anything to help make the world better. I'm a real optimist by nature. But ignoring the very clear trends of the last 30 to 40 years is no longer an option. Our generation must stand up and do something about this.

A few years back I thought that politics might be the answer. I spent a lot of time learning the ins and outs of politics. My wife and I even followed the 2008 U.S. election and filmed the campaigns of Obama and Ron Paul in the process of learning. It is clear to me having seen the way the last few years have unfolded that political solutions will not avert a crisis.

In the last few years, we have lived in S.E. Asia for 4 months, and in South America for a few years. I wanted to get to understand the world from outside the U.S. perspective. To get to know people in other countries as individuals, as humans. This has made it even more clear what the major problems are, and that the solutions won't be implemented until a major crisis strikes.

So I've been working on learning relevant technology and science for the last few years as a backup plan. Trying to see where I might be able to help out in the most effective way possible. The Azimuth Project fits what I've seen. None of the other efforts appear to me to be addressing the realities I see having spent a lot of time investigating them.

But most of all. The reason that I'm excited about the Azimuth Project is that it has the loftiest of goals and the Earth needs saving. We've screwed it up and we're running out of time.
Comment 4 (edited January 2011):

A lot of things to say, but:

Curtis: something like your last 6 paragraphs above would make a great addition to the blog post I proposed - the one where you talk about your history and why you've decided to join the Azimuth Project. It's inspiring - it will get people revved up!

Comment 5:

I'll work on splitting the post into the two parts as we discussed in the other thread.
http://mathhelpforum.com/calculus/147027-triple-integral-question.html
# Math Help - a triple integral question 1. ## a triple integral question Evaluate $\int\int\int_ExydV$ where E is bounded by $x=y^2, y=x^2, z=0, z=6x+y$ My first try: $\int\int_D(\int_0^{6x+y}xydz)dA$ $=\int_0^1\int_{x^2}^{\sqrt{x}}(6x^2y+6xy^2)dydx$ ended up with 9/14 which was incorrect. 2. Originally Posted by Em Yeu Anh Evaluate $\int\int\int_ExydV$ where E is bounded by $x=y^2, y=x^2, z=0, z=6x+y$ My first try: $\int\int_D(\int_0^{6x+y}xydz)dA$ $=\int_0^1\int_{x^2}^{\sqrt{x}}(6x^2y+6xy^2)dydx$ ended up with 9/14 which was incorrect. $\int_0^{6x+y}xydz = 6x^2y+xy^2$ 3. Originally Posted by chiph588@ $\int_0^{6x+y}xydz = 6x^2y+xy^2$ *sigh* I'm so careless. Thanks!
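For completeness (this step is not part of the original thread), carrying the corrected inner integral through the two remaining integrals gives

$$\int_0^1\int_{x^2}^{\sqrt{x}}\left(6x^2y+xy^2\right)dy\,dx=\int_0^1\left(3x^3-3x^6+\tfrac{1}{3}x^{5/2}-\tfrac{1}{3}x^{7}\right)dx=\frac{3}{4}-\frac{3}{7}+\frac{2}{21}-\frac{1}{24}=\frac{3}{8},$$

so the triple integral should evaluate to $3/8$.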
https://sctk.camplab.net/v2.2.0/articles/articles/ui_seurat_curated_workflow.html
## Introduction

Seurat is an R package (Butler et al., Nature Biotechnology 2018 & Stuart, Butler, et al., Cell 2019) that offers various functions to perform analysis of scRNA-Seq data on the R console. In the singleCellTK, we implement all the common steps of the proposed workflow in an interactive and easy to use graphical interface including interactive visualizations. The purpose of this curated workflow is to allow the users to follow a standardized step-by-step workflow for an effortless analysis of their data.

### General Workflow

A general workflow for the Seurat tab is summarized in the figure below:

In this tutorial example, we illustrate all the steps of the curated workflow and focus on the options available to manipulate and customize the steps of the workflow as per user requirements. To initiate the Seurat workflow, click on the 'Curated Workflows' from the top menu and select Seurat.

NOTE: This tutorial assumes that the data has already been uploaded via the upload tab of the toolkit and filtered before using the workflow.

## 1. Normalize Data

Assuming that the data has been uploaded via the Upload tab of the toolkit, the first step for the analysis of the data is the Normalization of data. For this purpose, any assay available in the uploaded data can be used against one of the three methods of normalization available through Seurat i.e. LogNormalize, CLR (Centered Log Ratio) or RC (Relative Counts).

1. Open up the 'Normalize' tab by clicking on it.
2. Select the assay to normalize from the dropdown menu.
3. Select the normalization method from the dropdown menu. Available methods are LogNormalize, CLR or RC.
4. Set the scaling factor which represents the numeric value by which the data values are multiplied before log transformation. Default is set to 10000.
5. Press the 'Normalize' button to start the normalization process.

## 2. Scale Data

Once normalization is complete, data needs to be scaled and centered accordingly. Seurat uses linear (linear model), poisson (generalized linear model) or negbinom (generalized linear model) as a regression model.

1. Open up the 'Scale Data' tab.
2. Select model for scaling from linear, poisson or negbinom.
3. Select if you only want to scale data or center it as well.
4. Input maximum scaled data value. This is the maximum value to which the data will be scaled.
5. Press 'Scale' button to start processing.

## 3. Highly Variable Genes

Identification of the highly variable genes is core to the Seurat workflow and these highly variable genes are used throughout the remaining workflow. Seurat provides three methods for variable genes identification i.e. vst (uses local polynomial regression to fit a relationship between log of variance and log of mean), mean.var.plot (uses mean and dispersion to divide features into bins) and dispersion (uses highest dispersion values only).

1. Open up the 'Highly Variable Genes' tab.
2. Select method for computation of highly variable genes from vst, mean.var.plot and dispersion.
3. Input the number of genes that should be identified as variable. Default is 2000.
4. Press 'Find HVG' button to compute the variable genes.
5. Once genes are computed, select number of the top most variable genes to display in (6).
6. Displays the top most highly variable genes as selected in (5).
7. Graph that plots each gene and its relationship based upon the selected model in (2), and identifies if a gene is highly variable or not.
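For orientation, the first three steps correspond roughly to the following calls in the Seurat package itself when working on the R console. This is only a sketch: the toolkit uses its own wrapper functions internally, the object name `seu` is a placeholder for a Seurat object built from the filtered counts, and the calls are shown in Seurat's usual order (variable-feature selection before scaling).

```r
library(Seurat)

# Step 1: normalization (LogNormalize with scale factor 10000)
seu <- NormalizeData(seu, normalization.method = "LogNormalize",
                     scale.factor = 10000)

# Step 3: highly variable genes via 'vst', keeping 2000 features
seu <- FindVariableFeatures(seu, selection.method = "vst", nfeatures = 2000)
head(VariableFeatures(seu), 10)   # inspect the top variable genes

# Step 2: scaling and centering with a linear model, capped at a maximum value
seu <- ScaleData(seu, model.use = "linear",
                 do.scale = TRUE, do.center = TRUE, scale.max = 10)
```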
## 4. Dimensionality Reduction

Seurat workflow offers PCA or ICA for dimensionality reduction and the components from these methods can be used in the downstream analysis. Moreover, several plots are available for the user to inspect the output of the dimensionality reduction such as the standard 'PCA Plot', 'Elbow Plot', 'Jackstraw Plot' and 'Heatmap Plot'.

1. Open up the 'Dimensionality Reduction' tab.
2. Select a sub-tab for either 'PCA' or 'ICA' computation. Separate tabs are available for both methods if the user wishes to compute and inspect both separately.
3. Input the number of components to compute. Default value is 50.
4. Select the plots that should be computed with the overall processing. The standard 'PCA Plot' will be computed at all times, while the remaining can be turned off if not required.
5. Input the number of features against which a 'Heatmap' should be plotted. Only available when 'Compute Heatmap' is set to TRUE.
6. Press the 'Run PCA' button to start processing.
7. Select the number of computed components that should be used in the downstream analysis e.g. in 'tSNE/UMAP' computation or with 'Clustering'. If 'Elbow Plot' is computed, a suggested value will be indicated that should be preferred for downstream analysis.
8. The plot area from where all computed plots can be viewed by the user.
9. Heatmap plot has various options available for the users to customize the plot. Since a plot is computed against each component, a user-defined number of components can be selected to plot against. Moreover, for viewing quality, a number of columns can be selected in which the plots should be shown.

## 5. tSNE/UMAP

'tSNE' and 'UMAP' can be computed and plotted once components are available from the 'Dimensionality Reduction' tab.

1. Open up the 'tSNE/UMAP' tab.
2. Select 'tSNE' or 'UMAP' sub-tab.
3. Select a reduction method. Only methods that are computed previously in the 'Dimensionality Reduction' tab are available.
4. Set perplexity tuning parameter.
5. Information displayed to the user about how many components from the selected reduction method will be used. This value can only be changed from the 'Dimensionality Reduction' tab.
6. Press 'Run tSNE' or 'Run UMAP' button to start processing.
7. 'tSNE' or 'UMAP' plot depending upon the selected computation.

## 6. Clustering

Cluster labels can be generated for all cells/samples using one of the computed reduction methods. Plots are automatically re-computed with cluster labels. The available algorithms for clustering as provided by Seurat include the original Louvain algorithm, the Louvain algorithm with multilevel refinement and the SLM algorithm.

1. Open up the 'Clustering' tab.
2. Select a previously computed reduction method.
3. Select clustering algorithm from original Louvain algorithm, Louvain algorithm with multilevel refinement and SLM algorithm.
4. Set resolution parameter value for the algorithm. Default is 0.8.
5. Set if singletons should be grouped to nearest clusters or not. Default is TRUE.
6. Information displayed to the user about how many components from the selected reduction method will be used. This value can only be changed from the 'Dimensionality Reduction' tab.
7. Press 'Find Clusters' button to start processing.
8. Re-computed plots with cluster labels. Only those plots are available that have previously been computed.
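On the console, these three steps map roughly onto the Seurat calls below. Again this is a sketch, not the toolkit's internal code; in particular, the choice of dims = 1:10 is an assumption that should in practice be guided by the elbow plot.

```r
# Step 4: PCA with 50 components, plus the elbow plot used to pick a cutoff
seu <- RunPCA(seu, npcs = 50)
ElbowPlot(seu, ndims = 50)

# Step 5: UMAP on the selected principal components (tSNE is analogous)
seu <- RunUMAP(seu, reduction = "pca", dims = 1:10)
# seu <- RunTSNE(seu, reduction = "pca", dims = 1:10, perplexity = 30)

# Step 6: graph-based clustering (algorithm = 1 is the original Louvain)
seu <- FindNeighbors(seu, reduction = "pca", dims = 1:10)
seu <- FindClusters(seu, algorithm = 1, resolution = 0.8)
DimPlot(seu, reduction = "umap", label = TRUE)   # plot with cluster labels
```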
## 7. Find Markers

'Find Markers' tab can be used to identify and visualize the marker genes using one of the provided visualization methods. The tab offers identification of markers between two selected phenotype groups or between all groups and can be decided at the time of the computation. Furthermore, markers that are conserved between two phenotype groups can also be identified. Visualizations such as Ridge Plot, Violin Plot, Feature Plot and Heatmap Plot can be used to visualize the individual marker genes.

1. Select if you want to identify marker genes against all groups in a biological variable or between two pre-defined groups. Additionally, users can select the last option to identify the marker genes that are conserved between two groups.
2. Select phenotype variable that contains the grouping information.
3. Select the test used for marker gene identification.
4. Select if only positive markers should be returned.
5. Press "Find Markers" button to run marker identification.
6. Identified marker genes are populated in the table.
7. Filters can be applied on the table.
8. Filters allow different comparisons based on the type of the column of the table.
9. Table re-populated after applying filters.
10. Heatmap plot can be visualized for all genes populated in the table (9) against all biological groups in the selected phenotype variable.
11. To visualize each individual marker gene through gene plots, they can be selected by clicking on the relevant rows of the table.
12. Selected marker genes from the table are plotted with gene plots.

## 8. Downstream Analysis

Once all steps of the Seurat workflow are completed, users can further analyze the data by directly going to the various downstream analysis options (Differential Expression, Marker Selection & Pathway Analysis) from within the Seurat workflow.
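As a console-level counterpart to the marker-identification step described above (again only a sketch: grouping here is taken from the cluster labels produced in the clustering step, and the Wilcoxon test is Seurat's default choice):

```r
# Step 7: markers for every cluster, keeping only positively expressed ones
# (FindClusters() already sets the cluster labels as the active identity)
markers <- FindAllMarkers(seu, test.use = "wilcox", only.pos = TRUE)

# Pick a few top markers by adjusted p-value and draw the gene-level plots
top_genes <- head(markers[order(markers$p_val_adj), "gene"], 4)
VlnPlot(seu, features = top_genes)      # violin plot
FeaturePlot(seu, features = top_genes)  # feature plot on the UMAP
RidgePlot(seu, features = top_genes)    # ridge plot
```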
https://xianblog.wordpress.com/tag/untractable-normalizing-constant/
## approximate likelihood

Posted in Books, Statistics on September 6, 2017 by xi'an

Today, I read a newly arXived paper by Stephen Gratton on a method called GLASS for General Likelihood Approximate Solution Scheme… The starting point is the same as with ABC or synthetic likelihood, namely a collection of summary statistics and an intractable likelihood. The author proposes to use as a substitute a maximum entropy solution based on these summary statistics and their assumed moments under the theoretical model. What is quite unclear in the paper is whether these assumed moments are available in closed form or not. Otherwise, it would appear as a variant to the synthetic likelihood [aka simulated moments] approach, meaning that the expectations of the summary statistics under the theoretical model and for a given value of the parameter are obtained through Monte Carlo approximations. (All the examples therein allow for closed form expressions.)

## normalising constants of G-Wishart densities

Posted in Books, Statistics on June 28, 2017 by xi'an

Abdolreza Mohammadi, Hélène Massam, and Gérard Letac arXived last week a paper on a new approximation of the ratio of two normalising constants associated with two G-Wishart densities associated with different graphs G. The G-Wishart is the generalisation of the Wishart distribution by Alberto Roverato to the case when some entries of the matrix are equal to zero, which locations are associated with the graph G. While enjoying the same shape as the Wishart density, this generalisation does not enjoy a closed form normalising constant. Which leads to an intractable ratio of normalising constants when doing Bayesian model selection across different graphs. Atay-Kayis and Massam (2005) expressed the ratio as a ratio of two expectations, and the current paper shows that this leads to an approximation of the ratio of normalising constants for a graph G against the graph G augmented by the edge e, equal to

Γ(½{δ+d}) / 2 √π Γ(½{δ+d+1})

where δ is the degree of freedom of the G-Wishart and d is the number of minimal paths of length 2 linking the two end points of e. This is remarkably concise and provides a fast approximation. (The proof is quite involved, by comparison.) Which can then be used in reversible jump MCMC. The difficulty is obviously in evaluating the impact of the approximation on the target density, as there is no manageable available alternative to calibrate the approximation. In a simulation example where such an alternative is available, the error is negligible though.

## density normalization for MCMC algorithms

Posted in Statistics, University life on November 6, 2014 by xi'an

Another paper addressing the estimation of the normalising constant and the wealth of available solutions just came out on arXiv, with the full title of "Target density normalization for Markov chain Monte Carlo algorithms", written by Allen Caldwell and Chang Liu. (I became aware of it by courtesy of Ewan Cameron, as it appeared in the physics section of arXiv.
It is actually a wee bit annoying that papers in the subcategory “Data Analysis, Statistics and Probability” of physics do not get an automated reposting on the statistics lists…) In this paper, the authors compare three approaches to the problem of finding $\mathfrak{I} = \int_\Omega f(\lambda)\,\text{d}\lambda$ when the density f is unormalised, i.e., in more formal terms, when f is proportional to a probability density (and available): 1. an “arithmetic mean”, which is an importance sampler based on (a) reducing the integration volume to a neighbourhood ω of the global mode. This neighbourhood is chosen as an hypercube and the importance function turns out to be the uniform over this hypercube. The corresponding estimator is then a rescaled version of the average of f over uniform simulations in ω. 2.  an “harmonic mean”, of all choices!, with again an integration over the neighbourhood ω of the global mode in order to avoid the almost sure infinite variance of harmonic mean estimators. 3. a Laplace approximation, using the target at the mode and the Hessian at the mode as well. The paper then goes to comparing those three solutions on a few examples, demonstrating how the diameter of the hypercube can be calibrated towards a minimum (estimated) uncertainty. The rather anticlimactic conclusion is that the arithmetic mean is the most reliable solution as harmonic means may fail in larger dimension and more importantly fail to signal its failure, while Laplace approximations only approximate well quasi-Gaussian densities… What I find most interesting in this paper is the idea of using only one part of the integration space to compute the integral, even though it is not exactly new. Focussing on a specific region ω has pros and cons, the pros being that the reduction to a modal region reduces needs for absolute MCMC convergence and helps in selecting alternative proposals and also prevents from the worst consequences of using a dreaded harmonic mean, the cons being that the region needs be well-identified, which means requirements on the MCMC kernel, and that the estimate is a product of two estimates, the frequency being driven by a Binomial noise.  I also like very much the idea of calibrating the diameter Δof the hypercube ex-post by estimating the uncertainty. As an aside, the paper mentions most of the alternative solutions I just presented in my Monte Carlo graduate course two days ago (like nested or bridge or Rao-Blackwellised sampling, including our proposal with Darren Wraith), but dismisses them as not “directly applicable in an MCMC setting”, i.e., without modifying this setting. I unsurprisingly dispute this labelling, both because something like the Laplace approximation requires extra-work on the MCMC output (and once done this work can lead to advanced Laplace methods like INLA) and because other methods could be considered as well (for instance, bridge sampling over several hypercubes). As shown in the recent paper by Mathieu Gerber and Nicolas Chopin (soon to be discussed at the RSS!), MCqMC has also become a feasible alternative that would compete well with the methods studied in this paper. Overall, this is a paper that comes in a long list of papers on constant approximations. I do not find the Markov chain of MCMC aspect particularly compelling or specific, once the effective sample size is accounted for. It would be nice to find generic ways of optimising the visit to the hypercube ω and to estimate efficiently the weight of ω. 
The comparison is solely run over examples, but they all rely on a proper characterisation of the hypercube and the ability to simulate efficiently f over that hypercube. ## this issue of Series B Posted in Books, Statistics, Travel, University life with tags , , , , , , , , , , on September 5, 2014 by xi'an The September issue of [JRSS] Series B I received a few days ago is of particular interest to me. (And not as an ex-co-editor since I was never involved in any of those papers!) To wit: a paper by Hani Doss and Aixin Tan on evaluating normalising constants based on MCMC output, a preliminary version I had seen at a previous JSM meeting, a paper by Nick Polson, James Scott and Jesse Windle on the Bayesian bridge, connected with Nick’s talk in Boston earlier this month, yet another paper by Ariel Kleiner, Ameet Talwalkar, Purnamrita Sarkar and Michael Jordan on the bag of little bootstraps, which presentation I heard Michael deliver a few times when he was in Paris. (Obviously, this does not imply any negative judgement on the other papers of this issue!) For instance, Doss and Tan consider the multiple mixture estimator [my wording, the authors do not give the method a name, referring to Vardi (1985) but missing the connection with Owen and Zhou (2000)] of k ratios of normalising constants, namely $\sum_{l=1}^k \frac{1}{n_l} \sum_{t=1}^{n_l} \dfrac{n_l g_j(x_t^l)}{\sum_{s=1}^k n_s g_s(x_t^l) z_1/z_s } \longrightarrow \dfrac{z_j}{z_1}$ where the z’s are the normalising constants and with possible different numbers of iterations of each Markov chain. An interesting starting point (that Hans Künsch had mentioned to me a while ago but that I had since then forgotten) is that the problem was reformulated by Charlie Geyer (1994) as a quasi-likelihood estimation where the ratios of all z’s relative to one reference density are the unknowns. This is doubling interesting, actually, because it restates the constant estimation problem into a statistical light and thus somewhat relates to the infamous “paradox” raised by Larry Wasserman a while ago. The novelty in the paper is (a) to derive an optimal estimator of the ratios of normalising constants in the Markov case, essentially accounting for possibly different lengths of the Markov chains, and (b) to estimate the variance matrix of the ratio estimate by regeneration arguments. A favourite tool of mine, at least theoretically as practically useful minorising conditions are hard to come by, if at all available. ## recycling accept-reject rejections Posted in Statistics, University life with tags , , , , , , , , , on July 1, 2014 by xi'an Vinayak Rao, Lizhen Lin and David Dunson just arXived a paper which proposes anew technique to handle intractable normalising constants. And which exact title is Data augmentation for models based on rejection sampling. (Paper that I read in the morning plane to B’ham, since this is one of my weeks in Warwick.) The central idea therein is that, if the sample density (aka likelihood) satisfies $p(x|\theta) \propto f(x|\theta) \le q(x|\theta) M\,,$ where all terms but p are known in closed form, then completion by the rejected values of an hypothetical accept-reject algorithm−hypothetical in the sense that the data does not have to be produced by an accept-reject scheme but simply the above domination condition to hold−allows for a data augmentation scheme. Without requiring the missing normalising constant. 
Since the completed likelihood is $\prod_{i=1}^n \dfrac{f(x_i|\theta)}{M} \prod_{j=1}^{m_i} \left\{q(y_{ij}|\theta) -\dfrac{f(y_{ij}|\theta)}{M}\right\}$ A closed-form, if not necessarily congenial, function. Now this is quite a different use of the “rejected values” from the accept reject algorithm when compared with our 1996 Biometrika paper on the Rao-Blackwellisation of accept-reject schemes (which, still, could have been mentioned there… Or Section 4.2 of Monte Carlo Statistical Methods. Rather than re-deriving the joint density of the augmented sample, “accepted+rejected”.) It is a neat idea in that it completely bypasses the approximation of the normalising constant. And avoids the somewhat delicate tuning of the auxiliary solution of Moller et al. (2006)  The difficulty with this algorithm is however in finding an upper bound M on the unnormalised density f that is 1. in closed form; 2. with a manageable and tight enough “constant” M; 3. compatible with running a posterior simulation conditional on the added rejections. The paper seems to assume further that the bound M is independent from the current parameter value θ, at least as suggested by the notation (and Theorem 2), but this is not in the least necessary for the validation of the formal algorithm. Such a constraint would pull M higher, hence reducing the efficiency of the method. Actually the matrix Langevin distribution considered in the first example involves a bound that depends on the parameter κ. The paper includes a result (Theorem 2) on the uniform ergodicity that relies on heavy assumptions on the proposal distribution. And a rather surprising one, namely that the probability of rejection is bounded from below, i.e. calling for a less efficient proposal. Now it seems to me that a uniform ergodicity result holds as well when the probability of acceptance is bounded from below since, then, the event when no rejection occurs constitutes an atom from the augmented Markov chain viewpoint. There therefore occurs a renewal each time the rejected variable set ϒ is empty, and ergodicity ensues (Robert, 1995, Statistical Science). Note also that, despite the opposition raised by the authors, the method per se does constitute a pseudo-marginal technique à la Andrieu-Roberts (2009) since the independent completion by the (pseudo) rejected variables produces an unbiased estimator of the likelihood. It would thus be of interest to see how the recent evaluation tools of Andrieu and Vihola can assess the loss in efficiency induced by this estimation of the likelihood. Maybe some further experimental evidence tomorrow… ## the Poisson transform Posted in Books, Kids, pictures, Statistics, University life with tags , , , , , , , on June 19, 2014 by xi'an In obvious connection with an earlier post on the “estimation” of normalising constants, Simon Barthelmé and Nicolas Chopin just arXived a paper on The Poisson transform for unormalised statistical models. Obvious connection because I heard of the Guttmann and Hyvärinen (2012) paper when Simon came to CREST to give a BiP talk on this paper a few weeks ago. (A connected talk he gave in Banff is available as a BIRS video.) 
Without getting too much into details, the neat idea therein is to turn the observed likelihood $\sum_{i=1}^n f(x_i|\theta) - n \log \int \exp f(x|\theta) \text{d}x$ into a joint likelihood $\sum_{i=1}^n[f(x_i|\theta)+\nu]-n\int\exp[f(x|\theta)+\nu]\text{d}x$ which is the likelihood of a Poisson point process with intensity function $\exp\{ f(x|\theta) + \nu +\log n\}$ This is an alternative model in that the original likelihood does not appear as a marginal of the above. Only the modes coincide, with the conditional mode in ν providing the normalising constant. In practice, the above Poisson process likelihood is unavailable and Guttmann and Hyvärinen (2012) offer an approximation by means of their logistic regression. Unavailable likelihoods inevitably make me think of ABC. Would ABC solutions be of interest there? In particular, could the Poisson point process be simulated with no further approximation? Since the “true” likelihood is not preserved by this representation, similar questions to those found in ABC arise, like a measure of departure from the “true” posterior. Looking forward the Bayesian version! (Marginalia: Siméon Poisson died in Sceaux, which seemed to have attracted many mathematicians at the time, since Cauchy also spent part of his life there…) ## parallel MCMC via Weirstrass sampler (a reply by Xiangyu Wang) Posted in Books, Statistics, University life with tags , , , , , , , , , , , , on January 3, 2014 by xi'an Almost immediately after I published my comments on his paper with David Dunson, Xiangyu Wang sent a long comment that I think worth a post on its own (especially, given that I am now busy skiing and enjoying Chamonix!). So here it is: Thanks for the thoughtful comments. I did not realize that Neiswanger et al. also proposed the similar trick to avoid combinatoric problem as we did for the rejection sampler. Thank you for pointing that out. For the criticism 3 on the tail degeneration, we did not mean to fire on the non-parametric estimation issues, but rather the problem caused by using the product equation. When two densities are multiplied together, the accuracy of the product mainly depends on the tail of the two densities (the overlapping area), if there are more than two densities, the impact will be more significant. As a result, it may be unwise to directly use the product equation, as the most distant sub-posteriors could be potentially very far away from each other, and most of the sub posterior draws are outside the overlapping area. (The full Gibbs sampler formulated in our paper does not have this issue, as shown in equation 5, there is a common part multiplied on each sub-posterior, which brought them close.) Point 4 stated the problem caused by averaging. The approximated density follows Neiswanger et al. (2013) will be a mixture of Gaussian, whose component means are the average of the sub-posterior draws. Therefore, if sub-posteriors stick to different modes (assuming the true posterior is multi-modal), then the approximated density is likely to mess up the modes, and produce some faked modes (eg. average of the modes. We provide an example in the simulation 3.) Sorry for the vague description of the refining method (4.2). The idea is kinda dull. We start from an initial approximation to θ and then do one step Gibbs update to obtain a new θ, and we call this procedure ‘refining’, as we believe such process would bring the original approximation closer to the true posterior distribution. 
The first (4.1) and the second (4.2) algorithms do seem weird to be called as ‘parallel’, since they are both modified from the Gibbs sampler described in (4) and (5). The reason we want to propose these two algorithms is to overcome two problems. The first is the dimensionality curse, and the second is the issue when the subset inferences are not extremely accurate (subset effective sample size small) which might be a common scenario for logistic regression (with large parameters) even with huge data set. First, algorithm (4.1) and (4.2) both start from some initial approximations, and attempt to improve to obtain a better approximation, thus avoid the dimensional issue. Second, in our simulation 1, we attempt to pull down the performance of the simple averaging by worsening the sub-posterior performance (we allocate smaller amount of data to each subset), and the non-parametric method fails to approximate the combined density as well. However, the algorithm 4.1 and 4.2 still work in this case. I have some problem with the logistic regression example provided in Neiswanger et al. (2013). As shown in the paper, under the authors’ setting (not fully specified in the paper), though the non-parametric method is better than simple averaging, the approximation error of simple averaging is small enough for practical use (I also have some problem with their error evaluation method), then why should we still bother to use a much more complicated method? Actually I’m adding a new algorithm into the Weierstrass rejection sampling, which will render it thoroughly free from the dimensionality curse of p. The new scheme is applicable to the nonparametric method in Neiswanger et al. (2013) as well. It should appear soon in the second version of the draft.
2018-01-21 16:46:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8417418003082275, "perplexity": 833.4978348590968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890795.64/warc/CC-MAIN-20180121155718-20180121175718-00122.warc.gz"}
https://www.gamedev.net/forums/topic/145503-va_list-and-/
#### Archived This topic is now archived and is closed to further replies. # va_list and ... ## Recommended Posts i''m trying to simply get a ... from my function to fprintf. i''ve never used this before, and the only example i found on the internet required you to pass the number of args (it was an average function). all i really need is how to get this to work: void blahblah(char * FormatString, ...) { fprintf(FormatString, ???); } what do i put in the ??? ##### Share on other sites C-FAQ question 15.12: How can I write a function which takes a variable number of arguments and passes them to some other function (which takes a variable number of arguments)? In short: You can't. If you really want to do it, you could do it like this: #include <stdio.h>#include <stdarg.h>void blahblah(char *fmt, ...){ va_list argp; va_start(argp, fmt); vfprintf(stdout, fmt, argp); //instead of stdout, you could use a *FILE. va_end(argp);} [edited by - TravisWells on March 16, 2003 11:51:39 PM] ##### Share on other sites oops, i meant to have a FILE *, i just typed that real quick, i think thats it anyway. all i need to do is get the ... from my function to fprintf''s ##### Share on other sites quote: Original post by billybob oops, i meant to have a FILE *, i just typed that real quick, i think thats it anyway. all i need to do is get the ... from my function to fprintf''s Which is what vfprintf() does. The first parameter is FILE * Qui fut tout, et qui ne fut rien ##### Share on other sites oh i get it, sorry about that. thanks • ### Forum Statistics • Total Topics 628379 • Total Posts 2982353 • 10 • 9 • 15 • 24 • 11
2017-11-24 02:27:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.408019483089447, "perplexity": 3414.991957958652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807056.67/warc/CC-MAIN-20171124012912-20171124032912-00144.warc.gz"}
https://hispanicprwire.com/en/la-siguiente-es-una-prueba-789/
This Is a Test From PR Newswire – 789 # This Is a Test From PR Newswire – 789 NEW YORK, Sept. 15, 2015 /PRNewswire/ — This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This Is a Test From PR Newswire This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a test from PR Newswire. This is a Test This is a Test 20014 2015 2014 2015 This is a Test: This is a Test \$                     – \$                    5 \$                     – \$                    5 This is a Test – 48 21 48 This is a Test – 53 21 53 This is a Test from PR Newswire.™ This is a Test – 2 – 2 This is a Test – 20 15 20 This is a Test® – 22 15 22 This is a Test – 31 6 31 This is a Test: This is a Test 1,252 3,127 4,209 3,127 This is a Test 503% 396% 830% 396% This is a Test 1,654 3,481 6,536 3,481 This is a Test from PR Newswire. This is a Test from PR Newswire 3,409 8,004 11,575 8,004 This is a Test – – (382) – This is a Test (3,409) (7,973) (11,951) (7,973) This is a Test (1,050) (1,382) (920) (1,382) This is a Test (4,459) (9,355) (12,871) (9,355) This is a Test (2)% 48% (2)% 48% This is a Test \$            (4,461) \$            (9,307) \$          (12,873) \$            (9,307) This is a Test from PR Newswire (1) \$             (0.70) \$             (1.46) \$             (3.28) \$             (1.46) This is a Test1 6,381,507 6,380,847 3,924,466 6,380,847 Source: PR Newswire: Test
2023-03-30 11:09:57
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.801173985004425, "perplexity": 4501.483127848002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949181.44/warc/CC-MAIN-20230330101355-20230330131355-00655.warc.gz"}
https://www.gamedev.net/forums/topic/555083-d3dxvec3normalize-linker-error/
This topic is 3005 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts Ok so I'm getting started with learning DirectX 10 from a book I have. As I'm going through the example everything works except the line of code dealing with the D3DXVec3Normalize function. Here's the sample code. #include <d3dx10.h> #include <iostream> using namespace std; ostream& operator<<(ostream &os, D3DXVECTOR3& v) { os << "(" << v.x << ", " << v.y << ", " << v.z << ")"; return os; } int main() { D3DXVECTOR3 u(1.0f, 2.0f, 3.0f); float x[3] = {-2.0f, 1.0f, -3.0f}; D3DXVECTOR3 v(x); D3DXVECTOR3 a, b, c, d, e; a = u + v; b = u - v; c = u * 10; float L = D3DXVec3Length(&u); D3DXVec3Normalize(&d, &u); float S = D3DXVec3Dot(&u, &v); D3DXVec3Cross(&e, &u, &v); cout << "u = " << u << endl; cout << "v = " << v << endl; cout << "a = u + v = " << a << endl; cout << "b = u - v = " << b << endl; cout << "c = u * 10 = " << c << endl; cout << "d = u / ||u|| = " << d << endl; cout << "e = u x v = " << e << endl; cout << "L = ||u|| = " << L << endl; cout << "S = u.v = " << S << endl; } If I comment out the D3DXVec3Normalize line it links just fine. Any thoughts as to what I might have missing? Edit: Just realized that this does work in release mode but not in debug mode. Does that change anything? I put both d3dx10.lib and d3dx10d.lib in the additional dependencies. Did I miss something else? ##### Share on other sites If you use debug you should take the following lib: d3dx10d.lib ##### Share on other sites Bah I'm an idiot. I put the DirectX SDK path into the additional libraries section of the project options->Linker->General section. After removing that it started working. Thanks for pointing out the d3dx10d.lib.
2018-02-22 23:04:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3319588899612427, "perplexity": 2899.8350717340336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814292.75/warc/CC-MAIN-20180222220118-20180223000118-00109.warc.gz"}
https://megaexams.com/questions/SSC/Average/
# SSC Explore popular questions from Average for SSC. This collection covers Average previous year SSC questions hand picked by experienced teachers. ## Finance and Economics Average Correct Marks 2 Incorrectly Marks -0.5 Q 1. A student was asked to find the arithmetic mean of the following 12 numbers : {tex} 3, 11, 7,9,15,13,8,19,17, 21, 14{/tex} and {tex} \mathrm x {/tex} He found the mean to be {tex} 12 . {/tex} The value of {tex} \mathrm { x } {/tex} will be: A 3 7 C 17 D 31 ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 2. The average of the marks obtained in an examination by 8 students was 51 and by 9 other students was {tex} 68 . {/tex} The average marks of all 17 students was : A 59 B 59.5 60 D 60.5 ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 3. If the average marks of three batches of 55,60 and 45 students respectively is 50,55 and 60, then the average marks of all the students is 54.68 B 53.33 C 55 D None of these ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 4. The average of 30 results is 20 and the average of other 20 results is 30 . What is the average of all the results? 24 B 48 C 25 D 50 ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 5. If the average weight of 6 students is {tex} 50 \mathrm { kg } {/tex}; that of 2 students is {tex} 51 \mathrm { kg } {/tex}; and that of other 2 students is {tex} 55 \mathrm { kg } {/tex}; then the average weight of all students is A {tex} 61\ \mathrm { kg } {/tex} B {tex} 51.5\ \mathrm { kg } {/tex} C {tex} 52\ \mathrm { kg } {/tex} {tex} 51.2\ \mathrm { kg } {/tex} ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 6. The average of 10 numbers is 7. If each number is multiplied by {tex} 12 , {/tex} then the average of the new set of numbers will be A 7 B 19 C 82 84 ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 7. The average income of 40 persons is Rs. 4200 and that of another 35 persons is Rs. 4000. The average income of the whole group is : A Rs. 4100 B Rs. {tex} 4106 \frac { 1 } { 3 } {/tex} Rs. {tex} 4106 \frac { 2 } { 3 } {/tex} D Rs. {tex} 4108 \frac { 1 } { 3 } {/tex} ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 8. The average weight of five persons sitting in a boat is {tex} 38 \mathrm { kg } {/tex}. The average weight of the boat and the persons sitting in the boat is {tex} 52 \mathrm { kg } {/tex}. What is the weight of the boat? A {tex} 228 \mathrm { kg } {/tex} {tex} 122 \mathrm { kg } {/tex} C {tex} 232 \mathrm { kg } {/tex} D {tex} 242 \mathrm { kg } {/tex} ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 9. The average marks of 32 boys of section A of class {tex} X {/tex} is 60 whereas the average marks of 40 boys of section {tex} B {/tex} of class {tex} X {/tex} is {tex} 33 . {/tex} The average marks for both the sections combined together is A 44 45 C {tex} 46 \frac { 1 } { 2 } {/tex} D {tex} 45 \frac { 1 } { 2 } {/tex} ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 10. Total weekly emoluments of the workers of a factory is Rs {tex} 1534 . {/tex} Average weekly emolument of a worker is Rs {tex} 118 {/tex}. The number of a workers in the factory is : A 16 B 14 13 D 12 ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 11. 12 kg of rice costing Rs. 30 per kg is mixed with 8 kg of rice costing Rs. 40 per kg. The average per kg price of mixed rice is A Rs. 38 B Rs. 37 C Rs. 35 Rs. 34 ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 12. 
If average of 20 observations {tex} x _ { 1 } {/tex}, {tex} x _ { 2 } , \ldots . , x _ { 20 } {/tex} is {tex} y , {/tex} then the average of {tex} x _ { 1 } - 101 , x _ { 2 } - 101 , x _ { 3 } - {/tex} {tex} 101 , \ldots . . , x _ { 20 } - 101 {/tex} is A {tex} y - 20 {/tex} {tex} y - 101 {/tex} C {tex} 20 y {/tex} D {tex} 101 y {/tex} ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 13. The average of {tex} x {/tex} numbers is {tex} y {/tex} and average of {tex} y {/tex} numbers is {tex} x {/tex}. Then the average of all the numbers taken together is A {tex} \frac { x + y } { 2 x y } {/tex} {tex} \frac { 2 x y } { x + y } {/tex} C {tex} \frac { x ^ { 2 } + y ^ { 2 } } { x + y } {/tex} D {tex} \frac { x y } { x + y } {/tex} ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 14. The average of {tex} x {/tex} numbers is {tex} y ^ { 2 } {/tex} and the average of {tex} y {/tex} numbers is {tex} x ^ { 2 } . {/tex} So the average of all the numbers taken together is A {tex} \frac { x ^ { 3 } + y ^ { 3 } } { x + y } {/tex} {tex} x y {/tex} C {tex} \frac { x ^ { 2 } + y ^ { 2 } } { x + y } {/tex} D {tex} x y ^ { 2 } + y x ^ { 2 } {/tex} ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 15. The average of {tex} n {/tex} numbers {tex} x _ { 1 } , x _ { 2 } , {/tex} {tex} \ldots x _ { n } {/tex} is {tex} \bar { x } {/tex}. Then the value of {tex}\displaystyle \sum _ { i = 1 } ^ { n } \left( x _ { i } - \bar { x } \right) {/tex} is equal to A {tex} n {/tex} 0 C {tex} n \bar { x } {/tex} D {tex} \bar { x } {/tex} ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 16. A man bought 13 articles at {tex} Rs. 70 {/tex} each, 15 at {tex} Rs. 60 {/tex} each and 12 at {tex} Rs. 65 {/tex} each. The average price per article is A Rs. 60.25 Rs. 64.75 C Rs. 65.75 D Rs. 62.25 ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 17. A library has an average number of 510 visitors on Sunday and 240 on other days. The average number of visitors per day in a month of 30 days beginning with Sunday is: 285 B 295 C 300 D 290 ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 18. The average of 30 numbers is 40 and that of other 40 numbers is 30. The average of all the numbers is {tex} 34 \frac { 2 } { 7 } {/tex} B 35 C 34 D 34.5 ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 19. The average of 20 numbers is 15 and the average of first five is {tex} 12 . {/tex} The average of the rest is 16 B 15 C 14 D 13 ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 20. The average monthly expenditure of a family is {tex} Rs. 2,200 {/tex} during first three months, {tex} Rs. 2,550 {/tex} during next four months and {tex} Rs. 3,120 {/tex} during last five months of the year. If the total savings during the year was {tex} Rs. 1,260 , {/tex} what is the average monthly income? A {tex} Rs. 1,260 {/tex} B {tex} Rs. 1,280 {/tex} {tex}Rs. 2,805 {/tex} D {tex}Rs. 2,850{/tex} ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 21. Find the average of {tex} 1.11,0.01 , {/tex} {tex} 0.101,0.001,0.11 {/tex} 0.2664 B 0.2554 C 0.1264 D 0.1164 ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 22. 4 boys and 3 girls spent Rs. 120 on the average, of which boys spent Rs. 150 on the average. Then the average amount spent by the girls is Rs. 80 B Rs. 60 C Rs. 90 D Rs. 100 ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 23. Six tables and twelve chairs were bought for {tex} Rs. 7,800 {/tex}. If the average price of a table is {tex} Rs. 
750 , {/tex} then the average price of a chair would be A Rs. 250 Rs. 275 C Rs. 150 D Rs. 175 ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 24. Out of 20 boys, 6 are each of {tex}1\ \mathrm { m }\ 15\ \mathrm { cm } {/tex} height, 8 are of {tex}1\ \mathrm { m }\ 10\ \mathrm { cm } {/tex} and rest of {tex}1\ \mathrm { m }\ 12\ \mathrm { cm } {/tex}. The average height of all of them is {tex} 1\ \mathrm { m }\ 12.1\ \mathrm { cm } {/tex} B {tex} 1\ \mathrm { m }\ 21.1\ \mathrm { cm } {/tex} C {tex} 1\ \mathrm { m }\ 21\ \mathrm { cm } {/tex} D {tex} 1\ \mathrm { m }\ 12\ \mathrm { cm } {/tex} ##### Explanation Correct Marks 2 Incorrectly Marks -0.5 Q 25. There are two groups A and B of a class, consisting of 42 and 28 students respectively. If the average weight of group A is {tex} 25 \mathrm { kg } {/tex} and that of group {tex} \mathrm { B } {/tex} is {tex} 40 \mathrm { kg } {/tex}, find the average weight of the whole class. A {tex} 69\ \mathrm { kg } {/tex} {tex} 31\ \mathrm { kg } {/tex} C {tex} 70\ \mathrm { kg } {/tex} D {tex} 30\ \mathrm { kg } {/tex}
2021-03-05 17:08:39
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9010358452796936, "perplexity": 5652.634617125331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178373095.44/warc/CC-MAIN-20210305152710-20210305182710-00010.warc.gz"}
https://direct.mit.edu/neco/article/31/4/738/8457/Filtering-Compensation-for-Delays-and-Prediction
Abstract Compensating for sensorimotor noise and for temporal delays has been identified as a major function of the nervous system. Although these aspects have often been described separately in the frameworks of optimal cue combination or motor prediction during movement planning, control-theoretic models suggest that these two operations are performed simultaneously, and mounting evidence supports that motor commands are based on sensory predictions rather than sensory states. In this letter, we study the benefit of state estimation for predictive sensorimotor control. More precisely, we combine explicit compensation for sensorimotor delays and optimal estimation derived in the context of Kalman filtering. We show, based on simulations of human-inspired eye and arm movements, that filtering sensory predictions improves the stability margin of the system against prediction errors due to low-dimensional predictions or to errors in the delay estimate. These simulations also highlight that prediction errors qualitatively account for a broad variety of movement disorders typically associated with cerebellar dysfunctions. We suggest that adaptive filtering in cerebellum, instead of often-assumed feedforward predictions, may achieve simple compensation for sensorimotor delays and support stable closed-loop control of movements. 1  Introduction The apparent ease with which we perform most daily tasks masks the difficulties of movement control. Even simple tasks such as reaching to grasp an object or performing a saccade toward a visual stimulus are supported by efficient closed-loop neural control. Several challenges arise for the brain, of which the following two have been the focus of much research effort: the compensation for sensorimotor delays and the filtering of sensorimotor noise. Although these two functions have often been dissociated experimentally, theoretical considerations and recent results indicate that estimation and prediction are performed simultaneously in the brain. On the one hand, sensorimotor delays are too long to explain skillful motor behavior based on delayed-feedback control only (Crevecoeur & Scott, 2014; Kawato, 1999), which suggests that the brain uses internal models and predictions (Miall & Wolpert, 1996; Wolpert & Ghahramani, 2000). One proposed mechanism for predictive control was the Smith predictor (Miall, Weir, Wolpert, & Stein, 1993). In this framework, a model of the biomechanical system is used to predict the consequences of motor commands ahead of sensory feedback. Movements are then controlled based on internal estimates, which removes the delay from the feedback loop provided that the internal model is correct (Zhong, 2010). One difficulty with this approach, and with other prediction techniques, is that they are excessively sensitive to model errors, such that an arbitrarily small mismatch in model parameters can lead to unstable closed-loop control (Michiels & Niculescu, 2007; Mondié & Michiels, 2003). Clearly, assuming that the brain has perfect knowledge of the peripheral motor system is not realistic. Thus, although the Smith predictor captures the principles of predictive neural control, it is unlikely to be implemented in neural circuits as such. Another compensatory mechanism must be at play to mitigate the impact of prediction errors. 
On the other hand, with regard to sensorimotor noise, a wealth of research has shown that the brain uses near-optimal estimates of variables such as hand location or of forces applied to the limb, obtained by combining prior knowledge with available sensory information in a statistically efficient manner (Angelaki, Gu, & DeAngelis, 2009; Koerding, 2007). This function has been described in the framework of Bayesian statistics in numerous studies on perceptual or motor decisions (Wolpert & Landy, 2012), but also, and importantly, during online control (Izawa & Shadmehr, 2008; Kording & Wolpert, 2004; Wolpert, Ghahramani, & Jordan, 1995). While these previous studies highlighted an efficient handling of uncertainty in the environment and in the nervous system, it is still critical to take sensorimotor delays into account to avoid unstable closed-loop control (Crevecoeur & Scott, 2013). Consistent with this hypothesis, recent results have shown that the combination of internal priors with sensory feedback depends not only on the variance of the feedback signals but also on their temporal delays (Crevecoeur, Munoz, & Scott, 2016). Thus, predictive control requires compensation for prediction errors, and optimal estimation must include a compensation for temporal delays. In theory, an optimal (Bayesian) integration of prior and delayed feedback is achieved by estimating the present and past states, which for discrete time systems can be achieved in an augmented system (Challa, Evans, & Wang, 2003). The integration of multiple, delayed measurements can also be optimally integrated using the same approach. However, system augmentation is suitable for digital control, and it is difficult to translate it to continuous time systems or to systems for which present and past estimates may not be jointly available, as in the sensorimotor system. In such a case, when optimal delay compensation cannot be achieved, prediction errors unavoidably occur, and the question arises as to whether closed-loop control remains feasible. Here we investigate this question in theory by analyzing the impact of approximate delay compensation in closed-loop control models of eye and arm movements subject to multiplicative noise in the command (Jones, Hamilton, & Wolpert, 2002) and affected by temporal delays compatible with each motor system. A seminal study by Miall and colleagues (1993) addressed how a mismatch between the controller and plant delays affected the behavior of simple systems, such as low-pass filters and integrators combined with single gain control. This study also considered relatively long delays ($≥$150 ms) in comparison with visual ($∼$100 ms) or somatosensory delays ($∼$50 ms in the upper limb) (Scott, 2016). In addition, this formalism did not include stochastic disturbances or uncertainty in the process, which is known to affect planning and control (Harris & Wolpert, 1998; Todorov & Jordan, 2002). To address these limitations, we revisit the problem of predictive control subject to prediction errors in stochastic control models of eye and arm movements. Our approach to this problem is based on finite spectrum assignment (FSA) (Michiels & Niculescu, 2007; Niculescu & Gu, 2004), which in the context of linear and deterministic systems can be stabilized by using a low-pass filter (Mondié & Michiels, 2003). 
Following the approach of Mondié and Michiels (2003), we show that probabilistic filtering of the predicted state can compensate for prediction errors, including errors that result from a low-dimensional approximation of the system dynamics or from a delay mismatch between the observer and the true system. Our analyses of the interplay between filtering and prediction errors provide insights into biologically plausible control mechanisms. Indeed, we show that delay compensation may be performed by low-pass filtering of motor commands over the delay interval (such as a moving average), which is easy to link to adaptive filtering in neural circuits. Furthermore, we illustrate how combining priors and predictions improves the stability margin of predictive control and how errors in the delay generate over- or undershoots qualitatively comparable to human movements. Thus, approximate delay compensation can still generate efficient movements in human-inspired models. In addition, this approach captures explicitly the influence of both uncertainty and time in state estimation, which represents a useful framework for modeling the impact of these variables during sensorimotor control. Finally, we found that prediction errors generated a wide spectrum of movement dysmetria that typically characterize disorders observed in disease and, in particular, in cerebellar ataxia and essential tremor. In this regard, adaptive filtering for stable closed-loop control may describe the function of cerebellum more accurately than the often-assumed feedforward predictions. We first present the problem definition and provide the derivation of the control algorithm. The section continues with an illustration based on a Langevin process, which shows analytically how the model captures the dependency of the optimal filter on both signal variances and temporal delays. We then illustrate the performance of the algorithm on bio-inspired models of eye and upper limb movements to investigate the robustness of this control algorithm against prediction errors. 2  Model 2.1  Definitions and Assumptions We consider a class of linear-time-invariant stochastic processes suitable for modeling simple systems such as the control of a single joint or the control of eye movements. Let $W(t)$ represent a $p$-dimensional Wiener process with unit variance. The class of systems considered is as follows: $dx(t)=(Ax(t)+Bu(t))dt+α(t)dW,$ (2.1) $y(t)=x(t-δt)+σ(t),$ (2.2) where $x(t)∈Rn$ is the state vector, $u(t)∈Rm$ is the control input, and the matrices $A$ and $B$ are known, constant, and of appropriate dimension. The function $α(t)$ is an $n×p$-dimensional, real-valued function multiplying the instantaneous variation of $W(t)$. We assume that $α(t)$ behaves sufficiently well that the noise disturbances exist for all sample paths and have finite variance over $δt$. The derivation that follows is valid for any such function $α(t)$. In particular, we will allow $α(t)$ to depend on the control function, $u(t)$, in order to capture the property of sensorimotor systems that the intensity of motor noise increases with the motor commands (Harris & Wolpert, 1998; Jones et al., 2002) (i.e., signal-dependent noise; for more details, see section 5, equation 5.3). The variable $σ(t)$ represents an $n$-dimensional zero-mean gaussian random variable with covariance $Σ$. Thus equation 2.2 expresses that the current sensory feedback and the delayed state ($y(t)$ and $x(t-δt)$, respectively) have joint gaussian distribution. 
Finally, we assume without loss of generality that $Σ$ is positive definite. The problem consists in deriving a feedback controller that minimizes a quadratic cost function. This problem can be solved using linear quadratic gaussian control (LQG) (Astrom, 1970; Phillis, 1985; Todorov, 2005), which gives a control law of the form $u(t)=L(t)x^(t)$, where $x^(t)$ is an estimate of the present state of the system. The challenge is to derive the optimal estimate in the presence of noise and measurement delays. 2.2  Optimal Solution Based on System Augmentation System augmentation reduces the delayed system to the nondelayed case and allows a standard solution derived in the context of stochastic optimal control (Fridman, 2014). In practice, the system is discretized, and a novel state vector is defined as follows (Crevecoeur & Scott, 2013): $zt=[xtT,xt-1T,…,xt-hT]T,$ (2.3) with the subscript $t$ representing the time step. Each time step corresponds to a discretization interval of $dt$, and the parameter $h$ represents the delay expressed in number of time steps, that is, $hdt=δt$. Thus, the size of the augmented state corresponds to the size of the state multiplied by the number of discrete steps in the delay interval. This approach allows optimal compensation for temporal delays; however, it is clearly an artificial technique that is suitable for digital applications. First, it is based on the key assumption that the delay is fixed and known, which is not desirable in this context, as we are interested in how the nervous system can handle errors in the prediction or in the estimate of the delay. In addition, the delay compensation is performed by updating the time series included in $zt$ through the block structure of the state estimator, which requires storing present and past estimates. In this study, we investigate the impact of prediction error that may arise from approximate representation of the dynamics during the delay interval or from errors in the estimate of the delay. Thus, we derive a near-optimal closed-loop controller, that allows manipulating the delay compensation directly. It is built on the insight that system augmentation by design achieves extrapolation of delayed sensory information to the present. Our proposed approach is thus based on an explicit extrapolation of the state measurements. 2.3  Filtering over Predictions Here we derive an estimator that extrapolates the current sensory feedback explicitly, prior to performing optimal state estimation. Our approach is based on the premise that for deterministic systems, low-pass filtering of the predicted state is known to stabilize predictive control against prediction errors (Mondié & Michiels, 2003; Niculescu & Gu, 2004). We thus derive a similar estimate in the context of stochastic processes and use the statistics of the predicted state to design an optimal filter (Kalman filter) and mitigate the impact of prediction errors. The current state measurement must first be extrapolated, which is obtained by solving a boundary value problem with uncertainty in the initial condition. 
Defining $M(t)=etA$, the unknown solution of equation 2.1 over the delay interval with initial condition $x(t-δt)$ is given by $x(t)=M(δt)x(t-δt)+∫t-δttM(t-s)Bu(s)ds+∫t-δttM(t-s)α(s)dW.$ (2.4) The extrapolation of the delayed state measurement computed by the controller is given by the following expression, which takes into account that the third term in equation 2.4 is zero on average: $x^(t|y):=M(δt)y(t)+∫t-δttM(t-s)Bu(s)ds.$ (2.5) It is easy to rewrite the extrapolation as a function of the true state, which directly gives the extrapolation error, $e(t)$: $x^(t|y)=x(t)+e(t),$ (2.6) $e(t)=M(δt)σ(t)-∫t-δttM(t-s)α(s)dW.$ (2.7) Thus, the extrapolation error is gaussian with zero mean and variance $V(e(t))$ given by $V(e(t))=M(δt)ΣM(δt)T+∫t-δttM(t-s)α(s)α(s)TM(t-s)Tds.$ (2.8) Note that the cross products of $σ(t)$ and $dW$ are zero on average, and the expected value of $dW2$ is by definition equal to $dt$. The variance of $e(t)$ has two terms. The first term captures the uncertainty associated with the measurement used as an initial condition. Indeed, it expresses how the measurement noise $σ(t)$ propagates through the systems' unforced dynamics over $δt$. It is worth mentioning that the uncertainty associated with $σ(t)$ decays exponentially, remains constant, or increases in the dimensions in which the matrix $A$ has negative, zero, or positive eigenvalues, respectively. The second term is the variance induced by the process noise, and thus it is a function of the system dynamics captured in $M(.)$, and dependent on $α(t)$. We can now derive a minimum-variance linear estimate, by using $x^(t|y)$ as measurement instead of $y(t)$ and applying the following well-known result (Anderson & Moore, 1979): let $X$ and $Y$ have joint gaussian distribution, defined as $XY∼NμXμY,ΣXXΣXYΣYXΣYY.$ Then the conditional distribution of $X$ given $Y$ is gaussian and is given by $X|Y∼NμX|Y,ΣX|Y,μX|Y=μX+ΣXYTΣYY-1(Y-μY),ΣX|Y=ΣXX-ΣXYΣYY-1ΣYX.$ The optimal filter over the predicted state is thus obtained by applying this result to calculate the posterior distribution of the next estimate given the current estimate and the extrapolated measurement (see equations 2.6 and 2.7). The whole estimation algorithm is as follows: 1. The initial state estimate is assumed to have known gaussian distribution: $x^(0)∼N(x0,Σ0).$ 2. The true state trajectory over a small discretization interval of $dt$ is $x(t+dt)=Adx(t)+Bdu(t)+ξt,$ where $Ad:=M(dt)$, $Bd:=∫0dtM(s)dsB$, and $ξt=α(t)dW$. With this definition, we have that $ξt∼N(0,α(t)α(t)Tdt)$. 3. The one-step prediction of the state at the next time step is computed by simulating the system dynamics over one time interval of $dt$: $x^P(t+dt)=Adx^(t)+Bdu(t).$ 4. The extrapolation of sensory signals $x^(t|y)$ and its covariance matrix $V(e(t))$ are computed through equations 2.5 and 2.8. 5. Finally, we use the expression of the conditional distribution of gaussian random variables given above to combine the one-step prediction ($x^P(t+dt))$ with the extrapolated state ($x^(t|y))$, which gives the following recurrence and completes the definition of the estimation algorithm: $Σx(0)=Σ0,$ (2.10) $K(t)=AdΣx(t)[Σx(t)+V(e(t))]-1,$ (2.11) $x^(t+dt)=x^P(t+dt)+K(t)[x^(t|y)-x^(t)],$ (2.12) $Σx(t+dt)=AdΣx(t)AdT+E(ξtξtT)-K(t)Σx(t)AdT.$ (2.13) Observe that the filter is well defined because $Σ$ is positive definite from assumptions and $Σx(t)$ is nonnegative; hence, $V(e(t))$ is also positive definite. 
This can be verified by expanding equation 2.13 and using the identity that $Σx+V(e)>Σx$ in the sense of Loewner ordering. Thus the matrix inverted in equation 2.11 is positive definite. Observe also that this filter is equivalent to a standard Kalman filter when there is no delay, as equation 2.5 computed when $δt=0$ gives $x^(t|y)=y(t)$. It is thus a natural extension of the Kalman filter to systems subject to measurement delays. The filter defined above is based on a prediction of the state for simplicity (FSA), as we consider a state-feedback controller. The difference between FSA and the Smith predictor is that FSA predicts the state of the system, whereas the Smith predictor predicts the output signal, which can be a linear function of the state. With the output signal of equation 2.2, these two approaches are identical in the context of this letter. When a more general output equation of the form $y'(t)=Hx(t-δt)+σ(t)$ is considered, it is necessary to reconstruct the state vector at $t-δt$ with an observer prior to the extrapolation step (see equation 2.5). Note that such an observer need not be optimal to derive the filter above, provided that its output corresponds to equation 2.2 as is the case for general observers. 2.4  Optimal Control Thus far, we have derived an optimal filter for the predicted state, assuming that the control function $u(t)$, was known. We now derive the optimal controller separately assuming that an optimal estimate of the state is available through the recurrence above (see equations 2.10 to 2.13). It is important to emphasize that this approach is based on a heuristic assumption that the separation principle applied, allowing us to solve the estimation and control problems independently. However, this assumption is not true in general. For instance, when considering multiplicative noise as in sensorimotor systems (e.g., $α(t)∝u(t))$, the optimal control and optimal filter depend on each other. In the linear case, the estimator and controller can numerically be approximated by iterating the sequence of nonadaptive Kalman gains and control gains until convergence (Phillis, 1985; Todorov, 2005). Our approach is based on first deriving the control gains as if the separation principle applied, and then optimizing the Kalman gains relative to the ongoing control function in each simulation run through the time-varying $V(e(t))$. Thus, our approach is globally suboptimal for linear systems, but we show in the next section that the loss is not substantial. The advantages are that it allows adapting the estimate online following modeled but unexpected disturbances without recalculating the control and estimation gains. It also handles general noise functions captured in $α(t)$. We consider a finite-horizon control problem that consists in minimizing a cost function of the following form (expressed here in discrete time): $J(x,u)=ExNTQNxN+∑t=1N-1xtTQtxt+utTRtut,$ (2.14) where $N$ is the time horizon in number of time steps; $Qt$ and $Rt$ are nonnegative and positive-definite cost matrices, respectively; and $E[.]$ denotes the expected value of the argument. Assuming additive noise, an optimal controller that minimizes $J(x,u)$ can be obtained in the framework of stochastic optimal control as follows (Astrom, 1970; Todorov, 2005): $u(t)=-(R+BdS(t+dt)BdT)-1BdTS(t+dt)Ax^(t).$ (2.15) The matrices $S(t)$ are computed offline through a backward-time recurrence starting with $S(Ndt)=QN$ (see Todorov, 2005, for details). 
2.5  Low-Dimensional Approximation and Implementation The ada-ptive filter defined in equations 2.10 to 2.13 is infinite dimensional because in the estimate $x^(t|y)$ defined by equation 2.5, $u(t)$ is integrated over $δt$, and the infinite-dimensional property of the integral comes from the fact that it is defined on a space of continuous functions. In order to handle a low-dimensional estimator design, the extrapolation must be replaced by a finite-dimensional approximation of the integral, or quadrature rule. One such rule can be obtained by dividing the delay interval of $δt$ into $h$ subintervals of $dt$ and by summing over each time step of $dt$ used in the computer simulation. This approach corresponds to the following quadrature rule: $x^(t|y)=M(δt)y(t)+∑k=1hM(t-kdt)Bu(kdt)dt,$ (2.16) using $h=δt/dt$ (see section 5 for numerical values). Observe that in principle, this quadrature rule has the same dimension as the system augmentation technique (see equation 2.3) because we used the same discretization step, and the number of steps in the quadrature rule corresponds to the number of states used in the augmentation technique. Thus, both the number of operations in equation 2.16 and the number of operations involved in the augmentation technique scale with the dimension of the state and the number of time steps during the delay interval. We may reduce the dimension of the approximation by considering that the motor command is constant over $δt$ and define $Bδ:=∑k=1hM(t-kdt)B$. This yields the following one-step approximation of $x^(t|y)$: $x^(t|y)≅M(δt)y(t)+Bδu¯s.$ (2.17) The variable $u¯s$ is a constant approximating the control function $u(s)$ for $t-δt≤s≤t$. We use the average of the control function, that is, $u¯s=(δt)-1∫t-δttu(s)ds$, which is also infinite-dimensional in principle for continuous time models but easier to approximate than equation 2.5 in neural circuits through low-pass filtering or leaky integrators with appropriate time constants. Thus far, our developments consisted in replacing the augmentation technique by an explicit extrapolation of the state measurement. The dimensionality of the augmentation corresponds to the number of time steps in the quadrature rule presented in equation 2.16 when the same discretization step is used. Importantly, this dimension can be reduced by approximating the sum in equation 2.16 in one step, which can be done in theory, and approximated in practice with a summary version of $u(s)$, for $t-δt≤s≤t$ s. These expressions will allow us to investigate the robustness of the closed-loop controller against extrapolation errors. In addition, we will also investigate the robustness against an error in the internal estimate of the delay by assuming that the controller uses $δt^≠δt$, which has an impact on the estimate in equation 2.17 through the matrices $M(δt^)$ and $Bδ$. We will show that the state estimator defined in equations 2.10 to 2.13, combined with the low-dimensional quadrature rule given in equation 2.17, constitutes a good candidate model of neural compensation for sensorimotor delays. Indeed, this algorithm involves only multiplications of matrices and vectors, along with an operation that approximates $u(s)$ over $t-δt≤s≤t$. A schematic block representation of this estimator is shown in Figure 1. 
The inputs to the pathway of Figure 1 consist of the motor command, available through the corollary discharge, the state estimate, and the sensory feedback, which are all available in the brain (Shadmehr, Smith, & Krakauer, 2010; Wolpert, Diedrichsen, & Flanagan, 2011; Wurtz, 2008). The operations are matrix-vector multiplications and additions, plus the averaging, or low-pass filtering, represented by the rectangular box. The remaining computational operations consist in adjusting $K$ over time as a function of $u(t)$ according to equations 2.11 and 2.13. In order to validate this schematic pathway for studying the impact of prediction errors during motor control, we must show that it generates behaviors compatible with biological movements. This is done in the next section.

Figure 1: Model pathway for delay compensation in sensorimotor control based on the algebraic operations associated with state estimation. The blue symbols refer to the variables and matrices as defined in the text. The explicit dependencies of $M$ on $δt$ and of $K$ on $t$ were omitted for clarity. The rectangle represents a low-pass filter, or leaky integrator, which approximates $u(s)$ for $t-δt\le s\le t$.

3  Applications

3.1  Langevin Process

To provide intuition about how dynamics, variance, and delays affect the uncertainty about the prediction (see equation 2.5), we begin this section with a scalar example for which an analytical expression can be derived, showing in a simple formula how delay compensation affects state estimation when uncertainty about the delay interval is considered. Langevin's equation describes the velocity of a particle subject to random disturbing forces and damped by a frictional force. The scalar stochastic differential equation for the one-dimensional velocity is

$dx=-Gx\,dt+α\,dW$, (3.1)

where $G>0$ represents the viscous constant and $α$ is the intensity of the process noise capturing the random perturbations. The state measurement is defined as in equation 2.2. Because $α$ and $Σ$ are constant in this example, the variance of the extrapolation error is also constant. We call it $V_e$. This variance can be written, after rearranging terms, as follows:

$V_e=\frac{α^2}{2G}+\left(Σ-\frac{α^2}{2G}\right)e^{-2Gδt}$. (3.2)

It can be observed that $V_e$ is equal to $Σ$ when $δt=0$, and it is asymptotically equal to $α^2/2G$ as $δt\to+\infty$. Thus, $V_e$ increases, remains constant, or decreases with the delay, depending on whether $Σ$ is smaller than, equal to, or greater than $α^2/2G$. The fact that the extrapolation variance decreases over time when $Σ>α^2/2G$ is counterintuitive, as it suggests that the extrapolation becomes more reliable with longer delays. This is true in principle; however, the reason the extrapolation becomes more reliable is simply that it tends to zero due to the stable dynamics. As a consequence, the true state and the estimate converge to zero.
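Equation 3.2 is easy to explore numerically. The short sketch below plots $V_e$ against the delay for a few parameter values chosen by me for illustration (they are not taken from the paper), showing the three regimes discussed above.

```python
import numpy as np
import matplotlib.pyplot as plt

def extrapolation_variance(G, alpha, Sigma, delay):
    """Extrapolation-error variance of eq. 3.2 for the scalar Langevin process."""
    stationary = alpha**2 / (2.0 * G)
    return stationary + (Sigma - stationary) * np.exp(-2.0 * G * delay)

delays = np.linspace(0.0, 1.0, 200)
for Sigma in (0.02, 0.045, 0.1):   # below, at, and above alpha^2/(2G) for G=1, alpha=0.3
    plt.plot(delays, extrapolation_variance(G=1.0, alpha=0.3, Sigma=Sigma, delay=delays),
             label=f"Sigma = {Sigma}")
plt.xlabel("delay (s)"); plt.ylabel("V_e"); plt.legend(); plt.show()
```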
In any case, even when $Σ>α^2/2G$, it is always useful to integrate measurements of the state in the state estimator, because $\hat{x}(t|y)$ and $x(t)$ are correlated (see equation 2.6), and thus the projection of $x(t)$ onto the linear space generated by $\hat{x}(t|y)$ will always decrease the estimation error.

3.2  Simulation of Human Movements

We now present simulations of saccadic eye movements and single-joint reaching of the upper limb with mechanical perturbations. These examples were chosen because the biomechanical plant can be approximated with a linear model and thus corresponds to the class of stochastic processes defined in equation 2.1. The extrapolation was based on the two rules given in equations 2.16 and 2.17. We then varied the delay parameter used in the estimation ($\hat{δt}$), without varying the true delay, in order to investigate numerically the sensitivity of these control models to this parameter. Eye movements are of particular interest as the visual system is affected by relatively long delays and the eyeball has low inertia. Thus, closed-loop control of the oculomotor plant is challenging and likely more sensitive to model errors than other motor systems. The simulations shown below use $δt=100$ ms for the oculomotor system and $δt=50$ ms for the upper limb. For the visual system, this delay is based on the latency of reflexive saccades (Munoz & Everling, 2004). For the upper limb, the delay of 50 ms corresponds to long-latency feedback supported by a pathway through cortex (Scott, 2016). Details about the models and parameters can be found in section 5.

We first evaluate the performance of the algorithm derived above by comparing the results with those obtained in the extended LQG framework with system augmentation (Todorov, 2005) in order to quantify how it departs from the augmentation technique (see equation 2.3). When the prediction is performed with the high-dimensional quadrature rule, which corresponds to the dimension of an estimator obtained with state augmentation (see equation 2.16), the average behaviors of the two control algorithms are almost indistinguishable (see Figure 2a, black and blue traces). The difference can be observed in the peak positional variance, where an increase of about 10% can be noticed. The positional variance then recovered values similar to those of the extended LQG near the end of the simulation (see Figure 2b). We also measured the cost for each trajectory and found an average increase of less than 10% in comparison with the measured and predicted expected costs from the extended LQG. This result is expected because our controller uses slightly greater control signals, which in turn induces more variability across simulations due to signal-dependent noise.

Figure 2: (a) Simulated eye trajectories with the extended LQG based on system augmentation (black) or with the proposed adaptive filter with high- (blue) or low-dimensional (red) quadrature rules (see equations 2.16 and 2.17, respectively). The cost function consisted of two fixation intervals separated by a 50 ms window for movement without any penalty on the state (see section 5 for details about the simulations). (b) Positional variance as a function of time for the three control models with the same color code as in panel a. All traces were normalized to the peak positional variance of the blue trace. (c) Expected cost from the extended LQG control model, and distribution of costs calculated for 500 simulation runs.
Observe that the low-dimensional quadrature rule generates inaccurate movements with higher cost but lower positional variance.

Comparing the two techniques, the augmentation reduces the system to the nondelayed case, but in this context, the presence of signal-dependent noise requires approximate solutions. In contrast, the explicit delay compensation provides an exact filtering solution to a modified problem and for a given control law. In particular, the analyses in Figure 2 show that for the behaviors considered in this study, the simulations obtained for both controllers are very close, and thus the filter developed in equations 2.10 to 2.13 is valid and suitable for studying approximate delay compensation in biological control.

The impact of the low-dimensional quadrature rule is also shown in Figure 2. The main effect is a reduction in the control gain, followed by a larger target overshoot for the eye trajectory (see Figure 2a, red). The consequence is that the traces become less variable across trials due to reduced signal-dependent noise, whereas the average cost displays approximately a three-fold increase. We verified that the results were similar for the reaching task (simulations not shown). Note that our goal is not to establish a novel control model, since solutions to this problem are available (Bar-Shalom, Huimin, & Mahendra, 2004; Challa et al., 2003; Choi, Choi, & Chung, 2012; Todorov, 2005), but to investigate the impact of extrapolation errors, as well as of errors in the estimate of the delay. To this aim, below we use the low-dimensional quadrature rule (see equation 2.17) and modify the delay parameter used by the controller to compute the extrapolation.

Figure 3a illustrates the impact of an error in the internal estimate of the delay. The different traces were averaged across 50 simulation runs with an estimate of the delay ($\hat{δt}$) that was 30% below or 50% above the true delay, chosen for illustration (left and right, respectively). Simulations are shown with the closed-loop controller using sensory predictions only (gray, $\hat{x}_t=\hat{x}(t|y)$) and sensory predictions with filtering (black, $\hat{x}_t$ from equation 2.12) to highlight how filtering reduces the impact of prediction errors. It is noticeable that varying this single parameter generates different behaviors depending on whether the delay is over- or underestimated. In the case of underestimation (left), the true trajectory is ahead of the estimated trajectory, leading to overshoots and oscillations in the eye angle. In contrast, when the delay is overestimated (right), the controller generates trajectories that fall short of the target and gradually catch up over time.
Qualitatively, the simulation of a pursuit movement displays similar properties, although the impact of errors is not as important as during saccades because the pursuit task uses lower control gains than saccadic movements.

Figure 3: (a) Simulation of saccadic eye movements with delay under- (left) or overestimation (right). The curves represent the eye position, and the black/gray color refers to the estimation algorithm that was used. The estimation was either the predicted state ($\hat{x}(t|y)$, gray) or the filtered prediction (black). Observe that oscillation amplitude and frequency increase without filtering. (b) Quantification of oscillation amplitudes with or without filtering, based on the simulated eye velocity.

To quantify the oscillatory behavior in the simulations, we extracted the ratio between the first and second peak velocity, based on the fact that the eye plant was a second-order model (see Figure 3b). It can be observed that the filtering limits the amplitude of the oscillations, which reproduces the results previously established by Mondié and Michiels (2003) in the context of deterministic control. These authors showed that filtering mitigates the impact of prediction errors induced by a quadrature. Here we directly apply this result to stochastic processes and show that prediction errors induced by an approximated extrapolation can also be stabilized. In principle, it is clear that filtering can only improve the controller; thus, the important and novel observation is that closed-loop control can be stabilized for a relatively broad range of delay errors in human-inspired models of eye and arm movements. We also observed that the main frequency of the oscillations was lower when the filter was applied, and the difference was consistent across over- or underestimation of the delay (average reduction in oscillation frequency of 25%). It is also noticeable that it is possible to minimize the amplitude of the oscillations by using an approximation of the delay (see the black arrow, Figure 3b); thus, the delay estimate and the low-dimensional approximation can be tuned together to achieve efficient closed-loop control. Together, these results highlight that applying the filter on approximate predictions improves the stability margin of the closed-loop controller.

In theory, equation 3.2 captures analytically (in the simplest case) the interplay between the noise properties of the process and the delay for estimating its state. In the simulations presented, we concentrated on prediction errors induced by the delay while assuming that the noise parameters were fixed. Interestingly, the noise properties have different effects on the simulations. To observe this, we scaled the motor and sensory noise matrices by a factor of 0.5 and 2 for the saccade task, with a delay error of $\hat{δt}/δt=0.6$. In this range, changes in motor noise affected the variability of trajectories and estimates without affecting the closed-loop dynamics substantially.
In contrast, increasing the sensory noise reduced the gain of the prediction error in the Kalman filter, which tended to reduce the oscillations induced by the prediction error (simulations not shown). A theoretical account of the impact of errors in the noise covariance matrices on state estimation can be found elsewhere (Heffes, 1966; Nishimura, 1966).

Simulations of upper limb control are shown in Figure 4 and emphasize similar results. Because the delay is shorter and the limb has higher inertia, the impact of using a low-dimensional quadrature rule was hardly visible (see equation 2.16). Figure 4 illustrates the effect of filtering by comparing trajectories with or without filtering for 20-degree reaching movements with a perturbation pulse applied during movement ($-$1 Nm applied for 100 ms; the onset time is represented with the black arrow). The simulations were generated with varying levels of delay error. When the delay is overestimated ($\hat{δt}>δt$), the extrapolation is larger than the true state, and the overestimation in turn reduces the control gain. This effect results in shallower movements with slow corrections that do not oscillate until large errors are considered (simulations not shown). Thus, the analysis in Figure 4 concentrated on delay underestimation, which has more impact on control.

Figure 4: (a) Same as Figure 3 for simulated reaching control of the upper limb. A perturbation load (force pulse: $-$2 Nm applied for 100 ms) was applied to the limb at the moment indicated by the black arrow to highlight closed-loop control properties. (b) Quantification of the oscillation amplitude based on the joint torque. Delay overestimation generated slower movements with shallower velocity peaks, and oscillatory behaviors were observed again for large estimation errors ($\hat{δt}\sim 2.5\,δt$, simulations not shown).

In this situation, when the true delay is underestimated ($\hat{δt}<δt$), the system starts oscillating because the extrapolated state is behind the true state and the controller erroneously increases the control gain. An example is given in Figure 4a for a value of the delay estimate chosen such that the control based on the predicted state only is unstable, whereas the filtered prediction with the same delay error is still stable due to a larger stability margin. The amplitude of the oscillations with or without filtering for delay underestimation is shown in Figure 4b. To investigate the model properties in more detail, we sought to characterize the oscillations across a range of loading conditions. This simulation was motivated by the variety of tremor types across clinical conditions (e.g., physiologic, essential), and we were interested to see if the model could provide insight into the origin of any kind of tremor. We simulated a posture task in which the controller had to maintain the limb at the origin angle ($θ=0$) and applied a small pulse at the beginning of the simulation (see Figure 5a).
We focused again on delay underestimation as it evoked clearer oscillations and had more impact on control, but in biological control, it is clear that both over- and underestimation may occur. These simulations were computed with a relative delay error of 20%, in three distinct conditions of limb inertia ($I=0.1$ Kgm$^2$, $\pm$20%). For a limb of 2.5 Kg such as the wrist and hand, a change of 20% corresponds to a loading of 500 g, as used in previous reports on tremor (Elble, 1986). Interestingly, we found that the peak in the power spectral density was relatively invariant (less than 1 Hz difference) across the three loading conditions (see Figure 5b). The peaks occurred also at the same frequency when we varied the delay error (see Figure 5c). This is due to the fact that the system remains a delayed system with a term corresponding to the true delay for any (including arbitrarily small) delay error (Michiels & Niculescu, 2007). Qualitative similarities with essential tremor are discussed below.

Figure 5: (a) Simulations of movement oscillations in three distinct loading conditions following pulses of 0.5 Nm applied at the beginning of the simulated task. The cost function penalized deviation from the origin angle throughout the time horizon. The limb inertia was $I=0.08$ (black), $I=0.1$ (blue), and $I=0.12$ (red) [Kgm$^2$]. The simulations were generated with a delay error of 20%: $\hat{δt}=0.8\,δt$. (b) Power spectral density of the acceleration signals in each loading condition with a similar color code as in panel a. The traces peaked on average at 4.3 Hz, and the gray rectangle highlights that there is less than 1 Hz change across conditions for the peak frequency. The vertical arrow is aligned with the average main frequency. (c) Power spectral density of the acceleration traces for a fixed loading condition ($I=0.1$ [Kgm$^2$]) across distinct values of delay error: $\hat{δt}/δt$ took values 0.6 (solid black), 0.7 (solid gray), 0.8 (dashed black), and 0.9 (dashed gray).

Importantly, we found that for all simulated tasks, the compensation for sensorimotor delays was necessary. Indeed, using $\hat{δt}=0$ corresponds to a situation in which the sensory feedback (see Figure 1; $y(t)$) is directly compared to the one-step prediction of the next state ($\hat{x}_P(t+dt)$) without any delay compensation (see the red box in Figure 1). In this case, all simulations displayed unstable oscillations that increased in amplitude over time and eventually diverged toward infinity.
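For reference, the peak-frequency analysis behind Figures 5b and 5c can be sketched as follows; the use of Welch's method and its window length are my own choices, as the paper does not specify how the spectra were estimated.

```python
import numpy as np
from scipy.signal import welch

def peak_frequency(acceleration, dt):
    """Frequency (Hz) at which the power spectral density of an acceleration trace peaks."""
    fs = 1.0 / dt
    freqs, psd = welch(acceleration, fs=fs, nperseg=min(len(acceleration), 256))
    return freqs[np.argmax(psd)]
```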
These unstable oscillations were observed even for simulations of reaching movements, despite the fact that the upper limb is less prone to instability due to shorter delays and higher inertia. These results suggest that it is necessary to compensate for delays when combining priors and feedback. Alternative control models are thus constrained by the fact that combining priors and feedback without delay compensation may lead to unstable closed-loop control. Previous work shows that it is possible to use delayed state-feedback control to model human motor corrections (Crevecoeur & Scott, 2014). However, since there is evidence of continuous state feedback control and of the existence of internal priors supporting motor responses (Crevecoeur & Scott, 2013; Crevecoeur & Kurtzer, 2018), these control models are not suitable to describe the neural mechanisms underlying long-latency responses to mechanical perturbations.

Finally, we observed that the approach presented could provide a heuristic for combining asynchronous and delayed measurements of the process, such as when vision and proprioception must be combined (Crevecoeur et al., 2016). This problem can in principle be solved based on system augmentation (Challa et al., 2003), which allows updating the posterior distribution of the present and previous states when a novel but delayed measurement becomes available. Here we sought to test whether independent extrapolations of two measurements could be combined with the filter already defined. Such independent extrapolation is suboptimal in theory because it does not correct simultaneously the present and previous estimates. However, it is relevant in the context of sensorimotor control because it remains unclear when or how sensory information from distinct modalities is combined in the neural feedback controller (Oostwoud Wijdenes & Medendorp, 2017). Thus it is interesting to ask whether suboptimal delay compensation may approach the optimal solution. More precisely, we consider that the measurement vector is

$y(t)=\begin{pmatrix}x(t-δt_1)\\ x(t-δt_2)\end{pmatrix}+\begin{pmatrix}σ_1(t)\\ σ_2(t)\end{pmatrix}$, (3.3)

which, after extrapolation of each measurement and considering equations 2.7 and 2.8, gives the following measurement signal ($I_n$ is the identity matrix of size $n$):

$\hat{x}(t|y_1,y_2)=\begin{pmatrix}I_n\\ I_n\end{pmatrix}x(t)+\begin{pmatrix}e_1(t)\\ e_2(t)\end{pmatrix}$, (3.4)

with $e_1$ and $e_2$ corresponding to each measurement, and the filtering can be performed following standard procedures. It should be emphasized that the noise terms are not independent because, assuming that $δt_2>δt_1$, the two extrapolations are performed over a common interval from $t-δt_1$ to $t$, during which the same process noises affect the dynamics. Using equation 2.8, we have

$E\left[\begin{pmatrix}e_1\\ e_2\end{pmatrix}\begin{pmatrix}e_1\\ e_2\end{pmatrix}^T\right]=\begin{pmatrix}M_1Σ_1M_1^T+I(t-δt_1,t) & I(t-δt_1,t)\\ I(t-δt_1,t) & M_2Σ_2M_2^T+I(t-δt_2,t)\end{pmatrix}$, (3.5)

where $M_i:=M(δt_i)$ and

$I(u,v):=\int_u^v M(t-s)\,α(s)α(s)^T\,M(t-s)^T\,ds$. (3.6)

Using the measurement signal from equation 3.4 captures the fact that the latency and gain of feedback responses to perturbations are similar with or without the most delayed measurement, whereas the positional variance across simulations is reduced in the case of combined measurements. Such behavior was observed when participants were instructed to track visually the motion of their hand while their arm was mechanically perturbed (Crevecoeur et al., 2016). This latter study reported similar response latency but reduced estimation variance, which cannot be captured if the different sensor signals are linearly combined without taking their respective delays into account.
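To clarify how the two extrapolated measurements enter the filter, here is a sketch of the stacked observation model of equations 3.4 and 3.5; the function name and the way the cross-covariance term is passed in are mine.

```python
import numpy as np

def stack_extrapolated_measurements(xhat1, xhat2, V1, V2, C12):
    """Stacked observation model for two extrapolated, delayed measurements.

    xhat1, xhat2 : extrapolations of the two measurements to time t
    V1, V2       : covariances of the extrapolation errors e1 and e2 (diagonal blocks of eq. 3.5)
    C12          : their cross-covariance over the common interval (off-diagonal block of eq. 3.5)
    Returns the stacked observation, observation matrix, and noise covariance,
    which can then be fed to the update of equations 2.10 to 2.13.
    """
    n = len(xhat1)
    y_stacked = np.concatenate([xhat1, xhat2])
    H_stacked = np.vstack([np.eye(n), np.eye(n)])
    R_stacked = np.block([[V1, C12], [C12.T, V2]])
    return y_stacked, H_stacked, R_stacked
```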
However, we observed that this independent extrapolation process was numerically fragile and, with a low-dimensional quadrature rule and the parameters considered, occasionally yielded unstable trajectories. The conditions under which approximate predictions in the form of equation 2.17 can stabilize control with multiple measurement delays should provide informative constraints on the accuracy of neural representations associated with eye-hand coordination.

To summarize, we have shown that combining prediction with optimal filtering can capture closed-loop control of human-inspired models of eye and upper limb movements. In this context, we have highlighted that sensory prediction errors arising when the process integral is approximated over the delay interval, or when the delay is not known exactly, generate movement over- or undershoots that can be partially mitigated by the application of a filter that considers the noise properties of the extrapolation and the process dynamics. Thus, combining priors with predictions about movement outcome improves the stability margin of the closed-loop controller.

4  Discussion

We have studied the interplay between sensory prediction and state estimation in human-inspired models of eye and upper limb movements. The presence of sensorimotor delays motivated the hypothesis that the nervous system uses predictions (Miall et al., 1993), often expressed as feedforward control (Kawato, 1999). However, even with predicted feedback in the case of the Smith predictor, or with a predicted state in the case of FSA, the system remains a closed-loop system, and delays remain problematic when the prediction is not perfect. The delay compensation involves two filtering operations. The first operation is the weighting of one-step predictions with the extrapolated state measurement, which takes the form of a standard Kalman filter (see equation 2.12). The second filtering operation is the computation of the extrapolated state measurement $\hat{x}(t|y)$, which in theory requires integrating the system dynamics over the delay interval (see equation 2.5), but can be approximated with a moving average of the control function as in equation 2.17, or with other classes of filters. We have illustrated based on simulations that an approximate extrapolation can still generate movements compatible with human movements, and we have highlighted how the Kalman filter reduces the impact of prediction errors by improving the stability margin. Thus, in addition to reducing uncertainty for perception, optimal estimation also critically improves the stability margin of predictive neural control.

Our approach was mainly motivated by evidence that the nervous system extrapolates sensory signals to adjust motor commands to the expected instantaneous state of the body or environment. Evidence for internal predictions during visual tracking is widespread. Indeed, previous studies showed that both sensory and extraretinal information about the eye and target trajectories are used during online tracking (Bennett, de Xivry, Barnes, & Lefevre, 2007; Blohm, Missal, & Lefevre, 2005; de Brouwer, Yuksel, Blohm, Missal, & Lefevre, 2002), or when anticipating future events such as trajectories following collisions (Badler, Lefevre, & Missal, 2010). This hypothesis also accounts for the perceptual loss that occurs around the time of saccades, suggesting that sensory extrapolation is an important component of visual processing (Crevecoeur & Kording, 2017).
Regarding upper limb motor control, the same hypothesis accounted for the use of priors in motor responses to perturbations with distinct profiles (Crevecoeur & Scott, 2013). In that study, long-latency responses (approximately 50 ms in the upper limb) depended on the expected perturbation profile, which is compatible with an extrapolation of the sensed perturbation-related motion. Thus, we included the extrapolation of sensory information explicitly in the model to account for this function of the nervous system (see equation 2.5).

A clear limitation of our developments is that we focused on linear systems. Of course, the neuromusculoskeletal system is nonlinear. Mathematical challenges aside, the algorithm proposed above is in principle also applicable to nonlinear models of the sensorimotor system. An interesting question for future studies is whether, or how much, filtering tolerates prediction errors that arise when one ignores the nonlinearities of the biomechanical plant.

In spite of its simplicity, a compelling aspect of the model is that it captured a broad spectrum of motor disorders by varying a single parameter. Indeed, assuming that the delay compensation mechanisms are similar for eye and limb movements, then movement dysmetria, including hyper- and hypometria, nystagmus, oscillations, and limb tremor, can be explained by assuming that the estimated delay is not equal to the true delay. In practice, such an error simply comes down to an inaccurate calibration of the averaging or filtering operation that extrapolates the sensory feedback (see Figure 1, red box). Regarding the neural basis of the delay compensation, the cerebellum has often been associated with motor predictions and state estimation (Bastian, 2011; Kawato, 1999; Miall, Christensen, Cain, & Stanley, 2007; Miall et al., 1993; Wolpert, Miall, & Kawato, 1998). A further link with the cerebellum in the context of this letter is that, qualitatively, the dysmetria obtained by varying the internal estimate of the delay resembles the deficits associated with cerebellar ataxia in clinical populations or following inactivation through cooling experiments performed with nonhuman primates (NHPs) (Bodranghien et al., 2016; Diedrichsen & Bastian, 2014; Mackay & Murphy, 1979). Regarding the control of eye movements, saccadic dysmetria as well as nystagmus were reported in cerebellar ataxia (Gomez et al., 1997; Leech, Gresty, Hess, & Rudge, 1977; Zee, Yee, Cogan, Robinson, & Engel, 1976). This is consistent with the idea that a wrong estimate of the delay in the cerebellum alters saccade amplitudes through its involvement in the online control of saccades (Kheradmand & Zee, 2011), which in turn generates movement errors and oscillations. Turning to the control of upper limb movements, oscillations were observed in cerebellar patients following reaching movements (Diener, Hore, Ivry, & Dichgans, 1993), similar to those reproduced in Figure 4. Qualitatively similar oscillations were documented in NHPs following inactivation of the cerebellum (Flament, Vilis, & Hore, 1984; Hore & Flament, 1988; Hore & Vilis, 1984). As well, a range of deficits including hyper- and hypometria characterized goal-directed reaching in cerebellar patients (Bhanpuri, Okamura, & Bastian, 2014), a deficit that could be reproduced here by varying the estimated delay.
Of course, other factors, such as errors in the parameters of the internal models (e.g., eye or limb inertia, viscous forces), may also generate undershoots or oscillations in movements, and it is difficult to derive predictions that are specific to the compensation of sensorimotor delays rather than to other sources of errors. We focused here on approximate delay compensation and thus highlighted that errors in this parameter can also potentially generate movement disorders. To investigate further the nature of oscillations induced in this model, we simulated a posture task and extracted the main frequency following small perturbation pulses (see Figure 5). Interestingly, we found a main frequency of approximately 4 Hz, which was invariant across loading conditions and errors in the delay. This result bears a striking similarity to previously documented cases of essential tremor, associated with main frequencies in the range 4-8 Hz and independent of loading (Elble, 2003, 2013). These observations pointed to a neurogenic source of essential tremor that involves a cortico-cerebellar loop (Muthuraman et al., 2018). Thus, poor delay compensation captures oscillations of the closed-loop system at a frequency determined by the fixed and true delay and relatively insensitive to loading or internal errors. In this view, it is not clear that any brain region could be unambiguously identified as the source of oscillations in essential tremor, because the error can be small, but the details of processing can make the closed-loop system oscillate. This framework is also consistent with the role of the cerebellum in coordinating multiple effectors (Bastian, Martin, Keating, & Thach, 1996; Bo, Block, Clark, & Bastian, 2008; Diedrichsen, Criscimagna-Hemminger, & Shadmehr, 2007), as uncoordinated movements can also arise from errors in the estimated delay. Assuming that the cerebellum hosts a pathway similar to that of Figure 1, then all of these motor dysfunctions can be explained by inaccurate tuning of the sensory extrapolation, corresponding in principle to an error in the estimate of the delay or, equivalently, an estimate that is not adjusted to the true difference between the sensory feedback and the current state of the body. Finally, a strength of the model is that the low-pass or averaging operation that is required to compensate for the delay may be directly linked to the adaptive filtering performed in cerebellar microcircuits (Dean, Porrill, Ekerot, & Jorntell, 2010). Thus adaptive filtering of sensory extrapolations may achieve delay compensation and support predictive motor control. The model also suggests that the time constants of the adaptive filters in the cerebellum should be related to the temporal delay and to the mechanical properties of the associated motor systems (see equation 2.17). To conclude, we suggest that errors in the estimated delays, or a poor tuning of the filtering operation required in the extrapolation of sensory feedback, may constitute the underlying cause of the wide variety of movement disorders generally associated with cerebellar dysfunctions.

5  Methods

5.1  Continuous-Time Models

The continuous-time state-space representation of the eye dynamics was a second-order model defined as follows:

$\begin{pmatrix}\dot{x}_1\\ \dot{x}_2\end{pmatrix}=\begin{pmatrix}0 & 1\\ -1/(τ_1τ_2) & -(τ_1+τ_2)/(τ_1τ_2)\end{pmatrix}\begin{pmatrix}x_1\\ x_2\end{pmatrix}+\begin{pmatrix}0\\ 1/(τ_1τ_2)\end{pmatrix}u$. (5.1)

The state variables $x_1$ and $x_2$ represent the eye angle and velocity, respectively; $u$ is the motor command; and the dot operator represents the time derivative.
The time constants used in the model were based on previous modeling work (Robinson, Gordon, & Gordon, 1986): we used $τ_1=224$ ms and $τ_2=13$ ms. The explicit dependency of $x$ and $u$ on time was omitted for clarity. For the upper limb, the continuous-time differential equation was

$\begin{pmatrix}\dot{θ}\\ \ddot{θ}\\ \dot{T}\end{pmatrix}=\begin{pmatrix}0 & 1 & 0\\ 0 & -G/I & 1/I\\ 0 & 0 & -1/τ\end{pmatrix}\begin{pmatrix}θ\\ \dot{θ}\\ T\end{pmatrix}+\begin{pmatrix}0\\ 0\\ 1/τ\end{pmatrix}u$, (5.2)

with $θ$ being the joint angle and $T$ the muscle force. The parameters were taken from previous experimental testing (Brown, Cheng, & Loeb, 1999; Crevecoeur & Scott, 2014): $G=0.14$ Nms (velocity-dependent term capturing viscous forces in muscles), $I=0.1$ Kgm$^2$ (limb inertia), and $τ=60$ ms (muscle time constant). The two models were augmented with the target location. The state vector for the upper limb was further augmented with an external torque ($T_E$) representing the perturbation applied to the limb. This external torque is assumed to follow step functions, that is, $\dot{T}_E=0$. This assumption is necessary to reproduce responses to perturbations following a step function, as it must be estimated to generate active compensation for the sustained load applied to the limb. It is also compatible with the behavioral observations that internal priors about the perturbation profile influence long-latency responses (Crevecoeur & Scott, 2013).

5.2  Delays and Noise Parameters

Each state-space representation was transformed into a discrete-time control system following equations 2.1, 2.2, and 2.9, with a discretization step of $dt=5$ ms. Thus, for the model of the eye plant, the delay of 100 ms corresponded to $h=20$ time steps, whereas the delay of 50 ms for the upper limb corresponded to $h=10$ time steps. We then considered the presence of signal-dependent noise as observed in the sensorimotor system (Harris & Wolpert, 1998), such that the function $α(t)$ was proportional to the control input, $u_t$. We used the following noise expression,

$ξ_t=α_1 B_d u(t)\,ε_{1,t}+α_2 B_d\,ε_{2,t}$, (5.3)

with $α_1=0.08$ or 0.006 for the oculomotor or upper limb system, respectively; $α_2=0.03$ or 0.003; and $ε_{i,t}$ independent gaussian random variables with zero mean and unit variance. For the sensory noise, we used $σ(t)\sim N(0,Σ)$ at each time step with $Σ=0.003\times I_{n_1}$ or $0.001\times I_{n_2}$ for the oculomotor or upper limb system, respectively ($I_{n_i}$ represents the identity matrix of appropriate dimensions). For the upper limb control system, we added a prediction noise that affects the one-step prediction $\hat{x}_P(t+dt)$ without affecting the process dynamics. This prediction noise is required by the presence of an external torque ($T_E$) that is not controllable through the change in control $u_t$. Since there is no process noise affecting this variable and because it is not controllable, it is necessary to consider that there is uncertainty about this variable through the internal estimate, to allow the estimator to track possible changes in this variable due to external disturbances. This prediction noise was assumed to follow a gaussian distribution with zero mean and covariance matrix set to 0.001 times the identity matrix of the same dimension as the state vector. These parameters were adjusted manually so that the simulations were numerically well conditioned. Indeed, numerical errors can propagate through the recursions (see equations 2.10 to 2.13), leading to bad conditioning of the matrix being inverted even if it is mathematically well defined. These noise parameters have an impact on the variability of the trajectories.
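Section 5.2 turns the continuous models into discrete-time systems and then adds the noise of equation 5.3. A minimal sketch of both steps is given below; the zero-order-hold discretization is an assumption on my part (the paper only states the 5 ms step, and a simple Euler scheme would be an equally valid reading), and the command is treated as scalar.

```python
import numpy as np
from scipy.linalg import expm

def discretize(A, B, dt):
    """Discretize dx = (A x + B u) dt with step dt, holding u constant over the step."""
    n, m = A.shape[0], B.shape[1]
    blk = np.zeros((n + m, n + m))
    blk[:n, :n], blk[:n, n:] = A, B
    blk_d = expm(blk * dt)
    return blk_d[:n, :n], blk_d[:n, n:]          # Ad, Bd

def motor_noise(Bd, u, a1, a2, rng):
    """Signal-dependent plus additive motor noise, as in equation 5.3 (scalar command u)."""
    e1, e2 = rng.standard_normal(2)
    return a1 * (Bd * u).ravel() * e1 + a2 * Bd.ravel() * e2
```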
In principle, the stability margin may depend on the noise parameters, but the improvement obtained with filtering should not depend on these values. A thorough characterization of the relationship between the noise parameters and the stability radius relative to the estimated delay is an interesting question for prospective work.

5.3  Cost Functions

The cost function for all simulations corresponded to equation 2.14. We used $R=10^{-2}$ and $R=10^{-4}$ for the oculomotor and upper limb systems, respectively. For the simulations of the oculomotor system, the cost function always consisted in penalizing deviation from the target,

$x(t)^T Q x(t)=c\,(x_1(t)-x_1^*)^2$, (5.4)

where $x_1^*$ is the target angle and $c$ is a parameter equal to 100 during fixation and 0 otherwise. Saccades were simulated by considering two fixation periods separated by an interval of 50 ms without any penalty on the state ($c=0$). The pursuit task was simulated as a fixation task with a sudden jump in the target position and velocity occurring during the simulation run. For the reaching task (see Figure 4), we considered a cost function similar to equation 5.4 with $c=1$ for 300 ms following the movement time, during which $c$ was set to 0. The movement time for the reaching movements represented in Figure 4 was set to 500 ms. The posture task (see Figure 5) was simulated with a target angle of zero for 800 ms and no movement time. The external perturbation was generated during each simulation run as a force pulse. The pulse amplitude was $-$2 Nm for the reaching task and $-$0.5 Nm for the posture task. In each case, the force pulse was applied for 100 ms.

Acknowledgments

We thank Andrew Pruszynski for comments on an earlier version of the manuscript. F.C. is supported by the F.R.S.-FNRS (Belgium).

References

Anderson, B. D. O., & Moore, J. D. (1979). Optimal filtering. Englewood Cliffs, NJ: Prentice Hall.
Angelaki, D. E., Gu, Y., & DeAngelis, G. C. (2009). Multisensory integration: Psychophysics, neurophysiology, and computation. Current Opinion in Neurobiology, 19(4), 252-258. doi:10.1016/j.conb.2009.06.008
Astrom, K. J. (1970). Introduction to stochastic control theory. New York: Academic Press.
Badler, J., Lefevre, P., & Missal, M. (2010). J. Neurosci., 30(31), 10517-10525. doi:10.1523/JNEUROSCI.1733-10.2010
Bar-Shalom, Y., Huimin, C., & Mahendra, M. (2004). One-step solution for the multistep out-of-sequence measurement problem in tracking. IEEE Transactions on Aerospace and Electronic Systems, 40(1), 27-37.
Bastian, A. J. (2011). Moving, sensing and learning with cerebellar damage. Curr. Opin. Neurobiol., 21(4), 596-601. doi:10.1016/j.conb.2011.06.007
Bastian, A. J., Martin, T. A., Keating, J. G., & Thach, W. T. (1996). Cerebellar ataxia: Abnormal control of interaction torques across multiple joints. J. Neurophysiol., 76(1), 492-509.
Bennett, S. J., de Xivry, J.-J. O., Barnes, G. R., & Lefevre, P. (2007). Target acceleration can be extracted and represented within the predictive drive to ocular pursuit. Journal of Neurophysiology, 98(3), 1405-1414. doi:10.1152/jn.00132.2007
Bhanpuri, N. H., Okamura, A. M., & Bastian, A. J. (2014). Predicting and correcting ataxia using a model of cerebellar function. Brain, 137(Pt. 7), 1931-1944. doi:10.1093/brain/awu115
Blohm, G., Missal, M., & Lefevre, P. (2005). Processing of retinal and extraretinal signals for memory-guided saccades during smooth pursuit. J. Neurophysiol., 93(3), 1510-1522. doi:10.1152/jn.00543.2004
Bo, J., Block, H. J., Clark, J. E., & Bastian, A. J. (2008). A cerebellar deficit in sensorimotor prediction explains movement timing variability. J. Neurophysiol., 100(5), 2825-2832. doi:10.1152/jn.90221.2008
Bodranghien, F., Bastian, A., Casali, C., Hallett, M., Louis, E. D., Manto, M., … van Dun, K. (2016). Consensus paper: Revisiting the symptoms and signs of cerebellar syndrome. Cerebellum, 15(3), 369-391. doi:10.1007/s12311-015-0687-3
Brown, I. E., Cheng, E. J., & Loeb, G. E. (1999). Measured and modeled properties of mammalian skeletal muscle. II. The effects of stimulus frequency on force-length and force-velocity relationships. Journal of Muscle Research and Cell Motility, 20(7), 627-643. doi:10.1023/a:1005585030764
Challa, S., Evans, R. J., & Wang, X. (2003). A Bayesian solution and its approximation to out-of-sequence measurement problems. Information Fusion, 4, 185-189.
Choi, M., Choi, J., & Chung, W. K. (2012). State estimation with delayed measurements incorporating time-delay uncertainty. IET Control Theory and Applications, 6(15), 2351-2361.
Crevecoeur, F., & Kording, K. P. (2017). Saccadic suppression as a perceptual consequence of efficient sensorimotor estimation. eLife, 6. doi:10.7554/eLife.25073
Crevecoeur, F., & Kurtzer, I. L. (2018). Long-latency reflexes for inter-effector coordination reflect a continuous state-feedback controller. Journal of Neurophysiology, 120, 2466-2483. https://www.physiology.org/doi/full/10.1152/jn.00205.2018
Crevecoeur, F., Munoz, D. P., & Scott, S. H. (2016). Dynamic multisensory integration: Somatosensory speed trumps visual accuracy during feedback control. J. Neurosci., 36(33), 8598-8611. doi:10.1523/JNEUROSCI.0184-16.2016
Crevecoeur, F., & Scott, S. H. (2013). Priors engaged in long-latency responses to mechanical perturbations suggest a rapid update in state estimation. PLoS Computational Biology, 9(8). doi:10.1371/journal.pcbi.1003177
Crevecoeur, F., & Scott, S. H. (2014). Beyond muscles stiffness: Importance of state estimation to account for very fast motor corrections. PLoS Computational Biology, 10(10), e1003869.
Crevecoeur, F., Sepulchre, R. J., Thonnard, J. L., & Lefevre, P. (2011). Improving the state estimation for optimal control of stochastic processes subject to multiplicative noise. Automatica, 47(3), 595-596. doi:10.1016/j.automatica.2011.01.026
de Brouwer, S., Yuksel, D., Blohm, G., Missal, M., & Lefevre, P. (2002). What triggers catch-up saccades during visual tracking? J. Neurophysiol., 87(3), 1646-1650.
Dean, P., Porrill, J., Ekerot, C. F., & Jorntell, H. (2010). The cerebellar microcircuit as an adaptive filter: Experimental and computational evidence. Nat. Rev. Neurosci., 11(1), 30-43. doi:10.1038/nrn2756
Diedrichsen, J., & Bastian, A. J. (2014). Cerebellar function. In M. S. Gazzaniga & G. R. Mangun (Eds.), The cognitive neurosciences (pp. 451-460). Cambridge, MA: MIT Press.
Diedrichsen, J., Criscimagna-Hemminger, S. E., & Shadmehr, R. (2007). Dissociating timing and coordination as functions of the cerebellum. Journal of Neuroscience, 27(23), 6291-6301. doi:10.1523/jneurosci.0061-07.2007
Diener, H. C., Hore, J., Ivry, R., & Dichgans, J. (1993). Cerebellar dysfunction of movement and perception. Can. J. Neurol. Sci., 20(Suppl. 3), S62-S69.
Elble, R. J. (1986). Physiologic and essential tremor. Neurology, 36(2), 225-231.
Elble, R. J. (2003). Characteristics of physiologic tremor in young and elderly adults. Clin. Neurophysiol., 114(4), 624-635.
Elble, R. J. (2013). What is essential tremor? Curr. Neurol. Neurosci. Rep., 13(6), 353. doi:10.1007/s11910-013-0353-4
Flament, D., Vilis, T., & Hore, J. (1984). Dependence of cerebellar tremor on proprioceptive but not visual feedback. Experimental Neurology, 84(2), 314-325. doi:10.1016/0014-4886(84)90228-0
Fridman, E. (2014). Introduction to time-delay systems: Analysis and control. New York: Springer.
Gomez, C. M., Thompson, R. M., Gammack, J. T., Perlman, S. L., Dobyns, W. B., Truwit, C. L., … Anderson, J. H. (1997). Spinocerebellar ataxia type 6: Gaze-evoked and vertical nystagmus, Purkinje cell degeneration, and variable age of onset. Ann. Neurol., 42(6), 933-950. doi:10.1002/ana.410420616
Harris, C. M., & Wolpert, D. M. (1998). Signal-dependent noise determines motor planning. Nature, 394(6695), 780-784.
Heffes, H. (1966). The effect of erroneous models on the Kalman filter response. IEEE Transactions on Automatic Control, 11(3), 541-543.
Hore, J., & Flament, D. (1988). Changes in motor cortex neural discharge associated with the development of cerebellar limb ataxia. J. Neurophysiol., 60(4), 1285-1302.
Hore, J., & Vilis, T. (1984). Loss of set in muscle responses to limb perturbations during cerebellar dysfunction. Journal of Neurophysiology, 51(6), 1137-1148.
Izawa, J., & Shadmehr, R. (2008). On-line processing of uncertain information in visuomotor control. Journal of Neuroscience, 28(44), 11360-11368. doi:10.1523/jneurosci.3063-08.2008
Jones, K. E., Hamilton, A. F. D., & Wolpert, D. M. (2002). Sources of signal-dependent noise during isometric force production. Journal of Neurophysiology, 88(3), 1533-1544. doi:10.1152/jn.00985.2001
Kawato, M. (1999). Internal models for motor control and trajectory planning. Current Opinion in Neurobiology, 9(6), 718-727.
Kheradmand, A., & Zee, D. S. (2011). Cerebellum and ocular motor control. Front. Neurol., 2, 53. doi:10.3389/fneur.2011.00053
Koerding, K. (2007). Decision theory: What "should" the nervous system do? Science, 318(5850), 606-610. doi:10.1126/science.1142998
Kording, K. P., & Wolpert, D. M. (2004). Bayesian integration in sensorimotor learning. Nature, 427(6971), 244-247. doi:10.1038/nature02169
Leech, J., Gresty, M., Hess, K., & Rudge, P. (1977). Gaze failure, drifting eye movements, and centripetal nystagmus in cerebellar disease. Br. J. Ophthalmol., 61(12), 774-781.
Mackay, W. A., & Murphy, J. T. (1979). Cerebellar modulation of reflex gain. Progress in Neurobiology, 13(4), 6291-6301. doi:10.1016/0301-0082(79)90004-2
Miall, R. C., Christensen, L. O. D., Cain, O., & Stanley, J. (2007). Disruption of state estimation in the human lateral cerebellum. PLoS Biology, 5, 2733-2744. doi:10.1371/journal.pbio.0050316
Miall, R. C., Weir, D. J., Wolpert, D. M., & Stein, J. F. (1993). Is the cerebellum a Smith predictor? Journal of Motor Behavior, 25(3), 203-216.
Miall, R. C., & Wolpert, D. M. (1996). Forward models for physiological motor control. Neural Networks, 9(8), 1265-1279.
Michiels, W., & Niculescu, S.-L. (2007). Stability and stabilization of time-delay systems. Society for Industrial and Applied Mathematics.
Mondié, S., & Michiels, W. (2003). Finite spectrum assignment of unstable time-delay systems with a safe implementation. IEEE Transactions on Automatic Control, 48(12), 2207-2212.
Munoz, D. P., & Everling, S. (2004). Look away: The anti-saccade task and the voluntary control of eye movement. Nature Reviews Neuroscience, 5(3), 218-228. doi:10.1038/nrn1345
Muthuraman, M., Raethjen, J., Koirala, N., Anwar, A. R., Mideksa, K. G., Elble, R., … Deuschl, G. (2018). Cerebello-cortical network fingerprints differ between essential, Parkinson's and mimicked tremors. Brain, 141(6), 1770-1781. doi:10.1093/brain/awy098
Niculescu, S.-I., & Gu, K. (2004). Berlin: Springer-Verlag.
Nishimura, T. (1966). On the a priori information in sequential estimation problems. IEEE Transactions on Automatic Control, 11(2), 197-204.
Oostwoud Wijdenes, L., & Medendorp, W. P. (2017). State estimation for early feedback responses in reaching: Intramodal or multimodal? Front. Integr. Neurosci., 11, 38. doi:10.3389/fnint.2017.00038
Phillis, Y. A. (1985). Controller design of systems with multiplicative noise. IEEE Transactions on Automatic Control, AC-30(10), 1017-1019.
Robinson, D. A., Gordon, J. L., & Gordon, S. E. (1986). A model of the smooth pursuit eye movement system. Biol. Cybern., 55(1), 43-57.
Scott, S. H. (2016). A functional taxonomy of bottom-up sensory feedback processing for motor actions. Trends Neurosci., 39(8), 512-526. doi:10.1016/j.tins.2016.06.001
Shadmehr, R., Smith, M. A., & Krakauer, J. W. (2010). Error correction, sensory prediction, and adaptation in motor control. Annual Review of Neuroscience, 33, 89-108. doi:10.1146/annurev-neuro-060909-153135
Todorov, E. (2005). Stochastic optimal control and estimation methods adapted to the noise characteristics of the sensorimotor system. Neural Computation, 17(5), 1084-1108.
Todorov, E., & Jordan, M. I. (2002). Optimal feedback control as a theory of motor coordination. Nat. Neurosci., 5(11), 1226-1235. doi:10.1038/nn963
Wolpert, D. M., Diedrichsen, J., & Flanagan, J. R. (2011). Principles of sensorimotor learning. Nature Reviews Neuroscience, 12(12), 739-751. doi:10.1038/nrn3112
Wolpert, D. M., & Ghahramani, Z. (2000). Computational principles of movement neuroscience. Nature Neuroscience, 3, 1212-1217.
Wolpert, D. M., Ghahramani, Z., & Jordan, M. I. (1995). An internal model for sensorimotor integration. Science, 269(5232), 1880-1882.
Wolpert, D. M., & Landy, M. S. (2012). Motor control is decision-making. Current Opinion in Neurobiology, 22(6), 996-1003. doi:10.1016/j.conb.2012.05.003
Wolpert, D. M., Miall, R. C., & Kawato, M. (1998). Internal models in the cerebellum. Trends in Cognitive Sciences, 2(9), 338-347.
Wurtz, R. H. (2008). Neuronal mechanisms of visual stability. Vision Res., 48(20), 2070-2089. doi:10.1016/j.visres.2008.03.021
Zee, D. S., Yee, R. D., Cogan, D. G., Robinson, D. A., & Engel, W. K. (1976). Ocular motor abnormalities in hereditary cerebellar ataxia. Brain, 99(2), 207-234.
Zhong, Q. C. (2010). Robust control of time-delay systems. London: Springer-Verlag.
https://mathoverflow.net/questions/308558/immersions-of-the-hyperbolic-plane
# Immersions of the hyperbolic plane Is it possible to isometrically immerse the hyperbolic plane into a compact Riemannian manifold as a totally geodesic submanifold? Any nice examples? Edit: Although I did not originally say so, I was looking for injective immersions or at least for immersions that do not factor through a covering onto a compact surface. Thank you for your answers and comments, they've been very helpful. • I like Ian's answer, and I see that you have accepted it, but I wonder whether you wanted to specify any further properties of your immersion. For example, there is the trivial example of an immersion into a compact manifold as a totally geodesic submanifold by simply regarding the hyperbolic plane as the simply-connected cover of a compact Riemann surface $S$ of genus $g\ge 2$. Isn't that the simplest example satisfying your stated criteria? (If you want the image to be a proper submanifold, simply take the cross product of this example with any compact Riemannian manifold.) – Robert Bryant Aug 18 '18 at 11:12 • Ian guessed I was looking for something more like an injective immersion rather than something you would get from a covering map. I should have specified that in the OP. – alvarezpaiva Aug 18 '18 at 11:28 • To add to Bryant's comment: generically the image of a plane is dense (Ratner, Shah) for compact hyperbolic 3-manifolds and McMullen Mohammedi Oh for non compact. – Greg Mcshane Aug 20 '18 at 23:18 Yes, it immerses isometrically into certain solvmanifolds. Take an Anosov map of $T^2$, such as $\left[\begin{array}{cc}2 & 1 \\1 & 1\end{array}\right]$. The mapping torus admits a locally homogeneous metric modeled on the 3-dimensional unimodular solvable Lie group. The matrix has two eigenspaces with eigenvalues $\frac{3\pm\sqrt{5}}{2}$, and the suspensions of lines on the torus parallel to these eigenspaces give immersed manifolds modeled on $\mathbb{H}^2$. If the eigenspace line contains a periodic point of the Anosov map, the mapping torus of it will be an annulus. But otherwise it will be an immersed injective totally geodesic hyperbolic plane, which I think is what you're asking for. • Sorry how could it become an annulus? What will correspond to its boundary? – მამუკა ჯიბლაძე Aug 18 '18 at 7:14 • @მამუკაჯიბლაძე : I mean an open annulus, or infinite cylinder. Geometrically it will be a quotient of the hyperbolic plane by a translation. – Ian Agol Aug 18 '18 at 13:53 Here is a general construction. Take a non-trivial representation of $H=\text{SL}_2(\mathbb{R})$ into a semisimple Lie group $G$, take $K<G$ a maximal compact subgroup and take $\Gamma<G$ an irreducible cocompact lattice. Endow $X=G/K$ with the standard symmetric space structure and consider the image of $H$ in $X$ which is a totally geodesic hyperbolic space. Its image in $\Gamma\backslash X$ will be a totally geodesic immersion of a hyperbolic plane into a compact Riemannian manifold. Further, if $H$ is not a factor of $G$, up to Baire generically conjugating $\Gamma$ in $G$, we can get that the image of $H$ will be non-compact and if $X$ is of dimension $\geq 5$ (e.g $G=\text{SO}(5,1)$ or $G=\text{SL}_3(\mathbb{R})$) we can get that the immersion is injective (thanks to Ian Agol for correcting an inaccuracy here in a previous version of my answer). • I suppose X should have dimension at least 5. – Ian Agol Aug 19 '18 at 2:52 • @Ian, no, $X$ could be of any dimension $n>2$ by taking $G=\text{SO}(n,1)$. 
In fact, the case $n=3$ could be achieved in a way very close to your example, replacing $T^2$ with a higher-genus surface and the Anosov map with a pseudo-Anosov one. – Uri Bader Aug 19 '18 at 6:06 • The question was asking for an embedding, not immersion (I interpreted this as injective immersion). The pseudo-Anosov case will give QI embedded planes in the hyperbolic metric, or isometrically embedded ones in the singular solve metric. I don't think that the singular metric can be perturbed to give isometrically embedded hyperbolic planes. – Ian Agol Aug 19 '18 at 14:02 • In fact, I think one can show that the hyperbolic plane cannot isometrically and injectively immerse in a closed hyperbolic 3-manifold. – Ian Agol Aug 20 '18 at 5:24 • if you don't mod out by K on the right, then you can get hyperbolic planes immersed. For example, if $G=SL_2R$, then we can identify $\Gamma\backslash G$ with the unit tangent bundle of a hyperbolic surface, and hyperbolic planes to stable or unstable leaves of the Anosov flow, which is analogous to my example of an Anosov flow. But we are taking the orbit by an upper triangular group (which is the hyperbolic plane) instead of $SL_2R$. – Ian Agol Aug 20 '18 at 14:58 This is an attempt to visualize the answer by Ian Agol. I am not sure it is correct. If it is, it must be a fundamental domain for the group action from that answer, and the surfaces - totally geodesic images of various projective plane embeddings. Definitely the simplest examples are the covering maps from the upper half-plane to a compact hyperbolic surface that Bryant mentioned, although these are not injective. A slight variation is to consider the main diagonal embedding $\imath: \mathbb{H}^2 \to \mathbb{H}^2 \times \mathbb{H}^2$ and then use different projections on each factor. For example, the first can be the orbit projection $\pi : \mathbb{H} \to \mathbb{H}^2/G$ where $G \subset PSL(2, \mathbb{R})$ contains no translations in the real axis and the second could be $\pi \circ T$ where $T$ is any such translation. Then $(\pi, \pi \circ T) \circ \imath$ is one to one. On the other hand, every compact hyperbolic 3-manifold has plenty of totally geodesic immersed copies of $\mathbb{H}^2$, obtained by just projecting totally geodesic $\mathbb{H}^2 \subset \mathbb{H}^3$; as Bryant pointed out, it is not clear whether the projection could be made injective. This should be a comment but I am not able to comment yet :) • I don't think the 'generic' such hyperbolic plane will be injected into the quotient compact hyperbolic $3$-manifold. More likely, the image will intersect itself nontrivially in a family of disjoint geodesic curves. (For comparison, just consider the case of a geodesic line in a compact Riemann surface of genus $2$. The 'generic' such line will intersect itself (countably) many times.) – Robert Bryant Aug 19 '18 at 10:18 • Right, I was naive, thanks. The euclidean intuition of an irrational line in the torus is definitely not good here – Martin de Borbon Aug 19 '18 at 10:26
https://homework.cpm.org/category/CCI_CT/textbook/apcalc/chapter/5/lesson/5.2.5/problem/5-107
### APCALC > Chapter 5 > Lesson 5.2.5 > Problem 5-107

5-107. Examine the integrals below. Consider the multiple tools available for evaluating integrals and use the best strategy for each part. Evaluate each integral and briefly describe your method.

1. $\int_{-3}^{2} \left( -|x + 1| + 2 \right)\, dx$ You could rewrite the integrand as a piecewise function. Or you could sketch the graph (it's a transformation of $y = \left|x\right|$) and use geometry.

2. $\int \left( \frac{2}{t^2} - t^3 \right) dt$ Before you integrate, rewrite the integrand with exponents.

3. $\int_{5}^{5} \log\left( \sqrt{2u^5 + 1} \right) du$ Notice the bounds.

4. $\int \sec^2(x)\, dx$ Don't panic! What famous trig function has a derivative of $\sec^2 x$? $\tan x + C$
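A quick symbolic check of the four parts (a SymPy sketch added for illustration; it is not part of the CPM materials):

```python
from sympy import symbols, Abs, integrate, log, sqrt, sec

x, t, u = symbols('x t u', real=True)

# (a) Definite integral of the transformed absolute-value graph.
print(integrate(-Abs(x + 1) + 2, (x, -3, 2)))            # = 7/2

# (b) Rewrite 2/t**2 - t**3 with exponents, then integrate.
print(integrate(2*t**(-2) - t**3, t))                    # = -2/t - t**4/4

# (c) The bounds are equal, so the integral is 0 whatever the integrand is.
print(integrate(log(sqrt(2*u**5 + 1), 10), (u, 5, 5)))   # = 0

# (d) The antiderivative of sec(x)**2.
print(integrate(sec(x)**2, x))                           # = tan(x)
```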
http://www.computer.org/csdl/trans/tc/1998/09/t0983-abs.html
Issue No. 09 - September (1998, vol. 47), pp. 983-997

ABSTRACT

This paper proves some topological properties of bitonic sorters, which have found applications in constructing, along with banyan networks, internally nonblocking switching fabrics in future broadband networks. The states of all the sorting elements of an $N \times N$ bitonic sorter are studied for four different input sequences $\{a_i\}_{i=1}^N$, $\{b_i\}_{i=1}^N$, $\{c_i\}_{i=1}^N$, and $\{d_i\}_{i=1}^N$, where $a_i = i - 1$, $b_i = N - i$, and the binary representations of $c_i$ and $d_i$ are the bit reverse of those of $a_i$ and $b_i$, respectively. An application of these topological properties is to help design efficient fault diagnosis procedures. We present an example for detecting and locating a single faulty sorting element under a simple fault model where all sorting elements are always in the straight state or the cross state.

INDEX TERMS: Bitonic sorter, topological property, switching fabrics, monotonic sequence, fault diagnosis.

CITATION: Tsern-Huei Lee, Jin-Jye Chou, "Some Topological Properties of Bitonic Sorters", IEEE Transactions on Computers, vol. 47, no. 9, pp. 983-997, September 1998, doi:10.1109/12.713317
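For readers who have not seen the underlying network, here is a minimal recursive sketch of a bitonic sorter in Python (added for illustration only; the paper concerns the states of the hardware sorting elements, not software sorting). The inner loop of the merge corresponds to one column of sorting elements, each of which ends up in either the straight or the cross state.

```python
def bitonic_merge(seq, ascending=True):
    """Merge a bitonic sequence into a monotonic one; len(seq) must be a power of 2."""
    n = len(seq)
    if n == 1:
        return seq
    half = n // 2
    # One column of compare-exchange elements: each pair stays "straight" or "crosses".
    for i in range(half):
        if (seq[i] > seq[i + half]) == ascending:
            seq[i], seq[i + half] = seq[i + half], seq[i]
    return bitonic_merge(seq[:half], ascending) + bitonic_merge(seq[half:], ascending)


def bitonic_sort(seq, ascending=True):
    """Sort a list whose length is a power of 2 with a bitonic sorting network."""
    n = len(seq)
    if n == 1:
        return seq
    half = n // 2
    # Sorting the halves in opposite directions yields a bitonic sequence to merge.
    return bitonic_merge(bitonic_sort(seq[:half], True) +
                         bitonic_sort(seq[half:], False), ascending)


if __name__ == "__main__":
    N = 8
    b = [N - i for i in range(1, N + 1)]   # the sequence b_i = N - i from the abstract
    print(bitonic_sort(b))                 # [0, 1, 2, 3, 4, 5, 6, 7]
```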
https://planetcalc.com/419/?license=1
# Logarithm

This online calculator produces the logarithm of a given number with a given base. It also computes the natural logarithm and the common logarithm (decimal logarithm).

This content is licensed under Creative Commons Attribution/Share-Alike License 3.0 (Unported). That means you may freely redistribute or modify this content under the same license conditions and must attribute the original author by placing a hyperlink from your site to this work https://planetcalc.com/419/. Also, please do not modify any references to the original work (if any) contained in this content.

The logarithm is the inverse operation to exponentiation. Suppose we have the exponentiation $N=a^x$; then the logarithm of $N$ with base $a$ equals $x$, or $x=\log_{a}{N}$. To compute the logarithm with an arbitrary base, just enter your number and base in the corresponding calculator fields.

The calculator uses the following formula to get the logarithm with an arbitrary base: $\log_a b =\frac{\log_c b}{\log_c a}$. Also, the calculator gives the base-10 logarithm (common logarithm) and the natural logarithm of the given number.
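A small Python illustration of the change-of-base formula above (added as a sketch; the numbers are arbitrary):

```python
import math

number, base = 419.0, 7.0

direct = math.log(number, base)                   # two-argument form
via_formula = math.log(number) / math.log(base)   # log_a(b) = log_c(b) / log_c(a), with c = e

print(direct, via_formula)                    # both approximately 3.10
print(math.log10(number), math.log(number))   # common and natural logarithms
```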
https://solvedlib.com/n/part-a-calculale-1n8-mcanikclin-the-solution-tne-tota-voumeihe,6930383
# Part C: Calculate the molarity of KCl in the solution. The total volume of the solution is 239 mL. Part D: Calculate the molality of KCl in the solution.

###### Question:

Part C: Calculate the molarity of KCl in the solution. The total volume of the solution is 239 mL. Express your answer with the appropriate units.

Part D: Calculate the molality of KCl in the solution. Express your answer with the appropriate units.
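The extracted problem statement has lost the given amount of KCl (and, for part D, the mass of solvent), so any worked numbers here can only be hypothetical. A sketch of the two calculations, assuming 15.0 g of KCl dissolved in 225 g of water with a total solution volume of 239 mL:

```python
# Hypothetical inputs -- only the 239 mL total volume comes from the problem.
mass_kcl_g = 15.0            # assumed mass of dissolved KCl
mass_water_kg = 0.225        # assumed mass of solvent (water)
volume_solution_L = 0.239    # 239 mL of solution, from the problem statement

molar_mass_kcl = 39.10 + 35.45        # g/mol (K + Cl)
moles_kcl = mass_kcl_g / molar_mass_kcl

molarity = moles_kcl / volume_solution_L   # mol per litre of solution
molality = moles_kcl / mass_water_kg       # mol per kilogram of solvent

print(f"molarity = {molarity:.3f} M")
print(f"molality = {molality:.3f} m")
```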
https://www.ncatlab.org/nlab/show/coercion
Contents # Contents ## Idea A question that is traditionally of some debate in type theory is, What does it mean for a term to have more than one type? One place where this question becomes relevant is in the interpretation of polymorphism. Another is for type systems with a non-trivial subtyping? relation, in which case it becomes pertinent to consider the meaning of the subsumption rule: $\frac{e:A \qquad A \le B}{e:B}$ One possibility (sometimes called the inclusion interpretation of subtyping) is based on the idea of interpreting types as properties of terms (what is also sometimes called “types à la Curry”, or the extrinsic view of typing). Under this interpretation, the subsumption rule can be read more or less literally, as asserting that if $e$ satisfies the property $A$, and $A$ entails $B$, then $e$ also satisfies the property $B$. One potential drawback of this interpretation, though, is that it seems to commit you to giving a meaning to “untyped” terms (although these terms may of course intrinsically have some more coarse-grained type structure, as in models of pure lambda calculus by reflexive objects). A different possibility (sometimes called the coercion interpretation of subtyping) is to read the subsumption rule as introducing an implicit coercion from $A$ to $B$. Under this interpretation, subtyping can be seen as simply providing a convenient shorthand notation, when having to write out these coercions explicitly would be too burdensome. This interpretation allows one to maintain an intrinsic (“à la Church”) view of typing, but on the other hand it introduces a subtle issue of coherence: if there is more than one way of deriving the subtyping judgment $A \le B$, we would like to know that all of these result in equivalent coercions from $A$ to $B$, so that the meaning of a well-typed term is independent of its typing derivation. ### In dependent type theory Consider a dependent proposition $A: Type, x:A \vdash P(x):Prop$. Then the dependent sum, $\sum_{x:A} P(x)$, may be thought of as ‘$A$s which are $P$’. Sometimes it is convenient to identify a term of this dependent sum as though it were a term of the full type $A$. Here we can use the first projection of the dependent sum to coerce the identification. For example, we might have introduced ‘Eve’ as a term of type $Woman$, and then wish to form a type $Relatives(Eve)$, when $Relatives(x)$ had been introduced as a type depending on $x: Human$. If $Woman$ had been defined as $\sum_{x:Human} Female(x)$, then $Eve$ has projections to the underlying human and to a warrant for Eve’s being female. It would be clumsy to insist on $Relatives(\pi_1(Eve))$. Coercion along this projection allows us to form $Relatives(Eve)$. Luo (1999) proposed a way of formalizing coercions in dependent type theory, through what he calls the coercive definition rule, which when generalized to allow dependency on the coerced term is: $\frac{\Gamma, x: B \vdash f(x) : C(x) \; \; \; \Gamma \vdash a : A \;\;\;\Gamma \vdash A \lt_{c} B : Type}{\Gamma \vdash f(a) = f(c(a)) : C(c(a))}$ In this notation, a coercion between two types $A$ and $B$ is denoted $A \lt_{c} B$, for $c: (A \to B)$. ### In HoTT In homotopy type theory, the inclusion of a smaller universe in type theory in a larger one is often considered as a coercion. ### Coherence Reynolds (2000) gave an interesting proof of coherence for a coercion interpretation of a language with subtyping. 
The proof works by first building a logical relation between this “intrinsic semantics” and a separate untyped semantics of the language (the latter based on interpreting programs as elements of a universal domain $U$), and then establishing a “bracketing theorem” which says that for any domain $[[A]]$ in the image of the intrinsic semantics, there is a pair of functions $\phi_A : [[A]] \to U$ and $\psi_A : U \to [[A]]$ such that

• for any $a\in [[A]]$, $a$ is logically related to $\phi_A(a)$

• if $a$ is logically related to $x$, then $a = \psi_A(x)$

Combined, the logical relations theorem and the bracketing theorem imply that the coercion $[[\alpha]] : [[A]] \to [[B]]$ associated to any derivation $\alpha$ of a subtyping judgment $A \le B$ can be factored as $\begin{matrix} [[A]] &\overset{\phi_A}{\to} & U & \overset{\psi_B}{\to} & [[B]] \end{matrix}$ and hence in particular that any two derivations $\alpha_1,\alpha_2$ of $A \le B$ result in the same coercion $[[\alpha_1]] = [[\alpha_2]] = (\phi_A;\psi_B) : [[A]] \to [[B]]$.

## References

• John C. Reynolds, Using Category Theory to Design Implicit Conversions and Generic Operators. In Proceedings of the Aarhus Workshop on Semantics-Directed Compiler Generation (January 14-18, 1980), Springer-Verlag. (pdf)

• John C. Reynolds, The Coherence of Languages with Intersection Types. TACS 1991. (ps)

• John C. Reynolds, The Meaning of Types: from Intrinsic to Extrinsic Semantics. BRICS Report RS-00-32, Aarhus University, December 2000. (pdf)

• Zhaohui Luo, Coercive subtyping. Journal of Logic and Computation, 9(1):105–130, 1999. (ps)

• Zhaohui Luo, Type-Theoretical Semantics with Coercive Subtyping. (pdf)
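Returning to the Eve example from the section on dependent type theory above, the following Lean 4 fragment sketches coercion along the first projection. All of the names (Human, Female, Woman, Relatives, eve) and the trivial Female predicate are illustrative stand-ins, not part of the nLab article:

```lean
structure Human where
  name : String

-- Placeholder predicate standing in for "x is female".
def Female (_ : Human) : Prop := True

-- "Woman" as a dependent sum (here a subtype) of humans satisfying Female.
abbrev Woman := { h : Human // Female h }

-- The coercion is the first projection, as in the coercive subtyping discussion.
instance : Coe Woman Human := ⟨Subtype.val⟩

-- A type family over Human, standing in for Relatives(x).
def Relatives (_ : Human) : Type := List Human

def eve : Woman := ⟨⟨"Eve"⟩, True.intro⟩

-- The coercion lets us form `Relatives eve` instead of writing `Relatives eve.val`.
#check (Relatives eve)
```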
https://zenodo.org/record/4026722/export/xd
Conference paper Open Access

# Future regional distribution of electric vehicles in Germany

Speth, Daniel; Gnann, Till; Plötz, Patrick; Wietschel, Martin; George, Jan

### Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Speth, Daniel</dc:creator>
  <dc:creator>Gnann, Till</dc:creator>
  <dc:creator>Plötz, Patrick</dc:creator>
  <dc:creator>Wietschel, Martin</dc:creator>
  <dc:creator>George, Jan</dc:creator>
  <dc:date>2020-09-12</dc:date>
  <dc:description>Electrification as an option to decarbonize road transport leads to an increasing number of electric vehicles in Germany. However, the stock of electric vehicles is not evenly distributed regionally. The local distribution of electric vehicles is particularly important from an energy system perspective in order to be able to estimate future grid loads. Here, we use multiple linear regression to distribute a German-wide market diffusion of electric vehicles to 401 NUTS3-areas in Germany. Current regional vehicle stocks and regional development data (e.g. population, income, and spatial structure) are used as independent variables. We combine these variables with forecasts for spatial development and obtain a regionalized electric vehicle market diffusion for Germany. First results suggest a concentration of BEV and PHEV stocks in southwestern Germany and in large cities in the medium-term future.</dc:description>
  <dc:identifier>https://zenodo.org/record/4026722</dc:identifier>
  <dc:identifier>10.5281/zenodo.4026722</dc:identifier>
  <dc:identifier>oai:zenodo.org:4026722</dc:identifier>
  <dc:relation>doi:10.5281/zenodo.4026721</dc:relation>
  <dc:relation>url:https://zenodo.org/communities/evs33</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>https://creativecommons.org/licenses/by/4.0/legalcode</dc:rights>
  <dc:subject>electric vehicle (EV)</dc:subject>
  <dc:subject>fleet</dc:subject>
  <dc:subject>Market Development</dc:subject>
  <dc:subject>modelling</dc:subject>
  <dc:subject>user behaviour</dc:subject>
  <dc:title>Future regional distribution of electric vehicles in Germany</dc:title>
  <dc:type>info:eu-repo/semantics/conferencePaper</dc:type>
  <dc:type>publication-conferencepaper</dc:type>
</oai_dc:dc>
http://jurnalkanun.dbp.my/3m2rx/does-cubic-zirconia-rust-in-water-2f0a0a
Does cubic zirconia rust? Purple cubic zirconia with checkerboard cut. It should not be confused with zircon, which is a zirconium silicate (ZrSiO4). The reason behind this shape is dictated by a concept known as crystal degeneration according to Tiller. A good way to clean your cubic zirconia is to use a small soft brush and hot soapy water to remove dirt. Particularly in the bio-engineering industry, it has been used to make reliable super-sharp medical scalpels for doctors that are compatible with bio tissues and contain an edge much smoother than one made of steel.[8]. https://www.diamond-essence.com/essential-diamond-resources-a/136.htm, https://jewelryinfoplace.com/how-to-clean-jewelry/#17, http://www.streetdirectory.com/travel_guide/62020/jewelry/how_to_maintain_and_care_for_cubic_zirconia.html, https://www.bidorbuy.co.za/article/6077/Caring_for_Cubic_Zirconia, consider supporting our work with a contribution to wikiHow. To preserve their shine and beauty, cubic zirconia gems should be cleaned monthly. Top ways to clean cubic zirconia engagement rings. This causes the molten zirconia to remain contained within its own powder which prevents it from contamination from the crucible and reduces heat loss. [18] However, concerns from mining countries such as the Democratic Republic of Congo are that a boycott in purchases of diamonds would only worsen their economy. [10] This material is marketed as "mystic" by many dealers. But, if you submerge the gemstones in water, the ratio of the refractive index differences increases markedly. It is a dense substance, with a specific gravity between 5.6 and 6.0—at least 1.6 times that of diamond. Orange- or red-tinted water flowing from the tap might look disgusting, but if the color is caused by rust, it doesn't pose a health threat. Clean the piece with the warm water and dish soap. The Fog Test As with the majority of grown diamond substitutes, the idea of producing single-crystal cubic zirconia arose in the minds of scientists seeking a new and versatile material for use in lasers and other optical applications. This article was co-authored by Jerry Ehrenwald. 2.3 shows these major polymorphs of zirconia. Don't use a makeup brush; use an eyebrow brush so that those spiky little bristles can get under the facets of the stone to get out all the grime hiding under there. [17] Diamond simulants have become an alternative to boycott the funding of unethical practices. When observing the phase diagram the cubic phase will crystallize first as the solution is cooled down no matter the concentration of Y2O3. Although they are not real diamonds, cubic zirconia stones possess many of the same traits of more valuable gems. Other major producers include Taiwan Crystal Company Ltd, Swarovski and ICT inc.[7][5] By 1980 annual global production had reached 60 million carats (12 tonnes) and continued to increase with production reaching around 400 tonnes per year in 1998.[7]. This continues until the entire product is molten. Please consider making a contribution to wikiHow today. The process was named cold crucible, an allusion to the system of water cooling used. n = index of refraction = 2.17 (for cubic zirconia) n = c/v nv = c v = c/n = 300,000,000 m/s / 2.17 v = 138,248,848 m/s speed of light in cubic zirconia d = vt Zirconia phase transformations are reversed on cooling and accompanied by volume expansions. These can help distinguish between diamonds, cubic zirconia (CZ), and artificial diamonds such as moissanite. 
Your support helps wikiHow to create more in-depth illustrated articles and videos and to share our trusted brand of instructional content with millions of people all over the world. Rinse your cubic zirconia jewelry well in warm water, since soap scum can build up easily, and then pat it dry with a soft, clean cloth. Thanks to all authors for creating a page that has been read 186,551 times. There are a few key features of cubic zirconia which distinguish it from diamond: Cubic zirconia, as a diamond simulant and jewel competitor, can potentially reduce demand for conflict diamonds, and impact the controversy surrounding the rarity and value of diamonds.[11][12]. This method was patented by Josep F. Wenckus and coworkers in 1997. It works wonders. Due to the cooling system surrounding the crucible, a thin shell of sintered solid material is formed. Please consider making a contribution to wikiHow today. ", "Very informative in a brief and to-the-point format. The common reason is that cubic zirconia is cheap. Cubic zirconia does not rust, but the jewelry setting can. A 1.4 Carat Princess-Cut diamond ring would cost about $9,800! Inexpensive metals such as brass, gold-plated alloys, and sterling silver often rust over time due to exposure to oxygen in the air and water. Last Updated: July 31, 2020 After cleaning your piece of jewelry, rinse with water and let it air dry. If you don't have a spare toothbrush on hand, cubic zirconia can also be cleaned with a soft cosmetic brush. Cubic zirconia does not rust, but the jewelry setting can. Because the natural form of cubic zirconia is so rare, all cubic zirconia used in jewelry has been synthesized, or created by humans. Organic materials such as amber, coral, fossil, ivory, mother of pearl, natural and cultured freshwater pearls, and natural saltwater pearls … [11][13] The company pleaded guilty to these charges in an Ohio court in 13 July 2004. According to the Ministry of Mines in Congo, 10% of its population relies on the income from diamonds. Whenever you have iron, water and oxygen together, you get rust. Inexpensive metals such as brass, gold-plated alloys, and sterling silver often rust over time due to exposure to oxygen in the air and water. Zirconia containing more than 50% cubic phase may be more water-stable than tetragonal-containing zirconia, as the cubic phase does not transform to the monoclinic phase. This article has been viewed 186,551 times. He is the previous President of the International Gemological Institute and the inventor of U.S.-patented Laserscribe℠, a means of laser inscribing onto a diamond a unique indicia, such as a DIN (Diamond Identification Number). If you really can’t stand to see another ad again, then please consider supporting our work with a contribution to wikiHow. By using our site, you agree to our. Is cubic zirconia stronger than white sapphire? In recent years[when?] [2], The high melting point of zirconia (2750 °C or 4976 °F) hinders controlled growth of single crystals. Remember that technically only iron and alloys that contain iron can rust. You can ensure this does not happen by treating the stone with care. × Cubic zirconia is crystallographically isometric, an important attribute of a would-be diamond simulant. How do you decrease the darkness in the middle of stone? Scrub the cubic zirconia with a soft-bristled toothbrush to remove dirt and debris. If the silver is still dull, depending on if the silver is real or fake, clean the stone part first and then try a very gentle silver cleaner. 
Colored stones may show a strong, complex rare earth absorption spectrum. Though, this test shouldn’t be enough as a moissanite or a cubic zirconia may also sink if too heavy. The rate at which the crucible is removed from the RF coils is chosen as a function of the stability of crystallization dictated by the phase transition diagram. An issue closely related to monopoly is the emergence of conflict diamonds. Include your email address to get a message when this question is answered. Due to its optical properties YCZ (yttrium cubic zirconia) has been used for windows, lenses, prisms, filters and laser elements. How long will cubic zirconia last? A stabilizer is required for cubic crystals (taking on the fluorite structure) to form, and remain stable at ordinary temperatures; typically this is either yttrium or calcium oxide, the amount of stabilizer used depending on the many recipes of individual manufacturers. Gently set your cubic zirconia in the water/detergent mixture so it's fully submerged. Cubic Zirconia is a man-made stone that has been mass produced for Jewelry since the 1970’s. In this case, 100% of readers who voted found the article helpful, earning it our reader-approved status. Because of its high hardness, it is generally considered brittle. Thus, if you submerge a cubic zirconia in a glass of water, the gemstone virtually disappears, while a diamond is still easily visible. Rinse it in warm water and pat dry with a clean cloth. Light scattering phase inclusions: Caused by contaminants in the crystal (primarily precipitates of silicates or aluminates of yttrium) typically of magnitude 0.03-10 μm. This is largely due to the process allowing for temperatures of over 3000 degrees to be achieved, lack of contact between crucible and material as well as the freedom to choose any gas atmosphere. Cubic zirconia usually comes to you already polished by automatic machines during the cutting process. Under longwave UV the effect is greatly diminished, with a whitish glow sometimes being seen. [3][9], Zirconium dioxide thoroughly mixed with a stabilizer (normally 10% yttrium oxide) is fed into a cold crucible. We know ads can be annoying, but they’re what allow us to make all of wikiHow available for free. It occurs when iron combines with the oxygen in the air causing it to corrode. "You covered all the questions I have on cleaning the chains on necklaces. Or, use toothpaste and the soft brush and the other concoction for the whole piece, after you have rinsed it really well. Its definition does not include forced labor conditions or human right violations. Cubic Zirconia jewelry is significantly less expensive than diamonds, with a 1.5 Carat Princess Cut Cubic Zirconia ring retailing for about$37. Particularly in the chemical industry it is used as window material for the monitoring of corrosive liquids due to its chemical stability and mechanical toughness. The facet edges can be rounded or "smooth". As the carat weight increases though, so does the price gap. If you want to know if your diamond is a real natural stone, you can do five tests at home. They didn’t look as similar to diamond and they now do. Darkness is a sign of damage. Below 2370°C, cubic zirconia transforms into tetragonal with a change in theoretical density from 6.06 to 6.1. Keep reading to learn tips on soaking your jewelry to remove tough stains. If it floats in the middle or at the surface of the water, then you know it’s not real. 
Every day at wikiHow, we work hard to give you access to instructions and information that will help you live a better life, whether it's keeping you safer, healthier, or improving your well-being. Cut: cubic zirconia gemstones may be cut differently from diamonds. Metallic chips of either zirconium or the stabilizer are introduced into the powder mix in a compact pile manner. The main catalyst for rust to occur is water. Gemstones that are soft and not so dense can become easily scratched, causing permanent damage. wikiHow is where trusted research and expert knowledge come together. When you're done, your jewelry should look shiny and new. Rusting is more generally known as oxidation, and involves combination of a material with oxygen. A list of specific dopants and colors produced by their addition can be seen below. Discovered in 1892, the yellowish monoclinic mineral baddeleyite is a natural form of zirconium oxide. Cubic zirconia (CZ) is the cubic crystalline form of zirconium dioxide (ZrO2). The discovery was confirmed through X-ray diffraction, proving the existence of a natural counterpart to the synthetic product.[4][5]. Which chakra(s) does Cubic Zirconia correspond with and why? To clean cubic zirconia, dip the jewelry in a mixture of warm water and mild dish detergent. Less expensive pieces may be set … ", "Great advice, I had no idea how to clean CZ. [16] UN reports show that more than US$24 million in conflict diamonds have been smuggled since the establishment of the KP. In this situation, the cubic and the diamond reflect a very similar amount of light. How does the spirit of the stone’s energy translate for healing the mind, body and spirit? People who purchase cubic zirconia engagement rings will want to know the best methods to keep them looking their best. Later, Soviet scientists under V. V. Osiko at the Lebedev Physical Institute in Moscow perfected the technique, which was then named skull crucible (an allusion either to the shape of the water-cooled container or to the form of crystals sometimes grown). 5 [14] Therefore, cubic zirconia are a short term alternative to reduce conflict but a long term solution would be to establish a more rigorous system of identifying the origin of these stones. Jerry Ehrenwald, GG, ASA, is a graduate gemologist in New York City. Remember to avoid plate like the plague!This includes gold, platinum and sterling silver plate over brass or copper rings. Use a soft brush to clean the jewels themselves, scrubbing gently with warm, mild soapy water to get rid of any dirt accumulation. No, zirconia will not rust. What is a mild soap I can use to clean a cubic zirconia? Dislocations: Similar to mechanical stresses, dislocations can be greatly reduced by annealing. The tiny little imperfections in a diamond is a way to spot its authenticity, as no diamond is perfect. They came up like they were new. Regarding value, the paradigm that diamonds are costly due to their rarity and visual beauty has been replaced by an artificial rarity[11][12] attributed to price-fixing practices of De Beers Company which held a monopoly on the market from the 1870s to early 2000s. {\displaystyle 5\times 10^{-5}} I really need this one. If you own a cubic zirconia necklace for example, what could rust is the metal or gold that the cubic zirconia is set in. Cubic zirconia has no cleavage and exhibits a conchoidal fracture. 
[11] The emergence of artificial stones such as cubic zirconia with optic properties similar to diamonds, could be an alternative for jewelry buyers given their lower price and noncontroversial history. Please help us continue to provide you with our trusted how-to guides and videos for free by whitelisting wikiHow on your ad blocker. This causes refractive index values of up to. 5 What spiritual element does the stone connect with? References Gemstones produced in the United States and other producing countries are of three types; natural, synthetic, and simulant. The cubic zirconia stone itself can’t rust. The first thing to look for in picking cubic zirconia jewelry is the metal that it is set in. They named the jewel Fianit after the institute's name FIAN (Physical Institute of the Academy of Science), but the name was not used outside of the USSR. Cubic zirconia is relatively hard, 8–8.5 on the Mohs scale—slightly harder than most semi-precious natural gems. 2. A gem or gemstone can be defined as a jewel or semiprecious stone cut and polished for personal adornment. They range in price from$1,000 to $2,500 . Your cubic zirconia jewelry will look like the day you got it! Their breakthrough was published in 1973, and commercial production began in 1976. manufacturers have sought ways of distinguishing their product by supposedly "improving" cubic zirconia. Shopping Tips for Cubic Zirconia Wedding Sets. [1] Its refractive index is high at 2.15–2.18 (compared to 2.42 for diamonds) and its luster is vitreous. Thanks. The Kimberley Process (KP) was established to deter the illicit trade of diamonds that fund civil wars in Angola and Sierra Leone. Color: only the rarest of diamonds are truly colorless, most having a tinge of yellow or brown to some extent. What size makeup brush should I use to clean cubic zirconia? In fact, this is one main way that jewelers can tell CZ and diamonds apart – by CZs lack of flaws. Earth, Air, Fire, or Water? The melt is left at high temperatures for some hours to ensure homogeneity and ensure all impurities have evaporated. Cubic Zirconia Weight Chart Use this chart to determine the approximate weight of particular CZ size and compare it to diamond weight Round Pear Oval Heart Marquise Princess Trillion Emerald Tringle Radiant Round C.Z. Another technique first applied to quartz and topaz has also been adapted to cubic zirconia: vacuum-sputtering an extremely thin layer of a precious metal (typically gold) or even certain metal oxides or nitrides amongst other coatings onto the finished stones creates an iridescent effect. Cubic zirconia (CZ) is the cubic crystalline form of zirconium dioxide (ZrO 2).The synthesized material is hard and usually colorless, but may be made in a variety of different colors. Drop your loose stone into the glass. There is no precise amount of soaking time required, but let the … Finally, the entire crucible is slowly removed from the RF coils to reduce the heating and let it slowly cool down (from bottom to top). Mechanical stresses: Typically caused from the high temperature gradients of the growth and cooling processes causing the crystal to form with internal mechanical stresses acting on it. Let me just add here that cubic zirconia is not to be mistaken with zircon.Zircon, which is also a diamond look-alike, is much softer (6.5 – 7.7 on the Mohs scale) and one of the oldest natural gemstones (some zircon crystals are estimated to be over 4 billion years old! % of people told us that this article helped them. 
Additionally, because of the high percentage of diamond bonds in the amorphous diamond coating, the finished simulant will show a positive diamond signature in Raman spectra. He is a senior member of the American Society of Appraisers (ASA) and is a member of the Twenty-Four Karat Club of the City of New York, a social club limited to 200 of the most accomplished individuals in the jewelry business. Some of the earliest research into controlled single-crystal growth of cubic zirconia occurred in 1960s France, much work being done by Y. Roulin and R. Collongues. Does Iron Rust? Does cubic zirconia get cloudy? On the Moh’s Scale of Hardness, a diamond is rated at 10. The resulting material is purportedly harder, more lustrous and more like diamond overall. Hardness: cubic zirconia has a rating of approximately 8 on. ", "It has helped me tremendously. If the stone floats, it is most likely cubic zirconia. [14] However, the KP is not as effective in decreasing the number of conflict diamonds reaching the European and American markets. Stainless steel is also a great choice that will not tarni… Cubic zirconia (CZ) is a diamond look-alike that costs a fraction of what a diamond does. The coating is thought to quench the excess fire of cubic zirconia, while improving its refractive index, thus making it appear more like diamond. Avoid using lotion, hairspray, makeup, powders and cleaning agents while wearing your cubic zirconia jewelry. [16] Terms such as “Eco-friendly Jewelry” define them as conflict free origin and environmentally sustainable. Rust is a form of iron oxide. [3], In 1937, German mineralogists M. V. Stackelberg and K. Chudoba discovered naturally occurring cubic zirconia in the form of microscopic grains included in metamict zircon. Rinse the piece under cool running water, making sure to remove all soap residue, then pat the cubic zirconia dry with a soft, clean cloth. CZ is almost as durable as a diamond. Zirconium dioxide (ZrO 2), sometimes known as zirconia (not to be confused with zircon), is a white crystalline oxide of zirconium.Its most naturally occurring form, with a monoclinic crystalline structure, is the mineral baddeleyite.A dopant stabilized cubic structured zirconia, cubic zirconia, is synthesized in various colours for use as a gemstone and a diamond simulant. ", "The pictures helped me focus more on what the words were trying to say, if that makes sense. Currently the primary method of cubic zirconia synthesis employed by producers remains to be through the skull-melting method. Synthetic cubic zirconia is a man-made product. Its dispersion is very high at 0.058–0.066, exceeding that of diamond (0.044). It is sometimes erroneously called cubic zirconium. [3] The diameter of the crystals is heavily influenced by the concentration of Y2O3 stabilizer. . The vast majority of YCZ (yttrium bearing cubic zirconia) crystals clear with high optical perfection and with gradients of the refractive index lower than [14][15] A 2015 study from the Enough Project, showed that groups in the Central African Republic have reaped between US$3 million and US\$6 million annually from conflict diamonds. Below 14% at low growth rates tend to be opaque indicating partial phase separation in the solid solution (likely due to diffusion in the crystals remaining in the high temperature region for a longer time). Rust is the orange-brown discoloration that builds up on metal. Many cubic zirconia pieces are set in real metals, including sterling silver or even real gold. 
This will restore the cubic zirconia stone to its natural shine and clarity. (Diamond is a crystal of pure carbon.)

Size (mm) | Round C.Z. weight (carats) | Diamond weight (carats)
--- | --- | ---
1.25 | 0.013 | 0.01
1.5 | 0.023 | 0.015
1.75 | 0.037 | 0.02
2 | 0.06 | 0.03
2.25 | 0.08 | 0.04
2.5 | 0.1 | 0.05
… | … | …

This provides the basis for Wenckus' identification method (currently the most successful identification method). Above this threshold crystals tend to remain clear at reasonable growth rates and maintain good annealing conditions.[8] Primary downsides to this method include the inability to predict the size of the crystals produced, and it is impossible to control the crystallization process through temperature changes. Its main competitor as a synthetic gemstone is a more recently cultivated material, synthetic moissanite. In fact, when iron is exposed to water and oxygen, it can begin to rust within a few hours. YCZ has also been used as a substrate for semiconductor and superconductor films in similar industries. Coating finished cubic zirconia with a film of diamond-like carbon (DLC) is one such innovation, a process using chemical vapor deposition. The RF generator is switched on and the metallic chips quickly start heating up and readily oxidize into more zirconia. Its production eventually exceeded that of earlier synthetics, such as synthetic strontium titanate, synthetic rutile, YAG (yttrium aluminium garnet) and GGG (gadolinium gallium garnet).
http://math.stackexchange.com/questions/157891/matrix-multiplication-and-function-composition
# Matrix Multiplication and Function Composition

Given the vector space $F^n$ and two linear functions $T,S:F^n \rightarrow F^n$, is it true that multiplying the representative matrices of $T$ and $S$ according to the standard basis is equivalent to the composition of their explicit formulas? I.e., given a vector v in $F^n$, is $[T]_E[S]_Ev = [TS]_Ev = T(S(v))$ ?

- Yes, one can check this by writing out the matrix product explicitly. Using linearity it is fairly immediate to see that this is the case. – Vittorio Patriarca Jun 13 '12 at 17:52

This follows naturally from the definition of $[T]_E$ and the fact that given matrices A, B and a vector x, $A(Bx) = (AB)x$. – Karolis Juodelė Jun 13 '12 at 18:11

I guess this results also from the fact that the explicit formula for a linear function of the type mentioned above is always from the standard basis to the standard basis? – Robert S. Barnes Jun 13 '12 at 18:15

In fact, matrix multiplication is defined the (somewhat strange) way it is precisely so that it corresponds to composition of linear transformations. – Arturo Magidin Jun 13 '12 at 18:59

I'm reading this question as: "Does matrix multiplication really give rise to the matrix for the composition of maps that the matrices represent?" I'm always using matrices operating on the left of column vectors, in my notation below.

Define $\pi_i:F^n\rightarrow F$ by projecting onto the $i$th coordinate. Define $\mu_j:F\rightarrow F^n$ by inserting the input value into the $j$th coordinate. How do these maps relate to the matrix representation of a function? Let $f:F^n\rightarrow F^n$ be represented by $A=[A]_{ij}$ in the standard basis. If you compute $\mu_j(1)$, you get the unit vector with a 1 in the $j$th coordinate. To apply $f$ to this, you multiply on the left with its matrix $A$. So, $A\mu_j(1)$ is the $j$th column of $A$. Now by applying $\pi_i$ after this, you will pick out the $i$th entry of the $j$th column. So, $\pi_i(A\mu_j(1))=A_{ij}$. This shows that $\pi_i f\mu_j=A_{ij}$. This last $A_{ij}$ is the $1\times 1$ matrix representation of a linear transformation of $F$ into $F$.

Can you see that $\sum_{k=1}^n\mu_k(\pi_k(v))=v$ for all $v\in F^n$? This means that $\sum_{k=1}^n\mu_k\pi_k=Id_V$, the identity on $V$.

Let a transformation $g:F^n\rightarrow F^n$ be represented by the matrix $B=[B]_{ij}$ in the standard basis. We will now show that $[BA]_{ij}$ is the matrix representing the composition $gf$. This will be true if $\pi_i gf \mu_j=[BA]_{ij}$, by our discussion before. We compute that $\pi_i gf \mu_j=\pi_i g \mathrm{Id}_V f \mu_j=\pi_i g(\sum_{k=1}^n \mu_k\pi_k)f \mu_j=\sum_{k=1}^n\pi_i g \mu_k\pi_kf \mu_j=\sum_{k=1}^n B_{ik}A_{kj}=[BA]_{ij}$.

So there you have it: the components of $gf$ are given by matrix multiplication!

- That is the question I'm asking, if somewhat more elegantly stated :-) – Robert S. Barnes Jun 13 '12 at 19:08

@RobertS.Barnes I thought so... I remember learning this for the first time and finally thinking "Holy crap, matrix multiplication is natural!" – rschwieb Jun 13 '12 at 19:47

More pedantically: Let $[T]_{ij}$ be the representation in the standard basis $\{e_i\}$ of the operator $T$. Suppose $x\in F^n$; then we can write $x = \sum_j [x]_j e_j$. The aim is to show that $[TS]_{ij} = \sum_k [T]_{ik} [S]_{kj}$. Note that since $\{e_i\}$ is a basis, the matrix representation is unique, i.e., if the matrices $\alpha_1$ and $\alpha_2$ represent the operator $A$, then $\alpha_1 = \alpha_2$.
Then we have \begin{align} T(Sx) & = T(\sum_j [x]_j S e_j) \\ & = T(\sum_j [x]_j \sum_k [S]_{kj} e_k) \\ & = \sum_j [x]_j \sum_k [S]_{kj} T e_k \\ & = \sum_j [x]_j \sum_k [S]_{kj} \sum_i [T]_{ik} e_i \\ & = \sum_j [x]_j \sum_i \sum_k [T]_{ik} [S]_{kj} e_i \\ & = (TS)x \\ & = \sum_j [x]_j \sum_i [TS]_{ij} e_i. \end{align} The desired result follows from uniqueness. -
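A quick numerical sanity check of the result above (a sketch added to this write-up, not part of the original thread): in NumPy, applying $S$ and then $T$ to a vector gives the same answer as multiplying by the product matrix $[T]_E[S]_E$. The particular matrices and vector below are arbitrary examples.

```python
# Sketch: verify [T]_E [S]_E v = T(S(v)) for two linear maps on F^n,
# using arbitrary matrices as the standard-basis representations of T and S.
import numpy as np

T = np.array([[1.0, 2.0], [0.0, -1.0]])   # [T]_E
S = np.array([[3.0, 0.0], [1.0, 4.0]])    # [S]_E
v = np.array([2.0, 5.0])

composed_as_functions = T @ (S @ v)       # T(S(v)): apply S first, then T
composed_as_matrix = (T @ S) @ v          # [TS]_E v: multiply the matrices first

print(np.allclose(composed_as_functions, composed_as_matrix))  # True
```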
2015-08-30 20:57:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9989383816719055, "perplexity": 369.3457688703677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065341.3/warc/CC-MAIN-20150827025425-00145-ip-10-171-96-226.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/133962/if-a-solution-to-partition-is-known-to-exist-can-it-be-found-in-polynomial-time
# If a solution to Partition is known to exist, can it be found in polynomial time? In the Partition problem, there is a set of integers, and the goal is to decide whether it can be partitioned into two sets of equal sum. This problem is known to be NP-complete. Suppose we are given an instance and we know that it admits an equal-sum partition. Can this partition be found in polynomial time (assuming $$P\neq NP$$)? Let's call this problem "GuaranteedPartition". On one hand, apparently one can prove that GuaranteedPartition is NP-hard, by reduction from Partition: if we could solve GuaranteedPartition with $$n$$ numbers in $$T(n)$$ steps, where $$T(n)$$ is a polynomial function, then we could give it as input any instance of Partition and stop it after $$T(n)$$ steps: if it returns an equal-sum partition the answer is "yes", otherwise the answer is "no". On the other hand, the GuaranteedPartition is apparently in the class TFNP, whose relation to NP is currently not known. Which of the above arguments (if any) is correct, and what is known about the GuaranteedPartition problem? • You should probably be clearer about what you mean by NP-hard. Under the definition "NP-hard under Karp reductions" it should be obvious that GuaranteedPartition cannot be NP-hard (as is explained on the wikipedia page you linked). But indeed, if GuaranteedPartition is in P, so is Partition, as you correctly argue. Jan 4, 2021 at 14:18 • The first argument shows that if P≠NP then GuaranteedPartition cannot be solved in polynomial time. Jan 4, 2021 at 14:39 • @YuvalFilmus Suppose we have an algorithm that solves GuaranteedPartition correctly when an equal partition exists, but when an equal partition does not exist, its behavior is undefined (as in the C++ standard en.cppreference.com/w/cpp/language/ub ). Then I am not sure the first argument works. I am also unsure if the concept of "undefined behavior" makes sense at all. This is quite confusing. Jan 4, 2021 at 19:09 • We don’t care what the algorithm does. If it doesn’t output a valid partition after the allotted time, we know that no such partition exists. Jan 4, 2021 at 19:11 I think the whole misunderstanding comes from an incorrect definition of TFNP. TFNP is for problems like factoring, where there is always a solution. Every integer has a prime factorization, so you don't have to make any restrictions on what inputs are allowed - i.e., every input string of bits represents a number that has a factorization. This is different from a promise, which is more like what you're describing. Unlike integers and factorizations, there are sets that do not have equal-sum partitions. And if you change the input format to be some strange representation of sets of integers such that every set represented always has a valid partition, then you change the problem entirely. A program that solves this new problem probably couldn't be used to solve the original NP-hard Partition problem. (GuaranteedPartition is also not a decision problem, so it's not NP-hard either - although a machine that solved GuaranteedPartition could be used to solve anything in NP, as you point out.) • This clarifies the issue. Thanks! Jan 5, 2021 at 17:41
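To make the reduction argument concrete, here is a hedged Python sketch (the function names are illustrative, not from the post). The finder below is ordinary subset-sum dynamic programming, so it is pseudo-polynomial in the total sum rather than polynomial in the input size; the point is only that verifying a returned partition is cheap, which is why a fast GuaranteedPartition solver would immediately yield a fast decision procedure for Partition.

```python
# Sketch of the "run the finder, then verify" argument. `find_equal_partition`
# stands in for a hypothetical GuaranteedPartition solver; here it is implemented
# by subset-sum DP, which is NOT polynomial time in the input size.
from typing import List, Optional, Tuple

def find_equal_partition(nums: List[int]) -> Optional[Tuple[List[int], List[int]]]:
    total = sum(nums)
    if total % 2 == 1:
        return None
    target = total // 2
    subset_for = {0: []}                      # achievable sum -> indices of one subset
    for i, x in enumerate(nums):
        for s, idx in list(subset_for.items()):
            if s + x <= target and s + x not in subset_for:
                subset_for[s + x] = idx + [i]
    if target not in subset_for:
        return None
    chosen = set(subset_for[target])
    left = [nums[i] for i in chosen]
    right = [nums[i] for i in range(len(nums)) if i not in chosen]
    return left, right

def decide_partition(nums: List[int]) -> bool:
    """Decide Partition by running the finder and cheaply verifying its output."""
    result = find_equal_partition(nums)
    if result is None:
        return False
    left, right = result
    return sum(left) == sum(right) and sorted(left + right) == sorted(nums)

print(decide_partition([3, 1, 1, 2, 2, 1]))   # True, e.g. [3, 1, 1] vs [2, 2, 1]
print(decide_partition([2, 2, 3]))            # False, the total sum is odd
```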
2022-09-25 04:32:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7462338805198669, "perplexity": 382.0267853140011}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00476.warc.gz"}
https://math.libretexts.org/Bookshelves/Linear_Algebra/Map%3A_Linear_Algebra_(Waldron_Cherney_and_Denton)/02%3A_Systems_of_Linear_Equations/2.03%3A_Elementary_Row_Operations
# 2.3: Elementary Row Operations Elementary row operations (EROs) are systems of linear equations relating the old and new rows in Gaussian Elimination. Example 20: (Keeping track of EROs with equations between rows) We will refer to the new $$k$$th row as $$R'_k$$ and the old $$k$$th row as $$R_k$$. $\left(\begin{array}{rrr|r} 0 & 1 & 1 & 7 \\ 2 & 0 & 0 & 4 \\ 0 & 0 & 1 & 4\end{array}\right) \stackrel{R'_1 = 0R_1 + R_2 + 0R_3}{\stackrel{R'_2 = R_1 + 0R_2 + 0R_3}{\stackrel{R'_3 = 0R_1 + 0R_2 + R_3}{\sim}}} \left(\begin{array}{rrr|r} 2 & 0 & 0 & 4 \\ 0 & 1 & 1 & 7 \\ 0 & 0 & 1 & 4 \end{array}\right)$ $\stackrel{R'_1 = \frac{1}{2}R_1 + 0R_2 + 0R_3}{\stackrel{R'_2 = 0R_1 + R_2 + 0R_3}{\stackrel{R'_3 = 0R_1 + 0R_2 + R_3}{\sim}}} \left(\begin{array}{rrr|r} 1 & 0 & 0 & 2 \\ 0 & 1 & 1 & 7 \\ 0 & 0 & 1 & 4 \end{array}\right)$ $\stackrel{R'_1 = R_1 + 0R_2 + 0R_3}{\stackrel{R'_2 = 0R_1 + R_2 - R_3}{\stackrel{R'_3 = 0R_1 + 0R_2 + R_3}{\sim}}} \left(\begin{array}{rrr|r} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 4 \end{array}\right)$ ## 2.3.1: Elementary Row Operations as Matrices The matrix describing the system of equations between rows performs the corresponding ERO on the augmented matrix: Example 21: (Performing EROs with Matrices) $\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{pmatrix}\left(\begin{array}{rrr|r} 0 & 1 & 1 & 7 \\ 2 & 0 & 0 & 4 \\ 0 & 0 & 1 & 4 \end{array}\right) = \left(\begin{array}{rrr|r} 2 & 0 & 0 & 4 \\ 0 & 1 & 1 & 7 \\ 0 & 0 & 1 & 4 \end{array}\right)$ $\begin{pmatrix} \frac{1}{2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}\left(\begin{array}{rrr|r} 2 & 0 & 0 & 4 \\ 0 & 1 & 1 & 7 \\ 0 & 0 & 1 & 4 \end{array}\right) = \left(\begin{array}{rrr|r} 1 & 0 & 0 & 2 \\ 0 & 1 & 1 & 7 \\ 0 & 0 & 1 & 4 \end{array}\right)$ $\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1\end{pmatrix}\left(\begin{array}{rrr|r} 1 & 0 & 0 & 2 \\ 0 & 1 & 1 & 7 \\ 0 & 0 & 1 & 4 \end{array}\right) = \left(\begin{array}{rrr|r} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 4 \end{array}\right)$ This obviously involved more writing than Gaussian elimination. The point here is that realizing EROs as matrices allows us to make concrete the notion of "dividing by a matrix" (alluded to in Chapter 1); we can now perform manipulations on both sides of an equation in a familiar way. Example 22 (Undoing $$A$$ in $$Ax=b$$ slowly, using $$A=6=3\cdot2$$) $6x = 12$ $\Leftrightarrow 3^{-1}6x = 3^{-1}12$ $\Leftrightarrow 2x = 4$ $\Leftrightarrow 2^{-1}2x = 2^{-1}4$ $\Leftrightarrow 1x = 2$ In particular, matrices corresponding to EROs undo a matrix step by step.
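The matrix form of Example 21 is easy to check numerically. Below is a minimal NumPy sketch (added for illustration; the fractions are written as floats) that applies the three ERO matrices to the augmented matrix and also multiplies them together, previewing the single matrix that undoes $$M$$ found in Example 24.

```python
# Sketch: the three ERO matrices of Example 21 applied to the augmented matrix [M|b],
# and their product E3 E2 E1, which undoes M in one step (compare Example 24 below).
import numpy as np

aug = np.array([[0., 1., 1., 7.],
                [2., 0., 0., 4.],
                [0., 0., 1., 4.]])                            # [M | b]

E1 = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])     # swap rows 1 and 2
E2 = np.array([[0.5, 0., 0.], [0., 1., 0.], [0., 0., 1.]])    # scale row 1 by 1/2
E3 = np.array([[1., 0., 0.], [0., 1., -1.], [0., 0., 1.]])    # replace row 2 by row 2 minus row 3

print(E3 @ E2 @ E1 @ aug)    # reduced system: x = 2, y = 3, z = 4
print(E3 @ E2 @ E1)          # the single matrix that undoes M, as found in Example 24
```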
Example 23 (Undoing $$A$$ in $$Ax=b$$ slowly, using $$A=M=...$$) $\begin{pmatrix} 0 &1 &1 \\ 2 &0 &0 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 7 \\ 4 \\ 4 \end{pmatrix}$ $\Leftrightarrow\begin{pmatrix} 0 &1 &0 \\ 1 &0 &0 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} 0 &1 &1 \\ 2 &0 &0 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 &1 &0 \\ 1 &0 &0 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} 7 \\ 4 \\ 4 \end{pmatrix}$ $\Leftrightarrow\begin{pmatrix} 2 &0 &0 \\ 0 &1 &1 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 4 \\ 7 \\ 4 \end{pmatrix}$ $\Leftrightarrow\begin{pmatrix}\frac{1}{2} &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} 2 &0 &0 \\ 0 &1 &1 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix}\frac{1}{2} &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} 4 \\ 7 \\ 4 \end{pmatrix}$ $\Leftrightarrow\begin{pmatrix} 1 &0 &0 \\ 0 &1 &1 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 2 \\ 7 \\ 4 \end{pmatrix}$ $\Leftrightarrow\begin{pmatrix} 1 &0 &0 \\ 0 &1 &-1 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} 1 &0 &0 \\ 0 &1 &1 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 &0 &0 \\ 0 &1 &-1 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} 2 \\ 7 \\ 4 \end{pmatrix}$ $\Leftrightarrow\begin{pmatrix} 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 2 \\ 3 \\ 4 \end{pmatrix}$ This is another way of thinking about what is happening in the process of elimination which feels more like elementary algebra in the sense that you "do something to both sides of an equation" until you have a solution. ## 2.3.2: Recording EROs in $$[M | I ]$$ Just as we put together $$3^{-1}2^{-1}=6^{-1}$$ to get a single thing to apply to both sides of $$6x=12$$ to undo $$6$$, we should put together multiple EROs to get a single thing that undoes our matrix. To do this, augment by the identity matrix (not just a single column) and then perform Gaussian elimination. There is no need to write the EROs as systems of equations or as matrices while doing this. Example 24: Collecting EROs that undo a matrix \begin{eqnarray*} \left(\begin{array}{ccc|ccc} 0 & 1 & 1 &1 &0 &0\\ 2 & 0 & 0 &0&1&0\\ 0& 0 & 1 &0 &0 &1\\ \end{array} \right) \sim &\left(\begin{array}{ccc|ccc} 2 & 0 & 0 &0&1&0\\ 0 & 1 & 1 &1 &0 &0\\ 0& 0 & 1 &0 &0 &1\\ \end{array} \right)& \\ \sim &\left(\begin{array}{ccc|ccc} 1 & 0 & 0 &0&\frac12&0\\ 0 & 1 & 1 &1 &0 &0\\ 0& 0 & 1 &0 &0 &1\\ \end{array} \right)& \sim \left(\begin{array}{ccc|ccc} 1 & 0 & 0 &0&\frac12&0\\ 0 & 1 & 0 &1 &0 &-1\\ 0& 0 & 1 &0 &0 &1\\ \end{array} \right) \end{eqnarray*} As we changed the left slot from the matrix $$M$$ to the identity matrix, the right slot changed from the identity matrix to the matrix which undoes $$M$$. Example 25: Checking that one matrix undoes another \begin{eqnarray*} \left(\begin{array}{ccc} 0&\frac12&0\\ 1 &0 &-1\\ 0 &0 &1\\ \end{array} \right) \left(\begin{array}{ccc} 0&1&1\\ 2 &0 &0\\ 0 &0 &1\\ \end{array} \right) = \left(\begin{array}{ccc} 1 &0 &0\\ 0 &1 &0\\ 0 &0 &1\\ \end{array} \right) \, . \end{eqnarray*} If the matrices are composed in the opposite order, the result is the same. \begin{eqnarray*} \left(\begin{array}{ccc} 0&1&1\\ 2 &0 &0\\ 0 &0 &1\\ \end{array} \right) \left(\begin{array}{ccc} 0&\frac12&0\\ 1 &0 &-1\\ 0 &0 &1\\ \end{array} \right) = \left(\begin{array}{ccc} 1 &0 &0\\ 0 &1 &0\\ 0 &0 &1\\ \end{array} \right) \, .
\end{eqnarray*} In abstract generality, let $$M$$ be some matrix and, as always, let $$I$$ stand for the identity matrix. Imagine the process of performing elementary row operations to bring $$M$$ to the identity matrix. $(M | I) \sim ( E_1M| E_1)\sim (E_2E_1 M | E_2 E_1) \sim \cdots \sim (I | \cdots E_2E_1 )$ Ellipses stand for additional EROs. The result is a product of matrices that form a matrix which undoes $$M$$ $\cdots E_2 E_1 M = I$ This is only true if the RREF of $$M$$ is the identity matrix. In that case, we say $$M$$ is invertible. Much use is made of the fact that invertible matrices can be undone with EROs. To begin with, since each elementary row operation has an inverse, $M= E_1^{-1} E_2^{-1} \cdots$ while the inverse of $$M$$ is $M^{-1}=\cdots E_2 E_1$ This is symbolically verified as $M^{-1}M=\cdots E_2 E_1\, E_1^{-1} E_2^{-1} \cdots=\cdots E_2 \, E_2^{-1} \cdots = \cdots = I$ Thus, if $$M$$ is invertible then $$M$$ can be expressed as the product of EROs. (The same is true for its inverse.) This has the feel of the fundamental theorem of arithmetic (integers can be expressed as the product of primes) or the fundamental theorem of algebra (polynomials can be expressed as the product of first order polynomials); EROs are the building blocks of invertible matrices. ## 2.3.3: Three Kinds of EROs, Three Kinds of Matrices We now work toward concrete examples and applications. It is surprisingly easy to translate between EROs as descriptions of rows and as matrices. The matrices corresponding to these kinds are close in form to the identity matrix: • Row Swap: Identity matrix with two rows swapped. • Scalar Multiplication: Identity matrix with one diagonal entry not 1. • Row Sum: The identity matrix with one off-diagonal entry not 0. Example 26: Correspondences between EROs and their matrices The row swap matrix that swaps the 2nd and 4th row is the identity matrix with the 2nd and 4th row swapped: $\begin{pmatrix} 1 &0 &0 &0 &0 \\ 0 &0 &0 &1 &0 \\ 0 &0 &1 &0 &0 \\ 0 &1 &0 &0 &0 \\ 0 &0 &0 &0 &1 \end{pmatrix}$ The scalar multiplication matrix that replaces the 3rd row with 7 times the 3rd row is the identity matrix with 7 in the 3rd row instead of 1: $\begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&7&0\\ 0&0&0&1\\ \end{pmatrix}$ The row sum matrix that replaces the 4th row with the 4th row plus 9 times the 2nd row is the identity matrix with a 9 in the 4th row, 2nd column: $\begin{pmatrix} 1&0&0&0&0&0&0\\ 0&1&0&0&0&0&0\\ 0&0&1&0&0&0&0\\ 0&9&0&1&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&1&0\\ 0&0&0&0&0&0&1\\ \end{pmatrix}$ We can write an explicit factorization of a matrix into EROs by keeping track of the EROs used in getting to RREF. Example 27: Express $$M$$ from Example 24 as a product of EROs Note that in the previous example one of each of the kinds of EROs is used, in the order just given.
Elimination looked like \begin{eqnarray*} M= \left(\begin{array}{ccc} 0 & 1 & 1 \\ 2 & 0 & 0 \\\ 0& 0 & 1 \end{array} \right) \stackrel{E_1}{\sim} \left(\begin{array}{ccc} 2 & 0 & 0 \\ 0 & 1 & 1 \\ 0& 0 & 1 \end{array} \right) \stackrel{E_2}{\sim} \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0& 0 & 1 \end{array} \right) \stackrel{E_3}{\sim} \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0& 0 & 1 \end{array} \right) =I \end{eqnarray*} where the EROs matrices are \begin{eqnarray*} E_1 = \left(\begin{array}{ccc} 0 &1 &0\\ 1 &0 &0\\ 0 &0 &1\\ \end{array} \right) , E_2 = \left(\begin{array}{ccc} \frac12 &0 &0\\ 0 &1 &0\\ 0 &0 &1\\ \end{array} \right) , E_3 = \left(\begin{array}{ccc} 1 &0 &0\\ 0 &1 & -1\\ 0 &0 &1\\ \end{array} \right) \,. \end{eqnarray*} The inverse of the ERO matrices (corresponding to the description of the reverse row manipulations) \begin{eqnarray*} E_1^{-1} = \left(\begin{array}{ccc} 0 &1 &0\\ 1 &0 &0\\ 0 &0 &1\\ \end{array} \right) , E_2^{-1} = \left(\begin{array}{ccc} 2 &0 &0\\ 0 &1 &0\\ 0 &0 &1\\ \end{array} \right) , E_3^{-1} = \left(\begin{array}{ccc} 1 &0 &0\\ 0 &1 & 1\\ 0 &0 &1\\ \end{array} \right) \,. \end{eqnarray*} Multiplying these gives \begin{eqnarray*} E_1^{-1}E_2^{-1}E_3^{-1} &=& \left(\begin{array}{ccc} 0 &1 &0\\ 1 &0 &0\\ 0 &0 &1\\ \end{array}\right) \left(\begin{array}{ccc} 2 &0 &0\\ 0 &1 &0\\ 0 &0 &1\\ \end{array}\right) \left(\begin{array}{ccc} 1 &0 &0\\ 0 &1 & 1\\ 0 &0 &1\\ \end{array}\right) \\&=& \left(\begin{array}{ccc} 0 &1 &0\\ 1 &0 &0\\ 0 &0 &1\\ \end{array}\right) \left(\begin{array}{ccc} 2 &0 &0\\ 0 &1 &1\\ 0 &0 &1\\ \end{array}\right) = \left(\begin{array}{ccc} 0 &1 &1\\ 2 &0 &0\\ 0 &0 &1\\ \end{array}\right) = M. \end{eqnarray*} ## 2.3.4: $$LU$$, $$LDU$$, and $$LDPU$$ Factorizations This process of elimination can be stopped half way to obtain decompositions frequently used in large computations in sciences and engineering. The first half of the elimination process is to eliminate entries below the diagonal leaving a matrix which is called upper triangular. The ERO matrices which do this part of the elimination are lower triangular, as are their inverses. But putting together the upper triangular and lower triangular parts one obtains the so called $$LU$$ factorization. Example 28: $$LU$$ factorization \begin{eqnarray*} M= \begin{pmatrix} 2&0&-3&1\\ 0&1&2&2\\ -4&0&9&2\\ 0&-1&1&-1\\ \end{pmatrix} &\stackrel{E_1}{\sim}& \begin{pmatrix} 2&0&-3&1\\ 0&1&2&2\\ 0&0&3&4\\ 0&-1&1&-1\\ \end{pmatrix} \\ &\stackrel{E_2}{\sim}& ~~\begin{pmatrix} 2&0&-3&1\\ 0&1&2&2\\ 0&0&3&4\\ 0&0&3&1\\ \end{pmatrix} ~~\stackrel{E_3}{\sim}~ \begin{pmatrix} 2&0&-3&1\\ 0&1&2&2\\ 0&0&3&4\\ 0&0&0&-3\\ \end{pmatrix} :=U \end{eqnarray*} where the EROs and their inverses are \begin{eqnarray*} E_1= \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 2&0&1&0\\ 0&0&0&1\\ \end{pmatrix} \, ,~~~~ E_2= \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&1&0&1\\ \end{pmatrix} \, ,~~ E_3= \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&-1&1\\ \end{pmatrix} \, \\ E_1^{-1}= \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ -2&0&1&0\\ 0&0&0&1\\ \end{pmatrix} , \, E_2^{-1}= \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&-1&0&1\\ \end{pmatrix} , \, E_3^{-1}= \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&1&1\\ \end{pmatrix} \, . 
\end{eqnarray*} applying the inverses of the EROs to both sides of the equality $$U=E_3E_2E_1M$$ gives $$M=E_1^{-1}E_2^{-1}E_3^{-1}U$$ or \begin{eqnarray*} \begin{pmatrix} 2&0&-3&1\\ 0&1&2&2\\ -4&0&9&2\\ 0&-1&1&-1\\ \end{pmatrix} &=& \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ -2&0&1&0\\ 0&0&0&1\\ \end{pmatrix} \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&-1&0&1\\ \end{pmatrix} \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&1&1\\ \end{pmatrix} \begin{pmatrix} 2&0&-3&1\\ 0&1&2&2\\ 0&0&3&4\\ 0&0&0&-3\\ \end{pmatrix} \\ &=& \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ -2&0&1&0\\ 0&0&0&1\\ \end{pmatrix} \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&-1&1&1\\ \end{pmatrix} \begin{pmatrix} 2&0&-3&1\\ 0&1&2&2\\ 0&0&3&4\\ 0&0&0&-3\\ \end{pmatrix} \\ &=& \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ -2&0&1&0\\ 0&-1&1&1\\ \end{pmatrix} \begin{pmatrix} 2&0&-3&1\\ 0&1&2&2\\ 0&0&3&4\\ 0&0&0&-3\\ \end{pmatrix} \, . \end{eqnarray*} This is a lower triangular matrix times an upper triangular matrix. What if we stop at a different point in elimination? We could multiply rows so that the entries in the diagonal are 1 next. Note that the EROs that do this are diagonal. This gives a slightly different factorization. Example 29: $$LDU$$ factorization building from previous example \begin{eqnarray*} M= \begin{pmatrix} 2&0&-3&1\\ 0&1&2&2\\ -4&0&9&2\\ 0&-1&1&-1\\ \end{pmatrix} \stackrel{E_3E_2E_1}{\sim} \begin{pmatrix} 2&0&-3&1\\ 0&1&2&2\\ 0&0&3&4\\ 0&0&0&-3\\ \end{pmatrix} \stackrel{E_4}{\sim} \begin{pmatrix} 1&0&\frac{-3}{2}&\frac{1}{2}\\ 0&1&2&2\\ 0&0&3&4\\ 0&0&0&-3\\ \end{pmatrix} \\ \stackrel{E_5}{\sim} \begin{pmatrix} 1&0&\frac{-3}{2}&\frac{1}{2}\\ 0&1&2&2\\ 0&0&1&\frac43\\ 0&0&0&-3\\ \end{pmatrix} \stackrel{E_6}{\sim} \begin{pmatrix} 1&0&\frac{-3}{2}&\frac{1}{2}\\ 0&1&2&2\\ 0&0&1&\frac43\\ 0&0&0&1\\ \end{pmatrix} =:U \end{eqnarray*} The corresponding elementary matrices are \begin{eqnarray*}E_4= \begin{pmatrix} \frac12&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ \end{pmatrix} , \, ~~ E_5= \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&\frac13&0\\ 0&0&0&1\\ \end{pmatrix} , \, ~~ E_6= \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&-\frac13\\ \end{pmatrix} , \, \\ E_4^{-1}= \begin{pmatrix} 2&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ \end{pmatrix} , \, E_5^{-1}= \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&3&0\\ 0&0&0&1\\ \end{pmatrix} , \, E_6^{-1}= \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&-3\\ \end{pmatrix} , \, \end{eqnarray*} The equation $$U=E_6E_5E_4E_3E_2E_1 M$$ can be rearranged into $M=(E_1^{-1}E_2^{-1}E_3^{-1})(E_4^{-1}E_5^{-1}E_6^{-1})U.$ We calculated the product of the first three factors in the previous example; it was named $$L$$ there, and we will reuse that name here. The product of the next three factors is diagonal and we will name it $$D$$. The last factor we named $$U$$ (the name means something different in this example than the last example.) 
The $$LDU$$ factorization of our matrix is \begin{eqnarray*} \begin{pmatrix} 2&0&-3&1\\ 0&1&2&2\\ -4&0&9&2\\ 0&-1&1&-1\\ \end{pmatrix} = \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ -2&0&1&0\\ 0&-1&1&1\\ \end{pmatrix} \begin{pmatrix} 2&0&0&0\\ 0&1&0&0\\ 0&0&3&0\\ 0&0&0&-3\\ \end{pmatrix} \begin{pmatrix} 1&0&\frac{-3}{2}&\frac{1}{2}\\ 0&1&2&2\\ 0&0&1&\frac43\\ 0&0&0&1\\ \end{pmatrix} \end{eqnarray*} The $$LDU$$ factorization of a matrix is a factorization into blocks of EROs of various types: $$L$$ is the product of the inverses of EROs which eliminate below the diagonal by row addition, $$D$$ the product of inverses of EROs which set the diagonal elements to 1 by row multiplication, and $$U$$ is the product of inverses of EROs which eliminate above the diagonal by row addition. You may notice that one of the three kinds of row operation is missing from this story. Row exchange may very well be necessary to obtain RREF. Indeed, so far in this chapter we have been working under the tacit assumption that $$M$$ can be brought to the identity by just row multiplication and row addition. If row exchange is necessary, the resulting factorization is $$LDPU$$ where $$P$$ is the product of inverses of EROs that perform row exchange. Example 30: $$LDPU$$ factorization, building from previous examples \begin{eqnarray*} M= \begin{pmatrix} 0&1&2&2\\ 2&0&-3&1\\ -4&0&9&2\\ 0&-1&1&-1\\ \end{pmatrix} \stackrel{E_7}{\sim} \begin{pmatrix} 2&0&-3&1\\ 0&1&2&2\\ -4&0&9&2\\ 0&-1&1&-1\\ \end{pmatrix} \stackrel{E_6E_5E_4E_3E_2E_1}{\sim} U \end{eqnarray*} \begin{eqnarray*} E_7= \begin{pmatrix} 0&1&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 0&0&0&1\\ \end{pmatrix} =E_7^{-1} \end{eqnarray*} \begin{eqnarray*} M=(E_1^{-1}E_2^{-1}E_3^{-1})(E_4^{-1}E_5^{-1}E_6^{-1}) (E_7^{-1}) U=LDPU\\ \end{eqnarray*} \begin{eqnarray*} \begin{pmatrix} 0&1&2&2\\ 2&0&-3&1\\ -4&0&9&2\\ 0&-1&1&-1\\ \end{pmatrix} = \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ -2&0&1&0\\ 0&-1&1&1\\ \end{pmatrix} \begin{pmatrix} 2&0&0&0\\ 0&1&0&0\\ 0&0&3&0\\ 0&0&0&-3\\ \end{pmatrix} \begin{pmatrix} 0&1&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 0&0&0&1\\ \end{pmatrix} \begin{pmatrix} 1&0&\frac{-3}{2}&\frac{1}{2}\\ 0&1&2&2\\ 0&0&1&\frac43\\ 0&0&0&1\\ \end{pmatrix} \end{eqnarray*}
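As a numerical sanity check (an added sketch, not part of the original text), the hand-computed factors of Examples 28 and 29 can be multiplied back together in NumPy; with $$D=\mathrm{diag}(2,1,3,-3)$$ both the $$LU$$ and the $$LDU$$ products reproduce $$M$$.

```python
# Sketch: check the LU factorization of Example 28 and the LDU factorization of Example 29.
import numpy as np

M = np.array([[ 2., 0., -3.,  1.],
              [ 0., 1.,  2.,  2.],
              [-4., 0.,  9.,  2.],
              [ 0.,-1.,  1., -1.]])

L = np.array([[ 1., 0., 0., 0.],
              [ 0., 1., 0., 0.],
              [-2., 0., 1., 0.],
              [ 0.,-1., 1., 1.]])          # product of the inverse lower-triangular EROs

U_lu = np.array([[2., 0., -3.,  1.],
                 [0., 1.,  2.,  2.],
                 [0., 0.,  3.,  4.],
                 [0., 0.,  0., -3.]])      # upper triangular factor from Example 28

D = np.diag([2., 1., 3., -3.])             # diagonal factor from Example 29

U_ldu = np.array([[1., 0., -1.5, 0.5],
                  [0., 1.,  2.,  2. ],
                  [0., 0.,  1.,  4./3.],
                  [0., 0.,  0.,  1. ]])    # unit upper triangular factor from Example 29

print(np.allclose(L @ U_lu, M))            # True: the LU factorization
print(np.allclose(L @ D @ U_ldu, M))       # True: the LDU factorization
```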
2021-06-24 06:53:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9617531299591064, "perplexity": 517.9348420105426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488551052.94/warc/CC-MAIN-20210624045834-20210624075834-00128.warc.gz"}
https://worldbuilding.stackexchange.com/questions/42044/why-would-a-government-passively-encourage-its-people-to-not-obtain-a-formal-edu
# Why would a government passively encourage its people to not obtain a formal education? I have a city in my world, with about the land area of Singapore and about 1/6 of its population. In my story, my government needs to passively encourage its population to not go to school. By passively, I mean that by attending any educational facility, you have to pay an entrance fee, taxes, and a fine. This monetary amount is equivalent to about 2000 USD per student per year of education. In all, a 12-year education would cost about $30000 ($24000 from fees and $6000 for supplies). Because the government has imposed such a fine, much of the population receives only homeschooling or informal education from their parents. As such, many parents employ the following plan: one of the parents or an older sibling goes to school to receive an education, then comes back home and educates the rest of the family. For what reasons would a government fine the educated? - • See this answer. "Strong showings are entirely attributable to huge leads among voters without a college degree" – JDługosz May 22 '16 at 19:51 • Well, how much is $2000 relative to income? I would pay that to send my kid to school. – Azor Ahai May 23 '16 at 2:41 • Is not this close to what we have now in the real world? Are not people in many countries paying X USD per year to go to university? – Salvador Dali May 23 '16 at 3:47 • According to your math, you have a city where someone provides education for free and then the government comes and fines people for attending free education. That makes little sense. You likely get more than $30000 in cost for the 12-year education if the government simply wouldn't finance education. – Christian May 23 '16 at 8:32 • All the examples you gave are for actively encouraging people to not obtain an education. Here's a definition of passive: "accepting or allowing what happens or what others do, without active response or resistance.". Examples of this would be not enforcing a curriculum, not funding education, not advertising it, etc. – JBentley May 23 '16 at 8:45 ## 20 Answers ## Country leaders want to have ultimate power What is easier to control? An educated group of people, or an uneducated crowd which can be bribed with "bread and games"? I know it is a long-term plan, but uneducated people tend to be easy to manipulate. Easy to entertain, easy to control. Make state jobs require little to no education. Supplement paid education with a free, state-controlled "education programme" on TV which will basically brainwash the masses into believing the government is the second best thing after sliced bread. In 50 years, you have everyone supporting your government.
But the thing I don't understand is why the government would need to fine people, when education already has a pretty significant cost. Is education free in that made world? If so, why? Is the church offering free education? Are the rebels offering free organized education? There must be a reason education is free. – Stephan Branczyk May 23 '16 at 7:16 • An educated group of people is much easy to control if the government controls the lesson plan. The history of public education is about teaching students thought patterns that are desireable for the government. I'm not sure that any nation state succeeded in having it's population believe in it without eduction. – Christian May 23 '16 at 8:29 • @Christian "I'm not sure that any nation state succeeded in having it's population believe in it without eduction" In Britain, children were required to attend primary education only after the Elementary Education Act 1880. And places like Imperial Russia were mostly illiterate and unschooled until the emergence of the Soviet Union. Nonetheless most Russian peasants believed the Tsar to be a living God. Britain introduced schooling in part to help the newly enfranchised working class decide who to vote for. Before 1880 Brits didn't believe in their country or king? And Russians the Tsar? – inappropriateCode May 23 '16 at 10:02 • "...which can be bribed by 'bread and games'?" FWIW, there's a specific idiom in English for this: Bread and circuses. (The Wikipedia article says "or bread and games" but I've never heard a native English speaker say that.) – T.J. Crowder May 24 '16 at 12:44 The leaders of the country want to promote certain family or religious values. In this scenario, the decision-makers in government believe that large-scale education (schools) is counter-productive, and education should be happening in the home or in informal, local co-ops (where like-minded parents with similar-age kids cooperate to share the educational duties and benefits). Why would they believe that? A few possible reasons (choose all that apply): • Out-sourcing the education of your children is Just Wrong; education is a core parental function. • It's not a good use of resources to build, staff, and maintain schools; we need that money for something else. So at the very least, if you insist on sending your kids out to school, we're going to charge you enough to pay for those costs and then some. • Education isn't a "sit in a classroom for 8 hours a day" thing but should be wholly integrated into household life. • Individual or small-group tutoring provides superior education and the government actually values education. People today use private schools or home-school even though free public schools are available. To find further motivation for your government, look to the reasons those families choose and adapt them. • I can really see this, specially the first point. I hate that public schools now teach different values then I want to teach my kids. Homeschooling looks very attractive. 3 & 4 are very valid too. – coteyr May 23 '16 at 15:46 • Love this answer. I genuinely didn't expect a positive perspective on why they might choose to penalise classroom education—though now I think about it, one would expect even oppressive regimes to make a token attempt to offer one. – Jordan Gray May 25 '16 at 13:42 ### They want to keep education valuable by keeping it expensive Here's a crazy idea: We know that people with money are better off than people without money. 
Therefore, let's give a million dollars to everyone, and end the woes of poverty overnight! Anyone with even the most basic understanding of economics can take a single look at that and say "that's crazy, because inflation". Money is used as a proxy for labor and necessary resources, and putting more money into the system doesn't actually create more labor and necessary resources, so something has to give and it ends up devaluing the money. What does this have to do with the question? Well, here's another crazy idea: We know that people with education get better-paying jobs than people without education. Therefore, let's give a degree to everyone, and end the woes of poverty overnight in 4-6 years! With the context above, it should be immediately obvious why this will not work. Education doesn't get you a good job; it simply helps you distinguish yourself from less-educated applicants. It can be thought of as somewhat similar to money in this sense, and putting more degrees into the system doesn't create more good jobs; it just devalues the degrees. (See: US education policy over the last few decades, leading to some places where they literally require a Master's degree to be a pizza delivery guy, because that's how high you have to set the filter.) Which means you end up with a bunch of people trying to pay off a Master's degree worth of student loans on pizza delivery guy wages! (See: current US student debt crisis.) A government that's aware of the problems of inflation uses monetary policy to try to keep inflation down. A government that's aware of the problems of eduflation (yes, it's a real word, and it's a shame more people don't know about it) would use a restrictive educational policy to try to keep eduflation down. • Re: "some places where they literally require a Master's degree to be a pizza delivery guy, because that's how high you have to set the filter": Can you cite a source for this? I find it hard to believe: there may be places with enough people with master's degrees competing for pizza delivery jobs, but I don't see why pizza shops would prefer them over people with mere bachelor's degrees (especially to the point of categorically refusing to consider any of the latter). – ruakh May 23 '16 at 2:43 • So you are advocating ignorance... to make what education you have more valuable. Well, it's hard to imagine anything more morally wrong. The major difference between knowledge and money is that knowledge gets more when you share it. – Burki May 23 '16 at 7:13 • @Burki I'm not "advocating ignorance" at all; I'm discussing the real, counterintuitive consequences of short-sighted policies in pushing formal education on the masses. – Mason Wheeler May 23 '16 at 8:40 • One area where the analogy breaks down is that education (depending on what you study) has an inherent value, potentially leading to more jobs rather than just higher competition for existing jobs – Jezzamon May 23 '16 at 8:41 • @Jezzamon The key is "depending on what you study". A country full of engineering and science graduates is highly likely to have inherent value. A country full of philosophy and English lit graduates, less so. A lot of people (including the UK government, sadly) are under the illusion that uni qualifications will show whether they're worth employing for a completely unrelated job, because, look, they managed to do 3 years extra schooling. So the guy who wrote essays on fixing cars looks more employable than the guy who spent 3 years getting his hands dirty. 
:( – Graham May 23 '16 at 12:32 The government may want to discourage education for ideological or economic reasons. Ideologically speaking we've seen fairly nasty regimes go after education before, from Khmer Rouge's genocidal far-left anti-western philosophy, to Boko Haram's religious totalitarianism. In the former case they believed that formal education helped only to create inequality and was a deliberate ploy by their bourgeoisies enemies, and as such murdered anyone who was educated, urbanite, or even had glasses (because that apparently means they were definitely literate). Khmer Rouge's end game was to have a society where everyone was equal because everyone was a subsistence farmer. Similarly Boko Haram, which means western education is forbidden, believe that education which is not exclusively Islamic is used to discredit and attack Islam (they honestly believe that rain does not result from precipitation and water cycles but only and absolutely by the will of god). You may have a less extreme version of something like this at play. Economically speaking the government may be trying to follow a "race to the bottom"; in which they have a pretty extreme view of the free market, in which the people's living standards have to drop (the masses of course, not the elite who order this*) in order for wages to be competitive, and also will reduce the cost of public services which further means the state can lower taxation and in theory attract more investment (lower corporate and/or income tax plus lower wages for workers). In practice as mentioned prior that's simply bad economics, but like any belief system, capitalism has its extremists and fundamentalists. You can also look to the early history of public education at the turn of the industrial revolution. Britain for instance formalised public education for children because on the one hand business interests felt that the workers needed numeracy and literacy to make use of the more advanced technology they were creating, and also because the government was enfranchising more working class voters who needed to be able to read and thus reason who they should vote for. Formal education has also been used as defensive nationalism; to promote and preserve specific languages and thus group identity. So if your society didn't need skilled workers, or didn't have voters, or didn't have a unique ethnic group (or was seeing a major decline in all three areas), people may simply not feel there is a need for public education given the cost. • Side note: the royal families of Europe often didn't receive a formal education until recently because they were presumed to have been born with/divinely inspired with the skills needed to lead. That may be another motivation, that somehow people don't believe education works, or are against social mobility (like feudalism), or simply favour some sort of apprenticeship system instead. • Upvoted. These are excellent historical examples of exactly what you suggest. As an example of your "race to the bottom", Kansas is currently following a policy called the "March to 0" - meaning 0% income taxes. – indigochild May 23 '16 at 16:10 • "in which they have a pretty extreme view of the free market", "In practice as mentioned prior that's simply bad economics, but like any belief system, capitalism has its extremists and fundamentalists." - I guess extream free market would be anarchocapitalism. 
This looks like a straw man capitalism with goverment running for the Evulz (or someone who might read discussion between a-caps and others and decided to turn some points up to 11). [I just object to use of 'free market' here - not capitalism]. – Maciej Piechotka May 23 '16 at 18:34 • @MaciejPiechotka I'm curious now, what distinction do you draw between "free market" and "capitalism"? – inappropriateCode May 23 '16 at 19:31 • @inappropriateCode usually capitalism is a very all compassing and 'politically loaded' word and depending on person speak it may mean free market or rule of people with money etc. depending on person speaking. Free market is a narrow term which means that markets run without interference and are based on volontary exchanges. One may discuss particular meaning of the words here but fining people for education is clearly not 'lack of interference' - it is almost like claiming that progressive socialists want to tax poverty to eliminate it (some elements are there but turned on their head]. – Maciej Piechotka May 23 '16 at 19:49 • @MaciejPiechotka Ah! Okay. Well in my view free market and capitalism are equitable terms, as the point in either case is to allow as you say, free exchange between individuals. The extent and arrangement of policy, and how one thinks best to achieve capitalism/free market varies. So I don't see the distinction as sharply. – inappropriateCode May 23 '16 at 20:13 I'm new here, and I'm just gonna write this for fun. Consider this, in the near future, where humans have advanced much in the field of computing power and data collection. Scientists can run simulations with artificial intelligence that would enable machines to be trained in undertaking tasks that are considered too complicated to do for our current computing standards. In such a world, most jobs becomes depreciated, because machines can perform pretty much all the tasks. People can afford to live without jobs (or money), one can just go pickup their daily needs (produced by machines) such as food or any other product from some local warehouse. It it then much better if you have a majority of less educated people. For starters, it would be important to keep close control over the boundaries of AI development. Someone has to to make sure AI doesn't evolve in ways which would harm our existence. Those people needs to be highly educated and responsible (mostly responsible). And you just need a small number of such people (possibly monitored closely) so their actions are easily controlled. This would be opposed to where if most people have high education, then you'd have no control over how AI might evolve. Some high school kid could be running a simulated virtual organism on his home computer, feeding it random data from the internet (or whatever more advance version we'd have in the future), without proper limitation on how it might evolve. Before long, people would be creating dangerous AI programs that would threaten our existence. It would be better then, if say only 1% of people receive high enough education that would allow them to evolve and control the computers software. And the other 99% are only smart enough to happily play candy crush or posting selfies on facebook. The other reason would be social unrest. When the computers take over our jobs. You'd be left with a lot of bored people. As it turns out, higher educated people are much harder to satisfy. 
Ideally, people should be holidaying forever and spending more time to explore arts and music or their other none dangerous hobbies. But this only happens if they are dumb. e.g. people in the middle ages would be happy if all they did was partying and having sex forever, they'd even be happy if they worked as a blacksmith at the same place forever. But an educated person would be bored out of their mind. They'd be seeking ways for intellectual stimulation (such as the challenge of making killer robots, protesting and starting revolutions). Lastly, why would you even need to educate people, 99% of the jobs would be taken care of, and it would be a waste of time and resources to train people when it would be better for everyone to be stupid. Sure, social values would probably degrade pretty soon, but it'd be a better alternative than total self destruction. This might actually happen, think about it and have a nice day :). Added thoughts: Interesting point from @inappropriateCode with the human urge to teach and learn. One would think it'd surely be hard to enforce restrictions on education, especially in an advanced future society. But maybe this isn't as far fetched as it may seem. It might actually happen naturally to some extend. Just to clarify, the machines in my proposed situation would be smart enough to carry out structured jobs efficiently. Say to design and 3D print cars, grow crops to the best yield. But maybe not efficient enough to run an program that trains an AI to "freely reason", in any manageable time. The main motivation for a government who wants to dumb the population down would be to reduce the chances of someone from creating such AI programs (in which case it could force a man vs machines intellectual arms race which we'll probably lose because the human brain evolves slower). It might take a few generations from now to reach that point. In which time people are going to get less dependent on having a job to make a living. At some point (even now, in some countries) jobs becomes optional, and a percentage of the population would choose to not get a job or an education, as the "jobless living standards" improve, more and more people would fall into this category. And their kids would grow up in an education optional lifestyle, with no particular reason to break out of this bubble. After some time, you'd be left with the fairly intellectually minded people, that would still choose to get an education. Now the government decides to give the dumbing process a nudge, it impose a high education fee. Remember that pretty much all the young people at this point would be born in to a job and education optional world, so unless if you're very motivated, you probably wont fight against it. Maybe most people would be redirecting their attention to some other none formal education required activity, such as arts and craft, to fulfill their intellectual needs. We're now left with the really motivated people, who still wants to get formally educated, but even if they can pay the fees, they'd have to study really hard, with less teaching resources, and the seriously advance shit they'd have to learn. Their reward for finishing their studies would be to monitor the world and keep the machines in check (that'd pretty much be the only jobs available), while everyone else gets to party none stop or do whatever the hell they feel like. The ones that can't afford to study but still wants to can do home learning and or form interest groups (it's their right, why not?). 
But if the government ban free access to teaching resources and compiling software for computing. You'd have seriously hard time trying to design code / hack into the system from scratch. Remember technology would be more advance in the future which increases the difficulty. While you're trying hack your way into AI advancement, you'd surely standout from everyone else and the government can put a stop to it before you become a problem. In that sense, there's no real need for the government to ban learning, plus you'd be fine if you don't touch AI, just learn accounting instead, that's like....fun too.... And if you've received a formal education and decides to not get a job after, you'd still have the restricted access to computing resources and you'd be watched for life probably. All manageable problems. So there you go, that's how humanity would devolve in an advanced world, don't take it seriously..... but it might happen..... • Welcome to worldbuilding lzl! I see this description orientet towards the current development of the world as a perfectly valid answer to the question. – Hohmannfan May 23 '16 at 10:13 • Assuming the machines set us free (hopefully!) you perhaps underestimate how many bored people would choose to teach and study because it makes them happy. We've got to a point where most people don't work in the jobs they do because they need to per se, but because it keeps them busy and social as they choose their job. And if the machines do provide for the material needs of the society ("luxury communism") then why would education cost if most people's jobs are vocational rather than essential? The most expensive private schools today are so because people know they get you a good job. – inappropriateCode May 23 '16 at 10:16 • Yes, what you're saying is right, but in the question the government is actively trying to discourage it. The fee would be for discouragement. The aim is to stop those bored people from teaching and studying. Maybe i should also add that there would be restrictions to free access of information. – lzl May 23 '16 at 10:32 • I should also add that because hardly anyone worked and most stuff are free, then, money would be worth a lot more than their current value (there would still be trading, but not to the same scale as now). so 2000 USD would probably be like 2 mil (uneducated guess). That would be a very effective discouragement tool. in fact, the only people that can afford it would probably be those 1% ters, thus they'd probably keep their status for generations. – lzl May 23 '16 at 10:40 • hahaha, I don't think the casual learners would be a problem tho, the world's on the last step before machines become free for all killers, and with restricted computing resources, it might take more than some math to reach it. read my added thoughts. don't take it too seriously though, it's just one set of possibilities. – lzl May 23 '16 at 16:22 They believe in a free market for education. Public schools in industrialized nations can easily cost the taxpayer USD 10000 per student per year. The leaders might argue that a good education is an investment in the future of the child, so the parents had better take a credit and pay at least a small part of the cost. This is short-sighted, since better educated kids pay more taxes later in life, but with the right propaganda of "look out for yourself" and "the skilled ones will rise to the top" such a policy might be popular with the voters. • It's not the government's job to collect taxes. 
And "better educated" has very little correlation with "long education" - on a market, there's a strong incentive to make education as efficient as possible, which includes making it short. In our so-called "basic schools", it takes 9 years to go through an education that only teaches a tiny bit of essentials (basic literacy, counting) and a huge amount of context-less data that will never stick to the students. The truth is, if you make prices free (as in freedom), they will give you information - is it worth it to invest more in learning? – Luaan May 23 '16 at 9:02 • This is very important. Our basic schools could teach what they currently teach in about 3 years. The rest is just fluff. There has been a lot of push recently to change curriculum to a more "teaching how to learn" then "teaching facts" type of education, just because of this. – coteyr May 23 '16 at 15:53 • @Luaan, I hope you realize that I'm answering in the context of a fictional world for a story or game. In the real world, I firmly believe that it is the job of the government to collect enough taxes so that all children can get a decent education, no matter how rich or poor their parents are. – o.m. May 23 '16 at 20:25 • @o.m. Well, that's quite obvious from your second paragraph, so don't worry :) – Luaan May 23 '16 at 22:28 If the educational facilities are publicly funded, it sounds like the government is just trying to recoup the cost of the education. Good reasons keeping the price of education high: • Prevent waste in spending by people getting an education when they don't need it. If education is "free", meaning it is paid for through everyone's taxes, some people may get an education when it doesn't make good economic sense to do so. • Prevent waste in education efforts. In America K-12 education is free for everyone. Since no one has to pay for it, a lot of people fail to appreciate it and don't make the most of the opportunity. High schools in particular become temporary housing for difficult students rather than educational facilities. Good reasons for encouraging informal/home schooling: • Encourage the strength of nuclear families and local communities. Home schooling encourages people of communities to voluntarily work together for mutual benefit. In order to succeed in their efforts they need to get to know their neighbors and work with them. Public education encourages people to leave everything up to the state and lose their ability to form and maintain community organizations. • Encourage a diversity of ideas. Formal education requires formal standards. This requires that everyone learn pretty much the same ideas. It can even cause the creation of a learned class that actively discriminates against people with different ideas. Encouraging less formal education prevents this kind of echo chamber mono-culture. So-so reasons for fining the more highly educated: • Redistribution of resources from rich to poor. Many nations today tax the rich more than the poor. This would be similar because the educated would be expected to earn more than the uneducated. • Encourage the educated to make the most of their education. "From each according to his abilities; to each according to his needs". An educated person obviously has more ability and should contribute more to society. What if someone with medical degree decided he would rather run a dairy farm? He might like farming, but sick people could die! Bad reasons: • Prevent the rise of a middle class by keeping everyone poor. 
If everyone is poor, no one has time to agitate for rights and freedoms because they're too busy trying to earn enough to survive. • Prevent people from learning too much about ideas like individual freedoms and democracy. Keep them from learning about Milton Friedman, John Locke, George Mason, and Friedrich Hayek. People may be willing to pay high for education that has immediate economic payoffs, but they will be far less willing to pay for education that benefits the community especially when they don't know what those benefits may be before they get the education. • "Prevent waste in education efforts. In America K-12 education is free for everyone. Since no one has to pay for it, a lot of people fail to appreciate it..." Very true, this also leads to schools being forced to teach less, as they don't have enough to teach properly. I.e. "Why should I pay that much in taxes for that level of education, hell could teach that in a few months. I don't see why it takes them 12 years." – coteyr May 23 '16 at 15:56 • "High schools in particular become temporary housing for difficult students..." This is true also at lower "grades". Most of the time primary education is just fancy day care. There are may reasons for this "problem" but the problem certainly produces waste. Reducing that waste, specially in time, could be extremely valuable. – coteyr May 23 '16 at 15:59 Many people have put up the negative view of Education discouragement. Here is utopian- Education produces people with worse abilities than they started with- philosophers who pontificate indefinitely, artists who produce horrid artworks, engineers who produce impractical designs. The world is currently endowed with a lot of awful university trained scientists- people who spot the most flawed scientific studies, there is a lot art and soft science graduates with the most flawed thinking ever. If those people had simply started normal jobs without an university or other formal education, they could have become highly competent and that's what the government really- not useless parasites who refuse work and do bad jobs. Just make your world have the worst institutions as being held up the best ones. Reminds me of an old Dutch proverb: "You keep them poor, I'll keep them stupid, said the priest to the Baron". Knowledge and wealth equate to power. The powers that be (church and government) don't like competition. So best keep them poor and uneducated so they can't challenge the powers in control. And by making education expensive you achieve both at the same time. The poor can't get education. And the educated ones will be poorer afterwards. Because other forms of education work better. A lot of answers seem to be slanted towards "education good, no education bad", but that may not be true, even here on earth (for some societies). As humans we buy into the fact that we will spend the first 1/3 of our lives gaining education, the next 1/3 working and learning a little, then the last 1/3 teaching in some form. What if we lived longer or shorter lives. Would a structured education benefit us as well? What if our government structure was different? What if our values were different? A lot of the other answers assume that if your uneducated your stupid, and easy to manipulate and control. What if a gerontocracy was in place? Then the education of the normal citizen would not matter much. In fact, we have this problem today with (direct) democracies. 
In order for a voter to make an informed decision they have to have an education in the areas that they are making decisions in. Our society is currently "specializing", and this means that an educated voter, is really only educated in one or two topics and the rest is all opinion, or at most, "That makes sense". Lets take that "theory" to an extreme. First the country was run by some form of Democracy. As time went on, problems arose because there educated citizens were only educated in their field of expertise. It was not possible to educate a person in everything. Bad things happened because of this, so the government changed, and became a form of gerontocracy. Things were turbulent for a time, but once the turmoil was over, the people realized that this was better. Ideas were adapted slower, but that gave time for people to adjust and created a more peaceful society. At the same time, specialization continued. Now, with the government "problem" not really requiring everyone to know everything, it became far more valuable to train people in the field they wanted. There was no need for "core skills" they don't exist any more, and society doesn't need them to keep going. Instead value was placed in a guild like system. Apprentices, Juniors, Masters, and Experts. You gained knowledge in your field as you spent time in it. Formal education is being phased out, in favor of learning what you need. Not because someone is trying to keep you stupid, but the knowledge you need as a mathematics wiz doesn't include history. Take a look at today, and look at the negative sides of formal education. Cost is one factor, but so is the indoctrination. There is also the grand idea that one person is smarter then another, because of formal education. While education servers a need in our society right now, it's not a need that has always been there. In fact, if you ditch the "I'm better then you cause I have a degree" feeling that exists on to of education today, what it really is, is core classes that everyone "should have" like basic language, science, and math skills, and then some specialized skills. If we remove the "should have" by having a society where you don't need those basic skills, or those basic skills are acquired differently, there is almost no reason for formal education. In other words if your society focused more on "I'm happy" and then imparted specific skills via another system, then the idea of formal education is much less appealing. In short, one could make the argument that education is a "want to have so I can be better then my fellow man". We do this today. We look at people without education and consider them "less then" (not as an individual but as a society). If instead your society learned to value contentment and didn't place a negative on their fellow man for not knowing a thing, then the need for a formal education (as you describe it) goes away. A person can be very happy, not knowing a thing. If your society helped promote this kind of happiness, then your in good shape. Formal education exists to help people "re-train" or to keep them from falling into "a groove". You can only be a farmer cause that's what your parents were. You can't be a scientist. Then you could go to formal school, learn the basics of scientists and find an apprenticeship. Of course it's more expensive now, and there is a need to keep people "doing what they have always done" or that field will suffer when it becomes un-popular (but before it's phased out). 
## The country needs its labor force to tackle a problem that (it believes) can be better solved with sheer manpower than with education and ingenuity For example, the country could be facing hunger and require its citizens to become farmers, believing that it will increase food production. You could look at an isolated country like North Korea as an example: Only talented people receive high education, whereas common folk are made to become soldiers and farmers. # The country is in debt and/or needs money. Perhaps said country is in a state of war with the neighboring nation and needs more money for its military expenditures. Perhaps the country needs to repay the debt that it owes the United Nations 20 years ago. There are many reasons why a government would fine the educated. One, the educated are the rich. Not only are the rich the most likely to afford education, the educated are the most likely to become rich. This pattern continues on in an endless loop. Second, if the country is in a desperate state of poverty, pollution, disease, and starvation, the government may be trying to utilize the wealth of the upper class to help repair the country's economy and infrastructure. (Or perhaps the government is just a bunch of greedy, corrupt bastards :)) • While answering own questions is nowhere discouraged, I just want to point out that if own answer is the first one to be seen it might demotivate people to post their answers – Pavel Janicek May 22 '16 at 19:20 • I sort of agree with Pavel Janicek's comment. The best uses of self-answered questions that I have seen were when the author wanted to present an essay-style answer as a resource for future worldbuilders, for instance CAgrippa's question and answer here: worldbuilding.stackexchange.com/questions/824/… (And that got closed, despite its high number of votes.) This self-answer by fi12 lacks that long term value. But I must admit that "Governments will tax anything they can get away with" was also my first thought. – Lostinfrance May 23 '16 at 9:31 Scenario: For decades, formal education was touted as the best chance for a person to make a better life for themselves. No one is sure who started the mantra, but everyone is chanting, "Get your degree, no matter the cost"? The government got behind the schools and started handing out low interest loans to make it happen and went so far as to subsidize the interest in some cases. The schools increased their costs and education costs tripled. The workforce abandoned skilled labor training in favor formal education. The result was catastrophic. The entire population suffered from debt overload as the jobs for the educated dried up. The skill labor population was decimated and basic carpentry and metal working skills were lost. Loan defaults were occurring faster than the planets birth rate and the government economy crumbled. Enlightened economists and people with plain ol' common sense figured out the problem and determined to fix it, simultaneously boosting the economy. Formal education would be the least of people's financial concerns as general education took a back seat to skills education. The populace would be fined into oblivion before another college grad was produced to compete for a job at McDonalds. Incentives and jobs were given to those who entered into cheap, skills centered training, that got them working and earning a living in months, not years. A corrupt / incompetent stereotypical-third-world-style (maybe military) government/dictatorship. 
There is no political will for the government to fund free education for all as it's not part of their priorities. They keep their supporters well paid and happy so they can afford the exorbitant fees. The more common people who can't, they simply don't care about. The government recoups the cost (or perhaps even makes a profit) from education so they don't want to spend resources and threaten this situation, and either it has never occurred to them, or they don't care, that free or cheap education for all will have long term benefits. The economy doesn't simply collapse perhaps because the country may be rich in a natural resource (e.g. oil) that they sell to more powerful countries in exchange for political, financial and technical support. Meanwhile, the lower class learns just about enough to be able to do the menial jobs of their parents to survive, or perhaps apprenticeships are common. There are a lot of excellent answers here. I would like to add a political economics explanation. In the real world, education policy is tied to the kind of government in place (what political scientists call regime). Different kinds of governments have different incentives for education. Dictatorships (in this case - any government where leaders are not decided by free and fair elections) typically minimize education. There are a few ways to look at this: 1. Dictatorships only require the support of key groups (for example, a military cadre or core economic elite). They government will be pressed to provide for their education, but not the public's. They might also need to provide rudimentary technical education, but won't if they can avoid it. 2. Dictatorships don't want an educated public, because those people will demand better standards of living, more freedom, etc. The government might be more tolerant of limited technical education, but even those students will make demand better standards of living (by virtue of their better career choices). 3. Dictatorships don't want to spend money on education, because every dollar spent on the public takes away from the wealth of elites. 4. Dictatorships typically have concerns - such as suppressing the masses, fending off internal rivals, and creating symbols of legitimacy - which are more pressing than their nation's education or economy. 5. Dictatorships are seldom worried about the future. Education pays off in the long run, but not while the current dictator is still alive. 6. Elites and dictators may have never lived like the masses do (perhaps due to extreme wealth inequality, or they remain socially segregated). They may not understand the masses or their problems, or have an inkling they want (or need) to be educated. I phrased all of these as being about "dictatorships", but you could repackage any of them to be about nearly any fictional government. For example, even a democracy might need only the support of a certain segment of the public. Only the needs of that segment need to be met, which could mean that only that segment is educated. I will update this with journal citations this evening, to improve the answer. This isn't really an answer to why the government would fine people, especially such a low amount.$2k a year for schooling is not unheard of. Why fine the people when there are easier ways to keep people from getting educated? 
They could either close most or all of the schools, with long queues to get into the ones that remain Or they could simply make the schools crappy, with underpaid unmotivated teachers, overcrowding, bad conditions, access to drugs, and tons of standardized tests that take all the fun out of learning? It's been working for the Detroit school system for years, so I don't know why it wouldn't work for your government. But why waste such a great opportunity? Think about it, if you had year round schooling, 8 hours a day, you'd have the children as a captive audience longer than the parents would. It's a perfect opportunity to indoctrinate them with the values and ideas that the government wants to foster. In fact make the schools such a great place to be that the students will resent having to go home where their parents can drone on and on about old fashioned values. Having an ignorant population where parents get to decide what the children believe is really shortsighted for a totalitarian government. Ok. Heres the problem: You are trying to get the people of your city to do something strange. Dont do that. Why would you even want that? Instead, work with your imaginary people. Start from the basics, and go with that. If they think education is cool and you dont, go with the education instead. Dont try to think ways to do something silly. Try to think ways to do something smart. So, why would they want people to stay uneducated? Dont know? Well, maybe they shouldnt want to keep people uneducated. Maybe they DO want it, but cant. Its not available? Its only available from abroad, and that makes it very costly. Or maybe they want people to get educated, but all foreign educators also bring new ideas they dont want to spread. So they limit the education to only those most loyal. If you start with hard rules, with a square hole and a round peg, and you cant make them work even with a hammer, back down a bit. Maybe it doesnt have to be just like that. Maybe you need only one or the other. Soften the demands a bit. Maybe the leaders want people educated. Maybe they use different methods. Maybe its not the leaders, but EVERYONE who thinks education is bad, like a religious community that rejects some or all science. Maybe education is not available, or everyone needs to work just to stay alive. It not much more than hundred years ago, that even in the west many people thought that education makes people lazy and they cannot afford to allow children in schools, and instead they had to work the fields to grow food. There are many options, if you can just relax your demands a little bit. What a coincidence. I just saw this question, while this article was open in another tab. Titled "A Turkish rector says “better to keep masses uneducated”", it describes a Turkish university rector who claimed the uneducated people will be the backbone of Turkey’s future and has said that “educated people are more dangerous than uneducated people.” So, no need to imagine an answer - just ask this douche (Prof. Dr. Bulent Ari, Deputy Rector of the Istanbul Sabahattin Zaim University) personally. • Welcome to Worldbuilding, I'd suggest fleshing out your answer a little more so you can provide more detail. – fi12 May 25 '16 at 0:38 "Why would a government passively encourage its people to not obtain a formal education?" The most obvious reason is that a Government plans to have the rich become an Educated Elite, 'User-Pays' so to speak. 
If education becomes the domain of a small privileged portion of society, then that portion would be in a position of great power. "Knowledge is Power." Edit: If schools are run down and have basically become indoctrination centres, that discourages people as well, so it may not be that the Government is intentionally trying to dissuade people so much as that this is the end result of an attempt at mass mind control. People resist this and so are reluctant to send their children to school. There could have been a malicious government back in the day. The schools founded by that government are actually indoctrination centers rather than learning centers, just like in our world currently. That government itself has since been overthrown, but the fake schools are still there, and most of the citizens are still brainwashed and want to send their kids there. The new, not-so-malicious government can't shut those schools down for some reason, so instead it tries to discourage people from going there.
2019-12-09 03:27:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25985223054885864, "perplexity": 2035.1715669101686}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540517156.63/warc/CC-MAIN-20191209013904-20191209041904-00101.warc.gz"}
http://www.physicsforums.com/showpost.php?p=4174691&postcount=1
## Equilibrium with Lagrangians

So let's say we have a mechanical system described by some Lagrangian $L=L(q_i,\dot{q}_i)$, where the $q_i$'s are the generalized coordinates of the system. Does the condition $$\frac{\partial L}{\partial q_i}=0$$ give the equilibrium configurations of the system? Intuitively it seems so, but I can't prove it.
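A hedged aside with the standard argument (not from the thread itself): for the usual mechanical form $L = T(q,\dot q) - V(q)$ with kinetic energy quadratic in the velocities, $T=\tfrac{1}{2}\sum_{jk}a_{jk}(q)\,\dot q_j\dot q_k$, the Euler-Lagrange equations are
$$\frac{d}{dt}\frac{\partial L}{\partial \dot q_i}-\frac{\partial L}{\partial q_i}=0.$$
Along a constant curve $q(t)\equiv q^*$ we have $\dot q\equiv 0$, so $\partial L/\partial\dot q_i=\sum_j a_{ij}(q)\,\dot q_j$ vanishes identically and its time derivative is zero, while $\partial T/\partial q_i$ also vanishes at $\dot q=0$. The equations of motion therefore reduce to
$$\frac{\partial L}{\partial q_i}\bigg|_{\dot q=0}=-\frac{\partial V}{\partial q_i}(q^*)=0,$$
so $q^*$ is an equilibrium (the constant curve solves the equations of motion) exactly when $\partial L/\partial q_i=0$ there. For Lagrangians with terms linear in $\dot q$ or with explicit time dependence this shortcut need not hold as stated.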
2013-05-22 09:09:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41592055559158325, "perplexity": 2209.4190943400968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701543416/warc/CC-MAIN-20130516105223-00011-ip-10-60-113-184.ec2.internal.warc.gz"}
https://calinon.ch/paper4055.htm
### Abstract We present drozBot: le robot portraitiste, a robotic system that draws artistic portraits of people. The input images for the portrait are taken interactively by the robot itself. We formulate the problem of drawing portraits as a problem of coverage which is then solved by an ergodic control algorithm to compute the strokes. The ergodic computation of the strokes for the portrait gives an artistic look to them. The specific ergodic control algorithm that we chose is inspired by the heat equation. We employed a 7-axis Franka Emika robot for the physical drawings and used an optimal control strategy to generate joint angle commands. We explain the influence of the different hyperparameters and show the importance of the image processing steps. The attractiveness of the results was evaluated by conducting a survey where we asked the participants to rank the portraits produced by different algorithms. ### Bibtex reference @article{Loew22RAL, author={L\"ow, T. and Maceiras, J. and Calinon, S.}, title={drozBot: Using Ergodic Control to Draw Portraits}, journal={{IEEE} Robotics and Automation Letters ({RA-L})}, year={2022} }
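Background note (a common formalization of ergodic coverage, not necessarily the heat-equation-driven variant the paper itself uses, so the symbols below are illustrative assumptions): the coverage objective is typically a weighted mismatch between the trajectory's time-averaged statistics and a target density $\xi$ obtained from the input image,
$$\epsilon(T)=\sum_{k}\Lambda_{k}\,\bigl|c_{k}(T)-\xi_{k}\bigr|^{2},\qquad c_{k}(T)=\frac{1}{T}\int_{0}^{T}F_{k}\bigl(x(t)\bigr)\,dt,\qquad \Lambda_{k}=\bigl(1+\lVert k\rVert^{2}\bigr)^{-s},$$
where $F_k$ are Fourier basis functions on the drawing surface and $\xi_k$ the coefficients of the target density. Driving $\epsilon$ toward zero makes the pen spend time in each region in proportion to the image's darkness, which is what gives the strokes their characteristic look.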
2022-06-29 10:53:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5351184606552124, "perplexity": 1914.5074525489258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103626162.35/warc/CC-MAIN-20220629084939-20220629114939-00573.warc.gz"}
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1228.34009
Zbl 1228.34009
Agarwal, Ravi P.; Ahmad, Bashir
Existence theory for anti-periodic boundary value problems of fractional differential equations and inclusions. (English) [J] Comput. Math. Appl. 62, No. 3, 1200-1214 (2011). ISSN 0898-1221
Summary: This paper studies the existence of solutions for nonlinear fractional differential equations and inclusions of order $q\in (3,4]$ with anti-periodic boundary conditions. In the case of inclusion problem, the existence results are established for convex as well as nonconvex multivalued maps. Our results are based on some fixed point theorems, Leray-Schauder degree theory, and nonlinear alternative of Leray-Schauder type. Some illustrative examples are discussed.
MSC 2000: *34A08 34B99 Boundary value problems for ODE
Keywords: fractional differential equations; inclusions; anti-periodic boundary conditions; existence; multivalued maps; fixed point theorems
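For context, a best-guess sketch of the problem class in this entry (the paper's exact formulation may differ, so treat this as an assumption rather than a quotation): for a Caputo-type equation of order $q\in(3,4]$, anti-periodic boundary conditions typically read
$${}^{c}D^{q}x(t)=f\bigl(t,x(t)\bigr),\quad t\in[0,T],\ 3<q\le 4,\qquad x^{(i)}(0)=-x^{(i)}(T),\quad i=0,1,2,3,$$
i.e. four anti-periodic conditions matching the ceiling of the order of the equation.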
2013-05-18 16:20:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3686402440071106, "perplexity": 5816.920290950273}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382560/warc/CC-MAIN-20130516092622-00086-ip-10-60-113-184.ec2.internal.warc.gz"}
http://kyagrd.logdown.com/tags/Haskell
Type industry muggle ... ## Getting started with PureScript 0.7.1.0 PureScript is a purely functional language specifically targeting JavaScript as its backend. If you have tasted Haskell and lamenting about the current untyped mess in scripting languages, especially regarding web-programming, probabiy your only choice out there is PureScript. PureScript is the only language with a sane approch to staticaly support duck typing on records by using Row Polymorphism. In addition, PureScript supports Extensible Effects with the same mechanism, using Row Polymorphism. Monads are labeled with effects (e.g. console for console I/O) in PureScript. This greatly reduces the problem of excessive transformer stack-up when composing Haskell monadic libraries simply for combining effects in obvious ways. ## Short note for new PureScript users (like me) This is a short note for those who want to try PureScript, who are already familliar with GHC and cabal. PureScript is relatively well documented, but for newbies it could be confusing because updated documents for the recent version 0.7.1.0 are scatterred around several URLs and the e-book and several other documents on the web is written based on the older version. Installing PureScript is easy if you have installed cabal for installing PureScript and npm for installing JavaScript packages, especially for installing pulp (package dependency manangment/sandboxing for PureScript). Since purescript is implemented in Haskell, I felt it most natural to use cabal because I am GHC user. But you don’t have to have GHC and cabal installed in fact. There are also binary distribtions directly downlodable or via other binary packge managers from the PureScript homepage; see http://www.purescript.org/download/ for more informtion. I’ll just stick to cabal in this document. You can install PureScript in any system cabal runs. cabal install purescript For npm and pulp, You’d do somthing like this on Mac OS X brew install npm npm install -g pulp and on Debian-based linux systems sudo apt-get install npm npm install pulp Most users of Mac OS X would have Admin rights so it is most reasonlable to install pulp globalally in the system. The PureScript binary files should be installed in ~/.cabal/bin and you should have already added ~/.cabal/bin to your PATH already if you are a sensible GHC user. Since you’ve installed pulp globally with -g option, it should also be in your PATH. Note that installing pulp can take a while, especially when you have freshly installed npm, because it installs all its dependencies. Altough you can sudo install -g pulp on linux systems, if you are in sudo group, it is usually not recommended in linux systems. So, just install it locally and set up some additional path in bashrc or whatever shell configuration file you use. In some linux distros, such as Debian, the executable filename for the “node.js” is named as nodejs rather than node in most other systems. So, you’d have to make a symbolic link in one of your PATH anyway. The pulp package mananager for PureScript also tries to find node. Anyway this is what I did in Debian after installing pulp locally. The npm pacakge manager for JavaScript on Debian installs user packages into ~/npm_modules directory and its binary/executable files are symlinked into ~/npm_modules/.bin directory. kyagrd@debair:~$ls -al node_modules/ total 16 drwxr-xr-x 4 kyagrd kyagrd 4096 Aug 2 22:30 . drwxr-xr-x 33 kyagrd kyagrd 4096 Aug 3 00:01 .. 
drwxr-xr-x 2 kyagrd kyagrd 4096 Aug 3 00:00 .bin drwxr-xr-x 4 kyagrd kyagrd 4096 Aug 2 22:29 pulp kyagrd@debair:~$ ls -al node_modules/.bin total 8 drwxr-xr-x 2 kyagrd kyagrd 4096 Aug 3 00:00 . drwxr-xr-x 4 kyagrd kyagrd 4096 Aug 2 22:30 .. lrwxrwxrwx 1 kyagrd kyagrd 15 Aug 3 00:00 node -> /usr/bin/nodejs lrwxrwxrwx 1 kyagrd kyagrd 16 Aug 2 22:30 pulp -> ../pulp/index.js So, this is a good place to stick in the renamed symlink for “node.js”. (If you don’t like it here you can put it in anywhere else say ~/bin and add it to your PATH.) kyagrd@debair:~$ln -s /usr/bin/nodejs ~/node_modules/.bin/node kyagrd@debair:~$ ls -al node_modules/.bin total 8 drwxr-xr-x 2 kyagrd kyagrd 4096 Aug 3 00:00 . drwxr-xr-x 4 kyagrd kyagrd 4096 Aug 2 22:30 .. lrwxrwxrwx 1 kyagrd kyagrd 15 Aug 3 00:00 node -> /usr/bin/nodejs lrwxrwxrwx 1 kyagrd kyagrd 16 Aug 2 22:30 pulp -> ../pulp/index.js Now we’re done with all the tedious stuff and have fun. kyagrd@debair:~$mkdir pursproj kyagrd@debair:~$ cd pursproj/ kyagrd@debair:~/pursproj$pulp init * Generating project skeleton in /home/kyagrd/pursproj bower cached git://github.com/purescript/purescript-console.git#0.1.0 bower validate 0.1.0 against git://github.com/purescript/purescript-console.git#^0.1.0 bower cached git://github.com/purescript/purescript-eff.git#0.1.0 bower validate 0.1.0 against git://github.com/purescript/purescript-eff.git#^0.1.0 bower cached git://github.com/purescript/purescript-prelude.git#0.1.1 bower validate 0.1.1 against git://github.com/purescript/purescript-prelude.git#^0.1.0 bower install purescript-console#0.1.0 bower install purescript-eff#0.1.0 bower install purescript-prelude#0.1.1 purescript-console#0.1.0 bower_components/purescript-console └── purescript-eff#0.1.0 purescript-eff#0.1.0 bower_components/purescript-eff └── purescript-prelude#0.1.1 purescript-prelude#0.1.1 bower_components/purescript-prelude The pulp init command creates a template PureScript project for you. To see other command of pulp, you can pulp --help. Usage: /home/kyagrd/node_modules/.bin/pulp [options] [command-options] Global options: --bower-file -b Read this bower.json file instead of autodetecting it. --help -h Show this help message. --monochrome Don't colourise log output. --then Run a shell command after the operation finishes. Useful with --watch. --version -v Show current pulp version --watch -w Watch source directories and re-run command if something changes. Commands: browserify Produce a deployable bundle using Browserify. build Build the project. dep Invoke Bower for package management. docs Generate project documentation. init Generate an example PureScript project. psci Launch a PureScript REPL configured for the project. run Compile and run the project. test Run project tests. Use /home/kyagrd/node_modules/.bin/pulp --help to learn about command specific options. We can try run and test commands kyagrd@debair:~/pursproj$ pulp run * Building project in /home/kyagrd/pursproj * Build successful. Hello sailor! kyagrd@debair:~/pursproj$cat src/Main.purs module Main where import Control.Monad.Eff.Console main = do log "Hello sailor!" kyagrd@debair:~/pursproj$ pulp test * Building project in /home/kyagrd/pursproj * Build successful. Running tests... * Tests OK. kyagrd@debair:~/pursproj$cat test/Main.purs module Test.Main where import Control.Monad.Eff.Console main = do log "You should add some tests." kyagrd@debair:~/pursproj$ The run and test command invokes build command before running or testing if the project has not been built yet. 
One last thing is about editing the “bower.json” file. The pulp package manager for PureScript uses bower (a local sandboxing package manager for javascirpt packages and also used for purescript cuase purescript package seems to be desinged to be compatible with javascript package versioning/dpenecy convention) to locally sandbox all its dependencies in the bower_compnents directory. I think for beginners like me, the only thing to edit is the dependency section. kyagrd@debair:~/pursproj$ls bower.json bower_components output src test kyagrd@debair:~/pursproj$ cat bower.json { "name": "pursproj", "version": "1.0.0", "moduleType": [ "node" ], "ignore": [ "**/.*", "node_modules", "bower_components", "output" ], "dependencies": { "purescript-console": "^0.1.0" } } The dependices section of the default “bower.json” created by invoking pulp init only contains the purescript-console package. As you need to use more library packages, you can list them in the dependencies section. There is a list of most commonly used basic packages (of course including purescript-console) combined as a sort of meta-package called purescript-base. We can edit the depenecy section, instead of purescript-console change it to purescript-base. Because purescript-base depends on purescript-console, it is redundent to specify purescript-console when there is purescript-base. The current version of purescript-base is 0.1.0, which happens to be same as the version of purescript-console. So, change it like this and then install all its depending packages using pulp dep install (which invokes bower install). It sure does install more packages than before. kyagrd@debair:~/pursproj$vi bower.json ### edit with text editor kyagrd@debair:~/pursproj$ cat bower.json { "name": "pursproj", "version": "1.0.0", "moduleType": [ "node" ], "ignore": [ "**/.*", "node_modules", "bower_components", "output" ], "dependencies": { "purescript-base": "^0.1.0" } } kyagrd@debair:~/pursproj$pulp dep install bower cached git://github.com/purescript-contrib/purescript-base.git#0.1.0 bower validate 0.1.0 against git://github.com/purescript-contrib/purescript-base.git#^0.1.0 bower cached git://github.com/purescript/purescript-arrays.git#0.4.1 bower validate 0.4.1 against git://github.com/purescript/purescript-arrays.git#^0.4.0 bower cached git://github.com/purescript/purescript-control.git#0.3.0 bower validate 0.3.0 against git://github.com/purescript/purescript-control.git#^0.3.0 bower cached git://github.com/purescript/purescript-either.git#0.2.0 bower validate 0.2.0 against git://github.com/purescript/purescript-either.git#^0.2.0 bower cached git://github.com/purescript/purescript-enums.git#0.5.0 bower validate 0.5.0 against git://github.com/purescript/purescript-enums.git#^0.5.0 bower cached git://github.com/purescript/purescript-foldable-traversable.git#0.4.0 bower validate 0.4.0 against git://github.com/purescript/purescript-foldable-traversable.git#^0.4.0 bower cached git://github.com/paf31/purescript-lists.git#0.7.0 bower validate 0.7.0 against git://github.com/paf31/purescript-lists.git#^0.7.0 bower cached git://github.com/purescript/purescript-math.git#0.2.0 bower validate 0.2.0 against git://github.com/purescript/purescript-math.git#^0.2.0 bower cached git://github.com/purescript/purescript-maybe.git#0.3.3 bower validate 0.3.3 against git://github.com/purescript/purescript-maybe.git#^0.3.0 bower cached git://github.com/purescript/purescript-monoid.git#0.3.0 bower validate 0.3.0 against git://github.com/purescript/purescript-monoid.git#^0.3.0 
bower cached git://github.com/purescript/purescript-strings.git#0.5.5 bower validate 0.5.5 against git://github.com/purescript/purescript-strings.git#^0.5.0 bower cached git://github.com/purescript/purescript-tuples.git#0.4.0 bower validate 0.4.0 against git://github.com/purescript/purescript-tuples.git#^0.4.0 bower cached git://github.com/paf31/purescript-unfoldable.git#0.4.0 bower validate 0.4.0 against git://github.com/paf31/purescript-unfoldable.git#^0.4.0 bower cached git://github.com/purescript/purescript-integers.git#0.2.1 bower validate 0.2.1 against git://github.com/purescript/purescript-integers.git#^0.2.0 bower cached git://github.com/purescript/purescript-st.git#0.1.0 bower validate 0.1.0 against git://github.com/purescript/purescript-st.git#^0.1.0 bower cached git://github.com/purescript-contrib/purescript-bifunctors.git#0.4.0 bower validate 0.4.0 against git://github.com/purescript-contrib/purescript-bifunctors.git#^0.4.0 bower cached git://github.com/purescript/purescript-lazy.git#0.4.0 bower validate 0.4.0 against git://github.com/purescript/purescript-lazy.git#^0.4.0 bower cached git://github.com/purescript/purescript-invariant.git#0.3.0 bower validate 0.3.0 against git://github.com/purescript/purescript-invariant.git#^0.3.0 bower install purescript-base#0.1.0 bower install purescript-arrays#0.4.1 bower install purescript-either#0.2.0 bower install purescript-control#0.3.0 bower install purescript-enums#0.5.0 bower install purescript-foldable-traversable#0.4.0 bower install purescript-lists#0.7.0 bower install purescript-math#0.2.0 bower install purescript-maybe#0.3.3 bower install purescript-monoid#0.3.0 bower install purescript-strings#0.5.5 bower install purescript-tuples#0.4.0 bower install purescript-unfoldable#0.4.0 bower install purescript-st#0.1.0 bower install purescript-integers#0.2.1 bower install purescript-bifunctors#0.4.0 bower install purescript-lazy#0.4.0 bower install purescript-invariant#0.3.0 purescript-base#0.1.0 bower_components/purescript-base ├── purescript-arrays#0.4.1 ├── purescript-console#0.1.0 ├── purescript-control#0.3.0 ├── purescript-either#0.2.0 ├── purescript-enums#0.5.0 ├── purescript-foldable-traversable#0.4.0 ├── purescript-integers#0.2.1 ├── purescript-lists#0.7.0 ├── purescript-math#0.2.0 ├── purescript-maybe#0.3.3 ├── purescript-monoid#0.3.0 ├── purescript-strings#0.5.5 ├── purescript-tuples#0.4.0 └── purescript-unfoldable#0.4.0 purescript-arrays#0.4.1 bower_components/purescript-arrays ├── purescript-foldable-traversable#0.4.0 ├── purescript-st#0.1.0 └── purescript-tuples#0.4.0 purescript-either#0.2.0 bower_components/purescript-either └── purescript-foldable-traversable#0.4.0 purescript-control#0.3.0 bower_components/purescript-control └── purescript-prelude#0.1.1 purescript-enums#0.5.0 bower_components/purescript-enums ├── purescript-either#0.2.0 ├── purescript-strings#0.5.5 └── purescript-unfoldable#0.4.0 purescript-foldable-traversable#0.4.0 bower_components/purescript-foldable-traversable ├── purescript-bifunctors#0.4.0 └── purescript-maybe#0.3.3 purescript-lists#0.7.0 bower_components/purescript-lists ├── purescript-foldable-traversable#0.4.0 ├── purescript-lazy#0.4.0 └── purescript-unfoldable#0.4.0 purescript-math#0.2.0 bower_components/purescript-math purescript-maybe#0.3.3 bower_components/purescript-maybe └── purescript-monoid#0.3.0 purescript-monoid#0.3.0 bower_components/purescript-monoid ├── purescript-control#0.3.0 └── purescript-invariant#0.3.0 purescript-strings#0.5.5 bower_components/purescript-strings └── 
purescript-maybe#0.3.3 purescript-tuples#0.4.0 bower_components/purescript-tuples └── purescript-foldable-traversable#0.4.0 purescript-unfoldable#0.4.0 bower_components/purescript-unfoldable ├── purescript-arrays#0.4.1 └── purescript-tuples#0.4.0 purescript-st#0.1.0 bower_components/purescript-st └── purescript-eff#0.1.0 purescript-integers#0.2.1 bower_components/purescript-integers ├── purescript-math#0.2.0 └── purescript-maybe#0.3.3 purescript-bifunctors#0.4.0 bower_components/purescript-bifunctors └── purescript-control#0.3.0 purescript-lazy#0.4.0 bower_components/purescript-lazy └── purescript-monoid#0.3.0 purescript-invariant#0.3.0 bower_components/purescript-invariant └── purescript-prelude#0.1.1 There are several tutorial examples online. Many of them should be doable by editing the src/Main.purs file. Ones that actually runs on browsers would need to use pulp browserify, which outputs single file that contains self contained javascript code. We can do it for this template project too (you need to check developer console for the output though). kyagrd@debair:~/pursproj$ pulp browserify > index.js * Browserifying project in /home/kyagrd/pursproj * Building project in /home/kyagrd/pursproj * Build successful. * Browserifying... kyagrd@debair:~/pursproj$cat > index.html <html> <head> <title>Example purescript app</title> <meta charset="UTF-8"> </head> <body> Check the console <script type="text/javascript" src="index.js"></script> </body> </html> kyagrd@debair:~/pursproj$ ls bower.json bower_components index.html index.js output src test Load “index.html” in a browser that supports a deveolper console like Firefox. Every time you load “index.html” you can see the console log printing “hello sailor!”, just as it did when we pulp run on the command line console. Happy PureScript-ing! ## An example of higher-kinded polymorphism not involving type classes We almost always explain higher-kinded polyomrphism (a.k.a. type constructor polymorphism) in Haskell with an example involving type classes such as Monad. It is natural to involve type classes because type classes were the motivation to support type constructor polymoprhism in Haskell. However, it is good to have an example that highlights the advantage of having higher-kinded polymorphism, because type constructor variables are not only useful for defining type classes. Here is an example I used in my recent draft of a paper to describe what type constructor polymorphism is and how they are useful. data Tree c a = Leaf a | Node (c (Tree c a)) newtype Pair a = Pair (a,a) type BTree a = Tree Pair a type RTree a = Tree [] a The ability to abstract over type constructors empowers us to define more generic data structures, such as Tree, that can be instantiated to well-knwon data structures, such as BTree (binary tree) or RTree (rose tree). The Tree datatype is parametrized by c :: * -> *, which determines the branching structure at the nodes, and a :: *, which determines the type of elemenets hanging on the leaves. By instantiating the type construcor variable c, we get concrete tree data structures, which we would define sparately as follows without type constructor polymophism, but only with type polymorphism: data BTree a = BLeaf a | BNode (BTree a, BTree a) data RTree a = RLeaf a | RNode [RTree a] ## Interactive Haskell on the web tryhaskell.org uses a patched version of mueval library and console library, and the famouse javascript library jQuery. So, it is based on GHC, but can only run limited things, which mueval allows. 
fpcomplete.com seems to be using GHC and its web framework is yesod, but I am not exactly sure what its implementation is based on. This is the most comprehensive authoring page that supports an interactive Haskell environment on the web.

## Programming with term-indexed types in Nax

Our draft (submitted for the IFL’12 post-proceeding review process) explores a language design space to support term-indexed datatypes along the lines of the lightweight approach, promoted by the implementation of GADTs in major functional programming languages (e.g., GHC, OCaml). Recently, the Haskell community has been exploring yet another language design space, datatype promotion (available in recent versions of GHC), to support term-indexed datatypes. In this draft, we discuss similarities and differences between these two design spaces that allow term-indices in languages with two universes (star and box) and no value dependencies. We provide concrete examples written in Nax, Haskell (with datatype promotion), and Agda, to illustrate the similarities and differences. “Nax’s term-indices vs. Haskell’s datatype promotion” corresponds to “Universe Subtyping vs. Universe Polymorphism” in Agda. Nax can have nested term indices. That is, we can use terms of already indexed types (e.g. length indexed lists, singleton types) as indices in Nax, while we cannot in Haskell’s datatype promotion (as it is now in GHC 7.6). On the other hand, Haskell’s datatype promotion allows type-level data structures containing types as elements (e.g. List *).

P.S. In addition to supporting term indices, Nax is designed to be logically consistent in the Curry—Howard sense and to support Hindley—Milner-style type inference. These two important characteristics of Nax are out of the scope of this draft, but are to be published soon and will also appear in my dissertation. Unfortunately, this was rejected for the post-proceeding but it will still appear in my thesis (which I’m really trying hard to finish up soon) and I am looking for other venues to submit a revised version.

## List-like things in Conor's talk at ICFP 2012 into Haskell

In Conor’s keynote talk at ICFP 2012, he uses a generic list-like thing in Agda, which generalizes over regular lists, length indexed lists, and even more complicated things. It is done by indexing the data structure with relations of kind (A -> A -> *) rather than just a value. With the help of GHC’s newest super hot datatype promotion and kind polymorphism this already type checks!!! (I checked this in GHC 7.4.1 but it should work in GHC 7.6.x as well.)

{-# LANGUAGE KindSignatures, GADTs, DataKinds, PolyKinds #-}
-- list like thing in Conor's talk into Haskell!
data List x i j where
  Nil  :: List x i i
  Cons :: x i j -> List x j k -> List x i k

append :: List x i j -> List x j k -> List x i k
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

-- instantiating to a length indexed list
data Nat = Z | S Nat

data VecR a i j where
  MkVecR :: a -> VecR a ('S i) i

type Vec a n = List (VecR a) n 'Z

vNil :: Vec a 'Z; vNil = Nil

vCons :: a -> Vec a n -> Vec a ('S n)
vCons = Cons . MkVecR

-- instantiating to a plain list
data R a i j where
  MkR :: a -> R a '() '()

type PlainList a = List (R a) '() '()

nil :: PlainList a
nil = Nil

cons :: a -> PlainList a -> PlainList a
cons = Cons . MkR

You can go on and define an Inst datatype for instructions. However, in GHC 7.4.1, you cannot write the compile function that compiles expressions to instructions because Haskell does not have stratified universes.
GHC 7.4.1 will give you an error that says BOX is not *. I heard that *:* will be added to GHC at one point, so it may work in more recent versions.

{-# LANGUAGE KindSignatures, GADTs, DataKinds, PolyKinds #-}
{-# LANGUAGE TypeOperators #-}

data Ty = I | B

data Val t where
  IV :: Int -> Val I
  BV :: Bool -> Val B

plusV :: Val I -> Val I -> Val I
plusV (IV n) (IV m) = IV (n + m)

ifV :: Val B -> Val t -> Val t -> Val t
ifV (BV b) v1 v2 = if b then v1 else v2

data Expr t where
  VAL :: Val t -> Expr t
  PLUS :: Expr I -> Expr I -> Expr I
  IF :: Expr B -> Expr t -> Expr t -> Expr t

eval :: Expr t -> Val t
eval (VAL v) = v
eval (PLUS e1 e2) = plusV (eval e1) (eval e2)
eval (IF e e1 e2) = ifV (eval e) (eval e1) (eval e2)

data Inst :: [Ty] -> [Ty] -> * where
  PUSH  :: Val t -> Inst ts (t ': ts)
  ADD   :: Inst ('I ': 'I ': ts) ('I ': ts)
  IFPOP :: GList Inst ts ts' -> GList Inst ts ts' -> Inst ('B ': ts) ts'

compile :: Expr t -> GList Inst ts (t ': ts)
compile (VAL v) = undefined -- Cons (PUSH v) Nil -- I am stuck here
{- GListLike.hs:32:19:
-     Couldn't match kind `BOX' against `*'
-     Kind incompatibility when matching types:
-       k0 :: BOX
-       [Ty] :: *
-     In the return type of a call of `Cons'
-     In the expression: Cons (PUSH v) Nil
-}

It turns out that this example of Conor's is a very nice example that highlights the differences between GHC's datatype promotion and the Nax language approach (or System Fi, which I posted earlier). I'll try to explain the difference in the IFL 2012 post-proceeding, but in a nutshell, System Fi needs neither *:* (which causes logical inconsistency) nor stratified universes to express Conor's example of the stack-shape indexed compiler. (Or, we may say that the Fi typing context, split into two parts, somehow encodes certain uses of stratified universes.) Anyway, I see no problem writing that code in Nax. However, we do not have polymorphism at the kind level in System Fi, so we cannot express GList, but only specific instances of GList instantiated to some specific x. I do have some rough thoughts that HM-style polymorphism at the kind level won't cause logical inconsistencies, but I haven't worked out the details.

## Indentation aware parsing for Haskell-like blocks

Use indentparser (unless you prefer the libraries starting with the "uu" prefix). It is the only choice on hackage compatible with Parsec, and it's pretty good: it follows the style of the Parsec library, is defined as generally as it needs to be, and its basic framework just looks right. There are two other indentation-aware parsers on hackage, but one seems obsolete and the other is not as beautiful as this one. However, there is one small problem. You need to build your own combinator using some of their token parsers. The library-provided combinator "braceBlock" almost works but is not what you want (I think it is a bug rather than a feature). The braceBlock combinator lets you parse both

item
item
item

and

{ item; item; item; item; item }

but not this one below

{ item; item;
  item; item;
  item }

which is what you do want to parse as a block. So, what you need to do is to define something like this

blockOfmany p = braces (semiSep p) <|> blockOf (many p)

and then a parser for a do block can be defined as follows

doP = keyword "do" >> blockOfmany commandP

where keyword and commandP should be properly built up using their indentation-aware token parsers and combinators.
## Untyped de Bruijn lambda term evaluator using datatype promotion in GHC

{-# LANGUAGE GADTs, DataKinds, PolyKinds, KindSignatures, TypeOperators, RankNTypes, FlexibleInstances #-}

data V a = Vfun (V a -> V a) | Vinv a -- user should never use Vinv

type Val = V ([String] -> String)

showV :: V ([String] -> String) -> [String] -> String
showV (Vfun f) (x:xs) = "(\\"++x++"."++showV (f (Vinv $ const x)) xs++")"
showV (Vinv g) xs = g xs

showVal v = showV v $ map return "xyz" ++ [c:show n | c<-cycle "xyz", n<-[0..]]

instance Show Val where show = showVal

data Nat = Z | S Nat

data Fin :: Nat -> * where
  FZ :: Fin (S n)
  FS :: Fin n -> Fin (S n)

data Lam :: Nat -> * where
  Var :: Fin n -> Lam n
  App :: Lam n -> Lam n -> Lam n
  Abs :: Lam (S n) -> Lam n

apV (Vfun f) = f

eval :: (Fin n -> Val) -> Lam n -> Val
eval env (Var fn) = env fn
eval env (App e1 e2) = apV (eval env e1) (eval env e2)
eval env (Abs e) = Vfun (\x -> eval (ext env x) e)

ext :: (Fin n -> V a) -> V a -> Fin (S n) -> V a
ext env x FZ = x
ext env x (FS n) = env n

empty :: f Z -> V a
empty _ = error "empty"

*Main> eval (ext empty (Vfun id)) (Abs $ Var (FS FZ))
(\x.(\y.y))
*Main> eval empty (Abs $ Var (FS FZ))
<interactive>:182:13:
    Couldn't match expected type `Z' with actual type `S n0'
    Expected type: Lam 'Z
      Actual type: Lam (S n0)
    In the second argument of `eval', namely `(Abs $ Var (FS FZ))'
    In the expression: eval empty (Abs $ Var (FS FZ))

This works! It statically checks the size of the environment. Great!

## Tried out syntastic for Haskell editing in Vim

Syntastic is a fantastic syntax checker for the Vim editor. http://www.vim.org/scripts/script.php?script_id=2736 I learned about this recently, and tried this out for my favorite language Haskell. And, I have also learned that Vim now has a reasonable way of managing Vim packages using pathogen. I followed the instructions on the following blog for installing pathogen: http://tammersaleh.com/posts/the-modern-vim-config-with-pathogen When you install syntastic using pathogen, which is the recommended way, things will just work (at least in linux). Syntastic supports Haskell using the ghc-mod package, which of course requires ghc installed. Major linux distros are likely to have a distro package for ghc-mod and ghc. I use Debian unstable, which has ghc-mod and ghc packages by distro. When you try out syntastic for Haskell, standard haskell scripts of hs suffix will work but the literate haskell scripts of lhs suffix will not. The solution for this is to simply make a symbolic link of lhaskell.vim from haskell.vim in the syntax_checker directory as follows:

kyagrd@kyahp:~/.vim/bundle/syntastic/syntax_checkers$ ls -al *haskell*.vim
-rw-r--r-- 1 kyagrd kyagrd 1694 4월 21 15:14 haskell.vim
lrwxrwxrwx 1 kyagrd kyagrd   11 4월 21 16:24 lhaskell.vim -> haskell.vim

Then, I got syntastic working for literate haskell scripts as well. Tips that I use.

## Korean syllable decomposition using Haskell

The text-icu package provides bindings for the ICU library. The library author notes that there can be some memory overhead from copying the Haskell memory area (Text) to a fixed memory area for FFI to the ICU library. text-icu provides automatic buffer management. So, you don't need to mess around with Ptr and raw heap memory allocation, but there is the usual expected overhead.

kyagrd@kyahp:~/cscs/text-icu-ko$ ghci -XOverloadedStrings
GHCi, version 7.4.1: http://www.haskell.org/ghc/  :?
for help
Prelude Data.Text.ICU> :m + Data.Text.ICU.Normalize
Prelude Data.Text.ICU Data.Text.ICU.Normalize>

By NFKD (compatibility decomposition) and NFKC (compatibility composition), Hangul chosung jamo can be equated with Hangul compatibility jamo, as follows:

…> (normalize NFD "나", normalize NFD "ㄴㅏ")
("\4354\4449","\12596\12623")
…> (normalize NFKD "나", normalize NFKD "ㄴㅏ")
("\4354\4449","\4354\4449")
…> normalize NFKD "나" == normalize NFKD "ㄴㅏ"
True

However, the Unicode standard does not seem to provide such a relation between jongsung jamo and compatibility jamo (hence, one cannot expect the ICU library to have such a facility).

…> ("난","ㄴㅏㄴ")
("\45212","\12596\12623\12596")
…> (normalize NFD "난", normalize NFD "ㄴㅏㄴ")
("\4354\4449\4523","\12596\12623\12596")
…> (normalize NFKD "난", normalize NFKD "ㄴㅏㄴ")
("\4354\4449\4523","\4354\4449\4354")
…> normalize NFKD "난" == normalize NFKD "ㄴㅏㄴ"
False

So, in order to define an operator like "난" =:= "ㄴㅏㄴ" that evaluates to True, one needs to implement it oneself, referring to the Hangul-related Unicode code charts.

References: #langdev channel at Ozinger IRC network; UTF8 한글 문자열을 첫가끝 낱자(자소)로 분해하기 (Decomposing UTF-8 Hangul strings into initial/medial/final jamo)
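An illustrative aside, not part of the original post: the arithmetic needed for such an operator is the standard Unicode Hangul-syllable decomposition. A minimal sketch (written in Python here purely for brevity, not Haskell) that recovers the conjoining jamo shown by the NFD outputs above:

S_BASE, L_BASE, V_BASE, T_BASE = 0xAC00, 0x1100, 0x1161, 0x11A7
V_COUNT, T_COUNT = 21, 28

def decompose(ch):
    # Split one precomposed Hangul syllable into its conjoining jamo.
    s = ord(ch) - S_BASE
    if not (0 <= s < 11172):                     # outside the Hangul Syllables block
        return [ch]
    l = s // (V_COUNT * T_COUNT)                 # choseong (initial consonant) index
    v = (s % (V_COUNT * T_COUNT)) // T_COUNT     # jungseong (vowel) index
    t = s % T_COUNT                              # jongseong (final consonant) index, 0 = none
    jamo = [chr(L_BASE + l), chr(V_BASE + v)]
    if t:
        jamo.append(chr(T_BASE + t))
    return jamo

print(decompose("난"))   # the jamo U+1102, U+1161, U+11AB, i.e. "\4354\4449\4523" above

Mapping these conjoining jamo onto the compatibility jamo block (U+3131 and onwards) still requires a small lookup table, which is exactly the hand-written part the post alludes to.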
2018-12-13 08:49:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.262570321559906, "perplexity": 11339.51374125149}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824601.32/warc/CC-MAIN-20181213080138-20181213101638-00587.warc.gz"}
http://planet.sympy.org/
## June 27, 2017 #### Shikhar Jaiswal (ShikharJ): GSoC Progress - Week 4 Hello, this post contains the fourth report of my GSoC progress. Though I had planned on writing a new blog post every Monday, seems like I’m a bit late on that account. ## Report ### SymEngine Last week I had mentioned that I’d be finishing up my work on Range set, however, after a talk with Isuru, it was decided to schedule it for the later part of the timeline. Instead I finished up on simplification of Add objects in Sign class through the PR #1297. Also, Isuru suggested implementing parser support for Relationals, which was required for PyDy. The work is currently in progress, through PR #1298, after we hit an issue with dual-character operators (<= and >=), but we’re working on it. I sent in another PR #1302 removing some redundant includes. From now my work with SymEngine will probably be a little slower than usual, as I’ve started off wrapping classes in SymEngine.py, which is supposed to continue for some time from now. ### SymEngine.py I pushed in #162 wrapping off a huge portion of the functions module in SymEngine. I’ll be working on wrapping Logic classes and functions in the coming week, as well as finishing off my work on the parser support in SymEngine. See you again! Vale ## Continued work on tutorial material During the past week I got binder to work with our tutorial material. Using environment.yml did require that we had a conda package available. So I set up our instance of Drone IO to push to the sympy conda channel. The new base dockerimage used in the beta version of binder (v2) does not contain gcc by default. This prompted me to add gcc to our environment.yml, unfortunately this breaks the build process: Attempting to roll back. running your command again with -v will provide additional information ==> script messages <== <None> I've reached out on their gitter channel, we'll see if anyone knows what's up. If we cannot work around this we will have two options: 1. Accept that the binder version cannot compile any generated code 2. Try to use binder with a Dockerimage based on the one used for CI tests (with the difference that it will include a prebuilt conda environment from environment.yml) I hope that the second approach should work unless we hit a image size limitation (and perhaps we need a special base image, I haven't looked into those details yet). Jason has been producing some really high quality tutorial notebooks and left lots of constructive feedback on my initial work. Based on his feedback I've started reworking the code-generation examples not to use classes as extensively. I've also added some introductory notebooks: numerical integration of ODEs in general & intro to SymPy's cryptically named lambdify (the latter notebook is still to be merged into master). ## Work on SymPy for version 1.1 After last weeks mentor meeting we decided that we would try to revert a change to sympy.cse (a function which performs common subexpression elimination). I did spend some time profiling the new implementation. However, I was not able to find any obvious bottle-necks, and given that it is not the main scope of my project I did not pursue this any further at the moment. I also just started looking at introducing a Python code printer. There is a PythonPrinter in SymPy right now, although it assumes that SymPy is available. 
The plan right now is to rename to old printer to SymPyCodePrinter and have the new printer primarily print expressions, which is what the CodePrinter class does best, even though it can be "coerced" into emitting statements. ## Plans for the upcoming week As the conference approaches the work intensifies both with respect to finishing up the tutorial material and fixing blocking issues for a SymPy 1.1 release. Working on the tutorial material helps finding things to improve code-generation wise (just today I opened a minor issue just to realize that my "work-in-progress" pull-request from an earlier week (which I intend to get back working on next week too) actually fixes said issue. ## Risch’s structure theorem’s For the next couple of weeks we will be moving to understand and implement the Risch’s structure theorem and applications that we will make use of in gsoc project. One of the application is that of “Simplification of real elementary functions”, a paper by Manuel Bronstein (in 1989)[1]. This paper by Bronstein uses Risch’s real structure theorem, the paper determines explicitly all algebraic relations among a set of real elementary functions. ### Simplification of Real Elementary Functions Risch in 1979 gave a theorem and an algorithm that found explicitly all algebraic relations among a set of only ’s and ’s. And to apply this algorithm required to convert trigonometric functions to ’s and ’s. Consider the integrand . For this first we make use of form of , i.e . Substituting it in the original expression we get . Now using the exponential form of we get . And substituting in place of we get the final expression. , which is a complex algebraic function. All of this could be avoided since the original function is a real algebraic function. The part that could be a problem to see it is , which can be seen to satisfy the algebraic equation using the formula for . Bronstein discusses the algorithm that wouldn’t require the use of complex ’s and ’s. A function is called real elementary over a differential field , if for a differential extension () of . If either is algebraic over or it is , , , of an element of (consider this recursively). For example: is real elementary over , so is and . Point to mention here is, we have to explicitly include , , since we don’t want the function to be a complex function by re-writing as , form( and can be written in form of complex and respectively), and we can write other trigonometric functions have real elementary relations with and . Alone or alone can’t do the job. This way we can form the definition of a real-elementary extension of a differential field . ### Functions currently using approach of structure theorems in SymPy Moving on, let us now look at the three functions in SymPy that use the structure theorem “approach”: 1. is_log_deriv_k_t_radical(fa, fd, DE): Checks if is the logarithmic derivative of a k(t)-radical. Mathematically where , . In naive terms if . Here k(t)-radical means n-th roots of element of . Used in process of calculating DifferentialExtension of an object where ‘case’ is ‘exp’. 2. is_log_deriv_k_t_radical_in_field(fa, fd, DE): Checks if is the logarithmic derivative of a k(t)-radical. Mathematically where , . It may seem like it is just the same as above with f given as input instead of having to calculate , but the “in_field” part in function name is important. 3. is_deriv_k: Checks if is the derivative of a k(t), i.e. where . ### What have I done this week? 
Moving onto what I have been doing for the last few days (at a very slow pace) went through a debugger for understanding the working of DifferentialExtension(exp(x**2/2) + exp(x**2), x) in which integer_powers function is currently used to determine the relation and , instead of and , since we can’t handle algebraic extensions currently (that will hopefully come later in my gsoc project). Similar example for is there in the book , though the difference is it uses the is_deriv_k (for case == 'primitive', we have is_log_deriv_k_t_radical for case == ‘exp’) to reach the conclusion that , and . I still have to understand the structure theorem, for what they are? and how exactly they are used. According to Aaron, Kalevi, I should start reading the source of is_deriv_k, is_log_deriv_k_t_radical and parametric_log_deriv functions in prde.py file. We worked on #12743 Liouvillian case for Parametric Risch Diff Eq. this week, it handles liouvillian cancellation cases. It enables us to handle integrands like . >>> risch_integrate(log(x/exp(x) + 1), x) (x*log(x*exp(-x) + 1) + NonElementaryIntegral((x**2 - x)/(x + exp(x)), x)) earlier it used to raise error prde_cancel() not implemented. After testing it a bit I realised that part of returned answer could be further integrated instead of being returned as a NonElementaryIntegral. Consider this In [4]: risch_integrate(x + exp(x**2), x) Out[4]: Integral(x + exp(x**2), x) as can be easily seen it can be further integrated. So Aaron opened the issue #12779 Risch algorithm could split out more nonelementary cases. Though I am not sure why Kalevi has still not merged the liouvillian PR (only a comment needs to be fixed n=2 -> n=5 in a comment), though that PR is not blocking me from doing further work. Starting tomorrow (26th) we have the first evaluation of GSoC. Anyways I don’t like the idea of having 3 evaluations. ### TODO for the next week: Figuring out and implementing the structure theorems. ### References • {1}. Simplification of real elementary functions, http://dl.acm.org/citation.cfm?id=74566 At the beginning of the week, all of my previous PRs got merged. This is good. Also, the homomorphism class is now mostly written with the exception of the kernel computation. The kernel is the only tricky bit. I don’t think there is any general method of computing it, but there is a way for finite groups. Suppose we have a homomorphism phi from a finite group G. If g is an element of G and phi.invert(h) returns an element of the preimage of h under phi, then g*phi.invert(phi(g))**-1 is an element of the kernel (note that this is not necessarily the identity as the preimage of phi(g) can contain other elements beside g). With this in mind, we could start by defining K to be the trivial subgroup of G. If K.order()*phi.image().order() == G.order(), then we are done. If not, generate some random elements until g*phi.invert(phi(g))**-1 is not the identity, and redefine K to be the subgroup generated by g*phi.invert(phi(g))**-1. Continue adding generators like this until K.order()*phi.image().order() == G.order() or finite G this will always terminate (or rather almost always considering that the elements are generated randomly - though it would only not terminate if at least one of the elements of G is never generated which is practically impossible). This is how I am going to implement it. I haven’t already because I’ve been thinking about some of the details of the implementation. 
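For reference, since the inline formulas in the descriptions of is_log_deriv_k_t_radical, is_log_deriv_k_t_radical_in_field and is_deriv_k above were lost: in Bronstein's standard terminology (a hedged reconstruction that may differ in detail from what the post originally showed), with $D$ the derivation on $k(t)$,
$$f\ \text{is the derivative of an element of}\ k(t)\iff f=Dg\ \text{for some}\ g\in k(t),$$
$$f\ \text{is the logarithmic derivative of a}\ k(t)\text{-radical}\iff n\,f=\frac{Du}{u}\ \text{for some}\ n\in\mathbb{Z}\setminus\{0\},\ u\in k(t)^{*},$$
i.e. $f$ is, up to the integer $n$, the logarithmic derivative of an $n$-th root of $u$.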
One thing that could be problematic is multiple calls to reidemeister_presentation() to compute the presentation of K at each step. As I discovered when I was implementing the search for finite index subgroups last week, this can be very inefficient if the index of K is large (which it could well be for a small number of generators at the start). After giving it some thought, I realised that we could actually avoid finding the presentation completely, because K's coset table would be enough to calculate the order and to check if an element belongs to it. Assuming G is finite, K.order() can be calculated as the order of G divided by the length of the coset table, so knowledge of the generators is enough. And for membership testing, all that's necessary is to check if a given element stabilises K with respect to the action on the cosets described by the coset table. That should be a huge improvement over the obvious calls to the subgroup() method (a small self-contained illustration of this idea is included at the end of this post).

Actually, FpGroup doesn't currently have a __contains__() method, so one can't check if a word w is in a group G using w in G. This is easy to correct. What I was wondering was whether we wanted to be able to check if words over G belong to a subgroup of G created with the subgroup() method - that wouldn't be possible directly, because subgroup() returns a group on a different set of generators, but it wouldn't be too unreasonable to have SymPy do this:

>>> F, a, b = free_group('a, b')
>>> G = FpGroup(F, [a**3, b**2, (a*b)**2])
>>> K = G.subgroup([a])
>>> a**2 in K
True

I asked Kalevi about it earlier today and they said that it would be preferable to treat K in this case as a different group that happens to be linked to G through an injective homomorphism (essentially the canonical inclusion map). If we call this homomorphism phi, then the user can check if an element of G belongs to the subgroup represented by K like so: a**2 in phi.image(). Here phi.image() wouldn't be an instance of FpGroup but rather of a new class FpSubgroup that I wrote today - it is a way to represent a subgroup of a group while keeping the same generators as in the original group. Its only attributes are generators, parent (the original group) and C (the coset table of the original group by the subgroup), and it has __contains__(), order() and to_FpGroup() methods (the names are self-explanatory). For finite parent groups, the order is calculated as I described above for the kernel, and for infinite ones reidemeister_presentation() has to be called. The injective homomorphism from an instance of FpGroup returned by subgroup() would need to be worked out during the running of reidemeister_presentation() when the Schreier generators are defined - this is still to be done.

Another thing I implemented this week was an improved method for generating random elements of finitely presented groups. However, when I tried it out to see how random the elements looked, I got huge words even for small groups, so I didn't send a PR with it yet. Once I implement rewriting systems, these words could be reduced to much shorter ones. Speaking of rewriting systems, I think it could be good if the rewriting was applied automatically, much like x*x**-1 is removed for FreeGroupElements. Though I suppose this could be too inefficient sometimes - this would need testing. This is what I'll be working on this week.
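To make the coset-table idea above concrete, here is a small self-contained sketch (not the actual FpSubgroup code; the class name, the dict-based coset table mapping (coset, letter) -> coset, and words given as letter sequences are all simplifications made up for this illustration). Both order() and membership testing fall out of the table alone.

class CosetTableSubgroup:
    def __init__(self, parent_order, coset_table, num_cosets):
        self.parent_order = parent_order
        self.table = coset_table          # action of the generators on the cosets
        self.num_cosets = num_cosets      # the index of the subgroup in the parent

    def order(self):
        # |K| = |G| / [G : K], and the index is just the number of cosets
        return self.parent_order // self.num_cosets

    def __contains__(self, word):
        # a word lies in the subgroup iff it stabilises the coset of the
        # identity (coset 0) under the action recorded in the coset table
        coset = 0
        for letter in word:               # word given as a sequence of letters
            coset = self.table[(coset, letter)]
        return coset == 0

# Toy example: G = Z/4 = <a>, K = <a**2> has two cosets, numbered 0 and 1
table = {(0, 'a'): 1, (1, 'a'): 0, (0, 'A'): 1, (1, 'A'): 0}   # 'A' stands for a**-1
K = CosetTableSubgroup(parent_order=4, coset_table=table, num_cosets=2)
print(['a', 'a'] in K, ['a'] in K)        # True False
print(K.order())                          # 2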
## June 24, 2017

#### Arif Ahmed (ArifAhmed1995): Week 4 Report(June 18 – 24) : Dynamic Programming

Note : If you're viewing this on Planet SymPy and Latex looks weird, go to the WordPress site instead.

Prof. Sukumar came up with the following optimization idea: consider equation 10 in Chin et al. The integral of the inner product of the gradient of $f(x)$ and ${x}_{0}$ need not always be re-calculated, because it might have been calculated before. Hence, the given polynomial can be broken up into a list of monomials of increasing degree. Over a certain facet, the integrals of the monomials in the list can be calculated and stored for later use. Before calculating a certain monomial we need to see whether its gradient has already been calculated and is hence available for re-use.

Then again, there's another way to implement this idea of re-use. Given the degree of the input polynomial we know all the types of monomials which can possibly exist. For example, for max_degree = 2 the monomials are : $[1, x, y, x^2, xy, y^2]$. All the monomial integrals over all facets can be calculated and stored in a list of lists. This would be very useful for one specific use-case, that is when there are many different polynomials, with max degree less than or equal to a certain global maximum, to be integrated over a given polytope.

I'm still working on implementing the first one. That's because the worst-case time (when no or very few dictionary terms are re-used) turns out to be much more expensive than the straightforward method. Maybe computing the standard deviation among monomial degrees would be a good way to decide which technique to use. Here is an example to show what I mean. If we have the monomial list $[x^3, y^3, 3x^2y, 3xy^2]$, then the dynamic programming technique becomes useless, since the gradient of any term will have degree 2 and will not belong in the list. Now, the corresponding list of degrees is $[3, 3, 3, 3]$, and the standard deviation is zero. Hence the normal technique is applicable here. In reality, I'll probably have to use another measure to make the judgement of dynamic vs. normal.

Currently, I'm trying to fix a bug in the implementation of the first technique. After that, I'll try out the second one.

#### Arihant Parsoya (parsoyaarihant): GSoC17 Week 3 Report

I ran the test suite for algebraic rules 1.2. Here is the report:

Total number of tests: 1480
Passed: 130
Failed: 1350
Time taken to run the tests: 4967.428s

Very few tests passed since many rules use the ExpandIntegrand[ ] function to evaluate their integrals. There are 37 definitions of ExpandIntegrand[ ] (for different expressions). I am trying to implement the few definitions which are used in algebraic rules. The problem is that ExpandIntegrand[ ] is defined after 50% of all the rules in the list, while we have only been able to complete 10% of the utility functions in SymPy. Hence, it is tricky to implement ExpandIntegrand[ ] when it depends on functions which we haven't implemented.

A good thing is that MatchPy was able to match almost all the patterns. Manuel said that he is going to work on incorporating optional arguments in MatchPy next week. I will rework the Rubi parser to incorporate his changes. I will complete my work on ExpandIntegrand[ ] and also continue my work on implementing the remaining utility functions with Abdullah.

## June 23, 2017

#### Abdullah Javed Nesar (Abdullahjavednesar): GSoC: Progress on Utility Functions

Hello all!
After adding the Algebraic\Linear Products rules and tests, Arihant and I are working on implementations of the Utility Functions; there is a huge set of Utility Functions that needs to be implemented before we proceed to the next set of rules. Once we are done with the entire set of utility functions, implementing rules and adding tests will be an easy task. Understanding the Mathematica code, analyzing its dependent functions and converting it into Python syntax is a major task. We started with the implementation of only those functions which were necessary to support Algebraic rules\Linear Products, but because they were dependent on the previous functions we had to start off from the very beginning of the PDF. So far we have implemented more than 100 utility functions. Our priority is to implement all utility functions needed to support the Algebraic rules as soon as possible.

## June 20, 2017

#### Szymon Mieszczak (szymag): Week 3

In the last week I focused mainly on finishing tasks related to Lame coefficients. During this time two PRs were merged. In my previous post I described how we can calculate the gradient, curl and divergence in different types of coordinate systems, so now I describe only the new things which were already added to mainline. We decided to remove the dependency between the Del class and CoordSysCartesian. From a mathematical point of view this makes sense, because the nabla operator is just an entity which acts on a vector or scalar, and its behavior is independent of the coordinate system.

#### Ranjith Kumar (ranjithkumar007): Solvers

This week, I tried to get the earlier PRs #1291 and #1293 merged in. But, unfortunately, there were several mistakes in the earlier implementation of simplifications in ConditionSet. Thanks to isuruf, I was able to correct them and finally it got merged in. The #1293 PR on ImageSet is also complete but not yet merged. Alongside, I started working on implementing lower order polynomial solvers. This work is done here in #1296. Short description of this PR:

• Basic module for solvers is up and running.
• Adds solvers for polynomials with degree <= 4.
• Integrates Flint's wrappers for factorisation into solvers.
• Fixes a bug in the Forward Iterator (credits: Srajangarg).

This PR is still a WIP, as it doesn't handle polynomials with symbolic coefficients. More on Solvers coming soon !! Until then Stay tuned !!

## Fast callbacks from SymPy using SymEngine

My main focus the past week has been to get Lambdify in SymEngine to work with multiple output parameters. Last year Isuru Fernando led the development to support jit-compiled callbacks using LLVM in SymEngine. I started work on leveraging this in the Python wrappers of SymEngine but my work stalled due to time constraints. But since it is very much related to code generation in SymPy I did put it into my time-line (later in the summer) in my GSoC application. With the upcoming SciPy conference, and the fact that it would make a nice addition to our tutorial, I have put in work to get this done earlier than first planned.

Another thing on my to-do list from last week was to get numba working with lambdify. But for this to work we need to wait for a new upstream release of numba (which they are hoping to release before the SciPy conference).

## Status of codegen-tutorial material

I have not added any new tutorial material this week, but have been working on making all notebooks work under all targeted operating systems. However, every change to the notebooks has to be checked on all operating systems using both Python 2 and Python 3.
This becomes tedious very quickly, so I decided to enable continuous integration on our repository. I followed conda-forge's approach: Travis CI for OS X, CircleCI for Linux and AppVeyor for Windows (and a private CI server for another Linux setup). And last night I finally got a green light on all 4 of our CI services.

## Plans for the upcoming week

We have had a performance regression in sympy.cse which has bitten me multiple times this week. I managed to craft a small test case indicating that the algorithmic complexity of the new function is considerably worse than before (effectively making it useless for many applications). In my weekly mentor meeting (with Aaron) we discussed possibly reverting that change. I will first try to see if I can identify easy-to-fix bottlenecks by profiling the code. But the risk is that it is too much work to be done before the upcoming new release of SymPy, and then we will simply revert for now (choosing speed over extensiveness of the sub-expression elimination).

I still need to test the notebooks using not only msvc under Windows (which is currently used in the AppVeyor tests), but also mingw. I did manage to get it working locally but there is still some effort left in order to make this work on AppVeyor. It's extra tricky since there is a bug in distutils in Python 3 which causes the detection of mingw to fail. So we need to either:

• Patch cygwincompiler.py in distutils (which I believe we can do if we create a conda package for our tutorial material).
• ...or use something other than pyximport (I'm hesitant to do this before the conference).
• ...or provide a gcc executable (not a .bat file) that simply spawns gcc.bat (but that executable would need to be compiled during the build of our conda package).

Based on my work on making the CI services work, we will need to provide test scripts for the participants to run. We need to provide the organizers with these scripts by June 27th, so this needs to be decided upon during next week. I am leaning towards providing an environment.yml file together with a simple instruction for activating said environment, e.g.:

$ conda env create -f environment.yml
$ source activate codegen17
$ python -c "import scipy2017codegen as cg; cg.test()"

This could even be tested on our CI services.

I also intend to add a (perhaps final) tutorial notebook for chemical kinetics where we also consider diffusion. We will solve the PDE using the method of lines. The addition of a spatial dimension in this way is simple in principle, though things do tend to become tricky when handling boundary conditions. I will try to use the simplest possible treatment in order to avoid taking focus from what we are teaching (code-generation). It is also my hope that this combined diffusion-reaction model is a good candidate for ufuncify from sympy.utilities.autowrap.

#### Aaron Meurer (asmeurer): Automatically deploying this blog to GitHub Pages with Travis CI

This blog is now deployed to GitHub pages automatically from Travis CI. As I've outlined in the past, it is built with the Nikola static blogging engine. I really like Nikola because it uses Python, has lots of nice extensions, and is sanely licensed. Most importantly, it is a static site generator, meaning I write my posts in Markdown, and Nikola generates the site as static web content ("static" means no web server is required to run the site). This means that the site can be hosted for free on GitHub pages. This is how this site has been hosted since I started it.
I have a GitHub repo for the site, and the content itself is deployed to the gh-pages branch of the repo. But until now, the deployment has happened only manually with the nikola github_deploy command. A much better way is to deploy automatically using Travis CI. That way, I do not need to run any software on my computer to deploy the blog. The steps outlined here will work for any static site generator. They assume you already have one set up and hosted on GitHub. ### Step 1: Create a .travis.yml file Create a .travis.yml file like the one below sudo: false language: python python: - 3.6 install: - pip install "Nikola[extras]" doctr script: - set -e - nikola build - doctr deploy . --built-docs output/ • If you use a different static site generator, replace nikola with that site generator's command. • If you have Nikola configured to output to a different directory, or use a different static site generator, replace --built-docs output/ with the directory where the site is built. • Add any extra packages you need to build your site to the pip install command. For instance, I use the commonmark extension for Nikola, so I need to install commonmark. • The set -e line is important. It will prevent the blog from being deployed if the build fails. Then go to https://travis-ci.org/profile/ and enable Travis for your blog repo. ### Step 2: Run doctr The key here is doctr, a tool I wrote with Gil Forsyth that makes deploying anything from Travis CI to GitHub Pages a breeze. It automatically handles creating and encrypting a deploy SSH key for GitHub, and the syncing of files to the gh-pages branch. First install doctr. doctr requires Python 3.5+, so you'll need that. You can install it with conda: conda install -c conda-forge doctr or if you don't use conda, with pip pip install doctr Then run this command in your blog repo: doctr configure This will ask you for your GitHub username and password, and for the name of the repo you are deploying from and to (for instance, for my blog, I entered asmeurer/blog). The output will look something like this: $ doctr configure Enter the GitHub password for asmeurer: A two-factor authentication code is required: app Authentication code: 911451 What repo do you want to build the docs for (org/reponame, like 'drdoctr/doctr')? asmeurer/blog What repo do you want to deploy the docs to? [asmeurer/blog] asmeurer/blog Generating public/private rsa key pair. Your identification has been saved in github_deploy_key. Your public key has been saved in github_deploy_key.pub. The key fingerprint is: SHA256:4cscEfJCy9DTUb3DnPNfvbBHod2bqH7LEqz4BvBEkqc doctr deploy key for asmeurer/blog The key's randomart image is: +---[RSA 4096]----+ | ..+.oo.. | | *o*.. . | | O.+ o o | | E + o B . | | + S . +o +| | = o o o.o+| | * . . =.=| | . o ..+ =.| | o..o+oo | +----[SHA256]-----+ The deploy key has been added for asmeurer/blog. You can go to https://github.com/asmeurer/blog/settings/keys to revoke the deploy key. ================== You should now do the following ================== 1. Commit the file github_deploy_key.enc. script: - set -e - # Command to build your docs - pip install doctr - doctr deploy <deploy_directory> to the docs build of your .travis.yml. The 'set -e' prevents doctr from running when the docs build fails. Use the 'script' section so that if doctr fails it causes the build to fail. 3. Put env: global: Follow the steps at the end of the command: 1. Commit the file github_deploy_key.enc. 2. You already have doctr deploy in your .travis.yml from step 1 above. 3. 
Add the env block to your .travis.yml. This will let Travis CI decrypt the SSH key used to deploy to gh-pages. ### That's it Doctr will now deploy your blog automatically. You may want to look at the Travis build to make sure everything works. Note that doctr only deploys from master by default (see below). You may also want to look at the other command line flags for doctr deploy, which let you do things such as to deploy to gh-pages for a different repo than the one your blog is hosted on. I recommend these steps over the ones in the Nikola manual because doctr handles the SSH key generation for you, making things more secure. I also found that the nikola github_deploy command was doing too much, and doctr handles syncing the built pages already anyway. Using doctr is much simpler. ### Extra stuff #### Reverting a build If a build goes wrong and you need to revert it, you'll need to use git to revert the commit on your gh-pages branch. Unfortunately, GitHub doesn't seem to have a way to revert commits in their web interface, so it has to be done from the command line. #### Revoking the deploy key To revoke the deploy key generated by doctr, go your repo in GitHub, click on "settings" and then "deploy keys". Do this if you decide to stop using this, or if you feel the key may have been compromised. If you do this, the deployment will stop until you run step 2 again to create a new key. #### Building the blog from branches You can also build your blog from branches, e.g., if you want to test things out without deploying to the final repo. We will use the steps outlined here. Replace the line - doctr deploy . --built-docs output/ in your .travis.yml with something like - if [[ "${TRAVIS_BRANCH}" == "master" ]]; then doctr deploy . --built-docs output/; else doctr deploy "branch-$TRAVIS_BRANCH" --built-docs output/ --no-require-master; fi This will deploy your blog as normal from master, but from a branch it will deploy to the branch-<branchname> subdir. For instance, my blog is at http://www.asmeurer.com/blog/, and if I had a branch called test, it would deploy it to http://www.asmeurer.com/blog/branch-test/. Note that it will not delete old branches for you from gh-pages. You'll need to do that manually once they are merged. This only works for branches in the same repo. It is not possible to deploy from a branch from pull request from a fork, for security purposes. #### Enable build cancellation in Travis If you go the Travis page for your blog and choose "settings" from the hamburger menu, you can enable auto cancellation for branch builds. This will make it so that if you push many changes in succession, only the most recent one will get built. This makes the changes get built faster, and lets you revert mistakes or typos without them ever actually being deployed. #### Shikhar Jaiswal (ShikharJ): GSoC Progress - Week 3 Hello, this post contains the third report of my GSoC progress. This week was mostly spent on learning the internal structures of SymEngine functionalities and methods on a deeper level. ## Report ### SymEngine This week I worked on implementing Conjugate class and the related methods in SymEngine, through PR #1295. I also worked on implementing the “fancy-set” Range, the code for which would be complete enough to be pushed sometime in the coming GSoC week. Also, since it would probably be the last week that I’d be working on SymEngine, I spent some time going through the codebase and checking for discontinuities between SymEngine and SymPy’s implementations. 
### SymEngine.py I pushed in #155 fixing a trivial change from _sympify to sympify in relevant cases throughout the SymEngine.py codebase. The PR is reviewed and merged. I reached out to Isuru once again regarding further work to be undertaken for PyDy, and he suggested wrapping up Relationals from SymEngine. The work, which is pushed through #159, is in itself close to completion, with only specific parsing capabilities left to be implemented (for eg. x < y should return a LessThan(x, y) object). Wrapping Relationals also marks the initiation of SymEngine.py’s side of Phase II, which predominantly focuses on bug-fixing and wrapping. See you again! ## June 18, 2017 Last week and some of this one I was working on changing the order() method of FpGroups. Currently SymPy attempts to perform coset enumeration on the trivial subgroup and, if it terminates, the order of the group is the length of the coset table. A somewhat better way, at least theoretically, is to try and find a subgroup of finite index and compute the order of this subgroup separately. The function I’ve implemented only looks for a finite index subgroup generated by a subset of the group’s generators with a pseudo-random element thrown in (this can sometimes give smaller index subgroups and make the computation faster). The PR is here. The idea is to split the list of generators (with a random element) into two halves and try coset enumeration on one of the halves. To make sure this doesn’t go on for too long, it is necessary to limit the number of cosets that the coset enumeration algorithm is allowed to define. (Currently, the only way to set the maximum number of cosets is by changing the class variable CosetTable.coset_table_max_limit which is very large (4096000) by default - in the PR, I added a keyword argument to all functions relevant to coset enumeration so that the maximum can be set when calling the function.) If the coset enumeration fails (because the maximum number of cosets was exceeded), try the other half. If this doesn’t succeed, double the maximum number of cosets and try again. Once (if) a suitable subgroup is found, the order of the group is just the index times the order of the subgroup. The latter is computed in the same way by having order() call itself recursively. The implementation wasn’t hard in itself but I did notice that finding the subgroup’s presentation was taking far too long in certain cases (specifically when the subgroup’s index wasn’t small enough) and spent a while trying to think of a way around it. I think that for cyclic subgroups, there is a way to calculate the order during coset enumeration without having to find the presentation explicitly but I couldn’t quite work out how to do that. Perhaps, I will eventually find a way and implement it. For now, I left it as it is. For large groups, coset enumeration will take a long time anyway and at least the new way will be faster in some cases and may also be able to tell if a group is infinite (while coset enumeration on the trivial subgroup wouldn’t terminate at all). Now I am going to actually start working on homomorphisms which will be the main and largest part of my project. I’ll begin by writing the GroupHomomorphism class in this coming week. This won’t actually become a PR for a while but it is easier to do it first because I still have exams in the next several days. 
After that I'll implement the Knuth-Bendix algorithm for rewriting systems (I might make a start on this later this week as I'll have more time once the exams are over). Then I'll send a PR with rewriting systems and, once that's merged, the GroupHomomorphism class one, because it would depend on rewriting systems.

## June 17, 2017

#### Arif Ahmed (ArifAhmed1995): Week 3 Report(June 11 – 17): Implementing suggestions

Ondrej and Prof. Sukumar were quite busy this week, but eventually they reviewed both the notebook and the code. Their review was quite insightful as it brought to the surface one major optimization problem which I'll discuss below. I also encountered a major issue with the already defined Polygon class.

Firstly, there were some minor issues:

1. I had used Python floating point numbers instead of SymPy's exact representation in numerous places (both in the algorithm and the test file). So, that had to be changed first.
2. The decompose() method discarded all constant terms in a polynomial. Now, the constant term is stored with a key of zero. Example:
Before: decompose(x**2 + x + 2) = {1: x, 2: x**2}
After: decompose(x**2 + x + 2) = {0: 2, 1: x, 2: x**2}
3. Instead of computing component-wise and passing that value to integration_reduction(), the inner product is computed directly and then passed on. This leads to only one recursive call instead of two for the 2D case (and, in future, three for the 3D case).

Prof. Sukumar also suggested that I should add the option of hyperplane representation. This was simple to do as well. All I did was compute the intersection of the hyperplanes (lines as of now) to get the vertices. In the case of vertex representation the hyperplane parameters would have to be computed.

Major Issues:

1. Another suggestion was to add the tests for 2D polytopes mentioned in the paper (page 10). The two tests which failed were polytopes with intersecting sides. In fact, this was the error which I got: sympy.geometry.exceptions.GeometryError: Polygon has intersecting sides. It seems that the existing polygon class in SymPy does not account for polygons with intersecting sides. At first I thought maybe polygons are not supposed to have intersecting sides by geometrical definition, but that was not true. I'll have to discuss how to circumvent this problem with my mentor.
2. Prof. Sukumar correctly questioned the use of best_origin(). As explained in an earlier post, best_origin() finds that point on the facet which will lead to an inner product of lesser degree. But, obviously, there is an associated cost with computing that intersection point. So, I wrote some really basic code to test the current best_origin() versus simply choosing the first vertex of the facet.
from __future__ import print_function, division

from time import time
import matplotlib.pyplot as plt

from sympy import sqrt
from sympy.core import S
from sympy.integrals.intpoly import (is_vertex, intersection, norm, decompose,
                                     best_origin, hyperplane_parameters,
                                     integration_reduction, polytope_integrate,
                                     polytope_integrate_simple)
from sympy.geometry.line import Segment2D
from sympy.geometry.polygon import Polygon
from sympy.geometry.point import Point
from sympy.abc import x, y

MAX_DEGREE = 10


def generate_polynomial(degree, max_diff):
    # Build a polynomial whose term exponents differ by at most max_diff
    poly = 0
    if max_diff % 2 == 0:
        degree += 1
    for i in range((degree - max_diff)//2, (degree + max_diff)//2):
        if max_diff % 2 == 0:
            poly += x**i*y**(degree - i - 1) + y**i*x**(degree - i - 1)
        else:
            poly += x**i*y**(degree - i) + y**i*x**(degree - i)
    return poly


times = {}
times_simple = {}
for max_diff in range(1, 11):
    times[max_diff] = 0
for max_diff in range(1, 11):
    times_simple[max_diff] = 0


def test_timings(degree):
    hexagon = Polygon(Point(0, 0), Point(-sqrt(3) / 2, S(1) / 2),
                      Point(-sqrt(3) / 2, 3 / 2), Point(0, 2),
                      Point(sqrt(3) / 2, 3 / 2), Point(sqrt(3) / 2, S(1) / 2))
    square = Polygon(Point(-1, -1), Point(-1, 1), Point(1, 1), Point(1, -1))
    for max_diff in range(1, degree):
        poly = generate_polynomial(degree, max_diff)
        t1 = time()
        polytope_integrate(square, poly)          # current best_origin() technique
        times[max_diff] += time() - t1
        t2 = time()
        polytope_integrate_simple(square, poly)   # simply pick the first vertex
        times_simple[max_diff] += time() - t2
    return times


for i in range(1, MAX_DEGREE + 2):
    test_timings(i)

plt.plot(list(times.keys()), list(times.values()), 'b-', label="Best origin")
plt.plot(list(times_simple.keys()), list(times_simple.values()), 'r-', label="First point")
plt.show()

The following figures show computation time vs. the maximum difference in the exponents of x and y. The blue line is when best_origin() is used, the red line is when simply the first vertex of the facet (line segment) is selected. (Figures: hexagon and square.)

When the polygon has a lot of facets which intersect the axes, thereby making it an obvious choice to select that intersection point as the best origin, the current best origin technique works better, as in the case of the square where all four sides intersect the axes. However, in the case of the hexagon the best_origin technique results in a better point than the first vertex for only one facet; the added computation time makes it more expensive than just selecting the first vertex. Of course, as the difference between exponents increases, the time taken by best_origin is overshadowed by other processes in the algorithm. I'll need to look at the method again and see if there are preliminary checks which can be performed, making computing intersections a last resort.

## June 16, 2017

#### Arihant Parsoya (parsoyaarihant): GSoC17 Week 2 Report

Our plan was to implement all Algebraic rules and complete the Rubi test suite for algebraic integration. However, there was a setback in our work because of a bug I found in the ManyToOneReplacer of MatchPy. This bug prevents matching of expressions having many nested commutative expressions. Hence, we were not able to match all expressions in the test suite. Manuel Krebber has helped us a lot in adding features and giving suggestions to make MatchPy work for Rubi. He fixed the bug today. I will resume my testing of algebraic rules asap.

### Utility functions

Previously, our strategy was to implement only those functions which were used by algebraic rules. However, we found that those functions have dependencies on many other functions.
So, we decided to implement the functions from the beginning itself, to avoid problems in the long run. Mathematica has the ability to define function arguments as patterns:

SqrtNumberQ[m_^n_] := IntegerQ[n] && SqrtNumberQ[m] || IntegerQ[n-1/2] && RationalQ[m]
SqrtNumberQ[u_*v_] := SqrtNumberQ[u] && SqrtNumberQ[v]
SqrtNumberQ[u_] := RationalQ[u] || u===I

In the above code SqrtNumberQ is defined multiple times for different function arguments. To implement these functions in Python, Francesco suggested that we test the type of the argument using conditionals:

def SqrtNumberQ(expr):
    # SqrtNumberQ[u] returns True if u^2 is a rational number; else it returns False.
    if expr.is_Pow:
        m = expr.base
        n = expr.exp
        return IntegerQ(n) & SqrtNumberQ(m) | IntegerQ(n-1/2) & RationalQ(m)
    elif expr.is_Mul:
        return all(SqrtNumberQ(i) for i in expr.args)
    else:
        return RationalQ(expr) or expr == I

There was some problem while implementing Catch since SymPy doesn't currently have a Throw object:

MapAnd[f_,lst_] := Catch[Scan[Function[If[f[#],Null,Throw[False]]],lst];True]

I used the following code to implement MapAnd in Python:

def MapAnd(f, l, x=None):
    # MapAnd[f,l] applies f to the elements of list l until False is returned; else returns True
    if x:
        for i in l:
            if f(i, x) == False:
                return False
        return True
    else:
        for i in l:
            if f(i) == False:
                return False
        return True

## TODO

Abdullah and I have implemented ~40 utility functions so far. There are around ~40 more functions which need to be implemented in order to support all algebraic rules. We should be able to complete those functions by tomorrow. I will also resume adding tests to the Rubi test suite for algebraic functions.

## June 14, 2017

#### Gaurav Dhingra (gxyd): GSoC Week 2

After Kalevi's comment

The param-rde branch is getting inconveniently big for hunting down and reviewing small changes. I think we should create another branch that would contain my original PR plus the added limited_integrate code (and optionally something else). Then that should be merged. Thereafter it would be easier to review new additions.

I decided to remove a few commits from the top of the param-rde branch and made the branch param-rde mergeable. Yesterday Kalevi merged the pull request #11761, Parametric Risch differential equation; there were quite a few problems (unrelated to my work) with Travis but finally it passed the tests. So till now these are the pull requests that have been completed/started for my project:

• Merged: param-rde #11761: This is the pull request that Kalevi made back in September 2016. I further made commits for the limited_integration function and the implementation of the parametric Risch differential equation, though not many tests were added, which should definitely be done. I haven't been able to find tests that lead to non-cancellation cases (Kalevi mentions that we should be able to find them), so for the time being we decided to start the implementation of the cancellation routines, particularly the liouvillian cases (the others being the non-linear and hypertangent cases); there isn't a good reason to implement the hypertangent cases right now.
• Unmerged: param-rde_polymatrix: this pull request is intended to use PolyMatrix instead of Matrix (which is MutableDenseMatrix). Here is Kalevi's comment regarding it: "It would also be possible to use from … import PolyMatrix as Matrix. That would hint that there might be a single matrix in the future." The reason for making it is that Matrix doesn't play well with Poly (or related) elements.
• Merged: Change printing of DifferentialExtension object: it wasn't necessary to make this pull request, but it does help me debug problems a little more easily.

I was hoping that I would write a bit of mathematics in my blog posts, but unfortunately the things I have dealt with till now required me to focus on the programming API, introducing PolyMatrix so that it deals well with elements that are Poly's. This week I am going to deal with more of the mathematics.

## TODO for this week

I hope the next blog post is going to be a good mathematical one :)

## June 13, 2017

#### Szymon Mieszczak (szymag): Week 2

I spent my second week on introducing Lame coefficients into CoordSysCartesian. Unfortunately, our work is restricted by the SymPy structure, so we don't have too much freedom in our implementation. Hopefully, with my mentor Francesco, we found a solution for how to achieve our goals without destroying the vector module. This week showed that I have gaps in my knowledge about object-oriented programming and SymPy. Having access to the Lame coefficients I was able to modify the Del operator (in mathematics, the nabla operator) to handle spherical and cylindrical coordinate systems.

#### Ranjith Kumar (ranjithkumar007): Improve Sets Module Part II

Last week's PR #1281 on Complement, set_intersection and set_complement got merged in. This week, I implemented ConditionSet and ImageSet. This work is done in #1291 for ConditionSet and #1293 for ImageSet.

ConditionSet : It is useful to represent unsolved equations or a partially solved equation.

class ConditionSet : public Set {
private:
    vec_sym syms_;
    RCP<const Boolean> condition_;
    ...
}

Earlier, I used another data member for storing the base set. Thanks to isuruf, I was able to merge it within condition_. For implementing the contains method for ConditionSet, I added a Subs Visitor for Contains and And.

ImageSet : It is a Set representation of a mathematical function defined on symbols over some set. For example, x**2 for all x in [0,2] is represented as imageset({x}, x**2, [0,2]).

When is ImageSet useful? Say I need to solve the trigonometric equation sin(x) = 1. The solutions are 2*n*pi + pi/2 with n ranging over the integers; for such solution sets ImageSet is a useful representation to have.

I will try to get these two PRs merged in ASAP. Next week, I will be working on implementing solvers for lower degree (<=4) polynomials. See you next time. Bye for now !

## June 12, 2017

#### Shikhar Jaiswal (ShikharJ): GSoC Progress - Week 2

Hello, this post contains the second report of my GSoC progress.

## Report

### SymEngine

This week I mostly worked on implementing specific classes in SymEngine, namely Sign, Float and Ceiling, through PRs #1287 and #1290. The work is currently under review, but again, mostly complete. Though I had originally planned to implement more classes in my proposal, after a thorough look I realised that a number of the mentioned classes could easily be implemented on the SymEngine.py side only. As such, there was no hard requirement for them to be implemented in SymEngine. Also, a number of them had been pre-implemented, but rather as virtual methods, and not standalone classes. There are still a couple of classes that I'd be working on in the coming week, which would effectively finish up a huge part of the planned Phase I of my proposal.

### SymEngine.py

Isuru and I conversed a bit on whether I'd be interested in working on providing SymEngine support for PyDy (a multi-body dynamics toolkit), as a part of GSoC.
I agreed happily, and Isuru opened a couple of issues in SymEngine.py for me to work on, as I was completely new to PyDy. I started off wrapping up the Infinity, NegInfinity, ComplexInfinity and NaN classes through PR #151. I also worked on finishing Isuru's code, wrapping ccode and the CodePrinter class with an improvised doprint function, through PR #152. I also opened #153, working on acquisition and release of the Global Interpreter Lock (GIL) in the pywrapper.cpp file. See you again! Au Revoir

## June 11, 2017

#### Björn Dahlgren (bjodah): Status update week 2 GSoC

I have spent the second week of Google Summer of Code on essentially two things:

1. Continued work on type awareness in the code printers (CCodePrinter and FCodePrinter). The work is ongoing in gh-12693.
2. Writing tutorial code on code-generation in the form of jupyter notebooks for the upcoming SciPy 2017 conference.

## Precision aware code printers

After my weekly mentor meeting, we decided to take another approach to how we are going to represent Variable instances in the .codegen.ast module. Previously I had proposed to use quite a number of arguments (stored in .args since it inherits Basic). Aaron suggested we might want to represent that underlying information differently. After some discussion we came to the conclusion that we could introduce an Attribute class (inheriting from Symbol) to describe things such as value const-ness and pointer const-ness (those two are available as value_const and pointer_const). Attributes will be stored in a FiniteSet (essentially the SymPy version of a set) and the instances we provide as "pre-made" in the .codegen.ast module will be supported by the printers by default. Here is some example code showing what the current proposed API looks like (for C99):

>>> u = symbols('u', real=True)
>>> ptr = Pointer.deduced(u, {pointer_const, restrict})
>>> ccode(Declaration(ptr))
'double * const restrict u;'

and for Fortran:

>>> vx = Variable(Symbol('x'), {value_const}, float32)
>>> fcode(Declaration(vx, 42))
'real(4), parameter :: x = 42'

The C code printer can now also print code using different math functions depending on the targeted precision (functions guaranteed to be present in the C99 standard):

>>> ccode(x**3.7, standard='c99', precision=float32)
'powf(x, 3.7F)'
>>> ccode(exp(x/2), standard='c99', precision=float80)
'expl((1.0L/2.0L)*x)'

## Tutorial material for code generation

Aaron, Jason and I have been discussing what examples to use for the tutorial on code generation with SymPy. Right now we are aiming to use quite a few examples from chemistry actually, and more specifically chemical kinetics. This is the precise application which got me started using SymPy for code-generation, so it lies close to my heart (I do extensive modeling of chemical kinetics in my PhD studies). Working on the tutorial material has already been really helpful for getting insight into development needs for the existing classes and functions used for code-generation. I was hoping to use autowrap from the .utilities module. Unfortunately I found that it was not flexible enough to be useful for integration of systems of ODEs (where we need to evaluate a vector-valued function taking a vector as input). I did attempt to subclass the CodeWrapper class to allow me to do this, but personally I found those classes to be quite hard to extend (much unlike the printers, which I've often found to be intuitive).

My current plan for the chemical kinetics case is to first solve it using sympy.lambdify.
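The reaction system below is made up purely for illustration (it is not the tutorial's actual example), but it shows the kind of workflow meant here: lambdify turns the symbolic right-hand side of an ODE system into a numerical callback that an integrator such as scipy.integrate.odeint can consume.

from sympy import symbols, lambdify
import numpy as np
from scipy.integrate import odeint

# hypothetical reversible reaction A <-> B with rate constants k1, k2
A, B, k1, k2 = symbols('A B k1 k2')
ydot = [-k1*A + k2*B,      # dA/dt
         k1*A - k2*B]      # dB/dt

f = lambdify((A, B, k1, k2), ydot)                # fast numerical callback
rhs = lambda y, t, p: f(y[0], y[1], p[0], p[1])   # adapt the signature for odeint

tout = np.linspace(0, 10, 50)
yout = odeint(rhs, [1.0, 0.0], tout, args=((3.0, 1.0),))
print(yout[-1])                                   # concentrations at t = 10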
That allows for quick prototyping, and unless one has very high demands with respect to performance, it is usually good enough. The next step is to generate a native callback (it was here I was hoping to use autowrap with the Cython backend). The current approach is to write the expressions as Cython code using a template. Cython conveniently follows Python syntax, and hence the string printer can be used for the code generation. Doing this speeds up the integration considerably. At this point the bottleneck is going back and forth through the Python layer. So in order to speed up the integration further, we need to bypass Python during integration (and let the solver call the user provided callbacks without going through the interpreter). I did this by providing a C-code template which relies on CVode from the Sundials suite of non-linear solvers. It is a well established solver and is already available for Linux, Mac & Windows under conda from the conda-forge channel. I then provide a thin Cython wrapper, calling into the C-function, which: 1. sets up the CVode solver 2. runs the integration 3. records some statistics Using native code does come at a cost. One of the strengths of Python is that it is cross-platform. It (usually) does not matter if your Python application runs on Linux, OSX or Windows (or any other supported operating system). However, since we are doing code-generation, we are relying on compilers provided by the respective platform. Since we want to support both Python 2 & Python 3 on said three platforms, there are quite a few combinations to cover. That meant quite a few surprises (I now know for example that MS Visual C++ 2008 does not support C99), but thanks to the kind help of Isuru Fernando I think I will manage to have all platform/version combinations working during next week. Also planned for next week: • Use numba together with lambdify • Use Lambdify from SymEngine (preferably with the LLVM backend). • Make the notebooks more tutorial like (right now they are more of a show-case). and of course: continued work on the code printers. That's all for now feel free to get in touch with any feedback or questions. #### Arif Ahmed (ArifAhmed1995): Week 2 Report(June 3 – June 10) : Working Prototype, Improving functionality. Note : If you’re viewing this on Planet SymPy and Latex looks weird, go to the WordPress site instead. The 2D use case works perfectly although there are limitations. The current API and method-wise limitations are discussed here. That lone failing test was caused due to typecasting a Python set object to a list. Sets in Python do not support indexing, so when they are coerced to list, indexing of the resulting object is arbitrary. Therefore, I had to change to an axes list of [x, y]. Some other coding improvements were suggested by Gaurav Dhingra (another GSoC student). One can view them as comments to the pull request. Clockwise Sort I wanted to overcome some of the method-wise limitations. So, the first improvement I did was to implement a clockwise sorting algorithm for the points of the input polygon. Come to think of it, for the 3D use case the points belonging to a particular facet will have to be sorted anti-clockwise for the algorithm to work. Therefore, it would be better to include an extra argument “orient” with default value 1 (clockwise sort) and -1 when anti-clockwise sort. 
First I tried to think of it myself, and naturally the first thing that came to my mind was sorting the points depending on their $\arctan\theta$ value, where $\theta$ is the angle that the line from that point to an arbitrary reference point (center) makes with the x-axis (more correctly, the independent variable axis). But obviously this is not efficient, because calculating the $\arctan\theta$ value is expensive.

Then I came across this answer on StackOverflow. It was a much better technique because it uses no $\arctan\theta$ computations, no division operations and no distance computations using square roots (as pointed out in the comments). Let us understand the reasoning.

Firstly, I'm sure we all know that the distance of a point $(x_1, y_1)$ from the line $ax+by+c=0$ is $\frac{ax_1+by_1+c}{\sqrt{a^2+b^2}}$. Now, for clockwise sorting we need to decide the left or right orientation of a point given the other point and a constant reference (in this case, the center). Therefore, it is required to define a custom compare function. Given three points $a, b, center$, consider the line made by the points $b$ and $center$. The point $a$ will be on the left if the numerator of the above formula is negative, and on the right if it is positive. The cases where we need not consider this formula are when we are sure of:

Case 1 > The center lies in between $a$ and $b$ (first check this by comparing x-coordinates).
Case 2 > All the points are on a line parallel to the y-axis:
Sub-Case 1 > If any point is above the center, that point is "lesser" than the other one.
Sub-Case 2 > If both are below the center, the point closer to the center is "lesser" than the other one.
Case 3 > This is when Case 1, Case 2 and the standard check all fail. This can only happen if $a$, $b$ and the center are all collinear and $a$, $b$ both lie on the same side of the center. Then, the one farthest from the center is "lesser" than the other one.

Ondrej and Prof. Sukumar have looked at the notebook but will discuss it in more detail and then inform me about the final 2D API. Till then, I'll work on the existing limitations.

## June 09, 2017

#### Arihant Parsoya (parsoyaarihant): GSoC Week1 Report

Francesco was skeptical whether MatchPy could support all Rubi rules. Hence we (Abdullah and I) are trying to implement the first set of rules (Algebraic) in MatchPy as soon as possible. Major things involved in accomplishing this are:

• [Completed] Complete writing the parser for Rubi rules from DownValues[] generated from Mathematica.
• [Partially Completed] Complete the basic framework for MatchPy to support Rubi rules.
• [Completed] Parse Rubi tests.
• [Incomplete] Add utility functions for Rubi in SymPy syntax.

#### Generation of new patterns from Optional Arguments

Mathematica supports optional arguments for Wild symbols. Some common functions (such as Mul, Add, and Pow) of Mathematica have built-in default values for their arguments. MatchPy does not support optional arguments for its Wildcards. So, Manuel Krebber suggested adding two more rules for each optional argument that exists in the pattern. For example:

Int[x_^m_.,x_Symbol] := x^(m+1)/(m+1) /; FreeQ[m,x] && NonzeroQ[m+1]

In the above rule, the default value of m_ is 1. So I implemented these rules in MatchPy:

Int[x_^m_.,x_Symbol] := x^(m+1)/(m+1) /; FreeQ[m,x] && NonzeroQ[m+1]
(* substituting m = 1 *)
Int[x_,x_Symbol] := x^(2)/(2) /; FreeQ[2,x] && NonzeroQ[2]

I have used powerset to generate all combinations of default values to generate patterns.
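To illustrate the idea (this is a simplified sketch, not the actual parser code; the pattern string and default values below are made up), each optional wildcard can either keep its pattern form or be replaced by its default value, and iterating over the powerset of the optional arguments enumerates every such variant:

from itertools import chain, combinations

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# optional wildcards of a pattern like Int[(a_. + b_.*x_)^m_., x_]
# together with their Mathematica default values
defaults = {'a': '0', 'b': '1', 'm': '1'}

template = 'Int[(a + b*x)^m, x]'
variants = []
for subset in powerset(defaults):
    pat = template
    for name in subset:                       # substitute defaults for this subset
        pat = pat.replace(name, defaults[name])
    variants.append(pat)

# 2**3 = 8 pattern variants, e.g. 'Int[(0 + 1*x)^1, x]' when all defaults are used
print(len(variants))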
Code for the parser can be found here.

#### Utility functions

There are many utility functions for Rubi written in Mathematica. We are currently focusing on implementing the functions which are being used in the Algebraic rules. As soon as we complete our implementation (hopefully by this weekend), we can start running the test suite for Rubi. I will work on implementing utility functions along with Abdullah in the coming days. I will keep testing the module as we implement utility functions and add more rules to our matcher.

## June 08, 2017

#### Abdullah Javed Nesar (Abdullahjavednesar): Week 2 Begins

Hello all! The first week comes to an end and we (Arihant and I) have partially implemented Algebraic functions\Linear products.

THE PROGRESS

Most of my task was to translate Algebraic Integration tests from Maple syntax to Python, write Utility Functions and write tests for the Utility Functions. I have already implemented 1 Algebraic functions\1 Linear products\1.2 (a+b x)**m (c+d x)**n here. And test sets for 1 Algebraic functions\1 Linear products\1.3 (a+b x)**m (c+d x)**n (e+f x)**p and 1 Algebraic functions\1 Linear products\1.4 (a+b x)^m (c+d x)^n (e+f x)^p (g+h x)^q are almost ready here, along with most of the Utility Functions we require so far; after this we will have successfully covered the Algebraic\Linear products portion. What's delaying our progress next is the Utility Functions: I have been taking help from this pdf on Integration Utility Functions and looking for their definitions on the Mathematica website, but the major problem is that the definitions provided are not very clear, or there is no definition at all. Meanwhile Arihant was implementing Rubi rules, default values for variables and working on constraints.

## June 05, 2017

#### Shikhar Jaiswal (ShikharJ): GSoC Progress - Week 1

Ahoy there! This post contains my first GSoC progress report.

## Report

### SymEngine

My previous PR (#1276) on Relationals is reviewed and merged in. I also worked on introducing additional support for them. The PRs #1279 and #1280 are also reviewed and merged, leaving only LLVM support as a work in progress. I also noticed that one of the pending requests for the 0.3.0 milestone of SymEngine.py was the implementation of vector-specific methods such as dot() and cross() in SymEngine. The work is done here and is mostly complete. Apart from this, I started the planned implementation of SymPy classes by implementing the Dummy class in SymEngine here. Most of the above mentioned pending work should be ready to merge within a couple of days.

### SymPy

Continuing my work on sympy/physics, I pushed in #12703 covering the stand-alone files in the physics module and #12700, which is a minor addition to the work done in physics/mechanics.

### SymEngine.py

Isuru pointed out some inconsistencies in the existing code for the ImmutableMatrix class, which needed to be fixed for the 0.3.0 milestone. The code was fixed through the PR #148. We'll probably have a SymEngine.py release next week, after which I plan to port over pre-existing functionalities in SymEngine.py to SymPy's left-over modules. That's all for now.

#### Ranjith Kumar (ranjithkumar007): Improve Sets Module - The Beginning

As discussed in the previous blog, this and the next week's task for me is to improve the Sets module. I started off by implementing the class for Complement and the functions set_complement and set_intersection. This work is done in #1281.
set_intersection : RCP<const Set> set_intersection(const set_set &in);

set_intersection tries to simplify the input set of sets by applying various rules:

• Trivial rules, like when one of the inputs is emptyset().
• Handles finitesets by checking all other sets in the input for all elements of the finitesets.
• If any of the sets is a Union, then it tries to return a Union of Intersections.
• If any of the sets is a Complement, then this returns a simplified Complement.
• Pair-wise rules check every pair of sets and try to merge them into one.

set_complement : RCP<const Set> set_complement(const RCP<const Set> &universe, const RCP<const Set> &container);

For this function, I had to implement the virtual function set_complement for all the existing child classes of Set, and this function simply calls the container's set_complement() with the given universe.

Details of the Complement class:

• Similar to other classes in the sets module, the class prototype for Complement is class Complement : public Set
• It stores two Sets, universe and container, and Complement(a, b) represents a - b, where a is the universe and b is the container.

Apart from this, I started maintaining a wiki page for GSoC's progress report, as suggested by Amit Kumar in the first weekly meeting. I will post minutes of all meetings on this wiki page. Next week, I will be working on ImageSet and ConditionSet. That's it for now. See you next time. Until then, Goodbye !

Last week I was working on implementing the Smith Normal Form for matrices over principal ideal domains. I'm still making corrections as the PR is being reviewed. I used the standard algorithm: use invertible (in the ring) row and column operations to make the matrix diagonal and make sure the diagonal entries have the property that an entry divides all of the entries that come after it (this is described in more detail on Wikipedia, for example). I ran into trouble when trying to determine the domain of the matrix entries if the user hadn't explicitly specified one. Matrices in SymPy don't have a .domain attribute or anything similar, and can contain objects of different types. So, if I did attempt to find some suitable principal ideal domain over which to consider all of them, the only way would be to try a few of the ones that are currently implemented until something fits, and that would have to be extended every time a new domain is implemented and generally sounds tedious to do. I asked on the Group Theory channel if there was a better way, and that started a discussion about changing the Matrix class to have a .domain or a .ring attribute and having the entries checked at construction. In fact, this has been brought up by other people before as well. Unfortunately, adding this attribute would require going over the matrix methods implemented so far and making sure they don't assume anything that might not hold for general rings (especially non-commutative ones: the determinant in its traditional form wouldn't even make sense; however, it turns out there are several generalisations of determinants to non-commutative rings, like quasideterminants and the Dieudonné determinant). And this would probably take quite a while and is not directly related to my project. So in the end we decided to have the function work only for matrices for which a .ring attribute has been added manually by the user.
For example:

>>> from sympy.polys.solvers import RawMatrix as Matrix
>>> from sympy.polys.domains import ZZ
>>> m = Matrix([[0, 1, 3], [2, 4, 1]])
>>> setattr(m, "ring", ZZ)

Hopefully, at some point in the future matrices will have this attribute by default.

The Smith Normal Form was necessary to analyse the structure of the abelianisation of a group: abelian groups are modules over the integers, which form a PID, and so the Smith Normal Form is applicable if the relators of the abelianisation are written in the form of a matrix. If 0 is one of the abelian invariants (the diagonal entries of the Smith Normal Form), then the abelianisation is infinite and so must be the whole group. I've added this test to the .order() method for FpGroups and now it is able to terminate with the answer oo (infinity) for certain groups for which it wouldn't terminate previously.

I hope to extend this further with another way to evaluate the order: trying to find a finite index cyclic subgroup (this can be achieved by generating a random word in the group and considering the coset table corresponding to it), and obtaining the order of the group by multiplying the index with the order of the cyclic subgroup. The latter could be infinite, in which case the whole group is. Of course, this might not always terminate, but it will terminate in more cases than coset enumeration applied directly to the whole group. This is also what's done in GAP. I have begun working on it, but this week is the last before my exams, and I feel that I should spend more time revising. For this reason, I probably won't be able to send a PR with this new test by the end of the week. However, it would most likely be ready by the end of the next one, and considering that the only other thing I planned to do until the first evaluation period was to write (the main parts of) the GroupHomomorphism class, assuming that the things it depends on (e.g. rewriting systems) are already implemented, I believe I am going to stay on schedule.

## June 03, 2017

#### Björn Dahlgren (bjodah): A summer of code and mathematics

Google is generously funding work on selected open source projects each year through the Google Summer of Code project. The project allows under- and post-graduate students around the world to apply to mentoring organizations for a scholarship to work on a project during the summer. This spring I made the leap: I wrote a proposal which got accepted, and I am now working full time for the duration of this summer on one of these projects. In this blog post I'll give some background and tell you about the first project week.

## Background

For a few years now I've been contributing code to the open-source project SymPy. SymPy is a so-called "computer algebra system", which lets you manipulate mathematical expressions symbolically. I've used this software package extensively in my own doctoral studies and it has been really useful. My research involves formulating mathematical models to: rationalize experimental observations, fit parameters or aid in design of experiments. Traditionally one sits down and derives equations, often using pen & paper, then one writes computer code which implements said model, and finally one writes a paper with the same formulas as LaTeX code (or something similar). Note how this procedure involves writing the same equations essentially three times: during derivation, coding and finally the article. By using SymPy I can, from a single source:

1. Do the derivations (fewer hard-to-find mistakes)
2.
A very attractive side-effect of this is that one truly gets reproducible research (reproducibility is one of the pillars of science). Every step of the process is self-documented, and because SymPy is free software, anyone can redo them. I can't stress enough how big this truly is. It is also the main reason why I haven't used proprietary software in place of SymPy: even though such software may be considerably more feature complete than SymPy, any code I wrote for it would be inaccessible to people without a license (possibly even including myself if I leave academia). For this work-flow to work in practice, the capabilities of the computer algebra system need to be quite extensive, and it is here my current project with SymPy comes in. I have had several ideas on how to improve capability number two listed above, generating the numerical code, and now I get the chance to realize some of them and work with the community to improve SymPy.

## First week

The majority of the first week has been spent on introducing type-awareness into the code-printers. SymPy has printer classes which specialize in printing e.g. strings, C code, Fortran code, etc. Up to now there has been no way to indicate what precision the generated code should target. The default floating point type in Python is, for example, "double precision" (i.e. 64-bit binary IEEE 754 floating point). This is also the default precision targeted by the code printers. However, there are occasions where one wants to use another precision. For example, consumer-class graphics cards, which are ubiquitous, often have excellent single precision performance but are intentionally capped with respect to double precision arithmetic (for marketing reasons). At other times, one wants just a bit of extra precision, and extended precision (80-bit floating point, usually the data type of C's long double) is just what's needed to compute some values with the required precision. In C, the corresponding math functions have been standardized since C99. I have started the work to enable the code printers to print this in a pull request to the SymPy source repository. I have also started experimenting with a class representing arrays.

The first weekly meeting with Aaron Meurer went well, and we also briefly discussed how to reach out to the SymPy community for wishes on what code-generation functions to provide. I've set up a wiki page for it under the SymPy project's wiki: https://github.com/sympy/sympy/wiki/codegen-gsoc17 I'll be sending out an email to the SymPy mailing list asking for feedback. We also discussed the upcoming SciPy 2017 conference, where Aaron Meurer and Jason Moore will be giving a tutorial on code generation with SymPy. They've asked me to join forces with them, and I've happily accepted that offer; I am looking forward to working on the tutorial material and teaching fellow developers and researchers in the scientific Python community how to leverage SymPy for code generation. The next blog post will most likely be a bit more technical, but I thought it was important to give some background on what motivates this effort and what the goal is.

#### Arif Ahmed (ArifAhmed1995): Week 1 Report (May 24 – June 2): The 2D prototype

As per the timeline, I spent the week writing a prototype for the 2D use case. Here is the current status of the implementation.
At the time of writing this blog post, the prototype mostly works but fails for one of the test cases. I haven't used a debugger on it yet but will get around to fixing it today. The main agenda for the next week should be to improve this prototype and make it more robust and scalable. Making it robust would mean that the implementation need not require a naming convention for the axes and can handle more boundary cases. Scalability would refer to implementing it in a way which works reasonably fast for large input data. It would also mean designing the functions in a use-case-independent way so that they could be re-used for the 3-polytopes. The simplest example would be the norm function: it works for both 2D and 3D points. Currently, some parts of the implementation are hacky and depend upon denoting the axes by specific symbols, namely x and y. That would be the main focus for the beginning of Week 2. Let's take a look at the methods implemented and their respective shortcomings. The methods currently implemented are:

1. polytope_integrate : This is the main function, which calls integration_reduction, which in turn calls almost everything else. Mathematically, this function is the first application of the generalized Stokes theorem: integration over the 2-polytope surface is related to integration over its facets (lines in this use case). Nothing much to change here.
2. integration_reduction : This one is the second application of the Stokes theorem: integration over facets (lines) is related to evaluation of the function at the vertices. The implementation can be made more robust by avoiding accessing the values of best_origin by index and instead accessing them by key. One workaround would be to assign the symbol x to the independent variable and y to the dependent one; in the code, all dictionary accesses would then be done with these symbols. This offers scalability as well: an extra symbol z will suffice to denote the third axis variable for the 3D use case.
3. hyperplane_parameters : This function returns the values of the hyperplane parameters of which the facets are a part. I can't see improvements to be made with respect to the 2D use case, but that may change with the new API.
4. best_origin : This function returns a point on the line for which the vector inner product between the divergence of the homogeneous polynomial and that point yields a polynomial of least degree. This function is very much dependent on the naming scheme of the axes, but this issue can be circumvented by assigning symbols as explained above.
5. decompose : This function works perfectly for all test data. However, I'll see if it can be made faster; that will be important for scaling up to polynomials containing a large number of terms. (A small sketch of the underlying idea appears after this list.)
6. norm : This is a really simple function. The only reason to change its code would be adding support for different representations of points.
7. intersection, is_vertex : Both of these methods are simple to write and don't require any further changes for the 2D case (at least as far as I can see).
9. plot_polytope, plot_polynomial : These are simple plotting functions to help visualize the polytope and polynomial respectively. If extra features for the plots are required, then suitable changes to the code can be made.
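For intuition, the idea behind decompose (splitting a polynomial into homogeneous components, grouped by total degree) can be sketched in a few lines of SymPy. This is only an illustrative sketch, not the prototype's actual implementation:

```python
# Illustrative sketch of splitting a polynomial into homogeneous parts,
# keyed by total degree; not the code used in the prototype.
from sympy import Poly, symbols

x, y = symbols('x y')

def decompose_by_degree(expr, gens=(x, y)):
    parts = {}
    for monom, coeff in Poly(expr, *gens).terms():
        term = coeff
        for g, e in zip(gens, monom):
            term *= g**e
        degree = sum(monom)
        parts[degree] = parts.get(degree, 0) + term
    return parts

print(decompose_by_degree(x**2*y + 3*x + 5))   # {3: x**2*y, 1: 3*x, 0: 5}
```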
After I get all the tests for the 2D prototype to pass, I'll write a Jupyter notebook and add it to the examples/notebooks folder of SymPy. As recommended by Ondrej, it should contain examples along with some plots and a description of whatever basic API exists now. The main focus for Week 2 should be:

1 > Get the prototype to pass all test cases (should be completed really soon).
2 > Make the notebook and discuss a better API for the 2D use case.
3 > Implement the final 2D API and write any tests for it.
4 > Discuss how to extend to the 3D use case (implementation and API).

## June 02, 2017

#### Szymon Mieszczak (szymag): Week 1

My first GSoC task was to create three classes: Curl, Divergence, and Gradient. They create objects which are unevaluated mathematical expressions. Sometimes it is better to work on such expressions, for example when we want to check some identity: we have to check whether it is true for every possible vector. There is still some work to do here, because in the next step we want to create abstract vector expressions. There is one open PR corresponding to the described task.

#### Arihant Parsoya (parsoyaarihant): Coding period starts

The community bonding period is completed and the coding period started on 31st May. Our original plan was to rewrite the pattern matcher for SymPy and generate a decision tree for Rubi rules from the Downvalues[] generated by Francesco. Aaron gave us a link to this paper by Manuel Krebber. Pattern matching algorithms discussed in the paper are implemented in the MatchPy library.

#### MatchPy

MatchPy uses discrimination nets to do many-to-one matching (i.e. matching one subject against multiple patterns). MatchPy generates its own discrimination net as we add patterns to its ManyToOneReplacer. Discrimination nets can be more efficient than a decision tree, hence we decided to use MatchPy as the pattern matcher for Rubi. I wrote basic matching programs which implement a few Rubi rules in MatchPy. I found the following issues with MatchPy:

• MatchPy cannot be directly added to SymPy because it is written in Python 3.6 (whereas SymPy supports Python 2 as well).
• It lacks mathematical operations on its Symbols, which makes it difficult to implement Rubi constraints. A workaround for this issue is to sympify the expression and do the calculations in SymPy.
• MatchPy uses external libraries such as multiset, enum, and typing. SymPy does not encourage using external libraries in its code. Those modules would need to be reimplemented in SymPy if we are going to import MatchPy code into SymPy directly.

Re-implementing MatchPy's algorithms in SymPy can be a very challenging and time-consuming task, as I am not very familiar with the algorithms used in MatchPy. I used 3to2 to convert the MatchPy code to Python 2 syntax. The majority of tests are passing with the Python 2 syntax, and I am currently trying to get the code fully working in Python 2. In the coming week I will import the MatchPy code into SymPy directly. If there are setbacks with this approach, I will reimplement MatchPy's algorithms in SymPy.

Planet SymPy is made from the blogs of SymPy's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet SymPy with RSS, FOAF or OPML.
https://docs.exabeam.com/en/cloud-delivered-advanced-analytics/all/release-notes/156090-what-s-new.html
What's New

What's New in i60

Optimized Ingestion Performance
With Advanced Analytics i60, Exabeam has further optimized the process of ingesting and parsing logs. You can confidently ingest more data from across your organization to increase detection coverage without impacting processing performance.

Faster and More Efficient Parsing
Exabeam has improved domain name recognition within parsers for faster and higher fidelity threat detection.

The main navigation menu, previously located on the top-right side of the user interface, has been redesigned into a vertical menu on the upper-left side of the interface. This redesign is part of a move towards unifying the navigation experience across the different Exabeam environments.

The Licenses page has been updated to support new Exabeam licensing options. See Types of Exabeam Product Licenses in the Advanced Analytics Administration Guide.

What's New in i59

View any raw log from multiple Data Lake log sources in Smart Timelines. After you upgrade to Advanced Analytics i59, configure additional Data Lake clusters as a log source. After you restart the Analytics Engine, you can click View Logs in Smart Timelines to access raw logs across any Data Lake cluster.

Advanced Analytics can ingest more logs, faster than ever. Advanced Analytics can ingest a larger volume and wider variety of logs from multiple sources. If there's a spike in the logs your system ingests, your system can handle the spike and keep running. If you restart your system, Advanced Analytics can more quickly recover from the lapse and resume ingesting and processing logs in real time.

Improved Security
Improved login security for LDAP authentication.

What's New in i58

A Better Experience Commenting in Smart Timelines™
We refined your experience around commenting in Smart Timeline sessions so it's smoother and easier. It's easier to navigate long comment threads. Previously, you couldn't conveniently view all the comments for a given session: comments automatically closed when you scrolled between sessions, sessions wouldn't load, and you could only see the two latest comments. Now, we improved the behavior of the session summary information so that comments remain open when you scroll between sessions. Instead of clicking LOAD MORE, you can view and scroll through all comments for a given session. Smart Timelines now display the correct number of comments in the session summary or Daily Summary information. Previously, when you deleted or added a comment, the number of comments in the session summary or Daily Summary information didn't change. Now, this issue is resolved and you can trust that you're seeing the correct number of comments.

Optimized Numerical Histograms
To make it less likely that your system runs out of memory, we minimized how much memory numerical histograms consume. Previously, histograms used up to half of the Analytics Engine's long-term memory. Of the memory histograms used, two-thirds was attributed to their data structure. We tuned the data structure for numerical histograms so they use even less memory.

Other Improvements
• Audit logging now includes SAML events, added/edited secured resources, and API cluster authorizations.
• Context Tables now include a Created Time field to log when the context data was imported. Since context data may change over time, this field tells you when the data was valid.
What's New in i57

Stuck and Failed Parser Detection
To keep the Log Ingestion and Messaging Engine (LIME) running, your system detects stuck and failed parsers early and pauses them. Parsers use regular expressions to extract data from logs. If these regular expressions are incorrect, parsers can enter an infinite loop and get stuck, or fail with a non-timeout exception. Sometimes, parsers can also get stuck when they can't parse incorrect input data. When parsers fail or get stuck, LIME stops working because it can't move forward until the previous parser is done processing. Now, if a parser takes too long to process, a mechanism pauses those parsers to keep your system running. If the parser exceeds a configured time limit, your system fails the parser with a timeout exception, logs the error at a DEBUG severity level, and notes the parser in internal error statistics. Your system periodically checks the error statistics to identify any parsers that have accumulated more than a certain number of errors, then pauses them. (A generic sketch of this pause-after-repeated-errors pattern appears at the end of these notes.) After your system pauses a stuck or failed parser, you can view the parser in the list of paused parsers under System Health. You do not receive a system health alert when a stuck or failed parser is paused, but you will continue to receive a system health alert when a slow parser is paused.

Exabeam Documentation: Paused Parsers
Exabeam Documentation: View Paused Parsers

Histograms Optimized for Better Stability
To stabilize your system and keep it running, histograms now consume less memory. Previously, histograms used up to half of the Analytics Engine's long-term memory. Of the memory histograms used, two-thirds was attributed to their data structure. To free memory on your system and keep important services running, histograms now use a data structure that consumes less memory. With this new data structure, your system uses less heap space in Java and runs out of memory less frequently.

Health Checks Refined for Cloud-Delivered Deployments
In System Health, you only see the relevant health checks for cloud-delivered deployments. You no longer see health checks that apply only to hardware or virtual deployments.
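To make the stuck-parser handling described above more concrete, here is a minimal, hypothetical sketch of the general pattern (a per-parser error counter with a time limit and a pause threshold). It is not Exabeam's code; all names and thresholds are invented:

```python
# Hypothetical sketch of the "pause a parser after repeated errors" pattern;
# not Exabeam's implementation. Names and thresholds are invented.
import re
import time
from collections import Counter

TIME_LIMIT_SECONDS = 2.0        # treat a slower run as a timeout failure
MAX_ERRORS_BEFORE_PAUSE = 5     # pause the parser once it exceeds this count

error_counts = Counter()
paused_parsers = set()

def apply_parser(name, pattern, raw_log_line):
    """Run one regex parser over a log line, recording failures and pausing
    the parser after too many of them. The time limit is checked after the
    call returns; a real system would enforce it preemptively."""
    if name in paused_parsers:
        return None
    started = time.monotonic()
    try:
        match = re.search(pattern, raw_log_line)
        if time.monotonic() - started > TIME_LIMIT_SECONDS:
            raise TimeoutError(f"parser '{name}' exceeded {TIME_LIMIT_SECONDS}s")
        return match.groupdict() if match else None
    except Exception:
        error_counts[name] += 1
        if error_counts[name] > MAX_ERRORS_BEFORE_PAUSE:
            paused_parsers.add(name)   # stop scheduling this parser
        return None
```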
http://www.ams.org/bookstore?fn=20&arg1=secoseries&ikey=SECO-13
Séminaires et Congrès
2007; 391 pp; softcover
Number: 13
ISBN-10: 2-85629-222-4
ISBN-13: 978-2-85629-222-8
List Price: US$110
Member Price: US$88
Order Code: SECO/13

On March 8-13, 2004, a meeting was organized at the Luminy CIRM (France) on arithmetic and differential Galois groups, reflecting the growing interactions between the two theories. The present volume contains the proceedings of this conference. It covers the following themes: moduli spaces (of curves, of coverings, of connexions), including the recent developments on modular towers; the arithmetic of coverings and of differential equations (fields of definition, descent theory); fundamental groups; the inverse problems and methods of deformation; and the algorithmic aspects of the theories, with explicit computations or realizations of Galois groups.

A publication of the Société Mathématique de France, Marseilles (SMF), distributed by the AMS in the U.S., Canada, and Mexico. Orders from other countries should be sent to the SMF. Members of the SMF receive a 30% discount from list.

Readership
Graduate students and research mathematicians interested in mechanization of proofs and logical operations.

Table of Contents
M. Berkenbosch -- Algorithms and moduli spaces for differential equations
M. Berkenbosch and M. van der Put -- Families of linear differential equations on the projective line
P. Boalch -- Brief introduction to Painlevé VI
A. Buium -- Correspondences, Fermat quotients, and uniformization
J.-M. Couveignes -- Jacobiens, jacobiennes et stabilité numérique
P. Débes -- An introduction to the modular tower program
M. Dettweiler and S. Wewers -- Variation of parabolic cohomology and Poincaré duality
M. D. Fried -- The main conjecture of modular towers and its higher rank generalization
R. Liţcanu and L. Zapponi -- Properties of Lamé operators with finite monodromy
S. Malek -- On the Riemann-Hilbert problem and stable vector bundles on the Riemann sphere
B. H. Matzat -- Integral $p$-adic differential modules
F. Pop -- Galois theory of Zariski prime divisors
M. Romagny and S. Wewers -- Hurwitz spaces
D. Semmen -- The group theory behind modular towers
C. Simpson -- Formalized proof, computation, and the construction problem in algebraic geometry
Annexe. Liste des participants
http://electrochemical.asmedigitalcollection.asme.org/article.aspx?articleid=2544871&journalid=123
Research Papers

# Computational Studies of Interfacial Reactions at Anode Materials: Initial Stages of the Solid-Electrolyte-Interphase Layer Formation (Open Access)

G. Ramos-Sanchez, Departamento de Quimica, Iztapalapa 09340, CDMX, Mexico
F. A. Soto and J. M. Seminario, Department of Chemical Engineering, Texas A&M University, College Station, TX 77843
J. M. Martinez de la Hoz, The Dow Chemical Company, 2301 N. Brazosport Boulevard, Freeport, TX 77541
Z. Liu and P. P. Mukherjee, Department of Mechanical Engineering, Texas A&M University, College Station, TX 77843
F. El-Mellouhi, Qatar Environment and Energy Research Institute, Doha, Qatar
P. B. Balbuena (corresponding author), Department of Chemical Engineering, Texas A&M University, College Station, TX 77843; e-mail: [email protected]

Manuscript received April 14, 2016; final manuscript received July 13, 2016; published online October 20, 2016. Assoc. Editor: George Nelson.
J. Electrochem. En. Conv. Stor. 13(3), 031002 (Oct 20, 2016) (10 pages); Paper No: JEECS-16-1049; doi: 10.1115/1.4034412

## Abstract

Understanding interfacial phenomena such as ion and electron transport at dynamic interfaces is crucial for revolutionizing the development of materials and devices for energy-related applications. Moreover, advances in this field would enhance the progress of related electrochemical interfacial problems in biology, medicine, electronics, and photonics, among others. Although significant progress is taking place through in situ experimentation, modeling has emerged as the ideal complement to investigate details at the electronic and atomistic levels, which are more difficult or impossible to be captured with current experimental techniques. Among the most important interfacial phenomena, side reactions occurring at the surface of the negative electrodes of Li-ion batteries, due to the electrochemical instability of the electrolyte, result in the formation of a solid-electrolyte interphase layer (SEI). In this work, we briefly review the main mechanisms associated with SEI reduction reactions of aprotic organic solvents studied by quantum mechanical methods. We then report the results of a Kinetic Monte Carlo method to understand the initial stages of SEI growth.

## Introduction

Electron and ion transfer occurring at interfaces affect many processes of high significance in everyday life. Transport and reactions at dynamic interfaces define electrochemical phenomena taking place in energy storage and other power-source devices, which are key ingredients for the practical use of renewables. Recent emphasis on the development of renewables requires parallel progress in energy storage [1]. Rechargeable batteries and supercapacitors are among the most promising electrochemical energy storage devices [2]. Li-ion batteries (LIBs) are the main portable power sources for popular electronic devices, such as cell phones and laptops. However, uses of LIBs in electric vehicles require further development not only in materials for electrodes and electrolytes but also in the understanding and control of properties derived from the complex interactions existent in this ultracorrelated system. In particular, their storage efficiency is largely limited by the occurrence of side reactions that take place via electron transfer at the solid–electrolyte interface and lead to irreversible charge loss [3].
Among the constituent cell materials, the electrolyte is the essential key component of a battery or supercapacitor. No matter how much time and resources are invested into the development of electrode materials, most of the effort will be wasted if the interfacial electrode/electrolyte properties are ignored. That is because the electrolyte is a nexus between the two electrodes, having the function of transporting ions from one electrode to another during charge or discharge of the battery [4–10]; but more importantly, the electrolyte has the potential to control the chemistry of the reactions taking place at the electrode surfaces [11]. This control depends on one essential condition: the chemical properties of the electrolyte need to be such that it remains stable within a certain electrochemical stability window (Fig. 1) defined by the electronic properties (electrochemical potentials) of the two electrodes and those of the electrolyte [12,13]. To ensure electrochemical stability, the energy gap (Eg) characterizing the electronic properties of the electrolyte must be larger than the energy difference of electrochemical potentials (μ) between the two electrodes (eVoc) at open circuit voltage (Voc). Otherwise, electron transfer reactions (see arrows in Fig. 1) may happen due to chemical instability of the electrolyte, which is reduced (oxidized) at the negative (positive) electrode [14]. Products of the reduction (at the anode) or oxidation reactions (at the cathode), including inorganic compounds such as Li2CO3 and organic compounds such as Li ethylene dicarbonate (Li2EDC), aggregate, forming blocks and developing a heterogeneous film (Fig. 2) that in the best scenario could provide a protective role to the electrode, facilitating ionic conduction and blocking further electron transfer, i.e., self-controlling film growth [15]. Unfortunately, this is not always the case, and the formed film, known as the solid-electrolyte interphase (SEI) layer [16], has physicochemical properties that can result in an irreversible capacity loss, electrode mechanical fracture, and an overall decrease in the efficiency of the electrochemical cell. The formation of an effective SEI layer is one of the greatest challenges of current electrochemical energy storage devices [17,18]. To achieve this goal, a proper understanding of the film nucleation and growth is crucial. This understanding may start from a fundamental analysis of the interfacial reactions to determine the factors affecting the SEI formation. Even more essential is to identify how the SEI properties influence the overall battery performance: capacity and lifetime. SEI layers are formed at both the anode and the cathode. In this paper, we concentrate on the anode reactions. Three key anode materials for LIBs are allotropes of carbon, Li metal, and silicon-based materials. Furthermore, Li metal is an ideal anode material to advance the next generation of energy storage systems due to its high theoretical specific capacity (3860 mAh/g), low density (0.59 g cm−3), and the lowest negative electrochemical potential (−3.040 V versus the standard hydrogen electrode) [19]. Carbon is by far the most widely used anode material (i.e., graphite). There is no single parameter or formula that defines the SEI; instead, it is the combination of factors that produces its specific properties, quality, and efficiency.
Among the main factors, the most investigated are the type of carbon, pretreatment, electrolyte composition, cycling mode, temperature, and storage [15]. Similarly, among the promising anode materials, Si is an attractive choice due to its high gravimetric capacity (4200 mAh/g, equivalent to a lithiation of Li4.4Si) and volumetric capacity (9786 mAh/cm3) [20]. Here, we review the multiscale modeling approaches used to develop a better understanding of the interactions of the electrolyte with graphite and Li metal anode surfaces. Although there has been recently a great deal of modeling SEI reactions on lithiated Si anodes, we defer their review to a future report. We also report our study using a coarse-grained kinetic Monte Carlo (CG-KMC) approach of the growth on an anode surface of one of the main SEI components derived from the decomposition of ethylene carbonate (EC): the oligomer species known as lithium ethylene dicarbonate (Li2EDC). ## Initial Stages of the SEI Layer Formation: Electrolyte Decomposition ###### Electrolyte Reduction on Lithium Electrodes. Although the Li metal is extremely reactive, models of Li metal electrodes are relatively simple in comparison to that on the more complex graphitic or Si-based anodes; therefore, the first attempts for more complex simulations of electrolyte reduction were made at the interface of the electrolyte with Li electrodes. ###### Gas Phase Reactions of Electrolyte at the Li Surface. Tasaki et al. [21] were among the first studying the effect of a metallic Li cluster on the reduction of a set of solvents commonly used in LIBs. The model is fundamentally different to all previous solvent reduction simulations, focusing on the direct decomposition of the solvent on the surface of the electrode. A cluster model of 15 Li atoms and one solvent molecule was fully optimized using the Hartree–Fock (HF) method. The initial orientation of the molecule on the Li cluster was such that the carbonyl oxygen of the molecule faced the Li cluster while the hydrocarbon moiety was away from it. The solvent and additive molecules examined included EC, propylene carbonate (PC), vinyl ethylene carbonate (VEC), vinyl carbonate (VC), vinyl vinylene carbonate (VVC), ethylene sulfite (ES), and tetrahydrofuran (THF). It is important to note that for VEC, two ring-opening reactions are possible: one with butadiene released and the other with no butadiene released upon reduction. Thus, the label VECa is used when butadiene is released and VECb when no butadiene is released upon reduction. Results for direct ring opening activation and reaction energy were reported. The full set of solvents was grouped into two subsets: those with higher reaction energies (more stable products) including EC, PC, and VECa with activation energies for ring opening ∼25 kcal/mol. On the other hand, solvent/additive molecules VC, VECb, and VVC decompose with a lower activation energy ∼10 kcal/mol. The THF molecule is a special case as the activation energy is very high and the reaction energy is the smallest. For all the molecules having low activation energy, the C-O scission (CE-O2) of the ring was followed by the direct adsorption of the product molecule on the cluster surface without fragmentation. This direct adsorption suggests that Li-organic compounds may be generated on the Li surface; thus, this group of molecules may be relatively superior to PC and EC regarding the formation of an appropriate SEI layer. 
Despite the limitations of the theoretical model due to the relatively small size of the cluster, the lack of co-solvent molecules and the relatively low level of theory used, the results were important to elucidate the differences occurring when more than one Li atom is used during the solvent reduction simulations. Thus, the results of these simulations were compared to those of the molecular clusters illustrated in Figs. 3(a) and 3(b) [22,23]. Another approach to the direct interaction of isolated molecules with Li metal atoms was made by Leung et al. [24]. It was shown that direct EC decomposition into CO3−2 and C2H4 (mechanism Fig. 3(c)) is thermodynamically more favorable than cleaving the Cc-O2 bond to form O(C2H4)OCO2− by 35.3 kcal/mol, with a negligible activation barrier for both processes using the Perdew-Burke-Ernzerhof (PBE) exchange-correlation (XC) functional. A slightly higher barrier for the Cc-O2 breaking of 3.7 kcal/mol was obtained with the more accurate HSE06 XC functional. Based on the magnitude of the calculated barriers, it may be concluded that both processes are possible on picosecond timescales at explicit EC/Li-electrode interfaces. The next question is how the kinetic prefactors could lead to the preference of either route, which may be addressed by methods able to model the dynamics of the process. ###### Dynamics of Solvent Reduction at the Li Surface. The inclusion of more than a single molecule of solvent on the surface of Li metal accounts for the interaction between molecules and also between molecules and the electrode, which may provide a better picture of the reduction process. Yu et al. [25] reported ab initio molecular dynamics (AIMD) simulations with the PBE functional. EC molecules pre-optimized with empirical classical force fields were allowed to interact with the Li (100) surface in the AIMD simulations. The EC molecules closer to the electrode undergo a drastic change from the planarity (imposed by some classical force fields) within 50 fs, then within 200 fs the Cc-O2 breaks leading to the precursor O(C2H4)OCO2− and CO, while two electrons are being transferred to the molecule. Using a hybrid functional HSE06 [26] to follow the same trajectory, it was found that the bond breaking occurred at the same timescales, therefore confirming that in direct contact with electrodes the direct CO formation mechanism (Fig. 3(d)) is more robust at electrode surfaces than inside the bulk EC liquid regions. Following the decomposition trajectory, it was found that within 15 ps, all the EC molecules adjacent to the Li metal were reduced ending in ring opening reactions. It is important to note that 90% of the EC molecules followed the CO formation reduction route (Fig. 3(d)) while only 10% resulted in the formation of ethylene and CO3 [24]. Therefore, the preference for the CO route should be inherent to kinetic factors owed to similar activation energies [27]. Neither CO nor OC2H4OCO2− radical anions are final products, in this simulation CO was adsorbed into the Li metal while the OC2H4OCO2− group remained reactive. Classical molecular dynamics (MD) simulations using reactive force fields were employed to analyze the reaction products of Li in contact with EC or di-methylcarbonate (DMc) solvents. In contrast with traditional nonreactive force fields, reactive force fields allow bonds to break and form [28]. The SEI thickness and its components are found to be dependent on the type of solvent/salt/additives used. 
For Li metal interacting with EC, the main SEI components are Li2O and Li2CO3 along with trapped gas molecules of C2H4. Even though Li ethylene dicarbonate (Li2EDC) is formed during the initial stages of SEI growth, these simulations showed that it may decompose by interacting with Li atoms on the Li metal surface; however, the scenario may change at different Li concentrations on the surface of other electrodes. Actually, much work on the stability of the SEI products have been reported recently [29,30]. In contrast, when DMC is used, the SEI was found to be mainly composed of Li2O, LiOCO2CH3, Li2CO3, LiCH3, and LiOCH3. Therefore, during the formation of LiOCH3, one CO molecule could be released and indeed it was found to be one of the major components trapped closer to the Li metal. Another important difference according to this study is the extent of the SEI formed during the 40 ps of simulation, the SEI is thicker when only EC is used and thinner with DMC, having intermediate values for solvent mixtures. The analysis of the distribution of formed species confirmed a multilayer arrangement with the first layer (closer to the electrode) composed of gasses trapped during SEI formation, then inorganic salts, and finally organic salts (in contact with the electrolyte). These reactive MD simulations have proved to be fairly accurate in describing competing reactions in different regions, leading to a very reasonable preliminary description of the SEI growth stages, although more details about the inorganic products resulting from salt decomposition and other reactions were not included. ###### Electrolyte Reduction on Carbon Electrodes. Graphite continues being the most popular negative electrode material in LIBs due to its relatively low cost and moderately long cycle life. During the initial lithiation cycles, a passivating SEI layer is formed on the graphite surface due to the reduction of unstable electrolyte components. Although the SEI layer prevents further consumption of the electrolyte solution, its formation results in an irreversible capacity loss of the working electrode, and consumption of Li-ions that would otherwise participate in the charging/discharging cycles [31]. The chemical structure and morphology of the SEI layer have been found to influence long-term electrochemical properties of the anode, including capacity retention, cycling efficiency, and resistivity for charge transport. Therefore, investigating the formation mechanisms of SEI layers on graphite anodes is important as it may improve the understanding of the role played by this film in the battery operation. Different types of theoretical models describing the SEI formation have been proposed for addressing this problem. Here, we review a few representative cases to illustrate the role and need of multiscale modeling approaches to address complex problems involving a variety of length and timescales. ###### Molecular/Atomistic Models. These models include more detailed physicochemical characteristics of the battery materials, and can be used to study different features of the SEI, such as structural properties and chemical composition across the surface of the electrode particles. These types of models include kinetic Monte Carlo (KMC), classical MD, AIMD, and density functional theory (DFT). AIMD simulations are based on a quantum mechanical description and can provide valuable information on reaction pathways, rates, and reduction products resulting from electrolyte decomposition. 
These types of simulation offer an advantage over cluster-based DFT calculations because electrodes and liquid electrolytes can be explicitly included in the model systems. Moreover, Leung et al. demonstrated that electrode–electrolyte interfaces can be simulated within timescales accessible to AIMD [24,3234]. Leung and coworkers performed AIMD simulations of the fully charged anode by “immersing” an SEI-free LiC6 electrode into liquid EC [34]. Four layers of LiC6 graphite were periodically replicated, and 32 EC molecules were confined between the exposed edges, representing the liquid electrolyte solution in contact with the electrode. Besides the intercalated Li, an extra Li+ resided in the liquid region. Dangling orbitals on edge carbon atoms were terminated with hydroxyl (C–OH), quinone (C = O), carboxylic acid (COOH), and/or protons (C–H). No reactions were observed on the anode with proton-terminated edges during the 7 ps of simulation. However, in the system with oxygen-terminated edges, transferences of 2e from the electrode to EC molecules were observed, resulting in their reduction to C2H4/CO3−2, CO/O(C2H4)O2−, or O(C2H4)OCO2−. These reactions took place on the electrode surface, with the electron flowing directly to the decomposing EC molecules coordinated to Li+, and without the electron first becoming delocalized in the liquid region. In the case of hydroxyl-terminated edges, two different reduction reactions were observed, resulting in the formation of a CO3−2/C2H4 pair and a CO/OC2H4O2− pair. Subsequently, the OC2H4O2− anion extracted a proton from the electrode and formed ethylene glycol. Leung's work shed light on the mechanism of solvent decomposition near the graphite electrode, a process that represents the initial stages of the SEI formation. In summary, reduction of EC initiates at graphite edge regions rich in oxidized sites and it may result in either formation of CO/OC2H4O2− or CO/OC2H4CO22− pairs. Initial reduction stages are expected to be fast 2e transfer processes, and subsequently, as the SEI thickens slower 1e transfer processes are expected to become dominant [33]. Ganesh et al. extended Leung's work by evaluating the influence of different salt and solvent molecules on solvent reduction processes [35]. The model system was analogous to that used by Leung, with four layers of fully lithiated graphite in contact with liquid EC. Hydrogen-terminated anodes led to no reduction processes after 11 ps of AIMD simulation, in agreement with the previous results [34]. When the termination of graphite corresponded to a mixture of O/OH, a set of rapid reactions were found to take place. Some of them had been previously observed in Leung's simulations (carbonate formation), but other not very common reactions resulted in the formation of C2H3O, H2, and O*. The different reactivity of the anode calculated in Ganesh and Leung's work could be attributed to a more deficient plane wave cut-off used in Ganesh work (300 eV compared to 400 eV employed by Leung), which could result in different forces during the simulation, therefore, promoting other reactions. A critical observation from Ganesh's work is that different edge terminations determine the likelihood of a given reaction to take place. In the H-terminated anodes, no reaction took place owing to the inherent confinement of Li ions in the interior of the electrode, which results in no Li+ at the interface. 
In the case of O/OH-terminated anodes, Li ions were mainly located near the oxygen atoms, leading to a high percentage of Li+ at the interface. Moreover, more reactions occurred near O terminations than near OH terminations because the electron transfer is more easily done through oxygen than through hydrogen. When adding LiPF6 salt to the electrolyte, the mobility of the cations was enhanced. This mobility may be due to the dissolved salt not being sufficiently screened by the solvent, leading to attractive interactions with Li+ in the anode, but keeping the structural integrity of the (PF6). On the other hand, when the salt was present with the O/OH terminations, the (PF6) rapidly decomposed into LiF and PF5 forming LiF chains at the interface with the electrode. The specific solvent used was found to affect the reduction products. When PC was used instead of EC, the solvent easily decomposed, which may be related to the higher mobility of Li+ cations. These fast reduction reactions caused significant graphite deformation (probably exfoliation). When DMC was used as a solvent, no reduction processes were observed. It is important to recall that no other AIMD simulation had ever reported species like C2H3O and LiCH3, even though they had been experimentally detected through X-ray photoelectron spectroscopy (XPS) experiments [36]. However, as mentioned earlier, using low cut-off energy (300 eV) could be responsible for the observation of these very reactive species. Although Leung and Ganesh demonstrated that AIMD simulations are useful in the determination of initial stages of solvent reduction on graphite anodes, longer timescales (impossible to be achieved with AIMD methods) are needed to investigate the continuous SEI formation, up to a few hundred nanometers, as observed in experiments. Tasaki performed classical MD simulations to examine bulk properties of the main reduction products of EC and PC, and their interaction with a graphite surface in gas and liquid phase [37]. The model compounds corresponded to Li2-EDC and dilithium 1,2-propylene dicarbonate (Li2-PDC), which have been identified as major components in SEI layers formed in the presence of EC and PC, respectively. Bulk properties of Li2-EDC and Li2-PDC, such as density and cohesive energy, were calculated at 300 K using a box with 40 molecules under fixed number of molecules, pressure, and temperature (NPT) conditions, using the COMPASS force field [38]. Li2-EDC was found to have a higher density and cohesive energy than Li2-PDC. This result suggests that SEI films formed in the presence of EC are denser and more brittle, which constitute favorable characteristics in the SEI. Subsequently, Tasaki investigated the solubility of Li2-EDC and Li2-PDC in EC and PC, respectively. Findings suggested that Li2-PDC is more soluble in PC than Li2-EDC in EC. Dissimilarities observed between the bulk properties of these reduction compounds are expected to cause contrasting physical properties in EC- and PC-based SEI films. Furthermore, adhesion of the decomposition products to graphite was explored. For this analysis, the simulation box consisted of one molecule of the decomposition product in contact with the (100) plane of five graphite layers. Then, quenched dynamics simulations were used to determine the global minimum for the system. 
A similar simulation box was employed to examine interactions between graphite and the decomposition product as a condensed material, using 40 molecules of the decomposition product instead (Li2-EDC or Li2-PDC) in contact with the (100) plane of graphite. Simulations showed that the adsorption energy of one Li2-EDC molecule to graphite is stronger than that of one Li2-PDC. Additionally, Li2-EDC in condensed phase also showed a more favorable interaction with graphite, which implies a better adhesion of the EC-based SEI layer to graphite, compared to that formed from PC. These findings were in agreement with the DFT studies reported by Wang and Balbuena [39]. Results from Tasaki's MD simulations are a good starting point for understanding marked differences in the cycling behavior of PC- and EC-based electrolytes, focusing on the bulk properties of the products forming the SEI layer, and the interaction of the SEI with both the electrolyte solution and the graphite anode. However, during the simulations, the graphite carbons were neutral, and the voltage in the anode was not taken into account. Vatamanu et al. performed MD simulations of an electrolyte composed of EC, DMC, and LiPF6 near the basal plane of graphite as a function of the electrode potential [40]. Polarizable force fields that allow controlling the potential between the electrodes were employed, and the polarization of the electrolyte was represented using induced dipoles applied through isotropic atomic polarizabilities. Charges in the electrode and induced dipoles were calculated as to minimize the total electrostatic energy of the system. Induced dipoles were updated every 3 fs and electrode charges every 0.3 ps. The temperature was maintained at 453 K under NPT conditions, using a Nose-Hoover thermostat. The electrode model consisted of three layers of graphite whose basal plane was in contact with the electrolyte. The distance between the electrodes in the simulation box corresponded to 91.6 Å. The electrolyte consisted of 114 EC molecules, 256 DMC molecules, 31 PF6 ions, and 31 Li+ ions. Equilibration runs were at least 4 ns long, followed by production-run trajectories at potentials between 0 and 7 V. Results showed that the composition of the interfacial layer of electrolyte near graphite is significantly influenced by the electrode potential. The amount of EC near the interface noticeably increased with increasing electrode charge, relative to the amount of DMC. Increasing negative potentials increased the amount of Li+ on the graphite surface, thereby involving more EC molecules in Li+ coordination. The tendency for formation of either Li2CO3 or (LiCO3CH2)2 from EC reduction was suggested to depend on the concentration of Li+ near the graphite/electrolyte interface, with the formation of Li2CO3 being favored at low Li+ concentrations, and the formation of (LiCO3CH2)2 favored at high Li+ concentrations, in agreement with earlier quantum-mechanical analysis [41] and experimental observations [42]. More recently, Jorn et al. performed MD simulations of the graphite/electrolyte interface in the presence and absence of SEI, under applied voltages [43]. Their work aims to help to elucidate the connection between composition, structure, and ion transport characteristics of the SEI, with the overall performance of the battery. 
The modeling approach included the development of a force field for the electrolyte (EC with LiPF6 dissolved at 1M concentration), through the implementation of a force-matching algorithm developed from AIMD simulations of EC, Li+, and PF6-. The components of the SEI were chosen to be Li2EDC and lithium fluoride (LiF). The SEI components were described using the standard CFF91 model implemented in LAMMPS. Electrostatic interactions between the SEI components and the electrolyte were given by their partial charges, and long-range electrostatics interactions were incorporated using the P3M method. Simulations were carried out by placing the electrolyte in between two graphite electrodes. Both the basal and edge planes of graphite were evaluated. Simulations studying the effect of the SEI layer incorporated a randomly generated film built from Li2EDC and LiF of varying thickness and composition ranging from 0.5 to 2 nm and 0 to 50% of LiF. Applied voltages of 0, 3, 6, and 7 V were considered. The temperature was maintained at 450 K under NVT conditions, using a Nose-Hoover thermostat. Jorn and coworkers observed phase separation when LiF and Li2EDC were combined to form a solid phase. When the LiF concentration increased up to 50% wt., LiF adopted a layered structure. This concentration could explain the layered SEI structure formed in aged battery cells due to thermodynamically favorable relaxation of components over time [44]. Analysis of electrolyte molecules was carried out applying a potential difference of 0 or 3 V between the electrodes. The potential was divided equally between the two electrodes, so individual potential drops of 0 and ±1.5 V were obtained. When the electrodes were represented by graphite basal planes, the preferred orientation of EC molecules at 0 V was entirely parallel to the electrode. In the case of graphite edges, the molecules adopted a broader range of positions. When the applied potential was 3 V, the molecules adopted mainly two configurations, one parallel, and another one perpendicular to the anode, with the carbonyl part of the EC pointing away from the electrode. The EC density peak distribution was sharper and closer to the electrode in the case of the positively charged electrode. When there was an applied potential (3 V), the PF6 anions tended to populate the positive electrode at around 4 Å of separation. In the case of Li+, there were two peaks in the distribution close to the negative electrode, one at 4 Å, which does not change significantly with the application of voltage, and the other centered at 8 Å due to the reluctance of Li ions to lose their solvation shell. After increasing the applied potential to 7 V, the Li+ began to distribute nearer the basal plane of negative electrodes (about 2 Å). Therefore, it was concluded that edge terminations of graphite allow a higher degree of solvent disorder compared to basal planes. After incorporating the SEI layer onto the graphite electrode, it was found that the electrolyte near the SEI interface lacks the structuring observed on bare electrodes. Random orientations seen in these cases were similar to those observed in the bulk of the electrolyte. The SEI thickness was found to influence the degree of interaction of EC molecules with the interface. Thinner SEI films undergo deformation, allowing EC molecules to interact directly with the electrode. As the SEI grows thicker, its deformation becomes more difficult, and EC molecules are not able to go through. 
The inclusion of LiF favors the formation of a sharper interface owing to the net unfavorable interaction of EC with the ionic LiF compounds. The SEI is expected to be formed by a bilayer structure, with a dense inorganic film closer to the anode; such a layer was observed in these simulations as a concentrated region of fluorine within 4 Å of the SEI. A thicker porous organic layer, located at the interface with the electrolyte, is supposed to be covering the inorganic film. However, in these simulations, the porous behavior of the organic layer was not observed. Li+ and PF6− ions were found to be present inside the SEI region. Some ions were very close to the electrode (less than 2.4 Å) due to losing their solvation shell, and the ones retaining their solvation shell were further away (2.4 to 3.2 Å from the electrode). Increasing the LiF content resulted in Li ions from the electrolyte getting closer to the surface while PF6 anions were confined to the electrolyte region close to the SEI. Methekar et al. used KMC simulations to explore the formation and growth of a passivating SEI layer on a graphite anode, especially in the area tangential to the surface of the anode during charging/discharging cycles (Fig. 4) [45]. The model explicitly considered phenomena such as adsorption of Li+ in the anode (intercalation), desorption of Li+ from the electrode surface (deintercalation), surface diffusion of ions and molecules on the surface, and reduction of particles on the electrode surface to form material products contributing to the thickness of the SEI layer. The rate of diffusion of molecules on the surface was modeled using a cubic lattice, and the formation of the SEI layer was assumed to follow Butler-Volmer (B-V) kinetics. The kinetic parameters used in the model were obtained from continuum models and experimental data [46–48]. Upon calculation of transition rates for each possible process, a given transition was randomly selected following a probability proportional to its kinetic rate. Once the selected transition took place, the process was repeated until the electrode was fully charged. The active surface coverage was found to remain constant for the initial charging cycles, and then decreased with increasing cycle number due to the formation of a passive SEI layer on the surface tangential to the lithium ion intercalation. This coverage also resulted in shorter times required for charging due to the reduced surface area available for electrodeposition of active atoms (reduced electrode capacity). Besides reproducing the capacity fade phenomenon observed experimentally, Methekar et al. also investigated the effect of temperature on the active surface area of the anode during cycling, and found that the temperature that optimizes the active surface area during the first cycle may not be optimal for the long-run performance of the battery. ###### Macroscopic Models. Continuum mechanics models are usually employed to predict SEI film growth rates and electrode capacity loss, using kinetic and transport properties of the electrolyte species involved in the film formation. Species and charge conservation equations are derived and solved, resulting in the prediction of concentration and electric potential profiles [49–53]. Christensen and Newman developed a continuum mathematical model of the SEI as a mixed conductor, including details of the chemistry of SEI formation, to simulate the growth of a 1D film of Li2CO3 on a planar graphite surface [53].
The electrolyte solution was assumed to be 1 M LiPF6 in an idealized solution of reactive EC and inert DMC. The elementary steps considered for the SEI formation mechanism included Li+ intercalation, EC adsorption, and EC reduction to Li2CO3. B-V kinetic equations were used to describe all charge-transfer reactions, and nonequilibrium interfacial kinetic expressions were used in the film boundary conditions. Other factors, such as diffusion of Li+ in graphite and variations of potential with the state of charge, were also taken into account. The calculated film growth rate at open circuit (zero current) was found to be proportional to the rate of electron transfer, and faster film growth rates were observed for charged batteries compared to uncharged batteries. The model showed that film growth directly affects both irreversible capacity loss and film resistance. Colclasure and Kee extended Christensen and Newman's work by incorporating detailed electrolyte reduction mechanisms describing the charge transfer kinetics [51,52]. All microscopically reversible reactions were evaluated in elementary form, providing an alternative to the B-V formalism for charge transfer chemistry. Some limitations of the B-V formalism include the assumption of a single rate-limiting step, making it difficult to model simultaneous competitive reactions. Therefore, Colclasure and Kee's approach allowed the incorporation of multiple parallel pathways. Additionally, the semitheoretical Redlich-Kister expansion was used to represent nonideal effects of lithiated graphite. As a result, the model can predict SEI growth during charge and discharge cycles, which was not possible with Christensen and Newman's model. Results showed a weak dependence of growth rate on cycling rates, and a proportionality of film thickness and capacity loss with the square root of time. These results suggested that film growth during cycling was mainly limited by slow electron diffusion. Incorporating detailed SEI chemistry into battery models can sometimes be difficult due to the large number of model parameters needed. Physical and chemical properties of reaction intermediates and rate expressions have to be estimated or derived from the literature. A different, simpler approach is to represent the SEI chemistry as a constant resistance. Ploehn et al. developed a 1D continuum mechanics model in which the SEI growth is described by a reactive solvent component diffusing through the SEI layer and undergoing a two-electron reduction at the graphite/SEI interface [50]. It is assumed that reduction of the solvent produces an insoluble product that increases the SEI thickness. The model simplifies the bilayer structure observed in SEI films (a compact inorganic phase close to the electrode, covered by a porous organic phase closer to the electrolyte) and represents it as a single layer with continuously varying properties, such as composition and porosity. A linear correlation was found between SEI growth at open-circuit conditions and the square root of time. Using reasonable assumptions about the SEI composition, an SEI thickness of several tens of nanometers was estimated. The irreversible capacity loss was also found to increase linearly with the square root of time, which agreed with experimental data and demonstrated that solvent diffusion models can be useful alternatives for the modeling of SEI growth and capacity loss behavior in Li-ion batteries.
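As a small numerical illustration of the square-root-of-time behavior that these diffusion-limited descriptions predict, consider the parabolic growth law dL/dt = k/L, which integrates to L(t) = sqrt(L0^2 + 2kt). The parameter values below are made up purely for illustration and are not taken from any of the cited models:

```python
# Illustration of diffusion-limited (parabolic) SEI growth: dL/dt = k / L,
# which integrates to L(t) = sqrt(L0**2 + 2*k*t).  Parameter values are
# invented for illustration; they are not from the cited models.
import math

L0 = 1.0e-9    # initial film thickness, m
k = 1.0e-22    # effective (lumped) rate constant, m^2/s

def thickness(t_seconds):
    return math.sqrt(L0**2 + 2.0 * k * t_seconds)

for days in (1, 4, 9, 16):
    t = days * 86400.0
    print(f"{days:3d} d: L = {thickness(t) * 1e9:.2f} nm")
# Quadrupling the elapsed time roughly doubles the thickness once
# 2*k*t >> L0**2, i.e. L grows like the square root of time.
```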
A similar square-root-of-time result was found by Pinson and Bazant [54] using a single rate-determining transport mechanism to describe SEI growth on a graphite anode. By treating the SEI layer as a homogeneous structure, their model calculated the Li diffusion coefficient at 60 °C to be 3 × 10^-6 cm^2/s. Although the continuum models described above have shown excellent capability to predict SEI growth, they ignore the nonzero charge density at the interfacial region between the SEI and the electrolyte. Recently, Deng et al. proposed a phase field model for SEI growth that takes into account the nonzero charge density at the interface and captures the double-layer structure during SEI growth [49]. In this model, the interface is diffuse such that material properties vary smoothly across it, and the equations are solved simultaneously in the whole domain, including bulk phases and interfacial regions. The implementation of a diffuse interface results in significant differences in the way in which species are transported across the interface. When the modeling equations are solved in each bulk phase separately, heterogeneous reactions occurring at the interface determine the fluxes of species moving across it. In the phase field model, on the other hand, the diffuse interface allows species to cross the interface naturally, according to nonequilibrium thermodynamics. As a result, fluxes of species across the interface are determined not only by SEI formation reactions but also by diffusion and electromigration. Deng's model found that SEI growth is mainly controlled by the diffusion of electrons and is approximately proportional to the square root of time. This behavior is similar to that observed with Colclasure's [51,52], Pinson and Bazant's [54], and Ploehn's [50] models. The phase field model also found that the SEI growth rate is proportional to the temperature and that the SEI thickness follows an Arrhenius relation. Thermodynamic models have also been proposed to describe the formation of SEI layers on graphite anodes. Yan et al. employed classical nucleation theory to qualitatively explain the origin of the bilayer structure of the SEI and developed a theoretical foundation for the first electrochemical intercalation of Li into graphite as a special precipitation process [55–57]. The precipitation of reduction products at a graphite surface takes place through two different steps: nucleation of the solid species constituting the SEI layer, and growth of the solid nuclei at the graphite/electrolyte interface. Yan et al. introduced three different types of nucleation: (i) nucleation in the bulk electrolyte solution, (ii) nucleation directly on the graphite surface, and (iii) nucleation surrounding a particle already present on the graphite surface [57]. They showed that the smaller the shape factor of a given nucleation mode, the easier it is for solid nuclei to form. The relative order of the shape factors associated with these nucleation modes was determined to be (i) > (ii) > (iii). Both nucleation modes (ii) and (iii) are employed by solid species located close to the graphite surface. Therefore, the number density of macroscopic solid particles growing close to graphite should be much larger than that away from the electrode. This explains the compact nature of the inorganic film close to the anode. Additionally, since the concentration of organic species is higher than that of inorganic species, a thicker porous layer composed of organic species is expected to form nearer to the electrolyte.
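The shape-factor argument can be connected to the standard heterogeneous-nucleation result of classical nucleation theory (quoted here only as a generic reminder, not as the specific expressions of Ref. [57]): the barrier for nucleation on a substrate is the homogeneous barrier scaled by a purely geometric factor,

$$\Delta G^{*}_{\mathrm{het}} = f\,\Delta G^{*}_{\mathrm{hom}}, \qquad 0\le f\le 1,$$

where, for example, a spherical cap with contact angle $\theta$ on a flat substrate has $f(\theta)=\tfrac{1}{4}(2+\cos\theta)(1-\cos\theta)^{2}$. A smaller shape factor means a lower barrier and therefore easier nucleation, which is the sense in which the ordering (i) > (ii) > (iii) makes modes (ii) and (iii) progressively more favorable.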
Yan et al. also described the first Li intercalation into graphite as the potential of the graphite electrode decreases from the open circuit potential (OCP) (fully delithiated state, 1.4 V versus Li/Li+) to 0 V. At potentials between 1.4 and 0.55 V, one-electron decomposition of electrolyte components may be observed, resulting in solid nuclei forming through nucleation modes (i) and (ii). Since the shape factor of nucleation type (ii) is smaller, the resulting species have a higher tendency to precipitate on the graphite surface. When the anode is further polarized to potentials between 0.55 and 0.2 V, one- and two-electron decomposition of electrolyte components are observed. Nucleation modes (i), (ii), and (iii) are possible, but mode (iii), with the smallest shape factor, will allow species such as LiF, Li2O, and Li2CO3 to eventually surround and replace the isolated solid deposits previously formed on graphite, constituting the compact inorganic film closer to the electrode. Solid organic species are present in larger proportion in the electrolyte; thus, a thicker porous layer will be formed closer to the electrolyte. When the potential of the anode is held between 0.2 and 0 V, the stable SEI layer has been established. Yan et al. also presented a thermodynamic argument demonstrating that most precipitation processes in the electrolyte follow the nucleation and growth mechanism [56], and treated in detail the growth of the solid nuclei constituted by reduction products [55]. The SEI thickness was found to increase with the square root of time, which demonstrated a diffusion-controlled growth, in agreement with results obtained using continuum mechanics models.
###### Growth of Li2EDC Film.
In this section, we report a new coarse-grained (CG)-KMC model to study SEI film formation and growth. As discussed in the previous sections, KMC methods are powerful tools to study film growth [58,59] and electrochemical deposition at the atomistic scale [60–62]. Recently, several models based on the KMC algorithm were employed to study electrochemical energy systems [63]. In Methekar's 2D model [45], the formation of the SEI was simplified to a one-step reaction, and the growth of the SEI thickness was not investigated due to limitations of the 2D model. Here, we introduce a CG-KMC three-dimensional (3D) model where the multistep chemical/electrochemical reactions during the formation of the SEI are explicitly incorporated. The main idea is to track the evolution of the organic Li2EDC film, which involves multiple-step chemical and electrochemical reactions [8,32,64]. In this preliminary model, it was assumed that Li2EDC formation is dominated by the one-electron mechanism via a series of steps given by the following reactions [24]:
1. EC + * → EC*, with adsorption rate k1;
2. EC* + e- → c-EC-, with reduction rate k2;
3. c-EC- → o-EC-, with conformational transition rate k3;
4. 2 o-EC- + 2 Li+ → Li2EDC + C2H4 (↑), with formation rate k4.
The first step represents the adsorption of the EC molecule on the substrate. The second step represents the electrochemical reduction, in which an electron is transferred to the adsorbed EC and EC is reduced to c-EC-, which has a closed-ring conformation. As shown in the third step, c-EC- can be converted to o-EC-, which has an open-ring conformation. Finally, two o-EC- anions can accept Li+ ions and form Li2EDC as an SEI film component. The simulation domain includes 50 × 50 × 500 simple cubic sites. The length scale of each site is approximately 0.5 nm.
Sites in the bottom plane are predefined as the anode substrate. Other sites can be occupied by EC, c-EC, o-EC, or Li2EDC. It is worth noting that EC adsorption can only take place at empty sites located on the anode substrate or on predeposited Li2EDC sites. This analysis focuses on how the reduction rate k2 and the SEI formation rate k4 affect the growth rate of the SEI film. Hence, the adsorption rate k1 is set to 10^8 s^-1 site^-1, and the conformational transition rate k3 is set to 10^3 s^-1 site^-1. Both of these values and the ranges of variation of k2 and k4 are adopted from Leung's analysis based on DFT calculations [32]. The contour map in Fig. 5(a) demonstrates the effect of k2 and k4 on the SEI growth rate. Both k2 and k4 vary from 10^-8 site^-1 s^-1 to 10^8 site^-1 s^-1. It was found that the SEI growth rate is limited by the electrochemical reduction rate k2. When k2 is smaller than 10^-4 site^-1 s^-1, the SEI growth rate is extremely slow, and the variation of the Li2EDC formation rate k4 does not affect the growth rate. When k2 is larger than 10^-2 site^-1 s^-1, the SEI growth rate first increases linearly as k4 increases, as shown in Fig. 5(b), and then reaches a maximum value as k4 increases further. The maximum growth rate also depends on the electrochemical reduction rate. As the SEI thickness grows, it becomes harder for electrons to tunnel through the SEI layer. Hence, the reduction rate k2 should decrease with increasing SEI thickness. To mimic this effect, we use the following equation to describe the thickness-dependent reduction rate: $k_2(\delta) = k_2(0)\times\frac{l_0}{\alpha\,\delta + l_0}$. Here, $l_0$ is the length scale of a lattice site, and $\alpha$ is a dimensionless coefficient. In this model, the relation between thickness and reduction rate is assumed; a more accurate equation will be applied in the KMC model in future work, where the $\alpha$ parameter will be obtained directly from the decay of the rate with thickness obtained from the first-principles approach demonstrated in our previous work [65]. Figure 6 shows the SEI thickness variation versus time. It clearly shows that the growth rate decreases as the thickness increases, which agrees well with the results of the continuum model developed by Christensen and Newman [53] and with the results obtained by a first-principles approach [65]. A larger value of $\alpha$ indicates that it is more difficult for electrons to tunnel to the SEI/electrolyte interface. According to Fig. 6(a), a larger $\alpha$ decreases the SEI growth rate. Christensen and Newman [53] used the electron transference number to characterize the difficulty of electrons tunneling through the SEI. It was found that the SEI film grew more slowly with a smaller electron transference number. The conclusion from the present coarse-grained molecular-level model agrees well with that from the reported continuum model [53] and with a first-principles approach [65]. Figure 6(b) shows the fraction of species at the SEI/electrolyte interface. It can be seen that Li2EDC and EC* are the dominant species on the SEI surface. Snapshots in Figs. 6(c)–6(f) show the distribution of species at the SEI top surface. c-EC and o-EC can be observed only when the SEI is thinner than around 1 nm. Once the SEI becomes thicker, only EC* and Li2EDC can be observed. The reason is that as the SEI grows, the reduction rate k2 decreases quickly while the other transition rates remain constant. The EC adsorption event (formation of EC*) thus becomes the predominant event.
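To make the assumed thickness dependence concrete, the expression above can be evaluated directly. The short Octave sketch below (not part of the reported simulations; the $\alpha$ values are chosen only to illustrate the trend) plots how $k_2(\delta)$ falls off with film thickness:

```octave
% Illustration of the assumed thickness-dependent reduction rate
% k2(delta) = k2(0) * l0 / (alpha*delta + l0); parameter values are indicative only.
k2_0   = 1.0e8;                     % clean-surface reduction rate, site^-1 s^-1
l0     = 0.5;                       % length scale of one lattice site, nm
delta  = linspace(0, 10, 201)';     % SEI thickness, nm (column vector)
alphas = [1e5 1e6 1e7];             % dimensionless attenuation coefficients (assumed)

k2 = k2_0 * l0 ./ (delta * alphas + l0);   % 201x3 matrix, one column per alpha
semilogy(delta, k2);                       % larger alpha suppresses k2 faster
xlabel('SEI thickness \delta (nm)');
ylabel('k_2(\delta)  (site^{-1} s^{-1})');
legend('\alpha = 10^5', '\alpha = 10^6', '\alpha = 10^7');
```

The curves decay monotonically, and the larger the attenuation coefficient, the faster the reduction rate, and with it the film growth, is suppressed, consistent with the behavior shown in Fig. 6(a).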
Returning to the snapshots: once an EC* is reduced to c-EC-, it is quickly converted to Li2EDC through the subsequent reactions, which have high kinetic rates. These preliminary results illustrate how the coupling between different models could be carried out to reach results even at the macroscopic level.
## Conclusions
In this article, we have reviewed recent progress on modeling components of LIB technology, emphasizing the growth of SEI layers and anode/electrolyte interface chemistry. Multiscale modeling studies that mainly focused on electrolyte decomposition and SEI formation on the surface were summarized and discussed, together with issues and methodology pertaining to the simulations. A CG-KMC model is developed to study the growth of the Li2EDC film. The model involves multistep reactions, and it is found that the reduction of EC limits the film growth rate. A simple expression is used to incorporate the effect of SEI thickness, which shows the reduction of the growth rate as the film thickness increases. The evolution of the surface pattern is tracked as a function of SEI thickness. Although the CG-KMC model is very simple, being based on only one SEI component, it illustrates the main ideas that will be further elaborated in future work. An important question still to be answered requires a direct comparison between the first-principles calculated rate of SEI growth and experimental rates. An important step in that direction was taken in our recent work [29], where we discussed the role of the nature of the electrolyte in determining the rate of SEI growth. We demonstrated that additives such as VC yield SEI products that lead to much slower SEI growth, whereas in the absence of such additives, the electrochemical instability of the SEI products leads to much faster growth. In the future, we plan to incorporate these ideas into the KMC model. As the global demand for alternative renewable energy increases, electrochemical energy storage devices such as batteries are attracting greater attention than before. To develop rechargeable LIB technology for a wide array of applications, the safety, cycle life, and energy density must be vastly improved. In order to move beyond these limitations, a rational electrolyte design investigation must take place. Although much progress has been made in studying the decomposition mechanisms of electrolytes and SEI formation and growth, as briefly reviewed in this work, much more effort is still required. The studies should be further extended to develop solutions that mitigate problems such as dendrite formation, formation of a “bad” SEI layer, and pulverization of the electrode. Regarding SEI formation and growth, despite numerous debates and suggestions, the microscopic details of the evolution of the organic components of the SEI layer based on specific electrolyte mixtures have not been fully elucidated at the anode/electrolyte interface and deserve intensive study.
## Acknowledgements
This work was supported by the Qatar National Research Fund (QNRF) through the National Priorities Research Program (NPRP 7-162-2-077). PPM and ZL acknowledge financial support from NSF Grant No. 1438431. Computational resources from the Texas A&M Supercomputing Center, the Brazos Supercomputing Cluster at Texas A&M University, and the Texas Advanced Computing Center at UT Austin are gratefully acknowledged.
## References
Turner, J. A. , 1999, “ A Realizable Renewable Energy Future,” Science, 285(5428), pp. 687–689. [PubMed] Gogotsi, Y. , and Simon, P.
, 2011, “ True Performance Metrics in Electrochemical Energy Storage,” Science, 334(6058), pp. 917–918. [PubMed] Goodenough, J. B. , and Kim, Y. , 2010, “ Challenges for Rechargeable Li Batteries,” Chem. Mat., 22(3), pp. 587–603. Aurbach, D. , Talyosef, Y. , Markovsky, B. , Markevich, E. , Zinigrad, E. , Asraf, L. , Gnanaraj, J. S. , and Kim, H. J. , 2004, “ Design of Electrolyte Solutions for Li and Li-Ion Batteries: A Review,” Electrochim. Acta, 50(2–3), pp. 247–254. Schaffner, B. , Schaffner, F. , Verevkin, S. P. , and Borner, A. , 2010, “ Organic Carbonates as Solvents in Synthesis and Catalysis,” Chem. Rev., 110(8), pp. 4554–4581. [PubMed] Lewandowski, A. , and Swiderska-Mocek, A. , 2009, “ Ionic Liquids as Electrolytes for Li-Ion Batteries-An Overview of Electrochemical Studies,” J. Power Sources, 194(2), pp. 601–609. Fergus, J. W. , 2010, “ Ceramic and Polymeric Solid Electrolytes for Lithium-Ion Batteries,” J. Power Sources, 195(15), pp. 4554–4569. Xu, K. , 2004, “ Nonaqueous Liquid Electrolytes for Lithium-Based Rechargeable Batteries,” Chem. Rev., 104(10), pp. 4303–4417. [PubMed] Zhang, S. S. , 2006, “ A Review on Electrolyte Additives for Lithium-Ion Batteries,” J. Power Sources, 162(2), pp. 1379–1394. Wagner, R. , Preschitschek, N. , Passerini, S. , Leker, J. , and Winter, M. , 2013, “ Current Research Trends and Prospects Among the Various Materials and Designs Used in Lithium-Based Batteries,” J. Appl. Electrochem., 43(5), pp. 481–496. Aurbach, D. , 2000, “ Review of Selected Electrode-Solution Interactions Which Determine the Performance of Li and Li Ion Batteries,” J. Power Sources, 89(2), pp. 206–218. Maeshima, H. , Moriwake, H. , Kuwabara, A. , and Fisher, C. A. J. , 2010, “ Quantitative Evaluation of Electrochemical Potential Windows of Electrolytes for Electric Double-Layer Capacitors Using AB Initio Calculations,” J. Electrochem. Soc., 157(6), pp. A696–A701. Halls, M. D. , and Tasaki, K. , 2010, “ High-Throughput Quantum Chemistry and Virtual Screening for Lithium Ion Battery Electrolyte Additives,” J. Power Sources, 195(5), pp. 1472–1478. Balbuena, P. B. , and Wang, Y. X. , 2004, Lithium-Ion Batteries: Solid-Electrolyte Interphase, Imperial College Press, London. Verma, P. , Maire, P. , and Novak, P. , 2010, “ A Review of the Features and Analyses of the Solid Electrolyte Interphase in Li-Ion Batteries,” Electrochim. Acta, 55(22), pp. 6332–6341. Peled, E. , 1983, “ Lithium Stability and Film Formation in Organic and Inorganic Electrolyte for Lithium Battery Systems,” Lithium Batteries, J. P. Gabano , ed., Academic Press, London. Aurbach, D. , Moshkovich, M. , Cohen, Y. , and Schechter, A. , 1999, “ The Study of Surface Film Formation on Noble-Metal Electrodes in Alkyl Carbonates/Li Salt Solutions, Using Simultaneous In Situ AFM, EQCM, FTIR, and EIS,” Langmuir, 15(8), pp. 2947–2960. Goodenough, J. B. , and Kim, Y. , 2011, “ Challenges for Rechargeable Batteries,” J. Power Sources, 196(16), pp. 6688–6694. Xu, W. , Wang, J. , Ding, F. , Chen, X. , Nasybulin, E. , Zhang, Y. , and Zhang, J.-G. , 2014, “ Lithium Metal Anodes for Rechargeable Batteries,” Energy Environ. Sci., 7(2), pp. 513–537. Wu, H. , and Cui, Y. , 2012, “ Designing Nanostructured Si Anodes for High Energy Lithium Ion Batteries,” Nano Today, 7(5), pp. 414–429. Tasaki, K. , Kanda, K. , Kobayashi, T. , Nakamura, S. , and Ue, M. , 2006, “ Theoretical Studies on the Reductive Decompositions of Solvents and Additives for Lithium-Ion Batteries Near Lithium Anodes,” J. Electrochem. Soc., 153(12), pp. A2192–A2197. 
Wang, Y. , and Balbuena, P. B. , 2002, “ Theoretical Insights Into the Reductive Decompositions of Propylene Carbonate and Vinylene Carbonate: A Density Functional Theory Study,” J. Phys. Chem. B, 106(17), pp. 4486–4495. Wang, Y. , Nakamura, S. , Ue, M. , and Balbuena, P. B. , 2001, “ Theoretical Studies to Understand Surface Chemistry on Carbon Anodes for Lithium-Ion Batteries: Reduction Mechanisms of Ethylene Carbonate,” J. Am. Chem. Soc., 123(47), pp. 11708–11718. [PubMed] Leung, K. , Qi, Y. , Zavadil, K. R. , Jung, Y. S. , Dillon, A. C. , Cavanagh, A. S. , Lee, S. H. , and George, S. M. , 2011, “ Using Atomic Layer Deposition to Hinder Solvent Decomposition in Lithium Ion Batteries: First-Principles Modeling and Experimental Studies,” J. Am. Chem. Soc., 133(37), pp. 14741–14754. [PubMed] Yu, J. M. , Balbuena, P. B. , Budzien, J. , and Leung, K. , 2011, “ Hybrid DFT Functional-Based Static and Molecular Dynamics Studies of Excess Electron in Liquid Ethylene Carbonate,” J. Electrochem. Soc., 158(4), pp. A400–A410. Heyd, J. , Scuseria, G. E. , and Ernzerhof, M. , 2006, “ Hybrid Functionals Based on a Screened Coulomb Potential,” J. Chem. Phys., 118(18), p 8207. Leung, K. , 2013, “ Electronic Structure Modeling of Electrochemical Reactions at Electrode/Electrolyte Interfaces in Lithium Ion Batteries,” J. Phys. Chem. C, 117(4), pp. 1539–1547. Kim, S. P. , van Duin, A. C. T. , and Shenoy, V. B. , 2011, “ Effect of Electrolytes on the Structure and Evolution of the Solid Electrolyte Interphase (SEI) in Li-Ion Batteries: A Molecular Dynamics Study,” J Power Sources, 196(20), pp. 8590–8597. Soto, F. A. , Ma, Y. , Martinez-DeLaHoz, J. M. , Seminario, J. M. , and Balbuena, P. B. , 2015, “ Formation and Growth Mechanisms of Solid-Electrolyte Interphase Layers in Rechargeable Batteries,” Chem. Mater., 27(23), pp. 7990–8000. Leung, K. , Soto, F. A. , Hankins, K. , Balbuena, P. B. , and Harrison, K. L. , 2016, “ Stability of Solid Electrolyte Interphase Components on Li Metal and Reactive Anode Material Surfaces,” J. Phys. Chem. C, 120(12), pp. 6302–6313. Omichi, K. , Ramos-Sanchez, G. , Rao, R. , Pierce, N. , Chen, G. , Balbuena, P. B. , and Harutyunyan, A. R. , 2015, “ Origin of Excess Capacity in Lithium-Ion Batteries Based on Carbon Nanostructures,” J. Electrochem. Soc., 162(10), pp. A2106–A2115. Leung, K. , 2013, “ Two-Electron Reduction of Ethylene Carbonate: A Quantum Chemistry Re-Examination of Mechanisms,” Chem. Phys. Lett., 568–569, pp. 1–8. Leung, K. , 2012, “ Electronic Structure Modeling of Electrochemical Reactions at Electrode/Electrolyte Interfaces in Lithium Ion Batteries,” J. Phys. Chem. C, 117(4), pp. 1539–1547. Leung, K. , and Budzien, J. L. , 2010, “ Ab Initio Molecular Dynamics Simulations of the Initial Stages of Solid-Electrolyte Interphase Formation on Lithium Ion Battery Graphitic Anodes,” Phys. Chem. Chem. Phys., 12(25), pp. 6583–6586. [PubMed] Ganesh, P. , Kent, P. R. C. , and Jiang, D.-E. , 2012, “ Solid-Electrolyte Interphase Formation and Electrolyte Reduction at Li-Ion Battery Graphite Anodes: Insights From First-Principles Molecular Dynamics,” J. Phys. Chem. C, 116(46), pp. 24476–24481. Peled, E. , Towa, D. B. , Merson, A. , and Burstein, L. , 2000, “ Microphase Structure of SEI on HOPG,” J. New Mater. Electrochem. Syst., 3(4), pp. 319–326. Tasaki, K. , 2005, “ Solvent Decompositions and Physical Properties of Decomposition Compounds in Li-Ion Battery Electrolytes Studied by DFT Calculations and Molecular Dynamics Simulations,” J. Phys. Chem. B, 109(7), pp. 2920–2933. 
[PubMed] Sun, H. , 1998, “ COMPASS, an ab Initio Force Field Optimized for Condensed-Phase Applications-Overview With Details on Alkane and Benzene Compounds,” J. Phys. Chem. A, 102(38), p. 7338. Wang, Y. X. , and Balbuena, P. B. , 2003, “ Adsorption and 2-Dimensional Association of Lithium Alkyl Dicarbonates on the Graphite Surface through O-...Li+...p (Arene) Interactions,” J. Phys. Chem. B, 107(23), pp. 5503–5510. Vatamanu, J. , Borodin, O. , and Smith, G. D. , 2011, “ Molecular Dynamics Simulation Studies of the Structure of a Mixed Carbonate/LiPF6 Electrolyte Near Graphite Surface as a Function of Electrode Potential,” J. Phys. Chem. C, 116(1), pp. 1114–1121. Li, T. , and Balbuena, P. B. , 2000, “ Theoretical Studies of the Reduction of Ethylene Carbonate,” Chem. Phys. Lett., 317(3–5), pp. 421–429. Aurbach, D. , Levi, M. D. , Levi, E. , and Schechter, A. , 1997, “ Failure and Stabilization Mechanisms of Graphite Electrodes,” J. Phys. Chem. B, 101(12), pp. 2195–2206. Jorn, R. , Kumar, R. , Abraham, D. P. , and Voth, G. A. , 2013, “ Atomistic Modeling of the Electrode-Electrolyte Interface in Li-Ion Energy Storage Systems: Electrolyte Structuring,” J. Phys. Chem. C, 117(8), pp. 3747–3761. Zheng, J. Y. , Zheng, H. , Wang, R. , Ben, L. B. , Lu, W. , Chen, L. W. , Chen, L. Q. , and Li, H. , 2014, “ 3D Visualization of Inhomogeneous Multi-Layered Structure and Young's Modulus of the Solid Electrolyte Interphase (SEI) on Silicon Anodes for Lithium Ion Batteries,” Phys. Chem. Chem. Phys., 16(26), pp. 13229–13238. [PubMed] Methekar, R. N. , Northrop, P. W. C. , Chen, K. , Braatz, R. D. , and Subramanian, V. R. , 2011, “ Kinetic Monte Carlo Simulation of Surface Heterogeneity in Graphite Anodes for Lithium-Ion Batteries: Passive Layer Formation,” J. Electrochem. Soc., 158(4), pp. A363–A370. Santhanagopalan, S. , Guo, Q. , Ramadass, P. , and White, R. E. , 2006, “ Review of Models for Predicting the Cycling Performance of Lithium Ion Batteries,” J. Power Sources, 156(2), pp. 620–628. Sikha, G. , Popov, B. N. , and White, R. E. , 2004, “ Effect of Porosity on the Capacity Fade of a Lithium-Ion Battery Theory,” J. Electrochem. Soc., 151(7), pp. A1104–A1114. Subramanian, V. R. , Boovaragavan, V. , Ramadesigan, V. , and Arabandi, M. , 2009, “ Mathematical Model Reformulation for Lithium-Ion Battery Simulations: Galvanostatic Boundary Conditions,” J. Electrochem. Soc., 156(4), pp. A260–A271. Deng, J. , Wagner, G. J. , and Muller, R. P. , 2013, “ Phase Field Modeling of Solid Electrolyte Interface Formation in Lithium Ion Batteries,” J. Electrochem. Soc., 160(3), pp. A487–A496. Ploehn, H. J. , Ramadass, P. , and White, R. E. , 2004, “ Solvent Diffusion Model for Aging of Lithium-Ion Battery Cells,” J. Electrochem. Soc., 151(3), pp. A456–A462. Colclasure, A. M. , and Kee, R. J. , 2010, “ Thermodynamically Consistent Modeling of Elementary Electrochemistry in Lithium-Ion Batteries,” Electrochim. Acta, 55(28), pp. 8960–8973. Colclasure, A. M. , Smith, K. A. , and Kee, R. J. , 2011, “ Modeling Detailed Chemistry and Transport for Solid-Electrolyte-Interface (SEI) Films in Li–Ion Batteries,” Electrochim. Acta, 58(0), pp. 33–43. Christensen, J. , and Newman, J. , 2004, “ A Mathematical Model for the Lithium-Ion Negative Electrode Solid Electrolyte Interphase,” J. Electrochem. Soc., 151(11), pp. A1977–A1988. Pinson, M. B. , and Bazant, M. Z. , 2013, “ Theory of SEI Formation in Rechargeable Batteries: Capacity Fade, Accelerated Aging and Lifetime Prediction,” J. Electrochem. Soc., 160(2), pp. A243–A250. 
Yan, J. , Zhang, J. , Su, Y.-C. , Zhang, X.-G. , and Xia, B.-J. , 2010, “ A Novel Perspective on the Formation of the Solid Electrolyte Interphase on the Graphite Electrode for Lithium-Ion Batteries,” Electrochim. Acta, 55(5), pp. 1785–1794. Yan, J. , Su, Y.-C. , Xia, B.-J. , and Zhang, J. , 2009, “ Thermodynamics in the Formation of the Solid Electrolyte Interface on the Graphite Electrode for Lithium-Ion Batteries,” Electrochim. Acta, 54(13), pp. 3538–3542. Yan, J. , Xia, B.-J. , Su, Y.-C. , Zhou, X.-Z. , Zhang, J. , and Zhang, X.-G. , 2008, “ Phenomenologically Modeling the Formation and Evolution of the Solid Electrolyte Interface on the Graphite Electrode for Lithium-Ion Batteries,” Electrochim. Acta, 53(24), pp. 7069–7078. Wang, L. G. , and Clancy, P. , 2001, “ Kinetic Monte Carlo Simulation of the Growth of Polycrystalline Cu Films,” Surf. Sci., 473(1–2), pp. 25–38. Wang, Z. Y. , Li, Y. H. , and Adams, J. B. , 2000, “ Kinetic Lattice Monte Carlo Simulation of Facet Growth Rate,” Surf. Sci., 450(1–2), pp. 51–63. Drews, T. O. , Braatz, R. D. , and Alkire, R. C. , 2004, “ Coarse-Grained Kinetic Monte Carlo Simulation of Copper Electrodeposition With Additives,” Int. J. Multiscale Comput. Eng., 2(2), pp. 313–327. Li, X. , Drews, T. O. , Rusli, E. , Xue, F. , He, Y. , Braatz, R. , and Alkire, R. , 2007, “ Effect of Additives on Shape Evolution During Electrodeposition I. Multiscale Simulation With Dynamically Coupled Kinetic Monte Carlo and Moving-Boundary Finite-Volume Codes,” J. Electrochem. Soc., 154(4), pp. D230–D240. Zheng, Z. , Stephens, R. M. , Braatz, R. D. , Alkire, R. C. , and Petzold, L. R. , 2008, “ A Hybrid Multiscale Kinetic Monte Carlo Method for Simulation of Copper Electrodeposition,” J. Comput. Phys., 227(10), pp. 5184–5199. Turner, C. H. , Zhang, Z. T. , Gelb, L. D. , and Dunlap, B. I. , 2015, “ Kinetic Monte Carlo Simulation of Electrochemical Systems,” Rev. Comput. Chem., 28, pp. 175–204. Aurbach, D. , Gofer, Y. , Ben-Zion, M. , and Aped, P. , 1992, “ The Behaviour of Lithium Electrodes in Propylene and Ethylene Carbonate: The Major Factors that Influence Li Cycling Efficiency,” J. Electroanal. Chem., 339(1), pp. 451–471. Benitez, L. , Cristancho, D. , Seminario, J. M. , Martinez de la Hoz, J. M. , and Balbuena, P. B. , 2014, “ Electron Transfer Through Solid-Electrolyte Interphase Layers Formed on Si Anodes of Li-ion Batteries,” Electrochim. Acta, 140(SI), pp. 250–257.
## Figures
Fig. 1 Schematic figure of an electrochemical cell illustrating the chemical potentials at the anode (μa) and cathode (μc), the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO), and the associated energy gap (Eg). The condition for electrochemical stability of the electrolyte is that Eg > eVoc, where eVoc is the energy associated with the open circuit voltage (Voc) and e is the electron charge.
Fig. 2 Schematic mosaic picture of the SEI layer as proposed by earlier work [15–17]
Fig. 3 Schematic representation of proposed solvent reduction mechanisms during the first stages of SEI formation. An EC molecule is used as an example; any other cyclic carbonate molecule could follow the same mechanism. Red, gray, white, and purple spheres represent oxygen, carbon, hydrogen, and lithium, respectively. (See online article for full color representation.)
Fig. 4 Schematic representation of the processes described in the KMC simulations by Methekar and co-workers (Adapted from Ref. [45])
Fig. 5 (a) Effect of the electrochemical reduction rate k2 and the EDC formation rate k4 on the growth rate of the EDC film. Both k2 and k4 vary from 10^-8 site^-1 s^-1 to 10^8 site^-1 s^-1. (b) SEI growth rate versus SEI formation rate k4 for different electrochemical reduction rates. The units of the growth rate (dδ/dt) are Å s^-1.
Fig. 6 (a) SEI thickness variation versus time for different thickness-dependent reduction rates and (b) species fraction at the SEI/electrolyte interface with α = 10^7. Here the reduction rate at the clean anode surface is set to k2(0) = 10^8 site^-1 s^-1. Snapshots show the top view of the SEI film in the coarse-grained model with α = 10^7 and various thicknesses. Black: EC*, red: c-EC, blue: o-EC, cyan: Li2EDC. (See online article for full color representation.)
2017-11-24 21:56:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.577908992767334, "perplexity": 4823.980613111902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808972.93/warc/CC-MAIN-20171124214510-20171124234510-00323.warc.gz"}
https://www.tutorialspoint.com/How-to-make-text-bold-in-HTML
# How to make text bold in HTML?

To make text bold in HTML, use the <b>…</b> tag or the <strong>…</strong> tag. Both tags render the text in bold, but the <strong> tag also gives the text strong semantic importance, while the <b> tag is a purely presentational (physical) markup element and does not add semantic importance. Just keep in mind that you can get the same visual result in HTML with the CSS font-weight property.

## Example

You can try to run the following code to make text bold in HTML using the <strong>…</strong> tag:

<!DOCTYPE html>
<html>
  <head>
    <title>HTML text bold</title>
  </head>
  <body>
    <h2>Our Products</h2>
    <p>Developed 10 <strong>Android</strong> apps till now.</p>
  </body>
</html>

## Example

You can try to run the following code to make text bold in HTML using the font-weight property:

<!DOCTYPE html>
<html>
  <head>
    <title>HTML text bold</title>
  </head>
  <body>
    <h1>Tutorialspoint</h1>
    <p style="font-weight: bold;">Simply Easy Learning</p>
  </body>
</html>
2023-02-08 01:12:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29822638630867004, "perplexity": 13945.368961052445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500664.85/warc/CC-MAIN-20230207233330-20230208023330-00375.warc.gz"}
https://pqdtopen.proquest.com/doc/2308259499.html?FMT=ABS
# Dissertation/Thesis Abstract

Roots of Polynomial Congruences
by Welsh, Matthew C, Ph.D., Rutgers The State University of New Jersey, School of Graduate Studies, 2019, 89; 13856719

Abstract (Summary)

In this dissertation we derive and then investigate some consequences of a parametrization of the roots of polynomial congruences. To motivate the later chapters, we begin in chapter two by reviewing known results, presented in our own style, about the roots of the quadratic congruence $\mu^2 \equiv -1 \pmod m$. We also review in chapter two applications of the parametrization to the equidistribution and well-spacing of these roots. In chapter three we generalize this classical parametrization of the roots of a quadratic congruence to the cubic congruence $\mu^3 \equiv 2 \pmod m$. Several new phenomena are revealed in our derivation, but a special case of the cubic parametrization is seen to be roughly analogous to the quadratic case. We use this case to prove a spacing property analogous to the well-spacing of the quadratic roots, but unfortunately between the points $\left( \frac{\mu}{m}, \frac{\mu^2}{m}\right)$ instead of the $\frac{\mu}{m}$ themselves. In chapter four we consider this special case for an arbitrary polynomial congruence of any degree, deriving a parametrization for the roots of these congruences. And just as in chapter three, we are able to prove a spacing property for certain points related to the roots. Finally, in chapter five we return to the congruence $\mu^3 \equiv 2 \pmod m$ to explore some of the new phenomena mentioned above with a view towards obtaining equidistribution and well-spacing results. We unfortunately do not prove any concrete results in these directions.

Indexing (document details)

Advisor: Iwaniec, Henryk
Committee: Miller, Steve, Kontorovich, Alex, Pitt, Nigel
School: Rutgers The State University of New Jersey, School of Graduate Studies
Department: Mathematics
School Location: United States -- New Jersey
Source: DAI-B 81/3(E), Dissertation Abstracts International
Source Type: DISSERTATION
Subjects: Mathematics
Keywords: Approximation of roots, Binary cubic forms, Co-type zeta function, Parametrization of roots, Root and ideal correspondence, Spacing and distribution of roots
Publication Number: 13856719
ISBN: 9781088326251
2020-12-02 10:11:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6254631280899048, "perplexity": 1303.6196060830093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141706569.64/warc/CC-MAIN-20201202083021-20201202113021-00522.warc.gz"}
https://www.physicsforums.com/threads/qm-show-a-relation-velocity-of-a-free-particle-related.683435/
# QM, show a relation (velocity of a free particle related) 1. Apr 5, 2013 ### fluidistic 1. The problem statement, all variables and given/known data Hi guys, I'm stuck at some step in a QM exercise. Here it is: Consider a free particle of mass m that moves along the x-axis (1 dimensional). Show that $\frac{dA}{dt}=\frac{2 \hbar ^2}{m^2}\int \frac{\partial \psi}{\partial x} \frac{\partial \psi ^*}{\partial x}dx=\text{constant}$. Where $A(t)=\frac{i\hbar}{m} \int x \left ( \psi \frac{\partial \psi ^*}{\partial x} - \psi ^* \frac{\partial \psi}{\partial x} \right ) dx$. In fact $A(t)=\frac{d \langle x^2 \rangle}{dt}$ but I already showed that it's worth the integral I just wrote (the result matches the answer). 2. Relevant equations (1)Schrödinger's equation: $i\hbar \frac{\partial \psi}{\partial t}=\frac{(p^2 \psi)}{2m}$, where p is the momentum operator and worth $-ih \frac{\partial \psi}{\partial x }$. Taking the complex conjugate on both sides, I get that $\psi ^*$ satisfies: (2) $-i \hbar \frac{\partial \psi ^*}{\partial t}=\frac{p^2 \psi ^*}{2m}$ From (1) I get that (3) $\frac{\partial \psi}{\partial t }=-\frac{i(p^2\psi)}{2m\hbar}$ and (4)$\frac{\partial \psi ^*}{\partial t }=\frac{i(p^2\psi ^*)}{2m\hbar}$ 3. The attempt at a solution $\frac{dA(t)}{dt}= \frac{d}{dt} \left [ \frac{i\hbar}{m} \int x \left ( \psi \frac{\partial \psi ^*}{\partial x} - \psi ^* \frac{\partial \psi }{\partial x} \right ) dx \right ]= \frac{ih}{m} \int x \underbrace{ \frac{d}{dt} \left ( \psi \frac{\partial \psi ^*}{\partial x} - \psi ^* \frac{\partial \psi }{\partial x} \right ) dx } _{K}$ (line 5) I try to calculate K first. $K=\frac{\partial \psi}{\partial t} \frac{\partial \psi ^*}{\partial x}+\psi \frac{\partial ^2 \psi ^*}{\partial x^2}- \left ( \frac{\partial \psi^*}{\partial t} \frac{\partial \psi }{\partial x} + \psi ^* \frac{\partial ^2 \psi }{\partial x ^2}\right )$. (line 6) Now I use (3) and (4) to get $K=-\frac{i}{\hbar} \frac{(p^2\psi)}{2m} \frac{\partial \psi ^*}{\partial x}+\psi \frac{\partial ^2 \psi ^*}{\partial t \partial x }- \left ( \frac{i}{\hbar} \frac{(p^2 \psi ^* )}{2m} \frac{\partial \psi }{\partial x }+ \psi ^* \frac{\partial ^2 \psi}{ \partial t \partial x } \right )$ (line 7) Here I assume that $\frac{\partial ^2 \psi }{\partial t \partial x }=\frac{\partial ^2 \psi}{\partial x \partial t}$. 
If so, I get $K=-\frac{i}{\hbar} \frac{(p^2 \psi)}{2m} \frac{\partial \psi ^*}{\partial x}+\psi \frac{\partial ^2 \psi ^*}{\partial x \partial t}- \left ( \frac{i}{\hbar} \frac{(p^2 \psi ^*)}{2m} \frac{\partial \psi}{\partial x} + \psi ^* \frac{\partial ^2 \psi }{\partial x \partial t} \right )$ (line 8) Now I use (3) and (4) to get $K=-\frac{i}{\hbar} \frac{(p^2 \psi)}{2m} \frac{\partial \psi ^*}{\partial x} + \psi \frac{\partial}{\partial x} \left [ \frac{i(p^2 \psi ^*)}{2m \hbar} \right ] - \{ \frac{i}{\hbar} \frac{(p^2 \psi ^*)}{2m\hbar} \frac{\partial \psi}{\partial x} + \psi ^* \frac{\partial }{\partial x } \left [ -\frac{i (p^2 \psi)}{2m\hbar} \right ] \}$ (line 9) I factor out i/(2m hbar) to get $K=\frac{i}{2m\hbar} \{ \psi \frac{\partial }{\partial x} (p^2 \psi ^*) - \frac{\partial \psi ^*}{\partial x} (p^2 \psi ) - \frac{\partial \psi }{\partial x} (p^2 \psi ^*) -\psi ^* \frac{\partial }{\partial x} (p^2 \psi) \}$ (line 10) Now I replace $p^2$ by $-\hbar ^2 \frac{\partial ^2 }{\partial x^2}$ to get $K=-\frac{i\hbar}{2m} \left ( \psi \frac{\partial ^3 \psi ^*}{\partial x^3} - \frac{\partial \psi ^*}{\partial x} \frac{\partial ^2 \psi }{\partial x^2} - \frac{\partial \psi}{\partial x} \frac{\partial ^2 \psi ^*}{\partial x^2} - \psi ^* \frac{\partial ^3 \psi}{\partial x^3} \right )$ (line 11) Now I replace the value of K into where I took it (that is, line 11 into line 5), to get $\frac{dA(t)}{dt} = \frac{\hbar^2}{2m^2} \int x \left ( \psi \frac{\partial ^3 \psi ^*}{\partial x^3} - \frac{\partial \psi ^*}{\partial x} \frac{\partial ^2 \psi }{\partial x^2} - \frac{\partial \psi}{\partial x} \frac{\partial ^2 \psi ^*}{\partial x^2} - \psi ^* \frac{\partial ^3 \psi}{\partial x^3} \right ) dx$. (line 12). This is where I'm stuck. I thought about integration by parts, but I'm not seeing anything nice. I've rewritten $\frac{dA(t)}{dt}$ as $\frac{\hbar^2}{2m^2} \int x \left [ \frac{\partial }{\partial x} \left ( \psi \frac{\partial ^2 \psi ^* }{\partial x^2} \right ) -2 \frac{\partial \psi}{\partial x} \frac{\partial ^2 \psi ^*}{\partial x^2 } - \frac{\partial }{\partial x} \left ( \psi ^* \frac{\partial ^2 \psi}{\partial x^2} \right ) \right ] dx$. I'm still totally stuck. I've been extremely careful and what I wrote in latex contains no typo error. All is exactly as in my draft so any mistake/error you see, it's not a typo but an algebra one that I made. Any help is greatly appreciated, I really want to solve that exercise. Thanks :) 2. Apr 5, 2013 ### TSny Check the sign of the last term on the right. Otherwise, things look good to me up to this point. After substituting for p2 I think you have the right idea to continue with some integration by parts. 3. Apr 5, 2013 ### fluidistic Oh thank you very much TSny, I didn't see this sign mistake. I fixed it and in consequences this changes all the following lines. I've reached the result! I had to integrate by parts twice for an integral and once for another. I also had to assume that $x\frac{\partial \psi ^*}{\partial x} \frac{\partial \psi}{\partial \psi} \big | _{-\infty}^\infty$ is worth 0 (and another 2 similar relations), which isn't that obvious to me. I mean I have a big feeling that it's worth 0 but I'm not really sure why. I know that $\frac{\partial \psi}{\partial x}$ and $\frac{\partial \psi ^*}{\partial x}$ must tend to 0 at infinities but if I multiply their product by x, then I'm not really sure. I also have to show that $\frac{dA(t)}{dt}$ is constant. 
I guess I just have to differentiate it with respect to time and see that it equals 0... I'll do it, I don't think this will be hard or that long. All in all the problem was extremely lengthy in terms of algebra!
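A compact way to see the constancy, assuming as above that the boundary terms vanish for a normalizable wave packet: integrating by parts once gives $\int \frac{\partial \psi}{\partial x}\frac{\partial \psi^*}{\partial x}dx = -\int \psi^* \frac{\partial^2 \psi}{\partial x^2}dx = \frac{\langle p^2 \rangle}{\hbar^2}$, so the target expression is just $\frac{dA}{dt}=\frac{2\langle p^2\rangle}{m^2}$. For a free particle $p^2$ commutes with $H=p^2/2m$, so $\langle p^2\rangle$ is time independent and $\frac{dA}{dt}$ is indeed constant.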
2017-11-23 19:23:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9089117050170898, "perplexity": 385.8359533361187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806856.86/warc/CC-MAIN-20171123180631-20171123200631-00761.warc.gz"}
https://projecteuler.net/problem=673
## Beds and Desks Published on Sunday, 2nd June 2019, 07:00 am; Solved by 169 ### Problem 673 At Euler University, each of the $n$ students (numbered from 1 to $n$) occupies a bed in the dormitory and uses a desk in the classroom. Some of the beds are in private rooms which a student occupies alone, while the others are in double rooms occupied by two students as roommates. Similarly, each desk is either a single desk for the sole use of one student, or a twin desk at which two students sit together as desk partners. We represent the bed and desk sharing arrangements each by a list of pairs of student numbers. For example, with $n=4$, if $(2,3)$ represents the bed pairing and $(1,3)(2,4)$ the desk pairing, then students 2 and 3 are roommates while 1 and 4 have single rooms, and students 1 and 3 are desk partners, as are students 2 and 4. The new chancellor of the university decides to change the organisation of beds and desks: he will choose a permutation $\sigma$ of the numbers $1,2,\ldots,n$ and each student $k$ will be given both the bed and the desk formerly occupied by student number $\sigma(k)$. The students agree to this change, under the conditions that: 1. Any two students currently sharing a room will still be roommates. 2. Any two students currently sharing a desk will still be desk partners. In the example above, there are only two ways to satisfy these conditions: either take no action ($\sigma$ is the identity permutation), or reverse the order of the students. With $n=6$, for the bed pairing $(1,2)(3,4)(5,6)$ and the desk pairing $(3,6)(4,5)$, there are 8 permutations which satisfy the conditions. One example is the mapping $(1, 2, 3, 4, 5, 6) \mapsto (1, 2, 5, 6, 3, 4)$. With $n=36$, if we have bed pairing: $(2,13)(4,30)(5,27)(6,16)(10,18)(12,35)(14,19)(15,20)(17,26)(21,32)(22,33)(24,34)(25,28)$ and desk pairing $(1,35)(2,22)(3,36)(4,28)(5,25)(7,18)(9,23)(13,19)(14,33)(15,34)(20,24)(26,29)(27,30)$ then among the $36!$ possible permutations (including the identity permutation), 663552 of them satisfy the conditions stipulated by the students. The downloadable text files beds.txt and desks.txt contain pairings for $n=500$. Each pairing is written on its own line, with the student numbers of the two roommates (or desk partners) separated with a comma. For example, the desk pairing in the $n=4$ example above would be represented in this file format as: 1,3 2,4 With these pairings, find the number of permutations that satisfy the students' conditions. Give your answer modulo $999\,999\,937$.
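One possible starting point (an illustrative Octave sketch, not a full solution): read the two pairing files and walk the components of the combined bed/desk graph. Because each student has at most one roommate and at most one desk partner, every component is a path or cycle whose edges alternate between the two pairing types, and the permutations allowed by the students' conditions correspond to the symmetries of this two-coloured structure (including swaps between isomorphic components); the counting step over the components is deliberately left out here.

```octave
% Sketch: build the combined bed/desk pairing graph for n = 500 students and
% list its connected components. File names follow the problem statement.
n = 500;
beds  = dlmread('beds.txt', ',');    % each row: [student_a, student_b]
desks = dlmread('desks.txt', ',');

bedmate  = zeros(1, n);              % 0 means "single room"
deskmate = zeros(1, n);              % 0 means "single desk"
bedmate(beds(:, 1))   = beds(:, 2);   bedmate(beds(:, 2))   = beds(:, 1);
deskmate(desks(:, 1)) = desks(:, 2);  deskmate(desks(:, 2)) = desks(:, 1);

visited = false(1, n);
components = {};
for s = 1:n
  if visited(s), continue; end
  queue = s;  visited(s) = true;  comp = [];
  while ~isempty(queue)                % breadth-first walk over both edge types
    v = queue(1);  queue(1) = [];
    comp(end + 1) = v;
    for w = [bedmate(v), deskmate(v)]
      if w > 0 && ~visited(w)
        visited(w) = true;
        queue(end + 1) = w;
      end
    end
  end
  components{end + 1} = comp;          % one alternating path or cycle
end
printf('%d students, %d components\n', n, numel(components));
```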
2019-08-22 10:21:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45968881249427795, "perplexity": 1122.6488758796352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317037.24/warc/CC-MAIN-20190822084513-20190822110513-00468.warc.gz"}
https://dsp.stackexchange.com/questions/21942/calculating-affine-transformation-matrix-from-4-corner-points
# Calculating Affine Transformation matrix from 4 corner points

I'm trying to develop a sudoku solver, using image processing, OCR with neural networks and such. I've got the bare solver working: I can successfully interpret numbers and letters from a picture. All of this is done in GNU Octave. Now I am trying to make it more robust, and I've found this question on stack overflow which greatly helped with most of the image processing. But now I am left with the last part - transforming the single "squares" so that they are uniform and (mostly) square.

What I would like to have: a method that from one set of $(X,Y)$ corner coordinates, e.g. $(73,86), (117,88), (70,127), (115,128)$, would transform the image with interpolation to the set of coordinates, e.g. $(0,0), (50,0), (0,50), (50,50)$.

I have looked at the functions supplied with Octave and there are these possibilities, but both of them require me to know the affine matrix. Is there something I've overlooked with these functions, or could someone here point me in a direction to develop such a function?

If you already have the pairs of corresponding points from each coordinate system, you can use the function fitgeotrans to calculate the transformation matrix. Once you have got the transformation matrix, the transformation can be done with imwarp. Note that you are going to use a homography transformation rather than an affine transformation in this case (because you have specified 4 corresponding points, which may cause trapezoidal distortion). The code would be like this...

% assume that the input image is in the variable inputImage.
nPixelsX = 50; % assume that there is a square image with 50 pixels for each axis after transformation.
nPixelsY = 50;
movingPoints = [73,86; ...
117,88; ...
70,127; ...
115,128]; % set image's corresponding points
fixedPoints = [0,0; ...
nPixelsY,0; ...
0,nPixelsX; ...
nPixelsY,nPixelsX]; % set ideal coordinate's corresponding points
transformationType = 'Projective'; % this should be a projective (homography) transformation
tform = fitgeotrans(movingPoints,fixedPoints,transformationType); % make transformation matrix
resultReference = imref2d([nPixelsY, nPixelsX]);
desiredImage = imwarp(inputImage, tform, 'OutputView', resultReference); % transform

• Thank you for your answer, this theoretically answers my question - but your answer is for Matlab (don't know why, but a moderator added a matlab tag to my post). Octave has not yet implemented the fitgeotrans function. – Greg Mar 12 '15 at 9:37
• Ah, I see. Sorry, but I haven't done it in Octave yet... Let's hope an Octave expert comes along. – salto Mar 13 '15 at 9:22
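Since the comments note that Octave has not implemented fitgeotrans, here is a rough sketch (mine, not from the original answer; shown in NumPy purely to illustrate the linear system) of how the 3x3 projective matrix can be estimated directly from the 4 point pairs; the same 8x8 solve can be written in Octave with the backslash operator.

import numpy as np

def homography_from_4_points(src, dst):
    """src, dst: 4x2 arrays of (x, y) points; returns 3x3 H with H @ [x, y, 1] ~ [u, v, 1]."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h1 x + h2 y + h3) / (h7 x + h8 y + 1), and similarly for v, rearranged to be linear in h
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

src = np.array([[73, 86], [117, 88], [70, 127], [115, 128]], float)
dst = np.array([[0, 0], [50, 0], [0, 50], [50, 50]], float)
H = homography_from_4_points(src, dst)
p = H @ np.array([73, 86, 1.0])
print(p[:2] / p[2])   # should be close to (0, 0)

With H in hand, each output pixel of the 50x50 square can be filled by mapping it back through the inverse of H and interpolating in the source image.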
2020-09-19 06:23:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7813102602958679, "perplexity": 1212.4611017127602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400190270.10/warc/CC-MAIN-20200919044311-20200919074311-00580.warc.gz"}
https://www.vtmarkets.com/my-bm/trading/trade-conditions/spreads-commission/
Trade CFDs on FX, Gold and more

Spread is essentially the difference between the bid and the ask price. As traders always trade one currency for another, forex currencies are always quoted in terms of what the price is currently compared to another. To make things easy they are written in pairs, e.g. AUD/USD (Australian Dollar/US Dollar) – AUD is the 'base currency' and USD is called the 'counter'. If it took $1.50 AUD to buy $1 USD, this would be written as 1.5/1.

If you choose to open and trade on a VT Markets RAW ECN account, you will find some of the lowest spreads in the industry. Trade on institutional grade liquidity, sourced directly from some of the world's biggest banks, with absolutely no mark-up applied at our end. This sort of liquidity, and access to spreads as low as zero, is no longer just the domain of hedge funds.

To display live spreads in MT4:
1. Open the Market Watch on the VT Markets MT4 Platform.
2. Right click inside the Market Watch and click 'Spreads'.
3. You will now have a 4th column displaying the Spread for each Forex currency pair, Commodity or Indices market.

Normal spreads on STP and ECN accounts

For Standard STP account: EURUSD from 1.3, GBPUSD from 1.8, AUDUSD from 1.6, USDJPY from 1.5
For RAW ECN account: EURUSD from 0.0, GBPUSD from 0.6, AUDUSD from 0.4, USDJPY from 0.5

ECN commission charges

Please refer to the following table for a full list of RAW ECN commission charges, in the denomination of your trading account. We have split the table between commission per 0.01 lots and 1.00 lots to make it easier for you to calculate, depending on your trading volume.

Currency | Commission per 0.01 LOTS (1,000 base currency) | Commission per 1 LOT (100,000 base currency)
AUD Australian Dollar | AUD $0.03 (round turn $0.06) | AUD $3 (round turn $6)
USD US Dollar | USD $0.03 (round turn $0.06) | USD $3 (round turn $6)
GBP Pound Sterling | GBP £0.02 (round turn £0.04) | GBP £2 (round turn £4)
EUR Euro | EUR €0.025 (round turn €0.05) | EUR €2.5 (round turn €5)
CAD Canadian Dollar | CAD $0.03 (round turn $0.06) | CAD $3 (round turn $6)
2023-01-28 22:45:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27224746346473694, "perplexity": 11176.806279006561}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499695.59/warc/CC-MAIN-20230128220716-20230129010716-00212.warc.gz"}
https://learn.careers360.com/ncert/question-describe-the-following-i-acetylation/
# Q 12.16     Describe the following.         (i)      Acetylation

The introduction of an acetyl functional group ($CH_{3}CO-$) into an organic compound is called acetylation. Acetic anhydride ($(CH_{3}CO)_{2}O$) and acetyl chloride ($CH_{3}COCl$) are the most commonly used acetylating agents. The reaction is carried out in the presence of a base such as pyridine.
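For example (a standard illustration, not part of the original answer): an alcohol reacts with acetic anhydride to give the corresponding acetate ester, $R{-}OH + (CH_{3}CO)_{2}O \rightarrow R{-}O{-}COCH_{3} + CH_{3}COOH$, with pyridine present as the base to take up the acetic acid formed.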
2020-04-01 09:17:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6882031559944153, "perplexity": 11870.160414217373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505550.17/warc/CC-MAIN-20200401065031-20200401095031-00540.warc.gz"}
https://plainmath.net/7982/suppose-that-_bca-equal-equal-equal-what-degree-measure-_abc-01510101651
# Suppose that $$m\angle BCA=70^{\circ}$$, and x=33 cm and y = 47 cm. What is the degree measure of $$\angle ABC$$?

Non-right triangles and trigonometry

Suppose that $$m\angle BCA={70}^{\circ}$$, and x=33 cm and y = 47 cm. What is the degree measure of $$\angle ABC$$?

2020-12-28

$$AB=\sqrt{AC^{2}+BC^{2}-2\cdot AC\cdot BC\cos\angle BCA}$$
$$AB=\sqrt{47^{2}+33^{2}-2\cdot 47\cdot 33\cos 70^{\circ}}$$
$$AB=\sqrt{2237.05}$$
$$AB=47.29$$

Using the cosine law,
$$\angle ABC=\cos^{-1}\left(\frac{BC^{2}+AB^{2}-AC^{2}}{2\cdot BC\cdot AB}\right)$$
$$\angle ABC=\cos^{-1}\left(\frac{33^{2}+47.29^{2}-47^{2}}{2\cdot 33\cdot 47.29}\right)$$
$$\angle ABC=\cos^{-1}(0.3576)$$
$$\angle ABC=69.04^{\circ}$$
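As a quick consistency check (not part of the original solution): by the Law of Sines, $$\sin\angle ABC = \frac{AC\sin\angle BCA}{AB} = \frac{47\sin 70^{\circ}}{47.29} \approx 0.934,$$ which gives $$\angle ABC \approx 69.0^{\circ}$$, in agreement with the value above.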
2021-07-23 17:02:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7671347260475159, "perplexity": 1349.3658894494527}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046149929.88/warc/CC-MAIN-20210723143921-20210723173921-00500.warc.gz"}
https://mathematica.stackexchange.com/questions/148843/using-list-correlate-with-three-2d-arrays
# Using ListCorrelate with three 2D arrays

Working off from a question I asked previously, where the answer to my problem was to use ListCorrelate, I have been playing around with trying to use ListCorrelate again, but this time for 3 two-dimensional grids. By this I mean the following: take an array

array = Developer`ToPackedArray @ RandomChoice[{-1, 1}, {100, 100}];

the code

count[dim_] := Outer[
Times,
Join[Range[dim], Reverse[Range[dim-1]]],
Join[Range[dim], Reverse[Range[dim-1]]]
]
r2 = ListCorrelate[array, array, {-1, 1}, 0] / count[100] //Transpose;

with some manipulating of the output gives you all two-point correlations of the array. To do this we need to enter two 2D arrays. What I want to do now is to work out all triangular correlations on array. Hence I would need to somehow enter three array parameters into ListCorrelate - which I cannot do. I have looked at the working of ListCorrelate, which can be found here in another question. I have fiddled around, and can't seem to get anywhere. The only way I can think of is to do the following:

r2 = ListCorrelate[array, array, {-1, 1}, 0] / count[100] //Transpose;
r3 = ListCorrelate[array, array, {-1, 1}, 0] / count[100] //Transpose;
r23 = ListCorrelate[r2, r3, {-1, 1}, 0] / count[200] //Transpose;

count here works out the average of each correlation - see my previous question for better background. The problem here is that I do not think that r23 is the triangular (or three-point) correlations of array.

Diagrammatically, this is what I would like ListCorrelate to do: in the normal use of ListCorrelate you would only have the red and green boxes. I now want to add the blue box. Essentially this translates to: moving the green triangle to the different positions. This is naturally only one of the possible box selections or triangle positionings as well as orientations. In my pictures I haven't been very thorough, as ListCorrelate makes use of a kernel; the linked explanation of how ListCorrelate works explains this with much greater clarity. My purpose was to give an understanding of how I would like to implement ListCorrelate to calculate all triangular correlations on array, just as the normal use of it is able to calculate all linear (or two-point) correlations on array.
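(Added aside, not a Mathematica answer.) To pin down the quantity being asked for: for one fixed pair of offsets (d1, d2), the three-point correlator is just the average of a[x] * a[x + d1] * a[x + d2] over all sites x. A brute-force sketch in NumPy, assuming periodic wrap-around rather than the zero padding used above; the efficiency question in the post is how to get this for all offsets at once.

import numpy as np

def three_point(a, d1, d2):
    """Mean of a[x] * a[x + d1] * a[x + d2] over all sites x (periodic boundaries)."""
    return np.mean(a
                   * np.roll(a, shift=(-d1[0], -d1[1]), axis=(0, 1))
                   * np.roll(a, shift=(-d2[0], -d2[1]), axis=(0, 1)))

rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=(100, 100))
print(three_point(a, (0, 1), (1, 0)))   # one triangle shape; loop over offsets for all shapes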
2019-10-20 12:14:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4090680181980133, "perplexity": 1129.901008157697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986707990.49/warc/CC-MAIN-20191020105426-20191020132926-00515.warc.gz"}
https://nforum.ncatlab.org/discussion/10920/
• CommentRowNumber1. • CommentAuthorTobyBartels • CommentTimeFeb 10th 2020

Put some content here, as a disambiguation page.

1. It doesn't really matter, but I'm not convinced by the "common theme" sentence… . Surely the point is more like that it describes something which cannot be counted?

• CommentRowNumber3. • CommentAuthorTobyBartels • CommentTimeFeb 11th 2020 • (edited Feb 11th 2020)

Mostly it means something larger than every natural number, although not always. I put a sentence saying that and noting a couple of exceptions (one each way). You could still argue (even with those exceptions) that it means something larger than whatever can be counted, for some sense of counting.

• CommentRowNumber4. • CommentAuthorTodd_Trimble • CommentTimeFeb 11th 2020

"Larger than" or "beyond". I don't think of ordinary real numbers as things one counts, but when thinking about calculus, I do think of $\infty$ and $-\infty$ as ideal points beyond all finite real numbers. Similarly with valuation fields, etc.
2021-12-05 16:24:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8114564418792725, "perplexity": 2779.434400575122}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363215.8/warc/CC-MAIN-20211205160950-20211205190950-00287.warc.gz"}
https://manuals.cc4s.org/user-manual/objects/CoulombIntegrals.html
# CoulombIntegrals

## 1.Brief description

$V^{pq}_{sr} = \sum_{G} {\Gamma^\ast}^{pG}_s \Gamma^q_{rG}$

CoulombIntegrals $$V^{pq}_{sr}$$ are computed from the CoulombVertex, which can be represented in an arbitrary basis set depending on the employed interface (Hummel, Tsatsoulis, and Grüneis 2017). CoulombIntegrals are computed from the CoulombVertex using the VertexCoulombIntegrals algorithm, for example with the following yaml input file:

- name: VertexCoulombIntegrals
  in:
    slicedCoulombVertex: CoulombVertex
  out:
    coulombIntegrals: CoulombIntegrals

We note that the VertexCoulombIntegrals algorithm of Cc4s provides a recipe for the computation of a set of CoulombIntegrals $$\{V^{ab}_{ij}, V^{ai}_{bj}, V^{ab}_{cd} ... \}$$. The respective numerical evaluation of these integrals is performed only if the numerical values of the tensors are needed by an algorithm.

## 2.Literature

Hummel, Felix, Theodoros Tsatsoulis, and Andreas Grüneis. 2017. "Low Rank Factorization of the Coulomb Integrals for Periodic Coupled Cluster Theory." The Journal of Chemical Physics 146 (12): 124105. http://doi.org/10.1063/1.4977994.

Created: 2022-09-19 Mon 15:00
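As a purely schematic illustration of the displayed contraction (array names, shapes and index ordering here are invented and are not part of the Cc4s interface), the sum over the auxiliary index G is a single tensor contraction:

import numpy as np

nG, nP = 8, 4                                   # made-up auxiliary / orbital dimensions
Gamma = (np.random.rand(nG, nP, nP)             # Gamma[G, p, s], a stand-in for the Coulomb vertex
         + 1j * np.random.rand(nG, nP, nP))
# V[p, q, s, r] = sum_G conj(Gamma[G, p, s]) * Gamma[G, q, r]
V = np.einsum('gps,gqr->pqsr', np.conj(Gamma), Gamma)
print(V.shape)                                  # (nP, nP, nP, nP)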
2022-10-03 13:36:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6859747171401978, "perplexity": 4915.435903159519}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00520.warc.gz"}
http://math.stackexchange.com/questions/199057/inequality-involving-up-1-u?answertab=active
# Inequality involving $|u|^{p-1} u$ For $u,v \in L^q(\Omega)$ with $q \ge p \ge 1$, how does one show that: \begin{aligned} \||u|^{p-1}u - |v|^{p-1}v\|_{L^{p/q}} & \le C\,\|(|u|^{p-1} + |v|^{p-1})\,|u-v|\,\|_{L^{p/q}}\\ & \le C\,(\|u\|^{p-1}_{L^q} + \|v\|^{p-1}_{L^q})\,\|u-v\|_{L^q} \end{aligned} Thanks. - the first inequality is well explained here math.stackexchange.com/questions/9960/… –  uforoboa Sep 19 '12 at 10:12 ... and the second one is Hölder with $\frac 1q + \frac 1\alpha = \frac 1{\frac pq}$ giving $\alpha = \frac pq + \frac 1{p-1}$. –  martini Sep 19 '12 at 10:15 Ok, I nearly see it now. Except for the mean-value theorem step... –  David Chapel Sep 19 '12 at 10:21 For all real $r\ge 1$ one has $r^{p-1} - 1 \leq c_p(r - 1)(r^{p-1} + 1)$ Indeed, applying MVT to $f(x)=x^{p-1}$ on the interval $[1,r]$ we get $$f(r)-f(1)=f'(\xi)(r-1)=(p-1)\xi^{p-2}(r-1),\qquad \exists \xi\in (1,r)$$ Here $\xi^{p-2}\le \max(r^{p-2},1)$ where we take $\max$ because $p-2$ could be either negative or positive. Hence, $$r^{p-1} - 1 \leq (p-1)(r-1) \max(r^{p-2},1)\leq (p-1)(r-1) (r^{p-1}+1)$$
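A possibly useful aside (a sketch, not part of the original question or answer): the pointwise bound behind the first inequality can also be obtained in one line by applying the MVT to $f(t) = |t|^{p-1}t$, whose derivative is $f'(t) = p|t|^{p-1}$. For real $a, b$ this gives, for some $\xi$ between $a$ and $b$ (so $|\xi| \le \max(|a|,|b|)$), $$\big||a|^{p-1}a - |b|^{p-1}b\big| = p|\xi|^{p-1}|a-b| \le p\big(|a|^{p-1} + |b|^{p-1}\big)|a-b|,$$ and applying this pointwise with $a = u(x)$, $b = v(x)$ yields the integrand estimate used in the first inequality.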
2014-09-24 03:06:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9836587905883789, "perplexity": 942.8741633375704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657141651.17/warc/CC-MAIN-20140914011221-00174-ip-10-234-18-248.ec2.internal.warc.gz"}
http://www.gamedev.net/topic/659275-vertices-defined-clockwisecounter-clockwise/
# vertices defined clockwise/counter-clockwise

### #1 oreste (posted 28 July 2014 - 04:57 PM)

Windows 7, VS 2013 Express, DirectX June 2010

In one tutorial lesson the code renders a triangle made up of transformed vertices. Vertices are defined in a clockwise fashion, this being the default in DirectX. If I define them in a counter-clockwise order the triangle isn't rendered. In the next lesson we go thru the geometry pipeline, so we start with untransformed vertices. Here the order in which the vertices are defined doesn't seem to matter; the triangle is rendered. WHY?

Note: I plotted these vertices in model space on a cartesian coordinate system, (0,0) being where X and Y intersect. To the right of (0,0) X is >0, and above (0,0) Y is >0. The z coordinate is the same for each vertex, so essentially the triangle is 2D. The next lesson is entitled "Rendering depth"; that's where we'll take care of z with the z-buffer.

### #2 NumberXaero (posted 28 July 2014 - 06:01 PM)

Probably because the code changed the cull mode state somewhere before rendering, or disabled culling.

SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);

### #3 oreste (posted 28 July 2014 - 09:19 PM)

Thanx for your answer. For me that means that the problem lies with me. Cull mode is not modified in the code! I thought that the error might be how I plotted the vertices in model space (2D cartesian coordinates)! I'll work on it more. Thanx again!
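(General aside, not DirectX-specific.) The winding of three projected 2D vertices is determined by the sign of the cross product of the edge vectors (the signed area), which is what the cull mode tests after projection; a minimal sketch:

# Sign of the z-component of the cross product of the triangle's edge vectors
# tells you the winding of three 2D (screen-space) vertices.
def winding(p0, p1, p2):
    cross = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
    return "counter-clockwise" if cross > 0 else "clockwise" if cross < 0 else "degenerate"

print(winding((0, 0), (1, 0), (0, 1)))   # counter-clockwise in a y-up coordinate system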
2014-09-21 08:11:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23144343495368958, "perplexity": 2478.031551880214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657135080.9/warc/CC-MAIN-20140914011215-00201-ip-10-234-18-248.ec2.internal.warc.gz"}
http://www.ncatlab.org/nlab/show/Hisham+Sati
# nLab Hisham Sati Hisham Sati (website) is working on non-perturbative phenomena in string theory/M-theory using tools of cohomology, homotopy theory, algebraic topology and higher category theory. His thesis advisor was Michael Duff. Hisham Sati is assistant professor in the Mathematics Department at Pittsburgh University. From his website: My research is interdisciplinary and lies in the intersection of differential geometry, algebraic topology, and mathematical/theoretical physics. I am mainly interested in geometric and topological structures arising from quantum (topological) field theory, string theory, and M-theory. This includes orientations with respect to generalized cohomology theories, and corresponding description via higher geometric, topological, and categorical notions of bundles. ## Selected publications • part I, Proc. Symp. Pure Math. 81 (2010), 181-236 (arXiv:1001.5020), part II: Twisted $String$ and $String^c$ structures, J. Australian Math. Soc. 90 (2011), 93-108 (arXiv:1007.5419); part III: Twisted higher structures, Int. J. Geom. Meth. Mod. Phys. 8 (2011), 1097-1116 (arXiv:1008.1755) on cohomology and twisted cohomology structures in string theory/M-theory. See also twisted smooth cohomology in string theory. on the Diaconescu-Moore-Witten anomaly interpreted in integral Morava K-theory: On F4 and Cayley plane-fiber bundles in M-theory: • H.S. $\mathbb{O}P^2$-bundles in M-theory, Commun. Num. Theor. Phys 3:495-530,2009 (arXiv:0807.4899) On mathematical foundations of quantum field theory and perturbative string theory: category: people Revised on May 29, 2014 22:57:47 by Urs Schreiber (217.39.7.253)
2014-07-31 19:35:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6841903328895569, "perplexity": 3395.4314640932735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273663.2/warc/CC-MAIN-20140728011753-00246-ip-10-146-231-18.ec2.internal.warc.gz"}
https://agentfoundations.org/item?id=445
by Patrick LaVictoire 797 days ago | link

ASP has only been formalized in ways that don't translate to modal universes, so it's only the analogous extensions that could apply there. In the formulation here, it's not enough that the agent one-box; it must do so provably in less than $$N$$ steps.

I think that both the valuations $$\nu_{\textsf{ZFC+Sound(ZFC)}+A()=1}(U()=1M)$$ and $$\nu_{\textsf{ZFC+Sound(ZFC)}+A()=1}(U()=0)$$ will probably be about $$1/2$$, but I'm not sure.
2017-11-24 23:56:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3490525484085083, "perplexity": 5188.506769664498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809160.77/warc/CC-MAIN-20171124234011-20171125014011-00241.warc.gz"}
https://www.math.uni-bielefeld.de/LAG/man/081.html
Seàn McGarraghy: Symmetric powers of symmetric bilinear forms [email protected] Submission: 2002, Mar 09 We study symmetric powers of classes of symmetric bilinear forms in the Witt-Grothendieck ring of a field of characteristic not equal to 2, and derive their basic properties and compute their classical invariants. We relate these to earlier results on exterior powers of such forms. 1991 AMS Subject Classification: 11E04, 11E81 Keywords and Phrases: exterior power, symmetric power, pre-$\lambda$-ring, $\lambda$-ring, $\lambda$-ring augmentation, symmetric bilinear form, Witt ring, Witt-Grothendieck ring. Full text: dvi.gz 32 k, dvi 80 k, ps.gz 258 k, pdf.gz 149 k, pdf 200 k.
2019-07-17 04:51:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9025217890739441, "perplexity": 3710.844657380207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525046.5/warc/CC-MAIN-20190717041500-20190717063500-00516.warc.gz"}
http://es.wikidoc.org/index.php/Kent_distribution
# Kent distribution

Figure: Three point sets sampled from the FB5 distribution. The mean directions are shown with arrows. The $\kappa$ parameter is highest for the red set.

The 5-parameter Fisher-Bingham distribution or Kent distribution, named after Ronald Fisher, Christopher Bingham, and John T. Kent, is a probability distribution on the two-dimensional unit sphere $S^{2}$ in $\mathbb{R}^{3}$. It is the analogue on the two-dimensional unit sphere of the bivariate normal distribution with an unconstrained covariance matrix. The distribution belongs to the field of directional statistics.

The probability density function $f(\mathbf{x})$ of the Kent distribution is given by:

$f(\mathbf{x}) = \frac{1}{c(\kappa,\beta)}\exp\{\kappa\,\boldsymbol{\gamma}_{1}\cdot\mathbf{x} + \beta[(\boldsymbol{\gamma}_{2}\cdot\mathbf{x})^{2} - (\boldsymbol{\gamma}_{3}\cdot\mathbf{x})^{2}]\}$

where $\mathbf{x}$ is a three-dimensional unit vector and $c(\kappa,\beta)$ is a normalizing constant. The parameter $\kappa$ (with $\kappa > 0$) determines the concentration or spread of the distribution, while $\beta$ (with $0 \leq 2\beta < \kappa$) determines the ellipticity of the contours of equal probability. The higher the $\kappa$ and $\beta$ parameters, the more concentrated and elliptical the distribution will be, respectively. Vector $\gamma_{1}$ is the mean direction, and vectors $\gamma_{2}, \gamma_{3}$ are the major and minor axes. The latter two vectors determine the orientation of the equal-probability contours on the sphere, while the first vector determines the common center of the contours. The Kent distribution was proposed by John T. Kent in 1982, and is used in geology and bioinformatics.
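(Illustration only.) The unnormalised density from the formula above can be evaluated directly; the normalising constant $c(\kappa,\beta)$ is omitted and the parameter values below are made up:

import numpy as np

def fb5_unnormalised(x, kappa, beta, g1, g2, g3):
    # exp{ kappa g1.x + beta [ (g2.x)^2 - (g3.x)^2 ] }, without 1/c(kappa, beta)
    x, g1, g2, g3 = (np.asarray(v, float) for v in (x, g1, g2, g3))
    return np.exp(kappa * (g1 @ x) + beta * ((g2 @ x) ** 2 - (g3 @ x) ** 2))

# made-up example: mean direction along z, major/minor axes along x and y, kappa=10, beta=2
print(fb5_unnormalised([0, 0, 1], 10.0, 2.0, [0, 0, 1], [1, 0, 0], [0, 1, 0]))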
2020-02-26 00:06:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 15, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8848223090171814, "perplexity": 436.1101187073753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146176.73/warc/CC-MAIN-20200225233214-20200226023214-00144.warc.gz"}
https://socratic.org/questions/58741ae811ef6b1dd30f1655
# Is the energy of the photon coming in greater than or less than the energy of the electronic transition? Jun 22, 2017 Neither. It has the energy given by the difference in energy, ${E}_{f} - {E}_{i}$. When photons are absorbed to excite an electron from initial energy ${E}_{i}$ to final energy ${E}_{f}$, they must account for the difference in energy, $\Delta E = {E}_{f} - {E}_{i}$ in order for the electron to transition upwards by that energy. Suppose you wanted to go from ${E}_{1}$ to ${E}_{2}$ in the hydrogen atom. Then, ${E}_{\text{photon" = DeltaE = -"13.6 eV}} \left(\frac{1}{2} ^ 2 - \frac{1}{1} ^ 2\right)$ would be the energy that the one absorbed photon needs to be in order to get the electron from $n = 1$ (where ${E}_{1} = - \text{13.6 eV}$) to $n = 2$ (where ${E}_{2} = - \text{3.4 eV}$). If you had $\text{13.6 eV}$ for that photon, you wouldn't be going from ${E}_{1}$ to ${E}_{2}$; you'd be going way past that, since the difference is only $+ \text{10.2 eV}$. And since the difference is that, you can't have $\text{3.4 eV}$ available and bridge that gap.
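Spelling out the arithmetic in that expression (the step is skipped above): $E_{\text{photon}} = -\text{13.6 eV}\left(\frac{1}{4} - 1\right) = -\text{13.6 eV} \times \left(-\frac{3}{4}\right) = +\text{10.2 eV}$, which is the gap quoted above.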
2022-12-04 11:26:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 16, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6776480078697205, "perplexity": 175.9676973341495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710972.37/warc/CC-MAIN-20221204104311-20221204134311-00224.warc.gz"}
https://logicng.org/documentation/explanations/mus/
# MUS

A MUS is a minimal unsatisfiable subset for a given set of formulas. It contains only those formulas which lead to the given set of formulas being unsatisfiable. In other words: if you remove at least one of the formulas in the MUS from the given set of formulas, your set of formulas is satisfiable. This means a MUS is locally minimal. Thus, given a set of formulas which is unsatisfiable, you can compute its MUS and have one locally minimal conflict description of why it is unsatisfiable.

## MUS Algorithms

In LogicNG, two algorithms are implemented to compute the MUS: a deletion-based algorithm DeletionBasedMUS and an insertion-based algorithm PlainInsertionBasedMUS. The main idea of the deletion-based algorithm is to start with all given formulas and iteratively test each formula for relevance. A formula is relevant for the conflict if its removal yields a satisfiable set of formulas. Only the relevant formulas are kept. The main idea of the insertion-based algorithm is to start with an empty set and incrementally add propositions to the MUS which have been identified to be relevant. The default MUS generation MUSGeneration uses the deletion-based algorithm.

## Computing the MUS

Consider the following propositions:

List<Proposition> props = new ArrayList<>();
props.add(new StandardProposition(f.parse("A | C & D")));
props.add(new StandardProposition(f.parse("D & (E | F)")));
props.add(new StandardProposition(f.parse("A & B")));
props.add(new StandardProposition(f.parse("B => ~C")));
props.add(new StandardProposition(f.parse("C")));

You can compute the MUS on a set of propositions in the following way:

UNSATCore<Proposition> core = new MUSGeneration().computeMUS(props, f);

The result is:

UNSATCore{
isMUS=true,
propositions=[
StandardProposition{formula=B => ~C, description=},
StandardProposition{formula=C, description=},
StandardProposition{formula=A & B, description=}
]
}

indicating that the conflict arises from the following subset of formulas:

{A & B, B => ~C, C}

As you can see, no proposition in this set can be removed without turning the set satisfiable, so in fact it is locally minimal. Note that it only makes sense to compute a MUS for an unsatisfiable formula set. If you call the MUS computation on a satisfiable formula set, an exception will be thrown.

## MUS Configuration

You can configure the MUS generation using the MUSConfig in order to

1. specify the algorithm for the computation, and
2. control the computation using a SATHandler which is passed to the SAT solver computing the MUS under the hood.

For example, the following code snippet defines a MUS configuration with the plain insertion algorithm (see above) and a TimeoutSATHandler which stops the computation after 100 ms.

MUSConfig config = MUSConfig.builder()
.algorithm(MUSConfig.Algorithm.PLAIN_INSERTION)
.handler(new TimeoutSATHandler(100))
.build();
UNSATCore<Proposition> core = musGeneration.computeMUS(props, f, config);

In this case, the result is the same for both algorithms. In general, this is not always the case.

MUSes Are Not Unique

A set of formulas can have multiple MUSes. Consider the following example:

List<Proposition> props = new ArrayList<>();
{B, A | ~B, ~A}
{B, ~D, ~B | C, ~C | D}
2023-03-26 00:12:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.863618016242981, "perplexity": 1913.772739877744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945376.29/warc/CC-MAIN-20230325222822-20230326012822-00658.warc.gz"}
https://www.jiskha.com/display.cgi?id=1173705904
# maths-complex numbers

by using the substitution w = z^3, find all the solutions to z^6 - 8z^3 +25 = 0 in complex numbers, and describe them in polar form, using @(theta) to denote the angle satisfying tan@ = 3/4 ( note simply leave @ as it is, don't calculate it). i got up to z^3 = 4+3i and 4-3i then got stuck !

4 + 3i = 5 Exp[i theta]

The equation z^3 = Q for real positive Q has three solutions:

z = cuberoot[Q] Exp[2 pi n i/3] for n = 0, 1 and 2,

because Exp[2 pi n i/3]^3 = Exp[2 pi n i] = 1

So, in this case you find:

z = 5^(1/3) Exp[i theta/3 + 2 pi n i/3]
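For completeness (the reply above only treats z^3 = 4+3i explicitly): since 4 ± 3i = 5 Exp[±i@] with tan@ = 3/4, the six solutions are z = 5^(1/3) Exp[i(@/3 + 2 pi n/3)] and z = 5^(1/3) Exp[i(-@/3 + 2 pi n/3)] for n = 0, 1, 2; all six have modulus 5^(1/3), and the arguments come in conjugate pairs.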
2018-01-20 13:26:10
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8579082489013672, "perplexity": 2009.6709423888044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889617.56/warc/CC-MAIN-20180120122736-20180120142736-00468.warc.gz"}
https://socratic.org/questions/at-what-speed-does-the-earth-travel-on-its-orbit-around-the-sun#210927
# At what speed does the earth travel on its orbit around the sun? Jan 10, 2016 The Earth moves at a little over 100,000 km/hr in its orbit around the sun. #### Explanation: The Earth is in a roughly circular orbit with a radius of 149,597,871 km. The circumference of the orbit is $2 \cdot \pi \cdot r$ or 939,951,145 km. Since the Earth completes the orbit in one year (365.25 days, or 8766 hours), the speed is 107,227 km/hr, or 66,628 mph.
2021-11-27 06:25:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9800018072128296, "perplexity": 1346.3807305375096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358118.13/warc/CC-MAIN-20211127043716-20211127073716-00306.warc.gz"}
https://webwork.libretexts.org/webwork2/html2xml?answersSubmitted=0&sourceFilePath=Library/ASU-topics/setRateChange/rich2.pg&problemSeed=1234567&courseID=anonymous&userID=anonymous&course_password=anonymous&showSummary=1&displayMode=MathJax&problemIdentifierPrefix=102&language=en&outputformat=libretexts
A car is timed going down a track. Table 1 shows the distance the car is from the start line after it initially takes off. Table 2 shows the distance the car is from the finish line after it crosses the line and eventually comes to a stop.

Table 1
Time (s)    Distance (ft)
0           0
2           28
4           74
6           195
8           444
10          795

Table 2
Time (s)    Distance (ft)
0           0
2           292
4           445
6           519
8           559
10          585

1) From Table 1, calculate the average speed between $t=0$ and $t=2$:
2) From Table 1, calculate the average speed between $t=4$ and $t=6$:
3) From Table 1, calculate the average speed between $t=8$ and $t=10$:
4) From Table 2, calculate the average speed between $t=0$ and $t=2$:
5) From Table 2, calculate the average speed between $t=4$ and $t=6$:
6) From Table 2, calculate the average speed between $t=8$ and $t=10$:
7) Describe the behavior of the function in Table 1:
8) Describe the behavior of the function in Table 2:
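(Worked sketch, reading the numbers straight from the tables above.) Average speed over an interval is the change in distance divided by the change in time; for instance, in Python:

# Average speed between two sample times, (d2 - d1) / (t2 - t1), for both tables.
table1 = {0: 0, 2: 28, 4: 74, 6: 195, 8: 444, 10: 795}    # distance from start line (ft)
table2 = {0: 0, 2: 292, 4: 445, 6: 519, 8: 559, 10: 585}  # distance past finish line (ft)

def avg_speed(table, t1, t2):
    return (table[t2] - table[t1]) / (t2 - t1)

for t1, t2 in [(0, 2), (4, 6), (8, 10)]:
    print(f"Table 1, t={t1}..{t2}: {avg_speed(table1, t1, t2)} ft/s")
    print(f"Table 2, t={t1}..{t2}: {avg_speed(table2, t1, t2)} ft/s")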
2021-09-21 20:30:20
{"extraction_info": {"found_math": true, "script_math_tex": 12, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.765558660030365, "perplexity": 249.20739620595293}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057227.73/warc/CC-MAIN-20210921191451-20210921221451-00545.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-2-section-2-3-polynomial-functions-and-their-graphs-exercise-set-page-348/6
## Precalculus (6th Edition) Blitzer The given function $h\left( x \right)=8{{x}^{3}}-{{x}^{2}}+\frac{2}{x}$is not a polynomial. A function is said to be a polynomial function if $g\left( x \right)={{a}_{n}}{{x}^{n}}+{{a}_{n-1}}{{x}^{n-1}}+\cdots +{{a}_{2}}{{x}^{2}}+{{a}_{1}}x+{{a}_{0}}$ , Where ${{a}_{n}},{{a}_{n-1}},\ldots ,{{a}_{2}},{{a}_{1}},{{a}_{0}}$ are any real numbers with ${{a}_{n}}\ne 0$ and n is any non-negative integer. Now, the function g(x) is called a polynomial function of degree n; its coefficient is called the leading coefficient. The given function $h\left( x \right)=8{{x}^{3}}-{{x}^{2}}+\frac{2}{x}$ , simplifies into $h\left( x \right)=8{{x}^{3}}-{{x}^{2}}+2{{x}^{-1}}$. The power of $x$ is −1, which gives a negative value for $n$ and hence the function does not satisfy all the conditions to be a polynomial. Hence, the function $h\left( x \right)=8{{x}^{3}}-{{x}^{2}}+\frac{2}{x}$ is not a polynomial.
2019-12-13 23:21:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9322101473808289, "perplexity": 115.19709790346799}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540569332.20/warc/CC-MAIN-20191213230200-20191214014200-00245.warc.gz"}
https://www.gamedev.net/forums/topic/325914-pedantic-c-take-3/
Pedantic C++ Take 3

struct Stuff
{
    int i;
    //What is this type/classification of operator called?
    operator int&() {return i;}
};

Conversion operator. Do I get a cookie? [smile]

That's what I thought they were called, but it's a hybrid term.

How do you mean? I have not seen them called anything else.

The standard calls them conversion functions (12.3.2 -1)

I've found a few instances of them informally/incorrectly being referred to as "cast operators", however a "cast operator" would be static_cast<>() and friends (and the C-style cast), as distinct from what you are referring to.

My C++ standard calls them conversion functions 12.3.2-1

I was wondering if anyone would respond cast operators because it's used so much to describe that type of operator. I've seen them called that almost everywhere. Since I was looking for conversion operator, I had trouble tracking down 'conversion function'. Seems like conversion operator isn't too far off the mark though, but as Andrew notes cast operators really are something else!

I call it a cast operator because it is an operator that is called when casting. Just like operator + is called when applying + on an object.

But it is also called when doing an implicit conversion =)
2018-08-17 00:18:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24217760562896729, "perplexity": 5903.594257459139}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211316.39/warc/CC-MAIN-20180816230727-20180817010727-00075.warc.gz"}
https://tleise.people.amherst.edu/Planimeter/part1.html
A Geometrical Explanation of the Polar Planimeter

On a bright sunny morning you sit at the breakfast table sipping your orange juice. Your very active puppy, needing attention after the long lonely night, jumps up on your lap and, splash!, the orange juice forms a puddle on the floor. In order to properly berate the naughty beast, you want to determine the extent of the damage, i.e., the exact area of the carpet that is now stained a yellowish-orange.

Let's assume that the puddle's boundary is a simple closed curve C. By simple, I simply mean a curve which doesn't intersect itself (no figure eights allowed!) and which bounds a single connected region (measure distinct splash areas separately). Denote the area inside this curve by the letter A.

The planimeter has an elbow at Y, a wheel on its arm YZ and a fixed pivot point at X. I claim that the area A equals the length of the arm YZ times the circumference of the wheel times the number of revolutions that the wheel makes as you trace Z one full time around the curve C. It turns out that this is only true if X is outside the curve C and the arms XY and YZ never have to rotate past a full circle (you want the planimeter to be completely to one side of the curve, not surrounded by it).

The proof: Let N be the number of revolutions of the wheel, counting counterclockwise as positive and clockwise as negative. Let L be the length of the arm YZ; let R be the distance between the elbow and the wheel; and let r be the radius of the wheel. Picture the region which the arm YZ passes over as its endpoint Z traces around the curve C. Let the boundary of this region minus the region inside C be denoted by C'. If the wheel turns counterclockwise, we'll say that YZ sweeps out positive area. If the wheel turns clockwise, we'll say that YZ sweeps out negative area. With this convention, the total area that the arm YZ sweeps out during a traversal of C is then the area bounded by (C+C') minus the area bounded by C', since the arm must backtrack over the region in C' in order to return to its starting position. This difference of areas is exactly the area A bounded by C, so now we just need to determine the total area swept out by YZ and that will be equal to A.

Move the arm YZ just a little bit, with the endpoint Z following along the curve C. Decompose this motion into two pieces, a translation and a rotation. Let D be the distance the wheel rolls. When the arm YZ is rotating this distance will be R times the angle change dθ, since the circumference of a sector is the radius times the angle. The area of a parallelogram is the height times the width, and the area of a sector of a circle is one half the angle times the radius squared. Now we can compute the area swept out by YZ during this small motion:

D = R dθ + h, so h = D - R dθ.

dA = L² dθ/2 + hL = L² dθ/2 + DL - RL dθ = DL + dθ(L²/2 - LR).

Since the arm YZ returns to its original position, the total change in angle is zero. Therefore the total area A will equal L times D, where the total distance D that the wheel rolls equals the number of revolutions N times the circumference of the wheel.

A = (2πrN)L.

Note that if you do have to find the area inside a figure eight, just measure each of the two regions separately and add the results together.

The geometric proof for this simple planimeter is adapted from that given by George Jennings in Modern Geometry with Applications, published by Springer-Verlag in 1994.
This book contains several other such delightful applications, though the planimeter is my personal favorite.
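As a sanity check on A = (2πrN)L, here is a short numerical sketch of a polar planimeter tracing a circle. It is my own illustration, not part of Jennings' treatment, and every geometric value in it (the arm lengths a and L, the wheel offset R, the pivot X, and the test curve) is made up. The wheel roll D is accumulated as the component of the wheel point's displacement perpendicular to the arm YZ, so L times |D| should come out equal to the enclosed area.

import numpy as np

# Geometry of a hypothetical polar planimeter (all values assumed for the sketch).
a = 1.5                      # length of arm XY (fixed pivot to elbow)
L = 1.5                      # length of arm YZ (elbow to tracer point Z)
R = 0.75                     # distance from the elbow Y to the wheel, along YZ
X = np.array([0.0, 0.0])     # fixed pivot, chosen outside the curve C

# Curve C: a circle of radius 0.5 centred at (2, 0); its true area is pi/4.
t = np.linspace(0.0, 2.0 * np.pi, 20001)
Z = np.stack([2.0 + 0.5 * np.cos(t), 0.5 * np.sin(t)], axis=1)

def elbow(z):
    """Elbow position Y: intersection of the circles |Y - X| = a and |Y - z| = L
    (one of the two solutions, chosen consistently along the whole trace)."""
    d = z - X
    dist = np.linalg.norm(d)
    f = (dist**2 + a**2 - L**2) / (2.0 * dist)
    h = np.sqrt(max(a**2 - f**2, 0.0))
    u = d / dist
    return X + f * u + h * np.array([-u[1], u[0]])

Y = np.array([elbow(z) for z in Z])
W = Y + (R / L) * (Z - Y)                              # wheel contact point on arm YZ

arm = (Z - Y) / np.linalg.norm(Z - Y, axis=1, keepdims=True)
normal = np.stack([-arm[:, 1], arm[:, 0]], axis=1)     # unit normal to the arm

# Wheel roll: component of the wheel's displacement perpendicular to the arm.
dW = np.diff(W, axis=0)
mid_normal = 0.5 * (normal[:-1] + normal[1:])
D = np.sum(np.sum(dW * mid_normal, axis=1))

print("L * |D|   =", L * abs(D))      # planimeter reading
print("true area =", np.pi * 0.25)    # area of the traced circle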
2018-07-22 14:00:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8653861284255981, "perplexity": 605.1794031200498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593302.74/warc/CC-MAIN-20180722135607-20180722155607-00021.warc.gz"}
https://www.physicsforums.com/threads/distribution-of-electrons-below-the-fermi-energy.203144/
# Homework Help: Distribution of electrons below the Fermi energy

1. Dec 7, 2007

### taishar

I feel dumb that I can't figure this out. I'm sure it's something simple that I'm just not seeing, but it's really frustrating.

1. The problem statement, all variables and given/known data

How many electrons (in percent of the total number of electrons per mole) lie kBT (in eV) below the Fermi energy? Take Ef = 5 eV and T = 300 K.

2. Relevant equations

Not quite sure, since the Fermi function did not work.

3. The attempt at a solution

I tried using the Fermi function and end up with values around 30%. The answer (from the back of the book) is $$\Delta$$N/Ntot = 0.566%.

Any ideas? Thanks!

2. Dec 7, 2007

### 2Tesla

Since Ef is much larger than kT, you wouldn't expect 30% of the electrons to lie between Ef - kT and Ef, right? You'd expect a much smaller fraction, like the answer from the book. When you integrated the Fermi function to get $$\Delta$$N and Ntot, what were the energy boundaries of your integrals?
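Following up on 2Tesla's hint, here is a small numerical sketch (my own, not from the thread) using the free-electron density of states g(E) proportional to sqrt(E). Whether you evaluate the occupied density of states at the single point Ef - kT or integrate it over the slice [Ef - kT, Ef], the fraction comes out well under 1%, the same order as the book's 0.566%; the exact figure depends on which approximation the book intends.

import numpy as np

kB = 8.617e-5          # Boltzmann constant in eV/K
T, EF = 300.0, 5.0     # values from the problem statement
kT = kB * T            # about 0.0259 eV

g = lambda E: np.sqrt(E)                              # free-electron DOS, up to a constant
f = lambda E: 1.0 / (np.exp((E - EF) / kT) + 1.0)     # Fermi-Dirac occupation

N_tot = (2.0 / 3.0) * EF**1.5        # integral of sqrt(E) from 0 to EF (T = 0 filling)

# (a) point estimate: occupied states in a slice of width kT, integrand taken at EF - kT
dN_point = g(EF - kT) * f(EF - kT) * kT

# (b) numerical integral of g(E) f(E) over the slice [EF - kT, EF]
E = np.linspace(EF - kT, EF, 100001)
dN_int = np.mean(g(E) * f(E)) * kT

print(f"point estimate: {100 * dN_point / N_tot:.3f} %")   # about 0.57 %
print(f"slice integral: {100 * dN_int / N_tot:.3f} %")     # about 0.48 %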
2018-04-23 10:06:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.513946533203125, "perplexity": 1209.4142167187567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945940.14/warc/CC-MAIN-20180423085920-20180423105920-00508.warc.gz"}
https://staff.fnwi.uva.nl/a.l.kret/Seminar/seminar.html
# Arithmetic and Algebraic Geometry seminar

## University of Amsterdam

### Practical information

The seminar takes place every week on Wednesday, from 11:00 to 12:00, starting September 13th. The lectures will take place in room F3.20 at the Nikhef. The seminar is organized by Arno Kret (ArnoKret at gmail dot com) and Mingmin Shen (M.Shen at uva dot nl).

### Upcoming lectures

Click on the names of the speakers below to see the abstracts.

### Past events

#### June 13, 2018.

Ramla Abdellatif: Iwahori-Hecke algebras and masures for split Kac-Moody groups

Abstract: Let $$F$$ be a non-Archimedean local field and $$G$$ be the group of $$F$$-rational points of a connected reductive group defined over $$F$$. The study of (complex smooth) representations of $$G$$ involves various tools of rather different natures. These include in particular induction functors, Hecke algebras (seen as convolution algebras or as intertwining algebras) and Bruhat-Tits buildings. When seeing Kac-Moody groups as a natural generalization of reductive groups, one can wonder how far the setting developed for reductive groups can be extended to the Kac-Moody case. Thanks to Rousseau, Gaussent-Rousseau and Bardy-Panse-Gaussent-Rousseau, there is a suitable generalization of Bruhat-Tits buildings (called masures) as well as working definitions of spherical and Iwahori-Hecke algebras. Nevertheless, these algebras are not really fully satisfying as they do not, for instance, satisfy the analogue of Bernstein's theorem in this setting. Another frustrating gap was that there was so far no natural construction attaching a Hecke algebra to a suitable analogue of open compact subgroups. In this talk, we discuss some results, obtained in collaboration with Auguste Hebert, addressing these questions for split Kac-Moody groups. In particular, we explain why the Iwahori-Hecke algebra as defined by Rousseau and his collaborators is not the right generalization of the usual Iwahori-Hecke algebra, as its center is 'too small'; then we define a suitable generalization (using a sort of completion process) that satisfies a Bernstein-like theorem. If enough time is left, we will also explain how to attach a suitable Hecke algebra to each type 0 spherical facet of the masure that gives back the well-known Hecke algebras in the reductive case.

#### May 30, 2018.

Stefan Schreieder: Stably irrational hypersurfaces of small slopes

Abstract: We show that a very general complex projective hypersurface of dimension $$N$$ and degree at least $${\rm log}_2 (N+2)$$ is not stably rational. The same statement holds over any uncountable field of characteristic $$p \gg N$$.

#### May 16, 2018.

Junliang Shen: Special subvarieties in holomorphic symplectic varieties

Abstract: By the result of Bogomolov and Mumford, every projective K3 surface contains a rational curve. Holomorphic symplectic varieties are higher dimensional analogs of K3 surfaces. We will discuss a conjecture of Voisin about algebraically coisotropic subvarieties of holomorphic symplectic varieties, which can be viewed as a higher dimensional generalization of the Bogomolov-Mumford theorem. We show that Voisin's conjecture holds when the holomorphic symplectic variety arises as a moduli space of sheaves on a K3 surface. Finally, we discuss the structure of rational curves in holomorphic symplectic varieties. Based on joint work with Georg Oberdieck, Qizheng Yin, and Xiaolei Zhao.

#### May 9, 2018.
Gregorio Baldi: An open image theorem for points on Shimura varieties defined over function fields

Abstract: We will recall how to attach $$\ell$$-adic Galois representations to points of arbitrary Shimura varieties $${\rm Sh}_K(G,X)$$ defined over finitely generated fields of characteristic zero and why one inclusion of the Mumford-Tate conjecture holds in this setting. Tools from the theory of Shimura varieties can be used to study the image of such representations. In the case of function fields, when such points are not contained in a smaller Shimura subvariety and are without isotrivial component, this approach is especially fruitful. We show indeed that, for $$\ell$$ large enough, the image of the $$\ell$$-adic representation contains the $${\mathbb Z}_\ell$$-points coming from the simply connected cover of the derived subgroup of $$G$$.

#### April 25, 2018.

Will Sawin: The cohomology of the moduli spaces of rational curves on hypersurfaces

Abstract: The Grothendieck-Lefschetz fixed point formula relates the cohomology of varieties over finite fields to their number of points. In many cases, we can use our understanding of the cohomology to prove bounds for the number of points. For the moduli spaces of rational curves on low-degree hypersurfaces, we have a good understanding of the point counts from analytic number theory, and we would like to translate that into cohomological information. This is not possible directly because the spaces can fail to be smooth and always fail to be proper. However, in joint work with Tim Browning, we adapted the classical analytic number theory techniques into a geometric argument to compute the high-degree cohomology groups.

#### April 18, 2018.

Chenyang Xu: Volume and stability of singularities

Abstract: One guiding principle for the class of Kawamata log terminal (klt) singularities is that it is the local analogue of Fano varieties. In this talk, I will discuss our work (joint with Chi Li) on establishing an algebraic stability theory, which is the analogue of the K-stability of Fano varieties, for a klt singularity. This is achieved by using Chi Li's definition of normalised volumes on the 'non-archimedean link'. The conjectural picture can be considered as a purely local construction which algebrizes the metric tangent cone in complex geometry. As an application, we solve Donaldson-Sun's conjecture.

#### April 11, 2018.

Jean Fasel: Borel-Moore homology and Chow-Witt groups of singular varieties

Abstract: (joint work with Frédéric Déglise). In this talk, we present MW-motivic cohomology and its associated Borel-Moore homology. We will perform a few computations of these (co-)homology groups, and recover Chow-Witt groups of singular schemes as a special case. This shows that the latter are obtained via an explicit complex, covariantly functorial for proper morphisms (with an explicit push-forward morphism) and covariantly functorial for étale morphisms.

#### March 21, 2018.

Maxim Mornev: Regulator theory for elliptic shtukas.

Abstract: Let $$X$$ be a smooth projective curve over a finite field $$\mathbb{F}_{\!q}$$. To an $$\ell$$-adic sheaf $$\mathcal{F}$$ on $$X$$ one can associate its $$L$$-function $$L(\mathcal{F},T)$$. Kato [1] discovered that the value $$L^*(\mathcal{F},1)$$ has a cohomological interpretation: multiplication by $$L^*(\mathcal{F},1)$$ on the one-dimensional $$\mathbb{Q}_\ell$$-vector space $$\det\nolimits_{\mathbb{Q}_\ell} \mathrm{R}\Gamma(X, \mathcal{F})$$ is a composition of certain natural operations on cohomology.
Roughly speaking, a shtuka on $$X$$ is the correct analog of an $$\ell$$-adic sheaf in the situation when $$\mathbb{Q}_\ell$$ is replaced by a finite extension of $$\mathbb{F}_{\!q}(\!(z)\!)$$. Shtukas also have a cohomology theory and $$L$$-functions, so it is tempting to try Kato's approach for them. This line of research was successfully pursued by V. Lafforgue [2]. His theory is quite involved and differs considerably from that of Kato in the $$\ell$$-adic setting. In this talk I would like to discuss one more such theory. It is somewhat parallel to Lafforgue's but applies to a different class of shtukas, the elliptic shtukas. One can construct a natural map on cohomology of elliptic shtukas called the regulator. I will explain how the regulator can be used to give a simple formula for the values of shtuka-theoretic $$L$$-functions at $$T = 1$$.

[1] K. Kato. Lectures on the approach to Iwasawa theory for Hasse-Weil $$L$$-functions via $$B_{\textrm{dR}}$$. I. Arithmetic algebraic geometry (Trento, 1991), 50--163, Lecture Notes in Math. 1553, Springer, Berlin (1993).

[2] V. Lafforgue. Valeurs speciales des fonctions $$L$$ en caracteristique $$p$$. J. Number Theory 129 (2009), no. 10, 2600--2634.

#### March 14, 2018.

Sofia Tirabassi: Derived categories of canonical covers in positive characteristic.

Abstract: We show that abelian varieties and K3 surfaces arising as canonical covers of bielliptic and Enriques surfaces do not admit any non-trivial Fourier-Mukai partners. This is joint work with K. Honigs and L. Lombardi.

#### March 7, 2018.

Celine Maistret: Parity of ranks of abelian surfaces.

Abstract: Let $$K$$ be a number field and $$A/K$$ an abelian surface. By the Mordell-Weil theorem, the group of $$K$$-rational points on $$A$$ is finitely generated and, as for elliptic curves, its rank is predicted by the Birch and Swinnerton-Dyer conjecture. A basic consequence of this conjecture is the parity conjecture: the sign of the functional equation of the $$L$$-series determines the parity of the rank of $$A/K$$. Under suitable local constraints and finiteness of the Shafarevich-Tate group, we prove the parity conjecture for principally polarized abelian surfaces. We also prove analogous unconditional results for Selmer groups.

#### March 1, 2017.

Xuanyu Pan: Automorphism and Cohomology of Varieties.

Abstract: I will give a survey of my works and joint works on the following fundamental problem: "Does the automorphism group of a variety act faithfully on cohomology?" The answer is positive for cubic fourfolds and their Fano varieties of lines, complete intersections, and cyclic coverings, with a few exceptions. There are some applications to arithmetic and geometry. For instance, Javanpeykar and Loughran relate this question to the Lang-Vojta conjecture and the Shafarevich conjecture, and to the symmetry of cubic fourfolds and their middle Picard numbers. The program also involves a proof of the (crystalline) variational Hodge conjecture for some graph cycles.

#### February 22, 2017.

Wushi Goldring: Geometry engendered by G-Zips: Shimura varieties and beyond.

Abstract: Moonen, Pink, Wedhorn and Ziegler initiated a theory of $$G$$-Zips, which is modelled on the de Rham cohomology of varieties in characteristic $$p>0$$ "with $$G$$-structure", where $$G$$ is a connected reductive $${\mathbb F}_p$$-group.
Building on their work, when $$X$$ is a good reduction special fiber of a Hodge-type Shimura variety, it has been shown that there exists a smooth, surjective morphism $$\zeta$$ from $$X$$ to a quotient stack $$G{\mathrm{-Zip}}^{\mu}$$. When $$X$$ is of PEL type, the fibers of this morphism recover the Ekedahl-Oort stratification defined earlier in terms of flags by Moonen. It is commonly believed that much of the geometry of $$X$$ lies beyond the structure of $$\zeta$$. I will report on a project, initiated jointly with J.-S. Koskivirta and developed further in joint work with Koskivirta, B. Stroh and Y. Brunebarbe, which contests this common view in two stages: The first consists in showing that fundamental geometric properties of $$X$$ are explained purely by means of $$\zeta$$ (and its generalizations). The second is that, while these geometric properties may appear to be special to Shimura varieties, the $$G$$-Zip viewpoint shows that they hold much more generally, for geometry engendered by $$G$$-Zips, i.e. any scheme $$Z$$ equipped with a morphism to $${\mathrm {GZip}}^{\mu}$$ satisfying some general scheme-theoretic properties. To illustrate our program concretely, I will describe results and conjectures regarding two basic geometric questions about $$X, Z$$: (i) Which automorphic vector bundles on $$X, Z$$ admit global sections? (ii) Which of these bundles are ample?

#### February 8, 2017.

Peter Bruin: On elliptic curves with prescribed torsion over number fields.

Abstract: This talk is about Mordell--Weil groups, and in particular torsion subgroups, of elliptic curves over number fields of given degree. I will explain some results showing that prescribing a suitable torsion structure on an elliptic curve over a number field can force the curve to have unexpected properties. For example, if $$E$$ is an elliptic curve over a cubic number field $$K$$ such that $$E(K)$$ has full 2-torsion and a point of order 7, then $$E$$ arises by base change from an elliptic curve over $$\mathbf{Q}$$.

#### January 31, 2017.

Mini-workshop Arithmetic and Moduli of K3 surfaces. Lectures by Martin Orr (Imperial College, London), Tony Várilly-Alvarado (Rice U, Houston), Alexei Skorobogatov (Imperial College, London) and Bianca Viray (U of Washington, Seattle). See https://staff.fnwi.uva.nl/l.d.j.taelman/k3day.html

#### December 7, 2016.

Bhargav Bhatt: Comparing crystalline and etale cohomology.

Abstract: For smooth proper varieties over $$p$$-adic fields with good reduction, Grothendieck's ideas on the mysterious functor predicted a comparison isomorphism relating the rational crystalline cohomology of the special fiber with the rational $$p$$-adic etale cohomology of the generic fibre. A precise version of this conjecture was formulated by Fontaine in the early days of $$p$$-adic Hodge theory, and has since been established by various mathematicians. In my talk, I will review this story, and then explain our current understanding of the torsion aspects of this comparison: the torsion on the crystalline side provides an upper bound for the torsion on the etale side, and the inequality can be strict. This talk is based on recent joint work with Matthew Morrow and Peter Scholze.

#### November 16, 2016.

Ekaterina Amerik: Characteristic foliation on a smooth divisor in a holomorphic symplectic manifold.

Abstract: Let $$D$$ be a smooth hypersurface in a holomorphic symplectic manifold. The kernel of the restriction of the symplectic form to $$D$$ defines a foliation in curves called the characteristic foliation.
Hwang and Viehweg proved in 2008 that if $$D$$ is of general type this foliation cannot be algebraic, except in the trivial case when $$X$$ is a surface and $$D$$ is a curve. I shall explain a refinement of this result, joint with F. Campana: the characteristic foliation is algebraic if and only if $$D$$ is uniruled or a finite covering of $$X$$ is a product with a symplectic surface and $$D$$ comes from a curve on that surface. I shall also explain a recent joint work with L. Guseva, concerning the particular case of an irreducible holomorphic symplectic fourfold: we show that if the Zariski closure of a general leaf is a surface, then $$X$$ is a Lagrangian fibration and $$D$$ is the inverse image of a curve on its base.

#### October 26, 2016.

Jean-Stefan Koskivirta: Generalized Hasse invariants and some applications.

Abstract: This talk is a report on a paper with Wushi Goldring. If $$A$$ is an abelian variety over a scheme $$S$$ of characteristic $$p$$, the isomorphism class of the $$p$$-torsion gives rise to a stratification on $$S$$. When it is nonempty, the ordinary stratum is open and the classical Hasse invariant is a section of the $$(p-1)$$-th power of the Hodge bundle which vanishes exactly on its complement. In this talk, we will explain a group-theoretical construction of generalized Hasse invariants based on the stack of $$G$$-zips introduced by Pink, Wedhorn, Ziegler and Moonen. When $$S$$ is the good reduction special fiber of a Shimura variety of Hodge-type, we show that the Ekedahl-Oort stratification is principally pure. We apply Hasse invariants to attach Galois representations to certain automorphic representations whose archimedean part is a limit of discrete series, and to study systems of Hecke eigenvalues that appear in coherent cohomology.

#### October 19, 2016.

Javier Fresan: Exponential motives.

Abstract: Exponential periods form a class of complex numbers containing the special values of the gamma and the Bessel functions, the Euler constant and other interesting numbers which are not expected to be periods in the usual sense of algebraic geometry. However, one can interpret them as coefficients of the comparison isomorphism between two cohomology theories associated to pairs consisting of an algebraic variety and a regular function: the de Rham cohomology of a connection with irregular singularities and the so-called “rapid decay” cohomology. Following ideas of Kontsevich and Nori, I will explain how this point of view allows one to construct a Tannakian category of exponential motives and to produce Galois groups which conjecturally govern all algebraic relations among these numbers. I will give many examples and show how some classical results from transcendence theory can be reinterpreted in this way. This is a joint work with Peter Jossen.
2018-06-19 00:31:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7863317728042603, "perplexity": 468.66376860691275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861641.66/warc/CC-MAIN-20180619002120-20180619022120-00323.warc.gz"}
https://www.splashlearn.com/math-vocabulary/geometry/area
# Area in Math – Definition with Examples

## Definition

Area is defined as the total space taken up by a flat (2-D) surface or shape of an object.

Take a pencil and draw a square on a piece of paper. It is a 2-D figure. The space the shape takes up on the paper is called its area.

Now, imagine your square is made up of smaller unit squares. The area of a figure is counted as the number of unit squares required to cover the overall surface of that particular 2-D shape. Square cms, square feet, square inches, square meters, etc., are some of the common units of area measurement. To find out the area of the square figures drawn below, draw unit squares of 1-centimeter sides. Thus, the shape will be measured in cm², also known as square centimeters.

Here, the area of the shapes below will be measured in square meters (m²) and square inches (in²).

How to calculate the area if there are also half unit squares in the grid? To understand that, let us take one more example:

Step 1: Count the full squares. There are 18 full squares.

Step 2: Count the half squares. On counting, we see that there are 6 half squares.

Step 3: 1 full square $= 1$ square unit, so 18 full squares $= 18$ square units. 1 half square $= \frac{1}{2}$ square unit, so 6 half squares $= 3$ square units.

Total area $= 18 + 3 = 21$ square units.

Origin of the Term: Area

The term ‘area’ originated from Latin, meaning ‘a plain piece of empty land’. It also means ‘a particular amount of space contained within a set of boundaries’.

Look at the carpet in your home. To buy a carpet that fits the floor, we need to know its area. Or the carpet will be bigger or smaller than the space! Some other instances when we need to know the area are while fitting tiles on the floor, painting the wall or sticking wallpaper to it, or finding out the total number of tiles needed to build a swimming pool.

## Formulas for Calculating Area

We are surrounded by so many 2-D shapes: circle, triangle, square, rectangle, parallelogram, and trapezium. You can draw all of these shapes on your paper. Every shape is different and unique, so its area is also calculated differently. To find the area, first, identify the shape. Then, use the appropriate formula from the list given below to find its area.

## Areas of Composite Figures

Not every plane figure can be classified as a simple rectangle, square, triangle, or other typical shape in real life. Some figures are made up of more than one simple 2-D shape. Let us join a rectangle and a semicircle.

These shapes formed by the combination of two or more simple shapes are called “composite figures” or “composite shapes”.

For finding the area of a composite figure, we must find the sum of the areas of all the shapes in it. So, the area of the shape we just drew will be the area of the rectangle, $l \times b$, plus half the area of the circle, $\frac{1}{2}\pi r^{2}$, where l and b are the length and breadth of the rectangle and r is the radius of the semicircle.

If we draw a semicircle below a triangle, we get the composite shape:

The area of such a composite figure will be calculated by adding the area of the triangle and the area of the semicircle.

Area of the composite figure $= \frac{1}{2}\times b\times h + \frac{1}{2}\pi r^{2}$, where r is the radius of the semicircle and b and h are the base and height of the triangle respectively.

Real-life Applications

Here are a few ways in which you can apply the knowledge of the area of figures in your daily life.
• We can find the area of a gifting paper to check whether it will be able to cover a box or not.
• We can find the area of a square or circle to find the area of the signal board.

## Solved Examples

1. A circle has a diameter of 20 cm. Find out the area of this circle.

Ans: For the circle, d = 20 cm, so the radius r = $\frac{d}{2}$ = 10 cm. Therefore, A = πr² = 3.14 $\times 10\times 10$ = 314 cm$^{2}$. The area of the given circle is 314 cm$^{2}$.

2. The height of a triangle is 10 cm and the base is 20 cm. What is the area of this triangle?

Ans: Area of the triangle = $\frac{1}{2}\times b\times h$ = $\frac{1}{2}\times 20\times 10 = 100$ cm$^{2}$. Therefore, the area of the given triangle is 100 cm$^{2}$.

3. The width of a rectangle is half of its length. The width is measured to be 10 cm. What is the area of the rectangle?

Ans: For the rectangle, w = 10 cm and l = (10 $\times$ 2) = 20 cm. Area of the rectangle, A = l $\times$ w = 20 $\times$ 10 = 200 cm$^{2}$. Therefore, the area of the given rectangle is 200 cm$^{2}$.

4. What is the area of the following figure?

Solution: 1 full square $= 1$ square unit, so 14 full squares $= 14$ square units. 1 half square $= \frac{1}{2}$ square unit, so 5 half squares $= 2.5$ square units. Total area $= 14 + 2.5 = 16.5$ square units.

## Practice Problems

### 1. The side of a square is 7 cm. What is the area of this square?

20 cm$^{2}$ / 30 cm$^{2}$ / 40 cm$^{2}$ / 49 cm$^{2}$

Correct answer is: 49 cm$^{2}$. Area of the square = side $\times$ side = 7 $\times$ 7 = 49 cm$^{2}$.

### 2. The height of a triangle is 25 cm and the base is 50 cm. What is the area of the triangle?

600 cm$^{2}$ / 625 cm$^{2}$ / 650 cm$^{2}$ / 700 cm$^{2}$

Correct answer is: 625 cm$^{2}$. Area of the triangle = $\frac{1}{2}\times b\times h = \frac{1}{2}\times 50\times 25$ = 625 cm$^{2}$.

### 3. What is the area of a circle that has a radius of 4 cm?

40 cm$^{2}$ / 16 cm$^{2}$ / 60.27 cm$^{2}$ / 50.27 cm$^{2}$

Correct answer is: 50.27 cm$^{2}$. Area of the circle = 𝜋r$^{2}$ ≈ 3.1416 $\times 4\times 4$ ≈ 50.27 cm$^{2}$.

### 4. The base of a parallelogram is 50 cm and the perpendicular height is 20 cm. What is the area of this parallelogram?

400 cm$^{2}$ / 500 cm$^{2}$ / 750 cm$^{2}$ / 1000 cm$^{2}$

Correct answer is: 1000 cm$^{2}$. Area of the parallelogram = b $\times$ h = 50 $\times$ 20 = 1000 cm$^{2}$.

Conclusion

Through SplashLearn, learning becomes a part of children’s lives, and they get encouraged to study every day. The platform does this through interactive games and fun worksheets. For more information about exciting math-based games, log on to www.splashlearn.com.

Related math vocabulary: Learn more about concepts like Perimeter, Polygon, Square Unit, Unit Square and more such exciting mathematical topics at www.SplashLearn.com.
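The composite-figure rule (add up the areas of the simple shapes) is easy to turn into a few lines of code. The short sketch below is my own illustration, not part of the article, and the measurements in it are made up.

import math

def triangle_area(base, height):
    return 0.5 * base * height

def semicircle_area(radius):
    return 0.5 * math.pi * radius ** 2

# Composite figure from the article: a triangle sitting on a semicircle.
# Example measurements (assumed): base 20 cm, height 10 cm, radius 10 cm.
base, height, radius = 20.0, 10.0, 10.0
area = triangle_area(base, height) + semicircle_area(radius)
print(f"Composite area = {area:.2f} cm^2")   # 100 + 157.08 = 257.08 cm^2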
2022-12-04 09:10:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6534521579742432, "perplexity": 750.9035098930265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710968.29/warc/CC-MAIN-20221204072040-20221204102040-00128.warc.gz"}
https://pypi.org/project/pdfrw/
## 1 Introduction

pdfrw is a Python library and utility that reads and writes PDF files:

• Version 0.4 is tested and works on Python 2.6, 2.7, 3.3, 3.4, 3.5, and 3.6
• Operations include subsetting, merging, rotating, modifying metadata, etc.
• The fastest pure Python PDF parser available
• Has been used for years by a printer in pre-press production
• Can be used with rst2pdf to faithfully reproduce vector images
• Can be used either standalone, or in conjunction with reportlab to reuse existing PDFs in new ones

pdfrw will faithfully reproduce vector formats without rasterization, so the rst2pdf package has used pdfrw for PDF and SVG images by default since March 2010. pdfrw can also be used in conjunction with reportlab, in order to re-use portions of existing PDFs in new PDFs created with reportlab.

## 2 Examples

The library comes with several examples that show operation both with and without reportlab.

### 2.1 All examples

The examples directory has a few scripts which use the library. Note that if these examples do not work with your PDF, you should try to use pdftk to uncompress and/or unencrypt them first.

• 4up.py will shrink pages down and place 4 of them on each output page.
• alter.py shows an example of modifying metadata, without altering the structure of the PDF.
• booklet.py shows an example of creating a 2-up output suitable for printing and folding (e.g. on tabloid size paper).
• cat.py shows an example of concatenating multiple PDFs together.
• extract.py will extract images and Form XObjects (embedded pages) from existing PDFs to make them easier to use and refer to from new PDFs (e.g. with reportlab or rst2pdf).
• poster.py increases the size of a PDF so it can be printed as a poster.
• print_two.py Allows creation of 8.5 X 5.5” booklets by slicing 8.5 X 11” paper apart after printing.
• rotate.py Rotates all or selected pages in a PDF.
• subset.py Creates a new PDF with only a subset of pages from the original.
• unspread.py Takes a 2-up PDF, and splits out pages.
• watermark.py Adds a watermark PDF image over or under all the pages of a PDF.
• rl1/4up.py Another 4up example, using reportlab canvas for output.
• rl1/booklet.py Another booklet example, using reportlab canvas for output.
• rl1/subset.py Another subsetting example, using reportlab canvas for output.
• rl1/platypus_pdf_template.py Another watermarking example, using reportlab canvas and generated output for the document. Contributed by user asannes.
• rl2 Experimental code for parsing graphics. Needs work.
• subset_booklets.py shows an example of creating a full printable pdf version in a more professional and practical way (take a look at http://www.wikihow.com/Bind-a-Book).

### 2.2 Notes on selected examples

#### 2.2.1 Reorganizing pages and placing them two-up

A printer with a fancy printer and/or a full-up copy of Acrobat can easily turn your small PDF into a little booklet (for example, print 4 letter-sized pages on a single 11” x 17”). But that assumes several things, including that the personnel know how to operate the hardware and software. booklet.py lets you turn your PDF into a preformatted booklet, to give them fewer chances to mess it up.

#### 2.2.2 Adding or modifying metadata

The cat.py example will accept multiple input files on the command line, concatenate them and output them to output.pdf, after adding some nonsensical metadata to the output PDF file. The alter.py example alters a single metadata item in a PDF, and writes the result to a new PDF.
One difference is that, since cat is creating a new PDF structure, and alter is attempting to modify an existing PDF structure, the PDF produced by alter (and also by watermark.py) should be more faithful to the original (except for the desired changes). For example, the alter.py navigation should be left intact, whereas with cat.py it will be stripped.

#### 2.2.3 Rotating and doubling

If you ever want to print something that is like a small booklet, but needs to be spiral bound, you either have to do some fancy rearranging, or just waste half your paper. The print_two.py example program will, for example, make two side-by-side copies of each page of your PDF on each output sheet. But every other page is flipped, so that you can print double-sided and the pages will line up properly and be pre-collated.

#### 2.2.4 Graphics stream parsing proof of concept

The copy.py script shows a simple example of reading in a PDF, and using the decodegraphics.py module to try to write the same information out to a new PDF through a reportlab canvas. (If you know about reportlab, you know that if you can faithfully render a PDF to a reportlab canvas, you can do pretty much anything else with that PDF you want.) This kind of low level manipulation should be done only if you really need to. decodegraphics is really more a proof of concept than anything else. For most cases, just use the Form XObject capability, as shown in the examples/rl1/booklet.py demo.

## 3 pdfrw philosophy

### 3.1 Core library

The philosophy of the library portion of pdfrw is to provide intuitive functions to read, manipulate, and write PDF files. There should be minimal leakage between abstraction layers, although getting useful work done makes “pure” functionality separation difficult.

A key concept supported by the library is the use of Form XObjects, which allow easy embedding of pieces of one PDF into another.

Addition of core support to the library is typically done carefully and thoughtfully, so as not to clutter it up with too many special cases. There are a lot of incorrectly formatted PDFs floating around; support for these is added in some cases. The decision is often based on what acroread and okular do with the PDFs; if they can display them properly, then eventually pdfrw should, too, if it is not too difficult or costly.

Contributions are welcome; one user has contributed some decompression filters and the ability to process PDF 1.5 stream objects. Additional functionality that would obviously be useful includes additional decompression filters, the ability to process password-protected PDFs, and the ability to output linearized PDFs.

### 3.2 Examples

The philosophy of the examples is to provide small, easily-understood examples that showcase pdfrw functionality.

## 4 PDF files and Python

### 4.1 Introduction

In general, PDF files conceptually map quite well to Python. The major objects to think about are:

• strings. Most things are strings. These also often decompose naturally into
• lists of tokens.
Tokens can be combined to create higher-level objects like

• arrays and
• dictionaries and
• Contents streams (which can be more streams of tokens)

### 4.2 Difficulties

The apparent primary difficulty in mapping PDF files to Python is the PDF file concept of “indirect objects.” Indirect objects provide the efficiency of allowing a single piece of data to be referred to from more than one containing object, but probably more importantly, indirect objects provide a way to get around the chicken and egg problem of circular object references when mapping arbitrary data structures to files. To flatten out a circular reference, an indirect object is referred to instead of being directly included in another object. PDF files have a global mechanism for locating indirect objects, and they all have two reference numbers (a reference number and a “generation” number, in case you wanted to append to the PDF file rather than just rewriting the whole thing).

pdfrw automatically handles indirect references on reading in a PDF file. When pdfrw encounters an indirect PDF file object, the corresponding Python object it creates will have an ‘indirect’ attribute with a value of True. When writing a PDF file, if you have created arbitrary data, you just need to make sure that circular references are broken up by putting an attribute named ‘indirect’ which evaluates to True on at least one object in every cycle.

Another PDF file concept that doesn’t quite map to regular Python is a “stream”. Streams are dictionaries which each have an associated unformatted data block. pdfrw handles streams by placing a special attribute on a subclassed dictionary.

### 4.3 Usage Model

The usage model for pdfrw treats most objects as strings (it takes their string representation when writing them to a file). The two main exceptions are the PdfArray object and the PdfDict object.

PdfArray is a subclass of list with two special features. First, an ‘indirect’ attribute allows a PdfArray to be written out as an indirect PDF object. Second, pdfrw reads files lazily, so PdfArray knows about, and resolves references to, other indirect objects on an as-needed basis.

PdfDict is a subclass of dict that also has an indirect attribute and lazy reference resolution as well. (And the subclassed IndirectPdfDict has indirect automatically set True.) But PdfDict also has an optional associated stream. The stream object defaults to None, but if you assign a stream to the dict, it will automatically set the PDF /Length attribute for the dictionary.

Finally, since PdfDict instances are indexed by PdfName objects (which always start with a /) and since most (all?) standard Adobe PdfName objects use names formatted like “/CamelCase”, it makes sense to allow access to dictionary elements via object attribute accesses as well as object index accesses. So usage of PdfDict objects is normally via attribute access, although non-standard names (though still with a leading slash) can be accessed via dictionary index lookup.

#### 4.3.1 Reading PDFs

>>> from pdfrw import PdfReader
>>> x = PdfReader('source.pdf')
>>> x.keys()
['/Info', '/Size', '/Root']
>>> x.Info
{'/Producer': '(cairo 1.8.6 (http://cairographics.org))',
 '/Creator': '(cairo 1.8.6 (http://cairographics.org))'}
>>> x.Root.keys()
['/Type', '/Pages']

Info, Size, and Root are retrieved from the trailer of the PDF file.

In addition to the tree structure, pdfrw creates a special attribute named pages, that is a list of all the pages in the document.
pdfrw creates the pages attribute as a simplification for the user, because the PDF format allows arbitrarily complicated nested dictionaries to describe the page order. Each entry in the pages list is the PdfDict object for one of the pages in the file, in order.

>>> len(x.pages)
1
>>> x.pages[0]
{'/Parent': {'/Kids': [{...}], '/Type': '/Pages', '/Count': '1'},
 '/Contents': {'/Length': '11260', '/Filter': None},
 '/Resources': ... (Lots more stuff snipped)
>>> x.pages[0].Contents
{'/Length': '11260', '/Filter': None}
>>> x.pages[0].Contents.stream
'q\n1 1 1 rg /a0 gs\n0 0 0 RG 0.657436 w\n0 J\n0 j\n[] 0.0 d\n4 M q'
... (Lots more stuff snipped)

#### 4.3.2 Writing PDFs

As you can see, it is quite easy to dig down into a PDF document. But what about when it’s time to write it out?

>>> from pdfrw import PdfWriter
>>> y = PdfWriter()
>>> y.addpages(x.pages)
>>> y.write('result.pdf')

That’s all it takes to create a new PDF. You may still need to read the Adobe PDF reference manual to figure out what needs to go into the PDF, but at least you don’t have to sweat actually building it and getting the file offsets right.

#### 4.3.3 Manipulating PDFs in memory

For the most part, pdfrw tries to be agnostic about the contents of PDF files, and support them as containers, but to do useful work, something a little higher-level is required, so pdfrw works to understand a bit about the contents of the containers. For example:

• PDF pages. pdfrw knows enough to find the pages in PDF files you read in, and to write a set of pages back out to a new PDF file.
• Form XObjects. pdfrw can take any page or rectangle on a page, and convert it to a Form XObject, suitable for use inside another PDF file. It knows enough about these to perform scaling, rotation, and positioning.
• reportlab objects. pdfrw can recursively create a set of reportlab objects from its internal object format. This allows, for example, Form XObjects to be used inside reportlab, so that you can reuse content from an existing PDF file when building a new PDF with reportlab.

There are several examples that demonstrate these features in the example code directory.

#### 4.3.4 Missing features

Even as a pure PDF container library, pdfrw comes up a bit short. It does not currently support:

• Most compression/decompression filters
• encryption

pdftk is a wonderful command-line tool that can convert your PDFs to remove encryption and compression. However, in most cases, you can do a lot of useful work with PDFs without actually removing compression, because only certain elements inside PDFs are actually compressed.

## 5 Library internals

### 5.1 Introduction

pdfrw currently consists of 19 modules organized into a main package and one sub-package. The __init__.py module does the usual thing of importing a few major attributes from some of the submodules, and the errors.py module supports logging and exception generation.

### 5.2 PDF object model support

The objects sub-package contains one module for each of the internal representations of the kinds of basic objects that exist in a PDF file, with the objects/__init__.py module in that package simply gathering them up and making them available to the main pdfrw package.

One feature that all the PDF object classes have in common is the inclusion of an ‘indirect’ attribute. If ‘indirect’ exists and evaluates to True, then when the object is written out, it is written out as an indirect object. That is to say, it is addressable in the PDF file, and could be referenced by any number (including zero) of container objects.
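To make the ‘indirect’ flag concrete, here is a small sketch of breaking a circular reference by hand before writing a file. This is my own illustration rather than an excerpt from the pdfrw documentation; the file names, the /Example type and the ExampleData key are all made up for the sketch.

from pdfrw import PdfReader, PdfWriter, PdfDict, PdfName

# File names below are placeholders.
reader = PdfReader('source.pdf')

# Two dictionaries that refer to each other; marking one of them indirect
# breaks the circular reference when the file is serialized.
parent = PdfDict()
parent.Type = PdfName.Example        # '/Example' is an arbitrary, made-up name
child = PdfDict()
child.Parent = parent
parent.Kids = [child]
parent.indirect = True

# Hang the extra dictionary off the first page (made-up key name) so it is
# reachable from the document tree and gets written out.
reader.pages[0].ExampleData = parent

writer = PdfWriter()
writer.addpages(reader.pages)
writer.write('result.pdf')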
This indirect object capability saves space in PDF files by allowing objects such as fonts to be referenced from multiple pages, and also allows PDF files to contain internal circular references. This latter capability is used, for example, when each page object has a “parent” object in its dictionary.

#### 5.2.1 Ordinary objects

The objects/pdfobject.py module contains the PdfObject class, which is a subclass of str, and is the catch-all object for any PDF file elements that are not explicitly represented by other objects, as described below.

#### 5.2.2 Name objects

The objects/pdfname.py module contains the PdfName singleton object, which will convert a string into a PDF name by prepending a slash. It can be used either by calling it or getting an attribute, e.g.:

PdfName.Rotate == PdfName('Rotate') == PdfObject('/Rotate')

In the example above, there is a slight difference between the objects returned from PdfName, and the object returned from PdfObject. The PdfName objects are actually objects of class “BasePdfName”. This is important, because only these may be used as keys in PdfDict objects.

#### 5.2.3 String objects

The objects/pdfstring.py module contains the PdfString class, which is a subclass of str that is used to represent encoded strings in a PDF file. The class has encode and decode methods for the strings.

#### 5.2.4 Array objects

The objects/pdfarray.py module contains the PdfArray class, which is a subclass of list that is used to represent arrays in a PDF file. A regular list could be used instead, but use of the PdfArray class allows for an indirect attribute to be set, and also allows for proxying of unresolved indirect objects (that haven’t been read in yet) in a manner that is transparent to pdfrw clients.

#### 5.2.5 Dict objects

The objects/pdfdict.py module contains the PdfDict class, which is a subclass of dict that is used to represent dictionaries in a PDF file. A regular dict could be used instead, but the PdfDict class matches the requirements of PDF files more closely:

• Transparent (from the library client’s viewpoint) proxying of unresolved indirect objects
• Return of None for non-existent keys (like dict.get)
• Mapping of attribute accesses to the dict itself (pdfdict.Foo == pdfdict[NameObject(‘Foo’)])
• Automatic management of following stream and /Length attributes for content dictionaries
• Indirect attribute
• Other attributes may be set for private internal use of the library and/or its clients.
• Support for searching parent dictionaries for PDF “inheritable” attributes.

If a PdfDict has an associated data stream in the PDF file, the stream is accessed via the ‘stream’ (all lower-case) attribute. Setting the stream attribute on the PdfDict will automatically set the /Length attribute as well. If that is not what is desired (for example if the stream is compressed), then _stream (same name with an underscore) may be used to associate the stream with the PdfDict without setting the length.

To set private attributes (that will not be written out to a new PDF file) on a dictionary, use the ‘private’ attribute:

mydict.private.foo = 1

Once the attribute is set, it may be accessed directly as an attribute of the dictionary:

foo = mydict.foo

Some attributes of PDF pages are “inheritable.” That is, they may belong to a parent dictionary (or a parent of a parent dictionary, etc.)
The “inheritable” attribute allows for easy discovery of these: mediabox = mypage.inheritable.MediaBox #### 5.2.6 Proxy objects The objects/pdfindirect.py module contains the PdfIndirect class, which is a non-transparent proxy object for PDF objects that have not yet been read in and resolved from a file. Although these are non-transparent inside the library, client code should never see one of these – they exist inside the PdfArray and PdfDict container types, but are resolved before being returned to a client of those types. ### 5.3 File reading, tokenization and parsing pdfreader.py contains the PdfReader class, which can read a PDF file (or be passed a file object or already read string) and parse it. It uses the PdfTokens class in tokens.py for low-level tokenization. The PdfReader class does not, in general, parse into containers (e.g. inside the content streams). There is a proof of concept for doing that inside the examples/rl2 subdirectory, but that is slow and not well-developed, and not useful for most applications. An instance of the PdfReader class is an instance of a PdfDict – the trailer dictionary of the PDF file, to be exact. It will have a private attribute set on it that is named ‘pages’ that is a list containing all the pages in the file. When instantiating a PdfReader object, there are options available for decompressing all the objects in the file. pdfrw does not currently have very many options for decompression, so this is not all that useful, except in the specific case of compressed object streams. Also, there are no options for decryption yet. If you have PDF files that are encrypted or heavily compressed, you may find that using another program like pdftk on them can make them readable by pdfrw. In general, the objects are read from the file lazily, but this is not currently true with compressed object streams – all of these are decompressed and read in when the PdfReader is instantiated. ### 5.4 File output pdfwriter.py contains the PdfWriter class, which can create and output a PDF file. There are a few options available when creating and using this class. In the simplest case, an instance of PdfWriter is instantiated, and then pages are added to it from one or more source files (or created programmatically), and then the write method is called to dump the results out to a file. If you have a source PDF and do not want to disturb the structure of it too badly, then you may pass its trailer directly to PdfWriter rather than letting PdfWriter construct one for you. There is an example of this (alter.py) in the examples directory. buildxobj.py contains functions to build Form XObjects out of pages or rectangles on pages. These may be reused in new PDFs essentially as if they were images. buildxobj is careful to cache any page used so that it only appears in the output once. toreportlab.py provides the makerl function, which will translate pdfrw objects into a format which can be used with reportlab. It is normally used in conjunction with buildxobj, to be able to reuse parts of existing PDFs when using reportlab. pagemerge.py builds on the foundation laid by buildxobj. It contains classes to create a new page (or overlay an existing page) using one or more rectangles from other pages. There are examples showing its use for watermarking, scaling, 4-up output, splitting each page in 2, etc. findobjs.py contains code that can find specific kinds of objects inside a PDF file. 
The extract.py example uses this module to create a new PDF that places each image and Form XObject from a source PDF onto its own page, e.g. for easy reuse with some of the other examples or with reportlab. ### 5.6 Miscellaneous compress.py and uncompress.py contains compression and decompression functions. Very few filters are currently supported, so an external tool like pdftk might be good if you require the ability to decompress (or, for that matter, decrypt) PDF files. py23_diffs.py contains code to help manage the differences between Python 2 and Python 3. ## 6 Testing The tests associated with pdfrw require a large number of PDFs, which are not distributed with the library. To run the tests: • cd into the tests directory, and then clone the package github.com/pmaupin/static_pdfs into a subdirectory (also named static_pdfs). • Now the tests may be run from that directory using unittest, or py.test, or nose. • travisci is used at github, and runs the tests with py.test ## 7 Other libraries ### 7.1 Pure Python • reportlab reportlab is must-have software if you want to programmatically generate arbitrary PDFs. • pyPdf pyPdf is, in some ways, very full-featured. It can do decompression and decryption and seems to know a lot about items inside at least some kinds of PDF files. In comparison, pdfrw knows less about specific PDF file features (such as metadata), but focuses on trying to have a more Pythonic API for mapping the PDF file container syntax to Python, and (IMO) has a simpler and better PDF file parser. The Form XObject capability of pdfrw means that, in many cases, it does not actually need to decompress objects – they can be left compressed. • pdftools pdftools feels large and I fell asleep trying to figure out how it all fit together, but many others have done useful things with it. • pagecatcher My understanding is that pagecatcher would have done exactly what I wanted when I built pdfrw. But I was on a zero budget, so I’ve never had the pleasure of experiencing pagecatcher. I do, however, use and like reportlab (open source, from the people who make pagecatcher) so I’m sure pagecatcher is great, better documented and much more full-featured than pdfrw. • pdfminer This looks like a useful, actively-developed program. It is quite large, but then, it is trying to actively comprehend a full PDF document. From the website: “PDFMiner is a suite of programs that help extracting and analyzing text data of PDF documents. Unlike other PDF-related tools, it allows to obtain the exact location of texts in a page, as well as other extra information such as font information or ruled lines. It includes a PDF converter that can transform PDF files into other text formats (such as HTML). It has an extensible PDF parser that can be used for other purposes instead of text analysis.” ### 7.2 non-pure-Python libraries • pyPoppler can read PDF files. • pycairo can write PDF files. • PyMuPDF high performance rendering of PDF, (Open)XPS, CBZ and EPUB ### 7.3 Other tools • pdftk is a wonderful command line tool for basic PDF manipulation. It complements pdfrw extremely well, supporting many operations such as decryption and decompression that pdfrw cannot do. • MuPDF is a free top performance PDF, (Open)XPS, CBZ and EPUB rendering library that also comes with some command line tools. One of those, mutool, has big overlaps with pdftk’s - except it is up to 10 times faster. 
## 8 Release information Revisions: 0.4 – Released 18 September, 2017 • Python 3.6 added to test matrix • Proper unicode support for text strings in PDFs added • buildxobj fixes allow better support creating form XObjects out of compressed pages in some cases • Compression fixes for Python 3+ • New subset_booklets.py example • Bug with non-compressed indices into compressed object streams fixed • Bug with distinguishing compressed object stream first objects fixed • Better error reporting added for some invalid PDFs (e.g. when reading past the end of file) • Better scrubbing of old bookmark information when writing PDFs, to remove dangling references • Refactoring of pdfwriter, including updating API, to allow future enhancements for things like incremental writing • Minor tokenizer speedup • Some flate decompressor bugs fixed • Compression and decompression tests added • Tests for new unicode handling added • Initial crypt filter support added 0.3 – Released 19 October, 2016. • Python 3.5 added to test matrix • Better support under Python 3.x for in-memory PDF file-like objects • Some pagemerge and Unicode patches added • Changes to logging allow better coexistence with other packages • Fix for “from pdfrw import *” • New fancy_watermark.py example shows off capabilities of pagemerge.py • metadata.py example renamed to cat.py 0.2 – Released 21 June, 2015. Supports Python 2.6, 2.7, 3.3, and 3.4. • Several bugs have been fixed • New regression test functionally tests core with dozens of PDFs, and also tests examples. • Core has been ported and tested on Python3 by round-tripping several difficult files and observing binary matching results across the different Python versions. • Still only minimal support for compression and no support for encryption or newer PDF features. (pdftk is useful to put PDFs in a form that pdfrw can use.) 0.1 – Released to PyPI in 2012. Supports Python 2.5 - 2.7 ## Project details Uploaded source Uploaded 3 5
2022-11-28 05:22:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31031695008277893, "perplexity": 3499.3582274889627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710473.38/warc/CC-MAIN-20221128034307-20221128064307-00453.warc.gz"}
https://zbmath.org/?q=cc:*68T20
# zbMATH — the first resource for mathematics ## Found 5284 documents (Results 1–100) Hatami, Hamed (ed.) et al., Proceedings of the 49th annual ACM SIGACT symposium on theory of computing, STOC ’17, Montreal, QC, Canada, June 19--23, 2017. New York, NY: Association for Computing Machinery (ACM) (ISBN 978-1-4503-4528-6). 1195-1199 (2017). Full Text: Salvagnin, Domenico (ed.) et al., Integration of AI and OR techniques in constraint programming. 14th international conference, CPAIOR 2017, Padua, Italy, June 5--8, 2017. Proceedings. Cham: Springer (ISBN 978-3-319-59775-1/pbk; 978-3-319-59776-8/ebook). Lecture Notes in Computer Science 10335, 403-418 (2017). MSC:  68T20 90C27 Full Text: Salvagnin, Domenico (ed.) et al., Integration of AI and OR techniques in constraint programming. 14th international conference, CPAIOR 2017, Padua, Italy, June 5--8, 2017. Proceedings. Cham: Springer (ISBN 978-3-319-59775-1/pbk; 978-3-319-59776-8/ebook). Lecture Notes in Computer Science 10335, 387-402 (2017). MSC:  68T20 90C27 Full Text:
2017-10-22 19:11:46
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8354111909866333, "perplexity": 3974.5009228839244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825436.78/warc/CC-MAIN-20171022184824-20171022204824-00734.warc.gz"}
https://jira.lsstcorp.org/browse/DM-21048
# butler.get('raw',...) raises a lsst::pex::exceptions::NotFoundError for BOT data at lsst-dev

#### Details

• Type: Bug
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s: None
• Labels: None
• Story Points: 6
• Team: Data Release Production

#### Description

For the BOT data at lsst-dev, the following code raises a NotFoundError:

    from lsst.daf.persistence import Butler

    BUTLER_BOT_REPO = '/lsstdata/offline/teststand/BOT/gen2repo'
    butler = Butler(BUTLER_BOT_REPO)

    datarefs = butler.subset('raw', run='6545D', raftName='R10', imagetype='BIAS', testtype='DARK')

    for dataref in datarefs:
        print(dataref.dataId)
        exp = butler.get('raw', dataref.dataId)

Here is the tail-end of the error output:

    File "/software/lsstsw/stack_20190330/stack/miniconda3-4.5.12-1172c30/Linux64/obs_base/18.1.0-11-g311e899+3/python/lsst/obs/base/cameraMapper.py", line 1076, in _standardizeExposure
        self._setFilter(mapping, item, dataId)
    File "/software/lsstsw/stack_20190330/stack/miniconda3-4.5.12-1172c30/Linux64/obs_base/18.1.0-11-g311e899+3/python/lsst/obs/base/cameraMapper.py", line 1030, in _setFilter
        item.setFilter(afwImage.Filter(filterName))
    lsst.pex.exceptions.wrappers.NotFoundError:
      File "src/image/Filter.cc", line 335, in static int lsst::afw::image::Filter::_lookup(const string&, bool)
        Unable to find filter SDSSu+ND_OD4.0 {0}
    lsst::pex::exceptions::NotFoundError: 'Unable to find filter SDSSu+ND_OD4.0'

This is using lsst_distrib w_2019_33. Camera I&T will start producing lots of BOT data with 9 science rafts and 4 corner rafts starting the second week of September 2019, so it would be good to have a fix for this before then.

#### Activity

Eli Rykoff added a comment - After James Chiang told me about this, I took a quick look, and it seems that we should be able to leave the filter as UNKNOWN and log a warning and continue with reading the raw without crashing. I tried this on a user branch on obs_base and it seems to work. Is there any reason I can't turn this into a ticket branch to fix this? It would certainly be useful for all sorts of testing situations. https://github.com/lsst/obs_base/commit/8f9f4da566e951fa8daf63b21e42e8e0e935ac65

Tim Jenness added a comment - That might mess up gen 3 butler ingest but I'll let Jim Bosch comment on that one.

Jim Bosch added a comment - Probably prudent to come back to this after DM-20763 lands (I think that should happen sometime today). That changes the way filters are defined in obs packages quite a bit (in order to make it so Gen2 and Gen3 can use the same declarations).

Eli Rykoff added a comment - I will watch for that. But scanning the PR right now, I don't think it'll change that particular aspect of the reading of raws (but might change what tests are done).

Eli Rykoff added a comment - I've rebased against DM-20763 and things work just the same. Anybody want to review https://github.com/lsst/obs_base/pull/168 ?

#### People

Assignee: Eli Rykoff
Reporter: James Chiang
Reviewers: Jim Bosch
Watchers: Eli Rykoff, Eric Charles, James Chiang, Jim Bosch, Kian-Tat Lim, Robert Lupton, Tim Jenness
2022-09-25 11:52:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5147225856781006, "perplexity": 5099.366433172923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00031.warc.gz"}
https://indico.cern.ch/event/322559/contributions/748511/
22-27 March 2015
Hotel do Bosque
Brazil/East timezone

## Effects of time evolution and fluctuating initial conditions on heavy flavor electron $R_{AA}$ in event-by-event relativistic hydrodynamics

Not scheduled
3h 30m
Hotel do Bosque

#### Hotel do Bosque

Rodovia Mário Covas (Rio-Santos) BR - 101 Sul, Km 533, Angra dos Reis, RJ, Brazil

Poster Relativistic heavy-ion reactions - new data, analyses and models

### Speaker

Mr Caio Alves Garcia Prado (USP)

### Description

Central Au+Au collisions at RHIC exhibit a strong particle suppression when compared to p+p collisions. Anisotropic flow is also observed in experiments. The particle suppression is usually associated with jet quenching or energy loss of partons inside the quark-gluon-plasma (QGP) where the anisotropic flow might be due to lumps of high-density inside the medium. These fluctuations in initial conditions can cause considerable quark suppression at early stages of the collision evolution, furthermore the QGP dynamics might cause these high-density spots to expand differently from the rest of the plasma which can affect higher harmonic orders of anisotropic flow such as $v_3$. In this work we aim to investigate the effects caused by the medium formed in heavy ion collisions on the heavy quark dynamics using a 2D+1 Lagrangian ideal hydrodynamic code which is based on the Smoothed Particle Hydrodynamics (SPH) algorithm, following an event-by-event paradigm. We use an energy loss parametrization on top of the evolving space-time energy density distributions to propagate the quark inside the medium until it reaches the freeze-out temperature where they fragment and hadronize. The resulting mesons are forced to decay giving us the final electron $p_\text{T}$ distribution that can be compared with experimental data of electron spectrum $R_\text{AA}$ and $v_2$. The simulations are run for different centrality classes for both RHIC and LHC collision energies.

### Presentation Materials

There are no materials yet.
2020-08-11 08:37:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31948068737983704, "perplexity": 2709.5810998816146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738735.44/warc/CC-MAIN-20200811055449-20200811085449-00583.warc.gz"}
http://math.stackexchange.com/questions/236495/the-mid-point-rule-as-a-function-in-matlab?answertab=oldest
# The mid-point rule as a function in matlab How would I go about creating a function in matlab which would calculate the following $$M_c(f)=h \sum_{i=1}^N f(c_i)$$ where $h=\frac{b-a}{N}, \\c_i=a+0.5(2i-1)h,\\ i=1,\ldots,N$ What I have tried so far is function(M(f))= composite_midpoint(f) h=(b-a)/N for i=1:1:N c_i=a+0.5*(2i-1)*h M(f) = h*(sum + f) end Sorry about not inserting the matlab code directly, I'm not sure how to do it. - Updated the formatting, you can use four spaces in front of each line to show code. –  Dennis Jaheruddin Nov 13 '12 at 16:41 What types of 'functions' are you expecting to be able to input for f? Symbolic functions, or matlab functions? –  icurays1 Nov 13 '12 at 16:42 It would be a function which is smooth enough for Taylor's theorem and one which it's integral can be calculated exactly. –  Nicky Nov 13 '12 at 16:49 Yes, but with matlab functions are either .m files or "symbolic" functions. Its not going to work the way you have it written if you call it like "composite_midpoint(x^2)". It doesn't know what x^2 means. –  icurays1 Nov 13 '12 at 16:54 If I were to want the function to be x^2 how would I go about altering my code so that it worked for the midpoint rule? –  Nicky Nov 13 '12 at 16:59 First run this outside the function: a = 6; b = 4.234; N = 10; Then save this function to a file called compositemidpoint.m (in your current directory) function M = compositemidpoint(a,b,N) h = (b-a)/N i = 1:N c_i = a+0.5*(2*i-1)*h f = log(c_i) + c_i.^2 % A sample function M = h*sum(f); Then call it by typing: compositemidpoint(a,b,N) - It does not really make sens yet, but this is how i debugged your attempted code to not-crash. –  Dennis Jaheruddin Nov 13 '12 at 16:52 Your code only sums the vector $f$ and multiplies it by $h$. There is no midpoint rule occurring here - you have to create the vector $f$ by evaluating some function at the grid points $c_i$... –  icurays1 Nov 13 '12 at 16:56 I have updated the answer to include an example function. –  Dennis Jaheruddin Nov 13 '12 at 17:01 I tried running this but it wouldn't work. I've put in function out= compositemidpoint at the very begining of the function however it is still coming up with errors. –  Nicky Nov 13 '12 at 17:14 I have included some instructions on how to call it. –  Dennis Jaheruddin Nov 13 '12 at 17:22 Here's my solution, which is vectorized (for loops are bad in matlab). function Mf=midpoint_rule(a,b,N,f) h=(b-a)/N; ci=linspace(a+h/2,b-h/2,N-1); %This evaluates the function f, which is another matlab function y=f(ci); %you can just add up the vector y and multiply by h Mf=h*sum(y); end For example, you can save another .m file Myfunction.m, that might look like: function y=Myfunction(x) %The dot means "pointwise" y=x.^2 end Then, in the main window, you would evaluate the integral by saying "midpoint_rule(1,2,100,@Myfunction)". The "at" symbol tells matlab you'll be using a matlab function called "Myfunction". - If you only need your midpoint rule function to run for a couple test functions, you can also hard-code them in by saying "y=sin(x)" etc instead of "y=f(ci)". But then you would have to change your code every time you have a new function to integrate! –  icurays1 Nov 13 '12 at 17:18
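For readers who want a compact reference implementation outside MATLAB, here is a small Python sketch of the same composite midpoint rule, M_c(f) = h * sum of f(c_i) with h = (b-a)/N and c_i = a + 0.5*(2i-1)*h; the integrand x^2 is just an illustration and not part of the question.

```python
# Composite midpoint rule: M_c(f) = h * sum_{i=1..N} f(c_i),
# with h = (b - a)/N and c_i = a + 0.5*(2*i - 1)*h.
def midpoint_rule(f, a, b, N):
    h = (b - a) / N
    return h * sum(f(a + 0.5 * (2 * i - 1) * h) for i in range(1, N + 1))

# Example: integrate x^2 on [1, 2]; the exact value is 7/3 ≈ 2.3333.
print(midpoint_rule(lambda x: x**2, 1.0, 2.0, 100))
```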
2014-10-25 19:48:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7922336459159851, "perplexity": 1405.7906122084416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119649807.55/warc/CC-MAIN-20141024030049-00097-ip-10-16-133-185.ec2.internal.warc.gz"}
https://homework.zookal.com/questions-and-answers/whats-the-future-value-of-20000-after-17-years-if-401789624
What's the future value of $20,000 after 17 years if the appropriate interest rate is 3.75%, compounded annually? Round your answer to two decimal places. For example, if your answer is $345.667 enter as 345.67 and if your answer is .05718 or 5.718% enter as 5.72 in the answer box provided.
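The underlying formula is FV = PV·(1 + r)^n with PV = 20,000, r = 0.0375, and n = 17, which comes out to roughly $37,396. A one-line check can be scripted as below; the variable names are just for illustration.

```python
# Future value with annual compounding: FV = PV * (1 + r) ** n.
pv, r, n = 20_000, 0.0375, 17   # values taken from the question above
fv = pv * (1 + r) ** n
print(round(fv, 2))             # rounded to two decimals, as requested
```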
2021-03-03 05:49:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5044705271720886, "perplexity": 1995.8428511179889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178365454.63/warc/CC-MAIN-20210303042832-20210303072832-00235.warc.gz"}
https://uebmath.aphtech.org/lesson8.0
Practice Problems -   -  Use 6 Dot Entry   Switch to Nemeth Tutorial # Lesson 8.0: Greek Letters ## Explanation The Greek alphabet is frequently used in mathematics. Greek letters are formed with the dots four six prefix, followed by the letter. The Greek letter modifier applies to the next letter only. If the letter is capitalized, the prefix is preceded by dot six. The grade 1 indicator is not used with Greek letters. The most commonly used Greek letters are listed here. A complete list of upper and lower case Greek letters can be found in Guidelines for Technical Materials (2014) §11.7. ## Lowercase Greek Letters $\text{alpha}\phantom{\rule{.3em}{0ex}}\alpha$ ⠨⠁ $\text{beta}\phantom{\rule{.3em}{0ex}}\beta$ ⠨⠃ $\text{gamma}\phantom{\rule{.3em}{0ex}}\gamma$ ⠨⠛ $\text{delta}\phantom{\rule{.3em}{0ex}}\delta$ ⠨⠙ $\text{epsilon}\phantom{\rule{.3em}{0ex}}\epsilon$ ⠨⠑ $\text{zeta}\phantom{\rule{.3em}{0ex}}\zeta$ ⠨⠵ $\text{eta}\phantom{\rule{.3em}{0ex}}\eta$ ⠨⠱ $\text{theta}\phantom{\rule{.3em}{0ex}}\theta$ ⠨⠹ $\text{iota}\phantom{\rule{.3em}{0ex}}\iota$ ⠨⠊ $\text{kappa}\phantom{\rule{.3em}{0ex}}\kappa$ ⠨⠅ $\text{lambda}\phantom{\rule{.3em}{0ex}}\lambda$ ⠨⠇ $\text{mu}\phantom{\rule{.3em}{0ex}}\mu$ ⠨⠍ $\text{nu}\phantom{\rule{.3em}{0ex}}\nu$ ⠨⠝ $\text{xi}\phantom{\rule{.3em}{0ex}}\xi$ ⠨⠭ $\text{omicron}\phantom{\rule{.3em}{0ex}}ο$ ⠨⠕ $\text{pi}\phantom{\rule{.3em}{0ex}}\pi$ ⠨⠏ $\text{rho}\phantom{\rule{.3em}{0ex}}\rho$ ⠨⠗ $\text{sigma}\phantom{\rule{.3em}{0ex}}\sigma$ ⠨⠎ $\text{tau}\phantom{\rule{.3em}{0ex}}\tau$ ⠨⠞ $\text{upsilon}\phantom{\rule{.3em}{0ex}}\upsilon$ ⠨⠥ $\text{phi}\phantom{\rule{.3em}{0ex}}\phi$ ⠨⠋ $\text{chi}\phantom{\rule{.3em}{0ex}}\chi$ ⠨⠯ $\text{psi}\phantom{\rule{.3em}{0ex}}\psi$ ⠨⠽ $\text{omega}\phantom{\rule{.3em}{0ex}}\omega$ ⠨⠺ ## Uppercase Greek Letters $\text{Gamma}\phantom{\rule{.3em}{0ex}}\Gamma$ ⠠⠨⠛ $\text{Delta}\phantom{\rule{.3em}{0ex}}\Delta$ ⠠⠨⠙ $\text{Theta}\phantom{\rule{.3em}{0ex}}\Theta$ ⠠⠨⠹ $\text{Lambda}\phantom{\rule{.3em}{0ex}}\Lambda$ ⠠⠨⠇ $\text{Xi}\phantom{\rule{.3em}{0ex}}\Xi$ ⠠⠨⠭ $\text{Pi}\phantom{\rule{.3em}{0ex}}\Pi$ ⠠⠨⠏ $\text{Sigma}\phantom{\rule{.3em}{0ex}}\Sigma$ ⠠⠨⠎ $\text{Upsilon}\phantom{\rule{.3em}{0ex}}Υ$ ⠠⠨⠥ $\text{Phi}\phantom{\rule{.3em}{0ex}}\Phi$ ⠠⠨⠋ $\text{Psi}\phantom{\rule{.3em}{0ex}}\Psi$ ⠠⠨⠽ $\text{Omega}\phantom{\rule{.3em}{0ex}}\Omega$ ⠠⠨⠺ ### Example 1 $\alpha +\beta =\pi -\theta$ ⠨⠁⠐⠖⠨⠃⠀⠐⠶⠀⠨⠏⠐⠤⠨⠹ ### Example 2 $\frac{\Delta a}{\Delta b}$ ⠰⠰⠷⠠⠨⠙⠁⠨⠌⠠⠨⠙⠃⠾ ### Example 3 $\Sigma$ ⠠⠨⠎ ### Example 4 $\left(\chi +\upsilon \right)$ ⠐⠣⠨⠯⠐⠖⠨⠥⠐⠜ ### Example 5 ${\mu }^{2}$ ⠨⠍⠰⠔⠼⠃ ### Example 6 $C=2\pi r$ ⠰⠠⠉⠀⠐⠶⠀⠼⠃⠨⠏⠗
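The prefix rule described above (the dots 4-6 cell before the letter, with the dot 6 capital prefix added in front for uppercase) is mechanical enough to script. The sketch below only illustrates that rule, using a handful of letter cells copied from the tables in this lesson; it is not a full UEB transcription tool, and the selection of letters is arbitrary.

```python
# UEB Greek letters: dots-46 prefix (⠨) before the base letter cell,
# with the capital prefix (⠠, dot 6) added in front for uppercase letters.
GREEK_PREFIX = "\u2828"    # ⠨ dots 4-6
CAPITAL_PREFIX = "\u2820"  # ⠠ dot 6

# A few base letter cells taken from the lesson's tables (not exhaustive).
BASE_CELLS = {
    "alpha": "\u2801",  # ⠁ (a)
    "beta":  "\u2803",  # ⠃ (b)
    "gamma": "\u281B",  # ⠛ (g)
    "delta": "\u2819",  # ⠙ (d)
    "pi":    "\u280F",  # ⠏ (p)
    "sigma": "\u280E",  # ⠎ (s)
}

def greek_cell(name, capital=False):
    cell = GREEK_PREFIX + BASE_CELLS[name]
    return CAPITAL_PREFIX + cell if capital else cell

print(greek_cell("pi"))           # ⠨⠏
print(greek_cell("sigma", True))  # ⠠⠨⠎
```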
2021-10-25 00:44:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 41, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6970705986022949, "perplexity": 3817.4814627441024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587608.86/warc/CC-MAIN-20211024235512-20211025025512-00276.warc.gz"}
https://physics.stackexchange.com/questions/159135/components-of-angular-velocity
# Components of angular velocity? Let $$\vec \omega = (\omega_1, \omega_2, \omega_3)$$ be the angular velocity of a rigid body with respect to the body frame, where the body frame is right-handed orthonormal. I have gathered 2 definitions of $$\vec \omega$$ from different sources and I am confused at how they connect to one another. One is that the rigid body rotates with $$\vec \omega$$ through its Center of Mass at rate $$abs(\vec \omega)$$. The other is that each component of $$\vec \omega$$ represents the rate at which the rigid body rotates about that particular basis axis of the body frame. Does this mean we can somehow add the 3 rotations (which are about different axes) and get an equivalent rotation about some other (single) axis? • @MartinCheung I am afraid that none of these answers is correct. I am not sure that the three projections of $\vec \omega$ mean that there is a rotation with $\omega_x$ around the axis $x$, with $\omega_y$ around the axis $y$, and with $\omega_z$ around the axis $z$. – Sofia Jan 13 '15 at 16:06 • @DavidHammen : Is that true that performing a rotation by an angle $\omega_x$ around the axis $x$, then by an angle $\omega_y$ around the axis $y$, and then by an angle $\omega_z$ around the axis $z$, we get in fact a rotation by $|\vec \omega|$ around the axis $\vec \omega$ ? Is that true? With the Euler angles we work differently, and as far as I know, one doesn't get the same result if one performs the rotations in a different order. – Sofia Jan 13 '15 at 18:40
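One way to see the relationship asked about numerically: the components of the instantaneous angular velocity do combine as a vector, even though finite rotations performed one after another depend on their order. For a small time step the composition of the three small axis rotations agrees with a single rotation about the axis of ω, up to terms of second order in the step. A sketch of that check follows (it assumes numpy and scipy are available; the numbers in w are arbitrary).

```python
# Angular velocities add as vectors even though finite rotations do not commute.
# For a small time step dt, composing rotations by w1*dt, w2*dt, w3*dt about the
# three body axes agrees with a single rotation by |w|*dt about the axis w/|w|,
# up to terms of order dt**2.
import numpy as np
from scipy.spatial.transform import Rotation as R

w = np.array([0.3, -1.1, 0.7])   # arbitrary angular velocity components

for dt in (1e-1, 1e-2, 1e-3):
    rx = R.from_rotvec([w[0] * dt, 0.0, 0.0])
    ry = R.from_rotvec([0.0, w[1] * dt, 0.0])
    rz = R.from_rotvec([0.0, 0.0, w[2] * dt])
    composed = rz * ry * rx              # one ordering of the three small rotations
    single = R.from_rotvec(w * dt)       # a single rotation about the axis of w
    mismatch = (composed * single.inv()).magnitude()
    print(dt, mismatch)                  # mismatch shrinks roughly like dt**2
```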
2019-10-15 11:48:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7895249724388123, "perplexity": 106.88877647067027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986658566.9/warc/CC-MAIN-20191015104838-20191015132338-00142.warc.gz"}
https://www.sobyte.net/post/2021-12/recording-rules-on-prometheus/
As the hottest cloud-native monitoring tool, there is no doubt that Prometheus is a great performer. However, as we use Prometheus, the monitoring metrics stored in Prometheus become more and more frequent over time, and the frequency of queries increases, as we add more Dashboards with Grafana, we may slowly experience that Grafana is no longer able to render charts on time, and occasionally timeouts occur, especially when we are aggregating a large amount of metrics over a long period of time. Especially when we are aggregating a large amount of metrics over a long period of time, Prometheus queries may time out even more, which requires a mechanism similar to background batch processing to complete the computation of these complex operations in the background, so that the user only needs to query the results of these operations. Prometheus provides a Recording Rule to support this kind of background computation, which can optimize the performance of PromQL statements for complex queries and improve query efficiency. Problems Let’s say we want to know the actual CPU and memory utilization between Kubernetes nodes, we can query the CPU and memory utilization by using the metrics container_cpu_usage_seconds_total and container_memory_usage_bytes. Because each running container collects these two metrics, it is important to know that for slightly larger online environments where we may have thousands of containers running at the same time, for example, Prometheus will have a harder time querying data quickly when we are now querying data for thousands of containers every 5 minutes for the next week. Let’s say we divide the total number of container_cpu_usage_seconds_total by the total number of kube_node_status_allocatable_cpu_cores to get the CPU utilization. 1 2 sum(rate(container_cpu_usage_seconds_total[5m])) / avg_over_time(sum(kube_node_status_allocatable_cpu_cores)[5m:5m]) Load time: 15723ms Use the scrolling window to calculate memory utilization by dividing the total number of container_memory_usage_bytes by the total number of kube_node_status_allocatable_memory_bytes. 1 2 avg_over_time(sum(container_memory_usage_bytes)[15m:15m]) / avg_over_time(sum(kube_node_status_allocatable_memory_bytes)[5m:5m]) Load time: 18656ms Recording Rules We said that Prometheus provides a way to optimize our query statements called Recording Rule. The basic idea of the Recording Rule is that it allows us to create custom meta-time sequences based on other time series, and if you use the Prometheus Operator you will find a number of such rules already in Prometheus, such as 1 2 3 4 5 6 7 8 9 groups: - name: k8s.rules rules: - expr: | sum(rate(container_cpu_usage_seconds_total{image!="", container!=""}[5m])) by (namespace) record: namespace:container_cpu_usage_seconds_total:sum_rate - expr: | sum(container_memory_usage_bytes{image!="", container!=""}) by (namespace) record: namespace:container_memory_usage_bytes:sum These two rules above would be perfectly fine for executing our query above, they would be executed continuously and store the results in a very small time series. sum(rate(container_cpu_usage_seconds_total{job="kubelet", image!="", container_name!=""}[5m])) by (namespace) will be evaluated at predefined time intervals and stored as a new metric : namespace:container_cpu_usage_seconds_total:sum_rate, the same as the in-memory query. Now, I can change the query to derive the CPU utilization as follows. 
sum(namespace:container_cpu_usage_seconds_total:sum_rate) / avg_over_time(sum(kube_node_status_allocatable_cpu_cores)[5m:5m])

Load time: 1077ms

Now it runs 14 times faster! Memory utilization is calculated in the same way.

sum(namespace:container_memory_usage_bytes:sum) / avg_over_time(sum(kube_node_status_allocatable_memory_bytes)[5m:5m])

Load time: 677ms

Now it runs 27 times faster!

Recording rule usage

In the Prometheus configuration file, we can define the paths to the recording rule files via rule_files, in much the same way as we define alerting rules.

rule_files:
  [ - ... ]

Each rule file is defined by the following format.

groups:
  [ - ]

A simple rules file might look like this.

groups:
  - name: example
    rules:
      - record: job:http_inprogress_requests:sum
        expr: sum(http_inprogress_requests) by (job)

The specific configuration items for a rule_group are shown below.

# The name of the group; it must be unique within a file
name:
# How often the rules in the group are evaluated
[ interval: | default = global.evaluation_interval ]
rules:
  [ - ... ]

Consistent with alerting rules, a group can contain multiple rules.

# The name of the output time series; it must be a valid metric name
record:
# The PromQL expression to evaluate. It is evaluated at the current time on every evaluation cycle, and the result is recorded as a new set of time series whose metric name is set by record
expr:
# Labels to add or overwrite
labels:
  [ : ]

As defined in the rules, Prometheus completes the calculation of the PromQL expressions defined in expr in the background and saves the results in a new time series record, with the possibility to add additional labels to these samples via the labels tag. The frequency at which these rules are evaluated is by default the same as that of the alerting rules, defined by global.evaluation_interval:

global:
  [ evaluation_interval: | default = 1m ]
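Once a recorded series such as namespace:container_cpu_usage_seconds_total:sum_rate exists, any client can read it back through Prometheus's HTTP query API rather than re-running the heavy expression. A minimal sketch follows; it assumes the requests package and a reachable Prometheus server, and the localhost:9090 URL is a placeholder for your own endpoint.

```python
# Query a pre-computed recording-rule series via the Prometheus HTTP API.
import requests

PROM_URL = "http://localhost:9090/api/v1/query"   # placeholder server address
query = (
    "sum(namespace:container_cpu_usage_seconds_total:sum_rate)"
    " / avg_over_time(sum(kube_node_status_allocatable_cpu_cores)[5m:5m])"
)

resp = requests.get(PROM_URL, params={"query": query}, timeout=10)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    print(result["metric"], result["value"])   # label set and [timestamp, value]
```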
2022-07-02 08:32:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33965349197387695, "perplexity": 2001.339671172555}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103989282.58/warc/CC-MAIN-20220702071223-20220702101223-00001.warc.gz"}
https://oeis.org/wiki/Differential_Logic_and_Dynamic_Systems_%E2%80%A2_Part_1
This site is supported by donations to The OEIS Foundation. # Differential Logic and Dynamic Systems • Part 1 Author: Jon Awbrey Stand and unfold yourself. Hamlet: Francsico—1.1.2 This article develops a differential extension of propositional calculus and applies it to a context of problems arising in dynamic systems.  The work pursued here is coordinated with a parallel application that focuses on neural network systems, but the dependencies are arranged to make the present article the main and the more self-contained work, to serve as a conceptual frame and a technical background for the network project. ## Review and Transition This note continues a previous discussion on the problem of dealing with change and diversity in logic-based intelligent systems. It is useful to begin by summarizing essential material from previous reports. Table 1 outlines a notation for propositional calculus based on two types of logical connectives, both of variable ${\displaystyle k}$-ary scope. • A bracketed list of propositional expressions in the form ${\displaystyle {\texttt {(}}e_{1},e_{2},\ldots ,e_{k-1},e_{k}{\texttt {)}}}$ indicates that exactly one of the propositions ${\displaystyle e_{1},e_{2},\ldots ,e_{k-1},e_{k}}$ is false. • A concatenation of propositional expressions in the form ${\displaystyle e_{1}~e_{2}~\ldots ~e_{k-1}~e_{k}}$ indicates that all of the propositions ${\displaystyle e_{1},e_{2},\ldots ,e_{k-1},e_{k}}$ are true, in other words, that their logical conjunction is true. All other propositional connectives can be obtained in a very efficient style of representation through combinations of these two forms. Strictly speaking, the concatenation form is dispensable in light of the bracketed form, but it is convenient to maintain it as an abbreviation of more complicated bracket expressions. This treatment of propositional logic is derived from the work of C.S. Peirce [P1, P2], who gave this approach an extensive development in his graphical systems of predicate, relational, and modal logic [Rob]. More recently, these ideas were revived and supplemented in an alternative interpretation by George Spencer-Brown [SpB]. Both of these authors used other forms of enclosure where I use parentheses, but the structural topologies of expression and the functional varieties of interpretation are fundamentally the same. While working with expressions solely in propositional calculus, it is easiest to use plain parentheses for logical connectives. In contexts where parentheses are needed for other purposes “teletype” parentheses ${\displaystyle {\texttt {(}}\ldots {\texttt {)}}}$ or barred parentheses ${\displaystyle (\!|\ldots |\!)}$ may be used for logical operators. The briefest expression for logical truth is the empty word, usually denoted by ${\displaystyle {}^{\backprime \backprime }{\boldsymbol {\varepsilon }}{}^{\prime \prime }}$ or ${\displaystyle {}^{\backprime \backprime }{\boldsymbol {\lambda }}{}^{\prime \prime }}$ in formal languages, where it forms the identity element for concatenation. To make it visible in this text, it may be denoted by the equivalent expression ${\displaystyle {}^{\backprime \backprime }{\texttt {((}}~{\texttt {))}}{}^{\prime \prime },}$ or, especially if operating in an algebraic context, by a simple ${\displaystyle {}^{\backprime \backprime }1{}^{\prime \prime }.}$ Also when working in an algebraic mode, the plus sign ${\displaystyle {}^{\backprime \backprime }+{}^{\prime \prime }}$ may be used for exclusive disjunction. 
For example, we have the following paraphrases of algebraic expressions by bracket expressions: ${\displaystyle {\begin{matrix}x+y~=~{\texttt {(}}x,y{\texttt {)}}\\[6pt]x+y+z~=~{\texttt {((}}x,y{\texttt {)}},z{\texttt {)}}~=~{\texttt {(}}x,{\texttt {(}}y,z{\texttt {))}}\end{matrix}}}$ It is important to note that the last expressions are not equivalent to the triple bracket ${\displaystyle {\texttt {(}}x,y,z{\texttt {)}}.}$ ${\displaystyle {\text{Expression}}}$ ${\displaystyle {\text{Interpretation}}}$ ${\displaystyle {\text{Other Notations}}}$ ${\displaystyle {\text{True}}}$ ${\displaystyle 1}$ ${\displaystyle {\texttt {(}}~{\texttt {)}}}$ ${\displaystyle {\text{False}}}$ ${\displaystyle 0}$ ${\displaystyle x}$ ${\displaystyle x}$ ${\displaystyle x}$ ${\displaystyle {\texttt {(}}x{\texttt {)}}}$ ${\displaystyle {\text{Not}}~x}$ ${\displaystyle {\begin{matrix}x'\\{\tilde {x}}\\\lnot x\end{matrix}}}$ ${\displaystyle x~y~z}$ ${\displaystyle x~{\text{and}}~y~{\text{and}}~z}$ ${\displaystyle x\land y\land z}$ ${\displaystyle {\texttt {((}}x{\texttt {)(}}y{\texttt {)(}}z{\texttt {))}}}$ ${\displaystyle x~{\text{or}}~y~{\text{or}}~z}$ ${\displaystyle x\lor y\lor z}$ ${\displaystyle {\texttt {(}}x~{\texttt {(}}y{\texttt {))}}}$ ${\displaystyle {\begin{matrix}x~{\text{implies}}~y\\\mathrm {If} ~x~{\text{then}}~y\end{matrix}}}$ ${\displaystyle x\Rightarrow y}$ ${\displaystyle {\texttt {(}}x{\texttt {,}}y{\texttt {)}}}$ ${\displaystyle {\begin{matrix}x~{\text{not equal to}}~y\\x~{\text{exclusive or}}~y\end{matrix}}}$ ${\displaystyle {\begin{matrix}x\neq y\\x+y\end{matrix}}}$ ${\displaystyle {\texttt {((}}x{\texttt {,}}y{\texttt {))}}}$ ${\displaystyle {\begin{matrix}x~{\text{is equal to}}~y\\x~{\text{if and only if}}~y\end{matrix}}}$ ${\displaystyle {\begin{matrix}x=y\\x\Leftrightarrow y\end{matrix}}}$ ${\displaystyle {\texttt {(}}x{\texttt {,}}y{\texttt {,}}z{\texttt {)}}}$ ${\displaystyle {\begin{matrix}{\text{Just one of}}\\x,y,z\\{\text{is false}}.\end{matrix}}}$ ${\displaystyle {\begin{matrix}x'y~z~&\lor \\x~y'z~&\lor \\x~y~z'&\end{matrix}}}$ ${\displaystyle {\texttt {((}}x{\texttt {),(}}y{\texttt {),(}}z{\texttt {))}}}$ ${\displaystyle {\begin{matrix}{\text{Just one of}}\\x,y,z\\{\text{is true}}.\\&\\{\text{Partition all}}\\{\text{into}}~x,y,z.\end{matrix}}}$ ${\displaystyle {\begin{matrix}x~y'z'&\lor \\x'y~z'&\lor \\x'y'z~&\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\texttt {((}}x{\texttt {,}}y{\texttt {),}}z{\texttt {)}}\\&\\{\texttt {(}}x{\texttt {,(}}y{\texttt {,}}z{\texttt {))}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\text{Oddly many of}}\\x,y,z\\{\text{are true}}.\end{matrix}}}$ ${\displaystyle x+y+z}$ ${\displaystyle {\begin{matrix}x~y~z~&\lor \\x~y'z'&\lor \\x'y~z'&\lor \\x'y'z~&\end{matrix}}}$ ${\displaystyle {\texttt {(}}w{\texttt {,(}}x{\texttt {),(}}y{\texttt {),(}}z{\texttt {))}}}$ ${\displaystyle {\begin{matrix}{\text{Partition}}~w\\{\text{into}}~x,y,z.\\&\\{\text{Genus}}~w~{\text{comprises}}\\{\text{species}}~x,y,z.\end{matrix}}}$ ${\displaystyle {\begin{matrix}w'x'y'z'&\lor \\w~x~y'z'&\lor \\w~x'y~z'&\lor \\w~x'y'z~&\end{matrix}}}$ Note. The usage that one often sees, of a plus sign "${\displaystyle +}$" to represent inclusive disjunction, and the reference to this operation as boolean addition, is a misnomer on at least two counts. 
Boole used the plus sign to represent exclusive disjunction (at any rate, an operation of aggregation restricted in its logical interpretation to cases where the represented sets are disjoint (Boole, 32)), as any mathematician with a sensitivity to the ring and field properties of algebra would do: The expression ${\displaystyle x+y}$ seems indeed uninterpretable, unless it be assumed that the things represented by ${\displaystyle x}$ and the things represented by ${\displaystyle y}$ are entirely separate; that they embrace no individuals in common. (Boole, 66). It was only later that Peirce and Jevons treated inclusive disjunction as a fundamental operation, but these authors, with a respect for the algebraic properties that were already associated with the plus sign, used a variety of other symbols for inclusive disjunction (Sty, 177, 189). It seems to have been Schröder who later reassigned the plus sign to inclusive disjunction (Sty, 208). Additional information, discussion, and references can be found in (Boole) and (Sty, 177–263). Aside from these historical points, which never really count against a current practice that has gained a life of its own, this usage does have a further disadvantage of cutting or confounding the lines of communication between algebra and logic. For this reason, it will be avoided here. ## A Functional Conception of Propositional Calculus Out of the dimness opposite equals advance . . . .      Always substance and increase, Always a knit of identity . . . . always distinction . . . .      always a breed of life. — Walt Whitman, Leaves of Grass, [Whi, 28] In the general case, we start with a set of logical features ${\displaystyle \{a_{1},\ldots ,a_{n}\}}$ that represent properties of objects or propositions about the world. In concrete examples the features ${\displaystyle \{a_{i}\}}$ commonly appear as capital letters from an alphabet like ${\displaystyle \{A,B,C,\ldots \}}$ or as meaningful words from a linguistic vocabulary of codes. This language can be drawn from any sources, whether natural, technical, or artificial in character and interpretation. In the application to dynamic systems we tend to use the letters ${\displaystyle \{x_{1},\ldots ,x_{n}\}}$ as our coordinate propositions, and to interpret them as denoting properties of a system's state, that is, as propositions about its location in configuration space. Because I have to consider non-deterministic systems from the outset, I often use the word state in a loose sense, to denote the position or configuration component of a contemplated state vector, whether or not it ever gets a deterministic completion. The set of logical features ${\displaystyle \{a_{1},\ldots ,a_{n}\}}$ provides a basis for generating an ${\displaystyle n}$-dimensional universe of discourse that I denote as ${\displaystyle [a_{1},\ldots ,a_{n}].}$ It is useful to consider each universe of discourse as a unified categorical object that incorporates both the set of points ${\displaystyle \langle a_{1},\ldots ,a_{n}\rangle }$ and the set of propositions ${\displaystyle f:\langle a_{1},\ldots ,a_{n}\rangle \to \mathbb {B} }$ that are implicit with the ordinary picture of a venn diagram on ${\displaystyle n}$ features. 
Thus, we may regard the universe of discourse ${\displaystyle [a_{1},\ldots ,a_{n}]}$ as an ordered pair having the type ${\displaystyle (\mathbb {B} ^{n},(\mathbb {B} ^{n}\to \mathbb {B} ),}$ and we may abbreviate this last type designation as ${\displaystyle \mathbb {B} ^{n}\ +\!\to \mathbb {B} ,}$ or even more succinctly as ${\displaystyle [\mathbb {B} ^{n}].}$ (Used this way, the angle brackets ${\displaystyle \langle \ldots \rangle }$ are referred to as generator brackets.) Table 2 exhibits the scheme of notation I use to formalize the domain of propositional calculus, corresponding to the logical content of truth tables and venn diagrams. Although it overworks the square brackets a bit, I also use either one of the equivalent notations ${\displaystyle [n]}$ or ${\displaystyle \mathbf {n} }$ to denote the data type of a finite set on ${\displaystyle n}$ elements. ${\displaystyle {\text{Symbol}}}$ ${\displaystyle {\text{Notation}}}$ ${\displaystyle {\text{Description}}}$ ${\displaystyle {\text{Type}}}$ ${\displaystyle {\mathfrak {A}}}$ ${\displaystyle \{{}^{\backprime \backprime }a_{1}{}^{\prime \prime },\ldots ,{}^{\backprime \backprime }a_{n}{}^{\prime \prime }\}}$ ${\displaystyle {\text{Alphabet}}}$ ${\displaystyle [n]=\mathbf {n} }$ ${\displaystyle {\mathcal {A}}}$ ${\displaystyle \{a_{1},\ldots ,a_{n}\}}$ ${\displaystyle {\text{Basis}}}$ ${\displaystyle [n]=\mathbf {n} }$ ${\displaystyle A_{i}}$ ${\displaystyle \{{\texttt {(}}a_{i}{\texttt {)}},a_{i}\}}$ ${\displaystyle {\text{Dimension}}~i}$ ${\displaystyle \mathbb {B} }$ ${\displaystyle A}$ ${\displaystyle {\begin{matrix}\langle {\mathcal {A}}\rangle \\[2pt]\langle a_{1},\ldots ,a_{n}\rangle \\[2pt]\{(a_{1},\ldots ,a_{n})\}\\[2pt]A_{1}\times \ldots \times A_{n}\\[2pt]\textstyle \prod _{i=1}^{n}A_{i}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\text{Set of cells}},\\[2pt]{\text{coordinate tuples}},\\[2pt]{\text{points, or vectors}}\\[2pt]{\text{in the universe}}\\[2pt]{\text{of discourse}}\end{matrix}}}$ ${\displaystyle \mathbb {B} ^{n}}$ ${\displaystyle A^{*}}$ ${\displaystyle (\mathrm {hom} :A\to \mathbb {B} )}$ ${\displaystyle {\text{Linear functions}}}$ ${\displaystyle (\mathbb {B} ^{n})^{*}\cong \mathbb {B} ^{n}}$ ${\displaystyle A^{\uparrow }}$ ${\displaystyle (A\to \mathbb {B} )}$ ${\displaystyle {\text{Boolean functions}}}$ ${\displaystyle \mathbb {B} ^{n}\to \mathbb {B} }$ ${\displaystyle A^{\bullet }}$ ${\displaystyle {\begin{matrix}[{\mathcal {A}}]\\[2pt](A,A^{\uparrow })\\[2pt](A~+\!\to \mathbb {B} )\\[2pt](A,(A\to \mathbb {B} ))\\[2pt][a_{1},\ldots ,a_{n}]\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\text{Universe of discourse}}\\[2pt]{\text{based on the features}}\\[2pt]\{a_{1},\ldots ,a_{n}\}\end{matrix}}}$ ${\displaystyle {\begin{matrix}(\mathbb {B} ^{n},(\mathbb {B} ^{n}\to \mathbb {B} ))\\[2pt](\mathbb {B} ^{n}~+\!\to \mathbb {B} )\\[2pt][\mathbb {B} ^{n}]\end{matrix}}}$ ### Qualitative Logic and Quantitative Analogy Logical, however, is used in a third sense, which is at once more vital and more practical; to denote, namely, the systematic care, negative and positive, taken to safeguard reflection so that it may yield the best results under the given conditions. — John Dewey, How We Think, [Dew, 56] These concepts and notations may now be explained in greater detail. In order to begin as simply as possible, let us distinguish two levels of analysis and set out initially on the easier path. 
On the first level of analysis we take spaces like ${\displaystyle \mathbb {B} ,}$ ${\displaystyle \mathbb {B} ^{n},}$ and ${\displaystyle (\mathbb {B} ^{n}\to \mathbb {B} )}$ at face value and treat them as the primary objects of interest. On the second level of analysis we use these spaces as coordinate charts for talking about points and functions in more fundamental spaces. A pair of spaces, of types ${\displaystyle \mathbb {B} ^{n}}$ and ${\displaystyle (\mathbb {B} ^{n}\to \mathbb {B} ),}$ give typical expression to everything we commonly associate with the ordinary picture of a venn diagram. The dimension, ${\displaystyle n,}$ counts the number of “circles” or simple closed curves that are inscribed in the universe of discourse, corresponding to its relevant logical features or basic propositions. Elements of type ${\displaystyle \mathbb {B} ^{n}}$ correspond to what are often called propositional interpretations in logic, that is, the different assignments of truth values to sentence letters. Relative to a given universe of discourse, these interpretations are visualized as its cells, in other words, the smallest enclosed areas or undivided regions of the venn diagram. The functions ${\displaystyle f:\mathbb {B} ^{n}\to \mathbb {B} }$ correspond to the different ways of shading the venn diagram to indicate arbitrary propositions, regions, or sets. Regions included under a shading indicate the models, and regions excluded represent the non-models of a proposition. To recognize and formalize the natural cohesion of these two layers of concepts into a single universe of discourse, we introduce the type notations ${\displaystyle [\mathbb {B} ^{n}]=\mathbb {B} ^{n}\ +\!\to \mathbb {B} }$ to stand for the pair of types ${\displaystyle (\mathbb {B} ^{n},(\mathbb {B} ^{n}\to \mathbb {B} )).}$ The resulting “stereotype” serves to frame the universe of discourse as a unified categorical object, and makes it subject to prescribed sets of evaluations and transformations (categorical morphisms or arrows) that affect the universe of discourse as an integrated whole. Most of the time we can serve the algebraic, geometric, and logical interests of our study without worrying about their occasional conflicts and incidental divergences. The conventions and definitions already set down will continue to cover most of the algebraic and functional aspects of our discussion, but to handle the logical and qualitative aspects we will need to add a few more. In general, abstract sets may be denoted by gothic, greek, or script capital variants of ${\displaystyle A,B,C,}$ and so on, with their elements being denoted by a corresponding set of subscripted letters in plain lower case, for example, ${\displaystyle {\mathcal {A}}=\{a_{i}\}.}$ Most of the time, a set such as ${\displaystyle {\mathcal {A}}=\{a_{i}\}}$ will be employed as the alphabet of a formal language. These alphabet letters serve to name the logical features (properties or propositions) that generate a particular universe of discourse. When we want to discuss the particular features of a universe of discourse, beyond the abstract designation of a type like ${\displaystyle (\mathbb {B} ^{n}\ +\!\to \mathbb {B} ),}$ then we may use the following notations. 
If ${\displaystyle {\mathcal {A}}=\{a_{1},\ldots ,a_{n}\}}$ is an alphabet of logical features, then ${\displaystyle A=\langle {\mathcal {A}}\rangle =\langle a_{1},\ldots ,a_{n}\rangle }$ is the set of interpretations, ${\displaystyle A^{\uparrow }=(A\to \mathbb {B} )}$ is the set of propositions, and ${\displaystyle A^{\bullet }=[{\mathcal {A}}]=[a_{1},\ldots ,a_{n}]}$ is the combination of these interpretations and propositions into the universe of discourse that is based on the features ${\displaystyle \{a_{1},\ldots ,a_{n}\}.}$ As always, especially in concrete examples, these rules may be dropped whenever necessary, reverting to a free assortment of feature labels. However, when we need to talk about the logical aspects of a space that is already named as a vector space, it will be necessary to make special provisions. At any rate, these elaborations can be deferred until actually needed. ### Philosophy of Notation : Formal Terms and Flexible Types Where number is irrelevant, regimented mathematical technique has hitherto tended to be lacking. Thus it is that the progress of natural science has depended so largely upon the discernment of measurable quantity of one sort or another. — W.V. Quine, Mathematical Logic, [Qui, 7] For much of our discussion propositions and boolean functions are treated as the same formal objects, or as different interpretations of the same formal calculus. This rule of interpretation has exceptions, though. There is a distinctively logical interest in the use of propositional calculus that is not exhausted by its functional interpretation. It is part of our task in this study to deal with these uniquely logical characteristics as they present themselves both in our subject matter and in our formal calculus. Just to provide a hint of what's at stake: In logic, as opposed to the more imaginative realms of mathematics, we consider it a good thing to always know what we are talking about. Where mathematics encourages tolerance for uninterpreted symbols as intermediate terms, logic exerts a keener effort to interpret directly each oblique carrier of meaning, no matter how slight, and to unfold the complicities of every indirection in the flow of information. Translated into functional terms, this means that we want to maintain a continual, immediate, and persistent sense of both the converse relation ${\displaystyle f^{-1}\subseteq \mathbb {B} \times \mathbb {B} ^{n},}$ or what is the same thing, ${\displaystyle f^{-1}:\mathbb {B} \to {\mathcal {P}}(\mathbb {B} ^{n}),}$ and the fibers or inverse images ${\displaystyle f^{-1}(0)}$ and ${\displaystyle f^{-1}(1),}$ associated with each boolean function ${\displaystyle f:\mathbb {B} ^{n}\to \mathbb {B} }$ that we use. In practical terms, the desired implementation of a propositional interpreter should incorporate our intuitive recognition that the induced partition of the functional domain into level sets ${\displaystyle f^{-1}(b),}$ for ${\displaystyle b\in \mathbb {B} ,}$ is part and parcel of understanding the denotative uses of each propositional function ${\displaystyle f.}$ ### Special Classes of Propositions It is important to remember that the coordinate propositions ${\displaystyle \{a_{i}\},}$ besides being projection maps ${\displaystyle a_{i}:\mathbb {B} ^{n}\to \mathbb {B} ,}$ are propositions on an equal footing with all others, even though employed as a basis in a particular moment. 
This set of ${\displaystyle n}$ propositions may sometimes be referred to as the basic propositions, the coordinate propositions, or the simple propositions that found a universe of discourse. Either one of the equivalent notations, ${\displaystyle \{a_{i}:\mathbb {B} ^{n}\to \mathbb {B} \}}$ or ${\displaystyle (\mathbb {B} ^{n}{\xrightarrow {i}}\mathbb {B} ),}$ may be used to indicate the adoption of the propositions ${\displaystyle a_{i}}$ as a basis for describing a universe of discourse. Among the ${\displaystyle 2^{2^{n}}}$ propositions in ${\displaystyle [a_{1},\ldots ,a_{n}]}$ are several families of ${\displaystyle 2^{n}}$ propositions each that take on special forms with respect to the basis ${\displaystyle \{a_{1},\ldots ,a_{n}\}.}$ Three of these families are especially prominent in the present context, the linear, the positive, and the singular propositions. Each family is naturally parameterized by the coordinate ${\displaystyle n}$-tuples in ${\displaystyle \mathbb {B} ^{n}}$ and falls into ${\displaystyle n+1}$ ranks, with a binomial coefficient ${\displaystyle {\tbinom {n}{k}}}$ giving the number of propositions that have rank or weight ${\displaystyle k.}$ • The linear propositions, ${\displaystyle \{\ell :\mathbb {B} ^{n}\to \mathbb {B} \}=(\mathbb {B} ^{n}{\xrightarrow {\ell }}\mathbb {B} ),}$ may be written as sums: ${\displaystyle \sum _{i=1}^{n}e_{i}~=~e_{1}+\ldots +e_{n}~{\text{where}}~\left\{{\begin{matrix}e_{i}=a_{i}\\{\text{or}}\\e_{i}=0\end{matrix}}\right\}~{\text{for}}~i=1~{\text{to}}~n.}$ • The positive propositions, ${\displaystyle \{p:\mathbb {B} ^{n}\to \mathbb {B} \}=(\mathbb {B} ^{n}{\xrightarrow {p}}\mathbb {B} ),}$ may be written as products: ${\displaystyle \prod _{i=1}^{n}e_{i}~=~e_{1}\cdot \ldots \cdot e_{n}~{\text{where}}~\left\{{\begin{matrix}e_{i}=a_{i}\\{\text{or}}\\e_{i}=1\end{matrix}}\right\}~{\text{for}}~i=1~{\text{to}}~n.}$ • The singular propositions, ${\displaystyle \{\mathbf {x} :\mathbb {B} ^{n}\to \mathbb {B} \}=(\mathbb {B} ^{n}{\xrightarrow {s}}\mathbb {B} ),}$ may be written as products: ${\displaystyle \prod _{i=1}^{n}e_{i}~=~e_{1}\cdot \ldots \cdot e_{n}~{\text{where}}~\left\{{\begin{matrix}e_{i}=a_{i}\\{\text{or}}\\e_{i}={\texttt {(}}a_{i}{\texttt {)}}\end{matrix}}\right\}~{\text{for}}~i=1~{\text{to}}~n.}$ In each case the rank ${\displaystyle k}$ ranges from ${\displaystyle 0}$ to ${\displaystyle n}$ and counts the number of positive appearances of the coordinate propositions ${\displaystyle a_{1},\ldots ,a_{n}}$ in the resulting expression. For example, for ${\displaystyle {n=3},}$ the linear proposition of rank ${\displaystyle 0}$ is ${\displaystyle 0,}$ the positive proposition of rank ${\displaystyle 0}$ is ${\displaystyle 1,}$ and the singular proposition of rank ${\displaystyle 0}$ is ${\displaystyle {\texttt {(}}a_{1}{\texttt {)(}}a_{2}{\texttt {)(}}a_{3}{\texttt {)}}.}$ The basic propositions ${\displaystyle a_{i}:\mathbb {B} ^{n}\to \mathbb {B} }$ are both linear and positive. So these two kinds of propositions, the linear and the positive, may be viewed as two different ways of generalizing the class of basic propositions. 
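To make the three families concrete, here is a small computational sketch in Python (a language not otherwise used in this article, offered purely as illustration). Cells of the space of interpretations are modeled as tuples of 0s and 1s with 0-based indices standing in for the subscripts 1 to n; the linear proposition sums its chosen coordinates mod 2, the positive proposition takes their product, and the singular proposition tests agreement with one fixed coordinate tuple, exactly as in the definitions above.

```python
# Cells of B^n are tuples of 0s and 1s.  J is the set of "liked" coordinate indices.
from itertools import product

def linear(J):
    """Linear proposition: parity (sum mod 2) of the coordinates indexed by J."""
    return lambda cell: sum(cell[i] for i in J) % 2

def positive(J):
    """Positive proposition: conjunction (product) of the coordinates indexed by J."""
    return lambda cell: int(all(cell[i] for i in J))

def singular(point):
    """Singular proposition: true exactly at the one cell equal to `point`."""
    return lambda cell: int(tuple(cell) == tuple(point))

n = 3
cells = list(product((0, 1), repeat=n))

l = linear({0, 1})        # a1 + a2  (exclusive disjunction of the first two features)
p = positive({0, 1})      # a1 a2    (conjunction of the first two features)
s = singular((1, 1, 1))   # a1 a2 a3 read as a point-picker

for cell in cells:
    print(cell, l(cell), p(cell), s(cell))
```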
Linear propositions and positive propositions are generated by taking boolean sums and products, respectively, over selected subsets of basic propositions, so both families of propositions are parameterized by the powerset ${\displaystyle {\mathcal {P}}({\mathcal {I}}),}$ that is, the set of all subsets ${\displaystyle J}$ of the basic index set ${\displaystyle {\mathcal {I}}=\{1,\ldots ,n\}.}$ Let us define ${\displaystyle {\mathcal {A}}_{J}}$ as the subset of ${\displaystyle {\mathcal {A}}}$ that is given by ${\displaystyle \{a_{i}:i\in J\}.}$ Then we may comprehend the action of the linear and the positive propositions in the following terms: • The linear proposition ${\displaystyle \ell _{J}:\mathbb {B} ^{n}\to \mathbb {B} }$ evaluates each cell ${\displaystyle \mathbf {x} }$ of ${\displaystyle \mathbb {B} ^{n}}$ by looking at the coefficients of ${\displaystyle \mathbf {x} }$ with respect to the features that ${\displaystyle \ell _{J}}$ "likes", namely those in ${\displaystyle {\mathcal {A}}_{J},}$ and then adds them up in ${\displaystyle \mathbb {B} .}$ Thus, ${\displaystyle \ell _{J}(\mathbf {x} )}$ computes the parity of the number of features that ${\displaystyle \mathbf {x} }$ has in ${\displaystyle {\mathcal {A}}_{J},}$ yielding one for odd and zero for even. Expressed in this idiom, ${\displaystyle \ell _{J}(\mathbf {x} )=1}$ says that ${\displaystyle \mathbf {x} }$ seems odd (or oddly true) to ${\displaystyle {\mathcal {A}}_{J},}$ whereas ${\displaystyle \ell _{J}(\mathbf {x} )=0}$ says that ${\displaystyle \mathbf {x} }$ seems even (or evenly true) to ${\displaystyle {\mathcal {A}}_{J},}$ so long as we recall that zero times is evenly often, too. • The positive proposition ${\displaystyle p_{J}:\mathbb {B} ^{n}\to \mathbb {B} }$ evaluates each cell ${\displaystyle \mathbf {x} }$ of ${\displaystyle \mathbb {B} ^{n}}$ by looking at the coefficients of ${\displaystyle \mathbf {x} }$ with regard to the features that ${\displaystyle p_{J}}$ "likes", namely those in ${\displaystyle {\mathcal {A}}_{J},}$ and then takes their product in ${\displaystyle \mathbb {B} .}$ Thus, ${\displaystyle p_{J}(\mathbf {x} )}$ assesses the unanimity of the multitude of features that ${\displaystyle \mathbf {x} }$ has in ${\displaystyle {\mathcal {A}}_{J},}$ yielding one for all and aught for else. In these consensual or contractual terms, ${\displaystyle p_{J}(\mathbf {x} )=1}$ means that ${\displaystyle \mathbf {x} }$ is AOK or congruent with all of the conditions of ${\displaystyle {\mathcal {A}}_{J},}$ while ${\displaystyle p_{J}(\mathbf {x} )=0}$ means that ${\displaystyle \mathbf {x} }$ defaults or dissents from some condition of ${\displaystyle {\mathcal {A}}_{J}.}$ ### Basis Relativity and Type Ambiguity Finally, two things are important to keep in mind with regard to the simplicity, linearity, positivity, and singularity of propositions. First, all of these properties are relative to a particular basis. For example, a singular proposition with respect to a basis ${\displaystyle {\mathcal {A}}}$ will not remain singular if ${\displaystyle {\mathcal {A}}}$ is extended by a number of new and independent features. Even if we stick to the original set of pairwise options ${\displaystyle \{a_{i}\}\cup \{{\texttt {(}}a_{i}{\texttt {)}}\}}$ to select a new basis, the sets of linear and positive propositions are determined by the choice of simple propositions, and this determination is tantamount to the conventional choice of a cell as origin. 
Second, the singular propositions ${\displaystyle \mathbb {B} ^{n}{\xrightarrow {\mathbf {x} }}\mathbb {B} ,}$ picking out as they do a single cell or a coordinate tuple ${\displaystyle \mathbf {x} }$ of ${\displaystyle \mathbb {B} ^{n},}$ become the carriers or the vehicles of a certain type-ambiguity that vacillates between the dual forms ${\displaystyle \mathbb {B} ^{n}}$ and ${\displaystyle (\mathbb {B} ^{n}{\xrightarrow {\mathbf {x} }}\mathbb {B} )}$ and infects the whole hierarchy of types built on them. In other words, the terms that signify the interpretations ${\displaystyle \mathbf {x} :\mathbb {B} ^{n}}$ and the singular propositions ${\displaystyle \mathbf {x} :\mathbb {B} ^{n}{\xrightarrow {\mathbf {x} }}\mathbb {B} }$ are fully equivalent in information, and this means that every token of the type ${\displaystyle \mathbb {B} ^{n}}$ can be reinterpreted as an appearance of the subtype ${\displaystyle \mathbb {B} ^{n}{\xrightarrow {\mathbf {x} }}\mathbb {B} .}$ And vice versa, the two types can be exchanged with each other everywhere that they turn up. In practical terms, this allows the use of singular propositions as a way of denoting points, forming an alternative to coordinate tuples. For example, relative to the universe of discourse ${\displaystyle [a_{1},a_{2},a_{3}]}$ the singular proposition ${\displaystyle a_{1}a_{2}a_{3}:\mathbb {B} ^{3}{\xrightarrow {s}}\mathbb {B} }$ could be explicitly retyped as ${\displaystyle a_{1}a_{2}a_{3}:\mathbb {B} ^{3}}$ to indicate the point ${\displaystyle (1,1,1)}$ but in most cases the proper interpretation could be gathered from context. Both notations remain dependent on a particular basis, but the code that is generated under the singular option has the advantage in its self-commenting features, in other words, it constantly reminds us of its basis in the process of denoting points. When the time comes to put a multiplicity of different bases into play, and to search for objects and properties that remain invariant under the transformations between them, this infinitesimal potential advantage may well evolve into an overwhelming practical necessity. ### The Analogy Between Real and Boolean Types Measurement consists in correlating our subject matter with the series of real numbers; and such correlations are desirable because, once they are set up, all the well-worked theory of numerical mathematics lies ready at hand as a tool for our further reasoning. — W.V. Quine, Mathematical Logic, [Qui, 7] There are two further reasons why it useful to spend time on a careful treatment of types, and they both have to do with our being able to take full computational advantage of certain dimensions of flexibility in the types that apply to terms. First, the domains of differential geometry and logic programming are connected by analogies between real and boolean types of the same pattern. Second, the types involved in these patterns have important isomorphisms connecting them that apply on both the real and the boolean sides of the picture. Amazingly enough, these isomorphisms are themselves schematized by the axioms and theorems of propositional logic. This fact is known as the propositions as types analogy or the Curry–Howard isomorphism [How]. In another formulation it says that terms are to types as proofs are to propositions. See [LaS, 42–46] and [SeH] for a good discussion and further references. 
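As a toy illustration of the terms-to-types reading just mentioned (a sketch of my own, not drawn from the sources cited above): the transitivity of implication, (a → b) → ((b → c) → (a → c)), corresponds under the analogy to the typeability of function composition, and the composition term below is a "proof" of that proposition in the sense of the analogy.

```python
# Propositions-as-types, toy version: read "->" in a type as implication.
# The composition term below has the type
#     (A -> B) -> ((B -> C) -> (A -> C)),
# which mirrors the transitivity of implication.
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B]) -> Callable[[Callable[[B], C]], Callable[[A], C]]:
    def with_g(g: Callable[[B], C]) -> Callable[[A], C]:
        return lambda a: g(f(a))
    return with_g

# A "proof" applied to concrete evidence.
succ = lambda n: n + 1           # int -> int
show = lambda n: str(n)          # int -> str
print(compose(succ)(show)(41))   # prints "42"
```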
To anticipate the bearing of these issues on our immediate topic, Table 3 sketches a partial overview of the Real to Boolean analogy that may serve to illustrate the paradigm in question. ${\displaystyle {\text{Real Domain}}~\mathbb {R} }$ ${\displaystyle \longleftrightarrow }$ ${\displaystyle {\text{Boolean Domain}}~\mathbb {B} }$ ${\displaystyle \mathbb {R} ^{n}}$ ${\displaystyle {\text{Basic Space}}}$ ${\displaystyle \mathbb {B} ^{n}}$ ${\displaystyle \mathbb {R} ^{n}\to \mathbb {R} }$ ${\displaystyle {\text{Function Space}}}$ ${\displaystyle \mathbb {B} ^{n}\to \mathbb {B} }$ ${\displaystyle (\mathbb {R} ^{n}\to \mathbb {R} )\to \mathbb {R} }$ ${\displaystyle {\text{Tangent Vector}}}$ ${\displaystyle (\mathbb {B} ^{n}\to \mathbb {B} )\to \mathbb {B} }$ ${\displaystyle \mathbb {R} ^{n}\to ((\mathbb {R} ^{n}\to \mathbb {R} )\to \mathbb {R} )}$ ${\displaystyle {\text{Vector Field}}}$ ${\displaystyle \mathbb {B} ^{n}\to ((\mathbb {B} ^{n}\to \mathbb {B} )\to \mathbb {B} )}$ ${\displaystyle (\mathbb {R} ^{n}\times (\mathbb {R} ^{n}\to \mathbb {R} ))\to \mathbb {R} }$ " ${\displaystyle (\mathbb {B} ^{n}\times (\mathbb {B} ^{n}\to \mathbb {B} ))\to \mathbb {B} }$ ${\displaystyle ((\mathbb {R} ^{n}\to \mathbb {R} )\times \mathbb {R} ^{n})\to \mathbb {R} }$ " ${\displaystyle ((\mathbb {B} ^{n}\to \mathbb {B} )\times \mathbb {B} ^{n})\to \mathbb {B} }$ ${\displaystyle (\mathbb {R} ^{n}\to \mathbb {R} )\to (\mathbb {R} ^{n}\to \mathbb {R} )}$ ${\displaystyle {\text{Derivation}}}$ ${\displaystyle (\mathbb {B} ^{n}\to \mathbb {B} )\to (\mathbb {B} ^{n}\to \mathbb {B} )}$ ${\displaystyle \mathbb {R} ^{n}\to \mathbb {R} ^{m}}$ ${\displaystyle {\begin{matrix}{\text{Basic}}\\[2pt]{\text{Transformation}}\end{matrix}}}$ ${\displaystyle \mathbb {B} ^{n}\to \mathbb {B} ^{m}}$ ${\displaystyle (\mathbb {R} ^{n}\to \mathbb {R} )\to (\mathbb {R} ^{m}\to \mathbb {R} )}$ ${\displaystyle {\begin{matrix}{\text{Function}}\\[2pt]{\text{Transformation}}\end{matrix}}}$ ${\displaystyle (\mathbb {B} ^{n}\to \mathbb {B} )\to (\mathbb {B} ^{m}\to \mathbb {B} )}$ The Table exhibits a sample of likely parallels between the real and boolean domains. The central column gives a selection of terminology that is borrowed from differential geometry and extended in its meaning to the logical side of the Table. These are the varieties of spaces that come up when we turn to analyzing the dynamics of processes that pursue their courses through the states of an arbitrary space ${\displaystyle X.}$ Moreover, when it becomes necessary to approach situations of overwhelming dynamic complexity in a succession of qualitative reaches, then the methods of logic that are afforded by the boolean domains, with their declarative means of synthesis and deductive modes of analysis, supply a natural battery of tools for the task. It is usually expedient to take these spaces two at a time, in dual pairs of the form ${\displaystyle X}$ and ${\displaystyle (X\to \mathbb {K} ).}$ In general, one creates pairs of type schemas by replacing any space ${\displaystyle X}$ with its dual ${\displaystyle (X\to \mathbb {K} ),}$ for example, pairing the type ${\displaystyle X\to Y}$ with the type ${\displaystyle (X\to \mathbb {K} )\to (Y\to \mathbb {K} ),}$ and ${\displaystyle X\times Y}$ with ${\displaystyle (X\to \mathbb {K} )\times (Y\to \mathbb {K} ).}$ The word dual is used here in its broader sense to mean all of the functionals, not just the linear ones. 
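As a purely illustrative gloss on the boolean column of the Table, the type patterns can be transcribed into type aliases in a programming language; the Python rendering below is an assumption of convenience, not part of the text, but it may make the nesting of the function spaces easier to scan.

```python
from typing import Callable, Tuple

B = bool                        # the boolean "scalar" domain B
Cell = Tuple[bool, ...]         # a point of B^n (Basic Space)

Proposition   = Callable[[Cell], B]                   # B^n -> B              (Function Space)
TangentVector = Callable[[Proposition], B]            # (B^n -> B) -> B       (Tangent Vector)
VectorField   = Callable[[Cell], TangentVector]       # B^n -> ((B^n -> B) -> B)
Derivation    = Callable[[Proposition], Proposition]  # (B^n -> B) -> (B^n -> B)

# A sample inhabitant of the first two types, just to show the nesting.
p: Proposition = lambda cell: cell[0] and not cell[1]
t: TangentVector = lambda prop: prop((True, False))
print(t(p))   # True
```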
Given any function ${\displaystyle f:X\to \mathbb {K} ,}$ the converse or inverse relation corresponding to ${\displaystyle f}$ is denoted ${\displaystyle f^{-1},}$ and the subsets of ${\displaystyle X}$ that are defined by ${\displaystyle f^{-1}(k),}$ taken over ${\displaystyle k}$ in ${\displaystyle \mathbb {K} ,}$ are called the fibers or the level sets of the function ${\displaystyle f.}$ ### Theory of Control and Control of Theory You will hardly know who I am or what I mean, But I shall be good health to you nevertheless, And filter and fibre your blood. — Walt Whitman, Leaves of Grass, [Whi, 88] In the boolean context a function ${\displaystyle f:X\to \mathbb {B} }$ is tantamount to a proposition about elements of ${\displaystyle X,}$ and the elements of ${\displaystyle X}$ constitute the interpretations of that proposition. The fiber ${\displaystyle f^{-1}(1)}$ comprises the set of models of ${\displaystyle f,}$ or examples of elements in ${\displaystyle X}$ satisfying the proposition ${\displaystyle f.}$ The fiber ${\displaystyle f^{-1}(0)}$ collects the complementary set of anti-models, or the exceptions to the proposition ${\displaystyle f}$ that exist in ${\displaystyle X.}$ Of course, the space of functions ${\displaystyle (X\to \mathbb {B} )}$ is isomorphic to the set of all subsets of ${\displaystyle X,}$ called the power set of ${\displaystyle X,}$ and often denoted ${\displaystyle {\mathcal {P}}(X)}$ or ${\displaystyle 2^{X}.}$ The operation of replacing ${\displaystyle X}$ by ${\displaystyle (X\to \mathbb {B} )}$ in a type schema corresponds to a certain shift of attitude towards the space ${\displaystyle X,}$ in which one passes from a focus on the ostensibly individual elements of ${\displaystyle X}$ to a concern with the states of information and uncertainty that one possesses about objects and situations in ${\displaystyle X.}$ The conceptual obstacles in the path of this transition can be smoothed over by using singular functions ${\displaystyle (\mathbb {B} ^{n}{\xrightarrow {\mathbf {x} }}\mathbb {B} )}$ as stepping stones. First of all, it's an easy step from an element ${\displaystyle \mathbf {x} }$ of type ${\displaystyle \mathbb {B} ^{n}}$ to the equivalent information of a singular proposition ${\displaystyle \mathbf {x} :X{\xrightarrow {s}}\mathbb {B} ,}$ and then only a small jump of generalization remains to reach the type of an arbitrary proposition ${\displaystyle f:X\to \mathbb {B} ,}$ perhaps understood to indicate a relaxed constraint on the singularity of points or a neighborhood circumscribing the original ${\displaystyle \mathbf {x} .}$ This is frequently a useful transformation, communicating between the objective and the intentional perspectives, in spite perhaps of the open objection that this distinction is transient in the mean time and ultimately superficial. It is hoped that this measure of flexibility, allowing us to stretch a point into a proposition, can be useful in the examination of inquiry driven systems, where the differences between empirical, intentional, and theoretical propositions constitute the discrepancies and the distributions that drive experimental activity. I can give this model of inquiry a cybernetic cast by realizing that theory change and theory evolution, as well as the choice and the evaluation of experiments, are actions that are taken by a system or its agent in response to the differences that are detected between observational contents and theoretical coverage. 
All of the above notwithstanding, there are several points that distinguish these two tasks, namely, the theory of control and the control of theory, features that are often obscured by too much precipitation in the quickness with which we understand their similarities. In the control of uncertainty through inquiry, some of the actuators that we need to be concerned with are axiom changers and theory modifiers, operators with the power to compile and to revise the theories that generate expectations and predictions, effectors that form and edit our grammars for the languages of observational data, and agencies that rework the proposed model to fit the actual sequences of events and the realized relationships of values that are observed in the environment. Moreover, when steps must be taken to carry out an experimental action, there must be something about the particular shape of our uncertainty that guides us in choosing what directions to explore, and this impression is more than likely influenced by previous accumulations of experience. Thus it must be anticipated that much of what goes into scientific progress, or any sustainable effort toward a goal of knowledge, is necessarily predicated on long term observation and modal expectations, not only on the more local or short term prediction and correction. ### Propositions as Types and Higher Order Types The types collected in Table 3 (repeated below) serve to illustrate the themes of higher order propositional expressions and the propositions as types (PAT) analogy. ${\displaystyle {\text{Real Domain}}~\mathbb {R} }$ ${\displaystyle \longleftrightarrow }$ ${\displaystyle {\text{Boolean Domain}}~\mathbb {B} }$ ${\displaystyle \mathbb {R} ^{n}}$ ${\displaystyle {\text{Basic Space}}}$ ${\displaystyle \mathbb {B} ^{n}}$ ${\displaystyle \mathbb {R} ^{n}\to \mathbb {R} }$ ${\displaystyle {\text{Function Space}}}$ ${\displaystyle \mathbb {B} ^{n}\to \mathbb {B} }$ ${\displaystyle (\mathbb {R} ^{n}\to \mathbb {R} )\to \mathbb {R} }$ ${\displaystyle {\text{Tangent Vector}}}$ ${\displaystyle (\mathbb {B} ^{n}\to \mathbb {B} )\to \mathbb {B} }$ ${\displaystyle \mathbb {R} ^{n}\to ((\mathbb {R} ^{n}\to \mathbb {R} )\to \mathbb {R} )}$ ${\displaystyle {\text{Vector Field}}}$ ${\displaystyle \mathbb {B} ^{n}\to ((\mathbb {B} ^{n}\to \mathbb {B} )\to \mathbb {B} )}$ ${\displaystyle (\mathbb {R} ^{n}\times (\mathbb {R} ^{n}\to \mathbb {R} ))\to \mathbb {R} }$ " ${\displaystyle (\mathbb {B} ^{n}\times (\mathbb {B} ^{n}\to \mathbb {B} ))\to \mathbb {B} }$ ${\displaystyle ((\mathbb {R} ^{n}\to \mathbb {R} )\times \mathbb {R} ^{n})\to \mathbb {R} }$ " ${\displaystyle ((\mathbb {B} ^{n}\to \mathbb {B} )\times \mathbb {B} ^{n})\to \mathbb {B} }$ ${\displaystyle (\mathbb {R} ^{n}\to \mathbb {R} )\to (\mathbb {R} ^{n}\to \mathbb {R} )}$ ${\displaystyle {\text{Derivation}}}$ ${\displaystyle (\mathbb {B} ^{n}\to \mathbb {B} )\to (\mathbb {B} ^{n}\to \mathbb {B} )}$ ${\displaystyle \mathbb {R} ^{n}\to \mathbb {R} ^{m}}$ ${\displaystyle {\begin{matrix}{\text{Basic}}\\[2pt]{\text{Transformation}}\end{matrix}}}$ ${\displaystyle \mathbb {B} ^{n}\to \mathbb {B} ^{m}}$ ${\displaystyle (\mathbb {R} ^{n}\to \mathbb {R} )\to (\mathbb {R} ^{m}\to \mathbb {R} )}$ ${\displaystyle {\begin{matrix}{\text{Function}}\\[2pt]{\text{Transformation}}\end{matrix}}}$ ${\displaystyle (\mathbb {B} ^{n}\to \mathbb {B} )\to (\mathbb {B} ^{m}\to \mathbb {B} )}$ First, observe that the type of a tangent vector at a point, also known as a directional derivative at that point, has the form 
${\displaystyle (\mathbb {K} ^{n}\to \mathbb {K} )\to \mathbb {K} ,}$ where ${\displaystyle \mathbb {K} }$ is the chosen ground field, in the present case either ${\displaystyle \mathbb {R} }$ or ${\displaystyle \mathbb {B} .}$ At a point in a space of type ${\displaystyle \mathbb {K} ^{n},}$ a directional derivative operator ${\displaystyle \vartheta }$ takes a function on that space, an ${\displaystyle f}$ of type ${\displaystyle (\mathbb {K} ^{n}\to \mathbb {K} ),}$ and maps it to a ground field value of type ${\displaystyle \mathbb {K} .}$ This value is known as the derivative of ${\displaystyle f}$ in the direction ${\displaystyle \vartheta }$ [Che46, 76–77]. In the boolean case ${\displaystyle \vartheta :(\mathbb {B} ^{n}\to \mathbb {B} )\to \mathbb {B} }$ has the form of a proposition about propositions, in other words, a proposition of the next higher type. Next, by way of illustrating the propositions as types idea, consider a proposition of the form ${\displaystyle X\Rightarrow (Y\Rightarrow Z).}$ One knows from propositional calculus that this is logically equivalent to a proposition of the form ${\displaystyle (X\land Y)\Rightarrow Z.}$ But this equivalence should remind us of the functional isomorphism that exists between a construction of the type ${\displaystyle X\to (Y\to Z)}$ and a construction of the type ${\displaystyle (X\times Y)\to Z.}$ The propositions as types analogy permits us to take a functional type like this and, under the right conditions, replace the functional arrows “${\displaystyle \to }$” and products “${\displaystyle \times }$” with the respective logical arrows “${\displaystyle \Rightarrow }$” and products “${\displaystyle \land }$”. Accordingly, viewing the result as a proposition, we can employ axioms and theorems of propositional calculus to suggest appropriate isomorphisms among the categorical and functional constructions. Finally, examine the middle four rows of Table 3. These display a series of isomorphic types that stretch from the categories that are labeled Vector Field to those that are labeled Derivation. A vector field, also known as an infinitesimal transformation, associates a tangent vector at a point with each point of a space. In symbols, a vector field is a function of the form ${\displaystyle \textstyle \xi :X\to \bigcup _{x\in X}\xi _{x}}$ that assigns to each point ${\displaystyle x}$ of the space ${\displaystyle X}$ a tangent vector to ${\displaystyle X}$ at that point, namely, the tangent vector ${\displaystyle \xi _{x}}$ [Che46, 82–83]. If ${\displaystyle X}$ is of the type ${\displaystyle \mathbb {K} ^{n},}$ then ${\displaystyle \xi }$ is of the type ${\displaystyle \mathbb {K} ^{n}\to ((\mathbb {K} ^{n}\to \mathbb {K} )\to \mathbb {K} ).}$ This has the pattern ${\displaystyle X\to (Y\to Z),}$ with ${\displaystyle X=\mathbb {K} ^{n},}$ ${\displaystyle Y=(\mathbb {K} ^{n}\to \mathbb {K} ),}$ and ${\displaystyle Z=\mathbb {K} .}$ Applying the propositions as types analogy, one can follow this pattern through a series of metamorphoses from the type of a vector field to the type of a derivation, as traced out in Table 4. Observe how the function ${\displaystyle f:X\to \mathbb {K} ,}$ associated with the place of ${\displaystyle Y}$ in the pattern, moves through its paces from the second to the first position. 
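The functional isomorphism invoked here is ordinary currying, together with a swap of arguments. A small sketch, with illustrative names chosen for this note, traces the boolean instance of the pattern from the vector field shape X → (Y → Z) to the derivation shape Y → (X → Z); it exhibits only the type metamorphosis, not the differential content of a genuine derivation.

```python
def curry(f):
    """(X x Y) -> Z   becomes   X -> (Y -> Z)."""
    return lambda x: lambda y: f(x, y)

def uncurry(g):
    """X -> (Y -> Z)   becomes   (X x Y) -> Z."""
    return lambda x, y: g(x)(y)

def swap_args(f):
    """(X x Y) -> Z   becomes   (Y x X) -> Z, the middle step traced in Table 4."""
    return lambda y, x: f(x, y)

# A toy vector field xi : cell -> (proposition -> bool); here the "tangent vector"
# attached at a cell simply evaluates a proposition at that cell.
xi = lambda cell: (lambda prop: prop(cell))

# Following the pattern X -> (Y -> Z)  ~  (X x Y) -> Z  ~  (Y x X) -> Z  ~  Y -> (X -> Z):
step1 = uncurry(xi)        # (cell, prop) -> bool
step2 = swap_args(step1)   # (prop, cell) -> bool
deriv = curry(step2)       # prop -> (cell -> bool), the "derivation" shape

prop = lambda cell: cell[0] and not cell[1]
print(deriv(prop)((True, False)))   # True
```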
In this way, the vector field ${\displaystyle \xi ,}$ initially viewed as attaching each tangent vector ${\displaystyle \xi _{x}}$ to the site ${\displaystyle x}$ where it acts in ${\displaystyle X,}$ now comes to be seen as acting on each scalar potential ${\displaystyle f:X\to \mathbb {K} }$ like a generalized species of differentiation, producing another function ${\displaystyle \xi f:X\to \mathbb {K} }$ of the same type. ${\displaystyle {\text{Pattern}}}$ ${\displaystyle {\text{Construct}}}$ ${\displaystyle {\text{Instance}}}$ ${\displaystyle X\to (Y\to Z)}$ ${\displaystyle {\text{Vector Field}}}$ ${\displaystyle \mathbb {K} ^{n}\to ((\mathbb {K} ^{n}\to \mathbb {K} )\to \mathbb {K} )}$ ${\displaystyle (X\times Y)\to Z}$ ${\displaystyle \Uparrow }$ ${\displaystyle (\mathbb {K} ^{n}\times (\mathbb {K} ^{n}\to \mathbb {K} ))\to \mathbb {K} }$ ${\displaystyle (Y\times X)\to Z}$ ${\displaystyle \Downarrow }$ ${\displaystyle ((\mathbb {K} ^{n}\to \mathbb {K} )\times \mathbb {K} ^{n})\to \mathbb {K} }$ ${\displaystyle Y\to (X\to Z)}$ ${\displaystyle {\text{Derivation}}}$ ${\displaystyle (\mathbb {K} ^{n}\to \mathbb {K} )\to (\mathbb {K} ^{n}\to \mathbb {K} )}$ ### Reality at the Threshold of Logic But no science can rest entirely on measurement, and many scientific investigations are quite out of reach of that device. To the scientist longing for non-quantitative techniques, then, mathematical logic brings hope. — W.V. Quine, Mathematical Logic, [Qui, 7] Table 5 accumulates an array of notation that I hope will not be too distracting. Some of it is rarely needed, but has been filled in for the sake of completeness. Its purpose is simple, to give literal expression to the visual intuitions that come with venn diagrams, and to help build a bridge between our qualitative and quantitative outlooks on dynamic systems. 
${\displaystyle {\text{Linear Space}}}$ ${\displaystyle {\text{Liminal Space}}}$ ${\displaystyle {\text{Logical Space}}}$ ${\displaystyle {\begin{matrix}{\mathcal {X}}&=&\{x_{1},\ldots ,x_{n}\}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\underline {\mathcal {X}}}&=&\{{\underline {x}}_{1},\ldots ,{\underline {x}}_{n}\}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {A}}&=&\{a_{1},\ldots ,a_{n}\}\end{matrix}}}$ ${\displaystyle {\begin{matrix}X_{i}&=&\langle x_{i}\rangle \\&\cong &\mathbb {K} \end{matrix}}}$ ${\displaystyle {\begin{matrix}{\underline {X}}_{i}&=&\{{\texttt {(}}{\underline {x}}_{i}{\texttt {)}},{\underline {x}}_{i}\}\\&\cong &\mathbb {B} \end{matrix}}}$ ${\displaystyle {\begin{matrix}A_{i}&=&\{{\texttt {(}}a_{i}{\texttt {)}},a_{i}\}\\&\cong &\mathbb {B} \end{matrix}}}$ ${\displaystyle {\begin{matrix}X\\=&\langle {\mathcal {X}}\rangle \\=&\langle x_{1},\ldots ,x_{n}\rangle \\=&X_{1}\times \ldots \times X_{n}\\=&\prod _{i=1}^{n}X_{i}\\\cong &\mathbb {K} ^{n}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\underline {X}}\\=&\langle {\underline {\mathcal {X}}}\rangle \\=&\langle {\underline {x}}_{1},\ldots ,{\underline {x}}_{n}\rangle \\=&{\underline {X}}_{1}\times \ldots \times {\underline {X}}_{n}\\=&\prod _{i=1}^{n}{\underline {X}}_{i}\\\cong &\mathbb {B} ^{n}\end{matrix}}}$ ${\displaystyle {\begin{matrix}A\\=&\langle {\mathcal {A}}\rangle \\=&\langle a_{1},\ldots ,a_{n}\rangle \\=&A_{1}\times \ldots \times A_{n}\\=&\prod _{i=1}^{n}A_{i}\\\cong &\mathbb {B} ^{n}\end{matrix}}}$ ${\displaystyle {\begin{matrix}X^{*}&=&(\ell :X\to \mathbb {K} )\\&\cong &\mathbb {K} ^{n}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\underline {X}}^{*}&=&(\ell :{\underline {X}}\to \mathbb {B} )\\&\cong &\mathbb {B} ^{n}\end{matrix}}}$ ${\displaystyle {\begin{matrix}A^{*}&=&(\ell :A\to \mathbb {B} )\\&\cong &\mathbb {B} ^{n}\end{matrix}}}$ ${\displaystyle {\begin{matrix}X^{\uparrow }&=&(X\to \mathbb {K} )\\&\cong &(\mathbb {K} ^{n}\to \mathbb {K} )\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\underline {X}}^{\uparrow }&=&({\underline {X}}\to \mathbb {B} )\\&\cong &(\mathbb {B} ^{n}\to \mathbb {B} )\end{matrix}}}$ ${\displaystyle {\begin{matrix}A^{\uparrow }&=&(A\to \mathbb {B} )\\&\cong &(\mathbb {B} ^{n}\to \mathbb {B} )\end{matrix}}}$ ${\displaystyle {\begin{matrix}X^{\bullet }\\=&[{\mathcal {X}}]\\=&[x_{1},\ldots ,x_{n}]\\=&(X,X^{\uparrow })\\=&(X~+\!\to \mathbb {K} )\\=&(X,(X\to \mathbb {K} ))\\\cong &(\mathbb {K} ^{n},(\mathbb {K} ^{n}\to \mathbb {K} ))\\=&(\mathbb {K} ^{n}~+\!\to \mathbb {K} )\\=&[\mathbb {K} ^{n}]\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\underline {X}}^{\bullet }\\=&[{\underline {\mathcal {X}}}]\\=&[{\underline {x}}_{1},\ldots ,{\underline {x}}_{n}]\\=&({\underline {X}},{\underline {X}}^{\uparrow })\\=&({\underline {X}}~+\!\to \mathbb {B} )\\=&({\underline {X}},({\underline {X}}\to \mathbb {B} ))\\\cong &(\mathbb {B} ^{n},(\mathbb {B} ^{n}\to \mathbb {B} ))\\=&(\mathbb {B} ^{n}~+\!\to \mathbb {B} )\\=&[\mathbb {B} ^{n}]\end{matrix}}}$ ${\displaystyle {\begin{matrix}A^{\bullet }\\=&[{\mathcal {A}}]\\=&[a_{1},\ldots ,a_{n}]\\=&(A,A^{\uparrow })\\=&(A~+\!\to \mathbb {B} )\\=&(A,(A\to \mathbb {B} ))\\\cong &(\mathbb {B} ^{n},(\mathbb {B} ^{n}\to \mathbb {B} ))\\=&(\mathbb {B} ^{n}~+\!\to \mathbb {B} )\\=&[\mathbb {B} ^{n}]\end{matrix}}}$ The left side of the Table collects mostly standard notation for an ${\displaystyle n}$-dimensional vector space over a field ${\displaystyle \mathbb {K} .}$ The right side of the table repeats the first elements of a 
notation that I sketched above, to be used in further developments of propositional calculus. (I plan to use this notation in the logical analysis of neural network systems.) The middle column of the table is designed as a transitional step from the case of an arbitrary field ${\displaystyle \mathbb {K} ,}$ with a special interest in the continuous line ${\displaystyle \mathbb {R} ,}$ to the qualitative and discrete situations that are instanced and typified by ${\displaystyle \mathbb {B} .}$ I now proceed to explain these concepts in more detail. The most important ideas developed in Table 5 are these: • The idea of a universe of discourse, which includes both a space of points and a space of maps on those points. • The idea of passing from a more complex universe to a simpler universe by a process of thresholding each dimension of variation down to a single bit of information. For the sake of concreteness, let us suppose that we start with a continuous ${\displaystyle n}$-dimensional vector space like ${\displaystyle X=\langle x_{1},\ldots ,x_{n}\rangle \cong \mathbb {R} ^{n}.}$ The coordinate system ${\displaystyle {\mathcal {X}}=\{x_{i}\}}$ is a set of maps ${\displaystyle x_{i}:\mathbb {R} ^{n}\to \mathbb {R} ,}$ also known as the coordinate projections. Given a "dataset" of points ${\displaystyle \mathbf {x} }$ in ${\displaystyle \mathbb {R} ^{n},}$ there are numerous ways of sensibly reducing the data down to one bit for each dimension. One strategy that is general enough for our present purposes is as follows. For each ${\displaystyle i}$ we choose an ${\displaystyle n}$-ary relation ${\displaystyle L_{i}}$ on ${\displaystyle \mathbb {R} ^{n},}$ that is, a subset of ${\displaystyle \mathbb {R} ^{n},}$ and then we define the ${\displaystyle i^{\mathrm {th} }}$ threshold map, or limen ${\displaystyle {\underline {x}}_{i}}$ as follows: ${\displaystyle {\underline {x}}_{i}:\mathbb {R} ^{n}\to \mathbb {B} \ {\text{such that:}}}$ ${\displaystyle {\begin{matrix}{\underline {x}}_{i}(\mathbf {x} )=1&{\text{if}}&\mathbf {x} \in L_{i},\\[4pt]{\underline {x}}_{i}(\mathbf {x} )=0&{\text{if}}&\mathbf {x} \not \in L_{i}.\end{matrix}}}$ In other notations that are sometimes used, the operator ${\displaystyle \chi (\ldots )}$ or the corner brackets ${\displaystyle \lceil \ldots \rceil }$ can be used to denote a characteristic function, that is, a mapping from statements to their truth values in ${\displaystyle \mathbb {B} .}$ Finally, it is not uncommon to use the name of the relation itself as a predicate that maps ${\displaystyle n}$-tuples into truth values. Thus we have the following notational variants of the above definition: ${\displaystyle {\begin{matrix}{\underline {x}}_{i}(\mathbf {x} )&=&\chi (\mathbf {x} \in L_{i})&=&\lceil \mathbf {x} \in L_{i}\rceil &=&L_{i}(\mathbf {x} ).\end{matrix}}}$ Notice that, as defined here, there need be no actual relation between the ${\displaystyle n}$-dimensional subsets ${\displaystyle \{L_{i}\}}$ and the coordinate axes corresponding to ${\displaystyle \{x_{i}\},}$ aside from the circumstance that the two sets have the same cardinality. In concrete cases, though, one usually has some reason for associating these "volumes" with these "lines", for instance, ${\displaystyle L_{i}}$ is bounded by some hyperplane that intersects the ${\displaystyle i^{\text{th}}}$ axis at a unique threshold value ${\displaystyle r_{i}\in \mathbb {R} .}$ Often, the hyperplane is chosen normal to the axis. In recognition of this motive, let us make the following convention. 
When the set ${\displaystyle L_{i}}$ has points on the ${\displaystyle i^{\text{th}}}$ axis, that is, points of the form ${\displaystyle (0,\ldots ,0,r_{i},0,\ldots ,0)}$ where only the ${\displaystyle x_{i}}$ coordinate is possibly non-zero, we may pick any one of these coordinate values as a parametric index of the relation. In this case we say that the indexing is real, otherwise the indexing is imaginary. For a knowledge based system ${\displaystyle X,}$ this should serve once again to mark the distinction between acquaintance and opinion. States of knowledge about the location of a system or about the distribution of a population of systems in a state space ${\displaystyle X=\mathbb {R} ^{n}}$ can now be expressed by taking the set ${\displaystyle {\underline {\mathcal {X}}}=\{{\underline {x}}_{i}\}}$ as a basis of logical features. In picturesque terms, one may think of the underscore and the subscript as combining to form a subtextual spelling for the ${\displaystyle i^{\text{th}}}$ threshold map. This can help to remind us that the threshold operator ${\displaystyle ({\underline {~}})_{i}}$ acts on ${\displaystyle \mathbf {x} }$ by setting up a kind of a “hurdle” for it. In this interpretation the coordinate proposition ${\displaystyle {\underline {x}}_{i}}$ asserts that the representative point ${\displaystyle \mathbf {x} }$ resides above the ${\displaystyle i^{\mathrm {th} }}$ threshold. Primitive assertions of the form ${\displaystyle {\underline {x}}_{i}(\mathbf {x} )}$ may then be negated and joined by means of propositional connectives in the usual ways to provide information about the state ${\displaystyle \mathbf {x} }$ of a contemplated system or a statistical ensemble of systems. Parentheses ${\displaystyle {\texttt {(}}\ldots {\texttt {)}}}$ may be used to indicate logical negation. Eventually one discovers the usefulness of the ${\displaystyle k}$-ary just one false operators of the form ${\displaystyle {\texttt {(}}a_{1}{\texttt {,}}\ldots {\texttt {,}}a_{k}{\texttt {)}},}$ as treated in earlier reports. This much tackle generates a space of points (cells, interpretations), ${\displaystyle {\underline {X}}\cong \mathbb {B} ^{n},}$ and a space of functions (regions, propositions), ${\displaystyle {\underline {X}}^{\uparrow }\cong (\mathbb {B} ^{n}\to \mathbb {B} ).}$ Together these form a new universe of discourse ${\displaystyle {\underline {X}}^{\bullet }}$ of the type ${\displaystyle (\mathbb {B} ^{n},(\mathbb {B} ^{n}\to \mathbb {B} )),}$ which we may abbreviate as ${\displaystyle \mathbb {B} ^{n}\ +\!\to \mathbb {B} }$ or most succinctly as ${\displaystyle [\mathbb {B} ^{n}].}$ The square brackets have been chosen to recall the rectangular frame of a venn diagram. In thinking about a universe of discourse it is a good idea to keep this picture in mind, graphically illustrating the links among the elementary cells ${\displaystyle {\underline {\mathbf {x} }},}$ the defining features ${\displaystyle {\underline {x}}_{i},}$ and the potential shadings ${\displaystyle f:{\underline {X}}\to \mathbb {B} }$ all at the same time, not to mention the arbitrariness of the way we choose to inscribe our distinctions in the medium of a continuous space. Finally, let ${\displaystyle X^{*}}$ denote the space of linear functions, ${\displaystyle (\ell :X\to \mathbb {K} ),}$ which has in the finite case the same dimensionality as ${\displaystyle X,}$ and let the same notation be extended across the Table. 
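A small sketch may make the thresholding step concrete. The particular choice below, where each L_i is the half-space lying above a hyperplane normal to the i-th axis at a threshold value r_i, is only one of the "numerous ways" mentioned earlier, and the Python form and names are illustrative assumptions.

```python
# Threshold (limen) maps: reduce a point of R^n to a cell of B^n, one bit per axis.
thresholds = [0.0, 1.5, -2.0]   # r_i: where the cutting hyperplane meets axis i

def limen(i, point):
    """Underscored x_i: 1 if the point lies above the i-th threshold, else 0."""
    return int(point[i] > thresholds[i])

def cell_of(point):
    """The cell of B^n into which the thresholded point falls."""
    return tuple(limen(i, point) for i in range(len(thresholds)))

print(cell_of((0.3, 1.2, -5.0)))   # (1, 0, 0)
print(cell_of((-1.0, 2.0, 0.0)))   # (0, 1, 1)
```

Propositions over the resulting logical features can then be formed by negating and combining the bits of the cell, in the manner described above.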
We have just gone through a lot of work, apparently doing nothing more substantial than spinning a complex spell of notational devices through a labyrinth of baffled spaces and baffling maps. The reason for doing this was to bind together and to constitute the intuitive concept of a universe of discourse into a coherent categorical object, the kind of thing, once grasped, that can be turned over in the mind and considered in all its manifold changes and facets. The effort invested in these preliminary measures is intended to pay off later, when we need to consider the state transformations and the time evolution of neural network systems. ### Tables of Propositional Forms To the scientist longing for non-quantitative techniques, then, mathematical logic brings hope. It provides explicit techniques for manipulating the most basic ingredients of discourse. — W.V. Quine, Mathematical Logic, [Qui, 7–8] To prepare for the next phase of discussion, Tables 6 and 7 collect and summarize all of the propositional forms on one and two variables. These propositional forms are represented over bases of boolean variables as complete sets of boolean-valued functions. Adjacent to their names and specifications are listed what are roughly the simplest expressions in the cactus language, the particular syntax for propositional calculus that I use in formal and computational contexts. For the sake of orientation, the English paraphrases and the more common notations are listed in the last two columns. As simple and circumscribed as these low-dimensional universes may appear to be, a careful exploration of their differential extensions will involve us in complexities sufficient to demand our attention for some time to come. Propositional forms on one variable correspond to boolean functions ${\displaystyle f:\mathbb {B} ^{1}\to \mathbb {B} .}$ In Table 6 these functions are listed in a variant form of truth table, one in which the axes of the usual arrangement are rotated through a right angle. Each function ${\displaystyle f_{i}}$ is indexed by the string of values that it takes on the points of the universe ${\displaystyle X^{\bullet }=[x]\cong \mathbb {B} ^{1}.}$ The binary index generated in this way is converted to its decimal equivalent and these are used as conventional names for the ${\displaystyle f_{i},}$ as shown in the first column of the Table. 
In their own right the ${\displaystyle 2^{1}}$ points of the universe ${\displaystyle X^{\bullet }}$ are coordinated as a space of type ${\displaystyle \mathbb {B} ^{1},}$ this in light of the universe ${\displaystyle X^{\bullet }}$ being a functional domain where the coordinate projection ${\displaystyle x}$ takes on its values in ${\displaystyle \mathbb {B} .}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{1}\\{\text{Decimal}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{2}\\{\text{Binary}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{3}\\{\text{Vector}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{4}\\{\text{Cactus}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{5}\\{\text{English}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{6}\\{\text{Ordinary}}\end{matrix}}}$ ${\displaystyle x\colon }$ ${\displaystyle 1~0}$ ${\displaystyle f_{0}}$ ${\displaystyle f_{00}}$ ${\displaystyle 0~0}$ ${\displaystyle {\texttt {(}}~{\texttt {)}}}$ ${\displaystyle {\text{false}}}$ ${\displaystyle 0}$ ${\displaystyle f_{1}}$ ${\displaystyle f_{01}}$ ${\displaystyle 0~1}$ ${\displaystyle {\texttt {(}}x{\texttt {)}}}$ ${\displaystyle {\text{not}}~x}$ ${\displaystyle \lnot x}$ ${\displaystyle f_{2}}$ ${\displaystyle f_{10}}$ ${\displaystyle 1~0}$ ${\displaystyle x}$ ${\displaystyle x}$ ${\displaystyle x}$ ${\displaystyle f_{3}}$ ${\displaystyle f_{11}}$ ${\displaystyle 1~1}$ ${\displaystyle {\texttt {((}}~{\texttt {))}}}$ ${\displaystyle {\text{true}}}$ ${\displaystyle 1}$ Propositional forms on two variables correspond to boolean functions ${\displaystyle f:\mathbb {B} ^{2}\to \mathbb {B} .}$ In Table 7 each function ${\displaystyle f_{i}}$ is indexed by the values that it takes on the points of the universe ${\displaystyle X^{\bullet }=[x,y]\cong \mathbb {B} ^{2}.}$ Converting the binary index thus generated to a decimal equivalent, we obtain the functional nicknames that are listed in the first column. 
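As a computational gloss on this indexing scheme (an illustrative sketch, assuming the same column ordering as the Tables, with the cell where both variables equal 1 listed first), one can build each function's value vector and read it as a binary numeral to recover the decimal nickname:

```python
# Points of B^2 listed in the order used by the Tables: (x, y) = (1,1), (1,0), (0,1), (0,0).
points = [(1, 1), (1, 0), (0, 1), (0, 0)]

def index_of(f):
    """Value vector of a proposition f : B^2 -> B, read as a binary numeral."""
    bits = "".join(str(f(x, y)) for x, y in points)
    return bits, int(bits, 2)

# A few familiar forms, written as 0/1-valued functions.
forms = {
    "x and y":      lambda x, y: x & y,
    "x or y":       lambda x, y: x | y,
    "x equal to y": lambda x, y: int(x == y),
    "not x":        lambda x, y: 1 - x,
}

for name, f in forms.items():
    bits, k = index_of(f)
    print(f"f_{k:<2}  f_{bits}  {name}")
# f_8   f_1000  x and y
# f_14  f_1110  x or y
# f_9   f_1001  x equal to y
# f_3   f_0011  not x
```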
The ${\displaystyle 2^{2}}$ points of the universe ${\displaystyle X^{\bullet }}$ are coordinated as a space of type ${\displaystyle \mathbb {B} ^{2},}$ as indicated under the heading of the Table, where the coordinate projections ${\displaystyle x}$ and ${\displaystyle y}$ run through the various combinations of their values in ${\displaystyle \mathbb {B} .}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{1}\\{\text{Decimal}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{2}\\{\text{Binary}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{3}\\{\text{Vector}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{4}\\{\text{Cactus}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{5}\\{\text{English}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{6}\\{\text{Ordinary}}\end{matrix}}}$ ${\displaystyle x\colon }$ ${\displaystyle 1~1~0~0}$ ${\displaystyle y\colon }$ ${\displaystyle 1~0~1~0}$ ${\displaystyle {\begin{matrix}f_{0}\\[4pt]f_{1}\\[4pt]f_{2}\\[4pt]f_{3}\\[4pt]f_{4}\\[4pt]f_{5}\\[4pt]f_{6}\\[4pt]f_{7}\end{matrix}}}$ ${\displaystyle {\begin{matrix}f_{0000}\\[4pt]f_{0001}\\[4pt]f_{0010}\\[4pt]f_{0011}\\[4pt]f_{0100}\\[4pt]f_{0101}\\[4pt]f_{0110}\\[4pt]f_{0111}\end{matrix}}}$ ${\displaystyle {\begin{matrix}0~0~0~0\\[4pt]0~0~0~1\\[4pt]0~0~1~0\\[4pt]0~0~1~1\\[4pt]0~1~0~0\\[4pt]0~1~0~1\\[4pt]0~1~1~0\\[4pt]0~1~1~1\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\texttt {(}}~{\texttt {)}}\\[4pt]{\texttt {(}}x{\texttt {)(}}y{\texttt {)}}\\[4pt]{\texttt {(}}x{\texttt {)}}~y~\\[4pt]{\texttt {(}}x{\texttt {)}}\\[4pt]~x~{\texttt {(}}y{\texttt {)}}\\[4pt]{\texttt {(}}y{\texttt {)}}\\[4pt]{\texttt {(}}x{\texttt {,}}~y{\texttt {)}}\\[4pt]{\texttt {(}}x~y{\texttt {)}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\text{false}}\\[4pt]{\text{neither}}~x~{\text{nor}}~y\\[4pt]y~{\text{without}}~x\\[4pt]{\text{not}}~x\\[4pt]x~{\text{without}}~y\\[4pt]{\text{not}}~y\\[4pt]x~{\text{not equal to}}~y\\[4pt]{\text{not both}}~x~{\text{and}}~y\end{matrix}}}$ ${\displaystyle {\begin{matrix}0\\[4pt]\lnot x\land \lnot y\\[4pt]\lnot x\land y\\[4pt]\lnot x\\[4pt]x\land \lnot y\\[4pt]\lnot y\\[4pt]x\neq y\\[4pt]\lnot x\lor \lnot y\end{matrix}}}$ ${\displaystyle {\begin{matrix}f_{8}\\[4pt]f_{9}\\[4pt]f_{10}\\[4pt]f_{11}\\[4pt]f_{12}\\[4pt]f_{13}\\[4pt]f_{14}\\[4pt]f_{15}\end{matrix}}}$ ${\displaystyle {\begin{matrix}f_{1000}\\[4pt]f_{1001}\\[4pt]f_{1010}\\[4pt]f_{1011}\\[4pt]f_{1100}\\[4pt]f_{1101}\\[4pt]f_{1110}\\[4pt]f_{1111}\end{matrix}}}$ ${\displaystyle {\begin{matrix}1~0~0~0\\[4pt]1~0~0~1\\[4pt]1~0~1~0\\[4pt]1~0~1~1\\[4pt]1~1~0~0\\[4pt]1~1~0~1\\[4pt]1~1~1~0\\[4pt]1~1~1~1\end{matrix}}}$ ${\displaystyle {\begin{matrix}x~y\\[4pt]{\texttt {((}}x{\texttt {,}}~y{\texttt {))}}\\[4pt]y\\[4pt]{\texttt {(}}x~{\texttt {(}}y{\texttt {))}}\\[4pt]x\\[4pt]{\texttt {((}}x{\texttt {)}}~y{\texttt {)}}\\[4pt]{\texttt {((}}x{\texttt {)(}}y{\texttt {))}}\\[4pt]{\texttt {((}}~{\texttt {))}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}x~{\text{and}}~y\\[4pt]x~{\text{equal to}}~y\\[4pt]y\\[4pt]{\text{not}}~x~{\text{without}}~y\\[4pt]x\\[4pt]{\text{not}}~y~{\text{without}}~x\\[4pt]x~{\text{or}}~y\\[4pt]{\text{true}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}x\land y\\[4pt]x=y\\[4pt]y\\[4pt]x\Rightarrow y\\[4pt]x\\[4pt]x\Leftarrow y\\[4pt]x\lor y\\[4pt]1\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{1}\\{\text{Decimal}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{2}\\{\text{Binary}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal 
{L}}_{3}\\{\text{Vector}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{4}\\{\text{Cactus}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{5}\\{\text{English}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\mathcal {L}}_{6}\\{\text{Ordinary}}\end{matrix}}}$ ${\displaystyle x\colon }$ ${\displaystyle 1~1~0~0}$ ${\displaystyle y\colon }$ ${\displaystyle 1~0~1~0}$ ${\displaystyle f_{0}}$ ${\displaystyle f_{0000}}$ ${\displaystyle 0~0~0~0}$ ${\displaystyle {\texttt {(}}~{\texttt {)}}}$ ${\displaystyle {\text{false}}}$ ${\displaystyle 0}$ ${\displaystyle {\begin{matrix}f_{1}\\[4pt]f_{2}\\[4pt]f_{4}\\[4pt]f_{8}\end{matrix}}}$ ${\displaystyle {\begin{matrix}f_{0001}\\[4pt]f_{0010}\\[4pt]f_{0100}\\[4pt]f_{1000}\end{matrix}}}$ ${\displaystyle {\begin{matrix}0~0~0~1\\[4pt]0~0~1~0\\[4pt]0~1~0~0\\[4pt]1~0~0~0\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\texttt {(}}x{\texttt {)(}}y{\texttt {)}}\\[4pt]{\texttt {(}}x{\texttt {)}}~y~\\[4pt]~x~{\texttt {(}}y{\texttt {)}}\\[4pt]~x~~y~\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\text{neither}}~x~{\text{nor}}~y\\[4pt]y~{\text{without}}~x\\[4pt]x~{\text{without}}~y\\[4pt]x~{\text{and}}~y\end{matrix}}}$ ${\displaystyle {\begin{matrix}\lnot x\land \lnot y\\[4pt]\lnot x\land y\\[4pt]x\land \lnot y\\[4pt]x\land y\end{matrix}}}$ ${\displaystyle {\begin{matrix}f_{3}\\[4pt]f_{12}\end{matrix}}}$ ${\displaystyle {\begin{matrix}f_{0011}\\[4pt]f_{1100}\end{matrix}}}$ ${\displaystyle {\begin{matrix}0~0~1~1\\[4pt]1~1~0~0\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\texttt {(}}x{\texttt {)}}\\[4pt]x\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\text{not}}~x\\[4pt]x\end{matrix}}}$ ${\displaystyle {\begin{matrix}\lnot x\\[4pt]x\end{matrix}}}$ ${\displaystyle {\begin{matrix}f_{6}\\[4pt]f_{9}\end{matrix}}}$ ${\displaystyle {\begin{matrix}f_{0110}\\[4pt]f_{1001}\end{matrix}}}$ ${\displaystyle {\begin{matrix}0~1~1~0\\[4pt]1~0~0~1\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\texttt {(}}x{\texttt {,}}y{\texttt {)}}\\[4pt]{\texttt {((}}x{\texttt {,}}y{\texttt {))}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}x~{\text{not equal to}}~y\\[4pt]x~{\text{equal to}}~y\end{matrix}}}$ ${\displaystyle {\begin{matrix}x\neq y\\[4pt]x=y\end{matrix}}}$ ${\displaystyle {\begin{matrix}f_{5}\\[4pt]f_{10}\end{matrix}}}$ ${\displaystyle {\begin{matrix}f_{0101}\\[4pt]f_{1010}\end{matrix}}}$ ${\displaystyle {\begin{matrix}0~1~0~1\\[4pt]1~0~1~0\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\texttt {(}}y{\texttt {)}}\\[4pt]y\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\text{not}}~y\\[4pt]y\end{matrix}}}$ ${\displaystyle {\begin{matrix}\lnot y\\[4pt]y\end{matrix}}}$ ${\displaystyle {\begin{matrix}f_{7}\\[4pt]f_{11}\\[4pt]f_{13}\\[4pt]f_{14}\end{matrix}}}$ ${\displaystyle {\begin{matrix}f_{0111}\\[4pt]f_{1011}\\[4pt]f_{1101}\\[4pt]f_{1110}\end{matrix}}}$ ${\displaystyle {\begin{matrix}0~1~1~1\\[4pt]1~0~1~1\\[4pt]1~1~0~1\\[4pt]1~1~1~0\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\texttt {(}}~x~~y~{\texttt {)}}\\[4pt]{\texttt {(}}~x~{\texttt {(}}y{\texttt {))}}\\[4pt]{\texttt {((}}x{\texttt {)}}~y~{\texttt {)}}\\[4pt]{\texttt {((}}x{\texttt {)(}}y{\texttt {))}}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\text{not both}}~x~{\text{and}}~y\\[4pt]{\text{not}}~x~{\text{without}}~y\\[4pt]{\text{not}}~y~{\text{without}}~x\\[4pt]x~{\text{or}}~y\end{matrix}}}$ ${\displaystyle {\begin{matrix}\lnot x\lor \lnot y\\[4pt]x\Rightarrow y\\[4pt]x\Leftarrow y\\[4pt]x\lor y\end{matrix}}}$ ${\displaystyle f_{15}}$ ${\displaystyle f_{1111}}$ ${\displaystyle 1~1~1~1}$ 
${\displaystyle {\texttt {((}}~{\texttt {))}}}$ ${\displaystyle {\text{true}}}$ ${\displaystyle 1}$ ## A Differential Extension of Propositional Calculus Fire over water: The image of the condition before transition. Thus the superior man is careful In the differentiation of things, So that each finds its place. — I Ching, Hexagram 64, [Wil, 249] This much preparation is enough to begin introducing my subject, if I excuse myself from giving full arguments for my definitional choices until some later stage. I am trying to develop a differential theory of qualitative equations that parallels the application of differential geometry to dynamic systems. The idea of a tangent vector is key to this work and a major goal is to find the right logical analogues of tangent spaces, bundles, and functors. The strategy is taken of looking for the simplest versions of these constructions that can be discovered within the realm of propositional calculus, so long as they serve to fill out the general theme. ### Differential Propositions : Qualitative Analogues of Differential Equations In order to define the differential extension of a universe of discourse ${\displaystyle [{\mathcal {A}}],}$ the initial alphabet ${\displaystyle {\mathcal {A}}}$ must be extended to include a collection of symbols for differential features, or basic changes that are capable of occurring in ${\displaystyle [{\mathcal {A}}].}$ Intuitively, these symbols may be construed as denoting primitive features of change, qualitative attributes of motion, or propositions about how things or points in the universe may be changing or moving with respect to the features that are noted in the initial alphabet. Therefore, let us define the corresponding differential alphabet or tangent alphabet as ${\displaystyle \mathrm {d} {\mathcal {A}}}$ ${\displaystyle =}$ ${\displaystyle \{\mathrm {d} a_{1},\ldots ,\mathrm {d} a_{n}\},}$ in principle just an arbitrary alphabet of symbols, disjoint from the initial alphabet ${\displaystyle {\mathcal {A}}}$ ${\displaystyle =}$ ${\displaystyle \{a_{1},\ldots ,a_{n}\},}$ that is intended to be interpreted in the way just indicated. It only remains to be understood that the precise interpretation of the symbols in ${\displaystyle \mathrm {d} {\mathcal {A}}}$ is often conceived to be changeable from point to point of the underlying space ${\displaystyle A.}$ Indeed, for all we know, the state space ${\displaystyle A}$ might well be the state space of a language interpreter, one that is concerned, among other things, with the idiomatic meanings of the dialect generated by ${\displaystyle {\mathcal {A}}}$ and ${\displaystyle \mathrm {d} {\mathcal {A}}.}$ The tangent space to ${\displaystyle A}$ at one of its points ${\displaystyle x,}$ sometimes written ${\displaystyle \mathrm {T} _{x}(A),}$ takes the form ${\displaystyle \mathrm {d} A}$ ${\displaystyle =}$ ${\displaystyle \langle \mathrm {d} {\mathcal {A}}\rangle }$ ${\displaystyle =}$ ${\displaystyle \langle \mathrm {d} a_{1},\ldots ,\mathrm {d} a_{n}\rangle .}$ Strictly speaking, the name cotangent space is probably more correct for this construction, but the fact that we take up spaces and their duals in pairs to form our universes of discourse allows our language to be pliable here. 
Proceeding as we did with the base space ${\displaystyle A,}$ the tangent space ${\displaystyle \mathrm {d} A}$ at a point of ${\displaystyle A}$ can be analyzed as a product of distinct and independent factors: ${\displaystyle \mathrm {d} A~=~\prod _{i=1}^{n}\mathrm {d} A_{i}~=~\mathrm {d} A_{1}\times \ldots \times \mathrm {d} A_{n}.}$ Here, ${\displaystyle \mathrm {d} A_{i}}$ is a set of two differential propositions, ${\displaystyle \mathrm {d} A_{i}=\{(\mathrm {d} a_{i}),\mathrm {d} a_{i}\},}$ where ${\displaystyle {\texttt {(}}\mathrm {d} a_{i}{\texttt {)}}}$ is a proposition with the logical value of ${\displaystyle {\text{not}}~\mathrm {d} a_{i}.}$ Each component ${\displaystyle \mathrm {d} A_{i}}$ has the type ${\displaystyle \mathbb {B} ,}$ operating under the ordered correspondence ${\displaystyle \{{\texttt {(}}\mathrm {d} a_{i}{\texttt {)}},\mathrm {d} a_{i}\}\cong \{0,1\}.}$ However, clarity is often served by acknowledging this differential usage with a superficially distinct type ${\displaystyle \mathbb {D} ,}$ whose intension may be indicated as follows: ${\displaystyle \mathbb {D} =\{{\texttt {(}}\mathrm {d} a_{i}{\texttt {)}},\mathrm {d} a_{i}\}=\{{\text{same}},{\text{different}}\}=\{{\text{stay}},{\text{change}}\}=\{{\text{stop}},{\text{step}}\}.}$ Viewed within a coordinate representation, spaces of type ${\displaystyle \mathbb {B} ^{n}}$ and ${\displaystyle \mathbb {D} ^{n}}$ may appear to be identical sets of binary vectors, but taking a view at this level of abstraction would be like ignoring the qualitative units and the diverse dimensions that distinguish position and momentum, or the different roles of quantity and impulse. ### An Interlude on the Path There would have been no beginnings:  instead, speech would proceed from me, while I stood in its path – a slender gap – the point of its possible disappearance. — Michel Foucault, The Discourse on Language, [Fou, 215] A sense of the relation between ${\displaystyle \mathbb {B} }$ and ${\displaystyle \mathbb {D} }$ may be obtained by considering the path classifier (or the equivalence class of curves) approach to tangent vectors.  
Consider a universe ${\displaystyle [{\mathcal {X}}].}$  Given the boolean value system, a path in the space ${\displaystyle X=\langle {\mathcal {X}}\rangle }$ is a map ${\displaystyle q:\mathbb {B} \to X.}$  In this context the set of paths ${\displaystyle (\mathbb {B} \to X)}$ is isomorphic to the cartesian square ${\displaystyle X^{2}=X\times X,}$ or the set of ordered pairs chosen from ${\displaystyle X.}$ We may analyze ${\displaystyle X^{2}=\{(u,v):u,v\in X\}}$ into two parts, specifically, the ordered pairs ${\displaystyle (u,v)}$ that lie on and off the diagonal: ${\displaystyle {\begin{matrix}X^{2}&=&\{(u,v):u=v\}&\cup &\{(u,v):u\neq v\}.\end{matrix}}}$ This partition may also be expressed in the following symbolic form: ${\displaystyle {\begin{matrix}X^{2}&\cong &\operatorname {diag} (X)&+&2{\binom {X}{2}}.\end{matrix}}}$ The separate terms of this formula are defined as follows: ${\displaystyle {\begin{matrix}\operatorname {diag} (X)&=&\{(x,x):x\in X\}.\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\binom {X}{k}}&=&X~{\text{choose}}~k&=&\{k{\text{-sets from}}~X\}.\end{matrix}}}$ Thus we have: ${\displaystyle {\begin{matrix}{\binom {X}{2}}&=&\{\{u,v\}:u,v\in X\}.\end{matrix}}}$ We may now use the features in ${\displaystyle \mathrm {d} {\mathcal {X}}=\{\mathrm {d} x_{i}\}=\{\mathrm {d} x_{1},\ldots ,\mathrm {d} x_{n}\}}$ to classify the paths of ${\displaystyle (\mathbb {B} \to X)}$ by way of the pairs in ${\displaystyle X^{2}.}$  If ${\displaystyle X\cong \mathbb {B} ^{n},}$ then a path ${\displaystyle q}$ in ${\displaystyle X}$ has the following form: ${\displaystyle {\begin{matrix}q:(\mathbb {B} \to \mathbb {B} ^{n})&\cong &\mathbb {B} ^{n}\times \mathbb {B} ^{n}&\cong &\mathbb {B} ^{2n}&\cong &(\mathbb {B} ^{2})^{n}.\end{matrix}}}$ Intuitively, we want to map this ${\displaystyle (\mathbb {B} ^{2})^{n}}$ onto ${\displaystyle \mathbb {D} ^{n}}$ by mapping each component ${\displaystyle \mathbb {B} ^{2}}$ onto a copy of ${\displaystyle \mathbb {D} .}$  But in the presenting context ${\displaystyle {}^{\backprime \backprime }\mathbb {D} {}^{\prime \prime }}$ is just a name associated with, or an incidental quality attributed to, coefficient values in ${\displaystyle \mathbb {B} }$ when they are attached to features in ${\displaystyle \mathrm {d} {\mathcal {X}}.}$ Taking these intentions into account, define ${\displaystyle \mathrm {d} x_{i}:X^{2}\to \mathbb {B} }$ in the following manner: ${\displaystyle {\begin{array}{lcrcl}\mathrm {d} x_{i}(u,v)&=&{\texttt {(}}~x_{i}(u)&{\texttt {,}}&x_{i}(v)~{\texttt {)}}\\&=&x_{i}(u)&+&x_{i}(v)\\&=&x_{i}(v)&-&x_{i}(u).\end{array}}}$ In the above transcription, the operator bracket of the form ${\displaystyle {\texttt {(}}\ldots {\texttt {,}}\ldots {\texttt {)}}}$ is a cactus lobe, in general signifying that just one of the arguments listed is false.  In the case of two arguments this is the same thing as saying that the arguments are not equal.  The plus sign signifies boolean addition, in the sense of addition in ${\displaystyle \mathrm {GF} (2),}$ and thus means the same thing in this context as the minus sign, in the sense of adding the additive inverse. 
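A minimal sketch, with illustrative names, computes dx_i for a pair of cells and collects the differential features that are true of the path between them:

```python
n = 3

def dx(i, u, v):
    """dx_i(u, v): 1 exactly when u and v differ in the i-th coordinate (addition in GF(2))."""
    return (u[i] + v[i]) % 2

def changes(u, v):
    """The set of differential features dx_i that are true of the path (u, v)."""
    return {i for i in range(n) if dx(i, u, v) == 1}

u, v = (0, 1, 1), (1, 1, 0)
print([dx(i, u, v) for i in range(n)])   # [1, 0, 1]
print(changes(u, v))                     # {0, 2}
print(changes(u, u))                     # set(): a path on the diagonal, nothing changes
```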
The above definition of ${\displaystyle \mathrm {d} x_{i}:X^{2}\to \mathbb {B} }$ is equivalent to defining ${\displaystyle \mathrm {d} x_{i}:(\mathbb {B} \to X)\to \mathbb {B} }$ in the following way: ${\displaystyle {\begin{array}{lcrcl}\mathrm {d} x_{i}(q)&=&{\texttt {(}}~x_{i}(q_{0})&{\texttt {,}}&x_{i}(q_{1})~{\texttt {)}}\\&=&x_{i}(q_{0})&+&x_{i}(q_{1})\\&=&x_{i}(q_{1})&-&x_{i}(q_{0}).\end{array}}}$ In this definition ${\displaystyle q_{b}=q(b),}$ for each ${\displaystyle b}$ in ${\displaystyle \mathbb {B} .}$  Thus, the proposition ${\displaystyle \mathrm {d} x_{i}}$ is true of the path ${\displaystyle q=(u,v)}$ exactly if the terms of ${\displaystyle q,}$ the endpoints ${\displaystyle u}$ and ${\displaystyle v,}$ lie on different sides of the question ${\displaystyle x_{i}.}$ The language of features in ${\displaystyle \langle \mathrm {d} {\mathcal {X}}\rangle ,}$ indeed the whole calculus of propositions in ${\displaystyle [\mathrm {d} {\mathcal {X}}],}$ may now be used to classify paths and sets of paths.  In other words, the paths can be taken as models of the propositions ${\displaystyle g:\mathrm {d} X\to \mathbb {B} .}$  For example, the paths corresponding to ${\displaystyle \mathrm {diag} (X)}$ fall under the description ${\displaystyle {\texttt {(}}\mathrm {d} x_{1}{\texttt {)}}\cdots {\texttt {(}}\mathrm {d} x_{n}{\texttt {)}},}$ which says that nothing changes against the backdrop of the coordinate frame ${\displaystyle \{x_{1},\ldots ,x_{n}\}.}$ Finally, a few words of explanation may be in order.  If this concept of a path appears to be described in a roundabout fashion, it is because I am trying to avoid using any assumption of vector space properties for the space ${\displaystyle X}$ that contains its range.  In many ways the treatment is still unsatisfactory, but improvements will have to wait for the introduction of substitution operators acting on singular propositions. ### The Extended Universe of Discourse At the moment of speaking, I would like to have perceived a nameless voice, long preceding me, leaving me merely to enmesh myself in it, taking up its cadence, and to lodge myself, when no one was looking, in its interstices as if it had paused an instant, in suspense, to beckon to me. 
— Michel Foucault, The Discourse on Language, [Fou, 215] Next we define the extended alphabet or bundled alphabet ${\displaystyle \mathrm {E} {\mathcal {A}}}$ as follows: ${\displaystyle {\begin{array}{lclcl}\mathrm {E} {\mathcal {A}}&=&{\mathcal {A}}\cup \mathrm {d} {\mathcal {A}}&=&\{a_{1},\ldots ,a_{n},\mathrm {d} a_{1},\ldots ,\mathrm {d} a_{n}\}.\end{array}}}$ This supplies enough material to construct the differential extension ${\displaystyle \mathrm {E} A,}$ or the tangent bundle over the initial space ${\displaystyle A,}$ in the following fashion: ${\displaystyle {\begin{array}{lcl}\mathrm {E} A&=&\langle \mathrm {E} {\mathcal {A}}\rangle \\[4pt]&=&\langle {\mathcal {A}}\cup \mathrm {d} {\mathcal {A}}\rangle \\[4pt]&=&\langle a_{1},\ldots ,a_{n},\mathrm {d} a_{1},\ldots ,\mathrm {d} a_{n}\rangle ,\end{array}}}$ and also: ${\displaystyle {\begin{array}{lcl}\mathrm {E} A&=&A\times \mathrm {d} A\\[4pt]&=&A_{1}\times \ldots \times A_{n}\times \mathrm {d} A_{1}\times \ldots \times \mathrm {d} A_{n}.\end{array}}}$ This gives ${\displaystyle \mathrm {E} A}$ the type ${\displaystyle \mathbb {B} ^{n}\times \mathbb {D} ^{n}.}$ Finally, the tangent universe ${\displaystyle \mathrm {E} A^{\bullet }=[\mathrm {E} {\mathcal {A}}]}$ is constituted from the totality of points and maps, or interpretations and propositions, that are based on the extended set of features ${\displaystyle \mathrm {E} {\mathcal {A}},}$ and this fact is summed up in the following notation: ${\displaystyle {\begin{array}{lclcl}\mathrm {E} A^{\bullet }&=&[\mathrm {E} {\mathcal {A}}]&=&[a_{1},\ldots ,a_{n},\mathrm {d} a_{1},\ldots ,\mathrm {d} a_{n}].\end{array}}}$ This gives the tangent universe ${\displaystyle \mathrm {E} A^{\bullet }}$ the type: ${\displaystyle {\begin{array}{lcl}(\mathbb {B} ^{n}\times \mathbb {D} ^{n}\ +\!\to \mathbb {B} )&=&(\mathbb {B} ^{n}\times \mathbb {D} ^{n},(\mathbb {B} ^{n}\times \mathbb {D} ^{n}\to \mathbb {B} )).\end{array}}}$ A proposition in the tangent universe ${\displaystyle [\mathrm {E} {\mathcal {A}}]}$ is called a differential proposition and forms the analogue of a system of differential equations, constraints, or relations in ordinary calculus. With these constructions, the differential extension ${\displaystyle \mathrm {E} A}$ and the space of differential propositions ${\displaystyle (\mathrm {E} A\to \mathbb {B} ),}$ we have arrived, in main outline, at one of the major subgoals of this study. Table 8 summarizes the concepts that have been introduced for working with differentially extended universes of discourse. 
${\displaystyle {\text{Symbol}}}$ ${\displaystyle {\text{Notation}}}$ ${\displaystyle {\text{Description}}}$ ${\displaystyle {\text{Type}}}$ ${\displaystyle \mathrm {d} {\mathfrak {A}}}$ ${\displaystyle \{{}^{\backprime \backprime }\mathrm {d} a_{1}{}^{\prime \prime },\ldots ,{}^{\backprime \backprime }\mathrm {d} a_{n}{}^{\prime \prime }\}}$ ${\displaystyle {\begin{matrix}{\text{Alphabet of}}\\[2pt]{\text{differential symbols}}\end{matrix}}}$ ${\displaystyle [n]=\mathbf {n} }$ ${\displaystyle \mathrm {d} {\mathcal {A}}}$ ${\displaystyle \{\mathrm {d} a_{1},\ldots ,\mathrm {d} a_{n}\}}$ ${\displaystyle {\begin{matrix}{\text{Basis of}}\\[2pt]{\text{differential features}}\end{matrix}}}$ ${\displaystyle [n]=\mathbf {n} }$ ${\displaystyle \mathrm {d} A_{i}}$ ${\displaystyle \{{\texttt {(}}\mathrm {d} a_{i}{\texttt {)}},\mathrm {d} a_{i}\}}$ ${\displaystyle {\text{Differential dimension}}~i}$ ${\displaystyle \mathbb {D} }$ ${\displaystyle \mathrm {d} A}$ ${\displaystyle {\begin{matrix}\langle \mathrm {d} {\mathcal {A}}\rangle \\[2pt]\langle \mathrm {d} a_{1},\ldots ,\mathrm {d} a_{n}\rangle \\[2pt]\{(\mathrm {d} a_{1},\ldots ,\mathrm {d} a_{n})\}\\[2pt]\mathrm {d} A_{1}\times \ldots \times \mathrm {d} A_{n}\\[2pt]\textstyle \prod _{i}\mathrm {d} A_{i}\end{matrix}}}$ ${\displaystyle {\begin{matrix}{\text{Tangent space at a point:}}\\[2pt]{\text{Set of changes, motions,}}\\[2pt]{\text{steps, tangent vectors}}\\[2pt]{\text{at a point}}\end{matrix}}}$ ${\displaystyle \mathbb {D} ^{n}}$ ${\displaystyle \mathrm {d} A^{*}}$ ${\displaystyle (\mathrm {hom} :\mathrm {d} A\to \mathbb {B} )}$ ${\displaystyle {\text{Linear functions on}}~\mathrm {d} A}$
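To close this summary, a small sketch (illustrative names and conventions only, not part of the text's apparatus) assembles the extended universe EA = A × dA as pairs consisting of a cell and a change vector, and evaluates a simple differential proposition over it:

```python
from itertools import product

n = 2
cells       = list(product((0, 1), repeat=n))   # A  ~ B^n: the a_i coordinates
change_vecs = list(product((0, 1), repeat=n))   # dA ~ D^n: the da_i coordinates
EA = [(a, da) for a in cells for da in change_vecs]   # E A = A x dA, type B^n x D^n

# A differential proposition g : EA -> B, e.g. "a_1 and da_1 and not da_2",
# read: a_1 holds now, is about to change, while a_2 stays put.
def g(point):
    a, da = point
    return int(a[0] == 1 and da[0] == 1 and da[1] == 0)

models = [p for p in EA if g(p) == 1]
print(len(EA))    # 16 points in the tangent universe over B^2
print(models)     # [((1, 0), (1, 0)), ((1, 1), (1, 0))]
```

Here g plays the role of a qualitative constraint on change, the analogue of a differential equation in the sense described above.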
https://math.stackexchange.com/questions/3189830/product-of-distribution-of-independent-random-variables
# Product of distribution of independent random variables

Suppose that $$X_1,\ldots,X_n$$ are independent random variables defined on some probability space $$(\Omega,\mathcal F,P)$$. Suppose that $$f:\mathbb{R}^n\to\mathbb R$$ is Borel measurable. I think it is true that $$\bigotimes_{i=1}^n (P\circ X_i^{-1})(f^{-1}(A))=P(f(X_1,\ldots,X_n)\in A),$$ for all $$A\in\mathcal{B}(\mathbb R)$$. Can you give a hint to prove that?

The independence tells us that $$P\circ\left(X_{1},\dots,X_{n}\right)^{-1}=\bigotimes_{i=1}^{n}P\circ X_{i}^{-1}.$$ Hence, for every $$A\in\mathcal{B}(\mathbb R)$$, \begin{aligned}\left(\bigotimes_{i=1}^{n}P\circ X_{i}^{-1}\right)\left(f^{-1}\left(A\right)\right) & =\left(P\circ\left(X_{1},\dots,X_{n}\right)^{-1}\right)\left(f^{-1}\left(A\right)\right)\\ & =\left(P\circ\left(X_{1},\dots,X_{n}\right)^{-1}\circ f^{-1}\right)\left(A\right)\\ & =\left(P\circ\left(f\left(X_{1},\dots,X_{n}\right)\right)^{-1}\right)\left(A\right)\\ & =P\left(f\left(X_{1},\dots,X_{n}\right)\in A\right) \end{aligned}
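As a side note, not part of the question or the proof, the identity can be sanity-checked numerically. In the sketch below the particular f, the set A, and the uniform distributions are arbitrary illustrative assumptions; the two Monte Carlo estimates should agree up to sampling error, since for independent coordinates both sides describe the same law.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200_000, 3

# Independent X_1, X_2, X_3 ~ Uniform(0, 1); f(x) = x_1 + x_2 * x_3; A = [0.5, 1.0].
X = rng.uniform(size=(m, n))
f_vals = X[:, 0] + X[:, 1] * X[:, 2]
lhs = np.mean((0.5 <= f_vals) & (f_vals <= 1.0))   # estimate of P(f(X_1,...,X_n) in A)

# Product-measure side: draw each coordinate from its own marginal law P o X_i^{-1}
# independently, then check whether f of the assembled tuple lands in A, i.e. in f^{-1}(A).
Y = np.column_stack([rng.uniform(size=m) for _ in range(n)])
g_vals = Y[:, 0] + Y[:, 1] * Y[:, 2]
rhs = np.mean((0.5 <= g_vals) & (g_vals <= 1.0))

print(lhs, rhs)   # the two estimates should agree up to Monte Carlo error
```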
http://codeforces.com/blog/entry/7037
By abacadaea, 7 years ago.

Hi Guys, Hope you enjoyed Round 174. :) I would like to add a few comments on some of the problems, beyond the editorial.

**div 2 A**

If you know some math, you can actually solve this problem in (!!!) You can show that the answer is φ(p − 1), where φ(n) is the number of positive integers i less than or equal to n with gcd(i, n) = 1. To prove this we first show that there is always at least one primitive root for all primes p. (This is a fairly well known result so I won't prove it here, but you can find many proofs online.) So now assume g is a primitive root. Then the set {g, g^2, ..., g^(p − 1)} is congruent to the set {1, 2, ..., p − 1}. Furthermore, it's not hard to show that g^i is a primitive root if and only if gcd(i, p − 1) = 1 (try it!), hence our formula φ(p − 1). φ(n) can be computed by getting the prime factors of n, since φ(n) = n · ∏ (1 − 1/q) over the distinct prime factors q of n, so this gives us our algorithm. :)

**div 1 D**

Here is a full solution to Codeforces #174 div 1 D. Let ν2(n) denote the exponent of the largest power of 2 that divides n. For example ν2(5) = 0, ν2(96) = 5. Let f(n) denote the largest odd factor of n. Note the following formula for the sum of an arithmetic series: a_1 + a_2 + ... + a_k = k · (a_1 + a_k) / 2, that is, (number of terms) × (average term).

I claim that the pair (x, y) is cool if and only if f(y) divides f(x) and one of the following is true:

1. ν2(x) + 1 = ν2(y)
2. ν2(y) = 0

This can be proven by casework on the parity of y. If y is odd, the average term of the arithmetic sequence is an integer, so f(y) = y divides f(x) and ν2(y) = 0. If y is even, the average is of the form 0.5·k where k is odd, so 2x = k·y; it follows that y divides 2x, so f(y) divides f(x), and furthermore ν2(x) + 1 = ν2(y).

From this observation it follows that for fixed ai, aj (i < j), we can construct a cool sequence ai = bi, b(i+1), ..., b(j−1), bj = aj if and only if f(aj) divides f(ai) and either ν2(ai) + j − i = ν2(aj) or ν2(aj) ≤ j − i − 1. Now that we have this observation, we can finish the problem using dynamic programming where the kth state is the maximum number of ai (i ≤ k) we can keep so that it is possible to make a1, ..., ak cool. Then the answer is just n − max(dp[1], dp[2], ..., dp[n]).

**div 1 E**

Here is a full solution to Codeforces 174 div 1 E. I find this problem beautiful. :) The first thing to note is that, if you interpret the problem as a graph, you can compute the answer if you have the degrees (i.e. number of wins) of every cow. Call three cows "unbalanced" if there is one cow that beats the other two. Note that any three cows are either unbalanced or balanced (there are no other configurations of three cows). Thus (number of balanced triples) = (n choose 3) − (number of unbalanced triples), so to count the number of balanced it suffices to count the number of unbalanced. But it is easy to show that the number of unbalanced triples is the sum over all cows i of (w_i choose 2), where w_i is the number of wins of cow i, so now we have reduced the problem to computing the number of wins for each cow. If we do this the dumb way, this is O(MN^2), still way too slow.

Sort the skill levels of the cows (the order of the si doesn't actually matter); s1 is the lowest skill. Now consider an n × n grid where the ith row and jth column of the grid is a 1 if the match between cow i and cow j is flipped. The grid is initially all zeros and Farmer John's query simply flips a rectangle of the form [a, b] × [a, b], and the outdegree (#wins) of cow i is just (Number of 1's in range [1, i − 1]) + (Number of 0's in range [i + 1, N]) = (Number of 1's in range [1, i − 1]) + (N − i − (Number of 1's in range [i + 1, N])). We can process these queries and compute outdegrees using a sweep line with a seg tree on the interval [1, N]. The seg tree needs to handle queries of the form:
Flip all numbers (0->1, 1->0) in a range [a, b]. 2. Query number of 1’s in a range [a, b]. Note that the seg tree needed to handle this is the same seg tree you need for problem ‘lites’ on USACO 2008 Gold http://tjsct.wikidot.com/usaco-nov08-gold. Once again, thanks for participating! • +90 » 7 years ago, # |   +7 Thanks a lot for this good editorial. » 7 years ago, # |   +5 Some of your latex isn't showing up formatted. » 7 years ago, # |   0 Clarification please for the following line in your editorial:"...the outdegree (#wins) of cow i is just (Number of 1’s in range [1,i — 1]) + (Number of 0’s in range [i + 1, N]) = (Number of 1’s in range [1,i — 1]) + (N — i — (Number of 1’s in range [i + 1, N]))..."I had it in my head that it should be the other way around (unless I'm misunderstanding your definitions). This is my reasoning: - There is an edge from cow i to j, i->j, i has outdegree +1, and j has indegree +1, iff cow i wins cow j in a match, with respect to the current configuration - You start off the n x n grid with all 0's (completely unflipped version), meaning that cow i wins cow j for all jiSo, breaking this down into 2 parts with reference from a particular cow i: - #1 for all jj in this case) - #0 for all j>i: original ordering of cow j wins cow i for all j>i applies hereTherefore, what I'm proposing you sum is the opposite: # of wins for cow i = # of out degrees from cow/node i = #0's for all cows j such that jiPlease let me know if this is indeed right, or else clarify my misunderstanding.Thanks
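For reference, here is a minimal sketch of such a segment tree in Python. It is not taken from the contest or the editorial (a contest solution would typically be an iterative C++ version); the class and method names are illustrative, and the sketch only shows the structure of the lazy "flip" flag for the two operations named above, range flip and range count of ones.

import sys
sys.setrecursionlimit(1 << 20)  # the recursive form needs headroom for large N

class FlipSegTree:
    # Segment tree over indices [0, n): range flip (0 <-> 1) and range count of ones.
    def __init__(self, n):
        self.n = n
        self.ones = [0] * (4 * n)      # number of ones inside each node's segment
        self.flip = [False] * (4 * n)  # lazy flag: children still need this flip

    def _apply(self, node, lo, hi):
        # Flip every value in the node's segment [lo, hi].
        self.ones[node] = (hi - lo + 1) - self.ones[node]
        self.flip[node] = not self.flip[node]

    def _push(self, node, lo, hi):
        # Propagate a pending flip down to the two children.
        if self.flip[node]:
            mid = (lo + hi) // 2
            self._apply(2 * node, lo, mid)
            self._apply(2 * node + 1, mid + 1, hi)
            self.flip[node] = False

    def update(self, a, b, node=1, lo=0, hi=None):
        # Flip all values in [a, b].
        if hi is None:
            hi = self.n - 1
        if b < lo or hi < a:
            return
        if a <= lo and hi <= b:
            self._apply(node, lo, hi)
            return
        self._push(node, lo, hi)
        mid = (lo + hi) // 2
        self.update(a, b, 2 * node, lo, mid)
        self.update(a, b, 2 * node + 1, mid + 1, hi)
        self.ones[node] = self.ones[2 * node] + self.ones[2 * node + 1]

    def query(self, a, b, node=1, lo=0, hi=None):
        # Return the number of ones in [a, b].
        if hi is None:
            hi = self.n - 1
        if b < lo or hi < a:
            return 0
        if a <= lo and hi <= b:
            return self.ones[node]
        self._push(node, lo, hi)
        mid = (lo + hi) // 2
        return (self.query(a, b, 2 * node, lo, mid)
                + self.query(a, b, 2 * node + 1, mid + 1, hi))

# Example: st = FlipSegTree(8); st.update(2, 5); st.query(0, 7) returns 4.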
2020-01-27 04:34:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8969544172286987, "perplexity": 615.399614602448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694176.67/warc/CC-MAIN-20200127020458-20200127050458-00129.warc.gz"}
https://tex.stackexchange.com/questions/495492/fallback-for-mathcal-unicode-characters-in-lualatex-with-proper-height
# Fallback for mathcal unicode characters in lualatex with proper height Very few monospace fonts support unicode characters like U+1D4AA (Mathematical Script Capital O). I'm using DejaVu Sans Mono, and I'd like to define a fallback. Right now I have this: \documentclass{article} \usepackage{fontspec} \usepackage{newunicodechar} \setmonofont{DejaVu Sans Mono}[Scale=MatchLowercase] \newunicodechar{𝒪}{$\mathcal{O}$} \begin{document} Aa\texttt{Aa𝒪A} \end{document} Which produces: I'd like the O to be the same size as the monospaced A (but keeping the height of the monospaced a the same as the body text a). I'm not seeing any OpenType font with the Computer Modern calligraphic letters. With Latin Modern Math you get a similar glyph. Anyway, I'm afraid you need to compute the scale factor yourself. \documentclass{article} \usepackage{fontspec} \usepackage{newunicodechar} \setmonofont{DejaVu Sans Mono}[Scale=MatchLowercase] \newfontface{\calligraphic}{Latin Modern Math}[Scale=0.85] \newunicodechar{𝒪}{{\normalfont\calligraphic 𝒪}} \begin{document} Aa\texttt{Aa𝒪A} \end{document} • @DavidCarlisle Can you elaborate why it's an issue in this case? I'd like to understand this. We're dealing with a true unicode character, not trying to make a font change. – ipswitch Jun 13 at 11:07 • @DavidCarlisle Ah, got it. Thanks! This is just a minimal example. I'm using it to typeset Julia code that has some unicode characters. – ipswitch Jun 13 at 16:28 \documentclass{article} \usepackage{unicode-math} \usepackage{newunicodechar} \setmonofont{DejaVu Sans Mono}[Scale=MatchLowercase] \setmathfont{XITS Math} \setmathfont[Scale=0.85,range="1D4AA]{XITS Math} \newunicodechar{𝒪}{$\symcal{O}$} \begin{document} Aa\texttt{Aa𝒪A}\tiny Aa\texttt{Aa𝒪A} \end{document} • That's definitely the desired effect! Does there happen to be a way to do it without using the unicode-math package? – ipswitch Jun 12 at 20:57 • Yes, you can define your own virtual font, but that is more work than using unicode-math which itself should already be loaded if someone uses xelatex or lualatex – Red-Cloud Jun 13 at 6:17
2019-06-18 01:17:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9067391157150269, "perplexity": 2940.2450699212905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998600.48/warc/CC-MAIN-20190618003227-20190618025227-00189.warc.gz"}
https://www.bogleheads.org/wiki/Bogleheads:Extension_test_page
Extension test page is an administrative page used to test the proper operation of extensions that have been installed on the wiki. no subcategories

## Code editor (Wiki admins only)
Edit MediaWiki:Print.css and toggle the editor toolbar.

## HTML test
This is not an extension, but it checks HTML functionality.

## Math
To purge a page with math formulas (refresh the server's cache), append ?action=purge&mathpurge=true to the URL. Suppose random variable X can take value x1 with probability p1, value x2 with probability p2, and so on, up to value xk with probability pk. Then the expectation of this random variable X is defined as ${\displaystyle \operatorname{E}[X]=x_{1}p_{1}+x_{2}p_{2}+\ldots+x_{k}p_{k}\;.}$ Since all probabilities pi add up to one: p1 + p2 + ... + pk = 1, the expected value can be viewed as the weighted average, with the pi's being the weights: ${\displaystyle \operatorname{E}[X]={\frac{x_{1}p_{1}+x_{2}p_{2}+\ldots+x_{k}p_{k}}{p_{1}+p_{2}+\ldots+p_{k}}}\;.}$

## Parser functions
{{#ifeq: 1e3 | 1000 | equal | not equal}} equal

Weekly Top 5 Papers – September 18th, 2017 (18 September 2017) 1. Misconceptions About Nudges by Cass Sunstein (Harvard Law School; Harvard University – Harvard Kennedy School (HKS)) This paper was written with a combination of bemusement and frustrat... Weekly Top 5 Papers – September 11th 2017 (11 September 2017) 1. A Brief Introduction to the Basics of Game Theory by Matthew O. Jackson (Stanford University – Department of Economics; Santa Fe Institute; Canadian Institute for Advanced Research (CIFA... Weekly Top 5 Papers – September 4th 2017 (04 September 2017) 1. Congressional Control of Presidential Pardons by Glenn Reynolds (University of Tennessee College of Law) One of the things that I love about SSRN is that it's possible to get an idea out... We tasted science and it was….delicious! (30 August 2017) You know what's great? Science. You know what's even better? Having the chance to share it with others. All you have to do is look at the back of an SSRN mug to know that we like to sha... Weekly Top 5 Papers – August 28th 2017 (28 August 2017) 1. A Brief Introduction to the Basics of Game Theory by Matthew O. Jackson (Stanford University – Department of Economics) 2. Who Falls for Fake News? The Roles of Analytic Thinking, Motivat...

## References
Test Citation[1]
1. This is a citation

## SyntaxHighlight_GeSHi
def quickSort(arr):
    less = []
    pivotList = []
    more = []
    if len(arr) <= 1:
        return arr
    else:
        pivot = arr[0]
        for x in arr:
            if x < pivot:
                less.append(x)
            elif x > pivot:
                more.append(x)
            else:
                pivotList.append(x)
        return quickSort(less) + pivotList + quickSort(more)
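As a quick companion to the expectation formula rendered above, here is a small Python sketch; the outcome values and probabilities are made up purely for illustration and are not part of the wiki page.

# Expected value of a discrete random variable: E[X] = x1*p1 + x2*p2 + ... + xk*pk
xs = [1, 2, 3, 4]          # possible values x1 .. xk (illustrative)
ps = [0.1, 0.2, 0.3, 0.4]  # their probabilities p1 .. pk, summing to 1

expectation = sum(x * p for x, p in zip(xs, ps))
print(expectation)  # 1*0.1 + 2*0.2 + 3*0.3 + 4*0.4 = 3.0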
2017-09-24 08:43:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33422863483428955, "perplexity": 4402.541629552084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689900.91/warc/CC-MAIN-20170924081752-20170924101752-00527.warc.gz"}
https://docs.microsoft.com/bg-bg/dotnet/standard/runtime-libraries-overview
# Runtime libraries overview

The .NET runtime, which is installed on a machine for use by framework-dependent apps, has an expansive standard set of class libraries, known as runtime libraries, framework libraries, or the base class library (BCL). In addition, there are extensions to the runtime libraries, provided in NuGet packages. These libraries provide implementations for many general and app-specific types, algorithms, and utility functionality.

## Runtime libraries

These libraries provide the foundational types and utility functionality and are the base of all other .NET class libraries. An example is the System.String class, which provides APIs for working with strings. Another example is the serialization libraries.

## Extensions to the runtime libraries

Some libraries are provided in NuGet packages rather than included in the runtime's shared framework. For example:

| Conceptual content | NuGet package |
| --- | --- |
| Configuration | Microsoft.Extensions.Configuration |
| Dependency injection | Microsoft.Extensions.DependencyInjection |
| File globbing | Microsoft.Extensions.FileSystemGlobbing |
| Generic Host | Microsoft.Extensions.Hosting |
| HTTP | Microsoft.Extensions.Http ¹ |
| Localization | Microsoft.Extensions.Localization |
| Logging | Microsoft.Extensions.Logging |

¹ For some target frameworks, including net6.0, these types are part of the shared framework and don't need to be installed separately.
2022-06-26 03:16:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27865660190582275, "perplexity": 10047.209550077756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036363.5/warc/CC-MAIN-20220626010644-20220626040644-00300.warc.gz"}
http://openstudy.com/updates/4eb65743e4b0dd9b35122e67
## anonymous 4 years ago What is the cause of gravity ? @Physics

1. alfie Two bodies are attracted to each other according to the following law: $F = G \frac{m_1 m_2}{r^2}$ Now, the Earth has a huge mass, so it attracts pretty much everything smaller on its surface (and not only there :)) So there we have it, gravity.

2. anonymous Mass, I guess; not sure though.

3. anonymous So I think that the cause of gravity is a magnetic property of the Earth that comes from the rotation of the Earth.

4. anonymous Gravity is different from magnetism; they are two separate phenomena with different characteristics. If you give a rotation to a magnetic field you do not get gravity. Gravity has only to do with mass or, if you want to use geometry in four dimensions: gravity is the bending of space-time.

5. anonymous The short answer is that scientists do not know what causes gravity. The long answer is as follows. Newton first found a mathematical model which accurately predicts how macroscopic bodies (i.e. rocks, people, planets, stars, etc.) interact with each other. This Newtonian model is called the universal law of gravitation. It states in words that the force between two bodies is the product of their masses divided by the square of the distance of their separation, times the gravitational constant. Newton's basic assumption was that time is a constant, meaning that regardless of the speed of travel the interval of one second remains the same. What Einstein proposed was to assume that time is not a constant. The results of that assumption led him to his General Theory of Relativity. This theory corrected experimental inconsistencies found in Newton's theory. It also predicted that gravity causes light to bend. Scientists are currently pursuing what is called the Higgs Boson, which, if the theory is correct and if it is found, may be what causes gravity. It is an area of active research, so your question of what causes gravity is a great question. I hope this was helpful.

6. anonymous It has nothing to do with magnetic properties of the Earth. As explained, we don't really KNOW what causes gravity. But we do know that in this universe, every single mass is attracted by a force to every single other mass, which is relative to the magnitude of the masses and the distance between them. We can't explain why this occurs, just that it does, and it follows the laws as set out by early Newtonian mechanics.
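To make the law quoted in the first answer concrete, here is a small Python sketch of Newton's formula; the constant and the example masses and radius are rounded textbook values, not part of the original discussion.

# Newton's law of universal gravitation: F = G * m1 * m2 / r**2
G = 6.674e-11        # gravitational constant, N·m^2/kg^2
m_earth = 5.972e24   # mass of the Earth, kg
m_person = 70.0      # mass of a person, kg
r = 6.371e6          # mean radius of the Earth, m

F = G * m_earth * m_person / r**2
print(F)  # about 687 N, i.e. roughly the person's weight at the surface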
2016-07-28 21:14:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6396881937980652, "perplexity": 416.27412775438745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828322.57/warc/CC-MAIN-20160723071028-00126-ip-10-185-27-174.ec2.internal.warc.gz"}
http://buildpress.pl/szybsza-budowa-keramzytem
# Faster construction with expanded clay (keramzyt)

Modern technologies in construction can today significantly speed up the laborious process of building a house, and it is precisely the avoidance of unnecessary delay, which usually brings higher construction costs and stress with it, that matters most. Among the solutions most commonly used today that make the work easier and faster are expanded clay (keramzyt) blocks. Building a house from keramzyt has more advantages than just fast erection. This material is particularly resistant to moisture: it does not soak up water, and consequently it will not be attacked by destructive fungi or mould. It also releases moisture efficiently, which keeps a pleasant climate inside the building. This characteristic of the material has its consequences: the house adapts very well to outside conditions. In summer it keeps a pleasant coolness within its four walls, while in winter it warms up and gives back heat. The material is also extremely resistant to weather conditions and to all kinds of living organisms, and it does not burn. It does not emit any chemical compounds or gases either, so it is completely safe for the environment. Light, durable and simple to transport, it also wins people over with its variety. These ready-made prefabricated units are extremely practical and easy to work with. They are an excellent choice not only for single-family houses, which can be erected in about 4 months thanks to the ready-made products, but also for multi-family and industrial buildings. What is extremely important for many investors, using keramzyt prefabricated elements means a significant reduction in construction costs. Their price is so attractive that many people who never even dreamed of building a house can afford this investment. The excellent quality parameters of keramzyt blocks make this material look like the future of construction: a construction industry that will certainly be even more dynamic and build at an even faster pace.
2022-08-19 02:15:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.193398118019104, "perplexity": 7702.692403784367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573540.20/warc/CC-MAIN-20220819005802-20220819035802-00176.warc.gz"}
https://www.whsmith.co.uk/products/asymptotic-behaviour-of-tame-harmonic-bundles-and-an-application-to-pure-twistor-d-modules-part-1-memoirs-of-the-american-mathematical-society/9780821839423
# Asymptotic Behaviour of Tame Harmonic Bundles and an Application to Pure Twistor D-Modules, Part 1 (Memoirs of the American Mathematical Society)

By: Takuro Mochizuki (author), Paperback, £91.95

### Description
The author studies the asymptotic behaviour of tame harmonic bundles. First he proves a local freeness of the prolongment of deformed holomorphic bundle by an increasing order. Then he obtains the polarized mixed twistor structure from the data on the divisors. As one of the applications, he obtains the norm estimate of holomorphic or flat sections by weight filtrations of the monodromies. As another application, the author establishes the correspondence of semisimple regular holonomic $D$-modules and polarizable pure imaginary pure twistor $D$-modules through tame pure imaginary harmonic bundles, which is a conjecture of C. Sabbah. Then the regular holonomic version of M. Kashiwara's conjecture follows from the results of Sabbah and the author.

### Contents
Introduction; Part 1. Preliminary: Preliminary; Preliminary for mixed twistor structure; Preliminary for filtrations; Some lemmas for generically splitted case; Model bundles; Part 2. Prolongation of Deformed Holomorphic Bundles: Harmonic bundles on a punctured disc; Harmonic bundles on a product of punctured discs; The KMS-structure of the space of the multi-valued flat sections; The induced regular $\lambda$-connection on $\Delta^n\times C^\ast$; Part 3. Limiting Mixed Twistor theorem and Some Consequence: The induced vector bundle over $\mathbb{P}^1$; Limiting mixed twistor theorem; Norm estimate; Bibliography; Index.

### Product Details
• ISBN13: 9780821839423
• Format: Paperback
• Number Of Pages: 324
• ID: 9780821839423
• ISBN10: 082183942X
2019-01-24 12:31:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8015618324279785, "perplexity": 3582.240403560492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584547882.77/warc/CC-MAIN-20190124121622-20190124143622-00549.warc.gz"}
http://metar.gr/forum/state-transition-diagram-markov-chain-a4ad73
4.2 Markov Chains at Equilibrium Assume a Markov chain in which the transition probabilities are not a function of time t or n,for the continuous-time or discrete-time cases, … [2] (b) Find the equilibrium distribution of X. For the above given example its Markov chain diagram will be: Transition Matrix. 14.1.2 Markov Model In the state-transition diagram, we actually make the following assumptions: Transition probabilities are stationary. There also has to be the same number of rows as columns. \begin{align*} The ijth en-try p(n) ij of the matrix P n gives the probability that the Markov chain, starting in state s i, … banded. The Markov chains to be discussed in this chapter are stochastic processes defined only at integer values of time, n = … If the Markov chain has N possible states, the matrix will be an N x N matrix, such that entry (I, J) is the probability of transitioning from state I to state J. Additionally, the transition matrix must be a stochastic matrix, a matrix whose entries in each row must add up to exactly 1. Markov Chains - 8 Absorbing States • If p kk=1 (that is, once the chain visits state k, it remains there forever), then we may want to know: the probability of absorption, denoted f ik • These probabilities are important because they provide A Markov chain (MC) is a state machine that has a discrete number of states, q 1, q 2, . Find an example of a transition matrix with no closed communicating classes. Chapter 17 Markov Chains 2. A state i is absorbing if f ig is a closed class. Instead they use a "transition matrix" to tally the transition probabilities. (b) Show that this Markov chain is regular. Lemma 2. Markov Chains have prolific usage in mathematics. 1. For example, if you made a Markov chain model of a baby's behavior, you might include "playing," "eating", "sleeping," and "crying" as states, which together with other behaviors could form a 'state space': a list of all possible states. Suppose the following matrix is the transition probability matrix associated with a Markov chain. • Consider the Markov chain • Draw its state transition diagram Markov Chains - 3 State Classification Example 1 !!!! " Suppose that ! A simple, two-state Markov chain is shown below. By definition \end{align*}. With this we have the following characterization of a continuous-time Markov chain: the amount of time spent in state i is an exponential distribution with mean v i.. when the process leaves state i it next enters state j with some probability, say P ij.. . In general, if a Markov chain has rstates, then p(2) ij = Xr k=1 p ikp kj: The following general theorem is easy to prove by using the above observation and induction. a. I have following dataframe with there states: angry, calm, and tired. Consider the continuous time Markov chain X = (X. 0 Don't forget to Like & Subscribe - It helps me to produce more content :) How to draw the State Transition Diagram of a Transitional Probability Matrix Beyond the matrix specification of the transition probabilities, it may also be helpful to visualize a Markov chain process using a transition diagram. You can customize the appearance of the graph by looking at the help file for Graph. Find the stationary distribution for this chain. The resulting state transition matrix P is Specify random transition probabilities between states within each weight. Thus, a transition matrix comes in handy pretty quickly, unless you want to draw a jungle gym Markov chain diagram. 
(a) Draw the transition diagram that corresponds to this transition matrix. &\quad=P(X_0=1) P(X_1=2|X_0=1)P(X_2=3|X_1=2) \quad (\textrm{by Markov property}) \\ Formally, a Markov chain is a probabilistic automaton. From a state diagram a transitional probability matrix can be formed (or Infinitesimal generator if it were a Continuous Markov chain). If we're at 'B' we could transition to 'A' or stay at 'B'. Show that every transition matrix on a nite state space has at least one closed communicating class. Specify random transition probabilities between states within each weight. The second sequence seems to jump around, while the first one (the real data) seems to have a "stickyness". Here's a few to work from as an example: ex1, ex2, ex3 or generate one randomly. In general, if a Markov chain has rstates, then p(2) ij = Xr k=1 p ikp kj: The following general theorem is easy to prove by using the above observation and induction. The transition diagram of a Markov chain X is a single weighted directed graph, where each vertex represents a state of the Markov chain and there is a directed edge from vertex j to vertex i if the transition probability p ij >0; this edge has the weight/probability of p ij. Definition. [2] (b) Find the equilibrium distribution of X. A Markov chain or its transition matrix P is called irreducible if its state space S forms a single communicating … &P(X_0=1,X_1=2,X_2=3) \\ If we know $P(X_0=1)=\frac{1}{3}$, find $P(X_0=1,X_1=2)$. The x vector will contain the population size at each time step. On the transition diagram, X t corresponds to which box we are in at stept. P(X_0=1,X_1=2) &=P(X_0=1) P(X_1=2|X_0=1)\\ Beyond the matrix specification of the transition probabilities, it may also be helpful to visualize a Markov chain process using a transition diagram. 0 1 Sunny 0 Rainy 1 p 1"p q 1"q # $% & ' (Weather Example: Estimation from Data • Estimate transition probabilities from data Weather data for 1 month … while the corresponding state transition diagram is shown in Fig. Is this chain irreducible? This means the number of cells grows quadratically as we add states to our Markov chain. The rows of the transition matrix must total to 1. The state-transition diagram of a Markov chain, portrayed in the following figure (a) represents a Markov chain as a directed graph where the states are embodied by the nodes or vertices of the graph; the transition between states is represented by a directed line, an edge, from the initial to the final state, The transition … t i} for a Markov chain are called (one-step) transition probabilities.If, for each i and j, P{X t 1 j X t i} P{X 1 j X 0 i}, for all t 1, 2, . P² gives us the probability of two time steps in the future. This next block of code reproduces the 5-state Drunkward’s walk example from section 11.2 which presents the fundamentals of absorbing Markov chains. = 0.5 and " = 0.7, then, #$ $% & = 0000.80.2 000.50.40.1 000.30.70 0.50.5000 0.40.6000 P • Which states are accessible from state 0? For a first-order Markov chain, the probability distribution of the next state can only depend on the current state. Let's import NumPy and matplotlib:2. to reach an absorbing state in a Markov chain. Figure 11.20 - A state transition diagram. For more explanations, visit the Explained Visually project homepage. 
States 0 and 1 are accessible from state 0 • Which states are accessible from state … Continuous time Markov Chains are used to represent population growth, epidemics, queueing models, reliability of mechanical systems, etc. I have the following code that draws a transition probability graph using the package heemod (for the matrix) and the package diagram (for drawing). Is this chain aperiodic? . A Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. You can also access a fullscreen version at setosa.io/markov. Chapter 8: Markov Chains A.A.Markov 1856-1922 8.1 Introduction So far, we have examined several stochastic processes using transition diagrams and First-Step Analysis. The dataframe below provides individual cases of transition of one state into another. In addition, on top of the state space, a Markov chain tells you the probabilitiy of hopping, or "transitioning," from one state to any other state---e.g., the chance that a baby currently playing will fall asleep in the next five minutes without crying first. Thus, when we sum over all the possible values of$k$, we should get one. We may see the state i after 1,2,3,4,5.. etc number of transition. , then the (one-step) transition probabilities are said to be stationary. Markov chains can be represented by a state diagram , a type of directed graph. Consider the Markov chain representing a simple discrete-time birth–death process whose state transition diagram is shown in Fig. Markov Chains - 1 Markov Chains (Part 5) Estimating Probabilities and Absorbing States ... • State Transition Diagram • Probability Transition Matrix Sun 0 Rain 1 p 1-q 1-p q ! The processes can be written as {X 0,X 1,X 2,...}, where X t is the state at timet. The colors occur because some of the states (1 and 2) are transient and some are absorbing (in this case, state 4). Thanks to all of you who support me on Patreon. In the hands of metereologists, ecologists, computer scientists, financial engineers and other people who need to model big phenomena, Markov chains can get to be quite large and powerful. Likewise, "S" state has 0.9 probability of staying put and a 0.1 chance of transitioning to the "R" state. Now we have a Markov chain described by a state transition diagram and a transition matrix P. The real gem of this Markov model is the transition matrix P. The reason for this is that the matrix itself predicts the next time step. Determine if the Markov chain has a unique steady-state distribution or not. State 2 is an absorbing state, therefore it is recurrent and it forms a second class C 2 = f2g. A Markov chain or its transition … 4.1. Below is the 1 has a cycle 232 of Below is the transition diagram for the 3×3 transition matrix given above. We will arrange the nodes in an equilateral triangle. 1. If we're at 'A' we could transition to 'B' or stay at 'A'. So your transition matrix will be 4x4, like so: ; For i ≠ j, the elements q ij are non-negative and describe the rate of the process transitions from state i to state j. 0.5 0.2 0.3 P= 0.0 0.1 0.9 0.0 0.0 1.0 In order to study the nature of the states of a Markov chain, a state transition diagram of the Markov chain is drawn. 
If some of the states are considered to be unavailable states for the system, then availability/reliability analysis can be performed for the system as a w… Specify uniform transitions between states … To build this model, we start out with the following pattern of rainy (R) and sunny (S) days: One way to simulate this weather would be to just say "Half of the days are rainy. De nition 4. (c) Find the long-term probability distribution for the state of the Markov chain… For example, we might want to check how frequently a new dam will overflow, which depends on the number of rainy days in a row. One use of Markov chains is to include real-world phenomena in computer simulations. $$P(X_4=3|X_3=2)=p_{23}=\frac{2}{3}.$$, By definition Markov Chain can be applied in speech recognition, statistical mechanics, queueing theory, economics, etc.$1 per month helps!! c. This simple calculation is called Markov chain. &\quad=P(X_0=1) P(X_1=2|X_0=1) P(X_2=3|X_1=2, X_0=1)\\ If the Markov chain reaches the state in a weight that is closest to the bar, then specify a high probability of transitioning to the bar. 1. When the Markov chain is in state "R", it has a 0.9 probability of staying put and a 0.1 chance of leaving for the "S" state. . We can write a probability mass function dependent on t to describe the probability that the M/M/1 queue is in a particular state at a given time. Of course, real modelers don't always draw out Markov chain diagrams. From the state diagram we observe that states 0 and 1 communicate and form the first class C 1 = f0;1g, whose states are recurrent. Figure 11.20 - A state transition diagram. Of course, real modelers don't always draw out Markov chain diagrams. A class in a Markov chain is a set of states that are all reacheable from each other. Markov Chain Diagram. We can minic this "stickyness" with a two-state Markov chain. Finally, if the process is in state 3, it remains in state 3 with probability 2/3, and moves to state 1 with probability 1/3. Example: Markov Chain For the State Transition Diagram of the Markov Chain, each transition is simply marked with the transition probability p 11 0 1 2 p 01 p 12 p 00 p 10 p 21 p 22 p 20 p 1 p p 0 00 01 02 p 10 1 p 11 1 1 p 12 1 2 2 p 20 1 2 p The igraph package can also be used to Markov chain diagrams, but I prefer the “drawn on a chalkboard” look of plotmat. and transitions to state 3 with probability 1/2. 0.6 0.3 0.1 P 0.8 0.2 0 For computer repair example, we have: 1 0 0 State-Transition Network (0.6) • Node for each state • Arc from node i to node j if pij > 0. Every state in the state space is included once as a row and again as a column, and each cell in the matrix tells you the probability of transitioning from its row's state to its column's state. Consider a Markov chain with three possible states $1$, $2$, and $3$ and the following transition … Before we close the final chapter, let’s discuss an extension of the Markov Chains that begins to transition from Probability to Inferential Statistics. For example, the algorithm Google uses to determine the order of search results, called PageRank, is a type of Markov chain. In this example we will be creating a diagram of a three-state Markov chain where all states are connected. [2] (c) Using resolvents, find Pc(X(t) = A) for t > 0. [2] (c) Using resolvents, find Pc(X(t) = A) for t > 0. \begin{align*} State Transition Diagram: A Markov chain is usually shown by a state transition diagram. :) https://www.patreon.com/patrickjmt !! 
The concept behind the Markov chain method is that given a system of states with transitions between them, the analysis will give the probability of being in a particular state at a particular time. &\quad=\frac{1}{3} \cdot \frac{1}{2} \cdot \frac{2}{3}\\ Figure 1: A transition diagram for the two-state Markov chain of the simple molecular switch example. Let X n denote Mark's mood on the n th day, then { X n , n = 0 , 1 , 2 , … } is a three-state Markov chain. See the answer Current State X Transition Matrix = Final State. • Consider the Markov chain • Draw its state transition diagram Markov Chains - 3 State Classification Example 1 !!!! Theorem 11.1 Let P be the transition matrix of a Markov chain. Markov chains, named after Andrey Markov, are mathematical systems that hop from one "state" (a situation or set of values) to another. Specify uniform transitions between states in the bar. Example: Markov Chain ! A certain three-state Markov chain has a transition probability matrix given by P = [ 0.4 0.5 0.1 ; 0.05 0.7 0.25 ; 0.05 0.5 0.45 ]. Description Sometimes we are interested in how a random variable changes over time. A visualization of the weather example The Model. Draw the state-transition diagram of the process. $$P(X_3=1|X_2=1)=p_{11}=\frac{1}{4}.$$ We can write From a state diagram a transitional probability matrix can be formed (or Infinitesimal generator if it were a Continuous Markov chain). &= \frac{1}{3} \cdot p_{12} \\ In this two state diagram, the probability of transitioning from any state to any other state is 0.5. This is how the Markov chain is represented on the system. Find the stationary distribution for this chain. In terms of transition diagrams, a state i has a period d if every edge sequence from i to i has a length which is a multiple of d. Example 6 For each of the states 2 and 4 of the Markov chain in Example 1 find its period and determine whether the state is periodic. The transition matrix text will turn red if the provided matrix isn't a valid transition matrix. , q n, and the transitions between states are nondeterministic, i.e., there is a probability of transiting from a state q i to another state q j: P(S t = q j | S t −1 = q i). A continuous-time process is called a continuous-time Markov chain … Example: Markov Chain For the State Transition Diagram of the Markov Chain, each transition is simply marked with the transition probability (three-state transition diagram labelled with the probabilities p_{00}, p_{01}, p_{02}, p_{10}, p_{11}, p_{12}, p_{20}, p_{21}, p_{22} omitted). Chapter 3 FINITE-STATE MARKOV CHAINS 3.1 Introduction The counting processes {N(t); t > 0} described in Section 2.1.1 have the property that N(t) changes at discrete instants of time, but is defined for all real t > 0.
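As an illustration of the two-state "sticky" weather chain described above (probability 0.9 of staying in the current state and 0.1 of switching), here is a short Python sketch; it is not part of the original page, and the state names and seed are only illustrative.

import random

random.seed(0)  # fixed seed so the sample path is reproducible

# Transition matrix as a nested dict: P[current][next] = probability
P = {"R": {"R": 0.9, "S": 0.1},
     "S": {"R": 0.1, "S": 0.9}}

def simulate(start, steps):
    # Sample a path of the chain, starting from the given state.
    state, path = start, [start]
    for _ in range(steps):
        state = random.choices(list(P[state]), weights=list(P[state].values()))[0]
        path.append(state)
    return path

print("".join(simulate("R", 30)))  # long runs of R's and S's show the stickiness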
2021-01-28 09:19:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9004129767417908, "perplexity": 556.7774397052042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704839214.97/warc/CC-MAIN-20210128071759-20210128101759-00321.warc.gz"}
https://lib.dr.iastate.edu/etd/15182/
Dissertation 2016 #### Degree Name Doctor of Philosophy Computer Science Computer Science Wensheng Zhang #### Abstract Cloud-based storage services have become popular nowadays. Due to their convenience and unprecedented cost-effectiveness, more and more individuals and organizations have utilized cloud storage servers to host their data. However, because of security and privacy concerns, not all data can be outsourced without reservation. The concerns are rooted in the users' loss of data control from their hands to the cloud servers' premises and the infeasibility for them to fully trust the cloud servers. The cloud servers can be compromised by hackers, and they themselves may not be fully trustworthy. As found by Islam et al. [Islam12], data encryption alone is not sufficient. The server is still able to infer private information from the user's *access pattern*. Furthermore, it is possible for an attacker to use the access pattern information to construct the data query and infer the plaintext of the data. Therefore, Oblivious RAMs (ORAM) have been proposed to allow a user to access the exported data while hiding the user's data access pattern. In recent years, interest in ORAM research has increased, and many ORAM constructions have been proposed to improve performance in terms of the communication cost between the user and the server, the storage costs at the server and the user, and the computational costs at the server and the user. However, the practicality of the existing ORAM constructions is still questionable. Firstly, in spite of the improvement in performance, the existing ORAM constructions still require either large bandwidth consumption or large storage capacity. Secondly, these ORAM constructions all assume a single-user mode, which has limited their application to more general, multi-user scenarios. In this dissertation, we aim to address the above limitations by proposing four new ORAM constructions: S-ORAM, which adopts piece-wise shuffling and segment-based query techniques to improve the performance of data shuffling and query by factoring block size into the design; KT-ORAM, which organizes the server storage as a $k$-ary tree with each node acting as a fully-functional PIR storage, and adopts a novel delayed eviction technique to optimize the eviction process; GP-ORAM, a general partition-based ORAM that can adapt the number of partitions to the available user-side storage and can outsource the index table to the server to reduce local storage consumption; and MU-ORAM, which can deal with stealthy privacy attacks in application scenarios where multiple users share a data set outsourced to a remote storage server and meanwhile want to protect each individual's data access pattern from being revealed to one another. We have rigorously quantified and proved the security strengths of these constructions and demonstrated their performance efficiency through detailed analysis. Jinsheng Zhang, 146 pages
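To make the access-pattern-hiding idea concrete, here is a toy "linear-scan" baseline: the client touches every stored block on every read, so the server sees an identical pattern regardless of which index is requested. This is only an illustration of the concept with O(N) cost per access — it is not any of the constructions in the dissertation, and the XOR masking is a stand-in for real encryption.

import os

BLOCK = 16  # fixed block size so every ciphertext looks the same

def _mask(key, i):
    # deterministic per-block keystream (illustrative only, not a real cipher)
    return bytes((key[j % len(key)] ^ i) & 0xFF for j in range(BLOCK))

def encrypt_store(key, blocks):
    """Pad each block to BLOCK bytes and XOR-mask it before upload."""
    return [bytes(a ^ b for a, b in zip(blk.ljust(BLOCK, b"\0"), _mask(key, i)))
            for i, blk in enumerate(blocks)]

def oblivious_read(key, server_blocks, want):
    """Read every block so the server cannot tell which one was wanted."""
    result = None
    for i, ct in enumerate(server_blocks):
        pt = bytes(a ^ b for a, b in zip(ct, _mask(key, i)))
        if i == want:
            result = pt
    return result

key = os.urandom(16)
server = encrypt_store(key, [b"alpha", b"beta", b"gamma"])
print(oblivious_read(key, server, 1))  # b'beta' padded with zero bytes

Practical ORAM schemes such as those proposed above exist precisely because this trivial baseline is far too expensive; they trade a small amount of client storage and shuffling work for sublinear per-access cost.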
2018-07-23 07:51:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2411241978406906, "perplexity": 1961.9768575250564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676595531.70/warc/CC-MAIN-20180723071245-20180723091245-00631.warc.gz"}
https://pypi.org/project/timebudget/
Stupidly-simple speed profiling tool for python

# timebudget

### A stupidly-simple tool to see where your time is going in Python programs

Trying to figure out where the time's going in your python code? Tired of writing elapsed = time.time() - start_time? You can find out with just a few lines of code after you pip install timebudget

## The simplest way

With just two lines of code (one is the import), you can see how long something takes...

    from timebudget import timebudget
    with timebudget("Loading and processing the file"):
        raw = open(filename,'rt').readlines()
        lines = [line.rstrip() for line in raw]

will print

    Loading and processing the file took 1.453sec

## Record times and print a report

To get a report on the total time from functions you care about, just annotate those functions:

    from timebudget import timebudget
    timebudget.set_quiet()  # don't show measurements as they happen
    timebudget.report_at_exit()  # Generate report when the program exits

    @timebudget  # Record how long this function takes
    def possibly_slow():
        ...

    @timebudget  # ... and this function too
    def should_be_fast():
        ...

And now when you run your program, you'll see how much time was spent in each annotated function:

    timebudget report...
    possibly_slow: 901.12ms for 3 calls
    should_be_fast: 61.35ms for 2 calls

Or instead of calling report_at_exit() you can manually call

    timebudget.report(reset=True)  # print out the report now, and reset the statistics

If you don't set reset=True then the statistics will accumulate into the next report.

You can also wrap specific blocks of code to be recorded in the report, and optionally override the default set_quiet choice for any block:

    with timebudget("load-file", quiet=False):
        text = open(filename,'rt').readlines()

## Percent of time in a loop

If you are doing something repeatedly, and want to know the percent of time doing different things, time the loop itself, and pass the name to report. That is, add a timebudget annotation or wrapper onto the thing which is happening repeatedly. Each time this method or code-block is entered will now be considered one "cycle" and your report will tell you what fraction of time things take within this cycle.

    @timebudget
    def outer_loop():
        if sometimes():
            possibly_slow()
        should_be_fast()
        should_be_fast()

    for _ in range(NUM_CYCLES):
        outer_loop()

    timebudget.report('outer_loop')

Then the report looks like:

    timebudget report per outer_loop cycle...
    outer_loop: 100.0% 440.79ms/cyc @ 1.0 calls/cyc
    possibly_slow: 40.9% 180.31ms/cyc @ 0.6 calls/cyc
    should_be_fast: 13.7% 60.19ms/cyc @ 2.0 calls/cyc

Here, the times in milliseconds are the totals (averages per cycle), not the average time per call. So in the above example, should_be_fast is taking about 30ms per call, but being called twice per loop. Similarly, possibly_slow is still about 300ms each time it's called, but it's only getting called on 60% of the cycles on average, so on average it's using 41% of the time in outer_loop or 180ms.

## Requirements

Needs Python 3.6 or higher. Other libraries are in requirements.txt and can be installed like

    pip install -r requirements.txt  # only needed for developing timebudget.

## Testing

To run tests:

    pytest

## Inspiration

This tool is inspired by TQDM, the awesome progress bar. TQDM is stupidly simple to add to your code, and just makes it better. I aspire to imitate that.
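For readers curious about the mechanism, here is a minimal sketch of how a timing decorator plus an at-exit report could be built. This is not timebudget's actual implementation — the function names and report format below are invented for illustration only.

import atexit
import time
from collections import defaultdict
from functools import wraps

# name -> [total_seconds, call_count]; a toy accumulator, not the library's internals
_totals = defaultdict(lambda: [0.0, 0])

def budget(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            stats = _totals[fn.__name__]
            stats[0] += time.perf_counter() - start
            stats[1] += 1
    return wrapper

def report():
    for name, (total, calls) in sorted(_totals.items(), key=lambda kv: -kv[1][0]):
        print(f"{name}: {total*1000:.2f}ms for {calls} calls")

atexit.register(report)  # print the summary when the program exits

@budget
def slow():
    time.sleep(0.05)

slow(); slow()  # report prints at interpreter exit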
2023-01-29 14:57:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20698347687721252, "perplexity": 6502.450905615596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499744.74/warc/CC-MAIN-20230129144110-20230129174110-00688.warc.gz"}
https://www.snapxam.com/solver?p=%5Cfrac%7Bdx%7D%7Bdy%7D%3Dy%2Bxy
# Solve the differential equation $\frac{dx}{dy}=y+xy$

## Step-by-step Solution

Problem to solve: $\frac{dx}{dy}=y+xy$

Factor the polynomial $y+xy$ by its GCF, $y$: $\frac{dx}{dy}=y\left(1+x\right)$. Group the terms of the differential equation, moving the terms containing $x$ to one side and the terms containing $y$ to the other side of the equality: $\frac{dx}{1+x}=y\,dy$. Integrate both sides of the differential equation: $\int\frac{1}{1+x}dx=\int y\,dy$. Solving the integral $\int\frac{1}{1+x}dx$ gives $\ln\left|1+x\right|$, and the right side gives $\frac{1}{2}y^2+C_0$, so $\ln\left|1+x\right|=\frac{1}{2}y^2+C_0$. Exponentiating and solving for $x$ yields

$x=C_1e^{\frac{1}{2}y^2}-1$
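The result can be double-checked independently with SymPy's dsolve; this snippet is not part of the solver page above, just a sanity check of the same ODE with x treated as a function of y.

import sympy as sp

y = sp.symbols('y')
x = sp.Function('x')

# dx/dy = y + x*y, i.e. a separable (and first-order linear) ODE in x(y)
ode = sp.Eq(x(y).diff(y), y + x(y)*y)
sol = sp.dsolve(ode, x(y))
print(sol)  # expect something like Eq(x(y), C1*exp(y**2/2) - 1)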
2023-02-04 11:52:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5664082169532776, "perplexity": 1293.1675318133227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500126.0/warc/CC-MAIN-20230204110651-20230204140651-00711.warc.gz"}
https://mathoverflow.net/questions/260931/minimal-dominating-subsets-in-infinite-graphs?noredirect=1
# Minimal dominating subsets in infinite graphs Let $G=(V,E)$ be any simple, undirected graph. A dominating set is a set $D\subseteq V$ such that for all $v\in V\setminus D$ there is $d\in D$ such that $\{v,d\}\in E$. Is there an infinite graph $G=(V,E)$ such that there is a dominating subset $D\subseteq V$ such that for any dominating subset $D_1\subseteq D$ there is a dominating subset $D_2\subseteq D_1$ with $D_2\neq D_1$? • $G=(\mathbb N,\le)$ ? – saf Jan 30 '17 at 16:02 • @saf That doesn't work, since any singleton is dominating, but the empty set is not. Jan 30 '17 at 16:10 Let $V(G)$ be the set of non-empty subsets of $\mathbb N$ and join two sets by an edge whenever they intersect. Let $D$ be the set of initial segments of $\mathbb N$. Then subsets of $D$ are dominating if and only if they are infinite.
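To make the definition in the question concrete, here is a small finite check of the dominating-set condition. The question itself concerns infinite graphs, so this is only a toy illustration of the definition, not of the infinite construction in the answer; the path graph used below is an arbitrary choice.

# D is dominating if every vertex outside D has a neighbour in D.
def is_dominating(adj, D):
    """adj: dict vertex -> set of neighbours; D: candidate dominating set."""
    return all(v in D or adj[v] & D for v in adj)

# A path on four vertices: 0 - 1 - 2 - 3
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}

print(is_dominating(adj, {1, 2}))  # True: vertices 0 and 3 each touch 1 or 2
print(is_dominating(adj, {0}))     # False: vertices 2 and 3 have no neighbour in {0}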
2021-12-03 07:46:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9274489283561707, "perplexity": 124.50128553567738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362605.52/warc/CC-MAIN-20211203060849-20211203090849-00289.warc.gz"}
https://julialintern.github.io/ClimateModel/
Basic Climate Modeling with a Simple Sinusoidal Regression

This study will look at CO2 measurements from ice core data over the long timescale of 120 - 800,000 years ago. This CO2 dataset perhaps extends back further than any other…

Data Source: noaa ice core data

Why CO2? It has long been established that carbon dioxide and temperature are strongly correlated. Although the intent of this blog series is to introduce a number of time series techniques, the nature of this data will inevitably open up some climate-related questions. Perhaps the first question to emerge is in reference to the controversy surrounding the significant lag of CO2 values behind temperature values: "How could CO2 levels affect global temperature when temperature values changed first?". Researchers recently discovered the answer to this question. Whether in your glass of champagne or in an ice core: CO2 bubbles will rise. http://www.scientificamerican.com/article/ice-core-data-help-solve/ Although this presents an additional challenge for climatologists, the lag issue is a fun exercise for modelers like us. We will study the relationship between temperature & CO2 further in this series, but for now we will focus on the relationship between CO2 and time.

Time Plot: Our first step will be to generate a time plot of our CO2 data vs time. This visualization will indicate the presence of important features such as trend, seasonality and outliers.

The Quaternary Ice Age: The periodic structure of our dataset is due to glacial cycles. As the plot illustrates, over the last 740,000 years there have been eight glacial cycles with each cycle lasting roughly 100,000 years. However, this data is just a piece of the entire Quaternary Period picture. The Quaternary ice age started approximately 2.5 million years ago. The QP will maintain its status as the current ice age as long as one permanent 'large' ice sheet exists. It's perhaps surprising that we are somehow still living within an ice age (and will continue to be) until the last cube of Antarctica has melted away. Back to our data..

Decomposition: Time series analysis can be concerned with decomposing the series into trend, seasonal variation and cyclical components. One method that deals with trend is the linear filter, which is used to smooth out local fluctuations and estimate the local mean (aka: moving average). This can be useful when a time series is strongly influenced by trend and seasonality. At first glance of our time plot, one could assume that trend wouldn't play a part in this analysis. But what we know about the waning current ice age is confirmed by the following moving average plot: CO2 is trending upwards. Agreed. This finding may seem obvious, but bear in mind that 99.9% of our data predates the Industrial Revolution, with our most recent data point dating to 1879! Alas, trend could play a role in this time series. What about seasonality? Seasonality is generally assumed to be annual in nature. However, seasonality is really just periodic data with a known or fixed period. In this case our fixed period happens to be 100,000 years. We can see from our first plot that there is a decent sinusoidal component. Can we 'idealize' the seasonal component of our model and assume that it can be represented by a simple sinusoidal model?
Wave forms with a more complicated structure (sinusoidal waves within sinusoidal waves) can be synthesized by using a series of sine and cosine functions whose frequencies are multiples of the fundamental seasonal frequency (in this case: multiples of $\pi/100,000$). To do a proper analysis, we want to know what other internal forces may be influencing our data. Just as with fiscal data, we know that quarterly, monthly and even daily fluctuations are informing the annual time series. However, understanding the internal forces at play within the QP ice age is a much harder problem. So hard that there is a wikipedia page attesting to its difficulty. The One Hundred Thousand Year Problem: essentially, scientists are not even sure where the 100,000-year periodic interval comes from.

A Simple Sinusoidal

So we see that the model can become as complex as we would like to make it. Perhaps a sound approach would be to start with the most basic model, and add complexity iteratively. If we sense that a given time series contains a periodic structure (a sinusoidal component) with a known frequency $\omega$ then we can apply the following model: (Eqn 1) $X_t=\mu +\alpha \cos(\omega t)+\beta \sin(\omega t) + Z_t$ Expected values for this model can be represented by $E(X)= A\theta$, where $A$ is the design matrix whose row for time $t$ is $[1,\ \cos(\omega t),\ \sin(\omega t)]$ and $\theta=(\mu,\alpha,\beta)^T$ is the parameter vector. (Eqn 2) $\hat{\theta} = (A^TA)^{-1}A^Tx$ Because our Eqn 1 model is linear in its parameters (alpha, beta and mu), it is a general linear model. Eqn 2 gives the least squares estimate of theta, which minimizes the sum of squared residuals $\sum_t \left(x_t-\mu-\alpha\cos(\omega t)-\beta\sin(\omega t)\right)^2$. With a little matrix manipulation we can solve for theta: $\theta = [ 230.94802743, 7.45602447, -3.18249489]$ where $\mu= 230.94802743, \alpha=7.45602447, \beta = -3.18249489$ And now we have the simple sinusoidal plot of E(x): What do you think? First of all, our most recent period seems to behave differently than the other periods. The approximated values for the most recent period look quite compressed. This is most likely due to the nature of the ice sheets from which the ice cores are drilled. Ice cores can be hundreds of meters long. As one can imagine, the deeper layers of the ice sheets are under tremendous pressure. Upper layers are under much less pressure; the ice is more porous and the air bubbles (where our precious CO2 data is contained) are much more likely to move around. As a result, we can expect more recent ice core data to be noisier, or at least to behave differently from the older, more compressed data. Because the first period is behaving like a completely different model, we should treat it as such. We will pull the first period out and study it separately. The result is the following: Now that we have overlaid the approximated values with the actual values, we see that we still have a lot of work to do. Please refer to part 2 for the next step in our model refinement process. Written on June 29, 2018
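A minimal sketch of the Eqn 2 estimate on synthetic data (the actual ice-core series is not reproduced here). The angular frequency is taken as 2π over the 100,000-year period, which is an assumption about the post's parametrization, and the "true" coefficients are set near the fitted values quoted above purely for illustration.

import numpy as np

omega = 2 * np.pi / 100_000                        # assumed 100,000-year cycle

t = np.linspace(0, 800_000, 2000)                  # synthetic sample times (years)
true_mu, true_alpha, true_beta = 230.9, 7.5, -3.2  # roughly the fitted values above
x = true_mu + true_alpha*np.cos(omega*t) + true_beta*np.sin(omega*t) \
    + np.random.default_rng(0).normal(0, 5, t.size)

# Design matrix A with rows [1, cos(w t), sin(w t)], then theta_hat = (A^T A)^-1 A^T x,
# computed here via numpy's least-squares solver.
A = np.column_stack([np.ones_like(t), np.cos(omega*t), np.sin(omega*t)])
theta, *_ = np.linalg.lstsq(A, x, rcond=None)

mu_hat, alpha_hat, beta_hat = theta
print(mu_hat, alpha_hat, beta_hat)  # should land near 230.9, 7.5, -3.2
fitted = A @ theta                  # E(X) = A @ theta, the simple sinusoidal fit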
2020-06-06 17:05:56
{"extraction_info": {"found_math": true, "script_math_tex": 8, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5779600143432617, "perplexity": 778.1133140053452}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348517506.81/warc/CC-MAIN-20200606155701-20200606185701-00564.warc.gz"}
https://labs.tib.eu/arxiv/?author=K.%20Ganz
• ### Analytic approximations, perturbation methods, and their applications (0710.5658)

Nov. 2, 2007 gr-qc

The paper summarizes the parallel session B3, "Analytic approximations, perturbation methods, and their applications", of the GR18 conference. The talks in the session reported notably recent advances in black hole perturbations and post-Newtonian approximations as applied to sources of gravitational waves.
2021-04-19 22:11:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.932844877243042, "perplexity": 3936.5920583434904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038917413.71/warc/CC-MAIN-20210419204416-20210419234416-00114.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/using-the-formula-for-deposit-creation-the-required-reserve-ratio-is-10-the-fed-purchases--q2980288
## How much will the money supply increase in the following scenarios?

Using the formula for deposit creation. The required reserve ratio is 10%. The Fed purchases $70 million in securities.
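The posted answers are paywalled, so here is only the standard textbook deposit-creation calculation (simple money multiplier = 1 / required reserve ratio), not the hidden Chegg answer.

# Simple deposit-creation formula: change in money supply = change in reserves / rr
reserve_ratio = 0.10
fed_purchase = 70_000_000          # $70 million open-market purchase adds reserves

money_multiplier = 1 / reserve_ratio
delta_money_supply = fed_purchase * money_multiplier

print(money_multiplier)       # 10.0
print(delta_money_supply)     # 700000000.0 -> money supply rises by $700 million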
2013-05-23 21:12:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5604361295700073, "perplexity": 8208.207235146685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703788336/warc/CC-MAIN-20130516112948-00030-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.acmicpc.net/problem/5758
Time limit: 2 seconds · Memory limit: 128 MB · Submissions: 0 · Accepted: 0 · Solvers: 0 · Success rate: 0.000%

## Problem

Pretty Networks Inc. is a company that builds some curious artifacts whose purpose is to transform a set of input values in a given way. The transformation is determined by what they call a p-network. The following picture shows an example of a p-network. In the general case, a p-network of order N and size M has N horizontal wires numbered 1, 2, . . . N, and M vertical strokes. Each stroke connects two consecutive wires. There are no two different strokes touching the same point of any wire, and there is no stroke touching the leftmost or rightmost point of any wire. The above example is a p-network of order 5 and size 9. The transformation determined by a p-network can be explained using a set of rules that govern the way a p-network should be traversed: 1. start at the leftmost point of one wire, and go to the right; 2. each time a stroke appears move to the connected wire, and keep going from left to right; 3. stop when the rightmost point of one wire is reached. If starting at wire i the traversing ends at wire j, we say that the p-network transforms i into j, and we denote this with i → j. In the above example the p-network determines the set of transformations {1 → 3, 2 → 5, 3 → 4, 4 → 1, 5 → 2}. Pretty Networks Inc. hired you to solve the following p-network design problem: given a number N and a set of transformations {1 → i1, 2 → i2, . . . N → iN}, decide if a p-network of order N can be built to accomplish them and, in this case, give one that does it. When there exists a solution with a certain size, in many cases there is another solution with a greater size. Scientists at Pretty Networks Inc. have stated that if there exists a solution for a p-network design problem, then there is a solution with size less than 4N^2. Therefore, they are interested only in solutions having a size below this bound.

## Input

The input has a certain number of p-network design problems. Each problem is described in just one line that contains the values N, i1, i2, . . . iN, separated by a single blank space. The value N is the order of the desired p-network, i.e., its number of wires (1 ≤ N ≤ 20). The values i1, i2, . . . iN represent that the p-network should determine the set of transformations {1 → i1, 2 → i2, . . . N → iN} (1 ≤ ij ≤ N, for each 1 ≤ j ≤ N). The input ends with a line in which N = 0; this line must not be processed as a p-network design problem.

## Output

For each p-network design problem in the input, the output must contain a single line. If the problem has no solution the line must be No solution. Otherwise the line must contain a description of any p-network (with N wires and less than 4N^2 strokes) that accomplishes the requested set of transformations. This description is given by a set of values M, s1, s2, . . . sM, where consecutive values are separated by a single blank space. The value M is the size of the p-network, i.e., its number of strokes. The values s1, s2, . . . sM describe the strokes of the p-network; it should be understood that the i-th stroke from left to right connects the wires si and 1 + si (1 ≤ i ≤ M). Notice that 0 ≤ M < 4N^2, while 1 ≤ si < N for each 1 ≤ i ≤ M.

## Sample Input

5 3 5 4 1 2
3 1 1 3
2 1 2
2 1 2
0

## Sample Output

9 1 3 2 4 1 3 2 3 4
No solution
0
2 1 1
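One possible approach (not necessarily the intended judge solution): the requested transformation is realizable iff it is a permutation of 1..N, and in that case bubble-sorting the target list while recording each adjacent swap position yields a valid stroke sequence of at most N(N-1)/2 < 4N^2 strokes. Any valid p-network is accepted, so this output will generally differ from the sample output.

import sys

def solve(n, target):
    # The map i -> target[i-1] must be a bijection, otherwise no p-network exists.
    if sorted(target) != list(range(1, n + 1)):
        return "No solution"
    t = target[:]          # t[w-1] = wire where starting wire w should end up
    strokes = []
    swapped = True
    while swapped:         # bubble sort; each swap at position w becomes a stroke
        swapped = False
        for w in range(n - 1):
            if t[w] > t[w + 1]:
                t[w], t[w + 1] = t[w + 1], t[w]
                strokes.append(w + 1)   # stroke between wires w+1 and w+2
                swapped = True
    return " ".join(map(str, [len(strokes)] + strokes))

for line in sys.stdin:
    nums = list(map(int, line.split()))
    if not nums:
        continue
    if nums[0] == 0:
        break
    print(solve(nums[0], nums[1:]))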
2016-10-28 12:21:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2946532666683197, "perplexity": 845.9042081934848}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988722459.85/warc/CC-MAIN-20161020183842-00213-ip-10-171-6-4.ec2.internal.warc.gz"}