https://mathoverflow.net/questions/285123/volume-of-k-x-for-a-weighted-projective-variety
# Volume of $-K_X$ for a weighted projective variety Let $X:=\mathbb P(a_0,a_1, \ldots, a_n)$ be a well formed weighted projective variety. Let $-K_X$ be its anticanonical divisor, then how to express its volume ${\rm vol}(-K_X)=(-K_X)^n$ in terms of $a_0,\ldots, a_n$? In principle, this could be computed by toric geometry, but the data seems too complicated to compute (especially to write the dual lattice). Besides, I was wondering if there is a general formula for the self-intersection $D^n$ of the torus-invariant divisor $D = \sum_{i=0}^n c_i D_i$? • For $n=1$: if you write the linear Hamiltonian $S^{1}$-action with weights $(a_{0},a_{1})$ on $\mathbb{C}^{2}$ then the Euler class of the associated bundle over the symplectic quotient (which is topologically a $2$-sphere) will have degree $-1/(a_{0}a_{1})$. Then in principle one can compute using the Duistermaat-Heckman theorem. Nov 3, 2017 at 10:58 • Let $X$ be $\text{Proj}\ k[x_0,\dots,x_n]$ with $\text{deg}(x_i)=a_i$. Let $a$ be $\text{lcm}(a_i)_i$, let $b_i$ equal $a/a_i,$ and let $b$ be $b_0\cdots b_n.$ Let $Y$ be $\text{Proj}\ k[y_0,\dots,y_n]$ with $\text{deg}(y_i) = a.$ Denote $f:X\to \mathbb{P}^n$ by $f^*y_i = x_i^{b_i}.$ Denote by $\mathcal{O}_X(1),$ resp. $\mathcal{O}_Y(1),$ the ample generator of the Picard group of the smooth locus $X^o$, resp. $Y^o=Y$. Then $f^*\mathcal{O}_Y(1)$ equals $\mathcal{O}_X(a).$ Thus, $(c_1(\mathcal{O}_X(1)))^n_X = b/a^n = 1/(a_0\cdots a_n),$ so $\text{vol}(X) = (a_0+\dots +a_n)^n/(a_0\cdots a_n).$ Nov 3, 2017 at 11:41 • @LiYutong, Jason Starr: I am sorry, this comment comes too late. However I hope it will be of some interest (for the community). Many computations in Jason Starr's answer below have been done in: Classes d'idéaux et groupe de Picard des fibrés projectifs tordus, researchgate.net/.../226119679, article in K-Theory 2(5):559-578, January 1989 (fibrés projectifs tordus = weighted projective bundles). Notice that it concerns weighted projective BUNDLES. In particular the smooth locus is well determined there. Unfortunately the work is done in French. Mar 16, 2018 at 19:57 I am just rewriting my comment above as an answer. Let $S=k[x_0,\dots,x_n]$ be the $\mathbb{Z}_{\geq 0}$-graded $k$-algebra with every $x_i$ homogeneous of degree $a_i.$ Denote by $X$ the associated projective $k$-scheme, $X =\text{Proj}\ S.$ Denote by $a$ the least common multiple of $(a_0,\dots,a_n).$ For every $i,$ denote by $b_i$ the integer such that $a_i\cdot b_i$ equals $a.$ Denote by $b$ the product $b_0\dots b_n.$ Let $R=k[y_0,\dots,y_n]$ be the $\mathbb{Z}_{\geq 0}$-graded $k$-algebra with every $y_i$ homogeneous of degree $a.$ Denote by $Y$ the associated projective $k$-scheme, $Y=\text{Proj}\ R.$ This is $k$-isomorphic to $\mathbb{P}^n_k.$ There is a unique homomorphism of $\mathbb{Z}_{\geq 0}$-graded $k$-algebras, $$f^*:R \to S, \ \ y_i\mapsto x_i^{b_i}.$$ The inverse image of the irrelevant (prime) ideal is primary for the irrelevant (prime) ideal. 
Thus, there is an induced morphism of $k$-schemes, $$f:X\to Y.$$ This morphism is finite, and it is flat over an open subscheme that includes the open $Y_*=D_+(y_0\dots y_n).$ In fact, because of the well-formedness hypothesis, the restriction of the morphism over $Y_*$ is naturally a torsor for the finite, flat, commutative group scheme $\Gamma=\mu_{b_0}\times \dots \times \mu_{b_n}$ acting by $$(\zeta_0,\dots,\zeta_n)\cdot [x_0,\dots,x_n] = [\zeta_0\cdot x_0,\dots, \zeta_n\cdot x_n].$$ Assume that $(a_0,\dots,a_n)$ is well-formed, i.e., the greatest common divisor of any $n$ of the $n+1$ weights equals $1$. In particular, this implies that the smooth locus $X^o$ of $X$ is a dense open subscheme whose complement has codimension $\geq 2$. Moreover, the Picard group of $X^o$ is generated by the ample invertible sheaf $\mathcal{O}_X(1)|_{X^o}$ that is the restriction of the rank $1$, reflexive, coherent sheaf $\mathcal{O}_X(1) = \widetilde{S[1]}.$ In particular, for every integer $d\geq 0,$ $$H^0(X^o,\mathcal{O}_X(d)|_{X^o}) = H^0(X,\mathcal{O}_X(d)) = S_d.$$ The Picard group of $Y$ is generated by an ample invertible sheaf $\mathcal{O}_Y(1)$ whose vector space of global sections is the free $k$-vector space with basis $y_0,\dots,y_n.$ Since $f^*(y_i)$ has degree $a,$ the pullback $f^*\mathcal{O}_Y(1)$ is an ample invertible sheaf on $X$ whose restriction to $X^o$ equals $\mathcal{O}_X(a)|_{X^o}.$ In particular, $f^*\mathcal{O}_Y(-(a_0+\dots+a_n))|_{X^o}$ is isomorphic to $\omega_{X^o/k}^{\otimes a}.$ Thus, the $n$-fold self-intersection on $X$ of $c_1(f^*\mathcal{O}_Y(a_0+\dots +a_n))$ equals $(a_0+\dots+a_n)^n$ times the $n$-fold self-intersection on $X$ of $f^*c_1(\mathcal{O}_Y(1)).$ The $n$-fold self-intersection on $Y$ of $c_1(\mathcal{O}_Y(1))$ is the unique class whose cap product with $[Y]$ equals the class of every $k$-point of $Y$. Thus, the $n$-fold self-intersection on $X$ of $f^*c_1(\mathcal{O}_Y(1))$ equals the class of every fiber of $f$ over any element of $Y_*.$ Since the morphism is a torsor for the finite, flat, commutative group scheme $\Gamma$ of length $b = b_0\cdots b_n,$ it follows that the $n$-fold self-intersection on $X$ of $f^*c_1(\mathcal{O}_Y(1))$ equals $b.$ Putting the pieces together, there is a unique invertible $\mathcal{O}_X$-module, $f^*\mathcal{O}_Y(a_0+\dots + a_n)$, whose restriction to $X^o$ equals $(\omega_{X^o/k}^\vee)^{\otimes a}.$ The $n$-fold self-intersection of $c_1(f^*\mathcal{O}_Y(a_0+\dots+a_n))$ equals $(a_0+\dots+a_n)^n b.$ Thus, considered as a rational number, the $n$-fold self-intersection of $c_1(\omega_{X/k}^\vee)$ equals $(a_0+\dots+a_n)^nb/a^n.$ Finally, using the fact that $b_i/a$ equals $1/a_i,$ this gives, $$\left( c_1(\omega_{X/k}^\vee) \right)^n_X = \frac{(a_0+\dots+a_n)^n}{a_0\cdots a_n}.$$ • Really great answer! Nov 3, 2017 at 20:45 • @Jason Starr Thank you so much for your detailed answer! I just want to double check if I understand correctly: suppose $D_i$ is the torus-invariant divisor associated to $x_i$; then it is $\mathbb Q$-linearly equivalent to $\mathcal{O}_X(a_i)$ (viewing the latter as a $\mathbb Q$-divisor), is that right? So combining this fact with your calculation above, one can get a formula for $(\sum c_iD_i)^n = \frac{(\sum c_ia_i)^n}{a_0\dots a_n}$. Nov 4, 2017 at 4:46 • Besides, I want to understand where the well-formedness property is used. The example in my mind is $\mathbb P(1,a,a)$ which is just $\mathbb{P}^2$. So which part of your argument breaks if I use $X:=\mathbb P(1,a,a)$? 
The complement of $X_0$ in $X$ is still of codimension $\geq 2$ because $X$ is normal. It seems that $f$ is still a finite morphism. Then where does it go wrong? Nov 4, 2017 at 4:51 • @LiYutong. For $X=\mathbb{P}(1,a,a),$ the generator of the Picard group of $X^o$ is not the restriction of $\mathcal{O}_X(1).$ In fact, $\mathcal{O}_X(1)$ is isomorphic to $\mathcal{O}_X.$ Also, $\omega_{X/k}$ is not isomorphic to $\mathcal{O}_X(-a_0-a_1-a_2).$ That holds only on the locus where the variables are well-formed, i.e., the open complement of the vanishing locus of the degree-$1$ variable. This is an open affine space whose complement is (the support of) an ample divisor. So computations on that open affine say little about the Picard group of $X.$ Nov 4, 2017 at 11:59 • @JasonStarr Sorry to keep asking... Suppose $X^0 \subseteq X$ is the smooth locus of a not necessarily well-formed WPS $X=\mathbb P(a_0,\cdots,a_n)$; its complement has codim $\geq 2$ because $X$ is normal. Let $D_i$ be the toric invariant divisor corresponding to the weight $a_i$. Let $f: X \to Y$ be the morphism you defined above. Then is it true that $aD_i|_{X^0} \sim_{\mathbb Q}a_i(f^*\mathcal O_{Y}(1))|_{X^0}$? If this holds, it seems that the argument can just go through without difficulty... Nov 5, 2017 at 2:38
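As a quick numerical sanity check of the final formula (a sketch of my own; the function name and the test weights are my choices, not part of the thread):

import math

def anticanonical_volume(weights):
    # vol(-K_X) = (a_0 + ... + a_n)^n / (a_0 * ... * a_n), with n = len(weights) - 1
    n = len(weights) - 1
    return sum(weights) ** n / math.prod(weights)

print(anticanonical_volume([1, 1, 1]))  # P^2: prints 9.0, matching (-K)^2 = 9
print(anticanonical_volume([1, 1, 2]))  # P(1,1,2): prints (1+1+2)^2 / 2 = 8.0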
2022-06-26 08:33:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9661815762519836, "perplexity": 117.76885691301506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103037649.11/warc/CC-MAIN-20220626071255-20220626101255-00258.warc.gz"}
http://math.stackexchange.com/questions/339988/how-do-i-find-tangent-on-the-unit-circle
# How do I find tangent on the unit circle? I know that $\sin=y$ and that $\cos=x$, so how do I find tangent? I've seen some diagrams where the tangent line touches the unit circle at $(1,0)$; I'm wondering how this is derived? EDIT: Explanations using similar triangles are most intuitive to me, so if things could be explained in those terms I'd be grateful! Thank you! - The tangent is orthogonal to the line which goes from the center to the point at which you want the tangent. As this point is $$(\cos(x),\sin(x))$$ the tangent will be $$(-\sin(x),\cos(x))\cdot t+(\cos(x),\sin(x))$$ You can easily visualize this using a CAS like Mathematica, taking as input Manipulate[ ParametricPlot[ { {Cos[Pi*t],Sin[Pi*t]}, {Cos[x], Sin[x]} + {-Sin[x], Cos[x]}*t }, {t, -1, 1}], {x, -Pi, Pi}] which gives you a slider where you can go through every angle. If you like I can make some pictures. A picture What I am essentially doing is first drawing the circle, then taking a point on the circle, which is $(\cos(x),\sin(x))$. From here I make a line in the direction of $(-\sin(x),\cos(x))$. This essentially is a line of the form $a t + b$, but here $a$ and $b$ are vectors. - Congrats for making sense of the question, +1. –  1015 Mar 24 '13 at 20:40 @DominicMichaelis I don't understand your answer; the tangent should have the form $y=at+b$ where $a$ and $b$ are real! –  Sami Ben Romdhane Mar 24 '13 at 20:48 that doesn't work in every case (for example, try to give the tangent at the point x=1 in that form). @SamiBenRomdhane –  Dominic Michaelis Mar 24 '13 at 20:52 Why is it that the tangent should be orthogonal? –  Assad Mar 24 '13 at 20:56 Also, how did you derive (−sin(x),cos(x))⋅t+(cos(x),sin(x))? I'm sorry, this is just a little confusing to me; a bit more explanation would help, thank you. –  Assad Mar 24 '13 at 20:57 Take a point $\,(\cos x_0\,,\,\sin x_0)\,$ on the unit circle, and assume $\,\cos x_0\cdot\sin x_0\neq 0\,$ (otherwise the question is almost trivial), so the slope of the radius in the circle to this point is $$\frac{\sin x_0}{\cos x_0}$$ Since the tangent line to the circle at the above point is perpendicular to the radius at that point, the tangent line must have slope $$-\frac{\cos x_0}{\sin x_0}$$ So now you have a point on the tangent line and its slope: calculate its formula. - When $\cos(x_0) \cdot \sin(x_0)=0$ this is not trivial; take $\cos(x_0)=1$ and try to give the tangent with your method –  Dominic Michaelis Mar 24 '13 at 20:53 No, when $\,\sin x_0=0\,$ then we're either at $\,(1,0)\,$ or at $\,(-1,0)\,$ and it's almost trivial: the tangent lines there are the vertical lines $\,x=1\,$ (resp. $\,x=-1\,$). My answer does not cover these cases; that's why I assumed what I did. –  DonAntonio Mar 24 '13 at 20:56 Edit: Personally I prefer all the other answers; here's another way to think about it if you want more ideas. If $x^2 + y^2 = 1$, then $2x + 2y \frac{dy}{dx} = 0$. Rearranging, $\frac{dy}{dx} = \frac{-x}{y}$. As you say, $\cos \theta = x$ and $\sin \theta = y$, giving you the slope of a tangent line as $- \cot \theta$. 
You can then use the equation: $$y - \sin \theta = - \cot \theta (x - \cos \theta)$$ - From articles $148$ and $150$ of The Elements of Coordinate Geometry by Loney, the equation of the tangent to the circle $x^2+y^2=a^2$ at $(x_1,y_1)$ is $$xx_1+yy_1=a^2$$ As you have already identified, any point on the circle can be written $(a\cos\theta,a\sin\theta)$ where $0\le \theta<2\pi$. So the equation of the tangent becomes $$xa\cos\theta+ya\sin\theta=a^2\implies x\cos\theta+y\sin\theta=a\text{ as }a\ne0$$ For the unit circle $a=1$, so the equation of the tangent becomes $x\cos\theta+y\sin\theta=1$
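As a quick numerical check that the two descriptions agree (a sketch of mine; the angle and parameter values are arbitrary choices): a point on the parametrized tangent line from the first answer should satisfy the final identity $x\cos\theta + y\sin\theta = 1$.

import math

theta, t = 0.7, 2.5  # arbitrary angle and line parameter
# Point on the tangent line: (cos t, sin t) + t * (-sin t, cos t)
x = math.cos(theta) - t * math.sin(theta)
y = math.sin(theta) + t * math.cos(theta)
# Prints 1.0 (up to floating-point error), since the cross terms in t cancel
print(x * math.cos(theta) + y * math.sin(theta))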
2014-04-20 08:50:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8118413686752319, "perplexity": 297.41243343390744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
https://discuss.codechef.com/t/chefbook-editorial/11127
CHEFBOOK - Editorial Testers: Mahbubul Hasan and Sunny Aggarwal Editorialist: Balajiganapathi Senthilnathan Russian Translator: Sergey Kulik Mandarin Translator: Minako Kojima DIFFICULTY: Hard PREREQUISITES: Linear Programming, Dual of Min-cost Max-flow PROBLEM: Given M integers L_{xy} (1 \le x \le N and 1 \le y \le N), we want to find N integers P_x (1 \le x \le N) and N integers Q_y (1 \le y \le N) such that for each of the M given integers, W_{xy} = L_{xy} + P_x - Q_y lies within the range [S_{xy}, T_{xy}]. If several solutions are possible, output the one with the least sum of all W_{xy}. EXPLANATION: Thanks to the author for the explanation. The problem can be reduced to a similar problem as below: Given a 2D array, some cells are empty and some cells contain a number L_{ij}. We can make two kinds of operation: 1. Increase all the non-empty numbers in some row i by P_i (P_i \ge 0) 2. Decrease all the non-empty numbers in some column j by Q_j (Q_j \ge 0) So for each cell (i,j) the new value will be W_{ij} = L_{ij} + P_i - Q_j. You also have two more 2D arrays S and T as the lower and upper bounds of the new array W, such that S_{ij} \le W_{ij} \le T_{ij}. Your target is to maximize the sum of the W_{ij}, i.e., maximize \sum (L_{ij} + P_i - Q_j). You also need to print P_i, Q_j. Solution: Let's formulate the primal. Let sizeRow(i) = the number of non-empty cells in row i; sizeCol(j) is defined similarly. So our objective function is: Maximize: sum(P_i * sizeRow(i)) - sum(Q_j * sizeCol(j)) [we are omitting the constant sum(L_{ij})] Constraints: S_{ij} \le L_{ij} + P_i - Q_j \le T_{ij}. Both of these boil down to: P_i - Q_j \le constant or Q_j - P_i \le constant. So our constraint system Ax \le B has one special property: every row of A has one +1 and one -1. Other than that, everything is normal. (A sketch of this primal LP appears after this editorial.) Now dual time. Objective function: minimize B*y [you will see in a moment that B acts as the cost of a flow network and y as the flow variables of the arcs] Constraints: A' * y \ge column_matrix[sizeRow(1), ...., -sizeCol(1), ...]. In A' we now have one +1 and one -1 in each column. So if you sum all the inequalities you get 0 on the left side; on the right side it is also 0. So all the \ge are actually =. Now consider all the columns of A' as directed edges (from, say, -1 to +1) and all the rows as vertices. Then you will find: sum(in flow) - sum(out flow) = const, which is the flow-conservation equation. If the right side is negative we have a demand (an arc to the sink), and if positive we have a supply (an arc from the source). We have the costs of those flows in the objective function. So finally we are to solve a min-cost max-flow problem. One might wonder why the constraints suddenly changed from \ge to = in the dual, and what the effect of such a change is in the primal. The primal-dual conversion table says it means the variable in the primal is unconstrained. Why is that so? Our operations were: increase a row, decrease a column. But the equality gives the liberty to decrease a row / increase a column too. Suppose you want to decrease a row using only the original operations: you can decrease all the columns and then increase the other rows. AUTHOR'S AND TESTER'S SOLUTIONS: This editorial is only for those who already know how to solve the problem, so please give a more detailed editorial or otherwise remove it... Please explain in detail so learners (beginners) like me can also try to understand.
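For readers who want to experiment with the primal before tackling the flow dual, the LP above is small enough to hand to an off-the-shelf solver. The following sketch is my own illustration, not the author's intended solution: scipy, the toy data, and all variable names are assumptions. It builds the objective and the two one-sided constraints P_i - Q_j <= T_ij - L_ij and Q_j - P_i <= L_ij - S_ij directly.

from scipy.optimize import linprog

# Toy instance (assumption): cells[(i, j)] = (L, S, T) for each non-empty cell.
cells = {(0, 0): (5, 4, 9), (0, 1): (3, 1, 6), (1, 1): (7, 6, 8)}
n_rows, n_cols = 2, 2
nv = n_rows + n_cols  # variables: P_0..P_{n_rows-1}, then Q_0..Q_{n_cols-1}

# Objective: maximize sum(P_i * sizeRow(i)) - sum(Q_j * sizeCol(j)).
# linprog minimizes, so we negate the coefficients.
c = [0.0] * nv
for (i, j) in cells:
    c[i] -= 1.0            # each cell in row i adds 1 to sizeRow(i)
    c[n_rows + j] += 1.0   # each cell in column j adds 1 to sizeCol(j)

# Constraints S <= L + P_i - Q_j <= T, rewritten as two "<=" rows per cell.
A_ub, b_ub = [], []
for (i, j), (L, S, T) in cells.items():
    row = [0.0] * nv
    row[i], row[n_rows + j] = 1.0, -1.0   # P_i - Q_j <= T - L
    A_ub.append(row)
    b_ub.append(T - L)
    A_ub.append([-v for v in row])        # Q_j - P_i <= L - S
    b_ub.append(L - S)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * nv)
print(res.x)  # optimal P_0, P_1, Q_0, Q_1 for the toy instance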
2023-03-23 05:30:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.804265022277832, "perplexity": 2716.1450271979156}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944996.49/warc/CC-MAIN-20230323034459-20230323064459-00630.warc.gz"}
http://medical-dictionary.thefreedictionary.com/A+priori+distribution
prior probability (redirected from A priori distribution) Also found in: Dictionary. pri·or prob·a·bil·i·ty the best rational assessment of the probability of an outcome on the basis of established knowledge before the present experiment is performed. For instance, the prior probability of the daughter of a carrier of hemophilia being herself a carrier of hemophilia is 1/2. But if the daughter already has an affected son, the posterior probability that she is a carrier is unity, whereas if she has a normal child, the posterior probability that she is a carrier is 1/3. See: Bayes theorem. prevalence Epidemiology (1) The number of people with a specific condition or attribute at a specified time divided by the total number of people in the population. (2) The number or proportion of cases, events or conditions in a given population. Statistics A term defined in the context of a 4-cell diagnostic matrix (2 X 2 table) as the number of people with a disease, X, relative to a population. Veterinary medicine (1) A clinical estimate of the probability that an animal has a given disease, based on current knowledge (e.g., by history or physical exam) before diagnostic testing. (2) As defined in a population, the probability at a specific point in time that an animal randomly selected from a group will have a particular condition, which is equivalent to the proportion of individuals in the group that have the disease. Group prevalence is calculated by dividing the number of individuals in a group that have a disease by the total number of individuals in the group at risk of the disease. Prevalence is a good measure of the amount of a chronic, low-mortality disease in a population, but not of the amount of short-duration or high-fatality disease. Prevalence is often established by cross-sectional surveys. prior probability Decision making The likelihood that something may occur or be associated with an event based on its prevalence in a particular situation. See Medical mistake, Representative heuristic. prior probability, n the extent of belief held by a patient and practitioner in the ability of a specific therapeutic approach to produce a positive outcome before treatment begins. This level of belief should be taken into consideration by the patient and practitioner to make a decision as to whether the treatment should be used or to permit the therapy to continue. probability the basis of statistics. The relative frequency of occurrence of a specific event as the outcome of an experiment when the experiment is conducted randomly on very many occasions. The probability of the event occurring is the number of times it did occur divided by the number of times that it could have occurred. Defined as: $$p=\frac{x}{x+y}$$ where p = probability, x = positive outcomes, y = negative outcomes. prior probability estimation of the probability that a particular phenomenon or character will appear before putting the patient to the test, e.g. testing the probable productivity of a patient by testing its forebears. subjective probability the measure of the assessor's belief in the probability of a proposition being correct.
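As a worked illustration of the hemophilia numbers in the first entry (my arithmetic via Bayes' theorem, not part of the dictionary text): a carrier mother transmits the gene to a son with probability 1/2, so for a daughter with prior carrier probability 1/2 who has one unaffected son,

$$P(\text{carrier}\mid\text{normal son}) = \frac{P(\text{normal son}\mid\text{carrier})\,P(\text{carrier})}{P(\text{normal son})} = \frac{\tfrac12\cdot\tfrac12}{\tfrac12\cdot\tfrac12 + 1\cdot\tfrac12} = \frac13,$$

which matches the posterior probability of 1/3 quoted above.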
2019-03-20 17:10:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7459347248077393, "perplexity": 607.2326153200115}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202450.64/warc/CC-MAIN-20190320170159-20190320192159-00130.warc.gz"}
https://www.physicsforums.com/threads/stokes-equation-help.251286/
# Stokes equation help 1. Aug 21, 2008 ### supercali 1. The problem statement, all variables and given/known data let F be the vector field: $$\vec F = \cos (xyz)\hat j + (\cos (xyz) - 2x)\hat k$$ let L be the curve of intersection of the cylinder $$(x - 1)^2 + (y - 2)^2 = 4$$ and the plane y+z=3/2. Calculate: $$\left| {\int {\vec F \cdot d\vec r} } \right|$$ 2. Relevant equations in order to solve this I thought of using Stokes' theorem, because the normal to the plane is $$\frac{1}{{\sqrt 2 }}(0,1,1)$$ thus giving me $$\oint \vec F \cdot d\vec r=\iint \operatorname{curl}(\vec F)\cdot \vec n\, dS=\iint \frac{2}{\sqrt{2}}\sin(xyz)\, dS$$ I tried to parametrize x, y and z as x= rcos(t)+1, y=rsin(t)+2, z=1/2-rsin(t), but it won't work. Last edited by a moderator: Aug 21, 2008 2. Aug 21, 2008 ### NoMoreExams Re: stokes Would x = 1 + 2 cos(t), y = 2 + 2 sin(t) and z = -1/2 - 2 sin(t) do the trick? 3. Aug 21, 2008 ### supercali Re: stokes I wonder if that is allowed, given we have to do a multiple integral needing 2 variables 4. Aug 21, 2008 ### NoMoreExams Re: stokes Why wouldn't you just use Green's? 5. Aug 21, 2008 ### supercali Re: stokes using Green's or Stokes' is the same thing; Green's is just a special case of Stokes', and if you use it you are still stuck with that sin(xyz)
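As an addendum, a direct numerical cross-check of the line integral (a sketch of mine, not from the original posts; it assumes NoMoreExams' parametrization). Note that along the curve z = 3/2 - y, so dz = -dy, and the cosine terms in F·dr cancel: Fy dy + Fz dz = cos(xyz)(dy + dz) - 2x dz = 2x dy. The sin(xyz) the thread worries about never survives the restriction to the curve.

from math import cos, sin, pi
from scipy.integrate import quad

def integrand(t):
    # NoMoreExams' parametrization of the curve
    x = 1 + 2 * cos(t)
    y = 2 + 2 * sin(t)
    z = 1.5 - y                 # plane y + z = 3/2, i.e. z = -1/2 - 2 sin(t)
    dy = 2 * cos(t)
    dz = -dy                    # dz = -dy along the plane
    Fy = cos(x * y * z)
    Fz = cos(x * y * z) - 2 * x
    return Fy * dy + Fz * dz    # F . dr; F has no i-component, so dx never enters

val, _ = quad(integrand, 0, 2 * pi)
print(abs(val))                 # prints approximately 25.13, i.e. 8*pi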
2017-11-21 23:58:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2632310688495636, "perplexity": 6466.703677217378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806438.11/warc/CC-MAIN-20171121223707-20171122003707-00271.warc.gz"}
https://cs.stackexchange.com/questions/40810/what-can-i-deduce-if-an-np-complete-problem-is-reducible-to-its-complement
# What can I deduce if an NP-complete problem is reducible to its complement? Let's say I have a decision problem $D$ and its complement $D'$. I know $D$ is poly-time reducible to $D'$ (its complement). Furthermore, I know $D$ is NP-complete. What is the strongest statement I could possibly make about this kind of relationship? • Given a decision problem X, its complement is the collection of all instances s such that s is not in X. Slide 5 on this, courses.engr.illinois.edu/cs473/fa2010/Lectures/lecture24.pdf – Teodorico Levoff Mar 28 '15 at 18:26 • @Juho The complement of a decision problem is a completely standard concept. – David Richerby Mar 28 '15 at 20:09 • @DavidRicherby Sure. Given the string of questions from the same user, I was only making sure everyone was on the same page. – Juho Mar 28 '15 at 20:20 ## 1 Answer If an NP-complete problem is reducible to its complement then NP=coNP (why?). Conversely, if NP=coNP then every NP-complete problem is reducible to its complement (why?). • I will give my attempt, correct me if there are any mistakes. Suppose A is NP-complete; if NP=coNP, then A is in NP = coNP. If A is in P, then let A's complement be in NP, and its complement is polytime reducible to A (so A is NP-hard) --> A's complement is in coNP, so NP = coNP? – Teodorico Levoff Mar 31 '15 at 18:42 • I can't follow your reasoning. Try to write it again more clearly. – Yuval Filmus Mar 31 '15 at 18:45 • Suppose A is NP-complete. A is in coNP by definition of coNP, because A complement is in NP. If NP=coNP, then A is polytime reducible to A's complement, and vice versa. If A is reducible to its complement then we can map through a function computable in polytime, from NP to coNP? My apologies, it's kind of messy and hard to follow. – Teodorico Levoff Mar 31 '15 at 18:49 • I'm sorry, but I still can't follow. What are you assuming, and what are you trying to prove? Also, it is (probably) not true that if A is NP-complete then it is in coNP; if an NP-complete problem is in coNP then NP=coNP. – Yuval Filmus Mar 31 '15 at 18:51 • Try filling the following template. "We start by showing that if an NP-complete problem is reducible to its complement then NP=coNP. Suppose that A is an NP-complete problem reducible to its complement, and let B$\in$NP. Then ... and so B$\in$coNP. This shows that NP$\subseteq$coNP, and so NP=coNP. Next, we show that if NP=coNP then every NP-complete problem is reducible to its complement. Suppose that NP=coNP, and let A be an NP-complete problem. Then ... and so A is reducible to its complement." – Yuval Filmus Mar 31 '15 at 19:06
2021-07-26 17:56:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6598635911941528, "perplexity": 892.1232498216034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.81/warc/CC-MAIN-20210726152107-20210726182107-00137.warc.gz"}
https://www.physicsforums.com/threads/the-action-s-of-relativistic-particle.775501/
# The Action[s] of relativistic particle 1. Oct 11, 2014 ### ChrisVer In the case of a relativistic particle, one can try to minimize the length of the worldline of the particle, thus writing the action as: $S = -m \int_{s_i}^{s_f} ds = - m \int_{\tau_i}^{\tau_f} d \tau ~ \sqrt{\dot{x}^{\mu}(\tau) \dot{x}^{\nu} (\tau) \eta_{\mu \nu}}$ where the minus sign is to ensure a minimum and $m$ is the mass, chosen for dimensional reasons [also for the eq. of motion]. However I heard that this action is problematic in the case of $m=0$ (massless particles), and also, due to the $\sqrt{.}$, it gets problematic upon quantization. So to overcome those two problems, one can define another action: $S= \int d \tau e [ e^{-2} \dot{x}^2 + m^2 ]$ with $e$ now an auxiliary field, transforming under reparametrizations as a vielbein. The equations of motion for the $x$ field are the same for both actions, so they are equivalent. The second, however, doesn't seem to have the same problems with the square root, nor with the massless case (due to the freedom of fixing $e$). My main question is, however, how can someone build the second action? I mean, did people find it by pure luck, or are there physical reasons to write it down? E.g., for the first action, as I mentioned, the idea is to minimize the worldline length. Last edited: Oct 11, 2014 2. Oct 11, 2014 ### dextercioby Educated guess would be the right answer: the particle is a constrained system, and the square root (as in the case of the Klein-Gordon equation without squaring) makes it tempting to look for a linear alternative. To my shame (for I love the history of physics a lot), I don't know who coined this einbein formulation. http://physics.stackexchange.com/questions/4188/whats-the-point-of-having-an-einbein-in-your-action And a little research on the history side, maybe this article (which for me is behind a paywall) is a start: http://www.sciencedirect.com/science/article/pii/0370269376901155 Last edited: Oct 11, 2014 3. Oct 12, 2014 ### ChrisVer yes sorry I meant einbein.
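As an addendum, here is a sketch of the standard equivalence argument (my own gloss, ignoring overall sign and factor conventions, which vary between texts). Since $e$ appears without derivatives, varying the second action with respect to it gives an algebraic equation,

$$\frac{\delta S}{\delta e} = -e^{-2}\dot{x}^2 + m^2 = 0 \quad\Longrightarrow\quad e = \frac{\sqrt{\dot{x}^2}}{m},$$

and substituting back,

$$S = \int d\tau \left( e^{-1}\dot{x}^2 + e\, m^2 \right) = \int d\tau \, 2m\sqrt{\dot{x}^2},$$

which is proportional to the worldline-length action, so the two actions yield the same classical dynamics for $x$. For $m=0$ one cannot solve for $e$ this way (the constraint forces $\dot{x}^2=0$), but the einbein action itself remains well defined once $e$ is gauge-fixed, which is exactly the advantage noted in the question.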
2017-08-21 01:35:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.912043035030365, "perplexity": 718.7958429138962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107065.72/warc/CC-MAIN-20170821003037-20170821023037-00199.warc.gz"}
https://im.kendallhunt.com/MS/teachers/2/corrections.html
## Corrections

Note: Later printings of these materials may have some of these corrections already in place.

Unit 1, Lesson 8, Activity 2. In the student response for #1, instead of "0.817 miles per minute" it should say "0.917 miles per minute."

Unit 1, Lesson 11, Activity 2. In the extension, the values given in the table should be Mercury 35, Venus 67, Earth 93, Mars 142, Jupiter 484, Saturn 887, Uranus 1784, Neptune 2795.

Unit 2, Lesson 1, Activity 2. In the activity synthesis, in the second table instead of $$1\frac12$$ it should say $$\frac12$$.

Unit 2, Lesson 6. In the lesson summary, instead of "20,300 feet above sea level" it should say "20,310 feet above sea level."

Unit 2, Lesson 6, Activity 2. In the student response, in the table instead of "622,200" it should say "622,000."

Unit 2, Lesson 11, Activity 2. In the student response for "Are you ready for more?" instead of $$(20,40)$$ it should say $$(20,50)$$.

Unit 2 Glossary. In the definition of "coordinate plane" instead of "to the left" it should say "to the right."

Unit 3, Lesson 9. The learning goal that says "Calculate the surface area of a rectangular prism and explain (orally and in writing) the solution method." should be removed and replaced with "Calculate the area of a shape that includes circular or semi-circular parts, and explain (orally and in writing) the solution method."

Unit 4, End of Unit Assessment, Item 3. Removed alignment to standard 7.G.A.1.

Unit 4, Lesson 9, Cool-down. In the student response, instead of "715,000,000 . . . 721,000,000" it should say "7,150,000 . . . 7,210,000."

Unit 4, Lesson 14, Activity 2. In the student response for #3, instead of "0.21%" it should say "21%." In the activity synthesis, instead of $$\frac{14}{2,486}$$ it should say $$\frac{514}{2,486}$$.

Unit 5, Lesson 14, Activity 2. In the student response for #2, instead of "782 . . . 796 . . . 810 . . . 824 . . . 838" it should say "784 . . . 798 . . . 812 . . . 826 . . . 840."

Unit 5, Lesson 16, Activity 3. In the student response for #1, instead of "1, 4, 5, 2, 6, 3" it should say "Diagram A, Diagram D, Diagram E, Diagram B, Diagram F, Diagram C."

Unit 6, Lesson 13, Activity 3. In the student response, move each row header up one row to match the table in the task statement.

Unit 6, Lesson 16, Activity 1. In the student response, instead of "4" it should say "4.375."

Unit 6, Lesson 19, Practice Problem 6. Option C should say $$4x < \text-20$$ and option D should say $$x < \text-5$$.

Unit 6, Lesson 21, Practice Problem 6. In the solution, instead of "quarts" it should say "gallons."

Unit 6, Lesson 22, Practice Problem 3. In the solution, instead of "$$. . .$$" it should say "$$126.89 + x + 3.5x - 25 = \text-151.89 +4.5x$$ . . . $$350 - x - (x + 50) + 75 = 375 - 2x$$."

Unit 8, Mid-unit Assessment, Question 3. D is not a correct answer.

Unit 8, Lesson 18, Activity 3. In the table on the row with sample 8, instead of "5.2" it should say "5.3." In the student response for #1b, instead of "Sample 8" it should say "Sample 7." Also, in the student response for #4, in the column for sample 8, instead of "9.6" it should say "9.5."

## Lesson Numbering for Learning Targets

In some printed copies of the student workbooks, we erroneously printed a lesson number instead of the unit and lesson number. This table provides a key to match the printed lesson number with the unit and lesson number.

Lesson Number | Unit and Lesson | Lesson Title
1 | 1.1 | What are Scaled Copies?
2 | 1.2 | Corresponding Parts and Scale Factors
3 | 1.3 | Making Scaled Copies
4 | 1.4 | Scaled Relationships
5 | 1.5 | The Size of the Scale Factor
6 | 1.6 | Scaling and Area
7 | 1.7 | Scale Drawings
8 | 1.8 | Scale Drawings and Maps
9 | 1.9 | Creating Scale Drawings
10 | 1.10 | Changing Scales in Scale Drawings
11 | 1.11 | Scales without Units
12 | 1.12 | Units in Scale Drawings
13 | 1.13 | Draw It to Scale
14 | 2.1 | One of These Things Is Not Like the Others
15 | 2.2 | Introducing Proportional Relationships with Tables
16 | 2.3 | More about Constant of Proportionality
17 | 2.4 | Proportional Relationships and Equations
18 | 2.5 | Two Equations for Each Relationship
19 | 2.6 | Using Equations to Solve Problems
20 | 2.7 | Comparing Relationships with Tables
21 | 2.8 | Comparing Relationships with Equations
22 | 2.9 | Solving Problems about Proportional Relationships
23 | 2.10 | Introducing Graphs of Proportional Relationships
24 | 2.11 | Interpreting Graphs of Proportional Relationships
25 | 2.12 | Using Graphs to Compare Relationships
26 | 2.13 | Two Graphs for Each Relationship
27 | 2.14 | Four Representations
28 | 2.15 | Using Water Efficiently
29 | 3.1 | How Well Can You Measure?
30 | 3.2 | Exploring Circles
31 | 3.3 | Exploring Circumference
32 | 3.4 | Applying Circumference
33 | 3.5 | Circumference and Wheels
34 | 3.6 | Estimating Areas
35 | 3.7 | Exploring the Area of a Circle
36 | 3.8 | Relating Area to Circumference
37 | 3.9 | Applying Area of Circles
38 | 3.10 | Distinguishing Circumference and Area
39 | 3.11 | Stained-Glass Windows
40 | 4.1 | Lots of Flags
41 | 4.2 | Ratios and Rates With Fractions
42 | 4.3 | Revisiting Proportional Relationships
43 | 4.4 | Half as Much Again
44 | 4.5 | Say It with Decimals
45 | 4.6 | Increasing and Decreasing
46 | 4.7 | One Hundred Percent
47 | 4.8 | Percent Increase and Decrease with Equations
48 | 4.9 | More and Less than 1%
49 | 4.10 | Tax and Tip
50 | 4.11 | Percentage Contexts
51 | 4.12 | Finding the Percentage
52 | 4.13 | Measurement Error
53 | 4.14 | Percent Error
54 | 4.15 | Error Intervals
55 | 4.16 | Posing Percentage Problems
56 | 5.1 | Interpreting Negative Numbers
57 | 5.2 | Changing Temperatures
58 | 5.3 | Changing Elevation
59 | 5.4 | Money and Debts
60 | 5.5 | Representing Subtraction
61 | 5.6 | Subtracting Rational Numbers
62 | 5.7 | Adding and Subtracting to Solve Problems
63 | 5.8 | Position, Speed, and Direction
64 | 5.9 | Multiplying Rational Numbers
65 | 5.10 | Multiply!
66 | 5.11 | Dividing Rational Numbers
67 | 5.12 | Negative Rates
68 | 5.13 | Expressions with Rational Numbers
69 | 5.14 | Solving Problems with Rational Numbers
70 | 5.15 | Solving Equations with Rational Numbers
71 | 5.16 | Representing Contexts with Equations
72 | 5.17 | The Stock Market
73 | 6.1 | Relationships between Quantities
74 | 6.2 | Reasoning about Contexts with Tape Diagrams
75 | 6.3 | Reasoning about Equations with Tape Diagrams
76 | 6.4 | Reasoning about Equations and Tape Diagrams (Part 1)
77 | 6.5 | Reasoning about Equations and Tape Diagrams (Part 2)
78 | 6.6 | Distinguishing between Two Types of Situations
79 | 6.7 | Reasoning about Solving Equations (Part 1)
80 | 6.8 | Reasoning about Solving Equations (Part 2)
81 | 6.9 | Dealing with Negative Numbers
82 | 6.10 | Different Options for Solving One Equation
83 | 6.11 | Using Equations to Solve Problems
84 | 6.12 | Solving Problems about Percent Increase or Decrease
85 | 6.13 | Reintroducing Inequalities
86 | 6.14 | Finding Solutions to Inequalities in Context
87 | 6.15 | Efficiently Solving Inequalities
88 | 6.16 | Interpreting Inequalities
89 | 6.17 | Modeling with Inequalities
90 | 6.18 | Subtraction in Equivalent Expressions
91 | 6.19 | Expanding and Factoring
92 | 6.20 | Combining Like Terms (Part 1)
93 | 6.21 | Combining Like Terms (Part 2)
94 | 6.22 | Combining Like Terms (Part 3)
95 | 6.23 | Applications of Expressions
96 | 7.1 | Relationships of Angles
99 | 7.4 | Solving for Unknown Angles
100 | 7.5 | Using Equations to Solve for Unknown Angles
101 | 7.6 | Building Polygons (Part 1)
102 | 7.7 | Building Polygons (Part 2)
103 | 7.8 | Triangles with 3 Common Measures
104 | 7.9 | Drawing Triangles (Part 1)
105 | 7.10 | Drawing Triangles (Part 2)
106 | 7.11 | Slicing Solids
107 | 7.12 | Volume of Right Prisms
108 | 7.13 | Decomposing Bases for Area
109 | 7.14 | Surface Area of Right Prisms
110 | 7.15 | Distinguishing Volume and Surface Area
111 | 7.16 | Applying Volume and Surface Area
112 | 7.17 | Building Prisms
113 | 8.1 | Mystery Bags
114 | 8.2 | Chance Experiments
115 | 8.3 | What Are Probabilities?
116 | 8.4 | Estimating Probabilities Through Repeated Experiments
117 | 8.5 | More Estimating Probabilities
118 | 8.6 | Estimating Probabilities Using Simulation
119 | 8.7 | Simulating Multi-step Experiments
120 | 8.8 | Keeping Track of All Possible Outcomes
121 | 8.9 | Multi-step Experiments
122 | 8.10 | Designing Simulations
123 | 8.11 | Comparing Groups
124 | 8.12 | Larger Populations
125 | 8.13 | What Makes a Good Sample?
126 | 8.14 | Sampling in a Fair Way
127 | 8.15 | Estimating Population Measures of Center
128 | 8.16 | Estimating Population Proportions
129 | 8.17 | More about Sampling Variability
130 | 8.18 | Comparing Populations Using Samples
131 | 8.19 | Comparing Populations With Friends
132 | 8.20 | Memory Test
2021-09-22 02:00:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4503846764564514, "perplexity": 10689.916094377686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057303.94/warc/CC-MAIN-20210922011746-20210922041746-00616.warc.gz"}
https://codegolf.stackexchange.com/questions/66522/alex-style-addition/66552
# Alex-style Addition

Inspired by Alex's glorious Learn you an R for great good, we are going to humbly recreate Alex's "one true R program" -- but with a twist. Alex-style Addition works like this -- it has a 90% chance of simply returning the sum of the two numbers given and a 10% chance of recursively Alex-adding the first number and the second number + 1. This means that, potentially, an addition could be off by 1 or more.

## Challenge

Write a full program or function that takes two integers and Alex-adds them as defined. You may assume that your program will not stack overflow if your language doesn't have tail recursion. (Note that you do not have to implement it recursively, as long as the probabilities are the same.)

## Reference Implementation (Groovy)

int alexAdd(int a, int b) {
    int i = new Random().nextInt(10);
    if(i == 1) {
        return alexAdd(a, b + 1);
    } else {
        return a + b;
    }
}

Try this fiddle online.
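Since the recursion just adds a geometric number of extra 1s (see xnor's comment below), a non-recursive version with the same distribution might look like this in Python (a sketch of mine, not part of the original challenge):

import random

def alex_add(a, b):
    # Each iteration adds 1 with probability 1/10, mirroring the recursive branch.
    total = a + b
    while random.random() < 0.1:
        total += 1
    return total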
## Leaderboard

• So it gives the sum of two numbers plus a geometric random variable with failure probability 1/10? – xnor Dec 14 '15 at 2:00
• @xnor Essentially, yes. I defined it recursively so that it is easier to understand, but you don't have to do it recursively (the CJam solution does not, for instance). Dec 14 '15 at 2:02
• Why was this sandboxed for 20 minutes? That seems to be missing the point of the sandbox. Dec 14 '15 at 16:52
• @PeterTaylor The one minor issue with it was fixed almost immediately, and the question was so simple I didn't think it needed to stay in the sandbox for that long (it had already been looked at by 10 people, which I thought was sufficient peer review for such a simple challenge). The main reason I had it in the sandbox was to see if people thought it was too simple. Dec 14 '15 at 17:30
• I would say it still has a major issue, in that it's not clear whether you insist on implementations being written as recursive functions or just on giving the right distribution, but it's far too late to do anything about clarifying that now. Dec 14 '15 at 18:26

# MATLAB, 51 bytes

function f(a,b)
if rand > .1;a+b;else;f(a,b+1);end

The result is found in the 'ans' automatic variable.

# APL (Dyalog Unicode), 10 bytes

+-{⌈10⍟?0}

Try it online!

# Brachylog, 10 bytes

∧9ṙ9&↰+₁|+

Try it online! Takes input as a list of numbers. The TIO header runs the predicate on the input infinitely and prints the results.

ṙ   A random integer from zero to
9   nine
∧   which is not necessarily the input
9   is nine,
&   and the input
↰   passed through this predicate again
+₁  plus one
|   is the output, or, the input
+   summed is the output.

# Julia 1.0, 27 bytes

a\b=rand()<.1 ? 1+a\b : a+b

Try it online!

# Ruby, 32 bytes

Can't believe there was no Ruby answer. Here's a pretty basic lambda function:

f=->a,b{rand(10)<1?f[a,b+1]:a+b}

But why not do it properly? Here's some ungolfed meta-Alexification:

module Alex
  def +(other)
    other += 1 if rand(10) == 7
    super
  end
end

[Fixnum, Bignum, Float].each { |klass| klass.prepend(Alex) }

# testing
p 1000.times.count { 1 + 1 == 3 }     #=> 87
p 1000.times.count { 1 + 1 == 4 }     #=> 19
p 1000.times.count { 1 + 1.3 == 5.3 } #=> 1

# Tcl

proc A n\ m {expr {rand()>.9?[A $n [incr m]]:$n+$m}}

Try it online!

# SmileBASIC 3, 54 bytes

Recursive function that takes two numbers.
DEF A(B,C)IF RND(10)THEN RETURN B+C RETURN A(B,C+1)END

# 05AB1E, 15 14 13 8 bytes

+[TLΩ≠#>

-5 bytes thanks to @Emigna by placing the > (increment by 1) after the # (break loop). Try it online. Explanation:

        # Implicit inputs a and b
+       # Sum of these two inputs
[       # Start an infinite loop
 Ω      #   Random integer
 TL     #   in the range [1, 10]
 ≠      #   If this integer isn't exactly 1:
  #     #     Stop the loop
 >      #   Increase the result by 1
        # Implicitly output the result

• +[TLΩ≠#> for 8 bytes May 9 '18 at 11:04
• @Emigna Ah, smart.. I guess I'm too used to languages like C and JS where 0 == false and anything else == true. In 05AB1E 1 == true, and anything else == false. Also, placing the increment > after the break is pretty obvious (now that I see it ;p). Thanks! May 9 '18 at 11:59

# F#, 66 bytes

let rec a x y=(if(System.Random()).Next(10)=9 then a x 1 else x)+y

Try it online!

The if statement is like a function itself. If the random number is 9 (in the range [0, 10)) then perform Alex-addition on x and 1 and return that value; otherwise return just x. Then add the result of the if statement to y, and return it.

# Kotlin (1.3+), 60 bytes

fun a(b:Int,c:Int):Int=if((0..9).random()<1)a(b,c+1)else b+c

A solution that uses the new cross-platform random features added in Kotlin 1.3. Try it online!

# Kotlin (JVM), 59 bytes

fun a(b:Int,c:Int):Int=if(Math.random()<.1)a(b,c+1)else b+c

Try it online! Works on the JVM version because java.lang.Math is automatically imported.

# Kotlin (<1.3), 65 bytes

This version is "cross-platform" Kotlin since it doesn't depend on any Java features.

fun a(b:Int,c:Int):Int=if((0..9).shuffled()[0]<1)a(b,c+1)else b+c

Try it online! The "randomness" is obtained by shuffling the inclusive range 0..9, which generates a List<Int>, and then checking the first element of that list. Assuming shuffled() is perfectly random (I have no idea how random it actually is), there is a 10% chance of the first element being 0.

# Factor, 41 bytes

: a ( x y -- z ) .1 [ 1 + a ] [ + ] ifp ;

Try it online! ifp is a version of if that takes a probability instead of a boolean. Otherwise, it's just a simple recursive definition (which is verbose in Factor because of the mandatory stack effect declaration).

# R, 33 bytes

f=function(a,b)a+b+rbinom(b,b,.1)

Edit: now simulates recursive application of +1 for each 1 in b.

• I think it should be rgeom(1,.9). Dec 14 '15 at 6:37

# Perl 5 -p, 23 bytes

$_+=<>;$_++while.1>rand

Try it online!

# Python 2, 63 bytes

Returns the result as a string.

from random import*
f=lambda a,b:random()>.1and a+b or f(a,b+1)

Try it online! Test program showing probability distribution: Try it online!

# PowerShell Core, 58 bytes

filter a{param($a)if(get-random 10){$_+$a}else{$_+1|a$a}}

Adds two numbers using a recursive filter. For 4 and 3:

4 | a 3

Try it online!

• It is not a full program or function. It is a code snippet. A full program for PS means "you can save the code to a standalone script file, run the script with optional parameters (not statements) and get a valid result". Oct 27 at 20:46
• From the MSDN about functions: A filter is a type of function that runs on each object in the pipeline. I think it does match? Oct 27 at 21:02
• I don't think the definition from MSDN matches the definition on this site. See comments from @AdmBorkBork and standard loopholes on this site and meta Oct 27 at 21:10
2021-12-02 13:24:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2893201410770416, "perplexity": 5634.575311509123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362219.5/warc/CC-MAIN-20211202114856-20211202144856-00046.warc.gz"}
https://datascience.stackexchange.com/questions/110311/unable-to-debug-where-torch-adam-optimiser-is-going-wrong
# Unable to debug where torch Adam optimiser is going wrong

I was implementing a training loop in VS Code. I have created an Adam optimizer using the XLM-Roberta model as follows:

xlm_r_model = XLMRobertaForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels = NUM_LABELS,
    output_attentions=False,
    output_hidden_states=False
)
xlm_r_model.to(device)

Then at the following line:

optimizer.step()

VS Code simply terminates the execution, without any error stack trace. So I debugged to find out exactly where this is happening. I reached this line, which makes the F.adam(...) call:

Weirdly, on GitHub, torch.optim.adam does not have this line. It seems that the closest matching line is line 150. This call then goes to torch.optim._functional.adam:

In the above image, the params list (line 72) in the for loop contains 201 elements, and I am unable to figure out exactly which param is going wrong. When I let it continue to run, it doesn't pause in debug mode when the error occurs; instead VS Code simply terminates. Again, I am not able to find this function in GitHub's _functional version.

When I checked several Kaggle notebooks (1,2,3,4) for training XLM-Roberta, they use AdamW and the torch_xla package to train on TPUs, something like this:

import torch_xla.core.xla_model as xm
optimizer = AdamW([{'params': model.roberta.parameters(), 'lr': LR},
    {'params': [param for name, param in model.named_parameters() if 'roberta' not in name], 'lr': 1e-3}
    ], lr=LR, weight_decay=0)
xm.optimizer_step(optimizer)

Am I missing some context, and is it indeed compulsory to train using AdamW or torch_xla? Or am I making some stupid mistake?

PS:
1. I am running this on Colab. Its pip shows torch version 1.10.0+cu111 and Python 3.7.13. I have run code-server on Colab through colabcode and am debugging in browser-based VS Code.
2. I was able to train BERT with the Adam optimizer earlier.
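For reference, a plain-Adam setup of the kind the question describes would look roughly like this (a hypothetical reconstruction: the original optimizer-creation line isn't shown, and the learning rate and dataloader here are placeholders, not the asker's values):

import torch

# Hypothetical: the question never shows how the optimizer was constructed.
optimizer = torch.optim.Adam(xlm_r_model.parameters(), lr=2e-5)

xlm_r_model.train()
for batch in train_dataloader:  # placeholder dataloader
    optimizer.zero_grad()
    outputs = xlm_r_model(input_ids=batch["input_ids"].to(device),
                          attention_mask=batch["attention_mask"].to(device),
                          labels=batch["labels"].to(device))
    outputs.loss.backward()
    optimizer.step()            # the call that terminates for the asker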
2023-03-25 22:46:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30637088418006897, "perplexity": 8013.032822736174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945376.29/warc/CC-MAIN-20230325222822-20230326012822-00086.warc.gz"}
https://hindimaintutorial.in/ambiguous-dominoes-solution-codeforces/
# Ambiguous Dominoes solution codeforces

Polycarp and Monocarp are both solving the same puzzle with dominoes. They are given the same set of n dominoes, the i-th of which contains two numbers x_i and y_i. They are also both given the same m by k grid of values a_ij such that m⋅k = 2n.

The puzzle asks them to place the n dominoes on the grid in such a way that none of them overlap, and the values on each domino match the a_ij values that domino covers. Dominoes can be rotated arbitrarily before being placed on the grid, so the domino (x_i, y_i) is equivalent to the domino (y_i, x_i).

They have both solved the puzzle, and compared their answers, but noticed that not only did their solutions not match, but none of the n dominoes were in the same location in both solutions! Formally, if two squares were covered by the same domino in Polycarp's solution, they were covered by different dominoes in Monocarp's solution. The diagram below shows one potential grid a, along with the two players' solutions.

Polycarp and Monocarp remember the set of dominoes they started with, but they have lost the grid a. Help them reconstruct one possible grid a, along with both of their solutions, or determine that no such grid exists.

Input

The first line contains a single integer n (1 ≤ n ≤ 3⋅10^5). The i-th of the next n lines contains two integers x_i and y_i (1 ≤ x_i, y_i ≤ 2n).

Output

If there is no solution, print a single integer −1. Otherwise, print m and k, the height and width of the puzzle grid, on the first line of output. These should satisfy m⋅k = 2n. The i-th of the next m lines should contain k integers, the j-th of which is a_ij.

The next m lines describe Polycarp's solution. Print m lines of k characters each. For each square, if it is covered by the upper half of a domino in Polycarp's solution, it should contain a "U". Similarly, if it is covered by the bottom, left, or right half of a domino, it should contain "D", "L", or "R", respectively.

The next m lines should describe Monocarp's solution, in the same format as Polycarp's solution. If there are multiple answers, print any.

Examples

input
1
1 2
output
-1

input
2
1 1
1 2
output
2 2
2 1
1 1
LR
LR
UU
DD

input
10
1 3
1 1
2 1
3 4
1 5
1 5
3 1
2 4
3 3
4 1
output
4 5
1 2 5 1 5
3 4 1 3 1
1 2 4 4 1
1 3 3 3 1
LRULR
LRDLR
ULRLR
DLRLR
UULRU
DDUUD
LRDDU
LRLRD

Note: Extra blank lines are added to the output for clarity, but are not required. The third sample case corresponds to the image from the statement.
2022-08-17 15:48:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5739421248435974, "perplexity": 747.9505631613471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573029.81/warc/CC-MAIN-20220817153027-20220817183027-00487.warc.gz"}
https://www.tutorialspoint.com/How-can-I-escape-HTML-special-chars-in-JavaScript
How can I escape HTML special chars in JavaScript?

HTML contains special characters such as '<', '>', '/', and single and double quotes. These characters are used for HTML tags; for example, '<' opens an HTML tag, and '/' and '>' close it. This tutorial shows how to escape HTML special characters in JavaScript.

Now, the question is: what if we want to use these characters inside HTML content? If we use special characters directly in HTML content, the browser treats them as opening or closing tags and produces unexpected results. For example, suppose we need to render the string below in the browser.

<b> tutorialsPoint </b>

If we add this string directly to the HTML, the browser interprets <b> as the bold tag, but we want to use it as a literal string. To overcome this problem, we must replace the special characters with their HTML entities. There are three approaches to the problem, given below.

• Using the createTextNode() method
• Using the textContent attribute
• Using the replace() method

Using the createTextNode() method

In this approach, we use the createTextNode() method from the HTML DOM. We pass the string to the method as an argument, and it returns a text node whose content is rendered literally.

Syntax

var converted_string = document.createTextNode(string);

Parameters

• string − Any HTML string whose special characters should be escaped.

Example 1

The example below demonstrates the use of the createTextNode(string) method in JavaScript.

<!DOCTYPE html>
<html>
<body>
   <h2>Escape HTML special chars in JavaScript.</h2>
   <h4>String after escaping the special characters:</h4>
   <p id="contentDiv"></p>
   <script type="text/javascript">
      // function to escape special chars using the createTextNode() method.
      function escapeSpecialChars() {
         let string_var = " <h1> tutorialsPoint </h1> ";
         let escapedString = document.createTextNode(string_var);
         contentDiv.appendChild(escapedString);
      }
      escapeSpecialChars();
   </script>
</body>
</html>

In the output of the above example, the string created with createTextNode() is inserted into the DOM and the special characters are rendered as-is, without being treated as tag elements.

Using the textContent attribute

We can create an HTML element in JavaScript and assign the HTML string to its textContent property. Reading the element's innerHTML property then yields the string with special characters encoded as entities.

Syntax

let textAreaDiv = document.createElement('textarea');
textAreaDiv.textContent = HTML_string;
let encoded_string = textAreaDiv.innerHTML;

Parameters

• HTML_string − Any HTML string that we want to encode.

Example 2

Users can follow the example below to see a demonstration of the above approach.
</h4> <p id="resultDiv"> </p> <script type = "text/javascript"> var resultDiv = document.getElementById("resultDiv"); textAreaDiv.textContent = "<div>Welcome to tutorialsPoint website.</div>"; resultDiv.innerHTML = encoded_string; </script> </body> </html> Users can see in the below output that how we can encode string using the textContent property. Using the Replace() method In this approach, we will use the replace() method of JavaScript. We can use the replace() method to replace one character with another character. Here, We will replace all the special characters in the HTML string with their Unicode by using the replace() method. Syntax html_string.replace( old_char, new_char ) Parameters • Html_string − It is a string in which we need to escape special characters. • Old_char − It is a character in the string which needs to be replaced. • New_char − It is a character that we will add to the string at the position of the old character. Example 3 The below example demonstrates how we can encode the HTML string by replacing the special characters using the replace() method. <!DOCTYPE html> <html> <body> <h2>Escape HTML special chars</h2> <p> Result after escaping the special characters. </p> <p id="result"></p> <script> // function to escape special chars using replace() method. function escapeSpecialChars(str) { return str .replace(/&/g, "&") .replace(/</g, "<") .replace(/>/g, ">") .replace(/"/g, """) .replace(/'/g, "'"); } let string = <div> hello user! <i>your welcome</i> here. </div>; let escape = escapeSpecialChars(string); document.getElementById("result").innerHTML = escape; </script> </body> </html> Users can see that we have successfully replaced all special characters using the replace method, and we can render the string as it is with special characters. Conclusion In this tutorial, we have learned three approaches to replacing the special characters in the HTML. Users can use any method according to their requirements. Updated on 14-Jul-2022 14:27:51
2022-11-26 19:48:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18024569749832153, "perplexity": 5261.34711588994}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708046.99/warc/CC-MAIN-20221126180719-20221126210719-00259.warc.gz"}
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=E1BMAX_2007_v44n1_125
ON EXACT CONVERGENCE RATE OF STRONG NUMERICAL SCHEMES FOR STOCHASTIC DIFFERENTIAL EQUATIONS

Author: Nam, Dou-Gu

Abstract
We propose a simple and intuitive method to derive the exact convergence rate of the global $L_2$-norm error for strong numerical approximation of stochastic differential equations, a result which has been reported by Hofmann and Müller-Gronbach (2004). We conclude that any strong numerical scheme of order $\gamma > 1/2$ has the same optimal convergence rate for this error. The method clearly reveals the structure of the global $L_2$-norm error and is similarly applicable for evaluating the convergence rate of global uniform approximations.

Keywords: strong approximation of SDEs; global $L_2$-norm error

Language: English

References
1. S. Cambanis and Y. Hu, Exact convergence rate of the Euler-Maruyama scheme, with application to sampling design, Stochastics Stochastic Rep. 59 (1996), no. 3-4, 211-240
2. N. Hofmann, T. Müller-Gronbach, and K. Ritter, Optimal approximation of stochastic differential equations by adaptive step-size control, Math. Comp. 69 (2000), no. 231, 1017-1034
3. N. Hofmann, T. Müller-Gronbach, and K. Ritter, The optimal discretization of stochastic differential equations, J. Complexity 17 (2001), no. 1, 117-153
4. N. Hofmann, T. Müller-Gronbach, and K. Ritter, Linear vs. standard information for scalar stochastic differential equations, J. Complexity 18 (2002), no. 2, 394-414
5. N. Hofmann and T. Müller-Gronbach, On the global error of Itô-Taylor schemes for strong approximation of scalar stochastic differential equations, J. Complexity 20 (2004), no. 5, 732-752
6. P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, Springer, Berlin, 1992
7. T. Müller-Gronbach, The optimal uniform approximation of systems of stochastic differential equations, Ann. Appl. Probab. 12 (2002), no. 2, 664-690
8. N. J. Newton, An efficient approximation for stochastic differential equations on the partition of symmetrical first passage times, Stochastics Stochastic Rep. 29 (1990), no. 2, 227-258
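For readers unfamiliar with the setting: a "strong" numerical scheme approximates individual sample paths of an SDE, and the global error is measured pathwise. The simplest such scheme is Euler-Maruyama (strong order 1/2; compare reference 1). The sketch below is a generic illustration of that scheme, not code from the paper:

import numpy as np

def euler_maruyama(mu, sigma, x0, T, n_steps, rng=None):
    # Pathwise (strong) Euler-Maruyama approximation of
    # dX_t = mu(X_t) dt + sigma(X_t) dW_t on [0, T].
    rng = rng or np.random.default_rng()
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)  # Brownian increments
    for k in range(n_steps):
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma(x[k]) * dW[k]
    return x

# Example: geometric Brownian motion dX = 0.1 X dt + 0.2 X dW
path = euler_maruyama(lambda x: 0.1 * x, lambda x: 0.2 * x, 1.0, 1.0, 1000)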
2018-11-19 15:41:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 4, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39755043387413025, "perplexity": 1021.3432490894137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039745800.94/warc/CC-MAIN-20181119150816-20181119172816-00549.warc.gz"}
https://mathematica.stackexchange.com/questions/83163/how-to-fix-the-plotrange
# How to fix the PlotRange?

I have the following plot:

dat = {{255, 255, 255}, {0, 0, 153}, {0, 0, 0}, {204, 0, 0}, {255, 255, 255}};
With[{rgb = RGBColor @@@ (dat/255)}, cf = Blend[rgb, #] &;]
DensityPlot[E^(-2 p0^2 - x0^2/2)/π, {x0, -10, 10}, {p0, -7, 7},
 ColorFunction -> cf, PlotLegends -> Automatic, WorkingPrecision -> 1000,
 PlotPoints -> 100, MaxRecursion -> 5, PlotRange -> {Full, Full, {-1, 1}}]

I am trying to fix the plot range from -1 to 1, but Mathematica only lets me fix it between the maximum and the minimum values of the function. The problem is that I want to compare plots from different calculations, so I am interested in fixing the plot range to a given interval, independently of the function in the plot. I thought it would be simple, but I cannot manage to do it.

• Rescaling is not an option, because of the later comparison with other results that I mentioned. – Mento May 11 '15 at 16:42
• Try taking a look at the discussion in question (82947). – MarcoB May 11 '15 at 17:08
• Specifically, try adding ColorFunctionScaling -> False. – bbgodfrey May 11 '15 at 17:11

PlotLegends -> Placed[BarLegend[{Automatic, {-1, 1}}], Right]
2020-02-18 06:28:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24124659597873688, "perplexity": 1201.507923961352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143635.54/warc/CC-MAIN-20200218055414-20200218085414-00348.warc.gz"}
https://3dprinting.stackexchange.com/questions/9770/trouble-with-sizing-in-fusion-360
# Trouble with sizing in Fusion 360

I imported an .stl file into Fusion 360 from Blender, but I couldn't size it to my required dimensions. Then I tried to make an object in Fusion. That works, but when I want to size it (by hitting the D key) it says: [Error: Sketch geometry is over constrained]. I realise that if I add a flat sketch I can size it with the D key, but once I extrude it I can't size it any more. The same thing happens if I add a 3D object, for example a box. I can add a point on a face of that body (in the middle), and then I can size from that point to an edge, but that's all.

• What did I do wrong, and why couldn't I size that body?
• How am I supposed to size it?
• Should I work with the .stl file imported from Blender, or with the body made by me in Fusion 360?

• Good evening Alex, and a warm welcome to SE.3DP. I would respectfully ask you to please not shout nor rant. Please edit your question appropriately. Maybe take a look at other highly voted questions on this site if you need guidance on style in this matter. Apr 24 '19 at 19:50
• Well done @Marco! :-) Apr 25 '19 at 7:45

Welcome to SE.3DP! First off, F360 isn't the best with STL files. If you're having trouble with constraints and dimensions, I would suggest watching this Maker's Muse video first: How to use Constraints! CAD for Newbies with Fusion 360.

Second, Fusion 360 is very tricky with importing STLs. My steps below should help.

1. In the lower right-hand corner, at the rightmost end of the timeline, you'll see a little gear. Click the gear, then click the very top option: "Do not capture design history". This puts you into Direct Modelling mode.
2. In the top left-hand corner, where it says "Model", select "Mesh" from the menu.
3. Along the toolbar, in the "Create" section, click "Insert mesh". When that's done, go back to the top left where it now says "Mesh" and use the menu to go back to "Model".
4. Now that you're back in "Model", go to the "Modify" menu. In there, find the "Mesh" section, and in that box, click "Mesh to BRep". That will convert your STL into a Fusion 360 body that you can edit.

Now, if you want to use constraints, I would suggest sketching out your object entirely in Fusion, making constraints and dimensions along the way. I know it's annoying, but it'll be easier to modify it in Fusion. Hope that helps!

• Correction: Fusion isn't the best for modifying STL files, but it is great for making them. Apr 25 '19 at 9:12
• Also, if you use "Insert mesh" to load your model into Fusion 360, there should be a "Unit Type" setting in the "Insert mesh" dialog window. Here you can select different units like inch, cm, mm, etc. So if you know what units the model was built with, you can select the correct setting here. Apr 26 '19 at 13:00
2021-09-21 03:26:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35658469796180725, "perplexity": 1949.6129552650968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057131.88/warc/CC-MAIN-20210921011047-20210921041047-00657.warc.gz"}
https://it.overleaf.com/articles/software-de-auxilio-ao-ensino-e-aprendizagemde-matematica-para-criancas/mgwgzxqrpfxm
# Software to aid the teaching and learning of mathematics for children

(Original title: "Software de auxílio ao ensino e aprendizagem de matemática para crianças")

Author: Leandro Alves

Abstract: The usage of software has grown as computers have become popular. Technological solutions have emerged, both in academia and in the market, for several areas, among them education. On the other hand, classroom teaching and learning continues to suffer from classical educational problems such as lack of student and teacher motivation and lack of clear educational goals. And although software supports learning across a range of disciplines and ages, children, especially in mathematics, have seen little of the benefits that technological solutions can bring. Pedagogical approaches such as Bloom's Taxonomy and formative assessments, together with gamification techniques such as Octalysis, can therefore be used to develop a technological solution that serves this audience. The present work proposes the development of software to assist the teaching and learning of mathematics for children in the classroom.
2021-05-15 23:39:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30128350853919983, "perplexity": 2566.224088105501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991488.53/warc/CC-MAIN-20210515223209-20210516013209-00267.warc.gz"}
https://teknikaldomain.me/tags/regex/
Tek's Domain #<NTA:NnT:SSrgS:H6.6-198:W200-90.72:CBWg> Spin the Whee- I Mean, the Subtitle Randomizer! So if you haven’t noticed, every time you view that main title bar, the subtitle has a little extra tagline on the end of it… sometimes, sometimes it doesn’t. Well, that randomizes on every request. And here, we talk about the smallest thing I’ve made, to date: the tagline picker for that.
2023-02-04 09:11:56
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8305789828300476, "perplexity": 3749.9872863070896}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500095.4/warc/CC-MAIN-20230204075436-20230204105436-00432.warc.gz"}
http://cea.ceaj.org/EN/Y2014/V50/I19/1
### Certificate-based hybrid encryption scheme under the standard model in cloud computing

ZHOU Ping 1,2, HE Dake 1, ZHANG Wenfang 1

1. College of Information Science & Technology, Southwest Jiaotong University, Chengdu 610031, China
2. Department of Information Engineering, Urban Vocational College of Sichuan, Chengdu 610101, China

• Online: 2014-10-01  Published: 2014-09-29

Abstract: With the rapid development of cloud computing, data security has become a critical problem of cloud security; at the same time, the amount of cloud data stored and transmitted is very large and the safety requirements are high. On the other hand, certificate-based cryptosystems can overcome the certificate management problem in traditional public key cryptosystems and the private key escrow problem in identity-based cryptosystems, so they provide new ways of constructing an effective PKI. But most current certificate-based encryption schemes involve pairing operations, so their efficiency is low. Based on the decisional truncated Diffie-Hellman problem, this paper presents a certificate-based hybrid encryption scheme without pairings; its efficiency is analyzed and its security is proved. The scheme is a one-time-one-key encryption scheme built from a key encapsulation algorithm, a symmetric encryption algorithm, and a message authentication code algorithm. Analysis shows that the scheme is efficient and can resist adaptive chosen ciphertext attack, so it can be used in cloud computing environments.
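The abstract's "one-time-one-key" construction is the standard KEM/DEM pattern: encapsulate a fresh session key, encrypt the payload under it, and authenticate with a MAC. Below is a rough structural sketch in Python of that pattern only (generic illustration, not the paper's scheme; the KEM is a placeholder, and the XOR keystream is a toy, not a secure cipher):

import hashlib
import hmac
import secrets

def toy_kem_encapsulate():
    # Placeholder for the certificate-based KEM: in the real scheme the
    # session key would be bound to the recipient's public key/certificate.
    session_key = secrets.token_bytes(32)
    encapsulation = b"<KEM ciphertext of session_key would go here>"
    return session_key, encapsulation

def dem_encrypt(session_key, plaintext):
    # DEM part: derive separate encryption and MAC keys from the one-time
    # session key, encrypt, then MAC the ciphertext (encrypt-then-MAC).
    enc_key = hashlib.sha256(session_key + b"enc").digest()
    mac_key = hashlib.sha256(session_key + b"mac").digest()
    keystream = b""
    counter = 0
    while len(keystream) < len(plaintext):
        keystream += hashlib.sha256(enc_key + counter.to_bytes(8, "big")).digest()
        counter += 1
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext, tag

session_key, encap = toy_kem_encapsulate()
ct, tag = dem_encrypt(session_key, b"cloud data to protect")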
2023-01-31 13:00:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3374309241771698, "perplexity": 3158.209629263446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499871.68/warc/CC-MAIN-20230131122916-20230131152916-00664.warc.gz"}
https://math.stackexchange.com/questions/2358179/find-the-probability-that-a-pump-would-have-cost-more-to-rent-than-to-buy
# Find the probability that a pump would have cost more to rent than to buy

We have a continuous random variable $X$ modelled by the density $f(x)=\frac{k}{x}$ for $1<x<9$; $X$ is the length of time in years that a water pump lasts. (Therefore $k$ can be solved for to get $k=\frac{1}{2\ln 3}$.) Then there are two very difficult parts that come with this question.

First part: The farmer is offered a guarantee to cover the cost of replacing a pump that fails during the second year, at a cost of $300$. Given the pump will cost $1000$ to replace if it fails during that year, what advice would you give as to the offer?

Second part: Pumps can be rented for an installation charge of $200$ plus $250$ per year payable in advance. The yearly payment is not refundable if the pump fails before the end of the year. The farmer does not purchase the guarantee. Find the probability that the pump, at the end of its life, would have cost more to rent than to buy for $1000$.

To be honest I don't even know where to start. In fact I made this account because nobody I talk to knows where to start. I hope that if somebody can explain this to me, I will have a much greater understanding of the concepts underpinning this question.

## 1 Answer

For the first part, he can accept a loss of $300$ or take his chances on losing $1000$. Buying the insurance will be a good deal if the probability of the pump failing during the second year is greater than $0.3$. Integrate the probability density from $1$ to $2$ to get the chance it fails during the second year.

For the second part, if it fails during the second year you pay $700$. If it fails during the third year you pay $950$. If it fails after the third year, you pay more than $1000$ to rent it. Compute the chance it lasts more than three years.

• Yikes, sorry. I edited the number $k$ wrongly. – Siong Thye Goh Jul 14 '17 at 2:42
• Thanks Ross. I completely understand how you did that. But here's another wee problem I have. I feel like there is another way to do this problem which seems legitimate (but it can't be, because it gives an answer of 0.47, whereas your method gives a perfect 0.5). The other way is this: the event is Rent cost > Buy cost. The buy cost $B$ is always equal to $1000$. The rent cost $R$ is modelled as $200+250X$ (where $X$ is the life of the pump). Then $200+250X>1000$. I think (not completely sure) we are allowed to use algebra on this to get $X>3.2$. Then solving $P(X>3.2)$ gets the answer of 0.47. – Scott Simmons Jul 14 '17 at 3:06
• The problem says you pay 250 at the start of any year when the pump has not failed, and you do not get a refund for partial years. That is why I did it the way I did. Once the fourth year starts you pay the fourth $250$ for a total of $1200$, which is when the rental cost exceeds $1000$. – Ross Millikan Jul 14 '17 at 3:24
• Wait Ross, so you see how I got $P(X>3.2)$, right? Is that a legitimate way to do the question? Then can I say that because you don't get a refund for partial years, you round it to $P(X>3)$? Is that also a way of doing it? – Scott Simmons Jul 14 '17 at 3:48
• I see how you got it. It would be correct if you got a partial refund, but the question specifies you do not. If the pump lasts 3.1 years you spend 1200 renting and are losing compared to buying. – Ross Millikan Jul 14 '17 at 4:01
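For concreteness, the integrals behind both parts can be worked out directly from the density (my own check, using $k=\frac{1}{2\ln 3}$ from the question):

$$P(1<X<2)=\int_1^2 \frac{k}{x}\,dx=\frac{\ln 2}{2\ln 3}\approx 0.315,$$

which exceeds $0.3$, so the $300$ guarantee costs less than the expected replacement loss of $1000 \times 0.315 = 315$; and

$$P(X>3)=\int_3^9 \frac{k}{x}\,dx=\frac{\ln(9/3)}{2\ln 3}=\frac{1}{2}, \qquad P(X>3.2)=\frac{\ln(9/3.2)}{2\ln 3}\approx 0.47,$$

matching both the accepted answer's "perfect $0.5$" and the $0.47$ obtained from the continuous-cost reading discussed in the comments.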
2021-06-13 17:35:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.545803427696228, "perplexity": 278.59127453380916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487610196.46/warc/CC-MAIN-20210613161945-20210613191945-00491.warc.gz"}
http://hal.in2p3.fr/in2p3-01316392
Search for gluinos in events with an isolated lepton, jets and missing transverse momentum at $\sqrt{s}$ = 13 TeV with the ATLAS detector

Abstract: The results of a search for gluinos in final states with an isolated electron or muon, multiple jets and large missing transverse momentum using proton-proton collision data at a centre-of-mass energy of $\sqrt{s}$ = 13 TeV are presented. The dataset used was recorded in 2015 by the ATLAS experiment at the Large Hadron Collider and corresponds to an integrated luminosity of 3.2 fb$^{-1}$. Six signal selections are defined that best exploit the signal characteristics. The data agree with the Standard Model background expectation in all six signal selections, and the largest deviation is a 2.1 standard deviation excess. The results are interpreted in a simplified model where pair-produced gluinos decay via the lightest chargino to the lightest neutralino. In this model, gluinos are excluded up to masses of approximately 1.6 TeV depending on the mass spectrum of the simplified model, thus surpassing the limits of previous searches.

Document type: Journal article
http://hal.in2p3.fr/in2p3-01316392

Citation: G. Aad, S. Albrand, S. Berlendis, C. Camincher, J. Collot, et al. Search for gluinos in events with an isolated lepton, jets and missing transverse momentum at $\sqrt{s}$ = 13 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, Springer Verlag (Germany), 2016, 76, pp.565. ⟨10.1140/epjc/s10052-016-4397-x⟩. ⟨in2p3-01316392⟩
2019-11-13 06:47:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7261152267456055, "perplexity": 2869.8278270320834}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496666229.84/warc/CC-MAIN-20191113063049-20191113091049-00075.warc.gz"}
https://math.stackexchange.com/questions/3150999/cep-for-distributive-lattices-and-groups
# CEP for (distributive) lattices and groups?

An algebra $$A$$ has the congruence extension property (CEP) if for every $$B\le A$$ and $$\theta\in\operatorname{Con}(B)$$ there is a $$\varphi\in\operatorname{Con}(A)$$ such that $$\theta =\varphi\cap(B\times B)$$. A class $$K$$ of algebras has the CEP if every algebra in the class has the CEP.

• Does the class of all lattices have the CEP?
• Does the class of all groups have the CEP?
• Does the class of all distributive lattices have the CEP?

• What is the definition of $\operatorname{Con}(A)$? – Santana Afton Mar 17 '19 at 0:30
• Since in groups congruences correspond to normal subgroups, what you are asking for groups is: given a group $G$ and a subgroup $H$, if $N\triangleleft H$, does there exist $M\triangleleft G$ such that $M\cap H=N$? It is now easy to see that this does not hold, for example by taking $G=A_5$, which is simple, but all of whose proper subgroups not of prime order are not simple. – Arturo Magidin Mar 17 '19 at 0:36
• @SantanaAfton: A congruence on $A$ is an equivalence relation on $A$ which, when viewed as a subset of $A\times A$, is also a subalgebra of $A\times A$ (with the induced structure); congruences play the role that normal subgroups play for groups and ideals play for rings, in defining quotients. $\mathrm{Con}(A)$ is the collection of all congruences on $A$. – Arturo Magidin Mar 17 '19 at 0:38
• related (possibly duplicate) question: link – Eran Mar 18 '19 at 18:24

The variety of all groups does not have the property. Congruences correspond to normal subgroups, so you are essentially asking whether, if $$H\leq G$$ and $$N\triangleleft H$$, there always exists an $$M\triangleleft G$$ such that $$M\cap H= N$$. This does not hold; for example, if $$G=A_5$$, $$H=A_4$$, and $$N$$ is a nontrivial proper normal subgroup of $$A_4$$, then you cannot find any appropriate $$M$$.

The variety of all lattices does not have the property either. Let $$L$$ be the nondistributive lattice $$M_3$$, with elements $$0$$, $$1$$, $$x$$, $$y$$, and $$z$$ (where the join of any two distinct elements of $$\{x,y,z\}$$ is $$1$$, and the meet is $$0$$). Let $$M$$ be the sublattice $$\{0,x,1\}$$, and let $$\Phi$$ be the congruence on $$M$$ that identifies $$0$$ and $$x$$.

Let $$\Psi$$ be a congruence on $$L$$ that identifies $$0$$ and $$x$$. Then it must identify $$0\vee y=y$$ with $$x\vee y=1$$; similarly, it must identify $$0\vee z = z$$ with $$x\vee z = 1$$. Thus, $$y$$, $$z$$, and $$1$$ are identified in $$\Psi$$. That means that $$z\wedge 1 = z$$ must be identified with $$z\wedge y = 0$$, so all of $$0$$, $$z$$, $$y$$, and $$1$$ are identified in $$\Psi$$. That means that $$x$$ is identified with $$1$$ as well, so $$\Phi$$ is a proper subcongruence of $$\Psi|_M$$. Thus, $$M_3$$ does not have the congruence extension property.
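The $M_3$ argument above is mechanical enough to check by computer: build the smallest congruence identifying $0$ and $x$ and watch it collapse the whole lattice. A small Python sketch (my own illustration; the element names and join/meet tables follow the answer's description of $M_3$):

elems = ["0", "x", "y", "z", "1"]

def join(a, b):
    if a == b:
        return a
    if a == "0":
        return b
    if b == "0":
        return a
    return "1"  # any two distinct elements above 0 join to 1 in M_3

def meet(a, b):
    if a == b:
        return a
    if a == "1":
        return b
    if b == "1":
        return a
    return "0"  # any two distinct elements below 1 meet to 0 in M_3

# Union-find over the congruence classes.
parent = {e: e for e in elems}

def find(e):
    while parent[e] != e:
        e = parent[e]
    return e

def union(a, b):
    parent[find(a)] = find(b)

union("0", "x")  # start from the pair we want to identify
changed = True
while changed:  # close under: a ~ b  implies  (a op c) ~ (b op c)
    changed = False
    for a in elems:
        for b in elems:
            if find(a) != find(b):
                continue
            for c in elems:
                for op in (join, meet):
                    if find(op(a, c)) != find(op(b, c)):
                        union(op(a, c), op(b, c))
                        changed = True

print({e: find(e) for e in elems})  # all five elements land in one class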
2020-09-29 08:37:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 55, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9045252203941345, "perplexity": 97.95580775333244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401632671.79/warc/CC-MAIN-20200929060555-20200929090555-00604.warc.gz"}
https://stoneswww.academickids.com/encyclopedia/index.php/Gibbs_phenomenon
# Gibbs phenomenon

[Figures: approximation of a square wave by partial Fourier sums in 5, 25, and 125 steps]

In mathematics, the Gibbs phenomenon, named after the American physicist J. Willard Gibbs (also known as ringing artifacts), is the peculiar manner in which the Fourier series of a piecewise continuously differentiable periodic function f behaves at a jump discontinuity: the nth partial sum of the Fourier series has large oscillations near the jump, which might increase the maximum of the partial sum above that of the function itself. The overshoot does not die out as the frequency increases, but approaches a finite limit. The three pictures on the right demonstrate this for a square wave whose Fourier expansion is

$$\sin(x)+\frac{1}{3}\sin(3x)+\frac{1}{5}\sin(5x)+\dotsb$$

More precisely, this is the function $f$ which equals $\pi/4$ between $2n\pi$ and $(2n+1)\pi$ and $-\pi/4$ between $(2n+1)\pi$ and $(2n+2)\pi$ for every integer $n$; thus this square wave has a jump discontinuity of height $\pi/2$ at every integer multiple of $\pi$.

As can be seen, as the number of terms rises, the error of the approximation is reduced in width and energy, but converges to a fixed height. A calculation for the square wave (see Zygmund, chap. 8.5, or the computations at the end of this article) gives an explicit formula for the limit of the height of the error. It turns out that the Fourier series exceeds the height $\pi/4$ of the square wave by

$$\frac{1}{2}\int_0^\pi \frac{\sin t}{t}\, dt - \frac{\pi}{4} = \frac{\pi}{2}\cdot 0.089490\dots$$

More generally, at any jump point of a piecewise continuously differentiable function with a jump of $a$, the nth partial Fourier series will (for $n$ very large) overshoot this jump by approximately $0.089490\ldots\, a$ at one end and undershoot it by the same amount at the other end; thus the "jump" in the partial Fourier series will be about 18% larger than the jump in the original function. At the location of the discontinuity itself, the partial Fourier series will converge to the midpoint of the jump (regardless of what the actual value of the original function is at this point). The quantity

$$\int_0^\pi \frac{\sin t}{t}\, dt = 1.851937052\dots = \frac{\pi}{2} + \pi\cdot 0.089490\ldots$$

is sometimes known as the Wilbraham-Gibbs constant.

The Gibbs phenomenon was first observed by Albert Michelson via a mechanical graphing machine. Michelson developed a device in 1898 that could compute and re-synthesize the Fourier series. When a square wave was input into the machine, the graph would move to and fro around the discontinuities. This would occur, and continue to occur, as the number of Fourier coefficients approached infinity. The phenomenon was first explained mathematically by J. Willard Gibbs in 1899. Informally, it reflects the difficulty inherent in approximating a discontinuous function by a series of continuous sine and cosine waves.
This phenomenon is also closely related to the principle that the decay of the Fourier coefficients of a function at infinity is controlled by the smoothness of that function; very smooth functions will have very rapidly decaying Fourier coefficients (and thus very rapidly convergent Fourier series), whereas discontinuous functions will have very slowly decaying Fourier coefficients (and thus very badly convergent Fourier series). Note for instance that the Fourier coefficients $1, 1/3, 1/5, \dots$ of the discontinuous square wave described above decay only as fast as the harmonic series, which is not absolutely convergent; indeed, the above Fourier series turns out to be only conditionally convergent for almost every value of $x$. This provides a partial explanation of the Gibbs phenomenon, since Fourier series with absolutely convergent Fourier coefficients would be uniformly convergent by the Weierstrass M-test and would thus be unable to exhibit the above oscillatory behavior. By the same token, it is impossible for a discontinuous function to have absolutely convergent Fourier coefficients, since the function would thus be the uniform limit of continuous functions and therefore be continuous, a contradiction. See more about absolute convergence of Fourier series.

In practice, the difficulties associated with the Gibbs phenomenon can be ameliorated by using a smoother method of Fourier series summation, such as Fejér summation or Riesz summation, or by using sigma-approximation. If one uses a wavelet transform instead of the Fourier transform, then the Gibbs phenomenon no longer occurs.

## Formal mathematical description of the phenomenon

Let $f: \mathbb{R} \to \mathbb{R}$ be a piecewise continuously differentiable function which is periodic with some period $L > 0$. Suppose that at some point $x_0$, the left limit $f(x_0^-)$ and right limit $f(x_0^+)$ of the function $f$ differ by a non-zero gap $a$:

$$f(x_0^+) - f(x_0^-) = a \neq 0.$$

For each positive integer $N \geq 1$, let $S_N f$ be the $N$th partial Fourier series

$$S_N f(x) := \sum_{-N \leq n \leq N} \hat f(n) e^{2\pi i n x / L} = \frac{1}{2} a_0 + \sum_{n=1}^N \left( a_n \cos(2\pi nx/L) + b_n \sin(2\pi nx/L) \right)$$

where the Fourier coefficients $\hat f(n), a_n, b_n$ are given by the usual formulae

$$\hat f(n) := \frac{1}{L} \int_0^L f(x) e^{-2\pi i nx/L}\, dx$$
$$a_n := \frac{2}{L} \int_0^L f(x) \cos(2\pi nx/L)\, dx$$
$$b_n := \frac{2}{L} \int_0^L f(x) \sin(2\pi nx/L)\, dx.$$

Then we have

$$\lim_{N \to \infty} S_N f\left(x_0 + \frac{L}{2N}\right) = f(x_0^+) + 0.089490\ldots \cdot a$$

and

$$\lim_{N \to \infty} S_N f\left(x_0 - \frac{L}{2N}\right) = f(x_0^-) - 0.089490\ldots \cdot a$$

but

$$\lim_{N \to \infty} S_N f(x_0) = \frac{f(x_0^-) + f(x_0^+)}{2}.$$

More generally, if $x_N$ is any sequence of real numbers which converges to $x_0$ as $N \to \infty$, and if the gap $a$ is positive, then

$$\limsup_{N \to \infty} S_N f(x_N) \leq f(x_0^+) + 0.089490\ldots \cdot a$$

and

$$\liminf_{N \to \infty} S_N f(x_N) \geq f(x_0^-) - 0.089490\ldots \cdot a.$$

If instead the gap $a$ is negative, one needs to interchange limit superior with limit inferior, and also interchange the ≤ and ≥ signs, in the above two inequalities.

## The square wave example

We now illustrate the above Gibbs phenomenon in the case of the square wave described earlier.
In this case the period $L$ is $2\pi$, the discontinuity $x_0$ is at zero, and the jump $a$ is equal to $\pi/2$. For simplicity let us just deal with the case when $N$ is even (the case of odd $N$ is very similar). Then we have

$$S_N f(x) = \sin(x) + \frac{1}{3} \sin(3x) + \cdots + \frac{1}{N-1} \sin((N-1)x).$$

Substituting $x=0$, we obtain

$$S_N f(0) = 0 = \frac{-\frac{\pi}{4} + \frac{\pi}{4}}{2} = \frac{f(0^-) + f(0^+)}{2}$$

as claimed above. Next, we compute

$$S_N f\left(\frac{2\pi}{2N}\right) = \sin\left(\frac{\pi}{N}\right) + \frac{1}{3} \sin\left(\frac{3\pi}{N}\right) + \cdots + \frac{1}{N-1} \sin\left( \frac{(N-1)\pi}{N} \right).$$

If we introduce the sinc function $\operatorname{sinc}(x) := \sin(x)/x$, we can rewrite this as

$$S_N f\left(\frac{2\pi}{2N}\right) = \frac{1}{2} \left[ \frac{2\pi}{N} \operatorname{sinc}\left(\frac{\pi}{N}\right) + \frac{2\pi}{N} \operatorname{sinc}\left(\frac{3\pi}{N}\right) + \cdots + \frac{2\pi}{N} \operatorname{sinc}\left( \frac{(N-1)\pi}{N} \right) \right].$$

But the expression in square brackets is a numerical integration approximation to the integral $\int_0^\pi \operatorname{sinc}(t)\, dt$ (more precisely, it is a midpoint rule approximation with spacing $2\pi/N$). Since the sinc function is continuous, this approximation converges to the actual integral as $N \to \infty$. Thus we have

$$\lim_{N \to \infty} S_N f\left(\frac{2\pi}{2N}\right) = \frac{1}{2} \int_0^\pi \operatorname{sinc}(t)\, dt = \frac{\pi}{4} + 0.089490\ldots \cdot \frac{\pi}{2}$$

which was what was claimed in the previous section. A similar computation shows

$$\lim_{N \to \infty} S_N f\left(-\frac{2\pi}{2N}\right) = -\frac{1}{2} \int_0^\pi \operatorname{sinc}(t)\, dt = -\frac{\pi}{4} - 0.089490\ldots \cdot \frac{\pi}{2}.$$

## Publications

• Gibbs, J. W., "Fourier Series". Nature 59, 200 and 606, 1899.
• Antoni Zygmund, Trigonometrical Series, Dover Publications, 1955.
• Wilbraham, H., On a certain periodic function, Cambridge and Dublin Math. J., 3 (1848), pp. 198-201.
• Braennlund, Johan, "Why are sine waves fundamental" (http://groups.google.ca/groups?selm=4a2h6p%24io6%40columba.udac.uu.se).
• Weisstein, Eric W., "Gibbs Phenomenon" (http://mathworld.wolfram.com/GibbsPhenomenon.html). From MathWorld--A Wolfram Web Resource.
• Prandoni, Paolo, "Gibbs Phenomenon" (http://lcavwww.epfl.ch/~prandoni/dsp/gibbs/gibbs.html).
• Pavel, "Gibbs phenomenon" (http://klebanov.homeip.net/~pavel/fb/java/la_applets/Gibbs/). math.mit.edu. (Java applet)
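As a quick numerical check of the limit computed above (my own illustration, assuming NumPy; not part of the original article): the partial sum at $x = \pi/N$ should approach $\frac{1}{2}\int_0^\pi \operatorname{sinc}(t)\,dt \approx 0.92597$, which overshoots the square wave's height $\pi/4 \approx 0.78540$ by about 18% of half the jump.

import numpy as np

def partial_sum(x, N):
    # S_N f(x) = sin(x) + sin(3x)/3 + ... + sin((N-1)x)/(N-1), for N even
    k = np.arange(1, N, 2)
    return np.sum(np.sin(k * x) / k)

for N in (10, 100, 1000, 10000):
    print(N, partial_sum(np.pi / N, N))
# the printed values approach 0.92597..., i.e. pi/4 + 0.089490... * (pi/2)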
2021-06-13 19:44:53
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9869678616523743, "perplexity": 1273.0130163481176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487610841.7/warc/CC-MAIN-20210613192529-20210613222529-00612.warc.gz"}
https://mathematica.stackexchange.com/questions/102116/defining-a-list-of-functions
Defining a list of functions

I'm using polynomial mappings such as

s[{x_, y_}] := {x + y^2, y - x}

and playing with compositions of such mappings, e.g. s[s[{x, y}]]. My problem is that I need to generate a list of these mappings (or an array, or whatever the best data structure is in this case) inside a loop, and later to compose them. So I would like to do something like

s[[1]][s[[2]][{x, y}]]

i.e. the first element in the list of mappings composed with the second element. Can someone help me with that? Also, in case there's a better way to define my polynomial mappings, please let me know.

• Your input s[s[{x, y}]] doesn't return output other than s[{x + y^2, -x + y}] - is that expected, or did you want {x + y^2 + (-x + y)^2, -2 x + y - y^2} as the output? – Jason B. Dec 15 '15 at 8:55
• s[{x_, y_}] := (* stuff *) would be a way to go. – J. M.'s ennui Dec 15 '15 at 8:56
• Thanks. That's actually what I meant. I'll correct it. – Teddy Dec 15 '15 at 9:01
• @Teddy, so I don't really understand - you want to create a list of functions, that is easy enough. But then what are you doing with these compositions? Say the functions are {s1, s2, s3} - what are you looking to return? Something like {s1[ s2[ s3[x,y] ] ], s2[ s3[ x,y] ], s3[ x,y] }? – Jason B. Dec 15 '15 at 9:25
• @J.M. I'm trying to create approximations for a certain mapping, call it $A[x,y]$, using polynomial mappings. This is done by analysing various parts of $A$, and generating polynomial mappings accordingly. The desired approximation will be the composition of all those polynomial mappings. – Teddy Dec 15 '15 at 9:30

I'm also not sure whether I fully understand what you are after. But whenever one is using functions more in a mathematical sense than in a programming one, I think it is worth thinking about using pure functions to represent them. Here is a list of pure functions, each of which does a polynomial mapping as you describe:

polynomials = {
  Function[{x, y}, {x + y^2, y - x}],
  Function[{x, y}, {x - y^2, y + x}]
}

You can now almost do what you want:

polynomials[[1]][x, y]
polynomials[[2]][x, y]

Only the composition is syntactically somewhat complicated (because you can't use the pattern matcher to unpack/destructure the list arguments for Functions, so we have to use @@ / Apply):

polynomials[[1]] @@ polynomials[[2]][x, y]

But fortunately it is simple enough to provide a helper function which adds some syntactic sugar, so handling the entries in the list is less involved:

poly[i_Integer][x_, y_] := polynomials[[i]][x, y]
poly[i_Integer][{x_, y_}] := polynomials[[i]][x, y]

Now you can use the various entries like this:

poly[1][x, y]
poly[2][x, y]

and composition is also easier:

poly[1][poly[2][{x, y}]]

And you can also use Composition, as suggested in other comments and answers:

(Composition @@ {poly[1], poly[2], poly[1]})[x, y]

Depending on what the entries in polynomials actually are, it might make sense to turn that list into an Association, which would then allow you to access the entries by non-integer keys; that might make your code more readable and less error prone.
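For comparison only (this is a Mathematica question, so treat the following as an analogy, not a substitute for the approach above): the same pattern, a list of mappings accessed by index and composed, looks like this in Python:

from functools import reduce

# The two example mappings, as a list of plain functions.
polynomials = [
    lambda x, y: (x + y**2, y - x),
    lambda x, y: (x - y**2, y + x),
]

def compose(fs):
    # Right-to-left composition: compose([f, g])(x, y) == f(*g(x, y))
    return reduce(lambda f, g: (lambda x, y: f(*g(x, y))), fs)

print(polynomials[0](*polynomials[1](1, 2)))             # direct composition
print(compose([polynomials[0], polynomials[1]])(1, 2))   # same result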
2021-05-17 20:17:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5878269672393799, "perplexity": 1139.1399462692027}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992440.69/warc/CC-MAIN-20210517180757-20210517210757-00242.warc.gz"}
https://mxnet.incubator.apache.org/versions/master/tutorials/gluon/hybrid.html
# Hybrid - Faster training and easy deployment

Deep learning frameworks can be roughly divided into two categories: declarative and imperative. With declarative frameworks (including Tensorflow, Theano, etc.) users first declare a fixed computation graph and then execute it end-to-end. The benefit of a fixed computation graph is that it's portable and runs more efficiently. However, it's less flexible, because any logic must be encoded into the graph as special operators like scan, while_loop and cond. It's also hard to debug. Imperative frameworks (including PyTorch, Chainer, etc.) are just the opposite: they execute commands one-by-one just like old fashioned Matlab and Numpy. This style is more flexible and easier to debug, but less efficient.

HybridBlock seamlessly combines declarative programming and imperative programming to offer the benefit of both. Users can quickly develop and debug models with imperative programming and switch to efficient declarative execution by simply calling HybridBlock.hybridize().

## HybridBlock

HybridBlock is very similar to Block but has a few restrictions:

• All children layers of HybridBlock must also be HybridBlock.
• Only methods that are implemented for both NDArray and Symbol can be used. For example you cannot use .asnumpy(), .shape, etc.
• Operations cannot change from run to run. For example, you cannot do if x: if x is different for each iteration.

To use hybrid support, we subclass the HybridBlock:

import mxnet as mx
from mxnet import gluon
from mxnet.gluon import nn

mx.random.seed(42)

class Net(gluon.HybridBlock):
    def __init__(self, **kwargs):
        super(Net, self).__init__(**kwargs)
        with self.name_scope():
            # layers created in name_scope will inherit name space
            # from parent layer.
            self.conv1 = nn.Conv2D(6, kernel_size=5)
            self.pool1 = nn.MaxPool2D(pool_size=2)
            self.conv2 = nn.Conv2D(16, kernel_size=5)
            self.pool2 = nn.MaxPool2D(pool_size=2)
            self.fc1 = nn.Dense(120)
            self.fc2 = nn.Dense(84)
            # You can use a Dense layer for fc3 but we do the dot product manually
            # here for illustration purposes.
            self.fc3_weight = self.params.get('fc3_weight', shape=(10, 84))

    def hybrid_forward(self, F, x, fc3_weight):
        # Here F can be either mx.nd or mx.sym, x is the input data,
        # and fc3_weight is either self.fc3_weight.data() or
        # self.fc3_weight.var() depending on whether x is Symbol or NDArray
        print(x)
        x = self.pool1(F.relu(self.conv1(x)))
        x = self.pool2(F.relu(self.conv2(x)))
        # 0 means copy over size from corresponding dimension.
        # -1 means infer size from the rest of dimensions.
        x = x.reshape((0, -1))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.dot(x, fc3_weight, transpose_b=True)
        return x

## Hybridize

By default, HybridBlock runs just like a standard Block. Each time a layer is called, its hybrid_forward will be run:

net = Net()
net.initialize()
x = mx.nd.random_normal(shape=(16, 1, 28, 28))
net(x)
x = mx.nd.random_normal(shape=(16, 1, 28, 28))
net(x)

Hybrid execution can be activated by simply calling .hybridize() on the top level layer. The first forward call after activation will try to build a computation graph from hybrid_forward and cache it. On subsequent forward calls the cached graph, instead of hybrid_forward, will be invoked:

net.hybridize()
x = mx.nd.random_normal(shape=(16, 1, 28, 28))
net(x)
x = mx.nd.random_normal(shape=(16, 1, 28, 28))
net(x)

Note that before hybridize, print(x) printed out one NDArray per forward call, but after hybridize, only the first forward call printed out a Symbol.
On subsequent forward calls, hybrid_forward is not called, so nothing is printed. Hybridize will speed up execution and save memory. If the top level layer is not a HybridBlock, you can still call .hybridize() on it and Gluon will try to hybridize its children layers instead.

hybridize also accepts several options for performance tuning. For example, you can do

net.hybridize(static_alloc=True)
# or
net.hybridize(static_alloc=True, static_shape=True)

Please refer to the API manual for details.

## Serializing trained model for deployment

Models implemented as HybridBlock can be easily serialized. The serialized model can be loaded back later or used for deployment with other language front-ends like C, C++ and Scala. To this end, we simply use export and SymbolBlock.imports:

net(x)
net.export('model', epoch=1)

Two files, model-symbol.json and model-0001.params, are saved on disk. You can use other language bindings to load them. You can also load them back into Gluon with SymbolBlock:

import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    net2 = gluon.SymbolBlock.imports('model-symbol.json', ['data'], 'model-0001.params')

## Operators that do not work with hybridize

If you want to hybridize your model, you must use F.some_operator in your hybrid_forward function. F will be mxnet.nd before you hybridize and mxnet.sym after hybridize. While most APIs are the same in NDArray and Symbol, there are some differences. Writing F.some_operator and calling hybridize may not work all of the time. Here we list some frequently used NDArray APIs that can't be hybridized, and provide workarounds.

### Element-wise Operators

In the NDArray API, the following arithmetic and comparison operators are automatically broadcast if the input NDArrays have different shapes. That's not the case in the Symbol API: broadcasting is not automatic, and you have to explicitly use the corresponding broadcast operators for Symbols expected to have different shapes.

NDArray APIs and their meaning:
NDArray.__sub__: x.__sub__(y) <=> x-y <=> mx.nd.subtract(x, y)
NDArray.__mul__: x.__mul__(y) <=> x*y <=> mx.nd.multiply(x, y)
NDArray.__div__: x.__div__(y) <=> x/y <=> mx.nd.divide(x, y)
NDArray.__mod__: x.__mod__(y) <=> x%y <=> mx.nd.modulo(x, y)
NDArray.__lt__: x.__lt__(y) <=> x<y <=> mx.nd.lesser(x, y)
NDArray.__le__: x.__le__(y) <=> x<=y <=> mx.nd.less_equal(x, y)
NDArray.__gt__: x.__gt__(y) <=> x>y <=> mx.nd.greater(x, y)
NDArray.__ge__: x.__ge__(y) <=> x>=y <=> mx.nd.greater_equal(x, y)
NDArray.__eq__: x.__eq__(y) <=> x==y <=> mx.nd.equal(x, y)
NDArray.__ne__: x.__ne__(y) <=> x!=y <=> mx.nd.not_equal(x, y)

The workaround is to use the corresponding broadcast operators for arithmetic and comparison, to avoid potential hybridization failure when input shapes are different.

Symbol APIs and their meaning:
broadcast_equal: Returns the result of element-wise equal to (==) comparison operation with broadcasting.
broadcast_not_equal: Returns the result of element-wise not equal to (!=) comparison operation with broadcasting.
broadcast_greater: Returns the result of element-wise greater than (>) comparison operation with broadcasting.
broadcast_greater_equal: Returns the result of element-wise greater than or equal to (>=) comparison operation with broadcasting.
broadcast_lesser: Returns the result of element-wise lesser than (<) comparison operation with broadcasting.
broadcast_lesser_equal: Returns the result of element-wise lesser than or equal to (<=) comparison operation with broadcasting.
For example, if you want to add an NDArray to your input x, use broadcast_add instead of +:

def hybrid_forward(self, F, x):
    # avoid writing: return x + F.ones((1, 1))
    return F.broadcast_add(x, F.ones((1, 1)))

If you used +, it would still work before hybridization, but would throw a shape-mismatch error after hybridization.

### Shape

Gluon's imperative interface is very flexible and allows you to print the shape of an NDArray. However, Symbol does not have a shape attribute. As a result, you need to avoid printing shapes in hybrid_forward. Otherwise, you will get the following error:

AttributeError: 'Symbol' object has no attribute 'shape'

### Slice

[] in NDArray is used to get a slice from the array, whereas [] in Symbol is used to get an output from a grouped symbol. For example, you will get different results for the following method before and after hybridization.

def hybrid_forward(self, F, x):
    return x[0]

The current workaround is to explicitly call the slice or slice_axis operators in hybrid_forward.

### Not implemented operators

Some often used operators in NDArray are not implemented in Symbol, and will cause hybridization failure.

#### NDArray.asnumpy

Symbol does not support the asnumpy function. You need to avoid calling asnumpy in hybrid_forward.

#### Array creation APIs

mx.nd.array() is used a lot, but Symbol does not have an array API. The current workaround is to use F.ones, F.zeros, or F.full, which exist in both the NDArray and Symbol APIs.

#### In-Place Arithmetic Operators

In-place arithmetic operators may be used in Gluon imperative mode, but if you expect to hybridize, you should write these operations explicitly instead. For example, avoid writing x += y and use x = x + y, otherwise you will get NotImplementedError. This applies to all of the Python in-place arithmetic operators:

| NDArray in-place arithmetic operator | Description |
| --- | --- |
| NDArray.__iadd__ | x.__iadd__(y) <=> x += y |
| NDArray.__isub__ | x.__isub__(y) <=> x -= y |
| NDArray.__imul__ | x.__imul__(y) <=> x *= y |
| NDArray.__idiv__ | x.__idiv__(y) <=> x /= y |
| NDArray.__imod__ | x.__imod__(y) <=> x %= y |

The recommended practice is to utilize the flexibility of the imperative NDArray API during experimentation. Once you have finalized your model, make the changes mentioned above so you can call the hybridize function to improve performance.
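To make the slice workaround above concrete, here is a minimal sketch (the axis and bounds are illustrative; note that slice_axis keeps the sliced axis, with size 1, whereas x[0] would drop it):

def hybrid_forward(self, F, x):
    # instead of x[0]: keep the first element along axis 0,
    # which works for both NDArray and Symbol
    return F.slice_axis(x, axis=0, begin=0, end=1)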
http://stacks.math.columbia.edu/tag/01I2
# The Stacks Project

## Tag 01I2

This tag has label schemes-lemma-category-affine-schemes and it points to the corresponding content:

Lemma 22.6.5. The category of affine schemes is equivalent to the opposite of the category of rings. The equivalence is given by the functor that associates to an affine scheme the global sections of its structure sheaf.

Proof. This is now clear from Definition 22.5.5 and Lemma 22.6.4. $\square$

\begin{lemma}
\label{lemma-category-affine-schemes}
The category of affine schemes is equivalent to the opposite of the category of rings.
The equivalence is given by the functor that associates to an affine scheme the global
sections of its structure sheaf.
\end{lemma}

\begin{proof}
This is now clear from Definition \ref{definition-affine-scheme} and
Lemma \ref{lemma-morphism-into-affine}.
\end{proof}

To cite this tag (see How to reference tags), use:

\cite[\href{http://stacks.math.columbia.edu/tag/01I2}{Tag 01I2}]{stacks-project}
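Concretely (this illustration is mine, not part of the tag itself), the equivalence is contravariant, so morphisms of affine schemes correspond to ring maps in the opposite direction:

$$\mathop{\mathrm{Mor}}\nolimits_{\mathrm{Sch}}(\mathop{\mathrm{Spec}}(B), \mathop{\mathrm{Spec}}(A)) = \mathop{\mathrm{Hom}}\nolimits_{\mathrm{Rings}}(A, B),$$

so that, for example, the inclusion $\mathbf{Z} \to \mathbf{Q}$ corresponds to the morphism of schemes $\mathop{\mathrm{Spec}}(\mathbf{Q}) \to \mathop{\mathrm{Spec}}(\mathbf{Z})$.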
https://mathematica.stackexchange.com/questions/138676/calculating-a-64-bit-integer-from-8-one-byte-integers
# Calculating a 64-bit integer from 8 one-byte integers

I have 8 bytes expressed as integer values stored in a list. They correspond to a 64-bit counter with the least significant byte in the first position. Here is an example of what I am doing:

byteIntegers = {123, 12, 0, 169, 255, 20, 67, 199};
counter = Total[Table[byteIntegers[[i]]*256^(i - 1), {i, 1, 8}]]

14358343125271841915

How would you calculate this value?

You can use

FromDigits[Reverse[byteIntegers], 256]

because the elements of the list can be considered digits of a number expressed in base 256. If we did not have this function, we could also use

Total[byteIntegers 256^(Range@Length[byteIntegers] - 1)]

Whether you find this better than the Table version depends on taste.

- I like your first solution more. – mrz Feb 26, 2017 at 10:37
- @mrz The second one is just showing what else you could do if the FromDigits function did not exist. This is how I would implement it. But your Table solution is more readable for those not used to the style of mine. You could change Total@Table[...] to Sum[...], which would be shorter, but personally I don't like that because I tend to think of Sum as a symbolic processing function. We could also do Total@MapIndexed[#1 256^(First[#2] - 1) &, byteIntegers], but it feels too convoluted to me. A similar solution can also be constructed with MapThread. Feb 26, 2017 at 11:27
- @mrz Finally, we could also use Inner[#1 256^#2 &, byteIntegers, Range@Length[byteIntegers] - 1], which looks clever, but I need a second before I understand what it actually does. As usual, Mathematica has myriads of ways. The two ways I like the most (after FromDigits, which is the clear winner) are what you showed and my second solution. Feb 26, 2017 at 11:28
- Thank you very much for your help and the additional solutions. – mrz Feb 27, 2017 at 9:08
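For completeness, one more equivalent formulation (not mentioned in the thread) is a Horner-style fold over the digits, most significant byte first:

Fold[256 #1 + #2 &, 0, Reverse[byteIntegers]]

14358343125271841915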
https://spot.pcc.edu/math/ahss/ed2/chapter_five_exercises.html
Section 5.5 Chapter exercises

Exercises 5.5.1 Exercises

1. Relaxing after work. The General Social Survey asked the question: "After an average work day, about how many hours do you have to relax or pursue activities that you enjoy?" to a random sample of 1,155 Americans. 1 A 95% confidence interval for the mean number of hours spent relaxing or pursuing activities they enjoy was $(1.38, 1.92)\text{.}$

1. Interpret this interval in context of the data.
2. Suppose another set of researchers reported a confidence interval with a larger margin of error based on the same sample of 1,155 Americans. How does their confidence level compare to the confidence level of the interval stated above?
3. Suppose next year a new survey asking the same question is conducted, and this time the sample size is 2,500. Assuming that the population characteristics, with respect to how much time people spend relaxing after work, have not changed much within a year, how will the margin of error of the 95% confidence interval constructed from the new survey compare to the margin of error of the interval stated above?

1 National Opinion Research Center, General Social Survey, 2018.

Solution (a) We are 95% confident that Americans spend an average of 1.38 to 1.92 hours per day relaxing or pursuing activities they enjoy. (b) Their confidence level must be higher, since the width of a confidence interval (and hence its margin of error) increases with the confidence level. (c) The new margin of error will be smaller: as the sample size increases, the standard error decreases, which decreases the margin of error.

2. Minimum wage, Part 2. In Exercise 5.3.11.7, we learned that a Rasmussen Reports survey of 1,000 US adults found that 42% believe raising the minimum wage will help the economy. Construct a 99% confidence interval for the true proportion of US adults who believe this.

3. Testing for food safety. A food safety inspector is called upon to investigate a restaurant with a few customer reports of poor sanitation practices. The food safety inspector uses a hypothesis testing framework to evaluate whether regulations are not being met. If he decides the restaurant is in gross violation, its license to serve food will be revoked.

1. Write the hypotheses in words.
2. What is a Type I Error in this context?
3. What is a Type II Error in this context?
4. Which error is more problematic for the restaurant owner? Why?
5. Which error is more problematic for the diners? Why?
6. As a diner, would you prefer that the food safety inspector requires strong evidence or very strong evidence of health concerns before revoking a restaurant's license? Explain your reasoning.

Solution (a) $H_{0}\text{:}$ The restaurant meets food safety and sanitation regulations. $H_{A}\text{:}$ The restaurant does not meet food safety and sanitation regulations. (b) The food safety inspector concludes that the restaurant does not meet food safety and sanitation regulations and shuts down the restaurant when the restaurant is actually safe. (c) The food safety inspector concludes that the restaurant meets food safety and sanitation regulations and the restaurant stays open when the restaurant is actually not safe. (d) A Type 1 Error may be more problematic for the restaurant owner, since his restaurant gets shut down even though it meets the food safety and sanitation regulations. (e) A Type 2 Error may be more problematic for diners, since a restaurant deemed safe by the inspector is actually not. (f) Strong evidence.
Diners would rather have a restaurant that meets the regulations get shut down than have a restaurant that does not meet the regulations stay open.

4. True or false. Determine if the following statements are true or false, and explain your reasoning. If false, state how it could be corrected.

1. If a given value (for example, the null hypothesized value of a parameter) is within a 95% confidence interval, it will also be within a 99% confidence interval.
2. Decreasing the significance level ($\alpha$) will increase the probability of making a Type 1 Error.
3. Suppose the null hypothesis is $p = 0.5$ and we fail to reject $H_{0}\text{.}$ Under this scenario, the true population proportion is 0.5.
4. With large sample sizes, even small differences between the null value and the observed point estimate, a difference often called the effect size, will be identified as statistically significant.

5. Unemployment and relationship problems. A USA Today/Gallup poll asked a group of unemployed and underemployed Americans if they have had major problems in their relationships with their spouse or another close family member as a result of not having a job (if unemployed) or not having a full-time job (if underemployed). 27% of the 1,145 unemployed respondents and 25% of the 675 underemployed respondents said they had major problems in relationships as a result of their employment status.

1. What are the hypotheses for evaluating if the proportions of unemployed and underemployed people who had relationship problems were different?
2. The p-value for this hypothesis test is approximately 0.35. Explain what this means in context of the hypothesis test and the data.

Solution (a) $H_{0}: p_{\text{unemp}} = p_{\text{underemp}}\text{:}$ The proportions of unemployed and underemployed people who are having relationship problems are equal. $H_{A}: p_{\text{unemp}} \ne p_{\text{underemp}}\text{:}$ The proportions of unemployed and underemployed people who are having relationship problems are different. (b) If in fact the two population proportions are equal, the probability of observing at least a 2% difference between the sample proportions is approximately 0.35. Since this is a high probability, we fail to reject the null hypothesis. The data do not provide convincing evidence that the proportions of unemployed and underemployed people who are having relationship problems are different.

6. Nearsighted. It is believed that nearsightedness affects about 8% of all children. In a random sample of 194 children, 21 are nearsighted. Conduct a hypothesis test for the following question: do these data provide evidence that the 8% value is inaccurate?

7. Nutrition labels. The nutrition label on a bag of potato chips says that a one ounce (28 gram) serving of potato chips has 130 calories and contains ten grams of fat, with three grams of saturated fat. A random sample of 35 bags yielded a confidence interval for the number of calories per bag of 128.2 to 139.8 calories. Is there evidence that the nutrition label does not provide an accurate measure of calories in the bags of potato chips?

Solution Because 130 is inside the confidence interval, we do not have convincing evidence that the true average is any different than what the nutrition label suggests.

8. CLT for proportions. Define the term "sampling distribution" of the sample proportion, and describe how the shape, center, and spread of the sampling distribution change as the sample size increases when $p = 0.1\text{.}$

9. Practical vs. statistical significance.
Determine whether the following statement is true or false, and explain your reasoning: "With large sample sizes, even small differences between the null value and the observed point estimate can be statistically significant."

Solution True. If the sample size gets ever larger, then the standard error will become ever smaller. Eventually, when the sample size is large enough and the standard error is tiny, we can find statistically significant yet very small differences between the null value and point estimate (assuming they are not exactly equal).

10. Same observation, different sample size. Suppose you conduct a hypothesis test based on a sample where the sample size is $n = 50\text{,}$ and arrive at a p-value of 0.08. You then refer back to your notes and discover that you made a careless mistake, the sample size should have been $n = 500\text{.}$ Will your p-value increase, decrease, or stay the same? Explain.

11. Gender pay gap in medicine. A study examined the average pay for men and women entering the workforce as doctors for 21 different positions.

1. If each gender was equally paid, then we would expect about half of those positions to have men paid more than women, and women paid more than men in the other half of positions. Write appropriate hypotheses to test this scenario.
2. Men were, on average, paid more in 19 of those 21 positions. Complete a hypothesis test using your hypotheses from part (a).

Solution (a) In effect, we're checking whether men are paid more than women (or vice-versa), and under the null hypothesis either outcome is equally likely:
\begin{align*} H_{0}: p=0.5 && H_{A}: p \ne 0.5 \end{align*}
We'll use $p$ to represent the fraction of cases where men are paid more than women. Below is the completion of the hypothesis test.

- There isn't a good way to check independence here since the jobs are not a simple random sample. However, independence doesn't seem unreasonable, since the individuals in each job are different from each other. The success-failure condition is met since we check it using the null proportion: $p_{0}n = (1-p_{0})n = 10.5$ is greater than 10.
- We can compute the sample proportion, SE, and test statistic:
\begin{gather*} \hat{p}=19/21 = 0.905\\ SE=\sqrt{\frac{0.5 \times (1-0.5)}{21}}=0.109\\ Z=\frac{0.905-0.5}{0.109}=3.72 \end{gather*}
The test statistic $Z$ corresponds to an upper tail area of about 0.0001, so the p-value is 2 times this value: 0.0002.
- Because the p-value is smaller than 0.05, we reject the notion that all these gender pay disparities are due to chance. Because we observe that men are paid more in a higher proportion of cases and we have rejected $H_{0}\text{,}$ we can conclude that men are being paid higher amounts in ways not explainable by chance alone.
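As a quick numerical check of that last computation, here is an R sketch (not part of the original solution):

phat <- 19/21                    # proportion of positions where men are paid more
se   <- sqrt(0.5 * 0.5 / 21)     # standard error under H0: p = 0.5
z    <- (phat - 0.5) / se        # test statistic, about 3.71
2 * pnorm(z, lower.tail = FALSE) # two-sided p-value, about 0.0002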
https://picturethismaths.wordpress.com/2017/03/17/tea-with-almond-milk/
## Tea with (Almond) Milk

Making a cup of tea in a hurry is a challenge. I want the tea to be as drinkable (cold) as possible after a short amount of time. Say, 5 minutes. What should I do: should I add milk to the tea at the beginning of the 5 minutes or at the end?

The rule we will use to work this out is Newton's Law of Cooling. It says "the rate of heat loss of the tea is proportional to the difference in temperature between the tea and its surroundings". This means the temperature of the tea follows the differential equation $T' = -k (T - T_s)$, where $k$ is a positive constant of proportionality. The minus sign is there because the tea is warmer than the room, so it is losing heat. Solving this differential equation, we get $T = T_s + (A - T_s) e^{-kt}$, where $A$ is the initial temperature of the tea.

We'll start by defining some variables, to set the question up mathematically. Most of them we won't end up needing. Let's say the tea, straight from the kettle, has temperature $T_0$. The cold milk has temperature $m$. We want to mix tea and milk in the ratio $L:l$. The temperature of the surrounding room is $T_s$.

Option 1: Add the milk at the start

We begin by immediately mixing the tea with the milk. This leaves us with a mixture whose temperature is $\frac{T_0 L + m l }{L + l}$. Now we leave the tea to cool. Its cooling follows the equation $T = T_s +\left( \frac{T_0 L + m l }{L + l} - T_s \right) e^{-kt}$. After five minutes, the temperature is

Option 1 $= T_s +\left( \frac{T_0 L + m l }{L + l}- T_s \right) e^{-5k} .$

Option 2: Add the milk at the end

For this option, we first leave the tea to cool. Its cooling follows the equation $T = T_s + (T_0 - T_s) e^{-kt}$. After five minutes, it has temperature $T = T_s + (T_0 - T_s) e^{-5k}$. Then, we add the milk in the specified ratio. The final concoction has temperature

Option 2 $= \frac{(T_s + (T_0 - T_s) e^{-5k}) L + m l }{L + l}.$

So which temperature is lower: the "Option 1" temperature or the "Option 2" temperature? It turns out that most of the terms in the two expressions cancel out, and the difference boils down to

Option 1 $-$ Option 2 $= \frac{(T_s l - ml)(1 - e^{-5k})}{L + l}.$

The answer therefore depends on whether $T_s l - ml > 0$. For our cup of tea, it will be: the milk is colder than the surroundings ($m < T_s$), so $T_s l - ml = l(T_s - m) > 0$. [What does this quantity represent?] Hence, since $k$ is positive, we have $1 - e^{-5k} > 0$, and option 2 wins: add the milk at the end.

But, does it really make a difference? (What's the point of calculus?) Well, we could plug in reasonable values for all the letters ($T_0 = 95^{\circ}C$, etc.) and see how different the two expressions are.

So, why tea with Almond milk? My co-blogger Rachael is vegan. She inspires me to make my tea each morning with Almond milk.

Finally, here's a picture of an empirical experiment from other people (thenakedscientists) tackling this important question:

### One thought on "Tea with (Almond) Milk"

1. Ned March 17, 2017 / 9:32 am
In practice I think it's far better to add the cold milk at the end because it mixes far better, falling through the drink. If you add it at the start it stays mostly near the bottom. Like
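Following up on the numerical question in the post, here is a minimal R sketch; the values below are my own illustrative choices, not from the post:

Ts <- 20; T0 <- 95; m <- 5   # room, tea and milk temperatures, in Celsius (assumed)
L  <- 200; l <- 40           # tea to milk ratio (assumed)
k  <- 0.1                    # cooling constant, per minute (assumed)
opt1 <- Ts + ((T0*L + m*l)/(L + l) - Ts) * exp(-5*k)   # milk added first
opt2 <- ((Ts + (T0 - Ts)*exp(-5*k))*L + m*l)/(L + l)   # milk added last
opt1 - opt2   # about 1 degree: option 2 is (slightly) cooler

With these numbers, adding the milk at the end leaves the tea about one degree cooler after five minutes.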
https://freakonometrics.hypotheses.org/tag/renglish
# Some sort of Otto Neurath (isotype picture) map

Yesterday evening, I was walking in Budapest, and I saw a nice map in some sort of Otto Neurath style. It was hand-made, but I thought it should be possible to do it in R, automatically. A few years ago, Baptiste Coulmont published a nice blog post on the package osmar, which can be used to import OpenStreetMap objects (polygons, lines, etc.) in R. We can start from there. More precisely, consider the city of Douai, in France. The code to read information from OpenStreetMap is the following

library(osmar)
src <- osmsource_api()
bb <- center_bbox(3.07758808135, 50.37404355, 1000, 1000)
ua <- get_osm(bb, source = src)

We can extract a lot of things, like buildings, parks, churches, roads, etc. There are two kinds of tags (keys and values), so we will use two functions

listek = function(vc, type="polygons"){
  nat_ids <- find(ua, way(tags(k %in% vc)))
  nat_ids <- find_down(ua, way(nat_ids))
  nat <- subset(ua, ids = nat_ids)
  nat_poly <- as_sp(nat, type)}

listev = function(vc, type="polygons"){
  nat_ids <- find(ua, way(tags(v %in% vc)))
  nat_ids <- find_down(ua, way(nat_ids))
  nat <- subset(ua, ids = nat_ids)
  nat_poly <- as_sp(nat, type)}

For instance, to get rivers, use

W = listek(c("waterway"))

and to get buildings

M = listek(c("building"))

We can also get churches

C = listev(c("church","chapel"))

but also train stations, airports, universities, hospitals, etc. (the objects P, U, B and T used below are built the same way, presumably with something like P = listev(c("park")) for the parks and U = listev(c("university")) for the universities). It is also possible to get streets, or roads

H1 = listek(c("highway"), "lines")
H2 = listev(c("residential","pedestrian","secondary","tertiary"), "lines")

but they will be more difficult to use afterwards, so let's forget about those. We can check that we have everything we need

plot(M)
plot(W, add=TRUE, col="blue")
plot(P, add=TRUE, col="green")
if(!is.null(B)) plot(B, add=TRUE, col="red")
if(!is.null(C)) plot(C, add=TRUE, col="purple")
if(!is.null(T)) plot(T, add=TRUE, col="red")

Now, let us consider a rectangular grid. If there is a river in a cell, I want a river icon. If there is a church, I want a church icon, etc. Since there will be one (and only one) picture per cell, there will be priorities. But first we have to check intersections between the cells of our grid and the OpenStreetMap polygons.
library(sp)
library(raster)
library(rgdal)
library(rgeos)
library(maptools)

identification = function(xy, h, PLG){
  b=data.frame(x=rep(c(xy[1]-h,xy[1]+h),each=2),
               y=c(c(xy[2]-h,xy[2]+h,xy[2]+h,xy[2]-h)))
  pb1=Polygon(b)
  Pb1=list(Polygons(list(pb1), ID=1))
  SPb1=SpatialPolygons(Pb1, proj4string = CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs +towgs84=0,0,0"))
  UC=gUnionCascaded(PLG)
  return(gIntersection(SPb1,UC))
}

and then, we identify, as follows

whichidtf = function(xy, h){
  h=.7*h
  label="EMPTY"
  if(!is.null(identification(xy,h,M))) label="HOUSE"
  if(!is.null(identification(xy,h,P))) label="PARK"
  if(!is.null(identification(xy,h,W))) label="WATER"
  if(!is.null(identification(xy,h,U))) label="UNIVERSITY"
  if(!is.null(identification(xy,h,C))) label="CHURCH"
  return(label)
}

Let us use colored rectangles to make sure it works (the grid itself is not defined above; a natural choice would be a regular grid over the bounding box, say vx=seq(bb["left"],bb["right"],length=31), vy=seq(bb["bottom"],bb["top"],length=31), with half-width h=diff(vx)[1]/2)

nx=length(vx)
vx=as.numeric((vx[2:nx]+vx[1:(nx-1)])/2)
ny=length(vy)
vy=as.numeric((vy[2:ny]+vy[1:(ny-1)])/2)
plot(M,border="white")
for(i in 1:(nx-1)){
  for(j in 1:(ny-1)){
    lb=whichidtf(c(vx[i],vy[j]),h)
    if(lb=="HOUSE") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="grey")
    if(lb=="PARK") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="green")
    if(lb=="WATER") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="blue")
    if(lb=="CHURCH") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="purple")
  }}

As a first step, let us agree that it works. For the pictures, I borrowed them from https://fontawesome.com/. For instance, we can have a tree

library(png)
library(grid)
download.file("http://freakonometrics.hypotheses.org/files/2018/05/tree.png","tree.png")
tree <- readPNG("tree.png")

Unfortunately, the color is not good (it is black), but that's easy to fix using the RGB decomposition of the png

rev_tree=tree
rev_tree[,,2]=tree[,,4]

We can do the same for houses, churches and water actually

download.file("http://freakonometrics.hypotheses.org/files/2018/05/angle-double-up.png","angle-double-up.png")
download.file("http://freakonometrics.hypotheses.org/files/2018/05/home.png","home.png")
download.file("http://freakonometrics.hypotheses.org/files/2018/05/church.png","church.png")
water <- readPNG("angle-double-up.png")
rev_water=water
rev_water[,,3]=water[,,4]
home <- readPNG("home.png")
rev_home=home
rev_home[,,4]=home[,,4]*.5
church <- readPNG("church.png")
rev_church=church
rev_church[,,1]=church[,,4]*.5
rev_church[,,3]=church[,,4]*.5

and that's almost it. We can then add it on the map

plot(M,border="white")
for(i in 1:(nx-1)){
  for(j in 1:(ny-1)){
    lb=whichidtf(c(vx[i],vy[j]),h)
    if(lb=="HOUSE") rasterImage(rev_home,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
    if(lb=="PARK") rasterImage(rev_tree,vx[i]-h*.9,vy[j]-h*.8,vx[i]+h*.9,vy[j]+h*.8)
    if(lb=="WATER") rasterImage(rev_water,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
    if(lb=="CHURCH") rasterImage(rev_church,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
  }}

Nice, isn't it? (at least as a first draft, done during the lunch break of the R conference in Budapest, today).

# European R Users Meeting

Wednesday, I will give a talk at the European R Users Meeting about our recent work (with Ewen Gallic) on the use of collaborative data in demography. Slides (actually a longer version of the slides) are now online (including a 16:9 version that should fit the screen better).

This Tuesday, I will give the second part of the (crash) graduate course on advanced tools for econometrics. It will take place in Rennes, IMAPP room, and I have been told that there will be a visio with Nantes and Angers.
Slides for the morning are online, as well as slides for the afternoon. In the morning, we will talk about variable selection and penalization, and in the afternoon, it will be on changing the loss function (quantile regression).

# When "learning Python" becomes "practicing R" (spoiler)

15 years ago, a student of mine told me that I should start learning Python, that it was really a great language. Students started to learn it, but I kept postponing. A few years ago, I also started Python for Kids, which is really nice actually, with my son. That was nice, but not really challenging. A few weeks ago, I also started a crash course in Python, taught by Pierre. The truth is I think I will probably give up. I keep telling myself that (1) I can do anything much faster in R, and (2) Python is not intuitive, especially when you're used to practicing R for almost 20 years… Last week, I also had to link Python and R for our pricing game: Ali wrote some template codes in Python, and I had to translate them into R. And it was difficult… Anyway, since it was a school break this week, I said to my son that we should try to practice together, with a nice challenge. For those willing to try it, you'd better stop here, because I will spoil it.

# Using convolutions (S3) vs distributions (S4)

Usually, to illustrate the difference between S3 and S4 classes in R, I mention glm (from base) and vglm (from VGAM), which provide similar outputs, but one is based on S3 code, while the second one is based on S4 code. Another way to illustrate the difference is to manipulate distributions.

Consider the case where we want to sum (independent) random variables, for instance two lognormal distributions. Let us try to compute the median of the sum. The distribution function of the sum of two independent (positive) random variables is
$$F_{S_2}(x)=\int_0^x F_{X_1}(x-y)\,dF_{X_2}(y)$$

pSum2 = function(x) integrate(function(y) plnorm(x-y,1,2)*dlnorm(y,2,1),0,x)$value

Let us visualize that cumulative distribution function

vx=seq(0.1,50,by=.1)
vy=Vectorize(pSum2)(vx)
plot(vx,vy,type="l",ylim=c(0,1))
abline(h=.5,lty=2)

Let us find an upper bound to compute (in a decent time) quantiles

pSum2(350)
[1] 0.99195

and then use the uniroot function to invert that function

qSum = function(u) uniroot(function(x) Vectorize(pSum2)(x)-u, interval=c(0,350))$root
vu=seq(.01,.99,by=.01)
vv=Vectorize(qSum)(vu)

The median is here

qSum(.5)
[1] 14.155

Why not consider the sum of three (independent) distributions? Its cumulative distribution function can be written using our previous function,
$$F_{S_3}(x)=\int_0^x F_{S_2}(x-y)\,dF_{X_3}(y)$$

pSum3 = function(x) integrate(function(y) pSum2(x-y)*dlnorm(y,2,2),0,x)$value

If we look at some values, we get

pSum3(4)
[1] 0.015624
pSum3(5)
Error in integrate(function(y) plnorm(x - y, 1, 2) * dlnorm(y, 2, 1), :
maximum number of subdivisions reached

So obviously, there are computational issues here. Let us consider the following alternative expression,
$$F_{S_3}(x)=\int_0^x F_{X_3}(x-y)\,dF_{S_2}(y)$$
Of course, it is necessary here to compute the density of the sum of two variables

dSum2 = function(x) integrate(function(y) dlnorm(x-y,1,2)*dlnorm(y,2,1),0,x)$value
pSum3 = function(x) integrate(function(y) dlnorm(x-y,2,2)*dSum2(y),0,x)$value

Again, let us compute some values

pSum3(4)
[1] 0.0090285
pSum3(5)
[1] 0.01186

This one seems to work quite well. But it is just an illusion.
pSum3(9)
Error in integrate(function(y) dlnorm(x - y, 1, 2) * dlnorm(y, 2, 1), :
maximum number of subdivisions reached

Clearly, with those S3-type functions, it will be complicated to run computations with 3 variables, or more. Let us consider distributions in the S4-type format of the following package

library(distr)
X1 = Lnorm(mean=1,sd=2)
X2 = Lnorm(mean=2,sd=1)
S2 = X1+X2

To compute the median, we simply have to use

distr::q(S2)(.5)
[1] 14.719

We can also visualize it easily

plot(q(S2))

which looks (very) close to what we got, manually. But here, it is also possible to work with the sum of 3 (independent) random variables

X3 = Lnorm(mean=2,sd=2)
S3 = X1+X2+X3

To compute the median, use

distr::q(S3)(.5)
[1] 33.208

The function is here

plot(q(S3))

# (Advanced) R Crash Course, for Actuaries

The fourth year of the Data Science for Actuaries program started this morning. I will be there for the introduction to R. The slides are available online (created with slidify, the .Rmd file is also available). A (standard) markdown is also available (as well as the .Rmd file). I have to thank Ewen for his help on slidify (especially for the online quiz, and the integration of leaflet maps or the rgl animated graph…).

# Visualizing effects of a categorical explanatory variable in a regression

Recently, I've been working on two problems that might be related to semiotic issues in predictive modeling (i.e. instead of a standard regression table, how can we plot coefficient values in a regression model?). To be more specific, I have a variable of interest $Y$ that is observed for several individuals $i$, with explanatory variables $\mathbf{x}_i$, year $t$, in a specific region $z_i\in\{A,B,C,D,E\}$. Suppose that we have a simple (standard) linear model (forget about time here)
$$y_i=\beta_0+\beta_1x_{1,i}+\cdots+\beta_kx_{k,i}+\sum_j \alpha_j \mathbf{1}(z_i\in j)+\varepsilon_i$$

Let us forget the temporal effect, to focus on the spatial effect today. And consider some simulated dataset. There will be only one (continuous) explanatory variable. And I will generate correlated covariates, just to be more realistic.

n=1000
library(mnormt)
r=.5
Sigma=matrix(c(1,r,r,1), 2, 2)
set.seed(1)
X=rmnorm(n,c(0,0),Sigma)
X1=cut(X[,1],c(-100,quantile(X[,1],c(.1,.4,.7,.85)),100),labels=LETTERS[1:5])
X2=X[,2]
Y=5+X[,1]-X[,2]+rnorm(n)/2
db=data.frame(Y,X1,X2)

Here we have
$$y_i=\beta_0+\beta_1x_{1,i}+\sum_{j\in\{A,B,C,D,E\}} \alpha_j \mathbf{1}(z_i\in j)+\varepsilon_i$$
The goal here is to get a graph to visualize the vector $\hat\alpha=(\hat\alpha_A,\cdots,\hat\alpha_E)$. Let us run the linear regression

reg1=lm(Y~X1+X2,data=db)
idx=which(substr(names(reg1$coefficients),1,2)=="X1")
v1=reg1$coefficients[idx]
names(v1)=LETTERS[2:5]
barplot(v1,col=rgb(0,0,1,.4))

Note that it is possible to add some sort of "confidence interval" to discuss significance (or to avoid spending hours discussing differences in bar heights that are not significantly different)

library(Hmisc)
sv1=summary(reg1)$coefficients[idx,2]
(bp1=barplot(v1,ylim=range(c(0,v1+2*sv1))))
errbar(bp1[,1],v1,v1-2*sv1,v1+2*sv1,add=TRUE)

My main concern here is the "reference" that is considered. Should $A$ be the reference? Why not $B$?

db$X1=relevel(db$X1,"B")
reg1=lm(Y~X1+X2,data=db)
idx=which(substr(names(reg1$coefficients),1,2)=="X1")
v1=reg1$coefficients[idx]
names(v1)=LETTERS[c(1,3:5)]
library(Hmisc)
sv1=summary(reg1)$coefficients[idx,2]
(bp1=barplot(v1))
errbar(bp1[,1],v1,v1-2*sv1,v1+2*sv1,add=TRUE)

Why not the smallest one?
Why not the largest one?… What if there is no simple way to choose? Furthermore, let us get back to the original point, which is that there might be some temporal aspects. More precisely, we can have $\hat\alpha^{(t)}=(\hat\alpha_A^{(t)},\cdots,\hat\alpha_E^{(t)})$. If we also have $\hat\alpha^{(t+1)}$ and we get another plot, how do we interpret it? If for $E$ the bar is taller, it means that, relative to $A$, the difference has increased. I have the feeling that the interpretation is more complicated, because we do not see, on that graph, changes in $\hat\alpha^{(t)}_A$.

Let us try something else. First, let us get back to the original setting

db$X1=relevel(db$X1,"A")

Consider here the regression without the intercept, so that all levels appear

reg1=lm(Y~0+X1+X2,data=db)
idx=which(substr(names(reg1$coefficients),1,2)=="X1")
v1=reg1$coefficients[idx]
names(v1)=LETTERS[1:5]
barplot(v1)

It can be hard to read, especially if $Y$ takes (very) large values, and you think that barplots should start at 0. But still, having those 5 values is nice. Why not rescale that graph? A natural idea may be to consider the case where no spatial component is included, and to look at the difference with that reference.

reg1=lm(Y~1+X2,data=db)
reg2=lm(Y~0+X1+X2,data=db)
idx=which(substr(names(reg2$coefficients),1,2)=="X1")
v1=reg2$coefficients[idx]
v2=v1-reg1$coefficients["(Intercept)"]
barplot(v2,col=rgb(0,0,1,.4))
sv2=summary(reg2)$coefficients[idx,2]
(bp2=barplot(v2,ylim=range(c(v2-2*sv2,v2+2*sv2))))
errbar(bp2[,1],v2,v2-2*sv2,v2+2*sv2,add=TRUE)

I like that graph, I should admit it. Now, I still have some remaining questions. For instance, can we ensure that, when only the intercept is considered, the value of $\hat\beta_0$ is somewhere between $\hat\beta_A,\cdots,\hat\beta_E$? Is it possible that $\hat\beta_A-\hat\beta_0,\cdots,\hat\beta_E-\hat\beta_0$ are all positive? In that case, I would find that hard to interpret.

Actually, if I really want values that can be seen as compared to some average, why not consider a (weighted) average of $\hat\beta_A,\cdots,\hat\beta_E$? (weights being the proportions in each class, i.e. each region)

w=table(db$X1)
v3=v1-sum(w*v1)/sum(w)
(bp3=barplot(v3,ylim=range(c(v3-2*sv2,v3+2*sv2))))
errbar(bp3[,1],v3,v3-2*sv2,v3+2*sv2,add=TRUE)

(the standard errors are the ones from the previous regression, since we only shifted the estimates). I like that one. But what if, instead of normalizing at the end, we normalize the original dependent variable? By "normalize", I mean "rescale", to have a centered variable.

db$Y0=db$Y-mean(db$Y)
reg3=lm(Y0~0+X1+X2,data=db)
v3=reg3$coefficients[idx]
sv3=summary(reg3)$coefficients[idx,2]
(bp3=barplot(v3,ylim=range(c(v3-2*sv3,v3+2*sv3))))
errbar(bp3[,1],v3,v3-2*sv3,v3+2*sv3,add=TRUE)

This one is nice, because it is extremely simple to explain. But what if, instead of a linear regression, we had a logistic one (with $Y\in\{0,1\}$)? Or a Poisson regression…? So maybe it cannot be the best solution here. Let us try something else… In insurance ratemaking, people like to use "zoniers". It is a two-stage regression. The idea is to run a regression without any spatial components first, and then to consider the regression of the residuals on the spatial variables. Here, it would be something like

reg1=lm(Y~1+X2,data=db)
reg4=lm(residuals(reg1)~0+X1,data=db)

(the second regression is on the residuals of the first one). Since we focus on residuals, those are centered, and we have an easy interpretation of the respective values

sv4=summary(reg4)$coefficients[,2]
v4=reg4$coefficients
(bp4=barplot(v4,names.arg=LETTERS[1:5]))
errbar(bp4[,1],v4,v4-2*sv4,v4+2*sv4,add=TRUE)

I guess that it can also be used in generalized linear models, with Pearson (or deviance) residuals.
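For instance, here is a minimal sketch of that GLM variant (the toy data below is my own illustration, assuming a Poisson response):

set.seed(1)
db2 <- data.frame(X1 = factor(sample(LETTERS[1:5], 500, replace = TRUE)),
                  X2 = rnorm(500))
db2$Y <- rpois(500, exp(.2*db2$X2 + .3*(db2$X1 == "E")))
reg0 <- glm(Y ~ 1 + X2, data = db2, family = poisson)
E <- residuals(reg0, type = "pearson")   # or type = "deviance"
reg4 <- lm(E ~ 0 + X1, data = db2)       # spatial effects, estimated on the residuals
coef(reg4)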
Another possible idea can be the following. Again, the goal is not to have the true values, but to visualize on a graph how regions can be different. Here, all of them are significantly different. And in region $A$, $Y$ is smaller, ceteris paribus (other things equal, in the sense that we have taken into account $x_1$). And in region $E$ it is larger. Here, the graph helps to "see" those differences.

Why not consider a completely different graph? What if we plot the vector $a$ instead of $\alpha$, where $a_A$ can be interpreted as the value of the coefficient if we consider region $A$ against "not region $A$"? What if we consider 5 regressions where dichotomous versions of $Z$ are considered: $Z_j=\mathbf{1}_{Z=j}$?

v5=sv5=rep(NA,5)
names(v5)=LETTERS[1:5]
for(k in 1:5){
  reg=lm(Y~I(X1==LETTERS[k])+X2,data=db)
  v5[k]=reg$coefficients[2]
  sv5[k]=summary(reg)$coefficients[2,2]}

We can plot that sequence of values, including some confidence intervals (that would be related to significance with respect to all other regions)

(bp5=barplot(v5,ylim=range(c(v5-2*sv5,v5+2*sv5))))
errbar(bp5[,1],v5,v5-2*sv5,v5+2*sv5,add=TRUE)

Looking at the values does not give intuitive results, but I have the feeling that it is easy to explain what we plot (we compare each region to "the rest of the world"), and the ordering of $a$ seems to be consistent with that of $\alpha$ (but I could not prove it).

Here are some ideas I got. I should be able to provide other graphs, but I would love to discuss with anyone interested in that topic, to find a proper and nice way to visualize the effects of a categorical explanatory variable in a regression model (which can be a logistic one). Comments are open…

# Holt-Winters with a Quantile Loss Function

Exponential smoothing is an old technique, but it can perform extremely well on real time series, as discussed in Hyndman, Koehler, Ord & Snyder (2008): "when Gardner (1985) appeared, many believed that exponential smoothing should be disregarded because it was either a special case of ARIMA modeling or an ad hoc procedure with no statistical rationale. As McKenzie (1985) observed, this opinion was expressed in numerous references to my paper. Since 1985, the special case argument has been turned on its head, and today we know that exponential smoothing methods are optimal for a very general class of state-space models that is in fact broader than the ARIMA class."

Furthermore, I like it because I think it has nice pedagogical features. Consider simple exponential smoothing, $$L_{t}=\alpha Y_{t}+(1-\alpha)L_{t-1}$$ where $\alpha\in(0,1)$ is the smoothing weight. It is locally constant, in the sense that ${}_{t}\hat Y_{t+h} = L_{t}$

library(datasets)
X=as.numeric(Nile)
SimpleSmooth = function(a){
  T=length(X)
  L=rep(NA,T)
  L[1]=X[1]
  for(t in 2:T){L[t]=a*X[t]+(1-a)*L[t-1]}
  return(L)
}
plot(X,type="b",cex=.6)
lines(SimpleSmooth(.2),col="red")

When using the standard R function, we get

hw=HoltWinters(X,beta=FALSE,gamma=FALSE,l.start=X[1])
hw$alpha
[1] 0.2465579

Of course, one can replicate that optimal value

V=function(a){
  T=length(X)
  L=erreur=rep(NA,T)
  erreur[1]=0
  L[1]=X[1]
  for(t in 2:T){
    L[t]=a*X[t]+(1-a)*L[t-1]
    erreur[t]=X[t]-L[t-1]
  }
  return(sum(erreur^2))
}
optim(.5,V)$par
[1] 0.2464844

Here, the optimal value for $\alpha$ is the one that minimizes the one-step prediction error, for the $\ell_2$ loss function, i.e. $$\sum_{t=2}^n(Y_t-{}_{t-1}\hat Y_t)^2$$ where here ${}_{t-1}\hat Y_t = L_{t-1}$.
But one can consider another loss function, for instance the quantile loss function, $$\ell_{\tau}(\varepsilon)=\varepsilon(\tau-\mathbb{I}_{\varepsilon\leq 0})$$ The optimal coefficient is then obtained using

HWtau=function(tau){
  loss=function(e) e*(tau-(e<=0)*1)
  V=function(a){
    T=length(X)
    L=erreur=rep(NA,T)
    erreur[1]=0
    L[1]=X[1]
    for(t in 2:T){
      L[t]=a*X[t]+(1-a)*L[t-1]
      erreur[t]=X[t]-L[t-1]
    }
    return(sum(loss(erreur)))
  }
  optim(.5,V)$par
}

Here is the evolution of $\alpha^\star_\tau$ as a function of $\tau$ (the level of the quantile considered).

T=(1:49)/50
HW=Vectorize(HWtau)(T)
plot(T,HW,type="l")
abline(h=hw$alpha,lty=2,col="red")

Note that the optimal $\alpha$ is decreasing with $\tau$. I wonder how general this result can be…

Of course, one can consider more general exponential smoothing, for instance the double one, with $$L_t=\alpha Y_t+(1-\alpha)[L_{t-1}+B_{t-1}]$$ and $$B_t=\beta[L_t-L_{t-1}]+(1-\beta)B_{t-1}$$ so that the prediction is now ${}_{t}\hat Y_{t+h} = L_{t}+hB_t$ (it is now locally linear, and no longer constant).

hw=HoltWinters(X,gamma=FALSE,l.start=X[1])
hw$alpha
alpha
0.4200241
hw$beta
beta
0.05973389

The code to compute the smoothed series is the following

DoubleSmooth = function(a,b){
  T=length(X)
  L=B=rep(NA,T)
  L[1]=X[1]; B[1]=0
  for(t in 2:T){
    L[t]=a*X[t]+(1-a)*(L[t-1]+B[t-1])
    B[t]=b*(L[t]-L[t-1])+(1-b)*B[t-1]
  }
  return(L+B)
}

Here also it is possible to replicate R using the $\ell_2$ loss function

V=function(A){
  a=A[1]
  b=A[2]
  T=length(X)
  L=B=erreur=rep(NA,T)
  erreur[1]=0
  L[1]=X[1]; B[1]=X[2]-X[1]
  for(t in 2:T){
    L[t]=a*X[t]+(1-a)*(L[t-1]+B[t-1])
    B[t]=b*(L[t]-L[t-1])+(1-b)*B[t-1]
    erreur[t]=X[t]-(L[t-1]+B[t-1])
  }
  return(sum(erreur^2))
}
optim(c(.5,.05),V)$par
[1] 0.41904510 0.05988304

(up to numerical optimization approximation, I guess). But here also, a quantile loss function can be considered

HWtau=function(tau){
  loss=function(e) e*(tau-(e<=0)*1)
  V=function(A){
    a=A[1]
    b=A[2]
    T=length(X)
    L=B=erreur=rep(NA,T)
    erreur[1]=0
    L[1]=X[1]; B[1]=X[2]-X[1]
    for(t in 2:T){
      L[t]=a*X[t]+(1-a)*(L[t-1]+B[t-1])
      B[t]=b*(L[t]-L[t-1])+(1-b)*B[t-1]
      erreur[t]=X[t]-(L[t-1]+B[t-1])
    }
    return(sum(loss(erreur)))
  }
  optim(c(.5,.05),V)$par
}

and we can plot those values on a graph

T=(1:49)/50
HW=Vectorize(HWtau)(T)
plot(HW[1,],HW[2,],type="l")
abline(v=hw$alpha,lwd=.4,lty=2,col="red")
abline(h=hw$beta,lwd=.4,lty=2,col="red")
points(hw$alpha,hw$beta,pch=19,col="red")

(with $\alpha$ on the $x$-axis, and $\beta$ on the $y$-axis). So here, it is extremely simple to change the loss function, but so far, it has to be done manually. Of course, one can do the same for the seasonal exponential smoothing model.

# The myth of interpretability of econometric models

There are important discussions nowadays about data modeling, and the choice between the "two cultures" (as mentioned in Breiman (2001)), i.e. either econometric models or machine/statistical learning models. We did discuss this issue recently in Econométrie et Machine Learning (so far only in French) with Emmanuel Flachaire and Antoine Ly. One argument often used by econometricians is the interpretability of econometric models, or at least the attempt to get an interpretable model. We also have this discussion in actuarial science, for instance in ratemaking (or insurance pricing). Machine learning based models usually perform better (for some a priori chosen metric), but actuaries claim that econometric models are more easily interpretable.
In the actuarial literature, we assume that claim frequency $Y$ is driven by some non-observable risk factor $\Theta$, and therefore we do have heterogeneous risks in our portfolio. And it can be seen as legitimate to differentiate prices. Assume that this risk factor $\Theta$ is strongly correlated with $X_1$, the age of the driver, because in our portfolio old drivers tend to have more accidents. Here, we could pretend to have a "causal story" (as defined in Freedman (2009)) because of a possible interpretation of the model. So it is natural here to consider a regression model of $Y$ on $X_1$ to derive our actuarial pricing model.

But assume that, possibly, the risk factor $\Theta$ is also strongly correlated with $X_2$, which can be related to spatial features (say latitude, which denotes a north/south position), because in our portfolio drivers living in the south tend to have more accidents (roads are known to be more dangerous there). Here, we could pretend to have a second "causal story". Of course, since $\Theta$ is strongly correlated with $X_1$ and $X_2$, it means that $X_1$ and $X_2$ are strongly correlated. Here also, this correlation can be interpreted (not in a causal way as previously, but still), since we know that old people like to live in southern regions. So, what should we do here? Let us run some simulations to illustrate.

set.seed(123)
n=1e5
Theta=rnorm(n)
X1=Theta+rnorm(n)/8
X2=Theta+rnorm(n)/8
L=exp(-3+Theta)
Y=rpois(n,L)
B=data.frame(Y,X1,X2)

Our first idea was to consider a model where $Y$ is "explained" by the first variable $X_1$,

g1=glm(Y~X1,data=B,family=poisson)
summary(g1)

Coefficients:
         Estimate Std. Error z value Pr(>|z|)
(Inter.) -2.97778    0.01544 -192.88   <2e-16 ***
X1        0.97926    0.01092   89.64   <2e-16 ***

As expected, our variable is "significant", but also, probably more interesting, $X_2$ has no impact on the residuals

B$e1=residuals(g1,type="pearson")
g1e=lm(e1~X2,data=B)
summary(g1e)

Coefficients:
          Estimate Std. Error t value Pr(>|t|)
(Inter.) 0.0003618  0.0031696   0.114    0.909
X2       0.0028601  0.0031467   0.909    0.363

The interpretation is that, once we have corrected claim frequency for the age of the drivers, there is no spatial effect here. So a good model should be based only on the age of the drivers.

But we can also consider the other story. We can consider a model where $Y$ is "explained" by the second variable $X_2$,

g2=glm(Y~X2,data=B,family=poisson)
summary(g2)

Coefficients:
         Estimate Std. Error z value Pr(>|z|)
(Inter.) -2.97724    0.01544 -192.81   <2e-16 ***
X2        0.97915    0.01093   89.56   <2e-16 ***

Here also we have a valid model, which can be interpreted, and here also $X_1$ has no impact on the residuals

B$e2=residuals(g2,type="pearson")
g2e=lm(e2~X1,data=B)
summary(g2e)

Coefficients:
          Estimate Std. Error t value Pr(>|t|)
(Inter.) 0.0004863  0.0031733   0.153    0.878
X1       0.0027979  0.0031504   0.888    0.374

The story is similar here: if we correct for the spatial pattern, claim frequency does not depend on the age of the driver. So, what should we do now? We do have two models, and each of them is as interpretable as the other one. Note that we cannot use any statistical tool to distinguish the two: they are comparable

AIC(g1)
[1] 51013.39
AIC(g2)
[1] 51013.15

Why not incorporate the two explanatory variables $X_1$ and $X_2$ at the same time in our regression model, and let "the model" decide what to do…?

g=glm(Y~X1+X2,data=B,family=poisson)
summary(g)

Coefficients:
         Estimate Std. Error z value Pr(>|z|)
(Inter.) -2.98132    0.01547 -192.723   <2e-16 ***
X1        0.49310    0.06226    7.920 2.38e-15 ***
X2        0.49375    0.06225    7.931 2.17e-15 ***

It looks like we completely lost the interpretability of the model, since our two explanatory variables are (strongly) correlated. Actually, instead of saying "use one, and drop the other one (since it brings no further information)", the model says "use both, each one will explain half of the variable". Strange interpretation, isn't it? So why not try some LASSO here?

library(glmnet)
fit=glmnet(x=as.matrix(B[,c("X1","X2")]), y=B$Y, family="poisson")
plot(fit,xvar="lambda")

Here also, it says that we either keep both, or none. So it cannot be used for variable selection (which is an important motivation for using the LASSO technique). So, what should we do if we have several interpretable models, but no way to choose between them? Because usually, we claim that we prefer to use a model with an interpretation. But what should be done here?

# Optimal Portfolios, or sort of…

Last week, we got our first class on portfolio optimization. We've seen Markowitz's theory, where expected returns and the covariance matrix are given,

> download.file(url="http://freakonometrics.free.fr/portfolio.r",destfile="portfolio.r")
> source("portfolio.r")
> library(zoo)
> library(FRAPO)
> library(IntroCompFinR)
> library(rrcov)
> data( StockIndex )
> pzoo = zoo ( StockIndex , order.by = rownames ( StockIndex ) )
> rzoo = ( pzoo / lag ( pzoo , k = -1) - 1 ) * 100
> Moments <- function ( x , method = c ( "CovClassic" , "CovMcd" , "CovMest" , "CovMMest" , "CovMve" , "CovOgk" , "CovSde" , "CovSest" ) , ... ) {
+   method <- match.arg ( method )
+   ans <- do.call ( method , list ( x = x , ... ) )
+   return ( getCov ( ans ) ) }
> covmat=Moments(as.matrix(rzoo),"CovClassic")
> (covmat=round(covmat,1))
        SP500 N225 FTSE100 CAC40 GDAX  HSI
SP500    17.8 12.7    13.8  17.8 19.5 18.9
N225     12.7 36.6    10.8  15.0 16.2 16.7
FTSE100  13.8 10.8    17.3  18.8 19.4 19.1
CAC40    17.8 15.0    18.8  30.9 29.9 22.8
GDAX     19.5 16.2    19.4  29.9 38.0 26.1
HSI      18.9 16.7    19.1  22.8 26.1 58.1
> er=apply(as.matrix(rzoo),2,mean)
> (er=round(er,1))
 SP500   N225 FTSE100  CAC40   GDAX    HSI
   0.6   -0.2     0.4    0.5    0.8    1.0
> ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)

We can now visualize the efficient frontier (and admissible portfolios) below

> u=c(12,ef$sd,12,12)
> v=c(5,ef$er,-1,5)
> plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return", xlim=c(3.5,11),ylim=c(0,2.5),col="red",lwd=1.5)
> points(sqrt(diag(covmat)),er,pch=19,col="blue")
> text(sqrt(diag(covmat)),er,names(er),pos=4, col="blue",cex=.6)
> polygon(u,v,border=NA,col=rgb(0,0,1,.3))

That was the starting point of our class. We did also mention that something important was actually hard to visualize on that graph: the correlation between returns. It is not in the points (which are univariate, with expected return and standard deviation), but in the efficient frontier.
For instance, here is our correlation matrix

> (cormat=covmat/(sqrt(diag(covmat) %*% t(diag(covmat)))))
        SP500 N225 FTSE100 CAC40 GDAX  HSI
SP500    1.00 0.50    0.79  0.76 0.75 0.59
N225     0.50 1.00    0.43  0.45 0.43 0.36
FTSE100  0.79 0.43    1.00  0.81 0.76 0.60
CAC40    0.76 0.45    0.81  1.00 0.87 0.54
GDAX     0.75 0.43    0.76  0.87 1.00 0.56
HSI      0.59 0.36    0.60  0.54 0.56 1.00

We can actually change the correlation between SP500 and FTSE100 (which is here .786)

courbe=function(r=.786){
  R=cormat
  R[1,3]=R[3,1]=r
  covmat2=(sqrt(diag(covmat) %*% t(diag(covmat))))*R
  ef <- efficient.frontier(er, covmat2, alpha.min=-2.5, alpha.max=2.5, nport=50)
  plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return", xlim=c(3.5,11),ylim=c(0,2.5),col="red",lwd=1.5)
  points(sqrt(diag(covmat)),er,pch=19,col=c("blue","red")[c(2,1,2,1,1,1)])
  text(sqrt(diag(covmat)),er,names(er),pos=4,col=c("blue","red")[c(2,1,2,1,1,1)],cex=.6)
  polygon(u,v,border=NA,col=rgb(0,0,1,.3))
}

For instance, with a correlation of 0.6, we get the following efficient frontier

> courbe(.6)

and with a stronger correlation

> courbe(.9)

So clearly, correlation does matter. A lot. But more importantly, one should keep in mind that expected returns and covariances are not given, but estimated. Previously, we did use the standard estimator for the variance matrix. But another (more robust) estimator can be considered

covmat=Moments(as.matrix(rzoo),"CovSde")
er=apply(as.matrix(rzoo),2,mean)
ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)
plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return",xlim=c(3.5,11),ylim=c(0,2.5),col="red",lwd=1.5)
points(sqrt(diag(covmat)),er,pch=19,col="blue")
text(sqrt(diag(covmat)),er,names(er),pos=4,col="blue",cex=.6)
polygon(u,v,border=NA,col=rgb(0,0,1,.3))

It did influence the (horizontal) position of the points, since the variances are now different, as well as the efficient frontier, with clearly much lower variances that can be reached.

And to illustrate a last point, namely the fact that we do have estimators based on observed returns, what if we had observed different ones? A way to get an idea of what might have happened is to use the bootstrap, e.g. on daily returns.
> covmat=Moments(as.matrix(rzoo),"CovClassic")
> er=apply(as.matrix(rzoo),2,mean)
> ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)
> a=sqrt(diag(covmat))
> b=er
> k=1
> plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return", xlim=c(3.5,11),ylim=c(0,2.5),col="white",lwd=1.5)
> polygon(u,v,border=NA,col=rgb(0,0,1,.3))
> for(i in 1:100){
+   id=sample(nrow(rzoo),replace=TRUE)
+   covmat=Moments(as.matrix(rzoo)[id,],"CovClassic")
+   er=apply(as.matrix(rzoo)[id,],2,mean)
+   points(sqrt(diag(covmat))[k],er[k],cex=.5)
+ }

or for another asset. Here is what we got on the (estimated) efficient frontier

> covmat=Moments(as.matrix(rzoo),"CovClassic")
> er=apply(as.matrix(rzoo),2,mean)
> ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)
> plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return", xlim=c(3.5,11),ylim=c(0,2.5),col="white",lwd=1.5)
> points(sqrt(diag(covmat)),er,pch=19,col="blue")
> text(sqrt(diag(covmat)),er,names(er),pos=4, col="blue",cex=.6)
> polygon(u,v,border=NA,col=rgb(0,0,1,.3))
> for(i in 1:100){
+   id=sample(nrow(rzoo),replace=TRUE)
+   covmat=Moments(as.matrix(rzoo)[id,],"CovClassic")
+   er=apply(as.matrix(rzoo)[id,],2,mean)
+   ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)
+   lines(ef$sd,ef$er,col="red")
+ }

Thus, it is somehow rather difficult to assess whether a portfolio is optimal, or not… At least from a statistical perspective…

# Traffic Flow of Kota Kinabalu (with R)

This morning, we had our first practicals on network flows, using an example mentioned in some papers published by Noraini Abdullah and Ting Kien Hua, max flow min cut theorem to minimize traffic congestion in Kota Kinabalu and application of the Shortest Path and Maximum Flow with Bottleneck in Traffic Flow of Kota Kinabalu. From the roads mentioned in the articles, I did my best to locate the nodes on a map,

m=matrix(c(0,5.995910, 116.105520,
1,5.992737, 116.093718,
2,5.992066, 116.109883,
3,5.976947, 116.095760,
4,5.985766, 116.091580,
5,5.988940, 116.080112,
6,5.968318, 116.080764,
7,5.977454, 116.075460,
8,5.974226, 116.073604,
9,5.969651, 116.073753,
10,5.972341, 116.069270,
11,5.978818, 116.072880),3,12)

which can be visualized below

library(OpenStreetMap)
map = openmap(c(lat= 6.000, lon= 116.06), c(lat= 5.960, lon= 116.12))
map = openproj(map)
plot(map)
points(t(m[3:2,]),col="black", pch=19, cex=3 )
text(t(m[3:2,]),c("s",1:10,"t"),col="white")

If the source is realistic (up north), I do not feel very comfortable with the location of the sink (on the west). But let's pretend it's fine (to do the maths, at least).
To extract information about edge capacity on that network, use the following code, which will extract the three tables from the paper

```r
library(devtools)
install_github("ropensci/tabulizer")
library(tabulizer)
location <- 'http://www.jistm.com/PDF/JISTM-2017-04-06-02.pdf'
out <- extract_tables(location)
```

With Windows, it seems to be necessary to download another package first

```r
library(devtools)
install_github("ropensci/tabulizerjars")
install_github("ropensci/tabulizer")
library(tabulizer)
location <- 'http://www.jistm.com/PDF/JISTM-2017-04-06-02.pdf'
out <- extract_tables(location)
```

Now we can get our data frame with capacities

```r
B1=as.data.frame(out[[2]])
B2=as.data.frame(out[[3]])
E=data.frame(from=B1[3:20,"V3"],
             to=B1[3:20,"V4"])
E=E[-c(6,8),]
capacity=as.character(B2$V3[-1])
capacity[6]="843"
capacity[4]="2913"
E$capacity=as.numeric(capacity)
```

We can add those edges on our map (without the arrows indicating the direction; it would be too heavy to read)

```r
plot(map)
points(t(m[3:2,]),col="black", pch=19, cex=3 )
B=data.frame(i=as.character(c("s",paste("V",1:10,sep=""),"t")),
             x=m[3,],y=m[2,])
for(i in 1:nrow(E)){
  i1=which(B$i==as.character(E$from[i]))
  i2=which(B$i==as.character(E$to[i]))
  segments(B[i1,"x"],B[i1,"y"],B[i2,"x"],B[i2,"y"],lwd=3)
}
text(t(m[3:2,]),c("s",1:10,"t"),col="white")
```

To get the graph with capacities, an alternative is to use

```r
library(igraph)
g=graph_from_data_frame(E)
E(g)$label=E$capacity
plot(g)
```

but it does not respect the geographical locations of the nodes. That can actually be done using

```r
plot(g, layout=as.matrix(B[,c("x","y")]))
```

To get a better understanding of the capacities of the roads, use

```r
plot(g, layout=as.matrix(B[,c("x","y")]),
     edge.width=E$capacity/200)
```

From that network with capacities, the goal is to determine the maximum flow on that network, from the source to the sink. This can be done with R using

```r
> (m=max_flow(graph=g, source="s", target="t"))
$value
[1] 2571

$flow
 [1] 1191 1380 1422 1380  231    0  231    0 1149 1422 1149    0    0 1149 1422
[16] 1149
```

Our maximum flow here is 2571, which is different from what is actually claimed in both papers, *Max Flow Min Cut Theorem…* and *Application of the Shortest Path…* ("the maximum flow for the capacitated network with 12 nodes and 16 edges of the selected scope in this study was 2598 vehicles per hour"); there are clearly typos in the papers, since the values in the tables and on the graphs differ. Here, I used the ones from the tables.

```r
E$flux1=m$flow
E(g)$label=E$flux1
plot(g, layout=as.matrix(B[,c("x","y")]),
     edge.width=E$flux1/200)
```

That is nice, but rather odd. Actually, a much simpler flow can be considered, with the same global value

```r
E$flux2=c(1422,1149,1422,1149,0,0,0,0,
          1149,1422,1149,0,0,1149,1422,1149)
E(g)$label=E$flux2
plot(g, layout=as.matrix(B[,c("x","y")]),
     edge.width=E$flux2/200)
```

Nice, isn't it? It is actually possible to do exactly the same with another paper they have, on the same city, *Traffic Congestion Problem of Road Networks in Kota Kinabalu* (after a quick aside on minimum cuts, below).
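As that aside: the papers' titles invoke the max-flow min-cut theorem, and igraph lets us check it directly. This is a minimal sketch (the mc name is my own), assuming the same graph g with its capacity edge attribute as above; by the theorem, the capacity of the minimum s-t cut should equal the maximum flow, 2571.

```r
# minimum s-t cut; its capacity equals the value of the maximum flow
mc <- min_cut(graph=g, source="s", target="t", value.only=FALSE)
mc$value   # should be 2571
mc$cut     # the edges crossing the cut (the bottleneck of the network)
```

Now, for the other paper: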
```r
location <- 'http://www.worldresearchlibrary.org/up_proc/pdf/999-150486366625-30.pdf'
out <- extract_tables(location)
dim(out[[3]])
B1=as.data.frame(out[[3]])
E=data.frame(from=B1[2:61,"V2"],
             to=B1[2:61,"V3"],
             capacity=B1[2:61,"V4"])
E$capacity=as.numeric(as.character(E$capacity))
library(igraph)
g=graph_from_data_frame(E)
m=max_flow(graph=g, source="S", target="T")
E$flux1=m$flow
E(g)$label=E$flux1
plot(g, edge.width=E$flux1/200,
     edge.arrow.size=0.15)
```

Here the value of the maximal flow is 4017, just as they found in the original paper.

# Multinomial Logit as an Iterated Logit Regression

For the second section of the course at ENSAE, yesterday, we saw how to run a multinomial logistic regression model. It is simply an extension of the binomial logistic regression. But actually, it is also possible to consider iterated binomial regressions.

Consider here a response variable $Y$ with a multinomial distribution (3 factors, to have something more general than the binomial), taking values $\{A,B,C\}$, with respective probabilities $\mathbf{p}=(p_A,p_B,p_C)$. Here is a code to generate some multinomial variables

```r
msample=function(A,B,C){
  # A: set of possible values, B: number of draws,
  # C: matrix of probabilities (one row per draw)
  Y=rep(NA,B)
  for(i in 1:B){Y[i]=sample(A,size=1,prob=C[i,])}
  return(Y)
}
```

and here is a code to generate a dataset with $n$ rows,

```r
generate3=function(n,x,pb=c(-2,0)){
  set.seed(x)
  X1=runif(n)
  X2=runif(n)
  X3=runif(n)
  s1=pb[1]+X1+X2
  s2=pb[2]-X1+X2
  P1=exp(s1)/(1+exp(s1)+exp(s2))
  P2=exp(s2)/(1+exp(s1)+exp(s2))
  Y=msample(0:2,n,cbind(1-P1-P2,P1,P2))
  df=data.frame(Y=Y,X1=X1,X2=X2,X3=X3)
  return(df)
}
```

Let us generate a training dataset and a validation one

```r
pb=c(.31,.42)
DF1=generate3(1000,1,pb=pb)
DF2=generate3(500,2,pb=pb)
```

With a multinomial logistic regression,
$$\mathbb{P}[Y=A|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}$$
$$\mathbb{P}[Y=B|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}$$
$$\mathbb{P}[Y=C|\mathbf{x}]=\frac{1}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}$$

For convenience, consider the most popular factor in our training dataset

```r
modalite=names(sort(table(DF1$Y),decreasing = TRUE))
```

Consider a regression model on the simulated dataset (with several covariates), let us estimate it, and let us get predictions.

```r
library(nnet)
reg=multinom(as.factor(Y) ~ ., data = DF1)
mp1=predict(reg, DF1, "probs")
mp2=predict(reg, DF2, "probs")
```

An alternative can be the following. Consider a first regression model on the Bernoulli variable $Y_A=\mathbf{1}(Y=A)$. Actually, we will consider the most frequent factor, but for convenience, assume that it is $A$:
$$\mathbb{P}[Y_A=1|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{a}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{a}]}$$
On our dataset, estimate that model, and get predictions. Then, on the subsample where $Y\neq A$, define another Bernoulli variable $Y_B=\mathbf{1}(Y=B)$. We can estimate that model and derive two probabilities, $\mathbb{P}(Y=B|Y\neq A)$ and $\mathbb{P}(Y=C|Y\neq A)$ (the sum of the two being equal to 1). Based on those two models, it is possible to compute the three probabilities we are looking for: $\mathbb{P}[Y=A]$ is obtained from the first model, and we can derive the other two from $\mathbb{P}[Y=B|Y\neq A]\cdot\mathbb{P}[Y\neq A]$ and $\mathbb{P}[Y=C|Y\neq A]\cdot\mathbb{P}[Y\neq A]$.
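One step worth making explicit (it is only implicit above): the three reconstructed quantities automatically sum to one, so they do form a proper probability distribution. Writing $p_A=\mathbb{P}[Y=A|\mathbf{x}]$ and $p_{B}=\mathbb{P}[Y=B|Y\neq A,\mathbf{x}]$,
$$p_A + (1-p_A)\,p_{B} + (1-p_A)\,(1-p_{B}) = p_A + (1-p_A) = 1$$
which is exactly what the cbind() reconstruction in the code below computes.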
```r
reg1=glm((Y==modalite[1])~.,data=DF1,family=binomial)
reg2=glm((Y==modalite[2])~.,data=DF1[-which(DF1$Y==modalite[1]),],family=binomial)
p11=predict(reg1, newdata=DF1, type="response")
p12=predict(reg2, newdata=DF1, type="response")
p21=predict(reg1, newdata=DF2, type="response")
p22=predict(reg2, newdata=DF2, type="response")
mmp1=cbind(p11,(1-p11)*p12,(1-p11)*(1-p12))
mmp2=cbind(p21,(1-p21)*p22,(1-p21)*(1-p22))
colnames(mmp1)=colnames(mmp2)=modalite
```

Let us compare the predicted probabilities, on the same dataset (here the training dataset)

```r
> mmp1[1:9,c("0","1","2")]
           0         1         2
1 0.19728737 0.4991805 0.3035321
2 0.17244580 0.5648537 0.2627005
3 0.19291753 0.5971058 0.2099767
4 0.09087176 0.7787304 0.1303978
5 0.23400225 0.4083022 0.3576955
6 0.18063647 0.6637352 0.1556283
7 0.13188881 0.7402710 0.1278401
8 0.13776970 0.6524959 0.2097344
9 0.12325864 0.6790336 0.1977078
> mp1[1:9,c("0","1","2")]
           0         1         2
1 0.19691036 0.5022692 0.3008205
2 0.17123189 0.5680647 0.2607034
3 0.19293066 0.5984402 0.2086291
4 0.08821851 0.7813318 0.1304497
5 0.23470739 0.4109990 0.3542936
6 0.18249687 0.6602168 0.1572863
7 0.13128711 0.7400898 0.1286231
8 0.13525341 0.6553618 0.2093848
9 0.12090016 0.6815915 0.1975084
```

The two are very close. So yes, it is possible to see the multinomial regression as a sequence of binomial regressions.

# Traveling Salesman

In the second part of the course on graphs and networks, we will focus on economic applications, and on flows. The first series of slides is on the traveling salesman problem. Slides are available online.

# Networks with R

In order to practice with network data in R, we have been playing with the Padgett (1994) Florentine wedding dataset (discussed in the lecture). The dataset is available from

```r
> library(network)
> data(flo)
> nflo=network(flo,directed=FALSE)
> plot(nflo, displaylabels = TRUE, boxed.labels = FALSE)
```

The next step was to move from the network package to igraph. Since we have the adjacency matrix, we can use it

```r
> library(igraph)
> iflo=graph_from_adjacency_matrix(flo, mode = "undirected")
> plot(iflo)
```

The good thing is that a lot of functions are available. For instance, we can get shortest paths between two specific nodes, and we can give appropriate colors to the nodes that we'll cross

```r
> AP=all_shortest_paths(iflo, from="Peruzzi", to="Ginori")
> L=AP$res[[1]]
> V(iflo)$color="yellow"
> V(iflo)$color[L[2:4]]="light blue"
> V(iflo)$color[L[c(1,5)]]="blue"
> plot(iflo)
```

We can also visualize the edges, but I found it slightly more complicated (to extract the edges from the output)

```r
> liens=c(paste(as.character(L)[1:4],"--",as.character(L)[2:5],sep=""),
+         paste(as.character(L)[2:5],"--",as.character(L)[1:4],sep=""))
> df=as.data.frame(ends(iflo,E(iflo)))
> names(df)=c("src","target")
> lstn=sort(unique(c(as.character(df[,1]),as.character(df[,2]),"Pucci")))
> Eliens=paste(as.numeric(factor(df[,1],levels=lstn)),"--",
+              as.numeric(factor(df[,2],levels=lstn)),sep="")
> EU=unlist(lapply(Eliens,function(x) x%in%liens))
> E(iflo)$color=c("grey","black")[1+EU]
> plot(iflo)
```

But it works. It is also possible to use some D3.js visualization

```r
> library(networkD3)
> simpleNetwork(df)
```

Then the next question was to add a vertex to the network. The simplest way to do it is probably through the adjacency matrix

```r
> flo2=flo
> flo2["Pucci","Bischeri"]=1
> flo2["Bischeri","Pucci"]=1
> nflo2=network(flo2,directed=FALSE)
> plot(nflo2, displaylabels = TRUE, boxed.labels = FALSE)
```

Then, we've been playing with centrality measures.
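Before calling the igraph functions, a quick sanity check, as a minimal sketch (using only the adjacency matrix flo already loaded, and base R): since the network is binary and undirected, the degree of each family is simply the row sum of the adjacency matrix.

```r
# degree centrality by hand: row sums of the (binary, symmetric) adjacency matrix
sort(rowSums(flo), decreasing = TRUE)
```

This should match the output of degree(iflo) used below.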
```r
> plot(iflo,vertex.size=betweenness(iflo))
```

The goal was to see how related they were. Here, for all of them, "Medici" is the central node. But what about the others?

```r
> B=betweenness(iflo)
> C=closeness(iflo)
> D=degree(iflo)
> E=eigen_centrality(iflo)$vector
> base=data.frame(betw=B,close=C,deg=D,eig=E)
> cor(base)
           betw     close       deg       eig
betw  1.0000000 0.5763487 0.8333763 0.6737162
close 0.5763487 1.0000000 0.7572778 0.7989789
deg   0.8333763 0.7572778 1.0000000 0.9404647
eig   0.6737162 0.7989789 0.9404647 1.0000000
```

Those measures are quite correlated. It is also possible to use hierarchical clustering to visualize how close those centrality measures are

```r
> H=hclust(dist(t(base)), method="ward")
> plot(H)
```

Instead of looking at the values of the centrality measures, it is possible to look at their ranks

```r
> rbase=base
> for(i in 1:4) rbase[,i]=rank(base[,i])
> H=hclust(dist(t(rbase)), method="ward")
> plot(H)
```

Here the eigenvector measure is very close to the degree of the vertices. Finally, it is possible to seek clusters (in the context of coalitions here, in case a war should start between those families)

```r
> kc <- fastgreedy.community(iflo)
```

Here we have 3 classes (+1 for the node that is disconnected from the other families)

```r
> V(iflo)$color=c("yellow","orange","light blue")[membership(kc)]
> plot(iflo)
> plot(kc,iflo)
```

# I Got The Feelin'

Last week, I've been going through my CD collection, trying to find records I haven't been listening to for a while. And I got the feeling that the music I listen to nowadays is slower than what I was listening to in my 20s. I was wondering if that was an age issue, or if music in the 90s was simply "faster" than what was released in 2015. So I tried to scrape the BPM database to get a more appropriate answer to this "feeling" I have. I did extract two pieces of information: the BPM (beats per minute) and the year (of release). Here is a function to extract information from the website,

```r
> library(XML)
> extractbpm = function(VBP,P){
+   url=paste("https://www.bpmdatabase.com/music/search/?artist=&title=&bpm=",VBP,"&genre=&page=",P,sep="")
+   download.file(url,destfile = "page.html")
+   tables=readHTMLTable("page.html")
+   return(tables)}
```

For instance

```r
> extractbpm(115,13)$`track-table`
```

|  # | Artist | Title | Mix | BPM | Genre | Label | Year |
|---|---|---|---|---|---|---|---|
| 1 | Eros Ramazzotti y Claudio Guidetti | Dimelo A Mi | — | 115 | — | Sony | 2009 |
| 2 | Everclear | Volvo Driving Soccer Mom | — | 115 | — | Capitol Records | 2003 |
| 3 | Evils Toy | Dear God | — | 115 | — | — | — |
| 4 | Expose | In Walked Love | — | 115 | — | Arista Records | 1994 |
| 5 | Fabolous ft. 2 Chainz | When I Feel Like It | Explicit | 115 | Urban | Def Jam/Island Def Jam | 2013 |
| 6 | Fabolous ft. 2 Chainz | When I Feel Like It | — | 115 | Urban | Def Jam/Island Def Jam | 2013 |
| 7 | Fabolous ft. 2 Chainz | When I Feel Like It | Radio Edit | 115 | Urban | Def Jam/Island Def Jam | 2013 |
| 8 | Fanny Lu | Fanfarron | — | 115 | Latin Pop | Universal Latino | 2011 |
| 9 | Featurecast | Ain't My Style | Psychemagik Dub | 115 | — | Jalapeno | 2012 |
| 10 | Fem 2 Fem | Obsession | — | 115 | — | Critique Records | 1993 |
| 11 | Fernando Villalona | Mi Delito | — | 115 | — | Mt&vi Records/caminante Records | 2001 |
| 12 | Fever Ray | Triangle Walks | Rex The Dog Remix | 115 | — | Little Idiot/Mute | 2012 |
| 13 | Firstlove | Freaky | — | 115 | — | Jwp Music | 2000 |
| 14 | Fito Blanko | Pegadito Suavecito | — | 115 | Merengue Mambo | Crown Loyalty | 2012 |
| 15 | Flechazo Del Norte | Mariposa Traicionera | — | 115 | — | Hacienda | 2010 |
| 16 | Fluke | Switch/Twitch | Album Version | 115 | — | One Little Indian Records | 2004 |
| 17 | Flyleaf | Something Better | — | 115 | Alternative | A&M/Octone | 2013 |
| 18 | FM Static | The Next Big Thing | — | 115 | — | Tooth & Nail Records | 2007 |
| 19 | Fonseca | Eres Mi Sueno | — | 115 | Merengue Mambo | 10 | 2012 |
| 20 | Fonseca ft. Maffio & Nayer | Eres Mi Sueno | Urban Version | 115 | — | 10 | 2012 |
| 21 | Francesca Battistelli | Have Yourself A Merry Little Christmas | — | 115 | — | Word/Fervent/Warner Bros | 2009 |
| 22 | Frankie Ballard | Young & Crazy | — | 115 | Country | Warner Bros | 2015 |
| 23 | Frankie J. | More Than Words | Mynt Rocks Radio Edit | 115 | — | Columbia | 2005 |
| 24 | Frank Sinatra | The Hucklebuck | — | 115 | Jazz | Columbia | 1950 |
| 25 | Franz Ferdinand | The Dark Of The Matinée | — | 115 | New Wave | — | 2004 |

We have here one of the few old songs: a 1950 tune by Frank Sinatra. Let us now scrape the whole website with a simple loop (where the BPM runs from 40 to 200). Start with

```r
> BASE=NULL
> vbp=40
> p=1
```

and then run a loop based on

```r
> while(vbp<=200){
+   F=extractbpm(vbp,p)
+   if(length(F)==1){
+     BASE=rbind(BASE,F[[1]][,c("Artist","Title","BPM","Year")])
+     p=p+1}
+   if(length(F)==0){
+     vbp=vbp+1
+     p=1}}
```

Then we should clean the dataset

```r
BASE=BASE[!duplicated(BASE),]
BASE=BASE[-which(BASE$Year=="—"),]
BASE$y=as.numeric(as.character(BASE$Year))
BASE$bpm=as.numeric(as.character(BASE$BPM))
BASE=BASE[BASE$y>=1940,]
```

and we end up with almost 50,000 tunes.

```r
boxplot(BASE$bpm~as.factor(BASE$y),
        col="light blue")
```

Over the past 20 years, it looks like the speed of tunes has declined (let us forget the tunes of 2017; clearly, we have a problem here…)

```r
library(mgcv)
plot(BASE$y,BASE$bpm)
reg=gam(bpm~s(y),data=BASE)
B=data.frame(y=1950:2017)
p=predict(reg,newdata=B)
lines(B$y,p,lwd=3,col="red")
```

which is confirmed with a (smoothed) regression

```r
p2=predict(reg,newdata=B,se.fit=TRUE)
plot(B$y,p2$fit,lwd=3,col="red",type="l",ylim=c(90,140))
lines(B$y,p2$fit+p2$se.fit)
lines(B$y,p2$fit-p2$se.fit)
```

even when incorporating the confidence band. Bumps are probably related to the smoothing parameters but, indeed, it looks like the average speed of music tunes has decreased, from 110-115 in the 90s to less than 100 nowadays. Now, to be honest, I would love to have access to personal information from iTunes, Deezer or Spotify, to get a better understanding (e.g. when in the week, or in the day, do we like to listen to faster music?). But so far, I could not get access to such data. Too bad…
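One last remark on the 2017 anomaly: it is most likely a sample-size artifact, since the year was not over when the database was scraped. A quick check, as a sketch reusing the BASE data frame built above:

```r
# number of tunes per year for the most recent years; a very small
# count for 2017 would explain the unstable boxplot and fitted curve
tail(table(BASE$y))
```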
http://www.gamedev.net/topic/655472-normalize-vertices-in-vertexshader/#entry5146525
# Normalize Vertices in VertexShader

3 replies to this topic

### #1 montify, posted 12 April 2014 - 04:21 AM

Hello,

with my old vertex shader, I normalize and scale each vertex like this:

```hlsl
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float3 worldPosition = mul(input.Position, World).xyz;
    float3 normalized = normalize(worldPosition);
    float4 scaled = float4(normalized * SeaLevel, 1);
    output.Position = mul(mul(scaled, View), Projection);
    output.UV = input.UV;
    return output;
}
```

So, to multiply the World, View and Projection matrices on the CPU, I have a new vertex shader:

```hlsl
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = mul(input.Position, worldViewProj);
    return output;
}
```

My question is: how can I normalize each vertex in the new shader? This is for my planet, so I have a cube, and the center of the cube is the origin (0,0,0).

### #2 cozzie, posted 12 April 2014 - 07:36 AM

Hi. I'm not sure if I understand correctly. The second vertex shader only transforms the model-space vertex into view space (and all transformations in between). If you still want to scale the source vertex, you need to keep some of the old vertex shader. For example (I've assumed SeaLevel is an extern constant):

```hlsl
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float3 worldPosition = normalize(mul(input.Position, World).xyz);
    float4 scaled = float4(worldPosition * SeaLevel, 1);
    output.Position = mul(scaled, worldViewProj);
    output.UV = input.UV;
    return output;
}
```

I'm not sure, though, why you have to normalize the world position before scaling. The result might be that all vertices are now "quite close together". Maybe you can save the non-scaled position, determine the scale, then scale the position (and multiply by worldViewProj).

### #3 Jason Z, posted 12 April 2014 - 08:10 AM

If you need multiple copies of your position, just make an extra copy and multiply it accordingly. Unfortunately, I can't really follow what you are trying to do. Can you describe it in a little more detail? Most likely, if you describe how this geometry appears in a scene, it will be helpful.

### #4 montify, posted 12 April 2014 - 08:49 AM

Thank you for the reply... So I have a QuadTree (chunked LOD) like here: http://www.gamedev.net/topic/621554-imprecision-problem-on-planet/

I scale and normalize each vertex in the vertex shader; the advantage is that I can work with a cube on the CPU side, with one vertex buffer for each child/patch. Every patch has its own world matrix, so I can calculate each position of the patch. To create a sphere, I use "Cube2Sphere" mapping, which means the center of the cube should be (0,0,0) in world space, so that normalizing each vertex yields a sphere.

But when you open the link, you see my problem: float precision. I read that I should calculate the World, View and Projection matrices on the CPU side with doubles. The result is that I have one matrix (worldViewProjection), and here my question starts.
I found "GPU relative-to-eye" rendering here:

http://www.amazon.de/3D-Engine-Design-Virtual-Globes/dp/1568817118/ref=sr_1_1?ie=UTF8&qid=1397314306&sr=8-1&keywords=openglobe

https://github.com/virtualglobebook/OpenGlobe/tree/master/Source/Examples/Chapter05/Jitter/GPURelativeToEyeDSFUN90

It's in OpenGL, and it also uses a worldViewProj matrix instead of 3 single matrices.

Edited by montify, 12 April 2014 - 09:34 AM.
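For what it's worth, a common way to reconcile the two goals, sketched below, is to keep the per-patch World matrix as its own shader constant for the cube-to-sphere step, and to pre-combine only View and Projection on the CPU (where the combination can be done in doubles before casting to float). This is a sketch of that approach, not something posted in the thread; it assumes montify's original VertexShaderInput/VertexShaderOutput structures and the SeaLevel constant:

```hlsl
float4x4 World;     // per-patch world matrix, kept separate
float4x4 ViewProj;  // View * Projection, combined on the CPU
float    SeaLevel;  // planet radius

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    // cube-to-sphere: push the world-space cube vertex onto the sphere
    float3 worldPosition = mul(input.Position, World).xyz;
    float4 onSphere = float4(normalize(worldPosition) * SeaLevel, 1);
    output.Position = mul(onSphere, ViewProj);
    output.UV = input.UV;
    return output;
}
```

This keeps the normalization in world space (so the cube's center stays at the origin) while still saving one matrix multiplication per vertex; the jitter problem from the OpenGlobe links above is a separate issue, which that book addresses by rendering relative to the eye.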
http://www.mzan.com/article/48751447-what-is-a-pythonic-way-of-specifying-which-class-to-import-using-config-file-and.shtml
# What is a pythonic way of specifying which class to import using config file and variable names

user948, published March 18, 2018, 1:55 am

I have written a JSON config file that lists all the inputs for running a package. That package, though, has several classes that are interchangeable. What is a pythonic way to define the imports from within the config file? I essentially want a cleaner way of doing this:

JSON file:

```json
...
"import_location": "class_one_location",
"import_name": "class_one",
...
```

Python file:

```python
# (not valid Python; this is the pattern the question is asking how to achieve)
if config.import_name == 'class_one':
    from config.class_one_location import config.import_name as class
if config.import_name == 'class_two':
    from config.class_two_location import config.import_name as class
```

As such, I can't do this, because the import statement doesn't accept variable names. I also don't believe I can use __import__, because I need to specify both the path and the module name within that path, although I may be misunderstanding. Thanks!
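The page shows no answers, but the standard approach to exactly this problem is the importlib module from the standard library. A minimal sketch (the config-file name and the instantiation at the end are my own assumptions, not from the question):

```python
import importlib
import json

# load the JSON config described in the question (hypothetical file name)
with open("config.json") as f:
    config = json.load(f)

# import_location is assumed to hold a full dotted module path,
# e.g. "mypackage.class_one_location"
module = importlib.import_module(config["import_location"])

# look up the class by name on the imported module
cls = getattr(module, config["import_name"])
instance = cls()
```

Unlike bare __import__, importlib.import_module resolves dotted paths directly, and getattr covers the "name within that path" part, so both strings can come straight from the config file.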
http://bodembedekkers.net/docs/category/2ae318-types-of-dental-materials
15 Weeks, 10–14 hours per week.

Machine learning methods are commonly used across engineering and sciences, from computer systems to physics. Moreover, commercial sites such as search engines, recommender systems (e.g., Netflix, Amazon), advertisers, and financial institutions employ machine learning algorithms for content recommendation, predicting customer behavior, compliance, or risk.

**Machine Learning with Python: From Linear Models to Deep Learning** (MITx 6.86x), offered by the Massachusetts Institute of Technology on edX (https://www.edx.org/course/machine-learning-with-python-from-linear-models-to), is an in-depth introduction to the field of machine learning, from linear models to deep learning and reinforcement learning, through hands-on Python projects. It is course 4 of 4 in the MITx MicroMasters program in Statistics and Data Science. Instructors: Regina Barzilay, Tommi Jaakkola, Karene Chu. The skill level of the course is Advanced, and it may be possible to receive a verified certification or use the course to prepare for a degree. You must be enrolled in the course to see course content; if you have specific questions about this course, please contact sds-mm@mit.edu.

Students will implement and analyze models such as linear models, kernel machines, neural networks, and graphical models; choose suitable models for different applications; and implement and organize machine learning projects, from training, validation, and parameter tuning to feature engineering. The course covers representation, over-fitting, regularization, generalization, and VC dimension. As usual on edX, the course is organized on a weekly basis, with an assignment, quiz, or project each week: Week 1 has Homework 0 (a linear algebra and probability review) and Project 0 (setup, NumPy exercises, and a tutorial on common packages); Week 2 starts with linear classifiers. The units are:

- Unit 00 - Course Overview, Homework 0, Project 0
- Unit 01 - Linear Classifiers and Generalizations
- Unit 02 - Nonlinear Classification, Linear Regression, Collaborative Filtering
- Unit 03 - Neural Networks
- Unit 04 - Unsupervised Learning
- Unit 05 - Reinforcement Learning

Several sets of notes and solutions for the course exist on GitHub:

- A repository with solutions to the various tasks of this course offered by MIT on edX.
- Notes of MITx 6.86x (KellyHwong/MIT-ML); contributions are really welcome.
- Review notes by David G. Khachatrian (October 18, 2019), made a while after having taken the course: "The following notes are a mesh of my own notes, selected transcripts, some useful forum threads and various course material. I do not claim any authorship of these notes, but at the same time any error could well be arising from my own interpretation of the material."
- Notes whose author adds: "If you spot an error, want to specify something in a better way (English is not my primary language), add material or just have comments, you can clone, make your edits and make a pull request (preferred) or just open an issue." For an implementation of the algorithms in Julia (a relatively recent language incorporating the best of R, Python and Matlab features with the efficiency of compiled languages like C or Fortran), see the companion repository "Beta Machine Learning Toolkit" (BetaML) on GitHub, or in myBinder to run the code online. BetaML currently implements: linear, average and kernel perceptrons (units 1 and 2); clustering with k-means, k-medoids and the EM algorithm; a recommendation system based on EM (unit 4); and decision trees / random forests (mentioned in unit 2).
- MNIST digits classification with TensorFlow, linear model and MLP (antonio-f/MNIST-digits-classification-with-TF---Linear-Model-and-MLP), with code from the Coursera Advanced Machine Learning specialization, Intro to Deep Learning, week 2.
- Python implementations of some of the fundamental machine learning models and algorithms from scratch. The purpose of that project is not to produce algorithms that are as optimized and computationally efficient as possible, but rather to present their inner workings in a transparent and accessible way.
- Guides by Ritchie Ng: "I am Ritchie Ng, a machine learning engineer specializing in deep learning and computer vision. Check out my code guides and keep ritching for the skies!"

Classic motivating applications of machine learning include: database mining; applications that can't be programmed by hand, such as handwriting recognition and NLP; self-customizing programs, such as Netflix and Amazon recommendation systems; and understanding human learning (the brain, "real AI").

The teacher and creator of the classic beginner course on Coursera is Andrew Ng, a Stanford professor, co-founder of Google Brain, co-founder of Coursera, and the VP who grew Baidu's AI team to thousands of scientists. That course uses the open-source programming language Octave instead of Python or R for the assignments, and it is the course by which all other machine learning courses are judged. Neural networks were an early example of the not-yet-named statistics-based methods of machine learning.

Learning linear algebra first, then calculus, probability, statistics, and eventually machine learning theory is a long and slow bottom-up path. A better fit for developers is to start with systematic procedures that get results, and work back to the deeper understanding of theory, using working results as context.

There are also practical guides to machine learning using Python and scikit-learn (a must for Python lovers!), in which you can learn about: the linear regression model (the $\beta$ values are called the model coefficients), the logistic regression model, the naive Bayes classifier, the k-nearest-neighbour classifier, support vector machines (SVMs), the random forest classifier, and boosting algorithms. You'll learn about supervised vs. unsupervised learning, look into how statistical modeling relates to machine learning, and do a comparison of each. See also: Transfer Learning & The Art of Using Pre-trained Models in Deep Learning.

Two remarks on models. First, machine learning algorithms can use mixed models to conceptualize data in a way that allows for understanding the effects of phenomena both between groups and within them: if a neural network is tasked with understanding the effects of a phenomenon on a hierarchical population, a linear mixed model can calculate the results much more easily than separate linear regressions. Second, the beauty of deep learning is that the accuracy of the model keeps increasing with the training sample size, whereas other models reach a plateau in prediction accuracy after a certain phase; and we have to keep in mind that deep learning is also not far behind with respect to the other metrics.

A last practical point: create a test set (20% of the data, or less if the dataset is very large). WARNING: before you look at the data any further, you need to create a test set, put it aside, and never look at it, to avoid the data snooping bias.

```python
from sklearn.model_selection import train_test_split

train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
```
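Since the snippet above stops right after the train/test split, here is a compact, self-contained sketch of the whole pattern (the toy data and variable names are my own, not from the course materials): split the data, fit a linear model, and evaluate it on the held-out set only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# toy regression data: y is a linear function of X plus a little noise
rng = np.random.default_rng(42)
X = rng.uniform(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# set the test set aside before doing anything else (no data snooping)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LinearRegression().fit(X_train, y_train)
print(model.coef_)                  # estimated beta coefficients
print(model.score(X_test, y_test))  # R^2 on the held-out test set
```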
http://optilayer.com/electric-field-representation/32-uncategorised/275-electric-field-representation
### New options in Electric Field Representation

The electric field plot now respects the Thickness units settings; before, the X-axis was always represented in physical thickness units.

Evaluation of the standing-wave electric field inside a stack of films is performed with the Field command from the Analysis menu. OptiLayer computes the electric field intensity inside the current design and plots it in the graphical form shown below. The physical thickness profile of the layers in the design is represented by the design bar displayed just below the graph of electric field strength. The design bar shows the different layer materials in different colors and/or patterns; the number of layers N and the overall thickness Th of the coating design are also displayed.

$$|E|^2$$ values are expressed in relative units, the incident wave having $$|E|^2=100\%$$. Since these values are quadratic, $$|E|^2$$ can reach 400% in the incident medium in the case of high reflectors.
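The 400% figure follows from standard standing-wave algebra; as a short sketch (not from the OptiLayer documentation): in the incident medium, the total field is the superposition of the incident and reflected waves, so with amplitude reflection coefficient $r$,

$$|E(z)|^2=|E_0|^2\left|e^{-ikz}+r\,e^{ikz}\right|^2\leq(1+|r|)^2|E_0|^2.$$

For a high reflector, $|r|\to 1$, so the maxima of the standing wave approach $4|E_0|^2$, i.e. 400% of the incident intensity.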
https://iacr.org/cryptodb/data/author.php?authorkey=10415
## CryptoDB ### Sruthi Sekar #### Publications Year Venue Title 2022 CRYPTO Leakage resilient secret sharing (LRSS) allows a dealer to share a secret amongst $n$ parties such that any authorized subset of the parties can recover the secret from their shares, while an adversary that obtains shares of any unauthorized subset of parties along with bounded leakage from the other shares learns no information about the secret. Non-malleable secret sharing (NMSS) provides a guarantee that even shares that are tampered by an adversary will reconstruct to either the original message or something independent of it. The most important parameter of LRSS and NMSS schemes is the size of each share. For LRSS, in the "local leakage model" (i.e., when the leakage functions on each share are independent of each other and bounded), Srinivasan and Vasudevan (CRYPTO 2019), gave a scheme for threshold access structures with a share size of approximately 3.((message length) + $\mu$), where $\mu$ is the number of bits of leakage tolerated from every share. For the case of NMSS, the best-known result (again due to the above work) has a share size of 11.(message length). In this work, we build LRSS and NMSS schemes with much improved share size. Additionally, our LRSS scheme obtains optimal share and leakage size. In particular, we get the following results: -We build an information-theoretic LRSS scheme for threshold access structures with a share size of (message length + $\mu$). -As an application of the above result, we obtain an NMSS with a share size of 4.(message length). Further, for the special case of sharing random messages, we obtain a share size of 2.(message length). 2022 TCC Side-stepping the protection provided by cryptography, exfiltration attacks are becoming a considerable real-world threat. With the goal of mitigating the exfiltration of cryptographic keys, big-key cryptosystems have been developed over the past few years. These systems come with very large secret keys which are thus hard to exfiltrate. Typically, in such systems, the setup time must be large as it generates the large secret key. However, subsequently, the encryption and decryption operations, that must be performed repeatedly, are required to be efficient. Specifically, the encryption uses only a small public key and the decryption only accesses small ciphertext-dependent parts of the full secret key. Nonetheless, these schemes require decryption to have access to the entire secret key. Thus, using such big-key cryptosystems necessitate that users carry around large secret keys on their devices, which can be a hassle and in some cases might also render exfiltration easy. With the goal of removing this problem, in this work, we initiate the study of big-key identity-based encryption (bk-IBE). In such a system, the master secret key is allowed to be large but we require that the identity-based secret keys are short. This allows users to use the identity-based short keys as the ephemeral secret keys that can be more easily carried around and allow for decrypting ciphertexts matching a particular identity, e.g. messages that were encrypted on a particular date. In particular: -We build a new definitional framework for bk-IBE capturing a range of applications. In the case when the exfiltration is small our definition promises stronger security --- namely, an adversary can break semantic security for only a few identities, proportional to the amount of leakage it gets. 
In contrast, in the catastrophic case where a large fraction of the master secret key has been ex-filtrated, we can still resort to a guarantee that the ciphertexts generated for a randomly chosen identity (or, an identity with enough entropy) remain protected. We demonstrate how this framework captures the best possible security guarantees. -We show the first construction of such a bk-IBE offering strong security properties. Our construction is based on standard assumptions on groups with bilinear pairings and brings together techniques from seemingly different contexts such as leakage resilient cryptography, reusable two-round MPC, and laconic oblivious transfer. We expect our techniques to be of independent interest. 2021 CRYPTO We introduce Adaptive Extractors, which unlike traditional randomness extractors, guarantee security even when an adversary obtains leakage on the source \textit{after} observing the extractor output. We make a compelling case for the study of such extractors by demonstrating their use in obtaining adaptive leakage in secret sharing schemes. Specifically, at FOCS 2020, Chattopadhyay, Goodman, Goyal, Kumar, Li, Meka, Zuckerman, built an adaptively secure leakage resilient secret sharing scheme (LRSS) with both rate and leakage rate being $\mathcal{O}(1/n)$, where $n$ is the number of parties. In this work, we build an adaptively secure LRSS that offers an interesting trade-off between rate, leakage rate, and the total number of shares from which an adversary can obtain leakage. As a special case, when considering $t$-out-of-$n$ secret sharing schemes for threshold $t = \alpha n$ (constant $0<\alpha<1$), we build a scheme with constant rate, constant leakage rate, and allow the adversary leakage from all but $t-1$ of the shares, while giving her the remaining $t-1$ shares completely in the clear. (Prior to this, constant rate LRSS scheme tolerating adaptive leakage was unknown for \textit{any} threshold.) Finally, we show applications of our techniques to both non-malleable secret sharing and secure message transmission. 2019 PKC The notion of Registration-Based Encryption (RBE) was recently introduced by Garg, Hajiabadi, Mahmoody, and Rahimi [TCC’18] with the goal of removing the private-key generator (PKG) from IBE. Specifically, RBE allows encrypting to identities using a (compact) master public key, like how IBE is used, with the benefit that the PKG is substituted with a weaker entity called “key curator” who has no knowledge of any secret keys. Here individuals generate their secret keys on their own and then publicly register their identities and their corresponding public keys to the key curator. Finally, individuals obtain “rare” decryption-key updates from the key curator as the population grows. In their work, they gave a construction of RBE schemes based on the combination of indistinguishability obfuscation and somewhere statistically binding hash functions. However, they left open the problem of constructing RBE schemes based on standard assumptions.In this work, we resolve the above problem and construct RBE schemes based on standard assumptions (e.g., CDH or LWE). Furthermore, we show a new application of RBE in a novel context. In particular, we show that anonymous variants of RBE (which we also construct under standard assumptions) can be used for realizing abstracts forms of anonymous messaging tasks in simple scenarios in which the parties communicate by writing messages on a shared board in a synchronized way. 
2019 JOFC
Non-malleable codes (NMCs), introduced by Dziembowski, Pietrzak and Wichs (ITCS 2010), provide a powerful guarantee in scenarios where the classical notion of error-correcting codes cannot provide any guarantee: a decoded message is either the same or completely independent of the underlying message, regardless of the number of errors introduced into the codeword. Informally, NMCs are defined with respect to a family of tampering functions $\mathcal{F}$ and guarantee that any tampered codeword decodes either to the same message or to an independent message, so long as it is tampered using a function $f \in \mathcal{F}$. One of the well-studied tampering families for NMCs is the $t$-split-state family, where the adversary tampers each of the $t$ "states" of a codeword, arbitrarily but independently. Cheraghchi and Guruswami (TCC 2014) obtain a rate-1 non-malleable code for the case where $t = \mathcal{O}(n)$, with $n$ being the codeword length, and, in (ITCS 2014), show an upper bound of $1 - 1/t$ on the best achievable rate for any $t$-split-state NMC. For $t = 10$, Chattopadhyay and Zuckerman (FOCS 2014) achieve a constant-rate construction where the constant is unknown. In summary, there is no known construction of an NMC with an explicit constant rate for any $t = o(n)$, let alone one that comes close to matching Cheraghchi and Guruswami's upper bound! In this work, we construct an efficient non-malleable code in the $t$-split-state model, for $t = 4$, that achieves a constant rate of $\frac{1}{3+\zeta}$, for any constant $\zeta > 0$, and error $2^{-\Omega(\ell/\log^{c+1}\ell)}$, where $\ell$ is the length of the message and $c > 0$ is a constant.

2018 EUROCRYPT

2017 TCC

TCC 2022
2023-03-28 06:18:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6900650262832642, "perplexity": 1447.2135450723993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948765.13/warc/CC-MAIN-20230328042424-20230328072424-00061.warc.gz"}
http://devmaster.net/forums/topic/13152-imminent-release-of-the-hero-engine-indie/page__pid__70849
# Imminent release of the Hero Engine Indie

7 replies to this topic

### #1 gillvane Valued Member • Members • 127 posts

Posted 10 June 2010 - 01:43 PM

We're on pins and needles to see what the price will be, and what features will be included in the Indie version of the Hero Engine. It could release as early as 6/11/2010 as hinted in some emails, but you know how developers are, and it may take a while longer. This video is why people are excited: http://mmorpgmaker.c...p?f=121&t=10485 But will it be 10K? 5K? Less than 2K? No one knows just yet. Also, lots of people hope it includes the Hero's Journey MMORPG as a reference; however, we don't know how complete HJ is, or if it will be included for sure. An amazing year for engine releases if you want to tackle making your own MMORPG. Ryzom is open source, Big World Tech has a 300 dollar license, and now you may be able to buy the Hero Engine, the same engine used to make The Old Republic. Things are moving right along.

### #2 Nerd_Skywalker Valued Member • Members • 215 posts

Posted 10 June 2010 - 04:31 PM

That looks so cool!

### #3 gillvane Valued Member • Members • 127 posts

Posted 10 June 2010 - 09:59 PM

Update. The indie version released today. The license will be 5,000 bucks, plus 15% royalties. The engine is hosted on a cloud, and they provide the server for you to work on, and for the first year of hosting your game when it goes live.

### #4 alphadog DevMaster Staff • Moderators • 1716 posts

Posted 11 June 2010 - 03:23 PM

Hmm. For the pro indie then. Not going to be an entry-level engine at that price.

### #5 Nerd_Skywalker Valued Member • Members • 215 posts

Posted 11 June 2010 - 04:28 PM

> Hmm. For the pro indie then. Not going to be an entry-level engine at that price.

Hopefully it will keep the "OMG I WANNA MAEK A WoW KILLER!!111!11!" kids at bay.

### #6 alphadog DevMaster Staff • Moderators • 1716 posts

Posted 11 June 2010 - 06:03 PM

No, those guys think that everyone else will use their OMGWTFBBQ ideas and develop their game, and host it, and support it, and manage it, and... There's a much smaller market that is serious, but dipping their toes into the indie market. Those guys don't have \$5K to throw around. But, that's okay. Someone else will fill the void.

### #7 gillvane Valued Member • Members • 127 posts

Posted 18 June 2010 - 05:21 PM

alphadog said: No, those guys think that everyone else will use their OMGWTFBBQ ideas and develop their game, and host it, and support it, and manage it, and... There's a much smaller market that is serious, but dipping their toes into the indie market. Those guys don't have \$5K to throw around. But, that's okay. Someone else will fill the void.

Yes, this is for the serious developer at 5K. Also, the tools look awesome, and reportedly they are, but the people that have purchased the engine report that the wiki is hundreds and hundreds of pages, and you're not going to make much progress with the engine unless you study and understand the wiki, at the very least, PLUS learn the proprietary scripting language. Big World Technology is a good alternative, with a 300 dollar license.
The problem with BWT for some people is that there is no reference to work from, so you have to script all your own game logic from scratch. However, I feel confident that eventually someone will release a reference, similar to the Torque MMOkit, for BWT, and that will make it much easier for indie dev teams to dip their toes in the water, so to speak.

### #8 WorldKi New Member • Members • 2 posts

Posted 26 August 2010 - 06:49 PM

I got a chance to use the engine a bit at GDC and know a few of the guys that work at their St. Louis office. It's a pretty sharp editor, no doubt. Very dynamic, great graphics engine. The \$5000 price point is tricky, though, as it puts it out of the reach of the hobbyist, and the license might not be attractive enough for professionals. We'll see, though; it's cool that more engines are targeting indies.
2013-05-25 07:03:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2482225000858307, "perplexity": 3754.818488870221}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705618968/warc/CC-MAIN-20130516120018-00096-ip-10-60-113-184.ec2.internal.warc.gz"}
https://euclid.math.temple.edu/events/seminars/gradsem/
Current contact: Thomas Ng and Zachary Cline. The seminar takes place on Fridays (from 1:00-2:00pm) in Room 617 on the sixth floor of Wachman Hall. Pizza and refreshments are available beforehand in the lounge next door.

• Friday April 27, 2018 at 11:00, Rm 617
TBA
Yang Xiao, Brown University

• Friday April 13, 2018 at 11:00, Rm 617
TBA
Geoff Schneider

• Friday March 30, 2018 at 11:00, Rm 617
TBA
Rebekah Parlmer, Temple University

• Friday March 23, 2018 at 11:00, Rm 617
The tree for SL(2)
Khanh Le, Temple University

• Friday March 16, 2018 at 11:00, Rm 617
Eigenvalues of analytic kernels
Narek Hovsepyan, Temple University
It is shown that the eigenvalues of an analytic kernel on a finite interval go to zero at least as fast as $R^{-n}$ for some fixed $R < 1$. The best possible value of $R$ is related to the domain of analyticity of the kernel. The method is to apply the Weyl–Courant minimax principle to the tail of the Chebyshev expansion for the kernel. An example involving Legendre polynomials is given for which $R$ is critical. (A small numerical illustration of this decay appears after the listing.) Reference: G. Little, J. B. Reade, Eigenvalues of analytic kernels, SIAM J. Math. Anal., 15(1), 1984, 133–136.

• Friday February 16, 2018 at 16:30, Rm 617
Building blocks for low-dimensional manifolds
Thomas Ng, Temple University

• Friday February 16, 2018 at 16:00, Rm 617
Numerical linear algebra: the hidden math in everything
Kathryn Lund, Temple University

• Friday February 9, 2018 at 11:00, Rm 617
Random graphs and surfaces
Thomas Ng, Temple University
We will describe a model introduced by Bollobás for random finite $k$-regular graphs. In the case when $k=3$, we will discuss connections with two constructions of random Riemann surfaces introduced by Buser and Brooks-Makover. Along the way, we will see a glimpse of the space of metrics on a surface (Teichmüller space) and (ideal) triangulations.

• Friday February 2, 2018 at 11:00, Rm 617
Jones polynomial as a quantum invariant
Zachary Cline, Temple University
There is a cool construction of a variant of this polynomial which is instructive and which anyone remotely interested in knot theory should see at least once in their life. I will present this construction and then explain how this polynomial invariant arises as a functor from the tangle category to the category of vector spaces over $\mathbb{C}$.
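As a quick numerical illustration of the eigenvalue decay in the March 16 abstract, here is a small R sketch; the Gaussian kernel and the uniform discretization are my own choices, not taken from the talk:

```r
# Discretize an analytic kernel K(s, t) = exp(-(s - t)^2) on [-1, 1]
# and look at the spectrum of the resulting integral-operator approximation.
n <- 200
x <- seq(-1, 1, length.out = n)
K <- exp(-outer(x, x, "-")^2)
ev <- eigen(K * (2 / n), symmetric = TRUE, only.values = TRUE)$values
plot(abs(ev[1:20]), log = "y",
     xlab = "n", ylab = "|eigenvalue|")  # near-linear on a log scale,
                                         # i.e. at least geometric decay
```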
2018-03-18 00:12:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6272763013839722, "perplexity": 2291.974442281804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645405.20/warc/CC-MAIN-20180317233618-20180318013618-00023.warc.gz"}
https://queereka.com/2013/04/01/montely-mathematics-some-algebras/
# Monthly mathematics: Some Algebra(s)

Initially, I would like to congratulate Miller for quickly answering both questions last month. Now, to mathematics. The topic for this month is a hodgepodge of linear algebra, field theory, and universal algebra. I think that the wall of text last time scared some people off, so I tried to be as concise as possible this time.

Note: Due to life being crazy, this post will go up without as much editing as it deserves. I apologize for this, and will appreciate any corrections.

Notation: I believe that all of my notation here is fairly standard with the exception of scalar multiplication. I denote it as follows: $( S^{\mathbb{V}}_{c} )_{c \in F}$. The vector passed to $S^{\mathbb{V}}_c$ is scaled by $c$, where $c \in F$.

Definitions:

1. Algebra: An algebra is an ordered pair $\mathbb{A} = ( A, \mathcal{F} )$ where $A$ is the universe, i.e. what elements exist in $\mathbb{A}$, and $\mathcal{F}$ is a family of functions. These functions are called fundamental operations. The fundamental operations are closed, i.e. their range is a subset of $A$.

2. Congruence lattice: A congruence lattice is a lattice whose universe is the set of all congruences on $\mathbb{A}$, with meets and joins calculated in the usual way over equivalence relations.

3. Field: A field, $\mathbb{F} = (F; +^{\mathbb{F}}, *^{\mathbb{F}}, -^{\mathbb{F}}, {}^{-1^{\mathbb{F}}}, 1^{\mathbb{F}}, 0^{\mathbb{F}})$, is an algebraic structure with an additive and a multiplicative operation such that the field axioms are satisfied.

4. Vector Space: Due to time constraints, I am going to link to a definition: http://en.wikipedia.org/wiki/Vector_space#Definition

5. Operation Table: An example of an operation table is the addition or multiplication table. An operation table is a table that shows you what the output of a function is based on the input.

Theorems:

1. Meet and join for relations: Let $\theta_{1}$ and $\theta_{2}$ be equivalence relations for some algebra. Then the usual meet and join are calculated as follows:
• $\theta_{1} \wedge \theta_{2} = \theta_{1} \cap \theta_{2}$
• $\theta_{1} \vee \theta_{2} = \theta_{1} \cup ( \theta_{1} \circ \theta_{2} ) \cup ( \theta_{1} \circ \theta_{2} \circ \theta_{1} ) \cup ( \theta_{1} \circ \theta_{2} \circ \theta_{1} \circ \theta_{2} ) \cup \ldots$

Questions:

1. Suppose $F = \{ 0, 1, \alpha, \alpha + 1 \}$ and $\mathbb{F} = (F; +^{\mathbb{F}}, *^{\mathbb{F}}, {}^{-1^{\mathbb{F}}}, -^{\mathbb{F}}, 1^{\mathbb{F}}, 0^{\mathbb{F}})$, i.e. $\mathbb{F}$ is a field. Define $\alpha^{2} = \alpha + 1$, so that $\alpha^{2} + \alpha + 1 = 0$. What do the operation tables for $+^{\mathbb{F}}$ and $*^{\mathbb{F}}$ look like?

2. Let $F = \{ 0, 1, \alpha, \alpha + 1 \}$, and let $\mathbb{V} = (V; +^{\mathbb{V}}, (S^{\mathbb{V}}_{c})_{c\in F}, -^{\mathbb{V}}, 0^{\mathbb{V}})$ be a vector space where $V = F \times F$. What is the structure of $\mathbf{Con}(\mathbb{V})$?

Let me assure you, non-mathematician reader, you most certainly can answer the first question. I'm not saying it will be necessarily obvious, but you have the skills and reasoning abilities to solve it! The second question, granted, may require some familiarity with relations, lattices, and vector spaces. In general, let me encourage you to ask questions! I, or someone else, will be happy to help you solve these problems.
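For readers who would rather experiment than fill in the tables by hand, here is a small R sketch that tabulates both operations; the bit-pair encoding of $F$ and the reduction modulo $x^2 + x + 1$ are implementation choices of mine, not part of the post:

```r
# Encode 0, 1, alpha, alpha+1 as the integers 0..3 (bit 1 = coefficient of alpha).
# In characteristic 2, addition is bitwise XOR; multiplication is carry-less
# multiplication followed by reducing alpha^2 to alpha + 1.
labels <- c("0", "1", "a", "a+1")

gf4_add <- function(x, y) bitwXor(x, y)

gf4_mul <- function(x, y) {
  r <- 0L
  for (i in 0:1) {                              # schoolbook carry-less multiply
    if (bitwAnd(y, bitwShiftL(1L, i)) != 0) r <- bitwXor(r, bitwShiftL(x, i))
  }
  if (bitwAnd(r, 4L) != 0) r <- bitwXor(r, 7L)  # replace alpha^2 by alpha + 1
  r
}

show_table <- function(op) {
  vals <- outer(0:3, 0:3, Vectorize(op))
  matrix(labels[vals + 1], 4, 4, dimnames = list(labels, labels))
}
print(show_table(gf4_add))   # addition table of F
print(show_table(gf4_mul))   # multiplication table of F
```

A quick consistency check: the sketch gives $(\alpha+1)\cdot(\alpha+1) = \alpha$, which matches $(\alpha+1)^2 = \alpha^2 + 1 = (\alpha + 1) + 1 = \alpha$ in characteristic 2.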
2019-07-17 22:59:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 23, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.817879855632782, "perplexity": 405.3529012657522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525414.52/warc/CC-MAIN-20190717221901-20190718003901-00376.warc.gz"}
https://www.jobilize.com/physics1/section/conceptual-questions-torque-by-openstax?qcr=www.quizover.com
# 10.6 Torque (Page 3/6)

## Calculating torque on a rigid body

[link] shows several forces acting at different locations and angles on a flywheel. We have $|\vec{F}_1| = 20\,\text{N}$, $|\vec{F}_2| = 30\,\text{N}$, $|\vec{F}_3| = 30\,\text{N}$, and $r = 0.5\,\text{m}$. Find the net torque on the flywheel about an axis through the center.

## Strategy

We calculate each torque individually, using the cross product, and determine the sign of the torque. Then we sum the torques to find the net torque.

## Solution

We start with $\vec{F}_1$. If we look at [link], we see that $\vec{F}_1$ makes an angle of $90° + 60°$ with the radius vector $\vec{r}$. Taking the cross product, we see that it is out of the page and so is positive. We also see this from calculating its magnitude: $|\vec{\tau}_1| = r F_1 \sin 150° = 0.5\,\text{m}\,(20\,\text{N})(0.5) = 5.0\,\text{N·m}$.

Next we look at $\vec{F}_2$. The angle between $\vec{F}_2$ and $\vec{r}$ is $90°$ and the cross product is into the page, so the torque is negative. Its value is $\tau_2 = -r F_2 \sin 90° = -0.5\,\text{m}\,(30\,\text{N}) = -15.0\,\text{N·m}$.

When we evaluate the torque due to $\vec{F}_3$, we see that the angle it makes with $\vec{r}$ is zero, so $\vec{r} \times \vec{F}_3 = 0$. Therefore, $\vec{F}_3$ does not produce any torque on the flywheel.

We evaluate the sum of the torques: $\tau_\text{net} = \sum_i \tau_i = 5 - 15 = -10\,\text{N·m}$.

## Significance

The axis of rotation is at the center of mass of the flywheel. Since the flywheel is on a fixed axis, it is not free to translate. If it were on a frictionless surface and not fixed in place, $\vec{F}_3$ would cause the flywheel to translate, as well as $\vec{F}_1$. Its motion would be a combination of translation and rotation.

Check Your Understanding: A large ocean-going ship runs aground near the coastline, similar to the fate of the Costa Concordia, and lies at an angle as shown below. Salvage crews must apply a torque to right the ship in order to float the vessel for transport. A force of $5.0 \times 10^5\,\text{N}$ acting at point A must be applied to right the ship. What is the torque about the point of contact of the ship with the ground ([link])?

The angle between the lever arm and the force vector is $80°$; therefore, $r_\perp = 100\,\text{m}\,(\sin 80°) = 98.5\,\text{m}$. The cross product $\vec{\tau} = \vec{r} \times \vec{F}$ gives a negative or clockwise torque.
The torque is then $\tau = -r_\perp F = -98.5\,\text{m}\,(5.0 \times 10^5\,\text{N}) = -4.9 \times 10^7\,\text{N·m}$.

## Summary

• The magnitude of a torque about a fixed axis is calculated by finding the lever arm to the point where the force is applied and using the relation $|\vec{\tau}| = r_\perp F$, where $r_\perp$ is the perpendicular distance from the axis to the line upon which the force vector lies.
• The sign of the torque is found using the right-hand rule. If the page is the plane containing $\vec{r}$ and $\vec{F}$, then $\vec{r} \times \vec{F}$ is out of the page for positive torques and into the page for negative torques.
• The net torque can be found by summing the individual torques about a given axis.

## Conceptual questions

What three factors affect the torque created by a force relative to a specific pivot point?

The magnitude of the force, the length of the lever arm, and the angle between the lever arm and the force vector.

Give an example in which a small force exerts a large torque. Give another example in which a large force exerts a small torque.

When reducing the mass of a racing bike, the greatest benefit is realized from reducing the mass of the tires and wheel rims. Why does this allow a racer to achieve greater accelerations than would an identical reduction in the mass of the bicycle's frame?

The moment of inertia of the wheels is reduced, so a smaller torque is needed to accelerate them.
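As a quick numeric check of the flywheel example above, here is a short sketch of my own; the signed angles are chosen so that sin() carries each torque's sign:

```r
# tau_i = r * F_i * sin(theta_i), counterclockwise positive.
r     <- 0.5                     # m
F     <- c(20, 30, 30)           # N
theta <- c(150, -90, 0)          # degrees; -90 encodes the into-the-page torque
tau <- r * F * sin(theta * pi / 180)
print(round(tau, 1))             #  5.0 -15.0  0.0  (N*m)
print(sum(tau))                  # -10, matching the worked solution
```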
2019-10-22 08:17:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 31, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6948074102401733, "perplexity": 673.930971294055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987813307.73/warc/CC-MAIN-20191022081307-20191022104807-00035.warc.gz"}
https://www.codyperakslis.com/tags/poetry/
# poetry ## A Snowball's Chance The snowball’s method $\hspace{15mm}$to build a mountain, Is slow and crowded and selfless. From the mountain comes the avalanche. A progression to regression as $\hspace{5mm}$a progression to a mission. A cold understanding, $\hspace{5mm}$an indomitable power. ## Causal Contact To come to know $\hspace{5mm}$the world about, Is of an effect, a cause. A mind built in the sky $\hspace{5mm}$from contact with the earth. Writing most natural laws.
2020-10-01 08:03:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3193965256214142, "perplexity": 10270.39755950449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402124756.81/warc/CC-MAIN-20201001062039-20201001092039-00094.warc.gz"}
http://www.weimann-meyer.org/killjoy-ecm/de4127-hazard-ratio-coxph-r
# hazard ratio coxph r

Here are some basic examples that illustrate the process and key syntax. This model is easily implemented in R using the coxph() function in the survival package [57,58]. cpositions: relative positions of the first three columns on the OX scale. data: a dataset used to fit survival curves.

Using hazard ratio statements in SAS 9.4, I get a hazard ratio for 1) a at the mean of b, and 2) b at the mean of a. If HR > 1 then there is a high probability of death, and if it is less than 1 then there is a low probability of death.

coxph(): fits a Cox proportional hazards regression model. Using coxph() gives a hazard ratio (HR). Plot the simulations with the simGG method. This can also be in the form of a vector if you have several models.

[Figure: partial effect of pspline(hgb, 4) on the hazard, plotted against hemoglobin level.] Low hemoglobin or anemia is a recognized marker of frailty in older age, so the rise in risk for …

Hazard ratios. main: title of the plot. Estimate a Cox PH model in the usual way with the coxph command in the survival package. The hazard ratio is the ratio of two expected hazards: $h_0(t)\exp(b_1 a)/h_0(t)\exp(b_1 b) = \exp(b_1(a - b))$, which does not depend on time $t$. Thus the hazard is proportional over time.

My problem is that I (and it seems like the Internet too) do not know how to export it, as a .txt file for example. Optionally, the predict() method computes asymptotic confidence intervals and confidence bands for the predicted absolute risks. For a factor variable with k levels, for instance, this … How do I turn the model around, so that the inverse hazard ratio and conf. …

The quantities $\exp(b_i)$ are called hazard ratios (HR). Hazard ratios suffer therefore somewhat less from possible selection bias introduced by endpoints. Therefore, the hazard ratio of patients in the good prognostic group to die is 0.2149 compared to patients in the poor prognostic group, i.e. about a 79% reduction of the hazard. Well, in this particular case (as we shall see) this would be the right thing to do, but Wald tests should in general not be used as an aid in model selection in multivariate analyses. This gives the reader an indication of which model is important.

3.3 Computing the Hazard Ratio. terms: if TRUE, do a test for each term in the model rather than for each separate covariate. ggforest(model, data = NULL, main = "Hazard ratio", cpositions = c(0.02, 0.22, 0.4), fontsize = 0.7, refLabel = "reference", noDigits = 2) Arguments: a data.frame … The HR represents the ratio of hazards between two groups at any particular point in time.
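As a concrete starting point for the snippets above, here is a minimal sketch using the lung dataset that ships with the survival package (my choice of example data): the exp(coef) column of the summary holds the hazard ratios.

```r
library(survival)

# status in lung is coded 1 = censored, 2 = dead; Surv() handles this coding.
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
summary(fit)        # "exp(coef)" column = hazard ratios, with 95% CIs
exp(coef(fit))      # the hazard ratios alone
exp(confint(fit))   # their confidence intervals
```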
When the results of predict are used in further calculations it may be desirable to use a single reference level for all observations. However, in some cases, the … Estimate a Cox PH model in the usual way with the coxph command in the survival package. In a stratified …

ggforest(model, data = NULL, main = "Hazard ratio", cpositions = c(0.02, 0.22, 0.4), fontsize = 0.7, refLabel = "reference", noDigits = 2) Arguments: model, the result of fitting a Cox regression model using the coxph or coxme functions. Whereas cumulative measures aggregate over observation time, hazard ratios reflect an instantaneous risk over the study period or a subset of the period. polygon_ci: if you want a polygon as an indicator for your confidence interval. Sometimes you only want one model to have a polygon and the rest to be dotted lines.

Hazard ratio for an individual with X = x vs. X = x+1: this term is the hazard ratio for the event of interest for people with covariate x+1 vs. people with covariate x. If the term is >1, then those people who have a one-unit increase in their covariate compared against a reference group are at a higher "risk" (hazard) for the event. Most likely you think censor==0 is an event and you are telling [r] that censor==1 is an event. It is up to you to create a sensible coxph model.

model: an object of class coxph. Using the reference="strata" option is the safest centering, since strata occasionally have different means. Use of reference="sample" will use the overall means, and agrees with the … One of the main goals of the Cox PH model is to compare the hazard rates of individuals who have different values for the covariates. main: title of the plot. We also present a concomitant predict() S3 method which computes the absolute risks of the event of interest for given combinations of covariate values and time points. The package provides options to estimate time-dependent effects conveniently by including interactions of covariates with arbitrary functions of time, with or without …

The response must be a survival object as returned by the Surv function. A hazard ratio is defined as the hazard for one individual divided by the hazard for a … This is because the … For example, I got the following HRs for one endpoint: … I obtained the results in the form of a "coxph" object, which seems to be a list. The R summary for the Cox model gives the hazard ratio (HR) for the second group relative to the first group, that is, female versus male. Specifically, it fails to converge, even when bumping up the number of max iterations or setting reasonable initial values. You can build Cox proportional hazards models using the coxph function and visualize them using the ggforest function. An approximated generalized concordance probability, an effect size measure for clear-cut decisions, can be obtained. fontsize: relative size of …

This is the hazard ratio: the multiplicative effect of that variable on the hazard rate (for each unit increase in that variable). Briefly, an HR > 1 indicates an increased risk of death (according to the definition of h(t)) if a specific condition is met by a patient. Under proportional hazards the resulting value is no longer time-dependent, so the ratio of the two hazards remains proportional at all times. … limits and p-values are shown (I mean for the inverse model), or do you think something else has gone wrong?
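A sketch of the ggforest call whose arguments are documented above, assuming it comes from the survminer package; the model and data are the same toy setup as before, with sex recoded as a factor so the plot shows a labeled reference level:

```r
library(survival)
library(survminer)   # provides ggforest()

lung2 <- lung
lung2$sex <- factor(lung2$sex, levels = 1:2, labels = c("male", "female"))
fit <- coxph(Surv(time, status) ~ age + sex, data = lung2)
ggforest(fit, data = lung2, main = "Hazard ratio",
         cpositions = c(0.02, 0.22, 0.4), fontsize = 0.7,
         refLabel = "reference", noDigits = 2)
```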
The function basehaz (from the previous answer) provides the cumulative hazard, not the hazard function (the rate). Estimating the hazard function would require specification of the type of smoothing (like in density estimation). The continuous exposure must be a spline term for the smoothing function to work. The Muhaz R package can do this for one-sample data.

Instead, the popular Cox proportional hazards model [11] is often used to determine the effects of covariates and to identify significant predictors of time to failure. Remarkably, even though the baseline hazard is unspecified, the Cox model can still be estimated by the method of partial likelihood, developed by Cox (1972) in the same paper in which he introduced what came to be called the Cox model. However, the assumption of proportional hazards is not always satisfied, …

Cox proportional hazard model: model fitting and significance test. data: a dataset used to fit survival curves. It shows so-called hazard ratios (HR), which are derived from the model for all covariates that we included in the formula in coxph. Under the Cox proportional hazard model, the hazard ratio is constant. Also given is the Wald statistic for each parameter, as well as overall likelihood ratio, Wald, and score tests. In a Cox model, stratification allows for as many different hazard functions as there are strata. Beta coefficients (hazard ratios) optimized for all strata are then fitted.

> Hello, I have the following problem. I stratified my patient cohort into three ordered groups and performed multivariate adjusted Cox regression analysis on each group separately. Now I would like to calculate a p for trend across the hazard ratios that I got for the three groups. How can I do that if I only have the HR and the confidence interval?

The estimated hazard ratio from the model is incorrect (verified by an AFT model). In one case the P was 0.04 yet the CI crossed one, which confused me, and certainly will raise questions from reviewers. The hazard ratios and P-values suggest that whilst CAVD and leukemia are significant risk factors, the interaction between the two factors is not significant. So should we drop the interaction term from the model? Weighted Cox regression provides unbiased average hazard ratio estimates also in the case of non-proportional hazards. We review the formula behind the estimators implemented and …

transform: a character string specifying how the survival times should be transformed before the test is performed. Possible values are "km", "rank", "identity" or a function of one argument. You could also flip the sign on the coef column, …
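Following the remark that basehaz() returns the cumulative hazard rather than the rate, here is a small sketch (same toy model as above) that recovers a crude rate estimate by finite differences:

```r
library(survival)

fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
H0 <- basehaz(fit, centered = TRUE)       # data frame: hazard (cumulative), time
head(H0)
rate <- diff(H0$hazard) / diff(H0$time)   # rough h0(t) between event times
```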
coxph(formula, data=, weights, subset, na.action, init, control, ties=c("efron","breslow","exact"), singular.ok=TRUE, robust, model=FALSE, x=FALSE, y=TRUE, tt, method=ties, id, cluster, istate, statedata, ...)

Arguments: formula, a formula object with the response on the left of a ~ operator and the terms on the right; the response must be a survival object as returned by the Surv function. coxm: a coxph.object fitted on the survival data and x (see below). data: a list of the data used to compute the hazard ratio (x, surv.time and surv.event). p-value: computed using the likelihood ratio test of whether the hazard ratio is different from 1. n: number of samples used for the estimation.

Simulate quantities of interest (hazard ratios, first differences, marginal effects, relative hazards, or hazard rates) with the appropriate simPH simulation command. Plot the simulations with the simGG method.

Interpreting the output from R: this is actually quite easy. coxph() fits a Cox proportional hazard model to the data, and the syntax is similar to survfit(). Here, we fit a model using only the age predictor and called summary() to examine the details of the coxph fit. The coxph routines try to approximately center the predictors out of self-protection. If not supplied, data will be extracted from the 'fit' object. I believe that question was about the hazard function.

5.1.2 Theory. For transparency, the derivation is given below. The beta coefficient for sex = -0.53 indicates that females have lower risk of death (lower survival rates) than males, in these data. So, for a categorical variable like sex, going from male (baseline) to female results in approximately a 40% reduction in hazard.

cat("The Hazard Ratio (Good:Poor) is ", round(hr.exp, 4), ".")
## The Hazard Ratio (Good:Poor) is 0.2149

The hazard ratio for these two cases, $h_i(t)/h_{i'}(t) = h_0(t)e^{\eta_i}/h_0(t)e^{\eta_{i'}} = e^{\eta_i}/e^{\eta_{i'}}$, is independent of time $t$. Consequently, the Cox model is a proportional-hazards model. This is just the bare-bones basics of Cox proportional hazards models.

A Few Examples. Produce a hazard ratio table and plot from a Cox proportional hazards analysis, survival::coxph().

Question: R: exporting summary of coxph object. orzech_mag wrote: Dear colleagues, I performed Cox regression for proportional hazard using the R package "survival". …

> On Nov 20, 2011, at 6:34 PM, Paul Johnston wrote: I had intended to report logrank P values with the hazard ratio and CI obtained from this function. In retrospect I can see that the CI calculated by coxph is intimately related to the Wald p-value (which in this specific …
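To see the time-independence of the ratio derived above in code, here is a short sketch comparing two covariate profiles through the linear predictor (same toy model; the profile values are arbitrary):

```r
library(survival)

fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
newd <- data.frame(age = c(61, 60), sex = c(1, 1))
lp <- predict(fit, newdata = newd, type = "lp")
exp(lp[1] - lp[2])          # equals exp(coef(fit)["age"]): the per-year HR
```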
From the output, we can see that the coefficient for age is greater than $0$ and $\exp(\text{coef}) > 1$, meaning that the age … Sometimes the model is expressed differently, relating the relative hazard, which is the ratio of the hazard at time $t$ to the baseline hazard, to the risk factors: we can take the natural logarithm (ln) of each side … The coxph function in R is not working for me when I use a continuous predictor in the model. Before getting …
2022-06-29 12:25:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7632541656494141, "perplexity": 2015.3585533259272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103639050.36/warc/CC-MAIN-20220629115352-20220629145352-00270.warc.gz"}
https://indico.cern.ch/event/751767/contributions/3775975/
# 10th International Conference on Hard and Electromagnetic Probes of High-Energy Nuclear Collisions

May 31, 2020 to June 5, 2020 Online US/Central timezone

## Inclusive jet measurements in p+Au collisions at $\sqrt{s_\mathrm{NN}}$ = 200 GeV in STAR

Not scheduled 1h 20m Online

#### Online Poster Jets and High Momentum Hadrons

### Speaker

Tong Liu (Yale University)

### Description

With the observation of flow-like correlations in small-system collisions (p+Pb, p+Au and d+Au) at the LHC and RHIC, the existence of quark-gluon plasma (QGP) in small systems, which was initially assumed to be absent, became an open question and has been actively investigated over recent years. High-momentum partons produced at early stages of heavy-ion collisions generate collimated sprays of hadrons called \textit{jets}. Jets have been well established as a hard probe for the existence and properties of the QGP. These partons lose energy when passing through the medium, an effect usually known as "jet quenching". While previous jet-quenching analyses in small systems using minimum-bias datasets are consistent with the non-existence of the QGP, various modifications are observed when collisions are categorized based on the event activity (EA). In this poster, we present an investigation of p+Au collisions at $\sqrt{s_\mathrm{NN}}=200$ GeV at STAR for possible evidence of jet quenching, by studying the binary-scaled inclusive jet yield. Studies involving both full (charged + neutral) and charged jets will be presented. We will also present the EA definition of collision events based on backward (Au-going direction) signals. Relevant simulation procedures will also be discussed, including simulation using the Glauber model and the corresponding detector response. Progress towards the resultant nuclear modification factor $R_{\mathrm{pAu}}$, after combining with the results from the Glauber model calculation, as well as the comparison between yields in high and low EA bins, will be discussed.

Contribution type: Poster

### Presentation materials

There are no materials yet.
2023-03-29 23:55:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.792208731174469, "perplexity": 3182.6107819532535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949035.66/warc/CC-MAIN-20230329213541-20230330003541-00761.warc.gz"}
https://cmuc.karlin.mff.cuni.cz/cmuc1902/abs/sahami.htm
## Amir Sahami, Generalized notions of amenability for a class of matrix algebras

Comment. Math. Univ. Carolin. 60,2 (2019) 199-208.

Abstract: We investigate amenability and its related homological notions for a class of $I\times I$-upper triangular matrix algebras, say ${\rm UP}(I,A)$, where $A$ is a Banach algebra equipped with a nonzero character. We show that ${\rm UP}(I,A)$ is pseudo-contractible (amenable) if and only if $I$ is a singleton and $A$ is pseudo-contractible (amenable), respectively. We also study pseudo-amenability and approximate biprojectivity of ${\rm UP}(I,A)$.

Keywords: upper triangular Banach algebra; amenability; left $\varphi$-amenability; approximate biprojectivity

DOI: 10.14712/1213-7243.2019.002

AMS Subject Classification: 46M10 43A07 43A20
https://brilliant.org/problems/lets-do-some-calculus-28/
# Let's do some calculus! (28)

Calculus Level 4

$\lim_{x \to 0} \ \dfrac{\cos^2 x - \cos x - e^x \cos x + e^x - \dfrac{x^3}{2}}{x^n}$

Find the value of $n$ for which the above limit is finite and non-zero.

Notation: $e \approx 2.71828$ is Euler's number.
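One way to check a problem like this is to Taylor-expand the numerator at $x = 0$: writing it as $(\cos x - 1)(\cos x - e^x) - x^3/2$, the leading term determines the unique valid $n$. A minimal sympy sketch (not part of the original problem) that does this:

```python
# Find the leading order of the numerator at x = 0 with sympy;
# the lowest surviving power of x is the n giving a finite non-zero limit.
import sympy as sp

x = sp.symbols('x')
numerator = sp.cos(x)**2 - sp.cos(x) - sp.exp(x)*sp.cos(x) + sp.exp(x) - x**3/2

print(sp.series(numerator, x, 0, 6))      # -> x**4/2 + x**5/24 + O(x**6)
print(sp.limit(numerator / x**4, x, 0))   # -> 1/2, so n = 4
```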
https://cs.stackexchange.com/tags/decision-problem
# Questions tagged [decision-problem]

A question in some formal system with a yes-or-no answer.

371 questions

### Decision Making using Multiple Variables
What should I learn if I want to make a decision based on multiple variables? The following is an example of a problem. I have a farm. My variables are weather, humidity of air, humidity of soil, size ...

### Obtaining an acyclic graph by removing edges using an algorithm that decides ACYCLIC
I don't understand the following: if there's an algorithm that can decide ACYCLIC in polynomial time, then there's an algorithm that returns a set of k edges, so that the graph obtained by deleting ...

### Prove that a set is decidable using a time-constructible function
I'm preparing for an exam in the theory of computation and I'm very in trouble with some exercise. Considering a Turing machine $\mu$ of alphabet $A=\{ 0,1 \}$ (we don't know anything about termination) and a ...

### What is the strongest arithmetic theory decidable by a DFA, DPDA or PDA?
It is known that WS1S can be decided by a DFA. Is this the strongest arithmetic theory decidable by a DFA? What happens when the automata class is extended to include DPDAs or PDAs?

### A special case of subset sum
I came across the following problem in my complexity-theory course: Given a set of numbers $A := \{a_1, \dots, a_n\} \subset_{\mathrm{finite}} \mathbb{N}$ and a number $b$ also in $\mathbb{N}$ such ...

### NP-completeness for integer linear program
This is a homework problem, so I don't want the solution. I need a hint about which problem to reduce to the following and/or how to start on it. We were thinking of TSP or independent set but couldn't come ...

### How to prove the LastToken problem is NP-complete
Consider the following game played on a graph $G$ where each node can hold an arbitrary number of tokens. A move consists of removing two tokens from one node (that has at least two tokens) and adding ...
http://www.encyclopediaofmath.org/index.php/Ahlswede%e2%80%93Daykin_inequality
# Ahlswede–Daykin inequality

*four-functions inequality*

An inequality in which an inequality for four functions on a finite distributive lattice applies also to additive extensions of the functions on lattice subsets.

Let $L$ be a finite distributive lattice (see also FKG inequality), such as the power set of a finite set ordered by inclusion. For subsets $X$, $Y$ of $L$, define

$$X \vee Y = \{x \vee y : x \in X,\ y \in Y\} \quad \text{and} \quad X \wedge Y = \{x \wedge y : x \in X,\ y \in Y\}.$$

If $X$ or $Y$ is empty, $X \vee Y = X \wedge Y = \emptyset$. Given $f : L \to [0,\infty)$, let $f(X) = \sum_{x \in X} f(x)$. The Ahlswede–Daykin inequality says that if $f_1$, $f_2$, $f_3$ and $f_4$ map $L$ into $[0,\infty)$ such that

$$f_1(x) f_2(y) \le f_3(x \vee y) f_4(x \wedge y) \quad \text{for all } x, y \in L,$$

then

$$f_1(X) f_2(Y) \le f_3(X \vee Y) f_4(X \wedge Y) \quad \text{for all } X, Y \subseteq L.$$

See [a1] or [a2], [a4], [a7] for a proof. The inequality is very basic and is used in proofs of other inequalities (cf. [a2], [a3], [a4], [a5], [a7]), including the FKG inequality [a6] and the Fishburn–Shepp inequality [a3], [a8].

#### References

[a1] R. Ahlswede, D.E. Daykin, "An inequality for the weights of two families, their unions and intersections", Z. Wahrscheinlichkeitsth. verw. Gebiete, 43 (1978), pp. 183–185
[a2] B. Bollobás, "Combinatorics", Cambridge Univ. Press (1986)
[a3] P.C. Fishburn, "A correlational inequality for linear extensions of a poset", Order, 1 (1984), pp. 127–137
[a4] P.C. Fishburn, "Correlation in partially ordered sets", Discrete Appl. Math., 39 (1992), pp. 173–191
[a5] P.C. Fishburn, P.G. Doyle, L.A. Shepp, "The match set of a random permutation has the FKG property", Ann. of Probab., 16 (1988), pp. 1194–1214
[a6] C.M. Fortuin, P.N. Kasteleyn, J. Ginibre, "Correlation inequalities for some partially ordered sets", Comm. Math. Phys., 22 (1971), pp. 89–103
[a7] R.L. Graham, "Applications of the FKG inequality and its relatives", Proc. 12th Internat. Symp. Math. Programming, Springer (1983), pp. 115–131
[a8] L.A. Shepp, "The XYZ conjecture and the FKG inequality", Ann. of Probab., 10 (1982), pp. 824–827

How to Cite This Entry: Ahlswede–Daykin inequality. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Ahlswede%E2%80%93Daykin_inequality&oldid=22009
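As a numerical sanity check (not part of the encyclopedia entry), the inequality above can be verified on a small Boolean lattice. The function $f(x) = 2^{|x|}$ below is an assumed example that satisfies the hypothesis with equality, since $|x| + |y| = |x \cup y| + |x \cap y|$:

```python
# Check the four-functions inequality on the power set of {0, 1, 2},
# a finite distributive lattice with join = union and meet = intersection.
import itertools
import random

ground = (0, 1, 2)
L = [frozenset(c) for r in range(len(ground) + 1)
     for c in itertools.combinations(ground, r)]

# Assumed example: f(x) = 2**|x|, taken as f1 = f2 = f3 = f4.
f = {x: 2.0 ** len(x) for x in L}

def total(X):
    return sum(f[x] for x in X)

# Hypothesis f(x)f(y) <= f(x v y) f(x ^ y) holds (here with equality).
assert all(f[x] * f[y] <= f[x | y] * f[x & y] + 1e-12 for x in L for y in L)

# Conclusion checked on randomly sampled subsets X, Y of the lattice.
random.seed(0)
for _ in range(500):
    X = random.sample(L, random.randint(1, len(L)))
    Y = random.sample(L, random.randint(1, len(L)))
    joins = {x | y for x in X for y in Y}
    meets = {x & y for x in X for y in Y}
    assert total(X) * total(Y) <= total(joins) * total(meets) + 1e-9

print("Ahlswede-Daykin conclusion holds on all sampled subsets")
```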
https://brilliant.org/discussions/thread/golden-ratio-and-fibonacci-numbers/
# Golden Ratio and Fibonacci Numbers

The Golden Ratio is considered to be one of the greatest beauties in mathematics. Two numbers $a$ and $b$ are said to be in the Golden Ratio if

$$a > b > 0 \quad \text{and} \quad \frac{a}{b} = \frac{a+b}{a}.$$

If we set this ratio equal to some $\varphi$, then we have

$$\varphi = \frac{a}{b} = \frac{a+b}{a} = 1 + \frac{b}{a} = 1 + \frac{1}{\varphi}.$$

Solving the resulting quadratic we get two values of $\varphi$, viz. $\frac{1+\sqrt{5}}{2}$ and $\frac{1-\sqrt{5}}{2}$, the second of which turns out to be negative (extraneous), so we eliminate it. The first one is therefore taken to be the golden ratio (which is obviously a constant value). It is considered that objects with their features in the golden ratio are aesthetically more pleasing. A woman's face is in general more beautiful than a man's face since different features of a woman's face are nearly in the golden ratio.

Now let us come to the Fibonacci sequence. The Fibonacci sequence ${(F_n)}_{n\ge 1}$ is a sequence of natural numbers of the following form:

$$F_1 = 1, \quad F_2 = 1, \quad F_{n-1} + F_n = F_{n+1}.$$

Written as a list, the sequence is $1, 1, 2, 3, 5, 8, 13, 21, 34, \ldots$

The two concepts, the Golden Ratio and the Fibonacci sequence, seem to have completely different origins, yet they have an interesting relationship, which was first observed by Kepler. He observed that the golden ratio is the limit of the ratios of successive terms of the Fibonacci sequence, or of any Fibonacci-like sequence (by Fibonacci-like sequence, I mean a sequence with the same recursion relation as the Fibonacci sequence but different seed values). In terms of a limit:

$$\lim_{n\rightarrow\infty} \frac{F_{n+1}}{F_n} = \varphi.$$

We shall now prove this fact. Let

$$R_n = \frac{F_{n+1}}{F_n}, \quad \forall n \in \mathbb{N}.$$

Then for all $n \in \mathbb{N}$ with $n \ge 2$,

$$F_{n+1} = F_n + F_{n-1} \quad \text{and} \quad R_n = 1 + \frac{1}{R_{n-1}} > 1.$$

We shall show that this ratio sequence goes to the Golden Ratio $\varphi$, which satisfies $\varphi = 1 + \frac{1}{\varphi}$. We see that

$$\left| R_n - \varphi \right| = \left| \left( 1 + \frac{1}{R_{n-1}} \right) - \left( 1 + \frac{1}{\varphi} \right) \right| = \left| \frac{1}{R_{n-1}} - \frac{1}{\varphi} \right| = \left| \frac{\varphi - R_{n-1}}{\varphi R_{n-1}} \right| \le \frac{1}{\varphi} \left| \varphi - R_{n-1} \right| \le \cdots \le \left( \frac{1}{\varphi} \right)^{n-2} \left| R_2 - \varphi \right|,$$

where the first inequality uses $R_{n-1} > 1$. Since $\frac{1}{\varphi} < 1$ and $\left| R_2 - \varphi \right|$ is a finite real number (whose value depends on the seed values), this clearly shows that $(R_n) \longrightarrow \varphi$.

Note by Kuldeep Guha Mazumder, 2 years, 11 months ago
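A quick numerical illustration of Kepler's observation (a minimal Python sketch, not part of the original note; the seed values can be changed to any positive pair):

```python
# Successive ratios R_n = F_{n+1} / F_n of a Fibonacci-like sequence
# approach the golden ratio phi, whatever the (positive) seed values.
phi = (1 + 5 ** 0.5) / 2

a, b = 1, 1  # seeds F_1 = F_2 = 1; try e.g. a, b = 3, 7 for a Fibonacci-like sequence
for n in range(2, 12):
    a, b = b, a + b
    print(f"R_{n} = {b / a:.10f}, |R_n - phi| = {abs(b / a - phi):.2e}")
```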
print "hello world" MathAppears as Remember to wrap math in $$...$$ or $...$ to ensure proper formatting. 2 \times 3 $$2 \times 3$$ 2^{34} $$2^{34}$$ a_{i-1} $$a_{i-1}$$ \frac{2}{3} $$\frac{2}{3}$$ \sqrt{2} $$\sqrt{2}$$ \sum_{i=1}^3 $$\sum_{i=1}^3$$ \sin \theta $$\sin \theta$$ \boxed{123} $$\boxed{123}$$ Sort by: - 2 years, 11 months ago You are welcome! Did you like it? - 2 years, 11 months ago Yes! - 2 years, 11 months ago My pleasure..:-) - 2 years, 11 months ago I like Fibonacci very much.It is really The beauty of Mathematics. - 2 years, 9 months ago Very nice knowledge.. Loved it...The Magic of Maths!!! - 2 years, 9 months ago https://brilliant.org/problems/wow-12/?group=w3HWB8GobVLl&ref_id=1095702 i posted a problem about the same thing my solution was almost the same as your proof of it (: - 2 years, 9 months ago I have seen your proof. Your idea is essentially the same. Only some of your steps are erroneous. - 2 years, 9 months ago - 2 years, 9 months ago Nothing as such. Only that you have put a plus sign in front of 1/phi. - 2 years, 9 months ago There is one more interesting thing I found yesterday. The Ratio of the diagonal and the side of a regular Pentagon is exactly equal to the golden ratio. - 2 years, 10 months ago Ok then I will write a note on it.. - 2 years, 10 months ago Didn't you find it extremely interesting? This is the beauty of Mathematics. - 2 years, 10 months ago Nice - 2 years, 11 months ago Thanks..don't you think whatever is written above is a reconciliation of two apparently different mathematical ideas?.. - 2 years, 11 months ago Nice work ! I read this in the book Da Vinci Code by Dan Brown. - 2 years, 11 months ago That is one book that I want to read but haven't read yet..thank you for your compliments..:-) - 2 years, 11 months ago Have you read any other book by Dan Brown ? If not then try them ,they are awesome . - 2 years, 11 months ago I have just bought The Da Vinci Code today..:-) - 2 years, 11 months ago
https://www.emerald.com/insight/content/doi/10.1108/IHR-01-2019-0001/full/html
# Do tipping motivations predict loyalty to the server in a restaurant?

Jeremy Whaley (University of Tennessee, Knoxville, Tennessee, USA)
Jinha Lee (University of Tennessee, Knoxville, Tennessee, USA)
Youn-Kyung Kim (University of Tennessee, Knoxville, Tennessee, USA)

ISSN: 2516-8142
Article publication date: 26 April 2019
Issue publication date: 14 November 2019

## Abstract

### Purpose

The purpose of this paper is to investigate whether guests' tipping motivations (i.e. server quality, social norm and food quality) and demographic characteristics (i.e. gender, age and income) influence loyalty to the server in a restaurant.

### Design/methodology/approach

Based on a national online survey consumer panel comprising 468 participants, the authors utilized a decision tree using R statistical software. Predictor variables were tipping motivations and demographic characteristics (age, gender and income). The target variable was loyalty to the server.

### Findings

The findings suggest that social norm, food quality and income influence customers' loyalty toward the same server on future visits. Social norm turned out to be the strongest predictor. If consumers did not adhere strongly to the social norm of tipping, their loyalty toward a particular server was affected by the combination of determinants such as server quality, social norm, income and food quality.

### Research limitations/implications

Future research can identify or develop scales of tipping motivations with stronger reliabilities in the context of restaurants. Future research can also explore other demographic differences (e.g. ethnicity and sexual orientation) in tipping motivations and server loyalty.

### Practical implications

Servers are indeed the primary contact point and they are in the most influential position with consumers. Overall, the results of this study provide an interesting insight in that a restaurant guest's experience can be ruined by poor food quality or can be mitigated by server quality. Thus, this research highlights a step-by-step process as to the actions that a server may perform and manage in order to enhance server loyalty.

### Originality/value

Loyalty has been examined in the context of products, brands or service providers. This study focuses on loyalty toward a specific server, because the consideration of the server–guest relationship provides both a compelling and timely area of study in that restaurants continue to look for unique ways to drive server–guest rapport and customer loyalty.

## Citation

Whaley, J., Lee, J. and Kim, Y.-K. (2019), "Do tipping motivations predict loyalty to the server in a restaurant?", International Hospitality Review, Vol. 33 No. 2, pp. 91-105. https://doi.org/10.1108/IHR-01-2019-0001

## Publisher

Emerald Publishing Limited

## Introduction

The restaurant tipping literature has found that the service restaurant guests receive is the dominant reason guests tip their servers (Lynn and Sturman, 2010; Whaley et al., 2014; Wilson, 2019). Further, customer loyalty has been approached to a greater extent at the level of a firm rather than that of a particular person within the firm (Lee et al., 2009; Moore et al., 2005). While many companies increasingly use artificial intelligence, the server–guest relationship as a human factor remains central to a restaurant's success. Restaurant service is labor-intensive and requires servers to display such human qualities as warmth or genuineness, which machines cannot duplicate.
As fewer consumers patronize restaurants, the key to restaurant success is the server, who functions as the business' frontline marketer (Hwang et al., 2013). In fact, Mattila (2001) acknowledged that servers who build rapport with their guests may promote guest satisfaction and restaurant loyalty. However, the tipping literature has not examined which factors predict loyalty to a specific server. In addition, research has suggested that restaurant tipping is a practice that largely depends on consumers who adhere to social norms (Azar, 2010), and many consumers tip 15–25 percent of the restaurant check regardless of server quality (Whaley et al., 2014). While it is a social custom, tipping may be more than remuneration for the service received; it is also a reward for the relationship that is formed between the server and the restaurant's guests. Naturally, servers must make guests feel comfortable throughout the dining experience (Kang and Hyun, 2012).

In the restaurant marketing literature, researchers have underscored the long-term relationship with the guests (Kim et al., 2001), because marketing costs for new restaurant guests are much higher than those for existing guests (Reichheld and Sasser, 1990). Given that the employee turnover rate is very high in the restaurant industry, it is more critical than ever for restaurant owners and managers to find ways to reduce consumers' tendency to switch restaurants (Ryu and Han, 2010). For example, if a restaurant server provides consistently high service quality to guests, s/he can motivate guests to return, thereby increasing customer retention and long-term loyalty. Furthermore, it is imperative for restaurateurs to cultivate customer loyalty, not only to the restaurant brand, but also to the servers who are the key in providing a level of relationship with the guest that the owners cannot control directly. Building rapport between the server and the guest can increase job satisfaction and ultimately reduce the turnover rate among the servers in the restaurant. Thus, consideration of the server–guest relationship provides both a compelling and timely area of study, in that restaurants continue to look for unique ways to increase server–guest rapport and customer loyalty. Therefore, findings on the extent to which tipping motivations establish loyalty to the restaurant server can advance the tipping literature.

Because loyalty to specific restaurants is declining, restaurants should look for competitive ways to reduce customers' tendency to switch to other restaurants and thereby lower customer acquisition costs. Thus, this study posited that servers provide a logical asset to a restaurant as frontline ambassadors. Servers can influence the guests' willingness to return in the future and recommend the servers to others. Moreover, researchers have found that loyal restaurant customers spend more money (Chen et al., 2008), and that guests who patronize a restaurant regularly do not switch establishments as often as infrequent users (Hyun, 2010). Although researchers have identified restaurant service received (Bodvarsson and Gibson, 1997; Lynn and McCall, 2000) and servers' behaviors (e.g. warmth, friendliness, touching and empathy: Jewell, 2008; Whaley et al., 2014) as tipping motivations, these factors are limited to loyalty to a restaurant rather than to a server. This study attempted to address this research gap.
In addition, loyalty to a particular server, or the intention to revisit a restaurant with the same server in the future, may depend on demographic characteristics such as a guest's gender, age and income bracket. However, demographic characteristics' influence in building the restaurant server–guest relationship has not been explored in previous studies. To this end, this study investigates whether guests' tipping motivations (i.e. server and food quality and adherence to social norms) and demographic characteristics (i.e. gender, age and income) influence loyalty to the server. To accomplish this goal, the study employed a decision tree (DT) model.

## Literature review

### Server loyalty

In the service setting, production and consumption are simultaneous because the service provider cannot separate himself or herself from the exchange (Parasuraman et al., 1985). In fact, consumers form a relationship not only with a particular brand and its offerings (Lee et al., 2009; Moore et al., 2005), but also with the service personnel who represent the brand (Kim et al., 2019). According to Parasuraman et al. (1988), the way personnel provide the service is more critical than the kind of service they provide. The interactive bond between the service provider and the guest constitutes the relationship-building construct, and leads to long-term loyalty (Brocato et al., 2015). Researchers have argued that influential causes of relationship building depend upon both humanistic and non-humanistic characteristics (Jin et al., 2017; Keh et al., 2013). The humanistic characteristics of building a relationship in a restaurant setting include attentiveness or compassion on the part of the service staff, while non-humanistic attributes include restaurant atmosphere, price and food quality (Jin et al., 2017).

When an individual displays gratitude in appreciation for the service received, it motivates the server to engage in a reciprocal, positive response (Fehr and Falk, 2002; Teng and Chang, 2013). The feeling of gratitude plays an important role in the restaurant server–guest relationship, as many researchers have explored the extent to which emotion causes restaurant guests to tip (Lynn and McCall, 2000). Therefore, it can be posited that the factors that motivate guests' tipping behavior are related to their level of gratitude and, ultimately, their loyalty to a specific server.

Some consumers tip even when they receive poor service, reflecting their compliance with a pre-established social norm (Azar, 2004; Whaley et al., 2014), and thus leave a tip, often as a thoughtless or pre-conditioned response. However, positive interaction, also referred to as rapport, provides a crucial consideration for inquiry if consumers want to return to the same server in the future. Rapport-building behaviors are essential in the retail business environment, as rapport enhances the relationship and leads to a high level of customer satisfaction and loyalty (Gremler and Gwinner, 2008). Indeed, restaurants' long-term success depends on employees' rapport with their customers (Ewing et al., 2001). Servers use a variety of behaviors to help them establish a quick and decisive connection between themselves and the guests (Gremler and Gwinner, 2008). Those behaviors include attention, imitation, courtesy and finding common ground. Servers are expected to act as ambassadors on the restaurant's behalf while developing personal relationships and connections with the guests (Kim et al., 2011).
Ultimately, building relationships through rapport affects loyalty to a server, which is the key to creating a lasting relationship between the guest and the restaurant. Indeed, how much a guest tips a server can reflect several tipping motivations. Therefore, we believe that this relation between tipping motivations and the amount tipped can contribute to determining the level of the guest's loyalty to the server.

### Tipping motivations

By reviewing the literature, we identified the tipping motivations as adherence to social norms, and server and food quality. The way each motivation predicts loyalty to a particular server is explained below.

#### Social norm

Social norms are individuals' socially expected patterns of behavior subject to person-to-person or person-to-group interactions (Cialdini and Trost, 1998), and dictate the way in which they should behave in certain circumstances (Earley and Ang, 2003). For some, the act of tipping represents that of a moral economy, meaning that consumers tip because of concern or to do the right thing (Mulinari, 2016). The practice of tipping is a behavior expected of restaurant guests, and many servers shun consumers who do not tip the amount appropriate for the bill. Tipping, in essence, is the right thing to do in a restaurant setting. More than four decades ago, Pepitone (1976) reported that social norms become institutionalized when a significant portion of a group acts in accordance with the norm. Norms include not only rules that indicate the importance of the way an individual is to act (e.g. tipping restaurant servers) but also the way the individual is not to act (e.g. "stiffing" or not tipping restaurant servers). Clearly, restaurant tipping serves as an example of a socially driven practice, as restaurant guests have engaged in the behavior for more than a century.

Cultural differences may also exist in the social norm of tipping. The findings of Azar's (2010) comparative study of consumer tipping behavior in the USA and Israel suggested that the two countries were similar in the number of respondents who tipped because of a feeling of gratitude to the server (approximately 68 percent of the respondents from each). However, the responses to the question about whether they tipped to conform with the social norm varied widely: almost 88 percent of respondents from the USA and approximately 40 percent from Israel indicated that they tipped to conform.

Research has also revealed that many individuals tip to acquire social approval or improve self-image (Lynn, 2009; Whaley et al., 2014). A guest's enhanced self-image may lead to an affinity for the server, because of the positive server–guest interaction. On the other hand, failing to conform to others' expectations may elicit negative emotions and strain the rapport (Pret and Carter, 2017). In this situation, the guest often does not complain, but simply refuses to return to the same restaurant or the same server (Soscia, 2007). Therefore, it can be assumed that ongoing compliance with a social norm (i.e. tipping) can influence the level of rapport between individuals (i.e. guest and server). This rapport leads to a purely reinforced behavior that is valued and rewarded. For example, guests who are known as good tippers will receive a higher priority from restaurant servers and, in many instances, better service; the positive feedback attributable to the server–guest rapport may ultimately lead to loyalty to a specific server.
The literature reveals that social norms are a major tipping motivation in the restaurant context; however, researchers have not examined whether they influence customers' further loyalty to a specific server. Consequently, additional investigation is needed to determine whether, and to what extent, compliance with the social norm of tipping affects a guest's intention to return to the same server in the future.

#### Server quality

Server quality depends on the positive interaction between the customer and the server, or positive personal connections (Ford et al., 2012). According to Bodvarsson and Gibson (1997), restaurant guests are sensitive to service quality, specifically, the server attributes of attentiveness, friendliness and promptness. In addition, they found that restaurant tips were more sensitive to poor service quality than good service quality. Lynn and Sturman (2010) also found that service contributed significantly to the tip's amount: for each one-point increase in the participant's rating on a five-point scale of service quality, a server's tip increased two percentage points. This indicates that respondents left a larger tip when they received better service. These findings imply that server quality is a critical factor that influences guests' tipping behavior and, ultimately, their satisfaction with a restaurant or server.

The literature suggests that server quality is related closely to establishing rapport or building a relationship. Many researchers have found that establishing rapport is a result of a wide range of server behaviors such as friendliness (Lynn, 2001; Whaley et al., 2014), timeliness (Lynn, 2001; Whaley et al., 2014), empathy (Whaley et al., 2014), a light touch on the hand or shoulder (Jewell, 2008; Whaley et al., 2014), standing or squatting at the tableside (Lynn and Mynier, 1993; Whaley et al., 2014) and direct/indirect eye contact (Whaley et al., 2014). Because these behaviors are human-oriented intangible qualities that foster positive interpersonal relationships, they contribute to establishing rapport between a guest and server.

#### Food quality

Food is the core component of the restaurant experience (Hansen et al., 2005). Researchers have argued that high-quality food both entices and retains guests because it makes them feel appreciated and pleased, while food of marginal or lower quality causes guests to have negative experiences (Peri, 2006). Given food's significance in the restaurant experience, its quality predicts customer value (Whaley et al., 2019), restaurant image (Ryu et al., 2012) and patronage (Hansen et al., 2005). However, reports of the relation between food quality and tipping have been somewhat contradictory. For example, some researchers found that the assessment of food quality influences the perception of a server's quality (Peri, 2006) and tipping (Azar, 2007), while others have concluded that the amount of the tip is related more strongly to server quality than food quality (Mok and Hansen, 1999). Nonetheless, food quality represents a leading indicator of the guest's assessment of the quality of the restaurant experience overall, because food is a primary motive in restaurant selection. As a result, food quality may influence the amount of the tip a guest decides to leave, and thus affect the extent to which s/he is willing to return to the restaurant and request the same server on future visits.
### Demographic characteristics

In addition to tipping motivations, demographic considerations need to be included to understand the server–guest relationship. Although limited research has been conducted on the association between guests' demographic profiles and their loyalty to a server, previous studies have suggested that consumers respond differently to situations based on their age, gender and income.

#### Age

Age is an essential factor to consider when investigating restaurant tipping behavior. However, this factor may depend upon the individual's life cycle (Slama and Tashchian, 1985). For example, younger and senior restaurant guests may choose the amount they will spend on food and tips based on discretionary income. These individuals may be on a limited or fixed income and thus weigh the amount of the tip they leave cautiously according to the level of service they received. Further research has demonstrated that Millennials exhibit a strong sense of social justice, in which they believe that everyone should be treated fairly and equitably (Fox, 2012). According to Lynn (2017), Millennials dislike the social custom of tipping, as many of them like cheaper, communal and on-the-go kinds of meal experiences (Carmen, 2018). Other researchers have argued that younger consumers are less loyal to a brand or company and more difficult to retain for repeat purchases than are older consumers (Bush et al., 2004; Lazarevic, 2012). Accordingly, younger consumers switch brands more frequently than do older consumers. Parment (2013) compared Generation Y and Baby Boomer consumers' shopping behavior and buyer involvement: Baby Boomers start the purchase process with a retailer they trust, while Generation Y consumers begin the purchase process by choosing a product. This implies that older consumers are more likely to become loyal when they establish trust with a server than are younger consumers.

#### Gender

With respect to gender, research has demonstrated that women are more interdependent than men, strive to be connected to others, focus on maintaining relationships and wish to foster harmonious relations with others (Cross and Madson, 1997; Ndubisi, 2006). However, Melnyk et al. (2009) found that gender differences were context specific. Female consumers tend to be more loyal to individuals (e.g. individual service providers), while male consumers are more loyal to a group of people. The authors claimed that marketers must understand gender differences in loyalty to develop appropriate selling approaches and marketing programs for each gender. Because the restaurant server–guest relationship is an emotional and/or personal relationship, it may be presumed that female consumers are more likely to return to the same server on their future visits.

#### Income

Several studies have suggested that there is a link between income and loyalty (Homburg and Giering, 2001; Kasper, 1988), although some have found no associations between the two variables (East et al., 1995). For instance, Kasper (1988) found that less educated and middle-income consumers were more loyal than their counterparts. In Homburg and Giering's (2001) study, product satisfaction was less important to people with high income, because the financial risk associated with purchasing a poor-quality product is lower for these consumers.
Because income level is related closely to education level, it can be assumed that high-income consumers use more information cues before making a decision and feel more comfortable dealing with new information, while low-income consumers rely on fewer information cues (Capon and Burke, 1980; Homburg and Giering, 2001). This implies that low-income consumers may be more willing to return to the same server rather than seek alternatives. Further, Alrubaiee and Al-Nazer (2010) found that income was associated inversely both with relationship marketing orientation (i.e. bonding, trust, communications and satisfaction) and customer loyalty, in that marketing orientation's effect on loyalty was stronger among lower- than higher-income customers. MacManus (2018) also stated that individuals with high incomes tend to be more conservative in their spending habits than those with lower incomes, who tend to be more liberal in their spending habits. This argument implies that low-income guests may tip a higher percentage of their bill compared to high-income guests. Although none of the findings described above address the relation between income and server loyalty directly, they suggest that lower-income consumers are more likely to be loyal to the same server than are those with higher incomes.

## Research objectives

Based on the literature review on tipping motivations and demographic characteristics associated with server loyalty, this study had the following two objectives:

1. to determine the extent to which tipping motivations such as server quality, social norms and food quality predict loyalty to the restaurant server; and
2. to determine the extent to which demographic characteristics such as gender, age and income predict loyalty to the restaurant server.

## Methods

### Measures

Based on the literature review, a DT model was created with three tipping motivations (server quality, food quality and compliance with the social norm) and three demographic variables (income, age and gender) as predictors and loyalty to the server as the target variable. The measures of server and food quality and social norms were derived from Whaley (2011) and Whaley et al. (2014), who based their scale development on previous studies (Lynn, 2009; Lynn and Sturman, 2010). Because research has shown that loyalty is associated closely with the intention to use the same product/firm/server and recommend it/him/her to others (Brady et al., 2005; Harris and Goode, 2004; Yim et al., 2008), loyalty to the server, the target variable, was measured by behavioral intentions to use the same server (BISS). BISS was operationalized as the mean of BISS 1 (asking for the same server on future visits) and BISS 2 (recommending the server to friends and family on future visits). A forced-choice questionnaire using a four-point Likert-type scale was used to reduce social desirability bias (Nederhof, 1985).

### Sample and data collection

The questionnaire was developed with Qualtrics and was distributed to an online consumer panel recruited via eRewards, an online market research firm. The survey contained questions on respondents' current tipping practices (e.g. "Do you tip when dining out?" and "How much do you tip when dining out?"), tipping motivations, BISS (loyalty to the server) and demographic information. From a total of 600 surveys collected, the researchers used 468 respondents in the analysis after excluding 132 that were incomplete. The respondents' demographic profiles are provided in Table I.
The majority of the respondents (98.1 percent) tipped when dining out. The largest number of respondents tipped 15 percent (36.5 percent), followed by 20 percent (32.2 percent) and 10 percent (12.1 percent).

### Analysis

To accomplish the research objectives, a DT model was designed using R statistical software. A DT model is a useful tool to achieve our research objectives because it can use both continuous and categorical variables simultaneously, in which classification and regression trees use a recursive partitioning algorithm for multivariate analysis (Hastie et al., 2009). The target variable, BISS, was split into a high group (⩾3) and a low group (<3). A 70/15/15 sampling strategy was employed by assigning 70 percent of the sample to the training set (n=327), 15 percent to the validation set (n=70) and 15 percent to the testing set (n=70; Song and Kim, 2016). We built the DT model first with the training set, then evaluated the predictive performance with the validation set. Finally, we tested whether the DT model predicted BISS using the testing set for cross-validation. Furthermore, the variables' importance was computed by a random forest (RF) algorithm to confirm informative predictors in the DT model.

## Results

Before running the DT model with the training set (n=327), we performed an exploratory factor analysis to determine the factors' internal consistency with Cronbach's α. Four factors (server quality, adherence to social norm, food quality and BISS) were identified that had Cronbach's αs above 0.70. However, two items were deleted because of low factor loadings (<0.40): "I feel obligated to tip even when service is bad" (server quality) and "When server establishes a personal connection, it influences my tipping" (adherence to social norm). Next, a confirmatory factor analysis was performed to validate the measurement model and assess the construct validities (Table II). Convergent validity was supported by relatively high standardized factor loadings (p<0.001) and satisfactory composite reliabilities ranging from 0.734 to 0.801, above the benchmark of 0.7 (Nunnally and Bernstein, 1994). Discriminant validity was confirmed, as the correlation coefficients of the constructs were below the threshold of 0.85 (Kline, 2011).

With BISS as the target variable, Figure 1 shows three high BISS groups (green) and four low BISS groups (blue) in the final nodes. The DT model revealed four influential variables, ranked by their importance beginning from the root node: social norm, server quality, income and food quality. As shown in Figure 1, adherence to the social norm at the root node was the most important predictor, because the data set was divided into groups by social norm first. Server quality as a second predictor then explained the remaining six groups (89 percent) under the pre-condition of adherence to the social norm below 2.9. The first high BISS group (11 percent of participants) was characterized only by its strong adherence to the social norm in tipping (social norm ⩾ 2.9) with a high node purity (prob=0.77). The other two high BISS groups were established when the second predictor, server quality, was high (server quality ⩾ 3.1).
The second high BISS group (14 percent of participants) was characterized by moderate adherence to the social norm (1.9 ⩽ social norm < 2.1) under high server quality, and the third high BISS group (5 percent of participants) by lower levels of adherence to the social norm (social norm < 1.9), while income level (income < 3.5, approximately $39,000) was a factor when they rated food quality as high (food quality ⩾ 2.8). All low BISS groups were identified first when their adherence to the social norm was low (social norm < 2.9). Under this condition, the first low BISS group (28 percent of participants) was identified when server quality was not high (server quality < 3.1) at a high node purity (prob=0.82). On the other hand, the other three low BISS groups showed more complex decision making: by adherence to the social norm (15 percent, 2.1 ⩽ social norm < 2.9), by adherence to the social norm and income (20 percent, social norm < 1.9, income ⩾ 3.5, approximately $39,000), and by adherence to the social norm, income and food quality (6 percent, social norm < 1.9, income < 3.5, food quality < 2.8).

Thus, the most important predictor of server loyalty (measured by BISS) was adherence to the social norm as the tipping motivation. When restaurant patrons adhered strongly to the social norm for tipping (social norm ⩾ 2.9), they were likely to request the same server on future visits and recommend the server to friends and family on future visits regardless of other factors related to tipping (e.g. server quality, food quality, age, gender and income). However, when they did not adhere strongly to the social norm (<2.9), other factors such as server quality, income and food quality influenced BISS. Specifically, when server quality was good (⩾3.1), people who adhered moderately to the social norm (2.1 < social norm < 2.9) were likely to ask for or recommend the server for future visits. Although the number was low (5 percent), those who tended to adhere to the social norm little (<1.9) were likely to develop high server loyalty if they evaluated the food quality highly (⩾2.8) under the condition of high server quality.

To reconfirm our influential predictors, we also measured the variables' importance using RF. RF was employed with three parameters: mtry (=3), which indicates the number of variables to test at each split; nodesize (=1), which refers to the minimum size of terminal nodes; and ntree (=1,000), which indicates the number of trees to run (Kuhn and Johnson, 2013). The variables' importance in RF was measured by the mean decrease accuracy (MDA). According to the MDA estimates for each variable, Table III illustrates the informative predictors of server loyalty in the order of adherence to the social norm, server quality, food quality, income, age and gender. The results corresponded with those of the DT model, showing the relative importance of adherence to the social norm and server quality as tipping motivations.

Table IV shows the evaluation and comparison of our DT model. A confusion matrix and AUC, the area under the receiver operating characteristic (ROC) curve, were used to evaluate the model's predictive performance with the cross-validation and testing data sets (Fielding and Bell, 1997). The ROC curve plots the true positive rate against the false positive rate (Fielding and Bell, 1997). The AUC measures the probability that a randomly drawn positive instance will rank higher than a randomly drawn negative one (Fawcett, 2005).
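The shape of this workflow (70/15/15 split, tree fit on the training set, confusion-matrix/AUC evaluation on the validation and testing sets) can be illustrated with a minimal Python/scikit-learn sketch. The authors used R; the synthetic data, feature layout and tree depth below are placeholders, not the paper's code or data:

```python
# Sketch of a 70/15/15 decision-tree workflow with accuracy/precision/AUC evaluation.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, roc_auc_score

rng = np.random.default_rng(0)
n = 468
# Placeholder features standing in for: social norm, server quality,
# food quality, age, gender, income.
X = rng.normal(size=(n, 6))
# Placeholder binary target standing in for high vs low BISS.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)

# 70 percent training, then split the remainder evenly into validation and testing.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, train_size=0.70, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

for name, Xs, ys in [("validation", X_val, y_val), ("testing", X_test, y_test)]:
    pred = tree.predict(Xs)
    proba = tree.predict_proba(Xs)[:, 1]
    print(name,
          "accuracy:", round(accuracy_score(ys, pred), 3),
          "precision:", round(precision_score(ys, pred), 3),
          "AUC:", round(roc_auc_score(ys, proba), 3))
```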
The results of the confusion matrix and AUC analyses on the validation set indicated an accuracy of 0.743, a precision rate of 0.579 and an AUC of 0.707, which demonstrates that the model had satisfactory prediction accuracy with a low misclassification rate. The results on the testing set showed an overall accuracy of 0.814, a precision rate of 0.714 and an AUC of 0.841. Thus, the DT model in this study yielded discriminant prediction results that both the cross-validation and testing sets confirmed consistently.

## Discussion and implications

Restaurant guests engage in tipping now more than ever, as researchers continue to cite its growing economic impact (Wilson, 2019). As tipping has served as a unique financial axis of the restaurant industry, this study attempted to advance the understanding of tipping behavior, including a new association between tipping motivations and server loyalty. This study investigated the extent to which server quality, adherence to the social norm, food quality and the demographic characteristics of age, gender and income predict a restaurant guest's loyalty to the same server.

The findings of the study revealed that compliance with a social norm, as the primary driver of tipping motivations, influenced the guests' decision to select the same server on future visits. Specifically, the results demonstrated that people who adhered strongly to the social norm in tipping are likely to ask for, and recommend, the same server regardless of their income, age, gender, and a restaurant's server and food quality. This first high BISS group consisted of approximately 10 percent of the population. This finding indicates that even after guests assess the server according to his or her delivery of service, social norms play a primary role in the relationship between the server and the guest. Because consumers are emotion driven, they are willing to pay for quality and enjoyable experiences. Prior studies show that some guests who do not tip sufficiently may experience feelings of guilt, because they do not conform to the social norm (Azar, 2010; Whaley et al., 2014). This result reinforces the role of the social norm in forming rapport between the consumer and the server.

Other high BISS consumers did not adhere only to the social norm for tipping. Their loyalty to a particular server was affected by the combination of such determinants as server and food quality, social norm and income. In fact, while the literature has found that server quality largely explains tipping behavior (Lynn and Sturman, 2010; Whaley et al., 2014), our results also found that it was an important predictor of server loyalty. Servers should understand that many guests desire a personal interaction with them that leads them to develop loyalty to the servers. Thus, consumers who adhere only moderately to the social norm establish their loyalty to the server based on high server quality. Furthermore, consumers who adhere little to the social norm establish their BISS with the additional effects of their income and food quality under the condition of good service quality. It is important to note a difference between the two high BISS groups: the high BISS group with moderate social norm was influenced by server quality, whereas the high BISS group with low social norm was influenced by their income and food quality. In addition, among the three demographic variables, income was the only important predictor of server loyalty.
More interestingly, the consumer group with relatively lower income (< $39,000) had the tendency to request the same server, implying that low-income consumers are more likely to value a long-term relationship with the server than their counterparts. Furthermore, this study's results provide another interesting insight, in that while poor food quality can ruin restaurant guests' experience, server quality can mitigate the effect. Servers should maximize their rapport during the time with their guests, and in doing so they must be strategic in their service delivery by recognizing which guests need more or less attention during the dining experience. Therefore, restaurateurs should consider the importance of server quality in influencing long-term loyalty to the same server, and ultimately to the restaurant. Because other important variables such as social norm and income were customer-intrinsic factors, server quality seems to be the factor that restaurateurs can control to enhance the level of relationship with the guest.

This research highlights a step-by-step process with respect to the actions that a server may perform and manage to enhance server loyalty. From a practical perspective, the question now becomes: what kind of server do guests want? From the results of this study, servers should understand that promptness, timeliness and service quality tend to influence the guests' willingness to visit them in the future or recommend them to others. On the other hand, some may ask: if tipping indeed is a social norm, what characteristics of the server–guest relationship would likely influence loyalty to a server? There is a definite need for further investigation into the behaviors that will influence those who have a preference for the same server beyond a normative behavior.

With respect to food quality, helping servers realize that guests tip differently based on food quality would reduce a significant amount of emotional investment and, in turn, may help reduce their negative feelings toward guests who do not tip, because food quality influences BISS under a certain condition (i.e. people who adhere little to the social norm and have incomes less than $39,000). While food quality is not under restaurant servers' immediate control, delivering quality food in a timely manner is a crucial task for the restaurant. Nonetheless, servers play a key role in providing restaurant guests a quality experience. It is interesting to note that the DT analysis assigned more respondents to low BISS than high BISS. Clearly, restaurant managers should consider these findings, revisit their management and train their employees to provide guests with positive experiences, which ultimately will increase their loyalty to a particular server. Building rapport with guests would increase repeat customers, which, in turn, would enhance the servers' job satisfaction and ultimately reduce the turnover rate.

## Limitations and future research

While every effort was made to minimize the shortcomings of this study, it does have limitations. Although the scales of tipping motivations used in this study showed reliabilities above the threshold (0.70), they were not high. Therefore, future research can identify or develop scales with stronger reliabilities in the restaurant context.
In particular, a scale for server loyalty has never been developed, and special attention is required to develop this scale because of the increased importance of human factors and server–guest rapport in the restaurant industry (Hwang et al., 2013; Mattila, 2001). In addition, this study used four-point rating scales to reduce social desirability bias (Nederhof, 1985). Future studies may use six-point rating scales to achieve a better distribution of the data. Other interesting analyses can include structural equation modeling, with which tipping motivations can be used as antecedent variables that influence server loyalty, with satisfaction with the service as a mediator. Future research can explore other demographic differences (e.g. ethnicity and sexual orientation) in tipping motivations and server loyalty. Finally, the study was limited because the particular restaurant type and the time of the respondents' visit were not determined. Future research can examine the way in which restaurant type influences customers' tipping behavior and loyalty to a particular server.

## Figures

### Figure 1
[Figure: The decision tree predictive model of tipping for BISS]

## Table I
Demographic profile of respondents

| Category | Level | n | % |
| --- | --- | --- | --- |
| Gender | Male | 205 | 43.8 |
| | Female | 263 | 56.2 |
| Ethnic group | White | 337 | 72.0 |
| | African-American | 66 | 14.1 |
| | Asian | 17 | 3.6 |
| | Native Hawaiian/Pacific Islander | 1 | 0.2 |
| | Other | 38 | 8.1 |
| Age | Mean (SD) | 43.56 (14.63) | |
| Employment | Not employed | 43.56 | 14.63 |
| | Retired | 55 | 11.8 |
| | Full time | 71 | 15.2 |
| | Part time | 56 | 12.0 |
| | Self-employed | 23 | 4.9 |
| | Other | 30 | 6.4 |
| Income | Less than $12,000 | 39 | 8.3 |
| | $12,000~$20,999 | 37 | 7.9 |
| | $21,000~$40,999 | 118 | 25.2 |
| | $41,000~$52,999 | 53 | 11.3 |
| | $53,000~$67,999 | 58 | 12.4 |
| | $68,000~$111,999 | 98 | 20.9 |
| | $112,000~$156,999 | 38 | 8.1 |
| | $157,000 or more | 27 | 5.8 |

## Table II
Measurement items

| Construct and item | Loading | Composite reliability |
| --- | --- | --- |
| Social norm | | 0.750 |
| Sometimes I feel pressured to tip | 0.555 | |
| I feel more obligated to tip when dining with friends and/or family | 0.684 | |
| On occasion, I tip to impress | 0.733 | |
| A server's gender influences my tipping behavior | 0.641 | |
| Server quality | | 0.734 |
| Promptness of a server's greeting influences my tipping behavior | 0.546 | |
| I tip more than I normally do when service is excellent | 0.496 | |
| Timeliness of service influences my tipping behavior | 0.679 | |
| Unsatisfactory service negatively influences my tipping behavior (leave less tip) | 0.526 | |
| My tipping behavior is directly related to the service received | 0.722 | |
| Food quality | | 0.787 |
| The quality of the restaurant's food influences my tipping behavior | 0.797 | |
| When the restaurant's food quality is poor, I leave a smaller tip | 0.814 | |
| BISS | | 0.801 |
| Ask for the same server on future visits | 0.736 | |
| Recommend the server to friends and/or family on future visits | 0.893 | |

Goodness-of-fit indices: χ2(59)=135.62 (p<0.001), χ2/df=2.3, comparative fit index (CFI)=0.916, Tucker–Lewis index (TLI)=0.889, standardized root mean square residual (SRMR)=0.060 and root mean square error of approximation (RMSEA)=0.070.

## Table III
Variable importance

| Variable | No (low BISS) | Yes (high BISS) | Mean decrease accuracy |
| --- | --- | --- | --- |
| Server quality | 5.064 | 18.468 | 15.339 |
| Food quality | −1.066 | 8.790 | 4.808 |
| Social norm | 18.127 | 21.870 | 27.676 |
| Gender | −0.455 | 3.293 | 1.649 |
| Age | −0.128 | 6.123 | 3.872 |
| Income | 4.363 | 1.090 | 3.972 |

## Table IV
Decision tree model evaluation and comparison

| Data set | n | Error rate | Accuracy | Precision | AUC |
| --- | --- | --- | --- | --- | --- |
| Training set | 327 | 0.251 | 0.749 | 0.717 | 0.755 |
| Validation set | 70 | 0.257 | 0.743 | 0.579 | 0.707 |
| Testing set | 70 | 0.186 | 0.814 | 0.714 | 0.841 |

## References
Alrubaiee, L. and Al-Nazer, N. (2010), "Investigate the impact of relationship marketing orientation on customer loyalty: the customer's perspective", International Journal of Marketing Studies, Vol. 9 No. 1, pp. 155-174.
Azar, O.H. (2004), "What sustains social norms and how they evolve? The case of tipping", Journal of Economic Behavior and Organization, Vol. 54 No. 1, pp. 49-64.
Azar, O.H. (2007), "Why pay extra? Tipping and the importance of social norms and feelings in economic theory", The Journal of Socio-Economics, Vol. 36 No. 2, pp. 250-265.
Azar, O.H. (2010), "Tipping motivations and behavior in the US and Israel", Journal of Applied Social Psychology, Vol. 40 No. 2, pp. 421-457.
Bodvarsson, O. and Gibson, W. (1997), "Economics and restaurant gratuities: determining tip rates", American Journal of Economics and Sociology, Vol. 56 No. 2, pp. 187-203.
Brady, M.K., Knight, G.A., Cronin, J., Tomas, G., Hult, M. and Keillor, B.D. (2005), "Removing the contextual lens: a multinational, multi-setting comparison of service evaluation methods", Journal of Advertising, Vol. 81 No. 3, pp. 215-230.
Brocato, E.D., Baker, J. and Voorhees, C.M. (2015), "Creating consumer attachment to retail service firms through sense of place", Journal of the Academy in Marketing Science, Vol. 43, pp. 200-220.
Bush, A.J., Martin, C.A. and Bush, V.D. (2004), "Sports celebrity influence on the behavioral intentions of generation Y", Journal of Advertising Research, Vol. 44 No. 1, pp. 108-118.
Capon, N. and Burke, M. (1980), "Individual, product class and task-related factors in consumer information processing", Journal of Consumer Research, Vol. 7, pp. 314-326.
Carmen, T. (2018), "Study: millennials not a fan of tipping", available at: www.journalgazette.net/food/20180624/study-millennials-not-fans-of-tipping (accessed June 24, 2018).
Chen, Z.X., Shi, Y. and Dong, D. (2008), "An empirical study of relationship quality in a service setting: a Chinese case", Marketing Intelligence and Planning, Vol. 26 No. 1, pp. 11-25.
Cialdini, R.B. and Trost, M.R. (1998), "Social influence: social norms, conformity, and compliance", in Gilbert, D.T., Fiske, S.T. and Lindzey, G. (Eds), The Handbook of Social Psychology, McGraw-Hill, New York, NY, pp. 151-192.
Cross, S.F. and Madson, L. (1997), "Models of the self: self-construals and gender", Psychological Bulletin, Vol. 122 No. 1, pp. 5-37.
Earley, P.C. and Ang, S. (2003), Cultural Intelligence: Individual Interactions Across Cultures, Stanford University Press, Palo Alto, CA.
East, R., Harris, P., Wilson, G. and Lomax, W. (1995), "Loyalty to supermarkets", The International Review of Retail, Distribution and Consumer Research, Vol. 5 No. 1, pp. 99-109.
Ewing, M.T., Pinto, T.M. and Soutar, G.N. (2001), "Agency-client chemistry: demographic and psychographic influences", International Journal of Advertising, Vol. 20 No. 2, pp. 169-188.
Fawcett, T. (2005), "An introduction to ROC analysis", Pattern Recognition Letters, Vol. 27, pp. 861-874.
Fehr, E. and Falk, A. (2002), "Psychological foundations of incentives", European Economic Review, Vol. 46 Nos 4/5, pp. 687-724.
Fielding, A.H. and Bell, J.F. (1997), "A review of methods for the assessment of prediction errors in conservation presence/absence models", Environmental Conservation, Vol. 24 No. 1, pp. 38-49.
Ford, R.C., Sturman, M.C. and Heaton, C.P. (2012), Managing Quality Service in Hospitality, Wiley Online, New York, NY.
(2012), Their Highest Vocation: Social Justice and the Millennial Generation, Peter Lang, New York, NY, available at: https://eric.ed.gov/?id=ED528768 Gremler, D.D. and Gwinner, K.P. (2008), “Rapport-building behaviors used by retail employees”, Journal of Retailing, Vol. 3, pp. 308-324. Hansen, K.V., Jensen, Ø. and Gustafsson, I.-B. (2005), “The meal experiences of á la carte restaurant customers”, Scandinavian Journal of Hospitality and Tourism, Vol. 5 No. 2, pp. 135-151. Harris, L.C. and Goode, M.M. (2004), “The four levels of loyalty and the pivotal role of trust: a study of online service dynamics”, Journal of Retailing, Vol. 80 No. 2, pp. 139-158. Hastie, T., Tibshirani, R. and Friedman, J. (2009), The Elements of Statistical Learning, 2nd ed., Springer, New York, NY. Homburg, C. and Giering, A. (2001), “Personal characteristics as moderators of the relationship between customer satisfaction and loyalty – an empirical analysis”, Psychology & Marketing, Vol. 18 No. 1, pp. 43-66. Hwang, J., Kim, S.S. and Hyun, S.S. (2013), “The role of server-patron mutual disclosure in the formation of rapport with and intentions of patrons at full-service restaurants: the moderating roles of marital status and education level”, International Journal of Hospitality Management, Vol. 33, pp. 64-75. Hyun, S. (2010), “Predictors of relationship quality and loyalty in the chain restaurant industry”, Cornell Hospitality Quarterly, Vol. 51 No. 2, pp. 251-267. Jewell, C.N. (2008), “Factors influencing tipping behavior in a restaurant”, Psi Chi Journal of Undergraduate Research, Vol. 13 No. 1, pp. 38-48. Jin, N., Line, N.D. and Yoon, D. (2017), “Understanding the role of gratitude in building quality relationships”, Journal of Hospitality Marketing and Management, Vol. 27 No. 4, pp. 465-485. Kang, J. and Hyun, S. (2012), “Effective communication styles for the customer-oriented service employee: inducing dedicational behaviors in luxury restaurant patrons”, International Journal of Hospitality Management, Vol. 31 No. 3, pp. 772-785. Kasper, H. (1988), “On problem perception, dissatisfaction and brand loyalty”, Journal of Economic Psychology, Vol. 9, pp. 387-397. Keh, H.T., Ren, R., Hill, S.R. and Li, X. (2013), “The beautiful, the cheerful, and the helpful: the effects of service employee attributes on customer satisfaction”, Psychology & Marketing, Vol. 30 No. 3, pp. 211-226. Kim, I., Jeon, S.M. and Hyun, S. (2011), “The role of effective service provider communication style in the formation of restaurant patrons’ perceived relational benefits and loyalty”, Journal of Travel and Tourism Marketing, Vol. 28 No. 7, pp. 765-786. Kim, S., Ham, S., Moon, H., Chua, B.L. and Han, H. (2019), “Experience, brand prestige, perceived value (functional, hedonic, social, and financial), and loyalty among GROCERANT customers”, International Journal of Hospitality Management, Vol. 77, pp. 169-177. Kim, W.G., Han, J.S. and Lee, E. (2001), “Effect of relationship marketing on repeat purchase and word of mouth”, International Journal of Hospitality Management, Vol. 25 No. 3, pp. 272-288. Kline, R.B. (2011), Principles and Practice of Structural Equation Modeling, Guilford Press, New York, NY. Kuhn, M. and Johnson, K. (2013), Applied Predictive Modeling, Springer, New York, NY. Lazarevic, V. (2012), “Encouraging brand loyalty in fickle generation Y consumers”, Young Consumers, Vol. 13 No. 1, pp. 45-61. Lee, Y., Back, K. and Kim, J. 
(2009), “Family restaurant brand personality and its impact on customers’ eMotion, satisfaction, and brand loyalty”, Journal of Hospitality and Tourism Research, Vol. 33 No. 3, pp. 305-328. Lynn, M. (2001), “Restaurant tipping and service quality: a tenuous relationship”, Cornell Hotel and Restaurant Administration Quarterly, Vol. 42 No. 1, pp. 14-20. Lynn, M. (2009), “Individual differences in self-attributed motives for tipping: antecedents, consequences, and implications”, International Journal of Hospitality Management, Vol. 28 No. 3, pp. 432-438. Lynn, M. (2017), “Should US restaurants abandon tipping? A review of the issues and evidence”, Resource, Vol. 5 No. 1, pp. 120-159. Lynn, M. and McCall, M. (2000), “Gratitude and gratuity: a meta-analysis of research on the service-tipping relationship”, Journal of Socio-Economics, Vol. 29 No. 2, pp. 203-214. Lynn, M. and Mynier, K. (1993), “Effect of server posture on restaurant tipping”, Journal of Applied Social Psychology, Vol. 23, pp. 678-685. Lynn, M. and Sturman, M. (2010), “Tipping and service quality: a within-subjects analysis”, Journal of Hospitality and Tourism Research, Vol. 34 No. 2, pp. 269-275. MacManus, S. (2018), Young v. old: Generational Combat in the 21st Century, Routledge, New York, NY. Mattila, A.S. (2001), “Emotional bonding and restaurant loyalty”, Cornell Hotel and Restaurant Administration Quarterly, Vol. 42 No. 6, pp. 73-79. Melnyk, V., Osselaer, S. and Bijmolt, T. (2009), “Are women more loyal customers than men? Gender differences in loyalty to firms and individual service providers”, Journal of Marketing, Vol. 73 No. 4, pp. 82-96. Mok, C. and Hansen, S. (1999), “A study of factors affecting tip size in restaurants”, Journal of Restaurant and Foodservice Marketing, Vol. 3 Nos 3/4, pp. 49-64. Moore, R., Moore, M.L. and Capella, M. (2005), “The impact of customer‐to‐customer interactions in a high personal contact service setting”, Journal of Services Marketing, Vol. 19 No. 7, pp. 482-491. Mulinari, P. (2016), “Weapons of the poor: tipping and resistance in precarious times”, Economic and Industrial Democracy, doi: 10.1177/0143831X16653188. Ndubisi, N.O. (2006), “Effect of gender on customer loyalty: a relationship marketing approach”, Marketing Intelligence and Planning, Vol. 24 No. 1, pp. 48-61. Nederhof, A.J. (1985), “Methods of coping with social desirability bias: a review”, European Journal of Social Psychology, Vol. 15 No. 3, pp. 263-280. Nunnally, J.C. and Bernstein, I.H. (1994), Psychometric Theory, McGraw-Hall, New York, NY. Parasuraman, A., Zeithaml, V. and Berry, L. (1985), “A conceptual model of service quality and its implications for future research”, Journal of Marketing, Vol. 49 No. 4, pp. 41-50. Parasuraman, A., Zeithaml, V. and Berry, L. (1988), “SERVQUAL: multiple-item scale for measuring consumer perceptions of service quality”, Journal of Retailing, Vol. 64 No. 1, pp. 12-40. Parment, A. (2013), “Generation Y vs Baby Boomers: shopping behavior, buyer involvement and implications for retailing”, Journal of Retailing and Consumer Services, Vol. 20 No. 2, pp. 189-199. Pepitone, A. (1976), “Toward a normative and comparative biocultural social psychology”, Journal of Personality and Social Psychology, Vol. 34, pp. 641-653. Peri, C. (2006), “The universe of food quality”, Food Quality and Preference, Vol. 17 Nos 1/2, pp. 3-8. Pret, T. and Carter, S. 
(2017), “The importance of ‘fitting in’: collaboration and social value creation in response to community norms and expectations”, Entrepreneurship and Regional Development, Vol. 29 Nos 7/8, pp. 639-667. Reichheld, F.F. and Sasser, W.E. (1990), “Zero defections: quality comes to services”, Harvard Business Review, Vol. 68 No. 5, pp. 105-111. Ryu, K. and Han, H. (2010), “Influence of the quality of food, service, and physical environment on customer satisfaction and behavioral intention in quick-casual restaurants: moderating role perceived price”, Journal of Hospitality and Tourism Research, Vol. 34 No. 3, pp. 310-329. Ryu, K., Lee, H.-R. and Kim, W.G. (2012), “The influence of the quality of the physical environment, food, and service on restaurant image, customer perceived value, customer satisfaction, and behavioral intentions”, International Journal of Contemporary Hospitality Management, Vol. 24 No. 2, pp. 200-223. Slama, M.E. and Tashchian, A. (1985), “Selected socioeconomic and demographic characteristics associated with purchasing involvement”, Journal of Marketing, Vol. 49 No. 1, pp. 72-82. Song, S.Y. and Kim, Y.K. (2016), “Theory of virtue ethics: do consumers’ good traits predict their socially responsible consumption?”, Journal of Business Ethics, Vol. 152 No. 4, pp. 1159-1175, available at: https://doi.org/10.1007/s10551-016-3331-3 Soscia, I. (2007), “Gratitude, delight, or guilt: the role of consumers’ emotions in predicting post-consumption behaviors”, Psychology and Marketing, Vol. 24 No. 10, pp. 871-894. Teng, C.C. and Chang, J.H. (2013), “Mechanism of customer value in restaurant consumption: employee hospitality and entertainment cues as boundary conditions”, International Journal of Hospitality Management, Vol. 32, pp. 169-178. Whaley, J., Kim, S. and Kim, Y.-K. (2019), “Factors influencing restaurant-tipping behavior”, Journal of Foodservice Business Research, Vol. 22 No. 2, pp. 117-131. Whaley, J.E. (2011), “What’s in a tip? An exploratory study of the motivations driving consumer tipping behavior”, Doctoral dissertation. Auburn University, Auburn, AL. Whaley, J.E., Douglas, A.C. and O’Neill, M.A. (2014), “What’s in a tip? The creation and refinement of a restaurant tipping motivations scale: a consumer perspective”, International Journal of Hospitality Management, Vol. 37, pp. 121-130. Wilson, E. (2019), “Tip work: examining the relational dynamics of tipping beyond the service counter”, Symbolic Interaction, available at: https://onlinelibrary.wiley.com/doi/full/10.1002/symb.413 Yim, C.K., Tse, D.K. and Chan, K.W. (2008), “Strengthening customer loyalty through intimacy and passion: roles of customer–firm affection and customer-staff relationships in services”, Journal of Marketing Research, Vol. 45 No. 6, pp. 741-756. Chang, T.Z. and Wildt, A.R. (1994), “Price, product information, and purchase intention: an empirical study”, Journal of the Academy of Marketing Science, Vol. 22 No. 1, pp. 16-27. ## Corresponding author Youn-Kyung Kim is the corresponding author and can be contacted at: [email protected]
http://mathoverflow.net/questions/61314/an-example-of-a-semigroup-with-solvable-word-problem-but-unsolvable-power-proble?sort=newest
# an example of a semigroup with solvable word problem but unsolvable power problem

We say that a semigroup $S$ has solvable power problem if there is an algorithm that takes as input an element $s \in S$ and decides whether or not there exist $m,n \in \mathbb{N}$ with $m \neq n$ and $s^m=s^n$. Does anybody know an "easy" (like finitely presented with relatively few relations) example of a semigroup with solvable word problem but unsolvable power problem? I would also be interested in an example of a group with solvable word problem but unsolvable power problem, if anybody has such an example. Thanks!

- The power problem asks for every $a,b$ whether $b=a^n$ for some $n$. It is not the same as what you wrote. In the case of groups your problem is known as the order problem (find out if an element is of finite order). – Mark Sapir Apr 11 '11 at 18:23

The only known way to construct this example (say, in the case of groups; the case of semigroups is similar) is the following. First consider the free Abelian group $F$ with free generators $a_1,a_2,\ldots$. Pick a recursively enumerable non-recursive set $I$ and impose the relations $a_n^{m!}=1$ whenever $n$ is the $m$th number from $I$ (we assume that there exists a computer that lists the numbers in $I$ in some order, one by one). That group, call it $A$, has solvable word problem. Indeed, consider any word $w=a_{i_1}^{k_1}\ldots a_{i_s}^{k_s}$. That word is equal to 1 in $A$ iff each $k_j$ is divisible by the $m!$ for which $i_j$ is the $m$th number from $I$. Since $m!$ must then be at most $|k_j|$, this bounds $m$. So given $w$ we start the computer that lists $I$ and wait till we have the first (not in the natural order!) $k_1+\ldots+k_s$ numbers from $I$ listed. The power problem in $A$ is of course not decidable. Since $A$ has solvable word problem, by Higman's theorem it embeds into a finitely presented group $G$. By Clapham's theorem, we can assume that $G$ has decidable word problem. But the power (order) problem in $G$ is not decidable.
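The waiting argument in the answer can be made concrete. Below is a minimal Python sketch of the resulting word-problem decision procedure for $A$; everything here (the function names, the `enumerate_I` interface, the toy set standing in for $I$) is illustrative and not part of the original answer:

```python
from itertools import islice
from math import factorial

def make_word_problem_solver(enumerate_I):
    """Word-problem decision procedure for the abelian group A above.

    `enumerate_I` is a hypothetical generator function that lists the
    recursively enumerable set I one element at a time."""
    def is_identity(word):
        # word: list of (n, k) pairs encoding a_n**k, with the n distinct
        budget = sum(abs(k) for _, k in word)         # k_1 + ... + k_s
        listed = list(islice(enumerate_I(), budget))  # wait for `budget` elements of I
        for n, k in word:
            if n in listed:
                m = listed.index(n) + 1               # n is the m-th number listed
                if k % factorial(m) != 0:             # relation a_n^{m!} = 1 applies
                    return False
            elif k != 0:
                # n is listed (if at all) only past position `budget`, so the
                # relevant m! exceeds |k| and cannot kill a_n**k
                return False
        return True
    return is_identity

# toy run with a decidable stand-in for I, just to exercise the code:
solver = make_word_problem_solver(lambda: iter([2, 5, 9]))
print(solver([(2, 6)]))  # True:  2 is 1st in I, and 1! divides 6
print(solver([(5, 3)]))  # False: 5 is 2nd in I, and 2! does not divide 3
```

Of course, the interesting case is when $I$ is non-recursive: the procedure still terminates on every word, but no algorithm can decide from $n$ alone whether $a_n$ has finite order, which is exactly the unsolvable order problem.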
https://www.greencarcongress.com/2015/01/20150122-eos.html
## Eos Energy Storage introduces grid-scale battery system at $160/kWh

##### 22 January 2015

Eos Energy Storage announced the commercial availability of its MW-scale Aurora system for deliveries starting in 2016. Eos's standard Aurora 1000|4000 product, a containerized 1 MW DC battery system providing four continuous hours of discharge, offers a cost-effective energy storage solution competitive with gas peaking generation and utility distribution infrastructure. The Aurora 1000|4000 will be sold at a price of $160/kWh in volume.

The Aurora product employs Eos's patented Znyth battery technology, which uses a safe aqueous electrolyte and a novel zinc-hybrid cathode to enable extremely low-cost electricity storage and long life. Eos's grid-scale product is designed to reliably integrate renewable energy, improve grid efficiency and resiliency, and reduce costs for utilities and consumers. With a 30-year life, Eos can provide peak electricity at a levelized cost of $0.12-0.17 per kWh, substantially less than conventional gas turbines and competing energy storage technologies. Source: Eos.

As Eos's manufacturing capacity ramps up, the company expects to deploy an aggregate of 1 MW of capacity over a series of projects in 2015, beginning with Consolidated Edison and GDF SUEZ, and including a project with Pacific Gas & Electric funded by the California Energy Commission. The Aurora system's commercial availability will improve the competitiveness of developers bidding into the PG&E and SCE solicitations due this quarter.

"A large number of inquiries regarding the California storage opportunities prompted us to make this announcement. We believe in full transparency around availability and pricing; we hope in this manner to provide the best product and the best value to our partners and customers." —Eos President Steve Hellman

Eos is working with major power controls and integration partners to sell, install, and maintain AC-integrated battery systems through its Aegis Program, which includes Toshiba, Gamesa Electric, and others. The program is structured such that Eos supplies the containerized DC battery and battery management system while the Aegis Partners provide the power control systems and integration layer, and take responsibility for installation, operation, and maintenance. Eos will work with the Aegis partners to support bids into California's energy storage RFPs and plans to announce further details of the program in the coming weeks.

### Comments

If the 30 yr life and chart are true - this is a winner. That cost sounds way better than anything lithium batteries can do for grid storage. That's a sales price too.

"Eos will use the new money to bring that line up to what Philippe Bouchard, business development manager, described as a 'megawatt-per-month production capability' over the course of the year. The startup will deliver about 1 megawatt of its DC battery systems in 2015, he said. But 'Eos' business strategy is not to become a large global manufacturer of this technology,' he said. 'There are other manufacturers better suited to that task.' To that end, the startup is in discussions with what he described as some of the 'largest contracts manufacturers in the world,' in search of partners willing to replicate its production lines, and help it to reach its goal of 100 megawatts of annual capacity in 2016."
http://www.greentechmedia.com/articles/read/eos-raising-25m-to-build-megawatts-of-low-cost-grid-batteries

This, combined with the plummeting cost per kWh of wind and PV electricity, is another nail in the coffin for coal.

Isentropic storage is even cheaper.

REs with storage may be the final 'clean' solution at an affordable cost. H2/FC may be another competitive solution when combined with H2 stations for FCVs?

@clett That Isentropic storage is a neat idea. But just you wait, it's only a matter of time before some troll argues something silly, like "peak" gravel or nitrogen. ;)

The floor is 12¢/kWh (the Eos site does not say whether this is storage cost or includes the price of power to charge the battery). That's pretty steep, when wholesale baseload costs are on the order of 5¢. When you add the full RE feed-in tariff, the wholesale price is going to have to be north of 20¢, which is far too high for many users. Nuclear produces (not just stores) power for far less. Kewaunee was shut down because it couldn't get contracts to sell its power for even 6¢/kWh. If anyone cares, my analysis of the cost of power on an RE/Eos grid was posted some time ago at The Ergosphere.

E-P, you're looking at the wrong market. Energy storage systems are not competing with energy _production_ (e.g., nuclear), but rather with peaking simple cycle gas turbines. In a "strong" RE solution, the biggest bump in unmet demand comes in the four hours after peak. That's realistically achievable with batteries. Maintaining base load for the entire night is a wholly different matter, and I don't really see how that can be achieved with battery energy storage systems. Bringing us back to nuclear... Which, of course, raises the question - why not just over-build the reactors and throttle as necessary to meet demand? The marginal costs should be minimal once you start building the power plant.

I think throttling the reactors is barking up the wrong tree; if they're loaded with enough fuel to run flat-out until the next scheduled fueling, it makes no sense at all to turn them down. What we need is secondary uses either for electricity (e.g. pre-heating DHW) or heat itself. Elsewhere I suggested pervaporation of water to generate anhydrous ethanol using tapped steam. There are plenty of other uses for heat which could potentially be co-located with nuclear plants.

@E-P: Agree completely. Living in drought-stricken coastal California, the obvious use for extra heat and/or electricity is desalination. My little hamlet just installed a desalt-cum-water-reclamation plant for which the main operating expense is going to be electricity. My fervent hope is that CA comes to its senses some day and builds more nuclear power plants and fewer wind turbines.

More on desal: the beauty of using excess energy (heat or electricity) for desal is that, unlike energy, water is eminently store-able, and extensive water storage infrastructure already exists in most developed countries. In the growing areas of the world with water security issues, desal is the future, and it dovetails perfectly with a grid that needs to cope with excess electricity or nuclear power plants that need to run flat out to best amortize their capital costs.

If I understand correctly, pervaporated water is reasonably pure (not even ethanol will go through the membranes). That could kill two birds with one stone, producing a nearly-pure water stream from ethanol production instead of a high-BOD byproduct liquid that is not potable.

$160 per kWh is rather high.
Maybe the reason is that it is not a mass product yet. Payback calculation for peak power is a complicated trick. It could include not only power generation but also transmission and distribution cost avoidance. Usually manufacturers do not know how utilities calculate peak power cost. Batteries have a lot of advantages due to their small scale. Power quality regulation has always been a market for such products and is normally regarded as a power distribution cost.

According to the website the batteries are capable of 10,000 cycles, so that works out to 1.6 cents per cycle. They have a round trip efficiency of 75%, so if you buy the electricity at 8 cents per kWh you need to sell it for about 10.7 cents just to break even. The biggest cost is the capital expense. If you need to get a 10% return you need to get back $16 in your first year, and if you cycle once per day that works out to 4.4 cents per day or kWh. Over time, as your capital investment gets paid back, you need less return. There are probably financial and technical considerations that I've overlooked, but it appears reasonable that if you can buy baseload at 4 cents and sell it at 12 cents on a regular basis these batteries might be a reasonable investment. Owners of rooftop solar systems with contracts to sell back to the grid at generous rates might find these batteries of interest as well.

A few cents more to support the changeover from fossil and other centralized generation methods with significant risks and unresolved issues is something that I am willing to pay for. The insistence on being the cheapest while we destroy our surroundings is a false argument. The consumer is often said to be unwilling to pay these costs, but in fact they are rarely given a choice.

$160/kWh is pretty good. It doesn't quite destroy the industries of degradation, but it provides a starting place.

I would think that rooftop solar cooperatives, where communities of like-minded people who have rooftop solar could form local micro grids, would be possible at a reasonable cost.

The comments to this entry are closed.
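The break-even arithmetic in the comment thread above is worth laying out explicitly. A minimal Python sketch, using the $160/kWh volume price from the article and the 10,000-cycle life, 75% round-trip efficiency and 8 ¢/kWh purchase price quoted by the commenters (the purchase price is their assumption, not an Eos figure):

```python
# Worked break-even check for the Aurora figures discussed above.
CAPEX_PER_KWH = 160.0   # $/kWh of capacity, Eos volume price
CYCLES = 10_000         # cycle life (per the comments, from the Eos website)
EFFICIENCY = 0.75       # round-trip efficiency (per the comments)
BUY = 0.08              # $/kWh off-peak purchase price (commenter's assumption)

capital_per_cycle = CAPEX_PER_KWH / CYCLES      # $0.016 per kWh-cycle
energy_cost = BUY / EFFICIENCY                  # ~$0.107 per delivered kWh
break_even = energy_cost + capital_per_cycle    # ~$0.123 per delivered kWh

print(f"capital recovery: {100 * capital_per_cycle:.1f} cents/kWh")
print(f"energy cost:      {100 * energy_cost:.1f} cents/kWh")
print(f"break-even price: {100 * break_even:.1f} cents/kWh")
```

Note that the commenter's 10.7-cent figure covers only the efficiency loss; folding in capital recovery pushes the break-even to roughly 12.3 cents, which is consistent with the 12 ¢/kWh floor another commenter reads off the Eos chart.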
https://mathematica.stackexchange.com/questions/117741/sums-over-partitions-and-sums-with-variable-indices
# sums over partitions and sums with variable indices

Is there a neat way to implement the following sums in Mathematica? $$s(l,k)=\sum\limits_{p_1+p_2+...+p_l=k} f_l(p_1,p_2,...,p_l)$$ and $$t(l)=\sum\limits_{i_1,i_2,...,i_l=1}^n f_l(i_1,i_2,...,i_l)$$ where $p_1,...,p_l\geq1$ ($p_1,...,p_l\in\mathbb N$) and $l\in\mathbb N$ isn't fixed? It seems I could use IntegerPartitions[k,{l}], but this doesn't really seem neat! I'm also having trouble implementing the 2nd sum, as $l$ is variable.

• I think I can get it running with IntegerPartitions. Still I want to ask how to implement $t(l)$ in a nice way? – user40804 Jun 6 '16 at 12:49
• FrobeniusSolve[] may be more expedient than IntegerPartitions[] in your first sum, if only a little slower. For your second: Sum[f @@ Array[K, l], ##] & @@ Transpose[PadRight[{Array[K, l]}, {2, Automatic}, n]] is an obvious solution, but there might be more efficient ones, more so if your $f_l$ has symmetry of some sort. – J. M. is in limbo Jun 6 '16 at 13:52
• Ok, I'll stick with IntegerPartitions for now, but I guess I can use FrobeniusSolve for more general sums! Thanks for the comment anyway! – user40804 Jun 6 '16 at 14:02

Three ways of computing t[l]:

Using Sum:

Sum[f @@ Table[i[j], {j, l}], ##] & @@ Table[{i[j], n}, {j, l}]

Generating all index lists:

Total[f @@@ Tuples[Range[n], l]]

Recursive:

rec[depth_] := If[depth == 0, f @@ Table[i[j], {j, l}], Sum[rec[depth - 1], {i[depth], n}]];
rec[l]

• Thanks a lot! I didn't know "Tuples"! I really like this one! Looks very clean to me! I think I'll use the second one, so I can set up my f[] very easily, so the argument can be a tuple! – user40804 Jun 6 '16 at 13:59
• Hey, I've got another question: I have multiple lists (t1,t2,t3,t4) and I want to make a new list such that it contains all elements of the form {a,b,c,d} where a,b,c,d are from t1,t2,t3,t4. I figured out I could use Flatten[Table[{t1[[i]], t2[[j]]}, {i, Length[t1]}, {j, Length[t2]}], 1] (here for 2 lists) but this doesn't seem very satisfying! Is there a better way? – user40804 Jun 6 '16 at 15:19
• Use Tuples again! See the docs. – Marius Ladegård Meyer Jun 6 '16 at 16:06
• Haha thanks again! I guess all I needed was Tuples!!! – user40804 Jun 6 '16 at 16:21
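For readers working outside Mathematica, the same two sums are easy to express with Python's itertools; a rough sketch (a Python analogue, not part of the thread, with a placeholder f). One caveat: the composition-based s below sums over ordered tuples with $p_1+\dots+p_l=k$, which matches the formula as written; IntegerPartitions enumerates unordered partitions and only agrees when $f_l$ is symmetric:

```python
from itertools import product

def t(f, l, n):
    """Sum of f(i_1, ..., i_l) over all indices 1 <= i_j <= n
    (the analogue of Total[f @@@ Tuples[Range[n], l]])."""
    return sum(f(*idx) for idx in product(range(1, n + 1), repeat=l))

def s(f, l, k):
    """Sum of f(p_1, ..., p_l) over compositions p_1 + ... + p_l = k
    with every p_j >= 1."""
    def compositions(length, total):
        if length == 1:
            yield (total,)
            return
        for first in range(1, total - length + 2):
            for rest in compositions(length - 1, total - first):
                yield (first,) + rest
    return sum(f(*p) for p in compositions(l, k))

# example: with f = product of its arguments, t(2, 3) sums i*j over a 3x3 grid
print(t(lambda *xs: __import__("math").prod(xs), 2, 3))  # 36
print(s(lambda *xs: __import__("math").prod(xs), 2, 5))  # 1*4 + 2*3 + 3*2 + 4*1 = 20
```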
http://crypto.stackexchange.com/tags/keys/new
# Tag Info

0 There is no such thing as an AES encryption standard. There is the block cipher, which can be used for a mode of operation (and possibly a padding method). Such a mode of operation however requires a keyed block cipher. And a password is not a key - even the PHP mcrypt guys have started to see this now. So you need (at least) some kind of Password Based Key ...

1 That's because AES is not a password-based encryption algorithm. It's a block cipher. It may seem like a detail, but such details matter. In cryptography, and in security in general, details often matter. AES is a pair of functions, each of which takes a key and a 128-bit message and produces a 128-bit message. The two functions are called encryption and ...

0 In the event this is a homework question, I'll give an answer that would be improbably correct so you may not use it. ;) But maybe people won't accuse you of asking a homework question if you would source where you got the example? The password and the key are two different things. Early cryptographic implementations used the password directly. This is ...

4 There are two things here: Encryption uses a mode of operation, not "AES alone". Some modes are randomized by an initialization vector - that means the encryption of the same text under the same algorithm is still randomized and not deterministic. The encryption methods take care of that. You only need the correct key to decrypt. Passwords are not ...

2 The first (and hardest) step is to factor $n$; the easiest way to do this (given $e$ and $d$) is with this randomized procedure: Select a random value $z$ from the range $(2, n-2)$. Compute the value $\lambda = (ed-1)/2^k$, where $k$ is the integer that makes $\lambda$ odd. Compute $t = z^\lambda \bmod n$. If $t = 1$ or $t = n-1$, we fail on ...

2 Not quite, if all possible permutations are allowed. There are $8! = 40320$ permutations over 3 bits; 15 bits of key allows you to specify $2^{15} = 32768$ of them; hence any mapping of 15 bits will necessarily leave $40320-32768 = 7552$ of the permutations unexpressible. It is doable if you don't allow every single permutation (e.g. allow only even permutations), ...

2 PKCS10 looks like relevant industry practice for private keys. See "Note 2" at page 4, Certification Request Syntax Specification - RFC 2986: The signature on the certification request prevents an entity from requesting a certificate with another party's public key. That is, someone requesting a certificate on a public key demonstrates his knowledge of ...

1 XORing a key and message is called a one-time pad. It is perfectly secure, providing confidentiality, when used correctly. That last part is the hard part, along with finding a situation in which you only need confidentiality.

1 Most ciphers — both classical and modern — will work just fine with any key. It's just that, if the key used to decipher the message does not match1 the key used to encipher it, the output will be essentially nonsense, and the actual intended message will not be revealed. (Some encryption systems may then detect that the decrypted text is ...

1 I personally have always seen a key used for encryption as being like the key to a door; I never compared it to a keystone. But I think it is fair to compare it to a key with which you open a door, since a key in the cryptographic sense gives you access to data or even to complete systems. Furthermore, a keystone in the sense that a key is needed to make it work is not ...

1 fkraiem's definition is too narrow.
$\:$ "In the context of encryption schemes," keys are "whatever piece of information the legitimate recipient of an encrypted message possesses, which allows him to decrypt the ciphertext" and any information related to keys of the type mentioned above, which allows its possessor to encrypt the plaintext . 3 In the context of encryption schemes, the key is whatever piece of information the legitimate recipient of an encrypted message possesses, which allows him to decrypt the ciphertext efficiently. Hence, the key must be kept hidden from an attacker, since otherwise the attacker could decrypt efficiently just as the legitimate recipent does. Top 50 recent answers are included
http://euler.stephan-brumme.com/39/
<< problem 38 - Pandigital multiples Champernowne's constant - problem 40 >> # Problem 39: Integer right triangles If p is the perimeter of a right angle triangle with integral length sides, \{ a,b,c \}, there are exactly three solutions for p = 120. \{ 20,48,52 \}, \{ 24,45,51 \}, \{ 30,40,50 \} For which value of p <= 1000, is the number of solutions maximised? # Algorithm Euclid's formula generates all triplets \{ a,b,c \}, see en.wikipedia.org/wiki/Pythagorean_triple Assuming a <= b <= c: a = k * (m^2 - n^2) b = k * 2mn c = k * (m^2 + n^2) perimeter = a + b + c Integer numbers m, n, k produce all triplets under these conditions: - m and n are coprime → their Greatest Common Divisor is 1 - m and n are not both odd And we can conclude: a must be positive (as well as b and c) therefore m > n Furthermore: perimeter = k * (m^2 - n^2) + k * 2mn + k * (m^2 + n^2) = k * (m^2 - n^2 + 2mn + m^2 + n^2) = k * (2m^2 + 2mn) = 2km * (m+n) which gives an approximation of the upper limit: 2m^2 < MaxPerimeter My program evaluates all combinations of m and n. For each valid pair all k are enumerated, such that the perimeter does not exceed the maximum value. A simple lookup container count stores for each perimeter the number of triangles. Following this precomputation step I perform a second step: extract those perimeters with more triangles than any smaller perimeter. The value stored at best[perimeter] equals the highest count[i] for all i <= perimeter. The actual test cases are plain look-ups into best. # My code … was written in C++11 and can be compiled with G++, Clang++, Visual C++. You can download it, too. #include <iostream> #include <set> #include <vector> // greatest common divisor unsigned int gcd(unsigned int a, unsigned int b) { while (a != 0) { unsigned int c = a; a = b % a; b = c; } return b; } int main() { const unsigned int MaxPerimeter = 5000000; // precomputation step 1: // count all triplets per perimeter (up to upper limit 5 * 10^6) // [perimeter] => [number of triplets] std::vector<unsigned int> count(MaxPerimeter + 1, 0); // note: long long instead of int because otherwise the squares m^2, n^2, ... might overflow for (unsigned long long m = 1; 2*m*m < MaxPerimeter; m++) for (unsigned long long n = 1; n < m; n++) { // make sure all triplets a,b,c are unique if (m % 2 == 1 && n % 2 == 1) continue; if (gcd(m, n) > 1) continue; unsigned int k = 1; while (true) { // see Euclidian formula above auto a = k * (m*m - n*n); auto b = k * 2*m*n; auto c = k * (m*m + n*n); k++; // abort if largest perimeter is exceeded auto perimeter = a + b + c; if (perimeter > MaxPerimeter) break; // ok, found a triplet count[perimeter]++; } } // precomputation step 2: // store only best perimeters unsigned long long bestCount = 0; std::set<unsigned int> best; best.insert(0); // degenerated case for (unsigned int i = 0; i < count.size(); i++) if (bestCount < count[i]) { bestCount = count[i]; best.insert(i); } // processing input boils down to a simple lookup unsigned int tests; std::cin >> tests; while (tests--) { unsigned int maxPerimeter; std::cin >> maxPerimeter; // find the perimeter with the largest count auto i = best.upper_bound(maxPerimeter); // we went one step too far i--; // print result std::cout << *i << std::endl; } return 0; } This solution contains 10 empty lines, 15 comments and 3 preprocessor commands. 
# Interactive test

You can submit your own input to my program and it will be instantly processed at my server. This is equivalent to:

echo "1 120" | ./39

Note: the original problem's input 1000 cannot be entered because just copying results is a soft skill reserved for idiots.

# Benchmark

The correct solution to the original Project Euler problem was found in 0.11 seconds on an Intel® Core™ i7-2600K CPU @ 3.40GHz. Peak memory usage was about 21 MByte. (compiled for x86_64 / Linux, GCC flags: -O3 -march=native -fno-exceptions -fno-rtti -std=c++11 -DORIGINAL)

See here for a comparison of all solutions.

Note: interactive tests run on a weaker (=slower) computer. Some interactive tests are compiled without -DORIGINAL.

# Changelog

February 25, 2017: submitted solution

# Hackerrank

My code solves 7 out of 7 test cases (score: 100%)

# Difficulty

Project Euler ranks this problem at 5% (out of 100%). Hackerrank describes this problem as easy.

Note: Hackerrank has strict execution time limits (typically 2 seconds for C++ code) and often a much wider input range than the original problem. In my opinion, Hackerrank's modified problems are usually a lot harder to solve. As a rule of thumb: brute-force is rarely an option.

projecteuler.net/thread=39 - the best forum on the subject (note: you have to submit the correct solution first)

Code in various languages:

Python: www.mathblog.dk/project-euler-39-perimeter-right-angle-triangle/ (written by Kristian Edlund)
Java: github.com/nayuki/Project-Euler-solutions/blob/master/java/p039.java (written by Nayuki)
Mathematica: github.com/nayuki/Project-Euler-solutions/blob/master/mathematica/p039.mathematica (written by Nayuki)
C: github.com/eagletmt/project-euler-c/blob/master/30-39/problem39.c (written by eagletmt)
Go: github.com/frrad/project-euler/blob/master/golang/Problem039.go (written by Frederick Robinson)
Javascript: github.com/dsernst/ProjectEuler/blob/master/39 Integer right triangles.js (written by David Ernst)
Scala: github.com/samskivert/euler-scala/blob/master/Euler039.scala (written by Michael Bayne)

# Heatmap

(A color-coded grid of all problems, linking to each solution, appears here: green = solved with a perfect 100% Hackerrank score; yellow = solved, scoring less than 100% at Hackerrank; gray = solved but not yet published; blue = solved, with no comparable Hackerrank version at the time; red = solved but exceeding the one-minute time limit.)
The 206 solved problems had an average difficulty of 27.5% at Project Euler and I scored 12,626 points (out of 14,300 possible points; top rank was 20 out of ≈60,000 in July 2017) at Hackerrank's Project Euler+. Look at my progress and performance pages to get more details. My username at Project Euler is stephanbrumme while it's stbrumme at Hackerrank.
http://debianletters.blogspot.com/2007/10/
## Saturday, 27 October 2007

### Working with vast raster images: NIP2

Problem: you have a vast raster image that needs to be processed and enhanced, and you need to see intermediate results and tune the parameters along the way.

Solution: nip2, a unique image processing tool originally built for processing museum images, helps us here, and it is included in the Debian GNU/Linux distribution.

The nip2 approach: images as tables

A non-standard and very productive approach is realised in this outstanding program: each image is placed in a cell of a table. It can be viewed, and it appears block by block. This approach saves a great amount of time and RAM. When you apply an image processing filter, you select the cell with the image to be processed, so operative recalculation of the processing results is possible. Every time you apply a new filter, the resulting image is inserted into the next cell of the table, together with the parameters of the applied filters. Therefore, if you change, for example, the parameters of the gamma correction of an image, all following filters applied to that image quickly recalculate the final and intermediate results. This is best seen in a screenshot.

Nip2 (at least version 7) knows the most common graphical file formats; although they are not numerous, work with them is organized very well. Besides its internal formats, nip2 knows TIFF, PPM, PNG and JPEG. Not so rich, but acceptable for comfortable work with graphics.

What NIP2 has and what it does not

Because the program is specialised for work with flat images, there are no tools for working with layers - that is not nip2's business. Withal, numerous advanced image processing techniques are implemented as fast algorithms (it is remarkable that image processing in NIP2 is far faster than in other programs, such as GIMP).

Pro:
- quick recalculation of final and intermediate results on any change of a filter's parameters;
- work with 8-, 16- and 32-bit images;
- advanced image processing techniques are implemented (morphological analysis, Fourier filtering, convolution, statistical noise-reduction techniques and more);
- colour correction and colour profiles are supported;
- algorithms work very fast, with frugal resource consumption;
- outstanding algorithms for processing large images.

Contra:
- no tools for working with layers;
- no image painting tools (only filtering).

It is also noteworthy that NIP2 has an unusual (but apt) interface and a sometimes illogical placement of filters in the menus. But the menus can be unclipped, and the most useful filters and instruments will always be close at hand.

Why nip2

Indeed, as mentioned here, NIP2 is only a GTK2 graphical frontend to the VIPS library. This C library is designed for processing images of vast size, mainly for studying pictures in museums. Digitized images of artwork are very large files, and processing them in conventional image editors is practically impossible. Moreover, many digital filters in image editors use slow techniques chosen for simplicity of implementation. NIP2 has a lot of features besides large image processing, so I hope this program helps you, too.

Here is a great survey of image processing libraries in Linux - it covers VIPS, too, on which NIP2 is based.

## Friday, 12 October 2007

### Quickly and easily installing LaTeX: LaTeX in Debian Quick HOWTO

Problem: you need to quickly and simply produce scientific articles, books, monographs and practically anything else containing many formulas and graphs.

Solution: installing LaTeX in Debian is a work of a moment.
In UNIX-like systems the packages for LaTeX are called tetex or texlive. So, let's open a terminal and type as root:

#aptitude install tetex-bin tetex-extra latex-ucs

This will occupy about 120 MB of disk space; let's confirm it. For the installation, only the first DVD disk is required. While the packages are installing, open your preferred text editor and type something like this:

\documentclass[a4paper, 12pt]{article}
\usepackage[T2A]{fontenc}
\usepackage[english]{babel}
\usepackage[pdftex,unicode]{hyperref}
\begin{document}
This is our first LaTeX document.
\end{document}

Save this document with any name and the *.tex extension, for example newlatexdoc.tex. Next, in the console, in the directory where you just saved the typed text, give the command:

$ pdflatex newlatexdoc.tex

Besides many other files, this produces the sought-for file newlatexdoc.pdf. That's all: you have already stepped into the great world of LaTeX.
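Since the whole point here is documents with many formulas, it is worth extending the minimal example with a displayed equation; a small sketch assuming the amsmath package (which ships with the tetex packages installed above; the equation itself is just an illustration):

\documentclass[a4paper, 12pt]{article}
\usepackage[T2A]{fontenc}
\usepackage[english]{babel}
\usepackage{amsmath}  % standard package for display mathematics
\begin{document}
The quadratic formula, typeset as a numbered display equation:
\begin{equation}
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\end{equation}
\end{document}

Compile it with pdflatex exactly as before; the equation comes out numbered and properly spaced with no extra effort.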
https://www.bioconductor.org/packages/devel/bioc/vignettes/coseq/inst/doc/coseq.html
# coseq package: Quick-start guide

#### 2019-02-13

coseq is a package to perform clustering analysis of sequencing data (e.g., co-expression analysis of RNA-seq data), using transformed profiles rather than the raw counts directly. The package implements two distinct strategies in conjunction with transformations: 1) MacQueen's K-means algorithm and 2) Gaussian mixture models. For both strategies model selection is provided using the slope heuristics approach and integrated completed likelihood (ICL) criterion, respectively. Note that for backwards compatibility, coseq also provides a wrapper for the Poisson mixture model originally proposed in Rau et al. (2015) and implemented in the HTSCluster package.

The methods implemented in this package are described in detail in the following three publications.

• Godichon-Baggioni, A., Maugis-Rabusseau, C. and Rau, A. (2018) Clustering transformed compositional data using K-means, with applications in gene expression and bicycle sharing system data. Journal of Applied Statistics, doi:10.1080/02664763.2018.1454894. [Paper that introduced the use of the K-means algorithm for RNA-seq profiles after transformation via the centered log ratio (CLR) or log centered log ratio (logCLR) transformation.]
• Rau, A. and Maugis-Rabusseau, C. (2018) Transformation and model choice for co-expression analysis of RNA-seq data. Briefings in Bioinformatics, 19(3):425-436. [Paper that introduced the idea of clustering profiles for RNA-seq co-expression, and suggested using Gaussian mixture models in conjunction with either the arcsine or logit transformation.]
• Rau, A., Maugis-Rabusseau, C., Martin-Magniette, M.-L., Celeux, G. (2015) Co-expression analysis of high-throughput transcriptome sequencing data with Poisson mixture models. Bioinformatics, 31(9): 1420-1427. link [Paper that introduced the use of Poisson mixture models for RNA-seq counts.]

Below, we provide a quick-start guide using a subset of RNA-seq data to illustrate the functionalities of the coseq package. In this document, we focus on the methods described in Rau and Maugis-Rabusseau (2018) and Godichon-Baggioni et al. (2018). For more information about the method described in Rau et al. (2015), see the HTSCluster vignette.

## Quick start (tl;dr)

A standard coseq analysis takes the following form, where counts represents a matrix or data.frame of gene-level counts arising from an RNA-seq experiment (of dimension n x d for n genes and d samples). Results, exported in the form of a coseqResults S4 object, can easily be examined using standard summary and plot functions (see below and the User's Guide for examples of plots that may be obtained):

```r
run <- coseq(counts, K=2:25)
summary(run)
plot(run)
```

The cluster labels for the selected model obtained from the coseq clustering may easily be obtained using the clusters function:

```r
clusters(run)
```

Similarly, the conditional probabilities of cluster membership for each gene in the selected coseq clustering model may be obtained using the assay accessor function:

```r
assay(run)
```

Note that unless otherwise specified by the user, coseq uses the following default parameters:

• K-means algorithm
• Log centered log transformation (logclr) on profiles
• Library size normalization via the Trimmed Mean of M-values (TMM) approach
• Model selection (choice of the number of clusters) via the slope heuristics approach
• No parallelization.

## Co-expression analysis with coseq
For the remainder of this vignette, we make use of the mouse neocortex RNA-seq data from Fietz et al. (2012) (available at https://perso.math.univ-toulouse.fr/maugis/mixstatseq/packages and as an ExpressionSet object called fietz in coseq). The aim in this study was to investigate the expansion of the neocortex in five embryonic (day 14.5) mice by analyzing the transcriptome of three regions: the ventricular zone (VZ), subventricular zone (SVZ) and cortical plate (CP). We begin by first loading the necessary R packages as well as the data.

```r
library(coseq)
library(Biobase)
library(corrplot)

data("fietz")
counts <- exprs(fietz)
conds <- pData(fietz)$tissue

## Equivalent to the following:
## counts <- read.table("http://www.math.univ-toulouse.fr/~maugis/coseq/Fietz_mouse_counts.txt", header=TRUE)
## conds <- c(rep("CP",5), rep("SVZ",5), rep("VZ",5))
```

The coseq package fits either a Gaussian mixture model (Rau and Maugis-Rabusseau, 2018) or uses the K-means algorithm (Godichon-Baggioni et al., 2018) to cluster transformed normalized expression profiles of RNA-seq data. Normalized expression profiles correspond to the proportion of normalized reads observed for gene i with respect to the total observed for gene i across all samples:

$$p_{ij} = \frac{y_{ij}/s_{j} +1}{\sum_{j'} y_{ij'}/s_{j'} +1},$$

where $s_j$ are normalization scaling factors (e.g., after applying TMM normalization to library sizes) and $y_{ij}$ represents the raw count for gene $i$ in sample $j$.

### Transformations for normalized profiles with the Gaussian mixture model

Since the coordinates of $\mathbf{p}_i$ are linearly dependent (causing estimation problems for a Gaussian mixture distribution), we consider either the arcsine or logit transformation of the normalized profiles $p_{ij}$:

$$g_{\text{arcsin}}(p_{ij}) = \text{arcsin}\left( \sqrt{ p_{ij} } \right) \in [0, \pi/2], \text{ and}$$

$$g_{\text{logit}}(p_{ij}) = \text{log}_2 \left( \frac{p_{ij}}{1-p_{ij}} \right) \in (-\infty, \infty).$$

Then the distribution of the transformed normalized expression profiles is modeled by a general multidimensional Gaussian mixture

$$f(.|\theta_K) = \sum_{k=1}^K \pi_k \phi(.|\mu_k,\Sigma_k)$$

where $\theta_K=(\pi_1,\ldots,\pi_{K-1},\mu_1,\ldots,\mu_K,\Sigma_1,\ldots,\Sigma_K)$, $\pi=(\pi_1,\ldots,\pi_K)$ are the mixing proportions and $\phi(.|\mu_k,\Sigma_k)$ is the $q$-dimensional Gaussian density function with mean $\mu_k$ and covariance matrix $\Sigma_k$. To estimate mixture parameters $\theta_K$ by computing the maximum likelihood estimate (MLE), an Expectation-Maximization (EM) algorithm is used via the Rmixmod package. Finally, let $\hat t_{ik}$ be the conditional probability that observation $i$ arises from the $k$th component of the mixture $f(.|\hat \theta_{\hat K})$. Each observation $i$ is assigned to the component maximizing the conditional probability $\hat t_{ik}$, i.e., using the so-called maximum a posteriori (MAP) rule.
### Transformations for normalized profiles with K-means

For the K-means algorithm, we consider three separate transformations of the profiles $p_{i}$ that are well adapted to compositional data (see Godichon-Baggioni et al., 2017 for more details):

• Identity (i.e., no transformation)
• Centered log ratio (CLR) transformation
• Log centered log ratio (logCLR) transformation

Then, the aim is to minimize the sum of squared errors (SSE), defined for each clustering $\mathcal{C}^{(K)}=\left( C_{1},...,C_{K}\right)$ by

$$\text{SSE} \left( \mathcal{C}^{(K)}\right) := \sum_{k=1}^{K}\sum_{i \in C_{k}} \left\| h \left( y_{i}\right) - \mu_{k,h} \right\|_{2}^{2},$$

with $i \in C_{k}$ if $\left\| h\left( y_{i}\right) - \mu_{k,h} \right\|_{2} = \min_{k'=1,\ldots,K} \left\| h\left( y_{i}\right) - \mu_{k',h} \right\|_{2}$, and

$$\mu_{k,h}= \frac{1}{|C_{k}|}\sum_{i \in C_{k}}h \left( y_{i} \right),$$

where $h$ is the chosen transformation. In order to minimize the SSE, we use the well-known MacQueen's K-means algorithm.

### Model selection

Because the number of clusters $K$ is not known a priori, we fit a collection of models (here $K$ = 2, ..., 25) and use either the Integrated Completed Likelihood (ICL) criterion (in the case of the Gaussian mixture model) or the slope heuristics (in the case of the K-means algorithm) to select the best model in terms of fit, complexity, and cluster separation.

### Other options

If desired, we can set a filtering cutoff on the mean normalized counts via the meanFilterCutoff argument to remove very weakly expressed genes from the co-expression analysis; in the interest of faster computation for this vignette, in this example we filter all genes with mean normalized expression less than 200 (for the Gaussian mixture model) or less than 50 (for the K-means algorithm). Note that if desired, parallel execution using BiocParallel can be specified via the parallel argument:

```r
run <- coseq(..., parallel=TRUE)
```

## Running coseq

The collection of co-expression models for the logCLR-transformed profiles using the K-means algorithm may be obtained as follows (note that we artificially set the number of maximum allowed iterations and number of random starts to be low for faster computational time in this vignette):

```r
runLogCLR <- coseq(counts, K=2:25, transformation="logclr", norm="TMM",
                   meanFilterCutoff=50, model="kmeans",
                   nstart=1, iter.max=10)
```

```
## ****************************************
## coseq analysis: kmeans approach & logclr transformation
## K = 2 to 25
## Use set.seed() prior to running coseq for reproducible results.
## ****************************************
## Running K = 2 ...
## Running K = 3 ...
## Running K = 4 ...
## Running K = 5 ...
## Running K = 6 ...
## Running K = 7 ...
## Running K = 8 ...
## Running K = 9 ...
## Running K = 10 ...
## Running K = 11 ...
## Running K = 12 ...
## Running K = 13 ...
## Running K = 14 ...
## Running K = 15 ...
## Running K = 16 ...
## Running K = 17 ...
## Running K = 18 ...
## Running K = 19 ...
## Running K = 20 ...
## Running K = 21 ...
## Running K = 22 ...
## Running K = 23 ...
## Running K = 24 ...
## Running K = 25 ...
```
The collection of Gaussian mixture models for the arcsine-transformed and logit-transformed profiles may be obtained as follows (as before, we set the number of iterations to be quite low for computational speed in this vignette):

runArcsin <- coseq(counts, K=2:20, model="Normal", transformation="arcsin",
                   meanFilterCutoff=200, iter=10)

## ****************************************
## coseq analysis: Normal approach & arcsin transformation
## K = 2 to 20
## Use set.seed() prior to running coseq for reproducible results.
## ****************************************
## Running K = 2 ...
## Running K = 3 ...
## Running K = 4 ...
## Running K = 5 ...
## Running K = 6 ...
## Running K = 7 ...
## Running K = 8 ...
## Running K = 9 ...
## Running K = 10 ...
## Running K = 11 ...
## Running K = 12 ...
## Running K = 13 ...
## Running K = 14 ...
## Running K = 15 ...
## Running K = 16 ...
## Running K = 17 ...
## Running K = 18 ...
## Running K = 19 ...
## Running K = 20 ...

runLogit <- coseq(counts, K=2:20, model="Normal", transformation="logit",
                  meanFilterCutoff=200, verbose=FALSE, iter=10)

## ****************************************
## coseq analysis: Normal approach & logit transformation
## K = 2 to 20
## Use set.seed() prior to running coseq for reproducible results.
## ****************************************

In all cases, the resulting output of a call to coseq is an S4 object of class coseqResults.

class(runArcsin)
## [1] "coseqResults"
## attr(,"package")
## [1] "coseq"

runArcsin
## An object of class coseqResults
##  4230 features by 15 samples.
##  Models fit: K = 2 ... 20
##  Chosen clustering model: K = 7

To choose the most appropriate transformation to use (arcsine versus logit) in a Gaussian mixture model, we may use the corrected ICL, where the minimum value corresponds to the selected model. Note that this feature is not available for the K-means algorithm.

compareICL(list(runArcsin, runLogit))

This indicates that the preferred transformation is the arcsine, and the selected model has K = 7 clusters. We can additionally explore how similar the models with K = 8 to 12 are using the adjusted Rand index (ARI) via the compareARI function. Values close to 1 indicate perfect agreement, while values close to 0 indicate near random partitions.

compareARI(runArcsin, K=8:12)
##      K=8  K=9 K=10 K=11 K=12
## K=8    1 0.58  0.4 0.33  0.4
## K=9         1 0.45 0.38 0.43
## K=10             1 0.66 0.51
## K=11                  1 0.57
## K=12                       1

Note that because the results of coseq depend on the initialization point, results from one run to another may vary; as such, in practice, it is typically a good idea to re-estimate the same collection of models a few times (e.g., 5) to avoid problems linked to initialization.

## Exploring coseq results

Results from a coseq analysis can be explored and summarized in a variety of ways. First, a call to the summary function provides the number of clusters selected for the ICL model selection approach, number of genes assigned to each cluster, and if desired the per-gene cluster means.
summary(runArcsin)

## *************************************************
## Model: Gaussian_pk_Lk_Ck
## Transformation: arcsin
## *************************************************
## Clusters fit: 2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20
## Clusters with errors: ---
## Selected number of clusters via ICL: 7
## ICL of selected model: -365736.3
## *************************************************
## Number of clusters = 7
## ICL = -365736.3
## *************************************************
## Cluster sizes:
##  Cluster 1  Cluster 2  Cluster 3  Cluster 4  Cluster 5  Cluster 6  Cluster 7
##        255        185       1622        239        383        537       1009
##
## Number of observations with MAP > 0.90 (% of total):
## 3557 (84.09%)
##
## Number of observations with MAP > 0.90 per cluster (% of total per cluster):
##  Cluster 1  Cluster 2  Cluster 3  Cluster 4  Cluster 5  Cluster 6  Cluster 7
##        218        169       1303        226        343        426        872
##   (85.49%)   (91.35%)   (80.33%)   (94.56%)   (89.56%)   (79.33%)   (86.42%)

Next, a variety of plots may be explored using the plot function:

• logLike (log-likelihood plotted versus number of clusters),
• ICL (ICL plotted versus number of clusters),
• profiles (line plots of profiles in each cluster),
• boxplots (boxplots of profiles in each cluster),
• probapost_boxplots (boxplots of maximum conditional probabilities per cluster),
• probapost_barplots (number of observations with a maximum conditional probability greater than a given threshold per cluster),
• probapost_histogram (histogram of maximum conditional probabilities over all clusters).

By default, all of these plots are produced simultaneously unless specific graphics are requested via the graphs argument. In addition, a variety of options are available to specify the number of graphs per row/column in a grid layout, whether profiles should be averaged or summed over conditions (collapse_reps), whether condition labels should be used, whether clusters should be ordered according to their similarity, etc.

Note that the histogram of maximum conditional probabilities of cluster membership for all genes (probapost_histogram), and boxplots and barplots of maximum conditional probabilities of cluster membership for the genes assigned to each cluster (probapost_boxplots, probapost_barplots) help to evaluate the degree of certitude accorded by the model in assigning genes to clusters, as well as whether some clusters are attributed a greater degree of uncertainty than others.

## To obtain all plots
## plot(runArcsin)

plot(runArcsin, graphs="boxplots")
## $boxplots

plot(runArcsin, graphs="boxplots", conds=conds)
## $boxplots

plot(runArcsin, graphs="boxplots", conds=conds, collapse_reps = "sum")
## $boxplots

plot(runArcsin, graphs="profiles")
## $profiles

plot(runArcsin, graphs="probapost_boxplots")
## $probapost_boxplots

plot(runArcsin, graphs="probapost_histogram")
## $probapost_histogram

If desired, the per-cluster correlation matrices estimated in the Gaussian mixture model may be obtained by calling NormMixParam. These matrices may easily be visualized using the corrplot package.

rho <- NormMixParam(runArcsin)$rho

## Correlation matrix for cluster 1
rho1 <- rho[,,1]
colnames(rho1) <- rownames(rho1) <- paste0(colnames(rho1), "\n", conds)
corrplot(rho1)

Finally, cluster labels and conditional probabilities of cluster membership (as well as a variety of other information) are easily accessible via a set of accessor functions.
labels <- clusters(runArcsin) table(labels) ## labels ## 1 2 3 4 5 6 7 ## 255 185 1622 239 383 537 1009 probapost <- assay(runArcsin) head(probapost) ## Cluster_1 Cluster_2 Cluster_3 Cluster_4 Cluster_5 Cluster_6 ## Gene1 0.000 0 0.991 0 0 0.009 ## Gene4 0.000 0 0.000 0 0 0.000 ## Gene10 0.000 0 1.000 0 0 0.000 ## Gene14 0.000 0 0.998 0 0 0.000 ## Gene16 0.000 0 0.771 0 0 0.229 ## Gene17 0.985 0 0.000 0 0 0.015 ## Cluster_7 ## Gene1 0.000 ## Gene4 1.000 ## Gene10 0.000 ## Gene14 0.002 ## Gene16 0.000 ## Gene17 0.000 metadata(runArcsin) ## $nbCluster ## K=2 K=3 K=4 K=5 K=6 K=7 K=8 K=9 K=10 K=11 K=12 K=13 K=14 K=15 K=16 ## 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ## K=17 K=18 K=19 K=20 ## 17 18 19 20 ## ##$logLike ## K=2 K=3 K=4 K=5 K=6 K=7 K=8 K=9 ## 180265.9 182166.9 184010.6 185444.3 186456.9 187375.0 188009.8 188486.7 ## K=10 K=11 K=12 K=13 K=14 K=15 K=16 K=17 ## 189141.4 189613.0 189966.7 190340.6 190668.2 190964.7 191361.7 191776.5 ## K=18 K=19 K=20 ## 191772.8 0.0 192765.0 ## ## $ICL ## K=2 K=3 K=4 K=5 K=6 K=7 ## -357858.38 -360429.49 -362817.26 -364354.62 -365088.75 -365736.28 ## K=8 K=9 K=10 K=11 K=12 K=13 ## -365392.80 -365589.12 -365319.58 -365433.73 -364664.98 -363733.31 ## K=14 K=15 K=16 K=17 K=18 K=19 ## -363516.62 -363135.75 -362618.13 -362553.57 -361148.69 21567.94 ## K=20 ## -360623.68 ## ##$nbClusterError ## integer(0) ## ## $GaussianModel ## [1] "Gaussian_pk_Lk_Ck" likelihood(runArcsin) ## K=2 K=3 K=4 K=5 K=6 K=7 K=8 K=9 ## 180265.9 182166.9 184010.6 185444.3 186456.9 187375.0 188009.8 188486.7 ## K=10 K=11 K=12 K=13 K=14 K=15 K=16 K=17 ## 189141.4 189613.0 189966.7 190340.6 190668.2 190964.7 191361.7 191776.5 ## K=18 K=19 K=20 ## 191772.8 0.0 192765.0 nbCluster(runArcsin) ## K=2 K=3 K=4 K=5 K=6 K=7 K=8 K=9 K=10 K=11 K=12 K=13 K=14 K=15 K=16 ## 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ## K=17 K=18 K=19 K=20 ## 17 18 19 20 ICL(runArcsin) ## K=2 K=3 K=4 K=5 K=6 K=7 ## -357858.38 -360429.49 -362817.26 -364354.62 -365088.75 -365736.28 ## K=8 K=9 K=10 K=11 K=12 K=13 ## -365392.80 -365589.12 -365319.58 -365433.73 -364664.98 -363733.31 ## K=14 K=15 K=16 K=17 K=18 K=19 ## -363516.62 -363135.75 -362618.13 -362553.57 -361148.69 21567.94 ## K=20 ## -360623.68 model(runArcsin) ## [1] "Normal" transformationType(runArcsin) ## [1] "arcsin" The data used to fit the mixture model (transformed normalized profiles) as well as the normalized profiles themselves are stored as DataFrame objects that may be accessed with the corresponding functions: ## arcsine-transformed normalized profiles tcounts(runArcsin) ## DataFrame with 4230 rows and 15 columns ## CP CP.1 CP.2 ## <numeric> <numeric> <numeric> ## Gene1 0.236598173091045 0.228104752321001 0.241288674880122 ## Gene4 0.284826062120736 0.293176655713238 0.311890705392502 ## Gene10 0.253392009568601 0.263879866929211 0.253059933405496 ## Gene14 0.249871003807869 0.241841028766843 0.253633086159366 ## Gene16 0.230002856738101 0.235796495087209 0.249497115867557 ## ... ... ... ... 
## Gene8946 0.107453539656086 0.0319221126659155 0.0502666347355386 ## Gene8948 0.222996022383283 0.218111459621408 0.248181591083828 ## Gene8950 0.23043323831067 0.219640579402256 0.266896858705074 ## Gene8951 0.168093297721759 0.132508885394036 0.152362137940985 ## Gene8952 0.323130913168413 0.323515106720572 0.365093451318071 ## CP.3 CP.4 SVZ ## <numeric> <numeric> <numeric> ## Gene1 0.247307967135576 0.248982562601155 0.262926085093054 ## Gene4 0.288275396224087 0.286415051889742 0.273256966964837 ## Gene10 0.263255485618399 0.260618950259785 0.256238994988003 ## Gene14 0.258939080539238 0.260336068088386 0.277088023419115 ## Gene16 0.238359626342197 0.245336636103387 0.253127677759796 ## ... ... ... ... ## Gene8946 0.0247498435113879 0.0581114647114381 0.125935837581134 ## Gene8948 0.243282389113593 0.252284163590393 0.26821031090518 ## Gene8950 0.239397483258655 0.219782558686542 0.255852728200819 ## Gene8951 0.160212830748592 0.220365023410131 0.227985892510018 ## Gene8952 0.339381341399474 0.322563402201037 0.25575941057131 ## SVZ.1 SVZ.2 SVZ.3 ## <numeric> <numeric> <numeric> ## Gene1 0.255118901767846 0.264225177153485 0.254176657028999 ## Gene4 0.270221752608245 0.26452900100746 0.277583777932629 ## Gene10 0.260182510045061 0.255289727141237 0.25609199960178 ## Gene14 0.268473277580506 0.287666852508836 0.266348086749767 ## Gene16 0.222894271060926 0.258245560185333 0.252937544001551 ## ... ... ... ... ## Gene8946 0.106439237819178 0.137822472290664 0.15003802242599 ## Gene8948 0.25321883639463 0.287113729649093 0.286050702501387 ## Gene8950 0.239209374688018 0.294898188892574 0.284184924867043 ## Gene8951 0.204112375080307 0.238430033612695 0.252019423708569 ## Gene8952 0.265896229133491 0.284487276712424 0.261109862107804 ## SVZ.4 VZ VZ.1 ## <numeric> <numeric> <numeric> ## Gene1 0.282872018635923 0.269560603594682 0.268584052015522 ## Gene4 0.27068352582754 0.214741182435979 0.20150317796839 ## Gene10 0.262937426109444 0.266352488354516 0.262019867337915 ## Gene14 0.284220473103044 0.242277586178926 0.257731512150692 ## Gene16 0.259800804036445 0.272837369690777 0.29047951021961 ## ... ... ... ... ## Gene8946 0.135823924066837 0.402965168982556 0.466937297509237 ## Gene8948 0.301975482863069 0.245651011362489 0.252176900297967 ## Gene8950 0.295018087210187 0.256683147621468 0.259461938138239 ## Gene8951 0.257211966435243 0.33209781986326 0.334500810615804 ## Gene8952 0.253659710197208 0.157132039780148 0.16340176949997 ## VZ.2 VZ.3 VZ.4 ## <numeric> <numeric> <numeric> ## Gene1 0.279656223261409 0.291505020888877 0.278312103645505 ## Gene4 0.214769543280666 0.230971871598229 0.202352987232098 ## Gene10 0.273577287479307 0.262905323825006 0.266790041822568 ## Gene14 0.245791798781622 0.270709194021856 0.247194217287222 ## Gene16 0.298819490069695 0.310846348738257 0.282038592253972 ## ... ... ... ... 
## Gene8946 0.433534360482705 0.459186937930696 0.418326183155198 ## Gene8948 0.264103396981707 0.297957684913415 0.261059633483962 ## Gene8950 0.28768788150739 0.294166581427475 0.256566018273407 ## Gene8951 0.358867008811086 0.372641805721501 0.343575090065054 ## Gene8952 0.154286432799114 0.135451908655076 0.159029602238767 ## normalized profiles profiles(runArcsin) ## DataFrame with 4230 rows and 15 columns ## CP CP.1 CP.2 ## <numeric> <numeric> <numeric> ## Gene1 0.0549419225481319 0.0511355802169943 0.0570990941917388 ## Gene4 0.0789556754378157 0.0835179875351485 0.0941622447842026 ## Gene10 0.0628450197836833 0.0680312831643462 0.0626839370673565 ## Gene14 0.0611468895895984 0.0573556918010646 0.0629620814543541 ## Gene16 0.0519750194880221 0.0545771430658889 0.0609678455333951 ## ... ... ... ... ## Gene8946 0.0115018928105567 0.00101867518929344 0.00252460715527725 ## Gene8948 0.0489084060872912 0.0468229932937124 0.0603398311235 ## Gene8950 0.052166254726009 0.0474711938647455 0.0695584922004574 ## Gene8951 0.0279902355718303 0.0174560768025782 0.0230351428099832 ## Gene8952 0.100829738522317 0.101061220110022 0.127475125181293 ## CP.3 CP.4 SVZ ## <numeric> <numeric> <numeric> ## Gene1 0.0599244558457272 0.0607218423731092 0.0675517456506777 ## Gene4 0.080826040619168 0.0798148045475305 0.0728292698152912 ## Gene10 0.0677171821521512 0.0663982866340141 0.0642339344771602 ## Gene14 0.0655642375224259 0.0662574937197061 0.0748328358396016 ## Gene16 0.0557474363205529 0.0589921003374838 0.0627167826780965 ## ... ... ... ... ## Gene8946 0.000612429689610918 0.0033731427955392 0.0157761674977668 ## Gene8948 0.0580278231803587 0.0623083802995953 0.0702282649322469 ## Gene8950 0.0562246309961819 0.0475315943335139 0.0640446630927276 ## Gene8951 0.0254492833907832 0.0477797668032523 0.0510832293956291 ## Gene8952 0.110824930118485 0.100488237044936 0.0639989763120616 ## SVZ.1 SVZ.2 SVZ.3 ## <numeric> <numeric> <numeric> ## Gene1 0.0636858036527928 0.0682052841034536 0.063226400681772 ## Gene4 0.071259712488646 0.0683585504885814 0.0750939320504134 ## Gene10 0.066181124682493 0.0637692576271227 0.0641618758277596 ## Gene14 0.0703627168083138 0.0804946123501183 0.0692795347268095 ## Gene16 0.048864524680018 0.0652213368490319 0.0626246173508519 ## ... ... ... ... ## Gene8946 0.0112865914924705 0.0188750676236255 0.0223429932135071 ## Gene8948 0.0627609932883124 0.0801939067666355 0.0796174341829171 ## Gene8950 0.0561379989078803 0.0844730253315552 0.0786102312644618 ## Gene8951 0.041086495783927 0.0557797482760539 0.0621804585619159 ## Gene8952 0.0690502312378003 0.0787730518725091 0.0666429471193563 ## SVZ.4 VZ VZ.1 ## <numeric> <numeric> <numeric> ## Gene1 0.0779050013207236 0.0709199154377077 0.0704193900964378 ## Gene4 0.0714974851657067 0.045409292570241 0.0400569483657106 ## Gene10 0.0675574383876578 0.0692817701327088 0.0670975799721476 ## Gene14 0.0786293664841106 0.0575588780295321 0.0649677132949245 ## Gene16 0.0659914680801884 0.0726113504764882 0.0820316504082927 ## ... ... ... ... 
## Gene8946 0.018334972424227 0.153779840474418 0.202638225026359 ## Gene8948 0.08845085206799 0.059140327355602 0.0622565361204196 ## Gene8950 0.0845397237405173 0.064451891467773 0.0658233092035342 ## Gene8951 0.064711844492863 0.106293565819057 0.107779372819147 ## Gene8952 0.0629750157517585 0.0244879391492036 0.0264633501788582 ## VZ.2 VZ.3 VZ.4 ## <numeric> <numeric> <numeric> ## Gene1 0.0761899351796509 0.0825953559293826 0.0754782727716998 ## Gene4 0.0454211027638875 0.0524060578739434 0.0403908954937135 ## Gene10 0.072995831911733 0.0675413249002914 0.0695041532793748 ## Gene14 0.059206764718375 0.071510712757323 0.0598704709037427 ## Gene16 0.0866667773077605 0.0935531127949382 0.077458834628991 ## ... ... ... ... ## Gene8946 0.176467882440321 0.196443448953226 0.165024063213802 ## Gene8948 0.068143895873771 0.0861824604219426 0.0666178950057053 ## Gene8950 0.0805060549105488 0.0840665562151637 0.0643943797449302 ## Gene8951 0.123351024114758 0.132552203871561 0.113471551486661 ## Gene8952 0.0236160202010852 0.0182352868681241 0.0250779303321904 Finally, if the results (e.g., the conditional probabilities of cluster membership) for a model in the collection other than that chosen by ICL/slope heuristics are desired, they may be accessed in the form of a list using the coseqFullResults function. fullres <- coseqFullResults(runArcsin) class(fullres) ## [1] "list" length(fullres) ## [1] 19 names(fullres) ## [1] "K=2" "K=3" "K=4" "K=5" "K=6" "K=7" "K=8" "K=9" "K=10" "K=11" ## [11] "K=12" "K=13" "K=14" "K=15" "K=16" "K=17" "K=18" "K=19" "K=20" ## Running coseq on a DESeq2 or edgeR results object In many cases, it is of interest to run a co-expression analysis on the subset of genes identified as differentially expressed in a prior analysis. To facilitate this, coseq may be directly inserted into a DESeq2 analysis pipeline as follows: library(DESeq2) dds <- DESeqDataSetFromMatrix(counts, DataFrame(group=factor(conds)), ~group) dds <- DESeq(dds, test="LRT", reduced = ~1) res <- results(dds) summary(res) ## ## out of 8962 with nonzero total read count ## adjusted p-value < 0.1 ## LFC > 0 (up) : 3910, 44% ## LFC < 0 (down) : 3902, 44% ## outliers [1] : 13, 0.15% ## low counts [2] : 0, 0% ## (mean count < 45) ## [1] see 'cooksCutoff' argument of ?results ## [2] see 'independentFiltering' argument of ?results ## By default, alpha = 0.10 run <- coseq(dds, K=2:15, verbose=FALSE) ## **************************************** ## Co-expression analysis on DESeq2 output: ## 7812 DE genes at p-adj < 0.1 ## **************************************** ## coseq analysis: kmeans approach & logclr transformation ## K = 2 to 15 ## Use set.seed() prior to running coseq for reproducible results. ## **************************************** ## Running K = 2 ... ## Running K = 3 ... ## Running K = 4 ... ## Running K = 5 ... ## Running K = 6 ... ## Running K = 7 ... ## Running K = 8 ... ## Running K = 9 ... ## Running K = 10 ... ## Running K = 11 ... ## Running K = 12 ... ## Running K = 13 ... ## Running K = 14 ... ## Running K = 15 ... ## The following two lines provide identical results run <- coseq(dds, K=2:15, alpha=0.05) ## **************************************** ## Co-expression analysis on DESeq2 output: ## 7535 DE genes at p-adj < 0.05 ## **************************************** ## coseq analysis: kmeans approach & logclr transformation ## K = 2 to 15 ## Use set.seed() prior to running coseq for reproducible results. ## **************************************** ## Running K = 2 ... ## Running K = 3 ... 
## Running K = 4 ... ## Running K = 5 ... ## Running K = 6 ... ## Running K = 7 ... ## Running K = 8 ... ## Running K = 9 ... ## Running K = 10 ... ## Running K = 11 ... ## Running K = 12 ... ## Running K = 13 ... ## Running K = 14 ... ## Running K = 15 ... run <- coseq(dds, K=2:15, subset=results(dds, alpha=0.05)) ## **************************************** ## Co-expression analysis on DESeq2 output: ## 7812 DE genes at p-adj < 0.1 ## **************************************** ## coseq analysis: kmeans approach & logclr transformation ## K = 2 to 15 ## Use set.seed() prior to running coseq for reproducible results. ## **************************************** ## Running K = 2 ... ## Running K = 3 ... ## Running K = 4 ... ## Running K = 5 ... ## Running K = 6 ... ## Running K = 7 ... ## Running K = 8 ... ## Running K = 9 ... ## Running K = 10 ... ## Running K = 11 ... ## Running K = 12 ... ## Running K = 13 ... ## Running K = 14 ... ## Running K = 15 ... A co-expression analysis following the edgeR analysis pipeline may be done in a similar way, depending on whether the quasi-likelihood or likelihood ratio test is used: library(edgeR) y <- DGEList(counts=counts, group=factor(conds)) y <- calcNormFactors(y) design <- model.matrix(~conds) y <- estimateDisp(y, design) ## edgeR: QLF test fit <- glmQLFit(y, design) qlf <- glmQLFTest(fit, coef=2) ## edgeR: LRT test fit <- glmFit(y,design) lrt <- glmLRT(fit,coef=2) run <- coseq(counts, K=2:15, subset=lrt, alpha=0.1) ## **************************************** ## Co-expression analysis on edgeR output: ## 5775 DE genes at p-adj < 0.1 ## **************************************** ## coseq analysis: kmeans approach & logclr transformation ## K = 2 to 15 ## Use set.seed() prior to running coseq for reproducible results. ## **************************************** ## Running K = 2 ... ## Running K = 3 ... ## Running K = 4 ... ## Running K = 5 ... ## Running K = 6 ... ## Running K = 7 ... ## Running K = 8 ... ## Running K = 9 ... ## Running K = 10 ... ## Running K = 11 ... ## Running K = 12 ... ## Running K = 13 ... ## Running K = 14 ... ## Running K = 15 ... run <- coseq(counts, K=2:15, subset=qlf, alpha=0.1) ## **************************************** ## Co-expression analysis on edgeR output: ## 5717 DE genes at p-adj < 0.1 ## **************************************** ## coseq analysis: kmeans approach & logclr transformation ## K = 2 to 15 ## Use set.seed() prior to running coseq for reproducible results. ## **************************************** ## Running K = 2 ... ## Running K = 3 ... ## Running K = 4 ... ## Running K = 5 ... ## Running K = 6 ... ## Running K = 7 ... ## Running K = 8 ... ## Running K = 9 ... ## Running K = 10 ... ## Running K = 11 ... ## Running K = 12 ... ## Running K = 13 ... ## Running K = 14 ... ## Running K = 15 ... In both cases, library size normalization factors included in the DESeq2 or edgeR object are used for the subsequent coseq analysis. ## Customizing coseq graphical outputs Because the plots produced by the plot function are produced using the ggplot2 package, many plot customizations can be directly performed by the user by adding on additional layers, themes, or color scales. Note that because the output of the coseq plot function is a list of ggplot objects, the coseq plot object must be subsetted using $ by the name of the desired plot (see below). To illustrate, we show a few examples of a customized boxplot using a small simulated dataset. 
In the example below, we use the profiles_order parameter to modify the order in which clusters are plotted, and the n_row and n_col parameters to adjust the number of rows and columns of the plot.

## Simulate toy data, n = 300 observations
set.seed(12345)
countmat <- matrix(runif(300*10, min=0, max=500), nrow=300, ncol=10)
countmat <- countmat[which(rowSums(countmat) > 0),]
conds <- c(rep("A", 4), rep("B", 3), rep("D", 3))

## Run coseq
coexp <- coseq(object=countmat, K=2:15, iter=5, transformation="logclr",
               model="kmeans")

## ****************************************
## coseq analysis: kmeans approach & logclr transformation
## K = 2 to 15
## Use set.seed() prior to running coseq for reproducible results.
## ****************************************
## Running K = 2 ...
## Running K = 3 ...
## Running K = 4 ...
## Running K = 5 ...
## Running K = 6 ...
## Running K = 7 ...
## Running K = 8 ...
## Running K = 9 ...
## Running K = 10 ...
## Running K = 11 ...
## Running K = 12 ...
## Running K = 13 ...
## Running K = 14 ...
## Running K = 15 ...

## Original boxplot
p <- plot(coexp, graphs = "boxplots", conds = conds,
          profiles_order = 10:1, collapse_reps = "average",
          n_row = 3, n_col = 4)
p$boxplots

We now illustrate an example where we (1) change the theme to black-and-white; (2) set the aspect ratio of y/x to be equal to 20; (3) change the widths of the axis ticks and lines; (4) change the size of the text labels; and (5) change the color scheme. Remember to load the ggplot2 package prior to adding customization!

library(ggplot2)
p$boxplots + theme_bw() + coord_fixed(ratio = 20) +
  theme(axis.ticks = element_line(color="black", size=1.5),
        axis.line = element_line(color="black", size=1.5),
        text = element_text(size = 15)) +
  scale_fill_brewer(type = "qual")

## Scale for 'fill' is already present. Adding another scale for 'fill',
## which will replace the existing scale.

## coseq FAQs

1. Should I use the Gaussian mixture model (model="Normal") or the K-means (model="kmeans") approach for my co-expression analysis? And what about the Poisson mixture model?

The Gaussian mixture model is more time-intensive but enables the estimation of per-cluster correlation structures among samples. The K-means algorithm has a much smaller computational cost, at the expense of assuming a diagonal per-cluster covariance structure. Generally speaking, we feel that the K-means approach is a good first approach to use, although this can be somewhat context-dependent. Finally, although we feel the Poisson mixture model was a good first approach for RNA-seq co-expression, we now recommend either the Gaussian mixture model or K-means approach instead.

2. There are a lot of proposed transformations in the package. Which one should I use?

In our experience, when using the Gaussian mixture model approach the arcsine transformation performs well. When using the K-means algorithm, if highly specific profiles should be highlighted (e.g., genes expressed in a single tissue), the logCLR transformation performs well; if on the other hand the user wishes to distinguish the fine differences among genes with generally equal expression across conditions (e.g., non-differentially expressed genes), the CLR transformation performs well.

3. Why do I get different results when I rerun the analysis?
Because the results of coseq depend on the initialization point, results from one run to another may vary; as such, in practice, it is typically a good idea to re-estimate the same collection of models a few times (e.g., 5) to avoid problems linked to initialization.

4. How do I output the clustering results?

Use the clusters(...) command for cluster labels, and the assay(...) command for conditional probabilities of cluster membership.

5. How can I check whether all models converged?

Look at the nbClusterError slot of the coseqResults metadata using metadata(...)$nbClusterError to examine the models that did not converge.

6. Can I run coseq on normalized profiles that have already been pre-computed?

Yes, you should just specify normFactors = "none" and transformation = "none" when using coseq (a minimal call sketch appears after this FAQ list).

7. Does it matter if I have an unbalanced experimental design?

Condition labels are not used in the actual clustering implemented by coseq, and in fact are only used when visualizing clusters using the plot command if collapse_reps = c("sum", "average"). As such, the clustering itself should not be unduly influenced by unbalanced designs.

8. Why did I get an error message about the singular covariance matrices for the form of Gaussian mixture model I used?

Occasionally, the default form of Gaussian mixture model (Gaussian_pk_Lk_Ck, which is the most general form available with per-cluster proportions, volumes, shapes, and orientations for covariance matrices) used in coseq is not estimable (via the Rmixmod package used by coseq) due to non-invertible per-cluster covariance matrices. The error message thus suggests using a slightly more restrictive form of Gaussian mixture model, such as the Gaussian_pk_Lk_Bk (which imposes diagonal covariance matrices) or Gaussian_pk_Lk_I (which imposes spherical covariance matrices). See ?Rmixmod::mixmodGaussianModel for more details about the nomenclature and possible forms of Gaussian mixture models.
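As a companion to FAQ 6, here is a minimal call sketch. Note that myprofiles is a hypothetical placeholder for a pre-computed profile matrix, and the argument names follow the ones quoted in the FAQ (the earlier runs in this vignette pass the normalization via norm= instead):

## Sketch for FAQ 6; `myprofiles` is a placeholder for your own matrix
run <- coseq(myprofiles, K = 2:10, normFactors = "none",
             transformation = "none")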
2019-03-26 09:57:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6803105473518372, "perplexity": 6704.112474948294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204969.39/warc/CC-MAIN-20190326095131-20190326121131-00002.warc.gz"}
https://wiki.seg.org/wiki/Calculation_of_inverse_filters
Calculation of inverse filters

Series: Geophysical References Series
Title: Problems in Exploration Seismology and their Solutions
Authors: Lloyd P. Geldart and Robert E. Sheriff
Chapter: 9
Pages: 295 - 366
DOI: http://dx.doi.org/10.1190/1.9781560801733
ISBN: 9781560801153
Store: SEG Online Store

Problem 9.18

Assuming that the signature of an air-gun array is a unit impulse and that the recorded wavelet after transmission through the earth is $[-12,\;-4,\;+3,\;+1]$, find the inverse filter that will remove the earth filtering. How many terms should the filter include?

Solution

The inverse filter $i_{t}$ (see problem 9.7) is a filter that will restore the source impulse, i.e.,

$g_{t} * i_{t} = \delta_{t},$

or in the frequency domain, where $i_{t} \leftrightarrow I(z)$,

$G(z)I(z) = 1.$

Thus,

$I(z) = 1/G(z) = 1/[-12 - 4z + 3z^{2} + z^{3}] = -\frac{1}{12}\left[1 + \frac{z}{3} - \frac{z^{2}}{4} - \frac{z^{3}}{12}\right]^{-1} = -\frac{1}{12}(1+A)^{-1}, \tag{9.18a}$

where $A = \frac{z}{3} - \frac{z^{2}}{4} - \frac{z^{3}}{12}$. Since $z$ has the magnitude $|z| = 1$, the magnitude of $A < 1$ for all values of $z$, and we can expand equation (9.18a) using the binomial theorem [see equation (4.1b)] and Sheriff and Geldart, 1995, equation (15.43):

$I(z) = -\frac{1}{12}(1 - A + A^{2} - A^{3} + A^{4} - \ldots). \tag{9.18b}$

We first find $(1 - A + A^{2} - A^{3} + A^{4})$, neglecting powers higher than $z^{4}$:

$\begin{aligned}
-A &= -0.3333z + 0.2500z^{2} + 0.0833z^{3},\\
A^{2} &= +0.1111z^{2} - 0.1667z^{3} + 0.0069z^{4},\\
-A^{3} &= -0.0370z^{3} + 0.0833z^{4},\\
A^{4} &= +0.0123z^{4},\\
\text{Sum} &= -0.3333z + 0.3611z^{2} - 0.1204z^{3} + 0.1025z^{4},\\
I(z) &= -\frac{1}{12}(1 - A + A^{2} - A^{3} + A^{4})\\
&= -\frac{1}{12}(1 - 0.3333z + 0.3611z^{2} - 0.1204z^{3} + 0.1025z^{4}).
\end{aligned}$

We can verify the accuracy of $I(z)$ by multiplying $G(z)$ by $I(z)$; the factors $-1/12$ and $-12$ cancel, so we work with the scaled series

$I(z): 1 - 0.3333z + 0.3611z^{2} - 0.1204z^{3} + 0.1025z^{4},$
$G(z): 1 + 0.3333z - 0.2500z^{2} - 0.0833z^{3}.$

The partial products are

$\begin{aligned}
&1 - 0.3333z + 0.3611z^{2} - 0.1204z^{3} + 0.1025z^{4}\\
&\quad + 0.3333z - 0.1111z^{2} + 0.1204z^{3} - 0.0401z^{4} + 0.0342z^{5}\\
&\quad - 0.2500z^{2} + 0.0833z^{3} - 0.0903z^{4} + 0.0301z^{5} - 0.0256z^{6}\\
&\quad - 0.0833z^{3} + 0.0278z^{4} - 0.0301z^{5} + 0.0100z^{6} - 0.0085z^{7}\\
&= 1 + 0 + 0 + 0 - 0.0001z^{4} + 0.0342z^{5} - 0.0156z^{6} - 0.0085z^{7}.
\end{aligned}$

We see that the inverse filter is exact as far as the term $z^{3}$ and terms for higher powers are small. The overall effect is to create a small tail whose energy is 0.00149 or 0.1%. To determine how the accuracy depends on the number of terms used in $I(z)$, we observe the effect on the product $I(z)G(z)$ as we successively drop high powers in $I(z)$. Dropping the $z^{4}$ term in $I(z)$ yields the product $1 - 0.1026z^{4} + 0.0100z^{6}$ and the energy of the tail is now 0.01063 or 1.1%. If we want accuracy of at least 1%, we must therefore retain the $z^{4}$ term. If we also delete the $z^{3}$ term in $I(z)$ [but not in $G(z)$] the product becomes $1 + 0.1204z^{3} - 0.0625z^{4} - 0.0100z^{5}$ and the energy of the tail is 0.01850 or 1.8%.
If we go one step further and drop the $z^{2}$ term in $I(z)$, we get $1 - 0.3611z^{2} + 0.0278z^{4}$ and the energy of the tail is 0.13107 or 13.1%.
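As a quick numerical cross-check (added here; it is not part of the original solution), the product series above can be reproduced in R by convolving the scaled wavelet with the truncated inverse filter; as noted above, the factors $-12$ in $G(z)$ and $-1/12$ in $I(z)$ cancel, so the scaled coefficient vectors suffice:

g  <- c(1, 1/3, -1/4, -1/12)                  # G(z) / (-12)
i5 <- c(1, -0.3333, 0.3611, -0.1204, 0.1025)  # (-12) * I(z), five terms
conv <- convolve(g, rev(i5), type = "open")   # polynomial product G*I
round(conv, 4)       # ~ (1, 0, 0, 0, -0.0001, 0.0342, -0.0156, -0.0085)
sum(conv[5:8]^2)     # tail energy beyond z^3, ~0.00149, i.e., about 0.1%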
2019-08-20 00:36:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 34, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9105477929115295, "perplexity": 1126.13714710868}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315174.57/warc/CC-MAIN-20190820003509-20190820025509-00071.warc.gz"}
https://blog.csdn.net/richenyunqi/article/details/80152842
# 1107. Social Clusters (30)

1000 ms / 65536 kB / 16000 B / Standard / CHEN, Yue

When registering on a social network, you are always asked to specify your hobbies in order to find some potential friends with the same hobbies. A "social cluster" is a set of people who have some of their hobbies in common. You are supposed to find all the clusters.

Input Specification:

Each input file contains one test case. For each test case, the first line contains a positive integer N (<=1000), the total number of people in a social network. Hence the people are numbered from 1 to N. Then N lines follow, each gives the hobby list of a person in the format:

Ki: hi[1] hi[2] ... hi[Ki]

where Ki (>0) is the number of hobbies, and hi[j] is the index of the j-th hobby, which is an integer in [1, 1000].

Output Specification:

For each case, print in one line the total number of clusters in the network. Then in the second line, print the numbers of people in the clusters in non-increasing order. The numbers must be separated by exactly one space, and there must be no extra space at the end of the line.

Sample Input:

8
3: 2 7 10
1: 4
2: 5 3
1: 4
1: 3
1: 4
4: 6 8 1 5
1: 4

Sample Output:

3
4 3 1

#### C++ code (comments translated from Chinese; a non-standard variable-length array replaced with std::vector):

#include <bits/stdc++.h>
using namespace std;

int hobby[1005], father[1005];

int findFather(int x) {  // find the root of x, with path compression
    if (x == father[x])
        return x;
    int temp = findFather(father[x]);
    father[x] = temp;
    return temp;
}

void unionSet(int a, int b) {  // merge the two sets containing a and b
    int ua = findFather(a), ub = findFather(b);
    if (ua != ub)
        father[ua] = ub;
}

int main() {
    int N;
    scanf("%d", &N);
    iota(father, father + N + 1, 0);  // standard library function: sets father[i] = i
    for (int i = 1; i <= N; ++i) {    // read the data
        int K, a;
        scanf("%d:", &K);
        while (K--) {
            scanf("%d", &a);
            if (hobby[a] == 0)          // nobody has this hobby yet
                hobby[a] = i;           // i becomes the first person with this hobby
            else                        // someone already likes this hobby
                unionSet(hobby[a], i);  // merge the two people who share it
        }
    }
    vector<int> result(N + 1, 0);     // number of people in each set
    for (int i = 1; i < N + 1; ++i)
        ++result[findFather(i)];
    sort(result.begin(), result.end(), greater<int>());  // sort in decreasing order
    // number of non-empty sets; std::count counts the zeros in result
    int cnt = N + 1 - count(result.begin(), result.end(), 0);
    printf("%d\n", cnt);
    for (int i = 0; i < cnt; ++i)  // the first cnt entries are the non-empty sets
        printf("%s%d", i > 0 ? " " : "", result[i]);
    return 0;
}
2018-11-17 20:03:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39144718647003174, "perplexity": 2938.2404454394787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743732.41/warc/CC-MAIN-20181117185331-20181117211331-00298.warc.gz"}
https://brilliant.org/problems/sigma-5/
# Sigma

Algebra, Level 3

If $f(x)=\dfrac{4^{x}}{4^{x}+2}$, compute the value of $\displaystyle \sum_{n=1}^{1000} f\left(\dfrac{n}{1000}\right)$. Give your answer to 2 decimal places.
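A standard route to the answer (added as a sketch; the original page states only the problem) pairs terms using the symmetry $f(x) + f(1-x) = 1$:

$f(x) + f(1-x) = \dfrac{4^{x}}{4^{x}+2} + \dfrac{4^{1-x}}{4^{1-x}+2} = \dfrac{4^{x}}{4^{x}+2} + \dfrac{2}{4^{x}+2} = 1,$

since $4^{1-x}/(4^{1-x}+2) = 4/(4 + 2\cdot 4^{x}) = 2/(4^{x}+2)$. Pairing $n$ with $1000-n$ for $n=1,\dots,499$ gives 499 such pairs, leaving the terms $n=500$ and $n=1000$:

$\displaystyle \sum_{n=1}^{1000} f\left(\dfrac{n}{1000}\right) = 499 + f\left(\dfrac{1}{2}\right) + f(1) = 499 + \dfrac{1}{2} + \dfrac{2}{3} \approx 500.17.$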
2017-05-26 07:38:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3678855299949646, "perplexity": 6775.95098099385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608648.25/warc/CC-MAIN-20170526071051-20170526091051-00011.warc.gz"}
https://community.wolfram.com/groups/-/m/t/1959827
# PDF of a Transformed discrete distribution

Posted 9 months ago

Hi all, this code returns results not matching the results of the book I am working on.

f[x_] := Piecewise[{{1/3, x == {1, 1}}, {2/3, x == {2, 0}}}];
\[ScriptCapitalD] := ProbabilityDistribution[
  f[{x1, x2}], {x1, -Infinity, Infinity}, {x2, -Infinity, Infinity}];
PDF[TransformedDistribution[
  2 {x1, x2}, {x1, x2} \[Distributed] \[ScriptCapitalD]], {y1, y2}] // FullSimplify

I should have a result like P(Y == y) = P(g^-1(y)) for g an increasing function, which is not the result Mathematica proposes. Any suggestions? Thanks.

Replies:

- Thanks for the feedback, I was really doubting myself. As per your comment "For discrete distributions there is a dx value that is needed", please share references, since the book provided a satisfactory demonstration. Maybe you are referring to a continuous function f, which is not the case in this problem.

- The dx term is found in the online reference for ProbabilityDistribution.

- From the online reference: [documentation excerpt not preserved in the extraction]

- I see, the dx here is the increment of x to move from xmin to xmax. However, I am using the first definition in the multivariate case; no need for dx. Thank you for everything.

- Your example I think just isn't the kind of probability distribution that ProbabilityDistribution and TransformedDistribution can handle. However, those two functions do handle Exercise 3 of the reference you posted:

dist = ProbabilityDistribution[x^2/30, {x, 1, 4, 1}]
PDF[dist, x]

Now the transformation part of the exercise:

dist2 = TransformedDistribution[x + 1, x \[Distributed] dist]
PDF[dist2, y]
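For readers without Mathematica, the same bookkeeping can be checked by hand (a sketch added here, in R rather than the Wolfram Language of the thread): for a discrete pmf, the transform y = x + 1 simply relabels the support, so P(Y = y) = P(X = y - 1).

x  <- 1:4
px <- x^2 / 30                      # pmf of Exercise 3: p(x) = x^2/30
stopifnot(all.equal(sum(px), 1))    # sanity check: probabilities sum to 1
data.frame(y = x + 1, prob = px)    # pmf of the transformed variable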
2021-01-20 07:03:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29314476251602173, "perplexity": 2460.33991096804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519923.26/warc/CC-MAIN-20210120054203-20210120084203-00636.warc.gz"}
http://ncatlab.org/nlab/show/3-group
# nLab 3-group

# Contents

## Definition

A 3-group is equivalently

1. a 2-groupoid $G$ equipped with the structure of a loop space object of a connected 3-groupoid $\mathbf{B}G$ (its delooping);
2. a monoidal 2-category in which every object has a weak inverse under the tensor product.

## Properties

### Presentation by crossed complexes

Some classes of 3-groups are modeled by 2-crossed modules or crossed squares.
2015-08-30 22:37:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5882825255393982, "perplexity": 5298.896393930989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065375.30/warc/CC-MAIN-20150827025425-00226-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.ias.ac.in/listing/bibliography/boms/HADI_EBRAHIMIFAR
Articles written in Bulletin of Materials Science

• Influence of electrodeposition parameters on the characteristics of Mn–Co coatings on Crofer 22 APU ferritic stainless steel

Manganese–cobalt coatings are promising candidates for solid oxide fuel cell (SOFC) interconnection applications because of their high conductivity and good oxidation resistance. In the present study, manganese and cobalt are electrodeposited on Crofer 22 APU ferritic stainless steel. The effects of current density, pH, sodium gluconate (NaC$_6$H$_{11}$O$_7$) concentration, cobalt sulphate (CoSO$_4$·7H$_2$O) concentration and deposition duration on the microstructure and cathodic efficiency are characterized by means of scanning electron microscopy, weight gain measurements and energy-dispersive X-ray spectrometry. Results show that increases in current density and deposition duration lead to a decrease in current efficiency and deposition rate. Increasing the pH to 2.5 causes an initial rise of current efficiency and deposition rate, followed by a subsequent decline. In addition, increases in sodium gluconate and cobalt sulphate concentrations in the electrolyte solution result in an increase in current efficiency and deposition rate. Moreover, the results demonstrate that the variations in the current density, pH, sodium gluconate (NaC$_6$H$_{11}$O$_7$) concentration, cobalt sulphate (CoSO$_4$·7H$_2$O) concentration and duration have a significant effect on grain size, uniformity and the adherence of the coating.

• Cyclic oxidation of Ni–Fe$_2$O$_3$ composite coating electrodeposited on AISI 304 stainless steel

Protective coatings can be applied to enhance the performance of interconnects in solid oxide fuel cells. In this study, AISI 304 steel was coated with a Ni–Fe$_2$O$_3$ composite from a modified Watts-type electrolyte by the conventional electro co-deposition method. The characterization of the coatings before and after cyclic oxidation was performed by scanning electron microscopy and X-ray diffraction. In order to evaluate the oxidation behaviour, thermal cycling was carried out in a furnace at 850$^{\circ}$C. The results indicated that the coated steel had better oxidation resistance in comparison with the uncoated steel. After 60 cycles of oxidation, the Ni–Fe$_2$O$_3$ composite coating was converted to FeNi$_2$O$_4$, NiCrO$_4$, MnFe$_2$O$_4$ and Fe$_2$NiO$_4$. The Fe$_2$O$_3$/NiFe$_2$O$_4$ composite coating reduced the outward migration of chromium and the growth rate of the Cr$_2$O$_3$ layer.

• Effect of titanium oxide ceramic particle concentration on microstructure and corrosion behaviour of Ni–P–Al$_2$O$_3$–TiO$_2$ composite coating

Composite coatings have attracted attention for properties such as corrosion resistance, oxidation resistance and excellent hardness. In this study, a Ni–P–Al$_2$O$_3$–TiO$_2$ composite coating was made on AISI 316 steel using the direct-current deposition technique. The microstructure of the coating and its corrosion resistance were studied by changing the amount of titanium oxide (1, 2, 3 and 4 g l$^{−1}$) in the bath. The morphology and composition of the coating were investigated by scanning electron microscopy (SEM) and energy-dispersive spectroscopy (EDS), respectively.
The results showed that in the bath containing 4 g l$^{−1}$ titanium oxide, the coating is perfectly uniform and continuous, while by reducing the amount of titanium oxide, it is not possible to form a suitable coating on the entire surface of the substrate. To investigate the corrosion resistance, potentiodynamic polarization and electrochemical impedance spectroscopy tests in aqueous solution of 3.5% NaCl were carried out on coated and uncoated samples. The results of these tests were also correlated with the microscopic images and showed that the coating from the bath containing 4 g l$^{−1}$ titanium oxide has the highest corrosion resistance.

• Investigation of oxidation behaviour of AISI-430 steel interconnects in the presence of Ni–Co–CeO$_2$ composite coating for application in solid oxide fuel cells

AISI 430 stainless steels are used as interconnects in solid oxide fuel cells. One of the problems with these steels is the migration of chromium through the chromia scale and its transfer to the cathode, resulting in contamination and a reduction in the efficiency of the fuel cells. To improve the oxidation resistance of these steels, a protective coating layer can be applied on the steel surface. In this investigation, AISI 430 stainless steel was electroplated with nickel, cobalt and cerium oxide. To investigate oxidation behaviour, isothermal oxidation and cyclic oxidation were performed at 800$^{\circ}$C. The coating on the steel surface was studied using scanning electron microscopy and X-ray diffraction. In isothermal and cyclic tests, the coated samples showed less weight gain than the uncoated samples due to the formation of NiFe$_2$O$_4$ and CoFe$_2$O$_4$ spinels and NiCr$_2$O$_4$. These spinels prevented the outward diffusion of the chromium, improving the oxidation resistance of the steel substrate. Cyclic oxidation results showed that the coating formed on the steel surface resisted cracking and delamination.
2021-09-20 02:43:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4715500771999359, "perplexity": 5156.004337980284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056974.30/warc/CC-MAIN-20210920010331-20210920040331-00367.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/cpaa.2020036
# American Institute of Mathematical Sciences

Communications on Pure & Applied Analysis, February 2020, 19(2): 771-783. doi: 10.3934/cpaa.2020036

## Liouville theorems for an integral equation of Choquard type

1 Division of Computational Mathematics and Engineering, Institute for Computational Science, Ton Duc Thang University, Ho Chi Minh City, Vietnam
2 Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam

Received November 2018; Revised July 2019; Published October 2019

We establish sharp Liouville theorems for the integral equation

$u(x) = \int_{\mathbb{R}^n} \frac{u^{p-1}(y)}{|x-y|^{n-\alpha}} \int_{\mathbb{R}^n} \frac{u^p(z)}{|y-z|^{n-\beta}} \, dz \, dy, \quad x\in\mathbb{R}^n,$

where $0<\alpha,\beta<n$ and $p>1$. Our results hold true for positive solutions under appropriate assumptions on $p$ and integrability of the solutions. As a consequence, we derive a Liouville theorem for positive $H^{\frac{\alpha}{2}}(\mathbb{R}^n)$ solutions of the higher fractional order Choquard type equation

$(-\Delta)^{\frac{\alpha}{2}} u = \left(\frac{1}{|x|^{n-\beta}} * u^p\right) u^{p-1} \quad\text{ in } \mathbb{R}^n.$

Citation: Phuong Le. Liouville theorems for an integral equation of Choquard type. Communications on Pure & Applied Analysis, 2020, 19(2): 771-783. doi: 10.3934/cpaa.2020036
https://mathematica.stackexchange.com/questions/97414/how-to-implement-the-for-loop-to-replace-text-strings-or-texts
# How to implement the for loop to replace text strings or texts?

I want to define a bunch of variables named like $X_1$, $X_2$, $\ldots$, $X_n$. Is there some smart way to declare them instead of typing them all out? One trick I can imagine is to use a for loop, For[i = 1, i <= n, i++], to run over the subindices. However, each index here is not a number but text, and I don't know how to make the for loop work with texts or text strings. Can anyone give me some hints?

• Please avoid using subscripts. Some references: ref1, ref2, ref3 – Dr. belisarius Oct 20 '15 at 11:40

This would be more idiomatic to Mathematica than using a for loop:

indices = {"a", "b", "c"};
Array[(x[indices[[#]]] = #) &, 3];
{x["a"], x["b"], x["c"]}

{1, 2, 3}

This is an equivalent for loop:

For[i = 1, i <= Length[indices], i++, x[indices[[i]]] = i]

• I'd have used Scan[] myself… – J. M.'s technical difficulties Oct 20 '15 at 10:10
• @J.M. – Can you elaborate for the sake of posterity? – Jagra Oct 20 '15 at 13:16
• @Jagra Scan[(x[#] = #) &, indices] – Dr. belisarius Oct 20 '15 at 13:39
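A further variant in the same spirit as the Scan[] suggestion (a minimal sketch, assuming the goal is the mapping x["a"] = 1, x["b"] = 2, ...): MapIndexed hands each element to the function together with its position, so no explicit counter is needed.

indices = {"a", "b", "c"};
(* #1 is the element, #2 its position wrapped in a list, e.g. {2} *)
MapIndexed[(x[#1] = First[#2]) &, indices];
{x["a"], x["b"], x["c"]}  (* gives {1, 2, 3} *)

Note that the bare Scan[(x[#] = #) &, indices] from the comments assigns each key to itself (x["a"] = "a"), which differs from the accepted answer's x["a"] = 1.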
https://www.scienceforums.net/topic/108099-what-is-the-obsession-going-to-the-moon-or-mars/?tab=comments
# What is the obsession with going to the moon or Mars?

## Recommended Posts

There is not a day that goes by without the media going on and on about going to the moon or Mars!!! Colonizing the moon or Mars is seriously not going to solve our problems. It is not going to solve the overpopulation problem. A one-way ticket to the moon or Mars is not something the middle class could ever have enough money for. Even the upper class will not have the money for a one-way ticket. So it is not going to solve the overpopulation problem. What is with the government and the private sector pushing this trip to the moon or Mars in 10 to 15 years, when there is no spacecraft that is cost effective? Any spacecraft they have, or have in research and development, is not cost effective for the upper class, let alone the middle class!! It costs way too much money to go into space. Why such a push for space colonization?

##### Share on other sites

We were not born to stagnate on this fart arse little blue Orb. And of course Earth does have a "use by date". Not to mention, of course, applying the same question to when man first evolved: why he walked out of Africa, why he climbed Everest, why he sailed a largely unknown Ocean to discover the Americas, and why, even later, he sailed south over more unknown Oceans and waterways to discover Australia... In essence: because it's there, and to discover new knowledge.

##### Share on other sites

Why such a push for space colonization?

Exploration and discovery. The love of learning new things. And just the sheer challenge of doing it.

##### Share on other sites

No one is suggesting that moving to the Moon or Mars has anything at all to do with overpopulation.

##### Share on other sites

Stephen Hawking is, actually (as are a few others like Elon Musk, but they're clearly the most notable). http://www.wired.co.uk/article/stephen-hawking-100-years-on-earth-prediction-starmus-festival

Professor Stephen Hawking says he's not the only one who believes humans have to find a new planet to populate within 100 years (...) Professor Hawking said he thinks due to climate change, overdue asteroid strikes, epidemics and population growth, humans will need to find a new planet to populate within a single lifetime (...)
“I strongly believe we should start seeking alternative planets for possible habitation,” he said. “We are running out of space on earth and we need to break through technological limitations preventing us living elsewhere in the universe.” “I am not alone in this view and many of my colleagues will make further comments at the Starmus next month.” Hawking isn't the only one who has advocated for a multi-planet species. SpaceX boss Elon Musk has grand plans to launch space colonies in the next 100 years, and Nasa has said its Mars missions could help to put humans permanently on other planets.

##### Share on other sites

I'll believe going to Mars has something to do with overpopulation as soon as we colonise Antarctica, a much nicer place than Mars. Going to the Moon or Mars has definite scientific benefits, and the cost is trivial compared to maintaining a huge military force. Benefits from advances in technology cannot be predicted, but using the space program so far as a ruler, we would be stupid not to explore the solar system. Eventually we will colonise space, but I doubt it will be done by colonising planets. Far too many problems; toroidal colonies or O'Neill type cylinders are far more likely, easier, and make far more sense...

##### Share on other sites

If the private sector can bring the cost down to a million dollars for a one-way ticket to the moon or Mars, then it will look more realistic for the upper middle class. But it will NOT solve the problem of overpopulation. Even a $500,000 ticket to the moon or Mars is beyond laughable!!! It would be better to pass one-child-policy laws, backed by jail time, to lower overpopulation than to hope the private sector is going to lower space costs. An underwater city, a floating city, an underground city or a domed tower would be more realistic if we were running out of land to build on. Canada and Russia have lots of land, but they are too cold, and Antarctica would be like living in a freezer.

##### Share on other sites

I agree, but living on Mars would be like living in a vacuum chamber, with radiation, and a freezer!
Space colonies, built in space from materials found in space, could be quite large. A torus hundreds of miles across and tens of miles thick would be possible with no magical technology, although controlled fusion would definitely make it far easier. I often liken it to the iron age or the bronze age, calling it the carbon age, in terms of what would be used to construct the habitats...

##### Share on other sites

I'll take two quotes from you: "space colonization" and "costs way too much money". That is the reason why it's a good idea; you answered it yourself. No single country can afford it, so instead we all have to work together to share the cost. Here's a list of countries currently involved in the International Space Station: Japan, Russia, United States, Belgium, Denmark, France, Germany, Italy, The Netherlands, Norway, Spain, Sweden, Switzerland, United Kingdom. Expensive and difficult-to-achieve projects like this and CERN bring countries together. Science brings countries together.

##### Share on other sites

Bingo!! I have said that many times: a united international effort will facilitate us finally putting man on Mars and spread the costs. The ISS has shown how that can work. It would also be nice to see the Chinese involved and sharing their obvious expertise with that of the rest of the world.

##### Share on other sites

Yes exactly, and North Korea. I would love to see every country work together. There would be an end to war and we would be able to look after our planet a whole lot more. Once we can do that, we can colonize another planet without making the same mistakes.

##### Share on other sites

Space colonies are not expensive, in the same way that building boats is not expensive.
If you had to build a new boat from scratch every time someone wanted one, it would be ridiculous. But when you produce thousands of boats, you can pass the savings on to everyone. Establishing infrastructure is the main expense, but once you have that infrastructure, costs per habitat go down...

##### Share on other sites

The primitive urge to seek new and better horizons and leave intractable problems behind rather than face them.
The popularity of science fiction that tends to downplay or completely bypass, by some imaginary tech brilliancy, the extraordinary costs and difficulties.
The illusion that it would be a lot like the successful historic colonisations that happened on Earth.
The false expectation that ingenuity can overcome all limitation.
The belief that it will be not just economically viable to do so, but deliver enormous economic benefits.
The belief that space colonies can provide enduring security and defence from existential threats.
The unlikely expectation that humans in space will enjoy greater freedom from regulation or societal constraints.

##### Share on other sites

2 hours ago, Ken Fabian said:

The primitive urge to seek new and better horizons and leave intractable problems behind rather than face them.

Not all primitive urges are undesirable. Let's all be grateful that humanity as a whole still sees the need to explore, discover new worlds, and push the limits of boundaries. We would still be in the dark ages if we didn't.

2 hours ago, Ken Fabian said:

The popularity of science fiction that tends to downplay or completely bypass, by some imaginary tech brilliancy, the extraordinary costs and difficulties.

Sci-fi may downplay the costs and difficulties; science does not. But considering the trillions of dollars spent worldwide on military objectives, the small amount put aside for science and for getting us further into space is a drop in the Ocean. Yes, we certainly do have our priorities arse up!

2 hours ago, Ken Fabian said:

The illusion that it would be a lot like the successful historic colonisations that happened on Earth.

No illusion at all: science knows full well the dangers and difficulties involved, as well as the benefits to be obtained.

2 hours ago, Ken Fabian said:

The false expectation that ingenuity can overcome all limitation.

Whose ingenuity? And what limitations? Given the time, man will progress: to the Moon, to Mars, and to the stars. Given the time, man could, I believe, achieve anything that is not forbidden by the laws of physics and GR... Given the time.

2 hours ago, Ken Fabian said:

The belief that it will be not just economically viable to do so, but deliver enormous economic benefits.

Political and economic climates change over time: what is not economically viable today may be tomorrow. Economic benefits, along with technological benefits, international and general goodwill among other nations, and new industries are all bound to flow from our inevitable progress and, yes, in time, habitation of the solar system and beyond.

2 hours ago, Ken Fabian said:

The belief that space colonies can provide enduring security and defence from existential threats. The unlikely expectation that humans in space will enjoy greater freedom from regulation or societal constraints.

Quote time: "No pessimist ever discovered the secret of the stars, or sailed to an uncharted land, or opened a new doorway for the human spirit".
– Helen Keller

"There is no sadder sight than a young pessimist". – Mark Twain

##### Share on other sites

Yes, you successfully listed some of the negative aspects of space colonization, and none of them involve war, which is a very good thing. Space exploration leading to colonization is the eventual result of a science-driven civilization, with countries across the world working together to cover the huge costs. The alternative is a religion-driven civilization where countries annihilate each other.

##### Share on other sites

So you are saying the international community should spend billions of dollars sending not hundreds, not thousands, but millions of people!!! Yes, not one million!! Hundreds of millions of people into space? Because it would take hundreds of millions of people going into space to keep Earth's population from going up. Yes, numbers in the hundreds of millions of people going into space. Is the international community going to subsidize space costs to bring them down so the average person can pay for a ticket, yes, a one-way ticket, to Mars or the moon? What would a ticket cost? A million dollars? Or, with major subsidies, a $500,000 space ticket? Or do you somehow think there are going to be technology breakthroughs that bring the cost down to, say, a $500,000 ticket per person to Mars or the moon?

##### Share on other sites

Wow!!! Your assumptions are nothing short of amazing.
I'm saying, again, that space endeavours and explorations will continue in spite of the negative, pessimistic attitudes of some. I'm saying that humanity's primitive need to explore and go where no man has gone before will not be stopped. I'm saying that those who are crying and wringing their hands over the incredible costs of space travel and exploration need to consider the trillions of dollars spent worldwide on military endeavours. I'm saying that while it is expensive, that expense can be alleviated to some extent by an international effort. I'm saying that progress is not going to be stopped. I'm saying that political and economic climates will and do change. I'm saying that, given the time, man will progress: to the Moon, to Mars, and to the stars. Given the time, man could, I believe, achieve anything that is not forbidden by the laws of physics and GR... Given the time. I'm not talking of space tourists as such, although that too will probably take place in time, just as permanent settlements off this planet will take place in time, despite the bleeding hearts and their fragile opposition to such.

##### Share on other sites

Not sure what you mean. The holy grail is a reusable spacecraft to lower space costs. But unfortunately that has not come to be. There were many ideas for reusable spacecraft in the search for that holy grail: 1. A spacecraft that goes to space on a rocket and lands like a plane. 2. A rocket/plane combo that takes off like a plane and uses jet engines to get very high up, where the rocket takes over. 3. A single-stage-to-orbit (SSTO) vehicle that reaches orbit from the surface of a body without jettisoning hardware. The rocket engines were not powerful enough, so they said these ideas would not work, and it was back to basics. NASA and other governments cut the X-programs back to the basics. They spent billions and billions of dollars looking into spacecraft on rockets, rocket/plane combos and single-stage-to-orbit (SSTO) vehicles, all to find out that the rocket engines were not powerful enough, and then went back to basics.

##### Share on other sites

I think the first minute of the following YouTube video will answer that question very well, at least in my opinion:

##### Share on other sites

A population explosion is unlikely to be reduced by sending people to either the Moon or Mars; their environments are not friendly, and Mars is a long way from Earth. As Moontanman said, large rotating habitats in orbit may be a good solution; however, I think we will need a presence on the Moon to mine materials, manufacture habitat modules, and launch them into orbit around the Earth and Moon, because Earth's gravity makes launch costs higher than the Moon's. Since AI and robots are now useful technologies, I think the Moon can be mined and things made by automatons with few people needed. As habitats are completed, people can be moved from Earth into them.
Nonetheless, a launch vehicle such as the proposed SpaceX Mars Colonial Transporter (MCT), which can carry 100-200 people, will be needed to move people from Earth to a space habitat. Since there are already many people willing and able to move to Mars, there is no reason we shouldn't, given an MCT. Once the exodus from Earth begins, I believe people will continue to move into the solar system, orbiting the Sun in the habitable zone and further out, as long as there is sufficient energy. The moons and asteroids can provide material to build many habitats that become a Dyson swarm.

##### Share on other sites

My quote was a little unclear; what I should have said, and what I meant to convey, is that it would be ridiculous if you had to buy a new boat every time you wanted to use one. But my point is that you use the materials in space to make your habitat. Something as well known as Kevlar could be used to make gigantic habitats. Kevlar is a carbon compound; carbon is the third or fourth most common element in the universe, and we know how to make it. Of course, making it in zero g might be a challenge. Things like carbon fibers could be used to make habitats on par with a continent in surface area. A Dyson swarm of these objects could amount to millions of Earths in usable surface area. Planets are not really practical: you are highly unlikely to find a planet close enough to Earth's conditions to simply move there. Even tiny differences, like the amount of CO2 in the atmosphere, could make a planet uninhabitable even if it overflowed with life. While controlled fusion would make this easier, a Dyson swarm is a way to use the entire energy output of the sun. While the Moon will no doubt be a source of materials, the asteroids and small icy moons will be needed to make really large habitats. These habitats could be moved relatively easily, if very slowly, and could use materials from the Kuiper belt, the Oort cloud, or even rogue bodies in interstellar space. Given fusion, not only would planets not be needed, but even stars would be ignored in favor of the debris found around them and in the space between them.
Yes reason air planes are so cheap or boats ( $500 air plane ticket per person and going into space is so expensive (million dollar ticket per person) is reusable craft. It cost billions to make 747!! This is not research, development and testing cost, of say a newer 747!!!! Just making 747!!! Yes 747 is very costly costing billions to make 747! But you don't destroy a 747 after every flight!! A 747 can go on for 20 years or more of services!! And in that time of flying millions of millions of paying customers to make a profit and cover the cost of making 747. Likewise you don't buy new ocean-liner after departing and arriving. This what I was saying by searching for the holy grail of reusable spacecraft. Edited by nec209 #### Share this post ##### Link to post ##### Share on other sites 7 hours ago, nec209 said: Not sure what you mean. The holy grail is reusable spacecraft to lower space cost. But unfortunately that not come to be. #### Share this post ##### Link to post ##### Share on other sites 9 hours ago, nec209 said: Or you some how thing there is going to be technology breakthroughs to bring space cost down to say$500,000 space ticket per person to mars or the moon? The cost will drop significantly. Within 500 years, it will be like buying a plane ticket for a trip during the summer holiday. It you think that's a little optimistic, definitely within 1,000 years.
https://www.ssmathematics.com/2018/11/
## Friday, November 30, 2018

### Advance Notice

To save users' data and time, posts here are written using text, HTML and LaTeX code, so please understand in advance that there may be some delay. For matriculation (university entrance) mathematics, old exam questions and their answers will be posted first. With thanks, Dr Shwe Kyaw (Maths), Doctor of Science, TU Berlin.

## Thursday, November 29, 2018

### Sum of k-th powers of positive integers (Determinant form)

$\def\D{\displaystyle}$

### 1. (Sum of the first $\D n$ positive integers)

$\sum_{i=1}^{n}i=\frac{1}{2!}\begin{vmatrix}(n+1)^2-(n+1)\end{vmatrix}$

### 2. (Sum of the squares of the first $\D n$ positive integers)

$\sum_{i=1}^{n}i^2=\frac{1}{3!}\begin{vmatrix}2&(n+1)^2-(n+1)\\3&(n+1)^3-(n+1)\end{vmatrix}$

### 3. (Sum of the cubes of the first $\D n$ positive integers)

$\sum_{i=1}^{n}i^3=\frac{1}{4!}\begin{vmatrix}2&0&(n+1)^2-(n+1)\\3&3&(n+1)^3-(n+1)\\4&6&(n+1)^4-(n+1)\end{vmatrix}$

### 4. (Sum of the fourth powers of the first $\D n$ positive integers)

$\sum_{i=1}^{n}i^4=\frac{1}{5!}\begin{vmatrix}2&0&0&(n+1)^2-(n+1)\\3&3&0&(n+1)^3-(n+1)\\4&6&4&(n+1)^4-(n+1)\\5&10&10&(n+1)^5-(n+1)\end{vmatrix}$

### 5. (Sum of the $\D k^{th}$ powers of the first $\D n$ positive integers)

$\sum_{i=1}^{n}i^k=\frac{1}{(k+1)!}\begin{vmatrix} ^2C_1&0&0&\cdots&\cdots&(n+1)^2-(n+1)\\ ^3C_1&^3C_2&0&\cdots&\cdots&(n+1)^3-(n+1)\\ ^4C_1&^4C_2&^4C_3&\cdots&\cdots&(n+1)^4-(n+1)\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ ^{k+1}C_1&^{k+1}C_2&^{k+1}C_3&\cdots&^{k+1}C_{k-1}&(n+1)^{k+1}-(n+1)\end{vmatrix}$
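A quick numerical check of the first two formulas (an added sanity check, not part of the original post): at $n=3$, $(n+1)^2-(n+1)=12$ and $(n+1)^3-(n+1)=60$, so the first formula gives

$\frac{1}{2!}\left[(3+1)^2-(3+1)\right]=\frac{16-4}{2}=6=1+2+3,$

and the second gives

$\frac{1}{3!}\begin{vmatrix}2&12\\3&60\end{vmatrix}=\frac{2\cdot 60-3\cdot 12}{6}=\frac{84}{6}=14=1^2+2^2+3^2.$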
## Tuesday, November 27, 2018

### Past Questions (AP, GP Series)

$\def\D{\displaystyle} \def\iixi#1#2{\D\left(\begin{array}{c} #1\\#2 \end{array}\right)}$

1.) In an arithmetic sequence, $\D u_1 = 2$ and $\D u_3 = 8.$ (a) Find $\D d.$ (b) Find $\D u_{20}.$ (c) Find $\D S_{20}.$ (Total 6 marks)

2.) In an arithmetic sequence $\D u_1 = 7, u_{20} = 64$ and $\D u_n = 3709.$ (a) Find the value of the common difference. (b) Find the value of $\D n.$ (Total 5 marks)

3.) Consider the arithmetic sequence 3, 9, 15, ..., 1353. (a) Write down the common difference. (b) Find the number of terms in the sequence. (c) Find the sum of the sequence. (Total 6 marks)

4.) An arithmetic sequence, $\D u_1, u_2, u_3, \ldots,$ has $\D d = 11$ and $\D u_{27} = 263.$ (a) Find $\D u_1.$ (b) (i) Given that $\D u_n = 516,$ find the value of $\D n.$ (ii) For this value of $\D n,$ find $\D S_n.$ (Total 6 marks)

5.) The first three terms of an infinite geometric sequence are 32, 16 and 8. (a) Write down the value of $\D r.$ (b) Find $\D u_6.$ (c) Find the sum to infinity of this sequence. (Total 5 marks)

6.) The $\D n^{th}$ term of an arithmetic sequence is given by $\D u_n = 5 + 2n.$ (a) Write down the common difference. (b) (i) Given that the $\D n^{th}$ term of this sequence is 115, find the value of $\D n.$ (ii) For this value of $\D n,$ find the sum of the sequence. (Total 6 marks)

7.) In an arithmetic series, the first term is –7 and the sum of the first 20 terms is 620. (a) Find the common difference. (b) Find the value of the $\D 78^{th}$ term. (Total 5 marks)

8.) In a geometric series, $\D u_1 = \frac{1}{81}$ and $\D u_4 = \frac{1}{3}.$ (a) Find the value of $\D r.$ (b) Find the smallest value of $\D n$ for which $\D S_n > 40.$ (Total 7 marks)

9.) (a) Expand $\D \sum_{r=4}^{7} 2^r$ as the sum of four terms. (b) (i) Find the value of $\D \sum_{r=4}^{30} 2^r.$ (ii) Explain why $\D \sum_{r=4}^{\infty} 2^r$ cannot be evaluated. (Total 7 marks)

10.) In an arithmetic sequence, $\D S_{40} = 1900$ and $\D u_{40} = 106.$ Find the value of $\D u_1$ and of $\D d.$ (Total 6 marks)

11.) Consider the arithmetic sequence 2, 5, 8, 11, .... (a) Find $\D u_{101}.$ (b) Find the value of $\D n$ so that $\D u_n = 152.$ (Total 6 marks)

12.) Consider the infinite geometric sequence 3000, -1800, 1080, -648, … . (a) Find the common ratio. (b) Find the 10th term. (c) Find the exact sum of the infinite sequence. (Total 6 marks)

13.) Consider the infinite geometric sequence $\D 3, 3(0.9), 3(0.9)^2, 3(0.9)^3, … .$ (a) Write down the 10th term of the sequence. Do not simplify your answer. (b) Find the sum of the infinite sequence. (Total 5 marks)

14.) In an arithmetic sequence $\D u_{21} = -37$ and $\D u_4 = -3.$ (a) Find (i) the common difference; (ii) the first term. (b) Find $\D S_{10}.$ (Total 7 marks)

15.) Let $\D u_n = 3 - 2n.$ (a) Write down the value of $\D u_1, u_2,$ and $\D u_3.$ (b) Find $\D \sum_{n=1}^{20} (3-2n).$ (Total 6 marks)

16.) A theatre has 20 rows of seats. There are 15 seats in the first row, 17 seats in the second row, and each successive row of seats has two more seats in it than the previous row. (a) Calculate the number of seats in the 20th row. (b) Calculate the total number of seats. (Total 6 marks)

17.) A sum of \$5000 is invested at a compound interest rate of 6.3 % per annum. (a) Write down an expression for the value of the investment after n full years. (b) What will be the value of the investment at the end of five years? (c) The value of the investment will exceed \$10 000 after n full years. (i) Write down an inequality to represent this information. (ii) Calculate the minimum value of n. (Total 6 marks)

18.) Consider the infinite geometric sequence 25, 5, 1, 0.2, … . (a) Find the common ratio. (b) Find (i) the 10th term; (ii) an expression for the nth term. (c) Find the sum of the infinite sequence. (Total 6 marks)

19.) The first four terms of a sequence are 18, 54, 162, 486. (a) Use all four terms to show that this is a geometric sequence. (b) (i) Find an expression for the nth term of this geometric sequence. (ii) If the nth term of the sequence is 1 062 882, find the value of n. (Total 6 marks)

20.) (a) Write down the first three terms of the sequence $\D u_n = 3n,$ for $\D n\ge 1.$ (b) Find (i) $\D \sum_{n=1}^{20} 3n$ (ii) $\D \sum_{n=21}^{100} 3n.$ (Total 6 marks)

21.) Consider the infinite geometric series 405 + 270 + 180 + .... (a) For this series, find the common ratio, giving your answer as a fraction in its simplest form. (b) Find the fifteenth term of this series. (c) Find the exact value of the sum of the infinite series. (Total 6 marks)

22.) (a) Consider the geometric sequence -3, 6, -12, 24, …. (i) Write down the common ratio. (ii) Find the 15th term. Consider the sequence $\D x - 3, x + 1, 2x + 8, \ldots.$ (b) When $\D x = 5,$ the sequence is geometric. (i) Write down the first three terms. (ii) Find the common ratio. (c) Find the other value of $\D x$ for which the sequence is geometric. (d) For this value of $\D x,$ find (i) the common ratio; (ii) the sum of the infinite sequence. (Total 12 marks)
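A worked solution to question 1, added here as an illustration (it uses only $u_n=u_1+(n-1)d$ and $S_n=\frac{n}{2}(u_1+u_n)$, and agrees with the first entry of the answer key below): from $u_1=2$ and $u_3=8$ we get $8=2+2d$, so (a) $d=3$; then (b) $u_{20}=2+19\cdot 3=59$; and (c) $S_{20}=\frac{20}{2}(2+59)=610.$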
Answers:
1. 3; 59; 610
2. 3; 1235
3. 6; 226; 153228
4. -23; 50; 12325
5. 1/2; 1; 64
6. 2; 55; 3355
7. 4; 301
8. 3; 8
9. 240; 2147483632
10. -11; 3
11. 302; 51
12. -0.6; -30.2; 1875
13. 3(0.9)^9 ≈ 1.162; 30
14. -2; 3; -60
15. 1, -1, -3; -360
16. 53; 680
17. 5000(1.063^n); 6786.35; …; 12
18. 0.2; 1/78125; …; 31.25
19. r = 3; 18×3^(n-1); 11
20. 3, 6, 9; 630; 14520
21. 2/3; 1.39; 1215
22. -2; -49152; 2, 6, 18; 3; -5; 1/2; -16

### Past Questions (Binomial) IB Standard Level

$\newcommand{\D}{\displaystyle} \def\iixi#1#2{\D\left(\begin{array}{c} #1\\#2 \end{array}\right)}$

1.) Consider the expansion of $\D (x + 2)^{11}.$ (a) Write down the number of terms in this expansion. (b) Find the term containing $\D x^2.$ (Total 5 marks)

2.) (a) Expand $\D (2 + x)^4$ and simplify your result. (b) Hence, find the term in $\D x^2$ in $\D (2 + x)^4\left(1+\frac{1}{x^2}\right).$ (Total 6 marks)

3.) Find the term in $\D x^4$ in the expansion of $\D \left(3x^2-\frac{2}{x}\right)^5.$ (Total 6 marks)

4.) The fifth term in the expansion of the binomial $\D (a + b)^n$ is given by $\D \iixi{10}{4}p^6(2q)^4.$ (a) Write down the value of $\D n.$ (b) Write down $\D a$ and $\D b,$ in terms of $\D p$ and/or $\D q.$ (c) Write down an expression for the sixth term in the expansion. (Total 6 marks)

5.) Let $\D f(x) = x^3 - 4x + 1.$ (a) Expand $\D (x + h)^3.$

6.) Find the term in $\D x^3$ in the expansion of $\D \left(\frac{2}{3}x-3\right)^8.$ (Total 5 marks)

7.) (a) Expand $\D (x-2)^4$ and simplify your result. (b) Find the term in $\D x^3$ in $\D (3x + 4)(x-2)^4.$ (Total 6 marks)

8.) Consider the expansion of the expression $\D (x^3-3x)^6.$ (a) Write down the number of terms in this expansion. (b) Find the term in $\D x^{12}.$ (Total 6 marks)

9.) One of the terms of the expansion of $\D (x + 2y)^{10}$ is $\D ax^8 y^2.$ Find the value of $\D a.$ (Total 6 marks)

10.) (a) Expand $\D \left(e+\frac{1}{e}\right)^4$ in terms of $\D e.$ (b) Express $\D \left(e+\frac{1}{e}\right)^4+\left(e-\frac{1}{e}\right)^4$ as the sum of three terms. (Total 6 marks)

11.) Consider the expansion of $\D (x^2 - 2)^5.$ (a) Write down the number of terms in this expansion. (b) The first four terms of the expansion in descending powers of $\D x$ are $x^{10} - 10x^8 + 40x^6 + Ax^4 + ...$ Find the value of $\D A.$ (Total 6 marks)

12.) Given that $\D \left(3+\sqrt{7}\right)^3 = p+\sqrt{q}$ where $\D p$ and $\D q$ are integers, find (a) $\D p;$ (b) $\D q.$ (Total 6 marks)

13.) When the expression $\D (2 + ax)^{10}$ is expanded, the coefficient of the term in $\D x^3$ is 414720. Find the value of $\D a.$ (Total 6 marks)

14.) Find the term containing $\D x^3$ in the expansion of $\D (2 - 3x)^8.$ (Total 6 marks)

15.) Find the term containing $\D x^{10}$ in the expansion of $\D (5 + 2x^2)^7.$ (Total 6 marks)

16.) Complete the following expansion. $\D (2 + ax)^4 = 16 + 32ax + \cdots$ (Total 6 marks)

17.) Consider the expansion of $\D \left(3x^2-\frac{1}{x}\right)^9.$ (a) How many terms are there in this expansion? (b) Find the constant term in this expansion. (Total 6 marks)

18.) Find the coefficient of $\D x^3$ in the expansion of $\D (2 - x)^5.$ (Total 6 marks)

19.) Use the binomial theorem to complete this expansion. $\D (3x + 2y)^4 = 81x^4 + 216x^3 y + \cdots$ (Total 4 marks)

20.) Consider the binomial expansion $(1+x)^4=1+\iixi{4}{1}x+ \iixi{4}{2}x^2 +\iixi{4}{3}x^3+x^4.$ (a) By substituting $\D x = 1$ into both sides, or otherwise, evaluate $\D \iixi{4}{1}+ \iixi{4}{2} +\iixi{4}{3}.$ (b) Evaluate $\D \iixi{9}{1}+ \iixi{9}{2}+ \iixi{9}{3}+ \iixi{9}{4}+ \iixi{9}{5}+ \iixi{9}{6}+ \iixi{9}{7}+ \iixi{9}{8}.$ (Total 4 marks)

21.) Determine the constant term in the expansion of $\D \left(x-\frac{2}{x^2}\right)^9.$ (Total 4 marks)

22.) Find the coefficient of $\D a^5b^7$ in the expansion of $\D (a + b)^{12}.$ (Total 4 marks)

23.) Find the coefficient of $\D x^5$ in the expansion of $\D (3x - 2)^8.$ (Total 4 marks)

24.) Find the coefficient of $\D a^3b^4$ in the expansion of $(5a + b)^7.$ (Total 4 marks)
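A worked solution to question 1, added for illustration (it matches the first entry of the answer key below): the expansion of $(x+2)^{11}$ has $11+1=12$ terms, and the general term is $\iixi{11}{k}x^{11-k}2^k$; the term containing $x^2$ has $k=9$, giving $\iixi{11}{9}2^9x^2 = 55\cdot 512\,x^2 = 28160x^2.$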
Answers:
1. 12; 28160x^2
2. 16 + 32x + 24x^2 + 8x^3 + x^4; 25x^2
3. 1080x^4
4. n = 10; a = p, b = 2q; 10C5 p^5 (2q)^5
5. x^3 + 3x^2h + 3xh^2 + h^3
6. -4032x^3
7. x^4 - 8x^3 + 24x^2 - 32x + 16; 40x^3
8. 7; -540x^{12}
9. 180
10. e^4 + 4e^2 + 6 + 4/e^2 + 1/e^4; 2e^4 + 12 + 2/e^4
11. 6; A = -80
12. p = 90, q = 34
13. 3
14. -48384x^3
15. 16800x^{10}
16. ... + 24a^2x^2 + 8a^3x^3 + a^4x^4
17. 10; 2268
18. -40
19. ... + 216x^2y^2 + 96xy^3 + 16y^4
20. 14; 510
21. -672
22. 792
23. -108864
24. 4375

## Method of difference

### Example 1

Find $\D \sum_{k=1}^{n}\frac{1}{k(k+1)}.$

Let $\D u_k=\frac{1}{k(k+1)}=\frac{1}{k}-\frac{1}{k+1}.$ Then

\begin{eqnarray} u_1 &=&\frac{1}{1}-\frac{1}{2}\\ u_2 &=&\frac{1}{2}-\frac{1}{3}\\ u_3 &=&\frac{1}{3}-\frac{1}{4}\\ \vdots &=& \vdots\\ u_n &=&\frac{1}{n}-\frac{1}{n+1} \end{eqnarray}

By adding, $u_1+u_2+\cdots+u_n=1-\frac{1}{n+1}.$

$\newcommand{\iixi}[2]{\left(\begin{array}{c}#1\\#2\end{array}\right)}$

### Proposition:

For any positive integers $n$ and $k$,

$(n+1)^k-1=\iixi{k}{1}\sum_{i=1}^{n}i^{k-1} +\iixi{k}{2}\sum_{i=1}^{n}i^{k-2}+\cdots+\iixi{k}{k}n.$

### Proof:

\begin{eqnarray*} (n+1)^k&=&n^k+\iixi{k}{1} n^{k-1}+\iixi{k}{2}n^{k-2}+\cdots+\iixi{k}{k}\\ (n+1)^k-n^k&=&\iixi{k}{1} n^{k-1}+\iixi{k}{2}n^{k-2}+\cdots+\iixi{k}{k} \end{eqnarray*}

\begin{eqnarray*} n=1:&2^k-1^k=&\iixi{k}{1} 1^{k-1}+\iixi{k}{2}1^{k-2}+\cdots+\iixi{k}{k}\\ n=2:&3^k-2^k=&\iixi{k}{1} 2^{k-1}+\iixi{k}{2}2^{k-2}+\cdots+\iixi{k}{k}\\ n=3:&4^k-3^k=&\iixi{k}{1} 3^{k-1}+\iixi{k}{2}3^{k-2}+\cdots+\iixi{k}{k}\\ \vdots&\vdots&\vdots\\ &(n+1)^k-n^k=&\iixi{k}{1} n^{k-1}+\iixi{k}{2}n^{k-2}+\cdots+\iixi{k}{k} \end{eqnarray*}

By adding,

$(n+1)^k-1=\iixi{k}{1}\sum_{i=1}^{n}i^{k-1} +\iixi{k}{2}\sum_{i=1}^{n}i^{k-2}+\cdots+\iixi{k}{k}n$

### Example 2 (Sum of the first $n$ natural numbers)

$S_n=\sum_{i=1}^{n}i=\frac{n(n+1)}{2}.$

By taking $k=2$ in the above proposition, we get

$(n+1)^2-1=\iixi{2}{1}\sum_{i=1}^{n}i+\iixi{2}{2}n.$

Thus,

\begin{eqnarray*} n^2+2n+1-1&=&2S_n+n\\ 2S_n&=&n^2+n=n(n+1)\\ S_n&=&\frac{n(n+1)}{2} \end{eqnarray*}

### Example 3 (Sum of the squares of the first $n$ natural numbers, denoted by $S_n^{(2)}$)

$S_n^{(2)}=\sum_{i=1}^{n}i^2=\frac{n(n+1)(2n+1)}{6}.$

By taking $k=3$ in the above proposition, we get

$(n+1)^3-1=\iixi{3}{1}\sum_{i=1}^{n}i^2+ \iixi{3}{2}\sum_{i=1}^{n}i+\iixi{3}{3}n.$

Thus,

\begin{eqnarray*} n^3+3n^2+3n+1-1&=&3S_n^{(2)}+3S_n+n\\ n^3+3n^2+3n&=&3S_n^{(2)}+\frac{3n(n+1)}{2}+n\\ 3S_n^{(2)}&=&n^3+3n^2+3n-n-\frac{3n(n+1)}{2}\\ S_n^{(2)}&=&\frac{n(n+1)(2n+1)}{6} \end{eqnarray*}
### Example 4 (Sum of the squares of the first $n$ even numbers)

$2^2+4^2+\cdots+(2n)^2=\frac{2n(n+1)(2n+1)}{3}.$

By taking $k=3$ in the above proposition, we get

$(n+1)^3-1=\iixi{3}{1}\sum_{i=1}^{n}i^2+ \iixi{3}{2}\sum_{i=1}^{n}i+\iixi{3}{3}n.$

Let $u_i=2i,\ i=1,2,\ldots,n.$ Then $(u_i)^2=4i^2,$ so $\D i^2=\frac{(u_i)^2}{4}.$

\begin{eqnarray*} n^3+3n^2+3n+1-1&=&3\sum_{i=1}^{n}\frac{(u_i)^2}{4}+3S_n+n\\ n^3+3n^2+3n&=&\frac{3}{4}\sum_{i=1}^{n}(u_i)^2+\frac{3n(n+1)}{2}+n\\ \frac{3}{4}\sum_{i=1}^{n}(u_i)^2&=&n^3+3n^2+3n-n-\frac{3n(n+1)}{2}\\ \sum_{i=1}^{n}(2i)^2&=&\frac{2n(n+1)(2n+1)}{3} \end{eqnarray*}

### Example 5 (Sum of the squares of the first $n$ odd numbers)

$1^2+3^2+\cdots+(2n-1)^2=\frac{n(2n+1)(2n-1)}{3}.$

By taking $k=3$ in the above proposition, we get

$(n+1)^3-1=\iixi{3}{1}\sum_{i=1}^{n}i^2+ \iixi{3}{2}\sum_{i=1}^{n}i+\iixi{3}{3}n.$

Let $u_i=2i-1,\ i=1,2,\ldots,n.$ Then $(u_i)^2=4i^2-4i+1,$ so $\D i^2=\frac{(u_i)^2+4i-1}{4}.$

\begin{eqnarray*} n^3+3n^2+3n+1-1&=&3\sum_{i=1}^{n}\frac{(u_i)^2+4i-1}{4}+3S_n+n\\ n^3+3n^2+3n&=&\frac{3}{4}\sum_{i=1}^{n}(u_i)^2+\frac{3}{4}\sum_{i=1}^{n}(4i-1)+\frac{3n(n+1)}{2}+n\\ \frac{3}{4}\sum_{i=1}^{n}(u_i)^2&=&n^3+3n^2+3n-n-\frac{3n(n+1)}{2}-\frac{3}{4}\sum_{i=1}^{n}(4i-1)\\ \sum_{i=1}^{n}(2i-1)^2&=&\frac{n(2n+1)(2n-1)}{3} \end{eqnarray*}

### Example 6

Show that $1\times2+2\times3+3\times4+\cdots +n(n+1)=\frac{n(n+1)(n+2)}{3}.$

Proof: Let $\D u_i=i(i+1)$. By taking $k=3$ in the above proposition, we get

$(n+1)^3-1=\iixi{3}{1}\sum_{i=1}^{n}i^2+ \iixi{3}{2}\sum_{i=1}^{n}i+\iixi{3}{3}n.$

\begin{eqnarray*} n^3+3n^2+3n+1-1&=&3\sum_{i=1}^{n}(i^2+i)+n\\ n^3+3n^2+3n&=&3\sum_{i=1}^{n}u_i+n\\ 3\sum_{i=1}^{n}u_i&=&n^3+3n^2+3n-n\\ \sum_{i=1}^{n}u_i&=&\frac{1}{3}(n^3+3n^2+2n)\\ &=&\frac{n(n+1)(n+2)}{3} \end{eqnarray*}

### Exam Practice

By using the formula $1^2+2^2+3^2+\cdots+n^2=\frac{n(n+1)(2n+1)}{6},$

(1) show: $1^2+3^2+5^2+\cdots+(2n-1)^2=\frac{n(2n+1)(2n-1)}{3}.$

(2) verify: $2^2+4^2+6^2+\cdots+(2n)^2=\frac{2n(n+1)(2n+1)}{3}.$

(3) prove: $1\times2+2\times3+3\times4+\cdots +n(n+1)=\frac{n(n+1)(n+2)}{3}.$

(4) Evaluate (sum of the squares of the terms of an AP): $(-5)^2+(-2)^2+1^2+4^2+\cdots+ \mbox{10 terms}$ (Hint: Let $\D u_i=-8+3i$. Then $\D u_i^2=\cdots$; a worked solution follows this list.)

(5) Find (sum of products of consecutive terms of an AP): $(-2)(1)+(1)(4)+(4)(7)+\cdots+ \mbox{10 terms}$ (Hint: Let $\D u_i=(-5+3i)(-2+3i)$.)

(6) Calculate (sum of products of corresponding terms of two APs): $(-3)(5)+(-1)(8)+(1)(11)+\cdots +\mbox{10 terms}$
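A worked solution to practice item (4), following the hint (an added illustration, not part of the original post): with $u_i=-8+3i$ we have $u_i^2=9i^2-48i+64$, so, using $\sum_{i=1}^{10}i^2=\frac{10\cdot 11\cdot 21}{6}=385$ and $\sum_{i=1}^{10}i=55$,

$\sum_{i=1}^{10}u_i^2 = 9\cdot 385-48\cdot 55+64\cdot 10 = 3465-2640+640 = 1465.$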
## H-factor Method for differentiation by first principles

### Example 1 (h-factor method)

Find $\displaystyle f'(x)$, if $f(x)=\sqrt{x}.$

\begin{eqnarray*} f'(x)&=&\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}\\ &=&\lim_{h\to 0}\frac{\sqrt{x+h}-\sqrt{x}}{(x+h)-x}\\ &=&\lim_{h\to 0}\frac{\sqrt{x+h}-\sqrt{x}}{(\sqrt{x+h})^2-(\sqrt{x})^2}\\ &=&\lim_{h\to 0} \frac{\sqrt{x+h}-\sqrt{x}}{(\sqrt{x+h}-\sqrt{x})(\sqrt{x+h}+\sqrt{x})}\\ &=& \lim_{h\to 0} \frac{1}{\sqrt{x+h}+\sqrt{x}}\\ &=& \frac{1}{\sqrt{x}+\sqrt{x}}\\ &=&\frac{1}{2\sqrt{x}} \end{eqnarray*}

### Example 2 (h-factor method)

Find $\displaystyle f'(x)$, if $f(x)=\sqrt[3]{x}.$

\begin{eqnarray*} f'(x)&=&\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}\\ &=&\lim_{h\to 0}\frac{\sqrt[3]{x+h}-\sqrt[3]{x}}{(x+h)-x}\\ &=&\lim_{h\to 0}\frac{\sqrt[3]{x+h}-\sqrt[3]{x}}{(\sqrt[3]{x+h})^3-(\sqrt[3]{x})^3}\\ &=&\lim_{h\to 0} \frac{\sqrt[3]{x+h}-\sqrt[3]{x}}{(\sqrt[3]{x+h}-\sqrt[3]{x})((\sqrt[3]{x+h})^2+ \sqrt[3]{x+h}\sqrt[3]{x} +(\sqrt[3]{x})^2)}\\ &=& \lim_{h\to 0} \frac{1}{(\sqrt[3]{x+h})^2+ \sqrt[3]{x+h}\sqrt[3]{x} +(\sqrt[3]{x})^2}\\ &=& \frac{1}{\sqrt[3]{x^2}+\sqrt[3]{x^2} +\sqrt[3]{x^2}}\\ &=&\frac{1}{3\sqrt[3]{x^2}} \end{eqnarray*}

### Example 3 (h-factor method)

Find $\displaystyle f'(x)$, if $f(x)=\sqrt[3]{x^2}.$

\begin{eqnarray*} f'(x)&=&\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}\\ &=&\lim_{h\to 0}\frac{(\sqrt[3]{x+h})^2-(\sqrt[3]{x})^2}{(x+h)-x}\\ &=&\lim_{h\to 0}\frac{(\sqrt[3]{x+h}-\sqrt[3]{x})(\sqrt[3]{x+h}+\sqrt[3]{x})}{(\sqrt[3]{x+h})^3-(\sqrt[3]{x})^3}\\ &=&\lim_{h\to 0} \frac{(\sqrt[3]{x+h}-\sqrt[3]{x})(\sqrt[3]{x+h}+\sqrt[3]{x})}{(\sqrt[3]{x+h}-\sqrt[3]{x})((\sqrt[3]{x+h})^2+ \sqrt[3]{x+h}\sqrt[3]{x} +(\sqrt[3]{x})^2)}\\ &=& \lim_{h\to 0} \frac{\sqrt[3]{x+h}+\sqrt[3]{x}}{(\sqrt[3]{x+h})^2+ \sqrt[3]{x+h}\sqrt[3]{x} +(\sqrt[3]{x})^2}\\ &=& \frac{\sqrt[3]{x}+\sqrt[3]{x}}{\sqrt[3]{x^2}+\sqrt[3]{x^2} +\sqrt[3]{x^2}}\\ &=&\frac{2\sqrt[3]{x}}{3\sqrt[3]{x^2}}=\frac{2}{3\sqrt[3]{x}} \end{eqnarray*}

### Example 4 (h-factor method)

Find $\displaystyle f'(x)$, if $f(x)=\sqrt[3]{x^4}.$

\begin{eqnarray*} f'(x)&=&\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}\\ &=&\lim_{h\to 0}\frac{(\sqrt[3]{x+h})^4-(\sqrt[3]{x})^4}{(x+h)-x}\\ &=&\lim_{h\to 0}\frac{((\sqrt[3]{x+h})^2-(\sqrt[3]{x})^2)((\sqrt[3]{x+h})^2+(\sqrt[3]{x})^2)}{(\sqrt[3]{x+h})^3-(\sqrt[3]{x})^3}\\ &=&\lim_{h\to 0} \frac{(\sqrt[3]{x+h}-\sqrt[3]{x})(\sqrt[3]{x+h}+\sqrt[3]{x})((\sqrt[3]{x+h})^2+(\sqrt[3]{x})^2)}{(\sqrt[3]{x+h}-\sqrt[3]{x})((\sqrt[3]{x+h})^2+ \sqrt[3]{x+h}\sqrt[3]{x} +(\sqrt[3]{x})^2)}\\ &=& \lim_{h\to 0} \frac{(\sqrt[3]{x+h}+\sqrt[3]{x})((\sqrt[3]{x+h})^2+(\sqrt[3]{x})^2)}{(\sqrt[3]{x+h})^2+ \sqrt[3]{x+h}\sqrt[3]{x} +(\sqrt[3]{x})^2}\\ &=& \frac{(\sqrt[3]{x}+\sqrt[3]{x})((\sqrt[3]{x})^2+(\sqrt[3]{x})^2)}{\sqrt[3]{x^2}+\sqrt[3]{x^2} +\sqrt[3]{x^2}}\\ &=&\frac{2\sqrt[3]{x}\times2\sqrt[3]{x^2}}{3\sqrt[3]{x^2}}=\frac{4\sqrt[3]{x}}{3} \end{eqnarray*}
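As a closing check (an added remark, not in the original post), all four examples agree with the general power rule for $f(x)=x^{p/q}$:

$f'(x)=\frac{p}{q}\,x^{\frac{p}{q}-1},$

so Example 1 ($p/q=\tfrac12$) gives $\tfrac12 x^{-1/2}=\frac{1}{2\sqrt x}$, Example 2 ($\tfrac13$) gives $\frac{1}{3\sqrt[3]{x^2}}$, Example 3 ($\tfrac23$) gives $\frac{2}{3\sqrt[3]{x}}$, and Example 4 ($\tfrac43$) gives $\frac{4\sqrt[3]{x}}{3}.$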
https://mosphys.ru/indico/event/6/contributions/318/
# Moscow International School of Physics 2022

24 July 2022 to 2 August 2022, House of International Conferences, Europe/Moscow timezone

## Production and two-photon decay of ηc at the energy of SPD NICA

30 Jul 2022, 20:00, 2h, House of International Conferences, Dubna, Russia. Board: 2. Poster (portrait A1 or landscape A0). Young Scientist Forum

### Speaker

Anton Anufriev (Samara National Research University)

### Description

$\eta_c$-meson production and two-photon decay at the NICA energies were studied. The square of the amplitude of the above process was found, and the signal-to-background ratio was constructed. Diphoton production (direct and via fragmentation) was chosen as the background. Special attention was paid to the influence of an experimental cut on the transverse momentum on the calculation results. The diphoton decay of the $\pi_0$ meson was also tried as the background, and the signal-to-background ratio was evaluated in that case as well.

### Primary authors

Anton Anufriev (Samara National Research University), Prof. Vladimir Saleev (Samara National Research University)

### Presentation Materials

ПОСТЕР.pdf, Слайд1.JPG
2022-08-11 13:51:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3269214928150177, "perplexity": 13394.396450910155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571472.69/warc/CC-MAIN-20220811133823-20220811163823-00791.warc.gz"}
https://www.hepdata.net/record/ins1468168
Measurement of the $t\bar{t}$ production cross-section using $e\mu$ events with b-tagged jets in pp collisions at $\sqrt{s}$ = 13 TeV with the ATLAS detector

The ATLAS collaboration. Phys.Lett.B 761 (2016) 136-157, 2016.

Abstract (data abstract): This paper describes a measurement of the inclusive top quark pair production cross-section ($\sigma_{t\bar{t}}$) with a data sample of 3.2 fb$^{-1}$ of proton--proton collisions at a centre-of-mass energy of $\sqrt{s}$ = 13 TeV, collected in 2015 by the ATLAS detector at the LHC. This measurement uses events with an opposite-charge electron--muon pair in the final state. Jets containing $b$-quarks are tagged using an algorithm based on track impact parameters and reconstructed secondary vertices. The numbers of events with exactly one and exactly two $b$-tagged jets are counted and used to determine simultaneously $\sigma_{t\bar{t}}$ and the efficiency to reconstruct and $b$-tag a jet from a top quark decay, thereby minimising the associated systematic uncertainties. The cross-section is measured to be: $\sigma_{t\bar{t}}$ = 818 $\pm$ 8 (stat) $\pm$ 27 (syst) $\pm$ 19 (lumi) $\pm$ 12 (beam) pb, where the four uncertainties arise from data statistics, experimental and theoretical systematic effects, the integrated luminosity and the LHC beam energy, giving a total relative uncertainty of 4.4%. The result is consistent with theoretical QCD calculations at next-to-next-to-leading order. A fiducial measurement corresponding to the experimental acceptance of the leptons is also presented.

#### Table 1

Data from Section 7 of the paper. 10.17182/hepdata.73120.v1/t1. Measured cross-section for $t\bar{t}$ events using $e\mu$ events with b-tagged jets in pp collisions at $\sqrt{s}$ = 13 TeV.

#### Table 2

Data from Section 7 of the paper. 10.17182/hepdata.73120.v1/t2. Measured fiducial cross-section for $t\bar{t}$ events producing an $e\mu$ pair, each lepton originating directly from t $\rightarrow$ W $\rightarrow$ l...
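The simultaneous extraction of $\sigma_{t\bar{t}}$ and the b-tagging efficiency $\epsilon_b$ from the one-tag and two-tag event counts can be sketched numerically. In the standard form of this counting technique the expected yields are $N_1 = L\,\sigma_{t\bar{t}}\,\epsilon_{e\mu}\,2\epsilon_b(1-C_b\epsilon_b)+N_1^{\mathrm{bkg}}$ and $N_2 = L\,\sigma_{t\bar{t}}\,\epsilon_{e\mu}\,C_b\epsilon_b^2+N_2^{\mathrm{bkg}}$; note that all numbers in the sketch below are hypothetical placeholders, not the values used in the paper:

```python
import sympy as sp

sigma, eb = sp.symbols('sigma epsilon_b', positive=True)

# Hypothetical inputs (NOT the paper's values): luminosity in /pb,
# e-mu selection efficiency, tagging correlation, backgrounds, counts.
L, e_emu, Cb = 3200.0, 0.008, 1.00
N1_bkg, N2_bkg = 1400.0, 200.0
N1_obs, N2_obs = 11000.0, 7500.0

eq1 = sp.Eq(L * sigma * e_emu * 2 * eb * (1 - Cb * eb) + N1_bkg, N1_obs)
eq2 = sp.Eq(L * sigma * e_emu * Cb * eb**2 + N2_bkg, N2_obs)

# Solving the two equations determines sigma and epsilon_b at the same time,
# which is why the b-tagging systematic largely cancels in this method.
print(sp.solve([eq1, eq2], [sigma, eb], dict=True))
```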
2020-11-25 05:40:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9349463582038879, "perplexity": 3103.2611780061593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141181179.12/warc/CC-MAIN-20201125041943-20201125071943-00708.warc.gz"}
https://tehkals.com/practice/
Practice Formula Section

$$f_k = f(x_k), \quad x_k = x^* + kh, \quad k = -\frac{N-1}{2}, \dots, \frac{N-1}{2}$$
$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$$
where $h$ is some step. Then we interpolate the points $(x_k, f_k)$ by the polynomial
$$P_{N-1}(x) = \sum_{j=0}^{N-1} a_j x^j$$
Its coefficients $\{a_j\}$ are found as a solution of a system of linear equations. This is
$$e = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n = \lim_{n\to\infty}\frac{n}{\sqrt[n]{n!}}$$
$$\frac{1+\frac{a}{b}}{1+\frac{1}{1+\frac{1}{a}}}$$

Introduction to Real Numbers

Set of Natural Numbers: $N=\{1,2,3,4,\dots\}$

Set of Whole Numbers: $W=\{0,1,2,3,4,\dots\}$

Set of Integers: $Z=\{0,\pm1,\pm2,\pm3,\pm4,\dots\}$ OR $Z=\{\dots,-4,-3,-2,-1,0,1,2,3,4,\dots\}$

Rational Numbers
The word Rational means "Ratio". A rational number is a number that can be expressed in the form $\frac{p}{q}$, where $p$ and $q$ are integers and $q \neq 0$. Rational numbers are denoted by $Q$.

Set of Rational Numbers: $Q=\left\{ \frac{p}{q} \mid p, q \in Z, q \neq 0 \right\}$

Irrational Numbers
The word Irrational means "Not Ratio" (not a ratio). Irrational numbers consist of all those numbers which are not rational. Irrational numbers are denoted by $Q'$.

Real Numbers
The set of rational and irrational numbers together is called the Real Numbers, denoted by $R$. Thus $Q \cup Q' = R$.
Note: All the numbers on the number line are real numbers.

Terminating Decimal Fraction: A decimal number that contains a finite number of digits after the decimal point.
Non-Terminating Decimal Fraction: A decimal number that has no end after the decimal point.
Non-Terminating Repeating Decimal Fraction: In a non-terminating repeating decimal fraction, some digits are repeated in the same order after the decimal point.
Non-Terminating Non-Repeating Decimal Fraction: In a non-terminating non-repeating decimal fraction, the digits are not repeated in the same order after the decimal point.

Decimal Representation of Rational and Irrational Numbers
• All terminating decimals are rational numbers.
• Non-terminating recurring (repeating) decimals are rational numbers.
• Non-terminating and non-recurring (non-repeating) decimals are irrational numbers.
Note:
• Repeating decimals are called recurring decimals.
• Non-repeating decimals are called non-recurring decimals.

Properties of Real Numbers
The set $R$ of real numbers is the union of two disjoint sets. Thus $Q \cup Q' = R$. Note: $Q \cap Q' = \emptyset$.

Real Number System

Closure Property w.r.t. Addition
The sum of two real numbers is also a real number. If $a, b \in R$ then $a+b \in R$.
Example: $7+9=16$, where $16$ is a real number.

Closure Property w.r.t. Multiplication
The product of two real numbers is also a real number. If $a, b \in R$ then $a \cdot b \in R$.
Example: $7 \times 9 = 63$, where $63$ is a real number.

Commutative Property w.r.t. Addition
If $a, b \in R$ then $a+b=b+a$.
Example: $7+9=9+7$, i.e. $16=16$.

Commutative Property w.r.t. Multiplication
If $a, b \in R$ then $a \cdot b = b \cdot a$.
Example: $7 \cdot 9 = 9 \cdot 7$, i.e. $63=63$.

Associative Property w.r.t. Addition
If $a, b, c \in R$ then $a+(b+c)=(a+b)+c$.
Example: $2+(3+5)=(2+3)+5$; $2+8=5+5$; $10=10$.

Associative Property w.r.t. Multiplication
If $a, b, c \in R$ then $a(bc)=(ab)c$.
Example: $2(3\times5)=(2\times3)5$; $2(15)=(6)5$; $30=30$.

Additive Identity
Zero (0) is called the additive identity because adding 0 to a number does not change that number. If we add 0 to a real number, the sum will be the real number itself. If $a \in R$, there exists $0 \in R$ such that $a+0=0+a=a$.
Examples:
• $3+0=0+3=3$
• $-5+0=-5$
• $9+0=9$
• $\frac{2}{3}+0=\frac{2}{3}$
• $9.5+0=9.5$

Multiplicative Identity
One (1) is called the multiplicative identity because multiplying a number by 1 does not change that number. If we multiply a real number by 1, the product will be the real number itself.
If $a \in R$, there exists $1 \in R$ such that $a \cdot 1 = 1 \cdot a = a$.
Examples:
• $3\times1=1\times3=3$
• $-5\times1=-5$
• $9\times1=9$
• $\frac{2}{3} \times 1=\frac{2}{3}$
• $9.5\times1=9.5$

Additive Inverse
When the sum of two numbers is zero (0), each number is the additive inverse of the other. If we add a real number to its opposite real number, the result will always be zero (0). If $a \in R$, there exists an element $a'$ such that $a+a'=a'+a=0$; then $a'$ is called the additive inverse of $a$. OR $a+(-a)=-a+a=0$, e.g. $10+(-10)=-10+10=0$.
Examples:
• $3+(-3)=0$
• $-5+5=5-5=0$
• $-20+20=0$
• $10-10=0$
• $-\frac{2}{3}+\frac{2}{3}=0$
• $\frac{2}{3}+\left( -\frac{2}{3} \right)=0$
• $\sqrt{2}+\left(- \sqrt{2} \right)=0$
• $9.5-9.5=0$

Multiplicative Inverse
When the product of two numbers is 1, each number is the multiplicative inverse of the other. If we multiply a real number by its multiplicative inverse, the product will be 1. If $a \in R$ (with $a \neq 0$), there exists an element $a^{-1}$ such that $a \cdot a^{-1}=a^{-1} \cdot a=1$; then $a^{-1}$ is called the multiplicative inverse of $a$. OR $a \cdot \frac{1}{a}= \frac{1}{a} \cdot a=1$, e.g. $10 \cdot \frac{1}{10}=\frac{1}{10} \cdot 10=1$.
Examples:
• $5 \cdot \frac{1}{5}=1$
• $-3 \times \frac{1}{-3}=1$
• $\frac{1}{3} \times 3 =1$
• $\frac{5}{3} \times \frac{3}{5} =1$
• $\left (-\frac{5}{3} \right) \left (-\frac{3}{5} \right) =1$
• $\sqrt{2} \left ( \frac{1}{\sqrt 2} \right) =1$
• $9.5 \left ( \frac{1}{9.5} \right) =1$

Distributive Property of Multiplication over Addition
$a(b+c)=ab+ac$ and $(a+b)c=ac+bc$.

Properties of Equality of Real Numbers

Reflexive Property
Every real number or value is equal to itself, i.e. $a=a$, which means that $a$ itself equals $a$.
Examples:
• $5=5$
• $\frac{1}{5}= \frac{1}{5}$
• $-3 =-3$
• $-3.8 =-3.8$
• $\sqrt{2} = \sqrt{2}$
• $5.9+\sqrt{2} = 5.9+\sqrt{2}$
• $x+y=x+y$

Symmetric Property
Interchanging the sides of an equation does not affect the result: if $a=b$ then $b=a$. In other words, the left side equals the right side of an equation, and the right side equals the left side.
Examples: $9+7=16$ then $16=9+7$; $x=16$ or $16=x$; $x+y=z$ or $z=x+y$; $x+2=z$ or $z=x+2$; $a-5=b$ or $b=a-5$; $5.9+\sqrt{2} =x$ or $x = 5.9+\sqrt{2}$.
Note: If $x=y$ then $x$ may be replaced by $y$, or $y$ by $x$, in any equation or expression. (This symmetry of equality should not be confused with commutativity, which fails for operations such as subtraction and division.)

Transitive Property
If $a$ equals $b$ under a rule and $b$ equals $c$ under the same rule, then $a$ equals $c$; this is known as the transitive property. E.g. $a=b$ and $b=c$ then $a=c$.
Examples: $x+y=z$ and $z=a+b$ then $x+y=a+b$; $x=5+y$ and $5+y=a+b$ then $x=a+b$.

Addition Property
If we add the same number or expression to both sides of an equation, the equation does not change; both sides remain equal. E.g. $a=b$ then $a+c=b+c$.
Example: $x=5$ then $x+2=5+2$. Or: $x-3=7$; add 3 to both sides: $x-3+3=7+3$, so $x=10$.

Subtraction Property
If we subtract the same number or expression from both sides of an equation, the equation does not change; both sides remain equal. E.g. $a=b$ then $a-c=b-c$.
Example: $x=5$ then $x-2=5-2$. Or: $x+3=7$; subtract 3 from both sides: $x+3-3=7-3$, so $x=4$.

Multiplication Property
If we multiply both sides of an equation by the same number or expression, the equation does not change; both sides remain equal. E.g. $a=b$ then $a \times c=b \times c$.
Example: $x=5$ then $x \times 2=5 \times 2$. Or: $\frac{x}{3}=7$; multiply both sides by 3: $\frac{x}{3} \times 3=7 \times 3$, so $x=21$.

Division Property
If we divide both sides of an equation by the same (nonzero) number or expression, the equation does not change; both sides remain equal. E.g. $a=b$ then $\frac{a}{c} = \frac{b}{c}$.
Example: $x=5$ then $\frac{x}{3} = \frac{5}{3}$. Or: $2x=24$; divide both sides by 2: $\frac{2x}{2}=\frac{24}{2}$, so $x=12$.
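As a small illustration of the decimal-representation rules and the inverse properties above, here is a sketch in Python (the language choice and the helper name `terminates` are mine):

```python
from fractions import Fraction

# A rational p/q in lowest terms has a terminating decimal expansion
# exactly when q has no prime factors other than 2 and 5.
def terminates(frac: Fraction) -> bool:
    q = frac.denominator
    for p in (2, 5):
        while q % p == 0:
            q //= p
    return q == 1

print(terminates(Fraction(3, 8)))   # True:  3/8 = 0.375
print(terminates(Fraction(1, 3)))   # False: 1/3 = 0.333... (recurring)

# Additive and multiplicative inverses, as in the examples above.
print(Fraction(2, 3) + Fraction(-2, 3))   # 0
print(Fraction(5, 3) * Fraction(3, 5))    # 1
```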
2022-09-30 22:31:48
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996362924575806, "perplexity": 2758.833387888789}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00284.warc.gz"}
https://www.hackerearth.com/practice/data-structures/trees/binary-and-nary-trees/practice-problems/algorithm/magical-tree-1-e7f8cabd/
Nodes in a subtree

## Binary Tree, Data Structures, Depth First Search, Hash Maps, Trees

You are given a rooted tree that contains $N$ nodes. Each node stores a lowercase alphabet. You are required to answer $Q$ queries of the type $u, c$, where $u$ is an integer and $c$ is a lowercase alphabet. The answer to each query is the number of nodes in the subtree of node $u$ that store the character $c$.

Input format

• First line: Two space-separated integers $N$ and $Q$ respectively
• Second line: A string $s$ of length $N$ (where the $i^{th}$ character of $s$ represents the character stored in node $i$)
• Next $N - 1$ lines: Two space-separated integers $u$ and $v$ denoting an edge between node $u$ and node $v$
• Next $Q$ lines: An integer $u$ and a space-separated character $c$

Output format

For each query, print the output in a new line.

Constraints

• $1 \leq N, Q \leq 10^5$
• $1 \leq u, v \leq N$
• $c$ is a lowercase alphabet
• $s_i$ is a lowercase alphabet for all $1 \leq i \leq N$
• $1$ is the root node

Note: It is guaranteed that the input generates a valid tree.

SAMPLE INPUT
3 1
aba
1 2
1 3
1 a

SAMPLE OUTPUT
2

Explanation

The tree given in the sample input is rooted at node 1, with nodes 2 and 3 as its children. The number of nodes in the subtree of node 1 having 'a' stored in them is 2 (nodes 1 and 3).

Time Limit: 1.0 sec(s) for each input file. Memory Limit: 256 MB. Source Limit: 1024 KB
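One standard way to meet these constraints (a sketch, not necessarily the official editorial approach) is an Euler tour plus binary search: in a DFS, the nodes of any subtree occupy a contiguous range of entry times, so each query reduces to two bisections over the sorted entry-time list of the queried character. A Python sketch:

```python
import sys
from bisect import bisect_left
from collections import defaultdict

def solve():
    data = sys.stdin.buffer.read().split()
    idx = 0
    n, q = int(data[idx]), int(data[idx + 1]); idx += 2
    s = data[idx].decode(); idx += 1
    adj = defaultdict(list)
    for _ in range(n - 1):
        u, v = int(data[idx]), int(data[idx + 1]); idx += 2
        adj[u].append(v)
        adj[v].append(u)

    tin = [0] * (n + 1)
    tout = [0] * (n + 1)
    timer = 0
    # Iterative DFS from the root (node 1) to avoid recursion limits.
    stack = [(1, 0, False)]
    while stack:
        node, parent, processed = stack.pop()
        if processed:
            tout[node] = timer
            continue
        tin[node] = timer
        timer += 1
        stack.append((node, parent, True))
        for nxt in adj[node]:
            if nxt != parent:
                stack.append((nxt, node, False))

    # For each character, the sorted entry times of the nodes storing it.
    times = defaultdict(list)
    for node in range(1, n + 1):
        times[s[node - 1]].append(tin[node])
    for c in times:
        times[c].sort()

    out = []
    for _ in range(q):
        u, c = int(data[idx]), data[idx + 1].decode(); idx += 2
        lst = times[c]
        # Subtree of u = nodes with entry time in [tin[u], tout[u]).
        out.append(str(bisect_left(lst, tout[u]) - bisect_left(lst, tin[u])))
    print('\n'.join(out))

solve()
```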
2021-01-19 09:38:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1777276247739792, "perplexity": 2224.541621606358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703518201.29/warc/CC-MAIN-20210119072933-20210119102933-00028.warc.gz"}
https://matholympiad.org.bd/forum/viewtopic.php?p=19048
## IMO 2017 P4

Discussion on International Mathematical Olympiad (IMO)

Ananya Promi
Posts: 36
Joined: Sun Jan 10, 2016 4:07 pm

### IMO 2017 P4

Let $R$ and $S$ be different points on a circle $\Omega$ such that $RS$ is not a diameter. Let $\ell$ be the tangent line to $\Omega$ at $R$. Point $T$ is such that $S$ is the midpoint of the line segment $RT$. Point $J$ is chosen on the shorter arc $RS$ of $\Omega$ so that the circumcircle $\Gamma$ of triangle $JST$ intersects $\ell$ at two distinct points. Let $A$ be the common point of $\Gamma$ and $\ell$ that is closer to $R$. Line $AJ$ meets $\Omega$ again at $K$. Prove that the line $KT$ is tangent to $\Gamma$.

Ananya Promi
Posts: 36
Joined: Sun Jan 10, 2016 4:07 pm

### Re: IMO 2017 P4

We get $TA$ parallel to $KR$ because $\angle{ATS}=\angle{SJK}=\angle{SRK}$.
We extend $KS$ to $P$, where $KS$ intersects $TA$ at $P$.
Now it's easy to prove that $TPRK$ is a rhombus.
So, $\angle{TPK}=\angle{PKR}$.
Again, $\angle{ARS}=\angle{SKR}$.
So, $\angle{TPK}=\angle{ARS}$.
So, $APSR$ is cyclic.
$\angle{PRT}=\angle{RTK}$
$\angle{STK}=\angle{SAT}$
So, $KT$ is tangent to the circle. We are done.

prottoydas
Posts: 8
Joined: Thu Feb 01, 2018 11:56 am

### Re: IMO 2017 P4

A very easy problem for an IMO.

Tahjib Hossain Khan
Posts: 3
Joined: Mon Mar 26, 2018 2:20 pm

### Re: IMO 2017 P4

$TPRK$ is a parallelogram.
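The claim can also be sanity-checked numerically. The sketch below (Python with NumPy; the tool choice and the specific angles picked for $S$ and $J$ are my own arbitrary valid configuration) builds the setup on the unit circle and verifies that the distance from the centre of $\Gamma$ to line $KT$ equals the radius of $\Gamma$:

```python
import numpy as np

def circumcircle(p, q, r):
    """Centre and radius of the circle through three points."""
    a = 2 * np.array([q - p, r - p])
    b = np.array([q @ q - p @ p, r @ r - p @ p])
    c = np.linalg.solve(a, b)
    return c, np.linalg.norm(c - p)

# Omega = unit circle, R = (1, 0); the tangent line l at R is x = 1.
R = np.array([1.0, 0.0])
S = np.array([np.cos(2.0), np.sin(2.0)])   # arbitrary S on Omega
T = 2 * S - R                              # S is the midpoint of RT
J = np.array([np.cos(1.0), np.sin(1.0)])   # J on the shorter arc RS

c, rad = circumcircle(J, S, T)             # Gamma
# A = intersection of Gamma with x = 1 that is closer to R
# (this configuration does give two intersection points).
dy = np.sqrt(rad**2 - (1 - c[0])**2)
A = min((np.array([1.0, c[1] - dy]), np.array([1.0, c[1] + dy])),
        key=lambda p: np.linalg.norm(p - R))

# Second intersection K of line AJ with Omega: the roots t of
# |A + t(J-A)|^2 = 1 multiply to (|A|^2 - 1)/|J-A|^2, and t = 1 gives J.
t = (A @ A - 1) / ((J - A) @ (J - A))
K = A + t * (J - A)

# Tangency test: distance from the centre of Gamma to line KT = radius.
d = T - K
dist = abs(d[0] * (c - K)[1] - d[1] * (c - K)[0]) / np.linalg.norm(d)
print(dist, rad)                 # agree to machine precision
assert np.isclose(dist, rad)
```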
2020-11-25 14:34:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.910728394985199, "perplexity": 9118.559274821222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141182794.28/warc/CC-MAIN-20201125125427-20201125155427-00204.warc.gz"}
https://engineering.stackexchange.com/tags/unit/hot
# Tag Info

11 For most of China's history, a system of decimal time was used alongside duodecimal time. As part of the Metric System, the French tried to introduce decimal time, where 12 duodecimal hours would be replaced by 10 decimal hours. In 1998, the Swiss watch company Swatch introduced a system of decimal time called Internet Time, where 1 day is divided into 1000 ...

11 In European culture, hours have been used as the basic time interval since time immemorial. You will find a substantial amount of useful information on the relevant Wikipedia page. However, until the universal adoption of mechanical timekeeping, hours were "unequal", defined as 1/12th part of the day or night with the length varying throughout the ...

7 Judaism has a long history of tracking both days and times. Most of this is centered around Jewish holidays, particularly the need for Passover to take place in the spring. The basic date measurements are the day, the lunar month (basically alternates between 29 and 30 days) and the year. The year is measured as a combination lunar/solar year, approximately 354 ...

7 I'll start with the one that you should DEFINITELY NOT use: $mm^3$ or $mm3$. Probably the most widely used is the first. It's compact and economical, and these are two of the most deciding factors in engineering thought and practice. To take the point one step further, if it's in an engineering drawing you don't even need to put units in. Unless, its ...

7 You can use any consistent set of units. That includes SI. But if you try to use a pressure in ATM, a sphere radius in feet, a wall thickness in mils, and want the stress in tons per square inch, you will probably get the wrong answer that you deserve!

5 There are many possible culprits. One of them is the following: check the lower right corner of your SolidWorks window; if you click on it, you should see the following menu. You might have pressed it accidentally.

5 N/m or Nm$^{-1}$ is the correct unit of a spring constant, and it's already in SI units. Nm or N*m, on the other hand, is the unit of a torque (or moment) and is also in SI units. So no, Nm and Nm$^{-1}$ are not the same at all; they measure very different things. Similarly, Nsm$^{-1}$, Nsm and Nsm$^2$ are all different units of different dimensions that measure ...

5 Weight is a force and is expressed in newtons (N). Mass is expressed in kilograms (kg). However, in informal (non-scientific) language, people often express weight in kg, although this is not correct strictly speaking. The relation between the two is $F=mg$, with $F$ the weight (N), $m$ the mass (kg) and $g$ Earth's gravitational acceleration. See also here.

4 In an earlier comment, I suggested that an answer to "what is the correct way..." (as opposed to "What is the generally accepted way...") might be found in the ISO 80000-3 standard. I had a look, and there's nothing relevant in ISO 80000-3. Nor can I find any other ISO standard that explicitly states a correct way of doing this. ...

4 A not-forgotten alternative variety of time is Unix Time, https://en.m.wikipedia.org/wiki/Unix_time which ignores leap seconds, and as such is slowly drifting away from UTC.

4 Strength can mean different things in different contexts, and technical definitions can vary significantly from what is commonly understood by the word 'strength'. For structures where actual forces are more relevant, you might talk about rated loads, safe working loads or design loads in conjunction with factors of safety, but this usually needs to be ...
4 First Part. Simplifying,
$$1\ \text{lbf} = 1\ \text{slug}\cdot\text{ft}/\text{s}^2$$
then
$$1\ \text{lbf}\,\text{s}^2/\text{in}^4 = 1\ (\text{slug}\cdot\text{ft}/\text{s}^2)\cdot(\text{s}^2/\text{in}^4) = 12\ \text{slug}/\text{in}^3$$
$$12\ \text{slug}/\text{in}^3 \times (14.6\ \text{kg}/1\ \text{slug}) \times (\text{in}/25.4\ \text{mm})^3 = 0.0107\ \text{kg}/\text{mm}^3$$
Note that each slug equals 14.6 kg, so (14.6 kg / 1 slug) equals 1.0, and any expression multiplied by 1.0 will ...

3 There are seven fundamental SI units (seconds (time), metres (length), kilograms (mass), ampere (current), kelvin (temperature), mole (amount of substance) and candela (luminous intensity)) - https://en.wikipedia.org/wiki/International_System_of_Units. There are a bunch of derived units and acceptable prefixes (for orders of magnitude) (see that Wikipedia ...

3 I've done some searching; according to this converter, kps or KPS stands for kilopascal, but don't use that abbreviation. The actual abbreviation for pascal is Pa, and for kilo a small k; combined, kPa. Thus: $$1\ \text{psf} = 47.88\ \text{Pa}$$ With that assumption, are your values in a reasonable range? I assume that kips in the ...

2 For your question, 1 m² = 10^12 darcy. For the next question, I think your question is about the units involved in Darcy's law, i.e., the Darcy units and SI units. In general, we define the permeability of a porous medium as 1 darcy if the porous medium can transmit 1 cm³/s of water with a viscosity of 1 cP (1 mPa·s) under a pressure gradient of 1 atm/cm across an ...

2 The term "English Units" might be confusing, especially with the English and American... I believe the term is "Imperial Units" for those in the UK, but I am not sure how other countries refer to them. Gallons are defined as US or Imperial and have different volumes. But a gallon has had different volumes through history anyway... Inch-pounds is a "standard" name,...

2 50 μm → 0.002" or 2 thou or 2 mils? I've seen both 0.002" and 2 mils. On a drawing it would always be 0.002". In a specification document it could be either. I've never seen 2 thou written in a formal specification (but I have heard people say it in the shop). But that might just be my experience. There could be variation from industry to industry. Is ...

2 If you're interested in using Python, check out the pint, astropy.units or unyt packages. I have personally used pint + jupyter for day-to-day engineering in the past, and have looked at the others, and they all should be suitable.

2 1x2x3 mm is usual. You might specify individual units if they used different multiples. For example, if you had a large sheet of thin material you could describe it as 1 m x 2 m x 3 mm. However, in metric engineering drawings it is common to keep everything in mm and describe this as 1000 x 2000 x 3 (with a note in the corner of the drawing stating all ...

2 Depends on the culture and civilization. The Romans had their civil day, which had numerous subdivisions; see the Wikipedia article below. https://en.wikipedia.org/wiki/Roman_timekeeping#Civil_day Monasteries had their own "canonical hours": https://en.wikipedia.org/wiki/Canonical_hours I'd imagine most people would just look at the sky to know what ...

1 If you have defined the same units for v and c then the equation should work just fine. In all likelihood, h is the probability density, so its units should be $\frac{s}{m}$. In order to see where the problem lies, I would suggest breaking up the terms and checking their units independently within the software. I.e., I would find the units in EES for: ...
1 In ancient India, multiple Hindu texts have measured time (kāla, which is eternal) ranging from microseconds to trillions of years. Some of these texts date back as far as the 2nd millennium BCE. For example, the Rig Veda, the oldest known Vedic Sanskrit text, gives the base/smallest unit of measurement as Paramāṇu (परमाणु), which is ≈ 25 µs. Longer measurements of time ...

1 History of Calendars has a lot about the longer units. Probably the most extreme are the Mesoamerican people, such as the Mayans and Aztecs. The Mayans and Aztecs had two cycles, one being 260 days and the other being 365 (or 360?) days. That gives larger cycles of about 52 years. Then they used multiples of that to record even longer times. And they used a ...

1 Cubic millimeters (mm³) would be used when describing volume or holding capacity. In your situation your third option is correct, but use spaces: 1 mm x 2 mm x 3 mm, or 1 mm by 2 mm by 3 mm. Each number needs to have the unit follow it, because 10 mm x 2 mm x 4 mm could also be written as 1 cm x 2 mm x 4 mm.

1 The conversions can be found at this Wikipedia page:
$$1\ \frac{m^3}{s} = 1000\ [\text{NLPM}]\ \frac{T_{gas}}{293.15}\cdot\frac{14.696\ [\text{psi}]}{P_{gas}} = 1000\ [\text{SLPM}]\ \frac{T_{gas}}{273.15}\cdot\frac{14.504\ [\text{psi}]}{P_{gas}\ [\text{psi}]}$$
where $T_{gas}$ is the temperature at which the gas is flowing and $P_{gas}$ is the pressure at which the gas is flowing.

1 Another great free tool that works very well, and is a clone of Mathcad (so you won't have to relearn something), is SMath Studio. As I've mentioned, it is free (although I think it's closed source). If you were using Mathcad before and you liked it, I guarantee that you will love this. For me, although I work day to day with Python, this is the go-to ...

1 I teach the American Engineering system. https://www.learnthermo.com/T1-tutorial/ch01/lesson-B/pg04.php I see references also to it being called the English Engineering system. https://en.wikipedia.org/wiki/English_Engineering_units At one point, I remember being taught the inch-pound and ft-slug systems. I imagine the former could still be common in ...

1 As this is an engineering stack, we use the prescribed units: mass is in kg and weight is in newtons. This is a common misconception by the masses (great unwashed...) and it is also covered in this question: Force Required to Lift a WEIGHT of 1Kg. It is also obvious in various phrases such as "I am going to boil the kettle", which is assumed to mean boil the ...

1 To convert darcy units to SI units: 1 darcy is equivalent to 9.869233×10⁻¹³ m² or 0.9869233 (µm)². This conversion is usually approximated as 1 (µm)².[3] Note that this is the reciprocal of 1.013250, the conversion factor from atmospheres to bars. Specifically in the hydrology domain, permeability of soil or rock may also be defined as the flux of water ...
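Several of these answers do unit-conversion arithmetic by hand; the pint package mentioned above can check such conversions mechanically. A sketch, assuming pint's default unit registry (which defines lbf and slug in current releases; worth verifying against the installed version):

```python
import pint

ureg = pint.UnitRegistry()

# The density-unit conversion worked out above: 1 lbf·s²/in⁴ in kg/mm³.
rho = 1 * ureg.lbf * ureg.second**2 / ureg.inch**4
print(rho.to(ureg.kg / ureg.mm**3))    # ~0.0107 kg/mm³

# The psf-to-pascal figure quoted above: 1 lbf/ft² ≈ 47.88 Pa.
p = 1 * ureg.lbf / ureg.foot**2
print(p.to(ureg.Pa))

# Dimensional honesty: a spring constant in N/m times a length is a force.
k = 200 * ureg.newton / ureg.meter
print((k * 0.05 * ureg.meter).to(ureg.newton))   # 10 N
```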
2021-03-05 20:21:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6583816409111023, "perplexity": 1276.6354088039345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178373241.51/warc/CC-MAIN-20210305183324-20210305213324-00572.warc.gz"}
https://www.physicsforums.com/threads/find-the-center-of-mass-of-the-solid.585626/
Find the center of mass of the solid

1. Mar 10, 2012
richies
1. The problem statement, all variables and given/known data
Find the center of mass of the solid figure similar to a cone pointing upward with slope = 1. Note: the density varies with z^2 and the edge has a slope of 1. From symmetry we see that both Xc and Yc are equal to zero. Find the center of mass in the z direction as a function of h by doing the appropriate integral.

2. Relevant equations
ρ(r⃗) = z² ẑ; slope = 1

3. The attempt at a solution
I'm thinking about using cylindrical coordinates:
• radius = sqrt(1 − h²)
• Z = (1/V) ∫ z dV (over V)
• V = (1/3)πr²h
• dV = r⊥ dr⊥ dθ dz
• ∫ z dV (over V) = ∫ (∫ (∫ z r⊥ dr⊥) dθ) dz
Now I'm stuck; I don't know if I'm doing it right or wrong. Any help or ideas?

Attached Files: geo_cone2.gif
Last edited: Mar 10, 2012

2. Mar 10, 2012
Staff: Mentor
If the sides are straight and with slope 1, wouldn't the radius at height z be (h − z)? Since the density varies with height, you'll need to find the mass of the object via an integration; you can't just use the volume of the object as a stand-in for mass.

3. Mar 10, 2012
richies
"Wouldn't the radius at height z be (h − z)?" Is that because the radius at the base depends on the change in height, and it is equal to h − z? If so, would the integral R = (1/M) ∫ ρ(r) r dV solve this problem?

4. Mar 10, 2012
Staff: Mentor
The radius at the base is given by (h − z) when z = 0. That is, the radius at the base is h. As z increases the radius grows smaller. When z = h you've reached the apex of the object and the radius is zero there. To find the center of mass you want the weighted sum of mass elements, dm, as you go up the z-axis, divided by the overall mass of the object. You need to determine an expression for an appropriate dm.

5. Mar 11, 2012
richies
So my integration will be like this; correct me if I'm wrong:
r⊥ = h − z, r = sqrt(z² + (h − z)²)
M = ∫ from 0 to h of π(h − z)² z² dz
Components: x = 0, y = 0,
z = (1/M) ∫(from 0 to h) ∫(from 0 to 2π) ∫(from 0 to h) sqrt(z² + (h − z)²) dr dθ dz.

6. Mar 11, 2012
Staff: Mentor
What does this represent? Okay, so the overall mass is the sum of the dm's, which are individually disks of radius (h − z) and thickness dz with density z². That looks okay (although technically you're told that density varies as z², so you should write ρ = k·z²). You really only need a single integral over the dm's if you take each dm as a disk; you know that the center of mass of a disk is at its center, and you know the mass of each disk via its radius and density.

7. Mar 11, 2012
richies
Thank you so much, now I get it :D
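For reference, the single-integral approach the mentor describes can be carried out symbolically. A sketch with SymPy (my choice of tool; $k$ is the proportionality constant in $\rho = kz^2$):

```python
import sympy as sp

z, h, k = sp.symbols('z h k', positive=True)

# Disk at height z: radius (h - z), thickness dz, density k*z**2.
dm = k * z**2 * sp.pi * (h - z)**2        # mass per unit height

M = sp.integrate(dm, (z, 0, h))           # total mass
z_cm = sp.integrate(z * dm, (z, 0, h)) / M

print(sp.simplify(M))     # pi*h**5*k/30
print(sp.simplify(z_cm))  # h/2 -> the center of mass is at half the height
```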
2017-08-19 19:39:34
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8734006881713867, "perplexity": 1084.0694745847875}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105712.28/warc/CC-MAIN-20170819182059-20170819202059-00381.warc.gz"}
https://www.theguardian.com/commentisfree/andrewbrown/2009/jun/26/religion-science
# Against nerd stereotyping

Why would anyone think that scientists had no sense of humour?

One of the journalistic clichés I most dislike is the one that says "Look what those silly scientists are doing!" So nothing that follows here, not even the story that reminds me of a famous newsreader joke, should be taken as meaning that this isn't valuable research. But neither does it suggest a profession of entirely humourless nerds. Enjoy.
2017-05-26 11:32:59
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8053268194198608, "perplexity": 4271.538779489788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608659.43/warc/CC-MAIN-20170526105726-20170526125726-00416.warc.gz"}
https://www.coursehero.com/file/5803382/solutions04/
Problem Set 4: Solutions
Physics 330, M. Seifert
Due Date: November 9, 2007

1. (MW 3-10) Evaluate
$$I = \int_0^\infty \frac{dx}{1+x^4}.$$
Viewed as a complex function, the integrand has four simple poles, at the roots of the polynomial $z^4 + 1 = 0$; we denote these values by $z_n = e^{(2n-1)i\pi/4}$, where $n$ runs from 1 to 4. We choose the contour shown in Figure 1:
$$\tilde{I} = \oint_C \frac{dz}{1+z^4}$$
This can be decomposed into three parts:
• A line running from $0$ to $R$, along which $z = x$ ($x \in [0, R]$);
• A quarter-circle running from $R$ to $iR$, along which $z = Re^{i\theta}$ ($\theta \in [0, \frac{\pi}{2}]$); and
• A line running from $iR$ to $0$, along which $z = ix$ ($x \in [R, 0]$).
Splitting this up into three parts, then, we have
$$\tilde{I} = \int_0^R \frac{dx}{1+x^4} + \int_0^{\pi/2} \frac{iRe^{i\theta}\,d\theta}{1+R^4 e^{4i\theta}} + \int_R^0 \frac{i\,dx}{1+(ix)^4} = (1-i)\int_0^R \frac{dx}{1+x^4} + \int_0^{\pi/2} \frac{iRe^{i\theta}\,d\theta}{1+R^4 e^{4i\theta}}$$
In the limit where $R \to \infty$, the second integral will scale as $R/R^4$, and so can be neglected:
$$\tilde{I} = (1-i)\int_0^\infty \frac{dx}{1+x^4} = (1-i)I$$
Now we simply apply the residue theorem. We have one simple pole enclosed in the contour, at $z = e^{i\pi/4}$; so the residue is
$$\mathrm{Res}\left[\frac{1}{1+z^4}\right]_{z=e^{i\pi/4}} = \left[\frac{z-e^{i\pi/4}}{1+z^4}\right]_{z=e^{i\pi/4}} = \frac{1}{(e^{i\pi/4}-e^{-i\pi/4})(e^{i\pi/4}-e^{3i\pi/4})(e^{i\pi/4}-e^{-3i\pi/4})}$$
$$= \frac{1}{(2i)^3 \left(\sin\tfrac{\pi}{4}\right)\left(-e^{i\pi/2}\sin\tfrac{\pi}{4}\right)\left(e^{-i\pi/4}\sin\tfrac{\pi}{2}\right)} = -\frac{1+i}{4\sqrt{2}}$$
Thus,
$$\tilde{I} = (1-i)I = 2\pi i\left(-\frac{1+i}{4\sqrt{2}}\right) \quad\Longrightarrow\quad I = \frac{2\pi}{4\sqrt{2}}$$
[Figure 1: Integration contour for Problem 1, a quarter-circle in the first quadrant enclosing the pole $z_1 = e^{i\pi/4}$.]

2. (MW 3-13) Evaluate
$$I = \int \frac{d^3x}{(a^2+r^2)^3}.$$
Assume that $a > 0$; the $a < 0$ case can be obtained by substituting $a \to -a$ in the following derivation. In spherical polar coordinates, this is simply
$$I = \int \frac{r^2\,dr\,d\theta\,d\varphi}{(a^2+r^2)^3} = 4\pi\int_0^\infty \frac{r^2\,dr}{(a^2+r^2)^3} = 2\pi\int_{-\infty}^\infty \frac{r^2\,dr}{(a^2+r^2)^3}$$
where we've used the evenness of the integrand in the last step. We can now use contour integration to evaluate this integral. Viewed as an analytic function, the integrand has poles of order three at $z = \pm ia$. We choose the contour indicated in Figure 2. This consists of two portions, one due to the integration along the real axis and the other due to the upper half-circle:
$$\oint_C \frac{z^2\,dz}{(a^2+z^2)^3} = \int_{-R}^R \frac{r^2\,dr}{(a^2+r^2)^3} + \int_0^\pi \frac{R^2 e^{2i\theta}\,iRe^{i\theta}\,d\theta}{(a^2+R^2 e^{2i\theta})^3}$$
The integral along the upper half-circle will go as $R^3/R^6$ for $R \gg 1$, and thus in the limit $R \to \infty$ we have
$$\oint_C \frac{z^2\,dz}{(a^2+z^2)^3} = \int_{-\infty}^\infty \frac{r^2\,dr}{(a^2+r^2)^3} = \frac{I}{2\pi}$$
All that remains is to calculate the residues to obtain the contour integral. We have one pole of order three inside the contour, at $z = ia$; the residue there is
$$\mathrm{Res}\left[\frac{z^2}{(a^2+z^2)^3}\right]_{z=ia} = \frac{1}{2}\left[\frac{d^2}{dz^2}\frac{z^2}{(z+ia)^3}\right]_{z=ia} = \frac{1}{2}\left[\frac{2}{(z+ia)^3} - \frac{12z}{(z+ia)^4} + \frac{12z^2}{(z+ia)^5}\right]_{z=ia} = -\frac{i}{16a^3}$$
Thus,
$$\oint_C \frac{z^2\,dz}{(a^2+z^2)^3} = 2\pi i\left(-\frac{i}{16a^3}\right) \quad\Longrightarrow\quad I = \frac{\pi^2}{4a^3}$$
[Figure 2: Integration contour for Problem 2, a half-circle in the upper half-plane enclosing the pole at $z = ia$.]

3. (MW 3-15) Evaluate
$$I = \int_0^\infty \frac{x\,dx}{1+x^5}.$$
As a complex function, the integrand has five simple poles; we will denote them as $z_n = e^{(2n-1)i\pi/5}$, where $n$ runs from 1 to 5. The symmetry of the denominator of the integrand leads us to choose the contour shown in Figure 3.
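The closed forms above are easy to confirm numerically; a quick check with SciPy (my choice of tool; for Problem 3 the excerpt is cut off before its answer, so the comparison value comes from the standard formula $\int_0^\infty x^{m-1}/(1+x^n)\,dx = \pi/(n\sin(m\pi/n))$, not from the excerpt):

```python
import numpy as np
from scipy.integrate import quad

# Problem 1: I = 2*pi/(4*sqrt(2)) = pi/(2*sqrt(2)).
val, _ = quad(lambda x: 1 / (1 + x**4), 0, np.inf)
print(val, np.pi / (2 * np.sqrt(2)))

# Problem 2: I = pi**2/(4*a**3), checked here for a = 1.7.
a = 1.7
val, _ = quad(lambda r: 4 * np.pi * r**2 / (a**2 + r**2)**3, 0, np.inf)
print(val, np.pi**2 / (4 * a**3))

# Problem 3: the wedge contour should reproduce pi/(5*sin(2*pi/5)).
val, _ = quad(lambda x: x / (1 + x**5), 0, np.inf)
print(val, np.pi / (5 * np.sin(2 * np.pi / 5)))
```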
2020-10-30 16:29:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8488656878471375, "perplexity": 739.1790505039619}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107911027.72/warc/CC-MAIN-20201030153002-20201030183002-00556.warc.gz"}
https://www.physicsforums.com/threads/buoyant-force-help.118977/
# Buoyant force help

1. Apr 27, 2006

### cscott

70% of a mass is supported by a slab of ice, and the ice sinks down so that only half of what was previously exposed is now exposed. What is the mass, assuming that the ice has a volume of 10 m^3 and the mass has a specific gravity of 1.0? Why can't I use the buoyant force of the ice before and after the extra weight is added and subtract to get the weight of the object itself? I get 539.5 kg while the textbook says 790 kg. Only hints please!

2. Apr 27, 2006

### Staff: Mentor

Sounds good to me. Show what you did exactly.

3. Apr 27, 2006

### cscott

Alright, well since $F_B = w$ when an object is floating, the buoyant force is $F_B = \rho_{ice} Vg = (0.917 \times 10^3)(10)(9.8) = 9.0 \times 10^4\,N$.
$$\frac{0.917 \times 10^3}{1.00 \times 10^3} + \frac{1}{2}(1 - 0.917) = 0.9585$$
for the fraction of the ice submerged once the unknown mass is put on. With this, the buoyant force is $F_B = (1.00 \times 10^3)(0.9585 \cdot 10)(9.8) = 9.4 \times 10^4\,N$.
$$W_{object} = 9.4 \times 10^4 - 9.0 \times 10^4 = 4.0 \times 10^3\,N$$
$$\frac{4.0 \times 10^3 \cdot 1.3}{9.8} = 530\ kg$$
I rounded the numbers this time.

4. Apr 27, 2006

### Staff: Mentor

Your method looks OK to me (assuming we are interpreting the problem correctly--why do they specify the specific gravity of the mass?), but I would divide by 0.7 instead of multiplying by 1.3.

5. Apr 27, 2006

### cscott

The fact that they gave the specific gravity had me starting to work with volume, but I could never get any sensible answers. If it makes any difference, this is how the question is worded in the book:

6. Apr 27, 2006

### Staff: Mentor

Since the specific gravity is one, it makes no difference--the buoyant force on the bear equals the weight of her submerged portion. (Were it something else, it would matter.)

7. Apr 27, 2006

### cscott

Ah, ok. I guess the textbook answer is just wrong...?
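For the record, here is cscott's calculation rerun with Doc Al's correction (dividing the extra buoyant force by 0.7 rather than multiplying by 1.3). This is only the thread's own arithmetic, not a claim about which of the two quoted answers is right:

```python
G = 9.8            # m/s^2
RHO_W = 1.00e3     # water density, kg/m^3
RHO_ICE = 0.917e3  # ice density, kg/m^3
V_ICE = 10.0       # m^3

# Submerged fraction of floating ice, and after half the freeboard sinks.
f0 = RHO_ICE / RHO_W                 # 0.917
f1 = f0 + 0.5 * (1 - f0)             # 0.9585

# Extra buoyant force once the mass is added; it carries 70% of the weight.
dF = RHO_W * V_ICE * G * (f1 - f0)   # ~4.07e3 N
m = dF / (0.7 * G)
print(round(m))                      # ~593 kg, vs. 530 kg and 790 kg above
```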
2016-10-23 20:54:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48367995023727417, "perplexity": 1455.255452443124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719416.57/warc/CC-MAIN-20161020183839-00300-ip-10-171-6-4.ec2.internal.warc.gz"}
https://aibluedot.com/topics/work.html
# The future of Work

Original: 02/09/20 Revised: no

Perhaps the two most pressing concerns of Type 1 at this moment are that AI will continue to bring significant job losses and contribute to increased income inequality. We'll look at these two issues in this article, because it is AI, not globalization or free trade or immigration, which has been the main cause of job losses. As we have seen in 2016, the anguish brought about by these job losses has been the main catalyst for a marked change in the political climate in the U.S.. Since the issue of job losses due to AI will certainly continue to heat up, it is worth placing it in the larger context of present-day politics.

Two political movements are vying for our attention currently, and so they will affect our view of AI: populism and progressivism. Although not always the case, populism is currently on the right and progressivism on the left. Both movements have posited that "the system is rigged" by the establishment (which historically always seems to be the case!), although who inside the establishment is doing the rigging differs between them: for populists it is the corrupt political class who is doing the rigging, for the progressives it is the economic power class.

We focus on populism, for two reasons. First, because there is currently a wave of populism affecting many countries, not just the U.S.. Secondly, because it places the blame for job losses on illegal immigration and globalization/free trade, which as we will see below are false causes. It is not clear to most people right now, but job losses due to AI are far surpassing the losses due to either immigration or to globalization/free trade; some estimates put the proportion of job losses due to AI at 80% of all losses. AI may eventually compensate and add more jobs, but the nature of those jobs is unclear at this time. Because California is at the epicenter of all these (real or perceived) causes, it offers a good case study.

For those of us who live in the San Francisco Bay Area, the one thing we do not wish to happen, after we dreamed of a future brighter than ever because of technology, is for technology to become a sclerotic part of the establishment, become part of the "rigging", and draw the ire of the people who will lose their jobs because of it. Here come California's many dilemmas, within the context of the rise of worker dissatisfaction and the rise of populism:

Healthcare will likely be the one industry in the U.S. to be most affected by job losses due to AI. It is no secret that healthcare is a bit of a national embarrassment, the inefficiency of the system being evident to all those who come in contact with it, whether medical professionals or patients. In 2017 the costs were a staggering $10K per capita; you can see in the diagram below that the U.S. healthcare system is a bad outlier relative to all other countries; an outlier is a point that strays far from the main line, and a bad outlier is an outlier above the line. Switzerland is the dot below the U.S., at $8K and a higher GDP per capita; the far right is tiny Luxembourg, a country with a very high GDP per capita which still managed to have a per capita healthcare cost of just above $6K; Luxembourg is a good outlier. The lowest point on the chart, in the far left corner, is for Mexico.

Combine this inefficiency of the U.S. healthcare system with the strongest medical high-tech R&D in the world, and you have the perfect incentives for introducing AI at scale in the U.S.
healthcare, which in turn will lead to significant job displacement. Since the healthcare industry employs 11% of all private-sector workers, the loss of employment in this industry will have substantial consequences for the rest of the economy.

How should people who would like to have careers in healthcare prepare for this outcome? Let's look at one of the most obvious examples. Because AI excels at recognizing patterns in data, it is able to read and diagnose radiology exams much better than human readers. Will we still have radiologists? Of course, but their job will be much different. Humans still need to understand how to program those AI systems and train them in order to recognize unusual patterns formed by disease. Recall from the background article Main Concepts that neural network training is more of a craft than a science. Data scientists develop a feel for what kind of layers they should use for different applications: convolutional layers for image recognition, LSTM layers for speech recognition, etc. The same kind of craftsmanship will be required of radiologists, who will continue to develop the medical skills for understanding patterns of disease, but apply those skills to train neural networks, instead of reading those radiology prints themselves. It is quite possible that medical training will require 12+ years of study (including years of training in data science) and produce a very small number of star practitioners, instead of the current level of medical school graduates. Even for those determined and lucky enough to practice, would that graduation mean hundreds of thousands of dollars in student loans with just a few years of practice to pay them back? These are haunting questions:

As we saw in the second video above, which was posted in September 2016, populism has been riding on the angst of economic insecurity, but without proposing bold policies to address that insecurity. Progressivism, as exemplified by Bernie Sanders' campaign, had also been riding that angst of economic insecurity, and it did propose some very specific policies to address it. These dynamics from 2016 have remained largely unchanged, and the economic insecurity is still continuing to rise. Since AI has been the main cause of this rising economic insecurity, we should perhaps try to diagnose better what happened in 2016, and understand why it is that neither populism nor progressivism is focusing on the real cause, even today.

The political center lost ground in 2016, to both populism and progressivism. The one candidate who exemplified competence, experience, and dedication had been successfully boxed in and labeled corrupt, without any evidence. Despite not coming from privilege and despite a lifetime of hard work for children's, family, and women's causes, Hillary Clinton had been successfully tied to Wall Street and the establishment, through concerted efforts from both sides. Not easily inclined to entertain or charm her audience, she had been pegged to political correctness. In fact, attacking her had become the new PC and a bit of a national sport; the chants of "Lock her up" will stand in infamy in our history. We have been teaching our children that serious work and tears in the back room are less important than the giggle and the entertainment of the TV game in the front room. In the presence of AI systems, listening in and many times producing data, the need to monitor truth in the data given to them and the data coming from them is essential.
As we saw in the case of Facebook in the background article How We Form Political Opinions, AI-based messaging is targeted at our limbic system. In 2020 it will be coming at us from 3 directions again: our politicians, our social media and our foreign adversaries. But this time the stakes will be higher. So, let's review one more time:

The video is even more alarming because these kinds of sentiments are still present, and they are coming at the very wrong time for the U.S.; the arguments about job losses due to globalization or free trade or immigration, as they were presented before the 2016 elections, are still clogging the airwaves. The difficult answers to the question of work in the age of AI must include a deeper analysis of human resilience, and different approaches to education, especially in those parts of the country most affected by job losses; we will elaborate on this later, but it is already clear that simplistic solutions (i.e., just have everyone learn how to write computer code!) will have to be revised, and more robust legislative approaches will be needed. Education in the U.S., especially in the pre-college years, needs massive attention and strenuous thinking, because very little of what worked in the past will work in the future. The pain is real and it will get worse:

False assessments will increase divisiveness in our society, and they are coming from all directions. Trump supporters are not racist (only a tiny minority are), and many of them are tired of being told that they are. After 9/11 and the ISIS atrocities, they are justifiably concerned about Islamic terrorism. Trump and Sanders supporters are tired of the explanations given to them as to why their jobs went overseas (which, as we already know, was not the case: the jobs went to the AI productivity gods, not overseas). Continuing to mis-diagnose the critical issues people in large swaths of the country are facing nowadays is a recipe for a potentially disastrous gridlock to come. The emptying of the political center, and the scattering of both Republicans and Democrats to the edges, does not bode well for us in the coming age of AI, when competition with China will heat up and jobs will vanish. The pain may be alleviated by learning how to program a computer, but that will not be the panacea for all problems. Are many people going to lose their economic value altogether, and face societal irrelevance? What sociological and educational mechanisms should we set in place in order to avoid this irrelevance and allow dignity to continue? Here is a start:

The idea that AI will force us to become better humans is a fundamental one. Of course it will be essential that STEM (Science, Technology, Engineering, and Math) subjects be given special attention and careful evaluation, especially in early child development and proceeding all the way to high school. Computer Science will for sure have to make its way into earlier curricula. But these disciplines will not be sufficient to prepare us for the massive disruption coming ahead, the characteristics of which we do not know, and for which history does not offer us helpful clues. Humanities cannot be neglected, and we may have to redesign school curricula so that they emphasize the fortification of our children's characters, with real-life stories of courage and the overcoming of difficulties.
One of the most intriguing aspects of our current predicament is that at this moment AI seems to have exactly the opposite effect:

There are deeper effects of social media on our well-being. The constant and carefully curated information coming from friends, in which they appear to live fantastically wonderful lives, makes many people feel inadequate. We had to invent a new term for this, FOMO (Fear Of Missing Out). Social media and its effects will be featured prominently in our articles, for many reasons, all related to AI; AI is the main technology behind the increased effectiveness of social media. Before we move on, let's look at the need to exercise critical thinking and develop a more rational evaluation of social media with the ideas about human resilience presented above. The call to quit social media altogether is a bit extreme, but the main points in the presentation are very useful to keep in mind:

## AI And the Rising Income Inequality

Among the many problems that need our attention in the U.S. (which is the modern liberal democracy we are focusing on), one stands out as very threatening and uniquely American: the extraordinary disparity between the incomes of the haves versus the have-nots, a problem which will be greatly exacerbated by the rise of AI. It may be obvious to you already why AI will heighten the income disparity crisis, and we'll come back to it in detail soon, but for now, let's listen to some work coming from ... where else but Berkeley. (The film "Inequality for All" can still be watched on Netflix or Amazon Prime.)

To re-emphasize the idea about a degree rather than fundamentals, we switch from Berkeley to Breitbart News and Steve Bannon, quite a switch. The current populist fire springs up from a loss of economic security by too many people. This fire was not always directed at the Democratic Party; it started as an insurgency against the Republican establishment. We see that insurgency within the Republican party in the clip. We will analyze that idea at length in a later article, where our main point is that the weakening of the Republican party is in no one's interest in the U.S., because AI issues will need two strong and evenly matched parties to find solutions. The point here is that Steve Bannon shows a principled stand, however disagreeable it may sound, and explains well why he will continue to support populist policies from the outside, against the Republican establishment. But while his stand seems principled, the same mis-diagnosis of what needs to be done is present; he includes Silicon Valley in his targets, but for the wrong reasons, as bastions of liberalism, which they are not, promoting a confrontation between government and Silicon Valley instead of working together to produce effective regulatory oversight in support of a beneficial development of AI.

Job losses due to AI will obviously impact the balancing needle between taxation and welfare. The Universal Basic Income proposal will obviously require higher taxation. The argument that is often made against welfare is that high welfare benefits will dis-incentivise unemployed people from looking for work. That view does not seem to reflect the core of human nature. In the Nordic countries, despite having high welfare benefits, people are quite serious about finding employment. What is the bedrock nature of humans, material comfort or dignity and finding significance in their lives through work?
If we adopt a dim view of human nature, then we stand no chance in facing off the challenges of living in an AI world. We will refer to Aldous Huxley's "Brave New World" and the value of mindless pursuit of comfort/self-interest in one of the following articles, but let's view this issue in contemporary U.S. society rather than in a dystopian futuristic novel:

A market-driven economy should be predicated on informed consumers who buy quality products and ditch the poor ones. A market-driven and AI-dominated economy will require even more informed consumers. The advertising industry already makes heavy use of AI to influence buying patterns, and that trend in the usage of AI will accelerate. And just in case you are ready to dismiss Chomsky's argument that people are not "homo economicus" (i.e., just selfish pursuers of comfort), simply because Chomsky is such a symbol of left-wing politics in the U.S., let's hear essentially the same argument from a self-made billionaire:

What is our message here, by including these video clips? It is not to criticize success, it is not to criticize the top 1% of earners, and especially not the billionaires' club. They played by the rules of the game and won, which is admirable. It is the game itself that needs to be changed, and AI will force us to change it. It's not clear what a preoccupation with the billionaires will bring, instead of a preoccupation with the more constructive issue of how to change the rules of the game. The following quip is attributed to Larry Ellison: "Tell me what I can do with $3 billion that I cannot do with $1 billion". The relevance of money fades past a certain level anyway, and the human condition has no way to enjoy those irrelevant levels.

There is no doubt that income inequality is a pressing issue in the U.S., and AI will greatly exacerbate this inequality. So the debate around wealth redistribution must be constructive, not destructive, and it must include AI matters. And that constructive approach means that some rules of the game must be changed so that the wealth inequality reaches a more benign curve which we can all digest better. The billionaires, many of whom made their fortunes in high-tech, seem to understand AI much better than the government, and many of them are actually leading the charge for a constructive revision of the rules of the game. As the speaker just said, "Markets are not jungles, they are gardens; they need to be tended."

Will the U.S. continue to be a land of opportunity, or is that opportunity becoming unequally reachable, and dangerously so with the rise of AI? Steve Bannon is preoccupied by the same question as the "liberal Californians" in his crosshairs, so the good intentions on both sides should be a reason for hope. The seriousness of the problem of AI exacerbating income inequality weighs on anyone interested in our future prosperity.
2023-03-24 07:13:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24706332385540009, "perplexity": 1599.3225950717742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945248.28/warc/CC-MAIN-20230324051147-20230324081147-00130.warc.gz"}
https://quant.stackexchange.com/questions/22963/ratio-of-gaussian-cdfs-in-black-scholes-option-pricing-formula
# Ratio of Gaussian CDFs in the Black-Scholes option pricing formula

What is meant by $\frac {\Phi (d_2)}{\Phi (d_1)}$ in the Black-Scholes call option price? I found it in a solution as $\frac{\text{short position in cash}}{(\text{number of shares})(\text{strike price discounted to time zero})}$ Reference can be found here: Q. 4 on page 19, and its solution on page 26

• We need more background to understand your question. – Gordon Jan 27 '16 at 19:40 • Hi @Gordon, please see the Q, I edited it. – Hemant Rupani Jan 28 '16 at 5:01 • This term is not part of the BS formula for a call price, as you can see here: en.wikipedia.org/wiki/Black%E2%80%93Scholes_model in the section "Black-Scholes formula". – Ric Jan 28 '16 at 6:57 • @Rchard Yes, but $N(d_1)$ and $N(d_2)$ are parts of the BS formula... and the solution given in the link uses $N(d_2)$ divided by $N(d_1)$, which I am confused about. – Hemant Rupani Jan 28 '16 at 7:57 • Your question is really hard to understand; it does not become clear to me what you are asking for. Consider editing it to improve the quality (and quantity) of answers. – muffin1974 Jan 28 '16 at 8:08

For a call option with price given by \begin{align*} c = S_0 \Phi(d_1) - K e^{-rT}\Phi(d_2), \end{align*} the delta hedge ratio $\Phi(d_1)$ is the number of shares to hold. That is, $S_0 \Phi(d_1)$ is the total share value held for hedging, while $K e^{-rT}\Phi(d_2)$ is the total cash amount held short. In the question, it says that, for $N$ options, 250,000 shares of the stock are held, and the amount of $£413,057$ is held short. The strike price is $K=2.0$. Therefore, \begin{align*} N \Phi(d_1) = 250000, \mbox{ and } N K e^{-rT}\Phi(d_2) = £413057. \end{align*} Consequently, \begin{align*} \frac{\Phi(d_2)}{\Phi(d_1)} &= \frac{N K e^{-rT}\Phi(d_2)}{N\Phi(d_1)}\frac{N}{N K e^{-rT}}\\ &=\frac{N K e^{-rT}\Phi(d_2)}{N\Phi(d_1) K e^{-rT}} \\ &=\frac{413057}{250000 \times 2.0 \times e^{-0.03 \times 0.5}}\\ &= 0.8386. \end{align*} • $\Phi (d_1)$ is the number of shares held PER OPTION, no? – Hemant Rupani Jan 28 '16 at 16:55
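A quick numerical check of the final ratio (a minimal sketch in Python; the figures 413057, 250000, $K=2.0$, $r=0.03$, $T=0.5$ are all taken from the worked answer above):

```python
from math import exp

# Figures from the answer above: short cash position, shares held,
# strike K, risk-free rate r, and time to expiry T (in years).
short_cash = 413_057.0   # = N * K * exp(-r*T) * Phi(d2)
shares_held = 250_000.0  # = N * Phi(d1)
K, r, T = 2.0, 0.03, 0.5

# Phi(d2)/Phi(d1) = short_cash / (shares_held * K * exp(-r*T))
ratio = short_cash / (shares_held * K * exp(-r * T))
print(round(ratio, 4))  # 0.8386
```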
2019-08-21 14:41:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9972891211509705, "perplexity": 1649.5535140944676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316021.66/warc/CC-MAIN-20190821131745-20190821153745-00301.warc.gz"}
http://mymathforum.com/algebra/345022-inequality.html
My Math Forum: An inequality

September 29th, 2018, 08:05 AM #1 Newbie Joined: Aug 2018 From: România Posts: 17 Thanks: 2

Good evening to all, Solve the inequality $\displaystyle ||x^2+2ix+3|'+4|+5x^2<0$. All the best, Integrator

September 29th, 2018, 10:47 AM #2 Newbie Joined: Sep 2018 From: Poland Posts: 1 Thanks: 0

Inequality $||x^2 +2ix + 3 |' +4| +5x^2 <0$
$| (x^2 +2ix + i^2 +4)' +4 | +5x^2<0$
$|[(x +i)^2 +4]' +4 | +5x^2<0$
$|2(x +i) + 4 | +5x^2 <0$
$2|x+2 +i| +5x^2 <0$
$2\sqrt{(x+2)^2 +1^2} +5x^2 <0$
$x\in \emptyset$

September 29th, 2018, 10:24 PM #3 Newbie Joined: Aug 2018 From: România Posts: 17 Thanks: 2

Good morning to all, Thousands of apologies! I reformulate the statement of the problem: Solve the inequality $\displaystyle ||x^2+2ix+3|'+4|+5x^2<0$ where $\displaystyle i^2=-1$. All the best, Integrator Last edited by Integrator; September 29th, 2018 at 10:37 PM.

September 30th, 2018, 06:07 AM #4 Global Moderator Joined: Dec 2006 Posts: 19,974 Thanks: 1850

Does x have to be real?

September 30th, 2018, 06:37 AM #5 Newbie Joined: Aug 2018 From: România Posts: 17 Thanks: 2

Quote: Originally Posted by skipjack Does x have to be real?

Hello, The nature of $\displaystyle x$ will result from solving the inequality. All the best, Integrator

October 1st, 2018, 09:40 PM #6 Newbie Joined: Aug 2018 From: România Posts: 17 Thanks: 2

Good morning to all, Some say there are solutions... How can we solve this inequality? What are these solutions? What results if we consider $\displaystyle x=u+vi$ where $\displaystyle i^2=-1$ and $\displaystyle u,v\in \mathbb R$? All the best, Integrator Last edited by Integrator; October 1st, 2018 at 09:48 PM.
2018-12-13 08:04:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4839567542076111, "perplexity": 12360.618860384326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824601.32/warc/CC-MAIN-20181213080138-20181213101638-00560.warc.gz"}
http://indico.cern.ch/conferenceOtherViews.py?view=standard&confId=95988
# TH - LPCC Summer Institute on LHC Physics - THLPCC11 from to (Europe/Zurich) at CERN ( 4-3-006 - TH Theory Conference Room ) Description The TH-LPCC Summer Institute on LHC Physics This 5-week Institute is co-organized by CERN's Theory Group and by the LHC Physics Centre at CERN. Format: The first two weeks of the Institute will be focused on SM and QCD while the last three weeks will be devoted to BSM, with a special emphasis on model building during the first two, and on the status of the searches at the LHC in the last one. In particular, during this week the Institute will host the first meeting of the Workshop on "Implications of LHC results for TeV-scale physics". This is open for participation to those not registered at the Institute, at https://indico.cern.ch/conferenceDisplay.py?confId=141983 Registration: The attendance will be limited to around 40 people per week. There is no registration fee. Registration will be open until Jan. 31, 2011. Local Organization: Stefano Frixione, Gian Giudice, Christophe Grojean, Michelangelo Mangano, Gavin Salam, Peter Skands and Geraldine Servant. The meeting will be partly supported by the CERN-TH unit the LHC Physics Centre at CERN the Marie Curie Initial Training Network "UNILHC" PITN-GA-2009-23792, and the ERC Advanced Grant "MassTeV" 226371. Material: Participant List Go to day • Monday, 1 August 2011 • 14:00 - 18:00 Use of collider data for PDF analyses Material: • 14:00 PDFs at the LHC -- some issues, comments and questions 30' Speaker: Dr. Robert Samuel Thorne (University College London-University of London) Material: • 14:50 The impact of present and future LHC data on parton distributions 30' Speaker: Alberto Guffanti (University of Freiburg) Material: Slides • 15:40 PDFs with the LHeC 20' Speaker: Max Klein Material: • Tuesday, 2 August 2011 • 14:00 - 18:00 Review of new LHC results presented at EPS See https://indico.cern.ch/conferenceOtherViews.py?view=standard&confId=135933 Location: 500-1-001 - Main Auditorium Material: • Wednesday, 3 August 2011 • 10:30 - 11:30 QCD results from CMS 1h0' Speaker: Konstantinos Kousouris (Fermi National Accelerator Lab. (Fermilab)) Material: • 11:30 - 12:00 Discussion 30' • 14:00 - 15:00 TH colloquium: "Precision predictions for Higgs production" (M.Grazzini) Material: • Thursday, 4 August 2011 • 09:00 - 11:30 Recent shower MC developments • 09:00 Status of the POWHEG box 30' Speaker: Paolo Nason Material: • 09:35 Status of aMC@NLO 30' Speaker: Rikkert Frederix Material: • 10:10 Status of Herwig++ 30' Speaker: Mike Seymour (School of Physics and Astronomy Schuster Laboratory-University) Material: • 10:45 The Krakow NLO Parton Shower project 20' Speaker: Maciej Skrzypek (Henryk Niewodniczanski Inst. Nucl. 
Physics, PAN) Material: • 11:10 Coffee break 20' • 11:30 - 12:30 Collider Cross Talk • 11:30 Status and Progress of Pythia8 1h0' Speaker: Torbjorn Sjostrand (Lund University / CERN) Material: • Friday, 5 August 2011 • 09:00 - 12:00 Open forum for short contributions • 09:00 On the Integrand-Reduction Method for Two-Loop Scattering Amplitudes 20' Speaker: Pierpaolo Mastrolia Material: • 09:30 Herwiri1.031: A New Approach to Parton Shower MC's for Precision QCD for the LHC 15' Speaker: Bennie Ward (High Energy Physics Group-Department of Physics-Baylor University) Material: • Monday, 8 August 2011 • 14:00 - 18:00 Progress in NNLO calculations • 14:00 Status of the antenna approach to NNLO calculations 30' Speaker: Nigel Glover Material: • 14:30 Non-linear mapping of singularities at NNLO and the H->b bbar width 30' Speaker: Franz Herzog Material: • 15:00 The singular behavior of one-loop massive QCD amplitudes with one external soft gluon, or calculating the real-virtual corrections at NNLO with massive quarks 30' Speaker: Alexander Dimitrov Mitov (CERN) Material: • 15:30 Order epsilon and epsilon^2 terms of one-loop amplitudes in NNLO calculations 30' Speaker: Stefan Weinzierl Material: • 16:00 Coffee break 30' • 16:30 Integration of subtraction terms at NNLO 30' Speaker: Gabor Somogyi (DESY, Zeuthen) Material: • 17:00 Two-loop amplitudes and integrals in N=4 SYM 30' Speaker: Vittorio Del Duca (INFN sezione di Frascati) Material: • 17:30 Recent results on t-tbar cross sections at NNLL/NNLO 30' Speaker: Pietro Falgari Material: • Tuesday, 9 August 2011 • 09:00 - 18:00 RIVET tutorial https://indico.cern.ch/conferenceDisplay.py?confId=145745 Material: • Wednesday, 10 August 2011 • 09:00 - 12:00 LO and NLO Matrix-element aspects of MC simulations • 09:00 A new formalism for multileg LO matching 30' Speakers: Peter Skands (CERN), Juan Jose Lopez Villarejo Material: • 09:40 Applications of POWHEG to ttbar+X (X=jet,H,QQbar,...)
30' Speaker: Zoltan Laszlo Trocsanyi (Institute of Experimental Physics-University of Debrecen) Material: • 10:20 Combining NLO corrections to production and decay in the WH process 30' Speaker: Julián Cancino (ETH Zürich) Material: • 11:00 Coffee break 20' • 14:00 - 15:00 TH colloquium: "Event-generator physics for the LHC" (M.Seymour) https://indico.cern.ch/conferenceDisplay.py?confId=148458 Material: • 16:00 - 17:00 QCD results from ATLAS 1h0' Speaker: Mario Campanelli (University College London-University of London) Material: • 17:00 - 17:30 Discussion 30' • Thursday, 11 August 2011 • 11:00 - 12:00 Collider Cross Talk: Search for dilepton resonances at the LHC https://indico.cern.ch/conferenceDisplay.py?confId=147455 Material: • 14:00 - 17:30 W/Z+jets • 14:00 W/Z+jets: status and open issues 30' Speaker: Michelangelo Mangano (CERN) Material: • 14:40 NLO W/Z+jets with BlackHat and Sherpa 30' Speaker: David Kosower Material: • 15:20 W/Z+jets results with Sherpa 30' Speaker: Jan-Christopher Winter Material: • 16:00 Study of Wjj at NLO+PS using aMC@NLO 30' Speaker: Rikkert Frederix Material: • 16:40 Using gamma+jets Production to Calibrate the Standard Model Z(nunu)+jets Background to New Physics Processes at the LHC 30' Speaker: James Stirling (Cambridge University) Material: • Friday, 12 August 2011 • 09:00 - 12:00 Open forum for short contributions • 09:00 Gluon-Gluon contributions to WW production and Higgs interference effects 20' Speaker: Ciaran Williams (IPPP) Material: • 09:30 Precise predictions for Higgs production Beyond the Standard Model 20' Speaker: Dr. Elisabetta Furlan (BNL) Material: • 09:50 Colour-friendly FKS subtraction 30' Speaker: Stefano Frixione (CERN) Material: • Tuesday, 16 August 2011 • 10:30 - 11:00 Coffee break • 11:00 - 11:20 N-subjettiness 20' A new jet shape to identify boosted hadronic objects (W/Z/H/top/etc). Speaker: Jesse Thaler Material: • 11:20 - 11:40 Measuring invisible particle masses using a single short decay chain 20' We discuss the possibility of mass measurements for a SUSY-like decay chain with only 2 visible particles. This was never attempted and thought to be impossible before. Speaker: Hsin-Chia Cheng Material: • 11:40 - 12:00 New Vector Boson Near the Z-pole and the Puzzle in Precision Electroweak Data 20' We show that a Z' with suppressed couplings to the electron compared to the Z-boson, with couplings to the b-quark, and with a mass close to the mass of the Z-boson, provides an excellent fit to forward-backward asymmetry of the b-quark and R_b measured on the Z-pole and $\pm 2$ GeV off the Z-pole, and to A_e obtained from the measurement of left-right asymmetry for hadronic final states. Speaker: Radovan Dermisek Material: • 12:00 - 12:20 CDF dijet excess 20' how to look for the CDF dijet excess at the LHC (IF it's from rho_T -> W pi_T). Speaker: Kenneth Lane (Boston University) Material: • Thursday, 18 August 2011 • 10:30 - 11:00 Coffee Break • 14:00 - 14:20 Natural supersymmetry at the LHC 20' Motivated by natural electroweak symmetry breaking in supersymmetry, an effective model with light stop/higgsino and very light gravitino is considered. The implication of the LHC 1 fb^-1 data is also discussed. Speaker: Hyung Do Kim Material: • 14:20 - 14:40 Tree-level Gauge Mediation: Viable and Predictive Tree-level SUSY breaking 20' I will discuss a scenario in which SUSY breaking is communicated by heavy vectorfields at tree-level. 
This gives rise to a simple and motivated model of SUSY breaking with peculiar predictions for sfermion mass ratios. Speaker: Robert Ziegler (Technische Universitat Munchen) Material: • 14:40 - 15:00 Model building using Lie-point symmetries 20' We describe the Lie-point symmetry method and how it can be used to systematically search for all continuous symmetries and parameter relationships in a classical field theory. Speaker: Damien George Material: • 15:00 - 15:20 Physical Predictions in the Quantum Multiverse 20' I describe how quantum mechanics plays a crucial role in defining probabilities (the "measure") in the multiverse, and how the eternally inflating multiverse leads to dramatic change of our view on spacetime and gravity. The latest result on the distribution of the cosmological constant is also presented. The talk is based mainly on arXiv:1104.2324 (but also arXiv:1107.3556). Speaker: Yasunori Nomura Material: • Friday, 19 August 2011 • 10:30 - 11:00 Coffee Break • 11:00 - 11:20 Implications for the Constrained MSSM from CMS and Dark Matter Searches - A Bayesian Approach 20' Speaker: Leszek Roszkowski Material: • 11:20 - 11:40 MSSM Higgs physics at the LHC 20' I'll discuss the reach of the 7 TeV LHC in the search for Standard Model-like Higgs bosons in different benchmark scenarios, as well as the complementarity of the standard searches with non-standard Higgs searches. Searches from SUSY Higgs bosons from cascade decays will also be discussed in some detail. Speaker: Carlos Wagner (University of Chicago and Argonne National Laboratory) Material: • 11:40 - 12:00 The fine-tuning and phenomenology of the generalised NMSSM 20' We determine the degree of fine-tuning needed in a generalised version of the NMSSM that follows from an underlying Z4 or Z8 R-symmetry. We find that it is significantly less than is found in the MSSM or NMSSM and remarkably the minimal fine-tuning is achieved for Higgs masses of 130 GeV - 140 GeV. Speaker: Kai Schmidt-Hoberg (University of Oxford) Material: • 12:00 - 12:20 Extended Higgs sector at the LHC Run-I 20' In this talk I will discuss the exclusion and detectability prospects of Higgs bosons in three models: a) the Lee Wick Standard Model (arXiv:1104.3496), b) Beyond Minimal Supersymmetric Standard model (BMSSM) and c) Higgs portal models. We discuss the collider bounds on this model coming from LEP, Tevatron and LHC data. Speaker: José Francisco Zurita (University of Zurich) Material: • Tuesday, 23 August 2011 • 10:30 - 11:00 Coffee Break • 11:00 - 11:20 SUSY Monojets and Precision Coupling Determinations 20' squark--neutralino coupling at the high luminosity run of the LHC with some precision by analyzing a sample of high p_T monojet events. Speaker: Howard Haber (Santa Cruz Institute for Particle Physics (SCIPP)) Material: • 11:20 - 11:40 Beyond mSUGRA 20' Speaker: Riccardo Barbieri (Dipartimento di Fisica) Material: • 11:40 - 12:00 Making the slepton a Higgs with a U(1)_R lepton number 20' Speaker: Thomas Gregoire (Carleton University) Material: • 12:00 - 12:20 Two-Higgs-doublet interpretation of a small Wjj excess 20' I describe how a Wjj excess could arise in the 2HDM context, related signals and the theoretical issues that require that the excess be smaller than the CDF claim. 
Speaker: John Gunion (UC Davis) Material: • Wednesday, 24 August 2011 • 14:00 - 15:00 Conformal Field Theories as Building Blocks of Nature 1h0' Speaker: Vyacheslav Rychkov (LPT, ENS-Paris) Material: • Thursday, 25 August 2011 • 10:30 - 11:00 Coffee Break • 11:00 - 12:00 Latest H->WW Results from ATLAS and CMS 1h0' Collider Cross Talk. Abstract: The search for the Standard Model-like Higgs boson via the decay into two W bosons is presented, based on data collected in 2011. The search in the dilepton final state is more powerful than any current result for intermediate mass Higgs bosons and has the highest sensitivity of the Higgs searches at the LHC. It is complemented by semi-leptonic WW decays which give good performance in the region of high mass. Recent results by ATLAS and CMS will be discussed and the current limits on the Higgs mass will be shown. Speakers: Dmytro Kovalskyi, Elliot Lipeles • 14:00 - 14:20 Searching for the Higgs boson by calling all angles 20' We discuss the possibility of improving the sensitivity of Higgs boson searches by including all available angular variables. Speaker: Ian Low (Argonne National Lab/Northwestern Univ) Material: • 14:20 - 14:40 Invisible Higgs 20' I discuss a few simple models for invisible Higgs decays, and a method to constrain these models using the $h \to ZZ \to 4l$ lineshape. Speaker: Mr. Pedro Schwaller (Zurich University) Material: • 14:40 - 15:00 If no Higgs, then what? 20' Speaker: Adam Falkwoski (LPT Orsay) Material: • 15:00 - 15:20 Solving the Flavor Problem in Composite Higgs Models 20' Speaker: Michele Redi Material: • 15:20 - 15:40 SUSY Signals of CP violation in Higgs and Flavor Physics 20' Speaker: Marcela Carena Material: • Friday, 26 August 2011 • 10:30 - 11:00 Coffee Break • 11:00 - 11:20 Relating Proton Decay and Monojets through Asymmetric Dark Matter 20' In this talk I will discuss a scenario for asymmetric dark matter where the stable ADM states carry a (generalized) baryon number and can destroy nucleons through inelastic scattering, the rate for which is closely related to the cross section for monojet signals at the Tevatron and the LHC. Speaker: Prof. David Morrissey (Michigan State University) • 11:20 - 11:40 Light Dark Matter and the Electroweak phase transition in the NMSSM 20' Speaker: Dr. Nausheen Shah (Fermi Lab.) • 11:40 - 12:00 Counting dark matter particles in LHC events 20' We argue that counting the number of invisible particles in missing energy events at the LHC can give us insight into the nature of dark matter, and explain how we might do this in practice. Speaker: Rakhi Mahbubani • 12:00 - 12:20 Higgs portal inflation 20' We study the phenomenology of the Higgs-singlet mixed inflation for the extended Higgs sector with a singlet scalar in the SM. Speaker: Hyun Min Lee • 12:20 - 12:40 Light Higgsinos as Heralds of Higher-Dimensional Unification 20' Speaker: Felix Bruemmer (DESY)
2013-05-18 05:50:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45306098461151123, "perplexity": 13324.083806602894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00063-ip-10-60-113-184.ec2.internal.warc.gz"}
http://driveinsupporter.basketballbum.com/the-rook-netflix-d0a1935f.html
# The Rook - Netflix

The Rook is based on O'Malley's genre novel which introduces a strong female protagonist named Myfanwy Thomas with extraordinary powers, who is employed by a mysterious British government agency responsible for defending the UK from supernatural threats. Type: Scripted Languages: English Status: In Development Runtime: 60 minutes Premier: None

## The Rook - Rook polynomial - Netflix

In combinatorial mathematics, a rook polynomial is a generating polynomial of the number of ways to place non-attacking rooks on a board that looks like a checkerboard; that is, no two rooks may be in the same row or column. The board is any subset of the squares of a rectangular board with m rows and n columns; we think of it as the squares in which one is allowed to put a rook. The board is the ordinary chessboard if all squares are allowed and m = n = 8, and a chessboard of any size if all squares are allowed and m = n. The coefficient of x^k in the rook polynomial R_B(x) is the number of ways k rooks, none of which attacks another, can be arranged in the squares of B. The rooks are arranged in such a way that there is no pair of rooks in the same row or column. In this sense, an arrangement is the positioning of rooks on a static, immovable board; the arrangement will not be different if the board is rotated or reflected while keeping the squares stationary. The polynomial also remains the same if rows are interchanged or columns are interchanged. The term "rook polynomial" was coined by John Riordan. Despite the name's derivation from chess, the impetus for studying rook polynomials is their connection with counting permutations (or partial permutations) with restricted positions. A board B that is a subset of the n × n chessboard corresponds to permutations of n objects, which we may take to be the numbers 1, 2, ..., n, such that the number a_j in the j-th position in the permutation must be the column number of an allowed square in row j of B. Famous examples include the number of ways to place n non-attacking rooks on: an entire n × n chessboard, which is an elementary combinatorial problem; the same board with its diagonal squares forbidden, which is the derangement or "hat-check" problem; the same board without the squares on its diagonal and immediately above its diagonal (and without the bottom left square), which is essential in the solution of the problème des ménages. Interest in rook placements arises in pure and applied combinatorics, group theory, number theory, and statistical physics. The particular value of rook polynomials comes from the utility of the generating function approach, and also from the fact that the zeroes of the rook polynomial of a board provide valuable information about its coefficients, i.e., the number of non-attacking placements of k rooks.
## The Rook - Complete boards - Netflix

The first few rook polynomials on square n × n boards are (with $R_n = R_B$):

$$R_1(x) = x + 1$$
$$R_2(x) = 2x^2 + 4x + 1$$
$$R_3(x) = 6x^3 + 18x^2 + 9x + 1$$
$$R_4(x) = 24x^4 + 96x^3 + 72x^2 + 16x + 1.$$

In words, this means that on a 1 × 1 board, 1 rook can be arranged in 1 way, and zero rooks can also be arranged in 1 way (empty board); on a complete 2 × 2 board, 2 rooks can be arranged in 2 ways (on the diagonals), 1 rook can be arranged in 4 ways, and zero rooks can be arranged in 1 way; and so forth for larger boards. For complete m × n rectangular boards $B_{m,n}$ we write $R_{m,n} := R_{B_{m,n}}$. The smaller of m and n can be taken as an upper limit for k, since obviously $r_k = 0$ if $k > \min(m, n)$. This is also shown in the formula for $R_{m,n}(x)$. The rook polynomial of a rectangular chessboard is closely related to the generalized Laguerre polynomial $L_n^{\alpha}(x)$ by the identity

$$R_{m,n}(x) = n!\, x^{n} L_n^{(m-n)}(-x^{-1}).$$
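These coefficients are simple enough to check numerically. Below is a minimal sketch (plain Python, no dependencies) using the standard count $r_k = \binom{m}{k}\binom{n}{k}\,k!$ of ways to place k non-attacking rooks on a full m × n board (choose k rows, choose k columns, then match them up):

```python
from math import comb, factorial

def rook_poly_coeffs(m, n):
    """Coefficients r_0..r_min(m,n) of the rook polynomial of a full m x n board.

    r_k = C(m, k) * C(n, k) * k!  (choose k rows, k columns, then pair them).
    """
    return [comb(m, k) * comb(n, k) * factorial(k) for k in range(min(m, n) + 1)]

# Reproduce the square-board polynomials listed above (constant term first):
for n in range(1, 5):
    print(n, rook_poly_coeffs(n, n))
# 1 [1, 1]
# 2 [1, 4, 2]
# 3 [1, 9, 18, 6]
# 4 [1, 16, 72, 96, 24]
```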
2019-02-22 23:47:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8298044204711914, "perplexity": 14905.57894102846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249406966.99/warc/CC-MAIN-20190222220601-20190223002601-00342.warc.gz"}
https://www.bookstack.cn/read/everything-curl/usingcurl-timeouts.md
# Timeouts Network operations are by their nature rather unreliable or perhaps fragile operations as they depend on a set of services and networks to be up and working for things to work. The availability of these services can come and go and the performance of them may also vary greatly from time to time. The design of TCP even allows the network to get completely disconnected for an extended period of time without it necessarily getting noticed by the participants in the transfer. The result of this is that sometimes Internet transfers take a very long time. Further, most operations in curl have no time-out by default! ## Maximum time allowed to spend Tell curl with -m / --max-time the maximum time, in seconds, that you allow the command line to spend before curl exits with a timeout error code (28). When the set time has elapsed, curl will exit no matter what is going on at that moment—including if it is transferring data. It really is the maximum time allowed. The given maximum time can be specified with a decimal precision; 0.5 means 500 milliseconds and 2.37 equals 2370 milliseconds. Example: curl --max-time 5.5 https://example.com/ (Your locale may use another symbol than a dot for expressing numerical fractions.) ## Never spend more than this to connect --connect-timeout limits the time curl will spend trying to connect to the host. All the necessary steps done before the connection is considered complete have to be completed within the given time frame. Failing to connect within the given time will cause curl to exit with a timeout exit code (28). The given maximum connect time can be specified with a decimal precision; 0.5 means 500 milliseconds and 2.37 equals 2370 milliseconds: curl --connect-timeout 2.37 https://example.com/ ## Transfer speeds slower than this means exit Having a fixed maximum time for a curl operation can be cumbersome, especially if you, for example, do scripted transfers and the file sizes and transfer times vary a lot. A fixed timeout value then needs to be set unnecessarily high to cover for worst cases. As an alternative to a fixed time-out, you can tell curl to abandon the transfer if it gets below a certain speed and stays below that threshold for a specific period of time. For example, if a transfer speed goes below 1000 bytes per second during 15 seconds, stop it: curl --speed-time 15 --speed-limit 1000 https://example.com/ ## Keep connections alive curl enables TCP keep-alive by default. TCP keep-alive is a feature that makes the TCP stack send a probe to the other side when there’s no traffic, to make sure that it is still there and “alive”. By using keep-alive, curl is much more likely to discover that the TCP connection is dead. Use --keepalive-time to specify how often in full seconds you would like the probe to get sent to the peer. The default value is 60 seconds. Sometimes this probing disturbs what you are doing and then you can easily disable it with --no-keepalive.
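As a usage sketch, a script can combine the connect timeout with a total time budget and branch on the timeout exit code (28) mentioned above. This assumes curl is installed on the system, and https://example.com/ is just the placeholder endpoint used throughout this page:

```python
import subprocess

# Run curl with both a connect timeout and an overall time budget,
# then inspect curl's exit code; 28 is the timeout error code.
result = subprocess.run(
    ["curl", "--connect-timeout", "2.37", "--max-time", "5.5",
     "--silent", "--output", "/dev/null", "https://example.com/"],
)
if result.returncode == 28:
    print("curl hit a timeout (exit code 28)")
elif result.returncode == 0:
    print("transfer completed within the time budget")
else:
    print(f"curl failed with exit code {result.returncode}")
```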
2022-11-30 08:25:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27858105301856995, "perplexity": 3189.5422048424175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710733.87/warc/CC-MAIN-20221130060525-20221130090525-00354.warc.gz"}
https://math.stackexchange.com/questions/1639625/probability-of-drawing-a-card-at-least-once-with-replacement
# Probability of Drawing A Card at least once (with replacement)

Say we have a standard deck of $52$ cards. The probability of drawing the King of Hearts is $\frac{1}{52}$, obviously. But let's say we were to make $30$ draws with replacement (so each time a card is drawn, it is put back in the deck and the deck is shuffled). What are the odds that the King of Hearts was drawn at least once out of those $30$ draws? Thanks

• Welcome to math.SE: since you are new, I wanted to let you know a few things about the site. In order to get the best possible answers, it is helpful if you say in what context you encountered the problem, and what your thoughts on it are; this will prevent people from telling you things you already know, and help them give their answers at the right level – JKnecht Feb 4 '16 at 1:48 • Formatting tips here. – Em. Feb 4 '16 at 1:51 • Thank you @JKnecht and @probablyme! – Nullqwerty Feb 4 '16 at 1:53

Let $K$ be the number of KOHs you draw. Since each draw is with replacement and (presumably) independent of the others, it is easier to start by calculating the complement, $$P(K\geq 1) = 1-P(K=0),$$ in 30 draws. • @Nullqwerty Almost, $1-\left(\frac{51}{52}\right)^{30} = 0.4415234 \approx 44\%$. – Em. Feb 4 '16 at 1:51
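A quick numerical check of the accepted figure (a minimal sketch; the Monte Carlo part is purely for illustration):

```python
import random

# Exact complement calculation: P(at least one KOH in 30 draws)
p_exact = 1 - (51 / 52) ** 30
print(f"exact: {p_exact:.7f}")  # 0.4415234

# Monte Carlo check: draw 30 cards with replacement, many times over
random.seed(0)
trials = 200_000
hits = sum(
    any(random.randrange(52) == 0 for _ in range(30))  # card 0 = King of Hearts
    for _ in range(trials)
)
print(f"simulated: {hits / trials:.4f}")  # close to 0.4415
```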
2019-08-21 07:15:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3951040506362915, "perplexity": 305.9379325682866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315811.47/warc/CC-MAIN-20190821065413-20190821091413-00391.warc.gz"}
https://www.sparrho.com/item/new-compact-forms-of-the-trigonometric-ruijsenaars-schneider-system/8c8825/
# New compact forms of the trigonometric Ruijsenaars-Schneider system

Research paper by L. Feher, T. J. Kluck

Indexed on: 02 Dec '13. Published on: 02 Dec '13. Published in: Mathematical Physics

#### Abstract

The reduction of the quasi-Hamiltonian double of ${\mathrm{SU}}(n)$ that has been shown to underlie Ruijsenaars' compactified trigonometric $n$-body system is studied in its natural generality. The constraints contain a parameter $y$, restricted in previous works to $0<y < \pi/n$ because Ruijsenaars' original compactification relies on an equivalent condition. It is found that allowing generic $0<y<\pi/2$ results in the appearance of new self-dual compact forms, of two qualitatively different types depending on the value of $y$. The type (i) cases are similar to the standard case in that the reduced phase space comes equipped with globally smooth action and position variables, and turns out to be symplectomorphic to ${\mathbb{C}P^{n-1}}$ as a Hamiltonian toric manifold. In the type (ii) cases both the position variables and the action variables develop singularities on a nowhere dense subset. A full classification is derived for the parameter $y$ according to the type (i) versus type (ii) dichotomy. The simplest new type (i) systems, for which $\pi/n < y < \pi/(n-1)$, are described in some detail as an illustration.
2021-05-18 02:23:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5618890523910522, "perplexity": 780.4493741418441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991650.73/warc/CC-MAIN-20210518002309-20210518032309-00141.warc.gz"}
https://www.mytractorforum.com/threads/any-idea-how-heavy-this-is.216199/
#### mattbatson · Registered · 115 Posts · Discussion Starter

A buddy is asking me from another forum... and he doesn't know the model of the Kubota tractor :dunno: just a guesstimate would be fine. thx

#### Shack · Registered · 2,296 Posts

Not knowing the model for sure, but it looks like a B series so I will guess somewhere in the neighborhood of 4500 lbs.

#### mattbatson · Registered · 115 Posts · Discussion Starter

Quote (Shack): Not knowing the model for sure, but it looks like a B series so I will guess somewhere in the neighborhood of 4500 lbs.

thx, that was my guess also, and I'm thinking about 2000 for the trailer, as it looks a bit beefier than mine, which weighs 1500. Appreciate it

#### TooManyGT · Registered · 1,356 Posts

Looks like a 2x20. So 1600 lbs for the tractor, 450-500 lbs for the loader, +600 lbs if the tires are loaded, and I believe about 1000 lbs for the backhoe with thumb (might be a little more, I can't remember).

#### mattbatson · Registered · 115 Posts · Discussion Starter

Quote (TooManyGT): Looks like a 2x20. So 1600 lbs for the tractor, 450-500 lbs for the loader, +600 lbs if the tires are loaded, and I believe about 1000 lbs for the backhoe with thumb (might be a little more, I can't remember).
2021-01-23 01:01:22
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8118618130683899, "perplexity": 6942.263828035904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531702.36/warc/CC-MAIN-20210123001629-20210123031629-00493.warc.gz"}
https://stats.stackexchange.com/questions/276523/central-limit-theorem-for-random-vectors-under-weak-dependence
# Central Limit Theorem for random vectors under weak dependence

The central limit theorem is generalizable to the multivariate case, but this is possible due to the i.i.d. hypothesis on the random variables involved in the sum. In fact, if you sum a set of i.i.d. random vectors, you obtain normality for each marginal, i.e. for each vector component. Then, given independence, you can easily derive the joint distribution (the product of two Gaussian PDFs is a Gaussian PDF). Please correct me if I am wrong. Now consider a situation where the central limit theorem still holds for the marginals, but there is some dependence between the components of the vectors included in the sum. Is there a way to see if this dependence is weak, and to derive the joint distribution in a similar way? I see that weak dependence mainly concerns stochastic processes, where a time variable is involved. But what about the joint distribution of vector components that exist all at the same time? I think that if it's known how each vector component is calculated, then there is a way to study the dependence structure and, if weak, derive the limit distribution for the whole vector. Is it possible?

## migrated from math.stackexchange.com Apr 28 '17 at 18:22

This question came from our site for people studying math at any level and professionals in related fields.

Look at page 21-22 here. The construction is for Markov chains, but I think essentially, the assumption is only of weak dependence. My answer here is also useful. Here I present the gist of it: Say $\{X_t\}_{t\geq 1}$ is a $p$-multivariate process with weak dependence. Here $X_t = (X_t^{(1)}, X_t^{(2)}, \dots, X_t^{(p)})^T$. Now suppose the random mean vector from $n$ "samples" is $$\theta_n = \dfrac{1}{n} \sum_{t=1}^{n}X_t = (\theta_n^{(1)}, \theta_n^{(2)}, \dots, \theta_n^{(p)})^T\,.$$ If a central limit theorem for all components holds, then for $i = 1, \dots, p$, there exists $\sigma^2_i > 0$ such that as $n \to \infty$, $$\sqrt{n} (\theta_n^{(i)} - \theta^{(i)}) \overset{d}{\to} N(0, \sigma^2_i)\,.$$ Here $\theta^{(i)}$ is the true mean for the $i$th component. Let $(t_1, t_2, \dots, t_p)$ be an arbitrary vector of constants in $\mathbb{R}^p$. Then, $$\sum_{i=1}^{p} t_i \sqrt{n}(\theta_n^{(i)} - \theta^{(i)}) \overset{d}{\to} \sum_{i=1}^{p} t_i N(0, \sigma^2_i)$$ Then by the Cramér Wold Theorem, there exists a $p\times p$ positive definite matrix $\Sigma$ such that as $n \to \infty$ $$\sqrt{n}(\theta_n - \theta) \overset{d}{\to} N_p(0, \Sigma) \,.$$ Here $\Sigma = \lim_{n\to \infty} Cov (\sqrt{n}\,\theta_n)$. For Markov chains this breaks down nicely and uses the assumption that $Cov(X_1,X_{1+k}) = Cov(X_{t}, X_{t+k})$ for all $t > 1$. This may not hold for all weakly dependent processes. • This is good information, but could you clarify how the Cramér Wold Theorem actually is applied here? You haven't explicitly examined the set of all one-dimensional projections of $\theta_n$. – whuber Apr 28 '17 at 18:51
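To make the limit concrete, here is a minimal simulation sketch. The setup is an assumption of mine, not from the answer above: a stationary bivariate VAR(1) process, which is weakly dependent when the coefficient matrix has spectral radius below 1. The scaled mean vector settles into an elliptical Gaussian cloud whose covariance approximates the long-run $\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weakly dependent bivariate process: X_t = A X_{t-1} + e_t (VAR(1)),
# with spectral radius of A below 1 so dependence decays geometrically.
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
n, reps = 1_000, 2_000

means = np.empty((reps, 2))
for r in range(reps):
    x = np.zeros(2)
    total = np.zeros(2)
    for _ in range(n):
        x = A @ x + rng.standard_normal(2)
        total += x
    means[r] = total / n

scaled = np.sqrt(n) * means  # the true mean here is 0
print("sample covariance of sqrt(n)*theta_n:")
print(np.cov(scaled.T))      # approximates the long-run covariance Sigma
```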
2019-08-21 20:28:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9051240682601929, "perplexity": 269.84441377208384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316194.18/warc/CC-MAIN-20190821194752-20190821220752-00375.warc.gz"}
https://mathspace.co/textbooks/syllabuses/Syllabus-929/topics/Topic-18988/subtopics/Subtopic-256079/
United States of America: PA High School Core Standards - Algebra I Assessment Anchors

# 6.03 Multiplying polynomials

## Lesson

### Multiplying two binomials

Recall that to distribute an expression like $2\left(x-3\right)$ we use the distributive property: $A\left(B+C\right)=AB+AC$. Now we want to look at how to multiply two binomials together, such as $\left(ax+b\right)\left(cx+d\right)$.

### A visual interpretation

Let's look at $\left(x+5\right)\left(x+2\right)$ for example and see how this distribution works visually before we look at the algebraic approach. We will use the area of a rectangle. We can consider the expression $\left(x+5\right)\left(x+2\right)$ to represent the area of a rectangle with side lengths of $x+5$ and $x+2$ as seen below. Another way to express the area would be to split the large rectangle into four smaller rectangles. Since the area of the whole rectangle is the sum of its parts, we get the following:

$\left(x+5\right)\left(x+2\right)=x^2+2x+5x+10=x^2+7x+10$

We don't want to have to draw a rectangle every time, so below we'll look at the algebraic approach.

### An algebraic approach

When we multiply binomials of the form $\left(ax+b\right)\left(cx+d\right)$ we can treat the second binomial $\left(cx+d\right)$ as a constant term and apply the distributive property in the form $\left(B+C\right)\left(A\right)=BA+CA$. The picture below shows this in action: As you can see in the picture, we end up with two expressions of the form $A\left(B+C\right)$. We can distribute these using the distributive property again to arrive at the final answer:

$\left(ax+b\right)\left(cx+d\right)=ax\left(cx+d\right)+b\left(cx+d\right)=acx^2+adx+bcx+bd=acx^2+\left(ad+bc\right)x+bd$

We can actually be even more efficient in our algebraic approach by taking a look at the step below.

$\left(ax+b\right)\left(cx+d\right)=acx^2+adx+bcx+bd$

Notice that we have multiplied every term in the first bracket by every term in the second bracket. In general, that is what is required to multiply two polynomials together. We often use arrows as shown below to help us get every pair. By distributing in this way we will get the result $x^2+2x+5x+10=x^2+7x+10$, the same result we would have obtained using the previous method. You may prefer to use this alternate method since it is more efficient.

#### Worked example

##### Question 1

Distribute and simplify $\left(x-3\right)\left(x+4\right)$.

Think: We need to multiply both terms inside $\left(x-3\right)$ by both terms inside $\left(x+4\right)$.

Do:

$\left(x-3\right)\left(x+4\right)=x\left(x+4\right)-3\left(x+4\right)=x^2+4x-3x-12=x^2+x-12$

(We can jump right to the third expression using the short-cut mentioned above.)

#### Practice questions

##### Question 2

Distribute and simplify the following: $\left(x+2\right)\left(x+5\right)$

##### Question 3

Distribute and simplify the following: $\left(7w+5\right)\left(5w+2\right)$

##### Question 4

Distribute and simplify the following: $\left(2n+5\right)\left(5n+2\right)-4$

## Multiplying two polynomials

What we know about multiplying two binomials can be extended to multiply any two polynomials. Remember! Every term in one pair of parentheses has to be multiplied by every other term in the other pair of parentheses.

#### Worked example

##### Question 5

Distribute and simplify $\left(x^3-4\right)\left(x^2+3x+5\right)$. What do you notice about the degree of the new polynomial?

Think: We need to multiply every pair of terms between the two parentheses. Then we will distribute and simplify.

Do:

$\left(x^3-4\right)\left(x^2+3x+5\right)=x^3\left(x^2+3x+5\right)-4\left(x^2+3x+5\right)=x^3\times x^2+x^3\times3x+x^3\times5-4x^2-4\times3x-4\times5=x^5+3x^4+5x^3-4x^2-12x-20$

Reflect: When we multiplied a quadratic (degree $2$) and a cubic (degree $3$) we ended up with a polynomial of degree $5$. In general, the leading term of the product will be the product of the leading terms. This means, using our laws of exponents, the degree will be the sum of the degrees of the two polynomials.

Did you know? In general, we can say that degree$\left(P(x)\times Q(x)\right)=$ degree$\left(P(x)\right)$ + degree$\left(Q(x)\right)$, where $P(x)$ and $Q(x)$ are polynomials.

#### Practice questions

##### Question 6

Distribute $\left(a+2\right)\left(5a^2-2a+2\right)$.

##### Question 7

Consider $R\left(x\right)$, the product of the polynomials $P\left(x\right)=3x^5-3$ and $Q\left(x\right)=-2x^7+5x^5+6$.

1. What is the degree of $R\left(x\right)$?
2. What is the constant term of $R\left(x\right)$?
3. Is $R\left(x\right)$ a polynomial? (Yes / No)

### Outcomes

#### CC.2.2.HS.D.3

Extend the knowledge of arithmetic operations and apply to polynomials.

#### A1.1.1.5.1

Add, subtract, and/or multiply polynomial expressions (express answers in simplest form). Note: Nothing larger than a binomial multiplied by a trinomial.
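Since multiplying polynomials is exactly a convolution of their coefficient lists, the "every term by every term" rule is easy to encode. A minimal sketch (plain Python; coefficients are stored lowest degree first, and the example polynomials are the ones from Question 7):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first).

    Every coefficient of p multiplies every coefficient of q, and each
    product whose degrees sum to k lands in slot k -- the "every term by
    every term" rule from the lesson.
    """
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# Question 7: P(x) = 3x^5 - 3 and Q(x) = -2x^7 + 5x^5 + 6
P = [-3, 0, 0, 0, 0, 3]
Q = [6, 0, 0, 0, 0, 5, 0, -2]
R = poly_mul(P, Q)
print(len(R) - 1)  # degree 12 = 5 + 7, the sum of the degrees
print(R[0])        # constant term -18 = (-3) * 6
```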
2022-01-17 17:22:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.846419095993042, "perplexity": 1028.592726331944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300574.19/warc/CC-MAIN-20220117151834-20220117181834-00035.warc.gz"}
https://www.gamedev.net/forums/topic/80721-what-is-a-dword/
#### Archived This topic is now archived and is closed to further replies.

# What IS a DWORD?

## Recommended Posts

Yeah, I'm sure it's a newb question, but what exactly IS a DWORD? I know it's some sort of crazy MS typedef, but other than that, I dunno. I looked it up on MSDN but only got its general usage, not exactly what it is and what makes it up.

32 bit unsigned integer

bit = ...1 bit...
nybble = 4 bits = 1/2 byte
byte = 8 bits = 2 nybbles
WORD = 2 bytes = 4 nybbles = 16 bits
DWORD = 2 WORDs = 4 bytes = 8 nybbles = 32 bits
QWORD = 2 DWORDs = 4 WORDs = ..... = 64 bits

There might be more; if there are, I don't know of them. EDIT: DWORD stands for Double Word, QWORD for Quad (prefix meaning 4). I suppose the next would be Oct (prefix meaning 8). "I've learned something today: It doesn't matter if you're white, or if you're black...the only color that REALLY matters is green" -Peter Griffin Edited by - matrix2113 on February 17, 2002 7:04:27 PM

Thankya! But here's question 2. How can DWORDs take statements like: DWORD dwBlah; dwBlah = STATEMENT1 | STATEMENT2 | STATEMENT3; What's with the pipes and the multiple option thing? How exactly does the compiler interpret that?

It's a binary OR operation. Eg - 0 OR 0 = 0, 0 OR 1 = 1, 1 OR 0 = 1 and 1 OR 1 = 1. In cases like your example it's usually used to implement flags. You can then check for whether a flag is set like this: if ( (dwBlah & aFlag) != 0 ) doSomething();

Ok, now, what are the flags? They obviously have to hold some value (0 or 1?). How exactly is that assigned?

Each flag has a different bit set. A DWORD is 8 bits long, therefore there are a maximum of 8 different flags that can be used. eg. Flag1 = 00000001, Flag2 = 00000010, Flag3 = 00000100, Flag4 = 00001000 and so on, therefore if dwSomething = Flag1 | Flag2 | Flag3 then dwSomething = 00000001 | 00000010 | 00000100 = 00000111.

Sorry, my brain wasn't working today - A DWORD is 32 bits long, therefore there is a maximum of 32 possible flags.

A flag is a value that a function looks at to determine what it should be doing. Arild Fines showed how a function checks to see if a flag is set in the function. Flags are defined by whoever writes the function; usually they are assigned powers of two because when you OR them together, powers of two won't affect each other. For example, if we defined some flags with decimal values EX_1 = 1; //00000001 in binary EX_2 = 2; //00000010 in binary EX_3 = 4; //00000100 in binary EX_4 = 8; //00001000 in binary When we OR them together their values will be superimposed. EX_1 | EX_2 will be 3, or 00000011 in binary. EX_4 | EX_3 | EX_1 will be 13, or 00001101 in binary. Extracting a specific flag from a dword uses the AND (&) operator. For AND, both bits must be 1 to return 1, or it will return 0. Suppose you have value 00001101 and you want to see if the flag EX_2 is in it; using 00001101 AND EX_2 would return 0 (flag EX_2 is not included) because 00001101 & 00000010 = 00000000. 00001101 AND EX_4 will return EX_4 (the flag is included) because 00001101 & 00001000 = 00001000. By the way, I've been using single bytes to explain the use of flags and AND and OR so that you could see the individual 0's and 1's without me having to type out 32 of them to represent a double word.
I'm sure you already figured that though

DWORD is a typedef from the Windows API. Most such typedefs are anachronisms. For example, a WORD is 16 bits and a DWORD is 32 bits (as has been said), but those names are currently misnomers. You see, a "word" refers to the addressing size of the target processor. When the Windows API was first devised, a word was 16 bits, hence the 16-bit size for the WORD type. But most processors these days address 32 bits at a time, so they have 32-bit words. For backwards compatibility, WORD stayed the same, even though—from the name—it should be 32 bits. Other types like LPARAM and WPARAM have become the same size (32 bits), even though WPARAM used to be 16 bits as is indicated by the W prefix (which stands for WORD). So, it's confusing, especially since Intel parlance has followed the same trend of backwards compatibility with naming. But don't forget what a real word is—that it's dependent on the processor.

Thank you everyone, I get it now. Sorry about the multitude of questions though. I just like to understand something inside out and it drives me nuts when I don't.
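The flag mechanics described in the thread are easy to demonstrate. A minimal sketch in Python (plain integers play the role of a 32-bit DWORD here, and the flag names are made up for illustration):

```python
# Hypothetical flags, one bit each (powers of two), as described above.
FLAG_READ   = 0x01  # 00000001
FLAG_WRITE  = 0x02  # 00000010
FLAG_APPEND = 0x04  # 00000100
FLAG_LOCK   = 0x08  # 00001000

# Combine flags with bitwise OR: the set bits are superimposed.
dw_flags = FLAG_READ | FLAG_WRITE | FLAG_APPEND
print(f"{dw_flags:08b}")  # 00000111

# Test for a flag with bitwise AND: a nonzero result means the flag is set.
if (dw_flags & FLAG_WRITE) != 0:
    print("write flag is set")
if (dw_flags & FLAG_LOCK) == 0:
    print("lock flag is not set")
```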
2017-10-17 18:57:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21183116734027863, "perplexity": 3852.868159735287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822480.15/warc/CC-MAIN-20171017181947-20171017201947-00579.warc.gz"}
https://xiences.com/dokuwiki/doku.php?id=physicswiki:occupation_calculation
### Occupation calculation

Calculates the equilibrium properties of the device from the previously calculated band structure. In the main window the following parameters can be set:

  * The convergence parameters of the nonlinear Poisson equation and of the charge-neutrality calculation.
  * Whether the polarization charge needs to be calculated, and whether it is included in the Poisson equation. The classical spectrum can also be calculated, which treats the device as a bulk semiconductor.
  * The quantum mechanical model used for the envelope-function calculation. It can be chosen whether dispersion is considered at all, and whether it is used for the charge-density calculation, the self-consistent calculation, or the optical-properties calculation.
  * The quantum mechanical optical spectrum of the device can be calculated by enabling the corresponding parameters. The interaction-energy option calculates the Coulomb interaction energy between charge-carrier states.

#### Thermal equilibrium

Calculates the temperature distribution in the device with a linear static model.

#### Charge models

The distribution of charge carriers in semiconductors is influenced by the band structure of the sample and by external effects such as temperature inhomogeneity and the applied voltage. A common assumption states that the charge density of a carrier is determined by the local physical quantities ($E_f$ Fermi energy, $U$ potential, $T$ temperature, $E_{bands}$ conduction and valence band energies):

$n(r_0) = n(E_f(r_0), U(r_0), T(r_0), E_{bands}(r_0))$

The carrier density distribution must also fulfill the Poisson equation in position space; the standard coupled form is written out at the end of this page. From the equation above, if the $n(U)$, $p(U)$ dependence is known, the charge distribution in the sample can be calculated.

#### Doping

Different impurities can be defined for each simulation domain of the structure: acceptors and donors, with different concentration profiles, degeneracies, and energy levels.

#### Electrostatics

Defines the boundary conditions for the Poisson equation. If a charge-neutral condition is specified, the calculation step tries to find the neutral device potential by shifting the defined potential values by a constant. If no boundary condition is defined, it calculates as if the first grid point were set to charge neutral. (Except for the Device calculation: when a contact is defined, it tries to find the charge-neutral contact.)

#### Quantum mechanics

Sets the boundary condition of the quantum mechanical envelope-function calculation. Can be ignored if we calculate classically or if a Neumann boundary condition is enough.
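To connect the Charge models and Electrostatics sections above, the coupled system can be written in its textbook form; this is a hedged sketch of standard semiconductor electrostatics, not the tool's documented discretization:

$\nabla \cdot \bigl( \varepsilon(r)\, \nabla U(r) \bigr) = -q \bigl( p(r) - n(r) + N_D^{+}(r) - N_A^{-}(r) \bigr), \qquad n(r) = n\bigl( E_f(r), U(r), T(r), E_{bands}(r) \bigr)$

The local-quantities assumption closes the loop: the potential $U$ enters the carrier densities $n$ and $p$, which in turn source the Poisson equation, hence the nonlinear, self-consistent solve.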
2020-09-20 01:02:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8544840216636658, "perplexity": 863.8090991540582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400193087.0/warc/CC-MAIN-20200920000137-20200920030137-00463.warc.gz"}
https://math.libretexts.org/Bookshelves/Geometry/Euclidean_Plane_and_its_Relatives_(Petrunin)/01%3A_Preliminaries/1.02%3A_What_is_a_model
1.2: What is a model?

The Euclidean plane can be defined rigorously in the following way:

Define a point in the Euclidean plane as a pair of real numbers $$(x, y)$$ and define the distance between the two points $$(x_1, y_1)$$ and $$(x_2, y_2)$$ by the following formula:

$\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}.$

That is it! We gave a numerical model of the Euclidean plane; it builds the Euclidean plane from the real numbers, while the latter is assumed to be known.

Shortness is the main advantage of the model approach, but it is not intuitively clear why we define points and distances this way. On the other hand, the observations made in the previous section are intuitively obvious — this is the main advantage of the axiomatic approach. Another advantage lies in the fact that the axiomatic approach is easily adjustable. For example, we may remove one axiom from the list, or exchange it for another axiom. We will make such modifications in Chapter 11 and further.

This page titled 1.2: What is a model? is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Anton Petrunin via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
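As a quick worked example (ours, not from the text): under this model the distance between the points $$(1, 2)$$ and $$(4, 6)$$ is

$\sqrt{(1 - 4)^2 + (2 - 6)^2} = \sqrt{9 + 16} = 5.$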
2023-02-02 13:48:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.958345890045166, "perplexity": 200.0777025337263}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500028.12/warc/CC-MAIN-20230202133541-20230202163541-00338.warc.gz"}
https://msp.org/agt/2008/8-2/p04.xhtml
#### Volume 8, issue 2 (2008)

The classification and the conjugacy classes of the finite subgroups of the sphere braid groups

### Daciberg L Gonçalves and John Guaschi

Algebraic & Geometric Topology 8 (2008) 757–785

##### Abstract

Let $n \ge 3$. We classify the finite groups which are realised as subgroups of the sphere braid group $B_n(\mathbb{S}^2)$. Such groups must be of cohomological period $2$ or $4$. Depending on the value of $n$, we show that the following are the maximal finite subgroups of $B_n(\mathbb{S}^2)$: $\mathbb{Z}_{2(n-1)}$; the dicyclic groups of order $4n$ and $4(n-2)$; the binary tetrahedral group $T^{\ast}$; the binary octahedral group $O^{\ast}$; and the binary icosahedral group $I^{\ast}$. We give geometric as well as some explicit algebraic constructions of these groups in $B_n(\mathbb{S}^2)$ and determine the number of conjugacy classes of such finite subgroups. We also reprove Murasugi's classification of the torsion elements of $B_n(\mathbb{S}^2)$ and explain how the finite subgroups of $B_n(\mathbb{S}^2)$ are related to this classification, as well as to the lower central and derived series of $B_n(\mathbb{S}^2)$.

##### Keywords

braid group, configuration space, finite group, mapping class group, conjugacy class, lower central series, derived series

##### Mathematical Subject Classification 2000

Primary: 20F36
Secondary: 20F50, 20E45, 57M99
2022-08-08 02:16:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 16, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.697914719581604, "perplexity": 671.6765815711415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570741.21/warc/CC-MAIN-20220808001418-20220808031418-00398.warc.gz"}
https://chemistry.stackexchange.com/questions/104725/is-phosphorine-c%E2%82%85h%E2%82%85p-aromatic
# Is phosphorine (C₅H₅P) aromatic?

Phosphorine seems aromatic, as it has 6 conjugated electrons. But the answer given is that it is not. This seems odd, since pyridine has a similar structure and is aromatic. Thus I ask: is phosphorine aromatic or not?

• Phosphinine is aromatic, but somewhat less than benzene. – mykhal Nov 23 '18 at 16:23

## 1 Answer

Phosphorine (IUPAC: phosphinine) actually has aromatic character nearly as great (88%) as that of benzene. According to the reference, phosphorine is sufficiently stable to be handled without air-free techniques, and it undergoes electrophilic substitution reactions similar to those of benzene.

• With "reference" do you mean the Wikipedia article? I have quite a bit of trouble believing this answer without knowing the context in which these 88% of aromatic character came about. Since there isn't even an agreed-upon rigorous definition of aromaticity, the number without context is as informative as a number picked from the phone book. – Martin - マーチン Dec 4 '18 at 16:38
• So ... you apparently think this is not aromatic? – Oscar Lanzi Dec 5 '18 at 0:38
• No, that is not what I said. I just don't have any trust in that assessment of the 88% without any context of how this number came about. – Martin - マーチン Dec 5 '18 at 9:22
2020-11-30 21:18:41
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8481839299201965, "perplexity": 2196.980856379522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141486017.50/warc/CC-MAIN-20201130192020-20201130222020-00322.warc.gz"}
http://gmatclub.com/forum/if-x-2-y-2-29-what-is-the-value-of-x-y-167043.html
# If x^2 + y^2 = 29, what is the value of (x - y)^2 ?

Math Expert:

The Official Guide For GMAT® Quantitative Review, 2nd Edition

If x^2 + y^2 = 29, what is the value of (x - y)^2 ?

(1) xy = 10
(2) x = 5

Data Sufficiency Question: 73
Category: Algebra; Simplifying algebraic expressions
Page: 158
Difficulty: 600

SOLUTION

Given: $$x^2+y^2=29$$. Question: $$(x-y)^2=x^2-2xy+y^2=(x^2+y^2)-2xy=29-2xy=?$$ So, basically we should find the value of $$xy$$.

(1) xy = 10. Directly gives us the value we need. Sufficient.

(2) x = 5. Even if we substitute the value of $$x$$ into $$x^2+y^2=29$$, we get two values for $$y$$: 2 and -2, hence two values for $$(x-y)^2$$: 9 and 49. Not sufficient.

Senior Manager:

Ans. A. From S1: we're given the value of x^2+y^2. To find (x-y)^2 = x^2 + y^2 - 2xy we need only xy, which is given in S1. Sufficient. From S2: x is given but y is not, so insufficient.

Intern:

x^2 + y^2 = 29. (x-y)^2 = ? or x^2 + y^2 - 2xy = ?
From St1) xy = 10, so x^2 + y^2 - 2xy = 29 - 2*10 = 9 --> Sufficient
From St2) x = 5, then y = 2 or -2.
Results in two values --> not sufficient. Ans - A

Manager:

(x-y)^2 = x^2 + y^2 - 2xy = 29 - 2xy. So we can rephrase the question as "what is the value of xy?" or "what is the value of x and y?"
Statement (1): Sufficient, since we know the value of xy.
Statement (2): Insufficient, since we do not know the value of y.
Ans is (A).

EMPOWERgmat Instructor:

Hi All,

The posters in this thread all seem comfortable with the math involved, so I won't rehash any of that here. Instead, I want to point out a common-enough design 'element' in DS questions. In many prompts, the question writers are going to test the 'thoroughness' of your thinking. Can you "see" information in more than one way? As such, when dealing with DS questions, you should look for opportunities to "rewrite" information (and sometimes 'rewriting' the question is the "key" to solving it).

Here, we're told that X^2 + Y^2 = 29. That's an interesting piece of information and not something that we would be given all that often. There MUST be a reason why the writer put it there. The question asks for the value of (X-Y)^2. This is one of the Classic Quadratics, so it should get you thinking about FOILing. If we just take a moment to FOIL the question, we get "What is the value of X^2 - 2XY + Y^2?" NOW the purpose of that original piece of information is clear: it's part of the solution to the question. If we can figure out the value of XY, then we'll have the answer to the question.

From here, the rest of the math is pretty straightforward. Remember to look for these pattern-based shortcuts.
They will appear repeatedly, and while they take a little bit of extra work, doing THAT work will make solving the problem that much easier.

GMAT assassins aren't born, they're made,
Rich

Manager:

If x^2 + y^2 = 29, and according to statement II x = 5, then why can we not derive from that that y = 2 and xy = 10?

EMPOWERgmat Instructor:

If you plug X=5 into X^2 + Y^2 = 29, then what does that tell you about the value of Y? Be sure to be THOROUGH here. Is Y positive or negative? How would that impact the answer to the given question? Without too much work, you should be able to PROVE that there is more than one answer to the given question when X=5.

GMAT assassins aren't born, they're made,
Rich

Verbal Forum Moderator:

Hi, when you substitute x as 5: 5^2 + y^2 = 29, or y^2 = 4. We are not given anything about the sign of y, so y can be 2 or -2. In the two cases you will have two different answers.
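Collecting the algebra that runs through this thread into one place:

$$(x-y)^2 = x^2 - 2xy + y^2 = (x^2+y^2) - 2xy = 29 - 2xy$$

Statement (1): $$xy = 10 \;\Rightarrow\; (x-y)^2 = 29 - 20 = 9$$. Sufficient.

Statement (2): $$x = 5 \;\Rightarrow\; y^2 = 4 \;\Rightarrow\; y = \pm 2$$, so $$(x-y)^2$$ is either $$(5-2)^2 = 9$$ or $$(5+2)^2 = 49$$. Not sufficient.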
2016-06-29 20:00:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5328359007835388, "perplexity": 2628.947524114892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00026-ip-10-164-35-72.ec2.internal.warc.gz"}
https://blogs.mathworks.com/loren/2009/05/05/nice-way-to-set-function-defaults/
Nice Way to Set Function Defaults

Posted by Loren Shure

Last week, I wrote a post on the switch statement in MATLAB. One theme in the comments was about using switch (or not!) for setting default values for input arguments that users didn't initialize. I realized that there is a nice pattern for setting these values that uses some compact, but readable code.

Function Defaults

Sometimes I want to write a function that has some required inputs and some optional trailing arguments. If the arguments are specified by their order, and not by parameter-value pairs, there is a nice way to accomplish this that takes advantage of varargin. Note that neither of these methods checks the validity of the overridden elements.

Suppose my function requires the first 2 inputs, but there are 3 others that the user can choose to set, or allow them to take default values. In this scenario, once they choose to set an optional input, they must set all the optional ones that precede it in the argument list. Here's an example function header.

dbtype somefun2 1

1     function y = somefun2(a,b,opt1,opt2,opt3)

Here's another way I could write this function header, using varargin.

dbtype somefun2Alt 1

1     function y = somefun2Alt(a,b,varargin)

Setting Default Values

To set default values in somefun2, I could use a switch statement or use if-elseif constructs. Here I chose to use a switch statement.

type somefun2

function y = somefun2(a,b,opt1,opt2,opt3)
% Some function with 2 required inputs, 3 optional.

% Check number of inputs.
if nargin > 5
    error('myfuns:somefun2:TooManyInputs', ...
        'requires at most 3 optional inputs');
end

% Fill in unset optional values.
switch nargin
    case 2
        opt1 = eps;
        opt2 = 17;
        opt3 = @magic;
    case 3
        opt2 = 17;
        opt3 = @magic;
    case 4
        opt3 = @magic;
end

The code is verbose and, in my opinion, not very pretty. It's also error-prone. If I decide to change a default setting for one value, I have to update each relevant case. What a drag! Here's a way to set the defaults in one location and then overwrite the ones the user specified.

type somefun2Alt

function y = somefun2Alt(a,b,varargin)
% Some function that requires 2 inputs and has some optional inputs.

% Only want 3 optional inputs at most.
numvarargs = length(varargin);
if numvarargs > 3
    error('myfuns:somefun2Alt:TooManyInputs', ...
        'requires at most 3 optional inputs');
end

% Set defaults for optional inputs.
optargs = {eps 17 @magic};

% Now put these defaults into the valuesToUse cell array,
% and overwrite the ones specified in varargin.
optargs(1:numvarargs) = varargin;
% or ...
% [optargs{1:numvarargs}] = varargin{:};

% Place optional args in memorable variable names.
[tol, mynum, func] = optargs{:};

First I place the default optional values into a cell array optargs. I then copy the cells from varargin to the correct cells in optargs, and I have overridden the defaults. I have only one place where the default values are set, and the code doesn't grow dramatically in length as I add additional optional inputs. Finally, I spread out the cells from varargin into well-named individual variables, if I want to. Since each element in a cell array is its own MATLAB array, and MATLAB has copy-on-write behavior, I am not creating extra copies of these optional inputs (see this post for more information).

Other Methods for Dealing with Optional Inputs

There are other methods for dealing with optional inputs. These include specifying parameter-value pairs. With or without such pairs, you can use inputParser to help set the values you use.
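As a hedged sketch of that inputParser route (using the methods available since its introduction in R2007a; the argument names from somefun2 are reused purely for illustration, and the function name somefun2IP is made up):

function y = somefun2IP(a, b, varargin)
% Sketch: same two required and three ordered optional inputs,
% parsed with inputParser instead of manual switch logic.
p = inputParser;
p.addRequired('a');
p.addRequired('b');
p.addOptional('tol', eps, @isnumeric);
p.addOptional('mynum', 17, @isnumeric);
p.addOptional('func', @magic, @(f) isa(f,'function_handle'));
p.parse(a, b, varargin{:});
% All defaults live in one place; parsed values come back in p.Results.
tol   = p.Results.tol;
mynum = p.Results.mynum;
func  = p.Results.func;
y = {a b tol mynum func};   % placeholder body, as in the examples above

Called as somefun2IP(1,2), the defaults apply; somefun2IP(1,2,1e-6,42) overrides the first two optional values, just like the cell-array version.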
You can specify the optional input in a struct, with fields containing the various options, and you can use cell arrays, but then you have to decide how to structure and specify the contents.

Your Techniques for Specifying Optional Values

Do you have a technique for specifying optional values that works well for you? Please share with us here.

Published with MATLAB® 7.8

Cris Luengo replied on : 1 of 37
Nice! I think I won't use my old way of setting defaults any more! This is what I used to do:

function y = somefun2(a,b,opt1,opt2,opt3)
if nargin<3
    opt1 = 1;
end
if nargin<4
    opt2 = 2;
end
if nargin<5
    opt3 = 3;
end

I like your way a lot better!

Scott Hirsch replied on : 2 of 37
I'll second Cris! I use varargin, but my code still looks pretty much like Cris' example (most of it on the File Exchange for the world to see). I love this elegant approach of just swapping out the contents of a cell array. Thanks, Loren!

Loren replied on : 3 of 37
Thierry- inputParser was introduced in R2007a, MATLAB 7.4. You can find information like this in the release notes. –Loren

Gautam Vallabha replied on : 4 of 37
I prefer to use EXIST with explicitly listed parameters in the function definition line.

function myfcn(a,b,tol,mynum,func)
if ~exist('tol','var'), tol = eps; end
if ~exist('mynum','var'), mynum = 17; end
...

With varargin and nargin, I have to change multiple lines to add a new optional parameter or change the order of parameters. It also becomes tricky to allow uses like myfcn(100, 200, eps, [], @rand) where the [] means that the function should assign the default value for mynum. With the EXIST test, it becomes straightforward:

if ~exist('mynum','var') || isempty(mynum), mynum = 17; end

Steve Eddins replied on : 5 of 37
I'll echo Cris and Scott. I usually write it the way Cris showed, but I think I like your cell array manipulation better.

Amy replied on : 6 of 37
The cell array approach is rather clever. However, the problem with using varargin is that you aren't able to get useful function hints for your functions (introduced in 2008b); you end up with ... in place of varargin. That's why I've started writing code similar to the example Cris showed, except that I usually do

if nargin<3 || isempty(opt1)
    opt1 = 1;
end

to allow the user to specify opt1 as empty to use the default value and still specify additional opts. I use nargin because using exist takes longer to execute.

Kent Conover replied on : 7 of 37
I use a function to determine whether a variable exists in the workspace. If the variable is not defined then it is set to a default. For example

if isundefined('mount_col')
    mount_col = 'TOKEN NUMBER';
end

Here is the function "isundefined.m":

function tf = isundefined(in_var_name)
%% tf = isundefined(in_var_name)
%
% Returns a logical indicating whether the variable "in_var_name" either
% does not exist or is empty in the "caller" workspace.
%
% "in_var_name" is a text string that specifies the name of the variable
% to search for.
%
% Kent Conover, 12-Mar-08
cmd_txt = ['exist(''', in_var_name, ''', ''var'');'];
if evalin('caller', cmd_txt)
    cmd_txt = ['isempty(', in_var_name, ');'];
    tf = evalin('caller', cmd_txt);
else
    tf = true;
end

Loren replied on : 8 of 37
Amy- You are right about function hints.
Kent- Your code seems to depend on naming the variables in the caller the same as the names in the actual function definition. Seems a bit fragile for my taste. –Loren

Thierry Dalon replied on : 9 of 37
I did not know the function inputParser!
It is not available in R14SP3 but is in R2008a; when was it introduced?

There are also some interesting contributions on the topic in the FEX:
– parse_pv_pairs: https://www.mathworks.com/matlabcentral/fileexchange/9082
– parseargs: https://www.mathworks.com/matlabcentral/fileexchange/10670
– parseArgs: #3448
– checkfield: #17441

all with pluses and minuses. I am not sure inputParser aggregates the features of all the FEX contributions, like use of aliases, abbreviations, feasible values, etc. I would have to check... Thanks anyway for dealing with this topic, which is quite important as soon as you start programming a little bit in MATLAB!

Ben replied on : 10 of 37
We use this approach in many places throughout our 100k line MATLAB codebase; a colleague originally came up with it. Some functions we have written over time have many, many options, and so we specify the default values with a structure at the beginning of the file. The settings we wish to override are contained in the input 'settings' structure, and at the end we use the wonderful setstructfields() function (which comes from MATLAB; I am unsure if it comes from a toolbox) to override the values in defaults with those contained in settings, and then dump the structure into the workspace using a custom function struct2workspace() which basically runs assignin('caller','var',value) on each member of settings. (Although we could just leave it there and dereference all the items right from the settings structure.)

It kind of messes with what MLINT wants to do in terms of code analysis, but it's a great tool for us, so that sane defaults are recorded right in the file where they are used and not elsewhere. In some cases we have multiple levels of this going on, and I have no idea how we'd do without it – many of our functions have a sizable number of options, possibly because this approach has made it manageable.

function out = funcname(arg1,arg2,settings)
defaults.variable1 = val_1; % define default values
defaults.variable2 = val_2;
defaults.variable3 = val_3;
% Overwrite defaults with specified settings (if any)
settings = setstructfields(defaults,settings);
struct2workspace(settings); % optional

Daniel replied on : 11 of 37
I join the Cris team as well. But the cell-array approach was really nifty. As for the loss of function hints when using varargin, I unfortunately have to say that the function hints have been a disappointment to me. They take far too long to pop up, and unless you type really carefully, they seem to disappear. Now, if there were a button you could press to force MATLAB to display the function hints (like the tab in tab completion), maybe I would start using them... –DA

Joe Kirk replied on : 12 of 37
Nice Loren! I like your method quite a bit and will probably start using it more often. I have used Cris' method in the past, but more recently started using SWITCH statements, except that I wrap them in a FOR loop to eliminate redundant default settings. For example:

function varargout = tsp_ga(xy,dmat,pop_size,num_iter,show_prog,show_res)
% Process Inputs and Initialize Defaults
nargs = 6;
for k = nargin:nargs-1
    switch k
        case 0
            xy = 10*rand(50,2);
        case 1
            N = size(xy,1);
            a = meshgrid(1:N);
            dmat = reshape(sqrt(sum((xy(a,:)-xy(a',:)).^2,2)),N,N);
        case 2
            pop_size = 100;
        case 3
            num_iter = 1e4;
        case 4
            show_prog = 1;
        case 5
            show_res = 1;
        otherwise
    end
end

I'm not sure how I could make your method work in the case of the distance matrix (dmat) input, because its size depends on the number of xy points.
In other words, I don't have a way to set the default for the second input without knowing what the first input is.

Loren replied on : 13 of 37
Joe- It's true you couldn't set dmat on the first pass. It would have to be after doing all the other default setting so you could be sure of the first value. You could still set all the rest first with the cell technique, I think. –Loren

Yair Altman replied on : 14 of 37
I use a variation of the method Ben mentioned in comment #10 above. Basically, I set a structure of default values at the top of the code. This is highly readable/maintainable, since the struct field names immediately convey the meaning, as opposed to a cell array in which separate elements might be misunderstood or swapped by mistake. I then use a loop over the input arguments and update the specified struct field. The input args are expected in the usual Matlab P-V notation (..., 'paramName',paramValue, ...) and can therefore be specified in any order (another usability benefit). Unexpected parameter names cause an error to be displayed. Adding new parameters to this template is extremely easy, helping code maintainability and reuse. Here's a pared-down snippet from my recent UISplitPane submission on the File Exchange:

%% Process optional arguments
function paramsStruct = processArgs(varargin)
    % Get the properties in either direct or P-V format
    [parent, pvPairs] = parseparams(varargin);
    % Now process the optional P-V params
    try
        % Initialize with default values
        paramName = [];
        paramsStruct.dividercolor = '';
        paramsStruct.dividersize = 5;
        % other default params can be specified here
        supportedArgs = {'orientation','parent','tooltip',...
            'dividerwidth','dividercolor','dividerlocation',...
            'dividerminlocation','dividermaxlocation'};
        while ~isempty(pvPairs)
            % Ensure basic format is valid
            paramName = '';
            if ~ischar(pvPairs{1})
                error('invalidProperty','Invalid property');
            elseif length(pvPairs) == 1
                error('noPropertyValue',['No value specified for property ''' pvPairs{1} '''']);
            end
            % Process parameter values
            paramName = pvPairs{1};
            paramValue = pvPairs{2};
            %paramsStruct.(lower(paramName)) = paramValue; % good on Matlab7, no good on Matlab6...
            paramsStruct = setfield(paramsStruct, lower(paramName), paramValue); %#ok Matlab6
            pvPairs(1:2) = [];
            if ~any(strcmpi(paramName,supportedArgs))
                url = ['matlab:help ' mfilename];
                urlStr = getHtmlText(['<a href="' url '">' strrep(url,'matlab:','') '</a>']);
                error('invalidProperty',...
                    ['Unsupported property - type "' urlStr ...
                    '" for a list of supported properties']);
            end
        end % loop pvPairs
    catch
        if ~isempty(paramName)
            paramName = [' ''' paramName ''''];
        end
        error('invalidProperty',['Error setting uisplitpane property' paramName ':' char(10) lasterr]);
    end
end

In the code body, I simply check or use the value of paramsStruct.ParameterName. So simple. Note: the code uses Matlab's built-in parseparams function to separate the P-V argument pairs. This step can easily be done without resorting to parseparams, which is simply a convenience function.

Kieran Parsons replied on : 15 of 37
I use a variation on the inputParser theme. Some of the driving factors for me with complicated function input arguments are:
1) Setting of defaults.
2) Checking of values against some kind of basic check (anonymous function).
3) Clear help about the params.

So at the top of my function would be something like:

function my_func(varargin)
parser = my_custom_input_parser();
% Format:
%   param_name, ...
%   default, ...
%   test_function, ...
%   'One line help', ...
%   'Multiline help.');
'param1', ...
[1 2 3], ...
@isnumeric, ...
'One line help', ...
'Multiline help.');

% Check if help should be displayed ('help' is the input) and exit if so.
if parser.display_help_if_req(varargin{:}); return; end
p = parser.parse(varargin{:}); % p is a struct
end

This allows default setting, testing of values, and an easy (but non-standard) help mechanism. I like the inputParser approach of supporting either PV pairs or a struct + PV pairs. I also have a fairly simple method of the custom input parser to generate an html help page of the input arguments.

As more of my code moves towards object-oriented classes, I have adapted this procedure for class properties. In order to do this I generate a "parameters" class and use inheritance to bring these properties into my class. I know that Matlab can use dynamic properties, but I have found these to be quite slow for get/set access. By "generate" I mean that the parameters class code is automatically generated (i.e., I only need to write the template) using part of the help for the class (which makes it very easy for other members of my team to use the same procedure for their classes). For example, the "functional" class would be something like:

classdef my_class < my_class_params
%MY_CLASS is a class with the following parameters.
%
% Parameters:
%   param1 = One line help for param1
%            Multiline help.
%            Default = [1 2 3]
%            Basic test = @isnumeric
%
%   param2 = One line help for param2
%            Multiline help.
%            Default = [1 2 3]
%            Basic test = @isnumeric
%
    methods (Access = public)
        function this = my_class(varargin)
            % Varargin can contain PV pairs, a struct + PV pairs, or a my_class_params object + PV pairs.
            this = this@my_class_params(varargin{:});
        end
    end
end

All the defaults, help, and basic checks are now very visible in "my_class" and accessible to the normal help/doc functions or the html page version. If anything changes for these (or at startup), I rerun the function I have that parses these files and generates the my_class_params.m file using a template (via the excellent m2html package). One additional advantage of the class generation is that I can add "compile-time" options. For example, I have 2 templates: one that includes all the parameter checking in my_class_params, and one that does not (and is faster). It's like an "assertions on/off" debug flag that Matlab unfortunately does not have. Not simple, to say the least, but it's quite effective.

Matt Fig replied on : 16 of 37
Loren, I have only recently started using what I would call, now that I see your method, a hybrid method. It is less error-prone than assigning a new default in each switch case, and less verbose than that method as well. Here is an example:

function y = somefun2(a,b,tol,mynum,func)
% Some function with 2 required inputs, 3 optional.

% Check number of inputs.
if nargin > 5
    error('myfuns:somefun2:TooManyInputs', ...
        'requires at most 3 optional inputs');
end

defaults = {eps, 17, @magic}; % Store defaults, change them here only.

% Fill in unset optional values.
switch nargin
    case 2
        [tol mynum func] = defaults{:};
    case 3
        [mynum func] = defaults{2:3};
    case 4
        func = defaults{3};
end

I think I like this better when there are only a few optional arguments. As the number of optional arguments gets larger, the function declaration line becomes too ugly for me. In this case I think I would go with using varargin. Great article! Thanks.

Loren replied on : 17 of 37
Nice, Omid. You've sort of stolen my thunder for next week's post, but oh well!
Great minds :-) ... –Loren

Martin replied on : 18 of 37
I share the concerns of Gautam Vallabha about passing empty arguments in the varargin. The following changes to the code make it possible to use:

somefun2Alt(a,b)
somefun2Alt(a,b,[],1)
somefun2Alt(a,b,[],[],1)

function y = somefun2Alt(a,b,varargin)
% Some function that requires 2 inputs and has some optional inputs.

% Only want 3 nonempty optional inputs at most.
numvarargs = find(~cellfun('isempty',varargin));
if length(numvarargs) > 3
    error('myfuns:somefun2Alt:TooManyInputs', ...
        'requires at most 3 optional inputs');
end

% Set defaults for optional inputs.
optargs = {eps 17 @magic};

% Now put these defaults into the valuesToUse cell array,
% and overwrite the ones specified in varargin.
optargs(numvarargs) = varargin(numvarargs);

% Place optional args in memorable variable names.
[tol, mynum, func] = optargs{:};
y = {a b tol mynum func};

Omid replied on : 19 of 37
-Martin
I wrote a tiny ad hoc function, findnull (below), to do the trick. Soon after that, cellfun popped up in my head, but you'd taken care of my "blunder" alright. In your code snippet, we could drop the "find" to let logical indexing kick in (this is actually an mlint suggestion) and replace the string function name 'isempty' with a function handle @isempty. In addition to the well-known advantages of the latter over the former (in this simple case it's not obvious, but one would be performance: each time, a search is not required to find the function referenced by a handle), there are cases when string function names would be a headache. One that I have experienced, when dealing with legacy code, is when the program is to be compiled; mcc will not "see" the functions referenced using strings, and they must be supplied explicitly. (This happens even if str2func is used to construct handles from string names.)

For a very beautiful usage of function handles, and the scope intricacies associated with these "data types" when they're defined in a (sub)function and used in another function, I really recommend taking a look at John D'Errico's loopchoose (FEx ID 13276).

-Loren I'm sorry; I didn't mean to! But I'm sure your post will, as always, bring up something nifty. Can't wait to read it.

Omid replied on : 20 of 37
Nice use of cell arrays, Loren! As for what Gautam Vallabha replied about this becoming tricky when dealing with [] in the arguments, one could simply use a function like findnull (below) to take care of default value assignment when [] is passed in.

function somefcn(reqArg1,reqArg2,varargin)
% Usual checks for number of arguments...
% Beautifully, being a cell array, optArgs can accommodate structures (to
% pass options to an optimization solver, for example), function handles
% (to choose a particular optimization solver, for example), etc.
optArgs = {1,2,3};
nullArgs = findnull(varargin);
[optArgs{~nullArgs}] = varargin{~nullArgs};
% ...

function ind = findnull(a)
a = a(:);
len_a = length(a);
ind = false(len_a,1);
for ctr = 1:len_a
    if isequal(a{ctr},[]), ind(ctr) = true; end
end

Martin replied on : 21 of 37
Omid, I agree about the logical indexing. I was not aware of the issues with the function handle, so thanks for pointing that out!

Yair Altman replied on : 22 of 37
Omid & Martin – cellfun(@isempty,...) may be better for the compiler, but it is actually *worse* than cellfun('isempty') for performance. This is indeed counter-intuitive (and undocumented) but true.
The reason appears to be that using 'isempty' (as well as the other predefined string functions: 'isreal', 'islogical', 'length', 'ndims', 'prodofsize') uses specific code branches optimized for performance (at least as of R2007b – I haven't yet tested this on newer releases):

>> c = mat2cell(1:1e6,1,repmat(1,1,1e6));
>> tic, d=cellfun('isempty',c); toc
Elapsed time is 0.115583 seconds.
>> tic, d=cellfun(@isempty,c); toc
Elapsed time is 7.493989 seconds.

Now, the obvious solution would be for the internal Matlab code to check for function-handle equality and use the optimized version if possible. Perhaps a future Matlab release will do this. Hint hint MTW? :-)
Yair Altman

Loren replied on : 23 of 37
Yair- We could improve cellfun to check function handles to see if they match specified strings. Even then MATLAB would have to be careful in case the user has overridden the built-in version of whatever the string points to. –Loren

Omid replied on : 24 of 37
Yair- What can I say! That was indeed "counter-intuitive (and undocumented) but true." I applied a generally true statement to something that turned out to be a special case! Thanks for pointing this out. I tested this in R2008a and R2008b. The difference is actually an order of magnitude!

R2008a
>> c = mat2cell(1:1e6,1,repmat(1,1,1e6));
>> tic, d=cellfun('isempty',c); toc
Elapsed time is 0.024467 seconds.
>> tic, d=cellfun(@isempty,c); toc
Elapsed time is 0.929305 seconds.

R2008b
>> c = mat2cell(1:1e6,1,repmat(1,1,1e6));
>> tic, d=cellfun('isempty',c); toc
Elapsed time is 0.014660 seconds.
>> tic, d=cellfun(@isempty,c); toc
Elapsed time is 0.557050 seconds.

(These timings are after some warm-up – I didn't use Steve Eddins' timeit so that the results can be directly compared with what you reported for R2007b.)
-Omid

Kent Conover replied on : 25 of 37
Hi Loren, Thanks for replying to my post:

Loren replied on May 5th, 2009 at 20:17 UTC:
Kent- Your code seems to depend on naming the variables in the caller the same as the names in the actual function definition. Seems a bit fragile for my taste. –Loren

However, I am unsure about your concern. Perhaps my example was unclear. But to address your point, I have no problem calling such functions with different variable names in the caller function. Say func1 is defined thus:

function x = func1(y)
if isundefined('y')
    y = 1;
end
x = 2*y;

Then func1 can be called like this:

>> z = 2;
>> func1(z)
ans = 4

Moreover, this function works even if the argument is empty or absent, i.e.:

>> func1([])
ans = 2

or

>> func1()
ans = 2

So I can't see your point about fragility, unless you are concerned about maintaining the order and position of the parameters passed – in which case you have me! I keep the number of passed parameters <= 6. Nevertheless, I value your perspective, and I would appreciate knowing any further thoughts that you have on this issue. Cheers! -Kent

Loren replied on : 26 of 37
Kent- I think I misunderstood your code. I thought it depended on names being the same. Clearly not true. –Loren

Bryan Reed replied on : 27 of 37
I was about to post my method, but I saw that Gautam Vallabha already posted it, absolutely verbatim! The code fragment "~exist('varname','var') || isempty(varname)" appears all over the place in functions I've written over the past few years. I find this method to be extremely concise, readable, and flexible.
You can change the order or number of parameters in the function definition and it doesn't change the code for setting defaults. I don't know that it has any downsides, apart from the case where you need varargin because the user can supply an unlimited number of options. Typically I'll only use varargin for handing options down to a lower-level built-in function, for example if the function I'm writing is basically a wrapper for MATLAB's plot() function. Then the user can specify line types and colors etc., and I as the programmer don't have to think about how that would work.

Matt replied on : 28 of 37
Loren, Thanks for the write-up. In my opinion, inputting/outputting variables to a function is the bane of coding in any language. Everyone has their own opinion on how it should be done, and none of them work all that well apart, let alone together when you integrate code. This is an especially large problem when using a development coding environment like MATLAB, as not much is ever polished when it gets used. I am pleased to see the inputParser function added to the default framework though. Great step in the right direction for elegant and functional parsing of an input stream! As a user of the Python module pyparsing, I have seen first hand how creating a scheme for your parser can pay dividends in code flexibility, simplicity, and readability. I suggest you revisit this same exercise using inputParser to put the virtues on the 'big stage'.
-Matt

David replied on : 29 of 37
Hi Loren, I discovered inputParser a few months ago and am delighted. However, I'd like to be able to change some of the values of passed arguments within the function, as sometimes the default values of input parameters depend on the values of other input parameters. For example, in the function below, the default size of the moving-average window depends on the size of the data being smoothed. To do this, I create a new variable called "wind_size." However, I'd rather modify the value of p.Results.wind_size so that I have fewer variables to keep track of. When I try to do this, however, I get the following error message: "??? Setting the 'Results' property of the 'inputParser' class is not allowed." Is there a way to get around that limitation?
much thanks, -David

function outdata=movavg(indata,varargin)
%Smooths indata with a boxcar windowed moving average
p=inputParser;
p.addRequired('indata');        % scheme definitions needed for parse
p.addOptional('wind_size',[]);
p.parse(indata,varargin{:});
if isempty(p.Results.wind_size)
    wind_size=round(length(indata)/20);
else
    wind_size=p.Results.wind_size;
end
if wind_size<1, wind_size=1; end
n_outdata=length(indata)-wind_size+1;
outdata=zeros(1,n_outdata);
for a=1:n_outdata
    outdata(a)=mean(indata(a:a+wind_size-1));
end

Loren replied on : 30 of 37
Hi David- Not that I know of. You might use the link on the right to request an enhancement with technical support. –Loren

Nio replied on : 31 of 37
Hi David, I guess this comes too late, but anyway here's a workaround for your problem. You could extract the p.Results structure from your inputParser object:

p_res = p.Results;

Now you have a simple structure 'p_res' that can be modified as usual. Maybe this works for you. – Nio

Zhenyu He replied on : 32 of 37
optargs(1:numvarargs) = varargin;

From your previous function somefun2 I saw that

switch nargin
    case 2
        opt1 = eps;
        opt2 = 17;
        opt3 = @magic;
    case 3
        opt2 = 17;
        opt3 = @magic;
    case 4
        opt3 = @magic;
end

It seems that func is the argument if you only add one argument in the input, and you put the argument func at last.
So I think it should be

optargs(3-numvarargs+1:end) = varargin;

or the arguments should be in reverse order:

[func, mynum, tol] = optargs{:};

-Zhenyu

Loren replied on : 33 of 37
Zhenyu- func is meant to be the final optional argument, otherwise @magic is used. It's possible I'm missing something, but I think things are in the right order. –Loren

Erzieher60 replied on : 34 of 37
This page comes up fairly high on a Google search for "matlab function defaults", so it seems plenty of people will be looking here for methods to try. I am a fan of the 'varargin' method:

function foobar(varargin)
Defaults = {A,B,C...};
Defaults(1:nargin) = varargin;

– Two lines of short and readable code.
– Works with any data types or functions.
– Does not unnecessarily increase the cyclomatic (McCabe) complexity value.
– Is much much quicker than calling 'exist', 'isempty' or the like.
– Assigning, reassigning or using values requires careful attention to order. Be careful!
– The number of input arguments can also be more than the number of default values. The cell array 'Defaults' will have at least as many cells as there are input variables, but will contain no less than those that are defined as function defaults. Very useful, on occasion, where a minimum number of defaults need to be set.
– Empty values may be replaced with defaults using:

function foobar(varargin)
Defaults = {A,B,C...};
Inds = find(cellfun(@isempty,varargin));
varargin(Inds) = Defaults(Inds);
Defaults(1:nargin) = varargin;

but this is slower, due to the 'find' and 'cellfun', and no longer allows more input arguments than defaults (unless it is possible to guarantee that they are not empty).

All in all, a very neat way of setting function default values using current MATLAB syntax.

Henry M. Le replied on : 35 of 37
I tried to create the unitstep function.

function y = unitstep(time, ts)
% y = 0 when time < ts and y = 1 when time >= ts
y = [zeros(1,length(time)-length(ts)) ones(1,length(ts))];
end

When I tried to run the function, I did not see anything from the plot:

time = [-5:1:5];
z3 = unitstep(time,2);
plot(z3);

Loren replied on : 36 of 37
Henry- Your function is all zeros except the last point (which I bet you can't see on the right edge, I'm guessing), so you are seeing what you asked MATLAB to plot. Look at the output values of unitstep to see whether they are what you want. -Loren

George Mylnikov replied on : 37 of 37
This function parses the variable input list transparently, so that one can immediately see the list and the names of all optional inputs.

function varargout = parseVarargin(x, maxVarArgIn)
% parseVarargin
% This utility function parses the variable length input argument list.
% INPUTS:
%   x           - varargin from the calling function
%   maxVarArgIn - maximal number of the input arguments for the calling
%                 function.
% OUTPUT:
%   An array of size 1 x maxVarArgIn
%
% USAGE
% (based on an example by Loren Shure,
% http://blogs.mathworks.com/loren/2009/05/05/nice-way-to-set-function-defaults/)
%
% function y = somefun2Alt(a,b,varargin)
% % Some function that requires 2 inputs and has a maximum of 3 optional inputs.
% % Use parseVarargin to parse the inputs and place optional args in
% % memorable variable names:
% [tol, mynum, func] = parseVarargin(varargin, 3);
% % If only the first two optional arguments were supplied by the user, then
% % the variable func will have the value [].
% % Now set up some default values:
% if isempty(tol),   tol   = eps;    end
% if isempty(mynum), mynum = 17;     end
% if isempty(func),  func  = @magic; end
%
% Author: George Mylnikov
% Date: 28-Dec-2011

optArgIn = size(x,2);
if optArgIn > maxVarArgIn
    error('parseVarargin:TooManyInputs', ...
        'takes at most %d optional inputs', maxVarArgIn);
end
% Pad the input list with empty cells up to maxVarArgIn.
out = x;
if optArgIn < maxVarArgIn
    c = cell(1, maxVarArgIn - optArgIn);
    out = [x, c];
end
for k = 1:maxVarArgIn
    varargout{k} = out{k};
end
end
2018-04-21 09:52:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3252898156642914, "perplexity": 3236.0355198013667}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945111.79/warc/CC-MAIN-20180421090739-20180421110739-00505.warc.gz"}
https://plainmath.net/30913/knew-wavelength-meters-velocity-meters-second-formula-could-frequency
Brittney Lord 2021-09-30
If you knew that wavelength was 5 meters and velocity was 12 meters per second, what formula could you use to find frequency?

Expert answer:
$\lambda = \frac{v}{f}$, or equivalently $f = \frac{v}{\lambda}$, where:
λ = wavelength, in meters
v = velocity of the wave (c = 3.0 × 10^8 m/s for light, if not otherwise defined)
f = frequency, in Hz
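Plugging in the numbers given in the question (the original answer states the formula but does not carry out this step):

$f = \frac{v}{\lambda} = \frac{12\ \text{m/s}}{5\ \text{m}} = 2.4\ \text{Hz}$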
https://www.mathway.com/popular-problems/Trigonometry/300029
# Trigonometry Examples

Find the Exact Value: arctan(-√3)

The result can be shown in both exact and decimal forms.

Exact Form: $-\frac{\pi}{3}$

Decimal Form: $-1.04719755\ldots$
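The one-line reasoning behind the exact form (not spelled out on the original page): since $\tan\frac{\pi}{3} = \sqrt{3}$ and arctangent is an odd function with range $(-\frac{\pi}{2}, \frac{\pi}{2})$,

$\arctan(-\sqrt{3}) = -\arctan(\sqrt{3}) = -\frac{\pi}{3} \approx -1.0472$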
https://gre.kmf.com/question/qr/0?keyword=&page=1
Question list

In 2005, 80% of the Central City School District's total expenditures came from local property taxes, and the rest came from the state government's Aid to Schools Program. If the state had reduced its aid to the district by 50%, by what percentage would local property taxes have had to be increased in order for the district to maintain the same level of expenditures?

The revenue from lottery ticket sales is divided between prize money and the various uses shown in the graph labeled "Proceeds." In 2009, what percent of the money spent on tickets was returned to the purchasers in the form of prize money?

In how many years from 2001 through 2008, inclusive, did the sales of ABC Mega Stores exceed the average of the annual sales during that period?

In which of the following pairs of years were the ratios of Republican receipts to Democratic receipts most nearly equal?

In May 2010, readers who were at least 31 years old accounted for what fraction of all the readers of Magazine X?

In the five states listed, approximately what percent of the electoral votes in 1992 were received by Clinton?

What was the percent increase in total revenue from 2008 to 2009?

What is the measure of the central angle for the sector representing the percentage of students who take the bus?

If the 2009 income from Government was $30 million, how much income, in millions of dollars, did the charity collect from Corporations? $_____ million

If the total income received from Individuals in 2009 was $90 million, which of the following statements is true? Select all that apply.
https://www.achieversrule.com/2017/04/commercial-mathematics-practice-sets.html
# Commercial Mathematics Practice Test for Banking & SSC Exams

In most exams (SBI PO, IBPS PO, SSC CGL, etc.) you need to boost your speed to manage your time, so you should practice these types of commercial mathematics questions until you can answer them correctly under exam conditions. Speed and accuracy here create a major difference between the aspirants who get selected and those who don't.

### Quantitative Aptitude - Speed Math Practice Set

Q1. A trader marks his goods at such a price that after allowing a discount of 18%, he makes a profit of 28%. What is the marked price of an article whose cost price is ₹1804?
1) 2826 2) 2806 3) 2818 4) 2816 5) 2819

Q2. Anamika can do a piece of work in 15 days and Ananya can do it in 20 days. They work together for 4 days and then Ananya leaves. The number of days taken by Anamika to finish the remaining work is:
1) 10 days 2) 6 days 3) 8 days 4) 9 days 5) 12 days

Q3. Two numbers P and Q are such that the sum of 6% of P and 4% of Q is three-fourths of the sum of 5% of P and 9% of Q. The ratio P : Q is:
1) 11 : 9 2) 10 : 9 3) 8 : 9 4) 11 : 7 5) 11 : 8

Q4. In an election, 20% of voters did not participate. The winning candidate got 76% of the total votes cast and won by 46,800 votes. Find the total number of votes.
1) 1,10,500 2) 1,21,500 3) 1,22,500 4) 1,09,500 5) 1,12,500

Q5. Anu, Babloo, and Charu started a business by investing ₹32,800, ₹36,800 and ₹29,600 respectively. If after 10 months Babloo received ₹2714 as his share of profit, what amount did Charu get as her share of profit?
1) 2083 2) 2183 3) 2383 4) 2283 5) 2483

Q6. The perimeter of a rectangle, whose length is 8 m more than its breadth, is 92 m. What would be the area of a triangle whose base is equal to half of the diagonal of the rectangle and whose height is equal to the length of the rectangle? (The answer is approximate.)
1) 223 m² 2) 316 m² 3) 332 m² 4) 290 m² 5) 308 m²

Q7. A container holds 480 L of pure milk. 24 L of milk is replaced with 24 L of water; 24 L of the mixture is again replaced with the same amount of water, and this process is done once more. What is the amount of pure milk now?
1) 401.54 L 2) 421.54 L 3) 419.64 L 4) 411.54 L 5) 429.64 L

Q8. A 480 m long train takes 16 seconds to cross a man running at 12 km/h in the direction opposite to that of the train. What is the speed of the train?
1) 90 km/h 2) 84 km/h 3) 92 km/h 4) 96 km/h 5) 98 km/h

Q9. In a class, the average age of the students is 8 years, and the average age of the 16 teachers is 28 years. If the average age of the combined group of all the teachers and students is 10, then the number of students is:
1) 132 2) 144 3) 152 4) 164 5) 148

Q10. At what rate of simple interest will the interest become 8/3 of the total amount in 12 years?
1) 2% 2) 4% 3) 5% 4) 6% 5) 8%
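As a worked illustration (the original page gives no solutions), take Q1: a 28% profit on the cost price ₹1804 requires a selling price of 1804 × 1.28 = ₹2309.12, and since the selling price is the marked price M less an 18% discount,

$M \times 0.82 = 2309.12 \quad\Rightarrow\quad M = \frac{2309.12}{0.82} = 2816$

so the marked price is ₹2816, option 4.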
http://mathhelpforum.com/calculus/2819-help-stationary-points-surface-sadle-points-thing.html
# Math Help - Help with stationary points of a surface (saddle points)

1. ## Help with stationary points of a surface (saddle points)

I have an exam coming up and this will be in it. There is a problem in my notes I can't seem to do; can anyone help, please?

Find the stationary points of the following surface and determine their nature.

f(x,y) = x^3 + y^3 - 3x - 12y + 20

I understand the steps, but for some reason when I partially differentiate it, it never comes out like anything else in my notes. Any help would be very much appreciated.

2. Originally Posted by AVVIT
Find the stationary points of the following surface and determine their nature: f(x,y) = x^3 + y^3 - 3x - 12y + 20.

I might be totally wrong, but it seems to me you need to find the critical points of this function. Thus, you need
$\left\{ \begin{array}{l}\frac{\partial f}{\partial x}=0\\ \frac{\partial f}{\partial y}=0\end{array}\right.$
But,
$\frac{\partial f}{\partial x}=3x^2-3$
$\frac{\partial f}{\partial y}=3y^2-12$
Solving the equations we have
$x=-1,1$ and $y=-2,2$
Thus, the only possible points are
$(-1,-2,38),\ (-1,2,6),\ (1,-2,34),\ (1,2,2)$

3. That is one of the answers I get, but I thought it couldn't be right, as it is supposed to be harder than that. The examples in my notes, when partially differentiated, always end up with both x and y in the result, which makes them harder to transpose when calculating the maxima and minima.

4. To determine the nature of these points you need to find the value of
$D=\left| \begin{array}{cc} f_{xx}& f_{xy}\\ f_{xy}& f_{yy} \end{array} \right|$
We find that
$f_{xx}=6x, \quad f_{yy}=6y, \quad f_{xy}=0$
Therefore,
$D=\left| \begin{array}{cc} 6x&0\\ 0&6y \end{array} \right|=36xy$
At $(1,2)$ and $(-1,-2)$ we have $D=72>0$; since $f_{xx}(1,2)=6>0$, the point $(1,2)$ is a relative minimum, and since $f_{xx}(-1,-2)=-6<0$, the point $(-1,-2)$ is a relative maximum. At $(1,-2)$ and $(-1,2)$ we have $D=-72<0$, so both are saddle points.
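As a quick machine check of this classification, here is a minimal MATLAB sketch, assuming the Symbolic Math Toolbox is available (the variable names are illustrative):

    % Classify the stationary points of f(x,y) = x^3 + y^3 - 3x - 12y + 20
    % via the second-derivative test.
    syms x y
    f = x^3 + y^3 - 3*x - 12*y + 20;
    crit = solve(gradient(f, [x, y]) == 0, [x, y]);  % fx = 0 and fy = 0
    H = hessian(f, [x, y]);                          % [fxx fxy; fxy fyy]
    for k = 1:numel(crit.x)
        pt = [crit.x(k), crit.y(k)];
        Hk = double(subs(H, [x, y], pt));
        D  = det(Hk);                                % D = fxx*fyy - fxy^2
        if D < 0
            kind = 'saddle point';
        elseif Hk(1,1) > 0
            kind = 'relative minimum';
        else
            kind = 'relative maximum';
        end
        fprintf('(%g, %g): f = %g -> %s\n', double(pt(1)), ...
            double(pt(2)), double(subs(f, [x, y], pt)), kind);
    end

Running this should report the minimum at (1,2), the maximum at (-1,-2), and saddle points at (1,-2) and (-1,2), in agreement with the second-derivative test above.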
https://proofwiki.org/wiki/Mathematician:Eugenio_Beltrami
# Mathematician: Eugenio Beltrami

## Mathematician

Italian mathematician notable for his work on differential geometry and mathematical physics. He was the first to prove the consistency of non-Euclidean geometry, by modeling it on a surface of constant curvature, the pseudosphere, and in the interior of an $n$-dimensional unit sphere. He developed the singular value decomposition for matrices, which has subsequently been rediscovered several times. His use of differential calculus for problems of mathematical physics indirectly influenced the development of tensor calculus by Gregorio Ricci-Curbastro and Tullio Levi-Civita.

## History

• Born: 16 November 1835 in Cremona, Lombardy, Austrian Empire (now Italy)
• Died: 18 February 1900 in Rome, Italy

## Theorems and Definitions

Results named for Eugenio Beltrami can be found here. Definitions of concepts named for Eugenio Beltrami can be found here.

## Publications

• 1868: Teoria fondamentale degli spazii di curvatura costante (Annali di Mat. Ser. II Vol. 2: 232–255)
• 1902–1920: Opere matematiche di Eugenio Beltrami pubblicate per cura della Facoltà di scienze della r. Università di Roma (4 volumes)
https://www.transtutors.com/questions/problem-10-2a-part-level-submission-cling-on-ltd-sells-rock-climbing-products-and-al-2565602.htm
# Problem 10-2A (Part Level Submission) Cling-on Ltd. sells rock-climbing products and also operate...

Problem 10-2A (Part Level Submission)

Cling-on Ltd. sells rock-climbing products and also operates an indoor climbing facility for climbing enthusiasts. On September 1, 2015, the company had a balance of $12,600 in its Bank Loan Payable account, representing a loan borrowed from the local credit union on July 1. The loan and 6% interest are both payable at maturity, on September 30. Note that the company records adjusting entries only annually, at its year end, December 31. During the next four months, Cling-on incurred the following:

Sept. 1 Purchased inventory on account for $15,270 from Black Diamond, terms n/30. The company uses a perpetual inventory system.
Sept. 30 Repaid the $12,600 bank loan payable to the credit union (see opening balance), as well as any interest owed.
Oct. 1 Issued a six-month, 8%, $15,270 note payable to Black Diamond in exchange for the account payable (see the Sept. 1 transaction). Interest is payable on the first of each month.
Oct. 2 Borrowed $25,440 from Montpelier Bank for 12 months at 8% to finance the building of a new climbing area for advanced climbers (use the asset account Buildings). Interest is payable monthly on the first of each month.
Nov. 1 Paid interest on the Black Diamond note and the Montpelier Bank loan.
Dec. 1 Paid interest on the Black Diamond note and the Montpelier Bank loan.
Dec. 2 Purchased a vehicle for $28,470 from Auto Dealer Ltd. to transport clients to nearby climbing sites. Paid $8,190 as a down payment and borrowed the remainder from the Montpelier Bank for 12 months at 6%. Interest is payable quarterly, at the end of each quarter.
Dec. 31 Recorded accrued interest for the Black Diamond note and the Montpelier Bank loans.

(a) Record the above transactions. (Round answers to the nearest whole dollar, e.g. 5,275. Credit account titles are automatically indented when the amount is entered. Do not indent manually.) The blank journal form has columns Date | Account Titles and Explanation | Debit | Credit, with entries required for Sept. 1; Sept. 30; Oct. 1; Oct. 2; Nov. 1 (interest on the Black Diamond note); Nov. 1 (interest on the Montpelier Bank loan); Dec. 1 (interest on the Black Diamond note); Dec. 1 (interest on the Montpelier Bank loan); Dec. 2; and Dec. 31.
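As a worked illustration of one entry (the original page leaves the journal blank), consider the Sept. 30 repayment under the standard simple-interest convention, with interest accruing from July 1 to September 30 (3 months):

Interest = $12,600 × 6% × 3/12 = $189

so that entry would record $12,600 against Bank Loan Payable and $189 of Interest Expense, with a $12,789 credit to Cash.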
http://mathhelpforum.com/number-theory/154931-n-pppp-pp-1998-digits.html
# Math Help - N=PPPP.....PP (1998 digits)

1. ## N=PPPP.....PP (1998 digits)

Given N > 0 with N = PPPP.....PP (1998 digits, each equal to the digit P), what is the thousandth digit after the decimal point of sqrt(N)?

2. Originally Posted by harshad
Given N > 0 with N = PPPP.....PP (1998 digits), what is the thousandth digit after the decimal point of sqrt(N)?

Nobody else has attempted this, so I'll have a shot at it.

Start with the case where P = 9. Let N = 999...99 (2n digits). Eventually we'll want n = 999, but it's easier to start with a general n. Then $N = 10^{2n}-1$, and so
. . . . . \begin{aligned}\sqrt N = 10^n(1-10^{-2n})^{1/2} &= 10^n(1-\tfrac12\cdot10^{-2n} - \tfrac18\cdot10^{-4n} - ...) \quad{\scriptstyle\text{(using the binomial expansion for (1-x)^{1/2})}} \\ &= 10^n - 5\cdot10^{-n-1} - \tfrac18\cdot10^{-3n} - ... \\ &= \underbrace{999\cdots9}_{n\ 9\text{s}}.\underbrace{999\cdots9}_{n\ 9\text{s}}4999...\,. \end{aligned}
So the (n+1)th digit after the decimal point is a 4. In particular, if n = 999 then the thousandth digit after the decimal point is a 4.

That deals with the case when P = 9. If P = 1, then you can divide $\sqrt N$ by 3 to see that the 1000th digit of $\sqrt{111...1}$ (1998 digits) is a 1. If P = 4 then the 1000th digit is a 3. But for other values of P, when $\sqrt P$ is irrational, I doubt whether there is any straightforward analytical way to answer the question.
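A quick numerical sanity check of the P = 9 analysis for small n; a minimal sketch assuming MATLAB's Symbolic Math Toolbox (vpa) is available:

    % For N = 99...9 (2n digits), check that digit n+1 after the decimal
    % point of sqrt(N) is a 4, as derived above.
    for n = 1:4
        N = sym(10)^(2*n) - 1;             % the 2n-digit string of 9s
        s = char(vpa(sqrt(N), 4*n + 10));  % plenty of digits past position n+1
        frac = extractAfter(s, '.');       % digits after the decimal point
        fprintf('n = %d: digit %d after the point is %c\n', n, n+1, frac(n+1));
    end

For n = 999 (the 1998-digit case) the same pattern puts the 4 in position 1000, matching the analysis.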