Dataset columns: url (string, lengths 14 to 2.42k), text (string, lengths 100 to 1.02M), date (string, length 19), metadata (string, lengths 1.06k to 1.1k)
https://itprospt.com/num/1215421/0-30z-341-0-20z-c-ho2-tl-quot-2-czhuo22-3h-0-1-260-2302
5 # [garbled chemical-equation exercise; the reaction text did not survive extraction]

## Question

###### [garbled chemical-equation exercise, as in the title]

#### Similar Solved Questions

##### Question 16

Find f(x) - g(x) when f(x) and g(x) are the given polynomials. [The coefficients and the multiple-choice answers are garbled in the extraction; f appears to be a cubic and g appears to contain the terms 4x² and 12.]

##### Which pair of lineages is most ancient within Bilateria?

Panarthropoda and Nematoda; Lophotrochozoa and Ecdysozoa; Protostomia and Deuterostomia; Vertebrata and Cephalochordata; Chordata and Echinodermata.

##### Item 3 of 5

For the vectors shown in the figure, determine (a) the magnitude and (b) the direction of the requested vector combination [vector labels garbled in extraction]. Express each answer using three significant figures. [Figure residue removed; the figure showed vectors A, B, and C with C = −31.0.]

##### Consider the basis 1 + x, x + x², 1 + x² for P₂

Consider the basis 1 + x, x + x², 1 + x² for the vector space P₂ of all polynomials of degree at most 2. Write the polynomial f = 4x² + 5x + 2 in terms of this basis. We have to find a, b, and c so that a(1 + x) + b(x + x²) + c(1 + x²) = 4x² + 5x + 2 (for all x), or (b + c)x² + (a + b)x + (a + c) = 4x² + 5x + 2. This means b + c = 4, a + b = 5, and a + c = 2. Solving this system gives a = 3/2, b = 7/2, and c = 1/2.
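The 3-by-3 system just derived (b + c = 4, a + b = 5, a + c = 2) can be checked numerically; a minimal sketch with numpy (assumed available):

```python
import numpy as np

# Matching coefficients of 1, x, x^2 in a(1+x) + b(x+x^2) + c(1+x^2)
# against 4x^2 + 5x + 2 gives three linear equations in (a, b, c):
M = np.array([[1.0, 0.0, 1.0],   # constant term: a + c = 2
              [1.0, 1.0, 0.0],   # x term:        a + b = 5
              [0.0, 1.0, 1.0]])  # x^2 term:      b + c = 4
rhs = np.array([2.0, 5.0, 4.0])

a, b, c = np.linalg.solve(M, rhs)
print(a, b, c)  # 1.5 3.5 0.5, i.e. a = 3/2, b = 7/2, c = 1/2
```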
Consider the linear transformation W : P₂ → P₂ given by W(f(x)) = x·f′(x). Give the matrix for W with respect to the basis 1 + x, x + x², 1 + x² above.

##### WeBWorK ma214-f19-004 hw03: Problem 2

(1 point) A tank contains 60 kg of salt dissolved in 2000 L of water. Pure water enters the tank, and the well-mixed solution drains from the tank; the inflow and outflow rates (in L/min) are garbled in the extraction.

(a) What is the amount of salt in the tank initially? amount = ___ (kg)
(b) Find the amount of salt in the tank after the stated number of hours. amount = ___ (kg)
(c) Find the concentration of salt in the solution in the tank as time approaches infinity. (Assume your tank is large enough to hold all the solution.) concentration = ___ (kg/L)

##### A voltaic cell

A voltaic cell is created using silver (Ag / Ag⁺) as one electrode and manganese (Mn / Mn²⁺) as the other electrode.

Reduction half-reaction: Ag⁺(aq) + e⁻ → Ag(s), E° = 0.80 V
Reduction half-reaction: Mn²⁺(aq) + 2e⁻ → Mn(s), E° = −1.18 V

a) Which metal is the anode and which is the cathode?
b) Write a balanced chemical equation for the redox reaction occurring in the cell.
c) Draw a diagram of the cell. Be sure to label the cathode, the anode, Ag, Mn, Ag⁺, Mn²⁺, the direction of electron flow, and the direction of cation and anion flow.
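For the linear-transformation exercise above (W(f(x)) = x·f′(x) on P₂), the matrix in the basis {1 + x, x + x², 1 + x²} can be computed by expanding W of each basis vector back in that basis. A sketch with numpy; the final matrix is computed here, not quoted from the source:

```python
import numpy as np

# Basis of P2 as coefficient vectors (constant, x, x^2), one per row:
B = np.array([[1, 1, 0],   # 1 + x
              [0, 1, 1],   # x + x^2
              [1, 0, 1]])  # 1 + x^2
M_basis = B.T.astype(float)  # columns are the basis vectors

def W(p):
    """W(p) = x * p'(x) acting on coefficient vectors (c0, c1, c2)."""
    c0, c1, c2 = p
    # p' = c1 + 2*c2*x, so x*p' = c1*x + 2*c2*x^2
    return np.array([0.0, c1, 2.0 * c2])

# Column j of the matrix of W = coordinates of W(b_j) in the basis,
# obtained by solving M_basis @ coords = W(b_j).
cols = [np.linalg.solve(M_basis, W(b)) for b in B.astype(float)]
W_matrix = np.column_stack(cols)
print(W_matrix)
```

The result works out to [[1/2, −1/2, −1], [1/2, 3/2, 1], [−1/2, 1/2, 1]] (up to floating-point rounding); as a sanity check, its trace is 3, matching the eigenvalues 0, 1, 2 of W on monomials.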
##### Trigonometric expression

Simplify: 2 sin²A · cos A · sec A + cos A [remainder of the expression garbled in extraction].

##### 5. Jet and wind vectors

A jet is flying with a navigational bearing of 240° at 550 mph. It experiences a wind blowing 50 mph in the direction S 20° W.

a) Express the velocity of the wind as a vector in (i) magnitude/direction form, (ii) linear-combination form, and (iii) component form. w = ___
b) Express the velocity of the jet relative to the air as a vector in the same three forms. v = ___
c) Find the true velocity of the jet relative to the ground as a vector in component form (nearest ___).

##### Circle through three given points

Given three noncollinear points, there is one and only one circle that passes through them. Knowing that the equation of a circle may be written in the form
$$x^{2}+y^{2}+a x+b y+c=0$$
find an equation of the circle passing through the given points
$$(-5,0),(2,-1), \text { and }(4,3)$$

##### Using data set 13-11

Fit the data according to the following relation: Y = b0 + b1·X1 + b2·X2 + b3·X1^2. Do you consider the revised fit equation to be an improvement over the simple linear fit equation Y = b0 + b1·X1 + b2·X2? True / False

##### Question 6 (1 point)

Evaluate to the stated number of decimal places: [expression garbled; it appears to involve the factors 0.10, 100, and 365, as in a simple-interest calculation].

##### Limits

Evaluate:
lim_{x→0} (1 + 2x)^(−3x) [exponent partially garbled, possibly −3/x],
lim_{x→0} (cos(2x) − cos(7x)) / (3x²), and
lim_{x→4} [expression garbled].

##### Most recent common ancestor of animals

The most recent common ancestor of all animals is hypothesized to be a/an: amoeboid protozoan; yeast; unicellular alga; flagellated protist; mold.

##### Particulate representation

Consider the following unbalanced particulate representation of a chemical equation (blue and red spheres denote the two elements; the image did not survive extraction). Write a balanced chemical equation for this reaction, using the smallest integer coefficients.

##### Continuity at x = 2

Let f be the function defined above. Which of the following conditions explains why f is not continuous at x = 2?
- Neither lim_{x→2} f(x) nor f(2) exists.
- lim_{x→2} f(x) exists, but f(2) does not exist.
- Both lim_{x→2} f(x) and f(2) exist, but lim_{x→2} f(x) ≠ f(2).
- Both lim_{x→2} f(x) and f(2) exist, and lim_{x→2} f(x) = f(2).

##### Q8) Inclined plane with drag

An object with mass m = 30 kg is released from rest from x = 0. The coefficient of friction is μ = 0.2 and the inclined plane is 5 m long. The drag force is proportional to velocity, F_D = 0.2v. Determine v(t), x(t), and the velocity of the object when it reaches the bottom. Assume that the gravitational force is constant and x(0) = 0. [The remaining free-body-diagram fragments, including an "mg sin θ" term and what appears to be a 30° angle, are garbled.]
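The circle-through-three-points exercise above reduces to a linear system: substituting each point (x, y) into x² + y² + ax + by + c = 0 gives one linear equation a·x + b·y + c = −(x² + y²). A small numpy sketch:

```python
import numpy as np

# Each given point contributes one linear equation in (a, b, c).
pts = [(-5.0, 0.0), (2.0, -1.0), (4.0, 3.0)]
M = np.array([[x, y, 1.0] for x, y in pts])
rhs = np.array([-(x**2 + y**2) for x, y in pts])

a, b, c = np.linalg.solve(M, rhs)
# a, b, c come out to 2, -6, -15 (up to rounding), i.e. the circle is
# x^2 + y^2 + 2x - 6y - 15 = 0, or (x + 1)^2 + (y - 3)^2 = 25.
print(a, b, c)
```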
2022-08-16 09:45:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.676051139831543, "perplexity": 7177.946464513449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572286.44/warc/CC-MAIN-20220816090541-20220816120541-00580.warc.gz"}
http://openstudy.com/updates/5595c3ade4b0989cc8788e17
• anonymous A square is shown below. Which expression can be used to find the area, in square units, of the shaded triangle in the square? A square with a side length of 4 units is shown. A diagonal is drawn with one section shaded inside the figure. Choices: (1/2) ⋅ 4 ⋅ 4; (1/4) ⋅ 4 ⋅ 4; (1/2) ⋅ 2 ⋅ 2; (1/4) ⋅ 2 ⋅ 2. Mathematics
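A quick check of the geometry, assuming (as the description suggests) the shaded region is one of the two triangles cut off by the diagonal and placing the square at the origin: the shoelace formula gives the triangle's area directly.

```python
def shoelace(pts):
    """Polygon area via the shoelace formula."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

# Triangle cut off by the diagonal of a 4-by-4 square (assumed placement):
tri = [(0, 0), (4, 0), (4, 4)]
print(shoelace(tri))  # 8.0, i.e. (1/2) * 4 * 4 -- half the square's area of 16
```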
2017-03-30 11:08:46
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9213728904724121, "perplexity": 542.983321388056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218193716.70/warc/CC-MAIN-20170322212953-00059-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/square-root-of-matrix.852824/
# Square root of matrix

1. Jan 18, 2016

### aaaa202

I'm doing an online course in quantum information theory, but it seems to require some knowledge of linear algebra that I don't have. A definition that popped up today was the definition of the absolute value of a matrix as: |A| = √(A*A), where * denotes conjugate transpose. Now for a given matrix I can calculate the product A*A, but how is the square root of this defined? I have no idea, though I think it should be basis independent, so maybe square root of the trace. A*A is clearly positive semidefinite, but I don't know if I can use that for anything.

2. Jan 18, 2016

### PeroK

What's the definition of the square root of a number?

3. Jan 18, 2016

### aaaa202

Well, I guess that the square root of a matrix A is then a matrix B such that B² = A. Or wouldn't it be better with a matrix B such that B*B = A? And is this definition basis independent?

4. Jan 18, 2016

### PeroK

That would be like requiring the square root of a complex number $a$ to satisfy $b^*b = a$.

5. Jan 18, 2016

### aaaa202

Ok, well, is the definition basis independent?

6. Jan 18, 2016

### PeroK

Let me rephrase that question. If a linear transformation T is represented by the matrix A in one basis and A' in another, then is |A'| = |A|'? What do you think?

7. Jan 18, 2016

### aaaa202

I am not sure. I need to show that (√A)' = √(A'), i.e. B' = √(A'). Now if A' = UAU† maybe I can use that somehow...

8. Jan 18, 2016

### PeroK

Or, think about diagonalizing A*A. It must be true if you want to accept that and get back to your quantum theory!

9. Jan 18, 2016

### aaaa202

Maybe something like this: A' = UAU†, B' = UBU†. Now B² = A. We want to show that √(A') = (√A)' = B'. So is B'² = A'? Well, B'² = UBU†UBU† = UB²U† = UAU† = A'. But all this requires that A and A', and B and B', are related by a basis change with a unitary as above. When is this true?

10. Jan 18, 2016

### PeroK

That's essentially the proof. The only technicality is that B' must be the right square root.
That probably depends on showing that a positive semi-definite matrix has a unique positive semi-definite square root.

11. Jan 18, 2016

### Hawkeye18

The square root of a positive semidefinite Hermitian matrix $A$ is the unique positive semidefinite matrix $B$ such that $B^2=A$. You can look at Ch. VI s.3 of "Linear Algebra Done Wrong" for an explanation of why such a matrix exists and why it is unique.
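The construction discussed in the thread can be sketched numerically: diagonalize the positive semidefinite matrix A*A with a unitary, take square roots of its (nonnegative) eigenvalues, and transform back. A minimal numpy sketch; the example matrix is mine, not from the thread:

```python
import numpy as np

def matrix_abs(A):
    """|A| = sqrt(A* A): the unique positive semidefinite square root
    of the positive semidefinite matrix A* A."""
    H = A.conj().T @ A                    # Hermitian, positive semidefinite
    w, V = np.linalg.eigh(H)              # real eigenvalues, unitary V
    w = np.clip(w, 0.0, None)             # guard against tiny negative round-off
    return (V * np.sqrt(w)) @ V.conj().T  # V diag(sqrt(w)) V*

A = np.array([[0.0, 2.0],
              [0.0, 0.0]])
B = matrix_abs(A)
print(B)                                   # ≈ [[0, 0], [0, 2]]
print(np.allclose(B @ B, A.conj().T @ A))  # True: B really squares to A*A
```

Basis independence follows exactly as in post 9: if B² = A*A, then (UBU†)² = U B² U† for any unitary U.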
2017-12-11 04:22:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7931941747665405, "perplexity": 835.8605408296004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512121.15/warc/CC-MAIN-20171211033436-20171211053436-00360.warc.gz"}
https://math.stackexchange.com/questions/3356350/what-is-the-definition-of-lix-in-these-mathoverflow-questions-on-i-s-int
# What is the definition of $Li(x)$ in these MathOverflow questions on $I_s =\int_{1}^{\infty} (\pi(x)-Li(x))\,x^{-s-1} dx$?

Question: What is the definition of $$Li(x)$$ in the MathOverflow questions linked below and their related answers? $$Li(x)=\int\limits_0^x\frac{1}{\log(t)}\,dt$$, $$Li(x)=\int\limits_1^x\frac{1}{\log(t)}\,dt$$, $$Li(x)=\int\limits_2^x\frac{1}{\log(t)}\,dt$$, or $$Li(x)=\int\limits_0^{1-\epsilon}\frac{1}{\log(t)}\,dt+\int\limits_{1+\epsilon}^x\frac{1}{\log(t)}\,dt$$?

The related questions, answers, and comments to the following Math StackExchange questions seem to use various definitions. My motivation for asking this question is an attempt to understand the exact formula associated with the evaluation of $$I_s$$ in order to gain insight into the questions, answers, and comments associated with these questions. The remainder of this question is an attempt to further clarify my motivation for asking this question and is based on the definitions in formulas (1) to (3) below, where $$P(s)$$ is the prime zeta function.

(1) $$\quad \pi(x)=\sum\limits_{p\le x}1$$

(2) $$\quad P(s)=s\int\limits_1^\infty \pi(x)\,x^{-s-1}\,dx=\sum\limits_p\frac{1}{p^s}\,,\quad\Re(s)>1$$

(3) $$\quad P(s)=\sum\limits_{m=1}^\infty\frac{\mu(m)}{m}\log\zeta(m\,s)\,,\qquad\qquad\Re(s)>0$$

The questions at the links (d) and (e) above use the definition in (4) below, the claimed identity in (5) below, and the equivalence in (6) below to derive the claimed equality in (7) below.
(4) $$\quad Li(x)=\lim_{\epsilon\rightarrow 0^{+}}\Big(\int_{0}^{1-\epsilon}+\int_{1+\epsilon}^{x}\Big)\frac{dt}{\log t}$$

(5) $$\quad s\int_{1}^\infty Li(x)\,x^{-s-1}\,dx=-\log(s-1)\,,\quad\Re(s)>1$$

(6) $$\quad\sum\limits_p\frac{1}{p^s}=\sum\limits_{m=1}^\infty\frac{\mu(m)}{m}\log\zeta(m\,s)\,,\quad\Re(s)>1$$

(7) $$\quad s\int_{1}^\infty (\pi(x)-Li(x))\,x^{-s-1}\,dx-\log((s-1)\,\zeta(s))=\sum\limits_{m=2}^{\infty}\frac{\mu(m)}{m}\log\zeta(m\,s)\,,\quad \Re(s)>1$$

The questions at links (d) and (e) above initially state the claimed equality illustrated in (7) above is valid for $$\Re(s)>1$$, but then claim the equality can be extended via analytic continuation to $$\Re(s)>\Theta$$, where $$\Theta$$ is the supremum of the real parts of the zeros of $$\zeta(s)$$. Assuming correctness of the claimed identity in (5) above, evaluation of the integral in (7) above leads to (8) below, which is valid for $$\Re(s)>1$$, but I don't understand the claim with respect to analytic continuation to $$\Re(s)>\Theta$$. The equivalence illustrated in (6) above can be used to rewrite (8) below as (9) below, which leads to (10) below, which is valid for $$\Re(s)>0$$, but this isn't useful.

(8) $$\quad \sum\limits_p\frac{1}{p^s}+\log(s-1)-\log((s-1)\,\zeta(s))=\sum\limits_{m=2}^{\infty}\frac{\mu(m)}{m}\log\zeta(m\,s)\,,\quad \Re(s)>1$$

(9) $$\quad \sum\limits_{m=1}^{\infty}\frac{\mu(m)}{m}\log\zeta(m\,s)+\log(s-1)-\log((s-1)\,\zeta(s))=\sum\limits_{m=2}^{\infty}\frac{\mu(m)}{m}\log\zeta(m\,s)\,,\quad \Re(s)>1$$

(10) $$\quad \sum\limits_{m=2}^{\infty}\frac{\mu(m)}{m}\log\zeta(m\,s)=\sum\limits_{m=2}^{\infty}\frac{\mu(m)}{m}\log\zeta(m\,s)\,,\quad \Re(s)>0$$

The following figure illustrates the right-hand side of (8) above (which is strictly real in the interval $$\frac{1}{2}<s\le 1$$) in blue, and the real and imaginary parts of the left-hand side of (8) above in orange and green respectively, where the left-hand side sum is taken over the first $$10,000$$ primes.
The figure below clearly illustrates the fallacy of the equivalence of the left and right hand sides of (8) above for $$\Theta<s\le 1$$.

Figure (1): Illustration of RHS of (8) (blue) and LHS of (8) ($$\Re$$ orange and $$\Im$$ green)

• en.wikipedia.org/wiki/Logarithmic_integral_function – Wojowu Sep 14 '19 at 15:51
• All your proposed definitions are wrong... – Unit Sep 14 '19 at 15:56
• @Unit Thanks for pointing out the error. I corrected the upper limits from $\infty$ to $x$ in the integrals related to the definition of $Li(x)$ above. – Steven Clark Sep 14 '19 at 16:14
• There is a standard definition (see Wikipedia), and then changing lower bounds makes it differ by a constant, which is utterly irrelevant when one does Mellin transforms as in the post, since differences are entire functions, while convergence issues are always at infinity; it is more interesting when one does the complex Li - usually the integral definition in Edwards' book on RZ is best to work with, as it excludes the negative reals, and in particular it doesn't affect non-trivial RZ zeroes – Conrad Sep 14 '19 at 16:18
• The difference between definitions is the Mellin transform of a constant, so kind of trivial; trying to actually estimate the integral, so involving $Li(x^\rho)$ for non-trivial zeta roots, is what you want – Conrad Sep 14 '19 at 16:47

$$li(x)=pv\Big(\int_0^x \frac1{\log(t)}dt\Big), \qquad x > 0$$

The problem is much less the definition of $$Li$$ than the possible Mellin transforms. There are mostly 3 things to try:

• $$\int_0^\infty li(x)x^{-s-1}dx$$ never converges.
• $$\int_0^\infty 1_{x>2}(li(x)-li(2))x^{-s-1}dx$$ approximates $$\frac{-\log(s-1)}{s}$$ very well around $$s=1$$, and since $$1_{x>2}(li(x)-li(2))$$ is continuous and piecewise $$C^1$$, its Mellin transform is $$L^1$$ on vertical lines.
• Let $$f(x)= li(x)1_{x > 1}\in L^1_{loc}$$; due to the huge discontinuity at $$x=1$$ its distributional derivative is complicated, but if $$\phi \in C^\infty_c(0,\infty),\phi(1)=0$$ then $$\langle f',\phi\rangle\ =\ \int_1^\infty \frac{\phi(x)}{\log x}dx \quad \implies \quad f' \log x = 1_{x > 1}$$ Which means $$F(s)=\int_0^\infty f(x)x^{-s-1}dx,\qquad sF(s)=\int_0^\infty f'(x)x^{-s}dx$$ $$-(sF(s))' =\int_0^\infty f'(x)\log(x)x^{-s}dx=\int_0^\infty 1_{x > 1}x^{-s}dx= \frac1{s-1}$$ $$sF(s)=-\log (s-1)+A$$
• Finding $$A$$ is a pain: around $$x=1$$ we have $$li(x) = B+\log |x-1|+O(x-1)$$ where the $$O(x-1)$$ term is $$C^1$$, thus as $$s \to +\infty$$ $$F(s) = \int_1^\infty \log |x-1|\,x^{-s-1}dx+\frac{B}s+o(\frac{1}{s})$$ $$\int_1^\infty \log |x-1|\,x^{-s-1}dx=\frac1s\int_1^\infty \frac{x^{-s}-1}{x-1}dx=\frac1s\int_1^\infty \frac{x^{-s-1}-x^{-1}}{1-x^{-1}}dx$$ $$= \frac1s \sum_{m=0}^\infty \int_1^\infty (x^{-s-1-m}-x^{-1-m})dx= \frac1s \sum_{m=0}^\infty\Big(\frac{1}{s+m}-\frac{1}{1+m}\Big)= \frac{-\psi(s)-\gamma}{s}$$ And hence from the asymptotic expansion of $$\psi$$ and $$B=\int_0^1 (\frac1{\log (t)}-\frac1{t-1})dt=\gamma$$ we obtain $$A=0$$ and $$\int_1^\infty li(x)x^{-s-1}dx=F(s) = \frac{-\log(s-1)}{s}$$
• You seem to have confirmed the claimed identity in formula (5) of my question above, which is valid for $\Re(s)>1$. The question at link (e) above originally used the definition $Li(x)=\int\limits_1^x\frac{1}{\log(t)}\,dt$, but the definition was later revised to $Li(x)=\lim_{\epsilon\rightarrow 0^{+}}\Big(\int_{0}^{1-\epsilon}+\int_{1+\epsilon}^{x}\Big)\frac{dt}{\log t}$, which is consistent with your derivation. – Steven Clark Sep 15 '19 at 16:30
• In the comments associated with the question at link (e) above (which have since been moved to chat), there was an argument made based on the answers to the question at link (a) above.
I wasn't sure if the answers to the question at link (a) assumed the same definition of $Li(x)$, but the comments on my question above seem to indicate it really doesn't matter. – Steven Clark Sep 15 '19 at 16:30
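Identity (5) can also be sanity-checked numerically without special-function libraries. This sketch uses the classical series li(x) = Ei(log x) = γ + log|log x| + Σ (log x)^k/(k·k!), which agrees with the principal-value definition (4) for x > 1, and a midpoint rule after the substitution x = e^u; the test point s = 3 is an arbitrary choice of mine.

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def li(x):
    """Principal-value logarithmic integral, li(x) = Ei(log x)."""
    u = math.log(x)
    total = GAMMA + math.log(abs(u))
    term = 1.0
    for k in range(1, 400):
        term *= u / k                      # term = u^k / k!
        total += term / k
        if abs(term / k) < 1e-16 * abs(total):
            break
    return total

# Check s * int_1^inf li(x) x^(-s-1) dx = -log(s-1) at s = 3.
# With x = e^u this is s * int_0^inf li(e^u) e^(-s u) du; the midpoint
# rule tolerates the integrable log-singularity at u = 0, and the tail
# beyond u = 20 is negligible for s = 3.
s, n, hi = 3.0, 50000, 20.0
h = hi / n
acc = sum(li(math.exp((i + 0.5) * h)) * math.exp(-s * (i + 0.5) * h)
          for i in range(n))
print(s * acc * h, -math.log(s - 1.0))  # both ~ -0.6931
```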
2021-01-21 02:43:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 63, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8817101716995239, "perplexity": 169.27849619975055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703522150.18/warc/CC-MAIN-20210121004224-20210121034224-00003.warc.gz"}
http://annualreport.iita.org/is-infinite-hahcnjm/viewtopic.php?id=67f99c-unit-weight-of-soil-in-kg%2Fm3
The moisture content of the soil is 17% when the degree of saturation is 60%. Soil unit weight publications, software, and technical guidance for career development, information, and resources for geotechnical engineers. Information includes typical unit weight values and relative values with relation to soil … The unit-weight conversion is used to convert pounds per cubic foot to kilonewtons per cubic meter. The unit weight of soil solids is the weight of soil solids Wd per unit volume of solids (Vs): $\gamma_{s} = \frac{W_{d}}{V_{s}}$ Thus, when the dry weight is reckoned with reference to the total original volume V, it is called the dry unit weight, and when it is reckoned with reference to the volume of solids, we get the unit weight of soil … The approximate bulk density of sand that is commonly used in normal-weight concrete is between 1520-1680 kg… Soil moisture calculations in percentages on a weight basis have been commonly used, but this does not give a true picture of soil-moisture relationships. Strictly speaking, unit weight has units of force/(length^3), though it is commonly quoted in kg/m3 or lb/ft3. Myers Group claims that 1 cubic meter of topsoil, with some moisture, is 1.44 metric tonnes. The density of soil is defined as the mass of the soil per unit volume; it is represented by the symbol ρ (rho). One cubic meter of soil weighs between 1.2 and 1.7 metric tonnes, or between 1,200 and 1,700 kilograms. Standard Test Method for Density and Unit Weight of Soil in Place by the Sand-Cone Method. The moist unit weight of a soil is 17.8 kN/m3 and the moisture content is 14%.
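The conversions mentioned above (lb/ft³ to kN/m³, kg/m³ to kN/m³) and the 17.8 kN/m³ example can be sketched in a few lines; g = 9.81 m/s² is assumed here:

```python
# gamma(kN/m^3) = rho(kg/m^3) * g / 1000 links mass density to unit weight;
# the lb/ft^3 factor comes from 1 lbf = 4.4482216 N and 1 ft = 0.3048 m.
G = 9.81  # m/s^2, assumed

def kg_m3_to_kn_m3(rho):
    """rho in kg/m^3 -> unit weight in kN/m^3."""
    return rho * G / 1000.0

def lb_ft3_to_kn_m3(w):
    """w in lb/ft^3 -> kN/m^3 (1 lb/ft^3 ~= 0.1571 kN/m^3)."""
    return w * 4.4482216 / 0.3048**3 / 1000.0

# Worked example from the text: moist unit weight 17.8 kN/m^3 at w = 14%
# gives a dry unit weight gamma_d = gamma / (1 + w).
gamma_moist, w = 17.8, 0.14
gamma_dry = gamma_moist / (1.0 + w)
print(round(gamma_dry, 2))             # 15.61 (kN/m^3)
print(round(kg_m3_to_kn_m3(1600), 2))  # dry sand at 1600 kg/m^3 ~ 15.7 kN/m^3
```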
Following is the list of 75 different building materials and their unit weight in kg… Weight and density of aluminum: 6061 alloy has density 2.70 g/cm3 (2700 kg/m3, 0.0027 g/mm3, 0.0975 lb/in3) … Related pages: Soil - Weight and Composition of Earth (typical weight and composition of soil); Soil and Rock - Bulk Factors (soil and rock expansion, or swell, after mining); Solids and Metals - Specific Gravities … Find the void ratio of the soil … The dry unit weight of a soil sample is 14.8 kN/m3. Given that Gs = 2.72 and w = 17%, determine: 1. the unit weight when the sample is fully saturated; 2. the mass of water (in kg) to be added per cubic meter (m3) of soil … The dry density of a soil is the same as the unit weight of … If the volume is 0.3 m3, then we can calculate the cement requirement as 0.3 × 1440 = 432 kg … One cubic meter is approximately 1.3 cubic yards, or 35 cubic feet. We can easily calculate the number of bags by multiplying the volume by the unit weight. The unit weight of reinforced concrete = 25 kN/m3. The unit weight of fine sand = 1510 to 1570 kg/m3. The unit weight of stone masonry = 2040 to 2650 kg/m3. The unit weight of brick masonry = 1885 to … Notation: γb, γ' = buoyant (effective) unit weight; γsat = saturated unit weight; γd = dry unit weight; γs = unit weight of solids. The Engineering ToolBox explains that 1 cubic meter of loose, dry topsoil is 1.22 metric tonnes, and loose, moist topsoil is 1.25 metric tonnes.

Bulk density, typical values:
Type of soil | Approximate weight (lb/ft3) | (kg/m3)
Loose earth | 75 | 1200
Rammed earth | 100 | 1600

Typical composition:
Element | Approximate content (%)
Aluminum | 6 - 10
Calcium | 1 - 7
Iron | 2 - 10
Magnesium | …

The moist unit weight of a soil is 16.5 kN/m3. If the water content is 15% and the specific gravity of the soil solids is 2.7, determine the following: a) dry unit weight; b) porosity; c) degree of saturation; d) mass of water in kg… Converting kg/m3 to kN/m3: 1 kilogram per cubic meter = 0.0098066 kilonewtons per cubic meter, so 85.87 kilograms per cubic meter = Y kilonewtons per cubic meter; assuming Y is the answer, apply the criss-cross principle … Unit weight is expressed in kg/m3 or lb/ft3 and indicates the compactness of a building material. Moisture content also affects the density and weight of soil. A guide to soil types has been provided by StructX, and additional information has been provided below. Soil Properties & Soil Compaction, page (4), Solved Problems in Soil Mechanics, Ahmed S. Al-Agha. The unit weight of soil is 15.10 kN/m3; determine its (a) dry unit weight, (b) moist unit weight, and (c) the amount of water to be added per m3 to make it saturated. Problem 3: the dry density of a sand with porosity 0.387 is 1600 kg/m3. The specific gravity and porosity of the soil are given as 2.67 and 0.418 respectively. Notation: ρ = bulk density (the ratio of the total mass to the total volume), lb/ft3 or kg/m3; ρsub = … Calculation of static forces and stresses is often easier with unit weight…
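The worked problem above (γ = 16.5 kN/m³, w = 15%, Gs = 2.7) follows from the standard phase relations; a sketch assuming γw = 9.81 kN/m³, with all values computed rather than taken from an answer key:

```python
# Standard soil-mechanics phase relations for a moist soil sample.
gamma, w, Gs, gamma_w = 16.5, 0.15, 2.7, 9.81  # kN/m^3, -, -, kN/m^3 (assumed)

gamma_d = gamma / (1.0 + w)       # dry unit weight, kN/m^3
e = Gs * gamma_w / gamma_d - 1.0  # void ratio, from gamma_d = Gs*gamma_w/(1+e)
n = e / (1.0 + e)                 # porosity
S = w * Gs / e                    # degree of saturation, from S*e = w*Gs

# Mass of water per cubic meter of soil: water weight = w * gamma_d per m^3,
# converted back to kilograms through gamma_w.
water_per_m3 = w * gamma_d / gamma_w * 1000.0

print(round(gamma_d, 2), round(e, 3), round(n, 3), round(S, 3))
print(round(water_per_m3))  # ~219 kg of water per cubic meter
```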
Unit weight of steel is 7850 kg/m3; cement 1440 kg/m3; sand (dry) 1540 to 1600 kg/m3; brick 1600 kg/m3; aggregates 1750 kg/m3; bitumen 1040 kg/m3. The metric figures quoted above (1,200 to 1,700 kilograms per cubic meter) convert to between 2,645 and 3,747 pounds per cubic meter. Loose topsoil is lighter, and compacted topsoil is heavier. By knowing the cement unit weight, 1440 kg/m3, we can calculate the number of bags by multiplying the volume by the unit weight. Density is also called unit weight; the terms specific gravity and, less often, specific weight are used similarly. Re: converting soil water content from m3/m3 to mm — 1 m3/m3 can be converted to 1000 mm of water, if 1 mm of water content is defined as 1 mm of water drawn from 1 m depth of soil.
To improve user experience the concrete and the voids between sand particles a is... Is represented by symbol called row ( p ), or 35 cubic feet effective unit weight of.... Use Google Adwords with some moisture, is 1.44 metric tonnes and answers are saved our... Transcribed Image Text from this Question a ) the moist unit weight Kg. Between 1,200 and 1,700 kilograms approximately 1.3 cubic yards, or 35 cubic.!
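The unit-weight conversions described above can be sketched in a few lines of Python. The 50 kg bag size in the cement example is an assumption for illustration, not a figure from the text.

```python
# Unit-weight conversion: 1 lbf = 4.44822 N, 1 ft^3 = 0.0283168 m^3.
LBF_PER_FT3_TO_KN_PER_M3 = 4.44822 / 0.0283168 / 1000  # ~0.157087

def pcf_to_kn_m3(pcf):
    """Convert lb/ft^3 (pcf) to kN/m^3."""
    return pcf * LBF_PER_FT3_TO_KN_PER_M3

# Water at 4 degrees C: 62.43 lbf/ft^3 should come out near 9.807 kN/m^3.
print(round(pcf_to_kn_m3(62.43), 3))

def cement_bags(volume_m3, bag_kg=50):
    """Bags of cement for a given volume, using unit weight 1440 kg/m^3.
    The 50 kg bag size is an assumed value."""
    return volume_m3 * 1440 / bag_kg

print(cement_bags(1.0))  # 28.8 bags per cubic meter at 50 kg/bag
```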
2021-04-15 14:34:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47657501697540283, "perplexity": 3450.9134147721425}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038085599.55/warc/CC-MAIN-20210415125840-20210415155840-00481.warc.gz"}
https://www.codebymath.com/index.php/welcome/challenge/inv-tan-sum
# Coding challenge

Write some code to prove that $\sum_{k=1}^n\tan^{-1}\frac{1}{2k^2}=\tan^{-1}\frac{n}{n+1}$. Note the right-hand side depends only on $n$, the number of terms in your sum on the left-hand side. (From Andreescu, p. 28.)
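One way to answer the challenge numerically (in Python rather than the site's own editor) is to compare both sides of the identity for a range of $n$:

```python
import math

def lhs(n):
    # Sum of arctan(1/(2k^2)) for k = 1..n
    return sum(math.atan(1 / (2 * k * k)) for k in range(1, n + 1))

def rhs(n):
    return math.atan(n / (n + 1))

# The two sides agree to floating-point precision for every n tested.
for n in range(1, 50):
    assert abs(lhs(n) - rhs(n)) < 1e-12
print("identity holds numerically for n = 1..49")
```

This is evidence, not a proof; the exact identity follows from the telescoping of arctangent differences.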
2019-04-24 17:50:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2735908329486847, "perplexity": 1881.174307780613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578655155.88/warc/CC-MAIN-20190424174425-20190424200425-00206.warc.gz"}
https://www.physicsforums.com/threads/inverse-image-of-phi-totient.471027/
# Inverse image of phi (totient) math_grl So upon introduction to Euler's phi function, we can see that $$\phi (1) = 1$$ and $$\phi (2) = 1$$, where it turns out that these are in fact the only numbers in N that map to 1. Now what I'm wondering is if there is some general way to find the inverse image of numbers in the image of phi? Also, how would one go about showing that once we find $$\phi^{-1}$$ that these are in fact the only numbers it could be? al-mahed So upon introduction to Euler's phi function, we can see that $$\phi (1) = 1$$ and $$\phi (2) = 1$$, where it turns out that these are in fact the only numbers in N that map to 1. Now what I'm wondering is if there is some general way to find the inverse image of numbers in the image of phi? Also, how would one go about showing that once we find $$\phi^{-1}$$ that these are in fact the only numbers it could be? what you mean by "inverse"? do you mean multiplicative inverse ab=1 such that b = a^-1? I think you want to know if a given number is a phi of one or more numbers, and I think there is no general way to do it yet, only by hand for instance, given 14, is it a phi of at least one number? no, and I think there is no known way to characterize such numbers yet math_grl what you mean by "inverse"? do you mean multiplicative inverse ab=1 such that b = a^-1? I don't think there should be any confusion in my terminology but in case a refresher is needed check out http://en.wikipedia.org/wiki/Image_(mathematics)#Inverse_image" It might also help make it clear that $$f: \mathbb{N} \rightarrow \phi(\mathbb{N})$$ where $$f(n) = \phi(n)$$ cannot have an inverse as it's onto but not injective. Other than that, yes, what I was asking if there was a way to find all those numbers that map to 14 (for example) under phi... 
Last edited by a moderator: al-mahed I don't think there should be any confusion in my terminology but in case a refresher is needed check out http://en.wikipedia.org/wiki/Image_(mathematics)#Inverse_image" It might also help make it clear that $$f: \mathbb{N} \rightarrow \phi(\mathbb{N})$$ where $$f(n) = \phi(n)$$ cannot have an inverse as it's onto but not injective. Other than that, yes, what I was asking if there was a way to find all those numbers that map to 14 (for example) under phi... hi math-grl so what you want is to find the n's such that $$\varphi(n_1)=m_1$$ $$\varphi(n_2)=m_2$$ $$\varphi(n_3)=m_3$$ $$\varphi(n_4)=m_4$$ ... knowing only the m's, correct? there is a conjecture related to it, although what you want is far more difficult than the conjecture http://en.wikipedia.org/wiki/Carmichael's_totient_function_conjecture Last edited by a moderator: Gold Member Hi math_grl, I think that your question does have an answer. The following inequalities can be proved directly from the definition of the totient function, or by using the product formula: $$\frac{1}{2} \sqrt{x}\ \leq \ \phi(x) \ \leq \ x$$ for any positive integer x. It then follows that the equation $\phi(x) = n$ has only finitely many solutions for a given positive integer n. In fact, given n, the inequalities imply that all solutions to the equation satisfy $$n \ \leq x \ \leq \ 4n^2$$ Raphie Hi math_grl, Beyond Petek's reply, one can also calculate the maximal possible integer with a totient of n via recourse to the mathematics associated with "Inverse Totient Trees." For instance, take the integers with a totient of 24. Then... phi (N) = 24 phi (24) = 8 phi (8) = 4 phi (4) = 2 phi (2) = 1 There are 5 "links" (designate: L_x) in the totient chain so to speak, with 4 intervals. In general, the greatest integer that can have a totient of n is 2*3^(L-1), which means that 2*3^(5 - 1) = 162 is the upper bound of an integer with a totient of 24. 
In fact, via a not very exhausting proof by exhaustion, one can easily check a table and see that the greatest integer where phi(n) = 24 is 90. phi (n) = 24 --> 35, 39, 45, 52, 56, 70, 72, 78, 84, 90 And a couple related number sequences. A032447 Inverse function of phi( ). http://oeis.org/A032447 A058811 Number of terms on the n-th level of the Inverse-Totient-Tree (ITT). http://oeis.org/A058811 As for why the 2*3^(L-1) formula works, I am as curious as anyone and would be more than happy if anyone could provide some insight on that. - RF Last edited:
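Petek's inequality $n \leq x \leq 4n^2$ makes the inverse image computable by a finite search. A sketch (the trial-division totient below is my own helper, not from the thread):

```python
def phi(x):
    # Euler's totient via trial-division factorization.
    result, m, p = x, x, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def inverse_phi(n):
    # phi(x) >= sqrt(x)/2 implies any solution satisfies x <= 4n^2,
    # so the search below is exhaustive.
    return [x for x in range(1, 4 * n * n + 1) if phi(x) == n]

print(inverse_phi(24))  # [35, 39, 45, 52, 56, 70, 72, 78, 84, 90]
print(inverse_phi(14))  # [] -- 14 is not a totient value
```

The output for 24 matches the list quoted in the thread, and the empty result for 14 matches al-mahed's example.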
2023-01-28 04:27:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8451298475265503, "perplexity": 300.7016494696977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499470.19/warc/CC-MAIN-20230128023233-20230128053233-00770.warc.gz"}
https://www.physicsforums.com/threads/planck-distribtion.310810/
Planck distribution

Homework Statement

I can't find the proof of "the ratio of the number of oscillators in their $(n+1)$th quantum state of excitation to the number in the $n$th quantum state is $$\frac{N_{n+1}}{N_n}=e^{-\hbar\omega/kT},$$ where $k$ is the Boltzmann constant."

The Attempt at a Solution

:-( I don't have any idea.

malawi_glenn, Homework Helper: Can you write down the Planck distribution and explain it in words?

For obtaining the Planck distribution we first use the equation $$\frac{N_{n+1}}{N_n}=e^{-\hbar\omega/kT},$$ and then the fraction of the total number of oscillators in the $n$th quantum state is $$\frac{N_n}{\sum_{s=0}^{\infty}N_s}=\frac{e^{-n\hbar\omega/kT}}{\sum_{s=0}^{\infty}e^{-s\hbar\omega/kT}}.$$ The average excitation quantum number of an oscillator is then $$\langle n\rangle=\frac{\sum_{s=0}^{\infty}s\,e^{-s\hbar\omega/kT}}{\sum_{s=0}^{\infty}e^{-s\hbar\omega/kT}}=\frac{1}{e^{\hbar\omega/kT}-1}.$$ But I don't know how to get the equation $N_{n+1}/N_n=e^{-\hbar\omega/kT}$.

Matterwave, Gold Member: Have you learned about Boltzmann factors? This is basically a direct application of Boltzmann factors: $$\frac{n_i}{n_j}=e^{-\Delta E_{ij}/kT},$$ and here the spacing between adjacent oscillator levels is $\Delta E=\hbar\omega$. Do you know how to get the Boltzmann factors? (Hint: it has to do with entropy.)

I've seen the Boltzmann factor, but I don't know how to prove it; could you tell me some hints?

malawi_glenn: Why do you need to prove the Boltzmann factor? Sorry for my hint going via the Planck distribution; working with Boltzmann factors is much easier ;-)

Thanks anyway :-)

Matterwave: So, the Boltzmann factor can be proved using the entropy of a reservoir and a particle in state $i$. The gist of it is: if you change the state of the particle, you change the energy of the particle and the entropy (multiplicities) of the reservoir. If you use some entropy and multiplicity relations, you can get the Boltzmann factor. I don't remember the exact proof, but it's provided here: http://www.physics.thetangentbundle.net/wiki/Statistical_mechanics/Boltzmann_factor [Broken] Edit: oops, I realize I forgot a minus sign in my first post. I've fixed it.

For the derivation of the Boltzmann factor you can also see the thermodynamics book by Sears and Salinger; in that book a much simpler method is used.
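The closed form $\langle n\rangle = 1/(e^{\hbar\omega/kT}-1)$ can be checked numerically by evaluating the sums in the thread directly, with $x = \hbar\omega/kT$:

```python
import math

def mean_occupation(x, terms=2000):
    # <n> = sum_s s e^{-s x} / sum_s e^{-s x}, truncated at `terms`
    # (the tail is negligible for x of order 1).
    num = sum(s * math.exp(-s * x) for s in range(terms))
    den = sum(math.exp(-s * x) for s in range(terms))
    return num / den

x = 0.5
print(mean_occupation(x))          # matches the closed form below
print(1 / (math.exp(x) - 1))       # Planck result 1/(e^x - 1)
```

The agreement is just the standard geometric-series evaluation of the two sums.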
2022-05-29 02:07:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8250113129615784, "perplexity": 1411.272554035578}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663035797.93/warc/CC-MAIN-20220529011010-20220529041010-00585.warc.gz"}
http://tex.stackexchange.com/questions/53126/how-to-change-vertical-spacing-in-toc-between-section-entry-and-only-the-first-s
# How to change vertical spacing in TOC between section entry and only the first subsection entry in each section? I am using article class and tocloft package. Does anyone know how to put a space (for example of size "22pt") between a ToC section entry and only the first subsection entry in every section. When I use \setlength\cftbeforesubsecskip{22pt} this puts a space before every subsection entry. I don't want extra space between subsection entires. I only want a space between the section entry and 1st subsection entry (in every section). What would be useful is a command that changes the space after a section entry, not before. This way I could set the space after a section entry but as far as I know something like \cftAFTERsecskip does not exist. Does anyone have a suggestion? Thank you! \documentclass[12pt]{article} \usepackage{tocloft} \setlength\cftparskip{0pt} \setlength\cftbeforesecskip{0pt} \setlength\cftbeforesubsecskip{22pt} \setlength\cftaftertoctitleskip{44pt} \begin{document} \tableofcontents \section{Test section one} \subsection{test section one one} \subsection{test section one two} \section{Test section two} \subsection{test section two one} \subsection{test section two two} \clearpage \section{Test section three} \subsection{test section three one} \subsection{test section three two} \end{document} - –  lockstep Apr 24 '12 at 15:23 One option would be to redefine \subsection as implemented in article.cls. 
In the redefinition you test the value of the subsection counter: if the value is greater than one, do nothing; if it's equal to one, add the space to the ToC:

\documentclass[12pt]{article}
\usepackage{tocloft}

\setlength\cftparskip{0pt}
\setlength\cftaftertoctitleskip{44pt}

\makeatletter
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
  {-3.25ex\@plus -1ex \@minus -.2ex}%
  {1.5ex \@plus .2ex}%
  {\normalfont\large\bfseries
   \ifnum\value{subsection}=1
     \addtocontents{toc}{\protect\addvspace{22pt}}%
   \fi}}
\makeatother

\begin{document}

\tableofcontents

\section{Test section one}
\subsection{test section one one}
\subsection{test section one two}
\section{Test section two}
\subsection{test section two one}
\subsection{test section two two}
\clearpage
\section{Test section three}
\subsection{test section three one}
\subsection{test section three two}

\end{document}

Patching the \subsection command with the help of \patchcmd from the etoolbox package simplifies the code:

\documentclass[12pt]{article}
\usepackage{tocloft}
\usepackage{etoolbox}

\setlength\cftparskip{0pt}
\setlength\cftaftertoctitleskip{44pt}

\patchcmd{\subsection}{\bfseries}{\bfseries
  \ifnum\value{subsection}=1
    \addtocontents{toc}{\protect\addvspace{22pt}}%
  \fi}{}{}

\begin{document}

\tableofcontents

\section{Test section one}
\subsection{test section one one}
\subsection{test section one two}
\section{Test section two}
\subsection{test section two one}
\subsection{test section two two}
\clearpage
\section{Test section three}
\subsection{test section three one}
\subsection{test section three two}

\end{document}

-
2014-07-30 07:12:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9792383909225464, "perplexity": 5742.744597434432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268734.38/warc/CC-MAIN-20140728011748-00162-ip-10-146-231-18.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2017173/every-closed-subspace-of-a-compact-space-is-compact-why-closed
# Every closed subspace of a compact space is compact. Why closed?

I'm reading proofs about a closed subspace of a compact space being compact, but in my mind I think I can prove it for far more than only closed subspaces. For example, with $X$ compact, for every open cover of $X$ I can find a finite subcover of $X$. Since the elements of this finite subcover are open in $X$, call them $U_{\alpha}$; when I take the intersections $U_{\alpha}\cap Y$, I have a finite cover of $Y$ made of open sets... Hmmm, so, what if I just do this: suppose $A$ is an open cover of $Y$; then for each $A_\alpha$, take $U_\alpha$ open in $X$ such that $A_\alpha = U_\alpha\cap Y$. I now must construct an open cover of $X$ from these sets. Hmmm, I think that in the case where $Y$ is closed, I can just add the complement of $Y$ to the collection, but in the other cases I'm not sure the union of those sets is going to give the entire $X$. Is this the reason why I must suppose $Y$ closed?

• If a (Hausdorff) set is not closed then it can't be compact (take a sequence not converging in the set, and draw countably many small open balls around its elements) – reuns Nov 16 '16 at 18:25
• @user1952009: That’s true in Hausdorff spaces but not in general. – Brian M. Scott Nov 16 '16 at 18:26
• The answer to your last question is yes. If $Y$ is not closed, you may not be able to find an open cover of $X$ whose trace on $Y$ is the one with which you started. As an example, let $X=[0,1]$ and $Y=(0,1)$. Let $U_n=\left(\frac1n,1\right)$ for $n\in\Bbb Z^+$. Then $\{U_n:n\in\Bbb Z^+\}$ is an open cover of $Y$, but there is no way to find open $V_n$ in $[0,1]$ such that $V_n\cap Y=U_n$ for each $n\in\Bbb Z^+$ and $\{V_n:n\in\Bbb Z^+\}$ covers $X$. – Brian M. Scott Nov 16 '16 at 18:28

A counterexample is easy: $(-1,1)$ is not compact, because the open cover by the intervals $(-1 + 1/n,1 - 1/n)$ (for integer $n>0$) has no finite subcover; however $[-1,1]$ is compact. Where is your attempt wrong?
You have to start with an open cover for $Y$, not for $X$. Not every open cover for $Y$ is necessarily obtained by considering $U_\alpha\cap Y$, where the sets $U_\alpha$ form an open cover for $X$. On the other hand, if $Y$ is closed in $X$, an open cover for $Y$ is always of that type, because you can consider $X\setminus Y$, which is open in $X$.
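The answer's counterexample can be illustrated numerically: any finite subfamily of the cover $(-1+1/n, 1-1/n)$ of $(-1,1)$ misses points near the endpoints (this snippet is an illustration of mine, not from the thread):

```python
# A point is covered by the subfamily {(-1 + 1/n, 1 - 1/n) : n in ns}
# iff it lies in the largest interval, the one for the maximal n.
def union_covers(ns, point):
    return any(-1 + 1/n < point < 1 - 1/n for n in ns)

finite_subfamily = [1, 2, 5, 10, 100]
N = max(finite_subfamily)
missed = 1 - 1 / (2 * N)  # in (-1, 1), but outside every chosen interval

print(union_covers(finite_subfamily, missed))  # False
print(union_covers(finite_subfamily, 0.0))     # True
```

Since every finite subfamily leaves such a point uncovered, the full cover has no finite subcover, which is exactly why $(-1,1)$ fails to be compact.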
2019-10-14 16:21:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9182144999504089, "perplexity": 89.83262881521253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653876.31/warc/CC-MAIN-20191014150930-20191014174430-00354.warc.gz"}
http://arxitics.com/articles/1809.05081
## arXiv Analytics ### arXiv:1809.05081 [quant-ph]AbstractReferencesReviewsResources #### Demonstration of displacement sensing of a mg-scale pendulum for mm- and mg- scale gravity measurements Published 2018-09-13Version 1 Gravity generated by large masses at long distances has been observed using a variety of probes from atomic interferometers to torsional balances. However, gravitational coupling between small masses has never been observed so far. Here we realize displacement sensing of a mg-scale probe pendulum with the quality factor of 250, whose sensitivity is $3\times10^{-14}\rm{m/\sqrt{Hz}}$ at a mechanical resonant frequency of 280 Hz. This sensitivity for an integration time of one second corresponds to the displacement generated by the gravitational coupling between the probe and a mm separated 100 mg mass, whose position is modulated at the pendulum mechanical resonant frequency. The sensitivity demonstrated here paves the way for a new class of experiments where gravitational coupling between small masses in quantum regimes can be achieved.
2019-05-23 07:47:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.858022928237915, "perplexity": 1522.1831104815728}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257156.50/warc/CC-MAIN-20190523063645-20190523085645-00516.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-2-solving-equations-2-3-solving-multi-step-equations-mixed-review-page-100/78
## Algebra 1 $3y$ We start with the given expression: $7y-4y$ We combine like terms by subtraction: $3y$
2018-12-14 17:40:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21317072212696075, "perplexity": 1949.0728056691685}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826145.69/warc/CC-MAIN-20181214162826-20181214184826-00365.warc.gz"}
https://www.projecteuclid.org/euclid.em/1047674389
## Experimental Mathematics ### The Bredon-Löffler conjecture #### Abstract We give a brief exposition of results of Bredon and others on passage to fixed points from stable $C_2$ equivariant homotopy (where $C_2$ is the group of order two) and its relation to Mahowald's root invariant. In particular we give Bredon's easy equivariant proof that the root invariant doubles the stem; the conjecture of the title is equivalent to the Mahowald--Ravenel conjecture that the root invariant never more than triples the stem. Our main result is to verify by computation that the algebraic analogue of this holds in an extensive range: this improves on results of [Mahowald and Shick 1983]. #### Article information Source Experiment. Math., Volume 4, Issue 4 (1995), 289-297. Dates First available in Project Euclid: 14 March 2003
2020-02-25 13:28:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.485523521900177, "perplexity": 1584.7633885631583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146066.89/warc/CC-MAIN-20200225110721-20200225140721-00515.warc.gz"}
http://bims.iranjournals.ir/article_471.html
# The nc-supplemented subgroups of finite groups Document Type : Research Paper Authors 1 School of Science, Sichuan University of Science & Engineering, 643000, Zigong, P. R. China 2 School of Mathematics and Statistics, Chongqing University of Arts and Sciences, 402160, Chongqing, P. R. China Abstract A subgroup $H$ is said to be $nc$-supplemented in a group $G$ if there exists a subgroup $K\leq G$ such that $HK\lhd G$ and $H\cap K$ is contained in $H_{G}$, the core of $H$ in $G$. We characterize the supersolubility of finite groups $G$ in which every maximal subgroup of the Sylow subgroups is $nc$-supplemented in $G$. Keywords Main Subjects ### History • Receive Date: 10 September 2011 • Revise Date: 24 September 2012 • Accept Date: 24 November 2012 • First Publish Date: 15 December 2013
2022-11-28 00:43:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2102774679660797, "perplexity": 3071.7183104479236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710462.59/warc/CC-MAIN-20221128002256-20221128032256-00494.warc.gz"}
https://cunymathblog.commons.gc.cuny.edu/2012/03/11/magical-arguments/
Magical Arguments There are certain arguments that are beautiful and convincing but not admissible as proofs. They give the right answer, and show in an important way why something should be true, but they don’t pass muster as formal mathematics. Frequently they are shunned as “intuitive” or “heuristic” arguments that aren’t sufficiently “rigorous.” Certainly there is something about them that seems to lack deductive strength. But I am sometimes saddened by the injustice of relegating these stunning ideas to the apocrypha, especially when they served as prologue to a famous result.  It can be interesting to summon up the feeling that these arguments are “right” in a way that we perhaps do not understand. Here I want to look at a few of these mysterious quasi proofs. The first and probably most famous is Archimedes’ argument of the lever, which he used to find the quadrature of a parabola. Quadrature refers to the problem of finding a region bounded by straight lines which has the same area as a given curved region.  In the problem I will present, Archimedes is trying to find the quadrature of a parabolic segment. There is a very nice discussion of the problem on Wikipedia. A picture of a parabolic segment is given here: A parabolic segment As Wikipedia reveals, Archimedes formally established his result using the method of exhaustion, and described his proof in a letter to Dositheus. But in his work The Method, Archimedes establishes the result by recourse to a magical argument. Consider the following picture. The argument of the lever We will go into the picture in detail, but the main idea is that the parabolic segment defined by the parabola and the line $ac$, when dangled from $w$, balances the triangle $\Delta abc$ which is suspended on the lever from its center of gravity.  Archimedes then concludes, from the placement of the fulcrum, that the triangle is exactly three times heavier than the parabolic segment, and therefore must have three times the area. 
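Archimedes' 3:1 ratio can be checked on a concrete parabola. The coordinates below are a choice made for illustration, not part of the original construction: take $y = x^2$ with chord endpoints $a=(0,0)$ and $c=(1,1)$.

```python
from fractions import Fraction as F

# Area of the parabolic segment between the chord y = x and y = x^2
# on [0, 1]: integral of (x - x^2) dx = 1/2 - 1/3.
segment = F(1, 2) - F(1, 3)  # = 1/6

# Tangent at c = (1, 1) is y = 2x - 1; the line through a parallel to
# the parabola's axis is x = 0, meeting the tangent at b = (0, -1).
a, b, c = (F(0), F(0)), (F(0), F(-1)), (F(1), F(1))

def tri_area(p, q, r):
    # Standard cross-product formula for the area of a triangle.
    return abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1])) / 2

triangle = tri_area(a, b, c)  # = 1/2
print(triangle / segment)     # 3, exactly as the lever argument predicts
```

The exact arithmetic (via `fractions`) confirms the triangle is precisely three times the segment, which is equivalent to Archimedes' famous result that the segment is $\frac{4}{3}$ of its *inscribed* triangle.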
For those who are curious about the construction, the line containing the segment $bc$ is drawn to be tangent to the parabola at $c$.  We then draw $ab$ to intersect the tangent at $b$, with $ab$ parallel to the axis of symmetry of the parabola.  This yields the triangle $\Delta abc$. Now draw $cf$ to bisect $ab$ and then produce the line to $w$ so that $fw = cf$.  Now consider an infinitely thin “slice” of $\Delta abc$ which I have drawn in red and green.  Archimedes shows, using mechanical principles, that the whole slice (both red and green parts) balances at the fulcrum with just the green part suspended from $w$.  Since this slice was arbitrarily located, the argument is completed by “summing” over all of the infinitely many slices of the triangle. Interestingly Archimedes remarks in his argument that the slices of the triangle and the slices of the parabolic segment are equal in number. As Katz says in his book A History of Mathematics, this makes one wonder if Archimedes had some notion that there are different infinite cardinalities. Before going on to the next magical argument, I want to remind the reader about a theorem from geometry. The idea of a “dynamic triangle” is the key ingredient in Euclid’s proof of the Pythagorean Theorem, and is given in Book I of The Elements.  Consider the following picture. ABC has the same area as BDC To explain the picture, suppose that a triangle $\Delta ABC$ is given of a certain height $h$.  Draw a line through $A$ parallel to $BC$ and on that line select a point $D$.  Then the areas of the triangles $\Delta ABC$ and $\Delta BDC$ are the same.  It is hard not to see the triangle $BDC$ as a “moved” version of $ABC$. This dynamic property is a powerful source of geometric intuition. The following argument for the area of a circle was given by Kepler. Kepler's circle The idea is to divide the circle into infinitely many slices of pie, each one so thin that the circular segment at its base is virtually a straight line.  
One then “peels” the circle as shown in the picture to form a triangle of height $r$, where $r$ is the radius of the circle.  By the argument on dynamic triangles, each pie slice has the same area both in the circle and in the peeled version. Therefore the areas of the circle and the triangle are the same.  But the triangle has height $r$ and base $2\pi r$ (because the base is the circumference).  By the familiar formula for the area of a triangle, the area of a circle must be $\frac{1}{2} base \times height = \frac{1}{2} 2\pi r \cdot r = \pi r^2$. Kepler’s argument naturally makes one wonder if something analogous can be done for the sphere.  Recall the following theorem on the “dynamic” properties of a cone. A volume preserving movement of a cone Though we usually only learn the formula for the volume of a “right circular” cone, the formula is much more general.  A cone of any base and height $h$ has volume $\frac{1}{3} B h$ where $B$ is the area of the base.  This gives a “dynamic” picture of cone volume similar to the one for Euclid’s triangle — movements of the “tip” of the cone in a plane parallel to the plane of the base do not affect the volume. Now (I will not attempt a graphic for this!) decompose a sphere into infinitely many cones, all meeting at the center.  To aid the imagination, suppose that the base of each cone is an infinitesimal hexagon, which is completely flat.  Now peel the sphere and lay it flat, while keeping the center unmoved.  The cones (whose volumes have been preserved) now unite to make one cumulative cone in a way similar to Kepler’s triangle.  The base of this cone will have area equal to the surface area of the sphere, and the height of the cone will be $r$, the radius of the sphere.  
This shows that the volume of a sphere should be one third its surface area times its radius, which is in fact the case, if you consider the traditional formulas: $A_{sphere} = 4\pi r^2$   and   $V_{sphere} = \frac{4}{3}\pi r^3$ Note that this uses a special property of the sphere:  all of the original cones have the same height $r$. The above arguments are similar to the following magical method of finding the volume of a torus. A straightened torus Finding the volume of a torus is a fairly challenging exercise for students in Calc II.  The exercise is considerably simplified if you snip the torus along the green circle and then pull it straight.  The volume of the resulting cylinder is the same as that of the torus. One could alternatively think of this as a “restacking” of the infinitely many discs of which the torus is composed. The last example of a magical argument I borrow from Jerome Keisler‘s freely downloadable Elementary Calculus: An Infinitesimal Approach. It is an example of something I find especially intriguing:  an infinitesimal diagram. A version of this diagram is used in Chapter 7 to support the argument that $\frac{d}{dt}\, \sin{t} = \cos{t}$ Here it is: An infinitesimal diagram showing a differential triangle for sine and cosine As students of calculus will know, $dt$ is a small perturbation of the angle $t$.  By the definition of radian measure (the diagram shows a unit circle) the red line in the diagram is also equal to $dt$. By imagining that $dt$ is infinitely small, we can treat the red segment of the circle as if it were straight.  From basic trigonometry, it then follows that $\frac{dy}{dt} = \cos{t}$ or in other words $\frac{d}{dt}\, \sin{t} = \cos{t}$ which is the familiar formula for the derivative of the sine function. What do all of these arguments have in common? All of them are highly visual, and involve the infinitely small.  None of them are deductive.  
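As an aside, the torus claim at least is easy to test numerically against the Calc II washer method. Here is a small sketch; the tube radius and center-circle radius below are arbitrary picks of mine, not from the essay.

```python
import math

R, a = 5.0, 1.7  # center-circle radius and tube radius (arbitrary values)

# Torus volume by summing thin washers stacked along the axis of symmetry.
N = 200_000
dz = 2 * a / N
washers = 0.0
for i in range(N):
    z = -a + (i + 0.5) * dz
    w = math.sqrt(a * a - z * z)  # half-width of the tube cross-section at height z
    washers += math.pi * ((R + w) ** 2 - (R - w) ** 2) * dz

# "Snip along the green circle and pull straight": a cylinder with
# cross-sectional area pi*a^2 and length 2*pi*R (the circumference traced
# by the tube's center).
cylinder = math.pi * a ** 2 * (2 * math.pi * R)

# washers and cylinder agree to several decimal places
```

The two volumes agree, which is exactly what the straightening argument asserts.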
There are no established axiomatic systems currently in existence that justify them. And yet each of them is, in its own way, evidence for the truth.  From a modern viewpoint it is easy to say that these crude ideas have been replaced by the more precise idea of the limit.  But that is too dismissive of the heuristic power the arguments have to point the way towards plausible theorems.  I think something even stronger could be true, which is that the arguments are valid in the context of some logical superstructure which has not yet been discovered.  Our enormous store of mathematical knowledge has the potential to give us a kind of epistemological chauvinism.  To combat this, we should try to see ourselves as we see the mathematical figures of the past: in pursuit of something powerful, but not yet grasping its full nature.  I find it particularly humbling to consider the Egyptian and Babylonian mathematicians who could not or did not conceive of fractions with non-unit numerators, which could have greatly simplified their calculations.  There is no reason to be confident that we are not still similarly benighted. There is a mysterious relationship between the curved and the straight, between things with magnitude and things without.  We have found deductive work-arounds for dealing with some phenomena related to these dualities, but they are still out there, and we still do not understand them. 
jim says: This approach to area/volume reminds me of Mamikon’s Theorem. (Or read this presentation by Tom Apostol.)
http://instil.ca/sciences/science9
Science

This course enables students to develop their understanding of basic concepts in biology, chemistry, Earth and space science, and physics, and to relate science to technology, society, and the environment. Throughout the course, students will develop their skills in the processes of scientific investigation. Students will acquire an understanding of scientific theories and conduct investigations related to sustainable ecosystems; atomic and molecular structures and the properties of elements and compounds; the study of the universe and its properties and components; and the principles of electricity. Prerequisite: Science 8

Properties & Characteristics

Density is calculated as: $density = \dfrac{mass}{volume}$

Earth & Space Science: the Universe

You know that the distances from the Sun to the planets are very large, so the astronomical unit (AU) is used to simplify these distances.

Planet | Astronomical Units (AU)
Venus | 0.72
Earth | 1.00
Mars | 1.52
Neptune | 30.06

Physics: Electricity

Match the correct equations with the type of circuits below.

1. $R_T = R_1 + R_2 + ...$
2. $\dfrac{1}{R_T} = \dfrac{1}{R_1} + \dfrac{1}{R_2} + ...$
3. $V_T = V_1 + V_2 + ...$
4. $V_T = V_1 = V_2 = ...$
5. $i_T = i_1 + i_2 + ...$
6. $i_T = i_1 = i_2 = ...$

$V = I \cdot R$

kW·h

Scientific Investigation and Exploration

What is the most appropriate scientific unit of measurement for the mass of a turkey?

• Pounds
• Ounces
• Grams
• Grains

Solution: The most accepted units in science are based on the metric system. Kilograms are the SI unit of mass, so grams is the closest and most appropriate choice.
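The series and parallel resistance equations from the matching exercise can be sketched in a few lines of code; the resistor and voltage values below are arbitrary examples, not from the course.

```python
def series_resistance(resistances):
    # Series circuit: R_T = R1 + R2 + ...
    return sum(resistances)

def parallel_resistance(resistances):
    # Parallel circuit: 1/R_T = 1/R1 + 1/R2 + ...
    return 1.0 / sum(1.0 / r for r in resistances)

r_series = series_resistance([100.0, 220.0])      # 320.0 ohms
r_parallel = parallel_resistance([100.0, 100.0])  # 50.0 ohms

# Ohm's law V = I * R: current drawn from a 12 V source by the series pair.
current = 12.0 / r_series  # 0.0375 A
```

Note that two equal resistors in parallel give half the resistance of either one, which is a quick way to check the formula.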
https://www.physicsforums.com/threads/can-someone-please-explain-scott-aaronson-lecture-9.905036/
Can someone please explain Scott Aaronson lecture #9? Here is a link to a lecture by Scott Aaronson. http://www.scottaaronson.com/democritus/lec9.html About half way in the lecture he talks about the "qubit". In that section he introduces a 2x2 unitary matrix which rotates a vector by 45°. He applies that transformation to the state |0>. When he does that he gets the mixed state 1.0/√2 ( |0>, |1> ). I understand where that mixed state comes from. Next he applies the transformation another time to the mixed result and he ends up with the state |1>. I understand where that comes from also. It is his next step that I do not understand. He displays a binary tree which is supposed to show all the possible "paths" when applying the transformation twice. Where does that tree come from? I see two "paths" but I do not see 4 paths. Where does the path that results in the state -|0> come from? EDIT: I think I see where the state -|0> comes from. If you apply the transformation a third time you get the mixed state 1.0/√2 ( -|0>, |1> ) and if you apply the transformation a fourth time you get the state -|0>. I am still not sure how he constructs that tree. Here is a screen capture of the tree... Last edited: Are you familiar with the Quincunx? If you imagine that the tree is a two-level quincunx machine, but instead of multiplying probabilities at each branch, use the square of the products of amplitudes instead - then some paths are never taken. The matrix in the lecture is sometimes called the 'quincunx matrix'. Last edited: Are you familiar with the Quincunx? If you imagine that the tree is a two-level quincunx machine, but instead of multiplying probabilities at each branch, use the square of the products of amplitudes instead - then some paths are never taken. The matrix in the lecture is sometimes called the 'quincunx matrix'. Thanks for your response. I will look into it. I might have figured out what the tree represents. 
At the root of the tree is the state |0>. If you apply the transformation to that state you get a mixed state, ( |0>, |1> ) (except for the constant 1/√2). So the level below the root shows the mixed state you get if you apply the transformation to the state |0>. In the second level we have two states, |0> and |1>. The third level is obtained by applying the transformation to each of the states in the second level. We already know what the children for the state |0> are, so they are just copied to level 3. However, applying the transform to the state |1> yields a new mixed state which is ( -|0> and |1> ) and that mixed state is then shown as the children of the |1> state in the third level. I seriously doubt that this is the correct interpretation for how the tree is constructed, but it is one way in which it can be constructed. I think it is a complete tree. I would like to understand this better. Thanks for your response. I will look into it. I might have figured out what the tree represents. At the root of the tree is the state |0>. If you apply the transformation to that state you get a mixed state, ( |0>, |1> ) (except for the constant 1/√2). So the level below the root shows the mixed state you get if you apply the transformation to the state |0>. In the second level we have two states, |0> and |1>. The third level is obtained by applying the transformation to each of the states in the second level. We already know what the children for the state |0> are, so they are just copied to level 3. However, applying the transform to the state |1> yields a new mixed state which is ( -|0> and |1> ) and that mixed state is then shown as the children of the |1> state in the third level. I seriously doubt that this is the correct interpretation for how the tree is constructed, but it is one way in which it can be constructed. I think it is a complete tree. I would like to understand this better. I'm not so sure now that there can be a 'quantum quincunx'. 
I would expect it to be I'm not so sure now that there can be a 'quantum quincunx'. I would expect it to be View attachment 113606 How did you get that? Nugatory Mentor At the root of the tree is the state |0>. If you apply the transformation to that state you get a mixed state, ( |0>, |1> ) (except for the constant 1/√2). So the level below the root shows the mixed state you get if you apply the transformation to the state \0>. Careful - these are not mixed states, they are still pure states. The only reason that ##|0\rangle## looks different than ##\sqrt{2}/2(|0\rangle+|1\rangle)## (they're different states, but that's not why they look different) is that we've chosen a basis that makes it look that way. As an analogy: there's nothing qualitatively different between "northwest" (a linear combination of north and west) and "north"; we could have chosen different basis vectors and then north would have been the linear combination. Not calling the superposition a "mixed" state is not a quibble. This distinction is fundamental to understanding how QM works. I seriously doubt that this is the correct interpretation for how the tree is constructed.... You were right until you started doubting - it's right. The easiest way to see this is to do the algebra: calculate ##\phi=U|\psi\rangle##, then apply ##U## to ##\phi##. You'll end up with four terms corresponding to the four leaves of the tree. Look at where they came from, compare with the tree, and it will all make sense. bhobba and mike1000 Demystifier Gold Member Are you familiar with the Quincunx ? I like it! bhobba Careful - these are not mixed states, they are still pure states. The only reason that ##|0\rangle## looks different than ##\sqrt{2}/2(|0\rangle+|1\rangle)## (they're different states, but that's not why they look different) is that we've chosen a basis that makes it look that way. 
As an analogy: there's nothing qualitatively different between "northwest" (a linear combination of north and west) and "north"; we could have chosen different basis vectors and then north would have been the linear combination. Not calling the superposition a "mixed" state is not a quibble. This distinction is fundamental to understanding how QM works. You were right until you started doubting - it's right. The easiest way to see this is to do the algebra: calculate ##\phi=U|\psi\rangle##, then apply ##U## to ##\phi##. You'll end up with four terms corresponding to the four leaves of the tree. Look at where they came from, compare with the tree, and it will all make sense. I have modified the tree to make it clear what is going on. Last edited: PeterDonis Mentor 2019 Award I have modified the tree to make it clear what is going on. That's not the right way to construct the tree. The top node of the tree is just ##\vert 0 \rangle##. The second row of the tree is ##U \vert 0 \rangle##, which becomes two nodes, ##\vert 0 \rangle## and ##\vert 1 \rangle##. (Note that there are factors of ##1 / \sqrt{2}## which are being left out, because they don't matter for the argument Aaronson is making.) The third row of the tree applies ##U## to the two nodes in the second row. That becomes four nodes: two on the left for ##U \vert 0 \rangle## (applying ##U## to the left node of the second row) and two on the right for ##U \vert 1 \rangle## (applying ##U## to the right node of the second row). So to figure out what the right two nodes in the third row are, you need to calculate what ##U \vert 1 \rangle## is. That's what Aaronson did to get the third row of his tree. That's not the right way to construct the tree. The top node of the tree is just ##\vert 0 \rangle##. The second row of the tree is ##U \vert 0 \rangle##, which becomes two nodes, ##\vert 0 \rangle## and ##\vert 1 \rangle##. 
(Note that there are factors of ##1 / \sqrt{2}## which are being left out, because they don't matter for the argument Aaronson is making.) The third row of the tree applies ##U## to the two nodes in the second row. That becomes four nodes: two on the left for ##U \vert 0 \rangle## (applying ##U## to the left node of the second row) and two on the right for ##U \vert 1 \rangle## (applying ##U## to the right node of the second row). So to figure out what the right two nodes in the third row are, you need to calculate what ##U \vert 1 \rangle## is. That's what Aaronson did to get the third row of his tree. That is what I was trying to show. At the root of the tree you apply U to the state |0>. In the second level of the tree you apply U a second time. The third level is the result of applying U twice. (If I were to apply U a third time I would add U's to the third row and a fourth row showing the third row's children...) PeterDonis Mentor 2019 Award The third level is the result of applying U twice. Ok, then what four nodes does applying ##U## twice to the top node of the tree result in? Aaronson says it results in the four nodes ##\vert 0 \rangle##, ##\vert 1 \rangle##, ##- \vert 0 \rangle##, and ##\vert 1 \rangle##; then the first and third nodes cancel and you're just left with the state ##\vert 1 \rangle##. In your OP, you interpreted the last two nodes as coming from ##U## applied to ##\vert 1 \rangle## (the right node in the second row). That looks correct to me. So I think Nugatory was correct when he said you were right until you started doubting. Ok, then what four nodes does applying ##U## twice to the top node of the tree result in? Aaronson says it results in the four nodes ##\vert 0 \rangle##, ##\vert 1 \rangle##, ##- \vert 0 \rangle##, and ##\vert 1 \rangle##; then the first and third nodes cancel and you're just left with the state ##\vert 1 \rangle##. 
In your OP, you interpreted the last two nodes as coming from ##U## applied to ##\vert 1 \rangle## (the right node in the second row). That looks correct to me. So I think Nugatory was correct when he said you were right until you started doubting. When you apply U to the root node you only get two children. When you apply U to each of those two children you get the 4 leaf nodes. The 4 leaf nodes are the 4 states you mention. PeterDonis Mentor 2019 Award When you apply U to the root node you only get two children. When you apply U to each of those two children you get the 4 leaf nodes. Yes, I know that. And the four leaf nodes are the ones Aaronson described. Do you agree with that? PeterDonis Mentor 2019 Award Yes, I know that. And the four leaf nodes are the ones Aaronson described. Do you agree with that? Yes. PeterDonis Mentor 2019 Award Yes. Then I think your question is answered: we agree on how the tree is constructed, and we see how the tree explains why the ##\vert 0 \rangle## state drops out in the third row, so applying ##U## twice only leaves the ##\vert 1 \rangle## state. Then I think your question is answered: we agree on how the tree is constructed, and we see how the tree explains why the ##\vert 0 \rangle## state drops out in the third row, so applying ##U## twice only leaves the ##\vert 1 \rangle## state. Yes. The two |0> states destructively interfere and cancel out. The two |1> states constructively interfere. If I apply U to the state |0> twice, how do I write that in Dirac notation? Do I write it like this ... UU | 0> or like this U | U0>? Also, how do I show the path to the negative state in the 3rd row in Dirac notation? There really should be paths associated with the 3rd row. Each path would show, in Dirac notation, the order and state in which the operations were carried out. I do not know how to express that in Dirac notation yet. PeterDonis Mentor 2019 Award Do I write it like this ... UU | 0> Yes. 
Then you can substitute and rearrange as follows: ##UU\vert 0 \rangle = U \frac{1}{\sqrt{2}} \left( \vert 0 \rangle + \vert 1 \rangle \right) = \frac{1}{\sqrt{2}} \left( U \vert 0 \rangle + U \vert 1 \rangle \right)##, and so on. or like this U | U0> That won't work because the ket ##\vert U 0 \rangle## doesn't make sense. PeterDonis Mentor 2019 Award I have again revised the tree. I think this is the way you think I should write it. Not really, no. The operator U doesn't really apply to individual nodes; it applies to entire rows. So if you were going to try to express things in tree notation, it would look something like this (I'm not going to try to wrangle LaTeX too much to make it actually look like a tree, but just write what would be in each row of the tree): $$\vert 0 \rangle$$ $$U \vert 0 \rangle = \frac{1}{\sqrt{2}} \left[ \vert 0 \rangle + \vert 1 \rangle \right]$$ $$U U \vert 0 \rangle = \frac{1}{\sqrt{2}} \left[ U \vert 0 \rangle + U \vert 1 \rangle \right] = \frac{1}{2} \left[ \vert 0 \rangle + \vert 1 \rangle - \vert 0 \rangle + \vert 1 \rangle \right]$$ Jilang and Mentz114 Not really, no. The operator U doesn't really apply to individual nodes; it applies to entire rows. So if you were going to try to express things in tree notation, it would look something like this (I'm not going to try to wrangle LaTeX too much to make it actually look like a tree, but just write what would be in each row of the tree): $$\vert 0 \rangle$$ $$U \vert 0 \rangle = \frac{1}{\sqrt{2}} \left[ \vert 0 \rangle + \vert 1 \rangle \right]$$ $$U U \vert 0 \rangle = \frac{1}{\sqrt{2}} \left[ U \vert 0 \rangle + U \vert 1 \rangle \right] = \frac{1}{2} \left[ \vert 0 \rangle + \vert 1 \rangle - \vert 0 \rangle + \vert 1 \rangle \right]$$ Well, I think the first way I rewrote the tree is probably the best way to show that. The tree can be decomposed into two elementary operations...applying the operator to the state |0> and applying the operator to the state |1>. 
It would be very easy to extend this tree to deeper levels because everything is known. There are two simple binary trees, the binary tree formed when U is applied to the state |0> and the binary tree formed when U is applied to the state |1>. Here it is again... PeterDonis Mentor 2019 Award I think the first way I rewrote the tree is probably the best way to show that. I think Aaronson's way of writing the tree, with no U anywhere, is the best way to show that. IMO putting a U on any of the nodes of the tree is misleading, because then the node does not correspond to what it appears to say. (For example, the left node on the second row does not actually correspond to ##U \vert 0 \rangle##; it just corresponds to ##\vert 0 \rangle##, which is what Aaronson wrote there.) If you're going to include U at all it should be off to the left of the tree, just describing what each row of the tree corresponds to: ##U \vert 0 \rangle##, ##UU \vert 0 \rangle##, etc. bhobba I think Aaronson's way of writing the tree, with no U anywhere, is the best way to show that. IMO putting a U on any of the nodes of the tree is misleading, because then the node does not correspond to what it appears to say. (For example, the left node on the second row does not actually correspond to ##U \vert 0 \rangle##; it just corresponds to ##\vert 0 \rangle##, which is what Aaronson wrote there.) If you're going to include U at all it should be off to the left of the tree, just describing what each row of the tree corresponds to: ##U \vert 0 \rangle##, ##UU \vert 0 \rangle##, etc. Doing it my way, would the √2 factors not have added up correctly? I think they would have worked themselves out correctly. There are only two operations....applying U to the |0> state and applying U to the |1> state. All of the detail is handled by the tree structure. Each path through the tree will tell you what sequence of operations gave rise to that leaf node. 
But the state is given by the superposition of all the leaf nodes. PeterDonis Mentor 2019 Award Doing it my way, would the √2 factors not have added up correctly? Doing what your way? Drawing the tree is not the same as actually applying the U operator to states. Your way of drawing the tree seems misleading to me, but that doesn't make it "wrong" unless you are thinking that your way of drawing the tree implies a different (wrong) way of applying the U operator to actual states. Doing what your way? Drawing the tree is not the same as actually applying the U operator to states. Your way of drawing the tree seems misleading to me, but that doesn't make it "wrong" unless you are thinking that your way of drawing the tree implies a different (wrong) way of applying the U operator to actual states. I want the tree to be simple, not complex. All I am intending to show is that we can apply the operator at different levels of the tree, and if we do that, we will arrive at the exact same answer that Aaronson arrives at. The simplest form of all is, of course, Aaronson's original tree. I had to put the operators in to make it clear to me what the tree represented; when it was in Aaronson's original form I wasn't sure what the tree was trying to represent. Of course now, I see it is/was something extremely simple. In retrospect, I believe that Mr. Aaronson should have put the operators in the tree in some way to clarify what he was doing, as scaffolding. Maybe not the way that I put them in, but some way. The essence of the tree is that it encapsulates application of the U operator twice. The tree structure can encapsulate application of the U operator any number of times you care to apply it. Last edited: PeterDonis Mentor 2019 Award I believe that Mr. Aaronson should have put the operators in the tree in some way to clarify what he was doing. If they were off to the left of each row, as I suggested, I think that would do it.
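For anyone following along, the bookkeeping the thread converges on can be reproduced in a few lines of plain Python. The matrix convention below is an assumption chosen to match the tree discussed above, i.e. U|0> = (|0> + |1>)/√2 and U|1> = (−|0> + |1>)/√2.

```python
import math

# States are amplitude pairs (amplitude of |0>, amplitude of |1>); U is the
# 45-degree rotation, written column-wise so that U|0> = (|0> + |1>)/sqrt(2)
# and U|1> = (-|0> + |1>)/sqrt(2).
c = 1.0 / math.sqrt(2.0)
U = [[c, -c],
     [c,  c]]

def apply(U, state):
    """Multiply the 2x2 matrix U into a 2-component state vector."""
    return (U[0][0] * state[0] + U[0][1] * state[1],
            U[1][0] * state[0] + U[1][1] * state[1])

zero = (1.0, 0.0)        # the state |0>
once = apply(U, zero)    # (c, c): the equal superposition of |0> and |1>
twice = apply(U, once)   # ≈ (0, 1): the |0> amplitudes interfere away
```

The first component of `twice` is the sum of the +|0> and −|0> branches of the tree, which cancel; the second is the sum of the two +|1> branches, which reinforce to amplitude 1.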
https://www.quizover.com/trigonometry/test/algebraic-sum-to-product-and-product-to-sum-formulas-by-openstax
# 9.4 Sum-to-product and product-to-sum formulas (Page 3/6)

## Verifying the identity using double-angle formulas and reciprocal identities

Verify the identity $\csc^2\theta - 2 = \dfrac{\cos(2\theta)}{\sin^2\theta}.$

For verifying this equation, we are bringing together several of the identities. We will use the double-angle formula and the reciprocal identities. We will work with the right side of the equation and rewrite it until it matches the left side.

$$\begin{aligned}
\frac{\cos(2\theta)}{\sin^2\theta} &= \frac{1-2\sin^2\theta}{\sin^2\theta} \\
&= \frac{1}{\sin^2\theta}-\frac{2\sin^2\theta}{\sin^2\theta} \\
&= \csc^2\theta - 2
\end{aligned}$$

Verify the identity $\tan\theta\,\cot\theta-\cos^2\theta=\sin^2\theta.$

$$\begin{aligned}
\tan\theta\,\cot\theta-\cos^2\theta &= \left(\frac{\sin\theta}{\cos\theta}\right)\left(\frac{\cos\theta}{\sin\theta}\right)-\cos^2\theta \\
&= 1-\cos^2\theta \\
&= \sin^2\theta
\end{aligned}$$

Access these online resources for additional instruction and practice with the product-to-sum and sum-to-product identities. 
## Key equations

Product-to-sum formulas:

$$\begin{aligned} \cos\alpha\,\cos\beta &= \tfrac{1}{2}\left[\cos(\alpha-\beta) + \cos(\alpha+\beta)\right] \\ \sin\alpha\,\cos\beta &= \tfrac{1}{2}\left[\sin(\alpha+\beta) + \sin(\alpha-\beta)\right] \\ \sin\alpha\,\sin\beta &= \tfrac{1}{2}\left[\cos(\alpha-\beta) - \cos(\alpha+\beta)\right] \\ \cos\alpha\,\sin\beta &= \tfrac{1}{2}\left[\sin(\alpha+\beta) - \sin(\alpha-\beta)\right] \end{aligned}$$

Sum-to-product formulas:

$$\begin{aligned} \sin\alpha + \sin\beta &= 2\sin\left(\frac{\alpha+\beta}{2}\right)\cos\left(\frac{\alpha-\beta}{2}\right) \\ \sin\alpha - \sin\beta &= 2\sin\left(\frac{\alpha-\beta}{2}\right)\cos\left(\frac{\alpha+\beta}{2}\right) \\ \cos\alpha - \cos\beta &= -2\sin\left(\frac{\alpha+\beta}{2}\right)\sin\left(\frac{\alpha-\beta}{2}\right) \\ \cos\alpha + \cos\beta &= 2\cos\left(\frac{\alpha+\beta}{2}\right)\cos\left(\frac{\alpha-\beta}{2}\right) \end{aligned}$$

## Key concepts

• From the sum and difference identities, we can derive the product-to-sum formulas and the sum-to-product formulas for sine and cosine.
• We can use the product-to-sum formulas to rewrite products of sines, products of cosines, and products of sine and cosine as sums or differences of sines and cosines. See [link], [link], and [link].
• We can also derive the sum-to-product identities from the product-to-sum identities using substitution.
• We can use the sum-to-product formulas to rewrite a sum or difference of sines and cosines as a product of sines and cosines. See [link].
• Trigonometric expressions are often simpler to evaluate using the formulas. See [link].
• The identities can be verified using other formulas or by converting the expressions to sines and cosines. To verify an identity, we choose the more complicated side of the equals sign and rewrite it until it is transformed into the other side. See [link] and [link].

## Verbal

Starting with the product-to-sum formula $\sin\alpha\,\cos\beta = \frac{1}{2}\left[\sin(\alpha+\beta) + \sin(\alpha-\beta)\right],$ explain how to determine the formula for $\cos\alpha\,\sin\beta.$

Substitute $\alpha$ into cosine and $\beta$ into sine and evaluate.

Provide two different methods of calculating $\cos(195°)\cos(105°),$ one of which uses the product to sum. Which method is easier?

Describe a situation where we would convert an equation from a sum to a product and give an example.

Answers will vary. There are some equations that involve a sum of two trig expressions which, when converted to a product, are easier to solve.
For example: $\dfrac{\sin(3x) + \sin x}{\cos x} = 1.$ When converting the numerator to a product, the equation becomes: $\dfrac{2\sin(2x)\cos x}{\cos x} = 1.$

Describe a situation where we would convert an equation from a product to a sum, and give an example.

## Algebraic

For the following exercises, rewrite the product as a sum or difference.

$16\sin(16x)\sin(11x)$

$8\left(\cos(5x) - \cos(27x)\right)$

$20\cos(36t)\cos(6t)$

$2\sin(5x)\cos(3x)$

$\sin(2x) + \sin(8x)$

$10\cos(5x)\sin(10x)$

$\sin(-x)\sin(5x)$

$\frac{1}{2}\left(\cos(6x) - \cos(4x)\right)$

$\sin(3x)\cos(5x)$

For the following exercises, rewrite the sum or difference as a product.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 22, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9437305331230164, "perplexity": 1065.2512244693016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826306.47/warc/CC-MAIN-20181214184754-20181214210754-00638.warc.gz"}
https://ocw.mit.edu/courses/mathematics/18-065-matrix-methods-in-data-analysis-signal-processing-and-machine-learning-spring-2018/video-lectures/lecture-10-survey-of-difficulties-with-ax-b/
# Lecture 10: Survey of Difficulties with Ax = b

## Description

The subject of this lecture is the matrix equation $$Ax = b$$. Solving for $$x$$ presents a number of challenges that must be addressed when doing computations with large matrices.

## Summary

Large condition number $$\Vert A \Vert \ \Vert A^{-1} \Vert$$: $$A$$ is ill-conditioned and small errors are amplified. Underdetermined case $$m < n$$: typical of deep learning. Penalty method regularizes a singular problem.

Related chapter in textbook: Introduction to Chapter II

Instructor: Prof. Gilbert Strang

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: Let's go. So if you want to know the subject of today's class, it's A x = b. I got started writing down different possibilities for A x = b, and I got carried away. It just appears all over the place for different sizes, different ranks, different situations, nearly singular, not nearly singular. And the question is, what do you do in each case? So can I outline my little two pages of notes here, and then pick on one or two of these topics to develop today, and a little more on Friday about Gram-Schmidt? So I won't do much, if any, of Gram-Schmidt today, but I will do the others. So the problem is A x = b. That problem has come from somewhere. We have to produce some kind of an answer, x. So I'm going from good to bad or easy to difficult in this list. Well, except for number 0, which is an answer in all cases, using the pseudo inverse that I introduced last time. So that deals with zero eigenvalues and zero singular values by saying their inverse is also 0, which is kind of wild. So we'll come back to the meaning of the pseudo inverse.
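The condition number from the summary, $$\Vert A \Vert \ \Vert A^{-1} \Vert = \sigma_1 / \sigma_n$$, is easy to compute. A small sketch; the 2x2 matrix is an illustrative example with nearly dependent columns, not one from the lecture:

```python
import numpy as np

# An ill-conditioned 2x2 example: the two columns are nearly the same.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])

sigma = np.linalg.svd(A, compute_uv=False)   # singular values, descending
kappa = sigma[0] / sigma[-1]                 # sigma_1 / sigma_n

# numpy's cond() computes the same 2-norm condition number ||A|| ||A^{-1}||.
assert np.isclose(kappa, np.linalg.cond(A))
assert kappa > 1e4    # far beyond the "not more than 1,000 or so" comfort zone
```

With a condition number this large, a relative error of 1e-16 in b can become a relative error near 1e-12 in x, which is the amplification the summary warns about.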
But now, I want to get real, here, about different situations. So number 1 is the good, normal case, when a person has a square matrix of reasonable size, reasonable condition, a condition number-- oh, the condition number, I should call it sigma 1 over sigma n. It's the ratio of the largest to the smallest singular value. And let's say that's within reason, not more than 1,000 or something. Then normal, ordinary elimination is going to work, and Matlab-- the command that would produce the answer is just backslash. So this is the normal case. Now, the cases that follow have problems of some kind, and I guess I'm hoping that this is a sort of useful dictionary of what to do for you and me both. So we have this case here, where we have too many equations. So that's a pretty normal case, and we'll think mostly of solving by least squares, which leads us to the normal equation. So this is standard, happens all the time in statistics. And I'm thinking in the reasonable case, that would be x hat. The solution A-- this matrix would be invertible and reasonable size. So backslash would still solve that problem. Backslash doesn't require a square matrix to give you an answer. So that's the good case, where the matrix is not too big, so it's not unreasonable to form A transpose. Now, here's the other extreme. What's exciting for us is this is the underdetermined case. I don't have enough equations, so I have to put something more in to get a specific answer. And what makes it exciting for us is that that's typical of deep learning. There are so many weights in a deep neural network that the weights would be the unknowns. Of course, it wouldn't be necessarily linear. It wouldn't be linear, but still the idea's the same that we have many solutions, and we have to pick one. Or we have to pick an algorithm, and then it will find one. So we could pick the minimum norm solution, the shortest solution. That would be an L2 answer. Or we could go to L1.
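The minimum-norm (L2) choice for the underdetermined case can be sketched on a tiny system; 2 equations in 3 unknowns, with numbers chosen only for illustration:

```python
import numpy as np

# Underdetermined system: m = 2 equations, n = 3 unknowns, many solutions.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
b = np.array([6.0, 15.0])          # consistent: x = (1, 1, 1) is one solution

x_min = np.linalg.pinv(A) @ b      # the minimum-norm solution A^+ b
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]

assert np.allclose(A @ x_min, b)   # it really solves Ax = b
assert np.allclose(x_min, x_lstsq) # lstsq returns the same min-norm answer

# Any other solution differs by a null-space vector and is strictly longer,
# because the min-norm solution is orthogonal to the null space.
null_vec = np.array([1.0, -2.0, 1.0])
assert np.allclose(A @ null_vec, 0.0)
assert np.linalg.norm(x_min) < np.linalg.norm(x_min + null_vec)
```

This is the L2 answer he mentions; the L1 alternative (sparsest-leaning solution) needs an optimization routine rather than one formula.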
And the big question that, I think, might be settled in 2018 is, does deep learning and the iteration from stochastic gradient descent that we'll see pretty soon-- does it go to the minimum L1? Does it pick out an L1 solution? That's really an exciting math question. For a long time, it was standard to say that these deep learning AI codes are fantastic, but what are they doing? We don't know all the interior, but we-- when I say we, I don't mean I. Other people are getting there, and I'm going to tell you as much as I can about it when we get there. So those are pretty standard cases. m = n, m greater than n, m less than n, but not crazy. Now, the second board will have more difficult problems. Usually, because they're nearly singular in some way, the columns are nearly dependent. So that would be the columns in bad condition. You just picked a terrible basis, or nature did, or somehow you got a matrix A whose columns are virtually dependent-- almost linearly dependent. The inverse matrix is really big, but it exists. Then that's when you go in, and you fix the columns. You orthogonalize columns. Instead of accepting the columns A1, A2, up to An of the given matrix, you go in, and you find orthonormal vectors in that column space and orthonormal basis Q1 to Qn. And the two are connected by Gram-Schmidt. And the famous matrix statement of Gram-Schmidt is here are the columns of A. Here are the columns of Q, and there's a triangular matrix that connects the two. So that is the central topic of Gram-Schmidt in that idea of orthogonalizing. It just appears everywhere. It appears all over course 6 in many, many situations with different names. So that, I'm sort of saving a little bit until next time, and let me tell you why. Because just the organization of Gram-Schmidt is interesting. So Gram-Schmidt, you could do the normal way. So that's what I teach in 18.06. Just take every column as it comes. Subtract off projections onto their previous stuff. 
Get it orthogonal to the previous guys. Normalize it to be a unit vector. Then you've got that column. Go on. So I say that again, and then I'll say it again two days from now. So Gram-Schmidt, the idea is you take the columns-- you say the second orthogonal vector, Q2, will be some combination of columns 1 and 2, orthogonal to the first. Lots to do. And there's another order, which is really the better order to do Gram-Schmidt, and it allows you to do column pivoting. So this is my topic for next time, to see Gram-Schmidt more carefully. Column pivoting means the columns might not come in a good order, so you allow yourself to reorder them. We know that you have to do that for elimination. In elimination, it would be rows. So elimination, we would have the matrix A, and we take the first row as the first pivot row, and then the second row, and then the third row. But if the pivot is too small, then reorder the rows. So it's row ordering that comes up in elimination. And Matlab just systematically says, OK, that's the pivot that's coming up. The third pivot comes up out of the third row. But Matlab says look down that whole third column for a better pivot, a bigger pivot. Switch to a row exchange. So there are lots of permutations then. You end up with something there that permutes the rows, and then that gets factored into LU. So I'm saying something about elimination that's just sort of a side comment that you would never do elimination without considering the possibility of row exchanges. And then this is Gram-Schmidt orthogonalization. So this is the LU world. Here is the QR world, and here, it happens to be columns that you're permuting. So that's coming. This is section 2.2, now. But there's more. 2.2 has quite a bit in it, including number 0, the pseudo inverse, and including some of these things. Actually, this will be also in 2.2. And maybe this is what I'm saying more about today. So I'll put a little star for today, here. What do you do? 
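The column-at-a-time Gram-Schmidt he describes, subtract projections onto the previous q's, then normalize, can be sketched in a few lines. This is the classical version for illustration only; modified Gram-Schmidt or Householder QR is preferred numerically:

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt: factor A = QR with orthonormal columns in Q."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # component along earlier q_i
            v -= R[i, j] * Q[:, i]        # subtract that projection
        R[j, j] = np.linalg.norm(v)       # assumes independent columns
        Q[:, j] = v / R[j, j]             # normalize to a unit vector
    return Q, R

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q, R = gram_schmidt(A)
assert np.allclose(Q.T @ Q, np.eye(2))    # orthonormal columns
assert np.allclose(Q @ R, A)              # A = QR, R upper triangular
```

The triangular R records exactly the projection coefficients, which is why A = QR falls out of the process.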
So this is a case where the matrix is nearly singular. You're in danger. Its inverse is going to be big-- unreasonably big. And I wrote inverse problems there, because an inverse problem is a type of problem with an application that you often need to solve or that engineering and science have to solve. So I'll just say a little more about that, but that's a typical application in which you're near singular. Your matrix isn't good enough to invert. Well, of course, you could always say, well, I'll just use the pseudo inverse, but numerically, that's like cheating. You've got to get in there and do something about it. So inverse problems would be examples. Actually, as I write that, I think that would be a topic that I should add to the list of potential topics for a three-week project. Look up a book on inverse problems. So what do I mean by an inverse problem? I'll just finish this thought. What's an inverse problem? Typically, you know about a system, say a network, RLC network, and you give it a voltage or current. You give it an input, and you find the output. You find out what current flows, what the voltages are. But inverse problems are-- suppose you know the response to different voltages. What was the network? You see the problem? Let me say it again. Discover what the network is from its outputs. So that turns out to typically be a problem that gives nearly singular matrices. That's a difficult problem. A lot of nearby networks would give virtually the same output. So you have a matrix that's nearly singular. It's got singular values very close to 0. What do you do then? Well, the world of inverse problems thinks of adding a penalty term, some kind of a penalty term. When I minimize this thing just by itself, in the usual way, A transpose, it has a giant inverse. The matrix A is badly conditioned. It takes vectors almost to 0. So that A transpose has got a giant inverse, and you're at risk of losing everything to round off. So this is the solution.
You could call it a cheap solution, but everybody uses it. So I won't put that word on videotape. But that sort of resolves the problem, but then the question-- it shifts the problem, anyway, to what number-- what should be the penalty? How much should you penalize it? You see, by adding that, you're going to make it invertible. And if you make this bigger, and bigger, and bigger, it's more and more well-conditioned. It resolves the trouble, here. And like today, I'm going to do more with that. So with that, I'll stop there and pick it up after saying something about 6 and 7. I hope this is helpful. It was helpful to me, certainly, to see all these possibilities and to write down what the symptom is. It's like a linear equation doctor. Like you look for the symptoms, and then you propose something at CVS that works or doesn't work. But you do something about it. So when the problem is too big-- up to now, the problems have not been giant out of core. But now, when it's too big-- maybe it's still in core but really big-- then this is in 2.1. So that's to come back to. The word I could have written in here, if I was just going to write one word, would be iteration. Iterative methods, meaning you take a step like-- the conjugate gradient method is the hero of iterative methods. And then that name I erased is Krylov, and there are other names associated with iterative methods. So that's the section that we passed over just to get rolling, but we'll come back to. So then that one, you never get the exact answer, but you get closer and closer. If the iterative method is successful, like conjugate gradients, you get pretty close, pretty fast. And then you say, OK, I'll take it. And then finally, way too big, like nowhere. You're not in core. Just your matrix-- you just have a giant, giant problem, which, of course, is happening these days. And then one way to do it is your matrix. You can't even look at the matrix A, much less A transpose. A transpose would be unthinkable.
You couldn't do it in a year. So randomized linear algebra has popped up, and the idea there, which we'll see, is to use probability to sample the matrix and work with your samples. So if the matrix is way too big, but not too crazy, so to speak, then you could sample the columns and the rows, and get an answer from the sample. See, if I sample the columns of a matrix, I'm getting-- so what does sampling mean? Let me just complete this, say, add a little to this thought. Sample a matrix. So I have a giant matrix A. It might be sparse, of course. I didn't distinguish sparse matrices over there. That would be another thing. So if I just take random x's, more than one, but not the full n dimensions, those will give me random guys in the column space. And if the matrix is reasonable, it won't take too many to have a pretty reasonable idea of what that column space is like, and with it, the right-hand side. So this world of randomized linear algebra has grown because it had to. And of course, any statement can never say for sure you're going to get the right answer, but using the inequalities of probability, you can often say that the chance of being way off is less than 1 in 2 to the 20th or something. So the answer is, in reality, you get a good answer. That is the end of this chapter, 2.4. So this is all chapter 2, really. The iterative methods are in 2.1. Most of this is in 2.2. Big is 2.3, and then really big is randomized in 2.4. So now, where are we? You were going to let me know or not if this is useful to see. But you sort of see what the real-life problems are. And of course, we're highly, especially interested in getting to the deep learning examples, which are underdetermined. Then when you're underdetermined, you've got many solutions, and the question is, which one is a good one? And in deep learning, I just can't resist saying another word. So there are many solutions. What to do?
Well, you pick some algorithm, like steepest descent, which is going to find a solution. So you hope it's a good one. And what does a good one mean verses a not good one? They're all solutions. A good one means that when you apply it to the test data that you haven't yet seen, it gives good results on the test data. The solution has learned something from the training data, and it works on the test data. So that's the big question in deep learning. How does it happen that you, by doing gradient descent or whatever algorithm-- how does that algorithm bias the solution? It's called implicit bias. How does that algorithm bias a solution toward a solution that generalizes, that works on test data? And you can think of algorithms which would approach a solution that did not work on test data. So that's what you want to stay away from. You want the ones that work. So there's very deep math questions there, which are kind of new. They didn't arise until they did. And we'll try to save some of what's being understood. Can I focus now on, for probably the rest of today, this case, when the matrix is nearly singular? So you could apply elimination, but it would give a poor result. So one solution is the SVD. I haven't even mentioned the SVD, here, as an algorithm, but of course, it is. The SVD gives you an answer. Boy, where should that have gone? Well, the space over here, the SVD. So that produces-- you have A = U sigma V transposed, and then A inverse is V sigma inverse U transposed. So we're in the case, here. We're talking about number 5. Nearly singular, where sigma has some very small, singular values. Then sigma inverse has some very big singular values. So you're really in wild territory here with very big inverses. So that would be one way to do it. But this is a way to regularize the problem. So let's just pay attention to that. So suppose I minimize the sum of A x minus b squared and delta squared times the size of x squared. And I'm going to use the L2 norm. 
It's going to be a least squares with penalty, so of course, it's the L2 norm here, too. Suppose I solve that for a delta. For some, I have to choose a positive delta. And when I choose a positive delta, then I have a solvable problem. Even if this goes to 0, or A does crazy things, this is going to keep me away from singular. In fact, what equation does that lead to? So that's a least squares problem with an extra penalty term. So it would come, I suppose. Let's see, if I write the equations A delta I, x equals b 0, maybe that is the least squares equation-- the usual, normal equation-- for this augmented system. Because what's the error here? This is the new big A-- A star, let's say. X equals-- this is the new b. So if I apply least squares to that, what do I do? I minimize the sum of squares. So least squares would minimize A x minus b squared. That would be from the first components. And delta squared x squared from the last component, which is exactly what we said we were doing. So in a way, this is the equation that the penalty method is solving. And one question, naturally, is, what should delta be? Well, that question's beyond us, today. It's a balance of what you can believe, and how much noise is in the system, and everything. That choice of delta-- what we could ask is a math question. What happens as delta goes to 0? So suppose I solve this problem. Let's see, I could write it differently. What would be the equation, here? This part would give us the A transpose, and then this part would give us just the identity, x equals A transpose b, I think. Wouldn't that be? So really, I've written here-- what that is is A star transpose A star. This is least squares on this gives that equation. So all of those are equivalent. All of those would be equivalent statements of what the penalized problem is that you're solving. And then the question is, as delta goes to 0, what happens? Of course, something. When delta goes to 0, you're falling off the cliff. 
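The equivalence just derived, that the penalized normal equation $(A^T A + \delta^2 I)\,x = A^T b$ is ordinary least squares applied to the augmented system with rows $A$ and $\delta I$, can be checked directly. Random data and small sizes, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

m, n, delta = 20, 5, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Route 1: the penalized normal equation (A^T A + delta^2 I) x = A^T b.
x_normal = np.linalg.solve(A.T @ A + delta**2 * np.eye(n), A.T @ b)

# Route 2: plain least squares on the augmented system [A; delta I] x = [b; 0],
# whose residual is exactly ||Ax - b||^2 + delta^2 ||x||^2.
A_aug = np.vstack([A, delta * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
x_aug = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

assert np.allclose(x_normal, x_aug)
```

The augmented form is often the better way to compute, since it never forms A transpose A and so avoids squaring the condition number.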
Something quite different is suddenly going to happen, there. Maybe we could even understand this question with a 1 by 1 matrix. I think this section starts with a 1 by 1. Suppose A is just a number. Maybe I'll just put that on this board, here. Suppose A is just a number. So what am I going to call that number? Just 1 by 1. Let me call it sigma, because it's certainly the leading singular value. So what's my equation that I'm solving? A transpose A would be sigma squared plus delta squared, 1 by 1, x-- should I give some subscript here? I should, really, to do it right. This is the solution for a given delta. So that solution will exist. Fine. This matrix is certainly invertible. That's positive semidefinite, at least. That's positive semidefinite, and then what about delta squared I? It is positive definite, of course. It's just the identity with a factor. So this is a positive definite matrix. I certainly have a solution. And let me keep going on this 1 by 1 case. This would be A transpose. A is just a sigma. I think it's just sigma b. So A is 1 by 1, and there are two cases, here-- Sigma bigger than 0, or sigma equals 0. And in either case, I just want to know what's the limit. So the answer x-- let me just take the right hand side. Well, that's fine. Am I computing OK? Using the penalize thing on a 1 by 1 problem, which you could say is a little bit small-- so solving this equation or equivalently minimizing this, so here, I'm finding the minimum of-- A was sigma x minus b squared plus delta squared x squared. You see it's just 1 by 1? Just a number. And I'm hoping that calculus will agree with linear algebra here, that if I find the minimum of this-- so let me write it out. Sigma squared x squared and delta squared x squared, and then minus 2 sigma xb, and then plus b squared. And now, I'm going to find the minimum, which means I'd set the derivative to 0. So I get 2 sigma squared and 2 delta squared. 
I get a two here, and this gives me the x derivative as 2 sigma b. So I get a 2 there, and I'm OK. I just cancel both 2s, and that's the equation. So I can solve that equation. X is sigma over sigma squared plus delta squared b. So it's really that quantity. I want to let delta go to 0. So again, what am I doing here? I'm taking a 1 by 1 example just to see what happens in the limit as delta goes to 0. What happens? So I just have to look at that. What is the limit of that thing in a circle, as delta goes to 0? So I'm finding out for a 1 by 1 problem what a penalized least squares problem, ridge regression, all over the place-- what happens? So what happens to that number as delta goes to 0? 1 over sigma. So now, let delta go to 0. So that approaches 1 over sigma, because delta disappears. Sigma over sigma squared, 1 over sigma. So it approaches the inverse, but what's the other possibility, here? The other possibility is that sigma is 0. I didn't say whether this matrix, this 1 by 1 matrix, was invertible or not. If sigma is not 0, then I go to 1 over sigma. If sigma is really small, it will take a while. Delta will have to get small, small, small, even compared to sigma, until finally, that term goes away, and I just have 1 over sigma. But what if sigma is 0? Sorry to get excited about 0. Who would get excited about 0? So this is the case when this is 1 over sigma, if sigma is positive. And what does it approach if sigma is 0? 0! Because this is 0, the whole problem was like disappeared, here. The sigma was 0. Here is a sigma. So anyway, if sigma is 0, then I'm getting 0 all the time. But I have a decent problem, because the delta squared is there. I have a decent problem until the last minute. My problem falls apart. Delta goes to 0, and I have a 0 equals 0 problem. I'm lost. But the point is the penalty kept me positive. It kept me with his delta squared term until the last critical moment. It kept me positive even if that was 0. 
If that is 0, and this is 0, I still have something here. I still have a problem to solve. And what's the limit then? So 1 over sigma if sigma is positive. And what's the answer if sigma is not positive? It's 0. Just tell me. I'm getting 0. I get 0 all the way, and I get 0 in the limit. And now, let me just ask, what have I got here? What is this sudden bifurcation? Do I recognize this? The inverse in the limit as delta goes to 0 is either 1 over sigma, if that makes sense, or it's 0, which is not like 1 over sigma. 1 over sigma-- as sigma goes to 0, this thing is getting bigger and bigger. But at sigma equals 0, it's 0. You see, that's a really strange kind of a limit. Now, it would be over there. What have I found here, in this limit? Say it again, because that was exactly right. The pseudo inverse. So this system-- choose delta greater than 0, then delta going to 0. The solution goes to the pseudo inverse. That's the key fact. When delta is really, really small, then this behaves in a pretty crazy way. If delta is really, really small, then sigma is bigger, or it's 0. If it's bigger, you go this way. If it's 0, you go that way. So that's the message, and this is penalized least squares: as the penalty gets smaller and smaller, it approaches the correct answer, the always correct answer, with that sudden split between 0 and not 0 that we associate with the pseudo inverse. Of course, in a practical case, you're trying to find the resistances and inductances in a circuit by trying the circuit, and looking at the output b, and figuring out what input. So the unknown x is the unknown system parameters. Not the voltage and current, but the resistance, and inductance, and capacitance. I've only proved that in the 1 by 1 case. You may say that's not much of a proof. In the 1 by 1 case, we can see it happen in front of our eyes. So really, a step I haven't taken here is to complete that to any matrix A. So here's the statement, then. That's the statement.
So that's the statement. For any matrix A, this matrix, A transpose A plus delta squared inverse times A transpose-- that's the solution matrix to our problem. That's what I wrote down up there. I take the inverse and pop it over there. That approaches A plus, the pseudo inverse. And that's what we just checked for 1 by 1. For 1 by 1, this was sigma over sigma squared plus delta squared. And it went either to 1 over sigma or to 0. It split in the limit. It shows that limits can be delicate. The limit-- as delta goes to 0, this thing is suddenly discontinuous. It's this number that is growing, and then suddenly, at 0, it falls back to 0. Anyway, that would be the statement. Actually, statisticians discovered the pseudo inverse independently of the linear algebra history of it, because statisticians did exactly that. To regularize the problem, they introduced a penalty and worked with this matrix. So statisticians were the first to think of that as a natural thing to do in a practical case-- add a penalty. So this is adding a penalty, but remember that we stayed with L2 norms, staying with L2, least squares. We could ask, what happens? Suppose the penalty is the L1 norm. I'm not up to do this today. Suppose I minimize that. Maybe I'll do L2, but I'll do the penalty guy in the L1 norm. I'm certainly not an expert on that. Or you could even think just that power. So that would have a name. A statistician invented this. It's called the Lasso in the L1 norm, and it's a big deal. Statisticians like the L1 norm, because it gives sparse solutions. It gives more genuine solutions without a whole lot of little components in the answer. So this was an important step. Let me just say again where we are in that big list. The two important ones that I haven't done yet are these iterative methods in 2.1. So that's like conventional linear algebra, just how to deal with a big matrix, maybe with some special structure. That's what numerical linear algebra is all about. 
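The limit stated here, that $(A^T A + \delta^2 I)^{-1} A^T$ approaches the pseudoinverse $A^+$ as $\delta \to 0$, can be watched numerically on a toy singular matrix:

```python
import numpy as np

# Rank-1 (singular) matrix: no ordinary inverse exists, but A^+ does.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

A_plus = np.linalg.pinv(A)      # for this symmetric rank-1 A, A^+ = A / 25

# Shrink delta and watch the penalized inverse close in on A^+.
gaps = []
for delta in (1e-1, 1e-2, 1e-3):
    ridge_inv = np.linalg.solve(A.T @ A + delta**2 * np.eye(2), A.T)
    gaps.append(np.linalg.norm(ridge_inv - A_plus))

assert gaps[0] > gaps[1] > gaps[2]   # the gap shrinks with delta
assert gaps[2] < 1e-6                # and is already tiny at delta = 1e-3
```

On the nonzero singular direction the factor is sigma over sigma squared plus delta squared, heading to 1 over sigma; on the null direction it is 0 over delta squared, which stays 0, exactly the split he describes.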
And then Gram-Schmidt with or without pivoting, which is a workhorse of numerical computing, and I think I better save that for next time. So this is the one I picked for this time. And we saw what happened in L2. Well, we saw it for 1 by 1. Would you want to extend to prove this for any A, going beyond 1 by 1? How would you prove such a thing for any A? I guess I'm not going to do it. It's too painful, but how would you do it? You would use the SVD. If you want to prove something about matrices, about any matrix, the SVD is the best thing you could have-- the best tool you could have. I can write this in terms of the SVD. I just plug in A equals whatever the SVD tells me to put in there. U sigma V transposed. Plug it in there, simplify it using the fact that these are orthogonal. If I have any good luck, it'll get an identity somewhere from there and an identity somewhere from there. And it will all simplify. It will all diagonalize. That's what the SVD really does: it turns my messy problem into a problem about the diagonal matrix, sigma in the middle. So I might as well put sigma in the middle. Yeah, why not? Before we give up on it-- a special case of that, but really, the genuine case would be when A is sigma. Sigma transpose sigma plus delta squared I inverse times sigma transpose approaches the pseudo inverse, sigma plus. And the point is the matrix sigma here is diagonal. Oh, I'm practically there, actually. Why am I close to being able to read this off? Well, everything is diagonal here. Diagonal, diagonal, diagonal. And what's happening on those diagonal entries? So you had to take my word that when I plugged in the SVD, the U and the V got separated out to the far left and the far right. And it was that that stayed in the middle. So it's really this is the heart of it. And say, well, that's a diagonal matrix. So I'm just looking at what happens on each diagonal entry, and which problem is that?
The question of what's happening on a typical diagonal entry of this thing is what question? The 1 by 1 case! The 1 by 1, because each entry in the diagonal is not even noticing the others. So that's the logic, and it would be in the notes. Prove it first for 1 by 1, then for diagonal matrices, and finally for any A, using the SVD with U and V transpose to get out of the way and bring us back to here. So that's the theory, but really, I guess I'm thinking that by far the most important message in today's lecture is in this list of different types of problems that appear and different ways to work with them. And we haven't done Gram-Schmidt, and we haven't done iteration. So this chapter is a survey of-- well, more than a survey of what numerical linear algebra is about. And I haven't done random, yet. Sorry, that's coming, too. So three pieces are still to come, but let's take the last two minutes off and call it a day.
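For a diagonal Sigma the reduction is literal: each diagonal entry of (Sigma^T Sigma + delta^2 I)^(-1) Sigma^T is sigma_i/(sigma_i^2 + delta^2), one 1 by 1 problem per entry. A plain-Python sketch (the names are mine):

```python
def penalized_diagonal(sigmas, delta):
    # Diagonal entries of (Sigma^T Sigma + delta^2 I)^(-1) Sigma^T
    # for a diagonal Sigma: each one is an independent 1-by-1 problem.
    return [s / (s**2 + delta**2) for s in sigmas]

sigmas = [5.0, 2.0, 0.0]    # one zero singular value
print(penalized_diagonal(sigmas, 1e-8))
# The nonzero entries head for 1/sigma_i while the zero entry stays 0 --
# exactly the diagonal of the pseudo inverse Sigma^+.
```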
# How do I find the asymptotes of f(x) = (3x^2 + 2x - 1 )/ (x + 1)? Are there even any asymptotes? May 28, 2018 no asymptotes #### Explanation: Given: $f \left(x\right) = \frac{3 {x}^{2} + 2 x - 1}{x + 1}$ This is a rational function of the form $\frac{N \left(x\right)}{D \left(x\right)}$. Factor the numerator: $f \left(x\right) = \frac{3 {x}^{2} + 2 x - 1}{x + 1} = \frac{\left(3 x - 1\right) \left(x + 1\right)}{x + 1}$ There is a removable discontinuity (a hole) at $x = - 1$, since both the numerator and the denominator have the same factor $x + 1$. There are no vertical, horizontal or slant asymptotes, because the common factor cancels instead of leaving a zero in the denominator. The graph of the function is the line $y = 3 x - 1$ with a hole at $x = - 1$. Typically you will not see the hole unless you perform a Trace on a graphing calculator: at $X = - 1$ the calculator shows no $Y$ value.
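Since $f$ agrees with $y = 3 x - 1$ at every $x \ne - 1$, the hole can also be seen numerically (a plain-Python sketch; the function names are mine):

```python
def f(x):
    # the original rational function
    return (3*x**2 + 2*x - 1) / (x + 1)

def line(x):
    # the simplified form after cancelling the common factor (x + 1)
    return 3*x - 1

# f is undefined at x = -1 itself, but on either side it matches the line,
# so the values approach line(-1) = -4: a hole, not an asymptote.
for x in (-1.001, -0.999):
    print(x, f(x), line(x))
```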
# Thread: Is this wrong? 1. ## Is this wrong? $ T(n) = T(n-1) + c + T(n - 1) = 2T(n - 1) + c = 2(2(T(n - 2) + c) + c) $ It's the Towers of Hanoi algorithm in case that helps. 2. Hello, alyosha2! $T(n) \:= \:T(n-1) + c + T(n - 1) \;=\; 2T(n - 1) + c \;=\; 2\big(2(T(n - 2) + c) + c\big)$ It's the "Towers of Hanoi" algorithm in case that helps. Not sure what the question is . . . Consider the first few values of the sequence. . . $n$ = number of disks, $T(n)$ = number of moves. . . $\begin{array}{|c|c|} n & T(n) \\ \hline 1 & 1 \\ 2 & 3 \\ 3 & 7 \\ 4 & 15 \\ 5 & 31 \\ \vdots & \vdots \end{array}$ The recursive form is: . $T(n) \;=\;2\!\cdot\!T(n-1) + 1$ . . The closed form is: . $T(n) \;=\;2^n - 1$ 3. Originally Posted by Soroban Hello, alyosha2! Not sure what the question is . . . Consider the first few values of the sequence. . . $n$ = number of disks, $T(n)$ = number of moves. . . $\begin{array}{|c|c|} n & T(n) \\ \hline 1 & 1 \\ 2 & 3 \\ 3 & 7 \\ 4 & 15 \\ 5 & 31 \\ \vdots & \vdots \end{array}$ The recursive form is: . $T(n) \;=\;2\!\cdot\!T(n-1) + 1$ . . The closed form is: . $T(n) \;=\;2^n - 1$ I was meaning more about the parentheses. Shouldn't it be $ 2(2T(n-2) + c) + c $ in the third part? 4. Originally Posted by alyosha2 I was meaning more about the parentheses. Shouldn't it be $ 2(2T(n-2) + c) + c $ in the third part? That is what you get by applying the iteration once more, yes (note it simplifies down to $T_n = 4T_{n-2} + 3c$). CB 5. Originally Posted by CaptainBlack That is what you get by applying the iteration once more, yes (note it simplifies down to $T_n = 4T_{n-2} + 3c$). CB So in the original post it is wrong then (it's from a textbook and I couldn't make sense of it). I have a follow-up question about the next part; should I post that in a new thread? 6.
Originally Posted by alyosha2 so in the original post it is wrong then (it's from a text book and i couldn't make sense of it) i have a follow up question about the next part, should i post that in a new thread? Yes the original is wrong. If the follow up is more or less directly related to or depends on this, post it here. If it stands up on its own, post it in a new thread. You will get a reply faster that way. CB
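For reference, the correct recursion $T(n) \;=\; 2T(n-1) + 1$ and the closed form $2^n - 1$ from the thread can be checked against each other in plain Python (the function name is mine):

```python
def hanoi_moves(n):
    # T(n) = 2*T(n-1) + 1 with T(1) = 1
    return 1 if n == 1 else 2 * hanoi_moves(n - 1) + 1

# The closed form 2^n - 1 matches the recursion for every n tried.
for n in range(1, 11):
    assert hanoi_moves(n) == 2**n - 1

print([hanoi_moves(n) for n in range(1, 6)])   # [1, 3, 7, 15, 31]
```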
### Class 12 Accountancy Sample Paper Solution - CBSE | 2022 NUMERICAL QUESTION 1. Gain / loss on revaluation at the time of change in profit sharing ratio of existing partners is shared by ___(i)______ whereas in case of admission of a partner it is shared by____(ii)_____. (A) (i) Remaining Partners, (ii) All Partners. (B) (i) All Partners, (ii) Old partners. (C) (i) New Partner, (ii) All partner. (D) (i) Sacrificing Partner, (ii) Incoming partner. 2. Calculate the amount of second & final call when Abhijit Ltd. issues Equity shares of ₹10 each at a premium of 40% payable on Application ₹3, On Allotment ₹5, On First Call ₹2. (A) Second & final call ₹3. (B) Second & final call ₹4. (C) Second & final call ₹1. (D) Second & final call ₹14. 3. Anish Ltd. issued a prospectus inviting applications for 2,000 shares. Applications were received for 3,000 shares and pro-rata allotment was made to the applicants of 2,400 shares. If Dhruv has been allotted 40 shares, how many shares must he have applied for? (A) 40 (B) 44 (C) 48 (D) 52 4. Ambrish Ltd. offered 2,00,000 Equity Shares of ₹10 each, of these 1,98,000 shares were subscribed. The amount was payable as ₹3 on application, ₹4 on allotment and balance on first call. If a shareholder holding 3,000 shares has defaulted on first call, what is the amount of money received on first call? (A) ₹9,000. (B) ₹5,85,000. (C) ₹5,91,000. (D) ₹6,09,000. 5. What will be the correct sequence of events? (i) Forfeiture of shares. (ii) Default on Calls. (iii) Re-issue of shares. (iv) Amount transferred to capital reserve. Options: (A) (i), (iv), (ii), (iii) (B) (ii), (iv), (i), (iii) (C) (ii), (i), (iii), (iv) (D) (iii), (iv), (i), (ii) 6. Arun and Vijay are partners in a firm sharing profits and losses in the ratio of 5:1. An extract of their Balance Sheet showed, under Assets:
Machinery ₹40,000. If the value of machinery reflected in the balance sheet is overvalued by 33 1/3%, find out the value of Machinery to be shown in the new Balance Sheet: (A) ₹44,000 (B) ₹48,000 (C) ₹32,000 (D) ₹30,000 7. Which of the following is true regarding Salary to a partner when the firm maintains fluctuating capital accounts? (A) Debit Partner’s Loan A/c and Credit P & L Appropriation A/c. (B) Debit P & L A/c and Credit Partner’s Capital A/c. (C) Debit P & L Appropriation A/c and Credit Partner’s Current A/c. (D) Debit P & L Appropriation A/c and Credit Partner’s Capital A/c. 8. At the time of reconstitution of a partnership firm, recording of an unrecorded liability will lead to: (A) Gain to the existing partners (B) Loss to the existing partners (C) Neither gain nor loss to the existing partners (D) None of the above 9. E, F and G are partners sharing profits in the ratio of 3:3:2. According to the partnership agreement, G is to get a minimum amount of ₹80,000 as his share of profits every year and any deficiency on this account is to be personally borne by E. The net profit for the year ended 31st March 2021 amounted to ₹3,12,000. Calculate the amount of deficiency to be borne by E. (A) ₹1,000 (B) ₹4,000 (C) ₹8,000 (D) ₹2,000 10. At the time of admission of a partner, what will be the effect of the following information? Balance in Workmen compensation reserve ₹40,000. Claim for workmen compensation ₹45,000. (A) ₹45,000 Debited to the Partner’s capital Accounts. (B) ₹40,000 Debited to Revaluation Account. (C) ₹5,000 Debited to Revaluation Account. (D) ₹5,000 Credited to Revaluation Account. 11. In the absence of partnership deed, a partner is entitled to an interest on the amount of additional capital advanced by him to the firm at a rate of: (A) entitled for 6% p.a. on their additional capital, only when there are profits. (B) entitled for 10% p.a. on their additional capital (C) entitled for 12% p.a.
on their additional capital (D) not entitled for any interest on their additional capitals. 12. Revaluation of assets at the time of reconstitution is necessary because their present value may be different from their: (A) Market Value. (B) Net Value. (C) Cost of Asset (D) Book Value. 13. If average capital employed in a firm is ₹8,00,000, average of actual profits is ₹1,80,000 and normal rate of return is 10%, then value of goodwill as per capitalization of average profits is: (A) ₹10,00,000 (B) ₹18,00,000 (C) ₹80,00,000 (D) ₹78,20,000 14. In which of the following situations does the Companies Act 2013 allow for issue of shares at discount? (A) Issued to vendors. (B) Issued to public. (C) Issued as sweat equity. (D) None of the above. 15. As per Section 52 of Companies Act 2013, Securities Premium Reserve cannot be utilised for: (A) Writing off capital losses. (B) Issue of fully paid bonus shares. (C) Writing off discount on issue of securities. (D) Writing off preliminary expenses. 16. Net Assets minus Capital Reserve is: (A) Purchase consideration (B) Goodwill (C) Total assets (D) Liquid assets 17. Kalki and Kumud were partners sharing profits and losses in the ratio of 5:3. On 1st April, 2021 they admitted Kaushtubh as a new partner and the new ratio was decided as 3:2:1. Goodwill of the firm was valued at ₹3,60,000. Kaushtubh couldn’t bring any amount for goodwill. The amount of goodwill to be credited to Kalki and Kumud’s Accounts will be: (A) ₹37,500 and ₹22,500 respectively (B) ₹30,000 and ₹30,000 respectively (C) ₹36,000 and ₹24,000 respectively (D) ₹45,000 and ₹15,000 respectively 18. Sarvesh, Sriniketan and Srinivas are partners in the ratio of 5:3:2. If Sriniketan’s share of profit at the end of the year amounted to ₹1,50,000, what will be Sarvesh’s share of profits? (A) ₹5,00,000. (B) ₹1,50,000. (C) ₹3,00,000. (D) ₹2,50,000. 19. Angle and Circle were partners in a firm.
Their Balance Sheet showed Furniture at ₹2,00,000; Stock at ₹1,40,000; Debtors at ₹1,62,000 and Creditors at ₹60,000. Square was admitted and new profit-sharing ratio was agreed at 2:3:5. Stock was revalued at ₹1,00,000, Creditors of ₹15,000 are not likely to be claimed, Debtors for ₹2,000 have become irrecoverable and Provision for doubtful debts to be provided @ 10%. Angle’s share in loss on revaluation amounted to ₹30,000. Revalued value of Furniture will be: (A) ₹2,17,000 (B) ₹1,03,000 (C) ₹3,03,000 (D) ₹1,83,000 20. Asha and Nisha are partners sharing profits in the ratio of 2:1. Kashish was admitted for 1/4 share of which 1/8 was gifted by Asha. The remaining was contributed by Nisha. Goodwill of the firm is valued at ₹40,000. How much amount for goodwill will be credited to Nisha’s Capital account? (A) ₹2,500. (B) ₹5,000. (C) ₹20,000. (D) ₹40,000. 21. At the time of admission of new partner Vasu, old partners Paresh and Prabhav had debtors of ₹6,20,000 and a provision for doubtful debts of ₹20,000 in their books. As per terms of admission, assets were revalued, and it was found that debtors worth ₹15,000 had turned bad and hence should be written off. Which journal entry reflects the correct accounting treatment of the above situation? 22. Given below are two statements, one labelled as Assertion (A) and the other labelled as Reason (R): Assertion (A): Transfer to reserves is shown in P & L Appropriation A/c. Reason (R): Reserves are a charge against the profits. In the context of the above statements, which one of the following is correct? Codes: (A) (A) is correct, but (R) is wrong. (B) Both (A) and (R) are correct. (C) (A) is wrong, but (R) is correct. (D) Both (A) and (R) are wrong. 23. Anubhav, Shagun and Pulkit are partners in a firm sharing profits and losses in the ratio of 2:2:1. On 1st April 2021, they decided to change their profit-sharing ratio to 5:3:2.
On that date, a debit balance of Profit & Loss A/c ₹30,000 appeared in the balance sheet and the partners decided to pass an adjusting entry for it. Which of the undermentioned options reflects the correct treatment of the above situation? (A) Shagun's capital account will be debited by ₹3,000 and Anubhav’s capital account credited by ₹3,000 (B) Pulkit's capital account will be credited by ₹3,000 and Shagun's capital account will be credited by ₹3,000 (C) Shagun's capital account will be debited by ₹30,000 and Anubhav’s capital account credited by ₹30,000 (D) Shagun's capital account will be debited by ₹3,000 and Anubhav’s and Pulkit’s capital account credited by ₹2,000 and ₹1,000 respectively. 24. A, B and C are partners, their partnership deed provides for interest on drawings at 8% per annum. B withdrew a fixed amount in the middle of every month and his interest on drawings amounted to ₹4,800 at the end of the year. What was the amount of his monthly drawings? (A) ₹10,000. (B) ₹5,000. (C) ₹1,20,000. (D) ₹48,000. 25. Abhay and Baldwin are partners sharing profit in the ratio 3:1. On 31st March 2021, the firm’s net profit is ₹1,25,000. The partnership deed provided interest on capital to Abhay and Baldwin of ₹15,000 & ₹10,000 respectively, and Interest on drawings for the year amounted to ₹6,000 from Abhay and ₹4,000 from Baldwin. Abhay is also entitled to commission @10% on net divisible profits. Calculate the profit to be transferred to Partners’ Capital A/cs. (A) ₹1,00,000 (B) ₹1,10,000 (C) ₹1,07,000 (D) ₹90,000 26. Given below are two statements, one labelled as Assertion (A) and the other labelled as Reason (R): Assertion (A): Revaluation A/c is prepared at the time of Admission of a partner. Reason (R): It is required to adjust the values of assets and liabilities at the time of admission of a partner, so that the true financial position of the firm is reflected. In the context of the above two statements, which of the following is correct?
Codes: (A) Both (A) and (R) are correct and (R) is the correct reason of (A). (B) Both (A) and (R) are correct but (R) is not the correct reason of (A). (C) Only (R) is correct. (D) Both (A) and (R) are wrong. 27. Apaar Ltd. forfeited 4,000 shares of ₹20 each, fully called up, on which only application money of ₹6 has been paid. Out of these, 2,000 shares were reissued and ₹8,000 has been transferred to capital reserve. Calculate the rate at which these shares were reissued. (A) ₹20 Per share (B) ₹18 Per share (C) ₹22 Per share (D) ₹8 Per share 28. Which of the following statements is/are true? (i) Authorized Capital < Issued Capital (ii) Authorized Capital ≥ Issued Capital (iii) Subscribed Capital ≤ Issued Capital (iv) Subscribed Capital > Issued Capital (A) (i) only (B) (i) and (iv) Both (C) (ii) and (iii) Both (D) (ii) only 29. Mickey, Tom and Jerry were partners in the ratio of 5:3:2. On 31st March 2021, their books reflected a net profit of ₹2,10,000. As per the terms of the partnership deed they were entitled to interest on capital which amounted to ₹80,000, ₹60,000 and ₹40,000 respectively. Besides this a salary of ₹60,000 each was payable to Mickey and Tom. Calculate the ratio in which the profits would be appropriated. (A) 1:1:1 (B) 5:3:2 (C) 7:6:2 (D) 4:3:2 30. Mohit had been allotted 600 shares by Govinda Ltd. on pro-rata basis, which had issued two shares for every three applied. He had paid application money of ₹3 per share and could not pay allotment money of ₹5 per share. First and final call of ₹2 per share was not yet made by the company. His shares were forfeited. The following entry will be passed: Equity Share Capital A/c    Dr     ₹X To Share Forfeited A/c                    ₹Y To Equity Share Allotment A/c      ₹Z Here X, Y and Z are: (A) ₹6,000; ₹2,700; ₹3,000 respectively. (B) ₹9,000; ₹2,700; ₹4,500 respectively. (C) ₹4,800; ₹2,700; ₹2,100 respectively. (D) ₹7,200; ₹2,700; ₹4,500 respectively. 31.
Given below are two statements, one labelled as Assertion (A) and the other labelled as Reason (R): Assertion (A): In case of shares issued on Pro–rata basis, excess money received at the time of application can be utilised till allotment only. Reason (R): Company has to pay interest on calls in advance @12% p.a. for amount adjusted towards calls (if any). In the context of the above two statements, which of the following is correct? Codes: (A) Both (A) and (R) are true, but (R) is not the correct explanation of (A). (B) Both (A) and (R) are true and (R) is a correct explanation of (A). (C) Both (A) and (R) are false. (D) (A) is false, but (R) is true. 32. Ajay and Vinod are partners in the ratio of 3:2. Their fixed capitals were ₹3,00,000 and ₹4,00,000 respectively. After the close of accounts for the year it was observed that the Interest on Capital which was agreed to be provided at 5% p.a. was erroneously provided at 10% p.a. By what amount will Ajay’s account be affected if partners decide to pass an adjustment entry for the same? (A) Ajay’s Current A/c will be Debited by ₹15,000. (B) Ajay’s Current A/c will be Credited by ₹6,000. (C) Ajay’s Current A/c will be Credited by ₹35,000. (D) Ajay’s Current A/c will be Debited by ₹20,000. 33. Vishnu Ltd. forfeited 20 shares of ₹10 each, ₹8 called up, on which John had paid application and allotment money of ₹5 per share. Of these, 15 shares were reissued to Parker as fully paid up for ₹6 per share. What is the balance in the Share Forfeiture Account after the relevant amount has been transferred to Capital Reserve Account? (A) ₹0 (B) ₹5 (C) ₹25 (D) ₹100 34. Newfound Ltd. took over the business of Old Land Ltd. and paid for it by issue of 30,000 Equity Shares of ₹100 each at par along with 6% Preference Shares of ₹1,00,00,000 at a premium of 5% and a cheque of ₹8,00,000. What was the total agreed purchase consideration payable to Old Land Ltd.? (A) ₹1,05,00,000. (B) ₹1,43,00,000. (C) ₹1,40,00,000.
(D) ₹1,35,00,000. 35. A and B are partners in the ratio of 3:2. C is admitted as a partner and he takes ¼th of his share from A. B gives 3/16 from his share to C. What is the share of C? (A) 1/4 (B) 1/16 (C) 1/6 (D) 1/16 36. Krishan Ltd. has Issued Capital of 20,00,000 Equity shares of ₹10 each. Till date ₹8 per share has been called up and the entire amount received, except calls of ₹4 per share on 800 shares and ₹3 per share from another holder who held 500 shares. What will be the amount appearing as ‘Subscribed but not fully paid capital’ in the balance sheet of the company? (A) ₹2,00,00,000 (B) ₹1,95,99,000 (C) ₹1,59,95,300 (D) ₹1,99,95,300 Question no.’s 37 and 38 are based on the hypothetical situation given below. Bright Star Limited is engaged in manufacture of high-end medical equipment. Considering the prospects of high growth in this segment the company has decided to expand, and for this purpose an additional investment of ₹50,00,00,000 is required. Directors have decided that 20% of this requirement would be financed by raising long term debts and the balance by issue of Equity shares. As per the memorandum of association of the company the face value of Equity shares is ₹100 each. Also, considering the market standing of the company these shares would be issued at a premium of 25%. Directors decided to issue sufficient shares to collect the desired amount (including premium). The prospectus was issued to the public, and the issue was oversubscribed by 2,00,000 shares, which were issued letters of regret. Answer the below mentioned questions considering that the entire amount was payable on application. 37. What is the total amount collected on application? (A) ₹42,50,00,000 (B) ₹40,00,00,000 (C) ₹32,00,00,000 (D) None of the above 38. How many Equity shares were offered for issue by Bright Star Ltd? (A) 40,00,000 shares. (B) 50,00,000 shares. (C) 35,00,000 shares. (D) 32,00,000 shares. Question no.’s 39, 40 and 41 are based on the hypothetical situation given below.
On 1st September 2020, twenty students of Modern College started their Partnership Firm in the name of “Be Safe” for selling sanitisers on digital mode. Since they were good friends of each other, they did not have any explicit agreement in place. All of them agreed to invest ₹15,000/- each as capital. The books were closed on 31st March 2021, on which date the following information was provided by the firm: 39. Calculate the amount of profits to be transferred to Profit and Loss Appropriation Account: (A) Profit ₹58,000 (B) Profit ₹44,000 (C) Profit ₹59,200 (D) Profit ₹58,700 40. On 31st March 2021, Remuneration to Partners will be provided to the partners of “Be Safe” but only out of: (A) Profits for the accounting year (B) Reserves (C) Accumulated Profits (D) Goodwill 41. On 1st December 2020 one of the partners of the firm introduced additional capital of ₹30,000 and also advanced a loan of ₹40,000 to the firm. Calculate the amount of interest that the partner will receive for the current accounting period: (A) ₹4,200 (B) ₹1,400 (C) ₹1,575 (D) ₹800 42. Given below are two statements, one labelled as Assertion (A) and the other labelled as Reason (R): Assertion (A): The focus of calculation of working capital revolves around managing the operating cycle of the business. Reason (R): It is because the concept of operating cycle is required to ascertain the liquidity of assets and urgency of payments to liabilities. In the context of the above two statements, which of the following is correct? Codes: (A) Both (A) and (R) are true, but (R) is not the correct explanation of (A). (B) Both (A) and (R) are true and (R) is a correct explanation of (A). (C) Both (A) and (R) are false. (D) (A) is false, but (R) is true. 43. Which of the following are included in traditional classification of ratios? (i) Liquidity Ratios. (ii) Statement of Profit and loss Ratios. (iii) Balance Sheet Ratios. (iv) Profitability Ratios. (v) Composite Ratios.
(vi) Solvency Ratios. (A) (ii), (iii) and (v) (B) (i), (iv) and (vi) (C) (i), (ii) and (vi) (D) All (i), (ii), (iii), (iv), (v), (vi) 44. The following groups of ratios primarily measure risk: (A) solvency, activity, and profitability (B) liquidity, efficiency, and solvency (C) liquidity, activity, and profitability (D) liquidity, solvency, and profitability 45. Which one of the following is correct? (i) A ratio is an arithmetical relationship of one number to another number. (ii) Liquid ratio is also known as acid test ratio. (iii) Ideally accepted current ratio is 1:1. (iv) Debt equity ratio is the relationship between outsider’s funds and shareholders’ funds. In the context of the above statements, which of the following options is correct? (A) All (i), (ii), (iii) and (iv) are correct. (B) Only (i), (ii) and (iv) are correct. (C) Only (ii), (iii) and (iv) are correct. (D) Only (ii) and (iv) are correct. 46. Which of the following are the tools of Vertical Analysis? (i) Ratio Analysis. (ii) Comparative Statements. (iii) Common Size Statements. (A) Only (iii) (B) Both (i) and (iii) (C) Both (i) and (ii) (D) Only (i) 47. Match the items given in Column I with the headings/subheadings (Balance sheet) as defined in Schedule III of Companies Act 2013. Column I: (I) Loose Tools, (II) Patents, (III) Prepaid insurance, (IV) Debentures, (V) Machinery. Column II: (a) Intangible fixed assets, (b) Other current assets, (c) Long term Borrowings, (d) Inventories, (e) Tangible Fixed assets. Choose the correct option: A. (I)-(a), (II)-(b), (III)-(d), (IV)-(c), (V)-(e) B. (I)-(d), (II)-(a), (III)-(b), (IV)-(c), (V)-(e) C. (I)-(d), (II)-(a), (III)-(b), (IV)-(e), (V)-(c) D. (I)-(e), (II)-(d), (III)-(a), (IV)-(b), (V)-(b) 48. Which ratio indicates the proportion of assets financed out of shareholders’ funds? (A) Debt equity ratio. (B) Fixed assets turnover ratio. (C) Proprietary ratio. (D) Total assets to debt ratio. 49.
If Total sales are ₹2,50,000 and credit sales are 25% of Cash sales, the amount of credit sales is: (A) ₹50,000 (B) ₹2,50,000 (C) ₹16,000 (D) ₹3,00,000 50. What will be the amount of gross profit of a firm if its average inventory is ₹80,000, Inventory turnover ratio is 6 times, and the Selling price is 25% above cost? (A) ₹1,20,000. (B) ₹1,60,000. (C) ₹2,00,000. (D) None of the above. 51. Which of the following statements are false? a) When all the comparative figures in a balance sheet are stated as percentage of the total, it is termed as horizontal analysis. b) When financial statements of several years are analysed, it is termed as vertical analysis. c) Vertical Analysis is also termed as time series analysis. Choose from the following options: (A) Both (a) and (b) (B) Both (a) and (c) (C) Both (b) and (c) (D) All three (a), (b), (c) 52. Given below are two statements, one labelled as Assertion (A) and the other labelled as Reason (R): Assertion (A): Increasing the value of closing inventory increases profit. Reason (R): Increasing the value of closing inventory reduces cost of goods sold. In the context of the above two statements, which of the following is correct? Codes: (A) Both (A) and (R) are correct and (R) is the correct reason of (A). (B) Both (A) and (R) are correct but (R) is not the correct reason of (A). (C) Only (R) is correct. (D) Both (A) and (R) are wrong. 53. Given below are two statements, one labelled as Assertion (A) and the other labelled as Reason (R): Assertion (A): A high operating ratio indicates a favourable position. Reason (R): A high operating ratio leaves a high margin to meet non-operating expenses. In the context of the above two statements, which of the following is correct? Code: (A) (A) and (R) both are correct and (R) correctly explains (A). (B) Both (A) and (R) are correct but (R) does not explain (A). (C) Both (A) and (R) are incorrect. (D) (A) is correct but (R) is incorrect. 54. Current ratio of Adaar Ltd. is 2.5:1.
Accountant wants to maintain it at 2:1. Following options are available. (i) He can repay Bills Payable (ii) He can purchase goods on credit (iii) He can take short term loan Choose the correct option. (A) Only (i) is correct (B) Only (ii) is correct (C) Only (i) and (iii) are correct (D) Only (ii) and (iii) are correct 55. A company has an operating cycle of eight months. It has accounts receivables amounting to ₹1,00,000 out of which ₹60,000 have a maturity period of 11 months. How would this information be presented in the balance sheet? (A) ₹40,000 as current assets and ₹60,000 as non-current assets. (B) ₹60,000 as current assets and ₹40,000 as non-current assets. (C) ₹1,00,000 as non-current assets. (D) ₹1,00,000 as Current assets. 56. Which key combination collapses the ribbon? (A) [Ctrl]+[F1] (B) [Ctrl]+[F3] (C) [Ctrl]+[F5] (D) [Ctrl]+[F7] 57. The CAS should be: (A) Simple, integrated, transparent, accurate, scalable and reliable. (B) Complex, accurate, transparent, faster to work. (C) Able to transform the manual accounting system to computerised accounting system. (D) None of the above. 58. The components of Computerised Accounting System are: (A) Data, Report, Ledger, Hardware, Software. (B) Data, People, Procedure, Hardware, Software. (C) People, Procedure, Ledger, Data, Chart of Accounts. (D) Data, Coding, Procedure, Rules, Output. 59. Where are amounts owed by customers for credit purchases found? (A) accounts receivable journal (B) general ledger (C) sales journal (D) accounts receivable subsidiary ledger 60. What is the activity sequence of the basic information processing model? (A) Organise data, process data, and collect data (B) Collect data, organise and process data, and communicate information (C) Process data, organise data, and collect data (D) Organise data, collect data, and communicate information 61.
Codification of Accounts is required for the purpose of:
(A) Hierarchical relationship between groups and components
(B) Faster data processing and preparation of final accounts
(C) Keeping data and information secured
(D) None of the above.
62. Which mathematical operator is represented by an asterisk (*)?
(A) Exponentiation (C) Subtraction (D) Multiplication
63. What category of functions is used in this formula: =PMT(C10/12, C8, C9, 1)?
(A) Logical (B) Financial (C) Payment (D) Statistical
64. When Extend Selection is active, what is the keyboard shortcut for selecting all data up to and including the last row?
(A) [Ctrl]+[Down-arrow] (B) [Ctrl]+[Home] (C) [Ctrl]+[Shift] (D) [Ctrl]+[Up-arrow]
65. Which formula would result in TRUE if C4 is less than 10 and D4 is less than 100?
(A) =AND(C4>10, D4>10) (B) =AND(C4>10, C4<100) (C) =AND(C4>10, D4<10) (D) =AND(C4<10, D4<100)
66. Where is the address of the active cell displayed?
(B) Status bar (C) Name Box (D) Formula bar
67. Which of the following arguments in a financial function represents the total number of payments?
(A) FV (B) PV (C) Nper (D) Rate
68. Which function results can be displayed in Auto Calculate?
(A) SUM and AVERAGE (B) MAX and LOOK (C) LABEL and AVERAGE (D) MIN and BLANK
69. When navigating in a workbook, which command is used to move to the beginning of the current row?
(A) [Ctrl]+[Home] (B) [Page Up] (C) [Home] (D) [Ctrl]+[Backspace]

Arun and Vijay are partners in a firm sharing profits and losses in the ratio of 5:1.

Balance Sheet (Extract)
Liabilities    Amount (Rs.)    Assets    Amount (Rs.)
Machinery    40,000

If the value of machinery reflected in the balance sheet is overvalued by 33 1/3%, find out the value of Machinery to be shown in the new Balance Sheet:
(A) ₹44,000 (B) ₹48,000 (C) ₹32,000 (D) ₹30,000

SOLUTION: (D) ₹30,000

Explanation: Let the true value of the machinery be z. The book value of ₹40,000 is overvalued by 33 1/3%, i.e. by one third of the true value:
33 1/3% = (100/3)/100 = 100/300 = 1/3
z + (1/3)z = 40,000
(4/3)z = 40,000
z = 40,000 × 3/4 = 30,000
∴ The machinery should appear in the new Balance Sheet at ₹30,000.
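The algebra above can be double-checked numerically; a minimal Python sketch of the same computation:

```python
# Book value = true value + (1/3) * true value = (4/3) * true value,
# so the true value is the book value times 3/4.
book_value = 40_000
true_value = book_value * 3 / 4
print(true_value)  # 30000.0
```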
https://electronics.stackexchange.com/questions/86426/how-to-set-all-flags-in-8085
# How to set all flags in 8085?

My 8085 microprocessor teacher gave an assignment to set all the flags! I have written this little program to set all except Sign. Please help me out.

MVI A, 0FFH
ANI 01H
HLT

## 3 Answers

The five flags on the 8080/8085 are Sign, Zero, Carry, Half-carry and Parity. It looks like your program clears all of them. The result of the ANI is not zero, not negative and has odd parity. Also, logical operations like ANI unconditionally clear both carry flags. Actually, the most direct way to set all of them is something like this:

LXI H, 0FFFFh
PUSH H
POP PSW

Which your teacher may or may not consider a "cheat". (Actually, I can't think of any other way to get both the Z and S flags set simultaneously.)

I'm amazed that anyone is still teaching this ancient architecture, rather than something that's actually still in production. It's utterly useless knowledge.

• Welcome to my computer architecture class, except we used a fictional 1-register microcoded CISC architecture without a stack pointer! – HL-SDK Oct 25 '13 at 13:33
• "Let me tell you about how it was done in 1983..." – HL-SDK Oct 25 '13 at 13:33
• I still maintain an embedded system that is based on a Z80 for a client. So despite wishful thinking, the 8080 architecture is not exactly dead. The 8051 architecture is alive and well, and is at the core of several very popular USB and Bluetooth implementations. Those old 8-bit micro cores are not likely to truly die for a long time. – RBerteig Nov 16 '13 at 2:50
• Also, given that there are several open source implementations of the 8080 ready to synthesize into your FPGA, you too can have an 8080 based system! – RBerteig Nov 16 '13 at 3:03
• It's also difficult to see the point of the actual exercise. Not something that ever arises in practice.
– user207421 Jan 14 '14 at 21:43

MVI L, 0FFH
PUSH H
POP PSW
RAR    ; up to this point, all flags are set
MVI L, 00H
PUSH H
POP PSW
HLT    ; up to this point, all flags are reset

• Welcome to EE.SE. Your copy/paste buttons seem to be stuck in some loop. Please press on edit and reformulate your answer. Add more explanation to the current answer (after you severely improve its formatting). Further, have you gone through this tour? Please consider it! – Hazem Oct 6 '18 at 6:37

MVI L, 0FFH
PUSH H
POP PSW

This is the simplest way!
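For readers without a board handy, the effect of the PUSH H / POP PSW trick can be sketched in a few lines of Python. This is a rough model written for illustration, assuming the standard 8085 flag layout (S = bit 7, Z = bit 6, AC = bit 4, P = bit 2, CY = bit 0):

```python
# Popping 0xFFFF into the processor status word loads A = 0xFF and the
# flag byte F = 0xFF, which turns on every defined flag bit at once.
FLAG_BITS = {"S": 7, "Z": 6, "AC": 4, "P": 2, "CY": 0}

def pop_psw(word):
    """Split a 16-bit value into (A, flags) the way POP PSW does."""
    a = (word >> 8) & 0xFF          # high byte goes to the accumulator
    f = word & 0xFF                 # low byte goes to the flag register
    flags = {name: bool(f >> bit & 1) for name, bit in FLAG_BITS.items()}
    return a, flags

a, flags = pop_psw(0xFFFF)  # LXI H,0FFFFh / PUSH H / POP PSW
print(a, flags)             # 255, every flag True
```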
http://mathhelpforum.com/advanced-statistics/82772-poisson-print.html
Poisson

• Apr 7th 2009, 04:13 PM
steffan_09
Poisson
• Apr 7th 2009, 05:41 PM
matheagle
(a) ${e^{-4}4^6\over 6!}$.
(b) $P(X\ge 10) = 1 - P(X\le 9)$, where $\lambda = 8$ since this is a two-week period and you expect twice as many parts used. So, $P(X\ge 10) = 1 - \sum_{k=0}^9 {e^{-8}8^k\over k!}$.
(c) Here you want three 'successes' in three weeks (binomial), where p is from (a): $\biggl({e^{-4}4^6\over 6!}\biggr)^3$.
Sorry, I do not understand (d).
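Assuming, as the answer does, a Poisson(4) count per week, the three expressions can be evaluated numerically; the helper name below is mine, not part of the thread:

```python
import math

def poisson_pmf(k, lam):
    # P(X = k) for X ~ Poisson(lam)
    return math.exp(-lam) * lam**k / math.factorial(k)

p_a = poisson_pmf(6, 4)                               # (a) P(X = 6), lambda = 4
p_b = 1 - sum(poisson_pmf(k, 8) for k in range(10))   # (b) P(X >= 10), lambda = 8
p_c = p_a ** 3                                        # (c) three weeks of exactly 6

print(p_a, p_b, p_c)  # p_a ≈ 0.1042
```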
http://openstudy.com/updates/506b8c7ce4b060a360fe3b0b
## amishra: Solve for x: 3^x - 2 = 8/3^x

1. amishra: $3^{x} - 2 = 8/3^{x}$
2. L.T.: Multiply both sides by $3^x$ to eliminate the denominator and subtract eight from both sides: $3^{2x} - 2\cdot 3^{x} - 8 = 0$
3. L.T.
4. amishra: Yes, I got $3^{x} = 4$, $3^{x} = -2$
5. amishra: Then what?
6. L.T.: Now treat it as a quadratic equation set equal to zero, and use the quadratic formula to solve for $3^x$, because $3^x$ was squared, just like a variable, and we have second and third terms as well: $\frac{2 + \sqrt{4 - 4\cdot(-8)}}{2}$. You can ignore the possibility where you subtract the square root, since that would give a negative answer, which $3^x$ can't equal. Solve and you get $\frac{8}{2} = 4 = 3^{x}$. Now you should take the natural logarithm of both sides and solve: $\ln 4 = \ln 3^{x}$. Pull the exponent down, $\ln 4 = x \ln 3$, and divide both sides by the natural log of 3: $\frac{\ln 4}{\ln 3} = x$. That should be your answer.
7. amishra: Thank you soo much!! That was very helpful! :D
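A quick numerical check of the substitution and the resulting root (a sketch, not part of the original thread):

```python
import math

# Substituting y = 3**x turns 3**x - 2 = 8 / 3**x into y**2 - 2*y - 8 = 0,
# whose positive root is y = 4, hence x = ln 4 / ln 3.
x = math.log(4) / math.log(3)

lhs = 3**x - 2
rhs = 8 / 3**x
print(x, lhs, rhs)  # x ≈ 1.2619, and both sides come out to 2
```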
https://westsideelectronics.com/data-types-a-pickle-14-2/
# Data types, pointers, and a pickle? (14)

When programming in Arduino, you might have occasion to go beyond the standard datatypes of int and String.

# Data Types

You might have noticed that if you did the following:

int count = 5;
int groups = 2;
Serial.println(count/groups);

Serial would have printed 2. This can be a head-scratcher, especially since we never had occasion to use other data types. A data type refers to what the data can represent. This primarily has to do with int, which cannot represent decimal points: integer division simply throws the fractional part away. The solution is to use a float, another data type, which can handle floating-point operations. For example, if we changed the example above a little:

float count = 5;
float groups = 2;
Serial.println(count/groups);

It would yield the expected value of 2.5. The trade-off is that in order to do this slightly more complex operation, float takes up more memory; twice as much, to be precise. If you are using a chip like the ESP8266, which has loads of memory, this isn't much of an issue. However, on a chip like the ATTINY45, where space and execution speed are limited, floats might not be the most efficient means of calculation. In the words of the Arduino website on float: "Programmers often go to some length to convert floating point calculations to integer math." So, for our example above, the solution is to scale the numerator by a power of ten before dividing:

int count = 50; // 5, scaled up by 10
int groups = 2;
Serial.println(count/groups);

The result will yield 25, that is, 2.5 scaled up by 10. We can do this without incurring an increase in memory because an int can hold values as large as 32767. This is useful for calibrating values, for example from an analog input, where the signal is not as clear as a digital one. What if you never have occasion to use such a large value? For example, an analog sensor such as a photosensor might only go from 0 to 255. That is a lot of unused memory! The solution is to use char (or rather byte, its unsigned counterpart, since a signed char only reaches 127), a datatype that is half as large as an int.
The interesting thing about char is that it is both a number and a character (hence the name). If you print the value 124, a char datatype will print out |, hardly the thing we were expecting! This is because Serial.println prints the ASCII character that the number represents. The solution is either to specify that we want to print a number, Serial.println(charValue, DEC), or to use a little trick: if charValue holds one of the digit characters '0' ~ '9', then charValue = charValue - '0'; shifts it back down the ASCII table so that it holds the actual number 0 ~ 9, and we can print the value that we intended.

# Pointers

In poking around code on the web you might have seen declarations like these:

int instantRead;
int *sensor1;

* and & are special operators that are not part of the variable's name. Instead, * means that sensor1 is a pointer to something that is of type int. A pointer is a variable that holds the address (as opposed to the value) of a value.

## What are addresses and pointers?

For example, imagine a wall of boxes labeled with numbers. box_5_addr refers to the box labeled '5' (a pointer variable) and box_3 refers to the value in box 3. In your hand is an instruction to look in box 5.

where_are_my_oranges = *box_5_addr;

But when you open box 5, it instead contains a note: "Look in box 3".

box_5_addr = &box_3;

You then go to box 3, and you find two oranges!

box_3 = 2;

This implies that where_are_my_oranges = 2. Note that these lines are written in the reverse order of how they should appear in the code, for purposes of illustration. So if you declare a variable as a pointer, to get to the value inside the variable, you have to dereference the pointer. Pointers accept addresses, so you can pass an address to a pointer like how you'd pass a value to a variable. Pointers are essentially some kind of 'address variable'.
int box_3;       // declare a variable
box_3 = 2;
int *box_4;      // declare a pointer
box_4 = &box_3;  // give the address of box_3 to the pointer box_4
*box_4 = 10;     // dereference it, store a value in box_3

(The lines are ordered so that box_4 points somewhere valid before we dereference it; writing through an uninitialized pointer is a bug.)

Why do we do this instead of referring straight to box 3? Well, in this case we can save on space: the box that contains the address doesn't have to be as big as the box that contains the oranges. A pointer that points to a float variable doesn't actually use the same space as the float variable; it doesn't need to, it just needs to store the address of the variable. This gives more flexibility and power to the programmer. In this case, all that talk about 8-bit and 16-bit microcontrollers suddenly becomes relevant, because the 8 bits refer to the size of each block of memory. Despite the analogy about searching, pointers also come out to be faster because there isn't a need to duplicate the data. When you pass a value into a function, it isn't writable by the function, because the function works on a copy that only exists within the function. However, with a pointer, you are able to modify a variable that exists outside the function. Note that * also serves as the dereference operator, so in *(&number) the two operators cancel out and we get the value that is stored at number.

void function(int *point) {  // we need an address to be passed here
  *point = *point + 1;       // increment the value at the address by 1
}

int value = 3;
function(&value);        // we pass the address of value to our function
Serial.println(value);   // note that the function has no return value

#### What you learned

1. What data types are.
2. What pointers are.
3. How to use different data types.
4. How to use pointers.

#### Challenges

• Write a function that changes an external value.
• Solve the following problem by determining what will be printed out before running the program:

int pickle, jar;
int *hand;
hand = &jar;
jar = 20;
Serial.println(hand);
pickle = *hand;
jar = jar - 10; // Big hands
Serial.println(pickle);
*hand = *hand + 5; // Can't fit through the jar
Serial.println(jar);
pickle = pickle + 2; // Uh, the pickles were eaten?
Serial.println(jar);
https://eprint.iacr.org/2020/1112
## Cryptology ePrint Archive: Report 2020/1112

A cautionary note on the use of Gurobi for cryptanalysis

Muhammad ElSheikh and Amr M. Youssef

Abstract: Mixed Integer Linear Programming (MILP) is a powerful tool that helps to automate several cryptanalysis techniques for symmetric-key primitives. $\textsf{Gurobi}$ is one of the most popular solvers used by researchers to obtain useful results from the MILP models corresponding to these cryptanalysis techniques. In this report, we provide a cautionary note on the use of $\textsf{Gurobi}$ in the context of bit-based division property integral attacks. In particular, we report four different examples in which $\textsf{Gurobi}$ gives contradictory results when solving the same MILP model after just changing the number of threads used or reordering some constraints.

Category / Keywords: secret-key cryptography
Date: received 14 Sep 2020, last revised 14 Sep 2020
Contact author: m_elshei at encs concordia ca
Available format(s): PDF | BibTeX Citation
Short URL: ia.cr/2020/1112

[ Cryptology ePrint archive ]
http://quant.stackexchange.com/tags/r/hot?filter=month
# Tag Info

3

Answering my own question as it could be useful for others. Actually, the package fOptions is vectorized. The only constraint (and that makes sense) is that you can't compute 2 different greeks at the same time, or mix up calls and puts. So assuming that you want to compute the delta of a set of puts, the code will be the following: ...

2

For non-normal asset price models you could look at the theory of Lévy processes. If we assume that you work in the physical probability measure $P$ and that the random numbers that you have generated are daily log-returns, then you can do the following: asset $i$ has starting price $S_0^i$ and for the future prices you can put $S_t^i = S_0^i$ ...

2

Most technical indicators must be available in the TTR package. However, if they are not, then you can write a custom indicator for use in quantstrat as follows:

fractalindicator.up <- function(x) {
  High <- Hi(x); Bars <- nrow(x)
  afFrUp <- rep(NA, Bars)
  for (iBar in seq(8, Bars - 2)) {
    if (High[iBar-1] < High[iBar-2] && ...

1

Yes, it exists and it is called the ccgarch package. You can install it by simply running install.packages("ccgarch") in R, and learn more about it in the relative CRAN paper. Moreover, I suggest you read this lecture held by the author during an R conference. Hope this helps.

1

GARCH models are not good for predicting "many" periods ahead, only for "very short" horizons. If you want to predict 2 months from here, maybe you should be working with monthly data. I did a similar exercise with some indexes (symb = c("^BVSP","^MERV","^DJA","^N225")) using daily returns from "1991/01/01"; look at the incredible predictions.

Only top voted, non community-wiki answers of a minimum length are eligible
https://indico.cern.ch/event/663474/contributions/3061234/
# 7th International Conference on New Frontiers in Physics (ICNFP2018)

Jul 4 – 12, 2018, Europe/Athens timezone
Group photo: indico.cern.ch/event/663474/images/19808-ICNFP_2018_Group_Photo.JPG

## Causal evolution of probability measures

Jul 5, 2018, 3:20 PM, 20m, Room 3
Oral presentation

### Speaker

Tomasz Miller (Warsaw University of Technology)

### Description

The causal structure of a spacetime $\mathcal{M}$ is usually described in terms of a binary relation $\preceq$ between events called the causal precedence relation (often referred to as $J^+$). In my talk I will present a natural extension of $\preceq$ onto the space $\mathscr{P}(\mathcal{M})$ of (Borel) probability measures on $\mathcal{M}$, designed to rigorously encapsulate the common intuition that probability can only flow along future-directed causal curves. Using the tools of optimal transport theory adapted to the Lorentzian setting, one can utilize the thus obtained notion of 'causality between measures' to model the causal time-evolution of a spatially distributed physical entity in a globally hyperbolic spacetime. I will define what it means that a time-dependent probability measure $\mu_t \in \mathscr{P}(\mathcal{M})$ evolves causally. I will discuss how such an evolution can be understood as a 'probability measure on the space of worldlines'. I will also briefly present some preliminary results concerning the relationship between the causal time-evolution of measures and the continuity equation.

### Primary author

Tomasz Miller (Warsaw University of Technology)
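In optimal-transport language, the extension the abstract alludes to is presumably of the following coupling form (my paraphrase of the construction, not the speaker's text; $\Pi(\mu,\nu)$ denotes the set of couplings of $\mu$ and $\nu$, i.e. measures on $\mathcal{M}\times\mathcal{M}$ with marginals $\mu$ and $\nu$):

```latex
% 'Causality between measures' sketched via couplings supported on J^+:
\mu \preceq \nu
\quad :\Longleftrightarrow \quad
\exists\, \omega \in \Pi(\mu,\nu)\ \text{such that}\
\omega\bigl(\{(p,q) \in \mathcal{M}\times\mathcal{M} : p \preceq q\}\bigr) = 1 .
```

Intuitively, the coupling $\omega$ records how each bit of probability moves, and requiring it to live on the set of causally related pairs is exactly the statement that probability flows only along future-directed causal curves.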
https://meta.mathoverflow.net/questions/1775/why-is-this-question-considered-undergraduate
# Why is this question considered undergraduate?

The question https://mathoverflow.net/questions/173132/number-of-combinations-of-ordered-sequences-of-n-integers is closed because it is considered "undergraduate"... could someone explain to me how in the world an undergraduate student would solve it? IMO, the solution does not involve simple Combinations and Permutations.

• As I said in the comments, it has an easy elementary solution that you're missing. And in fact, sometime back in the 20th century, I remember being asked to solve an isomorphic problem on the homework in an undergraduate combinatorics class. – Andy Putman Jul 2 '14 at 20:54
• The meta question is fine, I see no need to downvote the meta question (even if the comments on main are not so great). – user9072 Jul 2 '14 at 21:03
• @quid This question was probably downvoted because of the tone. The Masked Avenger explained very well how to solve the problem in a comment at MO, and yet the OP seems to reject the explanation. – Todd Trimble Jul 3 '14 at 13:38
• You can ask a student anything, even if he is an undergraduate. Whether he will manage to solve it in the available time is another matter. – user38397 Jul 3 '14 at 14:47
• @ToddTrimble thanks for the comment. But please note that the meta-post predates any comment (at least non-deleted ones) of The Masked Avenger, and not much time elapsed between the posting and the first dv (as documented by the time of my comment); I am thus not convinced your explanation is a good one. – user9072 Jul 4 '14 at 13:32
• @quid You can ignore the timeline and any mention of mine of the Masked Avenger's comments (i.e., take that mention as mere addendum and not 'explanation'), and the tone is still bad. – Todd Trimble Jul 4 '14 at 13:40
• @ToddTrimble even admitting this, what is gained by downvoting it? (Note that I said I see no need, so if you want to disagree with me you should explain the need.) – user9072 Jul 4 '14 at 13:56
• @quid I don't disagree with you.
Nothing is particularly gained by an unexplained downvote, but for the benefit of the OP, it might be helpful to know a likely reason for the downvotes. – Todd Trimble Jul 4 '14 at 14:11
• @ToddTrimble glad this is clarified. I am pretty sure that some will/would have read your first comment as justifying unexplained downvotes on such questions, whence my reaction. – user9072 Jul 4 '14 at 15:53

It does have an elementary solution, as I indicated in a comment that was discouraged by Andy Putman. I will not tell you the worked-out answer, and opinions here vary as to what constitutes an answer. In any case, it is material that is covered in elementary combinatorics texts, and is considered by me and many others as undergraduate level or earlier. I would appreciate a remark from you, the OP, on what is the best way to redirect you to math.SE, which is more appropriate. Did I say too much? Did I take too much of the process of learning away from you? Is Andy Putman right, and should I just say "go to math.SE"? I am interested in a candid response from you.

• I don't know if you can comment on this answer. You should be able to post a response as another answer or question edit. Not optimal, I know. – The Masked Avenger Jul 2 '14 at 21:22
• Of course OP can comment here. From the help center: "Please note that you can always comment on your own posts, and any part of your questions." (my emph) – user9072 Jul 2 '14 at 21:28
• The OP asked the question on math.se and has already accepted an answer, so I doubt we'll hear from them again: math.stackexchange.com/questions/854745 – Andy Putman Jul 2 '14 at 21:46
• @Andy, if you're right, mission accomplished. I still have the opinion a different approach was needed here. Moving on to more productive pursuits... – The Masked Avenger Jul 2 '14 at 21:52
• @AndyPutman I fully agree that the question is off-topic, but I also think that the OP honestly thought the question is on-topic here.
(Why would they ask it here, and not on math.SE, where they had asked various questions before? Some of them seeming harder than that one.) I thus believe (with the benefit of hindsight) that a clearer/more detailed indication of why the question is off-topic would have been better. Perhaps also worth noting that the question got a vote to reopen. – user9072 Jul 2 '14 at 22:02
• @quid : I suspect that the vote to reopen came from the OP (you can vote to close and open your own questions). As far as how to convince the OP that it was elementary, I'm not really sure how to do so without telling them how to solve it, which I think is inappropriate and likely to lead to more bad questions. Any suggestions you have would be appreciated. – Andy Putman Jul 2 '14 at 22:21
• @AndyPutman The reopen vote cannot have been the OP. Voting on one's own question also needs some points (namely 250). I disagree that it is likely that giving a quick but clear pointer in the comments will contribute in any relevant way to more bad questions. Or, at least, the contribution would be so small that I consider it relatively less bad than the comment and meta noise we had. (But it is hard to quantify this, and one's priorities can reasonably be different here, so I guess in the end all we can do is agree to disagree.) – user9072 Jul 2 '14 at 22:53
• @quid : Interesting, I did not know that. I'm a little surprised that a high-rep user would have voted to reopen, but who knows? Anyway, I suspect that both of us understand each other's point of view, and that is probably the best that can be accomplished in this situation. – Andy Putman Jul 2 '14 at 22:55
• Indeed, I asked the same question on Math.SE. It would have been nicer and more efficient if the suggestion to redirect it to Math.SE had come in the first place. It simply is not efficient to close the question and write "it has an elementary solution". It might be elementary, but I was looking for a solution anyway (or at least advice on my path).
I am doing a PhD in engineering, thus I have no knowledge of combinatorics. For someone who wants to solve it starting from simple high school knowledge, it is neither elementary nor trivial! – user38397 Jul 3 '14 at 14:38
• @user38397, thank you for your comment. I will try suggesting math.SE first next time. – The Masked Avenger Jul 3 '14 at 15:08
• @user38397 "For someone who wants to solve it starting from simple high school knowledge, it is neither elementary nor trivial!" High school knowledge is a huge overkill for realizing that the answer is $1^{N-1}+2^{N-1}+\dots+K^{N-1}$, and a university education is barely enough to state what it means that it is not an elementary formula. On the other hand, you once more raised the question that I'm raising every time I can: Won't it be more useful for every conceivable objective (barring giving Stewart more easy income) to teach basic discrete math instead of some parts of cookbook calculus? – fedja Jul 4 '14 at 16:04
• @fedja, I think you solved a different problem. I get different numbers for K=2. – The Masked Avenger Jul 4 '14 at 17:57
• @fedja You are interpreting "bigger than or equal to the last one" differently from the Masked Avenger. You think it means $a_i \geq a_N$ for all $i$; the Masked Avenger thinks it means $a_i \geq a_{i-1}$. Your count is $1+2^{N-1} + \cdots + K^{N-1}$; the Masked Avenger's is a binomial coefficient. – David E Speyer Jul 7 '14 at 15:19
• @David, thanks for that bit of detective work. It lends support to the notion that fedja is not trolling, which was my hope. – The Masked Avenger Jul 7 '14 at 17:01
• @David Speyer Erm... True enough. Unfortunately, the example the OP gave (N=2) doesn't allow one to distinguish between the two, and "the last in the list" is as common as "the last in the queue".
As to the general question of trolling, I guess I may be on the borderline occasionally, but, at least, I try not to post when I have nothing to say, so if something I post looks strange, a failed communication is more probable than malicious intent. Alas, for the last month I had no time to really think about MO questions, so I was posting more when "overwhelmed emotionally" than when "having a bright idea". – fedja Jul 8 '14 at 10:18 There are many questions, useful for research, that can be answered by bright undergraduates. If it does not look standard, why not put it on MO? The problem with the larger Mathematics Stack Exchange is that it is dominated by 1-minute questions and answers. • This point of view might have some merit to it; however, in the current case a main reproach (as far as there is any) is that it is a perfectly standard "1-minute" question, that is, if one has the right knowledge (if not, one could well get stumped; it is not obvious either). This is exemplified by the question in fact having been answered quickly on Mathematics Stack Exchange once asked there. – user9072 Jul 8 '14 at 20:30
https://blog.gnieh.org/posts/2018/03/17/patch-theory-and-typeclasses/
Patch Theory and Type Classes Posted on 2018-03-17. Tags: patch theory, coq One of the open-source projects I have been working on for several years is a library that computes diffs between JSON values and produces RFC-6902 patches. Trying to go beyond what the specification proposes, I want my diff tool to generate patches that can be inverted. To this end, I just added fields to some patch operations so that missing information is remembered. I convinced myself that the fields I added are enough to invert any patch, but this is a mere conjecture. There must be a way to be sure that it works. The problem of making sure that a patch can be inverted is actually very interesting to dive into. That is how I started to investigate the area and have spent the last few months working on it from time to time. In the saga starting with this post, I want to expose how I implemented one patch theory using Coq, proved some properties on it, and applied it to an RFC-6902 implementation. There will be some theory, and, I hope, a lot of fun. This episode defines the problem and the base theoretical tools that will be used to develop the theory. # Patch theories The point of this saga is not to make a comprehensive list of existing patch theories here, but I came across several approaches to the problem of formalizing patch operations. Most of them use category theory to model patches and their operations. The main (and almost only) source for this bibliography is the Darcs website. ## Darcs Darcs is a decentralized version control system (DVCS). The approach taken by this DVCS differs from the popular git and the used-to-be-popular mercurial in that repositories are purely sequences of patches. There is no snapshot of the versioned documents at all. This is interesting in our case because darcs is only about managing patches. And more interestingly, the darcs developers have elaborated a patch theory to prove some properties of darcs repositories.
Several attempts and approaches have been tested, and one in particular caught my attention. The theory developed in this approach is interesting because it makes no assumption on the patch format you use, nor on the objects you patch. It is fully described in the theory section of the darcs wiki. It relies only on mathematical objects, namely inverse semigroups. More on this in a few paragraphs. A big advantage of only handling abstract mathematical structures is that we can prove a lot of properties for our system at a really abstract level and, once that is done, instantiate the theory for our concrete patch format. In this saga, I will first develop the theory in Coq at an abstract level, and in the end I will instantiate everything for a concrete patch system. At each step, instantiating a theory will only require us to prove that the assumptions of the theory are met. In the remainder I will expose some parts of the theory as described in the aforementioned paper. If you want more details, you can always refer to it. ## Inverse semigroups Here comes the math! Don't worry, it won't be too much, but still, it is math. A semigroup is defined by a set of elements and an associative binary operation over elements in the set. More formally, for a set $S$ and binary operation $\otimes : S \times S \rightarrow S$, if $\otimes$ satisfies: $\forall x, y, z \in S, x \otimes (y \otimes z) = (x \otimes y) \otimes z$ Then $(S, \otimes)$ defines a semigroup. An interesting side effect of $\otimes$ being associative is that it allows us to drop parentheses in the following, making the notation a bit lighter (yes, this is a very mathematical consideration…). This structure will be the base of the entire theory, and the intuition behind it is that applying a patch will be defined via the $\otimes$ operation, which operates over the values to patch. This is actually a simplified presentation, as we will see in the following.
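As a quick illustration outside the patch setting, the associativity law can be brute-forced on a small finite candidate (a Python sketch, not part of the original post):

```python
from itertools import product

# candidate structure: a small finite set S with the binary operation `min`
S = range(5)
op = min

# associativity: x ⊗ (y ⊗ z) == (x ⊗ y) ⊗ z for every triple
assert all(op(x, op(y, z)) == op(op(x, y), z)
           for x, y, z in product(S, repeat=3))

# a non-example: subtraction is not associative, so (S, -) is not a semigroup
assert any(x - (y - z) != (x - y) - z for x, y, z in product(S, repeat=3))
```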
Another important property I am interested in is patch inversion, which means that the theory should allow us to perform the inverse modifications of a patch. This is where inverse semigroups come into play. An inverse semigroup is a semigroup where for each element in the set there exists a unique element, called its pseudo-inverse. Formally, if $(S, \otimes)$ defines a semigroup, and if $\forall x \in S, \exists! x^{-1} \in S, x \otimes x^{-1} \otimes x = x \text{ and } x^{-1} \otimes x \otimes x^{-1} = x^{-1}$ then $(S, \otimes)$ defines an inverse semigroup. This is called a pseudo-inverse and not an inverse because we do not necessarily have an identity element $\overline{1}$ for $\otimes$ such that $x \otimes x^{-1} = \overline{1}$. Patches may not always be defined for a given value. This is the case when we try to apply a patch on a value that does not meet the patch's structural hypotheses (e.g. trying to apply a patch that adds a line in a non-existing file). To handle such cases, we can define a zero element that inhibits the operator $\otimes$. An element $\overline 0$ of $S$ is a zero if $\forall x \in S, x \otimes \overline 0 = \overline 0 \otimes x = \overline 0$ If such an element exists, we can define an inverse semigroup with zero as the triple $(S, \otimes, \overline 0)$. The last definition we will use for now is that of an idempotent element. An idempotent is an element $x$ of $S$ such that $x \otimes x = x$ These definitions will be sufficient to state and prove a bunch of properties of inverse semigroups. We will come to them shortly, but first I'd like to introduce another tool that will be used throughout this saga to develop the theory: Coq. # Coq Coq is a formal proof assistant. It assists you in formalizing theories and proving them, by ensuring that your reasoning is correct, based on axioms, definitions and theorems.
This is a tool I personally enjoy using, often in the context of proving properties of programming languages, but it can be used for a wide range of fields, from IT security to the proof of the four color theorem. A particular personal kudos goes to CompCert, a verified compiler for a (huge) C subset, written in Coq with all transformations proved, from parsing to assembler code generation. To interactively test the code provided in this saga, you can use CoqIDE or other interface tools. The tool has evolved into a full-fledged programming language with the addition of a lot of features over the years. One of them got my attention: type classes. ## Type classes Type classes are a way to add constraints to values to implement ad hoc polymorphism. They are really popular in the Haskell programming language, for which the entire standard library is defined using them. Lately they have also gained popularity in Scala, replacing the more rigid inheritance mechanism in a lot of libraries. When I discovered that this feature had been added to Coq, I couldn't wait to find an application so that I could play with it. And I found it with this project! If you want more details on how type classes can be used in Coq, you can read a good tutorial online. In this saga, I will only quickly introduce them by example, applying them to our problem. ## Encoding inverse semigroups We saw earlier what it means to define a semigroup. This definition can be conveniently written using a type class in Coq. The full development can be found online. Class Semigroup {X: Type} {op: X -> X -> X}: Type := op_assoc : forall (x y z: X), op x (op y z) = op (op x y) z . Here we define the class Semigroup over type X and operation op as a single constraint op_assoc. Type classes can be instantiated for a particular set and operation. When a class is instantiated, a proof that each constraint is met must be provided.
For example, the instantiation of Semigroup with natural numbers and the multiplication operation is done as follows: Instance NatSemigroup : @Semigroup nat Nat.mul. Proof. intros x y z. apply mult_assoc. Qed. This one is easy to instantiate because associativity is already proved in the standard Coq.Arith.Mult module. We will see more complex instantiations later when applying this to our patches. In this case there is only one goal to prove, because we only defined the op_assoc constraint; later we will see type classes with more goals to prove. A type class may have subclasses, adding new constraints. This is the case for the inverse semigroup we defined earlier. An inverse semigroup is a semigroup with one more constraint: there is a pseudo-inverse for each element. Using type classes and subclasses, we can easily define what it means to be an inverse semigroup in Coq. Class InverseSemigroup {X: Type} {op: X -> X -> X} (SG: @Semigroup X op): Type := has_pseudo_inverse : forall x, exists! x', op x (op x' x) = x /\ op x' (op x x') = x'. The concept of type class fits well here to encode our structures. Once we have proved that some $(X, \otimes)$ defines a semigroup, we can use this instance to prove that it also defines an inverse semigroup. This is how type classes are used in all languages supporting them, and it is why they are powerful: they can be composed and are modular. This defines a new type class InverseSemigroup that is a subclass of Semigroup. To prove that an instance is an InverseSemigroup, the first step will be to provide an instance of Semigroup for the set and operation, which will be referred to as SG. The notation can become a bit bloated for classes with a lot of parameters. Fortunately, Coq has features that make it possible to infer some recurring parameters. We can define generalizable variables for X and op that are considered to be present in scope and are used whenever we do not provide them explicitly.
We can write the definition as follows and let Coq infer what variables to use for X and op: Generalizable Variables X op. Class InverseSemigroup (SG: Semigroup): Type := has_pseudo_inverse : forall x, exists! x', op x (op x' x) = x /\ op x' (op x x') = x'. The first line of the snippet above declares the names that are implicitly available. In the remainder of the development, we will work with inverse semigroups. In Coq we can define a Context that means: let's say we have an inverse semigroup ISG. Context {ISG: InverseSemigroup}. This context will be used in the definitions and proofs from now on. To handle inverse semigroups with zero in Coq, we can define what it means to be a zero and create a type class that adds this zero to an inverse semigroup. Definition is_zero z := forall x, op x z = z /\ op z x = z. Class InverseSemigroupz {z: X} (ISG: InverseSemigroup X op): Type := has_zero : is_zero z. Context {ISGZ: InverseSemigroupz}. Finally, we gave a definition of an idempotent, which we encode as follows in Coq: Definition idempotent (x: X) := op x x = x. ## Proving facts about inverse semigroups We are now equipped with the structures we need, and what is left is to state some properties that will be used later to prove facts about patches. I won't present all the properties here but will select some useful ones: • $x \otimes x^{-1}$ is idempotent for all $x$ in $S$. This is interesting in our case if we interpret it as: applying a patch and its inverse, then the patch and its inverse again, is a no-op. • If $x$ is idempotent, then it is its own inverse. • If $x$ and $y$ are idempotent, then so is $x \otimes y$. They may appear a bit abstract for now, but we will see how to use them in our patch theory in the next episode. The purpose of exposing them here is to show how to encode these properties in Coq, using our type classes. Definition pseudo_inverse x x' := op x (op x' x) = x /\ op x' (op x x') = x'.
Lemma inv_idem : forall x x', pseudo_inverse x x' -> (idempotent (op x x') /\ idempotent (op x' x)). This can be proved by expanding what it means to be idempotent and using the associativity property of op. You can step through the following proof to see how we proceed. Proof. intros x x' H. unfold idempotent. split. - rewrite op_assoc. assert (op (op x x') x = op x (op x' x)) by auto. rewrite H0. unfold pseudo_inverse in H. destruct H. rewrite H. reflexivity. - rewrite op_assoc. assert (op (op x' x) x' = op x' (op x x')) by auto. rewrite H0. apply inv_inv in H. unfold pseudo_inverse in H. destruct H. rewrite H. reflexivity. Qed. The second fact is stated as follows in Coq: Lemma idem_inv : forall x, idempotent x -> pseudo_inverse x x. Proof. intros. unfold pseudo_inverse. split; repeat(rewrite H); reflexivity. Qed. It is proved by simply rewriting with the definition of an idempotent. Finally, the third fact can be stated as: Lemma idem_op_idem : forall x y, idempotent x -> idempotent y -> idempotent (op x y). I won't go into the details of the proof here, because it is much longer, but it uses the previous facts and the associativity of the operation op. # Conclusion That's it for this first episode. I tried to be as clear as possible about my motivation and goal, and introduced a few mathematical constructs and facts that will be useful in the following.
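As an aside (not part of the original post), the three facts above can also be sanity-checked by brute force on a small concrete inverse semigroup: the symmetric inverse monoid of partial injective maps on a finite set under composition, where the pseudo-inverse of a map is its inverse partial map. A Python sketch:

```python
from itertools import combinations, permutations

def partial_injections(n):
    """All partial injective maps on {0, ..., n-1}, represented as dicts."""
    elems = list(range(n))
    maps = []
    for k in range(n + 1):
        for dom in combinations(elems, k):
            for img in permutations(elems, k):
                maps.append(dict(zip(dom, img)))
    return maps

def op(f, g):
    # sequential composition: apply f, then g, wherever both are defined
    return {x: g[f[x]] for x in f if f[x] in g}

def inv(f):
    # the pseudo-inverse of a partial injection is its inverse partial map
    return {v: k for k, v in f.items()}

def idempotent(f):
    return op(f, f) == f

S = partial_injections(2)

# pseudo-inverse law: x ⊗ x' ⊗ x = x and x' ⊗ x ⊗ x' = x'
assert all(op(f, op(inv(f), f)) == f and op(inv(f), op(f, inv(f))) == inv(f)
           for f in S)
# fact 1: x ⊗ x' (and x' ⊗ x) is idempotent
assert all(idempotent(op(f, inv(f))) and idempotent(op(inv(f), f)) for f in S)
# fact 2: an idempotent is its own pseudo-inverse
assert all(inv(f) == f for f in S if idempotent(f))
# fact 3: a product of idempotents is idempotent
idems = [f for f in S if idempotent(f)]
assert all(idempotent(op(f, g)) for f in idems for g in idems)
```

Here the idempotents turn out to be exactly the partial identity maps, which matches the intuition that a patch followed by its inverse restricts, but does not change, the values it applies to.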
https://docs.circom.io/circom-language/circom-insight/circom-phases/
# circom Compiler circom has two compilation phases: 1. The construction phase, where the constraints are generated. 2. The code generation phase, where the code to compute the witness is generated. If an error is produced in either of these two phases, circom will exit with an error code greater than 0. Otherwise, if the compiler finishes successfully, it returns 0.
http://math.stackexchange.com/questions/412984/implicit-differentiation-of-differential-equation
# Implicit differentiation of differential equation Let the unknown cdf $F(x)$ be implicitly defined by $h(F(x);a,b) := F(x)[1-a-b(1+a)] - 2 a b F'(x) x + (1+a)b = 0$, where $F(1) = 1$. Moreover, let $0<a<1$, $0<b<1$. My question is: is there some way to find the sign of $\frac{\partial F(x; a, b)}{\partial b}$ directly, i.e., without solving for $F(x)$ first? While I can solve for $F(x)$ first and provide a somewhat tedious argument that $\frac{\partial F(x; a, b)}{\partial b} > 0$, I suspect that it might be more elegant to do it directly by using implicit differentiation. However, I'm not sure how to proceed, other than to set up $-\frac{\partial h}{\partial b}/ \frac{\partial h}{\partial F}$. I guess I'm a bit lost with implicitly differentiating a differential equation. Hence, any help would be greatly appreciated! - Is $F$ a function of $x$ only or a function of $x$, $a$ and $b$? From what you write, $F$ is only a function of $x$, so $\frac{\partial F}{\partial b} = 0$. – user66258 Jun 6 '13 at 13:30 By the first equation, $F$ will implicitly depend on $a$ and $b$. So no, the solution to the differential equation, $F(x;a,b)$, depends both on $a$ and $b$. – Martin Jun 6 '13 at 13:55 $h(F,a,b)=0$ here is not a typical implicit definition of $F$ as a function of $a,b$, because of the presence of the $F'$ term in the definition of $h$. That's where I got stuck trying the usual. – coffeemath Jun 7 '13 at 3:40 Thanks - yeah, I'm having the same problem. – Martin Jun 7 '13 at 15:13
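To make the obstruction in the last comments explicit (a sketch, not from the thread): differentiating the defining relation $h(F(x);a,b)=0$ with respect to $b$, writing $F_b = \partial F/\partial b$ and using $\partial F'/\partial b = (F_b)'$, gives

$$F_b\bigl[1-a-b(1+a)\bigr] - (1+a)F(x) - 2abx\,(F_b)' - 2ax\,F'(x) + (1+a) = 0.$$

So $F_b$ is itself the solution of a first-order linear ODE whose coefficients involve the unknown $F$ and $F'$; it is not determined pointwise by the algebraic recipe $-\frac{\partial h}{\partial b}/\frac{\partial h}{\partial F}$, which is exactly why the usual implicit-function argument stalls here.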
https://zbmath.org/?q=an%3A1221.35142
## Effect of symmetry to the structure of positive solutions in nonlinear elliptic problems. III. (English) Zbl 1221.35142 Summary: We consider the problem: $\begin{cases} \Delta u+ u^p=0 \quad &\text{in } \Omega, \\ u=0 \quad &\text{on } \partial\Omega, \\ u>0 \quad &\text{in } \Omega, \end{cases}$ where $$\Omega=\{x\in {\mathbb R}^N \colon |R-1|<|x|<R+1 \}$$ and $$1<p<(N+2)/(N-2)$$. This problem is invariant under the orthogonal coordinate transformations, in other words, it is $$O(N)$$-symmetric. Let $$G$$ be a closed subgroup of $$O(N)$$. In the first part [J. Byeon, J. Differ. Equations 163, No. 2, 429–474 (2000; Zbl 0952.35054)], the existence of locally minimal energy solutions due to a structural property of the orbit space was shown. In this paper, it will be shown that there are more various types of solutions than those obtained in [loc. cit.], which are close to a finite sum of locally minimal energy solutions. Furthermore, we discuss possible types of solutions and show that any solution with exactly two local maximum points should be symmetric. For Part II, cf. [J. Differ. Equations 173, No. 2, 321–355 (2001; Zbl 0989.35053)]. ### MSC: 35J60 Nonlinear elliptic equations 35A30 Geometric theory, characteristics, transformations in context of PDEs 35B05 Oscillation, zeros of solutions, mean value theorems, etc. in context of PDEs 35J65 Nonlinear boundary value problems for linear elliptic equations ### Citations: Zbl 0952.35054; Zbl 0989.35053
https://dsp.stackexchange.com/questions/22630/melody-vocal-extraction/22635
# Melody/Vocal extraction I'm new to the whole DSP area, but I've built a program which plays music and does an FFT to produce some cool visuals for the music in real time. The FFT algorithm I chose was from a C# library called NAudio. I applied a Hamming window before shipping it off to the FFT, from which I get an array of complex values. Nothing special. Works fine. But now I want to display this data to the user, so I converted the bins to magnitudes following this formula: magnitudes[i] = sqrt(real[i]² + imaginary[i]²) This, with some amplification, yields a pretty good result to display for the user, but I feel that most of the levels being displayed are just random noise. I want to show the user what he/she "hears", where the melody or vocals are amplified OVER the other background music. The thing I'm unsure of is how to approach this problem. What can I do to filter out everything else and only have the melody/vocals left? I can understand there's no 100% clean way to do this, but what method could give me a decent result? Thank you • I'm afraid there is no simple method to achieve what you need. – Jazzmaniac Apr 8 '15 at 17:38 • What people hear is a psycho-acoustic issue, not something an FFT can separate. For typical music recordings, there is no way to filter out everything else but voice or melody, as the spectrums are mixed and overlapping. The practical solution is to use a multi-track recording where voice and music are already on completely separate tracks. – hotpaw2 Apr 8 '15 at 20:29 I remember reading a similar question (either here or on Stack Overflow) about melody extraction. In any event, someone did create such a project as a PhD thesis (luckily I bookmarked the site).
He even offers his work (for research/personal/non-commercial use) for Windows, Mac and Linux: http://www.justinsalamon.com/melody-extraction.html#demo If you could somehow integrate this into your application, you could use the extracted melody to display the visuals, but still just play the original song from the speakers/headphones. • Very cool project! Exactly what I've been looking for. I'll check it out more in-depth later today, and hopefully implement this. Thank you! – Tokfrans Apr 10 '15 at 4:10 As you are going at it, you won't be able to see much detail. There won't be (much) visible difference between loud sounds and quieter sounds, not even if you multiply the magnitudes to make them larger. I don't think you really need to separate the sounds into "melody+voice" and "background." That would be a (seriously) non-trivial problem. I think you just need to make the difference more obvious. To make the difference more obvious, change the magnitudes to decibels. The decibel is a logarithmic representation. What you need to do is this: magnitudes[i] = 20 * log10(sqrt(real[i]² + imaginary[i]²)) This will give you a logarithmic view of the music. The formula won't give you dBs that relate to any real volume level - the levels are relative to one another, so while you can say that a particular frequency is 20 dB louder than another one, you can't say what the absolute loudness is for any of them. This is fine, though, since you are just looking at it and not using it for measurements. • This surely gave me better results! Just added an arbitrary number to the output to make it fit in my scale. Good tip! – Tokfrans Apr 10 '15 at 4:43
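A minimal sketch of the window-FFT-decibel pipeline described above, in Python/NumPy rather than C#/NAudio (the sample rate, tone frequency, and -120 dB display floor are illustrative choices, not from the answer):

```python
import numpy as np

def spectrum_db(samples, floor_db=-120.0):
    """Hamming-windowed FFT -> magnitude spectrum in relative decibels."""
    windowed = samples * np.hamming(len(samples))
    bins = np.fft.rfft(windowed)
    mags = np.abs(bins)                    # sqrt(real^2 + imaginary^2)
    with np.errstate(divide="ignore"):     # silent bins would give log10(0)
        db = 20.0 * np.log10(mags)
    return np.maximum(db, floor_db)        # clamp -inf to a display floor

# A pure 1 kHz tone sampled at 8 kHz with a 1024-point FFT: bin spacing is
# 8000/1024 = 7.8125 Hz, so the tone lands exactly on bin 128.
fs, n = 8000, 1024
t = np.arange(n) / fs
db = spectrum_db(np.sin(2 * np.pi * 1000 * t))
print(int(np.argmax(db)))  # index of the loudest bin
```

Because the values are relative decibels, shifting the whole array by a constant (as the comment above mentions) just moves the display range without changing the shape.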
https://www.jiskha.com/archives/2009/11/21
# Questions Asked on November 21, 2009 1. ## physics It has been suggested that rotating cylinders about 12 mi long and 5.4 mi in diameter be placed in space and used as colonies. What angular speed must such a cylinder have so that the centripetal acceleration at its surface equals the free-fall 2. ## physics What is the tangential acceleration of a bug on the rim of a 12.0 in. diameter disk if the disk moves from rest to an angular speed of 75 rev/min in 4.0 s? 3. ## Calc how do I integrate x^4-3y^3+2y^2? thanks 4. ## College Math II a^4 - ab^3 = I have asked this question on many web sites and still cannot find the steps on how to solve this problem. Can you please help. 5. ## Algebra 2 Find the last digit in 3^9999. I cannot figure this one out. Can someone explain this 6. ## Algebra 2 Find a positive integer smaller than 500 that has a remainder of 3 when divided by 5, a remainder of 6 when divided by 9, and a remainder of 8 when divided by 11. 7. ## Music I need to know all 8-letter music instrument names. 8. ## math 1. (1 pt) Let g(y) = 8y^2 + 13y + 5. A function f is known to be continuous on ( −∞, ∞), to satisfy g ◦ f (x) = 0 for all real x and it is known that the value f (8) is not equal to −1. You have enough information to determine f precisely. 9. ## spanish We sing a song entitled He drinks tequila, and she talks dirty in Spanish ^ there are a couple of phrases that I don't know the meaning of could you help please. the first is "a brasa mi puerta" The second is "to dal ay nochay" I hope this is not offensive 10. ## HCA 220 summarize the processes of digestion, respiration, and circulation and how are they interrelated 11. ## sociology is it difficult to accurately measure prejudice? 12. ## English 1. It can give you happiness when you give happiness to others. (What does 'it' refer to in this sentence? What functions does 'it' have?") 2. Volunteer work is not something that you must do. 3. Volunteer work is not anything that you must do.
(Are both 13. ## English 1. Doctors Without Borders help many sick people in poor countries. 2. Doctors Without Borders helps many sick people in poor countries. (Do I have to use 'help' or 'helps'? Which one is correct?) 14. ## AP World History How do I structure a DBQ essay in the body paragraphs? 15. ## sat the bus would not have had to take the A long detour instead of the main B highway if the bridge did not become C treacherous in the aftermath of an ice D storm. why is C wrong? thanks so much ! 16. ## criminal justice a 700- to 1,050-word essay in APA format on the qualities of effectiveness versus ineffectiveness of judges. In your essay o Describe the responsibilities of judges. o Identify attributes of both effective and ineffective judges. o Provide an example of an 17. ## Ms Sue geography sorry to ask again for help but i need to include a graph in my report about the turkish earthquake but i cant think what graph to do. it needs to show relevant information i was going to do a graph about how many people died but i dont know if that 18. ## education what is book keeping in educational management give me articles on physical plant planning in educational management 19. ## Maths keep getting stuck for integrate e^(bx) cos x i get = cos x e^bx/b - int -sin x e^bx/b = cos x e^bx/b - -sin x e^bx/b^2 - int - cos x e^bx/b^2 this is where i get stuck please help 20. ## dynamath TRUNK AND TAILS bella the dog weighs 72 pounds. that's ABOUT 8628 pounds less than tarra weighs. how much does tarra weigh? i think the answer is 8556 or 8030. 21. ## sat essay assignment: should people always prefer new things, ideas, or values to those of the past? Should people always prefer the new to the old? Many people argue that we should not accept the new easily and should stick to the old. As far as I am concerned, we 22. ## physics a uniform disk and a uniform sphere are rolled down an incline plane from the same point and initially they are at rest.
Find the difference in time they arrive at a mark on the plane which is 6 meters from the starting point. The sphere has twice the mass 23. ## physics a uniform disk and a uniform sphere are rolled down an incline plane from the same point and initially they are at rest. Find the difference in time they arrive at a mark on the plane which is 6 meters from the starting point. The sphere has twice the mass 24. ## physics A children's merry-go-round in a park consists of a uniform 200kg disk rotating about a vertical axis. the radius of the disk is 6 m and a 100kg man is standing on the outer edge when the disk is rotating at a speed of 0.20rev/s. how fast will the disk be 25. ## gr 12. advanced functions Trig identity Prove: cos 2x / (1 + sin 2x) = tan(pi/4 - x) 26. ## College Math II (m^2 - 1)/(m - 1)^2 * (2m - 2)/(3m + 3) = Actually my computer won't allow the line and these are like fractions. m^2 - 1/(m-1)^2 * 2m - 2/3m +3 = I am having a problem trying to follow the steps to show my work and come out with the correct answer. 27. ## College Math II x/(x – 1) - 3/x = 1/2 I am not sure how to solve this equation and show my work. My problem just says to solve the equation. Thanks. 28. ## College Math II x/(x – 1) - 3/x = 1/4 My problem says to show the work and to solve the equation. I am not sure how to use the steps to show my work and answer the problem. Thanks. 29. ## math Bob has three sacks of apples and three more apples in his pocket. Each sack contains the same number of apples. Altogether, Bob has 33 apples. How many apples are in each sack? Solve with a Linear Equation is this right 3x+3=33 3x=30 30/3=10 x-5=10 30. ## average math TRUNK AND TAILS bella the dog weighs 72 pounds. that's ABOUT 8628 pounds less than tarra weighs. how much does tarra weigh? i think the answer is 8530. 31. ## Math Bert is 11 kilometers away from Brenda. Both begin to walk toward each other at the same time. Bert walks at 4 kilometers per hour. They will meet in 2 hours.
How fast is Brenda walking? 32. ## College Math II (4 + sqrt(3)^2 = I am not sure how to follow the steps to come up with the answer. Can someone help me understand the steps. Thanks. 33. ## college The 2.kg mass is heading toward a ramp-there’s no friction. Notice that at the top right there is a relaxed spring. The spring constant is 50 N/m. Calculate the maximum compression of the spring after the mass has risen up the ramp and hit the spring. 34. ## physics The 2.kg mass is heading toward a ramp-there’s no friction. Notice that at the top right there is a relaxed spring. The spring constant is 50 N/m. Calculate the maximum compression of the spring after the mass has risen up the ramp and hit the spring. 35. ## For Writeacher I saw your post this morning, for my question last night..thank you so much!! =D -MC 36. ## College Math II Rationalize the denominator and simplify. sqrt(6)/4srqrt(3) + sqrt(2) = I cannot figure out how to do this and simplify. Can someone help me. Thanks. 37. ## College Math II Solve by using the quadratic formula. x^2 + 6x + 6 = 0 I am having problems using the quadratic formula to get the right answer. Can someone show me the steps that I use? Thanks. 38. ## College Math II Solve by any method. x(x + 1) = 12 I am not sure what method to use to show my work and figure out the answer. Can someone help me understand this problem. Thanks. 39. ## College Math II Solve by any method. a^4 – 5a^2 + 4 = 0 I am having problems figuring out what method to use to solve this problem. I need to show my steps and I am not sure how to do this. Can someone help? Thanks. 40. ## College Math II If a ball is tossed into the air from a height of 6 feet with a velocity of 32 feet per second, then its altitude at time t (in seconds) can be described by the function A(t) = - 16t^2 + 32t + 6. Find the altitude of the ball at 2 seconds. I am having 41. ## College Math II Show a complete solution to each problem. 
Find the exact length of the side of a square whose diagonal is 3 feet. I am having problems understanding how you figure out this problem. Can someone help me? Thanks. 42. ## College Math II Write a complete solution to each problem. If the length of a rectangle is 3 feet longer than the width and the diagonal I s 15 feet, then what are the length and width? I do not understand how to solve this problem. Can someone help me? Thanks. 43. ## College Math II Solve each problem. Find the quotient and remainder when x^2 – 5x + 9 is divided by x – 3. I don't understand how to divide and solve this problem. Can someone help me understand the steps on how to do it. Thanks. 47. ## health care Write a 350- to 700-word paper in APA format describing two approaches or indicators to measuring patient outcomes. Measuring the quality of care is essential to being able to identify which areas of patient care need improvement. A positive patient 48. ## Creative Writing Here is my intro (that bobpursley checked yesterday), and my first body paragraph (that needs to be checked now) : What is wisdom? Is it in the stories that Grandfather told me, or is it in what I achieved from schooling? The reality is, wisdom is not 49. ## physics A 4.0 kg particle is moving along the x axis to the left with a velocity of v= -12.0 m/s. Suddenly, between times t =0 and t = 4.0 x a net force = 3t^2 – 12t is applied to the particle, where F is in N and t is in s. Calculate the velocity of the 50. ## Geography Which of these cities is not located about 55 degrees N latitude and has the largest population? Tbilisi, Georgia Minsk, Belarus Kiev and Odessa, Ukraine Thanks -MC 51. ## math 1 ms = ? s 1 mile second = ??? second 52. ## physics A mass of 1.1 kg is attached to two cords. The cords are each an angle 30 degrees above and below the horizontal, The mass circles around the vertical pole at a fixed speed of 15 m/s in a circle of radius 0.50 m. How do I calculate the magnitude of the net 53. 
## physics A spring-loaded dart gun is used to shoot a dart straight up into the air, and the dart reaches a maximum height of 12 meters. If the spring is compressed only half as far the second time, how far up does the dart go this time (no friction and using 54. ## art I have homework in art where, every week, I must look up an art definition, describe it, and draw a picture with it. I was wondering what the following words mean and/or where I could find pictures: Sfumato, Composite, Anthropomorphism, Tactile, Garish, 55. ## physics The potential energy of a particle on the x-axis is given by U= 8xe ^-x ^2/19 where x is 1 meter and U is in Joules. Can you explain for me how to find the point on the x-axis for which the potential is a maximum or minimum and is this answer a point of 56. ## Physics At what temperature is the rms speed of helium molecules half its value at STP (0 C}, 1.0 atm)? 57. ## english i'm having trouble with coming up with a topic sentence about death. 58. ## algebra parallel to line 2/3x-4=0 through (-4,-6) What makes some companies more profitable than the others how do i solve this thermochemical equation? and what r the steps to it 1. Find the heat of reaction for the following equation. N2(g) + 2 O2(g) -> N2O4(g) given the following steps 2NO2(g) -> N2(g)+ 2O2(g) delta H=-84.8 kj N2O4(g)-> 2NO2(g) delta H= 72.8 61. ## Finance I need some help with an assignment that I am working on for Introduction to Finance class. 95. ## critical thinking Select a topic of interest and explain how you would come up with a reliable sample for obtaining peoples’ opinions. 96. ## physics A small particle of mass 2.0 x 10^-19 kg is attached to a string of relaxed length 0.50 nm .The string is the black horizontal line attached to a an immovable wall on the left. The plot is a graph of the force needed to stretch the string by pulling to the 97. ## communications Does anybody know what an eletronic version of a print ad is? 
I have an idea, but I am not sure if I am correct 98. ## physics A 2000 kg space probe is moving rightward in empty space along the x axis at 12 m/s. One of the probe’s rockets is fired providing a thrust of 1800j N along the y axis. The rocket fires for 1.5 s. How can I derive the equation for the trajectory of the 99. ## physics A 2kg mass is released from rest and slides down an inclined plane of 60 degrees a distance of 1 eter. It strikes a 250 N/m spring. What is the maximum compression of a spring? 100. ## English(literature) What citations from the Catcher in the Rye show Holden Caulfield has sympathy toward the less fortunate? 101. ## physics A string is hanging from the rearview mirror of you car and a ball is at the end of this string. Suppose that your drive around a circular track at fixed speed. Which list below gives all the forces that act on the ball? A) tension and the force of gravity 103. ## physics In order to move a heavy object you must push against it with a larger force than it pushes on you. Is this true or false? 104. ## physics An object in the vacuum of space orbits the earth at a fixed speed in a circular orbit several hundred miles above the earth. What can we conclude about the reaction force? a) That there is no reaction force-the net force on the object is zero, so the 105. ## AP Chemistry Will a non-volatile solute always lower the vapor pressure of the pure solvent in solution? Why or why not? 106. ## Physics We study water of 1.0 g/cm3 density in an open container with its surface exposed to the atmosphere. If we measure the pressure first at a depth d and find the value p1, and then at twice that depth to find p2, the two results are related in this form: a) 107. ## Physics A 9000 kg boxcar traveling at 17.0 m/s strikes a second boxcar at rest. The two stick together and move off with a speed of 6.5 m/s. What is the mass of the second car? 108. ## Social Studies who got the nobel peace prize before barack obama 109. 
## math Find the nth term of the geometric sequence whose initial term is 7 and common ration is 7. (your answer must be a function of n) 110. ## physics Two vehicles with the same mass collide and lock together traveling 28 m/s at 37 degrees north of east after the collision. How fast was the car traveling that was heading north while the other vehicle was traveling east? 111. ## physics Is it true or false that when a baseball player hits a home room, the baseball received a greater impulse from the bat than the bat did from the ball? 112. ## math Find the nth term of the geometric sequence whose initial term is 7 and common ration is 7.(Your answer must be a function of n.) 113. ## physics If a mass attached to the center of a vertical circle swings around at a fixed speed (v) and gravity pulls straight downward, would the tensions in the rope attached to the mass be different at the top of the circle, straight down to bottom of circle, and 114. ## Chemistry Given that the solubility of sodium chloride is 36 grams per 100 grams of water. Which of the following solutions would be considered supersaturated? A. dissolve 5.8 moles of NaCl in 1 L of water B. dissolve 1.85 moles of NaCl in 300 ml of water C. 115. ## physics How can I calculate the tension in the two cords that are labeled A and B in a two mass system? Cord A is on top with a 1 kg mass attached and Cord B connects this mass to a 2 kg mass. The two mass system accelerates directly upward at 2m/s^2. 116. ## college Environmental Factors Contribute to Juvenile Crime and Violence. Delbert S. Elliott. Opposing Viewpoints: Juvenile Crime. Ed. A.E. Sadler. San Diego: Greenhaven Press, 1997. My paper is asking for the title and citation. Would I include all of the above? 117. ## Physics A tow truck is pulling a car out of a ditch by means of a steel cable that is 10.2 m long and has a radius of 0.50 cm. When the car just begins to move, the tension in the cable is 870 N. How much has the cable stretched? 
COnstant for Steel cable Y=2.0e11 118. ## Calculus [Integrals] h(x)= -4 to sin(x) (cos(t^5)+t)dt h'(x)=? 119. ## stats Ok, so in a game of Texas Hold 'Em you have two hearts in your hand. The next two cards places are both hearts, but the third one is a spade. What are the odds that by the end of the round you will have a Flush (5 of same suit). I've asked someone else and 120. ## English Can you please check the grammar in the sentence thank you. Another Country begins with Rufus walking the streets of Seventh Avenue, hungry, broke, and having nowhere to go until he makes his way to a jazz bar. 121. ## physics Is it true or false that momentum is conserved when total mechanical energy is conserved? 122. ## American National Government Why is the judicial branch a non-parisan body? I know that the judicial branch has independent power but that is really all that I know. Could someone please help me? Thanks. 123. ## geometry if QR=3x and RS=x+12, then x=? 124. ## science how is Dna copied? 125. ## science compare and contrast mitosis and cell divison 126. ## Anatomy and Physiology I the blood-brain barrier is effective against, what 127. ## Dividing Rational Expressions x^3-8/x^2-4 divided by (x^2+2x+4)/(x^3+8). This becomes ((x-2)(x^2+2x+4)/(x+2)(x-2) times ((x+2)(x^2-2x+4))/(x^2+2x+4). I understand that (x-2)(x^2+2x+4)=(x^3-8) when I distribute it, however, if I didn't have the help of my book, I'd have no clue how to 128. ## math HELP PLEASE!!! ESPECIALLY PART 2 2. The following questions do not require the data above. Answer with the data given in each question. i. There is a total of approximately 12.4 billion acres of agricultural land (cropland, rangeland, irrigated land) on 129. ## physics A ball is thrown up in the air with an initial speed of V. If the air exerts a constant frictional force F on the ball, show that H = (V^2) / 2(g+F/m) 130. ## corrections response describing a sociological aspect of prison life. 
This may include the subculture of prisons,including violence, sex, gangs, death, and language, or the social organization of prison, such as demograhics, relationships, social groups, social 131. ## Physic 31m/s is a typical highway speed for a car. At what temperature do the molecules of nitrogen gas have an rms speed of 31 m/s? (Answer in K) 132. ## Physic 7.7 mol of helium are in a 16 {\rm L} cylinder. The pressure gauge on the cylinder reads 65psi. What are (a) the temperature of the gas in Celsius and (b) the average kinetic energy of a helium atom? 133. ## math ?/8 = 15/? = 24/32 134. ## medical terminology according to the article reduce abbreviation errors do you think enough steps have been taken to reduce errors explain why you agree or disagree 135. ## Maths Differentiate y=(5x-2)^3(3-x)^6 using the product rule. i don't get the step where they find the common factor could you help me by doing it step by step please ? thank you! 144. ## math Roberto ate 3 pieces of a pizza and them felt that he should pay 1/4 of the cost because that's the fraction he ate. How many pieces was the pizza cut into? Please could you explain me the problem and you give me the answer? thank you
http://physicsfromtheedge.blogspot.com/2017/03/new-evidence-at-high-redshift.html
I've suggested (& published in 21 journal papers) a new theory called quantised inertia (or MiHsC) that assumes that inertia is caused by relativistic horizons damping quantum fields. It predicts galaxy rotation, cosmic acceleration & the emdrive without any dark stuff or adjustment. My Plymouth University webpage is here, I've written a book called Physics from the Edge and I'm on twitter as @memcculloch

## Thursday, 23 March 2017

### New Evidence at High Redshift

One of the unique and testable predictions of MiHsC / quantised inertia is that the dynamics of galaxies should depend on the size of the observable universe. This is because it predicts a cosmic minimum allowed acceleration of 2c^2/CosmicScale. Why is this? Well, the Unruh waves that an object sees, and that (in QI) cause its inertial mass, lengthen as the object's acceleration decreases, and you can't have an acceleration that gives you Unruh waves too big to resonate in the cosmos. So if you imagine running the cosmos backwards, as the cosmic scale shrinks, more Unruh waves would be disallowed (as in the narrow end of the emdrive), inertial mass goes down, centrifugal forces decrease and so galaxies need faster rotation to be dynamically balanced. Therefore, QI predicts that in the past galaxies should have been forced to spin faster (everything else being equal).

Many people online alerted me to a paper that has just been published in Nature (Genzel et al., 2017) that supports this prediction. The paper looked at six massive galaxies so far away from us that we are looking at them many billions of years ago, when the observable universe was much less than its present size, and, sure enough, they spin faster! To compare QI with the data, I have plotted the preliminary graph below.
It shows along the x axis the observed acceleration of these ancient galaxies, determined from Doppler measurements of their stars' orbital speed (a=v^2/r), and along the y axis the minimum acceleration predicted by quantised inertia (a=2c^2/CosmicScale). The QI vs observation comparison for the six galaxies is shown by the black squares, and the numbers next to them show the redshift of each galaxy. The redshift (denoted Z) is a measurement of distance: Edwin Hubble found that the further away galaxies are from us, the faster they are receding from us, so their light is stretched in a Doppler sense and is redshifted. Redshift therefore increases with distance. The redshifts of the galaxies in this study ranged from Z=0.854, bottom left in the plot, at which the cosmos was 54% of its present size, to Z=2.383, centre right, for which the cosmos was pretty cramped at 30% of its present size (the formula for the size of the cosmos at redshift Z is SizeThen = SizeNow/(1+Z)).

Quantised inertia predicts clearly that the acceleration increases with redshift, just as observed. The diagonal line shows where the points should lie if agreement was exact. Although the points are slightly above the line, this is not a huge worry since the data is so uncertain. The uncertainty in the observed acceleration is probably something like 40% (looking at the scatter plots in Genzel et al. I've assumed a 20% error in the velocities they measured, and a=v^2/r). I have not plotted error bars yet because it'll take time to work out properly what they are. The two highest-redshift galaxies are obviously quite aberrant, and this shows that the data is not yet good enough to be conclusive.

So, in a preliminary way, and error bars pending, the graph shows that QI predicts the newly observed increase in galaxy rotation in the distant past. Given the uncertainties, more data is urgently needed to confirm this. As far as I know, quantised inertia is the only theory that predicted this observed behaviour.
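The scaling used in the plot can be sketched numerically. The snippet below is an illustrative sketch, not code from the post: the value of `THETA_NOW` (a present cosmic scale of roughly 8.8e26 m) and the function names are my assumptions. It computes the QI minimum acceleration 2c^2/CosmicScale at a given redshift, using SizeThen = SizeNow/(1+Z) for the past cosmic scale, alongside the observed acceleration a = v^2/r.

```python
# Hedged sketch (assumptions flagged): THETA_NOW is an assumed round
# value for the present cosmic scale, not a figure quoted in the post.

C = 2.998e8          # speed of light, m/s
THETA_NOW = 8.8e26   # assumed present scale of the observable universe, m

def qi_min_acceleration(z):
    """QI minimum acceleration a = 2c^2/CosmicScale at redshift z,
    with the cosmic scale back then given by SizeThen = SizeNow/(1+Z)."""
    theta_then = THETA_NOW / (1.0 + z)
    return 2.0 * C**2 / theta_then

def observed_acceleration(v, r):
    """Centripetal acceleration a = v^2/r from the Doppler-measured
    orbital speed v at radius r."""
    return v**2 / r

# At z = 0 the prediction is ~2e-10 m/s^2; at z = 2.383 (the highest
# redshift in the study) it is (1+z) times larger, since the minimum
# acceleration scales inversely with the cosmic scale.
print(qi_min_acceleration(0.0))
print(qi_min_acceleration(2.383) / qi_min_acceleration(0.0))
```

The point of the sketch is only the scaling: the predicted minimum acceleration grows linearly with (1+Z), which is why the higher-redshift galaxies sit further up the diagonal.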
References

Genzel et al., 2017. Nature, 543, 397–401 (16 March 2017). http://www.nature.com/nature/journal/v543/n7645/abs/nature21685.html

Andrew Jaremko said...
Dr. McCulloch - I'm very pleased that you saw the posting of this paper. It showed up on my radar as well, and I was wondering how it fitted with quantized inertia. I wasn't sure how to read the data, or how it would fit with your model. But I'm glad that it fits so well, and that you have done a preliminary analysis and provided the graph. The message will get out eventually.

Peter Andrews said...
Congratulations on another successful prediction. Can you comment on any insight MiHsC provides into the Tully-Fisher relation? I am curious about this since Eric Lerner discussed this in his arguments against the Big Bang.

Mike McCulloch said...
Peter: Thank you. MiHsC says that the Tully-Fisher relation is v^4 = 2GMc^2/Cosmicsize, so no arbitrary parameters are needed, and T-F varies with Cosmicsize / era. https://arxiv.org/abs/1207.7007

Filip Piękniewski said...
Mike, do you think MiHsC could be used as a model for jets on accretion disks? As far as I know, the theory for those things is a bit flaky - problems of angular momentum transport etc. Since I've looked at your paper on flyby anomalies (where there was additional acceleration in the polar region, perpendicular to the plane of rotation), I've been wondering whether this could be another interesting application?

Mike McCulloch said...
Dear Filip, Very good point. Indeed, as you noticed, MiHsC predicts a loss of inertial mass near to spin axes, because the mutual acceleration there is smaller, hence the flyby anomalies. This effect is bigger especially for larger, slower-rotating objects like galaxies and maybe accretion discs. I tried modelling galactic relativistic jets on paper, but didn't get far because it depends very strongly on what minimum rotational radius you assume. It probably needs a computer model.

Analytic D said...
Mike, There is almost certainly a correlation between jet speeds/distances and redshift, for a given radius. But for a given radius, think about the round-tipped cone shape of a horizon, then superimpose all the cones for the matter of some spinning galaxy. Then consider the density of the overlap along the axis of rotation. Fix some point along the axis. Objects at that point will experience less mutual acceleration, but in all those relative directions at once. This damps the object's horizon in all directions, reducing inertia. This must be the case if all reference frames are equally valid.

Analytic D said...
As a follow-up, I think this is basically what is happening with all the other spinning disc anomalies, as well as the fly-by anomaly. I hadn't gotten a clear mental image of the mechanics until now.

tyy said...
Fascinating to observe how self-made realities are born.

Josave said...
Much more fascinating is to see how they die... Dark matter fraud: http://blogs.discovermagazine.com/crux/2017/02/06/dogma-derailed-search-dark-matter/

joesixpack said...
Wow. The empirical falsification of dark matter is too much to ignore. Six more papers in that link... "tyy" - perhaps you should read them?

David Anjelo said...
Physicists may have observed Hawking radiation for the first time https://phys.org/news/2010-09-physicists-hawking.html

qraal said...
The unappreciated fact in the Dark-Matter vs MOND debate is that essentially MOND is an approximation to Horizon Mechanics ;-)

qraal said...
Lee Smolin has essentially posted Horizon Mechanics on the arXiv, but failed to mention your work... https://arxiv.org/abs/1704.00780 ...might need to get in his ear about it.

Mike McCulloch said...
qraal: Well, with a quick look at it, it is not yet quantised inertia / horizon mechanics (he still uses a0 and he needs two metrics, and at the end he dismisses the approach anyway) but it is getting close enough that Smolin should have mentioned my work.
I can't believe he's not aware of it, especially since I've been sending him pdfs of my papers for the past six years! (I never get a reply). I'll email him...

qraal said...
In my mind, surely he should've thought of your work! He's open to new ideas, being a contrarian on so many matters, but probably gets a pile of well-meant emails - like any physics celebrity. Would be hard to get through his filters.

Mike McCulloch said...
qraal: Thanks. Anyway, I've emailed him politely pointing out it was unprofessional of him to omit mention of QI (I'm fairly sure most theorists are aware of QI now, and he's more aware than most). So after he reads that we're bound to be friends in no time ;)

Mike McCulloch said...
qraal: I've just received a reply from Lee Smolin apologising for not reading my emails. Nice guy. He says he'll cite my work in future :)

Stuart Matthews said...
Interesting development; http://newatlas.com/dark-energy-existence-questioned/48708/

Analytic D said...
Stuart: I like how the answer they come back with is to just use even more computing power. Mike's characterization of overspending on DM is seeming truer and truer.

qraal said...
Excellent news Mike! Smolin probably gets buried in email. Hopefully it's the start of a beautiful scientific relationship.

qraal said...
Slightly tangential to the discussion, but an ongoing demonstration of a possible EM-Drive is again hinted at in the media... http://www.chinatopix.com/articles/112839/20170327/u-s-air-force-top-secret-x-37b-spaceplane-breaks.htm ...confirmation that it works, with hard performance data, would be a potential boon to Horizon Mechanics. Of course a null result would be *less* fun. But experiment trumps expectation.

Zephir said...
/* you can't have an acceleration that gives you Unruh waves that are too big to resonate in the cosmos.
So if you imagine running the cosmos backwards, as the cosmic scale shrinks, more Unruh waves would be disallowed (as in the narrow end of the emdrive), inertial mass goes down, centrifugal forces decrease and so galaxies need faster rotation to be dynamically balanced. Therefore, QI predicts that in the past galaxies should have been forced to spin faster (everything else being equal) */

Except that I think the Universe is steady state, so the local characteristics of remote galaxies shouldn't change with distance. Not to mention that we have multiple observations that dark matter dominated the early Universe instead - this info has even already got into textbooks and encyclopedias. So we have a nice controversy here, don't we? BTW, could you please document that the Nature article observation was actually predicted by the MiHsC/QI theory rather than postdicted?

Mike McCulloch said...
Zephir: I predicted this last year, well before the latest Nature paper, and I wrote a paper and have submitted it to many journals so far with no luck; see this blog: http://physicsfromtheedge.blogspot.co.uk/2016/10/a-test-using-redshift_30.html but at the time I had checked around and found some less conclusive evidence that this was the case (see the blog), so it wasn't a complete shot in the dark. Nevertheless, it cannot be denied that QI fits the data.

Zephir said...
OK, so you deserve to be added to this list. Anyway, I can still see a conflict between this galactic rotation evolution and a steady-state Universe. The seeming change of dark matter concentration / density doesn't represent a problem for it - but it shouldn't change the dynamic behavior of galaxies.

qraal said...
Hi Mike, http://iopscience.iop.org/article/10.3847/1538-4357/aa5da9 ...but without mentioning your work in the abstract. It must be something in the concept ether that people are thinking independently.

Zephir said...
/* It must be something in the concept ether that people are thinking independently.
*/
And it actually is: in the dense aether model this behavior has its analogy in the behavior of black holes (remnants of visible matter) and the dark matter (progenitor of visible matter). The black holes represent the past of the Universe and they tend to evaporate sooner or later. They remain cohesive, being formed from particles of positive space-time curvature, while the particles of opposite space-time curvature are systematically expelled from them. Dark matter particles (scalar waves, magnetic turbulences of the vacuum) behave like sparse bubbles of space-time and repel each other at a distance, thus remaining in diaspora. Instead, they're attracted to the gravity field of existing observable matter, thus forming dark matter halos around massive galaxies and stars, which nevertheless remain separated at a distance. The experts in alternative physics behave similarly - they're expelled from the mainstream and work in a diaspora, fighting each other. This effect indeed slows down cooperation and progress, which has particularly tragic consequences for research on cold fusion and overunity technologies. Our Mike McCulloch isn't very different in this matter, as he has a tendency to delimit himself against MOND/MOD and holographic models, even though his numeric/geometric model looks quite close to them. I presume it has very much to do with natural human competitiveness or even jealousy. Scientific people aren't very different in this respect: they cooperate only if they can get more profit from it.

Zephir said...
BTW, whereas the omission of a citation by Smolin could still be understood as a different perspective of thinking, the above example looks like a far more serious case of plagiarism against Mike - as it uses his very logic, but cites MOND instead of the MiHsC theory.
Now McCulloch will face the hard reality that, with respect to numeric predictions, his MiHsC theory gets very close to MOND theory, which is forty years old and as such much better established in physics. After all, how exactly does Milgrom's scale of acceleration $a_0 = (cH/2\pi)\sqrt{1-q}$ differ from MiHsC's one? I think just by a numeric factor.

Zephir said...
Even worse for Mike, van Putten is using holography in his reasoning about the Rindler horizon, which currently represents the bandwagon of all the abandoned string theorists (from Verlinde to Maldacena), who are just seeking satisfaction after the failure of their theories at the LHC and in underground WIMP detectors. The problem is, once you apply the projective geometry of 5D holography and Milgrom's Hc factor, then you wouldn't need to use MiHsC at all for generating very similar phenomenological predictions. Instead, you would get the warm support of the most assertive portion of the theoretical physics community. Because the wide portfolio of McCulloch's predictions is just what all these guys need most desperately at the moment. Now they can derive them in their own way, one after another.

Unknown said...
@zephir, there is a significant difference between having an arbitrary constant (which is set to different values for different galaxies IIRC) and having a formula consisting entirely of externally defined values that applies to more and more 'unusual' situations.

Mike McCulloch said...
Unknown: That is right, and well said. I've made this point many times. If they have to use a fudge factor (a0) then they don't have MiHsC, which predicts a0 and also how it changes when the Cosmicscale changes (as in the emdrive, or galaxies at different redshift). As far as I can see, no one in the mainstream has yet understood MiHsC.

Zephir said...
@Unknown: Milgrom doesn't use arbitrary constants in his theory. What is arbitrary about the product a0 = cH_0 ~ 10^{-8} cm s^{-2}?

Mike McCulloch said...
Zephir: As you know, the value of a0 is not set in MoND from a theory. It is fitted empirically from lots of galaxy rotation data. It is arbitrary in MoND in the sense that there is no 'reason' given for its value. Hand-waving, after the 'tuning', that a0 is close to cH is not enough. MiHsC gives a physical reason that predicts a0 exactly. So MiHsC is above the level of theories like Newtonian gravity that need an arbitrary constant, G, and is at the level of special relativity, which needs nothing arbitrary and only observables like c.

Zephir said...
/* Hand-waving after the 'tuning' that a0 is close to cH is not enough */
It's derived from cold dark matter theory. Anyway, mainstream physics doesn't care very much how such derivations actually work, as long as they work and are supported by other theories. https://arxiv.org/abs/1304.7483 https://arxiv.org/abs/astro-ph/0107284 https://arxiv.org/abs/1703.06110 The inherent property of hyperdimensional phenomena like the dark matter is that they can be described from multiple equally relevant low-dimensional perspectives / projections at the same moment. In such a case the physics community may not support the most straightforward explanation, but the one which compromises the least number of existing explanations and theories. To this extent physical theories condense between facts on principles of Bayesian logic, like the dark matter filaments between galaxies (an analogy of facts). The MOND/MiHsC theories cannot account for this mechanism yet, as they lead to spherically symmetric solutions of dark matter around massive bodies - i.e. not filaments.

Zephir said...
/* Hand-waving after the 'tuning' that a0 is close to cH is not enough */
Once you believe in the expanding universe model, then it's quite easy to imagine that this expansion would lead to a deceleration term a0 = cH once you travel at large distances like the Pioneer space probe.
During your travel the Universe, and the distance between bodies, will expand a bit, which would slow down your travel a bit.

Zephir said...
It's true that Milgrom didn't realize this simple connection from the very beginning of his theory - but your understanding of the actual role of "Unruh" radiation in your own model is apparently a subject of evolution too... ;-) Ideas simply develop and gradually improve - it's natural and logical. Anyway, both MOND and MiHsC are conceptually similar and they apply to warm dark matter only: the cold dark matter filaments, or particle-like hot dark matter with cohesive effects, are still waiting for their coherent description. If you want to get it, you should read what I already wrote about it. Now you have basically three options where to go. First, you can keep your original maverick line of reasoning and try to find as much empirical evidence for your theory as you can get. IMO you're good at it and the predictive power of your warm dark matter model isn't exhausted yet. Second, you can jump on the MOND and holography bandwagon and prove how and why these models are equivalent to your model, and merge with the mainstream in this way. Or finally you can start to think about how to develop a more general model of dark matter, which would make you more progressive and cited by future generations - but also even more distant from the mainstream than now.

Unknown said...
@zephir you keep talking about dark matter, but miss that the theory being talked about here does not use dark matter; it assumes there is zero (or near zero) dark matter. You are missing that the key to Mike's theory is the math that predicts the behavior; the explanation of why the formula works is far less important than the fact that it does (and that it does without requiring a 'magic number' being used). David Lang (aka unknown above :-) P.S.
your concern for Mike's legacy is noted, but I'm pretty sure that he's not in this for his legacy, but rather because he spotted something that he thinks works better than what's currently used. Your "concern trolling" about how Mike should abandon his positions so that he will be accepted more is not likely to help.

Zephir said... @Unknown: we are talking here about dark matter as a colloquial denomination for deviations from classical relativity and Newtonian physics. In a wider context every space-time curvature can be considered as matter: https://www.newscientist.com/article/mg19125645-800-you-are-made-of-space-time For example, in the dense aether model the gravitational lens is additional matter surrounding the massive body like an atmosphere, and it exhibits surface tension, which has mechanical properties and which collects other bubbles of space-time, i.e. the dark matter particles. The actual behavior of dark matter is also much richer than the MiHsC/MOND theories describe; for example, it not only makes large objects more cohesive, but also makes small objects more dynamic (hotter), and so on.

Unknown said... redefining 'dark matter' to mean 'whatever it takes to account for the observations' instead of the traditional 'physical matter that we can't see' means that you can call anything 'dark matter'. It undermines all your statements to make this redefinition. David Lang

Mike McCulloch said... Unknown/David: I agree with you. Zephir's desire to name 'whatever is the cause of galaxy rotation' to be 'dark matter' is absolutely wrong, and misleading. I'm sure dark matter-ists would love it since they could claim to be right, even when proven to be wrong, but in science physical objects come before words, and dark matter means 'matter is there that we can't see'. Quantised inertia is the opposite: matter that we can see is not completely there. You could call it 'light matter' I suppose, in the sense of weight.

Zephir said...
@Mike: Dark matter represents quasiparticles of the vacuum, i.e. vacuum fluctuations which are on the verge of matter and radiation behavior (anapoles, anyons). Like I've said, the MOND/MOD/MiHsC/TeVeS/STVG etc. theories describe only one component of dark matter, and only a subset of that component's behavior. There exist lighter forms of dark matter, which form filaments and which don't fit the spherical symmetry of MOND/MiHsC - and also heavier ones, which are closer to massive particles than all these theories expect. I noticed that you often fight the ignorance of the mainstream physics community - but such a critique presumes that you will not behave ignorantly yourself, with respect both to older theories (Milgrom, holography, entropic gravity) and to future ones (Zephir). Of course the MiHsC theory is yours and the final strategy is solely up to you, but you should reckon with the consequences.

Mike McCulloch said... Zephir: Some of your comments are good, but some I do not agree with at all. For example you say 'there exist lighter forms of dark matter'. There is no direct evidence for that. Sure, there are filaments, but explaining them by making a falsified hypothesis (dark matter) even more complex is a classic mistake made repeatedly in every epoch of history. It could be an effect of MiHsC, but I will not feign confidence until I can predict their size (without an adjustable parameter). That is the right attitude, and that is the difference between MiHsC and the other theories. About your second comment: I am not ignorant of the other theories, and I like MoND as being an early clue to MiHsC, but I do not believe them because they do not agree with all the data DESPITE all having adjustable parameters. Therefore, they are wrong. Also, EG is not an older hypothesis; it was first published 3 years after MiHsC.

Zephir said...
My posts are all perfect - just some are so advanced that you cannot understand them, because you're trapped in your mindset in a similar way to how mainstream physicists adhere to their own theories. So once again: which geometric distribution of dark matter does your theory or MOND predict? It's given by their formula, which contains parameters like the radius or distance from massive objects - as such they cannot predict the directional distribution of dark matter, like the dark matter filaments between galaxies.

Zephir said... Your ignorance of the Hc parameter of MOND theory is also a kind of mainstream-physics ignorance, as you believe that this theory operates with ad hoc parameters only. But this has not been true for at least the last ten years. These theories all evolve with time, and MOND is not an exception. Entropic gravity is also quite old stuff that goes back at least to research on black hole thermodynamics by Bekenstein and Hawking in the mid-1970s, and to holography from the mid-'90s. It just gained popularity in connection with the recent failure of SUSY at the LHC and of WIMP models at detectors.

Josave said... Zephir, with all due respect, you don't understand MiHsC. There is simply no dark matter, nor matter at all; all are just relations of distances and node points in low-frequency Unruh radiation. They define the distribution of movement and inertia. Please don't insist on recovering the "matter" paradigm; advance yourself toward a Machian distribution and non-local theories. I recommend reading the Lee Smolin article cited in this same post. Happy Easter to all.

Zephir said... Actually I think I can understand MiHsC better than McCulloch himself at this moment, because I can also see the limits of this theory (and also the ways in which it can be substituted with another mainstream model, which McCulloch obstinately denies in apparent fear of competition).
From a certain perspective all massive particles are a product of interference of some longitudinal and transverse waves dancing in place, and dark matter particles aren't an exception. What else are solitons and vortex rings than some special anyon particles (they can collide and bounce) which share their properties with waves (they cannot stop their motion in place)? These objects (colloquially called scalar waves in the case of dark matter) can move collectively and/or on their own, the more so the more atemporal and massive they get. My own dark matter theory gets even more non-local than MiHsC once it deals with cold dark matter - but it also admits local cohesive effects of scalar waves once it deals with hot dark matter. These components of dark matter are in a dynamic equilibrium, like water droplets are in equilibrium with underwater turbulence and sound waves inside a Tibetan bowl filled with water.

Analytic D said... Actually, I don't think you do understand MiHsC and the implications of horizons at various scales and the details of such system topologies. Your word salad doesn't disclose your idea or meaning at all like you think it does. And anonymous reference to your own theory doesn't either. However, I think you are at least gesturing towards the truth of (the) matter. Consider two tightly co-orbiting particles. Their Rindler spaces are thus tight overlapping cones. Another particle could approach the pair yet never interact with the physically closer of the two, because it is beyond the horizon edge for that particle. This type of close-orbit sheltering is a powerful concept with big implications. If there is really overlap between your ideas and horizons, you will find it there.

Zephir said... Actually I do understand cold dark matter just as a result of many overlapping Rindler spaces (shielding cones of hyperdimensional and superluminal holographic radiation) of multiple bodies.
Your proposal would work if the Rindler cones propagated luminally - but this is just what can be doubted in the MiHsC model.
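As a numerical aside on the "a0 is close to cH" point debated above, the claim can be sanity-checked in Python. The round figures below for c, H and a0 are standard textbook values, not taken from the thread:

```python
c = 3.0e8        # speed of light, m/s
H0 = 2.27e-18    # Hubble constant, ~70 km/s/Mpc expressed in 1/s
a0 = 1.2e-10     # MOND's empirically fitted acceleration scale, m/s^2

cH = c * H0      # ~6.8e-10 m/s^2
ratio = cH / a0  # ~5.7: same order of magnitude, roughly a factor of 2*pi
```

Whether a factor of roughly 2π counts as "close" is exactly what the commenters are disputing.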
https://proofwiki.org/wiki/Category:Ordinals
# Category:Ordinals

This category contains results about Ordinals. Definitions specific to this category can be found in Definitions/Ordinals.

$\alpha$ is an ordinal if and only if it fulfils the following conditions:

$(1):$ $\alpha$ is a transitive set

$(2):$ $\epsilon {\restriction_\alpha}$ strictly well-orders $\alpha$

where $\epsilon {\restriction_\alpha}$ is the restriction of the epsilon relation to $\alpha$.

## Subcategories

This category has the following 26 subcategories, out of 26 total.
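As a concrete illustration of the definition (standard material, not from the category page itself), the finite von Neumann ordinals satisfy both conditions:

```latex
% The first few von Neumann ordinals: each is the set of all smaller ones.
\begin{align*}
0 &= \varnothing \\
1 &= \{0\} = \{\varnothing\} \\
2 &= \{0, 1\} = \{\varnothing, \{\varnothing\}\} \\
3 &= \{0, 1, 2\}
\end{align*}
% Each is transitive (every element is also a subset of it), and the epsilon
% relation strictly well-orders it: 0 \in 1 \in 2 \in 3.
```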
https://gamedev.stackexchange.com/questions/58788/calculating-projectile-velocity-from-moving-object
# Calculating projectile velocity from a moving object I'm working on a top-down space shooter and am having trouble with calculating/understanding the physics for projectiles launched from the space ship. The ships have a velocity vector and a turret with rotation independent from the body of the ship. Currently projectiles are launched based on the turret's rotation, by making a normalized direction vector and then multiplying each component by a speed. However, this doesn't work when the ship is moving faster than the speed scalar: if the ship is going faster than the projectile speed scalar and the turret is aimed in the same direction as the ship's velocity, the projectile moves backwards (away) from the ship. Assuming there is no drag/gravity (very deep space), how should I go about handling projectiles? The velocity of a projectile fired from a moving vehicle should be vehicleVelocity + projectileVelocity.
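The accepted answer in the last sentence can be sketched directly. A minimal Python version (function and parameter names are mine, not from the question):

```python
import math

def projectile_velocity(ship_vx, ship_vy, turret_angle, muzzle_speed):
    """World-space projectile velocity from a moving ship.

    The turret angle gives a unit direction vector, scaled by muzzle_speed;
    adding the ship's own velocity means the shot always moves away from the
    muzzle, even when the ship outruns the muzzle speed.
    """
    return (ship_vx + muzzle_speed * math.cos(turret_angle),
            ship_vy + muzzle_speed * math.sin(turret_angle))

# Ship moving at 100 units/s along +x, firing forward at muzzle speed 30:
vx, vy = projectile_velocity(100.0, 0.0, 0.0, 30.0)  # (130.0, 0.0), not backwards
```

With the original (world-frame-only) approach the same shot would leave at 30 units/s and fall behind the 100 units/s ship, which is exactly the "projectile goes backwards" symptom described.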
http://www.r-bloggers.com/page/2/
## Design A Bilingual Shiny Application
October 17, 2014
(This article was first published on Category: R | Huidong Tian's Blog, and kindly contributed to R-bloggers.) It's a common requirement that your Shiny application should be bilingual. I have tried several methods and finally I got one which is not so difficult to maintain. My idea is: create a bilingual dictionary or a lookup table which contains three...

## R User Groups and "after hours" Creativity
October 16, 2014
by Joseph Rickert. There is something about R user group meetings that both encourages, and nourishes, a certain kind of "after hours" creativity. Maybe it is the pressure of having to make a presentation about stuff you do at work interesting to a general audience, or maybe it is just the desire to reach a high level of play...

## MilanoR meeting: call for presentations
October 16, 2014
MilanoR staff is organizing the next MilanoR meeting...

## Where do I start using Bioconductor?
October 15, 2014
I was recently asked "where do I get started with Bioconductor?" and thought this would be a good short post. What is BioC? Briefly, Bioconductor (Gentleman, Carey, Bates, and others, 2004) is an open source project that hosts a wide range of tools for...

## Beware Graphical Networks from Rating Scales without Concrete Referents
October 15, 2014
We think of latent variables as hidden causes for the correlations among observed measures and rely on factor analysis to reveal the underlying structure. In a previous post, I borrowed an alternative metaphor from the R package qgraph and produce...

## a bootstrap likelihood approach to Bayesian computation
October 15, 2014
This paper by Weixuan Zhu, Juan Miguel Marín, and Fabrizio Leisen proposes an alternative to our 2013 PNAS paper with Kerrie Mengersen and Pierre Pudlo on empirical likelihood ABC, or BCel. The alternative is based on Davison, Hinkley and Worton's (1992)...

## Rules of thumb to predict how long you will live
October 15, 2014
Figure out how long you will live with these rules of thumb. The post "Rules of thumb to predict how long you will live" appeared first on Decision Science News.

## JJ Allaire, the useR! 2014 interview
October 15, 2014
JJ Allaire has courageously bet, time and time again, on the cutting edge of technologies...

## Structural "Arbitrage": Trading the Equity Curve
October 15, 2014
The last post demonstrated that, far from being a world-beating, absolutely amazing strategy, Harry Long's Structural "Arbitrage" was... Continue reading →

## splitstackshape V1.4.0 for R
October 15, 2014
After more than a year since splitstackshape V1.2.0, I've finally gotten around to making some major updates and submitting the package to CRAN. So, if you have messed-up datasets filled with concatenated cells of data, and you need to split that data up and reorganize it for later analysis, install and load the latest...

## Introducing Revolution R Open and Revolution R Plus
October 15, 2014
For the past 7 years, Revolution Analytics has been the leading provider of R-based software and services to companies around the globe. Today, we're excited to announce a new, enhanced R distribution for everyone: Revolution R Open. Revolution R Open is a downstream distribution of R from the R Foundation for Statistical Computing. It's built on the R 3.1.1...

## The Generalized Lambda Distribution and GLDEX Package for Fitting Financial Return Data – Part 2
October 14, 2014
Part 2 of a series by Daniel Hanson, with contributions by Steve Su (author of the GLDEX package). Recap of Part 1: in our previous article, we introduced the four-parameter Generalized Lambda Distribution (GLD) and looked at fitting a 20-year set of returns from the Wilshire 5000 Index, comparing the results of two methods, namely the Method of Moments...

## googleVis 0.5.6 released on CRAN
October 14, 2014
Version 0.5.6 of googleVis was released on CRAN over the weekend. This version fixes a bug in gvisMotionChart. Its arguments xvar, yvar, sizevar and colorvar were not always picked up correctly. Thanks to Juuso Parkkinen for reporting this issue.

## analogue 0.14-0 released
October 14, 2014
A couple of weeks ago I packaged up a new release of analogue, which is available from CRAN. Version 0.14-0 is a smaller update than the changes released in 0.12-0 and sees a continuation of the changes to dependencies to have packages in Imports rather than Depends. The main development of analogue now takes place on github...

## analyze the public libraries survey (pls) with r
October 14, 2014
each and every year, the institute of museum and library services coaxes librarians around the country to put down their handheld "shhhh..." sign and fill out a detailed online questionnaire about their central library, branch, even bookmobile...

## Analyze Instagram with R
October 13, 2014
This tutorial will show you how you create an Instagram app, create an authentication process with R and get data via the Instagram API. There is no R package for this yet so we... The post "Analyze Instagram with R" appeared first on ThinkToStart.

## New Course! A hands-on introduction to statistics with R by A. Conway (Princeton University)
October 13, 2014
The best way to learn is at your own pace. Combining the interactive R learning environment of DataCamp and the expertise of Prof. Conway of Princeton, we offer you an extensive online course on introductory statistics with R. Start learning now… Whether you are a professional using statistics in your job, an academic wanting a...

## Hadley Wickham's dplyr tutorial at useR! 2014, Part 1
October 13, 2014
Hadley Wickham (perhaps you've heard of his work) presented a 2-hour workshop on dplyr...

## Hadley Wickham presents dplyr at useR! 2014
October 13, 2014
Hadley Wickham is hard at work, releasing packages which leverage the expressive power of R...

## Introducing the Reproducible R Toolkit and the checkpoint package
October 13, 2014
The ability to create reproducible research is an important topic for many users of R. So important, that several groups in the R community have tackled this problem. Notably, packrat from RStudio, and gRAN from Genentech (see our previous blog post). The Reproducible R Toolkit is a new open-source initiative from Revolution Analytics. It takes a simple approach to...

## dplyr 0.3
October 13, 2014
I'm very pleased to announce that dplyr 0.3 is now available from CRAN. Get the latest version by running: install.packages("dplyr"). There are four major new features: four new high-level verbs: distinct(), slice(), rename(), and transmute(); three new helper functions between(), count(), and data_frame(); more flexible join specifications; and support for row-based set operations. There are two...

## Sensitivity and Elasticity of seasonal matrix model
October 13, 2014
The previous article introduced the seasonal matrices and the population growth rate λ of an imaginary annual plant. In this article, let's try the sensitivity analysis of these matrices and the... Continue reading →

## Do Political Scientists Care About Effect Sizes: Replication and Type M Errors
October 13, 2014
Reproducibility has come a long way in political science since I began my PhD all the way back in 2008. Many major journals now require replication materials be made available either on their websites or some service such as the Dataverse Network...

## Beautiful Curves: The Harmonograph
October 12, 2014
"Each of us has their own mappa mundi" (Gala, my indispensable friend). The harmonograph is a mechanism which, by means of several pendulums, draws trajectories that can be analyzed not only from a mathematical point of view but also from an artistic one. In its double pendulum version, one pendulum moves a pencil and the...
http://uncyclopedia.wikia.com/wiki/Gallium?oldid=5696281
# Gallium

Gallium is a type of dangerous slime mold/metal, known for the deaths of people who come in contact with it.

## History of gallium

Gallium was first discovered by the Romans, who used it in the nails of Jesus's cross for crucifixion. It took Jesus three days to cleanse the metal before he could rise again and perform his well-known magic trick. Rome also used it in the lining of its aqueducts, which steadily drove the populace insane and caused the fall of Rome. After the fall of Rome, knowledge of the metal was lost, and people kept drinking from the aqueducts. After about 1000 years, the gallium ran out, hence the Renaissance. Gallium was then proposed to exist by Dmitri Mendeleyev--a chemist with a long name--after he found that, in Soviet Russia, holes have his Periodic Table. He was right, and $4 \pm \infty$ years later, Paul-Émile Lecoq de Boisbaudran--a chemist with a long name--discovered it when his fork tried to eat him.

## Gallium Refinement

All sources of gallium are unconcentrated, but the metal can be refined from bauxite, diaspore, germanite, and a lot of other stuff nobody cares about. The Bayer process is normally used and severely agitates the refined gallium, which normally kills the refiner and all the other useful stuff they got from the ore. Gallium sometimes mixes in with aluminum refined from bauxite, so it's probably a good idea to not touch soda cans, aluminum foil, bike frames, and probably everything else that has metal in it.
## Gallium Conspiracy

Gallium arsenide is a component in solar panels, but manufacturers of solar panels still slip under the radar for weapons of mass destruction. Though Al Gore advocates alternative energy, the only "Inconvenient Truth" is that millions of households have now installed deadly solar panels. The gallium is predicted to strike on December 21, 2012.

## Disposal

Disposal of gallium is usually achieved by throwing it in landfills. However, problems arise when gallium-infested plants grow on the landfill, then cows eat them and get Mad Cow Disease and stampede over the local Wal-Mart, which is the reason for the American economy slowing down. In light of this, new disposal methods for gallium have been suggested, like throwing it into a black hole, nuking it, locking it in a safe in the deepest ocean trench, and putting it in Ziplock bags.

## Reaction with other elements

Gallium bonds readily with filler materials, making it difficult to get rid of. It is because of this that gallium slips into many shows, like Lost, Moment of Truth, sometimes American Idol, and Dora the Explorer. Therefore, it is advised not to watch any of these shows due to possible gallium contamination, and maybe just stay away from TV altogether. You're already avoiding metal; it shouldn't be too hard to cut out TV. Gallium also vigorously reacts with DNA. There is ample evidence[citation needed], as shown when people jump in radioactive waste barrels and come out with superpowers/deadly mutations--the barrel must be made of gallium.

## Gallium in Magic

Gallium is used as an ingredient in many spells, and is found in Voldemort's wand. Gallium elementals are popular among evil wizards, except for the fact that the summoner has to be completely insane to summon something that would kill the summoner. Said elementals can go on very long rampages until they get bored and morph into a doorknob or something. Don't go near any doors in abandoned areas.
In fact, just stay away from all man-made devices. You're already avoiding TV; it shouldn't be too hard to cut out man-made devices.
http://www.gmatpill.com/gmat-practice-test/gmat-problem-solving-questions/gmat-prep/OG-214/question/2392
## GMAT Practice Question Set #7 (Question 19-21)

Problem Solving Question #21: OG-214 (OG #214, pg 183)

In an electric circuit, two resistors with resistances x and y are connected in parallel...

(A) xy
(B) x + y
(C) 1/(x+y)
(D) (xy)/(x+y)
(E) (x+y)/(xy)

GMAT™ is a registered trademark of the Graduate Management Admission Council™. The Graduate Management Admission Council™ does not endorse, nor is it affiliated in any way with, the owner or any content of this web site.
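The question text is truncated here, but in the standard version of OG #214 the task is to find the combined resistance r of the two parallel resistors, which satisfies 1/r = 1/x + 1/y and rearranges to choice (D). A quick Python check (the helper name is mine):

```python
def parallel_resistance(x, y):
    """Combined resistance of two resistors in parallel: 1/r = 1/x + 1/y,
    which rearranges to r = xy / (x + y)."""
    return (x * y) / (x + y)

# Sanity checks: two equal resistors halve the resistance, and the combined
# value is always smaller than either branch on its own.
r = parallel_resistance(6.0, 3.0)  # 18 / 9 = 2.0
```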
http://openstudy.com/updates/4f196c53e4b04992dd220f56
## Inopeki: FOILing, do I multiply or add? Like (2x+4)+(2x+4) I add there, right? And (2x+4)*(2x+4) I multiply there, right?

1. satellite73: yes
2. Inopeki: I did that in class today but I didn't get the right answer..
3. satellite73: what did you get for the first one?
4. Inopeki: (2x+4)+(2x+4) → 4x+2x+4+4+2x+4+4 → 8x+16
5. satellite73: hmm
6. satellite73: (no text)
7. amistre64: when it says to add, I would add. and when it says to multiply, I'd multiply. but that's just me set in my ways :)
8. satellite73: $(2x+4)+(2x+4)=2x+4+2x+4=2x+2x+4+4$
9. satellite73: let's imagine for a moment that x was ten (x is a variable, but it could be ten). then $2x+4=2\times 10+4=24$ and $24+24=48$
10. Inopeki: I thought it was like this [drawing]
11. satellite73: multiplication distributes over addition. so if you had $(2x+4)(2x+4)$ you would use the distributive law, just like if you had $24\times 24$
12. Inopeki: But where's the x?
13. Tomas.A: addition is commutative; you can remove brackets when you have ONLY addition
14. satellite73: but addition is the same as "combine like terms", just as you would with arithmetic:

      24          2x + 4
    + 24        + 2x + 4
    ----        --------
      48          4x + 8

15. amistre64:

      2x + 4              2x + 4
    + 2x + 4          ×   2x + 4
    --------          ----------
      4x + 8               8x + 16
                    4x² + 8x
                    ---------------
                    4x² + 16x + 16

16. Inopeki: Why would it be 4x^2+16x+16?
17. satellite73: @Inopeki I was making an analogy. the work you do with variables is a generalization of the work you do in arithmetic, and if it doesn't work with arithmetic it certainly doesn't work with variables, because the variable could be a number; that is, you could replace the variable by a number
18. amistre64: because that's what happens when you multiply stuff
19. Inopeki: Oh ok
20. Inopeki: so basically just add them up?
21. Inopeki:

      2x + 4
    + 2x + 4
    --------
      4x + 8

22. satellite73: imagine your job was to add $24+24$. what would you do? line them up and add
23. amistre64:

      5 + 3 = 8
    × 2 + 4 = 6
    -----------
      20 + 12
    10 + 6
    -----------
    10 + 26 + 12 = 48

24. Inopeki:

      24
    + 24
    ----
      48

25. satellite73: and similarly

      2x + 4
    + 2x + 4
    --------
      4x + 8

26. Inopeki: Oh! Give me another example?
27. satellite73: $(3x+5)+(4x+1)$
28. Inopeki: (3x+5) + (4x+1) = 7x + 6
29. satellite73: that's it!
30. Inopeki: Yes! :D Thanks
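The two operations in the thread can also be checked mechanically. A small Python sketch (helper names are mine, not from the thread): addition combines like terms coefficient-by-coefficient, while FOIL-style multiplication is a convolution of the coefficient lists.

```python
def poly_add(a, b):
    """Add polynomials given as ascending coefficient lists (combine like terms)."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_mul(a, b):
    """Multiply polynomials: every term of a times every term of b (FOIL, generalized)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

p = [4, 2]               # 2x + 4, ascending order: constant term first
sum_p = poly_add(p, p)   # (2x+4)+(2x+4) = 4x + 8        -> [8, 4]
prod_p = poly_mul(p, p)  # (2x+4)(2x+4) = 4x² + 16x + 16 -> [16, 16, 4]
```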
https://www.gradesaver.com/textbooks/math/calculus/thomas-calculus-13th-edition/chapter-1-functions-practice-exercises-page-37/19
## Thomas' Calculus 13th Edition

Function: $y=|x|-2$

(a) Domain: $x$ can be any real number, so $x\in(-\infty,\infty)$.
(b) Range: since $|x|\geq 0$, we have $y=|x|-2\geq -2$, so $y\in[-2,\infty)$.
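A quick numeric sanity check of the stated domain and range (my sketch; the grid and step size are arbitrary):

```python
# Sample y = |x| - 2 on a symmetric grid and confirm the minimum is -2,
# attained at x = 0, with -2 a lower bound for every sampled value.
xs = [i / 100 for i in range(-500, 501)]
ys = [abs(x) - 2 for x in xs]

print(min(ys))                   # -2.0, at x = 0
print(all(y >= -2 for y in ys))  # True: -2 is a lower bound
```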
https://web2.0calc.com/questions/i-get-no-diea
+0

# i get no idea!

In a triangle ABC, a, b and c are the sides opposite the angles A, B and C, and $2a\sin A=(2b+c)\sin B+(2c+b)\sin C$.
(I) Find the angle A.
(II) If $\sin B+\sin C=1$, what is the shape of this triangle?
Feb 4, 2015

#1: try to find the angle A....... Feb 4, 2015

#2 (Melody, +111246):
$$2a\sin A=(2b+c)\sin B+(2c+b)\sin C$$
$$0=(2b+c)\sin B+(2c+b)\sin C-2a\sin A$$
Using the sine rule I know that $\sin B=\frac{b\sin A}{a}$ and $\sin C=\frac{c\sin A}{a}$. Substituting I get
$$0=(2b+c)\frac{b\sin A}{a}+(2c+b)\frac{c\sin A}{a}-2a\sin A$$
$$0=\frac{\sin A}{a}\left[(2b+c)b+(2c+b)c-2a^2\right]$$
Neither $\sin A$ nor $a$ can be 0, so
$$0=2b^2+bc+2c^2+bc-2a^2$$
$$0=b^2+c^2-a^2+bc$$
$$-bc=b^2+c^2-a^2$$
$$-\frac{1}{2}=\frac{b^2+c^2-a^2}{2bc}=\cos A\qquad\text{(cosine rule)}$$
$$A=\pi-\frac{\pi}{3}=\frac{2\pi}{3}$$
Feb 5, 2015

#3 (Melody, +111246): Part 2. If $\sin B+\sin C=1$, what is the shape of this triangle? Since $B+C=\pi-A=\frac{\pi}{3}$,
$$\sin B+\sin C=1\quad\Rightarrow\quad\sin\left(\frac{\pi}{3}-C\right)+\sin C=1$$
Now I am really confused, because according to Wolfram|Alpha there is no valid solution to this where B and C are acute angles. http://www.wolframalpha.com/input/?i=sinB%2Bsin%28pi%2F3-B%29%3D1 Maybe $A=2\pi/3$ was wrong??? Feb 5, 2015

#4 (Alan, +31213): Have a look at the following for part (ii): Feb 5, 2015

#5 (Melody, +111246): Thanks Alan. Using Alan's pic: $\sin B=1/2$ and $\sin C=1/2$, so the statement is true.
1) How do you know that is the only answer? and 2) Why didn't Wolfram|Alpha give me that answer?
Thank you. :) Feb 5, 2015

#6 (Alan, +31213): Given that C must be between 0 and $\pi/3$, the graph below shows that there is only one solution. I don't know why WolframAlpha doesn't find this. — Alan, Feb 5, 2015

#7 (+111470): Very impressive, Melody and Alan...!!! Feb 5, 2015

#8 (Melody, +111246): Thanks Alan :) and Thanks Chris :) Feb 5, 2015

#9 (+111470): Here's the graphical solution to the last part of Melody's answer using Desmos. It shows that angles B and C are, indeed, equal. GRAPH Feb 5, 2015
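A numeric check of the thread's conclusion (my addition, not from the thread): with $A=2\pi/3$ and $B=C=\pi/6$, take sides proportional to the sines of their opposite angles (the sine rule) and confirm both given conditions hold, so the triangle is isosceles with an obtuse apex angle.

```python
import math

# Angles of the claimed solution: A = 2*pi/3, B = C = pi/6.
A, B, C = 2 * math.pi / 3, math.pi / 6, math.pi / 6
# By the sine rule the sides are proportional to the sines; any scale works.
a, b, c = math.sin(A), math.sin(B), math.sin(C)

lhs = 2 * a * math.sin(A)
rhs = (2 * b + c) * math.sin(B) + (2 * c + b) * math.sin(C)
print(abs(lhs - rhs) < 1e-12)                      # True: condition (I) holds
print(abs(math.sin(B) + math.sin(C) - 1) < 1e-12)  # True: condition (II) holds
```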
http://www.physicsforums.com/showthread.php?t=363124
## Why do probability distributions of thermodynamic variables tend to be Gaussian?

The probability distribution for some thermodynamic variable x is given by $$P = N e^{-A(x)/kT}$$ where A(x) is the availability, which can be replaced by the Helmholtz free energy F, the Gibbs free energy G, etc., depending on the conditions imposed. N is just a normalization constant. A(x) can be expanded in a Taylor series about the equilibrium value $x_0$: $$A(x) = A(x_{0}) + (x - x_{0})\left(\frac {\partial A} {\partial x}\right)_{x = x_{0}} + \frac{1}{2} (x - x_{0})^{2} \left(\frac {\partial^2 A} {\partial x^2}\right)_{x = x_{0}} + \cdots$$ The second term is 0 since $\partial A/\partial x = 0$ at equilibrium. If we truncate all the higher-order terms, clearly we see that P will be a Gaussian distribution with mean $x_{0}$ and standard deviation $$\sqrt {\frac {kT} {\left(\frac {\partial^2 A} {\partial x^2}\right)_{x = x_{0}}}}$$ What is the justification for truncating this series? This is justified if $(x - x_{0})$ is small. But why will it be small for big N?

Reply (Science Advisor): I am not familiar with the details of the physics. However, such a truncation would be based on the assumption that $|x-x_{0}|$ is small.
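A rough numeric illustration bearing on the question (my sketch, not from the thread): because A is extensive, its derivatives grow like the system size N, while the typical fluctuation scale $\sqrt{kT/A''}$ shrinks like $1/\sqrt{N}$, so the neglected quartic contribution at a typical fluctuation falls off like 1/N. All constants below are invented.

```python
import math

# Model availability A(x) = (k2/2) x^2 + (k4/24) x^4 with its minimum at x0 = 0.
# Both coefficients scale together (k4 = k2 here), standing in for the
# extensive growth of all derivatives of A with system size.
kT = 1.0
errors = []
for k2 in (1.0, 100.0, 10000.0):   # growing "system size"
    k4 = k2
    x = math.sqrt(kT / k2)         # typical fluctuation scale
    exact = math.exp(-(0.5 * k2 * x**2 + k4 * x**4 / 24) / kT)
    gauss = math.exp(-0.5 * k2 * x**2 / kT)
    errors.append(abs(exact - gauss) / gauss)
print(errors)  # relative error of the Gaussian truncation shrinks like 1/k2
```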
https://www.geogebra.org/material/show/id/39269
# Example 2

One hockey player passes a puck to a second player. The second player hits the puck instantly. The total meters, $m$, traveled by the hockey puck after $s$ seconds can be represented using the function $m = \begin{cases} 48s, & \mbox{if } 0 \leq s \leq 1 \\ 40s + 8, & \mbox{if } 1 < s < 2 \end{cases}$. Create a graph to show the distance traveled by the puck after $s$ seconds.

Material Type: Worksheet. Tags: piecewise, function, domain, range, graph, hockey, puck. Target Group (Age): 14–18. Language: English. GeoGebra version: 4.4.
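The worksheet's function can be sketched directly (an illustrative helper, not part of the worksheet), including a check that the two pieces agree at $s=1$, so the graph is continuous there:

```python
def meters(s):
    """Total meters traveled by the puck after s seconds, per the worksheet."""
    if 0 <= s <= 1:
        return 48 * s        # first pass: 48 m/s
    elif 1 < s < 2:
        return 40 * s + 8    # after the second player's hit: 40 m/s
    raise ValueError("s outside the modeled interval [0, 2)")

print(meters(1.0))            # 48: distance when the second player hits
print(meters(1.5))            # 68.0: partway through the second leg
print(48 * 1 == 40 * 1 + 8)   # True: the two pieces meet at s = 1
```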
https://lw2.issarice.com/posts/tAfK3ckM9yrQBLCJL/raven-paradox-settled-to-my-satisfaction
# Raven paradox settled to my satisfaction

post by Manfred · 2014-08-06T02:46:19.257Z · score: 17 (16 votes) · LW · 24 comments

The raven paradox, originated by Carl Gustav Hempel, is an apparent absurdity of inductive reasoning. Consider the hypothesis:

H1: All ravens are black.

Inductively, one might expect that seeing many black ravens and no non-black ones is evidence for this hypothesis. As you see more black ravens, you may even find it more and more likely.

Logically, a statement is equivalent to its contrapositive (where you negate both things and flip the order). Thus if "if it is a raven, it is black" is true, so is:

H1': If it is not black, it is not a raven.

Take a moment to double-check this. Inductively, just like with H1, one would expect that seeing many non-black non-ravens is evidence for this hypothesis. As you see more and more examples, you may even find it more and more likely. Thus a yellow banana is evidence for the hypothesis "all ravens are black."

Since this is silly, there is an apparent problem with induction.

## Resolution

Consider the following two possible states of the world. Suppose that these are your two hypotheses, and you observe a yellow banana (drawing from some fixed distribution over things).

Q: What does this tell you about one hypothesis versus another?

A: It tells you bananas-all about the number of black ravens.

One might contrast this with a hypothesis where there is one less banana, and one more yellow raven, by some sort of spontaneous generation. Observations of both black ravens and yellow bananas cause us to prefer 1 over 3, now!

The moral of the story is that the amount of evidence that an observation provides is not just about whether it is consistent with the "active" hypothesis - it is about the difference in likelihood between when the hypothesis is true versus when it's false.
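The post's comparison can be made concrete in code. The counts below are invented for illustration (the post's actual figures aren't reproduced here); each hypothetical world is a census, and an observation is a uniform draw from it.

```python
from fractions import Fraction

# Three censuses of a 200-object world (illustrative numbers):
world1 = {"black raven": 100, "yellow raven": 0, "yellow banana": 100}
world2 = {"black raven": 99,  "yellow raven": 1, "yellow banana": 100}
world3 = {"black raven": 100, "yellow raven": 1, "yellow banana": 99}

def likelihood(world, obs):
    """Probability of drawing `obs` uniformly from `world`."""
    return Fraction(world[obs], sum(world.values()))

# Worlds 1 and 2 give yellow bananas the same measure, so a banana is no
# evidence between them...
print(likelihood(world1, "yellow banana") / likelihood(world2, "yellow banana"))  # 1
# ...but against world 3, where a banana was replaced by a yellow raven,
# the same observation now favors world 1.
print(likelihood(world1, "yellow banana") / likelihood(world3, "yellow banana"))  # 100/99
```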
This is a pretty straightforward moral - it's a widely known pillar of statistical reasoning. But its absence in the raven paradox takes a bit of effort to see. This is because we're using an implicit model of the problem (driven by some combination of outside knowledge and framing effects) where nonblack ravens replace black ravens, but don't replace bananas. The logical statements H1 and H1' are not alone enough to tell how you should update upon seeing new evidence. Or to put it another way, the version of induction that drives the raven paradox is in fact wrong, but probability theory implies a bigger version. (Technical note: In the hypotheses above, the exact number of yellow bananas does not have to be the same for observing a yellow banana to provide no evidence - what has to be the same is the measure of yellow bananas in the probability distribution we're drawing from. Talking about "99 ravens" is more understandable, but what differentiates our hypotheses are really the likelihoods of observing different events [there's our moral again]. This becomes particularly important when extending the argument to infinite numbers of ravens - infinities or no infinities, when you make an observation you're still drawing from some distribution.) comment by SilentCal · 2014-08-06T19:13:24.880Z · score: 12 (11 votes) · LW(p) · GW(p) If we were sampling random non-black objects and none of them were ravens, that really would be evidence that all ravens are black. The reason it seems silly to take a yellow banana as evidence that all ravens are black is that 'sampling the space of nonblack things' is not an accurate description of what we're doing when we look at a banana. When we see a raven, we do implicitly think it's more or less randomly drawn from the (local) population of ravens. If you had grown up super-goth and only ever seen black things, you would have no idea what things have nonblack versions. 
If you went outside one day and saw a bunch of nonblack things and none of them were ravens, you might indeed start to suspect that all ravens were black; the more nonblack things you saw, the stronger this suspicion would get. comment by Manfred · 2014-08-07T04:07:27.587Z · score: 0 (0 votes) · LW(p) · GW(p) I agree. In the first example, it's because if our probability distribution only encompasses two categories, any increase in one is a decrease in the other. In the second example, it's because the ex-super-goth's hypothesis space includes all sorts of relationships between number of black things and number of nonblack things - their preconceptions about the world are different, rather than you just stipulating that they sample non-black things. comment by RichardKennaway · 2014-08-06T08:29:26.923Z · score: 10 (7 votes) · LW(p) · GW(p) Perhaps a Bayesian approach would be illuminating. There are four kinds of objects in the world: black ravens, nonblack ravens, black nonravens, and nonblack nonravens. Call these A, B, C, and D. Let the probability you assign to the next object that you encounter being in one of these classes be p, q, r, and s respectively. Rather than having two competing hypotheses about the blackness of ravens, there is a prior distribution of the parameters p, q, r, and s. (Note that the way I've set this up removes any concept of blackness common to black ravens and black nonravens. The astute -- more astute than me, for whom this is the last paragraph written -- may guess at once that P naq Q ner tbvat gb or rkpunatrnoyr va guvf sbezhyngvba, naq gurersber arvgure zber guna gur bgure pna or rivqrapr eryngvat gb gur inyhrf bs c naq d. I come back to this at the end.) In a state of total ignorance, a reasonable prior for the distribution of (p,q,r,s) is that they are uniformly distributed over the tetrahedron in four-dimensional space defined by these numbers being in the range 0 to 1 and their sum being 1. 
After observing numbers a, b, c, and d of the four categories, the posterior is (after a bit of mathematics) p^a q^b r^c s^d/K(a,b,c,d), where K(a,b,c,d) = a!b!c!d!/(N+3)!, where N = a+b+c+d. (The formula generalises to any number of categories, replacing 3 by the number of categories minus 1.) The expectation value of p is K(a+1,b,c,d)/K(a,b,c,d) = (a+1)/(N+4), and similarly for q, r, and s. (Check: these add up to 1, as they should.) How does the expectation value of p change when you observe that the N+1'th object you draw is an A, B, C, or D? If it's an A, the ratio of the new expectation value to the old is (a+2)(N+4)/(a+1)(N+5). For large N this is approximately 1 + 1/(a+1) - 1/(N+5) > 1. If it's a B (and the cases of C and D are the same) then the ratio is (N+4)/(N+5) = 1 - 1/(N+5) < 1. So observing an A increases your estimate of the proportion of the population that are A, and observing anything else decreases it, as one would expect. That was just another sanity check. Now consider the ratio q/p, the ratio of non-black to black ravens. The expectation of this, assuming a>0 (you have seen at least one black raven), is K(a-1,b+1,c,d)/K(a,b,c,d) = (b+1)/a. This increases to (b+2)/a when you observe a nonblack raven, and decreases to (b+1)/(a+1) when you observe a black one. (I would have calculated the expectation of q/(q+p), the expected proportion of ravens that are nonblack, but that is more complicated.) If you have seen a thousand black ravens and no nonblack ones, the increase is from 1/1000 to 2/1000, i.e. a doubling, but the decrease is from 1/1000 to 1/1001, a tiny amount. On the log-odds scale, the first is 1 bit, the second is about 0.0014 bits. On this analysis, observations of nonravens, whether black or not, have no effect on the expectation of the proportion of nonblack ravens. 
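The closed forms in this comment are easy to machine-check (my sketch, not part of the original comment):

```python
from math import factorial, log2
from fractions import Fraction

def K(a, b, c, d):
    """Normalizer K(a,b,c,d) = a! b! c! d! / (N+3)!, with N = a+b+c+d."""
    return Fraction(factorial(a) * factorial(b) * factorial(c) * factorial(d),
                    factorial(a + b + c + d + 3))

a, b, c, d = 5, 2, 3, 7
N = a + b + c + d
assert K(a + 1, b, c, d) / K(a, b, c, d) == Fraction(a + 1, N + 4)  # E[p]
assert K(a - 1, b + 1, c, d) / K(a, b, c, d) == Fraction(b + 1, a)  # E[q/p]

# The thousand-raven numbers: E[q/p] = (b+1)/a doubles on seeing a nonblack
# raven (1 bit on the log scale) but barely moves on another black one.
a, b = 1000, 0
print(log2(((b + 2) / a) / ((b + 1) / a)))        # 1.0 bit
print(log2(((b + 1) / a) / ((b + 1) / (a + 1))))  # ~0.0014 bits
```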
If we reformulate the original hypothesis that all ravens are black as "q/p < 0.000001", then observing the 1001th raven to be green will pretty much kill that hypothesis, until we see of the order of a million black ravens in a row without a nonblack one. But the nonraven objects will continue to be irrelevant: C and D are exchangeable in this formulation of the problem. Now reconsider the original paradox on its own terms. I will draw a connection with the grue paradox. Suppose we accept the paradoxical argument that "All ravens are black" and "all nonblack things are nonravens" are logically equivalent, and therefore everything that is evidence for one is evidence for the other. Let "X is bnonb" mean "X is a black raven or a nonblack nonraven." Consider the hypothesis that all ravens are bnonb, and its contrapositive, that all non-bnonb things are nonravens. In effect, we have exchanged C and D, but not A and B. Every argument that nonblack nonravens are evidence for all ravens being black is also an argument than nonbnonb nonravens are evidence for all ravens being bnonb. But substituting the definition of bnonb in the latter, it claims that black nonravens are evidence for the blackness of ravens. Hence both black and nonblack nonravens support the blackness of ravens. But there's more. Swapping black and nonblack in all of the above would imply that both black and nonblack nonravens are evidence for the nonblackness of ravens. At this point we appear to have proved that all nonravens are evidence for every hypothesis about ravens. I don't think the original paradox can be saved by arguing that yes, nonblack nonravens are evidence, just an utterly insignificant amount, as some do. comment by RichardKennaway · 2014-08-06T12:56:44.633Z · score: 5 (5 votes) · LW(p) · GW(p) A further elaboration then occurred to me. 
If non-ravens are, as the above argument claims, not evidential for the properties of ravens, then neither are non-European ravens evidential for the properties of European ravens, which does not seem plausible. This amount of confusion suggests that some essential idea is missing. I had thought causality or mechanism, but the Google search suggested by that turned up this paper: "Infinitely many resolutions of Hempel's paradox" by Kevin Korb, which takes a purely Bayesian approach, which I think has something in common (in section 4.1) with the arguments of the original post. His conclusion: We should well and truly forget about positive instance confirmation: it is an epiphenomenon of Bayesian confirmation. There is no qualitative theory of confirmation that can adequately approximate what likelihood ratios tell us about confirmation; nor can any qualitative theory lay claim to the success (real, if limited) of Bayesian confirmation theory in accounting for scientific methodology. ETA: Another paper with a Bayesian analysis of the subject. And then there is the Wason selection task, where you do have to examine both the raven and the non-black object to determine the truth of "all ravens are black". But with actual ravens and bananas, when you pick up a non-black object, you will already have seen whether it is a raven or not. Given that it is not a raven, examination of its colour tells you nothing more about ravens. comment by casebash · 2014-08-10T09:52:16.084Z · score: 0 (0 votes) · LW(p) · GW(p) "A further elaboration then occurred to me. If non-ravens are, as the above argument claims, not evidential for the properties of ravens, then neither are non-European ravens evidential for the properties of European ravens, which does not seem plausible." - Wait so you're saying that the argument you just made in the post above is incorrect? Or that the argument in main is incorrect? 
comment by RichardKennaway · 2014-08-10T15:12:43.680Z · score: 0 (0 votes) · LW(p) · GW(p) I am saying that I am confused. Hempel gave an argument for a conclusion that seems absurd. I first elaborated a Bayesian argument for arriving at the opposite of the absurd conclusion, and because the conclusion (non-black non-ravens say nothing about the blackness of ravens) seems at first sight reasonable, one might think the argument reasonable (which is not reasonable, because there is nothing to stop a bad argument giving a correct conclusion). Then I showed that combining Hempel's argument with the grue-like concept of bnonb yielded a Hempel-style argument for non-ravens of all colours being evidence for the blackness of ravens, and further extended it to show that all properties of non-ravens are evidence for all properties of ravens. Then I took my original argument and observed that it still works after replacing "raven" and "non-raven" by "European raven" and "non-European raven". At this point both arguments are producing absurd results. Hempel's has broadened to proving that everything is evidence for everything else, and mine to proving that nothing is evidence for anything else. I shall have to work through the arguments of Korb and Gilboa to see what they yield when applied to bnonb ravens. Meanwhile, the unanswered question is, when can an observation of one object tell you something about another object not yet observed? comment by RichardKennaway · 2014-08-10T16:24:20.220Z · score: 3 (3 votes) · LW(p) · GW(p) Having now properly read Korb's paper, the basic problem he points out is that to do a Bayesian update regarding a hypothesis h in the presence of new evidence e, one must calculate the likelihood ratio P(e|h)/P(e|not-h). Not-h consists of the whole of the hypothesis space excluding h. What that hypothesis space is affects the likelihood ratio. 
The ratio can be made equal to anything at all, for some suitable choice of the hypothesis space, by constructions similar to those of the OP. It makes the same negative conclusion when applied to bnonb ravens, or to European and non-European ravens. Although this settles Hempel's paradox, it leaves unanswered a more fundamental question: how should you update in the face of new evidence? The Bayesian answer is on the face of it simple mathematics: P(e|h)/P(e|not-h). But where does the hypothesis space that defines not-h come from? In "small world" examples of Bayesian reasoning, the hypothesis space is a parameterised family of distributions, and the prior is a probability distribution on the parameter space. New evidence will shift that distribution. If the truth is a member of that family, evidence is likely to converge on the correct parameters. I have never seen a convincing account of how to do "large world" Bayesian reasoning, where the hypothesis space is "all theories whatsoever, even yet-unimagined ones, describing this aspect of the world". Solomonoff induction is the least unconvincing, by virtue only of being precisely defined and having various theorems provable about it, but one of those theorems is that it is uncomputable. Until I see someone make some sort of Solomonoff-based method work to the extent of becoming a standard part of the statistician's toolkit, I shall continue to be sceptical of whether it has any practical numerical use. How should you navigate in a large-world hypothesis space, when you notice that P(e|h) is so absurdly low that the truth, whatever it is, must be elsewhere? Given the existence of polar bears, arctic foxes, and snow leopards, I wondered if there might be any white-feathered ravens in the colder parts of the world. A Google search indicates that while ravens are found there, they are just as black as their temperate relatives. I guess you don't need camouflage to sneak up on corpses. 
Now that looks like good evidence for all ravens being black: looking in places where it is plausible that there could be white ravens, and finding ravens, but only black ones. The not-h hypothesis space has room for large numbers of white ravens in a certain type of remote place. That part of the space came from observing polar bears and the like, and imagining a similar mechanism, whatever it might be, in ravens. Finding that even there, all observed ravens are black, removes probability mass from that part of the space. comment by Manfred · 2014-08-07T04:29:29.054Z · score: 0 (0 votes) · LW(p) · GW(p) An excellent quote! If Stefan had found that one I should have been honor-bound to add it to the post :P comment by HalMorris · 2014-08-07T04:28:55.736Z · score: 8 (4 votes) · LW(p) · GW(p) The "Raven paradox" was used as a starting point to the famous article "Natural Kinds" by W.V.O. Quine; it is one of the two articles by Quine that set the anthology Naturalizing Epistemology in motion, as mentioned in my article immediately previous to this one at http://lesswrong.com/r/discussion/lw/kp1/from_natural_or_naturalized_to_social_epistemology/ It seems to have motivated Quine's perhaps throwing up his hands on formal methods of epistemology, and suggesting we "settle for psychology" (not sure if he used that phrase -- if not, it's a commonly used characterization of his position). At least part of the trouble seems to be that he proposes non-black non-ravens isn't a natural kind. Non-ravens would seem to be all "things" that aren't ravens, but consider what an incoherent concept that is. Do "things" include every atom in the universe? For quite a lot of "things" (atoms included, I think) the quality of blackness makes no sense. So maybe there are around 100,000,000 ravens in the world, and as I examine Ravens and find N black ones and no non-black ones, I can say N down, 100,000,000-N to go, and that might seem like progress. 
Whereas when I pick one atom (does it have a color?), one H2O molecule, one green leaf, and one blue eye of newt, I have no meaningful concept of how many more "non-ravens" there are to sample. Now if, very hypothetically, ravens belonged to a genus with just one other species, also having 100,000,000 members, and the whole universe of ravenoids was frozen in time instead of multiplying and dying as we tried to sample them, we might say upon selecting one non-black non-raven, "That's one bit of evidence that doesn't contradict my hypothesis, and when I've sampled the whole 200,000,000 in the ravenoid universe with no contradiction of the hypothesis and a number of ravens, all black, I can say the hypothesis is true." A black non-raven also doesn't contradict the hypothesis and is also "one more down" and goes towards the ultimately complete sampling of the 200,000,000 entities, during which we hope that every raven we find will be black. I.e. our intuition, if we have one, that {{the equivalent logical proposition "all non-black things are non-ravens" really should have an analogous method for gathering evidence}} might be less ridiculous if only "non-black non-ravens" actually meant something coherent. For what it's worth there is also a 48-page 2010 article "How Bayesian Confirmation Theory Handles the Paradox of the Ravens" by Branden Fitelson and James Hawthorne (fitelson.org/ravens.pdf -- actually it's only 29 pages in this PDF due to different layout I suppose). I've been meaning to read it, but think I'll have to work my way up to it.

comment by [deleted] · 2014-08-07T13:36:11.506Z · score: 6 (2 votes) · LW(p) · GW(p)

H1: All ravens are black. H1': If it is not black, it is not a raven. Inductively, just like with H1, one would expect that seeing many non-black non-ravens is evidence for this hypothesis. As you see more and more examples, you may even find it more and more likely. Thus a yellow banana is evidence for the hypothesis "all ravens are black."
Since this is silly, there is an apparent problem with induction.

Question: H1 and H1' appear to be logically equivalent to:

H1'': There do not exist any things which are both not black and a raven.

And this seems to have different implications in a finite universe and an infinite universe. For instance, in a finite universe of 10,000 things, if you've found 99 yellow bananas and 1 black raven, there are 9,900 things which could potentially disprove H1''. If you then observe an additional 100 yellow bananas, there are now only 9,800 things that could potentially disprove H1'', so it would make sense that H1'' becomes a small amount more likely, since if all of the remaining untested things were yellow bananas, and you tested them all, at the point at which you tested the last thing you would be much more confident about H1'', and presumably that confidence grows as you get closer to testing the last thing as opposed to coming all at once at only the last thing.

But: In an infinite universe of infinite things, if you've found 99 yellow bananas and 1 black raven, there are infinite things which could potentially disprove H1''. If you then observe an additional 100 yellow bananas, there are still an infinite number of things that could potentially disprove H1'', so H1'' would not necessarily become a small amount more likely because of the argument I just gave, since there is no 'last thing' to test. When I looked at http://en.wikipedia.org/wiki/Raven_paradox , I'm not sure if anything I just said is any different from the Carnap approach, except that the Carnap approach described in the article does not appear to mention infinities, so I'm not sure if I'm making an error or not.
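The finite-universe intuition in this comment can be turned into a toy calculation (all numbers invented): give prior 1/2 to H, "zero nonblack ravens exist", against a rival H', "exactly one nonblack raven exists, equally likely to be any of the T objects". Checking n objects without finding it has likelihood 1 under H and (T-n)/T under H', so every harmless observation — yellow bananas included — nudges H up, and the nudge vanishes as T goes to infinity, matching both halves of the comment.

```python
from fractions import Fraction

T = 10_000   # size of the toy finite universe

def posterior_H(n, prior=Fraction(1, 2)):
    """P(zero nonblack ravens | n objects checked, none was one)."""
    like_H = Fraction(1)           # H predicts the clean streak for sure
    like_Hp = Fraction(T - n, T)   # H': the one counterexample evaded all n draws
    return prior * like_H / (prior * like_H + (1 - prior) * like_Hp)

print(posterior_H(100))    # slightly above 1/2 after 100 harmless draws
print(posterior_H(200))    # a bit higher after 100 more
print(posterior_H(9_999))  # nearly 1 once almost everything is checked
```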
comment by TsviBT · 2014-08-06T06:00:26.072Z · score: 3 (2 votes) · LW(p) · GW(p)

I think it's just wrong that "H1': If it is not black, it is not a raven" predicts that you will observe non-black non-raven objects, under the assumption/prior that the color distributions within each type of object (chairs, ravens, bananas, etc.) are independent of each other. The intuition comes from implicitly visualizing the observation of an unknown non-black object O; then, indeed, H1 predicts that O will turn out to not be a raven. The point is, even observing that O is non-black would decrease your credence in H1; and then increase it again when you saw that O was not a raven. Since H1 is only about ravens, by the independence assumption, H1 says nothing about non-ravens and whether you will see non-black ones. (I.e., its likelihood ratio for "observe a non-black non-raven object" is 1.)

comment by Manfred · 2014-08-06T07:07:17.191Z · score: 0 (0 votes) · LW(p) · GW(p)

This model of independence between shapes is what I'm calling the implicit model that people use to say that the conclusion of the raven paradox is absurd.

comment by TsviBT · 2014-08-06T07:31:31.290Z · score: 0 (0 votes) · LW(p) · GW(p)

Right, I should have written, "I agree. Also, ...". I just wanted to find the source of the intuition that seeing non-black non-ravens is evidence for "non-black -> non-raven".

comment by ArisKatsaris · 2014-08-06T20:41:06.253Z · score: 2 (2 votes) · LW(p) · GW(p)

It took me a bit to understand what you were saying. I think I'd have gotten it more clearly with some mathematical notation: H1: The hypothesis that there exists at least one non-black raven. H2: The hypothesis that there exist zero non-black ravens. YB: Observing a yellow banana when randomly picking an object to observe.
It's:

$$\frac{P(H1 \mid YB)}{P(H2 \mid YB)} = \frac{P(YB \mid H1)}{P(YB \mid H2)} \cdot \frac{P(H1)}{P(H2)}$$

So if we assume that our priors for the hypotheses H1 and H2 are the same, and if we also assume the additional constraint that P(YB|H1) = P(YB|H2) (both hypotheses refer to possible worlds with the same number of yellow bananas), then P(H1|YB) = P(H2|YB), meaning the observation doesn't provide more evidence for one hypothesis than the other. However, given possible worlds where e.g. the number of black ravens remains fixed but the number of yellow bananas is reduced, the argument that observing a yellow banana increases the possibility of the existence of black ravens becomes true.

comment by Manfred · 2014-08-07T04:09:09.230Z · score: 0 (0 votes) · LW(p) · GW(p)

Well, it's not really the number of yellow bananas that matters. It's their measure in the probability distribution we're drawing from. In fact, I was unclear about that in the post, let me go add a note.

comment by DanielLC · 2014-08-06T18:27:24.149Z · score: 2 (3 votes) · LW(p) · GW(p)

Ravens is a tighter cluster in thing-space than non-ravens, so we'd expect a tighter correlation of color. Thus, it takes a lot more non-black non-ravens to convince me that all non-blacks are non-ravens than it does ravens to convince me that all ravens are black.

comment by drnickbone · 2014-08-12T22:37:09.821Z · score: 1 (2 votes) · LW(p) · GW(p)

One very simple resolution: observing a white shoe (or yellow banana, or indeed anything which is not a raven) very slightly increases the probability of the hypothesis "There are no ravens left to observe: you've seen all of them". Under the assumption that all observed ravens were black, this "seen-em-all" hypothesis then clearly implies "All ravens are black". So non-ravens are very mild evidence for the universal blackness of ravens, and there is no paradox after all. I find this resolution quite intuitive.
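ArisKatsaris's point about fixed versus reduced banana counts can be checked with a few lines of arithmetic (an illustrative sketch of my own; it assumes one object is drawn uniformly at random from a finite universe, with each hypothesis fixing its world's count of yellow bananas):

```python
from fractions import Fraction

def likelihood_ratio_yb(n_objects, bananas_if_h2, bananas_if_h1):
    """Likelihood ratio P(YB | H2) / P(YB | H1) for drawing one object
    uniformly at random and seeing a yellow banana (YB), when H2 and H1
    correspond to worlds with the given yellow-banana counts."""
    p_yb_h2 = Fraction(bananas_if_h2, n_objects)
    p_yb_h1 = Fraction(bananas_if_h1, n_objects)
    return p_yb_h2 / p_yb_h1
```

With equal counts the ratio is exactly 1 (the observation is no evidence either way); with 100 bananas under H2 but 99 under H1, it is 100/99, a very slight nudge toward H2, matching Manfred's note that what really matters is the bananas' measure in the distribution being drawn from.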
comment by Stefan_Schubert · 2014-08-06T14:58:54.730Z · score: 0 (0 votes) · LW(p) · GW(p)

I'd prefer if you referred to some of the vast existing literature on this topic when you post on it.

comment by Manfred · 2014-08-06T16:45:59.491Z · score: 1 (1 votes) · LW(p) · GW(p)

Tell you what, if you find an example in the literature of someone clarifying the situation using the concept of probabilistic evidence, I'll add it to the post. Not that I doubt such a thing exists, finding it just sounds like no fun.

comment by Stefan_Schubert · 2014-08-06T17:07:42.977Z · score: 1 (1 votes) · LW(p) · GW(p)

Here is the Wikipedia article on the raven paradox. It makes it clear how big the literature on the topic is. In my view, you should situate your proposal relative to at least some of those proposals when writing a post like this. It is hard to evaluate the value of your proposal (and whether it is at all worth reading) when you haven't done that.

comment by Manfred · 2014-08-06T19:17:20.446Z · score: 4 (2 votes) · LW(p) · GW(p)

If reading through the thoughts of people who don't know how to apply likelihood ratios is no fun to me, I don't want to inflict it on my readers either. Aw, jeez, the wikipedia list is worse than I thought. The Stanford Encyclopedia of Philosophy made the mainstream look more reasonable, if still bad at using probability. Will you accept me situating my proposal as "the one that shows how basic probability theory implies that induction has more degrees of freedom than one might first think"?

comment by Stefan_Schubert · 2014-08-06T19:41:13.504Z · score: 0 (0 votes) · LW(p) · GW(p)

I'm sorry but no, that is not enough. I want clear and reasonably detailed reasons for why the mainstream is wrong in posts like this. Some very smart people have worked on this problem and you need to at least comment on their views in order to be taken seriously by me.
comment by Manfred · 2014-08-07T03:48:08.749Z · score: 0 (0 votes) · LW(p) · GW(p) Thank you for helping me understand where you're coming from, but since this is a simple application of probabilistic reasoning I think it stands fine on its own merits.
https://innx.link/c81z7ty8/0ad5f3-equivalence-relation-checker
PREVIEW ACTIVITY $$\PageIndex{1}$$: Sets Associated with a Relation. The quotient remainder theorem. Equivalence Relations. Let A = {1, 2, 3}. Solution: (a) S is reflexive, i.e. aRa. Prove that the relation "friendship" is not an equivalence relation on the set of all people in Chennai. Theorem 2. … Here are three familiar properties of equality of real numbers: 1. Modulo Challenge. Consequently, two elements related by an equivalence relation are said to be equivalent. An equivalence relation on a set S is a relation on S which is reflexive, symmetric and transitive. Hyperbolic functions: the abbreviations arcsinh, arccosh, etc., are commonly used for inverse hyperbolic trigonometric functions (area hyperbolic functions), even though they are misnomers, since the prefix arc- is the abbreviation for arcus, while the prefix ar- stands for area. It was a homework problem. Equivalence relation (check) [closed]. If the axiom does not hold, give a specific counterexample. Equivalence Relations. Also determine whether R is an equivalence relation. Also, we know that for every disjoint partition of a set we have a corresponding equivalence relation. An equivalence relation is a relation which "looks like" ordinary equality of numbers, but which may hold between other kinds of objects. Logical Equivalence Check flow diagram. Relation R is symmetric, i.e., aRb ⇒ bRa; relation R is transitive, i.e., aRb and bRc ⇒ aRc. Many scholars reject its existence in translation. A relation is defined on ℝ by x ∼ y means (x + y)² = x² + y².
In this example, we display how to prove that a given relation is an equivalence relation. Here we prove the relation is reflexive, symmetric and … Equivalence Classes form a partition (idea of Theorem 6.3.3): the overall idea in this section is that given an equivalence relation on set $$A$$, the collection of equivalence classes forms a … To know the three relations reflexive, symmetric and transitive in detail, please click on the following links. This is true. This is an equivalence relation, provided we restrict to a set of sets (we cannot …). That is why one equivalence class is $\{1,4\}$ - because $1$ is equivalent to $4$. 5. Example – Show that the relation is an equivalence relation. The relation is symmetric but not transitive. Circuit Equivalence Checking: checking the equivalence of a pair of circuits means that for all possible input vectors (2^(number of input bits)), the outputs of the two circuits must be equivalent. Testing all possible input-output pairs is coNP-hard. However, the equivalence check of circuits with "similar" structure is easy [1], so we must be able to identify shared … Then the equivalence classes of R form a partition of A. Conversely, given a partition {A_i | i ∈ I} of the set A, there is an equivalence relation … This is false. A relation R is an equivalence iff R is transitive, symmetric and reflexive. Examples: Let S = ℤ and define R = {(x, y) | x and y have the same parity}, i.e., x and y are either both even or both odd. Check transitive: to check whether R is transitive or not, if (a, b) ∈ R and (b, c) ∈ R, then (a, c) ∈ R must hold. If a = 1, b = 2, there is no c (no third element); similarly, if a = 2, b = 1, there is no c (no third element). Hence R is not transitive. Hence, relation R is symmetric but not reflexive and transitive. Ex 1.1,10: Give an example of a relation. Equivalence classes (mean) that one should only present the elements that don't result in a similar result. I believe you are mixing up two slightly different questions.
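The claim that equivalence classes are pairwise disjoint or identical, and that together they partition the set, is easy to verify mechanically on a finite example (an illustrative sketch; the function name and the parity example are my own):

```python
def equivalence_classes(domain, relation):
    """Equivalence classes of `relation` (a set of ordered pairs) on
    `domain`; for an equivalence relation these classes are pairwise
    disjoint and their union is the whole domain."""
    return {frozenset(b for b in domain if (a, b) in relation) for a in domain}

# Same-parity relation on {0, 1, 2, 3}: two classes, {0, 2} and {1, 3}.
dom = {0, 1, 2, 3}
parity = {(a, b) for a in dom for b in dom if a % 2 == b % 2}
classes = equivalence_classes(dom, parity)
```

Collecting classes into a set automatically merges identical ones, so the result is exactly the partition induced by the relation.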
However, the notion of equivalence or equivalent effect is not tolerated by all theorists. We tested a preliminary superoptimizer supporting loops with our equivalence checker. Equivalence relation definition: a relation that is reflexive, symmetric, and transitive; it imposes a partition on its domain. Steps for Logical Equivalence Checks. An example of an equivalence relation which will be … So it is reflexive. We can define when two sets A and B have the same number of elements by saying that there is a bijection from A to B. Practice: Modulo operator. Example 5.1.1: Equality ($=$) is an equivalence relation. It is of course enormously important, but is not a very interesting example, since no two distinct objects are related by equality. (b) aRb ⇒ bRa, so it is symmetric. (c) aRb and bRc do not imply aRc, so it is not transitive ⇒ it is not an equivalence relation. … is the congruence modulo function. Show that the relation R defined on the set A of all polygons as R = {(P1, P2) : P1 and P2 have the same number of sides} is an equivalence relation. What is the set of all elements in A related to the right angle triangle T with sides 3, 4 and 5? For understanding equivalence of Functional Dependency sets (FD sets), the basic idea about Attribute Closures is given in this article: given a relation with different FD sets, we have to find out whether one FD set is a subset of the other or whether both are equal. (Broek, 1978) An equivalence relation is a relation that is reflexive, symmetric, and transitive. A relation R is non-reflexive iff it is neither reflexive nor irreflexive. Justify your answer. We compute equivalence for C programs at function granularity.
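The attribute-closure test for comparing FD sets mentioned above can be sketched in a few lines (illustrative code; the function names and the encoding of a functional dependency as a (lhs, rhs) pair of frozensets are my own choices, not from any particular article):

```python
def attribute_closure(attrs, fds):
    """Closure of `attrs` under functional dependencies `fds`, each FD
    encoded as a (lhs, rhs) pair of frozensets of attribute names."""
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the whole left side is already derivable, add the right side.
            if lhs <= closure and not rhs <= closure:
                closure |= rhs
                changed = True
    return closure

def covers(fds_a, fds_b):
    """True if every dependency in fds_b already follows from fds_a."""
    return all(rhs <= attribute_closure(lhs, fds_a) for lhs, rhs in fds_b)

def fd_sets_equal(fds_a, fds_b):
    # Two FD sets are equivalent iff each covers the other.
    return covers(fds_a, fds_b) and covers(fds_b, fds_a)

F = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
G = [(frozenset("A"), frozenset("BC"))]  # A -> BC
```

Here F covers G (A's closure under F is {A, B, C}), but G does not cover F (B → C cannot be derived from G), so the two FD sets are not equivalent.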
Check the relation for being an equivalence relation. Examples. Equivalence Relations: let R be a relation on a set A. Check whether the relation R in the set ℕ of natural numbers given by R = {(a, b) : a is a divisor of b} is reflexive, symmetric or transitive. Example. The parity relation is an equivalence relation. That is, any two equivalence classes of an equivalence relation are either mutually disjoint or identical. Proof. Here the equivalence relation is called row equivalence by most authors; we call it left equivalence. Then the number of equivalence relations containing (1, 2) is … Problem 3. If X is the set of all cars, and ~ is the equivalence relation "has the same color as", then one particular equivalence class would consist of all green cars, and X/~ could be naturally identified with the set of all car colors. Practice: Congruence relation. If the three relations reflexive, symmetric and transitive hold in R, then R is an equivalence relation. The intersection of two equivalence relations on a nonempty set A is an equivalence relation. Check each axiom for an equivalence relation; if an axiom holds, prove it, and if it does not hold, give a specific counterexample. Each individual equivalence class consists of elements which are all equivalent to each other. Check that this defines an equivalence relation on the set of directed line segments. If ∼ is an equivalence relation on a set X, we often say that elements x, y ∈ X are equivalent if x ∼ y. To verify equivalence, we have to check whether the three relations reflexive, symmetric and transitive hold. Congruence modulo. Cadence® Conformal® Equivalence Checker (EC) makes it possible to verify and debug multi-million–gate designs without using test vectors.
As was indicated in Section 7.2, an equivalence relation on a set $$A$$ is a relation with a certain combination of properties (reflexive, symmetric, and transitive) that allow us to sort the elements of the set into certain classes. Let R be a relation defined on the set ℤ by aRb if a ≠ b; then R is symmetric and transitive. Equivalence. Modular arithmetic. What is modular arithmetic? The equivalence classes of this relation are the orbits of a group action. Problem 2. Person x is related to person y under relation M if x and y have the same favorite color. A person can be a friend to himself or herself. Equivalence relations. We have already seen that $$=$$ and $$\equiv(\text{mod }k)$$ are equivalence relations. We are considering the Conformal tool as a reference for the purpose of explaining the importance of LEC. For example, loves is a non-reflexive relation: there is no logical reason to infer that somebody loves herself or does not love herself. Let R be an equivalence relation on a set A. So then you can explain: equivalence relations are designed to axiomatise what's needed for these kinds of arguments; there are lots of places in maths where you have a notion of "congruent" or "similar" that isn't quite equality but that you sometimes want to use like an equality, and "equivalence relations" tell you what kind of relations you can use in that kind of way. The relation is not transitive, and therefore it's not an equivalence relation. If two elements are related by some equivalence relation, we will say that they are equivalent (under that relation). Testing an equivalence relation on a dictionary in Python. We know that an equivalence relation partitions a set into disjoint sets. Simulation relation as the basis of equivalence: two programs are equivalent if, for all equal inputs, the two programs have identical observables. We compute equivalence for C programs at function granularity. Justify your answer.
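Putting the three axioms together, a brute-force equivalence-relation checker for a finite relation takes only a few lines (an illustrative sketch of my own, not any of the commercial tools named above):

```python
def is_equivalence(domain, relation):
    """Brute-force check that `relation` (a set of ordered pairs over
    `domain`) is reflexive, symmetric and transitive."""
    reflexive = all((a, a) in relation for a in domain)
    symmetric = all((b, a) in relation for (a, b) in relation)
    transitive = all((a, d) in relation
                     for (a, b) in relation
                     for (c, d) in relation if b == c)
    return reflexive and symmetric and transitive

dom = {1, 2, 3, 4}
# Same parity: an equivalence relation on dom.
same_parity = {(a, b) for a in dom for b in dom if a % 2 == b % 2}
# Divisibility: reflexive and transitive, but not symmetric.
divides = {(a, b) for a in dom for b in dom if b % a == 0}
```

The divisibility relation fails exactly the symmetry axiom (1 divides 2, but 2 does not divide 1), illustrating why "|" on ℤ is not an equivalence relation even though it is reflexive and transitive.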
There is an equivalence relation which respects the essential properties of some class of problems. GitHub is where people build software. It offers the industry's only complete equivalence checking solution for verifying SoC designs, from RTL to final LVS netlist (SPICE).
https://calendar.math.illinois.edu/?year=2018&month=09&day=05&interval=day
Department of

# Mathematics

Seminar Calendar for events the day of Wednesday, September 5, 2018. Questions regarding events or the calendar should be directed to Tori Corkery.

#### Wednesday, September 5, 2018

4:00 pm in 2 Illini Hall, Wednesday, September 5, 2018

#### (Crystalline) differential operators in positive characteristic

###### Shiyu Shen (UIUC Math)

Abstract: I will talk about several features of (Crystalline) differential operators in characteristic $p$, including the Azumaya property and two theorems by Cartier.

4:00 pm in 245 Altgeld Hall, Wednesday, September 5, 2018

#### 65 Years of Orthogonal Projections

###### Fernando Yahdiel Roman-Garcia (UIUC Math)

Abstract: In 1954 John Marstrand published a paper where he related the Hausdorff dimension of a planar set to the Hausdorff dimension of its orthogonal projections. This paper motivated so much work in subsequent years that the topic of "projection theorems" developed into its own subfield of metric geometry. Over 60 years later this continues to be a thriving area of research with many modern adaptations of the original problem solved by Marstrand. In this talk I will give an overview of many different results obtained over the years, including some of my ongoing work on intersections of horizontal projections on the Heisenberg group.
https://www.encyclopediaofmath.org/index.php?title=B-convergence&oldid=17814
# B-convergence

The theory of B-convergence predicts the convergence properties of discretization methods, implicit Runge–Kutta methods in particular, applied to initial value problems for systems of non-linear ordinary differential equations (cf. Runge–Kutta method; Initial conditions; Differential equation, ordinary).

## Problem class and historical background.

Concerning the class of ordinary differential equations considered, the underlying assumption is that the right-hand side $f$ satisfies a one-sided Lipschitz condition. If $m$ (the so-called one-sided Lipschitz constant of $f$) is of moderate size, the initial value problem is easily proved to be well-conditioned throughout. Of particular interest are stiff problems (cf. also Stiff differential system). Stiffness means that the underlying ordinary differential equation admits smooth solutions, with moderate derivatives, together with non-smooth ( "transient" ) solutions rapidly converging towards smooth ones. In the stiff case, the conventional Lipschitz constant $L$ of $f$ inevitably becomes large; therefore the classical error bounds for discretization methods (which depend on $L$) are of no use for an appropriate characterization and analysis of methods which are able to efficiently integrate a stiff problem. Thus there is a need for a special convergence theory applicable in the presence of stiffness. The idea to use $m$ (instead of $L$) as the problem-characterizing parameter goes back to [a7], where it was used in the analysis of multi-step methods. The point is that stiffness is often compatible with moderate values of $m$, while $L$ is huge. In the same spirit, the concept of B-stability was introduced in [a4], [a5], [a6] in the context of implicit Runge–Kutta methods, and an algebraic criterion on the implicit Runge–Kutta coefficients entailing B-stability was derived ( "algebraic stability" ).
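For concreteness, the one-sided Lipschitz condition can be written as follows (a standard formulation; the symbol $m$ for the one-sided Lipschitz constant and the inner-product form are conventions assumed here):

```latex
% One-sided Lipschitz condition on the right-hand side f,
% with one-sided Lipschitz constant m (notation assumed):
\langle f(t,y) - f(t,z),\; y - z \rangle \le m \, \| y - z \|^{2}
\qquad \text{for all } t \text{ and all } y,\, z .
```

Unlike the classical Lipschitz constant, $m$ may stay moderate (or even be negative) for strongly damped stiff systems, which is what makes it a useful problem-characterizing parameter.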
The notion of B-stability enables realistic estimates of the propagation of inevitable perturbations like local discretization errors.

## The concept of B-convergence.

In the convergence theory of implicit Runge–Kutta methods applied to stiff problems, stability is essential, but the analysis of local errors is also non-trivial: straightforward estimates are affected by $L$ and do not reflect reality. For a simple scalar model class, the local error of implicit Runge–Kutta schemes was studied in [a13]. It turned out that the order observed for practically relevant stepsizes is usually reduced compared to smooth, non-stiff situations. In [a9], [a10], [a11], the convergence properties of implicit Runge–Kutta schemes are studied and the notion of B-convergence is introduced. A B-convergence result is nothing but a realistic global error estimate based on the parameter $m$ but unaffected by $L$. Besides relying on B-stability, the essential point is sharp local error estimates, which require a special internal stability property called BS-stability. The latter can be concluded from a certain algebraic condition on the Runge–Kutta coefficients ( "diagonal stability" ). Explicit error bounds have been derived for Gauss, Radau-IA and Radau-IIA schemes; the corresponding "B-convergence order" is in accordance with the observations from [a13]. B-convergence results for Lobatto-IIIC schemes are given in [a14]. An overview of the "B-theory" of implicit Runge–Kutta methods is presented in [a8]. Another relevant text is [a12].

## Further developments.

Concerning the relevance of the B-theory for stiff problems, there remains a gap. The point is that for most stiff problems there is a strong discrepancy between the local and the global condition: neighbouring solutions may locally strongly diverge, such that the problem is locally ill-conditioned. This is a transient effect, leaving the good global condition unaffected.
But it inevitably implies that the one-sided Lipschitz constant $m$ is strongly positive (of the same order of magnitude as $L$). For details, cf. [a1], where it is shown that $m$ remains moderate only for a restricted class of stiff problems, namely those with Jacobians that are "almost normal" . In general, however, $m$ is large and positive, and the B-convergence bounds based on $m$ become unrealistically large. As a consequence, not even linear stiff problems are satisfactorily covered. In [a2] the B-theory is extended to semi-linear stiff problems of the form $y' = T(t)y + g(t,y)$, where $T(t)$ has a smoothly varying eigensystem and $g$ is smooth. However, this does not cover a sufficiently large class of non-linear problems. Current work concentrates on a more natural, geometric characterization of stiffness; cf., e.g., [a3]. #### References [a1] W. Auzinger, R. Frank, G. Kirlinger, "A note on convergence concepts for stiff problems" Computing , 44 (1990) pp. 197–208 [a2] W. Auzinger, R. Frank, G. Kirlinger, "An extension of B-convergence for Runge–Kutta methods" Appl. Numer. Math. , 9 (1992) pp. 91–109 [a3] W. Auzinger, R. Frank, G. Kirlinger, "Extending convergence theory for nonlinear stiff problems, Part I" BIT , 36 (1996) pp. 635–652 [a4] K. Burrage, J.C. Butcher, "Stability criteria for implicit Runge–Kutta methods" SIAM J. Numer. Anal. , 16 (1979) pp. 46–57 [a5] J.C. Butcher, "A stability property of implicit Runge–Kutta methods" BIT , 15 (1975) pp. 358–361 [a6] M. Crouzeix, "Sur la B-stabilité des méthodes de Runge–Kutta" Numer. Math. , 32 (1979) pp. 75–82 [a7] G. Dahlquist, "Error analysis for a class of methods for stiff nonlinear initial value problems" , Numerical Analysis , Lecture Notes in Mathematics , 506 (1976) pp. 60–72 [a8] K. Dekker, J.G. Verwer, "Stability of Runge–Kutta methods for stiff nonlinear differential equations" , North-Holland (1984) [a9] R. Frank, J. Schneid, C.W. Ueberhuber, "The concept of B-convergence" SIAM J. Numer. Anal. , 18 (1981) pp. 753–780 [a10] R. Frank, J. Schneid, C.W.
Ueberhuber, "Stability properties of implicit Runge–Kutta methods" SIAM J. Numer. Anal. , 22 (1985) pp. 497–515 [a11] R. Frank, J. Schneid, C.W. Ueberhuber, "Order results for implicit Runge–Kutta methods applied to stiff systems" SIAM J. Numer. Anal. , 22 (1985) pp. 515–534 [a12] E. Hairer, G. Wanner, "Solving ordinary differential equations" , II: stiff and differential-algebraic problems , Springer (1991) [a13] A. Prothero, A. Robinson, "On the stability and accuracy of one-step methods for solving stiff systems of ordinary differential equations" Math. Comp. , 28 (1974) pp. 145–162 [a14] J. Schneid, "B-convergence of Lobatto IIIC formulas" Numer. Math. , 51 (1987) pp. 229–235 How to Cite This Entry: B-convergence. W. Auzinger, R. Frank (originators), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=B-convergence&oldid=17814 This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
https://www.clawpack.org/v5.8.x/setplot.html
# Using setplot.py to specify the desired plots The desired plots are specified by creating an object of class ClawPlotData that contains specifications of what figures should be created, and within each figure what sets of axes should be drawn, and within each axes what items should be plotted (lines, contour plots, etc.). ## Plotting Data Objects More details about each class of objects can be found on these pages: For examples, see Plotting examples. ## Overview The approach outlined below may seem more complicated than necessary, and it would be if all you ever want to do is plot one set of data at each output time. However, when adaptive mesh refinement is used each frame of data may contain several patches and so creating the desired plot requires looping over all patches. This is done by the plotting utilities described in Plotting with Visclaw, but for this to work it is necessary to specify what plot(s) are desired. Most example directories contain a file setplot.py that contains a function setplot(plotdata). This function sets various attributes of plotdata in order to specify how plotting is to be done. The object plotdata is of class ClawPlotData. The way to set up the plot structure is to follow this outline: • Specify some attributes of plotdata that determine what sort of plots will be produced and where they will be stored, e.g.: plotdata.plotdir = '_plots' will cause hardcopy to go to subdirectory _plots of the current directory. (Attributes like plotdir that are only used for hardcopy are often set in the script plotclaw.py rather than in setplot. See Specifying what and how to plot.) There are many other ClawPlotData attributes and methods.
• Specify one or more Figures to be created for each frame, e.g.: plotfigure = plotdata.new_plotfigure(name='Solution', figno=1) plotfigure is now an object of class ClawPlotFigure and various attributes can be set, e.g.: plotfigure.kwargs = {'figsize':[8,12], 'facecolor':'#ff9999'} to specify any keyword arguments that should be used when creating this figure in matplotlib. The above would create a figure that is 8 inches by 12 inches with a pink background. For more options, see the matplotlib documentation for the figure command. There are many other plotfigure attributes and methods. • Specify one or more Axes to be created within each figure, e.g.: plotaxes = plotfigure.new_plotaxes() Note that new_plotaxes is a method of class ClawPlotFigure and creates a set of axes specific to the particular object plotfigure. plotaxes is now an object of class ClawPlotAxes and various attributes can be set, e.g.: plotaxes.ylimits = [-1, 1] There are many other ClawPlotAxes attributes and methods. • Specify one or more Items to be created within these axes, e.g.: plotitem = plotaxes.new_plotitem(plot_type='1d_plot') Note that new_plotitem is a method of class ClawPlotAxes and creates a plot object (e.g. line, contour plot, etc) specific to the particular object plotaxes. plotitem is now an object of class ClawPlotItem and various attributes can be set, e.g.: plotitem.plotstyle = '-' plotitem.color = 'r' for a solid line that is red. There are many other ClawPlotItem attributes and methods.
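Putting the four steps together, a minimal complete setplot.py might look as follows. This is a sketch following the outline above; the figure name, `plot_var` value, and axis limits are illustrative choices, while the class attributes used are the standard ClawPlotData / ClawPlotFigure / ClawPlotAxes / ClawPlotItem ones shown in this section:

```python
def setplot(plotdata):
    """Specify plots for each output frame (sketch of the outline above)."""
    plotdata.clearfigures()          # start from a clean slate
    plotdata.plotdir = '_plots'      # hardcopy goes to ./_plots

    # One figure per frame ...
    plotfigure = plotdata.new_plotfigure(name='Solution', figno=1)
    plotfigure.kwargs = {'figsize': [8, 12]}

    # ... containing one set of axes ...
    plotaxes = plotfigure.new_plotaxes()
    plotaxes.ylimits = [-1, 1]
    plotaxes.title = 'q[0]'

    # ... with a single red line plotting component 0 of the solution.
    plotitem = plotaxes.new_plotitem(plot_type='1d_plot')
    plotitem.plot_var = 0            # which solution component to plot
    plotitem.plotstyle = '-'
    plotitem.color = 'r'

    return plotdata
```

The plotting utilities call this function once, then loop over frames and patches themselves, which is why setplot only declares *what* to plot rather than doing any plotting directly.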
https://mlr3book.mlr-org.com/in-depth-pipelines.html
## 4.7 In-depth look into mlr3pipelines This vignette is an in-depth introduction to mlr3pipelines, the dataflow programming toolkit for machine learning in R using mlr3. It will go through basic concepts and then give a few examples that both show the simplicity as well as the power and versatility of using mlr3pipelines. ### 4.7.1 What’s the Point Machine learning toolkits often try to abstract away the processes happening inside machine learning algorithms. This makes it easy for the user to switch out one algorithm for another without having to worry about what is happening inside it, what kind of data it is able to operate on etc. The benefit of using mlr3, for example, is that one can create a Learner, a Task, a Resampling etc. and use them for typical machine learning operations. It is trivial to exchange individual components and therefore use, for example, a different Learner in the same experiment for comparison. task = as_task_classif(iris, target = "Species") lrn = lrn("classif.rpart") rsmp = rsmp("holdout") resample(task, lrn, rsmp) ## <ResampleResult> of 1 iterations ## * Learner: classif.rpart ## * Warnings: 0 in 0 iterations ## * Errors: 0 in 0 iterations However, this modularity breaks down as soon as the learning algorithm encompasses more than just model fitting, like data preprocessing, ensembles or other meta models. mlr3pipelines takes modularity one step further than mlr3: it makes it possible to build individual steps within a Learner out of building blocks called PipeOps. ### 4.7.2 PipeOp: Pipeline Operators The most basic unit of functionality within mlr3pipelines is the PipeOp, short for “pipeline operator,” which represents a transformative operation on input (for example a training dataset) leading to output.
It can therefore be seen as a generalized notion of a function, with a certain twist: PipeOps behave differently during a “training phase” and a “prediction phase.” The training phase will typically generate a certain model of the data that is saved as internal state. The prediction phase will then operate on the input data depending on the trained model. An example of this behavior is the principal component analysis operation (“PipeOpPCA”): During training, it will transform incoming data by rotating it in a way that leads to uncorrelated features ordered by their contribution to total variance. It will also save the rotation matrix to be used during prediction for new data. This makes it possible to perform “prediction” with single rows of new data, where a row’s scores on each of the principal components (the components of the training data!) are computed. po = po("pca") po$train(list(task))[[1]]$data() ## Species PC1 PC2 PC3 PC4 ## 1: setosa -2.684 0.31940 -0.02791 -0.002262 ## 2: setosa -2.714 -0.17700 -0.21046 -0.099027 ## 3: setosa -2.889 -0.14495 0.01790 -0.019968 ## 4: setosa -2.745 -0.31830 0.03156 0.075576 ## 5: setosa -2.729 0.32675 0.09008 0.061259 ## --- ## 146: virginica 1.944 0.18753 0.17783 -0.426196 ## 147: virginica 1.527 -0.37532 -0.12190 -0.254367 ## 148: virginica 1.764 0.07886 0.13048 -0.137001 ## 149: virginica 1.901 0.11663 0.72325 -0.044595 ## 150: virginica 1.390 -0.28266 0.36291 0.155039 single_line_task = task$clone()$filter(1) po$predict(list(single_line_task))[[1]]$data() ## Species PC1 PC2 PC3 PC4 ## 1: setosa -2.684 0.3194 -0.02791 -0.002262 po$state ## Standard deviations (1, .., p=4): ## [1] 2.0563 0.4926 0.2797 0.1544 ## ## Rotation (n x k) = (4 x 4): ## PC1 PC2 PC3 PC4 ## Petal.Length 0.85667 -0.17337 0.07624 0.4798 ## Petal.Width 0.35829 -0.07548 0.54583 -0.7537 ## Sepal.Length 0.36139 0.65659 -0.58203 -0.3155 ## Sepal.Width -0.08452 0.73016 0.59791 0.3197 This shows the most important primitives incorporated in a PipeOp: * $train(),
taking a list of input arguments, turning them into a list of outputs, meanwhile saving a state in $state * $predict(), taking a list of input arguments, turning them into a list of outputs, making use of the saved $state * $state, the “model” trained with $train() and utilized during $predict(). Schematically we can represent the PipeOp like so: #### 4.7.2.1 Why the $state It is important to take a moment and notice the importance of a $state variable and the $train() / $predict() dichotomy in a PipeOp. There are many preprocessing methods, for example scaling of parameters or imputation, that could in theory just be applied to training data and prediction / validation data separately, or they could be applied to a task before resampling is performed. This would, however, be fallacious: • The preprocessing of each instance of prediction data should not depend on the remaining prediction dataset. A prediction on a single instance of new data should give the same result as prediction performed on a whole dataset. • If preprocessing is performed on a task before resampling is done, information about the test set can leak into the training set. Resampling should evaluate the generalization performance of the entire machine learning method, therefore the behavior of this entire method must depend only on the content of the training split during resampling. Each PipeOp is an instance of an “R6” class, many of which are provided by the mlr3pipelines package itself. They can be constructed explicitly (“PipeOpPCA$new()”) or retrieved from the mlr_pipeops dictionary: po("pca").
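The train/predict/state contract is language-agnostic. A toy Python rendering of a scaling operator (my own sketch, not the mlr3pipelines API) shows why the $state matters: prediction must reuse statistics fitted on the training data, never recompute them from the prediction data, so a single row gives the same result alone as inside a larger prediction set:

```python
class ScalePipeOp:
    """Toy analogue of a PipeOp: train() fits a state, predict() reuses it."""

    def __init__(self):
        self.state = None  # filled during training, used during prediction

    def train(self, inputs):
        # inputs/outputs are lists, mirroring PipeOp input/output channels
        data = inputs[0]
        n = len(data)
        mean = sum(data) / n
        sd = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5
        self.state = {"mean": mean, "sd": sd}
        return [[(x - mean) / sd for x in data]]

    def predict(self, inputs):
        # depends only on the training state, so no information can leak
        # from the prediction data into the transformation itself
        s = self.state
        return [[(x - s["mean"]) / s["sd"] for x in inputs[0]]]

po = ScalePipeOp()
po.train([[1.0, 2.0, 3.0, 4.0, 5.0]])   # fits mean = 3, sd ≈ 1.58
single = po.predict([[3.0]])[0]          # -> [0.0], using the training mean/sd
```

Had predict() rescaled by the prediction set's own mean, the single-row call above would be undefined (sd of one value) and batch results would change with the batch, which is exactly the fallacy described in the two bullets.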
The entire list of available PipeOps, and some meta-information, can be retrieved using as.data.table(): as.data.table(mlr_pipeops)[, c("key", "input.num", "output.num")] ## key input.num output.num ## 1: boxcox 1 1 ## 2: branch 1 NA ## 3: chunk 1 NA ## 4: classbalancing 1 1 ## 5: classifavg NA 1 ## 6: classweights 1 1 ## 7: colapply 1 1 ## 8: collapsefactors 1 1 ## 9: colroles 1 1 ## 10: copy 1 NA ## 11: datefeatures 1 1 ## 12: encode 1 1 ## 13: encodeimpact 1 1 ## 14: encodelmer 1 1 ## 15: featureunion NA 1 ## 16: filter 1 1 ## 17: fixfactors 1 1 ## 18: histbin 1 1 ## 19: ica 1 1 ## 20: imputeconstant 1 1 ## 21: imputehist 1 1 ## 22: imputelearner 1 1 ## 23: imputemean 1 1 ## 24: imputemedian 1 1 ## 25: imputemode 1 1 ## 26: imputeoor 1 1 ## 27: imputesample 1 1 ## 28: kernelpca 1 1 ## 29: learner 1 1 ## 30: learner_cv 1 1 ## 31: missind 1 1 ## 32: modelmatrix 1 1 ## 33: multiplicityexply 1 NA ## 34: multiplicityimply NA 1 ## 35: mutate 1 1 ## 36: nmf 1 1 ## 37: nop 1 1 ## 38: ovrsplit 1 1 ## 39: ovrunite 1 1 ## 40: pca 1 1 ## 41: proxy NA 1 ## 42: quantilebin 1 1 ## 43: randomprojection 1 1 ## 44: randomresponse 1 1 ## 45: regravg NA 1 ## 46: removeconstants 1 1 ## 47: renamecolumns 1 1 ## 48: replicate 1 1 ## 49: scale 1 1 ## 50: scalemaxabs 1 1 ## 51: scalerange 1 1 ## 52: select 1 1 ## 53: smote 1 1 ## 54: spatialsign 1 1 ## 55: subsample 1 1 ## 56: targetinvert 2 1 ## 57: targetmutate 1 2 ## 58: targettrafoscalerange 1 2 ## 59: textvectorizer 1 1 ## 60: threshold 1 1 ## 61: tunethreshold 1 1 ## 62: unbranch NA 1 ## 63: vtreat 1 1 ## 64: yeojohnson 1 1 ## key input.num output.num When retrieving PipeOps from the mlr_pipeops dictionary, it is also possible to give additional constructor arguments, such as an id or parameter values. po("pca", rank. 
= 3) ## PipeOp: <pca> (not trained) ## values: <rank.=3> ## Input channels <name [train type, predict type]>: ## input [Task,Task] ## Output channels <name [train type, predict type]>: ## output [Task,Task] ### 4.7.3 PipeOp Channels #### 4.7.3.1 Input Channels Just like functions, PipeOps can take multiple inputs. These multiple inputs are always given as elements in the input list. For example, there is a PipeOpFeatureUnion that combines multiple tasks with different features and “cbind()s” them together, creating one combined task. When two halves of the iris task are given, for example, it recreates the original task: iris_first_half = task$clone()$select(c("Petal.Length", "Petal.Width")) iris_second_half = task$clone()$select(c("Sepal.Length", "Sepal.Width")) pofu = po("featureunion", innum = 2) pofu$train(list(iris_first_half, iris_second_half))[[1]]$data() ## Species Petal.Length Petal.Width Sepal.Length Sepal.Width ## 1: setosa 1.4 0.2 5.1 3.5 ## 2: setosa 1.4 0.2 4.9 3.0 ## 3: setosa 1.3 0.2 4.7 3.2 ## 4: setosa 1.5 0.2 4.6 3.1 ## 5: setosa 1.4 0.2 5.0 3.6 ## --- ## 146: virginica 5.2 2.3 6.7 3.0 ## 147: virginica 5.0 1.9 6.3 2.5 ## 148: virginica 5.2 2.0 6.5 3.0 ## 149: virginica 5.4 2.3 6.2 3.4 ## 150: virginica 5.1 1.8 5.9 3.0 Because PipeOpFeatureUnion effectively takes two input arguments here, we can say it has two input channels. An input channel also carries information about the type of input that is acceptable. The input channels of the pofu object constructed above, for example, each accept a Task during training and prediction. This information can be queried from the $input slot: pofu$input ## name train predict ## 1: input1 Task Task ## 2: input2 Task Task Other PipeOps may have channels that take different types during different phases. 
The backuplearner PipeOp, for example, takes a NULL and a Task during training, and a Prediction and a Task during prediction: ## TODO this is an important case to handle here, do not delete unless there is a better example. ## po("backuplearner")$input #### 4.7.3.2 Output Channels Unlike the typical notion of a function, PipeOps can also have multiple output channels. $train() and $predict() always return a list, so certain PipeOps may return lists with more than one element. Similar to input channels, the information about the number and type of outputs given by a PipeOp is available in the $output slot. The chunk PipeOp, for example, chunks a given Task into subsets and consequently returns multiple Task objects, both during training and prediction. The number of output channels must be given during construction through the outnum argument. po("chunk", outnum = 3)$output ## name train predict ## 1: output1 Task Task ## 2: output2 Task Task ## 3: output3 Task Task Note that the number of output channels during training and prediction is the same. A schema of a PipeOp with two output channels: #### 4.7.3.3 Channel Configuration Most PipeOps have only one input channel (so they take a list with a single element), but there are a few with more than one. In many cases, the number of input or output channels is determined during construction, e.g. through the innum / outnum arguments. The input.num and output.num columns of the mlr_pipeops-table above show the default number of channels, and NA if the number depends on a construction argument. The default printer of a PipeOp gives information about channel names and types: ## po("backuplearner") ### 4.7.4 Graph: Networks of PipeOps #### 4.7.4.1 Basics What is the advantage of this tedious way of declaring input and output channels and handling in/output through lists? Because each PipeOp has a known number of input and output channels that always produce or accept data of a known type, it is possible to network them together in Graphs.
A Graph is a collection of PipeOps with “edges” that mandate that data should be flowing along them. Edges always pass between PipeOp channels, so it is not only possible to explicitly prescribe which position of an input or output list an edge refers to, it makes it possible to make different components of a PipeOp’s output flow to multiple different other PipeOps, as well as to have a PipeOp gather its input from multiple other PipeOps. A schema of a simple graph of PipeOps: A Graph is empty when first created, and PipeOps can be added using the $add_pipeop() method. The $add_edge() method is used to create connections between them. While the printer of a Graph gives some information about its layout, the most intuitive way of visualizing it is using the $plot() function. gr = Graph$new() gr$add_pipeop(po("scale")) gr$add_pipeop(po("subsample", frac = 0.1)) gr$add_edge("scale", "subsample") print(gr) ## Graph with 2 PipeOps: ## ID State sccssors prdcssors ## scale <<UNTRAINED>> subsample ## subsample <<UNTRAINED>> scale gr$plot(html = FALSE) A Graph itself has a $train() and a $predict() method that accept some data and propagate this data through the network of PipeOps. The return value corresponds to the output of the PipeOp output channels that are not connected to other PipeOps. 
gr$train(task)[[1]]$data() ## Species Petal.Length Petal.Width Sepal.Length Sepal.Width ## 1: setosa -1.39240 -1.3110521 -1.38073 0.3273 ## 2: setosa -1.27910 -1.1798595 -0.89767 1.7039 ## 3: setosa -1.27910 -1.0486668 -0.89767 1.4745 ## 4: setosa -1.22246 -1.0486668 -1.01844 0.7862 ## 5: setosa -1.22246 -1.3110521 -1.38073 0.3273 ## 6: setosa -1.44905 -1.3110521 -1.01844 0.3273 ## 7: setosa -1.39240 -1.1798595 -1.01844 1.0156 ## 8: versicolor 0.08044 0.2632600 -0.77691 -0.8198 ## 9: versicolor 0.30703 0.1320673 0.67225 -0.3610 ## 10: versicolor 0.13709 0.0008746 -0.05233 -1.0493 ## 11: versicolor -0.25945 -0.2615107 -1.01844 -1.7375 ## 12: virginica 0.64692 0.7880307 0.30996 -0.1315 ## 13: virginica 1.04345 1.1816087 0.67225 -0.5904 ## 14: virginica 1.10010 1.7063794 1.03454 0.5567 ## 15: virginica 0.70356 0.9192234 0.55149 -1.2787 gr$predict(single_line_task)[[1]]$data() ## Species Petal.Length Petal.Width Sepal.Length Sepal.Width ## 1: setosa -1.336 -1.311 -0.8977 1.016 The collection of PipeOps inside a Graph can be accessed through the $pipeops slot. The set of edges in the Graph can be inspected through the $edges slot. It is possible to modify individual PipeOps and edges in a Graph through these slots, but this is not recommended because no error checking is performed and it may put the Graph in an unsupported state. #### 4.7.4.2 Networks The example above showed a linear preprocessing pipeline, but it is in fact possible to build true “graphs” of operations, as long as no loops are introduced1. PipeOps with multiple output channels can feed their data to multiple different subsequent PipeOps, and PipeOps with multiple input channels can take results from different PipeOps. When a PipeOp has more than one input / output channel, then the Graph’s $add_edge() method needs an additional argument that indicates which channel to connect to. This argument can be given in the form of an integer, or as the name of the channel. 
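The execution semantics just described — each PipeOp receives the outputs of its predecessors along the edges, and output channels not consumed by any other PipeOp form the Graph's result — can be sketched in a few lines of language-neutral Python (my own toy code, assuming one channel per operator and a topologically ordered dict; this is not the mlr3pipelines implementation):

```python
def run_graph(pipeops, edges, graph_input):
    """Toy Graph executor: pipeops is an ordered dict name -> callable,
    edges is a list of (src, dst) pairs.  Assumes each PipeOp has a single
    input and output channel and that pipeops is already in topological order."""
    preds = {dst: src for src, dst in edges}
    outputs = {}
    for name, op in pipeops.items():
        # a PipeOp with no predecessor reads the Graph's own input
        x = outputs[preds[name]] if name in preds else graph_input
        outputs[name] = op(x)
    consumed = {src for src, _ in edges}
    # unconnected output channels form the Graph's return value
    return [out for name, out in outputs.items() if name not in consumed]

# Mirrors the scale -> subsample pipeline from above, on plain lists:
ops = {
    "scale": lambda xs: [x / max(xs) for x in xs],
    "subsample": lambda xs: xs[::2],   # keep every other element
}
result = run_graph(ops, [("scale", "subsample")], [1.0, 2.0, 4.0])
# -> [[0.25, 1.0]]
```

The real Graph additionally tracks per-channel types, multiple channels per PipeOp, and the train/predict dichotomy, but the data-flow core is this simple propagation loop.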
The following constructs a Graph that copies the input and gives one copy each to a “scale” and a “pca” PipeOp. The resulting columns of each operation are put next to each other by “featureunion.” gr = Graph$new()$add_pipeop(po("copy", outnum = 2))$ add_pipeop(po("scale"))$add_pipeop(po("pca"))$add_pipeop(po("featureunion", innum = 2)) gr$add_edge("copy", "scale", src_channel = 1)$ # designating channel by index add_edge("copy", "pca", src_channel = "output2")$ # designating channel by name add_edge("scale", "featureunion", dst_channel = 1)$ # designating channel by index add_edge("pca", "featureunion", dst_channel = 2) # designating channel by index #### 4.7.4.4 PipeOp IDs and ID Name Clashes PipeOps within a graph are addressed by their $id-slot. It is therefore necessary for all PipeOps within a Graph to have a unique $id. The $id can be set during or after construction, but it should not directly be changed after a PipeOp was inserted in a Graph. At that point, the $set_names()-method can be used to change PipeOp ids. po1 = po("scale") po2 = po("scale") po1 %>>% po2 ## name clash ## Error in gunion(list(g1, g2)): Assertion on 'ids of pipe operators' failed: Must have unique names, but element 2 is duplicated. po2$id = "scale2" gr = po1 %>>% po2 gr ## Graph with 2 PipeOps: ## ID State sccssors prdcssors ## scale <<UNTRAINED>> scale2 ## scale2 <<UNTRAINED>> scale ## Alternative ways of getting new ids: po("scale", id = "scale2") ## PipeOp: <scale2> (not trained) ## values: <robust=FALSE> ## Input channels <name [train type, predict type]>: ## input [Task,Task] ## Output channels <name [train type, predict type]>: ## output [Task,Task] po("scale", id = "scale2") ## PipeOp: <scale2> (not trained) ## values: <robust=FALSE> ## Input channels <name [train type, predict type]>: ## input [Task,Task] ## Output channels <name [train type, predict type]>: ## output [Task,Task] ## sometimes names of PipeOps within a Graph need to be changed gr2 = po("scale") %>>% po("pca") gr %>>% gr2 ## Error in gunion(list(g1, g2)): Assertion on 'ids of pipe operators' failed: Must have unique names, but element 3 is duplicated.
gr2$set_names("scale", "scale3") gr %>>% gr2 ## Graph with 4 PipeOps: ## ID State sccssors prdcssors ## scale <<UNTRAINED>> scale2 ## scale2 <<UNTRAINED>> scale3 scale ## scale3 <<UNTRAINED>> pca scale2 ## pca <<UNTRAINED>> scale3 ### 4.7.5 Learners in Graphs, Graphs in Learners The true power of mlr3pipelines derives from the fact that it can be integrated seamlessly with mlr3. Two components are mainly responsible for this: • PipeOpLearner, a PipeOp that encapsulates a mlr3 Learner and creates a PredictionData object in its $predict() phase • GraphLearner, a mlr3 Learner that can be used in place of any other mlr3 Learner, but which does prediction using a Graph given to it Note that these are dual to each other: One takes a Learner and produces a PipeOp (and by extension a Graph); the other takes a Graph and produces a Learner. #### 4.7.5.1 PipeOpLearner The PipeOpLearner is constructed using a mlr3 Learner and will use it to create PredictionData in the $predict() phase. The output during $train() is NULL. It can be used after a preprocessing pipeline, and it is even possible to perform operations on the PredictionData, for example by averaging multiple predictions or by using the “PipeOpBackupLearner” operator to impute predictions that a given model failed to create. The following is a very simple Graph that performs training and prediction on data after performing principal component analysis. gr = po("pca") %>>% po("learner", lrn("classif.rpart")) gr$train(task) ## $classif.rpart.output ## NULL gr$predict(task) ## $classif.rpart.output ## <PredictionClassif> for 150 observations: ## row_ids truth response ## 1 setosa setosa ## 2 setosa setosa ## 3 setosa setosa ## --- ## 148 virginica virginica ## 149 virginica virginica ## 150 virginica virginica #### 4.7.5.2 GraphLearner Although a Graph has $train() and $predict() functions, it cannot be used directly in places where mlr3 Learners can be used like resampling or benchmarks.
For this, it needs to be wrapped in a GraphLearner object, which is a thin wrapper that enables this functionality. The resulting Learner is extremely versatile, because every part of it can be modified, replaced, parameterized and optimized over. Resampling the graph above can be done the same way that resampling of the Learner was performed in the introductory example. lrngrph = as_learner(gr) resample(task, lrngrph, rsmp) ## <ResampleResult> of 1 iterations ## * Task: iris ## * Learner: pca.classif.rpart ## * Warnings: 0 in 0 iterations ## * Errors: 0 in 0 iterations ### 4.7.6 Hyperparameters mlr3pipelines relies on the paradox package to provide parameters that can modify each PipeOp’s behavior. paradox parameters provide information about the parameters that can be changed, as well as their types and ranges. They provide a unified interface for benchmarks and parameter optimization (“tuning”). For a deep dive into paradox, see the tuning chapter or the in-depth paradox chapter. The ParamSet, representing the space of possible parameter configurations of a PipeOp, can be inspected by accessing the $param_set slot of a PipeOp or a Graph. op_pca = po("pca") op_pca$param_set ## <ParamSet:pca> ## id class lower upper nlevels default value ## 1: center ParamLgl NA NA 2 TRUE ## 2: scale. ParamLgl NA NA 2 FALSE ## 3: rank. ParamInt 1 Inf Inf ## 4: affect_columns ParamUty NA NA Inf <Selector[1]> To set or retrieve a parameter, the $param_set$values slot can be accessed. Alternatively, the param_vals value can be given during construction. op_pca$param_set$values$center = FALSE op_pca$param_set$values ## $center ## [1] FALSE op_pca = po("pca", center = TRUE) op_pca$param_set$values ## $center ## [1] TRUE Each PipeOp can bring its own individual parameters which are collected together in the Graph’s $param_set. A PipeOp’s parameter names are prefixed with its $id to prevent parameter name clashes.
gr = op_pca %>>% po("scale") gr$param_set ## <ParamSetCollection> ## id class lower upper nlevels default value ## 1: pca.center ParamLgl NA NA 2 TRUE TRUE ## 2: pca.scale. ParamLgl NA NA 2 FALSE ## 3: pca.rank. ParamInt 1 Inf Inf ## 4: pca.affect_columns ParamUty NA NA Inf <Selector[1]> ## 5: scale.center ParamLgl NA NA 2 TRUE ## 6: scale.scale ParamLgl NA NA 2 TRUE ## 7: scale.robust ParamLgl NA NA 2 <NoDefault[3]> FALSE ## 8: scale.affect_columns ParamUty NA NA Inf <Selector[1]> gr$param_set$values ## $pca.center ## [1] TRUE ## ## $scale.robust ## [1] FALSE Both PipeOpLearner and GraphLearner preserve parameters of the objects they encapsulate. op_rpart = po("learner", lrn("classif.rpart")) op_rpart$param_set ## <ParamSet:classif.rpart> ## id class lower upper nlevels default value ## 1: minsplit ParamInt 1 Inf Inf 20 ## 2: minbucket ParamInt 1 Inf Inf <NoDefault[3]> ## 3: cp ParamDbl 0 1 Inf 0.01 ## 4: maxcompete ParamInt 0 Inf Inf 4 ## 5: maxsurrogate ParamInt 0 Inf Inf 5 ## 6: maxdepth ParamInt 1 30 30 30 ## 7: usesurrogate ParamInt 0 2 3 2 ## 8: surrogatestyle ParamInt 0 1 2 0 ## 9: xval ParamInt 0 Inf Inf 10 0 ## 10: keep_model ParamLgl NA NA 2 FALSE glrn = as_learner(gr %>>% op_rpart) glrn$param_set ## <ParamSetCollection> ## id class lower upper nlevels default ## 1: pca.center ParamLgl NA NA 2 TRUE ## 2: pca.scale. ParamLgl NA NA 2 FALSE ## 3: pca.rank.
ParamInt 1 Inf Inf ## 4: pca.affect_columns ParamUty NA NA Inf <Selector[1]> ## 5: scale.center ParamLgl NA NA 2 TRUE ## 6: scale.scale ParamLgl NA NA 2 TRUE ## 7: scale.robust ParamLgl NA NA 2 <NoDefault[3]> ## 8: scale.affect_columns ParamUty NA NA Inf <Selector[1]> ## 9: classif.rpart.minsplit ParamInt 1 Inf Inf 20 ## 10: classif.rpart.minbucket ParamInt 1 Inf Inf <NoDefault[3]> ## 11: classif.rpart.cp ParamDbl 0 1 Inf 0.01 ## 12: classif.rpart.maxcompete ParamInt 0 Inf Inf 4 ## 13: classif.rpart.maxsurrogate ParamInt 0 Inf Inf 5 ## 14: classif.rpart.maxdepth ParamInt 1 30 30 30 ## 15: classif.rpart.usesurrogate ParamInt 0 2 3 2 ## 16: classif.rpart.surrogatestyle ParamInt 0 1 2 0 ## 17: classif.rpart.xval ParamInt 0 Inf Inf 10 ## 18: classif.rpart.keep_model ParamLgl NA NA 2 FALSE ## value ## 1: TRUE ## 2: ## 3: ## 4: ## 5: ## 6: ## 7: FALSE ## 8: ## 9: ## 10: ## 11: ## 12: ## 13: ## 14: ## 15: ## 16: ## 17: 0 ## 18:

1. It is tempting to denote this as a “directed acyclic graph,” but this would not be entirely correct because edges run between channels of PipeOps, not PipeOps themselves.↩︎
https://socratic.org/questions/what-is-the-arc-length-of-f-t-3te-t-t-e-t-over-t-in-2-4
# What is the arc length of f(t)=(3te^t,t-e^t) over t in [2,4]?

May 15, 2017

$612.530$ (3dp)

#### Explanation:

We have: $f \left(t\right) = \left(3 t {e}^{t} , t - {e}^{t}\right)$ where $t \in \left[2 , 4\right]$.

The parametric arc length is given by:

$L = {\int}_{\alpha}^{\beta} \setminus \sqrt{{\left(\frac{\mathrm{dx}}{\mathrm{dt}}\right)}^{2} + {\left(\frac{\mathrm{dy}}{\mathrm{dt}}\right)}^{2}} \setminus \mathrm{dt}$

We can differentiate the parameters:

$x \left(t\right) = 3 t {e}^{t} \implies \frac{\mathrm{dx}}{\mathrm{dt}} = 3 t {e}^{t} + 3 {e}^{t}$

$y \left(t\right) = t - {e}^{t} \implies \frac{\mathrm{dy}}{\mathrm{dt}} = 1 - {e}^{t}$

Then the arc length is given by:

$L = {\int}_{2}^{4} \setminus \sqrt{{\left(3 t {e}^{t} + 3 {e}^{t}\right)}^{2} + {\left(1 - {e}^{t}\right)}^{2}} \setminus \mathrm{dt}$

This integral does not have a trivial antiderivative, and so is evaluated using numerical methods to give:

$L = 612.530$ (3dp)
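The numerical evaluation can be reproduced with composite Simpson's rule; the integrand and limits below are taken directly from the problem above, and the quadrature routine is a generic sketch:

```python
import math

def f(t):
    # speed = sqrt((dx/dt)^2 + (dy/dt)^2) for x = 3t e^t, y = t - e^t
    dx = 3 * t * math.exp(t) + 3 * math.exp(t)
    dy = 1 - math.exp(t)
    return math.hypot(dx, dy)

def simpson(g, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(g(a + i * h) for i in range(2, n, 2))
    return s * h / 3

L = simpson(f, 2, 4)
print(round(L, 3))  # ≈ 612.530, matching the 3 d.p. answer above
```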
http://www.newton.ac.uk/webseminars/Rothschilds/2005?field_when_value_1[min]=1991-01-01&field_when_value_1[max]=2014-08-16&title_2=&title=&combine=&page=237
# Seminar Archive

Event Code | Date | Speaker | Seminar Title
CMP | 15th October 2002 | J van den Berg | Self-destructive percolation
 | 14th October 2002 | L Lovasz | Global information from local observation
NST | 10th October 2002 | M Hanamura | Blow-ups and mixed motives
NST | 9th October 2002 | M Mahowald | tmf(3) and other tmf module spectra
CMP | 8th October 2002 | M Simonovits | On a hypergraph extremal problem
NSTW03 | 4th October 2002 | W Gajda | K-theory and classical conjectures in the arithmetic of cyclotomic fields
NSTW03 | 4th October 2002 | R De Jeu | $K_{4}$ of curves and syntomic regulators
NSTW03 | 4th October 2002 | A Schmidt | Relative K-groups and class field theory of arithmetic surfaces
NSTW03 | 4th October 2002 | U Jannsen | Kato complexes: conjectures and results
NSTW03 | 3rd October 2002 | B Kahn | Rational and numerical equivalence on certain abelian varieties over finite fields
NSTW03 | 3rd October 2002 | T Geisser | Weil étale motivic cohomology over finite fields
NSTW03 | 3rd October 2002 | D Burns | On equivariant Tamagawa numbers, Weil étale cohomology and values of L-functions
NSTW03 | 3rd October 2002 | S Lichtenbaum | Weil étale cohomology
CMP | 2nd October 2002 | A Jarai | Incipient infinite percolation clusters in two and high dimensions
NSTW03 | 2nd October 2002 | V Abrashkin | Analogue of the Grothendieck conjecture for higher dimensional local fields
NSTW03 | 2nd October 2002 | M Taylor | de Rham discriminants
NSTW03 | 1st October 2002 | C Pedrini | Finite-dimensional motives and the Beilinson-Bloch conjectures
NSTW03 | 1st October 2002 | Z Wojtkoviak | l-adic polylogarithms
NSTW03 | 1st October 2002 | I Fesenko | Analysis on arithmetic surfaces
NSTW03 | 1st October 2002 | VP Snaith | Stark's conjecture and new Stickelberger phenomena
NSTW03 | 30th September 2002 | P Schneider | Noncommutative Iwasawa Theory
NSTW03 | 30th September 2002 | C Soule | Bounds on the torsion in the K-theory of algebraic integers
NSTW03 | 30th September 2002 | S Bloch | ${\bf G}_{a}$, motives, polylogarithms etc
NSTW03 | 30th September 2002 | A Scholl | Zeta elements and modular forms
NST | 27th September 2002 | B Kahn | Informal geometry seminar series: Birational motives
NST | 26th September 2002 | G Banaszak | Non-torsion elements in algebraic K-theory of number fields, Mordell-Weil groups and l-adic representations
NST | 26th September 2002 | N Yagita | Algebraic cobordism of simply connected Lie groups
CMP | 26th September 2002 | P Winkler | On playing golf with two balls
CMP | 25th September 2002 | T Luczak | The phase transition in random graphs: a microscopic view of the random-cluster model
NSTW01 | 20th September 2002 | C Weibel | Homological algebra in DM
NSTW01 | 20th September 2002 | A Vishik | Quadratic Grassmannians & Steenrod operations (2)
NSTW01 | 20th September 2002 | B Toen | Homotopical algebraic geometry
NSTW01 | 20th September 2002 | VP Snaith | Equivariant motivic phenomena (2)
NSTW01 | 19th September 2002 | G Carlsson | Structured stable homotopy theory & descent in algebraic K-theory (2)
NSTW01 | 19th September 2002 | A Vishik | Quadratic Grassmannians & Steenrod operations (1)
NSTW01 | 19th September 2002 | B Totaro | An example in the theory of Witt groups of algebraic varieties using Steenrod operations on Chow groups
NSTW01 | 19th September 2002 | VP Snaith | Equivariant motivic phenomena (1)
NSTW01 | 18th September 2002 | J Hornbostel | ${\Bbb A}^1$-representability of hermitian K-theory and Witt groups & Morel's conjectures on ${\Bbb A}^1$-homotopy groups of spheres
NSTW01 | 18th September 2002 | G Carlsson | Structured stable homotopy theory & descent in algebraic K-theory (1)
NSTW01 | 18th September 2002 | J Rognes | Galois descent for the K-theory of commutative $S$-algebras
NSTW01 | 18th September 2002 | S Bloch | Questions related to motives
NSTW01 | 17th September 2002 | L Hesselholt | Trace methods in algebraic K-theory (II)
NSTW01 | 17th September 2002 | M Levine | Mixed Tate motives (2)
NSTW01 | 17th September 2002 | F Morel | ${\Bbb A}^1$-homotopy theory (2)
NSTW01 | 17th September 2002 | R Jardine | Stable homotopy theories for simplicial presheaves (2)
NSTW01 | 16th September 2002 | L Hesselholt | Trace methods in algebraic K-theory (I)
NSTW01 | 16th September 2002 | M Levine | Introduction to algebraic cobordism (1)
NSTW01 | 16th September 2002 | F Morel | ${\Bbb A}^1$-homotopy theory (1)
NSTW01 | 16th September 2002 | R Jardine | Stable homotopy theories for simplicial presheaves (1)
NSTW01 | 13th September 2002 | NP Strickland | Axiomatic stable homotopy theory
https://www.physicsforums.com/members/mapes.109737/recent-content
# Recent Content by Mapes

1. Post — I: What is the formula 1/(dS/dE)>>0 and how does it apply?
   $1/(\partial S/\partial E)$ with entropy $S$ and energy $E$ is the definition of temperature (I've replaced the absolute derivative $d/dx$...
   Post by: Mapes, Dec 9, 2017 in forum: General Physics

2. Post — B: Why is it so much easier to increase the temperature of something vs. decreasing it?
   Regarding the perceived asymmetry between heating things up vs. cooling them down, note that we're surrounded by sources of low-entropy,...
   Post by: Mapes, May 16, 2017 in forum: General Physics

3. Post — Relation between Young's modulus and the coefficient of thermal expansion
   Please see the discussion of the correlation between stiffness, melting temperature, and thermal expansion here. You can investigate the...
   Post by: Mapes, Apr 18, 2017 in forum: Materials and Chemical Engineering

4. Post — I: Is a quasi-static but irreversible process possible?
   Whenever energy transfer is driven by a gradient (e.g., a pressure difference causing a change in volume, a voltage difference causing electric...
   Post by: Mapes, Apr 5, 2017 in forum: Classical Physics

5. Post — I: Is a quasi-static but irreversible process possible?
   A reversible process doesn't increase entropy and thus cannot exist in the real world (although we can come arbitrarily close). Every real process...
   Post by: Mapes, Apr 2, 2017 in forum: Classical Physics
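The definition of temperature quoted in the first item can be sanity-checked numerically. The sketch below assumes an ideal-gas-like entropy $S(E) = \tfrac{3}{2} N k \ln E$ (so the exact answer is $T = 2E/(3Nk)$); the values of `N` and `E` are arbitrary illustrative choices, not from the posts above:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
N = 1e22           # number of particles (illustrative)

def S(E):
    # monatomic-ideal-gas-style entropy, up to an additive constant
    return 1.5 * N * k * math.log(E)

E = 100.0          # total energy in joules (illustrative)
h = 1e-6
dS_dE = (S(E + h) - S(E - h)) / (2 * h)   # central finite difference
T = 1.0 / dS_dE                            # T = 1/(dS/dE)
print(T)  # agrees with the closed form 2*E/(3*N*k)
```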
https://stacks.math.columbia.edu/tag/02A5
Exercise 110.37.9. Let $S$ be a graded ring. Let $X = \text{Proj}(S)$. Let $Z, Z' \subset X$ be two closed subschemes. Let $\varphi : Z \to Z'$ be an isomorphism. Assume $Z \cap Z' = \emptyset$. Show that for any $z \in Z$ there exists an affine open $U \subset X$ such that $z \in U$, $\varphi (z) \in U$ and $\varphi (Z \cap U) = Z' \cap U$. (Hint: Use Exercise 110.37.8 and something akin to Schemes, Lemma 26.11.5.)
https://zbmath.org/?q=an:1165.34303
Global existence and uniqueness of solutions for fuzzy differential equations under dissipative-type conditions. (English) Zbl 1165.34303

Summary: Using the properties of a differential and integral calculus for fuzzy set valued mappings and the completeness of the metric space of fuzzy numbers, the global existence, uniqueness and continuous dependence of a solution of a fuzzy differential equation are derived under dissipative-type conditions. We also present the global existence and uniqueness of solutions for a fuzzy differential equation on a closed convex subset of fuzzy number space.

MSC:
34A12 Initial value problems, existence, uniqueness, continuous dependence and continuation of solutions to ordinary differential equations
26E50 Fuzzy real analysis
http://aimpl.org/catheckehilbert/3/
## 3. Springer fibres 1. #### Problem 3.1. To an algebraic link one can associate a Springer fibre and a Milnor fibre. Can one compute HHH from the Milnor fibre? • #### Problem 3.2. It is known that the Alexander polynomial can be computed from both the Springer fibre and the Milnor fibre. Understand the relationship between these two constructions. • #### Problem 3.3. Outside of type A is there still a relation between Hochschild cohomology of Rouquier complexes and affine Springer fibres? Cite this as: AimPL: Categorified Hecke algebras, link homology, and Hilbert schemes, available at http://aimpl.org/catheckehilbert.
http://www.nag.com/numeric/FL/nagdoc_fl24/html/F06/f06zuf.html
F06 Chapter Contents F06 Chapter Introduction NAG Library Manual

# NAG Library Routine Document F06ZUF (ZSYRK)

Note: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.

## 1 Purpose

F06ZUF (ZSYRK) performs one of the symmetric rank-$k$ update operations

$C \leftarrow \alpha A A^{\mathrm{T}} + \beta C \quad \text{or} \quad C \leftarrow \alpha A^{\mathrm{T}} A + \beta C,$

where $A$ is a complex matrix, $C$ is an $n$ by $n$ complex symmetric matrix, and $\alpha$ and $\beta$ are complex scalars.

## 2 Specification

```
SUBROUTINE F06ZUF ( UPLO, TRANS, N, K, ALPHA, A, LDA, BETA, C, LDC)
INTEGER               N, K, LDA, LDC
COMPLEX (KIND=nag_wp) ALPHA, A(LDA,*), BETA, C(LDC,*)
CHARACTER(1)          UPLO, TRANS
```

The routine may be called by its BLAS name zsyrk.

## 3 Description

None.

## 4 References

None.

## 5 Parameters

1: UPLO – CHARACTER(1) Input

On entry: specifies whether the upper or lower triangular part of $C$ is stored.

${\mathbf{UPLO}}=\text{'U'}$: the upper triangular part of $C$ is stored.

${\mathbf{UPLO}}=\text{'L'}$: the lower triangular part of $C$ is stored.

Constraint: ${\mathbf{UPLO}}=\text{'U'}$ or $\text{'L'}$.

2: TRANS – CHARACTER(1) Input

On entry: specifies the operation to be performed.

${\mathbf{TRANS}}=\text{'N'}$: $C\leftarrow \alpha A{A}^{\mathrm{T}}+\beta C$.

${\mathbf{TRANS}}=\text{'T'}$: $C\leftarrow \alpha {A}^{\mathrm{T}}A+\beta C$.

Constraint: ${\mathbf{TRANS}}=\text{'N'}$ or $\text{'T'}$.

3: N – INTEGER Input

On entry: $n$, the order of the matrix $C$; the number of rows of $A$ if ${\mathbf{TRANS}}=\text{'N'}$, or the number of columns of $A$ if ${\mathbf{TRANS}}=\text{'T'}$ or $\text{'C'}$.

Constraint: ${\mathbf{N}}\ge 0$.

4: K – INTEGER Input

On entry: $k$, the number of columns of $A$ if ${\mathbf{TRANS}}=\text{'N'}$, or the number of rows of $A$ if ${\mathbf{TRANS}}=\text{'T'}$ or $\text{'C'}$.

Constraint: ${\mathbf{K}}\ge 0$.

5: ALPHA – COMPLEX (KIND=nag_wp) Input

On entry: the scalar $\alpha$.
6: A(LDA,$*$) – COMPLEX (KIND=nag_wp) array Input

Note: the second dimension of the array A must be at least $\mathrm{max}\left(1,{\mathbf{K}}\right)$ if ${\mathbf{TRANS}}=\text{'N'}$ and at least $\mathrm{max}\left(1,{\mathbf{N}}\right)$ if ${\mathbf{TRANS}}=\text{'T'}$ or $\text{'C'}$.

On entry: the matrix $A$; $A$ is $n$ by $k$ if ${\mathbf{TRANS}}=\text{'N'}$, or $k$ by $n$ if ${\mathbf{TRANS}}=\text{'T'}$ or $\text{'C'}$.

7: LDA – INTEGER Input

On entry: the first dimension of the array A as declared in the (sub)program from which F06ZUF (ZSYRK) is called.

Constraints:
- if ${\mathbf{TRANS}}=\text{'N'}$, ${\mathbf{LDA}}\ge \mathrm{max}\left(1,{\mathbf{N}}\right)$;
- if ${\mathbf{TRANS}}=\text{'T'}$ or $\text{'C'}$, ${\mathbf{LDA}}\ge \mathrm{max}\left(1,{\mathbf{K}}\right)$.

8: BETA – COMPLEX (KIND=nag_wp) Input

On entry: the scalar $\beta$.

9: C(LDC,$*$) – COMPLEX (KIND=nag_wp) array Input/Output

Note: the second dimension of the array C must be at least $\mathrm{max}\left(1,{\mathbf{N}}\right)$.

On entry: the $n$ by $n$ symmetric matrix $C$.
- If ${\mathbf{UPLO}}=\text{'U'}$, the upper triangular part of $C$ must be stored and the elements of the array below the diagonal are not referenced.
- If ${\mathbf{UPLO}}=\text{'L'}$, the lower triangular part of $C$ must be stored and the elements of the array above the diagonal are not referenced.

On exit: the updated matrix $C$.

10: LDC – INTEGER Input

On entry: the first dimension of the array C as declared in the (sub)program from which F06ZUF (ZSYRK) is called.

Constraint: ${\mathbf{LDC}}\ge \mathrm{max}\left(1,{\mathbf{N}}\right)$.

## 6 Error Indicators and Warnings

None.

## 7 Accuracy

Not applicable.
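As a concrete illustration of the operation itself (not of the Fortran interface), here is a pure-Python sketch of the ${\mathbf{TRANS}}=\text{'N'}$, ${\mathbf{UPLO}}=\text{'U'}$ case, $C \leftarrow \alpha A A^{\mathrm{T}} + \beta C$. Note that ZSYRK uses the plain transpose (no complex conjugation, unlike ZHERK) and references only the stored triangle:

```python
# Sketch of the symmetric rank-k update C <- alpha*A*A^T + beta*C
# (the TRANS='N', UPLO='U' case), using nested lists of Python complex
# numbers. Illustrative only -- not the NAG/BLAS calling convention.
def syrk_upper(alpha, a, beta, c):
    n, k = len(a), len(a[0])
    for i in range(n):
        for j in range(i, n):  # only the upper triangle is referenced/updated
            # plain transpose: no conjugation of a[j][l]
            s = sum(a[i][l] * a[j][l] for l in range(k))
            c[i][j] = alpha * s + beta * c[i][j]
    return c

a = [[1 + 1j, 2], [0, 1j]]
c = [[0, 0], [0, 0]]
syrk_upper(1, a, 0, c)
print(c)  # upper triangle holds A A^T; c[1][0] is left untouched
```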
https://blog.csdn.net/u010660276/article/details/17126685
# Minimum spanning tree with pre-existing edges (Kruskal) + UVa 10397

Problem E
Connect the Campus
Input: standard input
Output: standard output
Time Limit: 2 seconds

Many new buildings are under construction on the campus of the University of Waterloo. The university has hired bricklayers, electricians, plumbers, and a computer programmer. A computer programmer? Yes, you have been hired to ensure that each building is connected to every other building (directly or indirectly) through the campus network of communication cables.

We will treat each building as a point specified by an x-coordinate and a y-coordinate. Each communication cable connects exactly two buildings, following a straight line between the buildings. Information travels along a cable in both directions. Cables can freely cross each other, but they are only connected together at their endpoints (at buildings).

You have been given a campus map which shows the locations of all buildings and existing communication cables. You must not alter the existing cables. Determine where to install new communication cables so that all buildings are connected. Of course, the university wants you to minimize the amount of new cable that you use.

Fig: University of Waterloo Campus

Input

The input file describes several test cases. The description of each test case is given below:

The first line of each test case contains the number of buildings N (1<=N<=750). The buildings are labeled from 1 to N. The next N lines give the x and y coordinates of the buildings. These coordinates are integers with absolute values at most 10000. No two buildings occupy the same point. After that there is a line containing the number of existing cables M (0 <= M <= 1000) followed by M lines describing the existing cables. Each cable is represented by two integers: the building numbers which are directly connected by the cable. There is at most one cable directly connecting each pair of buildings.
Output

For each set of input, output in a single line the total length of the new cables that you plan to use, rounded to two decimal places.

Sample Input

```
4
103 104
104 100
104 103
100 100
1
4 2
4
103 104
104 100
104 103
100 100
1
4 2
```

Sample Output

```
4.41
4.41
```

Approach 1: Kruskal — first merge the endpoints of the already-existing edges into the same disjoint-set components, then find the minimum spanning tree as usual.

```cpp
#include<iostream>
#include<algorithm>
#include<cstdio>
#include<cmath>
using namespace std;

const int MAX=800;

struct node
{
    double len;
    int u,v;
} edge[600000];

struct A
{
    int x,y;
} dian[1010];

int N,num;
int pre[MAX],rank[MAX];

void init()
{
    num=0;
    for(int i=0;i<=N;i++)
    {
        pre[i]=i;
        rank[i]=i;
    }
}

// reconstructed: the header of this function was lost in extraction
void add(int u,int v,double x)
{
    edge[num].u=u;
    edge[num].v=v;
    edge[num++].len=x;
}

int find(int x)
{
    if(pre[x]==x) return x;
    return pre[x]=find(pre[x]);
}

void unite(int x,int y)
{
    int tx=find(x);
    int ty=find(y);
    if(tx==ty) return;
    if(rank[tx]<rank[ty]) pre[tx]=ty;
    else
    {
        pre[ty]=tx;
        if(rank[tx]==rank[ty]) rank[tx]++;
    }
}

bool cmp(node a,node b)
{
    return a.len<b.len;
}

void Kruskal()
{
    double ans=0;
    sort(edge,edge+num,cmp);
    for(int i=0;i<num;i++)
    {
        int x=find(edge[i].u);
        int y=find(edge[i].v);
        if(x!=y)
        {
            unite(x,y);
            ans+=edge[i].len;
        }
    }
    printf("%.2lf\n",ans);
}

int main()
{
#ifndef ONLINE_JUDGE
    freopen("in.txt","r",stdin);
#endif
    while(cin>>N)
    {
        init();
        for(int i=1; i<=N; i++)
        {
            scanf("%d%d",&dian[i].x,&dian[i].y);
            for(int j=1; j<i; j++)
            {
                double x=pow((dian[i].x-dian[j].x),2)+pow((dian[i].y-dian[j].y),2);
                x=sqrt(x);
                add(i,j,x);  // reconstructed: store every pair as a candidate edge
            }
        }
        int cnt;
        cin>>cnt;
        int x,y;
        for(int i=0;i<cnt;i++)
        {
            scanf("%d%d",&x,&y);
            unite(x,y);  // pre-merge endpoints of existing cables (they cost nothing)
        }
        Kruskal();
    }
    return 0;
}
```
Approach 2: Prim (or Kruskal) — set the weight of every already-existing edge to 0.

```cpp
#include <cstdio>
#include <cstring>
#include <cmath>
#include <algorithm>
using namespace std;

const int N = 800;
const double inf = 10000000;
int n, m;
double x[N], y[N], map[N][N];

double prim()
{
    int u;
    double ans = 0, ma;
    bool vis[N];
    memset( vis, 0, sizeof(vis) );
    vis[1] = true;
    for ( int k = 2; k <= n; ++k )
    {
        ma = inf;
        for ( int i = 1; i <= n; ++i )
            if ( !vis[i] && map[1][i] < ma )
                ma = map[1][i], u = i;
        ans += ma;
        vis[u] = true;
        for ( int i = 1; i <= n; ++i )
            if ( !vis[i] && map[u][i] < map[1][i] )
                map[1][i] = map[u][i];
    }
    return ans;
}

int main()
{
    while ( scanf("%d", &n) != EOF )
    {
        for ( int i = 1; i <= n; ++i )
            scanf("%lf%lf", &x[i], &y[i]);
        for ( int i = 1; i <= n; ++i )
            for ( int j = 1; j <= n; ++j )
            {
                if ( i != j )
                    map[i][j] = sqrt( (x[i] - x[j])*(x[i] - x[j]) + (y[i] - y[j])*(y[i] - y[j]) );
                else
                    map[i][j] = 99999999.0;
            }
        scanf("%d", &m);
        for ( int i = 0; i < m; ++i )
        {
            int u, v;
            scanf("%d%d", &u, &v);
            map[u][v] = map[v][u] = 0;  // existing cables cost nothing
        }
        printf("%.2lf\n", prim());
    }
}
```
http://www.dailybreakthru.com/lkf-usa-wnl/ols-regression-description
Let's understand OLS in detail using an example: We are given a data set with 100 observations and 2 variables, namely Heightand Weight. See regression.linear_model.RegressionResults for a description of the available covariance estimators. Depends R(>= 3.2.4) For the purpose of robustness check, please suggest me an appropriate methodology. object: An object of class "formula" (or one that can be coerced to that class): a symbolic description of the model to be fitted or class lm. səs] (statistics) The description of the nature of the relationship between two or more variables; it is concerned with the problem of describing or estimating the value of the dependent variable on the basis of one or more independent variables. Regression is a statistical measurement that attempts to determine the strength of the relationship between one dependent variable (usually denoted by … By looking at the correlation matrix we can see that RM has a strong positive correlation with MEDV (0.7) where as LSTAT has a high negative correlation with MEDV(-0.74). Instead, they assess the average effect of changing a predictor, but not the distribution around that average. Ridge Regression : In Ridge regression, we add a penalty term which is equal to the square of the coefficient. | PowerPoint PPT presentation | free to view . Simple Linear Regression—Description. On the other hand, if we use absolute value loss, quantile regression will be better. Decision-makers can use regression equations to predict outcomes. where Y is an individual’s wage and X is her years of education. to perform a regression analysis, you will receive a regression table as output that summarize the results of the regression. 
OLS Simple linear regression model De…ne the sum of squares of the residuals (SSR) function as: ST ( ) = TX t=1 (yt 1 2xt)2 Estimator: Formula for estimating unknown parameters Estimate: Numerical value obtained when sample data is substituted in formula The OLS estimator (b) minimizes ST ( ). Finally, review the section titled "How Regression Models Go Bad" in the Regression Analysis Basics document as a check that your OLS regression model is properly specified. The most commonly performed statistical procedure in SST is multiple regression analysis. OLS Regression Author: Barreto/Howland Description: Reports Robust SEs; handles missing values; contains OLSReg function 17 Jun 2008 Last modified by: Frank Howland Created Date: 7/31/2000 7:56:24 PM Other titles: Doc DocRegResults3 New Reg Results Ordinary Least Squares (OLS) is the most common estimation method for linear models—and that’s true for a good reason. Ridge Regression is a technique used when the data suffers from multicollinearity (independent variables are highly correlated). In statistics, regression is a technique that can be used to analyze the relationship between predictor variables and a response variable. In this set of notes, you will begin your foray into regression analysis. OLS is easy to analyze and computationally faster, i.e. In econometrics, Ordinary Least Squares (OLS) method is widely used to estimate the parameters of a linear regression model. Stocks I think the use of "on average" just expresses that there is a difference between a slope parameter and its estimator. OLS Our Example Figure 8: Linear regression 12 14. Description. OLS model (multiple regression) results are free from autocorrelation and heteroscedasticity errors. robust_trend(avg:{*}) The most common type of linear regression—ordinary least squares (OLS)—can be heavily influenced by a small number of points with extreme values. Value. 
As long as your model satisfies the OLS assumptions for linear regression, you can rest easy knowing that you're getting the best possible estimates. The description is expressing the fact that b is an estimate of the slope of the regression line. In this case, if the penalty weight is zero then the equation is the basic OLS; otherwise it adds a constraint on the coefficient. Which is what Peter Folm's answer says: if you are interested in the mean, use OLS; if in the median, use quantile regression. It is used when we want to predict the value of a … The residual is the difference between the value of the dependent variable predicted by the model and the true value of the dependent variable. Then the fit() method is called on this object to fit the regression line to the data. Ordinary least squares. The L2 term is equal to the square of the magnitude of the coefficients. If we use squared loss as a measure of success, quantile regression will be worse than OLS. Nevertheless, the researchers of the mentioned paper utilize exactly this term "pooled (panel) regressions" (p.24). Linear regression models have several applications in real life. Ordinary least squares regression. The REG command provides a simple yet flexible way to compute ordinary least squares regression estimates. In my understanding, a pooled OLS regression in STATA is provided through the command reg or regress (which is completely the same). The form of the model is the same as above with a single response variable (Y), but this time Y is predicted by multiple explanatory variables (X1 to X3). When you use software (like R, SAS, SPSS, etc.) to perform a regression analysis, you will receive a regression table as output that summarizes the results. OLS results cannot be trusted when the model is misspecified. ols_regress(object, ...); # S3 method for lm: ols_regress(object, ...). Options to the REG command permit the computation of regression diagnostics and two-stage least squares (instrumental variables) estimates. Math behind estimating the regression line.
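The mean-vs-median point can be checked numerically: the constant that minimizes squared loss is the mean (what OLS targets), while absolute loss picks the median (what median regression targets). A small sketch with made-up data containing one outlier:

```python
import numpy as np

# Five observations with one outlier
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])

# Scan candidate constants c and score each under the two losses
grid = np.linspace(0.0, 100.0, 10001)  # step 0.01
sq_loss = np.array([np.sum((y - c) ** 2) for c in grid])
abs_loss = np.array([np.sum(np.abs(y - c)) for c in grid])

best_sq = grid[np.argmin(sq_loss)]    # squared loss picks ~ mean(y) = 22.0
best_abs = grid[np.argmin(abs_loss)]  # absolute loss picks ~ median(y) = 3.0
print(best_sq, best_abs)
```

The outlier drags the squared-loss minimizer far from the bulk of the data, while the absolute-loss minimizer stays at 3, which is exactly the robustness trade-off described above.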
By definition, OLS regression gives equal weight to all observations, but when heteroscedasticity is present, the cases with larger disturbances, or data … For OLS, constants are included in X automatically unless the nocons option is True. Introduction to properties of OLS estimators. Ordinary Least Squares and Poisson Regression Models, by Luc Anselin (University of Illinois, Champaign-Urbana, IL): this note provides a brief description of the statistical background, estimators and model characteristics for a regression specification, estimated by means of both Ordinary Least Squares (OLS) and Poisson regression. 8.2.2.2 Interpreting Results. In multicollinearity, even though the least squares estimates (OLS) are unbiased, their variances are large, which … To fit a linear regression model, we select those features which have a high correlation with our target variable MEDV. The OLS() function of the statsmodels.api module is used to perform OLS regression. MLR is used extensively in econometrics and financial inference. Quantile regression, in general, and median regression, in particular, might be considered as an alternative to robust regression. Linear Regression Analysis using SPSS Statistics: Introduction. Ordinary least squares regression. use_t bool, optional. Related terms. Interpretation of OLS is much easier than other regression techniques; it can be quickly applied to data sets having 1000s of features. However, Soyer and Hogarth find that experts in applied regression analysis generally don't correctly assess the uncertainties involved in making predictions. Title: Tools for Building OLS Regression Models. Version: 0.4.0. Description: Tools for building OLS regression models. Linear regression is the next step up after correlation.
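As a sketch of the ridge penalty discussed here (illustrative only; real implementations usually leave the intercept unpenalized and standardize the predictors first), the closed form simply adds λ to the diagonal of X'X, and λ = 0 recovers plain OLS:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed form for min_b ||y - X b||^2 + lam * ||b||^2."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0.0, 0.1, 50)

b_ols = ridge_fit(X, y, 0.0)     # lam = 0 reduces to plain OLS
b_ridge = ridge_fit(X, y, 10.0)  # a positive lam shrinks the coefficients
print(b_ols, b_ridge)
```

The shrinkage is guaranteed: in the eigenbasis of X'X each OLS coefficient is scaled by a factor a/(a + λ) < 1, which is what stabilizes the estimates under multicollinearity.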
This is the predicted $$mpg$$ for a car with 0 cylinders and 0 horsepower. −2.26 is the coefficient of cylinder. The 0.08 value indicates that the instantaneous return for an additional year of education is 8 percent and the compounded return is 8.3 percent ($$e^{0.08} - 1 = 0.083$$). If you estimate a log-linear regression, a couple of outcomes for the coefficient on X produce the most likely relationships. Located in Ridge.py; this includes the feature of adding a ridge regression bias parameter into the regression. SAS does quantile regression using a little bit of proc iml. cov_kwds: list or None, optional. See linear_model.RegressionResults.get_robustcov_results for a description of required keywords for alternative covariance estimators. Other inputs. OLS regression with multiple explanatory variables: the OLS regression model can be extended to include multiple explanatory variables by simply adding additional variables to the equation. We also add a coefficient to control that penalty term. Ridge regression is based on Hoerl and Kennard (1970) and Hoerl, Kennard, Baldwin (1975). Here is how we interpret the three parameters that were estimated in the regression: 36.9 is the intercept in the model. However, it does not seem that this approach takes the actual panel structure into account. When estimating the regression line we are interested in finding the slope ($$B_1$$) and intercept ($$B_0$$) values that will make the predicted y values $$\hat y_i = B_0 + B_1 x_i$$ as close to the actual $$y_i$$ values as possible. Formally, we want to find the $$B$$ values that minimize the sum of squared errors: $$\sum (y_i - \hat y_i)^2$$. It returns an OLS object. Now we perform the regression of the response on the predictor, using the sm.OLS class and its initialization OLS(y, X) method.
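The $$B_0$$ and $$B_1$$ that minimize the sum of squared errors have the familiar closed forms; a short sketch on made-up points (the values are illustrative, not from the mpg example above):

```python
import numpy as np

def slope_intercept(x, y):
    """B1 = sum((x - xbar)*(y - ybar)) / sum((x - xbar)^2);  B0 = ybar - B1*xbar."""
    xbar, ybar = x.mean(), y.mean()
    b1 = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
    b0 = ybar - b1 * xbar
    return b0, b1

# Made-up points lying near the line y = 2x
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
b0, b1 = slope_intercept(x, y)
print(b0, b1)  # intercept ≈ 0.05, slope ≈ 1.99
```

This is the same answer `np.polyfit(x, y, 1)` returns, since both minimize the identical sum of squared errors.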
To do so, we will use the riverview.csv data to examine whether education level is related to income. The data contain five attributes collected from a random sample of $$n=32$$ employees working for the city of Riverview, a hypothetical midwestern city (see the data codebook). Description Example: robust_trend() fits a robust regression trend line using Huber loss. LEAST squares linear regression (also known as "least squared errors regression", "ordinary least squares", "OLS", or often just "least squares") is one of the most basic and most commonly used prediction techniques known to humankind, with applications in fields as diverse as statistics, finance, medicine, economics, and psychology. In linear regression, the model specification is that the dependent variable is a linear combination of the parameters (but need not be linear in the independent variables). Includes comprehensive regression output, heteroskedasticity tests, collinearity diagnostics, residual diagnostics, measures of influence, model fit assessment and variable selection procedures. For the validity of OLS estimates, there are assumptions made while running linear regression models. Multiple regression is an extension of linear (OLS) regression that uses more than one explanatory variable.
2021-09-28 09:47:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5855941772460938, "perplexity": 1673.7199133503657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060677.55/warc/CC-MAIN-20210928092646-20210928122646-00280.warc.gz"}
https://www.physicsforums.com/threads/evaluating-integrals-by-trigonometric-substitution.218684/
Evaluating integrals by trigonometric substitution 1. Feb 28, 2008 Integralien I have a few quick problems concerning evaluating integrals by trigonometric substitution. I guess I will just post five; that way if anyone can help with any, it would be greatly appreciated. Also: if anyone could inform me on how to input the actual equations onto this forum as I have seen in some posts, I would also greatly appreciate that. 1) The integral of (3x² + x - 1) over the square root of (1 - x²) dx 2) The integral of 1 all over the square root of (16 + 25x²) dx 3) The integral of 1 all over the square root of (36x² - 25) dx 4) The integral of 1 all over the square root of (3 - 2x - x²) dx 5) The integral of the square root of (10 - 4x + 4x²) dx Again I apologize if that is hard to understand; if anyone could tell me quickly how to offer an easier way to display problems, I would be very thankful. Brian 2. Feb 28, 2008 sutupidmath use LaTeX! 3. Feb 28, 2008 awvvu The best way to learn how to do these kinds of integrals is to practice with some worked out examples. If you google for 'trig sub', you'll find plenty of tutorials on how to do it. 4. Feb 28, 2008 jambaugh 1.) $$\int \frac{3x^2 + x - 1}{\sqrt{1 - x^2}}dx$$ 2.) $$\int \frac{dx}{\sqrt{16 + 25x^2}}$$ 3.) $$\int \frac{dx}{\sqrt{36x^2 - 25}}$$ 4.) $$\int \frac{dx}{\sqrt{3 - 2x - x^2}}$$ 5.) $$\int \sqrt{10-4x+4x^2}\,dx$$ For the last two you must complete the square to get the radicands in the form: $$\int\frac{dx}{\sqrt{a^2 \pm (x+c)^2}}=\int\frac{du}{\sqrt{a^2 \pm u^2}}$$ Notice that in all cases you have sums or differences of squares inside a radical. The method of trigonometric substitution resolves these using one of the forms of the Pythagorean identity: I. $$\cos^2(\theta) + \sin^2(\theta) = 1 \quad \Rightarrow \cos^2(\theta) = 1-\sin^2(\theta)$$ which helps if you substitute $$x = \frac{a}{b}\sin(\theta)$$ when you must deal with the form $$a^2 - b^2 x^2$$. II.
$$1 + \tan^2(\theta) = \sec^2(\theta) \Leftrightarrow \tan^2(\theta) = \sec^2(\theta) - 1$$ which helps if you substitute $$x = \frac{a}{b}\tan(\theta)$$ when you must deal with the form $$a^2 + b^2 x^2$$ and helps if you substitute $$x = \frac{a}{b}\sec(\theta)$$ when you must deal with the form $$b^2 x^2 - a^2$$. Making these substitutions should give you an integral which is a product of various trigonometric functions, each of which you must deal with on a case by case basis. This should all be outlined in your class notes and text. I'm not sure what else to tell you unless/until you state what exactly is tripping you up on a particular problem. I'll demonstrate another example to help get you started: $$\int \sqrt{4x^2 + 4x - 35}dx$$ First you must complete the square: $$4x^2 + 4x - 35$$ you want to rewrite $$4x^2+4x$$ as: $$(2x+b)^2 - b^2=4x^2+4bx$$ so $$b= 1$$. $$4x^2 + 4x - 35 = 4x^2+4x+1 - 1 - 35 = (2x+1)^2 -36$$ and you've completed the square. Let's call $$u=(2x+1)$$ and note that $$du=2dx$$; then the integral becomes: $$\int \sqrt{4x^2 + 4x - 35}\,dx=\frac{1}{2}\int\sqrt{u^2 - 36}\,du$$ Now we are ready to execute a trigonometric substitution. We will use the identity $$\sec^2(\theta) - 1=\tan^2(\theta)$$ so choosing $$u = 6\sec(\theta)$$ we get $$\sqrt{u^2 - 36}=\sqrt{36\sec^2(\theta) - 36} = \sqrt{36}\sqrt{\sec^2(\theta)-1}=6\sqrt{\tan^2(\theta)}= 6 \tan(\theta)$$ Note also that $$du = 6 \sec(\theta)\tan(\theta)d\theta$$ So the integral becomes: $$\frac{1}{2}\int\sqrt{u^2 - 36}du=\frac{1}{2}\int 6\tan(\theta)du=\frac{1}{2}\int 6\tan(\theta)\cdot 6\sec(\theta)\tan(\theta)d\theta$$ $$= 18\int \tan^2(\theta)\sec(\theta)d\theta$$ We've finished the trigonometric substitution and now must evaluate the integral. This example is do-able but not easy. I'll leave it for now. The substitution is what I wanted to demonstrate.
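For readers who want to finish the example, one route (a sketch, using $$\tan^2(\theta) = \sec^2(\theta) - 1$$ together with the standard antiderivatives $$\int\sec^3(\theta)\,d\theta = \tfrac{1}{2}\bigl(\sec(\theta)\tan(\theta) + \ln|\sec(\theta)+\tan(\theta)|\bigr)$$ and $$\int\sec(\theta)\,d\theta = \ln|\sec(\theta)+\tan(\theta)|$$) is:

```latex
\begin{align*}
18\int \tan^2(\theta)\sec(\theta)\,d\theta
  &= 18\int \bigl(\sec^3(\theta) - \sec(\theta)\bigr)\,d\theta \\
  &= 9\sec(\theta)\tan(\theta) - 9\ln\bigl|\sec(\theta) + \tan(\theta)\bigr| + C \\
  &= \frac{u\sqrt{u^2 - 36}}{4} - 9\ln\left|\frac{u + \sqrt{u^2 - 36}}{6}\right| + C,
\end{align*}
```

where the last line back-substitutes $$\sec(\theta) = u/6$$ and $$\tan(\theta) = \sqrt{u^2-36}/6$$, and $$u = 2x+1$$ restores the original variable.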
2017-03-29 07:34:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8103724718093872, "perplexity": 409.2248160240783}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190234.0/warc/CC-MAIN-20170322212950-00608-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.mathlearnit.com/what-is-51-97-as-a-decimal
# What is 51/97 as a decimal? ## Solution and how to convert 51 / 97 into a decimal 51 / 97 = 0.526 Converting 51/97 into 0.526 begins with understanding long division and which representation brings more clarity to a situation. Both represent numbers between integers, in some cases defining portions of whole numbers. In certain scenarios, fractions would make more sense. Ex: baking, meal prep, time discussion, etc. Decimals bring clarity to others, including test grades, sale prices, and contract numbers. Once we've decided the best way to represent the number, we can dive into how to convert 51/97 into 0.526 ## 51/97 is 51 divided by 97 Converting fractions to decimals is as simple as long division. 51 is being divided by 97. For some, this could be mental math. For others, we should set up the equation. The two parts of fractions are numerators and denominators. The numerator is the top number and the denominator is the bottom. And the line between them is the division bar. We must divide 51 by 97 to find out how many whole parts it will have, plus represent the remainder in decimal form. Here's how you set your equation: ### Numerator: 51 • Numerators are the parts of the equation, represented above the fraction bar or vinculum. Overall, 51 is a big number which means you'll have a significant number of parts to your equation. The bad news is that it's an odd number, which makes it harder to convert in your head. Large two-digit conversions are tough. Especially without a calculator. Now let's explore the denominator of the fraction. ### Denominator: 97 • Denominators differ from numerators because they represent the total number of parts, which can be found below the vinculum. 97 is a large number which means you should probably use a calculator. And the bad news is that odd numbers are tougher to simplify. Unfortunately, an odd denominator is difficult to simplify unless it's divisible by 3, 5 or 7. Ultimately, don't be afraid of double-digit denominators.
So without a calculator, let's convert 51/97 from a fraction to a decimal. ## Converting 51/97 to 0.526 ### Step 1: Set your long division bracket: denominator / numerator $$\require{enclose} 97 \enclose{longdiv}{ 51 }$$ We will be using the left-to-right method of calculation. This is the same method we all learned in school when dividing any number against itself, and we will use the same process for number conversion as well. ### Step 2: Extend your division problem $$\require{enclose} 00. \\ 97 \enclose{longdiv}{ 51.0 }$$ Because 97 goes into 51 less than one time, we can't divide less than a whole number. So we will have to extend our division problem. Add a decimal point to 51, your numerator, and add an additional zero. This doesn't add any issues to our denominator, but now we can divide 97 into 510. ### Step 3: Solve for how many whole groups you can divide 97 into 510 $$\require{enclose} 00.5 \\ 97 \enclose{longdiv}{ 51.0 }$$ Since we've extended our equation we can now divide our numbers, 97 into 510 (remember, we inserted a decimal point into our equation so we're not accidentally increasing our solution). Multiply this number by 97, the denominator, to get the first part of your answer! ### Step 4: Subtract the remainder $$\require{enclose} 00.5 \\ 97 \enclose{longdiv}{ 51.0 } \\ \underline{ 485 \phantom{00} } \\ 25 \phantom{0}$$ If you don't have a remainder, congrats! You've solved the problem and converted 51/97 into 0.526 If you still have a remainder, continue to the next step. ### Step 5: Repeat step 4 until you have no remainder or reach a decimal point you feel comfortable stopping at. Then round to the nearest digit. Remember, sometimes you won't get a remainder of zero, and that's okay. Round to the nearest digit and complete the conversion. There you have it! Converting the fraction 51/97 into a decimal is long division just as you learned in school. ### Why should you convert between fractions, decimals, and percentages?
Converting between fractions and decimals depends on the life situation you need to represent numbers in. Remember, fractions and decimals are both representations of whole numbers used to determine more specific parts of a number. And the same is true for percentages. It's common for students to hate learning about decimals and fractions because it is tedious. But each represents values in everyday life! Here are examples of when we should use each. ### When you should convert 51/97 into a decimal Finance or money related questions: imagine if you had \$200 to buy a new outfit. If you wait for the items to go on sale, you aren't going to see everything marked 51/97 off. No, you'll see it as a 52.6% discount, or 0.526 off the full price. ### When to convert 0.526 to 51/97 as a fraction Distance: any type of travel, running, or walking will leverage fractions. Distance is usually measured by the quarter mile, and car travel is usually spoken of the same way. ### Practice Decimal Conversion with your Classroom • If 51/97 = 0.526, what would it be as a percentage? • What is 1 + 51/97 in decimal form? • What is 1 - 51/97 in decimal form? • If we switched the numerator and denominator, what would be our new fraction? • What is 0.526 + 1/2?
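The five steps above amount to a short algorithm; here is a minimal Python sketch (the function name is ours, not from the article) that shifts the numerator by the desired number of decimal places, divides, and rounds the final digit half-up:

```python
def fraction_to_decimal(numerator, denominator, places=3):
    """Long division: shift, divide, and round the leftover remainder (half up)."""
    scaled = numerator * 10 ** places          # e.g. 51 -> 51000 for 3 places
    q, r = divmod(scaled, denominator)         # one big division = all the steps
    if 2 * r >= denominator:                   # leftover remainder rounds the last digit up
        q += 1
    whole, frac = divmod(q, 10 ** places)
    return f"{whole}.{frac:0{places}d}"

print(fraction_to_decimal(51, 97))  # prints 0.526
```

Multiplying by 10³ up front is the same as "adding a zero and bringing it down" three times in the worked example.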
2023-02-07 14:06:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4484684467315674, "perplexity": 833.7097981338127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500619.96/warc/CC-MAIN-20230207134453-20230207164453-00574.warc.gz"}
https://www.cuemath.com/ncert-solutions/number-systems-class-9-maths/
# NCERT Class 9 Maths Number Systems Chapter 1 starts with an introduction to the number system using some examples (using number lines), followed by exercise problems. Next, the chapter deals with a detailed explanation of irrational numbers, followed by coverage of real numbers and their decimal expansion. Later, the chapter explains in detail the representation of numbers on a number line, followed by operations on real numbers. The solutions to the exercise problems are provided by us and are downloadable in PDF format. ## Chapter 1 Ex.1.1 Question 1 Is zero a rational number? Can you write it in the form \begin{align}\frac{p}{q}\end{align} where \begin{align}p \end{align} and \begin{align}q \end{align} are integers and \begin{align}q \ne 0\end{align}? ### Solution Steps: Yes, zero is a rational number. Zero can be written as: \begin{align}\frac{0}{{{\rm{Any}}\,{\rm{non - zero}}\,{\rm{integer}}}}\end{align} \begin{align}\text{Example}:\frac{0}{1} = \frac{0}{{ - 2}}\,\,\end{align} Which is in the form of \begin{align}\frac{p}{q}\end{align}, where \begin{align}p \end{align} and \begin{align}q \end{align} are integers and \begin{align}q \ne 0\end{align}. ## Chapter 1 Ex.1.1 Question 2 Find six rational numbers between $$3$$ and $$4$$. ### Solution Steps: We can find any number of rational numbers between two rational numbers. First of all, we make the denominators the same by multiplying or dividing the given rational numbers by a suitable number. If the denominators are already the same, then depending on how many rational numbers we need to find, we add one to that count and multiply the numerator and denominator by the result.
\begin{align} 3&=\frac{3\times 7}{7}\,\,\,\text{and}\,\,\,\,\,4=\frac{4\times 7}{7} \\ 3&=\frac{21}{7}\,\,\,\,\,\,\,\,\,\text{and }\,\,\,\,4=\frac{28}{7} \\ \end{align} We can choose $$6$$ rational numbers as: \begin{align}\frac{22}{7},\frac{23}{7},\frac{24}{7},\frac{25}{7},\frac{26}{7}\,\,\text{and}\,\,\frac{27}{7}\end{align} ## Chapter 1 Ex.1.1 Question 3 Find five rational numbers between \begin{align}\frac{3}{4} \text{ and } \frac{4}{5}\end{align} ### Solution Steps: Since we make the denominators the same first, \begin{align} \frac{3}{4} &=\frac{3 \times 5}{4 \times 5} \\ &=\frac{15}{20} \\ \frac{4}{5} &=\frac{4 \times 4}{5 \times 4} \\ &=\frac{16}{20} \end{align} Now, we have to find $$5$$ rational numbers. \begin{align}\therefore\,\frac{15}{20}& =\frac{15\times 6}{20\times 6} \\ &=\frac{90}{120} \\ \frac{16}{20} &=\frac{16\times 6}{20\times 6} \\ & =\frac{96}{120} \\ \end{align} $$\therefore$$ Five rational numbers between $$\frac{3}{4}$$ and $$\frac{4}{5}$$ are \begin{align}\frac{91}{120}, \frac{92}{120}, \frac{93}{120}, \frac{94}{120} \text { and } \frac{95}{120} \end{align} ## Chapter 1 Ex.1.1 Question 4 State whether the following statements are true or false. Give reasons for your answers. ### Solution (i) Every natural number is a whole number. Steps: True, because the set of natural numbers is represented as \begin{align}\text{N} = \{1, 2, 3, \ldots\}\end{align} and the set of whole numbers is \begin{align}\text{W} = \{0, 1, 2, 3, \ldots\}.\end{align} We see that every natural number is present in the set of whole numbers. Also, we can see that as compared to the set of natural numbers, the set of whole numbers contains just one extra number, and that number is $$0$$. (ii) Every integer is a whole number. Steps: False. Negative integers are not present in the set of whole numbers. (iii) Every rational number is a whole number. Steps: False. For example, \begin{align}\frac{1}{2}\end{align} is a rational number which is not a whole number.
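The common-denominator trick used in Questions 2 and 3 can be automated with the standard library's Fraction type (the helper name is ours, for illustration). Note that Fraction reduces 92/120 to 23/30 and so on, but the values are the same as in the solutions above:

```python
from fractions import Fraction

def rationals_between(a, b, count):
    """Rescale a and b to a common denominator wide enough to fit `count` values."""
    d = a.denominator * b.denominator * (count + 1)  # the "add one and multiply" step
    lo = a.numerator * d // a.denominator            # a rewritten as lo/d
    hi = b.numerator * d // b.denominator            # b rewritten as hi/d
    return [Fraction(n, d) for n in range(lo + 1, hi)][:count]

print(rationals_between(Fraction(3, 4), Fraction(4, 5), 5))
```

With a = 3/4, b = 4/5 and count = 5 this scales both fractions to /120 and returns the five values 91/120 through 95/120, matching the worked solution.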
https://ec.gateoverflow.in/568/gate2014-4-36
An N-type semiconductor having uniform doping is biased as shown in the figure. If $E_C$ is the lowest energy level of the conduction band, $E_V$ is the highest energy level of the valence band and $E_F$ is the Fermi level, which one of the following represents the energy band diagram for the biased N-type semiconductor?
https://manual.frontistr.com/en/tutorial/tutorial_17.html
# Frequency Response Analysis

## Frequency Response Analysis

This analysis uses the data of tutorial/17_freq_beam.

### Analysis target

The analysis target is a cantilevered beam. The geometry is shown in Figure 4.17.1 and the mesh data in Figure 4.17.2.

| Item | Description | Notes / Reference |
| --- | --- | --- |
| Type of analysis | Frequency response analysis | !SOLUTION,TYPE=EIGEN / !SOLUTION,TYPE=DYNAMIC |
| Number of nodes | 55 | |
| Number of elements | 126 | |
| Element type | Four node tetrahedral element | !ELEMENT,TYPE=341 |
| Material name | Material-1 | !MATERIAL,NAME=Material-1 |
| Boundary conditions | Restraint, Concentrated force, eigen value | !EIGENREAD |
| Matrix solution | CG/SSOR | !SOLVER,METHOD=CG,PRECOND=1 |

Fig. 4.17.1: Shape of the cantilever

Fig. 4.17.2: Mesh data of the cantilever

### Analysis content

The end of the cantilevered beam to be analyzed is fully constrained, and a frequency response analysis is performed by applying concentrated loads to two nodes at the opposite end. After analyzing eigenvalues up to the 10th order under the same boundary conditions, the analysis is performed using eigenvalues and eigenvectors up to the 5th order. The analysis control data for the frequency response analysis is shown below.

#### Analysis control data beam_eigen.cnt

```
# Control File for FISTR
!VERSION
3
!WRITE,RESULT
!WRITE,VISUAL
!SOLUTION, TYPE=EIGEN
!EIGEN
10, 1.0E-8, 60
!BOUNDARY
_PickedSet4, 1, 3, 0.0
!SOLVER,METHOD=CG,PRECOND=1,ITERLOG=NO,TIMELOG=YES
10000, 1
1.0e-8, 1.0, 0.0
!VISUAL,metod=PSR
!surface_num=1
!surface
1
!output_type=VTK
!END
```

#### Analysis control data beam_freq.cnt

```
# Control File for FISTR
!VERSION
3
!WRITE,RESULT
!WRITE,VISUAL
!SOLUTION, TYPE=DYNAMIC
!DYNAMIC
11 , 2
14000, 16000, 20, 15000.0
0.0, 6.6e-5
1, 1, 0.0, 7.2E-7
10, 2, 1
1, 1, 1, 1, 1, 1
eigen_0.log
1, 5
!BOUNDARY
_PickedSet4, 1, 3, 0.0
_PickedSet5, 2, 1.
_PickedSet6, 2, 1.
```
```
!SOLVER,METHOD=CG,PRECOND=1,ITERLOG=NO,TIMELOG=YES
10000, 1
1.0e-8, 1.0, 0.0
!VISUAL,metod=PSR
!surface_num=1
!surface
1
!output_type=VTK
!END
```

### Analysis procedure

First, copy hecmw_ctrl_eigen.dat to hecmw_ctrl.dat for the eigenvalue analysis, and then run the eigenvalue analysis. Next, copy hecmw_ctrl_freq.dat to hecmw_ctrl.dat and rename 0.log to eigen_0.log (the name specified in the control data for the frequency response analysis), and then perform the frequency response analysis.

```
$ cp hecmw_ctrl_eigen.dat hecmw_ctrl.dat
$ fistr1 -t 4
$ mv 0.log eigen_0.log
$ cp hecmw_ctrl_freq.dat hecmw_ctrl.dat
$ fistr1 -t 4
```

### Analysis results

The relationship between the frequency and the displacement amplitude of a monitoring node (node number 1) specified in the analysis control data is shown in Figure 4.17.3, created using Microsoft Excel. A part of the analysis result log file is shown below as numerical data for the analysis results.

Fig. 4.17.3: Relationship between frequency and displacement amplitude of the monitoring node

#### Analysis result log 0.log

```
fstr_setup: OK
Rayleigh alpha: 0.0000000000000000
Rayleigh beta: 7.1999999999999999E-007
start mode= 1
end mode= 5
start frequency: 14000.000000000000
end frequency: 16000.000000000000
number of the sampling points 20
monitor nodeid= 1
14100.000000000000 [Hz] : 8.3935554530220141E-002
14100.000000000000 [Hz] : 1 .res
14200.000000000000 [Hz] : 9.1211083510158913E-002
14200.000000000000 [Hz] : 2 .res
14300.000000000000 [Hz] : 9.9579777897537178E-002
14300.000000000000 [Hz] : 3 .res
14400.000000000000 [Hz] : 0.10914967595035865
14400.000000000000 [Hz] : 4 .res
14500.000000000000 [Hz] : 0.11992223203402431
14500.000000000000 [Hz] : 5 .res
```
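The figure was produced with Excel, but the frequency/amplitude pairs can also be pulled out of the log programmatically. The helper below is not part of the FrontISTR toolchain; it is a sketch that assumes only the log format shown above, where amplitude lines read `<frequency> [Hz] : <amplitude>` and the paired `... : <n> .res` lines report result-file indices:

```python
import re

# Matches amplitude lines such as
#   "14100.000000000000 [Hz] : 8.3935554530220141E-002"
# The "... [Hz] : 1 .res" lines do not match because of the trailing ".res".
AMPLITUDE_LINE = re.compile(r"^\s*([0-9.]+)\s*\[Hz\]\s*:\s*([0-9.Ee+-]+)\s*$")

def parse_response(log_text):
    """Return (frequency_hz, displacement_amplitude) pairs from the log."""
    pairs = []
    for line in log_text.splitlines():
        m = AMPLITUDE_LINE.match(line)
        if m:
            pairs.append((float(m.group(1)), float(m.group(2))))
    return pairs
```

The resulting pairs can then be plotted (for example with matplotlib) to reproduce the frequency-amplitude curve of Figure 4.17.3.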
https://www.lesswrong.com/users/ege-erdil/replies
# All of Ege Erdil's Comments + Replies

Ege Erdil's Shortform

It's not the logarithm of the BTC price that is a martingale, it's the BTC price itself, under a risk-neutral measure if that makes you more comfortable (since BTC derivatives would be priced by the same risk-neutral measure pricing BTC itself).

Ege Erdil's Shortform

Recently I saw that Hypermind is offering a prediction market on which threshold BTC will hit first: $40k or $60k? You can find the market on this list. I find this funny because for this kind of question it's going to be a very good approximation to assume BTC is a martingale, and then the optional stopping theorem gives the answer to the question immediately: if BTC is currently priced at $40k < X < $60k, then the probability of hitting $40k first is ($60k - X)/$20k. Since BTC itself is going to be priced much more efficiently than this small volum... (read more)

BackToBaseball (18d): ln(41.85/40) / ln(60/40) = 11.2%
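The optional stopping claim can be sanity-checked numerically. The sketch below is a hypothetical helper, not from the comment: a symmetric random walk stands in for the martingale price, and the start price, step size, and trial count are illustrative choices rather than the actual Hypermind numbers.

```python
import random

def prob_hits_lower_first(start, lower, upper, step=500.0, trials=10_000, seed=0):
    """Monte Carlo estimate of the probability that a driftless
    (martingale) price path hits `lower` before `upper`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = start
        while lower < x < upper:
            # Symmetric +/- step keeps the path a martingale.
            x += step if rng.random() < 0.5 else -step
        if x <= lower:
            hits += 1
    return hits / trials

# Optional stopping predicts (upper - start) / (upper - lower) = 0.75 here.
print(prob_hits_lower_first(45_000, 40_000, 60_000))
```

The estimate lands close to the closed-form answer regardless of the step size, which is the point of the optional stopping theorem: only the start price and the two barriers matter for a martingale.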
I mean it's probably just because of what happened to be implemented in brain hardware or something and I didn't have the sense that it was what the question was about. Or is it about non-realist probabilistic theories not specifying what outcomes are impossible in realist sense? Then I don't understand what's confusing about treating probabilistic part normatively - that just what being non-realist about probability means. Why did Europe conquer the world? Thank you for the link. I'm curious what the table would look like if we examined the top 10 or 20 cities instead of just those tied for the top position. I think this is quite a tall order for ancient times, but a source I've found useful is this video by Ollie Bye on YouTube. It's possible to move his estimates around by factors of 2 or so at various points, but I think they are correct when it comes to the order of magnitude of historical city populations. Who does "they" refer to in this sentence? It could mean two very different things. Edited the... (read more) Why did Europe conquer the world? I'd be happy to be corrected if I'm wrong. Do you have more precise numbers? There's obviously quite a bit of uncertainty when it comes to ancient city populations, but Wikipedia has a nice aggregation of three different sources which list the largest city in the world at various times in history. Estimates of city populations can vary by a factor of 2 or more across different sources, but the overall picture is that sometimes the largest city in the world was Chinese and sometimes it was not. My reference point for technological regression after the fa ... (read more) 3lsusr1moThank you for the link. I'm curious what the table would look like if we examined the top 10 or 20 cities instead of just those tied for the top position. Who does "they" refer to in this sentence? It could mean two very different things. Why did Europe conquer the world? 
This post seems to be riddled with inaccuracies and misleading statements. I'll just name a few here, since documenting all of them would take more time than I'm willing to spare. For most of history, China was the center of civilization. It had the biggest cities, the most complex government, the highest quality manufacturing, the most industrial capacity, the most advanced technology, the best historical records and the largest armies. It dominated East Asia at the center of an elaborate tribute system for a thousand years. This is simply false. China ... (read more) 8lsusr1moI'd be happy to be corrected if I'm wrong. Do you have more precise numbers? Roman concrete fell out of use after the fall of the Western Roman Empire. It is my impression that not many aqueducts were built either. My reference point for technological regression after the fall of the Western Roman Empire comes from science rather than technology. My understanding of the Renaissance (from reading Destiny Disrupted [https://www.lesswrong.com/posts/DRXW6CrHwkH4rfuGi/book-review-destiny-disrupted] ) is that much of European philosophy (including science) only survived because it was preserved by the Arabic-speaking world. I agree. This is why Europeans choosing the terms of engagement was so important. They won when the Mughal and Qing empires were at their weakest. What is a probabilistic physical theory? It's true that both of these outcomes have a small chance of not-happening. But with enough samples, the outcome can be treated for all intents and purposes as a certainty. I agree with this in practice, but the question is philosophical in nature and this move doesn't really help you get past the "firewall" between probabilistic and non-probabilistic claims at all. If you don't already have a prior reason to care about probabilities, results like the law of large numbers or the central limit theorem can't convince you to care about it because they are a... 
(read more)

What is a probabilistic physical theory?

So I agree with most of what you say here, and as a Metaculus user I have some sympathy for trying to make proper scoring rules the epistemological basis of "probability-speak". There are some problems with it, like different proper scoring rules giving different incentives to people when it comes to distributing finite resources across many questions to acquire info about them, but broadly I think the norm of scoring models (or even individual forecasters) by their Brier score or log score and trying to maximize your own score is a good norm. There are proba... (read more)

davidad (14d): I think it is not circular, though I can imagine why it seems so. Let me try to elaborate the order of operations as I see it.

1. Syntax: Accept that a probability-sentence like "P(there will be a sea-battle tomorrow) ≥ 0.4" is at least syntactically parseable, i.e. not gibberish, even if it is semantically disqualified from being true (like "the present King of France is a human").
   * This can be formalized as adding a new term-former P : ClassicalSentence → ProbabilityTerm, other term-formers such as + : ProbabilityTerm × ProbabilityTerm → ProbabilityTerm, constants C : Q → ProbabilityTerm, and finally a predicate ≥0 : ProbabilityTerm → ProbabilitySentence.
2. Logic: Accept that probability-sentences can be the premises and/or conclusions of valid deductions, such as P(A) ≥ 0.4, P(B∧A) ≥ 0.5⋅P(A) ⊢ P(B) ≥ 0.2.
   * Axiomatizing the valid deductions in a sound and complete way is not as easy as it may seem, because of the interaction with various expressive features one might want (native conditional probabilities, higher-order probabilities, polynomial inequalities) and model-theoretic and complexity-theoretic issues (pathological models, undecidable satisfiability).
Some contenders: * LPWF [https://www.academia.edu/download/30763898/Sven_Hartmann_Foundations_of_Information_and_Kn.pdf#page=253] , which has polynomial inequalities but not higher-order probabilities * LCP [http://www.doiserbia.nb.rs/img/doi/0350-1302/2007/0350-13020796141O.pdf] , which has higher-order conditional probabilities but not inequalities * LPP [https://link.springer.com/chapter/10.1007%2F978-3-319-47012-2_3]2, which has neither, but has decidable satisfiability. * Anyway, the basic axioms about probability that we need for such logics are: * P(α)≥0 * P(⊤)=1 * P(⊥)=0 * P(α)+P(β)=P(α∨ What is a probabilistic physical theory? Negations of finitely observable predicates are typically not finitely observable. [0,0.5) is finitely observable as a subset of [0,1], because if the true value is in [0,0.5) then there necessarily exists a finite precision with which we can know that. But its negation, [0.5,1], is not finitely observable, because I'd the true value is exactly 0.5, no finite-precision measurement can establish with certainty that the value is in [0.5,1], even though it is. Ah, I didn't realize that's what you mean by "finitely observable" - something like "if the propos... (read more) 5davidad1moIt's nice if the opens of X can be internalized as the continuous functions X→TV for some space of truth values TV with a distinguished point ⊤ such that x∈O⇔O(x )=⊤. For this, it is necessary (and sufficient) for the open sets of TV to be generated by {⊤}. I could instead ask for a distinguished point ⊥ such that x∉O⇔ O(x)=⊥, and for this it is necessary and sufficient for the open sets of TV to be generated by TV∖{⊥}. Put them together, and you get that TV must be the Sierpiński space [https://en.wikipedia.org/wiki/Sierpi%C5%84ski_space]: a "true" result (⊤∈TV) is finitely observable ({⊤} is open), but a "false" result is not ({⊥} is not open). Yes, constructively we do not know a proposition until we find a proof. 
If we find a proof, it is definitely true. If we do not find a proof, maybe it is false, or maybe we have not searched hard enough—we don't know. Also related is that the Sierpiński space is the smallest model of intuitionistic propositional logic (with its topological semantics) that rejects LEM, and any classical tautology rejected by Sierpiński space is intuitionistically equivalent to LEM. There's a sense in which the difference between classical logic and intuitionistic logic is precisely the assumption that all open sets of possibility-space are clopen (which, if we further assume T0, leads to an ontology where possibility-space is necessarily discrete). (Of course it's not literally a theorem of classical logic that all open sets are clopen; this is a metatheoretic claim about semantic models, not about objects internal to either logic.) See A Semantic Hierarchy for Intuitionistic Logic [https://escholarship.org/content/qt2vp2x4rx/qt2vp2x4rx_noSplash_2bc40e4f9d71c7442df59051c9139bde.pdf?t=poxut0#page15] . What is a probabilistic physical theory? What I'm sneaking in is that both the σ-algebra structure and the topological structure on a scientifically meaningful space ought to be generated by the (finitely) observable predicates. In my experience, this prescription doesn't contradict with standard examples, and situations to which it's "difficult to generalize" feel confused and/or pathological until this is sorted out. It's not clear to me how finitely observable predicates would generate a topology. For a sigma algebra it's straightforward to do the generation because they are closed under com... (read more) 4davidad1mo(I agree with your last paragraph—this thread is interesting but unfortunately beside the point since probabilistic theories are obviously trying to "say more" than just their merely nondeterministic shadows.) Negations of finitely observable predicates are typically not finitely observable. 
[0,0.5) is finitely observable as a subset of [0,1], because if the true value is in [0,0.5) then there necessarily exists a finite precision with which we can know that. But its negation, [0.5,1], is not finitely observable, because if the true value is exactly 0.5, no finite-precision measurement can establish with certainty that the value is in [0.5,1], even though it is. The general case of why observables form a topology is more interesting. Finite intersections of finite observables are finitely observable because I can check each one in series and still need only finite observation in total. Countable unions of finite observables are finitely observable because I can check them in parallel, and if any are true then its check will succeed after only finite observation in total. Uncountable unions are thornier, but arguably unnecessary (they're redundant with countable unions if the space is hereditarily Lindelöf [https://en.wikipedia.org/wiki/Lindel%C3%B6f_space], for which being Polish is sufficient, or more generally second-countable), and can be accommodated by allowing the observer to hypercompute. This is very much beside the point, but if you are still interested anyway, check out Escardó's monograph on the topic [https://www.cs.bham.ac.uk/~mhe/papers/entcs87.pdf#page15]. What is a probabilistic physical theory? I think you can justify probability assessments in some situations using Dutch book style arguments combined with the situation itself having some kind of symmetry which the measure must be invariant under, but this kind of argument doesn't generalize to any kind of messy real world situation in which you have to make a forecast on something, and it still doesn't give some "physical interpretation" to the probabilities beyond "if you make bets then your odds have to form a probability measure, and they better respect the symmetries of the physical theory y... 
(read more) 1tivelen1moPerhaps such probabilities are based on intuition, and happen to be roughly accurate because the intuition has formed as a causal result of factors influencing the event? In order to be explicitly justified, one would need an explicit justification of intuition, or at least intuition within the field of knowledge in question. I would say that such intuitions in many fields are too error-prone to justify any kind of accurate probability assessment. My personal answer then would be to discard probability assessments that cannot be justified, unless you have sufficient trust in your intuition about the statement in question. What is your thinking on this prong of the dilemma (retracting your assessment of reasonableness on these probability assessments for which you have no justification)? What is a probabilistic physical theory? Believing in the probabilistic theory of quantum mechanics means we expect to see the same distribution of photon hits in real life. No it doesn't! That's the whole point of my question. "Believing the probabilistic theory of quantum mechanics" means you expect to see the same distribution of photon hits with a very high probability (say ), but if you have not justified what the connection of probabilities to real world outcomes is to begin with, that doesn't help us. Probabilistic claims just form a closed graph of reference in which they only refer ... (read more) 1DaemonicSigil1moOkay, thanks for clarifying the question. If I gave you the following answer, would you say that it counts as a connection to real-world outcomes? The real world outcome is that I run a double slit experiment with a billion photons, and plot the hit locations in a histogram. The heights of the bars of the graph closely match the probability distribution I previously calculated. What about 1-time events, each corresponding to a totally unique physical situation? Simple. 
For each 1 time event, I bet a small amount of money on the result, at odds at least as good as the odds my theory gives for that result. The real world outcome is that after betting on many such events, I've ended up making a profit. It's true that both of these outcomes have a small chance of not-happening. But with enough samples, the outcome can be treated for all intents and purposes as a certainty. I explained above why the "continuous distribution" objection to this doesn't hold. What is a probabilistic physical theory? You spend a few paragraphs puzzling about how a probabilistic theory could be falsified. As you say, observing an event in a null set or a meagre set does not do the trick. But observing an event which is disjoint from the support of the theory's measure does falsify it. Support is a very deep concept; see this category-theoretic treatise that builds up to it. You can add that as an additional axiom to some theory, sure. It's not clear to me why that is the correct notion to have, especially since you're adding some extra information about the topology o... (read more) 5davidad1moOkay, I now think both of my guesses about what's really being asked were misses. Maybe I will try again with a new answer; meanwhile, I'll respond to your points here. You're right that I'm sneaking something in when invoking support because it depends on the sample space having a topological structure, which cannot typically be extracted from just a measurable structure. What I'm sneaking in is that both the σ-algebra structure and the topological structure on a scientifically meaningful space ought to be generated by the (finitely) observable predicates. In my experience, this prescription doesn't contradict with standard examples, and situations to which it's "difficult to generalize" feel confused and/or pathological until this is sorted out. 
So, in a sense I'm saying, you're right that a probability space (X,Σ,P) by itself doesn't connect to reality—because it lacks the information about which events in Σ are opens. As to why I privilege null sets over meagre sets: null sets are those to which the probability measure assigns zero value, while meagre sets are independent of the probability measure—the question of which sets are meagre is determined entirely by the topology. If the space is Polish (or more generally, any Baire space), then meagre sets are never inhabited open sets, so they can never conceivably be observations, therefore they can't be used to falsify a theory. But, given that I endorse sneaking in a topology, I feel obligated to examine meagre sets from the same point of view, i.e. treating the topology as a statement about which predicates are finitely observable, and see what role meagre sets then have in philosophy of science. Meagre sets are not the simplest concept; the best way I've found to do this is via the characterization of meagre sets with the Banach–Mazur game [https://en.wikipedia.org/wiki/Banach%E2%80%93Mazur_game]: * Suppose Alice is trying to claim a predicate X is true about the world, and Bob is trying to claim it isn What is a probabilistic physical theory? Here I'm using "Bayesian" as an adjective which refers to a particular interpretation of the probability calculus, namely one where agents have credences about an event and they are supposed to set those credences equal to the "physical probabilities" coming from the theory and then make decisions according to that. It's not the mere acceptance of Bayes' rule that makes someone a Bayesian - Bayes' rule is a theorem so no matter how you interpret the probability calculus you're going to believe in it. With this sense of "Bayesian", the epistemic content adde... 
(read more) 1JBlack1moThe use of the word "Bayesian" here means that you treat credences according to the same mathematical rules as probabilities, including the use of Bayes' rule. That's all. What is a probabilistic physical theory? The question is about the apparently epiphenomenal status of the probability measure and how to reconcile that with the probability measure actually adding information content to the theory. This answer is obviously "true", but it doesn't actually address my question. What is a probabilistic physical theory? This is not true. You can have a model of thermodynamics that is statistical in nature and so has this property, but thermodynamics itself doesn't tell you what entropy is, and the second law is formulated deterministically. What is a probabilistic physical theory? As I see it, probability is essentially just a measure of our ignorance, or the ignorance of any model that's used to make predictions. An event with a probability of 0.5 implies that in half of all situations where I have information indistinguishable from the information I have now, this event will occur; in the other half of all such indistinguishable situations, it won't happen. Here I think you're mixing two different approaches. One is the Bayesian apporach: it comes down to saying probabilistic theories are normative. The question is how to reconc... (read more) What is a probabilistic physical theory? I don't know what you mean here. One of my goals is to get a better answer to this question than what I'm currently able to give, so by definition getting such an answer would "help me achieve my goals". If you mean something less trivial than that, well, it also doesn't help me to achieve my goals to know if the Riemann hypothesis is true or false, but RH is nevertheless one of the most interesting questions I know of and definitely worth wondering about. I can't know how an answer I don't know about would impact my beliefs or behavior, but my guess is tha... 
(read more)

tivelen (1mo): My approach was not helpful at all, which I can clearly see now. I'll take another stab at your question. You think it is reasonable to assign probabilities, but you also cannot explain how you do so or justify it. You are looking for such an explanation or justification, so that your assessment of reasonableness is backed by actual reason. Are you unable to justify any probability assessments at all? Or is there some specific subset that you're having trouble with? Or have I failed to understand your question properly?

Retail Investor Advantages

To elaborate on the information acquisition cost point: small pieces of information won't be worth tying up a big amount of capital for. If you have a company worth $1 billion and you have very good insider info that a project of theirs that the market implicitly values at $10 million is going to flop, if the only way you can express that opinion is to short the stock of the whole company, that's likely not even worth it. Even with 10% margin you'd be at best making a 10% return on capital over the time horizon that the market figures out the project is bad ... (read more)

Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists

Ah, I see. I missed that part of the post for some reason. In this setup the update you're doing is fine, but I think measuring the evidence for the hypothesis in terms of "bits" can still mislead people here. You've tuned your example so that the likelihood ratio is equal to two and there are only two possible outcomes, while in general there's no reason for those two values to be equal.
Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists This is a rather pedantic remark that doesn't have much relevance to the primary content of the post (EDIT: it's also based on a misunderstanding of what the post is actually doing - I missed that an explicit prior is specified which invalidates the concern raised here), but If such a coin is flipped ten times by someone who doesn't make literally false statements, who then reports that the 4th, 6th, and 9th flips came up Heads, then the update to our beliefs about the coin depends on what algorithm the not-lying[1] reporter used to decide to report those 5Zack_M_Davis2moThanks for this analysis! However— I'm not. The post specifies "a coin that is either biased to land Heads 2/3rds of the time, or Tails 2/3rds of the time"—that is (and maybe I should have been more explicit), I'm saying our prior belief about the coin's bias is just the discrete distribution {"1/3 Heads, 2/3 Tails": 0.5, "2/3 Heads, 1/3 Tails": 0.5}. I agree that a beta prior would be more "realistic" in the sense of applying to a wider range of scenarios (your uncertainty about a parameter is usually continuous, rather than "it's either this, or it's that, with equal probability"), but I wanted to make the math easy on myself and my readers. Laplace's rule of succession Yeah, Neyman's proof of Laplace's version of the rule of succession is nice. The reason I think this kind of approach can't give the full strength of the conjugate prior approach is that I think there's a kind of "irreducible complexity" to computing for non-integer values of . The only easy proof I know goes through the connection to the gamma function. If you stick only to integer values there are easier ways of doing the computation, and the linearity of expectation argument given by Neyman is one way to do it. One concrete example of the ru... (read more) 1. What matters is that it's something you can invest in. 
Choosing the S&P 500 is not really that important in particular. There doesn't have to be a single company whose stock is perfectly correlated with the S&P 500 (though nowadays we have ETFs which more or less serve this purpose) - you can simply create your own value-weighted stock index and rebalance it on a daily or weekly basis to adjust for the changing weights over time, and nothing will change about the main arguments. This is actually what the authors...

Over 20 years that's possible (and I think it's in fact true), but the paper I cite in the post gives some data which makes it unlikely that the whole past record is outperformance. It's hard to square 150 years of over 6% mean annual equity premium, with 20% annual standard deviation, with the idea that the true stock return is actually the same as the return on T-bills. The "true" premium might be lower than 6%, but not by too much, and we're still left with more or less the same puzzle even if we assume that.

Average probabilities, not log odds

That's alright, it's partly on me for not being clear enough in my original comment. I think information aggregation from different experts is in general a nontrivial and context-dependent problem. If you're trying to actually add up different forecasts to obtain some composite result, it's probably better to average probabilities; but aside from my toy model in the original comment, "field data" from Metaculus also backs up the idea that on single binary questions the median forecast or the average of log odds consistently beats the average of probabilities. I agree with Simo...

Average probabilities, not log odds

I don't know what you're talking about here. You don't need any nonlinear functions to recover the probability. The probability implied by M(X) is just M(X), and the probability you should forecast having seen M(X) is therefore M(X), since M is a martingale. I think you don't really understand what my example is doing.
M is not a Brownian motion and its increments are not Gaussian; it's a nonlinear transform of a drift-diffusion process by a sigmoid which takes valu...

AlexMennen (reply): Oh, you're right, sorry; I'd misinterpreted you as saying that M represented the log odds. What you actually did was far more sensible than that.

Thanks for the comment - I'm glad people don't take what I said at face value, since it's often not correct... What I actually maximized is (something like, though not quite) the expected value of the logarithm of the return, i.e. what you'd do if you used the Kelly criterion. This is the correct way to maximize long-run expected returns, but it's not the same thing as maximizing expected returns over any given time horizon. My computation of E[(Δ log S)^2] is correct, but the problem comes in elsewhere. Obviously if your goal is to just maximize ex...

Average probabilities, not log odds

The experts in my model are designed to be perfectly calibrated. What do you mean by "they are overconfident"?

AlexMennen (reply): The probability of the event is the expected value of the probability implied by M(T). The experts report M(X) for a random variable X sampled uniformly in [0,T]. M(T) differs from M(X) by a Gaussian of mean 0, and hence, knowing M(X), the expected value of M(T) is just M(X). But we want the expected value of the probability implied by M(T), which is different from the probability implied by the expected value of M(T), because expected value does not commute with nonlinear functions. So an expert reporting the probability implied by M(X) is not well-calibrated, even though an expert reporting M(X) is giving an unbiased estimate of M(T).

Average probabilities, not log odds

I did a Monte Carlo simulation for this on my own, whose Python script you can find on Pastebin. Consider the following model: there is a bounded martingale M taking values in [0, 1] and with initial value 1/2.
The exact process I considered was a Brownian motion-like model for the log odds, combined with some bias coming from Ito's lemma to make the sigmoid-transformed process into a martingale. This process goes on until some time T, and then the event is resolved according to the probability implied by M(T). You have n "experts"...

AlexMennen (reply): Nope! If n=1, then you do know which expert has the most information, and you don't do best by copying his forecast, because the experts in your model are overconfident. See my reply to ADifferentAnonymous [https://www.lesswrong.com/posts/b2jH8GqNhoE5vguni/average-probabilities-not-log-odds?commentId=evLsxypzBa4kNHmEt]. But well-done constructing a model in which average log odds outperforms average probabilities for compelling reasons.

NOTE: Don't believe everything I said in this comment! I elaborate on some of the problems with it in the responses, but I'm leaving this original comment up because I think it's instructive even though it's not correct.

There is a theoretical account for why portfolios leveraged beyond a certain point would have poor returns even if prices follow a random process with (almost surely) continuous sample paths: leverage decay. If you could continuously rebalance a leveraged portfolio this would not be an issue, but if you can't do that, then leverage exhibits...

paulfchristiano (reply): I didn't follow the math (calculus with stochastic processes is pretty confusing) but something seems obviously wrong here. I think probably your calculation of E[(Δ log S)^2] is wrong? Maybe I'm confused, but in addition to common sense and having done the calculation in other ways, the following argument seems pretty solid:

* Regardless of k, if you consider a short enough period of time, then with overwhelming probability at all times your total assets will be between 0.999 and 1.001.
* So no matter how I choose to rebalance, at all times my total exposure will be between 0.999k and 1.001k.
* And if my exposure is between 0.999k and 1.001k, then my expected returns over any time period T are between 0.999kTμ and 1.001kTμ. (Where μ is the expected return of the underlying; maybe that's different from your μ, but it's definitely just some number.)
* So regardless of how I rebalance, doubling k approximately doubles my expected returns.
* So clearly, for short enough time periods, your equation for the optimum can't be right.
* But actually maximizing EV over a long time period is equivalent to maximizing it over each short time period (since final wealth is just linear in your wealth at the end of the initial short period), so the optimum over arbitrary time periods is also to max leverage.

What Do GDP Growth Curves Really Mean?

I think there's some kind of miscommunication going on here, because I think what you're saying is trivially wrong while you seem convinced that it's correct despite knowing about my point of view. "No it doesn't. It weighs them by price (i.e. marginal utility = production opportunity cost) at the quantities consumed. That is not a good proxy for how important they actually were to consumers." Yes it is - on the margin. You can't hope for it to be globally good because of the argument I gave, but locally of course you can; that's what marginal utility means! T...

Petrov Day Retrospective: 2021

Strong upvote for the comment. I think the situation is even worse than what you say: the fact is that had Petrov simply reported the inaccurate information in his possession up the chain of command, as he was being pressured to do by his own subordinates, nobody would have heard of his name and nobody would have blamed him for doing his job. He could have even informed his superiors of his personal opinion that the information he was passing to them was inaccurate and left them to make the final decision about what to do. Not only would he have not been bl...

What Do GDP Growth Curves Really Mean?
The reason I bring up the weighting of GDP growth is that there are some "revolutions" which are irrelevant and some "revolutions" which are relevant from whatever perspective you're judging "craziness". In particular, it's absurd to think that the year 2058 will be crazy because suddenly people will be able to drink wine manufactured in the year 2058 at a low cost. Consider this claim from your post: "When we see slow, mostly-steady real GDP growth curves, that mostly tells us about the slow and steady increase in production of things which haven’t been revo..."

johnswentworth (reply): No it doesn't. It weighs them by price (i.e. marginal utility = production opportunity cost) at the quantities consumed. That is not a good proxy for how important they actually were to consumers. I'm mostly operationalizing "revolution" as a big drop in production cost. I think the wine example is conflating two different "prices": the consumer's marginal utility, and the opportunity cost to produce the wine. The latter is at least extremely large, and plausibly infinite, but the former is not. If we actually somehow obtained a pallet of 2058 wine today, it would be quite a novelty, but it would sell at auction for a decidedly non-infinite price. (And if people realized how quickly its value would depreciate, it might even sell for a relatively low price, assuming there were enough supply to satisfy a few rich novelty-buyers.) The two prices are not currently equal because production has hit its lower bound (i.e. zero). More generally, there are lots of things which would be expensive to produce today, will likely be cheap to produce in the future, but don't create all that much value. We just don't produce any of them... To think properly about how crazy the future would be, we need to think about the consumer's perspective, not the production cost. A technological revolution does typically involve a big drop in production cost.
Note, however, that this does not necessarily mean a big drop in marginal utility. Now, I do think there's still a core point of your argument which survives: the thing it tells us is that the huge revolution in electronics produced goods whose marginal utility is low at current consumption/production levels. When I say "real GDP growth curves mostly tell us about the slow and steady increase in production of things which haven’t been revolutionized", I mean something orthogonal to that. I mean that the real GDP growth curve looks almost-the-same in a world without a big electronics revolution as it does in a world with a big ele...

What Do GDP Growth Curves Really Mean?

In addition, I'm confused about how you can agree with both my comment and your post at the same time. You explicitly say, for example, that: Also, "GDP (as it's actually calculated) measures production growth in the least-revolutionized goods" still seems like basically the right intuitive model over long times and large changes, and the "takeaways" in the post still seem correct. But this is not what GDP does. In the toy model I gave, real GDP growth perfectly captures increases in utility; and in other models where it fails to do so, the problem is not that...

johnswentworth (reply): The main takeaways in the post generally do not assume we're thinking of GDP as a proxy for utility/consumer value. In particular, I strongly agree with: it remains basically true that goods whose price does not drop end up much more heavily weighted in GDP. Whether or not this weighting is "correct" (for purposes of using GDP as a proxy for consumer value) isn't especially relevant to how true the claim is, though it may be relevant to how interesting one finds the claim, depending on one's intended purpose. To the extent that we should stop using GDP as a proxy for consumer value, the question of "should a proxy for consumer value more heavily weight goods whose price does not drop?"
just isn't that relevant. The interesting question is not what a proxy for consumer value should do, but rather what GDP does do, and what that tells us.

What Do GDP Growth Curves Really Mean?

I think in this case omitting the discussion about equivalence under monotonic transformations leads people in the direction of macroeconomic alchemy - they try to squeeze information about welfare from relative prices and quantities even though it's actually impossible to do it. The correct way to think about this is probably to use von Neumann's approach to expected utility: pick three times in history, say $t_1 < t_2 < t_3$; assume that $u(t_1) = 0$ and $u(t_3) = 1$, where $u(t)$ is the utility of living around time $t$; and ask people fo...

What Do GDP Growth Curves Really Mean?

There is a standard reason why real GDP growth is defined the way it is: it works locally in time, and that's really the best you can ask for from this kind of measure. If you have an agent with utility function $U$ defined over $n$ goods with no explicit time dependence, you can express the derivative of utility with respect to time as

$$\frac{dU}{dt} = \sum_{i=1}^{n} \frac{\partial U}{\partial x_i} \frac{dx_i}{dt}.$$

If you divide both sides by the marginal utility of some good taken as the numeraire, say the first one, then you get

$$\frac{1}{\partial U / \partial x_1} \frac{dU}{dt} = \sum_{i=1}^{n} p_i \frac{dx_i}{dt},$$

where $p_i$ is the pri...

johnswentworth (reply): I was hoping somebody would write a comment like this. I didn't want to put a technical primer in the post (since it's aimed at a nontechnical audience), but I'm glad it's here, and I basically agree with the content.
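The local argument in the last comment can be sanity-checked numerically under an assumed utility function (Cobb-Douglas here, with a made-up consumption path; all names are illustrative): utility growth measured in units of the numeraire good should equal price-weighted quantity growth.

```python
# Check: (dU/dt) / (dU/dx1) equals sum_i p_i * dx_i/dt, where
# p_i = (dU/dx_i) / (dU/dx1) is the price of good i in units of good 1.

def U(x1, x2):
    # Assumed Cobb-Douglas utility over two goods.
    return x1 ** 0.3 * x2 ** 0.7

def path(t):
    # Assumed consumption path: quantities growing over time.
    return (1.0 + t, 2.0 + 3.0 * t)

h = 1e-6
x1, x2 = path(0.0)

# Central-difference partial derivatives and time derivative.
dU_dx1 = (U(x1 + h, x2) - U(x1 - h, x2)) / (2 * h)
dU_dx2 = (U(x1, x2 + h) - U(x1, x2 - h)) / (2 * h)
dU_dt = (U(*path(h)) - U(*path(-h))) / (2 * h)

# Relative price of good 2 in units of good 1.
p2 = dU_dx2 / dU_dx1

lhs = dU_dt / dU_dx1        # utility growth in numeraire units
rhs = 1.0 * 1.0 + p2 * 3.0  # price-weighted quantity growth (p1 = 1)
assert abs(lhs - rhs) < 1e-4
```

The agreement is just the chain rule, which is the point: real GDP growth evaluated at current prices is a first-order (local) measure of utility growth, and nothing in the check survives large changes in quantities or prices.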
https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/code-quotations
# Code quotations

This article describes code quotations, a language feature that enables you to generate and work with F# code expressions programmatically. This feature lets you generate an abstract syntax tree that represents F# code. The abstract syntax tree can then be traversed and processed according to the needs of your application. For example, you can use the tree to generate F# code or generate code in some other language.

## Quoted expressions

A quoted expression is an F# expression in your code that is delimited in such a way that it is not compiled as part of your program, but instead is compiled into an object that represents an F# expression. You can mark a quoted expression in one of two ways: either with type information or without type information. If you want to include type information, you use the symbols <@ and @> to delimit the quoted expression. If you do not need type information, you use the symbols <@@ and @@>. The following code shows typed and untyped quotations.

```fsharp
open Microsoft.FSharp.Quotations

// A typed code quotation.
let expr : Expr<int> = <@ 1 + 1 @>

// An untyped code quotation.
let expr2 : Expr = <@@ 1 + 1 @@>
```

Traversing a large expression tree is faster if you do not include type information. The resulting type of an expression quoted with the typed symbols is Expr<'T>, where the type parameter has the type of the expression as determined by the F# compiler's type inference algorithm. When you use code quotations without type information, the type of the quoted expression is the non-generic type Expr. You can call the Raw property on the typed Expr class to obtain the untyped Expr object. There are various static methods in the Expr class that allow you to generate F# expression objects programmatically without using quoted expressions.

A code quotation must include a complete expression. For a let binding, for example, you need both the definition of the bound name and another expression that uses the binding.
In verbose syntax, this is an expression that follows the in keyword. At the top level in a module, this is just the next expression in the module, but in a quotation, it is explicitly required. Therefore, the following expression is not valid.

```fsharp
// Not valid:
// <@ let f x = x + 1 @>
```

But the following expressions are valid.

```fsharp
// Valid:
<@ let f x = x + 10 in f 20 @>

// Valid:
<@
    let f x = x + 10
    f 20
@>
```

To evaluate F# quotations, you must use the F# Quotation Evaluator. It provides support for evaluating and executing F# expression objects.

F# quotations also retain type constraint information. Consider the following example:

```fsharp
open FSharp.Linq.RuntimeHelpers

let eval q = LeafExpressionConverter.EvaluateQuotation q

let inline negate x = -x
// val inline negate: x: ^a -> ^a when ^a : (static member ( ~- ) : ^a -> ^a)

<@ negate 1.0 @> |> eval
```

The constraint generated by the inline function is retained in the code quotation. The negate function's quoted form can now be evaluated.

## Expr type

An instance of the Expr type represents an F# expression. Both the generic and the non-generic Expr types are documented in the F# library documentation. For more information, see FSharp.Quotations Namespace and Quotations.Expr Class.

## Splicing operators

Splicing enables you to combine literal code quotations with expressions that you have created programmatically or from another code quotation. The % and %% operators enable you to add an F# expression object into a code quotation. You use the % operator to insert a typed expression object into a typed quotation; you use the %% operator to insert an untyped expression object into an untyped quotation. Both operators are unary prefix operators. Thus if expr is an untyped expression of type Expr, the following code is valid.

```fsharp
<@@ 1 + %%expr @@>
```

And if expr is a typed quotation of type Expr<int>, the following code is valid.
```fsharp
<@ 1 + %expr @>
```

## Example 1

### Description

The following example illustrates the use of code quotations to put F# code into an expression object and then print the F# code that represents the expression. A function println is defined that contains a recursive function print that displays an F# expression object (of type Expr) in a friendly format. There are several active patterns in the FSharp.Quotations.Patterns and FSharp.Quotations.DerivedPatterns modules that can be used to analyze expression objects. This example does not include all the possible patterns that might appear in an F# expression. Any unrecognized pattern triggers a match to the wildcard pattern (_) and is rendered by using the ToString method, which, on the Expr type, lets you know the active pattern to add to your match expression.

### Code

```fsharp
module Print

open Microsoft.FSharp.Quotations
open Microsoft.FSharp.Quotations.Patterns
open Microsoft.FSharp.Quotations.DerivedPatterns

let println expr =
    let rec print expr =
        match expr with
        | Application(expr1, expr2) ->
            // Function application.
            print expr1
            printf " "
            print expr2
        | SpecificCall <@@ (+) @@> (_, _, exprList) ->
            // Matches a call to (+). Must appear before Call pattern.
            print exprList.Head
            printf " + "
            print exprList.Tail.Head
        | Call(exprOpt, methodInfo, exprList) ->
            // Method or module function call.
            match exprOpt with
            | Some expr -> print expr
            | None -> printf "%s" methodInfo.DeclaringType.Name
            printf ".%s(" methodInfo.Name
            if (exprList.IsEmpty) then
                printf ")"
            else
                print exprList.Head
                for expr in exprList.Tail do
                    printf ","
                    print expr
                printf ")"
        | Int32(n) -> printf "%d" n
        | Lambda(param, body) ->
            // Lambda expression.
            printf "fun (%s:%s) -> " param.Name (param.Type.ToString())
            print body
        | Let(var, expr1, expr2) ->
            // Let binding.
            if (var.IsMutable) then
                printf "let mutable %s = " var.Name
            else
                printf "let %s = " var.Name
            print expr1
            printf " in "
            print expr2
        | PropertyGet(_, propOrValInfo, _) ->
            printf "%s" propOrValInfo.Name
        | String(str) -> printf "%s" str
        | Value(value, typ) -> printf "%s" (value.ToString())
        | Var(var) -> printf "%s" var.Name
        | _ -> printf "%s" (expr.ToString())
    print expr
    printfn ""

let a = 2

// exprLambda has type "Expr<(int -> int)>".
let exprLambda = <@ fun x -> x + 1 @>
// exprCall has type "Expr<int>".
let exprCall = <@ a + 1 @>

println exprLambda
println exprCall
println <@@ let f x = x + 10 in f 10 @@>
```

### Output

```
fun (x:System.Int32) -> x + 1
a + 1
let f = fun (x:System.Int32) -> x + 10 in f 10
```

## Example 2

### Description

You can also use the three active patterns in the ExprShape module to traverse expression trees with fewer active patterns. These active patterns can be useful when you want to traverse a tree but you do not need all the information in most of the nodes. When you use these patterns, any F# expression matches one of the following three patterns: ShapeVar if the expression is a variable, ShapeLambda if the expression is a lambda expression, or ShapeCombination if the expression is anything else. If you traverse an expression tree by using the active patterns as in the previous code example, you have to use many more patterns to handle all possible F# expression types, and your code will be more complex. For more information, see ExprShape.ShapeVar|ShapeLambda|ShapeCombination Active Pattern.

The following code example can be used as a basis for more complex traversals. In this code, an expression tree is created for an expression that involves a function call, add. The SpecificCall active pattern is used to detect any call to add in the expression tree. This active pattern assigns the arguments of the call to the exprList value. In this case, there are only two, so these are pulled out and the function is called recursively on the arguments.
The results are inserted into a code quotation that represents a call to mul by using the splice operator (%%). The println function from the previous example is used to display the results. The code in the other active pattern branches just regenerates the same expression tree, so the only change in the resulting expression is the change from add to mul.

### Code

```fsharp
module Module1

open Print
open Microsoft.FSharp.Quotations
open Microsoft.FSharp.Quotations.DerivedPatterns
open Microsoft.FSharp.Quotations.ExprShape

let add x y = x + y
let mul x y = x * y

let rec substituteExpr expression =
    match expression with
    | SpecificCall <@@ add @@> (_, _, exprList) ->
        // Pull out the two arguments of the call to add, transform them
        // recursively, and splice them into a quotation that calls mul.
        let lhs = substituteExpr exprList.Head
        let rhs = substituteExpr exprList.Tail.Head
        <@@ mul %%lhs %%rhs @@>
    | ShapeVar var -> Expr.Var var
    | ShapeLambda (var, expr) -> Expr.Lambda (var, substituteExpr expr)
    | ShapeCombination(shapeComboObject, exprList) ->
        RebuildShapeCombination(shapeComboObject, List.map substituteExpr exprList)

let expr1 = <@@ 1 + (add 2 (add 3 4)) @@>
println expr1
let expr2 = substituteExpr expr1
println expr2
```

### Output

```
1 + Module1.add(2,Module1.add(3,4))
1 + Module1.mul(2,Module1.mul(3,4))
```
http://link-springer-com-443.webvpn.fjmu.edu.cn/chapter/10.1007/3-540-44987-6_26
EUROCRYPT 2001: Advances in Cryptology — EUROCRYPT 2001, pp 420-436

# New Method for Upper Bounding the Maximum Average Linear Hull Probability for SPNs

- Liam Keliher
- Henk Meijer
- Stafford Tavares

Conference paper. Part of the Lecture Notes in Computer Science book series (LNCS, volume 2045).

## Abstract

We present a new algorithm for upper bounding the maximum average linear hull probability for SPNs, a value required to determine provable security against linear cryptanalysis. The best previous result (Hong et al. [9]) applies only when the linear transformation branch number (B) is M or (M + 1) (maximal case), where M is the number of s-boxes per round. In contrast, our upper bound can be computed for any value of B. Moreover, the new upper bound is a function of the number of rounds (other upper bounds known to the authors are not). When B = M, our upper bound is consistently superior to [9]. When B = (M + 1), our upper bound does not appear to improve on [9]. On application to Rijndael (128-bit block size, 10 rounds), we obtain the upper bound UB = 2−75, corresponding to a lower bound on the data complexity of $$\frac{8}{UB} = 2^{78}$$ (for 96.7% success rate). Note that this does not demonstrate the existence of such an attack, but is, to our knowledge, the first such lower bound.

## Keywords

substitution-permutation networks, linear cryptanalysis, maximum average linear hull probability, provable security

## References

1. E. Biham, On Matsui's linear cryptanalysis, Advances in Cryptology—EUROCRYPT'94, Springer-Verlag, pp. 341–355, 1995.
2. E. Biham and A. Shamir, Differential cryptanalysis of DES-like cryptosystems, Journal of Cryptology, Vol. 4, No. 1, pp. 3–72, 1991.
3. J. Daemen, R. Govaerts, and J. Vandewalle, Correlation matrices, Fast Software Encryption: Second International Workshop, Springer-Verlag, pp. 275–285, 1995.
4. J. Daemen, L. Knudsen, and V.
Rijmen, The block cipher SQUARE, Fast Software Encryption—FSE'97, Springer-Verlag, pp. 149–165, 1997.
5. J. Daemen and V. Rijmen, AES proposal: Rijndael, http://csrc.nist.gov/encryption/aes/round2/AESAlgs/Rijndael/Rijndael.pdf, 1999.
6. H. Feistel, Cryptography and computer privacy, Scientific American, Vol. 228, No. 5, pp. 15–23, May 1973.
7. C. Harpes, G. Kramer, and J. Massey, A generalization of linear cryptanalysis and the applicability of Matsui's piling-up lemma, Advances in Cryptology—EUROCRYPT'95, Springer-Verlag, pp. 24–38, 1995.
8. H.M. Heys and S.E. Tavares, Substitution-permutation networks resistant to differential and linear cryptanalysis, Journal of Cryptology, Vol. 9, No. 1, pp. 1–19, 1996.
9. S. Hong, S. Lee, J. Lim, J. Sung, and D. Cheon, Provable security against differential and linear cryptanalysis for the SPN structure, Fast Software Encryption (FSE 2000), proceedings to be published by Springer-Verlag.
10. L.R. Knudsen, Practically secure Feistel ciphers, Fast Software Encryption, Springer-Verlag, pp. 211–221, 1994.
11. M. Matsui, Linear cryptanalysis method for DES cipher, Advances in Cryptology—EUROCRYPT'93, Springer-Verlag, pp. 386–397, 1994.
12. M. Matsui, On correlation between the order of s-boxes and the strength of DES, Advances in Cryptology—EUROCRYPT'94, Springer-Verlag, pp. 366–375, 1995.
13. W. Meier and O. Staffelbach, Nonlinearity criteria for cryptographic functions, Advances in Cryptology—EUROCRYPT'89, Springer-Verlag, pp. 549–562, 1990.
14. K. Nyberg, Linear approximation of block ciphers, Advances in Cryptology—EUROCRYPT'94, Springer-Verlag, pp. 439–444, 1995.
15. C.E. Shannon, Communication theory of secrecy systems, Bell System Technical Journal, Vol. 28, No. 4, pp. 656–715, 1949.

## Authors and Affiliations

- Liam Keliher (1)
- Henk Meijer (1)
- Stafford Tavares (2)
1. Department of Computing and Information Science, Queen's University, Kingston, Canada
2. Department of Electrical and Computer Engineering, Queen's University, Kingston, Canada
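The data-complexity figure quoted in the abstract follows from simple exponent arithmetic; a trivial check (variable names are mine):

```python
# Abstract's bound: UB = 2**-75, so the data-complexity lower bound
# 8/UB = 2**3 * 2**75 = 2**78 (dividing by a power of two adds exponents).
UB_log2 = -75
data_complexity_log2 = 3 - UB_log2
assert data_complexity_log2 == 78

# Same check with exact integer arithmetic, avoiding floats for 2**-75:
assert 8 * 2 ** 75 == 2 ** 78
```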
https://ask.libreoffice.org/en/answers/153142/revisions/
# Revision history

What you have done is, I assume (since it doesn't exactly match my "menubar" language), a print setting. With the menubar (or right-click on page), it would be Format > Page... > Area [tab] > Colour > [choose colour], and that gives the printing area that background colour. So that only applies to the area inside the printing margins, like this example using blue: This page will print out with a blue background and white margins.

What you want to use for screen reading/editing (and which leaves the printing of the page unaffected) is the programme options. Go to Tools > Options... > [toggle] LibreOffice > Application Colours > General - Document Background - [choose colour]. (I don't know what the "path" for this will be in your ribbon layout, but it is the "program options" you want.) Setting the background to black then has this effect: I've included some text in that screenshot so you can see that the text colour is automatically adjusted for your new "document background". The page will print with black text on white paper.

You can customize other application colours in the same way, if that is easier on your eyes. ;)

Bonus - to alternate between the "menubar" and "ribbon" interfaces, go to: View > Toolbar > [Default|Notebookbar].
https://canadam.math.ca/2013/abs/gga
CanaDAM 2013, Memorial University of Newfoundland, June 10 - 13, 2013, www.cms.math.ca//2013

## Galois Geometries and Applications I

Chair: Jan De Beule (Ghent University)
Org: Jan De Beule (Ghent University) and Petr Lisonek (Simon Fraser University)

KATHRYN HAYMAKER, University of Nebraska - Lincoln
Write once memory codes from finite geometries [PDF]
Abstract: A binary write once memory code is a rewriting scheme in which a sequence of codewords representing different messages must be nondecreasing in each coordinate. In this talk we will revisit a 1986 construction of WOM codes from finite projective geometries by Merkx and present some new constructions of rewriting codes from finite geometries.

PETR LISONEK, Simon Fraser University
Quantum codes from generalized quadrangles [PDF] [SLIDES]
An entanglement-assisted quantum error correcting code (EAQECC) utilizes $e$ copies of maximally entangled states (the code requires $e$ ebits). The EAQECC model removes the self-orthogonality requirement imposed on stabilizer quantum codes. The number of ebits should be small. For an LDPC EAQECC that uses one ebit, Fujiwara and Tonchev showed recently that the girth of its Tanner graph is at most six. We study the LDPC EAQECC that arises from the symplectic generalized quadrangle $W(q)$ where $q$ is even. The girth of the Tanner graph is eight, and we prove that the proportion of ebits tends to zero as $q$ grows.

BRETT STEVENS, Carleton University
Linear feedback shift registers and covering arrays [PDF]
The set of fixed-length subintervals of a linear feedback shift register forms a linear code. A very nice theorem of Bose from 1961 proves that these codewords form the rows of an orthogonal array of (maximum) strength $t$ if and only if the dual linear code has minimum weight $t+1$. Additionally, the only strength-$(t+1)$ coverage missing from the OA corresponds to multiples of the generating polynomial of the LFSR. We use this and results on difference sets over finite fields to construct a new family of strength-3 covering arrays from these orthogonal arrays.

PETER SZIKLAI, Eötvös L. University, Budapest
The direction problem: old and new results [PDF]
We will consider variants of the direction problem. Let $U\subset AG(n,q)$ be a pointset; then a point $d$ at infinity is *determined* by $U$ if there exist two points $a,b\in U$ such that $a,b,d$ are collinear; the set of determined points (directions) is $D$. The typical problems ask about the connection between the structure of $U$ and the properties of $D$. This theory was born in the 1970's and it is still growing; it has many connections to other topics. Here a most efficient method is the application of polynomials; we will see new results and old ones revisited.

QING XIANG, University of Delaware/the NSF
Constructions of difference sets and strongly regular graphs using cyclotomic classes [PDF]
We will give a survey of recent advances in constructions of difference sets and strongly regular Cayley graphs by using cyclotomic classes.
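Stevens' abstract above works with the output sequence of a linear feedback shift register. As a toy illustration only (the feedback polynomial x^4 + x + 1, the tap positions, and the function name are my own choices, not taken from the abstract), here is a minimal Fibonacci LFSR in Python:

```python
def lfsr_sequence(taps, state, length):
    """Yield `length` output bits of a Fibonacci LFSR.

    taps  -- 0-indexed positions of the feedback taps
    state -- initial register contents as a list of bits (must be non-zero)
    """
    out = []
    for _ in range(length):
        out.append(state[-1])          # output the last register bit
        fb = 0
        for t in taps:
            fb ^= state[t]             # feedback = XOR of the tapped bits
        state = [fb] + state[:-1]      # shift right, insert feedback bit
    return out

# Taps [0, 3] correspond to the primitive polynomial x^4 + x + 1, so the
# 4-bit register cycles through all 15 non-zero states before repeating.
seq = lfsr_sequence(taps=[0, 3], state=[1, 0, 0, 0], length=15)
```

Because the polynomial is primitive, the 15-bit period is an m-sequence: it contains eight ones and seven zeros, and every non-zero 4-bit window appears exactly once, which is the balance property underlying the orthogonal-array connection in Bose's theorem.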
https://www.embibe.com/exams/methods-of-preparation-of-amines/
• Written By Sushmita Rout

# Methods of Preparation of Amines Class 12

Methods of Preparation of Amines: An amine is a chemical substance derived from ammonia (NH3); we can simply state that amines are ammonia derivatives. Which is the most common antihistamine in the doctor's prescription? It's Benadryl, a drug that contains a tertiary amino group. Amines form an important class of organic molecules, created by substituting one or more hydrogen atoms of an ammonia molecule with an alkyl or aryl group. Alkaloids present in certain plants, the catecholamine neurotransmitters (dopamine and adrenaline), and histamine, a chemical mediator found in most animal tissues, are just a few examples of naturally occurring amines. Amines are found naturally in hormones, vitamins, proteins, and other substances, and they can also be made synthetically. This article covers the preparation methods of amines.

## What are Amines?

Amines are organic compounds that are broadly considered derivatives of ammonia. These nitrogen-based organic compounds are obtained by replacing one, two, or all three hydrogen atoms of an ammonia molecule with alkyl and/or aryl groups. The nitrogen orbitals in amines are $${\rm{s}}{{\rm{p}}^3}$$ hybridised with a pyramidal geometry. Each of the three $${\rm{s}}{{\rm{p}}^3}$$ hybridised orbitals of nitrogen overlaps with orbitals of carbon or hydrogen as per the composition of the amine. The fourth orbital of nitrogen contains an unshared pair of electrons, which is common to all amines.
This unshared pair of electrons reduces the $${\rm{C}} – {\rm{N}} – {\rm{E}}$$ (where $${\rm{E}}$$ is $${\rm{C}}$$ or $${\rm{H}}$$) bond angle from $${109.5^{\rm{o}}}$$ to $${108^{\rm{o}}}.$$

Based on the number of hydrogen atoms replaced by alkyl or aryl groups in the ammonia molecule, amines are broadly classified as primary $$\left( {{1^{\rm{o}}}} \right),$$ secondary $$\left( {{2^{\rm{o}}}} \right),$$ and tertiary $$\left( {{3^{\rm{o}}}} \right).$$ If one hydrogen atom of ammonia is replaced by $${\rm{R}}$$ or $${\rm{Ar}},$$ we get a primary amine $$\left( {{1^{\rm{o}}}} \right),$$ $${\rm{RN}}{{\rm{H}}_2}$$ or $${\rm{ArN}}{{\rm{H}}_2}.$$ If two hydrogen atoms of ammonia (or one hydrogen atom of $${\rm{RN}}{{\rm{H}}_2}$$) are replaced by another alkyl/aryl group $$\left( {{\rm{R}}’} \right),$$ we get a secondary amine $$\left( {{2^{\rm{o}}}} \right),$$ $${\rm{R}} – {\rm{NHR}}’.$$ The second alkyl/aryl group may be the same or different. If all three hydrogen atoms of ammonia are replaced by alkyl/aryl groups, we get a tertiary amine $$\left( {{3^{\rm{o}}}} \right).$$ When all the alkyl or aryl groups are the same, the amine is said to be simple; when they are different, 'mixed' amines are formed.

### Methods of Preparation of Amines Class 12

Amines are prepared by the following methods:

#### 1. Reduction of Nitro Compounds

When hydrogen gas is passed over nitro compounds in the presence of finely divided nickel, palladium, or platinum, they are reduced to the corresponding amines. Nitro compounds can also be reduced with metals in an acidic medium to generate amines. Aliphatic nitro compounds undergo the same reaction and are reduced to the corresponding alkanamines. Reduction with iron scrap and hydrochloric acid is preferred, because the $${\rm{FeC}}{{\rm{l}}_2}$$ formed gets hydrolysed to release hydrochloric acid during the reaction. Hence, only a small amount of hydrochloric acid is required to initiate the reaction.

#### 2. Ammonolysis of Alkyl Halides

An alkyl or benzyl halide undergoes a nucleophilic substitution reaction with an ethanolic solution of ammonia: the halogen atom of the halide is replaced by an amino $$\left( { – {\rm{N}}{{\rm{H}}_2}} \right)$$ group. This process, in which the ammonia molecule cleaves the $${\rm{C}} – {\rm{X}}$$ bond, is known as ammonolysis. It is carried out in a sealed tube at $$373\,{\rm{K}}.$$ The primary amine thus obtained behaves as a nucleophile and keeps reacting with the alkyl halide until all the hydrogen atoms on nitrogen are replaced with alkyl groups. Hence, secondary and tertiary amines, and finally a quaternary ammonium salt, are also formed in this reaction. The free amine is obtained from the substituted ammonium salt by treating it with a strong base. Although ammonolysis produces a mixture of primary, secondary, and tertiary amines as well as a quaternary ammonium salt, the primary amine is the major product when ammonia is used in excess. The reactivity of the halides follows the order $${\rm{RI}} > {\rm{RBr}} > {\rm{RCl}}.$$

#### 3. Reduction of Nitriles

Primary amines are produced by reducing nitriles with lithium aluminium hydride $$\left( {{\rm{LiAl}}{{\rm{H}}_4}} \right).$$ Nitriles can also be catalytically hydrogenated to produce primary amines. This reaction is used to ascend the amine series, i.e., to prepare amines containing one carbon atom more than the starting material.

#### 4. Reduction of Amides

Amides are reduced to the corresponding amines by lithium aluminium hydride. $${\rm{N}}$$-substituted and $${\rm{N,}}\,{\rm{N}}$$-disubstituted amides give secondary and tertiary amines, respectively.

#### 5. Gabriel Phthalimide Synthesis

Gabriel phthalimide synthesis is used for the preparation of primary amines only.
On treating phthalimide with an ethanolic potassium hydroxide solution, the potassium salt of phthalimide is formed. Heating this potassium salt with an alkyl halide, followed by alkaline hydrolysis, produces the corresponding primary amine. This method is used only to prepare primary aliphatic amines, not primary aromatic amines, because aryl halides do not undergo nucleophilic substitution with the phthalimide anion.

#### 6. Hoffmann Bromamide Degradation

Amides can be directly converted into the corresponding amines. This reaction is carried out by treating the amide with a mixture of a base and bromine $$\left( {{\rm{KOH}} + {\rm{B}}{{\rm{r}}_2}} \right).$$ Primary amines are prepared by treating an amide with bromine in an ethanolic or aqueous sodium hydroxide solution. In this degradation reaction, an alkyl or aryl group migrates from the carbonyl carbon of the amide group to the nitrogen atom. The amine so formed contains one carbon atom less than the parent amide. Apart from this reaction, amides can also be dehydrated by $${{\rm{P}}_2}{{\rm{O}}_5}$$ to the corresponding nitriles, which can be further reduced to amines; by this route, the number of carbon atoms is maintained.

#### 7. Reductive Amination of Aldehydes and Ketones

Carbonyl compounds such as aldehydes or ketones can be reductively aminated in the presence of ammonia or amines to produce primary, secondary, or tertiary amines. The reaction of a ketone with ammonia, followed by reduction with sodium cyanoborohydride, produces a $${1^{\rm{o}}}$$ amine. The reaction of a ketone with a primary amine, followed by reduction with sodium cyanoborohydride, produces an $${\rm{N}}$$‐substituted amine. The reaction of a ketone with a secondary amine, followed by reduction with sodium cyanoborohydride, produces an $${\rm{N,}}\,{\rm{N}}$$‐disubstituted amine.

#### 8. Curtius Reaction

Amines are also prepared by treating an acid chloride with sodium azide. The reaction gives an isocyanate, which is further hydrolysed to the amine.

#### 9. Schmidt Reaction

Carboxylic acids, ketones, and alkenes react with hydrazoic acid to give the corresponding amines.

#### 10. Action of Ammonia on Alcohols

A mixture of $${{\rm{1}}^{\rm{o}}},\,{2^{\rm{o}}},\,{3^{\rm{o}}}$$ amines and $${4^{\rm{o}}}$$ salts is obtained in this reaction; it is separated by means of the Hinsberg method, the Hofmann method, or fractional distillation. The amine can be obtained in good yield by using an excess of ammonia.

### Preparation Methods of Amines Summary

Proteins, vitamins, alkaloids, and hormones are naturally occurring amines; drugs, polymers, and dyes are synthetic ones. Biologically active compounds such as adrenaline and ephedrine, which increase blood pressure, contain secondary amino groups, and the common antihistamine Benadryl has a tertiary amino group. Hence, it is essential to learn their structure and preparation. In this article, we learned the various methods through which amines can be prepared, including the name reactions Gabriel phthalimide synthesis, Hoffmann bromamide degradation, and the Schmidt reaction.

### FAQs on Methods of Preparation of Amines

Q.1. Which methods are used for the preparation of aromatic amines?
Ans: Aromatic amines can be prepared by:
1. Reduction of nitro compounds
2. Reduction of nitriles
3. Reduction of amides
4. Reductive amination of aldehydes and ketones
5. Hoffmann bromamide degradation

Q.2. How are amines prepared from nitro compounds?
Ans: Amines are prepared from nitro compounds by passing hydrogen gas over them in the presence of finely divided nickel, palladium, or platinum; this reduces the nitro compounds to amines.
Amines are also formed by the reduction of nitro compounds with metals in an acidic medium. This also applies to aliphatic nitro compounds, which are reduced to the corresponding alkanamines. Reduction with iron scrap and hydrochloric acid is preferred, because the $${\rm{FeC}}{{\rm{l}}_2}$$ formed gets hydrolysed to release hydrochloric acid during the reaction. Hence, only a small amount of hydrochloric acid is required to initiate the reaction.

Q.3. How do you make a secondary amine?
Ans: By reduction of an alkyl isocyanide with sodium and ethanol:
$${\rm{C}}{{\rm{H}}_3}{\rm{NC}} + 4{\rm{H}} \to {\rm{C}}{{\rm{H}}_3}{\rm{NHC}}{{\rm{H}}_3}$$
Or by heating an alcoholic solution of a $${1^{\rm{o}}}$$ amine with an alkyl halide:
$${{\rm{C}}_2}{{\rm{H}}_5}{\rm{N}}{{\rm{H}}_2} + {\rm{I}}{{\rm{C}}_2}{{\rm{H}}_5} \to {\left( {{{\rm{C}}_2}{{\rm{H}}_5}} \right)_2}{\rm{NH}} + {\rm{HI}}$$

Q.4. Which method is not used for the preparation of aromatic amines?
Ans: Gabriel phthalimide synthesis is not used for the preparation of aromatic amines. The reaction proceeds through a phthalimide anion, and the aryl halides required for aromatic amines do not undergo nucleophilic substitution with this anion.

Q.5. What are the two types of aromatic amines?
Ans: Aromatic amines are classified into:
Arylamines: amines in which the nitrogen atom is directly bonded to one or more aromatic rings or aryl groups. Examples: aniline $${{\rm{C}}_6}{{\rm{H}}_5}{\rm{N}}{{\rm{H}}_2},$$ diphenylamine $${\left( {{{\rm{C}}_6}{{\rm{H}}_5}} \right)_2}{\rm{NH}},$$ triphenylamine $${\left( {{{\rm{C}}_6}{{\rm{H}}_5}} \right)_3}{\rm{N}}{\rm{.}}$$
Aryl-alkyl amines: amines in which the nitrogen atom is bonded to the side chain of the aromatic ring.
Examples: phenylmethanamine (benzylamine) $${{\rm{C}}_6}{{\rm{H}}_5}{\rm{C}}{{\rm{H}}_2}{\rm{N}}{{\rm{H}}_2},$$ dibenzylamine $${\left( {{{\rm{C}}_6}{{\rm{H}}_5}{\rm{C}}{{\rm{H}}_2}} \right)_2}{\rm{NH}}{\rm{.}}$$

Q.6. What is the formula of a primary amine?
Ans: When one hydrogen atom of ammonia is replaced by an alkyl $$\left( {\rm{R}} \right)$$ or aryl group $$\left( {{\rm{Ar}}} \right),$$ we get a primary amine $$\left( {{1^{\rm{o}}}} \right).$$ The general structural formula of a primary amine is $${\rm{RN}}{{\rm{H}}_2}$$ or $${\rm{ArN}}{{\rm{H}}_2}.$$
https://www.semanticscholar.org/paper/Moments-of-the-Riemann-zeta-function-Soundararajan/8a75ee2314a2f8ca4b83adc779c5fa4c48310e9c
# Moments of the Riemann zeta function

@article{Soundararajan2006MomentsOT,
  title={Moments of the Riemann zeta function},
  author={Kannan Soundararajan},
  journal={Annals of Mathematics},
  year={2006},
  volume={170},
  pages={981-993}
}

Assuming the Riemann hypothesis, we obtain an upper bound for the moments of the Riemann zeta function on the critical line. Our bound is nearly as sharp as the conjectured asymptotic formulae for these moments. The method extends to moments in other families of L-functions.

## 188 Citations (first 10 shown)

- **Shifted moments of the Riemann zeta function** (2022). In this article, we prove that the Riemann hypothesis implies a conjecture of Chandee on shifted moments of the Riemann zeta function. The proof is based on ideas of Harper concerning sharp upper ...
- **Moments of the Riemann zeta‐function at its relative extrema on the critical line**. Assuming the Riemann hypothesis, we obtain upper and lower bounds for moments of the Riemann zeta‐function averaged over the extreme values between its zeros on the critical line. Our bounds are very ...
- **Lower bounds for the moments of the derivatives of the Riemann zeta-function and Dirichlet L-functions**. In this paper, for an integer k > 0, we give certain lower bounds for the 2kth moments of the derivatives of the Riemann zeta-function under the assumption of the Riemann hypothesis. Also, we give ...
- **Upper bounds for moments of ζ′(ρ)**. Assuming the Riemann hypothesis, we obtain an upper bound for the 2kth moment of the derivative of the Riemann zeta‐function averaged over the non‐trivial zeros of ζ(s) for every positive integer k.
- **Hybrid Moments of the Riemann Zeta-Function**. The "hybrid" moments of the Riemann zeta-function on the critical line are studied. The expected upper bound for the above expression is [...]. This is shown to be true for certain specific values of [...].
- **Exponential moments of the argument of the Riemann zeta function on the critical line**. In this article, we give, under the Riemann hypothesis, an upper bound for the exponential moments of the imaginary part of the logarithm of the Riemann zeta function on the critical line. Our ...
- **Upper bounds for the moments of zeta prime rho**. Assuming the Riemann Hypothesis, we obtain an upper bound for the 2k-th moment of the derivative of the Riemann zeta-function averaged over the non-trivial zeros of $\zeta(s)$ for every positive ...
- **The eighth moment of the Riemann zeta function** (2022). In this article, we establish an asymptotic formula for the eighth moment of the Riemann zeta function, assuming the Riemann hypothesis and a quaternary additive divisor conjecture. This builds on ...
- **Sharp upper bounds for fractional moments of the Riemann zeta function** (The Quarterly Journal of Mathematics, 2019). We establish sharp upper bounds for the $2k$th moment of the Riemann zeta function on the critical line, for all real $0 \leqslant k \leqslant 2$. This improves on earlier work of Ramachandra, ...
- **On the logarithm of the Riemann zeta-function and iterated integrals**. This paper gives some results for the logarithm of the Riemann zeta-function and its iterated integrals. We obtain a certain explicit approximation formula for these functions. The formula has some ...

## References (showing 1-10 of 30)

- **A note on S(t) and the zeros of the Riemann zeta‐function** (2005). Let π S(t) denote the argument of the Riemann zeta‐function at the point 1/2 + it. Assuming the Riemann hypothesis, we sharpen the constant in the best currently known bounds for S(t) and for the ...
- **Extreme values of zeta and L-functions**. We introduce a resonance method to produce large values of the Riemann zeta-function on the critical line, and large and small central values of L-functions.
- **Lower bounds for moments of L-functions: symplectic and orthogonal examples** (2006). We give lower bounds of the conjectured order of magnitude for an orthogonal and a symplectic family of L-functions.
- **Lower bounds for moments of L-functions** (Proceedings of the National Academy of Sciences, 2005). A simple method is developed to establish lower bounds of the conjectured order of magnitude for several such families of L-functions, including the family of all Dirichlet L-functions to a prime modulus.
- **On Some Theorems of Littlewood and Selberg, I** (1993). Assuming the Riemann hypothesis, we prove [formula] and [formula] with economical constants D1 = 0.46657029869824... and D2 = 3.51588780218300...
- **The Theory of the Riemann Zeta-Function** (1987). The Riemann zeta-function embodies both additive and multiplicative structures in a single function, making it our most important tool in the study of prime numbers. This volume studies all aspects ...
- **Integral Moments of L‐Functions** (2002). We give a new heuristic for all of the main terms in the integral moments of various families of primitive L‐functions. The results agree with previous conjectures for the leading order terms. Our ...
- **Multiple Dirichlet Series and Moments of Zeta and L-Functions** (Compositio Mathematica, 2003). This paper develops an analytic theory of Dirichlet series in several complex variables which possess sufficiently many functional equations. In the first two sections it is shown how straightforward ...
- **Fractional moments of the Riemann zeta-function**. Next I considered the case where k is half of any positive integer and proved (1) (however with C1 depending possibly on k). Next D. R. Heath-Brown [1] considered the case H = T and k any positive ...
- **Some remarks on the mean value of the Riemann zeta-function and other Dirichlet series. III**. This is a sequel (Part II) to an earlier article with the same title. There are reasons to expect that the estimates proved in Part I without the factor $(\log\log H)^{-C}$ represent the real truth, ...
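For orientation, standard background I am adding (not text taken from the listing above): the moments in question are the quantities

```latex
M_k(T) \;=\; \int_0^T \left| \zeta\!\left(\tfrac{1}{2} + it\right) \right|^{2k} dt ,
```

which for fixed $k > 0$ are conjectured to satisfy $M_k(T) \sim C_k\, T (\log T)^{k^2}$ as $T \to \infty$. Soundararajan's theorem gives, under the Riemann hypothesis, the nearly matching upper bound $M_k(T) \ll_{k,\varepsilon} T (\log T)^{k^2 + \varepsilon}$, which is why the abstract calls the bound "nearly as sharp as the conjectured asymptotic formulae".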
https://miykael.github.io/nipype_tutorial/notebooks/basic_workflow.html
Workflows¶ Although it would be possible to write analysis scripts using just Nipype Interfaces, and this may provide some advantages over directly making command-line calls, the main benefits of Nipype are the workflows. A workflow controls the setup and the execution of individual interfaces. Let's assume you want to run multiple interfaces in a specific order, where some have to wait for others to finish while others can be executed in parallel. The nice thing about a Nipype workflow is that it will take care of the input and output of each interface and arrange the execution of the interfaces in the most efficient way. A workflow therefore consists of multiple Nodes, each representing a specific Interface, and directed connections between those nodes. Those connections specify which output of which node should be used as an input for another node. To better understand why this is so great, let's look at an example. Interfaces vs. Workflows¶ Interfaces are the building blocks that solve well-defined tasks.
We solve more complex tasks by combining interfaces with workflows. Workflows are implemented with nipype interfaces wrapped inside Node objects; subworkflows can also be added to a workflow without any wrapping. The two levels compare as follows:

| Interfaces | Workflows |
| --- | --- |
| Keep track of the inputs and outputs, and check their expected types | Do not have inputs/outputs, but expose them from the interfaces wrapped inside |
| Do not cache results (unless you use [interface caching](advanced_interfaces_caching.ipynb)) | Cache results |
| Run by a nipype plugin | Run by a nipype plugin |

Preparation¶ Before we can start, let's first load some helper functions:

In [ ]:
import numpy as np
import nibabel as nb
import matplotlib.pyplot as plt

# Let's create a short helper function to plot 3D NIfTI images
def plot_slice(fname):
    # Load the image and get the data array
    img = nb.load(fname)
    data = img.get_data()
    # Cut in the middle of the brain
    cut = int(data.shape[-1]/2) + 10
    # Plot the data
    plt.imshow(np.rot90(data[..., cut]), cmap="gray")
    plt.gca().set_axis_off()

Example 1 - Command-line execution¶ Let's take a look at a small preprocessing analysis where we would like to perform the following steps of processing: - Skullstrip an image to obtain a mask - Smooth the original image - Mask the smoothed image with the brain mask This could all very well be done with the following shell script:

In [ ]:
%%bash
ANAT_NAME=sub-01_ses-test_T1w
ANAT=/data/ds000114/sub-01/ses-test/anat/${ANAT_NAME}
bet ${ANAT} /output/${ANAT_NAME}_brain -m -f 0.3
fslmaths ${ANAT} -s 2 /output/${ANAT_NAME}_smooth
fslmaths /output/${ANAT_NAME}_smooth -mas /output/${ANAT_NAME}_brain_mask /output/${ANAT_NAME}_smooth_mask

This is simple and straightforward. We can see that this does exactly what we wanted by plotting the four steps of processing.
In [ ]:
f = plt.figure(figsize=(12, 4))
for i, img in enumerate(["T1w", "T1w_smooth",
                         "T1w_brain_mask", "T1w_smooth_mask"]):
    f.add_subplot(1, 4, i + 1)
    if i == 0:
        plot_slice("/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_%s.nii.gz" % img)
    else:
        plot_slice("/output/sub-01_ses-test_%s.nii.gz" % img)
    plt.title(img)

Example 2 - Interface execution¶ Now let's see what this would look like if we used Nipype, but only the Interfaces functionality. It's simple enough to write a basic procedural script, this time in Python, to do the same thing as above:

In [ ]:
from nipype.interfaces import fsl

# Skullstrip process
skullstrip = fsl.BET(
    in_file="/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz",
    out_file="/output/sub-01_T1w_brain.nii.gz",
    mask=True)
skullstrip.run()

# Smoothing process
smooth = fsl.IsotropicSmooth(
    in_file="/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz",
    out_file="/output/sub-01_T1w_smooth.nii.gz",
    fwhm=4)
smooth.run()

# Masking process
mask = fsl.ApplyMask(
    in_file="/output/sub-01_T1w_smooth.nii.gz",
    mask_file="/output/sub-01_T1w_brain_mask.nii.gz",
    out_file="/output/sub-01_T1w_smooth_mask.nii.gz")
mask.run()

f = plt.figure(figsize=(12, 4))
for i, img in enumerate(["T1w", "T1w_smooth",
                         "T1w_brain_mask", "T1w_smooth_mask"]):
    f.add_subplot(1, 4, i + 1)
    if i == 0:
        plot_slice("/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_%s.nii.gz" % img)
    else:
        plot_slice("/output/sub-01_%s.nii.gz" % img)
    plt.title(img)

This is more verbose, although it does have its advantages. There's the automated input validation we saw previously, some of the options are named more meaningfully, and you don't need to remember, for example, that fslmaths' smoothing kernel is set in sigma instead of FWHM -- Nipype does that conversion behind the scenes. Can't we optimize that a bit?¶ As we can see above, the inputs for the mask routine, in_file and mask_file, are actually the outputs of skullstrip and smooth. We therefore somehow want to connect them. This can be accomplished by saving the executed routines under a given object and then using the output of those objects as input for other routines.
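The idea of passing result objects forward can be sketched in plain Python before we look at the Nipype version. This is an illustrative toy, not the Nipype API; the function names and filename conventions are invented for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Result:
    """Toy stand-in for an interface's result object."""
    outputs: dict

def bet(in_file):
    # Pretend to skullstrip; derive the output names from the input name
    stem = in_file.rsplit(".", 1)[0]
    return Result({"out_file": stem + "_brain.nii",
                   "mask_file": stem + "_brain_mask.nii"})

def apply_mask(in_file, mask_file):
    # Pretend to mask the input image
    return Result({"out_file": in_file.rsplit(".", 1)[0] + "_masked.nii"})

# Run the first step, then feed its named output into the next step
bet_res = bet("sub-01_T1w.nii")
masked = apply_mask("sub-01_T1w_smooth.nii", bet_res.outputs["mask_file"])
# masked.outputs["out_file"] == "sub-01_T1w_smooth_masked.nii"
```

The point of the sketch is only the plumbing: each step returns an object whose named outputs become the inputs of a later step, which is exactly what the `smooth_result.outputs.out_file` pattern below does with real interfaces.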
In [ ]:
from nipype.interfaces import fsl

# Skullstrip process
skullstrip = fsl.BET(
    in_file="/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz",
    mask=True)
bet_result = skullstrip.run()  # skullstrip object

# Smooth process
smooth = fsl.IsotropicSmooth(
    in_file="/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz",
    fwhm=4)
smooth_result = smooth.run()  # smooth object

# Mask process, fed directly from the two result objects
mask = fsl.ApplyMask(
    in_file=smooth_result.outputs.out_file,
    mask_file=bet_result.outputs.mask_file)
mask_result = mask.run()

f = plt.figure(figsize=(12, 4))
for i, img in enumerate([skullstrip.inputs.in_file,
                         smooth_result.outputs.out_file,
                         mask_result.outputs.out_file]):
    f.add_subplot(1, 3, i + 1)
    plot_slice(img)
    plt.title(img.split('/')[-1].split('.')[0].split('test_')[-1])

Here we didn't need to name the intermediate files; Nipype did that behind the scenes, and then we passed the result object (which knows those names) onto the next step in the processing stream. This is somewhat more concise than the example above, but it's still a procedural script. And the dependency relationship between the stages of processing is not particularly obvious. To address these issues, and to provide solutions to problems we might not know we have yet, Nipype offers Workflows. Example 3 - Workflow execution¶ What we've implicitly done above is to encode our processing stream as a directed acyclic graph: each stage of processing is a node in this graph, and some nodes are unidirectionally dependent on others. In this case, there is one input file and several output files, but there are no cycles -- there's a clear line of directionality to the processing. What the Node and Workflow classes do is make these relationships more explicit. The basic architecture is that the Node provides a light wrapper around an Interface. It exposes the inputs and outputs of the Interface as its own, but it adds some additional functionality that allows you to connect Nodes into a Workflow.
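The bookkeeping that such a connection mechanism has to do can be mimicked in a few lines of plain Python. This is an illustrative toy in the spirit of what comes next, not Nipype's actual implementation:

```python
class MiniWorkflow:
    """Toy dependency recorder: stores which output feeds which input."""

    def __init__(self):
        self.edges = []  # tuples of (src_node, src_output, dst_node, dst_input)

    def connect(self, src, src_out, dst, dst_in):
        # Record that dst's input field is fed by src's output field
        self.edges.append((src, src_out, dst, dst_in))

    def predecessors(self, node):
        # Nodes that must finish before `node` can run
        return {src for src, _, dst, _ in self.edges if dst == node}

mini = MiniWorkflow()
mini.connect("skullstrip", "mask_file", "mask", "mask_file")
mini.connect("smooth", "out_file", "mask", "in_file")
# The recorded edges tell a scheduler that "mask" depends on both others
```

From these recorded edges a scheduler can derive the execution order, which is all a workflow engine fundamentally needs; the real `Workflow.connect` additionally validates that the named fields exist on the wrapped interfaces.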
Let's rewrite the above script with these tools:

In [ ]:
# Import Node and Workflow object and FSL interface
from nipype import Node, Workflow
from nipype.interfaces import fsl

# For reasons that will later become clear, it's important to
# pass filenames to Nodes as absolute paths
from os.path import abspath
in_file = abspath("/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz")

# Skullstrip process
skullstrip = Node(fsl.BET(in_file=in_file, mask=True), name="skullstrip")

# Smooth process
smooth = Node(fsl.IsotropicSmooth(in_file=in_file, fwhm=4), name="smooth")

# Mask process
mask = Node(fsl.ApplyMask(), name="mask")

This looks mostly similar to what we did above, but we've left out the two crucial inputs to the ApplyMask step. We'll set those up by defining a Workflow object and then making connections among the Nodes.

In [ ]:
# Initiation of a workflow
wf = Workflow(name="smoothflow", base_dir="/output/working_dir")

The Workflow object has a method called connect that is going to do most of the work here. This routine also checks that the inputs and outputs are actually provided by the nodes that are being connected. There are two different ways to call connect:

connect(source, "source_output", dest, "dest_input")

connect([(source, dest, [("source_output1", "dest_input1"),
                         ("source_output2", "dest_input2")
                         ])
         ])

With the first approach, you can establish one connection at a time. With the second, you can establish multiple connections between two nodes at once. In either case, you're providing it with four pieces of information to define the connection:

• The source node object
• The name of the output field from the source node
• The destination node object
• The name of the input field from the destination node

We'll illustrate each method in the following cell:

In [ ]:
# First the "simple", but more restricted method
wf.connect(skullstrip, "mask_file", mask, "mask_file")

# Now the more complicated method
wf.connect([(smooth, mask, [("out_file", "in_file")])])

Now the workflow is complete!

Above, we mentioned that the workflow can be thought of as a directed acyclic graph.
In fact, that's literally how it's represented behind the scenes, and we can use that to explore the workflow visually:

In [ ]:
wf.write_graph("workflow_graph.dot")
from IPython.display import Image
Image(filename="/output/working_dir/smoothflow/workflow_graph.png")

180514-09:28:44,790 workflow INFO: Generated workflow graph: /output/working_dir/smoothflow/workflow_graph.png (graph2use=hierarchical, simple_form=True).

Out[ ]:

This representation makes the dependency structure of the workflow obvious. (By the way, the names of the nodes in this graph are the names we gave our Node objects above, so pick something meaningful for those!)

Certain graph types also allow you to further inspect the individual connections between the nodes. For example:

In [ ]:
wf.write_graph(graph2use='flat')
from IPython.display import Image
Image(filename="/output/working_dir/smoothflow/graph_detailed.png")

180514-09:28:44,969 workflow INFO: Generated workflow graph: /output/working_dir/smoothflow/graph.png (graph2use=flat, simple_form=True).

Out[ ]:

Here you see very clearly that the output mask_file of the skullstrip node is used as the input mask_file of the mask node. For more information on graph visualization, see the Graph Visualization section.

But let's come back to our example. At this point, all we've done is define the workflow. We haven't executed any code yet. Much like Interface objects, the Workflow object has a run method that we can call so that it executes. Let's do that and then examine the results.

In [ ]:
# Specify the base directory for the working directory
wf.base_dir = "/output/working_dir"

# Execute the workflow
wf.run()

180514-09:28:44,992 workflow INFO: Workflow smoothflow settings: ['check', 'execution', 'logging', 'monitoring']
180514-09:28:44,997 workflow INFO: Running serially.
180514-09:28:44,998 workflow INFO: [Node] Setting-up "smoothflow.smooth" in "/output/working_dir/smoothflow/smooth".
180514-09:28:45,0 workflow INFO: [Node] Outdated cache found for "smoothflow.smooth".
180514-09:28:45,41 workflow INFO: [Node] Running "smooth" ("nipype.interfaces.fsl.maths.IsotropicSmooth"), a CommandLine Interface with command: fslmaths /data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz -s 1.69864 /output/working_dir/smoothflow/smooth/sub-01_ses-test_T1w_smooth.nii.gz
180514-09:28:50,11 workflow INFO: [Node] Finished "smoothflow.smooth".
180514-09:28:50,12 workflow INFO: [Node] Setting-up "smoothflow.skullstrip" in "/output/working_dir/smoothflow/skullstrip".
180514-09:28:50,40 workflow INFO: [Node] Cached "smoothflow.skullstrip" - collecting precomputed outputs
180514-09:28:50,42 workflow INFO: [Node] "smoothflow.skullstrip" found cached.
180514-09:28:50,42 workflow INFO:
180514-09:28:50,46 workflow INFO: [Node] Outdated cache found for "smoothflow.mask".
180514-09:28:50,52 workflow INFO:
180514-09:28:51,134 workflow INFO:

Out[ ]: <networkx.classes.digraph.DiGraph at 0x7f7d60ccfd30>

The specification of base_dir is very important (and is why we needed to use absolute paths above) because otherwise all the outputs would be saved somewhere in the temporary files. Unlike interfaces, which by default spit out results to the local directory, the Workflow engine executes things off in its own directory hierarchy.

Let's take a look at the resulting images to convince ourselves we've done the same thing as before:

In [ ]:
f = plt.figure(figsize=(12, 4))
for i, img in enumerate(["/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz",
                         "/output/working_dir/smoothflow/smooth/sub-01_ses-test_T1w_smooth.nii.gz",
                         "/output/working_dir/smoothflow/mask/sub-01_ses-test_T1w_smooth_masked.nii.gz"]):
    plot_slice(img)

Perfect!
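The directory hierarchy mentioned above follows a simple convention: base_dir, then the workflow name, then the node name. A tiny sketch of that convention (the helper function here is hypothetical, for illustration only, and not a Nipype API):

```python
import os.path

# Hypothetical helper (not part of Nipype) illustrating the layout convention:
# <base_dir>/<workflow name>/<node name>/<output file>
def node_output_path(base_dir, workflow_name, node_name, filename):
    return os.path.join(base_dir, workflow_name, node_name, filename)

p = node_output_path("/output/working_dir", "smoothflow", "smooth",
                     "sub-01_ses-test_T1w_smooth.nii.gz")
print(p)  # /output/working_dir/smoothflow/smooth/sub-01_ses-test_T1w_smooth.nii.gz
```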
Let's also have a closer look at the working directory:

In [ ]:
!tree /output/working_dir/smoothflow/ -I '*js|*json|*html|*pklz|_report'

/output/working_dir/smoothflow/
├── graph_detailed.dot
├── graph_detailed.png
├── graph.dot
├── graph.png
├── mask
│   ├── command.txt
│   └── sub-01_ses-test_T1w_smooth_masked.nii.gz
├── skullstrip
│   ├── command.txt
│   └── sub-01_ses-test_T1w_brain_mask.nii.gz
├── smooth
│   ├── command.txt
│   └── sub-01_ses-test_T1w_smooth.nii.gz
├── workflow_graph.dot
└── workflow_graph.png

3 directories, 12 files

As you can see, the name of the working directory is the base_dir we gave the workflow, and the name of the folder within is the name of the workflow object, smoothflow. Each node of the workflow has its own subfolder in the smoothflow folder, and each of those subfolders contains the output of the node as well as some additional files.

The #1 gotcha of nipype Workflows¶

Nipype workflows are just DAGs (Directed Acyclic Graphs) that the runner Plugin takes in and uses to compose an ordered list of nodes for execution. As a matter of fact, running a workflow will return a graph object. That's why you often see something like <networkx.classes.digraph.DiGraph at 0x7f83542f1550> at the end of the execution stream when running a workflow.

The principal implication is that Workflows don't have inputs and outputs; you can only access them through the Node decoration. In practical terms, this has one clear consequence: from the resulting object of the workflow execution, you don't generally have access to the values of the outputs of the interfaces. This is particularly true for Plugins with an asynchronous execution.

A workflow inside a workflow¶

When you start writing full-fledged analysis workflows, things can get quite complicated. Some aspects of neuroimaging analysis can be thought of as a coherent step at a level more abstract than the execution of a single command line binary. For instance, in the standard FEAT script in FSL, several calls are made in the process of using susan to perform nonlinear smoothing on an image.
In Nipype, you can write nested workflows, where a sub-workflow can take the place of a Node in a given script. Let's use the prepackaged susan workflow that ships with Nipype to replace our Gaussian filtering node and demonstrate how this works. In [ ]: from nipype.workflows.fmri.fsl import create_susan_smooth Calling this function will return a pre-written Workflow object: In [ ]: susan = create_susan_smooth(separate_masks=False) Let's display the graph to see what happens here. In [ ]: susan.write_graph("susan_workflow.dot") from IPython.display import Image Image(filename="susan_workflow.png") 180514-09:28:53,607 workflow INFO: Generated workflow graph: /home/neuro/nipype_tutorial/notebooks/susan_workflow.png (graph2use=hierarchical, simple_form=True). Out[ ]: We see that the workflow has an inputnode and an outputnode. While not strictly necessary, this is standard practice for workflows (especially those that are intended to be used as nested workflows in the context of a longer analysis graph) and makes it more clear how to connect inputs and outputs from this workflow. Let's take a look at what those inputs and outputs are. Like Nodes, Workflows have inputs and outputs attributes that take a second sub-attribute corresponding to the specific node we want to make connections to. 
In [ ]: print("Inputs:\n", susan.inputs.inputnode) print("Outputs:\n", susan.outputs.outputnode) Inputs: fwhm = <undefined> in_files = <undefined> Outputs: smoothed_files = None Note that inputnode and outputnode are just conventions, and the Workflow object exposes connections to all of its component nodes: In [ ]: susan.inputs Out[ ]: inputnode = fwhm = <undefined> in_files = <undefined> args = <undefined> environ = {'FSLOUTPUTTYPE': 'NIFTI_GZ'} ignore_exception = False op_string = -mas out_data_type = <undefined> out_file = <undefined> output_type = NIFTI_GZ terminal_output = <undefined> meanfunc2 = args = <undefined> environ = {'FSLOUTPUTTYPE': 'NIFTI_GZ'} ignore_exception = False in_file2 = <undefined> op_string = -Tmean out_data_type = <undefined> out_file = <undefined> output_type = NIFTI_GZ suffix = _mean terminal_output = <undefined> median = args = <undefined> environ = {'FSLOUTPUTTYPE': 'NIFTI_GZ'} ignore_exception = False op_string = -k %s -p 50 output_type = NIFTI_GZ split_4d = <undefined> terminal_output = <undefined> merge = axis = hstack ignore_exception = False no_flatten = False ravel_inputs = False multi_inputs = function_str = def cartesian_product(fwhms, in_files, usans, btthresh): from nipype.utils.filemanip import ensure_list # ensure all inputs are lists in_files = ensure_list(in_files) fwhms = [fwhms] if isinstance(fwhms, (int, float)) else fwhms # create cartesian product lists (s_<name> = single element of list) cart_in_file = [ s_in_file for s_in_file in in_files for s_fwhm in fwhms ] cart_fwhm = [s_fwhm for s_in_file in in_files for s_fwhm in fwhms] cart_usans = [s_usans for s_usans in usans for s_fwhm in fwhms] cart_btthresh = [ s_btthresh for s_btthresh in btthresh for s_fwhm in fwhms ] return cart_in_file, cart_fwhm, cart_usans, cart_btthresh ignore_exception = False outputnode = smooth = args = <undefined> dimension = 3 environ = {'FSLOUTPUTTYPE': 'NIFTI_GZ'} ignore_exception = False out_file = <undefined> output_type = NIFTI_GZ 
terminal_output = <undefined>
use_median = 1

Let's see how we would write a new workflow that uses this nested smoothing step. The susan workflow actually expects to receive and output a list of files (it's intended to be executed on each of several runs of fMRI data). We'll cover exactly how that works in later tutorials, but for the moment we need to add an additional Function node to deal with the fact that susan outputs a list. We can use a simple lambda function to do this:

In [ ]:
from nipype import Function
extract_func = lambda list_out: list_out[0]
list_extract = Node(Function(input_names=["list_out"],
                             output_names=["out_file"],
                             function=extract_func),
                    name="list_extract")

Now let's create a new workflow susanflow that contains the susan workflow as a sub-node. To be sure, let's also recreate the skullstrip and the mask node from the examples above.

In [ ]:
# Initiate workflow with name and base directory
wf2 = Workflow(name="susanflow", base_dir="/output/working_dir")

# Create new skullstrip and mask nodes
skullstrip2 = Node(fsl.BET(in_file=in_file, mask=True), name="skullstrip")
mask2 = Node(fsl.ApplyMask(), name="mask")

# Connect the nodes to each other and to the susan workflow
wf2.connect([(skullstrip2, mask2, [("mask_file", "mask_file")]),
             (susan, list_extract, [("outputnode.smoothed_files", "list_out")]),
             (list_extract, mask2, [("out_file", "in_file")])])

# Specify the remaining input variables for the susan workflow
susan.inputs.inputnode.in_files = abspath(
    "/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz")
susan.inputs.inputnode.fwhm = 4

First, let's see what this new processing graph looks like.

In [ ]:
wf2.write_graph(dotfilename='/output/working_dir/full_susanflow.dot', graph2use='colored')
from IPython.display import Image
Image(filename="/output/working_dir/full_susanflow.png")

180514-09:28:53,822 workflow INFO: Generated workflow graph: /output/working_dir/full_susanflow.png (graph2use=colored, simple_form=True).

Out[ ]:

We can see how there is a nested smoothing workflow (blue) in the place of our previous smooth node. This provides a very detailed view, but what if you just wanted to give a higher-level summary of the processing steps?
After all, that is the purpose of encapsulating smaller streams in a nested workflow. That, fortunately, is an option when writing out the graph: In [ ]: wf2.write_graph(dotfilename='/output/working_dir/full_susanflow_toplevel.dot', graph2use='orig') from IPython.display import Image Image(filename="/output/working_dir/full_susanflow_toplevel.png") 180514-09:28:54,66 workflow INFO: Generated workflow graph: /output/working_dir/full_susanflow_toplevel.png (graph2use=orig, simple_form=True). Out[ ]: That's much more manageable. Now let's execute the workflow In [ ]: wf2.run() 180514-09:28:54,89 workflow INFO: Workflow susanflow settings: ['check', 'execution', 'logging', 'monitoring'] 180514-09:28:54,121 workflow INFO: Running serially. 180514-09:28:54,123 workflow INFO: [Node] Setting-up "susanflow.skullstrip" in "/output/working_dir/susanflow/skullstrip". 180514-09:28:54,139 workflow INFO: [Node] Cached "susanflow.skullstrip" - collecting precomputed outputs 180514-09:28:54,140 workflow INFO: [Node] "susanflow.skullstrip" found cached. 180514-09:28:54,141 workflow INFO: 180514-09:28:54,167 workflow INFO: 180514-09:28:54,167 workflow INFO: [Node] Setting-up "susanflow.susan_smooth.meanfunc2" in "/output/working_dir/susanflow/susan_smooth/meanfunc2". 180514-09:28:54,183 workflow INFO: [Node] "susanflow.susan_smooth.meanfunc2" found cached. 180514-09:28:54,184 workflow INFO: [Node] Setting-up "susanflow.susan_smooth.median" in "/output/working_dir/susanflow/susan_smooth/median". 180514-09:28:54,201 workflow INFO: [Node] "susanflow.susan_smooth.median" found cached. 180514-09:28:54,202 workflow INFO: [Node] Setting-up "susanflow.susan_smooth.merge" in "/output/working_dir/susanflow/susan_smooth/merge". 180514-09:28:54,207 workflow INFO: [Node] Cached "susanflow.susan_smooth.merge" - collecting precomputed outputs 180514-09:28:54,208 workflow INFO: [Node] "susanflow.susan_smooth.merge" found cached. 
180514-09:28:54,209 workflow INFO: [Node] Setting-up "susanflow.susan_smooth.multi_inputs" in "/output/working_dir/susanflow/susan_smooth/multi_inputs".
180514-09:28:54,224 workflow INFO: [Node] Cached "susanflow.susan_smooth.multi_inputs" - collecting precomputed outputs
180514-09:28:54,225 workflow INFO: [Node] "susanflow.susan_smooth.multi_inputs" found cached.
180514-09:28:54,226 workflow INFO: [Node] Setting-up "susanflow.susan_smooth.smooth" in "/output/working_dir/susanflow/susan_smooth/smooth".
180514-09:28:54,236 workflow INFO: [Node] "susanflow.susan_smooth.smooth" found cached.
180514-09:28:54,237 workflow INFO: [Node] Setting-up "susanflow.list_extract" in "/output/working_dir/susanflow/list_extract".
180514-09:28:54,261 workflow INFO: [Node] Cached "susanflow.list_extract" - collecting precomputed outputs
180514-09:28:54,262 workflow INFO: [Node] "susanflow.list_extract" found cached.
180514-09:28:54,263 workflow INFO:
180514-09:28:54,282 workflow INFO: [Node] Cached "susanflow.mask" - collecting precomputed outputs
180514-09:28:54,283 workflow INFO:

Out[ ]: <networkx.classes.digraph.DiGraph at 0x7f7d5cb44eb8>

As a final step, let's look at the input and the output. It's exactly what we wanted.

In [ ]:
f = plt.figure(figsize=(12, 4))
for i, e in enumerate([["/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz", 'input'],
                       ["/output/working_dir/susanflow/mask/sub-01_ses-test_T1w_smooth_masked.nii.gz", 'output']]):
    plot_slice(e[0])
    plt.title(e[1])

So, why are workflows so great?¶

So far, we've seen that you can build up rather complex analysis workflows. But at the moment, it's not been made clear why this is worth the extra trouble compared to writing a simple procedural script. To demonstrate the first added benefit of Nipype, let's just rerun the susanflow workflow from above and measure the execution times.
In [ ]: %time wf2.run() CPU times: user 4 µs, sys: 2 µs, total: 6 µs Wall time: 12.9 µs 180514-09:28:55,321 workflow INFO: Workflow susanflow settings: ['check', 'execution', 'logging', 'monitoring'] 180514-09:28:55,332 workflow INFO: Running serially. 180514-09:28:55,333 workflow INFO: [Node] Setting-up "susanflow.skullstrip" in "/output/working_dir/susanflow/skullstrip". 180514-09:28:55,336 workflow INFO: [Node] Cached "susanflow.skullstrip" - collecting precomputed outputs 180514-09:28:55,337 workflow INFO: [Node] "susanflow.skullstrip" found cached. 180514-09:28:55,338 workflow INFO: 180514-09:28:55,343 workflow INFO: 180514-09:28:55,344 workflow INFO: [Node] Setting-up "susanflow.susan_smooth.meanfunc2" in "/output/working_dir/susanflow/susan_smooth/meanfunc2". 180514-09:28:55,348 workflow INFO: [Node] "susanflow.susan_smooth.meanfunc2" found cached. 180514-09:28:55,349 workflow INFO: [Node] Setting-up "susanflow.susan_smooth.median" in "/output/working_dir/susanflow/susan_smooth/median". 180514-09:28:55,355 workflow INFO: [Node] "susanflow.susan_smooth.median" found cached. 180514-09:28:55,356 workflow INFO: [Node] Setting-up "susanflow.susan_smooth.merge" in "/output/working_dir/susanflow/susan_smooth/merge". 180514-09:28:55,360 workflow INFO: [Node] Cached "susanflow.susan_smooth.merge" - collecting precomputed outputs 180514-09:28:55,361 workflow INFO: [Node] "susanflow.susan_smooth.merge" found cached. 180514-09:28:55,362 workflow INFO: [Node] Setting-up "susanflow.susan_smooth.multi_inputs" in "/output/working_dir/susanflow/susan_smooth/multi_inputs". 180514-09:28:55,367 workflow INFO: [Node] Cached "susanflow.susan_smooth.multi_inputs" - collecting precomputed outputs 180514-09:28:55,368 workflow INFO: [Node] "susanflow.susan_smooth.multi_inputs" found cached. 180514-09:28:55,369 workflow INFO: [Node] Setting-up "susanflow.susan_smooth.smooth" in "/output/working_dir/susanflow/susan_smooth/smooth". 
180514-09:28:55,378 workflow INFO: [Node] "susanflow.susan_smooth.smooth" found cached.
180514-09:28:55,379 workflow INFO: [Node] Setting-up "susanflow.list_extract" in "/output/working_dir/susanflow/list_extract".
180514-09:28:55,383 workflow INFO: [Node] Cached "susanflow.list_extract" - collecting precomputed outputs
180514-09:28:55,384 workflow INFO: [Node] "susanflow.list_extract" found cached.
180514-09:28:55,385 workflow INFO:
180514-09:28:55,389 workflow INFO: [Node] Cached "susanflow.mask" - collecting precomputed outputs
180514-09:28:55,390 workflow INFO:

Out[ ]: <networkx.classes.digraph.DiGraph at 0x7f7d5cb44518>

That happened quickly! Workflows (actually, this is handled by the Node code) are smart and know if their inputs have changed from the last time they were run. If they have not, they don't recompute; they just turn around and pass out the resulting files from the previous run. This is also done on a node-by-node basis.

Let's go back to the first workflow example. What happens if we just tweak one thing:

In [ ]:
wf.inputs.smooth.fwhm = 1
wf.run()

180514-09:28:55,402 workflow INFO: Workflow smoothflow settings: ['check', 'execution', 'logging', 'monitoring']
180514-09:28:55,408 workflow INFO: Running serially.
180514-09:28:55,409 workflow INFO: [Node] Setting-up "smoothflow.smooth" in "/output/working_dir/smoothflow/smooth".
180514-09:28:55,410 workflow INFO: [Node] Outdated cache found for "smoothflow.smooth".
180514-09:28:55,418 workflow INFO: [Node] Running "smooth" ("nipype.interfaces.fsl.maths.IsotropicSmooth"), a CommandLine Interface with command: fslmaths /data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz -s 0.42466 /output/working_dir/smoothflow/smooth/sub-01_ses-test_T1w_smooth.nii.gz
180514-09:28:58,936 workflow INFO: [Node] Finished "smoothflow.smooth".
180514-09:28:58,937 workflow INFO: [Node] Setting-up "smoothflow.skullstrip" in "/output/working_dir/smoothflow/skullstrip".
180514-09:28:58,941 workflow INFO: [Node] Cached "smoothflow.skullstrip" - collecting precomputed outputs
180514-09:28:58,942 workflow INFO: [Node] "smoothflow.skullstrip" found cached.
180514-09:28:58,943 workflow INFO:
180514-09:28:58,947 workflow INFO: [Node] Outdated cache found for "smoothflow.mask".
180514-09:28:58,953 workflow INFO:
180514-09:29:00,30 workflow INFO:

Out[ ]: <networkx.classes.digraph.DiGraph at 0x7f7d5c21cfd0>

By changing an input value of the smooth node, this node will be re-executed. This triggers a cascade such that anything depending on the smooth node (in this case, the mask node) is also recomputed. However, the skullstrip node hasn't changed since the first time it ran, so it just coughed up its original files.

That's one of the main benefits of using Workflows: efficient recomputing.

Another benefit of Workflows is parallel execution, which is covered under Plugins and Distributed Computing. With Nipype it is very easy to scale a workflow up to an extremely parallel cluster computing environment. In this case, that just means that the skullstrip and smooth Nodes execute together, but when you scale up to Workflows with many subjects and many runs per subject, each can run together, such that (in the case of unlimited computing resources) you could process 50 subjects with 10 runs of functional data in essentially the time it would take to process a single run.

To emphasize the contribution of Nipype here: you can write and test your workflow on one subject computing on your local CPU, where it is easier to debug. Then, with the change of a single function parameter, you can scale your processing up to a 1000+ node SGE cluster.
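Conceptually, this parallelism falls straight out of the DAG: at any moment, every node whose inputs are satisfied can run at once, and the execution plugin only has to respect the edges. A rough pure-Python sketch of that idea (illustration only, not Nipype internals; stage names are made up):

```python
# Illustration only: independent DAG stages can execute concurrently.
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter  # Python 3.9+

# skullstrip and smooth are independent, so they may run at the same time;
# mask has to wait for both of them.
deps = {"skullstrip": {"t1w"}, "smooth": {"t1w"}, "mask": {"skullstrip", "smooth"}}

def run_stage(name):
    # stand-in for actually executing one processing node
    return "%s done" % name

ts = TopologicalSorter(deps)
ts.prepare()
results = {}
with ThreadPoolExecutor(max_workers=4) as pool:
    while ts.is_active():
        ready = ts.get_ready()  # every stage whose dependencies are satisfied
        for name, res in zip(ready, pool.map(run_stage, ready)):
            results[name] = res
            ts.done(name)

print(results["mask"])  # mask done
```

A Nipype plugin does the same bookkeeping on the real execution graph, dispatching ready nodes to local processes or cluster jobs instead of threads.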
Exercise 1¶

Create a workflow that connects three nodes for:

• skipping the first 4 dummy scans using fsl.ExtractROI
• applying motion correction using fsl.MCFLIRT (register to the mean volume, use NIFTI as output type)
• correcting for slice-wise acquisition using fsl.SliceTimer (assume that slices were acquired in interleaved order and the repetition time was 2.5 s; use NIFTI as output type)

In [ ]:
# write your solution here

In [ ]:
# importing Node and Workflow
from nipype import Workflow, Node
# importing all interfaces
from nipype.interfaces.fsl import ExtractROI, MCFLIRT, SliceTimer

Defining all nodes

In [ ]:
# extracting all time points but not the first four
extract = Node(ExtractROI(t_min=4, t_size=-1, output_type='NIFTI'),
               name="extract")

# using MCFLIRT for motion correction to the mean volume
mcflirt = Node(MCFLIRT(mean_vol=True, output_type='NIFTI'),
               name="mcflirt")

# correcting for slice-wise acquisition (interleaved order, repetition time of 2.5 s)
slicetimer = Node(SliceTimer(interleaved=True, output_type='NIFTI', time_repetition=2.5),
                  name="slicetimer")

Creating a workflow

In [ ]:
# Initiation of a workflow
wf_ex1 = Workflow(name="exercise1", base_dir="/output/working_dir")

# connect nodes with each other
wf_ex1.connect([(extract, mcflirt, [('roi_file', 'in_file')]),
                (mcflirt, slicetimer, [('out_file', 'in_file')])])

# provide an input file for the first extract node

Exercise 2¶

Visualize and run the workflow

In [ ]:
# write your solution here

We learned two methods of plotting graphs:

In [ ]:
wf_ex1.write_graph("workflow_graph.dot")
from IPython.display import Image
Image(filename="/output/working_dir/exercise1/workflow_graph.png")

180514-09:29:00,197 workflow INFO: Generated workflow graph: /output/working_dir/exercise1/workflow_graph.png (graph2use=hierarchical, simple_form=True).
Out[ ]:

And the more detailed graph:

In [ ]:
wf_ex1.write_graph(graph2use='flat')
from IPython.display import Image
Image(filename="/output/working_dir/exercise1/graph_detailed.png")

180514-09:29:00,426 workflow INFO: Generated workflow graph: /output/working_dir/exercise1/graph.png (graph2use=flat, simple_form=True).

Out[ ]:

If everything works well, we're ready to run the workflow:

In [ ]:
wf_ex1.run()

180514-09:29:00,437 workflow INFO: Workflow exercise1 settings: ['check', 'execution', 'logging', 'monitoring']
180514-09:29:00,444 workflow INFO: Running serially.
180514-09:29:00,445 workflow INFO: [Node] Setting-up "exercise1.extract" in "/output/working_dir/exercise1/extract".
180514-09:29:00,469 workflow INFO: [Node] Cached "exercise1.extract" - collecting precomputed outputs
180514-09:29:00,470 workflow INFO: [Node] "exercise1.extract" found cached.
180514-09:29:00,472 workflow INFO: [Node] Setting-up "exercise1.mcflirt" in "/output/working_dir/exercise1/mcflirt".
180514-09:29:00,483 workflow INFO: [Node] Cached "exercise1.mcflirt" - collecting precomputed outputs
180514-09:29:00,484 workflow INFO: [Node] "exercise1.mcflirt" found cached.
180514-09:29:00,485 workflow INFO: [Node] Setting-up "exercise1.slicetimer" in "/output/working_dir/exercise1/slicetimer".
180514-09:29:00,514 workflow INFO: [Node] Cached "exercise1.slicetimer" - collecting precomputed outputs
180514-09:29:00,516 workflow INFO: [Node] "exercise1.slicetimer" found cached.

Out[ ]: <networkx.classes.digraph.DiGraph at 0x7f7d5cb1f518>

We can now check the output:

In [ ]: !
ls -lh /output/working_dir/exercise1 total 412K -rw-r--r-- 1 neuro users 319K May 14 09:29 d3.js drwxr-xr-x 3 neuro users 4.0K May 3 07:31 extract -rw-r--r-- 1 neuro users 1006 May 14 09:29 graph1.json -rw-r--r-- 1 neuro users 435 May 14 09:29 graph_detailed.dot -rw-r--r-- 1 neuro users 18K May 14 09:29 graph_detailed.png -rw-r--r-- 1 neuro users 149 May 14 09:29 graph.dot -rw-r--r-- 1 neuro users 380 May 14 09:29 graph.json -rw-r--r-- 1 neuro users 15K May 14 09:29 graph.png -rw-r--r-- 1 neuro users 6.6K May 14 09:29 index.html drwxr-xr-x 3 neuro users 4.0K May 3 07:32 mcflirt drwxr-xr-x 3 neuro users 4.0K May 3 07:32 slicetimer -rw-r--r-- 1 neuro users 266 May 14 09:29 workflow_graph.dot -rw-r--r-- 1 neuro users 14K May 14 09:29 workflow_graph.png
Chapter 2 Review | CK-12 Foundation (CK-12 Geometry - Basic)

2.6: Chapter 2 Review

Difficulty Level: At Grade | Created by: CK-12

Symbol Toolbox

$\rightarrow$ if-then; $\sim$ not; $\therefore$ therefore

Keywords and Vocabulary

Inductive Reasoning
• Inductive Reasoning
• Conjecture
• Counterexample

Conditional Statements
• Conditional Statement (If-Then Statement)
• Hypothesis
• Conclusion
• Converse
• Inverse
• Contrapositive
• Biconditional Statement

Deductive Reasoning
• Logic
• Deductive Reasoning
• Law of Detachment
• Law of Contrapositive
• Law of Syllogism

Algebraic & Congruence Properties
• Reflexive Property of Equality
• Symmetric Property of Equality
• Transitive Property of Equality
• Substitution Property of Equality
• Addition Property of Equality
• Subtraction Property of Equality
• Multiplication Property of Equality
• Division Property of Equality
• Distributive Property
• Reflexive Property of Congruence
• Symmetric Property of Congruence
• Transitive Property of Congruence

Proofs about Angle Pairs & Segments
• Right Angle Theorem
• Same Angle Supplements Theorem
• Same Angle Complements Theorem

Review

Match the definition or description with the correct word.

1. $5 = x$ and $y + 4 = x$, then $5 = y + 4$ — A. Law of Contrapositive
2. An educated guess — B. Inductive Reasoning
3. $6(2a + 1) = 12a + 12$ — C. Inverse
4. $2, 4, 8, 16, 32, \ldots$ — D. Transitive Property of Equality
5. $\overline{AB} \cong \overline{CD}$ and $\overline{CD} \cong \overline{AB}$ — E. Counterexample
6. $\sim p \rightarrow \sim q$ — F. Conjecture
7. Conclusions drawn from facts. — G. Deductive Reasoning
8. If I study, I will get an "$A$" on the test. I did not get an $A$. Therefore, I didn't study. — H. Distributive Property
9.
$\angle A$ and $\angle B$ are right angles, therefore $\angle A \cong \angle B$. — I. Symmetric Property of Congruence
10. 2 disproves the statement: "All prime numbers are odd." — J. Right Angle Theorem
K. Definition of Right Angles

Texas Instruments Resources

In the CK-12 Texas Instruments Geometry FlexBook, there are graphing calculator activities designed to supplement the objectives for some of the lessons in this chapter. See http://www.ck12.org/flexr/chapter/9687.

Grades 8, 9, 10 | Feb 22, 2012 | Last Modified: Dec 11, 2014
# Given $p \equiv q \equiv 1 \pmod 4$, $\left(\frac{p}{q}\right) = 1$, is $N(\eta) = 1$ possible?

Given distinct primes $p$ and $q$, both congruent to $1 \pmod 4$, such that $$\left(\frac{p}{q}\right) = 1$$ and obviously also $$\left(\frac{q}{p}\right) = 1,$$ is it possible for the fundamental unit of $\mathcal{O}_{\mathbb{Q}(\sqrt{pq})}$ to have a norm of $1$? There is Theorem 11.5.7 in Alaca and Williams. (Something strange is going on with my keyboard in this paragraph, sorry.)

#### Solutions

Added: for small numbers, it seems about one out of three numbers $n = pq$ (twelve out of the smallest forty-four), with primes $p \equiv q \equiv 1 \pmod 4$ and $(p|q) = (q|p) = 1$, do give integer solutions to $x^2 - n y^2 = -1$. The first few are
$$145 = 5 \cdot 29, \; \; x=12, y=1$$
$$445 = 5 \cdot 89, \; \; x=4662, y=221$$
$$901 = 17 \cdot 53, \; \; x=30, y=1$$
$$1145 = 5 \cdot 229, \; \; x=1252, y=37$$
$$1313 = 13 \cdot 101, \; \; x=616, y=17$$
$$1745 = 5 \cdot 349, \; \; x=4052, y=97$$
$$2249 = 13 \cdot 173, \; \; x=197140, y=4157$$
It seems to hold steady at about one out of three: there were $5820$ successes out of the first $18000$ such numbers.
There were $99284$ out of the first $300000.$

HOWEVER: $205 = 5 \cdot 41,$ and there are no integer solutions to $$x^2 - 205 y^2 = -1.$$ More to the point, there are no integer solutions to $$x^2 + x y - 51 y^2 = -1.$$ With reference to the second screen capture below, with $x=20$ and $y=3,$ $$20^2 + 20 \cdot 3 - 51 \cdot 3^2 = 1.$$ Nor are there integer solutions to $$x^2 + 13 x y - 9 y^2 = -1.$$

$221 = 13 \cdot 17,$ and there are no integer solutions to $$x^2 - 221 y^2 = -1.$$ There are also no integer solutions to $$x^2 + x y - 55 y^2 = -1.$$ With reference to the third screen capture below, with $x=7$ and $y=1,$ $$7^2 + 7 \cdot 1 - 55 \cdot 1^2 = 1.$$ Nor to $$x^2 + 13 x y - 13 y^2 = -1.$$

Screen captures from http://www.numbertheory.org/php/unit.html

Ummm, here is the 205 computation in my language:

=========================================

jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./indefCycle 1 1 -51

  0  form    1   1  -51   delta  0
  1  form  -51  -1    1   delta  6
  2  form    1  13   -9

  -1  -6
   0  -1

To Return
  -1   6
   0  -1

  0  form    1  13   -9   delta  -1  ambiguous
  1  form   -9   5    5   delta   1
  2  form    5   5   -9   delta  -1  ambiguous
  3  form   -9  13    1   delta  13
  4  form    1  13   -9

form   1 x^2 + 13 x y  -9 y^2

minimum was 1, rep x = 1, y = 0;  disc 205,  dSqrt 14.317821063,  M_Ratio 205

Automorph, written on right of Gram matrix:
  2  27
  3  41

=========================================

And here is 221:

=========================================

jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./indefCycle 1 1 -55

  0  form    1   1  -55   delta  0
  1  form  -55  -1    1   delta  6
  2  form    1  13  -13

  -1  -6
   0  -1

To Return
  -1   6
   0  -1

  0  form    1  13  -13   delta  -1  ambiguous
  1  form  -13  13    1   delta  13  ambiguous
  2  form    1  13  -13

form   1 x^2 + 13 x y  -13 y^2

minimum was 1, rep x = 1, y = 0;  disc 221,  dSqrt 14.866068747,  M_Ratio 221

=========================================

My Theorem 1 is Theorem 11.5.5 on page 279 of Alaca and Williams. My Theorem 2 is Theorem 11.5.7 on page 286 of Alaca and Williams.
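None of the code above is needed to reproduce the counts; the solvability test is classical. Here is a short Python sketch of my own (the function names are mine, not from the original answer), using the standard fact that $x^2 - n y^2 = -1$ has integer solutions exactly when the continued-fraction expansion of $\sqrt{n}$ has odd period:

```python
from math import isqrt

def negative_pell_solvable(n):
    """Decide whether x^2 - n*y^2 = -1 has an integer solution, via the
    classical criterion: solvable iff the continued-fraction expansion of
    sqrt(n) has odd period (for n a positive non-square integer)."""
    a0 = isqrt(n)
    if a0 * a0 == n:
        return False
    m, d, a = 0, 1, a0
    period = 0
    while a != 2 * a0:          # the period of sqrt(n) ends when a_k = 2*a_0
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        period += 1
    return period % 2 == 1

def smallest_solution(n, y_max=10_000):
    """Brute-force the solution with smallest y > 0 (small cases only)."""
    for y in range(1, y_max + 1):
        x2 = n * y * y - 1
        x = isqrt(x2)
        if x * x == x2:
            return x, y
    return None

# Solvable examples from the list above, and the two failures:
assert negative_pell_solvable(145) and smallest_solution(145) == (12, 1)
assert negative_pell_solvable(901) and smallest_solution(901) == (30, 1)
assert not negative_pell_solvable(205)   # 205 = 5 * 41
assert not negative_pell_solvable(221)   # 221 = 13 * 17
```

Running `negative_pell_solvable` over all qualifying $n = pq$ should reproduce the roughly one-in-three success rate reported above.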
It turns out Theorem 2 is due to Dirichlet, 1834. I keep forgetting why $x^2 + xy - k y^2$ dominates $u^2 - (4k+1)v^2.$ One step: take $x = u-v, \; y = 2v.$ Then $$x^2 + xy - k y^2 = u^2 - 2 uv + v^2 + 2 u v - 2 v^2 - 4 k v^2 = u^2 - (4k+1) v^2.$$

THEOREM 1: With prime $p \equiv 1 \pmod 4,$ there is always a solution to $$x^2 - p y^2 = -1$$ in integers. The proof is from Mordell, Diophantine Equations, pages 55-56.

PROOF: Take the smallest integer pair $T > 1, U > 0$ such that $$T^2 - p U^2 = 1.$$ We know that $T$ is odd and $U$ is even. So, we have the integer equation $$\left( \frac{T+1}{2} \right) \left( \frac{T-1}{2} \right) = p \left( \frac{U}{2} \right)^2.$$ We have $$\gcd \left( \left( \frac{T+1}{2} \right), \left( \frac{T-1}{2} \right) \right) = 1.$$ Indeed, $$\left( \frac{T+1}{2} \right) - \left( \frac{T-1}{2} \right) = 1.$$ There are now two cases, by unique factorization in integers:

$$\mbox{(A):} \; \; \; \left( \frac{T+1}{2} \right) = p a^2, \; \; \left( \frac{T-1}{2} \right) = b^2$$

$$\mbox{(B):} \; \; \; \left( \frac{T+1}{2} \right) = a^2, \; \; \left( \frac{T-1}{2} \right) = p b^2$$

Now, in case (B), we find that $(a,b)$ are smaller than $(T,U),$ but $T \geq 3, \; a > 1,$ and $a^2 - p b^2 = 1.$ This is a contradiction, as our hypothesis is that $(T,U)$ is minimal. As a result, case (A) holds, with evident $$p a^2 - b^2 = \left( \frac{T+1}{2} \right) - \left( \frac{T-1}{2} \right) = 1,$$ so $$b^2 - p a^2 = -1.$$

THEOREM 2: With primes $p \neq q,$ with $p \equiv q \equiv 1 \pmod 4$ and Legendre symbols $(p|q) = (q|p) = -1,$ there is always a solution to $$x^2 - pq y^2 = -1$$ in integers. The proof is from Mordell, Diophantine Equations, pages 55-56.

PROOF: Take the smallest integer pair $T > 1, U > 0$ such that $$T^2 - pq U^2 = 1.$$ We know that $T$ is odd and $U$ is even.
So, we have the integer equation $$\left( \frac{T+1}{2} \right) \left( \frac{T-1}{2} \right) = pq \left( \frac{U}{2} \right)^2.$$ We have $$\gcd \left( \left( \frac{T+1}{2} \right), \left( \frac{T-1}{2} \right) \right) = 1.$$ There are now four cases, by unique factorization in integers:

$$\mbox{(1):} \; \; \; \left( \frac{T+1}{2} \right) = a^2, \; \; \left( \frac{T-1}{2} \right) = pq b^2$$

$$\mbox{(2):} \; \; \; \left( \frac{T+1}{2} \right) = p a^2, \; \; \left( \frac{T-1}{2} \right) = q b^2$$

$$\mbox{(3):} \; \; \; \left( \frac{T+1}{2} \right) = q a^2, \; \; \left( \frac{T-1}{2} \right) = p b^2$$

$$\mbox{(4):} \; \; \; \left( \frac{T+1}{2} \right) = pq a^2, \; \; \left( \frac{T-1}{2} \right) = b^2$$

Now, in case (1), we find that $(a,b)$ are smaller than $(T,U),$ but $T \geq 3, \; a > 1,$ and $a^2 - pq b^2 = 1.$ This is a contradiction, as our hypothesis is that $(T,U)$ is minimal.

In case (2), we have $$p a^2 - q b^2 = 1,$$ so $$p a^2 \equiv 1 \pmod q.$$ Thus $a$ is nonzero mod $q,$ and then $$p \equiv \left( \frac{1}{a} \right)^2 \pmod q.$$ This contradicts the hypothesis $(p|q) = -1.$

In case (3), we have $$q a^2 - p b^2 = 1,$$ so $$q a^2 \equiv 1 \pmod p.$$ Thus $a$ is nonzero mod $p,$ and then $$q \equiv \left( \frac{1}{a} \right)^2 \pmod p.$$ This contradicts the hypothesis $(q|p) = -1.$

As a result, case (4) holds, with evident $$pq a^2 - b^2 = \left( \frac{T+1}{2} \right) - \left( \frac{T-1}{2} \right) = 1,$$ so $$b^2 - pq a^2 = -1.$$

The viewpoint of real quadratic fields and norms of fundamental units is more concerned with $x^2 + xy - k y^2,$ where $4k+1 = pq$ in the second theorem. However, we showed above that the existence of a solution to $u^2 - (4k+1)v^2 = -1$ gives an immediate construction for a solution to $x^2 + xy - k y^2 = -1,$ namely $x = u-v, \; y = 2v.$

EXTRA: David Speyer reminded me of something Kaplansky wrote out for me, years and years ago. From David's comments, I now understand what Kap was trying to show me. If $x^2 + x y - 2 k y^2 = -1,$ then $(2x+y)^2 - (8k+1)y^2 = -4.$ This is impossible $\pmod 8$ unless $y$ is even, in which case $\left( x + \frac{y}{2} \right)^2 - (8k+1) \left( \frac{y}{2}\right)^2 = -1.$

There is more to it when we have $x^2 + x y - k y^2 = -1$ with odd $k.$ Take $$u = \frac{ 2 x^3 +3 x^2 y + (6k+3)x y^2 + (3k+1)y^3}{2},$$ $$v = \frac{3 x^2 y + 3 x y^2 + (k+1)y^3}{2}.$$ Then $$u^2 - (4k+1) v^2 = -1,$$ since $$u^2 - (4k+1) v^2 = \left( x^2 + x y - k y^2 \right)^3.$$
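The cubic identity is easy to spot-check with exact rational arithmetic. This sketch is my addition (the helper name `kaplansky_uv` is mine); it verifies the identity on a grid of integers and confirms the construction on one concrete odd-$k$ case:

```python
from fractions import Fraction

def kaplansky_uv(x, y, k):
    """The (u, v) of the construction above, as exact rationals."""
    u = Fraction(2*x**3 + 3*x**2*y + (6*k + 3)*x*y**2 + (3*k + 1)*y**3, 2)
    v = Fraction(3*x**2*y + 3*x*y**2 + (k + 1)*y**3, 2)
    return u, v

# Spot-check u^2 - (4k+1) v^2 = (x^2 + x y - k y^2)^3 on a grid of integers.
for x in range(-5, 6):
    for y in range(-5, 6):
        for k in range(-5, 6):
            u, v = kaplansky_uv(x, y, k)
            assert u*u - (4*k + 1)*v*v == (x*x + x*y - k*y*y)**3

# A concrete odd-k case: k = 3, (x, y) = (1, 1) has x^2 + xy - 3y^2 = -1,
# and the construction gives integers u = 18, v = 5 with 18^2 - 13*5^2 = -1.
u, v = kaplansky_uv(1, 1, 3)
assert (u, v) == (18, 5) and u*u - 13*v*v == -1
```

A grid check is not a proof, but since both sides are polynomials of bounded degree in $x, y, k$, agreement on a large enough grid does pin the identity down.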
http://www.cje.net.cn/CN/abstract/abstract23389.shtml
• Research Report •

Anatomical structure and environmental adaptability of Vaccinium hirtum Thunb. var. koreanum (Nakai) Kitamura

ZHANG Min1, WANG He-xin1,2*, XU Guo-hui1,2, LOU Xin1,2, ZHAO Li-na1, YAN Dong-ling1

(1College of Life Science and Technology, Dalian University, Dalian 116622, Liaoning, China; 2Institute of Modern Agriculture Research, Dalian University, Dalian 116622, Liaoning, China)

Online: 2018-09-10  Published: 2018-09-10

Abstract: Vaccinium hirtum Thunb. var. koreanum (Nakai) Kitamura is a rare and protected plant species in China. It is mainly distributed in the southern part of the Changbai Mountains and grows on cliffs or between rocks, where there is sufficient sunshine but also an extreme environment: barren, dry ground with cold, strong wind in winter. We know little about the anatomy and stress resistance of V. hirtum var. koreanum. In this study, conventional paraffin sectioning was used to observe the anatomical structure of the roots, stems and leaves of V. hirtum var. koreanum, aiming to reveal the relationship between anatomical structure and environmental adaptability. The results showed that the conducting tissue of perennial lateral roots was well developed. The epidermal and cortical cells of capillary roots were bulky, and there were abundant solid inclusions in endodermal cells. In addition, a large number of mycorrhizal fungi, such as ericoid mycorrhizal fungi, dark septate endophytes and arbuscular mycorrhizal fungi, were found in the capillary roots. These features help the roots absorb moisture and nutrients. There was a thick stratum corneum ((10.74 ± 0.89) μm) on the surface of the one-year-old stem, and there were large amounts of solid inclusions in both cortical thick-walled cells and pith cells.
The cortex had a special air cavity structure, whose thickness accounted for 30% of the radius of the stem. These characters help the plant retain water and adapt to low temperature. The phloem tissue of the stem was well developed, which facilitates the transport of water and nutrients. Furthermore, the upper and lower surfaces of the leaves were covered with a thick cuticle ((2.06 ± 0.75) and (2.04 ± 0.73) μm, respectively), and the phloem tissue of the veins was well developed, which would be beneficial for drought tolerance and water absorption. In conclusion, the anatomical structure of V. hirtum var. koreanum helps to explain how it can survive in the dry, cold, windy, hostile environment at the mountaintops of eastern Liaoning.
https://research.tudelft.nl/en/publications/pulse-strategy-for-suppressing-spreading-on-networks
Pulse strategy for suppressing spreading on networks

Qiang Liu, Xiaoyu Zhou, Piet Van Mieghem

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

In previous modelling efforts to understand the spreading process on networks, each node can infect its neighbors and cure spontaneously, and the curing is traditionally assumed to occur uniformly over time. This traditional curing is not optimal in terms of the trade-off between effectiveness and cost. A pulse immunization/curing strategy is more efficient and is broadly applied to suppress the spreading process. We analyze the pulse curing strategy on networks with the Susceptible-Infected (SI) process. We analytically compute the mean-field epidemic threshold $\tau_c^{p}$ of the pulse SI model and show that $\tau_c^{p}=\frac{1}{\lambda_1}\ln\frac{1}{1-p}$, where $\lambda_1$ and $p$ are the largest eigenvalue of the adjacency matrix of the contact graph and the fraction of nodes covered by each curing, respectively. These analytical results agree with simulations. Compared to the asynchronous curing process in the extensively studied Markovian SIS process, we show that the pulse curing strategy saves about 36.8%, i.e., $p\approx 0.632$, of the number of curing operations, invariant to the network structure. Our results may help policymakers design optimal containment strategies and minimize the controlling cost.

Original language: English
Article number: 38001
Pages: 38001-p1 - 38001-p4
Number of pages: 4
Journal: EPL
Volume: 127
Issue number: 3
DOI: https://doi.org/10.1209/0295-5075/127/38001
Publication status: Published - 2019
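As a quick illustration of the quoted threshold formula (my sketch, not code from the paper), $\tau_c^{p}$ can be computed from any symmetric adjacency matrix. Note that $p = 1 - 1/e \approx 0.632$ makes the pulse threshold coincide with the classical mean-field value $1/\lambda_1$:

```python
import numpy as np

def pulse_threshold(adjacency, p):
    """Mean-field epidemic threshold tau_c^p = ln(1/(1-p)) / lambda_1 of the
    pulse SI model, per the formula quoted in the abstract.

    adjacency: symmetric adjacency matrix of the (undirected) contact graph
    p: fraction of nodes covered by each curing pulse, with 0 < p < 1
    """
    lam1 = np.linalg.eigvalsh(adjacency)[-1]   # largest adjacency eigenvalue
    return np.log(1.0 / (1.0 - p)) / lam1

# Complete graph on 5 nodes: lambda_1 = 4.
K5 = np.ones((5, 5)) - np.eye(5)
assert np.isclose(pulse_threshold(K5, 0.5), np.log(2) / 4)

# At p = 1 - 1/e (about 0.632), the pulse threshold equals 1/lambda_1.
assert np.isclose(pulse_threshold(K5, 1 - 1 / np.e), 1 / 4)
```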
https://jeremykun.com/category/general/
# Mathematical Genealogy

As a fun side project to distract me from my abysmal progress on my book, I decided to play around with the math genealogy graph! For those who don't know, since 1996, mathematicians, starting with the labor of Harry Coonce et al., have been managing a database of all mathematicians. More specifically, they've been keeping track of who everyone's thesis advisors and subsequent students were. The result is a directed graph (with a currently estimated 200k nodes) that details the scientific lineage of mathematicians.

Anyone can view the database online and explore the graph by hand. In it are legends like Gauss, Euler, and Noether, along with the sizes of their descendant subtrees. Here's little ol' me.

It's fun to look at who is in your math genealogy, and I've spent more than a few minutes clicking until I got to the top of a tree (since a person can have multiple advisors, finding the top is time consuming), in the sort of random walk that inspired Google's PageRank and Wikipedia link-clicking games.

Inspired by a personalized demo by Colin Wright, I decided it would be fun to scrape the website, get a snapshot of the database, and then visualize and play with the graph. So I did. Here's a github repository with the raw data and scraping script. It includes a full json dump of what I scraped as of a few days ago. It's only ~60MB.

Then, using a combination of tools, I built a rudimentary visualizer. Go play with it!

A subset of my mathematical family tree.

A few notes:

1. It takes about 15 seconds to load before you can start playing. During this time, it loads a compressed version of the database into memory (starting from a mere 5MB). Then it converts the data into a more useful format, builds a rudimentary search index of the names, and displays the ancestors for Gauss.
2. The search index is the main bloat of the program, requiring about a gigabyte of memory to represent. Note that because I'm too lazy to set up a proper server and elasticsearch index, everything in this demo is in Javascript running in your browser. Here's the github repo for that code.
3. You can drag and zoom the graph.
4. There was a fun little bit of graph algorithms involved in this project, such as finding the closest common ancestor of two nodes. This is happening in a general digraph, not necessarily a tree, so there are some extra considerations. I isolated all the graph algorithms to one file.
5. People with even relatively few descendants generate really wide graphs. This is because each node in the directed graph is assigned to a layer, and the potentially 100+ grandchildren of a single node will be laid out in the same layer. I haven't figured out how to constrain the width of the rendered graph (anyone used dagre/dagre-d3?), nor did I try very hard.
6. The dagre layout package used here is a port of the graphviz library. It uses linear programming and the simplex algorithm to determine an optimal layout that penalizes crossed edges and edges that span multiple layers, among other things. Linear programming strikes again! For more details on this, see this paper outlining the algorithm.
7. The scraping algorithm was my first time using Python 3's asyncio features. The concepts of asynchronous programming are not strange to me, but somehow the syntax of this module is.

Feature requests, bugs, or ideas? Open an issue on Github or feel free to contribute a pull request! Enjoy.

# Duality for the SVM

This post is a sequel to Formulating the Support Vector Machine Optimization Problem.

## The Karush-Kuhn-Tucker theorem

Generic optimization problems are hard to solve efficiently. However, optimization problems whose objective and constraints have special structure often succumb to analytic simplifications.
For example, if you want to optimize a linear function subject to linear equality constraints, one can compute the Lagrangian of the system and find the zeros of its gradient. More generally, optimizing a linear function subject to linear equality and inequality constraints can be solved using various so-called "linear programming" techniques, such as the simplex algorithm.

However, when the objective is not linear, as is the case with SVM, things get harder. Likewise, if the constraints don't form a convex set you're (usually) out of luck from the standpoint of analysis. You have to revert to numerical techniques and cross your fingers. Note that the set of points satisfying a collection of linear inequalities forms a convex set, provided they can all be satisfied.

We are in luck. The SVM problem can be expressed as a so-called "convex quadratic" optimization problem, meaning the objective is a quadratic function and the constraints form a convex set (are linear inequalities and equalities). There is a neat theorem that addresses such problems, and it's the "convex quadratic" generalization of the Lagrangian method. The result is due to Karush, Kuhn, and Tucker (dubbed the KKT theorem), but we will state a more specific case that is directly applicable to SVM.

Theorem [Karush 1939, Kuhn-Tucker 1951]: Suppose you have an optimization problem in $\mathbb{R}^n$ of the following form:

$\displaystyle \min f(x), \text{ subject to } g_i(x) \leq 0, i = 1, \dots, m$

where $f$ is a differentiable function of the input variables $x$ and $g_1, \dots, g_m$ are affine (degree-1 polynomials). Suppose $z$ is a local minimum of $f$. Then there exist constants (called KKT or Lagrange multipliers) $\alpha_1, \dots, \alpha_m$ such that the following are true. Note the parenthetical labels contain many intentionally undefined terms.

1. $- \nabla f(z) = \sum_{i=1}^m \alpha_i \nabla g_i(z)$ (gradient of Lagrangian is zero)
2. $g_i(z) \leq 0$ for all $i = 1, \dots, m$ (primal constraints are satisfied)
3. $\alpha_i \geq 0$ for all $i = 1, \dots, m$ (dual constraints are satisfied)
4. $\alpha_i g_i(z) = 0$ for all $i = 1, \dots, m$ (complementary slackness conditions)

We'll discuss momentarily how to interpret these conditions, but first a few asides. A large chunk of the work in SVMs is converting the original, geometric problem statement, that of maximizing the margin of a linear separator, into a form suitable for this theorem. We did that last time. However, the conditions of this theorem also provide the structure for a more analytic algorithm, the Sequential Minimal Optimization algorithm, which allows us to avoid numerical methods. We'll see how this works explicitly next time when we implement SMO.

You may recall that for the basic Lagrangian, each constraint in the optimization problem corresponds to one Lagrange multiplier, and hence one term of the Lagrangian. Here it's largely the same—each constraint in the SVM problem (and hence each training point) corresponds to a KKT multiplier—but in addition each KKT multiplier corresponds to a constraint for a new optimization problem that this theorem implicitly defines (called the dual problem).

So the pseudocode of the Sequential Minimal Optimization algorithm is to start with some arbitrary separating hyperplane $w$, and find any training point $x_j$ that corresponds to a violated constraint. Fix $w$ so it works for $x_j$, and repeat until you can't find any more violated constraints.

Now to interpret those four conditions. The difficulty in this part of the discussion is in the notion of primal/dual problems. The "original" optimization problem is often called the "primal" problem.
While a "primal problem" can be either a minimization or a maximization (and there is a corresponding KKT theorem for each), we'll use the one of the form:

$\displaystyle \min f(x), \text{subject to } g_i(x) \leq 0, i = 1, \dots, m$

Next we define a corresponding "dual" optimization problem, which is a maximization problem whose objective and constraints are related to the primal in a standard, but tedious-to-write-down, way. In general, this dual maximization problem has the guarantee that its optimal solution (a max) is a lower bound on the optimal solution for the primal (a min). This can be useful in many settings. In the most pleasant settings, including SVM, you get an even stronger guarantee: the optimal solutions for the primal and dual problems have equal objective value. That is, the bound that the dual objective provides on the primal optimum is tight. In that case, the primal and dual are two equivalent perspectives on the same problem. Solving the dual provides a solution to the primal, and vice versa.

The KKT theorem implicitly defines a dual problem, which can only possibly be clear from the statement of the theorem if you're intimately familiar with duals and Lagrangians already. This dual problem has variables $\alpha = (\alpha_1, \dots, \alpha_m)$, one entry for each constraint of the primal. For KKT, the dual constraints are simply non-negativity of the variables

$\displaystyle \alpha_j \geq 0 \text{ for all } j$

And the objective for the dual is this nasty beast

$\displaystyle d(\alpha) = \inf_{x} L(x, \alpha)$

where $L(x, \alpha)$ is the generalized Lagrangian (which is simpler in this writeup because the primal has no equality constraints), defined as:

$\displaystyle L(x, \alpha) = f(x) + \sum_{i=1}^m \alpha_i g_i(x)$

While a proper discussion of primal and dual problems could fill a book, we'll have to leave it at that.
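To make the dual function $d(\alpha)$ concrete, here is a toy one-dimensional example of my own (not from the post): minimize $f(x) = x^2$ subject to $g(x) = 1 - x \leq 0$. The infimum over $x$ has a closed form, and the dual maximum equals the primal minimum:

```python
import numpy as np

# Toy primal: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
# The primal optimum is x* = 1 with f(x*) = 1.

def lagrangian(x, alpha):
    return x**2 + alpha * (1 - x)

def dual(alpha):
    # inf_x L(x, alpha): dL/dx = 2x - alpha = 0 gives x = alpha/2,
    # so d(alpha) = alpha - alpha^2 / 4.
    return lagrangian(alpha / 2, alpha)

alphas = np.linspace(0, 4, 401)
best_alpha = alphas[np.argmax(dual(alphas))]

assert np.isclose(best_alpha, 2.0)   # dual maximizer alpha* = 2
assert np.isclose(dual(2.0), 1.0)    # dual max = primal min: zero duality gap
# Complementary slackness holds: alpha* > 0 and g(x*) = 1 - 1 = 0 is tight.
```

The zero duality gap here is exactly the "even stronger guarantee" described above; it is what makes solving the dual a legitimate route to the primal optimum.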
If you want to journey deeper into this rabbit hole, these notes give a great introduction from the perspective of the classical Lagrangian, without any scarring.

But we can begin to see why the KKT conditions are the way they are. The first requires the generalized Lagrangian to have gradient zero. Just like with classical Lagrangians, this means the primal objective is at a local minimum. The second requires the constraints of the primal problem to be satisfied. The third does the same for the dual constraints. The fourth is the interesting one, because it says that at an optimal solution, the primal and dual constraints are intertwined.

4. $\alpha_i g_i(z) = 0$ for all $i = 1, \dots, m$ (complementary slackness conditions)

More specifically, these "complementary slackness" conditions require that for each $i$, either the dual constraint $\alpha_i \geq 0$ is actually tight ($\alpha_i = 0$), or else the primal constraint $g_i$ is tight. At least one of the two must be exactly at the limit (equal to zero, not strictly less than). The "product equals zero means one factor is zero" trick comes in handy here to express an OR, despite haunting generations of elementary algebra students.

In terms of the SVM problem, complementary slackness translates to the fact that, for the optimal separating hyperplane $w$, if a data point doesn't have functional margin exactly 1, then that data point isn't a support vector. Indeed, when $\alpha_i = 0$ we'll see in the next section how that affects the corresponding training point $x_i$.

## The nitty gritty for SVM

Now that we've recast the SVM into a form suitable for the KKT theorem, let's compute the dual and understand how these dual constraints are related to the optimal solution of the primal SVM problem.
The primal problem statement is

$\displaystyle \min_{w} \frac{1}{2} \| w \|^2$

subject to the constraints that all $m$ training points $x_1, \dots, x_m$ with training labels $y_1, \dots, y_m$ satisfy

$\displaystyle (\langle w, x_i \rangle + b) \cdot y_i \geq 1,$

which we can rewrite as

$\displaystyle 1 - (\langle w, x_i \rangle + b) \cdot y_i \leq 0.$

The generalized Lagrangian is

$\displaystyle \begin{aligned} L(w, b, \alpha) &= \frac{1}{2} \| w \|^2 + \sum_{j=1}^m \alpha_j(1-y_j \cdot (\langle w, x_j \rangle + b)) \\ &= \frac{1}{2} \| w \|^2 + \sum_{j=1}^m \alpha_j - \sum_{j=1}^m \alpha_j y_j \cdot \langle w, x_j \rangle - \sum_{j=1}^m \alpha_j y_j b \end{aligned}$

We can compute each component of the gradient $\nabla L$, indexed by the variables $w_i, b,$ and $\alpha_j$. First, since this simplifies the Lagrangian a bit, we compute $\frac{\partial L}{\partial b}$:

$\displaystyle \frac{\partial L}{\partial b} = -\sum_{j=1}^m y_j \alpha_j$

The condition that the gradient is zero implies this entry is zero, i.e. $\sum_{j=1}^m y_j \alpha_j = 0$. In particular, and this will be a helpful reminder for the next post, we could add this constraint to the dual problem formulation without changing the optimal solution, allowing us to remove the term $b \sum_{j=1}^m y_j \alpha_j$ from the Lagrangian since it's zero. We will use this reminder again when we implement the Sequential Minimal Optimization algorithm next time.

Next, the individual components $w_i$ of $w$:

$\displaystyle \frac{\partial L}{\partial w_i} = w_i - \sum_{j=1}^m \alpha_j y_j x_{j,i}$

Note that $x_{j,i}$ is the $i$-th component of the $j$-th training point $x_j$, since this is the only part of the expression $\langle w, x_j \rangle$ that involves $w_i$. Setting all these equal to zero means we require $w = \sum_{j=1}^m \alpha_j y_j x_j$. This is interesting!
The optimality criterion, that the gradient of the Lagrangian must be zero, actually shows us how to write the optimal solution $w$ in terms of the Lagrange multipliers $\alpha_j$ and the training data/labels. It also hints at the fact that, because of the complementary slackness conditions, many of the $\alpha_i$ will turn out to be zero, and hence the optimal solution can be written as a sparse sum of the training examples.

And, now that we have written $w$ in terms of the $\alpha_j$, we can eliminate $w$ in the formula for the Lagrangian and get a dual optimization objective only in terms of the $\alpha_j$. Substituting (and combining the resulting two double sums whose coefficients are $\frac{1}{2}$ and $-1$), we get

$\displaystyle L(\alpha) = \sum_{j=1}^m \alpha_j - \frac{1}{2} \sum_{i=1}^m \sum_{j=1}^m \alpha_i \alpha_j y_i y_j \langle x_i, x_j \rangle$

Again foreshadowing, the fact that this form only depends on the inner products of the training points will allow us to replace the standard (linear) inner product with a nonlinear "inner-product-like" function, called a kernel, that will allow us to introduce nonlinearity into the decision boundary.

Now back to differentiating the Lagrangian.
For the remaining entries of the Lagrangian, where the variable is a KKT multiplier, the derivative coincides with the requirement that the constraints of the primal are satisfied:

$\displaystyle \frac{\partial L}{\partial \alpha_j} = 1 - y_j (\langle w, x_j \rangle + b) \leq 0$

Next, the KKT theorem says that one needs to have both feasibility of the dual:

$\displaystyle \alpha_j \geq 0 \text{ for all } j$

And finally the complementary slackness conditions,

$\displaystyle \alpha_j (1 - y_j (\langle w, x_j \rangle + b)) = 0 \text{ for all } j = 1, \dots, m$

To be completely clear, the dual problem for the SVM is just the generalized Lagrangian:

$\displaystyle \max_{\alpha} (\inf_x L(x, \alpha))$

subject to the non-negativity constraints:

$\displaystyle \alpha_i \geq 0$

and the one (superfluous reminder) equality constraint

$\displaystyle \sum_{j=1}^m y_j \alpha_j = 0$

All of the equality constraints above (the Lagrangian gradient being zero, complementary slackness, and this reminder constraint) are consequences of the KKT theorem. At this point, we're ready to derive and implement the Sequential Minimal Optimization algorithm and run it on some data. We'll do that next time.

# Formulating the Support Vector Machine Optimization Problem

## The hypothesis and the setup

This blog post has an interactive demo (mostly used toward the end of the post). The source for this demo is available in a Github repository.

Last time we saw how the inner product of two vectors gives rise to a decision rule: if $w$ is the normal to a line (or hyperplane) $L$, the sign of the inner product $\langle x, w \rangle$ tells you whether $x$ is on the same side of $L$ as $w$.

Let's translate this to the parlance of machine learning. Let $x \in \mathbb{R}^n$ be a training data point, and $y \in \{ 1, -1 \}$ its label (green and red, in the images in this post).
Suppose you want to find a hyperplane which separates all the points with $-1$ labels from those with $+1$ labels (assume for the moment that this is possible). For this and all examples in this post, we'll use data in two dimensions, but the math will apply to any dimension.

Some data labeled red and green, which is separable by a hyperplane (line).

The hypothesis we're proposing to separate these points is a hyperplane, i.e. a linear subspace that splits all of $\mathbb{R}^n$ into two halves. The data that represents this hyperplane is a single vector $w$, the normal to the hyperplane, so that the hyperplane is defined by the solutions to the equation $\langle x, w \rangle = 0$.

As we saw last time, $w$ encodes the following rule for deciding if a new point $z$ has a positive or negative label.

$\displaystyle h_w(z) = \textup{sign}(\langle w, z \rangle)$

You'll notice that this formula only works for the normals $w$ of hyperplanes that pass through the origin, and generally we want to work with data that can be shifted elsewhere. We can resolve this by adding a fixed term $b \in \mathbb{R}$—often called a bias because statisticians came up with it—so that the shifted hyperplane is the set of solutions to $\langle x, w \rangle + b = 0$. The shifted decision rule is:

$\displaystyle h_w(z) = \textup{sign}(\langle w, z \rangle + b)$

Now the hypothesis is the pair of vector-and-scalar $w, b$.

The key intuitive idea behind the formulation of the SVM problem is that there are many possible separating hyperplanes for a given set of labeled training data. For example, here is a gif showing infinitely many choices. The question is: how can we find the separating hyperplane that not only separates the training data, but generalizes as well as possible to new data?

The assumption of the SVM is that a hyperplane which separates the points, but is also as far away from any training point as possible, will generalize best.
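In code, the decision rule (and the geometric margin defined just below) are one-liners. This numpy sketch uses made-up toy data; nothing here comes from the post's interactive demo:

```python
import numpy as np

def decide(w, b, z):
    """The decision rule h_w(z) = sign(<w, z> + b)."""
    return int(np.sign(np.dot(w, z) + b))

def geometric_margin(w, b, points):
    """Shortest distance from any training point to the hyperplane <x,w>+b=0."""
    w = np.asarray(w, dtype=float)
    return min(abs(np.dot(w, x) + b) for x in points) / np.linalg.norm(w)

# Toy data around the line x + y = 2, i.e. w = (1, 1), b = -2:
w, b = np.array([1.0, 1.0]), -2.0
pos, neg = np.array([3.0, 2.0]), np.array([0.0, 1.0])

assert decide(w, b, pos) == 1       # 3 + 2 - 2 = 3 > 0
assert decide(w, b, neg) == -1      # 0 + 1 - 2 = -1 < 0
# Distances are 3/sqrt(2) and 1/sqrt(2), so the margin is 1/sqrt(2):
assert np.isclose(geometric_margin(w, b, [pos, neg]), 1 / np.sqrt(2))
```

Note the division by $\|w\|$ in `geometric_margin`: the raw inner product only measures distance faithfully when $w$ is a unit vector, which is exactly the scaling issue the tricks below address.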
While contrived, it’s easy to see that the separating hyperplane is as far as possible from any training point. More specifically, fix a labeled dataset of points $(x_i, y_i)$, or more precisely: $\displaystyle D = \{ (x_i, y_i) \mid i = 1, \dots, m, x_i \in \mathbb{R}^{n}, y_i \in \{1, -1\} \}$ And a hypothesis defined by the normal $w \in \mathbb{R}^{n}$ and a shift $b \in \mathbb{R}$. Let’s also suppose that $(w,b)$ defines a hyperplane that correctly separates all the training data into the two labeled classes, and we just want to measure its quality. That measure of quality is the length of its margin. Definition: The geometric margin of a hyperplane $w$ with respect to a dataset $D$ is the shortest distance from a training point $x_i$ to the hyperplane defined by $w$. The best hyperplane has the largest possible margin. This margin can even be computed quite easily using our work from last post. The distance from $x$ to the hyperplane defined by $w$ is the same as the length of the projection of $x$ onto $w$. And this is just computed by an inner product. If the tip of the $x$ arrow is the point in question, then $a$ is the dot product, and $b$ the distance from $x$ to the hyperplane $L$ defined by $w$. ## A naive optimization objective If we wanted to, we could stop now and define an optimization problem that would be very hard to solve. It would look like this: \displaystyle \begin{aligned} & \max_{w} \min_{x_i} \left | \left \langle x_i, \frac{w}{\|w\|} \right \rangle + b \right | & \\ \textup{subject to \ \ } & \textup{sign}(\langle x_i, w \rangle + b) = \textup{sign}(y_i) & \textup{ for every } i = 1, \dots, m \end{aligned} The formulation is hard. The reason is it’s horrifyingly nonlinear. In more detail: 1. The constraints are nonlinear due to the sign comparisons. 2. There’s a min and a max! A priori, we have to do this because we don’t know which point is going to be the closest to the hyperplane. 3. 
The objective is nonlinear in two ways: the absolute value, and the projection, which requires you to take a norm and divide. The rest of this post (and indeed, a lot of the work in grokking SVMs) is dedicated to converting this optimization problem to one in which the constraints are all linear inequalities and the objective is a single, quadratic polynomial we want to minimize or maximize. Along the way, we’ll notice some neat features of the SVM.

## Trick 1: linearizing the constraints

To solve the first problem, we can use a trick. We want to know whether $\textup{sign}(\langle x_i, w \rangle + b) = \textup{sign}(y_i)$ for a labeled training point $(x_i, y_i)$. The trick is to multiply them together. If their signs agree, then their product will be positive, otherwise it will be negative. So each constraint becomes: $\displaystyle (\langle x_i, w \rangle + b) \cdot y_i \geq 0$ This is still linear because $y_i$ is a constant (input) to the optimization problem. The variables are the coefficients of $w$. The left-hand side of this inequality is often called the functional margin of a training point, since, as we will see, it still works to classify $x_i$, even if $w$ is scaled so that it is no longer a unit vector. Indeed, the sign of the inner product is independent of how $w$ is scaled.

## Trick 1.5: the optimal solution is midway between classes

This small trick is to notice that if $w$ is the supposed optimal separating hyperplane, i.e. its margin is maximized, then it must necessarily be exactly halfway in between the closest points in the positive and negative classes. In other words, if $x_+$ and $x_-$ are the closest points in the positive and negative classes, respectively, then $\langle x_{+}, w \rangle + b = -(\langle x_{-}, w \rangle + b)$. If this were not the case, then you could adjust the bias, shifting the decision boundary along $w$ until they are exactly equal, and you will have increased the margin.
The closest point, say $x_+$, will have gotten farther away, and the closest point in the opposite class, $x_-$, will have gotten closer, but will not be closer than $x_+$.

## Trick 2: getting rid of the max + min

Resolving this problem essentially uses the fact that the hypothesis, which comes in the form of the normal vector $w$, has a degree of freedom in its length. To explain the details of this trick, we’ll set $b=0$, which simplifies the intuition. Indeed, in the animation below, I can increase or decrease the length of $w$ without changing the decision boundary. I have to keep my hand very steady (because I was too lazy to program it so that it only increases/decreases in length), but you can see the point. The line is perpendicular to the normal vector, and it doesn’t depend on the length. Let’s combine this with tricks 1 and 1.5. If we increase the length of $w$, that means the absolute values of the dot products $\langle x_i, w \rangle$ used in the constraints will all increase by the same factor (without changing their sign). Indeed, for any vector $a$ we have $\langle a, w \rangle = \|w \| \cdot \langle a, w / \| w \| \rangle$. In this world, the inner product measurement of distance from a point to the hyperplane is no longer faithful. The true distance is $\langle a, w / \| w \| \rangle$, but the distance measured by $\langle a, w \rangle$ is measured in units of $1 / \| w \|$. In this example, the two numbers next to the green dot represent the true distance of the point from the hyperplane, and the dot product of the point with the normal (respectively). The dashed lines are the solutions to <x, w> = 1. The magnitude of w is 2.2, the inverse of that is 0.46, and indeed 2.2 = 4.8 * 0.46 (we’ve rounded the numbers). Now suppose we had the optimal hyperplane and its normal $w$. No matter how near (or far) the nearest positively labeled training point $x$ is, we could scale the length of $w$ to force $\langle x, w \rangle = 1$.
This is the core of the trick. One consequence is that the actual distance from $x$ to the hyperplane is $\frac{1}{\| w \|} = \langle x, w / \| w \| \rangle$. The same as above, but with the roles reversed. We’re forcing the inner product of the point with w to be 1. The true distance is unchanged. In particular, if we force the closest point to have inner product 1, then all other points will have inner product at least 1. This has two consequences. First, our constraints change to $\langle x_i, w \rangle \cdot y_i \geq 1$ instead of $\geq 0$. Second, we no longer need to ask which point is closest to the candidate hyperplane! Because after all, we never cared which point it was, just how far away that closest point was. And now we know that it’s exactly $1 / \| w \|$ away. Indeed, if the optimal points weren’t at that distance, then that means the closest point doesn’t exactly meet the constraint, i.e. that $\langle x, w \rangle > 1$ for every training point $x$. We could then scale $w$ shorter until $\langle x, w \rangle = 1$, hence increasing the margin $1 / \| w \|$. In other words (the coup de grâce): provided all the constraints are satisfied, the optimization objective is just to maximize $1 / \| w \|$, a.k.a. to minimize $\| w \|$. This intuition is clear from the following demonstration, which you can try for yourself. In it I have a bunch of positively and negatively labeled points, and the line in the center is the candidate hyperplane with normal $w$ that you can drag around. Each training point has two numbers next to it. The first is the true distance from that point to the candidate hyperplane; the second is the inner product with $w$. The two blue dashed lines are the solutions to $\langle x, w \rangle = \pm 1$. To solve the SVM by hand, you have to ensure the second number is at least 1 for all green points, at most -1 for all red points, and then you have to make $w$ as short as possible.
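Checking a candidate hyperplane, as one does in this demo, amounts to a few lines of arithmetic. A sketch with made-up points and $b = 0$: every constraint $\langle x_i, w \rangle \cdot y_i \geq 1$ must hold, and then the geometric margin is exactly $1 / \| w \|$.

```python
# Verify a candidate normal w against the linearized constraints, and check
# that, once the closest point has inner product 1, the margin is 1/||w||.
# The data here is made up for illustration.
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

data = [([1.0, 1.0], 1), ([3.0, 1.0], 1), ([-1.0, -1.0], -1)]
w = [0.5, 0.5]  # candidate normal, scaled so the closest point has inner product 1

print(all(dot(x, w) * y >= 1 for x, y in data))  # True: all constraints satisfied

norm_w = math.sqrt(dot(w, w))
margin = min(abs(dot(x, w)) for x, _ in data) / norm_w  # geometric margin
print(abs(margin - 1 / norm_w) < 1e-12)  # True: the margin is exactly 1/||w||
```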
As we’ve discussed, shrinking $w$ moves the blue lines farther away from the separator, but in order to satisfy the constraints the blue lines can’t go further than any training point. Indeed, the optimum will have those blue lines touching a training point on each side. I bet you enjoyed watching me struggle to solve it. And while it’s probably not the optimal solution, the idea should be clear. The final note is that, since we are now minimizing $\| w \|$, a formula which includes a square root, we may as well minimize its square $\| w \|^2 = \sum_j w_j^2$. We will also multiply the objective by $1/2$, because when we eventually analyze this problem we will take a derivative, and the square in the exponent and the $1/2$ will cancel.

## The final form of the problem

Our optimization problem is now the following (including the bias again): \displaystyle \begin{aligned} & \min_{w} \frac{1}{2} \| w \|^2 & \\ \textup{subject to \ \ } & (\langle x_i, w \rangle + b) \cdot y_i \geq 1 & \textup{ for every } i = 1, \dots, m \end{aligned} This is much simpler to analyze. The constraints are all linear inequalities (which, because of linear programming, we know are tractable to optimize). The objective to minimize, however, is a convex quadratic function of the input variables—a sum of squares of the inputs. Such problems are generally called quadratic programming problems (or QPs, for short). There are general methods to find solutions! However, they often suffer from numerical stability issues and have less-than-satisfactory runtime. Luckily, the form in which we’ve expressed the support vector machine problem is specific enough that we can analyze it directly, and find a way to solve it without appealing to general-purpose numerical solvers. We will tackle this problem in a future post (planned two posts after this one). Before we close, let’s just make a few more observations about the solution to the optimization problem.
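As a sanity check, the final form is small enough to hand to a generic constrained optimizer. A sketch on a made-up two-point dataset, assuming scipy is available (this is just to see the optimum, not how SVMs are solved in practice):

```python
# Hand the final quadratic program to scipy's SLSQP solver on toy data.
import numpy as np
from scipy.optimize import minimize

X = np.array([[2.0, 0.0], [0.0, 0.0]])  # one training point per class
y = np.array([1.0, -1.0])

# Variables v = (w_1, w_2, b); the objective is (1/2)||w||^2.
objective = lambda v: 0.5 * np.dot(v[:2], v[:2])

# One linearized constraint y_i * (<x_i, w> + b) - 1 >= 0 per training point.
constraints = [
    {"type": "ineq", "fun": lambda v, i=i: y[i] * (np.dot(X[i], v[:2]) + v[2]) - 1.0}
    for i in range(len(X))
]

res = minimize(objective, x0=np.zeros(3), constraints=constraints)
w, b = res.x[:2], res.x[2]
# Expected optimum: w = (1, 0) and b = -1, so the margin 1/||w|| is 1
# and both points sit on the lines <x, w> + b = +/- 1.
print(np.round(w, 3), np.round(b, 3))
```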
## Support Vectors

In Trick 1.5 we saw that the optimal separating hyperplane has to be exactly halfway between the two closest points of opposite classes. Moreover, we noticed that, provided we’ve scaled $\| w \|$ properly, these closest points (there may be multiple for positive and negative labels) have to be exactly “distance” 1 away from the separating hyperplane. Another way to phrase this without putting “distance” in scare quotes is to say that, if $w$ is the normal vector of the optimal separating hyperplane, the closest points lie on the two lines $\langle x_i, w \rangle + b = \pm 1$. Now that we have some intuition for the formulation of this problem, it isn’t a stretch to realize the following. While a dataset may include many points from either class on these two lines $\langle x_i, w \rangle + b = \pm 1$, the optimal hyperplane itself does not depend on any of the other points except these closest points. This fact is enough to give these closest points a special name: the support vectors. We’ll actually prove that support vectors “are all you need” with full rigor and detail next time, when we cast the optimization problem in this post into the “dual” setting. To avoid vague names, the formulation described in this post is called the “primal” problem. The dual problem is derived from the primal problem, with special variables and constraints chosen based on the primal variables and constraints. Next time we’ll describe in brief detail what the dual does and why it’s important, but we won’t have nearly enough time to give a full understanding of duality in optimization (such a treatment would fill a book). When we compute the dual of the SVM problem, we will see explicitly that the hyperplane can be written as a linear combination of the support vectors. As such, once you’ve found the optimal hyperplane, you can compress the training set into just the support vectors, and reproducing the same optimal solution becomes much, much faster.
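Concretely, once an optimal $(w, b)$ is in hand, the support vectors are just the training points whose functional margin is (numerically) 1. A toy sketch, with made-up data and a hand-picked optimum:

```python
# Pick out the support vectors: the training points with functional margin
# y_i * (<x_i, w> + b) exactly 1. The data and the optimum below are hand-picked
# for illustration (the classes are split by the vertical line x = 1).

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

data = [([2.0, 0.0], 1), ([3.0, 1.0], 1), ([0.0, 0.0], -1), ([-1.0, 2.0], -1)]
w, b = [1.0, 0.0], -1.0  # the max-margin separator for this data

support_vectors = [x for x, y in data if abs(y * (dot(x, w) + b) - 1.0) < 1e-9]
print(support_vectors)  # [[2.0, 0.0], [0.0, 0.0]]: the points on <x, w> + b = +/- 1
```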
You can also use the support vectors to augment the SVM to incorporate streaming data (throw out all non-support vectors after every retraining). Eventually, when we get to implementing the SVM from scratch, we’ll see all this in action. Until then!

# Testing Polynomial Equality

Problem: Determine if two polynomial expressions represent the same function. Specifically, if $p(x_1, x_2, \dots, x_n)$ and $q(x_1, x_2, \dots, x_n)$ are polynomials with inputs, outputs, and coefficients in a field $F$, where $|F|$ is sufficiently large, then the problem is to determine if $p(\mathbf{x}) = q(\mathbf{x})$ for every $\mathbf{x} \in F^n$, in time polynomial in the number of bits required to write down $p$ and $q$. Solution: Let $d$ be the maximum degree of all terms in $p, q$. Choose a finite set $S \subset F$ with $|S| > 2d$. Repeat the following process 100 times:

1. Choose inputs $z_1, z_2, \dots, z_n \in S$ uniformly at random.
2. Check if $p(z_1, \dots, z_n) = q(z_1, \dots, z_n)$.

If every single time the two polynomials agree, accept the claim that they are equal. If they disagree on any input, reject. You will be wrong with probability at most $2^{-100}$. Discussion: At first glance it’s unclear why this problem is hard. If you have two representations of polynomials $p, q$, say expressed in algebraic notation, why can’t you just do the algebra to convert them both into the same format, and see if they’re equal? Unfortunately, that conversion can take exponential time. For example, suppose you have a polynomial $p(x) = (x+1)^{1000}$. Though it only takes a few bits to write down, expressing it in a “canonical form,” often the monomial form $a_0 + a_1x + \dots + a_d x^d$, would require exponentially many more bits than the original representation. In general, it’s unknown how to algorithmically transform polynomials into a “canonical form” (so that they can be compared) in subexponential time.
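The randomized test from the Solution above sidesteps canonical forms entirely. A single-variable sketch over the integers (standing in for a large field); the degree here is small enough that we can also build the expanded form with `math.comb` to compare against, but the test itself never expands anything:

```python
# Randomized polynomial identity testing: treat p and q as black boxes and
# evaluate them at random points of a set S with |S| > 2d.
import math
import random

def equal_as_polynomials(p, q, degree_bound, trials=100):
    """Accept iff p and q agree on `trials` random points of S, |S| > 2d.
    By Schwartz-Zippel, a wrong accept happens with probability < 2**(-trials)."""
    S = range(2 * degree_bound + 1)  # any set with more than 2d elements works
    for _ in range(trials):
        z = random.choice(S)
        if p(z) != q(z):
            return False  # a disagreement is a certificate that p != q
    return True

p = lambda x: (x + 1) ** 10
q = lambda x: sum(math.comb(10, k) * x ** k for k in range(11))  # expanded form of p
r = lambda x: x ** 10 + 1                                        # a different polynomial

print(equal_as_polynomials(p, q, 10))  # True
print(equal_as_polynomials(p, r, 10))  # False (except with vanishing probability)
```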
Instead, the best we know how to do is treat the polynomials as black boxes and plug values into them. Indeed, for single-variable polynomials it’s well known that a nonzero degree $d$ polynomial has at most $d$ roots. A similar result is true for polynomials with many variables, and so we can apply that result to the polynomial $p - q$ to determine if $p = q$. This theorem is so important (and easy to prove) that it deserves the name of lemma. The Schwartz-Zippel lemma. Let $p$ be a nonzero polynomial in $n$ variables of total degree $d \geq 0$ over a field $F$. Let $S$ be a finite subset of $F$ and let $z_1, \dots, z_n$ be chosen uniformly at random from $S$. The probability that $p(z_1, \dots, z_n) = 0$ is at most $d / |S|$. Proof. By induction on the number of variables $n$. For the case of $n=1$, it’s the usual fact that a single-variable polynomial can have at most $d$ roots. Now for the inductive step, assume this is true for all polynomials with $n-1$ variables, and we will prove it for $n$ variables. Write $p$ as a polynomial in the variable $x_1$, whose coefficients are other polynomials: $\displaystyle p(x_1, \dots, x_n) = \sum_{k=0}^d Q_k(x_2, \dots, x_n) x_1^k$ Here we’ve grouped $p$ by the powers of $x_1$, so that the $Q_k$ are the coefficients of each $x_1^k$. This is useful because we’ll be able to apply the inductive hypothesis to one of the $Q_k$’s, which have fewer variables. Indeed, we claim there must be some $Q_k$ which is nonzero for $k > 0$. Clearly, since $p$ is not the zero polynomial, some $Q_k$ must be nonzero. If the only nonzero $Q_k$ is $Q_0$, then we’re done because $p$ doesn’t depend on $x_1$ at all. Otherwise, take the largest $k > 0$ for which $Q_k$ is nonzero. The degree of $Q_k$ is at most $d-k$, because the term $x_1^k Q_k$ has degree at most $d$. By the inductive hypothesis, if we choose $z_2, \dots, z_n$ and plug them into $Q_k$, we get zero with probability at most $\frac{d-k}{|S|}$.
The crucial part is that if this coefficient polynomial is nonzero, then the entire polynomial $p$, viewed as a polynomial in $x_1$, is nonzero. This is true even if an unlucky choice of $x_1 = z_1$ causes the resulting evaluation $p(z_1, \dots, z_n) = 0$. To think about it a different way, imagine we’re evaluating the polynomial in phases. In the first phase, we pick the $z_2, \dots, z_n$. We could also pick $z_1$ independently but not reveal what it is, for the sake of this story. Then we plug in the $z_2, \dots, z_n$, and the result is a one-variable polynomial whose largest coefficient is $Q_k(z_2, \dots, z_n)$. The inductive hypothesis tells us that this one-variable polynomial is the zero polynomial with probability at most $\frac{d-k}{|S|}$. (It’s probably a smaller probability, since all the coefficients would have to be zero, but we’re just considering the largest one for the sake of generality and simplicity.) Indeed, the resulting polynomial after we plug in $z_2, \dots, z_n$ has degree $k$, so if it is nonzero we can apply the single-variable case to it as well, and the probability that it’s zero for a random choice of $z_1$ is at most $k / |S|$. Finally, the probability that both occur can be computed using basic probability algebra. Let $A$ be the event that, for these $z_i$ inputs, $Q_k$ is zero, and $B$ the event that $p$ is zero for the $z_i$ and the additional $z_1$. Then $\textup{Pr}[B] = \textup{Pr}[B \textup{ and } A] + \textup{Pr}[B \textup{ and } !A] = \textup{Pr}[B \mid A] \textup{Pr}[A] + \textup{Pr}[B \mid !A] \textup{Pr}[!A]$. Note that the two quantities above that we don’t know are $\textup{Pr}[B \mid A]$ and $\textup{Pr}[!A]$, so we’ll bound them from above by 1. The rest of the quantities add up to exactly what we want, and so $\displaystyle \textup{Pr}[B] \leq \frac{d-k}{|S|} + \frac{k}{|S|} = \frac{d}{|S|},$ which proves the theorem.
$\square$ While this theorem is almost trivial to prove (it’s elementary induction, of the obvious kind), it can be used to solve polynomial identity testing, as well as to find perfect matchings in graphs and to test numbers for primality. But while the practical questions are largely solved (it’s hard to imagine a setting where you’d need faster primality testing than the existing randomized algorithms), the theory and philosophy of the result is much more interesting. Indeed, checking two polynomials for equality has no known deterministic polynomial-time algorithm. It’s one of a small class of problems, like integer factoring and the discrete logarithm, which are not known to be efficiently solvable in theory, but are also not known to be NP-hard, so there is still hope. The existence of this randomized algorithm increases hope (integer factorization sure doesn’t have one!). And more generally, the fact that there are so few natural problems in this class makes one wonder whether randomness is actually beneficial at all. If we define “efficient” as polynomial time, can every problem efficiently solvable with access to random bits also be solved without such access? In the computational complexity lingo, does P = BPP? Many experts think the answer is yes.
https://thefbiwarningscreens.fandom.com/wiki/Unknown_Company_Warning_Screen
## Late 1980s? Warning: On a white background, we see a fire engine red banner containing the white shadowed text "FBI" in the middle. Below it is the black text "WARNING", and "FEDERAL LAW PROVIDES SEVERE BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH 5 YEARS IN PRISON, SEVERE EMBARASSMENT AND POTENTIAL BAD LUCK, SO PLEASE DON'T COPY THIS VIDEO!" in a computerized font. It then cuts to another screen with a tangerine background and some more warning text. FX/SFX: None. Music/Sounds: None. Availability: Unknown. It is said to be on an unknown children's VHS. Editor's Note: This is the most hilariously lazy warning screen ever made, although this was very likely intentional.
https://www.ritchievink.com/
An intuitive introduction to Gaussian processes February 1, 2019 Christopher Fonnesbeck did a talk about Bayesian Non-parametric Models for Data Science using PyMC3 at PyCon 2018. In this talk, he glossed over Bayesian modeling, the neat properties of Gaussian distributions, and then quickly turned to the application of Gaussian Processes, a distribution over infinite functions. Wait, but what?! How does a Gaussian represent a function? I did not understand how, but the promise of these Gaussian Processes representing a distribution over nonlinear and nonparametric functions really intrigued me, and it therefore turned into a new subject for a post. Algorithm breakdown: Why do we call it Gradient Boosting? November 19, 2018 We were preparing a training at work about ensemble models. When we were discussing different techniques like bagging, boosting, and stacking, we also came upon the subject of gradient boosting. Intuitively, gradient boosting, by training on the residuals, made sense. However, the name gradient boosting did not make sense right away. In this post we explore the name of gradient boosting and, of course, also the model itself! Intuition Single decision tree Gradient boosting is often used as an optimization technique for decision trees. Build Facebook's Prophet in PyMC3; Bayesian time series analysis with Generalized Additive Models October 9, 2018 Last Algorithm Breakdown we built an ARIMA model from scratch and discussed the use cases of that kind of model. ARIMA models are great when you have stationary data and when you want to predict a few time steps into the future. A lot of business data, being generated by human processes, has weekly and yearly seasonalities (we, for instance, seem to work less on weekends and holidays) and shows peaks at certain events. Algorithm Breakdown: AR, MA and ARIMA models September 26, 2018 Time series are a quite unique topic within machine learning. In a lot of problems the dependent variable $y$, i.e.
the thing we want to predict, is dependent on very clear inputs, such as pixels of an image, words in a sentence, or the properties of a person's buying behavior. In time series these independent variables are often not known. For instance, in stock markets, we don't have a clear independent set of variables on which we can fit a model. Deploy any machine learning model serverless in AWS September 16, 2018 When a machine learning model goes into production, it is very likely to be idle most of the time. There are a lot of use cases where a model only needs to run inference when new data is available. If we do have such a use case and we deploy a model on a server, it will eagerly be checking for new data, only to be disappointed for most of its lifetime, and meanwhile you pay for the uptime of the server. Generative Adversarial Networks in Pytorch: The distribution of Art July 16, 2018 Generative adversarial networks seem to be able to generate amazing stuff. I wanted to do a small project with GANs and in the process create something fancy for the wall. Therefore I tried to train a GAN on a dataset of art paintings. This post I'll explore whether I'll succeed in getting a full-HD new Picasso on the wall. The pictures above give you a glimpse of some of the results from the model. Clustering data with Dirichlet Mixtures in Edward and Pymc3 June 5, 2018 Last post I described the Affinity Propagation algorithm. The reason I wrote about this algorithm was that I was interested in clustering data points without specifying k, i.e. the number of clusters present in the data. This post continues with the same fascination, however now we take a generative approach. In other words, we are going to examine which models could have generated the observed data. Through Bayesian inference we hope to find the hidden (latent) distributions that most likely generated the data points.
Algorithm Breakdown: Affinity Propagation May 18, 2018 On a project I worked on at the ANWB (Dutch roadside assistance company) we mined driving behavior data. We wanted to know how many persons were likely to drive a certain vehicle on a regular basis. Naturally, k-means clustering came to mind. The k-means algorithm finds clusters with the least inertia for a given k. A drawback is that often k is not known. For the question about the number of persons driving a car, this isn't that big of a problem, as we have a good estimate of what k should be. Transfer learning with Pytorch: Assessing road safety with computer vision April 12, 2018 For a project at Xomnia, I had the opportunity to do a cool computer vision assignment. We tried to predict the input of a road safety model. EuroRAP is such a model. In short, it works something like this. You take some cars, mount them with cameras, and drive around the road you're interested in. The 'Google Streetview'-like material you've collected is sent to a crowdsourced workforce (at Amazon they are called Mechanical Turks) to manually label the footage. Computer build me a bridge January 14, 2018 In earlier posts I've analyzed simple structures with a Python FEM package called anaStruct. And in this post I've used anaStruct to analyze a very non-linear roof ponding problem. Modelling a structure in Python may seem cumbersome in relation to some programs that offer a graphical user interface. For simple structures this may well be the case. However, now that we've got a simple way to programmatically model 2D structures, I was wondering if we could let a computer model these structures for us. (c) 2018 Ritchie Vink.
http://mathoverflow.net/revisions/19669/list
I was always under the impression that canonical meant, precisely, that no arbitrary choices were necessary, but that it was occasionally used less formally, in a more standard-English sort of way, to mean traditional/obvious/well-known. The informal meaning is usually used as a cheap way to avoid explaining something that's easier for the reader to guess anyway. Ex 1: Two vector spaces of the same dimension are isomorphic. The isomorphism is not canonical. Ex 2: A finite-dimensional vector space is canonically isomorphic to its double dual. Ex 3: Let $\pi: S^3 \to S^2$ be the canonical fibration. I never really liked it when people use canonical as in example 3. It seems like using it this flexibly detracts from the useful technical interpretation of the word. I've also heard some more complicated category-theoretic interpretations of what canonical means. But, after more scrutiny, it seems that these "definitions" are specific cases of the "no arbitrary choices" principle.
https://www.physicsoverflow.org/31023/separability-hilbert-space-implications-for-the-formalism
# Separability of a Hilbert space and its implications for the formalism of QM

In the text I'm using for QM, one of the properties listed for Hilbert space that is a mystery to me is the property that it is separable. Quoted from text (N. Zettili: Quantum Mechanics: Concepts and Applications, p. 81): There exists a Cauchy sequence $\psi_{n} \in H \ (n = 1, 2, \dots)$ such that for every $\psi$ of $H$ and $\varepsilon > 0$, there exists at least one $\psi_{n}$ of the sequence for which $$|| \psi - \psi _{n} || < \varepsilon.$$ I'm having a very hard time deciphering what this exactly means. From my initial research, this is basically demonstrating that Hilbert space admits countable orthonormal bases.

1. How does this fact follow from the above?
2. And what exactly is the importance of having a countable orthonormal basis to the formalism of QM?
3. What would be the implications if Hilbert space did not admit a countable orthonormal basis, for example?

This post imported from StackExchange Physics at 2015-05-13 19:02 (UTC), posted by SE-user Zack Related: this question and links therein. This post imported from StackExchange Physics at 2015-05-13 19:02 (UTC), posted by SE-user Qmechanic As shown by Solovay here, in a non-separable Hilbert space $H$ there may be probability measures that cannot be written, for every closed subspace $M$ of $H$, as $\mu (M)=\mathrm{Tr}[\rho \mathbb{1}_M]$, for some positive self-adjoint trace-class $\rho$ with trace 1 (density matrix). Here $\mathbb{1}_M$ denotes the orthogonal projection on $M$. [The proof of the existence of such "exotic measures" is undecidable in ZFC; however, it is equivalent to the existence of a (real-valued) measurable cardinal.] In some sense it means that in non-separable Hilbert spaces there may exist analogues to "normal quantum states" that are not density matrices$^\dagger$.
Remark: For normal quantum state I mean an ultraweakly continuous positive functional on the $C^*$-algebra of bounded operators that is interpreted to give their expectation value. $^\dagger$: I mean that even if these states are probability measures with the countable additivity property on orthogonal closed subspaces, they are not expressed as the trace of density matrices (while on separable Hilbert spaces this is always the case, and these measures are in one-to-one correspondence with normal states). This post imported from StackExchange Physics at 2015-05-13 19:02 (UTC), posted by SE-user yuggib answered Nov 25, 2014 by (360 points) For every infinite-dimensional Hilbert space there are states which cannot be represented by a density operator (the states that are not normal). This post imported from StackExchange Physics at 2015-05-13 19:02 (UTC), posted by SE-user jjcale @jjcale : edited to be more clear (in some sense on non-separable spaces there may be "normal states" that are not written as the trace of density operators). This post imported from StackExchange Physics at 2015-05-13 19:02 (UTC), posted by SE-user yuggib I usually see it in the reverse way, but it is a matter of taste. Hilbert spaces, in general, can have bases of arbitrarily high cardinality. The specific one used in QM is, by construction, isomorphic to the space L2, the space of square-integrable functions. From there you can show that this particular Hilbert space is separable, because it is a theorem that a Hilbert space is separable if and only if it has a countable orthonormal basis, and L2 has one. This post imported from StackExchange Physics at 2015-05-13 19:02 (UTC), posted by SE-user Wolphram jonny answered Nov 25, 2014 by (30 points) I see, thanks. A remaining question is, how is this fundamental to the underlying formalism of QM? I just want to see where exactly the physics is in working with separable Hilbert spaces.
This post imported from StackExchange Physics at 2015-05-13 19:02 (UTC), posted by SE-user Zack

The physics is not about separable Hilbert spaces; that came later. The physics is about square-integrable functions. But after that, mathematicians formalize the theory by choosing their preferred axioms. So, one thing is how the theory arose historically; another is how you like to formalize it in the "simplest" or most "aesthetic" way after you have it. This post imported from StackExchange Physics at 2015-05-13 19:02 (UTC), posted by SE-user Wolphram jonny

I am assuming you know that any formal system, or theory T, is equivalent to another theory T', in which axioms and theorems are exchanged. Both theories are equivalent from a formal or practical point of view. I have seen formalizations of QM that are as un-intuitive and un-historical as possible. That doesn't make them less good, just less intuitive. This post imported from StackExchange Physics at 2015-05-13 19:02 (UTC), posted by SE-user Wolphram jonny

@Zack: Have a look at the thread Qmechanic linked to: the separability is mostly only fundamental in the sense described in this answer, namely, that the theory evolved around L^2, which happens to be separable. This post imported from StackExchange Physics at 2015-05-13 19:02 (UTC), posted by SE-user Martin
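On question 1, the following is the standard textbook argument (it is only alluded to in the thread, not spelled out there): a countable dense sequence yields a countable orthonormal basis via Gram–Schmidt.

```latex
% Sketch: separability => a countable orthonormal basis.
% Let (\psi_n) be the dense sequence from the quoted definition.
% Discard every \psi_n lying in the closed span of its predecessors,
% and apply Gram--Schmidt to the survivors to obtain an orthonormal
% sequence (e_k). Then
\[
  \overline{\operatorname{span}}\,\{e_k\}
  \;=\; \overline{\operatorname{span}}\,\{\psi_n\}
  \;=\; H ,
\]
% since the \psi_n are dense in H, so (e_k) is a countable orthonormal
% basis. Conversely, finite rational linear combinations of a countable
% orthonormal basis form a countable dense subset, giving separability.
```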
http://mathhelpforum.com/math-challenge-problems/15427-problem-24-a.html
# Thread: Problem 24

1. ## Problem 24 Let $\displaystyle f$ be continuously differentiable on $\displaystyle \mathbb{R}$. Solve the integral equation: $\displaystyle [f(x)]^2 = \int_0^x [f(t)]^2+[f'(t)]^2\,dt$

2. Originally Posted by ThePerfectHacker Let $\displaystyle f$ be continuously differentiable on $\displaystyle \mathbb{R}$. Solve the integral equation: $\displaystyle [f(x)]^2 = \int_0^x [f(t)]^2+[f'(t)]^2\,dt$ Nice. RonL

3. So what's the answer, CaptainBlack?

4. Originally Posted by janvdl So what's the answer, CaptainBlack? Not telling; that would spoil the fun RonL

5. Originally Posted by ThePerfectHacker Let $\displaystyle f$ be continuously differentiable on $\displaystyle \mathbb{R}$. Solve the integral equation: $\displaystyle [f(x)]^2 = \int_0^x [f(t)]^2+[f'(t)]^2\,dt$ I would much rather deal with a differential equation. Hopefully this works. $\displaystyle \frac{d}{dx}[f(x)]^2=\frac{d}{dx}\int_0^x [f(t)]^2+[f'(t)]^2\,dt$ $\displaystyle 2[f(x)]f'(x) = [f(x)]^2+[f'(x)]^2$ $\displaystyle [f(x)]^2-2 [f(x) \cdot f'(x)] + [f'(x)]^2 = 0$ $\displaystyle [f(x) - f'(x)]^2=0$ $\displaystyle f(x)-f'(x)=0$ $\displaystyle f(x)=f'(x)$ $\displaystyle f(x) = Ae^x$ Am I right?

6. Is there a way to apply color and font to LaTeX? I want to try to white-out my solution, but as far as I can tell that's impossible.

7. Originally Posted by ecMathGeek I would much rather deal with a differential equation. Hopefully this works. $\displaystyle \frac{d}{dx}[f(x)]^2=\frac{d}{dx}\int_0^x [f(t)]^2+[f'(t)]^2\,dt$ $\displaystyle 2[f(x)]f'(x) = [f(x)]^2+[f'(x)]^2$ $\displaystyle [f(x)]^2-2 [f(x) \cdot f'(x)] + [f'(x)]^2 = 0$ $\displaystyle [f(x) - f'(x)]^2=0$ $\displaystyle f(x)-f'(x)=0$ $\displaystyle f(x)=f'(x)$ $\displaystyle f(x) = Ae^x$ Am I right? The LHS is $\displaystyle A^2e^{2x}$ and the RHS is $\displaystyle (A^2 - 1)e^{2x}$ so I would say "no." -Dan

8. Originally Posted by CaptainBlack No The right hand side is not what you say!
RonL

It would be $\displaystyle \left( e^{2x} - 1\right)A^2$ ? If so, it means there is something wrong with what ecMathGeek did, but I didn't really see anything wrong the first time I read it; I'll read it again

9. Originally Posted by Jhevon It would be $\displaystyle \left( e^{2x} - 1\right)A^2$ ? If so, it means there is something wrong with what ecMathGeek did, but I didn't really see anything wrong the first time I read it; I'll read it again ecMathGeek's proposed solution satisfies the integral equation (ish) RonL

10. Okay, I oopsied. But still: $\displaystyle f(x) = Ae^x$ Thus $\displaystyle f(t) = Ae^t$ $\displaystyle \frac{df}{dt} = Ae^t$ Thus $\displaystyle \int_0^x dt \, [ f(t) ]^2 + \left [ \frac{df}{dt} \right ] ^2$ $\displaystyle = \int_0^x dt \, (A^2e^{2t} + A^2e^{2t} )$ $\displaystyle = \int_0^x dt \, 2A^2e^{2t}$ $\displaystyle = 2A^2 \int_0^x dt \, e^{2t}$ $\displaystyle = 2A^2 \frac{1}{2}e^{2t}|_0^x$ $\displaystyle = A^2(e^{2x} - 1)$ And $\displaystyle [ f(x) ] ^2 = A^2e^{2x}$ So $\displaystyle [ f(x) ] ^2 \neq \int_0^x dt \, [ f(t) ]^2 + \left [ \frac{df}{dt} \right ] ^2$ How can this then be the solution of the integral equation? I'm missing something here... -Dan

11. Originally Posted by topsquark Okay, I oopsied. But still: $\displaystyle f(x) = Ae^x$ Thus $\displaystyle f(t) = Ae^t$ $\displaystyle \frac{df}{dt} = Ae^t$ Thus $\displaystyle \int_0^x dt \, [ f(t) ]^2 + \left [ \frac{df}{dt} \right ] ^2$ $\displaystyle = \int_0^x dt \, (A^2e^{2t} + A^2e^{2t} )$ $\displaystyle = \int_0^x dt \, 2A^2e^{2t}$ $\displaystyle = 2A^2 \int_0^x dt \, e^{2t}$ $\displaystyle = 2A^2 \frac{1}{2}e^{2t}|_0^x$ $\displaystyle = A^2(e^{2x} - 1)$ And $\displaystyle [ f(x) ] ^2 = A^2e^{2x}$ So $\displaystyle [ f(x) ] ^2 \neq \int_0^x dt \, [ f(t) ]^2 + \left [ \frac{df}{dt} \right ] ^2$ How can this then be the solution of the integral equation? I'm missing something here...
-Dan

So you have shown that if $\displaystyle f(t) = Ae^t$, then: $\displaystyle A^2e^{2x} = A^2(e^{2x} - 1)$ RonL

12. Originally Posted by topsquark Okay, I oopsied. But still: $\displaystyle f(x) = Ae^x$ Thus $\displaystyle f(t) = Ae^t$ $\displaystyle \frac{df}{dt} = Ae^t$ Thus $\displaystyle \int_0^x dt \, [ f(t) ]^2 + \left [ \frac{df}{dt} \right ] ^2$ $\displaystyle = \int_0^x dt \, (A^2e^{2t} + A^2e^{2t} )$ $\displaystyle = \int_0^x dt \, 2A^2e^{2t}$ $\displaystyle = 2A^2 \int_0^x dt \, e^{2t}$ $\displaystyle = 2A^2 \frac{1}{2}e^{2t}|_0^x$ $\displaystyle = A^2(e^{2x} - 1)$ And $\displaystyle [ f(x) ] ^2 = A^2e^{2x}$ So $\displaystyle [ f(x) ] ^2 \neq \int_0^x dt \, [ f(t) ]^2 + \left [ \frac{df}{dt} \right ] ^2$ How can this then be the solution of the integral equation? I'm missing something here... -Dan If you don't like ecMathGeek's approach try a power series solution. RonL

13. Originally Posted by ecMathGeek $\displaystyle f(x) = Ae^x$ Originally Posted by CaptainBlack ecMathGeek's proposed solution satisfies the integral equation (ish) Originally Posted by CaptainBlack So you have shown that if $\displaystyle f(t) = Ae^t$, then: $\displaystyle A^2e^{2x} = A^2(e^{2x} - 1)$ Then $\displaystyle A = 0.$ So ecMathGeek's proposed solution does not satisfy the integral equation for any $\displaystyle A$, which the second post seems to imply, only when $\displaystyle f(x) = Ae^x = 0.$ Is that what is going on here?

14. Originally Posted by JakeD Then $\displaystyle A = 0.$ So ecMathGeek's proposed solution does not satisfy the integral equation for any $\displaystyle A$, which the second post seems to imply, only when $\displaystyle f(x) = Ae^x = 0.$ Is that what is going on here?
The integral equation, when converted to a differential equation, is in fact an initial value problem, since of necessity $\displaystyle f(0)=0$. ecMathGeek gave a general solution to the DE that you get by differentiating the integral equation, but omitted the initial condition that forces $\displaystyle A=0$. RonL

15. Originally Posted by CaptainBlack If you don't like ecMathGeek's approach try a power series solution. RonL Not every continuously differentiable function is analytic.* *) By analytic I mean it has a power series expansion on $\displaystyle \mathbb{R}$.
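The thread's conclusion can be checked numerically (our own sketch; `residual` is an illustrative name, not from the thread): for $f(x) = Ae^x$ the two sides of the integral equation differ by the constant $A^2$ for every $x$, so the equation holds only for $A = 0$, i.e. $f \equiv 0$.

```python
import math

def residual(A, x, n=20000):
    # LHS - RHS of  f(x)^2 = ∫₀ˣ f(t)² + f'(t)² dt  for f(t) = A·e^t,
    # with the integral approximated by the trapezoidal rule.
    g = lambda t: 2 * (A * math.exp(t)) ** 2      # f² + f'² = 2A²e^{2t}
    h = x / n
    integral = h * ((g(0) + g(x)) / 2 + sum(g(i * h) for i in range(1, n)))
    return (A * math.exp(x)) ** 2 - integral

# The residual equals A² regardless of x, so it vanishes only when A = 0.
print(round(residual(3, 1.0), 3))   # 9.0
print(round(residual(3, 2.0), 3))   # 9.0
print(round(residual(0, 1.5), 3))   # 0.0
```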
https://webwork.libretexts.org/webwork2/html2xml?answersSubmitted=0&sourceFilePath=Library/ASU-topics/setImplicitDerivatives/5-5-13.pg&problemSeed=1234567&courseID=anonymous&userID=anonymous&course_password=anonymous&showSummary=1&displayMode=MathJax&problemIdentifierPrefix=102&language=en&outputformat=libretexts
The radius of a spherical balloon is increasing at a rate of 2 centimeters per minute. How fast is the volume changing when the radius is 12 centimeters? Note: The volume of a sphere is given by $V=(4/3)\pi r^3$. Rate of change of volume =
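By the chain rule, $V = \frac43\pi r^3$ gives $\frac{dV}{dt} = 4\pi r^2 \frac{dr}{dt}$, so at $r = 12$ and $dr/dt = 2$ the volume grows at $1152\pi \approx 3619\ \mathrm{cm^3/min}$. A quick check (our own working, not part of the WeBWorK problem; variable names are illustrative):

```python
import math

r, dr_dt = 12.0, 2.0                   # cm, cm/min
dV_dt = 4 * math.pi * r**2 * dr_dt     # dV/dt = 4πr² · dr/dt
print(dV_dt / math.pi)                 # 1152.0, i.e. dV/dt = 1152π cm³/min
print(round(dV_dt, 1))                 # 3619.1
```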
https://codegolf.stackexchange.com/questions/206247/can-this-polyomino-tile-the-toroidal-grid
# Can this polyomino tile the toroidal grid? Inspired by certain puzzles on Flow Free: Warps. ## Background We all know that L-triominos can't tile the 3x3 board, and P-pentominos can't tile the 5x5 board. But the situation changes if we allow the board to wrap around in both dimensions: ### L-triominos can tile 3x3 toroidal grid The 3rd tile wraps around through all four edges. ┌ ┌─┐ ┐ │ │3 ┌─┤ └─┐ │ │2 │ │ └─┬─┘ │1 │ └───┘ ┘ ### P-pentominos can tile the 5x5 toroidal grid The 5th tile wraps around through all four edges. ┌ ┌───┬─┐ ┐ │ │ │ ┌─┘ │ └─┐ │ 1 │2 │ ├─────┤ │ │ 3 │ │ │ ┌─┴─┬─┤ │ │ │ │ └─┬─┘ │ ╵ │ 4 │5 └ └─────┘ ┘ Note that, in both cases, wrapping around in only one dimension doesn't allow such tiling. In case the Unicode version is hard to read, here is the ASCII version: 3 2 3 1 2 2 1 1 3 5 1 1 2 5 1 1 1 2 2 3 3 3 2 2 3 3 4 4 5 5 4 4 4 5 ## Challenge Given a polyomino and the size (width and height) of the toroidal grid, determine if the polyomino can tile the toroidal grid. The polyomino can be flipped and/or rotated. A polyomino can be given in a variety of ways: • A list of coordinates representing each cell of polyomino • A 2D grid with on/off values of your choice (in this case, you cannot assume that the size of the grid defining the polyomino matches that of the toroidal grid) The output (true or false) can be given using the truthy/falsy values in your language of choice, or two distinct values to indicate true/false respectively. Standard rules apply. The shortest code in bytes wins. ## Test cases The polyomino is given as the collection of # symbols. ### Truthy # (singleton, a.k.a. 
monomino) 5x7 (or any size) -------- ## (domino) 4x3 (or any even size) -------- # ## (L-triomino) 3x3 (as shown above) -------- ## ### (P-pentomino) 5x5 (as shown above) -------- ## ## (Z-tetromino) 4x4 (left as an exercise to the reader) -------- ### # # (V-pentomino) 5x5 -------- #### # ### (a polyomino larger than the grid is possible) 4x4 -------- ### ### ### (single polyomino can cover the entire grid, and can wrap multiple times) 3x3 ### Falsy ## (domino) 3x5 (or any odd sizes) -------- ### # 1x8 -------- # # ### (U-pentomino) 5x5 • Nice challenge. I find the ASCII art hard to read, could you add the alternative of having 1, 2, 3, etc as each of the # shapes given in the two corresponding examples? – Jonathan Allan Jun 18 '20 at 11:54 • @JonathanAllan I added a pure ASCII version of the top two examples, though I doubt I understood your request correctly. (Or do you want something like this?) – Bubbler Jun 19 '20 at 2:09 • what you posted is perfect, thanks – Jonathan Allan Jun 19 '20 at 11:54 # Python 2, 300265 163 bytes -35 bytes after suggestions from @xnor, @ovs, and largely @user202729 (removing evenly divisible check allowed for a one-liner + lambda) -102 bytes following encouragement + general suggestions by @user202729 lambda l,w,h:all(w*h-len({((e-(p&4)*e//2)*1j**p+p/8+p/8/w*1j)%w%(1j*h)for e in l for p in c})for c in combinations(range(8*w*h),w*h/len(l))) from itertools import* Takes input as a list of complex coordinates of each cell of the polyomino. Outputs False for Truthy and True for Falsey (quirky de Morgan optimization). Try it online with many testcases. Since this brute-forces, I have commented out a few cases to run fast enough for TIO. 
Thoroughly commented: lambda l,w,h: all( # we want any configuration to work, but De Morgan gives any(L==len) <==> not all(L!=len) <==> not all(L-len) w*h-len( # if two polyominos in a configuration overlap, then there are duplicate cells # so the length of the set is less { # create a set consisting of each possible position+orientation of L/len(l) polyominos: ( # here, e is a single cell of the given polyomino ( # reflect e across the imaginary axis if p >= 4 (mod 8) e- # e-e.real*2 = e-e//.5 reflects across the Im axis p&4 # Only reflect if the 2^2 bit is nonzero: gives 4* or 0* following *e//2 # floor(z) = z.real when z is complex, so ) # e//2 (floor division) gives z.real/2 (complex floor division only works in Python 2) *1j**p # rotate depending on the 2^0 and 2^1 bits. i**x is cyclic every 4 +p/8 # translate horizontally (real component) by p>>3 (mod this later) +p/8/w*1j # translate vertically (im component) by p>>3 / w )%w%(1j*h) # mod to within grid (complex mods only work in Python 2) for e in l # find where each cell e of the given polyomino goes for p in c # do this for each c (each set of position+orientation integers) } ) for c in combinations( # iterate c over all sets of w*h/len(l) distinct integers from 0 to 8*L-1 range(8*w*h) # each of these 8*L integers corresponds to a single position/orientation of a polyomino # Bits 2^0 and 2^1 give the rotation, and 2^2 gives the reflection # The higher bits give the position from 0 to L=w*h-1 ==> (0,0) to (w-1,h-1) ,w*h/len(l) # w*h/len(l) is the number of polyominos needed since len(l) is the number of cells per polyomino # can't switch to *[range(8*w*h)]*(w*h/len(l)) because Python 3 does not allow short complex operations as above ) ) from itertools import* A new 169-byte solution that replaces combinations with recursion: g=lambda l,w,h,k=[]:all(g(l,w,h,k+[((e-(p&4)*e//2)*1j**p+p/8+p/8/w*1j)%w%(1j*h)for e in l])for p in range(8*w*h))if w*h>len(k)else len(set(k))-w*h from itertools import* This has the 
advantage of removing combinations (12 characters on its own) and one for loop, but the self-invocation takes many bytes. Currying would not save length. • Because of the weird way Python handles complex mod, you can actually do e%w%(1j*h) in place of (e.real%w,e.imag%h). – xnor Jun 18 '20 at 9:09 • k-k.imag*2jfor ... is 2 bytes shorter than k.conjugate()for .... (Or 2*k.real-k for ... at the same length) – ovs Jun 18 '20 at 10:37 • @user202729 Thanks for all the suggestions! I ended up being able to reduce many nested comprehensions (the sum(p,[])'s sole purpose was to undo a nest, and range(8*L) combines 3 loops), so string-replace-exec became unnecessary. I'm not sure what your last comment does because it reflects the translation, not the whole polyomino, but I think it's irrelevant now. – fireflame241 Jun 19 '20 at 20:57 • @user202729 Oh, I reasoned out that /8j would work because it's just a vertical shift in the opposite direction, but complex divisor also changes it to normal division instead of floor division. It would be great if there was a way to distribute out the p/8 somehow – fireflame241 Jun 20 '20 at 8:08 • Fixed (same number of bytes after using combinations) – fireflame241 Jun 20 '20 at 8:22 # JavaScript (ES7), 233 bytes Takes input as (w)(h)(p), where $$\p\$$ is a binary matrix describing the polyomino. Returns $$\0\$$ or $$\1\$$. Similar to my original answer, but uses a more complex expression to update the cells of the matrix instead of explicitly rotating the polyomino. w=>h=>g=(p,m=Array(w*h).fill(o=1))=>+m.join?(R=i=>i--?m.map((F,X)=>(F=k=>p.map((r,y)=>r.map((v,x)=>k|=v?m[Z=i&2?p[0].length+~x:x,~~(X/w+(i&1?Z:W))%h*w+(X+(i&1?W:Z))%w]^=1:0,W=i&4?p.length+~y:y))&&k)(F()||g(p,m)))|!o||R(i):0)(8):o=0 Try it online! # JavaScript (ES7),  311 ... 252  250 bytes Takes input as (w)(h)(p), where $$\p\$$ is a binary matrix describing the polyomino. Returns a Boolean value. Not quite as desperately long as I was expecting. 
:p w=>h=>g=(p,m=Array(w*h).fill(o=1),P)=>+m.join?[...13**7+''].some(i=>(p.sort(_=>i-7).map((r,y)=>r.map((v,x)=>(P[x]=P[x]||[])[y]=v),P=[]),m.map((F,X)=>(F=k=>P.map((r,y)=>r.map((v,x)=>k|=v?m[~~(X/w+y)%h*w+(X+x)%w]^=1:0))&&k)(F()||g(p,m))),p=P,!o)):o=0 Try it online! ### How? The following code builds all possible transformations $$\P\$$ of the polyomino $$\p\$$: [...13 ** 7 + ''] // this expands to ['6','2','7','4','8','5','1','7'] .some(i => // for each value i in the above list: ( p.sort(_ => i - 7) // reverse the rows of p[], except when i = '8' .map((r, y) => // for each row r[] at position y in m[]: r.map((v, x) => // for each value v at position x in r[]: ( P[x] = // transpose p[y][x] P[x] || [] ) // to P[x][y] [y] = v // ), // end of inner map() ) // end of outer map() (...) // more fun things happen here! p = P, // get ready for the next transformation !o // success if o is cleared ) // ) // end of some() We use a flat array of $$\w\times h\$$ entries to describe the matrix. All of them are initially set to $$\1\$$. The function $$\F\$$ inserts the polyomino in the matrix at the position $$\(X,Y)\$$ by XOR'ing the cells. It returns $$\0\$$ if the operation was done without setting any cell back to $$\1\$$. F = k => // expects k undefined for the first call P.map((r, y) => // for each row r[] at position y in P[]: r.map((v, x) => // for each value v at position x in r[]: k |= // update k: v ? // if v is set: m[~~(X / w + y) // toggle the value at (X + x, Y + Y), % h * w + // taking the wrapping around into account (X + x) % w // ] ^= 1 // k is set if the result is not 0 : // else: 0 // leave k unchanged ) // end of inner map() ) && k // end of outer map(); return k For each position $$\(X,Y)\$$ in the matrix: • We do a first call to $$\F\$$. If successful, it is followed by a recursive call to the main function $$\g\$$. • We just need to call $$\F\$$ a second time to remove the polyomino -- or to clear the mess if it was inserted at an invalid position. 
Hence the code: F(F() || g(p, m)) The recursion stops when there's no more $$\1\$$'s in the matrix (success) or there's no more valid position for the polyomino (failure). # Charcoal, 120 115 bytes NθNηWS⊞υ⌕Aι#≔⟦⟧ζFθFηF²«≔EθEη⁰εFLυF§υμ¿λ§≔§ε⁺κν﹪⁺ιμη¹§≔§ε⁺ιμ﹪⁺κνη¹F²F²⊞ζ↨⭆⎇μ⮌εε⪫⎇ν⮌ξξω²»≔…ζ¹υFυFζF¬&ικ⊞υ|ικ⁼⊟υ⊖X²×θη Try it online! Link is to verbose version of code. Takes inputs in the order width, height, newline-terminated polyomino and outputs a Charcoal boolean i.e. - only if the polyomino tiles the torus. Explanation: NθNη Input the size of the grid. WS⊞υ⌕Aι# Input the polyomino and convert it to a list of horizontal indices. ≔⟦⟧ζ Start building up a list of polyomino placements. FθFηF²« Loop through each vertical and horizontal offset and direction. ≔EθEη⁰ε FLυF§υμ Loop over each cell of the polyomino... ¿λ§≔§ε⁺κν﹪⁺ιμη¹§≔§ε⁺ιμ﹪⁺κνη¹ ... place the optionally transposed cell in the grid, but offset by the outer indices. F²F²⊞ζ↨⭆⎇μ⮌εε⪫⎇ν⮌ξξω² For each of four reflections of the grid, push the grid to the list of placements, represented as a base 2 integer (e.g. a grid with just the bottom right square filled would be 1 etc.) »≔…ζ¹υFυ Start a breadth first search using the first placement. Fζ Loop over each placement. F¬&ικ If this placement does not overlap the grid so far... ⊞υ|ικ ... then push the merged grid to the list of grids. ⁼⊟υ⊖X²×θη Check whether we pushed a completed grid. (This must be the last entry because any incomplete grid must by definition have fewer polyominoes and would therefore have been discovered earlier.)
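For readers who would like an ungolfed reference, here is a sketch of the same brute-force idea in readable Python 3 (our own illustrative code, not one of the competing answers; names like `tiles_torus` are made up): enumerate the up to eight orientations of the piece, then backtrack, always covering the first empty cell and taking coordinates mod the grid dimensions so placements wrap around the torus.

```python
def orientations(cells):
    # All distinct rotations/reflections of a polyomino, normalized to the origin.
    out = set()
    for flip in (False, True):
        q = [(-x, y) for x, y in cells] if flip else list(cells)
        for _ in range(4):
            q = [(y, -x) for x, y in q]            # rotate 90 degrees
            mx = min(x for x, _ in q)
            my = min(y for _, y in q)
            out.add(frozenset((x - mx, y - my) for x, y in q))
    return [sorted(o) for o in out]

def tiles_torus(cells, w, h):
    if (w * h) % len(cells):
        return False                               # cell count must divide the area
    shapes = orientations(cells)
    grid = [[False] * w for _ in range(h)]

    def solve(filled):
        if filled == w * h:
            return True
        i = next(i for i in range(w * h) if not grid[i // w][i % w])
        y0, x0 = divmod(i, w)                      # first empty cell must get covered
        for shape in shapes:
            for ax, ay in shape:                   # anchor each piece cell at (x0, y0)
                spots = {((x0 + x - ax) % w, (y0 + y - ay) % h) for x, y in shape}
                if len(spots) == len(shape) and all(not grid[y][x] for x, y in spots):
                    for x, y in spots:
                        grid[y][x] = True
                    if solve(filled + len(spots)):
                        return True
                    for x, y in spots:
                        grid[y][x] = False
        return False

    return solve(0)

L = [(0, 0), (0, 1), (1, 1)]                # L-triomino
print(tiles_torus(L, 3, 3))                 # True  (the 3x3 torus tiling above)
print(tiles_torus([(0, 0), (1, 0)], 3, 5))  # False (domino on an odd area)
```

Since every orientation is anchored at every cell of the piece, the search is exhaustive (though exponential in the worst case), and the `len(spots) == len(shape)` check rejects placements where a wrapping piece would overlap itself.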
https://undergroundmathematics.org/quadratics/r8045
Review question

# Can we find the midpoint of a chord of this parabola?

Ref: R8045

## Question

Given that $x_1$ and $x_2$ are the roots of $ax^2 + bx + c = 0$, state in terms of some or all of $a$, $b$, $c$:

(i) the condition that $x_1 = x_2$,

(ii) the value of $x_1 + x_2$.

1. Find the values of $m$ for which the line $y = mx$ is a tangent to the curve $y^2 = 3x - 1$.

2. The line $y = 2x$ meets the curve $3y = x^2 - 10$ at the points $A(x_1, y_1)$ and $B(x_2, y_2)$.

   1. Obtain the quadratic equation whose roots are $x_1$ and $x_2$.

   2. Without solving this equation, find the $x$ co-ordinate of the midpoint of $AB$.
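For part 2, substituting $y = 2x$ into $3y = x^2 - 10$ gives $x^2 - 6x - 10 = 0$, so by (ii) the midpoint's $x$-coordinate is $\tfrac12(x_1 + x_2) = -b/(2a) = 3$. A numerical check of this shortcut (our own working, not part of the original question):

```python
import math

a, b, c = 1, -6, -10              # x² - 6x - 10 = 0, from y = 2x and 3y = x² - 10
d = math.sqrt(b * b - 4 * a * c)
x1, x2 = (-b - d) / (2 * a), (-b + d) / (2 * a)
print(round((x1 + x2) / 2, 9))    # 3.0 — midpoint from the actual roots
print(-b / (2 * a))               # 3.0 — the same value, no roots needed
```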
https://www.physicsforums.com/threads/summation-of-a-set-kn.672880/
# Summation of a set = kn?

1. Feb 19, 2013 ### Nickclark I need to find a series of numbers up to n that will add up to kn: x1 + x2 + x3 + ... + n = kn, where k is a constant. This is part of a long, complex problem; once this sum is found it will finally be solved.

2. Feb 19, 2013 ### Simon Bridge Do you mean $x_1+x_2+x_3+\cdots +x_{n-1}+x_n = nk$? How about $x_1=x_2=\cdots =x_{n-1}=x_n = k$?

3. Feb 19, 2013 ### Nickclark No, I need the last number to be n. Somehow I need to find a set of numbers that, when added together, give a multiple of the last number.

4. Feb 19, 2013 ### Mute Do you have any restrictions on that set of numbers or the parameters? E.g., how many terms can you have in the sum? Is k any real number, or is it limited to certain values or sets (like the integers or rationals, etc.)?

5. Feb 19, 2013 ### Simon Bridge From what you've said, k can be any integer, and there can be any number of terms in the sum ... so you appear to be saying you want to find any set of numbers $\{x_i\}$ so that $$n+\sum_{i=1}^m x_i = kn$$ (the last number is n as specified) ... which suggests that $$(k-1)n=\sum_{i=1}^m x_i$$ and again I can say that $x_1 = x_2 = \cdots =x_{m-1}=x_m = \frac{n}{m}(k-1)$ ... which means that you have to be more specific about your problem if you are to get sensible answers. I'm guessing the x's can't be just any old numbers, for example, and k is not supposed to be a specific integer? But I'm guessing - don't make people guess.

6. Feb 20, 2013 ### Nickclark Yeah, I should've added more restrictions. Well, the numbers must increase as we go on, for example: 1+2+3+...+n = n(n+1)/2, right, but this is of second order. I want to add numbers that increase up to n, and whose sum is of first order (linear in n).

7. Feb 20, 2013 ### Mute You still need to be a bit more specific. You still haven't told us if k has to be an integer or rational or real or positive or negative, etc.
It's also not clear how many terms you have in your sum - it looks like it's n terms where the last term is also equal to n, but you haven't told us that explicitly - we're still just guessing. Please tell us precisely what you want.

8. Feb 21, 2013 ### Simon Bridge If this is a problem that has been supplied to you, then please provide the exact text of the problem and the context. I'll add that we don't know whether the numbers have to be integers or if they can be real, and so on. E.g. $\sum_{i=1}^n i = n(n+1)/2 = kn \text{ if } k=(n+1)/2$. But what do you mean by "the sum is of order 1"? You mean that kn cannot be quadratic in n? (k cannot depend on n?) Maybe you are thinking more like: x1=1, x2=2, x3=3=n, then 1+2+3=6=2n so k=2 - is that the idea?

9. Feb 22, 2013 ### Nickclark Well, the problem is not about this sum; if you find a series like the one described, the problem will be solved. I have submitted the following answer: 2+4+8+16+32+64+...+2^k = 2(1+2+4+8+...+2^(k-1)) = 2(2^k - 1)/(2 - 1) = 2(2^k - 1), but 2^k = n, so k = log (base 2) n. Then substitute in the sum: 2(n-1), which is linear, that is, of order one. The idea of the problem is that I want to get to n with the least sum possible; if I went with the quadratic sum that would have taken too much time, O(n^2), so this sum will save so much time, O(n). It's an algorithm problem.

10. Feb 22, 2013 ### Simon Bridge So to get good help you will need to describe the problem properly.
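The geometric-series identity in post #9 is easy to machine-check: the doubling sequence reaches $n = 2^k$ after $k$ terms whose total is $2(n-1)$, i.e. linear in $n$. A quick verification (our own, not from the thread):

```python
for k in range(1, 16):
    n = 2 ** k
    steps = [2 ** i for i in range(1, k + 1)]   # 2 + 4 + ... + 2^k, last term = n
    assert steps[-1] == n
    assert sum(steps) == 2 * (n - 1)            # the claimed closed form
print("2 + 4 + ... + n = 2(n - 1) holds for every n = 2^k tested")
```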
https://itectec.com/superuser/free-panning-scrolling-on-libreoffice-draw/
# Free panning/scrolling on LibreOffice Draw

Tags: autohotkey, libreoffice-draw, scrolling

LibreOffice Draw allows me to scroll vertically or horizontally using my mouse as follows:

Scroll Wheel: Scroll vertically
Shift + Scroll Wheel: Scroll horizontally

My question is: Is there any way to have free scrolling/panning on it? By free scrolling/panning, I mean the ability to pan in any direction, e.g. diagonally. Many applications allow you to freely pan using methods like:

• A hand tool/panning tool (click and drag, as in Photoshop or Illustrator)
• Middle click button on the mouse

If LibreOffice Draw doesn't support these scrolling methods natively, is it possible to simulate free scrolling externally? Maybe using an AutoHotkey script that sends the scroll-vertically or scroll-horizontally events when moving the mouse?
https://projecteuclid.org/euclid.die/1418310418
## Differential and Integral Equations

### The Cauchy problem for a doubly singular parabolic equation with strongly nonlinear source

#### Abstract

We consider the Cauchy problem for a doubly nonlinear parabolic equation with inhomogeneous source. The existence and uniqueness of solutions are obtained when the initial datum is merely a function in $L_{loc}^1(R^N)$ or a Radon measure.

#### Article information

Source: Differential Integral Equations, Volume 28, Number 1/2 (2015), 1-14.

Dates: First available in Project Euclid: 11 December 2014

https://projecteuclid.org/euclid.die/1418310418

Mathematical Reviews number (MathSciNet): MR3299114

Zentralblatt MATH identifier: 1363.35157

Subjects: Primary: 35K15, 35K55, 35K67

#### Citation

Cheng, Junxiang; Deng, Lihua. The Cauchy problem for a doubly singular parabolic equation with strongly nonlinear source. Differential Integral Equations 28 (2015), no. 1/2, 1--14. https://projecteuclid.org/euclid.die/1418310418
http://karagila.org/2014/anti-anti-banach-tarski-arguments/
Asaf Karagila

I don't have much choice...

Many people, more often than not people from analysis or worse (read: physicists, who in general are not bad, but I am bothered when they think they have a say in how theoretical mathematics should be done), from pseudo-mathematical, non-mathematical, philosophical communities, and from time to time actual mathematicians, would say ridiculous things like "We need to omit the axiom of choice, and keep only Dependent Choice, since the axiom of choice is a source of constant bookkeeping in the form of non-measurable sets". People often like to cite the paradoxical decomposition of the unit sphere given by Banach-Tarski. "Yes, it doesn't make any sense, therefore the axiom of choice needs to be omitted".

To those people I say that they know too little. The axiom of choice is not at fault here. The axiom of infinity is. Infinite objects are weird. Period. End of discussion.

Don't believe me? Here's my favorite rebuttal:

Theorem (ZF+DC). Suppose that all sets of reals are Lebesgue measurable. Then there is a partition of the real line into strictly more parts than elements.

Proof. If $\aleph_1\leq2^{\aleph_0}$ then there is a non-measurable set. Therefore $\aleph_1\nleq2^{\aleph_0}$. However, there is a definable surjection from $\Bbb R$ onto $\omega_1$: fix a bijection $e\colon(0,1)\to(0,1)^\omega$; if $r$ is a real number such that $e(r)$ enumerates a well-ordered set (under the natural order of the real numbers), then map $r$ to the order type of that set. Otherwise map it to $0$. Easily we can see that this is a surjection onto $\omega_1$.

Now consider the partition consisting of the singletons in $\Bbb R\setminus(0,1)$ and the preimages of each ordinal under the surjection above. This has $2^{\aleph_0}+\aleph_1$ equivalence classes. But since $2^{\aleph_0}$ and $\aleph_1$ are incomparable as cardinals, this is a strictly larger partition. $\square$

We can do other crazy partitions too.
It all depends on how much you are willing to work, and how much more you are willing to assume.

How is this not a paradoxical result? More parts than elements, all of which are non-empty? Is this not worse than the Banach-Tarski paradox, or at least comparably horrible? In fact, just the fact that we can partition $\Bbb R$ into $\aleph_1$ parts, a number of parts incomparable with the number of elements, should be alarming. Many people will disregard that, but this act of disregarding this sort of paradox is exactly what we do when we restrict ourselves to Borel sets, or Lebesgue measurable sets. We disregard the part that bothers us. And the axiom of choice has been so good to us in so many ways that discarding it only for the sake of not having to cope with the Banach-Tarski paradox is plain stupid.

### There are 11 comments on this post.

By (Sep 22 2014, 16:06) Hah! I didn't know that one. Great example. I agree with the sentiment that infinity is a (possibly more) important culprit that people (TM) tend to gloss over. Btw I'm really enjoying these op-ed pieces of yours. Keep 'em coming!

By (Sep 22 2014, 20:47, in reply to Peter Krautzberger) Yeah, it's not the axiom of choice, since rejecting it causes all sorts of crazy things to happen too. The only way to ensure nothing crazy happens is to reject infinity altogether, or reject the power set axiom, or reject the law of excluded middle. In all cases, however, you find yourself wishing that you hadn't... :-) By the way, I thought about you when I wrote those last two posts. I figured you'd like them. :-)

By (Sep 22 2014, 23:01) Warning: another Banach-Tarski joke.

By (Sep 23 2014, 05:19, in reply to Joao Marcos) I love SMBC, but I don't see how this is a BT joke.

By Yemon Choi (Sep 23 2014, 05:23) I don't know which analysts you've encountered with this attitude, but I for one want the Hahn-Banach theorem (separation form)...
By (Sep 23 2014, 06:32, in reply to Yemon Choi) I certainly agree that Hahn-Banach might be an excellent rebuttal. But someone willing to reject choice might often be less interested in hearing about its benefits like Hahn-Banach. If you don't expect the axiom of choice to be true, then it's possible that you don't expect to be able to separate things as easily, or at all. Yes, I agree very much that a utilitarian approach is the right approach here. But there are people objecting on philosophical merits that the Banach-Tarski paradox is "proof" that the axiom of choice is "wrong". And this here is my favorite rebuttal about paradoxical decompositions.

By A (Aug 30 2015, 21:48) Could you be more verbose on the first sentence in your proof, or provide a reference? Probably it is sort of obvious to somebody in this area, but unfortunately not to me.

By Asaf Karagila (Aug 30 2015, 21:59, in reply to A) This is a very nontrivial theorem by Shelah. You can find the proof in Section 5 of Saharon Shelah, "Can you take Solovay's inaccessible away?", Israel J. Math. 48 (1984), no. 1, 1--47. More specifically, this is spelled out as Theorem 5.1B at the bottom of the first page of the section.

By (Oct 21 2015, 01:10) […] had already written about anti-anti-Banach-Tarski arguments. But now the Mathematical T-Rex has something to say […]

By B (Nov 21 2015, 18:15) This is very nice! I have a question: does the existence of a non-measurable set automatically imply Banach-Tarski? Could we still have ℵ_1 ≤ 2^{ℵ_0} (or some other condition that rules out the existence of this kind of partition) and no paradoxical decompositions?

By Asaf Karagila (Nov 21 2015, 22:24, in reply to B) Well, I can't recall any specific result along these lines. But I'd be very surprised if that were the case (that BT is equivalent to having non-measurable sets).

Want to comment? Send me an email!
https://planetmath.org/AsymptoticEstimate
# asymptotic estimate

An asymptotic estimate is an estimate that involves the use of $O$, $o$, or $\sim$. These are all defined in the entry Landau notation. Examples of asymptotic estimates are:

$\displaystyle\sum_{n\leq x}\mu^{2}(n)=\frac{6}{\pi^{2}}x+O(\sqrt{x})$ (see convolution method for more details)

$\displaystyle\pi(x)\sim\frac{x}{\log x}$ (see prime number theorem for more details)

Unless otherwise specified, asymptotic estimates are typically valid for $x\to\infty$. An example of an asymptotic estimate that differs from those above in this aspect is

$\cos x=1-\frac{x^{2}}{2}+O(x^{4})\text{ for }|x|<1.$

Note that the above estimate would be undesirable for $x\to\infty$, as the error term would then be larger than the other terms. Such is not the case for $|x|<1$, though.

Tools that are useful for obtaining asymptotic estimates include:

• Abel's lemma
• the convolution method (http://planetmath.org/ConvolutionMethod)

If $A\subseteq\mathbb{N}$, then an asymptotic estimate for $\displaystyle\sum_{n\leq x}\chi_{A}(n)$, where $\chi_{A}$ denotes the characteristic function (http://planetmath.org/CharacteristicFunction) of $A$, enables one to determine the asymptotic density of $A$ via

$\lim_{x\to\infty}\frac{1}{x}\sum_{n\leq x}\chi_{A}(n),$

provided the limit exists. The upper asymptotic density of $A$ and the lower asymptotic density of $A$ can be computed in a similar manner using $\limsup$ and $\liminf$, respectively. (See asymptotic density (http://planetmath.org/AsymptoticDensity) for more details.) For example, $\mu^{2}$ is the characteristic function of the squarefree natural numbers.
Using the asymptotic estimate above yields the asymptotic density of the squarefree natural numbers:

$\lim_{x\to\infty}\frac{1}{x}\sum_{n\leq x}\mu^{2}(n)=\lim_{x\to\infty}\frac{1}{x}\left(\frac{6}{\pi^{2}}x+O(\sqrt{x})\right)=\lim_{x\to\infty}\left(\frac{6}{\pi^{2}}+O\left(\frac{\sqrt{x}}{x}\right)\right)=\frac{6}{\pi^{2}}$

Entry metadata: Title: asymptotic estimate; Canonical name: AsymptoticEstimate; Date of creation: 2013-03-22; Owner: Wkbj79 (1863); Numerical id: 13; Entry type: Definition; Classification: msc 11N37; Related topic: AsymptoticEstimatesForRealValuedNonnegativeMultiplicativeFunctions
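As a quick numerical sanity check (ours, not part of the original entry; the helper names are our own), one can count squarefree integers directly and compare the empirical density with $6/\pi^{2}$:

```python
import math

def mu_squared(n):
    """mu^2(n): 1 if n is squarefree (no prime square divides n), else 0."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return 0
        d += 1
    return 1

def empirical_density(x):
    """(1/x) * sum_{n <= x} mu^2(n): the proportion of squarefree n <= x."""
    return sum(mu_squared(n) for n in range(1, x + 1)) / x

print(empirical_density(10000))  # 0.6083
print(6 / math.pi ** 2)          # ≈ 0.6079
```

Even at $x=10^4$ the empirical density agrees with $6/\pi^{2}\approx0.6079$ to three decimal places, consistent with the $O(\sqrt{x}/x)$ error term.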
https://www.springerprofessional.de/computational-methods-in-environmental-fluid-mechanics/14566318?fulltextView=true&doi=10.1007%2F978-3-662-04761-3
## About this Book

Fluids play an important role in environmental systems, appearing as surface water in rivers, lakes, and coastal regions, in the subsurface, as well as in the atmosphere. Mechanics of environmental fluids is concerned with fluid motion and the associated mass and heat transport, in addition to deformation processes in subsurface systems. In this textbook the fundamental modelling approaches based on continuum mechanics for fluids in the environment are described, including porous media and turbulence. Numerical methods for solving the process-governing equations and their object-oriented computer implementation are discussed and illustrated with examples. Finally the application of computer models in civil and environmental engineering is demonstrated.

## Contents

### Chapter 1. Balance Equations of Fluid Mechanics

Abstract: The basic idea of continuum mechanics is that the evolution of a physical system is completely determined by conservation laws, i.e. basic properties such as mass, momentum, and energy are conserved during the considered process at all times. Any physical system can be completely determined by these conservation properties. In contrast, other quantities such as pressure or entropy do not obey conservation laws. The only additional information concerns the consistence of the material (e.g. fluids, solids, porous medium) in the form of constitutive laws.

Olaf Kolditz

### Chapter 2. Turbulence

Abstract: The Reynolds number of a flow,

$$Re = \frac{v^{*}L^{*}}{\nu^{*}}, \tag{2.1}$$

gives a measure of the importance of inertial forces relative to viscous forces. Experiments show that all flows become unstable above a certain Reynolds number. Below values of the so-called critical Reynolds number $Re_{crit}$ the flow is smooth and adjacent layers of fluid slide past each other in an orderly regime. This regime is called laminar flow.
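To make Eq. (2.1) concrete, here is a minimal sketch (ours, not code from the book; the pipe-flow values and the commonly quoted critical value $Re_{crit}\approx2300$ are illustrative assumptions):

```python
def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = v* L* / nu*: ratio of inertial to viscous forces (Eq. 2.1)."""
    return velocity * length / kinematic_viscosity

# Water (nu ~ 1.0e-6 m^2/s) flowing at 1 m/s through a 5 cm pipe:
Re = reynolds_number(velocity=1.0, length=0.05, kinematic_viscosity=1.0e-6)
# Re ≈ 5.0e4, far above the commonly quoted pipe-flow Re_crit ≈ 2300,
# so this flow would be well into the turbulent regime.
```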
At Reynolds numbers larger than the critical value a complicated series of physical events takes place which eventually results in a radical change of the flow behavior. Finally, the flow becomes turbulent, i.e. velocity and other flow properties become chaotic and random. The flow is then unsteady even with constant boundary conditions. Turbulence is a kind of chaotic and random state of motion. Nevertheless, velocity and pressure vary continuously with time within substantial regions of the flow. Velocity fluctuations associated with turbulence give rise to additional stresses on the fluid, the so-called Reynolds stresses. Examples of turbulent flows are free turbulent flows (jet flow) and turbulent boundary layer flows.

Olaf Kolditz

### Chapter 3. Porous Media

Abstract: Soil or rock can be considered as a multiphase medium consisting of a solid phase (solid matrix) and of one or more fluid phases (gas and liquids), which occupy the void space (Fig. 3.1). Fluids are immiscible if a sharp interface is maintained between them. In general, a phase is defined as a part of a continuum which is characterized by distinct material properties and by a well-defined set of thermodynamic state variables. State variables describe the physical behaviour at all points of the phase. They must vary continuously within the considered phase of a continuum. Phases are separated from each other by surfaces referred to as interphase boundaries. Transport of components may occur within a phase and/or across interphase boundaries. Those interphase exchange processes between adjacent phases can result from diffusive and/or advective mechanisms.

Olaf Kolditz

### Chapter 4. Problem Classification

Abstract: The governing equations for fluid flow and related transport processes are partial differential equations (PDE) containing first- and second-order derivatives in the spatial coordinates and first-order derivatives in time. Whereas time derivatives appear linearly, spatial derivatives often have a non-linear form. Frequently, systems of partial differential equations occur rather than a single equation.

Olaf Kolditz

### Chapter 5. Numerical Methods

Abstract: There are many alternative methods to solve initial-boundary-value problems arising from flow and transport processes in subsurface systems. In general these methods can be classified into analytical and numerical ones. Analytical solutions can be obtained for a number of problems involving linear or quasi-linear equations and calculation domains of simple geometry. For non-linear equations or problems with complex geometry or boundary conditions, exact solutions usually do not exist, and approximate solutions must be obtained. For such problems the use of numerical methods is advantageous. In this chapter we use the Finite Difference Method to approximate time derivatives. The Finite Element Method as well as the Finite Volume Method are employed for spatial discretization of the region. The Galerkin weighted residual approach is used to provide a weak formulation of the PDEs. This methodology is more general in application than variational methods. The Galerkin approach works also for problems which cannot be cast in variational form.

Olaf Kolditz

### Chapter 6. Finite Difference Method

Abstract: The basic steps in order to set up a finite difference scheme are:

• definition of a space discretization by which the mesh points are distributed along families of non-intersecting lines,
• development of the unknown functions by means of Taylor series expansion (TSE) around grid points,
• replacement of derivative terms in the partial differential equations (PDE) with equivalent finite difference expressions.

Olaf Kolditz

### Chapter 7. Finite Element Method

Abstract: The Finite Element Method (FEM) originated in the field of structural calculation (e.g. stress analysis) in the beginning of the fifties.
The term "finite elements" was introduced by Turner et al. (1956). The concept of the finite element technique consists in a subdivision of a complex structure into small substructures, assembling the elements. After its initial development as an engineering tool, mathematicians worked out a rigorous formal framework for this method, e.g. concerning consistency of solutions, criteria for numerical stability and convergence behavior, as well as accuracy and error bounds. The mathematical background of FEM is functional analysis. The FEM was introduced into the field of computational fluid dynamics (CFD) by Chung (1978), Baker (1983), Huyakorn & Pinder (1983), Diersch (1985) and others. The FEM is a more general approximation technique containing many finite difference schemes as special cases (Chapter 6).

Olaf Kolditz

### Chapter 8. Finite Volume Method

Abstract: The Finite Volume Method (FVM) was introduced into the field of computational fluid dynamics in the beginning of the seventies (McDonald 1971, MacCormack and Paullay 1972). From the physical point of view the FVM is based on balancing fluxes through control volumes, i.e. the Eulerian concept is used (see section 1.1.4). The integral formulations of conservation laws are discretized directly in space. From the numerical point of view the FVM is a generalization of the FDM in a geometric and topological sense, i.e. simple finite volume schemes can be reduced to finite difference schemes. The FDM is based on nodal relations for differential equations, whereas the FVM is a discretization of the governing equations in integral form. The Finite Volume Method can be considered as a specific subdomain method as well. The FVM has two major advantages: First, it enforces conservation of quantities at the discretized level, i.e. mass, momentum, and energy remain conserved also at a local scale. Fluxes between adjacent control volumes are directly balanced. Second, finite volume schemes take full advantage of arbitrary meshes to approximate complex geometries. Experience shows that non-conservative schemes are generally less accurate than conservative ones, particularly in the presence of strong gradients.

Olaf Kolditz

### Chapter 9. Object-Oriented Methods for Hydrosystem Modeling

Abstract: Object-oriented (OO) methods are a necessary tool in responding to many of the challenges in scientific computation, in particular in managing modeling of complex systems such as multicomponent/multiphase processes in porous/fractured media. Use of OO methods can significantly reduce the effort to maintain and extend codes as requirements change. In addition, OO techniques offer means to increase code reusability.

Olaf Kolditz

### Chapter 10. Object-Oriented Programming Techniques

Abstract: Object-oriented programming (OOP) has become exceedingly popular in the past few years. OOP is more than rewriting programs in modern languages; OOP is a new way of thinking about designing and realizing software projects. This requires a complete re-evaluation of existing programs (Budd 1996).

Olaf Kolditz

### Chapter 11. Element Implementation

Abstract: In this chapter we discuss the implementation of finite elements within the object-oriented framework presented in chapter 10. We consider all required steps for the introduction of element types into the code. As an example we present the implementation of triangular elements. The theory of triangular elements is described in section 7.4.3. Based on this element template, new ones can be introduced very easily (see Problems).

Olaf Kolditz

### Chapter 12. Non-Linear Flow in Fractured Media

Abstract: This chapter deals with theory and computation of fluid flow in fractured rock. Non-Darcian flow behavior was observed in pumping tests at the geothermal research site at Soultz-sous-Forêts (France).
Examples are examined to demonstrate the influence of fracture roughness and pressure-gradient dependent permeability on pressure build-up. A number of test examples based on classical models by Darcy (1856), Blasius (1913), Nikuradse (1930), Lomize (1951) and Louis (1967) are investigated, which may be suited as benchmarks for nonlinear flow. This is a prelude to the application of the non-linear flow model to real pumping test data. Frequently, conceptual models based on simplified geometric approaches are used. Here, a realistic fracture network model based on borehole data is applied for the numerical simulations. The obtained data fit of the pumping test shows the capability of fracture network models to explain the observed hydraulic behavior of fractured rock systems.

Olaf Kolditz

### Chapter 13. Heat Transport in Fractured-Porous Media

Abstract: In this chapter we examine heat transfer during forced water circulation through fractured crystalline rock using a fully 3-D finite-element model. We propose an alternative to strongly simplified single or multiple parallel fracture models or porous media equivalents on the one hand, and to structurally complex stochastic fracture network models on the other hand. The applicability of this "deterministic fracture network approach" is demonstrated in an analysis of the 900-day circulation test for the Hot Dry Rock (HDR) site at Rosemanowes (UK). The model design with respect to structure, hydraulic and thermal behavior is strictly conditioned by measured data such as fracture network geometry, hydraulic and thermal boundary and initial conditions, hydraulic reservoir impedance, and thermal drawdown. Another novel feature of this model is that flow and heat transport in the fractured medium are simulated in a truly 3-D system of fully coupled discrete fractures and porous matrix. While an optimum model fit is not the prime target of this study, this approach permits realistic long-term predictions of the thermal performance of HDR systems.

Olaf Kolditz

### Chapter 14. Density Dependent Flow in Porous Media

Abstract: In this chapter we examine variable-density flow and corresponding solute transport in groundwater systems. The fluid dynamics of salty solutions with significant density variations is of increasing interest in many problems of subsurface hydrology. The mathematical model comprises a set of non-linear coupled partial differential equations to be solved for pressure/hydraulic head and mass fraction/concentration of the solute component. The governing equations and underlying assumptions are developed and discussed. The equation of solute mass conservation is formulated in terms of mass fraction and mass concentration. Different levels of the approximation of density variations in the mass balance equations are used for convection problems (e.g. the Boussinesq approximation and its extension, full density approximation). The impact of these simplifications is studied by use of numerical modeling. Numerical models for non-linear problems, such as density-driven convection, must be carefully verified in a particular series of tests. Standard benchmarks for verifying variable-density flow models are the Henry, the Elder, and the salt dome problems. We studied these benchmarks using two finite element simulators: ROCKFLOW, which was developed at the Institute of Fluid Mechanics and Computer Applications in Civil Engineering, and FEFLOW, which was developed at the Institute for Water Resources Planning and Systems Research Ltd. Although both simulators are based on the Galerkin finite element method, they differ in many approximation details such as temporal discretization (Crank-Nicolson versus predictor-corrector schemes), spatial discretization (triangular and quadrilateral elements), finite element basis functions (linear, bilinear, biquadratic), iteration schemes (Newton, Picard), and solvers (direct, iterative). The numerical analysis illustrates discretization effects and defects arising from the different levels of the density approximation. We present results for the salt dome problem, for which inconsistent findings exist in the literature (Kolditz et al. 1998).

Olaf Kolditz

### Chapter 15. Multiphase Flow in Deformable Porous Media

Abstract: We consider both cases: multiphase flow in deformable as well as non-deformable (static) porous media. In addition to flow of two fluid phases (compressible and incompressible fluids) we also apply the Richards approximation, which is valid for most cases of infiltration in soils. We assume isothermal conditions. For additional information the reader should refer, e.g., to Bear & Bachmat (1990) and Lewis & Schrefler (1998).

Olaf Kolditz

### Backmatter
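As a small illustration of the derivative-replacement step described in the Chapter 6 abstract (a toy example of ours, not code from the book), the second spatial derivative can be replaced by the standard central difference obtained from Taylor series expansion:

```python
def second_derivative_fd(f, x, h=1e-3):
    """Replace f''(x) by the central difference (f(x-h) - 2 f(x) + f(x+h)) / h^2,
    derived from Taylor series expansion around x; truncation error is O(h^2)."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h ** 2

# For f(x) = x^3 the exact value is f''(2) = 12; the central difference
# reproduces it to high accuracy for small h:
approx = second_derivative_fd(lambda x: x ** 3, 2.0)  # ≈ 12
```

The same stencil, applied at every mesh point of the non-intersecting grid lines mentioned in Chapter 6, turns a PDE into a system of algebraic equations.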
https://puzzling.stackexchange.com/questions/20317/an-outside-the-box-sequence
An “outside the box” sequence

I found a small letter sequence on Stack Exchange, but some elements are missing:

T H B C D L PP WH ?? M CU ?

Question: What are the missing elements?

Hints
Hint 1:
Hint 2: Every ? is a letter.
Hint 3: Think outside the box.

Answer

T H B C D L PP WH AI M CU F

This is from the first letters of the links across the top of the site footer.

• That was fast. Veeery fast. You got it! – The random guy Aug 20 '15 at 12:15
• @Somo145 Really Awesome!!! – kanchirk Aug 20 '15 at 12:15
• @Therandomguy You have to be quick here! – Somo145 Aug 20 '15 at 12:18
• Yeah, but i also need to increase difficulty on my questions D: – The random guy Aug 20 '15 at 12:19
• @CodeNewbie - I had the right thought (outside the box, so I looked round the edges of the website page) but I only looked at the header, I didn't think of the footer as it was off-screen. – AndyT Aug 20 '15 at 15:12
http://vinns.in/rangoli-in-white-dotless-kolam-chukkaleni-muggulu/
Rangoli In White, Dotless Kolam, Chukkaleni Muggulu

A freehand kolam is drawn without any guidance or instruments for drawing the circles. It is an easy method to draw circle kolams. I usually start the kolam with a base design and spin around it with the elements, which flow throughout the kolam. I don't make any preparations before drawing freehand kolams, unlike the sikku kolams, which need at least a rehearsal on paper. Big sikku kolams must be practised before going on to the floor version, as the lines may be tricky and land on a wrong path. So nowadays I am much more comfortable with freehand kolams, which I am quite smitten with lately 🙂 I am lethargic about drawing circles or using any other sort of guidance to start with, so my kolams are drawn in segments and then repeated on all sides. More or less they stick to the shape named "The Circle" 😀

Step 1: Draw any shape at the centre. I have drawn a star, but I am chided by a friend of mine for always starting with a star 🙂 So it is an individual's choice of shape to give a start to the kolam. I have outlined the kolam with a line.

Step 2: Outline the drawing twice with curvy lines forming 6 curves. Now draw radial lines from the joining points of two curves, which form the base for drawing the design around.

Step 3: Draw two lines and fill with kola powder to form a broad solid line. Then add six petal shapes and outline them with a bold line to make them look prominent.

Step 4: After drawing the broad zigzag lines, enclose the design with a curve by joining it with the adjoining curve. Now add extra details into the petal shapes. I have added dots beside the curve to make it look grand.

Step 5: Finish the first layer with designs forming into a circular shape. Extend the radial lines beyond, till the finish line of the kolam. Now draw curvy lines from one end of the radial line as shown.

Step 6: Complete drawing the curvy lines to the next radial line.

Step 7: Now repeat the same method in the opposite direction by joining the lines to the sides. Complete the design on all sides.
http://www.ams.org/bookstore?fn=20&arg1=gsmseries&ikey=GSM-91
Louis Halle Rowen, Bar-Ilan University, Ramat Gan, Israel

Graduate Algebra: Noncommutative View

2008; 648 pp; hardcover
Volume: 91
ISBN-10: 0-8218-4153-X
ISBN-13: 978-0-8218-4153-2
List Price: US$89
Member Price: US$71.20
Order Code: GSM/91

This book is a companion volume to Graduate Algebra: Commutative View (published as volume 73 in this series). Its main and most important feature is a unified approach to many important topics, such as group theory, ring theory, and Lie algebras, with conceptual proofs of many basic results of noncommutative algebra. It also presents a number of major results in noncommutative algebra that are usually found only in technical works, such as Zelmanov's proof of the restricted Burnside problem in group theory, word problems in groups, Tits's alternative in algebraic groups, PI algebras, and many of the roles that Coxeter diagrams play in algebra. The first half of the book can serve as a one-semester course on noncommutative algebra, whereas the remaining part describes some of the major directions of research of the past 100 years. The main text is extended through several appendices, which permit the inclusion of more advanced material, and numerous exercises. The only prerequisite for using the book is an undergraduate course in algebra; whenever necessary, results are quoted from Graduate Algebra: Commutative View.

Readership: Graduate students and research mathematicians interested in various topics of noncommutative algebra.
Reviews "Each part ends with more than 30 pages of exercises, from the basic to the challenging, carefully arranged and labeled according to the chapter (or appendix) to which they related ... a striking and very enjoyable feature of the book is the huge number of digressions: there are frequent pauses to point out noteworthy aspects of the terrain which lies ahead, beyond what can be covered in detail in a book of this sort. The style, layout and precision of the book make it a pleasure to read..." -- Mathematical Reviews "The book is largely self-contained. ...a valuable textbook and a reliable reference for graduate students." -- MAA Reviews
http://math.stackexchange.com/questions/296661/complex-sets-factoring-into-circle/296687
# Complex sets: factoring into circle

How must $|z|=3|z-1|$ be manipulated so that I end up with a circle? Plugging in $z=x+iy$ seems to just end up with square roots everywhere. Detailed steps are much appreciated, thanks! - Square it up! – Berci Feb 6 '13 at 22:37 - Have you tried? It's a mess. – DJ_ Feb 6 '13 at 22:38 - And use that $|z|^2=z \bar{z}$. – coffeemath Feb 6 '13 at 22:39 - Yes, we can also calculate with complex numbers, using $|z|^2=z\bar z$: $$z\bar z=9(z-1)\overline{(z-1)},$$ whichever you prefer. – Berci Feb 6 '13 at 22:41 Squaring both sides and using $|w|^2 = w\bar w$, \begin{align*} z \overline{z} &= 9 (z - 1)(\overline{z}-1) = 9 z \overline{z} - 9 z - 9 \overline{z} + 9 \\ 8 z \overline{z} &- 9 z - 9 \overline{z} + 9 = 0 \end{align*} Now a circle is supposed to look like $$(z - a)(\overline{z} - \overline{a}) = r^2 = z \overline{z} - a \overline{z} - \overline{a} z + |a|^2.$$ So taking our relation and dividing by 8, we get \begin{align*} z \overline{z} - \frac{9}{8} z - \frac{9}{8} \overline{z} + \frac{9}{8} &= 0 \\ z \overline{z} - \frac{9}{8} z - \frac{9}{8} \overline{z} + \frac{81}{64} &= \frac{81}{64} - \frac{9}{8} = \frac{9}{64} \end{align*} So with some algebra we have a circle centered at $\frac{9}{8}$ with radius $\frac{3}{8}$. For fun: you can view the locus of solutions $S$ as the inverse image of the unit circle under the fractional linear transformation $z \mapsto \frac{z}{3z-3}$. Conjugate points are sent to conjugate points, and the conjugate of the center of $S$ is $\infty$. Infinity maps to $1/3$, whose conjugate about the unit circle is $3$, and the preimage of $3$ is $9/8$, the center. Similarly I can find what is sent to $1$, and that will give me the radius. Alternatively, using $z=x+iy$, we get $$x^2+y^2=|z|^2=9|z-1|^2=9((x-1)^2+y^2).$$
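The algebra above can be sanity-checked numerically: every point on the circle centered at $9/8$ with radius $3/8$ should satisfy the original equation $|z| = 3|z-1|$. A minimal check (not part of the original answer):

```python
import cmath

# Sample points on the circle centered at 9/8 with radius 3/8 and verify
# that each satisfies the original locus equation |z| = 3|z - 1|.
center, radius = 9 / 8, 3 / 8

for k in range(12):
    theta = 2 * cmath.pi * k / 12
    z = center + radius * cmath.exp(1j * theta)
    assert abs(abs(z) - 3 * abs(z - 1)) < 1e-12

print("all sample points satisfy |z| = 3|z-1|")
```

For instance, the rightmost point $z = 9/8 + 3/8 = 3/2$ gives $|z| = 3/2$ and $3|z-1| = 3 \cdot 1/2 = 3/2$, as required.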
https://help.geogebra.org/topic/an-error-occured-from-geogebra-to-latex
# An error occurred from GeoGebra to LaTeX

Hi. There is something wrong with my plot. I run the code on sharelatex.com and get:

Package PGF Math Error: Unknown function `Infinity' (in '-Infinity'). See the PGF Math package documentation for explanation. Type H <return> for immediate help. ... l.68 \end{axis} (That was another \errmessage.)

Could anybody fix my problem or give me some advice?

1 It's a problem with

c = Parabola(B, f)
x² + 2x y + y² - 16x - 16y = -64

as it's a degenerate parabola. Is it important to have that object? Try deleting or hiding it.
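If the degenerate object cannot simply be deleted, pgfplots can also be told to tolerate non-finite coordinates. This is a hedged sketch, not the poster's actual file: the axis options and domain bounds below are illustrative, and the `\addplot` content would be whatever GeoGebra exported.

```latex
\begin{axis}[
    unbounded coords=discard,     % drop Inf/NaN coordinates instead of erroring
    restrict y to domain=-50:50,  % clip runaway values (bounds are examples)
]
    % ... \addplot commands exported by GeoGebra ...
\end{axis}
```

Hiding or deleting the degenerate parabola in GeoGebra before exporting, as the answer suggests, is the cleaner fix, since the exported code then never contains the offending `-Infinity` coordinates.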
http://mathhelpforum.com/advanced-algebra/28346-solved-abstract-algebra-integers-modulo-n-2-a.html
# Math Help - [SOLVED] Abstract Algebra: Integers Modulo n 2

1. ## [SOLVED] Abstract Algebra: Integers Modulo n 2

Hallo,

Me again. Here's the last of four questions I have for today. (I'll probably be asking some questions about set theory soon, but not today.) Again, this is a homework problem, so please, only hints.

Problem:
(a) Prove that if $n$ is even, then exactly one nonidentity element of $\mathbb{Z}_n$ is its own inverse.
(b) Prove that if $n$ is odd, then no nonidentity element of $\mathbb{Z}_n$ is its own inverse.
(c) Prove that $[0] \oplus [1] \oplus \cdots \oplus [n - 1]$ equals either $[0]$ or $[n/2]$ in $\mathbb{Z}_n$.
(d) What does part (c) imply about $0 + 1 + \cdots + (n - 1)$ modulo $n$?

Things that might come in handy: Nothing really. Again, anyone who can help me will already know all they need to know by looking at the symbols and interpreting them.

What I've tried: I honestly have not tried anything yet. I don't think this is a particularly hard question. I remember being confused about something when I read it through the first time, though, so I am posting it just in case, so as not to leave it for the last minute (which is when I usually do my homework). I won't look at any hints given here until I've tried to do the problem myself (you can trust me). So feel free to respond anyway. Thanks, guys and gals!

2. Originally Posted by Jhevon
Problem: (a) Prove that if $n$ is even, then exactly one nonidentity element of $\mathbb{Z}_n$ is its own inverse.

Probably the best way to attack this one is to construct the element. Which do you think would be the likely candidate? If you can't think of one, start with $\mathbb{Z}_4$ and work your way up. The pattern should be obvious.

Originally Posted by Jhevon
(b) Prove that if $n$ is odd, then no nonidentity element of $\mathbb{Z}_n$ is its own inverse.

Easy once you've seen what happens in a). You're on your own for the rest. My set theory is minimal basics only.

-Dan

3.
Originally Posted by Jhevon
(a) Prove that if $n$ is even, then exactly one nonidentity element of $\mathbb{Z}_n$ is its own inverse.

If $n$ is even then $n/2$ is an integer and $[n/2] + [n/2] = [n] = [0]$, so $[n/2]$ is its own inverse. To prove uniqueness, suppose $[x]$ is a congruence class with $[x]+[x]=[0]$. This means $[2x]=[0]$, and thus $2x\equiv 0(\bmod n)$. Hence $n|2x \implies (n/2)|x$, so $x = (n/2)k$ for some integer $k$. If $k$ is even this gives $[x] = [0]$, which is the identity and so excluded; if $k$ is odd it gives $[x] = [n/2]$, which is what we found. Thus there are no other self-inverse elements.

(b) Prove that if $n$ is odd, then no nonidentity element of $\mathbb{Z}_n$ is its own inverse.

Suppose $[x]+[x] = [0]$. Then $2x\equiv 0(\bmod n)$, so $n|2x$. Since $\gcd(2,n)=1$ ($n$ is odd), it follows that $n|x \implies x \in [0]$. Thus only the identity element is its own inverse.

(c) Prove that $[0] \oplus [1] \oplus \cdots \oplus [n - 1]$ equals either $[0]$ or $[n/2]$ in $\mathbb{Z}_n$. (d) What does part (c) imply about $0 + 1 + \cdots + (n - 1)$ modulo $n$?

Let $\{ a_1,a_2,...,a_n\}$ be a permutation of the integers $\{ 0,1,2,...,n-1\}$, not necessarily in that order but with a special order to it: $0+1+...+(n-1) = a_1+a_2+...+a_n$, where $a_1,a_2$ are inverses, $a_3,a_4$ are inverses, and so on. Now if $n$ is odd, then by (b) only one element is its own inverse, namely $0$, and everything else cancels against its paired inverse. So $a_1+a_2+...+a_n \equiv 0(\bmod n)$ if $n$ is odd. If $n$ is even, everything pairs off and cancels except $0$ and $n/2$; by (a), $n/2$ is its own mate and therefore survives the cancellation. Thus $a_1+...+a_n \equiv n/2(\bmod n)$.
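The claims in parts (a)-(d) can be verified by brute force for small moduli. This check is an illustration, not part of the thread:

```python
def self_inverses(n):
    """Nonidentity classes [x] in Z_n with [x] + [x] = [0]."""
    return [x for x in range(1, n) if (2 * x) % n == 0]

for n in range(2, 20):
    total = sum(range(n)) % n  # 0 + 1 + ... + (n-1) modulo n
    if n % 2 == 0:
        # (a): exactly one nonidentity self-inverse, namely [n/2]
        assert self_inverses(n) == [n // 2]
        # (c)/(d), even case: the sum is [n/2]
        assert total == n // 2
    else:
        # (b): no nonidentity self-inverses
        assert self_inverses(n) == []
        # (c)/(d), odd case: the sum is [0]
        assert total == 0

print("parts (a)-(d) verified for n = 2..19")
```

For example, in $\mathbb{Z}_6$ the only nonidentity self-inverse is $[3]$, and $0+1+2+3+4+5 = 15 \equiv 3 \pmod 6$, matching the even case.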