Dataset columns: url (string, 14 to 1.76k chars), text (string, 100 to 1.02M chars), metadata (string, 1.06k to 1.1k chars)
http://mathhelpforum.com/calculus/74501-area-enclosed-print.html
# Area Enclosed • Feb 19th 2009, 10:57 AM qzno Area Enclosed Find the area of the enclosed region given by the functions: $y = \frac{1}{x^2+1}$ and $y = \frac{x^2}{2}$ I Got It Down To: $A = \int_{-1}^{\frac{1}{2}} \left( (\sqrt{\frac{1-y}{y}}) - (\sqrt{2y}) \right) dy$ I Just Don't Know How To Integrate This Equation. Thanks • Feb 19th 2009, 11:17 AM running-gag Hi I am a little bit surprised by your answer http://nsa05.casimages.com/img/2009/...2055408371.jpg $A = \int_{-1}^{1} \left(\frac{1}{x^2+1} - \frac{x^2}{2}\right)\:dx$ • Feb 19th 2009, 11:22 AM qzno haha i was integrating in terms of y thank you : )
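For reference (a standard evaluation, not part of the original thread): the curves intersect where $\frac{1}{x^2+1} = \frac{x^2}{2}$, i.e. $x^4 + x^2 - 2 = 0$, giving $x = \pm 1$, and the corrected $x$-integral evaluates in closed form:
$$A = \int_{-1}^{1} \left(\frac{1}{x^2+1} - \frac{x^2}{2}\right) dx = \left[\arctan x - \frac{x^3}{6}\right]_{-1}^{1} = \left(\frac{\pi}{4} - \frac{1}{6}\right) - \left(-\frac{\pi}{4} + \frac{1}{6}\right) = \frac{\pi}{2} - \frac{1}{3}.$$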
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9600619077682495, "perplexity": 2361.3685621934346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816912.94/warc/CC-MAIN-20180225190023-20180225210023-00680.warc.gz"}
http://www.transtutors.com/questions/tts-limiting-distribution-178905.htm
# Limiting Distribution Suppose P(Xn = i) = (n + i) / (3n + 6), for i = 1, 2, 3. Find the limiting distribution of Xn.
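A standard computation (not part of the original page): each of the three probabilities tends to the same limit,
$$\lim_{n\to\infty} P(X_n = i) = \lim_{n\to\infty} \frac{n+i}{3n+6} = \frac{1}{3}, \qquad i = 1, 2, 3,$$
so the limiting distribution of $X_n$ is the uniform distribution on $\{1, 2, 3\}$.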
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9849799871444702, "perplexity": 3221.6092618477114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189198.71/warc/CC-MAIN-20170322212949-00425-ip-10-233-31-227.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/291828/if-a-is-normal-and-omega1-b-a-0-then-b-is-normal
# If $A$ is normal and $\Omega^1_{B/A}=0$ then $B$ is normal Let $A\subseteq B$ be two noetherian domains with fraction fields $k$ and $L$, respectively. Assume that $A$ is normal and $B$ is finite as an $A$-module. I am asking whether $B$ is also normal if $\Omega^1_{B/A}=0$. Any suggestion or reference in the literature is welcome. EDIT: I'm trying to show the claim above; the idea is the following. IDEA: Let $\mathfrak{q}\subset B$ be a maximal ideal of $B$ and set $\mathfrak{p}=\mathfrak{q}\cap A$, where $\mathfrak{p}$ is maximal since $B$ is integral over $A$. Consider the map $k:=A/\mathfrak p\longrightarrow B/\mathfrak p B:=B'$ induced by the inclusion $A\subseteq B$. We get a finite $k$-algebra $B'$, with module of differentials $\Omega^1_{B'/k}\simeq B'\otimes_{B}\Omega^1_{B/A}=0$. Now, since $\Omega^1_{B'/k}=0$, we have $B'\simeq K_1\times\cdots\times K_n$, where each $K_i$ is a finite (and separable) field extension of $k$, hence we get $$B'_{\mathfrak q}\simeq B_\mathfrak q/\mathfrak pB_\mathfrak q,$$ but the localization at a prime of a finite product of fields (which is $B'$) is a field, hence $\mathfrak p B_\mathfrak q$ is maximal, i.e. $\mathfrak p B_\mathfrak q=\mathfrak q B_\mathfrak q$. Now let $\mathfrak q'$ be a prime strictly contained in $\mathfrak q$. We have $\mathfrak q'B_\mathfrak q\subset \mathfrak pB_\mathfrak q$, so $(\mathfrak q'B_\mathfrak q )\cap A=\mathfrak p'\subset \mathfrak p$. This shows (localizing at $\mathfrak q'$) that $\mathfrak q'B_{\mathfrak q'}= \mathfrak p'B_{\mathfrak q'}$, where $\mathfrak q'\cap A=\mathfrak p'$. We showed that for any prime $\mathfrak q\subset B$ there is a prime $\mathfrak p\subset A$ (with $\mathfrak q\cap A=\mathfrak p$) such that $$\mathfrak p B_\mathfrak q=\mathfrak q B_\mathfrak q \, \, \, (*).$$ By Serre's Normality Criterion (SNC) the ring $B$ is normal if and only if for every prime $\mathfrak q$ associated to a principal ideal the ring $B_\mathfrak q$ is a DVR, i.e. $\mathfrak q B_\mathfrak q$ is principal. If we are able to show (I don't know if it is true) that every prime $\mathfrak q$ associated to a principal ideal has $\mathrm{ht}(\mathfrak q)=1$, then since $\mathrm{ht}(\mathfrak q)=\mathrm{ht}(\mathfrak q\cap A)$ ($B$ is integral over $A$) and $(\mathfrak pA_\mathfrak p)(B_\mathfrak q)=\mathfrak q B_\mathfrak q$, with $\mathfrak p A_\mathfrak p$ principal, we obtain that $\mathfrak qB_\mathfrak q$ is principal, hence by SNC the ring $B$ is normal. • SGA 1, Corollaire I.9.11: "Let $f \colon X \to Y$ be a dominant morphism [of schemes], $Y$ being normal and $X$ connected. If $f$ is unramified, then $f$ is étale, and therefore $X$ is normal." – Martin Bright Jan 31 '18 at 15:32 • This result is also proved as Lemma 1.5 in Chapter I of the book "Etale Cohomology and the Weil Conjectures" by Freitag & Kiehl in much more robust generality: you can relax "finite as $A$-module" to "finitely generated as $A$-algebra" (they state it for localizations of such). It ultimately rests on the etale-local structure of unramified maps. – nfdc23 Jan 31 '18 at 15:46 • Just for completeness, here's a counterexample if $B/A$ is not flat; take $A=R[x,y]$ and $B=R[x,y]/(y^2-x^3)$, for $R$ any field. – Daniel Litt Jan 31 '18 at 16:31 • @DanielLitt The map $A\longrightarrow B$ is not injective. 
– Vincenzo Zaccaro Jan 31 '18 at 16:37 • The whole point of the result in SGA1 and the Freitag-Kiehl book is that flatness does not need to be assumed (only injectivity/dominance), and the conclusion gives that flatness is therefore a consequence of the assumptions. Please remove the "EDIT" about the flat case; there is nothing interesting being asserted if one is allowed to assume flatness. – nfdc23 Jan 31 '18 at 17:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9841035604476929, "perplexity": 185.56769431278823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488519735.70/warc/CC-MAIN-20210622190124-20210622220124-00190.warc.gz"}
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/9702/2/a/e/
Properties Label 9702.2.a.e Level $9702$ Weight $2$ Character orbit 9702.a Self dual yes Analytic conductor $77.471$ Analytic rank $1$ Dimension $1$ CM no Inner twists $1$ Related objects Newspace parameters Level: $$N$$ $$=$$ $$9702 = 2 \cdot 3^{2} \cdot 7^{2} \cdot 11$$ Weight: $$k$$ $$=$$ $$2$$ Character orbit: $$[\chi]$$ $$=$$ 9702.a (trivial) Newform invariants Self dual: yes Analytic conductor: $$77.4708600410$$ Analytic rank: $$1$$ Dimension: $$1$$ Coefficient field: $$\mathbb{Q}$$ Coefficient ring: $$\mathbb{Z}$$ Coefficient ring index: $$1$$ Twist minimal: no (minimal twist has level 1078) Fricke sign: $$1$$ Sato-Tate group: $\mathrm{SU}(2)$ $q$-expansion $$f(q)$$ $$=$$ $$q - q^{2} + q^{4} - 2 q^{5} - q^{8} + O(q^{10})$$ $$q - q^{2} + q^{4} - 2 q^{5} - q^{8} + 2 q^{10} - q^{11} - 2 q^{13} + q^{16} + 2 q^{19} - 2 q^{20} + q^{22} - q^{25} + 2 q^{26} - 6 q^{29} - 4 q^{31} - q^{32} + 2 q^{37} - 2 q^{38} + 2 q^{40} + 8 q^{41} + 12 q^{43} - q^{44} + 12 q^{47} + q^{50} - 2 q^{52} + 2 q^{53} + 2 q^{55} + 6 q^{58} + 10 q^{59} + 10 q^{61} + 4 q^{62} + q^{64} + 4 q^{65} - 12 q^{67} - 4 q^{71} - 12 q^{73} - 2 q^{74} + 2 q^{76} - 2 q^{80} - 8 q^{82} - 18 q^{83} - 12 q^{86} + q^{88} - 12 q^{94} - 4 q^{95} + 12 q^{97} + O(q^{100})$$ Embeddings For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label. Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$ 1.1 0 −1.00000 0 1.00000 −2.00000 0 0 −1.00000 0 2.00000 $$n$$: e.g. 2-40 or 990-1000 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles Atkin-Lehner signs $$p$$ Sign $$2$$ $$1$$ $$3$$ $$-1$$ $$7$$ $$-1$$ $$11$$ $$1$$ Inner twists This newform does not admit any (nontrivial) inner twists. Twists By twisting character orbit Char Parity Ord Mult Type Twist Min Dim 1.a even 1 1 trivial 9702.2.a.e 1 3.b odd 2 1 1078.2.a.l yes 1 7.b odd 2 1 9702.2.a.t 1 12.b even 2 1 8624.2.a.g 1 21.c even 2 1 1078.2.a.h 1 21.g even 6 2 1078.2.e.e 2 21.h odd 6 2 1078.2.e.a 2 84.h odd 2 1 8624.2.a.y 1 By twisted newform orbit Twist Min Dim Char Parity Ord Mult Type 1078.2.a.h 1 21.c even 2 1 1078.2.a.l yes 1 3.b odd 2 1 1078.2.e.a 2 21.h odd 6 2 1078.2.e.e 2 21.g even 6 2 8624.2.a.g 1 12.b even 2 1 8624.2.a.y 1 84.h odd 2 1 9702.2.a.e 1 1.a even 1 1 trivial 9702.2.a.t 1 7.b odd 2 1 Hecke kernels This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(\Gamma_0(9702))$$: $$T_{5} + 2$$ $$T_{13} + 2$$ $$T_{17}$$ $$T_{19} - 2$$ $$T_{23}$$ $$T_{29} + 6$$ Hecke characteristic polynomials $p$ $F_p(T)$ $2$ $$1 + T$$ $3$ $$T$$ $5$ $$2 + T$$ $7$ $$T$$ $11$ $$1 + T$$ $13$ $$2 + T$$ $17$ $$T$$ $19$ $$-2 + T$$ $23$ $$T$$ $29$ $$6 + T$$ $31$ $$4 + T$$ $37$ $$-2 + T$$ $41$ $$-8 + T$$ $43$ $$-12 + T$$ $47$ $$-12 + T$$ $53$ $$-2 + T$$ $59$ $$-10 + T$$ $61$ $$-10 + T$$ $67$ $$12 + T$$ $71$ $$4 + T$$ $73$ $$12 + T$$ $79$ $$T$$ $83$ $$18 + T$$ $89$ $$T$$ $97$ $$-12 + T$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9662834405899048, "perplexity": 6849.300665486042}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300997.67/warc/CC-MAIN-20220118182855-20220118212855-00575.warc.gz"}
https://ask.sagemath.org/question/9165/compile-tex-from-sage/
# compile TeX from Sage Hello! I'm working on a program which should output the results of its computations in the form of a typeset document (ideally PDF from pdflatex). I'm trying to perform the final presentation part via the view function; however, that doesn't seem to be the most convenient way (since view is aimed at typesetting LaTeX formulas of elements of a list). A lot of hidden code (headers, turned-on math modes etc.) is really unpleasant to get around. I'm pondering generating an explicit .tex source code file. How can I call the TeX compiler from Sage? I.e. how can I compile it automatically in my own program, similarly to how view does it? Sage 5.1 Kubuntu 12.04 Why not do it the other way around, using SageTeX? Then you could call Sage code as needed, but do all the real stuff in LaTeX. I'm not quite sure if this would help, as you are a little vague about the precise steps you'll need. ( 2012-07-20 04:37:58 -0500 )edit I know what you mean & I did think about this way for quite a long time. But the truth is this is my college project - which should be its own Sage/Python library, not a TeX project. Or maybe I just misunderstood the idea around SageTeX & it can be processed via Python/Sage - is it somehow *"doable"*? ( 2012-07-20 07:50:57 -0500 )edit Oh, I see, you mean you are making a library and trying to document it. Well, hopefully some combination of something will work. ( 2012-07-20 08:01:34 -0500 )edit 1 Or maybe you should only do the final presentation as SageTeX? ( 2012-07-20 08:04:22 -0500 )edit
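One straightforward approach (a minimal sketch, not from the original thread; the file names and the preamble are illustrative) is to write the .tex file yourself and shell out to pdflatex with Python's standard subprocess module, which works the same way inside Sage:

import subprocess, tempfile, os

def compile_tex(body, jobname="output"):
    """Write a minimal LaTeX document and compile it with pdflatex.

    `body` is LaTeX source for the document body; returns the path of
    the generated PDF. Assumes `pdflatex` is on the PATH.
    """
    source = "\n".join([
        r"\documentclass{article}",
        r"\begin{document}",
        body,
        r"\end{document}",
    ])
    workdir = tempfile.mkdtemp()
    with open(os.path.join(workdir, jobname + ".tex"), "w") as f:
        f.write(source)
    # -interaction=nonstopmode keeps pdflatex from stopping to ask for input
    subprocess.check_call(
        ["pdflatex", "-interaction=nonstopmode", jobname + ".tex"],
        cwd=workdir,
    )
    return os.path.join(workdir, jobname + ".pdf")

# Example: typeset a formula string such as one produced by Sage's latex()
# print(compile_tex(r"$\int_0^1 x^2\,dx = \frac{1}{3}$"))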
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.652215301990509, "perplexity": 1904.845207409429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192887.19/warc/CC-MAIN-20200919204805-20200919234805-00015.warc.gz"}
https://engineering.stackexchange.com/questions/41099/what-do-the-rectangular-components-of-the-complex-frequency-s-in-laplace-trans
# What do the rectangular components of the complex frequency (s) in Laplace transform stand for? I tried doing a quick search on this question and was very surprised that this information feels very obscure, as if it is almost never discussed. Complex frequencies appear in many mathematical concepts such as the Laplace transform, and sources mention the rectangular form as $$s = σ + jω$$ but fail to actually explain what the rectangular components stand for. I saw one source mention this. https://www.quora.com/What-does-the-real-part-of-s-in-laplace-transform-represents the real part (sigma) is called the neper frequency; it controls the amplitude of the function and its unit is neper/second. The imaginary part (omega) is called the oscillation (radian) frequency; it controls oscillation and its unit is radian/second. I just decided to ask here to know what those actually mean. Also, even if some people might respond that the rectangular components are irrelevant or have little significant application, I just really want to ask this for the sake of knowing. EDIT: I am asking what σ and ω in $$s = σ + jω$$ stand for and why those quantities represent the real and imaginary components of the complex frequency. The source which I cited said that σ is the neper frequency and ω is the oscillation frequency. • s = jω , that's essentially it. There are some details of range of integration and convergence conditions for LT vs FT but in engineering practice just make that substitution. Thus, complex s corresponds to real ω (sinusoids), real s corresponds to complex ω (exponential growth/decay). Note that complex s (just like real ω) always comes in +/- pairs, as it is a consequence of a solution to a 2nd-order (or higher) differential eqn. Units are 1/time, with a factor of 2pi. The angle in the complex plane corresponds to phase. [Not totally sure if this all is what you're asking] Mar 21 at 14:09 • also note that "multiplication by j" is a 90 degree rotation in the complex plane. So IMO the "s" plane is just ω turned on its side ... so the question of "what is real s" is equivalent to "what is complex ω" Mar 21 at 14:17 • finally, the usage is often different. The s plane tends to be used for representation of systems, i.e. poles/zeros, while ω tends to be used for signals going into and out of those systems ... but the way i see it, they're both complex quantities representing units of 1/time and a phase shift Mar 21 at 14:24 • With rectangular component do you mean the same as the real component of $s$? Mar 21 at 15:09 • Just to be clear, I am talking about what sigma (σ) and omega (ω) are in s = σ + jω. Mar 21 at 22:04
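To make the two components concrete (a standard identity, not from the original page): the Laplace kernel factors as
$$e^{st} = e^{(\sigma + j\omega)t} = e^{\sigma t}\left(\cos\omega t + j\sin\omega t\right),$$
so $\sigma$ (in Np/s) sets the exponential envelope (growth for $\sigma>0$, decay for $\sigma<0$), while $\omega$ (in rad/s) sets the oscillation rate.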
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8865505456924438, "perplexity": 879.4240351688194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358153.33/warc/CC-MAIN-20211127073536-20211127103536-00134.warc.gz"}
https://neurips.cc/Conferences/2018/ScheduleMultitrack?event=11574
Poster Diverse Ensemble Evolution: Curriculum Data-Model Marriage Tianyi Zhou · Shengjie Wang · Jeffrey A Bilmes Thu Dec 06 02:00 PM -- 04:00 PM (PST) @ Room 517 AB #110 We study a new method, "Diverse Ensemble Evolution" (DivE$^2$), to train an ensemble of machine learning models that assigns data to models at each training epoch based on each model's current expertise and an intra- and inter-model diversity reward. DivE$^2$ schedules, over the course of training epochs, the relative importance of these characteristics; it starts by selecting easy samples for each model, and then gradually adjusts towards the models having specialized and complementary expertise on subsets of the training data, thereby encouraging high accuracy of the ensemble. We utilize an intra-model diversity term on data assigned to each model, and an inter-model diversity term on data assigned to pairs of models, to penalize both within-model and cross-model redundancy. We formulate the data-model marriage problem as a generalized bipartite matching, represented as submodular maximization subject to two matroid constraints. DivE$^2$ solves a sequence of continuous-combinatorial optimizations with slowly varying objectives and constraints. The combinatorial part handles the data-model marriage while the continuous part updates model parameters based on the assignments. In experiments, DivE$^2$ outperforms other ensemble training methods under a variety of model aggregation techniques, while also maintaining competitive efficiency. #### Author Information ##### Tianyi Zhou (University of Washington, Seattle) Tianyi Zhou is a Ph.D. student in Computer Science at the University of Washington and a member of the MELODI lab led by Prof. Jeff A. Bilmes. He will be joining the University of Maryland, College Park as a tenure-track assistant professor in the Department of Computer Science, affiliated with UMIACS, in 2022. His research interests are in machine learning, optimization, and natural language processing. He has published ~60 papers at NeurIPS, ICML, ICLR, AISTATS, EMNLP, NAACL, COLING, KDD, ICDM, AAAI, IJCAI, ISIT, Machine Learning (Springer), IEEE TIP/TNNLS/TKDE, etc. He is the recipient of the Best Student Paper Award at ICDM 2013 and the 2020 IEEE TCSC Most Influential Paper Award. ##### Jeffrey A Bilmes (University of Washington, Seattle) Jeffrey A. Bilmes is a professor in the Department of Electrical and Computer Engineering at the University of Washington, Seattle, Washington. He is also an adjunct professor in Computer Science & Engineering and the Department of Linguistics. Prof. Bilmes is the founder of the MELODI (MachinE Learning for Optimization and Data Interpretation) lab in the department. Bilmes received his Ph.D. from the Computer Science Division of the Department of Electrical Engineering and Computer Science, University of California, Berkeley, and a master's degree from MIT. He was also a researcher at the International Computer Science Institute, and a member of the Realization group there. Prof. Bilmes is a 2001 NSF Career award winner, a 2002 CRA Digital Government Fellow, a 2008 NAE Gilbreth Lectureship award recipient, and a 2012/2013 ISCA Distinguished Lecturer. Prof. Bilmes was, along with Andrew Ng, one of the two UAI (Conference on Uncertainty in Artificial Intelligence) program chairs (2009) and then the general chair (2010). 
He was also a workshop chair (2011) and the tutorials chair (2014) at NIPS/NeurIPS (Neural Information Processing Systems), and has been a regular senior technical chair at NeurIPS/NIPS since then. He was an action editor for JMLR (Journal of Machine Learning Research). Prof. Bilmes's primary interests lie in statistical modeling (particularly graphical model approaches) and signal processing for pattern classification, speech recognition, language processing, bioinformatics, machine learning, submodularity in combinatorial optimization and machine learning, active and semi-supervised learning, and audio/music processing. He is particularly interested in temporal graphical models (or dynamic graphical models, which include HMMs, DBNs, and CRFs) and ways in which to design efficient algorithms for them and design their structure so that they may perform as better structured classifiers. He also has strong interests in speech-based human-computer interfaces, the statistical properties of natural objects and natural scenes, information theory and its relation to natural computation by humans and pattern recognition by machines, and computational music processing (such as human timing subtleties). He is also quite interested in high performance computing systems, computer architecture, and software techniques to reduce power consumption. Prof. Bilmes has also pioneered (starting in 2003) the development of submodularity within machine learning, and he received a best paper award at ICML 2013, a best paper award at NIPS 2013, and a best paper award at ACMBCB in 2016, all in this area. In 2014, Prof. Bilmes also received a most influential paper in 25 years award from the International Conference on Supercomputing, given to a paper on high-performance matrix optimization. Prof. Bilmes has authored the graphical models toolkit (GMTK), a dynamic graphical-model based software system widely used in speech, language, bioinformatics, and human-activity recognition.
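The combinatorial step in the abstract (submodular maximization subject to matroid constraints) can be illustrated with the classic greedy heuristic below. This is a generic sketch for intuition only, not the authors' DivE$^2$ assignment algorithm; the objective and independence oracle are placeholders.

def greedy_submodular(ground_set, f, is_independent):
    """Greedy maximization of a monotone submodular f under a matroid
    constraint given by an independence oracle; for matroids this is a
    1/2-approximation (Fisher-Nemhauser-Wolsey, 1978)."""
    selected = set()
    candidates = set(ground_set)
    while candidates:
        # Pick the feasible element with the largest marginal gain.
        best, best_gain = None, 0.0
        for e in candidates:
            if not is_independent(selected | {e}):
                continue
            gain = f(selected | {e}) - f(selected)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no feasible element improves f
            break
        selected.add(best)
        candidates.remove(best)
    return selected

# Toy usage: a coverage objective with a cardinality (uniform matroid) constraint.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
f = lambda S: len(set().union(*[sets[i] for i in S]))
print(greedy_submodular(sets, f, lambda S: len(S) <= 2))  # e.g. {1, 3}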
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30222830176353455, "perplexity": 2649.2103368739713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00034.warc.gz"}
https://www.bionity.com/en/encyclopedia/Reproductive_value_%28population_genetics%29.html
# Reproductive value (population genetics) Fisher's reproductive value was defined by R. A. Fisher (1930) as the expected reproduction of an individual from their current age onward, given that they have survived to their current age. It is used in describing populations with age structure. ## Definition Consider a species with a life history table with survival and reproductive parameters given by $\ell_x$ and $m_x$, where $\ell_x$ = probability of surviving from age 0 to age $x$ and $m_x$ = average number of offspring produced by an individual of age $x$. Depending on whether the breeding is discrete or continuous, Fisher's reproductive value is calculated as $v_x = \mbox{either }\frac{\sum_{y=x}^\infty \ell_y m_y}{R}\mbox{ or }\frac{\int_{y=x}^\infty \ell_y m_y\,dy}{R}$ where $R = \mbox{ either }\sum_{y=0}^\infty \ell_y m_y\mbox{ or } \int_0^\infty \ell_x m_x\,dx,$ the net reproductive rate of the population. The average age of a reproducing adult is the generation time and is $T = \mbox{either }\sum_{y=0}^\infty \ell_y v_y\mbox{ or } \int_0^\infty \ell_x v_x\,dx.$ ## References Fisher, R. A. (1930) The genetical theory of natural selection. Oxford University Press, Oxford.
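A quick numerical illustration of the discrete-breeding formulas above (the life-table numbers here are invented for illustration; they are not from Fisher or the article):

# Hypothetical life table: l[x] = probability of surviving from age 0 to x,
# m[x] = average number of offspring produced at age x.
l = [1.0, 0.8, 0.5, 0.2]
m = [0.0, 1.0, 1.5, 0.5]

# Net reproductive rate R = sum over y of l_y * m_y
R = sum(ly * my for ly, my in zip(l, m))

def v(x):
    """Reproductive value v_x = (sum over y >= x of l_y * m_y) / R,
    exactly as in the discrete formula above."""
    return sum(l[y] * m[y] for y in range(x, len(l))) / R

print(R)                                        # 1.65
print([round(v(x), 3) for x in range(len(l))])  # v_0 = 1.0 by construction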
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5990639328956604, "perplexity": 2223.9478629736323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250610919.33/warc/CC-MAIN-20200123131001-20200123160001-00411.warc.gz"}
https://www.physicsforums.com/threads/limit-of-a-sequence.190601/
# Limit of a Sequence 1. Oct 11, 2007 ### TWM Find the limit of the sequence whose terms are given by $a_n = \left( \frac{1}{e^{4n}+n^2} \right)^{1/n}$ I am not really sure how to approach this problem. Thanks! 2. Oct 11, 2007 ### Amauta2K Try the binomial expansion of the denominator and apply the limits to each term (don't forget that you can always use L'Hopital's rule for those limits). I guess that the limit is $1/e^4$.
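Making the guess rigorous (a standard computation, not from the original thread): taking logarithms,
$$\ln a_n = -\frac{1}{n}\ln\left(e^{4n}+n^2\right) = -\frac{1}{n}\left(4n + \ln\left(1+n^2 e^{-4n}\right)\right) \longrightarrow -4,$$
since $\ln(1+n^2e^{-4n}) \to 0$. Hence $a_n \to e^{-4}$, confirming the guessed limit of $1/e^4$.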
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9902435541152954, "perplexity": 949.4214884243544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120349.46/warc/CC-MAIN-20170423031200-00371-ip-10-145-167-34.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/485822/why-is-compactness-so-important
# Why is compactness so important? I've read many times that 'compactness' is such an extremely important and useful concept, though it's still not very apparent why. The only theorems I've seen concerning it are the Heine-Borel theorem, and a proof that continuous functions on closed subintervals of $\mathbb{R}$ are bounded. It seems like such a strange thing to define; why would the fact that every open cover admits a finite refinement be so useful? Especially as stating "for every" open cover makes compactness a concept that must be a very difficult thing to prove in general - what makes it worth the effort? If it helps in answering, I am about to enter my third year of my undergraduate degree, and came to wonder this upon preliminary reading of introductory topology, where I first found the definition of compactness. - Finite subcover. A refinement is something different, used to define weaker related ideas. –  dfeuer Sep 6 '13 at 15:32 Essentially, compactness is "almost as good as" finiteness. I can't think of a good example to make this more precise now, though. –  Johannes Kloos Sep 6 '13 at 15:36 FireGarden, perhaps you are reading about paracompactness? –  Asaf Karagila Sep 6 '13 at 16:39 M. R. Sundström's paper A pedagogical history of compactness may be useful here. It discusses the original motivations for the notion of compactness, and its historical development. If you want to understand the reasons for studying compactness, then looking at the reasons that it was invented, and the problems it was invented to solve, is one of the things you should do. –  MJD Sep 6 '13 at 16:47 @dfeuer: The conditions of having a finite subcover and a finite refinement are equivalent. –  user87690 Sep 6 '13 at 17:33 As many have said, compactness is sort of a topological generalization of finiteness. And this is true in a deep sense, because topology deals with open sets, and this means that we often "care about how something behaves on an open set", and for compact spaces this means that there are only finitely many possible behaviors. But why is finiteness important? Well, finiteness allows us to construct things "by hand", and constructive results are a lot deeper, and to some extent useful to us. Moreover finite objects are well-behaved ones, so while compactness is not exactly finiteness, it does preserve a lot of this behavior (because it behaves "like a finite set" for important topological properties) and this means that we can actually work with compact spaces. The point we often miss is that given an arbitrary topological space on an infinite set $X$, the well-behaved structures which we can actually work with are the pathologies and the rare instances. This is throughout most of mathematics. It's far less likely that a function from $\Bbb R$ to $\Bbb R$ is continuous, differentiable, continuously differentiable, and so on and so forth. And yet, we work so much with these properties. Why? Because those are well-behaved properties, and we can control these constructions and prove interesting things about them. Compact spaces, being "pseudo-finite" in their nature, are also well-behaved and we can prove interesting things about them. So they end up being useful for that reason. - +1. I particularly like the phrase "finitely many possible behaviors". In every other respect, one could have used "discrete" in place of "compact". Honestly, discrete spaces come closer to my intuition for finite spaces than do compact spaces. 
However, as you pointed out, compactness is deep; in contrast, discreteness is the ultimate separation axiom while most spaces we're interested in are comparatively low on the separation hierarchy. –  Karl Kronenfeld Sep 6 '13 at 21:37 And when one learns about first-order logic, one gets the feeling that compactness is, somehow, a way to deduce information about an "infinite" object from its "finite" parts (or from a finite number of them). By the way, as always, very nice to read your answers. –  leo Sep 11 '13 at 3:12 @leo: Thank you for the compliment. –  Asaf Karagila Sep 11 '13 at 5:37 Compactness does for continuous functions what finiteness does for functions in general. If a set $A$ is finite then every function $f:A\to \mathbb R$ has a max and a min, and every function $f:A\to\mathbb R^n$ is bounded. If $A$ is compact, then every continuous function from $A$ to $\mathbb R$ has a max and a min and every continuous function from $A$ to $\mathbb R^n$ is bounded. If $A$ is finite then every sequence of members of $A$ has a subsequence that is eventually constant, and "eventually constant" is the only kind of convergence you can talk about without talking about a topology on the set. If $A$ is compact, then every sequence of members of $A$ has a convergent subsequence. - Compactness is the next best thing to finiteness. Let $A$ be a finite set, let $f: A \to \mathbb{R}$ be a function. Then $f$ is trivially bounded. Now let $X$ be a compact set, and let $f: X \to \mathbb{R}$ be a continuous function. Then $f$ is also bounded... - Compactness is important because: 1) It behaves well under topological operations. a) It is preserved by continuous maps: if $C$ is compact and $f:C \rightarrow Y$ is continuous, where $Y$ is any topological space, then $f(C)$ is compact in $Y$. b) An arbitrary product of compact sets is compact in the product topology. 2) Compact sets behave almost like finite sets, which are way easier to understand and work with than the uncountable pathologies which are common in topology. Compactness is useful even when it emerges as a property of subspaces: 3) Most of the topological groups we face in math every day are locally compact, e.g. $\mathbb{R}$, $\mathbb{C}$, even $\mathbb{Q}_p$ and $\mathbb{R}_p$, the $p$-adic numbers. 4) It is often easier to solve a differential equation in a compact domain than in a non-compact one. 5) There are many types of convergence of functions, one of which is convergence on compact sets. 6) Regular Borel measures, one of the most important classes of measures, are defined by limits of measures on compact sets. This list is far from over... Anyone care to join in? - Historically, it led to the compactness theorem for first-order logic, but that's over my head. –  dfeuer Sep 6 '13 at 15:44 One reason is that boundedness doesn't make sense in a general topological space. For example $(-1, 1) \subset \mathbb{R}$ is bounded when viewing $\mathbb{R}$ as a metric space with the usual Euclidean metric, but as topological spaces, $(-1, 1)$ and $\mathbb{R}$ really are the same, that is, homeomorphic. So why then compactness? Well, I suppose part of the motivation is the Heine-Borel Theorem, which says a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded; or said another way, a closed set is compact if and only if it is bounded. So, at least for closed sets, compactness and boundedness are the same. 
This relationship is a useful one because we now have a notion which is strongly related to boundedness and which does generalise to topological spaces, unlike boundedness itself. In addition, at least for Hausdorff topological spaces, compact sets are closed. So one way to think about compact sets in topological spaces is that they are analogous to the bounded sets in metric spaces. The analogy here is not exact because the Heine-Borel Theorem only applies to $\mathbb{R}^n$, not every metric space, but hopefully this gives you some intuition. - It's already been said that compact spaces act like finite sets. A variation on that theme is to contrast compact spaces with discrete spaces. A compact space looks finite on large scales. A discrete space looks finite on small scales. A $T_1$ space is finite if and only if it is both compact and discrete. So we have the slogan "compactness = finiteness modulo discreteness". A locally compact abelian group is compact if and only if its Pontryagin dual is discrete. So we have another slogan, "compactness = Fourier transform of discreteness". - Is there a redefinition of discrete so this principle works for all topological spaces (e.g., discrete modulo indistinguishability)? Or of compactness. –  zyx Sep 6 '13 at 19:08 @zyx I guess we could loosen the discreteness condition (every point has a singleton neighborhood) by requiring instead that every point has a finite neighborhood. Not sure what this property P should be called... Anyway, a topological space is finite iff it is both compact and P. –  Chris Culter Sep 6 '13 at 22:20 I would like to give here an example showing why compactness is important. Consider the following Theorem: Theorem: Let $f:\mathbb{R}\to \mathbb{R}$ be a continuous coercive function. Then there exists $x_0\in \mathbb{R}$ such that $$\tag{1} f(x_0)=\inf_{x\in\mathbb{R}}f(x)$$ Proof: Let $I=\inf_{x\in\mathbb{R}}f(x)$ and choose $x_n\in \mathbb{R}$ with $f(x_n)\to I$. We claim that $x_n$ is bounded. Indeed, if $x_n$ were not bounded, then we could extract a subsequence of $x_n$ (not relabelled) such that $f(x_n)\to \infty$ (by coercivity), which is absurd. Now, since $x_n$ is bounded, we may assume without loss of generality (compactness) that $x_n\to x$. Because $f$ is continuous, we conclude that $f(x_n)\to f(x)=I$. The main argument of the proof was the fact that the closure of any bounded set in $\mathbb{R}$ is compact. Now consider the problem ($\Omega\subset\mathbb{R}^N$ a bounded domain) $$\tag{P} \left\{ \begin{array}{ll} -\Delta u =f & \mbox{ in } \Omega \\ u\in H_0^1(\Omega) & \end{array} \right.$$ We say that $u\in H_0^1(\Omega)$ is a solution of (P) if $$\int_\Omega\nabla u\cdot\nabla v=\int_\Omega fv,\ \forall\ v\in H_0^1(\Omega)\tag{3}$$ Let $F:H_0^1(\Omega)\to \mathbb{R}$ be defined by $$F(u)=\frac{1}{2}\int_\Omega |\nabla u|^2-\int_\Omega fu$$ $(3)$ is equivalent to $\langle F'(u),v\rangle =0$ for all $v\in H_0^1(\Omega)$, and this equality is equivalent to finding a local minimum of $F$ in $H_0^1(\Omega)$. One can check that $F$ is continuous and coercive, so we could try to use the same argument as above to find a minimum of $F$, but the problem here is lack of compactness, i.e. if $K\subset H_0^1(\Omega)$ is bounded we can't conclude that the closure of $K$ is compact. Therefore, to see how important compactness is: the above problem can be solved by considering a new topology on $H_0^1(\Omega)$, to wit, the weak topology. 
In this topology we have fewer open sets, which implies more compact sets; in particular, bounded sets are pre-compact. It can be shown that $F$ is weakly sequentially lower semicontinuous, i.e. $F$ is sequentially lower continuous in the weak topology, which together with coercivity implies the existence of a minimum. To conclude, take a look at these examples (they show how bad the lack of compactness can be): here and here. - The concept of a "coercive" function was unfamiliar to me until I read your answer; I suspect the same will be true for many readers. If by "coercive" you mean that $\lim_{x \rightarrow \pm \infty} f(x) = \infty$, then the fact that a continuous coercive function must attain its minimum value is an exercise that I assign to my honors calculus students: it requires only the extreme value theorem (which of course can be thought of in terms of compactness but need not be, and probably most of us learn it without compactness first). So I'm not sure this is a good example... –  Pete L. Clark Sep 18 '13 at 18:39 (The rest of your example is very interesting and strong...if not necessarily accessible to the broadest possible audience who could be interested in the question.) –  Pete L. Clark Sep 18 '13 at 18:42 Thank you for your comment @PeteL.Clark. Let me ask you one thing: in my point of view, the extreme value theorem (EVT) relies strongly on the fact that the domain is compact, hence this would imply that compactness is important in proving the statement; however, even if we do not use this argument, I think that a proof using the EVT would use compactness. For example, a proof which comes to my head is: write $(-\infty,\infty)=\cup_{i=1}^\infty [-i,i]$. Take the infimum of $f$ in each $[-i,i]$ (which exists because of the EVT) and show that this sequence WLOG converges (using compactness). –  Tomás Sep 18 '13 at 19:03 Please, could you explain your point of view to me in more detail? –  Tomás Sep 18 '13 at 19:03 To prove your theorem without it: since $\lim_{x \rightarrow \pm \infty} f(x) = \infty$, there is some $M > 0$ such that $f(x) > f(0)$ for all $x$ with $|x| > M$. Thus the minimum value of $f$ on $[-M,M]$ is its minimum value on all of $\mathbb{R}$. –  Pete L. Clark Sep 18 '13 at 19:37 Well, here are some facts that give equivalent definitions: 1. Every net on a compact set has a convergent subnet. 2. Every ultrafilter on a compact set converges. 3. Every filter on a compact set has a limit point. 4. Every net in a compact set has a limit point. 5. Every universal net in a compact set converges. Here are some more useful things: 1. Every continuous bijection from a compact space to a Hausdorff space is a homeomorphism. 2. Every compact Hausdorff space is normal. 3. The image of a compact space under a continuous function is compact. 4. Every infinite subset of a compact space has a limit point. - Simply put, compactness gives you something to work with, this "something" depending on the particular context. But for example, it gives you extrema when working with continuous functions on compact sets. It gives you convergent subsequences when working with arbitrary sequences that aren't known to converge; the Arzela-Ascoli theorem is an important instance of this for the space of continuous functions (this point of view is the basis for various "compactness" methods in the theory of non-linear PDE). It gives you the representation of continuous linear functionals as integrals against regular Borel measures (Riesz Representation theorem). Etc.
- If you have some object, then compactness allows you to extend results that you know are true for all finite sub-objects to the object itself. The main result used to prove this kind of thing is the fact that if $X$ is a compact space, and $(K_\alpha)_{\alpha\in A}$ is a family of closed sets with the finite intersection property (no finite collection has empty intersection), then $(K_\alpha)$ has non-empty intersection. For if $(K_\alpha)$ had empty intersection, then the complements of the $K_\alpha$ would form an open cover of $X$, which would then have to have a finite subcover $(X\setminus K_{\alpha(i)})_{i=1}^n$, and so the $(K_{\alpha(i)})_{i=1}^n$ would be a finite collection of the $K_\alpha$ with empty intersection. For example, the De Bruijn-Erdős Theorem in graph theory states that an infinite graph $G$ is $n$-colourable if all its finite subgraphs are $n$-colourable (i.e., you can colour the vertices with $n$ colours in such a way that no two vertices connected by an edge are the same colour). You can prove this by noting that the space $X$ of all colourings of the vertices of $G$ with $n$ colours (for which vertices of the same colour may share an edge) is a compact topological space (since it is the product of discrete spaces). Then, for each finite subgraph $F$, let $X_F$ be the set of all colourings of $G$ that give an $n$-colouring of $F$. It can be checked that the $X_F$ are closed and have the finite intersection property, so they have non-empty intersection, and any member of their intersection must $n$-colour the whole of $G$. In general, if you have some property that you know is true for finite sub-objects, then you can often encode that in a collection of closed sets in a topological space $X$ that have the finite intersection property. Then, if $X$ is compact, the closed sets have non-empty intersection, which normally tells you that the result is true for the object itself (sorry that this is all so imprecise!). A very closely related example is the compactness theorem in propositional logic: an infinite collection of sentences is consistent if every finite sub-collection is consistent. This can be proved using topological compactness, or it can be proved using the completeness theorem: if the collection is inconsistent, then it must be possible to derive a contradiction from finitely many of the sentences, so some finite collection of the sentences must be inconsistent. Either way you look at it, though, the compactness theorem is a statement about the topological compactness of a particular space (products of compact Stone spaces). - I want to elaborate on Sargera's and Tomás' theme. Topological considerations are great, but to me the examples are not as concrete as when we speak of "sequential compactness" (which unfortunately in general topologies does not equate to compactness, but includes for example weak/weak* compactness). In this situation, for practical purposes, all I need from the topology in a given setting is a notion of convergence: given a sequence of points in my space, say when it converges. Give me the definition of convergence to play with, and we can talk about sequential compactness. For sequential compactness of a set, we ask: "Given an arbitrary sequence in the set, does there exist a convergent subsequence?" 
In general, the usefulness of this is that often we want to find a function with some property $P$, but we can only find functions with property $P_n$, which is close to $P$ as $n$ gets larger, and taking a limit as $n \to \infty$ would yield property $P$. (In Tomás' example, $P_n$: "functions that achieve an objective value within $1/n$ of the infimum", and $P$: "function that achieves the infimum".) However, the functions satisfying property $P_n$ may not converge as we take $n$ to $\infty$, so we would not be able to take a limit of the function sequence. If the set of functions is sequentially compact (with respect to whatever notion of convergence we are working with), we can take a subsequence that converges and obtain the desired function satisfying property $P$! (Replace "function" in the previous part by "point in set", and we can talk about other things like measures, $\mathbb{R}^N$, etc... it's just that so often I am applying this to functions or measures. In probability they use the term "tightness" for measures.) Hmm.. one caveat for the above: for the notion of convergence being used, one would have to prove that the convergence preserves the property, or that the property is continuous with respect to the notion of convergence. So in Tomás' example again, weak convergence is still good enough to obtain the minimizer. I think it's a great example because it motivates the study of weaker notions of convergence. - Every continuous function on a closed interval is Riemann integrable; this uses the Heine-Borel theorem. Since there are a lot of theorems in real and complex analysis that use the Heine-Borel theorem, the idea of compactness is very important. - Perhaps you could improve this Answer by adding further specific examples of "theorems in real and complex analysis that [use] Heine-Borel theorem", or by explaining how proving continuous $\implies$ Riemann integrable makes use of it. –  hardmath Mar 16 '14 at 14:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9168884754180908, "perplexity": 224.13163763450413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737898933.47/warc/CC-MAIN-20151001221818-00038-ip-10-137-6-227.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/216021/prove-that-a-and-b-are-invertible-and-b-a-1
# Prove that $A$ and $B$ are invertible and $B=A^{-1}$ Suppose $A$ and $B$ are $n \times n$ matrices. Assume $AB=I$. Prove that $A$ and $B$ are invertible and that $B=A^{-1}$. Please let me know whether my proof is correct and if there are any improvements to be made. Assume $AB=I$. Then $(AB)A=IA=A$. So, $A(BA)=AI=A$. Then $BA=I$. Therefore $AB=BA=I$. Thus $A$ and $B$ are invertible. And by definition $B=A^{-1}$, so $AB=AA^{-1}=I$. - (1) How did you get "Then $BA=I$"? (2) Do you know the rank-nullity theorem? –  wj32 Oct 18 '12 at 2:17 Your proof is correct if you assume that $CD = C$ implies that $D = I$. –  Thomas Oct 18 '12 at 2:18 Unfortunately, the argument is invalid: $A(BA)=A$ does not imply that $BA=I$ unless you already know that $A$ is invertible, and of course you don't, since that's (part of) what you're trying to prove. There are lots of ways to prove this result, but which ones you can use depends on what you know at this point. What do you know about invertible matrices? Do you know any theorems of the form '$A$ is invertible if and only if something'? –  Brian M. Scott Oct 18 '12 at 2:19 @Thomas: But tkrm can't legitimately assume that. –  Brian M. Scott Oct 18 '12 at 2:19 Hint: If $AB = I$ then $A$ represents a surjective linear transformation on a finite dimensional vector space, hence injective, hence bijective. –  Jason Polak Oct 18 '12 at 2:23 Proof #1: (along the lines mentioned in the comments) As $AB=I$, you know that $A$ is onto as a linear transformation, because $x=Ix=ABx=A(Bx)$ for any $x\in\mathbb{R}^n$. This implies that $A$ is bijective, being a surjective linear transformation in a finite-dimensional space. So there exists $A^{-1}$. Now $$A^{-1}=A^{-1}I=A^{-1}AB=B.$$ Proof #2 (using determinants) Since $AB=I$, we have $$1=\det I=\det AB=\det A\,\det B.$$ So $\det A\ne0$ and $A$ is invertible, and again we can do $$B=IB=A^{-1}AB=A^{-1}I=A^{-1}.$$ - A good first step would be to look at some of the answers to this question. The accepted one, by Davidac897, is pretty elementary and is probably the place to start. You're almost certainly not yet ready for Martin Brandenberg's answer, and I'd also skip Bill Dubuque's answers for now: they're also aimed at someone with more background. The proof given by falagar, on the other hand, is well worth a look, and you should certainly look at Blue's answer, which is deliberately very elementary. - Suppose $AB = I$. ($A$ and $B$ are $n \times n$ matrices.) First note that $R(A) = \mathbb{R}^n$. (If $y \in \mathbb{R}^n$, then $y = Ax$, where $x = By$.) It follows (by the rank-nullity theorem) that $N(A) = \{0\}$. We wish to show that $BAx = x$ for all $x \in \mathbb{R}^n$. So let $x \in \mathbb{R}^n$, and let $z = BAx$. Then $Az = A(BAx) = (AB)Ax = Ax$, which implies that $z = x$ because $N(A) = \{ 0 \}$. So $BAx = x$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9830973744392395, "perplexity": 177.7726024377745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646313806.98/warc/CC-MAIN-20150827033153-00349-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.stderr.nl/Blog/Hardware/Thinkpad/WeirdMuteButtonBehaviour.html
"In het verleden behaalde resultaten bieden geen garanties voor de toekomst" These are the ramblings of Matthijs Kooijman, concerning the software he hacks on, hobbies he has and occasionally his personal life. Questions? Praise? Blame? Feel free to contact me. My old blog (pre-2006) is also still available. Sun Mon Tue Wed Thu Fri Sat 25 Tag Cloud & (With plugins: config, extensionless, hide, tagging, Markdown, macros, breadcrumbs, calendar, directorybrowse, entries_index, feedback, flavourdir, include, interpolate_fancy, listplugins, menu, pagetype, preview, seemore, storynum, storytitle, writeback_recent, moreentries) Valid XHTML 1.0 Strict & CSS Thinkpad X201 mute button breaking speaker output Recently, I was having some problems with the internal speakers on my Lenovo Thinkpad X201. Three times now, the internal speakers just stopped producing sound. The headphone jack worked, it's just the speakers which were silent. Nothing helped: fiddling with volume controls, reloading alsa modules, rebooting my laptop, nothing fixed the sound... When trying to see if the speakers weren't physically broken, I discovered that booting into Windows actually fixed the problem and restored the sound from the speakers. It's of course a bit of a defeat to accept Windows a fix for my problem, but I was busy with other things, so it sufficed for a while. When migrating my laptop to my new Intel SSD, I broke my Windows installation, so when the problem occured again, I had no choice but to actualy investigate it. I'll skip right to the conclusion here: I had broken my sound by pressing the mute button on my keyboard... Now, before you think I'm stupid, I had of course checked my volume controls and the device really was unmuted! But it turns out the mute button in Thinkpads combined with Linux is a bit weird... This is how you would expect a mute button to be implemented: You press the mute button, it sends a keypress to the operating system, which then tells the audio driver to mute. This is how it works on my Thinkpad: You press the mute button, causing the EC (embedded controller) in the thinkpad to directly mute the speakers. This is not visible from the normal volume controls in the software, since it happens on a very low level (though the thinkpad_acpi kernel module can be used to expose this special mute state through a /proc interface and special audio device). In addition to muting the speakers, it also sends a MUTE acpi keypress to the operating system. This keypress then causes the audio driver to mute the audio stream (actually, it's pulseaudio that does that). Now, here's the fun part: If you now unmute the audio stream through the software volume controls, everything looks like it should work, but the hardware is still muted! It never occured to me to press the mute button again, since the volume wasn't muted (or at least didn't look like it). I originally thought that the mute button handling was even more complex, when I found some register polling code that faked keypresses, but it seems that's only for older Thinkpads (phew!). In any case, the bottom line is: If you have a Thinkpad whose speakers suddely stop working, try pressing the mute button!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23581121861934662, "perplexity": 7655.498607595585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540523790.58/warc/CC-MAIN-20191209201914-20191209225914-00284.warc.gz"}
https://tex.stackexchange.com/questions/22532/template-for-standard-operating-procedures
# Template for Standard Operating Procedures?

There are several queries and answers for technical reference manuals, user guides etc., and I've found enough places on the net that have these templates along with detailed descriptions of their scope. But where can I find a TeX template for Standard Operating Procedures specifically? The template should typically have:

• A standard-looking header and footer for the procedure numbers, company name etc.
• Uniform formatting for headings, bullets and sub-bullets
• Simple placeholders for images (not a priority).

The guys at my workplace typically write this sort of thing in Word 2007, but I feel the stuff I'm working on now is quite well suited to, and will benefit greatly from, TeX.

P.S. I am at a beginner level when it comes to LaTeX, but I'm very comfortable using it. Also, if it makes any difference to your answers – I prefer to write it completely by hand in a simple editor (rather than using something like LyX, for example). The output is in pdf. Here are two examples of the look I'd like: [example images missing from this extraction]

This isn't everything you're looking for, but it's a start, and hopefully will give you an idea of how easily document class customization can start out. Mind you, I'm not going to worry much about emulating bad habits from the Word document, but focus on simple semantic content and formatting.

\documentclass[12pt]{article}
\title{Personal Protective Clothing Level}
\date{7/24/08}
\author{Harper}
\begin{document}
\maketitle
\section{Procedures}
\subsection{Structure Fires}
\begin{enumerate}
\item All firefighters operating in the ``hot zone'' of a structure fire will be in full turnouts to include coat, pants, helmet, hood, gloves and boots. When operating in an IDLH atmosphere an SCBA shall be worn.
\item Engineers when operating close to the incident and exposed to products of combustion shall also be in full PPE including SCBA. If outside the ``hot zone'' engineers will be allowed to modify their PPE accordingly. If the Engineer is considered to be a part of the RIT team, then full PPE including an SCBA shall be worn.
\end{enumerate}
\end{document}

Using the standard article class as given, you get the page content shown in the first screenshot (images omitted in this extraction). After writing a (relatively) simple document class based on article, the same content (with \documentclass[12pt]{sop} and \approved{Chief Harper} instead of \documentclass[12pt]{article}) gives the page content shown in the other two screenshots. The file sop.cls that created this layout is:

\NeedsTeXFormat{LaTeX2e}
\ProvidesClass{sop}[2011/07/08 v0.2 Modified article class for standard operating procedures]
% https://stackoverflow.com/questions/581916/how-do-you-extend-article-document-class-in-latex
% Passes any class options to the underlying article class
\DeclareOption*{\PassOptionsToClass{\CurrentOption}{article}}
\ProcessOptions
% Load the base class
\LoadClass{article}

% Redefine the page margins
\RequirePackage[left=1in,right=1in,top=1in,bottom=1in]{geometry}

% Modifications to the section titles
\RequirePackage{titlesec}
\renewcommand{\thesection}{\Roman{section}}
\titleformat{\section}{\normalfont\bfseries}
  {\makebox[3em][l]{\thesection{}.}}{0pt}{}
\titleformat{\subsection}{\normalfont\bfseries}
  {}{0pt}{}

% Modification of title block
\RequirePackage{titling}
\RequirePackage{multirow}
\newcommand{\approved}[1]{\newcommand{\theapproved}{#1}}
% Ref: http://tex.stackexchange.com/questions/3988/titlesec-versus-titling-mangling-thetitle
\let\oldtitle\title
\renewcommand{\title}[1]{\oldtitle{#1}\newcommand{\mythetitle}{#1}}
\renewcommand{\maketitle}{%
\begin{tabular}{|c|p{2in}|l|l|}
\hline
\multirow{3}{*}{logo} & \multicolumn{1}{p{2.5in}|}{\centering Mammoth Lakes Fire Protection District } & Date: \thedate & Number: \\
\cline{2-4}
 & \multicolumn{1}{p{2.5in}|}{\centering Standard Operating Procedure } & \multicolumn{2}{p{2.5in}|}{Title: \mythetitle} \\
\cline{2-4}
\end{tabular}
}

% For "Page N of M"
\RequirePackage{lastpage}

% For easier construction of page headers/footers
\RequirePackage{fancyhdr}
\fancypagestyle{plain}{ % for first page
\fancyhf{}
\fancyfoot[L]{\framebox{Author: \theauthor}\\ \jobname{}.tex}
\fancyfoot[R]{\framebox{Page: \thepage{} of \pageref*{LastPage}}}
\renewcommand{\footrulewidth}{0pt}
}
\pagestyle{fancy} % for other pages
\fancyhf{}
\fancyhead[R]{%
\begin{tabular}{|c|c|}
\hline
Revision Date: & Number: \\
\hline
\end{tabular}%
}
\fancyfoot[L]{\framebox{Author: \theauthor}}
\fancyfoot[R]{\framebox{Page: \thepage{} of \pageref*{LastPage}}} % \pageref* if we use hyperref, \pageref otherwise
\renewcommand{\footrulewidth}{0pt}

% For easier customization of itemized, enumerated, and other lists
\RequirePackage{enumitem}

\RequirePackage{hyperref}

% Ensure first page is correct style
\thispagestyle{plain}

% That's all, folks!
\endinput

See this SO question for where I got started with this.

Regarding the body: There is a lot of goofy, inconsistent formatting going on there. The text in Section I (Scope) is indented, the text in Section II (Purpose) has a start of paragraph indent and is in bold, Section III (Definitions) has no indent period, Section IV (Responsibilities) has no start of paragraph indent but the whole paragraph is indented, and so on. The only consistency I see in that SOP is an utter lack of consistency. Replicating that utter lack of consistency is a piece of cake with microstuff word. Doing that is going to be a bit tough with LaTeX. On the other hand, getting rid of that goofy crap (and it is crap) would, in the end, be a good thing. The cute little box around the page content will also be a bit challenging, but someone else here will be able to solve that problem.
What is going to be challenging is setting up that cute little title box. I've done that for a document that requires three cover pages (a pretty front cover, a less pretty inside front cover, and a signature page). It involved some down and dirty TeX nastiness. Something like

% \SOPtitlebox: Create the title for an SOP
% #1 - Date (format: mm/dd/yy)
% #2 - Number (format: Unknown. The example has a blank number.)
% #3 - Title (e.g. {Personal Protective Clothing Level})
% #4 - Approved by (e.g. {Chief Harper}, but probably a macro)
% #5 - Revision date (Hmmm. Example is / /, so presumably mm/dd/yy, but with blanks)
\def\SOPtitlebox#1#2#3#4#5{%
% Contents of this macro left as an exercise for the user
}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7366236448287964, "perplexity": 4751.511285914786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487629209.28/warc/CC-MAIN-20210617041347-20210617071347-00522.warc.gz"}
https://sgmathsacad.com/resources/9758-2019-p1-q03/
9758/2019/P1/Q03 A function is defined as $f(x)=2 x^{3}-6 x^{2}+6 x-12$. (i) Show that $\mathrm{f}(x)$ can be written in the form $p\{(x+q)^{3}+r\}$, where $p, q$ and $r$ are constants to be found. (ii) Hence, or otherwise, describe a sequence of transformations that transform the graph of $y=x^{3}$ onto the graph of $y=f(x)$.
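The scraped page stops at the question prompt; one possible working (mine, not an official mark scheme):

\begin{align*}
f(x) &= 2x^3 - 6x^2 + 6x - 12 \\
     &= 2\left(x^3 - 3x^2 + 3x - 6\right) \\
     &= 2\left\{(x-1)^3 - 5\right\}, \qquad \text{since } (x-1)^3 = x^3 - 3x^2 + 3x - 1,
\end{align*}

so $p=2$, $q=-1$ and $r=-5$. For (ii): translate the graph of $y=x^3$ by $1$ unit in the positive $x$-direction, then by $5$ units in the negative $y$-direction, then stretch it parallel to the $y$-axis with scale factor $2$.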
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9947715401649475, "perplexity": 76.16555362354778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948756.99/warc/CC-MAIN-20230328011555-20230328041555-00434.warc.gz"}
http://www.maa.org/programs/maa-awards/writing-awards/from-intermediate-value-theorem-to-chaos
# From Intermediate Value Theorem to Chaos

by Xun-cheng Huang

Award: Carl B. Allendoerfer
Year of Award: 1993
Publication Information: Mathematics Magazine, Vol. 65 (1992), pp. 91-103
Summary: A proof of Sarkovskii's Theorem based on the Intermediate Value Theorem.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8592566251754761, "perplexity": 5654.142251982728}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207924799.9/warc/CC-MAIN-20150521113204-00114-ip-10-180-206-219.ec2.internal.warc.gz"}
https://docs.mantidproject.org/nightly/tutorials/muon_GUI_course/the_tabs_home.html
# The Tabs - Home

## Home

When launched, the Muon Analysis GUI defaults to the Home tab. This tab allows:

• Data files to be loaded.
• Run information to be viewed.
• Time zero, first good data and last good data to be set.
• Deadtime corrections to be set.
• Data binning to be modified.

To load a file, either Browse, Load Current Run, or simply type a run number (assuming you have defined the directory(ies) in which your files are stored). When typing a run number, or using the 'Load Current Run' option, first select the desired instrument from the dropdown list. To demonstrate:

1. Select 'MUSR' in the instrument drop-down menu.
2. Type run number '24563' in the Loading section and press enter; note this can only be done if the correct reference material folder was selected in Getting Started.

This process is shown in Figure 16. NB the plot's appearance will vary based on the Time axis and Rebin data values as described later in this section and in Other Mantid Functions and Basic Data Manipulation.

Regardless of the data input method used, the 'Time Zero' ($\mu s$), 'First Good Data' ($\mu s$) and 'Last Good Data' ($\mu s$) values are automatically updated. These values have been determined by the instrument scientist during instrument calibration periods, and are stored in the header block of the raw .nxs data files, which are saved once a measurement is finished. Once a data file has been successfully read, a new plot window like the one shown in Fig. 2(b) will appear. NB: when browsing for files, multiple files such as 15190,15193 or a string like 15190-3 can be selected (the latter would load runs from 15190 to 15193). The selected files will each be loaded into a different workspace.

### Data Binning

Data can be re-binned via the Home tab by using the Rebin section. The options are None for no binning, Fixed to use a given value (entered in the Steps box to the right) or Variable, for binning with various steps. When entering values in the Steps box, do so as for parameters in the Rebin algorithm (a scripted equivalent is sketched at the end of this page). For example, to set the plot to a fixed bin-width of choice, follow the instructions below:

1. Load HIFI run number 00062798 (as described above).
2. In the Rebin section of the Home tab, use the drop-down menu and change its value from None to Fixed.
3. In the box adjacent to it, input a suitable value - 10 is suggested - and press enter. This will cause a new workspace, HIFI62798; Pair Asym; long; Rebin; MA, to appear in HIFI62798.
4. The effect of rebinning is best viewed on only a certain portion of the data; use the Figure options as described in the Overlaying and Styling Plots section of Other Mantid Functions and Basic Data Manipulation.
5. Go to the ADS and plot HIFI62798; Pair Asym; long; MA.
6. Navigate to HIFI62798; Pair Asym; long; Rebin; MA, then right click it and select Plot > Overplot spectrum with errors. The rebinned data should appear over the unbinned dataset. If this does not happen, check the Loading Data section of Other Mantid Functions and Basic Data Manipulation and ensure the plotting has been carried out correctly.

An example of this process is shown in Figure 17 below.

A summary of each input field in the Home tab, and a description of its function(s), can be found in Muon Analysis under Home.
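For completeness, the same fixed-width rebin can be done from a script. A minimal sketch assuming a standard Mantid Python environment (the file name below is hypothetical and depends on your configured data search directories):

from mantid.simpleapi import Load, Rebin

# Load run HIFI00062798; Mantid resolves the file through the managed
# user directories, so the exact name here is an assumption.
raw = Load("HIFI00062798.nxs")

# A single Params value is interpreted by the Rebin algorithm as a fixed
# bin width (here 10), matching the GUI's "Fixed" option with Steps = 10.
rebinned = Rebin(InputWorkspace=raw, Params="10")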
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3852306604385376, "perplexity": 3572.9410718239087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710890.97/warc/CC-MAIN-20221202014312-20221202044312-00619.warc.gz"}
http://math.stackexchange.com/users/14082/jayesh-badwaik?tab=activity&sort=all&page=4
Reputation 1,435 Next privilege 2,000 Rep. Feb2 reviewed Satisfactory Planar and non-planar graphs, and Kuratowski's Theorem Feb2 reviewed Needs Improvement A question on complex numbers Feb2 reviewed Satisfactory Empirical distribution vs. the true one: How fast $KL( \hat{P}_n || Q)$ converges to $KL( P || Q)$? Feb2 reviewed Excellent An entire function with two periods Feb2 reviewed Excellent Uniformization Theorem for compact surface Feb2 reviewed Excellent Compact linear operator from $L^p (\mathbb R)$ to $L^p (\mathbb R)$ Feb2 reviewed Excellent Limit Computation of $(e^x+x)^{1/x}$ as $x$ approaches zero Feb2 reviewed Satisfactory Find the coordinates of a point on a circle Feb2 reviewed Excellent Logic about systems? Feb1 revised Limit of $s_n = \int\limits_0^1 \frac{nx^{n-1}}{1+x} dx$ as $n \to \infty$ Added the link to your result so that people can understand the result directly instead of hunting for it in the comments. :-) Feb1 suggested approved edit on Limit of $s_n = \int\limits_0^1 \frac{nx^{n-1}}{1+x} dx$ as $n \to \infty$ Feb1 comment Limit of $s_n = \int\limits_0^1 \frac{nx^{n-1}}{1+x} dx$ as $n \to \infty$ Yes. I am Jayesh. Name changed for a month. Feb1 accepted Limit of $s_n = \int\limits_0^1 \frac{nx^{n-1}}{1+x} dx$ as $n \to \infty$ Feb1 comment Limit of $s_n = \int\limits_0^1 \frac{nx^{n-1}}{1+x} dx$ as $n \to \infty$ Nice solution Chris'ssister! :-) Your question and the corresponding answers provide a decent tool. :-) Feb1 awarded Self-Learner Feb1 revised Limit of $s_n = \int\limits_0^1 \frac{nx^{n-1}}{1+x} dx$ as $n \to \infty$ added 44 characters in body Feb1 revised Limit of $s_n = \int\limits_0^1 \frac{nx^{n-1}}{1+x} dx$ as $n \to \infty$ deleted 201 characters in body Feb1 comment Limit of $s_n = \int\limits_0^1 \frac{nx^{n-1}}{1+x} dx$ as $n \to \infty$ Thanks. A different solution. :-) Feb1 revised Limit of $s_n = \int\limits_0^1 \frac{nx^{n-1}}{1+x} dx$ as $n \to \infty$ added 155 characters in body Feb1 comment Limit of $s_n = \int\limits_0^1 \frac{nx^{n-1}}{1+x} dx$ as $n \to \infty$ Okay. No problems. :-) Generally, when people downvote, and the OP asks the reason, it is expected that the person who downvoted leave a comment about it. And hence, my assumption.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7927444577217102, "perplexity": 2196.9634799386918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928923.21/warc/CC-MAIN-20150521113208-00047-ip-10-180-206-219.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/2397/lists-in-tabular-environment?answertab=active
# Lists in Tabular Environment

So I'm writing up a CV and I would like to use the nifty itemize environment to list some things within a tabular environment. Unfortunately, things end up looking a bit like this, which isn't at all what I want. Specifically, I want the itemize environment to hug closely to "BIG COMPANY NAME" so that it appears as "Software Development Intern" does, and likewise at the bottom. My current code looks a bit like so:

\textsc{May 2010 to Aug 2010} & Software Development Intern \\
& \textsc{BIG COMPANY NAME} \\
& \begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\setlength{\partopsep}{0pt}
\setlength{\topsep}{0pt}
\item item1
\item item2
\end{itemize} \\
& \small{Cool Details}\\

But it's not doing the job at all. Any suggestions, LaTeX gurus?

-

I was also looking for a way to get rid of most of the extra space before and after (!) a list of \begin{list} ... \end{list} in a tabular environment. I now found a somewhat very easy way to solve the latter problem:

\begin{tabular}{ll}
\multicolumn{2}{l}{Title} \\
& \vspace{-0.3cm}\begin{list}{-} % \vspace to align the '-' with the title line
{\setlength{\topsep}{1pt}\setlength{\partopsep}{0pt}\setlength{\itemsep}{1pt}\setlength{\parsep}{0pt}\leftmargin=10pt} % I wanted small extra-spaces in between the items (\itemsep) and just on top (\topsep) for better readability
\item item1
\item item2
\end{list} \\[-0.4cm] % This is what really helps me to have a coherent table without extra-spacings!
& \small{Cool Details}\\
\end{tabular}

So, [-0.xcm] (adjust x to what suits you best) squeezes the extra-space after the list generated by the list environment. I hope this also helps other people.

-

Welcome to TeX.SX. If you highlight text and click the button marked {} it will be marked as code, as you see from my edit. – Torbjørn T. Dec 17 '13 at 21:34

Including an itemized list within a tabular column using the paralist package is a good solution to the vertical space issue at the top. However, the space at the bottom is not solved by this, which, I guess, is why @Ulrike Fischer uses the parbox also. Note that paralist doesn't solve the problem, in that the space is added to the top and bottom when in a tabular environment. So this is the solution I eventually went with:

\usepackage{array}
\makeatletter
\newcolumntype{P}[1]{>{\@minipagetrue}p{#1}}
\makeatother

This gets rid of the initial vertical space (of course you have to change the tabular argument from p to P). Then include a negative vspace after the final item:

\begin{tabular}{r|P{13cm}}
& \begin{compactitem}
\item blah
\item final item\vspace*{-\baselineskip}
\end{compactitem}
\end{tabular}

It's a bit manual, but it does at least work relatively easily.

-

You can use the package paralist which defines, among others, the compactitem environment (which is a compact itemize). It also redefines itemize that way, but there are options to leave it, like olditem.

\usepackage[olditem,oldenum]{paralist}

and use

\begin{compactitem} ... \end{compactitem}

inside tables.

-

You can use \novspace to get rid of the space at the top, nolistsep from enumitem for the spaces in the list, the internal \parbox for the space at the bottom and the \strut to give the \parbox the correct depth.
\documentclass[]{book}
\usepackage{enumitem}
\makeatletter
\newcommand\novspace{\@minipagetrue}
\makeatother
\begin{document}
\begin{tabular}{lp{5cm}}
\textsc{May 2010 to Aug 2010} & Software Development Intern \\
& \textsc{BIG COMPANY NAME} \\
& \parbox[t]{5cm}{\novspace
\begin{itemize}[nolistsep]
\item item1
\item item2\strut
\end{itemize}}\\
& \small Cool Details
\end{tabular}
\end{document}

-

I have several suggestions. I would suggest using one of the numerous cv/resumé packages. My own cv uses currvita. The next suggestion would be to use the enumitem package for changing the spacing of your lists. Finally, you don't have a table of data, so tabular is probably the wrong thing to use.

-

What would you recommend in its place? I suppose what I'm really looking for is a list of items with a left and right side -- perhaps a list of 2-column 1-row tables? – duckworthd Aug 27 '10 at 11:34

I would recommend the currvita package. Or you can take a look at this question which is about writing a CV in LaTeX. – TH. Aug 27 '10 at 12:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.869579553604126, "perplexity": 2374.9676940356553}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921872.11/warc/CC-MAIN-20140909031024-00063-ip-10-180-136-8.ec2.internal.warc.gz"}
https://icanthackit.wordpress.com/2015/02/28/sans-pen-test-2015-challenge/
# SANS Pen Test 2015 Challenge

Here is my writeup for the SANS Pen Test 2015 Challenge (https://pen-testing.sans.org/challenge2015).

Challenge 1

Alice has sent Bob an encrypted file. Find it, decrypt it, and find the secret inside. Look in the alice.pcap file to answer this question. Hint: Alice is often quite chatty with Bob, and phrases she references could be useful to use as passwords (or passphrases). You won't need to use wordlists, mutation, or brute-force of any kind to decrypt the encrypted file.

Whenever I come across a pcap, and I don't know exactly what I'm looking for, I like to open it with three different tools almost immediately - Wireshark, NetworkMiner, and Cain & Abel. For this challenge, most of my work was done in Wireshark, but it's worth mentioning the other two tools because they have their own unique uses (and we will use Cain a little later on to get our hands on some NTLMv2 Hashes).

Start by opening alice.pcap in Wireshark. We know that Alice and Bob like to chat, and we can see at Frame 125 that the machine with IP Address 172.16.14.138 sent a DNS query for webchat.freenode.net. After that, we can see communication over HTTP that includes the IRC chat session. After a little inspection, I built this Wireshark filter to show me the most interesting parts of that HTTP communication:

http and !(frame.len == 568) and !(frame.len == 212) and !(frame.len == 221) and !(frame.len == 567)

Removing the frames with lengths 568, 212, 221, and 567 results in most of the irrelevant protocol overhead being stripped from view.

In frame 133 we see the IRC nickname "AL1C3" sent to the IRC server, so we assume that Alice's computer is 172.16.14.138. AL1C3 joins the #shmoocon channel and proceeds to have a series of Private Messages with "I_am_Bob". If you parse out the conversation from the HTTP data, this is what you find:

PRIVMSG #shmoocon :I_am_Bob: Hi there, Bob! You heading off to Shmoo next weekend?
[["c","PRIVMSG","I_am_Bob!475c7352@gateway/web/freenode/ip.71.92.115.82",["#shmoocon","AL1C3: Oh, wow, is that coming up already? I haven't even looked at the schedule yet."]]]
PRIVMSG #shmoocon :I_am_Bob: That's a shame! There's lots of excitement going on. Talks, events, labs… I even hear there's some kind of challenge involving placeholder names used in crypto.
PRIVMSG #shmoocon :I_am_Bob: Oh, and my favorite, there's a game going on that blends game hacking, first-person shooting, and role-playing mechanics!
[["c","PRIVMSG","I_am_Bob!475c7352@gateway/web/freenode/ip.71.92.115.82",["#shmoocon","AL1C3: That does sound fun! I'll definitely be there. What's the name of that event, by the way?"]]]
PRIVMSG #shmoocon :I_am_Bob: You'll have to check the website yourself ;)
PRIVMSG #shmoocon :I_am_Bob: By the way, I'll send you my latest message via SMB and an encrypted zip file, per our normal protocol. Silly eavesdroppers…
PRIVMSG #shmoocon :I_am_Bob: See you soon!
[["c","PRIVMSG","I_am_Bob!475c7352@gateway/web/freenode/ip.71.92.115.82",["#shmoocon","AL1C3: Got it, thanks!"]]]
PRIVMSG #shmoocon :I_am_Bob: My pleasure

Now we know that a file has been transferred using SMB. In Wireshark, click File → Export Objects → SMB, and we see the "another_message.7z" that Alice referenced in her IRC message to Bob. We also see some other very suspicious files, "not_exactly_inconspicious.exe" and "WSGvXjhn.exe", being transferred to Bob's PC, so we should probably save those for further analysis later.

Now that we have Alice's encrypted zip file, we need to open it.
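Since the passphrase hunt that follows involves trying chat phrases one by one, it helps to script the attempts. A small sketch that shells out to the 7-Zip CLI (assumed installed; "7z t" tests an archive and exits non-zero when the password is wrong):

import subprocess

def try_passphrases(archive, candidates):
    """Return the first candidate passphrase that opens the archive, or None."""
    for pw in candidates:
        result = subprocess.run(
            ["7z", "t", "-p" + pw, archive],  # test-only, no extraction
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode == 0:
            return pw
    return None

# Candidate phrases lifted from the IRC log and the Shmoocon site
phrases = ["placeholder names used in crypto", "Ghost in the Shellcode"]
print(try_passphrases("another_message.7z", phrases))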
The hint said that phrases she references might be useful as passwords or passphrases. At this point, I began trying words and phrases copied directly from Alice's chat session. Eventually, after many failed attempts, I went to www.shmoocon.org to find the game that Alice referenced as being her favorite. That event was called "Ghost in the Shellcode". When that is used as a passphrase, it will decrypt the zip.

The secret is: Build It, Belay It, and Bring It On

Challenge 2

Carol has used Firefox for Android to search for, browse, and save a particular image. A compressed copy of her /data/data/org.mozilla.firefox folder is in the question_assets folder, named "org.mozilla.firefox.tgz". Find the serial number of the lens used to take the downloaded picture, which is the secret for this question. Hint: You may have to use resources outside the org.mozilla.firefox folder to fully answer this question.

7zip can open the "org.mozilla.firefox.tgz" file, as well as the "org.mozilla.firefox.tar" that is found inside. Once we have the uncompressed "org.mozilla.firefox" directory, we need to look for the downloads.sqlite file to find the file she downloaded. That file is located in \files\mozilla\9tnld04f.default, and can be opened with the free version of SQLite Manager. When you open downloads.sqlite with SQLite Manager, and view the moz_downloads table, you can see that Carol (a fan of Star Wars, and Han Solo in particular) downloaded a photo of Harrison Ford at the 2013 Comic Con from the CBS San Francisco WordPress site.

The challenge asks for the Serial Number of the Lens used to take the picture. That information can be gathered from the exif data stored inside the 173974131.jpg file. Download a copy of the file, and run the command below in a Terminal to display the exif data. The 64th line of the output is the Lens Serial Number.

exiftool /root/173974131.jpg

ExifTool Version Number         : 8.60 File Name                       : 173974131.jpg Directory                       : /root File Size                       : 362 kB File Modification Date/Time     : 2015:02:11 20:11:38-05:00 File Permissions                : rw-r--r-- File Type                       : JPEG MIME Type                       : image/jpeg JFIF Version                    : 1.01 Exif Byte Order                 : Little-endian (Intel, II) Photometric Interpretation      : RGB Image Description               : SAN DIEGO, CA – JULY 18:  Actor Harrison Ford onstage at the "Ender's Game" press conference during Comic-Con International 2013 at San Diego Convention Center on July 18, 2013 in San Diego, California.
(Photo by Joe Scarnici/Getty Images for Summit Entertainment) Make                            : Canon Camera Model Name               : Canon EOS-1D X Orientation                     : Horizontal (normal) Samples Per Pixel               : 3 X Resolution                    : 200 Y Resolution                    : 200 Resolution Unit                 : inches Software                        : Adobe Photoshop CS5 Macintosh Modify Date                     : 2013:07:19 08:42:00 Artist                          : Joe Scarnici Y Cb Cr Positioning             : Co-sited Exposure Time                   : 1/160 F Number                        : 2.8 Exposure Program                : Manual ISO                             : 3200 Sensitivity Type                : Recommended Exposure Index Recommended Exposure Index      : 3200 Exif Version                    : 0230 Date/Time Original              : 2012:01:25 04:21:28 Create Date                     : 2012:01:25 04:21:28 Components Configuration        : Y, Cb, Cr, – Shutter Speed Value             : 1/166 Aperture Value                  : 2.8 Exposure Compensation           : 0 Max Aperture Value              : 2.8 Subject Distance                : 7.06 m Metering Mode                   : Multi-segment Flash                           : Off, Did not fire Focal Length                    : 102.0 mm User Comment                    : Sub Sec Time                    : 56 Sub Sec Time Original           : 56 Sub Sec Time Digitized          : 56 Flashpix Version                : 0100 Color Space                     : sRGB Exif Image Width                : 1000 Exif Image Height               : 758 Interoperability Index          : R98 – DCF basic file (sRGB) Interoperability Version        : 0100 Focal Plane X Resolution        : 3545.827633 Focal Plane Y Resolution        : 3526.530612 Focal Plane Resolution Unit     : inches Custom Rendered                 : Normal Exposure Mode                   : Manual White Balance                   : Auto Scene Capture Type              : Standard Owner Name                      : Serial Number                   : 088015001238 Lens Info                       : 70-200mm f/0 Lens Model                      : EF70-200mm f/2.8L IS II USM Lens Serial Number              : 0000c15998 GPS Version ID                  : 2.3.0.0 Compression                     : JPEG (old-style) Thumbnail Offset                : 1752 Thumbnail Length                : 5243 Current IPTC Digest             : 4070c4df48c719664a9df0314ac3ea16 Coded Character Set             : UTF8 Application Record Version      : 4 Caption-Abstract                : SAN DIEGO, CA – JULY 18:  Actor Harrison Ford onstage at the “Ender’s Game” press conference during Comic-Con International 2013 at San Diego Convention Center on July 18, 2013 in San Diego, California.  
(Photo by Joe Scarnici/Getty Images for Summit Entertainment) Writer-Editor                   : hg Headline                        : “Ender’s Game” Press Conference By-line                         : Joe Scarnici By-line Title                   : Stringer Credit                          : Getty Images for Summit Entertai Source                          : Getty Images North America Object Name                     : 174014009HG00008_Ender_s_Ga Date Created                    : 2013:07:18 Time Created                    : 00:00:00+00:00 City                            : San Diego Sub-location                    : San Diego Convention Center Province-State                  : CA Country-Primary Location Name   : United States Country-Primary Location Code   : USA Original Transmission Reference : 174014009 Category                        : E Supplemental Categories         : ACE, CEL, ENT Urgency                         : 2 Keywords                        : Celebrities Copyright Notice                : 2013 Getty Images IPTC Digest                     : 4070c4df48c719664a9df0314ac3ea16 Displayed Units X               : inches Displayed Units Y               : inches Global Angle                    : 30 Global Altitude                 : 30 Photoshop Thumbnail             : (Binary data 5243 bytes, use -b option to extract) Photoshop Quality               : 12 Photoshop Format                : Standard Progressive Scans               : 3 Scans Profile CMM Type                : Lino Profile Version                 : 2.1.0 Profile Class                   : Display Device Profile Color Space Data                : RGB Profile Connection Space        : XYZ Profile Date Time               : 1998:02:09 06:49:00 Profile File Signature          : acsp Primary Platform                : Microsoft Corporation CMM Flags                       : Not Embedded, Independent Device Manufacturer             : IEC Device Model                    : sRGB Device Attributes               : Reflective, Glossy, Positive, Color Rendering Intent                : Media-Relative Colorimetric Connection Space Illuminant     : 0.9642 1 0.82491 Profile Creator                 : HP Profile ID                      : 0 Profile Description             : sRGB IEC61966-2.1 Media White Point               : 0.95045 1 1.08905 Media Black Point               : 0 0 0 Red Matrix Column               : 0.43607 0.22249 0.01392 Green Matrix Column             : 0.38515 0.71687 0.09708 Blue Matrix Column              : 0.14307 0.06061 0.7141 Device Mfg Desc                 : IEC http://www.iec.ch Device Model Desc               : IEC 61966-2.1 Default RGB colour space – sRGB Viewing Cond Desc               : Reference Viewing Condition in IEC61966-2.1 Viewing Cond Illuminant         : 19.6445 20.3718 16.8089 Viewing Cond Surround           : 3.92889 4.07439 3.36179 Viewing Cond Illuminant Type    : D50 Luminance                       : 76.03647 80 87.12462 Measurement Observer            : CIE 1931 Measurement Backing             : 0 0 0 Measurement Geometry            : Unknown (0) Measurement Flare               : 0.999% Measurement Illuminant          : D65 Technology                      : Cathode Ray Tube Display Red Tone Reproduction Curve     : (Binary data 2060 bytes, use -b option to extract) Green Tone Reproduction Curve   : (Binary data 2060 bytes, use -b option to extract) Blue Tone Reproduction Curve    : (Binary data 2060 bytes, use -b option to extract) Image Width                     : 1000 Image Height                    : 758 
Encoding Process                : Baseline DCT, Huffman coding Bits Per Sample                 : 8 Color Components                : 3 Y Cb Cr Sub Sampling            : YCbCr4:4:4 (1 1) Aperture                        : 2.8 Date/Time Created               : 2013:07:18 00:00:00+00:00 Image Size                      : 1000×758 Scale Factor To 35 mm Equivalent: 4.8 Shutter Speed                   : 1/160 Create Date                     : 2012:01:25 04:21:28.56 Date/Time Original              : 2012:01:25 04:21:28.56 Modify Date                     : 2013:07:19 08:42:00.56 Thumbnail Image                 : (Binary data 5243 bytes, use -b option to extract) Circle Of Confusion             : 0.006 mm Depth Of Field                  : 0.17 m (6.98 – 7.14) Field Of View                   : 4.2 deg Focal Length                    : 102.0 mm (35 mm equivalent: 490.0 mm) Hyperfocal Distance             : 594.07 m Light Value                     : 5.3

The Lens Serial Number is 0000c15998

Challenge 3

Dave messed up and deleted his only copy of an MP3 file. He'd really appreciate it if you could retrieve it for him - look inside svn_2015.dump.gz to get started. Once you've recovered the audio file, look at it carefully to find the secret.

This file is a dump of an Apache Subversion repository. One way to recover data from this file is to create a new Subversion repository and load this dump into it. Since I don't really need the full repo, I'm going to just carve it up with a text editor. For example, if we open it in Notepad++ and scroll down to line 212, we can see that Revision 2 included an audio file named shmooster.mp3. Just delete everything from the start of the file through line 243 (the "PROPS-END" line), and everything from line 7326 (just before the "Revision-Number 3" line) to the end of the file, and save the result as shmooster.mp3. After you create the file, you can confirm its content by running a SHA1 or MD5 hash against it and comparing it to the results on lines 235 or 236 in the above screenshot.

When you listen to the mp3, it says:

Which of the following would you most prefer?
• A – A puppy
• B – A pretty flower from your sweetie or
• C – A large properly formatted data file
... You have failed this Reverse Turing test. Now suffer the consequences.

The next few paragraphs on MP3Stego don't actually help solve the challenge – it was a dead end, but a learning experience!

The challenge said to look at the MP3 file carefully to find the secret. There were no ID3 tags included in the file, and no exif data of any use. Text files can be hidden in MP3s using the MP3Stego program, and the audio portion of the file may be a hint to the password. When the password "c" is used, a text file is successfully extracted. Using MP3Stego we need to execute:

Decode.exe -X -P c \path\to\shmooster.mp3

The result is:

Input file = 'C:\path\to\shmooster.mp3'  output file = 'mp3'
Will attempt to extract hidden information. Output: C:\path\to\shmooster.mp3.txt
the bit stream file C:\path\to\shmooster.mp3 is a BINARY file
HDR: s=FFF, id=1, l=3, ep=off, br=E, sf=1, pd=0, pr=0, m=3, js=0, c=0, o=1, e=0
alg.=MPEG-1, layer=III, tot bitrate=320, sfrq=48.0
mode=single-ch, sblim=32, jsbd=32, ch=1
Frame cannot be located
Input stream may be empty
Avg slots/frame = 960.002; b/smp = 6.67; br = 320.001 kbps
Decoding of "C:\path\to\shmooster.mp3" is finished
The decoded PCM output file name is "mp3"

The shmooster.mp3.txt file that is extracted contains the string of ASCII characters shown in the picture below.
I could not get that string to work, in combination with the other passwords, to open the open_this_to_win.zip file. I tried almost countless manipulations by converting to Hex, Binary, Base64 encode/decode, URL encoding, etc., and could not get anything to work. Is it an odd coincidence that text is successfully extracted using the password "c" with MP3Stego, or did Dave intentionally embed bad information to keep his adversaries occupied with a red herring? I talked with the challenge author about this, and it turns out that this successful text extraction was a False Positive from the MP3Stego decode program. I attempted several other passwords before trying "c", and all of them resulted in an error and no txt file extracted.

The real solution to Challenge 3 is to open the mp3 in Audacity and use the Spectrogram view to reveal a hidden QR code. The settings that I used were: Window Size: 512, Window Type: Hanning, Min Freq -, Max Freq 20000, Gain 80, Range 10, Freq Gain 1, and Grayscale colors. Below is a screenshot:

When you scan that QR code, the text "3e9cd9ea80d80606" is displayed.

The Secret in Challenge 3 is 3e9cd9ea80d80606

Challenge 4

Eve suspects that one of Alice, Bob, or Carol might not be as innocent as they seem. She'll need your help to prove it, however. Examine the other three questions and their included files. Which user, based off their malicious behavior, might be a Cylon? Once you know who it is, find that user's password, which is the secret for this question.

Based on the additional files that Alice dropped on Bob's PC, it's fairly obvious that Alice isn't very innocent. At frame 1016 of the pcap, we can see that Alice started flooding Bob's PC with TCP Resets. We can also see in Frame 712's DHCP request and the various SMB NTLMSSP_NEGOTIATE and NTLMSSP_AUTH frames (i.e. Frames 801, 803, 3336, 3338, etc.) that Alice's Host Name is "KALI", which is a well-known and powerful Linux security distro.

If we open alice.pcap in Cain & Abel, and go to the Sniffer → Passwords tab, we can see that Cain successfully extracted a bunch of hashes of Alice's password from the pcap. Unfortunately, they are NTLMv2 hashes, and cracking them (even using a very efficient tool like oclHashcat with powerful GPUs) is not likely to happen in a timely manner. Out of curiosity, I did upload the hashes to an Amazon Web Services G2.2xlarge instance to see if they could be brute forced, but didn't have any luck. The maximum length I ran was 6 characters (which takes about 4 hours). Beyond that, 7 characters takes a few days and 8 characters takes years. Had Alice's password been 6 characters or less, I could have recovered it with oclHashcat. Below are the steps you would take to get oclHashcat running on an Amazon Web Services GPU instance, and crack with oclHashcat:

First, you need to get an AWS account if you don't already have one, and launch a GPU instance (as of Feb 2015, it's called a G2.2xlarge, and the OS it runs is Amazon Linux AMI). As of now, it costs about $0.60 per hour to run. Follow Amazon's steps for authenticating to the console using SSH and a private key file (either PEM, or PPK if you're using PuTTY). To get oclHashcat (actually, cudaHashcat since we're using nVidia GPUs) running, I needed to remove the nVidia driver that's pre-installed, and install a driver directly from nVidia. If you don't have a proper driver, you will receive cuModuleLoad() 209 errors when you try to execute the program.
Run these commands:

First, download 7zip and cudaHashcat:

wget rpmfind.net/linux/epel/6/x86_64/p7zip-9.20.1-2.el6.x86_64.rpm
wget http://hashcat.net/files/cudaHashcat-1.32.7z

Install 7zip:

sudo rpm -ivh p7zip-9.20.1-2.el6.x86_64.rpm

Extract the cudaHashcat compressed 7z file:

7za x cudaHashcat-1.32.7z

Delete the driver:

sudo yum erase nvidia cudatoolkit

Download the driver from nVidia and run it:

wget http://us.download.nvidia.com/XFree86/Linux-x86_64/346.35/NVIDIA-Linux-x86_64-346.35-no-compat32.run
sudo /home/ec2-user/NVIDIA-Linux-x86_64-346.35-no-compat32.run

To extract the NTLMv2 hashes from Cain and put them in the correct format for oclHashcat, you can take the NTLMv2.LST file from Cain's installation directory and run this AWK command against it:

awk -v OFS=":" -F "\t" '{print($1,"",$2,$5,$4,$6)}' NTLMv2.LST > ntlmv2.hashes

You can also do this manually, but running that command makes it easy (especially when dealing with many hashes). Here is an example of the proper format for the 3 hashes captured from alice.pcap:

alice::WORKGROUP:2DE54124CD6AE7E9:9B4C4FAAE73BB434B91927D39059EBC6:010100000000000000D36B480C2CD001C0D15A626252D179000000000200140049005200520045004C004500560041004E0054000100140049005200520045004C004500560041004E0054000400140069007200720065006C006500760061006E0074000300140069007200720065006C006500760061006E00740000000000
alice::WORKGROUP:4AC21F85875D6B97:D4476C98704BDBA641BCC821F8989F31:01010000000000008014958A0C2CD0015528F305A7A5DC54000000000200140049005200520045004C004500560041004E0054000100140049005200520045004C004500560041004E0054000400140069007200720065006C006500760061006E0074000300140069007200720065006C006500760061006E00740000000000
Alice::WORKGROUP:5EFAEAF4F04A6097:3CF0C36497864E78CE870196EE82BB60:010100000000000080986CA20C2CD00159FE4022598A6747000000000200140049005200520045004C004500560041004E0054000100140049005200520045004C004500560041004E0054000400140069007200720065006C006500760061006E0074000300140069007200720065006C006500760061006E0074000700080080986CA20C2CD0010900240063006900660073002F003100370032002E00310036002E00310034002E003100340036000000000000000000

Upload the ntlmv2.hashes file to your Amazon GPU instance. I like to use WinSCP for this.

To brute force the NTLMv2 hashes with oclHashcat (implemented as a Mask Attack), using either a lowercase alpha, uppercase alpha, number, or special character in each position, you would run each of these commands (the first command for a 1-character password length, the second for a 2-character password length, etc.), and wait for the results:

sudo ./cudaHashcat64.bin -m5600 -a 3 ntlmv2.hashes ?a
sudo ./cudaHashcat64.bin -m5600 -a 3 ntlmv2.hashes ?a?a
sudo ./cudaHashcat64.bin -m5600 -a 3 ntlmv2.hashes ?a?a?a
sudo ./cudaHashcat64.bin -m5600 -a 3 ntlmv2.hashes ?a?a?a?a
sudo ./cudaHashcat64.bin -m5600 -a 3 ntlmv2.hashes ?a?a?a?a?a
sudo ./cudaHashcat64.bin -m5600 -a 3 ntlmv2.hashes ?a?a?a?a?a?a

oclHashcat can also perform dictionary attacks. Since the note from Challenge 1 mentioned that Alice mentions her passwords when she chats with Bob, I built a quick dictionary from their IRC conversations. That also didn't result in a cracked hash, but a dictionary file based on good reconnaissance or social engineering is always worth a try.

Ultimately, finding Alice's password was accomplished by looking through the pcap file after she compromises Bob's PC. In Frame 3999, we can see a connection from Bob's PC back to Alice's PC over TCP Port 4444.
Alice is running the "not_exactly_inconspicious.exe" application, which turns out to be Windows Credentials Editor. It reveals that Alice's password is "iamnumbersix". Bob's password is "Carol_is_my_favorite", and Alice isn't very happy about that.

If we take Alice's password that we just recovered, iamnumbersix, and add it to a dictionary file, we can run it through oclHashcat and crack the NTLMv2 hashes with it to confirm it is valid.

[ec2-user@ip-172-31-43-9 cudaHashcat-1.32]$ sudo ./cudaHashcat64.bin -m 5600 -a 3 ntlmv2.hashes /home/ec2-user/password.txt
cudaHashcat v1.32 starting...

Device #1: GRID K520, 4095MB, 797Mhz, 8MCU
Hashes: 3 hashes; 3 unique digests, 3 unique salts
Bitmaps: 8 bits, 256 entries, 0x000000ff mask, 1024 bytes
Applicable Optimizers:
* Zero-Byte
* Not-Iterated
* Brute-Force
Watchdog: Temperature abort trigger set to 90c
Watchdog: Temperature retain trigger set to 80c
Device #1: Kernel ./kernels/4318/m05600_a3.sm_30.64.ptx
Device #1: Kernel ./kernels/4318/markov_le_v1.64.ptx

ALICE::WORKGROUP:4ac21f85875d6b97:d4476c98704bdba641bcc821f8989f31:01010000000000008014958a0c2cd0015528f305a7a5dc54000000000200140049005200520045004c004500560041004e0054000100140049005200520045004c004500560041004e0054000400140069007200720065006c006500760061006e0074000300140069007200720065006c006500760061006e00740000000000:iamnumbersix
ALICE::WORKGROUP:2de54124cd6ae7e9:9b4c4faae73bb434b91927d39059ebc6:010100000000000000d36b480c2cd001c0d15a626252d179000000000200140049005200520045004c004500560041004e0054000100140049005200520045004c004500560041004e0054000400140069007200720065006c006500760061006e0074000300140069007200720065006c006500760061006e00740000000000:iamnumbersix
ALICE::WORKGROUP:5efaeaf4f04a6097:3cf0c36497864e78ce870196ee82bb60:010100000000000080986ca20c2cd00159fe4022598a6747000000000200140049005200520045004c004500560041004e0054000100140049005200520045004c004500560041004e0054000400140069007200720065006c006500760061006e0074000300140069007200720065006c006500760061006e0074000700080080986ca20c2cd0010900240063006900660073002f003100370032002e00310036002e00310034002e003100340036000000000000000000:iamnumbersix

Session.Name...: cudaHashcat
Status.........: Cracked
Hash.Target....: File (ntlmv2.hashes)
Hash.Type......: NetNTLMv2
Time.Started...: 0 secs
Speed.GPU.#1...: 0 H/s
Recovered......: 3/3 (100.00%) Digests, 3/3 (100.00%) Salts
Progress.......: 3/3 (100.00%)
Skipped........: 0/3 (0.00%)
Rejected.......: 0/3 (0.00%)
HWMon.GPU.#1...: 0% Util, 35c Temp, -1% Fan

Started: Tue Feb 10 20:12:01 2015
Stopped: Tue Feb 10 20:12:03 2015
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16418729722499847, "perplexity": 23628.85805509144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545326.51/warc/CC-MAIN-20220522094818-20220522124818-00297.warc.gz"}
https://web2.0calc.com/questions/help_44180
# help

how do you type a fraction thing

Oct 26, 2020

#1

Look under the LaTeX heading at the first option.... type in your numbers between the brackets. Example of result: $$\frac{3}{4}$$ Most people just use the forward slash on the keyboard: 3/4

Oct 26, 2020

#2

Like this: $$\frac{1}{10}$$

Oct 26, 2020
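For reference, the markup behind those answers (the \dfrac variant is my addition and needs the amsmath package):

$\frac{3}{4}$    % inline fraction, rendered small
$\dfrac{3}{4}$   % display-style size inline (requires \usepackage{amsmath})
\[ \frac{1}{10} \]  % displayed equation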
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9779975414276123, "perplexity": 14385.891807830963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362481.49/warc/CC-MAIN-20210301090526-20210301120526-00068.warc.gz"}
http://mathhelpforum.com/algebra/93410-demonstration-deduction.html
# Math Help - Demonstration and deduction

1. ## Demonstration and deduction

a, b, c are real numbers. Prove this equality: [equation image missing]

Deduce all solutions of this equation in R: [equation image missing]

2. Originally Posted by dhiab
a, b, c are real numbers. Prove this equality: [equation image missing]
This is wrong. If you take a=b=c you get 3a^3 on the left side and 0 on the right side. I think the correct equality is $a^3 + b^3 + c^3 - 3abc = \frac12 \: (a+b+c) \: \left[(b-c)^2+(c-a)^2+(a-b)^2\right]$

3. Originally Posted by dhiab
a, b, c are real numbers. Prove this equality: [equation image missing]
Looks to me like the best thing to do is just go ahead and multiply out the right side:
$(b-c)^2= b^2- 2bc+ c^2$
$(c-a)^2= c^2- 2ac+ a^2$
$(a-b)^2= a^2- 2ab+ b^2$
so
$(b-c)^2+ (c-a)^2+ (a-b)^2= 2a^2+ 2b^2+ 2c^2- 2(ab+ ac+ bc)$
Now multiply that by a+b+c:
$a(2a^2+ 2b^2+ 2c^2- 2(ab+ac+bc))= 2a^3+ 2ab^2+ 2ac^2- 2(a^2b+ a^2c+ abc)$
$b(2a^2+ 2b^2+ 2c^2- 2(ab+ac+bc))= 2a^2b+ 2b^3+ 2bc^2- 2(ab^2+ abc+ b^2c)$
$c(2a^2+ 2b^2+ 2c^2- 2(ab+ac+bc))= 2a^2c+ 2b^2c+ 2c^3- 2(abc+ ac^2+ bc^2)$
Now note that the "$2ab^2$" term in the first equation is canceled by the "$-2ab^2$" term in the second equation, etc.

Originally Posted by dhiab
Deduce all solutions of this equation in R: [equation image missing]

4. Originally Posted by running-gag
This is wrong. If you take a=b=c you get 3a^3 on the left side and 0 on the right side. I think the correct equality is $a^3 + b^3 + c^3 - 3abc = \frac12 \: (a+b+c) \: \left[(b-c)^2+(c-a)^2+(a-b)^2\right]$
HELLO: The equality is correct. LOOK at this resolution: [images of the working missing]
Deduce: [image missing]
Conclusion: [image missing]

5. Originally Posted by dhiab
HELLO: The equality is correct. LOOK at this resolution: [images of the working missing]
What has become of the 3 factors equal to -2abc?
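For completeness, summing HallsofIvy's three lines (my own bookkeeping, not part of the original thread):

$(a+b+c)\left[(b-c)^2+(c-a)^2+(a-b)^2\right] = 2a^3+2b^3+2c^3-6abc = 2\left(a^3+b^3+c^3-3abc\right)$

Every mixed term such as $2ab^2$ cancels against a matching $-2ab^2$, while each of the three products contributes one $-2abc$ (these are the three factors post 5 asks about). Dividing by 2 then yields running-gag's form of the identity, with the factor $\frac12$.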
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.973659098148346, "perplexity": 2371.355467094903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500833525.81/warc/CC-MAIN-20140820021353-00348-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.fredrikmeyer.net/2011/07/08/fearless-symmetry-av-avner-ash-og-robert-gross/
# «Fearless Symmetry» by Avner Ash and Robert Gross

At a book store in a shopping center by the coast of California I found this gem of a book. I skimmed through the contents list, and bought it without much more thinking. In retrospect, it is safe to say that it was worth the $23.95 plus Californian tax. As the title suggests, the book is much about symmetry – but it is also slightly misleading. The book is really about number theory and the theory that led to the solution of Fermat's Last Theorem. The book's main mission is to explore the absolute Galois group $G=\mathrm {Gal}(\mathbb Q^{alg}/\mathbb Q)$ through representations, that is, morphisms from $G$ to better-known groups, such as matrix groups and finite fields. As such, the book is more about representation theory than symmetry. But it doesn't stop there! A main theme in the book is how representation theory is behind generalized reciprocity laws in number theory and how reciprocity laws are used in advanced mathematics (an example of a reciprocity law is quadratic reciprocity, $(p/q)(q/p)=(-1)^{\frac{p-1}{2}\cdot\frac{q-1}{2}}$ for odd primes $p$ and $q$, where $(p/q)$ is the Legendre symbol. That is, knowing if $p$ is a square mod $q$ tells us if $q$ is a square mod $p$ and conversely). The book is written in a leisurely language and contains no difficult proofs and avoids technical definitions – without losing substance. Number theory is presented as a rich subject with lots of tools and abstractions. The presentation was very inspirational, and this next semester will be like Christmas for me.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6417253613471985, "perplexity": 529.3764272946545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543264.49/warc/CC-MAIN-20220522001016-20220522031016-00686.warc.gz"}
https://suncat.stanford.edu/publications/bayesian-framework-adsorption-energy-prediction-bimetallic-alloy-catalysts
A Bayesian framework for adsorption energy prediction on bimetallic alloy catalysts

Authors: Osman Mamun, Kirsten T. Winther, Jacob R. Boes, Thomas Bligaard
Year of publication: 2020
Journal: npj Computational Materials

For high-throughput screening of materials for heterogeneous catalysis, scaling relations provide an efficient scheme to estimate the chemisorption energies of hydrogenated species. However, conditioning on a single descriptor ignores the model uncertainty and leads to suboptimal prediction of the chemisorption energy. In this article, we extend the single-descriptor linear scaling relation to multi-descriptor linear regression models to leverage the correlation between the adsorption energies of any pair of adsorbates. With a large dataset, we use the Bayesian Information Criterion (BIC) as the model evidence to select the best linear regression model. Furthermore, Gaussian Process Regression (GPR) based on a meaningful convolution of physical properties of the metal-adsorbate complex can be used to predict the baseline residual of the selected model. This integrated Bayesian model selection and Gaussian process regression, dubbed residual learning, can achieve performance comparable to the standard DFT error (0.1 eV) for most adsorbate systems. For sparse and small datasets, we propose an ad hoc Bayesian Model Averaging (BMA) approach to make a robust prediction. With this Bayesian framework, we significantly reduce the model uncertainty and improve the prediction accuracy. The possibilities of the framework for high-throughput catalytic materials exploration in a realistic setting are illustrated using large and small sets of both dense and sparse simulated datasets generated from a public database of bimetallic alloys available in Catalysis-Hub.org.
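The residual-learning idea described in the abstract - fit a linear baseline, then fit a GPR to what the baseline misses - can be sketched in a few lines. The sketch below uses synthetic data and scikit-learn; the features and coefficients are illustrative stand-ins, not the paper's actual descriptors or pipeline.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # stand-ins for descriptors (e.g. other adsorption energies)
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + 0.3 * np.sin(3 * X[:, 2]) + 0.05 * rng.normal(size=200)

# Step 1: baseline multi-descriptor linear model (the "scaling relation" part)
baseline = LinearRegression().fit(X, y)
residual = y - baseline.predict(X)

# Step 2: GPR trained on the residual of the selected baseline model
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, residual)

# Combined prediction: linear baseline plus learned residual correction
y_hat = baseline.predict(X) + gpr.predict(X)
print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))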
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8427896499633789, "perplexity": 1577.6897985418443}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362619.23/warc/CC-MAIN-20211203091120-20211203121120-00630.warc.gz"}
https://docs.sqlalchemy.org/en/latest/orm/cascades.html
Release: 1.2.12 current release | Release Date: September 19, 2018

# SQLAlchemy 1.2 Documentation

Mappers support the concept of configurable cascade behavior on relationship() constructs. This refers to how operations performed on a "parent" object relative to a particular Session should be propagated to items referred to by that relationship (e.g. "child" objects), and is affected by the relationship.cascade option.

The default behavior of cascade is limited to cascades of the so-called save-update and merge settings. The typical "alternative" setting for cascade is to add the delete and delete-orphan options; these settings are appropriate for related objects which only exist as long as they are attached to their parent, and are otherwise deleted.

Cascade behavior is configured using the cascade option on relationship():

class Order(Base):
    __tablename__ = 'order'

    customer = relationship("User", cascade="save-update")

To set cascades on a backref, the same flag can be used with the backref() function, which ultimately feeds its arguments back into relationship():

class Item(Base):
    __tablename__ = 'item'

    order = relationship("Order",
                    backref=backref("items", cascade="save-update"))

The default value of cascade is save-update, merge. The typical alternative setting for this parameter is either all or more commonly all, delete-orphan. The all symbol is a synonym for save-update, merge, refresh-expire, expunge, delete, and using it in conjunction with delete-orphan indicates that the child object should follow along with its parent in all cases, and be deleted once it is no longer associated with that parent.

The list of available values which can be specified for the cascade parameter are described in the following subsections.

## save-update

save-update cascade indicates that when an object is placed into a Session via Session.add(), all the objects associated with it via this relationship() should also be added to that same Session. Suppose we have an object user1 with two related objects address1, address2:

>>> user1 = User()
>>> user1.addresses = [address1, address2]

If we add user1 to a Session, it will also add address1, address2 implicitly:

>>> sess = Session()
>>> sess.add(user1)
>>> address1 in sess
True

save-update cascade also affects attribute operations for objects that are already present in a Session. If we add a third object, address3, to the user1.addresses collection, it becomes part of the state of that Session:

>>> address3 = Address()
>>> user1.addresses.append(address3)
>>> address3 in sess
True

save-update has the possibly surprising behavior which is that persistent objects which were removed from a collection or in some cases a scalar attribute may also be pulled into the Session of a parent object; this is so that the flush process may handle that related object appropriately. This case can usually only arise if an object is removed from one Session and added to another:

>>> user1 = sess1.query(User).filter_by(id=1).first()
>>> address1 = user1.addresses[0]
>>> sess1.close()   # user1, address1 no longer associated with sess1
>>> user1.addresses.remove(address1)  # address1 no longer associated with user1
>>> sess2 = Session()
>>> sess2.add(user1)   # ... but it still gets added to the new session,
>>> address1 in sess2  # because it's still "pending" for flush
True

The save-update cascade is on by default, and is typically taken for granted; it simplifies code by allowing a single call to Session.add() to register an entire structure of objects within that Session at once. While it can be disabled, there is usually not a need to do so.

One case where save-update cascade does sometimes get in the way is in that it takes place in both directions for bi-directional relationships, e.g.
backrefs, meaning that the association of a child object with a particular parent can have the effect of the parent object being implicitly associated with that child object's Session; this pattern, as well as how to modify its behavior using the cascade_backrefs flag, is discussed in the section Controlling Cascade on Backrefs.

## delete

The delete cascade indicates that when a "parent" object is marked for deletion, its related "child" objects should also be marked for deletion. If, for example, we have a relationship User.addresses with delete cascade configured:

class User(Base):
    # ...

    addresses = relationship("Address", cascade="save-update, merge, delete")

If using the above mapping, we have a User object and two related Address objects:

>>> user1 = sess.query(User).filter_by(id=1).first()
>>> address1, address2 = user1.addresses

If we mark user1 for deletion, after the flush operation proceeds, address1 and address2 will also be deleted:

>>> sess.delete(user1)
>>> sess.commit()
DELETE FROM address WHERE address.id = ?
((1,), (2,))
DELETE FROM user WHERE user.id = ?
(1,)
COMMIT

Alternatively, if our User.addresses relationship does not have delete cascade, SQLAlchemy's default behavior is to instead de-associate address1 and address2 from user1 by setting their foreign key reference to NULL. Using a mapping as follows:

class User(Base):
    # ...

    addresses = relationship("Address")

Upon deletion of a parent User object, the rows in address are not deleted, but are instead de-associated:

>>> sess.delete(user1)
>>> sess.commit()
UPDATE address SET user_id=? WHERE address.id = ?
(None, 1)
UPDATE address SET user_id=? WHERE address.id = ?
(None, 2)
DELETE FROM user WHERE user.id = ?
(1,)
COMMIT

delete cascade is more often than not used in conjunction with delete-orphan cascade, which will emit a DELETE for the related row if the "child" object is deassociated from the parent. The combination of delete and delete-orphan cascade covers both situations where SQLAlchemy has to decide between setting a foreign key column to NULL versus deleting the row entirely.

The behavior of SQLAlchemy's "delete" cascade has a lot of overlap with the ON DELETE CASCADE feature of a database foreign key, as well as with that of the ON DELETE SET NULL foreign key setting when "delete" cascade is not specified. Database level "ON DELETE" cascades are specific to the "FOREIGN KEY" construct of the relational database; SQLAlchemy allows configuration of these schema-level constructs at the DDL level using options on ForeignKeyConstraint which are described at ON UPDATE and ON DELETE.

It is important to note the differences between the ORM and the relational database's notion of "cascade" as well as how they integrate:

• A database level ON DELETE cascade is configured effectively on the many-to-one side of the relationship; that is, we configure it relative to the FOREIGN KEY constraint that is the "many" side of a relationship. At the ORM level, this direction is reversed. SQLAlchemy handles the deletion of "child" objects relative to a "parent" from the "parent" side, which means that delete and delete-orphan cascade are configured on the one-to-many side.

• Database level foreign keys with no ON DELETE setting are often used to prevent a parent row from being removed, as it would necessarily leave an unhandled related row present. If this behavior is desired in a one-to-many relationship, SQLAlchemy's default behavior of setting a foreign key to NULL can be caught in one of two ways:

  • The easiest and most common is just to set the foreign-key-holding column to NOT NULL at the database schema level.
An attempt by SQLAlchemy to set the column to NULL will fail with a simple NOT NULL constraint exception.

  • The other, more special-case way is to set the passive_deletes flag to the string "all". This has the effect of entirely disabling SQLAlchemy's behavior of setting the foreign key column to NULL, and a DELETE will be emitted for the parent row without any effect on the child row, even if the child row is present in memory. This may be desirable in the case when database-level foreign key triggers, either special ON DELETE settings or otherwise, need to be activated in all cases when a parent row is deleted.

• Database level ON DELETE cascade is vastly more efficient than that of SQLAlchemy. The database can chain a series of cascade operations across many relationships at once; e.g. if row A is deleted, all the related rows in table B can be deleted, and all the C rows related to each of those B rows, and on and on, all within the scope of a single DELETE statement. SQLAlchemy on the other hand, in order to support the cascading delete operation fully, has to individually load each related collection in order to target all rows that then may have further related collections. That is, SQLAlchemy isn't sophisticated enough to emit a DELETE for all those related rows at once within this context.

• SQLAlchemy doesn't need to be this sophisticated, as we instead provide smooth integration with the database's own ON DELETE functionality, by using the passive_deletes option in conjunction with properly configured foreign key constraints. Under this behavior, SQLAlchemy only emits DELETE for those rows that are already locally present in the Session; for any collections that are unloaded, it leaves them to the database to handle, rather than emitting a SELECT for them. The section Using Passive Deletes provides an example of this use, and a minimal sketch follows this list.

• While database-level ON DELETE functionality works only on the "many" side of a relationship, SQLAlchemy's "delete" cascade has limited ability to operate in the reverse direction as well, meaning it can be configured on the "many" side to delete an object on the "one" side when the reference on the "many" side is deleted. However, this can easily result in constraint violations if there are other objects referring to this "one" side from the "many", so it typically is only useful when a relationship is in fact a "one to one". The single_parent flag should be used to establish an in-Python assertion for this case.

When using a relationship() that also includes a many-to-many table using the secondary option, SQLAlchemy's delete cascade handles the rows in this many-to-many table automatically. Just as the addition or removal of an object from a many-to-many collection results in the INSERT or DELETE of a row in the many-to-many table (as described in Deleting Rows from the Many to Many Table), the delete cascade, when activated as the result of a parent object delete operation, will DELETE not just the row in the "child" table but also in the many-to-many table.
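Below is a minimal sketch (illustrative only, not an excerpt from this documentation) of the passive_deletes pattern mentioned above: the foreign key carries ON DELETE CASCADE at the DDL level, and passive_deletes=True tells the ORM to leave unloaded child rows to the database:

from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)

    # The database-level ON DELETE CASCADE (below) does the actual work;
    # passive_deletes=True stops the ORM from loading children just to
    # NULL-out or delete them one by one.
    addresses = relationship("Address", cascade="all, delete-orphan",
                             passive_deletes=True)

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer,
                     ForeignKey('user.id', ondelete='CASCADE'),
                     nullable=False)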
## delete-orphan

delete-orphan cascade adds behavior to the delete cascade, such that a child object will be marked for deletion when it is de-associated from the parent, not just when the parent is marked for deletion. This is a common feature when dealing with a related object that is "owned" by its parent, with a NOT NULL foreign key, so that removal of the item from the parent collection results in its deletion.

delete-orphan cascade implies that each child object can only have one parent at a time, so it is configured in the vast majority of cases on a one-to-many relationship. Setting it on a many-to-one or many-to-many relationship is more awkward; for this use case, SQLAlchemy requires that the relationship() be configured with the single_parent argument, which establishes Python-side validation that ensures the object is associated with only one parent at a time.

## merge

merge cascade indicates that the Session.merge() operation should be propagated from a parent that's the subject of the Session.merge() call down to referred objects. This cascade is also on by default.

## refresh-expire

refresh-expire is an uncommon option, indicating that the Session.expire() operation should be propagated from a parent down to referred objects. When using Session.refresh(), the referred objects are expired only, but not actually refreshed.

## expunge

expunge cascade indicates that when the parent object is removed from the Session using Session.expunge(), the operation should be propagated down to referred objects.

## Controlling Cascade on Backrefs

The save-update cascade by default takes place on attribute change events emitted from backrefs. This is probably a confusing statement more easily described through demonstration; it means that, given a mapping such as this:

mapper(Order, order_table, properties={
    'items' : relationship(Item, backref='order')
})

If an Order is already in the session, and is assigned to the order attribute of an Item, the backref appends the Item to the items collection of that Order, resulting in the save-update cascade taking place:

>>> o1 = Order()
>>> session.add(o1)
>>> o1 in session
True

>>> i1 = Item()
>>> i1.order = o1
>>> i1 in o1.items
True
>>> i1 in session
True

This behavior can be disabled using the cascade_backrefs flag:

mapper(Order, order_table, properties={
    'items' : relationship(Item, backref='order',
                                cascade_backrefs=False)
})

So above, the assignment of i1.order = o1 will append i1 to the items collection of o1, but will not add i1 to the session. You can, of course, add() i1 to the session at a later point. This option may be helpful for situations where an object needs to be kept out of a session until its construction is completed, but still needs to be given associations to objects which are already persistent in the target session.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19938577711582184, "perplexity": 2705.6945221403935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511806.8/warc/CC-MAIN-20181018105742-20181018131242-00464.warc.gz"}
https://physics.meta.stackexchange.com/questions/10634/why-did-i-get-chat-suspended-for-a-year?answertab=active
# Why did I get chat suspended for a year? I've been chat banned for a year after having served a 2-day sentence. Can you please point out what I said that was particularly offensive? I don't believe I said anything wrong, at all. (Of course all of the messages are deleted so there's no evidence of anything.) My version of the events is that: • Balarka Sen called me a "cuck." I flagged this message as inappropriate.$^1$ • Balarka Sen was automatically suspended for 30 minutes. • Art of Code came into the hbar and told me to behave, without citing which behavior was out of line (I don't think I was out of line at all). • Art of Code froze the room for 5 minutes for an unknown reason. • I made a comment in the math room about moderation being very tough and that we're unable to discuss physics in the current climate. This was half meant as a joke but I don't think it's inappropriate in any case. • I said that a hooligan came into the physics room and got it frozen by calling me a cuck. "Hooligan" was meant as "troublemaker," which is an accurate description. • I was suspended for two days. • Balarka Sen was suspended for one day. • Every message was deleted. • Today I found I was suspended for a year. Can someone please point out where in here I committed an offense, or correct the record if I have made an error? I'm sure that someone will say that I have lots of flags so I had this coming, but the flag system is pretty bad. Consider the following example: I was flagged and suspended for this message a week after it was posted. This was clearly not in good faith, and I believe my record is not that bad if one filters out these kinds of incidents. In any case, David Z has talked to me about being "more welcoming" in chat. And while I think I am always very welcoming, I have tried to accommodate his wishes. And in this instance I don't think I did anything wrong, so why was I suspended for a year? Thanks $^1$ I have talked to Balarka after the fact and this was a joke on his part. • I'm closing this question because a mod message with proper explanations is forthcoming. We can decide later how much of this conversation should continue in public. – rob May 18 '18 at 18:31 • think 1 yr suspensions for chatting even for conflicts with mods are too extreme. think the chat flagging system sometimes leads to unfair/ snap mod decisions without recourse. think other chat user opinions (other than mods) about participants should hold some sway & the "no discussion allowed" policy is not open/ undemocratic. think SE should have some way to recognize/ give some credit to "regular chat users" who help "anchor" rooms. the SE policy on creating other SE usernames is relevant also. what is the official policy on that? there is also the issue of "site mods" vs "other mods" etc – vzn May 18 '18 at 19:53 • @rob I don't think your "close reason" is appropriate here. In fact, I believe it's way out of line (and I'm saying this even if I have no idea what's the context behind OP's suspension). There is nothing off-topic about this meta post, and I'm rather surprised to see it closed. Just because you guys are preparing a proper explanation does not justify closure at all. The closure feels like censorship and abuse of power. If you guys want to think things twice before saying anything, so be it. But leave this post open in the meanwhile. After all, OP is in their right to ask what's going on. 
– AccidentalFourierTransform May 18 '18 at 23:16 • Actually , no @nitsua60 a lot of the most damaging messages were moved, by mods, to locked private trash rooms, while John R. moved some messages here. This whole ordeal actually strongly resembles a soap opera; in which you must tune-in daily to get the big picture :P – skullpatrol May 19 '18 at 3:56 • @KyleKanos keeping suspensions private is a right users have. OP here is clearly choosing to discuss things publicly, which is perfectly ok (and not uncommon). – AccidentalFourierTransform May 19 '18 at 4:14 • Here's the thing: in line with our policy of not discussing individual suspensions in public, the moderators are not going to answer this question. Officially, nobody else knows why the chat suspension was put in place, and thus nobody else can offer an accurate answer. The only answer that anyone could post at this point would be purely speculative, and that's not okay. So while this is technically on topic, it's (currently) effectively unanswerable, and that's why we have it on hold for now. As rob said, once the mods catch up with the normal process and send a message, we can revisit this. – David Z May 19 '18 at 6:37 • @KyleKanos Then the mods don't say anything. But they don't close the post. That's precisely my point. Refusing to say anything is one thing, blocking any sort of conversation for the rest is another. And the latter is censorship and abuse of power. – AccidentalFourierTransform May 19 '18 at 12:04 • @rob Your first comment, along with what David Z has said, is rather alarming. Why do you need so much time to tell 0celo7 the reason he was suspended? Did you not already know why you were suspending him when you did it? – knzhou May 19 '18 at 13:23 • @DavidZ I've always thought this was a bizarre catch-22. The mods won't tell anybody anything, so nobody knows anything, so nobody is allowed to discuss? That means it is effectively impossible to criticize a mod decision, despite the fact that mods are elected democratically. – knzhou May 19 '18 at 13:27 • @KyleKanos "Right" is not the right word. The point is that there are some standards of transparency here. For instance, close votes always have to come with reasons. High-rep users can view questions that mods have deleted. The votes in mod elections are publicly viewable. The policy that distinguishes suspensions has never made sense to me. – knzhou May 19 '18 at 13:28 • There is a meta post on "open" vs. "secret" justice by Jon Ericson relevant to the comment discussion here. The proper appeal process for suspensions is to contact the Community team of SE via email or the "Contact Us" form in the site footer. – ACuriousMind May 19 '18 at 14:45 • Now that this post has been reopened, I'd like to note that we will be deleting speculative answers if they are posted. – David Z May 20 '18 at 20:56 • @DavidZ If the involved person explicitly asks for a public clarification on meta, I think moderators should discuss it on meta at least to a minimum extent (and, afaik, this is what is usually done elsewhere). – Massimo Ortolano May 22 '18 at 6:12 • @vzn If you want people to take you seriously, please write like the adult you are. – AccidentalFourierTransform May 23 '18 at 20:01 • @vzn I suspect there's a non-negligible number of people who support the decision but aren't willing/interested in discussing it. Remember -- the loudest voices aren't always the majority. 
You're basically just pointing out that some people who disagree complained and the mods/some power users agreed. But there's a lot more power users who haven't said anything. Don't extrapolate. – tpg2114 May 23 '18 at 23:47 Disclaimer: I'm not a moderator so I can't say exactly what happened in this particular case even if I wanted to. For example I can't see the deleted messages. However as a room owner I've been party to various discussions involving moderators from lots of sites (not just our mods), and including the SE community managers such as Shog9 and Ana. What I can do is make some general comments about the view the SE mods take of such things, and the view the SE expect our mods to take. Let me emphasise that what follows is not a personal opinion. I have based this on statements I've seen from the community managers. Whether you agree or disagree with those statements the SE are paying the piper so they get to call the tune. The key point that the SE mods have made several times is that suspensions are not just based on your last offence. So 0celo7's year long suspension will not have been a result of the most recent fracas. When you get a year long suspension that's the moderators telling you that they think the SE chat is better off without you. In any social group trouble sometimes kicks off. Maybe there's a misunderstanding, or maybe someone is just feeling tired and grumpy. This happens in real life and it happens online. When there's an outburst the SE expect our mods to keep track of who is involved, and if a pattern emerges - specifically if the same person is frequently involved - they can expect a suspension. The SE are uninterested in being back room lawyers and they're not going to analyse every detail of every infraction. Basically if you're frequently involved in fights the SE are going to consider you a problem irrespective of whether you started the fights or not. Note that (short) suspensions are not intended to be a punishment. Instead they are supposed to deliver the message that the way you are behaving isn't acceptable. The key thing the SE look for is that the message is received and understood, and that you change your behaviour as a result. If so, that's great. It's when the person involved doesn't change and continues to be involved in trouble that the year long suspensions are wheeled out. It doesn't take insider knowledge for it to be obvious that this is what has happened in 0celo7's case. He's managed to get himself flagged and suspended more times than anyone else I can think of, and by quite some margin. The current year long suspension is basically the message that the SE thinks he is incapable of behaving the way the SE expects of chat participants. • not really disagreeing with anything stated in the message, but there seems to be something a bit off, and that is that some users such as 0celo7 are very heavy chat users and generally interact on chat with literally many dozens of different users, most non mods, whereas the mods decide whether all those interactions are acceptable, and the users have essentially zero say in others suspensions/ acceptable use of the system according to this overall analysis, ie mod judges over all other opinions, and think this is fundamentally undemocratic. look at flags per total lines of chat... – vzn May 23 '18 at 16:49 • @vzn Interesting proposal. Of the twenty most prolific chat users, half have zero chat suspensions/annotations ever, and nearly all have zero chat suspensions over the past year. 
The same general trend is true for the twenty most recent chatters. That list seems to update every time anyone on the network posts a chat message, but looking at the posts-per-week numbers it is also biased towards the heaviest chat users. – rob May 23 '18 at 18:57 • I've deleted some speculative or inappropriate comments and their responses. – David Z May 23 '18 at 20:21 • @rob That's incredible, I have been suspended for saying the acronym "wtf" – Ryan Unger May 24 '18 at 1:28 • @rob Have you excluded the bots and the moderators in that statistics you propose about half the top 20 chat users having zero suspensions all-time? It's also worth trying to do the same thing on the Physics and Mathematics chat. (That's not an attempt at justifying anything; if the room-specific data turns out to be substantially different, that's something to ponder about as it has been proposed elsewhere that these rooms are outliers) – Balarka Sen May 27 '18 at 10:24 • My entire point is just that I feel like saying "flagged more times than anyone else I can think of" seems disingenuous as it is, and should either be followed up with an "even with the exclusion of known false flags" or a statement indicating that he's unsure of how much of a part it plays. – Phase May 27 '18 at 13:05 • @BalarkaSen Adding the next page of prolific chatters to keep the number constant while throwing out bots and moderators actually doesn't change the numbers very much. But I want to emphasize that, while I've discussed numbers of chat suspensions, this isn't an issue of numbers: it's about a pattern of behavior. The chat suspension record is a result of that pattern, not its cause, and isn't the only evidence for a problem. It's easier for me to anonymously summarize a bunch of semi-public numbers than an extensive set of chat conversations, so that's what I tried. – rob May 27 '18 at 18:55 • @rob Right, that's the narrative other another moderator offered my in conversations in the meta chat (which seemed to conflict with another moderator's narrative about the flag factor being more relevant and important than the other factors, but I'll give it the benefit of the doubt and assume that was a thought-experiment with internal scores which might determine the problematic-ness of a user, and I took it too literally). – Balarka Sen May 27 '18 at 20:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3052329421043396, "perplexity": 1591.2522501088636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313715.51/warc/CC-MAIN-20190818062817-20190818084817-00545.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-common-core/skills-handbook-operations-with-exponents-exercises-page-978/11
Algebra 2 Common Core $$c^{6}$$ To divide powers of the same base, subtract the exponents: $$\frac{c^{7}}{c}=c^{7-1}=c^{6}.$$
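To see why subtracting exponents works, expand a smaller case and cancel the common factor:

$$\frac{c^{3}}{c}=\frac{c\cdot c\cdot c}{c}=c\cdot c=c^{2}=c^{3-1}.$$

The same cancellation applied to $c^{7}/c$ leaves six factors of $c$, giving $c^{6}$ (assuming $c \neq 0$).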
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8177480101585388, "perplexity": 3401.1134244157456}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860684.14/warc/CC-MAIN-20180618164208-20180618184208-00181.warc.gz"}
https://bluemountaincapital.github.io/Deedle/tutorial.html
# Deedle in 10 minutes using F#

This document is a quick overview of the most important features of the F# data frame library. You can also get this page as an F# script file from GitHub and run the samples interactively.

The first step is to install Deedle.dll from NuGet. Next, we need to load the library - in F# Interactive, this is done by loading an .fsx file that loads the actual .dll with the library and registers pretty printers for types representing data frame and series. In this sample, we also need F# Charting, which works similarly:

#I "../../packages/FSharp.Charting"
#I "../../packages/Deedle"
#load "FSharp.Charting.fsx"
#load "Deedle.fsx"
open System
open Deedle
open FSharp.Charting

## Creating series and frames

A data frame is a collection of series with unique column names (although these do not actually have to be strings). So, to create a data frame, we first need to create a series:

// Create from sequence of keys and sequence of values
let dates = [ DateTime(2013,1,1); DateTime(2013,1,4); DateTime(2013,1,8) ]
let values = [ 10.0; 20.0; 30.0 ]
let first = Series(dates, values)

// Create from a single list of observations
Series.ofObservations
  [ DateTime(2013,1,1) => 10.0
    DateTime(2013,1,4) => 20.0
    DateTime(2013,1,8) => 30.0 ]

Keys:   1/1/2013 12:00:00 AM   1/4/2013 12:00:00 AM   1/8/2013 12:00:00 AM
Values: 10                     20                     30

// Shorter alternative to 'Series.ofObservations'
series [ 1 => 1.0; 2 => 2.0 ]

// Create series with implicit (ordinal) keys
Series.ofValues [ 10.0; 20.0; 30.0 ]

Keys:   0    1    2
Values: 10   20   30

Note that the series type is generic. Series<K, T> represents a series with keys of type K and values of type T. Let's now generate a series with a 10-day date range and random values:

/// Generate date range from 'first' with 'count' days
let dateRange (first:System.DateTime) count =
  seq { for i in 0 .. (count - 1) -> first.AddDays(float i) }

/// Generate 'count' number of random doubles
let rand count =
  let rnd = System.Random()
  seq { for i in 0 .. (count - 1) -> rnd.NextDouble() }

// A series with values for 10 days
let second = Series(dateRange (DateTime(2013,1,1)) 10, rand 10)

Keys:   1/1/2013  1/2/2013  1/3/2013  1/4/2013  1/5/2013  ...  1/8/2013  1/9/2013  1/10/2013   (all at 12:00:00 AM)
Values: .44       .6        .44       .49       .08       ...  .39       .32       .55

Now we can easily construct a data frame that has two columns - one representing the first series and another representing the second series:

let df1 = Frame(["first"; "second"], [first; second])

                        first   second
1/1/2013 12:00:00 AM    10      .4366
1/2/2013 12:00:00 AM    N/A     .5981
1/3/2013 12:00:00 AM    N/A     .4351
1/4/2013 12:00:00 AM    20      .4865
1/5/2013 12:00:00 AM    N/A     .08
1/6/2013 12:00:00 AM    N/A     .3986
1/7/2013 12:00:00 AM    N/A     .4996
1/8/2013 12:00:00 AM    30      .386
1/9/2013 12:00:00 AM    N/A     .3239
1/10/2013 12:00:00 AM   N/A     .553

The type representing a data frame has two generic parameters: Frame<TRowKey, TColumnKey>. The first parameter represents the type of row keys - this can be int if we do not give the keys explicitly, or DateTime like in the example above. The second parameter is the type of column keys. This is typically string, but sometimes it is useful to create a transposed frame with dates as column keys. Because a data frame can contain heterogeneous data, there is no type of values - this needs to be specified when getting data from the data frame.
As the output shows, creating a frame automatically combines the indices of the two series (using "outer join", so the result has all the dates that appear in any of the series). The data frame now contains the first column with some missing values.

You can also use the following nicer syntax and create a frame from rows as well as individual values:

// The same as previously
let df2 = Frame.ofColumns ["first" => first; "second" => second]

// Transposed - here, rows are "first" and "second" & columns are dates
let df3 = Frame.ofRows ["first" => first; "second" => second]

// Create from individual observations (row * column * value)
let df4 =
  [ ("Monday", "Tomas", 1.0); ("Tuesday", "Adam", 2.1)
    ("Tuesday", "Tomas", 4.0); ("Wednesday", "Tomas", -5.4) ]
  |> Frame.ofValues

A data frame can also be easily created from a collection of F# record types (or of any classes with public readable properties). The Frame.ofRecords function uses reflection to find the names and types of properties of a record and creates a data frame with the same structure.

// Assuming we have a record 'Price' and a collection 'values'
type Price = { Day : DateTime; Open : float }
let prices =
  [ { Day = DateTime.Now; Open = 10.1 }
    { Day = DateTime.Now.AddDays(1.0); Open = 15.1 }
    { Day = DateTime.Now.AddDays(2.0); Open = 9.1 } ]

// Creates a data frame with columns 'Day' and 'Open'
let df5 = Frame.ofRecords prices

Finally, we can also load a data frame from CSV:

let msftCsv = Frame.ReadCsv(__SOURCE_DIRECTORY__ + "/data/stocks/MSFT.csv")
let fbCsv = Frame.ReadCsv(__SOURCE_DIRECTORY__ + "/data/stocks/FB.csv")

      Date        Open   High   ...  Close  Volume      Adj Close
0     2013-11-07  49.24  49.87  ...  47.56  96925500    47.56
1     2013-11-06  50.26  50.45  ...  49.12  67648600    49.12
2     2013-11-05  47.79  50.18  ...  50.11  76668300    50.11
3     2013-11-04  49.37  49.75  ...  48.22  80206200    48.22
4     2013-11-01  50.85  52.09  ...  49.75  94822200    49.75
5     2013-10-31  47.16  52.00  ...  50.21  248388200   50.21
6     2013-10-30  50.00  50.21  ...  49.01  116674400   49.01
7     2013-10-29  50.73  50.79  ...  49.40  101859700   49.40
...   ...         ...    ...    ...  ...    ...         ...
367   2012-05-23  31.37  32.50  ...  32.00  73600000    32.00
368   2012-05-22  32.61  33.59  ...  31.00  101786600   31.00
369   2012-05-21  36.53  36.66  ...  34.03  168192700   34.03
370   2012-05-18  42.05  45.00  ...  38.23  573576400   38.23

When loading the data, the data frame analyses the values and automatically converts them to the most appropriate type. However, no conversion is automatically performed for dates and times - the user needs to decide what is the desirable representation of dates (e.g. DateTime, DateTimeOffset or some custom type).

## Specifying index and joining

Now we have fbCsv and msftCsv frames containing stock prices, but they are indexed with ordinal numbers. This means that we can get e.g. the 4th price. However, we would like to align them using their dates (in case there are some values missing). This can be done by setting the row index to the "Date" column. Once we set the date as the index, we also need to order the index. The Yahoo Finance prices are ordered from the newest to the oldest, but our data frame requires ascending ordering.

When a frame has an ordered index, we can use additional functionality that will be needed later (for example, we can select a sub-range by specifying dates that are not explicitly included in the index).
// Use the Date column as the index & order rows
let msftOrd =
  msftCsv
  |> Frame.indexRowsDate "Date"
  |> Frame.sortRowsByKey

The indexRowsDate function uses a column of type DateTime as a new index. The library provides other functions for common types of indices (like indexRowsInt) and you can also use a generic function - when using the generic function, some type annotations may be needed, so it is better to use a specific function. Next, we sort the rows using another function from the Frame module. The module contains a large number of useful functions that you'll use all the time - it is a good idea to go through the list to get an idea of what is supported.

Now that we have properly indexed stock prices, we can create a new data frame that only has the data we're interested in (the Open & Close prices), and we add a new column that shows their difference:

// Create data frame with just Open and Close prices
let msft = msftOrd.Columns.[ ["Open"; "Close"] ]

// Add new column with the difference between Open & Close
msft?Difference <- msft?Open - msft?Close

// Do the same thing for Facebook
let fb =
  fbCsv
  |> Frame.indexRowsDate "Date"
  |> Frame.sortRowsByKey
  |> Frame.sliceCols ["Open"; "Close"]
fb?Difference <- fb?Open - fb?Close

// Now we can easily plot the differences
Chart.Combine
  [ Chart.Line(msft?Difference |> Series.observations)
    Chart.Line(fb?Difference |> Series.observations) ]

When selecting columns using f.Columns.[ .. ] it is possible to use a list of columns (as we did), a single column key, or a range (if the associated index is ordered). Then we use the df?Column <- (...) syntax to add a new column to the data frame. This is the only mutating operation that is supported on data frames - all other operations create a new data frame and return it as the result.

Next we would like to create a single data frame that contains (properly aligned) data for both Microsoft and Facebook. This is done using the Join method - but before we can do this, we need to rename their columns, because duplicate keys are not allowed:

// Change the column names so that they are unique
let msftNames = ["MsftOpen"; "MsftClose"; "MsftDiff"]
let msftRen = msft |> Frame.indexColsWith msftNames

let fbNames = ["FbOpen"; "FbClose"; "FbDiff"]
let fbRen = fb |> Frame.indexColsWith fbNames

// Outer join (align & fill with missing values)
let joinedOut = msftRen.Join(fbRen, kind=JoinKind.Outer)

// Inner join (remove rows with missing values)
let joinedIn = msftRen.Join(fbRen, kind=JoinKind.Inner)

// Visualize daily differences on available values only
Chart.Rows
  [ Chart.Line(joinedIn?MsftDiff |> Series.observations)
    Chart.Line(joinedIn?FbDiff |> Series.observations) ]

## Selecting values and slicing

The data frame provides two key properties that we can use to access the data. The Rows property returns a series containing individual rows (as a series) and Columns returns a series containing columns (as a series).
We can then use various indexing and slicing operators on the series:

// Look for a row at a specific date
joinedIn.Rows.[DateTime(2013, 1, 2)]

val it : ObjectSeries<string> =
  FbOpen    -> 28.00
  FbClose   -> 27.44
  FbDiff    -> -0.5599
  MsftOpen  -> 27.62
  MsftClose -> 27.25
  MsftDiff  -> -0.3700

// Get opening Facebook price for 2 Jan 2013
joinedIn.Rows.[DateTime(2013, 1, 2)]?FbOpen

val it : float = 28.0

The return type of the first expression is ObjectSeries<string>, which is inherited from Series<string, obj> and represents an untyped series. We can use GetAs<int>("FbOpen") to get a value for a specified key and convert it to a required type (or TryGetAs). The untyped series also hides the default ? operator (which returns the value using the statically known value type) and provides a ? operator that automatically converts anything to float.

In the previous example, we used an indexer with a single key. You can also specify multiple keys (using a list) or a range (using the slicing syntax):

// Get values for the first three days of January 2013
let janDates = [ for d in 2 .. 4 -> DateTime(2013, 1, d) ]
let jan234 = joinedIn.Rows.[janDates]

// Calculate mean of Open price for 3 days
jan234?MsftOpen |> Stats.mean

// Get values corresponding to entire January 2013
let jan = joinedIn.Rows.[DateTime(2013, 1, 1) .. DateTime(2013, 1, 31)]

            MsftOpen  MsftClose  MsftDiff  FbOpen  FbClose  FbDiff
1/2/2013    27.25     27.62      -.37      27.44   28       -.56
1/3/2013    27.63     27.25      .38       27.88   27.77    .11
1/4/2013    27.27     26.74      .53       28.01   28.76    -.75
1/7/2013    26.77     26.69      .08       28.69   29.42    -.73
1/8/2013    26.75     26.55      .2        29.51   29.06    .45
1/9/2013    26.72     26.7       .02       29.67   30.59    -.92
1/10/2013   26.65     26.46      .19       30.6    31.3     -.7
1/11/2013   26.49     26.83      -.34      31.28   31.72    -.44
...         ...       ...        ...       ...     ...      ...
1/28/2013   28.01     27.91      .1        31.88   32.47    -.59
1/29/2013   27.82     28.01      -.19      32      30.79    1.21
1/30/2013   28.01     27.85      .16       30.98   31.24    -.26
1/31/2013   27.79     27.45      .34       29.15   30.98    -1.83

// Calculate means over the period
jan?FbOpen |> Stats.mean
jan?MsftOpen |> Stats.mean

The result of the indexing operation is a single data series when you use just a single date (the previous example) or a new data frame when you specify multiple indices or a range (this example). The Series module used here includes more useful functions for working with data series, including (but not limited to) statistical functions like mean, sdv and sum.

Note that the slicing using a range (the second case) does not actually generate a sequence of dates from 1 January to 31 January - it passes these to the index. Because our data frame has an ordered index, the index looks for all keys that are greater than 1 January and smaller than 31 January (this matters here, because the data frame does not contain 1 January - the first day is 2 January).

## Using ordered time series

As already mentioned, if we have an ordered series or an ordered data frame, then we can leverage the ordering in a number of ways. In the previous example, slicing used lower and upper bounds rather than exact matching. Similarly, it is possible to get the nearest smaller (or greater) element when using direct lookup. For example, let's create two series with 10 values for 10 days.
The daysSeries contains keys starting from DateTime.Today (12:00 AM) and obsSeries has dates with the time component set to the current time (this is the wrong representation, but it can be used to illustrate the idea):

let daysSeries = Series(dateRange DateTime.Today 10, rand 10)
let obsSeries = Series(dateRange DateTime.Now 10, rand 10)

Keys:   7/14/2015  7/15/2015  7/16/2015  7/17/2015  7/18/2015  ...  7/21/2015  7/22/2015  7/23/2015   (all at 12:00:00 AM)
Values: .79        .21        .45        .67        .48        ...  .02        .06        .58

Keys:   7/14/2015  7/15/2015  7/16/2015  7/17/2015  7/18/2015  ...  7/21/2015  7/22/2015  7/23/2015   (all at 1:38:48 PM)
Values: .79        .21        .45        .67        .48        ...  .02        .06        .58

The indexing operation written as daysSeries.[date] uses exact semantics, so it will fail if the exact date is not available. When using the Get method, we can provide an additional parameter to specify the required behaviour:

// Fails, because current time is not present
try daysSeries.[DateTime.Now] with _ -> nan
try obsSeries.[DateTime.Now] with _ -> nan

// This works - we get the value for DateTime.Today (12:00 AM)
daysSeries.Get(DateTime.Now, Lookup.ExactOrSmaller)

// This does not - there is no nearest key <= Today 12:00 AM
try obsSeries.Get(DateTime.Today, Lookup.ExactOrSmaller) with _ -> nan

Similarly, you can specify the semantics when calling TryGet (to get an optional value) or when using GetItems (to look up multiple keys at once). Note that this behaviour is only supported for series or frames with an ordered index. For unordered ones, all operations use the exact semantics.

The semantics can also be specified when using a left or right join on data frames. To demonstrate this, let's create two data frames with columns indexed by 1 and 2, respectively:

let daysFrame = [ 1 => daysSeries ] |> Frame.ofColumns
let obsFrame = [ 2 => obsSeries ] |> Frame.ofColumns

// All values in column 2 are missing (because the times do not match)
let obsDaysExact = daysFrame.Join(obsFrame, kind=JoinKind.Left)

// All values are available - for each day, we find the nearest smaller
// time in the frame indexed by later times in the day
let obsDaysPrev =
  (daysFrame, obsFrame)
  ||> Frame.joinAlign JoinKind.Left Lookup.ExactOrSmaller

// The first value is missing (because there is no nearest
// value with greater key - the first one has the smallest
// key) but the rest is available
let obsDaysNext =
  (daysFrame, obsFrame)
  ||> Frame.joinAlign JoinKind.Left Lookup.ExactOrGreater

In general, the same operation can usually be achieved using a function from the Series or Frame module and using a member (or an extension member) on the object. The previous sample shows both options - it uses Join as a member with an optional argument first, and then it uses the joinAlign function. Choosing between the two is a matter of preference - here, we are using joinAlign so that we can write code using pipelining (rather than a long expression that would not fit on the page).

The Join method takes two optional parameters - the parameter ?lookup is ignored when the join ?kind is other than Left or Right. Also, if the data frame is not ordered, the behaviour defaults to exact matching. The joinAlign function behaves the same way.
## Projection and filtering

For filtering and projection, series provides Where and Select methods and corresponding Series.map and Series.filter functions (there are also Series.mapValues and Series.mapKeys if you only want to transform one aspect). The methods are not available directly on the data frame, so you always need to write df.Rows or df.Columns (depending on which one you want). Correspondingly, the Frame module provides functions such as Frame.mapRows. The following adds a new column that contains the name of the stock with the greater price ("FB" or "MSFT"):

joinedOut?Comparison <- joinedOut |> Frame.mapRowValues (fun row ->
  if row?MsftOpen > row?FbOpen then "MSFT" else "FB")

When projecting or filtering rows, we need to be careful about missing data. The row accessor row?MsftOpen reads the specified column (and converts it to float), but when the column is not available, it throws the MissingValueException exception. Projection functions such as mapRowValues automatically catch this exception (but no other types of exceptions) and mark the corresponding series value as missing.

To make the missing value handling more explicit, you could use Series.hasAll ["MsftOpen"; "FbOpen"] to check that the series has all the values we need. If not, the lambda function could return null, which is automatically treated as a missing value (and it will be skipped by future operations).

Now we can get the number of days when Microsoft stock prices were above Facebook and the other way round:

joinedOut.GetColumn<string>("Comparison")
|> Series.filterValues ((=) "MSFT") |> Series.countValues

val it : int = 220

joinedOut.GetColumn<string>("Comparison")
|> Series.filterValues ((=) "FB") |> Series.countValues

val it : int = 103

In this case, we should probably have used joinedIn, which only has rows where the values are always available. But you often want to work with a data frame that has missing values, so it is useful to see how this works. Here is another alternative:

1: // Get data frame with only 'Open' columns
2: let joinedOpens = joinedOut.Columns.[ ["MsftOpen"; "FbOpen"] ]
3:
4: // Get only rows that don't have any missing values
5: // and then we can safely filter & count
6: joinedOpens.RowsDense
7: |> Series.filterValues (fun row -> row?MsftOpen > row?FbOpen)
8: |> Series.countValues

The key is the use of RowsDense on line 6. It behaves similarly to Rows, but only returns rows that have no missing values. This means that we can then perform the filtering safely without any checks. However, we do not mind if there are missing values in FbClose, because we do not need this column. For this reason, we first create joinedOpens, which projects just the two columns we need from the original data frame.

## Grouping and aggregation

As a last thing, we briefly look at grouping and aggregation. For more information about grouping of time series data, see the time series features tutorial, and the data frame features tutorial contains more about grouping of unordered frames.

We'll use the simplest option, which is the Frame.groupRowsUsing function (also available as the GroupRowsUsing member). This allows us to specify a key selector that selects a new key for each row. If you want to group data using a value in a column, you can use Frame.groupRowsBy column.
The following snippet groups rows by month and year:

let monthly =
  joinedIn |> Frame.groupRowsUsing (fun k _ -> DateTime(k.Year, k.Month, 1))

val monthly : Frame<(DateTime * DateTime),string> =
                          FbOpen  MsftOpen
  5/1/2012 5/18/2012  ->  38.23   29.27
           5/21/2012  ->  34.03   29.75
           5/22/2012  ->  31.00   29.76
  :                       ...
  8/1/2013 8/12/2013  ->  38.22   32.87
           8/13/2013  ->  37.02   32.23
           8/14/2013  ->  36.65   32.35

The output is trimmed to fit on the page. As you can see, we get back a frame that has a tuple DateTime * DateTime as the row key. This is treated in a special way as a hierarchical (or multi-level) index. For example, the output automatically shows the rows in groups (assuming they are correctly ordered).

A number of operations can be used on hierarchical indices. For example, we can get rows in a specified group (say, May 2013) and calculate means of columns in the group:

monthly.Rows.[DateTime(2013,5,1), *] |> Stats.mean

val it : Series<string,float> =
  FbOpen    -> 26.14
  FbClose   -> 26.35
  FbDiff    -> 0.20
  MsftOpen  -> 33.95
  MsftClose -> 33.76
  MsftDiff  -> -0.19

The above snippet uses slicing notation that is only available in F# 3.1 (Visual Studio 2013). In earlier versions, you can get the same thing using monthly.Rows.[Lookup1Of2 (DateTime(2013,5,1))]. The syntax indicates that we only want to specify the first part of the key and do not match on the second component.

We can also use Frame.getNumericCols in combination with Stats.levelMean to get means for all first-level groups:

monthly
|> Frame.getNumericCols
|> Series.mapValues (Stats.levelMean fst)
|> Frame.ofColumns

Here, we simply use the fact that the key is a tuple. The fst function projects the first date from the key (month and year) and the result is a frame that contains the first-level keys, together with means for all available numeric columns.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17660966515541077, "perplexity": 19757.469019935706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00064.warc.gz"}
http://clay6.com/qa/19337/find-the-differential-equation-of-all-circles-which-pass-through-the-origin
# Find the differential equation of all circles which pass through the origin and whose centre lies on the x-axis

All circles passing through the origin and having their centre on the $x$-axis, say at $(a,0)$, have radius $a$.
$\therefore$ The general equation of such circles is
$(x-a)^2+y^2=a^2$ .........(i)
This equation has only one arbitrary constant, $a$.
$\therefore$ It need be differentiated only once to eliminate the arbitrary constant $a$.
Differentiating (i) on both sides, we get
$2(x-a)+2y\frac{dy}{dx}=0$ .........(ii)
$\Rightarrow\ x-a=-y\frac{dy}{dx}$ and $a=x+y\frac{dy}{dx}$
Substituting the values of $(x-a)$ and $a$ in (i), we get
$y^2\big(\frac{dy}{dx}\big)^2+y^2=\big(x+y\frac{dy}{dx}\big)^2$
$\Rightarrow\ y^2\big(\frac{dy}{dx}\big)^2+y^2=x^2+y^2\big(\frac{dy}{dx}\big)^2+2xy\frac{dy}{dx}$
$\Rightarrow\ y^2=x^2+2xy\frac{dy}{dx}$
This is the differential equation of the given circles.
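As a quick sanity check (our addition, not part of the original solution), take $a=1$: the circle $(x-1)^2+y^2=1$ passes through the origin and through the point $(1,1)$.

$\text{At } (1,1):\quad 2(x-1)+2y\frac{dy}{dx}=0 \ \Rightarrow\ \frac{dy}{dx}=0$
$y^2=x^2+2xy\frac{dy}{dx} \ \Rightarrow\ 1=1+2(1)(1)(0)$

Both sides agree, so the point satisfies the differential equation, as expected.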
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6576343774795532, "perplexity": 424.9677790299365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947693.49/warc/CC-MAIN-20180425041916-20180425061916-00386.warc.gz"}
https://codegolf.meta.stackexchange.com/questions/13792/a-proposal-on-command-line-flags?noredirect=1
A proposal on command line flags [duplicate]

I've been thinking about command line flags for a while, and I think that we should stop adding them to our byte counts and instead consider different invocations to be different languages. Because this might come as a bit of a surprise, I will walk you through my reasoning.

What are command line flags and why do they exist?

If you are not familiar with command line flags, they are little bits of information added to the invocation of a language. For example, if you were running Brain-Flak (I will be using Brain-Flak as the primary example throughout), you would normally call it with:

./BrainHack program

where program is the name of the file containing your source. You could add the -A flag to this invocation to tell the interpreter to output in ASCII. You would invoke that with

./BrainHack program -A

In general, flags exist to toggle a number of compiler or interpreter options; these are called binary flags. There are also flags that take arguments. For example, Brain-Flak has a -e flag that causes it to take the source from the command line rather than from a file:

./BrainHack -e "(()())"

How are they currently scored?

Our current rules are derived from this answer; in short, flags are counted as a difference in character count to the shortest equivalent invocation without them. For most cases this means +3 bytes for the first flag and +1 byte for every subsequent flag.

I have several issues with the current state, but I will just list the two biggest ones (I'm listing a third, minor point as a footnote).

The first and largest issue is that it is confusing and rather unclear. Take for example the -A flag in Brain-Flak: if I use it, what do I need to add to my byte count? Well, there are two "standard" invocations of Brain-Flak:

./BrainHack program

and

./BrainHack -e "source"

OK, so which is shorter? That depends on the name of the file and the length of the source code, which already puts us into rather murky territory. However, as long as we assume the file name is 1 byte, the former will always be shorter, so let's say that is the shorter standard invocation. Here is the problem: in the original post, the author uses Perl's -e flag, which is analogous to that of Brain-Flak, as an example, and calls that the shortest standard invocation. This may seem like I'm intentionally making a stink, but members of the Brain-Flak community use both +1 and +3 for the -A flag from answer to answer. There is definitely confusion on this point.

My second problem is that its penalties can often be sidestepped by making more interpreters. For example, I could make a "new language" called Brain-Flak-A that has the following fish interpreter:

function bfa --wraps /opt/brainhack/BrainHack
    /opt/brainhack/BrainHack -A $argv
end

Try it online! I've seriously considered doing this for Klein, which always has a +3 penalty on all of its answers (meta ruling) because you must specify the version of the language in a 2-byte string. This is really annoying to anyone who programs in Klein (or at least it is to me) and serves no real purpose. It's not as if the version of the language carries any information beyond which language the answer is written in.

Here is the crux of my argument: "If I can just make new interpreters to get around the rule, then why does it exist?" If I spent 20 minutes writing interpreters, no Brain-Flak user would ever have to use flags again. Most of the variants caused by binary flag differences are more akin to language differences than to changes in the source code, and I think we should treat them as such.
The Meta-Golfscript Problem

Meta-Golfscript is a hypothetical set of languages designed as a rules-lawyering device to get zero-byte programs on every challenge. These languages encode their program via a command line flag rather than in a source file. Under our current system this is fine. The argument is (quoting Nathan Merrill here):

Making them each different languages is a slippery slope where people can make a MetaGolfScript where each "language" is defined in the flags.

I don't disagree with Nathan Merrill here, and I think he is right that we should not let Meta-GolfScript answers get away with anything. However, I don't think this is a problem. As I touched on in a comment, the would-be Meta-GolfScripters already have a ripe opportunity to create 0-byte programs now that we allow languages that postdate the challenge. Our current consensus is to just downvote and delete any abuse of our lenient policy. I feel that we should put command line flags under the same policy. As with the non-competing policy of days past, the rule hurts many more benign answers than it prevents malicious ones. Since the lifting of that policy, we have not seen a surge of "0 byte answers" abusing the new rules. I think the old adage of optimizing for pearls rather than sand applies here.

So here is my proposal:

We treat different command line invocations of the same interpreter or compiler as different languages with no byte cost, and rely on the judgement of the community to police those who seek to trivialize challenges in this manner.

Now this is a proposal, not a proclamation or declaration. I'm interested in what other people think about this, and I would like to have a discussion about it. Even if the community decides that this specific proposal is not what we want, I think we should definitely make changes to the current policy, which I believe is broken.

My third problem with this system is that it is not actually representative of the information gained from adding command line flags. The fact that the command line is separate from the source actually constitutes information that can be used; the current system, however, does not account for this information.

• This seems bizarre to me. I've always preferred the simple definition "The implementation defines the language". Rather than making the flag "part of the implementation", why aren't we just counting the bytes for the flags and removing free flags? – Nathan Merrill Aug 25 '17 at 16:05
• No, you mention the problem with free flags. I'm arguing that we remove free flags entirely. – Nathan Merrill Aug 25 '17 at 16:19
• @NathanMerrill Ah, I see. That is a proposal, and I certainly think it is clearer than the current approach. However, I still think that the vast majority of byte penalties would be incurred on answers that really don't need them, and this might exacerbate the problem. – Wheat Wizard Aug 25 '17 at 16:26
• If your languages annoy you, then stop making them like that! – feersum Aug 25 '17 at 21:02
• The problem I see with completely discounting flags, and I'll expand on this in an answer at some stage, is that some languages (e.g., Japt) have flags that modify a programme's output. If those flags were "freebies" then it could lead to savings of up to 3, maybe even 4, bytes, which can be a substantial number in golfing languages. – Shaggy Aug 25 '17 at 21:14
• @Shaggy Sounds like you should make new languages that support those flags as defaults.
If you're interested in having the shortest solution, I see no reason to continue to use those flags when you could make a new version of the language that does that output. – Wheat Wizard Aug 26 '17 at 5:03
• Didn't someone win worst abuse of the rules on IOCCC by putting all their code in a compiler flag? – Neil Sep 2 '17 at 10:18

I haven't thought through all of the issues and arrived at a balance of pros and cons, but there is one con which comes to mind immediately and should be included in the discussion.

This would affect more than just scoring in code-golf

Treating different implementations of the same specification as different languages already has consequences which IMO are pernicious. A recent example is a question which required all answers to use a language which hadn't been used earlier in the chain. One answer used Python 3 (PyPy) on the basis that Python 3 had already been used, but that was presumed to be using the "language" "defined" by the CPython implementation, leaving the PyPy implementation up for grabs. Taken to extremes, that approach would already "create" 19 different "languages" from CPython implementations of Python 3.5, counting all the alphas, betas, and release candidates. But if any command-line flag also creates a new "language", the following flags would not materially affect a lot of the simple programs that are posted on PPCG:

• -b or -bb
• -B
• -E
• -O
• -OO
• -Qold or -Qnew or -Qwarn or -Qwarnall
• -R
• -s
• -S
• -t or -tt (if warnings/errors are to stderr and can be ignored)
• -v or -vv (since I don't think answers tend to use modules)
• -Wignore or -Wdefault or -Wall or -Wmodule or -Wonce or -Werror

So if each combination of flags defines a new language, we have 120960 languages per Python interpreter, and any question which wants to make languages single-use would need to explicitly overrule the proposed convention.

• Here's an answer I wrote earlier addressing the same problem under a different name. The thing is that using a default definition of language in an answer-chaining question is already deeply flawed, because we define languages based on interpreters, and different versions of a language, no matter how minor, are already different languages. You already have a couple hundred versions of Python to work with. – Wheat Wizard Aug 25 '17 at 15:41
• What if we made this rule affect different tags differently? – PyRulez Sep 5 '17 at 23:23
• It makes more sense to just let the challenge writers for answer-chaining/polyglot/etc. challenges define what does or doesn't constitute a separate language as best suits the challenge, and leave this definition as the default. – Unrelated String May 16 at 20:30

This is a valid subset of "Implementation"

Consensus broadly states that a language is defined by implementation. Implementation itself is (afaik) not very precisely defined, but my understanding is that whatever your "Implementation" is, you feed your code to it and then the implementation "runs" the code to obtain the desired result. From that understanding, I would consider flags to be a part of implementation. Versions are also a part of implementation; thus Python 2 and Python 3 answers are both acceptable. In fact, I would argue that almost anything could be considered "part" of implementation for an answer that chose it to be so; for example, using statements in C#. I believe that leaving all such information as a part of "implementation" is valid.
If someone were to abuse this by claiming that the "Implementation" they're using causes a 0 byte program to do whatever the challenge asked, then it would be indistinguishable from somebody who created a new language which did whatever the challenge asked when provided with a 0 byte program; something which is already considered forbidden. Since Implementation in this sense is being treated as identical to the concept of a "language", your listing of language should indicate what your implementation is, but not affect your code's length either way. To illustrate, these would be some viable language names under this proposal, with no byte penalties:

• Klein 101
• C# w/ LINQ
• Cubically 2.6
• Python 2.7

To be perfectly honest, this seems to me like it most closely lines up with how the question is already treated (I see answers where a language references an external library listed as "language + library" quite often, for example).

With regards to how this affects Polyglots, I think it's irrelevant in challenges like The Versatile Integer Printer, which are required to have different outputs for each language anyway; most of the implementations with trivial differences are either impossible to use because their output wouldn't be different, or are reasonable to use because significant effort is required to make their output different. Polyglots which require identical outputs for each language are more difficult, but I think that it's probably up to the community anyway; Python 2 and Python 3 are relatively famous for being treated as separate languages, but the point at which differences between versions are enough is mostly arbitrary, while languages which are nearly identical, or where one is a superset of another, are common enough that I think worrying about how many extra "languages" some program becomes valid in as a result is unnecessary.

As Wheat Wizard pointed out, removing the restriction that a language must be created before the challenge hasn't suddenly reduced PPCG to anarchy, because abusive answers are terrible and infrequent, and get downvoted when they do appear. I agree that this change would be similar.

I don't really have a solution in regards to answer-chaining specifically, though. In my opinion, requiring different languages in an answer chain is only slightly better than banning built-ins in regards to actually generating value, but I will admit that this proposal pretty much trivializes what it means for two languages to be "different".

• I don't think this actually aligns with current usage: "language + library" answers currently should still count whatever import/using/with/whatever statements are required to bring the library's namespaces/functions/whatever into scope. – Peter Taylor Aug 26 '17 at 18:59

The difference between this proposal and the newer-languages proposal is that abuse is more clearly defined in the latter. I can imagine several ambiguities that can arise from this rule.

• What about interpreters that take input from the command-line arguments, like Jelly?
• What about interpreters that can use the command-line arguments for the initial stack contents, like ><>?
• Wait, are those two even different?

With the newer-languages proposal, abuse is fairly clear: if a feature was added to a language that is useful on one challenge but on very few others, then adding the feature constitutes abuse. With this proposal, there's simply too much room for ambiguity as to what constitutes abuse.
MetaGolfScript is abuse because it encodes all the program information outside of the scored section of the program. This leaves open the question of how much program information can be encoded. The way I see it, you have two options:

• Create the alternative implementations. As you said, it would not take too long, and it completely eliminates the ambiguity problems.
• Deal with it. I don't expect Klein or Brain-Flak will be competing for 1st place, so think of the challenge as a competition in every language. The bytes for command-line flags are boilerplate, but then again lots of languages have boilerplate. Are those added 3 bytes really so bad compared to class M{public static void main(String[]a){}}?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4041232466697693, "perplexity": 1274.7312828969164}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540547165.98/warc/CC-MAIN-20191212205036-20191212233036-00195.warc.gz"}
http://farside.ph.utexas.edu/teaching/336k/lectures/node115.html
# Perihelion Precession of the Planets

The Solar System consists of eight major planets (Mercury to Neptune) moving around the Sun in slightly elliptical orbits which are approximately co-planar with one another. According to Chapter 5, if we neglect the relatively weak interplanetary gravitational interactions then the perihelia of the various planets (i.e., the points on their orbits at which they are closest to the Sun) remain fixed in space. However, once these interactions are taken into account, it turns out that the planetary perihelia all slowly precess around the Sun. We can calculate the approximate rate of perihelion precession of a given planet by treating the other planets as uniform concentric rings, centered on the Sun, of mass equal to the planetary mass, and radius equal to the mean orbital radius. This is equivalent to averaging the interplanetary gravitational interactions over the orbits of the other planets. It is reasonable to do this, since the precession period in question is very much longer than the orbital period of any planet in the Solar System. Thus, by treating the other planets as rings, we can calculate the mean gravitational perturbation due to these planets, and, thereby, determine the desired precession rate.

We can conveniently index the planets in the Solar System such that Mercury is planet 1, and Neptune planet 8. Let the $m_i$ and the $a_i$, for $i=1,\ldots,8$, be the planetary masses and orbital radii, respectively. Furthermore, let $M$ be the mass of the Sun. It follows, from the previous section, that the gravitational potential generated at the $i$th planet by the Sun and the other planets is

$$\Phi_i(r) = -\frac{G\,M}{r} - \sum_{j<i}\frac{G\,m_j}{r}\left[1+\frac{1}{4}\left(\frac{a_j}{r}\right)^2+\cdots\right] - \sum_{j>i}\frac{G\,m_j}{a_j}\left[1+\frac{1}{4}\left(\frac{r}{a_j}\right)^2+\cdots\right].\tag{1018}$$

Now, the radial force per unit mass acting on the $i$th planet is written $f(r) = -\,d\Phi_i/dr$, giving

$$f(r) = -\frac{G\,M}{r^2} - \sum_{j<i}G\,m_j\left[\frac{1}{r^2}+\frac{3\,a_j^{\,2}}{4\,r^4}+\cdots\right] + \sum_{j>i}\frac{G\,m_j\,r}{2\,a_j^{\,3}}\left[1+\cdots\right].\tag{1019}$$

Hence, setting $r=a_i$, we obtain

$$f(a_i) \simeq -\frac{G\,M}{a_i^{\,2}}\left[1+\sum_{j<i}\mu_j\left\{1+\frac{3}{4}\left(\frac{a_j}{a_i}\right)^2\right\}-\frac{1}{2}\sum_{j>i}\mu_j\left(\frac{a_i}{a_j}\right)^3\right],\tag{1020}$$

where $\mu_j = m_j/M$. It follows that, to first order in the $\mu_j$,

$$\frac{a_i\,f'(a_i)}{f(a_i)} \simeq -2-\frac{3}{2}\sum_{j<i}\mu_j\left(\frac{a_j}{a_i}\right)^2-\frac{3}{2}\sum_{j>i}\mu_j\left(\frac{a_i}{a_j}\right)^3.\tag{1021}$$

Thus, according to Equation (317), the apsidal angle for the $i$th planet is

$$\psi_i = \pi\left[3+\frac{a_i\,f'(a_i)}{f(a_i)}\right]^{-1/2} \simeq \pi\left[1+\frac{3}{4}\sum_{j<i}\mu_j\left(\frac{a_j}{a_i}\right)^2+\frac{3}{4}\sum_{j>i}\mu_j\left(\frac{a_i}{a_j}\right)^3\right].\tag{1022}$$

Hence, the perihelion of the $i$th planet advances by

$$\delta\varpi_i = 2\,(\psi_i-\pi) \simeq \frac{3\pi}{2}\left[\sum_{j<i}\mu_j\left(\frac{a_j}{a_i}\right)^2+\sum_{j>i}\mu_j\left(\frac{a_i}{a_j}\right)^3\right]\tag{1023}$$

radians per revolution around the Sun. Now, the time for one revolution is $T_i = 2\pi/\omega_i$, where $\omega_i^{\,2} = -f(a_i)/a_i$. Thus, the rate of perihelion precession, in arc seconds per year, is given by

$$\dot{\varpi}_i \simeq \frac{972{,}000}{T_i({\rm yr})}\left[\sum_{j<i}\mu_j\left(\frac{a_j}{a_i}\right)^2+\sum_{j>i}\mu_j\left(\frac{a_i}{a_j}\right)^3\right].\tag{1024}$$

Table 1: Data for the major planets in the Solar System, giving the planetary mass relative to that of the Sun, the orbital period in years, and the mean orbital radius relative to that of the Earth.

Planet    m/M          T(yr)    R(au)
Mercury   1.66×10⁻⁷    0.241    0.387
Venus     2.45×10⁻⁶    0.615    0.723
Earth     3.04×10⁻⁶    1.000    1.000
Mars      3.23×10⁻⁷    1.881    1.524
Jupiter   9.55×10⁻⁴    11.86    5.203
Saturn    2.86×10⁻⁴    29.46    9.539
Uranus    4.36×10⁻⁵    84.01    19.19
Neptune   5.15×10⁻⁵    164.8    30.07

Table 2: The observed perihelion precession rates of the planets compared with the theoretical precession rates calculated from Equation (1024) and Table 1. The precession rates are in arc seconds per year. (Columns: Planet, observed rate $\dot\varpi_{\rm obs}$, theoretical rate $\dot\varpi_{\rm th}$, for Mercury through Neptune.)

Table 2 and Figure 46 compare the observed perihelion precession rates with the theoretical rates calculated from Equation (1024) and the planetary data given in Table 1. It can be seen that there is excellent agreement between the two, except for the planet Venus. The main reason for this is that Venus has an unusually low eccentricity ($e \simeq 0.007$), which renders its perihelion point extremely sensitive to small perturbations.

Richard Fitzpatrick 2011-03-31
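As an illustrative evaluation (our own back-of-the-envelope check, not part of the original notes), the single largest term in Equation (1024) for Mercury ($i=1$, $T_1 = 0.241\,{\rm yr}$) is the Jupiter ring:

$$\dot{\varpi}_{1,\rm Jup} \simeq \frac{972{,}000}{0.241}\times 9.55\times10^{-4}\times\left(\frac{0.387}{5.203}\right)^3 \simeq 1.6\ \mbox{arc seconds per year}.$$

Adding the Venus, Earth, and remaining ring terms at the same order gives roughly 4 arc seconds per year; the higher-order terms indicated by the ellipses in Equation (1018) are needed to recover the full planetary contribution to Mercury's precession, which is observed to be about 5.3 arc seconds per year.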
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9795991778373718, "perplexity": 771.6521776521253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736676622.16/warc/CC-MAIN-20151001215756-00064-ip-10-137-6-227.ec2.internal.warc.gz"}
http://www.ast.cam.ac.uk/~mjr/publications/
# Martin Rees Research Publications 1. A Model of the Quasi-Stellar Radio Variable CTA 102. Nat 207, 730 (1965) (with D.W. Sciama). 2. Structure of the Quasi-Stellar Radio Source 3C 273. Nat 208, 371 (1965) (with D.W. Sciama). 3. Kinetic Temperature and Ionization Level of Intergalactic Hydrogen in a Steady State Universe. ApJ 145, 6 (1966) (with D.W. Sciama). 4. Appearance of Relativistically Expanding Radio Sources. Nat 211, 468 (1966). 5. Inverse Compton Effect in Quasars. Nat 211, 805 (1966) (with D.W.  Sciama). 6. Cosmological Significance of the Relation between Redshift and Flux Density for Quasars. Nat 211, 1283 (1966) (with D.W. Sciama). 7. Absorption Spectrum of 3C 9. Nat 212, 1001 (1966) (with D.W.  Sciama). 8. The Role of Thermal Instability in Galaxy Formation. Mem. Soc. Roy. Sci. Liege 17, 31 (1967). 9. The Detection of Heavy Elements in Intergalactic Space. ApJ 147, 353 (1967) (with D.W. Sciama). 10. Physical Processes in Radio Sources and the Intergalactic Medium. Doctoral Thesis, Cambridge University (1967). 11. Possible Large-scale Clustering of Quasars. Nat 213, 374 (1967) (with D.W. Sciama). 12. Studies in Radio Source Structure I. A Relativistically Expanding Model for Variable Quasi-Stellar Radio Sources. MNRAS 135, 345 (1967). 13. Studies in Radio Source Structure II. The Relaxation of Relativistic Electron Spectra in Self-Absorbed Radio Sources. MNRAS 136, 279 (1967). 14. Studies in Radio Source Structure III. Inverse Compton Radiation from Radio Sources. MNRAS 137, 429 (1967). 15. Possible Circular Polarization of Compact Quasars. Nat 216, 147 (1967) (with D.W. Sciama). 16. Extragalactic Soft X-ray Astronomy. Nat 217, 316 (1968) (with D.W. Sciama and G. Setti). 17. Largescale Density Inhomogeneities in the Universe. Nat 217, 511 (1968) (with D.W. Sciama). 18. Evidence for Relativistic Expansion in Variable Radio Sources. ApJ 152, L145 (1968) (with M. Simon). 19. Proton Synchrotron Radiation from Compact Radio Sources. Astrophys. J. Lett. 2, 1 (1968). 20. Stimulated Inverse Compton Scattering. Nat 217, 931 (1968) (with P. Goldreich and R. McCray). 21. Polarization and Spectrum of Primeval Radiation in an Anisotropic Universe. ApJ 153, L1 (1968). 22. Metastable Helium in Interstellar and Intergalactic Space. Astrophys. J. Lett. 2, 243 (1968) (with D.W. Sciama and S.H. Stobbs). 23. Model for the Evolution of Extended Radio Sources. Nat 219, 127 (1968) (with G. Setti). 24. Composition and Origin of Cosmic Rays. Nat 219, 1005 (1968). (with W.L.W. Sargent). 25. Detectability of Quasars Beyond z . Astrophys. J. Lett. 4, 61 (1968). 26. Cosmological Implications of the Diffuse X-ray Background. Nat 221, 924 (1969) (with J.E. Felten). 27. Scattering of Background X-rays by Metagalactic Electrons. Astrophys. J. Lett. 4, 113 (1969). 28. Infrared Radiation from Dust in Seyfert Galaxies. Nat 223, 788 (1969) (with J. Silk, M.W. Werner and N.C. Wickramasinghe). 29. The Free Electron Concentration in HIs Regions of the Galaxy. Comm. Astrophys. and Space Phys. 1, 35 (1969) (with D.W. Sciama). 30. Evolution of Density Fluctuations in the Universe: Dissipative Processes in the Early Stages. Comm. Astrophys. and Space Phys. 1, 140 (1969) (with D.W. Sciama). 31. Cosmic Electrons and Diffuse Galactic X- and Gamma Radiation. Astron. and Astrophys. 3, 452 (1969) (with J. Silk). 32. Infrared and Microwave Astronomy. Nat 224, 752 (1969) (with P.A. Feldman and M.W. Werner). 33. Evolution of Density Fluctuations in the Universe: The Formation of Galaxies. Comm. Astrophys. and Space Phys. 
1, 153 (1969) (with D.W. Sciama). 34. The Collapse of the Universe: An Eschatological Study. Observatory 89, 193 (1969). 35. Upper Limit to Radiation of Mass-Energy Derived from Expansion of the Galaxy. Phys. Rev. Letters 23, 1514 (1969) (with G.B. Field and D.W. Sciama). 36. The Astronomical Significance of Mass Loss by Gravitational Radiation. Comm. Astrophys. and Space Phys. 1, 187 (1969) (with G.B. Field and D.W. Sciama). 37. Cosmological Significance of Observations at Far-Infrared and Millimeter Wavelengths. ESRO book S.P. 52, p.23 (1971). 38. The Expansion Energy of the Crab Nebula. Astrophys. J. Lett. 5, 93 (1970) (with V. Trimble). 39. Effects of Central Pulsars on Supernova Envelopes. Astrophys. J. Lett. 6, 55 (1970). 40. Absorption and Scattering of Ultraviolet and X-ray Photons by Intergalactic Gas. Astron. and Astrophys. 8, 410 (1970) (with G. Setti). 41. The Nature of Pulsar Radiation: Radiation Mechanisms for Pulsars. Nat 266, 622 (1970) (with F. Pacini). 42. Current Ideas on Galaxy Formation. (Varenna Lectures 1969) General Relativity and Cosmology' (Academic Press) p.315. 43. Origin of the Cosmic X-ray Background. Non-Solar X-ray and Gamma-ray Astronomy', ed. L. Gratton (Reidel, Holland) p.352 (1970) (with G. Setti). 44. Soft X-rays from the Galaxy. Non-solar X-ray and Gamma-ray Astronomy', ed. L. Gratton (Reidel, Holland) p.406 (1970). 45. Interactions of Non-Thermal X-rays and Ultraviolet Radiation with the Intergalactic Gas. Non-Solar X-ray and Gamma-ray Astronomy', ed. L. Gratton (Reidel, Holland) p.402 (1970) (with G. Setti). 46. On Multiple Absorption Redshifts in QSOs. ApJ 160, L29, (1970). 47. Heating of HI Regions by Soft X-rays, II: The Effect of Galactic Soft X-ray Sources. ApJ 161, 965 (1970) (with M.W. Werner and J. Silk). 48. Extragalactic Variable Radio Sources. Nat 227, 1303 (1970) (with M. Simon). 49. The Origin of Galaxies. Sci. Am. 222, 26 (1970) (with J. Silk). 50. Evolution of Radio Sources. Nuclei of Galaxies' (Pontificiae Academiae Scientiarium Scripta Varia, No. 35) p.633. 51. The Cosmic Background Radiation: Some Recent Developments. Highlights of Astronomy', ed. C. de Jager (Reidel, Holland) p.757. 52. Some Observable Consequences of Accretion by Defunct Pulsars. Astrophys. J. Lett. 6, 179 (1970) (with J.P. Ostriker and J. Silk). 53. Cosmological Evidence from Quasars and Radio Galaxies. Quasars and External Galaxies' , ed. D.S. Evans (Reidel, Holland) p.407 (1970). 54. Non-Thermal Continuum from the Crab Nebula: The Synchro-Compton Interpretation. The Crab Nebula', ed. R.D. Davies and F.G. Smith (Reidel, Holland) p.407 (1971). 55. Pulsars and Runaway Stars. The Crab Nebula', ed. R.D. Davies and F.G. Smith (Reidel, Holland) p.273 (1971) (with V. Trimble). 56. On Possible Observable Effects of Electron Pair Production in QSOs. MNRAS 152, 21 (1971) (with S. Bonometto). 57. Runaway Stars and the Pulsars near the Crab Nebula. ApJ 166, L85 (1971) (with V. Trimble). 58. On Quasars, Dust and the Galactic Centre. MNRAS 152, 461 (1971) (with D. Lynden-Bell). 59. Extragalactic Pulsars. ApJ 162, 737 (1970) (with J.N. Bahcall and E.E. Salpeter). 60. New Interpretation of Extragalactic Radio Sources. Nat 229, 312 (1971) (errata, p. 510). 61. Planet, Pulsar, Glitch and Wisp. Nat 229, 395 (1971) (with V. Trimble and J.M. Cohen). 62. Implications of the Wave Field' Theory of the Continuum from the Crab Nebula. Nat P.S. 230, 55 (1971). 63. Variability in the X-ray Flux from M 87. Nat P.S., 230, 188 (1971) (with A.F. Janes, K.A. Pounds and M.J. Ricketts). 64. 
Extragalactic Explosive Phenomena. Astron. Acta. 17, 315-320 (1972). 65. Effects of Very Long Wavelength Primordial Gravitational Radiation. MNRAS 154, 187 (1971). 66. Is the Far-Infrared Radiation from Galactic Nuclei due to Molecular Masers? Mém. Soc. Roy. Sci. Liège 3, 591 (1972) (with P.M. Solomon). 67. Pulsar Theory: I. Dynamics and Electrodynamics. Comm. Astrophys. and Space Phys. 3, 185 (1971) (with P. Goldreich and F. Pacini). 68. Mass Loss from the Galaxy. ESRO book S.P. on The Significance of Space Research for Fundamental Physics', p.29 (1971) (with G.B. Field and D.W. Sciama). 69. On the V/Vm Test applied to Quasi-Stellar Radio Sources. MNRAS 154, 1 (1971) (with M. Schmidt). 70. Pulsar Theory: II. Radiation Mechanisms. Comm. Astrophys. and Space Phys. 4, 23 (1972) (with P. Goldreich and F. Pacini). 71. Continuum Radiative Transfer in Hot Plasma, with Application to Scorpius X-1. Astron. and Astrophys. 17, 226 (1972) (with J.E. Felten). 72. The Structure of Seyfert Nuclei. Comm. Astrophys. and Space Phys. 4, 7 (1972) (with W.L.W. Sargent). 73. Continuum and Line Profiles in Thermal X-ray Sources. Proc. 12th International Conference on Cosmic Rays (August 1971) Vol. 1, 415 (with J.E. Felten). 74. The Counts of Radio Sources. Comm. Astrophys. and Space Phys. 4, 79 (1972) (with M.S. Longair). 75. Remarks on Intergalactic Magnetic Fields. A&A 19, 189 (1972) (with M. Reinhardt). 76. Observational Cosmology. Cargese Lectures on Physics VI', ed. E. Schatzman (Gordon and Breach) p.269 (1973). 77. The Interpretation of Variable Components in the Synchro-Compton' Theory of Extragalactic Radio Sources. Astrophys. J. Lett. 10, 77 (1972) (with R.D. Blandford). 78. Transfer Effects on X-ray Lines in Optically Thick Celestial Sources. Astron. and Astrophys. 21, 139 (1972) (with J.E. Felten and T.F. Adams). 79. Origin of the Microwave Background in a Chaotic Universe. Phys. Rev. Letters 28, 1669 (1972). 80. Rotation in High Energy Astrophysics. Sci. Am. 228 (2), 98 (1973) (with F. Pacini). 81. Evolutionary Theories of the X-ray Background and the Interpretation of Small-scale Anisotropies. Cosmic X-ray and Gamma-ray Astronomy', eds. H. Bradt and R. Giacconi (Reidel, Holland) p.250 (1973). 82. The Cosmological Significance of e²/Gm² and Related Numbers. Comm. Astrophys. and Space Phys. 4, 179 (1972). 83. The Interaction of Primordial Gravitational Waves with Groups of Galaxies. MNRAS 159, 11P (1972). 84. Accretion Disc Models for Compact X-ray Sources. Astron. and AstroPhys. 21, 1 (1972) (with J.E. Pringle). 85. Sources and Astrophysical Effects of Gravitational Waves. Ondes et Radiations Gravitationelles', p.203 (CNRS 1973). 86. Radio Halos Around Old Pulsars - Ghost Supernova Remnants. Astron. and Astrophys. 23, 145 (1973) (with R.D. Blandford, J.P. Ostriker and F. Pacini). 87. New Evidence on Long-term Behaviour of Her X-1. Nat 244, 212 (1973) (with A.C. Fabian and J.E. Pringle). 88. Astrophysical Aspects of Gravitational Waves. Ann. N.Y. Acad. Sci. 224, 118 (1973). 89. Recent Developments in Cosmology. Phys. Bull. 24, 651 (1973). 90. Accretion onto Massive Black Holes. Astron. and Astrophys. 29, 179 (1973) (with J.E. Pringle and A.G. Pacholczyk). 91. Astrophysical Implications of Extragalactic Radio Sources. Proc. 1st European Astronomical Meeting, Athens 3, 190 (1973) (Springer-Verlag, Berlin). 92. X-ray Sources in Close Binary Systems. Highlights of Astronomy', ed. G. Contopoulos (Reidel, Holland) p.176 (1974). 93.
Black Holes, Gravitational Waves and Cosmology, (Gordon and Breach, London 1974) (with R. Ruffini and J.A. Wheeler) (Book). 94. Variations in the Primordial Helium Abundance. MNRAS 166, 663 (1974) (with G.R. Gisler and E.R. Harrison). 95. Concluding Remarks. Confrontation of Cosmological Theories with Observational Data', ed. M.S. Longair (Reidel, Holland) p.359 (1974). 96. Origin of the Magnetic Field and Relativistic Particles in the Crab Nebula. MNRAS 167, 1 (1974) (with J.E. Gunn). 97. Accretion onto Relativistic Objects. Gravitational Radiation and Gravitational Collapse', ed. C. De Witt - Morette (Reidel, Holland) p. 194 (1974). 98. Ionization of Cool HI Regions. Astron. and Astrophys. 30, 87 (1974) (with G. Steigman and B.Z. Kozlovsky). 99. The Physics of Binary X-ray Sources. Proc. 16th Solvay Conference (Brussels), p.97 (1974) (also contributions on pp 425 - 8 and 489). 100. Black Holes. Observatory 24, 168 (1974). 101. A Twin-Exhaust' Model for Double Radio Sources. MNRAS 169, 395 (1974) (with R.D. Blandford). 102. Extragalactic Double Radio Sources - The Current Observational and Theoretical Position. Contem. Phys. 16, 1 (1975) (with R.D. Blandford). 103. Massive Black Holes in Extragalactic Radio Source Components? MNRAS 171, 53 (1975) (with W.C. Saslaw). 104. Interrelations Between Cosmic Rays and Other Branches of Astrophysics. Research Goals for Cosmic Ray Astrophysics', ed. V. Manno (ESRO) p. 39 (1975). 105. Observational Effects of Black Holes. Proc. 7th International Conference on General Relativity and Gravitation, ed. G. Shaviv (Wiley, N.Y.) p.203 (1975). 106. Formation of Galaxies, Radio Sources and Related Problems. Galaxies', ed. G. Setti (Reidel, Holland) p.285 (1975). 107. Quasars and Radio Galaxies. Ann. N.Y. Acad. Sci. 262, 449 (1975). 108. Expected Polarization Properties of Binary X-ray Sources. MNRAS 171, 457 (1975). 109. X-rays from Neutron Stars Heated by Accretion. Proc. 2nd IAU European Assembly, Trieste: Mem. Soc. Astr. Ital. 45, 909 (1975). 110. Quasar Absorption Spectra: Radiative Interactions Between Absorbing Clouds and the Origin of Redshift Doublets'. MNRAS 171, 1P (1975). 111. Potential Implications of X-ray Polarimetry of Binary Sources. Ann. N.Y. Acad. Sci. 262, 1367 (1975). 112. Evidence on Neutron Star Structure from Pulsars and Related Objects. Astrophys. Sp. Sci. 39, 3 (1976). 113. Accretion Processes in Close Binaries. In Close Binary Systems', ed. P.P. Eggleton, S. Mitton and J. Whelan (Reidel) 225 (1976). 114. Progress and Prospects in High Energy Astrophysics. (H.P. Robertson Memorial Lecture) Proc. Nat. Acad. Sci. 72, 4685 (1975). 115. Accretion onto Compact Objects. Physics and Astrophysics of Neutron Stars and Black Holes', eds. R. Giacconi and R. Ruffini (North Holland) (with A.P. Lightman and S. Shapiro). 116. Tidal Capture Formation of Binary Systems and X-ray Sources in Globular Clusters. MNRAS 172, 15P (1975) (with A.C. Fabian and J.E. Pringle). 117. A Theory of Galaxy Formation and Clustering. Astron. and Astrophys. 45, 365 (1975) (with J.R. Gott). 118. Astrophysical Aspects of Black Holes. Proc. Acad. Lincei, 31, 67 (1976). 119. X-ray Emission from Accretion onto White Dwarfs. MNRAS 175, 43 (1976) (with A.C. Fabian and J.E. Pringle). 120. Radio Source Models and the Nature of the Primary Energy Source. In Physics of non-thermal radio sources' ed. G. Setti (Reidel 1976). 121. The Nucleus of Centaurus A. Nat 260, 683 (1976) (with A.C. Fabian, D. Maccagni and W.R. Stoeger). 122. Can cosmic clouds cause climatic catastrophes? 
Nat 261, 298 (1976) (with M.C. Begelman). 123. Current status of double radio source theories. Comm. Astrophys. Sp. Phys. 6, 112 (1976). 124. Physical and Observational Cosmology. Proc. IMA Cosmology symposium (1975). 125. Effects of massive central black holes on dense stellar systems. MNRAS 176, 633 (1976) (with J. Frank). 126. Globular Clusters as a Source of X-ray Emission from the Neighbourhood of M87. Nat 263, 301 (1976) (with A.C. Fabian and J.E. Pringle). 127. Opacity-limited hierarchical fragmentation and the masses of protostars. MNRAS 176, 483 (1976). 128. Cyclotron emission from accreting magnetic white dwarfs. MNRAS 178, 501 (1977) (with A.R. Masters, A.C. Fabian and J.E. Pringle). 129. Cooling, dynamics and fragmentation of massive gas clouds: clues to the masses and radii of galaxies and clusters. MNRAS 179, 541 (1977) (with J.P. Ostriker). 130. Neutron stars, black holes and cosmic X-ray sources. Phys. Blatter 32, 572 (1976) 131. The conventional' view of redshifts. Proc. IAU/CNRS Colloquium on Redshifts', ed. B. Westerlund, p.563 (1977). 132. Sources of gravitational waves at low frequencies. Proc. Pavia Symposium on experimental gravitation', ed. B. Bertotti, p. 423 (1977). 133. Superluminal expansion in extragalactic radio sources. Nat 267, 211 (1977) (with R.D. Blandford and C. McKee). 134. A tepid model for the early universe. Astr. Astrophys. 61, 705 (1977) (with B. Carr). 135. Quasars and young galaxies (George Darwin Lecture) Q.J.R.A.S. 18, 429 (1977). 136. A M Hercules: a highly magnetised accreting white dwarf. MNRAS 179, 9p, (1977) (with A.C. Fabian, J.E. Pringle and J.A.J. Whelan). 137. A better way of searching for black hole explosions? Nat 266, 333 (1977). 138. Quasar theories. (invited lecture at 8th Texas' conference) Ann. N.Y. Acad. Sci. 302, 613 (1977). 139. Cosmology and galaxy formation: early' star formation? In Evolution of Galaxies and Stellar Populations', ed. B.M. Tinsley, p.339 (1977). 140. Survival and disruption of galactic substructure. MNRAS 181, 37p (1977) (with S.M. Fall). 141. Dissipative processes, galaxy formation and early' star formation. Physica Scripta 17, 371 (1978). 142. Accretion and the quasar phenomenon. Physica Scripta 17, 193 (1978). 143. A qualitative study of cosmic fireballs and gamma-ray bursts. MNRAS 183, 359 (1978) (with G. Cavallo). 144. Extended and compact extragalactic radio sources: interpretation and theory. Physica Scripta 17, 265 (1978) (with R.D. Blandford) 145. The epoch of galaxy formation. In Large-scale structure of the universe', ed. Einasto and Longair (Reidel) (1978) (with B.J.T. Jones). 146. Core condensation in heavy halos: a two-stage theory for galaxy formation and clustering. MNRAS 183, 341 (1978) (with S.D.M. White). 147. Gravitational collapse in the real universe. Proc. 8th International G.R.G. conference, ed. J. Wainwright) (1978). 148. Predictions and strategies for deep X-ray surveys. MNRAS 185, 109 (1978) (with A.C. Fabian). 149. The nuclei of nearby galaxies: evidence for massive black holes. In Nearby Galaxies' ed. E. Bekhausen and R. Wielebinsky (Reidel) (1978). 150. The fate of dense stellar systems. MNRAS 185, 847 (1978) (with M.C. Begelman). 151. Growth and fate of inhomogeneities in big bang cosmology. In Observational Cosmology' (Proc. Saas-Fee School VIII), p.261 (1978). 152. The M87 Jet: internal shocks in a plasma beam. MNRAS 184, 61p (1978). 153. Quasars (Halley Lecture). Observatory 98, 210 (1978). 154. Induced Compton scattering in pulsar winds. MNRAS 185, 297 (1978). (with D. 
Wilson). 155. Origin of the pregalactic microwave background. Nat 275, 35 (1978). 156. Comments on the radiation mechanism in Lacertids. In B.L. Lac Objects', ed. A.M. Wolfe (1978) (with R.D. Blandford) p.328. 157. Extragalactic X-ray sources. In Proc. IAU Symposium on X-ray Astronomy' ed. W.A. Baity and L.E. Peterson, p.381 (1979) (with A.C. Fabian). 158. The content of a typical HEAO B field |b| > 10°. In Proc. IAU Symposium on X-ray astronomy, ed. W.A. Baity and L.E. Peterson p.479 (1979) (with A.C. Fabian). 159. Relativistic jets and beams in radio galaxies. Nat 275, 516 (1978). 160. Pulsars, bursters' and quasars. (Proceedings of 4th EPS general conference) p.74 (1979). 161. The Ly/H/P ratio in the quasar PG 0026+129. ApJ 226, L57 (1978) (with J.A. Baldwin, M.S. Longair and M.A.C. Perryman). 162. Neutrinos in astrophysics. In Proc. CERN 25th Anniversary conference, June 1979 p.135. 163. Anthropic principle and the Physical World. Nat 278, 605 (1979) (with B.J. Carr). 164. A twin-jet model for radio trails. Nat 279, 770 (1979) (with R.D. Blandford and M.C. Begelman). 165. SS 433: a double jet in action. MNRAS 187, 13P (1979) (with A.C. Fabian). 166. Einstein, gravitation and cosmology. (Tolansky memorial lecture) Proc. RSA 127, 576 (1979). 167. A model for SS 433: precessing jets in an ultraclose binary system. MNRAS 189, 19P (with P.G. Martin). 168. Some observational consequences of positron production by evaporating black holes. A&A 81, 263 (1980) (with P. Okeke). 169. Inter-relation between cosmic rays and other branches of astrophysics. In Proc. ESA conference on cosmic rays (ed. D.E. Page) p.39. 170. Observations of 3C 390.3 with IUE. MNRAS 187, 65P (1979) (with G. Ferland, M.A.C. Perryman and M.S. Longair). 171. Concluding remarks'. In R.S. Symposium on Origin and early evolution of galaxies', ed. W.H. McCrea and M.J. Rees (Phil. Trans. Roy. Soc.) 172. Gravitational collapse and cosmology. Contemp. Phys. 21, 99 (1980). 173. X-ray emission from galactic nuclei. In X-ray Astronomy', ed. R. Giacconi and G. Setti. 174. Size and shape of the Universe. Princeton Einstein Centennial Symp., ed. H. Woolf, p.291 (Addison-Wesley, Mass.). 175. Inhomogeneity and entropy of the Universe: some puzzles. Physica Scripta 21, 614 (1980). 176. Observational status of black holes. In Proc. Roy. Soc., 368, 27 (1979). 177. The contribution of young galaxies to the X-ray background. ApJ 237, 647 (1980) (with J. Bookbinder, J. Krolik, L. Cowie and J. Ostriker). 178. Diffuse material, background radiation and the early universe. In Physical Cosmology, ed. R. Balian, J. Audouze and D. Schramm, p.615. 179. X-ray background: origin and implications. In Proc. IAU Symposium 92, ed. G. Abell and P.J.E. Peebles, p.207. 180. Spectral appearance of non-uniform gas at high z. MNRAS 188, 791 (1979). (with C. Hogan). 181. Categories of Extragalactic X-ray sources. In X-ray Astronomy', ed. R. Giacconi and G. Setti) p.363. 182. The X-ray background as a probe of the matter distribution on large scales. In X-ray Astronomy, ed. R. Giacconi and G. Setti, p.377. 183. Detailed Ultraviolet observations of the quasar 3C 273 with IUE. MNRAS 192, 561 (1980) (with M.H. Ulrich et al.). 184. Massive black hole binaries in active galactic nuclei. Nat 287, 307 (1980) (with M.C. Begelman and R.D. Blandford). 185. Inhomogeneities from the Planck length to the Hubble radius. In Quantum Gravity II', ed. R. Penrose et al., p.273 (OUP). 186. Nuclei of Galaxies: the origin of plasma beams. In Proc.
IAU/IUPAP Symposium on Origin of Cosmic Rays', ed. G. Setti, G. Spada and A. Wolfendale (Reidel), p. 139 (1981). 187. IUE observations of NGC 4151. In Proc. 2nd European IUE Conference, ed. B. Fitton and M. Grewing, p. 47 (with A. Boksenberg et al.). 188. Galaxy formation, clustering and the initial cosmological fluctuations. In Variability in Stars and Galaxies', ed. A. Noelds and P. Ledoux, p. G.2.1. 189. Our Universe and others. Q.J.R.A.S. 22, 109 (1981). 190. Relativistic jet production and propagation in active galaxies. Ann. N.Y. Acad. Sci. 375, 254 (1982) (with M.C. Begelman and R.D. Blandford). 191. Physical processes for X-ray emission in galactic nuclei. Space Science Reviews 30, 87 (1981). 192. Highly compact structures in galactic nuclei and quasars. In The importance of high resolution in astronomy', ed. M.H. Ulrich (ESO publications). 193. Ion supported tori and the origin of radio jets. Nat 295, 17 (1982) (with M.C. Begelman, R.D. Blandford and S.L. Phinney). 194. Mechanisms for jets. In Extragalactic radio sources', ed. D.S. Heeschen and C.M. Wade (Reidel), p.211 (1982). 195. Hot plasmas in active galactic nuclei. In Proc. Varenna Conference on Plasma Astrophysics', (ESA SP 161), p.297 (1982). 196. Production and propagation of jets. In Proc. Varenna Conference on Plasma Astrophysics', (ESA SP 161), p.267 (1982). 197. Cosmology and particle physics. In Lepton and Photon Interactions at High Energies', ed. W Phiel, p.993 (1981). 198. Radiative acceleration of astrophysical jets; line locking in SS 433. In Extragalactic Radio Sources', ed. D.S. Heeschen and C.M. Wade (Reidel) (1982) (with M. Milgrom and P. Shapiro). 199. Fundamental physics and cosmology - introductory survey. In Astrophysical Cosmology', ed. H.A. Bruck et al.p.3 (1982). 200. Galaxy Formation, Clustering and the Hidden Mass. In Cosmology', ed. A.W. Wolfendale (Reidel) p.259 (1982) (with A. Kashlinsky) 201. The Nature of the Compact Source at the Galactic Centre. In The Galactic Centre', ed. G. Reigler and R.D. Blandford (AIP), p.166 (1982). 202. Interpretation of the anisotropy in the cosmic microwave background. Phil. Trans. R. Soc. 307, 97 (1982) (with C. Hogan and N. Kaiser). 203. Supernovae': a survey of current research' (Reidel 1982) p.580 (co-editor with R.J. Stoneham). 204. Remarks on Population III'. In Astrophysical Cosmology' ed. H. Bruck, G.V. Coyne and M.S. Longair. p.495 (1982). 205. The role of proton cyclotron emission near accreting magnetic neutron stars. J. Astron. Astrophys. 3, 413 (1982) (with K.M.V. Apparao and S.M. Chitre). 206. Cosmic jets. Sci. Am. (May 1982) (with R.D. Blandford and M.C. Begelman). 207. Limits from the timing of pulsars on the cosmic gravitational wave background. MNRAS 203, 945 (1983) (with B. Bertotti and B.J. Carr). 208. What the astrophysicist wants from the very early universe. In The Early Universe', ed. S. Siklos and S.W. Hawking (CUP) p.29 (1983). 209. Galaxies and their nuclei: Bakerian Lecture. Proc. Roy. Soc. A. 400, 183 (1985). 210. Extragalactic sources of gravitational waves. In Gravitational Radiation', ed. N. Deruelle and T. Piran (North-Holland) p.297 (1983). 211. Can pregalactic objects generate galaxies? MNRAS 206, 801 (1984) (with B.J. Carr). 212. Unseen Mass. In Proc. IAU Symposium 104 Early Evolution of the Universe and Its Present Structure' , ed. G. Abell and G. Chincarini (Reidel) p.299 (1983). 213. On Population III star formation. In Proc. IAU Symposium 104 Early Evolution of the Universe and Its Present Structure', ed. G. Abell and G. 
Chincarini (Reidel) p.119. (1983). 214. Formation of Population III stars and pregalactic evolution. MNRAS 205, 955 (1983) (with A. Kashlinsky). 215. Theoretical problems raised by gamma-ray bursts and related phenomena. In 'Accreting Neutron Stars' (MPI publication) ed. W. Brinkmann, p.179 (1983). 216. Astrophysics, cosmology and high energy physics. In Proc. CERN Physics School (1982) ed. A. Caton, p.281. 217. Supercritical jets from a 'cauldron'. In 'Astrophysical Jets' ed. A. Ferrari and A. Pacholczyk (Reidel), p. 215 (1983) (with M.C. Begelman). 218. How stable is our vacuum? Nat 302, 508 (1983) (with P. Hut). 219. Acceleration processes in the Universe. In 'The Challenge of Ultra-high Energies' ed. J.D. Lawson and J. Mulvey, (Oxford) p.317 (1982). 220. Spectral and variability constraints on compact sources. MNRAS 205, 593 (1983) (with P.W. Guilbert and A.C. Fabian). 221. Deuterium and Lithium from Population III remnants. In 'Formation and Evolution of Large Scale Structures in the Universe', ed. J. Audouze and J.T.T. Van (Reidel) p.271 (1984). 222. Pregalactic activity, galaxy formation and hidden mass. In Formation and Evolution of Large Scale Structures in the Universe, ed. J. Audouze and J.T.T. Van (1984) (Reidel) p.237. 223. Phenomena at the Galactic Centre: a massive black hole? In 'The Milky Way' Proc. IAU Symposium 107, ed. H. van Woerden et al., p. 379 (1985). 224. Large numbers and ratios in astrophysics and cosmology. Phil. Trans. R. Soc. A. 310, 311 (1983). 225. Physics of relativistic jets on sub-milliarcsecond scales. In 'Very Long Baseline Interferometry', eds. R. Fanti, K. Kellermann and G. Setti (Reidel) p.207, (1984). 226. Galaxy Formation. Astronomische Mitt. 58, 57 (1983). 227. How large were the first pregalactic objects? MNRAS 206, 315 (1984) (with B.J. Carr). 228. The Cauldron at the core of SS 433, MNRAS 206, 209 (1984) (with M.C. Begelman). 229. Extragalactic gravitational collapse. In 'Relativistic Astrophysics and Cosmology', ed. E. Verdaguer and X. Fustero, (World Scientific Publishers) p.3 (1984). 230. Radio sources and galactic nuclei: models and problems. In 'Extragalactic Energetic Sources' (Indian Academy of Sciences, Bangalore) ed. V.K. Kapahi, pp 53-87 (1985). 231. The Universe from <10^-36 sec to >10^30 yrs. Current Science, 52, (1983) 443 & 509. 232. Galactic nuclei and jets. In 'Numerical Astrophysics', ed. J. Le Blanc, J. Centrella and R.L. Bowers (Jones and Bartlett, California) p.69 (1985). 233. Concluding Comments: a cosmologist's viewpoint. Summing up lecture at ESO/CERN Colloquium on Cosmology and Fundamental Physics, ed. G. Setti and L. van Hove, p.423, (1983). 234. Theory of Extragalactic radio sources. Rev. Mod. Phys. 56, 225 (1984) (with M.C. Begelman and R.D. Blandford). 235. Clusters and their precursors. In 'Clusters and groups of galaxies', ed. F. Mardirossian, G. Giuricin and Mezzetti (Reidel) p.485 (1984). 236. Black hole models for active galactic nuclei. Ann. Rev. Astr. AstroPhys. 24, 471 (1984). 237. 'The constants of Physics' book published by Royal Society 1983, co-editor with W.H. McCrea. 238. Reverse stellar evolution, quasars and low mass X-ray binaries. Nat 309, 331 (1984) (with F. Verbunt and A.C. Fabian.) 239. Gravitational interactions of cosmic strings. Nat 311, 109 (1984) (with C. Hogan). 240. Formation of galaxies and large scale structure with cold dark matter. Nat 311, 517 (1984) (with G. Blumenthal, S. Faber and J. Primack). 241. A Theory for the origin of globular clusters. ApJ 298, 18 (1985) (with S.M. Fall). 242.
OVIII resonant absorption in PKS 2155-304: a hot wind ApJ 295, 104 (1985) (with J. Krolik, J. Kallman and A.C. Fabian). 243. Is the Universe flat? J. Astron. Astrophys. 5, 331 (1984). 244. Neutrino masses and galaxy formation. Proc. 6th EPS general conference (Prague), p.223 (1984) 245. Some comments on the physics of active galactic nuclei. In X-ray and UV emission from quasars and AGNs', ed. W. Brinkmann and J. Trümper (1985). 246. Jets and Galactic Nuclei. In Highlights of Modern Astrophysics: Concepts and Controversies, ed. S. Shapiro and S. Teukolsky (Wiley, N.Y.), p. 163 (1986). 247. Mechanisms for biased galaxy formation. MNRAS 213, 75P (1985). 248. Some theoretical aspects of AGNs. In Structure and Evolution of Active Galactic Nuclei, ed. G. Giuricin et al.(Reidel) p.457 (1986). 249. Deuterium abundance and population III remnants. MNRAS 215, 53P (1985) (with S. Ramadurai). 250. Aspects of galaxy formation. In Cosmogonic Processes', ed. W.D. Arnett et al.(VNU, Amsterdam) p.44 (1986). 251. Pregalactic evolution in cosmologies with cold dark matter'. MNRAS 221, 53 (1986) (with H.M.P. Couchman). 252. Possible constituents of halos. In Dark Matter in the Universe, ed. G. Knapp and J. Kormendy (Reidel) p. 395 (1986). 253. The radiative acceleration of astrophysical jets: line-locking in SS 433 Astrophys. J. Suppl. 60, 393 (1986) (with S. Shapiro and M. Milgrom). 254. Cosmic Jets. Proc. XIX International Cosmic Ray Conference 1985, ed. F. Jones et al.(invited paper) Vol. 9, p.1 255. Lyman Quasar absorption lines in quasar spectra: evidence for gravitationally confined gas in dark minihalos'. MNRAS 218, 25P (1986). 256. Black holes in our galaxy. In High energy phenomena around compact stars', ed. F. Pacini (Reidel, Dordrecht) p.279. (1986). 257. What is the dark matter in galactic halos and clusters? Phil. Trans. R. Soc. 320, 573 (1986). 258. Introductory Lecture. In Quasars': Proc. IAU Symposium 117, ed. G. Swarup (Reidel, Dordrecht) p.1 (1986). 259. Thermal and dynamical effects of pair production on two-temperature accretion flows. ApJ 313, 689 (1987) (with M.C. Begelman and M. Sikora). 260. On double clusters and gravitational lenses. Nat 323, 514 (1986) (with C.S. Crawford and A.C. Fabian). 261. Baryonic dark matter candidates. In Proc. 2nd ESO/CERN conference, ed. G. Setti and L. van Hove), p.227 (1987). 262. Gravitational lensing by dark galactic halos. MNRAS 224, 283 (1987) (with K. Subramanian and S.M. Chitre). 263. Baryon concentration in string wakes ; implications for galaxy formation and large-scale structure. MNRAS 222, 27P (1986). 264. Exotic relics of the early universe: non-baryonic dark matter and strings. Proc. International Conference on Mathematical Physics, ed. J.N. Islam (World Scientific Publishers) (1987). 265. Biasing and suppression of galaxy formation. In Nearly Normal Galaxies from the Planck Time to the Present', ed. S. Faber, Springer-Verlag 255 (1987). 266. Astronomical constraints on a string-dominated universe. MNRAS 227, 453 (1987) (with J.R. Gott). 267. The origin of globular clusters. Proc. IAU Symposium on Globular Clusters, eds. J. Grindlay and A.G. Davis Phillips, (Reidel) (invited paper) p.323 (1988) (with S.M. Fall). 268. The origin and cosmogonic implications of seed magnetic fields. Contribution to Festschrift for V.L. Ginzburg, eds. L.V. Keldysh V.Y. Fainberg. (1987). 269. Constraints on voids at high redshifts from Lyman Alpha absorbers. MNRAS 224, 13P (1987) (with R.F. Carswell). 270. 
The emergence of structure in the Universe: galaxy formation and dark matter. In '300 Years of Gravitation', eds. S.W. Hawking and W. Israel (CUP) (1987). 271. Physical mechanisms for biased galaxy formation. Nat 326, 455 (1987) (with A. Dekel). 272. Active galactic nuclei: ten theoretical problems. In 'Physical processes in comets, stars and active galaxies' ed. W. Hillebrandt et al. Springer-Verlag, p.166 (1987). 273. The origin and cosmogonic implications of seed magnetic fields. Q.J.R.A.S. 28, 197 (1987). 274. The central object: some comments and speculations. In 'Galactic Centre', eds. D. Backer and R. Genzel (AIP), p.71 (1987). 275. Effects of electron-positron pair opacity for spherical accretion onto black holes. ApJ Lett. 315, L113 (1987) (with A.P. Lightman and A.A. Zdziarski). 276. Active galactic nuclei. In Proc. 13th Texas Conference, ed. M. Ulmer, (World Scientific) p.329 (1987). 277. The link between tidal interaction and nuclear activity in galaxies. ApJ 328, 103 (1988) (with D.N.C. Lin and J.E. Pringle). 278. An interacting binary scenario for SN 1987A. Nat 328, 323 (1987) (with A.C. Fabian, E.P.J. van den Heuvel and J. van Paradijs). 279. A relativistic jet from SN 1987A? Nat 328, 207 (1987). 280. Mechanisms for outflow in AGNs. In 'Outflow in Stars and Galactic Nuclei', eds. L. Bianchi and R. Gilmozzi (Kluwer), p.163 (1988). 281. Continuum emission in AGNs: a theoretical perspective. In 'Emission lines from quasars and active galactic nuclei' ed. P. Gondhalekar, p.1 (1988). 282. Magnetic confinement of emission line clouds in galactic nuclei. MNRAS 228, 47P (1987). 283. Absorption lines and galaxy formation. In 'Quasar Absorption Lines', eds. C. Blades, D. Turnshek and C.A. Norman (CUP), p.107 (1988). 284. Biased galaxy formation and dark matter. In Proc. IAU Symposium 130 on Large Scale Structure in the Universe, ed. J. Audouze et al., (Reidel) p.437 (1988). 285. Cold material in non-thermal sources. MNRAS 233, 475 (1988) (with P.W. Guilbert). 286. Cosmogonic implications of the first quasars. In 'The Post-Recombination Universe', ed. A. Lasenby and N. Kaiser (Reidel) p.101 (1988). 287. High redshift quasars in the cold dark matter cosmogony. MNRAS 230, 5P (1988) (with G. Efstathiou). 288. Radiative equilibrium of high density clouds, with application to AGN continuum. ApJ 332, 141 (1988) (with G. Ferland). 289. Tidal disruption of stars by black holes of 10^6 - 10^8 solar masses in nearby galaxies. Nat 333, 523 (1988). 290. Quasars as probes of gas in extended protogalaxies. MNRAS 231, 91P (1988). 291. Axion Miniclusters. Phys. Lett. B. 205, 228 (1988) (with C.J. Hogan). 292. The universe at and the first quasars. Yamada conference XX on 'Big Bang, Active Galactic Nuclei and Supernova' eds. S. Hayakawa and K. Sato (Universal Academy Press Inc., Tokyo) p.121 (1988). 293. On the variability of the X-ray emission from SN 1987A. Nat 335, 50 (1988) (with A.C. Fabian). 294. Clearings in the Lyman forests. ApJ 345, 52 (1989) (with I. Kovner). 295. Column density distribution of the Lyman forest - evidence for the minihalo model. MNRAS 236, 21P (1989) (with S. Ikeuchi and I. Murakami). 296. Galaxy formation and dark matter. Highlights of Astronomy, IAU, ed. D. McNally 8 pp45-64 (1989). 297. Detection of iron features in the spectrum of the soft excess Seyfert 1 galaxy MCG-6-30-15. MNRAS 236, 39P (1989) (with K. Nandra, K.A. Pounds and A.C. Fabian). 298. X-ray fluorescence from the inner disc in Cygnus X-1. MNRAS 238, 729-736 (1989) (with A.C. Fabian, L. Stella and N.E. White). 299.
Early evolution of galaxies. In 'Galactic Evolution' eds. J.E. Beckman and B.E.J. Pagel (CUP) p. 1 (1989). 300. Small dense broad-line regions in active nuclei. ApJ 347, 640 (1989) (with G. Ferland and H. Netzer). 301. Galaxies, galactic nuclei and dark matter. In Stromgren Memorial Volume eds. B. Gustaffson and P.E. Nissen Proc. Royal Danish Academy 42:4 195 (1990). 302. Number density evolution of Lyman-alpha clouds: evolving minihalos. Proc. Astr. Soc. Japan 41, 1095 (1989) (with S. Ikeuchi and I. Murakami). 303. Gravitational radiation rocket effects and galactic structure. Comm. Astrophys. 14, 165 (1989) (with I. Redmount). 304. The radio/optical alignment of high-z radio galaxies: triggering of star formation in radio lobes. MNRAS 239, 1P (1989). 305. Radiative shocks inside protogalaxies and the origin of globular clusters. ApJ (with H. Kang, P.R. Shapiro and S.M. Fall), 363, 488 (1990). 306. The age of high redshift radio galaxies. MNRAS 242, 570 (1990) (with M. Bithell). 307. The 21-cm line at high redshift: a diagnostic for the origin of large-scale structure. MNRAS 247, 510 (1990) (with D. Scott). 308. Are there massive black holes in galactic nuclei? In 'Baryonic Dark Matter' eds. G. Gilmore and D. Lynden-Bell (Kluwer, Amsterdam) p.179. 309. Is there a massive black hole in every galaxy? In 'Reviews in Modern Astronomy' 2 (Springer, Berlin). 310. The extended narrow-line region of NGC 4151 I - Emission line ratios and their implications. A&A 236, 53 (1990) (with M.V. Penston et al.). 311. 'Dead quasars' in nearby galaxies. Science 247, 817 (1990). 312. Reflection-dominated hard X-ray sources and the X-ray background. MNRAS 242, 14p (1990) (with A.C. Fabian, I.M. George and S. Miyoshi). 313. Particle physics in astrophysics and cosmology. Nucl. Phys. B. (proc. Suppl.) 16, 3 (1990). 314. The 'image universe' effect on cosmic strings. Phys. Lett. B 242, 29 (1990) (with C. Hogan and T. Vachaspati). 315. Dark matter decay, reionization, and microwave background anisotropies. Astron. Astrophys. 250, 295 (1991) (with D.W. Sciama and D. Scott). 316. Dark matter and the emergence of structure in the universe. In 'Frontiers in Physics, High Technology and Mathematics' (ICTP Trieste 25 Anniversary Conference) p.218 (World Scientific). 317. Flywheels: rapidly spinning magnetised neutron stars in spherical accretion. MNRAS 251, 555 (1991) (with S. Mineshige and A.C. Fabian). 318. Star-disc interactions near massive black holes. MNRAS 250, 505 (1991) (with D. Syer and C.J. Clarke). 319. Acceleration by synchrotron absorption and superluminal sources. ApJ 362 L1 (1990) (with G. Ghisellini, G. Bodo and E. Trussoni). 320. Some comments on the epoch of galaxy formation. Physica Scripta T36, 97 (1991). 321. Probes of the high-redshift universe. In Primordial Nucleosynthesis and Evolution of Early Universe, ed. K. Sato and J. Audouze (Kluwer) p487. 322. Physical state of the intergalactic medium. Nat 350, 685 (1991) (with X. Barcons and A.C. Fabian). 323. Dynamical effects of the cosmological constant. MNRAS 251, 128 (1991) (with O. Lahav, P. Lilje and J. Primack). 324. High-z quasars and galaxy formation. In Physics of Active Galactic Nuclei eds W. Duschl and S. Wagner Springer, pp 662-761 (1992) 325. The invisible cosmological constant. In Testing the Inflationary Universe, p 375 ed. T. Shanks et al., Kluwer. 326. The evolution of a supernova remnant in a strongly magnetised interstellar medium. MNRAS 252, 82 (1991) (with F. M. Insertis). 327. The origin of the planet orbiting PSR1829-10.
Nat 352, 783 (1991) (with P. Podsiadlowski and J.E. Pringle). 328. Dense thin clouds in the central regions of active galactic nuclei. MNRAS 255, 419 (1992) (with A. Celotti and A.C. Fabian). 329. On dwarf elliptical galaxies and the faint blue counts. MNRAS 255, 346 (1992) (with A. Babul). 330. Clusters of galaxies: an introductory survey. In Clusters and Superclusters of Galaxies' ed. A.C. Fabian (Kluwer) p1. 331. The impact of gravitating lenses on astrophysics. In Gravitational Lenses' ed. S. Refsdal et al.p1 (Springer). 332. Neutron stars and planet-mass companions. MNRAS 254, p 19 (1992) (with I.R. Stevens and P. Podsiadlowski). 333. The standard model and some new directions. In Testing the AGN paradigm, ed. S. Holt, p1, (AIP) (with R.D. Blandford). 334. The central engine of AGNs and its cosmological evolution. In X-ray emission from AGN and the cosmic X-ray background, ed. W. Brinkmann (MPI Publications). 335. Tests for the minihalo model of the Lyman alpha forest. MNRAS 260, 617 (1993) (with J. Miralda-Escudé). 336. Tidal heating and mass loss in neutron star binaries: implications for gamma-ray burst models. ApJ 397 570 (1992) (with P. Mészáros). 337. High entropy fireballs and jets in gamma-ray sources. MNRAS 257, 29P (1992) (with P. Mészáros). 338. Relativistic fireballs: energy conversion and timescales. MNRAS 258, 41P (1992) (with P. Mészáros). 339. Concluding Summary. In Planets around Pulsars, ed. J.A. Phillips et al. p367 (ASP Conference Series 1993). 340. Quasars: progress and prospects. In The renaissance in general relativity, ed. G. Ellis et al. CUP (1993), p175. 341. Relativistic fireballs and their impact on external matter: models for cosmological gamma-ray bursts. ApJ 405, 278 (1993) (with P. Mészáros). 342. Causes and effects of the first quasars. Proc. Nat. Acad. Sci.. 90, 4840 (1993). 343. Constraints on massive black holes as dark matter candidates. MNRAS 259, 27P (1992) (with P. Hut). 344. Anisotropic induced Compton scattering: constraints on models of active galactic nuclei MNRAS 262, 603 (1993). (with P. Coppi and R.D. Blandford). 345. Quasars as probes of galaxy formation. In First Light in the Universe, ed. B. Rocca-Volmerange et al.p 203 (Editions Frontieres) 346. High Lorentz-factor e+ - e- jets in gamma-ray burst sources. In Compton Gamma Ray Observatory ed. M. Friedlander et al. p 1015 (AIP 1993) (with P. Mészáros). 347. The impact of relativistic fireballs on an external medium: a new model for cosmological'' gamma-ray burst emission. In Compton Gamma Ray Observatory ed. M. Friedlander et al. p 987 (AIP 1993) (with P. Mészáros) 348. Comptonization of external radiation in blazars. ( In Compton Gamma Ray Observatory ed. M. Friedlander et al. p 598 (AIP 1993) M. Sikora and M.C. Begelman). 349. Formation of nuclei in newly-formed galaxies and the evolution of the quasar population. MNRAS 263, 168 (1993) (with M. G. Haehnelt) 350. Relativistically expanding pair plasmas as bursting sources of source gamma-rays. Proc. 4th International Toki conference on Plasma Physics and Controlled Fusion, ESA SP-351 (1993) (with P. Mészáros). 351. Concluding Summary. Proc. Texas/PASCOS conference (Proc. N.Y. Academic Sci.) 688, 439 (1993). 352. Gas dynamics of relativistically expanding gamma-ray burst sources: kinematics, energetics, magnetic fields and efficiency. ApJ 415, 181 (1993) (with P. Mészáros and P. Laguna). astro-ph/9301007 353. Comptonization of diffuse ambient radiation by a relativistic jet: the source of gamma rays from blazars? 
ApJ 421, 153-162 (1994) (with M. Sikora and M.C. Begelman). 354. Understanding the high-redshift universe: progress, hype and prospects. QJRAS, 34, 279 (1993). 355. The distribution of minihalos in cold dark matter cosmogony. MNRAS 264, 705 (1993) (with H.J. Mo and J. Miralda-Escudé) 356. Reionization and thermal evolution of a photoionized intergalactic medium. MNRAS 266, 343 (1994) (with J. Miralda-Escudé) 357. OSSE observations of the bright Seyfert I galaxy IC 4329A ApJ 416, L57 (1993) (with A.C. Fabian et al.) 358. Conference Summary. The Background Radiation, ed. D. Calzetti et al. p259 (CUP) (1995). 359. Binary black hole in a dense star cluster. Astron. Astrophys. 283, 301 (1994) (with A.G. Polnarev). 360. Conference Summary. In Evolution of the Universe and its Observational Quest, ed K. Sato, Univ. Acad. Press, Tokyo, p 337 (1994). 361. Gamma-ray bursts from blast waves around galactic neutron stars, MNRAS 265, L13 (1993) (with M.C. Begelman and P. Mészáros) astro-ph/9309012 362. Origin of the seed magnetic field for a galactic dynamo. In Cosmical Magnetism, ed. D. Lynden-Bell (Kluwer, 1994) p155. 363. Gamma-ray bursts: multiwaveband spectral predictions for blast wave models. ApJ 418, L59 (1993) (with P. Mészáros). 364. Ω from velocities in voids. ApJ 422, L1 (1994) (with A. Dekel). astro-ph/9308029 365. Observable effects of tidally-disrupted stars. In Nuclei of Normal Galaxies, ed. R. Genzel and A. Harris, p453 (Kluwer) 1994. 366. Massive black holes and light element nucleosynthesis in a baryonic universe. ApJ 438, 40 (1995) (with N.Y. Gnedin and J.P. Ostriker). 367. Models for variability in AGNs. In Multi-Wavelength Continuum Emission of AGN eds. J. Courvoisier and A. Blecha (Kluwer, 1994) p239. 368. Shock models and O, X, γ signatures of gamma-ray burst sources. In Gamma Ray Bursts ed. G. Fishman et al. AIP No. 307, p 505 (1994) (with P. Mészáros). 369. Why 'galactic' gamma-ray bursts might depend on environment: blast waves around neutron stars. In Gamma Ray Bursts, ed. G. Fishman et al. AIP No. 307, p 605 (1994) (with M. C. Begelman and P. Mészáros). 370. RULER: an instrument to measure gamma-ray burster distances. In Gamma Ray Bursts, ed G. Fishman et al. AIP No. 309, p 665 (1994) (with A. Owens et al.) 371. Radiative excitation of molecules near powerful compact radio sources. ApJ 432, 606 (1994) (with P.R. Maloney and M.C. Begelman). 372. Spectral properties of blast wave models of gamma-ray burst sources. ApJ 432, 181 (1994) (with P. Mészáros and H. Papathanassiou). astro-ph/9311071 373. The emergence of structure in the universe. Nuovo Cimento 107A, 1045 (1994). 374. Energetic and radiative constraints on highly relativistic jets. ApJ 429, L57 (1994) (with M.C. Begelman and M. Sikora). 375. The fate of Thorne-Zytkow objects. In 'The Evolution of X-ray Binaries' ed S. Holt and C. Day (AIP) p403 (1994) (with P. Podsiadlowski and R.C. Cannon). 376. Supermassive black holes with stellar companions. In The Evolution of X-ray Binaries ed. S. Holt and C. Day (AIP) p71 (1994) (with P. Podsiadlowski). 377. Joint ROSAT-Compton GRO observations of X-ray Bright Seyfert Galaxy IC4329A, ApJ 438, 672 (1995) (with G.M. Madejski et al.). 378. Models of the central X-ray/gamma-ray source in IC4329A, MNRAS 269, L55 (1994) (with A.A. Zdziarski et al.). 379. Quasars, bursts and relativistic objects. QJRAS 35, 391 (1994). 380. Unsteady outflow models of cosmological gamma-ray bursts. ApJ 430, L93 (1994) (with P. Mészáros). astro-ph/9404038 381.
Delayed GeV emission from cosmological gamma-ray bursts, MNRAS 269, L41 (1994) (with P. Mészáros). astro-ph/9404056 382. Perspectives in astrophysical cosmology. (Text of Lezione Lincee) (CUP 1995). 383. Aspects of early galactic evolution, in 'Formation of Galaxies' eds C. Munoz-Tunon and F. Sanchez p503 (CUP 1995) 384. Dark matter and galactic nuclei: progress and prospects, in Perspectives on high energy physics and cosmology, ed. A Gonzalez-Arroyo and C Lopez (World Scientific) p 117 (1994) 385. Gamma-ray bursts and the structure of the Galactic Halo, MNRAS 273, 755 (1995) (with P. Podsiadlowski and M. Ruderman) 386. AGNs: demography and remnants, in 'Highlights of Astronomy' ed. I. Appenzeller, p.559, Kluwer (1996). 387. Background radiation: probes and future tests. In 'Examining the big bang and diffuse background radiations' eds. M. Kafatos and Y. Kondo (Kluwer 1996) p389-398 388. Evolution and final fate of massive Thorne-Zytkow objects, MNRAS 274, 485 (1995) (with P. Podsiadlowski and R. Cannon). 389. Introductory survey. In 'Dark Matter' ed. S. Holt and C. Bennett, p3 (AIP conference proceedings, 1995). 390. Nucleosynthesis constraints on defect-mediated electroweak baryogenesis, Phys. Lett. B 349, 329 (1995) (with R. Brandenberger and A.C. Davis). astro-ph/9501040 391. Gamma ray bursts and the structure of the Galactic Halo. Ann. NY Acad. Sci. 759, 283 (1995) (with P Podsiadlowski and M Ruderman). 392. Sizes and dynamics of the Lyman forest clouds, in QSO Absorption Lines, ed. G. Meylan, p.419 (Springer, 1995). 393. Galaxy formation and quasars: progress and prospects. In The Universe at Large, ed. G. Munch et al. pp 311-342 (CUP) (1997). 394. Quasars. In Unsolved Problems in Astrophysics ed. J.N. Bahcall and J.P. Ostriker (Princeton UP) pp. 181-195 (1997). 395. What quasars tell us about galactic evolution. In New Light on Galactic Evolution, eds. R. Bender and R. L. Davies (Kluwer) p303 (1996). 396. The accretion luminosity of a massive black hole in an elliptical galaxy, MNRAS, 277, L55 (1995) (with A.C. Fabian). astro-ph/9509096 397. H2 cooling of primordial gas triggered by UV irradiation. ApJ 467, 522 (1996) (with Z. Haiman and A. Loeb) astro-ph/9511126 398. Concluding remarks: gamma ray bursts. PASP 107, 1176 (1995) 399. Galactic nuclei in 'Gravitational Dynamics', ed O. Lahav et al. (CUP, 1996) p 103. 400. How small were the first cosmological objects? ApJ 474, 1 (1997) (with M. Tegmark et al.) astro-ph/9603007 401. 21cm tomography from warm IGM at high redshifts. ApJ 475, 429 (1997) (with P. Madau and A. Meiksin) astro-ph/9608010 402. The matter content of the jet in M87: evidence for an electron-positron jet. MNRAS 282, 873 (1996) (with C.S. Reynolds, A.C. Fabian and A. Celotti). astro-ph/9603140 403. Introductory survey. In 'Cosmological Constant and the Evolution of the Universe' ed. K. Sato et al. (Universal Academic Press, Inc. 1996) p 1. 404. Gamma-ray bursts: a challenge to relativistic astrophysics. In 'Relativistic Astrophysics' eds. B. Jones and D. Markowic (CUP) pp 251-259 (1997). 405. Physical constraints on sizes of dense clouds in central magnetospheres of active galactic nuclei. MNRAS 283, 1361 (1996) (with Z. Kuncic and E. Blackman). astro-ph/9608104 406. Capture of stellar mass compact objects by massive black holes in galactic cusps. MNRAS 284, 318 (1997) (with S. Sigurdsson). astro-ph/9608093 407. Dense thin clouds and reprocessed radiation in the central regions of active galactic nuclei. MNRAS 284, 717 (1997) (with Z. Kuncic and A. Celotti).
astro-ph/9608163 408. Destruction of molecular hydrogen during cosmological reionization. ApJ 476, 458 (1997) (with Z. Haiman and A. Loeb). astro-ph/9608130 409. Optical and long wavelength afterglow from gamma-ray bursts. ApJ 476, 232 (1997) (with P. Mészáros). astro-ph/9606043 410. The universe at z>5: where and how did the 'dark age' end? In 'HST and the High Redshift Universe', ed. N. Tanvir et al. pp 115-120 (World Scientific) (1997). astro-ph/9608196 411. Introduction: the state of modern cosmology. In 'Critical Dialogues in Cosmology' ed N. Turok pp 1-8 (World Scientific) (1997). 412. Poynting jets from black holes and cosmological GRBs, ApJ 482, L29 (1997) (with P. Mészáros). astro-ph/9609065 413. Black holes in galactic nuclei. Reviews in Modern Astronomy, 10, 179-190 (1997). 414. Gravitational waves from galactic centres? Classical and Quantum Gravity 14, 1411-1415 (1997). 415. High redshift supernovae and the metal-poor halo stars. ApJ(Lett) 478, L57 (1997) (with J. Miralda Escudé). 416. Pregalactic history, population III and baryonic dark matter. In 'Identifying Dark Matter', ed. N. Spooner (World Scientific) p. 28 (1997) 417. Astrophysical evidence for black holes. In 'Black Holes and Relativistic Stars', ed. R. Wald pp 83-105 (Chicago U.P.) (1997). 418. Thermal material in relativistic jets, MNRAS 293, 288 (1998) (with A. Celotti, Z. Kuncic and J. Wardle). astro-ph/9707132 419. Gamma-ray bursts: challenges to relativistic astrophysics. In Proc. '18th Texas' Symposium, ed. A. Olinto et al. (World Scientific) pp34-47 (1998). astro-ph/9701162 420. Limits from rapid TeV variability of Mk 421, MNRAS 293, 239 (1998) (with A. Celotti and A.C. Fabian). astro-ph/9707131 421. Extreme lensing by machos: probing the dark matter in nearby galaxies and clusters. ApJ(Lett) (in press) (with E. Waxman and A. Aragon). 422. Shocked by GRB 970228: the afterglow of a relativistic blast wave. MNRAS 288, L51 (1997) (with R.A.M.J. Wijers and P. Mészáros). astro-ph/9704153 423. The universe at z>5: when and how did the 'dark age' end? Proc. Nat. Acad. Sci. 95, 47-52 (1998). astro-ph/9608196 424. Probing the 'cosmic dark ages' with the VLT. Festschrift for R. Giacconi, ed. A. Stella (World Scientific) (in press). 425. Searching for the earliest galaxies using the Gunn-Peterson trough. ApJ 497, 21 (1998) (with J. Miralda-Escude). astro-ph/9707193 426. Probing the end of the 'dark age'. In 'Structure and Evolution of the IGM' (ed. P. Petitjean and S Charlot) pp 19-26 (Edition Frontieres) (1998). 427. Why is the CMB fluctuation level ~10^-5? ApJ 499, 526-32 (1998) (with M. Tegmark). astro-ph/9709058 428. Quasars and galaxy formation. Astr. Astrophys. 331, L1 (1998) (with J. Silk). astro-ph/9801013 429. Viewing angle and environmental effects in GRBs: sources of afterglow diversity. ApJ 499, 301-308 (1998) (with P. Mészáros and R. Wijers). astro-ph/9709273 430. Stars and stellar systems at z>5: implications for structure formation and nucleosynthesis. Space Science Reviews 84, 43 (1998). 431. Refreshed shocks and afterglow longevity in GRB. ApJ 496, L1 (1998) (with P. Mészáros). astro-ph/9712252 432. High redshift galaxies, their active nuclei, and central black holes. MNRAS 300, 817 (1998) (with M. Haehnelt and P. Natarajan). astro-ph/9712259 433. Multiwavelength afterglows in Gamma-Ray Bursts: Refreshed Shock and Jet Effects. ApJ 503, 314 (1998) (with A. Panaitescu and P. Mészáros) astro-ph/9801258 434. What can we learn from bursts and afterglows? Nuclear Phys. B. Supp. 69, 681-685 (1998). 435.
Magnetic fields and multiphase gas in AGN. In Accretion Discs'' ed M Abramowicz et al.(CUP, 1998) pp 196-201 (with A. Celotti) 436. Spectral features from ultrarelativistic ions in gamma-ray bursts. ApJ 502 L105 (1998) (with P. Mészáros) astro-ph/9804119 437. The large-scale smoothness of our Universe Nat 394, 225 (1999) (with K. Wu and O. Lahav) astro-ph/9804062 438. The edge of a gamma-ray burst afterglow. MNRAS 299, L10 (1998) (with P Mészáros) astro-ph/9806183 439. Probing the Dark Age' with NGST. In The NGST: science drivers and technological challenges' (ESA publications). astro-ph/9809029 440. Energetics and beaming of gamma ray burst triggers. New Astronomy, 4, 303-312 (1999) (with P. Mészáros and R Wijers) astro-ph/9808106 441. Reprocessing of radiation by multiphase gas in low-luminosity accretion flows. MNRAS 305, L41-L44 (1999) (with A Celotti) astro-ph/9807252 442. Radiative transfer in a clumpy universe III: the nature of cosmological ionization sources. ApJ 514, 648 (1999) (with P. Madau and F. Hardt) astro-ph/9809058 443. Strong observational constraints on advection-dominated accretion in the cores of elliptical galaxies. MNRAS 305, 492 (1999) (with T. Di Matteo et al.) astro-ph/9807245 444. Reionization of the inhomogeneous universe. ApJ 530, 1-16 (2000) (with J. Miralda-Escudé and M. Haehnelt) astro-ph/9812306 445. Status of models for gamma-ray bursts. Physics Scripta T85, 267-273 (2000). 446. The end of the dark age. In After the Dark Ages'' ed S. Holt and E. Smith (AIP, 1999). 447. Some Comments on triggers, energetics and beaming. Astron. Astrophys. Supp. 138, 491 (1999). 448. GRB 990123: reverse and internal shock flashes and late afterglow behaviour. MNRAS 306, L39 (1999) (with P Mészáros) astro-ph/9902367 449. The distribution of supermassive black holes in the nuclei of nearby galaxies. MNRAS 308, 77-81 (1999) (with A. Cattaneo and M. Haehnelt) astro-ph/9902223 450. Radio signatures of HI at high redshifts: mapping the end of the "dark age". ApJ 528, 597-610 (2000) (with P. Tozzi, P. Madau and A. Meiksin) astro-ph/9903139 451. The radiative feedback of the first cosmological objects. ApJ 534, 11-27 (2000) (with T. Abel and Z. Haiman) astro-ph/9903336 452. A review of gamma-ray bursts. Nucl. Phys. A663, 42-55 (2000). 453. Gamma-ray bursts. Phil. Trans. Roy. Soc. 358, 853-867 (2000) 454. The dark ages. Astr. Astrophys. Supp. (in press) 455. Introductory Lecture in Status of Inflationary Cosmology (ed M. Turner) (Chicago U P. in press) 456. Steep Slopes and preferred breaks in GRB spectra: the role of photospheres and Comptonisation. ApJ 530, 292-298 (2000) (with P. Mészáros ) astro-ph/9908126 457. Gamma Ray Burst Mechanisms. In "Supernovae and gamma ray bursts" ed M. Livio (CUP in press) 458. Early X-ray/UV line signatures of GRB progenitors and hypernovae. ApJ 534, 581-586 (2000) (with C. Weth, P. Mészáros and T. Kallman) astro-ph/9908243 459. Compton dragged gamma-ray bursts associated with supernovae. ApJ 529, L17 (2000) (with D. Lazzati, G. Ghisellini and A. Celotti) astro-ph/9910191 460. Compton echoes from gamma-ray bursts. ApJ 541, 712-719 (2000) (with P. Madau & R.D. Blandford) astro-ph/9912276 461. Introductory lecture. Proc. NATO ASI on Cosmology, ed R. Crittenden et al.(Kluwer) astro-ph/9912373 462. The first light'' Phil. Trans. Roy. Soc. 358, 1988-1998 (2000). 463. The first light in the universe: what ended the dark ages? Physics Reports. Physics Reports 333, 203-214 (2000). astro-ph/9912345 464. 
Supermassive black holes: their formation and their prospects as probes of relativistic gravity. In `Stellar and Supermassive Black Holes'' ed L. Kaper et al.(Springer-Verlag) pp351-63 (2001) astro-ph/9912346 465. Compton dragged gamma-ray bursts: the spectrum. MNRAS 316, L45-L49 (2000) (with G. Ghisellini, D. Lazzati & A. Celotti) astro-ph/0002049 466. Multi-GeV Neutrinos from internal dissipation in GRB Fireballs. ApJ 541 L5-L8 (with P. Mészáros ). astro-ph/0007102 467. The Earliest Luminous Sources and the Damping Wing of the Gunn-Peterson Trough. ApJ 542, L69-L73 (2000) (With P. Madau). astro-ph/0006271 468. HeII recombination lines from the first luminous objects. ApJ 553, 730 (2001) (With S. Peng Oh & Z. Haiman.) astro-ph/0007351 469. Fe K emissions from a decaying magnetar model of GRB. ApJ 545, L73 (2000) (With P. Mészáros ) astro-ph/0010258 470. Early metal enrichment of the intergalactic medium by pregalactic outflows. ApJ 555, 92 (2001) (With A. Ferrara & P. Madau.) astro-ph/0010158 471. Magnetic fields and multiphase gas in AGN. In Theory of Black Hole Accretion Discs ed. M.A. Abramowicz et al.), pp. 196-202. CUP (1999) (with A. Celotti). 472. Magnetic Fields in the Early Universe. Highlights of Astronomy ed. H. Rickman, Vol. 12, pp. 727-728 (2002). 473. e+- pair cascades and precursors in gamma-ray bursts. ApJ 554, 660 (2001) (with P. Mészáros and E Ramirez-Ruiz). astro-ph/0011284 474. Concluding Perspective in New Cosmological Data and the Values of the Fundamental Parameters, ed. A. Lasenby & A. Wilkinson (ASP publ.) (in press). astro-ph/0101268 475. Why AGN studies need higher resolution in High Angular Resolution in Astronomy, ed. R Schilizzi et al. (ASP publ.) (in press) astro-ph/0101266 476. Piecing Together the Biggest Puzzle of All. Science 290, 1909 (2000) astro-ph/0103391 477. Quiescent times in gamma-ray bursts: II. Dormant periods in the central engine? MNRAS (2001) 324, 1147 (with E. Ramirez-Ruiz & A. Merloni). astro-ph/0010219 478. Extended Lyman emission around young quasars: a constraint on galaxy formation. ApJ 556, 87 (2001) (with Z. Haiman) astro-ph/0101174 479. Massive black holes as population III remnants. ApJ 551, 27 (2001) (with P. Madau) astro-ph/0101223 480. Collapsar jets, bubbles and Fe lines. ApJ 556, 37 (2001) (with P. Mészáros) astro-ph/0104402 481. Radio Foregrounds for the 21 cm Tomography of the Neutral IGM at high redshifts. ApJ 564, 576-580 (2002) (with T. Di Matteo, R. Perna & T. Abel). astro-ph/0109241 482. Afterglow light curves, viewing angle and the jet structure of gamma-ray bursts. MNRAS 332, 945-950 (2002) (with E Rossi and D Lazzati) astro-ph/0112083 483. Formation and Growth of Supermassive Black Holes. In Lighthouses of the Universe, ed. R Sunyaev Lighthouses of the Universe: The Most Luminous Celestial Objects and Their Use for Cosmology Proceedings of the MPA/ESO p. 345 484. Do Globular Clusters time the Universe? New Astronomy (submitted) (with O Lahav & O Gnedin) astro-ph/0108034 485. Iron K Lines from Gamma Ray Bursts. ApJ 593, 946 (2003) (with T R Kallman & P Mészáros) astro-ph/0110654 486. Soft X-ray emission lines in the early afterglow of gamma-ray bursts. ApJ 572, L57-L60 (2002) (with D Lazzati & E Ramirez-Ruiz) astro-ph/0204319 487. Gamma-Ray Burst Afterglow emission with a decaying magnetic field. MNRAS 339, 881-886 (2003) (with E Rossi) astro-ph/0204406 488. Emission lines in GRBs constrain the total energy reservoir. A&A 389, L33-L36 (2002) (with G Ghisellini, D Lazzati & E Rossi) astro-ph/0205227 489. 
From a simple big bang to our complex cosmos. In Future of Theoretical Physics and Cosmology, ed. G W Gibbons et al. pp 15-36 (CUP) 490. Black holes in the real universe and their prospects as probes of relativistic gravity. In Future of Theoretical Physics and Cosmology, ed. G W Gibbons et al. pp 217-235 (CUP) astro-ph/0401365 491. X-ray rich GRBs, photospheres and variability. ApJ 578, 812-817 (2002) (with P Meszaros, E Ramirez-Ruiz & B Zhang) astro-ph/0205144 492. Feeding black holes at galactic centres by capture from isothermal cusps. New Astronomy 7, 385-394 (2002) (with H-S Zhao and M G Haehnelt) astro-ph/0112096 493. Events in the life of a cocoon surrounding a light, collapsar jet. MNRAS 337, 1349-1356 (2002) (with E Ramirez-Ruiz & A Celotti) astro-ph/0205108 494. Does cosmology need a new paradigm? Proc Nat Acad Sci (in press) (2003) 495. Dark Matter: Introduction. Phil Trans R Soc A361, 2427 (2003) astro-ph/0402045 496. Numerical coincidences and 'tuning' in cosmology. In "Fred Hoyle's Universe", ed C Wickramasinghe et al. (Kluwer) pp 95-108 (2003). astro-ph/0401424 497. Gamma-ray bursts as X-ray depth gauges of the Universe. ApJ 591, L91-L94 (2003) (with P Mészáros) astro-ph/0305115 498. Photoionization feedback in low-mass galaxies at high redshift. ApJ 601, 666 (2004) (with M Dijkstra, Z Haiman & D Weinberg) astro-ph/0308042 499. Compton drag as a mechanism for very high linear polarization in gamma-ray bursts. MNRAS 347 L1 (2004) (with D Lazzati, E Rossi & G Ghisellini) astro-ph/0309038 500. Heating and deceleration of GRB fireballs by neutron decay. In "Gamma Ray Bursts 2003: 30 years of the Discovery" AIP 727, 198 (2004) (with E M Rossi & A M Beloborodov) 501. Early reionization by miniquasars. ApJ 604, 484 (2004) (with P Madau, M Volonteri, F Haardt & S P Oh) astro-ph/0310223 502. Constraining alternate models of black holes: type I X-ray bursts on accreting Fermion-Fermion and Boson-Fermion stars. ApJ 606, 1112 (2004) (with Y-F Yuan and R Narayan) astro-ph/0401549 503. Have we detected one of the sources responsible for re-ionizing the universe? MNRAS 352, L21 (2004) (with M Ricotti, M Haehnelt and M Pettini) astro-ph/0403327 504. Magnetic fields in the early universe. In Cosmic Magnetic Fields, ed R Wielebinski and R Beck (Springer) pp1-9 (2005) 505. The distribution and cosmic evolution of massive black hole spins. ApJ 620 (2005) 69-77 (with M Volonteri, P Madau and E Quataert) astro-ph/0410342 506. Cyclotron maser emission from blazar jets. ApJ 625, 51-59 (2005) (with M Begelman and R Ergun) astro-ph/0502151 507. Dissipative Photosphere Models of Gamma-ray Bursts and X-ray Flashes. ApJ 625 847-852 (2005) (with P Mészáros) astro-ph/0412702 508. Peak energy clustering and efficiency in compact objects. ApJ 635, 476-480 (2005) (with A Pe'er and P Mészáros) astro-ph/0504346 509. Evolution, explosion and nucleosynthesis of Pop III Stars. ApJ 645, 1352 (2006) (with T Ohkubo et al.) astro-ph/0507593 510. Rapid growth of high redshift black holes. ApJ 633, 624 (2005) (with M Volonteri) astro-ph/0506040 511. The observable effects of a photosphere component on GRBs and XRF prompt emission spectrum. ApJ (in press) (with A Pe'er and P Mészáros) astro-ph/0510114 512. Neutron-loaded outflows in gamma-ray bursts. MNRAS 369, 1797 (2006) (with E Rossi and A M Beloborodov) astro-ph/0512495 513. Possible evidence for the ejection of a supermassive black hole from an ongoing merger of galaxies. MNRAS 366, L22-L25 (2006) (with M Haehnelt and M B Davies) astro-ph/0511245 514.
The origin of the "seed" field for galactic dynamos. Astron. Nach. 5, 395 (2006) 515. Dimensionless constants, cosmology, and other dark matters. Phys. Rev. D. 73, 023505 (2006) (with M Tegmark, A Aguirre and F Wilczek) astro-ph/0511774 516. Formation of Supermassive Black Holes by Direct Collapse in Pregalactic Halos. MNRAS 370, 289-298 (with M C Begelman and M Volonteri) (2006) astro-ph/0602363 517. Black holes in active galactic nuclei. In "General Relativity and Gravitation" (Proc GR 17) ed P Florides et al. (World Scientific) p 162 (2006) 518. Quasars at z=6: the survival of the fittest. ApJ 650, 669-678 (with M Volonteri) (2006) astro-ph/0607093 519. Radiation from an expanding cocoon as an explanation of the steep decay observed in GRB early afterglow light curves. ApJ 652, 482-489 (with A Pe'er and P Mészáros) (2006) astro-ph/0603343 520. Thermalisation in relativistic outflows and the correlation between spectral hardness and apparent luminosity in gamma ray bursts. ApJ 666, 1012-1023 (2007) (with C Thompson and P Mészáros) astro-ph/0608282 521. A new method of determining the initial size and Lorentz factor of gamma-ray burst fireballs using a thermal emission component. ApJ Letters 664, L1 (2007) (with A Pe'er, F Ryde, R A M J Wijers and P Mészáros) astro-ph/0703734 522. Massive Black Holes: formation and evolution. In Proc. IAU Symp. 238 "Black Holes: from stars to galaxies - across the range of masses", 51-59 (with M Volonteri) ed V Karas and G Matt astro-ph/0701512 523. Gamma-ray bursts prompt emission spectrum: an analysis of a photosphere model. In Gamma-ray Bursts Phil. Trans. Roy. Soc. 365 1854, 1171-1177 (2007) (With A Pe'er and P Mészáros) 524. Implications of very rapid TeV variability in blazars. MNRAS 384, L19-L23 (2008) (With M. C. Begelman and A. C. Fabian) astro-ph/0709.0540 525. Growth and implications of black holes at z > 6. Nuovo Cimento (2008) (with M Volonteri). 526. Population III gamma ray bursts. ApJ 715 967, (2010) (with P Mészáros). astro-ph/1004.2056 527. GeV emission from collisional magnetised gamma ray bursts. ApJ 733 L40 (2011) (with P Mészáros).astro-ph/1104.5025 528. X-ray emission from the ultra-massive black hole candidate NGC 1277: implications and speculations on its origin. MNRAS (in press). (with A C Fabian, J S Saunders, M Haehnelt and J.M. Miller) astro-ph/1301.1800 529. Gamma ray bursts. In "100 years of relativity" (with P Mészáros) (in press) astro-ph/1401.3012 [NOTE: Conference abstracts, popular articles, etc. are omitted from the above list.]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8697347640991211, "perplexity": 15118.815543204626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663359.6/warc/CC-MAIN-20140930004103-00188-ip-10-234-18-248.ec2.internal.warc.gz"}
http://mathhelpforum.com/number-theory/198676-multiplicative-inverse-print.html
# Multiplicative Inverse • May 11th 2012, 09:34 AM alyosha2 Multiplicative Inverse This is in regard to proving that the statements (a) $a$ and $n$ are coprime and (c) row $a$ of the multiplication table for $Z_{n}$ includes all of $Z_{n}$ are equivalent. If row $a$ includes all of $Z_{n}$, then in particular some $b$ satisfies $ab = kn + 1$. "This equation implies that $a$ and $n$ are coprime because any common factor of $a$ and $n$ must also be a factor of 1." I don't see why this is so. • May 11th 2012, 09:44 AM Sylvia104 Re: Multiplicative inverse Let $d$ be a common factor of $a$ and $n,$ so $a=a_0d,$ $n=n_0d$ for some integers $a_0,n_0.$ Then $\begin{array}{rcl} ab &=& kn+1 \\ a_0db &=& kn_0d+1 \\ \left(a_0b-kn_0\right)d &=& 1 \end{array}$ Thus $d\mid1.$ • May 13th 2012, 01:07 AM alyosha2 Re: Multiplicative Inverse So because the product $\left(a_0b-kn_0\right)d$ equals 1, the only factor we can pull out is 1. Good stuff. Thanks for the help. • May 13th 2012, 11:02 PM kalwin Re: Multiplicative Inverse Thank you Sylvia for solving this problem. Even I had the same problem and was thinking of posting it, but I saw alyosha2 had already posted it, and from here I got the solution. This is the best forum for getting math problems solved.
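A quick numerical sketch of the same equivalence (this snippet is not from the original thread, and the values 7 and 12 are purely illustrative): the extended Euclidean algorithm produces the inverse $b$, and row $a$ of the multiplication table then covers all of $Z_{n}$:

    from math import gcd

    def inverse_mod(a: int, n: int) -> int:
        """Return b with a*b ≡ 1 (mod n), via the extended Euclidean algorithm."""
        old_r, r = a % n, n        # remainders
        old_s, s = 1, 0            # Bezout coefficients tracking a
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_s, s = s, old_s - q * s
        if old_r != 1:             # gcd(a, n) != 1: no inverse exists
            raise ValueError("a and n are not coprime")
        return old_s % n

    a, n = 7, 12
    assert gcd(a, n) == 1                    # statement (a)
    b = inverse_mod(a, n)                    # b = 7, since 7*7 = 49 = 4*12 + 1
    assert a * b % n == 1
    row_a = {a * x % n for x in range(n)}    # row a of the multiplication table
    assert row_a == set(range(n))            # statement (c): row a is all of Z_n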
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9219178557395935, "perplexity": 402.2335577220689}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824109.37/warc/CC-MAIN-20160723071024-00196-ip-10-185-27-174.ec2.internal.warc.gz"}
https://de.maplesoft.com/support/help/maple/view.aspx?path=CodeGeneration%2FFortran
CodeGeneration[Fortran] - translate Maple code to Fortran code
Calling Sequence
Fortran(x, cgopts)
Parameters
x - expression, list, rtable, procedure, or module
cgopts - (optional) one or more CodeGeneration options
Description
• The Fortran command translates Maple code to Fortran 77 code.
- If the parameter x is an algebraic expression, then a Fortran statement assigning the expression to a variable is generated.
- If x is a list, rtable or Maple Array of algebraic expressions, then a sequence of Fortran statements assigning the elements to a Fortran array is produced. Only the initialized elements of the rtable or Maple Array are translated.
- If x is a list of equations nm = expr where nm is a name and expr is an algebraic expression, this is understood to mean a sequence of assignment statements. In this case, the equivalent sequence of Fortran assignment statements is generated.
- If x is a procedure, then either a Fortran function or a subroutine is generated.
- If x is a module, then a Fortran program is generated, as described on the FortranDetails help page.
• The parameter cgopts may include one or more CodeGeneration options, as described in CodeGenerationOptions. The limitvariablelength=false option, available for this command only, allows you to use names longer than 6 characters. By default, such names are automatically replaced because they do not comply with the Fortran 77 standard.
Examples
See CodeGenerationOptions for a description of the options used in the following examples.
> with(CodeGeneration):
Translate a simple expression and assign to the name "w" in the target code.
> Fortran(x+y*z-2*x*z, resultname="w")
      w = -2 * x * z + y * z + x
Translate a list and assign to an array with name "w" in the target code.
> Fortran([[x, 2*y], [5, z]], resultname="w")
      w(1,1) = x
      w(1,2) = 2 * y
      w(2,1) = 5
      w(2,2) = z
Translate a computation sequence. Optimize the input first.
> cs := [s = 1.0+x, t = ln(s)*exp(-x), r = exp(-x)+x*t]:
> Fortran(cs, optimize)
      s = 0.10D1 + x
      t1 = log(s)
      t2 = exp(-x)
      t = t2 * t1
      r = x * t + t2
Declare that x is a float and y is an integer. Return the result in a string.
> s := Fortran(x+y+1, declare=[x::float, y::'integer'], output=string)
      s := "cg = x + dble(y) + 0.1D1"   (1)
Translate a procedure. Assume that all untyped variables have type integer.
> f := proc(x, y, z) return x*y-y*z+x*z; end proc:
> Fortran(f, defaulttype=integer)
      integer function f (x, y, z)
        integer x
        integer y
        integer z
        f = y * x - y * z + x * z
        return
      end
Translate a procedure containing an implicit return. A new variable is created to hold the return value.
> f := proc(n)
    local x, i;
    x := 0.0;
    for i to n do
        x := x + i;
    end do;
end proc:
> Fortran(f)
      doubleprecision function f (n)
        integer n
        doubleprecision x
        integer i
        doubleprecision cgret
        x = 0.0D0
        do 100, i = 1, n, 1
          x = x + dble(i)
          cgret = x
100     continue
        f = cgret
        return
      end
Translate a procedure accepting an Array as a parameter.
> f := proc(x::Array(numeric, 5..7))
    return x[5]+x[6]+x[7];
end proc:
> Fortran(f)
      doubleprecision function f (x)
        doubleprecision x(5:7)
        f = x(5) + x(6) + x(7)
        return
      end
Translate a module with one exported and one local procedure.
> m := module() export p; local q;
    p := proc(x,y) if y>0 then trunc(x); else ceil(x); end if; end proc:
    q := proc(x) sin(x)^2 end proc:
end module:
> Fortran(m, resultname=t0)
      doubleprecision function q (x)
        doubleprecision x
        q = sin(x) ** 2
        return
      end
      integer function p (x, y)
        doubleprecision x
        integer y
        if (0 .lt. y) then
          p = int(aint(x))
          return
        else
          p = int(ceil(x))
          return
        end if
      end
      program m
      end
Translate a procedure with no return value, containing an output statement.
> f := proc(N)
    printf("%d is a Number.\n", N);
end proc:
> Fortran(f)
      subroutine f (N)
        doubleprecision N
        print *, N, ' is a Number.'
      end
Use names longer than 6 characters with the limitvariablelength option.
> p := proc() local longvar: end proc;
      p := proc() local longvar; end proc   (2)
> Fortran(p)
      subroutine p
        doubleprecision cg
      end
> Fortran(p, limitvariablelength=false)
      subroutine p
        doubleprecision longvar
      end
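As a quick further illustration (this exact call does not appear on the original help page, but it combines only options documented above), resultname can be used together with declare. Based on example (1), a call such as
> Fortran(x+y+1, declare=[x::float, y::'integer'], resultname="total")
should print the same translated assignment with the default name cg replaced, i.e. total = x + dble(y) + 0.1D1, although the exact formatting may vary between Maple versions.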
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 18, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6828527450561523, "perplexity": 8336.140414592272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992721.31/warc/CC-MAIN-20210513014954-20210513044954-00142.warc.gz"}
https://socratic.org/questions/find-all-zeros-f-x-3x-7-32x-6-28x-5-591x-4-1181x-3-2810x-2-5550x-1125
# Find all zeros: f(x)=3x^7-32x^6+28x^5+591x^4-1181x^3-2810x^2+5550x-1125?
Dec 29, 2017
I don't have time to give a detailed explanation, but the zeroes are $x = 5$ (multiplicity 3), $x = - 3$ (multiplicity 2), and $x = \frac{5 \pm \sqrt{13}}{6}$ (each of multiplicity 1).
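Although the answer above skips the work, the claim is straightforward to verify (this check is mine, not the original poster's): the stated zeros are equivalent to the factorization $f(x) = (x-5)^3(x+3)^2(3x^2-5x+1)$. Expanding this product reproduces every coefficient of $f$, and the leading and constant terms already match at a glance: $3 \cdot 1 \cdot 1 = 3$ and $(-5)^3 \cdot 3^2 \cdot 1 = -1125$. The two simple zeros then come from the quadratic factor via the quadratic formula: $x = \frac{5 \pm \sqrt{25-12}}{6} = \frac{5 \pm \sqrt{13}}{6}$.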
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8479404449462891, "perplexity": 478.6667754093752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583087.95/warc/CC-MAIN-20211015222918-20211016012918-00074.warc.gz"}
https://www.illustrativemathematics.org/content-standards/tasks/421
# Rectangle Perimeter 1
Alignments to Content Standards: 6.EE.A.2
To compute the perimeter of a rectangle you add the length $l$ and width $w$, and double this sum.
1. Write an expression for the perimeter of a rectangle.
2. Use the expression to find the perimeter of a rectangle with length 30 and width 75.
## IM Commentary
This task gives a verbal description for computing the perimeter of a rectangle and asks the students to find an expression for this perimeter. They then have to use the expression to evaluate the perimeter for specific values of the two variables. In this problem, the variable names $l$ and $w$ convey the meaning of length and width of a rectangle. A follow-up to this task is Rectangle Perimeter 2, where students are asked to determine if expressions for the perimeter of a rectangle are correct and equivalent.
## Attached Resources
• Lesson Plan - Using Variables
## Solution
1. The description for computing the perimeter of a rectangle first adds the length $l$ and the width $w$ of the rectangle: $$l+w$$ Then it asks us to double this sum, which means we multiply the sum by $2$: $$\mbox{perimeter of a rectangle} = 2(l+w)$$
2. Letting $l=30$ and $w=75$ we have $$\mbox{perimeter of this rectangle} = 2(30+75)=2(105) = 210.$$
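As a quick sanity check (this snippet is not part of the original task; it merely mirrors the expression in the solution), the formula can be evaluated in a couple of lines of Python:

    def perimeter(l, w):
        # add the length and width, then double the sum
        return 2 * (l + w)

    assert perimeter(30, 75) == 210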
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7584234476089478, "perplexity": 323.22224323473404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609817.29/warc/CC-MAIN-20170528120617-20170528140617-00155.warc.gz"}
http://fred-wang.github.io/AcidTestsMathML/acid1/
# The MathML Acid1 Test

1. $\Gamma \left(\text{toggle}\right)={CC0000}_{16}$
2. $\tau \left(\text{the way},+260\right)=\underset{\text{old}}{\text{i grow}}$
3. $\text{the world ends}=\left\{\mathrm{bang},\mathrm{whimper}\right\}$
4. $\rho \left(\underset{\text{old}}{\text{i grow}}\right)=\frac{80}{120}=\frac{2}{3}$
5. $𝒜\left(\text{pluot?}\right)={120}^{2}=14400$
6. $\sigma \left(\stackrel{\text{sing to me,}}{\text{erbarme dich}}\right)=\sqrt{14400}=120$
7. $ℌ\left(\text{toggle}\right)=ℌ\left(\begin{array}{ccc}\text{the way}& \text{the world ends}& \underset{\text{old}}{\text{i grow}}\\ \text{pluot?}& \stackrel{\text{bar}}{\text{maids,}}& \stackrel{\text{sing to me,}}{\text{erbarme dich}}\end{array}\right)$

This may appear to be a nonsensical document, but it is at least a syntactically valid HTML 5 document. All 100%-conformant HTML 5 agents should be able to render the MathML elements above this paragraph in a similar fashion to this reference rendering (MathJax). You are running the MathML Acid1 Test. Are you looking for the classical version?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6693272590637207, "perplexity": 1834.6415002972074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463444.94/warc/CC-MAIN-20150226074103-00243-ip-10-28-5-156.ec2.internal.warc.gz"}
https://bastian.rieck.me/blog/posts/2021/vaccination_certificate/
# Extracting Data from the Swiss Vaccination Certificate ## Tags: programming Published on « Previous post: ‘What is a Manifold?’, Redux: Some … — Next post: A Short Round-Up of Topology-Based … » With the Swiss COVID vaccination campaign picking up some steam, I am grateful to have finally received two doses of a vaccine.1 I was mightily impressed when I also received a nice certificate with a QR code that I may now use to transfer information about my vaccination to a smartphone app—the idea being that every vaccinated person can present this as a proof. I tried to scan the QR code with my normal camera app and instead of a link or some other textual information, I just received what looked like gibberish at first glance. My curiosity thus being piqued, I searched for some way to extract more meaningful information from this code and quickly stumbled over Tobias Girstmair’s awesome description of how he decoded the Austrian vaccination certificate. In true ‘duct-tape programmer’ fashion, I was looking for a way to extend Tobias’s great description so that I could extract all this information with command-line tools. This post shows what I came up with. # Ingredients You need a few utilities for this: 1. zbar, for reading the QR code. 2. A base45 decoder such as python-base45. 3. A way to extract zlib data, such as pigz or a version of OpenSSL with zlib support enabled. 4. rq, for working with CBOR data. 5. A set of POSIX core tools, namely cut and tr. If you are on Mac OS X, all of these—except python-base45—can be installed using Homebrew. # Recipe Assuming your QR code is a file called QR.png, this is how to extract and nicely visualise its information: zbarimg --raw QR.png \ | cut -b5- \ | tr -d '\n' \ | base45 --decode \ | pigz -d \ | cut -b20- \ | cut -b1-384 \ | rq -c Let’s briefly dissect these commands to uncover their meaning: 1. We extract the raw QR code data. 2. We remove the outermost header (we are not interested in verifying the signature or the authenticity of that certificate; we just want the data). 3. We remove the newline (to enable downstream processing). 4. We decode the data (it uses base45 for efficiency purposes). 5. We uncompress the data (it is compressed using zlib). 6. We remove some more header information (this was based on my educated guess about the length of the header section; you can also just keep the original data and look at it using xxd to see that I did not remove anything relevant here). 7. We remove the digital signature (we could also leave it in, but then the last command will not result in a pretty output; this is the duct-tape version, after all). 8. We finally pretty-print everything. This should result in an output like this:2 { "1": { "dob": "1943-02-01", "nam": { "fn": "Müller", "fnt": "MUELLER", "gn": "Céline", "gnt": "CELINE" }, "v": [ { "ci": "urn:uvci:01:CH:2987CC9617DD5593806D4285", "co": "CH", "dn": 2, "dt": "2021-04-30", "is": "Bundesamt für Gesundheit (BAG)", "ma": "ORG-100031184", "mp": "EU/1/20/1507", "sd": 2, "tg": "840539006", "vp": "1119349007" } ], "ver": "1.0.0" } } There you have it—a pretty amazing amount of information contained in a nice QR code. The format appears to follow the Electronic Health Certificate to the letter,3 which is surprising to me because the relationship between the EU and Switzerland is somewhat complicated. I am glad to see that in these times, authorities are rallying under a common banner and creating interoperable standards. 
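For comparison, here is a rough Python equivalent of the pipeline (my own sketch, not from the original post), assuming the python-base45 and cbor2 packages; like the shell pipeline, it strips the leading HC1: prefix and skips signature verification entirely:

```python
import zlib

import base45  # pip install base45  (the python-base45 package mentioned above)
import cbor2   # pip install cbor2

def decode_certificate(qr_payload: str) -> dict:
    """Decode an EU/CH health certificate QR payload without verifying it."""
    assert qr_payload.startswith("HC1:")             # the same 4 bytes 'cut -b5-' removes
    compressed = base45.b45decode(qr_payload[4:])    # base45 -> raw bytes
    cose = cbor2.loads(zlib.decompress(compressed))  # zlib -> COSE_Sign1 (CBOR tag 18)
    protected, unprotected, payload, signature = cose.value
    claims = cbor2.loads(payload)                    # CWT claims map
    return claims[-260][1]                           # claim -260 holds the certificate

```

This returns the same nested structure shown above (dob, nam, v, ver).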
# Aftermath This was fun to fiddle around with—I find command-line data processing pipelines neat because they tend to contain a mixture of tools from different decades. That being said, if you want to parse such certificates properly, you should read the most recent specification. This duct-tape version should not be used in production. Stay healthy, until next time! 1. Insert obligatory 5G/telepathy joke here. ↩︎ 2. This is not my real certificate. Your output might also contain a few additional lines that only specify some information about the issuer. Adjust the first cut command to remove them. ↩︎ 3. I got my test data from this repository, but the code as shown here also works with my personal certificate. ↩︎
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29757484793663025, "perplexity": 3464.983437582432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056856.4/warc/CC-MAIN-20210919095911-20210919125911-00626.warc.gz"}
https://cobra.pdes-net.org/posts/butter-bei-die-fische.html
# Butter bei die Fische

The root partitions of my desktop and my notebook are formatted with btrfs, the B-tree file system (or 'Butterfuss' for ease of pronunciation). I've chosen btrfs mainly for one of its many outstanding features (compared to ext4), namely, the possibility to take snapshots of a partition (more precisely, a subvolume), and to return to them if, for example, an update proves to be troublesome. These snapshots can be managed either directly by the btrfs tools, or by snapper, a tool developed by openSUSE. Snapper is automatically installed when choosing btrfs as the filesystem during the installation of openSUSE, and that's where I first encountered it. I've found that it greatly simplifies snapshot management, and I much recommend it to users of other distributions as well.

##### openSUSE

If you happen to have an existing openSUSE installation > 12.1, and if you use btrfs as the default file system, snapper may already be alive:

snapper list-configs
snapper list

If the lists are populated, snapper is active and you've got nothing to do. Excellent! Otherwise, issue

snapper create-config /

Snapper should now automatically take hourly snapshots of your root partition. It will also automagically take snapshots just before and after an update performed by either zypper or yast, so in case anything goes wrong, you could resort to the latest sane state of your system.

##### Archlinux

Snapper is also available for other Linux distributions, and in particular, for Archlinux. The instructions available are outdated and do not work, but the following steps should. First of all, install snapper:

yaourt -S snapper-git

Next, create a subvolume (not a directory!) containing the snapshots, with the mandatory (!) name .snapshots:

btrfs subvolume create /.snapshots

Then, configure snapper:

cp /etc/snapper/config-templates/default /etc/snapper/configs/root

Set SNAPPER_CONFIGS="root" in /etc/conf.d/snapper, and in /etc/cron.hourly/snapper replace every occurrence of sysconfig with conf.d (in vim: :%s/sysconfig/conf.d/g).

Test the configuration by:

snapper list-configs

Try to create a snapshot:

snapper create -d "First manual snapshot." -c timeline
snapper list
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32696717977523804, "perplexity": 7447.544245907576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267164750.95/warc/CC-MAIN-20180926101408-20180926121808-00402.warc.gz"}
https://theaveragedev.com/providing-fixtures-in-phpunit-tests/
# Providing fixtures in PHPUnit tests

Providing pre-conditions in PHPUnit tests.

## The problem

Most of my development happens in a WordPress environment, and this means that I can mock objects and dependencies only up to a point when setting up the fixtures my tests will run in. Most of the time I'm setting up globally defined objects and variables and return values in methods. The thing can get tedious, and while I'm harnessing function-mocker power to the last drop to simplify my tests, I still have to walk that road.

## Data providing closures

The PHPUnit test suite implements ways to provide a test method with test data: data providers. As the name and the examples imply, data will be provided to test methods. The nature of that data, though, is up to the developer writing the test methods. When the dependencies of a subject under test are represented by object instances alone, then mocking them is enough to set up the fixture for the test. The case below is an example of when that's not possible without resorting to redundant object wrapping (a way I've walked, but one that's simply not maintainable in a shared code base):

use tad\FunctionMocker\FunctionMocker as Test;

class MyTest extends \PHPUnit_Framework_TestCase {

    public function setUp() {
        Test::setUp();
    }

    public function tearDown() {
        Test::tearDown();
    }

    public function badContexts() {
        return [
            [ function () { unset( $_REQUEST[ Register::$query_var ] ); } ],
            [ function () { unset( $_REQUEST['_wpnonce'] ); } ],
            [ function () { Test::replace( 'wp_verify_nonce', false ); } ]
        ];
    }

    /**
     * @test
     * it should not start sync if no context
     * @dataProvider badContexts
     */
    public function it_should_not_schedule_sync_if_no_context( $fixture ) {
        // set up the fixture for the test
        $fixture();

        $wp_schedule_single_event = Test::replace( 'wp_schedule_single_event' );

        $this->sut->maybe_start_feed_sync();

        $wp_schedule_single_event->wasNotCalled();
    }
}

The object method I'm testing checks the $_REQUEST global for a value and calls a WordPress-defined function, wp_verify_nonce, to make sure the context is good and the method should go ahead. What is happening is that I've moved the fixture setup code into closures and am calling them first thing in the test method; otherwise I would have had to write 3 distinct test methods. This is usually the way I do it, and it makes tests very explicit and readable; in this case "bad context" is a definition precise enough to be readable in this context. Had I had to set up, say, 6 different conditions, then I would have probably split the object further.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1779945343732834, "perplexity": 2830.2035939456227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657169226.65/warc/CC-MAIN-20200716122414-20200716152414-00222.warc.gz"}
https://www.solumaths.com/en/website/learn/numerical-sequences
A numerical sequence is any function from ℕ, or a part of ℕ, to ℝ. The calculators provided here allow you to practice calculations on numerical sequences.

## Numerical sequences : Reminder

A numerical sequence or a numerical progression is any function from ℕ, or a part of ℕ, to ℝ.

### Direction of variation of a sequence: strictly increasing sequence, strictly decreasing sequence

• To say that the sequence (u_(n)) is strictly increasing means that: for any natural number n, u_(n+1)>u_(n).
• To say that the sequence (u_(n)) is strictly decreasing means that: for any natural number n, u_(n)>u_(n+1).

To show that a sequence is increasing or decreasing:

• We can calculate the difference u_(n+1)-u_(n); if this difference is positive then the sequence is increasing, otherwise it is decreasing.
• We can also, if the sequence is positive and u_n!=0, calculate the ratio u_(n+1)/u_(n); if this ratio is greater than 1 the sequence is increasing, otherwise it is decreasing.

### Arithmetic sequences, geometric sequences

#### Arithmetic sequences (arithmetic progression)

To say that a sequence (u_(n)) is arithmetic means that there is a real r such that for any natural number n, u_(n+1)=u_(n)+r. The real r is called the common difference of the sequence (u_(n)). If (u_(n)) is an arithmetic sequence of first term u_(0) and common difference r, then for any natural number n, u_(n)=u_(0)+nr.

##### Sum of consecutive terms of an arithmetic sequence

If S=a+...+k is the sum of p consecutive terms of an arithmetic sequence, then S = p(a+k)/2. We deduce that 1+2+3+...+n=n(n+1)/2.

#### Geometric sequences (geometric progression)

To say that a sequence (u_(n)) is geometric means that there is a real q such that for any natural number n, u_(n+1)=qu_(n). The real q is called the common ratio of the sequence (u_(n)). If (u_(n)) is a geometric sequence of first term u_(0) and common ratio q, then for any natural number n, u_(n)=u_(0)*q^n.

##### Sum of consecutive terms of a geometric sequence

If S=a+...+k is the sum of p consecutive terms of a geometric sequence of common ratio q (q != 1), then S = (a-k*q)/(1-q). We deduce that 1+q+q^2+...+q^n=(1-q^(n+1))/(1-q).
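As a quick sanity check of the closed forms above, here is a small sketch (mine, not from the source page):

```python
u0, r, q, n = 2.0, 3.0, 0.5, 10

# Arithmetic: u_n = u_0 + n r; geometric: u_n = u_0 * q^n
print(u0 + n * r)   # 32.0
print(u0 * q**n)    # ~0.001953

# Sum 1 + q + q^2 + ... + q^n against the closed form (1 - q^(n+1)) / (1 - q)
direct = sum(q**k for k in range(n + 1))
closed = (1 - q**(n + 1)) / (1 - q)
assert abs(direct - closed) < 1e-12

# Sum of p consecutive arithmetic terms a + ... + k equals p (a + k) / 2
terms = [u0 + i * r for i in range(5)]  # p = 5, a = terms[0], k = terms[-1]
assert sum(terms) == len(terms) * (terms[0] + terms[-1]) / 2
```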
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9691879153251648, "perplexity": 933.9001288566695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662556725.76/warc/CC-MAIN-20220523071517-20220523101517-00149.warc.gz"}
https://cs.stackexchange.com/questions/88103/insert-max-and-delete-min-in-wavl-tree
# Insert-Max and Delete-Min in WAVL tree Given a WAVL tree with pointers to its minimum and maximum, we will add two operations: Insert-Max: given an element, which is bigger than all other elements in the tree, add it as the right child of the maximum element, and update the max pointer. Delete-Min: find the successor of the minimum element, delete the minimum element, and update the min pointer. Rebalancing after inserting and deleting elements will be as in a regular WAVL tree. We are given a sorted array of $n$ elements and an empty WAVL tree. We will do $n$ Insert-Max operations on all elements of the array, from smallest to biggest, and then we will do $n$ Delete-Min operations. I would like to find the worst case bound for one operation in the described sequence of operations, which will cover both Insert-Max and Delete-Min. We know that the amortized cost of a rebalancing operation in a WAVL tree is $O(1)$, but I'm not sure what can be said about the worst case. Any help is appreciated. • The worst case is probably $O(\log n)$. This is usually the case for balanced binary trees. Presumably the textbook mentions this. – Yuval Filmus Feb 14 '18 at 15:45 • @YuvalFilmus Is that because the rebalancing can go all the way up to the root, and the tree height is bounded by $O(\log n)$? – Itay4 Feb 14 '18 at 15:47 • Yes, everything is linear in the depth, and the depth is $O(\log n)$. In fact, $O(\log n)$ is an upper bound – it's probably better to state the worst case as $\Theta(\log n)$. – Yuval Filmus Feb 14 '18 at 16:22 • @YuvalFilmus So you are saying that this is true for every balanced binary tree? WAVL has no advantage over AVL here? – Itay4 Feb 14 '18 at 16:33 • The Wikipedia page compares these two data structures. – Yuval Filmus Feb 14 '18 at 17:08
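For concreteness, here is a minimal sketch (mine, not from the thread) of the two operations on a plain binary search tree with min/max pointers; the WAVL rank bookkeeping and rebalancing, which dominate the worst-case cost, are elided and only marked in comments:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = None

class Tree:
    def __init__(self):
        self.root = self.min = self.max = None

    def insert_max(self, key):
        """key is assumed larger than every key already in the tree."""
        node = Node(key)
        if self.root is None:
            self.root = self.min = self.max = node
        else:
            self.max.right = node        # attach as right child of the maximum
            node.parent = self.max
            self.max = node
        # WAVL rebalancing would walk up from `node` here: O(log n) worst case.

    def delete_min(self):
        node = self.min                  # the minimum never has a left child
        child = node.right
        if node.parent is None:          # splice the minimum out
            self.root = child
        else:
            node.parent.left = child
        if child is not None:
            child.parent = node.parent
        # New minimum = successor of the old one.
        succ = child if child is not None else node.parent
        while succ is not None and succ.left is not None:
            succ = succ.left
        self.min = succ
        if succ is None:                 # tree became empty
            self.max = None
        # WAVL rebalancing would walk up from the splice point here.
        return node.key
```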
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5452982783317566, "perplexity": 715.7038376765611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540553486.23/warc/CC-MAIN-20191213094833-20191213122833-00505.warc.gz"}
https://arxiv.org/abs/1806.00726?context=math
math # Title:Group schemes and local densities of ramified hermitian lattices in residue characteristic 2 Part II, Expanded version Authors:Sungmun Cho Abstract: This paper is the complementary work of [Cho16]. Ramified quadratic extensions $E/F$, where $F$ is a finite unramified field extension of $\mathbb{Q}_2$, fall into two cases that we call $\textit{Case 1}$ and $\textit{Case 2}$. In the previous work [Cho16], we obtained the local density formula for a ramified hermitian lattice in $\textit{Case 1}$. In this paper, we obtain the local density formula for the remaining $\textit{Case 2}$, by constructing a smooth integral group scheme model for an appropriate unitary group. Consequently, this paper, combined with the paper [GY00] of W. T. Gan and J.-K. Yu and [Cho16], allows the computation of the mass formula for any hermitian lattice $(L, H)$, when a base field is unramified over $\mathbb{Q}$ at a prime $(2)$. Comments: 89 pages. This is the expanded version of the published one. One error in the last line of Appendix B of 'Group schemes and local densities of ramified hermitian lattices in residue characteristic 2 Part I, Algebra & Number Theory, 10-3, 451-532, 2016' is explained in Remark B.1 of this version Subjects: Number Theory (math.NT) MSC classes: 11E41, 11E95, 14L15, 20G25 Journal reference: Forum Mathematicum Volume 30 Issue 6, pages 1487-1520, 2018 DOI: 10.1515/forum-2017-0080 Cite as: arXiv:1806.00726 [math.NT] (or arXiv:1806.00726v1 [math.NT] for this version) ## Submission history From: Sungmun Cho [view email] [v1] Sun, 3 Jun 2018 02:01:23 UTC (65 KB)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9148028492927551, "perplexity": 1287.7362456959524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572896.15/warc/CC-MAIN-20190924083200-20190924105200-00089.warc.gz"}
https://robotics.stackexchange.com/questions/10200/create-2-reading-sensor-values
# Create 2 Reading Sensor Values

I was trying to solve a Create 2 sensor-reading problem I am having when I came across @NBCKLY's posts (Part 1 and Part 2), which I believe are exactly what I am looking for. I copied his code from the original post into my project and updated the code from the second post as best as I could interpret it... but something is not going according to plan. For example, I am printing the angle to my serial monitor (for now), but I am constantly getting a value of 0 (sometimes 1). Can @NBCKLY or anybody please check out this code and tell me what I'm doing wrong? I would appreciate it. Thank you very much.

int baudPin = 2;
int data;
bool flag;
int i;
int ledPin = 13;
int rxPin = 0;
signed char sensorData[4];
int txPin = 1;

unsigned long baudTimer = 240000;
unsigned long prevTimer = 0;
unsigned long thisTimer = 0;

void drive(signed short left, signed short right) {
  Serial.write(145);
  Serial.write(right >> 8);
  Serial.write(right & 0xFF);
  Serial.write(left >> 8);
  Serial.write(left & 0xFF);
}

void updateSensors() {  // function name is a placeholder; the original header was lost in the scraped copy
  Serial.write(149);
  Serial.write(2);
  Serial.write(43);  // left encoder
  Serial.write(44);  // right encoder
  delay(100);

  i = 0;
  while (Serial.available()) {
    sensorData[i++] = Serial.read();  // loop body reconstructed: the reads are implied by the code below
  }

  int leftEncoder = int((sensorData[0] << 8)) | (int(sensorData[1]) & 0xFF);
  int rightEncoder = (int)(sensorData[2] << 8) | (int)(sensorData[3] & 0xFF);

  int angle = ((rightEncoder * 72 * 3.14 / 508.8) - (leftEncoder * 72 * 3.14 / 508.8)) / 235;

  Serial.print("\nAngle: ");
  Serial.print(angle);
  Serial.print("\n");
}

void setup() {
  pinMode(baudPin, OUTPUT);
  pinMode(ledPin, OUTPUT);
  pinMode(rxPin, INPUT);
  pinMode(txPin, OUTPUT);

  delay(2000);
  Serial.begin(115200);

  digitalWrite(baudPin, LOW);
  delay(500);
  digitalWrite(baudPin, HIGH);
  delay(100);

  Serial.write(128);
  Serial.write(131);

  drive(50, -50);
}

void loop() {
  thisTimer = millis();
  if (thisTimer - prevTimer > baudTimer) {
    i = 0;
    prevTimer = thisTimer;
    digitalWrite(baudPin, LOW);
    delay(500);
    digitalWrite(baudPin, HIGH);
    Serial.print("Pulse sent...\n");
  }
}

# What I am asking is why I only get an angle of rotation of 0 or 1 degrees when the robot is moving in a circle. The angle should be incrementing while the robot is moving. The output I am getting on the serial monitor shows a line of what looks like garble, which I assume is supposed to be the bytes sent back from the Create, followed by "Angle: 0 (or 1)". What I was expecting to see was an increasing angle value (1, 2, 3... 360, and so on).

First, I'll point out that OP code 20 will just give you the angle directly. Remember, if you use that code, "The value returned must be divided by 0.324056 to get degrees."

Regarding your code specifically, it looks like you're trying to do: $$\frac{\mbox{right distance} - \mbox{left distance}}{\mbox{wheel base}} \\$$ This is a rearrangement of the arc length formula: $$s = r\theta \\$$ where $s$ is the arc traversed (difference in wheel distances), $r$ is the radius of the circle (the wheel base), and $\theta$ is the angle traversed, in radians. So, one of your problems is that you are using an int to define your angle value - int is short for integer, meaning that it can only hold whole numbers. Radians are very small in magnitude compared to degrees. One complete circle is $2\pi$ radians, or about 6.28. Compare this to degrees, which is 360. This means that one radian is about 60 degrees; you won't get any update because of your unit definition. Your biggest problem is, as NBCKLY asked about, that the encoder counts returned to you by OP codes 43 and 44 are not signed.
So, when you do your math, you are doing the following: $$\frac{ \frac{\mbox{right encoder}}{\mbox{counts per rev}} (\pi d) - \frac{\mbox{left encoder}}{\mbox{counts per rev}} (\pi d)}{\mbox{wheel base}} \\$$ BUT, your drive commands are equal in magnitude. This means that $\mbox{right encoder}\approx \mbox{left encoder}$ because the encoder counts are not signed. That is, even though your drive command is negative, the counts come back as positive. So, here's what you can do to fix this**:

1. Multiply the encoder counts by the sign (not sine) of the respective drive command.
2. Use a float for the angle and/or use degrees for the angle by multiplying your existing code by (180/3.1415).

** - A final note on your code: You're going to have trouble doing it the way you are at the moment because you're using total encoder counts for your math. Say you get to 16,000 counts and change direction on that motor. What happens? Well, the way I've written above (which fixes your current issue), you go from +16,000 to -16,000 (or vice-versa). What you should consider doing instead is to accumulate an angle and to evaluate only the elapsed wheel encoder counts. That is, I would do the following (pseudo-code):

float angle;
angle = 0;

leftEncoder = <get left encoder counts>;
rightEncoder = <get right encoder counts>;

/* Rollover protection on the encoder */
if leftDriveCommand > 0
  if leftEncoder < prevLeftEncoder
    prevLeftEncoder = prevLeftEncoder - 65535;
  end
elseif leftDriveCommand < 0
  if leftEncoder > prevLeftEncoder
    prevLeftEncoder = prevLeftEncoder + 65535;
  end
end

<do the same thing for the right encoder>

elapsedLeft = leftEncoder - prevLeftEncoder;
prevLeftEncoder = leftEncoder;
elapsedRight = rightEncoder - prevRightEncoder;
prevRightEncoder = rightEncoder;

angleIncrement = (elapsedRight - elapsedLeft)*(pi*d/countsPerRev)/(wheelBase);
angle = angle + angleIncrement;

So in this way you accumulate your angle by looking at how much of an angle has elapsed since the last time you updated your sensor. In this way, you don't have to worry about what happens with encoder counts when you reverse polarity on a drive command. You should also consider what is going to happen when you roll over from 65535 back to 0. Or again, just use OP code 20 to get the angle directly.

• I am not sure if Op Code 142 and Packet ID 20 is what I am looking for. However, I was looking into it and when I ran code to get a reading, I got nothing when trying to read the data. Roomba.write(142); Roomba.write(43); i = 0; while (Roomba.available()) { sensorData[i++] = Roomba.read(); Serial.print ("BYTE\n"); // <-- should at least print the word BYTE, but does not } delay(100); } P.S. I give up on this formatting stuff. Nothing I type works there either...despite copying and pasting from on-screen instructions. – Tony Tzankoff Jul 10 '16 at 20:24
• ...and before anybody starts getting critical over the fact that I initially said packet ID 20 and then used ID 43 in my code, the same problem applies. I am not getting any data back at all. Why that is is what I am trying to figure out. I'll do the math after I get the data to do it with. – Tony Tzankoff Jul 10 '16 at 20:39
• First of all, I misread something in Arduino's SoftwareSerial setup that led me to use an improper pin for my RX line. That issue has been resolved. I am now getting sensor data. Yay! Secondly, the 142/20 op code/packet ID combination (with a non-moving Roomba) returns a negative value. Since the Roomba is not moving, should the result be 0 until it starts moving?
I try to look at other people's code to see how they did it (as I am more of a visual learner) and notice that nobody else accounts for rollover protection. An oversight? Looking at your code, I can see how this might come in handy. – Tony Tzankoff Jul 11 '16 at 3:37 • @TonyTzankoff - Formatting doesn't work (nicely) in comments. You can delineate code with the grave accent ( ` ) on either end. Generally, you should edit your question to incorporate new/updated information. Regarding your issue, the documentation says Op code 142 followed by packet ID 20 gets you the elapsed Angle (page 26). You need to accumulate those angles. If you are having a problem with the initial value being something other than zero, then discard it and poll again. The second (and subsequent) values should be zero if it didn't move. – Chuck Jul 11 '16 at 13:09 • Also I appear to have not linked the manual from which I was quoting. The manual I quoted can be found here. There's not much difference, though. – Chuck Jul 11 '16 at 13:12
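As a quick numeric sanity check of the increment formula in the accepted answer (my own sketch, not from the thread), using the constants from the question (72 mm wheel diameter, 508.8 counts per revolution, 235 mm wheel base):

```python
import math

WHEEL_DIAMETER_MM = 72.0
COUNTS_PER_REV = 508.8
WHEEL_BASE_MM = 235.0
MM_PER_COUNT = math.pi * WHEEL_DIAMETER_MM / COUNTS_PER_REV  # ~0.4446 mm per count

def angle_increment_deg(elapsed_left, elapsed_right):
    """Heading change in degrees from signed elapsed encoder counts."""
    radians = (elapsed_right - elapsed_left) * MM_PER_COUNT / WHEEL_BASE_MM
    return math.degrees(radians)

# Spinning in place at drive(50, -50): left wheel forward, right wheel backward.
# With unsigned counts (+113, +113) the increment is 0 -- the bug discussed above.
print(angle_increment_deg(113, 113))   # 0.0
print(angle_increment_deg(113, -113))  # ~ -24.5 degrees once the counts are signed
```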
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2994081676006317, "perplexity": 1657.3537262883674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039503725.80/warc/CC-MAIN-20210421004512-20210421034512-00390.warc.gz"}
https://tug.org/applications/fontinst/mail/tex-fonts/1997/msg00450.html
# Re: A question about \DeclareFontShape
```
On Mon, 15 Dec 1997, Rebecca and Rowland wrote:

> Now then, I know what happens - you get to use cmr5, 6, ..., 12 (that's the
> gen * cmr) for those sizes. You also get cmr10 at 10.95pt, ..., cmr17 at
> 24.88pt.
>
> But I'm not sure I understand how it works. I gather that all the size
> functions aside from fixed and sfixed force LaTeX to select a fount at the
> requested size. Is this right? (I think it is; it doesn't seem to be
> explicitly stated in the docs).

Yes.

> Now then, as it says in fntguide.tex,
>
> >\LaTeX{} provides the following size functions, whose `inputs' are
> >\m{fontarg} and \m{optarg} (when present).
> >
> >`' (empty)
> >Load the external font \m{fontarg} at the user-requested size. If
> >\m{optarg} is present, it is used as the scale-factor.
>
> Can you give an example of its use?

\DeclareFontShape{...}{...}{...}{...}{ <-> *[0.9] foo}

If e.g. size 8pt is requested, load font foo at size 8pt * 0.9 = 7.2pt.
This is useful for combining PS fonts with CM fonts.

> >s
> >Like the empty function but without terminal warnings, only
> >loggings.
>
> What's this business about warnings? What warnings might one get from the
> use of the empty size function? And which warnings are suppressed?

To inform the user that e.g. in case of scaling, you get only messages in
the log file and not on the terminal.

> >gen
> >Generates the external font from \m{fontarg} followed by
> >the user-requested size, e.g.~|<<8>> <<9>> <<10>> gen * cmtt|
>
> Does this just mean `select the tfm file formed by concatenating the given
> name with the size'? (I assume the bit about the warnings is consistent).

Yes.

> >genb
> >Generates the external font from \m{fontarg} followed by
> >the user-requested size, using the conventions of the `ec' fonts.
> >e.g.~|<<10.98>> genb * dctt| produces |dctt1098|.
>
> Similarly, does this mean `select the tfm file formed by concatenating the
> given name with the (size multiplied by 100)?'

Yes.

> >sub
> >Tries to load a font from a different font shape declaration given by
> >\m{fontarg} in the form \m{family}|/|\m{series}|/|\m{shape}.
>
> The size function below implies that the sub size function causes messages;
> what messages? And where?

The user will be informed that a specific font is not available and has to
be replaced with another one.

> >ssub
> >Silent variant of `sub', only loggings.
>
> What messages are suppressed here?

Nothing will be really suppressed; you'll find the messages always in the
logfile, but the silent variant doesn't write anything to the terminal.

> >fixed
> >Load font \m{fontarg} as is, disregarding the user-requested size.
> >If present, \m{optarg} gives the ``at \ldots pt'' size to be used.
> >
> >sfixed
> >Silent variant of `fixed', only loggings.
>
> Do these size functions generate special messages?

If I remember correctly, only if you specify a scaling factor.

> And about the optional argument to size functions: the syntax specification
> seems to me to indicate that all the size functions may take an optional
> argument. Is this so? If it is, what sort of thing can you put in it? If
> not, is it just the:
>
> empty
> s
> subf
> ssubf
> fixed
> sfixed
>
> size functions that take an optional argument?

The optional argument is (usually) a scaling factor -- you may look into my
CJK package (<CTAN>/language/chinese) to see how new scaling functions are
defined and used (some of them have an optional argument which isn't a
scaling factor).

Werner
```
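As an aside, the gen/genb naming conventions Werner confirms above are easy to illustrate with a small sketch (mine, not from the email):

```python
def gen(base: str, size: float) -> str:
    # 'gen': concatenate the font name with the requested size, e.g. cmtt + 10 -> cmtt10
    return f"{base}{size:g}"

def genb(base: str, size: float) -> str:
    # 'genb': 'ec' font convention, size times 100, e.g. dctt + 10.98 -> dctt1098
    return f"{base}{round(size * 100)}"

print(gen("cmtt", 10))      # cmtt10
print(genb("dctt", 10.98))  # dctt1098
```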
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.925585150718689, "perplexity": 15918.85458932265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816138.91/warc/CC-MAIN-20180225051024-20180225071024-00250.warc.gz"}
https://arxiv.org/abs/hep-ph/0611290
hep-ph # Title: The longitudinal cross section of vector meson electroproduction Authors: S.V. Goloskokov (JINR Dubna), P. Kroll (Wuppertal Univ.) Abstract: We analyze electroproduction of light vector mesons (V=rho, phi and omega) at small Bjorken-x in the handbag approach in which the process factorizes into general parton distributions and partonic subprocesses. The latter are calculated in the modified perturbative approach where the transverse momenta of the quark and antiquark forming the vector meson are retained and Sudakov suppressions are taken into account. Modeling the generalized parton distributions through double distributions and using simple Gaussian wavefunctions for the vector mesons, we compute the longitudinal cross sections at large photon virtualities. The results are in fair agreement with the findings of recent experiments performed at HERA and HERMES. Comments: 27 pages, 20 figures, using LATEX with graphicx Subjects: High Energy Physics - Phenomenology (hep-ph) Journal reference: Eur.Phys.J.C50:829-842,2007 DOI: 10.1140/epjc/s10052-007-0228-4 Report number: WU B 06-02 Cite as: arXiv:hep-ph/0611290 (or arXiv:hep-ph/0611290v1 for this version) ## Submission history From: Peter Kroll [view email] [v1] Wed, 22 Nov 2006 13:05:54 GMT (506kb)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.855684757232666, "perplexity": 9081.506882267051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891539.71/warc/CC-MAIN-20180122193259-20180122213259-00472.warc.gz"}
http://human-web.org/Utah/error-gauss-function.html
# error gauss function

Erf is the "error function" encountered in integrating the normal distribution (which is a normalized form of the Gaussian function). It is defined as:[1][2]

$$\operatorname{erf}(x) = \frac{1}{\sqrt{\pi}} \int_{-x}^{x} e^{-t^{2}}\,dt = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^{2}}\,dt$$

The error function at +∞ is exactly 1 (see Gaussian integral). At the imaginary axis, it tends to ±i∞. For any complex number z: $\operatorname{erf}(\bar{z}) = \overline{\operatorname{erf}(z)}$, where $\bar{z}$ is the complex conjugate of z. (In plots of erf over the complex plane, negative integer values of Im(f) are shown with thick red lines.)

Erf can also be defined as a Maclaurin series; the denominator terms are sequence A007680 in the OEIS. Here $(2n-1)!!$ is the double factorial: the product of all odd numbers up to $(2n-1)$. Another approximation is given by

$$\operatorname{erf}(x) \approx \operatorname{sgn}(x)\,\sqrt{1 - \exp\!\left(-x^{2}\,\frac{4/\pi + a x^{2}}{1 + a x^{2}}\right)}$$

Erf has a continued fraction expansion (Wall 1948, p. 357), first stated by Laplace in 1805 and Legendre in 1826 (Olds 1963, p. 139), proved by Jacobi, and rediscovered by Ramanujan (Watson …). A continued fraction expansion of the complementary error function is:[11]

$$\operatorname{erfc}(z) = \frac{z}{\sqrt{\pi}}\,e^{-z^{2}}\,\cfrac{1}{z^{2} + \cfrac{a_{1}}{1 + \cfrac{a_{2}}{z^{2} + \cfrac{a_{3}}{1 + \cdots}}}}, \qquad a_{m} = \frac{m}{2}$$

Several definite integrals involving erf are known; the first two of those listed appear in Prudnikov et al. (1990, p. 123, eqns. 2.8.19.8 and 2.8.19.11). A complex generalization of erf is also defined, with integral representations valid only in the upper half-plane. SEE ALSO: Dawson's Integral, Erfc, Erfi, Fresnel Integrals, Gaussian.

Erf is implemented in the Wolfram Language as Erf[z]. The inverse error function is usually defined with domain (−1,1), and it is restricted to this domain in many computer algebra systems. The inverse imaginary error function is defined as $\operatorname{erfi}^{-1}(x)$.[10] For any real x, Newton's method can be used to compute $\operatorname{erfi}^{-1}(x)$. Despite the name "imaginary error function", $\operatorname{erfi}(x)$ is real when x is real.

In MATLAB, erf(x) returns the error function evaluated for each element of x (for example, erf(0.76)); it accepts single and double data types and fully supports tall arrays. The relationship between the error function erf and normcdf is

$$\operatorname{normcdf}(x) = \frac{1}{2}\left(1 - \operatorname{erf}\!\left(-\frac{x}{\sqrt{2}}\right)\right)$$

When erf(x) is close to 1, then 1 - erf(x) is a small number and might be rounded down to 0. For expressions of the form 1 - erf(x), use the complementary error function erfc instead; this substitution maintains accuracy. See also erfc, erfcinv, erfcx, erfinv (introduced before R2006a). To plot the CDF of the normal distribution with μ = 0 and σ = 1:

x = -3:0.1:3;
y = (1/2)*(1+erf(x/sqrt(2)));
plot(x,y)
grid on
title('CDF of normal distribution with \mu = 0 and \sigma = 1')

References:
Abramowitz, M. and Stegun, I.A. (Eds.). "Error Function and Fresnel Integrals." Ch. 7 in Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing. New York: Dover, pp. 297-309, 1972.
Acton, F.S. Numerical Methods That Work, 2nd printing.
Arfken, G. Mathematical Methods for Physicists, 3rd ed.
Hardy, G.H. Ramanujan: Twelve Lectures on Subjects Suggested by His Life and Work, 3rd ed.
New York: Dover, pp. 179-182, 1967.
Princeton, NJ: Princeton University Press, p. 105, 2003.
Prudnikov, A.P.; Brychkov, Yu.A.; and Marichev, O.I. (1990).
Sequences A000079/M1129, A001147/M3002, A007680/M2861, A103979, and A103980 in "The On-Line Encyclopedia of Integer Sequences."
Spanier, J.
Whittaker, E.T. and Watson, G.N. A Course in Modern Analysis, 4th ed. Cambridge, England: Cambridge University Press, 1990.
http://mathworld.wolfram.com/Erf.html
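As a quick check of the normcdf relation above, a small sketch (mine, not from the page) using Python's standard library:

```python
import math

def normcdf(x):
    # Phi(x) = (1/2) * (1 + erf(x / sqrt(2))) = (1/2) * (1 - erf(-x / sqrt(2)))
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(normcdf(0.0))    # 0.5
print(normcdf(1.96))   # ~0.975
print(1.0 - math.erf(5.0), math.erfc(5.0))  # erfc keeps precision where erf(x) ~ 1
```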
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8933738470077515, "perplexity": 4191.1509660963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823320.11/warc/CC-MAIN-20181210080704-20181210102204-00055.warc.gz"}
https://www.esaral.com/q/the-difference-between-87178
# The difference between

Question: The difference between $\Delta \mathrm{H}$ and $\Delta \mathrm{U}$, i.e. $(\Delta \mathrm{H}-\Delta \mathrm{U})$, when the combustion of one mole of heptane (l) is carried out at a temperature $T$, is equal to:

1. $3 \mathrm{RT}$
2. $-3 \mathrm{RT}$
3. $-4 \mathrm{RT}$
4. $4 \mathrm{RT}$

Correct Option: 3

Solution: The combustion of liquid heptane is $\mathrm{C_7H_{16}(l) + 11\,O_2(g) \rightarrow 7\,CO_2(g) + 8\,H_2O(l)}$, so the change in moles of gas is $\Delta n_g = 7 - 11 = -4$. Hence $\Delta \mathrm{H} - \Delta \mathrm{U} = \Delta n_g \mathrm{RT} = -4\,\mathrm{RT}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8703432679176331, "perplexity": 4583.059951724699}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499524.28/warc/CC-MAIN-20230128054815-20230128084815-00787.warc.gz"}
https://www.educator.com/mathematics/ap-calculus-ab/hovasapian/optimization-problems-i.php
INSTRUCTORS Raffi Hovasapian John Zhu Raffi Hovasapian Optimization Problems I Slide Duration: Table of Contents Section 1: Limits and Derivatives Overview & Slopes of Curves 42m 8s Intro 0:00 Overview & Slopes of Curves 0:21 Differential and Integral 0:22 Fundamental Theorem of Calculus 6:36 Differentiation or Taking the Derivative 14:24 What Does the Derivative Mean and How do We Find it? 15:18 Example: f'(x) 19:24 Example: f(x) = sin (x) 29:16 General Procedure for Finding the Derivative of f(x) 37:33 More on Slopes of Curves 50m 53s Intro 0:00 Slope of the Secant Line along a Curve 0:12 Slope of the Tangent Line to f(x) at a Particlar Point 0:13 Slope of the Secant Line along a Curve 2:59 Instantaneous Slope 6:51 Instantaneous Slope 6:52 Example: Distance, Time, Velocity 13:32 Instantaneous Slope and Average Slope 25:42 Slope & Rate of Change 29:55 Slope & Rate of Change 29:56 Example: Slope = 2 33:16 Example: Slope = 4/3 34:32 Example: Slope = 4 (m/s) 39:12 Example: Density = Mass / Volume 40:33 Average Slope, Average Rate of Change, Instantaneous Slope, and Instantaneous Rate of Change 47:46 Example Problems for Slopes of Curves 59m 12s Intro 0:00 Example I: Water Tank 0:13 Part A: Which is the Independent Variable and Which is the Dependent? 2:00 Part B: Average Slope 3:18 Part C: Express These Slopes as Rates-of-Change 9:28 Part D: Instantaneous Slope 14:54 Example II: y = √(x-3) 28:26 Part A: Calculate the Slope of the Secant Line 30:39 Part B: Instantaneous Slope 41:26 Part C: Equation for the Tangent Line 43:59 Example III: Object in the Air 49:37 Part A: Average Velocity 50:37 Part B: Instantaneous Velocity 55:30 Desmos Tutorial 18m 43s Intro 0:00 Desmos Tutorial 1:42 Desmos Tutorial 1:43 Things You Must Learn To Do on Your Particular Calculator 2:39 Things You Must Learn To Do on Your Particular Calculator 2:40 Example I: y=sin x 4:54 Example II: y=x³ and y = d/(dx) (x³) 9:22 Example III: y = x² {-5 <= x <= 0} and y = cos x {0 < x < 6} 13:15 The Limit of a Function 51m 53s Intro 0:00 The Limit of a Function 0:14 The Limit of a Function 0:15 Graph: Limit of a Function 12:24 Table of Values 16:02 lim x→a f(x) Does not Say What Happens When x = a 20:05 Example I: f(x) = x² 24:34 Example II: f(x) = 7 27:05 Example III: f(x) = 4.5 30:33 Example IV: f(x) = 1/x 34:03 Example V: f(x) = 1/x² 36:43 The Limit of a Function, Cont. 38:16 Infinity and Negative Infinity 38:17 Does Not Exist 42:45 Summary 46:48 Example Problems for the Limit of a Function 24m 43s Intro 0:00 Example I: Explain in Words What the Following Symbols Mean 0:10 Example II: Find the Following Limit 5:21 Example III: Use the Graph to Find the Following Limits 7:35 Example IV: Use the Graph to Find the Following Limits 11:48 Example V: Sketch the Graph of a Function that Satisfies the Following Properties 15:25 Example VI: Find the Following Limit 18:44 Example VII: Find the Following Limit 20:06 Calculating Limits Mathematically 53m 48s Intro 0:00 Plug-in Procedure 0:09 Plug-in Procedure 0:10 Limit Laws 9:14 Limit Law 1 10:05 Limit Law 2 10:54 Limit Law 3 11:28 Limit Law 4 11:54 Limit Law 5 12:24 Limit Law 6 13:14 Limit Law 7 14:38 Plug-in Procedure, Cont. 16:35 Plug-in Procedure, Cont. 
16:36 Example I: Calculating Limits Mathematically 20:50 Example II: Calculating Limits Mathematically 27:37 Example III: Calculating Limits Mathematically 31:42 Example IV: Calculating Limits Mathematically 35:36 Example V: Calculating Limits Mathematically 40:58 Limits Theorem 44:45 Limits Theorem 1 44:46 Limits Theorem 2: Squeeze Theorem 46:34 Example VI: Calculating Limits Mathematically 49:26 Example Problems for Calculating Limits Mathematically 21m 22s Intro 0:00 Example I: Evaluate the Following Limit by Showing Each Application of a Limit Law 0:16 Example II: Evaluate the Following Limit 1:51 Example III: Evaluate the Following Limit 3:36 Example IV: Evaluate the Following Limit 8:56 Example V: Evaluate the Following Limit 11:19 Example VI: Calculating Limits Mathematically 13:19 Example VII: Calculating Limits Mathematically 14:59 Calculating Limits as x Goes to Infinity 50m 1s Intro 0:00 Limit as x Goes to Infinity 0:14 Limit as x Goes to Infinity 0:15 Let's Look at f(x) = 1 / (x-3) 1:04 Summary 9:34 Example I: Calculating Limits as x Goes to Infinity 12:16 Example II: Calculating Limits as x Goes to Infinity 21:22 Example III: Calculating Limits as x Goes to Infinity 24:10 Example IV: Calculating Limits as x Goes to Infinity 36:00 Example Problems for Limits at Infinity 36m 31s Intro 0:00 Example I: Calculating Limits as x Goes to Infinity 0:14 Example II: Calculating Limits as x Goes to Infinity 3:27 Example III: Calculating Limits as x Goes to Infinity 8:11 Example IV: Calculating Limits as x Goes to Infinity 14:20 Example V: Calculating Limits as x Goes to Infinity 20:07 Example VI: Calculating Limits as x Goes to Infinity 23:36 Continuity 53m Intro 0:00 Definition of Continuity 0:08 Definition of Continuity 0:09 Example: Not Continuous 3:52 Example: Continuous 4:58 Example: Not Continuous 5:52 Procedure for Finding Continuity 9:45 Law of Continuity 13:44 Law of Continuity 13:45 Example I: Determining Continuity on a Graph 15:55 Example II: Show Continuity & Determine the Interval Over Which the Function is Continuous 17:57 Example III: Is the Following Function Continuous at the Given Point? 22:42 Theorem for Composite Functions 25:28 Theorem for Composite Functions 25:29 Example IV: Is cos(x³ + ln x) Continuous at x=π/2? 27:00 Example V: What Value of A Will make the Following Function Continuous at Every Point of Its Domain? 
34:04 Types of Discontinuity 39:18 Removable Discontinuity 39:33 Jump Discontinuity 40:06 Infinite Discontinuity 40:32 Intermediate Value Theorem 40:58 Intermediate Value Theorem: Hypothesis & Conclusion 40:59 Intermediate Value Theorem: Graphically 43:40 Example VI: Prove That the Following Function Has at Least One Real Root in the Interval [4,6] 47:46 Derivative I 40m 2s Intro 0:00 Derivative 0:09 Derivative 0:10 Example I: Find the Derivative of f(x)=x³ 2:20 Notations for the Derivative 7:32 Notations for the Derivative 7:33 Derivative & Rate of Change 11:14 Recall the Rate of Change 11:15 Instantaneous Rate of Change 17:04 Graphing f(x) and f'(x) 19:10 Example II: Find the Derivative of x⁴ - x² 24:00 Example III: Find the Derivative of f(x)=√x 30:51 Derivatives II 53m 45s Intro 0:00 Example I: Find the Derivative of (2+x)/(3-x) 0:18 Derivatives II 9:02 f(x) is Differentiable if f'(x) Exists 9:03 Recall: For a Limit to Exist, Both Left Hand and Right Hand Limits Must Equal to Each Other 17:19 Geometrically: Differentiability Means the Graph is Smooth 18:44 Example II: Show Analytically that f(x) = |x| is Nor Differentiable at x=0 20:53 Example II: For x > 0 23:53 Example II: For x < 0 25:36 Example II: What is f(0) and What is the lim |x| as x→0? 30:46 Differentiability & Continuity 34:22 Differentiability & Continuity 34:23 How Can a Function Not be Differentiable at a Point? 39:38 How Can a Function Not be Differentiable at a Point? 39:39 Higher Derivatives 41:58 Higher Derivatives 41:59 Derivative Operator 45:12 Example III: Find (dy)/(dx) & (d²y)/(dx²) for y = x³ 49:29 More Example Problems for The Derivative 31m 38s Intro 0:00 Example I: Sketch f'(x) 0:10 Example II: Sketch f'(x) 2:14 Example III: Find the Derivative of the Following Function sing the Definition 3:49 Example IV: Determine f, f', and f'' on a Graph 12:43 Example V: Find an Equation for the Tangent Line to the Graph of the Following Function at the Given x-value 13:40 Example VI: Distance vs. 
Time 20:15 Example VII: Displacement, Velocity, and Acceleration 23:56 Example VIII: Graph the Displacement Function 28:20 Section 2: Differentiation Differentiation of Polynomials & Exponential Functions 47m 35s Intro 0:00 Differentiation of Polynomials & Exponential Functions 0:15 Derivative of a Function 0:16 Derivative of a Constant 2:35 Power Rule 3:08 If C is a Constant 4:19 Sum Rule 5:22 Exponential Functions 6:26 Example I: Differentiate 7:45 Example II: Differentiate 12:38 Example III: Differentiate 15:13 Example IV: Differentiate 16:20 Example V: Differentiate 19:19 Example VI: Find the Equation of the Tangent Line to a Function at a Given Point 12:18 Example VII: Find the First & Second Derivatives 25:59 Example VIII 27:47 Part A: Find the Velocity & Acceleration Functions as Functions of t 27:48 Part B: Find the Acceleration after 3 Seconds 30:12 Part C: Find the Acceleration when the Velocity is 0 30:53 Part D: Graph the Position, Velocity, & Acceleration Graphs 32:50 Example IX: Find a Cubic Function Whose Graph has Horizontal Tangents 34:53 Example X: Find a Point on a Graph 42:31 The Product, Power & Quotient Rules 47m 25s Intro 0:00 The Product, Power and Quotient Rules 0:19 Differentiate Functions 0:20 Product Rule 5:30 Quotient Rule 9:15 Power Rule 10:00 Example I: Product Rule 13:48 Example II: Quotient Rule 16:13 Example III: Power Rule 18:28 Example IV: Find dy/dx 19:57 Example V: Find dy/dx 24:53 Example VI: Find dy/dx 28:38 Example VII: Find an Equation for the Tangent to the Curve 34:54 Example VIII: Find d²y/dx² 38:08 Derivatives of the Trigonometric Functions 41m 8s Intro 0:00 Derivatives of the Trigonometric Functions 0:09 Let's Find the Derivative of f(x) = sin x 0:10 Important Limits to Know 4:59 d/dx (sin x) 6:06 d/dx (cos x) 6:38 d/dx (tan x) 6:50 d/dx (csc x) 7:02 d/dx (sec x) 7:15 d/dx (cot x) 7:27 Example I: Differentiate f(x) = x² - 4 cos x 7:56 Example II: Differentiate f(x) = x⁵ tan x 9:04 Example III: Differentiate f(x) = (cos x) / (3 + sin x) 10:56 Example IV: Differentiate f(x) = e^x / (tan x - sec x) 14:06 Example V: Differentiate f(x) = (csc x - 4) / (cot x) 15:37 Example VI: Find an Equation of the Tangent Line 21:48 Example VII: For What Values of x Does the Graph of the Function x + 3 cos x Have a Horizontal Tangent? 
25:17 Example VIII: Ladder Problem 28:23 Example IX: Evaluate 33:22 Example X: Evaluate 36:38 The Chain Rule 24m 56s Intro 0:00 The Chain Rule 0:13 Recall the Composite Functions 0:14 Derivatives of Composite Functions 1:34 Example I: Identify f(x) and g(x) and Differentiate 6:41 Example II: Identify f(x) and g(x) and Differentiate 9:47 Example III: Differentiate 11:03 Example IV: Differentiate f(x) = -5 / (x² + 3)³ 12:15 Example V: Differentiate f(x) = cos(x² + c²) 14:35 Example VI: Differentiate f(x) = cos⁴x +c² 15:41 Example VII: Differentiate 17:03 Example VIII: Differentiate f(x) = sin(tan x²) 19:01 Example IX: Differentiate f(x) = sin(tan² x) 21:02 More Chain Rule Example Problems 25m 32s Intro 0:00 Example I: Differentiate f(x) = sin(cos(tanx)) 0:38 Example II: Find an Equation for the Line Tangent to the Given Curve at the Given Point 2:25 Example III: F(x) = f(g(x)), Find F' (6) 4:22 Example IV: Differentiate & Graph both the Function & the Derivative in the Same Window 5:35 Example V: Differentiate f(x) = ( (x-8)/(x+3) )⁴ 10:18 Example VI: Differentiate f(x) = sec²(12x) 12:28 Example VII: Differentiate 14:41 Example VIII: Differentiate 19:25 Example IX: Find an Expression for the Rate of Change of the Volume of the Balloon with Respect to Time 21:13 Implicit Differentiation 52m 31s Intro 0:00 Implicit Differentiation 0:09 Implicit Differentiation 0:10 Example I: Find (dy)/(dx) by both Implicit Differentiation and Solving Explicitly for y 12:15 Example II: Find (dy)/(dx) of x³ + x²y + 7y² = 14 19:18 Example III: Find (dy)/(dx) of x³y² + y³x² = 4x 21:43 Example IV: Find (dy)/(dx) of the Following Equation 24:13 Example V: Find (dy)/(dx) of 6sin x cos y = 1 29:00 Example VI: Find (dy)/(dx) of x² cos² y + y sin x = 2sin x cos y 31:02 Example VII: Find (dy)/(dx) of √(xy) = 7 + y²e^x 37:36 Example VIII: Find (dy)/(dx) of 4(x²+y²)² = 35(x²-y²) 41:03 Example IX: Find (d²y)/(dx²) of x² + y² = 25 44:05 Example X: Find (d²y)/(dx²) of sin x + cos y = sin(2x) 47:48 Section 3: Applications of the Derivative Linear Approximations & Differentials 47m 34s Intro 0:00 Linear Approximations & Differentials 0:09 Linear Approximations & Differentials 0:10 Example I: Linear Approximations & Differentials 11:27 Example II: Linear Approximations & Differentials 20:19 Differentials 30:32 Differentials 30:33 Example III: Linear Approximations & Differentials 34:09 Example IV: Linear Approximations & Differentials 35:57 Example V: Relative Error 38:46 Related Rates 45m 33s Intro 0:00 Related Rates 0:08 Strategy for Solving Related Rates Problems #1 0:09 Strategy for Solving Related Rates Problems #2 1:46 Strategy for Solving Related Rates Problems #3 2:06 Strategy for Solving Related Rates Problems #4 2:50 Strategy for Solving Related Rates Problems #5 3:38 Example I: Radius of a Balloon 5:15 Example II: Ladder 12:52 Example III: Water Tank 19:08 Example IV: Distance between Two Cars 29:27 Example V: Line-of-Sight 36:20 More Related Rates Examples 37m 17s Intro 0:00 Example I: Shadow 0:14 Example II: Particle 4:45 Example III: Water Level 10:28 Example IV: Clock 20:47 Example V: Distance between a House and a Plane 29:11 Maximum & Minimum Values of a Function 40m 44s Intro 0:00 Maximum & Minimum Values of a Function, Part 1 0:23 Absolute Maximum 2:20 Absolute Minimum 2:52 Local Maximum 3:38 Local Minimum 4:26 Maximum & Minimum Values of a Function, Part 2 6:11 Function with Absolute Minimum but No Absolute Max, Local Max, and Local Min 7:18 Function with Local Max & Min but No Absolute Max & Min 8:48 Formal 
Definitions 10:43 Absolute Maximum 11:18 Absolute Minimum 12:57 Local Maximum 14:37 Local Minimum 16:25 Extreme Value Theorem 18:08 Theorem: f'(c) = 0 24:40 Critical Number (Critical Value) 26:14 Procedure for Finding the Critical Values of f(x) 28:32 Example I: Find the Critical Values of f(x) x + sinx 29:51 Example II: What are the Absolute Max & Absolute Minimum of f(x) = x + 4 sinx on [0,2π] 35:31 Example Problems for Max & Min 40m 44s Intro 0:00 Example I: Identify Absolute and Local Max & Min on the Following Graph 0:11 Example II: Sketch the Graph of a Continuous Function 3:11 Example III: Sketch the Following Graphs 4:40 Example IV: Find the Critical Values of f (x) = 3x⁴ - 7x³ + 4x² 6:13 Example V: Find the Critical Values of f(x) = |2x - 5| 8:42 Example VI: Find the Critical Values 11:42 Example VII: Find the Critical Values f(x) = cos²(2x) on [0,2π] 16:57 Example VIII: Find the Absolute Max & Min f(x) = 2sinx + 2cos x on [0,(π/3)] 20:08 Example IX: Find the Absolute Max & Min f(x) = (ln(2x)) / x on [1,3] 24:39 The Mean Value Theorem 25m 54s Intro 0:00 Rolle's Theorem 0:08 Rolle's Theorem: If & Then 0:09 Rolle's Theorem: Geometrically 2:06 There May Be More than 1 c Such That f'( c ) = 0 3:30 Example I: Rolle's Theorem 4:58 The Mean Value Theorem 9:12 The Mean Value Theorem: If & Then 9:13 The Mean Value Theorem: Geometrically 11:07 Example II: Mean Value Theorem 13:43 Example III: Mean Value Theorem 21:19 Using Derivatives to Graph Functions, Part I 25m 54s Intro 0:00 Using Derivatives to Graph Functions, Part I 0:12 Increasing/ Decreasing Test 0:13 Example I: Find the Intervals Over Which the Function is Increasing & Decreasing 3:26 Example II: Find the Local Maxima & Minima of the Function 19:18 Example III: Find the Local Maxima & Minima of the Function 31:39 Using Derivatives to Graph Functions, Part II 44m 58s Intro 0:00 Using Derivatives to Graph Functions, Part II 0:13 Concave Up & Concave Down 0:14 What Does This Mean in Terms of the Derivative? 
6:14 Point of Inflection 8:52 Example I: Graph the Function 13:18 Example II: Function x⁴ - 5x² 19:03 Intervals of Increase & Decrease 19:04 Local Maxes and Mins 25:01 Intervals of Concavity & X-Values for the Points of Inflection 29:18 Intervals of Concavity & Y-Values for the Points of Inflection 34:18 Graphing the Function 40:52 Example Problems I 49m 19s Intro 0:00 Example I: Intervals, Local Maxes & Mins 0:26 Example II: Intervals, Local Maxes & Mins 5:05 Example III: Intervals, Local Maxes & Mins, and Inflection Points 13:40 Example IV: Intervals, Local Maxes & Mins, Inflection Points, and Intervals of Concavity 23:02 Example V: Intervals, Local Maxes & Mins, Inflection Points, and Intervals of Concavity 34:36 Example Problems III 59m 1s Intro 0:00 Example I: Intervals, Local Maxes & Mins, Inflection Points, Intervals of Concavity, and Asymptotes 0:11 Example II: Intervals, Local Maxes & Mins, Inflection Points, Intervals of Concavity, and Asymptotes 21:24 Example III: Cubic Equation f(x) = Ax³ + Bx² + Cx + D 37:56 Example IV: Intervals, Local Maxes & Mins, Inflection Points, Intervals of Concavity, and Asymptotes 46:19 L'Hospital's Rule 30m 9s Intro 0:00 L'Hospital's Rule 0:19 Indeterminate Forms 0:20 L'Hospital's Rule 3:38 Example I: Evaluate the Following Limit Using L'Hospital's Rule 8:50 Example II: Evaluate the Following Limit Using L'Hospital's Rule 10:30 Indeterminate Products 11:54 Indeterminate Products 11:55 Example III: L'Hospital's Rule & Indeterminate Products 13:57 Indeterminate Differences 17:00 Indeterminate Differences 17:01 Example IV: L'Hospital's Rule & Indeterminate Differences 18:57 Indeterminate Powers 22:20 Indeterminate Powers 22:21 Example V: L'Hospital's Rule & Indeterminate Powers 25:13 Example Problems for L'Hospital's Rule 38m 14s Intro 0:00 Example I: Evaluate the Following Limit 0:17 Example II: Evaluate the Following Limit 2:45 Example III: Evaluate the Following Limit 6:54 Example IV: Evaluate the Following Limit 8:43 Example V: Evaluate the Following Limit 11:01 Example VI: Evaluate the Following Limit 14:48 Example VII: Evaluate the Following Limit 17:49 Example VIII: Evaluate the Following Limit 20:37 Example IX: Evaluate the Following Limit 25:16 Example X: Evaluate the Following Limit 32:44 Optimization Problems I 49m 59s Intro 0:00 Example I: Find the Dimensions of the Box that Gives the Greatest Volume 1:23 Fundamentals of Optimization Problems 18:08 Fundamental #1 18:33 Fundamental #2 19:09 Fundamental #3 19:19 Fundamental #4 20:59 Fundamental #5 21:55 Fundamental #6 23:44 Example II: Demonstrate that of All Rectangles with a Given Perimeter, the One with the Largest Area is a Square 24:36 Example III: Find the Points on the Ellipse 9x² + y² = 9 Farthest Away from the Point (1,0) 35:13 Example IV: Find the Dimensions of the Rectangle of Largest Area that can be Inscribed in a Circle of Given Radius R 43:10 Optimization Problems II 55m 10s Intro 0:00 Example I: Optimization Problem 0:13 Example II: Optimization Problem 17:34 Example III: Optimization Problem 35:06 Example IV: Revenue, Cost, and Profit 43:22 Newton's Method 30m 22s Intro 0:00 Newton's Method 0:45 Newton's Method 0:46 Example I: Find x2 and x3 13:18 Example II: Use Newton's Method to Approximate 15:48 Example III: Find the Root of the Following Equation to 6 Decimal Places 19:57 Example IV: Use Newton's Method to Find the Coordinates of the Inflection Point 23:11 Section 4: Integrals Antiderivatives 55m 26s Intro 0:00 Antiderivatives 0:23 Definition of an Antiderivative 0:24 
Antiderivative Theorem 7:58 Function & Antiderivative 12:10 x^n 12:30 1/x 13:00 e^x 13:08 cos x 13:18 sin x 14:01 sec² x 14:11 sec x tan x 14:18 1/√(1-x²) 14:26 1/(1+x²) 14:36 -1/√(1-x²) 14:45 Example I: Find the Most General Antiderivative for the Following Functions 15:07 Function 1: f(x) = x³ -6x² + 11x - 9 15:42 Function 2: f(x) = 14√(x) - 27 4√x 19:12 Function 3: f(x) = cos x - 14 sinx 20:53 Function 4: f(x) = (x⁵+2√x )/( x^(4/3) ) 22:10 Function 5: f(x) = (3e^x) - 2/(1+x²) 25:42 Example II: Given the Following, Find the Original Function f(x) 26:37 Function 1: f'(x) = 5x³ - 14x + 24, f(2) = 40 27:55 Function 2: f'(x) = 3 sinx + sec²x, f(π/6) = 5 30:34 Function 3: f''(x) = 8x - cos x, f(1.5) = 12.7, f'(1.5) = 4.2 32:54 Function 4: f''(x) = 5/(√x), f(2) = 15, f'(2) = 7 37:54 Example III: Falling Object 41:58 Problem 1: Find an Equation for the Height of the Ball after t Seconds 42:48 Problem 2: How Long Will It Take for the Ball to Strike the Ground? 48:30 Problem 3: What is the Velocity of the Ball as it Hits the Ground? 49:52 Problem 4: Initial Velocity of 6 m/s, How Long Does It Take to Reach the Ground? 50:46 The Area Under a Curve 51m 3s Intro 0:00 The Area Under a Curve 0:13 Approximate Using Rectangles 0:14 Let's Do This Again, Using 4 Different Rectangles 9:40 Approximate with Rectangles 16:10 Left Endpoint 18:08 Right Endpoint 25:34 Left Endpoint vs. Right Endpoint 30:58 Number of Rectangles 34:08 True Area 37:36 True Area 37:37 Sigma Notation & Limits 43:32 When You Have to Explicitly Solve Something 47:56 Example Problems for Area Under a Curve 33m 7s Intro 0:00 Example I: Using Left Endpoint & Right Endpoint to Approximate Area Under a Curve 0:10 Example II: Using 5 Rectangles, Approximate the Area Under the Curve 11:32 Example III: Find the True Area by Evaluating the Limit Expression 16:07 Example IV: Find the True Area by Evaluating the Limit Expression 24:52 The Definite Integral 43m 19s Intro 0:00 The Definite Integral 0:08 Definition to Find the Area of a Curve 0:09 Definition of the Definite Integral 4:08 Symbol for Definite Integral 8:45 Regions Below the x-axis 15:18 Associating Definite Integral to a Function 19:38 Integrable Function 27:20 Evaluating the Definite Integral 29:26 Evaluating the Definite Integral 29:27 Properties of the Definite Integral 35:24 Properties of the Definite Integral 35:25 Example Problems for The Definite Integral 32m 14s Intro 0:00 Example I: Approximate the Following Definite Integral Using Midpoints & Sub-intervals 0:11 Example II: Express the Following Limit as a Definite Integral 5:28 Example III: Evaluate the Following Definite Integral Using the Definition 6:28 Example IV: Evaluate the Following Integral Using the Definition 17:06 Example V: Evaluate the Following Definite Integral by Using Areas 25:41 Example VI: Definite Integral 30:36 The Fundamental Theorem of Calculus 24m 17s Intro 0:00 The Fundamental Theorem of Calculus 0:17 Evaluating an Integral 0:18 Lim as x → ∞ 12:19 Taking the Derivative 14:06 Differentiation & Integration are Inverse Processes 15:04 1st Fundamental Theorem of Calculus 20:08 1st Fundamental Theorem of Calculus 20:09 2nd Fundamental Theorem of Calculus 22:30 2nd Fundamental Theorem of Calculus 22:31 Example Problems for the Fundamental Theorem 25m 21s Intro 0:00 Example I: Find the Derivative of the Following Function 0:17 Example II: Find the Derivative of the Following Function 1:40 Example III: Find the Derivative of the Following Function 2:32 Example IV: Find the Derivative of the Following Function 5:55 
Example V: Evaluate the Following Integral 7:13 Example VI: Evaluate the Following Integral 9:46 Example VII: Evaluate the Following Integral 12:49 Example VIII: Evaluate the Following Integral 13:53 Example IX: Evaluate the Following Graph 15:24 Local Maxs and Mins for g(x) 15:25 Where Does g(x) Achieve Its Absolute Max on [0,8] 20:54 On What Intervals is g(x) Concave Up/Down? 22:20 Sketch a Graph of g(x) 24:34 More Example Problems, Including Net Change Applications 34m 22s Intro 0:00 Example I: Evaluate the Following Indefinite Integral 0:10 Example II: Evaluate the Following Definite Integral 0:59 Example III: Evaluate the Following Integral 2:59 Example IV: Velocity Function 7:46 Part A: Net Displacement 7:47 Part B: Total Distance Travelled 13:15 Example V: Linear Density Function 20:56 Example VI: Acceleration Function 25:10 Part A: Velocity Function at Time t 25:11 Part B: Total Distance Travelled During the Time Interval 28:38 Solving Integrals by Substitution 27m 20s Intro 0:00 Table of Integrals 0:35 Example I: Evaluate the Following Indefinite Integral 2:02 Example II: Evaluate the Following Indefinite Integral 7:27 Example III: Evaluate the Following Indefinite Integral 10:57 Example IV: Evaluate the Following Indefinite Integral 12:33 Example V: Evaluate the Following 14:28 Example VI: Evaluate the Following 16:00 Example VII: Evaluate the Following 19:01 Example VIII: Evaluate the Following 21:49 Example IX: Evaluate the Following 24:34 Section 5: Applications of Integration Areas Between Curves 34m 56s Intro 0:00 Areas Between Two Curves: Function of x 0:08 Graph 1: Area Between f(x) & g(x) 0:09 Graph 2: Area Between f(x) & g(x) 4:07 Is It Possible to Write as a Single Integral? 8:20 Area Between the Curves on [a,b] 9:24 Absolute Value 10:32 Formula for Areas Between Two Curves: Top Function - Bottom Function 17:03 Areas Between Curves: Function of y 17:49 What if We are Given Functions of y? 17:50 Formula for Areas Between Two Curves: Right Function - Left Function 21:48 Finding a & b 22:32 Example Problems for Areas Between Curves 42m 55s Intro 0:00 Instructions for the Example Problems 0:10 Example I: y = 7x - x² and y=x 0:37 Example II: x=y²-3, x=e^((1/2)y), y=-1, and y=2 6:25 Example III: y=(1/x), y=(1/x³), and x=4 12:25 Example IV: y=15-2x² and y=x²-5 15:52 Example V: x=(1/8)y³ and x=6-y² 20:20 Example VI: y=cos x, y=sin(2x), [0,π/2] 24:34 Example VII: y=2x², y=10x², 7x+2y=10 29:51 Example VIII: Velocity vs. Time 33:23 Part A: At 2.187 Minutes, Which Car is Further Ahead? 33:24 Part B: If We Shaded the Region between the Graphs from t=0 to t=2.187, What Would This Shaded Area Represent? 36:32 Part C: At 4 Minutes Which Car is Ahead? 37:11 Part D: At What Time Will the Cars be Side by Side? 37:50 Volumes I: Slices 34m 15s Intro 0:00 Volumes I: Slices 0:18 Rotate the Graph of y=√x about the x-axis 0:19 How can I use Integration to Find the Volume? 
3:16 Slice the Solid Like a Loaf of Bread 5:06 Volumes Definition 8:56 Example I: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Given Functions about the Given Line of Rotation 12:18 Example II: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Given Functions about the Given Line of Rotation 19:05 Example III: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Given Functions about the Given Line of Rotation 25:28 Volumes II: Volumes by Washers 51m 43s Intro 0:00 Volumes II: Volumes by Washers 0:11 Rotating Region Bounded by y=x³ & y=x around the x-axis 0:12 Equation for Volumes by Washer 11:14 Process for Solving Volumes by Washer 13:40 Example I: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Following Functions around the Given Axis 15:58 Example II: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Following Functions around the Given Axis 25:07 Example III: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Following Functions around the Given Axis 34:20 Example IV: Find the Volume of the Solid Obtained by Rotating the Region Bounded by the Following Functions around the Given Axis 44:05 Volumes III: Solids That Are Not Solids-of-Revolution 49m 36s Intro 0:00 Solids That Are Not Solids-of-Revolution 0:11 Cross-Section Area Review 0:12 Cross-Sections That Are Not Solids-of-Revolution 7:36 Example I: Find the Volume of a Pyramid Whose Base is a Square of Side-length S, and Whose Height is H 10:54 Example II: Find the Volume of a Solid Whose Cross-sectional Areas Perpendicular to the Base are Equilateral Triangles 20:39 Example III: Find the Volume of a Pyramid Whose Base is an Equilateral Triangle of Side-Length A, and Whose Height is H 29:27 Example IV: Find the Volume of a Solid Whose Base is Given by the Equation 16x² + 4y² = 64 36:47 Example V: Find the Volume of a Solid Whose Base is the Region Bounded by the Functions y=3-x² and the x-axis 46:13 Volumes IV: Volumes By Cylindrical Shells 50m 2s Intro 0:00 Volumes by Cylindrical Shells 0:11 Find the Volume of the Following Region 0:12 Volumes by Cylindrical Shells: Integrating Along x 14:12 Volumes by Cylindrical Shells: Integrating Along y 14:40 Volumes by Cylindrical Shells Formulas 16:22 Example I: Using the Method of Cylindrical Shells, Find the Volume of the Solid 18:33 Example II: Using the Method of Cylindrical Shells, Find the Volume of the Solid 25:57 Example III: Using the Method of Cylindrical Shells, Find the Volume of the Solid 31:38 Example IV: Using the Method of Cylindrical Shells, Find the Volume of the Solid 38:44 Example V: Using the Method of Cylindrical Shells, Find the Volume of the Solid 44:03 The Average Value of a Function 32m 13s Intro 0:00 The Average Value of a Function 0:07 Average Value of f(x) 0:08 What if The Domain of f(x) is Not Finite? 
2:23 Let's Calculate Average Value for f(x) = x² [2,5] 4:46 Mean Value Theorem for Integrate 9:25 Example I: Find the Average Value of the Given Function Over the Given Interval 14:06 Example II: Find the Average Value of the Given Function Over the Given Interval 18:25 Example III: Find the Number A Such that the Average Value of the Function f(x) = -4x² + 8x + 4 Equals 2 Over the Interval [-1,A] 24:04 Example IV: Find the Average Density of a Rod 27:47 Section 6: Techniques of Integration Integration by Parts 50m 32s Intro 0:00 Integration by Parts 0:08 The Product Rule for Differentiation 0:09 Integrating Both Sides Retains the Equality 0:52 Differential Notation 2:24 Example I: ∫ x cos x dx 5:41 Example II: ∫ x² sin(2x)dx 12:01 Example III: ∫ (e^x) cos x dx 18:19 Example IV: ∫ (sin^-1) (x) dx 23:42 Example V: ∫₁⁵ (lnx)² dx 28:25 Summary 32:31 Tabular Integration 35:08 Case 1 35:52 Example: ∫x³sinx dx 36:39 Case 2 40:28 Example: ∫e^(2x) sin 3x 41:14 Trigonometric Integrals I 24m 50s Intro 0:00 Example I: ∫ sin³ (x) dx 1:36 Example II: ∫ cos⁵(x)sin²(x)dx 4:36 Example III: ∫ sin⁴(x)dx 9:23 Summary for Evaluating Trigonometric Integrals of the Following Type: ∫ (sin^m) (x) (cos^p) (x) dx 15:59 #1: Power of sin is Odd 16:00 #2: Power of cos is Odd 16:41 #3: Powers of Both sin and cos are Odd 16:55 #4: Powers of Both sin and cos are Even 17:10 Example IV: ∫ tan⁴ (x) sec⁴ (x) dx 17:34 Example V: ∫ sec⁹(x) tan³(x) dx 20:55 Summary for Evaluating Trigonometric Integrals of the Following Type: ∫ (sec^m) (x) (tan^p) (x) dx 23:31 #1: Power of sec is Odd 23:32 #2: Power of tan is Odd 24:04 #3: Powers of sec is Odd and/or Power of tan is Even 24:18 Trigonometric Integrals II 22m 12s Intro 0:00 Trigonometric Integrals II 0:09 Recall: ∫tanx dx 0:10 Let's Find ∫secx dx 3:23 Example I: ∫ tan⁵ (x) dx 6:23 Example II: ∫ sec⁵ (x) dx 11:41 Summary: How to Deal with Integrals of Different Types 19:04 Identities to Deal with Integrals of Different Types 19:05 Example III: ∫cos(5x)sin(9x)dx 19:57 More Example Problems for Trigonometric Integrals 17m 22s Intro 0:00 Example I: ∫sin²(x)cos⁷(x)dx 0:14 Example II: ∫x sin²(x) dx 3:56 Example III: ∫csc⁴ (x/5)dx 8:39 Example IV: ∫( (1-tan²x)/(sec²x) ) dx 11:17 Example V: ∫ 1 / (sinx-1) dx 13:19 Integration by Partial Fractions I 55m 12s Intro 0:00 Integration by Partial Fractions I 0:11 Recall the Idea of Finding a Common Denominator 0:12 Decomposing a Rational Function to Its Partial Fractions 4:10 2 Types of Rational Function: Improper & Proper 5:16 Improper Rational Function 7:26 Improper Rational Function 7:27 Proper Rational Function 11:16 Proper Rational Function & Partial Fractions 11:17 Linear Factors 14:04 Irreducible Quadratic Factors 15:02 Case 1: G(x) is a Product of Distinct Linear Factors 17:10 Example I: Integration by Partial Fractions 20:33 Case 2: D(x) is a Product of Linear Factors 40:58 Example II: Integration by Partial Fractions 44:41 Integration by Partial Fractions II 42m 57s Intro 0:00 Case 3: D(x) Contains Irreducible Factors 0:09 Example I: Integration by Partial Fractions 5:19 Example II: Integration by Partial Fractions 16:22 Case 4: D(x) has Repeated Irreducible Quadratic Factors 27:30 Example III: Integration by Partial Fractions 30:19 Section 7: Differential Equations Introduction to Differential Equations 46m 37s Intro 0:00 Introduction to Differential Equations 0:09 Overview 0:10 Differential Equations Involving Derivatives of y(x) 2:08 Differential Equations Involving Derivatives of y(x) and Function of y(x) 3:23 Equations for an 
Unknown Number 6:28 What are These Differential Equations Saying? 10:30 Verifying that a Function is a Solution of the Differential Equation 13:00 Verifying that a Function is a Solution of the Differential Equation 13:01 Verify that y(x) = 4e^x + 3x² + 6x + e^π is a Solution of this Differential Equation 17:20 General Solution 22:00 Particular Solution 24:36 Initial Value Problem 27:42 Example I: Verify that a Family of Functions is a Solution of the Differential Equation 32:24 Example II: For What Values of K Does the Function Satisfy the Differential Equation 36:07 Example III: Verify the Solution and Solve the Initial Value Problem 39:47 Separation of Variables 28m 8s Intro 0:00 Separation of Variables 0:28 Separation of Variables 0:29 Example I: Solve the Following Initial Value Problem 8:29 Example II: Solve the Following Initial Value Problem 13:46 Example III: Find an Equation of the Curve 18:48 Population Growth: The Standard & Logistic Equations 51m 7s Intro 0:00 Standard Growth Model 0:30 Definition of the Standard/Natural Growth Model 0:31 Initial Conditions 8:00 The General Solution 9:16 Example I: Standard Growth Model 10:45 Logistic Growth Model 18:33 Logistic Growth Model 18:34 Solving the Initial Value Problem 25:21 What Happens When t → ∞ 36:42 Example II: Solve the Following Initial Value Problem 41:50 Relative Growth Rate 46:56 Relative Growth Rate 46:57 Relative Growth Rate Version for the Standard Model 49:04 Slope Fields 24m 37s Intro 0:00 Slope Fields 0:35 Slope Fields 0:36 Graphing the Slope Fields, Part 1 11:12 Graphing the Slope Fields, Part 2 15:37 Graphing the Slope Fields, Part 3 17:25 Steps to Solving Slope Field Problems 20:24 Example I: Draw or Generate the Slope Field of the Differential Equation y'=x cos y 22:38 Section 8: AP Practice Exam AP Practice Exam: Section 1, Part A No Calculator 45m 29s Intro 0:00 Exam Link 0:10 Problem #1 1:26 Problem #2 2:52 Problem #3 4:42 Problem #4 7:03 Problem #5 10:01 Problem #6 13:49 Problem #7 15:16 Problem #8 19:06 Problem #9 23:10 Problem #10 28:10 Problem #11 31:30 Problem #12 33:53 Problem #13 37:45 Problem #14 41:17 AP Practice Exam: Section 1, Part A No Calculator, cont. 41m 55s Intro 0:00 Problem #15 0:22 Problem #16 3:10 Problem #17 5:30 Problem #18 8:03 Problem #19 9:53 Problem #20 14:51 Problem #21 17:30 Problem #22 22:12 Problem #23 25:48 Problem #24 29:57 Problem #25 33:35 Problem #26 35:57 Problem #27 37:57 Problem #28 40:04 AP Practice Exam: Section I, Part B Calculator Allowed 58m 47s Intro 0:00 Problem #1 1:22 Problem #2 4:55 Problem #3 10:49 Problem #4 13:05 Problem #5 14:54 Problem #6 17:25 Problem #7 18:39 Problem #8 20:27 Problem #9 26:48 Problem #10 28:23 Problem #11 34:03 Problem #12 36:25 Problem #13 39:52 Problem #14 43:12 Problem #15 47:18 Problem #16 50:41 Problem #17 56:38 AP Practice Exam: Section II, Part A Calculator Allowed 25m 40s Intro 0:00 Problem #1: Part A 1:14 Problem #1: Part B 4:46 Problem #1: Part C 8:00 Problem #2: Part A 12:24 Problem #2: Part B 16:51 Problem #2: Part C 17:17 Problem #3: Part A 18:16 Problem #3: Part B 19:54 Problem #3: Part C 21:44 Problem #3: Part D 22:57 AP Practice Exam: Section II, Part B No Calculator 31m 20s Intro 0:00 Problem #4: Part A 1:35 Problem #4: Part B 5:54 Problem #4: Part C 8:50 Problem #4: Part D 9:40 Problem #5: Part A 11:26 Problem #5: Part B 13:11 Problem #5: Part C 15:07 Problem #5: Part D 19:57 Problem #6: Part A 22:01 Problem #6: Part B 25:34 Problem #6: Part C 28:54 
Lecture Comments (3)

0 answers
Post by Michael Yang on January 7 at 04:08:15 PM
Can I use Lagrange multipliers to solve the first example too?

1 answer
Last reply by: Professor Hovasapian, Fri Dec 8, 2017 11:38 PM
Post by Maya Balaji on November 11, 2017
Hello Professor. For question 1- I'm not sure why you would check the endpoints of the domain (variable at 0, volume at 0) to see if they are plausible absolute maximums, because technically this domain is not a closed interval. The volume can never be 0, and the length can never be 0- so these would not be included in the domain- so it would not be a closed interval, correct?- and you must only check endpoints if it is a part of a closed interval (please correct me if this isn't true!). Thank you.

### Optimization Problems I

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

• Intro 0:00
• Example I: Find the Dimensions of the Box that Gives the Greatest Volume 1:23
• Fundamentals of Optimization Problems 18:08
• Fundamental #1
• Fundamental #2
• Fundamental #3
• Fundamental #4
• Fundamental #5
• Fundamental #6
• Example II: Demonstrate that of All Rectangles with a Given Perimeter, the One with the Largest Area is a Square 24:36
• Example III: Find the Points on the Ellipse 9x² + y² = 9 Farthest Away from the Point (1,0) 35:13
• Example IV: Find the Dimensions of the Rectangle of Largest Area that can be Inscribed in a Circle of Given Radius R 43:10

### Transcription: Optimization Problems I

Hello, welcome back to www.educator.com, and welcome back to AP Calculus.0000 Today, we are going to start talking about optimization and optimization problems,0004 otherwise referred to as maxima and minima with practical application.0010 We have talked about maxima and minima in terms of just functions themselves.0015 Now we are going to apply them to real life situations.0019 There is going to be some quantity that we are going to want to maximize or minimize.0023 In other words, optimize, how to make it the best for our particular situation.0029 The calculus of these problems is actually very simple.0037 Essentially, what you are doing is just taking the first derivative.0040 You are setting it equal to 0 and you are solving.0042 The difficulty with these problems is putting all of this information into an equation.0045 It is the normal problem that people have had with word problems, ever since we were introduced to word problems.0054 In any case, let us just jump right on in.0063 What I'm going to do is the first problem, I'm just going to launch right into it so that you get a sense of what it is.0066 I'm going to quickly discuss what is necessary for these problems, and then we are just going to do more.0071 The only way to make sense of them is to do as many problems as possible.0076 This is going to be the first of those lessons.0080 This problem says, if 1400 m² is available to make a box with a square base 
and no top,0086 find the dimensions of the box that gives the greatest volume.0094 I think I’m going to do this in blue again.0102 Probably, the most important thing to do with all of these optimization problems is draw a picture.0106 Always draw a picture.0112 99% of the time you really need to just draw a picture.0117 Let us see what this is asking.0120 I have got myself a box, let me go ahead and draw a little box here.0122 It is telling me that this box has no top.0133 I want to find the dimensions of the box that gives the greatest volume and also tells me that it has a square base.0137 Therefore, I’m going to call this x, I’m going to call this x.0143 It says nothing about the height, I’m just going to call this h.0146 Find the dimensions of the box that gives the greatest volume.0153 The thing that we are trying to maximize is the volume.0156 All of these problems will always be the same.0159 They are going to ask for some quantity that is maximized or minimized.0161 They are going to give you other information that relates to the problem.0166 The first thing you want to do is find just the general equation for what is being maximized.0170 In this case, it is the volume, greatest volume.0175 What we want to do is, volume, I know here is going to be x² × h.0180 We want to maximize that.0185 When you maximize or minimize something, you are finding the places where the derivative is equal to 0.0189 Once you have an equation, you are going to take the derivative of that equation, set it equal to 0, and solve for x.0195 The problem arises, notice that this is a function of two variables.0200 We cannot do that, this is a single variable calculus.0204 We need to find a way to convert this equation into an equation and just one variable, either h or x.0208 That is going to be our task.0215 This is where the problems tend to get more complicated.0217 Let us see what we can, let us write all of this out.0221 We want to maximize v but it is a function of two variables, mainly x and h.0224 Now we use the other information in the problem to establish a relation between these two variables,0254 so that I can solve for one of those variables.0260 Plug into this one and turn it into a function of one variable, that is essentially all of these problems are like that.0263 There is other information in the problem0272 that allows us to establish a relation between x and h, and that is this.0286 They are telling me that I have a total of 1400 m² total, that means the base, the area of the base, and the 4 sides.0311 The base, this side, this side, that side, and that side, let us write that out0323 The area of the base is going to be x².0329 The area of one of these side panels is going to be xh.0334 There are 4 of them, + 4 xh.0337 The sum of those has to be 1400.0342 That is it, we have a second equation, it relates x and h.0346 Let us solve for either x or h and put it in, not a problem at all.0349 What I'm going to do is I'm going to go ahead and solve for h.0355 4 xh = 1400 - x².0358 Therefore, h = 1400 - x²/ 4x.0364 We put this, we put this h into there, and we turn it into a function of one variable x which we can solve.0381 We put this h into v = x² h to get an equation in one variable.0399 In this case, I chose x.0418 Let us go ahead and do that.0425 We have v is equal to x² × h which is 1400 - x²/ 4x.0428 Cancel that, cancel that, multiply through.0444 We end up with 1400 x – x³/ 4.0448 If you want, you can rewrite it as –x³/ 4 + 350x.0458 It is totally up to you how you want to do it.0467 But 
now I have my equation, I have my v.0469 Now the volume of this box is expressed as a function of a single variable.0479 We know that a function achieves its absolute max or min, in this case, we are talking about a max,0489 I’m just going to leave it as maximum.0515 It achieves its absolute max either at an endpoint of the domain or somewhere in between where the derivative is 0.0518 In other words, a local max/local min.0527 We know that a function achieves its absolute max either at the endpoints of its domain or where f’ is equal to 0.0529 We differentiate this function now.0556 This is the equation of volume, we want to maximize this equation.0562 In order to maximize it, we are going to take the derivative of it, set it equal to 0,0566 and find the places where it either hits a maximum or a minimum.0570 Vx is a function of x is equal to 350x – x³/ 4.0577 V’(x) = 350 - ¾ x².0591 I’m going to set that equal to 0.0598 I have got ¾ x² is equal to 350.0601 When I solve this, I get x² = 1400/3 which gives me x is equal to + or -21.6.0608 We are talking about a distance.0627 Clearly, the negative is not going to be one of the solutions.0628 It is the +21.6 that is going to be the solutions.0632 Let us go over to the next page.0642 First of all, x is a physical length.0644 The -21.6 is not an option.0657 Second, if you rather not think about it physically and have to decide which value that you are going to take, there is another way of doing it.0668 If you prefer a more systematic or analytical approach0680 to excluding a given root or a given possibility, you can do it this way.0702 We said that v(x) is equal to -3/ 4 x³ + 350x.0716 That was the function that we want.0732 That was our original function, -x³.0740 Let me write this again.0747 We said that we had –x³/ 4 + 350x.0752 I will write it this way.0763 I know that when I graph this, I'm looking at this, and this is a cubic function.0765 This is a cubic function and the coefficient of -1/4, the leading coefficient is negative.0772 A normal cubic function begins up here, has two turns and ends down here.0779 This is negative, negative begins up here and ends down here.0790 I already took the derivative and I found that -21.6 and +21.6 are places0804 where it hits a local max or local min because I set the derivative equal to 0.0808 Therefore, I know that -21.6, there is all local min.0814 +21.6, there is a local max.0820 In this particular case, I also know that when x is equal to 0, the function is equal to 0.0823 I know it crosses here.0829 Therefore, I know for a fact that the thing goes like this.0830 Therefore, the maximum is achieved at +21.6.0836 The minimum of the function is achieved at -21.6.0841 We can also use our physical intuition to say that you cannot have, like we did for the first part,0845 like we did for our first consideration, right here.0850 It is a physical length.0853 This is the part of the graph that I'm concerned with.0856 As x gets bigger, there is a certain value of x which happens to be 21.6 where the function –x³/ 4 + 350x is maximized.0859 They gave us the greatest volume.0871 You want to use all the resources at your disposal, if you are dealing with a function.0873 You know what a cubic function looks like, where the negative over the leading coefficient is negative, it looks like this.0877 This tells you systematically, analytically, that -21.6 is not your solution.0885 Not to mention the fact that it physically makes no sense.0891 There are many things that you want to consider.0894 You do not just 
want to do the calculus.0896 Whatever you get, you want to stop and think about if the calculus makes sense.0899 Does your -21.6, does your +21.6 actually makes sense?0904 It does, based on other things that you need to consider.0909 Let us see, where are we, we are not done yet.0916 Let us go ahead.0922 We know that x = 21.6, that is the dimension of our base.0928 For h, h is equal to 1400 - x²/ 4x which is equal to 1400 - 21.6²/ 4 × 21.6.0934 When we do the calculation, we get xh = 10.8.0957 There you go, our box is 21.6 by 21.6 by 10.8.0963 Our unit happens to be in centimeters.0975 There you go, that is it, nice and simple.0978 Let us go ahead and actually show you the particular graph.0983 This is the graph of the function, volume function.0987 This is volume = 1400x – x³/ 4.0992 21.6 is right about there, that is our maximum point.1006 This was the function that we wanted to maximize.1011 In this particular case , we have a certain restriction on the domain.1014 This right here, that is the particular domain of this function.1019 The smallest that x can be is 0, no length.1026 The biggest that x can be is whatever that happens to be, when you set this equal to 0.1030 It turns out that x is equal to about 37.4, that is the other root of this equation.1037 That is the other 0 of that equation so that give us a natural domain.1044 In other words, if x = 0, there is no box.1048 If x = 37.4, there is no box.1052 Between 0 and 37.4, for a value of x, which is the base of the box, x by x, the volume goes up and comes down.1055 There is some x value that maximizes the volume.1067 That x is the 21.6 that we found, local maximum of this function.1070 Again, you can use the graph to help you out to find your domain, to restrict your domain, whatever it is that you need.1077 Let us talk about this a little bit.1087 All optimization problems are fundamentally the same.1089 There is a quantity that is asked to be maximized or minimized.1116 It might be an area, might be a volume.1143 It might be a distance, it might be an angle, whatever it is.1145 There are some quantity that is maximized or minimized.1150 Two, your task is to find a general equation for that quantity, for this quantity.1153 Number 3, if the equation that you get in part 2, if the equation is a function of more than one variable,1172 you use other information in the problem + any other mathematical manipulation you need1197 to find a relation between or among the variables.1236 I say among because you might end up with a general equation that has 3 or 4 variables.1250 And you have to find the relationship among all 3 or 4, not just between the two.1255 Part 4, you use the relations above among the variables1263 to express the desired quantity as a function of one variable, if possible.1285 Again, there might be situations where, we will do when we come up with them, not a problem.1308 I know the thing that you might want to do, this is a little looser but it is always a good idea to do this, if you need to.1317 A lot of this will come up with more experiences in solving these kind of problems.1323 You want to find the domain of the equation.1328 The reason you want to find the domain is,1335 Remember, what we are find here is absolute maximum of a function.1339 The absolute maximum of a function can happen within the domain, at places where it is a local max or min.1344 That is where you set the function, the derivative of a function equal to 0.1349 But you also have to consider the endpoints.1352 If you know the domain, if a 
domain is a closed interval, like it was in the first problem, 0 and 37.4,1355 you are still going to check those points to see if the value of the function that you get is going to be greater.1363 Because we want to find the absolute maximum.1370 Let us say there were two points in an interval, in the domain.1374 Let us go back to the first problem.1380 You had, 0 you have a 21.6, and you have a 37.4.1381 The 21.6 is the answer but you still have to technically check the 0 and the 37.4.1386 Put those values of x into the original equation.1392 You are going to get 0 for the value of the function.1395 When you put 21.6 in, you are actually going to get a number that is the biggest one among the three.1399 You remember when we were doing absolute maxes and absolute mins,1406 we have to check the values at the endpoints to see if maybe f of those values was actually bigger than what it is at a local max or min.1409 Again, the problems will help make more sense of this.1420 And then, once you have all of this information, you find the absolute max or min.1426 You find the absolute max or min.1434 If your domain is not a closed interval, that does not matter.1439 All you need to do is look for the local maxes and mins.1443 That is where you are going to pick one of those to maximize or minimize, whichever is it that you are trying to do.1446 Again, if you have a closed interval, you have to check the end points of the domain.1452 Most important, draw a picture always.1458 Always draw a picture.1471 Let us do some more examples here.1475 Demonstrate that of all rectangles with a given perimeter, the one with the largest area is a square.1477 Pick a random rectangle.1487 I’m going to call this x, I’m going to call this y.1491 In short, demonstrate that the one with the largest area is a square.1496 In short, we must show that y is equal to x, that it is a square.1502 Of all rectangles with a given perimeter.1520 The perimeter, that equals 2x + 2y, and they say of a given perimeter, some constant 5, 10, 20, 30, 86.6, whatever.1523 I'm just going to say c, c stands for a constant.1536 One of the largest area is a square.1540 The general equation for area is xy.1544 The largest area, that is the one they want us to maximize right here.1548 Largest area means maximize this, maximize this.1554 It is a function of two variables.1570 It is a function of two variables, I need a relationship between those two variables x and y,1573 in order for me to turn this into a function of one variable.1577 I have a relationship, that is my relationship right there.1582 I’m going to solve for y and plug it into this equation right over here.1585 I’m going to write 2y = c - 2x.1589 I have y = c - 2x/ 2.1595 I’m going to put this into here.1603 I get the area = c/2 – x.1608 I get the area = cx/2 – x².1622 This is my function, that is the function.1628 Now it is a function, I’m trying to maximize it.1634 I now have the area expressed as a function of one variable, x and x².1636 It is taken into account the perimeter.1641 The c, that is where that comes in.1643 Now I have to differentiate.1646 A’ is equal to c/2 - 2x, I set that equal to 0.1648 When I solve for this, I get 2x = c/2 which implies that x = c/4.1656 I found what x has to be.1673 Let us find y.1678 We said that y is equal to c/2 – x, that is equal to c/2 – c/4.1686 C/2 – c/4, it is equal to c/4.1697 Y does equal x which equals c/4.1705 I have demonstrated that, in order to maximize an area of a given rectangle.1710 I have maximized it by finding the 
derivative of the function of the area.1716 Found the value of x, it turns out that it has to be a square.1721 For a fixed perimeter, the sides have to be the perimeter divided by 4.1725 That is it, square.1730 I have demonstrated what is it that I set out to demonstrate.1733 Let us see here.1740 Notice that I did not explicitly specify a domain.1745 Let us tighten this up a little bit and talk about the domain.1773 Let us tight this up and discuss domain.1781 For a rectangle with a given perimeter c, the domain 0 to c/2.1790 The domain is what the x value can be.1824 If x = 0, if I take the endpoint x = 0.1826 Then, 2 × 0 + 2y is equal to c.1834 2y is equal to c, y = c/2.1847 The area is equal to x × y, that is equal to 0 × c/2, the area is 0.1856 That is this endpoint.1872 If x = c/2, then the perimeter 2 × c/2 + 2y which is equal to c, we get c + 2y = c.1879 We get 2y = 0, we get y = 0.1900 The area equals xy which equals c/2 × 0.1904 Again, the area = 0, our domain is this.1909 X cannot go past c/2 because we already set that the perimeter has to be c.1917 If you have a rectangle where this is c/2 and this is c/2, basically what you have is just a line because there is no y.1926 X, our domain, has to be between 0 and c/2.1943 When we check the endpoints, we got a value of 0.1947 C/4 is in the domain and it happens to be the local max.1952 When you put c/4 into this, you are actually going to get an area that is a number.1957 We know that that number is the maximum, precisely because of how we did it.1973 We took the derivative, we set it equal to 0, and that is what happened.1978 Let us go ahead and actually take a look at this.1984 In both cases, let me actually draw it out.1987 In both cases that we just did for the endpoints, the area was equal to 0.1993 Between 0 and c/2, there is a number such that a is maximized.2005 That number was c/4.2024 What did we say our function was, our a’, our a?2029 We said that our area function of x was equal to -x² + cx/2.2034 This is a quadratic function where the leading coefficient is negative.2041 I know that the graph goes like this.2047 I know that there are some point where I’m going to hit a maximum, that is what is going on here.2049 This is my 0, this is my c/2.2055 Let us see what this actually looks like.2058 I have entered the function cx/2 - x².2061 I have taken a particular value of c = 15.2064 I end up with this graph.2068 Notice 0, c/2, 15/2 is 7.5, that 7.5.2069 This right here, this is c/2, this is c/4.2077 That is why I hit my max.2080 This is the function that I’m maximizing.2082 It happens to be the quadratic function where the leading coefficient is negative.2086 Therefore, I know that this is the shape.2090 If I do not know it, let me use a graphical utility to help me out.2092 If I need a graphical utility to help me get the domain, that is fine.2095 I do not necessarily need this, I already know that if my perimeter is c, the most that any one side can be is c/2.2099 Therefore, my domain is 0 to c/2, hope that makes sense.2107 Let us see what we have got here.2115 What is our next one?2119 Find the points on the ellipse 9x² + y² = 9, farthest away from the point 1,0.2120 Let us go ahead and draw this out.2130 I got myself an ellipse.2134 I have got 9x² + y² = 9 x²/ 1² + y²/ 3² is equal to 1.2141 I have got, this is 1, this is 1, this is 1, 2, 3, 1, 2, 3.2158 I have an ellipse that looks like this.2166 Find the points on the ellipse farthest away from the point 1,0.2172 Here is my point 1,0, I need to find the points on the 
ellipse that are the farthest away from this.2176 Just eyeballing it, I'm guessing it is somewhere around here.2181 We will try to maximize this distance right here.2187 We want to maximize the distance from the point 1,0 to some random point (x,y) on the ellipse, that satisfies this equation.2194 We know we are going to have two answers.2220 We already know that.2222 This is going to be (x,y)₁ and (x,y)₂.2225 Probably you are going to have the same value of x, different values of y.2228 We want to maximize the distance, the distance formula.2233 The distance formula = (x₂ - x₁)² + (y₂ - y₁)², all under the radical sign.2239 Let me put it in.2252 I have (x - 1)² + (y - 0)².2253 This is going to give me (x - 1)² + y², all under the radical.2270 Let us move on to the next one.2282 I have got d is equal to, I expand the (x - 1)².2285 I get x² - 2x + 1 + y², all under the radical.2290 I know that 9x² + y² = 9.2300 Therefore, y² = 9 - 9x².2305 I put that into here, I find my d is equal to x² - 2x + 1 + 9 - 9x², all under the radical.2310 Therefore, I get d = √(-8x² - 2x + 10).2326 This is my distance function expressed as a single variable x.2337 This is what I want to maximize.2341 Maximize this, we maximize it, we take the derivative and set it equal to 0.2346 D′(x) is going to equal ½ × (-8x² - 2x + 10)^(-1/2) × the derivative of what is inside, which is -16x - 2.2354 D′(x), when I rearrange this, I get (-8x - 1)/√(-8x² - 2x + 10).2376 I set that equal to 0.2393 What I get is -8x - 1 = 0.2396 When I solve this, I get 8x = -1, x = -1/8.2400 I have found my x, my x value is -1/8.2410 Now I need to find my y so that I can find what the two points are.2414 I know that the function was 9x² + y² = 9.2424 I'm going to go 9 × (-1/8)² + y² = 9.2431 I get 9/64 + y² = 9, that gives me y² = 9 - 9/64.2441 I get y² = 567/64.2457 Then, I get y = + or -√(567/64), which equals + or -2.97.2465 Therefore, I have (-1/8, -2.97), that is one point, and I have (-1/8, +2.97).2486 These two points are the points that are on the ellipse, farthest away from the point 1,0.2501 Once again, I have an ellipse, this is 1,0.2510 The points are here and they are here.2516 Those are the points that are farthest away from that.2518 This is the function that we have to maximize.2526 This graph, this is not the ellipse.2528 This is the function we have to maximize.2531 This is the -8x² - 2x + 10, under the radical.2535 This is the function that we maximized.2542 It happens to hit a maximum at -1/8.2545 Be careful, this is not the ellipse, this is the function that you end up deriving, that you needed to maximize.2554 We needed to maximize the distance.2563 It actually gives me the x value.2567 Once I have the x value, I put it back into the original equation for the ellipse to find out where the y values are for the ellipse.2570 There is a lot to keep track of.2579 My best advice with all of math and science is go slowly, that is all.2582 Let us do one last example here.2590 Find the dimensions of the rectangle of the largest area.2592 We are going to be maximizing area, know that already.2595 That can be inscribed in a circle of a given radius r.2598 Let us draw it out.2602 We have a circle and we are going to try to inscribe some random rectangle in it.2604 Probably not going to be the best drawing in the world, sorry about that.2612 It tells me that the radius is r.2614 We are going to maximize area.2620 I'm going to call this x, and I'm going to call this side y, of the rectangle.2622 Area is equal to x × y.2627 We have our general equation, we want to maximize this.2631 I have two variables, 
I need to find the function of one variable.2639 I have to find the relationship between x and y.2642 I have a relationship between x and y.2645 If I draw this little triangle here, this side is y divided by 2 and this side is x/2.2648 Therefore, I have by the Pythagorean theorem, (x/2)² + (y/2)² = r².2663 I have got x²/4 + y²/4 = r², which gives me x² + y² = 4r²,2674 which gives me y² = 4r² - x², which gives me y equal to √(4r² - x²).2688 This is what I plug into here, to this.2701 Therefore, I get an area which is equal to x × √(4r² - x²).2706 Now I have a function of one variable.2716 Take the derivative and set it equal to 0.2719 A′(x) is equal to this × the derivative of that, x × ½ × (4r² - x²)^(-1/2) × the derivative of what is inside.2722 4r² is just a constant, so its derivative is 0.2737 It is only -2x, + that × the derivative of that.2740 We get just √(4r² - x²) × 1.2745 I rearranged this to get, 2 and 2 cancel, -x².2752 I get -x² over √(4r² - x²), + √(4r² - x²).2762 Then, I find myself a nice common denominator.2777 I end up with A′(x) having -x² + 4r² - x² on top.2779 I hope the algebra is not giving you guys any grief.2788 I just found the common denominator, over √(4r² - x²).2792 This is the derivative we said is equal to 0.2797 When we set it equal to 0, I have the top, -x² - x² + 4r².2801 I end up with, this is 0, so the denominator goes away.2806 I'm left with -2x² + 4r² = 0.2812 2x² = 4r², x² = 2r².2820 Therefore, x is equal to r√2.2829 Again, I take the positive root because I'm talking about a distance here.2836 x = r√2.2842 We know what y is, we said that y is equal to √(4r² - x²), which is equal to 4r² - (r√2)²,2844 all under the radical, which is equal to 4r² - 2r², all under the radical.2858 That equals √(2r²), which is equal to r√2.2869 y is also equal to r√2.2878 Again, I will just say y is equal to x.2891 In other words, the rectangle of largest area that you can inscribe in a circle is a square,2899 where the sides of the square are equal to the radius of the circle × √2.2908 That is what we have found.2914 Let us go ahead and show you what it looks like.2919 I have the function, the area function that I try to maximize,2922 x√(4r² - x²).2926 I picked a particular value of r; the radius of the circle happens to equal 2.2931 This is the function.2935 Again, x is a physical distance, really, so our domain is here and here.2938 I set the function equal to 0 to find the endpoints.2946 When I check the endpoints, when I put the endpoints into the area function, I'm going to get an area of 0.2949 0 does not work, 0 does not work.2955 However, there is a point someplace here.2957 What we found is r√2.2960 When I put r√2 into the function for area, I end up getting the largest area.2964 This graph confirms it.2974 The maximum of this graph, the maximum of the area function, happens at x = r√2.2975 It happens where the derivative of this function = 0.2981 I hope that helped.2987 Do not worry about it, in the next lesson we are going to be continuing to do more optimization problems,2988 more complicated optimization problems.2993 Thank you so much for joining us here at www.educator.com.2995 We will see you next time, bye.2998
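A quick way to double-check Example IV is to redo the derivative-equals-zero step symbolically. The following is a minimal SymPy sketch of that check, not part of the original lecture; the symbols x and r follow the lecture's notation:

```python
# Minimal SymPy check of Example IV: maximize A(x) = x*sqrt(4r^2 - x^2),
# the area of a rectangle inscribed in a circle of radius r.
import sympy as sp

x, r = sp.symbols('x r', positive=True)

A = x * sp.sqrt(4*r**2 - x**2)        # y = sqrt(4r^2 - x^2) already substituted in
critical = sp.solve(sp.diff(A, x), x)  # set A'(x) = 0
print(critical)                        # [sqrt(2)*r], i.e. x = r*sqrt(2)

y = sp.sqrt(4*r**2 - critical[0]**2)
print(y)                               # sqrt(2)*r, so y = x and the rectangle is a square
```

This confirms x = y = r√2 for any radius r, not just the r = 2 used for the graph at the end of the lecture.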
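Example III can be checked the same way. One design note: maximizing the squared distance instead of the distance itself avoids the square root entirely and gives the same critical x, since squaring is monotonic for non-negative values. A minimal sketch of that check, again not from the lecture itself:

```python
# Minimal SymPy check of Example III: the points on 9x^2 + y^2 = 9
# farthest from (1, 0), working with the squared distance.
import sympy as sp

x = sp.symbols('x', real=True)

d2 = (x - 1)**2 + (9 - 9*x**2)        # y^2 = 9 - 9x^2 substituted into (x-1)^2 + y^2
critical = sp.solve(sp.diff(d2, x), x)
print(critical)                        # [-1/8], matching the lecture

y = sp.sqrt(9 - 9*sp.Rational(-1, 8)**2)
print(sp.N(y))                         # ~2.976, matching the lecture's +/-2.97
```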
http://stats.stackexchange.com/questions/15745/metric-for-probability-based-classification
Metric for probability based classification

I am doing a system for classifying documents. The project demands the use of probability-based output, so a sample will have a probability of belonging to each class. For now I use logistic regression, but this could be subject to change, so I don't want to do an R^2 approximation. I also don't want to use standard classification metrics like the F-measure, because they don't work with probabilities. I have no idea what the customary metric is in this situation. Any ideas?

There are actually two things to evaluate with probabilities: probability-based ranking performance and probability estimation performance.

A common evaluation method for probability-based ranking is the Area Under the ROC curve (AUROC). This measure was developed for 2-class problems but can also be extended to multi-class problems; for example, have a look at [1] to understand ROC analysis. There are also easier methods to calculate the ROC curve; see [2], related to Probability Estimation Trees (PETs).

Similarly, the Brier Score, also known as the Mean Square Error, is suited to evaluating probability estimation accuracy. It has been shown that the Brier Score can be decomposed into Calibration and Refinement [3]. These measures are suited to PETs, but you can reproduce them by discretizing your probability into buckets. The Calibration component captures how well the PET represents the true distribution of the data, while the Refinement component captures how much the model discriminates between classes. In particular, the Calibration measure has an intuitive graphical interpretation as the Reliability Plot, which shows record subset probabilities on the training data and the corresponding probabilities on the test data. The Refinement measure has its graphical transposition too, called the Sharpness Histogram. See [4] for 2-class problems; in the first paragraphs the Brier Score, Calibration and Refinement are introduced. They also use Negative Cross Entropy, which is similar to the Brier Score. I think it should be easy to find extensions for multi-class problems.

[1] T. Fawcett, "An introduction to ROC analysis," Pattern Recogn. Lett., vol. 27, no. 8, pp. 861-874, 2006.
[2] N. Chu, L. Ma, P. Liu, Y. Hu, and M. Zhou, "A comparative analysis of methods for probability estimation tree," W. Trans. on Comp., vol. 10, pp. 71-80, March 2011.
[3] G. Blattenberger and F. Lad, "Separating the Brier score into calibration and refinement components: A graphical exposition," vol. 39, pp. 26-32, 1985.
[4] K. Zhang, W. Fan, B. Buckles, X. Yuan, and Z. Xu, "Discovering unrevealed properties of probability estimation trees: On algorithm selection and performance explanation," Data Mining, IEEE International Conference on, vol. 0, pp. 741-752, 2006.

The classic error metric for probabilistic classifiers is the cross-entropy; for a two-class classifier it is

$L = -\sum_{i=1}^n \left[ t_i\log(y_i) + (1-t_i)\log(1-y_i) \right]$

where there are n test patterns, $t_i \in [0,1]$ is the target for the $i^{th}$ test pattern and $y_i$ is the output of the model for the $i^{th}$ test pattern. The cross-entropy is the negative log-likelihood used in fitting a logistic regression model (or kernel logistic regression or neural networks etc.), so it is fairly natural to use it as the test metric as well.
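A small NumPy sketch of this cross-entropy may help make the formula concrete; the label and probability arrays below are made-up toy values, not anything from the question:

```python
# Toy computation of the cross-entropy L defined above.
import numpy as np

t = np.array([1, 0, 1, 1, 0])              # targets (true class labels)
y = np.array([0.9, 0.2, 0.6, 0.8, 0.1])    # model outputs, P(class = 1)

L = -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))
print(L)    # about 1.17; lower is better, 0 only for perfect confident predictions
```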
Unlike the AUROC, it takes the calibration of the probabilities into account rather than just the ranking (which may or may not be important depending on the application), but it goes off to infinity if the classifier gets the answer wrong with very high confidence. A closely related metric is the mean predictive information, which for a two-class problem is $I = \frac{1}{n}\sum_{i=1}^n\left[t_i\log_2(y_i) + (1-t_i)\log_2(1-y_i)\right]+1$, which is very similar but conveniently gives a result that normally lies between 0 and 1 bits; this is more easily interpretable than the cross-entropy.
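To make these formulas concrete, here is a small NumPy sketch; the function names, the clipping constant, and the example arrays are inventions for illustration rather than anything from the answers above, and the cross-entropy is written as a per-pattern mean rather than the raw sum:

```python
import numpy as np

def cross_entropy(t, y, eps=1e-15):
    """Mean cross-entropy (negative log-likelihood) of 2-class probabilities."""
    y = np.clip(y, eps, 1 - eps)  # keep log() finite for overconfident errors
    return -np.mean(t * np.log(y) + (1 - t) * np.log(1 - y))

def brier_score(t, y):
    """Brier score: mean squared error of the predicted probabilities."""
    return np.mean((y - t) ** 2)

def mean_predictive_information(t, y, eps=1e-15):
    """Mean predictive information in bits; normally between 0 and 1."""
    y = np.clip(y, eps, 1 - eps)
    return np.mean(t * np.log2(y) + (1 - t) * np.log2(1 - y)) + 1.0

t = np.array([1.0, 0.0, 1.0, 1.0, 0.0])    # targets
y = np.array([0.9, 0.2, 0.7, 0.6, 0.1])    # predicted P(class 1)
print(cross_entropy(t, y), brier_score(t, y), mean_predictive_information(t, y))
```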
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8640225529670715, "perplexity": 1424.164022548226}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163036037/warc/CC-MAIN-20131204131716-00087-ip-10-33-133-15.ec2.internal.warc.gz"}
https://physexams.com/blog/Superposition-principle_5
Electrostatic

# Superposition principle

The net electric field or force of a group of point charges at each point in space is the vector sum of the electric fields due to the individual charges at that point. In mathematical form it is written as ${\vec{E}}_{net}={\vec{E}}_1+{\vec{E}}_2+\dots +{\vec{E}}_n$.
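As an illustration, a short NumPy sketch that applies the principle to point charges via Coulomb's law; the function name, charge values, and field point are made up for the example:

```python
import numpy as np

K = 8.99e9  # Coulomb constant, N*m^2/C^2

def net_field(charges, positions, point):
    """Vector sum of point-charge fields, E_i = k*q_i*(r - r_i)/|r - r_i|^3."""
    point = np.asarray(point, dtype=float)
    E = np.zeros(3)
    for q, r in zip(charges, positions):
        d = point - np.asarray(r, dtype=float)
        E += K * q * d / np.linalg.norm(d) ** 3
    return E

# two opposite 1 nC charges on the x-axis; field evaluated above the origin
print(net_field([1e-9, -1e-9], [[0.1, 0.0, 0.0], [-0.1, 0.0, 0.0]], [0.0, 0.0, 0.1]))
```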
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8846173286437988, "perplexity": 146.5119653183623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400205950.35/warc/CC-MAIN-20200922094539-20200922124539-00230.warc.gz"}
https://www.linstitute.net/archives/700789
# AQA A Level Maths: Statistics Revision Notes 4.3.3 Standard Normal Distribution

### Standard Normal Distribution

#### What is the standard normal distribution?

• The standard normal distribution is a normal distribution where the mean is 0 and the standard deviation is 1
• It is denoted by Z

#### Why is the standard normal distribution important?

• Any normal distribution curve can be transformed to the standard normal distribution curve by a horizontal translation and a horizontal stretch
• Therefore we have the relationship: if X ~ N(μ, σ²) then Z = (X − μ)/σ ~ N(0, 1²)

### Finding Sigma and Mu

#### How do I find the mean (μ) or the standard deviation (σ) if one of them is unknown?

• You will be given x and one of the parameters (μ or σ) in the question
• STEP 1: Sketch the distribution and identify the given probability
• STEP 2: Use the inverse normal function to find the z-value for the given probability
• STEP 3: Substitute the known values into z = (x − μ)/σ (you will have calculated z in STEP 2)
• STEP 4: Solve the equation

#### How do I find the mean (μ) and the standard deviation (σ) if both of them are unknown?

• If both of them are unknown then you will be given two probabilities for two specific values of x
• The process is the same as above
• You will now be able to calculate two z-values, giving two simultaneous equations in μ and σ

#### Worked Example

It is known that the times, in minutes, taken by students at a school to eat their lunch can be modelled using a normal distribution with standard deviation 4 minutes. Given that 10% of students at the school take less than 12 minutes to eat their lunch, find the mean time taken by the students at the school.

#### Exam Tip

• These questions are normally given in context so make sure you identify the key words in the question. Check whether your z-values are positive or negative and be careful with signs when rearranging.
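The page does not show the solution to the worked example; as a check (this snippet is an addition to the notes, assuming the model X ~ N(μ, 4²) with P(X < 12) = 0.10), SciPy's inverse normal gives the mean directly:

```python
from scipy.stats import norm

z = norm.ppf(0.10)       # z with P(Z < z) = 0.10, about -1.2816
mu = 12 - 4 * z          # rearranged from z = (x - mu) / sigma
print(round(mu, 2))      # about 17.13 minutes
```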
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.945034921169281, "perplexity": 910.8715045358265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00271.warc.gz"}
http://mathhelpforum.com/trigonometry/164486-finding-value-sin-theta-value-cos-theta.html
# Thread: Finding value of sin(theta) from value of cos(theta).

1. ## Finding value of sin(theta) from value of cos(theta).

I've been staring at this question...for so long.

If $\displaystyle \cos{\theta} = \frac{1}{3}$, find the value of $\displaystyle 6\sin^2{\theta} - 6$.

2. Use the Pythagorean Identity: $\displaystyle \sin^2{\theta} + \cos^2{\theta} = 1$. Substitute $\displaystyle \cos{\theta}$ and then rearrange.

3. Thanks for the tip!!
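For completeness (not part of the original thread): substituting $\displaystyle \sin^2{\theta} = 1 - \cos^2{\theta}$ gives $\displaystyle 6\sin^2{\theta} - 6 = -6\cos^2{\theta} = -\frac{2}{3}$, which a one-line numerical check confirms:

```python
import numpy as np

theta = np.arccos(1 / 3)            # any angle with cos(theta) = 1/3
print(6 * np.sin(theta) ** 2 - 6)   # -0.666..., i.e. -2/3
```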
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8113537430763245, "perplexity": 5953.506056146005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808742.58/warc/CC-MAIN-20171124180349-20171124200349-00416.warc.gz"}
https://aimsciences.org/article/doi/10.3934/cpaa.2015.14.811
American Institute of Mathematical Sciences

Communications on Pure & Applied Analysis, May 2015, 14(3): 811-823. doi: 10.3934/cpaa.2015.14.811

Uniform stability of the Boltzmann equation with an external force near vacuum

Zhigang Wu 1 and Wenjun Wang 2
1 Department of Applied Mathematics, Donghua University, Shanghai 201620, China
2 College of Science, University of Shanghai for Science and Technology, Shanghai 200093

Received August 2010; Revised January 2015; Published March 2015

The temporal uniform $L^1(x,v)$ stability of mild solutions for the Boltzmann equation with an external force is considered. We give a unified proof of the stability for two kinds of forces. Firstly, we extend the soft potential case in [9] to both the soft and hard cases. Secondly, we weaken the condition on the force in [13]. Furthermore, we give some new examples satisfying the constructive conditions on the force in [11].

Citation: Zhigang Wu, Wenjun Wang. Uniform stability of the Boltzmann equation with an external force near vacuum. Communications on Pure & Applied Analysis, 2015, 14 (3): 811-823. doi: 10.3934/cpaa.2015.14.811

References:
[1] R. J. Alonso, Existence of global solutions to the Cauchy problem for the inelastic Boltzmann equation with near-vacuum data, Indiana Univ. Math. J., 58 (2009), 999. doi: 10.1512/iumj.2009.58.3506.
[2] L. Arkeryd, Stability in $L^1$ for the spatially homogeneous Boltzmann equation, Arch. Rational Mech. Anal., 103 (1988), 151. doi: 10.1007/BF00251506.
[3] N. Bellomo, A. Palczewski and G. Toscani, Mathematical Topics in Nonlinear Kinetic Theory, World Scientific, 1988.
[4] N. Bellomo and G. Toscani, On the Cauchy problem for the nonlinear Boltzmann equation: Global existence, uniqueness and asymptotic behavior, J. Math. Phys., 26 (1985), 334. doi: 10.1063/1.526664.
[5] N. Bellomo, M. Lachowicz, A. Palczewski and G. Toscani, On the initial value problem for the Boltzmann equation with a force term, Transport Theory Statist. Phys., 18 (1989), 87. doi: 10.1080/00411458908214500.
[6] C. Cercignani, The Boltzmann Equation and Its Applications, Springer, 1988. doi: 10.1007/978-1-4612-1039-9.
[7] C. Cercignani, R. Illner and C. Stoica, On diffusive equilibria in generalized kinetic theory, J. Statist. Phys., 105 (2001), 337. doi: 10.1023/A:1012246513712.
[8] M. Chae and S. Y. Ha, Stability estimates of the Boltzmann equation with quantum effects, Contin. Mech. Thermodyn., 17 (2006), 511. doi: 10.1007/s00161-006-0012-y.
[9] C. H. Cheng, Uniform stability of solutions of Boltzmann equation for soft potential with external force, J. Math. Anal. Appl., 352 (2009), 724. doi: 10.1016/j.jmaa.2008.11.027.
[10] Y. K. Cho and B. J. Yu, Uniform stability estimates for solutions and their gradients to the Boltzmann equation: A unified approach, J. Differ. Eqns., 245 (2008), 3615. doi: 10.1016/j.jde.2008.03.005.
[11] R. J. Duan, T. Yang and C. J. Zhu, Global existence to the Boltzmann equation with external force in infinite vacuum, J. Math. Phys., 46 (2005). doi: 10.1063/1.1899985.
[12] R. J. Duan, T. Yang and C. J. Zhu, Boltzmann equation with external force and Vlasov-Poisson-Boltzmann system in infinite vacuum, Discrete Contin. Dyn. Syst., 16 (2006), 253. doi: 10.3934/dcds.2006.16.253.
[13] R. J. Duan, T. Yang and C. J. Zhu, $L^1$ and BV-type stability of the Boltzmann equation with external forces, J. Differ. Eqns., 227 (2006), 1. doi: 10.1016/j.jde.2006.01.010.
[14] R. J. Duan, M. Zhang and C. J. Zhu, $L^1$ stability for the Vlasov-Poisson-Boltzmann system around vacuum, Math. Model Meth. Appl. Sci., 16 (2006), 1505. doi: 10.1142/S0218202506001613.
[15] R. Glassey, The Cauchy Problem in Kinetic Theory, SIAM, 1996. doi: 10.1137/1.9781611971477.
[16] R. Glassey, Global solutions to the Cauchy problem for the relativistic Boltzmann equation with near-vacuum data, Comm. Math. Phys., 26 (2006), 705. doi: 10.1007/s00220-006-1522-y.
[17] Y. Guo, The Vlasov-Poisson-Boltzmann system near vacuum, Comm. Math. Phys., 218 (2001), 293. doi: 10.1007/s002200100391.
[18] S. Y. Ha, $L^1$ stability of the Boltzmann equation for the hard sphere model, Arch. Rational Mech. Anal., 173 (2004), 25. doi: 10.1007/s00205-004-0321-x.
[19] S. Y. Ha, Nonlinear functionals of the Boltzmann equation and uniform stability estimates, J. Differ. Eqns., 215 (2005), 178. doi: 10.1016/j.jde.2004.07.022.
[20] K. Hamdache, Thèse de doctorat d'état de Paris VI, 1986.
[21] K. Hamdache, Existence in the large and asymptotic behaviour for the Boltzmann equation, Japan. J. Appl. Math., 2 (1984), 1. doi: 10.1007/BF03167035.
[22] R. Illner and M. Shinbrot, The Boltzmann equation, global existence for a rare gas in an infinite vacuum, Comm. Math. Phys., 95 (1984), 217.
[23] S. Kaniel and M. Shinbrot, The Boltzmann equation: I. Uniqueness and local existence, Comm. Math. Phys., 58 (1978), 65.
[24] X. G. Lu, Spatial decay solutions of the Boltzmann equation: converse properties of long time limiting behavior, SIAM J. Math. Anal., 30 (1999), 1151. doi: 10.1137/S0036141098334985.
[25] J. Polewczak, Classical solution of the nonlinear Boltzmann equation in all $R^3$: asymptotic behavior of solutions, J. Stat. Phys., 50 (1988), 611. doi: 10.1007/BF01026493.
[26] M. Tabata and N. Eshima, Decay of solutions to the mixed problem for the linearized Boltzmann equation with a potential term in a polyhedral bounded domain, Rend. Sem. Mat. Univ. Padova, 103 (2000), 133.
[27] G. Toscani, H-theorem and asymptotic trend of the solution for a rarefied gas in a vacuum, Arch. Rational Mech. Anal., 102 (1988), 231. doi: 10.1007/BF00281348.
[28] Z. G. Wu, $L^1$ and BV-type stability of the inelastic Boltzmann equation near vacuum, Continuum Mech. Thermodyn., 22 (2010), 239. doi: 10.1007/s00161-009-0127-z.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7247502207756042, "perplexity": 5125.3562997233175}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574665.79/warc/CC-MAIN-20190921211246-20190921233246-00144.warc.gz"}
https://mathoverflow.net/questions/203979/could-we-extend-the-exact-sequence-k0x-to-k-0x-to-k-0d-sgx-to-0-to
# Could we extend the exact sequence $K^0(X)\to K_0(X)\to K_0(D_{sg}(X))\to 0$ to the left?

Let $X$ be a variety over a field $k$. We have the bounded derived category of coherent sheaves $D^b_{coh}(X)$ and the derived category of perfect complexes $Perf(X)$. It is clear that $Perf(X)$ is a strictly full triangulated subcategory of $D^b_{coh}(X)$. Then, following Orlov 2003, we define the triangulated category of singularities of $X$ as the quotient of $D^b_{coh}(X)$ by $Perf(X)$, i.e. $$D_{sg}(X)=D^b_{coh}(X)/Perf(X).$$ Recall that we call $\mathcal{A}\to \mathcal{B}\to\mathcal{C}$ an exact sequence of triangulated categories if the composition sends $\mathcal{A}$ to zero, $\mathcal{A}\to \mathcal{B}$ is fully faithful and coincides (up to equivalence) with the subcategory of those objects in $\mathcal{B}$ which become zero in $\mathcal{C}$, and the induced functor $\mathcal{B}/\mathcal{A}\to \mathcal{C}$ is an equivalence. It is easy to verify that $$Perf(X)\to D^b_{coh}(X)\to D_{sg}(X)$$ is an exact sequence of triangulated categories. On the other hand, we have the Grothendieck groups of the above triangulated categories. In more detail, we define $K^0(X)$ as the Grothendieck group of $Perf(X)$, $K_0(X)$ as the Grothendieck group of $D^b_{coh}(X)$, and $K_0(D_{sg}(X))$ as the Grothendieck group of $D_{sg}(X)$. Then from the exact sequence $Perf(X)\to D^b_{coh}(X)\to D_{sg}(X)$ we get an exact sequence of abelian groups $$K^0(X)\to K_0(X)\to K_0(D_{sg}(X))\to 0.$$ See Schlichting 2008, Exercise 3.1.6. We would like to extend the above exact sequence to the left via higher algebraic K-theory. However, higher K-theory is not defined on triangulated categories alone. Nevertheless, we have $K^i(X)$ and $K_i(X)$ for $i\geq 1$ in the framework of complicial exact categories. $\textbf{My question}$ is: could we define the higher K-theory of $D_{sg}(X)$ and get a long exact sequence $$\ldots \to K^i(X)\to K_i(X)\to K_i(D_{sg}(X))\to K^{i-1}(X)\to \ldots ?$$

The exact sequence of triangulated categories $$Perf(X)\to D^b_{coh}(X)\to D_{sg}(X)$$ may be lifted to an exact sequence of stable $\infty$-categories or dg-categories in the sense of BGT: choose an enhancement of $D^b_{coh}(X)$, take the induced enhancement on the subcategory $Perf(X)$, and define $D_{sg}(X)$ to be the cofibre of the inclusion in the $\infty$-category of small stable $\infty$-categories. Algebraic K-theory can be defined at the level of stable $\infty$-categories, and it is an additive invariant in the sense of BGT (see Prop. 7.10 of loc. cit.); this means that it sends split exact sequences to (co)fibre sequences of spectra. Its nonconnective version $\mathbb{K}$ has the stronger property of being a localizing invariant, which means that it sends arbitrary exact sequences to cofibre sequences of spectra. Hence applying nonconnective K-theory to the exact sequence above, one gets a (co)fibre sequence of nonconnective K-theory spectra $$\mathbb{K}(Perf(X))\to \mathbb{K}(D^b_{coh}(X))\to \mathbb{K}(D_{sg}(X)).$$ The first term is identified with the Bass-Thomason-Trobaugh K-theory of $X$, and the second is nonconnective G-theory (nonconnective K-theory of the abelian category of coherent sheaves). These agree with their connective versions on nonnegative homotopy groups, so the induced long exact sequence on homotopy groups gives what you are looking for.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9903173446655273, "perplexity": 150.23790218233606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251705142.94/warc/CC-MAIN-20200127174507-20200127204507-00368.warc.gz"}
http://www.jiskha.com/display.cgi?id=1275020881
# Homework Help: Chemistry

Posted by Jaden on Friday, May 28, 2010 at 12:28am.

Assuming complete dissociation of the solute, how many grams of KNO3 must be added to 275 mL of water to produce a solution that freezes at -14.5 C? The freezing point for pure water is 0.0 C and Kf is equal to 1.86 C/m.

* Use ΔTf = Kf*i*m

• Chemistry - bobpursley, Friday, May 28, 2010 at 8:27am

i = 2
m = grams KNO3/(molar mass KNO3 * 0.275)

Find grams KNO3 from your formula.
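Following bobpursley's outline, a quick numerical check (this snippet and the molar mass of KNO3 are additions to the thread, assuming 275 mL of water is about 0.275 kg):

```python
# delta_Tf = Kf * i * m, with m = (grams / molar_mass) / kg_water
dTf, Kf, i = 14.5, 1.86, 2        # deg C, deg C/m, ions per KNO3 (K+ and NO3-)
kg_water = 0.275                  # 275 mL of water, density ~1 g/mL
molar_mass = 101.1                # g/mol for KNO3

m = dTf / (Kf * i)                # molality, mol solute per kg solvent
grams = m * kg_water * molar_mass
print(round(grams, 1))            # about 108.4 g
```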
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8961848616600037, "perplexity": 3914.343407806341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768741.99/warc/CC-MAIN-20141217075248-00090-ip-10-231-17-201.ec2.internal.warc.gz"}
https://hilbertthm90.wordpress.com/2009/09/09/what-i-talk-about-when-i-talk-about-orientation/
Old Standby 2: $\mathbb{R}P^n$ is orientable iff $n$ is odd. First note that the antipodal map $a:\mathbb{R}^{n+1}\to\mathbb{R}^{n+1}$, $x\mapsto -x$, is orientation preserving if $n$ is odd and orientation reversing if $n$ is even, just because in coordinates it is the matrix with $-1$'s on the diagonal, whose determinant is $(-1)^{n+1}$. Now if we make our natural identifications between $\mathbb{R}^{n+1}$ as a manifold and as the vector space that is its tangent space at a given point, then we see that if we restrict $a$ to $S^n$ embedded in $\mathbb{R}^{n+1}$, the orientation preserving/reversing still holds. This is just because if $(v_1, \ldots, v_n)$ is an oriented basis at $p\in S^n$, then $(p, v_1, \ldots, v_n)$ is an oriented basis at that same point in $\mathbb{R}^{n+1}$. Thus the orientation at $-p$ of $a(p, v_1, \ldots , v_n)=(-p, -v_1, \ldots, -v_n)$ is $(-v_1, \ldots, -v_n)$. Now suppose that $n$ is even and that $\mathbb{R}P^n$ has an orientation. Let $\pi: S^n\to\mathbb{R}P^n$ be the standard quotient map. The orientation of $\mathbb{R}P^n$ induces an orientation on $S^n$ by letting an ordered basis $(v_1, \ldots , v_n)\subset T_pS^n$ be positively oriented if $(d\pi_p(v_1), \ldots, d\pi_p(v_n))$ is positively oriented. But the induced map of $a$ on $\mathbb{R}P^n$ is just the identity. Thus $a$ is orientation preserving on $S^n$, a contradiction, since $n$ is even. For the other direction, we need to put an orientation on $\mathbb{R}P^n$ using $S^n$. Suppose now that $n$ is odd. Define a basis $(w_1, \ldots , w_n)\subset T_{\pi(p)}\mathbb{R}P^n$ to be positively oriented if there exists a positively oriented basis $(v_1, \ldots , v_n)\subset T_pS^n$ such that $(d\pi_p(v_1), \ldots , d\pi_p(v_n))=(w_1, \ldots , w_n)$. We just need to make sure this is a well-defined choice. But it is, since the fibers of $\pi(p)$ are $p$ and $-p$, and we get from one to the other through the antipodal map, which is orientation preserving. So we're done. Let's do some analysis of this. First off, there was nothing special about this particular $\pi$. So we actually proved that if $\pi: N\to M$ is a smooth covering and $M$ is orientable, then $N$ is also orientable. I know of two other ways to prove this. Both require that antipodal map observation first. One way is to prove the more general fact that if $M$ is a connected, oriented, smooth manifold and $G$ is a discrete group acting smoothly, freely, and properly on $M$, then $M/G$ is orientable iff $x\mapsto g\cdot x$ is an orientation preserving diffeo for all $g\in G$. In this case, $G=\{\pm 1\}$: the $1$ action is the identity and the $-1$ action is the antipodal map. The other way is far more elegant. It is the algebraic topology method (actually I believe you can prove the stronger statement of the second method using this way). I haven't quite reworked this way out yet, but it just involves pulling back nowhere vanishing $n$-forms or something.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 45, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9736495018005371, "perplexity": 82.2787523191397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590069.15/warc/CC-MAIN-20180718060927-20180718080927-00612.warc.gz"}
https://www.hydrol-earth-syst-sci.net/22/5741/2018/
Hydrology and Earth System Sciences: an interactive open-access journal of the European Geosciences Union
Hydrol. Earth Syst. Sci., 22, 5741-5758, 2018. https://doi.org/10.5194/hess-22-5741-2018
Research article | 08 Nov 2018

Bias correction of simulated historical daily streamflow at ungauged locations by using independently estimated flow duration curves

William H. Farmer 1, Thomas M. Over 2, and Julie E. Kiang 3
• 1 US Geological Survey, Denver, Colorado, USA
• 2 US Geological Survey, Urbana, Illinois, USA
• 3 US Geological Survey, Reston, Virginia, USA

Abstract

In many simulations of historical daily streamflow, distributional bias arising from the distributional properties of residuals has been noted. This bias often presents itself as an underestimation of high streamflow and an overestimation of low streamflow. Here, 1168 streamgages across the conterminous USA, having at least 14 complete water years of daily data between 1 October 1980 and 30 September 2013, are used to explore a method for rescaling simulated streamflow to correct the distributional bias. Based on an existing approach that separates the simulated streamflow into components of temporal structure and magnitude, the temporal structure is converted to simulated nonexceedance probabilities and the magnitudes are rescaled using an independently estimated flow duration curve (FDC) derived from regional regression. In this study, this method is applied to a pooled ordinary kriging simulation of daily streamflow coupled with FDCs estimated by regional regression on basin characteristics. The improvement in the representation of high and low streamflows is correlated with the accuracy and unbiasedness of the estimated FDC. The method is verified by using an idealized case; however, with the introduction of regionally regressed FDCs developed for this study, the method is only useful overall for the upper tails, which are estimated more accurately and with less bias than the lower tails. It remains for future work to determine how accurate the estimated FDCs need to be to be useful for bias correction without unduly reducing accuracy. In addition to its potential efficacy for distributional bias correction, this particular instance of the methodology also represents a generalization of nonlinear spatial interpolation of daily streamflow using FDCs. Rather than relying on single index stations, as is commonly done to reflect streamflow timing, this approach to simulation leverages geostatistical tools to allow a region of neighbors to reflect streamflow timing.

1 Introduction

Simulation of historical daily streamflow at ungauged locations is one of the grand challenges of the hydrological sciences. Over the past 20 years, at least, research into the simulation of historical streamflow has increased. In addition to ongoing international efforts, the US Geological Survey has embarked upon a National Water Census of the USA, which seeks to quantify hydrology across the country to provide information to help improve water use and security. However, regardless of the method used for the simulation, uncertainty will always remain and may result in some distributional bias. The objective of this work is to present a technique to correct for bias in the magnitudes of a streamflow simulation.
While the mechanics of this technique are not novel, the novelty of this work lies in the generalization of this technique for use in bias correction. The method is intended for use at ungauged sites, and an idealized experiment is constructed to demonstrate both the potential utility and one example of realized utility. As defined here, distributional bias in simulated streamflow is an error in reproducing the tails of the streamflow distribution. As attested to by many researchers focused on the reproduction of historical streamflow, this bias commonly appears as a general overestimation of low streamflow and underestimation of high streamflow. The result is an effective squeezing of the streamflow distribution, bringing the tails of the distribution closer to the central values. This distributional squeezing is often most notable in the downward bias of extreme high-flow events (Lichty and Liscum, 1978; Sherwood, 1994; Thomas, 1982). Bias of high streamflows is particularly concerning, as examinations of extreme high-flow events are a common and influential use of historical simulations and long-term (decadal) forecasts. Consider, for example, a motivating application in which simulated streamflows were routed through a reservoir operations model for flood mitigation: large bias in high streamflows would have severely affected the resulting decisions. Of course, this tendency towards distributional compaction is not a universal truth that occurs without variation; the resulting bias will vary widely depending on the structure of the residuals. Because of the importance of accurately representing extreme events, it is necessary to consider how the distributional bias of streamflow simulations can be reduced.

The approach presented here assumes that, while the streamflow magnitudes of a historical simulation are biased, the temporal structure, or rank order, of simulated streamflows is relatively accurate. The nature of this approach is predicated on an assumption that although a historical simulation may produce a distribution of streamflow with biased tails, the temporal sequence of relative rankings, or nonexceedance probabilities, of the simulated streamflow retains valuable information. With this assumption, it can be hypothesized that distributional bias can be reduced, while not negatively impacting overall performance, by applying a sufficiently accurate, independently estimated representation of the period-of-record flow duration curve (FDC) to rescale each streamflow value, replacing it with the value of the regional FDC at the corresponding nonexceedance probability (see Sect. 2 below).

The approach presented here can be perceived as a generalization of the nonlinear spatial interpolation of daily streamflow using FDCs, as conceived in earlier studies and widely used thereafter. As traditionally applied, nonlinear spatial interpolation proceeds by simulating nonexceedance probabilities at a target location using a single neighboring streamgage (though several studies recommend and test the use of multiple streamgages) and then interpolating those nonexceedance probabilities along a FDC. The approach tested here seeks to bias-correct a simulated time series of daily discharge using an independently estimated FDC, and, when viewed in another way, presents a novel form of nonlinear spatial interpolation.
Furthermore, though necessarily explored in this study through the use of a single technique for hydrograph simulation, this approach may be a means to effectively bias-correct any simulation of streamflow, including those from rainfall–runoff models. In one earlier study, a geostatistical tool was used to produce site-specific FDCs, and this information was then used to post-process simulated hydrographs from a deterministic model. Though the underlying methods of producing the FDC and the simulated hydrograph are different, that approach is the same as the one explored here. Further discussion of the relationship of the approach presented here to others in the field is provided below.

The remainder of this work is organized in the following manner. Section 2 provides a description of the retrieval of observed streamflow, the estimation of simulated streamflows, the calculation of observed FDCs, the estimation of simulated FDCs, and the application and evaluation of the bias correction. Section 3 follows, documenting the bias in the original simulated streamflows and analyzing both the potential bias correction that could be achieved if it were possible to know the observed FDC at an ungauged location and the bias correction that would be realized through an application of regional regression. Section 4 considers the implications of these results and hypothesizes how the methodology might be applied and improved. The major findings of this work are then summarized in Sect. 5.

2 Material and methods

This section, which is divided into four subsections, provides a description of the methods applied here. The first subsection describes the collection of observed streamflow as well as the initial simulation of streamflow. As the approach used here is applicable to any simulated hydrograph, the details of hydrograph simulation are not exhaustively documented. Instead, beyond a brief introduction, the reader is directed to relevant citations, as no modifications to previous methods are introduced here. The second subsection discusses the use of regional regression to define independently estimated FDCs. Again, as any method for the estimation of FDCs could be used and this application is identical to previously reported applications, following a brief introduction, the reader is directed to the relevant citations. The third subsection provides a description of how bias correction was executed, and the fourth subsection describes how the performance of this approach to bias correction was assessed.

Figure 1. Map of the locations of 1168 reference quality streamgages from the GAGES-II database (Falcone, 2011) used for analysis. All streamgages used have more than 14 complete water years between 1 October 1980 and 30 September 2013. The outlines of two-digit hydrologic units, which define the regions used here, are provided for further context.

2.1 Observed and simulated streamflow

The proposed approach was explored using daily mean streamflow data from the reference quality streamgages included in the GAGES-II database (Falcone, 2011) within the conterminous USA for the period from 1 October 1980 through 30 September 2013. To allow for the interpolation, rather than extrapolation, of all quantiles considered later, streamgages were screened to ensure that at least 14 complete water years (1 October through 30 September) were available for each record considered; 1168 such streamgages were available. The selected reference streamgages are indicated in Fig. 1.
The streamflow data were obtained directly from the website of the National Water Information System (NWISWeb, http://waterdata.usgs.gov; last access: 20 September 2017). For each streamgage, associated basin characteristics were obtained from the GAGES-II database (Falcone, 2011). Because streamflow distributions vary over orders of magnitude, the simulation and analysis of streamflow at these streamgages is best explored through the application of logarithms. To avoid the complication of taking the logarithm of zero, a small value was added to each streamflow observation. The US Geological Survey rounds all mean daily streamflow to two decimal places in units of cubic feet per second (cfs, which can be converted to cubic meters per second using a factor of 0.0283). As a result, any value below 0.005 cfs is rounded to and reported as 0.00 cfs. Because of this rounding procedure, the small additive value applied here was 0.0049 cfs. While there may be some confounding effect produced by the use of an additive adjustment, as long as this value is not subtracted on back transformation, the following assessment of bias and bias correction will remain robust. That is, rather than evaluating bias in streamflow, technically this analysis is evaluating the bias in streamflow plus a correction factor. The conclusions remain valid, as the assessment still evaluates the ability of a particular method to remove the bias in the simulation of a particular quantity.

Though the potential for distributional bias applies to any hydrologic simulation, for this study initial predictions of daily streamflow values for each streamgage were obtained by applying the pooled ordinary kriging approach (Farmer, 2016) to each two-digit hydrologic unit (Fig. 1) through a leave-one-out cross-validation procedure on the streamgages within the two-digit hydrologic unit. The hydrologic unit system is a common method for delineating watersheds in the USA; the two-digit hydrologic units, or regions (as seen in Fig. 1), roughly align with the major river basins of the USA. This approach considers all pairs of common logarithmically transformed unit streamflows (discharge per unit area) for each day and builds a single, time-invariant semivariogram model of cross-correlation that is then used to estimate ungauged streamflow as a weighted summation of all contemporary observations. A spherical semivariogram was used as the underlying model form. Additional information on the time series simulation procedure is provided by Farmer (2016). Note that the choice of pooled ordinary kriging is made only as an example of a streamflow simulation method; it is not implied that the bias observed or the methods applied are relevant only to this approach to simulation. Because the novelty of this work is in the application of bias correction, further details on the particular simulation method employed are left for the reader to investigate in the cited work (Farmer, 2016).

2.2 Estimation of flow duration curves

Daily period-of-record FDCs were developed independently of the streamflow simulation procedure by following a regionalization procedure similar to those in earlier reports. Observed FDCs were obtained by determining the percentiles of the streamflow distribution across complete water years between 1981 and 2013 using the Weibull plotting position (Weibull, 1939).
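As a sketch of this step (the function name and the synthetic record are illustrative stand-ins, not the authors' code), an observed FDC at the percentiles listed next can be computed as follows:

```python
import numpy as np

def observed_fdc(q, exceed_probs):
    """Flow percentiles at given exceedance probabilities using the
    Weibull plotting position p_i = i / (n + 1)."""
    q = np.sort(np.asarray(q, dtype=float))         # ascending flows
    n = q.size
    nonexceed = np.arange(1, n + 1) / (n + 1.0)     # nonexceedance probabilities
    # a flow exceeded with probability p is not exceeded with probability 1 - p
    return np.interp(1.0 - np.asarray(exceed_probs), nonexceed, q)

flows = np.random.lognormal(mean=2.0, sigma=1.0, size=5000)   # synthetic record
print(observed_fdc(flows, [0.02, 0.50, 0.98]))                # high, median, low
```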
A total of 27 percentiles with exceedance probabilities of 0.02 %, 0.05 %, 0.1 %, 0.2 %, 0.5 %, 1 %, 2 %, 5 %, 10 %, 20 %, 25 %, 30 %, 40 %, 50 %, 60 %, 70 %, 75 %, 80 %, 90 %, 95 %, 98 %, 99 %, 99.5 %, 99.8 %, 99.9 %, 99.95 %, and 99.98 % were considered. The selection of streamgages with at least 14 complete water years ensures that all percentiles can be calculated from the observed data. These percentiles derived from the observed hydrograph represent the "unknowable observation" in an application for prediction in ungauged basins. Therefore, to simulate the truly ungauged case, these same percentiles were estimated using a leave-one-out cross-validation of regional regression. A regional regression across the streamgages in each two-digit hydrologic unit for each of the 27 FDC percentiles was developed using best subsets regression. Best subsets regression is a common tool for exhaustive exploration of the space of potential explanatory variables. All models with a given number of explanatory variables are computed, exploring all combinations of variables. The top models for a given number of explanatory variables are then identified by a performance metric like the Akaike information criterion. This is repeated for several model sizes to fully explore the possibilities for variables and regression size. For each regression, drainage area was required as an explanatory variable. At a minimum, one additional explanatory variable was used. The maximum number of explanatory variables was limited to the smaller of either six explanatory variables or 5 % of the number of streamgages in the region, rounded up to the next larger whole number. The maximum of six arises from what is computationally feasible for the best subsets regression function used, whereas the maximum of 5 % of streamgages was determined from a limited exploration of the optimal number of explanatory variables as a function of the number of streamgages in a region. Explanatory variables were drawn from the GAGES-II database (Falcone, 2011). As documented in related reports, a subset of the full GAGES-II dataset was chosen to avoid strong correlations. As the focus of this work is not on the estimation of the FDCs, the reader is referred to the cited works for the exact procedures.

In order to allow different explanatory variables to be used to explain percentiles in different streamflow regimes, the percentiles were grouped into a maximum of three contiguous streamflow regimes based on the behavior of the unit FDCs (i.e., the FDCs divided by drainage area) in the two-digit hydrologic units. The regimes are contiguous in that only consecutive percentiles from the list above can be included in the same regime; the result is a maximum of three regimes that can be considered "high", "medium", and "low" streamflows, though the number of regimes may vary across two-digit hydrologic units. The percentiles in each regime were estimated by the same explanatory variables, allowing only the fitted coefficients to change. The final regression form for each regime was selected by optimizing the average adjusted coefficient of determination, based on censored Gaussian (Tobit) regression (Tobin, 1958) to allow for values censored below 0.005 cfs, across all percentiles in the regime. The addition of a small value was used to avoid the presence of zeros and enable a logarithmic transformation, but this does not avoid the problem of censoring.
Censoring below the small value added must still be accounted for so that smaller numbers do not unduly affect the regression. This approach to percentile grouping was found to provide reasonable estimates while minimizing the risk of non-monotonic or otherwise concerning behavior. Further details on this methodology can be found in the associated data and model archive. When estimating a complete FDC realized through a set of discrete points, non-monotonic behavior is likely. If the regression for each percentile were estimated independently, non-monotonicity would be almost unavoidable. By using three regimes and keeping the explanatory variables the same within each, the potential for non-monotonicity is reduced. The greatest risk of non-monotonic behavior occurs at the regime boundaries. If the FDC used to bias-correct is not perfectly monotonic, the effect will be to alter the relative timing of streamflows. While it would be ideal to avoid any risk of non-monotonic behavior, it is a rather difficult task. An alternative might be to consider the FDC as a parametric function, but earlier work demonstrates how difficult this can be for daily streamflows. Of course, the use of regional regression is not the only tool for estimating an FDC (Castellatin et al., 2013; Pugliese et al., 2014, 2016).

Figure 2. Diagram showing the bias correction methodology applied here. The simulated daily hydrograph at the ungauged site is presented in (a). For any particular point on the hydrograph (point A), the daily volume of streamflow can be mapped to a nonexceedance probability using the rank order of simulated streamflows (points B and C). With an independently estimated flow duration curve (FDC) from some procedure such as regional regression, the nonexceedance probability can be rescaled to a new volume (point D) and placed back in the same sequence as the simulated streamflows (point E) to produce a bias-corrected hydrograph. This example is shown for 1 month, though the FDC applies across the entire period of record. As these data are based on an example site, the observed streamflows and FDC are shown in grey in each panel.

2.3 Bias correction

To implement bias correction, the initial predictions of the daily streamflow values using the ordinary kriging approach were converted to streamflow nonexceedance probabilities using the Weibull plotting position (Weibull, 1939). The nonexceedance probabilities were then converted to standard normal quantiles and linearly interpolated along an independently estimated FDC. For the linear interpolation, the independently estimated FDC was represented as the standard normal quantiles of the associated nonexceedance probabilities versus the common logarithmic transformation of the streamflow percentiles. In the case for which the standard normal quantile being estimated from the simulated hydrograph was beyond the extremes of the FDC, the two nearest percentiles were used for linear extrapolation. In this way, the ordinary kriging simulations were bias-corrected, based on the assumption that the simulated volumes are less accurate than the relative ranks of the simulated values, by rescaling the simulated volumes to an independently estimated FDC. By changing the magnitudes of the simulated streamflow distribution, this approach rescales the distribution of the simulated streamflow. Figure 2 provides a simplified representation of this bias correction methodology.
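The rescaling just described can be written compactly. The sketch below is an illustration under the paper's stated assumptions (strictly positive flows after the additive adjustment, and an estimated FDC supplied as ascending nonexceedance probabilities with positive flow percentiles), not the authors' production code:

```python
import numpy as np
from scipy.stats import norm, rankdata
from scipy.interpolate import interp1d

def bias_correct(q_sim, fdc_nonexceed, fdc_flows):
    """Rescale a simulated hydrograph onto an estimated FDC, keeping the
    simulated temporal structure (rank order) but replacing the magnitudes."""
    n = len(q_sim)
    # Weibull plotting position of each simulated value -> nonexceedance prob.
    p_sim = rankdata(q_sim) / (n + 1.0)
    # interpolate in (standard normal quantile, log10 flow) space; beyond the
    # FDC's extremes, extrapolate linearly from the two nearest percentiles
    to_logq = interp1d(norm.ppf(fdc_nonexceed), np.log10(fdc_flows),
                       fill_value="extrapolate")
    return 10.0 ** to_logq(norm.ppf(p_sim))
```

Figure 2, walked through next, traces the same mapping graphically.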
Starting in panel (a) and proceeding clockwise, after simulating the hydrograph with a given methodology (pooled ordinary kriging was used here), the resulting streamflow value on a given day can be converted to the appropriate nonexceedance probability by proceeding from point A, through point B, and down to point C. Moving then from point C to point D maps the estimated nonexceedance probability onto an independently estimated FDC. Finally, the streamflow value produced at point D is mapped to the original date (point E) to reconstruct a bias-corrected hydrograph. Note that this is a simplified description: as described above, a slightly more complex interpolation procedure is used for the FDCs represented by a set of discrete points. As can be seen in Fig. 2, this methodology is quite similar to earlier conceptions of nonlinear spatial interpolation. The novelty of this work lies in its application. That is, earlier formulations imagine a case in which the original hydrograph from which nonexceedance values will be drawn (Fig. 2a) is drawn from an index station of some sort; here the temporal structure could be drawn from any technique for at-site hydrograph simulation. This generalization allows bias correction of any hydrograph simulation.

2.4 Evaluation

The hypothesis of this work, that distributional bias in the simulated streamflow can be corrected by applying independently estimated FDCs, was evaluated by considering the performance of these bias-corrected simulations at both tails of the distribution. The differences in the common logarithms of both high and low streamflow were used to understand and quantify the bias (simulated minus observed) and the correction thereof. That is,

$$\mathrm{bias}_s = \frac{\sum_{i=1}^{n}\left(\log_{10}(\hat{Q}_{s,i}) - \log_{10}(Q_{s,i})\right)}{n}, \tag{1}$$

where $s$ indicates the site of interest, $\hat{Q}$ indicates the predicted streamflow (whether the original simulation or the bias-corrected simulation), $Q$ indicates the observed streamflow, and $n$ indicates the number of values being assessed. This difference can be approximated as a percentage by computing 10 to the power of the difference and subtracting 1 from this quantity:

$$\mathrm{bias}_{s,\%} = 100\cdot\left(10^{\mathrm{bias}_s} - 1\right). \tag{2}$$

The differences in the root mean squared error of the common logarithms of the predicted streamflow were used to quantify improvements in accuracy. The root mean squared error of the common logarithms of streamflow is calculated as

$$\mathrm{rmsel}_s = \sqrt{\frac{\sum_{i=1}^{n}\left(\log_{10}(\hat{Q}_{s,i}) - \log_{10}(Q_{s,i})\right)^{2}}{n}}. \tag{3}$$

Improvements in accuracy may or may not occur when bias is reduced. The significance of both these quantities, and of the effects of bias correction on these quantities, was assessed using a Wilcoxon signed rank test (Wilcoxon, 1945). For assessments of bias, the null hypothesis was that the bias was equivalent to zero. For assessments of the difference in bias or accuracy with respect to the baseline result, the null hypothesis was that this difference was zero.
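For reference, Eqs. (1)-(3) translate directly into code; this sketch is an addition rather than the authors' code, and assumes strictly positive flows (guaranteed here by the 0.0049 cfs adjustment):

```python
import numpy as np

def log_bias(q_hat, q_obs):
    """Eq. (1): mean difference of common logarithms, simulated minus observed."""
    return np.mean(np.log10(q_hat) - np.log10(q_obs))

def pct_bias(q_hat, q_obs):
    """Eq. (2): logarithmic bias expressed approximately as a percentage."""
    return 100.0 * (10.0 ** log_bias(q_hat, q_obs) - 1.0)

def rmsel(q_hat, q_obs):
    """Eq. (3): root mean squared error of the common logarithms."""
    return np.sqrt(np.mean((np.log10(q_hat) - np.log10(q_obs)) ** 2))
```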
Distributional bias and the improvement of that bias were considered in both the high and low tails of the streamflow distribution. Two methods were used to capture the bias in each tail. One method, referred to herein as an assessment of the observation-dependent tails, uses the observed nonexceedance probabilities to identify the days on which the highest and lowest 5 % of streamflow occurred. For each respective tail, the errors were assessed based on the observations and simulations on those fixed days. The other method, referred to herein as an assessment of the observation-independent tails, compares the ranked top and bottom 5 % of observations with the independently ranked top and bottom 5 % of simulated streamflow. Errors in the observation-dependent tails are an amalgamation of errors in the sequence of nonexceedance probabilities (the temporal structure) and in the magnitude of streamflow, whereas errors in the observation-independent tails only reflect bias in the ranked magnitudes of streamflow. In the same fashion, the complete hydrograph can be assessed sequentially (sequential evaluation), retaining the contemporary sequencing of observations and simulations, or distributionally (distributional evaluation), considering the observations and simulations ranked independently. Though the overall accuracy will vary between the sequential and distributional cases, the overall bias will be identical in both.

With an analysis of both observation-dependent and observation-independent tails, it is possible to begin to tease out the effect of temporal structure on distributional bias. The bias in observation-independent tails is not directly tied to the temporal structure, or relative ranking, of simulated streamflow. That is, if the independently estimated FDC is accurate, then even if the relative sequencing of streamflow is badly flawed, the bias correction of observation-independent tails will be successful. However, even if the distribution is accurately reproduced after bias correction, the day-to-day performance may still be poor. For observation-dependent tails, the temporal structure plays a vital role in the effect of bias correction. If the temporal structure is inaccurate in the underlying hydrologic simulation, then the bias correction of observation-dependent tails will be less successful.

The bias correction approach was first tested with the observed FDCs. These observed FDCs would be unknowable in the truly ungauged case, but this test allows for an assessment of the potential utility of the approach. This examination is followed by an application with the regionally regressed FDCs described above, demonstrating one realization of this generalizable method. This general approach to bias correction could be used with other methods for estimating the FDC and could also be used with an observed FDC for record extension, though neither of these possibilities is explored here.
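The two tail definitions can be made concrete with a short sketch (a hypothetical helper, assuming strictly positive flows; the upper tail is shown and the lower tail is analogous):

```python
import numpy as np

def upper_tail_errors(q_obs, q_sim, frac=0.05):
    """Log-space errors in the upper tail, computed both ways."""
    k = max(1, int(round(frac * q_obs.size)))
    # Observation-dependent: fix the days of the highest 5 % of observations
    days = np.argsort(q_obs)[-k:]
    dependent = np.log10(q_sim[days]) - np.log10(q_obs[days])
    # Observation-independent: rank observations and simulations separately
    independent = np.log10(np.sort(q_sim)[-k:]) - np.log10(np.sort(q_obs)[-k:])
    return dependent, independent
```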
Figure 3. Distribution of logarithmic bias, measured as the mean difference between the common logarithms of simulated and observed streamflow (simulated minus observed) at 1168 streamgages across the conterminous USA. Orig. refers to the original simulation with pooled ordinary kriging, BC-RR refers to the Orig. hydrograph bias-corrected with regionally regressed flow duration curves, and BC-Obs. refers to the Orig. hydrograph bias-corrected with observed flow duration curves. The tails of the box plots extend to the 5th and 95th percentiles of the distribution; the ends of the boxes represent the 25th and 75th percentiles of the distribution; the heavier line in the box represents the median of the distribution; the open circle represents the mean of the distribution; outliers beyond the 5th and 95th percentiles are shown as horizontal dashes.

3 Results

Figures 3 and 4 show the overall bias and accuracy of the reproduced hydrographs; these figures are quantified in Tables 1 and 2. Figure 5 and Table 1 summarize the tail bias in all approaches to streamflow simulation considered here. Similarly, Fig. 6 and Table 2 summarize the tail accuracy of all approaches. These results are discussed in detail below, beginning with a discussion of the bias and accuracy of the original kriged simulations. This is followed by a consideration of the effectiveness of bias correction with observed FDCs as emblematic of the theoretical potential of this approach. The realization of this theoretical potential through the regionally regressed FDCs is subsequently presented. Complete results can be explored and reproduced using the associated model and data archive (Farmer et al., 2018).

Figure 4. Distribution of logarithmic accuracy, measured as the root mean squared error between the common logarithms of observed and simulated streamflow at 1168 streamgages across the conterminous USA. Orig. refers to the original simulation with pooled ordinary kriging, BC-RR refers to the Orig. hydrograph bias-corrected with regionally regressed flow duration curves, and BC-Obs. refers to the Orig. hydrograph bias-corrected with observed flow duration curves. Sequential indicates that contemporary days were compared, while distributional indicates that days of equal rank were compared. The tails of the box plots extend to the 5th and 95th percentiles of the distribution; the ends of the boxes represent the 25th and 75th percentiles of the distribution; the heavier line in the box represents the median of the distribution; the open circle represents the mean of the distribution; outliers beyond the 5th and 95th percentiles are shown as horizontal dashes.

3.1 Simulated hydrographs without correction

There is statistically significant overall bias at the median (−7.1 %; $100\cdot(10^{-0.0318}-1)$) in the streamflow distribution simulated by the kriging approach applied here (Fig. 3, box plot A), but more substantial bias is apparent in the upper and lower tails of the distribution (Fig. 5, box plots A, D, G, and J). Both the observation-dependent and observation-independent upper tails of the streamflow distribution demonstrate significant downward bias (Fig. 5, box plots D and J). At the median, the observation-dependent upper tail is underestimated by approximately 38 % (Table 1, row 1; Fig. 5, box plot D), while the observation-independent upper tail is underestimated by approximately 23 % (Table 1, row 2; Fig. 5, box plot J). For the lower tail, the observation-dependent tail shows a median overestimation of 36 % (Table 1, row 1; Fig. 5, box plot A), while the observation-independent tail is underestimated by less than 1 % (Table 1, row 2; Fig. 5, box plot G). The bias is much more variable, producing greater magnitudes of bias more often, in the lower tails than in the upper tails. Generally, biases in the observation-independent tails are less severe, both in median and in range, than those in the observation-dependent tails.
To provide some information on regional performance and the incidence of bias, Fig. 7 shows the spatial distribution of bias in each tail (discussion of this distribution is provided below).

Figure 5. Distribution of logarithmic bias, measured as the mean difference between the common logarithms of simulated and observed streamflow at 1168 streamgages across the conterminous USA for observation-dependent and observation-independent upper and lower tails. Observation-dependent tails retain the ranks of observed streamflow while matching simulations by day. Observation-independent tails rank observations and simulations independently. The upper tail considers the highest 5 % of streamflow, while the lower tail considers the lowest 5 % of streamflow. Orig. refers to the original simulation with pooled ordinary kriging, BC-RR refers to the Orig. hydrograph bias-corrected with regionally regressed flow duration curves, and BC-Obs. refers to the Orig. hydrograph bias-corrected with observed flow duration curves. The tails of the box plots extend to the 5th and 95th percentiles of the distribution; the ends of the boxes represent the 25th and 75th percentiles of the distribution; the heavier line in the box represents the median of the distribution; the open circle represents the mean of the distribution; outliers beyond the 5th and 95th percentiles are shown as horizontal dashes.

In both the observation-dependent and observation-independent cases, downward bias in the upper tail is more probable than upward bias in the lower tail. For the observation-dependent tails, approximately 89 % of streamgages show downward bias in the upper tail (Fig. 5, box plot D), and approximately 61 % of streamgages show upward bias in the lower tail (Fig. 5, box plot A). For the observation-independent tails, approximately 80 % of streamgages show downward bias in the upper tail (Fig. 5, box plot J) and approximately 50 % of streamgages exhibit upward bias in the lower tail (Fig. 5, box plot G), indicating, as does the small median bias value, that the lower tail biases are relatively well balanced around zero for the observation-independent case for these simulations. With respect to their central tendencies, these results show upward bias in the lower tails and downward bias in the upper tails of the distribution of streamflows from the original simulations for both observation-dependent and observation-independent cases. There is, of course, a great degree of variability around this central tendency. With these baseline results in hand, the bias correction method presented here seeks to mitigate these biases.
Figure 6. Distribution of logarithmic accuracy, measured as the root mean squared error between the common logarithms of simulated and observed streamflow at 1168 streamgages across the conterminous USA for observation-dependent and observation-independent upper and lower tails. Observation-dependent tails retain the ranks of observed streamflow while matching simulations by day. Observation-independent tails rank observations and simulations independently. The upper tail considers the highest 5 % of streamflow, while the lower tail considers the lowest 5 % of streamflow. Orig. refers to the original simulation with pooled ordinary kriging, BC-RR refers to the Orig. hydrograph bias-corrected with regionally regressed flow duration curves, and BC-Obs. refers to the Orig. hydrograph bias-corrected with observed flow duration curves. The tails of the box plots extend to the 5th and 95th percentiles of the distribution; the ends of the boxes represent the 25th and 75th percentiles of the distribution; the heavier line in the box represents the median of the distribution; the open circle represents the mean of the distribution; outliers beyond the 5th and 95th percentiles are shown as horizontal dashes.

3.2 Bias correction with observed FDCs

The results for this idealized case, which could not be applied in practice, provide clear evidence that distributional bias in simulated streamflow can be reduced by rescaling with independently estimated FDCs. This evidence is apparent in the reduction of the magnitude and variability of overall bias (Fig. 3, box plot C; Table 1, rows 5 and 6) and of the bias in the observation-independent tails of the streamflow distribution (Fig. 5, box plots I and L) when observed FDCs are used for rescaling. Similarly, the overall distributional accuracy is much improved (Fig. 4, box plot F; Table 2, rows 5 and 6), as is the accuracy of the observation-independent tails (Fig. 6, box plots I and L). The effect on the observation-dependent tails (Fig. 5, box plots C and F) and on overall sequential accuracy (Fig. 4, box plot C) is less compelling but still substantial.

Figure 7. Maps showing the distribution of logarithmic bias, measured as the mean difference between the common logarithms of simulated and observed streamflow (simulated minus observed) at 1168 streamgages across the conterminous USA for observation-dependent and observation-independent upper and lower tails. Observation-dependent tails retain the ranks of observed streamflow while matching simulations by day. Observation-independent tails rank observations and simulations independently. The upper tail considers the highest 5 % of streamflow, while the lower tail considers the lowest 5 % of streamflow. The bias is derived from the original simulation of daily streamflow using pooled ordinary kriging at 1168 sites regionalized by the two-digit hydrologic units (polygons).

Whereas the measures of bias and accuracy are summarized in Tables 1 and 2, Tables 3 and 4 summarize the changes in absolute bias and in accuracy, respectively. With the use of observed FDCs, the overall bias is reduced to a tenth of a percent at the median (Table 1, rows 5 and 6). This represents a significant median reduction of 0.14 common logarithm units in the overall absolute bias (Table 3, rows 3 and 4). Overall, the distributional accuracy is improved by a median of 0.21 common logarithm units (Table 4, row 4). Of all streamgages considered, 99 % saw a reduction in the overall absolute bias, and all saw improvements in overall distributional accuracy. These improvements extend to both observation-independent tails of the distributions. The lower observation-independent tails show a median 0.35 common logarithm unit reduction in absolute bias (Table 3, row 4). For the upper tail, the median reduction in absolute bias is 0.14 common logarithm units (Table 3, row 4). Nearly all streamgages (99 %) saw a reduction in the absolute bias of the observation-independent tails. Table 4 (row 4) shows similar improvements in tail accuracy: −0.37 and −0.15 units in the lower and upper tails, respectively, with nearly all streamgages (excepting the lower tail of a single streamgage, likely the result of the interpolation procedure) showing improved tail accuracy.
Table 1. Measures of the distribution of logarithmic bias, computed as the mean difference between the common logarithms of simulated and observed streamflow (simulated minus observed) at 1168 streamgages across the conterminous USA for observation-dependent and observation-independent upper and lower tails. Orig. refers to the original simulation with pooled ordinary kriging, BC-RR refers to the Orig. hydrograph bias-corrected with regionally regressed flow duration curves, and BC-Obs. refers to the Orig. hydrograph bias-corrected with observed flow duration curves. Observation-dependent (OD) tails retain the ranks of observed streamflow while matching simulations by day. Observation-independent (OI) tails rank observations and simulations independently. The upper tail considers the highest 5 % of streamflow, while the lower tail considers the lowest 5 % of streamflow. Significance is the p value resulting from a Wilcoxon signed rank test with continuity correction, with the null hypothesis that the median of the distribution is equal to zero and the alternative hypothesis that the median is not equal to zero.

Table 2. Measures of the distribution of logarithmic accuracy, computed as the root mean squared error between the common logarithms of observed and simulated streamflow at 1168 streamgages across the conterminous USA for observation-dependent and observation-independent upper and lower tails. Orig. refers to the original simulation with pooled ordinary kriging, BC-RR refers to the Orig. hydrograph bias-corrected with regionally regressed flow duration curves, and BC-Obs. refers to the Orig. hydrograph bias-corrected with observed flow duration curves. Observation-dependent (OD) tails retain the ranks of observed streamflow while matching simulations by day. Observation-independent (OI) tails rank observations and simulations independently. The upper tail considers the highest 5 % of streamflow, while the lower tail considers the lowest 5 % of streamflow.

Table 3. Measures of the distribution of changes in absolute logarithmic bias with bias correction, for which absolute logarithmic bias is computed as the absolute value of the mean difference between the common logarithms of predicted (original or bias-corrected) and observed streamflow at 1168 streamgages across the conterminous USA for observation-dependent and observation-independent upper and lower tails; the simulated streamflow was obtained with pooled ordinary kriging. BC-RR refers to the Orig. hydrograph bias-corrected with regionally regressed flow duration curves, and BC-Obs. refers to the Orig. hydrograph bias-corrected with observed flow duration curves. Observation-dependent (OD) tails retain the ranks of observed streamflow while matching simulations by day. Observation-independent (OI) tails rank observations and simulations independently. The upper tail considers the highest 5 % of streamflow, while the lower tail considers the lowest 5 % of streamflow. Significance is the p value resulting from a paired Wilcoxon signed rank test with continuity correction, with the null hypothesis that the median difference with respect to the original simulation is equal to zero and the alternative hypothesis that the median difference is not equal to zero.
Table 4. Measures of the distribution of changes in logarithmic accuracy between the original and bias-corrected simulations, for which logarithmic accuracy is computed as the root mean squared error between the common logarithms of predicted (original or bias-corrected) and observed streamflow at 1168 streamgages across the conterminous USA for observation-dependent and observation-independent upper and lower tails; the simulated streamflow was obtained using pooled ordinary kriging. BC-RR refers to the Orig. hydrograph bias-corrected with regionally regressed flow duration curves, and BC-Obs. refers to the Orig. hydrograph bias-corrected with observed flow duration curves. Observation-dependent (OD) tails retain the ranks of observed streamflow while matching simulations by day. Observation-independent (OI) tails rank observations and simulations independently. The upper tail considers the highest 5 % of streamflow, while the lower tail considers the lowest 5 % of streamflow. Significance is the p value resulting from a paired Wilcoxon signed rank test with continuity correction, with the null hypothesis that the median difference with respect to the original simulation is equal to zero and the alternative hypothesis that the median difference is not equal to zero.

With the use of a perfect, observed FDC for bias correction, one might expect nearly all bias to disappear, but the results do not show this. The temporal structure of the simulated hydrograph continues to play a role in the bias of the observation-dependent tails. The observation-independent tails also continue to exhibit a small degree of residual bias, which is slightly nonintuitive. This residual bias arises from the effect of representing the FDC as a set of discrete points and interpolating between them. There may be some additional effect from the small value added to avoid zero-valued streamflows or from the censoring procedure, but initial exploration found little impact.

The overall sequential performance (Fig. 4, box plot C) and the performance of the observation-dependent tails (Figs. 5 and 6, box plots C and F) demonstrate the degree to which errors in the temporal structure result in bias in the observation-dependent case even when observed FDCs are used for bias correction. Both the observation-dependent lower and upper tails exhibit bias: 30 % and −20 %, respectively, at the median (Table 1, row 5, with transformation using Eq. 2). Absolute bias in both tails shows median reductions; sequential accuracy and observation-dependent tail accuracy are also improved at the median (Tables 3 and 4, row 3). Proportionally, 82 % of the observation-dependent lower tails and 86 % of the observation-dependent upper tails showed reductions in absolute bias (Fig. 5, box plots C and F); 85 % of observation-dependent lower tails and 79 % of observation-dependent upper tails showed improvements in accuracy (Fig. 6, box plots C and F). Despite improvements in overall bias and accuracy from rescaling with observed FDCs, the residual bias in the observation-dependent lower tails (Fig. 5, box plot C) is almost always positive (upward bias), and that in the upper tails (Fig. 5, box plot F) is almost always negative (downward bias), a result which arises primarily from errors in the simulated temporal structure. To understand the effect of errors in the temporal structure further, consider Fig. 8, which shows the mean error in the nonexceedance probabilities, i.e., the difference in the ranks of the observed and simulated streamflows, for the observation-dependent upper and lower tails.
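The quantity plotted in Fig. 8 can be sketched as follows (an illustrative reconstruction, not code from the study; the lower tail is shown and the upper tail is analogous):

```python
import numpy as np

def lower_tail_prob_error(q_obs, q_sim, frac=0.05):
    """Mean error, in percentage points, of simulated nonexceedance
    probabilities on the days of the lowest 5 % of observed flows."""
    n = q_obs.size
    p_obs = (q_obs.argsort().argsort() + 1) / (n + 1.0)  # Weibull positions
    p_sim = (q_sim.argsort().argsort() + 1) / (n + 1.0)
    days = np.argsort(q_obs)[: max(1, int(round(frac * n)))]
    return 100.0 * np.mean(p_sim[days] - p_obs[days])
```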
The nonexceedance percentages in the lower tail are overestimated by a median of 3.8 points, with 5th and 95th percentiles of 0.9 and 20.5 points, while the percentages in the upper tail are underestimated by a median of 2.4 points, with 5th and 95th percentiles of −0.5 and −12.6 points. The distributions of errors in the nonexceedance probabilities closely reflect the distributions of bias in the observation-dependent tails (Fig. 5, box plots C and F). These results show that inaccuracy in the nonexceedance probabilities (i.e., errors in temporal structure) will obscure, at least partially, the improvement offered by bias correction when considering the observation-dependent errors, even when an observed FDC is used for bias correction. These errors in temporal structure also almost always fall in a particular direction: low for high flows and high for low flows.

Figure 8. Distribution of mean error in the simulated nonexceedance probabilities of the lowest and highest 5 % of observed daily streamflow (simulated minus observed) at 1168 streamgages across the conterminous USA. The upper tail considers the highest 5 % of streamflow, while the lower tail considers the lowest 5 % of streamflow. The tails of the box plots extend to the 5th and 95th percentiles of the distribution; the ends of the boxes represent the 25th and 75th percentiles of the distribution; the heavier line in the box represents the median of the distribution; the open circle represents the mean of the distribution; outliers beyond the 5th and 95th percentiles are shown as horizontal dashes.

3.3 Bias correction with regionally regressed FDCs

When the uncertainty of regionally regressed FDCs is introduced into the bias correction procedure, the potential value of the procedure is not as convincing. There is a slight, but significant, increase in the overall bias (Table 3, rows 1 and 2). Whereas the original estimated streamflow displays a median bias of approximately −7.1 %, the median overall bias is approximately −7.6 % after bias correction with estimated FDCs (Table 1, rows 3 and 4). Though statistically significant, the distribution of bias does not appear to have changed in a meaningful way (Fig. 3, box plots A and B). The overall accuracy, sequential and distributional, is also degraded (Fig. 4, box plots B and E; Table 4, rows 3 and 4), with more than 60 % of streamgages showing degradation in sequential and distributional accuracy.

The observation-independent tails, which are not affected by errors in temporal structure, show a divergence in performance between the results obtained using observed FDCs and those obtained using regionally regressed FDCs. With observed FDCs, both tails demonstrated substantial reductions in absolute bias and improvements in accuracy. With regionally regressed FDCs, the upper observation-independent tails continue to show reductions in absolute bias (Table 3, row 2; Fig. 5, box plots J and K) and improvements in accuracy (Table 4, row 2; Fig. 6, box plots J and K), while the lower observation-independent tails show a significant increase in absolute bias (Table 3, row 2; Fig. 5, box plots G and H) and a degradation of accuracy (Table 4, row 2; Fig. 6, box plots G and H).
After bias correction with regionally regressed FDCs, only 44 % of the observation-dependent lower tails showed reductions in absolute bias; 58 % of the upper tails showed reductions in absolute bias.

The effects of rescaling with FDCs estimated by regional regression on the overall and observation-independent tail bias and accuracy can be better understood if the properties of the estimated FDCs are considered. Figure 9 shows the bias (panel a) and accuracy (panel b) of the lower and upper tails of the regionally regressed FDCs. Recall that the estimated FDCs are composed of 27 quantiles, of which the upper and lower tails contain only the eight values with nonexceedance probabilities of 95 % and larger and of 5 % and smaller, respectively. The upper tails are reproduced through regional regression with an insignificant 2.5 % median downward bias, but the lower tails exhibit a significant median downward bias of 38.35 % (Table 1, row 7). Because of this bias in the lower tail of the regionally regressed FDCs, they are unable to correct the bias in the simulated hydrograph, instead turning a small median bias into a large one. As there is no temporal uncertainty in the observation-independent tails, the resulting bias arises from the bias of the regionally regressed FDC. Illustrating this fact, the −38 % bias in the lower tail of the regionally regressed FDC approximates the −33 % bias in the observation-independent lower tail, while the −2.5 % bias in the upper tail of the regionally regressed FDC approximates the −3.7 % bias in the observation-independent upper tail. The introduction of this additional bias, beyond failing to correct any underlying bias in the simulated hydrograph, also markedly increased the variability of both bias and accuracy.

The results are similar for the observation-dependent tails produced after bias correction with regionally regressed FDCs, even when complicated by the addition of temporal uncertainty, as discussed in Sect. 3.2 with reference to Fig. 8. In some cases, the errors in the temporal structure (nonexceedance probability) counteract the additional bias from the regionally regressed FDCs. For example, the observation-dependent lower tails have a median bias of 13 %, which is smaller in magnitude and different in sign from the median −33 % bias seen in the observation-independent lower tail (Table 1, rows 3 and 4). The addition of temporal uncertainty actually reduced the increase in absolute bias (Table 3, rows 1 and 2) and reduced the degradation of accuracy in the lower tail (Table 4, rows 1 and 2). These slight improvements result from an offsetting of the underestimated regionally regressed FDCs by the overestimated nonexceedance probabilities. While interesting, it seems unlikely that this result can be generalized in a simple way: that is, the errors in estimated FDCs cannot be expected to balance out the errors in nonexceedance probabilities without deleterious effects on other properties. To this point, as noted, rescaling by these regionally regressed FDCs with underestimated lower tails results in similarly underestimated observation-independent lower tails.
Figure 9. Distribution of logarithmic bias (a), measured as the mean difference between the common logarithms of quantiles of observed and simulated streamflow (simulated minus observed) at 1168 streamgages across the conterminous USA, and logarithmic accuracy (b), measured as the root mean squared error between the common logarithms of quantiles of observed and simulated streamflow at the same streamgages, in the upper and lower quantiles of regionally regressed flow duration curves. The upper tail considers the eight quantiles in the highest 5 % of streamflow, while the lower tail considers the eight quantiles in the lowest 5 % of streamflow. The tails of the box plots extend to the 5th and 95th percentiles of the distribution; the ends of the boxes represent the 25th and 75th percentiles of the distribution; the heavier line in the box represents the median of the distribution; the open circle represents the mean of the distribution; outliers beyond the 5th and 95th percentiles are shown as horizontal dashes.

The introduction of uncertainty from regionally regressed FDCs diminishes the advantages gained by bias correction with observed FDCs. Considering the observation-independent lower tails, 55 % of streamgages show reductions in absolute bias with observed FDCs that were reversed into increases in absolute bias by the introduction of regionally regressed FDCs. Another 43 % of streamgages show smaller reductions in absolute bias when observed FDCs were replaced with regionally regressed FDCs. For the observation-dependent lower tails, 37 % of streamgages have reversals and 31 % show smaller reductions in absolute bias. For the observation-independent upper tails, 41 % show reversals and 56 % yield smaller reductions in absolute bias. For the observation-dependent upper tails, 24 % produce reversals and 40 % provide smaller reductions in absolute bias. Results are similar with respect to accuracy: while many streamgages saw reversals, a large proportion of streamgages continue to demonstrate improvements.

4 Discussion

The first analysis presented, which utilized observed FDCs for bias correction, represents only an assessment of the hypothetical potential of this general approach. Nonetheless, when the uncertainty in the independently estimated FDCs was minimized, the approach produced nearly universal and substantial reductions in bias and improvements in accuracy, overall and in each tail, for both the observation-dependent and observation-independent evaluation cases. For the observation-independent evaluation case, the errors are removed almost completely, and the remaining errors in the observation-dependent case mimic the errors in temporal structure (nonexceedance probability). These results, which are not applicable under the conditions of the true ungauged problem, demonstrate that the bias correction approach introduced here is theoretically valid. However, the improvement becomes inconsistent with respect to bias and generally reduces accuracy when the biased and uncertain regionally regressed FDCs are used. Furthermore, for both the observation-dependent and observation-independent tails in the case of rescaling by regionally regressed FDCs, the improvements in the lower tails are much more variable than the improvements in the upper tails (Figs. 5 and 6; Tables 3 and 4). This result is not surprising, given the more variable nature of lower tail bias and accuracy (Figs. 5 and 6).
The regional regressions developed here were much better at estimating the upper tail of the streamflow distribution than the lower tail. This provides a convenient comparison. The bias correction of lower tails with regionally regressed FDCs only improved the bias in the observation-dependent case, when the low bias of the regionally regressed FDC offset the high bias of the observation-dependent tails, and did not improve accuracy in either case. However, the bias correction of upper tails with regionally regressed FDCs, which reproduced the upper tails with much less bias, continued to show, as in the case of observed FDCs, improvements in bias and accuracy, though to a much smaller degree than the improvements produced by observed FDCs.

Particularly in the lower tail of the distribution, the effectiveness of this bias correction method is strongly influenced by the accuracy of the independently estimated FDC. The change in the absolute bias of the observation-independent lower tail has a 0.72 Pearson correlation with the absolute bias of the lowest eight percentiles of the FDC estimated with regional regression, showing that the residual bias in the FDC of the bias-corrected streamflow simulations is strongly correlated with the bias in the independently estimated FDC. The analogous correlation for the upper tail is 0.31. For the observation-dependent tails, these correlations are only 0.33 for each tail; the reduced correlation for the lower tail results from the combination of uncertainty in the temporal structure and uncertainty in the regionally regressed FDC. Therefore, as regional regression is not the only tool for estimating FDCs (Castellarin et al., 2013; Pugliese et al., 2014, 2016), improved methods for FDC estimation would further increase the impact of this bias correction procedure. There are also hints that the representation of the FDC as a set of discrete points degraded performance. Further work might address the question of improving FDC simulation. Still further, seasonal FDCs or other methods of increasing the temporal variability of FDCs could improve the performance of this general bias correction approach.

While this method of bias correction, as implemented here using regionally regressed FDCs, improves the bias in the upper tails, it had a negative impact on the lower tails. This makes the question of application or recommendation more poignant: under what conditions might this approach be worthwhile? Initial exploration did not find a strong regional component to the performance of the bias correction method. Figure 7 shows the original tail bias from pooled ordinary kriging; at each point, the accuracy of the bias correction method depends on the original bias present as well as on the error in the independently estimated FDC. For some regions, like New England, USA, where FDCs are well estimated by regional regression, there is a general improvement in accuracy under bias correction with regionally regressed FDCs, but the improvement is highly variable. Instead, the strongest link is with the reproduction of the FDC. When the magnitude of the tail biases of the regionally regressed FDC was under 20 %, more than 50 % of streamgages showed improvements in bias, both overall and in the tails of the distribution.
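The linkage reported above is a simple Pearson correlation across sites. A minimal sketch, assuming per-site arrays of the two diagnostics are already computed as in Tables 1 and 3, follows; the function name is illustrative.

```python
import numpy as np

def bias_linkage(delta_abs_tail_bias, fdc_tail_abs_bias):
    """Pearson correlation between the per-site change in absolute tail bias
    after correction and the absolute bias of the regressed FDC tail."""
    return np.corrcoef(delta_abs_tail_bias, fdc_tail_abs_bias)[0, 1]
```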
At a particular ungauged site, it may not always be possible to determine the accuracy with which a given FDC estimation technique will perform beyond a regional cross-validated assessment of general uncertainty, making it difficult to determine whether these results can be generalized. If the accuracy of the estimated FDCs can itself be estimated, it may also be useful to consider rescaling one tail and not the other, depending on that estimated accuracy. Further work might explore the effects of hydroclimate on the ability to reproduce reliable FDCs with which to implement this bias correction procedure.

The results of this work can also be discussed in reference to earlier work that suggested a prevalence, though not a universality, of underestimation of high streamflows and overestimation of low streamflows. Similarly, the bias correction approach produced a wide variability of results; where the upper tails might have been improved, the lower tails might have been degraded. Figure 10 shows the correspondence of tails across all sites. While there is a move toward unbiasedness at some sites (along the vertical axis), there is a great degree of variability that makes it difficult to draw general conclusions. In some situations, as in panel (d), the variability may actually increase with bias correction. Though all methods will produce variability, it remains for future research to determine whether a more consistent representation of the FDC might reduce the variability of this performance.

Figure 10. Scatter plots showing the correspondence of logarithmic bias, measured as the mean difference between the common logarithms of simulated and observed streamflow (simulated minus observed) at 1168 streamgages across the conterminous USA for observation-dependent and observation-independent upper and lower tails. Observation-dependent tails retain the ranks of observed streamflow while matching simulations by day. Observation-independent tails rank observations and simulations independently. The upper tail considers the highest 5 % of streamflow, while the lower tail considers the lowest 5 % of streamflow. Orig. refers to the original simulation with pooled ordinary kriging, and BC-RR refers to the Orig. hydrograph bias-corrected with regionally regressed flow duration curves.

When viewed from the point of view of the estimated FDCs, which need temporal information in order to simulate streamflow, this approach is as much an extension of the nonlinear spatial interpolation using FDCs developed by Fennessey (1994) and Hughes and Smakhtin (1996) as it is a bias correction. Here it is approached as a method for bias correction, but it can also be thought of as a novel approach to simulating the nonexceedance probabilities at an ungauged location, to be used with estimated distributional information (FDCs) to simulate streamflow. In the early uses of nonlinear spatial interpolation using FDCs, the simulated nonexceedance probabilities were obtained from a hydrologically appropriate neighboring streamgage or group of neighboring streamgages, though the approach to identifying a hydrologically appropriate neighbor has varied. Here, the entire network is used to approximate the ungauged nonexceedance probabilities, much like the indexing problem being overcome with ordinary kriging of streamflow directly (Farmer, 2016). Two major sources of uncertainty are inherent in nonlinear spatial interpolation using FDCs: uncertainty in the nonexceedance probabilities and uncertainty in the FDC.
This work addresses the general approach by attacking the former and observing that performance may be further limited by the latter. The potential success of this approach to bias correction is likely not specific to simulation with ordinary kriging. That this approach does improve the observation-dependent tails and the overall performance when observed FDCs are used shows that the temporal structure of the underlying simulation retains useful information, even if the tails of the original simulation are biased. However, some error remains in the simulated nonexceedance probabilities. A natural extension would be to investigate whether it might be more reasonable to estimate nonexceedance probabilities directly, rather than extracting their implicit values from the estimated streamflow time series as was done here. Here, the nonexceedance probabilities were derived from a simulation of the complete hydrograph; in the alternative approach, the discharge volumes would not be estimated, only the daily nonexceedance probabilities. Farmer and Koltun (2017) executed a kriging approach to estimate daily nonexceedance probabilities in a smaller data set in Ohio. They found that modeling probabilities directly resulted in tail biases of nonexceedance probability similar to those observed when simulating streamflow directly, as in Farmer (2016). Earlier work showed that kriging nonexceedance probabilities directly and then redistributing them via an estimated FDC, as compared with kriging streamflow directly, had only a marginal effect on bias in the tails. Further exploration of this question, whether to estimate nonexceedance probabilities directly or to derive them from streamflow simulations, is left for future research. The current study focuses on the more general question of whether the distributional bias in a set of simulated streamflows, the provenance thereof being more or less inconsequential, can be reduced using a regionally regressed FDC.

As mentioned earlier, recent work by Pugliese et al. (2018) explores how this generalization of nonlinear spatial interpolation using FDCs can be used to improve simulated hydrographs produced by a continental-scale deterministic model. They consider it an approach for informing a large-scale model with local information, thereby improving local application without further calibration. In 46 basins in Tyrol, Pugliese et al. (2018) saw universal improvement in the simulated hydrographs, though they did not explore tail biases. The results presented here provide an analysis across a wider range of basin characteristics and climates, demonstrating a link between how well the FDC can be reproduced and the ultimate improvement in performance or reduction in bias. Although the results presented here are promising, they demonstrate that the performance of two-stage modeling, in which temporal structure and magnitude are largely decoupled, is limited by the poorer-performing stage of modeling. In this case, alternative methods for estimating the FDC might prove worthwhile (Castellarin et al., 2013; Pugliese et al., 2014, 2016).

5 Summary and conclusions

Regardless of the underlying methodology, simulations of historical streamflow often exhibit distributional bias in the tails of the distribution of streamflow, usually an overestimation of the lower tail values and an underestimation of the upper tail values.
Such bias can be extremely problematic, as it is often these very tails that most affect human populations and other water management objectives and, thus, these tails that receive the most attention from water resources planners and managers. Therefore, a bias correction procedure was conceived to rescale simulated time series of daily streamflow so as to improve simulations of the highest and lowest streamflow values. Being akin to a novel implementation of nonlinear spatial interpolation using flow duration curves, this approach could be extended to other methods of streamflow simulation.

In a leave-one-out fashion, daily streamflow was simulated in each two-digit hydrologic unit using pooled ordinary kriging. Regional regressions of 27 percentiles of the flow duration curve in each two-digit hydrologic unit were independently developed. Using the Weibull plotting position, the simulated streamflow was converted into nonexceedance probabilities. The nonexceedance probabilities of the simulated streamflow were then used to interpolate newly simulated streamflow volumes from the regionally regressed flow duration curves. Assuming that the sequence of relative magnitudes of streamflow retains useful information despite possible biases in the magnitudes themselves, it was hypothesized that the simulated magnitudes can be corrected using an independently estimated flow duration curve.

This hypothesis was evaluated by considering the performance of the simulated streamflow volumes and the performance of the relative timing of the simulated streamflow. The evaluation was primarily focused on the examination of errors in both the high and low tails of the streamflow distribution, defined as the lowest and highest 5 % of streamflow, and considered changes in both bias and accuracy. When observed flow duration curves were used for bias correction, representing a case with minimal uncertainty in the independently estimated flow duration curve, the bias and accuracy of both tails were substantially improved and the overall accuracy was noticeably improved. The use of regionally regressed flow duration curves, which were observed to be approximately unbiased in the upper tails but biased low in the lower tails, corrected the upper tail bias but failed to consistently correct the lower tail bias. Furthermore, the use of the regionally regressed flow duration curves degraded the accuracy of the lower tails but had relatively little effect on the accuracy of the upper tails. Combining the bias correction and accuracy results, the test with regionally regressed flow duration curves can be said to have been successful in the upper tails (for which the regionally regressed flow duration curves were unbiased) but unsuccessful in the lower tails. The effect on accuracy of the bias correction approach using estimated flow duration curves was correlated with the accuracy with which each tail of the flow duration curve was estimated by regional regression.

In conclusion, this approach to bias correction has significant potential to improve the accuracy of streamflow simulations, though that potential is limited by how well the flow duration curve can be reproduced. While conceived as a method of bias correction, this approach is an analog of a previously applied nonlinear spatial interpolation method that uses flow duration curves to reproduce streamflow at ungauged basins.
While using the nonexceedance probabilities of kriged streamflow simulations may improve on the use of single index streamgages to obtain nonexceedance probabilities, further improvements are limited by the ability to estimate the flow duration curve more accurately.

Code and data availability. The data and scripts used to produce the results discussed herein can be found in the associated data release (Farmer et al., 2018).

Author contributions. WHF, TMO, and JEK jointly conceived of the idea. WHF designed the experiments and carried them out through the development of model code. WHF prepared the manuscript with contributions from all co-authors.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. This research was supported by the US Geological Survey's National Water Census. Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the US Government. We are very grateful for the comments of several reviewers, among whom was Benoit Hingray. The combined reviewer comments helped to greatly improve early versions of this manuscript.

Edited by: Monica Riva
Reviewed by: Benoit Hingray and three anonymous referees

References

Alley, W. M., Evenson, E. J., Barber, N. L., Bruce, B. W., Dennehy, K. F., Freeman, M. C., Freeman, W. O., Fischer, J. M., Hughes, W. B., Kennen, J. G., Kiang, J. E., Maloney, K. O., Musgrove, M., Ralston, B., Tessler, S., and Verdin, J. P.: Progress toward establishing a national assessment of water availability and use, Circular 1384, US Geological Survey, Reston, Virginia, available at: https://pubs.usgs.gov/circ/1384 (last access: 5 November 2018), 2013.

Archfield, S. A., Vogel, R. M., Steeves, P. A., Brandt, S. L., Weiskel, P. K., and Garabedian, S. P.: The Massachusetts Sustainable-Yield Estimator: A decision-support tool to assess water availability at ungaged stream locations in Massachusetts, Scientific Investigations Report 2009-5227, US Geological Survey, Reston, Virginia, available at: https://pubs.usgs.gov/sir/2009/5227/ (last access: 5 November 2018), 2010.

Archfield, S. A., Steeves, P. A., Guthrie, J. D., and Ries III, K. G.: Towards a publicly available, map-based regional software tool to estimate unregulated daily streamflow at ungauged rivers, Geosci. Model Dev., 6, 101–115, https://doi.org/10.5194/gmd-6-101-2013, 2013.

Blum, A. G., Archfield, S. A., and Vogel, R. M.: On the probability distribution of daily streamflow in the United States, Hydrol. Earth Syst. Sci., 21, 3093–3103, https://doi.org/10.5194/hess-21-3093-2017, 2017.

Castellarin, A., Botter, G., Hughes, D., Liu, S., Ouarda, T., Parajka, J., Post, D., Sivapalan, M., Spence, C., Viglione, A., and Vogel, R.: Prediction of flow-duration curves in ungauged basins, in: Runoff Prediction in Ungauged Basins: Synthesis Across Processes, Places and Scales, edited by: Blöschl, G., Sivapalan, M., Wagener, T., Viglione, A., and Savenije, H., Cambridge University Press, Cambridge, 2013.

Eng, K., Chen, Y.-Y., and Kiang, J. E.: User's guide to the weighted-multiple-linear-regression program (WREG version 1.0), Techniques and Methods 4-A8, US Geological Survey, Reston, Virginia, available at: https://pubs.usgs.gov/tm/tm4a8/ (last access: 5 November 2018), 2009.
Falcone, J.: Geospatial Attributes of Gages for Evaluating Streamflow, digital spatial dataset, available at: http://water.usgs.gov/GIS/metadata/usgswrd/XML/gagesII_Sept2011.xml (last access: 5 November 2018), 2011.

Farmer, W. H.: Estimating records of daily streamflow at ungaged locations in the southeast United States, PhD dissertation, Tufts University, 2015.

Farmer, W. H.: Ordinary kriging as a tool to estimate historical daily streamflow records, Hydrol. Earth Syst. Sci., 20, 2721–2735, https://doi.org/10.5194/hess-20-2721-2016, 2016.

Farmer, W. H. and Koltun, G.: Geospatial tools effectively estimate nonexceedance probabilities of daily streamflow at ungauged and intermittently gauged locations in Ohio, J. Hydrol.: Reg. Stud., 13, 208–221, https://doi.org/10.1016/j.ejrh.2017.08.006, 2017.

Farmer, W. H. and Vogel, R. M.: On the deterministic and stochastic use of hydrologic models, Water Resour. Res., 52, 5619–5633, https://doi.org/10.1002/2016WR019129, 2016.

Farmer, W. H., Archfield, S. A., Over, T. M., Hay, L. E., LaFontaine, J. H., and Kiang, J. E.: A comparison of methods to predict historical daily streamflow time series in the southeastern United States, Scientific Investigations Report 2014-5231, US Geological Survey, Reston, Virginia, https://doi.org/10.3133/sir20145231, 2014.

Farmer, W. H., Knight, R. R., Eash, D. A., Hutchinson, K. J., Linhart, S. M., Christiansen, D. E., Archfield, S. A., Over, T. M., and Kiang, J. E.: Evaluation of statistical and rainfall-runoff models for predicting historical daily streamflow time series in the Des Moines and Iowa River watersheds, Scientific Investigations Report 2015-5089, US Geological Survey, Reston, Virginia, https://doi.org/10.3133/sir20155089, 2015.

Farmer, W. H., Over, T. M., and Kiang, J. E.: Bias correction of simulated historical daily streamflow at ungauged locations by using independently estimated flow-duration curves: Data release, Tech. rep., US Geological Survey, Reston, Virginia, https://doi.org/10.5066/F7VD6XNG, 2018.

Fennessey, N. M.: A hydro-climatological model of daily stream flow for the northeast United States, PhD dissertation, Tufts University, 1994.

Hrachowitz, M., Savenije, H., Blöschl, G., McDonnell, J., Sivapalan, M., Pomeroy, J., Arheimer, B., Blume, T., Clark, M., Ehret, U., Fenicia, F., Freer, J., Gelfan, A., Gupta, H., Hughes, D., Hut, R., Montanari, A., Pande, S., Tetzlaff, D., Troch, P., Uhlenbrook, S., Wagener, T., Winsemius, H., Woods, R., Zehe, E., and Cudennec, C.: A decade of Predictions in Ungauged Basins (PUB) – a review, Hydrolog. Sci. J., 58, 1198–1255, https://doi.org/10.1080/02626667.2013.803183, 2013.

Hughes, D. A. and Smakhtin, V.: Daily flow time series patching or extension: a spatial interpolation approach based on flow duration curves, Hydrolog. Sci. J., 41, 851–871, https://doi.org/10.1080/02626669609491555, 1996.

Lichty, R. W. and Liscum, F.: A rainfall-runoff modeling procedure for improving estimates of T-year (annual) floods for small drainage basins, Water Resources Investigations Report 78-7, US Geological Survey, Reston, Virginia, available at: https://pubs.er.usgs.gov/publication/wri787 (last access: 5 November 2018), 1978.

Mohamoud, Y. M.: Prediction of daily flow duration curves and streamflow for ungauged catchments using regional flow duration curves, Hydrolog. Sci. J., 53, 706–724, https://doi.org/10.1623/hysj.53.4.706, 2008.
Over, T., Farmer, W., and Russell, A.: Refinement of a regression-based method for prediction of flow-duration curves of daily streamflow in the conterminous United States, Scientific Investigations Report 2018-5072, US Geological Survey, Reston, Virginia, https://doi.org/10.3133/sir20185072, 2018.

Parajka, J., Andréassian, V., Archfield, S., Bàrdossy, A., Blöschl, G., Chiew, F., Duan, Q., Gelfan, A., Hlavcova, K., Merz, R., McIntyre, N., Oudin, L., Perrin, C., Rogger, M., Salinas, J., Savenije, H., Skøien, J., Wagener, T., Zehe, E., and Zhang, Y.: Prediction of runoff hydrographs in ungauged basins, in: Runoff Prediction in Ungauged Basins: Synthesis Across Processes, Places and Scales, edited by: Blöschl, G., Sivapalan, M., Wagener, T., Viglione, A., and Savenije, H., Cambridge University Press, Cambridge, 2013.

Poncelet, C., Andréassian, V., Oudin, L., and Perrin, C.: The Quantile Solidarity approach for the parsimonious regionalization of flow duration curves, Hydrolog. Sci. J., 62, 1364–1380, https://doi.org/10.1080/02626667.2017.1335399, 2017.

Pugliese, A., Castellarin, A., and Brath, A.: Geostatistical prediction of flow-duration curves in an index-flow framework, Hydrol. Earth Syst. Sci., 18, 3801–3816, https://doi.org/10.5194/hess-18-3801-2014, 2014.

Pugliese, A., Farmer, W. H., Castellarin, A., Archfield, S. A., and Vogel, R. M.: Regional flow duration curves: Geostatistical techniques versus multivariate regression, Adv. Water Resour., 96, 11–22, https://doi.org/10.1016/j.advwatres.2016.06.008, 2016.

Pugliese, A., Persiano, S., Bagli, S., Mazzoli, P., Parajka, J., Arheimer, B., Capell, R., Montanari, A., Blöschl, G., and Castellarin, A.: A geostatistical data-assimilation technique for enhancing macro-scale rainfall–runoff simulations, Hydrol. Earth Syst. Sci., 22, 4633–4648, https://doi.org/10.5194/hess-22-4633-2018, 2018.

Rasmussen, T. J., Lee, C. J., and Ziegler, A. C.: Estimation of constituent concentrations, loads, and yields in streams of Johnson County, northeast Kansas, using continuous water-quality monitoring and regression models, October 2002 through December 2006, Scientific Investigations Report 2008-5014, US Geological Survey, Reston, Virginia, available at: https://pubs.usgs.gov/sir/2008/5014/ (last access: 5 November 2018), 2008.

Seaber, P. R., Kapinos, F. P., and Knapp, G. L.: Hydrologic Unit Maps, Water Supply Paper 2294, US Geological Survey, Reston, Virginia, available at: https://pubs.usgs.gov/wsp/wsp2294/ (last access: 5 November 2018), 1987.

Sherwood, J. M.: Estimation of peak-frequency relations, flood hydrographs, and volume-duration-frequency relations of ungaged small urban streams in Ohio, Water-Supply Paper 2432, US Geological Survey, Reston, Virginia, available at: https://pubs.er.usgs.gov/publication/wsp2432 (last access: 5 November 2018), 1994.

Shu, C. and Ouarda, T. B. M. J.: Improved methods for daily streamflow estimates at ungauged sites, Water Resour. Res., 48, 1–15, https://doi.org/10.1029/2011WR011501, 2012.

Sivapalan, M.: Prediction in ungauged basins: a grand challenge for theoretical hydrology, Hydrol. Process., 17, 3163–3170, https://doi.org/10.1002/hyp.5155, 2003.

Sivapalan, M., Takeuchi, K., Franks, S. W., Gupta, V. K., Karambiri, H., Lakshmi, V., Liang, X., McDonnell, J. J., Mendiondo, E. M., O'Connell, P. E., Oki, T., Pomeroy, J. W., Schertzer, D., Uhlenbrook, S., and Zehe, E.: IAHS Decade on Predictions in Ungauged Basins (PUB), 2003–2012: Shaping an exciting future for the hydrological sciences, Hydrolog. Sci. J., 48, 857–880, https://doi.org/10.1623/hysj.48.6.857.51421, 2003.
Skøien, J. O. and Blöschl, G.: Spatiotemporal topological kriging of runoff time series, Water Resour. Res., 43, 1–21, https://doi.org/10.1029/2006WR005760, 2007.

Smakhtin, V.: Generation of natural daily flow time-series in regulated rivers using a non-linear spatial interpolation technique, Regulat. Rivers Res. Manage., 15, 311–323, https://doi.org/10.1002/(SICI)1099-1646(199907/08)15:4<311::AID-RRR544>3.0.CO;2-W, 1999.

Thomas, W. O.: An evaluation of flood frequency estimates based on runoff modeling, J. Am. Water Resour. Assoc., 18, 221–229, https://doi.org/10.1111/j.1752-1688.1982.tb03964.x, 1982.

Tobin, J.: Estimation of Relationships for Limited Dependent Variables, Econometrica, 26, 24–36, https://doi.org/10.2307/1907382, 1958.

Weibull, W.: A statistical theory of strength of materials, Ing. Vetensk. Akad. Handl., 151, 1–45, 1939.

Wilcoxon, F.: Individual Comparisons by Ranking Methods, Biometrics Bulletin, 1, 80–83, https://doi.org/10.2307/3001968, 1945.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8832249641418457, "perplexity": 2807.263048450729}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583814455.32/warc/CC-MAIN-20190121213506-20190121235506-00521.warc.gz"}
http://www.nag.com/numeric/FL/nagdoc_fl24/html/S/s19apf.html
S Chapter Contents | S Chapter Introduction | NAG Library Manual

NAG Library Routine Document: S19APF

Note: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.

1 Purpose

S19APF returns an array of values for the Kelvin function $\mathrm{bei}\,x$.

2 Specification

SUBROUTINE S19APF (N, X, F, IVALID, IFAIL)
INTEGER            N, IVALID(N), IFAIL
REAL (KIND=nag_wp) X(N), F(N)

3 Description

S19APF evaluates an approximation to the Kelvin function $\mathrm{bei}\,x_i$ for an array of arguments $x_i$, for $i = 1, 2, \dots, n$.

Note: $\mathrm{bei}(-x) = \mathrm{bei}\,x$, so the approximation need only consider $x \ge 0.0$.

The routine is based on several Chebyshev expansions. For $0 \le x \le 5$,

$$\mathrm{bei}\,x = \frac{x^2}{4}\,{\sum_{r=0}}' a_r T_r(t), \quad \text{with } t = 2\left(\frac{x}{5}\right)^4 - 1;$$

for $x > 5$,

$$\mathrm{bei}\,x = \frac{e^{x/\sqrt{2}}}{\sqrt{2\pi x}}\left[\left(1 + \frac{1}{x}a(t)\right)\sin\alpha - \frac{1}{x}b(t)\cos\alpha\right] + \frac{e^{-x/\sqrt{2}}}{\sqrt{2\pi x}}\left[\left(1 + \frac{1}{x}c(t)\right)\cos\beta - \frac{1}{x}d(t)\sin\beta\right],$$

where $\alpha = \frac{x}{\sqrt{2}} - \frac{\pi}{8}$, $\beta = \frac{x}{\sqrt{2}} + \frac{\pi}{8}$, and $a(t)$, $b(t)$, $c(t)$, and $d(t)$ are expansions in the variable $t = \frac{10}{x} - 1$.

When $x$ is sufficiently close to zero, the result is computed as $\mathrm{bei}\,x = \frac{x^2}{4}$. If this result would underflow, the result returned is $\mathrm{bei}\,x = 0.0$.

For large $x$, there is a danger of the result being totally inaccurate, as the error amplification factor grows in an essentially exponential manner; therefore the routine must fail.

4 References

Abramowitz, M. and Stegun, I. A. (1972) Handbook of Mathematical Functions (3rd Edition), Dover Publications.

5 Parameters

1: N – INTEGER. Input. On entry: $n$, the number of points. Constraint: N ≥ 0.

2: X(N) – REAL (KIND=nag_wp) array. Input. On entry: the argument $x_i$ of the function, for $i = 1, 2, \dots, N$.

3: F(N) – REAL (KIND=nag_wp) array. Output. On exit: $\mathrm{bei}\,x_i$, the function values.

4: IVALID(N) – INTEGER array. Output. On exit: IVALID(i) contains the error code for $x_i$, for $i = 1, 2, \dots, N$.
   IVALID(i) = 0: no error.
   IVALID(i) = 1: $|x_i|$ is too large for an accurate result to be returned; F(i) contains zero. The threshold value is the same as for IFAIL = 1 in S19ABF, as defined in the Users' Note for your implementation.

5: IFAIL – INTEGER. Input/Output. On entry: IFAIL must be set to 0, −1 or 1. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value −1 or 1 is recommended. If the output of error messages is undesirable, then the value 1 is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is 0. When the value −1 or 1 is used it is essential to test the value of IFAIL on exit. On exit: IFAIL = 0 unless the routine detects an error or a warning has been flagged (see Section 6).

6 Error Indicators and Warnings

If on entry IFAIL = 0 or −1, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Errors or warnings detected by the routine:

IFAIL = 1: On entry, at least one value of X was invalid.

IFAIL = 2: On entry, N = ⟨value⟩. Constraint: N ≥ 0.

7 Accuracy

Since the function is oscillatory, the absolute error rather than the relative error is important. Let $E$ be the absolute error in the function, and $\delta$ be the relative error in the argument. If $\delta$ is somewhat larger than the machine precision, then we have:
$$E \simeq \left|\frac{x}{\sqrt{2}}\left(-\mathrm{ber}_1 x + \mathrm{bei}_1 x\right)\right|\delta$$
(provided $E$ is within machine bounds).

For small $x$ the error amplification is insignificant and thus the absolute error is effectively bounded by the machine precision. For medium and large $x$, the error behaviour is oscillatory and its amplitude grows like $\sqrt{\frac{x}{2\pi}}\,e^{x/\sqrt{2}}$. Therefore it is impossible to calculate the function with any accuracy when $\sqrt{x}\,e^{x/\sqrt{2}} > \frac{\sqrt{2\pi}}{\delta}$. Note that this value of $x$ is much smaller than the minimum value of $x$ for which the function overflows.

8 Further Comments

None.

9 Example

This example reads values of X from a file, evaluates the function at each value of $x_i$ and prints the results.

9.1 Program Text: Program Text (s19apfe.f90)
9.2 Program Data: Program Data (s19apfe.d)
9.3 Program Results: Program Results (s19apfe.r)
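As an aside not taken from the NAG documentation: SciPy exposes the same Kelvin function as scipy.special.bei, which is convenient for cross-checking a few values against the small-$x$ behaviour $\mathrm{bei}\,x \approx \frac{x^2}{4}$ noted above.

```python
from scipy import special

# Cross-check: for small x, bei(x) should approach x**2 / 4.
for x in [0.1, 1.0, 5.0]:
    print(x, special.bei(x), x**2 / 4)
```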
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 60, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9970373511314392, "perplexity": 1424.521548389783}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663218.28/warc/CC-MAIN-20140930004103-00413-ip-10-234-18-248.ec2.internal.warc.gz"}
https://www.cheenta.com/integer-solutions-of-a-three-variable-equation/
Problem: Consider the following equation: $$(x-y)^2 + (y-z)^2 + (z - x)^2 = 2018$$. Find the integer solutions to this equation. Discussion: Set x – y = a, y – z = b. Then z – x = – (a+b). Clearly, we have $$a^2 + b^2 + (-(a+b))^2 = 2018$$. Simplifying, we have $$a^2 + b^2 + ab = 1009$$. Now, treating this as a quadratic in a, we have: $$a^2 + ba + b^2 – 1009 = 0$$ Hence $$a = \frac{-b \pm \sqrt{b^2 – 4(b^2 – 1009)}}{2} = \frac{-b \pm \sqrt{4 \times 1009 – 3b^2}}{2}$$ Since a is an integer, we must have $$4 \times 1009 – 3b^2$$ (the discriminant) be a non-negative perfect square. This severely limits the number of possibilities for b. For example, we need $$b^2 \le \frac{4 \times 1009}{3}$$ or $$|b| \le 36$$. So one may ‘check’ these values. For instance, b = 35 gives $$a = \frac{-35 \pm \sqrt{4 \times 1009 – 3\times 35^2}}{2} = \frac{-35 \pm 19}{2}$$, which makes $$a$$ negative; whereas b = 8 gives the discriminant $$4036 - 192 = 3844 = 62^2$$ and hence $$a = 27$$. Reducing the number of cases: • Suppose b is the smaller of a and b (WLOG); then $$1009 = a^2 + ab + b^2 \ge b^2 + b \cdot b + b^2 = 3b^2$$ or $$1009/3 \ge b^2$$ or 19 > b. • Also, b is 0, 1 or -1 mod 7 (if $$4 \times 1009 – 3b^2$$ is to be a perfect square). Hence we bring it down to 6 cases: b = 6, 7, 8, 13, 14, 15 (b = 0, 1 also satisfy the mod 7 condition but are easily ruled out), of which b = 8 works, giving (a, b) = (27, 8).
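As an aside (not part of the original discussion), a brute-force search confirms the analysis; since $$a^2+ab+b^2 = (a+\frac{b}{2})^2 + \frac{3}{4}b^2 \ge \frac{3}{4}\max(a^2,b^2)$$, any solution has $$|a|, |b| \le 36$$:

```python
# All integer pairs (a, b) with a^2 + ab + b^2 = 1009; |a|, |b| <= 36 suffices.
solutions = [(a, b)
             for a in range(-36, 37)
             for b in range(-36, 37)
             if a * a + a * b + b * b == 1009]
print(solutions)  # includes (27, 8), (8, 27) and sign/symmetry variants
```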
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9521740674972534, "perplexity": 330.6627279513172}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526714.15/warc/CC-MAIN-20190720214645-20190721000645-00292.warc.gz"}
https://www.physicsforums.com/threads/analysis-2-riemann-integrable-functions.427152/
# Analysis 2 – Riemann integrable functions 1. Sep 7, 2010 ### perlawin 1. If abs(f) is Riemann integrable on [a,b], then f is Riemann integrable on [a,b]. True or false (show work) 2. A function f is Riemann integrable iff f is bounded on [a,b], and for every epsilon > 0 there is a partition P of [a,b] s.t. U(f,P) - L(f,P) < epsilon 3. I believe that this is true. So, what I want to do is show that f is bounded on [a,b], and I also want to show the second part of the definition. To show it was bounded, I used the fact that abs(f) was bounded and eventually got sup(f) <= sup(abs(f)) and inf(f) >= -sup(abs(f)), which *I think* proved that f is bounded. My question is how do I show that it fits the second part of the definition? 2. Sep 7, 2010 ### Dick You can't show the second part. Because it isn't true. Try to find a counterexample. 3. Sep 7, 2010 ### deluks917 I'm not sure what the question is or what your answer is. However, the statement "if abs(f) is Riemann integrable on [a,b] then f is Riemann integrable on [a,b]" isn't true.
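(A standard counterexample, for reference; it is not spelled out in the thread: take f(x) = 1 for rational x and f(x) = -1 for irrational x on [a,b]. Then abs(f) = 1 is constant, hence Riemann integrable, but every subinterval of every partition P contains both rationals and irrationals, so U(f,P) - L(f,P) = 2(b-a) for all P and f is not Riemann integrable.)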
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9085069298744202, "perplexity": 755.8902860713409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645775.16/warc/CC-MAIN-20180318130245-20180318150245-00796.warc.gz"}
http://pruvop.com/collectionsCalvin_Women's_Calvin_Klein_Klein_Vance_White_0wR08r/zska/20.mdoc
## JavaScript style guide, linter, and formatter

This module saves you (and others!) time in three ways:

No configuration. Automatically format code. Catch style issues and programmer errors early.

No decisions to make. No .eslintrc , .jshintrc , or .jscsrc files to manage. It just works.

The rules, in brief: 2 spaces; single quotes for strings; no unused variables; no semicolons; space after keywords; space after function name. To get a better idea, take a look at a sample file written in JavaScript Standard Style. Or, check out one of the thousands of projects that use standard !

The easiest way to use JavaScript Standard Style is to install it globally as a Node command line program. Run the following command in Terminal:

$ npm install standard --global

Or, you can install standard locally, for use in a single project:

$ npm install standard --save-dev

Note: To run the preceding commands, Node.js and npm must be installed.

After you've installed standard , you should be able to use the standard program. The simplest use case would be checking the style of all JavaScript files in the current working directory:

$ standard
Error: Use JavaScript Standard Style
  lib/torrent.js:950:11: Expected '===' and instead saw '=='.

You can optionally pass in a directory (or directories) using the glob pattern. Be sure to quote paths containing glob patterns so that they are expanded by standard instead of your shell:

$ standard "src/util/**/*.js" "test/**/*.js"

Note: by default standard will look for all files matching the patterns: **/*.js , **/*.jsx .
The exterior algebra Λ( V ) of a vector space V over a field K is defined as the quotient algebra of the tensor algebra T ( V ) by the two-sided ideal I generated by all elements of the form x ⊗ x for x ∈ V (i.e. all tensors that can be expressed as the tensor product of any vector in V by itself). Symbolically,

$$\Lambda(V) := T(V)/I.$$

The exterior product ∧ of two elements of Λ( V ) is defined by

$$\alpha \wedge \beta = \alpha \otimes \beta + I,$$

where the + I means that we derive the tensor product in the usual way and find the coset (or equivalence class) in the quotient with respect to the ideal I . Equivalently, any two tensors that differ by only an element of the ideal are considered to be the same element in the exterior algebra.

As T 0 = K , T 1 = V , and $(T^0(V)\oplus T^1(V))\cap I=\{0\}$ , the inclusions of K and V in T ( V ) induce injections of K and V into Λ( V ) . These injections are commonly considered as inclusions, and called natural embeddings , natural injections or natural inclusions .

The exterior product is alternating on elements of V , which means that x ∧ x = 0 for all x ∈ V , by the above construction. It follows that the product is also anticommutative on elements of V , for supposing that x , y ∈ V ,

$$0 = (x+y)\wedge(x+y) = x\wedge x + x\wedge y + y\wedge x + y\wedge y = x\wedge y + y\wedge x,$$

whence x ∧ y = −( y ∧ x ).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15007315576076508, "perplexity": 6959.657627478078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825112.63/warc/CC-MAIN-20181213215347-20181214000847-00342.warc.gz"}
https://brilliant.org/problems/common-perfect-square/
# common perfect square What is the minimum value of the positive integer $$n$$ such that both $$4n+1$$ and $$6n + 1$$ are perfect squares?
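A worked solution (not shown on the problem page): since $$4n+1$$ is odd, write $$4n+1=(2m+1)^2$$, which gives $$n=m(m+1)$$, i.e. $$n \in \{2, 6, 12, 20, 30, \dots\}$$. Along this list $$6n+1$$ takes the values $$13, 37, 73, 121, \dots$$, and the first perfect square is $$121 = 11^2$$ at $$n = 20$$, where also $$4n+1 = 81 = 9^2$$. Hence the minimum is $$n = 20$$.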
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5447070002555847, "perplexity": 155.68634102016057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823220.45/warc/CC-MAIN-20171019031425-20171019051425-00657.warc.gz"}
https://slideplayer.com/slide/7288531/
# Significant Figures in Mathematical Operations

The result of a mathematical operation cannot be expressed to any greater degree of certainty than the measured values used in the operation.

In a series of calculations, carry the extra digits through to the final result, then round. Example: 55.6 g ÷ (35.60 mL – 22.40 mL). First: 35.60 mL – 22.40 mL = 13.20 mL. Then: 55.6 ÷ 13.20 = 4.21212 g/mL = 4.21 g/mL. The answer has 3 sig figs because 55.6 has 3 sig figs.

Adding and subtracting sig figs – the result should have the same number of decimal places as the least precise measurement used in the calculation! Line up the decimals and add:

150.0 g H2O
  0.507 g salt
150.5 g solution (using significant figures)

150.0 is the least precise, so the answer will have no more than one place to the right of the decimal.

Example: the answer will have the same number of decimal places as the least precise measurement used. 12.11 cm + 18.0 cm + 1.013 cm + 31.132 cm + 9.62 cm = 71.875 cm. The correct answer is 71.9 cm – the last sig fig is “8”, so you round using only the first number to the right of the last significant digit, which is “7”.

Multiplication and division of sig figs – your answer must be limited to the measurement with the least number of sig figs. 5.15 (3 sig figs) × 2.3 (2 sig figs) = 11.845 (5 digits), but only 2 sig figs are allowed, so 11.845 is rounded to 12.

Multiplication and Division: the answer will be rounded to the same number of significant figures as the measurement with the fewest significant figures. 4.56 cm × 1.4 cm = 6.38 cm² = 6.4 cm². Similarly, 28.0 inches × 2.54 cm / 1 inch = 71.12 cm; the answer is 71.1 cm because the measurement of 28.0" had 3 sig figs – you DID NOT measure 1 inch or 2.54 cm, since the conversion factor is already exact.

More than one operation: (1.245 g + 6.34 g + 8.179 g)/7.5. Add 1.245 + 6.34 + 8.179 = 15.764, then divide by 7.5 to get 2.1018…, which rounds to 2.1 (7.5 has only 2 sig figs).
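These rounding rules are easy to automate; the helper below is an illustrative aside (not part of the slides) that rounds a value to a given number of significant figures and reproduces the worked answers above.

```python
import math

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    digits = sig - 1 - int(math.floor(math.log10(abs(x))))
    return round(x, digits)

print(round_sig(55.6 / 13.20, 3))                  # 4.21 g/mL
print(round_sig(4.56 * 1.4, 2))                    # 6.4 cm^2
print(round_sig((1.245 + 6.34 + 8.179) / 7.5, 2))  # 2.1
```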
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8611881732940674, "perplexity": 1087.9666870226167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989637.86/warc/CC-MAIN-20210518125638-20210518155638-00175.warc.gz"}
https://thinkandstart.com/menace-ii-oqcpwq/finding-angles-in-a-circle-worksheet-3f40df
There are several ways of drawing an angle in a circle, and each has a special way of computing the size of that angle.

THEOREM: If an angle inside a circle intercepts a diameter, then the angle has a measure of $$90^\circ$$.

THEOREM: If two angles inscribed in a circle intercept the same arc, then they are equal to each other.

Now let's use these theorems to find the values of some angles!

Finding m∠A: In the circle T, m∠arc BC + m∠arc CD = 61° + 147° = 208°, so m∠arc BCD = 208°. By the Inscribed Angle Theorem, m∠A = 1/2 ⋅ m∠arc BCD = 104°.

Example 3: Determine the measures of all unknown angles in the figure below. Step 1: Set up an equation to represent the sum of the three angles of a triangle – the sum of the interior angles in a triangle is 180°.

Other facts used on the worksheet: the sum of the interior angles in a hexagon is 4 × 180° = 720° (a hexagon splits into four triangles); a full turn – the “trig circle” – is 360°, or $$2\pi$$ radians. The answers to most of the proofs can be found in a free, online PowerPoint demonstration.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47355324029922485, "perplexity": 2878.455604362782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00344.warc.gz"}
https://www.gradesaver.com/textbooks/math/geometry/geometry-common-core-15th-edition/chapter-2-reasoning-and-proof-2-5-reasoning-in-algebra-and-geometry-practice-and-problem-solving-exercises-page-119/35
Geometry: Common Core (15th Edition) Use the Angle Addition Postulate and substitution. $m\angle AOC=m\angle AOB+m\angle BOC=60+20=80$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3668893873691559, "perplexity": 17674.515346765264}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863570.21/warc/CC-MAIN-20180520131723-20180520151723-00158.warc.gz"}
https://math.stackexchange.com/questions/3050491/stuck-in-calculus-and-derivative-question
# Stuck in calculus and derivative question Let $$f:[a,b] \to \mathbb{R}$$ be a continuous function such that $$f(b)>f(a)$$ and $$f$$ is not linear (meaning $$f \neq c x +d$$), and $$f$$ is differentiable on $$(a,b)$$. Prove that there is $$c \in (a,b)$$ such that: $$f'(c) > \frac{f(b)-f(a)}{b-a}$$ By Lagrange's theorem (the mean value theorem) I know that there is $$t \in (a,b)$$ such that $$f'(t) = \frac{f(b)-f(a)}{b-a}$$, but how do I get strictly greater, and not just equal? • Could it happen that $f'(t)\le\frac{f(b)-f(a)}{b-a}$ for all $t\in (a,b)$? – Ted Shifrin Dec 23 '18 at 17:02 Let$$\begin{array}{rccc}g\colon&[a,b]&\longrightarrow&\mathbb R\\&x&\mapsto&f(x)-f(a)-\frac{f(b)-f(a)}{b-a}(x-a).\end{array}$$Then $$g(a)=g(b)=0$$. On the other hand, $$g'(x)=f'(x)-\frac{f(b)-f(a)}{b-a}$$ and so asserting that there is no such $$c$$ is equivalent to asserting that $$g'(x)\leqslant0$$ for each $$x\in(a,b)$$. But then $$g$$ is non-increasing. The only way that a non-increasing function from $$[a,b]$$ to $$\mathbb R$$ can vanish at both $$a$$ and $$b$$ is that $$g$$ is the null function. But then$$(\forall x\in[a,b]):f(x)=f(a)+\frac{f(b)-f(a)}{b-a}(x-a).$$So, $$f$$ would be linear.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 24, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9117802381515503, "perplexity": 68.83882728566006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257244.16/warc/CC-MAIN-20190523123835-20190523145835-00422.warc.gz"}
http://tex.stackexchange.com/questions/192393/how-to-reduce-the-space-between-the-hat-of-overset-and-the-symbol
# How to reduce the space between the hat of overset and the symbol? I would like to place the \rightharpoonup on r and get them closer. I had tried the \overset and \Rvector package. But the spacing of the former is too much. And the latter performs bad when placing in smaller font size. I provide the code as follow: \documentclass[titlepage]{article} \usepackage[cm]{fullpage} \usepackage{amsmath,amssymb} \begin{document} $\mathbf{e}_{\overset{\rightharpoonup}{r}}$ \end{document} - Welcome to TeX.SX! Please help us to help you and add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. –  Christian Hupfer Jul 20 at 15:58 A tip: If you indent lines by 4 spaces or enclose words in backticks , they'll be marked as code, as can be seen in my edit. You can also highlight the code and click the "code" button (with "{}" on it). –  Christian Hupfer Jul 20 at 16:21 –  Werner Jul 20 at 16:32 Here's a quick-and-not-too-dirty offering: a macro called \harp. It assumes it'll be used in subscript and superscript positions only. For now, it only works with letters that do not have an ascender part. I.e., don't use if with letters such as b, d, f, h, etc. (If you do need to use the macro with such letters, you'll need to tweak the argument of the \raise macro.) \documentclass[titlepage]{article} \usepackage[cm]{fullpage} \usepackage{amsmath,amssymb} \newcommand\harp[1]{\mathstrut\mkern2.5mu#1\mkern-11mu\raise0.6ex% \hbox{$\scriptscriptstyle\rightharpoonup$}} \begin{document} $\mathbf{e}_{\overset{\rightharpoonup}{r}} \text{ vs.\ } \mathbf{e}_{\harp{r}}$ \end{document} - Thanks,it works really good. –  陳智圓 Jul 20 at 23:59 There are two orders of problems: 1. the distance from the arrow to the symbol is too big; 2. in subscripts or superscripts, the arrow is too wide. Here's a solution for both. \documentclass[titlepage]{article} \usepackage{amsmath,graphicx} \newcommand{\harp}[1]{\mathpalette\harpoonvec{#1}} \newcommand{\harpvecsign}{\scriptscriptstyle\rightharpoonup} \newcommand{\harpoonvec}[2]{% \ifx\displaystyle#1\doalign{$\harpvecsign$}{#1#2}\fi \ifx\textstyle#1\doalign{$\harpvecsign$}{#1#2}\fi \ifx\scriptstyle#1\doalign{\scalebox{.6}[.9]{$\harpvecsign$}}{#1#2}\fi \ifx\scriptscriptstyle#1\doalign{\scalebox{.5}[.8]{$\harpvecsign$}}{#1#2}\fi } \newcommand{\doalign}[2]{% {\vbox{\offinterlineskip\ialign{\hfil##\hfil\cr#1\cr$#2$\cr}}}% } \begin{document} \begin{equation*} \mathbf{e}_{\harp{r}_{\harp{r}}} \harp{\mathbf{g}} \harp{A} \end{equation*} \end{document} - A solution using the accents package. The difference of vertical spacing with respect to the O.P. method with \overset is null in scriptscriptstyle, slight in \scriptstyle and more important in \textstyle. 
Otherwise the placement is different:

\documentclass[titlepage]{article}
\usepackage{amsmath,amssymb}
\usepackage{accents}
\newcommand*\harp[1]{\mkern2mu\accentset{\rightharpoonup}{#1}\mkern2mu}%
\begin{document}
$\begin{array}{cc}
\texttt{\small\textbackslash mathaccent: } & \texttt{\small\textbackslash overset: }\\
\mathbf{e}_{\scriptscriptstyle\harp r} & \mathbf{e}_{\scriptscriptstyle\overset{\rightharpoonup}{r}} \\[2pt]
\mathbf{e}_{\harp r} & \mathbf{e}_{\overset{\rightharpoonup}{r}} \\[4pt]
\harp{\mathbf g } & \overset{\rightharpoonup}{\mathbf g}
\end{array}$%
\end{document}

- The vertical space between the harpoon and the letter seems no smaller (and possibly even a bit larger) in your code than is already the case with the OP's code. – Mico Jul 20 at 21:40
- It is slightly smaller in scriptstyle (by 0.4pt), equal in scriptscriptstyle and noticeably smaller in textstyle (by 1.4pt). See my updated answer. – Bernard Jul 20 at 23:04

Macros from the stackengine package allow one to set the vertical stacking gap. Compare items 1 vs. 2 and/or items 3 vs. 4, and/or 5 vs. 6 for a demonstration of changing the stacking gap. In addition, the appearance will also be affected by what is considered the baseline of the subscript. In 1, 2, the baseline is between the "r" and the harpoon; in 3, 4, the "r" is the baseline, whereas in 5, 6, the harpoon is the baseline.

\documentclass[titlepage]{article}
\usepackage[cm]{fullpage}
\usepackage{amsmath,amssymb}
\usepackage{stackengine}
\stackMath
\begin{document}
$\mathbf{e}_{\stackanchor[-.5pt]{\scriptscriptstyle \rightharpoonup}{\scriptstyle r}}
 \mathbf{e}_{\stackanchor[0pt]{\scriptscriptstyle \rightharpoonup}{\scriptstyle r}} ~
 \mathbf{e}_{\stackon[-.5pt]{\scriptstyle r}{\scriptscriptstyle \rightharpoonup}}
 \mathbf{e}_{\stackon[0pt]{\scriptstyle r}{\scriptscriptstyle \rightharpoonup}} ~
 \mathbf{e}_{\stackunder[-.5pt]{\scriptscriptstyle \rightharpoonup}{\scriptstyle r}}
 \mathbf{e}_{\stackunder[0pt]{\scriptscriptstyle \rightharpoonup}{\scriptstyle r}}$
\end{document}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9086608290672302, "perplexity": 2221.774090163058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447555354.63/warc/CC-MAIN-20141224185915-00046-ip-10-231-17-201.ec2.internal.warc.gz"}
https://gateoverflow.in/381374/isi-2020-pcb-mathematics-question-5-2
Deduce that if $N, H, K$ are normal subgroups of a group $G$ such that $$N \bigcap H=N \bigcap K=H \bigcap K=\left\{e_{G}\right\}$$ and $G=H K$, then $N$ is an Abelian group.
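A proof sketch (not given on the page, but standard): if $A$ and $B$ are normal subgroups of $G$ with $A \cap B = \{e_G\}$, then for $a \in A$, $b \in B$ the commutator $aba^{-1}b^{-1}$ lies in both $A$ and $B$ by normality, hence equals $e_G$, so every element of $A$ commutes with every element of $B$. Applying this to the pairs $(N,H)$ and $(N,K)$ shows that $N$ commutes elementwise with $H$ and with $K$, hence with all of $G = HK$, i.e. $N \subseteq Z(G)$. A subgroup of the center is abelian, so $N$ is Abelian.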
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.856916606426239, "perplexity": 99.39634206899753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710473.38/warc/CC-MAIN-20221128034307-20221128064307-00777.warc.gz"}
https://spokutta.wordpress.com/category/economics/
# Sebastian Pokutta's Blog

Mathematics and related topics

## Long time no see

It has been quite a while since I wrote my last blog post; the last one that really counts (at least to me) was back in February. As pointed out at some point, it was not that I was lacking something to write about but more that I did not want to “touch” certain topics. That in turn made me wonder what a blog is good for when, in fact, one is still concerned about whether to write about certain topics. So I got the feeling that in the end, all this web 2.0 authenticity, all this being really open, direct, authentic, etc. is nothing else but a (self-)deception. On the other hand, I also did not feel like writing about yet another conference. I have to admit that I have been to some really crappy conferences lately and since I did not have anything positive to say I preferred to not say anything at all. There were a few notable exceptions, e.g., the MIP or IPCO.

Another thing that bothered me (and still does) is the dilution of real information with nonsense. In fact I have the feeling that the signal-to-noise ratio dropped considerably over the last two years and I didn't want to add to this further. This feeling of over-stimulation with web 2.0 noise seems to be a global trend (at least this is my perception). Many people gave up their blogs or have been somewhat neglecting them. Also, maintaining a blog with, say, weekly posts (apart from passing on a few links or announcements) takes up a lot of time. Time that arguably could be better invested into doing research and writing papers.

Despite those issues or concerns I do believe that the web with all its possibilities can really enhance the way we do science. As with all new technologies one has to find a modus operandi that provides positive utility. In principle the web can provide an information democracy/diversification; however, any “democratic endeavor” on the web has a huge enemy: the Matthew effect (commonly known as “more gains more”). This term, coined by R.K. Merton, derives its name from the following verse in the biblical Gospel of Matthew (see also wikipedia):

For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away. — Matthew 25:29, New Revised Standard Version

In principle it states that the “rich get richer” while the “poor get poorer”. If we think of the different social networks (facebook, myspace, friendster) it refers to the effect that the one with the largest user basis is going to attract more people than the one with a smaller one. In the next “round” this effect is then even more pronounced, until the smaller competitor virtually ceases to exist. In the real world this effect is often limited due to various kinds of “friction”. There might be geographic limitations, cultural barriers, etc., that wash out the advantage of the larger one so that the compounding nature of the effect is slowed down or non-existent (this holds true even in the highly globalized world we live in). That is the reason why dry cleaners, bakeries, and other forms of local business are not outperformed by globalized companies (ok, some are). In the context of the internet, however, there is often no inhibitor to the Matthew effect.
It often translates into some type of preferential attachment, although with the difference that the overall user base is limited, so that the gain of one party is the loss of another (preferential attachment processes are usually not zero-sum). So what does this mean in the context of blogs? Blog reading is to a certain extent zero-sum. There is some limited amount of time that we are willing to spend reading blogs. Those with a large user base will have more active discussions and move higher in the priority list for reading. In the end the smaller ones might only have a handful of readers, making it hard to justify the amount of time spent writing the posts. Downscaling the frequency of posts might even reinforce the effect as it might be perceived as inactivity. One way out of this dilemma could be some form of joining the smaller units into larger ones, i.e., either "digesting" several blogs into a larger one or, alternatively, "shared blogging". I haven't made up my mind yet what (if!) I am going to do about this. But I guess, in the end, some type of consolidation is inevitable.

Having bothered you with this abstruse mixture of folklore, economics, and internet, I actually intended to write about something else but somewhat related today: about deciding whether and when to dump a project. This problem is very much inspired by my previous experiences as a consultant and recent decisions about academic projects. More precisely, suppose that you have a project and you have an estimate for the overall time of the project. At some point you want to review the progress and, based on what you see at this point, you want to make a call whether or not you will abandon the project. The longer you wait with your review, the better the information you gain from the review. On the other hand, you potentially waste too much time and resources just to increase the confidence in your decision. In fact it might even make sense to not start a project at all.

Suppose that you have an a priori estimate for the probability of success of your project, say p. Further let r(t) denote our function of erring, i.e., r(0) = 1/2 and r(1) = 0, which means that at time t = 0 we do not have any information yet and so we can only guess, leading to guessing wrong with probability 50%, and at time t = 1 we have perfect information. Let t denote the point in time at which we review the project (as a fraction of the overall time, here assumed to be 1). We have four cases to consider (one might opt for a different payoff function; the following one resembles my particular choice; a small numerical sketch follows right after this list):

1. The project is going to be successful and at the point of reviewing we guessed right, i.e., we went through with it. In this case the benefit is s. This happens with probability (1-r(t)) p and the expected payoff for this scenario is (1-r(t)) p s. [Alternatively one could consider the benefit s - t, or something else.]

2. The project is going to be successful and at the point of reviewing we guessed wrong, i.e., we dropped the project. In this case the benefit is -(t + s), i.e., we lose our investment up to that point (here with unit value) and the overall benefit. The probability is r(t) p and the expected payoff -r(t) p (t+s).

3. The project is going to fail and we guessed right: benefit -t, i.e., the investment so far. Expected payoff -(1-r(t)) (1-p) t.

4. The project is going to fail and we guessed wrong, i.e., we went through with it: benefit -T, where T is some cost for this scenario. Expected payoff -r(t) (1-p) T.
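To make this concrete, here is a minimal numerical sketch of these four cases (my own illustration, not part of the original post); it grid-searches the review time t, using the logarithmic learning curve r_k(t) that is introduced just below and the parameters s = T = 1, p = 30% from the post:

```python
import numpy as np

def r(t, k=0.001):
    # error rate: r(0) = 1/2 (pure guessing), r(1) = 0 (perfect information)
    return (np.log(1 + k) - np.log(k + t)) / (2 * (np.log(1 + k) - np.log(k)))

def expected_payoff(t, p=0.3, s=1.0, T=1.0, k=0.001):
    # the four scenarios: kept a success, dropped a success,
    # dropped a failure, kept a failure
    return ((1 - r(t, k)) * p * s
            - r(t, k) * p * (t + s)
            - (1 - r(t, k)) * (1 - p) * t
            - r(t, k) * (1 - p) * T)

ts = np.linspace(1e-6, 1.0, 10_000)
payoffs = expected_payoff(ts)
print(f"optimal review time: {ts[np.argmax(payoffs)]:.2f}, "
      f"expected payoff: {payoffs.max():.3f}")
```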
All in all we have the following expected overall payoff as a function of t:

$\mathbb E(t) = -[(1-r(t))p (-s) + (1-r(t))(1-p) t + r(t)p(t+s) + r(t)(1-p) T]$

Next we have to define the function which models our increase in confidence. I opted for a function that gains information in a logarithmic fashion, i.e., in the beginning we gain confidence fast and then we have a tailing-off effect:

$r_k(t) := \frac{\log(1+k) - \log(k+t)}{2\,(\log(1+k) - \log(k))}$

(so that indeed $r_k(0) = 1/2$ and $r_k(1) = 0$). The parameter k can be understood as the rate of learning. For example for k = 0.01 it looks like this:

[Figure: error rate r_k(t) for k = 0.01]

Assuming s = 1 and T = 1, i.e., the payoffs are basically the invested time, and p = 30%, the expected payoff as a function of the time of review t looks like this (payoff: blue line, error rate: red line):

[Figure: expected payoff and error rate as functions of t]

The maximum payoff is reached for a review after roughly 20% of the estimated overall time. However it is still negative. This suggests that we do not learn fast enough to perform a well-informed decision. For example for k = 0.001, the situation looks different:

[Figure: expected payoff and error rate for k = 0.001]

The optimal point for a review is after 14% of the estimated project time. Having once estimated your rate of learning, one can also determine which projects one should not get involved with at all. For k = 0.001 this is the case when the probability of success p is less than roughly 27%. Although this model is very simple, it provides some nice qualitative (and partly quantitative) insights. For example that there are indeed projects that you should not even get involved with; this is somewhat clear from intuition but I was surprised that the probability of success of those is still quite high. Also, when over time your rate of learning increases (due to experience with other projects) you can get involved with more risky endeavors because your higher review confidence allows you to purge more effectively. For example when k goes down to, say, k = 0.00001 (which is not unrealistic, as in this case shortly after the beginning of the project you would only err with around 20%) you could get involved with projects that only have a probability of success of 19%.

And no complaints about the abrupt ending – I consumed my allocated blogging time.

Written by Sebastian September 6, 2010 at 5:28 am

## Information asymmetry and beating the market

Recently, I was wondering how much money you can effectively gain by investing, given a certain information advantage: Suppose that you want to invest some money, for the sake of simplicity say $10,000. Can you assume to be able to extract an average-exceeding return from the market given that you have an information advantage? If you believe in the strong form of the efficient market hypothesis then the answer is of course no. If not, then is it at least theoretically possible?

Let us consider a simplified setting. Suppose that we can invest (long/short) in a digital security (e.g., digital options) with payouts 0 and 1 (with a price of 0.5) and let us further suppose that it pays out 1 with a probability of 50%. Now assume that we have a certain edge over the market, i.e., we can predict the outcome slightly more accurately, say with $(50+\text{edge})\%$ accuracy. If we have a good estimate of our edge, we can use the Kelly Criterion to allocate our money. The Kelly Criterion, named after John L. Kelly, Jr., determines the proportional amount of money to bet from one's own bankroll so that the overall utility is maximized – this criterion is provably optimal.
It was presented by Kelly in his seminal 1956 paper "A New Interpretation of Information Rate". In this paper Kelly links the channel capacity of a private wire (inside information) to the maximum amount of return that one can extract from a bet. While this bound is a theoretical upper bound, it is rather strong in its negative interpretation: if you do not have any inside information (which includes being just smarter than everybody else or other intangible edges) you cannot extract any cash. The Kelly Criterion arises as an optimal money management strategy derived from the link to Shannon's Information Theory and in its simplest form it can be stated as:

$f = \frac{bp-q}{b},$

where $b:1$ are the odds, $p$ the probability to win, and $q = 1-p$ the probability to lose. So in our setting, where we basically consider fair coin tosses whose outcomes we can predict with $(50+\text{edge})\%$ accuracy, an edge of 1% or 100bps is considerable. Using the money management strategy from above (neglecting taxes, transaction fees, etc.), we obtain:

[Figure: 100bp edge with an initial bankroll of $10,000; y-axis is log10(bankroll), x-axis is #bets]

The five lines belong to the 5%, 25%, 50%, 75%, and 95% percentiles computed on the basis of 5,000 Monte-Carlo runs. So even the 5% percentile sees a ten-fold increase of the bankroll after roughly 4,100 bets, whereas the 95% percentile is already at a 100-fold increase. In terms of real deals the number of bets is already considerable though — after all, which private investor does 4,000 transactions?? Unfortunately, an edge of 100bp is very optimistic and for, say, a 50bp edge the situation already looks quite different: the 50% percentile barely reaches a ten-fold increase after 10,000 bets.

[Figure: 50bp edge]

And now let us come to the more realistic scenario when considering financial markets. Here an edge of 10bp is already considered significant. Given all the limitations as a private investor, i.e., being further down the information chain, sub-optimal market access, etc., assuming an edge of 10bp would still be rather optimistic. In this case, using an optimal allocation of funds, we have the following:

[Figure: 10bp edge]

Here the 25% percentile actually lost money and even the 50% percentile barely gained anything over 10,000 bets. In the long run a strictly positive growth occurs here as well, but for 10bp it takes extremely long: while you might be able to do 4,000 deals over the course of, say, 10 – 30 years, even after 100,000 bets the 5% percentile barely reaches a 29% gain (over 100,000 bets!!). Given transaction costs, taxes, fees, etc., in reality the situation looks worse (especially when considering more complicated financial structures). So it all comes down to the question of how large your edge is. Although extremely simplified here, a similar behavior can be shown for more complicated structures (using e.g. random walks).

Written by Sebastian January 17, 2010 at 8:04 pm
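The percentile fans above are easy to reproduce. A minimal Monte Carlo sketch (my own, with assumed parameters), which uses the fact that for a fixed number of bets the final Kelly bankroll depends only on the number of wins:

```python
import numpy as np

rng = np.random.default_rng(0)

def kelly_final_wealth(edge=0.005, n_bets=10_000, n_paths=5_000, bankroll=10_000.0):
    p = 0.5 + edge
    f = 2 * p - 1                                 # Kelly fraction (bp - q)/b with b = 1
    wins = rng.binomial(n_bets, p, size=n_paths)  # winning bets per simulated path
    losses = n_bets - wins
    # wealth = bankroll * (1 + f)^wins * (1 - f)^losses, computed in log space
    return np.exp(np.log(bankroll) + wins * np.log(1 + f) + losses * np.log(1 - f))

final = kelly_final_wealth()                      # 50bp edge
print(np.percentile(final, [5, 25, 50, 75, 95]))
```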
## Let us have a securitization party

with one comment

The concept of securitization is very versatile. From Wikipedia:

Securitization is a structured finance process that distributes risk by aggregating debt instruments in a pool, then issues new securities backed by the pool. The term "Securitisation" is derived from the fact that the form of financial instruments used to obtain funds from the investors are securities. As a portfolio risk backed by amortizing cash flows – and unlike general corporate debt – the credit quality of securitized debt is non-stationary due to changes in volatility that are time- and structure-dependent. If the transaction is properly structured and the pool performs as expected, the credit risk of all tranches of structured debt improves; if improperly structured, the affected tranches will experience dramatic credit deterioration and loss. All assets can be securitized so long as they are associated with cash flow. Hence, the securities which are the outcome of Securitisation processes are termed asset-backed securities (ABS). From this perspective, Securitisation could also be defined as a financial process leading to an issue of an ABS.

The cash flows of the initial assets are paid according to the seniority of the tranches in a waterfall-like structure: first the claims of the most senior tranche are satisfied and, if there are remaining cash flows, the claims of the following tranche are satisfied. This continues as long as there are cash flows left to cover claims:

Individual securities are often split into tranches, or categorized into varying degrees of subordination. Each tranche has a different level of credit protection or risk exposure than another: there is generally a senior ("A") class of securities and one or more junior subordinated ("B," "C," etc.) classes that function as protective layers for the "A" class. The senior classes have first claim on the cash that the SPV receives, and the more junior classes only start receiving repayment after the more senior classes have repaid. Because of the cascading effect between classes, this arrangement is often referred to as a cash flow waterfall. In the event that the underlying asset pool becomes insufficient to make payments on the securities (e.g. when loans default within a portfolio of loan claims), the loss is absorbed first by the subordinated tranches, and the upper-level tranches remain unaffected until the losses exceed the entire amount of the subordinated tranches. The senior securities are typically AAA rated, signifying a lower risk, while the lower-credit quality subordinated classes receive a lower credit rating, signifying a higher risk.

In more mathematical terms, securitization basically works as follows: take your favorite set of random variables (for the sake of simplicity say binary ones) and consider the joint distribution of these variables (pooling). In a next step determine percentiles of the joint distribution (of default, i.e. 0) that you sell off separately (tranching). The magic happens via the law of large numbers and the central limit theorem (and variants of it): although each variable can have a high probability of default, the probability that more than, say, x% of them default at the same time decreases (almost) exponentially. Thus the resulting x-percentile can have a low probability of default already for small x. That is the magic behind securitization, which is called credit enhancement.
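A quick back-of-the-envelope sketch of this credit enhancement effect (my own illustration, assuming n independent loans with an identical default probability pd): the probability that pool losses exceed the attachment point x of a tranche is a binomial tail, and it shrinks rapidly as x grows.

```python
from math import comb

def tranche_hit_prob(n=100, pd=0.05, attach=0.10):
    # the tranche attaching at x = attach is hit once more than x*n loans default
    k = int(attach * n)
    return sum(comb(n, j) * pd**j * (1 - pd)**(n - j) for j in range(k + 1, n + 1))

for x in (0.05, 0.10, 0.20):
    print(f"attachment {x:.0%}: hit probability {tranche_hit_prob(attach=x):.4f}")
```

With these assumed numbers, each loan defaults with probability 5%, yet the tranche attaching above 10% of pool losses is hit with a probability on the order of only 1% – and the computation hinges entirely on the independence assumption, which is exactly where correlation estimates come in.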
So given that this process of risk mitigation and tailoring of risks to the risk appetite of potential investors is rather versatile, why not apply the same concept to other cash flows that bear a certain risk of default and turn them into structured products 😉

(a) Rents: Landlords face the problem that the tenant's credit quality is basically unknown. Often, a statement about the tenant's income and liabilities should help to better estimate the risk of default. But this procedure can, at best, serve as an indicator. So why not use the same process to securitize the rent cash flows and sell the corresponding tranches back to the landlords. This would have several upsides. First of all, the landlord obtains a significantly more stable cash flow and, depending on the risk appetite, could even invest in the more subordinated tranches. This could potentially reduce rents, as the risk premium charged by the landlord due to his/her potentially risk-averse preference could be reduced to the risk-neutral amount (plus some spreads, e.g., operational and structuring costs). The probability of default could be estimated significantly more easily for the pooled rent cash flows as, due to diversification, it is well approximated by the expected value (maybe categorized into subclasses according to credit ratings). Of course, one would have to deal with problems such as adverse selection and the potentially hard task of estimating the correlation – which can have a severe impact on the value of the tranches (see my post here).

(b) Sport bets: Often these bets as random variables have a high probability of default (e.g., roughly 50% for a balanced win/loss bet). In order to reduce the risk through diversification a rather large amount of cash has to be invested to obtain a reasonable risk profile. Again, securitizing those cash flows could create securities with more tailored risk profiles that could be of interest to rather risk-averse people on the one hand and risk-affine gamblers on the other hand.

(c) …

That is the wonderful world of structured finance 😉

Written by Sebastian December 30, 2009 at 2:35 pm

## Heading off to AFBC 2009

with one comment

I am on my way to the 22nd Australasian Finance and Banking Conference 2009 in Sydney. So, what the hell is a mathematician doing at a finance conference? Well, basically mathematics and in particular optimization and operations research. I am thrilled to see the current developments in economics and finance that take computational aspects, which ultimately limit the amount of rationality that we can get, into account (I wrote about this before here, here, and here). In fact, I am convinced that these aspects will play an important role in the future, especially for structured products. After all, who is going to buy a structure whose value is impossible to compute? Not even to talk about other complications such as bad data or dangerous model assumptions (such as static volatilities and correlations, which are still used today!). Most valuation problems though can be cast as optimization problems, and especially the more complex structured products (e.g., mean variance optimizers) do explicitly ask for a solution to an optimization problem in order to be valuated. For the easier structures, Monte Carlo based approaches (or bi-/trinomial trees) are sufficient for pricing. As Arora, Barak, Brunnermeier, and Ge show in their latest paper, for more complex structures (e.g., CDOs) these approaches might fall short of capturing the real value of the structures, due to, e.g., deliberate tampering. I am not going to talk about aspects of computational resources though: I will be talking about my paper "Optimal Centralization of Liquidity Management", which is joint work with Christian Schmaltz from the Frankfurt School of Finance and Management. The problem that we are considering is basically a facility location problem: in a large banking network, where and how do you manage liquidity?
In a centralized liquidity hub, or rather in smaller liquidity centers spread all over the network? Being short on liquidity is a very expensive matter: either one has to borrow money via the interbank market (which is usually dried up or at least tight in tougher economic conditions) or one has to borrow via the central bank. If neither is available, the bank goes into a liquidity default. The important aspect here is that the decision on the location and the amount of liquidity produced is driven to a large extent by the liquidity demand volatility. In this sense a liquidity center turns into an option on cheap liquidity and, in fact, the value of a liquidity center can actually be captured in an option framework. The value of the liquidity center is the price of the exact demand information – the more volatility we have, the higher this price will be and the more we save when we have this information in advance. The derived liquidity center location problem implicitly computes the prices of the options, which arise as marginal costs in the optimization model. Here are the slides:

[Scribd embed of the slides]

Written by Sebastian December 13, 2009 at 12:17 pm

## The impact of estimation errors on CDO pricing

Another interesting, nicely written paper about valuating and pricing CDOs is "The Economics of Structured Finance" by Coval, Jurek, and Stafford, which just appeared in the Journal of Economic Perspectives. It nicely complements the paper of Arora, Barak, Brunnermeier, and Ge titled "Computational Complexity and Information Asymmetry in Financial Products" (see also here). The authors argue that even small estimation errors in correlation and probability of default (of the underlying loans) can have a devastating effect on the overall performance of a tranche. Whereas the senior tranches remain quite stable in the presence of estimation errors, the overall rating of the junior and mezzanine tranches can be greatly affected. Intuitively this is clear, as the junior and the mezzanine tranches act as a cushion for the senior tranches (and in turn the junior tranches are a protection of the mezzanine tranches). What is not so clear at first, though, is that this effect is so pronounced, i.e., the smallest estimation errors lead to a rapid decline in credit quality of these tranches. In fact, what happens here is that the junior and mezzanine tranches pay the price for the credit enhancement of the senior tranches. And the stability of the latter with respect to estimation errors comes at the expense of highly sensitive junior and mezzanine tranches. This effect becomes even more severe when considering CDO^2, where the loans of the junior and mezzanine tranches are repackaged again. These structures possess a very high sensitivity to the slightest variations or estimation errors in the probability of default or correlation. In both cases, slight imprecisions in the estimation can have severe impacts. But also, considering it the other way around, slight changes in the probability of default or the correlation due to changed economic conditions can have a devastating effect on the value of the lower-prioritized tranches. So if you are interested in CDOs, credit enhancement, and structured finance you should give it a look.

Written by Sebastian December 6, 2009 at 4:32 pm

## Arms race in quantitative trading or not?
Rick Bookstaber recently argued that the arms race in high frequency trading, a form of quantitative trading where effectively time = money ;-), results in a net drain of social welfare:

A second reason is that high frequency trading is embroiled in an arms race. And arms races are negative sum games. The arms in this case are not tanks and jets, but computer chips and throughput. But like any arms race, the result is a cycle of spending which leaves everyone in the same relative position, only poorer. Put another way, like any arms race, what is happening with high frequency trading is a net drain on social welfare.

It is all about milliseconds and being a tiny little bit faster:

In terms of chips, I gave a talk at an Intel conference a few years ago, when they were launching their newest chip, dubbed the Tigerton. The various financial firms who had to be as fast as everyone else then shelled out an aggregate of hundreds of millions of dollars to upgrade, so that they could now execute trades in thirty milliseconds rather than forty milliseconds – or whatever, I really can't remember, except that it is too fast for anyone to care were it not that other people were also doing it. And now there is a new chip, code named Nehalem. So another hundred million dollars all around, and latency will be dropped a few milliseconds more. In terms of throughput and latency, the standard tricks are to get your servers as close to the data source as possible, use really big lines, and break data into little bite-sized packets. I was speaking at Reuters last week, and they mentioned to me that they were breaking their news flows into optimized sixty byte packets for their arms race-oriented clients, because that was the fastest way through the network. (Anything smaller gets queued by some network algorithms, so sixty bytes seems to be the magic number.)

Although high-frequency trading is basically about being fast, and thus time is the critical resource, in quantitative trading in general it is all about computational resources and having the best/smartest ideas and strategies. The best strategy is worthless if you lack the computational resources to crunch the numbers and, vice versa, if you do have the computational power but no smart strategies this does not get you anywhere either. Jasmina Hasanhodzic, Andrew W. Lo, and Emanuele Viola argue in their latest paper "A Computational View of Market Efficiency" that efficiency in markets has to be considered with respect to the level of computational sophistication, i.e., a market can (appear to) be efficient for those participants which use only a low level of computational resources, whereas it can be inefficient for those participants that invest a higher amount of computational resources.

In this paper we suggest that a reinterpretation of market efficiency in computational terms might be the key to reconciling this theory with the possibility of making profits based on past prices alone. We believe that it does not make sense to talk about market efficiency without taking into account that market participants have bounded resources. In other words, instead of saying that a market is "efficient" we should say, borrowing from theoretical computer science, that a market is efficient with respect to resources S, e.g., time, memory, etc., if no strategy using resources S can generate a substantial profit. Similarly, we cannot say that investors act optimally given all the available information, but rather they act optimally within their resources.
This allows for markets to be efficient for some investors, but not for others; for example, a computationally powerful hedge fund may extract profits from a market which looks very efficient from the point of view of a day-trader who has less resources at his disposal—arguably the status quo.

More precisely, it is even argued that the high-complexity traders gain from the low-complexity traders (of course, within the studied, simplified market model – but nonetheless!!):

The next claim shows a pattern where a high-memory strategy can make a bigger profit after a low-memory strategy has acted and modified the market pattern. This profit is bigger than the profit that is obtainable by a high-memory strategy without the low-memory strategy acting beforehand, and even bigger than the profit obtainable after another high-memory strategy acts beforehand. Thus it is precisely the presence of low-memory strategies that creates opportunities for high-memory strategies which were not present initially. This example provides explanation for the real-life status quo which sees a growing quantitative sophistication among asset managers. Informally, the proof of the claim exhibits a market with a certain "symmetry." For high-memory strategies, the best choice is to maintain the symmetry by profiting in multiple points. But a low-memory strategy will be unable to do so. Its optimal choice will be to "break the symmetry," creating new profit opportunities for high-memory strategies.

So although pure high-frequency trading is mostly (almost only?) about speed and the relevance of smart strategies might be smaller there, in general quantitative trading it seems (again, in the considered model) that the combination of strategy and high computational resources might generate a (longer-term) edge. This edge cannot necessarily be compensated with increased computational resources only, as you still need to have access to the strategy. The considered model treats memory as the main computational/limiting resource. One might argue that it implicitly reflects the sophistication of the strategy along with the real computational resources, as limited memory might not be able to hold a complex strategy. On the other hand a lot of memory is pointless without a strategy using it. So both might be considered to be intrinsically linked.

An easy example illustrating this point is maybe the following. Consider the sequence "MDMD" and suppose that you can only store, say, these 4 letters. A 4-letter strategy might predict something like "MD" for the next two letters. If those letters, though, represent the initials of the weekdays (in German: Montag, Dienstag, Mittwoch, Donnerstag, Freitag, Samstag, Sonntag), the next 3 letters will be "FSS". It is impossible to predict this sequence based solely on the last 4 letters. The situation changes if we can store up to 7 letters, "FSSMDMD". Then a prediction is possible. One point of the paper is now that the high-complexity traders might fuel their profits by the shortsightedness of the low-complexity traders. And thus an arms race might be a consequence (to exploit this asymmetry on the one hand and to protect against exploitation on the other). To some extent this is exactly what we are already seeing when traders with "sophisticated" models that, for example, are capable of accounting for volatility skew arbitrage out less sophisticated traders.
On the other hand, it does not help to use a sophisticated model (i.e., more computational resources) if one doesn't know how to use it; e.g., a Libor market model without an appropriate calibration (non-trivial) is worthless.

Written by Sebastian September 1, 2009 at 8:38 pm

## Rama Cont on contagious default and systemic risk

A few days ago (May 14th) Rama Cont from Columbia gave a very interesting talk at the Frankfurt School of Finance & Management about contagious default and systemic risk in financial networks. From the abstract:

The ongoing financial crisis has simultaneously underlined the importance of systemic risk and the lack of adequate indicators for measuring and monitoring it. After describing some important structural features of banking networks, we propose an indicator for measuring the systemic impact of the failure of a financial institution – the Systemic Risk Index – which combines a traditional factor-based modeling of financial risks with network contagion effects resulting from mutual exposures. Simulation studies on networks with realistic structures – in particular using data from the Brazilian interbank network – underline the importance of network structure in assessing financial stability and point to the importance of leverage and liquidity ratios of financial institutions as tools for monitoring and controlling systemic risk. In particular, we investigate the role played by credit default swap contracts and their impact on financial stability and systemic risk. Our study leads to some policy implications for a more efficient monitoring of systemic risk and financial stability.

He presented pretty remarkable results of a simulation study he conducted together with two of his students. The main goal was to introduce a "systemic risk index" (SRI) that quantifies the impact of an institution's default on the financial system through direct connection (i.e. counterparty credit risk) or indirect connection (i.e. seller of CDS). Based on that he compared the effect of risk-mitigating techniques (i.e. limits on leverage, capital requirements) on the SRI. The simulation was based on random graphs constructed via preferential attachment, i.e., new nodes in the system tend to connect to the better connected ones – the Matthew principle. The constructed graphs were structurally similar to the structure observed in real-world networks in Brazil and Austria. Running the risk of oversimplifying, the key insights were:

1. The main message: It is not about being "too big to fail" but about being "too interconnected to fail". In the presented study, size was completely uncorrelated to the potential impact given default. That is especially interesting given that in the current discussion about the financial crisis, one prominent argumentation demands the split-up of large financial institutions. Assuming that the results are realistic, this would provide only minimal systemic risk mitigation but might increase the administrative overhead to monitor all these smaller units. Another consequence that might be even a bit more critical is the implied moral hazard. Whereas gaining a certain size in order to be "too big to fail" is a rather hard task, being "too interconnected to fail" is rather simple: given the large impact (described below) of only a few CDSs, it might suffice to buy and sell a lot of CDSs (or other structures) back-to-back (i.e.
you are long and short the same position and thus net flat) in order to insure yourself against failure by weaving or implanting yourself deep into the financial network (see also 2. below).

2. Based on the real-world networks that were studied, only a few hubs in the network constitute the largest proportion of potential damage. These are the ones that are highly connected. Thus monitoring focused on these particular nodes, which could be identified using the proposed SRI, might already lead to a considerable mitigation of systemic risk.

3. It does make a difference if you have a limit on leverage compared to capital requirements only. The impact of the worst nodes in the network dropped considerably in the presence of limits on leverage (as employed, for example, in Canada).

4. Comparing the situation with and without CDSs, the presence of only a few CDSs can change the dynamics of the default propagation dramatically by introducing "shortcuts" to the network – effects similar to the small-world phenomenon.

5. In the model at hand, it didn't make a difference if CDS contracts were speculative or hedging instruments. Note that this was under the assumption that the overall number of contracts in the simulation remained constant and only the proportions were altered – otherwise, under the mainstream assumption that more than 50% of all CDSs are speculative, removing those would reduce the number of contracts present by more than 50% and thus considerably reduce the risk through "shortcuts".

Written by Sebastian May 16, 2009 at 11:39 pm
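As a footnote, the network topology driving these results is easy to experiment with. A minimal sketch (mine, not from the talk) that generates a Barabási-Albert preferential-attachment graph and measures how concentrated the connectivity is:

```python
import networkx as nx

G = nx.barabasi_albert_graph(n=500, m=2, seed=42)  # each new node attaches to 2 existing nodes
degrees = sorted((d for _, d in G.degree()), reverse=True)
print(f"share of all links held by the 10 best-connected nodes: "
      f"{sum(degrees[:10]) / sum(degrees):.0%}")
```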
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.583085834980011, "perplexity": 1139.6867461029983}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254253.31/warc/CC-MAIN-20190519061520-20190519083520-00486.warc.gz"}
http://www.physicsforums.com/showthread.php?t=528585
# Associative property of convolution

by sainistar Tags: associative, convolution, property

P: 4 Hi There, The associative property of convolution is proved in the literature for an infinite interval. I want to prove the associative property of convolution for a finite interval. I have explained the problem in the attached pdf file. Any help is appreciated. Regards, Aman Attached Files convproblem.pdf (90.5 KB, 15 views)

Mentor P: 15,962 Integral (10) is obviously wrong. Your two integrals have $\theta$ in their bounds. But $\theta$ is an integration variable!! This can't be correct. Why do you obtain something wrong here? Because after equation (3) they applied Fubini and switched the two integrals, and THEN they did the substitution. You must do something similar. Apply Fubini after (9). But Fubini will in this case not be simply switching the integral signs...

P: 4 Thanks micromass. I was looking for the Fubini theorem for the case when the bounds are the integration variable. I did not find anything. Can you please let me know any source I can read? It would also be helpful if you can suggest how I can apply Fubini after (9). Regards
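For reference, the step micromass describes amounts to integrating over the triangle $\{0 \le \theta \le t \le T\}$ and swapping the order of integration there (a generic sketch, not using the exact notation of the attached pdf):

$\int_0^T \int_0^t f(\theta)\, g(t-\theta)\, d\theta\, dt = \int_0^T \int_{\theta}^{T} f(\theta)\, g(t-\theta)\, dt\, d\theta$

After Fubini, the inner bounds pick up the other variable, which is exactly why the switch is not "simply switching the integral signs" and why no $\theta$ may remain in the outer bounds.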
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.971211850643158, "perplexity": 945.5474922435328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021767149/warc/CC-MAIN-20140305121607-00055-ip-10-183-142-35.ec2.internal.warc.gz"}
http://blog.stermon.com/articles/2017/02/17/speaker-at-fsharp-exchange-2017
## Blog: Ramón Soto Mathiesen

I am pleased and honored to announce that I will be a speaker at this year's F# eXchange 2017 conference.

### F# Community

It's been almost 3 years since I became acquainted with Skills Matter and the F# Community. I had spoken previously with Phillip Trelford because I co-founded the F#unctional Copenhageners Meetup Group, which is like a sisterhood to the F#unctional Londoners Meetup Group. In another case, due to work with Microsoft Dynamics CRM, I had come across Ross McKinlay (I still owe you some beers). Both were present at the Meetup. When I arrived at the venue, I bumped into Don Syme at the door. I was really not expecting that. Rob Lyndon's talk started and I was amazed how this punk-rocker-looking guy with a Mohawk absolutely mastered coding GPU kernels type-safely in F#. It was pretty amazing. After the Meetup, we all went over and had a beer (or many) at a nearby bar. This was great, as it gave me the chance to meet many of the people who did incredible work for the Community. I especially noticed Tomas Petricek, due to the amazing tools that he has provided and which I have used professionally: F# Formatting, F# Data, among others. I think it was the night that The Don agreed to give a talk at MF#K, as he would be in Copenhagen due to some ICFP-related work. His talk is still one of the most attended, and I would argue that it was the cornerstone of the foundation of our Meetup group, now that we are about to reach 700 members and the next upcoming talk we are hosting will have 2 × F# MVPs on stage.

Time has passed, and MF#K has hosted a variety of functional programming language talks, most of them F# related, and I have personally given around a dozen F# talks and some 1-day courses. I recently held my very first international talk abroad in Sweden (Malmö) to restart SweNUG, the Swedish .NET User Group, and my very last talk might turn into an official F# project.

Note: Oskar, sorry for not being able to work on the project at the moment.

### F# eXchange 2017 Conference

That's why I write that I am honored to give a little back to the F# Community, which has given me so much. In addition to greeting old acquaintances, I am really pleased and looking forward to the strong program that has been put together:

• I had the pleasure to see this year's Conference Keynote, Evelina, at NDC London last year. She is an incredible facilitator who captures the attention of everyone attending the talk. Besides her unquestionable knowledge, she is also very hands-on, a key component that many forget when giving a talk. It also helps that she chooses topics which seem interesting to most of us.

• Scott Wlaschin is without any doubt one of my favorite people in the F# community. He runs one of the best learning sources, if not the best, and he is an incredible communicator who is able to teach very complex theory with analogies that most understand. Just one example: Railway Oriented Programming.

• I'm really looking forward to meeting Michael Newton. I have read his blog posts on the subject of Type Providers time and time again and learned a lot from them, to the degree that they helped immensely to create one professionally: DAXIF. Now, I can finally put a face to the blog.

• Last but not least, my fellow paisano Alfonso Garcia-Caro, who in collaboration with ncave has done a fantastic job on Fable. Really looking forward to his talk and maybe some discussion with regard to the Motherland over a few beers.
To all the speakers I do not mention: it's because I do not know your work, but I am really looking forward to your talks!!! The only con I have noticed so far with regard to the conference is that there are some cases where two talks I would really like to see are scheduled at the same time. But I guess that's what happens when a conference starts getting bigger. Kudos, Skills Matter.

Here is the title and abstract of my talk. I hope you will enjoy it and hopefully nobody will be disappointed:

#### Puritas, A journey of a thousand miles towards side-effect free code

Puritas, from Latin, means linguistic correctness from a lexical point of view. In this talk, you will be exploring and walking down the bumpy road of trying to bring side-effect free code to F#. The journey started, in Ramon's case, a few years ago while working for an IT consultancy company, where Ramon made Delegate.Sandbox, a library providing a Computation Expression named SandboxBuilder, sandbox { return 42 }, which ensures that values returned from the computation are I/O side-effect safe. The library is built on top of .NET's Partially Trusted Code sandboxes. The path of exploration somehow stalled. Now, with new ideas under the hood and great tooling provided by Microsoft and the amazing F# Community, the journey continues towards bringing this missed feature to an already brilliant and loved programming language that is used by thousands in the magnitude of 4. Hopefully one day, the reserved keyword pure (The Holy Grail of F# keywords) will be claimed, and we will be able to use it with ease as we do with other keywords such as seq, async, query, lazy, and so on.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30219799280166626, "perplexity": 2473.43020440019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806720.32/warc/CC-MAIN-20171123031247-20171123051247-00163.warc.gz"}
https://www.globosurfer.com/best-gore-tex-jackets/
OUR TOP PICK: Marmot Essential Gore Tex Jacket
EDITOR'S CHOICE: Marmot Minimalist Men's Gore Tex Jacket
BEST VALUE: GORE WEAR C3 GTX Active Gore Tex Jacket

Whether you choose to seek out backcountry adventures or like to stay closer to the city, bad weather can hit anywhere. For your comfort and safety, it is important that you shield yourself from any impending wet weather, and a Gore Tex shell is one of the most popular outdoor gear products that incorporate the fabric into their design. If you are unfamiliar with Gore Tex material or want more information, we have selected the eight best Gore Tex jackets and included an in-depth buying guide to help you get started in your search. With our selection and guide, we hope that you find a functional Gore Tex pro jacket that will be versatile for all your outings and rugged outdoor adventures.

### How To Choose A Gore Tex Jacket – Buying Guide

#### Material

Gore Tex is a material that has a unique combination of contrasting abilities: water repellency and breathability. It is made of stretched polytetrafluoroethylene (PTFE), which is commonly known as Teflon. The Gore Tex fabric can both repel liquid water and allow water vapor to pass through its membrane. This unique functionality makes Gore Tex an excellent fabric for all-weather use, and it has become mainstream in a variety of outdoor products, especially outerwear. But outerwear is not made entirely of Gore Tex. Most jackets will combine a synthetic material, like polyester, with Gore Tex. But a Gore Tex pro jacket will use a three-layer system with other synthetic materials, like nylon, to make it suitable for better protection in extremely harsh or cold conditions. If you are looking for a Gore Tex ski jacket, look for one that uses the Gore Tex pro system.

#### Durability

Gore Tex is extremely durable because of its water-repellent nature. The fabric is highly resistant to damage from stains and water, which helps it last longer. When combined with other durable synthetic materials, like nylon and polyester, Gore Tex jackets have the durability to last for years with the proper care. While Gore Tex is considered a durable material, it still doesn't hurt to look over Gore Tex jacket reviews to determine the durability of a design before you make a purchase. As we said, jackets are not made entirely of Gore Tex, so Gore Tex jacket reviews can help you determine if the other materials used in the jacket's construction are durable and worth the investment.

#### Breathability

Breathability is the selling point of Gore Tex, alongside its water-repellent nature. What makes Gore Tex breathable is the fabric's ability to allow water vapor to escape. This means that sweat built up under the jacket won't soak your clothes because it has the chance to evaporate and dissipate. The breathability of Gore Tex is what makes the material so comfortable and sought after. Better breathability will actually keep you warmer by ensuring you stay dry, and it helps regulate your body temperature to minimize the presence of sweat in the first place. With the breathability of Gore Tex fabric, you'll be able to stay comfortable during the most rigorous exercise and stay safe from the harsh outside elements.

#### Use

One of the best things about Gore Tex is that it is truly versatile for year-round use. A Gore Tex shell can be used in all seasons, even winter.
A Gore Tex hiking jacket may double as a Gore Tex ski jacket if you combine it with thermal underwear and mid-layers. In the summer, lightweight Gore Tex jackets will keep you safe from the rain and, at nighttime, from dropping temperatures. A top rated Gore Tex rain jacket truly can be used in any environment, from urban to backcountry. But if you plan on going to an extreme location, you should consider purchasing a Gore Tex pro jacket for the three layers of protection.

#### Style

For many name-brand companies with excellent reputations, their Gore Tex shells have a versatile style that can be used as a casual or technical jacket. But there are some specialized styles on the market too. Some styles are best used as Gore Tex hiking jackets because of the relaxed fit and oversized hood, which will keep your head dry and spare you from having to carry an umbrella. Other lightweight Gore Tex jackets have a design that has been specialized for cyclists because of the lower back and backside pocket. But no matter what style you choose, we imagine that you'll love your Gore Tex shell so much, you'll use it for a variety of occasions.

#### Fit

The fit of lightweight Gore Tex jackets will depend on the activity they are designed to enhance. A cyclist's Gore Tex shell may sit tighter to the body to reduce friction, whereas a women's Gore Tex rain jacket may be more relaxed for comfortable movement on the trail or city streets. It is important that you consider the fit of a specific design because you want to find the best Gore Tex jacket that will suit your needs, and the fit can change the functionality of the jacket in relation to your activity.

#### Features

The best Gore Tex rain jacket will not only use the high-quality material but also incorporate a host of features to make the jacket functional and reliable when used in the real world. Smart features that can help keep you dry include adjustable hems and hoods or watertight zippers. A hood is another feature that can be very important for a Gore Tex pro jacket if you need protection from the harsh elements. However, most top rated Gore Tex rain jackets will have a built-in hood. The only jackets that tend to have only collars are designed for cyclists. Any other special features should be listed in the product specifications, or you might even find them discussed in Gore Tex jacket reviews.

### FAQs

Q: Why choose Gore Tex?

A: The main reasons why Gore Tex is so popular are the breathability, versatility, and comfort the material provides to the user. The breathability is a big factor because it keeps users comfortable in a variety of outdoor settings and ensures they always stay dry. The versatile features of the best Gore Tex jacket mean that you can use your Gore Tex shell for any occasion – urban or backcountry – because it is breathable, windproof, and waterproof. Finally, Gore Tex keeps you comfortable. With adjustable features and the moisture-wicking ability, you are less likely to overheat and feel hot. Additionally, lightweight Gore Tex jackets reduce the bulk and minimize the friction. With less bulk, you'll feel less restricted and freer in your movements.

Q: Who invented Gore Tex?

A: Gore Tex goes back to Wilbert L. Gore and his son Robert W. (Bob) Gore, a father and son team; Bob Gore made the key discovery in 1969.
Bob Gore found that when you rapidly stretch PTFE, the base material of Gore Tex, you are left with a microporous material with a host of functional characteristics, like low water absorption and water repellency. In 1970, the first patents were filed, and they called the material ePTFE, or expanded polytetrafluoroethylene. By 1976, the first commercial orders were placed for Gore Tex fabric. From there, more innovation and development has made Gore Tex the celebrity material it is today.

Q: How to properly clean and care for a Gore Tex jacket?

A: Even the best Gore Tex jacket or the best Gore Tex rain jacket needs to be maintained with a certain level of care. While most lightweight Gore Tex jackets can be machine washed, it is important that you use a minimal amount of liquid detergent. Do not use any other type of detergent. The Gore Tex shell should be rinsed twice and placed on a low spin cycle. You can gently tumble dry Gore Tex in the dryer for one cycle without causing damage. A top rated Gore Tex rain jacket should also include cleaning instructions provided by the brand, which will be specific to the design. You should always follow these instructions to minimize damage to your Gore Tex pro jacket.

### Globo Surf Overview

With the best Gore Tex jacket, you'll get optimum protection from any weather condition. The material will shield you from the wet and cold elements, which means you can focus on the activity and not the weather. With our guide, we aim to pair you with the best Gore Tex rain jacket that will suit all your outdoor adventures.

Do you own one of the Gore Tex jackets that made it onto our list? Let us know how much you rely on your Gore Tex shell to protect you from the rain in the comments section below.

My name is David Hamburg. I am an avid water sports fan who enjoys paddle boarding, surfing, scuba diving, and kite surfing. Anything with a board, or any chance I can get in the water, I love! I am such a big fan I decided to start this website to review all my favorite products and some others. Hope you enjoy!
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9773644804954529, "perplexity": 4824.760702494513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038059348.9/warc/CC-MAIN-20210410210053-20210411000053-00098.warc.gz"}
http://mathhelpforum.com/calculus/133012-integration-problem.html
# Math Help - Integration problem

1. ## Integration problem

Evaluate the integral $\int_0^{\frac{\pi}{2}} 60\cos^5{x}\,dx$. I have no idea how to solve it.

2. Integration by parts will produce a reduction formula.

3. Originally Posted by Jgirl689 Evaluate the integral $\int_0^{\frac{\pi}{2}} 60\cos^5{x}\,dx$. [...]

$\int{\cos^5{x}\,dx} = \int{\cos^4{x}\cos{x}\,dx}$ $= \int{(\cos^2{x})^2\cos{x}\,dx}$ $= \int{(1 - \sin^2{x})^2\cos{x}\,dx}$ $= \int{(1 - 2\sin^2{x} + \sin^4{x})\cos{x}\,dx}$.

Now make the substitution $u = \sin{x}$ so that $\frac{du}{dx} = \cos{x}$.

4. ## many ways to solve this

Originally Posted by Jgirl689 Evaluate the integral $\int_0^{\frac{\pi}{2}} 60\cos^5{x}\,dx$. [...]

I find several ways to solve this: 1) you can expand cos^5 x in terms of multiples of x using Euler's formula, considering y = cos x + i sin x and 1/y = cos x - i sin x; 2) you can use a reduction formula as TKHunny said; 3) the way Prove It did it; 4) and lastly you can perhaps use a property of definite integrals: $\int_0^{\frac{\pi}{2}} f(x)\,dx = \int_0^{\frac{\pi}{2}} f(\tfrac{\pi}{2}-x)\,dx$. I have not tested any of these methods, but I think they are all very feasible.

5. Originally Posted by Pulock2009 I find several ways to solve this: [...]

Another alternative is to use trigonometric identities to convert it into functions such as $\sin{nx}$ or $\cos{nx}$.
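For completeness (this step is not carried out in the thread), finishing with Prove It's substitution $u = \sin{x}$, $du = \cos{x}\,dx$, which maps $[0, \tfrac{\pi}{2}]$ to $[0,1]$:

$60\int_0^{\frac{\pi}{2}}{\cos^5{x}\,dx} = 60\int_0^1{(1 - 2u^2 + u^4)\,du} = 60\left[u - \frac{2u^3}{3} + \frac{u^5}{5}\right]_0^1 = 60\cdot\frac{8}{15} = 32$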
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9789780974388123, "perplexity": 2668.8513096552215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824757.62/warc/CC-MAIN-20160723071024-00282-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.zbmath.org/authors/?q=ai%3Aparvaneh.vahid
# zbMATH — the first resource for mathematics

## Parvaneh, Vahid

Author ID: parvaneh.vahid Published as: Parvaneh, V.; Parvaneh, Vahid External Links: ORCID Documents Indexed: 62 Publications since 2011

#### Serials

12 Fixed Point Theory and Applications 6 Journal of Nonlinear Science and Applications 5 Journal of Inequalities and Applications 4 Cogent Mathematics & Statistics 3 Abstract and Applied Analysis 3 Applied Mathematical Sciences (Ruse) 3 Journal of Function Spaces 2 Filomat 2 Acta Mathematica Scientia. Series B. (English Edition) 2 International Journal of Mathematical Analysis (Ruse) 2 Mathematical Sciences 1 Ukrainian Mathematical Journal 1 Russian Mathematics 1 Computational and Applied Mathematics 1 Georgian Mathematical Journal 1 Vietnam Journal of Mathematics 1 Nonlinear Analysis. Modelling and Control 1 Nonlinear Functional Analysis and Applications 1 Journal of Applied Mathematics 1 International Journal of Pure and Applied Mathematics 1 Journal of Fixed Point Theory and Applications 1 Symmetry 1 Axioms 1 Gulf Journal of Mathematics 1 Sahand Communications in Mathematical Analysis

#### Fields

49 General topology (54-XX) 25 Operator theory (47-XX) 3 Functional analysis (46-XX) 2 Integral equations (45-XX) 1 Algebraic topology (55-XX)

#### Citations contained in zbMATH

44 publications have been cited 374 times in 224 documents.

Common fixed points of almost generalized $$(\psi,\phi)_s$$-contractive mappings in ordered $$b$$-metric spaces. Zbl 1295.54080 Roshan, Jamal Rezaei; Parvaneh, Vahid; Sedghi, Shaban; Shobkolaei, Nabi; Shatanawi, Wasfi 2013 Fixed points of cyclic weakly ($$\psi, \phi ,L, A, B$$)-contractive mappings in ordered $$b$$-metric spaces with applications. Zbl 06334850 Hussain, Nawab; Parvaneh, Vahid; Roshan, Jamal Rezaei; Kadelburg, Zoran 2013 Existence of tripled coincidence points in ordered $$b$$-metric spaces and an application to a system of integral equations. Zbl 1425.54030 Parvaneh, Vahid; Roshan, Jamal Rezaei; Radenović, Stojan 2013 Some common fixed point results in ordered partial b-metric spaces. Zbl 06313654 2013 Some coincidence point results in ordered $$b$$-metric spaces and applications in a system of integral equations. Zbl 1354.45010 Roshan, J. R.; Parvaneh, V.; Altun, I. 2014 Some fixed point theorems for generalized contractive mappings in complete metric spaces. Zbl 1345.54049 Hussain, Nawab; Parvaneh, Vahid; Samet, Bessem; Vetro, Calogero 2015 Common fixed point results for weak contractive mappings in ordered b-dislocated metric spaces with applications. Zbl 06313730 Hussain, Nawab; Roshan, Jamal Rezaei; Parvaneh, Vahid; Abbas, Mujahid 2013 Common fixed point theorems for weakly isotone increasing mappings in ordered b-metric spaces. Zbl 1392.54035 Roshan, Jamal Rezaei; Parvaneh, Vahid; Kadelburg, Zoran 2014 Fixed point theorems for weakly $$T$$-Chatterjea and weakly $$T$$-Kannan contractions in $$b$$-metric spaces. Zbl 1310.54061 2014 New fixed point results in $$b$$-rectangular metric spaces. Zbl 1420.54089 Roshan, Jamal Rezaei; Parvaneh, Vahid; Kadelburg, Zoran; Hussain, Nawab 2016 Generalized Wardowski type fixed point theorems via $$\alpha$$-admissible $$FG$$-contractions in $$b$$-metric spaces. Zbl 1374.54056 Parvaneh, Vahid; Hussain, Nawab; Kadelburg, Zoran 2016 Periodic points of $$T$$-Ciric generalized contraction mappings in ordered metric spaces.
Zbl 1256.54065 Abbas, Mujahid; Parvaneh, Vahid; Razani, Abdolrahman 2012 Various Suzuki type theorems in $$b$$-metric spaces. Zbl 1330.54059 Latif, A.; Parvaneh, V.; Salimi, P.; Al-Mazrooei, A. E. 2015 Coupled coincidence point results for ($$\psi$$,$$\varphi$$)-weakly contractive mappings in partially ordered $$G_b$$-metric spaces. Zbl 06324313 Mustafa, Zead; Roshan, Jamal; Parvaneh, Vahid 2013 Existence of a tripled coincidence point in ordered $$G_b$$-metric spaces and applications to a system of integral equations. Zbl 1420.54080 Mustafa, Zead; Roshan, Jamal Rezaei; Parvaneh, Vahid 2013 On generalized weakly $$G$$-contractive mappings in partially ordered $$G$$-metric spaces. Zbl 1253.54047 Razani, A.; Parvaneh, V. 2012 Some coincidence point results for generalized ($${\psi,\varphi}$$)-weakly contractive mappings in ordered $$G$$-metric spaces. Zbl 1423.54092 Mustafa, Zead; Parvaneh, Vahid; Abbas, Mujahid; Roshan, Jamal Rezaei 2013 A common fixed point for generalized $$(\psi, \varphi)_{f,g}$$-weak contractions. Zbl 1255.54025 Razani, A.; Parvaneh, V.; Abbas, M. 2012 Coupled coincidence point results for $$(\psi, \alpha, \beta)$$-weak contractions in partially ordered metric spaces. Zbl 1251.54056 Razani, A.; Parvaneh, V. 2012 Some fixed point theorems for $$(\alpha,\theta,k)$$-contractive multi-valued mappings with some applications. Zbl 06583956 Pansuwan, Adoon; Sintunavarat, Wutiphol; Parvaneh, Vahid; Cho, Yeol Je 2015 Fixed point results for various contractions in parametric and fuzzy $$b$$-metric spaces. Zbl 1328.54040 Hussain, Nawab; Salimi, Peyman; Parvaneh, Vahid 2015 Fixed point results via $$\alpha$$-admissible mappings and cyclic contractive mappings in partial $$b$$-metric spaces. Zbl 06416404 Latif, Abdul; Roshan, Jamal Rezaei; Parvaneh, Vahid; Hussain, Nawab 2014 F-HR-type contractions on $$(\alpha,\eta)$$-complete rectangular $$b$$-metric spaces. Zbl 1412.47131 2017 New fixed point theorems for $$\alpha$$-$$H$$ $$\Theta$$-contractions in ordered metric spaces. Zbl 1444.54032 Parvaneh, Vahid; Golkarmanesh, Farhan; Hussain, Nawab; Salimi, Peyman 2016 A unification of $$G$$-metric, partial metric, and $$b$$-metric spaces. Zbl 07021881 Hussain, Nawab; Rezaei Roshan, Jamal; Parvaneh, Vahid; Latif, Abdul 2014 $$b_2$$-metric spaces and some fixed point theorems. Zbl 1310.54060 2014 Some fixed point theorems for weakly $$T$$-Chatterjea and weakly $$T$$-Kannan-contractive mappings in complete metric spaces. Zbl 1270.54052 Razani, A.; Parvaneh, V. 2013 Some fixed point theorems for $$G$$-rational Geraghty contractive mappings in ordered generalized $$b$$-metric spaces. Zbl 1351.54027 Latif, Abdul; Kadelburg, Zoran; Parvaneh, Vahid; Roshan, Jamal Rezaei 2015 A common fixed point theorem for three maps in discontinuous $$G_b$$-metric spaces. Zbl 1324.54087 Roshan, Jamal Rezaei; Shobkolaei, Nabiollah; Sedghi, Shaban; Parvaneh, Vahid; Radenović, Stojan 2014 On best proximity point results for some type of mappings. Zbl 1439.54030 2020 Extended rectangular $$b$$-metric spaces and some fixed point theorems for contractive mappings. Zbl 1425.54029 2019 Fixed point results for weakly $$\alpha$$-admissible pairs. Zbl 06750004 Ćirić, Ljubomir; Parvaneh, Vahid; Hussain, Nawab 2016 Some coincidence point results for generalized $$(\psi,\varphi)$$-weakly contractions in ordered $$b$$-metric spaces. Zbl 1347.54106 Roshan, Jamal R.; Parvaneh, Vahid; Radenović, Stojan; Rajović, Miloje 2015 On generalized weakly $$GP$$-contractive mappings in ordered $$GP$$-metric spaces. 
Zbl 1389.54102 Parvaneh, Vahid; Roshan, Jamal Rezaei; Kadelburg, Zoran 2013 Common fixed points of six mappings in partially ordered $$G$$-metric spaces. Zbl 1277.54039 Parvaneh, Vahid; Razani, Abdolrahman; Roshan, Jamal Rezaei 2013 Coupled fixed point results in complete partial metric spaces. Zbl 1254.54049 Alaeidizaji, H.; Parvaneh, V. 2012 On fixed point results for modified JS-contractions with applications. Zbl 1432.54073 Parvaneh, Vahid; Hussain, Nawab; Mukheimer, Aiman; Aydi, Hassen 2019 Fixed point results for generalized rational Geraghty contractive mappings in extended $$b$$-metric spaces. Zbl 1426.54036 2018 PPF dependent fixed point results for hybrid rational and Suzuki-Edelstein type contractions in Banach spaces. Zbl 06749795 Parvaneh, V.; Hosseinzadeh, H.; Hussain, N.; Ćirić, Lj. 2016 Coupled and tripled coincidence point results with application to Fredholm integral equations. Zbl 07022625 Kutbi, Marwan Amin; Hussain, Nawab; Rezaei Roshan, Jamal; Parvaneh, Vahid 2014 Fixed point results for generalized mappings. Zbl 06780642 Golkarmanesh, Farhan; Al-Mazrooei, Abdullah E.; Parvaneh, Vahid; Latif, Abdul 2014 Fixed point results for $$GP_{(\Lambda,\Theta)}$$-contractive mappings. Zbl 06331432 2014 Coupled fixed point theorems for weakly $$\varphi$$-contractive mixed monotone mappings in ordered $$b$$-metric spaces. Zbl 1308.54034 Roshan, Jamal Rezaei; Parvaneh, Vahid; Kadelburg, Zoran 2013 Some common fixed point theorems in complete metric spaces. Zbl 1247.54056 Parvaneh, Vahid 2012 On best proximity point results for some type of mappings. Zbl 1439.54030 2020 Extended rectangular $$b$$-metric spaces and some fixed point theorems for contractive mappings. Zbl 1425.54029 2019 On fixed point results for modified JS-contractions with applications. Zbl 1432.54073 Parvaneh, Vahid; Hussain, Nawab; Mukheimer, Aiman; Aydi, Hassen 2019 Fixed point results for generalized rational Geraghty contractive mappings in extended $$b$$-metric spaces. Zbl 1426.54036 2018 F-HR-type contractions on $$(\alpha,\eta)$$-complete rectangular $$b$$-metric spaces. Zbl 1412.47131 2017 New fixed point results in $$b$$-rectangular metric spaces. Zbl 1420.54089 Roshan, Jamal Rezaei; Parvaneh, Vahid; Kadelburg, Zoran; Hussain, Nawab 2016 Generalized Wardowski type fixed point theorems via $$\alpha$$-admissible $$FG$$-contractions in $$b$$-metric spaces. Zbl 1374.54056 Parvaneh, Vahid; Hussain, Nawab; Kadelburg, Zoran 2016 New fixed point theorems for $$\alpha$$-$$H$$ $$\Theta$$-contractions in ordered metric spaces. Zbl 1444.54032 Parvaneh, Vahid; Golkarmanesh, Farhan; Hussain, Nawab; Salimi, Peyman 2016 Fixed point results for weakly $$\alpha$$-admissible pairs. Zbl 06750004 Ćirić, Ljubomir; Parvaneh, Vahid; Hussain, Nawab 2016 PPF dependent fixed point results for hybrid rational and Suzuki-Edelstein type contractions in Banach spaces. Zbl 06749795 Parvaneh, V.; Hosseinzadeh, H.; Hussain, N.; Ćirić, Lj. 2016 Some fixed point theorems for generalized contractive mappings in complete metric spaces. Zbl 1345.54049 Hussain, Nawab; Parvaneh, Vahid; Samet, Bessem; Vetro, Calogero 2015 Various Suzuki type theorems in $$b$$-metric spaces. Zbl 1330.54059 Latif, A.; Parvaneh, V.; Salimi, P.; Al-Mazrooei, A. E. 2015 Some fixed point theorems for $$(\alpha,\theta,k)$$-contractive multi-valued mappings with some applications. Zbl 06583956 Pansuwan, Adoon; Sintunavarat, Wutiphol; Parvaneh, Vahid; Cho, Yeol Je 2015 Fixed point results for various contractions in parametric and fuzzy $$b$$-metric spaces. 
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9313747882843018, "perplexity": 21143.07974259394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517559.41/warc/CC-MAIN-20210119011203-20210119041203-00184.warc.gz"}
https://infoscience.epfl.ch/record/234234?ln=en
## Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network

The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.

Published in: eLife, 6, e28295
Year: 2017
Publisher: Cambridge, eLife Sciences Publications Ltd
ISSN: 2050-084X
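To make the idea concrete, here is a deliberately simplified, rate-based sketch of the FOLLOW scheme in Python. This is my own illustration, not the paper's spiking implementation: the network, names and constants are placeholders, and the spiking dynamics are replaced by abstract presynaptic activities.

```python
# Toy, rate-based sketch of the FOLLOW idea (assumptions: no spikes, a single
# learned readout standing in for the feedforward/recurrent weights).
import numpy as np

rng = np.random.default_rng(0)
N, D = 200, 2                       # number of neurons, output dimensions
W = np.zeros((D, N))                # learned decoding weights
E = rng.standard_normal((N, D))     # fixed random error-feedback weights
k, eta = 10.0, 1e-3                 # feedback gain, learning rate

def follow_step(r, x_target):
    """One update given presynaptic activities r (length N) and target x_target (length D)."""
    x_hat = W @ r                   # current output estimate
    err = x_target - x_hat          # output error
    # local rule: weight change = (error projected onto postsynaptic unit) x (presynaptic activity)
    W[:, :] += eta * np.outer(err, r)
    # error fed back into the neurons through the fixed random connections
    feedback = k * (E @ err)
    return x_hat, feedback
```

The key property the paper emphasizes is that each weight update only uses quantities available at that synapse (presynaptic activity and the projected error), which is what makes the rule "local".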
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8233417272567749, "perplexity": 1499.4700207212497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256381.7/warc/CC-MAIN-20190521122503-20190521144503-00110.warc.gz"}
http://rosario1952zv.soup.io/post/505193892/Not-known-Facts-About-Globat-Review
Unless you see the e-mail and tell them not to do it, you can get socked with a $50 fee every other month or so. My impression is that they use their existing customer base to subsidize the ultra-low prices they offer to new customers.

In addition, the lack of VPS and dedicated hosting plans has caused inconvenience for many buyers. Customers who want further growth of their websites would have to choose another web hosting provider, since Globat only offers shared web hosting plans, and only two types of shared hosting packages are available now.

Established in 2001, Globat has been focused on web hosting for over a decade. After 13 years of development, the company serves hundreds of thousands of customers and has abundant experience in web hosting.

Anonymous (customer for under three months): GLOBAT hosting started off great, and now my site uptime is so unreliable that I fear sending potential clients to it. I've had several people comment that they have been unable to see my site; some will hit REFRESH a number of times and the site will magically appear. These are all at random times during daytime hours. I don't know what GLOBAT is running on their servers, but they certainly shouldn't be doing it this regularly, and during the day besides. Recently, one of the times that I checked it, I saw all of my text but no images, which are all in my pictures folder. I submitted a ticket online and the reply was: "hit refresh, it looks fine to me." Indeed, because by the time you received the ticket and opened my site, it had magically reappeared. I will be requesting a refund for the months that I have paid for and cannot use. Additionally, I will not recommend GLOBAT to anyone, and certainly not to my clientele. I'm puzzled by the high rankings that GLOBAT has received on other web host review sites. I don't know how it could just be my website. Many thanks.

If you see both good reviews and bad reviews 50/50, and can't decide whether to stay or switch, I would recommend switching to another hosting plan. There are plenty of other good hosting plans; choose a trustworthy one, not the cheapest one. It can save you a lot of maintenance time.

I have no issues when it comes to this hosting company. I've tried other companies and none measure up to the professionalism of Globat.

Efficient but cheaper web hosting solutions are what Globat has been giving buyers since the time of its establishment. Coupled with feature-enriched packages like the Pro and Commerce offers, this company delivers a level of usability and support that guarantees 100% satisfaction for individuals and companies of any size and helps them create a strong online presence. Globat provides an unbelievable number of standard features for rock-bottom prices. Unlimited bandwidth each month and unlimited space, all for $4.44; it's enough to make you think they are insane.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5462349057197571, "perplexity": 926.6942067786018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828448.76/warc/CC-MAIN-20181217065106-20181217091106-00437.warc.gz"}
http://math2ever.blogspot.com/2013/05/what-is-exponential-function.html
The exponential function is just another function in mathematics. To date, most of the time we have been dealing with so-called power functions, e.g.

f(x) = x^2

where x is the base and the number 2 is the power (index).

However, in an exponential function we are dealing with

f(x) = 2^x

where 2 is the base and the number x is the power (index).

source: Wikipedia

The big difference is that the variable is now the power, rather than the base.

source: Wikipedia

The most commonly encountered exponential-function base is the transcendental number e, which is equal to approximately 2.71828 (see also natural logarithm). Thus, the above expression becomes:

f(x) = e^x

Value of e
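A quick numerical illustration (my own addition): e arises as the limit of (1 + 1/n)^n as n grows, and the exponential 2^x eventually dwarfs the power function x^2.

```python
# e as the limit of (1 + 1/n)**n, and power function vs. exponential function.
for n in (10, 1_000, 100_000):
    print(n, (1 + 1/n) ** n)        # 2.5937..., 2.7169..., 2.7182... -> approaches e

for x in range(1, 11):
    print(x, x ** 2, 2 ** x)        # from x = 5 on, 2**x is larger and pulls away fast
```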
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9374247789382935, "perplexity": 1149.1587455476379}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809419.96/warc/CC-MAIN-20171125051513-20171125071513-00323.warc.gz"}
https://r-forge.r-project.org/forum/forum.php?thread_id=31878&forum_id=4377&group_id=1337
# Forum: support

RE: size of math font in compiled html
By: Markus Loecher on 2016-06-22 15:56 [forum:43309]

Thanks, this is very helpful!!
Markus

RE: size of math font in compiled html
By: Achim Zeileis on 2016-06-20 14:30 [forum:43295]

(i) The default TeX-to-HTML converter "ttm" does not support \boxed{} but "pandoc" does. So using exams2html("tmp.Rnw", converter = "pandoc") gives what you want. But, of course, "pandoc" may or may not display other things in a slightly different way that you may or may not like. Hence, personally I prefer solutions that are robust across converters but might require a little bit more coding. In this case, you could set up the box manually, for example.

(ii) For me, the solution with or without \boxed{} uses the same font size. And as the display is browser-specific anyway, I wouldn't worry about this too much. It's simple enough to press Ctrl-+ when viewing the HTML file... However, if you want a somewhat more principled solution you can also do exams2html("tmp.Rnw", converter = "pandoc", mathjax = TRUE). Then MathJax allows you to zoom formulas (upon hovering or clicking etc.) or you can set a general zoom factor for all math content. My personal aesthetic preference, however, is that MathJax generally displays formulas too large, whereas MathML in Firefox just looks fine. But to a certain degree this is a matter of personal taste...

size of math font in compiled html
By: Markus Loecher on 2016-06-20 12:27 [forum:43294]
Attachment: tmp.Rnw (5 downloads)

Dear authors,

When I use exams2pdf to compile an Rnw file (attached) that contains the following LaTeX expression

$\displaystyle \boxed{P(x=a;N,A,n,a) = \frac{{A \choose a} \cdot {N-A \choose n-a} }{{N \choose n}}}$

the output looks perfect. But with exams2html I run into the following difficulties:

(i) The LaTeX \boxed directive seems unrecognized.
(ii) When I remove the \boxed command, the displayed fraction seems too small, and I have been unsuccessfully trying to enlarge the math fonts.

Is there a way in HTML output to (i) frame equations with a box and/or (ii) manipulate the size of math fonts?

Thanks!!
Markus
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8769413232803345, "perplexity": 3788.6729670013247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125949489.63/warc/CC-MAIN-20180427060505-20180427080505-00478.warc.gz"}
https://training.galaxyproject.org/training-material/topics/epigenetics/tutorials/tal1-binding-site-identification/tutorial.html
# Identification of the binding sites of the T-cell acute lymphocytic leukemia protein 1 (TAL1)

### Overview

**Questions**

- How is raw ChIP-seq data processed and analyzed?
- What are the binding sites of TAL1?
- Which genes are regulated by TAL1?

**Objectives**

- Inspect read quality with FastQC
- Perform read trimming with Trimmomatic
- Align trimmed reads with BWA
- Assess quality and reproducibility of experiments
- Identify TAL1 binding sites with MACS2
- Determine unique/common TAL1 binding sites from G1E and Megakaryocytes
- Identify unique/common TAL1 peaks occupying gene promoters
- Visually inspect TAL1 peaks with Trackster

**Time estimation:** 3 hours
**Last modification:** Nov 27, 2020

# Introduction

This tutorial uses ChIP-seq datasets from a study published by Wu et al. 2014. The goal of this study was to investigate "the dynamics of occupancy and the role in gene regulation of the transcription factor TAL1, a critical regulator of hematopoiesis, at multiple stages of hematopoietic differentiation." To this end, ChIP-seq experiments were performed in multiple mouse cell types including G1E - a GATA-null immortalized cell line derived from targeted disruption of GATA-1 in mouse embryonic stem cells - and megakaryocytes.

This dataset (GEO Accession: GSE51338) consists of biological replicate TAL1 ChIP-seq and input control experiments. Input control experiments are used to identify and remove sampling bias, for example open/accessible chromatin or GC bias.

Because of the long processing time for the large original files, we have downsampled the original raw data files to include only reads that align to chromosome 19 and a subset of interesting genomic loci identified by Wu et al. 2014.

Table 1: Metadata for ChIP-seq experiments in this tutorial. SE: single-end.

| Cellular state | Datatype | ChIP Ab | Replicate | SRA Accession | Library type | Read length | Stranded? | Data size (MB) |
|---|---|---|---|---|---|---|---|---|
| G1E | ChIP-seq | input | 1 | SRR507859 | SE | 36 | No | 35.8 |
| G1E | ChIP-seq | input | 2 | SRR507860 | SE | 55 | No | 427.1 |
| G1E | ChIP-seq | TAL1 | 1 | SRR492444 | SE | 36 | No | 32.3 |
| G1E | ChIP-seq | TAL1 | 2 | SRR492445 | SE | 41 | No | 62.7 |
| Megakaryocyte | ChIP-seq | input | 1 | SRR492453 | SE | 41 | No | 57.2 |
| Megakaryocyte | ChIP-seq | input | 2 | SRR492454 | SE | 55 | No | 403.8 |
| Megakaryocyte | ChIP-seq | TAL1 | 1 | SRR549006 | SE | 55 | No | 340.3 |
| Megakaryocyte | ChIP-seq | TAL1 | 2 | SRR549007 | SE | 48 | No | 356.9 |

### Agenda

In this tutorial, we will deal with:

1. Quality control
2. Aligning reads to a reference genome
3. Assessing correlation between samples
4. Assessing IP strength
5. Determining TAL1 binding sites
6. Inspection of peaks and aligned data
7. Identifying unique and common TAL1 peaks between stages
8. Generating Input normalized coverage files
9. Plotting the signal on the peaks between samples

# Quality control

As for any NGS data analysis, ChIP-seq data must be quality controlled before being aligned to a reference genome. For more detailed information on NGS quality control, check out the tutorial here.

### Hands-on: Performing quality control

1. Create and name a new history for this tutorial.

   **Tip: Creating a new history.** Click the new history icon at the top of the history panel. If the new history icon is missing:
   1. Click on the gear icon (History options) on the top of the history panel
   2. Select the option Create New from the menu

2. Import the ChIP-seq raw data (*.fastqsanger) from Zenodo.
   - Open the Galaxy Upload Manager (upload icon on the top-right of the tool panel)
   - Select Paste/Fetch Data
   - Paste the link into the text field
   - Press Start
   - Close the window

   By default, Galaxy uses the URL as the name, so rename the files with a more useful name.

3. Examine the data in a FASTQ file by clicking on the eye icon.

### Questions

1. What are four key features of a FASTQ file?
2. What is the main difference between a FASTQ and a FASTA file?
### Solution

1. A FASTQ file contains a sequence identifier and additional information, the raw sequence, information about the sequence again with optional information, and quality information about the sequence.
2. A FASTA file contains only the description of the sequence and the sequence itself. A FASTA file does not contain any quality information.

4. FastQC Tool: toolshed.g2.bx.psu.edu/repos/devteam/fastqc/fastqc/0.72+galaxy1 : Run FastQC on each FASTQ file to assess the quality of the raw data. An explanation of the results can be found on the FastQC web page.

### Questions

1. What does the y-axis represent in Figure 3?
2. Why is the quality score decreasing across the length of the reads?

### Solution

1. The phred score. This score gives the probability of an incorrect base; e.g. a score of 20 means that there is a 1% probability that the base is incorrect. See here for more information.
2. This is an unsolved technical issue of the sequencing machines. The longer the sequences are, the more likely errors become. See here for more information.

It is often necessary to trim a sequenced read to remove bases sequenced with high uncertainty (i.e. low-quality bases). In addition, artificial adaptor sequences used in library preparation protocols need to be removed before attempting to align the reads to a reference genome.

### Hands-on: Trimming and clipping reads

1. Trimmomatic Tool: toolshed.g2.bx.psu.edu/repos/pjbriggs/trimmomatic/trimmomatic/0.38.0 : Run Trimmomatic to trim low-quality reads.
   - "Single-end or paired-end reads?": Single-end
   - "Input FASTQ file": Select all of the FASTQ files
   - "Perform initial ILLUMINACLIP?": No
   - "Select Trimmomatic operation to perform": Sliding window trimming (SLIDINGWINDOW)
   - "Number of bases to average across": 4
   - "Average quality required": 20

   **Tip: Changing datatypes.** If the FASTQ files cannot be selected, check whether their format is FASTQ with Sanger-scaled quality values (fastqsanger). If not, you can edit the data type by clicking on the pencil symbol next to a file in the history, clicking the "Datatype" tab, and choosing fastqsanger as the "New Type".

2. FastQC Tool: toolshed.g2.bx.psu.edu/repos/devteam/fastqc/fastqc/0.72+galaxy1 : Rerun FastQC on each trimmed/clipped FASTQ file to determine whether low-quality and adaptor sequences were correctly removed.
   - "Short read data from your current history": The output of Trimmomatic.

### Questions

1. How did the range of read lengths change after trimming/clipping?

### Solution

1. Before trimming, all the reads were the same length, which reflected the number of rounds of nucleotide incorporation in the sequencing experiment. After trimming, read lengths span a range of values reflecting different lengths of the actual DNA fragments captured during the ChIP experiment.

# Aligning reads to a reference genome

To determine where DNA fragments originated from in the genome, the sequenced reads must be aligned to a reference genome. This is equivalent to solving a jigsaw puzzle, but unfortunately, not all pieces are unique. In principle, you could do a BLAST analysis to figure out where the sequenced pieces fit best in the known genome. Aligning millions of short sequences this way, however, can take a couple of weeks. Nowadays, there are many read alignment programs for sequenced DNA, BWA being one of them. You can read more about the BWA algorithm and tool here.
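Outside Galaxy, the same alignment can be reproduced on the command line. The sketch below is my own illustration, not part of the tutorial's workflow; it assumes local installs of bwa and samtools, a local mm10 FASTA, and hypothetical file names.

```python
# Rough command-line equivalent of the Galaxy BWA step (file names are illustrative).
import subprocess

ref = "mm10.fa"                             # hypothetical local reference FASTA
fastq = "G1E_TAL1_R1.trimmed.fastqsanger"   # hypothetical Trimmomatic output

subprocess.run(["bwa", "index", ref], check=True)      # build the FM-index once
with open("G1E_TAL1_R1.sam", "w") as sam:
    subprocess.run(["bwa", "mem", ref, fastq], stdout=sam, check=True)
# sort and index the BAM so downstream tools (e.g. samtools idxstats) can use it
subprocess.run(["samtools", "sort", "-o", "G1E_TAL1_R1.bam", "G1E_TAL1_R1.sam"], check=True)
subprocess.run(["samtools", "index", "G1E_TAL1_R1.bam"], check=True)
```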
### Hands-on: Aligning reads to a reference genome

1. BWA Tool: toolshed.g2.bx.psu.edu/repos/devteam/bwa/bwa/0.7.17.4 : Run BWA to map the trimmed/clipped reads to the mouse genome.
   - "Will you select a reference genome...": Use a built-in genome index
   - "Using reference genome": Mouse (mus musculus) mm10
   - "Select input type": Single fastq
   - "Select fastq dataset": Select all of the trimmed FASTQ files
2. Rename files to reflect the origin and contents.

   **Tip: Renaming a dataset.**
   - Click on the pencil icon for the dataset to edit its attributes
   - In the central panel, change the Name field
   - Click the Save button

3. Inspect a file produced by BWA.

### Questions

1. What datatype is the BWA output file?
2. How many reads were mapped from each file?

### Solution

1. The output is a BAM file.
2. Check the number of lines for each file in your history. This gives you a rough estimate.

4. Samtools idxstats Tool: toolshed.g2.bx.psu.edu/repos/devteam/samtools_idxstats/samtools_idxstats/2.0.3 : Run idxstats to get statistics of the BWA alignments.
   - "BAM file": Select all of the mapped BAM files
5. Examine the output (eye icon).

### Questions

1. What does each column in the output represent (Tip: look at the Tool Form)?
2. How many reads were mapped to chromosome 19 in each experiment?
3. If the mouse reference genome has 21 chromosomes (chr1-chr19, chrX, chrY), what are the other reference sequences (e.g. chr1_GL456210_random)?

### Solution

1. Column 1: Reference sequence identifier. Column 2: Reference sequence length. Column 3: Number of mapped reads. Column 4: Number of placed but unmapped reads (typically unmapped partners of mapped reads).
2. This information can be seen in column 3; e.g. for Megakaryocyte_Tal1_R1, 2143352 reads are mapped. Your answer might be slightly different if different references or tool versions are used.
3. Some of these other reference sequences are parts of chromosomes, but it is unclear where exactly; e.g. chr1_GL456210_random is a part of chromosome 1. There are entries like chrUn that are not associated with a chromosome, but it is believed that they are part of the genome.

# Assessing correlation between samples

To assess the similarity between the replicate sequencing datasets, it is a common technique to calculate the correlation of read counts for the different samples. We expect that the replicate samples will cluster more closely to each other than to other samples. We will be using tools from the package deepTools for the next few steps. More information on deepTools can be found here.
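Conceptually, this analysis boils down to correlating per-bin read counts between samples. A toy NumPy illustration of why replicates correlate highly (my own addition; the real tools below read BAM files and handle many details):

```python
import numpy as np

rng = np.random.default_rng(42)
signal = rng.poisson(lam=20, size=1000)      # shared underlying coverage over 1000 bins
rep1 = rng.poisson(signal)                    # two replicates sampled from the same signal
rep2 = rng.poisson(signal)
unrelated = rng.poisson(lam=20, size=1000)    # a sample with no shared structure

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

print(pearson(rep1, rep2))       # high: replicates cluster together
print(pearson(rep1, unrelated))  # near zero: unrelated samples do not
```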
### Hands-on: Assessing correlation between samples

multiBamSummary splits the reference genome into bins of equal size and counts the number of reads in each bin from each sample. We set a small bin size here because we are working with a subset of reads that align to only a fraction of the genome.

1. multiBamSummary Tool: toolshed.g2.bx.psu.edu/repos/bgruening/deeptools_multi_bam_summary/deeptools_multi_bam_summary/3.3.2.0.0 : Run multiBamSummary to get read coverage of the alignments.
   - "Sample order matters": No
   - "Bam files": Select all of the aligned BAM files
   - "Bin size in bp": 1000
2. plotCorrelation Tool: toolshed.g2.bx.psu.edu/repos/bgruening/deeptools_plot_correlation/deeptools_plot_correlation/3.3.2.0.0 : Run plotCorrelation to visualize the results.
   - "Matrix file from the multiBamSummary tool": Select the multiBamSummary output file
   - "Correlation method": Pearson
   - "Plotting type": Heatmap
   - "Plot the correlation value": Yes
   - "Skip zeros": Yes
   - "Remove regions with very large counts": Yes

   Feel free to play around with these parameter settings!

### Questions

1. Why do we want to skip zeros in plotCorrelation?
2. What happens if the Spearman's correlation method is used instead of the Pearson method?
3. What does the output of making a Scatterplot instead of a Heatmap look like?

### Solution

1. Large areas of zeros would lead to a correlation of these areas. The information we would get out of this computation would be meaningless.
2. The clusters are different; e.g. Megakaryocyte_input_R2 and G1E_input_R2 are clustered together. More information about Pearson and Spearman's correlation.
3. Try making a Scatterplot to see for yourself!

Additional information on how to interpret plotCorrelation plots can be found here.

# Assessing IP strength

We will now evaluate the quality of the immunoprecipitation step in the ChIP-seq protocol.

### Hands-on: Assessing IP strength

1. plotFingerprint Tool: toolshed.g2.bx.psu.edu/repos/bgruening/deeptools_plot_fingerprint/deeptools_plot_fingerprint/3.3.2.0.0 : Run plotFingerprint to assess ChIP signal strength.
   - "Bam files": Select all of the aligned BAM files for the G1E cell type
   - "Show advanced options": yes
   - "Bin size in bases": 100
   - "Skip zeros": Yes
2. Rerun plotFingerprint for the Megakaryocyte cell type.
3. View the output images.

### Questions

1. What does this graph in Figure 10 represent?
2. How do (or should) input datasets differ from IP datasets?
3. What do you think about the quality of the IP for this experiment?
4. How does the quality of the IP for megakaryocytes compare to G1E cells?

### Solution

1. It shows us how good the ChIP signal is compared to the control signal. An ideal control (input) with a perfectly uniform distribution of reads along the genome (i.e. without enrichment in open chromatin etc.) and infinite sequencing coverage should generate a straight diagonal line. A very specific and strong ChIP enrichment will be indicated by a prominent and steep rise of the cumulative sum towards the highest rank.
2. We expect that the control (input) signal is more or less uniformly distributed over the genome (e.g. like the green line in the image above). The IP dataset should look more like the red line, but it would be better if the values for IP started to increase at around 0.8 on the x-axis.
3. The enrichment did not work as it should. Compare the blue line with the red one! For your future experiments: you can never have enough replicates!
4. The quality of megakaryocytes is better than G1E.

Additional information on how to interpret plotFingerprint plots can be found here.

# Determining TAL1 binding sites

Now that BWA has aligned the reads to the genome, we will use the tool MACS2 to identify regions of TAL1 occupancy, which are called "peaks". Peaks are determined from pileups of sequenced reads across the genome that correspond to where TAL1 binds. In this section we will:

1. Identify regions of TAL1 occupancy (peaks).
2. Generate bedGraph files for visual inspection of the data on a genome browser.
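For reference, the command-line core of the MACS2 step in the following hands-on box looks roughly like this (my own sketch, assuming a local MACS2 install; the file names are illustrative):

```python
import subprocess

subprocess.run([
    "macs2", "callpeak",
    "-t", "G1E_TAL1_R1.bam", "G1E_TAL1_R2.bam",    # pooled treatment replicates
    "-c", "G1E_input_R1.bam", "G1E_input_R2.bam",  # pooled input controls
    "-f", "BAM",                                   # single-end BAM input
    "-g", "mm",                                    # effective genome size: mouse
    "-n", "G1E_TAL1",                              # name prefix for output files
    "--bdg",                                       # also write bedGraph pileups
], check=True)
```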
### Hands-on: Determining TAL1 binding sites

1. MACS2 callpeak Tool: toolshed.g2.bx.psu.edu/repos/iuc/macs2/macs2_callpeak/2.1.1.20160309.6 : Run MACS2 callpeak with the aligned read files from the previous step as Treatment (TAL1) and Control (input).
   - "Are you pooling Treatment Files?": Yes
   - "ChIP-Seq Treatment File": Select all of the replicate ChIP-Seq treatment aligned BAM files for one cell type
   - "Do you have a Control File?": Yes
   - "Are you pooling Control Files?": Yes
   - "ChIP-Seq Control File": Select replicate ChIP-Seq control aligned BAM files for the same cell type
   - "Format of Input Files": Single-end BAM
   - "Effective genome size": M. musculus
   - "Additional Outputs": Select Peaks as tabular file (compatible with MultiQC), Peak summits, Scores in bedGraph files (--bdg)
2. Rename files to reflect the origin and contents.
3. Repeat for the other cell type.

# Inspection of peaks and aligned data

It is critical to visualize NGS data on a genome browser after alignment to evaluate the "goodness" of the analysis. Evaluation criteria will differ for various NGS experiment types, but for ChIP-seq data we want to ensure that reads from a Treatment/IP sample are enriched at peaks and do not localize non-specifically (like the control/input condition). MACS2 generates bedGraph and BED files that we will use to visualize read abundance and peaks, respectively, at regions MACS2 determines to be TAL1 peaks, using Galaxy's in-house genome browser, Trackster.

## Inspection of peaks and aligned data with Trackster

We will import a gene annotation file so we can visualize aligned reads and TAL1 peaks relative to gene features and positions.

### Hands-on: Inspecting peaks and aligned data with Trackster

1. Import the gene annotations file from Zenodo.
2. Click "Visualize" on the page header and select "Create Visualization".
3. Set up Trackster:
   - Select Trackster
   - "Select a dataset to visualize": Select the imported gene annotation file (Tip: if this file doesn't appear as an option, go back to the history and edit the attribute Database/Build to be mm10)
   - Click "Create Visualization"
4. Configure the visualization:
   - Select "View in new visualization"
   - "Browser name": Enter a name for your visualization
   - "Reference genome build (dbkey)": mm10
   - Click "Create"
   - Click the "Add tracks" (plus sign) button at the top right
   - Using the search bar, search for and add the following tracks from the MACS2 callpeak output to your view:
     - G1E Treatment bedGraph
     - G1E Control bedGraph
     - G1E narrow peaks
     - Megakaryocytes Treatment bedGraph
     - Megakaryocytes Control bedGraph
     - Megakaryocytes narrow peaks
   - Rename the tracks, if desired
   - Play around with the track configurations, for example the color or the display mode
5. Navigate to the Runx1 locus (chr16:92501466-92926074) to inspect the aligned reads and TAL1 peaks.

### Questions

1. What do you see at the Runx1 locus in Trackster?

### Solution

1. Directly upstream of the shorter Runx1 gene models is a cluster of 3 TAL1 peaks that only appear in the G1E cell type, but not in megakaryocytes. Further upstream, there are some shared TAL1 peaks in both cell types.

## Inspection of peaks and aligned data with IGV

We show here an alternative to Trackster, IGV.

### Hands-on: Inspecting peaks with IGV

1. Open IGV on your local computer.
2. Click on each narrow peaks result file from the MACS2 computations and select "display with IGV" then "local Mouse mm10".

# Identifying unique and common TAL1 peaks between stages

We have processed ChIP-seq data from two stages of hematopoiesis and have lists of TAL1-occupied sites (peaks) in both cellular states. The next analysis step is to identify TAL1 peaks that are shared between the two cellular states and peaks that are specific to either cellular state.

### Hands-on: Identifying unique and common TAL1 peaks between states

1. bedtools Intersect intervals Tool: toolshed.g2.bx.psu.edu/repos/iuc/bedtools/bedtools_intersectbed/2.29.0 : Run bedtools Intersect intervals to find peaks that exist both in G1E and megakaryocytes.
   - "File A to intersect with B": Select the TAL1 G1E narrow peaks BED file
   - "File B to intersect with A": Select the TAL1 Megakaryocytes narrow peaks BED file
   - Running this tool with the default settings will return overlapping peaks of both files.
2. bedtools Intersect intervals Tool: toolshed.g2.bx.psu.edu/repos/iuc/bedtools/bedtools_intersectbed/2.29.0 : Run bedtools Intersect intervals to find peaks that exist only in G1E.
   - "File A to intersect with B": Select the TAL1 G1E narrow peaks BED file
   - "File B to intersect with A": Select the TAL1 Megakaryocytes narrow peaks BED file
   - "Report only those alignments that **do not** overlap the BED file": Yes
3. bedtools Intersect intervals Tool: toolshed.g2.bx.psu.edu/repos/iuc/bedtools/bedtools_intersectbed/2.29.0 : Run bedtools Intersect intervals to find peaks that exist only in megakaryocytes.
   - "File A to intersect with B": Select the TAL1 Megakaryocytes narrow peaks BED file
   - "File B to intersect with A": Select the TAL1 G1E narrow peaks BED file
   - "Report only those alignments that **do not** overlap the BED file": Yes
4. Rename files to reflect the origin and contents.

### Questions

1. How many TAL1 peaks are common to both G1E cells and megakaryocytes?
2. How many are unique to G1E cells?
3. How many are unique to megakaryocytes?

### Solution

1. 1 peak (answer may vary depending on references and tool versions used)
2. 407 peaks (answer may vary depending on references and tool versions used)
3. 139 peaks (answer may vary depending on references and tool versions used)

# Generating Input normalized coverage files

We will generate Input normalized coverage (bigWig) files for the ChIP samples, using the bamCompare tool from deepTools2. bamCompare provides multiple options to compare the two files (e.g. log2 ratio, subtraction). We will use the log2 ratio of the ChIP samples over Input.

### Hands-on: Generating input-normalized bigWigs

1. bamCompare Tool: toolshed.g2.bx.psu.edu/repos/bgruening/deeptools_bam_compare/deeptools_bam_compare/3.3.2.0.0 : Run bamCompare to get the log2 read ratios between treatment and control samples.
   - "First BAM/CRAM file (e.g. treated sample)": Select the Megakaryocyte TAL1 aligned BAM file for replicate 1 (R1)
   - "Second BAM/CRAM file (e.g. control sample)": Select the Megakaryocyte input aligned BAM file for replicate 1 (R1)
   - "How to compare the two files": Compute log2 of the number of reads
2. Repeat this step for all treatment and control samples:
   - Megakaryocyte TAL1 aligned BAM R2 and Megakaryocyte input aligned BAM R2
   - G1E TAL1 aligned BAM R1 and G1E input aligned BAM R1
   - G1E TAL1 aligned BAM R2 and G1E input aligned BAM R2
3. Rename files to reflect the origin and contents.
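Per bin, the log2-ratio mode computes essentially the following (a toy illustration of the idea, my own addition; the real tool additionally normalizes for sequencing depth and uses a configurable pseudocount):

```python
import numpy as np

chip = np.array([40.0, 8.0, 120.0, 15.0])   # ChIP read counts per bin (toy numbers)
ctrl = np.array([20.0, 10.0, 22.0, 14.0])   # input read counts per bin
pseudo = 1.0                                 # pseudocount to avoid division by zero
log2_ratio = np.log2((chip + pseudo) / (ctrl + pseudo))
print(log2_ratio)   # positive where ChIP is enriched over input, near zero elsewhere
```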
# Plot the signal on the peaks between samples

Plotting your region of interest will involve using two tools from the deepTools suite:

- computeMatrix: computes the signal on given regions, using the bigWig coverage files from different samples.
- plotHeatmap: plots a heatmap of the signals using the computeMatrix output.

Optionally, you can use plotProfile to create a profile plot from the computeMatrix output.

### Hands-on: Calculating the signal matrix on the MACS2 output

1. computeMatrix Tool: toolshed.g2.bx.psu.edu/repos/bgruening/deeptools_compute_matrix/deeptools_compute_matrix/3.3.2.0.0 : Run computeMatrix to prepare data for plotting a heatmap of TAL1 peaks.
   - Select Regions > "Regions to plot": Select the MACS2 narrow peaks files for G1E cells (TAL1 over Input)
   - "Score file": Select the bigWig files for the G1E cells (log2 ratios from bamCompare)
   - "computeMatrix has two main output options": reference-point
   - "The Reference point for plotting": center of region
   - "Distance upstream of the start site of the regions defined in the region file": 5000
   - "Distance downstream of the end site of the given regions": 5000
   - "Show advanced options": Yes
   - "Convert missing values to zero": Yes
   - "Skip zeros": Yes
2. Repeat for Megakaryocytes.

### Hands-on: Plotting a heatmap of TAL1 peaks

1. plotHeatmap Tool: toolshed.g2.bx.psu.edu/repos/bgruening/deeptools_plot_heatmap/deeptools_plot_heatmap/3.3.2.0.1 : Run plotHeatmap to create a heatmap for score distributions across TAL1 peak genomic regions in each cell type.
   - "Matrix file from the computeMatrix tool": Select the computeMatrix output for G1E cells
   - "Show advanced options": Yes
   - "Labels for the samples (each bigwig) plotted": Enter sample labels in the order you added them in computeMatrix, separated by spaces
2. Repeat for Megakaryocytes.

The outputs should look similar to this:

## Assessing GC bias

A common problem of PCR-based protocols is the observation that GC-rich regions tend to be amplified more readily than GC-poor regions. We will now check whether the samples have more reads from regions of the genome with high GC content.

### Hands-on: Assessing GC bias

1. computeGCbias Tool: toolshed.g2.bx.psu.edu/repos/bgruening/deeptools_compute_gc_bias/deeptools_compute_gc_bias/3.3.2.0.0 : Run computeGCbias to determine the GC bias of the sequenced reads.
   - "Bam file": Select an aligned BAM file
   - "Reference genome": locally cached
   - "Using reference genome": mm10
   - "Effective genome size": user specified
   - "Effective genome size": 10000000
   - "Fragment length used for the sequencing": 50

### Questions

1. Why would we worry more about checking for GC bias in an input file?
2. Does this dataset have a GC bias?

### Solution

1. In an input ChIP-seq file, the expectation is that DNA fragments are uniformly sampled from the genome. This is in contrast to an IP ChIP-seq file, where it is expected that certain genomic regions contain more reads (i.e. regions that are bound by the protein that is immunopurified). Therefore, non-uniformity of reads in the input sample could be a result of GC bias, whereby more GC-rich fragments are preferentially amplified during PCR.
2. To answer this question, run the computeGCbias tool as described above and check out the results. What do YOU think?

For more examples and information on how to interpret the results, check out the tool usage documentation here.
2. correctGCbias Tool: toolshed.g2.bx.psu.edu/repos/bgruening/deeptools_correct_gc_bias/deeptools_correct_gc_bias/3.3.2.0.0 : Run correctGCbias to generate GC-corrected BAM/CRAM files.

### Questions

1. What does the tool correctGCbias do?
2. What is the output of this tool?
3. What are some caveats to be aware of if using the output of this tool in downstream analyses?

### Solution

1. The correctGCbias tool removes reads from regions with higher coverage than expected (typically corresponding to GC-rich regions) and adds reads to regions with lower coverage than expected (typically corresponding to AT-rich regions).
2. The output of this tool is a GC-corrected file in BAM, bigWig, or bedGraph format.
3. The GC-corrected output file likely contains duplicated reads in low-coverage regions where reads were added to match the expected read density. Therefore, it is necessary to avoid filtering or removing duplicate reads in any downstream analyses.

Additional information on how to interpret computeGCbias plots can be found here.

# Conclusion

In this exercise you imported raw Illumina sequencing data, evaluated the quality before and after trimming reads with low confidence scores, aligned the trimmed reads, identified TAL1 peaks relative to the negative control (background), and visualized the aligned reads and TAL1 peaks relative to gene structures and positions. Additionally, you assessed the "goodness" of the experiments by looking at metrics such as GC bias and IP enrichment.

### Key points

- Sophisticated analysis of ChIP-seq data is possible using tools hosted by Galaxy.
- Genomic dataset analyses require multiple methods of quality assessment to ensure that the data are appropriate for answering the biology question of interest.
- By using the sharable and transparent Galaxy platform, data analyses can easily be shared and reproduced.

# References

1. Zhang, Y., T. Liu, C. A. Meyer, J. Eeckhoute, D. S. Johnson et al., 2008. Model-based Analysis of ChIP-Seq (MACS). Genome Biology 9: R137. 10.1186/gb-2008-9-9-r137
2. Wu, W., C. S. Morrissey, C. A. Keller, T. Mishra, M. Pimkin et al., 2014. Dynamic shifts in occupancy by TAL1 are guided by GATA factors and drive large-scale reprogramming of gene expression during hematopoiesis. Genome Research 24: 1945-1962. 10.1101/gr.164830.113

# Citing this Tutorial

1. Mallory Freeberg, Mo Heydarian, Vivek Bhardwaj, Joachim Wolff, Anika Erxleben, 2020. Identification of the binding sites of the T-cell acute lymphocytic leukemia protein 1 (TAL1) (Galaxy Training Materials). /training-material/topics/epigenetics/tutorials/tal1-binding-site-identification/tutorial.html Online; accessed TODAY
2. Batut et al., 2018. Community-Driven Data Analysis Training for Biology. Cell Systems. 10.1016/j.cels.2018.05.012

### BibTeX

@misc{epigenetics-tal1-binding-site-identification,
  author = "Mallory Freeberg and Mo Heydarian and Vivek Bhardwaj and Joachim Wolff and Anika Erxleben",
  title = "Identification of the binding sites of the T-cell acute lymphocytic leukemia protein 1 (TAL1) (Galaxy Training Materials)",
  year = "2020",
  month = "11",
  day = "27",
  url = "\url{/training-material/topics/epigenetics/tutorials/tal1-binding-site-identification/tutorial.html}",
  note = "[Online; accessed TODAY]"
}

@article{Batut_2018,
  doi = {10.1016/j.cels.2018.05.012},
  url = {https://doi.org/10.1016%2Fj.cels.2018.05.012},
  year = 2018,
  month = {jun},
  publisher = {Elsevier {BV}},
  volume = {6},
  number = {6},
  pages = {752--758.e1},
  author = {B{\'{e}}r{\'{e}}nice Batut and Saskia Hiltemann and Andrea Bagnacani and Dannon Baker and Vivek Bhardwaj and Clemens Blank and Anthony Bretaudeau and Loraine Brillet-Gu{\'{e}}guen and Martin {\v{C}}ech and John Chilton and Dave Clements and Olivia Doppelt-Azeroual and Anika Erxleben and Mallory Ann Freeberg and Simon Gladman and Youri Hoogstrate and Hans-Rudolf Hotz and Torsten Houwaart and Pratik Jagtap and Delphine Larivi{\`{e}}re and Gildas Le Corguill{\'{e}} and Thomas Manke and Fabien Mareuil and Fidel Ram{\'{\i}}rez and Devon Ryan and Florian Christoph Sigloch and Nicola Soranzo and Joachim Wolff and Pavankumar Videm and Markus Wolfien and Aisanjiang Wubuli and Dilmurat Yusuf and James Taylor and Rolf Backofen and Anton Nekrutenko and Björn Grüning},
  title = {Community-Driven Data Analysis Training for Biology},
  journal = {Cell Systems}
}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2605079710483551, "perplexity": 14379.134929455788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141193221.49/warc/CC-MAIN-20201127131802-20201127161802-00464.warc.gz"}
https://kartikaggarwal.me/2019/10/10/AthNLP2.html
This post discusses the highlights of Athens NLP Summer School 2019. The sessions covered core concepts in Language Modelling, Machine Translation, POS Tagging & Question Answering.

Table of Content

The first part of this three-part post can be accessed here. The complete lecture playlist can be accessed here.

# NLP Application I: Machine Translation [PPT]

Arianna Bisazza from the University of Groningen gave an exciting talk on Machine Translation in NLP. We all encounter translation systems, whether on social media (Twitter/Facebook) or while translating web pages. Machine Translation (MT) has come a long way from rule-based MT to statistical MT (SMT) to the recent neural MT (NMT). Refer to the slides and the talk for details about the history of MT.

The earliest phrase-based SMT approaches, inspired by cryptography, adopted a noisy channel model to recover a message ($e^*$) from the distorted message ($f$) using Bayes' rule, i.e. $e^* = \arg\max_e p(e)\, p(f \mid e)$, which did not work very well for most text. The year 2000 saw great progress in MT with the data-driven discriminative SMT approach. This introduced a linear combination of features along with a phrase alignment variable which allowed phrases to be aligned in multiple ways while translating a piece of text. Flexibility in these models permitted the incorporation of submodels, for example a phrase translation model ($p_{TM}$), reordering models ($p_{RM}$), etc.

The rise of neural networks made all these approaches (feature engineering, multiple information sources from parallel data, submodels, etc.) obsolete, as we now have a single end-to-end conditional language model for both training and prediction. A basic sequence-to-sequence (Seq2Seq) NMT model consists of an encoder, which maps the source text to a continuous-space sentence representation, & a decoder, which generates a word-by-word translation from this continuous representation. An RNN-based Seq2Seq NMT model adds a recurrent hidden state in both encoder and decoder that allows the model to compress the information in the input sequence into a fixed-length vector. But this approach does not work well for longer sequences. The arrival of the attention mechanism resolved this problem by assigning attention weights to concentrate on the most significant information. This gave rise to the popular transformer architecture which is currently driving huge progress in most NLP tasks.

# NLP Application II: Machine Reading [PPT]

Sebastian Riedel from Facebook AI Research gave a lecture on Machine Reading in NLP, where he talked about ....

Machine Reading can be described as converting any text into a meaningful representation which can then be used for satisfying various information needs. Think of this task as reading a passage of text and answering questions based on that text. This might seem an easy task for a human, as we are good at linking prior information & making inferences. However, it becomes very problematic when training machines for the same. Approaches such as semantic parsing, knowledge base construction, and end-to-end reading comprehension have been used for meaningfully representing information. But some core challenges need to be addressed while solving any machine reading task. Let's discuss them individually in the case of knowledge graph construction.

Automatic knowledge base construction takes a text as input and maps it onto a knowledge graph, where the nodes represent the entities in the text and the edges depict the relationships between those entities.
This can be addressed simply as a sequence labelling task using either Conditional Random Fields (CRFs) or an RNN-based architecture. Challenges:

1. Ambiguity: The model needs to understand the context when recognizing an entity in the text. For example, "Tesla" can be a person or a brand name depending upon the context.
2. Variation: A single sentence can be written in a lot of different ways. For example, "I moved to Japan" and "I settled in Japan" convey a similar meaning and hence should be captured by similar representations.
3. Coreference resolution: In the example text, the model knows that some pronoun ("him") moved to Prague, but collapsing nodes and identifying that this "him" refers to "Tesla" is called coreference resolution.

In end-to-end reading comprehension, instead of building a knowledge graph, the meaning representation is obtained directly from the input text using an end-to-end deep learning model. A general blueprint of such a model is the Attentive Reader. The input tokens are represented as embedding vectors and then composed in some form to contextualize the surrounding information. A sequential interaction module is then used to combine the contextualized information about the text and the question into a joint representation to predict the answer.

# NLP Application III: Dialog Systems [PPT]

The convenience of voice as a communication medium, along with the advancement of automatic speech recognition (ASR) technology, has led to a huge surge in voice recognition systems. These systems can mainly be categorized as:

1. Social chit-chat systems: The objective in chit-chat systems is to focus on better engagement and longer responses with the user.
2. Task-oriented dialogue systems: The priority is to provide brief and succinct responses, e.g. answering a question (information consumption) or scheduling a meeting (task completion).
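To make the attention mechanism from the MT section concrete, here is a minimal numpy sketch of one decoder step of (scaled) dot-product attention. This is an illustrative sketch, not code from the lectures; all names and dimensions are my own assumptions.

```python
import numpy as np

def dot_product_attention(query, encoder_states):
    """One decoder step of (scaled) dot-product attention.

    query:           (d,)   current decoder hidden state
    encoder_states:  (T, d) one row per source token
    Returns the context vector and the attention weights.
    """
    d = query.shape[0]
    scores = encoder_states @ query / np.sqrt(d)  # similarity of each source token to the query
    weights = np.exp(scores - scores.max())       # softmax, shifted for numerical stability
    weights /= weights.sum()
    context = weights @ encoder_states            # weighted sum of encoder states, shape (d,)
    return context, weights

# Toy example: 5 source tokens, hidden size 8
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))
q = rng.normal(size=8)
ctx, w = dot_product_attention(q, H)
print(w.round(3), ctx.shape)
```

The weights show which source positions the decoder "concentrates" on at this step, which is exactly the fix for the fixed-length bottleneck described above.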
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49749645590782166, "perplexity": 2313.3168222494246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499819.32/warc/CC-MAIN-20230130133622-20230130163622-00122.warc.gz"}
http://www.maplesoft.com/support/help/MapleSim/view.aspx?path=ModelonHydraulics/Pumps/Basic/DieselNoStates
Diesel No States — Diesel engine with speed controller

This component describes a diesel engine with speed controller but without inertia.

Implementation

The component uses a static characteristic curve of maximum torque as a function of speed. This relation is given by a polynomial:

$\mathrm{taumax}={w}^{3}\,{\mathrm{diesel}}_{3}+{w}^{2}\,{\mathrm{diesel}}_{2}+w\,{\mathrm{diesel}}_{1}+{\mathrm{diesel}}_{0}$

There is a simple speed controller implemented. The reference signal for this controller is given by:

$\text{reference signal}=\begin{cases}{w}_{\mathrm{min}} & \mathrm{commandedSpeed}=0\\ {w}_{\mathrm{max}} & \mathrm{commandedSpeed}=1\end{cases}$

Limitations

This simple component does not describe the startup of a diesel engine. (Set initial conditions for the coupled inertia between ${w}_{\mathrm{min}}$ and ${w}_{\mathrm{max}}$.)

Equations

$\mathrm{taumax}={w}^{3}\,{\mathrm{diesel}}_{3}+{w}^{2}\,{\mathrm{diesel}}_{2}+w\,{\mathrm{diesel}}_{1}+{\mathrm{diesel}}_{0}$

$\mathrm{tauref}=\left(\mathrm{wRef}-w\right){k}_{\mathrm{diesel}}$

$w={\partial}_{t}\left({\mathrm{\phi}}_{b}\right)$

$\mathrm{wRef}=\begin{cases}{w}_{\mathrm{min}} & \mathrm{wRefu}<{w}_{\mathrm{min}}\\ {w}_{\mathrm{max}} & {w}_{\mathrm{max}}<\mathrm{wRefu}\\ \mathrm{wRefu} & \text{otherwise}\end{cases}$

$\mathrm{wRefu}={w}_{\mathrm{min}}+\left({w}_{\mathrm{max}}-{w}_{\mathrm{min}}\right)\mathrm{commandedSpeed}$

$-\mathrm{flange\_b.tau}=\begin{cases}-5w & w<\frac{{w}_{\mathrm{min}}}{2}\\ \mathrm{taumax} & \mathrm{taumax}<\mathrm{tauref}\\ -\frac{3\,\mathrm{taumax}}{10} & \mathrm{tauref}<-\frac{3\,\mathrm{taumax}}{10}\\ \mathrm{tauref} & \text{otherwise}\end{cases}$

Variables

| Name | Units | Description | Modelica ID |
|---|---|---|---|
| wRefu | rad/s | | wRefu |
| wRef | rad/s | | wRef |
| w | rad/s | Absolute angular velocity of component | w |
| taumax | N·m | | taumax |
| tauref | N·m | | tauref |

Connections

| Name | Description | Modelica ID |
|---|---|---|
| flange_b | (right) driven flange (flange axis directed OUT OF cut plane) | flange_b |
| commandedSpeed | Connector of input signal used as flow rate | commandedSpeed |

Parameters

| Name | Default | Units | Description | Modelica ID |
|---|---|---|---|---|
| w_min | 75 | rad/s | Minimum angular velocity | wmin |
| w_max | 240 | rad/s | Maximum angular velocity | wmax |
| diesel_3 | 9.02·10^-6 | | | diesel3 |
| diesel_2 | -0.00752 | | | diesel2 |
| diesel_1 | 1.5939 | | | diesel1 |
| diesel_0 | 75.022 | | | diesel0 |
| k_diesel | 10 | | Gain of speed controller, N·m/(rad/s) | kdiesel |
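The nested piecewise definitions above are easy to mis-read, so here is a small Python sketch that evaluates the same equations numerically. This is a plain re-implementation for illustration only, not the MapleSim/Modelica source; parameter defaults are taken from the table above.

```python
def diesel_torque(w, commanded_speed,
                  wmin=75.0, wmax=240.0, kdiesel=10.0,
                  d3=9.02e-6, d2=-0.00752, d1=1.5939, d0=75.022):
    """Return -flange_b.tau for the speed-controlled diesel model (no states)."""
    # Maximum torque from the static characteristic curve
    taumax = d3 * w**3 + d2 * w**2 + d1 * w + d0
    # commandedSpeed in [0, 1] maps linearly to [wmin, wmax], then clamps
    wrefu = wmin + (wmax - wmin) * commanded_speed
    wref = min(max(wrefu, wmin), wmax)
    # Proportional speed controller
    tauref = (wref - w) * kdiesel
    # Drag torque below wmin/2, otherwise tauref saturated to [-0.3*taumax, taumax]
    if w < wmin / 2:
        return -5.0 * w
    return min(max(tauref, -0.3 * taumax), taumax)

# Example: engine at 150 rad/s, commanded to mid-range speed
print(diesel_torque(w=150.0, commanded_speed=0.5))
```

For `w = 150` and `commanded_speed = 0.5` the reference speed is 157.5 rad/s, so the controller asks for 75 N·m, well inside the saturation limits.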
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 39, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8961243629455566, "perplexity": 5441.3952342222465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257829325.58/warc/CC-MAIN-20160723071029-00148-ip-10-185-27-174.ec2.internal.warc.gz"}
https://support.bioconductor.org/p/94003/#94027
DESeq2 Following RSEM

tungphannv @tungphannv-12640:

I am using RNA-seq data processed by RSEM, given to me by someone else. The data has this format:

| | sample1 | sample2 | sample3 | sample4 | sample5 | sample6 |
|---|---|---|---|---|---|---|
| A1BG | 94.64 | 278.03 | 94.07 | 96.00 | 55.00 | 64.03 |
| A1BG-AS1 | 64.28 | 114.08 | 98.88 | 109.05 | 95.32 | 73.11 |
| A1CF | 0.00 | 2.00 | 1.00 | 1.00 | 0.00 | 1.00 |
| A2M | 24.00 | 2.00 | 18.00 | 4.00 | 35.00 | 8.00 |
| A2M-AS1 | 1.00 | 1.00 | 1.00 | 0.00 | 0.00 | 0.00 |
| A2ML1 | 0.84 | 2.89 | 0.00 | 1.00 | 2.00 | 3.11 |

But it seems like DESeq2 can't handle non-integer counts. I tried tximport but haven't been able to get it to work, and I don't understand how/whether tximport would help in this situation. Can I just use round(data) instead? Does anyone have a better suggestion for dealing with this kind of data? Thanks so much!

deseq2 rnaseq

Comment: The DESeq2 manual says: "As input, the DESeq2 package expects count data as obtained", which means it handles only whole numbers. I doubt that "round" is a good approach here, because you change the counts. Where does, e.g., the .64 in the sample1 A1BG count come from?

tungphannv: I think it comes from the way RSEM assigns counts to genes, since RSEM estimates abundances for transcripts/genes.

Michael Love @mikelove:

hi, Use tximport to import the RSEM estimated counts. This will take care of everything for you. If you encounter a problem, please report back with full documentation of your code, the error, and your sessionInfo(). There is an example in the tximport vignette of importing RSEM estimated counts.

tungphannv: From the vignette, I think tximport expects a list of .genes.results files from RSEM. For some reason, I don't have those files. The only data I have is the tsv file with the expected counts in the above format (the first column is the list of genes, and the next 8 columns contain expected counts; 4 of them are controls). Is it possible to make it work with tximport/DESeq2? Thank you very much.

Michael Love: So this is a little sub-optimal compared to using tximport (which leverages the effective length information and isoform abundance estimates), but you can round the matrix of estimated counts (note, these are not normalized counts) and feed these into DESeqDataSetFromMatrix.
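Putting the accepted advice into code: a minimal R sketch, assuming the counts are already loaded into a matrix-like object `rsem` shaped like the table above and that `coldata` is a data frame with a `condition` column describing the samples (both names are mine, not from the thread). tximport remains the preferred route when the per-sample .genes.results files are available.

```r
library(DESeq2)

# rsem:    genes x samples matrix of RSEM expected counts (may be fractional)
# coldata: data.frame with one row per sample, e.g. a 'condition' column
cts <- round(as.matrix(rsem))   # DESeq2 expects whole-number counts
mode(cts) <- "integer"

dds <- DESeqDataSetFromMatrix(countData = cts,
                              colData   = coldata,
                              design    = ~ condition)
dds <- DESeq(dds)
res <- results(dds)
```

As noted in the thread, this discards the effective-length information tximport would use, so treat it as a fallback.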
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3994632661342621, "perplexity": 3598.5977339334286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358208.31/warc/CC-MAIN-20211127163427-20211127193427-00120.warc.gz"}
http://tex.stackexchange.com/questions/53420/make-index-started-with-mathcal-mode-in-the-right-order
# make index started with mathcal mode in the right order

I want to make an index entry something like:

$\mathcal{P}$-vector\index{$\mathcal{P}$-vector}

but when indexed it comes before words starting with 'A' instead of after the 'O' words.

If you are using makeindex you can specify both the sorting string and the typeset version for the document. By default @ is the separator, although it can be changed in the index style.

\index{p-vector@$\mathcal{P}$-vector}
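For context, here is a complete minimal document using the sort-key@display form from the answer. It assumes the standard makeidx workflow (run latex, then makeindex, then latex again); the document body is illustrative.

```latex
\documentclass{article}
\usepackage{makeidx}
\makeindex

\begin{document}
A $\mathcal{P}$-vector\index{p-vector@$\mathcal{P}$-vector}
is now sorted under ``p'' in the index, between the O and Q entries.
\printindex
\end{document}
```

The part before the @ is what makeindex sorts on; the part after is what gets typeset in the index.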
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.699306845664978, "perplexity": 3070.9175677204066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272680.42/warc/CC-MAIN-20140728011752-00119-ip-10-146-231-18.ec2.internal.warc.gz"}
http://mathhelpforum.com/differential-geometry/147262-banach-space-print.html
# Banach space

• Jun 1st 2010, 06:42 AM cizzzi — Banach space
Prove that, in a Banach space, if $\sum \|X_n\|$ converges then $\sum X_n$ converges.

• Jun 1st 2010, 07:02 AM cizzzi
Prove that, in a Banach space, if $\sum^{\infty}_{n=1}\|X_n\|$ converges then $\sum^{\infty}_{n=1}X_n$ converges.

• Jun 1st 2010, 07:11 AM Focus
Quote: Originally Posted by cizzzi — Prove that, in a Banach space, if $\sum^{\infty}_{n=1}\|X_n\|$ converges then $\sum^{\infty}_{n=1}X_n$ converges.
Show that the sum is Cauchy, i.e. consider $S_k:=\sum_{n=1}^k X_n$, now consider $\|S_k-S_l\|$. Hint: If $\sum_{n=1}^\infty x_n < \infty$ then $\sum_{n=k}^\infty x_n \rightarrow 0$ as k goes to infinity. (A fact that you can prove using the fact that $x_n \rightarrow 0$.)

• Jun 1st 2010, 11:00 AM cizzzi
Thank you. I am trying to solve it, but I do not know functional analysis very well :(

• Jun 1st 2010, 12:55 PM Focus
Quote: Originally Posted by cizzzi — Thank you. I am trying to solve it, but I do not know functional analysis very well :(
Why don't you post what you have done so far (even if it is wrong)?

• Jun 4th 2010, 06:44 AM cizzzi
If X is complete and $\sum^{\infty}_{n=1}\|X_n\|<\infty$, then the sequence $S_{k}=\sum^{k}_{n=1}X_{n}$ for $k\in\mathbb{N}$ is Cauchy, because for k>m, $\|S_{k}-S_{m}\|\leq\sum^{k}_{n=m+1}\|X_{n}\|\rightarrow 0$ as $m,k\rightarrow 0$. Therefore $S=\sum^{\infty}_{n=1}X_{n}=\lim_{k\rightarrow\infty}\sum^{k}_{n=1}X_{n}$ exists in X. Is it true?

• Jun 6th 2010, 12:22 PM Focus
Quote: Originally Posted by cizzzi — [the proof above]
The sum converges to zero as k and m tend to infinity (not zero).
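For reference, the completed argument from the thread can be written compactly in LaTeX. This is just a cleaned-up transcription of cizzzi's post, with the limit taken as $m,k\to\infty$ as Focus points out; it is not new mathematics.

```latex
Let $S_k=\sum_{n=1}^{k}X_n$. For $k>m$,
\[
  \|S_k-S_m\|=\Bigl\|\sum_{n=m+1}^{k}X_n\Bigr\|
  \le \sum_{n=m+1}^{k}\|X_n\| \longrightarrow 0
  \quad\text{as } m,k\to\infty,
\]
since $\sum_{n=1}^{\infty}\|X_n\|<\infty$. Hence $(S_k)$ is Cauchy, and
completeness of the Banach space $X$ gives
$S=\sum_{n=1}^{\infty}X_n=\lim_{k\to\infty}S_k$ in $X$.
```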
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9945599436759949, "perplexity": 1309.999260054841}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719754.86/warc/CC-MAIN-20161020183839-00162-ip-10-171-6-4.ec2.internal.warc.gz"}
https://tutorme.com/tutors/20340/interview/
Kelly B. — Proud Math Geek

Geometry TutorMe Question: Find the area of the equilateral triangle inscribed in a circle with a radius of 4 units.

Kelly B.: Side note: This problem will be easier to solve if you draw a picture of the information as we go through the problem.

Prerequisite knowledge: For any equilateral triangle:
1. The radius of the incircle $$=\frac{side}{2\sqrt{3}}$$
2. The radius of the circumcircle $$=\frac{side}{\sqrt{3}}$$

Since we are given the radius of the circumcircle (the triangle is inscribed in the circle), we will use #2 to find the side length of the equilateral triangle.

$$4=\frac{s}{\sqrt{3}}$$
$$4\sqrt{3}=s$$

$$area=\frac{1}{2}bh$$

We now know the base, but we still have to solve for the height. The height of the triangle will divide the base in half and will, by definition, be perpendicular to the base. Therefore, we can solve for the height using the Pythagorean Theorem.

$$a^{2}+b^{2}=c^{2}$$

We know the measures of $$a$$ (half the base) and $$c$$ (the side), so we can solve for the height, $$b$$.

$$(2\sqrt{3})^{2}+b^{2}=(4\sqrt{3})^{2}$$
$$(4*3)+b^{2}=16*3$$
$$12+b^{2}=48$$
$$b^{2}=36$$
$$b=6$$

Now find the area.

$$A=\frac{1}{2}bh$$
$$A=\frac{1}{2}(4\sqrt{3})(6)$$
$$A=12\sqrt{3}$$

Calculus TutorMe Question: Find the derivative: $$f(x)=x^{4}+3x^{3}-16x^{2}+225x-15$$

Kelly B.: $$f'(x)=4x^{3}+9x^{2}-32x+225$$

Algebra TutorMe Question: Kelly is older than Parker. Next year Parker will be exactly half Kelly's age. Six years ago Kelly was three times as old as Parker. How old are Parker and Kelly?

Kelly B.: Step one: Write the equations given in the problem.
$$P+1=\frac{1}{2}(K+1)$$
$$K-6=3(P-6)$$

Step two: Solve one of the equations for one variable in order to substitute.
$$K-6=3(P-6)$$
$$K-6=3P-18$$
$$K=3P-12$$

Step three: Substitute the final equation from step two into the original equation not solved in step two. Then solve.
$$P+1=\frac{1}{2}(K+1)$$
$$P+1=\frac{1}{2}((3P-12)+1)$$
$$P+1=\frac{1}{2}(3P-11)$$
$$P+1=\frac{3}{2}P-\frac{11}{2}$$
$$\frac{13}{2}=\frac{1}{2}P$$
$$13=P$$

Step four: Substitute the answer from step three into the final equation from step two to solve for the final variable.
$$K=3P-12$$
$$K=3(13)-12$$
$$K=39-12$$
$$K=27$$

Final Answer: Parker is 13 and Kelly is 27.
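As a quick sanity check of the algebra problem, here is a short sympy sketch (illustrative only, not part of the tutor's answer):

```python
from sympy import symbols, Eq, solve

P, K = symbols("P K")
# Next year Parker is half Kelly's age; six years ago Kelly was three times Parker's age
eqs = [Eq(P + 1, (K + 1) / 2), Eq(K - 6, 3 * (P - 6))]
print(solve(eqs, [P, K]))   # {P: 13, K: 27}
```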
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.931362509727478, "perplexity": 383.7311714494266}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815500.61/warc/CC-MAIN-20180224073111-20180224093111-00224.warc.gz"}
https://www.zora.uzh.ch/id/eprint/22713/
# On the binary expansion of a random integer

Barbour, A D (1992). On the binary expansion of a random integer. Statistics and Probability Letters, 14(3):235-241.

## Abstract

It is shown that the distribution of the number of ones in the binary expansion of an integer chosen uniformly at random from the set 0, 1, …, n − 1 can be approximated in total variation by a mixture of two neighbouring binomial distributions, with error of order (log n)^{−1}. The proof uses Stein's method.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9507614374160767, "perplexity": 322.40218587676947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141733120.84/warc/CC-MAIN-20201204010410-20201204040410-00245.warc.gz"}
http://clay6.com/qa/51563/which-of-the-following-is-the-only-noble-gas-not-to-occur-in-the-free-state
# Which of the following is the only noble gas not to occur in the free state in the atmosphere (air), in outer space, or in natural gases?

$\begin{array}{1 1} Radon \\ Argon\\ Xenon \\ Krypton \end{array}$

Radon is usually isolated from the radioactive decay of dissolved radium compounds.

answered Jul 28, 2014
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8585070371627808, "perplexity": 2315.904584430179}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124297.82/warc/CC-MAIN-20170423031204-00146-ip-10-145-167-34.ec2.internal.warc.gz"}
http://pressbooks-dev.oer.hawaii.edu/principlesofeconomics/chapter/27-4-how-banks-create-money/
Chapter 27. Money and Banking

# 27.4 How Banks Create Money

### Learning Objectives

By the end of this section, you will be able to:

• Utilize the money multiplier formula to determine how banks create money
• Analyze and create T-account balance sheets
• Evaluate the risks and benefits of money and banks

Banks and money are intertwined. It is not just that most money is in the form of bank accounts. The banking system can literally create money through the process of making loans. Let's see how.

# Money Creation by a Single Bank

Start with a hypothetical bank called Singleton Bank. The bank has $10 million in deposits. The T-account balance sheet for Singleton Bank, when it holds all of the deposits in its vaults, is shown in Figure 1. At this stage, Singleton Bank is simply storing money for depositors and is not using these deposits to make loans. In this simplified example, Singleton Bank cannot earn any interest income from loans and cannot pay its depositors an interest rate either.

Singleton Bank is required by the Federal Reserve to keep $1 million on reserve (10% of total deposits). It will loan out the remaining $9 million. By loaning out the $9 million and charging interest, it will be able to make interest payments to depositors and earn interest income for Singleton Bank (for now, we will keep it simple and not put interest income on the balance sheet). Instead of becoming just a storage place for deposits, Singleton Bank can become a financial intermediary between savers and borrowers. This change in business plan alters Singleton Bank's balance sheet, as shown in Figure 2. Singleton's assets have changed; it now has $1 million in reserves and a loan to Hank's Auto Supply of $9 million. The bank still has $10 million in deposits.

Singleton Bank lends $9 million to Hank's Auto Supply. The bank records this loan by making an entry on the balance sheet to indicate that a loan has been made. This loan is an asset, because it will generate interest income for the bank. Of course, the loan officer is not going to let Hank walk out of the bank with $9 million in cash. The bank issues Hank's Auto Supply a cashier's check for the $9 million. Hank deposits the loan in his regular checking account with First National. The deposits at First National rise by $9 million and its reserves also rise by $9 million, as Figure 3 shows. First National must hold 10% of additional deposits as required reserves but is free to loan out the rest.

Making loans that are deposited into a demand deposit account increases the M1 money supply. Remember, the definition of M1 includes checkable (demand) deposits, which can easily be used as a medium of exchange to buy goods and services. Notice that the money supply is now $19 million: $10 million in deposits in Singleton Bank and $9 million in deposits at First National. Obviously these deposits will be drawn down as Hank's Auto Supply writes checks to pay its bills. But the bigger picture is that a bank must hold enough money in reserves to meet its liabilities; the rest the bank loans out. In this example so far, bank lending has expanded the money supply by $9 million.

Now, First National must hold only 10% as required reserves ($900,000) but can lend out the other 90% ($8.1 million) in a loan to Jack's Chevy Dealership, as shown in Figure 4. If Jack's deposits the loan in its checking account at Second National, the money supply just increased by an additional $8.1 million, as Figure 5 shows. How is this money creation possible?
It is possible because there are multiple banks in the financial system, they are required to hold only a fraction of their deposits, and loans end up deposited in other banks, which increases deposits and, in essence, the money supply. Watch this video to learn more about how banks create money.

# The Money Multiplier and a Multi-Bank System

In a system with multiple banks, the initial excess reserve amount that Singleton Bank decided to lend to Hank's Auto Supply was deposited into First National Bank, which is free to loan out $8.1 million. If all banks loan out their excess reserves, the money supply will expand. In a multi-bank system, the amount of money that the system can create is found by using the money multiplier. The money multiplier tells us by how many times a loan will be "multiplied" as it is spent in the economy and then re-deposited in other banks.

Fortunately, a formula exists for calculating the total of these many rounds of lending in a banking system. The money multiplier formula is:

$\frac{1}{\text{Reserve Requirement}}$

The money multiplier is then multiplied by the change in excess reserves to determine the total amount of M1 money supply created in the banking system. See the Work It Out feature to walk through the multiplier calculation.

### Using the Money Multiplier Formula

Using the money multiplier for the example in this text:

Step 1. In the case of Singleton Bank, for whom the reserve requirement is 10% (or 0.10), the money multiplier is 1 divided by 0.10, which is equal to 10.

Step 2. We have identified that the excess reserves are $9 million, so, using the formula, we can determine the total change in the M1 money supply:

$\begin{aligned}\text{Total Change in the M1 Money Supply} &= \frac{1}{\text{Reserve Requirement}}\times\text{Excess Reserves} \\ &= \frac{1}{0.10}\times\$9\text{ million} \\ &= 10\times\$9\text{ million} \\ &= \$90\text{ million}\end{aligned}$

Step 3. Thus, we can say that, in this example, the total quantity of money generated in this economy after all rounds of lending are completed will be $90 million.

# Cautions about the Money Multiplier

The money multiplier will depend on the proportion of reserves that banks are required to hold by the Federal Reserve Bank. Additionally, a bank can also choose to hold extra reserves. Banks may decide to vary how much they hold in reserves for two reasons: macroeconomic conditions and government rules. When an economy is in recession, banks are likely to hold a higher proportion of reserves because they fear that loans are less likely to be repaid when the economy is slow. The Federal Reserve may also raise or lower the required reserves held by banks as a policy move to affect the quantity of money in an economy, as Monetary Policy and Bank Regulation will discuss.

The process of how banks create money shows how the quantity of money in an economy is closely linked to the quantity of lending or credit in the economy. Indeed, all of the money in the economy, except for the original reserves, is a result of bank loans that are re-deposited and loaned out, again, and again.

Finally, the money multiplier depends on people re-depositing the money that they receive in the banking system. If people instead store their cash in safe-deposit boxes or in shoeboxes hidden in their closets, then banks cannot recirculate the money in the form of loans.
Indeed, central banks have an incentive to assure that bank deposits are safe, because if people worry that they may lose their bank deposits, they may start holding more money in cash instead of depositing it in banks, and the quantity of loans in an economy will decline. Low-income countries have what economists sometimes refer to as "mattress savings," or money that people are hiding in their homes because they do not trust banks. When mattress savings in an economy are substantial, banks cannot lend out those funds and the money multiplier cannot operate as effectively. The overall quantity of money and loans in such an economy will decline.

Watch a video of Jem Bendell discussing "The Money Myth."

# Money and Banks—Benefits and Dangers

Money and banks are marvelous social inventions that help a modern economy to function. Compared with the alternative of barter, money makes market exchanges vastly easier in goods, labor, and financial markets. Banking makes money still more effective in facilitating exchanges in goods and labor markets. Moreover, the process of banks making loans in financial capital markets is intimately tied to the creation of money.

But the extraordinary economic gains that are possible through money and banking also suggest some possible corresponding dangers. If banks are not working well, it sets off a decline in convenience and safety of transactions throughout the economy. If the banks are under financial stress, because of a widespread decline in the value of their assets, loans may become far less available, which can deal a crushing blow to sectors of the economy that depend on borrowed money, like business investment, home construction, and car manufacturing. The Great Recession of 2008–2009 illustrated this pattern.

### The Many Disguises of Money: From Cowries to Bitcoins

The global economy has come a long way since it started using cowrie shells as currency. We have moved away from commodity and commodity-backed paper money to fiat currency. As technology and global integration increase, the need for paper currency is diminishing, too. Every day, we witness the increased use of debit and credit cards.

The latest creation, and perhaps one of the purest forms of fiat money, is the Bitcoin. Bitcoins are a digital currency that allows users to buy goods and services online. Products and services such as videos and books may be purchased using Bitcoins. It is not backed by any commodity, nor has it been decreed by any government as legal tender, yet it is used as a medium of exchange and its value (online at least) can be stored. It is also unregulated by any central bank, but is created online through people solving very complicated mathematics problems and getting paid afterward. Bitcoin.org is an information source if you are curious.

Bitcoins are a relatively new type of money. At present, because it is not sanctioned as a legal currency by any country nor regulated by any central bank, it lends itself to use in illegal trading activities as well as legal ones. As technology increases and the need to reduce the transaction costs associated with using traditional forms of money increases, Bitcoins or some sort of digital currency may replace our dollar bill, just as the cowrie shell was replaced.

# Key Concepts and Summary

The money multiplier is defined as the quantity of money that the banking system can generate from each $1 of bank reserves.
The formula for calculating the multiplier is 1/reserve ratio, where the reserve ratio is the fraction of deposits that the bank wishes to hold as reserves. The quantity of money in an economy and the quantity of credit for loans are inextricably intertwined. Much of the money in an economy is created by the network of banks making loans, people making deposits, and banks making more loans. Given the macroeconomic dangers of a malfunctioning banking system, Monetary Policy and Bank Regulation will discuss government policies for controlling the money supply and for keeping the banking system safe.

### Self-Check Questions

Imagine that you are in the position of buying loans in the secondary market (that is, buying the right to collect the payments on loans made by banks) for a bank or other financial services company. Explain why you would be willing to pay more or less for a given loan if:

1. The borrower has been late on a number of loan payments
2. Interest rates in the economy as a whole have risen since the loan was made
3. The borrower is a firm that has just declared a high level of profits
4. Interest rates in the economy as a whole have fallen since the loan was made

### Review Questions

1. How do banks create money?
2. What is the formula for the money multiplier?

### Critical Thinking Questions

1. Should banks have to hold 100% of their deposits? Why or why not?
2. Explain what will happen to the money multiplier process if there is an increase in the reserve requirement.
3. What do you think the Federal Reserve Bank did to the reserve requirement during the Great Recession of 2008–2009?

### Problems

Humongous Bank is the only bank in the economy. The people in this economy have $20 million in money, and they deposit all their money in Humongous Bank.

1. Humongous Bank decides on a policy of holding 100% reserves. Draw a T-account for the bank.
2. Humongous Bank is required to hold 5% of its existing $20 million as reserves, and to loan out the rest. Draw a T-account for the bank after this first round of loans has been made.
3. Assume that Humongous Bank is part of a multibank system. How much will the money supply increase with that original loan of $19 million?

# References

Bitcoin. 2013. www.bitcoin.org.

National Public Radio. "Lawmakers and Regulators Take Closer Look at Bitcoin." November 19, 2013. http://thedianerehmshow.org/shows/2013-11-19/lawmakers-and-regulators-take-closer-look-bitcoin.

## Glossary

money multiplier formula: total money in the economy divided by the original quantity of money, or change in the total money in the economy divided by a change in the original quantity of money
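To make the rounds-of-lending arithmetic from the Work It Out feature concrete, here is a short illustrative Python sketch (not part of the chapter): each bank keeps 10% of a new deposit and lends the rest, and the created deposits sum to the excess reserves times the multiplier.

```python
def total_money_created(initial_loan, reserve_ratio, rounds=1000):
    """Sum the deposits created as a loan is re-deposited and re-lent."""
    total, deposit = 0.0, initial_loan
    for _ in range(rounds):
        total += deposit                  # the loan becomes a new deposit somewhere
        deposit *= (1 - reserve_ratio)    # the next bank lends out the rest
    return total

print(total_money_created(9e6, 0.10))     # ~ $90 million after many rounds
print(9e6 * (1 / 0.10))                   # closed form: excess reserves x multiplier
```

The loop and the closed-form 1/reserve-ratio formula agree, which is exactly the point of the multiplier.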
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15089808404445648, "perplexity": 2946.2358632800433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742793.19/warc/CC-MAIN-20181115161834-20181115183122-00033.warc.gz"}
https://edoc.unibas.ch/30572/
# Spontaneous CP violation in $A_4 \times SU(5)$ with Constrained Sequential Dominance 2

Antusch, Stefan and King, Stephen F. and Spinrath, Martin. (2013) Spontaneous CP violation in $A_4 \times SU(5)$ with Constrained Sequential Dominance 2. Physical Review D: Particles, Fields, Gravitation and Cosmology, Vol. 87, 096018.

Full text not available from this repository.

Official URL: http://edoc.unibas.ch/dok/A6212209
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8840118646621704, "perplexity": 14522.589692301195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201601.26/warc/CC-MAIN-20200921081428-20200921111428-00167.warc.gz"}
http://mathhelpforum.com/calculus/145860-approximate-5th-root-20-a.html
# Math Help - Approximate 5th root of 20

1. ## Approximate 5th root of 20

f(x) = x^5 - 20
f'(x) = 5x^4

x2 = x1 - ( x1^5 - 20 ) / ( 5x1^4 )
x2 = 1 - ( (1)^5 - 20 ) / ( 5(1)^4 )
x2 = 1 - ( -19 / 5 )
x2 = 24 / 5
x2 = 4.08

My answer is wrong and the correct answer is 1.82056420. I used an initial approximation of x1 = 1, as no approximation was given.

2. Originally Posted by TsAmE [as above]
You can use Taylor...
x^(1/5) ~ 1 + (1/5)(x-1) - (2/25)(x-1)^2 + (6/125)(x-1)^3 + O((x-1)^4)
You put x = 20 and get what you need.

3. I haven't learned Taylor; I want to do it using Newton's method.

4. Your method is correct. You made a little slip: 24/5 = 4.8 rather than 4.08. Just keep going. On a decent calculator you can do this:
1.5 = ANS - (ANS^5-20)/(5ANS^4) = = = = =
It gives 1.9901.., 1.8470.., 1.8213.., and pretty soon you will have 1.820564203.

5. Originally Posted by TsAmE [as above]
Well, 24/5 = 4.8, not 4.08, but what exactly was the problem? Surely not to do just one iteration? If you keep going, you get successive approximations that converge to 1.82...
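Following the thread's advice, here is a small Python sketch of the Newton iteration (the starting guess 1.5 matches the calculator example in reply 4; the function name and tolerance are my own choices):

```python
def newton_fifth_root(a, x=1.5, tol=1e-10):
    """Approximate a**(1/5) with Newton's method on f(x) = x**5 - a."""
    while True:
        x_next = x - (x**5 - a) / (5 * x**4)   # one Newton step
        if abs(x_next - x) < tol:
            return x_next
        x = x_next

print(newton_fifth_root(20))   # 1.8205642030...
```

Starting from 1.5 the iterates are 1.9901..., 1.8470..., 1.8213..., matching the thread, and the method converges quadratically near the root.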
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8221979141235352, "perplexity": 1518.179917191566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246649738.26/warc/CC-MAIN-20150417045729-00126-ip-10-235-10-82.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/5935-limit-proof.html
# Thread: Limit Proof

1. ## Limit Proof

What would be the easiest way to prove that the lim (as n approaches infinity) of (2)/(n+1) = 0. Thanks.

2. This one is easy: 2/[n+1] < 2/n. If e>0 then there is a positive integer K such that (1/K)<(e/2) or (2/K)<e.

3. Originally Posted by JaysFan31
What would be the easiest way to prove that the lim (as n approaches infinity) of (2)/(n+1) = 0. Thanks.

Note the sequence, a_n = {2/(n+1)}, is identical to the sequence, b_n = {2(1/n)/(1+1/n)}.

Now the numerator is, lim 2(1/n). Since lim (1/n) ---> 0, so too lim 2(1/n) ---> 0, by the constant multiple rule.

Next, the denominator is, lim (1+1/n). But lim 1 = 1 and lim (1/n) = 0. Thus, by the sum rule for sequences, lim (1+1/n) exists and is 1+0 = 1 (not equal to zero).

Thus, by the quotient rule for sequences, lim 2(1/n)/(1+1/n) ---> 0. Thus, lim a_n ---> 0.
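For completeness, the first reply's hint written out as a short epsilon-N argument (a cleaned-up version of what reply 2 sketches, not new mathematics):

```latex
Given $\varepsilon>0$, choose $K\in\mathbb{N}$ with $K>2/\varepsilon$,
so that $2/K<\varepsilon$. Then for all $n\ge K$,
\[
  \left|\frac{2}{n+1}-0\right| = \frac{2}{n+1} < \frac{2}{n}
  \le \frac{2}{K} < \varepsilon ,
\]
hence $\displaystyle\lim_{n\to\infty} \frac{2}{n+1}=0$.
```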
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9865638613700867, "perplexity": 8126.27966274752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292887.6/warc/CC-MAIN-20160823195812-00030-ip-10-153-172-175.ec2.internal.warc.gz"}
https://survivor.togaware.com/gnulinux/emacs-using-100-cpu.html
## 24.8 Emacs Using 100% CPU

Historic

This seems to be a problem with Emacs recently (late 2006 and into 2007). I eventually tracked down a solution that seems to fix emacs-snapshot on Debian, from http://forums.gentoo.org/viewtopic-t-501837-highlight-emacs+cpu.html. I followed the instructions but it started happening again, probably because the source code for the fix (as detailed below) is in a couple of places.

A quick fix is to set the Emacs variable semantic-idle-scheduler-idle-time to a large number (by default it is 2 seconds) so that the idle scheduler does not kick in (which is what is using all the CPU)!

The actual solution below edits the file semantic-idle.el and ensures all binary copies of it are replaced (including, for me, emacs-snapshot/site-lisp/semantic/semantic-idle.elc, which did not seem to have a source version).

So, first edit the file /usr/share/emacs/site-lisp/semantic/semantic-idle.el to comment out line 290:

;;GJW(semantic-idle-scheduler-kill-timer)

Similarly, comment out line 294, but be sure to retain the final three closing parentheses:

;;GJW(semantic-idle-scheduler-setup-timer)
)))

Next, start up, as root, a new Emacs and byte-compile the package:

$ sudo emacs -nw -q
M-x byte-compile-file <ret> /usr/share/emacs/site-lisp/semantic/semantic-idle.el
C-x C-c

As mentioned above, copy the byte-compiled version across to emacs-snapshot/site-lisp/semantic/semantic-idle.elc. Now start up Emacs and hopefully it won't consume all the CPU again!

Edward Garson reports (28 Apr 2007) that the same fix works for XP and Fedora 6.
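The "quick fix" mentioned above, expressed as an init-file snippet. The variable name comes from the text; the value is an arbitrary large number of seconds, my choice:

```elisp
;; Keep semantic's idle scheduler from kicking in (the CPU hog).
;; 3600 seconds is arbitrary -- any large idle time works.
(setq semantic-idle-scheduler-idle-time 3600)
```

Unlike the semantic-idle.el edit, this requires no byte compiling and survives package upgrades.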
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1858203113079071, "perplexity": 5481.5237063947225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706291.88/warc/CC-MAIN-20221126112341-20221126142341-00737.warc.gz"}
https://www.fark.com/comments/1279807/George-Carlin-voluntarily-enters-drug-alcohol-rehab-program-says-he-has-problem-with-wine-Vicodin?cpp=1
I can never remember. Is it white or red with Vicodan? Maybe this isn't the best time to announce my crippling Percodan/Mountain Dew habit. does this mean he's not going to be as funny once he gets out? and why should 'questioning the intellect of those who visit Vegas' cause a stir? a little too close to the truth? Two in the mouth [image from pssht.com too old to be available] Hang in there, Buddy-Boy! ChewbaccaJones: Two in the mouth [image from a1468.g.akamai.net too old to be available] These are orange. There will be other colors later. Hmmm...a clean and sober George Carlin....can anyone say Steven Wright? Look at what happened to Robin Williams after he sobered up. Rehab is for quitters. How many of the seven words you can't say on television does Fark filter? George; It's ok man. Just take care of yourself and all will be well. /One of the great writers and wordsmiths for sure shiat, Piss, fark, coont, Cocksucker, Motherfarker, and Tits heh. four. Well, shiat, that pisses me off. What's wrong with that stupid coont? That cocksucker already had an addiction problem, and he's messing around with Vicodin? Dumb motherfarker, he gets on my tits. Not really. Hope he gets well soon. This post is dedicated to the United States Supreme Court. Ack, seems I forgot an obvious one. What the fark? FARK ME! I just paid 340 bucks through a ticket broker for choice seats to his show here in San Antonio on January 14th. FAAAAAAAAAAAAAAAAARRRRRRRRRRRRKKKKKKKKKKKKKKKK You guys forgot the worst word of them all: RELIGION. Vintage Carlin. God bless him, and may ending his vicodin habit make him even more bitter (I know it did me!) quitter. Get better soon George. uhhh can you say riiiight? how much vicodin is considered "bad"? /me is on them after surgery, little yellow ones, 10/175 i think. Hang in there GC. One of the best comics alive. According to media accounts of the incident, Carlin's bit about "moronic" Vegas tourists touched off a bitter, profane exchange with members of the audience, including one woman who shouted "Stop degrading us." Audience= Dumb farks /there - got it over with whats the problem here? So what does Vicodin DO to you exactly? And for the record, anyone who knows anything about Carlin's material would have stayed the fark away from his show if they couldn't handle a little degradation. Get well soon, George. /submitted this with a funnier headline What a joke. People are such pussies these days. Drugs are for the weak. Religion is for the weak. Failure is for the weak. smart enough to stop a hobby before it becomes an addiction. red wine is good for your heart though right? fag Use them only when you are in pain. Stop using them when you start to feel better. Vicodin works great, especially when you have had major surgery, but it is addictive if abused. Good thing for me it upset my stomach quite a bit after a while. Made it easy to stop. heretik: So what does Vicodin DO to you exactly? pain killer A rather lackluster headline effort submitted. Comon mods, nobody took a better crack at it than this?
/couldn't do better //couldn't mod better Imagine the material he'll come up with by goin thru the rehab experience. At least he's smart. He's not a stupid celebrity, very realistic man. I submitted this with a better GOD-F%&*ing-D@#N Head-F%&*ING-Line I've only taken vicodin once. I took two and washed them down with a bottle of champagne. In an unexpected twist - that combo really impaired my driving ability. Get well Mr. Carlin. You kill me. What's the big deal? I've been hooked on opiates for years, and don't have a problem. /denial //dear god, denial Everybody and your your local pharmacist knows that vicodans are traditionally washed down with a few 12 oz tumblers of Captain Morgan's spiced rum. Sheesh. I guess he finally figured out what kind of wine goes well with Captain Crunch. Mealy-mouthed rat bastard can't even handle a handful of vicodan and a few bottles of wine...he's almost ready for my fire! Horray, lizard shiat, fark [image from jacneed.com too old to be available] Vicadin and a choice red really compliment each other nicely. I had my leg in a cast for nine weeks with a torn achilles tendon, and it hurt like a matterfarmer. Viva la Vicadin! Got me through it, don't touch it now, still snork a lot of red, though. And Carlin rules. Hope things go well for the old grouch. George Carlin ... says he has problem with wine and Vicodin Recently his problem has been spewing liberal hate at the right side of the political spectrum instead of telling jokes. hang in there George - you got soul buddy,,,, Vicodin is some good shiat. I had my appendix removed recently, and vicodin took away all of the pain. Still have 20 or so lying around here... shouldnt this be an "obvious" tag???? Carlin 3:16 - An idle mind is the devil's workshop. And the devil's name is Alzheimer's How does one get addicted to a painkiller that is scripted out? Don't they usually give you only enough to last maybe a week. I can't say that I'm surprised, but good luck George. We need you. That motherfarker better sober up by January 23rd. I got a ticket. HAHA! Now THAT'S funny! EviLincoln: How does one get addicted to a painkiller that is scripted out? Don't they usually give you only enough to last maybe a week. Oh hell no. I'm on very strong sleeping pills, but my benefit coverage means that I have to pay a "dispensing fee" every time I fill the script... the drugs are free, but the dispensing fee is not. My doctor (who is both an MD and a Doctor of Pharmacology) wrote the script for 60 doses worth to save me the extra expense. I appreciate it, but it's hardly protecting me from what 60 high-powered sleeping pills could do... It's more about how the doctor feels about you. Mine knows I'm not going to be grinding these things into powder. Personally, I think he should have retired about 10 years ago. I listened to him extensively in the 80's and early 90's (had several of his albums), and he was really funny. Then I saw him a few times live a few years after that. He was recycling jokes and, even if he did come up with something new, you saw it coming a mile away. I think he's riding the gravy train for all it's worth, and if he can make a few bucks, more power to him. However, if I had to listen to his recycled material as much as he does, I'd be hitting the vicodin and wine really hard too. Anyone else get the feeling like Better Than You At Everything is compensating for something? 
Maybe he's really Smaller Than Everyone /just sayin EviLincoln Either you tell your doc that you're still in a lot of pain, find one that will write you a prescription just 'cause, or buy it off the street somewhere. Maybe you should ask Rush? He might know for sure. Erm, sorry about the all bold, forgot to close the tag properly. Vicodin addiction is retarded. You can get codeine over the counter from behind the counter (Schedule V cough syrup), and acetaminophen is OTC. I'd be willing to take all that extra wine and Vicodin off his hands. Beer and Xanax just aren't the same. Shouldn't this be Obvious? All your vicodin are belong to us. I must look to scruffy to have docs fall for the old, "I'm still in pain" method. Last time I got Vics was when I cut open my hand and had to go to the emergency room. Got a scrip for like five. Didn't last till dinner. I guess I'm just not cut out for the hoity toity "prescription addiction". Just good old fashioned gold paint for me. I took Vicodin once and it didn't do shiat for me. Didn't even relieve any pain. But a fifth of Jim Beam works pretty well for a few hours. Never tried mixing them, though. Hope you get it all out of your system quick, George. And another thing... What sort of pansy becomes an alcoholic on wine? The harmful portion of Vicodin is the tylenol. You can take up to 4 grams or 4000 mg of tylenol daily before you start to do some damage to your liver. The amount of tylenol is what's after the slash (5/500 = 5mg hydrocodone/500 mg tylenol). /used to work in pharmacy //recommends it to those who like to get their ass chewed for 10 hours at a time The easy way to get vicodin is to develop sciatica, which is not that uncommon for someone his age. Hell, they will give it to you for a lot of reasons, really. And hydrocodone (active ingredient) is good stuff. You can fly real real high on it. the tippy toes of my cloven feet: Recently his problem has been spewing liberal hate at the right side of the political spectrum instead of telling jokes. Go away, troll. You're not welcome here. Go grief people on Counterstrike. Better yet, don't. Go sit in a dark room and sulk. Take care of yourself George. We need you! evilincoln Maybe it was the gold paint on your face from huffing that made the ER nervous about giving you more than 5 tabs. /just a thought /p.s. it's cheaper at home depot If you don't mind a totally screwed up sleep schedule, cloudy mind, and a high propensity towards addiction, then vicodin (that's how you spell it, BTW) is really good stuff. Only straight morphine is better (where "better" is obviously an extremely relative term). Nothing says "bored suburban whitey" like a liquor and pill buzz. </bored suburban whitey> the tippy toes of my cloven feet Recently his problem has been spewing liberal hate at the right side of the political spectrum instead of telling jokes. uhm... apparently you need to go watch his early-80s HBO specials. that's not new territory for him. at all. /vicodin and bong hits for me plz, mixing opiates & alcohol = bad I liked Carlin until he sold out. Sitcom, Mr. Conductor, Dodge Neon commercial, what else? I like that word Vicodin... Vicodin... Say it out loud! So that's why he's been acting like the Carlin of old lately. rrrrrrrrrrrrreal farked up on drugs....... /Hicks Nothing says "bored suburban whitey" like a liquor and pill buzz.] [image from fox.com too old to be available] "Just call me a bored suburban whitey, baby!" What allanhowls said. 
/bored //worse Well I guess istickmyneckout wants it all doesn't she? In other news.. Who cares!! Who doesn't have a drug or alcohol problem according to the people who do the finger pointing. So What! He takes pills and drinks liquor. If the intraweb were a game of King of the Hill then I would walk up and punch you straight in your moral highground. /Just say no. //NOT!!!!! Just a thought...why do we only hear about people entering rehab for booze, coke, pills, and smack (which is really just intense pills)? Why don't we ever hear about the celebs who go into rehab because they just can't handle their acid, X, Ketamine, and/or turpentine? And when was the last time we had a celebrity on a public PCP/GHB bender? Where's Nick Nolte when we need him? Hollywood is out of drug ideas. can't believe no one else has said this... ATTENTION WHORE! /obvious That's what I'm talkin' about allenhowls! We need more celebrities on dust taking shots at cops and stabbing themselves with scissors. Then maybe things coming out of Hollywood would be interesting and not garbage like Christmas with the Kranks. Do I need to bribe someone to get a link submitted? This was greenlit a good eight hours after it hit CNN, and there were surely at least 50 submissions of it i love george. my mom got me a calendar for christmas with "a quote a day" of his. Vicodin is lovely. i've had 6 surgeries: wisdom teeth, tonsils, carpal tunnel, cubital tunnel, ulnar tunnel (all the 'tunnels' are nerve problems in the hand/arm) and bunion, and they've given me that every time. works like a charm for me. except last time they gave me percocet first, and damn, that made me hallucinate. And all these years I just thought George was just pissed. Well it turns out he was pissed. Just, not in the way I thought... He'll be selling Canyoneros soon. A 67 year old mixing hydrocodone/tylenol and wine is begging for liver failure. Good luck with that, douchebag. [image from jacneed.com too old to be available] Wow... General Zod is looking like shiat these days... George was addicted to cocaine in the 80s. His receptor sites are wired for opiates now. If he was doing wine and vicodin together he would have to cut both to get over it. At least he recognizes there is a problem and is doing something about it. I'm reading his new book "When Will Jesus Bring The Pork Chops?". Funny stuff! Some of it is old jokes of his, but it's some twisted shiat. Now I know where he got his material from! ahhh! mr. carlin and i have the same tastes. however, i prefer vicodin and vodka though, or V'n'V as i like to call it. Yay! Pills and alcohol increase your chance of respiratory failure by two! Are these the same people that say pot should be illegal? Will his next bit be about Stupid Junkies? Maybe this explains why he has been unfunny since the mid-80's. At least if he had OD'd his family could have got a dead cat bounce out of his old recorded shiat. Now we all have to endure while he lingers on with his holier than thou shtick for another 25 years. -former joke writer, circa '84, you prick. After the denigration he laid on Rush last year for that other man's addiction! What a pair he must "think" he has to push himself forward on this one. As someone said earlier "media whore". I used to love the guy. But his shiat is either old and/or tired now. BTW is a plane ticket to France included in that treatment? Wasn't he one of the clowns trumpeting to the world that he would leave if "Commie boy" wasn't elected?
K/H D Why don't we ever hear about the celebs who go into rehab because they just can't handle their acid, X, Ketamine, and/or turpentine? And when was the last time we had a celebrity on a public PCP/GHB bender? Where's Nick Nolte when we need him? Acid's not addictive. Never done it, but I know plenty of people who have, and they all say it's something that you just stop doing after a while because it really starts to fark with your mind. It's just not habit-forming. You do it with a bunch of friends when the right time comes up, not something you do while you're bored and on the couch for a quick buzz, like say, alcohol or pills. Most people who drop acid fall into one of two categories: 1) They do it a few times, go batshiat cuckoo, and you never hear from them again, or 2) They do it a few hundred times until it gets old, then they stop and get on with their lives. Ketamine I don't know much about. PCP will make you jump out a plate glass window and kill yourself before you ever get farked up enough to enter rehab. Sniff glue/turpentine a few too many times, and yeah, that will kill you too. That's why you don't hear about them. And people who go into rehab for weed are just farking idiots. Weed is not addictive. Bob Saget said it best in Half Baked: "I used to suck dick for coke. You ever suck dick for some marijuana?" GuinnessDrinker people who go into rehab for weed are just farking idiots naaah. they're just people looking for a reduced sentence, typically. 2004-12-28 12:23:58 AM jonasborg Horray, lizard shiat, fark You neglected to add the rest! Rat shiat Bat shiat Dirty old twat 69 assholes, tie it in a knot Hooray, lizard shiat, fark! Why do I remember that? il Dottore the cocaine doesn't have anything to do with opiates. cocaine is made from coca plants (not cocoa like Hershey's) and opiates are made from poppy plants (not all types of poppies, don't go to the supermarket and expect to get a buzz from a poppy seed bagel) True they both act on receptors in the brain, but to my knowledge, they act on entirely different ones. Cocaine acts on more happy-releasing receptors. I think it opens the pathways for extra dopamine and endorphins. Opiates actually block pain receivers in the brain. When the body senses pain, it sends a signal to the pain receivers in the brain; opiates just cover these receivers up and allow for a higher pain tolerance. Both of these drugs do lots of other things, but this is just a basic rundown of what they do. \end rant \\does know WAY too much about drugs Hang in there! Other former drug addicts have recently gone on to win presidential elections. i think ketamine (special k) is an animal tranquilizer. I do know that it is found in animal hospitals and many of them have to have very good security because people break in all the time to get the drugs. So from this conclusion, ketamine is just a regular barbiturate. i think it can be sniffed and injected, possibly swallowed I blame Vegas. George Carlin's a prescription dope addict? Now there's a big surprise. GuinnessDrinker: or 3) You do hallucinogens once or twice a year as a young adult, blow your mind with social theory, and then take that information and rape reality for all it's worth. And do the opiates actually block the receptors in the sense that signals can't go through, or do the signals just get modified? Genocide1215: Yeah ketamine is akin to being on a nice percocet high, and then being borderline buzzed/drunk all in one.
/former psychonaut who wishes he had his biology 108 book on hand Vicodin's some nasty shiat, like any other opiate. I tore my ACL a few years ago and almost got addicted to it after only a week of use. Bleh... the tippy toes of my cloven feet Recently his problem has been spewing liberal hate at the right side of the political spectrum instead of telling jokes. He doesn't hate you, he just likes making fun of your stupidity. And really, there's quite a bit to make fun of. Yeah, so if Rush Limbaugh gets addicted it's OK to make fun of him despite the fact that he's on it for the skull splitting pain he got for the surgery to restore his hearing; amazingly, Carlin gets little grief and apparently was taking them recreationally as no reason was given other than 'I like them with wine'. /Gollum couldn't have done better I barely remember one Christmas thanks to Tylenol 3 (thanks to wisdom tooth removal) and many boxes of brandy beans. I have a vague memory of hanging some ornaments and lying on the couch. I ended up with a high fever after infecting my tooth sockets by packing them with graham crackers since the aforementioned memory wipe affected the aftercare instructions from the dental surgeon. Vicodin and wine is a pretty chillin' buzz Ah, vicodin, champagne, and prozac. The only time in my entire life I've ever had a hangover. Vicodin is also the only farking thing that'll get rid of my back pain, but the university health system doesn't like to prescribe hardcore painkillers, even when a girl goes back there four different times. And I used up what was left over from my wisdom teeth. Nurse: Why aren't you taking [enter another stupid drug I paid a copay for that was just as good as the Tylenol 1 I got from Canada OTC] anymore? Me: It didn't do anything for the pain? Nurse: Well, nothing is going to get rid of the pain entirely except hardcore narcotics. Me: Um... yeah?... So is he going to be investigated for doctor shopping as well? I've had 6 knee surgeries and been given vicodin after all of them. I just couldn't stand the way it made me feel. After the first couple, I just quit taking it and went to sleep. I just lived with the pain until it was gone. He doesn't hate you, he just likes making fun of your stupidity. And really, there's quite a bit to make fun of. That IS funny, especially in a thread about some stupid idiot druggie. I must be the only person that has never thought he was funny. I tend to prefer more intelligent humor, which he is supposedly known for, but his jokes do nothin for me. /just one little peon's opinion. Agree with you Satchel, though I thought he was good early on. Another washed up, has-been looking for more than his allotted 15 minutes. He'll probably start touring again when he gets out. Look for him at your local Holiday Inn lounge opening for some washed out retro 70's band! Rush Limbaugh was made fun of so much for HIS addiction because he was and still is a right-wing blowhard who insisted, nay, DEMANDED that addicts be sent to jail for life. Carlin's a comedian, and a popular one among FARK-minded people. That's the difference. Later. RJS What a boring move. The least he could have done was light himself on fire like Pryor. I'll take my chances on the 70's band..but I'll pass on the opening act. Find me at the bar. He's funny. But he is also an un-ironically self-righteous motherfarker who is not half as smart as he likes to tell you he is. / actually just wants to see if fark really filters fark to fark. fark.
// posts above led me to believe it does Vicodin (hydrocodone) is a member of the opiate family of drugs, along with heroin, morphine, Oxycontin (oxycodone) and Tylenol 3 (codeine). This family of drugs is strong Mu-receptor agonists. They produce a euphoric feeling of "high" by stimulating the pleasure-producing portions of your brain. This is the same feeling that is called "runner's high", caused by the body releasing endorphins to counteract pain from extended exertion. An OD of opiate drugs produces central nervous system shutdown. Narcan (naloxone) is a Mu antagonist. It will drop someone straight out of opiate OD into screaming withdrawal (intensely painful). Cocaine is a dopaminergic drug, that is, it causes your brain to release large amounts of dopamine. Dopamine is one of the neurotransmitters that affects movement and activity, as well as arousal level. An OD of cocaine will produce a heart attack or aneurysm. Narcan has no effect on a Cocaine high. They are entirely unrelated. Yeah, so if Rush Limbaugh gets addicted it's OK to make fun of him Yeah, it is, 'cause that right wing tard wants to execute drug addicts Chinese style and he became one himself. Carlin is accepting of those people, so people are accepting of him. does george carlin play counterstrike? Nicotine, valium, vicodin, marijuana, ecstasy and alcohol. /ca ca ca ca cocaine This burned out old hippy is the same asshat who is so quick to lecture other people about his version of morality. You and Bill Maher need to both clean up your act or STFU Everyone, go out and pick up George's favorite book. Realizing self doubt...or should you? I always thought Percocet (Oxycodone) was even more fun than Vicodin. Three years ago, in one ten day period, I had surgery on my shoulder for a torn rotator cuff, a kidney stone, and then a root canal. I was basically in some hospital or office for ten days straight, and it seemed like every thirty seconds someone was handing me a prescription for Percocet or Vicodin ES, I even ended up with Darvon somehow, not sure from where. I was in hella pain for the first week or so, but my, that was a pleasant month. Be careful folks, even the strongest can be tempted by this stuff. I get injured often enough that it seems like at least twice a year I spend a week whacked on Vicodin or something, and that's enough for me. One of the few advantages of having torn and broken about everything that can be. No more flying high on the plane? Just finished "When Will Jesus Bring The Pork Chops" today. Great read. Who are these assholes who keep slaggin' his newer stuff? I think he is one of the rare ones in that he gets better and better. He's gotten funnier as he's become more bitter. 2004-12-28 02:54:04 AM Satchel_Brown I must be the only person that has never thought he was funny. I'm with you on that one, Satchel_Brown. 2004-12-28 03:18:38 AM robsul82 Carlin's a comedian, and a popular one among FARK-minded people. Not this one ... although, I'd probably be offended if someone called me "FARK-minded." :) MethodsUnsound Recently his problem has been spewing liberal hate at the right side of the political spectrum instead of telling jokes. uhm... apparently you need to go watch his early-80s HBO specials. that's not new territory for him. at all. Wake 'n Bake He doesn't hate you, he just likes making fun of your stupidity. And really, there's quite a bit to make fun of.
As a combo answer to both, no, he used to just make fun of the right's excesses and could even poke fun at the left's, although he didn't see them as often because he was leftist, but as with many others he has turned bitter and angry the past few years (perhaps the current topic might have something to do with that). In fact I avoided his show in Columbia despite liking much of his comedy because I knew he would turn bitterly political at some point and I wasn't paying a premium to be insulted. Galwran fark you, I'm getting in the plane! /thinks a fair review of Carlin's work would show arrows slung at both liberals and conservatives. Yeah, so if Rush Limbaugh gets addicted it's OK to make fun of him despite the fact that he's on it for the skull splitting pain he got for the surgery to restore his hearing; amazingly, Carlin gets little grief and apparently was taking them recreationally as no reason was given other than 'I like them with wine'. Actually, it was from back pain - but your point is well made. And completely ignored by ignorant leftists. Limbaugh got addicted through legitimate use for pain while Carlin got hooked while using them for a buzz. The stupid members of the party of Michael Moore, of course, (intentionally) don't get it. Facing up to the difference would make their little pin-heads asplode. Well, Tinian, I think much of the abuse being thrown towards Limbaugh is due to his hardline stance against illegal drug users. The fact that he takes his medication use beyond a simple prescription and into addiction requires less-than-legal means of purchasing the drugs -- making him much like those he speaks out against on a regular basis. I could be wrong, though, since my fellow nutters can be hard to understand at times. Tinian Behold the power of rationalization! We libs attack Rush because for years and years he railed on and on about weak willed addicts being coddled in rehab clinics and by the system. Libs hate hypocrisy, even from our own. And now that your party has been in power for four years, how's that working out? Icebox man = funny. Vicodin = useless. 800 mg Ibuprofen = great for really nasty, short term pain (sprain, swelling) I = going to farking bed. Tinian, Are you trolling or a moron? 2004-12-28 03:18:38 AM robsul82 Rush Limbaugh was made fun of so much for HIS addiction because he was and still is a right-wing blowhard who insisted, nay, DEMANDED that addicts be sent to jail for life. Carlin's a comedian, and a popular one among FARK-minded people. That's the difference. Later. RJS I think the driving opinion force for the acceptance of Carlin in this instance and ridicule of Limbaugh for the same offence is the hypocrisy involved. Because Limbaugh railed against addicts for years and became one, he opens himself to the criticism. It's comparable to Jesse Jackson having an illegitimate love child at the same time he was "counseling" Clinton about his morals. Too bad there's no rehab center to treat people who haven't been funny in a decade It's a good thing I'm not the only one who thinks the 'burbs is boring. Where do these people come from? You tell a joke and they look at you like your frenulum just got distended. I think the Limbaugh case is also exacerbated by his identification with the smug, moral-relativist Far Right and its putative Moral Absolutes. If we were to judge Rush by his own standards, we'd incarcerate him indefinitely and tell him to man up and accept Responsibility. George Carlin taking drugs? No way. I don't believe it.
Also, Rush sending his cleaning lady out to procure his drugs was highly uncool. Heh. Love him or hate him, he's still funnier than most of you jokers in here put together. He's a grown man, if he wants to abuse prescription painkillers and wine in the comfort of his own private home, I'm not going to give him grief. If he wants to tell me that he's going clean, that's good too. I hope Rush gets over his addiction too, and maybe his experience will temper his views about drug addiction and drug addicts. At the very least, no one is going to let him forget it any time soon, which is probably a good thing. What a farking hypocrite Carlin is. I used to think this guy was funny but now when I look at him I see a skinny Michael Moore. Most overrated comic EVER. The guy hasn't been funny in 25 years. ImDracula: FARK ME! I just paid 340 bucks through a ticket broker for choice seats to his show here in San Antonio on January 14th. FAAAAAAAAAAAAAAAAARRRRRRRRRRRRKKKKKKKKKKKKKKKK My wife and I got seats A1 and A2 in Austin on the 15th as our Christmas gift to each other, but they were only 98 bucks total. It sucks, but I hope George gets better. I wouldn't wish that problem on anyone. Know this is true, people: ANYONE can become addicted to drugs. All they need to do is get ahold of the one that does it for them. And for me, it's Vicodin. The worst thing about Vicodin is that it delivers what it promises, like all drugs. I've had a few bouts with it, and currently am once again weaning myself off it. It's so easy to get online (you can get up to 90 at a pop, next day Fedex), and it makes for a great, relaxing high when combined with a few beers. It only takes me about 10 days to get hooked on the stuff, and I rarely ever even take more than 25mg per day, always in the evening (I've heard of people taking 150mg or more per day, which is 15 tabs at 10mg per!!!) 15mg of hydrocodone is roughly equal to taking 10mg of morphine, so they don't call it 'artificial heroin' for nothing. I had never had an addiction before, and I don't consider my Vicodin experience a serious addiction, but let me tell you, if kicking a 25mg per day habit is uncomfortable for me, which it is, I can't imagine how ill a person who's doing 6 or 7 times that has to become during the rehab stage. Given all this, I don't dare judge Limbaugh or Carlin, or anyone else who's gone through such withdrawal. he's great, i love his new material. last time he was on the tonight show he did this bit about getting on an airplane. Good thing he doesn't just recycle bits he did from 25 years ago. everybody is addicted to something, now is a fine time for self evaluation, what changes does your life need? Quick let's put him in jail for 8-25 years for possession of a class A controlled substance. Oh wait I forgot, he's not poor or black or a white teenager and smoking lots of weed Impudent Domain: This burned out old hippy is the same asshat who is so quick to lecture other people about his version of morality. He doesn't lecture other people, generally. He makes fun of them. And he tells you up front that he doesn't give a shiat about you, and really doesn't care if you stop doing the things he makes fun of - he'll make fun of whatever you decide to do instead. Fark you. /The very existence of flame-throwers proves that some time, somewhere, someone said to themselves, "You know, I want to set those people over there on fire, but I'm just not close enough to get the job done." Uh...
I find people funny and I don't care whether they are for the left or for the right (I'm actually a libertarian, and if someone makes fun of libertarian views in a great way, I'll be more than glad to join in with the laughter) You have to be quite retarded to hate a comedian simply because he has different views than you do about politics... Are you that unsure of your position that a few jokes throw you into a fit? Pathetic Sloth_DC: He doesn't lecture other people, generally. Have you listened to Carlin lately? It's boring, rambling lectures connected together with a few funny bits. The old recordings are much better. DistendedPendulusFrenulum: I think the Limbaugh case is also exacerbated by his identification with the smug, moral-relativist Far Right and its putative Moral Absolutes. Ah, you mean the stereotype the simple minded, knee jerk left made up. George Carlin is a friggin comedian, not some fatty radio talk show host trying to preach to people what is right and wrong. He makes jokes. It's not serious people. People listen to him to get a laugh, that's it. How this threatens people on this board and makes them attack him is beyond me. Angry farking neocons maybe. Hope you get well George. [image] Hang in there, buddy! You will not hear me say: bottom line, game plan, role model, scenario, or hopefully. I will not kick back, mellow out, or be on a roll. I will not go for it and I will not check it out; I don't even know what it is. And when I leave here I definitely will not boogie. I promise not to refer to anyone as a class act, a beautiful person or a happy camper. I will also not be saying "what a guy." And you will not hear me refer to anyone's lifestyle. If you want to know what a moronic word "lifestyle" is, all you have to do is realize that in a technical sense, Attila the Hun had an active outdoor lifestyle. I will also not be saying any cute things like "moi." And I will not use the French adverb "très" to modify any English adjectives. Such as "très awesome," "très gnarly," "très fabou," "très intense," or "très out-of-sight." I will not say concept when I mean idea. I will not say impacted when I mean affected. There will be no hands-on state-of-the-art networking. We will not maximize, prioritize, or finalize...and we definitely will not interface. There will also...there will also be no new-age lingo spoken here tonight. No support-group jargon from the human potential movement. For instance, I will not share anything with you. I will not relate to you and you will not identify with me. I will give you no input, and I will expect no feedback. This will not be a learning experience, nor will it be a growth period. There'll be no sharing, no caring, no birthing, no bonding, no parenting, no nurturing. We will not establish a relationship, we will not have any meaningful dialogue and we definitely will not spend any quality time. We will not be supportive of one another, so that we can get in touch with our feelings in order to feel good about ourselves. And if you're one of those people who needs a little space...please...go the fark outside. TightyWhitey: George Carlin is a friggin comedian, not some fatty radio talk show host trying to preach to people what is right and wrong. I take it you haven't listened to Carlin lately. It also takes a real childish person to try to attack someone based on their weight, and it's especially stupid when it's not even true anymore.
The one bad thing about [image] is that they play too damn much of [image] on their comedy channels He's funny in small doses but is too damned sanctimonious. Especially considering the film decisions he's made, especially [image] TightyWhitey: George Carlin is a friggin comedian, not some fatty radio talk show host trying to preach to people what is right and wrong. He makes jokes. It's not serious people. People listen to him to get a laugh, that's it. How this threatens people on this board and makes them attack him is beyond me. Angry farking neocons maybe. George Carlin preaches more than most televangelists. Major Thomb I take it you haven't listened to Carlin lately. It also takes a real childish person to try to attack someone based on their weight, and it's especially stupid when it's not even true anymore. Not sure what you mean, I have listened to Carlin a lot lately, and still love his comedy. However, Limbaugh is not a comedian. He's a hypocritical talk show host who attacks lots of people, and he's serious about it. Sorry, but when you attack and judge other people as extensively as he has, it's open season buddy. Thus my fatty "attack" on him. And, while not as fat as before, he's still covered in blubber. Now let's stop wasting space on this Carlin thread by mentioning this fat ass druggy loser any more. When I busted my hand up, the doc gave me a script for 40x 500mg hydrocodone. (I do not recall a '/' before the 500, but perhaps it was 5/500...) He also gave me a refill. Okay, so my hand is busted, but the bone was re-set without needing pins put in or anything, so it was only a bit of pain. Why did I need 80x 5/500mg hydrocodone? First of all, if I took two (the bottle said "Take 1 or 2 depending on pain") they would fark me up so bad I could barely breathe. If I took 1, it was a nice ride but I could still function. I did not like to mix those with booze though. Anyway, it sure made for a lot of fun nights. I also realized that I was starting to enjoy the Vics not just for pain. I told myself that I would only take one per week until the script was finished, which I did unless I had real pain like a sprained ankle or something to deal with. After that, I did miss them. They were fun, but I knew that I was becoming addicted and I had to stop, so I did. Ah, those were the days. Wow, two big mouthed extremists from the opposite extremes fueled by snappy thought provoking pain chillers. Rush Limbaugh and George Carlin. TightyWhitey: Not sure what you mean, I have listened to Carlin a lot lately, and still love his comedy. However, Limbaugh is not a comedian. He's a hypocritical talk show host who attacks lots of people, and he's serious about it. I don't think you've listened to either recently. GC has turned into a mean spirited old man who does very little except attack people, and Rush has just got boring always talking about Clinton. They both used to be hilarious. If you have some pain pills, always save some for when you are feeling better.
Oh and I don't see Carlin being investigated for illegal drugs and doctor shopping and spending tens of thousands on illegally obtained pills either. So much for the GOP's moral high ground. [image] I remember when Vicodin actually worked for my pain. For the past eight months I've been on Percocet four times a day due to severe chronic pain from rheumatoid arthritis. I will need to take it for the rest of my life. Hearing about people abusing pain medication drives me nuts. My "best friend" and former roommate (one of a very few people trusted with the knowledge of my pain meds) stole almost a third of a new script I had just filled. Her farkbuddy wanted some for "party favors" for a few buddies at school. When I said there was no way in hell he would get near my medication, she stole it while I was asleep. Biatch. I have a great doctor who is extremely careful about monitoring his patients, esp. when they are on super-strong medication. It amazes me that any physician would not. I don't know why George Carlin was on pain meds. I assume he got hooked after using the pills for a medical issue but, of course, I could be wrong. I hope that all works out well for him and that he finds a doctor who will be more judicious in prescribing drugs. Remember Jonathan Winters. I've got goosebumps alllll overmyboooody. People thought that was a riot. In 1967. Ditto George Carlin. It's amazing that 60's washup is still doing his act. George Carlin is the left's Rush Limbaugh in a manner of speaking. He's always running his mouth about social issues. Colgate, you're ignorant. Carlin isn't the "left's" anything. He doesn't get caught up by this new left vs. right hypnosis that seems to work so well on simpletons like yourself. Hey! How bout a George Carlin quote! "I don't vote... This country was bought and sold YEARS ago. The shiat they shuffle around every few years *fart sound while making masturbation motion* MEANINGLESS!..." But besides that... friendinpa has the best post in this forum. "I can never remember. Is it white or red with Vicodan?" Here colgate, this is funny. Chain: Oh and I don't see Carlin being investigated for illegal drugs and doctor shopping and spending tens of thousands on illegally obtained pills either. So much for the GOP's moral high ground I don't think you made the point you think you did. Doesn't that indicate that some liberal investigators with no ethics went after him outside of normal procedure due to his politics? 3 - Rehab - Denis Leary "We did it all. We did whatever we could get our hands on back in the seventies. We did farking handfuls of mushrooms, pills, Ludes, coke. Whatever it was, we just farking swallowed it, ok? That's what we did! People go, "Well why didn't you go into rehab?" We didn't have rehab back in the seventies. Back in the seventies rehab meant you'd stop doing coke, but you kept smoking pot and drinking for a couple more weeks. You know? "Yeah, give me a case of Budweiser and an ounce. I gotta slow down! Jesus Christ! I'm outta control. Look at the size of my pants for Christ's sake!" Because that's the big thing now. Rehab is the big farking secret now. Isn't it, huh? Yeah, you can do whatever you want. Just go into rehab and solve your problems. Isn't that the big celebrity thing? That's what I'm gonna do. Yeah, I'm gonna get famous. Then when my career starts to flag, I'm gonna go on a three month farking bender. Ok?
Coke, and farking pot, and smack, and farking booze, and drive over people, and beat up my kids, go into therapy, go into rehab, come outta rehab, be on the cover of People magazine, "Sorry! I farked up!" That's what they do, man. They go into rehab and they come out and they blame everybody except themselves. They blame their parents, right? That's the way. Everybody comes from a dysfunctional family all of the sudden, huh? Roseanne Barr comes from a dysfunctional family? Not Roseanne! She seems so normal to me! The Jacksons were dysfunctional!? Not the Jacksons! These people give each other new heads for Christmas for Christ's sake! I am sick and tired of hearing that farking speech. You know? These people come out of rehab they always have the same story. "Well you know, I became an alcoholic because my parents didn't love me enough. And then I became a junkie because my parents didn't love me enough. And I went into hypnosis and therapy and I found out that parents used to hit me." skull splitting pain he got for the surgery to restore his hearing Look, a side effect of using pain killers is sudden hearing loss. I think Rush started using pain killers because of back pain. He obviously used them a lot, but who cares? Anyone who thinks they can get political mileage out of pointing to celebrity drug use is out of their mind. Rush L. is a radio personality. If conservatism wasn't selling he wouldn't be conservative. Before the right kicked into gear he was playing records and telling lame jokes. Sudden hearing loss = suspicion of using pain killers Testicular cancer in perfectly healthy athlete = suspicion of using steroids (sorry Lance) I like vicodin and Heineken myself. Plus they sort of rhyme. /really needs something to do He can be treated for the addiction, but what do you do with the fact that he is washed up and not funny? Major Thomb: Chain: Oh and I don't see Carlin being investigated for illegal drugs and doctor shopping and spending tens of thousands on illegally obtained pills either. So much for the GOP's moral high ground I don't think you made the point you think you did. Doesn't that indicate that some liberal investigators with no ethics went after him outside of normal procedure due to his politics? Liberal investigators? His maid broke the story to the National Enquirer. (a publication that loves to run stories about the Clintons) When the story broke the D.A. of the county he lives in looked into it and found that Rush had made several large cash withdrawals (all just under the $10,000 federal notification limit btw) The maid even wore a wire to a couple of the drug deals she made for him. She even received $200,000 in hush money from him. Liberal conspiracy indeed. If it was a liberal conspiracy why is the ACLU defending him? A group he has ripped for years is now his best buddy. Let's just go ahead and use Rush's own words against him. "Let's all admit something. There's nothing good about drug use. And we have laws against selling drugs, pushing drugs, using drugs, importing drugs. And so if people are violating the law by doing drugs, they ought to be accused and they ought to be convicted and they ought to be sent up." Now we wouldn't want such an icon of American values receiving special treatment now would we? That would be hypocritical wouldn't it? "I figured out years ago that the human species is totally farked and has been for a long time. I also know that the sick, media-consumer culture in America continues to make this so-called problem worse.
But the trick, folks, is not to give a fark. Like me. I really don't care. I stopped worrying about all this temporal bullshiat a long time ago. It's meaningless. (See the preface of "Braindroppings.") Another problem I have with "Paradox" is that the ideas are all expressed in a sort of pseudo-spiritual, New-Age-y, "Gee-whiz-can't-we-do-better-than-this" tone of voice. It's not only bad prose and poetry, it's weak philosophy. I hope I never sound like that." George Carlin Oh yeah... It must really suck to live in Wichita.. or anywhere in Kansas for that matter. Sorry man. George Carlin voluntarily enters drug, alcohol rehab program, says he has problem with wine and Vicodin /would add hasn't been funny in decades to his ever growing list of 'problems'... I love George, don't get me wrong - but he sure does rail against people whom he perceives as "weak" in his routine and writing anymore these days - which I think sheds some light on his curious quote about how he is dealing with it before it gets out of hand. Substance abuse is EVERY bit as much a spiritual/emotional problem as it is a body chemistry problem. Weakness on two levels. Sorry, George. Sucks to be a hypocrite, don't it? /still love him //not an enabler, though ;) Triaxis I used to think this guy was funny but now when I look at him I see a skinny Michael Moore. Compliment? At least we know why he hasn't been funny lately. Carlin has produced some of the best comedy over the years, and I hope he can get his edge back. Good luck, man. QQue George Carlin voluntarily enters drug, alcohol rehab program, says he has problem with wine and Vicodin /would add hasn't been funny in decades to his ever growing list of 'problems'... Brain Droppings was hysterical. You're an idiot. I'll never forget my first vicodin experience. I hurt myself and the doctor told me that because I'm a pretty big guy I could take 2 to start. So I took 2, was still in pain. Felt nothing. So 1/2 hour later I took 2 more. Still nothing. No pain relief, no woozy feeling, nothing. So an hour later I took 3 more. Yup. 7 vicodin in an hour and a half. It made me throw up a little. Since libs like Carlin set no personal or moral standards for themselves, they have no ideal to live up to. It's pretty easy going through life making fun of others who strive to live better and fail. Maybe Carlin should have listened to some of those religious people he makes fun of and stayed away from that stuff to begin with.
I guess he now realizes they can cause problems he can't deal with on his own. and you elected a cokehead to be president - twice! nice troll though. Junkie! I'd damn him for setting such a dangerous example for kids too, but luckily kids don't pay any attention to the bitter old hasbeen anyway. and you elected a cokehead to be president - twice! And there it is right there. Do you think Bush regrets his Coke use? If you asked him I'd have to guess that Bush would say that those people who told him to stay away from drugs, "Just say No." were right. Do you think Carlin ever made fun of the Reagan era "Just say No" program? Do you think Carlin now wishes he just said "No." If you're going to reference Bush as a cokehead for something he's given up then you should now refer to Carlin as a 'wino'. Otherwise, you MIKE are what is referred to by the left as a hypocrite. I read through Carlin's recent book I got for Christmas, and it was pretty typical of him; he makes fun of all sorts of idiots and behaviors he finds appalling, while adding in a bunch of absurdist nanofiction stories and ads for fake companies. Sometimes it's pointless, sometimes it's hilarious, sometimes it's all too true. So what if he's leftist? Think about this for a minute. What sort of people do the right have as "celebrities"? Ann Coulter and Rush, right? And the left? Al Franken, George Carlin. As homework, compare those two pairs of people. I'm no democrat sympathizer, and often lean conservative (or more accurately, libertarian) on many issues, but the neocons disgust me. I have no problem referring to Carlin as a pill popping wino. Don't ever call me a hypocrite again though. Your fundie beliefs are the most hypocritical bag of hammers to ever hit the campaign. And I don't think Bush regrets his coke use, Karl Rove did a good enough job of spinning that right off the dinner table. I'd damn him for setting such a dangerous example for kids too, but luckily kids don't pay any attention to the bitter old hasbeen anyway. You'd damn someone for setting a dangerous example? How feeble and irresponsible are you? Carlin is a leftist, but all entertainers are for the most part. Carlin is not a liberal, democrat, whiner like Franken or most celebrities. He just points out hypocrisy. In his view all Republicans are NeoCons who are greedy, unenlightened fear mongers that want to "jail all blue collar criminals to make life safe for all the white-collar criminals." All Democrats are whiners who want the government to take care of them and society, but never actually inconvenience them. Such as funding half-way houses and clinics, but not allowing them to be put in their upper-middle class neighborhoods (NIMBY - Not In My BackYard). Also he points out environmentalism as 'more unenlightened self-interest'. A ruse to get the government to 'save the planet' but really all most of its backers want is cleaner crap for themselves. Both are exaggerations of facts, but a major part of comedy is exaggeration. In the end, Carlin is much more of a free-thinker than most. Which is why he's always had trouble getting sweetheart TV, movie, book deals for most of his career. Franken, O'Reilly, Limbaugh, Maher, etc. all cave to corporate pressure and put out the most ridiculous tripe because idiots buy it. Carlin makes people on both sides think, and appeals to many. He rubs people the wrong way so no one wants to deal with him, but who can ignore his selling power now. MrMustard: Lance took steroids?
That's dumb, the last thing a cyclist needs is to add bulk. It's more likely he's on EPO, but that's just conjecture. And the Florida AG is raiding his medical records to fish for violations of... oh, Carlin's not a Conservative? Never mind, won't happen. Rockdrummer And the Florida AG is raiding his medical records to fish for violations of... oh, Carlin's not a Conservative? Never mind, won't happen. Oh no, the poor persecuted right-wingers! Just for the record - I had the distinct displeasure of viewing George Carlin at the MGM. While I find his HBO specials hil-farking-larious, his talk about scat & suicide didn't do much for me. The whole purpose of his show, IMHO, was to put on such a disgusting act that it would accomplish the following things: 1. Ruin the experience for the show goer - because said show goer is a Jackass for going to Vegas. I and my ex-fiancee sat through about half the show before we decided he wasn't being "edgy but funny" - he was being "bitter and disgusting". 2. Justify his hatred of humanity by spewing all of this disgusting, hateful crap and watching all the drunks who don't notice that he *hates* them laugh as he insults them by trying his damnedest to ruin their night and waste the $100's of dollars in tickets that people spent. And this wasn't the first time I've seen something like this happen. Whoopi Goldberg pulled the same stunt when I saw her in Vegas as a teen. I think that liberal comedians shouldn't play Vegas if they *hate* anyone who chooses to spend time in the city. Granted it's a consumer whore mecca, but if your livelihood depends on entertaining people - don't go out of your way to be a damned jackass and ruin someone's night. What's a good prescription drug which mimics the effects of cocaine? Well, that explains some things, but not enough to feel sorry for the arsehole. /He should have moved to Canada, France preferably. Templar503 Well, that explains some things, but not enough to feel sorry for the arsehole. /He should have moved to Canada, France preferably. Uhhmm, why? He hurt your poor little feelings? I bet Al Franken is making fun of Carlin on Air America today...just like he regularly makes painkiller jokes about Rush. Oh, wait...that's right....Franken is a fark.i.n.g hypocrite. Vicodin addiction? That's a pretty weak physical addiction. I can't imagine being able to eat enough vics at a time to get a physical dependency with the APAP and all keeping you from eating big amounts (without sickness/death). Norco's would be much easier, but still not a big physical problem. Hydro is weak and the WD is mostly in your head unless you CW extract it..then you can get enough to get yourself screwed. Of course he could just be talking about a psychological addiction... Carlin is a coastie with a superiority complex. That is the modern definition of the spoiled American rebel without a cause, i.e. a leftist. Colgate Carlin is a coastie with a superiority complex. That is the modern definition of the spoiled American rebel without a cause, i.e. a leftist. So many poor little reactionaries all upset by a comedian. This is the kind of shiat that happens when POT is hard to find. fark. Hope you get better soon George... start a grow room in an un-used bathroom in your house when you get out. unlike rush, carlin never claimed to be anything that he wasn't. thus, franken isn't talking shiat about him cuz he's not a hypocrite. besides, carlin's the first to admit to his own faults.
jesus you people really are pretty rabid tho, the man's a comedian thincwick, you're right, beer and xanax aren't the same...they're much better. vicodin are for pussies I have no problem referring to Carlin as a pill popping wino. Don't ever call me a hypocrite again though The point is, you must refer to Carlin as a pill popping wino out of context. I had not even mentioned Bush but you brought him up as a cokehead. So, on the next Carlin thread, not even a Carlin thread, maybe just a drug thread, you need to bring up Carlin as a pill popping wino otherwise you have indeed earned the title.... hypocrite. Wallow in it, enjoy it, I bet you'll earn it. As far as being a fundie. I'll remember that the next time someone makes a comment about my mohawk. ImDracula FARK ME! I just paid 340 bucks through a ticket broker for choice seats to his show here in San Antonio on January 14th. FAAAAAAAAAAAAAAAAARRRRRRRRRRRRKKKKKKKKKKKKKKKK Dood- you're not missing a thing. I saw him in Vegas a few months ago. I was expecting great things cuz I love the guy. His routine has become REALLY macabre...and not in a funny way. I mean, he's always been out there but his latest tirades about how cool suicide is and how he hates the world and everyone that lives in it...is alienating. I know what you're thinking: "yeah but that's why I love 'im". believe me...he's wayyyyy out there these days. My friend and I kept looking at each other in disbelief..waiting to laugh. Never happened. We scarfed down some Nathan's real quick, surrendered our MGM room and drove back to San Diego right after the show. Garbage! At least I know why now...he must have been all doped up on Grandpa medicine. hmmm...i smell a rat. one can't do too much vicodin and drink too much wine unless he is getting some form of vicodin without Tylenol in it. His liver would fall out in a week if not, or he would need a full-time chemist to separate them for him. Also, as to the earlier question of how a doctor could let someone get addicted, it ain't hard, and (*gasp*) it isn't always bad. Opiate addiction is a lesser problem than some forms of chronic pain. I have been a pain patient for years and years now. I have had a spinal fusion among other pleasant procedures to try to deal with the unpleasantness that life can often be for me, but it didn't solve my problems past a certain point. They make opium-based patches that work nicely, pumping dope in ya 24 hours a day. Highly addicting, but better than agony. My pain is nerve pain, so opiates don't do a whole ton, but they "take the edge off" shall we say. Stopping vicodin is a biatch. Really bad. I have before. I don't know how I will again without formal detox. It is much worse than quitting smoking. Codeine was easy by comparison, though I imagine Oxycodone is nearly impossible, which is why I haven't taken it (I might be the only person in America to turn down a doctor's offer to prescribe) SO much hate from the righties. It makes me glad Earnhardt smacked into the wall. I like watching you righties pay $40 for an official Dale Earnhardt "Heroic" plaque to hang in your office, trailer, or pickup truck. What makes Carlin better than Limbaugh, in regards to the drug thing, is that Carlin entered rehab by choice and Limbaugh was shamed into it after being caught committing several felonies- After spending the bulk of his radio shows talking about how we should be more ruthless when we punish drug users. Hillbilly heroin or Vicodin... Hmmm... /Wanna see a quick Dale Earnhardt impression? Look at the wall!
//Going to trailer park hell for that one, I bet! UMMMMM..... O... K..? I just posted this to ensure that the pea-brained expression of idiocy above was NOT the last word on this thread. /loves it when haters call out other haters I wonder, can anyone give me a timeline on how Carlin's show/approach has changed from "gentle but mocking" (from when I first started listening to him in the 70s) to the pure viciousness I sometimes hear on iRadio.com? iRadio plays a fair bit of Carlin, but they don't mention the date of any of the concerts. Some of the "gentler" stuff is clearly a lot newer than his first couple of albums. Then other shows are just this snarling, angry bastard! I prefer the gentler Carlin (don't get me wrong, "snarling angry" is good when it works, à la Sam Kinison). I also remember hearing Carlin has had at least one heart attack. Maybe more?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16364017128944397, "perplexity": 5443.085111417949}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863980.55/warc/CC-MAIN-20180621001211-20180621021211-00480.warc.gz"}
http://cjauvin.blogspot.ca/2013_10_01_archive.html
# Autotelic Computing

## Wednesday, October 16, 2013

### Neural Network 101

As this seems to be becoming a series of related posts, see also:

1. Linear Regression 101
2. Logistic Regression 101
3. Softmax Regression 101
4. Neural Network 101

## A Cloud of Points

As powerful as it is, a logistic regression model will only ever be able to solve linearly separable problems. No need to even try on problems like the one below (even though it's quite easy), because we saw that its $\theta$ parameters correspond directly to the parameters of a linear equation in general form.

In [1]:

rcParams['figure.figsize'] = 12, 8

def dataset(n):
    xy = random.uniform(low=-1, high=1, size=(n, 2))
    classes = []
    for p in xy:
        d = linalg.norm(p - [0, 0])
        classes.append(0 if d < 0.75 else 1)
    return column_stack([xy, classes])

random.seed(0)
points = dataset(1000)
class0 = points[where(points[:,2] == 0)]
class1 = points[where(points[:,2] == 1)]
plot(class0[:,0], class0[:,1], 'bo')
plot(class1[:,0], class1[:,1], 'r^');

## Introducing Non-Linearity

Although it's not 100% exact, one way to understand the classic neural network (or multilayer perceptron, as it is also sometimes called) is as an extension of the logistic regression model. If we recast our terminology in terms of neurons, layers and weights (where a neuron performs two successive computations: (1) the dot product of the incoming weights and the vector from the layer below it, and (2) the logistic squashing of the result), then LR can be seen as the top part (in the dashed rectangle) of the neural network schema shown below. The bottom part, an additional layer of weights and logistic units, is what introduces non-linearity in the model, and thus allows it to solve problems like the one above.

In [2]:

from IPython.display import Image
Image(filename='NN.png')

Out[2]: [figure: schema of the two-layer neural network, with the logistic regression part in a dashed rectangle]

## Training to Get Better

In this simple example, since our only output neuron is logistic, we can interpret its value as a probability (of being a member of class 0 or 1), and once again endow the model with probabilistic semantics (although it is not mandatory). We can again use the negative log-likelihood as our error function

$NLL = - \sum_{i}^{n} t^{(i)} \log y^{(i)} + (1 - t^{(i)}) \log (1 - y^{(i)})$

where $y^{(i)}$ corresponds to the output of the neural network when fed with the $i$-th training example, and $t^{(i)}$ the target, its real class membership (0 or 1).
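To get a feel for this error function, here is a quick numeric check on a few made-up predictions (hypothetical values, added here for illustration): confident correct predictions contribute almost nothing, while confident mistakes are penalized heavily.

import numpy as np
y = np.array([0.9, 0.6, 0.2])  # hypothetical network outputs for three examples
t = np.array([1, 1, 0])        # their true class memberships
nll = -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))
print(nll)                     # ~0.84; would approach 0 as the predictions approach the targets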
The training of a neural network is a little more involved, because the gradient of the error function needs to be back-propagated in successive computations:

$\frac{\partial NLL}{\partial y} = y - t$

$\frac{\partial NLL}{\partial net_y} = y \cdot (1 - y) \cdot \frac{\partial NLL}{\partial y}$

$\frac{\partial NLL}{\partial h} = \frac{\partial NLL}{\partial net_y} \cdot W_{hy}$

$\frac{\partial NLL}{\partial W_{hy}} = \frac{\partial NLL}{\partial net_y} \cdot h$

$\frac{\partial NLL}{\partial net_h} = h \cdot (1 - h) \cdot \frac{\partial NLL}{\partial h}$

$\frac{\partial NLL}{\partial W_{xh}} = \frac{\partial NLL}{\partial net_h} \cdot x$

These finally yield the two weight updating formulas:

$W_{hy} := W_{hy} - \alpha \frac{\partial NLL}{\partial W_{hy}}$

$W_{xh} := W_{xh} - \alpha \frac{\partial NLL}{\partial W_{xh}}$

In [6]:

def logistic(x):
    return 1 / (1 + exp(-x))

class NeuralNetwork:

    def __init__(self, n_hiddens, alpha=0.01):
        self.W_xh = random.uniform(size=(2 + 1, n_hiddens))   # input-to-hidden weights (with bias)
        self.W_hy = random.uniform(size=(n_hiddens + 1, 1))   # hidden-to-output weights (with bias)
        self.n_hiddens = n_hiddens
        self.alpha = alpha                                    # learning rate

    def train(self, x, t, n_iters=1000):
        self.n = len(x)
        self.x = column_stack((ones(self.n), x))  # add input bias
        self.t = t
        errors = []
        for i in range(n_iters):
            self.forward()
            self.backward()
            nll = -sum(self.t * log(self.y) + (1 - self.t) * log(1 - self.y)) / self.n
            classif = sum((np.round(self.y) != self.t).astype(int))
            errors.append((nll, classif))
        return errors

    def forward(self):
        self.h = column_stack((ones(self.n),  # add hidden bias
                               logistic(dot(self.x, self.W_xh))))
        self.y = logistic(dot(self.h, self.W_hy))

    def backward(self):
        # the backprop steps, in the same order as the equations above
        d_nll_d_y = (self.y - self.t)
        d_nll_d_net_y = self.y * (1 - self.y) * d_nll_d_y
        d_nll_d_h = dot(d_nll_d_net_y, self.W_hy.T)
        d_nll_d_W_hy = dot(self.h.T, d_nll_d_net_y)
        d_nll_d_net_h = self.h[:,1:] * (1 - self.h[:,1:]) * d_nll_d_h[:,1:]
        d_nll_d_W_xh = dot(self.x.T, d_nll_d_net_h)
        # gradient descent updates
        self.W_hy -= self.alpha * d_nll_d_W_hy
        self.W_xh -= self.alpha * d_nll_d_W_xh

The number of hidden units controls the representational power of the model. If it is too low, it will not be able to capture the complexity of the training data, as the example below shows.

In [14]:

rcParams['figure.figsize'] = 12, 4

nn = NeuralNetwork(2)
errors = nn.train(points[:,:-1], points[:,[-1]])

_, axs = subplots(1, 2)
classif_errors = asarray(errors)[:,-1]
axs[0].plot(range(len(classif_errors)), classif_errors, 'r-')
axs[0].set_ylabel('Classification error')
axs[0].set_ylim(0)

points_nn = column_stack((points[:,[0,1]], np.round(nn.y)))
class0_nn = points_nn[where(points_nn[:,2] == 0)]
class1_nn = points_nn[where(points_nn[:,2] == 1)]
axs[1].plot(class0_nn[:,0], class0_nn[:,1], 'bo')
axs[1].plot(class1_nn[:,0], class1_nn[:,1], 'r^');

But if it is set right, there's no limit (in theory) to the complexity of the function that the network can learn.

In [15]:

nn = NeuralNetwork(5)
errors = nn.train(points[:,:-1], points[:,[-1]])

_, axs = subplots(1, 2)
classif_errors = asarray(errors)[:,-1]
axs[0].plot(range(len(classif_errors)), classif_errors, 'r-')
axs[0].set_ylabel('Classification error')
axs[0].set_ylim(0)

points_nn = column_stack((points[:,[0,1]], np.round(nn.y)))
class0_nn = points_nn[where(points_nn[:,2] == 0)]
class1_nn = points_nn[where(points_nn[:,2] == 1)]
axs[1].plot(class0_nn[:,0], class0_nn[:,1], 'bo')
axs[1].plot(class1_nn[:,0], class1_nn[:,1], 'r^');
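Backpropagation code is easy to get subtly wrong, so a finite-difference gradient check is a worthwhile sanity test. The sketch below is an addition for illustration (it was not part of the original post); it reuses the NeuralNetwork class and the points dataset from above, recomputes the analytic gradient with the same formulas as backward(), and compares it against a numerical estimate for one weight.

import numpy as np

nn = NeuralNetwork(5)
nn.train(points[:,:-1], points[:,[-1]], n_iters=10)  # a few steps, just to set internal state

def total_nll(nn):
    # unnormalized NLL, matching the quantity the backward() gradients refer to
    nn.forward()
    return -np.sum(nn.t * np.log(nn.y) + (1 - nn.t) * np.log(1 - nn.y))

# analytic gradient w.r.t. W_xh, recomputed with the backprop equations above
nn.forward()
d_net_y = nn.y * (1 - nn.y) * (nn.y - nn.t)
d_h = np.dot(d_net_y, nn.W_hy.T)
d_net_h = nn.h[:,1:] * (1 - nn.h[:,1:]) * d_h[:,1:]
analytic = np.dot(nn.x.T, d_net_h)

# numerical gradient for one arbitrary weight, via central differences
i, j, eps = 1, 0, 1e-5
w0 = nn.W_xh[i, j]
nn.W_xh[i, j] = w0 + eps; plus = total_nll(nn)
nn.W_xh[i, j] = w0 - eps; minus = total_nll(nn)
nn.W_xh[i, j] = w0
print((plus - minus) / (2 * eps), analytic[i, j])  # the two values should agree closely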
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7768263816833496, "perplexity": 4304.443227062963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423809.62/warc/CC-MAIN-20170721202430-20170721222430-00142.warc.gz"}
https://openemt.org/index.php?title=Signal_demodulation&printable=yes
# Signal demodulation

Demodulation refers to the extraction of information from a carrier wave. The extracted information is either digital or analog in nature. For example, Wifi transmits digital information on 2.4GHz and 5GHz frequencies, while a typical FM radio transmits analog signals in the form of sound at carrier frequencies between 80 and 100MHz.

In the case of Anser EMT, the carrier signals are those transmitted by the eight field emitter coils in the field generator. The extracted information is the magnitude of each of the received frequency components. The sensor coil detects these carrier frequencies as described in section (5b) and produces a composite signal representing the sum of the received carrier frequencies. Following amplification and sampling, the magnitudes of these carrier signals are extracted using asynchronous demodulation techniques. The position and orientation algorithm compares these magnitudes with the system magnetic field model in order to resolve a unique sensor position and orientation.

## Demodulation theory

Two demodulation schemes are discussed in this section: synchronous and asynchronous. Asynchronous demodulation is the chosen scheme, as it provides more information regarding the orientation of the sensor. Many techniques are available to calculate the amplitude of the AC magnetic field experienced by the sensor. Generally, the signals of interest are small in amplitude, with relatively large noise levels as well as interference from the other transmitting channels. The most common method to extract signals of this type is synchronous demodulation, also known as synchronous detection or lock-in amplification.

### Synchronous demodulation

Synchronous demodulation is a method for extracting information from an AC carrier signal. Although asynchronous demodulation is used in Anser, synchronous demodulation illustrates basic concepts that are reused in the asynchronous design. The amplitude and phase of an AC signal can be calculated through multiplication by a reference signal that is locked in frequency with the original signal. The multiplication by the reference signal shifts the signal down to a lower frequency, typically DC, which is then easier to measure accurately. The "locking" of frequencies can be implemented in many ways, although the simplest is to use the source of the signal as a reference.

Consider an input signal:

${\displaystyle v(t)=V\sin(\omega t+\varphi )}$

where ${\displaystyle V}$ is the modulating amplitude we wish to extract. The amplitude and phase of this signal can be determined by multiplying by two reference signals at the same frequency:

${\displaystyle Y(t)=\sin(\omega t)}$

${\displaystyle X(t)=\cos(\omega t)}$

This multiplication results in two quadrature signals:

${\displaystyle v_{y}(t)=v(t)Y(t)={\frac {V}{2}}[\cos(\varphi )-\cos(2\omega t+\varphi )]}$

${\displaystyle v_{x}(t)=v(t)X(t)={\frac {V}{2}}[\sin(\varphi )+\sin(2\omega t+\varphi )]}$

The DC component of the signal is extracted by using an appropriate low-pass filter.
Analytically, the resulting DC values are:

${\displaystyle v'_{y}(t)={\frac {V}{2}}\cos(\varphi )}$

${\displaystyle v'_{x}(t)={\frac {V}{2}}\sin(\varphi )}$

The amplitude of the modulating signal ${\displaystyle V}$ can then be found using:

${\displaystyle V=2{\sqrt {v'_{x}(t)^{2}+v'_{y}(t)^{2}}}}$

The phase of the signal can be found using:

${\displaystyle \varphi =\arctan \left({\frac {v'_{x}(t)}{v'_{y}(t)}}\right)}$

Demodulating with this technique requires that ${\displaystyle Y(t)}$ and ${\displaystyle X(t)}$ be generated from the reference source. This would require each of the eight coil signals (20.5kHz, 21.5kHz, ...) to be individually sampled for processing, demanding an analogue-to-digital converter with a very high aggregate sampling frequency. Instead, simulated reference signals may be used to generate ${\displaystyle Y(t)}$ and ${\displaystyle X(t)}$, which results in asynchronous demodulation.

### Asynchronous demodulation

Asynchronous demodulation uses simulated reference signals to generate the quadrature signals for demodulation. These simulated signals are not locked in phase with the signal to be demodulated and can experience frequency mismatch. This increases the number of calculations required when determining the phase and magnitude of the signal of interest, but reduces the number of signals that must be sampled.

Consider a tracking system consisting of N emitting coils, each coil carrying a current component of the following form:

${\displaystyle i_{i}(t)=I_{i}\sin(\omega _{i}t+\varphi _{I_{i}})}$

where ${\displaystyle I_{i}}$ is the amplitude of the ${\displaystyle i}$-th emitting coil waveform, ${\displaystyle \omega _{i}}$ is the excitation frequency and ${\displaystyle \varphi _{I_{i}}}$ is the current phase relative to an arbitrary reference. Summing all N current waveforms results in:

${\displaystyle i(t)=\sum _{i=1}^{N}I_{i}\sin(\omega _{i}t+\varphi _{I_{i}})}$

The induced voltage on the sensor is a sum of the voltages induced by the coil currents:

${\displaystyle v(t)=\sum _{i=1}^{N}V_{i}\sin(\omega _{i}t+\varphi _{V_{i}})}$

where ${\displaystyle V_{i}}$ is the amplitude of the induced voltage component and ${\displaystyle \varphi _{V_{i}}}$ is the associated phase. Each frequency component of the voltage signal is extracted using two reference signals:

${\displaystyle Y_{i}=\sin(\omega _{ri}t)}$

${\displaystyle X_{i}=\cos(\omega _{ri}t)}$

where ${\displaystyle \omega _{ri}}$ is the frequency of the simulated reference signal. This demodulation yields the amplitudes and phases of all the frequency components relative to the simulated reference signals:

${\displaystyle \mathbf {V} =[V_{1},V_{2}...V_{N}]}$

${\displaystyle \mathbf {I} =[I_{1},I_{2}...I_{N}]}$

${\displaystyle \mathbf {\varphi _{V}} =[\varphi _{V_{1}},\varphi _{V_{2}}...\varphi _{V_{N}}]}$

${\displaystyle \mathbf {\varphi _{I}} =[\varphi _{I_{1}},\varphi _{I_{2}}...\varphi _{I_{N}}]}$

By subtracting the individual phases from each other, the relative phase angle between the sensor voltage and coil current waveforms can be found:

${\displaystyle \Delta \mathbf {\varphi } =\mathbf {\varphi _{V}} -\mathbf {\varphi _{I}} }$

The sign of this phase information indicates the axial orientation of the electromagnetic sensor with respect to the magnetic field. With a simulated reference signal it can be difficult to lock the frequency to the signal source without the use of phase-locking techniques.
In our system this often results in a small frequency mismatch, since the simulated reference signal for a particular coil, ${\displaystyle \omega _{ri}}$, is slightly different from the frequency to be demodulated, ${\displaystyle \omega _{i}}$. The mismatch ${\displaystyle \Delta \omega =\omega _{i}-\omega _{ri}}$ causes a low-frequency oscillation in the demodulated signal which would not be present in synchronous demodulation. To demonstrate this, consider a single frequency where the coil current and sensor voltage waveforms are given by:

${\displaystyle i(t)=I\sin(\omega t+\varphi _{I})}$

${\displaystyle v(t)=V\sin(\omega t+\varphi _{V})}$

The simulated reference signals used for demodulation are given by:

${\displaystyle Y(t)=\sin(\omega _{r}t)}$

${\displaystyle X(t)=\cos(\omega _{r}t)}$

Starting with the sensor voltage ${\displaystyle v(t)}$, we multiply by the reference signals just as in the synchronous case to produce:

${\displaystyle v(t)Y(t)={\frac {V}{2}}[\cos((\omega -\omega _{r})t+\varphi _{V})-\cos((\omega +\omega _{r})t+\varphi _{V})]}$

${\displaystyle v(t)X(t)={\frac {V}{2}}[\sin((\omega +\omega _{r})t+\varphi _{V})+\sin((\omega -\omega _{r})t+\varphi _{V})]}$

Extracting the low-frequency components using a low-pass filter yields two quadrature voltage signals:

${\displaystyle v_{x}={\frac {V}{2}}\sin(\Delta \omega t+\varphi _{V})}$

${\displaystyle v_{y}={\frac {V}{2}}\cos(\Delta \omega t+\varphi _{V})}$

where the difference in frequency is given by:

${\displaystyle \Delta \omega =\omega -\omega _{r}}$

These signals are close to DC since they oscillate at the low frequency ${\displaystyle \Delta \omega }$. The amplitude ${\displaystyle V}$ can be determined as in the synchronous case using:

${\displaystyle V=2{\sqrt {v_{x}^{2}+v_{y}^{2}}}}$

The phase can be determined using:

${\displaystyle \gamma _{V}(t)=\arctan {\frac {v_{x}}{v_{y}}}=\Delta \omega t+\varphi _{V}}$

It is clear that the phase has a time dependency, caused by the frequency mismatch between the carrier and reference signals. To remove this dependency, the same demodulation procedure is applied to the coil current waveform ${\displaystyle i(t)}$ to produce ${\displaystyle I}$ and ${\displaystyle \gamma _{I}}$. Subtracting ${\displaystyle \gamma _{I}}$ from ${\displaystyle \gamma _{V}}$ cancels the ${\displaystyle \Delta \omega t}$ term and gives the constant relative phase angle between the two waveforms:

${\displaystyle \gamma _{V}-\gamma _{I}=(\Delta \omega t+\varphi _{V})-(\Delta \omega t+\varphi _{I})=\varphi _{V}-\varphi _{I}}$

This "double demodulation" allows the accurate retrieval of both the amplitude and phase of each of the induced sensor voltages. Implementation details of how this demodulation is achieved are described in the next section.

## Demodulation implementation

The asynchronous demodulation process takes place in Matlab using an efficient matrix calculation technique. The method involves recording a block of samples of each signal of interest, which reduces solving time since all samples are available at the time of calculation. This is in contrast to real-time processing, where samples are gathered and processed one by one.
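Before detailing the matrix formulation, the double demodulation derived above can be sketched for a single tone. All numbers here, including the deliberate 0.3 Hz reference mismatch, are invented for illustration and this is not the Anser code; the two quadrature references are folded into one complex exponential so that its angle carries $\gamma(t)$ directly:

```matlab
fs = 100e3;  f = 20500;  fr = 20500.3;    % carrier vs. simulated reference [Hz]
t  = (0:20000-1)/fs;                      % 0.2 s of samples
phiI = 0.3;  phiV = 1.1;                  % true phases (to be compared)
i_t = 1.0*sin(2*pi*f*t + phiI);           % coil current i(t)
v_t = 0.2*sin(2*pi*f*t + phiV);           % induced sensor voltage v(t)

e  = exp(-1j*2*pi*fr*t);                  % X(t) - jY(t), one sign convention
lp = @(x) mean(reshape(x, 2000, []), 1);  % crude low-pass: 20 ms block averages
zV = lp(v_t .* e);                        % slow phasor: angle = dw*t + phiV + const
zI = lp(i_t .* e);                        % slow phasor: angle = dw*t + phiI + const

dphi = angle(zV .* conj(zI));             % the dw*t terms cancel
mean(dphi)                                % ~ phiV - phiI = 0.8
```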
The signals of interest we wish to sample are:

• The induced voltage on the 5-DOF sensor coil(s)
• The composite coil current signal

Initially we consider the induced voltage on a single sensor, discretised through sampling:

${\displaystyle x[n]=\sum _{i=1}^{N}V_{i}\sin \left({\frac {2\pi f_{i}n}{f_{s}}}+\varphi _{i}\right)}$

where N is the number of frequencies of interest (N = 8, one frequency per emitter coil), ${\displaystyle f_{i}}$ is the frequency of interest in hertz, ${\displaystyle f_{s}}$ is the sampling frequency of the DAQ, ${\displaystyle V_{i}}$ is the amplitude and ${\displaystyle \varphi _{i}}$ is an associated phase shift. Collecting and storing ${\displaystyle p}$ points results in a vector ${\displaystyle \mathbf {X} }$ containing ${\displaystyle p}$ samples:

${\displaystyle \mathbf {X} =\left[x[0],x[1]...x[p-1]\right]}$

In the previous section asynchronous demodulation was discussed. The first step of the demodulation involves multiplying the sampled signal by two sinusoids ${\displaystyle Y(n)=\sin(2\pi f_{ri}n/f_{s})}$ and ${\displaystyle X(n)=\cos(2\pi f_{ri}n/f_{s})}$, where ${\displaystyle f_{ri}}$ is the simulated reference frequency close to ${\displaystyle f_{i}}$. Using Euler's relation we combine ${\displaystyle X(n)}$ and ${\displaystyle Y(n)}$ into a single discrete complex sinusoid:

${\displaystyle \cos(2\pi f_{ri}n/f_{s})+j\sin(2\pi f_{ri}n/f_{s})=e^{\frac {2\pi f_{ri}jn}{f_{s}}}}$

where ${\displaystyle j={\sqrt {-1}}}$. This complex exponential encapsulates the two quadrature signal components required for demodulation. Given the ${\displaystyle N}$ emitter coil frequencies, ${\displaystyle N}$ complex exponentials are required, one for the demodulation of each frequency. A ${\displaystyle p\times N}$ matrix of complex exponentials is created such that premultiplying it by ${\displaystyle \mathbf {X} }$ results in a vector of voltage magnitudes:

${\displaystyle \mathbf {E} ={\begin{bmatrix}\epsilon _{1}[0]&\dots &\epsilon _{N}[0]\\\vdots &\ddots &\vdots \\\epsilon _{1}[p-1]&\dots &\epsilon _{N}[p-1]\end{bmatrix}}}$

where ${\displaystyle \epsilon _{i}[n]=e^{\frac {2\pi f_{ri}jn}{f_{s}}}}$

The row vector of demodulated voltages is given by premultiplying ${\displaystyle \mathbf {E} }$ by the row vector of ${\displaystyle p}$ samples:

${\displaystyle \mathbf {Y} =[{\widetilde {V}}_{1}...{\widetilde {V}}_{N}]=2\mathbf {X} \mathbf {E} }$

${\displaystyle \mathbf {Y} }$ is a complex-valued row vector due to the presence of ${\displaystyle j}$ in the complex exponential calculation. The absolute value of each entry in ${\displaystyle \mathbf {Y} }$ is the amplitude of the corresponding received frequency of interest, ${\displaystyle V_{i}}$.

### Finite impulse response (FIR) filter

A finite impulse response filter is implemented as part of the demodulation step. This low-pass FIR filter eliminates unwanted frequency components from the input sample stream ${\displaystyle \mathbf {X} }$. Consider an FIR filter with ${\displaystyle p}$ tap coefficients ${\displaystyle f_{i}}$ (i.e. equal in number to the input samples). The output of such a filter can be represented as:

${\displaystyle x_{F}[n]=\sum _{i=0}^{p-1}f_{i}x[n-i]}$

where ${\displaystyle x_{F}[n]}$ is the filtered sample stream and ${\displaystyle f_{i}}$ is a single filter tap coefficient.
The filter coefficients can be collected into a single row vector ${\displaystyle \mathbf {F} }$ as shown below:

${\displaystyle \mathbf {F} =[f_{0},f_{1},f_{2}\dots f_{p-1}]}$

The FIR filter can be applied within the demodulator by scaling each input sample of ${\displaystyle \mathbf {X} }$ with the corresponding FIR coefficient given by ${\displaystyle \mathbf {F} }$. This is achieved using an element-by-element multiplication:

${\displaystyle \mathbf {Y} =2(\mathbf {X} \circ \mathbf {F} )\mathbf {E} }$

where ${\displaystyle \circ }$ represents the element-wise multiplication operator. The amplitude of each frequency component is the absolute value of the complex quantity ${\displaystyle \mathbf {Y} }$, and the phase is the complex argument of ${\displaystyle \mathbf {Y} }$:

${\displaystyle [V_{1},V_{2}\dots V_{N}]=|\mathbf {Y} |}$

${\displaystyle [\varphi _{1},\varphi _{2}\dots \varphi _{N}]=\arg(\mathbf {Y} )}$

The implementation of this calculation can be found in the Matlab code provided in the OSF file repository.
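The whole pipeline fits in a few Matlab lines. The sketch below uses three invented tones and a hand-rolled Hamming-type window; the frequencies, amplitudes, window choice and the sum(F) normalisation are illustrative assumptions, and the authoritative implementation remains the Matlab code in the OSF repository:

```matlab
fs = 100e3;  p = 5000;  n = (0:p-1)';      % p samples; implicit expansion needs R2016b+
f  = [20500 21500 22500];                  % example emitter frequencies [Hz]
V  = [0.5 0.3 0.2];  ph = [0.4 -0.7 1.2];  % amplitudes and phases to recover
X  = sum(V .* sin(2*pi*n*f/fs + ph), 2).'; % 1 x p composite sample vector

F  = 0.54 - 0.46*cos(2*pi*(0:p-1)/(p-1));  % Hamming-type window as the FIR taps
E  = exp(1j*2*pi*n*f/fs);                  % p x N matrix of complex exponentials

Y   = 2*(X .* F) * E;                      % Y = 2 (X o F) E
amp = abs(Y) / sum(F)                      % ~ [0.5 0.3 0.2]; window gain removed
phs = pi/2 - angle(Y)                      % ~ [0.4 -0.7 1.2] under this sine convention
```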
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 101, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7960517406463623, "perplexity": 564.9806981088759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357641.32/warc/CC-MAIN-20210226115116-20210226145116-00400.warc.gz"}
https://www.physicsforums.com/threads/find-the-normal-force-on-an-inclined-plane.561325/
# Find the normal force on an inclined plane.

• #1

## Homework Statement

Find the normal force acting on a mass of 5kg on an inclined plane of 30 degrees.

## Homework Equations

Fn = mgsin30 or mgcos30?

## The Attempt at a Solution

I'm not sure if I use sin or cos when finding the normal force on an incline. I have trouble visualizing the angle perpendicular to the plane and moving the angles around.

• #2

Redbelly98
Staff Emeritus
Homework Helper

Welcome to Physics Forums. If the surface makes an angle θ to the horizontal, then the normal makes the same angle θ from the vertical. Does that help your visualizing? If it's still unclear, think about a horizontal surface, so that θ is zero, and answer this question:

1. What is the normal force when the surface is horizontal? Draw a force diagram for yourself, if you need to, to figure this out.

Hope that helps!

• #3

It works with mgsin theta, but when I try to draw it out it doesn't make sense to me.

• #4

Doc Al
Mentor
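An editorial footnote on the sin/cos question in this thread: with θ measured from the horizontal, as in the problem statement, the weight vector makes the same angle θ with the inward normal, so the perpendicular force balance uses the cosine, while mg sin θ is the component along the slope. Taking g ≈ 9.8 m/s² (an assumed value):

$$F_N = mg\cos\theta = (5\ \mathrm{kg})(9.8\ \mathrm{m/s^2})\cos 30^\circ \approx 42.4\ \mathrm{N}$$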
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8684108853340149, "perplexity": 1913.0929335351439}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655881984.34/warc/CC-MAIN-20200703091148-20200703121148-00150.warc.gz"}
http://math.stackexchange.com/questions/223775/exterior-derivative-of-a-complicated-differential-form
# Exterior derivative of a complicated differential form

Let $\omega$ be a $2$-form on $\mathbb{R}^3\setminus\{0\}$ defined by $$\omega = \frac{x\,dy\wedge dz+y\,dz\wedge dx +z\,dx\wedge dy}{(x^2+y^2+z^2)^{\frac{3}{2}}}$$ Show that $\omega$ is closed but not exact.

In order to show that $\omega$ is closed, I need to show that $d\omega=0$. I'm having some problems getting all of the calculus right and somewhere along the way I'm messing up. I started by rewriting $\omega$ as $$\omega = (x\,dy\wedge dz+y\,dz\wedge dx +z\,dx\wedge dy)(x^2+y^2+z^2)^{-\frac{3}{2}}$$ Now I should be able to use the product rule to evaluate (I think). Then $$d\omega = (dx\wedge dy\wedge dz+dy\wedge dz\wedge dx +dz\wedge dx\wedge dy)(x^2+y^2+z^2)^{-\frac{3}{2}} + (\ast)$$ where $$(\ast) = (x\,dy\wedge dz+y\,dz\wedge dx +z\,dx\wedge dy)\wedge\left(-\frac{3}{2}(2x\,dx+2y\,dy+2z\,dz)\right)(x^2+y^2+z^2)^{-\frac{5}{2}}$$ Even after trying to simplify everything, I can't get it to cancel. This makes me think that perhaps I can't apply the product rule like this. What should I do to calculate $d\omega$?

If $\omega$ is a globally defined smooth form and if $d\omega=0$, then $\omega$ is exact because there is some other form $\alpha$ with $d\alpha=\omega$ and $d^2\alpha=d\omega=0$. Because $\omega$ is not defined at $(0,0,0)$, it makes sense that it isn't exact. Is there a way to show that there can't be an $\alpha$ such that $d\alpha=\omega$?

Maybe cylindrical or spherical coordinates will help. – Pragabhava Oct 29 '12 at 20:22

To show that a 2-form is not exact, it is sufficient to integrate it over a closed, boundaryless region and get something non-zero. In this case, I would recommend integrating over the sphere (since you have an $x^2 + y^2 + z^2$ term). – Eric O. Korman Oct 29 '12 at 20:41

My method for integrating 2-forms has previously involved parameterizing the surface that I'm integrating over. Do I need to do the same thing and parameterize the sphere $S=\{(x,y,z)|\;x^2+y^2+z^2=1\}$ or is there a way to integrate $\omega$ just using $x^2+y^2+z^2=1$? – chris Oct 29 '12 at 22:41

By the way, this is the Solid Angle form. – diff_math Jul 24 '13 at 21:53

Your idea is good. Define $r = (x^2 + y^2 + z^2)^\frac{1}{2}$, $f(x,y,z) = \frac{1}{r^3}$ and $\mu = x\, dy \wedge dz + y\, dz \wedge dx + z\, dx \wedge dy$. Then by the product rule: $d(\omega) = d(f\mu) = df \wedge \mu + f\, d\mu$. Let us hold hands and calculate:

$$d\mu = dx \wedge dy \wedge dz + dy \wedge dz \wedge dx + dz \wedge dx \wedge dy = 3\, dx \wedge dy \wedge dz.$$

$$df = \frac{-3}{r^5} (x\, dx + y\, dy + z\, dz)$$

$$df \wedge \mu = \frac{-3}{r^5} (x\, dx + y\, dy + z\, dz) \wedge (x\, dy \wedge dz + y\, dz \wedge dx + z\, dx \wedge dy) = \frac{-3}{r^5} (x^2\, dx \wedge dy \wedge dz + y^2\, dy \wedge dz \wedge dx + z^2\, dz \wedge dx \wedge dy) = \frac{-3}{r^5} (r^2\, dx \wedge dy \wedge dz) = \frac{-3}{r^3}\, dx \wedge dy \wedge dz$$

$$df \wedge \mu + f\, d\mu = \frac{-3}{r^3}\, dx \wedge dy \wedge dz + \frac{3}{r^3}\, dx \wedge dy \wedge dz = 0.$$

Phew. As you see, in the calculations you use the antisymmetry properties of the wedge product a lot. You just need to do everything carefully, and it will come out.

For the second question: if it were an exact form, the integral of $\omega$ over every two-dimensional closed submanifold (compact, without boundary) of $\mathbb{R}^3\setminus\{0\}$ would be zero by Stokes's theorem. Try to find a closed submanifold on which you can calculate the integral directly relatively easily and for which the result is non-zero.
If you are familiar with conservative vector fields, this is just like showing that the field is not conservative by showing that the work done by it along some closed loop is non-zero.

When you're calculating $df\wedge \mu$, you start by saying $df\wedge\mu=df$. Did you mean to write that $df\wedge\mu=df\mu$? – chris Oct 29 '12 at 22:24

No, it's just a typo. $df$ is a one-form, $\mu$ is a two-form, so the wedge is a three-form. $df \mu$ doesn't make sense. I've corrected it. Thanks! – levap Oct 30 '12 at 8:46

Geometric calculus is slightly different in notation from differential forms, but the math is very similar, and I hope I can provide a useful insight into this problem, even with a slightly different background. Geometric calculus replaces differentials like $dx$ by vectors $e_x$, but it still uses wedges, which are still antisymmetric. So your $\omega$, in GC language, would be $$\omega = \frac{x e_y \wedge e_z + y e_z \wedge e_x + z e_x \wedge e_y}{(x^2 + y^2 + z^2)^{3/2}}$$ Not a whole lot of difference, I'll grant. Still, GC interprets this as a bivector (field), an oriented planar subspace in 3D space. Instead of Hodge duality, GC uses the geometric product, denoted wholly by juxtaposition: $e_i e_j = -e_j e_i$ if $i \neq j$. Otherwise, $e_i e_i = 1$ (no summation implied). Hodge duality is replaced by multiplication with the pseudoscalar, $e_x \wedge e_y \wedge e_z \equiv i$. For instance, $i(e_y \wedge e_z) = -e_x$.

Let's use this to simplify your expression for $\omega$ to: $$\omega = \frac{i r}{|r|^3}$$ where $r = xe_x + ye_y + z e_z$. This field, $r/|r|^3 \equiv G$, is in fact the free-space Green's function for the vector derivative: $\nabla r/|r|^3 = 4\pi\delta(r)$. (You might understand this more intuitively if I say $\nabla$ is the rough equivalent of $d + \delta$, the exterior derivative plus the coderivative. We say $\nabla \wedge A$ is the exterior derivative of $A$, and $\nabla \cdot A$ is the interior derivative, or coderivative.) Finally, it suffices in 3D to say that $i$ commutes with everything, at the cost of turning wedges into dots and dots into wedges. You have $\omega = iG$, and you want to prove that $\nabla \wedge \omega = \nabla \wedge (iG) = 0$. Pulling the $i$ out gives $i\nabla \cdot G = 0$, which is true in some places. (Question for you: where is it not true?)

As has been said, you can prove that $\omega$ is not exact (that is, there is no $\alpha$ such that $\nabla \wedge \alpha = \omega$) by seeing if $\omega$ is integrable. This is tightly coupled to the question above: where, if anywhere, is $\nabla \wedge \omega \neq 0$? When you integrate the exterior derivative of this field over a volume, will you get zero?

If you take nothing else from this answer (I know geometric calculus can still look and feel quite different from differential forms), I think you should see that this problem probes at the nature of the derivative in 3D space. The 2-form you have here is just a disguise for the 3D free-space Green's function.

Use spherical coordinates. In spherical coordinates $r,\theta,\phi$, the form reads: $$\omega = \sin\theta\,d\theta\wedge d\phi.$$ This is closed, because the coefficient only depends on a coordinate that is already used: $$d\omega = d(\sin\theta)\wedge d\theta\wedge d\phi = \cos\theta\,d\theta\wedge d\theta\wedge d\phi = 0,$$ since $d\theta\wedge d\theta=0$. This is anyway not exact, because you can integrate it on the sphere!
Integrating it over the closed region $0\le\theta\le \pi$, $0\le\phi\le2\pi$, that is, a sphere of radius one, you find (it's a simple calculation): $$\oint\omega = 4\pi.$$ Therefore, the form is not exact.
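As an editorial footnote, the "simple calculation" spelled out:

$$\oint_{S^2}\omega=\int_{0}^{2\pi}\!\int_{0}^{\pi}\sin\theta\,d\theta\,d\phi=\int_{0}^{2\pi}\Big[-\cos\theta\Big]_{0}^{\pi}\,d\phi=\int_{0}^{2\pi}2\,d\phi=4\pi.$$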
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.947425901889801, "perplexity": 192.09362071239258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701159654.65/warc/CC-MAIN-20160205193919-00159-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/lin-alg-is-the-set-a-linearly-independent-subset-of-r-3.183050/
# Lin. Alg. - Is the set a linearly independent subset of R^3

1. Sep 4, 2007

### b0it0i

1. The problem statement, all variables and given/known data

Is {(1,4,-6), (1,5,8), (2,1,1), (0,1,0)} a linearly independent subset of R^3? Justify your answer.

2. Relevant equations

3. The attempt at a solution

I assumed a(1,4,-6) + b(1,5,8) + c(2,1,1) + d(0,1,0) = 0, then I set up the system

a + b + 2c = 0
4a + 5b + c + d = 0
-6a + 8b + c = 0

Last edited: Sep 5, 2007

2. Sep 4, 2007

### proton

Are you sure you row-reduced the system? Even if you didn't, you should have encountered in your class that for R^n, the maximum number of vectors that can be linearly independent is n, which in this case is 3.

3. Sep 4, 2007

### Dick

a=-15, b=-13, c=14, d=111. Yes, you've missed some solutions, as proton predicted.

4. Sep 5, 2007

### b0it0i

Thanks a lot. I'm actually taking the linear algebra course with an intro to linear algebra course at the same time, so I'm not really familiar with this topic yet. When you mentioned a row-reduced system, I looked it up and worked out the problem; before, I had just tried random substitutions.

My first step was to switch the 2nd row with the 3rd row:

a + b + 2c = 0
-6a + 8b + c = 0
4a + 5b + c + d = 0

Then I replaced the second row with (6R1 + R2) and replaced the third row with (-4R1 + R3). My result is

a + b + 2c = 0
14b + 13c = 0
b - 7c + d = 0

Then I replaced the third row with (-1/14 R2 + R3):

a + b + 2c = 0
14b + 13c = 0
-111/14 c + d = 0

On this step it's looking closer to what Dick got. Is there supposed to be another manipulation of the rows? I just solved for d = 111/14 c, then let c = 1, thus:

c = 1
d = 111/14
b = -13/14
a = (13/14) - 2

But if I let c = 14:

c = 14
d = 111
b = -13
a = -15

Are both results correct? And if not (meaning Dick's is the only correct solution), what is the next step in the algorithm to find c = 14?

Thanks for the help.

5. Sep 5, 2007

### b0it0i

Never mind; upon further reading I found that "In this case, the system does not have a unique solution, as it contains at least one free variable. The solution set can then be expressed parametrically (that is, in terms of the free variables, so that if values for the free variables are chosen, a solution will be generated)."
So there's no unique solution, since you can choose whatever you want your variable "c" to be.

6. Sep 5, 2007

### Dick

You've got it. My solution was just a 'for instance'.
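As a quick numerical cross-check of the thread's conclusion (an editorial sketch, not from the original posts), the four vectors can be stacked as columns and the kernel inspected in MATLAB:

```matlab
% Columns are the four given vectors; a nonzero null-space vector
% supplies coefficients proving the set is linearly dependent.
M = [ 1  1  2  0;
      4  5  1  1;
     -6  8  1  0];
null(M, 'rational')       % one-dimensional kernel => dependent set
M * [-15; -13; 14; 111]   % Dick's solution: yields [0; 0; 0]
```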
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8263468742370605, "perplexity": 689.1793279772744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218199514.53/warc/CC-MAIN-20170322212959-00388-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/integration-of-a-polynomial-problem.780585/
# Integration of a polynomial problem

1. Nov 7, 2014

### MartinJH

Hi, I'm using KA Stroud 6th edition (for anyone with the same book, P407) and there is an example question where I just can't seem to get the answer they have suggested:

1. The problem statement, all variables and given/known data

Question: Determine the value of I = ∫(4x^3 - 6x^2 - 16x + 4) dx when x = -2, given that at x = 3, I = -13.

Their answer is: when x = -2, I = 12.

3. The attempt at a solution

I found the integral:

I = ∫(4x^3 - 6x^2 - 16x + 4) dx = x^4 - 2x^3 - 8x^2 + 4x + C

and then substituted x for 3, getting:

-13 = -33 + C, thus: C = 20

Now when I replace x with -2, plus the constant, I get:

-2^4 - 2(-2)^3 - 8(-2)^2 + 4(-2) + 20 = -20

I'm a few days into integrals so I feel I may be doing something daft? Many thanks.

2. Nov 7, 2014

### Staff: Mentor

Your work is fine except for one minor thing. At the end you wrote -2^4 instead of (-2)^4. In the first, 2 is raised to the 4th power and then you take the negative, resulting in -16. In the latter, -2 is raised to the 4th power, resulting in +16.

3. Nov 7, 2014

### MartinJH

That was an honest slip. I appreciate there is a difference between them both. I finally got the answer; it was a case of not respecting the brackets and powers... I need a break. Thanks for pointing that out and explaining! :)

4. Nov 7, 2014

### LCKurtz

HEY!! I thought micromass's avatar was retired.

5. Nov 8, 2014

### MartinJH

I assume that is for me? :). I use this logo for most online things. EEVBlog, my Steam account, etc. I was thinking about using Floyd's new album cover.

6. Nov 8, 2014

### LCKurtz

Yes, but it was really directed at the old timers, sort of tongue-in-cheek. Turns out one of our previous highly regarded members used to use that logo. Nothing to worry about though.

7. Nov 9, 2014

### MartinJH

Yeah, that's cool. I understand :).
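For completeness, an editorial check of the corrected evaluation:

$$I(3)=3^4-2\cdot 3^3-8\cdot 3^2+4\cdot 3+C=-33+C=-13\ \Rightarrow\ C=20$$

$$I(-2)=(-2)^4-2(-2)^3-8(-2)^2+4(-2)+20=16+16-32-8+20=12$$

which matches the book's answer.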
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.817621648311615, "perplexity": 1957.5945155539594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104634.14/warc/CC-MAIN-20170818102246-20170818122246-00589.warc.gz"}