url        stringlengths   14 to 2.42k
text       stringlengths   100 to 1.02M
date       stringlengths   19 to 19
metadata   stringlengths   1.06k to 1.1k
http://mathhelpforum.com/statistics/128226-so-lost.html
# Math Help - so lost ;(

1. ## so lost ;(

Let Y be a random variable giving the number of heads minus the number of tails in 3 tosses of a coin. Assuming that the coin is biased so that a head is twice as likely to occur as a tail, calculate the mean and standard deviation, then calculate the cumulative distribution function. I just learnt this chapter and am completely lost. Can someone help me? These are my workings: P(-1) = 6/27, where there are 2 tails and 1 head; P(1) = 18/27, where there are 2 heads and one tail; P(-3) = 1/27, where there are 3 tails; P(3) = 8/27, where there are 3 heads. Thus the mean = 1.2222 and the standard deviation = 1.547. How do I go about doing the last part? I know that to find the cumulative distribution function I have to find the area under p(y), but I do not have an equation to integrate. Did I do something wrong, or is there another method to solve this? Thank you!

2. Originally Posted by alexandrabel90: "Let Y be a random variable giving the number of heads minus the number of tails in 2 tosses of a coin..." I think that you have not understood this problem. The coin is tossed only twice. Thus your random variable has only three values: -2, two tails; 0, one of each; 2, two heads. The probabilities of these are $P(\{T,T\})=\left(\frac{1}{3}\right)^2,~ P(\{H,H\})=\left(\frac{2}{3}\right)^2,~\&~ P(\{H,T\})=2\left(\frac{2}{3}\right)\left(\frac{1}{3}\right)$.

3. Sorry! That was a typo: the coin was tossed 3 times, not 2. Apart from that typo, how do I go about solving for the cumulative distribution function?

4. Would the cdf be equivalent to finding the probability as in part b? Thanks

5. Originally Posted by alexandrabel90: "How do I go about solving for the cumulative distribution function?" Let $X$ be the number of heads. Then $X=0,1,2,3$ and $P(X=k)=\binom{3}{k}\left(\frac{2}{3}\right)^k\left(\frac{1}{3}\right)^{3-k}$. That is the pdf; you use it to write the cdf.

6. Am I right to say that that is the probability distribution function, and that the CDF is the summation of it from X = 0 to X = 3, since this is a discrete random variable?

7. Won't the summation of it be equal to 1? Sorry, I'm not really sure how to make use of the cdf and pdf, except that the cdf is the summation of the pdf.

8. Originally Posted by alexandrabel90: "Am I right to say that that is the probability distribution function...?" If P(x) is the PDF and C(x) is the CDF, then C(0) = P(0), C(1) = P(0) + P(1), C(2) = P(0) + P(1) + P(2), and C(3) = P(0) + P(1) + P(2) + P(3) = 1.
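A quick numerical check of the pmf from post 5 (this sketch is an addition, not part of the original thread). Note it gives P(Y = 1) = 12/27 rather than the 18/27 in the original working, so the mean comes out to exactly 1:

from math import comb, sqrt

p = 2/3  # P(head): a head is twice as likely as a tail
# Y = heads - tails = 2X - 3, where X ~ Binomial(3, 2/3)
pmf = {2*k - 3: comb(3, k) * p**k * (1 - p)**(3 - k) for k in range(4)}

mean = sum(y * q for y, q in pmf.items())
sd = sqrt(sum((y - mean)**2 * q for y, q in pmf.items()))
print(mean, sd)  # 1.0 1.633 (= sqrt(8/3))

# The cdf of a discrete variable is the running sum of the pmf:
acc = 0.0
for y in sorted(pmf):
    acc += pmf[y]
    print(y, acc)  # -3: 1/27, -1: 7/27, 1: 19/27, 3: 1.0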
2014-09-17 01:44:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8827582597732544, "perplexity": 553.9061799338616}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657120446.62/warc/CC-MAIN-20140914011200-00217-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://www.hepdata.net/record/104962
Measurement of elliptic flow of light nuclei at $\sqrt{s_{NN}}=$ 200, 62.4, 39, 27, 19.6, 11.5, and 7.7 GeV at the BNL Relativistic Heavy Ion Collider

The STAR collaboration. Phys.Rev.C 94 (2016) 034908, 2016.

Abstract: We present measurements of 2$^{nd}$-order azimuthal anisotropy ($v_{2}$) at mid-rapidity $(|y|<1.0)$ for light nuclei d, t, $^{3}$He (for $\sqrt{s_{NN}}$ = 200, 62.4, 39, 27, 19.6, 11.5, and 7.7 GeV) and anti-nuclei $\bar{\rm d}$ ($\sqrt{s_{NN}}$ = 200, 62.4, 39, 27, and 19.6 GeV) and $^{3}\bar{\rm He}$ ($\sqrt{s_{NN}}$ = 200 GeV) in the STAR (Solenoidal Tracker at RHIC) experiment. The $v_{2}$ for these light nuclei produced in heavy-ion collisions is compared with those for p and $\bar{\rm p}$. We observe mass ordering in nuclei $v_{2}(p_{T})$ at low transverse momenta ($p_{T}<2.0$ GeV/$c$). We also find a centrality dependence of $v_{2}$ for d and $\bar{\rm d}$. The magnitudes of $v_{2}$ for t and $^{3}$He agree within statistical errors. Light-nuclei $v_{2}$ are compared with predictions from a blast wave model. Atomic mass number ($A$) scaling of light-nuclei $v_{2}(p_{T})$ seems to hold for $p_{T}/A < 1.5$ GeV/$c$. Results on light-nuclei $v_{2}$ from a transport-plus-coalescence model are consistent with the experimental measurements.
2021-05-15 13:40:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7402151226997375, "perplexity": 1789.4994900914091}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991370.50/warc/CC-MAIN-20210515131024-20210515161024-00468.warc.gz"}
https://planetmath.org/algebraic1
# algebraic

Let $B$ be a ring with a subring $A$. An element $x\in B$ is algebraic over $A$ if there exist elements $a_{0},\dots,a_{n}\in A$, with $a_{n}\neq 0$, such that $a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots+a_{1}x+a_{0}=0.$ An element $x\in B$ is transcendental over $A$ if it is not algebraic. The ring $B$ is algebraic over $A$ if every element of $B$ is algebraic over $A$.

Title: algebraic
Canonical name: Algebraic1
Date of creation: 2013-03-22 12:07:50
Last modified on: 2013-03-22 12:07:50
Owner: djao (24)
Last modified by: djao (24)
Numerical id: 8
Author: djao (24)
Entry type: Definition
Classification: msc 13B02
Related topic: AlgebraicExtension
Defines: transcendental
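A standard example, added here to illustrate the definition above: with $A=\mathbb{Q}$ and $B=\mathbb{R}$, the element $x=\sqrt{2}$ is algebraic over $\mathbb{Q}$, since $x^{2}-2=0$ with $a_{2}=1\neq 0$; by contrast, $\pi\in\mathbb{R}$ is transcendental over $\mathbb{Q}$ (the Lindemann theorem).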
2019-03-26 00:48:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 13, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8563376069068909, "perplexity": 684.9525356780554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204736.6/warc/CC-MAIN-20190325234449-20190326020449-00392.warc.gz"}
https://www.physicsforums.com/threads/question-about-a-definition-in-rivasseaus-book.944553/
# I Question about a definition in Rivasseau's book

Tags:

1. Apr 11, 2018

### Iliody

I have a problem understanding a definition on page 93 of 'From Perturbative to Constructive Renormalization' that is related to graph theory; he uses it in the proof of the uniform BPH theorem for $\phi_4^4$ ($\lambda \phi^4$ in $D=4$). You have a graph G, a forest of quadrupeds F, and g a subgraph of G compatible with the tree structure of F. $B_F(g)$ is defined as the ancestor of g in $F \cup G$. What does "ancestor" mean in this context? Maybe this question doesn't belong here; I'm sorry if that's the case. I thought that it belonged here because it's related to renormalization.

2. Apr 16, 2018

### PF_Help_Bot

Thanks for the thread! This is an automated courtesy bump. Sorry you aren't generating responses at the moment. Do you have any further information, have you come to any new conclusions, or is it possible to reword the post? The more details the better.
2018-12-16 18:23:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6529216766357422, "perplexity": 869.4291961455089}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827963.70/warc/CC-MAIN-20181216165437-20181216191437-00624.warc.gz"}
https://math.stackexchange.com/questions/1863722/proving-the-baire-category-theorem-from-scratch-stuck
# Proving the Baire Category Theorem from scratch, Stuck!

Theorem (Baire): If $(X,d)$ is a complete metric space, then the intersection of countably many dense, open subsets in the metric topology $\mathcal{T}$ generated by $d$ is dense. In other words, $$D = \bigcap_{n \in \mathbb{N}} D_n,$$ where each $D_n$ is a dense, open subset of $X$, is dense.

I am asked to prove this raw, with no hints whatsoever, and I am already hitting my head on several bricks.

Idea/Approach: By contradiction, assume that $D$ defined above is not dense; then $(X,d)$ is not complete.

Proof Attempt: (No idea how to start! We will start by stating what we want.)

• We wish to construct a Cauchy sequence $(x_n)$ in $X$; then by completeness of $X$, $x_n \to x$ as $n \to \infty$. Afterwards, we wish to obtain a contradiction such that if $D = \bigcap_{n \in \mathbb{N}} D_n$ is not dense, then $x_n \not\to x$. (The difficult part is to find out how $D_n$ is related to $x_n$...)
• Let $\{D_n\}$ be a set of dense, open subsets of $X$, where $n \in \mathbb{N}$. By definition, for all $U \in \mathcal{T}$, $D_n \cap U \neq \varnothing$.
• Let $D = \bigcap_{n \in \mathbb{N}} D_n$ and suppose for contradiction there exists $U \in \mathcal{T}$ such that $D \cap U = \varnothing$ (i.e. $D$ is not dense). Then let $x_n \in D_n \cap U, \forall n \in \mathbb{N}$.
• Then $(x_n)$ is a Cauchy sequence if $\forall \epsilon > 0, \exists N \in \mathbb{N}$ such that $d(x_n, x_m) < \epsilon, \forall n, m > N$. (At this point it is obvious that without additional assumptions, there is no way $(x_n)$ is Cauchy.) ...

Does anyone see if this approach can still be continued? Any help is appreciated at this point.

• en.wikipedia.org/wiki/Baire_category_theorem#Proof – avs Jul 19 '16 at 0:37
• Note that completion is a reference to sequence limits, and you have already produced a sequence of dense subsets. Perhaps you could produce sequences of points, using each dense subset. – Jacob Wakem Jul 19 '16 at 0:43
• @BeachedWhale It seems very likely that you are the creator of the (metric-topology) tag. I wanted to let you know that I have made a post on meta to discuss this new tag. – Martin Sleziak Aug 1 '16 at 10:08
• @MartinSleziak Hi! Yes, I like to create new tags at whim, such as Arzela-Ascoli. I created metric-topology because this is now used a lot in formal teaching, as people confuse metric spaces with topological spaces and vice versa, mainly because real analysis and topology are taught separately. For example, most people are not introduced to the formal definition of a topology in real analysis, therefore all intuition about convergence of sequences etc. is wrt the metric topology, but does not hold in general topology. Perhaps metrizable-spaces is more appropriate. But do whatever you wish with the tag! – Carlos - the Mongoose - Danger Aug 1 '16 at 10:47
• Thanks for the response. My impression is that the tag (metric-spaces) is typically used for questions related to metric spaces, metrizable spaces, topology derived from a metric, etc. In any case, if we want to continue this discussion, it would probably be more suitable on meta or in chat. – Martin Sleziak Aug 1 '16 at 12:24

Your approach can be carried out; you just have to figure out how to choose the points $x_n$ so that $\langle x_n:n\in\Bbb N\rangle$ is a Cauchy sequence. I’ll get you started. For $x\in X$ and $r>0$ let $C(x,r)=\{y\in X:d(x,y)\le r\}$, the closed ball of radius $r$ centred at $x$. Begin by choosing any $x_0\in U\cap D_0$; there is an $r_0>0$ such that $C(x_0,r_0)\subseteq U\cap D_0$.
The open ball $B(x_0,r_0)$ intersects $D_1$, so choose $x_1\in B(x_0,r_0)\cap D_1$. There is an $r_1>0$ such that $C(x_1,r_1)\subseteq B(x_0,r_0)\cap D_1$ and $r_1\le\frac12r_0$. (Why?) Similarly, $B(x_1,r_1)\cap D_2\ne\varnothing$, so there is an $x_2\in B(x_1,r_1)\cap D_2$, and there is then an $r_2>0$ such that $C(x_2,r_2)\subseteq B(x_1,r_1)\cap D_2$ and $r_2\le\frac12r_1$. (Again, why?) Now try to complete the blockquoted sentence below that describes how to keep going in this fashion. Given $x_n\in U$ and $r_n>0$, we know that $B(x_n,r_n)\cap D_{n+1}\ne\varnothing$, so there is an $x_{n+1}\in B(x_n,r_n)\cap D_{n+1}$, and there is then an $r_{n+1}>0$ such that ... In the end we have a sequence $\langle x_n:n\in\Bbb N\rangle$ such that each $x_n\in U$. • Show by induction on $n$ that $x_n\in\bigcap_{k\le n}B(x_k,r_k)$ for each $n\in\Bbb N$. • Use this to show that $\langle x_n:n\in\Bbb N\rangle$ is a Cauchy sequence and therefore converges to some $x\in X$. • Show that for each $n\in\Bbb N$, $x_n\in\bigcap_{k\le n}D_k$. • Use this to show that $x\in\bigcap_{n\in\Bbb N}D_n$. Note that a proof by contradiction is not needed: the argument shows directly that $\bigcap_{n\in\Bbb N}D_n$ meets every non-empty open $U\subseteq X$. • Yeah yours is pretty much the same as I eventually did. Although I am good at being terse. – Jacob Wakem Jul 19 '16 at 2:42 • I think I understand, more tedious than I thought however, the amount of things you need to check – Carlos - the Mongoose - Danger Jul 19 '16 at 3:06 • @BeachedWhale check out my approach in my revised answer. I cut out the fat. – Jacob Wakem Jul 19 '16 at 3:14 • @Jacob: I would say that while your answer provides the intuition, it falls far short of being a proof. That is, what you cut was the actual proof, leaving only the basic idea behind it. – Brian M. Scott Jul 19 '16 at 3:16 • @BrianM.Scott I mean it is still readable. Stuff like "much smaller" is easily formalizable (I'm sure it has already been suitably defined by someone and I use that definition freely) and my infinitary sentence can be put into set builder notation in the usual way. – Jacob Wakem Jul 19 '16 at 3:33
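The Cauchy estimate in the second bullet, spelled out (an added note filling in the step the answer leaves as an exercise): since each radius satisfies $r_{n+1}\le\frac12 r_n$, induction gives $r_n\le 2^{-n}r_0$. For $m>n$ we have $x_m\in B(x_n,r_n)$, so $$d(x_n,x_m)<r_n\le 2^{-n}r_0\to 0 \quad (n\to\infty),$$ and hence $\langle x_n:n\in\Bbb N\rangle$ is Cauchy.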
2019-07-16 00:25:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.945911169052124, "perplexity": 283.3639958213621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524290.60/warc/CC-MAIN-20190715235156-20190716021156-00161.warc.gz"}
http://wiki.planetmath.org/locallycompactquantumgroup
# locally compact quantum group

###### Definition 0.1.

A locally compact quantum group, defined as in ref. [1], is a quadruple $\mathcal{G}=(A,\Delta,\mu,\nu)$, where $A$ is either a $C^{*}$- or a $W^{*}$-algebra equipped with a co-associative comultiplication (http://planetmath.org/WeakHopfCAlgebra2) $\Delta:A\to A\otimes A$ and two faithful semi-finite normal weights, $\mu$ and $\nu$, the right and left Haar measures.

## 0.0.1 Examples

1. An ordinary unimodular group $G$ with Haar measure $\mu$: $A:=L^{\infty}(G,\mu)$, $\Delta:f(g)\mapsto f(gh)$, $S:f(g)\mapsto f(g^{-1})$, $\phi(f)=\int_{G}f(g)\,d\mu(g)$, where $g,h\in G$ and $f\in L^{\infty}(G,\mu)$.

2. $A:=L(G)$, the von Neumann algebra generated by left-translations $L_{g}$ or by left convolutions $L_{f}:=\int_{G}f(g)L_{g}\,d\mu(g)$ with continuous functions $f(\cdot)\in L^{1}(G,\mu)$; $\Delta:L_{g}\mapsto L_{g}\otimes L_{g}$, $S:L_{g}\mapsto L_{g^{-1}}$, $\phi(f)=f(e)$, where $g\in G$ and $e$ is the unit of $G$.

## References

1. Leonid Vainerman. 2003. Locally Compact Quantum Groups and Groupoids: Proceedings of the Meeting of Theoretical Physicists and Mathematicians, Strasbourg, February 21-23, 2002. Series in Mathematics and Theoretical Physics, 2, Series ed. V. Turaev. Walter de Gruyter GmbH & Co: Berlin. http://planetmath.org/?op=getobj&from=papers&id=471

Title: locally compact quantum group
Canonical name: LocallyCompactQuantumGroup
Date of creation: 2013-03-22 18:21:24
Last modified on: 2013-03-22 18:21:24
Owner: bci1 (20947)
Last modified by: bci1 (20947)
Numerical id: 18
Author: bci1 (20947)
Entry type: Definition
Classification: msc 81R50, msc 46M20, msc 18B40, msc 22A22, msc 17B37, msc 46L05, msc 22D25
Synonym: Hopf algebras, ring groups
Related topics: CompactQuantumGroup, LocallyCompactQuantumGroupsUniformContinuity2, RepresentationsOfLocallyCompactGroupoids, VonNeumannAlgebra, WeakHopfCAlgebra2, LocallyCompactHausdorffSpace, QuantumGroups
Defines: quantum group, local quantum symmetry
2018-03-18 01:52:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 16, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6461775302886963, "perplexity": 8623.582556294908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645413.2/warc/CC-MAIN-20180318013134-20180318033134-00571.warc.gz"}
http://umj.imath.kiev.ua/article/?lang=en&article=10013
# Investigation of a class of methods of summation of interpolational processes

Pogodicheva N. A.

Abstract: For the class of all continuous $2\pi$-periodic functions, two processes are considered for the approximation of the functions of this class by trigonometric polynomials.

Citation Example: Pogodicheva N. A. Investigation of a class of methods of summation of interpolational processes // Ukr. Mat. Zh. - 1964. - 16, № 2. - pp. 164-184.
2017-08-24 10:30:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36100685596466064, "perplexity": 1543.8796501352704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886133449.19/warc/CC-MAIN-20170824101532-20170824121532-00502.warc.gz"}
https://thankyouforcoming.net/wtti4ny/4e5052-shear-modulus-formula-from-young%27s-modulus
Bulk modulus formula: the bulk modulus $K$ is the ratio of volumetric stress to volumetric strain, $K = \dfrac{\Delta p}{\Delta V / V}$, where $\Delta V$ is the change in the original volume $V$. This equation is a specific form of Hooke's law of elasticity.

Modulus of Rigidity or Shear Modulus: defined as the ratio of shear stress to the corresponding shear strain within the elastic limit. The shear modulus may be expressed mathematically as shear modulus = (shear stress)/(shear strain) = (F/A)/(x/y). Because the denominator is a ratio and thus dimensionless, the dimensions of the shear modulus are those of force per unit area. The dimensional formula of shear modulus is M¹L⁻¹T⁻², and its unit is Nm⁻² or pascals (Pa).

Young's modulus is the ratio of longitudinal stress to longitudinal strain: Y = (F/A)/(∆L/L) = (F × L)/(A × ∆L). As strain is a dimensionless quantity, the unit of Young's modulus is the same as that of stress, N/m² or pascal (Pa). The modulus of elasticity, also known as Young's modulus, is a material property and a measure of stiffness under compression or tension; for concrete (Ec), it reflects the ability of concrete to deflect elastically.

Relation between Young's modulus, bulk modulus and modulus of rigidity: E = 2G(1 + μ) and E = 3K(1 − 2μ), where μ = 1/m is Poisson's ratio. Common sense and the 2nd law of thermodynamics require that the shear modulus G be nonnegative for all materials. These relations between the elastic constants (Young's modulus, shear modulus, Poisson's ratio, bulk modulus, and Lamé's constant) can be used to solve engineering problems involving the elastic moduli of an isotropic material.

Worked example. Question: calculate the shear modulus if the stress experienced by a body is 5×10⁴ Nm⁻² and the corresponding shear strain is 4×10⁻². Solution: G = (5×10⁴) / (4×10⁻²) = 1.25×10⁶ Nm⁻².

The rest of the page is a discussion thread: Young's modulus and shear modulus in static and dynamic analysis?

Q: I have dynamic properties of the soil and want to derive static properties for initial static equilibrium. Dynamic analysis starts from some in-situ condition, and in this stage you just allow gravitational stresses to develop within the body. But my problem is that in all the examples in the FLAC manual for dynamic analysis, the same properties are used in the initial static equilibrium and in the dynamic analysis. Is it feasible to use a high value of the Young's modulus for dense sand?

A: Yes, the high value of Young's modulus can be justified. If the strain amplitude (and consequently the strain) is very small, the soil behaves as a linear-elastic material. You could have a higher shear modulus in a dynamic measurement than in a quasi-static triaxial test on an identical soil sample, simply because the modulus can be measured at lower strains in the dynamic test. Just bear in mind the strain levels when you compare your modulus values with those in the literature. You can 'fit' your model behaviour to the overall experimental response (from soil element tests) by finding an appropriate modulus value.

A: The E value that you calculate with the shown formula is E0, the Young's modulus for small strains. Depending on the strains you are expecting, this can either be a good value (small strains) or an overestimation (medium to high strains). A single stiffness value in a dynamic calculation should always be used with caution; have you considered using strain-dependent stiffness degradation? G decreases as the cyclic strain amplitude increases.

Q: How can I calculate E50ref, Eur and Eoed from stiffness modulus numbers?

A: I would use the following approach, starting from E0 = 504 MPa: use the Alpan correlation to determine the E50 value (for dense sands). This of course means using several correlations to get to a value, so it should be used with caution, but it gives a good starting point. The idea behind it is that most of the time the Mohr-Coulomb model is used for simplified analyses, and using a stiffness value close to E100 leads to a conservative enough estimation when it is not known what stress ranges to expect. If you want to model an initial phase with building loads/excavations, then the Mohr-Coulomb model should be used with caution, as the linear stiffness might easily over- or underestimate the actual behaviour.

A: In our Bulgarian engineering practice, the rule of thumb is to multiply the static deformation modulus by a factor of 2 for sands/gravels and 3 for clays to find the strain-equivalent modulus for strong ground motions (above 0.15g); equivalently, calculate the strain-equivalent G-modulus using a reduced shear wave velocity, with the strain dependence handled by hysteretic damping.

A: This is part of a microzonation study conducted at the Port of Oakland in northern California. A 150-meter-wide strip of land along the entire waterfront at the Port was divided into a number of site categories, for which linear and nonlinear site response analyses were performed.

A: In numerical simulation, before any dynamic analysis you should bring your model to the initial static equilibrium. If your dynamic analysis is sensitive to the in-situ stress distribution (for example, there are weak spots at the verge of failing) or you are looking for total deformations, it is necessary to use the low (static) values for the static equilibrium; otherwise the initial stresses will not be calculated properly. The static phase of these types of problems should be analyzed with static properties. My parameters are for slope stability analyses in terms of effective stress, using the Mohr-Coulomb strength criterion.

A: Drained cycles are required for stiffness to recover. Liquefaction or plastic yielding can loosen soil up, in which case stiffness will be lost, only to recover later, during smaller, drained loading cycles; it can recover in excess of the "unloading-reloading" stiffness (Eur).

Q: How can I calculate the dynamic modulus of elasticity? I have UPV and density, but there are many different equations. Some equations just depend on UPV and density, such as Ed = (V²ρ)/g × 10⁻²; others depend on the Poisson ratio: V = √(K×Ed/ρ), K = (1−ν)/((1+ν)(1−2ν)). (In the pulse measurement, after 50.6 µs the much stronger shear-wave echo appears in the signal.)

A: Using these equations assumes the soil to be a linear-elastic material, which may not be the case for you. Soil modulus measurement is sensitive to the minimum strains in the tests. And since you are using dynamic moduli, I believe you should also use an appropriate Poisson's ratio: 0.4 for sands seems too high to me. P- and S-wave measurements can be used to determine Poisson's ratio and the modulus of elasticity.

Closing remark by the original poster: As a result of all the answers to my question, I should use different stiffness values for the static and dynamic stages. Many thanks for your expert answers.

Related questions on the page:
• How can I calculate the elastic modulus of soil layers (Es) from SPT N-values?
• Can anyone provide a reference about the estimation of c′ and φ′ for both clay and sand?
• How can I extract the values of data plotted in a graph which is available in pdf form?
• A beta-titanium alloy was tested with a load-unload compression test and a monotonic compression test; the compressive behaviour was very different in terms of the Young's modulus value. But why?
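Since the thread keeps returning to small-strain moduli derived from wave velocities, here is a minimal Python sketch of the standard isotropic-elasticity conversions; the function name and the numeric values are illustrative assumptions, not taken from the thread:

import math

def dynamic_moduli(vp, vs, rho):
    """Small-strain (dynamic) moduli from P- and S-wave velocities.

    vp, vs : wave velocities in m/s; rho : bulk density in kg/m^3.
    Uses the standard isotropic relations G = rho*vs^2 and
    nu = (vp^2 - 2*vs^2) / (2*(vp^2 - vs^2)).
    """
    G = rho * vs**2                               # shear modulus [Pa]
    nu = (vp**2 - 2*vs**2) / (2*(vp**2 - vs**2))  # Poisson's ratio
    E = 2 * G * (1 + nu)                          # Young's modulus, E = 2G(1 + nu)
    K = E / (3 * (1 - 2*nu))                      # bulk modulus, from E = 3K(1 - 2nu)
    return G, nu, E, K

# Illustrative values for a saturated dense sand (assumed, not from the thread):
G, nu, E, K = dynamic_moduli(vp=1600.0, vs=350.0, rho=1900.0)
print(f"G = {G/1e6:.0f} MPa, nu = {nu:.2f}, E = {E/1e6:.0f} MPa")

As the replies stress, moduli obtained this way are small-strain (E0/G0) values and should be degraded before being used at medium or large strain levels.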
2021-12-04 08:36:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6845719814300537, "perplexity": 1896.8596998248954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362952.24/warc/CC-MAIN-20211204063651-20211204093651-00568.warc.gz"}
https://email.esm.psu.edu/pipermail/macosx-tex/2005-July/016487.html
# [OS X TeX] LaTeXiT 1.2 released

Pierre Chatelier pierre.chatelier at club-internet.fr
Thu Jul 14 17:30:48 EDT 2005

Hello,

> I have a few usage questions: I'm trying to paste or drag an equation into a keynote document, but it seems I can only drag 'em there as an image, not as some inline text.

Yes, this is what LaTeXiT can do. It produces an image, not text. Only a tool like Illustrator can understand the PDF and offer to modify it. However, thanks to the LinkBack technology, the image dropped in Keynote can be reopened in LaTeXiT, so that you can modify it afterwards.

> As a side note: I did have to resolve the \pdfminorpdfversion issue reported here a few days ago before I could use the tool.

Uh? I must have missed that (one week of holidays). What is that problem?

> Note that this requires recreating all formats.

I don't understand; what does that mean?

Pierre
2020-07-14 05:31:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8586496710777283, "perplexity": 7407.896939478012}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657149205.56/warc/CC-MAIN-20200714051924-20200714081924-00152.warc.gz"}
http://sipnayan.com/2014/05/law-exponent-5-x-raised-m-x-raised-n/
# Law of Exponent 5 – x raised to m over x raised to n

In this video we discuss the fifth law of exponents in the Law of Exponents Series. We discuss the law for $\frac{x^m}{x^n}$ in three cases: case 1: $m > n$; case 2: $m < n$; case 3: $m = n$. If you cannot see the video in this post, you can watch it on YouTube.
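All three cases follow from the single rule $\frac{x^m}{x^n} = x^{m-n}$ for $x \neq 0$ (a worked illustration added here, not part of the original post):

$$\frac{x^5}{x^2} = x^{5-2} = x^3, \qquad \frac{x^2}{x^5} = x^{2-5} = x^{-3} = \frac{1}{x^3}, \qquad \frac{x^4}{x^4} = x^{0} = 1.$$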
2019-04-20 08:10:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7206692099571228, "perplexity": 357.16462192524006}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529472.24/warc/CC-MAIN-20190420080927-20190420102927-00350.warc.gz"}
http://mathhelpforum.com/calculus/212070-integral-polinomial-division.html
Math Help - integral of polynomial division

1. integral of polynomial division

Dear All! Please, any ideas of what to do with: $\int \frac{3x^3+2x^2+4x-8}{x^4+16}\,dx$

2. Re: integral of polynomial division

You can factor the denominator as $(x^2-2\sqrt{2}x+4)(x^2+2\sqrt{2}x+4)$ and use the method of partial fractions. - Hollywood
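The factorization is quickest to verify by completing the square, a step implied but not shown above:

$$x^4 + 16 = (x^2+4)^2 - 8x^2 = (x^2+4)^2 - \left(2\sqrt{2}\,x\right)^2 = \left(x^2 - 2\sqrt{2}\,x + 4\right)\left(x^2 + 2\sqrt{2}\,x + 4\right).$$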
2015-08-04 11:16:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.943110466003418, "perplexity": 3236.75143563076}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990609.0/warc/CC-MAIN-20150728002310-00032-ip-10-236-191-2.ec2.internal.warc.gz"}
https://gamedev.stackexchange.com/questions/183567/how-do-i-play-an-audio-clip-repeatedly-controlled-by-time/183568
# How do I play an audio clip repeatedly, controlled by time?

I have a gun-firing sound audio clip, and I want it to be played repeatedly while the left mouse button is pressed, like a machine gun with automatic fire. If I do something like:

void Update()
{
    if (gunScript.gotGun)
    {
        var mouse = Mouse.current;
        if (mouse != null)
        {
            if (mouse.leftButton.isPressed)
            {
                gunFireSound.Play();
            }
        }
    }
}

it plays extremely rapidly, since Update runs every frame. How do I add a small delay, or is there a better way to achieve this? I also thought of an alternative, but it doesn't play fast enough (the alternative was an if statement to check whether the clip has finished playing, and then play it again).

• Does your audio clip play a single-fired shot, or is it a looping sound? You can deal with these two cases in two different ways. Jun 16 '20 at 12:13
• It is a single fired shot. Maybe I should do what DMGregory said and put the file in Audacity and get it so that the looping sounds right, but I'll try the other solutions first. Jun 16 '20 at 12:16

You can manage this using a coroutine. For example:

private IEnumerator GunShotSound_Coroutine()
{
    while (mouse.leftButton.isPressed)
    {
        gunFireSound.Play();
        yield return new WaitForSeconds(1.0f);
    }
}

In the Update function, you check whether the left button was pressed this frame and you start the coroutine:

if (Input.GetKeyDown(mouse.leftButton))
{
    StartCoroutine("GunShotSound_Coroutine");
}

I did not test this code, but the idea is here :)

• Better to remove the quotation marks around the coroutine name so you can reference it directly, without reflection. This will also let it update correctly as code is refactored. Jun 16 '20 at 12:07
• Brilliant! This seems to be the fastest solution, I will check back on it. Jun 16 '20 at 12:18
• I just used the code and changed the seconds to 0.1f and it's perfect! Works like a charm, thank you all. :D Jun 16 '20 at 12:25

The simplest thing to do is to check whether the audio source is still playing the previous sound, and play a new one only if it's done:

if (!gunFireSound.isPlaying)
    gunFireSound.Play();

But that might make your shots sound too far apart. So you might want to allow your new shot to interrupt the old one after a specified duration:

if (!gunFireSound.isPlaying || gunFireSound.timeSamples > gunFireInterrupt)
    gunFireSound.Play();

Here timeSamples is a high-precision count of the individual audio samples played so far. You can also use PlayScheduled to queue up the next iteration of the shot at a fixed interval from the previous one, to make the rhythm of shots more uniform despite variances in frame timing. But the best solution is probably to have a looping machine-gun sound that you can start playing on press and transition into a tail-off sound on release. This will give you the best control over the sound and rhythm of the shots, and avoid artifacts from interrupting a sound in progress.

• Thanks! I am a little confused over the timeSamples and gunFireInterrupt. How do I control the timeSamples and the other variable, if you don't mind me asking? :) Also that sounds like a great idea, I will see if I can get it to work that way. Jun 16 '20 at 12:17
• You just set gunFireInterrupt to an integer high enough for your needs. For instance, a 44 kHz audio clip uses 44 000 samples per second. So if you want to be able to interrupt it after half a second you'd set gunFireInterrupt to 22 000. Jun 16 '20 at 12:20
• I see, thank you very much! I will look into this more. Jun 16 '20 at 12:24
• The looping sound approach works best for weapons with high ROF, whereas I prefer to sync the SFX playing to the actual logic (e.g. instantiating the bullet object/trail/whatever) for semi-automatic weapons or low ROFs. Jun 16 '20 at 12:32
2021-10-26 05:53:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22839967906475067, "perplexity": 1241.7035437000125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587799.46/warc/CC-MAIN-20211026042101-20211026072101-00493.warc.gz"}
http://observations.rene-grothmann.de/adaptive-integral-in-python/
# Adaptive Integral in Python

Continuing my efforts to learn Python for Math, I wrote a function for adaptive integration. It uses 10-point Gauss quadrature. To check the accuracy, the algorithm uses a bipartition of the interval. So each step takes 30 evaluations of the function: one 10-point rule over the complete step, and two over its halves for the check. If the accuracy is not high enough, the step size is decreased. Otherwise, the function tries to double the step size in the next step. Here is the complete code.

import numpy as np

def legendre(n):
    """Get the Legendre polynomials.
    n : maximal degree
    Return: list of Legendre polynomials
    """
    p0 = np.poly1d([1])
    if n <= 0:
        return [p0]
    p1 = np.poly1d([1, 0])
    px = p1
    if n == 1:
        return [p0, p1]
    v = [p0, p1]
    for i in range(1, n):
        p = ((2*i+1)*px*p1 - i*p0) / (i+1)
        p0 = p1
        p1 = p
        v.append(p1)
    return v

def make_gauss(n):
    """Compute the coefficients for Gauss integration.
    n : number of points
    Return: x, a
    x : Gauss points
    a : Gauss coefficients
    """
    v = legendre(n)
    x = np.roots(v[n])
    A = np.array([v[i](x) for i in range(n)])
    w = np.zeros(n)
    for i in range(n):
        p = np.polyint(v[i])
        w[i] = p(1) - p(-1)
    return x, np.linalg.solve(A, w)

gauss_x5, gauss_a5 = make_gauss(5)

def gauss5(f, a, b):
    """Gauss integration with 5 points.
    f : function of one variable
    a, b : interval bounds
    Return: integral
    """
    return np.sum(f((a+b)/2 + (b-a)/2*gauss_x5)*gauss_a5)*(b-a)/2

gauss_x10, gauss_a10 = make_gauss(10)

def gauss10(f, a, b):
    """Gauss integration with 10 points.
    f : function of one variable
    a, b : interval bounds
    Return: integral
    """
    return np.sum(f((a+b)/2 + (b-a)/2*gauss_x10)*gauss_a10)*(b-a)/2

def integrate(f, a, b, n=1, eps=1e-14):
    """Adaptive integral using 10-point Gauss.
    f : function of one variable
    a, b : interval bounds
    n=1 : initial step size is (b-a)/n
    eps : desired absolute accuracy
    Return: integral
    """
    h = (b-a)/n
    x = a
    I = gauss10(f, x, x+h)
    I1 = gauss10(f, x, x+h/2)
    I2 = gauss10(f, x+h/2, x+h)
    res = 0.
    while x < b-eps:
        ## print(h)
        if np.abs(I-(I1+I2)) > h/(b-a)*eps:
            h = h/2
            I = I1
            I1 = gauss10(f, x, x+h/2)
            I2 = gauss10(f, x+h/2, x+h)
        else:
            x = x+h
            h = 2*h
            if x+h >= b-eps:
                h = b-x
            res = res+I1+I2
            I = gauss10(f, x, x+h)
            I1 = gauss10(f, x, x+h/2)
            I2 = gauss10(f, x+h/2, x+h)
    return res

This algorithm is usually fast enough and very accurate. The implementation is by no means optimal. E.g., if a factor of 2 is used in the step size, a lot of computations are redone. Moreover, it might not be optimal to try to double the step size in each step. However, it works for interactive use, and that is what it is intended for.

Let me try to explain the code a bit. The function legendre() computes the Legendre polynomials using the recursion formula. These polynomials are orthogonal with respect to the weight w=1. The function returns a list of the polynomials. It makes use of the polynomial handling in Python (in old style).

The function make_gauss() computes the zeros of the Legendre polynomials using a function from Python, and coefficients a[k] such that the following is exact for all polynomials p up to degree n-1:

$$\sum_{k=1}^n a_k p(x_k) = \int\limits_{-1}^1 p(t) \,dt$$

It sets up n linear equations from this requirement, using the Legendre polynomials we just computed as test cases, and solves the equations. To compute the correct integral for polynomials, there is a function in Python.

Now we can define the functions for Gauss integration with 5 and 10 points, and use them for our adaptive integration.

For the demonstration, I created a very easy to use function fplot() which plots a function adaptively, using the code of the last posting.

def fplot(f, a, b, n=100, nmax=100000):
    """Simple plot of a function.
    a, b : interval bounds
    n=100 : initial step size (b-a)/n
    nmax=100000 : minimal step size (b-a)/nmax
    """
    # adaptive_ev comes from the previous posting; plt is matplotlib.pyplot
    x, y = adaptive_ev(f, a, b, n, nmax)
    fig, ax = plt.subplots(figsize=(8, 8))
    ax.grid(True)
    ax.plot(x, y)

I have now put all this and the code of the last posting into a Python module. When using the Python notebook interface in a browser, the file should be placed in the same directory as the „.ipynb“ file. The Python file can even be opened and edited in the Python notebook IDE. Then it can be used as follows.

import renestools as rt
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

def f(x):
    return np.abs(x)

rt.fplot(f, -np.sqrt(2), 1, n=1000)
print(rt.integrate(f, -np.sqrt(2), 1))

1.5

This is not easy to integrate numerically, since the function is not smooth. The result is exact to the last digit. Even the plot is not trivial: with an arbitrary step size, the point x=0 might not be met exactly, and the corner may look round.

Here is another example using the Gaussian distribution. All digits agree. Note that the Gauss function is almost zero for large x.

def f(x):
    return np.exp(-x**2)

print(rt.integrate(f, -20, 20))
print(np.sqrt(np.pi))

1.7724538509055159
1.7724538509055159

You can even make the integral into a function and plot it adaptively.

def F(x):
    return rt.integrate(f, 0, x)/np.sqrt(np.pi)

rt.fplot(F, 0, 5)
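As a quick sanity check of make_gauss() (my own addition, not from the original post): an n-point Gauss rule with Legendre nodes is exact for polynomials up to degree 2n-1, which is easy to verify with the nodes and weights it returns.

import numpy as np

# Nodes and weights of the 5-point rule from make_gauss above
x, a = make_gauss(5)

# The rule must be exact for t^8 (degree 8 <= 2*5 - 1 = 9)
approx = np.sum(a * x**8)
exact = 2/9  # integral of t^8 over [-1, 1]
print(approx - exact)  # ~1e-16, i.e. exact up to rounding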
2022-05-22 16:30:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7866660356521606, "perplexity": 3211.439467174503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545875.39/warc/CC-MAIN-20220522160113-20220522190113-00692.warc.gz"}
https://datascience.stackexchange.com/questions/51425/is-linear-regression-suitable-for-these-data
# Is linear regression suitable for these data?

I have a data set for predicting a continuous variable, $$Y$$. I have $$15$$ to $$20$$ potential feature variables, most of which are categorical and some of which are ordinal. These have been converted to numerical values. I have two questions.

1. Is linear regression suitable in this case?
2. If the variables do not show linear relationships with $$Y$$, is linear regression still suitable? Otherwise, which algorithms, hopefully existing in scikit-learn, might work?
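One way to set this up in scikit-learn (a sketch added for illustration; the column names are hypothetical, and a random forest is named as one common choice for nonlinear relationships):

    # Sketch: one-hot encode categorical features, then compare a linear
    # model against a random forest, which can capture nonlinearity.
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.pipeline import make_pipeline
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestRegressor

    cat_cols = ["color", "region"]   # hypothetical categorical columns
    pre = ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), cat_cols)],
        remainder="passthrough")

    linear = make_pipeline(pre, LinearRegression())
    forest = make_pipeline(pre, RandomForestRegressor(n_estimators=200))
    # Fit each on (X, y) and compare cross-validated R^2 scores.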
2021-09-20 03:03:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43826839327812195, "perplexity": 1033.0389525694318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056974.30/warc/CC-MAIN-20210920010331-20210920040331-00050.warc.gz"}
http://andrewgelman.com/category/literature/
Archive of posts filed under the Literature category.

## Maybe this paper is a parody, maybe it's a semibluff

Peter DeScioli writes: I was wondering if you saw this paper about people reading Harry Potter and then disliking Trump, attached. It seems to fit the shark attack genre. In this case, the issue seems to be judging causation from multiple regression with observational data, assuming that control variables are enough to narrow down to […]

## “From that perspective, power pose lies outside science entirely, and to criticize power pose would be a sort of category error, like criticizing The Lord of the Rings on the grounds that there’s no such thing as an invisibility ring, or criticizing The Rotter’s Club on the grounds that Jonathan Coe was just making it all up.”

From last year: One could make the argument that power pose is innocuous, maybe beneficial in that it is a way of encouraging people to take charge of their lives. And this may be so. Even if power pose itself is meaningless, the larger “power pose” story could be a plus. Of course, if power […]

## It is somewhat paradoxical that good stories tend to be anomalous, given that when it comes to statistical data, we generally want what is typical, not what is surprising. Our resolution of this paradox is . . .

From a blog comment a few years ago regarding an article by Robert Kosara: As Thomas and I discuss in our paper [When Do Stories Work? Evidence and Illustration in the Social Sciences], it is somewhat paradoxical that good stories tend to be anomalous, given that when it comes to statistical data, we generally want […]

## The Westlake Review

I came across this site one day: The Westlake Review is a blog dedicated to doing a detailed review and analysis of every novel Donald Westlake published under his own name, as well as under a variety of pseudonyms. These reviews will reveal major plot elements, though they will not be full synopses. People who […]

## Irwin Shaw, John Updike, and Donald Trump

So. I read more by and about Irwin Shaw. I read Shaw’s end-of-career collection of short stories and his most successful novel, The Young Lions, and also the excellent biography by Michael Shnayerson. I also read Adam Begley’s recent biography of John Updike, which was also very good, and it made me sad that probably […]

## An improved ending for The Martian

In this post from a couple years ago I discussed the unsatisfying end of The Martian. At the time, I wrote: The ending is not terrible—at a technical level it’s somewhat satisfying (I’m not enough of a physicist to say more than that), but at the level of construction of a story arc, it didn’t […]

## Classical statisticians as Unitarians

[cat picture] Christian Robert, Judith Rousseau, and I wrote: Several of the examples in [the book under review] represent solutions to problems that seem to us to be artificial or conventional tasks with no clear analogy to applied work. “They are artificial and are expressed in terms of a survey of 100 individuals expressing support […]

## From Whoops to Sorry: Columbia University history prof relives 1968

I haven’t had much contact with the history department here at Columbia. A bunch of years ago I co-taught a course with Herb Klein and some others, and the material from that class went into my book co-edited with Jeronimo Cortina, A Quantitative Tour of the Social Sciences. More recently, I’ve had some conversations with […]

## A quote from William James that could’ve come from Robert Benchley or S. J.
Perelman or Dorothy Parker

Following up on yesterday’s post, here’s a William James quote that could’ve been plucked right off the Algonquin Round Table: Is life worth living? It all depends on the liver.

## A collection of quotes from William James that all could’ve come from . . . Bill James!

From a few years ago, some quotes from the classic psychologist that fit within the worldview of the classic sabermetrician: Faith means belief in something concerning which doubt is theoretically possible. A chain is no stronger than its weakest link, and life is after all a chain. A great many people think they are thinking […]

## Design top down, Code bottom up

Top-down design means designing from the client application programmer interface (API) down to the code. The API lays out a precise functional specification, which says what the code will do, not how it will do it. Coding bottom up means coding the lowest-level foundations first, testing them, then continuing to build. Sometimes this requires dropping […]

## Reality meets the DeLilloverse

From 2009: “They thought ASU’s brand was too strong to compete with. Incarnate Word is now part of the Communiversity @ Surprise, a newly opened one-stop learning center for higher education in the northwest Valley.” I guess my statistics textbooks probably read like parodies of statistics textbooks, so from that perspective it makes sense that […]

## Journals for insignificant results

Tom Daula writes: I know you’re not a fan of hypothesis testing, but the journals in this blog post are an interesting approach to the file drawer problem. I’ve never heard of them or their like. An alternative take (given academia standard practice) is “Journal for XYZ Discipline papers that p-hacking and forking paths could […]

## Storytelling as predictive model checking

[cat picture] I finally got around to reading Adam Begley’s biography of John Updike, and it was excellent. I’ll have more on that in a future post, but for now I just want to share the point, which I’d not known before, that almost all of Updike’s characters and even the descriptions and events in […]

## I’m thinking of using these as the titles for my next 97 blog posts

Where do you think these actually came from? (No googling—that would be cheating.) P.S. Anyone who wants to know the answer can google it. But there were some great guesses in the comments. My favorite, from Frank: I’ve got to go with “before the colon” in questionable social science papers, e.g: “Don’t make me laugh: […]

## When do stories work, Process tracing, and Connections between qualitative and quantitative research

Jonathan Stray writes: I read your “when do stories work” paper (with Thomas Basbøll) with interest—as a journalist stories are of course central to my field. I wondered if you had encountered the “process tracing” literature in political science? It attempts to make sense of stories as “case studies” and there’s a nice logic of […]

## Comment of the year

In our discussion of research on the possible health benefits of a low-oxygen environment, Raghu wrote: This whole idea (low oxygen -> lower cancer risk) seems like a very straightforward thing to test in animals, which one can move to high and low oxygen environments . . .
And then Llewelyn came in for the […]

## Objects of the class “George Orwell”

George Orwell is an exemplar in so many ways: a famed truth-teller who made things up, a left-winger who mocked left-wingers, an author of a much-misunderstood novel (see “Objects of the class ‘Sherlock Holmes,’”) probably a few dozen more. But here I’m talking about Orwell’s name being used as an adjective. More specifically, “Orwellian” […]

## Christmas special: Survey research, network sampling, and Charles Dickens’ coincidences

It’s Christmas so what better time to write about Charles Dickens . . . Here’s the story: In traditional survey research we have been spoiled. If you work with atomistic data structures, a small sample looks like a little bit of the population. But a small sample of a network doesn’t look like the […]

## How to include formulas (LaTeX) and code blocks in WordPress posts and replies

It’s possible to include LaTeX formulas like $\int e^x \, \mathrm{d}x$. I entered it as $latex \int e^x \, \mathrm{d}x$. You can also generate code blocks like this

    for (n in 1:N)
      y[n] ~ normal(0, 1);

The way to format them is to use <pre> to open the code block and </pre> to close it. You can create […]
2017-09-22 20:28:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2823241651058197, "perplexity": 2482.078431763966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689192.26/warc/CC-MAIN-20170922202048-20170922222048-00367.warc.gz"}
http://theanalysisofdata.com/probability/C_2.html
## Probability

### The Analysis of Data, volume 1

Linear Algebra: Rank

## C.2. Rank

Definition C.2.1. The linear space spanned by the vectors $S=\{{\bb v}^{(1)},\ldots, {\bb v}^{(n)}\}$ is the set of all linear combinations of vectors in $S$. A basis of a linear space $S$ is any set of linearly independent vectors that span it. The dimension $\dim S$ of a linear space $S$ is the size of its basis.

Example C.2.1. The space $\R^n$ is spanned by the standard basis ${\bb e}^{(i)}, i=1,\ldots,n$ from Example C.1.4. Since the standard basis vectors are linearly independent, they are a basis for $\R^n$ in the sense of the previous definition. Another possible basis for $\R^n$ is $\{{\bb u}^{(i)}: i=1,\ldots,n\}$, where ${\bb u}^{(i)}$ is defined by $u^{(i)}_j=1$ if $j\leq i$ and 0 otherwise.

Definition C.2.2. The column space of a matrix $A\in\R^{n\times m}$ is the space $\col(A)$ spanned by the columns of $A$, or equivalently $\col(A)=\{A\bb v: \bb v\in\R^{m\times 1}\}\subset \R^n.$ We refer to $\dim\, \col(A)$ as the rank of the column space of $A$. The row space of $A\in\R^{n\times m}$ is the space $\row(A)$ spanned by the rows of $A$. The null space of $A\in\R^{n\times m}$ is $\nulll(A)=\{\bb v:A\bb v=\bb 0\}\subset \R^m.$

Proposition C.2.1. For any matrix $A$, $\dim\col(A)=\dim\row(A).$ We denote that number as $\rank(A)$.

Proof. Consider a matrix $A$ with $\dim\col(A)=r$ and arrange $r$ basis vectors of $\col(A)$ as the columns of a matrix $C$. Since the columns of $A$ are linear combinations of the columns of $C$, we have $A=CR$ for some matrix $R$ with $r$ rows. Every row of $A=CR$ is a linear combination of the rows of $R$, and thus the row space of $A$ is a subset of the row space of $R$, whose dimension is bounded by $r$. This shows $\dim\row(A)\leq\dim\col(A)$; the reverse inequality is obtained by applying the same argument to $A^{\top}$.

Definition C.2.3. A matrix $A\in\R^{n\times m}$ has full rank if $\rank(A)=\min(n,m)$. Otherwise the matrix has low rank.

Proposition C.2.2. $\rank(AB)\leq \min (\rank A,\rank B).$

Proof. Since the columns of $AB$ are linear combinations of the columns of $A$, $\rank(AB)\leq \rank(A)$. Similarly, since the rows of $AB$ are linear combinations of the rows of $B$, $\rank(AB)\leq \rank(B)$.

Corollary. For non-singular matrices $P,Q$ we have $\rank(PAQ) = \rank(A).$

Proof. Applying the above proposition, we have \begin{align*} \rank(PAQ) &\leq \rank(AQ)\leq \rank (A),\\ \rank(A) &= \rank(IAI)=\rank(P^{-1}PAQQ^{-1})\leq \rank(PAQ). \end{align*}

Proposition C.2.3. For a matrix $A\in\mathbb{R}^{m\times n}$, $\rank(A)+\dim(\nulll(A))=n.$

Proof. For $s=\dim(\nulll(A))$, let ${\bb \alpha}^{(i)}, i=1,\ldots,s$ be a basis for $\nulll(A)$, and extend it with vectors ${\bb\beta}^{(i)}$, $i=1,\ldots,n-s$, so that together they span $\R^n$. Given $\bb v\in\col(A)$, we have \begin{align*} \bb v&=A\bb x= A \left(\sum_{i=1}^s a_i {\bb\alpha}^{(i)} + \sum_{i=1}^{n-s} b_i {\bb\beta}^{(i)}\right)=\sum_{i=1}^{n-s} b_i A {\bb \beta}^{(i)}. \end{align*} (The second equality above follows from representing $\bb x$ as a linear combination of the basis composed of ${\bb \alpha}^{(i)}, i=1,\ldots,s$ and ${\bb\beta}^{(i)}$, $i=1,\ldots,n-s$; the third follows since $A{\bb\alpha}^{(i)}=\bb 0$.) We thus have that every vector in $\col(A)$ can be written as a linear combination of the $n-s$ vectors ${\bb\gamma}^{(i)}=A{\bb\beta}^{(i)}$. This shows that $\rank(A)\leq n-\dim(\nulll(A))$.
Assuming that $\sum_{i=1}^{n-s} c_i {\bb\gamma}^{(i)}=0$, we have $A\sum_{i=1}^{n-s} c_i {\bb\beta}^{(i)}=0$, implying $\sum_{i=1}^{n-s} c_i {\bb\beta}^{(i)}\in \nulll(A).$ But $\nulll(A)$ is spanned by the ${\bb\alpha}^{(i)}$, and the ${\bb\alpha}^{(i)}$ together with the ${\bb\beta}^{(i)}$ are linearly independent, so this is possible only if $c_1=\cdots=c_{n-s}=0$, which implies that ${\bb\gamma}^{(i)}$, $i=1,\ldots,n-s$ are linearly independent. Since the column space of $A$ is spanned by $n-s$ linearly independent vectors, $\rank(A)=n-s$.

Proposition C.2.4. $\rank(A)=\rank(A^{\top})=\rank(AA^{\top})=\rank(A^{\top}A).$

Proof. Since $A\bb x=\bb 0$ implies $A^{\top}A\bb x=\bb 0$, and since $A^{\top}A\bb x=\bb 0$ implies $\bb x^{\top}A^{\top}A\bb x=\|A\bb x\|^2=0$, which in turn implies $A\bb x=\bb 0$, we have $\nulll(A)=\nulll(A^{\top}A)$. Since both matrices have the same number of columns, the previous proposition implies that $\rank(A)=\rank(A^{\top}A)$. Applying the same argument to $A^{\top}$ shows that $\rank(A^{\top})=\rank(AA^{\top})$. We conclude the proof by noting that $\rank(A)=\rank(A^{\top})$ as implied by Proposition C.2.1.
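A quick numerical illustration of Proposition C.2.4 (my addition, not part of the text; it uses NumPy):

    import numpy as np

    rng = np.random.default_rng(0)
    # Build a 5x7 matrix of rank at most 3.
    A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))
    for M in (A, A.T, A @ A.T, A.T @ A):
        print(np.linalg.matrix_rank(M))   # all four agree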
2017-07-24 18:46:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9869161248207092, "perplexity": 73.95782845978914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424909.50/warc/CC-MAIN-20170724182233-20170724202233-00204.warc.gz"}
https://brilliant.org/problems/are-you-smart-enough-to-calculate-the-square-root/
# My Unique Twist

Algebra Level 2

We have a two-digit number $$\overline{AB}$$. If

$( \overline{AB})^{2} = (20 \times A \times B) + 425$

calculate the value of $$A+B$$.

Note: Here $$\overline{AB}$$ means a two-digit number, as opposed to the multiplication $$A \times B$$.
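A brute-force check one could run (my addition; not part of the problem statement, and running it reveals the answer):

    # Search all two-digit numbers AB for (10A+B)^2 == 20*A*B + 425
    for A in range(1, 10):
        for B in range(0, 10):
            n = 10 * A + B
            if n * n == 20 * A * B + 425:
                print(n, A + B)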
2017-07-21 08:51:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4203758239746094, "perplexity": 2969.2030725902973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423764.11/warc/CC-MAIN-20170721082219-20170721102219-00233.warc.gz"}
https://math.libretexts.org/Courses/Mount_Royal_University/MATH_3200%3A_Mathematical_Methods/Summary_Tables/Summary_Table_Of_Integrals
# Summary Table Of Integrals

- $\int u^\alpha\; du={u^{\alpha+1}\over\alpha+1}+c$, $\alpha\ne-1$
- $\int{du\over u}=\ln|u|+c$
- $\int\cos u\; du=\sin u+c$
- $\int \sin u\; du=-\cos u+c$
- $\int \tan u\; du=-\ln|\cos u|+c$
- $\int \cot u\; du=\ln|\sin u|+c$
- $\int \sec^2 u\; du=\tan u+c$
- $\int \csc^2 u\; du=-\cot u+c$
- $\int \sec u\; du=\ln|\sec u+\tan u|+c$
- $\int\cos^2 u\; du={u\over2}+{1\over4}\sin2u +c$
- $\int\sin^2 u\; du={u\over2}-{1\over4}\sin2u +c$
- $\int {du\over 1+u^2}=\tan^{-1}u+c$
- $\int {du\over\sqrt{1-u^2}}=\sin^{-1}u+c$
- $\int {1\over u^2-1}\; du={1\over2}\ln\left|{u-1\over u+1}\right|+c$
- $\int \cosh u\; du=\sinh u+c$
- $\int \sinh u\; du=\cosh u+c$
- $\int u\; dv=uv-\int v\; du$
- $\int u\cos u\; du=u\sin u +\cos u+c$
- $\int u\sin u\; du=-u\cos u +\sin u+c$
- $\int ue^u\; du=ue^{u}-e^{u} +c$
- $\int e^{\lambda u}\cos\omega u\; du={e^{\lambda u}(\lambda\cos \omega u+\omega\sin\omega u)\over \lambda^2+\omega^2}+c$
- $\int e^{\lambda u}\sin\omega u\; du={e^{\lambda u}(\lambda\sin \omega u-\omega\cos\omega u)\over\lambda^2+\omega^2}+c$
- $\int \ln|u|\; du=u\ln|u|-u+c$
- $\int u\ln|u|\; du={u^2\ln|u|\over2}-{u^2\over4}+c$
- $\int\cos\omega_1u\cos\omega_2u\,du={\sin (\omega_1+\omega_2)u\over2(\omega_1+\omega_2)} +{\sin(\omega_1-\omega_2)u\over2(\omega_1-\omega_2)}+c\quad (\omega_1\ne\pm \omega_2)$
- $\int\sin\omega_1 u\sin\omega_2 u\,du= -{\sin(\omega_1+\omega_2)u\over2(\omega_1+\omega_2)} +{\sin(\omega_1-\omega_2)u\over2(\omega_1-\omega_2)}+c \quad (\omega_1\ne\pm \omega_2)$
- $\int\sin\omega_1u\cos\omega_2 u\,du=-{\cos (\omega_1+\omega_2)u\over2(\omega_1+\omega_2)} -{\cos(\omega_1-\omega_2)u\over2(\omega_1-\omega_2)}+c\quad (\omega_1\ne\pm \omega_2)$
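To spot-check an entry symbolically (my addition; requires SymPy), e.g. the $\int\cos^2 u\; du$ formula:

    import sympy as sp

    u = sp.symbols('u')
    lhs = sp.integrate(sp.cos(u)**2, u)
    rhs = u/2 + sp.sin(2*u)/4
    print(sp.simplify(lhs - rhs))   # 0, so the table entry checks out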
2019-02-19 16:02:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9773699641227722, "perplexity": 634.5072288300979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247490225.49/warc/CC-MAIN-20190219142524-20190219164524-00561.warc.gz"}
https://runestone.academy/runestone/static/cppds/AlgorithmAnalysis/AnAnagramDetectionExample.html
# 2.4. An Anagram Detection Example

A good example problem for showing algorithms with different orders of magnitude is the classic anagram detection problem for strings. One string is an anagram of another if the second is simply a rearrangement of the first. For example, "heart" and "earth" are anagrams. The strings "python" and "typhon" are anagrams as well. For the sake of simplicity, we will assume that the two strings in question are of equal length and that they are made up of symbols from the set of 26 lowercase alphabetic characters. Our goal is to write a Boolean function that will take two strings and return whether they are anagrams.

## 2.4.1. Solution 1: Checking Off

Our first solution to the anagram problem will check the lengths of the strings and then see that each character in the first string actually occurs in the second. If it is possible to "check off" each character, then the two strings must be anagrams. Checking off a character will be accomplished by replacing it with the special C++ character \0. The first step in the process will be to convert the second string to a local second string for checking off. Each character from the first string can be checked against the characters in the local second string and, if found, checked off by replacement. ActiveCode 1 shows this function.

To analyze this algorithm, we need to note that each of the n characters in s1 will cause an iteration through up to n characters in the array from s2. Each of the n positions in the array will be visited once to match a character from s1. The number of visits then becomes the sum of the integers from 1 to n. We stated earlier that this can be written as

$\begin{split}\sum_{i=1}^{n} i &= \frac {n(n+1)}{2} \\ &= \frac {1}{2}n^{2} + \frac {1}{2}n\end{split}$

As $$n$$ gets large, the $$n^{2}$$ term will dominate the $$n$$ term and the $$\frac {1}{2}$$ can be ignored. Therefore, this solution is $$O(n^{2})$$.

## 2.4.2. Solution 2: Sort and Compare

Another solution to the anagram problem makes use of the fact that even though s1 and s2 are different, they are anagrams only if they consist of exactly the same characters. So, if we begin by sorting each string alphabetically, from a to z, we will end up with the same string if the original two strings are anagrams. ActiveCode 2 shows this solution.

At first glance you may be tempted to think that this algorithm is $$O(n)$$, since there are three consecutive simple iterations: the first two to convert the strings to char arrays and the last to compare the n characters after the sorting process. However, the two calls to the C++ sort function are not without their own cost. As we will see in a later chapter, sorting is typically either $$O(n^{2})$$ or $$O(n\log n)$$, so the sorting operations dominate the iteration. In the end, this algorithm will have the same order of magnitude as that of the sorting process.

## 2.4.3. Solution 3: Brute Force

A brute force technique for solving a problem typically tries to exhaust all possibilities. For the anagram detection problem, we can simply generate an array of all possible strings using the characters from s1 and then see if s2 occurs. However, there is a difficulty with this approach. When generating all possible strings from s1, there are n possible first characters, $$n-1$$ possible characters for the second position, $$n-2$$ for the third, and so on. The total number of candidate strings is $$n*(n-1)*(n-2)*...*3*2*1$$, which is $$n!$$.
Although some of the strings may be duplicates, the program cannot know this ahead of time and so it will still generate $$n!$$ different strings. It turns out that $$n!$$ grows even faster than $$2^{n}$$ as n gets large. In fact, if s1 were 20 characters long, there would be $$20!=2,432,902,008,176,640,000$$ possible candidate strings. If we processed one possibility every second, it would still take us 77,146,816,596 years to go through the entire array. This is probably not going to be a good solution.

## 2.4.4. Solution 4: Count and Compare

Our final solution to the anagram problem takes advantage of the fact that any two anagrams will have the same number of a’s, the same number of b’s, the same number of c’s, and so on. In order to decide whether two strings are anagrams, we will first count the number of times each character occurs. Since there are 26 possible characters, we can use an array of 26 counters, one for each possible character. Each time we see a particular character, we will increment the counter at that position. In the end, if the two arrays of counters are identical, the strings must be anagrams. ActiveCode 3 shows this solution.

Again, the solution has a number of iterations. However, unlike the first solution, none of them are nested. The first two iterations, used to count the characters, are both based on n. The third iteration, comparing the two arrays of counts, always takes 26 steps since there are 26 possible characters in the strings. Adding it all up gives us $$T(n)=2n+26$$ steps. That is $$O(n)$$. We have found a linear order of magnitude algorithm for solving this problem.

Before leaving this example, we need to say something about space requirements. Although the last solution was able to run in linear time, it could only do so by using additional storage to keep the two arrays of character counts. In other words, this algorithm sacrificed space in order to gain time. This is a common occurrence. On many occasions you will need to make decisions between time and space trade-offs. In this case, the amount of extra space is not significant. However, if the underlying alphabet had millions of characters, there would be more concern. As a computer scientist, when given a choice of algorithms, it will be up to you to determine the best use of computing resources given a particular problem.

Self Check

Q-1: Given the following code fragment, what is its Big-O running time?

    int main(){
        int test = 0;
        for (int i = 0; i < n; i++){
            for (int j = 0; j < n; j++){
                test = test + i * j;
            }
        }
        return 0;
    }

• O(n): No. In an example like this you want to count the nested loops, especially the loops that are dependent on the same variable, in this case, n.
• O(n^2): Right! A nested loop like this is O(n^2).
• O(log n): No. log n typically is indicated when the problem is iteratively made smaller.
• O(n^3): No. In an example like this you want to count the nested loops, especially the loops that are dependent on the same variable, in this case, n.

Q-2: Given the following code fragment, what is its Big-O running time?

    int main(){
        int test = 0;
        for (int i = 0; i < n; i++){
            test = test + 1;
        }
        for (int j = 0; j < n; j++){
            test = test - 1;
        }
        return 0;
    }

• O(n): Right! Even though there are two loops they are not nested. You might think of this as O(2n) but we can ignore the constant 2.
• O(n^2): No. Be careful: in counting loops you want to look carefully at whether or not the loops are nested.
• O(log n): No.
log n typically is indicated when the problem is iteratively made smaller.
• O(n^3): No. Be careful: in counting loops you want to look carefully at whether or not the loops are nested.

Q-3: Given the following code fragment, what is its Big-O running time?

    int main(){
        int i = n;
        int count = 0;
        while (i > 0){
            count = count + 1;
            i = i / 2;
        }
        return 0;
    }

• O(n): No. Look carefully at the loop variable i. Notice that the value of i is cut in half each time through the loop. This is a big hint that the performance is better than O(n).
• O(n^2): No. Check again: is this a nested loop?
• O(log n): Right! The value of i is cut in half each time through the loop so it will only take log n iterations.
• O(n^3): No. Check again: is this a nested loop?
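The ActiveCode listings referenced above do not survive in this text version. As a rough illustration (my own sketch, in Python rather than the chapter's C++), Solution 4 could look like this:

    # Count-and-compare anagram check: O(n) time, O(26) extra space
    def anagram(s1, s2):
        c1 = [0] * 26
        c2 = [0] * 26
        for ch in s1:
            c1[ord(ch) - ord('a')] += 1
        for ch in s2:
            c2[ord(ch) - ord('a')] += 1
        return c1 == c2   # 26 comparisons, independent of n

    print(anagram("heart", "earth"))    # True
    print(anagram("python", "typhon"))  # True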
2019-02-22 13:11:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4533616006374359, "perplexity": 438.3873474084952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247517815.83/warc/CC-MAIN-20190222114817-20190222140817-00190.warc.gz"}
https://tug.org/pipermail/texhax/2011-July/017915.html
# [texhax] \headline in LaTeX document

Tue Jul 19 17:46:57 CEST 2011

On Tue, Jul 19, 2011 at 06:19:37PM +0300, v_2e at ukr.net wrote:
> On Tue, 19 Jul 2011 17:13:32 +0200
> Heiko Oberdiek <heiko.oberdiek at googlemail.com> wrote:
> >
> > Probably your document is a plain TeX document rather than
> > a LaTeX document. Try `pdftex' instead of `pdflatex'.
> >
>   I thought so too for some moment, but it has a "\documentclass"
> string in it.

And what's the problem then?

Earlier you wrote:

> but I was given a LaTeX
> template containing such commands which I don't know how to process
> and haven't found a clear answer to this question.
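For context (this note and sketch are my addition, not part of the thread): \headline={...} is how plain TeX sets a running head, which is why such a template fails under pdflatex. In LaTeX, the usual route to the same effect is a package such as fancyhdr, roughly:

    % Minimal LaTeX sketch of a running head, replacing plain TeX's \headline
    \documentclass{article}
    \usepackage{fancyhdr}
    \pagestyle{fancy}
    \fancyhead[C]{My running head}  % centered header text
    \begin{document}
    Hello, world.
    \end{document}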
2021-03-07 00:19:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.970289409160614, "perplexity": 2518.377507658487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375529.62/warc/CC-MAIN-20210306223236-20210307013236-00458.warc.gz"}
http://joshkos.blogspot.com/2008/02/adventure-in-oxford.html
## 2008/02/16

    foldr-univr : {A B : Set} -> (h : [ A ] -> B) -> forall f e ->
                  (h [] ≡ e) ->
                  (forall x xs -> h (x ∷ xs) ≡ f x (h xs)) ->
                  (forall x -> h x ≡ foldr f e x)
    foldr-univr h f e base-cond step-cond [] = base-cond
    foldr-univr h f e base-cond step-cond (x ∷ xs) =
      ≡-trans (step-cond x xs)
              (≡-cong (f x) (foldr-univr h f e base-cond step-cond xs))

    foldr-fusion : {A B C : Set} -> (h : B -> C) ->
                   {f : A -> B -> B} -> {g : A -> C -> C} -> {e : B} ->
                   (forall x y -> h (f x y) ≡ g x (h y)) ->
                   forall x -> (h ∘ foldr f e) x ≡ foldr g (h e) x
    foldr-fusion {_} {_} {_} h {f} {g} {e} fuse-cond =
      foldr-univr (h ∘ foldr f e) g (h e) ≡-refl
                  (\x xs -> fuse-cond x (foldr f e xs))
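(The post is code only; the following illustration is my addition, not part of it.) The fusion law proved above says that h ∘ foldr f e = foldr g (h e) whenever h (f x y) ≡ g x (h y). A quick numerical check of the same law in Python:

    from functools import reduce

    # foldr f e xs, written with reduce over the reversed list
    def foldr(f, e, xs):
        return reduce(lambda acc, x: f(x, acc), reversed(xs), e)

    h = lambda y: 2 * y
    f = lambda x, y: x + y
    g = lambda x, y: 2 * x + y   # satisfies h(f(x, y)) == g(x, h(y))

    xs = [1, 2, 3, 4]
    print(h(foldr(f, 0, xs)), foldr(g, h(0), xs))   # both print 20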
2017-10-17 20:42:38
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5654937624931335, "perplexity": 10502.227329526007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822488.34/warc/CC-MAIN-20171017200905-20171017220905-00880.warc.gz"}
https://puzzling.stackexchange.com/questions/25287/switch-the-knights
Switch The Knights

On a small $4 \times 3$ chessboard, the top row is filled with black knights and the bottom row with white knights. On each move, you may move one knight (as it moves in chess) to an unoccupied square. How many moves does it take to switch the white knights with the black knights? Why can't it be done in fewer?

Source: Algorithmic Puzzles, Anany and Maria Levitin

• I think the answer is 26, but I'm not sure. – user88 Jan 12 '16 at 5:18
• Do you have to move black/white alternately, like in real chess? Jan 12 '16 at 7:51
• @user1717828 No, you do not need to alternate colors. Sorry for the lack of clarity, the "as in chess" phrase was only supposed to refer to the 2-1 ell shape of a knight move Jan 12 '16 at 14:26
• Is anyone else wondering whether black or white wins this position (or if it's a tie)? Jan 12 '16 at 21:04
• As there are no kings on the board, "winning" in the standard chess sense is not possible ;-) Jan 12 '16 at 22:36

You need at least 16 moves.

1. Let's make the task visually simpler. The initial board is:

    a4 b4 c4
    a3 b3 c3
    a2 b2 c2
    a1 b1 c1

We cut it into 12 cells and connect only those which are separated by exactly one knight's move. It is easy to check that the result is the following:

    c4 - a3 - c2 - a1
     |    |    |    |
    b2   b1   b4   b3
     |    |    |    |
    a4 - c3 - a2 - c1

So the knights are placed like this now:

    B1 -  . -  . - W1
     |    |    |    |
     .   W2   B2    .
     |    |    |    |
    B3 -  . -  . - W3

2. Now it is quite easy to find a strategy for a quick switch of the knights. In 2+2+2+3=9 moves we have:

    W2 -  . - B1 - W1
     |    |    |    |
     .   B3   B2    .
     |    |    |    |
    W3 -  . -  . -  .

In 9+1+2+2=14 moves we have:

    W2 - B1 -  . -  .
     |    |    |    |
     .   B3   W1    .
     |    |    |    |
    W3 -  . -  . - B2

And in 14+2=16 moves we have the final position:

    W2 -  . -  . - B1
     |    |    |    |
     .   B3   W1    .
     |    |    |    |
    W3 -  . -  . - B2

The actual chess moves have been perfectly illustrated by GOTO 0.

3. It is also easy to show that you can't perform the task faster. Indeed, imagine there are only black knights. It will take 2 moves for B2 to reach a closest final position, and at least 2 and 3 moves for B1 and B3, since they can't both go to the same cell. This makes 7 moves in total; furthermore, there is only one way (up to symmetry) you can move them. Similarly, you need at least 7 moves for the white knights. And, finally, since black and white are on the board at the same time and their paths cross, you will need at least 2 additional moves to let them pass each other.

• This is brilliant! Jan 12 '16 at 18:06
• You just used my method. Anyways, good answer! +1 Jan 14 '16 at 10:48
• @ghosts_in_the_code, this is a standard method for such puzzles:) And yeah, you almost developed it yourself (so I gave you +1 as well), but still, rearranging the cells into a clear picture is crucial here. This is even more powerful in this puzzle: puzzling.stackexchange.com/questions/25358/… Jan 14 '16 at 10:57

I found a solution that uses 16 moves. After exhaustively checking that there is no solution in 14 moves, I conclude that 16 moves is optimal, because after any odd number of moves the number of white and black squares occupied by knights cannot be equal.

• Thanks for the edit @Ian. Did we get the solution at the same time, or can you make graphics just so quickly? Jan 12 '16 at 17:04
• Nah, that's all yours, I just made it pretty. :) Jan 12 '16 at 17:06
• There's an error in step 15. Not a logical one, just in the picture. Jan 12 '16 at 17:08
• @dpwilson Thanks, I'm not Ian MacDonald, but I fixed that now.
Jan 12 '16 at 17:23

Edit: Now that @GOTO 0 got it in 16, I can at least prove that his solution is optimal.

Proof: The minimum possible number of moves, if we could move through pieces, would be seven per side. This is a total of 14 moves.

Proof that this is ideal: no piece can make it to the other side in one move because the positions are three squares apart. If every piece could make it to the other side in just two moves, then each piece would have to end on the same color square it started on. But that's not possible, because the top and bottom starting places don't have the same number of black/white squares. Thus it must take at least seven moves per side, and seven is possible. It involves (for Black, where D = down, R = right, L = left) moving the Top Left knight DRR > DDL, moving the Top Middle knight DDL > DRR, and moving the Top Right knight DLL > DRR > DLL. This can be mirrored on the other side for a total of 14 moves.

But it cannot be done in 14 moves, because there will be a conflict!

Proof of conflict: In order to move across in seven moves, one of each side's corner knights and both middle knights have to make it across in just two. Let's assume without loss of generality that the Top Left knight does so. This means the Top Right knight must take three moves, and also that the Bottom Right knight must take three moves (if the two knights taking three moves are on opposite corners, they take the same path as each other). Now, when can the Top Right knight move? If the Top Right knight ever moves, then he occupies the square DLL of his start. However, the Bottom Middle knight needs to go through there, so TR will have to move again before the Bottom Middle knight does. Our stack is now TR > BM. But the TR knight can't move again (DRR) without getting into a permanent block with BL, who wants to move to that same square. So we have a stack of BL > TR > BM. If BL has to move first, he'll have to get out of the way entirely, which requires TM to move to make room, giving us a stack of TM > BL > TR > BM. If TM moves at least once (remember, he'll go DLL because middles swap with three-move knights), then he's in a permanent block with BR, so BR had better move first. This gives BR > TM > BL > TR > BM. But then BR needs to get out of the way entirely, resulting in him moving ULL > URR... and getting into a permanent block with TL. TL had better avoid that by moving first, giving us TL > BR > TM > BL > TR > BM. But in order for TL to move and get out of the way, he'll need to take the spot of BM, which means... uh oh, our stack is circular now, which is a contradiction. BM > TL > BR > TM > BL > TR > BM can't be done. But our assumption was only that TR moves, and negating that is also contradictory. Thus the 14-move solution is not possible.

If this were done in 15 moves, then the combination of all six knights would be on at least one square of a different color from when they started, since only even numbers of moves can maintain color. However, since the knights start and end in the same combined positions, they start and end on the same combined colors. Thus, this cannot be done in 15 moves.

Therefore, the minimum possible is 16, and @GOTO 0 proved by example that this can be done.

My best was: 20 moves, in a variety of possible ways. Here's one:

[diagram not preserved in this text version]

• what did you use to make this grid? Jan 12 '16 at 13:28
• Excel. Just select all cells, drag resize to make them roughly square (or exactly square if you're OCD), and use cell border styles to get formatting.
It's wonderful for low-investment puzzle solving and scratch work! Jan 12 '16 at 15:19
• @manshu you can also click on your first column header, shift-click on the last (to select all the columns), then right-click somewhere in your selection to get the Column Width dialog option and put in a number. – WBT Jan 13 '16 at 14:48

Maybe only a basic hint for a proof, but here are my thoughts.

Here is the playing area, where an X represents a cell in the playing area.

    X X X
    X X X
    X X X
    X X X

Note that the two centre cells are connected to only two cells each. So we can change the playing area to the one given below, without changing the problem in any way:

X X X X X X X X X X X X

Now the four corners are also connected to only two cells each, and not to each other. Hence we can change the playing area to

X X X X X X X X X X X X

Adding the playing pieces makes the grid as follows:

X B B B X X X X W W W X

This representation is equivalent to the original puzzle; any solution that works here also works there and vice versa. It is only that it is easier to see which cell is connected to which in the figure given above. I have no idea how to prove an optimal solution, though.

I can switch the black knights with the white knights in just 1 move: rotate the board 180 degrees!
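Not from the thread, but a brute-force way to confirm the optimum: a breadth-first search over all placements of the six knights finds the shortest move sequence. A sketch in Python (assuming, as clarified in the comments, that colors need not alternate):

    from collections import deque

    ROWS, COLS = 4, 3

    def knight_moves(cell):
        # All knight destinations from a cell on the 4x3 board
        r, c = divmod(cell, COLS)
        out = []
        for dr, dc in ((1, 2), (2, 1), (-1, 2), (-2, 1),
                       (1, -2), (2, -1), (-1, -2), (-2, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < ROWS and 0 <= nc < COLS:
                out.append(nr * COLS + nc)
        return out

    MOVES = [knight_moves(i) for i in range(ROWS * COLS)]
    start = tuple('B' * COLS + '.' * (COLS * (ROWS - 2)) + 'W' * COLS)
    goal = tuple('W' * COLS + '.' * (COLS * (ROWS - 2)) + 'B' * COLS)

    dist = {start: 0}
    q = deque([start])
    while q:
        s = q.popleft()
        if s == goal:
            print(dist[s])   # expected: 16
            break
        for i, piece in enumerate(s):
            if piece == '.':
                continue
            for j in MOVES[i]:
                if s[j] == '.':
                    t = list(s)
                    t[i], t[j] = '.', piece
                    t = tuple(t)
                    if t not in dist:
                        dist[t] = dist[s] + 1
                        q.append(t)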
2022-01-21 18:41:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4878106117248535, "perplexity": 1229.1397563621524}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303512.46/warc/CC-MAIN-20220121162107-20220121192107-00184.warc.gz"}
https://www.physicsforums.com/threads/gravitationtrouble-setting-up-equation.188279/
# Gravitation: trouble setting up equation

1. Oct 1, 2007

### Saladsamurai

My problem lies in setting this up. A particle of mass M is split into two pieces, M and M-m, which are set some distance apart. What ratio of m/M maximizes the magnitude of the gravitational attraction?

I will definitely be needing

$$F_g=\frac{Gm_1m_2}{r^2}$$

I know that after making appropriate substitutions I get

$$F_g=\frac{GM(M-m)}{r^2}$$

but my problem is in how to compare what happens as m-->M? Any thoughts on the set up?

Thanks, Casey

2. Oct 1, 2007

### nrqed

You mean that the pieces are m and M-m!! Basically, you have to optimize the product m(M-m) as a function of m. Just take the derivative with respect to m and set the derivative equal to zero. That will give you the optimum m, and then you may calculate the ratio m/M.

Last edited: Oct 1, 2007

3. Oct 1, 2007

### Saladsamurai

AWWWW!! I knew that! I wrote out the product wrong! Thanks nrqed

Casey!
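Completing the calculation the reply suggests (a quick check I'm adding, not part of the thread): d/dm [m(M-m)] = M - 2m = 0 gives m = M/2, so the ratio m/M = 1/2 maximizes the attraction. Verified symbolically with SymPy:

    import sympy as sp

    m, M = sp.symbols('m M', positive=True)
    sol = sp.solve(sp.diff(m * (M - m), m), m)
    print(sol, [s / M for s in sol])   # [M/2] -> ratio 1/2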
2017-07-28 15:09:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5914055705070496, "perplexity": 2126.98370734135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500550969387.94/warc/CC-MAIN-20170728143709-20170728163709-00479.warc.gz"}
https://puzzling.stackexchange.com/questions/87595/one-corner-rotated-in-4x4x4?noredirect=1
# One corner rotated in 4x4x4

I know that in a 3x3x3 this should never happen (unsolvable state), but is the same true in a 4x4x4? (The rest of the cube is solved.) The following answer seems to say that everything about the 3x3x3 is true about NxNxN, except that the orientation isn't fixed. Why is a single-corner twist not a valid position on a Rubik's cube? Does that mean that my orientation is wrong? The cube came solved, and since this isn't a speed cube, I don't see how I could have accidentally twisted a corner.

• I'm 99.5% sure it's impossible. If you can prove it's impossible on a 2x2x2 cube, that should be enough to extend to cubes of all sizes. Aug 31 '19 at 1:57
• Sorry Joe, but you DID accidentally twist a corner. The corners behave the same on every nxnxn cube (n>1). Sep 17 '19 at 21:01
• Yeah, @Christopher Mowla, I'm guessing someone was playing with it when it was mixed up, and that person twisted the corner, because I think I was careful with it. Nevertheless, I eventually looked up how to take it apart, and I fixed it. – Joe Sep 18 '19 at 0:35

It is impossible. If you ignore all pieces but the corners, it is equivalent to a 2x2x2 cube. And it is impossible to turn one corner of a 2x2x2. This is valid for any size, i.e. every NxNxN cube. Credit goes to william122 who gave this answer first.

It is impossible, but I'll try to explain it in a way that helps understanding. Along each of the three axes of rotation there are four layers (I'm not sure if there's a more technical term) which can be freely rotated around their respective axis. Imagine shifting the centre gap so that the second layer becomes thicker and the third layer becomes thinner, like this:

[ASCII drawing: three 4x4 grids in which the horizontal cut below the second layer shifts downward, thickening layer 2 and thinning layer 3]

Imagine doing this 'all the way' so that layer 3 becomes impossibly thin. Now it is functionally identical to a 3x3 cube, and we have not removed any functionality from the initial 4x4 cube. This demonstrates that the middle layers do not affect the corners at all. Hope this helps.

• As soon as you shift the cut away from the centre, it is no longer a functional cube. After a half turn of the face, the shifted cut no longer lines up with itself, so middle layer turns along one of the other axes are blocked, effectively bandaging the middle layers together. A better way to demonstrate that the middle layers do not affect the corners is to remove their stickers. Oct 1 '19 at 4:08

Yes. It is impossible, because a 4x4 can be reduced to a 3x3, via reduction, and a corner would be exactly the same as a 3x3 corner.

• You need to elaborate on this... A 4x4x4 does not have a fixed center, so it's not as easy as you think. Aug 31 '19 at 7:05
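For completeness, here is the standard invariant behind these answers, stated as a worked equation (my summary, not quoted from the thread). Assign each of the 8 corners an orientation $o_i \in \{0,1,2\}$, counting clockwise twists away from solved. Every outer face turn changes the total by a multiple of 3, and inner slice turns do not touch corners at all, so in every reachable position

$$\sum_{i=1}^{8} o_i \equiv 0 \pmod{3}.$$

A single twisted corner makes this sum 1 or 2 mod 3, so it is unreachable by legal turns on any NxNxN cube.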
2021-10-28 13:36:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5172630548477173, "perplexity": 622.7639880578649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588341.58/warc/CC-MAIN-20211028131628-20211028161628-00570.warc.gz"}
https://www.techwhiff.com/issue/should-people-use-animals-for-the-purposes-of-entertainment--137642
# Should people use animals for the purposes of entertainment? Write in 4-5 sentences
2023-01-27 14:28:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4579649567604065, "perplexity": 3457.886120088291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494986.94/warc/CC-MAIN-20230127132641-20230127162641-00450.warc.gz"}
http://biblioteca.universia.net/html_bura/verColeccion/params/id/2120.html
## Collection resources

Project Euclid (Hosted at Cornell University Library) (181,109 resources)

Journal of Symbolic Logic

Showing resources 1 - 20 of 11,931

1. Low level nondefinability results: Domination and recursive enumeration - Cai, Mingzhong; Shore, Richard A. We study low level nondefinability in the Turing degrees. We prove a variety of results, including, for example, that being array nonrecursive is not definable by a $\Sigma_{1}$ or $\Pi_{1}$ formula in the language $(\leq ,\REA)$ where $\REA$ stands for the "r.e. in and above" predicate. In contrast, this property is definable by a $\Pi_{2}$ formula in this language. We also show that the $\Sigma_{1}$-theory of $(\mathcal{D},\leq ,\REA)$ is decidable.

2. The theory of tracial von Neumann algebras does not have a model companion - Goldbring, Isaac; Hart, Bradd; Sinclair, Thomas In this note, we show that the theory of tracial von Neumann algebras does not have a model companion. This will follow from the fact that the theory of any locally universal, McDuff II$_1$ factor does not have quantifier elimination. We also show how a positive solution to the Connes Embedding Problem implies that there can be no model-complete theory of II$_1$ factors.

3. Invariance properties of almost disjoint families - Arciga-Alejandre, M.; Hrušák, M.; Martinez-Ranero, C. We answer a question of Garcia-Ferreira and Hrušák by consistently constructing a MAD family maximal in the Katětov order. We also answer several questions of Garcia-Ferreira.

4. Diagonally non-computable functions and bi-immunity - Jockusch, Jr., Carl G.; Lewis, Andrew E. M. We prove that every diagonally noncomputable function computes a set $A$ which is bi-immune, meaning that neither $A$ nor its complement has an infinite computably enumerable subset.

5. New examples of small Polish structures - Dobrowolski, Jan We answer some questions from [4] by giving suitable examples of small Polish structures. First, we present a class of small Polish group structures without generic elements. Next, we construct a first example of a small non-zero-dimensional Polish $G$-group.

6. Comparisons of polychromatic and monochromatic Ramsey theory - Palumbo, Justin We compare the strength of polychromatic and monochromatic Ramsey theory in several set-theoretic domains. We show that the rainbow Ramsey theorem does not follow from ZF, nor does the rainbow Ramsey theorem imply Ramsey's theorem over ZF. Extending the classical result of Erdős and Rado we show that the axiom of choice precludes the natural infinite exponent partition relations for polychromatic Ramsey theory. We introduce rainbow Ramsey ultrafilters, a polychromatic analogue of the usual Ramsey ultrafilters. We investigate the relationship of rainbow Ramsey ultrafilters with various special classes of ultrafilters, showing for example that every rainbow Ramsey ultrafilter is nowhere dense but rainbow Ramsey ultrafilters need not be rapid. This...

7.
Failure of interpolation in constant domain intuitionistic logic - Mints, Grigori; Olkhovikov, Grigory; Urquhart, Alasdair This paper shows that the interpolation theorem fails in the intuitionistic logic of constant domains. This result refutes two previously published claims that the interpolation property holds. 8. A limit law of almost $l$-partite graphs - Koponen, Vera For integers $l \geq 1$, $d \geq 0$ we study (undirected) graphs with vertices $1, \ldots, n$ such that the vertices can be partitioned into $l$ parts such that every vertex has at most $d$ neighbours in its own part. The set of all such graphs is denoted $\mathbf{P}_n(l,d)$. We prove a labelled first-order limit law, i.e., for every first-order sentence $\varphi$, the proportion of graphs in $\mathbf{P}_n(l,d)$ that satisfy $\varphi$ converges as $n \to \infty$. By combining this result with a result of Hundack, Prömel and Steger [12] we also prove that if $1 \leq s_1 \leq \ldots \leq s_l$ are integers, then $\mathbf{Forb}(\mathcal{K}_{1, s_1, \ldots, s_l})$ has a labelled... 9. Measures induced by units - Panti, Giovanni; Ravotti, Davide The half-open real unit interval $(0,1]$ is closed under the ordinary multiplication and its residuum. The corresponding infinite-valued propositional logic has as its equivalent algebraic semantics the equational class of cancellative hoops. Fixing a strong unit in a cancellative hoop—equivalently, in the enveloping lattice-ordered abelian group—amounts to fixing a gauge scale for falsity. In this paper we show that any strong unit in a finitely presented cancellative hoop $H$ induces naturally (i.e., in a representation-independent way) an automorphism-invariant positive normalized linear functional on $H$. Since $H$ is representable as a uniformly dense set of continuous functions on its maximal spectrum, such functionals—in... 10. Principles weaker than BD-N - Lubarsky, Robert S.; Diener, Hannes BD-N is a weak principle of constructive analysis. Several interesting principles implied by BD-N have already been identified, namely the closure of the anti-Specker spaces under product, the Riemann Permutation Theorem, and the Cauchyness of all partially Cauchy sequences. Here these are shown to be strictly weaker than BD-N, yet not provable in set theory alone under constructive logic. 11. Higher-order illative combinatory logic - Czajka, Łukasz We show a model construction for a system of higher-order illative combinatory logic $\mathcal{I}_\omega$, thus establishing its strong consistency. We also use a variant of this construction to provide a complete embedding of first-order intuitionistic predicate logic with second-order propositional quantifiers into the system $\mathcal{I}_0$ of Barendregt, Bunder and Dekkers, which gives a partial answer to a question posed by these authors. 12. Rainbow Ramsey Theorem for triples is strictly weaker than the Arithmetical Comprehension Axiom - Wang, Wei We prove that $\operatorname{RCA}_0 + \operatorname{RRT}^3_2 \nvdash \operatorname{ACA}_0$ where $\operatorname{RRT}^3_2$ is the Rainbow Ramsey Theorem for $2$-bounded colorings of triples. This reverse mathematical result is based on a cone avoidance theorem, that every $2$-bounded coloring of pairs admits a cone-avoiding infinite rainbow, regardless of the complexity of the given coloring. We also apply the proof of the cone avoidance theorem to the question whether $\operatorname{RCA}_0 + \operatorname{RRT}^4_2 \vdash \operatorname{ACA}_0$ and obtain some partial answer. 13. 
Killing the $GCH$ everywhere with a single real - Friedman, Sy-David; Golshani, Mohammad Shelah and Woodin [10] investigate the possibility of violating instances of GCH through the addition of a single real. In particular they show that it is possible to obtain a failure of CH by adding a single real to a model of GCH, preserving cofinalities. In this article we strengthen their result by showing that it is possible to violate GCH at all infinite cardinals by adding a single real to a model of GCH. Our assumption is the existence of an $H(\kappa^{+3})$-strong cardinal; by work of Gitik and Mitchell [6] it is known that more than an $H(\kappa^{++})$-strong cardinal is required. 14. Namba forcing and no good scale - Krueger, John We develop a version of Namba forcing which is useful for constructing models with no good scale on $\aleph_\omega$. A model is produced in which $\Box_{\aleph_n}$ holds for all finite $n \ge 1$, but there is no good scale on $\aleph_\omega$; this strengthens a theorem of Cummings, Foreman, and Magidor [3] on the non-compactness of square. 15. Infinite sets that satisfy the principle of omniscience in any variety of constructive mathematics - Escardó, Martín We show that there are plenty of infinite sets that satisfy the omniscience principle, in a minimalistic setting for constructive mathematics that is compatible with classical mathematics. A first example of an omniscient set is the one-point compactification of the natural numbers, also known as the generic convergent sequence. We relate this to Grilliot's and Ishihara's Tricks. We generalize this example to many infinite subsets of the Cantor space. These subsets turn out to be ordinals in a constructive sense, with respect to the lexicographic order, satisfying both a well-foundedness condition with respect to decidable subsets, and transfinite induction restricted to decidable predicates. The use of simple types allows us to... 16. On the prewellorderings associated with the directed systems of mice - Sargsyan, Grigor Working under $AD$, we investigate the length of prewellorderings given by the iterates of $\mathcal{M}_{2k+1}$, which is the minimal proper class mouse with $2k+1$ many Woodin cardinals. In particular, we answer some questions from [4] (the discussion of the questions appears in the last section of [2]). 17. $K$ without the measurable - Jensen, Ronald; Steel, John We show in ZFC that if there is no proper class inner model with a Woodin cardinal, then there is an absolutely definable core model that is close to $V$ in various ways. 18. Forcing closed unbounded subsets of $\aleph_{\omega_{1}+1}$ - Stanley, M. C. Using square sequences, a stationary subset $S_T$ of $\aleph_{\omega_{1}+1}$ is constructed from a tree $T$ of height $\omega_1$, uniformly in $T$. Under suitable hypotheses, adding a closed unbounded subset to $S_T$ requires adding a cofinal branch to $T$ or collapsing at least one of $\omega_1$, $\aleph_{\omega_{1}}$, and $\aleph_{\omega_1+1}$. An application is that in ZFC there is no parameter free definition of the family of subsets of $\aleph_{\omega_1+1}$ that have a closed unbounded subset in some $\omega_1$-, $\aleph_{\omega_{1}}$-, and $\aleph_{\omega_1+1}$-preserving outer model. 19. Corrigendum to: “Relation algebra reducts of cylindric algebras and complete representations” - Hirsch, R. 20. 
Isomorphism of computable structures and Vaught's Conjecture - Becker, Howard The following question is open: Does there exist a hyperarithmetic class of computable structures with exactly one non-hyperarithmetic isomorphism-type? Given any oracle $a \in 2^\omega$, we can ask the same question relativized to $a$. A negative answer for every $a$ implies Vaught's Conjecture for $L_{\omega_1 \omega}$.
2015-10-05 15:01:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.853866696357727, "perplexity": 840.4254372679503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736677402.41/warc/CC-MAIN-20151001215757-00174-ip-10-137-6-227.ec2.internal.warc.gz"}
https://fhi-aims-club.gitlab.io/tutorials/phonons-with-fhi-vibes/electron-phonon-coupling/7_bandgap_temperature/exercise-7/
# Exercise 7: The Role of the Atomic Motion Estimated total CPU time: 70 min Warning In the following exercises, computational settings, including the reciprocal space grid (tag k_grid), the basis set, and the supercell's size, have been chosen to allow for a rapid computation of the exercises in the limited time and within the CPU resources available during the tutorial session. Without loss of generality, these settings suffice to demonstrate trends in the lattice dynamics of materials. In production calculations, all computational parameters should be converged. In this exercise, you will: • Learn how to create thermal displacements in supercells. • Investigate how the electronic band structure changes due to the atomic motion. The lattice expansion is not the only effect that can alter the band gap: as a matter of fact, the atomic motion also leads to (instantaneous) changes in the electronic structure. In an experiment, which is usually performed on a time scale that is orders of magnitude larger than the period of the typical vibration in a solid, we thus only measure the thermodynamic average of the electronic structure. To investigate this aspect, we could perform Molecular Dynamics simulations at different temperatures, as you learned in Tutorial IV. However, we will not be able to perform ab initio MD calculations due to the limited time and computational resources available during our tutorial session. Furthermore, we are not interested in the dynamical properties of the moving atoms, but merely in representative snapshots of the atomic configurations at each time step when the system is in thermal equilibrium. In general, reasonable displacements which mimic thermal motion can be constructed by utilizing harmonic force constants 4. Superposing the obtained normal modes with randomized phase factors allows one to produce structures representative of different temperatures without introducing additional parameters. Therefore, to generate configurations without any MD simulation, we will in this exercise instead use the force constants obtained in previous calculations and generate configurations according to the following scheme 4: In the harmonic approximation, where the force on an atom is fully determined by the force constants and the displacements of the other atoms, \begin{aligned} \mathbf{F}_I = - \sum\limits_{J} \Phi_{IJ} \Delta\mathbf{R}_{J}~, \qquad (11) \end{aligned} the equations of motion for the displacement $$\Delta \mathbf{R}_i$$ of atom $$i$$ in a supercell with $$N$$ atoms in total are given by \begin{aligned} \begin{pmatrix} m_1 \, \Delta \ddot{\mathbf{R}}_1 \\ m_2 \, \Delta \ddot{\mathbf{R}}_2 \\ \vdots \\ m_N \, \Delta \ddot{\mathbf{R}}_N \end{pmatrix} = - \begin{pmatrix} \Phi_{11} & \Phi_{12} & \cdots & \Phi_{1N} \\ \Phi_{21} & \Phi_{22} & \cdots & \Phi_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ \Phi_{N1} & \Phi_{N2} & \cdots & \Phi_{NN} \end{pmatrix} \begin{pmatrix} \Delta {\mathbf{R}}_1 \\ \Delta {\mathbf{R}}_2 \\ \vdots \\ \Delta {\mathbf{R}}_N \end{pmatrix} ~. \qquad (12) \end{aligned} By diagonalizing the (mass-weighted) Hessian, one obtains $$3N$$ eigenvalues $$\omega_s^2$$ and eigenvectors $$\boldsymbol{e}_s$$. This is equivalent to the solution of Eq. (4) if only the $$\Gamma$$ point ($$\mathbf{q} = 0$$) is considered and the supercell is treated as one big unit cell. The solution of the equations of motion given by Eq. 
(12) is then $\Delta \mathbf{R}_i (t) = \frac{1}{\sqrt{m_i}} \sum_{s=1}^{3N} \boldsymbol{e}_{is}\,A_{s}~\sin (\omega_s t + \varphi_s) \qquad (13)$ $\Delta \dot{\mathbf{R}}_i (t) = \frac{1}{\sqrt{m_i}} \sum_{s=1}^{3N} \boldsymbol{e}_{is}\,A_{s}\,\omega_s\,\cos (\omega_s t + \varphi_s) ,\qquad (14)$ where $$A_{s}$$ denotes the amplitude of phonon mode $$s$$, and $$\varphi_s$$ is a random phase factor that can only be determined by boundary conditions. Since we are interested in representative samples, we can choose the displacements obtained from Eq. (13) at arbitrary times $$t$$, so the factor $$\sin (\omega_s t + \varphi_s)$$ will always just contribute a random number in the range $$[-1, 1]$$. 1 On the other hand, we can assign values to the amplitudes $$A_{s}$$ by noting that, in thermal equilibrium, the equipartition theorem holds, which states that each mode contributes an equal amount to the energy. As you know, the total energy of a set of harmonically coupled quantum oscillators is given by \begin{aligned} E = \sum_s \hbar \omega_s \left( n_\text{B} (\omega_s, T) + \frac{1}{2} \right)~. \nonumber \end{aligned} By this argument one finds that 5 \begin{aligned} \left\langle A^2_{s} \right\rangle = \frac{\hbar}{\omega_{s}} \left( n_\text{B} (\omega_s, T) + \frac{1}{2} \right)~, \qquad (15) \end{aligned} where $$n_\text{B}$$ denotes the Bose-Einstein distribution function, \begin{aligned} n_\text{B} (\omega, T) = \frac{1}{\text{e}^{\frac{\hbar \omega}{k_\text{B} T}}-1}~, \end{aligned} since phonons are bosons. With the mean square amplitude given by Eq. (15), we can generate a Gaussian distribution of random amplitudes that gives the correct average. By this means, we can generate thermodynamically correct displacements within the harmonic approximation from two random numbers. 2 Important remark: Besides creating the samples for free, we can also mimic the effect of zero-point motion at vanishing temperature. Since the amplitude was determined by a consideration rooted in quantum mechanics, it does not vanish when the temperature goes to zero: \begin{aligned} \langle A^2_s \rangle ~\stackrel{T \rightarrow 0}{\longrightarrow} ~\frac{\hbar}{2 \omega_s}~, \end{aligned} whereas it would go to zero in the classical case. Verify this by taking the $${T \rightarrow 0}$$ limit in Eq. (15). In order to investigate the influence of atomic motion on the band gap, we proceed in the following way: First, we generate an ensemble of supercells with displaced atoms according to Eq. (13) for each temperature of interest, and compute the band gap of each sample 3. Afterwards we collect the band gaps and form the "thermodynamic average". ## Create Samples Our samples are created from the (harmonic) force constants computed with Phonopy, which were already obtained before, during the phonon spectrum calculations. Please copy the results of the second exercise into the current directory. In order to produce files that contain the harmonic force constants, we utilize the interface between FHI-vibes and Phonopy. Go to the folder Tutorial/electron-phonon-coupling/7_bandgap_temperature/input. There you will find the folder phonopy with the phonon calculations for Si which we did before. Enter the phonopy folder and type: vibes output phonopy trajectory.son --full Then, the file output/FORCE_CONSTANTS is generated. Note that the file FORCE_CONSTANTS uses the Phonopy file format for force constants, which saves the force constants in a reduced form.
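To make the sampling scheme of Eqs. (12)-(15) concrete, here is a minimal NumPy sketch of what such a sample generator does. This is an illustration under simplified assumptions (consistent units, classical amplitudes, only the Gamma-point modes of the supercell), not the actual implementation inside vibes utils create-samples:

```python
import numpy as np

kB = 8.617333262e-5  # Boltzmann constant in eV/K

def sample_displacements(phi, masses, T, rng=None):
    """Draw one supercell snapshot following Eqs. (13) and (15), classical limit.

    phi    : (3N, 3N) remapped force-constant matrix (eV / length^2)
    masses : (N,) atomic masses, in units consistent with phi
    T      : temperature in K
    Returns an (N, 3) array of displacements.
    """
    rng = rng or np.random.default_rng()
    m3 = np.repeat(masses, 3)                    # one mass per Cartesian DOF
    # Mass-weighted Hessian: eigenvalues are omega_s^2, eigenvectors e_s.
    D = phi / np.sqrt(np.outer(m3, m3))
    w2, e = np.linalg.eigh(D)
    keep = w2 > 1e-8                             # drop the three acoustic (zero) modes
    w2, e = w2[keep], e[:, keep]
    # Classical (high-T) limit of Eq. (15): <A_s^2> = kB * T / omega_s^2.
    A = np.sqrt(kB * T / w2) * rng.standard_normal(w2.size)
    phase = np.sin(rng.uniform(0.0, 2.0 * np.pi, w2.size))  # the sin(...) factor
    dR = (e @ (A * phase)) / np.sqrt(m3)         # Eq. (13) at an arbitrary time
    return dR.reshape(-1, 3)
```

For the quantum case one would replace kB * T / w2 by the full Eq. (15) expression, hbar/omega_s * (n_B + 1/2), in consistent units; this is what the --quantum flag introduced below switches on.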
We need to remap the force constants to a full $$3N \times 3N$$ matrix, where $$N$$ is the number of atoms in the supercell. This can be achieved by an FHI-vibes utility executed within the folder output: vibes utils fc remap FORCE_CONSTANTS which produces a file named FORCE_CONSTANTS_remapped. This can now be used to set up samples according to the trick explained above. To achieve this, we use the FHI-vibes tool create-samples. The basic command to create a supercell with displacements from force constants is vibes utils create-samples geometry.in.supercell -fc FORCE_CONSTANTS_remapped -T 100 -n 10 This will create ten samples corresponding to the thermal displacements at 100 K. Furthermore, one can request to use the Bose-Einstein distribution function to estimate the amplitudes by adding the flag --quantum: vibes utils create-samples geometry.in.supercell -fc FORCE_CONSTANTS_remapped -T 100 -n 10 --quantum ## Compute Band structures We now want to investigate the impact of nuclear motion on the electronic band gap of silicon at different temperatures, for the classical case. We should create at least 10 samples (the number of samples should be converged in production calculations) for each temperature between 0 K and 800 K in steps of 100 K, using the classical distribution. Store the samples in dedicated folders samples_000K, samples_100K, ..., samples_800K. vibes utils create-samples geometry.in.supercell -fc FORCE_CONSTANTS_remapped -T 100 -n 10 mkdir samples_100K mv geometry.in.supercell.0100K.* samples_100K ... Once all the samples are generated for each temperature in the range of interest, we can start our calculations. The script run.py helps to do the work. The script is self-explanatory; please inspect it and run it, preferably in a separate folder, but note that the folders with the samples must be present there. For example, in the directory of the exercise do: mv phonopy/output/samples* . python run.py Depending on your computer, the calculation of one set (10 samples) for one temperature might already take several minutes (you could also utilize the data from the solutions folder). After the calculations are finished, use the script postprocess.py to read in the band gaps and compute the mean and standard deviation for each temperature (a sketch of this averaging step is given below, after the footnotes): python postprocess.py It will read in the band gaps, do the math, and make a plot. The script is again self-explanatory and helps you extract the band gaps and plot the temperature dependence of the band gap. Which trends do you observe? You could also generate samples with the Bose-Einstein distribution function and check what the differences between the classical and the quantum case are. The plot shows the temperature dependence of the band gap in the classical limit. In the plot, the dots are the band gaps averaged over the ensemble at each temperature, and the bars represent the corresponding standard deviation. As a final question: How do you think the band gaps and error bars will evolve when you go towards convergence, i.e., when you increase the supercell size and the number of samples? 1. Note that the displacements obtained from Eq. 13 are conceptually different from the single displaced atoms we encountered in the finite-differences approach in exercises 1-3. 2. We use pseudo-random numbers. In real-world applications of statistical sampling techniques, so-called quasi-random sequences typically ensure better convergence behavior. 
Please refer to https://en.wikipedia.org/wiki/Low-discrepancy_sequence for a general introduction to the topic and to the appendix of 6 for a concise discussion in a closely related context. As a matter of fact, you can use quasi-random numbers with vibes by adding the flag --sobol, but with so few samples the differences should be negligible. 3. Remark: Since we destroy the local symmetry of the crystal by displacing the atoms, the minimal HOMO-LUMO gap will in general not be located on one of the paths connecting high-symmetry points of the Brillouin zone, as we calculated it in exercise 4. In this task, we will instead estimate the band gap by simply calculating the HOMO-LUMO gap on a denser mesh of k-points. 4. M. Dove, Introduction to Lattice Dynamics, Cambridge University Press (1993). 5. S. E. Brown, I. Georgescu, and V. A. Mandelshtam, J. Chem. Phys. 138, 044317 (2013).
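The averaging step referenced above can be sketched as follows; the function names and the dictionary layout are illustrative stand-ins of mine, not the actual contents of postprocess.py:

```python
import numpy as np
import matplotlib.pyplot as plt

def average_gaps(gaps_by_T):
    """gaps_by_T: dict mapping T in K -> array of per-sample band gaps in eV."""
    Ts = np.array(sorted(gaps_by_T))
    mean = np.array([np.mean(gaps_by_T[T]) for T in Ts])
    std = np.array([np.std(gaps_by_T[T], ddof=1) for T in Ts])
    return Ts, mean, std

def plot_gap_vs_T(Ts, mean, std, fname="gap_vs_T.png"):
    """Dots: ensemble-averaged gaps; bars: standard deviation, as in the plot above."""
    plt.errorbar(Ts, mean, yerr=std, fmt="o", capsize=3)
    plt.xlabel("Temperature (K)")
    plt.ylabel("Band gap (eV)")
    plt.savefig(fname)
```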
2022-06-26 08:03:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9536774158477783, "perplexity": 922.4072825347163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103037649.11/warc/CC-MAIN-20220626071255-20220626101255-00502.warc.gz"}
https://kkpradeeban.blogspot.com/2009/04/
## Thursday, April 23, 2009 ### Blog It! In most cases, developers tend to blog or document the smartest stuff they have developed or found. Yes, we do. But at the same time, the solutions for the silly problems are harder to find than those for the toughest problems. I think it is a good idea to blog about those silly mistakes we make each day, as they will definitely help someone out there banging his head against the same issue. Whether people mention it or not, the simple mistakes we make in computing, maybe in building a project or some coding stuff, are to a considerable level common mistakes! The Web 2.0 paradigm shift is becoming more obvious, I should say. I feel I am becoming more of an information producer nowadays than a consumer, and given below is just a proof. Now we need a better uplink than the downlink, as active contributors to the Internet. Let's blog for the sake of sharing the knowledge and content with the world! ## Monday, April 20, 2009 ### [GSoC 2009] Port AbiWord for Windows to Unicode It's very nice to see that I got selected for GSoC 2009. And Abiword is one of the world's best and most dedicated open source organizations. I feel really honoured to join the Abi team. 5 GSoC applicants have been selected for Abiword this time too, as usual (this was the case for the last 3 years). My project: Port AbiWord for Windows to Unicode Project proposal Mentor: Mr. Dominic Lachowicz The list of selected students for Abiword can be found here. It's great to have a chance of getting a task that we prefer personally. GSoC offers us this opportunity. At this time, I have to concentrate more on my upcoming schedule, and I will be following my project proposal timeline for that. I have to mention the great help offered to me during March and April by the Abiword developers. sum1, Robert Staudinger, Dominic Lachowicz, and Jordi Mas were always providing help and guidelines whenever required, encouraging me and the other applicants. Congratz to all my friends from University of Moratuwa who got selected for GSoC 2009 and to all the selected applicants for Abiword. Awaiting an exciting summer! Thank you. ## Saturday, April 18, 2009 ### Crossbuilding AbiWord for Win32 using Ubuntu / Wine Provided that you have Wine installed on Linux, you can follow these steps to build Abiword for Windows. (1) Install mingw. On Ubuntu you need the packages mingw32, mingw32-binutils and mingw32-runtime: sudo apt-get install mingw32 This will install mingw32 with the other two (mingw32-binutils and mingw32-runtime) dependencies. (2) Get all the dependencies in place. Robert Staudinger has packed them and made them available here. If this file is moved somewhere else by the time you try it, you can alternatively download it here. Now unpack it in /opt. You will get /opt/win32 as the base dir for the win32 stuff. (3) At the time of writing, I faced some problem in src/af/xap/xp/xap_Module.h. The issue is solved here. An update (May 10th 2009): This issue is resolved now in the trunk. (4) Now use the commands: 1. source /opt/win32/bin/mingw-env.sh Go to the source tree and run 2. CPPFLAGS="-I/opt/win32/include" ./autogen.sh --prefix=/opt/abiword --host=i686-pc-linux-gnuaout --target=i586-mingw32msvc *** when cross-building with robsta's pack, if it demands libtool, automake, or fribidi, a sudo apt-get install of these three will solve the issue (for Ubuntu). 3. make (5) abiword.exe will end up in src/. You need to copy it to /opt/win32/bin to run it under wine, or alternatively copy all the required DLLs to src/. 
(6) Now you can take the directory containing abiword.exe to your Windows machine to run abiword.exe, which you have just built! The original research by Robert Staudinger can be found here. ### Facebook? Have a backup somewhere else! I have seen many friends keeping interesting notes or even important details about their projects in Facebook. It should always be kept in mind that Facebook is not a safe place to store important materials at all. There may be some security issues in Facebook, but here I am not talking about them. The problem I see with Facebook is their policy of banning accounts. Being a super active user is more than enough for an account to be banned. If someone tries to make others aware of his activities or projects by posting several links or wall posts in Facebook groups, it is considered spamming. Having 5000+ friends or being a member of 200+ groups are valid reasons for banning you. In case you are banned, you will most probably not have a chance to get your account back. In that case, all the important data in your Facebook profile will be lost forever, including your notes, photos, friends list, and their contact details. It is always better to start a blog on Blogger or a similar free blogging provider; then you can import notes from the blog to Facebook. This way, you can make sure that your notes are safe and, at the same time, shared with your friends via Facebook. The same applies to photos. You can share your photos on Picasa or other photo album services using Facebook. The moral of this story: we can use Facebook to share, but we should have the original material somewhere else safe. Facebook is not a place to store your data! ## Thursday, April 16, 2009 ### Abiword Screenshots I have recently built Abiword using Wine/Ubuntu and using MSVS2008, apart from the typical Linux build. Since the Microsoft Visual Studio build has not been completed yet (the sub menus are still missing, hence it is currently under development), it seems cross-building is a good option for building Abiword for Windows. Snapshot 1: Abiword built using Wine, running on Ubuntu Snapshot 2: Abiword.exe built above is now running on Windows Snapshot 3: See the difference between Abiword built using Ubuntu (the standard build process) and using Ubuntu/Wine cross-building, running on Ubuntu ### Building Abiword Using Wine on Ubuntu

--- src/af/xap/xp/Original_xap_Module.h 2009-04-03 18:55:06.000000000 +0530
+++ src/af/xap/xp/xap_Module.h 2009-04-16 12:16:01.000000000 +0530
@@ -52,7 +52,7 @@
 // we want to have C linkage for both
 // this and for all of our required functions
 extern "C" {
-    typedef ABI_EXPORT struct {
+    typedef struct {
     const char * name;
     const char * desc;
     const char * version;

After applying the above diff file, Abiword is successfully built using Wine, and is now happily running on Ubuntu! ## Wednesday, April 15, 2009 ### Internationalization & Localization Internationalization (i18n) - The process of designing a software application so that it can be adapted to various languages and regions without engineering changes. Localization (L10n) - The process of adapting software for a specific region or language by adding locale-specific components and translating text. Globalization (g11n) / Native Language Support (NLS) - The combination of internationalization and localization. Locale - A set of parameters that defines the user's language, country and any special variant preferences that the user wants to see in their user interface. 
Localizability (L12y) - The degree to which a software product can be localized. Resource - Part of a program which can appear to the user or be changed or configured by the user; this is the data of the program, as opposed to its code. Core product - The language-independent portion of a software product. Compiled from: * Wikipedia * Mozilla Internationalization & Localization Guidelines ## Monday, April 13, 2009 ### Transliteration ~ Google and more .. You may have already used Google's Indic Transliterator to type in the languages [Hindi, Kannada, Malayalam, Tamil, Telugu] using English characters. Transliteration tools, apart from providing the ability to type in Unicode, give one more advantage. That is, one who can speak a language, yet can't write in it, can easily type in these languages by using the equivalent characters in English. While transliterating, suggestions are also provided so that we can choose one of them in case of confusion. Google Transliterator can even convert numbers typed in the standard Hindu-Arabic numeral system to the local numeric systems specific to those language communities. This shows that Google Indic Transliterator is not just a transliterating utility. Hindu-Arabic numbers : 0 1 2 3 4 5 6 7 8 9 Hindi : ० १ २ ३ ४ ५ ६ ७ ८ ९ Kannada : ೦ ೧ ೨ ೩ ೪ ೫ ೬ ೭ ೮ ೯ Malayalam : ൦ ൧ ൨ ൩ ൪ ൫ ൬ ൭ ൮ ൯ Tamil : ௦ ௧ ௨ ௩ ௪ ௫ ௬ ௭ ௮ ௯ Telugu : ౦ ౧ ౨ ౩ ౪ ౫ ౬ ౭ ౮ ౯ Update as of March 2010: A recent visit to the Google transliterator showed me that transliteration is now possible even for languages not from the Indic language family, including Arabic, Russian and Amharic (Ethiopian). Hence it should be noted that the Google transliterator is no longer a mere Indic transliterator. With more features, Google's transliterator stands as a standard online rich text editor at the moment. UCSC Unicode Real-Time Font Conversion Utility Similar research is done at the University of Colombo School of Computing, Sri Lanka, where a Unicode real-time font conversion utility is being built. It provides us ways of typing in the Sri Lankan languages Sinhala and Tamil in Unicode. Apart from transliteration, it can also convert the non-Unicode Sinhala/Tamil fonts that are mostly used in word processing into Unicode, thus providing an easy way to convert material that was earlier typed in non-Unicode fonts into the Unicode representation. ## Saturday, April 11, 2009 ### Use of Unicode in AbiWord - Initial Discussions This is a mail thread in the Unicode mailing list started by one of the Abiword developers during the earliest stages of Abiword. http://unicode.org/mail-arch/unicode-ml/Archives-Old/UML014/0787.html This thread contains a huge array of discussions on the topic. Even if you are not an Abiword developer, this mail thread is a good one to have a look at, since it has many important views of many developers about Unicode applications. 
## Monday, April 6, 2009 ### Changes for building Abiword in MSVC2008 [Thanks to sum1]

===================================================================
@@ -45,8 +45,8 @@
 /*****************************************************************/
-extern unsigned char g_pngSidebar[]; // see ap_wp_sidebar.cpp
-extern unsigned long g_pngSidebar_sizeof; // see ap_wp_sidebar.cpp
+/*extern*/ unsigned char g_pngSidebar[1]; // see ap_wp_sidebar.cpp
+/*extern*/ unsigned long g_pngSidebar_sizeof = 1; // see ap_wp_sidebar.cpp
Index: src/wp/main/xp/abi_ver.cpp
===================================================================
--- src/wp/main/xp/abi_ver.cpp (revision 26019)
+++ src/wp/main/xp/abi_ver.cpp (working copy)
@@ -41,7 +41,7 @@
 #endif /* ABI_BUILD_TARGET */
 const char* XAP_App::s_szBuild_ID = ABI_BUILD_ID;
 const char* XAP_App::s_szBuild_Version = ABI_BUILD_VERSION;
 const char* XAP_App::s_szBuild_Options = ABI_BUILD_OPTIONS;
 const char* XAP_App::s_szBuild_Target = ABI_BUILD_TARGET;
======================================================================

Update as of October 2009: When I tried to build Abiword using MSVC 2008, I got this error:

------ Build started: Project: LibAbiWord, Configuration: Debug Win32 ------
Compiling...
ap_wp_sidebar_static.cpp
ap_wp_sidebar_static.obj : error LNK2005: "unsigned char * g_pngSidebar" (?g_pngSidebar@@3PAEA) already defined in xap_Win32Dlg_About.obj
ap_wp_sidebar_static.obj : error LNK2005: "unsigned long g_pngSidebar_sizeof" (?g_pngSidebar_sizeof@@3KA) already defined in xap_Win32Dlg_About.obj
Creating library C:\abi-trunk\msvc-2008\Debug\bin\LibAbiWord.lib and object C:\abi-trunk\msvc-2008\Debug\bin\LibAbiWord.exp
C:\abi-trunk\msvc-2008\Debug\bin\LibAbiWord.dll : fatal error LNK1169: one or more multiply defined symbols found
Build log was saved at "file://c:\abi-trunk\msvc-2008\LibAbiWord\Debug\BuildLog.htm"
LibAbiWord - 3 error(s), 0 warning(s)
------ Build started: Project: PluginOpendocument, Configuration: Debug Win32 ------
Creating library C:\abi-trunk\msvc-2008\Debug\plugins\PluginOpendocument.lib and object C:\abi-trunk\msvc-2008\Debug\plugins\PluginOpendocument.exp
Embedding manifest...
Build log was saved at "file://c:\abi-trunk\msvc-2008\PluginOpendocument\Debug\BuildLog.htm"
PluginOpendocument - 0 error(s), 0 warning(s)
========== Build: 1 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

I replaced C:\abi-trunk\src\af\xap\win\xap_Win32Dlg_About.cpp with this. That gave me a rapid solution to the issue, but it is not the real fix for the issue I faced, of course. ### Building Abiword using MSVC2008 Express Edition Using sum1's suggestions I have succeeded in building Abiword on Windows as well. http://pastebin.ca/1382828 There are still some errors in the menu items: the sub menus are not shown. I will work on this and rectify the issue. ## Sunday, April 5, 2009 ### Useful links for newbie Abiword developers and students If you are a newbie, read this first: http://www.abisource.com/developers/ If you find a bug or RFE, feel free to use Bugzilla and report an issue with the relevant information: http://bugzilla.abisource.com/ In case the issue is already in the Bugzilla database, cast a vote for the issue to make the team aware that the issue is popular and/or important. 
The SVN location: http://svn.abisource.com/ If you want to contribute to the translation efforts: http://www.abisource.com/contribute/translate/ Building Abiword for Debian-based Linux distributions: http://abisource.com/wiki/Compiling_AbiWord To build Abiword on Windows: Using Visual C++: Developing AbiWord on Windows using Visual C++ Building AbiWord 2.8 on Windows If you find any issue as a user or developer, feel free to contact the Abi team on IRC: irc://irc.gnome.org#abiword Joining the relevant mailing lists according to your requirements is always recommended. To join: http://www.abisource.com/developers/ Mail archives: http://www.abisource.com/mailinglists/ ## Thursday, April 2, 2009 ### Abiword for Windows Abiword is a free and open source word processing program, written in C++ and, to a smaller extent, C. It is lightweight and adapted for use with the One Laptop Per Child system. Abiword runs on many operating systems, including Linux, Mac OS X (PowerPC), Microsoft Windows, and ReactOS. By maintaining its compact size, low memory consumption, and low start-up delay, Abiword is successfully competing with the other well-known word processors. Abiword for Windows is an ANSI application. Porting Abiword completely to Unicode has been considered for a very long time. Earlier research has been done on maintaining an Abiword that can be built for both the ANSI and the Unicode API. As the earlier versions of Windows have become obsolete with time, this project targets porting the existing ANSI application to a Unicode-only build of Abiword for Windows. Different issues faced while using the ANSI build will be addressed while converting Abiword to the Unicode API. The Unicode application will require Windows NT and later versions; for the earlier versions, users have to fall back to the stable ANSI version 2.4.6. This Unicode build will use the Multilingual User Interface feature available since the Windows 2000 operating system. The Abiword for Windows Unicode build will need to be tested using non-Latin text. Unicode-only languages and multilingual support will be achieved through this project, while solving many of the issues that were introduced by the usage of ANSI controls in the UI. ## Wednesday, April 1, 2009 ### svn export While I was checking out the SVN, it failed in the middle with the message:

svn: In directory 'wv\examples'
svn: Error processing command 'readonly' in 'wv\examples'
svn: Can't set file 'wv\examples\.svn\text-base\this-file-crashes-msword.doc.svn-base' read-only: The system cannot find the file specified.

No ideas yet... Later I just used "svn export" for each file that was yet to be downloaded, downloaded them one by one, and got all those missing files! ### How to apply/create an svn patch (for beginners) To apply an svn patch (your_patch_file.patch), go to the root of your source directory and place the patch there (or you can specify the location of the patch with the patch name if it is not placed in the root of the source tree): patch -p0 < your_patch_file.patch Similarly, if you have modified the source tree and want to create a patch for the changes that you made, from the source root: svn diff > your_patch_file.patch or, giving the files/folders that you need to get the diff of, along with their relative path: svn diff path1/file1 path2/file2 path3/file3 > your_patch_file.patch your_patch_file.patch will be created inside the root of the source tree. 
(You may specify the location too, as usual.) Similarly, how do you get the svn diff between two commits? svn diff -r 5224:5225 > your_patch_file.patch gets the changes made in commit 5225 for the particular directory. svn diff -r oldversion:newversion files > your_patch_file.patch gets the diff between the commits oldversion and newversion.
2019-07-23 11:21:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2118796855211258, "perplexity": 5094.6542218054865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529276.65/warc/CC-MAIN-20190723105707-20190723131707-00167.warc.gz"}
http://math.stackexchange.com/questions/146437/whats-the-difference-between-mathbbq-sqrt-d-and-mathbbq-sqrt-d/146444
What's the difference between $\mathbb{Q}[\sqrt{-d}]$ and $\mathbb{Q}(\sqrt{-d})$? Sorry to ask this, I know it's not really a maths question but a definition question, but Googling didn't help. When asked to show that elements in each are irreducible, is it the same? - There is no difference, because $\mathbb{Q}[\sqrt{-d}]$ is already a field. –  M Turgeon May 17 '12 at 18:59 Here is the general definition of the two notations: given a field $K$, a field $L$ that contains $K$, and an element $a\in L$, then $$K[a]=\{f(a)\mid f\in K[x]\}=\{c_0+c_1a+\cdots+c_na^n\mid c_i\in K, n\geq0\}$$ and $$K(a)=\left\{\tfrac{f(a)}{g(a)}\;\middle\vert\; f,g\in K[x]\text{ where }g(a)\neq 0\right\}=\text{the field of fractions of }K[a].$$ An alternative characterization is that $K[a]$ is the smallest *subring* of $L$ that contains the element $a$ as well as all of $K$, and that $K(a)$ is the smallest such *subfield*. It is a straightforward theorem that $K[a]=K(a)$ if and only if $a$ is algebraic over $K$, which means that there is some non-zero polynomial $f\in K[x]$ such that $f(a)=0$. Because $\sqrt{-d}$ is algebraic over $\mathbb{Q}$ - it is a root of the polynomial $x^2+d$, which is in $\mathbb{Q}[x]$ - we therefore have that $\mathbb{Q}(\sqrt{-d})=\mathbb{Q}[\sqrt{-d}]$. In contrast, $\pi$ is transcendental over $\mathbb{Q}$, and $\mathbb{Q}(\pi)$ contains elements like $\frac{1}{\pi}$ and $\frac{3}{\pi^2 + 1}$, while $\mathbb{Q}[\pi]$ does not. - The notation $R[\alpha]$ denotes a ring adjunction, and, analogously, $F(\alpha)$ denotes a field adjunction. Generally, if $\alpha$ is a root of a monic $f(x)$ over a domain $D$, then $D[\alpha]$ is a field iff $D$ is a field. The same is true for arbitrary integral extensions of domains. See this post for a detailed treatment of the quadratic case. - In general, if you have a field extension $L/K$, and $\alpha\in L$, we define $K[\alpha]$ to be the subring of $L$ generated by $K$ and $\alpha$, whereas we define $K(\alpha)$ to be the subfield of $L$ generated by $K$ and $\alpha$. Now, it sometimes happens that $K[\alpha]$ is already a field, in which case we have $K[\alpha]=K(\alpha)$. This is what happens with $\mathbb{Q}[\sqrt{-d}]$ and $\mathbb{Q}(\sqrt{-d})$. -
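To see directly why $\mathbb{Q}[\sqrt{-d}]$ is already a field (for $d > 0$): every nonzero element has an inverse of the same form, obtained by rationalizing the denominator, $$\frac{1}{a + b\sqrt{-d}} = \frac{a - b\sqrt{-d}}{(a + b\sqrt{-d})(a - b\sqrt{-d})} = \frac{a}{a^2 + d b^2} - \frac{b}{a^2 + d b^2}\,\sqrt{-d} \;\in\; \mathbb{Q}[\sqrt{-d}],$$ which is valid whenever $(a,b) \neq (0,0)$, since then $a^2 + d b^2 > 0$. So $\mathbb{Q}[\sqrt{-d}]$ is closed under inverses and hence a field.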
2015-08-02 00:21:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9536656737327576, "perplexity": 67.1835702376165}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988924.75/warc/CC-MAIN-20150728002308-00053-ip-10-236-191-2.ec2.internal.warc.gz"}
http://crypto.stackexchange.com/questions?page=50&sort=votes
# All Questions 320 views ### Two step encryption Is there any asymmetric cryptography algorithm which will allow recursive encryption? ... 166 views ### Is there an advantage to storing keys split between several hashes? I have a question about the way to store a key or password that was used for encryption, so that the application can check if the user put in the right key for decryption. If I make a mistake, please ... 2k views ### Cracking the Beaufort cipher Is there any easy way to crack a Beaufort cipher? We have a Vigenère table, and are trying to guess the keyword. Any easier way? 194 views ### Impact of algorithms for factoring using elliptic curves over $\mathbb{Q}$ Recently a few papers have appeared that describe a new approach to factoring, using elliptic curves over $\mathbb{Q}$. See, e.g., Factoring integers and computing elliptic curve rational points, ... 403 views ### Is frequency analysis a useful tool against encryption by multiplication? If I transform natural plaintext by: making each letter two decimal digits, considering the whole as a decimal number; multiplying by the key (some integer constant), giving the ciphertext; would ... 549 views ### Are there secure stream ciphers that cannot be parallelized? Are there any stream ciphers (or a deterministic random number generator, that should work as well I guess?) that cannot be parallelized? So for example if I seed it with a specific value, and then ... 148 views ### Attack by replaying messages Assume Alice and Bob communicate over an insecure channel using one-time perfectly-secret encryption together with one-time secure message authentication code. Say Eve has the ability to eavesdrop and ... 153 views ### Correct method to encrypt data so that it can be decrypted only by Alice and Bob I need to build a system that stores encrypted transactions. A transaction always involves two parties: payer and payee. Both must be able to download encrypted transaction from server and decrypt it ... 278 views ### Why not encrypt salt? Assuming I had to distribute salt+ciphertext together over an insecure channel, isn't it better to store the salt encrypted? By encrypted I mean with a block cipher and with key and IV derived from ... 1k views ### AES CCM vs CCMP Are the terms AES CCM and AES CCMP equivalent, or are there any technical differences between the two? 443 views ### Is symmetric encryption vulnerable to plain-text-attacks? Imagine that Bob sends a message to Alice for symmetric encryption to send to Charlie. (Only Alice and Charlie know the key.) Alice sends the encrypted message back to Bob to send to Charlie. Can Bob ... 2k views ### “Signing” with public key For this question, the following caveats and assumptions hold: There exists a 2048-bit RSA key pair used exclusively for signing/verification The private key is kept completely private There exists ... 626 views ### Can I secure my key by XORing it with a hashed password? I'd like to build a simple password-protected symmetric key system. The key-creation process in my system operates as follows: The system creates a 256-bit key purely at random. The user chooses a ... 488 views ### Is it possible to figure out the public key from encrypted text? Suppose Alice sends messages to Bob by encrypting the messages with Bob's public key. Eve knows that the data is encrypted using RSA, but does not know the public key. Can Eve figure out the public ... 
4k views ### RSA cracking: The same message is sent to two different people problem Suppose we have two people: Smith and Jones. Smith's public key is e=9, n=179 and Jones's public key is e=13, n=179. Bob sends to ... 335 views ### How is de-synchronisation of HOTP solved? From RFC 4226 I understand how HOTP generates one-time passwords by incrementing a counter and uses the 'look-ahead' window to try to resynchronise (from this counter), if the user tries a few wrong ... 1k views ### Encrypting (CBC) identical files with same key and different iv, is it less secure? I would like to learn more about cryptography. Let's say you encrypt multiple files with CBC encryption using the same key, but each file has a unique (and pseudo-random) IV. Does this weaken the ... 265 views ### Protocol for Randomized Oblivious Transfer? If we define Oblivious Transfer as follows: Alice inputs $(x_0,x_1) \in F^2$, where $F$ is a field, and Bob inputs $b\in\{0,1\}$, then Alice gets a dummy output (for which she knows nothing about ... 218 views ### A set of key pairs and one hash to secure them I have a simple problem: I have a set of users' ECDSA key pairs, and say I want to encrypt them with a simple algorithm. I have access to one variable that uniquely identifies the user, so I hash it ... 77 views ### secure PRF or not I am new to cryptography and I saw this question in a note. I solved it but I'm not sure about my answers. Let $F : \{0, 1\}^n \times \{0, 1\}^n \to \{0, 1\}^n$ be a secure PRF (i.e. a PRF where the key ... 72 views ### Why can you reverse a modulo function when knowing its primes We are dealing with cryptography in school right now and superficially went over the Rabin cryptosystem with the (apparently usual) example of p=7 and q=11 etc (we didn't do RSA). I understand that ... 59 views ### CTR DRBG dependent on request size? I am trying to understand the CTR DRBG specification in NIST SP 800-90A. It seems slightly different to a pure stream cipher in that the key and counter are reset after each generate call using the ... 51 views ### What is the use case for XOF functions (i.e. SHAKE128/256)? FIPS 202 defines 2 functions: SHAKE128 SHAKE256 as extendable-output functions (XOFs) that can have variable output length. But Appendix A.2 remarks: it is possible to use an XOF as a hash ... 133 views ### Is there any relation between two strings with the same MD5 hash? Is there any relation between two strings with the same MD5 hash? For example these two strings: ... 72 views ### What is the difference between “securely realizes” and “securely implements”? In some security proofs it is stated that "a protocol securely realizes an ideal functionality" while in some others "a protocol securely implements an ideal functionality". Is there a meaningful ... 92 views ### Can the round function $F$ in a Feistel Network be practically any non-invertible function? This question might seem silly, but how true is the fact that the round function $F$ does not have to be invertible? I am curious to know this, because non-invertible functions can be very lossy, ... 91 views ### Is reverse NTRU still secure? I'm currently prototyping something with the NTRU encryption scheme but I wish to use it in "reverse" -- distribute private keys so anyone can decrypt, but keep the public keys secret and thus only ... 110 views ### Encrypt a message with hash function, shared symmetric key, but no cipher I am very new to Crypto and need some clarification or input. 
The question is: Suppose that Alice wants to encrypt a message for Bob, where the message consists of three plaintext blocks, $P_0$, ... 93 views ### How does DSA provide non-repudiation in proving a document was properly displayed (not altered or displayed incorrectly)? I am building an iOS app that allows the user to sign a document served to them by a web server. So that page of the app simply has a document in the top pane, and at the bottom pane, a place to sign ... 84 views ### Challenge-Response Phases in IND-CPA The IND-CPA game has two challenge-response phases. A key is generated by running $Gen(1^n)$ and the challenger selects a bit $b \in \{0,1\}$ uniformly at random. The adversary gets input $1^n$ and can query the oracle ... 94 views ### Is it possible to reverse the birthday attack calculation? I find that for every 100 password salts in our database, we only average 94.73 distinct salt values (averaged over a total of around 18 million). Is there a way to take that observation and calculate ... 85 views ### Paillier Cryptosystem - Practical applications? I wonder: are there any real-world practical applications using the Paillier cryptosystem, as introduced in [1], or some derivations of it? I'm aware of quite a few schemes proposed in literature ... 34 views ### Given occasional LFSR samples can the next sample be computed? Suppose I have access to an LFSR generator output used in a radio communications system, with the LFSR being used to authenticate devices. The system cycles an internal unknown-length LFSR (greater ... 70 views ### Is there a practical security difference between OTP with letters and OTP with numbers? Is there a practical security difference between “OTP with letters” and “OTP with numbers”? If, for example, I encrypt a letter message using Tabula Recta with a random letter key, I would get: ... 52 views ### Is there a partially homomorphic quantum secure public key cryptosystem with IND-CCA1 security? I recently asked "IND-CCA1 RSA padding?" about whether there is an IND-CCA1 secure variant of RSA. The original version of the question also allowed usage of ECC which would allow usage of ElGamal, ... 78 views ### Safety of AES ECB when used with openssl_seal function I'm trying to use PHP to encrypt files with a public key. I'm using the function openssl_seal that "encrypts data by using RC4 with a randomly generated secret key." RC4 is considered unsafe so the ... 186 views ### Cryptography and FPGA I see a lot of papers about FPGA implementations. For what kind of "concrete" application should we implement cryptographic algorithms on FPGA? Which secure applications require such a huge data ... 44 views ### Are deployed MACs IND-CPA? There are a number of message authentication codes (MACs) used in practice. They are highly used in practice (f.ex. in TLS), but there's (at least) one application where they can't be used: Full Disk ... 221 views ### How to prove that a function is not pseudorandom? I am currently enrolled in a cryptography course, which uses the book by Katz and Lindell. I'm struggling with the exercises which ask for proofs, like the following one: Let G(k) be a PRG with ... 327 views ### What are the implications of a birthday attack on a HMAC? After collecting approximately $2^{n/2}$ message-tag pairs a collision can be observed. So two different messages (m1 and m2) will have the same tag. This paper states: Then, for any string x, ... 
107 views ### Determine AES key given encrypted and unencrypted files Given an encrypted file, the original unencrypted (cleartext) file, and knowledge of which AES encryption algorithm was used, is it possible to determine the key that was used to encrypt the data in a ... 111 views ### Does exponentiation by squaring work on Montgomery curves? Consider the point multiplication $Q=[d]P$, where $P$ is a point on an elliptic curve multiplied by an integer $d$ to get another point $Q$ on the same curve. This operation can be computed by a ... 62 views ### RSA: Range of public modulus I know the public modulus is the product of two primes. For a key with length L, the public modulus N should be such ... 131 views ### Can there be a need for 1024-bit (symmetric) encryption? I think we are all aware of the CAESAR-competition. Now the aim of this competition is to select a (portfolio of) winner(s) which provide authenticated encryption. I'll now assume that the results ... 137 views ### How often does RSA-OAEP have a leading zero? We are working with a third-party vendor who is very tight-lipped about their security protocols, and one of our customers who used this vendor's products is claiming that approximately one in every ... 116 views ### Simulation aborted because the adversary doesn't use the random oracle I'm trying to construct a proof for an encryption scheme in the Random Oracle model. This encryption scheme is like a PKE scheme but with an additional function that kind of "alters" ciphertexts ... 173 views ### AES and Homomorphic Encryption Is it possible to do the following? Input would be to generate a new AES key, encrypt the private data with that key, encrypt the AES key with the FHE key, and send the FHE-encrypted AES key along ... 457 views ### ECC vs RSA: how to compare key sizes? I know and I have understood the details of RSA, elliptic curve cryptography, (EC)DH and (EC)DSA. I keep reading everywhere that (if we don't consider non-deterministic computers) "ECC can achieve ...
2015-11-26 16:03:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.710727334022522, "perplexity": 2585.3678669583837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447758.91/warc/CC-MAIN-20151124205407-00170-ip-10-71-132-137.ec2.internal.warc.gz"}
https://www.sanfoundry.com/manufacturing-engineering-interview-questions-answers/
# Manufacturing Engineering Questions & Answers – Crystallography-2 This set of Manufacturing Engineering Interview Questions and Answers focuses on “Crystallography-2”. 1. What is the coordination number of a simple cubic (SC) unit cell? a) 4 b) 6 c) 8 d) 2 Explanation: There are six nearest neighbouring atoms for every atom in a simple cubic (SC) unit cell; in other words, every atom in an SC unit cell is surrounded by 6 other atoms, thus the coordination number of an SC unit cell is 6. 2. What is the coordination number of a face centered cubic (FCC) unit cell? a) 4 b) 6 c) 8 d) 12 Explanation: In an FCC structure, there is one atom at each corner of the unit cell and one atom at the centre of each face. For any corner atom of the unit cell, the nearest atoms are the face-centred atoms. Thus, the coordination number for an FCC structure = 4 centre atoms below the horizontal plane + 4 centre atoms above the horizontal plane + 4 centre atoms on the horizontal plane. Hence, the coordination number for an FCC structure is 4 + 4 + 4 = 12. 3. What is the coordination number of a body centered cubic (BCC) unit cell? a) 4 b) 6 c) 8 d) 2 Explanation: For any corner atom of the BCC unit cell, the nearest atoms are the body-centred atoms. There are eight neighbouring unit cells with body-centred atoms. Hence, the coordination number for a BCC unit cell is 8. 4. Effective number of atoms in a simple cubic (SC) unit cell is equal to _________ a) 4 b) 1 c) 8 d) 2 Explanation: Total number of atoms at corners = 8, and each corner atom is shared by 8 unit cells in total. Thus, the effective number of atoms in an SC unit cell is 8 × 1⁄8 = 1. 5. Effective number of atoms in a face centered cubic (FCC) unit cell is equal to ________ a) 4 b) 1 c) 8 d) 2 Explanation: In an FCC unit cell, there is one atom at each corner of the cube and six face-centered atoms on the six planes of the cube. As corner atoms are shared by eight adjacent cubes and the face-centered atoms by two adjacent unit cells, the total effective number of atoms in an FCC unit cell is 8 × 1⁄8 + 6 × 1⁄2 = 4. 6. Effective number of atoms in a body centered cubic (BCC) unit cell is equal to _____________ a) 4 b) 6 c) 1 d) 2 Explanation: The unit cell contains eight atoms at the corners, which are shared by the eight adjoining cubes, and one atom at the centre of the cube. 8 atoms at the corners give (8 × 1⁄8) = 1 atom, plus 1 centre atom in the unit cell; so there are 2 effective atoms in a BCC unit cell. 7. The atomic packing fraction in a simple cubic unit cell is ________ a) 0.74 b) 0.52 c) 0.68 d) 0.66 Explanation: a = 2r and APF = (volume of the effective number of atoms)/(volume of the unit cell) = (1 × 4⁄3 πr³)/(2r)³ = π/6 ≈ 0.52. 8. The atomic packing fraction in a body centered cubic unit cell is ________ a) 0.74 b) 0.52 c) 0.68 d) 0.66 Explanation: r = (√3/4) × a and APF = (volume of the effective number of atoms)/(volume of the unit cell) = (2 × 4⁄3 πr³)/a³ = √3π/8 ≈ 0.68. 9. If the radius of a copper atom is given as 1.27 Å, its density (in kg/m³) will be? 
a) 100.01
b) 86.25
c) 8979
d) 7968
Answer: c
Explanation: Formula to calculate the density of a cubic metal:
ρ (kg/m³) = $$\frac{n × A.W}{a^3}$$ × 1.66 × 10⁻²⁷
[where ρ = density of the metal, n = effective number of atoms per unit cell, A.W = atomic weight of the metal in amu, and a = lattice parameter in metres]
Given: radius of copper = 1.27 Å = 1.27 × 10⁻¹⁰ m
We know that the atomic weight of copper = 63.5 amu.
Lattice parameter to atomic radius relations for cubic structures are as follows:

Crystal Structure | Lattice Parameter–Atomic Radius Relation | Effective Number of Atoms per Unit Cell
Simple Cubic (SC) | a = 2r | 1
Body Centered Cubic (BCC) | a = 4r/√3 | 2
Face Centered Cubic (FCC) | a = 4r/√2 | 4
Hexagonal Close Packed (HCP) | a = 2r | 6

We know that copper has the FCC crystal structure, so it has 4 effective atoms per unit cell, and its atomic radius = 1.27 × 10⁻¹⁰ m.
Therefore, lattice parameter of copper (a) = (4 × 1.27 × 10⁻¹⁰)/√2 = 3.59 × 10⁻¹⁰ m
Therefore, density of copper = $$\frac{4 × 63.5}{(3.59 × 10^{-10})^3}$$ × 1.66 × 10⁻²⁷ ≈ 8979 kg/m³.

10. The atomic packing fraction in a face centered cubic unit cell is?
a) 0.74
b) 0.52
c) 0.68
d) 0.66
Answer: a
Explanation: a = 2√2 × r and APF = (volume of effective number of atoms/volume of unit cell) = (4 × (4⁄3)πr³)/a³ = π/(3√2) ≈ 0.74.
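The packing-fraction and density calculations above are easy to script. The following is a minimal Python sketch (my addition, not part of the original question set) that reproduces the APF values and the copper-density estimate; the structure data simply mirrors the table above.

import math

# lattice-parameter relation a(r) and effective atoms per cell, per the table above
structures = {
    "SC":  (lambda r: 2 * r,                1),
    "BCC": (lambda r: 4 * r / math.sqrt(3), 2),
    "FCC": (lambda r: 4 * r / math.sqrt(2), 4),
}

def apf(name, r=1.0):
    a_of_r, n = structures[name]
    a = a_of_r(r)
    # APF = volume of the n effective atoms / volume of the cubic cell
    return n * (4.0 / 3.0) * math.pi * r ** 3 / a ** 3

def density(name, r_m, atomic_weight_amu):
    a_of_r, n = structures[name]
    a = a_of_r(r_m)  # lattice parameter in metres
    # rho = n * A.W / a^3 * 1.66e-27  (amu -> kg conversion)
    return n * atomic_weight_amu * 1.66e-27 / a ** 3

for name in structures:
    print(name, round(apf(name), 2))   # SC 0.52, BCC 0.68, FCC 0.74

# copper: FCC, r = 1.27e-10 m, A.W = 63.5 amu -> about 9.0e3 kg/m^3,
# in the same ballpark as the article's rounded value of 8979 kg/m^3
print(density("FCC", 1.27e-10, 63.5))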
2022-12-05 05:32:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6469457149505615, "perplexity": 1885.6620779995656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711003.56/warc/CC-MAIN-20221205032447-20221205062447-00027.warc.gz"}
http://ocw.usu.edu/Electrical_and_Computer_Engineering/Signals_and_Systems/3_2node2.html
Lecture 3: Informal Example

We briefly review the concept of a vector space. A vector space has the following key property: if x and y are vectors in the space, then so is ax + by for any scalars a and b. That is, linear combinations of vectors give vectors. Most of your background with vectors has been for vectors in Rⁿ. But: the signals that we deal with are also elements of a vector space, since a linear combination of signals also gives a signal. This is a very important and powerful idea. Recall that in vector spaces we deal with concepts like the length of a vector, the angle between vectors, and the idea of orthogonal vectors. All of these concepts carry over, by suitable definitions, to vector spaces of signals. This powerful idea captures most of the significant and interesting notions in signal processing, controls, and communications. This is really the reason why the study of linear algebra is so important. In this lecture we will learn about geometric representations of signals via signal space (vector) concepts. This straightforward idea is the key to a variety of topics in signals and systems:
1. It provides a distance concept useful in many pattern recognition techniques.
2. It is used in statistical signal processing for the filtering, smoothing, and prediction of noisy signals.
3. It forms the heart and geometric framework for the tremendous advances that have been made in digital communications.
4. It is every waveform-based transform you ever wanted (Fourier series, FFT, DCT, wavelet, etc.)
5. It is also used in the solution of partial differential equations, etc.
6. It relies on our old friend, linearity. One might even say it is the reason that we care so much about linearity in the first place.
We will soon turn our attention to Fourier series, which are a way of analyzing and synthesizing signals. Vectors will be written in bold font (like the ingredients above). Initially, we can think of a vector as an ordered set of numbers, written in a column. Often to conserve writing, this will be written in transposed form, x = [x1, x2, ..., xn]^T. While we have written a vector as an n-tuple, that is not what defines a vector. A vector is an element of a vector space, which is to say, it satisfies the linearity property given above. Scalar multiplication of vectors is in the usual fashion. Matrix multiplication is also taken in the traditional manner. Let x and y be two vectors. The inner product (known to many of you as the dot product) of the vectors x and y is written as <x, y> = x1 y1 + x2 y2 + ... + xn yn. In words, multiply component by component, and add them up. Two vectors are said to be orthogonal or perpendicular if their inner product is zero: <x, y> = 0. If x and y are orthogonal, this is sometimes written x ⊥ y. The inner product can be expanded using the following rules: 1. For a scalar a, <a x, y> = a <x, y>. 2. <x + y, z> = <x, z> + <y, z>. 3. For real vectors (which is all we will be concerned about for the moment), <x, y> = <y, x>. The (Euclidean) norm of a vector is given by ||x|| = <x, x>^(1/2). The distance between two vectors is given by ||x − y||. The projection of a vector x onto a vector y is given by (<x, y>/<y, y>) y. Geometrically, this is the amount of the vector x in the direction of y. (Show a picture.) Obviously, if x and y are orthogonal, then the projection of x onto y is 0. Now suppose that we have a vector y (an "ingredient") and we have a vector x, and we want to make the best approximation to x using some amount of our ingredient. Draw a picture. We can write x = c y + e, where c is the amount of y we want and e is the error between the thing we want and our approximation of it.
To get the best approximation we want to minimize the length of the error vector. Before we go through and do it the hard way, let us make a geometric observation. The length of the error is minimized when the error vector is orthogonal to our ingredient vector y: <e, y> = 0, or <x − c y, y> = 0. Giving us c = <x, y>/<y, y>. Note that this is simply the coefficient of the projection of x onto the vector y. Now let's do it the hard way: we want to find the amount of y to minimize the (length of the) error. The squared length of the error is ||e||² = <x − c y, x − c y> = <x, x> − 2c <x, y> + c² <y, y>. To minimize this, take the derivative with respect to the coefficient c and equate to zero: −2<x, y> + 2c <y, y> = 0. Solving for the coefficient, c = <x, y>/<y, y>. This is the same one we got before. We may actually have more than one "ingredient" vector to deal with. Suppose we want to approximate x with the vectors y1 and y2. As before write x = c1 y1 + c2 y2 + e, where e is the error in the approximation. Note that we can write this in the following way: x = [y1 y2][c1, c2]^T + e, using the usual matrix multiplication. We want to find the coefficients c1 and c2 to minimize the length of the error. We could do it the calculus way, or using our orthogonality idea. We will go for the latter: The error is orthogonal to the data means that <x − c1 y1 − c2 y2, y1> = 0 and <x − c1 y1 − c2 y2, y2> = 0 (that is, the error is orthogonal to each of the ingredient "data" points). Expanding these out gives c1 <y1, y1> + c2 <y2, y1> = <x, y1> and c1 <y1, y2> + c2 <y2, y2> = <x, y2>. This is two equations in two unknowns that we can write in the form
[<y1, y1>  <y2, y1>; <y1, y2>  <y2, y2>] [c1; c2] = [<x, y1>; <x, y2>].
If we know x and the ingredient vectors, we can solve for the coefficients. Of course, what we can do for two ingredient vectors, we can do for N ingredient vectors (and N may be infinite). We want to approximate x as x ≈ c1 y1 + c2 y2 + ... + cN yN. We can find the set of coefficients that minimize the length of the error using the orthogonality principle as before, applied N times. This gives us N equations in the N unknowns, which may be written as a matrix equation with entries <yk, yj> acting on the vector of coefficients. This could be readily solved (say, using Matlab). It would seem that if we take N large enough, we should be able to represent any vector without any error. (Analogy: given enough ingredients, we could make any cake. We might not be able to make everything, but we could make everything in some class of objects.) If this is true, the set of ingredient vectors is said to be complete. A more formal name for the ingredient vectors is basis vectors. Although we have come up with a way of doing the approximation, there is still a lot of work to solve for the coefficients, since we have to first find a matrix and then invert it. Something that is commonly done is to choose a set of basis vectors that is orthogonal. That is, if yj and yk are any pair of basis vectors with j ≠ k, then <yj, yk> = 0. Let us return to the case of two basis vectors when the vectors are orthogonal. Then the equation for the coefficients becomes
[<y1, y1>  0; 0  <y2, y2>] [c1; c2] = [<x, y1>; <x, y2>],
so the coefficients are c1 = <x, y1>/<y1, y1> and c2 = <x, y2>/<y2, y2>. So solving for the coefficients in this case is as easy as doing it for the case of a single vector, and the coefficient is simply the projection of x onto its corresponding basis vector. This generalizes to N basis vectors: If the basis vectors are orthogonal, then the coefficient is simply ck = <x, yk>/<yk, yk>.
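As an illustration of the projection formulas above, here is a short NumPy sketch (not part of the original lecture) that computes best-approximation coefficients both by solving the normal equations and, for an orthogonal basis, by direct projection; the vectors are made-up examples.

import numpy as np

# target vector and two non-orthogonal "ingredient" vectors (made-up numbers)
x = np.array([3.0, 1.0, 5.0])
Y = np.column_stack([[1.0, 1.0, 0.0],    # y1
                     [1.0, 0.0, 1.0]])   # y2

# solve the 2x2 system from the lecture: (Y^T Y) c = Y^T x
c = np.linalg.solve(Y.T @ Y, Y.T @ x)
e = x - Y @ c
print(c, Y.T @ e)   # the error is orthogonal to each ingredient: Y^T e ~ [0, 0]

# with orthogonal basis vectors each coefficient is a simple projection:
# c_k = <x, y_k> / <y_k, y_k>
Q = np.column_stack([[1.0, 1.0, 0.0],
                     [1.0, -1.0, 0.0]])  # orthogonal columns
c_proj = (Q.T @ x) / np.sum(Q * Q, axis=0)
print(c_proj)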
2017-09-19 22:38:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8557258248329163, "perplexity": 326.8822634873995}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686043.16/warc/CC-MAIN-20170919221032-20170920001032-00475.warc.gz"}
https://www.groundai.com/project/complex-path-prediction-of-resonance-assisted-tunneling-in-mixed-systems/
# Complex-Path Prediction of Resonance-Assisted Tunneling in Mixed Systems

Felix Fritzsch
Technische Universität Dresden, Institut für Theoretische Physik and Center for Dynamics, 01062 Dresden, Germany
Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer Straße 38, 01187 Dresden, Germany

Arnd Bäcker
Technische Universität Dresden, Institut für Theoretische Physik and Center for Dynamics, 01062 Dresden, Germany
Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer Straße 38, 01187 Dresden, Germany

Roland Ketzmerick
Technische Universität Dresden, Institut für Theoretische Physik and Center for Dynamics, 01062 Dresden, Germany
Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer Straße 38, 01187 Dresden, Germany

Normann Mertig
Technische Universität Dresden, Institut für Theoretische Physik and Center for Dynamics, 01062 Dresden, Germany
Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer Straße 38, 01187 Dresden, Germany
Department of Physics, Tokyo Metropolitan University, Minami-Osawa, Hachioji 192-0397, Japan

September 14, 2019

###### Abstract

We present a semiclassical prediction of regular-to-chaotic tunneling in systems with a mixed phase space, including the effect of a nonlinear resonance chain. We identify complex paths for direct and resonance-assisted tunneling in the phase space of an integrable approximation with one nonlinear resonance chain. We evaluate the resonance-assisted contribution analytically and give a prediction based on just a few properties of the classical phase space. For the standard map excellent agreement with numerically determined tunneling rates is observed. The results should similarly apply to ionization rates and quality factors.

###### pacs: PACS here

Tunneling through energetic barriers is a textbook paradigm of quantum mechanics. While classically motion is confined to either side of the barrier, wave functions exhibit contributions on both sides. In contrast, nature often exhibits confinement on dynamically disjoint regions of regular and chaotic motion in a mixed phase space, see Fig. 1(a). Here, a classical particle follows a trajectory of regular motion while the corresponding wave function admits an exponentially small contribution on the chaotic region. This phenomenon is called dynamical tunneling DavHel1981 ; KesSch2011 . To date, dynamical tunneling has emerged in many fields of physics. It determines the vibrational spectrum of molecules DavHel1981 , ionization rates of atoms in laser fields WimSchEltBuc2006 ; ZakDelBuc1998 , and chaos-assisted tunneling oscillations LinBal1990 ; BohTomUll1993 in cold atom systems Hen2001 ; SteOskRai2001 . In optics dynamical tunneling is experimentally explored in microwave resonators DemGraHeiHofRehRic2000 ; BaeKetLoeRobVidHoeKuhSto2008 ; DieGuhGutMisRic2014 ; GehLoeShiBaeKetKuhSto2015 as well as microlasers PodNar2005 ; ShiHarFukHenSasNar2010 ; ShiHarFukHenSunNar2011 ; YanLeeMooLeeKimDaoLeeAn2010 ; KwaShiMooLeeYanAn2015 ; CaoWie2015 ; YiYuLeeKim2015 ; YiYuKim2016 , where it determines the quality factor of lasing modes. Here, a recent experimental breakthrough KwaShiMooLeeYanAn2015 ; GehLoeShiBaeKetKuhSto2015 is the measured enhancement of dynamical tunneling due to nonlinear resonance chains BroSchUll2001 ; BroSchUll2002 .
To reveal the universal features of dynamical tunneling it is extensively studied theoretically ShuIke1995 ; ShuIke1998 ; PodNar2003 ; Kes2003 ; PodNar2005 ; EltSch2005 ; Kes2005b ; SheFisGuaReb2006 ; Kes2007 ; BaeKetLoeSch2008 ; BaeKetLoeRobVidHoeKuhSto2008 ; ShuIke2008 ; ShuIshIke2008 ; ShuIshIke2009a ; ShuIshIke2009b ; BaeKetLoeWieHen2009 ; BaeKetLoe2010 ; LoeBaeKetSch2010 ; MerLoeBaeKetShu2013 ; HanShuIke2015 ; ShuIke2016 ; KulWie2016 ; MerKulLoeBaeKet2016:p mainly in model systems. A central object is the tunneling rate $\gamma_m$, which describes the transition from a state on the $m$th quantizing torus of the regular region into the chaotic sea. Qualitatively $\gamma_m$ can be understood from the theory of resonance-assisted tunneling BroSchUll2001 ; BroSchUll2002 ; EltSch2005 ; SchMouUll2011 , see dashed line in Fig. 1(b): On average $\gamma_m$ decreases exponentially for decreasing wavelength or decreasing effective Planck constant $h_{\mathrm{eff}}$, i.e. Planck's constant scaled to some typical action of the system. In addition a drastic enhancement of $\gamma_m$ is observed for some values of $h_{\mathrm{eff}}$. This is due to resonant coupling of regular states, induced by a nonlinear resonance chain Bir1913 within the regular region, see Fig. 1(a). Despite extensive effort an intuitive, trajectory-based picture of dynamical tunneling from regular to chaotic regions, including the effect of nonlinear resonances, is not yet available. Semiclassical theories exist only for time-domain quantities ShuIke1995 ; ShuIke1998 , cases when resonances are irrelevant MerLoeBaeKetShu2013 , and near-integrable systems Ozo1984 ; BroSchUll2002 ; DeuMouSch2013 . On the other hand, quantitatively accurate predictions of $\gamma_m$ LoeLoeBaeKet2013 ; MerKulLoeBaeKet2016:p explicitly require integrable approximations BaeKetLoeSch2008 ; BaeKetLoe2010 ; LoeBaeKetSch2010 ; KulLoeMerBaeKet2014 , which needs some numerical effort. In this paper we establish an intuitive, semiclassical, trajectory-based picture of resonance-assisted regular-to-chaotic tunneling in systems with a mixed phase space. It results in a closed-form, analytic formula for tunneling rates $\gamma_m$. Our approach gives excellent agreement with numerical results for the standard map and outperforms the perturbative approach, see Fig. 1(b). Since our final formula requires just a few properties of the classical phase space rather than the construction of a full integrable approximation, it should also allow for estimating ionization rates and quality factors and be helpful, e.g., for designing experimental setups. Overview — Our method is based on a semiclassical evaluation of a recently developed non-perturbative prediction of $\gamma_m$ MerKulLoeBaeKet2016:p . At its heart is an integrable approximation of the regular region, which includes the relevant nonlinear resonance chain KulLoeMerBaeKet2014 . In doing so, we justify and generalize the use of semiclassical techniques developed for near-integrable systems Ozo1984 ; BroSchUll2002 ; DeuMouSch2013 in the wider class of generic systems with a mixed phase space. In particular, the integrable approximation overcomes the separation of regular and chaotic motion and allows for connecting real tori to the chaotic region via tunneling paths through complexified phase space. This gives the tunneling rate

$\gamma_m = \gamma_d + A_T^2 \, \gamma_{\mathrm{rat}}$,   (1)

which is composed of a direct contribution $\gamma_d$ and a resonance-assisted contribution $A_T^2 \, \gamma_{\mathrm{rat}}$, see (blue and red) lines in Fig. 1(b). Figure 1(a) gives an illustration of the phase-space structures contributing to Eq.
(1): (i) Quantizing torus and direct tunneling paths: The quantizing torus $I_m$, associated with the $m$th regular state, gives rise to tunneling paths $p_d$ with complex momentum emanating from the turning points of $I_m$. See (blue) inner ring and arrows, respectively. They connect $I_m$ with the chaotic sea and determine the direct tunneling rate $\gamma_d$, Eq. (7). (ii) Partner torus and resonance-assisted tunneling paths: A partner torus with action $I_{\mathrm{rat}}$ on the opposite side of the nonlinear resonance is connected with the chaotic sea by complex tunneling paths $p_{\mathrm{rat}}$, see (red) outer ring and arrows. They lead to the resonance-assisted tunneling rate $\gamma_{\mathrm{rat}}$, Eq. (7). (iii) Tunneling paths across the resonance: The tori $I_m$ and $I_{\mathrm{rat}}$ are connected by complex paths bridging the resonance, see (orange) arrows. They determine the tunneling amplitude $A_T$, Eq. (6). Basic setting — We derive our results for kicked one-dimensional Hamiltonians $H(q, p, t) = T(p) + V(q) \sum_n \delta(t - n)$. For illustrations we use the standard choices of kinetic term $T(p)$ and kicking potential $V(q)$ giving the paradigmatic standard map Chi1979 , which is widely used to study tunneling phenomena EltSch2005 ; BaeKetLoeSch2008 ; LoeBaeKetSch2010 ; SchMouUll2011 ; MerKulLoeBaeKet2016:p . At the kicking strength considered here the corresponding stroboscopic Poincaré map exhibits a mixed phase space as shown in Fig. 1(a) with regions of regular motion (thin [gray] lines) and chaotic motion (dots). It is governed by a regular island containing a prominent $r{:}s = 6{:}2$ nonlinear resonance chain and a surrounding chaotic sea. Quantum mechanically the dynamics is given by the unitary time-evolution operator $U$. By introducing a leaky region $L$ (shaded areas in Fig. 1(a)) close to the regular-chaotic border we compute tunneling rates as discussed in Ref. MerKulLoeBaeKet2016:p . We focus on the ground state ($m = 0$) which localizes on the innermost quantizing torus of the regular island. Its tunneling rate is shown in Fig. 1(b) (dots). Note that higher excited states ($m > 0$) show the same qualitative features. Integrable approximation — The key tool for deriving our prediction is an integrable approximation. It is a one degree of freedom time-independent Hamiltonian, which resembles the regular dynamics of the original system KulLoeMerBaeKet2014 . It is based on the universal description of the classical dynamics in the vicinity of an $r{:}s$ resonance by the pendulum Hamiltonian Ozo1984 ; BroSchUll2001 ; BroSchUll2002 ; KulLoeMerBaeKet2014

$H_{r:s}(\theta, I) = H_0(I) + 2 V_{r:s} \left( \frac{I}{I_{r:s}} \right)^{r/2} \cos(r\theta)$,   (2)

using action-angle coordinates $(\theta, I)$. It is determined by the frequencies of tori in the co-rotating frame of the resonance KulLoeMerBaeKet2014 , which close to the resonant torus $I_{r:s}$ behave as $\omega(I) \approx (I - I_{r:s})/M_{r:s}$. The quantities $I_{r:s}$, $M_{r:s}$, and $V_{r:s}$ can be computed from the position and the size of the resonance chain and the linearized dynamics of its central orbit EltSch2005 ; KulLoeMerBaeKet2014 . The phase space is depicted by thin [gray] lines in Fig. 2. Via a canonical transformation the Hamiltonian Eq. (2) is mapped onto the phase space of the standard map, giving $H_{r:s}(q, p)$ KulLoeMerBaeKet2014 . By quantizing and diagonalizing $H_{r:s}(q, p)$ its eigenstates $\psi_m$ yield the tunneling rate MerKulLoeBaeKet2016:p

$\gamma_m = \int_L |\psi_m(q)|^2 \, \mathrm{d}q$,   (3)

via the probability of $\psi_m$ on the leaky region $L$, which we evaluate semiclassically in the following. As discussed in Ref. MerKulLoeBaeKet2016:p , Eq. (3) provides a good prediction of the tunneling rate because $\psi_m$ approximates the corresponding state of the mixed system on the regular region and further provides a sufficiently accurate extension into the regular–chaotic border region, which dominates Eq. (3). WKB construction — Using WKB techniques BerMou1972 ; Cre1994 we now construct the state $\psi_m$ within the integrable approximation $H_{r:s}(q, p)$.
This extends the semiclassical methods developed for integrable systems DeuMouSch2013 to systems with a mixed phase space. Note that the use of the integrable approximation solves the problem of natural boundaries GrePer1981 ; Per1982 . Thus the integrable approximation is the key for connecting regular and chaotic motion quantum mechanically. Following Cre1994 , the wave function is constructed from generalized plane waves with locally adapted momentum, Eq. (5). This requires the solutions $p(q)$ of the equation $H_{r:s}(q, p) = E_m$, as depicted in Fig. 1(a). Here, $E_m$ is the energy of the wave function obtained from EBK quantization. The position coordinate $q$ is real. The real solutions describe the oscillatory part of the wave function in classically allowed regions. The complex solutions describe the exponentially decreasing tunneling tails of the wave function in classically forbidden regions. In particular, they describe $\psi_m$ in the leaky region, as required by Eq. (3). Specifically, for $r{:}s$ resonances the geometry of paths gives the semiclassical wave function as a superposition

$\psi_m(q) = \psi_d(q) + A_T \, \psi_{\mathrm{rat}}(q)$.   (4)

Here, (i) $\psi_d$ is the direct wave function, (ii) $\psi_{\mathrm{rat}}$ is the resonant wave function, and (iii) $A_T$ is the tunneling amplitude. We now explain this in more detail: (i) $\psi_d$ describes the wave function along the quantizing torus $I_m$ in the classically allowed region of energy $E_m$, which is obtained from EBK quantization of the torus $I_m$. Using Airy-type connections Cre1994 this wave function is extended into the classically forbidden region along the paths $p_\alpha$ (with $\alpha = d$), see Fig. 1(a), as

$\psi_\alpha(q) = \left| \frac{\omega_0(I_m)}{2\pi \, \partial_p H_{r:s}(q, p_\alpha(q))} \right|^{1/2} \exp\!\left( \frac{i}{\hbar} \int^{q} p_\alpha(\tilde q) \, \mathrm{d}\tilde q \right)$.   (5)

Here, $\omega_0(I_m)$ accounts for global normalization of the wave function, while $|\partial_p H_{r:s}(q, p_\alpha(q))|^{-1}$ is the classical probability along $p_\alpha$. The complex action $\int^q p_\alpha(\tilde q) \, \mathrm{d}\tilde q$, for which the lower limit is one of the turning points on the torus $I_m$, describes direct tunneling into the leaky region. (ii) Due to the presence of the nonlinear resonance there is an additional real solution with energy $E_m$ on the opposite side of the resonance chain. Along this torus $I_{\mathrm{rat}}$ we construct the wave function $\psi_{\mathrm{rat}}$. In particular, the tunneling tails associated with the solutions $p_{\mathrm{rat}}$ emanating from $I_{\mathrm{rat}}$ and connecting to the chaotic part of phase space, see Fig. 1(a), also obey Eq. (5) with $\alpha = \mathrm{rat}$. Note that in Eq. (5) $\omega_0(I_m)$ must be kept for normalization. The lower limit of the action integral is one of the turning points on $I_{\mathrm{rat}}$. (iii) Finally, the tunneling amplitude is given by BroSchUll2002

$A_T = \left| 2 \sin\!\left( \frac{\pi}{r\hbar} [I_{\mathrm{rat}} - I_m] \right) \right|^{-1} \exp\!\left( -\frac{\sigma}{\hbar} \right)$,   (6)

where $\sigma$ is the imaginary part of the action of any path connecting $I_m$ to $I_{\mathrm{rat}}$. In particular, since there is no solution connecting $I_m$ and $I_{\mathrm{rat}}$ along real positions, these paths are only sketched schematically in Fig. 1(a). Note that this evaluation of $A_T$ based on paths with complex position is formally beyond the WKB construction used here. It has been introduced and successfully applied for near-integrable systems in Ref. DeuMouSch2013 . Further note that complex solutions which do not connect to a real torus are neglected. To summarize our construction, the wave functions $\psi_d$ and $\psi_{\mathrm{rat}}$, Eq. (5), together with the tunneling amplitude $A_T$, Eq. (6), give the wave function $\psi_m$, Eq. (4). Inserting $\psi_m$ into Eq. (3) and neglecting interference allows for evaluating the integral in Eq. (3) independently for $\psi_d$ and $\psi_{\mathrm{rat}}$. For solving these integrals we linearize the action integral in Eq. (5) around the boundary of the leaky region at $q_L$. We further account for the symmetry of the standard map with respect to the central fixed point.
This gives (i) the direct ($\alpha = d$) and (ii) the resonance-assisted ($\alpha = \mathrm{rat}$) tunneling rate as

$\gamma_\alpha = \frac{\hbar}{\operatorname{Im} p_\alpha(q_L)} \, |\psi_\alpha(q_L)|^2$,   (7)

i.e., each tunneling rate is given by the value of the normalized WKB wave function at the boundary $q_L$ of the leaky region. This construction constitutes our first main result. Numerical evaluation of the semiclassically obtained tunneling rates shows excellent agreement with numerically obtained tunneling rates (not shown). This generalizes previous work, Refs. Ozo1984 ; BroSchUll2002 ; DeuMouSch2013 , to the much larger class of mixed systems, based on the powerful tool of integrable approximations. It further provides a basis for a fully analytic prediction, which no longer requires constructing integrable approximations explicitly: Namely, we observe that for the standard map considered here (and other examples) the resonance-assisted contribution $A_T^2 \gamma_{\mathrm{rat}}$ dominates the semiclassically predicted decay rates for all values of the effective Planck constant. In general, one can expect this dominance whenever the resonance is sufficiently large, i.e. roughly speaking when it is visible within the regular region. The converse, that $\gamma_m$ is dominated by the direct tunneling rate $\gamma_d$, may occur for small values of $h_{\mathrm{eff}}$ or if the resonance is extremely small. Analytic result — In the following we derive an analytic formula which evaluates the dominating term $A_T^2 \gamma_{\mathrm{rat}}$ based on just a few properties of the classical phase space. To this end we use the pendulum Hamiltonian $H_{r:s}(\theta, I)$, Eq. (2), in action-angle coordinates and the action representation of the WKB wave function, respectively. This extends the WKB construction presented in Ref. BroSchUll2002 to the Hamiltonian (2). In this context the main novelty is to account both for the action dependence of the resonance term proportional to $(I/I_{r:s})^{r/2}$ and to obtain a closed-form expression for $\gamma_{\mathrm{rat}}$. As a first approximation we extend the leaky region to all chaotic trajectories beyond an action $I_L$ (shaded area in Fig. 2). In order to account for sticky motion, we choose $I_L$ such that the enclosed area covers the regular region enlarged up to the most relevant partial barrier EltSch2005 ; SchMouUll2011 . While the basic features of $\gamma_m$ are preserved upon changing the leaky region, it is worth noting that its details might change roughly up to two orders of magnitude MerKulLoeBaeKet2016:p . This constitutes the main error of our prediction. To construct the WKB wave function the classical phase-space structures fulfilling $H_{r:s}(\theta, I) = E_m$ for real actions are required. They are depicted in Fig. 2 and obey $\cos(r\theta) = \varphi(I)$, where

$\varphi(I) = \left( \frac{I_{r:s}}{I} \right)^{r/2} \frac{E_m - H_0(I)}{2 V_{r:s}}$.   (8)

Real solutions correspond to the tori oscillating around $I_m$ and $I_{\mathrm{rat}}$, i.e. the classically allowed regions ([blue and red] thick lines) on opposite sides of the resonance chain at $I_{r:s}$. There one has $|\varphi(I)| \leq 1$, and a reasonable approximation of $I_{\mathrm{rat}}$ is obtained from $H_0(I_{\mathrm{rat}}) = E_m$. The torus $I_m$ is accompanied by complex paths ([blue] arrows) emanating from its turning points, where $|\varphi(I)| = 1$. Furthermore, there are tunneling paths ([orange] arrows) with non-vanishing imaginary part attached to the turning points, bridging the resonance towards $I_{\mathrm{rat}}$. Finally, there are complex paths ([red] arrows) emanating from the turning points on the partner torus. They have non-vanishing imaginary part as well and connect $I_{\mathrm{rat}}$ with the leaky region. Using Eq. (5), with accordingly interchanged phase-space coordinates, local WKB wave functions can be constructed from these paths. Again a global construction of $\psi_{\mathrm{rat}}$ is obtained by using Airy-type connections at classical turning points Cre1994 ; BroSchUll2002 . In action-angle coordinates the torus $I_m$ is not directly connected with the leaky region.
Thus, for $I_m$ there is neither a direct contribution to the WKB wave function within $L$ nor a direct tunneling rate involved in this construction. Consequently, one has $\gamma_m = A_T^2 \gamma_{\mathrm{rat}}$ within this representation. As the tunneling amplitude $A_T$, Eq. (6), is canonically invariant, it can be computed in action-angle coordinates as well, requiring the evaluation of $\sigma$ from $I_m$ to $I_{\mathrm{rat}}$. By approximating $\operatorname{arcosh}(\varphi) \approx \ln(2\varphi)$, which is justified if $\varphi \gg 1$, and using only the quadratic part of $H_0(I)$, we find

$\sigma = \frac{I_{\mathrm{rat}} - I_m}{r} \ln\!\left( \frac{(I_{\mathrm{rat}} - I_m)^2}{2 e^2 M_{r:s} V_{r:s}} \right) + \frac{I_m}{2} \ln\!\left( \frac{I_m}{e \, I_{r:s}} \right) - \frac{I_{\mathrm{rat}}}{2} \ln\!\left( \frac{I_{\mathrm{rat}}}{e \, I_{r:s}} \right)$,   (9)

which inserted in Eq. (6) constitutes the first part of our analytic expression. Note that the first term coincides with the results obtained for a simpler pendulum model in Ref. BroSchUll2002 while the remaining terms are related to the action dependence of the resonance term proportional to $(I/I_{r:s})^{r/2}$ in Eq. (2). We proceed by computing $\gamma_{\mathrm{rat}}$ by Eq. (3) from the WKB wave function $\psi_{\mathrm{rat}}$ and its probability inside the leaky region. The WKB wave function is associated with $I_{\mathrm{rat}}$ and computed analogously to Eq. (5) using action-angle coordinates. Linearizing the tunneling action occurring in the exponential in Eq. (5) around $I_L$ then gives

$\gamma_{\mathrm{rat}} = \frac{r\hbar}{2 \ln(2|\varphi(I_L)|)} \, |\psi_{\mathrm{rat}}(I_L)|^2$,   (10)

which is in close analogy with Eq. (7). Again the resonance-assisted tunneling rate is determined by the normalized WKB wave function

$|\psi_{\mathrm{rat}}(I_L)|^2 = \left| \frac{\omega_0(I_m)}{2 r \pi \, (E_m - H_0(I_L))} \right| \exp\!\left( -\frac{2}{\hbar} S_{\mathrm{rat}} \right)$   (11)

at the boundary of the leaky region. The tunneling action $S_{\mathrm{rat}}$ from $I_{\mathrm{rat}}$ to $I_L$ is evaluated similarly as Eq. (9), leading to

$S_{\mathrm{rat}} = \frac{I_L - I_{\mathrm{rat}}}{r} \ln\!\left( \frac{(I_L - I_{\mathrm{rat}})(I_L - I_m)}{2 e^2 M_{r:s} V_{r:s}} \right) + \frac{I_{\mathrm{rat}}}{2} \ln\!\left( \frac{I_{\mathrm{rat}}}{e \, I_{r:s}} \right) - \frac{I_L}{2} \ln\!\left( \frac{I_L}{e \, I_{r:s}} \right) + \frac{I_{\mathrm{rat}} - I_m}{r} \ln\!\left( \frac{I_L - I_m}{I_{\mathrm{rat}} - I_m} \right)$,   (12)

which concludes the computation of $\gamma_{\mathrm{rat}}$. Discussion — The evaluation of $\gamma_m$, based on the analytic expressions, Eqs. (6) and (9) for $A_T$ and Eqs. (11)–(12) for $\gamma_{\mathrm{rat}}$, requires just a few classical quantities, namely $I_{r:s}$, $M_{r:s}$, and $V_{r:s}$ as well as the frequencies $\omega(I)$. This analytic prediction is in excellent agreement with numerically obtained rates, see Fig. 1(b). The resonance peaks originate from the divergence of the prefactor in $A_T$, Eq. (6), i.e., they appear whenever $I_{\mathrm{rat}}$ fulfills a quantization condition, $I_{\mathrm{rat}} = I_m + k r \hbar$ with integer $k$. In particular, at the resonance peak a hybridization between states associated with the $m$th and the $(m + kr)$th quantizing torus occurs. This is the same resonance condition as obtained from perturbation theory BroSchUll2001 . In contrast, away from the resonance peak the tori $I_m$ and $I_{\mathrm{rat}}$ are still energetically degenerate. However, since $I_{\mathrm{rat}}$ does not fulfill a quantization condition there is no associated quantum state. This is different from the perturbative framework, where several quantizing tori of different energy contribute to the final prediction. Finally, in contrast to perturbation theory, our result is dominated by a single term for all values of the effective Planck constant. Its overall exponential decay is dominated by the first term of the action $\sigma$, Eq. (9). Hence, the slope of the exponential decay, as depicted in Fig. 1(b), is roughly proportional to the width $I_{\mathrm{rat}} - I_m$ of the dynamical tunneling barrier. Summary and outlook — We derive a trajectory-based, semiclassical prediction of resonance-assisted regular-to-chaotic tunneling rates in systems with a mixed phase space. To this end we generalize the semiclassical picture valid in near-integrable systems to the larger class of systems with a mixed phase space, based on integrable approximations which include the relevant resonance chain.
From this result we find a direct and a resonance-assisted contribution. The latter usually dominates the whole experimentally accessible regime of large tunneling rates. For this resonance-assisted contribution we derive a closed-form analytic expression which depends on just a few properties of the classical phase space. In particular, this expression does not require the explicit construction of integrable approximations. Testing our analytic result for the paradigmatic example of the standard map we find excellent agreement with numerically determined tunneling rates. We expect that our result should also apply to ionization rates and quality factors. We gratefully acknowledge fruitful discussions with Yasutaka Hanada, Kensuke Ikeda, Julius Kullig, Clemens Löbner, Steffen Löck, Amaury Mouchet, Peter Schlagheck, and Akira Shudo. We acknowledge support by the Deutsche Forschungsgemeinschaft (DFG) Grant No. BA 1973/4-1. N.M. acknowledges successive support by JSPS (Japan) Grant No. PE 14701 and Deutsche Forschungsgemeinschaft (DFG) Grant No. ME 4587/1-1. All 3D visualizations were created using Mayavi RamVar2011 .

## References

• (1) M. J. Davis and E. J. Heller, Quantum dynamical tunneling in bound states, J. Chem. Phys. 75, 246 (1981).
• (2) S. Keshavamurthy and P. Schlagheck, Dynamical Tunneling: Theory and Experiment, Taylor & Francis, Boca Raton (2011).
• (3) S. Wimberger, P. Schlagheck, C. Eltschka, and A. Buchleitner, Resonance-assisted decay of nondispersive wave packets, Phys. Rev. Lett. 97, 043001 (2006).
• (4) J. Zakrzewski, D. Delande, and A. Buchleitner, Ionization via chaos assisted tunneling, Phys. Rev. E 57, 1458 (1998).
• (5) W. A. Lin and L. E. Ballentine, Quantum tunneling and chaos in a driven anharmonic oscillator, Phys. Rev. Lett. 65, 2927 (1990).
• (6) O. Bohigas, S. Tomsovic, and D. Ullmo, Manifestations of classical phase space structures in quantum mechanics, Phys. Rep. 223, 43 (1993).
• (7) W. K. Hensinger et al., Dynamical tunnelling of ultracold atoms, Nature 412, 52 (2001).
• (8) D. A. Steck, W. H. Oskay, and M. G. Raizen, Observation of chaos-assisted tunneling between islands of stability, Science 293, 274 (2001).
• (9) C. Dembowski, H.-D. Gräf, A. Heine, R. Hofferbert, H. Rehfeld, and A. Richter, First experimental evidence for chaos-assisted tunneling in a microwave annular billiard, Phys. Rev. Lett. 84, 867 (2000).
• (10) A. Bäcker, R. Ketzmerick, S. Löck, M. Robnik, G. Vidmar, R. Höhmann, U. Kuhl, and H.-J. Stöckmann, Dynamical tunneling in mushroom billiards, Phys. Rev. Lett. 100, 174103 (2008).
• (11) B. Dietz, T. Guhr, B. Gutkin, M. Miski-Oglu, and A. Richter, Spectral properties and dynamical tunneling in constant-width billiards, Phys. Rev. E 90, 022903 (2014).
• (12) S. Gehler, S. Löck, S. Shinohara, A. Bäcker, R. Ketzmerick, U. Kuhl, and H.-J. Stöckmann, Experimental observation of resonance-assisted tunneling, Phys. Rev. Lett. 115, 104101 (2015).
• (13) V. A. Podolskiy and E. E. Narimanov, Chaos-assisted tunneling in dielectric microcavities, Opt. Lett. 30, 474 (2005).
• (14) S. Shinohara, T. Harayama, T. Fukushima, M. Hentschel, T. Sasaki, and E. E. Narimanov, Chaos-assisted directional light emission from microcavity lasers, Phys. Rev. Lett. 104, 163902 (2010).
• (15) S. Shinohara, T. Harayama, T. Fukushima, M. Hentschel, S. Sunada, and E. E. Narimanov, Chaos-assisted emission from asymmetric resonant cavity microlasers, Phys. Rev. A 83, 053837 (2011).
• (16) J. Yang, S.-B. Lee, S. Moon, S.-Y. Lee, S. W. Kim, T. T. A. Dao, J.-H. Lee, and K.
An, Pump-induced dynamical tunneling in a deformed microcavity laser, Phys. Rev. Lett. 104, 243601 (2010).
• (17) H. Kwak, Y. Shin, S. Moon, S.-B. Lee, J. Yang, and K. An, Nonlinear resonance-assisted tunneling induced by microcavity deformation, Scientific Reports 5, 9010 (2015).
• (18) H. Cao and J. Wiersig, Dielectric microcavities: Model systems for wave chaos and non-Hermitian physics, Rev. Mod. Phys. 87, 61 (2015).
• (19) C.-H. Yi, H.-H. Yu, J.-W. Lee, and C.-M. Kim, Fermi resonance in optical microcavities, Phys. Rev. E 91, 042903 (2015).
• (20) C.-H. Yi, H.-H. Yu, and C.-M. Kim, Resonant torus-assisted tunneling, Phys. Rev. E 93, 012201 (2016).
• (21) O. Brodier, P. Schlagheck, and D. Ullmo, Resonance-assisted tunneling in near-integrable systems, Phys. Rev. Lett. 87, 064101 (2001).
• (22) O. Brodier, P. Schlagheck, and D. Ullmo, Resonance-assisted tunneling, Ann. Phys. (N.Y.) 300, 88 (2002).
• (23) A. Shudo and K. S. Ikeda, Complex classical trajectories and chaotic tunneling, Phys. Rev. Lett. 74, 682 (1995).
• (24) A. Shudo and K. S. Ikeda, Chaotic tunneling: A remarkable manifestation of complex classical dynamics in non-integrable quantum phenomena, Physica D 115, 234 (1998).
• (25) V. A. Podolskiy and E. E. Narimanov, Semiclassical description of chaos-assisted tunneling, Phys. Rev. Lett. 91, 263601 (2003).
• (26) S. Keshavamurthy, Dynamical tunneling in molecules: role of the classical resonances and chaos, J. Chem. Phys. 119, 161 (2003).
• (27) C. Eltschka and P. Schlagheck, Resonance- and chaos-assisted tunneling in mixed regular-chaotic systems, Phys. Rev. Lett. 94, 014101 (2005).
• (28) S. Keshavamurthy, On dynamical tunneling and classical resonances, J. Chem. Phys. 122, 114109 (2005).
• (29) M. Sheinman, S. Fishman, I. Guarneri, and L. Rebuzzini, Decay of quantum accelerator modes, Phys. Rev. A 73, 052110 (2006).
• (30) S. Keshavamurthy, Dynamical tunneling in molecules: quantum routes to energy flow, Int. Rev. Phys. Chem. 26, 521 (2007).
• (31) A. Bäcker, R. Ketzmerick, S. Löck, and L. Schilling, Regular-to-chaotic tunneling rates using a fictitious integrable system, Phys. Rev. Lett. 100, 104101 (2008).
• (32) A. Shudo and K. S. Ikeda, Stokes geometry for the quantum Hénon map, Nonlinearity 21, 1831 (2008).
• (33) A. Shudo, Y. Ishii, and K. S. Ikeda, Chaos attracts tunneling trajectories: A universal mechanism of chaotic tunneling, Europhys. Lett. 81, 50003 (2008).
• (34) A. Shudo, Y. Ishii, and K. S. Ikeda, Julia sets and chaotic tunneling: I, J. Phys. A 42, 265101 (2009).
• (35) A. Shudo, Y. Ishii, and K. S. Ikeda, Julia sets and chaotic tunneling: II, J. Phys. A 42, 265102 (2009).
• (36) A. Bäcker, R. Ketzmerick, S. Löck, J. Wiersig, and M. Hentschel, Quality factors and dynamical tunneling in annular microcavities, Phys. Rev. A 79, 063804 (2009).
• (37) A. Bäcker, R. Ketzmerick, and S. Löck, Direct regular-to-chaotic tunneling rates using the fictitious-integrable-system approach, Phys. Rev. E 82, 056208 (2010).
• (38) S. Löck, A. Bäcker, R. Ketzmerick, and P. Schlagheck, Regular-to-chaotic tunneling rates: From the quantum to the semiclassical regime, Phys. Rev. Lett. 104, 114101 (2010).
• (39) N. Mertig, S. Löck, A. Bäcker, R. Ketzmerick, and A. Shudo, Complex paths for regular-to-chaotic tunnelling rates, Europhys. Lett. 102, 10005 (2013).
• (40) Y. Hanada, A. Shudo, and K. S. Ikeda, Origin of the enhancement of tunneling probability in the nearly integrable system, Phys. Rev. E 91, 042913 (2015).
• (41) A. Shudo and K. S.
Ikeda, Toward pruning theory of the Stokes geometry for the quantum Hénon map, Nonlinearity 29, 375 (2016).
• (42) J. Kullig and J. Wiersig, Frobenius–Perron eigenstates in deformed microdisk cavities: non-Hermitian physics and asymmetric backscattering in ray dynamics, New J. Phys. 18, 015005 (2016).
• (43) N. Mertig, J. Kullig, C. Löbner, A. Bäcker, and R. Ketzmerick, Perturbation-free prediction of resonance-assisted tunneling in mixed regular–chaotic systems, arXiv:1607.06477 [nlin.CD] (2016).
• (44) P. Schlagheck, A. Mouchet, and D. Ullmo, Resonance-assisted tunneling in mixed regular-chaotic systems, in Dynamical Tunneling: Theory and Experiment, KesSch2011 , chapter 8, 177.
• (45) G. D. Birkhoff, Proof of Poincaré's geometric theorem, Trans. Amer. Math. Soc. 14, 14 (1913).
• (46) A. M. Ozorio de Almeida, Tunneling and the semiclassical spectrum for an isolated classical resonance, J. Phys. Chem. 88, 6139 (1984).
• (47) J. Le Deunff, A. Mouchet, and P. Schlagheck, Semiclassical description of resonance-assisted tunneling in one-dimensional integrable models, Phys. Rev. E 88, 042927 (2013).
• (48) C. Löbner, S. Löck, A. Bäcker, and R. Ketzmerick, Integrable approximation of regular islands: The iterative canonical transformation method, Phys. Rev. E 88, 062901 (2013).
• (49) J. Kullig, C. Löbner, N. Mertig, A. Bäcker, and R. Ketzmerick, Integrable approximation of regular regions with a nonlinear resonance chain, Phys. Rev. E 90, 052906 (2014).
• (50) B. V. Chirikov, A universal instability of many-dimensional oscillator systems, Phys. Rep. 52, 263 (1979).
• (51) M. V. Berry and K. E. Mount, Semiclassical approximations in wave mechanics, Rep. Prog. Phys. 35, 315 (1972).
• (52) S. Creagh, Tunnelling in multidimensional systems, J. Phys. A 27, 4969 (1994).
• (53) J. M. Greene and I. C. Percival, Hamiltonian maps in the complex plane, Physica D 3, 530 (1981).
• (54) I. C. Percival, Chaotic boundary of a Hamiltonian map, Physica D 6, 67 (1982).
• (55) P. Ramachandran and G. Varoquaux, Mayavi: 3D visualization of scientific data, Comput. Sci. Eng. 13, 40 (2011).
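The closed-form expressions (6) and (9) above are straightforward to evaluate numerically. The following Python sketch is my own illustration, not part of the paper: it implements the tunneling amplitude A_T from the reconstructed Eqs. (6) and (9), and every input value (I_m, I_rat, I_rs, M_rs, V_rs, r, hbar) is a placeholder assumption, not a parameter used by the authors.

import numpy as np

def sigma(I_m, I_rat, I_rs, M_rs, V_rs, r):
    # tunneling action sigma across the resonance, Eq. (9)
    dI = I_rat - I_m
    return (dI / r * np.log(dI**2 / (2 * np.e**2 * M_rs * V_rs))
            + I_m / 2 * np.log(I_m / (np.e * I_rs))
            - I_rat / 2 * np.log(I_rat / (np.e * I_rs)))

def A_T(I_m, I_rat, I_rs, M_rs, V_rs, r, hbar):
    # tunneling amplitude, Eq. (6); the prefactor diverges at the resonance
    # peaks, where (I_rat - I_m)/(r*hbar) is an integer
    prefactor = abs(2 * np.sin(np.pi / (r * hbar) * (I_rat - I_m)))
    return np.exp(-sigma(I_m, I_rat, I_rs, M_rs, V_rs, r) / hbar) / prefactor

# illustrative placeholder values only -- not the parameters of the paper
print(A_T(I_m=0.05, I_rat=0.25, I_rs=0.15, M_rs=0.5, V_rs=1e-3, r=6, hbar=0.02))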
2021-03-02 23:59:22
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8188351392745972, "perplexity": 2444.912343922874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364932.30/warc/CC-MAIN-20210302221633-20210303011633-00083.warc.gz"}
https://xslates.com/posts/compound-blindness/
# Compound blindness

The way companies measure growth can sometimes be ambiguous. Considering growing numbers in isolation is dangerous. Suppose we have a record of increasing revenue over time. Although I couldn't find the technical term for it, I remember reading the term compound blindness to describe the situation in which an "impressive" growth rate does not take into account inflation, population growth or other forms of co-occurring natural compound growth.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl

mpl.rcParams['font.sans-serif'] = 'Liberation Sans'
mpl.rcParams['legend.frameon'] = False

data = {
    "users": [10, 15, 30, 50, 70, 90, 150, 270, 350, 500],
    "revenue": [100, 300, 560, 780, 900, 1600, 2600, 3500, 3800, 4600],
}
dataset = pd.DataFrame(data)

fig, ax = plt.subplots()
ax.plot(dataset["users"], label="users")
ax.plot(dataset["revenue"], label="revenue")
plt.legend(loc="best")

Although revenue growth is sharp, with some additional calculations we can get a completely different intuition about the performance of our toy dataset. Once you take into account growing users, the growth in revenue loses a bit of its sting.

dataset["users_change"] = dataset["users"].pct_change()
dataset["revenue_change"] = dataset["revenue"].pct_change()
dataset["revenue_per_user"] = dataset["revenue"] / dataset["users"]
dataset["revenue_per_user_change"] = dataset["revenue_per_user"].pct_change()

fig, ax = plt.subplots()
ax.plot(dataset["revenue_per_user"], label="revenue per user")
plt.title("change in revenue per user")

In situations like these, most managers and executives will stop at the first visualization, captivated by the growth curve. However, even a user base that's growing at a much slower pace eventually starts to erode your revenue per user rather quickly.
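To make the "co-occurring compound growth" point concrete, one could also deflate the revenue series against an assumed inflation baseline before comparing growth. This is a hypothetical extension of the post's example, assuming a constant 2% inflation per period; the real_revenue column name is my own addition on top of the dataset defined above.

inflation = 0.02  # assumed constant per-period inflation; purely hypothetical
deflator = (1 + inflation) ** np.arange(len(dataset))
dataset["real_revenue"] = dataset["revenue"] / deflator

fig, ax = plt.subplots()
ax.plot(dataset["revenue"], label="nominal revenue")
ax.plot(dataset["real_revenue"], label="real revenue (2% deflated)")
plt.legend(loc="best")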
2019-06-24 13:17:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2606775164604187, "perplexity": 2031.9107781507826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999539.60/warc/CC-MAIN-20190624130856-20190624152856-00175.warc.gz"}
https://open.kattis.com/contests/v7nrab/problems/hissingmicrophone
Brett "Is it rated" Fazio Contest 1 Start 2017-11-27 20:04 CET Brett "Is it rated" Fazio Contest 1 End 2017-11-27 21:04 CET The end is near! Contest is over. Not yet started. Contest is starting in -181 days 2:29:34 1:00:00 0:00:00 Problem FHissing Microphone A known problem with some microphones is the “hissing s”. That is, sometimes the sound of the letter s is particularly pronounced; it stands out from the rest of the word in an unpleasant way. Of particular annoyance are words that contain the letter s twice in a row. Words like amiss, kiss, mississippi and even hiss itself. Input The input contains a single string on a single line. This string consists of only lowercase letters (no spaces) and has between $1$ and $30$ characters. Output Output a single line. If the input string contains two consecutive occurrences of the letter s, then output hiss. Otherwise, output no hiss. Sample Input 1 Sample Output 1 amiss hiss Sample Input 2 Sample Output 2 octopuses no hiss Sample Input 3 Sample Output 3 hiss hiss
2018-05-27 21:33:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2847841680049896, "perplexity": 6146.198117431853}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794870470.67/warc/CC-MAIN-20180527205925-20180527225925-00561.warc.gz"}
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=E1BMAX_2006_v43n3_519
DEHN SURGERY AND A-POLYNOMIAL FOR KNOTS
Kim, Jin-Hong

Abstract
The Property P Conjecture states that the 3-manifold $Y_r$ obtained by Dehn surgery on a non-trivial knot in $S^3$ with surgery coefficient $\gamma \in \mathbb{Q}$ has non-trivial fundamental group (so is not simply connected). Recently Kronheimer and Mrowka provided a proof of the Property P conjecture for the case $\gamma$ …

Keywords: Dehn surgery; Property P conjecture; A-polynomials
Language: English

References
1. D. Cooper, M. Culler, H. Gillet, D. D. Long, and P. B. Shalen, Plane curves associated to character varieties of 3-manifolds, Invent. Math. 118 (1994), no. 1, 47-84
2. M. Culler, C. Gordon, J. Luecke, and P. B. Shalen, Dehn surgery on knots, Ann. of Math. (2) 125 (1987), no. 2, 237-300
3. N. Dunfield and S. Garoufalidis, Non-triviality of the A-polynomial for knots in $S^3$, Algebr. Geom. Topol. 4 (2004), 1145-1153 (electronic)
4. D. Gabai, Foliations and the topology of 3-manifolds: III, J. Differential Geom. 26 (1987), 479-536
5. P. B. Kronheimer and T. S. Mrowka, Witten's conjecture and Property P, Geom. Topol. 8 (2004), 295-310 (electronic)
6. P. B. Kronheimer and T. S. Mrowka, Dehn surgery, the fundamental group and SU(2), preprint (2004); arXiv:math.GT/0312322
7. P. B. Kronheimer, T. S. Mrowka, P. Ozsvath, and Z. Szabo, Monopoles and lens space surgeries, To appear in Ann. Math; arXiv:math.GT/0310164
8. L. Moser, Elementary surgery along a torus knot, Pacific J. Math. 38 (1971), 737-745
9. P. Ozsvath and Z. Szabo, Absolutely graded Floer homologies and intersection forms for four-manifolds with boundary, Adv. Math. 173 (2003), no. 2, 179-261
2018-09-26 00:35:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 3, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6568132638931274, "perplexity": 2106.0888601314896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267162809.73/warc/CC-MAIN-20180926002255-20180926022655-00446.warc.gz"}
https://stats.stackexchange.com/questions/386916/are-there-two-motivations-for-bayesian-information-criteria
# Are there two motivations for Bayesian information criteria? Are there two motivations for all these Bayesian information criteria? I am only aware of the motivation of "expected out-of-sample prediction score." Let the in-sample data be $$y$$ and the parameter be called $$\theta$$. Assume that $$y\mid \theta \sim p(y \mid \theta)$$ and that the prior is $$p(\theta)$$. Call the posterior mean $$\hat{\theta}(y)$$, and let the out-of-sample data be called $$Y$$. This closely follows the notation found here. The "effective number of parameters" described by that paper is \begin{align*} p_D &= E_{\theta \mid y}\left[ - 2 \log p(y \mid \theta) \right] + 2 \log p(y \mid \hat{\theta}(y)) \\ \end{align*} Why isn't there being an expectation taken with respect to unobserved data $$Y$$ in the above expression? I thought the whole point of this class of model selection strategies was to approximate (ideally) $$E_Y\left[ E_{\theta \mid y}\left[ \log p(Y \mid \theta) \right] \right]$$, or more realistically \begin{align*} E_Y\left[-2 \log p(Y \mid \hat{\theta}(y)) \right] &= - 2\log p(y \mid \hat{\theta}(y)) \\ &+ \underbrace{E_Y\left[ -2\log p(Y \mid \hat{\theta}(y)) \right] + 2\log p(y \mid \hat{\theta}(y))}_{\text{a better }p_D \text{?}} \end{align*} But clearly $$E_Y\left[ -2\log p(Y \mid \hat{\theta}(y)) \right] \neq E_{\theta \mid y}\left[ - 2 \log p(y \mid \theta) \right].$$ Is that just a commonly-used approximation? • You could consider the possibility that there are more than two. – Glen_b Jan 13 at 6:53 • Just avoid DIC, it is a toxic measure! – Xi'an Jan 13 at 8:42
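For intuition, the two quantities contrasted in the question can be compared by simulation in a conjugate model where everything is tractable. The sketch below is my own illustration, not from the linked paper: a normal model with known unit variance, posterior draws for $\theta$, the DIC-style $p_D$, and a Monte-Carlo estimate of the "better $p_D$" variant; all variable names are ad hoc, and the true $\theta$ is used only to simulate replicated out-of-sample data.

import numpy as np

rng = np.random.default_rng(0)
theta_true, n = 1.0, 50
y = rng.normal(theta_true, 1.0, size=n)

# conjugate posterior for theta under a N(0, 10^2) prior and unit noise variance
prior_var = 100.0
post_var = 1.0 / (n + 1.0 / prior_var)
post_mean = post_var * y.sum()

def dev(data, thetas):
    # -2 log p(data | theta) for N(theta, 1), dropping the shared constant
    thetas = np.atleast_1d(thetas)
    return np.sum((data[None, :] - thetas[:, None]) ** 2, axis=1)

draws = rng.normal(post_mean, np.sqrt(post_var), size=20000)

# DIC-style p_D: posterior-averaged deviance minus deviance at the posterior
# mean; note there is no expectation over new data Y anywhere in this quantity
p_D = dev(y, draws).mean() - dev(y, post_mean)[0]

# Monte-Carlo estimate of the "better p_D" from the question, replacing the
# posterior average by an average over replicated out-of-sample data Y
Y_rep = rng.normal(theta_true, 1.0, size=(2000, n))
out_of_sample = np.mean([dev(Yr, post_mean)[0] for Yr in Y_rep])
alt = out_of_sample - dev(y, post_mean)[0]

print(p_D, alt)  # similar magnitude here, but they are different quantities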
2019-06-16 19:27:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999276399612427, "perplexity": 1361.3527698201265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998291.9/warc/CC-MAIN-20190616182800-20190616204800-00408.warc.gz"}
https://physics.paperswithcode.com/paper/nutational-resonances-transitional-precession
# Nutational resonances, transitional precession, and precession-averaged evolution in binary black-hole systems

5 May 2017 · Xinyu Zhao, Michael Kesden, Davide Gerosa ·

In the post-Newtonian (PN) regime, the timescale on which the spins of binary black holes precess is much shorter than the radiation-reaction timescale on which the black holes inspiral to smaller separations. On the precession timescale, the angle between the total and orbital angular momenta oscillates with nutation period $\tau$, during which the orbital angular momentum precesses about the total angular momentum by an angle $\alpha$. This defines two distinct frequencies that vary on the radiation-reaction timescale: the nutation frequency $\omega \equiv 2\pi/\tau$ and the precession frequency $\Omega \equiv \alpha/\tau$. We use analytic solutions for generic spin precession at 2PN order to derive Fourier series for the total and orbital angular momenta in which each term is a sinusoid with frequency $\Omega - n\omega$ for integer $n$. As black holes inspiral, they can pass through nutational resonances ($\Omega = n\omega$) at which the total angular momentum tilts. We derive an approximate expression for this tilt angle and show that it is usually less than $10^{-3}$ radians for nutational resonances at binary separations $r > 10M$. The large tilts occurring during transitional precession (near zero total angular momentum) are a consequence of such states being approximate $n=0$ nutational resonances. Our new Fourier series for the total and orbital angular momenta converge rapidly with $n$ providing an intuitive and computationally efficient approach to understanding generic precession that may facilitate future calculations of gravitational waveforms in the PN regime.

## Categories

General Relativity and Quantum Cosmology
High Energy Astrophysical Phenomena
2023-02-04 05:52:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8884245157241821, "perplexity": 1432.669717998618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500094.26/warc/CC-MAIN-20230204044030-20230204074030-00004.warc.gz"}
https://physics.stackexchange.com/questions/203367/fringe-width-and-spacing-and-number-of-slits-in-diffraction-experiments
# Fringe width and spacing and number of slits in diffraction experiments

In a single slit experiment, the fringes are not equally spaced and aren't of equal widths—the central maximum is the widest, the secondary maxima grow narrower and narrower outward, and the minima grow wider and wider outward. In a double slit interference pattern, the fringes are equally spaced and of equal widths. With a diffraction grating (lots of slits), the fringes are highly focused, with small widths and unequal spacing. What are the reasons for the differences in fringe spacing and widths as the number of slits increases, specifically in each of the three scenarios I've presented above?

• Other than the central bright spot a single slit will produce an equally spaced fringe pattern. Also Young's original slit experiment was not a double slit but a single human hair. The light passed along the two outer edges of the hair and produced an equally spaced fringe pattern. I do this all the time with a laser and different gauge guitar strings. A single edge (not a slit) will produce a fringe pattern with unequal spacing. May 3 '16 at 15:28

The first thing to note is that each of the slits produces a diffraction pattern the width of which is controlled by the width of the slit and the wavelength of the light. The amount of light travelling from a slit in a particular direction is controlled by the diffraction pattern due to a single slit. The light waves from each of the slits superpose (interfere) and produce an interference pattern. The intensity of the fringes produced by the interference of light from the slits is modulated by the diffraction pattern produced by each of the slits. That is why the intensity of the interference fringes decreases as the order of the fringes increases. So here is the modulated interference pattern for one slit, two slits, three slits and five slits with all slits the same width and with the same slit separation. Note the modulation of light intensity of the interference fringes by the diffraction envelope. Also note that the separation of the principal maxima for the 2, 3 and 5 slit arrangement is the same. The spacing of the principal maxima is controlled by the separation of the slits $d$ and the wavelength of the light $\lambda$. The condition for the $n^{\text{th}}$ principal maximum is $n\lambda = d \sin \theta_n$. You would have met this equation when studying the diffraction grating but it is the same equation for any number $N$ of slits provided that you are dealing with the principal maxima. When two slit interference is studied the angle $\theta$ is small (< 0.1 radian or < 5$^\circ$) and so the approximation $\sin \theta \approx \theta$ is a good one. So the condition for a maximum becomes $n\lambda = d \theta_n$, which results in the fringes appearing to be equally spaced. When using the diffraction grating, because the slit separation is small compared with that of the normal 2 slit arrangement, the angles at which there are maxima are large. So the small angle approximation cannot be made and the fringes are not equally spaced. The other striking thing about the patterns for 2, 3 and 5 slits is that the principal maxima get narrower as the number of slits increases and there are also, in between the principal maxima, much less intense subsidiary maxima. What is shown by the next diagram is that as well as the principal maxima getting narrower they at the same time get brighter.
What is happening is that as the number of slits is increased, the amount of light coming through the slits increases and at the same time the light is being channelled into a smaller angular width (the fringe width). Ignoring the diffraction envelope, for 2 slits the intensity of a principal maximum is $I_2 \propto (2A)^2$, where $A$ is the amplitude of a wave from a single slit. For 3 slits $I_3 \propto (3A)^2$ and for five slits $I_5 \propto (5A)^2$. So in a diffraction grating setup, if the number of slits being used is reduced, say half the grating is covered up with black paper, the interference pattern becomes less bright and the width of the principal maxima increases.

Your three images are not comparing like for like. For example, the double slit pattern in the middle would appear to have slits which are much narrower than the slit used for the single slit pattern. The reason for this inference is that the width of the diffraction envelope modulating the intensity is much broader in the second diagram than in the first. The last image, of the pattern from a diffraction grating, probably shows a much greater angular range than the middle image because it shows the unequal spacing of the fringes. It also suggests that the slits in the diffraction grating are much narrower than those in the two slit arrangement, because there is hardly any evidence of diffraction-envelope modulation of the intensity over a very wide angular range in the diffraction grating picture.

Although all the intensity graphs can be derived mathematically, it is perhaps more informative to use phasor diagrams to explain what is happening. To make the analysis easier I have ignored the effect of the diffraction envelope. For three slits you have the superposition of waves from three coherent sources, each of amplitude $A$. When $\theta = 0^\circ$ the phase difference between the three waves is zero, and so when they overlap they produce a resultant amplitude of $3A$ for a principal maximum. This is the $n=0$ fringe. The same thing happens when the phase difference is $360^\circ$, which is a path difference of $\lambda$. This again results in a principal maximum of amplitude $3A$. This is the $n = \pm 1$ fringe. When the phase difference is $180^\circ$, which is a path difference of $\frac{\lambda}{2}$, there is a secondary maximum of amplitude $A$. For phase differences of $120^\circ$ and $240^\circ$, which correspond to path differences of $\frac{\lambda}{3}$ and $\frac{2\lambda}{3}$, the resultant amplitude is zero. There is a minimum in those positions. So in the space between adjacent maxima for 2 slits there are now two minima and a secondary maximum. Thus the width of the principal maxima must have decreased. Imagine how narrow and bright the principal maxima are for a diffraction grating with 5000 slits in use.

Finally, the separation of the principal maxima is controlled by the separation of the slits, the wavelength of the light and the order of the fringes, whereas the width and intensity of the principal maxima are controlled by the number of slits.

• Good answer +1. I had to correct the little mistake about the width of the maxima in my answer after reading yours ;) May 3 '16 at 14:38
• @Farcher, your answer is so good it's no wonder you have such a high reputation! May 25 '16 at 19:55

TL;DR: the pictures given are at least inconsistent, if not wrong. It is not even clear what is plotted.

Let $d$ be the separation of your slits (i.e. from centre to centre), $a$ be the width of a slit and $N$ be the number of slits. Then, using a scalar theory of diffraction in the Fraunhofer limit, we can write the amplitude $\phi$ for a plane wave of wavevector $k$ coming in perpendicular to the grating as

$\phi(\theta) \propto \text{sinc}\left( \frac{k \sin(\theta)\, a}{2} \right) \times \frac{\sin\left( k \sin(\theta)\frac{d N}{2} \right)}{\sin\left( k \sin(\theta) \frac{d}{2} \right)}$

where $\theta$ is the outgoing angle. If you are a pedant you may include an obliquity factor, but it will not matter for this argument. The intensity, which is what we see as the diffraction pattern, is $|\phi|^2$.

Some more preliminaries: the sinc term above is just an envelope scaling the interference peaks, so it will only make the peaks off-axis weaker. Let's assume all of them are still visible. The fraction of the two sines is the interference term. It has two kinds of maxima:

• type 1 maxima, when $k \sin(\theta)\frac{d}{2} = n \pi$ where $n\in \mathbb{Z}$;
• type 2 maxima, when $k \sin(\theta)\frac{d N}{2} = m \pi$ where $m\in \mathbb{Z}$ and $m$ is not a multiple of $N$.

Now we can address the points in the question:

In a single slit experiment, the fringes are not equally spaced and aren’t of equal widths—the central maximum is the widest, the secondary maxima grow narrower and narrower outward, and the minima grow wider and wider outward.

What the OP calls "secondary maxima" is what I call "type 2 maxima" (named differently to avoid confusion). In the single slit case these are the only ones present. Now it depends on what is plotted in the picture. If it is plotted against the coordinate $x$ on a screen at distance $L$, with $x = L\sin(\theta)$, then the type 2 maxima would be exactly equally spaced and have half the width of the type 1 maximum. So the picture is probably wrong. If we plot against $\theta$ there is some change in separation, but only at high angles.

In a double slit interference pattern, the fringes are equally spaced and of equal widths.

The intensity of these maxima in the picture falls off continuously, so they must be type 1 maxima. Type 1 maxima are also equally spaced if plotted against $x$. They have equal widths. Type 2 maxima would have half that width. So the pictures are clearly wrong.

With a diffraction grating (lots of slits), the fringes are highly focused, with small widths and unequal spacing.

Conclusion from repeating the arguments above: small width, sure; unequal spacing, nope (maybe if you plot against $\lambda$, but that would be inconsistent with the spacing from the double slit). I have explained all the features of the gratings the pictures are trying (and failing) to demonstrate, so please refer to my answer instead of the pictures.

When you pay attention to the left or right area of your double slit experiment, you will see at the edges the typical intensity distribution of a single slit. So a multi-slit arrangement is nothing other than the sum of single slits. Of course the intensity distribution depends on the distance between the two slits. Artfully arranged, one sees a clean distribution like in your case. The bigger the distance, the more one sees the individual single slit distributions. Take a step back and consider that fringes appear even behind a single edge. Move two edges closer and closer together and you get the intensity distribution you describe. And multi-slits are the sum of all of their edges. To break it down, the question is why intensity distributions appear behind edges. The answer is simple.
Photons are moving units with oscillating electric and magnetic fields. The thinner the edge's shape (for example a razor blade), the higher the electrostatic potential from the surface electrons of the edge. These electrons and the photons form a quantized field, and the projection of this field is the intensity distribution behind the edge. Such an explanation helps one understand why even single photons, emitted one at a time, form intensity patterns behind edges. The surface electrons are in motion and differ in their energy, the distance from the photon source to the edge varies, and all this leads to the intensity distribution.
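For readers who want to experiment, here is a small numerical sketch of the scalar-diffraction formula quoted in the second answer (Python/NumPy; the wavelength, slit width, separation and slit count are arbitrary illustrative choices, not values from the thread). Plotting `intensity` against `theta` for $N = 1, 2, 5$ reproduces the qualitative behaviour described above: a fixed single-slit envelope, principal maxima whose spacing is set by $d$, and widths that shrink as $N$ grows.

```python
import numpy as np

# Assumed, illustrative parameters (not from the thread).
wavelength = 633e-9            # m, a typical He-Ne laser line
k = 2 * np.pi / wavelength
a, d, N = 20e-6, 100e-6, 5     # slit width, slit separation, slit count

theta = np.linspace(-0.05, 0.05, 40001)  # outgoing angle in radians
beta = 0.5 * k * np.sin(theta) * a       # single-slit phase argument
gamma = 0.5 * k * np.sin(theta) * d      # slit-to-slit phase argument

envelope = np.sinc(beta / np.pi) ** 2    # np.sinc(x) = sin(pi*x)/(pi*x)

# Interference term sin(N*gamma)^2 / sin(gamma)^2; take the N^2 limit
# where the denominator vanishes (the principal maxima).
with np.errstate(divide="ignore", invalid="ignore"):
    interference = (np.sin(N * gamma) / np.sin(gamma)) ** 2
interference[~np.isfinite(interference)] = N ** 2

intensity = envelope * interference      # proportional to |phi(theta)|^2
```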
2022-01-18 14:42:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7099539637565613, "perplexity": 351.8139175713133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300849.28/warc/CC-MAIN-20220118122602-20220118152602-00105.warc.gz"}
https://brilliant.org/problems/patterns-3/
# Patterns 3

Algebra Level 2

Find the next number in the following sequence. $1, 1, 2, 2, 4, 8, 12, 96, \ldots$
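The page itself gives no solution, but one rule consistent with all of the listed terms is to alternate sum and product of the two preceding terms: $a_3 = a_1 + a_2 = 2$, $a_4 = a_2 \cdot a_3 = 2$, $a_5 = a_3 + a_4 = 4$, and so on, which would make the next term $12 + 96 = 108$. A quick check of that (unofficial) guess:

```python
# Candidate rule, not an official solution: odd-indexed terms are the
# sum, even-indexed terms the product, of the two preceding terms.
seq = [1, 1]
while len(seq) < 9:
    a, b = seq[-2], seq[-1]
    seq.append(a + b if (len(seq) + 1) % 2 == 1 else a * b)
print(seq)  # [1, 1, 2, 2, 4, 8, 12, 96, 108] -> next term would be 108
```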
2018-03-18 13:58:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3671562969684601, "perplexity": 8385.776712087612}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645775.16/warc/CC-MAIN-20180318130245-20180318150245-00189.warc.gz"}
https://www.groundai.com/project/electronic-structure-of-hole-doped-delafossite-oxides-cucr_1-xmg_xo_2/
# Electronic structure of hole-doped delafossite oxides CuCr$_{1-x}$Mg$_x$O$_2$

T. Yokobori    M. Okawa    K. Konishi    R. Takei    K. Katayama Department of Applied Physics, Tokyo University of Science, Shinjuku, Tokyo 162-8601, Japan    S. Oozono    T. Shinmura    T. Okuda Department of Electrical and Electronics Engineering, Kagoshima University, Kagoshima, Kagoshima 890-0065, Japan    H. Wadati Department of Applied Physics and Quantum-Phase Electronics Center, University of Tokyo, Bunkyo, Tokyo 113-8656, Japan    E. Sakai    K. Ono Photon Factory, KEK, Tsukuba, Ibaraki 305-0801, Japan    H. Kumigashira Photon Factory, KEK, Tsukuba, Ibaraki 305-0801, Japan; PRESTO, Japan Science and Technology Agency, Chiyoda, Tokyo 102-0076, Japan    M. Oshima Department of Applied Chemistry, University of Tokyo, Bunkyo, Tokyo 113-8656, Japan    T. Sugiyama    E. Ikenaga Japan Synchrotron Radiation Research Institute, Sayo, Hyogo 679-5198, Japan    N. Hamada Department of Physics, Tokyo University of Science, Noda, Chiba 278-8510, Japan    T. Saitoh Department of Applied Physics, Tokyo University of Science, Shinjuku, Tokyo 162-8601, Japan

July 14, 2019

###### Abstract

We report the detailed electronic structure of a hole-doped delafossite oxide CuCr$_{1-x}$Mg$_x$O$_2$ studied by photoemission spectroscopy (PES), soft x-ray absorption spectroscopy (XAS), and band-structure calculations within the local-density approximation + $U$ (LDA+$U$) scheme. Cr/Cu $3p$–$3d$ resonant PES reveals that the near-Fermi-level leading structure has primarily the Cr $3d$ character with a minor contribution from the Cu $3d$ through Cu $3d$–O $2p$–Cr $3d$ hybridization, in good agreement with the band-structure calculations. This indicates that a doped hole will have primarily the Cr $3d$ character. Cr $2p$ PES and $L$-edge XAS spectra exhibit typical Cr$^{3+}$ features for all $x$, while the Cu $L$-edge XAS spectra exhibited a systematic change with $x$. This indicates now that the Cu valence is monovalent at $x=0$ and the doped hole should have Cu character. Nevertheless, we surprisingly observed two types of charge-transfer satellites that should be attributed to Cu$^{2+}$- and Cu$^{+}$-like initial states in the Cu $2p$–$3d$ resonant PES spectrum even at $x=0$, while the Cu $2p$ PES spectra undoubtedly show the Cu$^{+}$ character even for the lightly doped samples. We propose that these contradictory results can be understood by introducing not only the Cu $3d^94s$ state, but also finite Cu $3d$–Cr $3d$ charge transfer via O $2p$ states in the ground-state electronic configuration.

###### pacs: 79.60.-i, 71.20.Ps, 78.70.Dm

preprint: Journal ref.: Phys. Rev. B 87, 195124 (2013)

## I Introduction

The search for new sustainable energy resources, including new innovations, is an urgent issue in modern societies. Thermoelectricity is one of the promising candidates because there exists so much waste heat that could be recovered without sacrificing environmental costs. Delafossite-type oxides Cu$M$O$_2$ ($M$ = trivalent cation) have considerable potential for thermoelectric materials Okuda et al. (2005) because of their layered structure of edge-shared $M$O$_6$ octahedra, which is very similar to the one in thermoelectric NaCoO.Terasaki et al. (1997) Hole-doped CuCr$_{1-x}$Mg$_x$O$_2$ is a member of this family, being a candidate for a future thermoelectrode. In CuCrO$_2$, electrons of the Cu ions under the pseudo- local symmetry fill up the narrow Cr band, which is the counterpart of the Co band filled by six electrons in NaCoO. Hence, as in NaCoO, a rapid change in the density of states (DOS) at the Fermi level ($E_F$) (Ref.
Takeuchi et al., 2004) may be realized near the band edge in the hole-doped system CuCr$_{1-x}$Mg$_x$O$_2$,Note1 () because the Cr $3d$ band is expected to be at the top of the valence band in terms of a comparison of the charge-transfer energy of the Cr ion and that of the Cu ion.Iwasawa et al. (2006) More precisely, in the $k$-resolved electronic structure, this situation would correspond to the pudding-mold band structure that yields a large thermopower in NaCoO.Kuroki and Arita (2007) As a consequence, a combination of a large thermopower and the highest electrical conductivity among delafossite oxidesNagarajan et al. (2001) may be able to produce a large thermoelectric figure of merit $ZT = S^2\sigma T/\kappa$ ($\kappa$: thermal conductivity) in the present system.

Aside from thermoelectricity, Cu$M$O$_2$ has various interesting physical properties both in fundamental and applicational terms. A former example is the multiferroic oxides CuFeO$_2$ (Ref. Kimura et al., 2006) and the present compound CuCrO$_2$ (Ref. Seki et al., 2008) as well, whereas an important finding for the latter was a $p$-type transparent conducting oxide (TCO); $n$-type TCOs such as In$_2$O$_3$-, SnO$_2$-, or ZnO-based ones had been realized earlier,Hamberg and Granqvist (1986) yet the $p$-type counterpart was more difficult. The delafossite CuAlO$_2$ was the first $p$-type TCO with high carrier mobility and a wide band gap.Kawazoe et al. (1997) From the viewpoint of the near-$E_F$ electronic structure, this was accomplished by hole doping into a wide-gap Cu$^+$ oxide, which has the closed $3d^{10}$ shell.Kawazoe et al. (1997) Hence, the top of the valence band was expected to have the Cu $3d$ character with some O $2p$ one due to hybridization.

The electronic structure of CuCrO$_2$ has been investigated both theoretically and experimentally in the context of TCOs,Scanlon et al. (2009); Arnold et al. (2009); Hiraga et al. (2011) or of thermoelectric/multiferroic materials.Maignan et al. (2009) Along the conventional strategy for TCOs, the top of the valence band is expected to have mainly the Cu $3d$ character, whereas it would be desirable to have mainly the Cr $3d$ character for better thermoelectric properties, as mentioned before. On this point, the reported first-principles band-structure calculations are still controversial; Scanlon et al. reported that the Cr partial DOS has its maximum peak at the same energy as the maximum peak of the Cu partial DOS and a negligibly small Cr partial DOS at the top of the valence band.Scanlon et al. (2009) In contrast, Maignan et al. reported considerable Cr partial DOS at the top of the valence band,Maignan et al. (2009) and a recent study by Hiraga et al. showed the Cr partial DOS at a much deeper energy.Hiraga et al. (2011) The experimental electronic structure of CuCrO$_2$ has been investigated by photoemission spectroscopy (PES), x-ray absorption spectroscopy (XAS), and x-ray emission spectroscopy. In these studies, Scanlon et al. and Arnold et al. interpreted the development of the upper part of the valence band with $x$ in CuAl$_{1-x}$Cr$_x$O$_2$ as a reconstruction of the Cu $3d$ bands instead of a development of the Cr $3d$ states, and concluded that the Cr $3d$ DOS minimally contributed to the top of the valence band.Scanlon et al. (2009); Arnold et al. (2009) However, magnetic and transport studies reported a close coupling of the doped holes introduced by Mg substitution and the spin of the Cr ions that suggested the mixed-valence state Cr$^{3+}$/Cr$^{4+}$,Okuda et al. (2005); Ono et al. (2007) which in turn implies Cr $3d$ character at the top of the valence band in the parent compound CuCrO$_2$.
From the above overview, the electronic structure of CuCr$_{1-x}$Mg$_x$O$_2$, particularly near $E_F$, has not been established yet. In this paper, we performed a comprehensive study of the electronic structure of lightly hole-doped CuCr$_{1-x}$Mg$_x$O$_2$ ($x$ = 0–0.03) by photoemission spectroscopy with various photon energies, soft x-ray absorption spectroscopy, and band-structure calculations using the local density approximation + $U$ (LDA+$U$) method.

## II Experiment and Calculation

Polycrystalline samples of CuCr$_{1-x}$Mg$_x$O$_2$ ($x$ = 0, 0.02, 0.03) were prepared by the standard solid-state reaction.Okuda et al. (2005) Vacuum ultraviolet (VUV) PES measurements in the range of the Cr/Cu $3p$–$3d$ resonance (–90 eV) were performed at BL-28A of the Photon Factory, KEK, using a SCIENTA SES-2000 electron analyzer. Hard x-ray PES (HX-PES) spectra taken with eV were measured at BL47XU of SPring-8 using a SCIENTA R4000 electron analyzer. XAS spectra of the Cr and Cu $L$-edge regions and Cu $2p$–$3d$ resonant soft x-ray PES (SX-PES) spectra were measured at BL-2C of the Photon Factory, KEK, using a SCIENTA SES-2000 electron analyzer. In order to obtain clean surfaces, we fractured the samples in situ right before the measurements. The fracturing and the measurements were done in ultrahigh vacuum, namely, about Pa (VUV-PES, SX-PES, and XAS), about Pa (fracturing for HX-PES), and about Pa (measurement for HX-PES), all at 300 K. The intensity of the resonant PES spectra was normalized using the photon current of the exit mirror. The energy resolution was 30 meV (VUV-PES), 140 meV (SX-PES), and 250 meV (HX-PES). All the Fermi-level ($E_F$) positions in the experiments were calibrated with Au spectra.

We also performed band-structure calculations with the full-potential linearized augmented plane-wave (FLAPW) methodAndersen (1975); *Takeda79 in the LDA+$U$ scheme.Hohenberg and W. Kohn (1964); *Kohn65; *Vosko80; Anisimov et al. (1991); *Solovyev96; *Anisimov97 For the effective Coulomb repulsion $U$, relatively small values (2.0 eV for Cu and Cr) were adopted. The rhombic lattice parameters ( Å, Å) were taken from Ref. Poienar et al., 2009. The plane-wave cutoff energy was 653 eV for the wave function. We took 1313 points in the irreducible Brillouin zone for the rhombohedral Brillouin zone.Maignan et al. (2009) Although the system is known to be antiferromagnetic,Okuda et al. (2005) the magnetic structure was assumed to be ferromagneticMaignan et al. (2009) because the detailed magnetic structure is not experimentally well determined.Poienar et al. (2009)

## III Results

### III.1 Experimental valence-band electronic structure compared with band-structure calculations

When the photon energy comes near the $3p$–$3d$ (or $2p$–$3d$) excitation threshold, resonant behaviors appear in the intensity of the valence-band photoemission due to the interference between the direct () and indirect [ (or ) ] processes. This is called $3p$–$3d$ (or $2p$–$3d$) resonant photoemission, which can be used to extract the $3d$ contribution of a specific element to the valence band. Figure 1 shows the valence-band spectra of CuCr$_{1-x}$Mg$_x$O$_2$ taken with a series of photon energies around the Cr $3p$–$3d$ [Fig. 1(a)] and the Cu $3p$–$3d$ [Fig. 1(b)] resonance. One can easily observe that the intensity of the near-$E_F$ leading structures, namely, the shoulder at 1.4 eV and the peak at 2.3 eV, systematically varies with incident photon energy. This intensity evolution is displayed in Figs. 1(c) and 1(d) as the constant-initial-state (CIS) spectra at the binding energy ($E_B$) of 1.4 eV [Fig. 1(c)] and 2.3 eV [Fig. 1(d)], respectively.
To remove the background intensity from the CIS spectra as taken (filled symbols),Li et al. (1992) we also show the CIS spectra after subtracting the background by the Shirley method (open symbols).Shirley (1972) Figure 1(c) shows that the 1.4-eV shoulder exhibits a distinct resonance-type line shapeFano (1961) with the maximum intensity at 50.0 eV, which is the Cr $3p$–$3d$ resonance energy.Li et al. (1992) In contrast, the 2.3-eV peak shows a typical weak antiresonance-type line shape with a dipFano (1961) at the Cu $3p$–$3d$ resonance energy of 74.0 eV, as shown in Fig. 1(d).Thuler et al. (1982) However, one also notices that a weak resonance of the 2.3-eV peak does exist at 50.0 eV and a tiny antiresonance of the 1.4-eV shoulder at 74.0 eV. These observations clearly demonstrate that (1) the 1.4-eV shoulder includes a major contribution of the Cr $3d$ states with a minor contribution of the Cu $3d$ states, and vice versa for the 2.3-eV peak, and (2) nevertheless there exists sizable hybridization between the Cr $3d$ and Cu $3d$ states via O $2p$ states.

The major contribution of the Cr $3d$ states to the 1.4-eV shoulder is also confirmed by a $2p$–$3d$ resonant PES measurement, as shown in Fig. 2. Figure 2(a) demonstrates that the 1.4-eV shoulder at eV (off resonance) rapidly grows to an intense peak at eV (on resonance) with increasing photon energy. Accordingly, the on-off difference spectrum, representing the Cr $3d$ partial DOS, has a sharp peak at 1.4 eV [panel (b)]. From the above results, the schematic energy diagram is that the Cr $3d$ is at the top of the valence band, the next is Cu $3d$, and then the O $2p$ states come, in order of binding energy. This conclusion is different from recent PESArnold et al. (2009) or opticalHiraga et al. (2011) studies, both of which concluded that the Cu $3d$ states are located at the top of the valence band. The origin of this difference will be discussed later in relation to band-structure calculations. The present result is reasonable also from the viewpoint of the O $2p$–TM $3d$ charge-transfer energy, because the location of the Cr $3d$ states and the Cu $3d$ states, hybridizing with each other via O $2p$ states in this compound, is governed by the difference of the respective charge-transfer energies,Iwasawa et al. (2006) and one would be larger than the other even considering the different valence and local configurations.Fujimori et al. (1993); Saitoh et al. (1995)

In order to analyze the valence-band electronic structure in more detail, we performed LDA+$U$ band-structure calculations. Figure 3 shows the result of our LDA+$U$ calculations. The Cu $3d$ partial DOS has intense peaks between and eV with small Cr $3d$ partial DOS in this range, whereas the Cr $3d$ partial DOS exhibits a considerably large peak centered at about eV, distributed from the top of the valence band to eV with small Cu $3d$ partial DOS in this range. Here, it is noted that the calculated Cr $3d$ partial DOS is in good agreement with the experimental Cr $3d$ spectral weight in Fig. 2(b). The O $2p$ bands are mainly located below the Cr $3d$ and Cu $3d$ bands, from to eV. All the $s$ states, Cu $4s$, Cr $4s$, and O $2s$, show very small DOS in the entire energy range. The present calculation, particularly on the location of the Cu/Cr $3d$ partial DOS, agrees well with the experimental result shown in Fig. 1 and with the interpretation/prediction using the difference of the charge-transfer energies as well.Iwasawa et al. (2006) The agreement between our experiment and calculation is demonstrated more clearly in Fig.
4, which shows a comparison between the experimental spectrum of CuCr$_{1-x}$Mg$_x$O$_2$ taken at eV and the calculated DOS.Note2 () A theoretical simulation curve has been constructed by broadening the cross-section-weighted total DOS with an energy-dependent Lorentzian function due to the lifetime effect and a Gaussian due to the experimental resolution.Saitoh et al. (1997); Iwasawa et al. (2009); Yeh and Lindau (1985); Note3 () This theoretical spectrum shows that the leading structure at the top of the valence band (labeled as ) is dominated by the Cr $3d$ states with a minor contribution of the Cu $3d$ states, whereas the most intense peak (labeled as ) primarily originates from the Cu $3d$ states. In both structures, appreciable O $2p$ DOS exists as well because of its large photoionization cross section.Yeh and Lindau (1985) One can see that the theoretical spectrum satisfactorily reproduces the experimental one, and thus the experimental structures A to F can be assigned to the theoretical structures to , respectively.

Our calculation agrees well with the calculation by Maignan et al.,Maignan et al. (2009) while it is different from Scanlon et al.Scanlon et al. (2009) or Hiraga et al.Hiraga et al. (2011) However, we note that the spectrum by Scanlon et al. and Arnold et al. can simply be interpreted by our calculation as a development of the Cr $3d$ states by Cr substitution for Al.Scanlon et al. (2009); Arnold et al. (2009) Hence, we consider that their experiment is actually consistent with ours. On the other hand, Hiraga et al. consistently interpreted their optical absorption spectra using their band-structure calculations.Hiraga et al. (2011) However, optical absorption spectroscopy is an indirect probe of the valence-band electronic structure because it gives the joint DOS. While we (and Maignan et al.) have assumed the ferromagnetic state in the calculations, we believe that the different Cr $3d$ partial DOS does not come from the different magnetic structures, because both Scanlon et al. and Hiraga et al. have calculated antiferromagnetic states by the same generalized gradient approximation + $U$ (GGA+$U$) method, resulting in quite different Cr $3d$ partial DOSs. The differences in the two calculations probably originate from the fact that Scanlon et al. adopted theoretically optimized lattice parameters and Hiraga et al. set the $U$ value for the Cu $3d$ states to zero. Our result is also supported by another band-structure calculation of CuAl$_{1-x}$Cr$_x$O$_2$ that reported the same energetic order of the Cr $3d$ and the Cu $3d$ states as ours, namely, that the Cr $3d$ states come to the top of the valence band upon Cr doping.Kizaki et al. (2005)

### III.2 Cu and Cr valence

Figure 5(a) shows the valence-band photoemission spectra of CuCr$_{1-x}$Mg$_x$O$_2$ taken across the resonant energies of the Cu satellite structures. There can be observed two distinct satellite peaks at binding energies of 13 and 15 eV, which have their maxima at photon energies of 74 and 77 eV, respectively, as shown in Fig. 5(b). These numbers are in very good agreement with the reported satellite peaks in CuO (12.5–12.9 eV) and Cu$_2$O (15.3 eV), which have mainly and final-state character, respectively.Thuler et al. (1982); Ghijsen et al. (1990); Shen et al. (1990) The 15-eV satellite peak has also been observed in Al $K\alpha$ XPS spectra of CuAlO$_2$ and CuCrO$_2$.Arnold et al. (2009) Figure 5(c) shows the CIS spectra of these two satellite peaks. The CIS profiles of the satellites again well reproduce those of CuO and Cu$_2$O, respectively, including the two-peak structure due to the $3p_{3/2}$–$3p_{1/2}$ splitting.Thuler et al. (1982); Ghijsen et al.
(1990) All these results indicate that the doped holes in CuCr$_{1-x}$Mg$_x$O$_2$ produce Cu$^{2+}$ ions, namely, that holes will be doped into the Cu sites. However, this observation seems to be incompatible with the result that the top of the valence band has mainly the Cr $3d$ character, demonstrated in Figs. 1–4. Moreover, the 13-eV satellite due to Cu$^{2+}$ seems to be too intense for only 3% doping of Mg, which corresponds to 3% Cu$^{2+}$ ions. To confirm this observation, we performed Cu $2p$–$3d$ resonant photoemission spectroscopy measurements, as shown in Fig. 6. The excitation energies were determined by the Cu $L$-edge XAS spectra shown in Fig. 6(d).Note4 () Figure 6(a) shows the valence-band spectra of the sample taken in the Cu $2p$–$3d$ resonance region. The giant resonance peak at 15 eV is due to Cu$^{+}$ ions, as seen in Fig. 5 and as reported for Cu$_2$O.Tjeng et al. (1992) Figures 6(b) and 6(c) show the spectra taken at photon energies before the giant resonance develops. In Fig. 6(c), the spectrum shows the distinct 13-eV resonant peak of Fig. 5 at eV, which corresponds to the photon energy of the pre-peak structure in Fig. 6(d). This hump has been observed in some Cu$_2$O samples (Refs. Hulbert et al., 1984; Grioni et al., 1989), CuAlO$_2$ (Ref. Aston et al., 2005) and CuCrO$_2$ (Ref. Arnold et al., 2009) but has not been observed in pure Cu$_2$O,Grioni et al. (1992) and it is accordingly interpreted as a $\underline{c}\,d^{10}$ final state due to Cu$^{2+}$ impurities,Grioni et al. (1992); Aston et al. (2005); Arnold et al. (2009) where $\underline{c}$ denotes a core hole of the Cu $2p$ level. Therefore, both the Cu $2p$–$3d$ resonant photoemission and the Cu $L$-edge XAS spectra of the sample clearly demonstrate the Cu nature of the doped holes observed in Fig. 5.

Surprisingly, however, Fig. 6(b) shows that the $x=0$ sample, too, has the 13-eV satellite. This can never be due to Cu$^{2+}$ impurities because the Cu XAS spectrum has no appreciable prepeak [see Fig. 6(d)]. Here, we note that the very slight modulation from the baseline at the prepeak of the $x=0$ spectrum cannot explain the large 13-eV resonance peak, because the Cu$^{2+}$ impurity concentration in the sample, if it exists, can be estimated to be a few percent at most by a comparison with the reported relation between the concentration and the prepeak intensity in CuAlO$_2$.Aston et al. (2005) Therefore, it can be undoubtedly concluded that some kind of state that does not originate from Cu$^{2+}$ impurities should exist even in pure CuCrO$_2$, and based on this fact, one may further go beyond the $x=0$ case and arrive at the idea that the whole portion of a doped hole may not necessarily go into a Cu site, even though the 13-eV satellite is observed.

Figure 7 shows the Cu $2p$ core-level spectra of CuCrO$_2$ and CuCr$_{1-x}$Mg$_x$O$_2$. The spectrum in panel (a) is almost identical to the reported spectra of CuCrO$_2$ (Refs. Arnold et al., 2009; Le et al., 2011) and also CuAlO$_2$.Aston et al. (2005) There is no trace of structures at 934 eV due to the Cu$^{2+}$ state which, if present, can easily be identified, as is the case for oxidized CuAl$_{1-x}$Zn$_x$O$_2$ or CuRh$_{1-x}$Mg$_x$O$_2$.Aston et al. (2005); Le et al. (2011) A reported energy shift of the Cu $2p$ peak due to Mg dopingArnold et al. (2009) was not observed, and the doped spectrum is almost identical to that of $x=0$, which is very similar to what was observed in CuAl$_{1-x}$Zn$_x$O$_2$.Aston et al. (2005) This fact raises doubt about the Cu nature of a doped hole. Nevertheless, a small but important change due to Mg doping can be observed in Fig. 7(b); the Cu $2p$ line shape becomes asymmetrically broad.
A Doniach-Šunjić line-shape analysisDoniach and Šunjić (1970) has confirmed a large increase in the asymmetry with hole doping, which reflects an increase in the metallicity of the system, particularly on the Cu sites.Note5 () Hence, this small change again suggests the Cu nature of a doped hole.

Figure 8 shows the Cr $2p$ core-level spectra of CuCrO$_2$ and CuCr$_{1-x}$Mg$_x$O$_2$. The double-peak structure observed in the Cr $2p_{3/2}$ peak of both samples is characteristic of Cr$^{3+}$ compounds. Both spectra in panel (a) are indeed very similar to those of Cr$_2$O$_3$ and CrN.Biesinger et al. (2004); Bhobe et al. (2010) The $x=0$ and 0.02 spectra are very similar to each other, displaying the Cr$^{3+}$ nature. However, the Cr $2p_{3/2}$ peak shows a remarkable change due to Mg doping; the first peak at 575 eV obviously decreases in intensity with Mg doping. A very similar change has recently been observed in CrN across its insulator-metal transition, and it has been explained by screening effects due to mobile carriers.Bhobe et al. (2010) Therefore, the observed change is likely evidence that doped holes move around the Cr sites, suggesting the Cr nature of a doped hole. This result is consistent with the valence-band satellite analyses in Figs. 5 and 6. Nevertheless, all three Cr $L$-edge XAS spectra in Fig. 8(b) are very similar to the reported spectra of LaCrO$_3$ and Cr$_2$O$_3$,Sarma et al. (1996); Matsubara et al. (2002) indicating that the Cr ions are trivalent. Unlike the Cu $L$ edge, the Cr $L$-edge XAS spectra show no detectable changes with hole doping like those observed for La$_{1-x}$Sr$_x$CrO$_3$.Sarma et al. (1996)

## IV Discussion

It is already established by now that the ground-state electron configuration of CuO is not a simple $d^9$, but $d^9 + d^{10}\underline{L}$, while that of Cu$_2$O is described as $d^{10} + d^9s$, where $\underline{L}$ denotes an O $2p$ ligand hole; the $d^{10}$ configuration of the Cu$^{+}$ ion should be spherical, but it was pointed out long ago that the charge distribution in Cu$_2$O can be non-spherical due to the hybridization between the $3d_{3z^2-r^2}$ orbital (the $z$ axis along the Cu-O bonding) and the $4s$ orbital,Orgel (1958) and this has been discussed theoretically later.Marksteiner et al. (1986) This hybridization yields a $3d$ hole, and hence the ground state of Cu$_2$O should have the $d^9s$ component. The $d$ hole state has recently been directly observed,Zuo et al. (1999) confirming the interpretation of the satellite structures at 15 eV in Cu$_2$O and at 13 eV in CuO.Thuler et al. (1982); Ghijsen et al. (1990); Shen et al. (1990) The situation in CuCrO$_2$ is quite analogous to Cu$_2$O because the local environment around Cu is the same O-Cu-O dumbbell structure, and therefore it is not surprising that the ground state has the $d^9s$ component. What is striking in our results is that even the sample with no Cu$^{2+}$ impurity centers has shown a weak but detectable 13-eV satellite (Fig. 6). This inevitably indicates that not a "virtual" state but a "real" state has to exist in CuCrO$_2$. However, the Cu $2p$ core-level spectra do not show any trace of such a configuration, even for the hole-doped samples. Nevertheless, the development of the Cu pre-peak structure with $x$, again, undoubtedly demonstrates that this configuration increases with $x$. On the other hand, the doped hole should have the Cr character judging from the Cr $2p$ HX-PES spectra, while the Cr $L$-edge XAS spectra show no detectable changes. To understand the above contradictory results, we reconsider the local electronic structure of the Cu site beyond the nearest-neighbor oxygens, namely, we consider the two metal sites, Cu and Cr, because their wave functions are actually connected via the O $2p$ wave functions.
Within a metal-oxygen single-cluster model (CuO$_2$ and CrO$_6$ for the Cu and the Cr sites, respectively), the local electronic configuration of Cu can be described as $d^{10} + d^9s$, whereas that of Cr will be $d^3 + d^4\underline{L}$.Saitoh et al. (1995); Uozumi et al. (1997) Although the molecular orbitals of the Cu and the Cr sites in fact have different symmetries, there should be sizable overlap between some of them, as discussed in Fig. 1. Hence, we consider the Cu-O-Cr cluster and re-define $\underline{L}$ as an O $2p$ ligand hole in a molecular orbital of this cluster. In this model, the combination of the $d^9s$ configuration of Cu and the $d^4\underline{L}$ configuration of Cr can produce the $d^9$ and $d^4$ configurations at the Cu and Cr sites, respectively, because of the extended nature of the Cu $4s$ state. Hence, the ground state can be described as

$|g\rangle = \alpha|d^{10}d^3\rangle + \beta|d^9s\,d^3\rangle + \gamma|d^9d^4\rangle + \delta|d^{10}\underline{L}\,d^4\rangle,$

where the left $d^{10}$ and $d^9$ denote the Cu $3d$ states, $s$ denotes the Cu $4s$ state, and the right $d^3$ and $d^4$ denote the Cr $3d$ states. $|d^{10}d^3\rangle$ is the main configuration, $|d^9s\,d^3\rangle$ corresponds to the $3d$ hole state, $|d^9d^4\rangle$ is the Cu $3d$–to–Cr $3d$ charge-transfer state, and finally $|d^{10}\underline{L}\,d^4\rangle$ originates from the O $2p$–to–Cr $3d$ charge-transfer state, which is the second main configuration. The $|d^9s\,d^4\underline{L}\rangle$ configuration is not included because this is the origin of the $|d^9d^4\rangle$ configuration. The final state of the valence-band photoemission by Cu $3d$ emission is

$|f^{v}_{\mathrm{Cu}}\rangle = a|d^{10}\underline{L}\,d^3\rangle + b|d^9d^3\rangle + c|d^8s\,d^3\rangle + d|d^8d^4\rangle.$

Here, $|d^9\underline{L}\,d^4\rangle$ is neglected because this configuration will easily transform into $|d^{10}\underline{L}\,d^3\rangle$ due to the combination of one extra electron at the Cr site and the lack of one electron at the Cu site. For the Cu $2p$ core-level photoemission, the final state will be

$|f^{c}_{\mathrm{Cu}}\rangle = a|\underline{c}\,d^{10}d^3\rangle + b|\underline{c}\,d^9s\,d^3\rangle + c|\underline{c}\,d^{10}\underline{L}\,d^4\rangle + d|\underline{c}\,d^9d^4\rangle,$

and for the Cu $L$-edge XAS, the final state will be

$|f^{L}_{\mathrm{Cu}}\rangle = a|\underline{c}\,d^{10}s\,d^3\rangle + b|\underline{c}\,d^{10}d^4\rangle + c|\underline{c}\,d^{10}s\underline{L}\,d^4\rangle,$

where $\underline{c}$ denotes a Cu $2p$ core hole. Within this framework, the Cu $3p$–$3d$ and $2p$–$3d$ resonant photoemission spectra can have both the $d^8s$-like (at 15 eV) and $d^8$-like (at 13 eV) final-state satellites. This scenario even predicts that CuAlO$_2$ will not have the 13-eV satellite because there are no available Al $3d$ states in the valence band, and indeed, an XPS spectrum of CuAlO$_2$ shows a dip around 13 eV, while that of CuCrO$_2$ has extra spectral weight there,Arnold et al. (2009) supporting the scenario. The absence of the final-state satellite in the Cu $2p$ core-level spectra can be explained by strong screening effects due to the presence of a core hole at the Cu site: the large core-hole Coulomb attraction increases the number of $3d$ electrons and accordingly makes the $|\underline{c}\,d^9s\,d^3\rangle$ and $|\underline{c}\,d^9d^4\rangle$ weight negligible even for the lightly hole-doped samples. Likewise, the lack of the pre-peak structure in the Cu $L$-edge XAS spectrum of the $x=0$ sample can also be explained by the core-hole screening effects, which reduce the weight of the corresponding configuration(s) [Fig. 6(b)].

From the above considerations, there must be a weak but finite Cu $4s$ spectral weight at the top of the valence band, and this can actually be observed; Figure 9 shows the HX-PES valence-band spectra of the $x=0$ and 0.03 samples. Considering that the photoionization cross section of $s$ states in this energy range is largely enhanced,Yeh and Lindau (1985) the small enhancement very near $E_F$ (see the inset) can be interpreted as an increase in the Cu $4s$ emission due to hole doping, namely, supporting a finite Cu $4s$ spectral weight at the top of the valence band. It is accordingly revealed that the very top of the valence band actually has the Cu character in addition to the Cr character.
This Cu–Cr duality of a doped hole can explain the observed magnetic and transport properties of CuCr$_{1-x}$Mg$_x$O$_2$; the doped holes moving in the Cr-O network can lift the magnetic frustration in the Cr triangular spin lattice, resulting in an increase in the magnetic susceptibility with $x$.Okuda08 () The holes that are not restricted to the Cu-O network also explain the higher electrical conductivity compared with other hole-doped Cu delafossites such as CuAlO$_2$.Nagarajan et al. (2001) In particular, the fact that the highest conductivity is obtained by selecting Cr strikingly demonstrates the importance of the Cu-Cr combination.Nagarajan et al. (2001) From the viewpoint of the electronic structure, this can be interpreted as a consequence of an "appropriate" combination in terms of the difference of the charge-transfer energies of Cu$M$O$_2$.Iwasawa et al. (2006)

## V Conclusions

We have studied the electronic structure of the hole-doped delafossite oxides CuCr$_{1-x}$Mg$_x$O$_2$ by high-resolution photoemission spectroscopy, x-ray absorption spectroscopy, and LDA+$U$ band-structure calculations. The Cr and Cu $3p$–$3d$ resonant PES spectra demonstrated that the leading structure of the valence band near $E_F$ has primarily the Cr $3d$ character with a minor contribution from the Cu $3d$ due to hybridization with the O $2p$ states, in good agreement with the band-structure calculation. This result indicates that a doped hole will primarily have the Cr $3d$ character. The Cr $2p$ PES and $L$-edge XAS spectra of CuCr$_{1-x}$Mg$_x$O$_2$ showed typical Cr$^{3+}$ features, whereas the Cu $L$-edge XAS spectra exhibited a systematic change with $x$. This result, by contrast, indicates that the Cu valence is monovalent at $x=0$ and that the holes will be doped into the Cu sites, which contradicts the Cr and Cu $3p$–$3d$ resonant PES. Nevertheless, the Cu $2p$–$3d$ resonant PES spectra display the two types of charge-transfer satellites that should be attributed to Cu$^{2+}$- and Cu$^{+}$-like initial states, while the Cu $2p$ PES undoubtedly shows the Cu$^{+}$ character even for the lightly doped samples. We have proposed that the above apparently contradictory results can consistently be understood by introducing not only the Cu $d^9s$ state, as traditionally done, but also a finite Cu $3d$–Cr $3d$ charge transfer via O $2p$ states into the ground-state electronic configuration. We found that this model can explain well some of the characteristic magnetic and transport properties of this compound.

###### Acknowledgements.

The authors would like to thank T. Mizokawa for enlightening discussions. The synchrotron radiation experiments at the Photon Factory and SPring-8 were performed under the approval of the Photon Factory (Proposal Numbers 2008G688, 2010G655, 2009S2-005, and 2011S2-003) and of the Japan Synchrotron Radiation Research Institute (Proposal Numbers 2011A1624, 2011B1710, and 2012B1003), respectively. This work was supported by JSPS KAKENHI Grants No. 22560786 and No. 23840039. This work was also granted by JSPS the "Funding Program for World-Leading Innovative R&D on Science and Technology (FIRST Program)", initiated by the Council for Science and Technology Policy (CSTP).

## References

• Okuda et al. (2005) T. Okuda, N. Jufuku, S. Hidaka, and N. Terada, Phys. Rev. B 72, 144403 (2005).
• Terasaki et al. (1997) I. Terasaki, Y. Sasago, and K. Uchinokura, Phys. Rev. B 56, R12685 (1997).
• Takeuchi et al. (2004) T. Takeuchi, T. Kondo, T. Takami, H. Takahashi, H. Ikuta, U. Mizutani, K. Soda, R. Funahashi, M. Shikano, M. Mikami, S. Tsuda, T. Yokoya, S. Shin, and T. Muro, Phys. Rev. B 69, 125410 (2004).
• (4) The doped hole amount is $x$.
• Iwasawa et al. (2006) H. Iwasawa, K. Yamakawa, T. Saitoh, J. Inaba, T. Katsufuji, M. Higashiguchi, K.
Shimada, H. Namatame, and M. Taniguchi, Phys. Rev. Lett. 96, 067203 (2006).
• Kuroki and Arita (2007) K. Kuroki and R. Arita, J. Phys. Soc. Jpn. 76, 083707 (2007).
• Nagarajan et al. (2001) R. Nagarajan, A. Draeseke, A. Sleight, and J. Tate, J. Appl. Phys. 89, 8022 (2001).
• Kimura et al. (2006) T. Kimura, J. C. Lashley, and A. P. Ramirez, Phys. Rev. B 73, 220401(R) (2006).
• Seki et al. (2008) S. Seki, Y. Onose, and Y. Tokura, Phys. Rev. Lett. 101, 067204 (2008).
• Hamberg and Granqvist (1986) I. Hamberg and C. G. Granqvist, J. Appl. Phys. 60, R123 (1986).
• Kawazoe et al. (1997) H. Kawazoe, M. Yasukawa, H. Hyodo, M. Kurita, H. Yanagi, and H. Hosono, Nature 389, 939 (1997).
• Scanlon et al. (2009) D. O. Scanlon, A. Walsh, B. J. Morgan, G. W. Watson, D. J. Payne, and R. G. Egdell, Phys. Rev. B 79, 035101 (2009).
• Arnold et al. (2009) T. Arnold, D. J. Payne, A. Bourlange, J. P. Hu, R. G. Egdell, L. F. J. Piper, L. Colakerol, A. De Masi, P.-A. Glans, T. Learmonth, K. E. Smith, J. Guo, D. O. Scanlon, A. Walsh, B. J. Morgan, and G. W. Watson, Phys. Rev. B 79, 075102 (2009).
• Hiraga et al. (2011) H. Hiraga, T. Makino, T. Fukumura, H. Weng, and M. Kawasaki, Phys. Rev. B 84, 041411(R) (2011).
• Maignan et al. (2009) A. Maignan, C. Martin, R. Frésard, V. Eyert, E. Guilmeau, S. Hébert, M. Poienar, and D. Pelloquin, Solid State Commun. 149, 962 (2009).
• Ono et al. (2007) Y. Ono, K. Satoh, T. Nozaki, and T. Kajitani, Jpn. J. Appl. Phys. 46, 1071 (2007).
• Andersen (1975) O. K. Andersen, Phys. Rev. B 12, 3060 (1975).
• Takeda and Kubler (1979) T. Takeda and J. Kubler, J. Phys. F: Met. Phys. 9, 661 (1979).
• Hohenberg and Kohn (1964) P. Hohenberg and W. Kohn, Phys. Rev. 136, B864 (1964).
• Kohn and Sham (1965) W. Kohn and L. J. Sham, Phys. Rev. 140, A1133 (1965).
• Vosko et al. (1980) S. H. Vosko, L. Wilk, and M. Nusair, Can. J. Phys. 58, 1200 (1980).
• Anisimov et al. (1991) V. I. Anisimov, J. Zaanen, and O. K. Andersen, Phys. Rev. B 44, 943 (1991).
• Solovyev et al. (1996) I. Solovyev, N. Hamada, and K. Terakura, Phys. Rev. B 53, 7158 (1996).
• Anisimov et al. (1997) V. I. Anisimov, F. Aryasetiawan, and A. I. Lichtenstein, J. Phys.: Condens. Matter 9, 767 (1997).
• Poienar et al. (2009) M. Poienar, F. Damay, C. Martin, V. Hardy, A. Maignan, and G. André, Phys. Rev. B 79, 014412 (2009).
• Li et al. (1992) X. Li, L. Liu, and V. E. Henrich, Solid State Commun. 84, 1103 (1992).
• Shirley (1972) D. A. Shirley, Phys. Rev. B 5, 4709 (1972).
• Fano (1961) U. Fano, Phys. Rev. 124, 1866 (1961).
• Thuler et al. (1982) M. R. Thuler, R. L. Benbow, and Z. Hurych, Phys. Rev. B 26, 669 (1982).
• Fujimori et al. (1993) A. Fujimori, A. E. Bocquet, T. Saitoh, and T. Mizokawa, J. Electron Spectrosc. Relat. Phenom. 62, 141 (1993).
• Saitoh et al. (1995) T. Saitoh, A. E. Bocquet, T. Mizokawa, and A. Fujimori, Phys. Rev. B 52, 7934 (1995).
• (32) A spectrum is compared with theory because of the lower quality of spectra in this photon energy range.
• Saitoh et al. (1997) T. Saitoh, T. Mizokawa, A. Fujimori, M. Abbate, Y. Takeda, and M. Takano, Phys. Rev. B 55, 4257 (1997).
• Iwasawa et al. (2009) H. Iwasawa, S. Kaneyoshi, K. Kurahashi, T. Saitoh, I. Hase, T. Katsufuji, K. Shimada, H. Namatame, and M. Taniguchi, Phys. Rev. B 80, 125122 (2009).
• Yeh and Lindau (1985) J. J. Yeh and I. Lindau, At. Data Nucl. Data Tables 32, 1 (1985).
• (36) The binding energy ($E_B$) dependent Lorentzian FWHM was set to be (eV).
• Kizaki et al. (2005) H. Kizaki, K. Sato, A. Yanase, and H. Katayama-Yoshida, Jpn. J. Appl. Phys.
44, L1187 (2005).
• Ghijsen et al. (1990) J. Ghijsen, L. H. Tjeng, H. Eskes, G. A. Sawatzky, and R. L. Johnson, Phys. Rev. B 42, 2268 (1990).
• Shen et al. (1990) Z.-X. Shen, R. S. List, D. S. Dessau, F. Parmigiani, A. J. Arko, R. Bartlett, B. O. Wells, I. Lindau, and W. E. Spicer, Phys. Rev. B 42, 8081 (1990).
• (40) Because of a problem of energy calibration, our photon energy of the Cu XAS spectra is a little different from the reported values for CuCrO$_2$ or Cu$_2$O (Refs. Arnold09, Grioni89-1, Grioni92, Tjeng92).
• Tjeng et al. (1992) L. H. Tjeng, C. T. Chen, and S.-W. Cheong, Phys. Rev. B 45, 8205 (1992).
• Hulbert et al. (1984) S. L. Hulbert, B. A. Bunker, F. C. Brown, and P. Pianetta, Phys. Rev. B 30, 2120 (1984).
• Grioni et al. (1989) M. Grioni, J. B. Goedkoop, R. Schoorl, F. M. F. de Groot, J. C. Fuggle, F. Schäfers, E. E. Koch, G. Rossi, J.-M. Esteva, and R. C. Karnatak, Phys. Rev. B 39, 1541 (1989).
• Aston et al. (2005) D. J. Aston, D. J. Payne, A. J. H. Green, R. G. Egdell, D. S. L. Law, J. Guo, P. A. Glans, T. Learmonth, and K. E. Smith, Phys. Rev. B 72, 195115 (2005).
• Grioni et al. (1992) M. Grioni, J. F. van Acker, M. T. Czyzyk, and J. C. Fuggle, Phys. Rev. B 45, 3309 (1992).
• Le et al. (2011) T. K. Le, D. Flahaut, H. Martinez, N. Andreu, D. Gonbeau, E. Pachoud, D. Pelloquin, and A. Maignan, J. Solid State Chem. 184, 2387 (2011).
• Doniach and Šunjić (1970) S. Doniach and M. Šunjić, J. Phys. C 3, 285 (1970).
• (48) The derived asymmetric parameter was () and ().
• Biesinger et al. (2004) M. C. Biesinger, C. Brown, J. R. Mycroft, R. D. Davidson, and N. S. McIntyre, Surf. Interface Anal. 36, 1550 (2004).
• Bhobe et al. (2010) P. A. Bhobe, A. Chainani, M. Taguchi, T. Takeuchi, R. Eguchi, M. Matsunami, K. Ishizaka, Y. Takata, M. Oura, Y. Senba, H. Ohashi, Y. Nishino, M. Yabashi, K. Tamasaku, T. Ishikawa, K. Takenaka, H. Takagi, and S. Shin, Phys. Rev. Lett. 104, 236404 (2010).
• Sarma et al. (1996) D. D. Sarma, K. Maiti, E. Vescovo, C. Carbone, W. Eberhardt, O. Rader, and W. Gudat, Phys. Rev. B 53, 13369 (1996).
• Matsubara et al. (2002) M. Matsubara, T. Uozumi, A. Kotani, Y. Harada, and S. Shin, J. Phys. Soc. Jpn. 71, 347 (2002).
• Orgel (1958) L. E. Orgel, J. Chem. Soc. 4186 (1958).
• Marksteiner et al. (1986) P. Marksteiner, P. Blaha, and K. Schwarz, Z. Phys. B 64, 119 (1986).
• Zuo et al. (1999) J. M. Zuo, M. Kim, M. O'Keeffe, and J. C. H. Spence, Nature 401, 49 (1999).
• Uozumi et al. (1997) T. Uozumi, K. Okada, A. Kotani, R. Zimmermann, P. Steiner, S. Hüfner, Y. Tezuka, and S. Shin, J. Electron Spectrosc. Relat. Phenom. 83, 9 (1997).
• (57) T. Okuda, Y. Beppu, Y. Fujii, T. Onoe, N. Terada, and S. Miyasaka, Phys. Rev. B 77, 134423 (2008); T. Okuda, R. Kajimoto, M. Okawa, and T. Saitoh, Int. J. Mod. Phys. B 27, 1330002 (2013).
2020-08-13 17:00:12
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8001092672348022, "perplexity": 1605.7819871141276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739048.46/warc/CC-MAIN-20200813161908-20200813191908-00360.warc.gz"}
http://www.lenstrnad.com/blog/2018/08/DietNetworks
# Diet Networks: Thin Parameters for Fat Genomics1

Diet Networks is a deep learning approach to predicting ancestry using genomic data. The number of free parameters in a neural network depends on the input dimension. The dimension of genomic data tends to be greater than the number of observations by three orders of magnitude. The model proposes an alternative approach to a fully connected network that reduces the number of free parameters significantly.

Summary:

• Discuss neural networks and deep learning
• Discuss genomic data and motivate the approach of Diet Networks
• Discuss the Diet Network architecture
• Discuss the TensorFlow implementation and results

## Neural Network and Deep Learning

• Neural networks are represented as graphical structures
• The weights, $w_i$, are the free parameters and are learned through maximum likelihood estimation and back propagation.
• This structure can be used to represent: Linear Regression, Multivariate Regression, Binomial Regression, Softmax Regression
• Nodes following the input layer are computed with an activation function

## What about the notion of Deep Learning?

• Adding hidden layers allows the model to learn a 'deeper' representation.
• The Universal Approximation Theorem: a network with a single hidden layer and a non-linear activation function can approximate any continuous function over a compact subset of $\mathbb{R}^n$.
• The parameters of the model can be represented as matrices.

## Representation Learning

• We want to learn a new representation of the data such that the discrimination task becomes linear in the new space.

### Example:

(Image above borrowed from here)

• Non-linear activation functions allow the model to learn this discriminating function as a linear function in a new feature space.

(Image above borrowed from here)

• Nodes in the hidden layers with non-linear activation functions are represented as $h_j = \phi(x^t w_{(j, \cdot)})$ where $\phi$ is the non-linear activation function.
• The new representation of $\mathbf{x}$ is then represented as $\mathbf{h} = \phi(x^t \mathbf{W})$.
• The algorithm essentially explores weight matrices, $\mathbf{W}$, that are in the path of gradient descent.
• These weight matrices construct the hypothesis space of functions considered in the function approximation task.

## Convolutional Layers

"Deep" learning began with convolutional neural networks. The main idea is to convolve a single small network (a filter) across an image or audio signal. Navigate here for arithmetic or here for visualization.

(Image borrowed from here)

• Demonstrates the convolving of a kernel or neural network about the larger blue image to generate the "down-sampled" output in green.

(Image borrowed from here)

• Expresses how a convolutional layer can be represented by a matrix. Notice the reduction in learnable parameters.

Unfortunately, genomic data does not have an obvious relationship with neighboring entries in its sequence the way image or audio data does.

## Genomic Data

• The 1000 Genomes Project released the largest genomic data set, spanning 26 different populations.
• The data are roughly 150,000 single nucleotide polymorphisms (SNPs) for roughly 2500 people.
• SNPs are essentially genetic variations of nucleotides that occur at a significant frequency between populations.
• The goal is to classify the ancestry of an individual based on this SNP data.

## Diet Networks Structure

• Diet Networks proposes a fully connected network with two auxiliary networks.
• The main use of the auxiliary network is to predict the weights of the first layer in the discriminative network.

(Image taken from Diet Networks1*)

• A fully connected network with $p$-dimensional data will have a $(p \times n_h)$ weight matrix in the first layer of the discriminative network.
• If $n_h=100$, then we have 15,000,000 free parameters!
• The method proposed to predict the weight matrix will reduce this number significantly.

### Auxiliary Network for Encoding

• The auxiliary network for encoding predicts the weight matrix in the first layer of the discriminative network.
• note:
  • $X$ is of size $(n \times p)$
  • $X^T$ is of size $(p \times n)$
  • Let hidden layers have $n_h$ units
• The first layer of the discriminative network is represented by the weight matrix, $W_e$, which is $(p \times n_h)$.
• The first layer in the auxiliary network has a weight matrix, $W_e'$, with size $(n \times n_h)$.
• Then the output of the auxiliary network is $X^TW_e'=W_e$.
• $W_e$ has size $(p \times n) \times (n \times n_h) \rightarrow (p \times n_h)$.
• Thus, $W_e$ is the appropriate size for the first layer in the discriminative network.
• The final number of learnable parameters used to construct $W_e$ is $n \times n_h$

### Auxiliary Network for Decoding

• The same thing happens for the decoding auxiliary network.
• note:
  • $W_d = W_e$, which implies the transpose of $W_d$ has shape $(n_h \times p)$.
  • The output of the first MLP layer, $H$, in the discriminative network is $(n \times n_h)$.
  • Thus, $\hat{X} = HW_d^T$ gives $(n \times n_h) \times (n_h \times p) \rightarrow (n \times p)$, matching the shape of $X$.
• The reconstruction is used because it gives better results and helps with gradient flow.

### The Embedding Layer

• This implementation focuses on the histogram embedding.
• The histogram embedding is generated by calculating the frequency of each possible value {0,1,2} for each class {1,…,26} across each SNP {1,…,$p$}.
• This information is contained in a $(p \times 78)$ matrix, since 3 input types $\times$ 26 classes gives 78.
• This embedding is the input to a hidden layer which has $n_h$ nodes.
• Therefore, we will have a $(78 \times n_h)$ weight matrix to learn, but the corresponding output will be $(p \times n_h)$.

## TensorFlow Implementation and Results

• My TensorFlow implementation can be found here.
• The goal is to replicate the results of the paper.
• They provide information on the model such as
  • the number of hidden units and hidden layers
  • norm constraints on the gradients
• The paper does not specify
  • exactly how they regularize the parameters
  • if they used batch norm
  • if they used drop out
  • which activation functions were used
  • how they initialized the weights of the hidden layers
  • or which specific optimizers were used
• The goal of this implementation is to be specific about the regularization, weight initialization, and optimizers used.

### Regularization

Regularization is a way of preventing our model from overfitting. It helps decrease the generalization error.

• The paper specifies that they limit the norm of the gradients (gradient clipping).
• This implementation uses the following regularization techniques:
  • L2 norm on each weight matrix (like ridge regression)
  • gradient clipping (only back propagate when the gradient norm is less than a threshold)
  • weight initialization (use a distribution with mean zero and small variance)

### Batch Norm

• A batch is a subset of the data used for back propagation.
• Batch norm normalizes each batch when performing the forward pass to calculate the error.
• Prevents the distributions of each layer's inputs from drifting during training as a result of scale issues.
• This problem is known as internal covariate shift.

### Drop out

• Drop out is the process of randomly turning off neurons in the model.
• It allows each neuron the opportunity to "vote" and prevents a subset of neurons from taking over.
• It approximates ensemble learning and is computationally cheap.

### Activation Functions

• Each activation function has its own pros and cons.
• This implementation considers the tanh and relu non-linear activation functions.

### Optimizers

• Diet Networks specified only that they used a stochastic-gradient-descent back-propagation algorithm with an adaptive learning rate.
• This implementation considers the ADAM and RMSprop optimizers in the model selection process.

### TensorFlow Implementation

• The following diagram illustrates the structure of this TensorFlow implementation

The left structure represents the auxiliary network. The right structure represents the discriminative network.

• Wherever act_fun or w_init appears, the choice is left open for model selection.

### Model Selection

TensorFlow has a feature called TensorBoard which helps visualize learning. TensorBoard is a web app that displays specified summary statistics. In order to perform model selection, many models are constructed.

Models considered:

• Weight initialization using the Normal and Uniform distributions with standard deviations of 0.1 and 0.01
• tanh and relu activation functions
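To make the shape bookkeeping above concrete, here is a minimal NumPy sketch of the histogram embedding and the weight-prediction trick (toy sizes; all names are illustrative, and the single-layer auxiliary network is a simplification under stated assumptions, not the actual TensorFlow code from this implementation):

```python
import numpy as np

n, p, n_h, n_classes = 32, 500, 100, 26   # toy sizes; real p ~ 150,000
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(n, p)).astype(float)  # genotypes in {0,1,2}
y = rng.integers(0, n_classes, size=n)             # ancestry labels

# Histogram embedding: per SNP, the frequency of each genotype value
# within each class -> a (p, 3 * 26) = (p, 78) matrix.
emb = np.zeros((p, 3 * n_classes))
for c in range(n_classes):
    Xc = X[y == c]                        # samples belonging to class c
    if len(Xc):
        for v in range(3):
            emb[:, 3 * c + v] = (Xc == v).mean(axis=0)

# Auxiliary network (one layer here): predict the fat (p, n_h) first-layer
# weights from the embedding, so only (78, n_h) parameters are learned.
W_emb = rng.normal(0.0, 0.01, size=(3 * n_classes, n_h))  # learnable
W_e = np.tanh(emb @ W_emb)                # predicted weights, (p, n_h)

H = np.tanh(X @ W_e)                      # discriminative first layer, (n, n_h)
X_hat = H @ W_e.T                         # reconstruction path, (n, p)
print(W_e.shape, H.shape, X_hat.shape)    # (500, 100) (32, 100) (32, 500)
```

With $p = 150{,}000$ and $n_h = 100$, this replaces the 15,000,000 free parameters of a dense first layer with the 7,800 parameters of `W_emb`, which is the point of the method.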
2022-12-04 14:18:24
{"extraction_info": {"found_math": true, "script_math_tex": 38, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.813453733921051, "perplexity": 1362.1991350286237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710974.36/warc/CC-MAIN-20221204140455-20221204170455-00260.warc.gz"}
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=semr&paperid=965&option_lang=eng
Sib. Èlektron. Mat. Izv., 2018, Volume 15, Pages 906–926 (Mi semr965)

Real, complex and functional analysis

Approximate calculation of the defect of a Lipschitz cylindrical condenser

A. I. Parfenov

Sobolev Institute of Mathematics, pr. Koptyuga, 4, 630090, Novosibirsk, Russia

Abstract: We introduce the notion of defect of a Lipschitz cylindrical condenser. It is the difference between the capacity of the condenser and its Ahlfors integral. We calculate the defect approximately for condensers over arbitrary open sets. For a condenser over an inner uniform domain the quantity obtained is comparable to the sum of the squares of the seminorms of the plates in a weighted homogeneous Slobodetskii space. This uses the characterization of inner uniform domains by the following property: every inner metric ball is a centered John domain.

Keywords: Ahlfors integral, capacity, condenser, defect, inner uniform domain, Lipschitz domain.

Funding: Ministry of Education and Science of the Russian Federation, grant NSh-5913.2018.1

DOI: https://doi.org/10.17377/semi.2018.15.078

Full text: PDF file (246 kB)

Document Type: Article; UDC: 517.518; MSC: 31B15

Received July 6, 2018, published August 17, 2018

Citation: A. I. Parfenov, “Approximate calculation of the defect of a Lipschitz cylindrical condenser”, Sib. Èlektron. Mat. Izv., 15 (2018), 906–926
2019-01-20 12:27:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4033484160900116, "perplexity": 12081.750009642345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583705737.21/warc/CC-MAIN-20190120102853-20190120124853-00279.warc.gz"}
https://www.albert.io/ie/trigonometry/cosecant-for-dollar-5-3dollar
Easy

# Cosecant for $(-5,-3)$

TRIG-X2YDEV

Find the cosecant of the angle formed between the positive $x$-axis and the line segment from the origin that contains the point $(-5, -3)$.

A $-\cfrac{3\sqrt{34}}{34}$

B $-\cfrac{\sqrt{34}}{5}$

C $-\cfrac{\sqrt{34}}{3}$

D $-\cfrac{5\sqrt{34}}{34}$

E $\cfrac{5}{3}$
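The scraped page does not include the solution, so for completeness the computation is short (with $r$ the distance from the origin to the point):

```latex
r = \sqrt{(-5)^2 + (-3)^2} = \sqrt{34},
\qquad
\csc\theta = \frac{r}{y} = \frac{\sqrt{34}}{-3} = -\frac{\sqrt{34}}{3}
\quad\Rightarrow\quad \text{choice C}
```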
2016-12-08 08:00:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4791124165058136, "perplexity": 712.9721321594882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542455.45/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz"}
https://riverml.xyz/0.15.0/api/stats/Link/
A link joins two univariate statistics as a sequence. This can be used to pipe the output of one statistic to the input of another. This can be used, for instance, to calculate the mean of the variance of a variable. It can also be used to compute shifted statistics by piping statistics with an instance of stats.Shift.

Note that a link is not meant to be instantiated via this class definition. Instead, users can link statistics together via the | operator.

## Parameters¶

• left (river.stats.base.Univariate)

• right (river.stats.base.Univariate) The output from left's get method is passed to right's update method if left's get method doesn't produce None.

• name

## Examples¶

>>> from river import stats

>>> stat = stats.Shift(1) | stats.Mean()

No values have been seen, therefore get defaults to the initial value of stats.Mean, which is 0.

>>> stat.get() 0.0

Let us now call update.

>>> stat = stat.update(1)

The output from get will still be 0. The reason is that stats.Shift has not seen enough values, and therefore outputs its default value, which is None. The stats.Mean instance is therefore not updated.

>>> stat.get() 0.0

On the next call to update, the stats.Shift instance has seen enough values, and therefore the mean can be updated. The mean is therefore equal to 1, because that's the only value from the past.

>>> stat = stat.update(3) >>> stat.get() 1.0

On the subsequent call to update, the mean will be updated with the value 3.

>>> stat = stat.update(4) >>> stat.get() 2.0

Note that composing statistics returns a new statistic with its own name.

>>> stat.name 'mean_of_shift_1'

## Methods¶

get Return the current value of the statistic.

update Update and return the calling instance.

Parameters

• x (numbers.Number)
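The "mean of the variance" composition mentioned above can be written directly with the | operator. A short sketch, assuming stats.Var follows the same update/get API shown in the examples:

```python
from river import stats

# Pipe a running variance into a running mean: Var's get() output
# feeds Mean's update() at every step.
stat = stats.Var() | stats.Mean()

for x in [1.0, 4.0, 2.0, 5.0]:
    stat = stat.update(x)

print(stat.get())   # mean of the successive variance estimates
print(stat.name)    # composed name, e.g. 'mean_of_var'
```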
2023-04-02 05:11:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3317404091358185, "perplexity": 2513.646831383185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950383.8/warc/CC-MAIN-20230402043600-20230402073600-00750.warc.gz"}
https://www.nature.com/articles/s41567-018-0069-0?error=cookies_not_supported&code=b823133b-76cf-494b-b914-c0847c6b3f1e
Letter

# Attosecond optical-field-enhanced carrier injection into the GaAs conduction band

## Abstract

Resolving the fundamental carrier dynamics induced in solids by strong electric fields is essential for future applications, ranging from nanoscale transistors (refs 1,2) to high-speed electro-optical switches (ref. 3). How fast and at what rate can electrons be injected into the conduction band of a solid? Here, we investigate the sub-femtosecond response of GaAs induced by resonant intense near-infrared laser pulses using attosecond transient absorption spectroscopy. In particular, we unravel the distinct role of intra- versus interband transitions. Surprisingly, we found that despite the resonant driving laser, the optical response during the light–matter interaction is dominated by intraband motion. Furthermore, we observed that the coupling between the two mechanisms results in a significant enhancement of the carrier injection from the valence into the conduction band. This is especially unexpected as the intraband mechanism itself can accelerate carriers only within the same band. This physical phenomenon could be used to control ultrafast carrier excitation and boost injection rates in electronic switches in the petahertz regime.

## Main

Shrinking structure sizes in integrated circuits inevitably lead to increasing field strengths in the involved semiconductor materials (refs 1,2). At the same time, ultrafast optical technologies enable the extension of operation frequencies of electro-optical devices to the petahertz regime (ref. 3). Both applications ultimately require a deep fundamental understanding of ultrafast electron dynamics in solids in the presence of strong fields for the development of the next generation of compact and fast electronic devices. A number of pioneering experiments demonstrated the potential to measure and control carrier dynamics induced by intense near-infrared laser pulses (peak intensity $I_{\mathrm{peak}} \sim 10^{12}\ \mathrm{W\,cm^{-2}}$) in semiconductors (refs 4–8) and dielectrics (refs 9,10) on a sub- to few-femtosecond timescale using transient absorption and polarization spectroscopy. So far, resolving such dynamics with attosecond resolution has been limited to the non-resonant excitation regime, where the bandgap of the investigated material is larger than the energy of a single pump photon. Here, in contrast, we unravel the sub-femtosecond response of gallium arsenide (GaAs), a prototype and technologically relevant direct-bandgap semiconductor, in the resonant regime. Besides the ‘vertical’ optical transition in the momentum space that corresponds to the absorption of infrared pump photons (so-called interband transition, Fig. 1b), the pump field can also accelerate electrons within the electronic bands (intraband motion, Fig. 1c). In a simplified picture, one can think of inter- and intraband transitions as a consequence of the dual nature of the pump light that behaves either as photons (interband) or as a classical electromagnetic field (intraband). The role of intra- versus interband transitions in the presence of strong electric fields is highly debated (refs 11–17). For the infrared intensities used in this experiment, we can neglect contributions from the magnetic laser fields (ref. 18). In a recent publication, we demonstrated that during the interaction of a wide-bandgap dielectric such as diamond with a short, intense, non-resonant infrared pump pulse, intraband motion completely dominates the transient optical response (ref. 10).
However, it is still unclear whether and how this situation changes in the technologically much more relevant resonant case where a single photon from the pulse has enough energy to induce an interband transition that creates real carriers in the conduction band (CB). The question of whether intraband motion still dominates the interaction and how the coupling between the two mechanisms influences the carrier injection is not obvious and has not been experimentally investigated so far. To study the electronic response of GaAs when driven out of equilibrium, we combine a 5–6 fs infrared pump pulse (centre energy $\hbar\omega_{\mathrm{IR}} \approx 1.59\ \mathrm{eV}$) with a delayed phase-locked single attosecond pulse (SAP) probe as illustrated in Fig. 1a (further details are given in the Supplementary Information and in ref. 19). The infrared pump pulse has a peak intensity in vacuum of $\sim 2.31 \pm 0.17 \times 10^{12}\ \mathrm{W\,cm^{-2}}$, which corresponds to a peak electric field of $\sim 0.42\ \mathrm{V\,Å^{-1}}$.
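(The quoted field follows from the standard vacuum relation between peak intensity and peak electric field; this back-of-envelope check is not from the paper itself:)

```latex
E_0 = \sqrt{\frac{2 I_{\mathrm{peak}}}{c\,\varepsilon_0}}
    = \sqrt{\frac{2 \times 2.31 \times 10^{16}\ \mathrm{W\,m^{-2}}}
            {(3.00 \times 10^{8}\ \mathrm{m\,s^{-1}})\,(8.85 \times 10^{-12}\ \mathrm{F\,m^{-1}})}}
    \approx 4.2 \times 10^{9}\ \mathrm{V\,m^{-1}}
    = 0.42\ \mathrm{V\,\text{Å}^{-1}}
```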
The estimated intensity inside the sample reaches up to 60% of the intensity in vacuum. The two beams are focused into a double target that consists of a gas jet followed by a 100-nm-thick single-crystalline GaAs membrane. The neon gas target enables the extraction of the temporal shape of both pulses as well as a precise delay calibration via a simultaneously recorded streaking measurement (refs 20,21). We calibrate the time axis of the streaking trace by taking into account the spatial separation of the two targets (ref. 22). The pump–probe principle of attosecond transient absorption spectroscopy is illustrated in Fig. 1. The infrared pulse can induce both inter- and intraband transitions. The SAP probes the modified charge distribution by exciting electrons from the As-3d core levels to available states around the bandgap region. Figure 1d shows the measured static absorption spectrum of the GaAs membrane. It is important to note that the broad extreme-ultraviolet (XUV) spectrum of the SAP simultaneously probes the dynamics in the valence band (VB) and the CB. Figure 2a displays the absorption modification of GaAs induced by the resonant pump pulse, $\Delta\mathrm{Abs}(E,\tau)$ (for definition, see Supplementary Information). A red (blue) region indicates increased (decreased) absorption. In the following analysis, we concentrate on two different delay regimes: (1) when the pump and probe overlap, and (2) when the probe pulse arrives well after the pump. Without temporal overlap after the infrared pump pulse, we see a long-lasting signal (that is, regime (2) in Fig. 2a), which persists after the pump interaction over a considerable delay range. During the interaction, electrons are excited via interband transitions from the VB to the CB. This mechanism fully takes into account the nonlinear injection of carriers (see Supplementary Information). The creation of holes in the VB and electrons in the CB causes an increased XUV absorption at the upper VB edge (around 40 eV) and a bleached absorption at the lower CB range (around 43 eV), respectively. The system returns to its equilibrium ground state through electron–hole recombination, which happens for bulk GaAs on a timescale of 2.1 ns (ref. 23). By looking at negative delays, we can see that the absorption of the system recovers completely between subsequent pulses, which means that there are no accumulative effects and heating of the sample by the laser is negligible. During the temporal overlap of the infrared pump and the XUV probe pulse, we observe a transient signal (that is, regime (1) in Fig. 2a), which oscillates with $2\omega_{\mathrm{IR}}$ and lasts for the duration of the pump pulse (Fig. 2b). The oscillations are visible in a broad probe energy range, most pronounced in the CB between 42.5 and 46 eV. Below 42 eV, they are not well resolved due to stronger fluctuations of the SAP spectral amplitude. However, attosecond transient absorption spectroscopy measurements performed with attosecond pulse trains characterized by a more stable spectrum confirmed the appearance of oscillations also in the VB, around 40 eV (see Supplementary Information). Figure 2d shows the squared vector potential $A(t)^2$ of the measured infrared pump and the measured transient absorbance for two energy windows. A comparison among them reveals a strong energy dependence of the oscillation phase, which is reflected in the tilted shape of the oscillation features in $\Delta\mathrm{Abs}(E,\tau)$. To understand the microscopic origin of the measured features, we performed a first-principles electron dynamics simulation (see Supplementary Information for details). We simulated the pump–probe experiment (ref. 24) and calculated the pump-induced change of the dielectric function including propagation effects, $\Delta\varepsilon(E,\tau)$, which is directly related to the absorption change $\Delta\mathrm{Abs}(E,\tau)$ (ref. 10). The numerical results show oscillations with a tilted shape and a long-lasting signal, in good agreement with the experiment (Fig. 2c). With a decomposition of the probe Hamiltonian of the first-principles simulation into Houston states (refs 10,25), we can disentangle the contributions of the two probe transitions (As-3d level to either VB or CB) in the observed dynamics (see Supplementary Information). The energy range above 42 eV, where the strongest transient signal appears, is dominated by probe transitions from the core level to the CB (Fig. 3a). Therefore, in the following, we focus on the CB response. In a previous study (ref. 10), we demonstrated that a non-resonant pump can excite virtual electrons on a sub-femtosecond timescale via intraband motion. Virtual electron excitations live only transiently during the presence of the driving field. For the present experiment, the resonant part of the pump radiation will also inject real carriers into the CB via interband transitions. A population of real carriers persists after the driving pulse has passed and decays orders of magnitude slower than the timescale considered here. To study the ultrafast carriers, we have to investigate the respective signal contributions of infrared-induced intra- and interband transitions. Therefore, we simplify the description of our system to a three-band model, which includes the As-3d level, the light-hole VB and the lowest CB (see Supplementary Information). The advantage of the three-band model is that intraband motion and interband transitions between the VB and CB can be numerically included or excluded. Figure 3b shows the CB response with both types of transition involved. The good qualitative agreement with the first-principles decomposition (Fig. 3a) justifies the use of this model to study the respective optical response induced by the two mechanisms. In the intraband limit, no real electrons are excited from the VB to the CB (ref. 26). This explains why the dielectric function of GaAs fully recovers immediately after the pump pulse (Fig. 3c). In the interband limit, real carriers are injected into the CB by resonant photon absorption, thus resulting in the blue long-lasting signal around 43 eV (Fig. 3d).
In both cases, absorption oscillations with twice the pump frequency appear (Fig. 3e). They originate from the dynamical Franz–Keldysh effect (refs 10,27) (DFKE, intraband limit) and the dynamical Stark effect (ref. 28) (interband limit). In contrast to the interband case, the intraband limit clearly shows the strong energy dispersion as in the experiment. In addition, a closer look reveals that the intraband trace oscillates nearly in phase with the decomposed first-principles simulation and therefore with the experimental results, while the interband picture clearly fails to reproduce the experimental phase (Fig. 3e). To further verify this, we compare the energy dispersion of the oscillation delay between the measured and simulated signal for the different models and limits (Fig. 3f). The pure interband case of the three-band model fails to reproduce the experiment while the delay of the intraband limit shows excellent agreement with the experimental results. Therefore, by looking at the attosecond timing of the transient signal, we can conclude that infrared-induced intraband motion (namely the DFKE) dominates the ultrafast response in the CB of GaAs during the pump–probe overlap even in a resonant pumping condition. This is a surprising result, as in the case of a resonant intense pump it is believed that one should not be able to observe DFKE around the bandgap (refs 10,26,27). Finally, we look at the injection of real carriers from the VB into the CB. We define the CB population, $n_{\mathrm{CB}}$, by the projection of the time-dependent wavefunction of the three-band model on the CB state (see Supplementary Information). In the case of neglected intraband motion (only interband transitions), the calculation predicts a stepwise oscillating increase of $n_{\mathrm{CB}}$ following the intensity of the pump pulse (Fig. 4). During the second part of the pump interaction, Rabi-flopping partly depopulates the CB. Surprisingly, in the realistic case involving both excitation mechanisms, the amount of excited carriers increases by nearly a factor of three compared to the model with only interband transitions. This result shows that, although intraband motion does not create real carriers in the CB by itself (ref. 26), it assists in the carrier injection initiated by the resonant part of the pump. This indicates that the nonlinear interplay between intra- and interband transitions opens a new excitation channel via virtually excited states at high pump intensities. It is worth emphasizing that the observed enhancement of the injection rate can also be seen in the multi-photon resonant pump regime (see Supplementary Information). Further, it does not depend on the pulse duration. However, using significantly longer pulses or continuous-wave laser light with the same field strength could lead to the target being irreversibly damaged. To conclude, our measurements and simulations reveal the mechanisms of the sub-femtosecond electron injection in GaAs driven by intense and resonant infrared laser pulses. In contrast to expectations, our results demonstrate that ultrafast transient absorption features, which characterize the early response of the semiconductor to the resonant pump excitation, are dominated by intraband motion, rather than by interband transitions. Furthermore, our simulations show that the virtual carriers created by the intraband motion assist in the injection of real carriers from the VB into the CB.
Hence, the interplay between both transition types significantly influences the injection mechanism in the presence of strong electric fields. This process is expected to be universal and persist in a large range of excitation parameters. Therefore, our observation reveals important information about sub-femtosecond electron dynamics in a solid induced by strong fields, which is required for the scaling of the next generation of efficient and fast optical switches and electronics driven in the petahertz regime.

### Data availability

The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Mei, X. et al. First demonstration of amplification at 1 THz using 25-nm InP high electron mobility transistor process. IEEE Electron Device Lett. 36, 327–329 (2015).
2. Desai, S. B. et al. MoS2 transistors with 1-nanometer gate lengths. Science 354, 99–102 (2016).
3. Krausz, F. & Stockman, M. I. Attosecond metrology: from electron capture to future signal processing. Nat. Photon. 8, 205–213 (2014).
4. Schultze, M. et al. Attosecond band-gap dynamics in silicon. Science 346, 1348–1352 (2014).
5. Mashiko, H., Oguri, K., Yamaguchi, T., Suda, A. & Gotoh, H. Petahertz optical drive with wide-bandgap semiconductor. Nat. Phys. 12, 741–745 (2016).
6. Sommer, A. et al. Attosecond nonlinear polarization and light–matter energy transfer in solids. Nature 534, 86–90 (2016).
7. Zürch, M. et al. Ultrafast carrier thermalization and trapping in silicon–germanium alloy probed by extreme ultraviolet transient absorption spectroscopy. Struct. Dyn. 4, 044029 (2017).
8. Zürch, M. et al. Direct and simultaneous observation of ultrafast electron and hole dynamics in germanium. Nat. Commun. 8, 15734 (2017).
9. Schultze, M. et al. Controlling dielectrics with the electric field of light. Nature 493, 75–78 (2013).
10. Lucchini, M. et al. Attosecond dynamical Franz–Keldysh effect in polycrystalline diamond. Science 353, 916–919 (2016).
11. Golde, D., Meier, T. & Koch, S. W. High harmonics generated in semiconductor nanostructures by the coupled dynamics of optical inter- and intraband excitations. Phys. Rev. B 77, 075330 (2008).
12. Ghimire, S. et al. Observation of high-order harmonic generation in a bulk crystal. Nat. Phys. 7, 138–141 (2011).
13. Malard, L. M., Mak, K. F., Castro Neto, A. H., Peres, N. M. R. & Heinz, T. F. Observation of intra- and inter-band transitions in the transient optical response of graphene. New J. Phys. 15, 015009 (2013).
14. Al-Naib, I., Sipe, J. E. & Dignam, M. M. High harmonic generation in undoped graphene: Interplay of inter- and intraband dynamics. Phys. Rev. B 90, 245423 (2014).
15. Luu, T. T. et al. Extreme ultraviolet high-harmonic spectroscopy of solids. Nature 521, 498–502 (2015).
16. Wismer, M. S., Kruchinin, S. Y., Ciappina, M., Stockman, M. I. & Yakovlev, V. S. Strong-field resonant dynamics in semiconductors. Phys. Rev. Lett. 116, 197401 (2016).
17. Paasch-Colberg, T. et al. Sub-cycle optical control of current in a semiconductor: from the multiphoton to the tunneling regime. Optica 3, 1358 (2016).
18. Ludwig, A. et al. Breakdown of the dipole approximation in strong-field ionization. Phys. Rev. Lett. 113, 243001 (2014).
19. Locher, R. et al.
Versatile attosecond beamline in a two-foci configuration for simultaneous time-resolved measurements. Rev. Sci. Instrum. 85, 013113 (2014).
20. Hentschel, M. et al. Attosecond metrology. Nature 414, 509–513 (2001).
21. Itatani, J. et al. Attosecond streak camera. Phys. Rev. Lett. 88, 173903 (2002).
22. Schlaepfer, F. et al. Gouy phase shift for annular beam profiles in attosecond experiments. Opt. Express 25, 3646–3655 (2017).
23. Beard, M. C., Turner, G. M. & Schmuttenmaer, C. A. Transient photoconductivity in GaAs as measured by time-resolved terahertz spectroscopy. Phys. Rev. B 62, 15764–15777 (2000).
24. Sato, S. A., Yabana, K., Shinohara, Y., Otobe, T. & Bertsch, G. F. Numerical pump–probe experiments of laser-excited silicon in nonequilibrium phase. Phys. Rev. B 89, 064304 (2014).
25. Houston, W. V. Acceleration of electrons in a crystal lattice. Phys. Rev. 57, 184–186 (1940).
26. Srivastava, A., Srivastava, R., Wang, J. & Kono, J. Laser-induced above-band-gap transparency in GaAs. Phys. Rev. Lett. 93, 157401 (2004).
27. Novelli, F., Fausti, D., Giusti, F., Parmigiani, F. & Hoffmann, M. Mixed regime of light–matter interaction revealed by phase sensitive measurements of the dynamical Franz–Keldysh effect. Sci. Rep. 3, 1227 (2013).
28. Bakos, J. S. AC Stark effect and multiphoton processes in atoms. Phys. Rep. 31, 209–235 (1977).
29. Vurgaftman, I., Meyer, J. R. & Ram-Mohan, L. R. Band parameters for III–V compound semiconductors and their alloys. J. Appl. Phys. 89, 5815–5875 (2001).
30. Kraut, E. A., Grant, R. W., Waldrop, J. R. & Kowalczyk, S. P. Precise determination of the valence-band edge in X-ray photoemission spectra: application to measurement of semiconductor interface potentials. Phys. Rev. Lett. 44, 1620–1623 (1980).

## Acknowledgements

We thank M. C. Golling for growing the GaAs, and J. Leuthold and C. Bolognesi for helpful discussion. The authors acknowledge the support of the technology and cleanroom facility at Frontiers in Research: Space and Time (FIRST) of ETH Zurich for advanced micro- and nanotechnology. This work was supported by the National Center of Competence in Research Molecular Ultrafast Science and Technology (NCCR MUST) funded by the Swiss National Science Foundation, and by JSPS KAKENHI grant no. 26-1511.

## Author information

### Author notes

• M. Lucchini — Present address: Department of Physics, Politecnico di Milano, Milano, Italy

### Affiliations

1. Department of Physics, ETH Zurich, Zurich, Switzerland — F. Schlaepfer, M. Lucchini, M. Volkov, L. Kasmi, N. Hartmann, L. Gallmann & U. Keller
2. Max Planck Institute for the Structure and Dynamics of Matter, Hamburg, Germany — S. A. Sato & A. Rubio

### Contributions

F.S., M.L., L.G. and U.K. supervised the study. F.S., M.L., M.V., L.K. and N.H. conducted the experiments. M.V. also improved the experimental set-up and data acquisition system. F.S. fabricated the sample and analysed the experimental data. S.A.S. and A.R. developed the theoretical modelling. All authors were involved in the interpretation and contributed to the final manuscript.

### Competing interests

The authors declare no competing interests.

### Corresponding authors

Correspondence to F. Schlaepfer or U. Keller.

## Supplementary information

### Supplementary Information

Supplementary Figures 1–13, Supplementary Table 1, Supplementary References
2018-03-23 08:46:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6272857189178467, "perplexity": 2882.26037872405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648205.76/warc/CC-MAIN-20180323083246-20180323103246-00334.warc.gz"}
http://www.sawaal.com/profit-and-loss-questions-and-answers/if-the-manufacturer-gains-10-the-wholesale-dealer-15-and-the-retailer-25-then-find-the-cost-of-produ_6446
Q: If the manufacturer gains 10%, the wholesale dealer 15% and the retailer 25%, then find the cost of production of a table if the retail price was Rs 1265

A) Rs. 750 B) Rs. 800 C) Rs. 850 D) Rs. 900

Explanation: Let the cost of production = Rs. p. Then, as per the question,

$\left(\frac{125}{100}\times \frac{115}{100}\times \frac{110}{100}\right)p = 1265 \;\Rightarrow\; \frac{253}{160}\,p = 1265 \;\Rightarrow\; p = 800$

Q: A quantity of tea is sold at Rs. 5.75 per kilogram. The total gain by selling the tea at this rate is Rs. 60. Find the quantity of tea being sold if a profit of 15% is made on the deal?

A) 72 kgs B) 80 kgs C) 76 kgs D) 84 kgs

Explanation: Say the total cost price of the tea is x. Then the total profit at a rate of 15% is 15x/100. According to the question, 15x/100 = 60, so x = 400. The C.P. of the tea is Rs. 400, so the total selling price is (400 + 60) = Rs. 460, and the quantity of tea is 460/5.75 = 80 kg.

Q: Every year before the festive season, a shopkeeper increases the price of the product by 35% and then introduces two successive discounts of 10% and 15% respectively. What is the percentage loss or percentage gain?

A) 3.27 % loss B) 4.15 % loss C) 3.27 % gain D) 4.15 % gain

Explanation: Let C.P. = 100. A 35% increase gives S.P. = 135. A 10% discount on 135 ((135×10)/100 = 13.5) gives a first S.P. of 135 − 13.5 = 121.5. A further 15% discount ((121.5×15)/100 = 18.225) gives a second S.P. of 121.5 − 18.225 = 103.275. So finally C.P. = 100 and S.P. = 103.275, a gain of 3.27%.

Q: A merchant buys two articles for Rs.600. He sells one of them at a profit of 22% and the other at a loss of 8% and makes no profit or loss in the end. What is the selling price of the article that he sold at a loss?

A) Rs. 404.80 B) Rs. 536.80 C) Rs.440 D) Rs. 160

Explanation: Let C1 be the cost price of the first article and C2 be the cost price of the second article. Let the first article be sold at a profit of 22%, while the second one be sold at a loss of 8%. We know C1 + C2 = 600. The first article was sold at a profit of 22%, so its selling price = C1 + (22/100)C1 = 1.22C1. The second article was sold at a loss of 8%, so its selling price = C2 − (8/100)C2 = 0.92C2. The total selling price of the two articles = 1.22C1 + 0.92C2. As the merchant made no profit or loss in the entire transaction, his combined selling price equals the combined cost price. Therefore, 1.22C1 + 0.92C2 = C1 + C2 = 600. As C1 + C2 = 600, C2 = 600 − C1. Substituting this in 1.22C1 + 0.92C2 = 600, we get 1.22C1 + 0.92(600 − C1) = 600, or 1.22C1 − 0.92C1 = 600 − 0.92×600, or 0.3C1 = 0.08×600 = 48, so C1 = 48/0.3 = 160. If C1 = 160, then C2 = 600 − 160 = 440. The article sold at a loss is article 2, and its selling price = 0.92×C2 = 0.92×440 = Rs. 404.80.

Q: A shopkeeper who deals in books sold a book at 16% loss. Had she charged an additional Rs.60 while selling it, her profit would have been 14%. Find the cost price, in rupees, of the book?

A) Rs. 185 B) Rs. 154 C) Rs. 200 D) Rs. 177

Explanation: Let the C.P. be x. Then the selling price S.P. = x − 16x/100 = 84x/100 = 21x/25. Now, if the S.P. is Rs. 60 more, the profit is 14%: 21x/25 + 60 = x + 14x/100, so 114x/100 − 21x/25 = 60, i.e. (57 − 42)x/50 = 60, so 15x/50 = 60 and x = 3000/15 = 200. Therefore, the cost price C.P. = x = Rs. 200.
Q: In a certain store, the profit is 320% of the cost. If the cost increases by 25% but the selling price remains constant, approximately what percentage of the selling price is the profit?

A) 60% B) 50% C) 70% D) 45%

Explanation: Let C.P. = Rs. 100. Then profit = Rs. 320 and S.P. = Rs. 420. New C.P. = 125% of Rs. 100 = Rs. 125; the new S.P. stays Rs. 420. Profit = Rs. (420 − 125) = Rs. 295. Required percentage = (295/420) × 100 ≈ 70%.
2017-03-25 13:32:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7782477140426636, "perplexity": 2911.064131312258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188926.39/warc/CC-MAIN-20170322212948-00570-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/other-math/thinking-mathematically-6th-edition/chapter-5-number-theory-and-the-real-number-system-5-3-the-rational-numbers-exercise-set-5-3-page-285/59
## Thinking Mathematically (6th Edition) $-\dfrac{7}{120}$ The product of a positive and a negative number is negative. Multiply the numerators together and the denominators together to obtain: $=-\dfrac{1(7)}{10(12)} \\=-\dfrac{7}{120}$
2018-08-21 16:51:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5162936449050903, "perplexity": 780.2724020284074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221218357.92/warc/CC-MAIN-20180821151743-20180821171743-00452.warc.gz"}
https://www.nature.com/articles/sdata201672?error=cookies_not_supported&code=bc095156-7bdf-47b5-bd54-814779e746b7
# Data for evaluation of fast kurtosis strategies, b-value optimization and exploration of diffusion MRI contrast

## Abstract

Here we describe and provide diffusion magnetic resonance imaging (dMRI) data that was acquired in neural tissue and a physical phantom. Data acquired in biological tissue includes: fixed rat brain (acquired at 9.4 T) and spinal cord (acquired at 16.4 T) and in normal human brain (acquired at 3 T). This data was recently used for evaluation of diffusion kurtosis imaging (DKI) contrasts and for comparison to diffusion tensor imaging (DTI) parameter contrast. The data has also been used to optimize b-values for ex vivo and in vivo fast kurtosis imaging. The remaining data was obtained in a physical phantom with three orthogonal fiber orientations (fresh asparagus stems) for exploration of the kurtosis fractional anisotropy. However, the data may have broader interest and, collectively, may form the basis for image contrast exploration and simulations based on a wide range of dMRI analysis strategies.

Design Type(s): repeated measure design • protocol optimization design
Measurement Type(s): Diffusion Kurtosis Imaging
Technology Type(s): MRI Scanner
Factor Type(s): —
Sample Characteristic(s): Rattus norvegicus • Homo sapiens • spinal cord • brain

Machine-accessible metadata file describing the reported data (ISA-Tab format)

## Background & Summary

Diffusion weighted MRI (dMRI) is highly sensitive to tissue microstructure, which makes it important as a tool in research and diagnostics. Traditional dMRI analysis relies on the diffusion tensor model (refs 1,2) where the diffusion signal is approximated with a Gaussian phase distribution. The microstructure of biological tissues, however, influences the diffusion process and causes the spin phase distribution to deviate from normal. This deviation is partially described by including the kurtosis term in the cumulant expansion (ref. 3). The diffusion kurtosis imaging (DKI) framework (ref. 4) captures this deviation and is seen as an indirect microstructural marker. DKI is an increasingly popular method to increase the sensitivity of dMRI to microstructure. In particular, the orientationally averaged kurtosis—the mean kurtosis (MK)—has been found to possess promising clinical potential. In an animal model of stroke, MK was found to improve the visualization of the ischemic lesion (ref. 5) compared to mean diffusivity (MD) and to display different temporal dynamics than MD (refs 6,7). In human stroke, MK was also found to increase (refs 8–11). MK's potential value has also been reported in several other neurological applications: Parkinson’s disease (ref. 12), epilepsy (ref. 13), gliomas (refs 14,15), chronic mild stress (ref. 16), attention deficit hyperactivity disorder (ADHD) (ref. 17), traumatic brain injury (ref. 18; reviewed in ref. 19), and normal development (refs 20,21). Despite its potential, the exploration and application of DKI in everyday clinical imaging is held back by its large data requirement (causing long acquisition times) and computationally heavy postprocessing. In an effort to remove these limitations, strategies for fast kurtosis imaging have recently been proposed (refs 22–24).
These strategies employ nine distinct diffusion encoding directions acquired at two different b-values to efficiently estimate the mean kurtosis using a definition based on the kurtosis tensor, W. In the same theoretical framework, the directional dependency of the kurtosis—the kurtosis fractional anisotropy (KFA) (ref. 25)—may be defined (refs 22,26) in a manner which is mathematically analogous to the fractional anisotropy (FA) (ref. 27) known from diffusion tensor (refs 2,28) imaging (DTI). A compact scheme for KFA estimation by proxy was explored in a recent study (ref. 25) where its contrast was also compared to conventional DTI and DKI contrasts. The data provided here allows users to perform both traditional DKI analysis and fast kurtosis analysis from data sets acquired in fixed rat brain and in human brain. One potential use of this data is for testing analysis software, postprocessing algorithms or b-value optimization, thus supplementing other publicly available data sets e.g. those presented in (refs 29,30) (data available at: cmic.cs.ucl.ac.uk/wmmchallenge/ and www.massive-data.org/). Furthermore, we provide high resolution dMRI data from rat spinal cord and a physical phantom which may be used to explore DKI contrasts and as a basis for simulations. The data was used in previously published analysis (refs 22,25). The data acquisition details are provided below. Details on data availability, formats, and organization are provided in the Data Records section and Table 1.

## Methods

All animal work was performed in accordance with relevant guidelines and regulations concerning animal experiments. All animal experimental protocols were approved by the Danish Animal Experiments Inspectorate (Dyreforsøgstilsynet). Human data acquisition was performed in accordance with the Declaration of Helsinki. All human experimental protocols were approved by the local ethics committee for research (De videnskabsetiske Komitéer for Region Midtjylland). Informed consent was obtained from all human subjects (one) prior to scanning. The data is made available raw, meaning that no pre-processing (smoothing, coregistration, spatial filtering, or normalization) was applied in any of the data sets. Throughout the method descriptions, SNR was calculated as the average signal in a homogeneous region in the object imaged divided by the standard deviation of the signal in a background region, corrected for Rayleigh distribution in a standard manner (ref. 31). Unless otherwise stated, reported SNR levels were evaluated at b=0.
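A minimal sketch of that SNR estimate, assuming magnitude images and the usual Rayleigh correction factor $\sqrt{2-\pi/2}\approx 0.655$ for the background standard deviation (the region definitions and image below are purely synthetic):

```python
import numpy as np

def snr_rayleigh(img, signal_mask, background_mask):
    """SNR = mean(signal ROI) / sigma, where sigma is the true noise level.
    For Rayleigh-distributed background in a magnitude image,
    SD_bg = sigma * sqrt(2 - pi/2) (~0.655 * sigma), so we divide the
    measured background SD by that factor to recover sigma."""
    s = img[signal_mask].mean()
    sigma = img[background_mask].std() / np.sqrt(2.0 - np.pi / 2.0)
    return s / sigma

# Toy usage on a synthetic magnitude image:
rng = np.random.default_rng(1)
noise = np.abs(rng.normal(0, 1, (64, 64)) + 1j * rng.normal(0, 1, (64, 64)))
img = noise.copy()
img[16:48, 16:48] += 30.0                              # homogeneous "tissue"
sig_mask = np.zeros_like(img, bool); sig_mask[20:44, 20:44] = True
bg_mask = np.zeros_like(img, bool); bg_mask[:8, :8] = True
print(round(snr_rayleigh(img, sig_mask, bg_mask), 1))  # roughly 31 here
```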
### MRI data obtained in fixed rat spinal cord

An adult male Wistar rat was euthanized and exsanguinated during intra-aortic perfusion fixation with isotonic saline containing heparin (10 IU ml−1), followed by 4% paraformaldehyde in phosphate-buffered saline (PBS) (pH 7.4). A section of spinal cord including the cervical enlargement was then dissected out and stored in 4% PFA for at least 6 weeks prior to imaging. The spinal cord segment was washed in PBS for 24 h prior to MR scanning to improve signal by removal of excess fixative. For imaging, the tissue was placed in a 5 mm NMR tube. Imaging was performed on a Bruker Biospec 16.4 T (Bruker Biospin, Germany) spectrometer equipped with microimaging gradients with a strength of 3 T/m. Data was acquired using a 5 mm saddle coil. DWI data acquisition was performed using a standard DW spin echo sequence. A total of 17 b-values equally distributed from 0–15 ms μm−2 were acquired. At each b-value, data was acquired along 9 gradient directions, so that the gradient directions at non-zero b-values in combination form a 144 point spherical design (ref. 32). Imaging parameters were: TE=15.3 ms, TR=2500 ms, diffusion timings δ/Δ=2/8 ms, 3 averages. Acquisition time per b-value: 3 h 36 min. Twenty-five image slices were acquired at a resolution of 23 μm × 23 μm × 120 μm, matrix size 192×192. Notice that an artifact caused by radio frequency feed-through (contamination) is present outside of the object. SNR is rather low in this data set (~7 at b=0) but the high spatial resolution of the data set in combination with the large range of b-values makes it applicable for DTI/DKI contrast exploration, and as a foundation for simulations based on DTI/DKI fits. The symmetry of the spinal cord also allows for averaging data across several slices to increase SNR. Down sampling and smoothing may extend the applicable b-value range even further. The raw, full data set is provided.

### MRI data obtained in fixed rat brain

This specimen was obtained using the same fixation protocol as above. After perfusion fixation the brain was removed and immersion fixed in fresh 4% paraformaldehyde solution for at least 6 weeks. Prior to imaging, the brain was washed in PBS for 24 h to improve signal by removal of excess fixative. Data was acquired using a Bruker Biospec 9.4 T (Bruker Biospin, Germany) MRI system equipped with a 15 mm quadrature coil. DWI data acquisition was performed using a standard DW spin echo sequence. A total of 15 b-values ranging from 0–3 ms μm−2 in steps of 0.2 ms μm−2 were acquired. At each b-value, data was acquired along 33 gradient directions. These directions were obtained by combination of a 3-dimensional 24-point spherical 7-design (ref. 32) and the nine directions identified for fast estimation of mean kurtosis in ref. 22. Imaging parameters were: TE=23.3 ms, TR=4 s, diffusion timings δ/Δ=4/14 ms, 2 averages. Fifteen image slices were acquired at a resolution of 100 μm × 100 μm × 500 μm, matrix size 128×128. SNR was approximately 75 at b=0 evaluated using the mean signal across all tissue.

### MRI data collected in a physical phantom

A physical phantom with fiber bundles equally distributed along the $\hat{x}$, $\hat{y}$, and $\hat{z}$ directions was constructed. This phantom was used to mimic one imaging voxel with complex fiber distribution while at the same time allowing us to resolve each fiber direction separately. For this, a phantom was built using fresh asparagus stems. Stems were cut into 8 mm long sections and placed inside a cubic plastic container in a 3×3 design. The container was then filled with room temperature demineralized water and glass tools were used to remove air bubbles. Immediately after construction, the phantom was brought to the magnet. For scanning, the phantom was placed in an in-house built sample holder made from PE foam allowing the sample to be held tightly in place inside the MR coil. In this manner, sample movement (shaking) during acquisition was eliminated thereby avoiding image artifacts caused by bulk water motion. Imaging was performed on a horizontal 9.4 T Bruker Biospec system using a 40 mm quadrature coil. This coil is intended for mounting on an animal bed, but for these scans it was mounted in the magnet bore using a coil holder developed in-house. The scan protocol included an anatomical/structural scan and a DKI acquisition.
The structural data was acquired with a FLASH sequence (TE=5.4 ms, TR=350 ms) in seven 1 mm thick slices with an in-plane resolution of 100 μm × 100 μm, matrix size 280×280. Diffusion data was acquired using a standard diffusion weighted spin echo sequence. Data was recorded in the same seven slice planes as the structural data but with a lower in-plane resolution of 427 μm × 427 μm, matrix size 64×64. Imaging parameters were TE=70.6 ms, TR=2700 ms, diffusion timings δ/Δ=6/60 ms, 4 averages. Fifteen encoding directions were obtained at b-values of 0, 0.5, 1.0, 1.8, 2.5, 3.5 ms μm−2. The encoding directions were obtained from a 15 point spherical design (ref. 32). The first slice is influenced by susceptibility effects near the water air surface. Therefore, we recommend to only analyze slices 2–7 as done in ref. 25.

### Human MRI data

Human data was acquired in one normal volunteer using a Siemens Trio 3 T equipped with a 32 channel head coil and a double spin echo DW EPI sequence. Motion of the subject's head during acquisition was avoided by padding inside the coil. DWI data was recorded at b=0 ms μm−2, and along 33 directions at b-values from 0.2–3 ms μm−2 in steps of 0.2 ms μm−2. The encoding scheme was constructed as a combination of a 24 point spherical design (ref. 32) and the nine directions identified for rapid kurtosis estimation in ref. 22. CSF suppression (inversion recovery) was employed as recommended in ref. 33. Imaging parameters were TR=7200 ms, TE=116 ms, TI=2100 ms, 19 consecutive slices were acquired at isotropic resolution of 2.5 mm, matrix size 96×96, phase encoding direction A.-P. SNR~39 at b=0 evaluated using the mean signal across all tissue types. Anatomical data is also provided. This data consists of a 1 mm isotropic T1 weighted 3D MPRAGE acquired in the sagittal orientation, matrix size 256 × 256 × 176. Scan parameters were: TE=3.7 ms, TR=2430 ms, Inversion time (TI)=960 ms, Flip angle=9°, 2 averages.

## Data Records

The MRI data acquired in fixed rat spinal cord is provided raw (no smoothing or registration has been performed). The data is available in Rat_spinal_cord.zip which contains dMRI data and corresponding b-values and gradient table (Data Citation 1). Details are provided in Table 1. The raw MRI data acquired in fixed rat brain can be found in Rat_brain.zip which contains dMRI data and corresponding b-values and gradient table (Data Citation 1). Details are provided in Table 1. The raw MRI data acquired in the physical phantom is bundled in Phantom_data.zip which contains dMRI data with corresponding b-values and gradient table, and a structural scan in the same slice positions (Data Citation 1). Details are provided in Table 1. The raw MRI data acquired in human brain can be found in human_brain.zip which contains dMRI data with corresponding b-values and gradient tables, and a structural scan (Data Citation 1). Details are provided in Table 1.

## Usage Notes

All data is provided as matlab files (.mat) and in the nifti format. The human data is also provided as dicom files. The DWI data is provided raw, so no preprocessing has been applied to any of the data sets. This allows users to employ their preferred pre- and postprocessing combination and assess data quality, e.g. drift effects, SNR etc., directly. The data is stored either as a 4D matrix ordered with spatial dimensions first: x, y, slice, diffusion encoding. In these cases the diffusion encoding order corresponds to the accompanying vector of effective b-values (in ms μm−2) and gradient encoding directions (as normalized cartesian vectors). In case of a 5D data matrix the structure is x, y, slice, gradient encoding direction, b-value, with the order of encoding directions and b-values given by the corresponding vectors. For Bruker data method files are available on request (contact corresponding author).
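A short loading sketch for the nifti variants of the data sets; the file names below are hypothetical placeholders for whatever the downloaded archives actually contain:

```python
import numpy as np
import nibabel as nib

# Hypothetical file names -- substitute the actual names from the archives.
dwi = nib.load("rat_brain_dwi.nii").get_fdata()   # e.g. 4D: (x, y, slice, encoding)
bvals = np.loadtxt("bvals.txt")                   # effective b-values, ms/um^2
bvecs = np.loadtxt("bvecs.txt")                   # normalized gradient directions

# Sanity check: one b-value/direction per diffusion-encoded volume.
assert dwi.shape[-1] == bvals.size == bvecs.shape[0]
b0 = dwi[..., bvals == 0].mean(axis=-1)           # average the b=0 volumes
```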
## Additional Information

How to cite: Hansen, B. & Jespersen, S. N. Data for evaluation of fast kurtosis strategies, b-value optimization and exploration of diffusion MRI contrast. Sci. Data 3:160072 doi: 10.1038/sdata.2016.72 (2016).

## References

1. Basser, P. J., Mattiello, J. & LeBihan, D. Estimation of the effective self-diffusion tensor from the NMR spin echo. J. Magn. Reson. B 103, 247–254 (1994).
2. Basser, P. J., Mattiello, J. & LeBihan, D. MR diffusion tensor spectroscopy and imaging. Biophys. J. 66, 259–267 (1994).
3. Kiselev, V. G. in Diffusion MRI: theory, methods, and applications (ed. Jones, D. K.) 152–168 (Oxford University Press, 2011).
4. Jensen, J. H., Helpern, J. A., Ramani, A., Lu, H. & Kaczynski, K. Diffusional kurtosis imaging: the quantification of non-gaussian water diffusion by means of magnetic resonance imaging. Magn. Reson. Med. 53, 1432–1440 (2005).
5. Grinberg, F., Ciobanu, L., Farrher, E. & Shah, N. J. Diffusion kurtosis imaging and log-normal distribution function imaging enhance the visualisation of lesions in animal stroke models. NMR Biomed. 25, 1295–1304 (2012).
6. Hui, E. S., Du, F., Huang, S., Shen, Q. & Duong, T. Q. Spatiotemporal dynamics of diffusional kurtosis, mean diffusivity and perfusion changes in experimental stroke. Brain Res. 1451, 100–109 (2012).
7. Cheung, J. S., Wang, E., Lo, E. H. & Sun, P. Z. Stratification of heterogeneous diffusion MRI ischemic lesion with kurtosis imaging: evaluation of mean diffusion and kurtosis MRI mismatch in an animal model of transient focal ischemia. Stroke 43, 2252–2254 (2012).
8. Jensen, J. H. et al. Preliminary observations of increased diffusional kurtosis in human brain following recent cerebral infarction. NMR Biomed. 24, 452–457 (2011).
9. Hui, E. S. et al. Stroke assessment with diffusional kurtosis imaging. Stroke 43, 2968–2973 (2012).
10. Helpern, J. A. et al. Diffusional kurtosis imaging in acute human stroke. In Proceedings of the 17th Annual Meeting of ISMRM, Honolulu, Hawaii, 2009. p 3493.
11. Latt, J. et al. Diffusion time dependent kurtosis maps visualize ischemic lesions in stroke patients. In Proceedings of the 17th Annual Meeting of ISMRM, Honolulu, Hawaii, 2009. p 40.
12. Wang, J. J. et al. Parkinson disease: diagnostic utility of diffusion kurtosis imaging. Radiology 261, 210–217 (2011).
13. Gao, Y. et al. Diffusion abnormalities in temporal lobes of children with temporal lobe epilepsy: a preliminary diffusional kurtosis imaging study and comparison with diffusion tensor imaging. NMR Biomed. 25, 1369–1377 (2012).
14. Van Cauter, S. et al. Gliomas: diffusion kurtosis MR imaging in grading. Radiology 263, 492–501 (2012).
15. Raab, P., Hattingen, E., Franz, K., Zanella, F. E. & Lanfermann, H. Cerebral gliomas: diffusional kurtosis imaging analysis of microstructural differences. Radiology 254, 876–881 (2010).
16. Delgado y Palacios, R. et al.
Magnetic resonance imaging and spectroscopy reveal differential hippocampal changes in anhedonic and resilient subtypes of the chronic mild stress rat model. Biol. Psychiatry 70, 449–457 (2011).
17. Helpern, J. A. et al. Preliminary evidence of altered gray and white matter microstructural development in the frontal lobe of adolescents with attention-deficit hyperactivity disorder: a diffusional kurtosis imaging study. J. Magn. Reson. Imaging 33, 17–23 (2011).
18. Grossman, E. J. et al. Thalamus and cognitive impairment in mild traumatic brain injury: a diffusional kurtosis imaging study. J. Neurotrauma 29, 2318–2327 (2012).
19. Ostergaard, L. et al. Capillary transit time heterogeneity and flow-metabolism coupling after traumatic brain injury. J. Cereb. Blood Flow Metab. 34, 1585–1598 (2014).
20. Falangola, M. F. et al. Age-related non-Gaussian diffusion patterns in the prefrontal brain. J. Magn. Reson. Imaging 28, 1345–1350 (2008).
21. Cheung, M. M. et al. Does diffusion kurtosis imaging lead to better neural tissue characterization? A rodent brain maturation study. Neuroimage 45, 386–392 (2009).
22. Hansen, B., Lund, T. E., Sangill, R. & Jespersen, S. N. Experimentally and computationally fast method for estimation of a mean kurtosis. Magn. Reson. Med. 69, 1754–1760 (2013).
23. Hansen, B. et al. Experimental considerations for fast kurtosis imaging. Magn. Reson. Med. (epub ahead of print) (2015).
24. Hansen, B., Lund, T. E., Sangill, R. & Jespersen, S. N. Erratum: Experimentally and computationally fast method for estimation of a mean kurtosis (Magn. Reson. Med. 69, 1754–1760 (2013)). Magn. Reson. Med. 71, 2250 (2014).
25. Hansen, B. & Jespersen, S. N. Kurtosis fractional anisotropy, its contrast and estimation by proxy. Sci. Rep. 6, 23999 (2016).
26. Jespersen, S. N. Equivalence of double and single wave vector diffusion contrast at low diffusion weighting. NMR Biomed. 25, 813–818 (2012).
27. Basser, P. J. & Pierpaoli, C. Microstructural and physiological features of tissues elucidated by quantitative-diffusion-tensor MRI. J. Magn. Reson. B 111, 209–219 (1996).
28. Wesbey, G. E., Moseley, M. E. & Ehman, R. L. Translational molecular self-diffusion in magnetic resonance imaging. II. Measurement of the self-diffusion coefficient. Invest. Radiol. 19, 491–498 (1984).
29. Ferizi, U. et al. Diffusion MRI microstructure models with in vivo human brain Connectom data: results from a multi-group comparison. arXiv:1604.07287v1 [physics.med-ph] (2016).
30. Froeling, M., Tax, C. M., Vos, S. B., Luijten, P. R. & Leemans, A. "MASSIVE" brain dataset: Multiple Acquisitions for Standardization of Structural Imaging Validation and Evaluation. Magn. Reson. Med. (2016).
31. Brown, R. W., Cheng, Y.-C. N., Haacke, E. M., Thompson, M. R. & Venkatesan, R. Magnetic Resonance Imaging: Physical Principles and Sequence Design 2nd edn (John Wiley & Sons, 2014).
32. Hardin, R. H. & Sloane, N. J. A. McLaren's improved snub cube and other new spherical designs in three dimensions. Discrete Comput. Geom. 15, 429–441 (1996).
33. Jones, D. K., Knosche, T. R. & Turner, R. White matter integrity, fiber count, and other fallacies: the do's and don'ts of diffusion MRI. NeuroImage 73, 239–254 (2013).

### Data Citations
### Data Citations

1. Hansen, B. & Jespersen, S. N. Dryad https://doi.org/10.5061/dryad.9bc43 (2016).

## Acknowledgements

The authors were supported by the Danish Ministry of Science, Technology and Innovation's University Investment Grant (MINDLab). B.H. acknowledges support from NIH 1R01EB012874-01. S.N.J. acknowledges support from the Lundbeck Foundation R83-A7548 and the Simon Fougner Hartmann Familiefond. The authors wish to thank Lippert's Foundation and Korning's Foundation for financial support. The 9.4 T lab was made possible by funding from the Danish Research Council's Infrastructure program, the Velux Foundations, and the Department of Clinical Medicine, AU. We thank Torben E. Lund and Ryan Sangill for assistance with human data collection. We are grateful to Niels Chr. Nielsen for access to the 16.4 T system at InSpin, AU.

## Author information

### Contributions

S.N.J. developed the theory; B.H. designed and performed the experiments, carried out the simulations, data analysis and data preparation, and wrote the paper; both authors edited the paper.

### Corresponding author

Correspondence to Brian Hansen.

## Ethics declarations

### Competing interests

The authors declare no competing financial interests.

## Rights and permissions

This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0. Metadata associated with this Data Descriptor is available at http://www.nature.com/sdata/ and is released under the CC0 waiver to maximize reuse.
http://mathhelpforum.com/calculus/120827-integration-problem.html
# Math Help - Integration problem

1. ## Integration problem

Hi, I'm not sure how to do this problem: $\int \frac{1}{10}te^{0.1t} dt$

Thanks

2. Originally Posted by anothernewbie
Hi, I'm not sure how to do this problem: $\int \frac{1}{10}te^{0.1t} dt$
Thanks

Do you know integration by parts?

3. Read this: Integration by parts - Wikipedia, the free encyclopedia

For $\int uv' = uv - \int vu'$

In your case make $u=\frac{1}{10}t$ and $v' = e^{0.1t}$, now find $u'$ and $v$.

4. Right, finally found time to get back to this problem, this is where I'm at.

$\int \frac{1}{10}te^{0.1t} dt$

Make $v = \frac{1}{10}t$

So $\frac{dv}{dt} = \frac{1}{10}$

And make $\frac{du}{dt} = e^{0.1t}$

So $u = 10e^{0.1t}$

Using $\int uv' = uv - \int vu'$

I get:

$\int \frac{1}{10}t \times e^{0.1t} = 10e^{0.1t} \times \frac{1}{10}t - \int 10e^{0.1t} \times \frac{1}{10}$

Am I doing this right or am I on totally the wrong track? God I suck at maths

5. Originally Posted by anothernewbie
$\int uv' = uv - \int vu'$
I get:
$\int \frac{1}{10}t \times e^{0.1t} = 10e^{0.1t} \times \frac{1}{10}t - \int 10e^{0.1t} \times \frac{1}{10}$
Am I doing this right or am I on totally the wrong track?

You are doing great, now integrate on the RHS and you are done.

Originally Posted by anothernewbie
God I suck at maths

Not this time you didn't, well done!

6. Originally Posted by pickslides
You are doing great, now integrate on the RHS and you are done.
Not this time you didn't, well done!

Woop, thanks, will give the rest a go later on today. Thanks
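For completeness (this final step is not worked out in the thread), integrating the remaining term on the RHS gives

$\int \frac{1}{10}te^{0.1t}\,dt = te^{0.1t} - \int e^{0.1t}\,dt = te^{0.1t} - 10e^{0.1t} + C$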
https://www.sawaal.com/exam-preparation/bank-exams-questions-and-answers.htm?sort=rated
# Bank Exams Questions

Q: What was the day on 15th August 1947?

A) Friday B) Saturday C) Sunday D) Thursday

Explanation: 15 Aug 1947 = (1946 years + period from 1.1.1947 to 15.8.1947)

Odd days in 1600 years = 0. Odd days in 300 years = 1. 46 years = (35 ordinary years + 11 leap years) = (35 × 1 + 11 × 2) = 57 days = (8 weeks + 1 day) = 1 odd day.

Jan. + Feb. + Mar. + Apr. + May + Jun. + Jul. + Aug. = (31 + 28 + 31 + 30 + 31 + 30 + 31 + 15) = 227 days = (32 weeks + 3 days) = 3 odd days.

Total number of odd days = (0 + 1 + 1 + 3) = 5 odd days. Hence, as the number of odd days is 5, the given day is Friday.

Q: Today is Monday. After 61 days, it will be:

A) Tuesday B) Monday C) Sunday D) Saturday

Explanation: Each day of the week is repeated after 7 days. So, after 63 days, it will be Monday again; after 61 days (two days earlier), it will be Saturday.

Q: The Hardest Logic Puzzle Ever? If a giraffe has two eyes, a monkey has two eyes, and an elephant has two eyes, how many eyes do we have?

A) 3 B) 4 C) 1 D) 2

Explanation: 4 eyes. The question asks how many eyes *we* have, so it includes both the person asking and the person answering. Since the conversation is between 2 people, there are 4 eyes.

Q: Crack the code & unlock the key?

A) 062 B) 602 C) 042 D) 204

Explanation: From all the hints given, only 042 satisfies them, and it unlocks the key.

Q: A problem is given to three students whose chances of solving it are 1/2, 1/3 and 1/4 respectively. What is the probability that the problem will be solved?

A) 1/4 B) 1/2 C) 3/4 D) 7/12

Explanation: Let A, B, C be the respective events of solving the problem and A̅, B̅, C̅ the respective events of not solving it. Then A, B, C are independent events.

Now, P(A) = 1/2, P(B) = 1/3 and P(C) = 1/4, so P(A̅) = 1/2, P(B̅) = 2/3 and P(C̅) = 3/4.

∴ P(none solves the problem) = P(A̅ ∩ B̅ ∩ C̅) = P(A̅)P(B̅)P(C̅) = (1/2) × (2/3) × (3/4) = 1/4.

Hence, P(the problem will be solved) = 1 − P(none solves the problem) = 1 − 1/4 = 3/4.

Q: If each side of a square is increased by 25%, find the percentage change in its area.

A) 65.25 B) 56.25 C) 65 D) 56

Explanation: Let each side of the square be a; then area = a². As the side is increased by 25%, the new side = 125a/100 = 5a/4, and the new area = (5a/4)² = 25a²/16.

Increase in area = 25a²/16 − a² = 9a²/16. Increase % = (9a²/16)/a² × 100% = 56.25%.

Q: If 20% of a = b, then b% of 20 is the same as:

A) 4% of a B) 6% of a C) 8% of a D) 10% of a

Explanation: 20% of a = b ⇒ (20/100) × a = b. Then b% of 20 = (b/100) × 20 = ((20a/100)/100) × 20 = 4a/100 = 4% of a.

Q: A clock is set right at 8 a.m. The clock gains 10 minutes in 24 hours. What will be the true time when the clock indicates 1 p.m. on the following day?

A) 48 min. past 12 B) 46 min. past 12 C) 45 min. past 12 D) 47 min. past 12

Explanation: Time from 8 a.m. on a day to 1 p.m. on the following day = 29 hours. Now 24 hours 10 min. of this clock = 24 hours of the correct clock, i.e. 145/6 hrs of this clock = 24 hrs of the correct clock. So 29 hours of this clock = (24 × 6/145 × 29) hrs of the correct clock = 28 hrs 48 min. of the correct clock. Therefore, the true time is 28 hrs 48 min. after 8 a.m., which is 48 min. past 12.
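The calendar answer above can be sanity-checked in code. Below is a minimal C sketch using Zeller's congruence, a different technique from the odd-days counting used in the explanation; the function name and day-name table are illustrative, not from the original page.

```c
#include <stdio.h>

/* Zeller's congruence (Gregorian): returns 0=Saturday, 1=Sunday, ..., 6=Friday. */
int day_of_week(int day, int month, int year)
{
    if (month < 3) {    /* January and February are treated as months 13 and 14 */
        month += 12;    /* of the previous year */
        year -= 1;
    }
    int k = year % 100; /* year within the century */
    int j = year / 100; /* zero-based century */
    return (day + 13 * (month + 1) / 5 + k + k / 4 + j / 4 + 5 * j) % 7;
}

int main(void)
{
    const char *names[] = { "Saturday", "Sunday", "Monday", "Tuesday",
                            "Wednesday", "Thursday", "Friday" };
    printf("%s\n", names[day_of_week(15, 8, 1947)]); /* prints Friday */
    return 0;
}
```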
https://www.gamedev.net/forums/topic/321952-light-direction-for-bumpmapping/
# Light direction for Bumpmapping

## Recommended Posts

Hello, I have created a universe: the sun is at 0/0/0, with 30 planets rotating around it. Now I added some DOT3 bumpmapping, but I am not sure about the correct light direction. So is the following code correct, when 0/0/0 is the sun, which is the light source, and planet3d is the current planet?

m_vLight.x := planet3d.x;
m_vLight.y := planet3d.y;
m_vLight.z := planet3d.z;
D3DXVec3Normalize(m_vLight, m_vLight);
dwFactor := VectortoRGBA(m_vLight, 0.0);
SetTexture(0, planet3d.bumptexture);
SetRenderState(D3DRS_TEXTUREFACTOR, dwFactor);
SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_TFACTOR);
SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_DOTPRODUCT3);

Thanks a lot, Firle

The light vector needs to be in the same space as the normals encoded in your normal map texture. The dot product operation doesn't make any sense for lighting if the vectors aren't in the same space (or at least a related space). So if the normals in the normal map are in object space, your light vector should be in object space too. Light vectors are usually specified in world space. To rotate the light vector from world space into object space, multiply it by the inverse of the top-left 3x3 portion of the world matrix used for the object ("world matrix" really means "object space to world space matrix"). Tip: if the world matrix for your object is only made up of rotations and translations, transposing it is the same as (but cheaper than) inverting it. Only the top-left 3x3 part of the matrix should be used to transform the light vector, because the vector represents a direction rather than a position. If the normals in the normal map are in a different space, you must transform the light vector into that space (if that space is tangent space, then the vector will be different for every vertex, so you'll need to transform it on a per-vertex basis). Also, do remember that the light vector should be negated (i.e. for a directional light representing the sun, the vector should point towards the sun, not from it).
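Putting that advice together: the per-planet light vector should be the negated, normalized planet position (so it points toward the sun at the origin), then rotated into the space the normal map uses. A minimal C-style sketch, assuming object-space normals and a rotation-only world matrix; the vector type and helper names here are illustrative, not part of the D3DX API (compile with -lm):

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Normalize a vector in place (no-op on near-zero vectors). */
static void vec3_normalize(Vec3 *v)
{
    float len = sqrtf(v->x * v->x + v->y * v->y + v->z * v->z);
    if (len > 1e-6f) { v->x /= len; v->y /= len; v->z /= len; }
}

/* Multiply a direction by the transpose of a 3x3 rotation matrix m;
 * for a pure rotation the transpose equals the inverse, as the reply
 * above points out. */
static Vec3 rotate_by_transpose(const float m[3][3], Vec3 v)
{
    Vec3 r;
    r.x = m[0][0] * v.x + m[1][0] * v.y + m[2][0] * v.z;
    r.y = m[0][1] * v.x + m[1][1] * v.y + m[2][1] * v.z;
    r.z = m[0][2] * v.x + m[1][2] * v.y + m[2][2] * v.z;
    return r;
}

/* Object-space light direction for a sun at the origin: negate the
 * planet position so the vector points toward the sun, then rotate it
 * from world space into the planet's object space. */
Vec3 object_space_light(Vec3 planetPos, const float worldRot[3][3])
{
    Vec3 toSun = { -planetPos.x, -planetPos.y, -planetPos.z };
    vec3_normalize(&toSun);
    return rotate_by_transpose(worldRot, toSun);
}
```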
https://8chan.se/t/res/9.html
/t/ - Technology

Discussion of Technology

Programming Thread: Up and Running Edition Anonymous 04/27/2020 (Mon) 19:03:16 No. 9

Hey Anon, Learn to Code! There's a bunch of free resources for learning to program. Come here to ask for advice or to discuss current projects.

Download the complete Gentooman's Library: https://g.sicp.me/books/

Visit MIT OpenCourseware on (((YouTube))) https://www.youtube.com/user/MIT

Or take one of these free online courses being offered by universities right now: https://www.freecodecamp.org/news/free-courses-top-cs-universities/

>>9
DON'T TELL ME WHAT TO DO
Learning to code is easy. Learning to code something with a degree of complexity well is a different matter. Where are the resources for that?

>>15
I've added a few more advanced resources to the OP.
>code something with a degree of complexity
If you have something specific in mind, post a request and I'll dig up some resources. If you just want to learn how to do software engineering, there really is no substitute for actually joining a project or getting a job/internship, unfortunately. If you just want a chance to get beyond learning a language and repeating tutorials, then the answer is to understand data structures/algorithms and to actually build a project for yourself.

>>16
Yes, more on the engineering side. Naturally you only get good at it by actually working, but are there established dos and don'ts?

>>64
There's what they teach you in a software engineering course about good and bad practices, but the actual practices change by company, and in my experience insisting one way is "the right way" to do something will just draw ire from others on the team, especially those with seniority. Most software is developed haphazardly based on internal guidelines. The only universal rules would probably be
>always use version control
>documentation helps
>set realistic goals and milestones
Even something recommended like code reviews or style guidelines can be abused and turned into a hindrance. Good software engineering practices also vary by language. The way you want to package a Python product is much different from a C++ product, even if they have the same development pipeline. If you want recommendations, you should honestly focus on the stupid DevOps bullshit. Make sure you're comfortable with git. Write test cases. Configure a Continuous Integration pipeline to automatically build your code and run tests. These things, when done properly, make managing large codebases easier. The cruft and technical debt of a small script is very little. Having good regression and unit tests helps identify problems as you make changes. If you want to practice, you're probably better off writing toy software that mimics actual products. Can you design a word processor? How about an image editor?
These things will take more time than most of the project ideas included in the OP, but you will encounter the sort of problems real products face.

I don't want to spoil my brains with that garbage. I actually ordered the SICP book because I'm a music autist and want to learn MIT Scheme for Lilypond music engraving. It's either that or paying some fag 2,000 bucks for what I need, and fuck that shit.

>>92
Good luck, anon! We'll be here if you need any help. By the way, you can find the full text of the book along with assignments, code samples, etc. on the MIT Press website. https://mitpress.mit.edu/sites/default/files/sicp/index.html

>>93
Ooo, has the instructor side too. I saw it for sale but chose not to buy it. Neat.

What are some basic programming concepts that you failed to grasp when you were starting out? I completely misunderstood the utility of classes and essentially only used them to separate variables into several .cpp files. I didn't think to store related variables in one class until way later, when I realized that it'd be convenient. The closest I came to that was storing related values of the same type in a vector, and eventually I thought "hey, there should be a way to access related variables by their name, shouldn't there?" and figured out that I'm a dumbass. That sort of comes with the territory of how I learned shit: learning a new language plus several libraries all at once from several disconnected jewtube tutorials and trying to get shit to work while googling errors until things worked. No, I'm totally not asking to see if you'll end up naming more things I might have missed. What makes you think that? Haha.

>>478
Mostly struggling with APIs or bindings. Over time I've realized that documentation isn't great, or that I was using someone's half-finished project advertised as a complete solution. Sometimes libraries are just stuck in a different decade. A lot of them do their own thing, which might clash with the base language or other libraries. Not entirely my fault, but I've never found interfacing with other software as easy as it should be. Never struggled much with the code itself. Once I learn to read someone's style, it opens up to me. Most of my early learning curve was spent on tooling. Git takes time. Debuggers take time. Editors take time. Time to learn the basics. Time to learn how you can customize them. Time to build a workflow. Time to learn the idiomatic way to write code. Also, I didn't learn it the hard way (thankfully), but writing solid tests is worth it. Especially if you can configure a Continuous Integration pipeline. Not a big deal for small personal projects, but I find even a few unit tests to ensure my scripts haven't broken serve as a good sanity check.

>>478
Probably the importance of using 3rd party libraries. For example, one day I wanted to crawl a website, and instead of using libcurl or some other library to do the webpage fetching I built my own engine using raw sockets and an HTTP parser I made. It was a piece of shit too, but it actually did its job. Also related, for some reason I find object oriented design very unintuitive and I often struggle a lot with it. What's even more funny is that when I code in C, every time I do it I end up writing object oriented C and I love the code that results from that. I don't understand why C++ gives me so much trouble.

>>491
>I got taught the typical "a cat is a mammal, which is an animal" kind of OOP they teach everywhere.
It's fine for teaching inheritance.
It's just utterly useless as a model for how to write a program. You don't necessarily need inheritance to use objects, and your objects don't need to be like real objects. They're just containers of similar variables with a defined interface. But that still doesn't really cover how they're used in industry, probably for two major reasons. First, because describing how to use them properly is nearly impossible. And secondly, because there are lots of "how not to do it" examples in industry where people have created horrible abuses of object-oriented programming, decided they were a genius, and then slapped a name on it and tried to convince others to do the same. And some people listened. And that's how you end up with all these custom typedefs to hybrid classes that all inherit from each other, with the sole goal of operating a factory that produces custom objects at runtime, instead of just designing the classes you actually needed to rigidly cover a set part of the spec.

what is the best internet browser?

>>493
>>3
Just realized there's no name field?

is college a good idea if your main goal for learning to program is to get a job?

>>498
I see four ways of getting a programming job. There are others, if you're lucky.
>Option A
>No college
>Either you're really good or you know the right people
>get job
>Option B
>be autist
>College
>git gud
>no friends
>do FOSS/side projects
>send 100s of applications
>get job
>Option C
>be normalfag
>College
>network (parties and bars)
>fuck bitches
>network (volunteer and part-time)
>fuck bitches
>get job
>Option D
>all of the above
>genius, never study, GPA 4
>party boy, zigachad
>FAGMAN internships, 5 years experience before graduation
>Is actually the hidden master of Linus and Theo
t. collegefag

>>498
It's a waste of time if you can avoid it. My "computer science" college was a joke and full of downers. Literally being able to read a man page and knowing to put #define nigger_h #ifdef nigger_h at the top of your file already puts you an order of magnitude above everyone else for the first 3 years. I dropped out near the end because I got a job paying good. I couldn't be fucked to finish it, as all it would have taught was some more basic programming shit, and it was pretty obvious at that point the "how are you gonna get a job without le diploma" meme is just that. This was 10 years ago.

>>498
College can definitely help you find work. Most of the material was shit, but I did get two internships through my program and made some pretty good contacts. Research whether your college has a work experience program. HOWEVER, you need to be aware of the job market in the areas where you want to work. (Which may just be the city you live in.) I'm in an awkward position now because all my work experience is in embedded and dsp, but most of the jobs in my city are webdev. I have been unable to get programming work for the past few months, probably due to the economic downturn and my relatively niche field. My only comfort is that I am not a webdev.

>>513
>#define nigger_h #ifdef nigger_h
that's not an include guard
<#ifndef nigger_h
<#define nigger_h
<//coad
<#endif
nigger

>>493
>what is the best internet browser?
Don't need fancy formatting? Use some terminal-based browser like lynx, w3m, elinks, or something.
>Want CSS
On anything GUI, keep Javascript turned off. Use Tor browser.
>Don't care about being anonymous 100% of the time, want reprieve from captchas
Brave.
Chromium-based, so it's highly compliant, de-googled, privacy-focused, and pays you crypto just for being on the internet. It also includes a Tor privacy mode so you don't have to change proxy settings whenever you want to use it over Tor (but it's probably best practice to just use Tor browser when using Tor, even though I just use Brave Tor windows lmao).

>>532
>Brave. Chromium-based so it's highly compliant
Made up issue. Literally any bloatware browser is compliant enough for the jewniggernet.
>1 BAT has been deposited into your account.

>>533
Are you capable of speaking without using a buzzword every second?

So, I picked up C three days ago. I am quite new to programming, with only some very small experience here and there, but good experience in MATLAB. I have two questions. In max 1 (which actually returns the sum, it should be sum()), what exactly is happening with the pointers? I make a double array, and specify its three elements. I pass it to the function, and this is where my understanding stops: I think that the * at max() turns the a in main() into a pointer, which is then passed to max(), where I can use it as x[] to access values? In the second program my understanding is even worse: in main() (the commented code) I create a char array, which is filled with cmMod(111111) for example, which returns 1 km, 111 m and 11 cm. I can use this array to access its values, but why is the star then present at char *r? Secondly, why is the * present at *cmMod()? This would cause the function to pass a pointer to my array, right? Finally, what does if(!r) return NULL; do?

>>569
>I think that the * at max() turns the a in main() into a pointer
No, the * in max interprets whatever the function is receiving in that parameter as a pointer. You're probably getting confused because you don't see a * in main. In your example, you're using an array, and arrays have a very similar behavior and syntax to pointers. In particular, what's happening here is that the name of the pointer or array, when used, yields the memory address it points to, so max is being passed that memory address and using it as a pointer. To give an example, you could have:

```c
int main(void)
{
    double a = 1.5;
    printf("%f\n", max(&a, 1));
    return 0;
}
```

And this would also be valid. Despite the fact that a is not a pointer, you're passing its memory address to max (because of using &), and max is interpreting it as a pointer. This would not work without the &, because then you'd be passing a value to the function and the function expects a pointer, so the compiler would complain.
>why is the star then present at char *r? Secondly, why is the * present at *cmMod()?
Please use line numbers the next time, it makes things easier. The * at declarations means that you're declaring pointers. Declaring a pointer means that the variable is meant to contain a memory address, and that the variable (pointer) allows the syntax to examine and change that memory address (pointer syntax). If the * is not present at the declaration, the variable is meant to contain values (integer, floating point, or other values). So in this case the *s are present because you're manipulating memory addresses (first returned by malloc() and assigned to cmMod::r, then returned by cmMod() and assigned to main::r). This applies both to the variable declaration (char *someVariable) and the function declaration (char *someFunction() for instance).
>Finally, what does if(!r) return NULL; do?
NULL is a value that is supposed to be an invalid memory address.
When you compare a pointer to NULL, you're checking whether the pointer contains a memory address that's invalid. Here, the code is checking whether malloc() returned a valid memory address or not. If the system is low on memory, or you request a fuckhuge amount of memory (3 terabytes for instance), malloc() can only return NULL and hope you catch in your code that it couldn't allocate the memory you requested. Fuck, I feel like making some code now.

>>570
Holy shit, thank you so much for the detailed explanation! I'll have to look over it a few times, but this already really helps a ton, dude. I've been doing random exercises from websites for now, and I am already somewhat comfortable using the syntax and structure. It feels really powerful and straightforward, and seeing as there is a large chance I'll be using code in my career, this summer project of mine is really helpful.

>>528
>I'm in an awkward position now because all my work experience is in embedded and dsp, but most of the jobs in my city are webdev. I have been unable to get programming work for the past few months, probably due to the economic downturn and my relatively niche field. My only comfort is that I am not a webdev.
I know those feels.

>>571
Always happy to help someone get into C.
>Feels really powerful and straight-forward
It is, and it's also pretty consistent. I love it. One thing I noticed about my previous reply: >>570
>Finally, what does if(!r) return NULL; do?
What I wrote before is correct, but I didn't notice the comparison was made as "if(!r)", which is a niggerish way to do it. Because in most systems NULL equals 0, and because 0 equals boolean false in C, "!pointer" is most of the time equivalent to "pointer==NULL", but there are systems where 0 is a valid memory address, which is why NULL is implementation defined, so hardcoding it to 0 is not a good idea.

Yet another (probably) retarded script: a function to calculate energy costs. The first 50 units cost 0.5 /u, the next 100 units 0.75 /u, the next 100 units 1.2 /u, and all units beyond will cost 1.5 /u; an additional surcharge of 20% is added. My question is not so much about the correctness (it shits out all the correct (edge) values, I manually verified) but more about lines 203-205 and 206-209. I want this script to be flexible, so I wanted to add 'delims' numbers of delimiters, which can be filled in independently of cost and unit limit. Finally, a final cost per unit is given and a surcharge. My problem lies in the use of arrays to store these values in. Initially, delims was just int delims = 3;, line 203 was int *unitdelim[DELIMS];, lines 204-205 were uncommented, but compilation failed (expected expression before '{' token; variable-sized object may not be initialized) in the array value assignment without using a loop - which was given as the solution on some websites? So, if I want an elegant solution without bombing my memory or leaving any remains after the function has been called, how would I approach this?

>>572
Got it.

>>570
Also, I now understand nearly exactly what was going on! I described to myself out loud exactly what is happening, and thinking of the * as a special flag for handling said pointer did the trick: at last, I truly see how the addresses and values are manipulated and passed.

>>569
What >>570 forgot to mention is that in C, arrays decay to pointers when you pass them as arguments to a function.
That is, if you have an array of three integers and pass it as an argument to a function, all the function gets is a pointer to the first element of the array. The function does not know how long that sequence is, so you either need to pass the size as an extra argument or have a sentinel element (e.g. have the last element be a negative number when only positive numbers make sense). This array decay is one of the ugly parts of C.

>>574
I don't understand what you mean, but I noticed a problem in your code: you need to tell malloc exactly how many bytes you want, not how many elements you want:
<unitdelim = (int *) malloc(delims * sizeof(int));
Keep in mind that if you reserve memory that way, it will be allocated on the heap, not the stack, so you need to manually release it as well. The array definition above is allocated on the stack and does not need to be released manually. Unless you need your data to persist after a function call, you should prefer the stack. I also noticed that you did not declare any type for the allocated data. You need to declare the variable as a pointer to int. Note that pointers and arrays are not the same: you cannot assign a memory address to an array variable. Arrays can decay to pointers, but they are not pointers, and you cannot assign pointers to them. Are you learning C from websites or are you using a book? Websites can be really shit; I recommend the K&R C book, it's quite small for a programming language reference book, and very well written. With websites the quality is usually what you paid for.

>>572
>but there are systems where 0 is a valid memory address which is why NULL is implementation defined, so hardcoding it to 0 is not a good idea.
I'll need a source for that one. In my copy of K&R C (2nd edition), section 5.4 it says
>C guarantees that zero is never a valid address for data, so a return value of zero can be used to signal an abnormal event, in this case, no space.
>Pointers and integers are not interchangeable. Zero is the sole exception: the constant zero may be assigned to a pointer, and a pointer may be compared with the constant zero. The symbolic constant NULL is often used in place of zero, as a mnemonic to indicate more clearly that this is a special value for a pointer.

>>576
Let's try that then. Also, yeah, one of the advantages of MATLAB is that it is perfect for very mathematical things: matrix, vector and differential equation stuff are handled more conveniently.

>>576
>This array decay is one of the ugly parts of C.
It can be pretty ugly, it's bitten me in the ass a bunch of times. In any case, anon was handling it correctly.

>>574
>I want this script to be flexible
This is good practice. I think the easiest way to make this work would be to not declare a size for the array in the initialization and let C do this for you. Then you calculate (at compile time) the size of the arrays and assign it to delims:

```c
int unitdelim[] = {50, 100, 100};
double unitcosts[] = {0.5, 0.75, 1.2};
const int delimQty = sizeof(unitdelim) / sizeof(unitdelim[0]);
```

What I don't like about this solution is that the arrays are disjointed. If you want to change your code later on, you will have to change two independent lines of code in a concerted way or your program will have bugs, and it's good practice to minimize this. Regarding the solutions you proposed:
>which was given as the solution on some websites?
C99 mandated that variable-sized arrays like the ones you tried to use were okay according to the standard.
Some compilers (MSVC :^)) didn't think it was necessary to comply with that, because why comply with standards. C11 removed this requirement because it was too difficult for some to implement in their compilers and certify them as standard compliant, so VLAs were left as optional, and you're left with a situation to resolve. Which you probably tried to do with
>#define delims 3
Unfortunately #define will define delims for the whole rest of the file, which is undesirable. Slightly more elegant is to use an enum like a nigger: enum {delims = 3};. There aren't a lot of other elegant ways to solve this in a nice manner.
>int *unitdelim[DELIMS]
>unitdelim = (int *) malloc(delims)
There are several problems with this. Like anon commented, you're allocating only delims bytes, but you meant to allocate delims elements (of a certain type). The other problem is that you're declaring unitdelim as an array of pointers. This means each slot of the array will contain a pointer, not a value, and a value is what you want. Furthermore, besides the error of trying to assign to the array like anon explained, there's a conceptual error: malloc returns one memory address for a whole block of memory of the size you requested, so you'd be assigning one of the pointers in the array, not all 3, meaning the other 2 would still be uninitialized and would likely crash your program on access. See pic related for how things would look in memory after this had executed, assuming pointer sizes and ints of 4 bytes.
>unitdelim = {50, 100, 200};
In C, initializer lists may only be used on initialization (in the same line you define your variables), like in #203 on the image.

>>575
das good.

>>576
>I'll need a source for that one. In my copy of K&R C (2nd edition), section 5.4 it says
The NULL part is easy. Here's a link to one of the latest revisions of C11 (revisions are available for free): http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1570.pdf
>7.19 Common definitions <stddef.h>
>The macros are
>NULL
>which expands to an implementation-defined null pointer constant
So the standard doesn't mandate that NULL is 0. The other part (0 being a valid memory address) comes from experience, as there are embedded microcontrollers where this happens (they have 0 as a valid memory address), but to be honest I assumed this was enough to conclude it was kosher according to the standard and didn't actually double check it. I actually had to handle a case of this myself, as I was brought on board a project that was on fire and used a microcontroller where this happened. The original programmer had created several bugs where 0-initialized pointers were being passed to functions and then indirected for write operations. The program was not segfaulting (because the memory was A-OK to write according to the memory model of the device), so the bug had always been in the firmware, silently corrupting memory away, which is kind of funny considering the device was making the company several million dollars a day in revenue.

>>576
>Arrays decay to pointers
The idea that it "decays" is false. Arrays are always stored as a memory address. There is simply no way to pass them by value, since even an array stored locally is actually just a "pointer". You can just as easily perform range operations on any pointer. ptr[n] is just syntactic sugar for "address at ptr + n * sizeof(int)". Understanding that you're just accessing straight memory addresses is actually quite convenient.
You're not losing anything when you pass it in to a function, because naked arrays already fail to offer any convenience functions. In C++ you could always pass in a vector or some other data structure, but you'd be wise to do that by reference as well.

I hope I am not spamming the board to death; at least I'll give it some traffic.

>>576
I would have declared a datatype in the first line, as described in >>574.

>>576
>>578
>malloc
Makes sense. I was also in doubt about using calloc, since it takes a second argument with the datatype size. That clears it up, mostly. Two further questions: in that case, why does max2.png in >>569 work? I only reserve three bytes (?), while I place three ints in it? The function does return the correct values. Second,
>Keep in mind that if you reserve memory that way, it will be allocated on the heap, not the stack, so you need to manually release it as well. The array definition above is allocated on the stack and does not need to be released manually. Unless you need your data to persist after a function call, you should prefer the stack.
So does malloc/calloc allocate on the stack or the heap? This sentence conflicts a little. Then, how would I, or when should I, release the memory when using such a function?
>Book
Nope, the first tutorial I used was https://www.youtube.com/watch?v=KJgsSFOSQv0, and I am now doing Pajeet's big book of exercises at https://codeforwin.org/c-programming-examples-exercises-solutions-beginners. Since I have some programming knowledge from uni, am somewhat versed in how computers work on a physical level, and have calculus-level mathematical knowledge, I was not prompted to read a book yet. I might download or find a copy of the K&R sometime, if it is exceptionally helpful.

>>578
>calculating at compile time
This would probably be the most elegant solution for the application I am using, since you would just have to change a single predefined number and would not have to worry about the later definition.
>disjointed arrays
Do you mean concatenating the strings? Or a struct? The variables have no mathematical relations, as they are predefined by the "electrical company".
>declaring an array of pointers
I see what happens now. Yeah, it should just be an array of values I use for the price ranges. I should write out what happens on paper, that'll help.
>using enums
Which is not really that convenient of a solution, no? Finally, for future reference: this means that I cannot return a variable length array from a function, right? I foresee that it might become a problem when doing variable timestep integration.

>>580
>releasing memory
I have an example function for this; image related. I just want to retrieve the first and last digit, but after the function has been called, I stored the variables in my main "variable space(???)" and I wouldn't want the original values to be left somewhere taking up memory.

>>580
>I only reserve three bytes (?),
This is correct.
>while I place three ints in it?
The compiler automatically converted them to char. Read up on type conversions. Also increase the warning level in your compiler (/W4 in MSVC, -Wall in GCC/Clang) to have it bitch at you for such conversions.
Ideally your data types would match; otherwise it's good to have the compiler warn you when they don't, because type conversions are very tricky, and if something doesn't match, it's a good idea to review it (and after reviewing it and confirming it will do what you want, you can use type casting to tell the compiler to stop warning you for that occurrence). Looking at the function closer, it looks like r[2] might overflow for cm > 12799999 (as 12799999/100000 gives 127, which is the maximum positive value a signed char can hold).
>So does malloc/calloc allocate on the stack or the heap?
On the heap. You allocate on the stack by declaring local variables (uint8_t bigassBuffer[0xFF]) or using alloca(). The latter I'm not sure is standard, and there's seldom a reason to use it. Also keep in mind the stack is not the heap: the stack is designed to hold enough memory to pass arguments to functions and hold local variables (so a bunch of bytes), don't abuse it. On a PC you will probably have plenty of stack, but that will not be the case on every device. On the other hand, stack memory is allocated extremely fast, while dynamic memory is a huge headache and not available on more modest devices, so you have to reach a compromise.
>how would I or when should I release the memory when using such a function?
See free().
>The variables have no mathematical relations
They have an important relation in the program. Consider a large function with a couple hundred lines of code and a big preamble of variable definitions, and let's say that you haven't touched this code in a year or two and then you want to make some changes to it. You will have to remember that if you add another value to one of the arrays, you also have to add a value to the other one, and if you don't, then when you go through the arrays in the for you might go out of bounds. This is undesirable, so ideally the code would be structured in a way that wouldn't let you fuck up like this.
>Which is not really that convenient of a solution no?
Ideally you would just declare the flexible array, but it's not portable, so you have to compromise.
>this means that I can not return a variable length array from a function right?
You can do a couple of things, like creating a structure with the pointer and a size member and passing it as a (pointer) parameter. Something like this, for instance:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

struct param {
    uint32_t size;
    char *ptr;
};

void func(struct param *p, int number)
{
    static const char str[] = "NIGGER";
    const uint32_t len = (uint32_t)strlen(str);
    const uint32_t size = len * number + 1; /* consider terminating null */
    int count;

    if (p == NULL)
        return;
    p->size = size;
    p->ptr = malloc(size);
    if (p->ptr == NULL)
        return;
    p->ptr[0] = '\0';
    for (count = 0; count < number; count++)
        strcat(p->ptr, str);
}

int main(void)
{
    struct param p;
    func(&p, 30);
    printf("%s (%u chars)", p.ptr, p.size - 1);
    free(p.ptr);
    p.ptr = NULL;
    p.size = 0;
    return 0;
}
```

I did this while bored in a meeting, so I didn't compile or review it, and I did it with half my mind on something else, so don't copy and paste like a nigger without making sure you understand if it works.

>>581
When you don't want it anymore, free() it, and after that do not use the memory locations that you freed. This only needs to be done for dynamic memory (which you get through malloc()). When you have a fixed number of elements in your array like there, however, you usually pass down the pointer/array instead of returning it from the called function, which is much cleaner.
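A quick sketch of that cleaner pattern, loosely modeled on the cmMod example discussed earlier in the thread (the names here are illustrative, not the original code): the caller owns the buffer, so nothing has to be malloc'd or freed:

```c
#include <stdio.h>

/* Split a value in cm into km/m/cm, writing into a buffer the caller owns. */
void cm_split(long cm, long out[3])
{
    out[0] = cm / 100000;       /* km */
    out[1] = (cm / 100) % 1000; /* m  */
    out[2] = cm % 100;          /* cm */
}

int main(void)
{
    long parts[3];              /* storage lives in the caller's frame */
    cm_split(111111, parts);
    printf("%ld km, %ld m, %ld cm\n", parts[0], parts[1], parts[2]);
    return 0;
}
```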
>>580
Small note: my C reference states that malloc() accepts the amount of ints if I specify it to be an array of ints, so malloc(3) will correctly give me an array of 3 values... I think.

>>584
Never mind this one, didn't update to see >>582 yet.

>>584
>malloc() accepts the amount of ints
malloc() accepts the number of bytes. An int can be any size and is generally determined by your CPU's architecture. The C spec only dictates that short <= int <= long <= long long, so all of them could be 32 bits and it would meet the spec. Do not make assumptions about the size of int. Char is one byte pretty much universally, though. For the sake of readability, and to ensure you get the correct size in bytes for an array, you should probably use:

```c
int *arr = malloc(length * sizeof(int));
```

This will calculate the size of the data type during compilation and then perform a simple multiplication at runtime to create the correct amount of space on the heap.

So: I wrote my first piece of code that I am actually proud of. Take a look at this exercise I did, from a Pajeet website. Print the following number pattern for (N=x=y, square) or (x, y rectangle):

4444444
4333334
4322234
4321234
4322234
4333334
4444444

And so on. I knew for a fact and from a little experience that Pajeets and Bugmen have awful coding skills, but today I experienced it first-hand. First: take a guess what the "solution" looked like (don't look it up, that's cheating), and second: how would you approach this? I actually found a super elegant solution, I'll share it tomorrow. Yes, it is a trivial problem and truly a beginner level exercise, but I was still surprised how well it worked.

>>602
I have a solution for squares, but how should a rectangle look?

>>604
and how should even squares (4x4, 6x6, etc.) look?

>>605
ehh fuck it here's my solution in c, it's in "functional style". It made sense when it was just odd squares, but making it work with even rectangles really made it confusing. There is probably a dumber solution that is better (maybe just some clever string manipulation).

```c
#include <stdio.h>
#include <stdlib.h>

#define MAX(s, t) ((s) > (t) ? (s) : (t))

int main(int argc, char *argv[])
{
    int r, c, nr, nc, x;

    if (argc < 3) {
        fprintf(stderr, "Computers started going to shit when they made them for niggers.\n");
        return 1;
    }
    nr = atoi(argv[1]);
    nc = atoi(argv[2]);
    for (r = 0; r < nr; r++) {
        for (c = 0; c < nc; c++) {
            x = MAX(MAX(0, abs(nr - r * 2 - 1) - MAX(0, nr - nc)),
                    MAX(0, abs(nc - c * 2 - 1) - MAX(0, nc - nr)));
            printf("%x", x / 2 + 1);
        }
        printf("\n");
    }
    return 0;
}
```

>>604
>>605
Up to you, or you leave them undefined. My solution is always x/y symmetric, so even squares and rectangles could look like this:

m = 6, n = 6
333333
322223
321123
321123
322223
333333

m = 5, n = 6
333333
322223
321123
322223
333333

>>606
You have give or take the same solution as me.
Now, watch and gaze upon the horrors that a cruel God hath released upon mankind:

```c
/**
 * C program to print number pattern
 * https://codeforwin.org/2016/06/number-pattern-17-in-c.html
 * By Pankaj Prakash
 */
#include <stdio.h>

int main()
{
    int N, i, j;
    printf("Enter N: ");
    scanf("%d", &N);

    // First upper half of the pattern
    for(i=N; i>=1; i--)
    {
        // First inner part of upper half
        for(j=N; j>i; j--)
        {
            printf("%d", j);
        }
        // Second inner part of upper half
        for(j=1; j<=(i*2-1); j++)
        {
            printf("%d", i);
        }
        // Third inner part of upper half
        for(j=i+1; j<=N; j++)
        {
            printf("%d", j);
        }
        printf("\n");
    }
    // Second lower half of the pattern
    for(i=1; i<N; i++)
    {
        // First inner part of lower half
        for(j=N; j>i; j--)
        {
            printf("%d", j);
        }
        // Second inner part of lower half
        for(j=1; j<=(i*2-1); j++)
        {
            printf("%d", i+1);
        }
        // Third inner part of lower half
        for(j=i+1; j<=N; j++)
        {
            printf("%d", j);
        }
        printf("\n");
    }
    return 0;
}
```

For the record, this is what he does. Nearly forgot to post my solution:

```c
#include <math.h> /* for fabs() and fmax() */

int pat18(int i, int j, int m, int n)
{
    return (int)fmax(fabs((m / 2.0) - i - 0.5) + 1,
                     fabs((n / 2.0) - j - 0.5) + 1);
}
```

This accepts the two sizes and coordinates, and returns the value at that coordinate. The loop function is just a run-of-the-mill grid printf:

```c
int m = 7;
int n = 7;
int i, j;
for (i = 0; i < m; i++) {
    for (j = 0; j < n; j++) {
        printf("%2i ", pat18(i, j, m, n));
    }
    printf("\n");
}
```

Needless to say, it satisfies me greatly to find such an elegant solution.

>>617
I'll let you take the win, it's definitely easier to understand that way. I like fixed point math, but with modern CPUs floating point is usually faster.

>>634
I have to say, sometimes you may find the solution to such a problem in an instant, and as soon as I saw this one, I knew exactly what I had to do.

Holy fuck, C is kinda fucky when you want to use multi-dimensional or variable length arrays. I can handle passing the length of a generated array to a function, but things are fucked up when looking at simple subjects like matrix arithmetic. I said it before, but this and variable-timestep things must be a pain in the ass. Since I want to do those things quite a lot, is C truly the optimal language for these things? Is there a good solution for just passing an m×n matrix to a function which can use it dynamically?
>>663
To be precise, I would want a method where I input a multi-dim array (and probably the dimensions separately), like void addMatrix(double A, int ma, int na, double B, int mb, int nb, double C, int mc, int nc) or addMatrix(double A, double B, double C), but well shucks man, that **A is just not allowed man - and I get why, but there is no neat solution for the passing of these things. Even passing the matrix just as a pointer and performing pointer arithmetic is a huge pain.

>>665
I'll take a look, since even the K&R book does not provide a good solution beyond double A[m][n], A[][n] or (*A)[n].

>>667
Oh wow, I forgot that ** is of course a spoiler tag. That should be addMatrix(double **A, int ma, int na, double **B, int mb, int nb, double **C, int mc, int nc) or addMatrix(double **A, double **B, double **C)

>>668
And while this works, it complains about me not specifying the dimensions - which leads me to believe that just doing this is a bad practice:

```c
void addM(double *A, double *B, double *C, int m, int n)
{
    int i, j;
    for (i = 0; i < m; i++) {
        for (j = 0; j < n; j++) {
            *((C + i * n) + j) = *((A + i * n) + j) + *((B + i * n) + j);
        }
    }
}
```

>>669
I first did this:
addM(A,B,C,3,3);
printM(C,3,3);
Which gave errors, but calling them like this works again without errors? What the fuck is happening here?
addM(*A,*B,*C,3,3);
printM(*C,3,3);
I dereference the pointer of A in main, so I pass the value of A[0][0] to addM. Next, the function accepts this AS a dereferenced pointer, which just takes the true pointer to A[0][0]? This is needlessly complicated. I apologize for quadrupleposting and spamming the board this much, but this is really annoying.

>>670
And it gave me incorrect results regardless. Fucking hell.

>>671
Hah, I fucked up in my printing program. It works for now. I should read some more of my code instead of complaining, I'll hold off from posting for now.

>>672
Sorry I was too late to offer any help or advice. You should consider creating a Matrix struct which has the dimensions as members. That will make a function to do operations on them much simpler, since you will only need to pass the matrix structs in. Don't forget to do a typedef to avoid having to say "struct" constantly. It will also make it clear that your functions are meant to take matrices and not just any set of ints. Definitely don't be afraid to ask for help. Sometimes it's the best way to learn.

>>665
I'm not aware of any general purpose programming language containing matrix arithmetic out of the box.
>Basic math being included in standard libraries isn't a high bar to pass.
>Just use C++
Are you retarded or so ignorant that you've never done matrix algebra before?

>>668
Learn how to structure your code, the road doesn't end exactly the moment you start grasping the syntax. Make a matrix abstraction like >>673 said, then you'll have a set of functions like matrix_mult(matrix_t *a, matrix_t *b), matrix_add(matrix_t *a, matrix_t *b), etc., or you can get a library that already does it for you like anyone else would. Look around the net for one, e.g. https://www.gnu.org/software/gsl/ (pic related).
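To make that abstraction concrete, here is a minimal sketch of what such a matrix_t module could look like (the flat row-major storage and the error convention are one possible choice, not the only one):

```c
#include <stdlib.h>

typedef struct {
    int rows, cols;
    double *data; /* flat row-major storage: element (i,j) is data[i*cols + j] */
} matrix_t;

matrix_t *matrix_alloc(int rows, int cols)
{
    matrix_t *m = malloc(sizeof *m);
    if (!m)
        return NULL;
    m->rows = rows;
    m->cols = cols;
    m->data = calloc((size_t)rows * cols, sizeof *m->data);
    if (!m->data) {
        free(m);
        return NULL;
    }
    return m;
}

void matrix_free(matrix_t *m)
{
    if (m) {
        free(m->data);
        free(m);
    }
}

/* a += b, element-wise; returns 0 on success, -1 on dimension mismatch. */
int matrix_add(matrix_t *a, const matrix_t *b)
{
    int i;
    if (a->rows != b->rows || a->cols != b->cols)
        return -1;
    for (i = 0; i < a->rows * a->cols; i++)
        a->data[i] += b->data[i];
    return 0;
}
```

Because the dimensions travel with the data, functions no longer need separate m and n parameters and can check for mismatched operands themselves.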
I serves as a good practice method for myself to recap all of the info of the last 4 years. >>673 Don't worry, sometimes it's best to just keep on puzzling. Structs might be a better solution, but coming from Matlab (and a few other languages), matrix handling is really quite convoluted here. >>677 >Matrices are convoluted in Matlab Which is funny because they are first-class objects. Every language is going to hack in advanced mathematics in a way that suits its workflow. Precise math in C and C++ has always been clunky because the language is designed around low-cost abstraction, not easy math. There's a reason most physicists used FORTRAN and mathematicians switched to Matlab (which is basically a driver around FORTRAN code, or was at its inception). These days a lot of people have gone over to Python because it has a library for basically everything and numpy has out-of-the-box support for pretty much any operation imaginable. Try structs, though. "Reviewing C" is worthless if you can only use it like you took an intro course, and it will save you a lot of headache. Although I am personally with the guy who said there's rarely a reason not to use C++. It provides better abstraction which is easier to use and it does it with almost no performance penalty. C fags like to brag about the language being lightweight but outside of some embedded environments you don't absolutely need C, and even that's going away in favor of powerful microcontrollers that can easily store a full C++ install. >>679 I meant convoluted in C, Matlab just lets you pass anything with numbers to any function that handles numbers so that was super convenient. I am still thinking about learning Python as well - but I would also like the knowledge I have gained in the past two weeks to be useful in making games for example. I'll see how things pan out, but the library that >>674 posted has a LOT of shit, even fucking Monte Carlo integration for the three people on the planet that use it, and even Rydberg and Coulomb functions - useless for everyone, but very useful for physicists and chemists. >>674 >I'm not aware of any general purpose programming language containing matrix arithmetic out of the box. Depends on how strict your "general purpose" definition is, does FORTRAN fit? In any case, the fact that modern CPUs have circuits designed specifically to be fast at matrix multiplications should be a hint about how important that kind of math is. >>677 >Any more standard C libraries that might be useful in calculus, plotting and/or data manipulation? Although gsl is an established project with lots of functionality, it's not standard. C has very little in terms of standard libraries. Unfortunately I can't really recommend any libraries for your needs since I've never had to deal with tasks like those in C, but C has libraries for almost anything you could possibly need, so do a couple of searches to find alternatives and try the ones that look the best for what you want to do and your compiler kit. A simple search yielded the gsl library from my previous post, another seems to indicate there are good alternatives for plotting libraries which support lots of compilers and formats, and I'm not sure if it's something you need but there are also a good amount of options for arbitrary precision arithmetic libraries (in case you need to do precise computations with integers of several hundred or thousand of decimals). Go nuts! 
>>683
>but the library that >>674 posted has a LOT of shit
GNU has some pretty cool projects for scientific shit. In general, given the age of the language, there's a ton of stuff made for it.

>>684
Unfortunately I don't have a lot of experience with FORTRAN, however I don't think it would qualify as a general purpose language.
>In any case, the fact that modern CPUs have circuits designed specifically to be fast at matrix multiplications should be a hint about how important that kind of math is.
They do add instructions to perform some operations with multiple data in one go, but that's even more low level and difficult to manage than what you'd get with C and you're limited by the register size. It's hardly the kind of functionality we're discussing here. Also interestingly, I don't think compilers generate functions using those instructions unless you manually code the instructions yourself, but fortunately the kind of code that would need to use those instructions would usually be already encapsulated in some library so you don't need to.

>>687
Compile this with -O3 -msse{2,3,4} and you will get branch-free and SSE vectorized code, or read https://gcc.gnu.org/projects/tree-ssa/vectorization.html

typedef struct _M {
    float v[4][4];
} M;

M M_sum(M *s, M *t)
{
    M x;
    int r, c;
    for (r = 0; r < 4; r++) {
        for (c = 0; c < 4; c++) {
            x.v[r][c] = s->v[r][c] + t->v[r][c];
        }
    }
    return x;
}

M M_mul(M *s, M *t)
{
    M x;
    int r, c;
    for (r = 0; r < 4; r++) {
        for (c = 0; c < 4; c++) {
            x.v[r][c] = s->v[r][0] * t->v[0][c]
                      + s->v[r][1] * t->v[1][c]
                      + s->v[r][2] * t->v[2][c]
                      + s->v[r][3] * t->v[3][c];
        }
    }
    return x;
}

>>687
They will generate them, but the conditions have to be perfect. Most commonly, matrix math is used for machine learning and so GPUs are increasingly being optimized. Nvidia has been selling the concept of "tensor cores" lately, which is just high-speed matrix math units. SIMD is nice but it's more about parallel integer multiplication inside of larger registers. For massive parallel floating point you're going to want a tensor core.

>read SICP
>get stuck on the "Newton's Method" bit
>read Calculus Made Easy
>get stuck on infinitesimals
>no idea how to even begin to get started on nonstandard analysis

>>697
Do you mean the concept of derivation? Or the computational method behind it?

>>697
What >>698 said; what are you trying to learn to do? Do you already know calculus? If so, why are you reading "calculus made easy" and struggling with a concept like infinitesimals? Or, in summary: we can't answer a question you haven't asked. Try asking a question.

>>699
It's not a question. It's a cautionary tale.

>>700
On the contrary, I think you just need to learn the conceptual basis of calculus.

>>697
Only the first (or maybe second) chapter is math heavy. You won't really understand calculus if you don't understand logic and proof methods. You can easily be trained to differentiate polynomials much like algebra, but still be essentially blind to the logic. If you're serious enough to read SICP, why read "Calculus Made Easy"? "Calculus" by Spivak is similar to SICP in how it demands a precise understanding of the underlying theories before even touching the core subject. Learning calculus the easy way is like learning programming by copying and pasting python snippets from a "recipes" book.
>>499 >>511 >>528
I am a seething turbo-autist with ADHD who can't do college because I am total shit at academia and I can't do homework or pay attention in "lectures", but I've been programming ever since I was in elementary school and I know for a fact that I could do a good job if I was hired. What's the best course of action for me?

>>703
Your only real option is building a solid portfolio. Either by getting some kind of job, joining an open source project, or starting your own project and actually making something of it. If you can walk into an interview with an entire github/gitlab full of repos demonstrating solid understanding of material then you can theoretically get hired. The first job is the hardest one because you have no track record or "real" experience. After that, the first company can vouch for you to the second. Normally I'd never recommend books like "Cracking the Coding Interview", since I think companies that hire that way are asking for trouble, but you will be at a disadvantage since they are quizzing prospective hires on common Computer Science homework material. It's an attempt to verify you know the theory. This will be your biggest struggle, so either give up on these jobs entirely or pirate that book and work through it. For what it's worth, SICP also covers a lot of that stuff and it's more coding focused. The only issue is probably that it's functional and not object-oriented like most jobs are looking for. So once again, a post here comes down to "work through SICP". You basically need to be able to say which sort algorithm is best for a scenario, draw a binary search tree, and tell them the runtime complexity of some arbitrary code. There are entire classes on this material being taught, and you'll need to go toe-to-toe with diplomafags. The good news is that most of them suck at these questions and fail because they just crammed the material without understanding it.

Speaking of jobs, what does everyone's resume look like? Do you use one of those newer templates full of colors and retarded shit like star ratings for language and technology knowledge? Or do you still have something more old school with no colors and only one column? I still have the latter and I feel a bit left behind, but I do not like the newer style. I'm gonna need to remodel my resume soon, so any resume advice is welcome. Might not be the right thread but it's moderately on topic.

>>688
That's pretty cool. On closer inspection it seems gcc favors XMM instead of the x87 FPU and always uses XMM, which I didn't think it would.

>>705
I think the latex thread became the de facto resume thread, at least for a bit. Obviously everyone is really apprehensive to actually post their resume, which makes it hard to share examples. Maybe we should have a resume thread.

For mine, I basically eliminated the margins and stuck all the personal/contact info in a centered header at the top. I follow that with an objective (required by some HR departments) and a "skill cloud", which is basically a table (lines hidden) containing my top skills. After that, I have job experience from my top 3-4 jobs. Each of those has bullet points with descriptions of my duties. Numbers are really good, for example "optimized code which led to a 50x speed-up in common task". I was hesitant to do bullet points, but my friend in HR at [well-known corporation] insisted and now I'm used to it. I stick education and some volunteer work in as an afterthought, just to keep it there.
My trick is that the template follows the format of a "one column" resume, but because I eliminated the margins I have room to play with. So I use right-justified text on the same line as other text to cram more information into a tighter space. For example, the line with my title will also have the dates I worked there on the right. This avoids wasting lines for trivial information that is immaterial to the work and lets the highlights be my skills and duties, which is what should be put forward to begin with.

>>706
Ah, I'm doing something similar to you. One column, very small margins, skill summary with bullet points, job descriptions in bullet points and specific descriptions that have numbers with a different type of bullet. Unfortunately it still looks dated and it makes it look like a boring wall of text. I might have to remove some stuff, but I have no idea what.
>I was hesitant to do bullet points, but my friend in HR at [well-known corporation] insisted and now I'm used to it.
I mixed a little bit, to be fully honest. The bulk of the description for each position is in bullet points but I do have 1 or 2 lines telling what the company and job were about in general terms.

Let's say that I'm working on a library that's going to expose a C++ class, and I want to avoid leaking implementation details (private types, private member variables, private methods, etc.) in the interface header file. How is this supposed to be done? This is easy in C but it's giving me headaches in C++. Here's an example class:

#include <someOtherClass1.h>
#include <someOtherClass2.h>

class library {
public:
    enum class returnValues {
        ok,
        not_ok,
        definitely_not_ok,
    };

    library() {}
    ~library() {}

    returnValues pub1() { return returnValues::ok; }

private:
    enum class action {
        action1,
        action2,
        do_barrell_roll
    };

    returnValues priv1(someOtherClass1 &c);           // someOtherClass1 defined in someOtherClass1.h
    returnValues priv2(someOtherClass2 &c, action a); // someOtherClass2 defined in someOtherClass2.h
};

Here the class has an internal type (action), and uses 2 other classes defined in 2 other header files. The latter is especially annoying because it forces me to distribute more header files with my library, header files which are of no use whatsoever to whoever consumes it. Searching around I found that people recommend the PIMPL idiom to fix this issue, and while it looks like yet another workaround to the issues created by OOP, it seems like it would do the job. However, this technique goes a bit too far and doesn't allow me to expose the things I do want to expose (public types like returnValues for instance)! What now? None of the examples I've found contemplate this, they always do the piss easy example of a class that has 1 method receiving and returning void. How would you approach this? I have a few ideas but none are very good.

>>721
First, it's a bad idea to use c++ as anything other than a header only library, as the ABI changes from version to version and isn't standardized, so you'll need a .lib for each compiler and version you want to support. But ignoring that, PImpl does solve your problem, for you don't need to stick your entire class inside of it. Expose what you want to expose in library and forward declare the Impl class. Library doesn't need to see the definition of Impl to hold a pointer to it, only a declaration, so you can define Impl in some cpp that includes library and make use of its members.
class library {
public:
    enum class returnCodes {
        ok,
        not_ok,
        definitely_not_ok,
    };

    library() {}
    ~library() {}

    returnCodes Foo() { return returnCodes::ok; }

private:
    class Impl;   // Defined somewhere else.
    Impl* _impl;
};

Cppreference has a page on PImpl: https://en.cppreference.com/w/cpp/language/pimpl. I recommend bookmarking the site if you haven't already as it's a very good resource on c++ and its standard library in general. It's basically one step removed from the standard document, but it at least attempts to include examples and explanations alongside its jargon.

If I want to render environments and objects other than text-based things, am I best off learning C++ and using OpenGL? I cannot find good tutorials for using C and OpenGL.

>>733
As far as I know it's written in C, but generally game developers (and those doing 3D graphics) tend to use C++ so I imagine most guides will cover that. It's much easier to stick things in objects that just automate tasks rather than relying on passing stuff through functions. I'd recommend learning C++ anyways. Unless you have to work without a standard library or are writing an operating system there's generally little benefit to sticking to C. That said, you should probably consider learning Vulkan instead of classic OpenGL, which is effectively deprecated. If you need any resources on C++ I can recommend a few books. I can't say I know any OpenGL (or Vulkan) resources, but I can dig through my collection of PDFs and try to find something if you need it. But I imagine it will use C++.

>>734
I see - will consider. Sure, it'd be appreciated!

>>728
>it's a bad idea to use c++ as anything other than a header only library
Now that doesn't really sound believable, does it?
>you'll need a .lib for each compiler and version you want to support
I already took that for granted, it happens with C too, though I don't know how much of an issue there is from one version of a compiler to another.
>you can define Impl in some cpp that includes library and make use of its members.
I thought of doing that, but isn't having that kind of circular dependency terrible form? I mean, library would depend on impl, and impl would depend on library. The headers wouldn't depend mutually because of the forward declare, but it still smells stinky.
>cppreference
Yeah, I use it as a reference sometimes, but man, the information there is written with such complexity. I realize there might not be another way because the language itself has gotten really complex, but that doesn't make things any easier to understand.
>jargon
This is something that I've wondered for some time. How much of the jargon and complex language functionality does the average C++ programmer understand and use daily? Am I the only one that thinks the language has gotten absurdly complex? For example, would an average C++ programmer already know what
>std::experimental::propagate_const<std::unique_ptr<impl>> pImpl;
means? When you read what it does it's reasonable, but I had never come across propagate_const before, and this is just the first thing you see on the linked page. I find that cppreference continually pulls std jargon out the ass and I have no idea how there can always be more!

I don't understand memory alignment errors in C. How can one unintentionally end up dealing with misaligned memory? Can you give me a real world example?

>>767
Pointer aliasing for example. In lots of applications, serial numbers are 4 bytes long, or 8 bytes long, or less. Using a 32 bit int or a 64 bit int is very handy to do operations against the serial number (such as comparisons), but a buffer composed of several 8 bit integers is very useful to store them and transmit them. So what could happen is that the storage is 4 integers of 8 bits each for the aforementioned purposes, and then it's cast to a 32 bit integer to do some other operation, and although the compiler will happily accept this, it is an unaligned access.
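A minimal sketch of the serial-number situation just described (the buffer contents and offsets are invented for illustration):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint8_t wire[7] = { 0xAA, 0x12, 0x34, 0x56, 0x78, 0xBB, 0xCC };

    /* The 4-byte serial starts at offset 1, so this pointer is misaligned
       for uint32_t on most ABIs; dereferencing it is undefined behavior
       and a hardware trap on strict-alignment CPUs. */
    uint32_t *bad = (uint32_t *)(wire + 1);
    (void)bad;   /* don't actually dereference it */

    /* The portable fix: memcpy into a properly aligned object. */
    uint32_t serial;
    memcpy(&serial, wire + 1, sizeof serial);
    printf("serial = 0x%08x\n", serial);
    return 0;
}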
>>767
When you assume sizes/offsets of structs you can run into issues. On 386 it's completely valid to read a DWORD at any byte offset, which can make errors subtle. It's generally not something you run into unless you're trying to be clever. Related: http://www.catb.org/esr/structure-packing/
I've had issues while working with structs and binary formatted files. Here the file is manually packed, but read with assumed packing. This is a stupid example, but I remember doing something similar when I was new.

#include <stdio.h>

typedef struct {
    char fugg;
    int benis;
} spurdo;

void read_spurdos(spurdo *S, int n, char *fn)
{
    FILE *fp = fopen(fn, "r");
    /* reads sizeof(spurdo) bytes per record, padding included */
    fread(S, sizeof(spurdo), n, fp);
    fclose(fp);
}

void write_spurdos(spurdo *S, int n, char *fn)
{
    int i;
    FILE *fp = fopen(fn, "w");
    /* writes the members back to back: 1 + 4 bytes, no padding */
    for (i = 0; i < n; i++) {
        fwrite(&S[i].fugg, sizeof(char), 1, fp);
        fwrite(&S[i].benis, sizeof(int), 1, fp);
    }
    fclose(fp);
}

void debug_spurdos(spurdo *S, int n)
{
    int i, j;
    for (i = 0; i < n; i++) {
        printf(":%c B", S[i].fugg);
        for (j = 0; j < S[i].benis && j < 40; j++, putchar('='));
        printf("D\n");
    }
}

int main()
{
    spurdo sparde[]  = { { 'O', 9 }, { 'D', 6 }, { '(', 1 } };
    spurdo gondola[] = { { '?', 0 }, { '?', 0 }, { '?', 0 } };
    write_spurdos(sparde, 3, "fugg");
    read_spurdos(gondola, 3, "fugg");
    debug_spurdos(sparde, 3);
    debug_spurdos(gondola, 3);
    return 0;
}

>>770
Ebin

>>770
That's a good link and one of my favorite reads. I remember adding __attribute__((packed)) to my first attempts at packed structs, because I didn't trust I was doing it right. Once I realized it was doing nothing and potentially introducing performance penalties, I stopped. Newbies really underestimate the performance penalty of unpacked, misaligned data. They also forget that fitting data into the size of a cache line is immensely beneficial. I've seen benchmarks which stall because you add a single extra unpacked char to a struct.
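A tiny check that makes the padding mismatch in the spurdo example visible (a sketch; the printed sizes assume a typical x86-64 ABI, and __attribute__((packed)) is a GCC/Clang extension):

#include <stdio.h>
#include <stddef.h>

struct unpacked { char fugg; int benis; };
struct packed   { char fugg; int benis; } __attribute__((packed));

int main(void)
{
    /* unpacked: 3 padding bytes after 'fugg' so 'benis' lands on a
       4-byte boundary; the file above was written with 5 bytes per record */
    printf("sizeof unpacked = %zu, packed = %zu\n",
           sizeof(struct unpacked), sizeof(struct packed));  /* typically 8, 5 */
    printf("offsetof benis:   %zu vs %zu\n",
           offsetof(struct unpacked, benis),
           offsetof(struct packed, benis));                  /* typically 4, 1 */
    return 0;
}

Which is exactly why fread(S, sizeof(spurdo), n, fp) walks off the end of the hand-packed records.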
What's a good resource for learning JCA/JCE and Bouncy Castle? I don't necessarily need to know the mathematics, I am just having trouble understanding the completely autistic class structure. I bought pic related without realizing it's from 2005, so I need something up-to-date. Also, why was the Java Cryptography Architecture designed by a team of severely autistic chimpanzees?

>>784
>Also, why was the Java Cryptography Architecture designed by a team of severely autistic chimpanzees?
Because everything about Java was designed by a team of severely autistic chimpanzees. Java is what happens if you take C++, which is an awfully designed language, and remove the C parts, which was the justification for the awful design. You end up with an awfully designed language for no good reason.

>>836
>which is an awfully designed language, and remove the C parts
Java is more comparable to C#, but it eats more memory due to wanting to be faster just for the sake of claiming to be fastest.

>programming languages don't have performance
>only implementations do
>you can't say that C is faster than Java!!!!!!!11111111111111
>performance doesn't matter, only programming time does because when I sell it, I won't be the one looking at the loading screens all day
>rubs hands

>>838
>performance doesn't matter, only programming time does because when I sell it, I won't be the one looking at the loading screens all day

>>838
>Java is more comparable to C#
C# was modelled after Java, but without the burden of backwards compatibility with ancient Java code, which is why it does a few things better. It's still mostly the same OOP Pajeet tier shit, which is intentional by design.

On the topic of C#, is it mostly constrained to Windows platforms in the field, or is it also used for Linux to a good degree?

>>850
Mono allows you to compile and run C# and .NET applications (I believe) on Linux.

>>851
I know, but the question was more oriented towards whether companies actually do C# development on Linux, as with it being originally a Windows framework I imagine that's the platform most of the development takes place on. I feel I should learn a higher level language but my experiences with Java haven't been great. C# looks a lot like C++ but if people only use it on Windows I don't think it's the language I should go after either. Wish C++ was more popular.

>>852
Ah damn, I completely read over the last part. Microsoft is trying to push cross platform with .NET Core, and usage of it is growing in the (loonix) server space. The traditional .NET Framework is still focused on Windows development. As for C++ development, you shouldn't feel discouraged from learning it if you feel there is little demand. There certainly is demand, but it's likely less than in times past, since the current programming world has a heavy lean to web development.

>>852
From what I have seen in job postings, C# programmers are also expected to work with various Microsoft Windows technologies. I don't remember the names of those technologies, but I did not even bother applying to those jobs, because I figure they would also make you use Windows as your work OS. I am working as a Java Pajeet, but at least they let me set up my own computer myself.

Someone in another thread mentioned Wt, a C++ library to create webpages. I decided to give it a try and I ended up making a very simple imageboard proof of concept to learn the ropes. It's very interesting, though still a bit constrained in some regards; I really hope they continue improving it. One thing I noticed and was surprised by is that somehow this thing does not need javascript to update the DOM. I can make a post and the post is appended to the page correctly even without javascript enabled. Also, because of how the library works, a thread update timer would not be necessary because the server could just update your browser's page in realtime when a new post is submitted. Overall kind of exciting man. Wish I had a server to play around with for shenanigans.

>>868
What's the typical setup for Java development on Linux? What programs are required and what's the typical workflow?

>>881
Java really likes you to have an (((IDE))) regardless of platform. These days, IntelliJ is pretty common, but a lot of shops still run Eclipse. I wouldn't use Eclipse unless you need to contribute to a project which uses it already. But you can absolutely treat Java like C++ and use a makefile to compile a JVM binary.
There are linters for every text editor and there are plenty of guides for configuring your makefile, although the process is more arduous than with C++. But it's clear when you look at the ecosystem that Java developers find IDEs appealing and seemingly design their projects around having a package management inheritance tree and debugger available on-screen at all times, and they will rely heavily on hints.

>>881
>What's the typical setup for Java development on Linux? What programs are required and what's the typical workflow?
You obviously need a JDK (Java Development Kit). The current LTS version is Java 11. You also want a build system, so your choice is either (((Maven))) or Gradle. For development most people use IDEs, basically either Eclipse or IntelliJ IDEA, but most people developing in Java are also Pajeets. I don't use that shit, I use Vim with the Java language server; it's basically all the good stuff factored out of Eclipse into a standalone process which the text editor can then communicate with to get the IDE-like features. If Vim is not your cup of tea you can use literally any other editor which supports LSP. It's a really comfy setup, but it's also an uphill battle because there are probably like five or so people in the entire world who don't use one of the two bloated IDEs. People get really offended when someone steps out of bounds.

>>1129
> C
> password manager
> novice in programming
Please stop. Security does not belong in the hands of a novice, especially in an unsafe language like C. If you want to play around with encryption, then by all means go ahead and knock yourself out. But if you mean to actually use and rely on it, you need more experience under your belt first.
> I mostly started writing this out because I didn't like how pass just stored every password in a separate file with the filename being the name of the website.
You don't have to do it that way, you can put whatever information you want into a file. It is the usual way, but with a little elbow grease and shell scripting you can use only one file and store your password and metadata in any way you want.

>>1130
>If you want to play around with encryption, then by all means go ahead and knock yourself out.
I'm not looking to roll my own crypto or whatever crazy shit. I know full well I shouldn't be doing that. Do I need years of experience just to call a couple of functions from a library to do encryption and decryption for me?
>especially in an unsafe language like C
I already said I'm paying attention to mitigate attacks exploiting that. I know I need to be careful with that, it's part of the fun. Are you spouting memes just because? You visit Hacker News much?

>>1018
>Java developers find IDEs appealing and seemingly design their projects around debugger available on-screen at all times, and they will rely heavily on hints.
Why is this wrong? I will admit to having sinned and never set up a development environment without an IDE, but outside of bloat and maybe coarse configuration options I don't understand why some people hate them so much, and what setting up everything separately has going for it.

>>1131
The problem is that security in software that needs to handle cryptography is very difficult to get right, which is why the usual advice is to avoid it entirely and rely on already existing and heavily audited solutions instead of rolling out your own crypto solution.
This doesn't apply only to the actual encryption algorithms but also to whatever you do with those algorithms; that is, not every piece of software designed to use AES is secure, for example. It's a bit difficult to take seriously that you're mitigating attacks and exploits when in the previous line you say that you're almost an absolute novice at programming. If you actually are looking to create something to use and not just to learn, I suggest you keep looking for some other existing solution.

>>1129
If you still want to go forward with it, first get a grasp of the algorithms and how they work on a high level (e.g.: AES, Serpent, ECB, CBC, etc.) to learn what you should be doing. OpenSSL is huge and I think it should expose the algorithms you want, and if you're targeting Linux chances are everyone already has it installed. You can probably get single-.c implementations of algorithms from lesser known libraries if you want something simpler that you can compile into your project; ultimately, if you're rolling out your own solution you might as well go for the full book of sins.

>>1146
>You will eventually have to keep unencrypted data in memory,
I'm fully aware of that. After all, it's part of the reason I was asking whether to store all passwords in one file that is decrypted at once, or to keep each password in a separate file and decrypt one when needed. Less unencrypted data in memory, less shit that could go south, I think.
>and so you must take extra care that your application does not have any vulnerabilities and does not leak any data.
Ok, well, I'll make sure to do that. I want to dogfood this one day, after all. I'm thinking of going for libsodium, since their docs seem to have been made foolproof, but once I build up the confidence I'd like to use BearSSL, since it's not only portable and available on distros, but quite lean.

>>1146
>Unix itself is already an IDE which fails at basic features like code navigation (ctags and cscope are dogshit)
yep, IDEs are shit too, i use a dedicated gaymer box to run IDE #520 so i can navigate code for language #20 of project #5723 i'm auditing.

>>1131
yes, actually. especially in C. KEK

What is the worst possible textbook for learning C, and why is it this monstrosity?
>Numerous typos and syntactical errors (esp. missing curly braces)
>The example code in Chapter 1.9 DOESN'T EVEN FUCKING RUN
>They completely avoid teaching you some of the basic syntax of the language, despite giving you exercises which could be solved way more easily with certain operators
Am I retarded, or is this clusterfuck bad for anyone else? Maybe it's just a really bad introduction for people who are new to programming. I feel bad and possibly really fucking stupid for bashing the book made by the creators of C themselves, but surely I can't be the only one who finds it obscure at best, and maddening at worst.

>>1130
>Security does not belong in the hands of a novice, especially in an unsafe language like C.
Not the same anon (I'm the same one from >>1244), but it sounds like I'm even more of a beginner than he is. Should I be worried if the overview of an Information Systems Security diploma program includes pic related?

>>1245
There's a world of difference between examining things and making design decisions that can impact a cryptographic implementation.

>>1246
That's a no, then? I'm not trying to be pedantic, but I'm still really new to all of this.
The privacy/security side of tech always was fascinating to me, so a career in that neck of the woods would likely be stimulating and rewarding, both for my own benefit, and in the sense that I could help people keep their stuff safe. The thing is, I'm starting from square one, and I only have a few months to become proficient in the way described.

>>1244
>Maybe it's just a really bad introduction for people who are new to programming.
Yes, it is. In general a "The <blub> Programming Language" book is not for beginners, it is for people who already know how to program, but don't yet know <blub>.
>They completely avoid teaching you some of the basic syntax of the language, despite giving you exercises which could be solved way more easily with certain operators
This is common in textbooks, they want you to solve the problem with what you already have first. Then you learn more powerful operators and you can appreciate what they give you better. If they just dumped everything there is on your head it would just be confusing.
>Numerous typos and syntactical errors (esp. missing curly braces)
Curly braces can be omitted when there is only one statement in the body. In fact, curly braces can be seen as a way of packing several statements into one. That's not to say that you should do it, but it can make sense in a book where space is limited.
>The example code in Chapter 1.9 DOESN'T EVEN FUCKING RUN
I'll have to look into that one later.

>>1245 >>1247
You're talking about undergoing training while the previous guy was talking about implementing an actual system that was supposed to be safe. You're not going to be implementing security solutions at a development level in your training, anything you make is going to be experimental and for learning purposes, and most likely it'll just involve understanding how buffer overflows work and maybe making a tool or two.

>>1250
>In general a "The <blub> Programming Language" book is not for beginners, it is for people who already know how to program, but don't yet know <blub>.
Alright, that makes sense. The authors do reference Fortran and Pascal a lot, though I assumed that was just because of the time period during which the book was written.
>This is common in textbooks, they want you to solve the problem with what you already have first. Then you learn more powerful operators and you can appreciate what they give you better. If they just dumped everything there is on your head it would just be confusing.
That makes sense, though I still think it might be better to use exercises as opportunities to teach syntax. I guess it would be a lot to take in all at once, though.
>Curly braces can be omitted when there is only one statement in the body.
Huh. I didn't know that, actually; thanks for the tip.

>>1251
Alright, gotcha. Thanks for clarifying.

>>1148
> libsodium
how about NaCl?

>>1289
I can't find a lot of reasoning behind libsodium being superior to NaCl with a quick online search. According to what I heard, NaCl is alright code wise, made by a guy who knew what he was doing, but it turns out packaging it is a world of pain. That means it isn't as likely to be in other distros as libsodium, since it's a fork that specifically addressed this issue and probably a few others. Why libsodium in particular? I took some features from KeePassXC, and looked into the library they used. More importantly, they advertise it as an easy to use library, and after reading their docs, I just went with it. If I'm feeling braver after dealing with this I might try something smaller, like Monocypher or BearSSL.
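To illustrate the "couple of functions from a library" point with libsodium's secretbox API - a minimal sketch, with placeholder data; real code would derive the key from a master password and zero buffers after use:

#include <stdio.h>
#include <sodium.h>   /* link with -lsodium */

int main(void)
{
    if (sodium_init() < 0) return 1;   /* must succeed before anything else */

    const unsigned char msg[] = "hunter2";
    unsigned char key[crypto_secretbox_KEYBYTES];
    unsigned char nonce[crypto_secretbox_NONCEBYTES];
    unsigned char ct[crypto_secretbox_MACBYTES + sizeof msg];

    crypto_secretbox_keygen(key);          /* placeholder: derive from master password in real use */
    randombytes_buf(nonce, sizeof nonce);  /* nonce must be unique per message */

    crypto_secretbox_easy(ct, msg, sizeof msg, nonce, key);

    unsigned char out[sizeof msg];
    if (crypto_secretbox_open_easy(out, ct, sizeof ct, nonce, key) != 0)
        return 1;                          /* forged or corrupted ciphertext */
    printf("%s\n", out);
    return 0;
}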
>>1290
ok then, best of luck anon.

I have not been coding for a while, but I have a command line-based game I slapped together in C++ to see what was possible with that. It feels really different with those classes and objects. I still need to figure out how I should link everything together efficiently, but I can see both advantages and drawbacks with this method. Featuring:
> a menu with a scrolling intro
> startup sound, random welcome message
> player input if desired, random player honorific generator
> deterministic room generation with a difficulty setting
> enemies that move
> chests with random items
> inventory system that updates dynamically
> inter-room movement and collision
My first game that I created without something like GameMaker, so more of a "how the fuck do I do this" and "how should I approach something like this" than an actual proper coding exercise. The coding is some serious spaghetti.

>>1343
That's a really cool project. Great way to learn terminal UIs and how to use objects. Games are a perfect place to represent things with objects. Doesn't matter if it's spaghetti; you've surely learned a lot by doing that and now you can use that to do something more involved. I'd be interested in updates if you do anything else like this or continue to build on it. There's lots of places you could take something like this.

>>1346
Due to the way I have built this, it will probably not happen, since I ran into issues with how I organized things - I use objects and temporaries willy nilly (all rooms are objects, but the enemies and chests are not, for example) but since I also do not have a lot of experience with combining different source files - for utility functions for example - the way I use classes and the main program feels somewhat chaotic. Nonetheless, it was fun experimenting with all the shit available, but since I am /sci/ oriented, I might give the whole gnuplot and GSL thing a shot, since I also want to do more data related stuff. I would also like to give OpenGL or even Vulkan a shot, but those are also a long way away. If I have time, I might pour some time into Dungeon Bomber again, I like the concept and it does not feel too devoid of ideas and effort.

>>499
I just want a job of any kind tbh, I am tired of being a NEET and don't want to work as a fucking janitor sweating my ass off and hurting my shoulders rushing. It isn't about the money; anyway, I lost my finances and bank account due to being autisticly mentally incompetent. My family doesn't want me working due to losing disability-bux but it's the only way to get my life back and end up free from my family and hopefully regain my ability to control my finances again. I tried asking for help from the SS office and state VR but they are no help. The longer I stay a NEET, the more homicidal I feel. I don't want to end up dead or in prison for life, but I've thought about it; if I go, might as well take someone else with me. I want to get away from this hell hole.

>>1365
It could always be worse anon. You could be NEET without even the autism bux like me.

>>1365
Please wait 10 years before going through with such a thing. Halt all suicidal thoughts until the 10 years is up is all that I ask!

>>765
Should I read this book over K&R these days, or do I read this book after K&R?

>>1463
K&R really just lays out a spec and offers an introduction.
It's valuable for historical reasons but useless for experienced programmers who want to learn C for daily use, and it's probably not the best introduction to programming for those unfamiliar. While I cannot recommend Modern C (I have not read it), if it fulfills the same role as the Meyers Modern C++ series for C then I would say it's a good way to learn modern C paradigms. The usage of C has evolved over the years, even if the language hasn't, and the biggest issue with K&R is that it prescribes usage the way the designers expected it to be used, not the way it would actually come to be used.

Can anyone upload Effective C? https://nostarch.com/Effective_C

What are some useful/practical C projects to contribute to? Most of the C projects I came across (e.g., GNU) are rather mature and their issues/tasks require deep knowledge of the project itself. I wonder if there are projects in C needing features to be implemented, as opposed to simply bug fixing.

>>1512
C is mostly used for low-level libraries, embedded, etc. A lot of Python libraries, for example, implement some functions in C for performance reasons. C is only the primary language in a few cases; most desktop software utilizes C++ because objects are an easier way to manage a GUI. You can always ask even mature projects if they need help, and fixing bugs is a good way to learn the language because you can see for yourself the mistakes of others. Even mature projects like GNU will take newbies if you ask for mentorship. The biggest problem is that you will need to know C pretty well to contribute a new feature. It offers no hand-holding or guardrails, so any contributor writing in C needs to know the language, its common tricks, and also common pitfalls very well.

Is Rust the language all the cool kids are learning?

>>1521
Rust is the language all the cuckolds are learning because they think it looks cool.

>>1522
What's even the appeal of Rust besides it being "memory safe"?

>>1522 >>1521
I thought it was Golang.

>>1527
It's new, trendy, and probably pays well. You can't ask for more.

>>1527
Memory safety + good performance is big on its own, given how many issues stem from memory mismanagement. The fact that the dependency management and build system are saner than C and C++ is a bonus.

>>1530
Isn't Rust worse in terms of compilation speed and binary size?

>>1531
Compilation speed especially, due to the bootstrap deal, and compilation in general isn't available on as many architectures, which is a no-go for embedded systems. I've not seen significant testing on binary size for actual projects; there was one test trying to shrink a Rust hello world down to a C hello world and getting reasonably close, but that's not really too useful. Still, binary size is a relatively minor concern unless things get really overboard. The community being shit because it attracts the wrong kind of devs is a far more serious issue than both of them, in all likelihood.

>>1518
> literally the first page that's not the title or copyright notices
< Dedicated to my granddaughters, Olivia and Isabella, and to other young women who will grow up to be scientists and engineers
And into the trash bin it goes.

>>1535
Do books with these kinds of pages tend to be low quality and not worth the time to read? Its cover looks so unusual, has any of you read it?

>>1365
i got rejected at another job today because i havent worked a coding job for a long enough time, so they consider that i dont even know how to code. crazy how that works.
stay on autismbux, fuck coding.

Am I retarded for disliking exceptions and trying to stick to return-based error checking?

>>1699
Not entirely. A lot of C programmers still use this and are used to it. The main issue comes with inconsistency. Some languages provide exception handling and expect you to use it, which can conflict with bindings that use a different system. It becomes hard to track. The other big downside is you lose a return channel, and your functions likely have to modify data. You can use C in a "functional" way just by making data immutable and using functions to transform it, but you can't do that and also return error codes. It's also a fairly hostile paradigm to multithreaded or asynchronous operations, since they need to run independently and might not return at the same time, or at all. The problem with exception systems is their complexity. Even for a simple task, you need to create some specific object and then pass it somewhere. Handling it can be a nightmare if you don't try to catch it in a specific way, and catching all errors is a bad idea. Plus, it's rarely "free". There's almost always a performance penalty for throwing, catching, or even acknowledging errors. The new C++ exception system looks promising, but in most non-scripting languages the exception system is total junk.

>>1699
It can get burdensome for some programs. For example, try writing an interpreter that returns control to the repl after a divide by zero. If you don't use some non-local escape via exceptions or setjmp/longjmp, every single recursive evaluation will need to check for an error (see the sketch at the end of the thread).

>>9
The link to "browse the library" on the Gentooman library is broken. Not sure if anons here can do anything about it, just saying it for future reference.

>>1720
As long as the torrent still works, you're fine. Someone would need to rehost the whole thing. Probably costs a lot and is a target for DMCA. If you want individual books you should just download the entire library. You can just select Do Not Download if books/folders don't interest you.

If I'm too retarded to even figure out how to do pic related after multiple days of trying, would it be more realistic for me to give up on ever being able to get a job in InfoSec?

>>1848
What does the question mean? Convert all characters to their ASCII or Unicode code point and return their sum? What language? What libraries are allowed?
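A minimal sketch of the setjmp/longjmp escape mentioned a few posts up - the "REPL" is faked with a hardcoded input table, so treat the specifics as assumptions:

#include <setjmp.h>
#include <stdio.h>

static jmp_buf repl_env;   /* jump target: straight back to the "prompt" */

static double eval_div(double a, double b)
{
    if (b == 0.0)
        longjmp(repl_env, 1);   /* non-local escape: no error code has to be
                                   threaded back through every recursive eval */
    return a / b;
}

int main(void)
{
    static const double in[][2] = { {1, 2}, {1, 0}, {3, 4} };
    for (volatile size_t i = 0; i < 3; i++) {   /* volatile: value survives longjmp */
        if (setjmp(repl_env)) {    /* nonzero means we got here via longjmp */
            puts("error: divide by zero");
            continue;              /* back to the prompt, loop keeps going */
        }
        printf("%g\n", eval_div(in[i][0], in[i][1]));
    }
    return 0;
}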
2020-11-26 12:59:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27111485600471497, "perplexity": 1417.8303475795005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188146.22/warc/CC-MAIN-20201126113736-20201126143736-00499.warc.gz"}
https://2022.help.altair.com/2022/newfasant/topics/newfasant/mom/output_menu/observation_directions.htm
# Observation Directions

Clicking on the Output -> Observation directions menu will show the following panel:

The user needs to configure the directions of observation where the electrical far field is going to be calculated. Given a Theta or Phi table, the following parameters may be modified:

• Theta/Phi cut defines the value of the angular cut (in degrees) in the current table.
• Initial Phi/Theta defines the initial angular value (in degrees) of the other component in the current table.
• Increment Phi/Theta defines the angular step (in degrees) between adjacent samples in the current cut.
• Samples column describes the number of steps for the row cut. It is an integer value.

The Final Phi/Theta value is automatically computed according to the previous configuration. Insert Theta or Phi cuts by clicking on the Add buttons of the corresponding section, and remove undesired cuts by using the Delete buttons. The Clear buttons remove all cuts from the table. Press the Save button before closing this window to confirm the changes.

The user can import a file of directions using the Import File button. The file to be imported must be saved with a .txt extension, and the following format is required:

• Each direction is defined by its Theta and Phi angles.
• Each direction to be imported must be defined on a separate line.

Consider an example file (cuts.txt) with the following content:

10.0 10.0
10.0 20.0
20.5 35.0
45.0 0.0

After importing the file, a new cut is added to the Theta cuts table for each direction.
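If the directions come out of another program, a file in this format is trivial to generate. A sketch in C (the values are just the example above; nothing in this snippet is part of the product itself):

#include <stdio.h>

int main(void)
{
    /* one "Theta Phi" pair per line, as the Import File button expects */
    const double dirs[][2] = { {10.0, 10.0}, {10.0, 20.0}, {20.5, 35.0}, {45.0, 0.0} };
    FILE *fp = fopen("cuts.txt", "w");
    if (!fp) return 1;
    for (size_t i = 0; i < sizeof dirs / sizeof dirs[0]; i++)
        fprintf(fp, "%.1f %.1f\n", dirs[i][0], dirs[i][1]);
    fclose(fp);
    return 0;
}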
2022-12-03 13:42:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4622778594493866, "perplexity": 2375.8841907339147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710931.81/warc/CC-MAIN-20221203111902-20221203141902-00728.warc.gz"}
https://www.physicsforums.com/threads/equivalence-relations.216694/
# Equivalence relations

1. Feb 20, 2008

### eiselea

1. The problem statement, all variables and given/known data
Let S be the set of integers. If a,b $$\in$$ S, define aRb if ab $$\geq$$ 0. Is R an equivalence relation on S?

2. Relevant equations

3. The attempt at a solution
Def: aRb = bRa $$\rightarrow$$ ab = ba
Assume that aRb and bRc $$\Rightarrow$$ aRc
a = b and b = c; since a = b, then substitute a in for b to get a = c.
I don't know where to go from here.

2. Feb 20, 2008

### quasar987

You must check 3 things:
1) That aRa (reflexivity)
2) That aRb implies bRa (symmetry)
3) That aRb and bRc implies aRc (transitivity)

3. Feb 20, 2008

### eiselea

So what I have done so far answers the first part of the question?

4. Feb 20, 2008

### quasar987

But you haven't explained anything. 1) Why does aRa for every integer a?
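For reference, here is where those three checks lead for this particular relation (a worked sketch, not part of the original thread): reflexivity holds, since $$a\cdot a = a^2 \geq 0$$ for every integer $$a$$; symmetry holds, since $$ab \geq 0$$ implies $$ba = ab \geq 0$$. Transitivity, however, fails: take $$a=1,\ b=0,\ c=-1$$; then $$ab = 0 \geq 0$$ and $$bc = 0 \geq 0$$, but $$ac = -1 < 0$$, so R is not an equivalence relation on S.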
2016-12-04 06:13:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5479885339736938, "perplexity": 4278.137540464292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541214.23/warc/CC-MAIN-20161202170901-00301-ip-10-31-129-80.ec2.internal.warc.gz"}
https://eng.libretexts.org/Bookshelves/Chemical_Engineering/Map%3A_Fluid_Mechanics_(Bar-Meir)/04%3A_Fluids_Statics/4.4%3A_Fluid_in_an_Accelerated_System/4.4.3%3A_Fluid_Statics_in_Geological_System
# 4.4.3: Fluid Statics in Geological System

Acknowledgement

This author would like to express his gratitude to Ralph Menikoff for suggesting this topic.

Geological systems such as the Earth provide cases where fluid statics can be used to estimate pressure. It is common in geology to assume that the Earth is made of several layers. If this assumption is accepted, the layered structure will be used to make some estimates. The assumption states that the Earth is made from the following layers: a solid inner core, an outer core, two layers in the liquid phase, and a thin crust. For the purpose of this book, the interest is in calculating the pressure at the bottom of the liquid phase.

Fig. 4.18 Earth layers, not to scale.

This explanation is provided to show how to use the bulk modulus and the effect of rotation. In reality, there might be additional effects affecting the situation, but these effects are not the concern of this discussion. Two different extremes can be recognized in the fluid between the outer core and the crust. At one extreme, at the equator, rotation plays the most significant role. At the other extreme, at the north–south poles, the rotation effect is diminished since the radius of rotation is relatively very small (see Figure 4.19). In that case, the pressure at the bottom of the liquid layer can be estimated using equation (66) or the approximation of equation (77). In this case it can also be noticed that $$g$$ is a function of $$r$$. If the bulk modulus is assumed constant (for simplicity), the governing equation can be constructed starting with equation (??). The approximate definition of the bulk modulus is

$\label{static:eq:iniGov} B_T = \dfrac{\rho \, \Delta P }{ \Delta \rho} \Longrightarrow \Delta \rho = \dfrac{\rho \, \Delta P }{ B_T} \tag{137}$

Using equation (137) to express the pressure difference (see Example 4.6 for a detailed explanation) gives

$\label{static:eq:govBT1} \rho(r) = \dfrac{\rho_0}{1 - \displaystyle \int_{R_0}^r \dfrac{g(r)\rho(r)}{B_T(r)} dr } \tag{138}$

In equation (138) it is assumed that $$B_T$$ is a function of pressure and the pressure is a function of the location. Thus, the bulk modulus can be written as a function of the location radius, $$r$$. Again, for simplicity the bulk modulus is assumed to be constant.
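One way to see where equation (138) comes from - a sketch, assuming the hydrostatic relation $$dP = g\rho\, dr$$ (the careful steps are in Example 4.6). Combining it with equation (137),

$d\rho = \frac{\rho\, dP}{B_T} = \frac{g\,\rho^2}{B_T}\, dr \quad\Longrightarrow\quad \frac{d\rho}{\rho^2} = \frac{g}{B_T}\, dr$

and integrating from $$R_0$$ to $$r$$,

$\frac{1}{\rho_0} - \frac{1}{\rho(r)} = \int_{R_0}^{r} \frac{g(r)}{B_T(r)}\, dr \quad\Longrightarrow\quad \rho(r) = \frac{\rho_0}{1 - \rho_0 \displaystyle\int_{R_0}^{r} \frac{g(r)}{B_T(r)}\, dr}$

which agrees with equation (138) to first order, since the density inside the small correction integral may be approximated by $$\rho_0$$.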
2019-07-22 12:01:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8344206809997559, "perplexity": 277.27672685721166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528013.81/warc/CC-MAIN-20190722113215-20190722135215-00555.warc.gz"}
https://byjus.com/question-answer/from-a-circular-sheet-of-radius-4-cm-a-circle-of-radius-3-cm-is-1/
# Question

From a circular sheet of radius $$4\ cm$$, a circle of radius $$3\ cm$$ is removed. Find the area of the remaining sheet. (Take $$\pi =3.14$$)

## Solution

Area of the remaining part $$=$$ Area of the bigger circle $$-$$ Area of the smaller circle.

Consider the bigger circle: Radius $$r=4\ cm$$
Area $$=\pi{r}^{2}=3.14\times{4}^{2}=\dfrac{314}{100}\times 16=\dfrac{5024}{100}=50.24\ {cm}^{2}$$

Consider the smaller circle: Radius $$r=3\ cm$$
Area $$=\pi{r}^{2}=3.14\times{3}^{2}=\dfrac{314}{100}\times 9=\dfrac{2826}{100}=28.26\ {cm}^{2}$$

Area of the remaining part $$=$$ Area of the bigger circle $$-$$ Area of the smaller circle $$=50.24\ {cm}^{2}-28.26\ {cm}^{2}=21.98\ {cm}^{2}$$

$$\therefore$$ The area of the remaining sheet is $$21.98\ {cm}^{2}$$.
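Equivalently, in a single step using the difference of areas:

$A = \pi\left(R^{2} - r^{2}\right) = 3.14 \times (16 - 9) = 3.14 \times 7 = 21.98\ {cm}^{2}$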
2022-01-22 12:42:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9434269666671753, "perplexity": 7732.4323165935875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303845.33/warc/CC-MAIN-20220122103819-20220122133819-00539.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-16-vector-calculus-16-9-the-divergence-theorem-16-9-exercises-page-1186/31
## Calculus 8th Edition

Show that $\iint_S f\, n\, dS=\iiint_E (\nabla f)\, dV$

Divergence Theorem: $\iiint_E div\, \overrightarrow{F}\, dV=\iint_S \overrightarrow{F}\cdot d\overrightarrow{S}$ where $div\, F=\dfrac{\partial P}{\partial x}+\dfrac{\partial Q}{\partial y}+\dfrac{\partial R}{\partial z}$

Let $c$ be an arbitrary constant vector and apply the Divergence Theorem to $F = fc$:

$\iint_S fc \cdot n\, dS=\iiint_E div (fc)\, dV$

This implies that

$\iint_S fc \cdot n\, dS=\iiint_E \left[ f( \nabla \cdot c) +(\nabla f) \cdot c \right] dV$

and, since $c$ is constant, $\nabla \cdot c = 0$, so

$\iint_S f\, n \cdot c\, dS=\iiint_E (\nabla f) \cdot c\, dV$

Because this holds for every constant vector $c$, it follows that

$\iint_S f\, n\, dS=\iiint_E (\nabla f)\, dV$

Hence the result has been verified.
2019-12-14 23:25:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9895129203796387, "perplexity": 214.40446202483398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541297626.61/warc/CC-MAIN-20191214230830-20191215014830-00330.warc.gz"}
https://proofassistants.stackexchange.com/questions/1631/non-dependent-implicit-argument-instantiation-in-coqs-reference-manual-does-not
# Non-dependent implicit argument instantiation in Coq's reference manual does not work

Consider the following definition

Definition foo1 (A : Type) {_ : A} := A.

I was wondering whether there is a way to instantiate the non-dependent implicit argument (the _ here) without using @, and I found that the "Explicit applications" section of https://coq.inria.fr/refman/language/extensions/implicit-arguments.html mentions that

To instantiate a non-dependent implicit argument, use the (natural := term) form of arg, where natural is the index of the implicit argument among all non-dependent arguments of the function (implicit or not, and starting from 1) and term is its corresponding explicit term.

However, the following commands fail:

Check foo1 (1:=2).
Check foo1 nat (1:=2).

Moreover, it seems that Coq doesn't even accept the grammar, because even after adding a Fail before the commands, they still fail:

Fail Check foo1 (1:=2).
(* => Syntax error: ',' or ')' expected after [term level 200] (in [term]). *)
Fail Check foo1 nat (1:=2).
(* => Syntax error: ',' or ')' expected after [term level 200] (in [term]). *)

• Shouldn't that be foo1 nat (1:=2)? – Trebor Aug 4 at 10:00
• This command seems more reasonable, but it still fails. – Aug 4 at 11:24
2022-11-27 06:03:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8008345365524292, "perplexity": 9082.624778788168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710192.90/warc/CC-MAIN-20221127041342-20221127071342-00275.warc.gz"}
https://code.tutsplus.com/articles/11-lessons-learned-after-running-a-massively-popular-blog--net-17931
# 11 Lessons Learned After Running a Massively Popular Blog

For the last three years, I've had the pleasure of managing Nettuts+ - a site with 80,000 subscribers, and over 3 million page views each month. In that course of time, I've learned a great deal about, not only what it takes to build an active community, but also the economics of running a sustainable and profitable blog. I'd like to share some of these lessons with you today.

## 1. It Takes a Village

While, on the surface, many of you may associate Nettuts+ specifically with me, the truth is that there's an entire team here at Envato that is responsible for its success. Keep in mind that you can certainly run a successful blog on your own, but, boy oh boy does it help to have multiple hands on deck!

### Behind the Scenes

Think about it: running a profitable blog can potentially require editor(s), writers, developers, designers, marketers, managers, etc. All of that costs money. I use the word potentially because there are numerous ways to skin a cat; always remember that. To keep Nettuts+, and more broadly, the Tuts+ network, in tip-top shape, we require:

• An editor to manage the day to day activities on the site (that's me!)
• A designer to keep the site's visuals current
• A developer(s) to integrate the required functionality on the site
• A marketing guy/team to help generate revenue
• Manager(s) to oversee the landscape of the Tuts+ network

Now, truthfully, this list doesn't even scratch the surface, though it'll do for the sake of this article. Does this mean that you can't start a blog on your own? Absolutely not; in fact, keep costs to a minimum when first getting started. You can accomplish many of these tasks on your own -- at least at first.

## 2. You're Not Allowed to Make Money

Of course, this comment is somewhat made in jest, but you'll quickly find that a blog which earns a bit of money somehow irritates certain members of your community. Some feel that the inclusion of, say, a banner ad reduces the quality or respectability of your brand. What? You guys are trying to create a sustainable environment for this blog? Sell-outs!

I suppose the idea behind this line of thinking stems from that scary word: "commercial." If you -- gasp -- make a living off of your contributions to the community, that, consequently, means that you run a -- second gasp -- "commercial blog." Think what you wish, but what's the alternative? People have to live and put food on the table. To expect a site, like Nettuts+, to invest thousands upon thousands of dollars every month into educating the community, without any method to recoup those costs, is nonsensical and, more importantly, impossible. This is specifically why the Tuts+ network offers a Premium program for those readers who want to:

• A: Give back to Nettuts+, in exchange for providing a never-ending string of daily education.
• B: Take their training a step further, with higher level and more in depth tutorials and screencasts.

## 3. You Must Engage

Some blog owners/editors prefer to remain behind the scenes pulling strings, like a manipulator does to a marionette. That may work for some, but I tend to feel that this management style is unwise for a variety of reasons.
Most notably, it's vital that you, as the owner/editor/manager of a blog, inject yourself into your developing community. Doing so serves a variety of purposes:

• Relationships: You build a personal relationship with your readers, as you get to know one another. Rather than Teacher -> Student, you should strive for Peer -> Peer.
• Feedback: What better way to keep your "ear to the ground" than to immerse yourself in your site's community, whether that comes via the comments section, emails, or social networking sites?
• Loyalty: Engagement builds loyalty. Readers are far more likely to frequent your site if they know you personally than if you're a faceless John Q.

### IGN

I'll tell you a quick, and somewhat nerdy, personal story. When I was a kid, I was a huge Nintendo fan (actually, still am). I'd frequent a site called IGN, which specializes in gaming news. On the Nintendo channel, the lead editor, Matt Casamassina (who has since moved on to working at Apple), had an enormous fanbase, specifically for the reasons mentioned above. Whether in his reviews, or the quick office videos they put together, he approached his audience on a "hey guys" level, rather than taking a more formal approach. As a result, he was far and away the most popular editor on IGN for years. Regardless of the writer of a particular Nintendo review, you were bound to find multiple reader comments containing something along the lines of, "But what does Matt think about this?"

That's 100% what you should strive for! Make yourself synonymous with your blog.

## 4. You Have a Duty to the Community

At the point when you decide to turn your blog into a business, you must consider one important fact: you have a duty to post accurate information. Certainly, the validity of this statement will vary depending on your field of interest, though it should remain at least partially true across all industries. As a blog with 3 million page views a month, we have a duty to post valid content that conforms to the respective standards in our industry. In the case of Nettuts+, this translates to: we must post tutorials which are forward thinking, and which don't promote dated technologies and techniques.

### You'll Fail

Now the truth of the matter is: you're bound to fail at one point or another. Unless you have an MIT-trained team of fact checkers at your disposal, it's nearly impossible to be 100% correct at all times. We're not that smart. On Nettuts+, we've posted some brilliant tutorials which simply aren't available elsewhere on the web. That said, and with full disclosure, we also have posted tutorials which, in hindsight, could have been improved. If your community is vibrant enough, though, they'll keep you in tune as to what does and doesn't work on your site. They'll also provide you with feedback on which authors are welcomed on your site, and which...aren't.

## 5. Finding Quality Staff is Incredibly Difficult

Would you be surprised to find that, here at Nettuts+, I have quite a difficult time finding quality tutorials and writers for the site? We're a massively influential web development blog, yet, even now, I struggle when searching for new authors. Educational sites are difficult in this regard.
• Talent: The phrase, "Those who can...do," is particularly relevant here. Quite often, those who are capable of writing high level/quality content unfortunately don't have the time or financial need to write for blogs. The fact that we're able to post a new tutorial or article every day proves that this certainly isn't true across the board. We continue to post content from incredibly talented developers on a daily basis, but nonetheless, it does make the process more difficult than you might anticipate.
• Misconceptions: Many incorrectly assume that massive sites have massive rosters of writers at their disposal. Some might think to themselves, "I'd have a 1 in 100 chance of being accepted." This absolutely is not the case.
• English: As an English-based blog, that limits our potential author base to those who either speak English, or have learned it as a second language. The problem is that, more often than not, the grammar in tutorial submissions needs to be at least 90% correct. This typically is not the case with the latter, I'm sorry to say.

This is specifically why it's imperative that you adhere to lesson number three (You Must Engage). By becoming an active member in your industry's community, you'll be more successful when it comes to finding authors for your site than you would otherwise.

## 6. Open your Pocket-Book

This may come as a shock, but industry veterans cost a fair bit to hire. For a high quality, in depth article or tutorial from an industry veteran, you should expect to spend anywhere from $400 - $800 -- at least in the web dev industry. Sounds expensive for a single article? Maybe - but these guys have spent their entire adult lives acquiring the necessary knowledge to write such a tutorial.

Nettuts+ adopts a flexible model when it comes to purchasing guest tutorials. This expenditure will range, in the majority of cases, from $60 - $250. That said, we do make exceptions in certain cases, if the author/tutorial/concept warrants the cost. When deciding on how much to offer, we consider a handful of factors:

• How in depth is the tutorial?
• Has this topic been covered numerous times across the web?
• How well-known is the author in the industry?
• Will the article require a fair amount of editing?

With that pricing model in mind, I'm sure you can quickly realize how much it costs us to fund a month's worth of tutorials. Banner ads aren't pretty, but, without them, there'd be no Nettuts+.

## 7. Trolls Exist

"This is the worst tutorial EVER!!"

It's a sad truth that, where there are comment forms, there exist trolls. Don't worry; every successful blog in existence has received a terribly cruel comment or two -- likely hundreds! Rather than curling into a ball at the sight of these sorts of replies, consider them to be a rite of passage. In the mood for a laugh? This dramatization of a "troll" comment never fails to make me laugh.

### Trolls vs. Criticism

However, don't confuse trolls with legitimate criticism of your site's content. Negative feedback will often be your greatest source for improvement: critics will point out corrections and inaccuracies in your articles. The old saying, "the best advice hurts your feelings," is absolutely true.

## 8. The Print World Doesn't Directly Transfer

Though it certainly does in terms of grammar and structure, an in depth blog posting is quite different from a chapter within a book. Want proof? Take a new posting, and publish it as a long string of paragraphs, like you may see in a book.
Next, over the following several days, use an analytics tool to review how long visitors spend on the page. I promise you that you'll find these numbers to be surprisingly low.

• Investment: Those who purchase a book fully expect a time investment. Alternatively, a visitor to your blog is, more often than not, in search of a "quick fix." You'll surely find that they're not willing to read a seemingly endless string of paragraphs. We have to trick them into doing so!
• Scroll Points: The key to a well-formatted blog posting is inserting "scroll points." You have 3-5 seconds to pique a reader's interest before they move on to the next quick fix. To create scroll points in your postings, use a generous number of blockquotes, lists, call-outs, headings, numbered items...whatever you can think of.

As an example, refer to this very article; notice how I've applied multiple levels of "scroll points"? We initially have numbered headings for each idea, but then, within each point, I've also inserted additional stop points to grab your attention. This allows for a more comfortable and flexible reading experience.

## 9. Find a Niche

Every popular blog that I frequent has its niche. Think Nettuts+ doesn't? Well, it's true that "web development" is a rather broad focus, but more specifically, we focus on front-end development: HTML, CSS, PHP, and JavaScript. We even have a sub-niche that comes in the form of video tutorials. While they don't bring in massive amounts of traffic, they have still managed to generate a loyal following from the readers who prefer a more visual style of training.

If you attempt to cover the entire spectrum of your industry, you'll alienate a large portion of your readership. For example, on Nettuts+, while we could post Python and MODx tutorials every other day, we simply don't have the audience to warrant the expenditure or the space. The same holds true for the less-used CMSs available around the web. Instead, we focus primarily on WordPress.

### Need More Examples?

What if you don't feel skilled enough, yet, to write articles about your industry? That's okay, too; use it to your advantage! Create a blog which chronicles your progress in your field. Thousands of people around the world are in your exact same shoes; they'd enjoy reading the blog of someone who is encountering the exact same difficulties and confusions as you.

## 10. Don't Half-Ass It

"If you're gonna do it...do it right!" Though that quote is admittedly vague, it applies to nearly all facets of life. If your goal is to run a successful and profitable blog, don't half-ass the job. I'm sure you're familiar with the "talkers" -- you might even be one yourself. "Talkers" spend more time speaking about what their business is, and how it's going to change everything, than physically working toward making that concept a reality.

I've witnessed this "loss of enthusiasm" first-hand on multiple occasions. I suppose it's understandable, though; sometimes the dream is more exciting than the reality -- the reality of working your tail off, writing well into the night. It's much easier to live in your imagination and prepare CEO name plates for your desk than it is to build a business.
### Outline

Before even picking up a pencil -- or, in this case, typing a key -- determine who it is that you want to be on the web.

• Special: What makes you special? Not in the Mr. Rogers sense, but in terms of your point of view. Is your plan to follow the pack and release a copy-cat blog? Hopefully not.
• Distinction: This item travels hand-in-hand with your point of view. How will your blog distinguish itself? Can you converge your ideas into a single sentence? If not, perhaps your blog needs a bit more planning. At Envato, "we help people earn and learn, online." The movie Julie and Julia tells the true story of a girl who created a blog which served the sole purpose of documenting her progression through a cookbook. That simple and relatable concept transformed her blog into a massive success (and a movie). On Nettuts+, we strive to be "the best online resource for front-end web development training." What about your blog?
• Format: What format will your postings take? On Nettuts+, we focus on screencasts and in depth tutorials. Will yours be centered around quick tips? Video? Opinionated rants? Hilarious comics?
• Service: All blogs must provide some form of service to the reader. This can come in the form of news updates, education, pop culture updates, comedy, etc. Try to determine what your blog's service is, or will be.

Once you've decided, you better damn well execute that service better than all of your blogging peers. Honestly, if that's not your ultimate goal, then what's the point? Don't half-ass it.

## 11. Show Some Passion

Passion is contagious. We've all met that person who is utterly consumed by their industry. Isn't that enthusiasm infectious? It certainly is for me, and I assure you that it is for your readers. Passion can be a scary thing -- or better put, the lack of passion can be a scary thing. It has the potential to dissolve careers, bury businesses, and ruin relationships.

So what about you? Do you genuinely have a passion for what you'll be blogging about? Be honest! It might be smarter to ask yourself this question one night, long after the rest of the world has fallen asleep, as you lie awake. I find that our thoughts are more honest at those hours.

An easy way to quantify passion is to ask yourself if you'd be willing to run your blog for free -- indefinitely. Take money out of the equation when deciding. Income, though certainly the overall goal, needs to come second. The irony is, of course, that more often than not, there's a direct correlation between passion and profit.

I saved this item for last because I strongly feel that it's the primary key to a successful blog. Engagement, teamwork, marketing, the concept...that all factors into success. But if there's no passion at the root, you're doomed to fail.
2021-04-17 14:56:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21722936630249023, "perplexity": 2842.4972560538154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038460648.48/warc/CC-MAIN-20210417132441-20210417162441-00220.warc.gz"}
https://www.magicshow.tips/tag/arrow/
## Arrow Production…

I was hanging out with Chris Beason the other day and we were chatting about some tricks with a dollar bill. One idea I had was that you mention that there are 13 arrows that the eagle is holding on the back of a dollar bill. You then do a double take, notice that your bill has 14 arrows, and declare it a misprint. You then pull a full size arrow out of the dollar bill!

It would be pretty easy to do. You'd need a gimmick like an appearing straw, but only about 24 inches long, with an arrowhead glued to one end -- or cut the end to a point and paint it silver. It could be kept in a thumb tip, possibly with a slit in the side of the tip to allow the arrow to be removed from it. The thumb tip is really only there to keep the arrow compressed and easier to handle when rolled up.

While not the world's greatest mystery, it would be a decent sight gag.

-Louie
2022-06-25 08:17:20
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8114005923271179, "perplexity": 809.8943898104294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034877.9/warc/CC-MAIN-20220625065404-20220625095404-00007.warc.gz"}
https://physicshelpforum.com/threads/dynamic.2565/
# Dynamic

#### Apprentice123

Determine the maximum theoretical speed expected for a car, starting from rest and covering a distance of 50 m. The coefficient of static friction between tire and road is 0.80. The front wheels bear 60% of the weight of the car and the back wheels the remaining 40%. Determine the speed for (a) front-wheel traction, (b) rear-wheel traction.

(a) 78.1 km/h (b) 63.8 km/h

#### Deco

PHF Hall of Honor

Given: $$\displaystyle \mu_s = 0.80$$, $$\displaystyle \Delta x = 50$$, $$\displaystyle V_i = 0$$

Solution: the accelerating force is the friction on the driven axle, and that axle carries only a fraction of the car's weight, so

$$\displaystyle F_f = \mu N, \quad F = ma, \quad F = F_f$$

$$\displaystyle V_{ff}^2 - V_i^2 = (2\mu g\Delta x)(0.6) \quad\Rightarrow\quad \boxed{V_{ff} = 78.1 \mbox{ km/h}}$$

and

$$\displaystyle V_{fb}^2 - V_i^2 = (2\mu g\Delta x)(0.4) \quad\Rightarrow\quad \boxed{V_{fb} = 63.75 \mbox{ km/h}}$$

Apprentice123
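As a quick numerical cross-check of those two answers (a sketch added for convenience, not part of the original thread; it assumes g = 9.81 m/s^2 and simply evaluates v = sqrt(2 * frac * mu * g * dx)):

    #include <cmath>
    #include <cstdio>
    #include <initializer_list>

    int main() {
        const double mu = 0.80;   // coefficient of static friction
        const double g  = 9.81;   // gravitational acceleration, m/s^2
        const double dx = 50.0;   // distance covered, m

        // Fraction of the weight on the driven axle: 0.6 front, 0.4 rear.
        for (double frac : {0.6, 0.4}) {
            // v^2 - 0 = 2 a dx, with a = frac * mu * g.
            double v = std::sqrt(2.0 * frac * mu * g * dx); // m/s
            std::printf("%.0f%% traction: %.1f km/h\n", frac * 100.0, v * 3.6);
        }
        return 0;
    }

This prints 78.1 km/h and 63.8 km/h, matching the boxed results above.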
2020-02-29 00:24:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1890086829662323, "perplexity": 2904.388548027254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148163.71/warc/CC-MAIN-20200228231614-20200229021614-00229.warc.gz"}
http://www.ewh.ieee.org/reg/7/ccece10/paperkit.php
23rd Canadian Conference on Electrical and Computer Engineering

Conference Secretariat
CCECE 2010
Itron Inc.
20 Springbank Court SW
Calgary, AB T3H 3S8
Ph: 509-939-5641
Fx: 509-241-6153
Email:

"Evolution of Theory: Bringing Theory and Technology into Application"
May 2-5, 2010
Telus Convention Center & Marriott Hotel
Calgary, Alberta, Canada

# CCECE 2010 - Paper Submission Kit

## Part I: General Information

#### Procedure

The CCECE 2010 paper submission and review process will use a full paper review process:

• Authors who wish to participate in the conference will create documents consisting of a complete description of their ideas and applicable research results, in a maximum of 4 pages. Up to 4 additional pages are allowed: the first two pages with a $50.00 per page surcharge and the last two pages with a $75.00 per page surcharge.
• Submit the paper AND the copyright form electronically. The full paper must be submitted for review by the deadline of Friday, January 8th, 2010, 12:00 pm MT. If the paper is accepted, there will be an opportunity to modify the paper in response to the reviewers' comments before the final submission of March 15, 2010.
• Check the EDAS website for the status of your paper.
• Paper submissions will be reviewed by experts selected by the conference committee for their demonstrated knowledge of particular topics. The progress and results of the review process will be posted on the EDAS website, and authors will also be notified of the review results by email.
• Prepare a lecture following the guidelines included in this document.

The review process is being conducted entirely online. To make the review process easy for the reviewers, and to assure that the paper submissions will be readable through the online review system, we ask that authors submit paper documents that are formatted according to the Paper Kit instructions included here.

#### Requirements

Papers may be no longer than 4 pages (or up to 8 with the appropriate per page surcharges), including all text, figures, and references. Papers must be submitted by the deadline date. There will be no exceptions.

Accepted papers MUST be presented at the conference by one of the authors, or, if none of the authors are able to attend, by a qualified surrogate. The presenter MUST register for the conference at one of the non-student rates offered, MUST register before the deadline given for author registration, AND must submit the IEEE copyright form. Failure to register before the deadline will result in automatic withdrawal of your paper from the conference proceedings and program. To be published on IEEE Xplore, papers must be presented at CCECE 2010 as per the above guidelines.
Submission of Papers for Review: Friday, January 8th, 2010, 12:00 pm MT
Notification of Acceptance (by email): Sunday, March 7th, 2010
Author's Registration Deadline: Friday, March 26th, 2010
Advance Registration Deadline: Friday, April 2nd, 2010
Final Paper Submission Deadline: Monday, March 15th, 2010
Copyright Form Submission Deadline: Monday, March 15th, 2010

#### Correspondence

Please make sure to put the conference name (CCECE 2010) and the paper number that is assigned to you on all correspondence. Additional questions regarding submission of papers should be directed to the following address:

Program Committee
CCECE 2010

## Part II: Preparation of the Paper

#### Document Formatting

Use the following guidelines when preparing your document:

LENGTH: You are allowed a total of 4 pages for your document, or up to 8 pages with a $50.00 per page surcharge for the first two additional pages and a $75.00 per page surcharge for the last two additional pages. This is the maximum number of pages that will be accepted, including all figures, tables, and references. Any documents that exceed the 8 page limit will be rejected.

TEXT FORMAT: The text of the full paper MUST be in double-column, IEEE conference style format. Please see the Templates section for more details.

LANGUAGE: All proposals must be in English only. This is a change from previous conferences.

MARGINS: Documents should be formatted for standard letter-size (8-1/2" by 11" or 216mm by 279mm) paper. Any text or other material outside the margins specified below will not be accepted:

• All text and figures must be contained in a 178 mm x 229 mm (7 inch x 9 inch) image area.
• The left margin must be 19 mm (0.75 inch).
• The top margin must be 25 mm (1.0 inch), except for the title page where it must be 35 mm (1.375 inches).
• Text should appear in two columns, each 86 mm (3.39 inch) wide, with 6 mm (0.24 inch) of space between columns.
• On the first page, the top 50 mm (2") of both columns is reserved for the title, author(s), and affiliation(s). These items should be centered across both columns, starting at 35 mm (1.375 inches) from the top of the page.
• The paper abstract should appear at the top of the left-hand column of text, about 12 mm (0.5") below the title area and no more than 80 mm (3.125") in length. Leave 12 mm (0.5") of space between the end of the abstract and the beginning of the main text.

TYPE:

Face: To achieve the best viewing experience for the review process and conference proceedings, we strongly encourage authors to use Times-Roman or Computer Modern fonts. If a font face is used that is not recognized by the submission system, your proposal will not be reproduced correctly.

Size: Use a font size that is no smaller than 9 points throughout the paper, including figure captions. In 9-point type font, capital letters are 2 mm high. For 9-point type font, there should be no more than 3.2 lines/cm (8 lines/inch) vertically. This is a minimum spacing; 2.75 lines/cm (7 lines/inch) will make the proposal much more readable. Larger type sizes require correspondingly larger vertical spacing.

Title: The paper title must appear in boldface letters and should be in ALL CAPITALS. Do not use LaTeX math notation ($x_y$) in the title; the title must be represented in the Unicode character set. Also try to avoid uncommon acronyms in the title.
Author List: The authors' name(s) and affiliation(s) appear below the title in capital and lower case letters. Proposals with multiple authors and affiliations may require two or more lines for this information. The order of the authors on the document should exactly match, in number and order, the authors typed into the online submission form.

Abstract: Each paper should contain an abstract of 50 to 250 words that appears at the beginning of the document. Use the same text that is submitted electronically along with the author contact information.

Index Terms (Keywords): You may enter up to 5 keywords separated by commas. Keywords may be obtained by sending a blank email to .

Body: Major headings appear in boldface CAPITAL letters, centered in the column. Subheadings appear in capital and lower case, either underlined or in boldface. They start at the left margin of the column on a separate line. Sub-subheadings are discouraged, but if they must be used, they should appear in capital and lower case, and start at the left margin on a separate line. They may be underlined or in italics.

References: List and number all references at the end of the document. The references can be numbered in alphabetical order or in order of appearance in the paper. When referring to them in the text, type the corresponding reference number in square brackets as shown at the end of this sentence [1]. The end of the document should include a list of references containing information similar to the following example:

[1] D. E. Ingalls, "Image Processing for Experts," IEEE Trans. ASSP, vol. ASSP-36, pp. 1932-1948, 1988.

Illustrations & Colour: Illustrations must appear within the designated margins. They may span the two columns. If possible, position illustrations at the top of columns, rather than in the middle or at the bottom. Caption and number every illustration. All halftone illustrations must be clear in black and white. Since the printed proceedings will be produced in black and white, be sure that your images are acceptable when printed in black and white (the CD-ROM and IEEE Xplore proceedings will retain the colours in your document).

Page Number: Do not put page numbers on your document. Appropriate page numbers will be added to accepted papers when the conference proceedings are assembled.

#### Templates

Style files and templates are available for users of LaTeX and Microsoft Word. We recommend that you use the Word file or LaTeX files to produce your document, since they have been set up to meet the formatting guidelines listed above. When using these files, double-check the paper size in your page setup to make sure you are using the letter-size paper layout (8.5" X 11"). The LaTeX environment files specify suitable margins, page layout, text, and a bibliography style.

In particular, with LaTeX, there are cases where the top margin of the resulting Postscript or PDF file does not meet the specified parameters. In this case, you may need to add a \topmargin=0mm command just after the \begin{document} command in your .tex file. The spacing of the top margin is not critical, as the page contents will be adjusted on the proceedings. The critical dimensions are the actual width and height of the page content.
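For illustration, a minimal sketch of where that command goes (the \documentclass line here is a generic stand-in -- use the class/style file shipped with the paper kit templates instead):

    \documentclass[letterpaper,twocolumn]{article} % stand-in for the kit's style file
    \begin{document}
    \topmargin=0mm % fix-up described above, placed just after \begin{document}
    % ... title area, abstract, and two-column body per Part II ...
    \end{document}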
## Part III: Verifying the Paper

#### Using IEEE's PDF eXpress to check your PDFs

CCECE 2010 has registered for use of a new IEEE tool: IEEE PDF eXpress. IEEE PDF eXpress is a free service to IEEE conferences, allowing their authors to make IEEE Xplore-compatible PDFs (Conversion function) or to check PDFs that authors have made themselves for IEEE Xplore compatibility (PDF Check function).

Steps for checking or converting PDFs using IEEE PDF eXpress:

1. Proofread and check the layout of your manuscript (it is highly recommended that you do this BEFORE going to IEEE PDF eXpress).
2. Create an IEEE PDF eXpress account.
3. Upload source file(s) for Conversion, and/or PDF(s) for Checking.
4. Use IEEE PDF eXpress to attain IEEE Xplore-compatible PDF(s). The site contains extensive instructions, resources, helpful hints, and access to technical support.
5. Submit final, IEEE Xplore-compatible PDF(s) per the instructions in Part IV below.

Uploading a paper to IEEE PDF eXpress is not the same as submitting the paper to the conference for review. You will still need to submit the checked PDF by the normal means.

Procedure:

IEEE PDF eXpress: http://www.pdf-express.org/
Conference ID: ccece10x

1. Access the IEEE PDF eXpress site.

First-time users:
2. Set up an account.
3. You will receive online and email confirmation of successful account setup.

Previous users, but using it the first time for a new conference:
2. Log in with your existing account.
3. Check that the contact information is still valid, and click "Submit".
4. You will receive online and email confirmation of successful account setup.

Returning users:
2. Log in.
3. For each conference paper, click "Create New Title".
4. Enter identifying text for the paper (title is recommended but not required).
5. Click "Submit PDF for Checking" or "Submit Source Files for Conversion".
6. Indicate platform, source file type (if applicable), click Browse and navigate to the file, and click "Upload File". You will receive online and email confirmation of successful upload.
7. You will receive an email with your Checked PDF or IEEE PDF eXpress-converted PDF attached. If you submitted a PDF for Checking, the email will show if your file passed or failed.

## Part IV: Submission and Review of the Paper

The review process will be performed from the electronic submission of your paper. To ensure that your document is compatible with the review system, please adhere to the following compatibility requirements:

#### File Format

The 'IEEE Requirements for PDF Documents' MUST be followed EXACTLY. The conference is required to ensure that documents follow this specification. Papers must be submitted in Adobe's Portable Document Format (PDF). PDF files:

• must not have Adobe Document Protection or Document Security enabled,
• must have 'US Letter' sized pages,
• must be in first-page-first order, and
• must have ALL FONTS embedded and subset.

ALL FONTS MUST be embedded in the PDF file. There is no guarantee that the viewers of the paper (reviewers and those who view the proceedings CD-ROM after publication) have the same fonts used in the document. If fonts are not embedded in the submission, you will be contacted by CMS and asked to submit a file that has all fonts embedded. Please refer to your PDF or PS file generation utility's user guide to find out how to embed all fonts.
#### Converting Files to PDF

The IEEE PDF eXpress system can convert the following application formats to PDF:

• Microsoft Word
• WordPerfect
• Rich Text Format
• Freelance
• (La)TeX (a DVI and supported image files must be included in a compressed archive)
• PageMaker (images should not be embedded; include them with the main file in a compressed archive)
• FrameMaker
• Word Pro
• Quark (*.qxd and image files must be included in a compressed archive)

More information on this service is available from IEEE PDF eXpress at: www.ieee.org/pdfexpress

#### Electronic Paper Submission

When you have your document file ready, gather the following information before entering the submission system:

• Document file in PDF format
• Paper title
• Text file containing the paper abstract, in ASCII text format (for copying and pasting into the web page form)

To submit your document and author information, go to the 'Submit Paper' link on the CCECE 2010 area of the EDAS submission system: http://edas.info/

If this is your first time using EDAS for this or any other conference, you will need to create a new account by following the link. If you have registered with EDAS for any previous conference, your account will still be valid. Once you have logged in, choose 'Submit paper' to submit a new paper. The submission system will present a choice of Symposia for the conference. Choose the most appropriate track for your paper and click on the 'Submit' icon. The system will then present an entry form to allow you to enter the paper title, keywords, and abstract text (50-250 words in length). You will be able to enter authors later.

After you submit this information, the system will display a form allowing you to upload the paper, add or modify authors, edit the title or abstract, verify the manuscript upload, check the format of the paper, and check the status of your paper. An email message will be sent to the authors' email addresses to confirm when the paper has been submitted and when the file has been uploaded. ALL authors must be entered in the online form, and must appear in the online form in the same order in which the authors appear on the PDF.

#### Copyright Form

The IEEE copyright form for your paper must be signed and submitted via the online conference submission system listed above by the final submission deadline of Monday, March 15, 2010. The copyright forms are available locally in PDF or MS Word form. Failure to submit the signed copyright form will result in the automatic withdrawal of the paper from the conference.

#### Online Review Process

Please note that EDAS does not provide a conversion facility to convert full papers to PDF format. It is the responsibility of the author(s) to convert the full paper to PDF format, subject to the PDF format specifications mentioned earlier. Please see the Converting Files to PDF section above.

Afterwards, our submission system staff will visually inspect your full paper to assure that the document is readable and meets all formatting requirements, so that it can be included in a visually pleasing and consistent proceedings publication for CCECE 2010. If your paper passes inspection, it will be entered into the review process. A committee of reviewers selected by the conference committee will review the documents and rate them according to quality, relevance, and correctness. The conference technical committee will use these reviews to determine which papers will be accepted for presentation in the conference.
The result of the technical committee's decision will be communicated to the submitting authors by email, along with any reviewer comments, if any.

#### Monitoring the Status of Your Paper

After you submit your document, you may monitor the status of your paper as it progresses through the submission and review process by using the 'My Paper' area of the EDAS website, available at: http://edas.info/

#### Notification of Acceptance

Authors will be notified of paper acceptance or non-acceptance by email as close as possible to the published author notification date of March 7th, 2010. The email notification will include the presentation format chosen for your paper (lecture or poster) and may also include the presentation date and time, if available.

The notification email will include comments from the reviewers. The conference cannot guarantee that all of the reviewers will provide the level of comment desired by you. However, reviewers are encouraged to submit as detailed comments as possible. Because of the short amount of time between paper acceptance decisions and the beginning of the publication process, CCECE 2010 is not able to allow for a two-way discourse between the authors and the reviewers of a paper. If there appears to be a logistical error in the reviewer comments, such as the reviewer commenting on the wrong paper, etc., please contact CCECE 2010 at .

You will have an opportunity to modify your paper in response to the reviewers' comments. The final paper submission deadline is Monday, March 15, 2010.

#### Required Author Registration

Be sure that at least one author registers to attend the conference using the online registration system available through the conference website. Each paper must have at least one author registered, with the payment received by the author registration deadline (see above), to avoid being withdrawn from the conference. http://www.ieee.ca/ccece10/regist.php

#### Copyright Issues for Web Publication

If you plan to publish a copy of an accepted paper on the Internet by any means, you MUST display the following IEEE copyright notice on the first page that displays IEEE published (and copyrighted) material:

Copyright 2010 IEEE. Published in the IEEE 2010 Canadian Conference on Electrical and Computer Engineering (CCECE 2010), scheduled for May 2-5, 2010 in Calgary, Alberta, Canada. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE. Contact: Manager, Copyrights and Permissions / IEEE Service Center / 445 Hoes Lane / P.O. Box 1331 / Piscataway, NJ 08855-1331, USA. Telephone: + Intl. 908-562-3966.

If you post an electronic version of an accepted paper, you must provide the IEEE with the electronic address (URL, FTP address, etc.) of the posting.

## Part V: Preparation of the Presentation

Once final papers are received, the Technical Program Committee will finalize the program. To help authors prepare for lecture presentations, the following suggestions have been created:

### Lecture Presentations

Presentation time is critical: each paper is allocated 20 minutes for oral sessions. We recommend that presentation of your slides should take about 17-18 minutes, leaving 2-3 minutes for introduction, summary, and questions from the audience.
To achieve appropriate timing, organize your slides or viewgraphs around the points you intend to make, using no more than one slide per minute. A reasonable strategy is to allocate about 2 minutes per slide when there are equations or important key points to make, and one minute per slide when the content is less complex.

Slides attract and hold attention, and reinforce what you say - provided you keep them simple and easy to read. Plan on covering at most 6 points per slide, covered by 6 to 12 spoken sentences and no more than about two spoken minutes. Make sure each of your key points is easy to explain with the aid of the material on your slides. Do not read directly from the slide during your presentation. You shouldn't need to prepare a written speech, although it is often a good idea to prepare the opening and closing sentences in advance. It is very important that you rehearse your presentation in front of an audience before you give your presentation at CCECE.

Surrogate presenters must be sufficiently familiar with the material being presented to answer detailed questions from the audience. In addition, the surrogate presenter must contact the Session Chair in advance of the presenter's session.

A computer-driven slideshow for use with a data projector is recommended for your talk at CCECE. All presentation rooms will be equipped with a computer, a data projector, a microphone (for large rooms), a lectern, and a pointing device. An overhead projector will be provided upon request. If you need any other audio or visual equipment, such as a PAL or NTSC VHS player, or a 35mm slide projector, please send a request for such equipment by email to . Such requests must be received by one month before the conference date. Failure to make prior arrangements may mean that the equipment will not be available to you.
2018-02-19 15:44:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3748131990432739, "perplexity": 2209.3417186117917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812756.57/warc/CC-MAIN-20180219151705-20180219171705-00557.warc.gz"}
https://wiki.math.ntnu.no/drift/help/mac/installsoftware
# Installing software

If you don't need a step-by-step explanation, you can find the files in the software folder in the department's shared area (smb://felles.ansatt.ntnu.no/ntnu/ie-imf).

## Step 0: Start VPN

If you are on campus, you can skip this step.

## Step 1: Connect to the shared area

Go to Finder, and choose Connect to Server… from the Go menu. In the field Server Address, type

smb://felles.ansatt.ntnu.no/ntnu/ie-imf

then click Connect.

Username: win-ntnu-no\YOUR_USERNAME
Password: your regular NTNU password

If you check the box Remember this password in my keychain, you will not be asked for a password the next time you connect to the software share.

Open a Finder window and look for the mounted network share ie-imf, then browse to the software folder, where you will find the Mac folder. There you will see a list of folders, each containing software installation files.

## Step 2: Install the software

Browse to the folder containing the software you want to install. This folder will usually contain a few files:

• A file called README. If this file exists, you need to read and follow the instructions inside this file to install the program.
• A file with a name that ends with .dmg, .pkg, .mpkg or .iso. Unless the README file tells you otherwise, double-click the dmg/pkg/mpkg/iso file to start the installation, then follow the instructions on the screen.
2019-02-21 18:50:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5130392909049988, "perplexity": 4110.206244380249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247506094.64/warc/CC-MAIN-20190221172909-20190221194909-00639.warc.gz"}
http://mathhelpforum.com/calculus/29209-two-series-questions-print.html
# two series questions

• Feb 26th 2008, 11:15 AM
kuntah

Hi all,

1. $\sum ^{\infty}_{i=1} \frac{1}{\left(i-1\right)!} \frac{d}{d\lambda} \left(\lambda^{i}\right)= \frac{d}{d\lambda} \sum ^{\infty}_{i=1} \frac{1}{\left(i-1\right)!} \lambda^{i}$

2. $\sum^{\infty}_{x=1} x r^{x} = \frac{r}{\left(1-r\right)^{2}}$

Why are 1 and 2 true? For the first, I think it has to do with the radius of convergence. Can someone help to prove this? Thank you very much.

• Feb 26th 2008, 11:28 AM
Peritus

1. Power series - Wikipedia, the free encyclopedia

2. For $|r|<1$:

$\sum\limits_{x = 1}^\infty {xr^x } = r\sum\limits_{x = 1}^\infty {xr^{x - 1}} = r\sum\limits_{x = 1}^\infty {\frac{d}{dr}r^x} = r\frac{d}{dr}\sum\limits_{x = 1}^\infty {r^x} = r\frac{d}{dr}\frac{r}{1 - r} = \frac{r}{\left( {1 - r} \right)^2 }$

• Feb 26th 2008, 11:36 AM
kuntah

Thanks for the help. I already solved the first one now, and thanks for the second - it was the same mechanism.
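To spell out the radius-of-convergence point behind question 1 (this is the power-series fact the Wikipedia link points at): with the substitution $k = i-1$,

$\sum_{i=1}^{\infty}\frac{\lambda^{i}}{(i-1)!} = \lambda\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!} = \lambda e^{\lambda},$

so the series converges for every $\lambda$ (infinite radius of convergence), and a power series may be differentiated term by term anywhere inside its radius of convergence -- which is exactly what justifies interchanging $\frac{d}{d\lambda}$ and the sum.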
2017-01-23 08:32:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9120250940322876, "perplexity": 2984.304098785027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00568-ip-10-171-10-70.ec2.internal.warc.gz"}
https://talks.cam.ac.uk/talk/index/41711
# Normal numbers and fractal measures

• Pablo Shmerkin (Surrey)
• Wednesday 06 February 2013, 16:00-17:00
• MR11, CMS.

It is known from E. Borel that almost all real numbers are normal to all integer bases. On the other hand, it is conjectured that natural constants such as $\pi$, $e$ or $\sqrt{2}$ are normal, but this problem is so far intractable. In the talk I will describe a new dynamical approach to an intermediate problem: are "natural" fractal measures supported on numbers normal to a given base? Our results are formulated in terms of an auxiliary flow that reflects the structure of the measure as one zooms in towards a point. Unlike classical methods based on the Fourier transform, our approach allows us to establish normality in some non-integer bases and is robust under smooth perturbations of the measure. As applications, we complete and extend results of B. Host and E. Lindenstrauss on normality of $\times p$ invariant measures, and many other classical normality results. This is joint work with M. Hochman.

This talk is part of the Discrete Analysis Seminar series.
2021-04-14 04:32:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7114832401275635, "perplexity": 521.2280295443533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038076819.36/warc/CC-MAIN-20210414034544-20210414064544-00253.warc.gz"}
https://www.hepdata.net/record/ins1676481
Search for pair production of heavy vector-like quarks decaying into high-$p_T$ $W$ bosons and top quarks in the lepton-plus-jets final state in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector

The collaboration
JHEP 08 (2018) 048, 2018.

Abstract (data abstract)
CERN-LHC, ATLAS. Limits on pair production of a heavy vector-like B quark together with its antiparticle, where the B quark decays to a W boson and a top quark. The search is based on 36.1 +/- 0.8 fb^{-1} of pp collisions at sqrt{s} = 13 TeV recorded in 2015 and 2016 with the ATLAS detector at the CERN Large Hadron Collider. Data are analysed in the lepton-plus-jets final state, requiring at least four small-radius jets, of which at least one must be a b-tagged jet, and a large-radius jet. Mass limits on B production are set as a function of the decay branching ratios.

Basic event selection:
- ==1 electron or muon: pT >= 30 GeV, in fiducial volume, isolated from jets
- Missing transverse momentum >= 60 GeV
- >=4 small-R jets, with at least one b-tagged
- >=1 large-radius jet
- S_T > 1200 GeV
Comment: Leptonically and hadronically decaying VLQ candidates are reconstructed (see paper).

Additional selection:

Signal region 1 (RECOSR):
- >=3 large-radius jets
- >=1 hadronic W candidate: W-tagged large-R jet not overlapping with a b-tagged jet
- S_T >= 1500 GeV
- Delta R(lepton, p_T leading b-jet) >= 1
Comment: Final discriminant is the mass of the hadronically decaying VLB candidate.

Signal region 2 (BDTSR):
- S_T >= 1200 GeV
- RECOSR veto
Comment: Final discriminant is the BDT discriminant.

#### Table 0: Final Variable Distribution (RECOSR)
Data from Figure 3a. 10.17182/hepdata.83104.v1/t1
The hadronically decaying VLB candidate mass in the RECOSR region after the maximum likelihood fit in the two signal regions...

#### Table 1: Final Variable Distribution (BDTSR)
Data from Figure 3b. 10.17182/hepdata.83104.v1/t2
The BDT discriminant in the BDTSR region after the maximum likelihood fit in the two signal regions overlayed with the...

#### Table 2: BB->WtWt Cross Section Limit
Data from Figure 4a. 10.17182/hepdata.83104.v1/t3
Expected and observed upper limits at the 95% CL on the BB cross section as a function of B quark...

#### Table 3: SU(2) Singlet Cross Section Limit
Data from Figure 4b. 10.17182/hepdata.83104.v1/t4
Expected and observed upper limits at the 95% CL on the BB cross section as a function of B quark...

#### Table 4: Limit on B mass vs. BR
Data from Figure 5. 10.17182/hepdata.83104.v1/t5
Expected and observed 95% CL lower limits on the mass of the B quark in the branching ratio plane of...
2021-09-22 20:20:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8954013586044312, "perplexity": 6392.139560325756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057388.12/warc/CC-MAIN-20210922193630-20210922223630-00287.warc.gz"}
https://romantic-circles.org/editions/frankenstein/V1notes/hadtaken.html?width=400&height=300
had taken an irresistable hold of my imagination

Once again, Victor yields his will to his passion. But the terms he uses seem to invoke something beyond the question of free will and determinism. Victor at this point recognizes that his imagination, the creative power of fantasy, is driving his pursuit of the unknown, which tends to implicate a faculty ordinarily privileged in British Romanticism. Something of the same order happened four paragraphs earlier, when the imagination was likewise cited (I:3:7).
2022-06-28 04:04:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8605096340179443, "perplexity": 6325.65815506839}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103347800.25/warc/CC-MAIN-20220628020322-20220628050322-00058.warc.gz"}
https://math.meta.stackexchange.com/questions/5117/asking-help-for-image-posting?noredirect=1
# asking help for image posting

I want to post a problem which is in an image file. I have uploaded the image file, but I cannot post it. After uploading, the question box shows this (which I have written in brackets): { ![Untitled][1] }. After posting, it says:

Oops! Your question couldn't be submitted because: It does not meet our quality standards.

## migrated from math.stackexchange.com Sep 11 '12 at 15:21

This question came from our site for people studying math at any level and professionals in related fields.

• Try including your thoughts on the problem along with the image. – axblount Sep 11 '12 at 15:20
• – Willie Wong Sep 11 '12 at 15:24
• – Willie Wong Sep 11 '12 at 15:26
• i want to post a problem which is in image file... Honestly? Don't. – Did Sep 11 '12 at 16:44
• In future, try to type your question (better, use LaTeX). If you can't include the image, post the link & another user with a higher reputation will edit your question & add the image. – user2468 Sep 11 '12 at 18:52
• BTW the question was posted here by a different user. – Martin Sleziak Sep 12 '12 at 5:17

Let $(X_i,d_i)$, $i=1,2,3$ be the metric spaces where $X_1=X_2=X_3=\mathcal C[0,1]$ and $$d_1(f,g)=\sup_{x\in[0,1]}|f(x)-g(x)|\\ d_2(f,g)=\int_0^1 |f(x)-g(x)| \, dx\\ d_3(f,g)=\left(\int_0^1 |f(x)-g(x)|^2 \, dx\right)^{\frac12}.$$ Let $id$ be the identity map of $\mathcal C[0,1]$ onto itself. Pick out the true statements.

a. $id \colon X_1 \to X_2$ is continuous.
b. $id \colon X_2 \to X_1$ is continuous.
c. $id \colon X_3 \to X_2$ is continuous.
2019-11-17 17:18:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5331152677536011, "perplexity": 1119.0498860004752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669225.56/warc/CC-MAIN-20191117165616-20191117193616-00179.warc.gz"}
http://stackoverflow.com/questions/675039/how-can-i-create-directory-tree-in-c-linux?answertab=votes
# How can I create directory tree in C++/Linux?

I want an easy way to create multiple directories in C++/Linux. For example I want to save a file lola.file in the directory:

    /tmp/a/b/c

but if the directories are not there I want them to be created automagically. A working example would be perfect.

-

C++ does not have any built-in facilities for creating directories and trees per se. You will have to use C and system calls or an external library like Boost. C and system calls will be platform dependent. – jww Jan 20 at 14:22

@noloader Thanks a lot man.. but I think after 4 years I pretty much got my answer, as you can see below, in 13 different ways... – Lipis Jan 20 at 17:47

Yeah, I was surprised no one explicitly stated you cannot do it in C++ (assuming you wanted a portable method in C++ that worked on Linux). But you probably knew that ;). There were a lot of good suggestions for non-portable C code, though. – jww Jan 20 at 18:33

Here's a C function that can be compiled with C++ compilers.

    /*
    @(#)File:           $RCSfile: mkpath.c,v $
    @(#)Version:        $Revision: 1.13 $
    @(#)Last changed:   $Date: 2012/07/15 00:40:37 $
    @(#)Purpose:        Create all directories in path
    @(#)Author:         J Leffler
    */

    /*TABSTOP=4*/

    #include "jlss.h"
    #include "emalloc.h"

    #include <errno.h>
    #ifdef HAVE_UNISTD_H
    #include <unistd.h>
    #endif /* HAVE_UNISTD_H */
    #include <string.h>
    #include "sysstat.h"    /* Fix up for Windows - inc mode_t */

    typedef struct stat Stat;

    #ifndef lint
    /* Prevent over-aggressive optimizers from eliminating ID string */
    const char jlss_id_mkpath_c[] = "@(#)$Id: mkpath.c,v 1.13 2012/07/15 00:40:37 jleffler Exp $";
    #endif /* lint */

    static int do_mkdir(const char *path, mode_t mode)
    {
        Stat            st;
        int             status = 0;

        if (stat(path, &st) != 0)
        {
            /* Directory does not exist. EEXIST for race condition */
            if (mkdir(path, mode) != 0 && errno != EEXIST)
                status = -1;
        }
        else if (!S_ISDIR(st.st_mode))
        {
            errno = ENOTDIR;
            status = -1;
        }

        return(status);
    }

    /**
    ** mkpath - ensure all directories in path exist
    ** Algorithm takes the pessimistic view and works top-down to ensure
    ** each directory in path exists, rather than optimistically creating
    ** the last element and working backwards.
    */
    int mkpath(const char *path, mode_t mode)
    {
        char           *pp;
        char           *sp;
        int             status;
        char           *copypath = STRDUP(path);

        status = 0;
        pp = copypath;
        while (status == 0 && (sp = strchr(pp, '/')) != 0)
        {
            if (sp != pp)
            {
                /* Neither root nor double slash in path */
                *sp = '\0';
                status = do_mkdir(copypath, mode);
                *sp = '/';
            }
            pp = sp + 1;
        }
        if (status == 0)
            status = do_mkdir(path, mode);
        FREE(copypath);
        return (status);
    }

    #ifdef TEST
    #include <stdio.h>

    /*
    ** Stress test with parallel running of mkpath() function.
    ** Before the EEXIST test, code would fail.
    ** With the EEXIST test, code does not fail.
    **
    ** Test shell script:
    ** PREFIX=mkpath.
    ** NAME=./$PREFIX/sa/32/ad/13/23/13/12/13/sd/ds/ww/qq/ss/dd/zz/xx/dd/rr/ff/ff/ss/ss/ss/ss/ss/ss/ss/ss
    ** : ${MKPATH:=mkpath}
    ** ./$MKPATH $NAME &
    ** [...repeat a dozen times or so...]
    ** ./$MKPATH $NAME &
    ** wait
    ** rm -fr ./$PREFIX/
    */

    int main(int argc, char **argv)
    {
        int i;

        for (i = 1; i < argc; i++)
        {
            for (int j = 0; j < 20; j++)
            {
                if (fork() == 0)
                {
                    int rc = mkpath(argv[i], 0777);
                    if (rc != 0)
                        fprintf(stderr, "%d: failed to create (%d: %s): %s\n",
                                (int)getpid(), errno, strerror(errno), argv[i]);
                    exit(rc == 0 ? EXIT_SUCCESS : EXIT_FAILURE);
                }
            }
            int status;
            int fail = 0;
            while (wait(&status) != -1)
            {
                if (WEXITSTATUS(status) != 0)
                    fail = 1;
            }
            if (fail == 0)
                printf("created: %s\n", argv[i]);
        }
        return(0);
    }
    #endif /* TEST */

The macros STRDUP() and FREE() are error-checking versions of strdup() and free(), declared in emalloc.h (and implemented in emalloc.c and estrdup.c). The "sysstat.h" header deals with broken versions of <sys/stat.h> and can be replaced by <sys/stat.h> on modern Unix systems (but there were many issues back in 1990). And "jlss.h" declares mkpath().

The change between v1.12 (previous) and v1.13 (above) is the test for EEXIST in do_mkdir(). This was pointed out as necessary by Switch — thank you, Switch. The test code has been upgraded and reproduced the problem on a MacBook Pro (2.3GHz Intel Core i7, running Mac OS X 10.7.4), and suggests that the problem is fixed in the revision (but testing can only show the presence of bugs, never their absence). (You are hereby given permission to use this code for any purpose with attribution.)

-

Ok... the result of that one is exactly what I wanted..! Can someone tell me if this is faster than system("mkdir -p /tmp/a/b/c").. cause this is so much easier :) – Lipis Mar 23 '09 at 21:10

It surely is faster than system. System has a lot of overhead involved. Basically, the process has to be forked, then at least two binaries have to be loaded (one will probably be in cache already), one of which will be yet another fork of the other, ... – ypnos Mar 23 '09 at 21:55

I forgot: And then "mkdir -p" will do at least the same as the code posted above! – ypnos Mar 23 '09 at 21:56

thanks for the info! works perfectly... – Lipis Mar 23 '09 at 22:09

There's a subtle race condition in this code that I actually hit. It only happens when multiple programs start up simultaneously and make the same folder path. The fix is to add if (errno != EEXIST) { status = -1; } when mkdir fails. – Switch Jul 14 '12 at 23:57

Easy with Boost.Filesystem: create_directories

    #include <boost/filesystem.hpp>
    //...
    boost::filesystem::create_directories("/tmp/a/b/c");

-

That's kind of nifty. How big an overhead is there including boost in a C++ project? – Paul Tomblin Mar 23 '09 at 20:46

Well, most boost libraries are header-only, meaning there is no overhead besides what you use. In the case of Boost.Filesystem, it requires compiling though. On my disk, the compiled library weighs ~60KB. – Benoît Mar 23 '09 at 20:54

It is already available in some C++11 compilers now. – danijar Aug 3 '13 at 22:16

Regarding the C++11 compilers mentioned by @danijar, the comment here made it clearer: The <filesystem> header is not part of C++11; it is a proposal for C++ TR2 based on the Boost.Filesystem library. Visual C++ 2012 includes an implementation of the proposed library. – Chunliang Lyu Sep 4 '13 at 7:49

boost::filesystem is not header-only: stackoverflow.com/questions/13604090/… – ftvs Nov 1 '13 at 3:15

    #include <sys/types.h>
    #include <sys/stat.h>

    int status;
    ...
    status = mkdir("/tmp/a/b/c", S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH);

From here. You may have to do separate mkdirs for /tmp, /tmp/a, /tmp/a/b and then /tmp/a/b/c because there isn't an equivalent of the -p flag in the C API. Be sure to ignore the EEXIST errno while you're doing the upper level ones.

-

Fun fact: At least Solaris and HP/UX have mkdirp(), although it's clearly not optimal for portability. – Martin Carpenter Mar 23 '09 at 20:21

that's the point.. that I don't want to call all these functions separately. – Lipis Mar 23 '09 at 20:31

Calling mkdir a few times will be way, way faster than calling system once. – Paul Tomblin Mar 23 '09 at 20:38

system("mkdir -p /tmp/a/b/c") is the shortest way I can think of. It's not cross-platform but will work under Linux.

-

If you're going to give the solution as a shell command, it would be well to mention system(3) – dmckee Mar 23 '09 at 20:19

True, i just updated the post. – ChristopheD Mar 23 '09 at 20:26

I don't believe that's faster than this method. – einpoklum Mar 22 at 12:42

You said "C++" but everyone here seems to be thinking "Bash shell." Check out the source code to gnu mkdir; then you can see how to implement the shell commands in C++.

-

What do you mean "everyone"? – Paul Tomblin Mar 23 '09 at 20:20

Well system("mkdir...") should do the trick on linux. It's not cross-platform though. – ChristopheD Mar 23 '09 at 20:21

teaching a man how to catch a fish: +1 – Martin Carpenter Mar 23 '09 at 20:24

ChristopheD: system() is rarely the correct answer. – Martin Carpenter Mar 23 '09 at 20:27

I second what @MartinCarpenter says – MadPumpkin Dec 18 '11 at 0:01

This is similar to the previous but works forward through the string instead of recursively backwards. Leaves errno with the right value for the last failure. If there's a leading slash, there's an extra time through the loop which could have been avoided via one find_first_of() outside the loop or by detecting the leading / and setting pre to 1. The efficiency is the same whether we get set up by a first loop or a pre-loop call, and the complexity would be (slightly) higher when using the pre-loop call.

    #include <iostream>
    #include <string>
    #include <errno.h>
    #include <sys/stat.h>

    int mkpath(std::string s, mode_t mode)
    {
        size_t pre = 0, pos;
        std::string dir;
        int mdret;

        if (s[s.size()-1] != '/') {
            // force trailing / so we can handle everything in loop
            s += '/';
        }

        while ((pos = s.find_first_of('/', pre)) != std::string::npos) {
            dir = s.substr(0, pos++);
            pre = pos;
            if (dir.size() == 0)
                continue; // if leading / first time is 0 length
            if ((mdret = mkdir(dir.c_str(), mode)) && errno != EEXIST) {
                return mdret;
            }
        }
        return mdret;
    }

    int main()
    {
        int mkdirretval;
        mkdirretval = mkpath("./foo/bar", 0755);
        std::cout << mkdirretval << '\n';
    }

-

Since this post is ranking high in Google for "Create Directory Tree", I am going to post an answer that will work for Windows — this will work using Win32 API compiled for UNICODE or MBCS. This is ported from Mark's code above.

Since this is Windows we are working with, directory separators are BACK-slashes, not forward slashes. If you would rather have forward slashes, change '\\' to '/'.

It will work with: c:\foo\bar\hello\world and c:\foo\bar\hello\world\ (i.e.: does not need the trailing slash, so you don't have to check for it.)

Before saying "Just use SHCreateDirectoryEx() in Windows", note that SHCreateDirectoryEx() is deprecated and could be removed at any time from future versions of Windows.
    bool CreateDirectoryTree(LPCTSTR szPathTree, LPSECURITY_ATTRIBUTES lpSecurityAttributes = NULL)
    {
        bool bSuccess = false;
        const BOOL bCD = CreateDirectory(szPathTree, lpSecurityAttributes);
        DWORD dwLastError = 0;
        if (!bCD) {
            dwLastError = GetLastError();
        } else {
            return true;
        }
        switch (dwLastError) {
        case ERROR_ALREADY_EXISTS:  /* case label apparently lost in extraction;
                                       ERROR_ALREADY_EXISTS is the natural candidate */
            bSuccess = true;
            break;
        case ERROR_PATH_NOT_FOUND:
            {
                TCHAR szPrev[MAX_PATH] = {0};
                LPCTSTR szLast = _tcsrchr(szPathTree, '\\');
                _tcsnccpy(szPrev, szPathTree, (int)(szLast - szPathTree));
                if (CreateDirectoryTree(szPrev, lpSecurityAttributes)) {
                    bSuccess = CreateDirectory(szPathTree, lpSecurityAttributes) != 0;
                    if (!bSuccess) {
                    }
                } else {
                    bSuccess = false;
                }
            }
            break;
        default:
            bSuccess = false;
            break;
        }
        return bSuccess;
    }

-

That's funny that you added a windows answer to a question about how to do something in linux;) – Patrick Jul 17 '13 at 3:46

@Patrick -- why not read my post before posting a smart-ass comment. – Andrew Heinlein Oct 29 '13 at 0:41

    mkdir -p /dir/to/the/file
    touch /dir/to/the/file/thefile.ending

-

the -p option is what I'm looking for. Thanks! – asgs Feb 14 '12 at 19:43

    bool mkpath( std::string path )
    {
        bool bSuccess = false;
        int nRC = ::mkdir( path.c_str(), 0775 );
        if( nRC == -1 )
        {
            switch( errno )
            {
                case ENOENT:
                    // parent didn't exist, try to create it
                    if( mkpath( path.substr(0, path.find_last_of('/')) ) )
                        // Now, try to create again.
                        bSuccess = 0 == ::mkdir( path.c_str(), 0775 );
                    else
                        bSuccess = false;
                    break;
                case EEXIST:
                    // Done!
                    bSuccess = true;
                    break;
                default:
                    bSuccess = false;
                    break;
            }
        }
        else
            bSuccess = true;

        return bSuccess;
    }

-

The others got you the right answer, but I thought I'd demonstrate another neat thing you can do:

    mkdir -p /tmp/a/{b,c}/d

Will create the following paths:

    /tmp/a/b/d
    /tmp/a/c/d

The braces allow you to create multiple directories at once on the same level of the hierarchy, whereas the -p option means "create parent directories as needed".

-

after seeing Paul's answer, I realize that I (and a lot of other people) misunderstood the question... – rmeador Mar 23 '09 at 20:17

and I think rmeador is right :) – Lipis Mar 23 '09 at 20:20

If somebody can just update this by changing it to system("mkdir -p /tmp/a/{b,c}/d"), cause the question is not about doing it in shell.. but through C++. – Lipis Mar 23 '09 at 20:38

Is the "{a,b}" format shell-dependent? – Andy Mar 23 '09 at 20:39

I think {a,b} will work in both sh-derived and csh-derived shells. I'm not sure if it will work in a system() command, though. – Paul Tomblin Mar 23 '09 at 20:45

So I needed mkdirp() today, and found the solutions on this page overly complicated. Hence I wrote a fairly short snippet that can easily be copied in, for others who stumble upon this thread and wonder why we need so many lines of code.

mkdirp.h

    #ifndef MKDIRP_H
    #define MKDIRP_H

    #include <sys/stat.h>

    #define DEFAULT_MODE S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH

    /** Utility function to create directory tree */
    bool mkdirp(const char* path, mode_t mode = DEFAULT_MODE);

    #endif // MKDIRP_H

mkdirp.cpp

    #include <errno.h>
    #include "mkdirp.h"  // include of its own header, presumably elided in the original post

    bool mkdirp(const char* path, mode_t mode)
    {
        // const cast for hack
        char* p = const_cast<char*>(path);

        // Do mkdir for each slash until end of string or error
        while (*p != '\0')
        {
            // Skip first character
            p++;

            // Find first slash or end
            while (*p != '\0' && *p != '/')
                p++;

            // Remember value from p
            char v = *p;

            // Write end of string at p
            *p = '\0';

            // Create folder from path to '\0' inserted at p
            if (mkdir(path, mode) == -1 && errno != EEXIST)
            {
                *p = v;
                return false;
            }

            // Restore path to its former glory
            *p = v;
        }

        return true;
    }

If you don't like const casting and temporarily modifying the string, just do a strdup() and free() it afterwards.

-

Posted to a gist too, so I don't forget where I put it next time I need it :) gist.github.com/jonasfj/7797272 – jonasfj Dec 4 '13 at 23:11

Change the directory name, and then use system("mkdir NEWDIR"); yeah I know it sounds weird but it worked for me.

-

I would suggest you read the question again, a few answers and maybe a quick tour on how this site works here: stackoverflow.com/about – Lipis Oct 13 '13 at 22:06

    #include <iostream>
    #include <string>
    #include <sys/types.h>
    #include <sys/stat.h>

    using namespace std;

    void mkdirTree(string sub, string dir)
    {
        if (sub.length() == 0)
            return;

        int i = 0;
        for (; i < (int)sub.length(); i++) {
            dir += sub[i];
            if (sub[i] == '/')
                break;
        }
        mkdir(dir.c_str(), S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH);
        if (i + 1 < (int)sub.length())
            mkdirTree(sub.substr(i + 1), dir);
    }

    int main()
    {
        string new_dir = "a/b/c";
        mkdirTree(new_dir, "");
    }

-

I know it's an old question but it shows up high on google search results and the answers provided here are not really in C++ or are a bit too complicated. Please note that in my example createDirTree() is very simple because all the heavy lifting (error checking, path validation) needs to be done by createDir() anyway. Also createDir() should return true if the directory already exists or the whole thing won't work. Here's how I would do that in C++:

    #include <iostream>
    #include <string>

    bool createDir(const std::string dir)
    {
        std::cout << "Make sure dir is a valid path, it does not exist and create it: "
                  << dir << std::endl;
        return true;
    }

    bool createDirTree(const std::string full_path)
    {
        size_t pos = 0;
        bool ret_val = true;

        while (ret_val == true && pos != std::string::npos)
        {
            pos = full_path.find('/', pos + 1);
            ret_val = createDir(full_path.substr(0, pos));
        }

        return ret_val;
    }

    int main()
    {
        createDirTree("/tmp/a/b/c");
        return 0;
    }

Of course the createDir() function will be system-specific and there are already enough examples in other answers of how to write it for linux, so I decided to skip it.

-
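A final note for modern codebases (my addition; std::filesystem has been part of the standard library since C++17, so this is the standard API rather than anything from the answers above): std::filesystem::create_directories does exactly what mkdir -p does, creating every missing component of the path. A minimal sketch:

    #include <filesystem>
    #include <iostream>

    int main()
    {
        std::error_code ec;
        // Creates /tmp, /tmp/a, /tmp/a/b and /tmp/a/b/c as needed, like `mkdir -p`.
        // Returns false without setting ec if the directories already exist.
        if (!std::filesystem::create_directories("/tmp/a/b/c", ec) && ec)
            std::cerr << "failed: " << ec.message() << '\n';
        return 0;
    }

The error_code overload avoids exceptions; there is also a throwing overload that takes only the path.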
2014-07-10 10:56:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.385404109954834, "perplexity": 11847.863617009045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776415016.1/warc/CC-MAIN-20140707234015-00002-ip-10-180-212-248.ec2.internal.warc.gz"}
https://www.lmfdb.org/L/2/567/63.59
## Results (1-50 of at least 1000)

| Label | $\alpha$ | $A$ | $d$ | $N$ | $\chi$ | $\nu$ | $w$ | $\operatorname{Arg}(\epsilon)$ | $r$ | First zero | Origin |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2-567-63.52-c0-0-0 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.52 | 0.0 | 0 | 0.0909 | 0 | 1.72107 | Modular form 567.1.t.a.136.1 |
| 2-567-7.6-c0-0-3 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 7.6 | 0.0 | 0 | 0 | 0 | 2.52899 | Modular form 567.1.d.c.244.2 |
| 2-567-63.34-c0-0-2 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.34 | 0.0 | 0 | -0.0555 | 0 | 1.67920 | Modular form 567.1.l.b.55.1 |
| 2-567-63.32-c0-0-0 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.32 | 0.0 | 0 | 0.232 | 0 | 2.25104 | Artin representation 2.567.6t5.b.a; Modular form 567.1.n.a.53.1 |
| 2-567-63.34-c0-0-1 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.34 | 0.0 | 0 | -0.138 | 0 | 1.65885 | Modular form 567.1.l.d.55.1 |
| 2-567-63.34-c0-0-3 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.34 | 0.0 | 0 | 0.111 | 0 | 1.80881 | Artin representation 2.567.6t5.d.b; Modular form 567.1.l.c.55.1 |
| 2-567-63.13-c0-0-3 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.13 | 0.0 | 0 | -0.111 | 0 | 1.89819 | Artin representation 2.567.6t5.d.a; Modular form 567.1.l.c.433.1 |
| 2-567-63.13-c0-0-2 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.13 | 0.0 | 0 | 0.0555 | 0 | 1.86503 | Modular form 567.1.l.b.433.1 |
| 2-567-63.34-c0-0-4 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.34 | 0.0 | 0 | 0.361 | 0 | 2.71344 | Modular form 567.1.l.d.55.2 |
| 2-567-63.13-c0-0-4 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.13 | 0.0 | 0 | 0.388 | 0 | 2.35058 | Modular form 567.1.l.a.433.1 |
| 2-567-63.23-c0-0-0 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.23 | 0.0 | 0 | 0.0454 | 0 | 2.02089 | Artin representation 2.567.6t5.c.b; Modular form 567.1.j.a.296.1 |
| 2-567-63.34-c0-0-0 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.34 | 0.0 | 0 | -0.388 | 0 | 0.866612 | Modular form 567.1.l.a.55.1 |
| 2-567-63.13-c0-0-1 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.13 | 0.0 | 0 | -0.361 | 0 | 1.54709 | Modular form 567.1.l.d.433.2 |
| 2-567-7.6-c0-0-2 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 7.6 | 0.0 | 0 | 0 | 0 | 1.91754 | Artin representation 2.567.6t3.a; Artin representation 2.567.6t3.a.a; Modular form 567.1.d.b; Modular form 567.1.d.b.244.1 |
| 2-567-7.6-c0-0-0 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 7.6 | 0.0 | 0 | 0 | 0 | 1.17964 | Modular form 567.1.d.c.244.1 |
| 2-567-7.6-c0-0-1 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 7.6 | 0.0 | 0 | 0 | 0 | 1.30083 | Artin representation 2.567.3t2.b; Artin representation 2.567.3t2.b.a; Modular form 567.1.d.a; Modular form 567.1.d.a.244.1 |
| 2-567-63.2-c0-0-0 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.2 | 0.0 | 0 | -0.232 | 0 | 0.950768 | Artin representation 2.567.6t5.b.b; Modular form 567.1.n.a.107.1 |
| 2-567-63.13-c0-0-0 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.13 | 0.0 | 0 | 0.138 | 0 | 1.11999 | Modular form 567.1.l.d.433.1 |
| 2-567-63.61-c0-0-0 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.61 | 0.0 | 0 | 0.131 | 0 | 2.24789 | Modular form 567.1.k.a.460.1 |
| 2-567-63.40-c0-0-0 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.40 | 0.0 | 0 | -0.0909 | 0 | 1.20133 | Modular form 567.1.t.a.271.1 |
| 2-567-63.11-c0-0-0 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.11 | 0.0 | 0 | -0.0454 | 0 | 1.63236 | Artin representation 2.567.6t5.c.a; Modular form 567.1.j.a.431.1 |
| 2-567-63.31-c0-0-0 | 0.531 | 0.282 | 2 | $3^{4} \cdot 7$ | 63.31 | 0.0 | 0 | -0.131 | 0 | 1.43296 | Modular form 567.1.k.a.514.1 |
| 2-567-567.104-c1-0-11 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.104 | 1.0 | 1 | 0.362 | 0 | 0.774803 | Modular form 567.2.bm.a.104.5 |
| 2-567-567.101-c1-0-54 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | 0.314 | 0 | 1.97505 | Modular form 567.2.br.a.101.53 |
| 2-567-567.104-c1-0-12 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.104 | 1.0 | 1 | -0.261 | 0 | 0.782479 | Modular form 567.2.bm.a.104.37 |
| 2-567-567.101-c1-0-33 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | -0.0311 | 0 | 1.16148 | Modular form 567.2.br.a.101.54 |
| 2-567-567.101-c1-0-44 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | 0.193 | 0 | 1.50588 | Modular form 567.2.br.a.101.8 |
| 2-567-567.101-c1-0-53 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | 0.358 | 0 | 1.94767 | Modular form 567.2.br.a.101.31 |
| 2-567-567.101-c1-0-63 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | 0.244 | 0 | 2.46216 | Modular form 567.2.br.a.101.57 |
| 2-567-567.104-c1-0-10 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.104 | 1.0 | 1 | 0.447 | 0 | 0.692688 | Modular form 567.2.bm.a.104.65 |
| 2-567-567.101-c1-0-29 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | -0.143 | 0 | 1.08882 | Modular form 567.2.br.a.101.14 |
| 2-567-567.101-c1-0-34 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | -0.239 | 0 | 1.17063 | Modular form 567.2.br.a.101.36 |
| 2-567-567.101-c1-0-26 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | -0.303 | 0 | 0.918684 | Modular form 567.2.br.a.101.45 |
| 2-567-567.101-c1-0-23 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | -0.229 | 0 | 0.877061 | Modular form 567.2.br.a.101.39 |
| 2-567-567.101-c1-0-27 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | -0.204 | 0 | 0.926451 | Modular form 567.2.br.a.101.17 |
| 2-567-567.101-c1-0-35 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | 0.0685 | 0 | 1.18124 | Modular form 567.2.br.a.101.12 |
| 2-567-567.104-c1-0-13 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.104 | 1.0 | 1 | -0.450 | 0 | 0.827111 | Modular form 567.2.bm.a.104.67 |
| 2-567-567.104-c1-0-54 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.104 | 1.0 | 1 | 0.359 | 0 | 1.95659 | Modular form 567.2.bm.a.104.24 |
| 2-567-567.104-c1-0-42 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.104 | 1.0 | 1 | -0.255 | 0 | 1.69880 | Modular form 567.2.bm.a.104.68 |
| 2-567-567.104-c1-0-55 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.104 | 1.0 | 1 | 0.0510 | 0 | 1.96428 | Modular form 567.2.bm.a.104.52 |
| 2-567-567.104-c1-0-28 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.104 | 1.0 | 1 | 0.236 | 0 | 1.36959 | Modular form 567.2.bm.a.104.35 |
| 2-567-567.104-c1-0-25 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.104 | 1.0 | 1 | 0.0944 | 0 | 1.33278 | Modular form 567.2.bm.a.104.21 |
| 2-567-567.104-c1-0-32 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.104 | 1.0 | 1 | -0.134 | 0 | 1.49722 | Modular form 567.2.bm.a.104.51 |
| 2-567-567.104-c1-0-41 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.104 | 1.0 | 1 | 0.00507 | 0 | 1.68689 | Modular form 567.2.bm.a.104.63 |
| 2-567-567.104-c1-0-50 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.104 | 1.0 | 1 | 0.451 | 0 | 1.87990 | Modular form 567.2.bm.a.104.20 |
| 2-567-567.104-c1-0-53 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.104 | 1.0 | 1 | 0.387 | 0 | 1.94531 | Modular form 567.2.bm.a.104.39 |
| 2-567-567.101-c1-0-20 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | -0.375 | 0 | 0.816816 | Modular form 567.2.br.a.101.42 |
| 2-567-567.101-c1-0-22 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | -0.107 | 0 | 0.853819 | Modular form 567.2.br.a.101.26 |
| 2-567-567.101-c1-0-24 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | -0.146 | 0 | 0.882477 | Modular form 567.2.br.a.101.3 |
| 2-567-567.101-c1-0-25 | 2.12 | 4.52 | 2 | $3^{4} \cdot 7$ | 567.101 | 1.0 | 1 | -0.269 | 0 | 0.887226 | Modular form 567.2.br.a.101.23 |
2021-12-05 01:13:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9955967664718628, "perplexity": 587.3205526111251}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363134.25/warc/CC-MAIN-20211205005314-20211205035314-00511.warc.gz"}
http://math.stackexchange.com/questions/97799/express-the-propositional-form-ie-using-only-the-nand-operator
# Express the propositional form c∧(a→b) using only the NAND operator

Recall that the NAND operator (denoted by "|") is equivalent to AND followed by negation; that is, for any two propositions a and b, the propositional form (a|b) is logically equivalent to ¬(a∧b). Express the propositional form c∧(a→b) using only the NAND operator.

-

If this is homework, then please add the homework tag to your question. – Alexander Thumm Jan 10 '12 at 7:45

Did you try to figure out the equivalents of "^" or "->" or "¬" using only the NAND operator before posing this problem? – Doug Spoonwood Jan 10 '12 at 20:44

You can rewrite c∧(a→b) as c∧((¬a)∨b). Use de Morgan's law to find that

(1) c∧((¬a)∨b) = c∧(¬(a∧(¬b))) = c∧(a|(¬b)).

Now, observe that since

(2) ¬d = d|d,

¬ can be expressed in terms of the NAND operator. Therefore, ∧ can also be expressed in terms of the NAND operator since

(3) e∧f = ¬(e|f).

Substituting the identities (2) and (3) into (1) as required will give an expression for c∧(a→b) which uses only NAND.

-

Solution that involves the operators NAND and NOT:
$c \land (a \Rightarrow b) \Leftrightarrow c \land (\lnot a \lor b) \Leftrightarrow c \land (\lnot(a\land \lnot b)) \Leftrightarrow c \land (a | \lnot b) \Leftrightarrow \lnot(c | (a| \lnot b))$
EDIT: Now, as David rightly observed, use the fact that: $\lnot p \Leftrightarrow p | p$

-
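For completeness, here is the final substitution that the last answer leaves to the reader: applying $\lnot b \Leftrightarrow b|b$ and $\lnot p \Leftrightarrow p|p$ to $\lnot(c \mid (a \mid \lnot b))$ gives an expression in NAND alone,

$$c \land (a \Rightarrow b) \;\Leftrightarrow\; \bigl(c \mid (a \mid (b \mid b))\bigr) \;\bigl|\; \bigl(c \mid (a \mid (b \mid b))\bigr),$$

which can be verified against all eight truth assignments of $a$, $b$, $c$; for instance $a=1, b=0, c=1$ gives $b|b=1$, $a|(b|b)=0$, $c|0=1$, $1|1=0$, matching $c\land(a\to b)=0$.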
2014-03-12 15:59:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8771768808364868, "perplexity": 1783.556980590337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021919409/warc/CC-MAIN-20140305121839-00071-ip-10-183-142-35.ec2.internal.warc.gz"}
https://practicepaper.in/gate-ee/linear-time-invariant-systems
# Linear Time Invariant Systems

Question 1
Which of the following options is true for a linear time-invariant discrete-time system that obeys the difference equation:
$y[n]-ay[n-1]=b_0x[n]-b_1x[n-1]$

A. y[n] is unaffected by the values of $x[n - k];\ k \gt 2$
B. The system is necessarily causal.
C. The system impulse response is non-zero at infinitely many instants.
D. When $x[n] = 0,\ n \lt 0$, the function $y[n];\ n \gt 0$ is solely determined by the function x[n].

GATE EE 2020   Signals and Systems

Question 1 Explanation:
\begin{aligned} y(n)-ay(n-1)&=b_{0}x(n)-b_{1}x(n-1) \\ &\text{By applying the z-transform,} \\ Y(z)-az^{-1}Y(z)&=b_{0}X(z)-b_{1}z^{-1}X(z)\\ \Rightarrow \, \, H(z)&=\frac{Y(z)}{X(z)}=\frac{b_{0}-b_{1}z^{-1}}{1-az^{-1}} \end{aligned}
By taking the right-sided inverse z-transform,
$h(n)=b_{0}a^{n}u(n)-b_{1}a^{n-1}u(n-1)$
By taking the left-sided inverse z-transform,
$h(n)=-b_{0}a^{n}u(-n-1)+b_{1}a^{n-1}u(-n)$
Thus the system is not necessarily causal. The impulse response is non-zero at infinitely many instants.

Question 2
A continuous-time input signal x(t) is an eigenfunction of an LTI system, if the output is

A. k x(t), where k is an eigenvalue
B. k $e^{j\omega t}$ x(t), where k is an eigenvalue and $e^{j\omega t}$ is a complex exponential signal
C. x(t) $e^{j\omega t}$, where $e^{j\omega t}$ is a complex exponential signal
D. k H($\omega$), where k is an eigenvalue and H($\omega$) is a frequency response of the system

GATE EE 2018   Signals and Systems

Question 2 Explanation:
An eigenfunction is a type of input for which the output is a constant times the input, i.e. $y(t)=k\,x(t)$, where x(t) is the system input (the eigenfunction), H(s) is the transfer function of the system, and y(t) is the system output. For the input $x(t)=e^{at}$,
$y(t)=H(s)|_{s=a}\; e^{at}=k \cdot x(t)$
where the eigenvalue is $k=H(s)|_{s=a}$ and x(t) is the eigenfunction input.

Question 3
Let z(t)=x(t) * y(t), where "*" denotes convolution. Let c be a positive real-valued constant. Choose the correct expression for z(ct).

A. c x(ct)*y(ct)
B. x(ct)*y(ct)
C. c x(t)*y(ct)
D. c x(ct)*y(t)

GATE EE 2017-SET-1   Signals and Systems

Question 3 Explanation:
Time-scaling property of convolution: if $x(t)*y(t)=z(t)$, then $x(ct)*y(ct)=\frac{1}{c} z(ct)$, so
$z(ct)=c \times x(ct) * y(ct)$

Question 4
Consider a causal LTI system characterized by the differential equation $\frac{dy(t)}{dt}+\frac{1}{6}y(t)=3x(t)$. The response of the system to the input $x(t)=3e^{-\frac{t}{3}}u(t)$, where u(t) denotes the unit step function, is

A. $9e^{-\frac{t}{3}}u(t)$
B. $9e^{-\frac{t}{6}}u(t)$
C. $9e^{-\frac{t}{3}}u(t)-6e^{-\frac{t}{6}}u(t)$
D. $54e^{-\frac{t}{6}}u(t)-54e^{-\frac{t}{3}}u(t)$

GATE EE 2016-SET-2   Signals and Systems

Question 4 Explanation:
The differential equation is
\begin{aligned} \frac{dy(t)}{dt} &+\frac{1}{6}y(t)=3x(t) \\ \text{So, }sY(s)&+\frac{1}{6}Y(s) =3X(s) \\ Y(s) &=\frac{3X(s)}{\left ( s+\frac{1}{6} \right )} \\ 3X(s) &=\frac{9}{\left ( s+\frac{1}{3} \right )} \\ \text{So, } Y(s)&=\frac{9}{\left ( s+\frac{1}{3} \right )\left ( s+\frac{1}{6} \right )} =\frac{54}{\left ( s+\frac{1}{6} \right )} -\frac{54}{\left ( s+\frac{1}{3} \right )}\\ \text{So, }y(t) &= (54e^{-t/6}-54e^{-t/3})u(t) \end{aligned}

Question 5
The output of a continuous-time, linear time-invariant system is denoted by T{x(t)}, where x(t) is the input signal. A signal z(t) is called an eigen-signal of the system T when $T\{z(t)\} = \gamma z(t)$, where $\gamma$ is a complex number, in general, and is called an eigenvalue of T. Suppose the impulse response of the system T is real and even. Which of the following statements is TRUE?
A. cos(t) is an eigen-signal but sin(t) is not
B. cos(t) and sin(t) are both eigen-signals but with different eigenvalues
C. sin(t) is an eigen-signal but cos(t) is not
D. cos(t) and sin(t) are both eigen-signals with identical eigenvalues

GATE EE 2016-SET-1   Signals and Systems

Question 5 Explanation:
Given that the impulse response is real and even, $H(j\omega )$ will also be real and even, so $H(j\omega )=H(-j\omega )$.
Now, if $\cos (t)$ is the input, i.e. $\frac{e^{jt}+e^{-jt}}{2}$ is the input, the output will be
$\frac{H(j1)e^{jt}+H(-j1)e^{-jt}}{2}=H(j1)\left ( \frac{e^{jt}+e^{-jt}}{2} \right )= H(j1) \cos (t)$
If $\sin (t)$ is the input, i.e. $\frac{e^{jt}-e^{-jt}}{2j}$ is the input, the output will be
$\frac{H(j1)e^{jt}-H(-j1)e^{-jt}}{2j}=H(j1)\left ( \frac{e^{jt}-e^{-jt}}{2j} \right )= H(j1) \sin (t)$
So, $\sin (t)$ and $\cos (t)$ are both eigen-signals with the same eigenvalue.

Question 6
Consider the following state-space representation of a linear time-invariant system.
$\dot{x}(t)=\begin{bmatrix} 1 & 0\\ 0&2 \end{bmatrix}x(t),$ $y(t)=c^{T}x(t),$ $c=\begin{bmatrix} 1\\ 1 \end{bmatrix}$ and $x(0)=\begin{bmatrix} 1\\ 1 \end{bmatrix}$
The value of y(t) for $t= \log_{e}2$ is______.

A. 4
B. 5
C. 6
D. 7

GATE EE 2016-SET-1   Signals and Systems

Question 7
Consider a continuous-time system with input x(t) and output y(t) given by
y(t) = x(t) cos(t)
This system is

A. linear and time-invariant
B. non-linear and time-invariant
C. linear and time-varying
D. non-linear and time-varying

GATE EE 2016-SET-1   Signals and Systems

Question 7 Explanation:
\begin{aligned} y(t)&=x(t)\cos (t)\\ &\text{To check linearity:}\\ y_1(t)&=x_1(t)\cos (t) \quad [y_1(t) \text{ is the output for }x_1(t)]\\ y_2(t)&=x_2(t) \cos (t) \quad [y_2(t) \text{ is the output for }x_2(t)] \end{aligned}
So the output for $(x_1(t)+ x_2(t))$ will be $y(t)=[x_1(t)+ x_2(t)]\cos (t)=y_1(t)+y_2(t)$, and the system is linear.
To check time invariance: the delayed output is $y(t-t_0)=x(t-t_0)\cos (t-t_0)$, while the output for the delayed input is $y(t, t_0)=x(t-t_0)\cos (t)$. Since $y(t-t_0)\neq y(t,t_0)$, the system is time-varying.

Question 8
The following discrete-time equations result from the numerical integration of the differential equations of an un-damped simple harmonic oscillator with state variables x and y. The integration time step is h.
$\frac{x_{k+1}-x_{k}}{h}=y_{k}$
$\frac{y_{k+1}-y_{k}}{h}=-x_{k}$
For this discrete-time system, which one of the following statements is TRUE?

A. The system is not stable for $h\gt 0$
B. The system is stable for $h \gt \frac{1}{\pi }$
C. The system is stable for $0 \lt h \lt \frac{1}{2\pi }$
D. The system is stable for $\frac{1}{2\pi } \lt h \lt \frac{1}{\pi }$

GATE EE 2015-SET-2   Signals and Systems

Question 9
For linear time-invariant systems that are Bounded Input Bounded Output stable, which one of the following statements is TRUE?

A. The impulse response will be integrable, but may not be absolutely integrable.
B. The unit impulse response will have finite support.
C. The unit step response will be absolutely integrable.
D. The unit step response will be bounded.

GATE EE 2015-SET-2   Signals and Systems

Question 10
The impulse response g(t) of a system, G, is as shown in Figure (a). What is the maximum value attained by the impulse response of two cascaded blocks of G as shown in Figure (b)?
A. $\frac{2}{3}$
B. $\frac{3}{4}$
C. $\frac{4}{5}$
D. 1

GATE EE 2015-SET-1   Signals and Systems

Question 10 Explanation:
\begin{aligned} g(t)&=u(t)-u(t-1)\\ G(s)&=\frac{1}{s}-\frac{e^{-s}}{s}\\ G(s) \cdot G(s) &\leftrightarrow g(t)* g(t) \end{aligned}
Maximum value = 1.

There are 10 questions to complete.
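For clarity, the step compressed in the last solution can be written out; the convolution of two unit-width, unit-height rectangular pulses is the standard triangular pulse:

$$(g*g)(t)=\int_{0}^{1} g(t-\tau)\,d\tau = \begin{cases} t, & 0\leq t\leq 1\\ 2-t, & 1\leq t\leq 2\\ 0, & \text{otherwise,} \end{cases}$$

whose peak value, attained at $t=1$, is indeed 1.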
2021-11-27 03:36:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 61, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9942284822463989, "perplexity": 2328.788993950782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358078.2/warc/CC-MAIN-20211127013935-20211127043935-00597.warc.gz"}
https://zbmath.org/?q=an:0952.37013
# zbMATH — the first resource for mathematics

Random perturbations of invariant Lagrangian tori of Hamiltonian vector fields. (English. Russian original)
Zbl 0952.37013
Math. Notes 64, No. 5, 674-679 (1998); translation from Mat. Zametki 64, No. 5, 783-787 (1998).

The authors consider diffusion-type random perturbations of Hamiltonian systems (possibly nonintegrable) having invariant Lagrangian tori (i.e., tori on which the form $dp\wedge dq$ vanishes) with quasiperiodic motion on them. They consider the corresponding small-parameter parabolic problem for distributions with the initial condition $\delta_{\Lambda,d\mu}$, where $(\delta_{\Lambda,d\mu},\psi(x))=\int_\Lambda\psi \,d\mu$ and $\Lambda$ is the corresponding torus. Applying Maslov's theory of complex germs, the authors obtain the leading term of the asymptotics of the solution of the above problem, which is completely determined by the torus $\Lambda$.

##### MSC:
37J25 Stability problems for finite-dimensional Hamiltonian and Lagrangian systems
60H10 Stochastic ordinary differential equations (aspects of stochastic analysis)
58J37 Perturbations of PDEs on manifolds; asymptotics

##### Keywords:
random perturbations; Hamiltonian systems; Lagrangian tori
2021-03-03 12:27:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7471605539321899, "perplexity": 5371.039341116671}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366959.54/warc/CC-MAIN-20210303104028-20210303134028-00321.warc.gz"}
https://www.gamedev.net/blogs/entry/850516-a-flexible-file-system/
# A flexible file system

As you can see in my last journal post I have just started this journal, so I wanted to post something fast, so I have created a little post about the file system I use in my game engine.

Requirements

What do we need in a file system? We need support for opening, closing, reading and writing files in both binary and text mode. We also need to be able to navigate a tree of directories, and we want the user to be able to add support for additional "file systems", like tar.gz files, if (s)he wants to. We also want to hide as much of the implementation as possible from the user.

Implementation

In the requirements we said we needed to let the user add functionality; we will accomplish this with interfaces. For example we have a binary input file, and its interface is:

    namespace FileSystem
    {
        class iBinaryIFile
        {
        public:
            typedef std::streamsize StreamSize;

            iBinaryIFile(){}
            virtual ~iBinaryIFile(){}

            virtual ErrorCode Open(const String& pFilename) = 0;
            virtual void Close() = 0;

            virtual void Read(Byte* pData, StreamSize pSize) = 0;

            template<typename T>
            void Read(T& pData)
            {
                assert( IsOpen() );
                assert( sizeof(T) < sizeof(StreamSize) );
                // the casts were eaten by the HTML extraction; restored here
                Read(reinterpret_cast<Byte*>(&pData), static_cast<StreamSize>(sizeof(T)));
            }

            virtual Boolean IsOpen() const = 0;
            virtual String Filename() const = 0;
        };
    }

Then if the user wants to add functionality for some file system, or even a custom one, they can just derive a new class from this interface.

We still need a way to navigate the file system. This file system "library" will assume that file systems are navigated through a tree of directories, with n files and m directories in each directory, just like most file systems today. If a file system doesn't support directories it can just emulate them; for example, if it is instructed to create a file in the directory ('/' separates directories)

    "Test1/Test2/Test3/file.dat"

it can just save the file with a name like this:

    "Test1_Test2_Test3_file.dat"

One problem could be that this file:

    "Test1/Test2/Test3_file.dat"

would have the same name, so if we want to be sure not to encounter name collisions we can just replace '_' with "__". A name will then be transformed like this (a short sketch of this transform appears at the end of the post):

    "Test1_Test2/Test3/Test4__Test5/Test6.da_t"      // Original
    "Test1__Test2/Test3/Test4____Test5/Test6.da__t"  // Replace('_',"__")
    "Test1__Test2_Test3_Test4____Test5_Test6.da__t"  // Replace('/','_')

Then name collisions are impossible.

Of course we also need an interface for directories, so we add an iDirectory like this:

    namespace FileSystem
    {
        class iDirectory
        {
        public:
            iDirectory(){}
            virtual ~iDirectory(){}

            // Note: ".." to go one back
            virtual bool OpenDirectory(const String& pName) = 0;

            virtual std::string ToStr() = 0;
        };
    }

This interface needs lots of extra functionality, but let's discuss this first. We have OpenDirectory, which opens a sub-directory, just like the cd (change directory) command. ToStr returns the current directory as a string. We now have a very big problem, which is that we can't get any info about what is in the directory. Here I have chosen to go with an iterator approach, so we have an iterator which traverses all elements (both files and directories) in the directory. One problem though is that we have four file types:

    iBinaryIFile
    iTextIFile
    iBinaryOFile
    iTextOFile

And all of them require us to actually open the file, which would be way too slow; also they wouldn't work for directories.
I guess I could do something like this:

    // I don't remember if that is the correct syntax for unions, but you get the idea
    union
    {
        iBinaryIFile BIFile;
        iTextIFile TIFile;
        iBinaryOFile BOFile;
        iTextOFile TOFile;
        std::string DirectoryName;
    };

Instead I have chosen to create a new type:

    iFile // can also represent directories

All it has is a filename (string) and a boolean variable telling whether it is a file or a directory. Then it contains the following pure-virtual functions (the template arguments were eaten by the HTML extraction; restored from the function names):

    virtual boost::shared_ptr<iBinaryOFile> OpenBinaryOutputFile(Boolean pAppend = false) const = 0;
    virtual boost::shared_ptr<iTextOFile>   OpenTextOutputFile(Boolean pAppend = false) const = 0;
    virtual boost::shared_ptr<iBinaryIFile> OpenBinaryInputFile() const = 0;
    virtual boost::shared_ptr<iTextIFile>   OpenTextInputFile() const = 0;

When we do it this way we can also hide the details of the classes derived from i[Binary|Text][O|I]File, since only the definitions of Open[Binary|Text][Output|Input]File will create the file and the user's code will just use pointers to the interfaces.

We will of course not use iFile as the iterator, since it isn't an iterator. Instead we will have an iterator and let the dereference operator (*) return an iFile object.

One important part of the iDirectory class is also that it can open files like iFile; it just has to supply the filename. This is for performance reasons: imagine we have all our resources in a single file Res.cfs (cfs = custom file system). This file uses our custom file system and might contain 2500 different files, so we would have to iterate through, on average, 1250 files before we could open the resource we were looking for.

Conclusion

I hope this gave you a good idea of how I have designed my file system; if you have any questions or suggestions just post a comment here or send me a PM.
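As promised above, here is a minimal sketch of the underscore-escaping transform; the function name and use of std::string are my own choices, not from the engine. Doing both replacements in a single pass over the input is what keeps the escaping unambiguous:

    #include <string>

    // Flatten "Test1/Test2/file.dat" into a single collision-free filename,
    // escaping pre-existing underscores before mapping '/' to '_'.
    std::string FlattenPath(const std::string& path)
    {
        std::string out;
        out.reserve(path.size());
        for (char c : path)
        {
            if (c == '_')
                out += "__";   // Replace('_',"__")
            else if (c == '/')
                out += '_';    // Replace('/','_')
            else
                out += c;
        }
        return out;
    }

For the example above, FlattenPath("Test1_Test2/Test3/Test4__Test5/Test6.da_t") yields "Test1__Test2_Test3_Test4____Test5_Test6.da__t", matching the hand-worked transformation.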
2019-11-12 03:16:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4360014796257019, "perplexity": 2439.2783241266825}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664567.4/warc/CC-MAIN-20191112024224-20191112052224-00334.warc.gz"}
http://gmatclub.com/forum/official-thread-for-2007-ncaa-basketball-tournament-42881-60.html
Manager (22 Mar 2007, 14:23):
Ohio State hasn't looked very good but I don't think Tennessee is any good either, so I think Ohio State will win tonight. Regardless, whoever wins will not get by Texas A&M in San Antonio.

SVP, Schools: Darden (22 Mar 2007, 16:01):
I agree with that. Ohio St. has looked beatable, but I don't know if Tennessee is up to the task. They will probably have a 6'3" guy covering Oden at times. Tennessee does have shooters though, so if they can get hot there's a chance. The luckiest team is Florida, I think. The ESPN guys keep talking about how they won't have to face a real center until the final four at the earliest. Not even a decent power forward left in their bracket.

Senior Manager (23 Mar 2007, 07:38):
A&M killed me - how did Law miss that layup? ARGH.

SVP (23 Mar 2007, 07:44):
If you want to beat Ohio State, you have to attack Oden to get him in foul trouble, but more importantly stop Mike Conley Jr.

Current Student (23 Mar 2007, 07:47):
eazyb81 wrote: "A&M killed me - how did Law miss that layup? ARGH."
Missing a layup is not the problem... the problem was not rebounding. Ohio State had 4 offensive rebounds in a row! That killed me!

Manager (23 Mar 2007, 08:35):
kidderek wrote: "If you want to beat Ohio State, you have to attack Oden to get him in foul trouble but more importantly stop Mike Conley Jr."
I told my friends before the game the key is Ron Lewis, and he was, with his 25 points. He is the senior on this team that makes the big plays when needed. He gets overlooked with all the freshmen on the team, but without him OSU is gone in the second round. I've said all year long this team is too inexperienced to win it all, but if Lewis keeps making these plays they might.

Senior Manager (23 Mar 2007, 08:46):
Slightly off topic, but what do you guys think about Tubby Smith leaving UK for Minnesota? Since b-school is the main thing on my mind, I immediately compared it to someone leaving Goldman or McKinsey for a random brand management job in the midwest.

SVP (23 Mar 2007, 08:58):
I think Tubby Smith is over-rated. He would probably have been fired anyway, so he took first action and left.

SVP (23 Mar 2007, 09:00):
Kdawg8 wrote: "I told my friends before the game the key is Ron Lewis, and he was with his 25 points..."
You're right, Ron Lewis is Cinderella herself and an integral part of the team. But Mike Conley is the engine. OSU wouldn't be able to get past half court without him. He is so underrated thanks to that giant shadow of Greg Oden.

Intern, Toronto (23 Mar 2007, 09:00):
That was ridiculous last night. I had a clear path to winning my pool with an A&M win and Ohio State loss. You're right squali, A&M needed to get the rebound. They gave Memphis four chances to win that game with Memphis' best rebounder already fouled out. The layup would have been nice but they needed the rebound. I don't know what to say about Ohio State. I still think that they are inexperienced and not very deep but they find a way to win. They're killing me!

Manager (23 Mar 2007, 10:08):
eazyb81 wrote: "Slightly off topic, but what do you guys think about Tubby Smith leaving UK for Minnesota?..."
As another poster said, he was going to be fired anyway. People in Kentucky got fed up with his lack of recruiting and he was due a multi-million dollar bonus on April 3rd or so. It saves him face and he gets a nice payday with the 7-year deal averaging about $1.7-1.8 million per.

Senior Manager (23 Mar 2007, 11:36):
Kdawg8 wrote: "As another poster said he was going to be fired anyways..."
I'm not sure he would have been fired; there are reports that the AD requested that he bring in new assistant coaches. That said, it appears he was fed up with the UK fans, but he probably shouldn't have even accepted the position if he couldn't handle big-time stress.

SVP, Schools: Darden (23 Mar 2007, 13:32):
Kentucky is paying the bonus. The general idea is that they came to a gentleman's agreement that Tubby would leave quietly and Kentucky would pay the bonus, so both sides could save face. Tubby was definitely in trouble and they probably would have found another way to get rid of him if he didn't leave quietly.
Ohio St. looked beatable again, and got pretty lucky again. As I mentioned above, I thought Tennessee could win if they hit 3's, but I didn't think they had the juice to finish the deal. Turned out pretty much as expected, I guess. Even with all the close games, all the higher seeds won again last night. The bracket this year is crazy.
Once again, UCLA gets home court advantage even though they are not the #1 seed (same as last year). One of the ESPN writers said it best: either they are the #1 seed and deserve home court advantage, or if they aren't then you can't just leave them in the region. No team has a right to be in their own neighborhood, especially if they aren't the top seed. It seems that year after year, west coast teams get to stay out west. Another good slate tonight.

Manager (23 Mar 2007, 14:38):
Man, who would want to coach at Kentucky? That place is nuts, it's Final Four or nothing for them. Hell, it's championship or nothing.

SVP (25 Mar 2007, 16:39):
Just checked out the Club standings. At least I'm beating Pelihu. Sigh. The one I have at work is doing much better.

SVP, Schools: Darden (25 Mar 2007, 17:40):
aaudetat wrote: "Just checked out the Club standings. At least I'm beating Pelihu. Sigh. The one I have at work is doing much better."
Are you trying to start a fight or something?

SVP (25 Mar 2007, 20:50):
pelihu wrote: "Are you trying to start a fight or something?"
Yep, and then I'm gonna call you a fat, belligerent lawyer. Of course, you could be skinny as all get out as far as I know, but it seems the fat thing gets people's dander up.

SVP, Schools: Darden (25 Mar 2007, 23:49):
Listen here woman, don't make me slap you silly.

SVP (26 Mar 2007, 05:36):
It's going a bit too far... Guys, I have no clue about the reasons why in 4 posts it could turn to responses like these, and I can also understand that it's the kind of thing that could happen.... But, if it's important to do so, perhaps u should use an exchange of PM? I do not think that it's crucial for all to see this... Note that I have nothing against one of u in particular

SVP, Schools: Darden (26 Mar 2007, 09:59):
Fig wrote: "It's going a bit too far..."
Hey Fig, we are just joking. It's common to "talk smack" when discussing college sports in the US - especially when the NCAA tournament is involved.
2016-05-02 02:03:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.211103618144989, "perplexity": 8513.769344542563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121418.67/warc/CC-MAIN-20160428161521-00208-ip-10-239-7-51.ec2.internal.warc.gz"}
https://space.stackexchange.com/questions/41797/are-hyperbolic-trigonometric-functions-used-to-calculate-hyperbolic-orbits?noredirect=1
# Are hyperbolic trigonometric functions used to calculate hyperbolic orbits?

The following comment is really intriguing!

I would say, "Yes, The equations for going from Mean Anomaly to Eccentric Anomaly to True Anomaly are indeed different for hyperbolic orbits than for elliptical ones, if that's part of your process." The biggest differences are sign-flipping on some of the terms, and the use of hyperbolic trigonometric functions rather than the circular trig functions.

Question: Are hyperbolic trigonometric functions used in calculating hyperbolic orbits? If so, how?

Update: I just found this answer that I wrote a while ago, which was triggered by this answer

• The only time I've ever used the inverse hyperbolic tangent was in orbital mechanics computation. – Paul Mar 4 '20 at 13:44
• @Paul Can you share any more details or provide a link? Should all trig functions be switched to hyperbolic for calculating all of the orbital elements? How about when transforming to cartesian coordinates, should the hyperbolic trig functions be used then too? – lancew Mar 4 '20 at 14:04
• @lancew it's okay to link one question to another, but we should keep comments on a given question limited to the question itself, not another question, otherwise everything gets confusing. Also comments are for clarifying the question at hand, not raising new questions. – uhoh Mar 4 '20 at 14:12
• The relationship between eccentric and true anomalies involves hyperbolic tangent. The equation for mean anomaly involves hyperbolic sine. – Paul Mar 4 '20 at 14:49
• @Paul is that specifically for hyperbolic orbits, or is it true for all conics? – uhoh Mar 4 '20 at 14:51

The equations for the position in a hyperbolic trajectory contain the hyperbolic sine, cosine and tangent. A hyperbola is defined by the equation:

$$\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1$$

It can be described by several parametric equations:

Using the hyperbolic sine and cosine functions, (1), plot cyan:
$$\boxed{x = \pm a \cosh(t) \\y = b \sinh(t) \\ t\in\mathbb{R} }$$

Using the complex exponential function, (2), plot magenta:
$$\boxed{z = c e^t + \overline{c} e^{-t} \\ c = \frac{a + i b}{2} \\ \overline{c} = \frac{a - i b}{2} \\ t\in\mathbb{R} }$$

Solving the definition for x, (3), plot blue:
$$\boxed{x = a \sqrt{\frac{y^2}{b^2} + 1} \\ y\in\mathbb{R} }$$

Solving the definition for y, (4), plot green:
$$\boxed{y = b \sqrt{\frac{x^2}{a^2} - 1} \\ x \geq a , x \leq -a }$$

Using cosine and tangent, (5), plot yellow:
$$\boxed{x = \frac{a} {\cos(t)} = a \sec(t) \\y = b \tan(t) \\ 0 \leq t \leq 2\pi \\ t \neq \frac{\pi}{2} , t \neq \frac{3\pi}{2} }$$

Using a rational parametric equation, (6), plot red:
$$\boxed{x = \pm a \frac{t^2 + 1}{2t} \\y = b \frac{t^2 - 1}{2t} \\ t\in\mathbb{R}, t > 0 }$$

Using sine and cosine with complex arguments, (7), plot grey:
$$\boxed{z = a \cos(it) + b \sin(it) \\ t\in\mathbb{R} }$$

I found no documentation about complex arguments for the Python Numpy sin and cos functions, but it simply works perfectly.

The equation (7) looks similar to:
$$\boxed{z = a \cos(t) + ib \sin(t) \\ 0 \leq t \leq 2\pi }$$
used to calculate an ellipse or circle.
import matplotlib.pyplot as plt
import numpy as np
import math as math
#
def check(x, y, a, b, eps):
    a2 = np.square(a)
    b2 = np.square(b)
    res = np.square(x)/a2 - np.square(y)/b2
    test = True
    lowlim = 1.0 - eps
    highlim = 1.0 + eps
    for i in range(len(res)):
        if res[i] < lowlim or res[i] > highlim:
            test = False
    return test
#
omega = np.pi*0.5
steps = 15
#
# 1: using hyperbolic sine and cosine, plot cyan
a = 1.0
b = 1.0
eps = 1E-13
t1 = np.linspace(-omega, omega, steps)
x1 = a*np.cosh(t1)
y1 = b*np.sinh(t1)
plt.plot(x1, y1, color='c', marker="x")
print('cosh sinh check ', check(x1, y1, a, b, eps))
#
# 2: using complex exponential function, plot magenta
a = 1.2
c = (a + b*1j)*0.5
ck = (a - b*1j)*0.5
z2 = c*np.exp(t1) + ck*np.exp(-t1)
plt.plot(np.real(z2), np.imag(z2), color='m', marker="x")
print('complex exp check ', check(np.real(z2), np.imag(z2), a, b, eps))
#
# 3: solving equation for x, plot blue
ymin = min(y1)
ymax = max(y1)
a = 1.4
a2 = np.square(a)
b2 = np.square(b)
y3 = np.linspace(ymin, ymax, steps)
x3 = a*np.sqrt(np.square(y3)/b2 + 1.0)
plt.plot(x3, y3, color='b', marker="x")
print('normal form y check ', check(x3, y3, a, b, eps))

# 4: solving equation for y, plot green
a = 1.6
a2 = np.square(a)
xmin = a
xmax = a*np.sqrt(np.square(ymax)/b2 + 1.0)
x4 = np.linspace(xmin, xmax, steps//2)
y4 = b*np.sqrt(np.square(x4)/a2 - 1.0)
x4 = np.concatenate((np.flip(x4, 0), x4), axis=None)
y4 = np.concatenate((np.flip(-y4, 0), y4), axis=None)
plt.plot(x4, y4, color='g', marker="x")
print('normal form x check ', check(x4, y4, a, b, eps))

# 5: using cosine and tangent functions, plot yellow
a = 1.8
tmax = np.arctan(ymax/b)
t5 = np.linspace(-tmax, tmax, steps)
x5 = a/np.cos(t5)
y5 = b*np.tan(t5)
plt.plot(x5, y5, color='y', marker="x")
print('cos tan check ', check(x5, y5, a, b, eps))

# 6: using parametric equation, plot red
a = 2.0
tmin = ymax/b + np.sqrt(np.square(ymax/b) + 1.0)
#t6 = np.geomspace(tmin, 1.0, steps//2)
t6 = np.linspace(tmin, 1.0, steps//2)
x6 = a*(np.square(t6) + 1.0)/(2.0*t6)
xmax = max(x6)
y6 = b*(np.square(t6) - 1.0)/(2.0*t6)
x6 = np.concatenate((x6, np.flip(x6, 0)), axis=None)
y6 = np.concatenate((y6, np.flip(-y6, 0)), axis=None)
plt.plot(x6, y6, color='r', marker="x")
print('t square check ', check(x6, y6, a, b, eps))

# 7: using sine and cosine with complex arguments, plot grey
a = 2.2
t7 = np.linspace(-omega*1j, omega*1j, steps)
z7 = a*np.cos(t7) + b*np.sin(t7)
plt.plot(np.real(z7), np.imag(z7), color='grey', marker="x")
print('cos sin check ', check(np.real(z7), np.imag(z7), a, b, eps))

plt.grid(b=None, which='both', axis='both')
plt.axis('scaled')
plt.xlim(0.0, math.ceil(xmax+0.5))
plt.ylim(math.floor(ymin), math.ceil(ymax))
plt.show()

• I just added an update to my question with a link – uhoh Mar 4 '20 at 15:10
• Think you’ll still be able to add the Python examples some time? – lancew Mar 6 '20 at 15:50
• Any progress on examples? If you can just type a few equations in MathJax here rather than just having a link, that might be good enough. Here are some examples that might be sufficient. – uhoh Apr 11 '20 at 4:02
• @Uwe Any chance you're still intending on doing those Python examples? – lancew Apr 12 '20 at 1:29
• This looks great already, thank you! (any text that begins with at least four spaces will show as a code block with equal spaced font. (examples: 1, 2)) – uhoh Apr 12 '20 at 23:06
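To tie this back to the orbital-mechanics side of the question: one standard place where the hyperbolic functions appear is the hyperbolic Kepler equation, M = e sinh(H) - H, mentioned in Paul's comments above. The following is a minimal sketch of a Newton solver for it (an addition for illustration, not part of the thread; the function name, starting guess, and test values are made up here):

import numpy as np

def hyperbolic_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve M = e*sinh(H) - H for the hyperbolic eccentric anomaly H
    by Newton's method; requires eccentricity e > 1."""
    H = np.arcsinh(M / e)  # crude but serviceable starting guess
    for _ in range(max_iter):
        f = e * np.sinh(H) - H - M
        fprime = e * np.cosh(H) - 1.0  # positive for e > 1, so the iteration is safe
        dH = f / fprime
        H -= dH
        if abs(dH) < tol:
            break
    return H

e = 1.5   # eccentricity of a hyperbolic orbit (illustrative value)
M = 2.0   # mean anomaly (illustrative value)
H = hyperbolic_kepler(M, e)
# True anomaly via the hyperbolic analogue of the half-angle formula,
# tan(nu/2) = sqrt((e+1)/(e-1)) * tanh(H/2):
nu = 2.0 * np.arctan(np.sqrt((e + 1.0) / (e - 1.0)) * np.tanh(H / 2.0))
print(H, nu)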
2021-04-21 08:29:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5636500716209412, "perplexity": 5295.087186088603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039526421.82/warc/CC-MAIN-20210421065303-20210421095303-00432.warc.gz"}
http://en.wikipedia.org/wiki/User_talk:Bob_K31416
# User talk:Bob K31416

## Your name and mathematical constants

I’ve referred to you on Talk:Chelsea Manning as Bob Kπ, and it’s just occurred to me that you may not want to be known as such. So since I couldn’t find anything here and your User page is empty, I’ll ask: Do you mind either way? —Frungi (talk) 22:24, 6 November 2013 (UTC)

Nope, don't mind. --Bob K31416 (talk) 15:18, 7 November 2013 (UTC)

## ANRFC

Hi Bob K31416, I've replied to your comment at WP:ANRFC. Callanecc (talkcontribslogs) 12:00, 19 November 2013 (UTC)

[For reference, [24]. --Bob K31416 (talk) 16:11, 19 November 2013 (UTC)]

## FA Nomination on Avatar (2009 film)

Hello, Bob K31416. I have recently nominated the article above for nomination. Due to me not being the most active editor on the article, users are requesting that it should be cancelled. Since you are the most active user on the article, as Flyer22 described, I would like to ask if you would like to take over my place as nominator, and do the nomination yourself. 18:33, 19 November 2013 (UTC)

Thanks for asking but no. I haven't worked on the article for a year and a half.[25] --Bob K31416 (talk) 20:07, 19 November 2013 (UTC)

Sigh, well that figures. 20:21, 19 November 2013 (UTC)

## reverted the close of a discussion.

The discussion looks closed, but I didn't appreciate you saying I was mischaracterizing comments when referring to what verifiably happened. You were making it sound like you didn't [revert a closed conversation] before that comment. I'm sure you had good faith reasons for doing it, but it wasn't me imagining that it happened. __ E L A Q U E A T E 20:49, 6 January 2014 (UTC)

Thanks for dropping by. First off, I think there's a difference between misinterpret and mischaracterize. I used misinterpret. Regarding reverting a close, that's not a comment but an action. I didn't dispute that I reverted a close. For reference, here's the last diff of yours that I think you're referring to [26]. Could you give it a second look and see if there is anything in it that might be a misinterpretation of what I wrote? --Bob K31416 (talk) 21:13, 6 January 2014 (UTC)

I was sure you preferred the earlier discussion open after the admin closed. You did say, "that should be part of the discussion without closing." It seemed like it was both comment and reverting action. If I misunderstood, I'm sorry. __ E L A Q U E A T E 22:47, 6 January 2014 (UTC)

Re "If I misunderstood, I'm sorry." — I'll leave it at that. --Bob K31416 (talk) 00:36, 7 January 2014 (UTC)

[27] --Bob K31416 (talk) 00:44, 7 January 2014 (UTC)

## Talk:Chelsea Manning

You reverted my edit at Talk:Chelsea Manning. My edit was simply a removal of a post that violates sense. Do you really think the edit that I removed but that you brought back by reverting is an okay edit?? Georgia guy (talk) 16:03, 21 February 2014 (UTC)

Please see my response to that editor. --Bob K31416 (talk) 16:13, 21 February 2014 (UTC)

## April 2014

There is currently a discussion at Wikipedia:Administrators' noticeboard/Incidents regarding an issue with which you may have been involved. Thank you. 12.234.39.130 (talk) 00:51, 6 April 2014 (UTC)

[28] --Bob K31416 (talk) 00:12, 7 April 2014 (UTC)

## Brew's and Philosophy

I don't know if you are still monitoring the ANI case. However I have just posted a [https://en.wikipedia.org/wiki/Wikipedia:Administrators%27_noticeboard/Incidents#Moving_forward link to a suggested way forward] on one article in the hope of breaking what is an entrained pattern that is getting stressful for all involved.
I admit to losing my cool a few times in the last few months. If you have the time/energy your comments would be appreciated. ----Snowded TALK 09:15, 10 April 2014 (UTC)

I think you are on the right track with the section Handling talk page discussions and edits. I can see that you are trying to avoid confrontation in that section and to that end I would suggest deleting the last half of the second sentence so that it becomes, "My goal is to use this article to see if progress is possible." --Bob K31416 (talk) 13:14, 10 April 2014 (UTC)

Good point, actioned ----Snowded TALK 13:32, 10 April 2014 (UTC)

Bob, a personal opinion (hence bringing it here). I don't think you are helping Brews by giving him the space to avoid changing his behaviour in respect of using primary sources. He failed to do that on Physics articles and we know the consequences. You may think my own behaviour does not match up to your ideals, but even a patient editor such as Pdhorest is evidently at the end of his tether with Brews at the ANI page discussion. If Brews doesn't change then sooner or later sanctions are inevitable. My point on the way he uses references, although trivial of itself, illustrates the wider behavioural issue. A simple change that would make life for other editors considerably easier is not one that he is even prepared to countenance. I put it there partly as a test to see if he was prepared to make even a small change that would cost him nothing and he refused ----Snowded TALK 08:43, 11 April 2014 (UTC)

One thing at a time.[29] --Bob K31416 (talk) 11:00, 11 April 2014 (UTC)

## Enaction (philosophy)

Bob: Your reaction to MachineElf's actions is normal. However, it appears that MachineElf did not understand what was going on. He first interrupted the PROD to remove Enaction without understanding how the process had been arranged. He seems to have confused Enaction with Enaction (philosophy), and discussion of possibilities with agreements to act. And his reactions to questions are intemperate, as he doesn't recognize his lack of patience in understanding what is happening. However, your presence has led to a reasonable scenario that I hope will lead to a good pair of articles Enactivism and Enaction (philosophy). However, your help in keeping things on track is critical. Brews ohare (talk) 14:12, 27 April 2014 (UTC)

Bob, I've changed my mind. There is no point continuing where there is no sign of interest. Brews ohare (talk) 00:36, 28 April 2014 (UTC)
2014-08-01 04:10:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 60, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6631061434745789, "perplexity": 2624.5665128421074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274289.5/warc/CC-MAIN-20140728011754-00411-ip-10-146-231-18.ec2.internal.warc.gz"}
http://mathhelpforum.com/pre-calculus/208079-angle-between-planes-has-mysterious-absolute-value.html
# Math Help - Angle between planes has mysterious absolute value

1. ## Angle between planes has mysterious absolute value

The angle between vectors $\mathbf{u}$ and $\mathbf{v}$ is defined as follows.

$\cos \theta = \frac{\mathbf{u} \cdot \mathbf{v}}{\left|\left| \mathbf{u} \right|\right| \left|\left| \mathbf{v} \right|\right|}$

Its derivation is straightforward. The angle between two planes is equal to the angle between the planes' normal vectors, $\mathbf{n}_1$ and $\mathbf{n}_2$. But then the book says the angle between the two planes is

$\cos \theta = \frac{\left| \mathbf{n}_1 \cdot \mathbf{n}_2 \right|}{\left|\left| \mathbf{n}_1 \right|\right| \left|\left| \mathbf{n}_2 \right|\right|}$

I assume the absolute value in the numerator is there because we want to take the acute angle between the two planes. Is that right? If not, why is it there? How does the absolute value represent the acute angle?

2. ## Re: Angle between planes has mysterious absolute value

Originally Posted by mathDad: I assume the absolute value in the numerator is there because we want to take the acute angle between the two planes. Is that right? If not, why is it there? How does the absolute value represent the acute angle?

That is correct: the angle between the two normals, made acute.

3. ## Re: Angle between planes has mysterious absolute value

Originally Posted by Plato: That is correct: the angle between the two normals, made acute.

But why does the absolute value represent the acute angle? How do they get that?

4. ## Re: Angle between planes has mysterious absolute value

Originally Posted by mathDad: But why does the absolute value represent the acute angle? How do they get that?

Actually I have a quibble with your notation. I think it should be $\theta=\arccos\left(\frac{|n_1\cdot n_2|}{\|n_1\|\|n_2\|}\right)$. Now the $\arccos$ function returns acute angle values for positive arguments.
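To see the effect of the absolute value numerically, here is a short numpy sketch (an illustration added here, not from the thread; the normal vectors are made-up test values). Without abs() the formula can return an obtuse angle when the normals point away from each other; taking the absolute value always yields the acute angle between the planes.

import numpy as np

def plane_angle(n1, n2):
    """Acute angle (in radians) between two planes with normal vectors n1, n2."""
    n1 = np.asarray(n1, dtype=float)
    n2 = np.asarray(n2, dtype=float)
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.arccos(np.clip(c, -1.0, 1.0))  # clip guards against rounding error

# Made-up normals that point "away" from each other:
n1 = (1.0, 0.0, 0.0)
n2 = (-0.6, 1.0, 0.0)
# Without abs(), arccos of the negative cosine gives about 2.11 rad (obtuse);
# with abs() we get about 1.03 rad, the acute angle between the planes.
print(plane_angle(n1, n2))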
2015-04-26 05:41:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8522378206253052, "perplexity": 413.7553362394581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246652631.96/warc/CC-MAIN-20150417045732-00161-ip-10-235-10-82.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/124728/how-to-wrap-section-titles-when-using-wordlike
How to wrap section titles when using wordlike?

I have a template based on the wordlike package. When I have a long section title, it extends beyond the edge of the page instead of wrapping to a newline (like this). Here's the code:

\documentclass[12pt]{article}
% Emulate MS Word
\usepackage{wordlike}
% One inch margins
\PassOptionsToPackage{margin=1in}{geometry}
% Double spacing
\usepackage{setspace}
\setstretch{2}
% Don't justify along the right margin
\usepackage{ragged2e}
\RaggedRight
% Format section titles
\usepackage[uppercase]{titlesec}
\titlespacing\section{0pt}{0pt}{7pt}
\usepackage{titlesec}
\titleformat{\section}
  {\normalfont\bf\center\uppercase}{\underline{\thesection.\ }}{1em}{\underline}
% Format paragraphs
\parskip 0pt
\setlength{\parindent}{0.5in}
% Remove section numbers
\setcounter{secnumdepth}{-2}
\begin{document}
\section{Here's an example of a long section title that is going to stretch beyond the page}
Lorem ipsum dolor sit amet, consectetur adipiscing elit.
\end{document}

If I try to force a line break with \\, I get this error: Something's wrong--perhaps a missing \item.

-

1 Answer

The problem is not due to wordlike; standard \underline doesn't admit line breaks; use \uline from ulem, instead:

\documentclass[12pt]{article}
% Emulate MS Word
\usepackage{wordlike}
\usepackage[normalem]{ulem}
% One inch margins
\PassOptionsToPackage{margin=1in}{geometry}
% Double spacing
\usepackage{setspace}
\setstretch{2}
% Don't justify along the right margin
\usepackage{ragged2e}
\RaggedRight
% Format section titles
\usepackage[uppercase]{titlesec}
\titlespacing\section{0pt}{0pt}{7pt}
\usepackage{titlesec}
\titleformat{\section}
  {\normalfont\bfseries\filcenter}{\uline{\thesection.\ }}{0em}{\uline}
% Format paragraphs
\parskip 0pt
\setlength{\parindent}{0.5in}
% Remove section numbers
\setcounter{secnumdepth}{-2}
\begin{document}
\section{Here's an example of a long section title that is not going to stretch beyond the page due to the changes}
Lorem ipsum dolor sit amet, consectetur adipiscing elit.
\end{document}

Using bold-faced underlined headings seems redundant, and it's not a very good practice.

-

I agree, but unfortunately the doc I'm writing (a legal memo) requires it. – Joe Mornin Jul 19 '13 at 19:07

@JosephMornin I see. Well, in some cases one is tied to obey ugly requirements ;-) – Gonzalo Medina Jul 19 '13 at 19:12
2016-05-06 15:18:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.561411440372467, "perplexity": 5362.640497081017}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461861831994.45/warc/CC-MAIN-20160428164351-00163-ip-10-239-7-51.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/16312/how-helpful-is-non-standard-analysis/126776
# How helpful is non-standard analysis?

So, I can understand how non-standard analysis is better than standard analysis in that some proofs become simplified, and infinitesimals are somehow more intuitive to grasp than epsilon-delta arguments (both these points are debatable). However, although many theorems have been proven by non-standard analysis and transferred via the transfer principle, as far as I know all of these results were already known to be true. So, my question is: Is there an example of a result that was first proved using non-standard analysis? To wit, is non-standard analysis actually useful for proving new theorems?

Edit: Due to overwhelming support of Francois' comment, I've changed the title of the question accordingly.

-

Note that your question is really about how helpful non-standard analysis is. If you wanted to know how unhelpful it is, you would ask for theorems that cannot be proved using non-standard methods. – François G. Dorais Feb 24 '10 at 22:57

I believe that Ben Green, Terry Tao, and Tamar Ziegler are writing their forthcoming paper on the Inverse Conjecture for the Gowers norm (which combined with earlier work of Green and Tao will resolve many cases of the Hardy--Littlewood conjectures on linear equations in primes, including precise asymptotics for primes in arithmetic progressions) in the language of non-standard analysis. That seems pretty helpful! (By the way, I strongly recommend Terry Tao's blog for several discussions of the applicability of non-standard analysis to "everyday" mathematics.) – Emerton Feb 25 '10 at 2:53

I added the nonstandard-analysis tag. – Joel David Hamkins Feb 25 '10 at 14:23

From the Wikipedia article: the list of new applications in mathematics is still very small. One of these results is the theorem proven by Abraham Robinson and Allen Bernstein that every polynomially compact linear operator on a Hilbert space has an invariant subspace. Upon reading a preprint of the Bernstein-Robinson paper, Paul Halmos reinterpreted their proof using standard techniques. Both papers appeared back-to-back in the same issue of the Pacific Journal of Mathematics. Some of the ideas used in Halmos' proof reappeared many years later in Halmos' own work on quasi-triangular operators.

-

Great, this is what I was looking for. Thanks for the links. This certainly does answer my question, as Halmos was actually one of the original posers of that problem. It is quite interesting that the papers appear back-to-back in the same journal. – Tony Huynh Feb 24 '10 at 22:53

As Greg Lawler put it in his article in the recent volume dedicated to Nelson: "There are some theorems that were first published with nonstandard proofs but, at least in all the cases where I understand the result, they could have been done standardly." In a footnote he adds, "Of course, it is harder to answer the question: would the proofs have been found without nonstandard analysis? In fact, there are probably some proofs that have been done originally using nonstandard analysis but the author chose to write a standard proof instead." – Steve Huntsman Feb 24 '10 at 22:54

The reason for choosing standard proofs over nonstandard ones is obvious, and Lawler himself brings it up in the same article. Proving something with NSA hurts when trying to communicate results to a wide audience. In this sense NSA and experimentation are in the same boat--they can help, but generally behind the scenes. – Steve Huntsman Feb 24 '10 at 23:01

The story repeats itself many times.
For example, Kamae's proof of the ergodic theorem using nonstandard analysis was rewritten by Weiss, and the two articles appeared back-to-back. I interpret this as antipathy to Robinson himself: "Look, your methods aren't so innovative as you tell everybody!" Perhaps if Robinson had been more likable, we could have a nicer analysis already. – Kevin O'Bryant Feb 26 '11 at 2:15

@katz: Wikipedia evolves. If you look at the version which Steve Huntsman quoted from (considering that this post has not been edited since it was first posted, that'd be the version from Feb 22, 2010), the quote is accurate. en.wikipedia.org/w/… – Willie Wong Apr 8 '13 at 11:00

The other answers are excellent, but let me add a few points.

First, with a historical perspective, all the early fundamental theorems of calculus were first proved via methods using infinitesimals, rather than by methods using epsilon-delta arguments, since those methods did not appear until the nineteenth century. Calculus proceeded for centuries on the infinitesimal foundation, and the early arguments---whatever their level of rigor---are closer to their modern analogues in nonstandard analysis than to their modern analogues in epsilon-delta methods. In this sense, one could reasonably answer your question by pointing to any of these early fundamental theorems.

To be sure, the epsilon-delta methods arose in part because mathematicians became unsure of the foundational validity of infinitesimals. But since nonstandard analysis exactly provides the missing legitimacy, the original motivation for adopting epsilon-delta arguments appears to fall away.

Second, while it is true that almost any application of nonstandard analysis in analysis can be carried out using standard methods, the converse is also true. That is, epsilon-delta arguments can often also be translated into nonstandard analysis. Furthermore, someone raised with nonstandard analysis in their mathematical childhood would likely prefer things this way. In this sense, the preference between the two methods may be a cultural matter of upbringing. For example, H. Jerome Keisler wrote an introductory calculus textbook called Elementary Calculus: an infinitesimal approach, and this text was used for many years as the main calculus textbook at the University of Wisconsin, Madison. I encourage you to take a look at this interesting text, which looks at first like an ordinary calculus textbook, except that in the inside cover, next to the various formulas for derivatives and integrals, there are also listed the various rules for manipulating infinitesimals, which fill the text. Keisler writes:

This is a calculus textbook at the college Freshman level based on Abraham Robinson's infinitesimals, which date from 1960. Robinson's modern infinitesimal approach puts the intuitive ideas of the founders of the calculus on a mathematically sound footing, and is easier for beginners to understand than the more common approach via limits.

Finally, third, some may take your question to presume that a central purpose of nonstandard analysis is to provide applications in analysis. But this is not correct. The concept of nonstandard models of arithmetic, of analysis and of set theory arose in mathematical logic and has grown into an entire field, with hundreds of articles and many books, with its own problems and questions and methods, quite divorced from any application of the methods in other parts of mathematics.
For example, the subject of Models of Arithmetic is focused on understanding the nonstandard models of the first order Peano Axioms, and it makes little sense to analyze these models using only standard methods. To mention just a few fascinating classical theorems: every countable nonstandard model of arithmetic is isomorphic to a proper initial segment of itself (H. Friedman). Under the Continuum Hypothesis, every Scott set (a family of sets of natural numbers closed under Boolean operations, Turing reducibility and satisfying König's lemma) is the collection of definable sets of natural numbers of some nonstandard model of arithmetic (D. Scott and others). There is no nonstandard model of arithmetic for which either addition or multiplication is computable (S. Tennenbaum). Nonstandard models of arithmetic were also used to prove several fascinating independence results over PA, such as the results on Goodstein sequences, as well as the Paris-Harrington theorem on the independence over PA of a strong Ramsey theorem. Another interesting result shows that various forms of the pigeon hole principle are not equivalent over weak base theories; for example, the weak pigeon-hole principle that there is no bijection of $n$ to $2n$ is not provable over the base theory from the weaker principle that there is no bijection of $n$ with $n^2$. These proofs all make fundamental use of nonstandard methods, which it would seem difficult or impossible to omit or to translate to standard methods.

-

Regarding NSA vs epsilons and deltas, isn't 'not using the Axiom of Choice unnecessarily' a good reason to use the latter? – HJRW Feb 25 '10 at 21:01

Well, many ordinary uses of epsilon-delta also use choice. For example, to know that the epsilon-delta definition of continuity for a function on the reals is equivalent to the convergent sequence characterization relies on AC, since you need to pick the points inside those delta-balls. – Joel David Hamkins Feb 25 '10 at 21:19

One needs no choice at all to construct nonstandard models of arithmetic. For the reals, however, the existence of a nonstandard model of the reals with the transfer principle is equivalent to the existence of a nonprincipal ultrafilter on omega, which would be a weak choice principle. Nevertheless, one needs at least DC to have a decent theory of Lebesgue measure, so there seems to be choice all around here. – Joel David Hamkins Feb 25 '10 at 21:26

I should say that the reverse implication in that equivalence uses countable choice, because if you have an ultrafilter, you still need countable choice to verify that the ultrapower satisfies the Łoś theorem, which is what gives you the Transfer principle. But the transfer principle in any case gives you ultrafilters, which is an interesting little argument. – Joel David Hamkins Feb 25 '10 at 21:32

Joel, I enjoyed your answer and learned from it, but I wondered whether "nonstandard analysis exactly provides the missing legitimacy" [of early calculus] was overstating it. I'm more familiar with the "other" way of putting infinitesimals on a firm footing, that of Synthetic Diff Geom (as e.g. in Bell's text A Primer of Infinitesimal Analysis). As I understand it, a crucial difference is that the infinitesimals of SDG can, for instance, have square equal to 0, but the infinitesimals of NSA can't. I'd guess that to be an important part of providing that "missing legitimacy". Any thoughts?
– Tom Leinster Feb 27 '10 at 2:21

Nonstandard hulls of spaces are used all the time in Banach space theory, so much so that books devote sections to the construction of ultraproducts of Banach spaces (e.g. Absolutely summing operators by Diestel, Jarchow, and Tonge). There are cases where NSA is used to prove the existence of an estimate, yet no one knows how directly to compute an estimate. For example, the unconditional constant of any basis for the span of the first n unit basis vectors in James' space of sequences of bounded quadratic variation must go to infinity, but the only known proof involves NSA.

-

In 1986 C. Ward Henson and H. J. Keisler published “On the Strength of Nonstandard Analysis” (The Journal of Symbolic Logic, Vol. 51, No. 2 (Jun., 1986), pp. 377-386), which is a seminal contribution to the meta-mathematics of nonstandard analysis. Since their result bears directly on the issue in this thread, which has been reopened after lying dormant for some time now, and since no reference to their work appears in the original thread, I am taking the liberty of quoting the introduction to Henson and Keisler’s important paper (which I believe is as current today as when it was published).

It is often asserted in the literature that any theorem which can be proved using nonstandard analysis can also be proved without it. The purpose of this paper is to show that this assertion is wrong, and in fact there are theorems which can be proved with nonstandard analysis but cannot be proved without it. There is currently a great deal of confusion among mathematicians because the above assertion can be interpreted in two different ways. First, there is the following correct statement: any theorem which can be proved using nonstandard analysis can be proved in Zermelo-Fraenkel set theory with choice, ZFC, and thus is acceptable by contemporary standards as a theorem in mathematics. Second, there is the erroneous conclusion drawn by skeptics: any theorem which can be proved using nonstandard analysis can be proved without it, and thus there is no need for nonstandard analysis.

The reason for this confusion is that the set of principles which are accepted by current mathematics, namely ZFC, is much stronger than the set of principles which are actually used in mathematical practice. It has been observed (see [F] and [S]) that almost all results in classical mathematics use methods available in second order arithmetic with appropriate comprehension and choice axiom schemes. This suggests that mathematical practice usually takes place in a conservative extension of some system of second order arithmetic, and that it is difficult to use the higher levels of sets. In this paper we shall consider systems of nonstandard analysis consisting of second order nonstandard arithmetic with saturation principles (which are frequently used in practice in nonstandard arguments). We shall prove that nonstandard analysis (i.e. second order nonstandard arithmetic) with the $\omega_{1}$-saturation axiom scheme has the same strength as third order arithmetic. This shows that in principle there are theorems which can be proved with nonstandard analysis but cannot be proved by the usual standard methods. The problem of finding a specific and mathematically natural example of such a theorem remains open. However, there are several results, particularly in probability theory, whose only known proofs are nonstandard arguments which depend on saturation principles; see, for example, the monograph [Ke].
Experience suggests that it is easier to work with nonstandard objects at a lower level than with sets at a higher level. This underlies the success of nonstandard methods in discovering new results. To sum up, nonstandard analysis still takes place within ZFC, but in practice it uses a larger portion of full ZFC than is used in standard mathematical proofs.

[F] S. FEFERMAN, Theories of finite type related to mathematical practice, Handbook of mathematical logic (J. Barwise, editor), North-Holland, Amsterdam, 1977, pp. 913-971.

[Ke] H. J. KEISLER, An infinitesimal approach to stochastic analysis, Memoirs of the American Mathematical Society, No. 297 (1984).

[S] S. SIMPSON, Which set existence axioms are needed to prove the Cauchy/Peano theorem for ordinary differential equations? JSL, vol. 49 (1984), pp. 783-802.

It is perhaps worth adding that Keisler (making use of work of Avigad) subsequently published a sequel to his paper with Henson in which he introduces what might be regarded as a system of Reverse Mathematics for nonstandard analysis, with the hope of being able to establish the strength of particular theorems proved using nonstandard analysis. (See “The Strength of Nonstandard Analysis” by H. J. Keisler in The Strength of Nonstandard Analysis, edited by Imme van den Berg and Vítor Neves, Springer, 2007.)

-

I first understood what the Thurston-type compactification of the space of properly strictly convex real projective structures on a closed surface was using non-standard methods. What had been murky and confusing was suddenly clear. I have struggled with the question of whether or not to use NSA in the written proof. It is so much easier to use NSA that I think we will.
I think proofs that avoid non-standard analysis emerged later, but the first one used non-standard technique. - Ah, Sam and me gave the same answer within 55 seconds of each other (:. Sorry I did not see Sam's answer. –  Hailong Dao Feb 24 '10 at 23:01 I think the only known solution to the local version of the Hilbert's fifth problem heavily uses nonstandard analysis. To be more precise the result is: every locally euclidean local group is locally isomorphic to a Lie group. You can find details in Isaac Goldbring's paper. - Gromov was writing in one of his books (among other things) about some new mathematics coming from nonstandard analysis. Another example is proving that some statistical field theories (and lattice QFTs) are well-defined by Sergio Albeverio et al. (look at their book about that kind of applications to physics). Kiesler has been emphasising that some functional spaces are much richer in nonstandard analysis and that this power is one of the main arguments for the theory. Analysts say that one should look for applications where one has several degrees of infinitesimals or asymptotics, to somewhat reduce fitting complicated estimates to satisfy all. There are some other approaches to infinitesimals which are not nonstandard analysis (no general transfer principle), but are similar in spirit, namely the synthetic differential geometry. - I am not aware of any comments by Gromov directly on nonstandard analysis. In his book "metric structures for riemannian and non-riemannian spaces" , page 97, he does comment on the construction of asymptotic cones using nonprincipal ultrafilters, and cites the paper by van den Dries and Wilkie from 1984. In a recent interview, he praised their work more explicitly, but still without mentioning nonstandard analysis. –  katz Apr 8 '13 at 12:28 In mathematical economics, one often faces the following problem: One wants to formalize the idea of a large, relatively anonymous group of people (an atomless measure space of agents) that all face some risk that is iid of these people. Since there are lots of people, this risk should cancel out in the aggregate by some law of large numbers. The expost empirical distribution should be the ex ante distribution of the risk. If one uses something like the unit interval endowed with Lebesgue measure, this does not work. Most sample realizations are not measurable in that case. Yeneng Sun has shown that there are exact laws of large numbers with a continuum of random variables for certain types of measure spaces. The only known examples were obtained using the Loeb measure construction that relies heavily on NSA. Later, Konrad Podczeck has shown how to construct appropriate measure spaces using conventional methods. - Here is one paper with some results I have only seen being done in non-standard analysis so far, perhaps it is helpful to you: A mathematical proof of the existence of trends in financial time series by Michel Fliess & C´edric Join From the abstract: "We are settling a longstanding quarrel in quantitative finance by proving the existence of trends in financial time series thanks to a theorem due to P. Cartier and Y. Perrin, which is expressed in the language of nonstandard analysis [...] Those trends, which might coexist with some altered random walk paradigm and efficient market hypothesis, seem nevertheless difficult to reconcile with the celebrated Black-Scholes model. They are estimated via recent techniques stemming from control and signal theory. 
Several quite convincing computer simulations on the forecast of various financial quantities are depicted. We conclude by discussing the role of probability theory." - Steve Huntsman's claim attributed to wikipedia that "the list of new applications in mathematics is still very small" is patently false. In fact, I was unable to find such a claim there. To mention just the most famous results, there is the recent work by T. Tao et al, by I. Goldbring on the local version of Hilbert's 5, Albeverio (several applications in math physics), Arkeryd (see his piece in the American Mathematical Monthly at http://www.jstor.org/stable/10.2307/30037635) in hydrodynamics, the works on "canards" in perturbation theory, Jin's work in additive number theory, as well as numerous applications in statistics and economics. Robinson's work also occasioned a critical re-evaluation of whig history dominated by a reductive epsilontist agenda. - @katz: Wikipedia evolves. See the version from which Steve Huntsman quoted when he wrote his answer three years ago: en.wikipedia.org/w/… –  Willie Wong Apr 8 '13 at 11:01 (BTW, several of the results you mention are already discussed in the various other answers to this question below, and it would be great if you can add links to the ones which aren't [for example, a link or actual citation reference to the relevant papers of Arkeryd would be wonderful!]) –  Willie Wong Apr 8 '13 at 11:09 The version you cite dates from 2010. The claim was as false in 2010 as it is in 2013. It is not appropriate to hide behind anynomous claims posted in the public domain if such claims are incorrect. –  katz Apr 8 '13 at 11:12 I make no comment on the verity of the sentiments expressed by that quote. I take issue with your statement "In fact, I was unable to find such a claim there", which for better or for worse sounds like you are accusing Steve Huntsman of fabricating the quote out of thin air. –  Willie Wong Apr 8 '13 at 11:51 You are putting words in my mouth. Most people know that wiki is a work in progress. If this claim was deleted, there must have been good reasons for this. Since posting material on wiki involves little personal responsibility, it is inappropriate to rely on negative claims made there. My objection to Huntsman's presentation of his comment stands. –  katz Apr 8 '13 at 12:15 Edward Nelson was working on a book on NSA mentioned here: https://web.math.princeton.edu/~nelson/books.html His existing book "Radically elementary probability theory" (linked from that page) uses some NSA. I've been wanting to read it but don't understand much of it. - That's related (and interesting), but doesn't directly address the question, viz. what results have been proven first using NSA. –  Robert Haraway Apr 8 '13 at 21:02 I just came across a 2013 book by F. Herzberg entitled "Stochastic Calculus with Infinitesimals", see http://link.springer.com/book/10.1007/978-3-642-33149-7/page/1 where probability and stochastic analysis are done without having to develop the complexities of measure and integration theory first. Ever since E.Nelson, such an approach is called "radically elementary" and it really is. What this proves is the new result that stochastic calculus can be done without measure theory. To give a historical parallel, recall that Leibniz's mentor in mathematics was Huygens. 
When Huygens first learned of Leibniz's invention of infinitesimal calculus, Huygens was sceptical, and wrote to Leibniz that he is merely doing what Fermat and others have done before him in a different language. What Huygens failed to recognize immediately (but did recognize later) was the generality of the methods and the lucidity of the presentation of Leibniz's new approach. The Nelson-Herzberg approach to stochastic calculus is in a way more significant than merely a new "result", since it provides a new methodology. -
2015-07-04 20:47:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8255073428153992, "perplexity": 527.8497127898646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096944.75/warc/CC-MAIN-20150627031816-00106-ip-10-179-60-89.ec2.internal.warc.gz"}
http://kea-monad.blogspot.com/2008/03/m-theory-lesson-175.html
occasional meanderings in physics' brave new world

Marni D. Sheppeard, New Zealand

## Saturday, March 29, 2008

### M Theory Lesson 175

By placing each knot crossing in a box, we see 4 output lines for each box, defining two ribbon strands. Thus there are always twice as many extra faces (as squares) on an associated polytope in $\mathbb{R}^{3}$. The associahedron satisfies this condition, as does the deformed octahedron of cubic triality (which has four globule faces). The Euler characteristic defines a sequence of such polytopes via $E = V + F - 2$. The ribbon diagram for the trefoil knot is the familiar once punctured torus (elliptic curve). Maps relating elliptic curves to the Riemann sphere go back a long way. In particular, the Weierstrass function $P: E(w_{1} , w_{2}) \rightarrow \mathbb{P}^{1}$ is defined via theta functions (for $\tau = \frac{w_{2}}{w_{1}}$) by

$P (z, \tau) = \pi^{2} \theta^{2} (0, \tau) \theta_{10}^{2} (0, \tau) \frac{\theta_{01}^{2} (z, \tau)}{\theta_{11}^{2} (z, \tau)} - \frac{\pi^{2}}{3} (\theta^{4} (0, \tau) + \theta_{10}^{4} (0, \tau))$

Recall that it is the functional relation on $\theta (0, \tau)$ which gives the functional relation for the Riemann zeta function, and these theta functions also define the triality of the j invariant.
2019-06-17 23:04:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9371007084846497, "perplexity": 1094.4511541278916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998581.65/warc/CC-MAIN-20190617223249-20190618005249-00460.warc.gz"}
https://im.kendallhunt.com/HS/students/3/4/18/index.html
# Lesson 18

Applications of Logarithmic Functions

• Let’s measure acidity levels and earthquake strengths.

### 18.1: Scrambled Logs

Without using a calculator, put the following expressions in order, from least to greatest. Be prepared to explain your reasoning.

• $$\log 11$$
• $$\log_2 8$$
• $$\log_5 0.2$$
• $$\log 0.01$$
• $$\ln 1$$

### 18.2: How Acidic Is It?

The pH scale is a way to measure the acidity of a liquid solution. It is based on the concentration of positive hydrogen ions in the liquid. A smaller pH indicates more hydrogen ions and higher acidity. A larger pH indicates fewer hydrogen ions and lower acidity.

Here is a table showing the hydrogen ion concentration (in moles per liter) and the pH of some different liquids:

| liquids | hydrogen ion concentration (moles per liter) | pH |
|---|---|---|
| water | $$10^{\text-7}$$ | 7 |
| coffee | $$10^{\text-5}$$ | |
| root beer | $$10^{\text-4}$$ | |
| orange juice | $$10^{\text-3}$$ | |
| seawater | | |
| vinegar | | |

1. Which of the drinks listed, water, coffee, root beer, or orange juice, is the most acidic? Which is the least acidic? Explain how you know.
2. Seawater has a pH of 8. Is it more acidic or less acidic than water? Record the hydrogen ion concentration of seawater in the table.
3. Vinegar has a pH of 2.4. Is it more acidic or less acidic than orange juice? Record the hydrogen ion concentration of vinegar in the table.
4. A logarithm is used to translate hydrogen ion concentrations to pH values. With a partner, discuss how the hydrogen ion concentrations might be related to the pH. Use words or expressions to describe the relationship you notice.

### 18.3: pH Ratings

This table shows the relationship between hydrogen ion concentrations and pH ratings (acidity) for different substances.

| substance | hydrogen ion concentration (moles per liter) | pH |
|---|---|---|
| mild detergent | 0.0000000001 | 10 |
| toothpaste | 0.000000001 | 9 |
| baking soda | 0.00000001 | 8 |
| blood | 0.0000001 | 7 |
| milk | 0.000001 | 6 |
| banana | 0.00001 | 5 |
| tomato | 0.0001 | 4 |
| apple | 0.001 | 3 |
| lemon | 0.01 | 2 |

1. Write an equation to represent the pH rating, $$p$$, in terms of the hydrogen ion concentration $$h$$, in moles per liter.
2. Test your equation by using the hydrogen ion concentration of a substance from the table as the input. Does it produce the right pH rating as the output? If not, revise your equation and test it again.
3. Magnesium hydroxide (also called “milk of magnesia”) is a medication used to treat stomach indigestion. It has a hydrogen ion concentration of $$5.6 \times 10^{\text-11}$$ mole per liter. Estimate a pH rating for magnesium hydroxide. Explain or show your reasoning.
4. As shown in the table, apple has a pH of 3 and milk has a pH of 6. How many times more acidic is the apple than milk?

The graph shows points representing the hydrogen ion concentration, in moles per liter, and pH ratings of the different substances you saw earlier.

1. Which point represents baking soda? Which represents banana? How can you tell?
2. Vinegar has a pH of 2.4. Where on the graph would a point that represents vinegar be plotted?
3. Why do you think the graph appears the way it does, with a group of points stacked up along the vertical axis?
4. How is it like and unlike other graphs of logarithmic functions you have seen so far?

### 18.4: Measuring Earthquake Strength

Here is a table showing the Richter ratings for displacements recorded by a seismograph 100 km from the epicenter of an earthquake.

| seismograph displacement (meters) | $$10^{\text-6}$$ | $$10^{\text-5}$$ | $$10^{\text-4}$$ | $$10^{\text-3}$$ | $$10^{\text-2}$$ | $$10^{\text-1}$$ | $$10^0$$ | $$10^1$$ |
|---|---|---|---|---|---|---|---|---|
| Richter rating | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
1. Compare an earthquake rated 5 on the Richter scale to one rated 6. How do their displacements compare? What about an earthquake rated 2 compared to one rated 3?
2. Discuss with a partner how the displacement might be related to the Richter scale. Express that relationship in words or with an expression.
3. An earthquake shook the northwest part of Indonesia in 2004, causing massive damage and casualties. If a seismograph had been located 100 km from the epicenter, it would have recorded a displacement of 125 m! Use your answer to the previous question to estimate the Richter rating for the earthquake.

### Summary

Logarithms are helpful in a variety of real-world contexts. Let’s look at an example in chemistry.

The acidity of a substance is measured by the concentration of positive hydrogen ions, $$H$$, in moles per liter. If the concentration is $$10^{x}$$, then the acidity rating, or pH rating, is $$\text{-}x$$. For example, grapefruit juice has a hydrogen ion concentration of about $$10^\text{-3}$$ mole per liter, so its acidity rating is about 3. The concentration of hydrogen ions in lemon juice is $$10^\text{-2}$$ mole per liter, so its acidity or pH rating is 2. We can see that the pH rating is -1 times the exponent in the expression representing the hydrogen ion concentration. Because the exponent in a power of 10 can be expressed in terms of the base 10 logarithm, the pH rating can be expressed as $$\text-1 \log_{10} H$$ or simply $$\text- \log_{10} H$$.

When the exponent in a power of 10 increases by 1, say from $$10^\text{-3}$$ to $$10^\text{-2}$$, the quantity changes by a factor of 10. This means that lemon juice has 10 times the hydrogen ion concentration of grapefruit juice. Water, which has a pH rating of 7, has $$10^\text{-7}$$ mole of hydrogen ions per liter. This means that water has $$\frac{1}{10,000}$$ of the hydrogen ion concentration of grapefruit juice.

Another example of logarithm use is the Richter scale, which measures the strength of an earthquake in terms of the displacement of the needle on a seismograph. A displacement of 1 micrometer, one millionth of a meter, measures 1 on the Richter scale. Each time the displacement increases by a factor of 10, the Richter scale measure increases by 1. So a displacement of 10 micrometers measures 2 on the Richter scale, and a displacement of 1,000 micrometers (1 mm) measures 4 on the Richter scale.

If the seismograph displacement is $$d$$ meters, the Richter rating of the earthquake can be expressed as $$7 +\log_{10}{d}$$. We can check that when $$d = 1 \times 10^{\text-6}$$ (a displacement of 1 micrometer), the Richter rating is 1. And when the displacement increases by a factor of 10, the exponent of $$d$$ increases by 1, so the Richter rating of the earthquake increases by 1.

### Glossary Entries

• logarithmic function

A logarithmic function is a constant multiple of a logarithm to some base, so it is a function given by $$f(x) = k \log_{a}(x)$$ where $$k$$ is any number and $$a$$ is a positive number (10, 2, or $$e$$ in this course). The graph of a typical logarithmic function is shown. Although the function grows very slowly, the graph does not have a horizontal asymptote.
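Both rating formulas in this lesson are easy to check numerically. Here is a small Python sketch (an addition for illustration, not part of the lesson; the function names and test values are made up here):

import math

def ph_rating(h):
    """pH rating from hydrogen ion concentration h, in moles per liter."""
    return -math.log10(h)

def richter_rating(d):
    """Richter rating from seismograph displacement d, in meters."""
    return 7 + math.log10(d)

print(ph_rating(1e-7))       # water: 7.0
print(ph_rating(5.6e-11))    # milk of magnesia: about 10.3
print(richter_rating(1e-6))  # 1 micrometer displacement: 1.0
print(richter_rating(125))   # the 125 m displacement above: about 9.1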
2022-12-08 13:27:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6070609092712402, "perplexity": 1466.6598025122548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711336.41/warc/CC-MAIN-20221208114402-20221208144402-00099.warc.gz"}
https://lists.gnu.org/archive/html/espressomd-users/2014-12/msg00042.html
espressomd-users

## Re: [ESPResSo-users] LBM, speed of sound, stability

From: Wink, Markus
Subject: Re: [ESPResSo-users] LBM, speed of sound, stability
Date: Thu, 18 Dec 2014 11:17:25 +0000

Hello everybody,

a practical question, probably stupid, but anyway. As Ulf wrote: "you need to make sure that h*c_s^2/\nu is small to avoid nonlinear instabilities. h is the LB timestep, c_s is the speed of sound, and \nu is the kinematic viscosity"

Is the LB timestep h the one you invoke in the tcl script as tau? For example, having h=0.1, would you write "tau 0.1" for the lbfluid? Unfortunately the user's guide just says that it is "the LB timestep", but I am not sure if it is the same.

Greetings

Markus

-----Original Message-----
From: address@hidden [mailto:address@hidden] On Behalf Of Ulf Schiller
Sent: Wednesday, 17 December 2014 19:10
To: address@hidden
Subject: Re: [ESPResSo-users] LBM, speed of sound, stability

On 17/12/14 12:12, Ivan Cimrak wrote:
> Hi all,
> In one of his emails Ulf Schiller explained that: "you need to make sure that h*c_s^2/\nu is small to avoid nonlinear instabilities. h is the LB timestep, c_s is the speed of sound, and \nu is the kinematic viscosity. In the D3Q19 model, c_s^2=1/3*a^2/h^2, so a^2/(3*\nu*h) must be small. It may work with values O(1) but it is not guaranteed."
> Ulf, could you please give me the reason why this is necessary? And what does it mean "is small"? Are the values 0.1 - 0.99 ok?

Hi Ivan,

the standard lattice Boltzmann algorithm is typically thought to be second order accurate in time; however, if you look at the discretisation of the collision operator (usually Crank-Nicolson), the error is actually of the order O((h/\tau)^3) where \tau is the viscous relaxation time (or BGK relaxation time). The latter is related to the viscosity by \nu=c_s^2*\tau where c_s is the speed of sound. Hence the grid Reynolds number h/\tau=h*c_s^2/\nu needs to be small.

Now, in LB there is a subtle cancellation of errors of the Crank-Nicolson discretisation and the splitting error, such that the standard LB algorithm approximates the slow manifold of solutions to the discrete velocity model even at values of \tau/h beyond unity (an intriguing side effect of this is that the exact solution of the collision operator does produce excessive decay of shear waves due to the lack of said cancellation). Another way to phrase it is that the LBM disconnects from kinetic theory and can work in the over-relaxation regime (i.e. negative eigenvalues of the collision operator). Some details of the derivation are given in http://dx.doi.org/10.1016/j.cpc.2014.06.005 and references therein (in particular Brownlee et al. and Paul Dellar).

In practice, instabilities may arise at the higher moments and couple into the Navier-Stokes dynamics. I'll mention in passing that coupling particles to the LB fluid involves singular forces that may also affect stability. Whether this actually occurs will depend on the characteristics of the flow under consideration; for laminar flow and non-stiff coupling there is probably no problem.

Best wishes,
Ulf

--
Dr Ulf D Schiller
Centre for Computational Science
University College London
20 Gordon Street
London WC1H 0AJ
United Kingdom
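To make the stability heuristic from this thread easy to check, here is a small Python sketch (my own addition, not from the mailing list). The parameter names agrid, tau, and visc are meant to mirror the lbfluid arguments discussed above, and the D3Q19 relation c_s^2 = a^2/(3*h^2) is taken from Ivan's quote; treat the threshold judgments in the comments as rough guidance only.

# Grid Reynolds number h*c_s^2/nu = a^2/(3*nu*h) for the D3Q19 model.
def grid_reynolds(agrid, tau, visc):
    """Return h*c_s^2/nu = agrid**2/(3*visc*tau), which should be small
    for nonlinear stability (tau here is the LB timestep h)."""
    cs2 = agrid**2 / (3.0 * tau**2)  # speed of sound squared, D3Q19
    return tau * cs2 / visc

# Illustrative values only (not recommendations):
print(grid_reynolds(1.0, 1.0, 1.0))  # about 0.33 -- small, likely fine
print(grid_reynolds(1.0, 0.1, 1.0))  # about 3.33 -- O(1) or larger, may be risky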
2021-10-19 06:26:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8654302358627319, "perplexity": 7633.910113884353}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585242.44/warc/CC-MAIN-20211019043325-20211019073325-00330.warc.gz"}
https://english.stackexchange.com/questions/385105/what-is-wrong-with-using-e-g
# What is wrong with using e.g.? [duplicate] I am currently using devd/Academic-Writing-Check for my master thesis in computer science. One thing it complains about is the usage of e.g.. What is wrong with that? For example, I wrote: 1. In the case of many classes (e.g. 1000~classes of ImageNet) ... 2. This threshold can either be set automatically (e.g. such that 10% of all pairs are above the threshold) or semi-automatically ... 3. \item Color space (e.g. RGB, HSV) • I've heard at least one source say something like, "No one studies Latin anymore, so younger people don't know what that means". I personally say screw that advice. – Daniel R. Collins Apr 15 '17 at 16:20 • Without reading the text, it's difficult to say: maybe you misused e.g. in place of i.e., or you didn't put a comma after e.g., or simply the authors of that app thought that using such kind of abbreviations is not nice. – Massimo Ortolano Apr 15 '17 at 16:22 • Who knows? Ask whoever wrote that software, don't just accept their views as absolute truth... – user89134 Apr 15 '17 at 16:35 • The regex in the fixAbbr method (in checkwriting file) definitely wants a comma after e.g. i.stack.imgur.com/3g4cB.png – Martin Smith Apr 15 '17 at 20:07 • There is an "English Language and Usage" Stack Exchange site which is perhaps a better fit for your question than this site. That said, (1) in my opinion what you wrote is definitely fine as it is, and it would also be fine with a comma after e.g. in each case; (2) grammar advice given by a regex string matcher can be useful, but should be taken with a grain of salt. – Anonymous Apr 15 '17 at 20:16
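For readers curious what such a checker is actually testing, here is a tiny illustrative sketch of a comma-after-"e.g." check in Python. The regex is invented for illustration and is not the actual pattern from the Academic-Writing-Check script mentioned above.

```python
import re

# Illustrative only: flag occurrences of "e.g." that are not followed by
# a comma, roughly what the fixAbbr rule appears to require.
pattern = re.compile(r"\be\.g\.(?!,)")

for line in [
    "In the case of many classes (e.g. 1000 classes of ImageNet) ...",
    "Color space (e.g., RGB, HSV)",
]:
    if pattern.search(line):
        print(f"warning: 'e.g.' without a trailing comma: {line!r}")
```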
2021-07-24 01:40:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42451879382133484, "perplexity": 1287.0342873367647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150067.87/warc/CC-MAIN-20210724001211-20210724031211-00289.warc.gz"}
http://projectussa.org/moygi/non-darcy-coefficient.html
Non-Darcy coefficient

Non-Darcy flow (also known as high-velocity flow or inertial flow) is fluid flow that deviates from Darcy's law, which assumes laminar flow in the formation. It often occurs in the near-well region of a reservoir during injection or production, and it is typically observed in gas wells when the fluids converging to the wellbore reach velocities characteristic of turbulent flow. Non-Darcy behavior is important for describing flow in porous media wherever high velocities occur, and it has also been recognized in discrete fractures. Because the productive capacity of oil- and gas-bearing rocks depends on the parameters characterizing these flow conditions, the non-Darcy coefficient is a main parameter for evaluating the seepage capacity of gas reservoirs.

Forchheimer proposed a flow equation that accounts for the non-linear effect of turbulence by adding a second-order term, proportional to the velocity squared, to Darcy's equation:

−grad P = (μ/k) u + β ρ u²

where μ is the fluid viscosity, k the permeability, ρ the fluid density, u the volumetric flux (q/A) through the rock, and β the non-Darcy flow coefficient, with units of L⁻¹. For gas flow the same balance is written −dp/ds = (μ_g/k)(q_g/A) + β ρ_g (q_g/A)². The coefficient β is variously called the non-Darcy velocity coefficient, inertial coefficient, inertial resistance, turbulence factor, Forchheimer coefficient, or beta factor. It is an empirical, medium-dependent quantity (in that respect similar to permeability) that depends on the pore geometry and the fluid properties. The non-Darcy term is the product of the non-Darcy coefficient, the fluid density, and the square of the velocity; the mechanism of the non-Darcy effect is closely related to low-velocity regions and preferential flow paths, because the magnitude of the fluid inertial forces varies with the local velocity field. The deviation coefficient b from Darcy's law is related to β by b = βρ, and in power-law (Izbash-type) descriptions of the nonlinear relationship between hydraulic gradient and flow velocity the exponent is usually between 1 and 2.

To measure k and β in steady-state flow experiments, the Forchheimer equation is linearized: denoting ∇p/(μv) as y and ρv/μ as x gives the straight line y = 1/k + βx, so curve-fitting pressure loss against velocity yields the permeability from the intercept and the non-Darcy coefficient from the slope. Radial flow experiments have been used to confirm the existence of non-Darcy flow and to calculate the inertial ("inertia") coefficient, and rock-mechanics test systems such as the MTS815.02, together with a seepage instrument, are used to measure non-Darcy seepage properties under stress.

A criterion is needed to identify the onset of non-Darcy flow. The non-Darcy effect E is defined as the ratio of the hydraulic gradient induced by the inertial forces to the total hydraulic gradient, and it lies between 0 and 1; the Forchheimer number, essentially the ratio of inertial ("non-Darcy" or "turbulent") drag forces to "Darcy" drag forces, is used in the same role, and estimating the inertial coefficient is essential to apply it. Under a 10% non-Darcy-effect criterion, the flow in a preferential path may already experience non-Darcy behavior. The non-Darcy effect becomes more evident at a smaller relative minimum permeability (k_mr < 0.05) and a larger non-Darcy number (F_ND > 10), and gas flow in porous media shows a transition from Darcy to non-Darcy flow as the Reynolds number increases.

Many empirical correlations express β as a function of permeability, porosity, and sometimes tortuosity; Cornell and Katz, who measured porosity, permeability, tortuosity, and the non-Darcy flow coefficient for a number of rocks including sandstones, carbonates, and dolomites, demonstrated the importance of including the tortuosity. The correlations are distinct and lead to considerably different values of the non-Darcy coefficient for the same rock sample, so a simple guideline is needed for choosing the most appropriate correlation for a reservoir. In some formulas k appears in 10⁻³ μm² (i.e. millidarcies); Pascal's correlation, for example, gives β in 1/ft as a constant multiple of k^(−1.176) with k in md. Geertsma's correlation has been tested against a large variety of single-phase data with the aim of developing a general correlation, and a Kozeny-Carman type relation between the non-Darcy coefficient and permeability has been found with an exponent of −1. Narayanaswamy et al. (1999) investigated the effects of heterogeneity on the effective non-Darcy coefficient and showed that using an average permeability in such correlations can be misleading, and Evans and Evans (1988) describe an effective non-Darcy flow coefficient (per meter) for each fluid under multiphase flow conditions.

Laboratory results bear these correlations out only partially. Non-Darcy flow tests with supercritical CO2 at high flow rates, carried out as part of research on geological CO2 storage (Choi et al., Journal of Geophysical Research: Solid Earth), used five sandstones with distinct intrinsic properties (e.g., pore diameter and permeability) and computed the coefficient by four different methods. The non-Darcy coefficient decreased nonlinearly and converged on a value within a specific range as the permeability increased; although the same coefficient-permeability trends were observed, the measured non-Darcy coefficients were higher by about an order of magnitude than those estimated from data in the literature, perhaps due to heterogeneity. Experiments at different immobile water saturations show that β increases with water saturation; since the inertial force is a function of β, which itself depends on connate water saturation, this can ultimately lessen well productivity, and water vaporization in gas-condensate reservoirs changes β, and hence well productivity, in the same way. With increasing overburden and in-situ stress, permeability decreases while the non-Darcy flow coefficient increases; temperature has a definite and consistent influence on both permeability and the non-Darcy flow coefficient in N2 flooding. The product of permeability and non-Darcy coefficient is less anisotropic than either the permeability or the non-Darcy coefficient alone, and β is generally dependent on the geometrical properties of the medium.

In well-test and production analysis, non-Darcy flow enters as a rate-dependent skin s + Dq, where s is the "true" skin due to damage or stimulation and D is a non-Darcy flow coefficient (assumed constant) with units of (Mscf/d)⁻¹; textbook problems ask, for instance, for an IPR curve given a 160-acre spacing, a skin factor of 2, and D = 1×10⁻³ (Mscf/d)⁻¹. The buildup-pressure-derivative response for a well with non-Darcy flow across the completion exhibits a much steeper slope, and methods exist for estimating the non-Darcy flow coefficient from buildup-test data with wellbore storage (Spivey et al., 2004) and for estimating fracture length and the non-Darcy flow coefficient simultaneously using variable-rate tests (SPE, Dallas, 1980). Near-wellbore flow in high-rate gas wells deviates from Darcy's law as is typical of high-Reynolds-number flows, so prediction requires an accurate estimate of the β factor; a simple analytical method models the non-Darcy flow effect on the production performance of hydraulically fractured wells by modifying the fracture conductivity, and numerical models based on the Barree and Conway model have been developed to quantify non-Darcy behavior. Petroleum-engineering function libraries accordingly expose routines such as GasFlowRatePSS (pseudosteady-state gas rate under the Darcy approximation, in Mscf/d) and GasFlowRatePSSNonDarcy (stabilized pseudosteady-state rate including non-Darcy effects).

The coefficient has also been studied in many media beyond reservoir rock: rockfill material, where, based on pipe theory and Taylor's (1948) definition of the mean hydraulic radius, theoretical relationships between the friction coefficient and the Reynolds number are derived; conglomerate-confined aquifers, for which Forchheimer coefficients have been determined experimentally; tight sand formations, using a purpose-built experimental flow setup; soil-rock mixtures (SRMs) and artificial block-in-matrix soils (bimsoils) of various slenderness ratios; coal seams, a typical unconventional gas reservoir, where the permeability of gangue is three orders of magnitude lower than that of limestone; and broken rock in mine goafs, studied in connection with spontaneous combustion, where the effective fluidity lies in the range 10⁻⁸ to 10⁻⁵ m^(n+2) s^(2−n)/kg and the non-Darcy coefficient ranges from 10⁵ to 10⁸ m⁻¹ as the particle size of the sand changes, effective fluidity decreasing with increasing particle size; power-exponent and exponential functions have been fitted to the relationships between effective fluidity, the non-Darcy coefficient, and particle size. In rough single fractures, the inertial dependence of the non-Darcy coefficient has been examined experimentally and numerically (Xing, Qian, et al.): the non-Darcy coefficient β, the apparent permeability k_a, and the hydraulic aperture e_h all decrease as the Reynolds number increases, indicating that β depends on the geometric properties of the fracture and on the fluid inertial effect. The effect of joint roughness and aperture on the coefficient λ and power exponent n of Izbash's law has been explored in experiments holding the joint roughness coefficient fixed, and the nonlinear relationship between hydraulic gradient and water flow velocity is fitted well by Izbash's law. For groundwater flow through karst collapse columns, models couple Darcy's law, the Brinkman equation, and the Navier-Stokes equation, with the Brinkman equation adopted for the collapsed columns themselves; optimization of groundwater and other subsurface resources additionally requires analysis of multiple-well systems. Pore-network models (PNMs) have been developed to simulate non-Darcy flow, to investigate the effect of pore heterogeneity on the onset of the non-Darcy regime, and to estimate the Darcy permeability and Forchheimer coefficient, while 2D Navier-Stokes simulations on CT images of real porous media explore the feasibility of characterizing non-Darcy coefficients for realistic pore geometries.

In CFD settings the Darcy-Forchheimer porosity model takes non-linear effects into account by adding inertial terms to the pressure-flux equation; if the Forchheimer coefficient f is set to zero, the model degenerates into Darcy's law. The model requires both the Darcy coefficient d and the Forchheimer coefficient f to be supplied by the user, and these can be estimated by curve-fitting experimental pressure-loss-versus-velocity data; note that in OpenFOAM, a negative value given for a direction of the Darcy and/or Forchheimer coefficient in the fvOptions file gets multiplied by the largest value of the coefficient. A resistance coefficient can be converted into a Forchheimer coefficient via f = k/(φ²·L). Non-Darcy models also appear throughout convective heat and mass transfer: free convection in a non-Darcy bidisperse porous medium, non-Darcy natural convection from a vertical cylinder embedded in a thermally stratified and nanofluid-saturated porous medium (Rashad, Abbasbandy, and Chamkha, ASME, 2013), non-Darcy nanofluid flow and heat transfer over a continuously stretching sheet (simulated with a Galerkin finite element method based on the characteristic-based split scheme), non-Darcy mixed convection in a vertical pipe filled with porous medium, and Soret and Dufour effects on steady, laminar mixed convection along a semi-infinite vertical plate. In such studies the buoyancy parameter, Darcy number, Forchheimer parameter, suction/injection parameter, and thermal and solutal stratification parameters govern the dimensionless velocity, temperature, and concentration profiles; radiation enhances the heat transfer rate in both aiding and opposing flows, more significantly in the aiding flow. Permeability (K) and form coefficient (C) are the characteristic hydraulic properties of any porous medium and, for known fluid thermophysical properties, can be determined simultaneously using the Hazen-Dupuit-Darcy model.

The Darcy friction factor is a related but distinct quantity: a dimensionless number used in the Darcy-Weisbach equation, an empirical relation (named after Henry Darcy and Julius Weisbach) between the head loss, or pressure loss, due to friction along a given length of pipe, duct, or open channel and the average velocity of an incompressible fluid, h_f = f (L/D) v²/(2g), equivalently h_f = 4 f_F L v²/(2gD) with the Fanning factor f_F = f/4. The Darcy-Weisbach method is generally considered more accurate than the Hazen-Williams method (whose design tables list typical C values by pipe material, e.g. for plastic, PVC, and polyethylene pipe) and is used almost exclusively to calculate head loss due to friction in turbulent flow. The Blasius correlation is the simplest equation for computing the Darcy friction factor, but it has no term for pipe roughness; for rough pipes the Colebrook equation is solved iteratively (e.g., f_D = 0.019 for a 1 m duct in one worked example and f_D = 0.0383 in another), after which the friction loss follows from Δp = f_D (L/D) ρv²/2. The Chezy coefficient C is related to the Darcy-Weisbach friction factor f. Finally, the darcy itself is the unit of permeability defined through Darcy's law and referenced to a mixture of unit systems.
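Because the page repeatedly invokes the linearization ∇p/(μv) = 1/k + β(ρv/μ) for extracting k and β, a minimal sketch of that fit may be useful. All numerical values below are synthetic placeholders, not data from any study cited here.

```python
import numpy as np

# Hedged sketch: estimate permeability k and non-Darcy coefficient beta
# by linearizing Forchheimer's equation:
#   -dP/dx = (mu/k) * v + beta * rho * v**2
# Dividing through by mu*v gives  grad_p/(mu*v) = 1/k + beta * (rho*v/mu),
# a straight line y = 1/k + beta*x.  All numbers below are synthetic.

rho, mu = 1000.0, 1e-3          # fluid density [kg/m^3], viscosity [Pa s]
k_true, beta_true = 1e-12, 1e8  # "true" values used only to fabricate data

v = np.linspace(1e-4, 1e-2, 20)                    # superficial velocity [m/s]
grad_p = mu / k_true * v + beta_true * rho * v**2  # pressure gradient [Pa/m]

x = rho * v / mu
y = grad_p / (mu * v)
beta_est, inv_k_est = np.polyfit(x, y, 1)  # slope = beta, intercept = 1/k

print(f"k ≈ {1.0 / inv_k_est:.3e} m^2, beta ≈ {beta_est:.3e} 1/m")
```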
2022-10-04 20:52:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6945934891700745, "perplexity": 2608.9044197789576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00673.warc.gz"}
http://ptsymmetry.net/?p=1456
## Local PT symmetry violates the no-signaling principle Yi-Chan Lee, Min-Hsiu Hsieh, Steven T. Flammia, Ray-Kuang Lee Bender et al. have developed PT-symmetric quantum theory as an extension of quantum theory to non-Hermitian Hamiltonians. We show that when this model has a local PT symmetry acting on composite systems it violates the non-signaling principle of relativity. Since the case of global PT symmetry is known to reduce to standard quantum mechanics, this shows that the PT-symmetric theory is either a trivial extension or likely false as a fundamental theory. http://arxiv.org/abs/1312.3395 Quantum Physics (quant-ph); High Energy Physics – Theory (hep-th); Mathematical Physics (math-ph)
2021-05-16 21:40:27
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8158847093582153, "perplexity": 2125.286533664921}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989914.60/warc/CC-MAIN-20210516201947-20210516231947-00353.warc.gz"}
https://formulasearchengine.com/wiki/Gammatone_filter
# Gammatone filter

Figure 1: A gammatone impulse response.

A gammatone filter is a linear filter described by an impulse response that is the product of a gamma distribution and sinusoidal tone. It is a widely used model of auditory filters in the auditory system. The gammatone impulse response is given by

$g(t) = a t^{n-1} e^{-2\pi b t} \cos(2\pi f t + \phi),$

where $f$ (in Hz) is the center frequency, $\phi$ (in radians) is the phase of the carrier, $a$ is the amplitude, $n$ is the filter's order, $b$ (in Hz) is the filter's bandwidth, and $t$ (in seconds) is time. This is a sinusoid (a pure tone) with an amplitude envelope which is a scaled gamma distribution function.[1]

## Variations

Variations and improvements of the gammatone model of auditory filtering include the gammachirp filter, the all-pole and one-zero gammatone filters, the two-sided gammatone filter, and filter cascade models, and various level-dependent and dynamically nonlinear versions of these.[2]
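A direct transcription of the impulse-response formula into code may be useful. The parameter values below (center frequency, bandwidth, order, sample rate) are illustrative choices, not values prescribed by the article.

```python
import numpy as np

def gammatone_ir(t, a=1.0, n=4, b=125.0, f=1000.0, phi=0.0):
    """Gammatone impulse response g(t) = a * t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*f*t + phi)."""
    return a * t**(n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * f * t + phi)

fs = 16000.0                      # sample rate in Hz (illustrative)
t = np.arange(0, 0.05, 1.0 / fs)  # 50 ms of samples
g = gammatone_ir(t)               # n=4, b=125 Hz, f=1 kHz are illustrative choices
print(g[:5])
```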
2020-05-31 01:35:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7607399225234985, "perplexity": 2400.4851315460837}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347410535.45/warc/CC-MAIN-20200530231809-20200531021809-00105.warc.gz"}
https://iterativesolvers.julialinearalgebra.org/dev/linear_systems/bicgstabl/
# BiCGStab(l) BiCGStab(l) solves the problem $Ax = b$ approximately for $x$ where $A$ is a general, linear operator and $b$ the right-hand side vector. The method combines BiCG with $l$ GMRES iterations, resulting in a short-recurrence iteration. As a result, both the memory footprint and the computational cost per iteration are fixed. ## Usage IterativeSolvers.bicgstabl! (Function) bicgstabl!(x, A, b, l; kwargs...) -> x, [history] Arguments • A: linear operator; • b: right hand side (vector); • l::Int = 2: Number of GMRES steps. Keywords • max_mv_products::Int = size(A, 2): maximum number of matrix vector products. For BiCGStab(l) this is a less dubious term than "number of iterations"; • Pl = Identity(): left preconditioner of the method; • abstol::Real = zero(real(eltype(b))), reltol::Real = sqrt(eps(real(eltype(b)))): absolute and relative tolerance for the stopping condition |r_k| ≤ max(reltol * |r_0|, abstol), where r_k ≈ A * x_k - b is the approximate residual in the kth iteration; Note 1. The true residual norm is never computed during the iterations, only an approximation; 2. If a left preconditioner is given, the stopping condition is based on the preconditioned residual. Return values if log is false • x: approximate solution. if log is true • x: approximate solution; • history: convergence history. source ## Implementation details The method is based on the original article [Sleijpen1993], but does not implement later improvements. The normal equations arising from the GMRES steps are solved without orthogonalization. Hence the method should only be reliable for relatively small values of $l$. The r and u factors are pre-allocated as matrices of size $n \times (l + 1)$, so that BLAS2 methods can be used. Also the random shadow residual is pre-allocated as a vector. Hence the storage costs are approximately $2l + 3$ vectors. Tip: BiCGStab(l) can be used as an iterator. • [Sleijpen1993] Sleijpen, Gerard L.G., and Diederik R. Fokkema. "BiCGstab(l) for linear equations involving unsymmetric matrices with complex spectrum." Electronic Transactions on Numerical Analysis 1 (1993): 11-32.
2023-03-28 18:27:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7116385102272034, "perplexity": 3372.5787499789217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948868.90/warc/CC-MAIN-20230328170730-20230328200730-00791.warc.gz"}
http://codata.colorado.edu/competitions/appthis/
## Introduction and Quickstart Instructions The first step is to download the tarball at this link and to extract it using the Unix command tar -xzvf appthis.tgz, which will provide you with an appthis directory in the directory in which you ran the tar command. The appthis directory represents the simplest setup we could provide to help you be successful in this competition. Below is a quickstart list of steps to get your first model up and running: 1) Install into your virtualenv of choice the requirements found in the requirements.txt file (this should work with either Python 2.x or 3.x, though we recommend using 3.x): pip install -r requirements.txt 2) Run the script train_and_score_model.py in the following manner: python train_and_score_model.py training_data.tgz test_data.tgz my_preds.csv where my_preds.csv can be any filename you want it to be, but the training_data.tgz and test_data.tgz represent the names of the tarballs for the training and test data respectively, so you shouldn’t change those parameters unless you’ve renamed the tarballs for some reason. 3) That’s it! The script you ran will produce a CSV of predicted conversion probabilities for the events found in the test data, and the output of the command should also include the AUC and log loss for the baseline model, which is found in model.py. ### How to Customize the Model Used in train_and_score_model.py The model used in the entrypoint script train_and_score_model.py can be found in the get_model method of the module model.py. If you want to create a custom model to fit to the training data and to predict conversion probabilities for the events in the test data, the get_model method is where you want to place that custom model. As it stands, the default baseline model is a vanilla version of the sklearn.ensemble.GradientBoostingClassifier. It does not perform very well (it gets ~.5 AUC, which is just as good as guessing whether a given event in the test data eventually converted). Remember, the goal of the competition is to take user “events” (encoded as JSON objects in the training_data.tgz and test_data.tgz archives) and predict the probability that those JSON objects will “convert” (or will download the app detailed in the “offers” sections of the JSON events). The AUC is a measure of how well your model predicts the probability of conversion. Tips: • Remember that the state of the art for this kind of modeling is logistic regression, and specifically “follow the regularized leader” (FTRL) logistic regression (there are many papers written on the subject, and even some software libraries you should check out; a minimal sketch appears at the end of this document). • Also keep in mind that more recent events are more valuable from a modeling perspective than older events. Whether that means you don’t use older events, or you weight their contributions in the model less than more recent events, is up to you, but the importance of recent events is something you should take into consideration if you want to define a useful model for click-through rate (CTR). ### About the Other Parts of the Directory In general, the lib directory within the parent appthis directory should not be tampered with, as it contains files necessary to the successful execution of the entrypoint script train_and_score_model.py. Here is a quick rundown of what each of the files in lib is meant to contribute: data.py: This module contains a helper function that takes as an argument a tarball (.tgz file) with test or training data and emits events encoded as JSON objects. 
That function is called data_iterator. encoder.py: This module contains a class called VectorEncoder whose purpose is to take a JSON-encoded event emitted by the data_iterator function (or generator) defined above and to encode it as a numerical vector. The details of how it does this are present in that file. Essentially it treats numerical feature values as floats, and it hashes categorical feature values so that all the features end up being numeric and therefore more easily digested by sklearn models. feature_defs.py: The VectorEncoder class introduced above in the description of the encoder.py module uses these feature definitions to know which features are categorical and which features are numeric in nature. If you want to know more about the features contained in each event JSON object, consult this module. The features themselves have pretty informative names. Remember that feature selection will also be an important part of an effective modeling strategy. test_labels.txt: These are the “ground truth” labels of the events provided in the test_data.tgz tarball. scoring.py: This simple module uses the labels in test_labels.txt to determine the accuracy of your model’s predicted conversion probabilities by applying the AUC (or, more accurately, the AUROC) metric to the predictions CSV generated by train_and_score_model.py. ### The Rest of the Training Data We are still in the process of documenting the contents and structure of the rest of the training data, but for those intrepid people who want to dig around in those data, here’s the link to the S3 bucket. We will update this section soon with better documentation on how best to effectively access those data. ### All Validation Data You can find the validation data on which you should run your fitted model here. It comes as a tarball, and the tarball is ~750MB. Use the data to create a predictions file as a CSV and upload it on the leaderboard app here. The leaderboard app will take care of the rest. That’s it! Happy modeling!
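Following the tips above, here is a minimal, hedged sketch of what a customized get_model and a recency-weighting helper might look like. sklearn's SGDClassifier with logistic loss is used here as a readily available proxy for online logistic regression; it is not the FTRL-Proximal algorithm itself, and the recency_weights helper (and the assumption that each event carries a timestamp) is an invented example, not part of the competition kit.

```python
# Hedged sketch of a drop-in replacement for get_model() in model.py.
import numpy as np
from sklearn.linear_model import SGDClassifier

def get_model():
    # Logistic regression trained by SGD; a stand-in for FTRL, not FTRL itself.
    # (In older scikit-learn versions the loss is spelled "log" rather than "log_loss".)
    return SGDClassifier(loss="log_loss", penalty="l2", alpha=1e-5)

def recency_weights(timestamps, half_life=7 * 24 * 3600):
    # Exponential decay: an event half_life seconds older gets half the weight.
    # Assumes 'timestamps' is a NumPy array of Unix times; purely illustrative.
    age = timestamps.max() - timestamps
    return 0.5 ** (age / half_life)

# Hypothetical usage, assuming X_train/y_train/train_timestamps exist:
# model = get_model()
# model.fit(X_train, y_train, sample_weight=recency_weights(train_timestamps))
```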
2017-05-25 06:41:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.278021901845932, "perplexity": 1302.135579204678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608004.38/warc/CC-MAIN-20170525063740-20170525083740-00069.warc.gz"}
https://cardiomoon.github.io/interpretCI/
Package interpretCI is a package to estimate confidence intervals for a mean, a proportion, a mean difference for unpaired and paired samples, and a proportion difference. Draw estimation plots of the confidence intervals. Generate documents explaining the statistical result step by step.

# Installation

#install.packages("devtools")
devtools::install_github("cardiomoon/interpretCI")

# Main functions

Package interpretCI has three main functions.

### 1. meanCI(), propCI()

The main functions are meanCI() and propCI(). The meanCI() function estimates the confidence interval of a mean or a mean difference. The propCI() function estimates the confidence interval of a proportion or a difference in proportions. Both functions can take raw data or summary statistics.

# With raw data
meanCI(mtcars,mpg)

call: meanCI.data.frame(x = mtcars, mpg)
method: One sample t-test
alternative hypothesis: true mean is not equal to 0

Results
# A tibble: 1 × 7
  m        se     DF    lower    upper    t      p
  <chr>    <chr>  <chr> <chr>    <chr>    <chr>  <chr>
1 20.09062 1.0654 31    17.91768 22.26357 18.857 < 2.2e-16

# With raw data, perform a one-sample t-test
meanCI(mtcars,mpg,mu=23)

call: meanCI.data.frame(x = mtcars, mpg, mu = 23)
method: One sample t-test
alternative hypothesis: true mean is not equal to 23

Results
# A tibble: 1 × 7
  m        se     DF    lower    upper    t       p
  <chr>    <chr>  <chr> <chr>    <chr>    <chr>   <chr>
1 20.09062 1.0654 31    17.91768 22.26357 -2.7307 0.01033

The meanCI function estimates the confidence interval of a mean without raw data. For example, you can answer the following question.

meanCI(n=150,m=115,s=10,alpha=0.01)

call: meanCI.default(n = 150, m = 115, s = 10, alpha = 0.01)
method: One sample t-test
alternative hypothesis: true mean is not equal to 0

Results
# A tibble: 1 × 7
  m     se     DF    lower    upper    t      p
  <chr> <chr>  <chr> <chr>    <chr>    <chr>  <chr>
1 115   0.8165 149   112.8696 117.1304 140.85 < 2.2e-16

You can specify the confidence level with the alpha argument, the suggested true mean with the mu argument, and the alternative hypothesis with the alternative argument. You can see the full story in the vignette named “Confidence interval for a mean”. You can estimate a mean difference with or without raw data.

meanCI(iris,Petal.Width,Petal.Length)

call: meanCI.data.frame(x = iris, Petal.Width, Petal.Length)
method: Welch Two Sample t-test
alternative hypothesis: true unpaired differences in means is not equal to 0

Results
# A tibble: 1 × 6
  control     test         DF     CI                         t       p
  <chr>       <chr>        <chr>  <chr>                      <chr>   <chr>
1 Petal.Width Petal.Length 202.69 -2.56 [95CI -2.87; -2.25] -16.297 < 2.2e-16

x=meanCI(n1=100,n2=100,m1=200,s1=40,m2=190,s2=20,mu=7,alpha=0.05,alternative="greater")
x

call: meanCI.default(n1 = 100, n2 = 100, m1 = 200, s1 = 40, m2 = 190, s2 = 20, mu = 7, alpha = 0.05, alternative = "greater")
method: Welch Two Sample t-test
alternative hypothesis: true unpaired differences in means is greater than 7

Results
# A tibble: 1 × 6
  control test  DF     CI                       t       p
  <chr>   <chr> <chr>  <chr>                    <chr>   <chr>
1 x       y     145.59 10.00 [95CI 2.60; Inf]   0.67082 0.2517

You can see the full story in the vignette named “Hypothesis test for a difference between means”. Similarly, the propCI() function can estimate the confidence interval of a proportion or a difference in two proportions.

propCI(n=100,p=0.73,P=0.8,alpha=0.01)

$data
# A tibble: 1 × 1
  value
  <lgl>
1 NA

$result
  alpha   n df    p   P   se critical        ME     lower     upper
1  0.01 100 99 0.73 0.8 0.04 2.575829 0.1030332 0.6269668 0.8330332
                   CI     z     pvalue alternative
1 0.73 [99CI 0.63; 0.83] -1.75 0.08011831  two.sided

$call
propCI(n = 100, p = 0.73, P = 0.8, alpha = 0.01)
attr(,"measure")
[1] "prop"

### 2. plot()

The plot() function draws an estimation plot with the result of the meanCI() function. You can see many examples in the following sections.

### 3. interpret()

You can generate documents explaining the statistical result step by step. You can see several vignettes in this package, and they are made by the interpret() function. For example, you can answer the following question.

x=propCI(n1=150,n2=100,p1=0.71,p2=0.63,P=0,alternative="greater")
x

$data
# A tibble: 1 × 2
  x     y
  <lgl> <lgl>
1 NA    NA

$result
  alpha   p1   p2  n1  n2  DF   pd         se critical        ME      lower
1  0.05 0.71 0.63 150 100 248 0.08 0.06085776 1.644854 0.1001021 -0.0201021
      upper                    CI ppooled   sepooled        z     pvalue
1 0.1801021 0.08 [95CI -0.02; 0.18]  0.678 0.06032081 1.326242 0.09237975
  alternative
1     greater

$call
propCI(n1 = 150, n2 = 100, p1 = 0.71, p2 = 0.63, P = 0, alternative = "greater")
attr(,"measure")
[1] "propdiff"

The interpret() function generates the document explaining the statistical result step by step automatically and shows it in the RStudio viewer or the default browser. It is the same document as the vignette named “Hypothesis test for a proportion”.

interpret(x)

# Basic Usage

### 1. Confidence interval of mean

The meanCI function estimates the confidence interval of a mean. The first example estimates the confidence interval of a mean.

meanCI(mtcars,mpg)

call: meanCI.data.frame(x = mtcars, mpg)
method: One sample t-test
alternative hypothesis: true mean is not equal to 0

Results
# A tibble: 1 × 7
  m        se     DF    lower    upper    t      p
  <chr>    <chr>  <chr> <chr>    <chr>    <chr>  <chr>
1 20.09062 1.0654 31    17.91768 22.26357 18.857 < 2.2e-16

You can plot the confidence interval of the mean.

meanCI(mtcars,mpg) %>% plot()

You can see all data plotted. The mean and its 95% confidence interval (95% CI) are displayed as a point estimate and a vertical bar, respectively, on separate but aligned axes.

### 2. Mean difference in unpaired samples

The meanCI function can estimate the confidence interval of a mean difference. This example estimates the confidence interval of the mean difference between unpaired samples.

x=meanCI(iris,Sepal.Width,Sepal.Length)
x

call: meanCI.data.frame(x = iris, Sepal.Width, Sepal.Length)
method: Welch Two Sample t-test
alternative hypothesis: true unpaired differences in means is not equal to 0

Results
# A tibble: 1 × 6
  control     test         DF     CI                         t       p
  <chr>       <chr>        <chr>  <chr>                      <chr>   <chr>
1 Sepal.Width Sepal.Length 225.68 -2.79 [95CI -2.94; -2.64] -36.463 < 2.2e-16

The above result is consistent with t.test().

t.test(iris$Sepal.Width, iris$Sepal.Length)

Welch Two Sample t-test

data: iris$Sepal.Width and iris$Sepal.Length
t = -36.463, df = 225.68, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-2.93656 -2.63544
sample estimates:
mean of x mean of y
 3.057333  5.843333

You can get an estimation plot with plot().

plot(x,ref="test",side=FALSE)

An estimation plot has two features. 1. It presents all datapoints as a swarmplot, which orders each point to display the underlying distribution. 2. It presents the effect size as a 95% confidence interval on separate but aligned axes.

### 3. Mean differences in paired samples

You can draw an estimation plot for a paired sample.

data(Anorexia,package="PairedData")
meanCI(Anorexia,Post,Prior,paired=TRUE) %>% plot(ref="test",side=FALSE)

The above result is compatible with t.test().
t.test(Anorexia$Post,Anorexia$Prior,paired=TRUE)

Paired t-test

data: Anorexia$Post and Anorexia$Prior
t = 4.1849, df = 16, p-value = 0.0007003
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
3.58470 10.94471
sample estimates:
mean of the differences
7.264706

### 4. One-sided test

The Anorexia data in the PairedData package consist of 17 paired observations corresponding to the weights of girls before and after treatment for anorexia. Test the claim that the patients gain more than four pounds in weight after treatment. Use a 0.05 level of significance. Assume that the mean differences are approximately normally distributed.

t.test(Anorexia$Post,Anorexia$Prior,paired=TRUE,alternative="greater",mu=4)

Paired t-test

data: Anorexia$Post and Anorexia$Prior
t = 1.8807, df = 16, p-value = 0.03917
alternative hypothesis: true difference in means is greater than 4
95 percent confidence interval:
4.233975 Inf
sample estimates:
mean of the differences
7.264706

You can see that the 95% confidence interval of the paired mean difference is 4.23 to Inf, and the p value is 0.03917. The plot.meanCI() function visualizes the confidence interval. Note that the line of the true mean (mu) does not cross the confidence interval.

x=meanCI(Anorexia$Post,Anorexia$Prior,paired=TRUE,alternative="greater",mu=4)
plot(x,ref="test",side=FALSE)

You can get a document explaining the statistical result step by step with the following R code.

interpret(x)

The interpret() function generates the document automatically and shows it in the RStudio viewer. It is the same document as the vignette named “Hypothesis test for the difference between paired means”. Alternatively, you can see the document in the default browser.

interpret(x,viewer="browser")

### 5. Compare three or more groups

You can set the group variable (x) and test variable (y) to compare a variable among or between groups.

x=meanCI(iris,Species,Sepal.Length,mu=0)
x

call: meanCI.data.frame(x = iris, Species, Sepal.Length, mu = 0)
method: Welch Two Sample t-test
alternative hypothesis: true unpaired differences in means is not equal to 0

Results
# A tibble: 2 × 6
  control test       DF     CI                         t       p
  <chr>   <chr>      <chr>  <chr>                      <chr>   <chr>
1 setosa  versicolor 86.538 -0.93 [95CI -1.11; -0.75] -10.521 < 2.2e-16
2 setosa  virginica  76.516 -1.58 [95CI -1.79; -1.38] -15.386 < 2.2e-16

plot(x)

Alternatively, if you do not specify the variables, the meanCI function selects all numeric variables.

meanCI(iris) %>% plot()

You can select variables of interest using dplyr::select.

iris %>% select(ends_with("Length")) %>% meanCI() %>% plot()

### 6. Multiple pairs

You can compare multiple pairs in an estimation plot. The data anscombe2 in the PairedData package consists of 4 sets of paired samples.
### 6. Multiple pairs

You can compare multiple pairs in one estimation plot. The anscombe2 data in the PairedData package consist of 4 sets of paired samples.

```r
data(anscombe2,package="PairedData")
anscombe2
```

```
       X1     Y1 X2  Y2     X3     Y3    X4     Y4 Subject
1   8.885 10.135  8 -35  3.375  6.625 0.540 -0.540     S01
2  14.380 11.940  7 -30 -0.300  2.300 1.980  0.020     S02
3   8.015  6.025 17 -25 10.025 11.975 1.100  0.900     S03
4   5.835  3.045 15 -20  2.350  3.650 3.420  0.580     S04
5   5.470  1.870 12 -15  7.675  8.325 2.540  1.460     S05
6  12.060 12.640  5 -10  9.000  9.000 1.655  2.345     S06
7  11.720  9.660  6  -5  7.325  6.675 4.865  1.135     S07
8  10.315  9.265 19   0  6.650  5.350 3.980  2.020     S08
9   5.065  6.155 16   5  4.975  3.025 3.100  2.900     S09
10  8.235 10.785 11  10  3.300  0.700 2.215  3.785     S10
11 15.080 12.360 18  15 11.625  8.375 6.305  1.695     S11
12 13.485 10.175  9  20 17.765  8.235 5.420  2.580     S12
13 11.300 12.380 14  25 17.090  6.910 4.540  3.460     S13
14  9.820  9.660 13  30 19.410  8.590 3.655  4.345     S14
15  9.565  6.955 10  35 20.735  9.265 2.775  5.225     S15
```

You can draw multiple pairs by setting the idx argument with a list.

```r
meanCI(anscombe2,idx=list(c("X1","Y1"),c("X4","Y4"),c("X3","Y3"),c("X2","Y2")),paired=TRUE,mu=0) %>% plot()
```

```r
x=meanCI(anscombe2,idx=list(c("X1","X2","X3","X4"),c("Y1","Y2","Y3","Y4")),paired=TRUE,mu=0)
plot(x)
```

You can also draw multiple pairs with long-form data.

```r
library(tidyr)
longdf=pivot_longer(anscombe2,cols=X1:Y4)
x=meanCI(longdf,name,value,idx=list(c("X1","X2","X3","X4"),c("Y1","Y2","Y3","Y4")),paired=TRUE,mu=0)
plot(x)
```

### 7. Split the data with the group argument

You can split the data with the group argument and draw an estimation plot with a categorical variable (x) and a continuous variable (y).

```r
meanCI(acs,DM,age,sex) %>% plot()
```

You can select one grouping variable and multiple continuous variables of interest and compare the variables within groups.

```r
acs %>% select(sex,TC,TG,HDLC) %>% meanCI(group=sex) %>% plot()
```

Alternatively, you can select one grouping variable and multiple continuous variables of interest and compare each variable between/among groups.

```r
acs %>% select(sex,TC,TG,HDLC) %>% meanCI(sex,mu=0) %>% plot()
```
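Putting the pieces together, a typical workflow is estimate, plot, explain. The sketch below assumes the package providing meanCI() and the acs data are loaded; the choice of variables is illustrative only.

```r
# Minimal end-to-end sketch: estimate a between-group mean difference,
# draw the estimation plot, then generate the step-by-step document.
x <- meanCI(acs, sex, age, mu = 0)   # mean age compared between sexes
plot(x, ref = "test", side = FALSE)  # estimation plot of the difference
interpret(x)                         # explanation in the RStudio viewer
```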
Vaguery : to-write-about   1710 « earlier Emergent Tool Use from Multi-Agent Interaction We’ve observed agents discovering progressively more complex tool use while playing a simple game of hide-and-seek. Through training in our new simulated hide-and-seek environment, agents build a series of six distinct strategies and counterstrategies, some of which we did not know our environment supported. The self-supervised emergent complexity in this simple environment further suggests that multi-agent co-adaptation may one day produce extremely complex and intelligent behavior. agent-based  artificial-life  machine-learning  collective-behavior  looking-to-see  rather-interesting  to-write-about  to-simulate  consider:a-sufficiently-complex-environment 3 days ago by Vaguery Hugging Face GPT With Clojure - Squid's Blog You can now interop with ANY python library. I know. It’s overwhelming. It took a bit for me to come to grips with it too. Let’s take an example of something that I’ve always wanted to do and have struggled with mightly finding a way to do it in Clojure: I want to use the latest cutting edge GPT2 code out there to generate text. Right now, that library is Hugging Face Transformers. Get ready. We will wrap that sweet hugging face code in Clojure parens! machine-learning  Clojure  interop  natural-language-processing  rather-interesting  library  to-understand  to-follow-along  to-write-about  consider:scraping-for-data 3 days ago by Vaguery The King vs. Pawn Game of UI Design – A List Apart If you want to improve your UI design skills, have you tried looking at chess? I know it sounds contrived, but hear me out. I’m going to take a concept from chess and use it to build a toolkit of UI design strategies. By the end, we’ll have covered color, typography, lighting and shadows, and more. graphic-design  the-mangle-in-practice  pedagogy  rather-good  isolation-of-concepts  to-write-about  consider:testing  consider:unit-tests-as-lessons  consider:Oulipo 4 days ago by Vaguery Regularization Improves the Robustness of Learned Sequence-to-Expression Models | bioRxiv Understanding of the gene regulatory activity of enhancers is a major problem in regulatory biology. The nascent field of sequence-to-expression modelling seeks to create quantitative models of gene expression based on regulatory DNA (cis) and cellular environmental (trans) contexts. All quantitative models are defined partially by numerical parameters, and it is common to fit these parameters to data provided by existing experimental results. However, the relative paucity of experimental data appropriate for this task, and lacunae in our knowledge of all components of the systems, results in problems often being under-specified, which in turn may lead to a situation where wildly different model parameterizations perform similarly well on training data. It may also lead to models being fit to the idiosyncrasies of the training data, without representing the more general process (overfitting). In other contexts where parameter-fitting is performed, it is common to apply regularization to reduce overfitting. We systematically evaluated the efficacy of three types of regularization in improving the generalizability of trained sequence-to-expression models. The evaluation was performed in two types of cross-validation experiments: one training on D. melanogaster data and predicting on orthologous enhancers from related species, and the other cross-validating between four D. 
melanogaster neurogenic ectoderm enhancers, which are thought to be under control of the same transcription factors. We show that training with a combination of noise-injection, L1, and L2 regularization can drastically reduce overfitting and improve the generalizability of learned sequence-to-expression models. These results suggest that it may be possible to mitigate the tendency of sequence-to-expression models to overfit available data, thus improving predictive power and potentially resulting in models that provide better insight into underlying biological processes. bioinformatics  systems-biology  nonlinear-dynamics  machine-learning  regularization  statistics  numerical-methods  heuristics  to-write-about  to-simulate  consider:symbolic-regression  consider:robustness 4 days ago by Vaguery What Proof Is Best? | A great illustration of this “Let a hundred proofs bloom” point of view is provided by an article by Stan Wagon called “Fourteen Proofs of a Result About Tiling a Rectangle”. Here’s the result his title refers to (a puzzle posed and solved by Nicolaas de Bruijn): Whenever a rectangle can be cut up into smaller rectangles each of which has at least one integer side, then the big rectangle has at least one integer side too. (Here “at least one integer side” is tantamount to “at least two integer sides”, since the opposite sides of a rectangle always have the same length.) mathematics  philosophy  solution-spaces  diversity  proof  to-write-about  to-simulate  consider:surprise 4 days ago by Vaguery [1710.04554] Lattice point visibility on generalized lines of sight For a fixed b∈ℕ={1,2,3,…} we say that a point (r,s) in the integer lattice ℤ×ℤ is b-visible from the origin if it lies on the graph of a power function f(x)=axb with a∈ℚ and no other integer lattice point lies on this curve (i.e., line of sight) between (0,0) and (r,s). We prove that the proportion of b-visible integer lattice points is given by 1/ζ(b+1), where ζ(s) denotes the Riemann zeta function. We also show that even though the proportion of b-visible lattice points approaches 1 as b approaches infinity, there exist arbitrarily large rectangular arrays of b-invisible lattice points for any fixed b. This work specialized to b=1 recovers original results from the classical lattice point visibility setting where the lines of sight are given by linear functions with rational slope through the origin. number-theory  have-considered  rather-interesting  to-write-about  to-simulate  consider:visualization  consider:agents-who-see-this-way 6 days ago by Vaguery [1807.06414] Combining a Context Aware Neural Network with a Denoising Autoencoder for Measuring String Similarities Measuring similarities between strings is central for many established and fast growing research areas including information retrieval, biology, and natural language processing. The traditional approach for string similarity measurements is to define a metric over a word space that quantifies and sums up the differences between characters in two strings. The state-of-the-art in the area has, surprisingly, not evolved much during the last few decades. The majority of the metrics are based on a simple comparison between character and character distributions without consideration for the context of the words. This paper proposes a string metric that encompasses similarities between strings based on (1) the character similarities between the words including. Non-Standard and standard spellings of the same words, and (2) the context of the words. 
Our proposal is a neural network composed of a denoising autoencoder and what we call a context encoder specifically designed to find similarities between the words based on their context. The experimental results show that the resulting metrics succeeds in 85.4\% of the cases in finding the correct version of a non-standard spelling among the closest words, compared to 63.2\% with the established Normalised-Levenshtein distance. Besides, we show that words used in similar context are with our approach calculated to be similar than words with different contexts, which is a desirable property missing in established string metrics. feature-construction  machine-learning  clustering  natural-language-processing  rather-interesting  to-simulate  to-write-about  consider:feature-discovery 6 days ago by Vaguery [1812.08434] Graph Neural Networks: A Review of Methods and Applications Lots of learning tasks require dealing with graph data which contains rich relation information among elements. Modeling physics system, learning molecular fingerprints, predicting protein interface, and classifying diseases require a model to learn from graph inputs. In other domains such as learning from non-structural data like texts and images, reasoning on extracted structures, like the dependency tree of sentences and the scene graph of images, is an important research topic which also needs graph reasoning models. Graph neural networks (GNNs) are connectionist models that capture the dependence of graphs via message passing between the nodes of graphs. Unlike standard neural networks, graph neural networks retain a state that can represent information from its neighborhood with arbitrary depth. Although the primitive GNNs have been found difficult to train for a fixed point, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful learning with them. In recent years, systems based on variants of graph neural networks such as graph convolutional network (GCN), graph attention network (GAT), gated graph neural network (GGNN) have demonstrated ground-breaking performance on many tasks mentioned above. In this survey, we provide a detailed review over existing graph neural network models, systematically categorize the applications, and propose four open problems for future research. machine-learning  representation  review  rather-interesting  to-write-about  to-simulate  consider:genetic-programming  consider:functional-transformations 6 days ago by Vaguery [1809.02836] Context-Free Transductions with Neural Stacks This paper analyzes the behavior of stack-augmented recurrent neural network (RNN) models. Due to the architectural similarity between stack RNNs and pushdown transducers, we train stack RNN models on a number of tasks, including string reversal, context-free language modelling, and cumulative XOR evaluation. Examining the behavior of our networks, we show that stack-augmented RNNs can discover intuitive stack-based strategies for solving our tasks. However, stack RNNs are more difficult to train than classical architectures such as LSTMs. Rather than employ stack-based strategies, more complex networks often find approximate solutions by using the stack as unstructured memory. 
neural-networks  artificial-life  representation  time-series  automata  to-write-about  to-simulate  unconventional-computing 6 days ago by Vaguery [1906.01615] Sequential Neural Networks as Automata This work attempts to explain the types of computation that neural networks can perform by relating them to automata. We first define what it means for a real-time network with bounded precision to accept a language. A measure of network memory follows from this definition. We then characterize the classes of languages acceptable by various recurrent networks, attention, and convolutional networks. We find that LSTMs function like counter machines and relate convolutional networks to the subregular hierarchy. Overall, this work attempts to increase our understanding and ability to interpret neural networks through the lens of theory. These theoretical insights help explain neural computation, as well as the relationship between neural networks and natural language grammar. neural-networks  rather-interesting  update-dynamics  representation  via:cshalizi  to-read  to-write-about  to-simulate  automata  unconventional-computation  but-is-it-really? 6 days ago by Vaguery Puzzle 864. 10958, the only hole... Brazilian mathematician Inder J. Taneja has found (Jan 2014) a way to render every number from 1 to 11,111 by starting with either of these ordered strings. 1234567869 (increasing 1->9) and 987654321 (decreasing 9->1 applying any of the following: a) arithmetical operations permitted: addition, subtraction, multiplication, division, and exponentiation. b) string operation permitted: concatenation. c) auxiliary symbols permitted: brackets "(" and ")". puzzles  mathematical-recreations  arithmetic  constraint-satisfaction  to-write-about  nudge-targets  consider:looking-to-see  to-simulate  consider:generalizations 8 days ago by Vaguery Robustness of the two-sample t-test It is fairly well known that the t-test is robust to departures from a normal distribution, as long as the actual distribution is symmetric. That is, the test works more or less as advertised as long as the distribution is symmetric like a normal distribution, but it may not work as expected if the distribution is asymmetric. This post will explore the robustness of the t-test via simulation. How far can you be from a normal distribution and still do OK? Can you have any distribution as long as it’s symmetric? Does a little asymmetry ruin everything? If something does go wrong, how does it go wrong? statistics  looking-to-see  rather-interesting  algorithms  robustness  to-do  to-write-about  consider:animation 10 days ago by Vaguery Running Cargo - Futility Closet This passage is from Rudyard Kipling’s 1910 story “Brother Square-Toes.” What’s notable about the bolded section? feature-construction  rather-interesting  digital-humanities  to-write-about  consider:looking-to-see 10 days ago by Vaguery [PDF] Extended circularity: a new puzzle for extended cognition | Semantic Scholar Mainstream epistemology has typically taken for granted a traditional picture of the metaphysics of mind, according to which cognitive processes (e.g. memory storage and retrieval) play out entirely within the bounds of the skull and skin. But this simple ‘intracranial’ picture is falling in- creasingly out of step with contemporary thinking in the philosophy of mind and cognitive science. Likewise, though, proponents of active exter- nalist approaches to the mind—e.g. 
the hypothesis of extended cognitition (HEC)—have proceeded by and large without asking what epistemological ramifications should arise once cognition is understood as criss-crossing the bounds of brain and world. This paper aims to motivate a puzzle that arises only once these two strands of thinking are brought in contact with one another. In particular, we want to first highlight a kind of con- dition of epistemological adequacy that should be accepted by proponents of extended cognition; once this condition is motivated, the remainder of the paper demonstrates how attempts to satisfy this condition seem to inevitably devolve into a novel kind of epistemic circularity. At the end of the day, proponents of extended cognition have a novel epistemological puzzle on their hands. the-mangle-in-practice  psychology  cognition  individuation  philosophy-of-mind  to-write-about  consider:not-worrying-about-it 10 days ago by Vaguery [1305.1958] The Dynamically Extended Mind -- A Minimal Modeling Case Study The extended mind hypothesis has stimulated much interest in cognitive science. However, its core claim, i.e. that the process of cognition can extend beyond the brain via the body and into the environment, has been heavily criticized. A prominent critique of this claim holds that when some part of the world is coupled to a cognitive system this does not necessarily entail that the part is also constitutive of that cognitive system. This critique is known as the "coupling-constitution fallacy". In this paper we respond to this reductionist challenge by using an evolutionary robotics approach to create a minimal model of two acoustically coupled agents. We demonstrate how the interaction process as a whole has properties that cannot be reduced to the contributions of the isolated agents. We also show that the neural dynamics of the coupled agents has formal properties that are inherently impossible for those neural networks in isolation. By keeping the complexity of the model to an absolute minimum, we are able to illustrate how the coupling-constitution fallacy is in fact based on an inadequate understanding of the constitutive role of nonlinear interactions in dynamical systems theory. the-mangle-in-practice  rather-interesting  philosophy  philosophy-of-science  to-write-about  to-wander-its-citations 10 days ago by Vaguery The Stanford Literary Lab’s Narrative | Public Books Experiment is presented here not just as a test of reliable knowledge but as a style of intellectual growth: “By frustrating our expectations, failed experiments ‘estrange’ our natural habits of thought, offering a chance to transcend them.” At moments, the point of experiment seems to become entirely aesthetic. In the book’s introduction, Moretti admits that he set out to write “a scientific essay, composed like a Mahler symphony: discordant registers that barely manage to coexist; a forward movement endlessly diverted; the easiest of melodies, followed by leaps into the unknown.” This account of the book’s aesthetic achievement is candid, immodest, and accurate. The essays within are unified by a deliberately wandering structure, which keeps its distance both from scientists’ predictable sequences (methods → results → conclusions), and from the thesis-driven template that prevails in the humanities (counter-intuitive claim → evidence → I was right after all). Instead, these essays become stories of progressive disorientation, written in the first-person plural, and arriving at theses that were only dimly foreshadowed. 
digital-humanities  literary-criticism  rather-interesting  this-quote  to-write-about  consider:the-creative-system 10 days ago by Vaguery DR. HERBERT H. TSANG - http://www.herberttsang.org RNA design algorithm takes an RNA secondary structure description as input and then try to identify an RNA strand that folds into this function-specific target structure. With new advances in biotechnology and synthetic biology, a reliable RNA design algorithm can be crucial steps to create new biochemical components. Our lab is interested in employing various computational intelligence techniques to propose the new paradigm to help with the RNA design problem. Recently, we have designed an algorithm SIMARD, which is based on the simulated annealing paradigm. structural-biology  polymer-folding  biochemistry  biophysics  simulation  metaheuristics  energy-landscapes  rather-interesting  to-write-about  to-simulate  to-visualize 14 days ago by Vaguery Short How-To guides — GROMACS 2019 documentation A number of short guides are presented here to help users getting started with simulations. More detailed tutorials are available for example at the http://www.mdtutorials.com/. structural-biology  simulation  software  open-source  rather-interesting  to-simulate  to-write-about  consider:looking-to-see 14 days ago by Vaguery [1902.07277] A systematic classification of knotoids on the plane and on the sphere In this paper we generate and systematically classify all prime planar knotoids with up to 5 crossings. We also extend the existing list of knotoids in S2 and add all knotoids with 6 crossings. knot-theory  combinatorics  enumeration  rather-interesting  stamp-collecting  to-understand  to-write-about  to-simulate  consider:classification  consider:representation 14 days ago by Vaguery [1803.07114] Slipknotting in Random Diagrams The presence of slipknots in configurations of proteins and DNA has been shown to affect their functionality, or alter it entirely. Historically, polymers are modeled as polygonal chains in space. As an alternative to space curves, we provide a framework for working with subknots inside of knot diagrams via knotoid diagrams. We prove using a pattern theorem for knot diagrams that not only are almost all knot diagrams slipknotted, almost all unknot diagrams are slipknotted. This proves in the random diagram model a conjecture yet unproven in random space curve models. We also discuss conjectures on the enumeration of knotoid diagrams. knot-theory  combinatorics  sampling  random-walks  structural-biology  rather-interesting  theoretical-biology  to-simulate  to-write-about  consider:representation 14 days ago by Vaguery [1810.07970] Inglenook Shunting Puzzles An inglenook puzzle is a classic shunting (switching) puzzle often found on model railway layouts. A collection of wagons sits in a fan of sidings with a limited length headshunt (lead track). The aim of the puzzle is to rearrange the wagons into a desired order (often a randomly chosen order). This article answers the question: When can you be sure this can always be done? The problem of finding a solution in a minimum number of moves is also addressed. puzzles  mathematical-recreations  combinatorics  rather-interesting  to-write-about  to-simulate  consider:genetic-programming  consider:data-structures 14 days ago by Vaguery [1803.09607] Murder at the Asylum I describe a puzzle I wrote for the 2018 MIT Mystery Hunt which introduced new types of people in logic puzzles. 
I discuss the puzzle itself, the solution, and the mathematics behind it. puzzles  logic  mathematical-recreations  rather-good  constraint-satisfaction  to-write-about  to-simulate  consider:sampling 14 days ago by Vaguery [1804.03311] A Markov Chain Sampler for Plane Curves A plane curve is a knot diagram in which each crossing is replaced by a 4-valent vertex, and so are dual to a subset of planar quadrangulations. The aim of this paper is to introduce a new tool for sampling diagrams via sampling of plane curves. At present the most efficient method for sampling diagrams is rejection sampling, however that method is inefficient at even modest sizes. We introduce Markov chains that sample from the space of plane curves using local moves based on Reidemeister moves. By then mapping vertices on those curves to crossings we produce random knot diagrams. Combining this chain with flat histogram methods we achieve an efficient sampler of plane curves and knot diagrams. By analysing data from this chain we are able to estimate the number of knot diagrams of a given size and also compute knotting probabilities and so investigate their asymptotic behaviour. knot-theory  graph-theory  rather-interesting  combinatorics  enumeration  sampling  to-simulate  to-understand  to-write-about 14 days ago by Vaguery [math/0509478] Simultaneous Diagonal Flips in Plane Triangulations Simultaneous diagonal flips in plane triangulations are investigated. It is proved that every n-vertex triangulation with at least six vertices has a simultaneous flip into a 4-connected triangulation, and that it can be computed in O(n) time. It follows that every triangulation has a simultaneous flip into a Hamiltonian triangulation. This result is used to prove that for any two n-vertex triangulations, there exists a sequence of O(logn) simultaneous flips to transform one into the other. The total number of edges flipped in this sequence is O(n). The maximum size of a simultaneous flip is then studied. It is proved that every triangulation has a simultaneous flip of at least 1/3(n−2) edges. On the other hand, every simultaneous flip has at most n−2 edges, and there exist triangulations with a maximum simultaneous flip of 6/7(n−2) edges. graph-theory  combinatorics  rewriting-systems  rather-interesting  proof  to-simulate  to-write-about  consider:constraint-satisfaction 15 days ago by Vaguery [2001.09212] PCGRL: Procedural Content Generation via Reinforcement Learning We investigate how reinforcement learning can be used to train level-designing agents. This represents a new approach to procedural content generation in games, where level design is framed as a game, and the content generator itself is learned. By seeing the design problem as a sequential task, we can use reinforcement learning to learn how to take the next action so that the expected final level quality is maximized. This approach can be used when few or no examples exist to train from, and the trained generator is very fast. We investigate three different ways of transforming two-dimensional level design problems into Markov decision processes and apply these to three game environments. machine-learning  transfer-learning  generative-models  content-generation  rather-interesting  to-simulate  to-write-about  consider:feature-discovery 17 days ago by Vaguery [1808.01984] Time-Dependent Shortest Path Queries Among Growing Discs The determination of time-dependent collision-free shortest paths has received a fair amount of attention. 
Here, we study the problem of computing a time-dependent shortest path among growing discs which has been previously studied for the instance where the departure times are fixed. We address a more general setting: For two given points s and d, we wish to determine the function (t) which is the minimum arrival time at d for any departure time t at s. We present a (1+ϵ)-approximation algorithm for computing (t). As part of preprocessing, we execute O(1ϵlog(rc)) shortest path computations for fixed departure times, where r is the maximum speed of the robot and c is the minimum growth rate of the discs. For any query departure time t≥0 from s, we can approximate the minimum arrival time at the destination in O(log(1ϵ)+loglog(rc)) time, within a factor of 1+ϵ of optimal. Since we treat the shortest path computations as black-box functions, for different settings of growing discs, we can plug-in different shortest path algorithms. Thus, the exact time complexity of our algorithm is determined by the running time of the shortest path computations. computational-complexity  computational-geometry  planning  constraint-satisfaction  optimization  rather-interesting  to-understand  algorithms  to-simulate  to-write-about  consider:robustness 17 days ago by Vaguery [2001.11709] Gaussian Random Embeddings of Multigraphs This paper generalizes the Gaussian random walk and Gaussian random polygon models for linear and ring polymers to polymer topologies specified by an arbitrary multigraph $G$. Probability distributions of monomer positions and edge displacements are given explicitly and the spectrum of the graph Laplacian of $G$ is shown to predict the geometry of the configurations. This provides a new perspective on the James-Guth-Flory theory of phantom elastic networks. The model is based on linear algebra motivated by ideas from homology and cohomology theory. It provides a robust theoretical foundation for more detailed models of topological polymers. structural-biology  probability-theory  random-walks  rather-interesting  to-simulate  constraint-satisfaction  polymers  to-write-about 17 days ago by Vaguery [1809.09979] Approximability of Covering Cells with Line Segments In COCOA 2015, Korman et al. studied the following geometric covering problem: given a set S of n line segments in the plane, find a minimum number of line segments such that every cell in the arrangement of the line segments is covered. Here, a line segment s covers a cell f if s is incident to f. The problem was shown to be NP-hard, even if the line segments in S are axis-parallel, and it remains NP-hard when the goal is cover the "rectangular" cells (i.e., cells that are defined by exactly four axis-parallel line segments). In this paper, we consider the approximability of the problem. We first give a PTAS for the problem when the line segments in S are in any orientation, but we can only select the covering line segments from one orientation. Then, we show that when the goal is to cover the rectangular cells using line segments from both horizontal and vertical line segments, then the problem is APX-hard. We also consider the parameterized complexity of the problem and prove that the problem is FPT when parameterized by the size of an optimal solution. Our FPT algorithm works when the line segments in S have two orientations and the goal is to cover all cells, complementing that of Korman et al. in which the goal is to cover the "rectangular" cells. 
covering-problems  computational-complexity  computational-geometry  rather-interesting  optimization  to-simulate  to-write-about 22 days ago by Vaguery [1809.10737] Plane and Planarity Thresholds for Random Geometric Graphs A random geometric graph, G(n,r), is formed by choosing n points independently and uniformly at random in a unit square; two points are connected by a straight-line edge if they are at Euclidean distance at most r. For a given constant k, we show that n−k2k−2 is a distance threshold function for G(n,r) to have a connected subgraph on k points. Based on this, we show that n−2/3 is a distance threshold for G(n,r) to be plane, and n−5/8 is a distance threshold to be planar. We also investigate distance thresholds for G(n,r) to have a non-crossing edge, a clique of a given size, and an independent set of a given size. graph-theory  graph-layout  sampling  probability-theory  looking-to-see  to-write-about  to-simulate  consider:planarity  random-graphs  geometric-graphs 22 days ago by Vaguery [1810.09232] On the Minimum Consistent Subset Problem Let P be a set of n colored points in the plane. Introduced by Hart (1968), a consistent subset of P, is a set S⊆P such that for every point p in P∖S, the closest point of p in S has the same color as p. The consistent subset problem is to find a consistent subset of P with minimum cardinality. This problem is known to be NP-complete even for two-colored point sets. Since the initial presentation of this problem, aside from the hardness results, there has not been a significant progress from the algorithmic point of view. In this paper we present the following algorithmic results: 1. The first subexponential-time algorithm for the consistent subset problem. 2. An O(nlogn)-time algorithm that finds a consistent subset of size two in two-colored point sets (if such a subset exists). Towards our proof of this running time we present a deterministic O(nlogn)-time algorithm for computing a variant of the compact Voronoi diagram; this improves the previously claimed expected running time. 3. An O(nlog2n)-time algorithm that finds a minimum consistent subset in two-colored point sets where one color class contains exactly one point; this improves the previous best known O(n2) running time which is due to Wilfong (SoCG 1991). 4. An O(n)-time algorithm for the consistent subset problem on collinear points; this improves the previous best known O(n2) running time. 5. A non-trivial O(n6)-time dynamic programming algorithm for the consistent subset problem on points arranged on two parallel lines. To obtain these results, we combine tools from planar separators, additively-weighted Voronoi diagrams with respect to convex distance functions, point location in farthest-point Voronoi diagrams, range trees, paraboloid lifting, minimum covering of a circle with arcs, and several geometric transformations. computational-complexity  computational-geometry  algorithms  combinatorics  plane-geometry  to-simulate  to-write-about  consider:movement  consider:heuristics 22 days ago by Vaguery [1812.05125] On Graphs whose Eternal Vertex Cover Number and Vertex Cover Number Coincide The eternal vertex cover problem is a variant of the classical vertex cover problem where a set of guards on the vertices have to be dynamically reconfigured from one vertex cover to another in every round of an attacker-defender game. 
The minimum number of guards required to protect a graph G from an infinite sequence of attacks is the eternal vertex cover number of G, denoted by evc(G). It is known that, given a graph G and an integer k, checking whether evc(G)≤k is NP-hard. However, it is unknown whether this problem is in NP or not. Precise value of eternal vertex cover number is known only for certain very basic graph classes like trees, cycles and grids. For any graph G, it is known that mvc(G)≤evc(G)≤2mvc(G), where mvc(G) is the minimum vertex cover number of G. Though a characterization is known for graphs for which evc(G)=2mvc(G), a characterization of graphs for which evc(G)=mvc(G) remained open. Here, we achieve such a characterization for a class of graphs that includes chordal graphs and internally triangulated planar graphs. For some graph classes including biconnected chordal graphs, our characterization leads to a polynomial time algorithm to precisely determine evc(G) and to determine a safe strategy of guard movement in each round of the game with evc(G) guards. The characterization also leads to NP-completeness results for the eternal vertex cover problem for some graph classes including biconnected internally triangulated planar graphs. To the best of our knowledge, these are the first NP-completeness results known for the problem for any graph class. graph-theory  feature-construction  game-theory  rather-interesting  combinatorics  to-simulate  to-write-about  consider:rediscovery  consider:robustness 22 days ago by Vaguery [1905.00790] Minimum Ply Covering of Points with Disks and Squares Following the seminal work of Erlebach and van Leeuwen in SODA 2008, we introduce the minimum ply covering problem. Given a set P of points and a set S of geometric objects, both in the plane, our goal is to find a subset S′ of S that covers all points of P while minimizing the maximum number of objects covering any point in the plane (not only points of P). For objects that are unit squares and unit disks, this problem is NP-hard and cannot be approximated by a ratio smaller than 2. We present 2-approximation algorithms for this problem with respect to unit squares and unit disks. Our algorithms run in polynomial time when the optimum objective value is bounded by a constant. Motivated by channel-assignment in wireless networks, we consider a variant of the problem where the selected unit disks must be 3-colorable, i.e., colored by three colors such that all disks of the same color are pairwise disjoint. We present a polynomial-time algorithm that achieves a 2-approximate solution, i.e., a solution that is 6-colorable. We also study the weighted version of the problem in dimension one, where P and S are points and weighted intervals on a line, respectively. We present an algorithm that solves this problem in O(n+m+M)-time where n is the number of points, m is the number of intervals, and M is the number of pairs of overlapping intervals. This repairs a solution claimed by Nandy, Pandit, and Roy in CCCG 2017. computational-geometry  operations-research  rather-interesting  optimization  constraint-satisfaction  to-write-about  to-simulate  consider:looking-to-see  consider:performance-measures 22 days ago by Vaguery [1905.00791] Flip Distance to some Plane Configurations We study an old geometric optimization problem in the plane. Given a perfect matching M on a set of n points in the plane, we can transform it to a non-crossing perfect matching by a finite sequence of flip operations. 
The flip operation removes two crossing edges from M and adds two non-crossing edges. Let f(M) and F(M) denote the minimum and maximum lengths of a flip sequence on M, respectively. It has been proved by Bonnet and Miltzow (2016) that f(M)=O(n2) and by van Leeuwen and Schoone (1980) that F(M)=O(n3). We prove that f(M)=O(nΔ) where Δ is the spread of the point set, which is defined as the ratio between the longest and the shortest pairwise distances. This improves the previous bound if the point set has sublinear spread. For a matching M on n points in convex position we prove that f(M)=n/2−1 and F(M)=(n/22); these bounds are tight. Any bound on F(⋅) carries over to the bichromatic setting, while this is not necessarily true for f(⋅). Let M′ be a bichromatic matching. The best known upper bound for f(M′) is the same as for F(M′), which is essentially O(n3). We prove that f(M′)≤n−2 for points in convex position, and f(M′)=O(n2) for semi-collinear points. The flip operation can also be defined on spanning trees. For a spanning tree T on a convex point set we show that f(T)=O(nlogn). computational-geometry  graph-layout  graph-theory  heuristics  rather-interesting  to-simulate  to-write-about  consider:polygon-sampling 22 days ago by Vaguery [1905.07124] Variations of largest rectangle recognition amidst a bichromatic point set Classical separability problem involving multi-color point sets is an important area of study in computational geometry. In this paper, we study different separability problems for bichromatic point set P=P_r\cup P_b on a plane, where Pr and Pb represent the set of n red points and m blue points respectively, and the objective is to compute a monochromatic object of the desired type and of maximum size. We propose in-place algorithms for computing (i) an arbitrarily oriented monochromatic rectangle of maximum size in R^2, (ii) an axis-parallel monochromatic cuboid of maximum size in R^3. The time complexities of the algorithms for problems (i) and (ii) are O(m(m+n)(m\sqrt{n}+m\log m+n \log n)) and O(m^3\sqrt{n}+m^2n\log n), respectively. As a prerequisite, we propose an in-place construction of the classic data structure the k-d tree, which was originally invented by J. L. Bentley in 1975. Our in-place variant of the k-d tree for a set of n points in R^k supports both orthogonal range reporting and counting query using O(1) extra workspace, and these query time complexities are the same as the classical complexities, i.e., O(n^{1-1/k}+\mu) and O(n^{1-1/k}), respectively, where \mu is the output size of the reporting query. The construction time of this data structure is O(n\log n). Both the construction and query algorithms are non-recursive in nature that do not need O(\log n) size recursion stack compared to the previously known construction algorithm for in-place k-d tree and query in it. We believe that this result is of independent interest. We also propose an algorithm for the problem of computing an arbitrarily oriented rectangle of maximum weight among a point set P=P_r \cup P_b, where each point in P_b (resp. P_r) is associated with a negative (resp. positive) real-valued weight that runs in O(m^2(n+m)\log(n+m)) time using O(n) extra space. computational-complexity  computational-geometry  optimization  sorting  algorithms  heuristics  to-simulate  to-write-about  consider:variants 22 days ago by Vaguery [1906.11948] Packing Boundary-Anchored Rectangles and Squares Consider a set P of n points on the boundary of an axis-aligned square Q. 
We study the boundary-anchored packing problem on P in which the goal is to find a set of interior-disjoint axis-aligned rectangles in Q such that each rectangle is anchored (has a corner at some point in P), each point in P is used to anchor at most one rectangle, and the total area of the rectangles is maximized. Here, a rectangle is anchored at a point p in P if one of its corners coincides with p. In this paper, we show how to solve this problem in time linear in n, provided that the points of P are given in sorted order along the boundary of Q. We also consider the problem for anchoring squares and give an O(n4)-time algorithm when the points in P lie on two opposite sides of Q. packing  operations-research  computational-geometry  computational-complexity  algorithms  optimization  constraint-satisfaction  rather-interesting  to-write-about  to-simulate  consider:feature-discovery  consider:heuristics 22 days ago by Vaguery [1907.01617] A Short Proof of the Toughness of Delaunay Triangulations We present a self-contained short proof of the seminal result of Dillencourt (SoCG 1987 and DCG 1990) that Delaunay triangulations, of planar point sets in general position, are 1-tough. An important implication of this result is that Delaunay triangulations have perfect matchings. Another implication of our result is a proof of the conjecture of Aichholzer et al. (2010) that at least n points are required to block any n-vertex Delaunay triangulation computational-geometry  computational-complexity  rather-interesting  algorithms  hard-problems  feature-construction  to-simulate  to-write-about  consider:hardness-features 22 days ago by Vaguery Constructing random polygons | Proceedings of the 9th ACM SIGITE conference on Information technology education The construction of random polygons has been used in psychological research and for the testing of algorithms. With the increased popularity of client-side vector based graphics in the web browser such as seen in Flash and SVG, as well as the newly introduced <canvas> tag in HTML5.0, the use of random shapes for creation of scenes for animation and interactive art requires the construction of random polygons. A natural question, then, is how to generate random polygons in a way which is computationally efficient (particularly in a resource limited environment such as the web browser). This paper presents a random polygon algorithm (RPA) that generates polygons that are random and representative of the class of all n-gons in O(n2logn) time. Our algorithm differs from other approaches in that the vertices are generated randomly, the algorithm is inclusive (i.e. each polygon has a non-zero probability to be constructed), and it runs efficiently in polynomial time. probability-theory  sampling  computational-geometry  algorithms  rather-interesting  to-write-about  to-simulate 23 days ago by Vaguery CiteSeerX — RPG - Heuristics for the Generation of Random Polygons We consider the problem of randomly generating simple and star-shaped polygons on a given set of points. This problem is of considerable importance in the practical evaluation of algorithms that operate on polygons, where it is necessary to check the correctness and to determine the actual CPU-consumption of an algorithm experimentally. Since no polynomial-time solution for the uniformly random generation of polygons is known, we present and analyze several heuristics. All heuristics described in this paper have been implemented and are part of our RandomPolygonGenerator, RPG. 
We have tested all heuristics, and report experimental results on their CPU-consumption, their quality, and their characteristics. RPG is publically available via http://www.cosy.sbg.ac.at/~held/projects/rpg/rpg.html. 1 Introduction In this paper 1 we deal with the random generation of simple polygons on a given set of points: Ideally, given a set S = fs 1 ; : : : ; s n g of n points, we would like to generat... probability-theory  computational-geometry  sampling  rather-interesting  performance-measure  to-simulate  to-write-about 23 days ago by Vaguery [1706.10193] More Turán-Type Theorems for Triangles in Convex Point Sets We study the following family of problems: Given a set of n points in convex position, what is the maximum number triangles one can create having these points as vertices while avoiding certain sets of forbidden configurations. As forbidden configurations we consider all 8 ways in which a pair of triangles in such a point set can interact. This leads to 256 extremal Turán-type questions. We give nearly tight (within a logn factor) bounds for 248 of these questions and show that the remaining 8 questions are all asymptotically equivalent to Stein's longstanding tripod packing problem. combinatorics  enumeration  rather-interesting  plane-geometry  graph-theory  counting  to-simulate  to-write-about  related-to:triangle-hypergraphs 26 days ago by Vaguery [1812.05163] Declination as a Metric to Detect Partisan Gerrymandering We explore the Declination, a new metric intended to detect partisan gerrymandering. We consider instances in which each district has equal turnout, the maximum turnout to minimum turnout is bounded, and turnout is unrestricted. For each of these cases, we show exactly which vote-share, seat-share pairs (V,S) have an election outcome with Declination equal to 0. We also show how our analyses can be applied to finding vote-share, seat-share pairs that are possible for nonzero Declination. Within our analyses, we show that Declination cannot detect all forms of packing and cracking, and we compare the Declination to the Efficiency Gap. We show that these two metrics can behave quite differently, and give explicit examples of that occurring. statistics  politics  gerrymandering  cultural-engineering  rather-interesting  performance-measure  to-write-about  to-simulate 26 days ago by Vaguery [1611.06135] Large Values of the Clustering Coefficient A prominent parameter in the context of network analysis, originally proposed by Watts and Strogatz (Collective dynamics of `small-world' networks, Nature 393 (1998) 440-442), is the clustering coefficient of a graph G. It is defined as the arithmetic mean of the clustering coefficients of its vertices, where the clustering coefficient of a vertex u of G is the relative density m(G[NG(u)])/(dG(u)2) of its neighborhood if dG(u) is at least 2, and 0 otherwise. It is unknown which graphs maximize the clustering coefficient among all connected graphs of given order and size. We determine the maximum clustering coefficients among all connected regular graphs of a given order, as well as among all connected subcubic graphs of a given order. In both cases, we characterize all extremal graphs. Furthermore, we determine the maximum increase of the clustering coefficient caused by adding a single edge. 
graph-theory  network-theory  metrics  a-picture-would-be-useful-about-now  to-write-about  to-illustrate  FFS-make-a-picture 26 days ago by Vaguery [1803.11511] Length segregation in mixtures of spherocylinders induced by imposed topological defects We explore length segregation in binary mixtures of spherocylinders of lengths L1 and L2 with the same diameter D which are tangentially confined on a spherical surface of radius R. The orientation of spherocylinders is constrained along an externally imposed direction field on the sphere which is either along the longitude or the latitude lines of the sphere. In both situations, integer orientational defects at the poles are imposed. We show that these topological defects induce a complex segregation picture also depending on the length ratio factor γ=L2/L1 and the total packing fraction η of the spherocylinders. When the binary mixture is aligned along longitudinal lines of the sphere, shorter rods tend to accumulate at the topological defects of the polar caps whereas longer rods occupy central equatorial area of the spherical surface. In the reverse case of latitude ordering, a state can emerge where longer rods are predominantly both in the cap and in the equatorial areas and shorter rods are localized in between. As a reference situation, we consider a defect-free situation in the flat plane and do not find any length segregation there at similar γ and η, hence the segregation is purely induced by the imposed topological defects. It is also revealed that the shorter rods at γ=4 and η≥0.5 act as obstacles to the rotational relaxation of the longer rods when all orientational constraints are released. granular-materials  mixing  self-organization  rather-interesting  physics!  simulation  looking-to-see  to-write-about  to-simulate  packing 27 days ago by Vaguery [1803.09639] On the multipacking number of grid graphs In 2001, Erwin introduced broadcast domination in graphs. It is a variant of classical domination where selected vertices may have different domination powers. The minimum cost of a dominating broadcast in a graph G is denoted γb(G). The dual of this problem is called multipacking: a multipacking is a set M of vertices such that for any vertex v and any positive integer r, the ball of radius r around v contains at most r vertices of M . The maximum size of a multipacking in a graph G is denoted mp(G). Naturally mp(G) ≤γb(G). Earlier results by Farber and by Lubiw show that broadcast and multipacking numbers are equal for strongly chordal graphs. In this paper, we show that all large grids (height at least 4 and width at least 7), which are far from being chordal, have their broadcast and multipacking numbers equal. graph-theory  feature-construction  rather-interesting  dynamical-systems  to-simulate  to-write-about  consider:generalizations  consider:prediction 27 days ago by Vaguery [1904.06108] The Dodecahedron as a Voronoi Cell and its (minor) importance for the Kepler conjecture The regular dodecahedron has a 2% smaller volume than the rhombic dodecahedron which is the Voronoi cell of a fcc packing. From this point of view it seems possible that the dodecahedral aspect which is the core of the so-called dodecahedral conjecture, will play a major part for an elementary proof of the Kepler conjecture. In this paper we will show that the icosahedral configuration caused by dodecahedron leads to tetrahedra with significantly larger volume than the fcc fundamental parallelotope tessellation tetrahedra. 
Therefore on the basis of a tetrahedral based point of view for sphere packing densities we will demonstrate the minor importance of the dodecahedron as a Voronoi cell for the Kepler conjecture. packing  solved-problems  rather-interesting  not-quite-solutions  optimization  to-write-about  to-simulate  a-picture-might-be-good 27 days ago by Vaguery [1712.04922] Closing the gap for pseudo-polynomial strip packing The set of 2-dimensional packing problems builds an important class of optimization problems and Strip Packing together with 2-dimensional Bin Packing and 2-dimensional Knapsack is one of the most famous of these problems. Given a set of rectangular axis parallel items and a strip with bounded width and infinite height the objective is to find a packing of the items into the strip which minimizes the packing height. We speak of pseudo-polynomial Strip Packing if we consider algorithms with pseudo-polynomial running time with respect to the width of the strip. It is known that there is no pseudo-polynomial algorithm for Strip Packing with a ratio better than 5/4 unless P=NP. The best algorithm so far has a ratio of (4/3+ε). In this paper, we close this gap between inapproximability result and best known algorithm by presenting an algorithm with approximation ratio (5/4+ε) and thus categorize the problem accurately. The algorithm uses a structural result which states that each optimal solution can be transformed such that it has one of a polynomial number of different forms. The strength of this structural result is that it applies to other problem settings as well for example to Strip Packing with rotations (90 degrees) and Contiguous Moldable Task Scheduling. This fact enabled us to present algorithms with approximation ratio (5/4+ε) for these problems as well. operations-research  optimization  strip-packing  packing  numerical-methods  mathematical-programming  algorithms  computational-complexity  horse-races  to-write-about  to-simulate  consider:genetic-programming  consider:performance-measures  consider:heuristics 27 days ago by Vaguery [1802.10038] Improving OCR Accuracy on Early Printed Books by combining Pretraining, Voting, and Active Learning We combine three methods which significantly improve the OCR accuracy of OCR models trained on early printed books: (1) The pretraining method utilizes the information stored in already existing models trained on a variety of typesets (mixed models) instead of starting the training from scratch. (2) Performing cross fold training on a single set of ground truth data (line images and their transcriptions) with a single OCR engine (OCRopus) produces a committee whose members then vote for the best outcome by also taking the top-N alternatives and their intrinsic confidence values into account. (3) Following the principle of maximal disagreement we select additional training lines which the voters disagree most on, expecting them to offer the highest information gain for a subsequent training (active learning). Evaluations on six early printed books yielded the following results: On average the combination of pretraining and voting improved the character accuracy by 46% when training five folds starting from the same mixed model. This number rose to 53% when using different models for pretraining, underlining the importance of diverse voters. Incorporating active learning improved the obtained results by another 16% on average (evaluated on three of the six books). 
Overall, the proposed methods lead to an average error rate of 2.5% when training on only 60 lines. Using a substantial ground truth pool of 1,000 lines brought the error rate down even further to less than 1% on average. OCR  digital-humanities  digitization  text-processing  image-processing  machine-learning  data-cleaning  the-mangle-in-practice  modeling  rather-interesting  to-write-about  to-simulate  consider:stochastic-resonance 27 days ago by Vaguery [1908.08273] Representing Graphs and Hypergraphs by Touching Polygons in 3D Contact representations of graphs have a long history. Most research has focused on problems in 2d, but 3d contact representations have also been investigated, mostly concerning fully-dimensional geometric objects such as spheres or cubes. In this paper we study contact representations with convex polygons in 3d. We show that every graph admits such a representation. Since our representations use super-polynomial coordinates, we also construct representations on grids of polynomial size for specific graph classes (bipartite, subcubic). For hypergraphs, we represent their duals, that is, each vertex is represented by a point and each edge by a polygon. We show that even regular and quite small hypergraphs do not admit such representations. On the other hand, the two smallest Steiner triple systems can be represented. hypergraphs  graph-layout  rather-interesting  to-simulate  to-do  to-write-about  related-to:recent-puzzle 28 days ago by Vaguery allRGB The objective of allRGB is simple: To create images with one pixel for every RGB color (16777216); not one color missing, and not one color twice. Oulipo  art  conceptual-constraints  to-write-about  consider:differing-strategies  consider:grayscale-ones  rather-interesting  via:mathpuzzle.com 4 weeks ago by Vaguery Parens for Pyplot - Squid's Blog libpython-clj has opened the door for Clojure to directly interop with Python libraries. That means we can take just about any Python library and directly use it in our Clojure REPL. But what about matplotlib? Matplotlib.pyplot is a standard fixture in most tutorials and python data science code. How do we interop with a python graphics library? python  Clojure  interoperability  library  to-use  to-try  to-write-about  consider:ClojureScript 4 weeks ago by Vaguery [1804.02385] The chromatic number of the plane is at least 5 We present a family of finite unit-distance graphs in the plane that are not 4-colourable, thereby improving the lower bound of the Hadwiger-Nelson problem. The smallest such graph that we have so far discovered has 1581 vertices. open-questions  coloring-problems  rather-interesting  counterexamples  to-write-about  to-simulate  consider:genetic-programming  consider:density-of-points 4 weeks ago by Vaguery The Right Stuff - Futility Closet Take any two rational numbers whose product is 2, and add 2 to each. The results are the legs of a right triangle with rational sides. plane-geometry  generator  heuristics  rather-interesting  to-write-about  to-simulate  consider:generalizations  consider:genetic-programming 4 weeks ago by Vaguery [1702.08066] On the Classification and Algorithmic Analysis of Carmichael Numbers In this paper, we study the properties of Carmichael numbers, false positives to several primality tests. We provide a classification for Carmichael numbers with a proportion of Fermat witnesses of less than 50%, based on if the smallest prime factor is greater than a determined lower bound. 
In addition, we conduct a Monte Carlo simulation as part of a probabilistic algorithm to detect if a given composite number is Carmichael. We modify this highly accurate algorithm with a deterministic primality test to create a novel, more efficient algorithm that differentiates between Carmichael numbers and prime numbers. number-theory  primes  rather-interesting  feature-construction  classification  tricky-cases  edge-cases  algorithms  performance-measure  to-simulate  to-write-about  consider:classification  computational-complexity 4 weeks ago by Vaguery [1605.04300] On the Circle Covering Theorem by A. W. Goodman and R. E. Goodman In 1945, A. W. Goodman and R. E. Goodman proved the following conjecture by P. Erdős: Given a family of (round) disks of radii r1, …, rn in the plane it is always possible to cover them by a disk of radius R=∑ri, provided they cannot be separated into two subfamilies by a straight line disjoint from the disks. In this note we show that essentially the same idea may work for different analogues and generalizations of their result. In particular, we prove the following: Given a family of positive homothetic copies of a fixed convex body K⊂ℝ^d with homothety coefficients τ1,…,τn>0 it is always possible to cover them by a translate of ((d+1)/2)(∑τi)K, provided they cannot be separated into two subfamilies by a hyperplane disjoint from the homothets. covering-problems  plane-geometry  rather-interesting  to-simulate  to-write-about  proof  consider:looking-to-see  consider:robustness  consider:feature-discovery 4 weeks ago by Vaguery [1806.06751] Entropy of hard square lattice gas with k distinct species of particles: coloring problems and vertex models Coloring the faces of a 2-dimensional square lattice with k distinct colors such that no two adjacent faces have the same color is considered by establishing a connection between the k-coloring problem and a generalized vertex model. Associating the colors with k distinct species of particles with infinite repulsive force between nearest neighbors of the same type and zero chemical potential μ associated with each species, the number of ways [W(k)]^N for large N is related to the entropy of the hard square lattice gas at close packing of the lattice, where N is the number of lattice sites. We discuss the evaluation of W(k) using the transfer matrix method with non-periodic boundary conditions imposed on at least one dimension and show the characteristic Toeplitz block structure of the transfer matrix. Using this result, we present some analytical calculations for non-periodic models that remain finite in one dimension. The case k=3 is found to approach the exact result obtained by Lieb for the residual entropy of ice with periodic boundary conditions. Finally, we show, by explicit calculation of the contribution of subgraphs and the series expansion of W(k), that the generalized Pauling-type estimate (which is based on a mean-field approximation) dominates at large values of k. We thus also provide an alternative series expansion for the chromatic polynomial of a regular square graph. graph-theory  graph-coloring  algorithms  parallel  distributed-processing  cellular-automata  rather-interesting  to-simulate  to-write-about  consider:constraint-satisfaction  consider:rules-variants 4 weeks ago by Vaguery [1501.04472] Local analysis of the history dependence in tetrahedra packings The mechanical properties of a granular sample depend frequently on the way the packing was prepared.
However, it is not well understood which properties of the packing store this information. Here we present an X-ray tomography study of three pairs of tetrahedra packings prepared with three different tapping protocols. The packings in each pair differ in the number of mechanical constraints C imposed on the particles by their contacts, while their bulk volume fraction ϕ_global is approximately the same. We decompose C into the contributions of the three different contact types possible between tetrahedra -- face-to-face (F2F), edge-to-face (E2F), and point contacts -- which each fix a different number of constraints. We then perform a local analysis of the contact distribution by grouping the particles together according to their individual volume fraction ϕ_local computed from a Voronoi tessellation. We find that in samples which have been tapped sufficiently long the number of F2F contacts becomes a universal function of ϕ_local. In contrast, the number of E2F and point contacts varies with the applied tapping protocol. Moreover, we find that the anisotropy of the shape of the Voronoi cells depends on the tapping protocol. This behavior differs from spheres and ellipsoids and poses a significant constraint for any mean-field approach to tetrahedra packings. granular-materials  packing  physics!  experiment  looking-to-see  rather-interesting  condensed-matter  to-write-about  to-simulate  consider:matter.js  consider:2d 4 weeks ago by Vaguery Stanford CoreNLP – Natural language software | Stanford CoreNLP Stanford CoreNLP provides a set of human language technology tools. It can give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize dates, times, and numeric quantities, mark up the structure of sentences in terms of phrases and syntactic dependencies, indicate which noun phrases refer to the same entities, indicate sentiment, extract particular or open-class relations between entity mentions, get the quotes people said, etc. natural-language-processing  library  Clojure  to-understand  to-use  to-write-about  notesapp 4 weeks ago by Vaguery [1811.03690] Machine Learning Characterization of Structural Defects in Amorphous Packings of Dimers and Ellipses Structural defects within amorphous packings of symmetric particles can be characterized using a machine learning approach that incorporates structure functions of radial distances and angular arrangement. This yields a scalar field, softness, that correlates with the probability that a particle is about to rearrange. However, when particle shapes are elongated, as in the case of dimers and ellipses, we find the standard structure functions produce imprecise softness measurements. Moreover, ellipses exhibit deformation profiles in stark contrast to circular particles. In order to account for effects of orientation and alignment, we introduce new structure functions to recover predictive performance of softness, as well as provide physical insight into local and extended dynamics. We study a model disordered solid, a bidisperse two-dimensional granular pillar, driven by uniaxial compression and composed entirely of monomers, dimers, or ellipses. We demonstrate how the computation of softness via support vector machine extends to dimers and ellipses with the introduction of new orientational structure functions. Then, we highlight the spatial extent of rearrangements and defects, as well as their cross-correlation, for each particle shape.
Finally, we demonstrate how an additional machine learning algorithm, recursive feature elimination, provides an avenue to better understand how softness arises from particular structural aspects. We identify the most crucial structure functions in determining softness and discuss their physical implications. physics  granular-materials  machine-learning  neural-networks  rather-interesting  approximation  feature-extraction  to-simulate  to-write-about  consider:genetic-programming  consider:feature-discovery  consider:re-representation 4 weeks ago by Vaguery [1805.03963] Monotone Learning with Rectified Wire Networks We introduce a new neural network model, together with a tractable and monotone online learning algorithm. Our model describes feed-forward networks for classification, with one output node for each class. The only nonlinear operation is rectification using a ReLU function with a bias. However, there is a rectifier on every edge rather than at the nodes of the network. There are also weights, but these are positive, static, and associated with the nodes. Our "rectified wire" networks are able to represent arbitrary Boolean functions. Only the bias parameters, on the edges of the network, are learned. Another departure in our approach, from standard neural networks, is that the loss function is replaced by a constraint. This constraint is simply that the value of the output node associated with the correct class should be zero. Our model has the property that the exact norm-minimizing parameter update, required to correctly classify a training item, is the solution to a quadratic program that can be computed with a few passes through the network. We demonstrate a training algorithm using this update, called sequential deactivation (SDA), on MNIST and some synthetic datasets. Upon adopting a natural choice for the nodal weights, SDA has no hyperparameters other than those describing the network structure. Our experiments explore behavior with respect to network size and depth in a family of sparse expander networks. machine-learning  neural-networks  representation  algorithms  to-simulate  to-write-about  consider:performance-measures  rather-interesting 4 weeks ago by Vaguery [1701.00706] Bounds on parameters of minimally non-linear patterns Let ex(n,P) be the maximum possible number of ones in any 0-1 matrix of dimensions n×n that avoids P. Matrix P is called minimally non-linear if ex(n,P)=ω(n) but ex(n,P′)=O(n) for every strict subpattern P′ of P. We prove that the ratio between the length and width of any minimally non-linear 0-1 matrix is at most 4, and that a minimally non-linear 0-1 matrix with k rows has at most 5k−3 ones. We also obtain an upper bound on the number of minimally non-linear 0-1 matrices with k rows. In addition, we prove corresponding bounds for minimally non-linear ordered graphs. The minimal non-linearity that we investigate for ordered graphs is for the extremal function ex<(n,G), which is the maximum possible number of edges in any ordered graph on n vertices with no ordered subgraph isomorphic to G. matrices  pattern-avoiding  pattern-matching  combinatorics  rather-interesting  constraint-satisfaction  looking-to-see  enumeration  feature-construction  to-simulate  to-write-about 4 weeks ago by Vaguery [1704.05207] Algorithms for Pattern Containment in 0-1 Matrices We say a zero-one matrix A avoids another zero-one matrix P if no submatrix of A can be transformed to P by changing some ones to zeros. 
A fundamental problem is to study the extremal function ex(n,P), the maximum number of nonzero entries in an n×n zero-one matrix A which avoids P. To calculate exact values of ex(n,P) for specific values of n, we need containment algorithms which tell us whether a given n×n matrix A contains a given pattern matrix P. In this paper, we present optimal algorithms to determine when an n×n matrix A contains a given pattern P when P is a column of all ones, an identity matrix, a tuple identity matrix, an L-shaped pattern, or a cross pattern. These algorithms run in Θ(n²) time, which is the lowest possible order a containment algorithm can achieve. When P is a rectangular all-ones matrix, we also obtain an improved running time algorithm, albeit with a higher order. matrices  constraint-satisfaction  game-theory  rather-interesting  mathematical-recreations  patterns  permutations  to-simulate  to-write-about  consider:feature-discovery  combinatorics 4 weeks ago by Vaguery [1704.05211] Results on Pattern Avoidance Games A zero-one matrix A contains another zero-one matrix P if some submatrix of A can be transformed to P by changing some ones to zeros. A avoids P if A does not contain P. The Pattern Avoidance Game is played by two players. Starting with an all-zero matrix, two players take turns changing zeros to ones while keeping A avoiding P. We study the strategies of this game for some patterns P. We also study some generalizations of this game. crowdsourcing  rather-interesting  game-theory  patterns  permutations  looking-to-see  open-questions  to-simulate  to-write-about  mathematical-recreations  matrices 4 weeks ago by Vaguery [1911.09792] Minority Voter Distributions and Partisan Gerrymandering Many people believe that it is disadvantageous for members aligning with a minority party to cluster in cities, as this makes it easier for the majority party to gerrymander district boundaries to diminish the representation of the minority. We examine this effect by exhaustively computing the average representation for every possible 5×5 grid of population placement and district boundaries. We show that, in fact, it is advantageous for the minority to arrange themselves in clusters, as it is positively correlated with representation. We extend this result to more general cases by considering the dual graph of districts, and we also propose and analyze metaheuristic algorithms that allow us to find strong lower bounds for maximum expected representation. gerrymandering  voting  looking-to-see  simulation  rather-interesting  fairness  optimization  multiobjective-optimization  to-simulate  to-write-about 4 weeks ago by Vaguery [1705.06774] On Variations of Nim and Chomp We study two variations of Nim and Chomp which we call Monotonic Nim and Diet Chomp. In Monotonic Nim the moves are the same as in Nim, but the positions are non-decreasing numbers as in Chomp. Diet-Chomp is a variation of Chomp, where the total number of squares removed is limited. mathematical-recreations  game-theory  games  rather-interesting  to-write-about  to-simulate  out-of-the-box 4 weeks ago by Vaguery [1605.05601] Alternator Coins We introduce a new type of coin: the alternator. The alternator can pretend to be either a real or a fake coin (which is lighter than a real one). Each time it is put on a balance scale it switches between pretending to be either a real coin or a fake one. In this paper, we solve the following problem: You are given N coins that look identical, but one of them is the alternator.
All real coins weigh the same. You have a balance scale which you can use to find the alternator. What is the smallest number of weighings that guarantees that you will find the alternator? coin-weighing-problems  mathematical-recreations  puzzles  rather-interesting  out-of-the-box  to-simulate  to-write-about  consider:genetic-programming  combinatorics  optimization 4 weeks ago by Vaguery [1603.08549] Who Is Guilty? We discuss a generalization of logic puzzles in which truth-tellers and liars are allowed to deviate from their pattern in case of one particular question: "Are you guilty?" logic  puzzles  mathematical-recreations  constraint-satisfaction  looking-to-see  generalization  to-write-about  to-simulate 4 weeks ago by Vaguery [1909.07007] Asymptotics of d-Dimensional Visibility We consider the space [0,n]³, imagined as a three dimensional, axis-aligned grid world partitioned into n³ 1×1×1 unit cubes. Each cube is either considered to be empty, in which case a line of sight can pass through it, or obstructing, in which case no line of sight can pass through it. From a given position, some of these obstructing cubes block one's view of other obstructing cubes, leading to the following extremal problem: What is the largest number of obstructing cubes that can be simultaneously visible from the surface of an observer cube, over all possible choices of which cubes of [0,n]³ are obstructing? We construct an example of a configuration in which Ω(n^{8/3}) obstructing cubes are visible, and generalize this to an example with Ω(n^{d−1/d}) visible obstructing hypercubes for dimension d>3. Using Fourier analytic techniques, we prove an O(n^{d−1/d} log n) upper bound in a reduced visibility setting. visibility-problems  illumination-problems  computational-geometry  rather-interesting  to-simulate  to-write-about  looking-to-see 4 weeks ago by Vaguery [1808.04304] On Base 3/2 and its Sequences We discuss properties of integers in base 3/2. We also introduce many new sequences related to base 3/2. Some sequences discuss patterns related to integers in base 3/2. Other sequences are analogues of famous base-10 sequences: we discuss powers of 3 and 2, Look-and-say, and sorted and reverse sorted Fibonaccis. The eventual behavior of sorted and reverse sorted Fibs leads to special Pinocchio and Oihcconip sequences respectively. number-theory  combinatorics  radix  rather-interesting  mathematical-recreations  to-simulate  to-write-about  consider:irrational-values  representation 4 weeks ago by Vaguery [1808.06713] Mathematics of a Sudo-Kurve We investigate a type of Sudoku variant called Sudo-Kurve, which allows bent rows and columns, and develop a new, yet equivalent, variant we call a Sudo-Cube. We examine the total number of distinct solution grids for this type with or without symmetry. We study other mathematical aspects of this puzzle along with the minimum number of clues needed and the number of ways to place individual symbols. sudoku  mathematical-recreations  constraint-satisfaction  rather-interesting  puzzles  out-of-the-box  to-write-about  to-simulate  consider:simpler 4 weeks ago by Vaguery [1809.09676] Chip-Firing and Fractional Bases We study a particular chip-firing process on an infinite path graph. At any time when there are at least a+b chips at a vertex, a chips fire to the left and b chips fire to the right. We describe the final state of this process when we start with n chips at the origin. [A minimal simulation sketch appears at the end of this bookmark list.]
chip-firing  dynamical-systems  number-theory  mathematical-recreations  rather-interesting  to-write-about  to-simulate  combinatorics 4 weeks ago by Vaguery [1612.04861] Some Counterexamples for Compatible Triangulations We consider the conjecture by Aichholzer, Aurenhammer, Hurtado, and Krasser that any two point sets with the same cardinality and the same size convex hull can be triangulated in the "same" way, more precisely via compatible triangulations. We show counterexamples to various strengthened versions of this conjecture. computational-geometry  equivalence  triangulation  algorithms  rather-interesting  feature-construction  graph-theory  to-simulate  to-write-about 4 weeks ago by Vaguery [1901.09818] Variants of Base 3 over 2 We discuss two different systems of number representations that both can be called 'base 3/2'. We explain how they are connected. Unlike classical fractional extension, these two systems provide a finite representation for integers. We also discuss a connection between these systems and 3-free sequences. radix  mathematical-recreations  exploding-dots  rather-interesting  representation  number-theory  exploration  to-write-about  to-simulate 4 weeks ago by Vaguery [1203.3353] Solving Structure with Sparse, Randomly-Oriented X-ray Data Single-particle imaging experiments of biomolecules at x-ray free-electron lasers (XFELs) require processing of hundreds of thousands (or more) of images that contain very few x-rays. Each low-flux image of the diffraction pattern is produced by a single, randomly oriented particle, such as a protein. We demonstrate the feasibility of collecting data at these extremes, averaging only 2.5 photons per frame, where it seems doubtful there could be information about the state of rotation, let alone the image contrast. This is accomplished with an expectation maximization algorithm that processes the low-flux data in aggregate, and without any prior knowledge of the object or its orientation. The versatility of the method promises, more generally, to redefine what measurement scenarios can provide useful signal in the high-noise regime. diffraction  inverse-problems  tomography  rather-interesting  algorithms  statistics  probability-theory  inference  to-simulate  to-write-about  optimization  signal-processing 4 weeks ago by Vaguery [1305.0289] Pessimal packing shapes We address the question of which convex shapes, when packed as densely as possible under certain restrictions, fill the least space and leave the most empty space. In each different dimension and under each different set of restrictions, this question is expected to have a different answer or perhaps no answer at all. As the problem of identifying global minima in most cases appears to be beyond current reach, in this paper we focus on local minima. We review some known results and prove these new results: in two dimensions, the regular heptagon is a local minimum of the double-lattice packing density, and in three dimensions, the directional derivative (in the sense of Minkowski addition) of the double-lattice packing density at the point in the space of shapes corresponding to the ball is in every direction positive.
packing  computational-geometry  optimization  worst-cases  rather-interesting  to-simulate  to-write-about  consider:convex  consider:monte-carlo 4 weeks ago by Vaguery [1706.03049] A linear-time algorithm for the maximum-area inscribed triangle in a convex polygon Given the n vertices of a convex polygon in cyclic order, can the triangle of maximum area inscribed in P be determined by an algorithm with O(n) time complexity? A purported linear-time algorithm by Dobkin and Snyder from 1979 has recently been shown to be incorrect by Keikha, Löffler, Urhausen, and van der Hoog. These authors give an alternative algorithm with O(n log n) time complexity. Here we give an algorithm with linear time complexity. computational-complexity  computational-geometry  algorithms  rather-interesting  plane-geometry  to-simulate  to-write-about  consider:genetic-programming 4 weeks ago by Vaguery [1805.06512] The Broken Stick Project The broken stick problem is the following classical question. You have a segment [0,1]. You choose two points on this segment at random. They divide the segment into three smaller segments. Show that the probability that the three segments form a triangle is 1/4. The MIT PRIMES program, together with Art of Problem Solving, organized a high school research project where participants worked on several variations of this problem. Participants were generally high school students who posted ideas and progress to the Art of Problem Solving forums over the course of an entire year, under the supervision of PRIMES mentors. This report summarizes the findings of this CrowdMath project. crowdsourcing  see-author  open-questions  geometry  probability-theory  rather-interesting  to-write-about  to-simulate  consider:genetic-programming  consider:performance-measures 4 weeks ago by Vaguery [1107.4030] Three dimensional structure from intensity correlations We develop the analysis of x-ray intensity correlations from dilute ensembles of identical particles in a number of ways. First, we show that the 3D particle structure can be determined if the particles can be aligned with respect to a single axis having a known angle with respect to the incident beam. Second, we clarify the phase problem in this setting and introduce a data reduction scheme that assesses the integrity of the data even before the particle reconstruction is attempted. Finally, we describe an algorithm that reconstructs intensity and particle density simultaneously, thereby making maximal use of the available constraints. signal-processing  crystallography  inverse-problems  rather-interesting  diffraction  numerical-methods  to-write-about  to-simulate 4 weeks ago by Vaguery [1305.1310] Statistical mechanics of the lattice sphere packing problem We present an efficient Monte Carlo method for the lattice sphere packing problem in d dimensions. We use this method to numerically discover de novo the densest lattice sphere packing in dimensions 9 through 20. Our method goes beyond previous methods not only in exploring higher dimensions but also in shedding light on the statistical mechanics underlying the problem in question. We observe evidence of a phase transition in the thermodynamic limit d→∞. In the dimensions explored in the present work, the results are consistent with a first-order crystallization transition, but leave open the possibility that a glass transition is manifested in higher dimensions. 
packing  open-problems  optimization  looking-to-see  sampling  approximation  to-simulate  to-write-about  consider:performance-measures  consider:robustness  consider:relaxations 4 weeks ago by Vaguery [1305.1961] An Improved Three-Weight Message-Passing Algorithm We describe how the powerful "Divide and Concur" algorithm for constraint satisfaction can be derived as a special case of a message-passing version of the Alternating Direction Method of Multipliers (ADMM) algorithm for convex optimization, and introduce an improved message-passing algorithm based on ADMM/DC by introducing three distinct weights for messages, with "certain" and "no opinion" weights, as well as the standard weight used in ADMM/DC. The "certain" messages allow our improved algorithm to implement constraint propagation as a special case, while the "no opinion" messages speed convergence for some problems by making the algorithm focus only on active constraints. We describe how our three-weight version of ADMM/DC can give greatly improved performance for non-convex problems such as circle packing and solving large Sudoku puzzles, while retaining the exact performance of ADMM for convex problems. We also describe the advantages of our algorithm compared to other message-passing algorithms based upon belief propagation. constraint-satisfaction  algorithms  distributed-processing  rather-interesting  to-simulate  to-write-about  consider:sudoku  consider:performance-measures 4 weeks ago by Vaguery [1310.5924] The symplectic geometry of closed equilateral random walks in 3-space A closed equilateral random walk in 3-space is a selection of unit length vectors giving the steps of the walk conditioned on the assumption that the sum of the vectors is zero. The sample space of such walks with n edges is the (2n−3)-dimensional Riemannian manifold of equilateral closed polygons in ℝ³. We study closed random walks using the symplectic geometry of the (2n−6)-dimensional quotient of the manifold of polygons by the action of the rotation group SO(3). The basic objects of study are the moment maps on equilateral random polygon space given by the lengths of any (n−3)-tuple of nonintersecting diagonals. The Atiyah-Guillemin-Sternberg theorem shows that the image of such a moment map is a convex polytope in (n−3)-dimensional space, while the Duistermaat-Heckman theorem shows that the pushforward measure on this polytope is Lebesgue measure on ℝ^{n−3}. Together, these theorems allow us to define a measure-preserving set of "action-angle" coordinates on the space of closed equilateral polygons. The new coordinate system allows us to make explicit computations of exact expectations for total curvature and for some chord lengths of closed (and confined) equilateral random walks, to give statistical criteria for sampling algorithms on the space of polygons and to prove that the probability that a randomly chosen equilateral hexagon is unknotted is at least 1/2. We then use our methods to construct a new Markov chain sampling algorithm for equilateral closed polygons, with a simple modification to sample (rooted) confined equilateral closed polygons. We prove rigorously that our algorithm converges geometrically to the standard measure on the space of closed random walks, give a theory of error estimators for Markov chain Monte Carlo integration using our method and analyze the performance of our method.
Our methods also apply to open random walks in certain types of confinement, and in general to walks with arbitrary (fixed) edge lengths as well as equilateral walks. probability-theory  combinatorics  random-walks  sampling  rather-interesting  computational-geometry  looking-to-see  to-simulate  to-write-about  consider:rendering 4 weeks ago by Vaguery
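Two entries above are tagged to-simulate and are small enough to check directly in code. First, the process from the Chip-Firing and Fractional Bases entry: a minimal Python sketch under the stated rule. The function name and the batched firing order are illustrative choices, not taken from the paper; processes of this kind are typically order-independent (abelian), but treat that as an assumption here.

def chip_firing(n, a, b):
    # Vertices of the infinite path are integers; start with n chips at 0.
    # Any vertex holding at least a+b chips fires: a chips go to its left
    # neighbor and b chips to its right. Repeat until no vertex can fire.
    chips = {0: n}
    while True:
        hot = [v for v, c in chips.items() if c >= a + b]
        if not hot:
            return dict(sorted(chips.items()))
        for v in hot:
            k = chips[v] // (a + b)  # fire this vertex k times in one batch
            chips[v] -= k * (a + b)
            chips[v - 1] = chips.get(v - 1, 0) + k * a
            chips[v + 1] = chips.get(v + 1, 0) + k * b

print(chip_firing(10, 1, 2))  # final stable configuration for n=10, a=1, b=2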
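Second, a quick Monte Carlo check of the 1/4 probability claimed in the Broken Stick Project entry; again a sketch, with illustrative names.

import random

def broken_stick_triangle_prob(trials=1_000_000):
    # Break [0,1] at two uniform random points; the three pieces form a
    # triangle exactly when no single piece is as long as 1/2.
    hits = 0
    for _ in range(trials):
        u, v = sorted((random.random(), random.random()))
        a, b, c = u, v - u, 1.0 - v
        if max(a, b, c) < 0.5:
            hits += 1
    return hits / trials

print(broken_stick_triangle_prob())  # converges to about 0.25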
https://or.stackexchange.com/questions/6861/alternate-formulation-for-modeling-inventory-constraints/6865
# Alternate formulation for modeling inventory constraints I'm working on an inventory optimization problem where the inventory used in a time-period is computed based on the price-bucket selected for an item. The problem contains multiple items (around 10K), 15-20 time-periods and 10-15 price buckets. Inventory consumed in a time-period (t) is the minimum of the available inventory and the demand (which depends on the selected price bucket). This particular constraint makes the problem harder (a commercial solver cannot find an optimal solution within 30 minutes at the above scale), and my current modeling is shown below: \begin{align*} &\mathcal{I} \quad \text{set of items (indexed by } i\text{)}\\ &\mathcal{T} \quad \text{set of time-periods (indexed by } t\text{)}\\ &\mathcal{K} \quad \text{set of price-buckets (indexed by } k\text{)}\\ &D_{i,t,k} \quad \text{demand of item } i \text{ for time-period } t \text{ at price-bucket } k \\ &z_{i,t,k} \quad \text{binary variable that takes value 1 if the } k^{\text{th}} \text{ price-bucket is selected for item } i \text{ in time-period } t \end{align*} As mentioned above, the inventory supplied ($$q_{i,t}^{supplied}$$) in time-period $$t$$ is computed as the minimum of the available inventory ($$q_{i,t}$$) and the posted demand ($$\sum_{k}D_{i,t,k} \cdot z_{i,t,k}$$). This is linearized by the following constraints. \begin{align} q_{i,t}^{supplied} &\le q_{i,t} \\ q_{i,t}^{supplied} &\le \sum_{k}D_{i,t,k} \cdot z_{i,t,k} \\ q_{i,t}^{supplied} &\ge q_{i,t} - M^{big} \cdot \delta_{i,t} \\ q_{i,t}^{supplied} &\ge \sum_{k}D_{i,t,k} \cdot z_{i,t,k} - M^{big} \cdot (1-\delta_{i,t}) \end{align} where $$\delta_{i,t}$$ is a binary variable. Big-M's are chosen as tight as possible. This set of constraints is what slows the solve. Suggestions for an alternative formulation, or pointers to references, would be helpful. • What is the model's objective function and, in particular, does the objective favor meeting demand where possible or does it favor supplying as little as possible? Sep 3 at 15:19 • @prubin The objective is to maximize the profit associated with the supplied quantity. There are other side constraints, but I identified this set of constraints as what makes the problem harder. Sep 3 at 16:36 If, given inventory level ($$q_{i,t}$$) and bucket decisions ($$z_{i,t,k}$$), profit is maximized by maximizing the supplied quantities, you probably can drop the big-M constraints and trust the objective to prevent selection of a supplied quantity less than the min of the two upper limits. As an experiment, you could try running without those constraints and see if in fact the solution satisfies the min constraints. Otherwise, I'm not sure there is a better formulation. You could try combinatorial Benders cuts [1] in lieu of the big-M constraints, but I'm not sure that would be faster (particularly if your values of M are fairly tight), and you likely would want a solver that supported user callbacks. Another possible reformulation would be to introduce new continuous variables $$w_{i,t,k}\in [0,1]$$ along with the constraints \begin{align*} w_{i,t,k} & \le z_{i,t,k}\\ w_{i,t,k} & \le \delta_{i,t}\\ w_{i,t,k} & \ge z_{i,t,k}+\delta_{i,t}-1. \end{align*} Collectively, those constraints force $$w_{i,t,k}=z_{i,t,k}\delta_{i,t}$$. You can now replace your fourth constraints with $$q_{i,t}^{supplied} \ge \sum_{k}D_{i,t,k} w_{i,t,k}.$$ That eliminates half the big-M constraints, but still leaves the other half. (I'm assuming that available inventory $$q_{i,t}$$ is a variable and not a parameter.)
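For concreteness, here is a minimal sketch of the big-M linearization above for a single (item, time-period) pair, written in Python with the open-source PuLP modeler. The toy data, all variable names, and the stand-in objective are illustrative assumptions for exercising the constraint logic, not the asker's actual model:

import pulp

D = [30.0, 50.0, 80.0]      # demand per price bucket k (toy data)
q_avail = 45.0              # available inventory q_{i,t}, fixed here for brevity
M = max(max(D), q_avail)    # a valid big-M: upper bound on both terms of the min

prob = pulp.LpProblem("min_linearization", pulp.LpMaximize)
z = [pulp.LpVariable(f"z_{k}", cat="Binary") for k in range(len(D))]
delta = pulp.LpVariable("delta", cat="Binary")
q_sup = pulp.LpVariable("q_supplied", lowBound=0)

demand = pulp.lpSum(D[k] * z[k] for k in range(len(D)))
prob += q_sup                              # stand-in for the profit objective
prob += pulp.lpSum(z) == 1                 # exactly one price bucket is selected
prob += q_sup <= q_avail                   # constraint (1)
prob += q_sup <= demand                    # constraint (2)
prob += q_sup >= q_avail - M * delta       # constraint (3)
prob += q_sup >= demand - M * (1 - delta)  # constraint (4)

prob.solve()
print(q_sup.value(), [v.value() for v in z])  # q_supplied = min(q_avail, chosen demand)

Consistent with the answer's first suggestion, when the objective already rewards larger supplied quantities, deleting constraints (3)-(4) and delta should leave the optimum unchanged, which this toy model makes easy to verify.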
https://en.wikipedia.org/wiki/Methods_of_computing_square_roots
# Methods of computing square roots Methods of computing square roots are numerical analysis algorithms for finding the principal, or non-negative, square root (usually denoted √S, ²√S, or S^{1/2}) of a real number. Arithmetically, it means given S, a procedure for finding a number which, when multiplied by itself, yields S; algebraically, it means a procedure for finding the non-negative root of the equation x² − S = 0; geometrically, it means given the area of a square, a procedure for constructing a side of the square. Every real number has two square roots.[Note 1] The principal square root of most numbers is an irrational number with an infinite decimal expansion. As a result, the decimal expansion of any such square root can only be computed to some finite-precision approximation. However, even if we are taking the square root of a perfect square integer, so that the result does have an exact finite representation, the procedure used to compute it may only return a series of increasingly accurate approximations. The continued fraction representation of a real number can be used instead of its decimal or binary expansion, and this representation has the property that the square root of any rational number (which is not already a perfect square) has a periodic, repeating expansion, similar to how rational numbers have repeating expansions in the decimal notation system. The most common analytical methods are iterative and consist of two steps: finding a suitable starting value, followed by iterative refinement until some termination criterion is met. The starting value can be any number, but fewer iterations will be required the closer it is to the final result. The most familiar such method, most suited for programmatic calculation, is Newton's method, which is based on a property of the derivative in the calculus. A few methods, like paper-and-pencil synthetic division and series expansion, do not require a starting value. In some applications, an integer square root is required, which is the square root rounded or truncated to the nearest integer (a modified procedure may be employed in this case). The method employed depends on what the result is to be used for (i.e. how accurate it has to be), how much effort one is willing to put into the procedure, and what tools are at hand. The methods may be roughly classified as those suitable for mental calculation, those usually requiring at least paper and pencil, and those which are implemented as programs to be executed on a digital electronic computer or other computing device. Algorithms may take into account convergence (how many iterations are required to achieve a specified precision), computational complexity of individual operations (i.e. division) or iterations, and error propagation (the accuracy of the final result). Procedures for finding square roots (particularly the square root of 2) have been known since at least the period of ancient Babylon in the 17th century BCE. Heron's method from first century Egypt was the first ascertainable algorithm for computing a square root. Modern analytic methods began to be developed after the introduction of the Arabic numeral system to western Europe in the early Renaissance. Today, nearly all computing devices have a fast and accurate square root function, either as a programming language construct, a compiler intrinsic or library function, or as a hardware operator, based on one of the described procedures. ## Initial estimate Many iterative square root algorithms require an initial seed value.
The seed must be a non-zero positive number; it should be between 1 and ${\displaystyle S}$, the number whose square root is desired, because the square root must be in that range. If the seed is far away from the root, the algorithm will require more iterations. If one initializes with x0 = 1 (or S), then approximately ${\displaystyle {\tfrac {1}{2}}\vert \log _{2}S\vert }$ iterations will be wasted just getting the order of magnitude of the root. It is therefore useful to have a rough estimate, which may have limited accuracy but is easy to calculate. In general, the better the initial estimate, the faster the convergence. For Newton's method (also called Babylonian or Heron's method), a seed somewhat larger than the root will converge slightly faster than a seed somewhat smaller than the root. In general, an estimate is pursuant to an arbitrary interval known to contain the root (such as [x0, S/x0]). The estimate is a specific value of a functional approximation to f(x) = √x over the interval. Obtaining a better estimate involves either obtaining tighter bounds on the interval, or finding a better functional approximation to f(x). The latter usually means using a higher order polynomial in the approximation, though not all approximations are polynomial. Common methods of estimating include scalar, linear, hyperbolic and logarithmic. A decimal base is usually used for mental or paper-and-pencil estimating. A binary base is more suitable for computer estimates. In estimating, the exponent and mantissa are usually treated separately, as the number would be expressed in scientific notation. ### Decimal estimates Typically the number ${\displaystyle S}$ is expressed in scientific notation as ${\displaystyle a\times 10^{2n}}$ where ${\displaystyle 1\leq a<100}$ and n is an integer, and the range of possible square roots is ${\displaystyle {\sqrt {a}}\times 10^{n}}$ where ${\displaystyle 1\leq {\sqrt {a}}<10}$. #### Scalar estimates Scalar methods divide the range into intervals, and the estimate in each interval is represented by a single scalar number. If the range is considered as a single interval, the arithmetic mean (5.5) or geometric mean (${\displaystyle {\sqrt {10}}\approx 3.16}$) times ${\displaystyle 10^{n}}$ are plausible estimates. The absolute and relative error for these will differ. In general, a single scalar will be very inaccurate. Better estimates divide the range into two or more intervals, but scalar estimates have inherently low accuracy. For two intervals, divided geometrically, the square root ${\displaystyle {\sqrt {S}}={\sqrt {a}}\times 10^{n}}$ can be estimated as[Note 2] ${\displaystyle {\sqrt {S}}\approx {\begin{cases}2\cdot 10^{n}&{\text{if }}a<10,\\6\cdot 10^{n}&{\text{if }}a\geq 10.\end{cases}}}$ This estimate has maximum absolute error of ${\displaystyle 4\cdot 10^{n}}$ at a = 100, and maximum relative error of 100% at a = 1. For example, for ${\displaystyle S=125348}$ factored as ${\displaystyle 12.5348\times 10^{4}}$, the estimate is ${\displaystyle {\sqrt {S}}\approx 6\cdot 10^{2}=600}$. ${\displaystyle {\sqrt {125348}}=354.0}$, an absolute error of 246 and relative error of almost 70%. #### Linear estimates A better estimate, and the standard method used, is a linear approximation to the function ${\displaystyle y=x^{2}}$ over a small arc.
If, as above, powers of the base are factored out of the number ${\displaystyle S}$ and the interval reduced to [1,100], a secant line spanning the arc, or a tangent line somewhere along the arc may be used as the approximation, but a least-squares regression line intersecting the arc will be more accurate. A least-squares regression line minimizes the average difference between the estimate and the value of the function. Its equation is ${\displaystyle y=8.7x-10}$. Reordering, ${\displaystyle x=0.115y+1.15}$. Rounding the coefficients for ease of computation, ${\displaystyle {\sqrt {S}}\approx (a/10+1.2)\cdot 10^{n}}$ That is the best estimate on average that can be achieved with a single-piece linear approximation of the function y = x² in the interval [1,100]. It has a maximum absolute error of 1.2 at a=100, and maximum relative error of 30% at S=1 and 10.[Note 3] To divide by 10, subtract one from the exponent of ${\displaystyle a}$, or figuratively move the decimal point one digit to the left. For this formulation, any additive constant 1 plus a small increment will make a satisfactory estimate so remembering the exact number isn't a burden. The approximation (rounded or not) using a single line spanning the range [1,100] is less than one significant digit of precision; the relative error is greater than 1/2², so less than 2 bits of information are provided. The accuracy is severely limited because the range is two orders of magnitude, quite large for this kind of estimation. A much better estimate can be obtained by a piece-wise linear approximation: multiple line segments, each approximating some subarc of the original. The more line segments used, the better the approximation. The most common way is to use tangent lines; the critical choices are how to divide the arc and where to place the tangent points. An efficacious way to divide the arc from y=1 to y=100 is geometrically: for two intervals, the dividing bound is the square root of the product of the bounds of the original interval, √(1·100), i.e. [1, √100] and [√100, 100]. For three intervals, the bounds are the cube roots of 100: [1, ³√100], [³√100, (³√100)²], and [(³√100)², 100], etc. For two intervals, √100 = 10, a very convenient number. Tangent lines are easy to derive, and are located at x = √(1·√10) and x = √(10·√10). Their equations are: y = 3.56x - 3.16 and y = 11.2x - 31.6. Inverting, the square roots are: x = 0.28y + 0.89 and x = .089y + 2.8. Thus for S = a · 10^{2n}: ${\displaystyle {\sqrt {S}}\approx {\begin{cases}(0.28a+0.89)\cdot 10^{n}&{\text{if }}a<10,\\(.089a+2.8)\cdot 10^{n}&{\text{if }}a\geq 10.\end{cases}}}$ The maximum absolute errors occur at the high points of the intervals, at a=10 and 100, and are 0.54 and 1.7 respectively. The maximum relative errors are at the endpoints of the intervals, at a=1, 10 and 100, and are 17% in both cases. 17% or 0.17 is larger than 1/10, so the method yields less than a decimal digit of accuracy. #### Hyperbolic estimates In some cases, hyperbolic estimates may be efficacious, because a hyperbola is also a convex curve and may lie along an arc of y = x² better than a line. Hyperbolic estimates are more computationally complex, because they necessarily require a floating division. A near-optimal hyperbolic approximation to x² on the interval [1,100] is y=190/(10-x)-20. Transposing, the square root is x = -190/(y+20)+10.
Thus for ${\displaystyle S=a\cdot 10^{2n}}$: ${\displaystyle {\sqrt {S}}\approx \left({\frac {-190}{a+20}}+10\right)\cdot 10^{n}}$ The floating division need be accurate to only one decimal digit, because the estimate overall is only that accurate, and can be done mentally. A hyperbolic estimate is better on average than scalar or linear estimates. It has maximum absolute error of 1.58 at 100 and maximum relative error of 16.0% at 10. For the worst case at a=10, the estimate is 3.67. If one starts with 10 and applies Newton-Raphson iterations straight away, two iterations will be required, yielding 3.66, before the accuracy of the hyperbolic estimate is exceeded. For a more typical case like 75, the hyperbolic estimate is 8.00, and 5 Newton-Raphson iterations starting at 75 would be required to obtain a more accurate result. #### Arithmetic estimates A method analogous to piece-wise linear approximation but using only arithmetic instead of algebraic equations, uses the multiplication tables in reverse: the square root of a number between 1 and 100 is between 1 and 10, so if we know 25 is a perfect square (5 × 5), and 36 is a perfect square (6 × 6), then the square root of a number greater than or equal to 25 but less than 36, begins with a 5. Similarly for numbers between other squares. This method will yield a correct first digit, but it is not accurate to one digit: the first digit of the square root of 35 for example, is 5, but the square root of 35 is almost 6. A better way is to divide the range into intervals half way between the squares. So any number between 25 and half way to 36, which is 30.5, estimate 5; any number greater than 30.5 up to 36, estimate 6.[Note 4] The procedure only requires a little arithmetic to find a boundary number in the middle of two products from the multiplication table. Here is a reference table of those boundaries:

    a             nearest square    est. k = √a
    1 to 2.5      1 (= 1²)          1
    2.5 to 6.5    4 (= 2²)          2
    6.5 to 12.5   9 (= 3²)          3
    12.5 to 20.5  16 (= 4²)         4
    20.5 to 30.5  25 (= 5²)         5
    30.5 to 42.5  36 (= 6²)         6
    42.5 to 56.5  49 (= 7²)         7
    56.5 to 72.5  64 (= 8²)         8
    72.5 to 90.5  81 (= 9²)         9
    90.5 to 100   100 (= 10²)       10

The final operation is to multiply the estimate k by the power of ten divided by 2, so for ${\displaystyle S=a\cdot 10^{2n}}$, ${\displaystyle {\sqrt {S}}\approx k\cdot 10^{n}}$ The method implicitly yields one significant digit of accuracy, since it rounds to the best first digit. The method can be extended to 3 significant digits in most cases, by interpolating between the nearest squares bounding the operand. If ${\displaystyle k^{2}\leq a<(k+1)^{2}}$, then ${\displaystyle {\sqrt {a}}}$ is approximately k plus a fraction, the difference between a and k² divided by the difference between the two squares: ${\displaystyle {\sqrt {a}}\approx k+R}$ where ${\displaystyle R={\frac {(a-k^{2})}{(k+1)^{2}-k^{2}}}}$ The final operation, as above, is to multiply the result by the power of ten divided by 2; ${\displaystyle {\sqrt {S}}={\sqrt {a}}\cdot 10^{n}\approx (k+R)\cdot 10^{n}}$ k is a decimal digit and R is a fraction that must be converted to decimal. It usually has only a single digit in the numerator, and one or two digits in the denominator, so the conversion to decimal can be done mentally. Example: find the square root of 75. 75 = 75 × 10⁰, so a is 75 and n is 0.
From the multiplication tables, the square root of the mantissa must be 8 point something because 8 × 8 is 64, but 9 × 9 is 81, too big, so k is 8; something is the decimal representation of R. The fraction R is 75 − k² = 11, the numerator, and 81 − k² = 17, the denominator. 11/17 is a little less than 12/18, which is 2/3 or .67, so guess .66 (it's ok to guess here, the error is very small). So the estimate is 8 + .66 = 8.66. √75 to three significant digits is 8.66, so the estimate is good to 3 significant digits. Not all such estimates using this method will be so accurate, but they will be close. ### Binary estimates When working in the binary numeral system (as computers do internally), by expressing ${\displaystyle S}$ as ${\displaystyle a\times 2^{2n}}$ where ${\displaystyle 0.1_{2}\leq a<10_{2}}$, the square root ${\displaystyle {\sqrt {S}}={\sqrt {a}}\times 2^{n}}$ can be estimated as ${\displaystyle {\sqrt {S}}\approx (0.485+0.485\cdot a)\cdot 2^{n}}$ which is the least-squares regression line to 3 significant digit coefficients. ${\displaystyle {\sqrt {a}}}$ has maximum absolute error of 0.0408 at ${\displaystyle a}$=2, and maximum relative error of 3.0% at ${\displaystyle a}$=1. A computationally convenient rounded estimate (because the coefficients are powers of 2) is: ${\displaystyle {\sqrt {S}}\approx (0.5+0.5\cdot a)\cdot 2^{n}}$[Note 5] which has maximum absolute error of 0.086 at 2 and maximum relative error of 6.1% at ${\displaystyle a}$=0.5 and ${\displaystyle a}$=2.0. For ${\displaystyle S=125348=1\;1110\;1001\;1010\;0100_{2}=1.1110\;1001\;1010\;0100_{2}\times 2^{16}\,}$, the binary approximation gives ${\displaystyle {\sqrt {S}}\approx (0.5+0.5\cdot a)\cdot 2^{8}=1.0111\;0100\;1101\;0010_{2}\cdot 1\;0000\;0000_{2}=1.456\cdot 256=372.8}$. ${\displaystyle {\sqrt {125348}}=354.0}$, so the estimate has an absolute error of 19 and relative error of 5.3%. The relative error is a little less than 1/2⁴, so the estimate is good to 4+ bits. An estimate for ${\displaystyle a}$ good to 8 bits can be obtained by table lookup on the high 8 bits of ${\displaystyle a}$, remembering that the high bit is implicit in most floating point representations, and the bottom bit of the 8 should be rounded. The table is 256 bytes of precomputed 8-bit square root values. For example, for the index 11101101₂ representing 1.8515625₁₀, the entry is 10101110₂ representing 1.359375₁₀, the square root of 1.8515625₁₀ to 8-bit precision (2+ decimal digits). ## Babylonian method [Figure: semilog graphs comparing the speed of convergence of the Babylonian method for √100 with different initial guesses. Negative guesses converge to the negative root; values closer to the root converge faster, and all approximations are overestimates.]
Perhaps the first algorithm used for approximating ${\displaystyle {\sqrt {S}}}$ is known as the Babylonian method, despite there being no direct evidence, beyond informed conjecture, that the eponymous Babylonian mathematicians employed exactly this method.[1] The method is also known as Heron's method, after the first-century Greek mathematician Hero of Alexandria who gave the first explicit description of the method in his AD 60 work Metrica.[2] The basic idea is that if x is an overestimate to the square root of a non-negative real number S then S/x will be an underestimate, or vice versa, and so the average of these two numbers may reasonably be expected to provide a better approximation (though the formal proof of that assertion depends on the inequality of arithmetic and geometric means that shows this average is always an overestimate of the square root, as noted in the article on square roots, thus assuring convergence). This is equivalent to using Newton's method to solve ${\displaystyle x^{2}-S=0}$. More precisely, if x is our initial guess of ${\displaystyle {\sqrt {S}}}$ and ε is the error in our estimate such that S = (x + ε)², then we can expand the binomial and solve for ${\displaystyle \varepsilon ={\frac {S-x^{2}}{2x+\varepsilon }}\approx {\frac {S-x^{2}}{2x}},}$ since ${\displaystyle \varepsilon \ll x}$. Therefore, we can compensate for the error and update our old estimate as ${\displaystyle x+\varepsilon \approx x+{\frac {S-x^{2}}{2x}}={\frac {S+x^{2}}{2x}}={\frac {{\frac {S}{x}}+x}{2}}\equiv x_{\text{revised}}}$ Since the computed error was not exact, this becomes our next best guess. The process of updating is iterated until desired accuracy is obtained. This is a quadratically convergent algorithm, which means that the number of correct digits of the approximation roughly doubles with each iteration. It proceeds as follows:
1. Begin with an arbitrary positive starting value x0 (the closer to the actual square root of S, the better).
2. Let xn + 1 be the average of xn and S/xn (using the arithmetic mean to approximate the geometric mean).
3. Repeat step 2 until the desired accuracy is achieved.
It can also be represented as: ${\displaystyle x_{0}\approx {\sqrt {S}},}$ ${\displaystyle x_{n+1}={\frac {1}{2}}\left(x_{n}+{\frac {S}{x_{n}}}\right),}$ ${\displaystyle {\sqrt {S}}=\lim _{n\to \infty }x_{n}.}$ This algorithm works equally well in the p-adic numbers, but cannot be used to identify real square roots with p-adic square roots; one can, for example, construct a sequence of rational numbers by this method that converges to +3 in the reals, but to −3 in the 2-adics.
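A minimal sketch of this iteration in Python; the stopping tolerance and the default seed below are illustrative choices, not part of the classical method:

def babylonian_sqrt(S, x0=None, tol=1e-12):
    # Repeatedly replace x by the average of x and S/x.
    if S < 0:
        raise ValueError("S must be non-negative")
    if S == 0:
        return 0.0
    x = float(x0) if x0 is not None else float(S)  # any positive seed converges
    while abs(x * x - S) > tol * S:
        x = 0.5 * (x + S / x)  # x_{n+1} = (x_n + S/x_n) / 2
    return x

print(babylonian_sqrt(125348))       # ~354.045
print(babylonian_sqrt(125348, 600))  # same limit, from the seed used in the example below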
### Example To calculate √S, where S = 125348, to six significant figures, use the rough estimation method above to get {\displaystyle {\begin{aligned}{\begin{array}{rlll}x_{0}&=6\cdot 10^{2}&&=600.000\\[0.3em]x_{1}&={\frac {1}{2}}\left(x_{0}+{\frac {S}{x_{0}}}\right)&={\frac {1}{2}}\left(600.000+{\frac {125348}{600.000}}\right)&=404.457\\[0.3em]x_{2}&={\frac {1}{2}}\left(x_{1}+{\frac {S}{x_{1}}}\right)&={\frac {1}{2}}\left(404.457+{\frac {125348}{404.457}}\right)&=357.187\\[0.3em]x_{3}&={\frac {1}{2}}\left(x_{2}+{\frac {S}{x_{2}}}\right)&={\frac {1}{2}}\left(357.187+{\frac {125348}{357.187}}\right)&=354.059\\[0.3em]x_{4}&={\frac {1}{2}}\left(x_{3}+{\frac {S}{x_{3}}}\right)&={\frac {1}{2}}\left(354.059+{\frac {125348}{354.059}}\right)&=354.045\\[0.3em]x_{5}&={\frac {1}{2}}\left(x_{4}+{\frac {S}{x_{4}}}\right)&={\frac {1}{2}}\left(354.045+{\frac {125348}{354.045}}\right)&=354.045\end{array}}\end{aligned}}} Therefore, √125348 ≈ 354.045. ### Convergence Suppose that x0 > 0 and S > 0. Then for any natural number n, xn > 0. Let the relative error in xn be defined by ${\displaystyle \varepsilon _{n}={\frac {x_{n}}{\sqrt {S}}}-1>-1}$ and thus ${\displaystyle x_{n}={\sqrt {S}}\cdot (1+\varepsilon _{n}).}$ Then it can be shown that ${\displaystyle \varepsilon _{n+1}={\frac {\varepsilon _{n}^{2}}{2(1+\varepsilon _{n})}}\geq 0.}$ And thus that ${\displaystyle \varepsilon _{n+2}\leq \min \left\{{\frac {\varepsilon _{n+1}^{2}}{2}},{\frac {\varepsilon _{n+1}}{2}}\right\}}$ and consequently that convergence is assured, and quadratic. #### Worst case for convergence If using the rough estimate above with the Babylonian method, then the least accurate cases in ascending order are as follows: {\displaystyle {\begin{aligned}S&=1;&x_{0}&=2;&x_{1}&=1.250;&\varepsilon _{1}&=0.250.\\S&=10;&x_{0}&=2;&x_{1}&=3.500;&\varepsilon _{1}&<0.107.\\S&=10;&x_{0}&=6;&x_{1}&=3.833;&\varepsilon _{1}&<0.213.\\S&=100;&x_{0}&=6;&x_{1}&=11.333;&\varepsilon _{1}&<0.134.\end{aligned}}} Thus in any case, ${\displaystyle \varepsilon _{1}\leq 2^{-2}.\,}$ ${\displaystyle \varepsilon _{2}<2^{-5}<10^{-1}.\,}$ ${\displaystyle \varepsilon _{3}<2^{-11}<10^{-3}.\,}$ ${\displaystyle \varepsilon _{4}<2^{-23}<10^{-6}.\,}$ ${\displaystyle \varepsilon _{5}<2^{-47}<10^{-14}.\,}$ ${\displaystyle \varepsilon _{6}<2^{-95}<10^{-28}.\,}$ ${\displaystyle \varepsilon _{7}<2^{-191}<10^{-57}.\,}$ ${\displaystyle \varepsilon _{8}<2^{-383}<10^{-115}.\,}$ Rounding errors will slow the convergence. It is recommended to keep at least one extra digit beyond the desired accuracy of the xn being calculated to minimize round off error. ## Bakhshali method This method for finding an approximation to a square root was described in an ancient Indian mathematical manuscript called the Bakhshali manuscript. It is equivalent to two iterations of the Babylonian method beginning with x0. Thus, the algorithm is quartically convergent, which means that the number of correct digits of the approximation roughly quadruples with each iteration.[3] The original presentation, using modern notation, is as follows: To calculate ${\displaystyle {\sqrt {S}}}$, let x₀² be the initial approximation to S. Then, successively iterate as: {\displaystyle {\begin{aligned}a_{n}&={\frac {S-x_{n}^{2}}{2x_{n}}},\\b_{n}&=x_{n}+a_{n},\\x_{n+1}&=b_{n}-{\frac {a_{n}^{2}}{2b_{n}}}=(x_{n}+a_{n})-{\frac {a_{n}^{2}}{2(x_{n}+a_{n})}}.\end{aligned}}} This can be used to construct a rational approximation to the square root by beginning with an integer.
If x0 = N is an integer chosen so N² is close to S, and d = S − N² is the difference whose absolute value is minimized, then the first iteration can be written as: ${\displaystyle {\sqrt {S}}\approx N+{\frac {d}{2N}}-{\frac {d^{2}}{8N^{3}+4Nd}}={\frac {8N^{4}+8N^{2}d+d^{2}}{8N^{3}+4Nd}}={\frac {N^{4}+6N^{2}S+S^{2}}{4N^{3}+4NS}}={\frac {N^{2}(N^{2}+6S)+S^{2}}{4N(N^{2}+S)}}.}$ The Bakhshali method can be generalized to the computation of an arbitrary root, including fractional roots.[4] ### Example Using the same example as given with the Babylonian method, let ${\displaystyle S=125348.}$ Then, the first iteration gives {\displaystyle {\begin{aligned}x_{0}&=600\\a_{1}&={\frac {125348-600^{2}}{2\times 600}}&&=&-195.543\\b_{1}&=600+(-195.543)&&=&404.456\\x_{1}&=404.456-{\frac {(-195.543)^{2}}{2\times 404.456}}&&=&357.186\end{aligned}}} Likewise the second iteration gives {\displaystyle {\begin{aligned}a_{2}&={\frac {125348-357.186^{2}}{2\times 357.186}}&&=&-3.126\\b_{2}&=357.186+(-3.126)&&=&354.060\\x_{2}&=354.06-{\frac {(-3.1269)^{2}}{2\times 354.06}}&&=&354.046\end{aligned}}} ## Digit-by-digit calculation This is a method to find each digit of the square root in a sequence. It is slower than the Babylonian method, but it has several advantages:
• It can be easier for manual calculations.
• Every digit of the root found is known to be correct, i.e., it does not have to be changed later.
• If the square root has an expansion that terminates, the algorithm terminates after the last digit is found. Thus, it can be used to check whether a given integer is a square number.
• The algorithm works for any base, and naturally, the way it proceeds depends on the base chosen.
Napier's bones include an aid for the execution of this algorithm. The shifting nth root algorithm is a generalization of this method. ### Basic principle First, consider the case of finding the square root of a number Z, that is the square of a two-digit number XY, where X is the tens digit and Y is the units digit. Specifically: Z = (10X + Y)² = 100X² + 20XY + Y² Now using the digit-by-digit algorithm, we first determine the value of X. X is the largest digit such that X² is less than or equal to Z from which we removed the two rightmost digits. In the next iteration, we pair the digits, multiply X by 2, and place it in the tens place while we try to figure out what the value of Y is. Since this is a simple case where the answer is a perfect square root XY, the algorithm stops here. The same idea can be extended to any arbitrary square root computation next. Suppose we are able to find the square root of N by expressing it as a sum of n positive numbers such that ${\displaystyle N=(a_{1}+a_{2}+a_{3}+\dotsb +a_{n})^{2}.}$ By repeatedly applying the basic identity ${\displaystyle (x+y)^{2}=x^{2}+2xy+y^{2},}$ the right-hand-side term can be expanded as {\displaystyle {\begin{aligned}&(a_{1}+a_{2}+a_{3}+\dotsb +a_{n})^{2}\\=&\,a_{1}^{2}+2a_{1}a_{2}+a_{2}^{2}+2(a_{1}+a_{2})a_{3}+a_{3}^{2}+\dotsb +a_{n-1}^{2}+2\left(\sum _{i=1}^{n-1}a_{i}\right)a_{n}+a_{n}^{2}\\=&\,a_{1}^{2}+[2a_{1}+a_{2}]a_{2}+[2(a_{1}+a_{2})+a_{3}]a_{3}+\dotsb +\left[2\left(\sum _{i=1}^{n-1}a_{i}\right)+a_{n}\right]a_{n}.\end{aligned}}} This expression allows us to find the square root by sequentially guessing the values of ${\displaystyle a_{i}}$s.
Suppose that the numbers ${\displaystyle a_{1},\ldots ,a_{m-1}}$ have already been guessed; then the m-th term of the right-hand side of the above summation is given by ${\displaystyle Y_{m}=[2P_{m-1}+a_{m}]a_{m},}$ where ${\displaystyle P_{m-1}=\sum _{i=1}^{m-1}a_{i}}$ is the approximate square root found so far. Now each new guess ${\displaystyle a_{m}}$ should satisfy the recursion

${\displaystyle X_{m}=X_{m-1}-Y_{m},}$

such that ${\displaystyle X_{m}\geq 0}$ for all ${\displaystyle 1\leq m\leq n,}$ with initialization ${\displaystyle X_{0}=N.}$ When ${\displaystyle X_{n}=0,}$ the exact square root has been found; if not, then the sum of the ${\displaystyle a_{i}}$s gives a suitable approximation of the square root, with ${\displaystyle X_{n}}$ being the approximation error.

For example, in the decimal number system we have

${\displaystyle N=(a_{1}\cdot 10^{n-1}+a_{2}\cdot 10^{n-2}+\cdots +a_{n-1}\cdot 10+a_{n})^{2},}$

where ${\displaystyle 10^{n-i}}$ are place holders and the coefficients ${\displaystyle a_{i}\in \{0,1,2,\ldots ,9\}}$. At any m-th stage of the square root calculation, the approximate root found so far, ${\displaystyle P_{m-1}}$, and the summation term ${\displaystyle Y_{m}}$ are given by

${\displaystyle P_{m-1}=\sum _{i=1}^{m-1}a_{i}\cdot 10^{n-i}=10^{n-m+1}\sum _{i=1}^{m-1}a_{i}\cdot 10^{m-i-1},}$

${\displaystyle Y_{m}=[2P_{m-1}+a_{m}\cdot 10^{n-m}]a_{m}\cdot 10^{n-m}=\left[20\sum _{i=1}^{m-1}a_{i}\cdot 10^{m-i-1}+a_{m}\right]a_{m}\cdot 10^{2(n-m)}.}$

Here, since the place value of ${\displaystyle Y_{m}}$ is an even power of 10, we only need to work with the pair of most significant digits of the remaining term ${\displaystyle X_{m-1}}$ at any m-th stage. The section below codifies this procedure.

A similar method can be used to compute the square root in number systems other than the decimal number system. For instance, finding the digit-by-digit square root in the binary number system is quite efficient, since the value of ${\displaystyle a_{i}}$ is searched from the smaller set of binary digits {0,1}. This makes the computation faster, since at each stage the value of ${\displaystyle Y_{m}}$ is either ${\displaystyle Y_{m}=0}$ for ${\displaystyle a_{m}=0}$ or ${\displaystyle Y_{m}=2P_{m-1}+1}$ for ${\displaystyle a_{m}=1}$. Having only two possible options for ${\displaystyle a_{m}}$ also makes deciding its value at the m-th stage easier: we only need to check whether ${\displaystyle Y_{m}\leq X_{m-1}}$ for ${\displaystyle a_{m}=1.}$ If this condition is satisfied, then we take ${\displaystyle a_{m}=1}$; if not, then ${\displaystyle a_{m}=0.}$ Also, the fact that multiplication by 2 is done by left bit-shifts helps in the computation.

### Decimal (base 10)

Write the original number in decimal form. The numbers are written similarly to the long division algorithm, and, as in long division, the root will be written on the line above. Now separate the digits into pairs, starting from the decimal point and going both left and right. The decimal point of the root will be above the decimal point of the square. One digit of the root will appear above each pair of digits of the square. Beginning with the left-most pair of digits, do the following procedure for each pair:
1. Starting on the left, bring down the most significant (leftmost) pair of digits not yet used (if all the digits have been used, write "00") and write them to the right of the remainder from the previous step (on the first step, there will be no remainder). In other words, multiply the remainder by 100 and add the two digits. This will be the current value c.
2. Find p, y and x, as follows:
 • Let p be the part of the root found so far, ignoring any decimal point. (For the first step, p = 0.)
 • Determine the greatest digit x such that ${\displaystyle x(20p+x)\leq c}$. We will use a new variable y = x(20p + x).
 • Note: 20p + x is simply twice p, with the digit x appended to the right.
 • Note: x can be found by guessing what c/(20·p) is and doing a trial calculation of y, then adjusting x upward or downward as necessary.
 • Place the digit ${\displaystyle x}$ as the next digit of the root, i.e., above the two digits of the square you just brought down. Thus the next p will be the old p times 10 plus x.
3. Subtract y from c to form a new remainder.
4. If the remainder is zero and there are no more digits to bring down, then the algorithm has terminated. Otherwise go back to step 1 for another iteration.

#### Examples

Find the square root of 152.2756.

              1  2. 3  4
             /
           \/  01 52.27 56

             01                  1*1 <= 1 < 2*2                  x = 1
             01                  y = x*x = 1*1 = 1
             00 52               22*2 <= 52 < 23*3               x = 2
             00 44               y = (20+x)*x = 22*2 = 44
                08 27            243*3 <= 827 < 244*4            x = 3
                07 29            y = (240+x)*x = 243*3 = 729
                   98 56         2464*4 <= 9856 < 2465*5         x = 4
                   98 56         y = (2460+x)*x = 2464*4 = 9856
                   00 00         Algorithm terminates: answer is 12.34

### Binary numeral system (base 2)

Inherent to digit-by-digit algorithms is a search and test step: find a digit, ${\displaystyle \,e}$, which, when added to the right of a current solution ${\displaystyle \,r}$, satisfies ${\displaystyle \,(r+e)\cdot (r+e)\leq x}$, where ${\displaystyle \,x}$ is the value for which a root is desired. Expanding: ${\displaystyle \,r\cdot r+2re+e\cdot e\leq x}$. The current value of ${\displaystyle \,r\cdot r}$ (or, usually, the remainder) can be incrementally updated efficiently when working in binary, as the value of ${\displaystyle \,e}$ will have a single bit set (a power of 2), and the operations needed to compute ${\displaystyle \,2\cdot r\cdot e}$ and ${\displaystyle \,e\cdot e}$ can be replaced with faster bit shift operations.

#### Example

Here we obtain the square root of 81, which when converted into binary gives 1010001. The numbers in the left column give the option between that number or zero to be used for subtraction at that stage of computation. The final answer is 1001, which in decimal is 9.

         1 0 0 1
        ---------
      √  1010001
         1
         1
         1
        ---------
         101
          01
           0
        --------
         1001
          100
            0
        --------
         10001
         10001
         10001
        -------
              0

This gives rise to simple computer implementations, such as this one in C:[5]

#include <assert.h>
#include <stdint.h>

int32_t isqrt(int32_t num) {
    assert(("sqrt input should be non-negative", num >= 0));
    int32_t res = 0;
    int32_t bit = 1 << 30; // The second-to-top bit is set.
                           // Same as ((unsigned) INT32_MAX + 1) / 2.
    // "bit" starts at the highest power of four <= the argument.
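    /* First loop (below): shrink "bit" until it is the largest power of
       four not exceeding num; it plays the role of e_m^2 = 4^m in the
       notation of the text.  Second loop: extract one result bit per
       iteration, keeping num = X_m and res = 2*r*e_m as invariants.
       (These comments restate the explanation given after the code.) */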
    while (bit > num)
        bit >>= 2;

    while (bit != 0) {
        if (num >= res + bit) {
            num -= res + bit;
            res = (res >> 1) + bit;
        } else
            res >>= 1;
        bit >>= 2;
    }
    return res;
}

Using the notation above, the variable "bit" corresponds to ${\displaystyle e_{m}^{2}}$ which is ${\displaystyle (2^{m})^{2}=4^{m}}$, the variable "res" is equal to ${\displaystyle 2re_{m}}$, and the variable "num" is equal to the current ${\displaystyle X_{m}}$, which is the difference of the number we want the square root of and the square of our current approximation with all bits set up to ${\displaystyle 2^{m+1}}$. Thus in the first loop, we want to find the highest power of 4 in "bit" to find the highest power of 2 in ${\displaystyle e}$. In the second loop, if num is greater than or equal to res + bit, then ${\displaystyle X_{m}}$ is greater than or equal to ${\displaystyle 2re_{m}+e_{m}^{2}}$ and we can subtract it. On the next line, we want to add ${\displaystyle e_{m}}$ to ${\displaystyle r}$, which means we want to add ${\displaystyle 2e_{m}^{2}}$ to ${\displaystyle 2re_{m}}$, so we want res = res + (bit << 1). Then we update ${\displaystyle e_{m}}$ to ${\displaystyle e_{m-1}}$ inside res, which involves dividing by 2, i.e., another shift to the right. Combining these two steps into one line leads to res = (res >> 1) + bit. If ${\displaystyle X_{m}}$ is not greater than or equal to ${\displaystyle 2re_{m}+e_{m}^{2}}$, then we just update ${\displaystyle e_{m}}$ to ${\displaystyle e_{m-1}}$ inside res and divide it by 2. Then we update ${\displaystyle e_{m}}$ to ${\displaystyle e_{m-1}}$ in bit by dividing it by 4. The final iteration of the second loop has bit equal to 1 and causes the update of ${\displaystyle e}$ to run one extra time, removing the factor of 2 from res and making it our integer approximation of the root.

Faster algorithms, in binary and decimal or any other base, can be realized by using lookup tables, in effect trading more storage space for reduced run time.[6]

## Exponential identity

Pocket calculators typically implement good routines to compute the exponential function and the natural logarithm, and then compute the square root of S using the identity found using the properties of logarithms (${\displaystyle \ln x^{n}=n\ln x}$) and exponentials (${\displaystyle e^{\ln x}=x}$):[citation needed]

${\displaystyle {\sqrt {S}}=e^{{\frac {1}{2}}\ln S}.}$

The denominator in the fraction corresponds to the nth root. In the case above the denominator is 2, hence the equation specifies that the square root is to be found. The same identity is used when computing square roots with logarithm tables or slide rules.

## A two-variable iterative method

This method is applicable for finding the square root of ${\displaystyle 0<S<3}$ and converges best for ${\displaystyle S\approx 1}$. This, however, is no real limitation for a computer-based calculation, as in base 2 floating point and fixed point representations, it is trivial to multiply ${\displaystyle S\,\!}$ by an integer power of 4, and therefore ${\displaystyle {\sqrt {S}}}$ by the corresponding power of 2, by changing the exponent or by shifting, respectively. Therefore, ${\displaystyle S\,\!}$ can be moved to the range ${\displaystyle {\frac {1}{2}}\leq S<2}$. Moreover, the following method does not employ general divisions, but only additions, subtractions, multiplications, and divisions by powers of two, which are again trivial to implement. A disadvantage of the method is that numerical errors accumulate, in contrast to single variable iterative methods such as the Babylonian one.
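The range reduction just described can be sketched with the standard C library functions frexp and ldexp. This is a minimal sketch, assuming IEEE-style binary floating point; the helper name range_reduce is invented for this illustration and is not part of the original description.

```c
#include <math.h>
#include <stdio.h>

/* Reduce S to s in [1/2, 2) with S = s * 4^k, so sqrt(S) = sqrt(s) * 2^k.
   "range_reduce" is a hypothetical helper name used only in this sketch. */
static double range_reduce(double S, int *k)
{
    int e;
    double m = frexp(S, &e);   /* S = m * 2^e with 0.5 <= m < 1          */
    int r = e & 1;             /* exponent parity (works for e < 0 too)  */
    *k = (e - r) / 2;          /* even part of the exponent, halved      */
    return ldexp(m, r);        /* m * 2^r lies in [0.5, 2)               */
}

int main(void)
{
    int k;
    double s = range_reduce(125348.0, &k);
    /* sqrt(125348) equals sqrt(s) * 2^k, with s in [0.5, 2). */
    printf("s = %f, k = %d, check = %f\n", s, k, sqrt(s) * ldexp(1.0, k));
    return 0;
}
```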
The initialization step of this method is

${\displaystyle a_{0}=S\,\!}$
${\displaystyle c_{0}=S-1\,\!}$

and the iterative steps read

${\displaystyle a_{n+1}=a_{n}-a_{n}c_{n}/2\,\!}$
${\displaystyle c_{n+1}=c_{n}^{2}(c_{n}-3)/4\,\!}$

Then, ${\displaystyle a_{n}\rightarrow {\sqrt {S}}}$ (while ${\displaystyle c_{n}\rightarrow 0}$). Note that the convergence of ${\displaystyle c_{n}\,\!}$, and therefore also of ${\displaystyle a_{n}\,\!}$, is quadratic. The proof of the method is rather easy. First, rewrite the iterative definition of ${\displaystyle c_{n}\,\!}$ as

${\displaystyle 1+c_{n+1}=(1+c_{n})(1-c_{n}/2)^{2}\,\!}$.

Then it is straightforward to prove by induction that

${\displaystyle S(1+c_{n})=a_{n}^{2}}$

and therefore the convergence of ${\displaystyle a_{n}\,\!}$ to the desired result ${\displaystyle {\sqrt {S}}}$ is ensured by the convergence of ${\displaystyle c_{n}\,\!}$ to 0, which in turn follows from ${\displaystyle -1<c_{0}<2}$. This method was developed around 1950 by M. V. Wilkes, D. J. Wheeler and S. Gill[7] for use on EDSAC, one of the first electronic computers.[8] The method was later generalized, allowing the computation of non-square roots.[9]

## Iterative methods for reciprocal square roots

The following are iterative methods for finding the reciprocal square root of S, which is ${\displaystyle 1/{\sqrt {S}}}$. Once it has been found, find ${\displaystyle {\sqrt {S}}}$ by simple multiplication: ${\displaystyle {\sqrt {S}}=S\cdot (1/{\sqrt {S}})}$. These iterations involve only multiplication, and not division. They are therefore faster than the Babylonian method. However, they are not stable. If the initial value is not close to the reciprocal square root, the iterations will diverge away from it rather than converge to it. It can therefore be advantageous to perform an iteration of the Babylonian method on a rough estimate before starting to apply these methods.

• Applying Newton's method to the equation ${\displaystyle (1/x^{2})-S=0}$ produces a method that converges quadratically using three multiplications per step (see the sketch after this list):

${\displaystyle x_{n+1}={\frac {x_{n}}{2}}\cdot (3-S\cdot x_{n}^{2})=x_{n}\cdot \left({\frac {3}{2}}-{\frac {S}{2}}\cdot x_{n}^{2}\right).}$

• Another iteration is obtained by Halley's method, which is the Householder's method of order two. This converges cubically, but involves five multiplications per iteration:[citation needed]

${\displaystyle y_{n}=S\cdot x_{n}^{2}}$, and
${\displaystyle x_{n+1}={\frac {x_{n}}{8}}\cdot (15-y_{n}\cdot (10-3\cdot y_{n}))=x_{n}\cdot \left({\frac {15}{8}}-y_{n}\cdot \left({\frac {10}{8}}-{\frac {3}{8}}\cdot y_{n}\right)\right)}$.

• If doing fixed-point arithmetic, the multiplication by 3 and division by 8 can be implemented using shifts and adds. If using floating-point, Halley's method can be reduced to four multiplications per iteration by precomputing ${\displaystyle {\sqrt {\frac {3}{8}}}S}$ and adjusting all the other constants to compensate:

${\displaystyle y_{n}={\sqrt {\frac {3}{8}}}S\cdot x_{n}^{2}}$, and
${\displaystyle x_{n+1}=x_{n}\cdot \left({\frac {15}{8}}-y_{n}\cdot \left({\sqrt {\frac {25}{6}}}-y_{n}\right)\right)}$.
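A minimal C sketch of the Newton reciprocal-square-root iteration from the first bullet above. The crude starting guess and the fixed iteration count are assumptions made for this example; a real implementation would seed the iteration from a table lookup or a bit-level estimate as discussed later.

```c
#include <stdio.h>

/* Newton iteration for 1/sqrt(S): x <- x * (3/2 - (S/2) * x^2).
   Converges quadratically when the starting guess is close enough. */
static double rsqrt_newton(double S, double x0, int iterations)
{
    double x = x0;
    for (int i = 0; i < iterations; i++)
        x = x * (1.5 - 0.5 * S * x * x);
    return x;
}

int main(void)
{
    double S = 2.0;
    double r = rsqrt_newton(S, 0.7, 5);   /* 0.7 is a crude guess near 1/sqrt(2) */
    printf("1/sqrt(2) ~ %.12f, sqrt(2) ~ %.12f\n", r, S * r);
    return 0;
}
```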
### Goldschmidt’s algorithm

Some computers use Goldschmidt's algorithm to simultaneously calculate ${\displaystyle {\sqrt {S}}}$ and ${\displaystyle 1/{\sqrt {S}}}$. Goldschmidt's algorithm finds ${\displaystyle {\sqrt {S}}}$ faster than Newton–Raphson iteration on a computer with a fused multiply–add instruction and either a pipelined floating point unit or two independent floating-point units.[10] The first way of writing Goldschmidt's algorithm begins

${\displaystyle b_{0}=S}$
${\displaystyle Y_{0}\approx 1/{\sqrt {S}}}$ (typically using a table lookup)
${\displaystyle y_{0}=Y_{0}}$
${\displaystyle x_{0}=Sy_{0}}$

and iterates

${\displaystyle b_{n+1}=b_{n}Y_{n}^{2}}$
${\displaystyle Y_{n+1}=(3-b_{n+1})/2}$
${\displaystyle x_{n+1}=x_{n}Y_{n+1}}$
${\displaystyle y_{n+1}=y_{n}Y_{n+1}}$

until ${\displaystyle b_{i}}$ is sufficiently close to 1, or a fixed number of iterations have been performed. The iterations converge to ${\displaystyle \lim _{n\to \infty }x_{n}={\sqrt {S}}}$, and ${\displaystyle \lim _{n\to \infty }y_{n}=1/{\sqrt {S}}}$. Note that it is possible to omit either ${\displaystyle x_{n}}$ or ${\displaystyle y_{n}}$ from the computation, and if both are desired then ${\displaystyle x_{n}=Sy_{n}}$ may be used at the end rather than computing it in each iteration.

A second form, using fused multiply–add operations, begins

${\displaystyle y_{0}\approx 1/{\sqrt {S}}}$ (typically using a table lookup)
${\displaystyle x_{0}=Sy_{0}}$
${\displaystyle h_{0}=y_{0}/2}$

and iterates

${\displaystyle r_{n}=0.5-x_{n}h_{n}}$
${\displaystyle x_{n+1}=x_{n}+x_{n}r_{n}}$
${\displaystyle h_{n+1}=h_{n}+h_{n}r_{n}}$

until ${\displaystyle r_{i}}$ is sufficiently close to 0, or a fixed number of iterations have been performed. This converges to ${\displaystyle \lim _{n\to \infty }x_{n}={\sqrt {S}}}$, and ${\displaystyle \lim _{n\to \infty }2h_{n}=1/{\sqrt {S}}}$.

## Taylor series

If N is an approximation to ${\displaystyle {\sqrt {S}}}$, a better approximation can be found by using the Taylor series of the square root function:

${\displaystyle {\sqrt {N^{2}+d}}=N\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n)!}{(1-2n)n!^{2}4^{n}}}{\frac {d^{n}}{N^{2n}}}=N\left(1+{\frac {d}{2N^{2}}}-{\frac {d^{2}}{8N^{4}}}+{\frac {d^{3}}{16N^{6}}}-{\frac {5d^{4}}{128N^{8}}}+\cdots \right)}$

As an iterative method, the order of convergence is equal to the number of terms used. With two terms, it is identical to the Babylonian method. With three terms, each iteration takes almost as many operations as the Bakhshali approximation, but converges more slowly.[citation needed] Therefore, this is not a particularly efficient way of calculation. To maximize the rate of convergence, choose N so that ${\displaystyle {\frac {|d|}{N^{2}}}\,}$ is as small as possible.

## Continued fraction expansion

Quadratic irrationals (numbers of the form ${\displaystyle {\frac {a+{\sqrt {b}}}{c}}}$, where a, b and c are integers), and in particular, square roots of integers, have periodic continued fractions. Sometimes what is desired is finding not the numerical value of a square root, but rather its continued fraction expansion, and hence its rational approximation. Let S be the positive number for which we are required to find the square root.
Then assuming a to be a number that serves as an initial guess and r to be the remainder term, we can write ${\displaystyle S=a^{2}+r.}$ Since we have ${\displaystyle S-a^{2}=({\sqrt {S}}+a)({\sqrt {S}}-a)=r}$, we can express the square root of S as

${\displaystyle {\sqrt {S}}=a+{\frac {r}{a+{\sqrt {S}}}}.}$

By applying this expression for ${\displaystyle {\sqrt {S}}}$ to the denominator term of the fraction, we have

${\displaystyle {\sqrt {S}}=a+{\frac {r}{a+(a+{\frac {r}{a+{\sqrt {S}}}})}}=a+{\frac {r}{2a+{\frac {r}{a+{\sqrt {S}}}}}}.}$

Compact notation: the numerator/denominator expansion for continued fractions (see above) is cumbersome to write as well as to embed in text formatting systems. Therefore, special notation has been developed to compactly represent the integer and repeating parts of continued fractions. One such convention is use of a lexical "dog leg" to represent the vinculum between numerator and denominator, which allows the fraction to be expanded horizontally instead of vertically:

${\displaystyle {\sqrt {S}}=a+{\frac {r|}{|2a}}+{\frac {r|}{|2a}}+{\frac {r|}{|2a}}+\cdots }$

Here, each vinculum is represented by three line segments, two vertical and one horizontal, separating ${\displaystyle r}$ from ${\displaystyle 2a}$. An even more compact notation which omits lexical devices takes a special form:

${\displaystyle [a;2a,2a,2a,...]}$

For repeating continued fractions (which all square roots have), the repetend is represented only once, with an overline to signify a non-terminating repetition of the overlined part:

${\displaystyle [a;{\overline {2a}}]}$

For √2, the value of ${\displaystyle a}$ is 1, so its representation is:

${\displaystyle [1;{\overline {2}}]}$

Proceeding this way, we get a generalized continued fraction for the square root as

${\displaystyle {\sqrt {S}}=a+{\cfrac {r}{2a+{\cfrac {r}{2a+{\cfrac {r}{2a+\ddots }}}}}}}$

The first step to evaluating such a fraction to obtain a root is to do numerical substitutions for the root of the number desired, and the number of denominators selected. For example, in canonical form, ${\displaystyle r}$ is 1 and for √2, ${\displaystyle a}$ is 1, so the numerical continued fraction for 3 denominators is:

${\displaystyle {\sqrt {2}}\approx 1+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2}}}}}}}$

Step 2 is to reduce the continued fraction from the bottom up, one denominator at a time, to yield a rational fraction whose numerator and denominator are integers. The reduction proceeds thus (taking the first three denominators):

${\displaystyle 1+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2}}}}}}=1+{\cfrac {1}{2+{\cfrac {1}{\frac {5}{2}}}}}}$
${\displaystyle =1+{\cfrac {1}{2+{\cfrac {2}{5}}}}=1+{\cfrac {1}{\frac {12}{5}}}}$
${\displaystyle =1+{\cfrac {5}{12}}={\frac {17}{12}}}$

Finally (step 3), divide the numerator by the denominator of the rational fraction to obtain the approximate value of the root:

${\displaystyle 17\div 12=1.42}$

rounded to three digits of precision. The actual value of √2 is 1.41 to three significant digits. The relative error is 0.17%, so the rational fraction is good to almost three digits of precision. Taking more denominators gives successively better approximations: four denominators yields the fraction ${\displaystyle {\frac {41}{29}}=1.4137}$, good to almost 4 digits of precision, etc. Usually, the continued fraction for a given square root is looked up rather than expanded in place, because it is tedious to expand it.
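The bottom-up reduction of steps 2 and 3 is mechanical, and can be sketched in C for the canonical expansion √S = a + r/(2a + r/(2a + ...)), with r = S − a². This is an illustrative sketch; the function name cf_sqrt is invented for this example.

```c
#include <stdio.h>

/* Evaluate a + r/(2a + r/(2a + ... )) with "depth" denominators,
   reducing from the bottom up as in steps 2-3 of the text. */
static double cf_sqrt(double a, double r, int depth)
{
    double tail = 2.0 * a;               /* innermost denominator */
    for (int i = 1; i < depth; i++)
        tail = 2.0 * a + r / tail;       /* fold in one more level */
    return a + r / tail;
}

int main(void)
{
    /* sqrt(2): a = 1, r = 2 - 1*1 = 1; three denominators give 17/12. */
    printf("%.6f\n", cf_sqrt(1.0, 1.0, 3));   /* 1.416667            */
    printf("%.6f\n", cf_sqrt(1.0, 1.0, 12));  /* close to 1.414214   */
    return 0;
}
```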
Continued fractions are available for at least the square roots of small integers and common constants. For an arbitrary decimal number, precomputed sources are likely to be useless. The following is a table of small rational fractions, called convergents, reduced from canonical continued fractions for the square roots of a few common constants:

| S | continued fraction | ~decimal | convergents |
|---|---|---|---|
| 2 | ${\displaystyle [1;{\overline {2}}]}$ | 1.41421 | ${\displaystyle {\frac {3}{2}},{\frac {7}{5}},{\frac {17}{12}},{\frac {41}{29}},{\frac {99}{70}},{\frac {239}{169}}}$ |
| 3 | ${\displaystyle [1;{\overline {1,2}}]}$ | 1.73205 | ${\displaystyle {\frac {2}{1}},{\frac {5}{3}},{\frac {7}{4}},{\frac {19}{11}},{\frac {26}{15}},{\frac {71}{41}},{\frac {97}{56}}}$ |
| 5 | ${\displaystyle [2;{\overline {4}}]}$ | 2.23607 | ${\displaystyle {\frac {9}{4}},{\frac {38}{17}},{\frac {161}{72}}}$ |
| 6 | ${\displaystyle [2;{\overline {2,4}}]}$ | 2.44949 | ${\displaystyle {\frac {5}{2}},{\frac {22}{9}},{\frac {49}{20}},{\frac {218}{89}}}$ |
| 10 | ${\displaystyle [3;{\overline {6}}]}$ | 3.16228 | ${\displaystyle {\frac {19}{6}},{\frac {117}{37}}}$ |
| ${\displaystyle {\sqrt {\pi }}}$ | ${\displaystyle [1;1,3,2,1,1,6...]}$ | 1.77245 | ${\displaystyle {\frac {2}{1}},{\frac {7}{4}},{\frac {16}{9}},{\frac {23}{13}},{\frac {39}{22}}}$ |
| ${\displaystyle {\sqrt {e}}}$ | ${\displaystyle [1;1,1,1,5,1,1...]}$ | 1.64872 | ${\displaystyle {\frac {2}{1}},{\frac {3}{2}},{\frac {8}{5}},{\frac {28}{17}},{\frac {33}{20}},{\frac {61}{37}}}$ |
| ${\displaystyle {\sqrt {\phi }}}$ | ${\displaystyle [1;3,1,2,11,3,7...]}$ | 1.27202 | ${\displaystyle {\frac {4}{3}},{\frac {5}{4}},{\frac {14}{11}}}$ |

Note: all convergents up to and including denominator 99 are listed. In general, the larger the denominator of a rational fraction, the better the approximation. It can also be shown that truncating a continued fraction yields a rational fraction that is the best approximation to the root of any fraction with denominator less than or equal to the denominator of that fraction; e.g., no fraction with a denominator less than or equal to 99 is as good an approximation to √2 as 140/99.

## Lucas sequence method

The Lucas sequence of the first kind Un(P,Q) is defined by the recurrence relations:

${\displaystyle U_{n}(P,Q)={\begin{cases}0&{\text{if }}n=0\\1&{\text{if }}n=1\\P\cdot U_{n-1}(P,Q)-Q\cdot U_{n-2}(P,Q)&{\text{otherwise}}\end{cases}}}$

and its characteristic equation is:

${\displaystyle x^{2}-P\cdot x+Q=0}$

It has the discriminant ${\displaystyle D=P^{2}-4Q}$ and the roots:

${\displaystyle {\begin{matrix}x_{1}={\frac {P+{\sqrt {D}}}{2}},&x_{2}={\frac {P-{\sqrt {D}}}{2}}\end{matrix}}}$

All of that yields the following positive value:

${\displaystyle \lim _{n\to \infty }{\frac {U_{n+1}}{U_{n}}}=x_{1}}$

So when we want ${\displaystyle {\sqrt {a}}}$, we can choose ${\displaystyle P=2}$ and ${\displaystyle Q=1-a}$, and then calculate ${\displaystyle x_{1}=1+{\sqrt {a}}}$ using ${\displaystyle U_{n+1}}$ and ${\displaystyle U_{n}}$ for large values of ${\displaystyle n}$.
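A direct C sketch of this idea (illustrative only; the name lucas_sqrt is invented here, and doubles are used because the terms grow quickly and would overflow integer types for large n):

```c
#include <stdio.h>

/* Estimate sqrt(a) from the Lucas sequence U_n(2, 1-a):
   U_{n+1}/U_n -> 1 + sqrt(a), so sqrt(a) ~ U_{n+1}/U_n - 1. */
static double lucas_sqrt(double a, int n)
{
    double U_prev = 0.0, U = 1.0;        /* U_0, U_1 */
    for (int i = 1; i <= n; i++) {
        double U_next = 2.0 * U - (1.0 - a) * U_prev;  /* P=2, Q=1-a */
        U_prev = U;
        U = U_next;
    }
    return U / U_prev - 1.0;             /* U_{n+1}/U_n - 1 */
}

int main(void)
{
    printf("%.9f\n", lucas_sqrt(2.0, 40));  /* approaches 1.414213562 */
    return 0;
}
```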
The most effective way to calculate ${\displaystyle U_{n+1}}$ and ${\displaystyle U_{n}}$ is by matrix powers:

${\displaystyle {\begin{bmatrix}U_{n}\\U_{n+1}\end{bmatrix}}={\begin{bmatrix}0&1\\-Q&P\end{bmatrix}}\cdot {\begin{bmatrix}U_{n-1}\\U_{n}\end{bmatrix}}={\begin{bmatrix}0&1\\-Q&P\end{bmatrix}}^{n}\cdot {\begin{bmatrix}U_{0}\\U_{1}\end{bmatrix}}}$

In summary:

${\displaystyle {\begin{bmatrix}0&1\\a-1&2\end{bmatrix}}^{n}\cdot {\begin{bmatrix}0\\1\end{bmatrix}}={\begin{bmatrix}U_{n}\\U_{n+1}\end{bmatrix}}}$

Then, as ${\displaystyle n\to \infty }$:

${\displaystyle {\sqrt {a}}={\frac {U_{n+1}}{U_{n}}}-1}$

## Approximations that depend on the floating point representation

A number is represented in a floating point format as ${\displaystyle m\times b^{p}}$, which is also called scientific notation. Its square root is ${\displaystyle {\sqrt {m}}\times b^{p/2}}$ and similar formulae would apply for cube roots and logarithms. On the face of it, this is no improvement in simplicity, but suppose that only an approximation is required: then just ${\displaystyle b^{p/2}}$ is good to an order of magnitude. Next, recognise that some powers, p, will be odd; thus for 3141.59 = 3.14159×10³, rather than deal with fractional powers of the base, multiply the mantissa by the base and subtract one from the power to make it even. The adjusted representation will become the equivalent of 31.4159×10², so that the square root will be √31.4159×10¹. If the integer part of the adjusted mantissa is taken, there can only be the values 1 to 99, and that could be used as an index into a table of 99 pre-computed square roots to complete the estimate. A computer using base sixteen would require a larger table, but one using base two would require only three entries: the possible bits of the integer part of the adjusted mantissa are 01 (the power being even so there was no shift, remembering that a normalised floating point number always has a non-zero high-order digit) or, if the power was odd, 10 or 11, these being the first two bits of the original mantissa. Thus, 6.25 = 110.01 in binary, normalised to 1.1001 × 2², an even power, so the paired bits of the mantissa are 01; while .625 = 0.101 in binary normalises to 1.01 × 2⁻¹, an odd power, so the adjustment is to 10.1 × 2⁻² and the paired bits are 10. Notice that the low order bit of the power is echoed in the high order bit of the pairwise mantissa. An even power has its low-order bit zero and the adjusted mantissa will start with 0, whereas for an odd power that bit is one and the adjusted mantissa will start with 1. Thus, when the power is halved, it is as if its low order bit is shifted out to become the first bit of the pairwise mantissa.

A table with only three entries could be enlarged by incorporating additional bits of the mantissa. However, with computers, rather than calculate an interpolation into a table, it is often better to find some simpler calculation giving equivalent results. Everything now depends on the exact details of the format of the representation, plus what operations are available to access and manipulate the parts of the number. For example, Fortran offers an EXPONENT(x) function to obtain the power. Effort expended in devising a good initial approximation is to be recouped by thereby avoiding the additional iterations of the refinement process that would have been needed for a poor approximation. Since these are few (one iteration requires a divide, an add, and a halving), the constraint is severe.
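The three-entry binary table described above can be sketched in C. This is an illustrative sketch only: frexp and ldexp stand in for direct bit manipulation of the representation, and the helper name sqrt_estimate is invented for this example.

```c
#include <math.h>
#include <stdio.h>

/* Rough initial estimate via the three-entry table: write
   S = M * 4^k with 1 <= M < 4, then sqrt(S) ~ table[floor(M)] * 2^k,
   where the table holds sqrt(1), sqrt(2), sqrt(3) for adjusted
   mantissas whose leading paired bits are 01, 10, 11. */
static double sqrt_estimate(double S)
{
    static const double table[4] = { 0.0, 1.0, 1.4142, 1.7321 };
    int e;
    double m = frexp(S, &e);       /* S = m * 2^e, 0.5 <= m < 1       */
    int r = (e & 1) ? 1 : 2;       /* shift so the power becomes even */
    double M = ldexp(m, r);        /* M = m * 2^r lies in [1, 4)      */
    int k = (e - r) / 2;
    return table[(int)M] * ldexp(1.0, k);
}

int main(void)
{
    printf("%g %g %g\n",
           sqrt_estimate(6.25),      /* ~2    (true 2.5)    */
           sqrt_estimate(0.625),     /* ~0.71 (true ~0.79)  */
           sqrt_estimate(125348.0)); /* ~256  (true ~354)   */
    return 0;
}
```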
Many computers follow the IEEE (or sufficiently similar) representation, and a very rapid approximation to the square root can be obtained for starting Newton's method. The technique that follows is based on the fact that the floating point format (in base two) approximates the base-2 logarithm. That is,

${\displaystyle \log _{2}(m\times 2^{p})=p+\log _{2}(m)}$

So for a 32-bit single precision floating point number in IEEE format (where, notably, the power has a bias of 127 added for the represented form) you can get the approximate logarithm by interpreting its binary representation as a 32-bit integer, scaling it by ${\displaystyle 2^{-23}}$, and removing a bias of 127, i.e.

${\displaystyle x_{\text{int}}\cdot 2^{-23}-127\approx \log _{2}(x).}$

For example, 1.0 is represented by the hexadecimal number 0x3F800000, which would represent ${\displaystyle 1065353216=127\cdot 2^{23}}$ if taken as an integer. Using the formula above you get ${\displaystyle 1065353216\cdot 2^{-23}-127=0}$, as expected from ${\displaystyle \log _{2}(1.0)}$. In a similar fashion you get 0.5 from 1.5 (0x3FC00000). To get the square root, divide the logarithm by 2 and convert the value back. The following program demonstrates the idea. Note that the exponent's lowest bit is intentionally allowed to propagate into the mantissa. One way to justify the steps in this program is to assume ${\displaystyle b}$ is the exponent bias and ${\displaystyle n}$ is the number of explicitly stored bits in the mantissa and then show that

${\displaystyle (((x_{\text{int}}/2^{n}-b)/2)+b)\cdot 2^{n}=(x_{\text{int}}-2^{n})/2+((b+1)/2)\cdot 2^{n}.}$

/* Assumes that float is in the IEEE 754 single precision floating point format */
#include <stdint.h>

float sqrt_approx(float z) {
    union { float f; uint32_t i; } val = {z};  /* Convert type, preserving bit pattern */
    /*
     * To justify the following code, prove that
     *
     * ((((val.i / 2^m) - b) / 2) + b) * 2^m = ((val.i - 2^m) / 2) + ((b + 1) / 2) * 2^m)
     *
     * where
     *
     * b = exponent bias
     * m = number of mantissa bits
     */
    val.i -= 1 << 23;   /* Subtract 2^m. */
    val.i >>= 1;        /* Divide by 2. */
    val.i += 1 << 29;   /* Add ((b + 1) / 2) * 2^m. */
    return val.f;       /* Interpret again as float */
}

The three mathematical operations forming the core of the above function can be expressed in a single line. An additional adjustment can be added to reduce the maximum relative error. So, the three operations, not including the cast, can be rewritten as

val.i = (1 << 29) + (val.i >> 1) - (1 << 22) + a;

where a is a bias for adjusting the approximation errors. For example, with a = 0 the results are accurate for even powers of 2 (e.g. 1.0), but for other numbers the results will be slightly too big (e.g. 1.5 for 2.0 instead of 1.414... with 6% error). With a = −0x4B0D2, the maximum relative error is minimized to ±3.5%. If the approximation is to be used for an initial guess for Newton's method to the equation ${\displaystyle (1/x^{2})-S=0}$, then the reciprocal form shown in the following section is preferred.

### Reciprocal of the square root

A variant of the above routine, which computes the reciprocal of the square root, i.e., ${\displaystyle x^{-{1 \over 2}}}$ instead, was written by Greg Walsh and is included below. The integer-shift approximation produces a relative error of less than 4%, and the error drops further to 0.15% with one iteration of Newton's method on the following line.[11] In computer graphics it is a very efficient way to normalize a vector.
float invSqrt(float x) {
    float xhalf = 0.5f * x;
    union { float x; int i; } u;
    u.x = x;
    u.i = 0x5f375a86 - (u.i >> 1);
    /* The next line can be repeated any number of times to increase accuracy */
    u.x = u.x * (1.5f - xhalf * u.x * u.x);
    return u.x;
}

Some VLSI hardware implements inverse square root using a second-degree polynomial estimation followed by a Goldschmidt iteration.[12]

## Negative or complex square

If S < 0, then its principal square root is

${\displaystyle {\sqrt {S}}={\sqrt {\vert S\vert }}\,\,i\,.}$

If S = a+bi where a and b are real and b ≠ 0, then its principal square root is

${\displaystyle {\sqrt {S}}={\sqrt {\frac {\vert S\vert +a}{2}}}\,+\,\operatorname {sgn}(b){\sqrt {\frac {\vert S\vert -a}{2}}}\,\,i\,.}$

This can be verified by squaring the root.[13][14] Here

${\displaystyle \vert S\vert ={\sqrt {a^{2}+b^{2}}}}$

is the modulus of S. The principal square root of a complex number is defined to be the root with the non-negative real part.

## Notes

1. ^ In addition to the principal square root, there is a negative square root equal in magnitude but opposite in sign to the principal square root, except for zero, which has double square roots of zero.
2. ^ The factors two and six are used because they approximate the geometric means of the lowest and highest possible values with the given number of digits: ${\displaystyle {\sqrt {{\sqrt {1}}\cdot {\sqrt {10}}}}={\sqrt[{4}]{10}}\approx 1.78\,}$ and ${\displaystyle {\sqrt {{\sqrt {10}}\cdot {\sqrt {100}}}}={\sqrt[{4}]{1000}}\approx 5.62\,}$.
3. ^ The unrounded estimate has a maximum absolute error of 2.65 at 100 and a maximum relative error of 26.5% at y = 1, 10 and 100.
4. ^ If the number is exactly halfway between two squares, like 30.5, guess the higher number, which is 6 in this case.
5. ^ This is incidentally the equation of the tangent line to y = x² at y = 1.

## References

1. ^ Fowler, David; Robson, Eleanor (1998). "Square Root Approximations in Old Babylonian Mathematics: YBC 7289 in Context". Historia Mathematica. 25 (4): 376. doi:10.1006/hmat.1998.2209.
2. ^ Heath, Thomas (1921). A History of Greek Mathematics, Vol. 2. Oxford: Clarendon Press. pp. 323–324.
3. ^ Bailey, David; Borwein, Jonathan (2012). "Ancient Indian Square Roots: An Exercise in Forensic Paleo-Mathematics" (PDF). American Mathematical Monthly. 119 (8). pp. 646–657. Retrieved 2017-09-14.
4. ^ "Bucking down to the Bakhshali manuscript". Simply Curious blog. 5 June 2018. Retrieved 2020-12-21.
5. ^ Fast integer square root by Mr. Woo's abacus algorithm (archived).
6. ^ Integer Square Root function.
7. ^ M. V. Wilkes, D. J. Wheeler and S. Gill, "The Preparation of Programs for an Electronic Digital Computer", Addison-Wesley, 1951.
8. ^ M. Campbell-Kelly, "Origin of Computing", Scientific American, September 2009.
9. ^ J. C. Gower, "A Note on an Iterative Method for Root Extraction", The Computer Journal 1(3):142–143, 1958.
10. ^ Markstein, Peter (November 2004). Software Division and Square Root Using Goldschmidt's Algorithms (PDF). 6th Conference on Real Numbers and Computers. Dagstuhl, Germany. CiteSeerX 10.1.1.85.9648.
11. ^ Fast Inverse Square Root by Chris Lomont.
12. ^ "High-Speed Double-Precision Computation of Reciprocal, Division, Square Root and Inverse Square Root" by José-Alejandro Piñeiro and Javier Díaz Bruguera, 2002 (abstract).
13. ^ Abramowitz, Milton; Stegun, Irene A. (1964). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Courier Dover Publications. p. 17. ISBN 978-0-486-61272-0. Section 3.7.26, p. 17.
14.
^ Cooke, Roger (2008). Classical Algebra: Its Nature, Origins, and Uses. John Wiley and Sons. p. 59. ISBN 978-0-470-25952-8. Extract: page 59.
2021-06-21 04:39:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 287, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8486268520355225, "perplexity": 573.4162901985396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488262046.80/warc/CC-MAIN-20210621025359-20210621055359-00045.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/jimo.2021037
# American Institute of Mathematical Sciences

Journal of Industrial & Management Optimization. doi: 10.3934/jimo.2021037

## Optimal and heuristic algorithms for the multi-objective vehicle routing problem with drones for military surveillance operations

Department of Mechanical and Systems Engineering, Korea Military Academy, Hwarang-Ro, Nowon-Gu, Seoul, 01805, Republic of Korea

* Corresponding author: Namsu Ahn, Soochan Kim

Received: May 2020. Revised: October 2020. Published: March 2021.

During military operations, obtaining information on remote battlefields is essential, and recent advances in unmanned aerial vehicle technology have led to the use of drones to view battlefields. However, the use of drones in military operations introduces the new problem of determining travel routes for the drones. This type of problem is similar to the well-known classical vehicle routing problem, but the main difference is its objective function. For maintenance purposes, a minimized difference in travel distances is preferred. In addition, obtaining a shorter route in terms of travel distance is important. In this research, we propose a mathematical formulation and an optimal algorithm for the problem and suggest a simple heuristic to handle large instances of the problem. The computational results indicate that this algorithm can solve real-scale instances of the problem, and the heuristic exhibits good performance even when the instance size of the problem is large.

Citation: Namsu Ahn, Soochan Kim. Optimal and heuristic algorithms for the multi-objective vehicle routing problem with drones for military surveillance operations. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2021037

Figure captions (images not reproduced here): "20 years old male population in ROK by year (unit: 1000)"; "Travel routes when four surveillance targets and one base station are given with two drones"; "Example of the graph modification procedure"; "The procedure of the optimal algorithm for the mVRPD"; "Surveillance area which is composed of seven surveillance targets".

Performance comparison between drone operating personnel and the algorithm:

| | Drone operating personnel | Optimal algorithm | Comparison |
|---|---|---|---|
| Route for drone 1 | 0→1→2→7→0 | 0→2→3→5→6→0 | - |
| Route for drone 2 | 0→3→4→5→6→0 | 0→4→7→1→0 | - |
| Total distance | 20.4 | 19.8 | 3% lower |
| z_max − z_min | 3.1 | 0.6 | 81% lower |

Computational results when the number of drones is two (optimal algorithm vs. heuristic):

| \|V\| | Opt. Max. | Opt. Min. | Opt. Diff. | Opt. Total | Opt. Time | Heur. Max. | Heur. Min. | Heur. Diff. | Heur. Total | Heur. Time |
|---|---|---|---|---|---|---|---|---|---|---|
| 4 | 105.92 | 96.11 | 9.82 | 202.03 | 0.16 | 186.47 | 73.76 | 112.71 | 260.23 | 0.27 |
| 5 | 221.53 | 205.37 | 16.16 | 426.89 | 0.39 | 292.08 | 98.51 | 193.57 | 390.59 | 0.36 |
| 6 | 149.85 | 149.61 | 0.24 | 299.45 | 4.22 | 177.55 | 55.46 | 122.09 | 233.01 | 0.41 |
| 7 | 216.99 | 216.84 | 0.15 | 433.83 | 41.06 | 294.30 | 190.87 | 103.44 | 485.17 | 0.51 |
| 8 | 250.07 | 250.07 | 0.00 | 500.13 | 234.65 | 367.26 | 98.00 | 269.26 | 465.26 | 0.62 |
| 9 | 200.39 | 200.39 | 0.00 | 400.79 | 78.60 | 524.20 | 72.72 | 451.49 | 596.92 | 0.75 |
| 10 | 287.46 | 287.46 | 0.00 | 574.93 | 8.82 | 321.32 | 266.22 | 55.10 | 587.54 | 0.89 |

Computational results when the number of drones is three:

| \|V\| | Opt. Max. | Opt. Min. | Opt. Diff. | Opt. Total | Opt. Time | Heur. Max. | Heur. Min. | Heur. Diff. | Heur. Total | Heur. Time |
|---|---|---|---|---|---|---|---|---|---|---|
| 4 | 80.52 | 63.81 | 16.71 | 212.45 | 0.16 | 80.56 | 37.58 | 42.99 | 181.95 | 0.33 |
| 5 | 110.15 | 81.65 | 28.49 | 277.04 | 0.27 | 160.84 | 18.87 | 141.97 | 264.95 | 0.43 |
| 6 | 119.65 | 116.00 | 3.65 | 355.18 | 1.44 | 178.16 | 46.69 | 131.47 | 351.18 | 0.53 |
| 7 | 133.07 | 128.67 | 4.40 | 385.43 | 9.35 | 257.85 | 91.93 | 165.91 | 459.31 | 0.61 |
| 8 | 157.83 | 157.10 | 0.72 | 472.54 | 599.82 | 325.39 | 48.37 | 277.01 | 461.24 | 0.75 |
| 9 | (no solution within 10 minutes) | | | | | 236.44 | 159.86 | 76.58 | 573.02 | 1.33 |
| 10 | (no solution within 10 minutes) | | | | | 218.13 | 134.78 | 83.35 | 504.72 | 1.50 |

Computational results when the number of drones is four:

| \|V\| | Opt. Max. | Opt. Min. | Opt. Diff. | Opt. Total | Opt. Time | Heur. Max. | Heur. Min. | Heur. Diff. | Heur. Total | Heur. Time |
|---|---|---|---|---|---|---|---|---|---|---|
| 5 | 103.71 | 34.99 | 68.73 | 263.60 | 0.09 | 103.71 | 34.99 | 68.73 | 263.60 | 0.22 |
| 6 | 83.45 | 66.03 | 17.42 | 308.09 | 0.19 | 99.44 | 25.06 | 74.38 | 291.21 | 0.48 |
| 7 | 90.83 | 68.00 | 22.83 | 309.91 | 0.61 | 86.70 | 44.72 | 41.98 | 272.22 | 0.59 |
| 8 | 97.02 | 93.38 | 3.63 | 381.29 | 2.72 | 190.32 | 51.23 | 139.10 | 484.29 | 0.68 |
| 9 | (no solution within 10 minutes) | | | | | 229.64 | 95.52 | 134.12 | 676.03 | 1.35 |
| 10 | (no solution within 10 minutes) | | | | | 243.52 | 59.23 | 184.29 | 635.55 | 1.55 |
2021-04-19 18:26:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5877421498298645, "perplexity": 3425.203200291342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038916163.70/warc/CC-MAIN-20210419173508-20210419203508-00270.warc.gz"}
https://www.r-bloggers.com/2018/07/writing-pipe-friendly-functions/
Pipes have been a fundamental aspect of computer programming for many decades. In short, the semantics of pipes can be thought of as taking the output from the left-hand side and passing it as input to the right-hand side. For example, in a linux shell, you might cat example.txt | uniq | sort to take the contents of a text file, then take one copy of each row, then sort those remaining rows. | is a common, but not universal, pipe operator and, on U.S. Qwerty keyboards, is found above the backslash key: \.

Languages that don't begin by supporting pipes often eventually implement some version of them. In R, the magrittr package introduced the %>% infix operator as a pipe operator; it is most often pronounced as "then". For example, "take the mtcars data.frame, THEN take the head of it, THEN..." and so on.

For a function to be pipe friendly, it should at least take a data object (often named .data) as its first argument and return an object of the same type (possibly even the same, unaltered object). This contract ensures that your pipe-friendly function can exist in the middle of a piped workflow, accepting the input from its left-hand side and passing along output to its right-hand side.

library(magrittr)

custom_function <- function(.data) {
  message(str(.data))
  .data
}

mtcars %>%
  custom_function() %>%
  head(10) %>%
  custom_function()

This will first display the structure of the 32 by 11 mtcars data.frame, then take the head(10) of mtcars and display the structure of that 10 by 11 reduced version, ultimately returning the reduced version, which is, by default in R, printed to the console.

The dplyr package in R introduces the notion of a grouped data.frame. For example, in the mtcars data, there is a cyl parameter that classifies each observation as a 4, 6, or 8 cylinder vehicle. You might want to process each of these groups of rows separately, i.e., process all the 4 cylinder vehicles together, then all the 6 cylinder, then all the 8 cylinder:

library(dplyr)

mtcars %>%
  group_by(cyl) %>%
  tally()

Note that dplyr re-exports the magrittr pipe operator, so it is not necessary to attach both dplyr and magrittr explicitly; attaching dplyr will usually suffice.

In order to make my custom function group-aware, I need to check the incoming .data object to see whether it is a grouped data.frame. If it is, then I can use dplyr's do() function to call my custom function on each subset of the data. Here, the (.) notation denotes the subset of .data being handed to custom_function at each invocation.

library(dplyr)

custom_function <- function(.data) {
  if (dplyr::is_grouped_df(.data)) {
    return(dplyr::do(.data, custom_function(.)))
  }
  message(str(.data))
  .data
}

mtcars %>% custom_function()

mtcars %>% group_by(cyl) %>% custom_function()

In these examples, I've messaged some metadata to the console, but your custom functions can do any work they like: create, plot, and save ggplots; compute statistics; generate log files; and so on. I usually include the R three-dots parameter, ..., to allow additional parameters to be passed into the function.

custom_function <- function(.data, ...) {
  if (dplyr::is_grouped_df(.data)) {
    return(dplyr::do(.data, custom_function(., ...)))
  }
  message(str(.data))
  .data
}
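A short usage sketch of that final version (added for illustration; the label argument is hypothetical and not from the original post):

```r
library(dplyr)

custom_function <- function(.data, label = "data", ...) {
  if (dplyr::is_grouped_df(.data)) {
    return(dplyr::do(.data, custom_function(., label = label, ...)))
  }
  message(label, ": ", nrow(.data), " rows")  # any per-group work goes here
  .data
}

# Emits one message per cyl group (11, 7, and 14 rows respectively)
mtcars %>% group_by(cyl) %>% custom_function(label = "cars")
```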
2021-06-19 09:49:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48826804757118225, "perplexity": 2564.2152589670713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487647232.60/warc/CC-MAIN-20210619081502-20210619111502-00255.warc.gz"}
http://mathoverflow.net/feeds/question/76573
Which bundles does the character variety parameterize? - MathOverflow

Question (John Mangual, 2011-09-27):

For any Riemann surface with punctures $C$, and Lie group $G$, the character variety is the space of maps $\mathrm{Hom}(\pi_1(C), G)$.

I know that if $G = S_n$ (not a Lie group), then $\mathrm{Hom}(\pi_1(C), S_n)//S_n$ parametrizes branched covers of $C$ (see http://www.math.ucdavis.edu/~osserman/rfg/290W/branched-covers.pdf). Here $S_n$ acts by conjugation (permuting the various copies of $C$).

If $G = \mathrm{GL}(n,\mathbb{C})$, is $\mathrm{Hom}(\pi_1(C), G)$ parameterizing vector bundles over $C$? What is the equivalence relation here?

Answer (Sam Gunningham, 2011-09-28):

In general, the set $\mathrm{Hom}(\pi_1(C),G)/G$ (where $G$ acts by conjugation) naturally parameterizes $G$-local systems.

For example, if $G=GL_n(\mathbb C)$, these are just ordinary local systems of vector spaces: a map from $\pi_1(C)$ to $GL_n(\mathbb C)$ describes the monodromy around loops in $C$, and conjugate maps correspond to isomorphic local systems. If $G=U(n)$, the character variety naturally parametrizes unitary local systems (i.e. the fibres have an inner product such that the monodromy is unitary).

As Mike comments, there are various theorems generally known as non-abelian Hodge theory which relate these topological objects to holomorphic objects. For example, on a closed curve, the Narasimhan-Seshadri theorem gives a bijection between isomorphism classes of unitary local systems and degree 0 holomorphic vector bundles (with some stability condition).

Similarly, there is a bijection between (iso classes of) all local systems and degree 0 Higgs bundles (with stability conditions).

These can be thought of as giving diffeomorphisms between the character variety and the moduli spaces which naturally parameterize these objects (note that for $G=GL_n(\mathbb C)$, the character variety has its own complex structure, which does not pull back to the natural complex structure on the moduli of Higgs bundles under these diffeomorphisms).

There are similar results when the curve $C$ is not compact (i.e. has punctures); you have to be a bit more careful, and you may want to include extra data like a filtration of the bundle at the punctures, and constrain the monodromy around the punctures to lie in a particular conjugacy class.

I think this is discussed in the appendix to Wells' Differential Analysis on Complex Manifolds (written by Oscar Garcia-Prada), as well as in the papers of Narasimhan-Seshadri, Hitchin, Donaldson, Corlette, Simpson etc.
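For reference, the standard presentation behind these spaces (textbook background, added for context rather than quoted from the thread): for a closed orientable surface $C$ of genus $g$,

$\pi_1(C) = \langle a_1, b_1, \ldots, a_g, b_g \mid \textstyle\prod_{i=1}^{g} [a_i, b_i] = 1 \rangle,$

so a representation is a tuple $(A_1, B_1, \ldots, A_g, B_g) \in G^{2g}$ satisfying $\prod_{i=1}^{g} [A_i, B_i] = I$, and the character variety is the quotient of this solution set by simultaneous conjugation by $G$.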
2013-05-19 04:24:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9495190978050232, "perplexity": 793.3374765352598}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383259/warc/CC-MAIN-20130516092623-00072-ip-10-60-113-184.ec2.internal.warc.gz"}
https://achievethecore.org/page/618/cup-of-rice
# Cup of Rice

Author: Illustrative Mathematics | 09/08/13 | Adjusted: 08/01/18

Mathematically:

• Addresses standards: 6.NS.A.1, 5.NF.B.7, and MP.3
• Involves concepts, procedure, and application of fraction division – all required by standard 6.NS.A.1
• Devotes attention to a mathematically important case (dividend equal to 1)
• Builds on fraction division work from fifth grade (see 5.NF.B.7)
• Engages students in constructing viable arguments and critiquing the arguments of others (MP.3)

In the classroom:

• Uses visual models to support understanding
• Allows for individual or group work
• Encourages students to share their developing thinking

This task was designed to include specific features that support access for all students and align to best practice for English Language Learner (ELL) instruction. This lesson aligns to ELL best practice in the following ways:

• Provides opportunities for students to practice and refine their use of mathematical language.
• Allows for whole class, small group, and paired discussion for the purpose of practicing with mathematical concepts and language.
• Elicits evidence of thinking both verbally and in written form.
• Includes a mathematical routine that reflects best practices for supporting English Language Learners in accessing mathematical concepts.
• Provides students with support in negotiating written word problems through multiple reads and/or multi-modal interactions with the problem.

Making the Shifts – How does this task exemplify the instructional Shifts required by CCSSM?

• Focus: Belongs to the major work of sixth grade
• Coherence: Addresses the culminating standard in the progression of fraction operations; prepares for rational arithmetic in grade 7
• Rigor: Conceptual Understanding – primary in this task; Procedural Skill and Fluency – secondary in this task; Application – primary in this task

Tonya and Chrissy are trying to understand the following story problem for 1 ÷ $\frac{2}{3}$:

There is $\frac{2}{3}$ cup of rice in one serving of rice. I ate 1 cup of rice. How many servings of rice did I eat?

Tonya says, "One cup of rice contains a $\frac{2}{3}$-cup serving plus an additional $\frac{1}{3}$ cup of rice, so the answer should be $1\frac{1}{3}$ servings."

Chrissy says, "I heard someone say that the answer is $\frac{3}{2}$ or $1\frac{1}{2}$ servings."

Is Tonya correct or incorrect? Explain your reasoning. Support your explanation using this diagram.

Illustrative Mathematics Commentary and Solution

Commentary: One common mistake students make when dividing fractions using visuals is confusing the remainder with the fractional part of a mixed-number answer. In this problem, $\frac{1}{3}$ is the remainder with units "cups of rice", while $\frac{1}{2}$ has units "servings", which is what the problem is asking for.

Solution: In Tonya's solution of $1\frac{1}{3}$, she correctly notices that there is one $\frac{2}{3}$-cup serving of rice in $1$ cup, and there is $\frac{1}{3}$ cup of rice left over. But she is mixing up the quantities of servings and cups in her answer. The question becomes: how many servings is $\frac{1}{3}$ cup of rice? The answer is "$\frac{1}{3}$ cup of rice is $\frac{1}{2}$ of a serving."
It would be correct to say, "There is one serving of rice with $\frac{1}{3}$ cup of rice left over," but to interpret the quotient $1\frac{1}{2}$, the units for the $1$ and the units for the $\frac{1}{2}$ must be the same: there are $1\frac{1}{2}$ servings in $1$ cup of rice if each serving is $\frac{2}{3}$ cup. The quotient chosen for this problem, $1 ÷ \frac{2}{3} = \frac{3}{2}$, sheds light on the fact that dividing is multiplying by the reciprocal. Once students understand a quotient like $1 ÷ \frac{2}{3} = \frac{3}{2}$, they can think about a problem like $\frac{3}{4} ÷ \frac{2}{3}$ by taking $\frac{3}{4}$ of the known quotient $1 ÷ \frac{2}{3}$. That is, $\frac{3}{4} ÷ \frac{2}{3} = \frac{3}{4} × \left(1 ÷ \frac{2}{3}\right) = \frac{3}{4} × \frac{3}{2}$.
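To make the unit bookkeeping explicit, the reasoning above can be summarized in symbols (a restatement of the published solution, not part of the original task):

$1 \text{ cup} = \underbrace{\tfrac{2}{3} \text{ cup}}_{1 \text{ serving}} + \underbrace{\tfrac{1}{3} \text{ cup}}_{\text{left over}}, \qquad \tfrac{1}{3} ÷ \tfrac{2}{3} = \tfrac{1}{2} \text{ serving}, \qquad \text{so } 1 ÷ \tfrac{2}{3} = 1 + \tfrac{1}{2} = \tfrac{3}{2} \text{ servings.}$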
2019-12-09 22:18:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3826467990875244, "perplexity": 2566.9034263487015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540523790.58/warc/CC-MAIN-20191209201914-20191209225914-00248.warc.gz"}
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Proteins/Amino_Acids/Properties_of_Amino_Acids/Stereochemistry_of_Amino_Acids
# Stereochemistry of Amino Acids

With the exception of glycine, the 19 other common amino acids each carry four different substituents on the central tetrahedral alpha carbon (i.e. $$C_{\alpha}$$). The $$C_{\alpha}$$ is termed "chiral" to indicate that there are four different constituents and that the $$C_{\alpha}$$ is asymmetric. Since the $$C_{\alpha}$$ is asymmetric, there exist two possible, non-superimposable mirror images of the amino acids:

Exercise $$\PageIndex{1}$$

How are these two uniquely different structures in the figure above distinguished?

Answer: By the stereochemistry at the $$C_{\alpha}$$.

## The D, L system

Glyceraldehyde contains a chiral carbon, and therefore there are two enantiomers of this molecule. One is labeled the "L" form, and the other the "D" form. This is the frame of reference used to describe amino acid enantiomers as being either the "L" or "D" form.

Even though the two enantiomers would seem to be essentially equivalent to each other, all common amino acids are found as the "L" enantiomer in living systems.

When looking down the H–$$C_{\alpha}$$ bond towards the $$C_{\alpha}$$, there is a mnemonic to identify the L-enantiomer of amino acids (note: in this view the three functional groups are pointing away from you, not towards you; the H atom is omitted for clarity, but it would be in front of the $$C_{\alpha}$$).

Starting with the carbonyl functional group, and going clockwise around the $$C_{\alpha}$$ of the L-enantiomer, the three functional groups spell out the word CORN. If you follow the same instructions for the D-enantiomer, they spell out CONR (a silly, meaningless word).

## Optical Activity

Enantiomeric molecules have an optical property known as optical activity – the ability to rotate the plane of plane-polarized light. Clockwise rotation is known as "dextrorotatory" behavior and counterclockwise rotation is known as "levorotatory" behavior.

A source of potential confusion: all common amino acids are the L-enantiomer (i.e. their $$C_{\alpha}$$ chiral center has the L configuration), based on the structural comparison with L-glyceraldehyde. However, not all L-amino acids are levorotatory; some are actually dextrorotatory with regard to their optical activity. To (attempt to) avoid confusion, the optical activities are given as (+) for dextrorotatory and (-) for levorotatory:

• L(+)-alanine (this is the L-enantiomer and it is dextrorotatory)
• L(-)-serine (this is the L-enantiomer and it is levorotatory)

Multiple chiral centers

• Molecules with N chiral centers can exist in $$2^N$$ isomeric structures
• Isomers that differ in configuration at only one chiral center are called diastereomers

## The R,S system of naming chiral centers

A relative ranking of the "priority" of various functional groups is given as:

$\ce{SH > OH > NH2 > COOH > CHO > CH2OH > CH3 > H}$

• A chiral center has four different functional groups. Identify the functional group with the lowest priority.
• View the chiral center down the bond from the chiral center to the lowest-priority atom (don't confuse this with the CORN mnemonic method of identifying the L-amino acid chirality, which views from the H to the $$C_{\alpha}$$).
• Assign priorities to the three other functional groups connected to the chiral center, using the above ranking.
• If the priorities of these other groups go in a clockwise rotation, the chirality is "R". If the priorities of these other groups go counterclockwise, the chirality is "S".
(Note that this assignment has nothing to do with optical activity, and is not using L-glyceraldehyde as a reference molecule.)

## Spectroscopic properties of amino acids

This refers to the ability of amino acids to absorb or emit electromagnetic energy at different wavelengths (i.e. energies).

• No amino acids absorb light in the visible spectrum (i.e. they are "colorless"). If proteins have color (e.g. hemoglobin is red), it is because they contain a bound, non-protein atom, ion, or molecule (iron in this case).
• All amino acids absorb in the infrared region (longer wavelengths, weaker energy than visible light).
• Some amino acids absorb in the ultraviolet spectrum (shorter wavelengths, higher energy than visible light). Absorption occurs as electrons rise to higher energy states. Electrons in aromatic ring structures absorb in the u.v. spectrum; such structures comprise the side chains of tryptophan, tyrosine, and phenylalanine.

## Separation and analysis of amino acid mixtures

The 20 common amino acids differ from one another in several important ways. Here are just two:

• Mass. The smallest amino acid (glycine) has a mass of 57 Da (i.e. g/mol), and the largest (tryptophan) has a mass of 186 Da.
• Isoelectric point (the pH at which the amino acid has a neutral net charge). This is a function of all ionizable groups on the amino acid, including the amino and carboxyl functional groups in addition to any ionizable group on the side chain.

| Amino Acid | Mass (Da) | Isoelectric Point |
| --- | --- | --- |
| Aspartic Acid | 114.11 | 2.98 |
| Glutamic Acid | 129.12 | 3.08 |
| Cysteine | 103.15 | 5.02 |
| Tyrosine | 163.18 | 5.63 |
| Serine | 87.08 | 5.68 |
| Methionine | 131.19 | 5.74 |
| Tryptophan | 186.12 | 5.88 |
| Phenylalanine | 147.18 | 5.91 |
| Valine | 99.14 | 6.002 |
| Leucine | 113.16 | 6.036 |
| Isoleucine | 113.16 | 6.038 |
| Glycine | 57.05 | 6.064 |
| Alanine | 71.09 | 6.107 |
| Proline | 97.12 | 6.3 |
| Histidine | 137.14 | 7.64 |
| Lysine | 128.17 | 9.47 |
| Arginine | 156.19 | 10.76 |
| Threonine | 101.11 | – |
| Asparagine | 115.09 | – |
| Glutamine | 128.14 | – |

We can use these differences in physical properties to fractionate complex mixtures of amino acids into individual amino acids.

• In looking at the isoelectric points of the different amino acids, it is clear that they will have different partial charges at a given pH. For example, at pH 6.0 some will be negatively charged and some positively charged. For those that are negatively charged, some will be slightly negative and others strongly negative; similarly, for those that are positively charged, some will be slightly positive and others strongly positive.
• The charge differences of the amino acids mean that they will have different affinities for other cationic or anionic charges.
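As a rough illustration of this separation logic (a sketch of my own, not from the original page, using pI values from the table above and the usual approximation that an amino acid is net positive below its pI and net negative above it):

pI <- c("Aspartic Acid" = 2.98, "Glutamic Acid" = 3.08, "Cysteine" = 5.02,
        "Alanine" = 6.107, "Histidine" = 7.64, "Lysine" = 9.47,
        "Arginine" = 10.76)

# Sign of the approximate net charge at a given pH:
# "+" below the pI, "-" above it, "~0" very near the pI.
charge_sign <- function(pi_value, pH) {
  if (abs(pH - pi_value) < 0.05) return("~0")
  if (pH < pi_value) "+" else "-"
}

sapply(pI, charge_sign, pH = 6.0)
# aspartate and glutamate come out "-", while histidine, lysine, and arginine
# come out "+", which is what makes charge-based fractionation possible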
2019-10-17 23:03:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5204274654388428, "perplexity": 2548.065915386549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677230.18/warc/CC-MAIN-20191017222820-20191018010320-00351.warc.gz"}
https://mechcontent.com/torsional-stiffness/
# Torsional stiffness: Definition, Formula, Units of a shaft

## What is torsional stiffness?

Torsional stiffness is defined as the amount of torque required to twist an object by one radian. It is also known as the ratio of the applied torque to the angle of twist (θ). It indicates how stiff the object is against a torsional load. It is denoted by the symbol 'K' and can be evaluated as,

\text{Torsional stiffness, } K =\frac{T}{\theta }

Where,
T = torque
θ = angle of twist in the object

Higher torsional stiffness means that the object or shaft is more capable of withstanding a torsional load and shows minimal torsional deformation.

Torsional stiffness mainly depends on the following factors: the modulus of rigidity of the material (G), the polar moment of inertia of the cross-section (J), and the length of the object (L), as the formula below shows.

## Torsional stiffness formula:

From the definition, the torsional stiffness equation is written as,

K =\frac{T}{\theta }

From the torsion equation, we can write,

\frac{T}{\theta }= \frac{GJ}{L}

Where,
G = modulus of rigidity
J = polar moment of inertia
L = length of shaft

Therefore the torsional stiffness equation can be written as,

K =\frac{T}{\theta }= \frac{GJ}{L}

As the product 'GJ' indicates the torsional rigidity of an object, the torsional stiffness is also known as the torsional rigidity per unit length of the object.

## Why is torsional stiffness important?

Here are some cases where torsional stiffness is an important consideration.

a] Stiffer shafts or axles: Shafts and axles are directly subjected to higher torsional loads. For example, propeller shafts continuously deliver driving torque from the transmission, while wheel axles experience driving torque as well as braking torque. Thus shafts should be stiff enough to continuously deliver power without failure.

b] Stiffness in chassis: A chassis with better torsional stiffness offers a better ride and higher performance during handling and cornering, and allows the suspension to work efficiently. It is an important factor in chassis design. The torsional stiffness of the chassis should not be too low, as that causes failure while cornering, and it should not be too high, as that causes difficulty in steering and less grip on the road.

c] Life of components: Optimum torsional stiffness is essential to increase the life of machine components subjected to cyclic torsional loading.

## Torsional stiffness units:

The SI and FPS units of torsional stiffness are as follows:

i] SI unit: In the SI system, the unit of torque is N.m and the unit of the angle of twist is the radian; therefore the unit of torsional stiffness is given by,

K = \frac{T}{\theta }=\text{N.m/radian}

∴ The SI unit of torsional stiffness is N.m/radian.

ii] FPS unit: In the FPS system, the unit of torque is lb.ft and the unit of the angle of twist is the radian. Hence the unit of torsional stiffness in the FPS system is,

K = \frac{T}{\theta }=\text{lb.ft/radian}

∴ The FPS unit of torsional stiffness is lb.ft/radian.
## Torsional stiffness of shaft:

1) Torsional stiffness of a solid circular shaft:

For a solid circular shaft of diameter 'd',

J=\frac{\pi }{32}\times d^{4}

Therefore the torsional stiffness of the solid circular shaft is,

K =\frac{GJ}{L}=\frac{G}{L}\times [\frac{\pi }{32}\times d^{4}]

K =\frac{G\pi d^{4}}{32L}

2) Torsional stiffness of a hollow circular shaft:

For a hollow circular shaft with an outside diameter of do and an inside diameter of di,

J=\frac{\pi }{32}\times (do^{4}-di^{4})

Hence the torsional stiffness of the hollow shaft is,

K =\frac{GJ}{L} = \frac{G}{L}\times [\frac{\pi }{32}\times (do^{4}-di^{4})]

K =\frac{G\pi (do^{4}-di^{4})}{32L}

## Torsional stiffness vs bending stiffness – Difference:

In short, torsional stiffness resists twisting: it is the torque per unit angle of twist and, for a shaft, depends on the modulus of rigidity G and the polar moment of inertia J. Bending stiffness resists bending: it is the bending moment per unit bending deformation and depends on Young's modulus E and the area moment of inertia I of the cross-section.

## How to calculate torsional stiffness?

The torsional stiffness indicates the torque per unit twist of the shaft. It can be calculated experimentally or theoretically as follows (a numerical sketch follows the FAQs below):

1] Experimental method: In this method, the object is loaded under a torsional load to find the angle of twist in the shaft. Then by using the below formula we can calculate the torsional stiffness of the object.

K = \frac{\text{Torque}}{\text{angle of twist}}

2] Theoretical method: In this method, it is necessary to know the shear modulus of the material and the geometry of the object.

1. Find the polar moment of inertia (J) of the cross-sectional area of the object perpendicular to the axis of twist.
2. Find the length of the object subjected to the twisting.
3. Find the shear modulus of the object.
4. Use the below formula,

K = \frac{GJ}{L}

## How to increase torsional stiffness?

Here are some ways to increase torsional stiffness:

1] Choosing a material with a higher shear modulus: A material with a higher shear modulus (modulus of rigidity) gives higher stiffness under torsional load.

2] Choosing the right geometry with a higher polar moment of inertia: The geometry of the cross-section has a direct effect on the torsional stiffness of the object, because it determines the polar moment of inertia, which is the main geometric factor in the torsional stiffness of the shaft.

3] Choosing a hollow cross-section instead of a solid section: In a hollow shaft, the mass is distributed away from the axis, so it provides a higher polar moment of inertia than a solid shaft of the same mass. Thus it is better to choose hollow shafts over solid shafts to get the same torsional stiffness with reduced mass.

## FAQs:

1. How does torsional stiffness differ from torsional rigidity?

Torsional rigidity (GJ) per unit length gives the torsional stiffness of the object.

2. What does the torsional stiffness of a solid shaft mean?

It is the amount of torque required to twist a solid shaft by 1 radian, which can be calculated as K = (applied torque)/(angle of twist).

3. What is frame torsional stiffness?

It indicates the stiffness of a frame under a twisting/torsional load, i.e., the amount of torque required to twist the frame by one radian.
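As a quick numerical check of the shaft formulas above (a sketch of my own; the shear modulus value, roughly that of steel, and the shaft dimensions are assumed example inputs, not from the article):

# Torsional stiffness K = G*J/L for circular shafts (SI units throughout).
torsional_stiffness_solid <- function(G, d, L) {
  J <- pi * d^4 / 32                # polar moment of inertia, solid shaft
  G * J / L
}

torsional_stiffness_hollow <- function(G, do, di, L) {
  J <- pi * (do^4 - di^4) / 32      # polar moment of inertia, hollow shaft
  G * J / L
}

G <- 79.3e9   # shear modulus in Pa (assumed value, roughly steel)
L <- 1.0      # shaft length in m (assumed)

torsional_stiffness_solid(G, d = 0.050, L = L)                # ~4.87e4 N.m/radian
torsional_stiffness_hollow(G, do = 0.060, di = 0.040, L = L)  # ~8.10e4 N.m/radian

Note that the hollow 60/40 mm shaft comes out stiffer than the 50 mm solid one even though its cross-sectional area is smaller, because its mass sits farther from the axis and so its polar moment of inertia is larger.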
2023-03-22 19:20:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8200323581695557, "perplexity": 3096.5002280436934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.74/warc/CC-MAIN-20230322180852-20230322210852-00630.warc.gz"}